<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "https://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en-US">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=11"/>
<meta name="generator" content="Doxygen 1.12.0"/>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<title>NeuZephyr: nz::data Namespace Reference</title>
<link rel="icon" href="NZ_logo2.png" type="image/x-icon" />
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="resize.js"></script>
<link href="doxygen.css" rel="stylesheet" type="text/css" />
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
 <tbody>
 <tr id="projectrow">
  <td id="projectlogo"><img alt="Logo" src="NZ_logo2.png"/></td>
  <td id="projectalign">
   <div id="projectname">NeuZephyr
   </div>
   <div id="projectbrief">Simple DL Framework</div>
  </td>
 </tr>
 </tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.12.0 -->
<script type="text/javascript">
/* @license magnet:?xt=urn:btih:d3d9a9a6595521f9666a5e94cc830dab83b65699&amp;dn=expat.txt MIT */
$(function() { codefold.init(0); });
/* @license-end */
</script>
  <div id="navrow1" class="tabs">
    <ul class="tablist">
      <li><a href="index.html"><span>Main&#160;Page</span></a></li>
      <li><a href="pages.html"><span>Related&#160;Pages</span></a></li>
      <li class="current"><a href="namespaces.html"><span>Namespaces</span></a></li>
      <li><a href="annotated.html"><span>Classes</span></a></li>
      <li><a href="files.html"><span>Files</span></a></li>
    </ul>
  </div>
  <div id="navrow2" class="tabs2">
    <ul class="tablist">
      <li><a href="namespaces.html"><span>Namespace&#160;List</span></a></li>
      <li><a href="namespacemembers.html"><span>Namespace&#160;Members</span></a></li>
    </ul>
  </div>
<script type="text/javascript">
/* @license magnet:?xt=urn:btih:d3d9a9a6595521f9666a5e94cc830dab83b65699&amp;dn=expat.txt MIT */
$(function(){ initResizable(false); });
/* @license-end */
</script>
<div id="nav-path" class="navpath">
  <ul>
<li class="navelem"><b>nz</b></li><li class="navelem"><a class="el" href="namespacenz_1_1data.html">data</a></li>  </ul>
</div>
</div><!-- top -->
<div id="doc-content">
<div class="header">
  <div class="summary">
<a href="#nested-classes">Classes</a> &#124;
<a href="#func-members">Functions</a>  </div>
  <div class="headertitle"><div class="title">nz::data Namespace Reference</div></div>
</div><!--header-->
<div class="contents">

<p>Contains data structures and utilities for tensor operations in machine learning workflows.  
<a href="#details">More...</a></p>
<table class="memberdecls">
<tr class="heading"><td colspan="2"><h2 class="groupheader"><a id="nested-classes" name="nested-classes"></a>
Classes</h2></td></tr>
<tr class="memitem:"><td class="memItemLeft" align="right" valign="top">class &#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classnz_1_1data_1_1_dimension.html">Dimension</a></td></tr>
<tr class="memdesc:"><td class="mdescLeft">&#160;</td><td class="mdescRight">Represents a multi-dimensional shape, typically used in deep learning for tensor dimensions.  <a href="classnz_1_1data_1_1_dimension.html#details">More...</a><br /></td></tr>
<tr class="separator:"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:"><td class="memItemLeft" align="right" valign="top">class &#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html">MappedTensor</a></td></tr>
<tr class="memdesc:"><td class="mdescLeft">&#160;</td><td class="mdescRight">A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible container-like interfaces.  <a href="classnz_1_1data_1_1_mapped_tensor.html#details">More...</a><br /></td></tr>
<tr class="separator:"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:"><td class="memItemLeft" align="right" valign="top">class &#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classnz_1_1data_1_1_tensor.html">Tensor</a></td></tr>
<tr class="memdesc:"><td class="mdescLeft">&#160;</td><td class="mdescRight">A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.  <a href="classnz_1_1data_1_1_tensor.html#details">More...</a><br /></td></tr>
<tr class="separator:"><td class="memSeparator" colspan="2">&#160;</td></tr>
</table><table class="memberdecls">
<tr class="heading"><td colspan="2"><h2 class="groupheader"><a id="func-members" name="func-members"></a>
Functions</h2></td></tr>
<tr class="memitem:a4706224f5e7c9a0cfe4c74983aaef1bd" id="r_a4706224f5e7c9a0cfe4c74983aaef1bd"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a4706224f5e7c9a0cfe4c74983aaef1bd"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a4706224f5e7c9a0cfe4c74983aaef1bd">ReLU</a> (T &amp;input)</td></tr>
<tr class="memdesc:a4706224f5e7c9a0cfe4c74983aaef1bd"><td class="mdescLeft">&#160;</td><td class="mdescRight">Apply the Rectified Linear Unit (ReLU) activation function element-wise to an input tensor.  <br /></td></tr>
<tr class="separator:a4706224f5e7c9a0cfe4c74983aaef1bd"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:aa9a6da30ae0d71faa4ac32efb9dd1f2f" id="r_aa9a6da30ae0d71faa4ac32efb9dd1f2f"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:aa9a6da30ae0d71faa4ac32efb9dd1f2f"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#aa9a6da30ae0d71faa4ac32efb9dd1f2f">Sigmoid</a> (T &amp;input)</td></tr>
<tr class="memdesc:aa9a6da30ae0d71faa4ac32efb9dd1f2f"><td class="mdescLeft">&#160;</td><td class="mdescRight">Apply the sigmoid activation function element-wise to an input tensor.  <br /></td></tr>
<tr class="separator:aa9a6da30ae0d71faa4ac32efb9dd1f2f"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:aed71109d5ed6ecdb7181afc751fa2aa1" id="r_aed71109d5ed6ecdb7181afc751fa2aa1"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:aed71109d5ed6ecdb7181afc751fa2aa1"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#aed71109d5ed6ecdb7181afc751fa2aa1">Tanh</a> (T &amp;input)</td></tr>
<tr class="memdesc:aed71109d5ed6ecdb7181afc751fa2aa1"><td class="mdescLeft">&#160;</td><td class="mdescRight">Apply the hyperbolic tangent (tanh) activation function element-wise to an input tensor.  <br /></td></tr>
<tr class="separator:aed71109d5ed6ecdb7181afc751fa2aa1"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:ae8fb3052fdc2304fbb68c8dbad90e4ed" id="r_ae8fb3052fdc2304fbb68c8dbad90e4ed"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:ae8fb3052fdc2304fbb68c8dbad90e4ed"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#ae8fb3052fdc2304fbb68c8dbad90e4ed">LeakyReLU</a> (T &amp;input, const float alpha=0.01f)</td></tr>
<tr class="memdesc:ae8fb3052fdc2304fbb68c8dbad90e4ed"><td class="mdescLeft">&#160;</td><td class="mdescRight">Apply the Leaky Rectified Linear Unit (Leaky ReLU) activation function element-wise to an input tensor.  <br /></td></tr>
<tr class="separator:ae8fb3052fdc2304fbb68c8dbad90e4ed"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:ae563f53512549e2e54f066f7bf06622e" id="r_ae563f53512549e2e54f066f7bf06622e"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:ae563f53512549e2e54f066f7bf06622e"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#ae563f53512549e2e54f066f7bf06622e">Swish</a> (T &amp;input)</td></tr>
<tr class="memdesc:ae563f53512549e2e54f066f7bf06622e"><td class="mdescLeft">&#160;</td><td class="mdescRight">Apply the Swish activation function element-wise to an input tensor.  <br /></td></tr>
<tr class="separator:ae563f53512549e2e54f066f7bf06622e"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:adae3ca94a8c203f1e444751a1cba0d6d" id="r_adae3ca94a8c203f1e444751a1cba0d6d"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:adae3ca94a8c203f1e444751a1cba0d6d"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#adae3ca94a8c203f1e444751a1cba0d6d">ELU</a> (T &amp;input, const float alpha=1.0f)</td></tr>
<tr class="memdesc:adae3ca94a8c203f1e444751a1cba0d6d"><td class="mdescLeft">&#160;</td><td class="mdescRight">Apply the Exponential Linear Unit (ELU) activation function element-wise to an input tensor.  <br /></td></tr>
<tr class="separator:adae3ca94a8c203f1e444751a1cba0d6d"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a241d72367c091d0724b524f55289b2f0" id="r_a241d72367c091d0724b524f55289b2f0"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a241d72367c091d0724b524f55289b2f0"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a241d72367c091d0724b524f55289b2f0">HardSigmoid</a> (T &amp;input, const float alpha=0.2f, const float beta=0.5f)</td></tr>
<tr class="memdesc:a241d72367c091d0724b524f55289b2f0"><td class="mdescLeft">&#160;</td><td class="mdescRight">Apply the Hard Sigmoid activation function element-wise to an input tensor.  <br /></td></tr>
<tr class="separator:a241d72367c091d0724b524f55289b2f0"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:ac716ac93e673f4706963d194e8ea523e" id="r_ac716ac93e673f4706963d194e8ea523e"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:ac716ac93e673f4706963d194e8ea523e"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#ac716ac93e673f4706963d194e8ea523e">HardSwish</a> (T &amp;input, const float alpha=0.5f, const float beta=0.5f)</td></tr>
<tr class="memdesc:ac716ac93e673f4706963d194e8ea523e"><td class="mdescLeft">&#160;</td><td class="mdescRight">Apply the Hard Swish activation function element-wise to an input tensor.  <br /></td></tr>
<tr class="separator:ac716ac93e673f4706963d194e8ea523e"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a55e8a3fae0d75e214cd714fde8811543" id="r_a55e8a3fae0d75e214cd714fde8811543"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a55e8a3fae0d75e214cd714fde8811543"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a55e8a3fae0d75e214cd714fde8811543">Softmax</a> (T &amp;input)</td></tr>
<tr class="memdesc:a55e8a3fae0d75e214cd714fde8811543"><td class="mdescLeft">&#160;</td><td class="mdescRight">Compute the softmax function for a given input of type T.  <br /></td></tr>
<tr class="separator:a55e8a3fae0d75e214cd714fde8811543"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:ab99b7c0a7c96a6de43f5b3f25af7f918" id="r_ab99b7c0a7c96a6de43f5b3f25af7f918"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:ab99b7c0a7c96a6de43f5b3f25af7f918"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#ab99b7c0a7c96a6de43f5b3f25af7f918">operator+</a> (T &amp;lhs, const float rhs)</td></tr>
<tr class="memdesc:ab99b7c0a7c96a6de43f5b3f25af7f918"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overload the addition operator to add a scalar float to a tensor of type T.  <br /></td></tr>
<tr class="separator:ab99b7c0a7c96a6de43f5b3f25af7f918"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a5cbc31234b126e3ce84c273e0cc8714a" id="r_a5cbc31234b126e3ce84c273e0cc8714a"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a5cbc31234b126e3ce84c273e0cc8714a"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a5cbc31234b126e3ce84c273e0cc8714a">operator+</a> (const float lhs, T &amp;rhs)</td></tr>
<tr class="memdesc:a5cbc31234b126e3ce84c273e0cc8714a"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overload the addition operator to add a tensor of type T to a scalar float.  <br /></td></tr>
<tr class="separator:a5cbc31234b126e3ce84c273e0cc8714a"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:acc650ae262aba5f1b0fa9cca8cae311e" id="r_acc650ae262aba5f1b0fa9cca8cae311e"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:acc650ae262aba5f1b0fa9cca8cae311e"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#acc650ae262aba5f1b0fa9cca8cae311e">operator-</a> (T &amp;lhs, const float rhs)</td></tr>
<tr class="memdesc:acc650ae262aba5f1b0fa9cca8cae311e"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overload the subtraction operator to subtract a scalar float from a tensor of type T.  <br /></td></tr>
<tr class="separator:acc650ae262aba5f1b0fa9cca8cae311e"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a5ecefd608c1f6b3ce4e9d752dd05c0e7" id="r_a5ecefd608c1f6b3ce4e9d752dd05c0e7"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a5ecefd608c1f6b3ce4e9d752dd05c0e7"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a5ecefd608c1f6b3ce4e9d752dd05c0e7">operator-</a> (const float lhs, T &amp;rhs)</td></tr>
<tr class="memdesc:a5ecefd608c1f6b3ce4e9d752dd05c0e7"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overload the subtraction operator to subtract a tensor of type T from a scalar float.  <br /></td></tr>
<tr class="separator:a5ecefd608c1f6b3ce4e9d752dd05c0e7"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a8730252e35a8e59aacb429efb0d6b828" id="r_a8730252e35a8e59aacb429efb0d6b828"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a8730252e35a8e59aacb429efb0d6b828"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a8730252e35a8e59aacb429efb0d6b828">operator*</a> (T &amp;lhs, const float rhs)</td></tr>
<tr class="memdesc:a8730252e35a8e59aacb429efb0d6b828"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overload the multiplication operator to multiply a tensor of type T by a scalar float.  <br /></td></tr>
<tr class="separator:a8730252e35a8e59aacb429efb0d6b828"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a6f0029a210088048368560c6e4c4d8a6" id="r_a6f0029a210088048368560c6e4c4d8a6"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a6f0029a210088048368560c6e4c4d8a6"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a6f0029a210088048368560c6e4c4d8a6">operator*</a> (const float lhs, T &amp;rhs)</td></tr>
<tr class="memdesc:a6f0029a210088048368560c6e4c4d8a6"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overload the multiplication operator to multiply a scalar float by a tensor of type T.  <br /></td></tr>
<tr class="separator:a6f0029a210088048368560c6e4c4d8a6"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a771a257e9dd839ce330e9b40fd1dda56" id="r_a771a257e9dd839ce330e9b40fd1dda56"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a771a257e9dd839ce330e9b40fd1dda56"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a771a257e9dd839ce330e9b40fd1dda56">operator/</a> (T &amp;lhs, const float rhs)</td></tr>
<tr class="memdesc:a771a257e9dd839ce330e9b40fd1dda56"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overload the division operator to divide a tensor of type T by a scalar float.  <br /></td></tr>
<tr class="separator:a771a257e9dd839ce330e9b40fd1dda56"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a275956a1088d701845f4599da84cdc84" id="r_a275956a1088d701845f4599da84cdc84"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a275956a1088d701845f4599da84cdc84"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a275956a1088d701845f4599da84cdc84">operator/</a> (const float lhs, T &amp;rhs)</td></tr>
<tr class="memdesc:a275956a1088d701845f4599da84cdc84"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overload the division operator to divide a scalar float by a tensor of type T.  <br /></td></tr>
<tr class="separator:a275956a1088d701845f4599da84cdc84"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a8cf4ac2437dd67698684169bebb225d4" id="r_a8cf4ac2437dd67698684169bebb225d4"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a8cf4ac2437dd67698684169bebb225d4"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, void &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a8cf4ac2437dd67698684169bebb225d4">tensorMatrixAdd</a> (T &amp;out, const T &amp;lhs, const T &amp;rhs)</td></tr>
<tr class="memdesc:a8cf4ac2437dd67698684169bebb225d4"><td class="mdescLeft">&#160;</td><td class="mdescRight">Performs matrix addition operation on tensors with broadcast compatibility.  <br /></td></tr>
<tr class="separator:a8cf4ac2437dd67698684169bebb225d4"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a7503b6894e8052ed54eb169550d135c0" id="r_a7503b6894e8052ed54eb169550d135c0"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a7503b6894e8052ed54eb169550d135c0"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, void &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a7503b6894e8052ed54eb169550d135c0">tensorMatrixSub</a> (T &amp;out, const T &amp;lhs, const T &amp;rhs)</td></tr>
<tr class="memdesc:a7503b6894e8052ed54eb169550d135c0"><td class="mdescLeft">&#160;</td><td class="mdescRight">Performs matrix subtraction operation on tensors with broadcast compatibility.  <br /></td></tr>
<tr class="separator:a7503b6894e8052ed54eb169550d135c0"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a1da5cd018533919ed5a750b14c7d6d71" id="r_a1da5cd018533919ed5a750b14c7d6d71"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a1da5cd018533919ed5a750b14c7d6d71"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, void &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a1da5cd018533919ed5a750b14c7d6d71">tensorElementwiseDivide</a> (T &amp;out, const T &amp;lhs, const T &amp;rhs)</td></tr>
<tr class="memdesc:a1da5cd018533919ed5a750b14c7d6d71"><td class="mdescLeft">&#160;</td><td class="mdescRight">Performs element-wise division operation on tensors with broadcast compatibility.  <br /></td></tr>
<tr class="separator:a1da5cd018533919ed5a750b14c7d6d71"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a5a166a472b887c45fde9e5815f072234" id="r_a5a166a472b887c45fde9e5815f072234"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:a5a166a472b887c45fde9e5815f072234"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, void &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#a5a166a472b887c45fde9e5815f072234">tensorGeneralMatrixMul</a> (T &amp;out, const T &amp;lhs, const T &amp;rhs)</td></tr>
<tr class="memdesc:a5a166a472b887c45fde9e5815f072234"><td class="mdescLeft">&#160;</td><td class="mdescRight">Performs general matrix multiplication on tensors with broadcast compatibility.  <br /></td></tr>
<tr class="separator:a5a166a472b887c45fde9e5815f072234"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:ac8d64dd271e9a2e50682e733bd14ec19" id="r_ac8d64dd271e9a2e50682e733bd14ec19"><td class="memTemplParams" colspan="2">template&lt;typename T &gt; </td></tr>
<tr class="memitem:ac8d64dd271e9a2e50682e733bd14ec19"><td class="memTemplItemLeft" align="right" valign="top">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt;&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="#ac8d64dd271e9a2e50682e733bd14ec19">transpose</a> (const T &amp;in)</td></tr>
<tr class="memdesc:ac8d64dd271e9a2e50682e733bd14ec19"><td class="mdescLeft">&#160;</td><td class="mdescRight">Transposes a tensor with a valid tensor type.  <br /></td></tr>
<tr class="separator:ac8d64dd271e9a2e50682e733bd14ec19"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:af967fb10a908c374d8378ac7ef22779c" id="r_af967fb10a908c374d8378ac7ef22779c"><td class="memItemLeft" align="right" valign="top">std::ostream &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="#af967fb10a908c374d8378ac7ef22779c">operator&lt;&lt;</a> (std::ostream &amp;os, const <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html">MappedTensor</a> &amp;tensor)</td></tr>
<tr class="memdesc:af967fb10a908c374d8378ac7ef22779c"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overload the &lt;&lt; operator to print a <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a> object to an output stream.  <br /></td></tr>
<tr class="separator:af967fb10a908c374d8378ac7ef22779c"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a4ea5e60f987ab3853b4d0af44453a9e2" id="r_a4ea5e60f987ab3853b4d0af44453a9e2"><td class="memItemLeft" align="right" valign="top">std::istream &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="#a4ea5e60f987ab3853b4d0af44453a9e2">operator&gt;&gt;</a> (std::istream &amp;is, <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html">MappedTensor</a> &amp;tensor)</td></tr>
<tr class="memdesc:a4ea5e60f987ab3853b4d0af44453a9e2"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overload the &gt;&gt; operator to read data from an input stream into a <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a> object.  <br /></td></tr>
<tr class="separator:a4ea5e60f987ab3853b4d0af44453a9e2"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a2907370af84a6c5bdc4b72803c9edc68" id="r_a2907370af84a6c5bdc4b72803c9edc68"><td class="memItemLeft" align="right" valign="top">std::ostream &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="#a2907370af84a6c5bdc4b72803c9edc68">operator&lt;&lt;</a> (std::ostream &amp;os, const <a class="el" href="classnz_1_1data_1_1_tensor.html">Tensor</a> &amp;tensor)</td></tr>
<tr class="memdesc:a2907370af84a6c5bdc4b72803c9edc68"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overloads the <code>&lt;&lt;</code> operator to print the tensor's data to an output stream.  <br /></td></tr>
<tr class="separator:a2907370af84a6c5bdc4b72803c9edc68"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a40134aba93013e1b0d43c6fd5158d400" id="r_a40134aba93013e1b0d43c6fd5158d400"><td class="memItemLeft" align="right" valign="top">std::istream &amp;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="#a40134aba93013e1b0d43c6fd5158d400">operator&gt;&gt;</a> (std::istream &amp;is, const <a class="el" href="classnz_1_1data_1_1_tensor.html">Tensor</a> &amp;tensor)</td></tr>
<tr class="memdesc:a40134aba93013e1b0d43c6fd5158d400"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overloads the <code>&gt;&gt;</code> operator to read a tensor's data from an input stream.  <br /></td></tr>
<tr class="separator:a40134aba93013e1b0d43c6fd5158d400"><td class="memSeparator" colspan="2">&#160;</td></tr>
</table>
<a name="details" id="details"></a><h2 class="groupheader">Detailed Description</h2>
<div class="textblock"><p>Contains data structures and utilities for tensor operations in machine learning workflows. </p>
<p>The <code><a class="el" href="namespacenz_1_1data.html" title="Contains data structures and utilities for tensor operations in machine learning workflows.">nz::data</a></code> namespace provides foundational classes and functions for managing and manipulating tensors in GPU-based computations. It is designed for use in deep learning frameworks and other numerical computing applications.</p>
<p>Key components within this namespace include:</p><ul>
<li><b><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></b>: A class representing multidimensional arrays (tensors) stored in GPU memory.</li>
<li><b>Utilities</b>: Functions and operators for performing mathematical operations, memory management, and activation functions.</li>
</ul>
<p>The namespace is intended to encapsulate all tensor-related functionality to ensure modularity and maintainability in the larger nz project.</p>
<dl class="section note"><dt>Note</dt><dd>The components in this namespace rely on CUDA for GPU-based operations. Ensure that CUDA-compatible hardware and software are properly configured.</dd></dl>
<dl class="section author"><dt>Author</dt><dd>Mgepahmge(<a href="https://github.com/Mgepahmge">https://github.com/Mgepahmge</a>)</dd></dl>
<dl class="section date"><dt>Date</dt><dd>2024/11/29 </dd></dl>
</div><h2 class="groupheader">Function Documentation</h2>
<a id="adae3ca94a8c203f1e444751a1cba0d6d" name="adae3ca94a8c203f1e444751a1cba0d6d"></a>
<h2 class="memtitle"><span class="permalink"><a href="#adae3ca94a8c203f1e444751a1cba0d6d">&#9670;&#160;</a></span>ELU()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::ELU </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>input</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>alpha</em></span><span class="paramdefsep"> = </span><span class="paramdefval">1.0f</span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Apply the Exponential Linear Unit (ELU) activation function element-wise to an input tensor. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">input</td><td>The input tensor (either <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) to which the ELU function will be applied (device-to-device). </td></tr>
    <tr><td class="paramname">alpha</td><td>The alpha value for the ELU function. It controls the value to which the function saturates for negative inputs. The default value is 1.0f.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor (of the same type as the input: <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) with the ELU function applied element-wise.</dd></dl>
<p>This function applies the ELU activation, defined as f(x)&#160;=&#160;x for x&#160;&#8805;&#160;0 and f(x)&#160;=&#160;&#945;(e<sup>x</sup>&#160;&#8722;&#160;1) for x&#160;&lt;&#160;0, to each element of the input tensor. It first creates a new tensor <code>result</code> with the same shape and gradient requirement as the input, then calls <code>iELU</code> to perform the element-wise ELU computation on the input's data and store the results in <code>result</code>, which is then returned.</p>
<p>Memory management: a new tensor <code>result</code> is created, and its memory is managed by the tensor's own class (<code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>); the input tensor's memory is left unchanged.</p>
<p>Exception handling: this function performs no explicit exception handling; any exception thrown by <code>iELU</code> or the tensor constructors propagates to the caller.</p>
<p>Relationship with other components: this function relies on <code>iELU</code> for the ELU computation and on the tensor's constructor to create the result tensor.</p>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname">[Exception thrown by iELU or tensor constructors]</td><td>If there are issues during the operation, such as memory allocation failures or invalid input data.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the number of elements in the input tensor (<code>input.size()</code>), as it needs to apply the ELU function to each element.</li>
<li>A positive <code>alpha</code> value is recommended so that negative inputs saturate smoothly toward &minus;<code>alpha</code>, which helps mitigate the vanishing-gradient problem.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume T is either Tensor or MappedTensor</span></div>
<div class="line">nz::data::T::shape_type shape = {2, 3};</div>
<div class="line">nz::data::T input(shape, <span class="keyword">true</span>);</div>
<div class="line">nz::data::T output = <a class="code hl_function" href="#adae3ca94a8c203f1e444751a1cba0d6d">ELU</a>(input, 0.5f);</div>
<div class="ttc" id="anamespacenz_1_1data_html_adae3ca94a8c203f1e444751a1cba0d6d"><div class="ttname"><a href="#adae3ca94a8c203f1e444751a1cba0d6d">nz::data::ELU</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; ELU(T &amp;input, const float alpha=1.0f)</div><div class="ttdoc">Apply the Exponential Linear Unit (ELU) activation function element-wise to an input tensor.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l00241">TensorOperations.cuh:241</a></div></div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00241">241</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>
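<p>For reference, the piecewise rule above can be mirrored by a host-side scalar function. This is a minimal sketch: <code>elu_ref</code> is a hypothetical name, not part of the library, and the real element-wise work is performed on the GPU by <code>iELU</code>.</p>

```cpp
#include <cassert>
#include <cmath>

// Hypothetical host-side mirror of the element-wise ELU rule:
//   f(x) = x                  if x >= 0
//   f(x) = alpha * (e^x - 1)  if x <  0
inline float elu_ref(float x, float alpha = 1.0f) {
    return x >= 0.0f ? x : alpha * (std::exp(x) - 1.0f);
}
```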

</div>
</div>
<a id="a241d72367c091d0724b524f55289b2f0" name="a241d72367c091d0724b524f55289b2f0"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a241d72367c091d0724b524f55289b2f0">&#9670;&#160;</a></span>HardSigmoid()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::HardSigmoid </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>input</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>alpha</em></span><span class="paramdefsep"> = </span><span class="paramdefval">0.2f</span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>beta</em></span><span class="paramdefsep"> = </span><span class="paramdefval">0.5f</span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Apply the Hard Sigmoid activation function element-wise to an input tensor. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">input</td><td>The input tensor (either <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) to which the Hard Sigmoid function will be applied (device-to-device). </td></tr>
    <tr><td class="paramname">alpha</td><td>The alpha value for the Hard Sigmoid function, controlling the slope of the linear part. The default value is 0.2f. </td></tr>
    <tr><td class="paramname">beta</td><td>The beta value for the Hard Sigmoid function, controlling the bias of the linear part. The default value is 0.5f.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor (of the same type as the input: <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) with the Hard Sigmoid function applied element-wise.</dd></dl>
<p>This function applies the Hard Sigmoid activation function, typically defined as f(x) = max(0, min(1, &alpha;x + &beta;)), to each element of the input tensor. It first creates a new tensor <code>result</code> with the same shape and gradient requirement as the input tensor. Then, it calls the <code>iHardSigmoid</code> function to perform the actual Hard Sigmoid operation on the data of the input tensor and store the results in the <code>result</code> tensor. Finally, the <code>result</code> tensor is returned.</p>
<p>Memory management: a new tensor <code>result</code> is created, and its memory is managed by the tensor's own class (<code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>). The memory of the input tensor remains unchanged.</p>
<p>Exception handling: there is no explicit exception handling in this function; any exception thrown by <code>iHardSigmoid</code> or the tensor constructors propagates to the caller.</p>
<p>Relationship with other components: this function depends on the <code>iHardSigmoid</code> function to perform the Hard Sigmoid operation and on the tensor's constructor to create the new tensor.</p>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname">[Exception thrown by iHardSigmoid or tensor constructors]</td><td>If there are issues during the operation, such as memory allocation failures or invalid input data.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the number of elements in the input tensor (<code>input.size()</code>), as it needs to apply the Hard Sigmoid function to each element.</li>
<li>The choice of <code>alpha</code> and <code>beta</code> values can significantly affect the behavior of the Hard Sigmoid function.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume T is either Tensor or MappedTensor</span></div>
<div class="line">nz::data::T::shape_type shape = {2, 3};</div>
<div class="line">nz::data::T input(shape, <span class="keyword">true</span>);</div>
<div class="line">nz::data::T output = <a class="code hl_function" href="#a241d72367c091d0724b524f55289b2f0">HardSigmoid</a>(input, 0.3f, 0.6f);</div>
<div class="ttc" id="anamespacenz_1_1data_html_a241d72367c091d0724b524f55289b2f0"><div class="ttname"><a href="#a241d72367c091d0724b524f55289b2f0">nz::data::HardSigmoid</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; HardSigmoid(T &amp;input, const float alpha=0.2f, const float beta=0.5f)</div><div class="ttdoc">Apply the Hard Sigmoid activation function element-wise to an input tensor.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l00281">TensorOperations.cuh:281</a></div></div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00281">281</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>
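<p>The clipped-linear rule above can be mirrored by a host-side scalar function. This is an illustrative sketch: <code>hard_sigmoid_ref</code> is a hypothetical name, and the library's actual computation runs on the GPU via <code>iHardSigmoid</code>.</p>

```cpp
#include <cassert>
#include <cmath>

// Hypothetical host-side mirror of the element-wise Hard Sigmoid rule:
//   f(x) = max(0, min(1, alpha * x + beta))
inline float hard_sigmoid_ref(float x, float alpha = 0.2f, float beta = 0.5f) {
    return std::fmax(0.0f, std::fmin(1.0f, alpha * x + beta));
}
```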

</div>
</div>
<a id="ac716ac93e673f4706963d194e8ea523e" name="ac716ac93e673f4706963d194e8ea523e"></a>
<h2 class="memtitle"><span class="permalink"><a href="#ac716ac93e673f4706963d194e8ea523e">&#9670;&#160;</a></span>HardSwish()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::HardSwish </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>input</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>alpha</em></span><span class="paramdefsep"> = </span><span class="paramdefval">0.5f</span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>beta</em></span><span class="paramdefsep"> = </span><span class="paramdefval">0.5f</span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Apply the Hard Swish activation function element-wise to an input tensor. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">input</td><td>The input tensor (either <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) to which the Hard Swish function will be applied (device-to-device). </td></tr>
    <tr><td class="paramname">alpha</td><td>The alpha value for the Hard Swish function, used to scale the input. The default value is 0.5f. </td></tr>
    <tr><td class="paramname">beta</td><td>The beta value for the Hard Swish function, used as an offset. The default value is 0.5f.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor (of the same type as the input: <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) with the Hard Swish function applied element-wise.</dd></dl>
<p>This function applies the Hard Swish activation function to each element of the input tensor. The Hard Swish function is often defined as f(x) = x &middot; max(0, min(1, &alpha;x + &beta;)). It first creates a new tensor <code>result</code> with the same shape and gradient requirement as the input tensor. Then, it calls the <code>iHardSwish</code> function to perform the actual Hard Swish operation on the data of the input tensor and store the results in the <code>result</code> tensor. Finally, the <code>result</code> tensor is returned.</p>
<p>Memory management: a new tensor <code>result</code> is created, and its memory is managed by the tensor's own class (<code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>). The memory of the input tensor remains unchanged.</p>
<p>Exception handling: there is no explicit exception handling in this function; any exception thrown by <code>iHardSwish</code> or the tensor constructors propagates to the caller.</p>
<p>Relationship with other components: this function depends on the <code>iHardSwish</code> function to perform the Hard Swish operation and on the tensor's constructor to create the new tensor.</p>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname">[Exception thrown by iHardSwish or tensor constructors]</td><td>If there are issues during the operation, such as memory allocation failures or invalid input data.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the number of elements in the input tensor (<code>input.size()</code>), as it needs to apply the Hard Swish function to each element.</li>
<li>The values of <code>alpha</code> and <code>beta</code> can be adjusted to fine-tune the behavior of the Hard Swish function.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume T is either Tensor or MappedTensor</span></div>
<div class="line">nz::data::T::shape_type shape = {2, 3};</div>
<div class="line">nz::data::T input(shape, <span class="keyword">true</span>);</div>
<div class="line">nz::data::T output = <a class="code hl_function" href="#ac716ac93e673f4706963d194e8ea523e">HardSwish</a>(input, 0.4f, 0.7f);</div>
<div class="ttc" id="anamespacenz_1_1data_html_ac716ac93e673f4706963d194e8ea523e"><div class="ttname"><a href="#ac716ac93e673f4706963d194e8ea523e">nz::data::HardSwish</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; HardSwish(T &amp;input, const float alpha=0.5f, const float beta=0.5f)</div><div class="ttdoc">Apply the Hard Swish activation function element-wise to an input tensor.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l00321">TensorOperations.cuh:321</a></div></div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00321">321</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>
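<p>The gated rule above can be mirrored by a host-side scalar function. This is a sketch under the stated defaults: <code>hard_swish_ref</code> is a hypothetical name, and the library's actual element-wise pass runs on the GPU via <code>iHardSwish</code>.</p>

```cpp
#include <cassert>
#include <cmath>

// Hypothetical host-side mirror of the element-wise Hard Swish rule:
//   f(x) = x * max(0, min(1, alpha * x + beta))
inline float hard_swish_ref(float x, float alpha = 0.5f, float beta = 0.5f) {
    return x * std::fmax(0.0f, std::fmin(1.0f, alpha * x + beta));
}
```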

</div>
</div>
<a id="ae8fb3052fdc2304fbb68c8dbad90e4ed" name="ae8fb3052fdc2304fbb68c8dbad90e4ed"></a>
<h2 class="memtitle"><span class="permalink"><a href="#ae8fb3052fdc2304fbb68c8dbad90e4ed">&#9670;&#160;</a></span>LeakyReLU()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::LeakyReLU </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>input</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>alpha</em></span><span class="paramdefsep"> = </span><span class="paramdefval">0.01f</span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Apply the Leaky Rectified Linear Unit (Leaky ReLU) activation function element-wise to an input tensor. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">input</td><td>The input tensor (either <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) to which the Leaky ReLU function will be applied (device-to-device). </td></tr>
    <tr><td class="paramname">alpha</td><td>The slope coefficient for negative values. It has a default value of 0.01f.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor (of the same type as the input: <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) with the Leaky ReLU function applied element-wise.</dd></dl>
<p>This function applies the Leaky ReLU activation function, defined as f(x) = x for x &ge; 0 and f(x) = &alpha;x for x &lt; 0, to each element of the input tensor. It first creates a new tensor <code>result</code> with the same shape and gradient requirement as the input tensor. Then, it calls the <code>iLeakyReLU</code> function to perform the actual Leaky ReLU operation on the data of the input tensor and store the results in the <code>result</code> tensor. Finally, the <code>result</code> tensor is returned.</p>
<p>Memory management: a new tensor <code>result</code> is created, and its memory is managed by the tensor's own class (<code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>). The memory of the input tensor remains unchanged.</p>
<p>Exception handling: there is no explicit exception handling in this function; any exception thrown by <code>iLeakyReLU</code> or the tensor constructors propagates to the caller.</p>
<p>Relationship with other components: this function depends on the <code>iLeakyReLU</code> function to perform the Leaky ReLU operation and on the tensor's constructor to create the new tensor.</p>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname">[Exception thrown by iLeakyReLU or tensor constructors]</td><td>If there are issues during the operation, such as memory allocation failures or invalid input data.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the number of elements in the input tensor (<code>input.size()</code>), as it needs to apply the Leaky ReLU function to each element.</li>
<li>The value of <code>alpha</code> should be a small positive number so that negative inputs retain a non-zero gradient, avoiding the dying-ReLU problem.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume T is either Tensor or MappedTensor</span></div>
<div class="line">nz::data::T::shape_type shape = {2, 3};</div>
<div class="line">nz::data::T input(shape, <span class="keyword">true</span>);</div>
<div class="line">nz::data::T output = <a class="code hl_function" href="#ae8fb3052fdc2304fbb68c8dbad90e4ed">LeakyReLU</a>(input, 0.02f);</div>
<div class="ttc" id="anamespacenz_1_1data_html_ae8fb3052fdc2304fbb68c8dbad90e4ed"><div class="ttname"><a href="#ae8fb3052fdc2304fbb68c8dbad90e4ed">nz::data::LeakyReLU</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; LeakyReLU(T &amp;input, const float alpha=0.01f)</div><div class="ttdoc">Apply the Leaky Rectified Linear Unit (Leaky ReLU) activation function element-wise to an input tenso...</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l00165">TensorOperations.cuh:165</a></div></div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00165">165</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>
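<p>The piecewise rule above can be mirrored by a host-side scalar function. This is an illustrative sketch: <code>leaky_relu_ref</code> is a hypothetical name, and the library's actual computation runs on the GPU via <code>iLeakyReLU</code>.</p>

```cpp
#include <cassert>

// Hypothetical host-side mirror of the element-wise Leaky ReLU rule:
//   f(x) = x          if x >= 0
//   f(x) = alpha * x  if x <  0
inline float leaky_relu_ref(float x, float alpha = 0.01f) {
    return x >= 0.0f ? x : alpha * x;
}
```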

</div>
</div>
<a id="a6f0029a210088048368560c6e4c4d8a6" name="a6f0029a210088048368560c6e4c4d8a6"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a6f0029a210088048368560c6e4c4d8a6">&#9670;&#160;</a></span>operator*() <span class="overload">[1/2]</span></h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::operator* </td>
          <td>(</td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>lhs</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>rhs</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overload the multiplication operator to multiply a scalar float by a tensor of type T. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">lhs</td><td>A constant float value representing the left-hand side scalar to multiply the tensor by. </td></tr>
    <tr><td class="paramname">rhs</td><td>A reference to the right-hand side tensor of type T. The tensor data is used in the multiplication operation.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor of type T that is the result of multiplying each element of the tensor rhs by the scalar lhs.</dd></dl>
<p>This template operator overload first verifies if the type T is a valid tensor type using <code>is_valid_tensor_type&lt;T&gt;::value</code>. If the type is valid, it constructs a new tensor <code>result</code> with the same shape and gradient requirement as <code>rhs</code>. Subsequently, it invokes the <code>iScalarMul</code> function to multiply each element of <code>rhs</code> data by the scalar <code>lhs</code>. Finally, the newly created tensor <code>result</code> is returned.</p>
<p>Memory management:</p><ul>
<li>A new tensor <code>result</code> is created within the function, and its memory allocation depends on the constructor of type T. The memory of <code>result</code> will be managed by its destructor when it goes out of scope.</li>
</ul>
<p>Exception handling:</p><ul>
<li>There is no explicit exception handling in this function. If the <code>iScalarMul</code> function or the constructor of type T throws an exception, it will be propagated to the caller.</li>
</ul>
<p>Relationship with other components:</p><ul>
<li>This function relies on the <code>iScalarMul</code> function to perform the actual multiplication operation.</li>
<li>It also depends on the <code>shape()</code> and <code>requiresGrad()</code> member functions of type T.</li>
</ul>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the size of the tensor <code>rhs</code>. This is because the <code>iScalarMul</code> function needs to iterate over each element of the tensor.</li>
<li>Ensure that the type T is a valid tensor type as determined by <code>is_valid_tensor_type&lt;T&gt;::value</code>.</li>
<li>Ensure that the tensor <code>rhs</code> has valid shape, gradient requirement, and size information.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume Tensor is a valid tensor type with shape(), requiresGrad() member functions</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> tensor({2, 3}, <span class="keyword">true</span>);</div>
<div class="line"><span class="comment">// Assume tensor is filled with some values</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> result = 2.0f * tensor;</div>
<div class="ttc" id="aclassnz_1_1data_1_1_tensor_html"><div class="ttname"><a href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a></div><div class="ttdoc">A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_8cuh_source.html#l00134">Tensor.cuh:134</a></div></div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00646">646</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>
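<p>The element-wise pass described above can be mirrored by a host-side reference over a plain vector. This is a sketch only: <code>scalar_mul_ref</code> is a hypothetical name, and the library's <code>iScalarMul</code> performs the equivalent O(n) pass on the device.</p>

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical host-side mirror of the element-wise scalar multiply that
// iScalarMul performs on the device: result[i] = lhs * rhs[i], one O(n) pass.
std::vector<float> scalar_mul_ref(float lhs, const std::vector<float>& rhs) {
    std::vector<float> result(rhs.size());
    for (std::size_t i = 0; i < rhs.size(); ++i)
        result[i] = lhs * rhs[i];
    return result;
}
```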

</div>
</div>
<a id="a8730252e35a8e59aacb429efb0d6b828" name="a8730252e35a8e59aacb429efb0d6b828"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a8730252e35a8e59aacb429efb0d6b828">&#9670;&#160;</a></span>operator*() <span class="overload">[2/2]</span></h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::operator* </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>lhs</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>rhs</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overload the multiplication operator to multiply a tensor of type T by a scalar float. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">lhs</td><td>A reference to the left-hand side tensor of type T. The tensor data is used as the base for the multiplication operation. </td></tr>
    <tr><td class="paramname">rhs</td><td>A constant float value representing the right-hand side scalar to multiply the tensor by.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor of type T that is the result of multiplying each element of the tensor lhs by the scalar rhs.</dd></dl>
<p>This template operator overload first checks if the type T is a valid tensor type using <code>is_valid_tensor_type&lt;T&gt;::value</code>. If valid, it creates a new tensor <code>result</code> with the same shape and gradient requirement as <code>lhs</code>. To perform the multiplication, it calls the <code>iScalarMul</code> function to multiply each element of <code>lhs</code> data by the scalar <code>rhs</code>. Finally, the newly created tensor <code>result</code> is returned.</p>
<p>Memory management:</p><ul>
<li>A new tensor <code>result</code> is created inside the function, which may allocate memory based on the constructor of type T. The memory of the result will be managed by its destructor when it goes out of scope.</li>
</ul>
<p>Exception handling:</p><ul>
<li>There is no explicit exception handling in this function. If the <code>iScalarMul</code> function or the constructor of type T throws an exception, it will propagate to the caller.</li>
</ul>
<p>Relationship with other components:</p><ul>
<li>This function depends on the <code>iScalarMul</code> function to perform the actual multiplication operation.</li>
<li>It also depends on the <code>shape()</code> and <code>requiresGrad()</code> member functions of type T.</li>
</ul>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the size of the tensor <code>lhs</code>. This is because the <code>iScalarMul</code> function needs to iterate over each element of the tensor.</li>
<li>Ensure that the type T is a valid tensor type as determined by <code>is_valid_tensor_type&lt;T&gt;::value</code>.</li>
<li>Ensure that the tensor <code>lhs</code> has valid shape, gradient requirement, and size information.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume Tensor is a valid tensor type with shape(), requiresGrad() member functions</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> tensor({2, 3}, <span class="keyword">true</span>);</div>
<div class="line"><span class="comment">// Assume tensor is filled with some values</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> result = tensor * 2.0f;</div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00604">604</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="a5cbc31234b126e3ce84c273e0cc8714a" name="a5cbc31234b126e3ce84c273e0cc8714a"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a5cbc31234b126e3ce84c273e0cc8714a">&#9670;&#160;</a></span>operator+() <span class="overload">[1/2]</span></h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::operator+ </td>
          <td>(</td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>lhs</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>rhs</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overload the addition operator to add a scalar float to a tensor of type T. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">lhs</td><td>A constant float value representing the left-hand side scalar to be added to the tensor. </td></tr>
    <tr><td class="paramname">rhs</td><td>A reference to the right-hand side tensor of type T. The tensor data is used to perform the addition operation.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor of type T that is the result of adding the scalar lhs to each element of the tensor rhs.</dd></dl>
<p>This function is a template operator overload. It first checks if the type T is a valid tensor type using <code>is_valid_tensor_type&lt;T&gt;::value</code>. If the type is valid, it creates a new tensor <code>result</code> with the same shape and gradient requirement as <code>rhs</code>. Then, it calls the <code>iScalarAdd</code> function to add the scalar <code>lhs</code> to each element of the data in <code>rhs</code> and stores the result in <code>result</code>. Finally, the newly created tensor <code>result</code> is returned.</p>
<p>Memory management:</p><ul>
<li>A new tensor <code>result</code> is created inside the function, which may allocate memory according to the constructor of type T. The memory of the result will be managed by its destructor when it goes out of scope.</li>
</ul>
<p>Exception handling:</p><ul>
<li>There is no explicit exception handling in this function. If the <code>iScalarAdd</code> function or the constructor of type T throws an exception, it will be propagated to the caller.</li>
</ul>
<p>Relationship with other components:</p><ul>
<li>This function depends on the <code>iScalarAdd</code> function to perform the actual scalar-tensor addition.</li>
<li>It also depends on the <code>shape()</code> and <code>requiresGrad()</code> member functions of type T.</li>
</ul>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the size of the tensor <code>rhs</code>. This is because the <code>iScalarAdd</code> function needs to iterate over each element of the tensor.</li>
<li>Ensure that the type T is a valid tensor type as determined by <code>is_valid_tensor_type&lt;T&gt;::value</code>.</li>
<li>Ensure that the tensor <code>rhs</code> has valid shape, gradient requirement, and size information.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume Tensor is a valid tensor type with shape(), requiresGrad() member functions</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> tensor({2, 3}, <span class="keyword">true</span>);</div>
<div class="line"><span class="comment">// Assume tensor is filled with some values</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> result = 2.0f + tensor;</div>
</div><!-- fragment --> 
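<p>The flow described above (construct a result tensor, then delegate the element-wise work to <code>iScalarAdd</code>) can be sketched on the host side. This is a hypothetical stand-in using <code>std::vector&lt;float&gt;</code> in place of a real tensor type; <code>iScalarAdd</code> here is an illustrative host function, not the actual CUDA kernel, and <code>scalarPlusTensor</code> is an invented name for the sketch.</p>

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical host-side stand-in for the iScalarAdd kernel:
// writes in[i] + s into out[i] for every element.
static void iScalarAdd(std::vector<float>& out,
                       const std::vector<float>& in, float s) {
    out.assign(in.size(), 0.0f);
    for (std::size_t i = 0; i < in.size(); ++i) out[i] = in[i] + s;
}

// Mirrors operator+(const float lhs, T& rhs): a new result is created
// (standing in for T(rhs.shape(), rhs.requiresGrad())), then filled
// element-wise by the scalar-add helper, and returned by value.
static std::vector<float> scalarPlusTensor(float lhs,
                                           const std::vector<float>& rhs) {
    std::vector<float> result;
    iScalarAdd(result, rhs, lhs);
    return result;
}
```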
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00478">478</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="ab99b7c0a7c96a6de43f5b3f25af7f918" name="ab99b7c0a7c96a6de43f5b3f25af7f918"></a>
<h2 class="memtitle"><span class="permalink"><a href="#ab99b7c0a7c96a6de43f5b3f25af7f918">&#9670;&#160;</a></span>operator+() <span class="overload">[2/2]</span></h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::operator+ </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>lhs</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>rhs</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overload the addition operator to add a scalar float to a tensor of type T. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">lhs</td><td>A reference to the left-hand side tensor of type T. The tensor data is used to perform the addition operation; it is not modified in place. </td></tr>
    <tr><td class="paramname">rhs</td><td>A constant float value representing the right-hand side scalar to be added to the tensor.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor of type T that is the result of adding the scalar rhs to each element of the tensor lhs.</dd></dl>
<p>This function is a template operator overload that adds a scalar float value to a tensor. It first checks if the type T meets the requirements using <code>is_valid_tensor_type&lt;T&gt;::value</code>. If the type is valid, it creates a new tensor <code>result</code> with the same shape and gradient requirement as <code>lhs</code>. Then, it calls the <code>iScalarAdd</code> function to perform the actual addition operation, which adds the scalar <code>rhs</code> to each element of the data in <code>lhs</code> and stores the result in <code>result</code>. Finally, the newly created tensor <code>result</code> is returned.</p>
<p>Memory management:</p><ul>
<li>A new tensor <code>result</code> is created inside the function, which may allocate memory depending on the implementation of the constructor of type T. The memory for the result will be managed by the destructor of the object when it goes out of scope.</li>
</ul>
<p>Exception handling:</p><ul>
<li>There is no explicit exception handling in this function. However, if the <code>iScalarAdd</code> function or the constructor of type T throws an exception, it will propagate up to the caller.</li>
</ul>
<p>Relationship with other components:</p><ul>
<li>This function depends on the <code>iScalarAdd</code> function to perform the actual scalar-tensor addition.</li>
<li>It also depends on the <code>shape()</code> and <code>requiresGrad()</code> member functions of type T.</li>
</ul>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the size of the tensor <code>lhs</code>. This is because the <code>iScalarAdd</code> function needs to iterate over each element of the tensor.</li>
<li>Ensure that the type T is a valid tensor type as determined by <code>is_valid_tensor_type&lt;T&gt;::value</code>.</li>
<li>Ensure that the tensor <code>lhs</code> has valid shape, gradient requirement, and size information.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume Tensor is a valid tensor type with shape(), requiresGrad() member functions</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> tensor({2, 3}, <span class="keyword">true</span>);</div>
<div class="line"><span class="comment">// Assume tensor is filled with some values</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> result = tensor + 2.0f;</div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00436">436</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="a5ecefd608c1f6b3ce4e9d752dd05c0e7" name="a5ecefd608c1f6b3ce4e9d752dd05c0e7"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a5ecefd608c1f6b3ce4e9d752dd05c0e7">&#9670;&#160;</a></span>operator-() <span class="overload">[1/2]</span></h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::operator- </td>
          <td>(</td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>lhs</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>rhs</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overload the subtraction operator to subtract a tensor of type T from a scalar float. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">lhs</td><td>A constant float value representing the left-hand side scalar from which the tensor will be subtracted. </td></tr>
    <tr><td class="paramname">rhs</td><td>A reference to the right-hand side tensor of type T. The tensor data is used in the subtraction operation.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor of type T that is the result of subtracting each element of the tensor rhs from the scalar lhs.</dd></dl>
<p>This template operator overload first checks if the type T is a valid tensor type using <code>is_valid_tensor_type&lt;T&gt;::value</code>. If the type is valid, it creates a new tensor <code>result</code> by negating the tensor <code>rhs</code>. Then, it calls the <code>iScalarAdd</code> function to add the scalar <code>lhs</code> to each element of the negated tensor <code>result</code>. Finally, the resulting tensor <code>result</code> is returned.</p>
<p>Memory management:</p><ul>
<li>A new tensor <code>result</code> is created inside the function, which may allocate memory according to the constructor of type T. The memory of the result will be managed by its destructor when it goes out of scope.</li>
</ul>
<p>Exception handling:</p><ul>
<li>There is no explicit exception handling in this function. If the negation operation of <code>rhs</code>, the <code>iScalarAdd</code> function, or the constructor of type T throws an exception, it will be propagated to the caller.</li>
</ul>
<p>Relationship with other components:</p><ul>
<li>This function depends on the negation operator of type T to obtain the negated tensor.</li>
<li>It also depends on the <code>iScalarAdd</code> function to perform the addition of the scalar to the negated tensor.</li>
</ul>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the size of the tensor <code>rhs</code>. This is because both the negation operation and the <code>iScalarAdd</code> function need to iterate over each element of the tensor.</li>
<li>Ensure that the type T is a valid tensor type as determined by <code>is_valid_tensor_type&lt;T&gt;::value</code>.</li>
<li>Ensure that the tensor <code>rhs</code> has valid shape, gradient requirement, and size information.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume Tensor is a valid tensor type</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> tensor({2, 3}, <span class="keyword">true</span>);</div>
<div class="line"><span class="comment">// Assume tensor is filled with some values</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> result = 2.0f - tensor;</div>
</div><!-- fragment --> 
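<p>The negate-then-add trick described above (lhs - x == (-x) + lhs, letting the scalar-add kernel be reused) can be sketched as follows. This is a hypothetical host-side illustration on flat <code>std::vector&lt;float&gt;</code> buffers; <code>negate</code>, <code>iScalarAdd</code>, and <code>scalarMinusTensor</code> are stand-in names, not the actual device code.</p>

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the tensor negation operator.
static std::vector<float> negate(const std::vector<float>& in) {
    std::vector<float> out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) out[i] = -in[i];
    return out;
}

// Hypothetical stand-in for the iScalarAdd kernel: out[i] = in[i] + s.
static std::vector<float> iScalarAdd(const std::vector<float>& in, float s) {
    std::vector<float> out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) out[i] = in[i] + s;
    return out;
}

// Mirrors operator-(const float lhs, T& rhs): since lhs - x == (-x) + lhs,
// negate the tensor first, then reuse the scalar-add helper.
static std::vector<float> scalarMinusTensor(float lhs,
                                            const std::vector<float>& rhs) {
    return iScalarAdd(negate(rhs), lhs);
}
```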
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00562">562</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="acc650ae262aba5f1b0fa9cca8cae311e" name="acc650ae262aba5f1b0fa9cca8cae311e"></a>
<h2 class="memtitle"><span class="permalink"><a href="#acc650ae262aba5f1b0fa9cca8cae311e">&#9670;&#160;</a></span>operator-() <span class="overload">[2/2]</span></h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::operator- </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>lhs</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>rhs</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overload the subtraction operator to subtract a scalar float from a tensor of type T. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">lhs</td><td>A reference to the left-hand side tensor of type T. The tensor data is used as the base for the subtraction operation. </td></tr>
    <tr><td class="paramname">rhs</td><td>A constant float value representing the right-hand side scalar to be subtracted from the tensor.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor of type T that is the result of subtracting the scalar rhs from each element of the tensor lhs.</dd></dl>
<p>This template operator overload first checks if the type T is a valid tensor type using <code>is_valid_tensor_type&lt;T&gt;::value</code>. If valid, it creates a new tensor <code>result</code> with the same shape and gradient requirement as <code>lhs</code>. To perform the subtraction, it calls the <code>iScalarAdd</code> function with <code>-rhs</code> as the scalar to be added to each element of <code>lhs</code> data. Finally, the newly created tensor <code>result</code> is returned.</p>
<p>Memory management:</p><ul>
<li>A new tensor <code>result</code> is created inside the function, which may allocate memory based on the constructor of type T. The memory of the result will be managed by its destructor when it goes out of scope.</li>
</ul>
<p>Exception handling:</p><ul>
<li>There is no explicit exception handling in this function. If the <code>iScalarAdd</code> function or the constructor of type T throws an exception, it will propagate to the caller.</li>
</ul>
<p>Relationship with other components:</p><ul>
<li>This function depends on the <code>iScalarAdd</code> function to perform the actual subtraction operation (by adding the negative of the scalar).</li>
<li>It also depends on the <code>shape()</code> and <code>requiresGrad()</code> member functions of type T.</li>
</ul>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the size of the tensor <code>lhs</code>. This is because the <code>iScalarAdd</code> function needs to iterate over each element of the tensor.</li>
<li>Ensure that the type T is a valid tensor type as determined by <code>is_valid_tensor_type&lt;T&gt;::value</code>.</li>
<li>Ensure that the tensor <code>lhs</code> has valid shape, gradient requirement, and size information.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume Tensor is a valid tensor type with shape(), requiresGrad() member functions</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> tensor({2, 3}, <span class="keyword">true</span>);</div>
<div class="line"><span class="comment">// Assume tensor is filled with some values</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> result = tensor - 2.0f;</div>
</div><!-- fragment --> 
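<p>The documented approach of passing <code>-rhs</code> to <code>iScalarAdd</code> (lhs[i] - rhs == lhs[i] + (-rhs)) can be sketched as follows. This is a hypothetical host-side illustration; <code>iScalarAdd</code> stands in for the CUDA kernel of the same name, and <code>tensorMinusScalar</code> is an invented name for the sketch.</p>

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the iScalarAdd kernel: out[i] = in[i] + s.
static std::vector<float> iScalarAdd(const std::vector<float>& in, float s) {
    std::vector<float> out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) out[i] = in[i] + s;
    return out;
}

// Mirrors operator-(T& lhs, const float rhs): subtraction reuses the
// scalar-add helper with the negated scalar.
static std::vector<float> tensorMinusScalar(const std::vector<float>& lhs,
                                            float rhs) {
    return iScalarAdd(lhs, -rhs);   // add the negative of the scalar
}
```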
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00520">520</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="a275956a1088d701845f4599da84cdc84" name="a275956a1088d701845f4599da84cdc84"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a275956a1088d701845f4599da84cdc84">&#9670;&#160;</a></span>operator/() <span class="overload">[1/2]</span></h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::operator/ </td>
          <td>(</td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>lhs</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>rhs</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overload the division operator to divide a scalar float by a tensor of type T. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">lhs</td><td>A constant float value representing the left-hand side scalar dividend. </td></tr>
    <tr><td class="paramname">rhs</td><td>A reference to the right-hand side tensor of type T. The tensor data is used as the divisor for the division operation.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor of type T that is the result of dividing the scalar lhs by each element of the tensor rhs.</dd></dl>
<p>This template operator overload first verifies if the type T is a valid tensor type using <code>is_valid_tensor_type&lt;T&gt;::value</code>. If valid, it creates a copy of the tensor <code>rhs</code> named <code>result</code>. Then it calls the <code>recip</code> method of <code>result</code> to compute the reciprocal of each element in the tensor. Finally, it uses the <code>iScalarMul</code> function to multiply each element of the <code>result</code> tensor by the scalar <code>lhs</code>.</p>
<p>Memory management:</p><ul>
<li>A copy of the tensor <code>rhs</code> is created as <code>result</code>, and its memory allocation depends on the copy constructor of type T. The memory of <code>result</code> will be managed by its destructor when it goes out of scope.</li>
</ul>
<p>Exception handling:</p><ul>
<li>There is no explicit exception handling in this function. If the <code>recip</code> method, <code>iScalarMul</code> function, or the copy constructor of type T throws an exception, it will be propagated to the caller.</li>
</ul>
<p>Relationship with other components:</p><ul>
<li>This function depends on the <code>recip</code> method of type T to compute the reciprocal of each element in the tensor.</li>
<li>It also depends on the <code>iScalarMul</code> function to perform the multiplication operation.</li>
</ul>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the size of the tensor <code>rhs</code>. This is because both the <code>recip</code> method and the <code>iScalarMul</code> function need to iterate over each element of the tensor.</li>
<li>Ensure that the type T is a valid tensor type as determined by <code>is_valid_tensor_type&lt;T&gt;::value</code>.</li>
<li>Ensure that the tensor <code>rhs</code> has valid shape, gradient requirement, and size information.</li>
<li>Ensure that no element in the tensor <code>rhs</code> is zero to avoid division by zero errors during the <code>recip</code> operation.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume Tensor is a valid tensor type with shape(), requiresGrad() and recip() member functions</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> tensor({2, 3}, <span class="keyword">true</span>);</div>
<div class="line"><span class="comment">// Assume tensor is filled with some non-zero values</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> result = 2.0f / tensor;</div>
</div><!-- fragment --> 
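<p>The reciprocal-then-multiply approach described above (lhs / x == lhs * (1 / x)) can be sketched on the host side. This is a hypothetical illustration on flat <code>std::vector&lt;float&gt;</code> buffers; <code>recip</code> and <code>iScalarMul</code> stand in for the tensor method and kernel of the same names, and <code>scalarDivTensor</code> is an invented name.</p>

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the recip() method: element-wise 1/x.
// Elements must be non-zero, matching the note in the documentation.
static std::vector<float> recip(const std::vector<float>& in) {
    std::vector<float> out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) out[i] = 1.0f / in[i];
    return out;
}

// Hypothetical stand-in for the iScalarMul kernel: out[i] = in[i] * s.
static std::vector<float> iScalarMul(const std::vector<float>& in, float s) {
    std::vector<float> out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) out[i] = in[i] * s;
    return out;
}

// Mirrors operator/(const float lhs, T& rhs): copy rhs, take the
// element-wise reciprocal, then scale every element by the scalar.
static std::vector<float> scalarDivTensor(float lhs,
                                          const std::vector<float>& rhs) {
    return iScalarMul(recip(rhs), lhs);
}
```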
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00732">732</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="a771a257e9dd839ce330e9b40fd1dda56" name="a771a257e9dd839ce330e9b40fd1dda56"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a771a257e9dd839ce330e9b40fd1dda56">&#9670;&#160;</a></span>operator/() <span class="overload">[2/2]</span></h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::operator/ </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>lhs</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const float</td>          <td class="paramname"><span class="paramname"><em>rhs</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overload the division operator to divide a tensor of type T by a scalar float. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">lhs</td><td>A reference to the left-hand side tensor of type T. The tensor data is used as the dividend for the division operation. </td></tr>
    <tr><td class="paramname">rhs</td><td>A constant float value representing the right-hand side scalar divisor.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor of type T that is the result of dividing each element of the tensor lhs by the scalar rhs.</dd></dl>
<p>This template operator overload first checks if the type T is a valid tensor type using <code>is_valid_tensor_type&lt;T&gt;::value</code>. If valid, it creates a new tensor <code>result</code> with the same shape and gradient requirement as <code>lhs</code>. Then it calls the <code>iScalarDiv</code> function to divide each element of <code>lhs</code> data by the scalar <code>rhs</code>. Finally, the newly created tensor <code>result</code> is returned.</p>
<p>Memory management:</p><ul>
<li>A new tensor <code>result</code> is created inside the function, and its memory allocation depends on the constructor of type T. The memory of <code>result</code> will be managed by its destructor when it goes out of scope.</li>
</ul>
<p>Exception handling:</p><ul>
<li>There is no explicit exception handling in this function. If the <code>iScalarDiv</code> function or the constructor of type T throws an exception, it will propagate to the caller.</li>
</ul>
<p>Relationship with other components:</p><ul>
<li>This function depends on the <code>iScalarDiv</code> function to perform the actual division operation.</li>
<li>It also depends on the <code>shape()</code> and <code>requiresGrad()</code> member functions of type T.</li>
</ul>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the size of the tensor <code>lhs</code>. This is because the <code>iScalarDiv</code> function needs to iterate over each element of the tensor.</li>
<li>Ensure that the type T is a valid tensor type as determined by <code>is_valid_tensor_type&lt;T&gt;::value</code>.</li>
<li>Ensure that the tensor <code>lhs</code> has valid shape, gradient requirement, and size information.</li>
<li>Ensure that the scalar <code>rhs</code> is not zero to avoid division by zero errors.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume Tensor is a valid tensor type with shape(), requiresGrad() member functions</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> tensor({2, 3}, <span class="keyword">true</span>);</div>
<div class="line"><span class="comment">// Assume tensor is filled with some values</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> result = tensor / 2.0f;</div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00689">689</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="af967fb10a908c374d8378ac7ef22779c" name="af967fb10a908c374d8378ac7ef22779c"></a>
<h2 class="memtitle"><span class="permalink"><a href="#af967fb10a908c374d8378ac7ef22779c">&#9670;&#160;</a></span>operator&lt;&lt;() <span class="overload">[1/2]</span></h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">std::ostream &amp; nz::data::operator&lt;&lt; </td>
          <td>(</td>
          <td class="paramtype">std::ostream &amp;</td>          <td class="paramname"><span class="paramname"><em>os</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html">MappedTensor</a> &amp;</td>          <td class="paramname"><span class="paramname"><em>tensor</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overload the &lt;&lt; operator to print a <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a> object to an output stream. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">os</td><td>An output stream where the <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a> data and gradient will be printed. </td></tr>
    <tr><td class="paramname">tensor</td><td>A constant reference to the <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a> object to be printed.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A reference to the output stream <code>os</code> after printing the tensor data and possibly its gradient.</dd></dl>
<p>This function provides a convenient way to print a <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a> object using the &lt;&lt; operator. It first calls the <code>print</code> method of the <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a> to print the tensor's data. If the tensor requires gradients, it then prints a header "Gradient: " followed by the gradient data using the <code>printGrad</code> method.</p>
<p>Memory management:</p><ul>
<li>The function does not allocate or deallocate any memory. It relies on the <code>print</code> and <code>printGrad</code> methods of the <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a>, which also do not perform memory allocation.</li>
</ul>
<p>Exception handling:</p><ul>
<li>If the tensor requires gradients and an exception occurs during the <code>printGrad</code> call (e.g., due to an invalid state of the output stream or incorrect internal data), the exception will be propagated. If the tensor does not require gradients, the <code>printGrad</code> call is skipped, and no exception related to gradient printing will be thrown.</li>
</ul>
<p>Relationship with other components:</p><ul>
<li>This function belongs to the data presentation component of the <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a>. It integrates the <code>print</code> and <code>printGrad</code> methods to provide a unified way of printing the tensor and its gradient.</li>
</ul>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname">std::invalid_argument</td><td>Propagated from the <code>printGrad</code> method if the tensor requires gradients and there is an issue with gradient printing.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The overall time complexity of this function is O(m * n), where m is the number of rows (<code>_shape[0]</code>) and n is the number of columns (<code>_shape[1]</code>) of the tensor. Printing the gradient doubles the amount of work, but the asymptotic complexity remains O(m * n).</li>
<li>Ensure that the output stream <code>os</code> is in a valid state before calling this function.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_dimension.html">nz::data::MappedTensor::shape_type</a> shape = {2, 3};</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_mapped_tensor.html">nz::data::MappedTensor</a> tensor(shape, <span class="keyword">true</span>);</div>
<div class="line">tensor.dataInject({1, 2, 3, 4, 5, 6}, <span class="keyword">false</span>);</div>
<div class="line">tensor.dataInject({7, 8, 9, 10, 11, 12}, <span class="keyword">true</span>);</div>
<div class="line">std::cout &lt;&lt; tensor;</div>
<div class="ttc" id="aclassnz_1_1data_1_1_dimension_html"><div class="ttname"><a href="classnz_1_1data_1_1_dimension.html">nz::data::Dimension</a></div><div class="ttdoc">Represents a multi-dimensional shape, typically used in deep learning for tensor dimensions.</div><div class="ttdef"><b>Definition</b> <a href="_dimension_8cuh_source.html#l00057">Dimension.cuh:57</a></div></div>
<div class="ttc" id="aclassnz_1_1data_1_1_mapped_tensor_html"><div class="ttname"><a href="classnz_1_1data_1_1_mapped_tensor.html">nz::data::MappedTensor</a></div><div class="ttdoc">A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...</div><div class="ttdef"><b>Definition</b> <a href="_mapped_tensor_8cuh_source.html#l00066">MappedTensor.cuh:66</a></div></div>
</div><!-- fragment --> 
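<p>The printing flow described above (always print the data; print a "Gradient: " header and the gradient only when gradients are tracked; return the stream for chaining) can be sketched with a minimal stand-in type. <code>FakeTensor</code> below is hypothetical: it mimics just enough of <code>MappedTensor</code>'s interface (<code>print</code>, <code>printGrad</code>, a gradient flag) to show the operator's structure, with none of the CUDA zero-copy memory handling.</p>

```cpp
#include <cassert>
#include <ostream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical host-side stand-in for MappedTensor: just enough state
// to demonstrate the documented printing flow.
struct FakeTensor {
    std::vector<float> data;
    std::vector<float> grad;
    bool requiresGrad = false;

    void print(std::ostream& os) const {
        for (float v : data) os << v << ' ';
        os << '\n';
    }
    void printGrad(std::ostream& os) const {
        for (float v : grad) os << v << ' ';
        os << '\n';
    }
};

// Mirrors the documented flow: always print the data; only when the
// tensor tracks gradients, print a "Gradient: " header followed by the
// gradient. Returning os enables chaining: std::cout << a << b;
static std::ostream& operator<<(std::ostream& os, const FakeTensor& t) {
    t.print(os);
    if (t.requiresGrad) {
        os << "Gradient: \n";
        t.printGrad(os);
    }
    return os;
}
```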
<p class="definition">Definition at line <a class="el" href="_mapped_tensor_8cu_source.html#l00045">45</a> of file <a class="el" href="_mapped_tensor_8cu_source.html">MappedTensor.cu</a>.</p>

</div>
</div>
<a id="a2907370af84a6c5bdc4b72803c9edc68" name="a2907370af84a6c5bdc4b72803c9edc68"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a2907370af84a6c5bdc4b72803c9edc68">&#9670;&#160;</a></span>operator&lt;&lt;() <span class="overload">[2/2]</span></h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">std::ostream &amp; nz::data::operator&lt;&lt; </td>
          <td>(</td>
          <td class="paramtype">std::ostream &amp;</td>          <td class="paramname"><span class="paramname"><em>os</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const <a class="el" href="classnz_1_1data_1_1_tensor.html">Tensor</a> &amp;</td>          <td class="paramname"><span class="paramname"><em>tensor</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overloads the <code>&lt;&lt;</code> operator to print the tensor's data to an output stream. </p>
<p>This function is a friend of the <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> class and provides an overloaded version of the output stream operator (<code>&lt;&lt;</code>) to print the contents of a tensor to the specified output stream (e.g., <code>std::cout</code> or a file stream).</p>
<p>The tensor's data is first copied from GPU memory to host memory for printing, and then the data is printed in a 2D matrix format. Each row of the tensor is printed on a new line, and each element in a row is separated by a space. Each row is enclosed in square brackets.</p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">os</td><td>The output stream to which the tensor will be printed. </td></tr>
    <tr><td class="paramname">tensor</td><td>The tensor whose contents will be printed. </td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>The output stream (<code>os</code>) after the tensor has been printed, allowing for chaining of operations.</dd></dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>This operator works by accessing the tensor's private data members (e.g., <code>_data</code>) directly.</li>
<li>The tensor's data is assumed to be in a valid state (i.e., properly allocated in GPU memory) before printing.</li>
<li>The function copies the tensor's data from device (GPU) memory to host (CPU) memory using <code>cudaMemcpy</code>, which may introduce performance overhead for large tensors.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> tensor({2, 3});</div>
<div class="line">tensor.<a class="code hl_function" href="classnz_1_1data_1_1_tensor.html#ad220de56b18c404611f07f2290cd7e9d">fill</a>(1.0f);  <span class="comment">// Fill the tensor with 1.0f</span></div>
<div class="line">std::cout &lt;&lt; tensor &lt;&lt; std::endl;  <span class="comment">// Prints the tensor to standard output in matrix format</span></div>
<div class="ttc" id="aclassnz_1_1data_1_1_tensor_html_ad220de56b18c404611f07f2290cd7e9d"><div class="ttname"><a href="classnz_1_1data_1_1_tensor.html#ad220de56b18c404611f07f2290cd7e9d">nz::data::Tensor::fill</a></div><div class="ttdeci">void fill(value_type value, bool isGrad=false) const</div><div class="ttdoc">Fills the tensor's data with a specified value.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_8cu_source.html#l00306">Tensor.cu:306</a></div></div>
</div><!-- fragment --> 
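<p>The bracketed, row-per-line layout described above can be sketched host-side in plain C++. The helper below is a hypothetical stand-in for the body of the operator after the device-to-host <code>cudaMemcpy</code> has completed; <code>printMatrix</code> and its parameters are illustrative names, not part of the library.</p>

```cpp
#include <cassert>
#include <cstddef>
#include <ostream>
#include <sstream>
#include <vector>

// Hypothetical host-side sketch of the printing loop. The real operator
// first copies the tensor's _data from GPU to a host buffer with
// cudaMemcpy; here the buffer is assumed to already be on the host.
std::ostream& printMatrix(std::ostream& os, const std::vector<float>& data,
                          std::size_t rows, std::size_t cols) {
    for (std::size_t r = 0; r < rows; ++r) {
        os << '[';                       // each row is enclosed in brackets
        for (std::size_t c = 0; c < cols; ++c) {
            if (c > 0) os << ' ';        // elements separated by spaces
            os << data[r * cols + c];
        }
        os << "]\n";                     // one row per line
    }
    return os;
}
```

<p>For a 2&#215;3 tensor filled with <code>1.0f</code>, this produces two lines of the form <code>[1 1 1]</code>.</p>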
<p class="definition">Definition at line <a class="el" href="_tensor_8cu_source.html#l00039">39</a> of file <a class="el" href="_tensor_8cu_source.html">Tensor.cu</a>.</p>

</div>
</div>
<a id="a40134aba93013e1b0d43c6fd5158d400" name="a40134aba93013e1b0d43c6fd5158d400"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a40134aba93013e1b0d43c6fd5158d400">&#9670;&#160;</a></span>operator&gt;&gt;() <span class="overload">[1/2]</span></h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">std::istream &amp; nz::data::operator&gt;&gt; </td>
          <td>(</td>
          <td class="paramtype">std::istream &amp;</td>          <td class="paramname"><span class="paramname"><em>is</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const <a class="el" href="classnz_1_1data_1_1_tensor.html">Tensor</a> &amp;</td>          <td class="paramname"><span class="paramname"><em>tensor</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overloads the <code>&gt;&gt;</code> operator to read a tensor's data from an input stream. </p>
<p>This function is a friend of the <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> class and provides an overloaded version of the input stream operator (<code>&gt;&gt;</code>) to read the contents of a tensor from the specified input stream (e.g., <code>std::cin</code> or a file stream).</p>
<p>The function reads the tensor's data element by element from the input stream and stores the values in a temporary buffer. Once all the data has been read, it is copied from the host memory back into the tensor's GPU memory using <code>cudaMemcpy</code>.</p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">is</td><td>The input stream from which the tensor's data will be read. </td></tr>
    <tr><td class="paramname">tensor</td><td>The tensor into which the data will be read. </td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>The input stream (<code>is</code>) after reading the tensor's data, allowing for chaining of operations.</dd></dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>This operator works by reading data from the input stream and storing it in a temporary buffer on the host.</li>
<li>The function assumes that the input data matches the size of the tensor. If the data is malformed or does not match, the behavior may be undefined.</li>
<li>After reading, the data is copied from host memory back into the tensor's GPU memory.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> tensor({2, 3});</div>
<div class="line">std::cin &gt;&gt; tensor;  <span class="comment">// Reads the tensor&#39;s data from standard input</span></div>
</div><!-- fragment --> 
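<p>The two-stage read described above (element-wise parse into a host buffer, then upload) can be sketched as follows. <code>readIntoHostBuffer</code> is a hypothetical helper, not library API; the real operator finishes with <code>cudaMemcpy(..., cudaMemcpyHostToDevice)</code> into the tensor's <code>_data</code>.</p>

```cpp
#include <cassert>
#include <cstddef>
#include <istream>
#include <sstream>
#include <vector>

// Hypothetical staging step: parse up to host.size() elements from the
// stream into a host buffer, stopping early on malformed or short input.
// The real operator then uploads this buffer to the tensor's GPU memory
// with cudaMemcpy(..., cudaMemcpyHostToDevice).
std::istream& readIntoHostBuffer(std::istream& is, std::vector<float>& host) {
    for (std::size_t i = 0; i < host.size(); ++i) {
        if (!(is >> host[i])) break;   // stop if the stream runs dry
    }
    return is;
}
```

<p>Stopping on stream failure mirrors the documented behavior: if the input does not match the tensor's size, the remaining elements are simply not written.</p>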
<p class="definition">Definition at line <a class="el" href="_tensor_8cu_source.html#l00076">76</a> of file <a class="el" href="_tensor_8cu_source.html">Tensor.cu</a>.</p>

</div>
</div>
<a id="a4ea5e60f987ab3853b4d0af44453a9e2" name="a4ea5e60f987ab3853b4d0af44453a9e2"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a4ea5e60f987ab3853b4d0af44453a9e2">&#9670;&#160;</a></span>operator&gt;&gt;() <span class="overload">[2/2]</span></h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">std::istream &amp; nz::data::operator&gt;&gt; </td>
          <td>(</td>
          <td class="paramtype">std::istream &amp;</td>          <td class="paramname"><span class="paramname"><em>is</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype"><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html">MappedTensor</a> &amp;</td>          <td class="paramname"><span class="paramname"><em>tensor</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overload the &gt;&gt; operator to read data from an input stream into a <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a> object. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">is</td><td>An input stream from which the data will be read. </td></tr>
    <tr><td class="paramname">tensor</td><td>A reference to the <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a> object where the data will be stored.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A reference to the input stream <code>is</code> after the reading operation.</dd></dl>
<p>This function provides a convenient way to populate a <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a> object with data from an input stream. It iterates through the elements of the tensor and reads values from the input stream one by one, until either all elements of the tensor have been filled or the input stream fails to provide more data.</p>
<p>Memory management: The function does not allocate or deallocate any memory. It assumes that the <code>_data</code> array of the <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a> has already been allocated with the appropriate size (<code>_size</code>).</p>
<p>Exception handling: If the input stream fails to provide data (e.g., due to end-of-file or an invalid input format), the loop terminates and the function returns the input stream in its current state. The function itself throws no exceptions, but the stream's <code>&gt;&gt;</code> operator may, depending on its configuration.</p>
<p>Relationship with other components: This function belongs to the data input component of <a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a> and integrates with standard input streams for easy data population.</p>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the size of the tensor (<code>_size</code>), as it iterates through each element of the tensor once.</li>
<li>Ensure that the input stream contains valid data in the correct format to avoid unexpected behavior.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_dimension.html">nz::data::MappedTensor::shape_type</a> shape = {2, 3};</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_mapped_tensor.html">nz::data::MappedTensor</a> tensor(shape, <span class="keyword">false</span>);</div>
<div class="line">std::istringstream iss(<span class="stringliteral">&quot;1 2 3 4 5 6&quot;</span>);</div>
<div class="line">iss &gt;&gt; tensor;</div>
</div><!-- fragment --> 
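<p>Because a <code>MappedTensor</code> lives in zero-copy memory, its buffer is directly host-accessible and no staging copy is needed: each parsed value can be written straight into the mapped pointer. A minimal sketch, assuming a raw <code>float*</code> stands in for the tensor's <code>_data</code> (hypothetical helper, not library API):</p>

```cpp
#include <cassert>
#include <cstddef>
#include <istream>
#include <sstream>

// Zero-copy sketch: write each parsed value directly into the
// host-accessible buffer; stop early if the stream fails, leaving the
// remaining elements untouched (as the documentation above describes).
std::istream& readMapped(std::istream& is, float* data, std::size_t size) {
    for (std::size_t i = 0; i < size && (is >> data[i]); ++i) {}
    return is;
}
```

<p>With input <code>"1 2 3"</code> and a 6-element buffer, only the first three elements are overwritten.</p>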
<p class="definition">Definition at line <a class="el" href="_mapped_tensor_8cu_source.html#l00081">81</a> of file <a class="el" href="_mapped_tensor_8cu_source.html">MappedTensor.cu</a>.</p>

</div>
</div>
<a id="a4706224f5e7c9a0cfe4c74983aaef1bd" name="a4706224f5e7c9a0cfe4c74983aaef1bd"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a4706224f5e7c9a0cfe4c74983aaef1bd">&#9670;&#160;</a></span>ReLU()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::ReLU </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>input</em></span></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Apply the Rectified Linear Unit (ReLU) activation function element-wise to an input tensor. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">input</td><td>The input tensor (either <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) to which the ReLU function will be applied (device-to-device).</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor (of the same type as the input: <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) with the ReLU function applied element-wise.</dd></dl>
<p>This function applies the ReLU activation function, defined as <i>f(x) = max(0, x)</i>, to each element of the input tensor. It first creates a new tensor <code>result</code> with the same shape and gradient requirement as the input tensor. Then, it calls the <code>iRELU</code> function to perform the actual ReLU operation on the data of the input tensor and store the results in the <code>result</code> tensor. Finally, the <code>result</code> tensor is returned.</p>
<p>Memory management: A new tensor <code>result</code> is created, and its memory is managed by the tensor's own class (<code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>). The memory of the input tensor remains unchanged.</p>
<p>Exception handling: There is no explicit exception handling in this function; any exception thrown by <code>iRELU</code> or the tensor constructors propagates to the caller.</p>
<p>Relationship with other components: This function depends on <code>iRELU</code> to perform the ReLU operation and on the tensor's constructor to create the new tensor.</p>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname"></td><td>Any exception thrown by <code>iRELU</code> or the tensor constructors, e.g. on memory allocation failure or invalid input data.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the number of elements in the input tensor (<code>input.size()</code>), as it needs to apply the ReLU function to each element.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// T can be Tensor or MappedTensor; Tensor is shown here</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> input({2, 3}, <span class="keyword">true</span>);</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> output = <a class="code hl_function" href="#a4706224f5e7c9a0cfe4c74983aaef1bd">ReLU</a>(input);</div>
<div class="ttc" id="anamespacenz_1_1data_html_a4706224f5e7c9a0cfe4c74983aaef1bd"><div class="ttname"><a href="#a4706224f5e7c9a0cfe4c74983aaef1bd">nz::data::ReLU</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; ReLU(T &amp;input)</div><div class="ttdoc">Apply the Rectified Linear Unit (ReLU) activation function element-wise to an input tensor.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l00050">TensorOperations.cuh:50</a></div></div>
</div><!-- fragment --> 
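<p>A host-side reference for the element-wise transform that <code>iRELU</code> is assumed to launch on the GPU. <code>reluHost</code> is a hypothetical CPU equivalent for illustration, not part of the library:</p>

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// CPU reference for the assumed GPU kernel: f(x) = max(0, x),
// applied independently to each element (hence the O(n) cost).
std::vector<float> reluHost(const std::vector<float>& in) {
    std::vector<float> out(in.size());
    std::transform(in.begin(), in.end(), out.begin(),
                   [](float x) { return std::max(0.0f, x); });
    return out;
}
```

<p>Negative inputs map to zero; non-negative inputs pass through unchanged.</p>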
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00050">50</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="aa9a6da30ae0d71faa4ac32efb9dd1f2f" name="aa9a6da30ae0d71faa4ac32efb9dd1f2f"></a>
<h2 class="memtitle"><span class="permalink"><a href="#aa9a6da30ae0d71faa4ac32efb9dd1f2f">&#9670;&#160;</a></span>Sigmoid()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::Sigmoid </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>input</em></span></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Apply the sigmoid activation function element-wise to an input tensor. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">input</td><td>The input tensor (either <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) to which the sigmoid function will be applied (device-to-device).</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor (of the same type as the input: <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) with the sigmoid function applied element-wise.</dd></dl>
<p>This function applies the sigmoid activation function, defined as <i>f(x) = 1 / (1 + e<sup>&#8722;x</sup>)</i>, to each element of the input tensor. It first creates a new tensor <code>result</code> with the same shape and gradient requirement as the input tensor. Then, it calls the <code>iSigmoid</code> function to perform the actual sigmoid operation on the data of the input tensor and store the results in the <code>result</code> tensor. Finally, the <code>result</code> tensor is returned.</p>
<p>Memory management: A new tensor <code>result</code> is created, and its memory is managed by the tensor's own class (<code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>). The memory of the input tensor remains unchanged.</p>
<p>Exception handling: There is no explicit exception handling in this function; any exception thrown by <code>iSigmoid</code> or the tensor constructors propagates to the caller.</p>
<p>Relationship with other components: This function depends on <code>iSigmoid</code> to perform the sigmoid operation and on the tensor's constructor to create the new tensor.</p>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname"></td><td>Any exception thrown by <code>iSigmoid</code> or the tensor constructors, e.g. on memory allocation failure or invalid input data.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the number of elements in the input tensor (<code>input.size()</code>), as it needs to apply the sigmoid function to each element.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// T can be Tensor or MappedTensor; Tensor is shown here</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> input({2, 3}, <span class="keyword">true</span>);</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> output = <a class="code hl_function" href="#aa9a6da30ae0d71faa4ac32efb9dd1f2f">Sigmoid</a>(input);</div>
<div class="ttc" id="anamespacenz_1_1data_html_aa9a6da30ae0d71faa4ac32efb9dd1f2f"><div class="ttname"><a href="#aa9a6da30ae0d71faa4ac32efb9dd1f2f">nz::data::Sigmoid</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; Sigmoid(T &amp;input)</div><div class="ttdoc">Apply the sigmoid activation function element-wise to an input tensor.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l00088">TensorOperations.cuh:88</a></div></div>
</div><!-- fragment --> 
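<p>The sigmoid formula above can be checked against a host-side reference; <code>sigmoidScalar</code> and <code>sigmoidHost</code> are hypothetical CPU equivalents of the assumed <code>iSigmoid</code> kernel, shown only for illustration:</p>

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// CPU reference for the assumed GPU kernel: f(x) = 1 / (1 + e^(-x)).
float sigmoidScalar(float x) { return 1.0f / (1.0f + std::exp(-x)); }

std::vector<float> sigmoidHost(const std::vector<float>& in) {
    std::vector<float> out;
    out.reserve(in.size());
    for (float x : in) out.push_back(sigmoidScalar(x));  // element-wise, O(n)
    return out;
}
```

<p>The output is bounded in (0, 1), with <i>f(0) = 0.5</i>.</p>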
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00088">88</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="a55e8a3fae0d75e214cd714fde8811543" name="a55e8a3fae0d75e214cd714fde8811543"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a55e8a3fae0d75e214cd714fde8811543">&#9670;&#160;</a></span>Softmax()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::Softmax </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>input</em></span></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Compute the softmax function for a given input of type T. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">input</td><td>The input object of type T for which the softmax function will be computed. The input is passed by reference, as shown in the signature above.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>An object of type T representing the result of the softmax function applied to the input.</dd></dl>
<p>This function computes the softmax function for the given input. It first creates a new object <code>result</code> with the same shape and gradient requirement as the input. Then, it calls the <code>iSoftmax</code> function to perform the actual softmax computation. The <code>iSoftmax</code> function takes the data pointers of the result and input, the exponential sum of the input, and the size of the input as parameters. Finally, the computed result is returned.</p>
<p>Memory management:</p><ul>
<li>A new object <code>result</code> is created inside the function, which may allocate memory depending on the implementation of the constructor of type T. The memory for the result will be managed by the destructor of the object when it goes out of scope.</li>
</ul>
<p>Exception handling:</p><ul>
<li>There is no explicit exception handling in this function. However, if the <code>iSoftmax</code> function or the constructor of type T throws an exception, it will propagate up to the caller.</li>
</ul>
<p>Relationship with other components:</p><ul>
<li>This function depends on the <code>iSoftmax</code> function to perform the actual softmax computation.</li>
<li>It also depends on the <code>shape()</code>, <code>requiresGrad()</code>, <code>expSum()</code>, and <code>size()</code> member functions of type T.</li>
</ul>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function depends on the implementation of the <code>iSoftmax</code> function. If the <code>iSoftmax</code> function has a time complexity of O(n), where n is the size of the input, then the overall time complexity of this function is also O(n).</li>
<li>Ensure that the input object <code>input</code> has valid shape, gradient requirement, exponential sum, and size information.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// Assume Tensor is a valid type with shape(), requiresGrad(), expSum(), and size() member functions</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> input({2, 3}, <span class="keyword">true</span>);</div>
<div class="line"><span class="comment">// Assume input is filled with some values</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> result = <a class="code hl_function" href="#a55e8a3fae0d75e214cd714fde8811543">Softmax</a>(input);</div>
<div class="ttc" id="anamespacenz_1_1data_html_a55e8a3fae0d75e214cd714fde8811543"><div class="ttname"><a href="#a55e8a3fae0d75e214cd714fde8811543">nz::data::Softmax</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; Softmax(T &amp;input)</div><div class="ttdoc">Compute the softmax function for a given input of type T.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l00364">TensorOperations.cuh:364</a></div></div>
</div><!-- fragment --> 
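<p>The description above says <code>iSoftmax</code> receives the input's exponential sum (<code>expSum()</code>) and divides each exponentiated element by it. A host-side sketch of that computation, with <code>softmaxHost</code> as a hypothetical CPU stand-in (note: this direct form, matching the description, can overflow for large inputs; production kernels usually subtract the maximum first):</p>

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// CPU reference for the assumed computation: out_i = exp(in_i) / expSum,
// where expSum is the sum of exp over all elements (what expSum() supplies).
std::vector<float> softmaxHost(const std::vector<float>& in) {
    float expSum = 0.0f;
    for (float x : in) expSum += std::exp(x);
    std::vector<float> out;
    out.reserve(in.size());
    for (float x : in) out.push_back(std::exp(x) / expSum);
    return out;
}
```

<p>The result is a probability distribution: non-negative entries summing to 1, with equal inputs mapping to equal probabilities.</p>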
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00364">364</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="ae563f53512549e2e54f066f7bf06622e" name="ae563f53512549e2e54f066f7bf06622e"></a>
<h2 class="memtitle"><span class="permalink"><a href="#ae563f53512549e2e54f066f7bf06622e">&#9670;&#160;</a></span>Swish()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::Swish </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>input</em></span></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Apply the Swish activation function element-wise to an input tensor. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">input</td><td>The input tensor (either <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) to which the Swish function will be applied (device-to-device).</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor (of the same type as the input: <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) with the Swish function applied element-wise.</dd></dl>
<p>This function applies the Swish activation function, defined as <i>f(x) = x &#183; &#963;(x)</i>, where <i>&#963;(x) = 1 / (1 + e<sup>&#8722;x</sup>)</i> is the sigmoid function, to each element of the input tensor. It first creates a new tensor <code>result</code> with the same shape and gradient requirement as the input tensor. Then, it calls the <code>iSwish</code> function to perform the actual Swish operation on the data of the input tensor and store the results in the <code>result</code> tensor. Finally, the <code>result</code> tensor is returned.</p>
<p>Memory management: A new tensor <code>result</code> is created, and its memory is managed by the tensor's own class (<code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>). The memory of the input tensor remains unchanged.</p>
<p>Exception handling: There is no explicit exception handling in this function; any exception thrown by <code>iSwish</code> or the tensor constructors propagates to the caller.</p>
<p>Relationship with other components: This function depends on <code>iSwish</code> to perform the Swish operation and on the tensor's constructor to create the new tensor.</p>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname"></td><td>Any exception thrown by <code>iSwish</code> or the tensor constructors, e.g. on memory allocation failure or invalid input data.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the number of elements in the input tensor (<code>input.size()</code>), as it needs to apply the Swish function to each element.</li>
</ul>
</dd></dl>
<div class="fragment">
<div class="line"><span class="comment">// T can be Tensor or MappedTensor; Tensor is shown here</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> input({2, 3}, <span class="keyword">true</span>);</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">nz::data::Tensor</a> output = <a class="code hl_function" href="#ae563f53512549e2e54f066f7bf06622e">Swish</a>(input);</div>
<div class="ttc" id="anamespacenz_1_1data_html_ae563f53512549e2e54f066f7bf06622e"><div class="ttname"><a href="#ae563f53512549e2e54f066f7bf06622e">nz::data::Swish</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; Swish(T &amp;input)</div><div class="ttdoc">Apply the Swish activation function element-wise to an input tensor.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l00202">TensorOperations.cuh:202</a></div></div>
</div><!-- fragment --> 
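<p>The Swish formula above reduces to a single scalar expression; <code>swishScalar</code> below is a hypothetical CPU reference for the assumed <code>iSwish</code> kernel, shown only for illustration:</p>

```cpp
#include <cassert>
#include <cmath>

// CPU reference for the assumed GPU kernel:
// f(x) = x * sigmoid(x) = x / (1 + e^(-x)).
float swishScalar(float x) { return x / (1.0f + std::exp(-x)); }
```

<p>Swish is zero at the origin and approaches the identity for large positive inputs, since the sigmoid factor saturates at 1.</p>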
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00202">202</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="aed71109d5ed6ecdb7181afc751fa2aa1" name="aed71109d5ed6ecdb7181afc751fa2aa1"></a>
<h2 class="memtitle"><span class="permalink"><a href="#aed71109d5ed6ecdb7181afc751fa2aa1">&#9670;&#160;</a></span>Tanh()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::Tanh </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>input</em></span></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Apply the hyperbolic tangent (tanh) activation function element-wise to an input tensor. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">input</td><td>The input tensor (either <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) to which the tanh function will be applied (device-to-device).</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor (of the same type as the input: <code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>) with the tanh function applied element-wise.</dd></dl>
<p>This function applies the hyperbolic tangent activation function, defined as <i>f(x) = (e<sup>x</sup> &#8722; e<sup>&#8722;x</sup>) / (e<sup>x</sup> + e<sup>&#8722;x</sup>)</i>, to each element of the input tensor. It first creates a new tensor <code>result</code> with the same shape and gradient requirement as the input tensor. Then, it calls the <code>iTanh</code> function to perform the actual tanh operation on the data of the input tensor and store the results in the <code>result</code> tensor. Finally, the <code>result</code> tensor is returned.</p>
<p><b>Memory Management Strategy</b>:</p><ul>
<li>A new tensor <code>result</code> is created, and its memory is managed by the tensor's own class (<code><a class="el" href="classnz_1_1data_1_1_tensor.html" title="A class for representing and manipulating multidimensional arrays (tensors) in GPU memory.">Tensor</a></code> or <code><a class="el" href="classnz_1_1data_1_1_mapped_tensor.html" title="A class for representing multidimensional arrays in CUDA zero-copy memory, providing host-accessible ...">MappedTensor</a></code>). The memory of the input tensor remains unchanged.</li>
</ul>
<p><b>Exception Handling Mechanism</b>:</p><ul>
<li>There is no explicit exception handling in this function; any exception thrown by <code>iTanh</code> or the tensor constructors propagates to the caller.</li>
</ul>
<p><b>Relationship with Other Components</b>:</p><ul>
<li>Depends on the <code>iTanh</code> function to perform the tanh operation and on the tensor's constructor to create the new tensor.</li>
</ul>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname">[Exception</td><td>type thrown by iTanh or tensor constructors] If there are issues during the operation, such as memory allocation failures or incorrect input data.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(n), where n is the number of elements in the input tensor (<code>input.size()</code>), as it needs to apply the tanh function to each element.</li>
</ul>
</dd></dl>
<div class="fragment"><div class="line">```cpp</div>
<div class="line"><span class="comment">// Assume T is either Tensor or MappedTensor</span></div>
<div class="line">nz::data::T::shape_type shape = {2, 3};</div>
<div class="line">nz::data::T input(shape, <span class="keyword">true</span>);</div>
<div class="line">nz::data::T output = <a class="code hl_function" href="#aed71109d5ed6ecdb7181afc751fa2aa1">Tanh</a>(input);</div>
<div class="line">```</div>
<div class="ttc" id="anamespacenz_1_1data_html_aed71109d5ed6ecdb7181afc751fa2aa1"><div class="ttname"><a href="#aed71109d5ed6ecdb7181afc751fa2aa1">nz::data::Tanh</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; Tanh(T &amp;input)</div><div class="ttdoc">Apply the hyperbolic tangent (tanh) activation function element-wise to an input tensor.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l00126">TensorOperations.cuh:126</a></div></div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00126">126</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="a1da5cd018533919ed5a750b14c7d6d71" name="a1da5cd018533919ed5a750b14c7d6d71"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a1da5cd018533919ed5a750b14c7d6d71">&#9670;&#160;</a></span>tensorElementwiseDivide()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, void &gt; nz::data::tensorElementwiseDivide </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>out</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const T &amp;</td>          <td class="paramname"><span class="paramname"><em>lhs</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const T &amp;</td>          <td class="paramname"><span class="paramname"><em>rhs</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Performs an element-wise division operation on tensors with broadcast compatibility. </p>
<p>This template function divides each element of the tensor <code>lhs</code> by the corresponding element of the tensor <code>rhs</code> and stores the result in the tensor <code>out</code>. It is only enabled for types <code>T</code> that satisfy <code>is_valid_tensor_type&lt;T&gt;::value</code>. The shapes of the input tensors must be broadcast compatible, and their height and width dimensions must match.</p>
<dl class="tparams"><dt>Template Parameters</dt><dd>
  <table class="tparams">
    <tr><td class="paramname">T</td><td>The tensor type, which must satisfy <code>is_valid_tensor_type&lt;T&gt;::value</code>. </td></tr>
  </table>
  </dd>
</dl>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">out</td><td>The output tensor where the result of the element - wise division will be stored. Memory flow: host - to - function (reference), function - to - host (modified). </td></tr>
    <tr><td class="paramname">lhs</td><td>The left - hand side tensor in the division operation. Memory flow: host - to - function. </td></tr>
    <tr><td class="paramname">rhs</td><td>The right - hand side tensor in the division operation. Memory flow: host - to - function.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>None</dd></dl>
<p><b>Memory Management Strategy</b>:</p><ul>
<li>The function does not allocate or free memory for the tensors. It creates local <code>std::vector</code> objects (<code>offsetC</code>, <code>offsetA</code>, <code>offsetB</code>) to store offset values. These vectors are automatically managed by their destructors.</li>
</ul>
<p><b>Exception Handling Mechanism</b>:</p><ul>
<li>Throws <code>std::invalid_argument</code> if the shapes of <code>lhs</code> and <code>rhs</code> are not broadcast compatible or if their height and width dimensions do not match.</li>
</ul>
<p><b>Relationship with Other Components</b>:</p><ul>
<li>Depends on the <code>shape()</code> method of the tensor type <code>T</code> to access shape information, including broadcast compatibility, height, width, batch size, channel count, and strides.</li>
<li>Relies on the <code>iElementwiseDivide</code> function to perform the actual element-wise division.</li>
</ul>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname">std::invalid_argument</td><td>When the shapes of <code>lhs</code> and <code>rhs</code> are not broadcast compatible or their height and width dimensions do not match.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(m * n), where m is the product of the batch and channel dimensions of the output tensor (<code>out.shape()[0] * out.shape()[1]</code>), and n is the number of elements in a single matrix (<code>lhs.shape().H() * lhs.shape().W()</code>).</li>
</ul>
</dd></dl>
<div class="fragment"><div class="line">```cpp</div>
<div class="line"><span class="comment">// Assume we have a valid tensor type Tensor</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> out;</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> lhs;</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> rhs;</div>
<div class="line"><span class="keywordflow">try</span> {</div>
<div class="line">    <a class="code hl_function" href="#a1da5cd018533919ed5a750b14c7d6d71">tensorElementwiseDivide</a>(out, lhs, rhs);</div>
<div class="line">} <span class="keywordflow">catch</span> (<span class="keyword">const</span> std::invalid_argument&amp; e) {</div>
<div class="line">    std::cerr &lt;&lt; e.what() &lt;&lt; std::endl;</div>
<div class="line">}</div>
<div class="line">```</div>
<div class="ttc" id="anamespacenz_1_1data_html_a1da5cd018533919ed5a750b14c7d6d71"><div class="ttname"><a href="#a1da5cd018533919ed5a750b14c7d6d71">nz::data::tensorElementwiseDivide</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, void &gt; tensorElementwiseDivide(T &amp;out, const T &amp;lhs, const T &amp;rhs)</div><div class="ttdoc">Performs element - wise division operation on tensors with broadcast compatibility.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l00928">TensorOperations.cuh:928</a></div></div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00928">928</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="a5a166a472b887c45fde9e5815f072234" name="a5a166a472b887c45fde9e5815f072234"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a5a166a472b887c45fde9e5815f072234">&#9670;&#160;</a></span>tensorGeneralMatrixMul()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, void &gt; nz::data::tensorGeneralMatrixMul </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>out</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const T &amp;</td>          <td class="paramname"><span class="paramname"><em>lhs</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const T &amp;</td>          <td class="paramname"><span class="paramname"><em>rhs</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Performs general matrix multiplication on tensors with broadcast compatibility. </p>
<p>This template function multiplies the tensor <code>lhs</code> by the tensor <code>rhs</code> and stores the result in the tensor <code>out</code>. It is only enabled for types <code>T</code> that satisfy <code>is_valid_tensor_type&lt;T&gt;::value</code>. The shapes of the input tensors must be broadcast compatible, and the width of <code>lhs</code> must be equal to the height of <code>rhs</code>.</p>
<dl class="tparams"><dt>Template Parameters</dt><dd>
  <table class="tparams">
    <tr><td class="paramname">T</td><td>The tensor type, which must satisfy <code>is_valid_tensor_type&lt;T&gt;::value</code>. </td></tr>
  </table>
  </dd>
</dl>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">out</td><td>The output tensor that will hold the result of the matrix multiplication. Memory flow: host-to-function (reference), function-to-host (modified). </td></tr>
    <tr><td class="paramname">lhs</td><td>The left-hand side tensor in the matrix multiplication. Memory flow: host-to-function. </td></tr>
    <tr><td class="paramname">rhs</td><td>The right-hand side tensor in the matrix multiplication. Memory flow: host-to-function.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>None</dd></dl>
<p><b>Memory Management Strategy</b>:</p><ul>
<li>The function does not allocate or free memory for the tensors themselves. It creates local <code>std::vector</code> objects (<code>offsetC</code>, <code>offsetA</code>, <code>offsetB</code>) to store offset values. These vectors are automatically managed by their destructors.</li>
</ul>
<p><b>Exception Handling Mechanism</b>:</p><ul>
<li>Throws <code>std::invalid_argument</code> if the shapes of <code>lhs</code> and <code>rhs</code> are not broadcast compatible or if the width of <code>lhs</code> is not equal to the height of <code>rhs</code>.</li>
</ul>
<p><b>Relationship with Other Components</b>:</p><ul>
<li>Depends on the <code>shape()</code> method of the tensor type <code>T</code> to obtain shape information, such as broadcast compatibility, height, width, batch size, channel count, and strides.</li>
<li>Relies on the <code>iGeneralMatrixMul</code> function to perform the actual matrix multiplication.</li>
</ul>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname">std::invalid_argument</td><td>When the shapes of <code>lhs</code> and <code>rhs</code> are not broadcast compatible or the width of <code>lhs</code> is not equal to the height of <code>rhs</code>.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(m * k * n), where m is the height of <code>lhs</code>, k is the width of <code>lhs</code> (equal to the height of <code>rhs</code>), and n is the width of <code>rhs</code>.</li>
</ul>
</dd></dl>
<div class="fragment"><div class="line">```cpp</div>
<div class="line"><span class="comment">// Assume we have a valid tensor type Tensor</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> out;</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> lhs;</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> rhs;</div>
<div class="line"><span class="keywordflow">try</span> {</div>
<div class="line">    <a class="code hl_function" href="#a5a166a472b887c45fde9e5815f072234">tensorGeneralMatrixMul</a>(out, lhs, rhs);</div>
<div class="line">} <span class="keywordflow">catch</span> (<span class="keyword">const</span> std::invalid_argument&amp; e) {</div>
<div class="line">    std::cerr &lt;&lt; e.what() &lt;&lt; std::endl;</div>
<div class="line">}</div>
<div class="line">```</div>
<div class="ttc" id="anamespacenz_1_1data_html_a5a166a472b887c45fde9e5815f072234"><div class="ttname"><a href="#a5a166a472b887c45fde9e5815f072234">nz::data::tensorGeneralMatrixMul</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, void &gt; tensorGeneralMatrixMul(T &amp;out, const T &amp;lhs, const T &amp;rhs)</div><div class="ttdoc">Performs general matrix multiplication on tensors with broadcast compatibility.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l01000">TensorOperations.cuh:1000</a></div></div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l01000">1000</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="a8cf4ac2437dd67698684169bebb225d4" name="a8cf4ac2437dd67698684169bebb225d4"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a8cf4ac2437dd67698684169bebb225d4">&#9670;&#160;</a></span>tensorMatrixAdd()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, void &gt; nz::data::tensorMatrixAdd </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>out</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const T &amp;</td>          <td class="paramname"><span class="paramname"><em>lhs</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const T &amp;</td>          <td class="paramname"><span class="paramname"><em>rhs</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Performs matrix addition operation on tensors with broadcast compatibility. </p>
<p>This function is a template function that adds two tensors <code>lhs</code> and <code>rhs</code> and stores the result in <code>out</code>. It only accepts tensor types for which <code>is_valid_tensor_type&lt;T&gt;::value</code> is <code>true</code>. The shapes of the input tensors must be broadcast compatible, and the height and width dimensions must match.</p>
<dl class="tparams"><dt>Template Parameters</dt><dd>
  <table class="tparams">
    <tr><td class="paramname">T</td><td>The tensor type. This type must satisfy <code>is_valid_tensor_type&lt;T&gt;::value</code>. </td></tr>
  </table>
  </dd>
</dl>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">out</td><td>The output tensor where the result of the addition will be stored. Memory flow: host-to-function (for reference), function-to-host (modifies the object). </td></tr>
    <tr><td class="paramname">lhs</td><td>The left-hand side tensor of the addition. Memory flow: host-to-function. </td></tr>
    <tr><td class="paramname">rhs</td><td>The right-hand side tensor of the addition. Memory flow: host-to-function.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>None</dd></dl>
<p><b>Memory Management Strategy</b>:</p><ul>
<li>This function does not allocate or free any additional memory for the tensors. It only uses local <code>std::vector</code> objects (<code>offsetC</code>, <code>offsetA</code>, <code>offsetB</code>) to store offset values, and these vectors are automatically managed by their destructors.</li>
</ul>
<p><b>Exception Handling Mechanism</b>:</p><ul>
<li>Throws <code>std::invalid_argument</code> if the shapes of <code>lhs</code> and <code>rhs</code> are not broadcast compatible or if their height and width dimensions do not match.</li>
</ul>
<p><b>Relationship with Other Components</b>:</p><ul>
<li>Depends on the <code>shape()</code> method of the tensor type <code>T</code> to access shape information, including broadcast compatibility, height, width, number of batches, number of channels, and strides.</li>
<li>Relies on the <code>iMatrixAdd</code> function to perform the actual matrix addition operation.</li>
</ul>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname">std::invalid_argument</td><td>When the shapes of <code>lhs</code> and <code>rhs</code> are not broadcast compatible or their height and width dimensions do not match.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(m * n), where m is the product of the batch and channel dimensions of the output tensor (<code>out.shape()[0] * out.shape()[1]</code>), and n is the number of elements in a single matrix (<code>lhs.shape().H() * lhs.shape().W()</code>).</li>
</ul>
</dd></dl>
<div class="fragment"><div class="line">```cpp</div>
<div class="line"><span class="comment">// Assume we have a valid tensor type Tensor</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> out;</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> lhs;</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> rhs;</div>
<div class="line"><span class="keywordflow">try</span> {</div>
<div class="line">    <a class="code hl_function" href="#a8cf4ac2437dd67698684169bebb225d4">tensorMatrixAdd</a>(out, lhs, rhs);</div>
<div class="line">} <span class="keywordflow">catch</span> (<span class="keyword">const</span> std::invalid_argument&amp; e) {</div>
<div class="line">    std::cerr &lt;&lt; e.what() &lt;&lt; std::endl;</div>
<div class="line">}</div>
<div class="line">```</div>
<div class="ttc" id="anamespacenz_1_1data_html_a8cf4ac2437dd67698684169bebb225d4"><div class="ttname"><a href="#a8cf4ac2437dd67698684169bebb225d4">nz::data::tensorMatrixAdd</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, void &gt; tensorMatrixAdd(T &amp;out, const T &amp;lhs, const T &amp;rhs)</div><div class="ttdoc">Performs matrix addition operation on tensors with broadcast compatibility.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l00787">TensorOperations.cuh:787</a></div></div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00787">787</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="a7503b6894e8052ed54eb169550d135c0" name="a7503b6894e8052ed54eb169550d135c0"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a7503b6894e8052ed54eb169550d135c0">&#9670;&#160;</a></span>tensorMatrixSub()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, void &gt; nz::data::tensorMatrixSub </td>
          <td>(</td>
          <td class="paramtype">T &amp;</td>          <td class="paramname"><span class="paramname"><em>out</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const T &amp;</td>          <td class="paramname"><span class="paramname"><em>lhs</em></span>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">const T &amp;</td>          <td class="paramname"><span class="paramname"><em>rhs</em></span>&#160;)</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Performs matrix subtraction operation on tensors with broadcast compatibility. </p>
<p>This template function subtracts the tensor <code>rhs</code> from the tensor <code>lhs</code> and stores the result in the tensor <code>out</code>. It is only enabled for types <code>T</code> that satisfy <code>is_valid_tensor_type&lt;T&gt;::value</code>. The shapes of the input tensors must be broadcast compatible, and their height and width dimensions must match.</p>
<dl class="tparams"><dt>Template Parameters</dt><dd>
  <table class="tparams">
    <tr><td class="paramname">T</td><td>The tensor type, which must meet the condition <code>is_valid_tensor_type&lt;T&gt;::value</code>. </td></tr>
  </table>
  </dd>
</dl>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">out</td><td>The output tensor that will hold the result of the subtraction. Memory flow: host-to-function (reference), function-to-host (modified). </td></tr>
    <tr><td class="paramname">lhs</td><td>The left-hand side tensor in the subtraction operation. Memory flow: host-to-function. </td></tr>
    <tr><td class="paramname">rhs</td><td>The right-hand side tensor in the subtraction operation. Memory flow: host-to-function.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>None</dd></dl>
<p><b>Memory Management Strategy</b>:</p><ul>
<li>The function does not allocate or free memory for the tensors themselves. It creates local <code>std::vector</code> objects (<code>offsetC</code>, <code>offsetA</code>, <code>offsetB</code>) to store offset values. These vectors are automatically managed by their destructors.</li>
</ul>
<p><b>Exception Handling Mechanism</b>:</p><ul>
<li>Throws <code>std::invalid_argument</code> if the shapes of <code>lhs</code> and <code>rhs</code> are not broadcast compatible or if their height and width dimensions do not match.</li>
</ul>
<p><b>Relationship with Other Components</b>:</p><ul>
<li>Depends on the <code>shape()</code> method of the tensor type <code>T</code> to obtain shape information, such as broadcast compatibility, height, width, batch size, channel count, and strides.</li>
<li>Relies on the <code>iMatrixSub</code> function to perform the actual matrix subtraction.</li>
</ul>
<dl class="exception"><dt>Exceptions</dt><dd>
  <table class="exception">
    <tr><td class="paramname">std::invalid_argument</td><td>When the shapes of <code>lhs</code> and <code>rhs</code> are not broadcast compatible or their height and width dimensions do not match.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(m * n), where m is the product of the batch and channel dimensions of the output tensor (<code>out.shape()[0] * out.shape()[1]</code>), and n is the number of elements in a single matrix (<code>lhs.shape().H() * lhs.shape().W()</code>).</li>
</ul>
</dd></dl>
<div class="fragment"><div class="line">```cpp</div>
<div class="line"><span class="comment">// Assume we have a valid tensor type Tensor</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> out;</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> lhs;</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> rhs;</div>
<div class="line"><span class="keywordflow">try</span> {</div>
<div class="line">    <a class="code hl_function" href="#a7503b6894e8052ed54eb169550d135c0">tensorMatrixSub</a>(out, lhs, rhs);</div>
<div class="line">} <span class="keywordflow">catch</span> (<span class="keyword">const</span> std::invalid_argument&amp; e) {</div>
<div class="line">    std::cerr &lt;&lt; e.what() &lt;&lt; std::endl;</div>
<div class="line">}</div>
<div class="line">```</div>
<div class="ttc" id="anamespacenz_1_1data_html_a7503b6894e8052ed54eb169550d135c0"><div class="ttname"><a href="#a7503b6894e8052ed54eb169550d135c0">nz::data::tensorMatrixSub</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, void &gt; tensorMatrixSub(T &amp;out, const T &amp;lhs, const T &amp;rhs)</div><div class="ttdoc">Performs matrix subtraction operation on tensors with broadcast compatibility.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l00858">TensorOperations.cuh:858</a></div></div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l00858">858</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
<a id="ac8d64dd271e9a2e50682e733bd14ec19" name="ac8d64dd271e9a2e50682e733bd14ec19"></a>
<h2 class="memtitle"><span class="permalink"><a href="#ac8d64dd271e9a2e50682e733bd14ec19">&#9670;&#160;</a></span>transpose()</h2>

<div class="memitem">
<div class="memproto">
<div class="memtemplate">
template&lt;typename T &gt; </div>
      <table class="memname">
        <tr>
          <td class="memname">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; nz::data::transpose </td>
          <td>(</td>
          <td class="paramtype">const T &amp;</td>          <td class="paramname"><span class="paramname"><em>in</em></span></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Transposes a tensor with a valid tensor type. </p>
<p>This template function transposes the input tensor <code>in</code> and returns a new tensor <code>result</code>. It is only enabled for types <code>T</code> that satisfy <code>is_valid_tensor_type&lt;T&gt;::value</code>.</p>
<dl class="tparams"><dt>Template Parameters</dt><dd>
  <table class="tparams">
    <tr><td class="paramname">T</td><td>The tensor type, which must satisfy <code>is_valid_tensor_type&lt;T&gt;::value</code>. </td></tr>
  </table>
  </dd>
</dl>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">in</td><td>The input tensor to be transposed. Memory flow: host - to - function.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>A new tensor <code>result</code> which is the transposed version of the input tensor <code>in</code>. Memory flow: function - to - host.</dd></dl>
<p><b>Memory Management Strategy</b>:</p><ul>
<li>A new tensor <code>result</code> is created inside the function to store the transposed data. The memory for this tensor is managed by the tensor type <code>T</code> itself.</li>
<li>The function creates a local <code>std::vector</code> object <code>offset</code> to store offset values. This vector is automatically managed by its destructor.</li>
</ul>
<p><b>Exception Handling Mechanism</b>:</p><ul>
<li>This function does not throw any exceptions explicitly. However, exceptions may be thrown by the constructor of the tensor type <code>T</code> or the <code>iTranspose</code> function.</li>
</ul>
<p><b>Relationship with Other Components</b>:</p><ul>
<li>Depends on the <code>shape()</code> method of the tensor type <code>T</code> to access shape information, including dimensions and strides.</li>
<li>Relies on the <code>iTranspose</code> function to perform the actual transpose operation.</li>
</ul>
<dl class="section note"><dt>Note</dt><dd><ul>
<li>The time complexity of this function is O(m * n), where m is the product of the first two dimensions of the input tensor (<code>in.shape()[0] * in.shape()[1]</code>), and n is the product of the last two dimensions (<code>in.shape()[2] * in.shape()[3]</code>).</li>
<li>Ensure that the <code>iTranspose</code> function is correctly implemented and that the tensor types support the necessary shape and data access methods.</li>
</ul>
</dd></dl>
<dl class="section warning"><dt>Warning</dt><dd><ul>
<li>Incorrect implementation of the <code>iTranspose</code> function may lead to incorrect results or runtime errors.</li>
</ul>
</dd></dl>
<div class="fragment"><div class="line">```cpp</div>
<div class="line"><span class="comment">// Assume we have a valid tensor type Tensor</span></div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> in;</div>
<div class="line"><a class="code hl_class" href="classnz_1_1data_1_1_tensor.html">Tensor</a> transposed = <a class="code hl_function" href="#ac8d64dd271e9a2e50682e733bd14ec19">transpose</a>(in);</div>
<div class="line">```</div>
<div class="ttc" id="anamespacenz_1_1data_html_ac8d64dd271e9a2e50682e733bd14ec19"><div class="ttname"><a href="#ac8d64dd271e9a2e50682e733bd14ec19">nz::data::transpose</a></div><div class="ttdeci">std::enable_if_t&lt; is_valid_tensor_type&lt; T &gt;::value, T &gt; transpose(const T &amp;in)</div><div class="ttdoc">Transposes a tensor with a valid tensor type.</div><div class="ttdef"><b>Definition</b> <a href="_tensor_operations_8cuh_source.html#l01073">TensorOperations.cuh:1073</a></div></div>
</div><!-- fragment --> 
<p class="definition">Definition at line <a class="el" href="_tensor_operations_8cuh_source.html#l01073">1073</a> of file <a class="el" href="_tensor_operations_8cuh_source.html">TensorOperations.cuh</a>.</p>

</div>
</div>
</div><!-- contents -->
<!-- start footer part -->
<hr class="footer"/><address class="footer"><small>
Generated by&#160;<a href="https://www.doxygen.org/index.html"><img class="footer" src="doxygen.svg" width="104" height="31" alt="doxygen"/></a> 1.12.0
</small></address>
</div><!-- doc-content -->
</body>
</html>
