<!-- HTML header for doxygen 1.9.1-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.9.1"/>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<title>Unity ML-Agents Toolkit: Agent Class Reference</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="resize.js"></script>
<script type="text/javascript" src="navtreedata.js"></script>
<script type="text/javascript" src="navtree.js"></script>
<link href="search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="search/searchdata.js"></script>
<script type="text/javascript" src="search/search.js"></script>
<link href="doxygenbase.css" rel="stylesheet" type="text/css" />
<link href="unity.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
 <tbody>
 <tr style="height: 56px;">
  <td id="projectlogo"><img alt="Logo" src="logo.png"/></td>
  <td id="projectalign" style="padding-left: 0.5em;">
   <div id="projectname">Unity ML-Agents Toolkit
   </div>
  </td>
 </tr>
 </tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.9.1 -->
<script type="text/javascript">
/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
var searchBox = new SearchBox("searchBox", "search",false,'Search','.html');
/* @license-end */
</script>
<script type="text/javascript" src="menudata.js"></script>
<script type="text/javascript" src="menu.js"></script>
<script type="text/javascript">
/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
$(function() {
  initMenu('',true,false,'search.php','Search');
  $(document).ready(function() { init_search(); });
});
/* @license-end */</script>
<div id="main-nav"></div>
</div><!-- top -->
<div id="side-nav" class="ui-resizable side-nav-resizable">
  <div id="nav-tree">
    <div id="nav-tree-contents">
      <div id="nav-sync" class="sync"></div>
    </div>
  </div>
  <div id="splitbar" style="-moz-user-select:none;" 
       class="ui-resizable-handle">
  </div>
</div>
<script type="text/javascript">
/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
$(document).ready(function(){initNavTree('classUnity_1_1MLAgents_1_1Agent.html',''); initResizable(); });
/* @license-end */
</script>
<div id="doc-content">
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
     onmouseover="return searchBox.OnSearchSelectShow()"
     onmouseout="return searchBox.OnSearchSelectHide()"
     onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>

<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0" 
        name="MSearchResults" id="MSearchResults">
</iframe>
</div>

<div class="header">
  <div class="summary">
<a href="#nested-classes">Classes</a> &#124;
<a href="#pub-methods">Public Member Functions</a> &#124;
<a href="#pub-attribs">Public Attributes</a> &#124;
<a href="#pro-methods">Protected Member Functions</a> &#124;
<a href="#pro-static-methods">Static Protected Member Functions</a> &#124;
<a href="#properties">Properties</a> &#124;
<a href="classUnity_1_1MLAgents_1_1Agent-members.html">List of all members</a>  </div>
  <div class="headertitle">
<div class="title">Agent Class Reference</div>  </div>
</div><!--header-->
<div class="contents">

<p>An agent is an actor that can observe its environment, decide on the best course of action using those observations, and execute those actions within the environment.  
 <a href="classUnity_1_1MLAgents_1_1Agent.html#details">More...</a></p>

<p>Inherits MonoBehaviour, and ISerializationCallbackReceiver.</p>
<table class="memberdecls">
<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="nested-classes"></a>
Classes</h2></td></tr>
<tr class="memitem:structUnity_1_1MLAgents_1_1Agent_1_1AgentParameters"><td class="memItemLeft" align="right" valign="top">struct &#160;</td><td class="memItemRight" valign="bottom"><b>AgentParameters</b></td></tr>
<tr class="memdesc:structUnity_1_1MLAgents_1_1Agent_1_1AgentParameters"><td class="mdescLeft">&#160;</td><td class="mdescRight">This struct exists to ease the upgrade path for users of MaxStep. <br /></td></tr>
<tr class="separator:structUnity_1_1MLAgents_1_1Agent_1_1AgentParameters"><td class="memSeparator" colspan="2">&#160;</td></tr>
</table><table class="memberdecls">
<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="pub-methods"></a>
Public Member Functions</h2></td></tr>
<tr class="memitem:a72c4055ca88e935619de54f1aeb8f5f2"><td class="memItemLeft" align="right" valign="top">void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a72c4055ca88e935619de54f1aeb8f5f2">OnBeforeSerialize</a> ()</td></tr>
<tr class="memdesc:a72c4055ca88e935619de54f1aeb8f5f2"><td class="mdescLeft">&#160;</td><td class="mdescRight">Called by <a class="el" href="namespaceUnity.html">Unity</a> immediately before serializing this object.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a72c4055ca88e935619de54f1aeb8f5f2">More...</a><br /></td></tr>
<tr class="separator:a72c4055ca88e935619de54f1aeb8f5f2"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:af97f96776a06243e316547fce49d877a"><td class="memItemLeft" align="right" valign="top">void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#af97f96776a06243e316547fce49d877a">OnAfterDeserialize</a> ()</td></tr>
<tr class="memdesc:af97f96776a06243e316547fce49d877a"><td class="mdescLeft">&#160;</td><td class="mdescRight">Called by <a class="el" href="namespaceUnity.html">Unity</a> immediately after deserializing this object.  <a href="classUnity_1_1MLAgents_1_1Agent.html#af97f96776a06243e316547fce49d877a">More...</a><br /></td></tr>
<tr class="separator:af97f96776a06243e316547fce49d877a"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a387e6e9b5a9171dcae90c9037fe64dc5"><td class="memItemLeft" align="right" valign="top">void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a387e6e9b5a9171dcae90c9037fe64dc5">LazyInitialize</a> ()</td></tr>
<tr class="memdesc:a387e6e9b5a9171dcae90c9037fe64dc5"><td class="mdescLeft">&#160;</td><td class="mdescRight">Initializes the agent.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a387e6e9b5a9171dcae90c9037fe64dc5">More...</a><br /></td></tr>
<tr class="separator:a387e6e9b5a9171dcae90c9037fe64dc5"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:aba7983dfccddded21f1fbd4354bbf810"><td class="memItemLeft" align="right" valign="top">void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aba7983dfccddded21f1fbd4354bbf810">SetModel</a> (string behaviorName, NNModel model, <a class="el" href="namespaceUnity_1_1MLAgents_1_1Policies.html#a8ab452527c9b0df29c341a5a3a7cdaa6">InferenceDevice</a> inferenceDevice=InferenceDevice.CPU)</td></tr>
<tr class="memdesc:aba7983dfccddded21f1fbd4354bbf810"><td class="mdescLeft">&#160;</td><td class="mdescRight">Updates the Model assigned to this <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> instance.  <a href="classUnity_1_1MLAgents_1_1Agent.html#aba7983dfccddded21f1fbd4354bbf810">More...</a><br /></td></tr>
<tr class="separator:aba7983dfccddded21f1fbd4354bbf810"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:ad60a2a2684b0d8970230ab579e52e445"><td class="memItemLeft" align="right" valign="top">void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ad60a2a2684b0d8970230ab579e52e445">SetReward</a> (float reward)</td></tr>
<tr class="memdesc:ad60a2a2684b0d8970230ab579e52e445"><td class="mdescLeft">&#160;</td><td class="mdescRight">Overrides the current step reward of the agent and updates the episode reward accordingly.  <a href="classUnity_1_1MLAgents_1_1Agent.html#ad60a2a2684b0d8970230ab579e52e445">More...</a><br /></td></tr>
<tr class="separator:ad60a2a2684b0d8970230ab579e52e445"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:af54b9da1f764b0be8cafc581d8f9bc5f"><td class="memItemLeft" align="right" valign="top">void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#af54b9da1f764b0be8cafc581d8f9bc5f">AddReward</a> (float increment)</td></tr>
<tr class="memdesc:af54b9da1f764b0be8cafc581d8f9bc5f"><td class="mdescLeft">&#160;</td><td class="mdescRight">Increments the step and episode rewards by the provided value.  <a href="classUnity_1_1MLAgents_1_1Agent.html#af54b9da1f764b0be8cafc581d8f9bc5f">More...</a><br /></td></tr>
<tr class="separator:af54b9da1f764b0be8cafc581d8f9bc5f"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a7dbd50c5e347a1fe0c8f2a63ccc1ebb5"><td class="memItemLeft" align="right" valign="top">float&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a7dbd50c5e347a1fe0c8f2a63ccc1ebb5">GetCumulativeReward</a> ()</td></tr>
<tr class="memdesc:a7dbd50c5e347a1fe0c8f2a63ccc1ebb5"><td class="mdescLeft">&#160;</td><td class="mdescRight">Retrieves the episode reward for the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a>.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a7dbd50c5e347a1fe0c8f2a63ccc1ebb5">More...</a><br /></td></tr>
<tr class="separator:a7dbd50c5e347a1fe0c8f2a63ccc1ebb5"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a103f88d983fb59506131259d1793e14b"><td class="memItemLeft" align="right" valign="top">void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a103f88d983fb59506131259d1793e14b">EndEpisode</a> ()</td></tr>
<tr class="memdesc:a103f88d983fb59506131259d1793e14b"><td class="mdescLeft">&#160;</td><td class="mdescRight">Sets the done flag to true and resets the agent.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a103f88d983fb59506131259d1793e14b">More...</a><br /></td></tr>
<tr class="separator:a103f88d983fb59506131259d1793e14b"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:ae9f3c050a74cf26e2cd74e82deaa9a5d"><td class="memItemLeft" align="right" valign="top">void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ae9f3c050a74cf26e2cd74e82deaa9a5d">RequestDecision</a> ()</td></tr>
<tr class="memdesc:ae9f3c050a74cf26e2cd74e82deaa9a5d"><td class="mdescLeft">&#160;</td><td class="mdescRight">Requests a new decision for this agent.  <a href="classUnity_1_1MLAgents_1_1Agent.html#ae9f3c050a74cf26e2cd74e82deaa9a5d">More...</a><br /></td></tr>
<tr class="separator:ae9f3c050a74cf26e2cd74e82deaa9a5d"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a737d24da9fbe954cd4e41983ddee208a"><td class="memItemLeft" align="right" valign="top">void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a737d24da9fbe954cd4e41983ddee208a">RequestAction</a> ()</td></tr>
<tr class="memdesc:a737d24da9fbe954cd4e41983ddee208a"><td class="mdescLeft">&#160;</td><td class="mdescRight">Requests an action for this agent.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a737d24da9fbe954cd4e41983ddee208a">More...</a><br /></td></tr>
<tr class="separator:a737d24da9fbe954cd4e41983ddee208a"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:ab32ab0cc76b4eaa37077e89ee9e3cdc1"><td class="memItemLeft" align="right" valign="top">virtual void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ab32ab0cc76b4eaa37077e89ee9e3cdc1">Initialize</a> ()</td></tr>
<tr class="memdesc:ab32ab0cc76b4eaa37077e89ee9e3cdc1"><td class="mdescLeft">&#160;</td><td class="mdescRight">Implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ab32ab0cc76b4eaa37077e89ee9e3cdc1" title="Implement Initialize() to perform one-time initialization or set up of the Agent instance.">Initialize()</a></code> to perform one-time initialization or set up of the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> instance.  <a href="classUnity_1_1MLAgents_1_1Agent.html#ab32ab0cc76b4eaa37077e89ee9e3cdc1">More...</a><br /></td></tr>
<tr class="separator:ab32ab0cc76b4eaa37077e89ee9e3cdc1"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:aa0e01de85276d0b7a79299d9d26b81dc"><td class="memItemLeft" align="right" valign="top">virtual void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc">Heuristic</a> (float[] actionsOut)</td></tr>
<tr class="memdesc:aa0e01de85276d0b7a79299d9d26b81dc"><td class="mdescLeft">&#160;</td><td class="mdescRight">Implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc" title="Implement Heuristic() to choose an action for this agent using a custom heuristic.">Heuristic()</a></code> to choose an action for this agent using a custom heuristic.  <a href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc">More...</a><br /></td></tr>
<tr class="separator:aa0e01de85276d0b7a79299d9d26b81dc"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a1c3586d1b95c619db2a9772f6818d44a"><td class="memItemLeft" align="right" valign="top">virtual void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a1c3586d1b95c619db2a9772f6818d44a">CollectObservations</a> (<a class="el" href="classUnity_1_1MLAgents_1_1Sensors_1_1VectorSensor.html">VectorSensor</a> sensor)</td></tr>
<tr class="memdesc:a1c3586d1b95c619db2a9772f6818d44a"><td class="mdescLeft">&#160;</td><td class="mdescRight">Implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a1c3586d1b95c619db2a9772f6818d44a" title="Implement CollectObservations() to collect the vector observations of the agent for the step.">CollectObservations()</a></code> to collect the vector observations of the agent for the step.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a1c3586d1b95c619db2a9772f6818d44a">More...</a><br /></td></tr>
<tr class="separator:a1c3586d1b95c619db2a9772f6818d44a"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a37ca6696f9570fedbcc5e00d884c741a"><td class="memItemLeft" align="right" valign="top">ReadOnlyCollection&lt; float &gt;&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a37ca6696f9570fedbcc5e00d884c741a">GetObservations</a> ()</td></tr>
<tr class="memdesc:a37ca6696f9570fedbcc5e00d884c741a"><td class="mdescLeft">&#160;</td><td class="mdescRight">Returns a read-only view of the observations that were generated in <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a1c3586d1b95c619db2a9772f6818d44a" title="Implement CollectObservations() to collect the vector observations of the agent for the step.">CollectObservations(VectorSensor)</a>.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a37ca6696f9570fedbcc5e00d884c741a">More...</a><br /></td></tr>
<tr class="separator:a37ca6696f9570fedbcc5e00d884c741a"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a6f6491addb1c6942cc905f8d3df1b39a"><td class="memItemLeft" align="right" valign="top">virtual void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a6f6491addb1c6942cc905f8d3df1b39a">CollectDiscreteActionMasks</a> (<a class="el" href="classUnity_1_1MLAgents_1_1DiscreteActionMasker.html">DiscreteActionMasker</a> actionMasker)</td></tr>
<tr class="memdesc:a6f6491addb1c6942cc905f8d3df1b39a"><td class="mdescLeft">&#160;</td><td class="mdescRight">Implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a6f6491addb1c6942cc905f8d3df1b39a" title="Implement CollectDiscreteActionMasks() to collect the masks for discrete actions.">CollectDiscreteActionMasks()</a></code> to collect the masks for discrete actions.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a6f6491addb1c6942cc905f8d3df1b39a">More...</a><br /></td></tr>
<tr class="separator:a6f6491addb1c6942cc905f8d3df1b39a"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a01dce485e4f324afd12b98a068f0d242"><td class="memItemLeft" align="right" valign="top">virtual void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242">OnActionReceived</a> (float[] vectorAction)</td></tr>
<tr class="memdesc:a01dce485e4f324afd12b98a068f0d242"><td class="mdescLeft">&#160;</td><td class="mdescRight">Implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242" title="Implement OnActionReceived() to specify agent behavior at every step, based on the provided action.">OnActionReceived()</a></code> to specify agent behavior at every step, based on the provided action.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242">More...</a><br /></td></tr>
<tr class="separator:a01dce485e4f324afd12b98a068f0d242"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a9bec2b5db75991122d64e0074ba9b3c3"><td class="memItemLeft" align="right" valign="top">virtual void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a9bec2b5db75991122d64e0074ba9b3c3">OnEpisodeBegin</a> ()</td></tr>
<tr class="memdesc:a9bec2b5db75991122d64e0074ba9b3c3"><td class="mdescLeft">&#160;</td><td class="mdescRight">Implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a9bec2b5db75991122d64e0074ba9b3c3" title="Implement OnEpisodeBegin() to set up an Agent instance at the beginning of an episode.">OnEpisodeBegin()</a></code> to set up an <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> instance at the beginning of an episode.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a9bec2b5db75991122d64e0074ba9b3c3">More...</a><br /></td></tr>
<tr class="separator:a9bec2b5db75991122d64e0074ba9b3c3"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a8a7b63bf4d0526d8fb6becfd8869b96a"><td class="memItemLeft" align="right" valign="top">float[]&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a8a7b63bf4d0526d8fb6becfd8869b96a">GetAction</a> ()</td></tr>
<tr class="memdesc:a8a7b63bf4d0526d8fb6becfd8869b96a"><td class="mdescLeft">&#160;</td><td class="mdescRight">Returns the last action that was decided on by the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a>.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a8a7b63bf4d0526d8fb6becfd8869b96a">More...</a><br /></td></tr>
<tr class="separator:a8a7b63bf4d0526d8fb6becfd8869b96a"><td class="memSeparator" colspan="2">&#160;</td></tr>
</table><table class="memberdecls">
<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="pub-attribs"></a>
Public Attributes</h2></td></tr>
<tr class="memitem:a148c8b1a46774354f6bf4d597d939a57"><td class="memItemLeft" align="right" valign="top">int&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a148c8b1a46774354f6bf4d597d939a57">MaxStep</a></td></tr>
<tr class="memdesc:a148c8b1a46774354f6bf4d597d939a57"><td class="mdescLeft">&#160;</td><td class="mdescRight">The maximum number of steps the agent takes before being done.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a148c8b1a46774354f6bf4d597d939a57">More...</a><br /></td></tr>
<tr class="separator:a148c8b1a46774354f6bf4d597d939a57"><td class="memSeparator" colspan="2">&#160;</td></tr>
</table><table class="memberdecls">
<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="pro-methods"></a>
Protected Member Functions</h2></td></tr>
<tr class="memitem:a34316462014f78aba29c389590f6b104"><td class="memItemLeft" align="right" valign="top">virtual void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a34316462014f78aba29c389590f6b104">OnEnable</a> ()</td></tr>
<tr class="memdesc:a34316462014f78aba29c389590f6b104"><td class="mdescLeft">&#160;</td><td class="mdescRight">Called when the attached <a href="https://docs.unity3d.com/Manual/GameObjects.html">GameObject</a> becomes enabled and active.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a34316462014f78aba29c389590f6b104">More...</a><br /></td></tr>
<tr class="separator:a34316462014f78aba29c389590f6b104"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a1aac1c9a4ae04ef3e2fbf26b0aa570cc"><td class="memItemLeft" align="right" valign="top">virtual void&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a1aac1c9a4ae04ef3e2fbf26b0aa570cc">OnDisable</a> ()</td></tr>
<tr class="memdesc:a1aac1c9a4ae04ef3e2fbf26b0aa570cc"><td class="mdescLeft">&#160;</td><td class="mdescRight">Called when the attached <a href="https://docs.unity3d.com/Manual/GameObjects.html">GameObject</a> becomes disabled and inactive.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a1aac1c9a4ae04ef3e2fbf26b0aa570cc">More...</a><br /></td></tr>
<tr class="separator:a1aac1c9a4ae04ef3e2fbf26b0aa570cc"><td class="memSeparator" colspan="2">&#160;</td></tr>
</table><table class="memberdecls">
<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="pro-static-methods"></a>
Static Protected Member Functions</h2></td></tr>
<tr class="memitem:a40c4a915bf5afaf307f741e3f9c2c423"><td class="memItemLeft" align="right" valign="top">static float&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a40c4a915bf5afaf307f741e3f9c2c423">ScaleAction</a> (float rawAction, float min, float max)</td></tr>
<tr class="memdesc:a40c4a915bf5afaf307f741e3f9c2c423"><td class="mdescLeft">&#160;</td><td class="mdescRight">Scales continuous action from [-1, 1] to arbitrary range.  <a href="classUnity_1_1MLAgents_1_1Agent.html#a40c4a915bf5afaf307f741e3f9c2c423">More...</a><br /></td></tr>
<tr class="separator:a40c4a915bf5afaf307f741e3f9c2c423"><td class="memSeparator" colspan="2">&#160;</td></tr>
</table><table class="memberdecls">
<tr class="heading"><td colspan="2"><h2 class="groupheader"><a name="properties"></a>
Properties</h2></td></tr>
<tr class="memitem:a8e8dc6caaae14be76726dc80325b9e53"><td class="memItemLeft" align="right" valign="top">int&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a8e8dc6caaae14be76726dc80325b9e53">StepCount</a><code> [get]</code></td></tr>
<tr class="memdesc:a8e8dc6caaae14be76726dc80325b9e53"><td class="mdescLeft">&#160;</td><td class="mdescRight">Returns the current step counter (within the current episode).  <a href="classUnity_1_1MLAgents_1_1Agent.html#a8e8dc6caaae14be76726dc80325b9e53">More...</a><br /></td></tr>
<tr class="separator:a8e8dc6caaae14be76726dc80325b9e53"><td class="memSeparator" colspan="2">&#160;</td></tr>
<tr class="memitem:a3c3bd9d2c311f3b630b908053d2419d9"><td class="memItemLeft" align="right" valign="top">int&#160;</td><td class="memItemRight" valign="bottom"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a3c3bd9d2c311f3b630b908053d2419d9">CompletedEpisodes</a><code> [get]</code></td></tr>
<tr class="memdesc:a3c3bd9d2c311f3b630b908053d2419d9"><td class="mdescLeft">&#160;</td><td class="mdescRight">Returns the number of episodes that the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> has completed (either <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a103f88d983fb59506131259d1793e14b" title="Sets the done flag to true and resets the agent.">Agent.EndEpisode()</a> was called, or <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a148c8b1a46774354f6bf4d597d939a57" title="The maximum number of steps the agent takes before being done.">MaxStep</a> was reached).  <a href="classUnity_1_1MLAgents_1_1Agent.html#a3c3bd9d2c311f3b630b908053d2419d9">More...</a><br /></td></tr>
<tr class="separator:a3c3bd9d2c311f3b630b908053d2419d9"><td class="memSeparator" colspan="2">&#160;</td></tr>
</table>
<a name="details" id="details"></a><h2 class="groupheader">Detailed Description</h2>
<div class="textblock"><p>An agent is an actor that can observe its environment, decide on the best course of action using those observations, and execute those actions within the environment. </p>
<p>Use the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> class as the base class for implementing your own agents. Add your <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> implementation to a <a href="https://docs.unity3d.com/Manual/GameObjects.html">GameObject</a> in the <a href="https://docs.unity3d.com/Manual/CreatingScenes.html">Unity scene</a> that serves as the agent's environment.</p>
<p>Agents in an environment operate in <em>steps</em>. At each step, an agent collects observations, passes them to its decision-making policy, and receives an action vector in response.</p>
<p>Agents make observations using ISensor implementations. The ML-Agents API provides implementations for visual observations (CameraSensor), raycast observations (RayPerceptionSensor), and arbitrary data observations (VectorSensor). You can add the CameraSensorComponent and RayPerceptionSensorComponent2D or RayPerceptionSensorComponent3D components to an agent's <a href="https://docs.unity3d.com/Manual/GameObjects.html">GameObject</a> to use those sensor types. You can implement the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a1c3586d1b95c619db2a9772f6818d44a" title="Implement CollectObservations() to collect the vector observations of the agent for the step.">CollectObservations(VectorSensor)</a> function in your <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> subclass to use a vector observation. The <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> class calls this function before it uses the observation vector to make a decision. (If you only use visual or raycast observations, you do not need to implement <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a1c3586d1b95c619db2a9772f6818d44a" title="Implement CollectObservations() to collect the vector observations of the agent for the step.">CollectObservations</a>.)</p>
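<p>As an illustrative sketch, a minimal vector observation could be collected like this (the <code>m_Rigidbody</code> field is a hypothetical reference, assumed to be assigned during initialization):</p>
<pre class="fragment">
public override void CollectObservations(VectorSensor sensor)
{
    // Observe the agent's own position and velocity (6 floats total).
    sensor.AddObservation(transform.localPosition);
    sensor.AddObservation(m_Rigidbody.velocity);
}
</pre>
<p>Each call to <code>AddObservation</code> appends values to the observation vector for the current step; the total number of values added must match the vector observation size configured in BehaviorParameters.</p>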
<p>Assign a decision-making policy to an agent using a BehaviorParameters component attached to the agent's <a href="https://docs.unity3d.com/Manual/GameObjects.html">GameObject</a>. The BehaviorType setting determines how decisions are made:</p>
<ul>
<li>BehaviorType.Default: decisions are made by the external process when connected. Otherwise, decisions are made using inference. If no inference model is specified in the BehaviorParameters component, then heuristic decision-making is used.</li>
<li>BehaviorType.InferenceOnly: decisions are always made using the trained model specified in the BehaviorParameters component.</li>
<li>BehaviorType.HeuristicOnly: when a decision is needed, the agent's <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc" title="Implement Heuristic() to choose an action for this agent using a custom heuristic.">Heuristic</a> function is called. Your implementation is responsible for providing the appropriate action.</li>
</ul>
<p>To trigger an agent decision automatically, you can attach a <a class="el" href="classUnity_1_1MLAgents_1_1DecisionRequester.html" title="The DecisionRequester component automatically request decisions for an Agent instance at regular inte...">DecisionRequester</a> component to the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> game object. You can also call the agent's <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ae9f3c050a74cf26e2cd74e82deaa9a5d" title="Requests a new decision for this agent.">RequestDecision</a> function manually. You only need to call <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ae9f3c050a74cf26e2cd74e82deaa9a5d" title="Requests a new decision for this agent.">RequestDecision</a> when the agent is in a position to act upon the decision. In many cases, this will be every <a href="https://docs.unity3d.com/ScriptReference/MonoBehaviour.FixedUpdate.html">FixedUpdate</a> callback, but could be less frequent. For example, an agent that hops around its environment can only take an action when it touches the ground, so several frames might elapse between one decision and the need for the next.</p>
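<p>The hopping example above could be sketched as follows, assuming a hypothetical <code>m_IsGrounded</code> flag maintained by the agent's collision callbacks:</p>
<pre class="fragment">
void FixedUpdate()
{
    // Only request a decision when the agent is in a position to act on it.
    if (m_IsGrounded)
    {
        RequestDecision();
    }
}
</pre>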
<p>Use the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242" title="Implement OnActionReceived() to specify agent behavior at every step, based on the provided action.">OnActionReceived</a> function to implement the actions your agent can take, such as moving to reach a goal or interacting with its environment.</p>
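<p>A minimal <code>OnActionReceived</code> sketch for a continuous control task with two actions (the <code>m_Rigidbody</code> and <code>m_MoveSpeed</code> fields are illustrative assumptions):</p>
<pre class="fragment">
public override void OnActionReceived(float[] vectorAction)
{
    // Interpret the two continuous actions as forces along x and z.
    var force = new Vector3(vectorAction[0], 0f, vectorAction[1]);
    m_Rigidbody.AddForce(force * m_MoveSpeed);
    AddReward(-0.001f); // small per-step penalty to encourage efficiency
}
</pre>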
<p>When you call <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a103f88d983fb59506131259d1793e14b" title="Sets the done flag to true and resets the agent.">EndEpisode</a> on an agent or the agent reaches its <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a148c8b1a46774354f6bf4d597d939a57" title="The maximum number of steps the agent takes before being done.">MaxStep</a> count, its current episode ends. You can reset the agent &ndash; or remove it from the environment &ndash; by implementing the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a9bec2b5db75991122d64e0074ba9b3c3" title="Implement OnEpisodeBegin() to set up an Agent instance at the beginning of an episode.">OnEpisodeBegin</a> function. An agent also becomes done when the <a class="el" href="classUnity_1_1MLAgents_1_1Academy.html" title="The Academy singleton manages agent training and decision making.">Academy</a> resets the environment, which only happens when the <a class="el" href="classUnity_1_1MLAgents_1_1Academy.html" title="The Academy singleton manages agent training and decision making.">Academy</a> receives a reset signal from an external process via the Academy.Communicator.</p>
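<p>The episode lifecycle might be sketched like this, assuming a hypothetical <code>m_StartPosition</code> field and a goal object tagged <code>"goal"</code>:</p>
<pre class="fragment">
public override void OnEpisodeBegin()
{
    // Reset agent state at the start of each episode.
    transform.localPosition = m_StartPosition;
    m_Rigidbody.velocity = Vector3.zero;
}

void OnTriggerEnter(Collider other)
{
    if (other.CompareTag("goal"))
    {
        SetReward(1f);   // replace the step reward with the terminal reward
        EndEpisode();    // mark the agent done; OnEpisodeBegin() runs next
    }
}
</pre>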
<p>The <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> class extends the <a class="el" href="namespaceUnity.html">Unity</a> <a href="https://docs.unity3d.com/ScriptReference/MonoBehaviour.html">MonoBehaviour</a> class. You can implement the standard <a href="https://docs.unity3d.com/ScriptReference/MonoBehaviour.html">MonoBehaviour</a> functions as needed for your agent. Since an agent's observations and actions typically take place during the <a href="https://docs.unity3d.com/ScriptReference/MonoBehaviour.FixedUpdate.html">FixedUpdate</a> phase, you should only use the <a href="https://docs.unity3d.com/ScriptReference/MonoBehaviour.Update.html">MonoBehaviour.Update</a> function for cosmetic purposes. If you override the <a href="https://docs.unity3d.com/ScriptReference/MonoBehaviour.html">MonoBehaviour</a> methods <a href="https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnEnable.html">OnEnable()</a> or <a href="https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnDisable.html">OnDisable()</a>, always call the base <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> class implementations.</p>
<p>You can implement the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc" title="Implement Heuristic() to choose an action for this agent using a custom heuristic.">Heuristic</a> function to specify agent actions using your own heuristic algorithm. Implementing a heuristic function can be useful for debugging. For example, you can use keyboard input to select agent actions in order to manually control an agent's behavior.</p>
<p>Note that you can change the inference model assigned to an agent at any step by calling <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aba7983dfccddded21f1fbd4354bbf810" title="Updates the Model assigned to this Agent instance.">SetModel</a>.</p>
<p>See <a href="https://github.com/Unity-Technologies/ml-agents/blob/release_6_docs/docs/Learning-Environment-Design-Agents.md">Agents</a> and <a href="https://github.com/Unity-Technologies/ml-agents/blob/release_6_docs/docs/Learning-Environment-Design.md">Reinforcement Learning in Unity</a> in the <a href="https://github.com/Unity-Technologies/ml-agents/blob/release_6_docs/docs/Readme.md">Unity ML-Agents Toolkit manual</a> for more information on creating and training agents.</p>
<p>For sample implementations of agent behavior, see the examples available in the <a href="https://github.com/Unity-Technologies/ml-agents">Unity ML-Agents Toolkit</a> on Github.</p>
</div><h2 class="groupheader">Member Function Documentation</h2>
<a id="af54b9da1f764b0be8cafc581d8f9bc5f"></a>
<h2 class="memtitle"><span class="permalink"><a href="#af54b9da1f764b0be8cafc581d8f9bc5f">&#9670;&nbsp;</a></span>AddReward()</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">void AddReward </td>
          <td>(</td>
          <td class="paramtype">float&#160;</td>
          <td class="paramname"><em>increment</em></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Increments the step and episode rewards by the provided value. </p>
<p>Use a positive reward to reinforce desired behavior and a negative reward to penalize mistakes. Use <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ad60a2a2684b0d8970230ab579e52e445" title="Overrides the current step reward of the agent and updates the episode reward accordingly.">SetReward(float)</a> to assign a specific value to the current step's reward rather than increasing or decreasing it.</p>
<p>Typically, you assign rewards in the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> subclass's <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242" title="Implement OnActionReceived() to specify agent behavior at every step, based on the provided action.">OnActionReceived(float[])</a> implementation after carrying out the received action and evaluating its success.</p>
<p>Rewards are used during reinforcement learning; they are ignored during inference.</p>
<p>See <a href="https://github.com/Unity-Technologies/ml-agents/blob/release_6_docs/docs/Learning-Environment-Design-Agents.md#rewards">Agents - Rewards</a> for general advice on implementing rewards and <a href="https://github.com/Unity-Technologies/ml-agents/blob/release_6_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals">Reward Signals</a> for information about mixing reward signals from curiosity and Generative Adversarial Imitation Learning (GAIL) with rewards supplied through this method.</p>
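<p>A minimal sketch of assigning rewards from <code>OnActionReceived()</code>; <code>MoveAgent()</code> and <code>ReachedGoal()</code> are hypothetical helpers:</p>
<pre class="fragment">public override void OnActionReceived(float[] vectorAction)
{
    MoveAgent(vectorAction);   // hypothetical: act on the received action
    AddReward(-0.001f);        // small per-step penalty encourages reaching the goal quickly
    if (ReachedGoal())         // hypothetical success check
    {
        AddReward(1.0f);
        EndEpisode();
    }
}
</pre>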
<p>/remarks&gt; </p><dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">increment</td><td>Incremental reward value.</td></tr>
  </table>
  </dd>
</dl>

</div>
</div>
<a id="a6f6491addb1c6942cc905f8d3df1b39a"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a6f6491addb1c6942cc905f8d3df1b39a">&#9670;&nbsp;</a></span>CollectDiscreteActionMasks()</h2>

<div class="memitem">
<div class="memproto">
<table class="mlabels">
  <tr>
  <td class="mlabels-left">
      <table class="memname">
        <tr>
          <td class="memname">virtual void CollectDiscreteActionMasks </td>
          <td>(</td>
          <td class="paramtype"><a class="el" href="classUnity_1_1MLAgents_1_1DiscreteActionMasker.html">DiscreteActionMasker</a>&#160;</td>
          <td class="paramname"><em>actionMasker</em></td><td>)</td>
          <td></td>
        </tr>
      </table>
  </td>
  <td class="mlabels-right">
<span class="mlabels"><span class="mlabel">virtual</span></span>  </td>
  </tr>
</table>
</div><div class="memdoc">

<p>Implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a6f6491addb1c6942cc905f8d3df1b39a" title="Implement CollectDiscreteActionMasks() to collect the masks for discrete actions.">CollectDiscreteActionMasks()</a></code> to collect the masks for discrete actions. </p>
<p>When using discrete actions, the agent will not perform the masked action.</p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">actionMasker</td><td>The action masker for the agent. </td></tr>
  </table>
  </dd>
</dl>
<p>When using Discrete Control, you can prevent the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> from using a certain action by masking it with <a class="el" href="classUnity_1_1MLAgents_1_1DiscreteActionMasker.html#aad0493d9a8e6054f50e51bbb0b0b3ae6" title="Modifies an action mask for discrete control agents.">DiscreteActionMasker.SetMask(int, IEnumerable&lt;int&gt;)</a>.</p>
<p>See <a href="https://github.com/Unity-Technologies/ml-agents/blob/release_6_docs/docs/Learning-Environment-Design-Agents.md#actions">Agents - Actions</a> for more information on masking actions.</p>
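<p>A minimal sketch, assuming a single movement branch in which action 1 means "move left" and a hypothetical <code>m_AgainstLeftWall</code> flag:</p>
<pre class="fragment">public override void CollectDiscreteActionMasks(DiscreteActionMasker actionMasker)
{
    // Prevent the agent from choosing "move left" (action 1 in branch 0)
    // when it is already against the left wall.
    if (m_AgainstLeftWall)
    {
        actionMasker.SetMask(0, new int[] { 1 });
    }
}
</pre>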
<dl class="section see"><dt>See also</dt><dd><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242" title="Implement OnActionReceived() to specify agent behavior at every step, based on the provided action.">OnActionReceived(float[])</a></dd></dl>

</div>
</div>
<a id="a1c3586d1b95c619db2a9772f6818d44a"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a1c3586d1b95c619db2a9772f6818d44a">&#9670;&nbsp;</a></span>CollectObservations()</h2>

<div class="memitem">
<div class="memproto">
<table class="mlabels">
  <tr>
  <td class="mlabels-left">
      <table class="memname">
        <tr>
          <td class="memname">virtual void CollectObservations </td>
          <td>(</td>
          <td class="paramtype"><a class="el" href="classUnity_1_1MLAgents_1_1Sensors_1_1VectorSensor.html">VectorSensor</a>&#160;</td>
          <td class="paramname"><em>sensor</em></td><td>)</td>
          <td></td>
        </tr>
      </table>
  </td>
  <td class="mlabels-right">
<span class="mlabels"><span class="mlabel">virtual</span></span>  </td>
  </tr>
</table>
</div><div class="memdoc">

<p>Implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a1c3586d1b95c619db2a9772f6818d44a" title="Implement CollectObservations() to collect the vector observations of the agent for the step.">CollectObservations()</a></code> to collect the vector observations of the agent for the step. </p>
<p>The agent observation describes the current environment from the perspective of the agent.</p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">sensor</td><td>The vector observations for the agent. </td></tr>
  </table>
  </dd>
</dl>
<p>An agent's observation is any environment information that helps the agent achieve its goal. For example, for a fighting agent, its observation could include distances to friends or enemies, or the current level of ammunition at its disposal.</p>
<p>You can use a combination of vector, visual, and raycast observations for an agent. If you only use visual or raycast observations, you do not need to implement a <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a1c3586d1b95c619db2a9772f6818d44a" title="Implement CollectObservations() to collect the vector observations of the agent for the step.">CollectObservations()</a></code> function.</p>
<p>Add vector observations to the <em>sensor</em> parameter passed to this method by calling the <a class="el" href="classUnity_1_1MLAgents_1_1Sensors_1_1VectorSensor.html">VectorSensor</a> helper methods:</p><ul>
<li>VectorSensor.AddObservation(int)</li>
<li>VectorSensor.AddObservation(float)</li>
<li>VectorSensor.AddObservation(Vector3)</li>
<li>VectorSensor.AddObservation(Vector2)</li>
<li>VectorSensor.AddObservation(Quaternion)</li>
<li>VectorSensor.AddObservation(bool)</li>
<li>VectorSensor.AddObservation(IEnumerable&lt;float&gt;)</li>
<li>VectorSensor.AddOneHotObservation(int, int)</li>
</ul>
<p>You can use any combination of these helper functions to build the agent's vector of observations. You must build the vector in the same order each time <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a1c3586d1b95c619db2a9772f6818d44a" title="Implement CollectObservations() to collect the vector observations of the agent for the step.">CollectObservations()</a></code> is called and the length of the vector must always be the same. In addition, the length of the observation must match the BrainParameters.VectorObservationSize attribute of the linked Brain, which is set in the Editor on the <b>Behavior Parameters</b> component attached to the agent's <a href="https://docs.unity3d.com/Manual/GameObjects.html">GameObject</a>.</p>
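<p>A minimal sketch of a <code>CollectObservations()</code> implementation; the <code>m_Target</code> and <code>m_Rigidbody</code> references are hypothetical fields cached by the agent:</p>
<pre class="fragment">public override void CollectObservations(VectorSensor sensor)
{
    sensor.AddObservation(transform.localPosition);   // 3 values
    sensor.AddObservation(m_Target.localPosition);    // 3 values
    sensor.AddObservation(m_Rigidbody.velocity.x);    // 1 value
    sensor.AddObservation(m_Rigidbody.velocity.z);    // 1 value
    // Total of 8 values: the Vector Observation Space Size must be set to 8.
}
</pre>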
<p>For more information about observations, see <a href="https://github.com/Unity-Technologies/ml-agents/blob/release_6_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors">Observations and Sensors</a>.</p>

</div>
</div>
<a id="a103f88d983fb59506131259d1793e14b"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a103f88d983fb59506131259d1793e14b">&#9670;&nbsp;</a></span>EndEpisode()</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">void EndEpisode </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Sets the done flag to true and resets the agent. </p>
<dl class="section see"><dt>See also</dt><dd><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a9bec2b5db75991122d64e0074ba9b3c3" title="Implement OnEpisodeBegin() to set up an Agent instance at the beginning of an episode.">OnEpisodeBegin</a></dd></dl>
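<p>A minimal sketch of ending an episode on failure and resetting state when the next one begins; <code>FellOffPlatform()</code> and <code>m_StartPosition</code> are hypothetical:</p>
<pre class="fragment">public override void OnActionReceived(float[] vectorAction)
{
    // ... act on the received action ...
    if (FellOffPlatform())      // hypothetical failure check
    {
        SetReward(-1.0f);
        EndEpisode();           // marks the agent done; OnEpisodeBegin() runs next
    }
}

public override void OnEpisodeBegin()
{
    transform.localPosition = m_StartPosition;   // hypothetical stored start pose
}
</pre>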

</div>
</div>
<a id="a8a7b63bf4d0526d8fb6becfd8869b96a"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a8a7b63bf4d0526d8fb6becfd8869b96a">&#9670;&nbsp;</a></span>GetAction()</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">float [] GetAction </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Returns the last action that was decided on by the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a>. </p>
<dl class="section return"><dt>Returns</dt><dd>The last action that was decided by the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> (or null if no decision has been made). </dd></dl>
<dl class="section see"><dt>See also</dt><dd><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242" title="Implement OnActionReceived() to specify agent behavior at every step, based on the provided action.">OnActionReceived(float[])</a></dd></dl>

</div>
</div>
<a id="a7dbd50c5e347a1fe0c8f2a63ccc1ebb5"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a7dbd50c5e347a1fe0c8f2a63ccc1ebb5">&#9670;&nbsp;</a></span>GetCumulativeReward()</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">float GetCumulativeReward </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Retrieves the episode reward for the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a>. </p>
<dl class="section return"><dt>Returns</dt><dd>The episode reward.</dd></dl>

</div>
</div>
<a id="a37ca6696f9570fedbcc5e00d884c741a"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a37ca6696f9570fedbcc5e00d884c741a">&#9670;&nbsp;</a></span>GetObservations()</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">ReadOnlyCollection&lt;float&gt; GetObservations </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Returns a read-only view of the observations that were generated in <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a1c3586d1b95c619db2a9772f6818d44a" title="Implement CollectObservations() to collect the vector observations of the agent for the step.">CollectObservations(VectorSensor)</a>. </p>
<p>This is mainly useful inside of a <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc" title="Implement Heuristic() to choose an action for this agent using a custom heuristic.">Heuristic(float[])</a> method to avoid recomputing the observations.</p>
<dl class="section return"><dt>Returns</dt><dd>A read-only view of the observations list.</dd></dl>

</div>
</div>
<a id="aa0e01de85276d0b7a79299d9d26b81dc"></a>
<h2 class="memtitle"><span class="permalink"><a href="#aa0e01de85276d0b7a79299d9d26b81dc">&#9670;&nbsp;</a></span>Heuristic()</h2>

<div class="memitem">
<div class="memproto">
<table class="mlabels">
  <tr>
  <td class="mlabels-left">
      <table class="memname">
        <tr>
          <td class="memname">virtual void Heuristic </td>
          <td>(</td>
          <td class="paramtype">float[]&#160;</td>
          <td class="paramname"><em>actionsOut</em></td><td>)</td>
          <td></td>
        </tr>
      </table>
  </td>
  <td class="mlabels-right">
<span class="mlabels"><span class="mlabel">virtual</span></span>  </td>
  </tr>
</table>
</div><div class="memdoc">

<p>Implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc" title="Implement Heuristic() to choose an action for this agent using a custom heuristic.">Heuristic()</a></code> to choose an action for this agent using a custom heuristic. </p>
<p>Implement this function to provide custom decision making logic or to support manual control of an agent using keyboard, mouse, or game controller input.</p>
<p>Your heuristic implementation can use any decision-making logic you specify. Assign decision values to the float[] array, <em>actionsOut</em>, passed to your function as a parameter. The same array is reused between steps, so it is up to you to initialize the values on each call, for example by calling <code>Array.Clear(actionsOut, 0, actionsOut.Length);</code>. Add values to the array at the same indexes as they are used in your <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242" title="Implement OnActionReceived() to specify agent behavior at every step, based on the provided action.">OnActionReceived(float[])</a> function, which receives this array and implements the corresponding agent behavior. See <a href="https://github.com/Unity-Technologies/ml-agents/blob/release_6_docs/docs/Learning-Environment-Design-Agents.md#actions">Actions</a> for more information about agent actions. Note: do not assign a new float array to <em>actionsOut</em> inside the <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc" title="Implement Heuristic() to choose an action for this agent using a custom heuristic.">Heuristic()</a></code> method, as this prevents the values from being written to the original action array.</p>
<p>An agent calls this <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc" title="Implement Heuristic() to choose an action for this agent using a custom heuristic.">Heuristic()</a></code> function to make a decision when you set its behavior type to BehaviorType.HeuristicOnly. The agent also calls this function if you set its behavior type to BehaviorType.Default when the <a class="el" href="classUnity_1_1MLAgents_1_1Academy.html" title="The Academy singleton manages agent training and decision making.">Academy</a> is not connected to an external training process and you do not assign a trained model to the agent.</p>
<p>To perform imitation learning, implement manual control of the agent in the <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc" title="Implement Heuristic() to choose an action for this agent using a custom heuristic.">Heuristic()</a></code> function so that you can record the demonstrations required for the imitation learning algorithms. (Attach a <a href="https://github.com/Unity-Technologies/ml-agents/blob/release_6_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations">Demonstration Recorder</a> component to the agent's <a href="https://docs.unity3d.com/Manual/GameObjects.html">GameObject</a> to record the demonstration session to a file.)</p>
<p>Even when you don’t plan to use heuristic decisions for an agent or imitation learning, implementing a simple heuristic function can aid in debugging agent actions and interactions with its environment.</p>
<p>The following example illustrates a <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc" title="Implement Heuristic() to choose an action for this agent using a custom heuristic.">Heuristic()</a></code> function that provides WASD-style keyboard control for an agent that can move in two dimensions as well as jump. See <a href="https://docs.unity3d.com/Manual/class-InputManager.html">Input Manager</a> for more information about the built-in <a class="el" href="namespaceUnity.html">Unity</a> input functions. You can also use the <a href="https://docs.unity3d.com/Packages/com.unity.inputsystem@1.0/manual/index.html">Input System package</a>, which provides a more flexible and configurable input system. </p><div class="fragment"><div class="line"><span class="keyword">public</span> <span class="keyword">override</span> <span class="keywordtype">void</span> <a class="code" href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc">Heuristic</a>(<span class="keywordtype">float</span>[] actionsOut)</div>
<div class="line">{</div>
<div class="line">    actionsOut[0] = Input.GetAxis(<span class="stringliteral">&quot;Horizontal&quot;</span>);</div>
<div class="line">    actionsOut[1] = Input.GetKey(KeyCode.Space) ? 1.0f : 0.0f;</div>
<div class="line">    actionsOut[2] = Input.GetAxis(<span class="stringliteral">&quot;Vertical&quot;</span>);</div>
<div class="line">}</div>
<div class="ttc" id="aclassUnity_1_1MLAgents_1_1Agent_html_aa0e01de85276d0b7a79299d9d26b81dc"><div class="ttname"><a href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc">Unity.MLAgents.Agent.Heuristic</a></div><div class="ttdeci">virtual void Heuristic(float[] actionsOut)</div><div class="ttdoc">Implement Heuristic() to choose an action for this agent using a custom heuristic.</div><div class="ttdef"><b>Definition:</b> Agent.cs:816</div></div>
</div><!-- fragment --><dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">actionsOut</td><td>Array for the output actions.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section see"><dt>See also</dt><dd><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242" title="Implement OnActionReceived() to specify agent behavior at every step, based on the provided action.">OnActionReceived(float[])</a></dd></dl>

</div>
</div>
<a id="ab32ab0cc76b4eaa37077e89ee9e3cdc1"></a>
<h2 class="memtitle"><span class="permalink"><a href="#ab32ab0cc76b4eaa37077e89ee9e3cdc1">&#9670;&nbsp;</a></span>Initialize()</h2>

<div class="memitem">
<div class="memproto">
<table class="mlabels">
  <tr>
  <td class="mlabels-left">
      <table class="memname">
        <tr>
          <td class="memname">virtual void Initialize </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
  </td>
  <td class="mlabels-right">
<span class="mlabels"><span class="mlabel">virtual</span></span>  </td>
  </tr>
</table>
</div><div class="memdoc">

<p>Implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ab32ab0cc76b4eaa37077e89ee9e3cdc1" title="Implement Initialize() to perform one-time initialization or set up of the Agent instance.">Initialize()</a></code> to perform one-time initialization or set up of the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> instance. </p>
<p><code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ab32ab0cc76b4eaa37077e89ee9e3cdc1" title="Implement Initialize() to perform one-time initialization or set up of the Agent instance.">Initialize()</a></code> is called once when the agent is first enabled. If, for example, the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> object needs references to other <a href="https://docs.unity3d.com/Manual/GameObjects.html">GameObjects</a> in the scene, you can collect and store those references here.</p>
<p>Note that <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a9bec2b5db75991122d64e0074ba9b3c3" title="Implement OnEpisodeBegin() to set up an Agent instance at the beginning of an episode.">OnEpisodeBegin</a> is called at the start of each of the agent's episodes. You can use that function for items that need to be reset for each episode.</p>

</div>
</div>
<a id="a387e6e9b5a9171dcae90c9037fe64dc5"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a387e6e9b5a9171dcae90c9037fe64dc5">&#9670;&nbsp;</a></span>LazyInitialize()</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">void LazyInitialize </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Initializes the agent. </p>
<p>Can be safely called multiple times.</p>
<p>This function calls your <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ab32ab0cc76b4eaa37077e89ee9e3cdc1" title="Implement Initialize() to perform one-time initialization or set up of the Agent instance.">Initialize</a> implementation, if one exists. </p>

</div>
</div>
<a id="a01dce485e4f324afd12b98a068f0d242"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a01dce485e4f324afd12b98a068f0d242">&#9670;&nbsp;</a></span>OnActionReceived()</h2>

<div class="memitem">
<div class="memproto">
<table class="mlabels">
  <tr>
  <td class="mlabels-left">
      <table class="memname">
        <tr>
          <td class="memname">virtual void OnActionReceived </td>
          <td>(</td>
          <td class="paramtype">float[]&#160;</td>
          <td class="paramname"><em>vectorAction</em></td><td>)</td>
          <td></td>
        </tr>
      </table>
  </td>
  <td class="mlabels-right">
<span class="mlabels"><span class="mlabel">virtual</span></span>  </td>
  </tr>
</table>
</div><div class="memdoc">

<p>Implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242" title="Implement OnActionReceived() to specify agent behavior at every step, based on the provided action.">OnActionReceived()</a></code> to specify agent behavior at every step, based on the provided action. </p>
<p>An action is passed to this function in the form of an array vector. Your implementation must use the array to direct the agent's behavior for the current step.</p>
<p>You decide how many elements you need in the action array to control your agent and what each element means. For example, if you want to apply a force to move an agent around the environment, you can arbitrarily pick three values in the action array to use as the force components. During training, the agent's policy learns to set those particular elements of the array to maximize the training rewards the agent receives. (Of course, if you implement a <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#aa0e01de85276d0b7a79299d9d26b81dc" title="Implement Heuristic() to choose an action for this agent using a custom heuristic.">Heuristic</a> function, it must use the same elements of the action array for the same purpose since there is no learning involved.)</p>
<p>Actions for an agent can be either&#160;<em>Continuous</em>&#160;or&#160;<em>Discrete</em>. Specify which type of action space an agent uses, along with the size of the action array, in the BrainParameters of the agent's associated BehaviorParameters component.</p>
<p>When an agent uses the continuous action space, the values in the action array are floating point numbers. You should clamp the values to the range [-1, 1] to increase numerical stability during training.</p>
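<p>For example, a continuous-control implementation might clamp each value before applying it as a force; <code>m_Rigidbody</code> and <code>m_ForceScale</code> are hypothetical fields:</p>
<pre class="fragment">public override void OnActionReceived(float[] vectorAction)
{
    // Clamp continuous actions to [-1, 1] before using them.
    var moveX = Mathf.Clamp(vectorAction[0], -1f, 1f);
    var moveZ = Mathf.Clamp(vectorAction[1], -1f, 1f);
    m_Rigidbody.AddForce(new Vector3(moveX, 0f, moveZ) * m_ForceScale);
}
</pre>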
<p>When an agent uses the discrete action space, the values in the action array are integers that each represent a specific, discrete action. For example, you could define a set of discrete actions such as:</p>
<div class="fragment"><div class="line">0 = Do nothing</div>
<div class="line">1 = Move one space left</div>
<div class="line">2 = Move one space right</div>
<div class="line">3 = Move one space up</div>
<div class="line">4 = Move one space down</div>
</div><!-- fragment --><p>When making a decision, the agent picks one of the five actions and puts the corresponding integer value in the action vector. For example, if the agent decided to move left, the action vector parameter would be an array containing a single element with the value 1.</p>
<p>You can define multiple sets, or branches, of discrete actions to allow an agent to perform simultaneous, independent actions. For example, you could use one branch for movement and another branch for throwing a ball left, right, up, or down, to allow the agent to do both in the same step.</p>
<p>The action vector of a discrete action space contains one element for each branch. The value of each element is the integer representing the chosen action for that branch. The agent always chooses one action for each branch.</p>
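<p>A minimal sketch of handling two discrete branches (movement and throwing); <code>Move()</code> and <code>ThrowBall()</code> are hypothetical helpers:</p>
<pre class="fragment">public override void OnActionReceived(float[] vectorAction)
{
    // Discrete actions arrive as floats but represent integers; cast before use.
    var moveAction = (int)vectorAction[0];    // branch 0: movement
    var throwAction = (int)vectorAction[1];   // branch 1: throwing
    switch (moveAction)
    {
        case 1: Move(Vector3.left); break;
        case 2: Move(Vector3.right); break;
        // case 0: do nothing
    }
    if (throwAction == 1)
    {
        ThrowBall();
    }
}
</pre>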
<p>When you use the discrete action space, you can prevent the training process or the neural network model from choosing specific actions in a step by implementing the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a6f6491addb1c6942cc905f8d3df1b39a" title="Implement CollectDiscreteActionMasks() to collect the masks for discrete actions.">CollectDiscreteActionMasks(DiscreteActionMasker)</a> function. For example, if your agent is next to a wall, you could mask out any actions that would result in the agent trying to move into the wall.</p>
<p>For more information about implementing agent actions see <a href="https://github.com/Unity-Technologies/ml-agents/blob/release_6_docs/docs/Learning-Environment-Design-Agents.md#actions">Agents - Actions</a>.</p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">vectorAction</td><td>An array containing the action vector. The length of the array is specified by the BrainParameters of the agent's associated BehaviorParameters component. </td></tr>
  </table>
  </dd>
</dl>

</div>
</div>
<a id="af97f96776a06243e316547fce49d877a"></a>
<h2 class="memtitle"><span class="permalink"><a href="#af97f96776a06243e316547fce49d877a">&#9670;&nbsp;</a></span>OnAfterDeserialize()</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">void OnAfterDeserialize </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Called by <a class="el" href="namespaceUnity.html">Unity</a> immediately after deserializing this object. </p>
<p>The <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> class uses <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#af97f96776a06243e316547fce49d877a" title="Called by Unity immediately after deserializing this object.">OnAfterDeserialize()</a> for internal housekeeping. If you implement your own custom deserialization logic, call the base class implementation from your override.</p>
<p>See <a href="https://docs.unity3d.com/ScriptReference/ISerializationCallbackReceiver.OnAfterDeserialize.html">OnAfterDeserialize</a> for more information.</p>
<div class="fragment"><div class="line"><span class="keyword">public</span> <span class="keyword">new</span> <span class="keywordtype">void</span> <a class="code" href="classUnity_1_1MLAgents_1_1Agent.html#af97f96776a06243e316547fce49d877a">OnAfterDeserialize</a>()</div>
<div class="line">{</div>
<div class="line">    base.OnAfterDeserialize();</div>
<div class="line">    <span class="comment">// additional deserialization logic...</span></div>
<div class="line">}</div>
<div class="ttc" id="aclassUnity_1_1MLAgents_1_1Agent_html_af97f96776a06243e316547fce49d877a"><div class="ttname"><a href="classUnity_1_1MLAgents_1_1Agent.html#af97f96776a06243e316547fce49d877a">Unity.MLAgents.Agent.OnAfterDeserialize</a></div><div class="ttdeci">void OnAfterDeserialize()</div><div class="ttdoc">Called by Unity immediately after deserializing this object.</div><div class="ttdef"><b>Definition:</b> Agent.cs:358</div></div>
</div><!-- fragment --> 
</div>
</div>
<a id="a72c4055ca88e935619de54f1aeb8f5f2"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a72c4055ca88e935619de54f1aeb8f5f2">&#9670;&nbsp;</a></span>OnBeforeSerialize()</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">void OnBeforeSerialize </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Called by <a class="el" href="namespaceUnity.html">Unity</a> immediately before serializing this object. </p>
<p>The <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> class uses <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a72c4055ca88e935619de54f1aeb8f5f2" title="Called by Unity immediately before serializing this object.">OnBeforeSerialize()</a> for internal housekeeping. If you implement your own custom serialization logic, call the base class implementation as well.</p>
<p>See <a href="https://docs.unity3d.com/ScriptReference/ISerializationCallbackReceiver.OnBeforeSerialize.html">OnBeforeSerialize</a> for more information.</p>
<div class="fragment"><div class="line"><span class="keyword">public</span> <span class="keyword">new</span> <span class="keywordtype">void</span> <a class="code" href="classUnity_1_1MLAgents_1_1Agent.html#a72c4055ca88e935619de54f1aeb8f5f2">OnBeforeSerialize</a>()</div>
<div class="line">{</div>
<div class="line">    base.OnBeforeSerialize();</div>
<div class="line">    <span class="comment">// additional serialization logic...</span></div>
<div class="line">}</div>
<div class="ttc" id="aclassUnity_1_1MLAgents_1_1Agent_html_a72c4055ca88e935619de54f1aeb8f5f2"><div class="ttname"><a href="classUnity_1_1MLAgents_1_1Agent.html#a72c4055ca88e935619de54f1aeb8f5f2">Unity.MLAgents.Agent.OnBeforeSerialize</a></div><div class="ttdeci">void OnBeforeSerialize()</div><div class="ttdoc">Called by Unity immediately before serializing this object.</div><div class="ttdef"><b>Definition:</b> Agent.cs:327</div></div>
</div><!-- fragment --> 
</div>
</div>
<a id="a1aac1c9a4ae04ef3e2fbf26b0aa570cc"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a1aac1c9a4ae04ef3e2fbf26b0aa570cc">&#9670;&nbsp;</a></span>OnDisable()</h2>

<div class="memitem">
<div class="memproto">
<table class="mlabels">
  <tr>
  <td class="mlabels-left">
      <table class="memname">
        <tr>
          <td class="memname">virtual void OnDisable </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
  </td>
  <td class="mlabels-right">
<span class="mlabels"><span class="mlabel">protected</span><span class="mlabel">virtual</span></span>  </td>
  </tr>
</table>
</div><div class="memdoc">

<p>Called when the attached <a href="https://docs.unity3d.com/Manual/GameObjects.html">GameObject</a> becomes disabled and inactive. </p>
<p>Always call the base <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> class version of this function if you implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a1aac1c9a4ae04ef3e2fbf26b0aa570cc" title="Called when the attached GameObject becomes disabled and inactive.">OnDisable()</a></code> in your own <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> subclasses. </p>
<div class="fragment"><div class="line"><span class="keyword">protected</span> <span class="keyword">override</span> <span class="keywordtype">void</span> <a class="code" href="classUnity_1_1MLAgents_1_1Agent.html#a1aac1c9a4ae04ef3e2fbf26b0aa570cc">OnDisable</a>()</div>
<div class="line">{</div>
<div class="line">    base.OnDisable();</div>
<div class="line">    <span class="comment">// additional OnDisable logic...</span></div>
<div class="line">}</div>
<div class="ttc" id="aclassUnity_1_1MLAgents_1_1Agent_html_a1aac1c9a4ae04ef3e2fbf26b0aa570cc"><div class="ttname"><a href="classUnity_1_1MLAgents_1_1Agent.html#a1aac1c9a4ae04ef3e2fbf26b0aa570cc">Unity.MLAgents.Agent.OnDisable</a></div><div class="ttdeci">virtual void OnDisable()</div><div class="ttdoc">Called when the attached GameObject becomes disabled and inactive.</div><div class="ttdef"><b>Definition:</b> Agent.cs:454</div></div>
</div><!-- fragment --> <dl class="section see"><dt>See also</dt><dd><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a34316462014f78aba29c389590f6b104" title="Called when the attached GameObject becomes enabled and active.">OnEnable</a></dd></dl>

</div>
</div>
<a id="a34316462014f78aba29c389590f6b104"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a34316462014f78aba29c389590f6b104">&#9670;&nbsp;</a></span>OnEnable()</h2>

<div class="memitem">
<div class="memproto">
<table class="mlabels">
  <tr>
  <td class="mlabels-left">
      <table class="memname">
        <tr>
          <td class="memname">virtual void OnEnable </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
  </td>
  <td class="mlabels-right">
<span class="mlabels"><span class="mlabel">protected</span><span class="mlabel">virtual</span></span>  </td>
  </tr>
</table>
</div><div class="memdoc">

<p>Called when the attached <a href="https://docs.unity3d.com/Manual/GameObjects.html">GameObject</a> becomes enabled and active. </p>
<p>This function initializes the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> instance, if it hasn't been initialized yet. Always call the base <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> class version of this function if you implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a34316462014f78aba29c389590f6b104" title="Called when the attached GameObject becomes enabled and active.">OnEnable()</a></code> in your own <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> subclasses. </p>
<div class="fragment"><div class="line"><span class="keyword">protected</span> <span class="keyword">override</span> <span class="keywordtype">void</span> <a class="code" href="classUnity_1_1MLAgents_1_1Agent.html#a34316462014f78aba29c389590f6b104">OnEnable</a>()</div>
<div class="line">{</div>
<div class="line">    base.OnEnable();</div>
<div class="line">    <span class="comment">// additional OnEnable logic...</span></div>
<div class="line">}</div>
<div class="ttc" id="aclassUnity_1_1MLAgents_1_1Agent_html_a34316462014f78aba29c389590f6b104"><div class="ttname"><a href="classUnity_1_1MLAgents_1_1Agent.html#a34316462014f78aba29c389590f6b104">Unity.MLAgents.Agent.OnEnable</a></div><div class="ttdeci">virtual void OnEnable()</div><div class="ttdoc">Called when the attached GameObject becomes enabled and active.</div><div class="ttdef"><b>Definition:</b> Agent.cs:302</div></div>
</div><!-- fragment --> 
</div>
</div>
<a id="a9bec2b5db75991122d64e0074ba9b3c3"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a9bec2b5db75991122d64e0074ba9b3c3">&#9670;&nbsp;</a></span>OnEpisodeBegin()</h2>

<div class="memitem">
<div class="memproto">
<table class="mlabels">
  <tr>
  <td class="mlabels-left">
      <table class="memname">
        <tr>
          <td class="memname">virtual void OnEpisodeBegin </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
  </td>
  <td class="mlabels-right">
<span class="mlabels"><span class="mlabel">virtual</span></span>  </td>
  </tr>
</table>
</div><div class="memdoc">

<p>Implement <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a9bec2b5db75991122d64e0074ba9b3c3" title="Implement OnEpisodeBegin() to set up an Agent instance at the beginning of an episode.">OnEpisodeBegin()</a></code> to set up an <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> instance at the beginning of an episode. </p>
<dl class="section see"><dt>See also</dt><dd><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ab32ab0cc76b4eaa37077e89ee9e3cdc1" title="Implement Initialize() to perform one-time initialization or set up of the Agent instance.">Initialize</a>, <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a103f88d983fb59506131259d1793e14b" title="Sets the done flag to true and resets the agent.">EndEpisode</a></dd></dl>
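<p>For example, a minimal override might reset the agent's transform at the start of each episode. (An illustrative sketch; the starting-pose fields are hypothetical.)</p>
<div class="fragment"><div class="line">public override void OnEpisodeBegin()</div>
<div class="line">{</div>
<div class="line">    // Hypothetical reset logic: return the agent to its starting pose.</div>
<div class="line">    transform.localPosition = m_StartingPosition;</div>
<div class="line">    transform.localRotation = m_StartingRotation;</div>
<div class="line">}</div>
</div><!-- fragment -->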

</div>
</div>
<a id="a737d24da9fbe954cd4e41983ddee208a"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a737d24da9fbe954cd4e41983ddee208a">&#9670;&nbsp;</a></span>RequestAction()</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">void RequestAction </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Requests an action for this agent. </p>
<p>Call <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a737d24da9fbe954cd4e41983ddee208a" title="Requests an action for this agent.">RequestAction()</a></code> to repeat the previous action returned by the agent's most recent decision. A new decision is not requested. When you call this function, the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> instance invokes <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242" title="Implement OnActionReceived() to specify agent behavior at every step, based on the provided action.">OnActionReceived(float[])</a> with the existing action vector.</p>
<p>You can use <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a737d24da9fbe954cd4e41983ddee208a" title="Requests an action for this agent.">RequestAction()</a></code> in situations where an agent must take an action every update, but doesn't need to make a decision as often. For example, an agent that moves through its environment might need to apply an action to keep moving, but only needs to make a decision to change course or speed occasionally.</p>
<p>You can add a <a class="el" href="classUnity_1_1MLAgents_1_1DecisionRequester.html" title="The DecisionRequester component automatically request decisions for an Agent instance at regular inte...">DecisionRequester</a> component to the agent's <a href="https://docs.unity3d.com/Manual/GameObjects.html">GameObject</a> to drive the agent's decision making and action frequency. When you use this component, do not call <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a737d24da9fbe954cd4e41983ddee208a" title="Requests an action for this agent.">RequestAction()</a></code> separately.</p>
<p>Note that <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ae9f3c050a74cf26e2cd74e82deaa9a5d" title="Requests a new decision for this agent.">RequestDecision()</a> calls <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a737d24da9fbe954cd4e41983ddee208a" title="Requests an action for this agent.">RequestAction()</a></code>; you do not need to call both functions at the same time.</p>
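<p>For example, an agent that needs an action every <code>FixedUpdate()</code> but a fresh decision only every few steps could drive both calls manually. (An illustrative sketch; the five-step interval and the counter field are arbitrary.)</p>
<div class="fragment"><div class="line">int m_StepsSinceDecision;</div>
<div class="line"> </div>
<div class="line">void FixedUpdate()</div>
<div class="line">{</div>
<div class="line">    if (m_StepsSinceDecision &gt;= 5)</div>
<div class="line">    {</div>
<div class="line">        m_StepsSinceDecision = 0;</div>
<div class="line">        RequestDecision(); // also requests an action</div>
<div class="line">    }</div>
<div class="line">    else</div>
<div class="line">    {</div>
<div class="line">        m_StepsSinceDecision++;</div>
<div class="line">        RequestAction(); // repeat the last decided action</div>
<div class="line">    }</div>
<div class="line">}</div>
</div><!-- fragment -->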

</div>
</div>
<a id="ae9f3c050a74cf26e2cd74e82deaa9a5d"></a>
<h2 class="memtitle"><span class="permalink"><a href="#ae9f3c050a74cf26e2cd74e82deaa9a5d">&#9670;&nbsp;</a></span>RequestDecision()</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">void RequestDecision </td>
          <td>(</td>
          <td class="paramname"></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Requests a new decision for this agent. </p>
<p>Call <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ae9f3c050a74cf26e2cd74e82deaa9a5d" title="Requests a new decision for this agent.">RequestDecision()</a></code> whenever an agent needs a decision. You often want to request a decision every environment step. However, if an agent cannot use the decision every step, then you can request a decision less frequently.</p>
<p>You can add a <a class="el" href="classUnity_1_1MLAgents_1_1DecisionRequester.html" title="The DecisionRequester component automatically request decisions for an Agent instance at regular inte...">DecisionRequester</a> component to the agent's <a href="https://docs.unity3d.com/Manual/GameObjects.html">GameObject</a> to drive the agent's decision making. When you use this component, do not call <code><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ae9f3c050a74cf26e2cd74e82deaa9a5d" title="Requests a new decision for this agent.">RequestDecision()</a></code> separately.</p>
<p>Note that this function calls <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a737d24da9fbe954cd4e41983ddee208a" title="Requests an action for this agent.">RequestAction()</a>; you do not need to call both functions at the same time.</p>
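<p>For example, an agent could request decisions only when its situation changes rather than on a fixed schedule. (An illustrative sketch; <code>TargetMoved()</code> is a hypothetical helper.)</p>
<div class="fragment"><div class="line">void FixedUpdate()</div>
<div class="line">{</div>
<div class="line">    // Ask the policy for a new decision only when needed;</div>
<div class="line">    // otherwise keep repeating the previous action.</div>
<div class="line">    if (TargetMoved())</div>
<div class="line">    {</div>
<div class="line">        RequestDecision();</div>
<div class="line">    }</div>
<div class="line">    else</div>
<div class="line">    {</div>
<div class="line">        RequestAction();</div>
<div class="line">    }</div>
<div class="line">}</div>
</div><!-- fragment -->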

</div>
</div>
<a id="a40c4a915bf5afaf307f741e3f9c2c423"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a40c4a915bf5afaf307f741e3f9c2c423">&#9670;&nbsp;</a></span>ScaleAction()</h2>

<div class="memitem">
<div class="memproto">
<table class="mlabels">
  <tr>
  <td class="mlabels-left">
      <table class="memname">
        <tr>
          <td class="memname">static float ScaleAction </td>
          <td>(</td>
          <td class="paramtype">float&#160;</td>
          <td class="paramname"><em>rawAction</em>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">float&#160;</td>
          <td class="paramname"><em>min</em>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">float&#160;</td>
          <td class="paramname"><em>max</em>&#160;</td>
        </tr>
        <tr>
          <td></td>
          <td>)</td>
          <td></td><td></td>
        </tr>
      </table>
  </td>
  <td class="mlabels-right">
<span class="mlabels"><span class="mlabel">static</span><span class="mlabel">protected</span></span>  </td>
  </tr>
</table>
</div><div class="memdoc">

<p>Scales continuous action from [-1, 1] to arbitrary range. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">rawAction</td><td>The input action value.</td></tr>
    <tr><td class="paramname">min</td><td>The minimum output value.</td></tr>
    <tr><td class="paramname">max</td><td>The maximum output value.</td></tr>
  </table>
  </dd>
</dl>
<dl class="section return"><dt>Returns</dt><dd>The <em>rawAction</em> scaled from [-1, 1] to [<em>min</em>, <em>max</em>].</dd></dl>
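<p>This corresponds to the linear mapping <code>min + (rawAction + 1) * 0.5 * (max - min)</code>. For example, inside an action handler (a sketch; the action index and output range are illustrative):</p>
<div class="fragment"><div class="line">// Map the first continuous action from [-1, 1] to a speed in [0, 10]:</div>
<div class="line">//   rawAction = -1 -&gt; 0,  rawAction = 0 -&gt; 5,  rawAction = 1 -&gt; 10</div>
<div class="line">float speed = ScaleAction(vectorAction[0], 0f, 10f);</div>
</div><!-- fragment -->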

</div>
</div>
<a id="aba7983dfccddded21f1fbd4354bbf810"></a>
<h2 class="memtitle"><span class="permalink"><a href="#aba7983dfccddded21f1fbd4354bbf810">&#9670;&nbsp;</a></span>SetModel()</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">void SetModel </td>
          <td>(</td>
          <td class="paramtype">string&#160;</td>
          <td class="paramname"><em>behaviorName</em>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype">NNModel&#160;</td>
          <td class="paramname"><em>model</em>, </td>
        </tr>
        <tr>
          <td class="paramkey"></td>
          <td></td>
          <td class="paramtype"><a class="el" href="namespaceUnity_1_1MLAgents_1_1Policies.html#a8ab452527c9b0df29c341a5a3a7cdaa6">InferenceDevice</a>&#160;</td>
          <td class="paramname"><em>inferenceDevice</em> = <code>InferenceDevice.CPU</code>&#160;</td>
        </tr>
        <tr>
          <td></td>
          <td>)</td>
          <td></td><td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Updates the Model assigned to this <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> instance. </p>
<p>If the agent already has an assigned model, that model is replaced with the provided one. However, if you call this function with arguments that are identical to the current parameters of the agent, then no changes are made.</p>
<p><b>Note:</b> the <em>behaviorName</em>  parameter is ignored when not training. The <em>model</em>  and <em>inferenceDevice</em>  parameters are ignored when not using inference. </p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">behaviorName</td><td>The identifier of the behavior. This will categorize the agent when training. </td></tr>
    <tr><td class="paramname">model</td><td>The model to use for inference.</td></tr>
    <tr><td class="paramname">inferenceDevice</td><td>The device on which the model will be run.</td></tr>
  </table>
  </dd>
</dl>
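<p>For example, to swap in a different trained model at runtime (a sketch; the behavior name and model asset are hypothetical):</p>
<div class="fragment"><div class="line">public NNModel updatedModel; // assigned in the Inspector</div>
<div class="line"> </div>
<div class="line">void SwapModel()</div>
<div class="line">{</div>
<div class="line">    SetModel("MyBehavior", updatedModel, InferenceDevice.CPU);</div>
<div class="line">}</div>
</div><!-- fragment -->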

</div>
</div>
<a id="ad60a2a2684b0d8970230ab579e52e445"></a>
<h2 class="memtitle"><span class="permalink"><a href="#ad60a2a2684b0d8970230ab579e52e445">&#9670;&nbsp;</a></span>SetReward()</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">void SetReward </td>
          <td>(</td>
          <td class="paramtype">float&#160;</td>
          <td class="paramname"><em>reward</em></td><td>)</td>
          <td></td>
        </tr>
      </table>
</div><div class="memdoc">

<p>Overrides the current step reward of the agent and updates the episode reward accordingly. </p>
<p>This function replaces any rewards given to the agent during the current step. Use <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#af54b9da1f764b0be8cafc581d8f9bc5f" title="Increments the step and episode rewards by the provided value.">AddReward(float)</a> to incrementally change the reward rather than overriding it.</p>
<p>Typically, you assign rewards in the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> subclass's <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242" title="Implement OnActionReceived() to specify agent behavior at every step, based on the provided action.">OnActionReceived(float[])</a> implementation after carrying out the received action and evaluating its success.</p>
<p>Rewards are used during reinforcement learning; they are ignored during inference.</p>
<p>See <a href="https://github.com/Unity-Technologies/ml-agents/blob/release_6_docs/docs/Learning-Environment-Design-Agents.md#rewards">Agents - Rewards</a> for general advice on implementing rewards and <a href="https://github.com/Unity-Technologies/ml-agents/blob/release_6_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals">Reward Signals</a> for information about mixing reward signals from curiosity and Generative Adversarial Imitation Learning (GAIL) with rewards supplied through this method.</p>
<dl class="params"><dt>Parameters</dt><dd>
  <table class="params">
    <tr><td class="paramname">reward</td><td>The new value of the reward.</td></tr>
  </table>
  </dd>
</dl>
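<p>For example, inside an <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a01dce485e4f324afd12b98a068f0d242" title="Implement OnActionReceived() to specify agent behavior at every step, based on the provided action.">OnActionReceived(float[])</a> implementation (a sketch; <code>ReachedGoal()</code> is a hypothetical helper):</p>
<div class="fragment"><div class="line">public override void OnActionReceived(float[] vectorAction)</div>
<div class="line">{</div>
<div class="line">    // ... carry out the received action ...</div>
<div class="line">    if (ReachedGoal())</div>
<div class="line">    {</div>
<div class="line">        SetReward(1.0f); // replace any reward given earlier this step</div>
<div class="line">        EndEpisode();</div>
<div class="line">    }</div>
<div class="line">}</div>
</div><!-- fragment -->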

</div>
</div>
<h2 class="groupheader">Member Data Documentation</h2>
<a id="a148c8b1a46774354f6bf4d597d939a57"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a148c8b1a46774354f6bf4d597d939a57">&#9670;&nbsp;</a></span>MaxStep</h2>

<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname">int MaxStep</td>
        </tr>
      </table>
</div><div class="memdoc">

<p>The maximum number of steps the agent takes before being done. </p>
<p>The max step value determines the maximum length of an agent's episodes: set it to a positive integer to limit each episode to that many steps, or to 0 for unlimited episode length.</p>
<p>When an episode ends and a new one begins, the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> object's <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a9bec2b5db75991122d64e0074ba9b3c3" title="Implement OnEpisodeBegin() to set up an Agent instance at the beginning of an episode.">OnEpisodeBegin</a> function is called. You can implement <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a9bec2b5db75991122d64e0074ba9b3c3" title="Implement OnEpisodeBegin() to set up an Agent instance at the beginning of an episode.">OnEpisodeBegin</a> to reset the agent or remove it from the environment. An agent's episode can also end if you call its <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a103f88d983fb59506131259d1793e14b" title="Sets the done flag to true and resets the agent.">EndEpisode</a> method or an external process resets the environment through the <a class="el" href="classUnity_1_1MLAgents_1_1Academy.html" title="The Academy singleton manages agent training and decision making.">Academy</a>.</p>
<p>Consider limiting the number of steps in an episode to avoid wasting time during training. If you set the max step value to a reasonable estimate of the time it should take to complete a task, then agents that haven’t succeeded in that time frame will reset and start a new training episode rather than continue to fail. </p>
<p>To use a step limit when training while allowing agents to run without resetting outside of training, you can set the max step to 0 in <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#ab32ab0cc76b4eaa37077e89ee9e3cdc1" title="Implement Initialize() to perform one-time initialization or set up of the Agent instance.">Initialize</a> if the <a class="el" href="classUnity_1_1MLAgents_1_1Academy.html" title="The Academy singleton manages agent training and decision making.">Academy</a> is not connected to an external process. </p><div class="fragment"><div class="line"><span class="keyword">using</span> <a class="code" href="namespaceUnity.html">Unity</a>.<a class="code" href="namespaceUnity_1_1MLAgents.html">MLAgents</a>;</div>
<div class="line"> </div>
<div class="line"><span class="keyword">public</span> <span class="keyword">class </span>MyAgent : Agent</div>
<div class="line">{</div>
<div class="line">    <span class="keyword">public</span> <span class="keyword">override</span> <span class="keywordtype">void</span> <a class="code" href="classUnity_1_1MLAgents_1_1Agent.html#ab32ab0cc76b4eaa37077e89ee9e3cdc1">Initialize</a>()</div>
<div class="line">    {</div>
<div class="line">        <span class="keywordflow">if</span> (!Academy.Instance.IsCommunicatorOn)</div>
<div class="line">        {</div>
<div class="line">            this.<a class="code" href="classUnity_1_1MLAgents_1_1Agent.html#a148c8b1a46774354f6bf4d597d939a57">MaxStep</a> = 0;</div>
<div class="line">        }</div>
<div class="line">    }</div>
<div class="line">}</div>
<div class="ttc" id="aclassUnity_1_1MLAgents_1_1Agent_html_a148c8b1a46774354f6bf4d597d939a57"><div class="ttname"><a href="classUnity_1_1MLAgents_1_1Agent.html#a148c8b1a46774354f6bf4d597d939a57">Unity.MLAgents.Agent.MaxStep</a></div><div class="ttdeci">int MaxStep</div><div class="ttdoc">The maximum number of steps the agent takes before being done.</div><div class="ttdef"><b>Definition:</b> Agent.cs:220</div></div>
<div class="ttc" id="aclassUnity_1_1MLAgents_1_1Agent_html_ab32ab0cc76b4eaa37077e89ee9e3cdc1"><div class="ttname"><a href="classUnity_1_1MLAgents_1_1Agent.html#ab32ab0cc76b4eaa37077e89ee9e3cdc1">Unity.MLAgents.Agent.Initialize</a></div><div class="ttdeci">virtual void Initialize()</div><div class="ttdoc">Implement Initialize() to perform one-time initialization or set up of the Agent instance.</div><div class="ttdef"><b>Definition:</b> Agent.cs:758</div></div>
<div class="ttc" id="anamespaceUnity_1_1MLAgents_html"><div class="ttname"><a href="namespaceUnity_1_1MLAgents.html">Unity.MLAgents</a></div><div class="ttdoc">Welcome to Unity Machine Learning Agents (ML-Agents).</div><div class="ttdef"><b>Definition:</b> Academy.cs:26</div></div>
<div class="ttc" id="anamespaceUnity_html"><div class="ttname"><a href="namespaceUnity.html">Unity</a></div><div class="ttdef"><b>Definition:</b> Academy.cs:26</div></div>
</div><!-- fragment --><p> <b>Note:</b> in general, you should limit the differences between the code you execute during training and the code you run during inference. </p>

</div>
</div>
<h2 class="groupheader">Property Documentation</h2>
<a id="a3c3bd9d2c311f3b630b908053d2419d9"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a3c3bd9d2c311f3b630b908053d2419d9">&#9670;&nbsp;</a></span>CompletedEpisodes</h2>

<div class="memitem">
<div class="memproto">
<table class="mlabels">
  <tr>
  <td class="mlabels-left">
      <table class="memname">
        <tr>
          <td class="memname">int CompletedEpisodes</td>
        </tr>
      </table>
  </td>
  <td class="mlabels-right">
<span class="mlabels"><span class="mlabel">get</span></span>  </td>
  </tr>
</table>
</div><div class="memdoc">

<p>Returns the number of episodes that the <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html" title="An agent is an actor that can observe its environment, decide on the best course of action using thos...">Agent</a> has completed (either <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a103f88d983fb59506131259d1793e14b" title="Sets the done flag to true and resets the agent.">Agent.EndEpisode()</a> was called, or <a class="el" href="classUnity_1_1MLAgents_1_1Agent.html#a148c8b1a46774354f6bf4d597d939a57" title="The maximum number of steps the agent takes before being done.">MaxStep</a> was reached). </p>
<dl class="section return"><dt>Returns</dt><dd>Current episode count. </dd></dl>

</div>
</div>
<a id="a8e8dc6caaae14be76726dc80325b9e53"></a>
<h2 class="memtitle"><span class="permalink"><a href="#a8e8dc6caaae14be76726dc80325b9e53">&#9670;&nbsp;</a></span>StepCount</h2>

<div class="memitem">
<div class="memproto">
<table class="mlabels">
  <tr>
  <td class="mlabels-left">
      <table class="memname">
        <tr>
          <td class="memname">int StepCount</td>
        </tr>
      </table>
  </td>
  <td class="mlabels-right">
<span class="mlabels"><span class="mlabel">get</span></span>  </td>
  </tr>
</table>
</div><div class="memdoc">

<p>Returns the current step counter (within the current episode). </p>
<dl class="section return"><dt>Returns</dt><dd>Current step count. </dd></dl>

</div>
</div>
<hr/>The documentation for this class was generated from the following file:<ul>
<li><a class="el" href="Agent_8cs.html">Agent.cs</a></li>
</ul>
</div><!-- contents -->
</div><!-- doc-content -->
<!-- HTML footer for doxygen 1.8.14-->
<!-- start footer part -->
<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
  <ul>
    <li class="navelem"><a class="el" href="namespaceUnity.html">Unity</a></li><li class="navelem"><a class="el" href="namespaceUnity_1_1MLAgents.html">MLAgents</a></li><li class="navelem"><a class="el" href="classUnity_1_1MLAgents_1_1Agent.html">Agent</a></li>
  </ul>
</div>
</body>
</html>
