<!DOCTYPE html>
<html lang="en-US">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>train_class_mlp [HALCON Operator Reference]</title>
<style type="text/css">
      body {
    color: #000000;
    background-color: #ffffff;
    margin: 0;
    font-family: Arial, Helvetica, sans-serif;
}

.body_main {
    margin-left: 35px;
    margin-right: 35px;
}

@media screen and (min-width:992px) {

    .body_main {
        margin-left: 10%;
        margin-right: 10%;
    }

    table.toctable {
        width: 80%
    }
}

@media screen and (min-width:1400px) {

    .body_main {
        margin-left: 15%;
        margin-right: 15%;
    }

    table.toctable {
        width: 70%
    }
}

body>div ul ul {
    margin-left: inherit;
}

a:link {
    color: #0044cc;
}

a:link,
a:visited {
    text-decoration: none;
}

a:link:hover,
a:visited:hover {
    text-decoration: underline;
}

th {
    text-align: left;
}

h1,
h2,
h3,
h4,
h5,
h6 {
    text-rendering: optimizeLegibility;
    color: #666666;
}

code {
    font-family: monospace,monospace;
}

h1 a.halconoperator {
    font-family: Arial, Helvetica, sans-serif;
    color: #666666;
}

h2 a.halconoperator {
    font-family: Arial, Helvetica, sans-serif;
    color: #666666;
}

hr {
    border: 0;
    border-top: solid 1px #f28d26;
}

.pre {
    display: block;
    padding-bottom: 1ex;
    font-family: monospace;
    white-space: pre;
}

pre {
    font-family: monospace, monospace;
    padding: 1ex;
    white-space: pre-wrap;
}

.toc {
    font-size: 80%;
    border-top: 1px dashed #f28d26;
    border-bottom: 1px dashed #f28d26;
    padding-top: 5px;
    padding-bottom: 5px;
}

.inv {
    margin: 0;
    border: 0;
    padding: 0;
}

.banner {
    color: #666666;
    padding-left: 1em;
}

.logo {
    background-color: white;
}

.keyboard {
    font-size: 80%;
    padding-left: 3px;
    padding-right: 3px;
    border-radius: 5px;
    border-width: 1px;
    border-style: solid;
    border-color: #f28d26;
    background-color: #f3f3f3;
}

.warning {
    margin-top: 2ex;
    margin-bottom: 1ex;
    padding: 10px;
    text-align: center;
    border: 1px solid;
    color: #bb0000;
    background-color: #fff7f7
}

.imprint {
    margin-top: 1ex;
    font-size: 80%;
    color: #666666;
}

.imprinthead {
    font-weight: bolder;
    color: #666666;
}

.indexlink {
    text-align: right;
    padding-bottom: 5px;
}

.postscript {
    margin-top: 2ex;
    font-size: 80%;
    color: #666666
}

.evenrow {
    background-color: #e7e7ef;
    vertical-align: top;
}

.oddrow {
    background-color: #f7f7ff;
    vertical-align: top;
}

.headrow {
    background-color: #97979f;
    color: #ffffff;
    vertical-align: top;
}

.logorow {
    vertical-align: top;
}

.error {
    color: red;
}

.var {
    font-style: italic
}

.halconoperator {
    font-family: monospace, monospace;
}

span.operator {
    font-family: monospace, monospace;
}

span.procedure {
    font-family: monospace, monospace;
}

span.operation {
    font-family: monospace, monospace;
}

span.feature {
    font-family: Arial, Helvetica, Homerton, sans-serif;
}

ul {
    padding-left: 1.2em;
}

li.dot {
    list-style-type: square;
    color: #f28d26;
}

.breadcrumb {
    font-size: 80%;
    color: white;
    background-color: #062d64;
    margin-bottom: 20px;
    padding-left: 35px;
    padding-right: 35px;
    padding-bottom: 15px;
}

.webbar {
    font-size: 80%;
    background-color: #dddddd;
    margin-top: 0px;
    margin-left: -35px;
    margin-right: -35px;
    margin-bottom: 0px;
    padding-top: 5px;
    padding-left: 35px;
    padding-right: 35px;
    padding-bottom: 5px;
}

.footer {
    display: flex;
    flex-wrap: wrap;
    justify-content: space-between;
    border-top: 1px dashed #f28d26;
    font-size: 80%;
    color: #666666;
    padding-bottom: 8px;
}

.footer .socialmedia a {
    padding-left: 7px;
}

.socialmedia {
    padding-top: 10px;
}

.copyright {
    margin-top: 19px;
}

.breadcrumb a {
    color: #ffffff;
    border-bottom: 1px solid white;
}

.breadcrumb a:link:hover,
.breadcrumb a:visited:hover {
    text-decoration: none;
    border-bottom: none;
}

.heading {
    margin-top: 1ex;
    font-size: 110%;
    font-weight: bold;
    color: #666666;
}

.text {
    color: black;
}

.example {
    font-size: 80%;
    background-color: #f3f3f3;
    border: 1px dashed #666666;
    padding: 10px;
}

.displaymath {
    display: block;
    text-align: center;
    margin-top: 1ex;
    margin-bottom: 1ex;
}

.title {
    float: left;
    padding-top: 3px;
    padding-bottom: 3px;
}

.signnote {
    font-family: Arial, Helvetica, Homerton, sans-serif;
    font-size: 80%;
    color: #666666;
    font-weight: lighter;
    font-style: italic
}

.par {
    margin-bottom: 1.5em;
}

.parhead {
    text-align: right;
}

.parname {
    float: left;
}

.pardesc {
    font-size: 85%;
    margin-top: 0.5em;
    margin-bottom: 0.5em;
    margin-left: 2em;
}

.parcat {
    color: #666;
    font-weight: bold;
}

*[data-if=cpp],
*[data-if=c],
*[data-if=dotnet],
*[data-if=com],
*[data-if=python] {
    display: none;
}

.tabbar {
    text-align: right;
    border-bottom: 1px solid #f28d26;
    margin-bottom: 0.5em;
}

ul.tabs {
    padding-top: 3px;
    padding-bottom: 3px;
    margin-top: 10px;
    margin-bottom: 0;
    font-size: 80%
}

ul.tabs li {
    padding-top: 3px;
    padding-bottom: 3px;
    display: inline;
    overflow: hidden;
    list-style-type: none;
    margin: 0;
    margin-left: 8px;
    border-top: 1px solid #666;
    border-left: 1px solid #666;
    border-right: 1px solid #666;
}

ul.tabs li.active {
    border-left: 1px solid #f28d26;
    border-right: 1px solid #f28d26;
    border-top: 1px solid #f28d26;
    border-bottom: 1px solid #fff;
}

ul.tabs li.inactive {
    background-color: #eee;
}

ul.tabs li a {
    padding-left: 5px;
    padding-right: 5px;
    color: #666;
}

ul.tabs li a:link:hover {
    text-decoration: none;
}

ul.tabs li.inactive a {
    color: #666;
}

ul.tabs li.active a {
    color: black;
}

dl.generic dd {
    margin-bottom: 1em;
}

.pari {
    color: olive;
}

.paro {
    color: maroon;
}

.comment {
    font-size: 80%;
    color: green;
    white-space: nowrap;
}

table.grid {
    border-collapse: collapse;
}

table.grid td {
    padding: 5px;
    border: 1px solid;
}

table.layout {
    border: 0px;
}

table.layout td {
    padding: 5px;
}

table.table {
    border-collapse: collapse;
}

table.table td {
    padding: 5px;
    border-left: 0px;
    border-right: 0px;
}

table.table tr:last-child {
    border-bottom: 1px solid;
}

table.table th {
    padding: 5px;
    border-top: 1px solid;
    border-bottom: 1px solid;
    border-left: 0px;
    border-right: 0px;
}

.details_summary {
    cursor: pointer;
}

table.toctable {
    width: 100%;
}

table.toctable col:first-child {
    width: 20%;
}

table.toctable col:nth-last-child(2) {
    width: 8%;
}

table.altcolored tr:nth-child(even) {
    background-color: #f3f3f3;
}

    </style>
<!--OP_REF_STYLE_END-->
<script>
    <!--
var active_lang = 'hdevelop';

function switchVisibility(obj, active_lang, new_lang) {
    for (var i = 0; i < obj.length; i++) {
        if (obj.item(i).getAttribute('data-if') == new_lang) {
            obj.item(i).style.display = 'inline';
        }
        if (obj.item(i).getAttribute('data-if') == active_lang) {
            obj.item(i).style.display = 'none';
        }
    }
}

function toggleLanguage(new_lang, initial) {
    if (active_lang != new_lang) {
        var lis = document.getElementsByTagName('li');
        for (var i = 0; i < lis.length; i++) {
            if (lis.item(i).id == 'syn-' + new_lang) {
                lis.item(i).className = 'active';
            } else {
                lis.item(i).className = 'inactive';
            }
        }
        var divs = document.getElementsByTagName('div');
        var spans = document.getElementsByTagName('span');
        switchVisibility(divs, active_lang, new_lang);
        switchVisibility(spans, active_lang, new_lang);
        if (!initial) {
            setCookie("halcon_reference_language", new_lang, null, null);
        }
        active_lang = new_lang;
    }
}

function setCookie(name, value, domain, exp_offset, path, secure) {
    localStorage.setItem(name, value);
}

function getCookie(name) {
    return localStorage.getItem(name);
}

function initialize() {
    var qs_lang_raw = location.href.split('interface=')[1];
    var qs_lang;
    if (qs_lang_raw) {
        qs_lang = qs_lang_raw.split('#')[0];
    }
    var cookie_lang = getCookie("halcon_reference_language");
    var new_lang;
    if (qs_lang == "hdevelop" || qs_lang == "dotnet" || qs_lang == "python" || qs_lang == "cpp" || qs_lang == "c") {
        new_lang = qs_lang;
        setCookie("halcon_reference_language", new_lang, null, null);
    } else if (cookie_lang == "hdevelop" || cookie_lang == "dotnet" || cookie_lang == "python" || cookie_lang == "cpp" || cookie_lang == "c") {
        new_lang = cookie_lang;
    } else {
        new_lang = "hdevelop";
    }
    toggleLanguage(new_lang, 1);
}
-->

  </script>
</head>
<body onload="initialize();">
<div class="breadcrumb">
<br class="inv"><a href="index.html">Table of Contents</a> / <a href="toc_classification.html">Classification</a> / <a href="toc_classification_neuralnets.html">Neural Nets</a><br class="inv">
</div>
<div class="body_main">
<div class="tabbar"><ul class="tabs">
<li id="syn-hdevelop" class="active"><a href="javascript:void(0);" onclick="toggleLanguage('hdevelop')" onfocus="blur()">HDevelop</a></li>
<li id="syn-dotnet" class="inactive"><a href="javascript:void(0);" onclick="toggleLanguage('dotnet')" onfocus="blur()">.NET</a></li>
<li id="syn-python" class="inactive"><a href="javascript:void(0);" onclick="toggleLanguage('python')" onfocus="blur()">Python</a></li>
<li id="syn-cpp" class="inactive"><a href="javascript:void(0);" onclick="toggleLanguage('cpp')" onfocus="blur()">C++</a></li>
<li id="syn-c" class="inactive"><a href="javascript:void(0);" onclick="toggleLanguage('c')" onfocus="blur()">C</a></li>
</ul></div>
<div class="indexlink">
<a href="index_classes.html"><span data-if="dotnet" style="display:none;">Classes</span><span data-if="cpp" style="display:none;">Classes</span></a><span data-if="dotnet" style="display:none;"> | </span><span data-if="cpp" style="display:none;"> | </span><a href="index_by_name.html">Operators</a>
</div>
<!--OP_REF_HEADER_END-->
<h1 id="sec_name">
<span data-if="hdevelop" style="display:inline;">train_class_mlp</span><span data-if="c" style="display:none;">T_train_class_mlp</span><span data-if="cpp" style="display:none;">TrainClassMlp</span><span data-if="dotnet" style="display:none;">TrainClassMlp</span><span data-if="python" style="display:none;">train_class_mlp</span> (Operator)</h1>
<h2>Name</h2>
<p><code><span data-if="hdevelop" style="display:inline;">train_class_mlp</span><span data-if="c" style="display:none;">T_train_class_mlp</span><span data-if="cpp" style="display:none;">TrainClassMlp</span><span data-if="dotnet" style="display:none;">TrainClassMlp</span><span data-if="python" style="display:none;">train_class_mlp</span></code> — Train a multilayer perceptron.</p>
<h2 id="sec_synopsis">Signature</h2>
<div data-if="hdevelop" style="display:inline;">
<p>
<code><b>train_class_mlp</b>( :  : <a href="#MLPHandle"><i>MLPHandle</i></a>, <a href="#MaxIterations"><i>MaxIterations</i></a>, <a href="#WeightTolerance"><i>WeightTolerance</i></a>, <a href="#ErrorTolerance"><i>ErrorTolerance</i></a> : <a href="#Error"><i>Error</i></a>, <a href="#ErrorLog"><i>ErrorLog</i></a>)</code></p>
</div>
<div data-if="c" style="display:none;">
<p>
<code>Herror <b>T_train_class_mlp</b>(const Htuple <a href="#MLPHandle"><i>MLPHandle</i></a>, const Htuple <a href="#MaxIterations"><i>MaxIterations</i></a>, const Htuple <a href="#WeightTolerance"><i>WeightTolerance</i></a>, const Htuple <a href="#ErrorTolerance"><i>ErrorTolerance</i></a>, Htuple* <a href="#Error"><i>Error</i></a>, Htuple* <a href="#ErrorLog"><i>ErrorLog</i></a>)</code></p>
</div>
<div data-if="cpp" style="display:none;">
<p>
<code>void <b>TrainClassMlp</b>(const HTuple&amp; <a href="#MLPHandle"><i>MLPHandle</i></a>, const HTuple&amp; <a href="#MaxIterations"><i>MaxIterations</i></a>, const HTuple&amp; <a href="#WeightTolerance"><i>WeightTolerance</i></a>, const HTuple&amp; <a href="#ErrorTolerance"><i>ErrorTolerance</i></a>, HTuple* <a href="#Error"><i>Error</i></a>, HTuple* <a href="#ErrorLog"><i>ErrorLog</i></a>)</code></p>
<p>
<code>double <a href="HClassMlp.html">HClassMlp</a>::<b>TrainClassMlp</b>(Hlong <a href="#MaxIterations"><i>MaxIterations</i></a>, double <a href="#WeightTolerance"><i>WeightTolerance</i></a>, double <a href="#ErrorTolerance"><i>ErrorTolerance</i></a>, HTuple* <a href="#ErrorLog"><i>ErrorLog</i></a>) const</code></p>
</div>
<div data-if="com" style="display:none;"></div>
<div data-if="dotnet" style="display:none;">
<p>
<code>static void <a href="HOperatorSet.html">HOperatorSet</a>.<b>TrainClassMlp</b>(<a href="HTuple.html">HTuple</a> <a href="#MLPHandle"><i>MLPHandle</i></a>, <a href="HTuple.html">HTuple</a> <a href="#MaxIterations"><i>maxIterations</i></a>, <a href="HTuple.html">HTuple</a> <a href="#WeightTolerance"><i>weightTolerance</i></a>, <a href="HTuple.html">HTuple</a> <a href="#ErrorTolerance"><i>errorTolerance</i></a>, out <a href="HTuple.html">HTuple</a> <a href="#Error"><i>error</i></a>, out <a href="HTuple.html">HTuple</a> <a href="#ErrorLog"><i>errorLog</i></a>)</code></p>
<p>
<code>double <a href="HClassMlp.html">HClassMlp</a>.<b>TrainClassMlp</b>(int <a href="#MaxIterations"><i>maxIterations</i></a>, double <a href="#WeightTolerance"><i>weightTolerance</i></a>, double <a href="#ErrorTolerance"><i>errorTolerance</i></a>, out <a href="HTuple.html">HTuple</a> <a href="#ErrorLog"><i>errorLog</i></a>)</code></p>
</div>
<div data-if="python" style="display:none;">
<p>
<code>def <b>train_class_mlp</b>(<a href="#MLPHandle"><i>mlphandle</i></a>: HHandle, <a href="#MaxIterations"><i>max_iterations</i></a>: int, <a href="#WeightTolerance"><i>weight_tolerance</i></a>: float, <a href="#ErrorTolerance"><i>error_tolerance</i></a>: float) -&gt; Tuple[float, Sequence[float]]</code></p>
</div>
<h2 id="sec_description">Description</h2>
<p><code><span data-if="hdevelop" style="display:inline">train_class_mlp</span><span data-if="c" style="display:none">train_class_mlp</span><span data-if="cpp" style="display:none">TrainClassMlp</span><span data-if="com" style="display:none">TrainClassMlp</span><span data-if="dotnet" style="display:none">TrainClassMlp</span><span data-if="python" style="display:none">train_class_mlp</span></code> trains the multilayer perceptron (MLP) given
in <a href="#MLPHandle"><i><code><span data-if="hdevelop" style="display:inline">MLPHandle</span><span data-if="c" style="display:none">MLPHandle</span><span data-if="cpp" style="display:none">MLPHandle</span><span data-if="com" style="display:none">MLPHandle</span><span data-if="dotnet" style="display:none">MLPHandle</span><span data-if="python" style="display:none">mlphandle</span></code></i></a>.  Before the MLP can be trained, <i>all</i>
training samples to be used for the training must be stored in the
MLP using <a href="add_sample_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">add_sample_class_mlp</span><span data-if="c" style="display:none">add_sample_class_mlp</span><span data-if="cpp" style="display:none">AddSampleClassMlp</span><span data-if="com" style="display:none">AddSampleClassMlp</span><span data-if="dotnet" style="display:none">AddSampleClassMlp</span><span data-if="python" style="display:none">add_sample_class_mlp</span></code></a> or
<a href="read_samples_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">read_samples_class_mlp</span><span data-if="c" style="display:none">read_samples_class_mlp</span><span data-if="cpp" style="display:none">ReadSamplesClassMlp</span><span data-if="com" style="display:none">ReadSamplesClassMlp</span><span data-if="dotnet" style="display:none">ReadSamplesClassMlp</span><span data-if="python" style="display:none">read_samples_class_mlp</span></code></a>.  If, after the training,
additional training samples are to be used, a new MLP must be created
with <a href="create_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">create_class_mlp</span><span data-if="c" style="display:none">create_class_mlp</span><span data-if="cpp" style="display:none">CreateClassMlp</span><span data-if="com" style="display:none">CreateClassMlp</span><span data-if="dotnet" style="display:none">CreateClassMlp</span><span data-if="python" style="display:none">create_class_mlp</span></code></a>, in which again <i>all</i> training
samples to be used (i.e., the original ones and the additional ones)
must be stored.  In these cases, it is useful to save and read the
training data with <a href="write_samples_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">write_samples_class_mlp</span><span data-if="c" style="display:none">write_samples_class_mlp</span><span data-if="cpp" style="display:none">WriteSamplesClassMlp</span><span data-if="com" style="display:none">WriteSamplesClassMlp</span><span data-if="dotnet" style="display:none">WriteSamplesClassMlp</span><span data-if="python" style="display:none">write_samples_class_mlp</span></code></a> and
<a href="read_samples_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">read_samples_class_mlp</span><span data-if="c" style="display:none">read_samples_class_mlp</span><span data-if="cpp" style="display:none">ReadSamplesClassMlp</span><span data-if="com" style="display:none">ReadSamplesClassMlp</span><span data-if="dotnet" style="display:none">ReadSamplesClassMlp</span><span data-if="python" style="display:none">read_samples_class_mlp</span></code></a>, respectively.  A second training
with additional training samples is not explicitly forbidden by
<code><span data-if="hdevelop" style="display:inline">train_class_mlp</span><span data-if="c" style="display:none">train_class_mlp</span><span data-if="cpp" style="display:none">TrainClassMlp</span><span data-if="com" style="display:none">TrainClassMlp</span><span data-if="dotnet" style="display:none">TrainClassMlp</span><span data-if="python" style="display:none">train_class_mlp</span></code>.  However, this typically does not lead to
good results because the training of an MLP is a complex nonlinear
optimization problem, and consequently a second training with new
data will very likely cause the optimization to get stuck in a
local minimum.
</p>
<p>If a rejection class has been specified using
<a href="set_rejection_params_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">set_rejection_params_class_mlp</span><span data-if="c" style="display:none">set_rejection_params_class_mlp</span><span data-if="cpp" style="display:none">SetRejectionParamsClassMlp</span><span data-if="com" style="display:none">SetRejectionParamsClassMlp</span><span data-if="dotnet" style="display:none">SetRejectionParamsClassMlp</span><span data-if="python" style="display:none">set_rejection_params_class_mlp</span></code></a>, the samples for the
rejection class are generated before the actual training begins.
</p>
<p>During the training, the error the MLP achieves on the stored
training samples is minimized by using a nonlinear optimization
algorithm.  If the MLP has been regularized with
<a href="set_regularization_params_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">set_regularization_params_class_mlp</span><span data-if="c" style="display:none">set_regularization_params_class_mlp</span><span data-if="cpp" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="com" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="dotnet" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="python" style="display:none">set_regularization_params_class_mlp</span></code></a>, an additional weight
penalty term is taken into account.  With this, the MLP weights
described in <a href="create_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">create_class_mlp</span><span data-if="c" style="display:none">create_class_mlp</span><span data-if="cpp" style="display:none">CreateClassMlp</span><span data-if="com" style="display:none">CreateClassMlp</span><span data-if="dotnet" style="display:none">CreateClassMlp</span><span data-if="python" style="display:none">create_class_mlp</span></code></a> are determined.  Furthermore,
if an automatic determination of the regularization parameters has
been specified with <a href="set_regularization_params_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">set_regularization_params_class_mlp</span><span data-if="c" style="display:none">set_regularization_params_class_mlp</span><span data-if="cpp" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="com" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="dotnet" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="python" style="display:none">set_regularization_params_class_mlp</span></code></a>,
these parameters are optimized as well.  As described at
<a href="set_regularization_params_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">set_regularization_params_class_mlp</span><span data-if="c" style="display:none">set_regularization_params_class_mlp</span><span data-if="cpp" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="com" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="dotnet" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="python" style="display:none">set_regularization_params_class_mlp</span></code></a>, training the MLP with
automatic determination of the regularization parameters requires
significantly more time than training an unregularized MLP or an MLP
with fixed regularization parameters.
</p>
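<p>Schematically, regularization augments the training objective with a weight penalty. The following plain-Python sketch shows a typical weight-decay form; this particular formula is an illustrative assumption, not HALCON internals, and the penalty actually applied is configured via <code>set_regularization_params_class_mlp</code>:</p>

```python
def regularized_error(data_error, weights, alpha):
    """Schematic regularized objective: the data error plus a weight-decay
    penalty alpha/2 * sum(w**2).  Illustrative form only; the actual
    penalty used for training is configured via
    set_regularization_params_class_mlp."""
    penalty = 0.5 * alpha * sum(w * w for w in weights)
    return data_error + penalty

# A larger regularization parameter alpha penalizes large weights more
# strongly, which biases the optimization toward smoother classifiers.
```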
<p><a href="create_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">create_class_mlp</span><span data-if="c" style="display:none">create_class_mlp</span><span data-if="cpp" style="display:none">CreateClassMlp</span><span data-if="com" style="display:none">CreateClassMlp</span><span data-if="dotnet" style="display:none">CreateClassMlp</span><span data-if="python" style="display:none">create_class_mlp</span></code></a> initializes the MLP weights with random
values to make it very likely that the optimization converges to the
global minimum of the error function.  Nevertheless, in rare cases
it may happen that the random values determined with
<code><span data-if="hdevelop" style="display:inline">RandSeed</span><span data-if="c" style="display:none">RandSeed</span><span data-if="cpp" style="display:none">RandSeed</span><span data-if="com" style="display:none">RandSeed</span><span data-if="dotnet" style="display:none">randSeed</span><span data-if="python" style="display:none">rand_seed</span></code> in <a href="create_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">create_class_mlp</span><span data-if="c" style="display:none">create_class_mlp</span><span data-if="cpp" style="display:none">CreateClassMlp</span><span data-if="com" style="display:none">CreateClassMlp</span><span data-if="dotnet" style="display:none">CreateClassMlp</span><span data-if="python" style="display:none">create_class_mlp</span></code></a> result in a relatively
large optimum error, i.e., that the optimization gets stuck in a
local minimum.  If it can be conjectured that this has happened, the
MLP should be created anew with a different value for
<code><span data-if="hdevelop" style="display:inline">RandSeed</span><span data-if="c" style="display:none">RandSeed</span><span data-if="cpp" style="display:none">RandSeed</span><span data-if="com" style="display:none">RandSeed</span><span data-if="dotnet" style="display:none">randSeed</span><span data-if="python" style="display:none">rand_seed</span></code> in order to check whether a significantly smaller
error can be achieved.
</p>
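<p>This restart strategy can be sketched conceptually in plain Python on a toy objective. Both <code>toy_training_error</code> and <code>pick_best_seed</code> are hypothetical stand-ins, not HALCON calls; a real workflow would instead recreate the MLP with a different <code>RandSeed</code>, retrain, and compare the returned errors:</p>

```python
import random

def toy_training_error(seed):
    """Hypothetical stand-in for one create/train cycle: seed the
    initialization, run a local optimization, and return the final error.
    The toy objective x**4 - 4*x**2 + x + 6 has two local minima of
    different depth, so the result depends on the random starting point,
    just as MLP training depends on the random initial weights."""
    rng = random.Random(seed)
    x = rng.uniform(-3.0, 3.0)
    for _ in range(500):
        grad = 4.0 * x ** 3 - 8.0 * x + 1.0  # derivative of the toy objective
        x -= 0.01 * grad                     # plain gradient-descent step
    return x ** 4 - 4.0 * x ** 2 + x + 6.0

def pick_best_seed(candidate_seeds):
    """Run one training per candidate seed; keep the seed whose run
    reaches the smallest final error."""
    return min(candidate_seeds, key=toy_training_error)
```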
<p>The parameters <a href="#MaxIterations"><i><code><span data-if="hdevelop" style="display:inline">MaxIterations</span><span data-if="c" style="display:none">MaxIterations</span><span data-if="cpp" style="display:none">MaxIterations</span><span data-if="com" style="display:none">MaxIterations</span><span data-if="dotnet" style="display:none">maxIterations</span><span data-if="python" style="display:none">max_iterations</span></code></i></a>, <a href="#WeightTolerance"><i><code><span data-if="hdevelop" style="display:inline">WeightTolerance</span><span data-if="c" style="display:none">WeightTolerance</span><span data-if="cpp" style="display:none">WeightTolerance</span><span data-if="com" style="display:none">WeightTolerance</span><span data-if="dotnet" style="display:none">weightTolerance</span><span data-if="python" style="display:none">weight_tolerance</span></code></i></a>, and
<a href="#ErrorTolerance"><i><code><span data-if="hdevelop" style="display:inline">ErrorTolerance</span><span data-if="c" style="display:none">ErrorTolerance</span><span data-if="cpp" style="display:none">ErrorTolerance</span><span data-if="com" style="display:none">ErrorTolerance</span><span data-if="dotnet" style="display:none">errorTolerance</span><span data-if="python" style="display:none">error_tolerance</span></code></i></a> control the nonlinear optimization
algorithm.  Note that if an automatic determination of the
regularization parameters has been specified with
<a href="set_regularization_params_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">set_regularization_params_class_mlp</span><span data-if="c" style="display:none">set_regularization_params_class_mlp</span><span data-if="cpp" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="com" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="dotnet" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="python" style="display:none">set_regularization_params_class_mlp</span></code></a>, these parameters refer
to one training within one step of the evidence procedure.
<a href="#MaxIterations"><i><code><span data-if="hdevelop" style="display:inline">MaxIterations</span><span data-if="c" style="display:none">MaxIterations</span><span data-if="cpp" style="display:none">MaxIterations</span><span data-if="com" style="display:none">MaxIterations</span><span data-if="dotnet" style="display:none">maxIterations</span><span data-if="python" style="display:none">max_iterations</span></code></i></a> specifies the maximum number of iterations of
the optimization algorithm.  In practice, values between
<i>100</i> and <i>200</i> should be sufficient for most
problems.  <a href="#WeightTolerance"><i><code><span data-if="hdevelop" style="display:inline">WeightTolerance</span><span data-if="c" style="display:none">WeightTolerance</span><span data-if="cpp" style="display:none">WeightTolerance</span><span data-if="com" style="display:none">WeightTolerance</span><span data-if="dotnet" style="display:none">weightTolerance</span><span data-if="python" style="display:none">weight_tolerance</span></code></i></a> specifies a threshold for the
change of the weights per iteration.  Here, the absolute value of
the change of the weights between two iterations is summed.  Hence,
this value depends on the number of weights as well as the size of
the weights, which in turn depend on the scaling of the training
data.  Typically, values between <i>0.00001</i> and <i>1</i>
should be used.  <a href="#ErrorTolerance"><i><code><span data-if="hdevelop" style="display:inline">ErrorTolerance</span><span data-if="c" style="display:none">ErrorTolerance</span><span data-if="cpp" style="display:none">ErrorTolerance</span><span data-if="com" style="display:none">ErrorTolerance</span><span data-if="dotnet" style="display:none">errorTolerance</span><span data-if="python" style="display:none">error_tolerance</span></code></i></a> specifies a threshold for
the change of the error value per iteration.  This value depends on
the number of training samples as well as the number of output
variables of the MLP.  Also here, values between <i>0.00001</i>
and <i>1</i> should typically be used.  The optimization is
terminated if the weight change is smaller than
<a href="#WeightTolerance"><i><code><span data-if="hdevelop" style="display:inline">WeightTolerance</span><span data-if="c" style="display:none">WeightTolerance</span><span data-if="cpp" style="display:none">WeightTolerance</span><span data-if="com" style="display:none">WeightTolerance</span><span data-if="dotnet" style="display:none">weightTolerance</span><span data-if="python" style="display:none">weight_tolerance</span></code></i></a> and the change of the error value is
smaller than <a href="#ErrorTolerance"><i><code><span data-if="hdevelop" style="display:inline">ErrorTolerance</span><span data-if="c" style="display:none">ErrorTolerance</span><span data-if="cpp" style="display:none">ErrorTolerance</span><span data-if="com" style="display:none">ErrorTolerance</span><span data-if="dotnet" style="display:none">errorTolerance</span><span data-if="python" style="display:none">error_tolerance</span></code></i></a>.  In any case, the optimization
is terminated after at most <a href="#MaxIterations"><i><code><span data-if="hdevelop" style="display:inline">MaxIterations</span><span data-if="c" style="display:none">MaxIterations</span><span data-if="cpp" style="display:none">MaxIterations</span><span data-if="com" style="display:none">MaxIterations</span><span data-if="dotnet" style="display:none">maxIterations</span><span data-if="python" style="display:none">max_iterations</span></code></i></a> iterations.  It
should be noted that, depending on the size of the MLP and the
number of training samples, the training can take from a few seconds
to several hours.
</p>
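<p>The interplay of the three termination criteria can be summarized in plain Python. The function and its default values below are illustrative assumptions, not HALCON internals: the optimization stops early only when both tolerance criteria are met in the same iteration, and stops unconditionally once <code>MaxIterations</code> is reached.</p>

```python
def should_stop(iteration, weight_change, error_change,
                max_iterations=200, weight_tolerance=1.0,
                error_tolerance=0.01):
    """Hypothetical sketch of the documented stopping rule.

    weight_change: summed absolute change of all weights in the last
                   iteration (compared against WeightTolerance).
    error_change:  change of the error value in the last iteration
                   (compared against ErrorTolerance).
    """
    if iteration >= max_iterations:   # hard cap: MaxIterations
        return True
    # Early termination requires BOTH tolerances to be met at once.
    return weight_change < weight_tolerance and error_change < error_tolerance
```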
<p>On output, <code><span data-if="hdevelop" style="display:inline">train_class_mlp</span><span data-if="c" style="display:none">train_class_mlp</span><span data-if="cpp" style="display:none">TrainClassMlp</span><span data-if="com" style="display:none">TrainClassMlp</span><span data-if="dotnet" style="display:none">TrainClassMlp</span><span data-if="python" style="display:none">train_class_mlp</span></code> returns the error of the MLP with
the optimal weights on the training samples in <a href="#Error"><i><code><span data-if="hdevelop" style="display:inline">Error</span><span data-if="c" style="display:none">Error</span><span data-if="cpp" style="display:none">Error</span><span data-if="com" style="display:none">Error</span><span data-if="dotnet" style="display:none">error</span><span data-if="python" style="display:none">error</span></code></i></a>.
Furthermore, <a href="#ErrorLog"><i><code><span data-if="hdevelop" style="display:inline">ErrorLog</span><span data-if="c" style="display:none">ErrorLog</span><span data-if="cpp" style="display:none">ErrorLog</span><span data-if="com" style="display:none">ErrorLog</span><span data-if="dotnet" style="display:none">errorLog</span><span data-if="python" style="display:none">error_log</span></code></i></a> contains the error value as a
function of the number of iterations.  With this, it is possible to
decide whether a second training of the MLP with the same training
data without creating the MLP anew makes sense.  If
<a href="#ErrorLog"><i><code><span data-if="hdevelop" style="display:inline">ErrorLog</span><span data-if="c" style="display:none">ErrorLog</span><span data-if="cpp" style="display:none">ErrorLog</span><span data-if="com" style="display:none">ErrorLog</span><span data-if="dotnet" style="display:none">errorLog</span><span data-if="python" style="display:none">error_log</span></code></i></a> is regarded as a function, it should initially drop off
steeply and then level out almost completely at the end.  If
<a href="#ErrorLog"><i><code><span data-if="hdevelop" style="display:inline">ErrorLog</span><span data-if="c" style="display:none">ErrorLog</span><span data-if="cpp" style="display:none">ErrorLog</span><span data-if="com" style="display:none">ErrorLog</span><span data-if="dotnet" style="display:none">errorLog</span><span data-if="python" style="display:none">error_log</span></code></i></a> is still relatively steep at the end, it usually
makes sense to call <code><span data-if="hdevelop" style="display:inline">train_class_mlp</span><span data-if="c" style="display:none">train_class_mlp</span><span data-if="cpp" style="display:none">TrainClassMlp</span><span data-if="com" style="display:none">TrainClassMlp</span><span data-if="dotnet" style="display:none">TrainClassMlp</span><span data-if="python" style="display:none">train_class_mlp</span></code> again.  It should be
noted, however, that this mechanism should <i>not</i> be used to
train the MLP successively with <a href="#MaxIterations"><i><code><span data-if="hdevelop" style="display:inline">MaxIterations</span><span data-if="c" style="display:none">MaxIterations</span><span data-if="cpp" style="display:none">MaxIterations</span><span data-if="com" style="display:none">MaxIterations</span><span data-if="dotnet" style="display:none">maxIterations</span><span data-if="python" style="display:none">max_iterations</span></code></i></a> =
<i>1</i> (or other small values for <a href="#MaxIterations"><i><code><span data-if="hdevelop" style="display:inline">MaxIterations</span><span data-if="c" style="display:none">MaxIterations</span><span data-if="cpp" style="display:none">MaxIterations</span><span data-if="com" style="display:none">MaxIterations</span><span data-if="dotnet" style="display:none">maxIterations</span><span data-if="python" style="display:none">max_iterations</span></code></i></a>)
because this will substantially increase the number of iterations
required to train the MLP.  Note that if an automatic determination
of the regularization parameters has been specified with
<a href="set_regularization_params_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">set_regularization_params_class_mlp</span><span data-if="c" style="display:none">set_regularization_params_class_mlp</span><span data-if="cpp" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="com" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="dotnet" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="python" style="display:none">set_regularization_params_class_mlp</span></code></a>, <a href="#Error"><i><code><span data-if="hdevelop" style="display:inline">Error</span><span data-if="c" style="display:none">Error</span><span data-if="cpp" style="display:none">Error</span><span data-if="com" style="display:none">Error</span><span data-if="dotnet" style="display:none">error</span><span data-if="python" style="display:none">error</span></code></i></a> and
<a href="#ErrorLog"><i><code><span data-if="hdevelop" style="display:inline">ErrorLog</span><span data-if="c" style="display:none">ErrorLog</span><span data-if="cpp" style="display:none">ErrorLog</span><span data-if="com" style="display:none">ErrorLog</span><span data-if="dotnet" style="display:none">errorLog</span><span data-if="python" style="display:none">error_log</span></code></i></a> refer to the last training that was executed in
the evidence procedure.  If the error log is to be monitored within
the individual iterations of the evidence procedure, the outer
iteration of the evidence procedure must be implemented explicitly,
as described at <a href="set_regularization_params_class_mlp.html"><code><span data-if="hdevelop" style="display:inline">set_regularization_params_class_mlp</span><span data-if="c" style="display:none">set_regularization_params_class_mlp</span><span data-if="cpp" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="com" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="dotnet" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="python" style="display:none">set_regularization_params_class_mlp</span></code></a>.</p>
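The retraining heuristic above ("if the error log is still relatively steep at the end, call the training operator again") can be sketched in plain Python. The helper name and the threshold value below are hypothetical illustrations, not part of the HALCON API; the input is the `ErrorLog` tuple returned by a training run, i.e. the mean error as a function of the iteration number.

```python
def retraining_recommended(error_log, tail=5, rel_slope_threshold=0.01):
    """Heuristic sketch: recommend another training run if the error
    log is still dropping noticeably over its last `tail` iterations.

    `error_log` plays the role of the ErrorLog output of
    train_class_mlp; `tail` and `rel_slope_threshold` are illustrative
    choices, not HALCON defaults.
    """
    if len(error_log) < 2:
        return False
    window = error_log[-tail:]
    if window[0] <= 0.0:
        return False
    drop = window[0] - window[-1]  # decrease over the tail of the log
    # Relative decrease: if the curve is still "relatively steep" at
    # the end, a second training call usually makes sense.
    return drop / window[0] > rel_slope_threshold


# A log that levels off at the end -> no retraining needed.
flat_log = [1.0, 0.5, 0.2, 0.1, 0.09, 0.089, 0.0889, 0.0888, 0.0888, 0.0888]
# A log that is still dropping steeply at the end -> retrain.
steep_log = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
print(retraining_recommended(flat_log))   # False
print(retraining_recommended(steep_log))  # True
```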
<h2 id="sec_execution">Execution Information</h2>
<ul>
  <li>Multithreading type: reentrant (runs in parallel with non-exclusive operators).</li>
<li>Multithreading scope: global (may be called from any thread).</li>
  
    <li>Automatically parallelized on internal data level.</li>
  
</ul>
<p>This operator modifies the state of the following input parameter:</p>
<ul><li><a href="#MLPHandle"><span data-if="hdevelop" style="display:inline">MLPHandle</span><span data-if="c" style="display:none">MLPHandle</span><span data-if="cpp" style="display:none">MLPHandle</span><span data-if="com" style="display:none">MLPHandle</span><span data-if="dotnet" style="display:none">MLPHandle</span><span data-if="python" style="display:none">mlphandle</span></a></li></ul>
<p>During execution of this operator, access to the value of this parameter must be synchronized if it is used across multiple threads.</p>
<h2 id="sec_parameters">Parameters</h2>
  <div class="par">
<div class="parhead">
<span id="MLPHandle" class="parname"><b><code><span data-if="hdevelop" style="display:inline">MLPHandle</span><span data-if="c" style="display:none">MLPHandle</span><span data-if="cpp" style="display:none">MLPHandle</span><span data-if="com" style="display:none">MLPHandle</span><span data-if="dotnet" style="display:none">MLPHandle</span><span data-if="python" style="display:none">mlphandle</span></code></b> (input_control, state is modified)  </span><span>class_mlp <code>→</code> <span data-if="dotnet" style="display:none"><a href="HClassMlp.html">HClassMlp</a>, </span><span data-if="dotnet" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="python" style="display:none">HHandle</span><span data-if="cpp" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="c" style="display:none">Htuple</span><span data-if="hdevelop" style="display:inline"> (handle)</span><span data-if="dotnet" style="display:none"> (<i>IntPtr</i>)</span><span data-if="cpp" style="display:none"> (<i>HHandle</i>)</span><span data-if="c" style="display:none"> (<i>handle</i>)</span></span>
</div>
<p class="pardesc">MLP handle.</p>
</div>
  <div class="par">
<div class="parhead">
<span id="MaxIterations" class="parname"><b><code><span data-if="hdevelop" style="display:inline">MaxIterations</span><span data-if="c" style="display:none">MaxIterations</span><span data-if="cpp" style="display:none">MaxIterations</span><span data-if="com" style="display:none">MaxIterations</span><span data-if="dotnet" style="display:none">maxIterations</span><span data-if="python" style="display:none">max_iterations</span></code></b> (input_control)  </span><span>integer <code>→</code> <span data-if="dotnet" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="python" style="display:none">int</span><span data-if="cpp" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="c" style="display:none">Htuple</span><span data-if="hdevelop" style="display:inline"> (integer)</span><span data-if="dotnet" style="display:none"> (<i>int</i> / </span><span data-if="dotnet" style="display:none">long)</span><span data-if="cpp" style="display:none"> (<i>Hlong</i>)</span><span data-if="c" style="display:none"> (<i>Hlong</i>)</span></span>
</div>
<p class="pardesc">Maximum number of iterations of the
optimization algorithm.</p>
<p class="pardesc"><span class="parcat">Default:
      </span>200</p>
<p class="pardesc"><span class="parcat">Suggested values:
      </span>20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 220, 240, 260, 280, 300</p>
</div>
  <div class="par">
<div class="parhead">
<span id="WeightTolerance" class="parname"><b><code><span data-if="hdevelop" style="display:inline">WeightTolerance</span><span data-if="c" style="display:none">WeightTolerance</span><span data-if="cpp" style="display:none">WeightTolerance</span><span data-if="com" style="display:none">WeightTolerance</span><span data-if="dotnet" style="display:none">weightTolerance</span><span data-if="python" style="display:none">weight_tolerance</span></code></b> (input_control)  </span><span>real <code>→</code> <span data-if="dotnet" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="python" style="display:none">float</span><span data-if="cpp" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="c" style="display:none">Htuple</span><span data-if="hdevelop" style="display:inline"> (real)</span><span data-if="dotnet" style="display:none"> (<i>double</i>)</span><span data-if="cpp" style="display:none"> (<i>double</i>)</span><span data-if="c" style="display:none"> (<i>double</i>)</span></span>
</div>
<p class="pardesc">Threshold for the difference of the weights of
the MLP between two iterations of the
optimization algorithm.</p>
<p class="pardesc"><span class="parcat">Default:
      </span>1.0</p>
<p class="pardesc"><span class="parcat">Suggested values:
      </span>1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001</p>
<p class="pardesc"><span class="parcat">Restriction:
      </span><code>WeightTolerance &gt;= 1.0e-8</code></p>
</div>
  <div class="par">
<div class="parhead">
<span id="ErrorTolerance" class="parname"><b><code><span data-if="hdevelop" style="display:inline">ErrorTolerance</span><span data-if="c" style="display:none">ErrorTolerance</span><span data-if="cpp" style="display:none">ErrorTolerance</span><span data-if="com" style="display:none">ErrorTolerance</span><span data-if="dotnet" style="display:none">errorTolerance</span><span data-if="python" style="display:none">error_tolerance</span></code></b> (input_control)  </span><span>real <code>→</code> <span data-if="dotnet" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="python" style="display:none">float</span><span data-if="cpp" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="c" style="display:none">Htuple</span><span data-if="hdevelop" style="display:inline"> (real)</span><span data-if="dotnet" style="display:none"> (<i>double</i>)</span><span data-if="cpp" style="display:none"> (<i>double</i>)</span><span data-if="c" style="display:none"> (<i>double</i>)</span></span>
</div>
<p class="pardesc">Threshold for the difference of the mean error
of the MLP on the training data between two
iterations of the optimization algorithm.</p>
<p class="pardesc"><span class="parcat">Default:
      </span>0.01</p>
<p class="pardesc"><span class="parcat">Suggested values:
      </span>1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001</p>
<p class="pardesc"><span class="parcat">Restriction:
      </span><code>ErrorTolerance &gt;= 1.0e-8</code></p>
</div>
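Taken together, the parameters <code>MaxIterations</code>, <code>WeightTolerance</code>, and <code>ErrorTolerance</code> define the stopping rule of the optimization: training ends at the iteration limit, or earlier once the per-iteration change of the weights and of the mean error falls below the tolerances. The following plain-Python loop is an illustrative sketch of such a stopping rule only; it is not HALCON's actual training algorithm, and the exact way HALCON combines the criteria is internal to the library.

```python
def optimize(step, w0, max_iterations=200, weight_tolerance=1.0,
             error_tolerance=0.01):
    """Iterate `step` (weights -> (new_weights, mean_error)) with
    HALCON-style stopping criteria.  Returns the final weights, the
    final error, and the error history (mirroring Error / ErrorLog).
    Purely illustrative; not the HALCON implementation.
    """
    w, prev_err = list(w0), None
    error_log = []                        # corresponds to ErrorLog
    for _ in range(max_iterations):       # corresponds to MaxIterations
        w_new, err = step(w)
        error_log.append(err)
        # Largest per-weight change in this iteration.
        dw = max(abs(a - b) for a, b in zip(w_new, w))
        if (prev_err is not None and dw < weight_tolerance
                and abs(prev_err - err) < error_tolerance):
            w = w_new
            break                         # both changes below tolerance
        w, prev_err = w_new, err
    return w, error_log[-1], error_log

# Toy usage: minimize (w - 3)^2 by repeatedly halving the distance to 3.
step = lambda w: ([w[0] - 0.5 * (w[0] - 3.0)], (w[0] - 3.0) ** 2 / 4)
w, err, log = optimize(step, [10.0], weight_tolerance=1e-4,
                       error_tolerance=1e-8)
```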
  <div class="par">
<div class="parhead">
<span id="Error" class="parname"><b><code><span data-if="hdevelop" style="display:inline">Error</span><span data-if="c" style="display:none">Error</span><span data-if="cpp" style="display:none">Error</span><span data-if="com" style="display:none">Error</span><span data-if="dotnet" style="display:none">error</span><span data-if="python" style="display:none">error</span></code></b> (output_control)  </span><span>real <code>→</code> <span data-if="dotnet" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="python" style="display:none">float</span><span data-if="cpp" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="c" style="display:none">Htuple</span><span data-if="hdevelop" style="display:inline"> (real)</span><span data-if="dotnet" style="display:none"> (<i>double</i>)</span><span data-if="cpp" style="display:none"> (<i>double</i>)</span><span data-if="c" style="display:none"> (<i>double</i>)</span></span>
</div>
<p class="pardesc">Mean error of the MLP on the training data.</p>
</div>
  <div class="par">
<div class="parhead">
<span id="ErrorLog" class="parname"><b><code><span data-if="hdevelop" style="display:inline">ErrorLog</span><span data-if="c" style="display:none">ErrorLog</span><span data-if="cpp" style="display:none">ErrorLog</span><span data-if="com" style="display:none">ErrorLog</span><span data-if="dotnet" style="display:none">errorLog</span><span data-if="python" style="display:none">error_log</span></code></b> (output_control)  </span><span>real-array <code>→</code> <span data-if="dotnet" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="python" style="display:none">Sequence[float]</span><span data-if="cpp" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="c" style="display:none">Htuple</span><span data-if="hdevelop" style="display:inline"> (real)</span><span data-if="dotnet" style="display:none"> (<i>double</i>)</span><span data-if="cpp" style="display:none"> (<i>double</i>)</span><span data-if="c" style="display:none"> (<i>double</i>)</span></span>
</div>
<p class="pardesc">Mean error of the MLP on the training data as a
function of the number of iterations of the
optimization algorithm.</p>
</div>
<h2 id="sec_example_all">Example (HDevelop)</h2>
<pre class="example">
* Train an MLP
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'normalization', 1, 42, MLPHandle)
read_samples_class_mlp (MLPHandle, 'samples.mtf')
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, 'classifier.mlp')
</pre>
<h2 id="sec_result">Result</h2>
<p>If the parameters are valid, the operator <code><span data-if="hdevelop" style="display:inline">train_class_mlp</span><span data-if="c" style="display:none">train_class_mlp</span><span data-if="cpp" style="display:none">TrainClassMlp</span><span data-if="com" style="display:none">TrainClassMlp</span><span data-if="dotnet" style="display:none">TrainClassMlp</span><span data-if="python" style="display:none">train_class_mlp</span></code>
returns the value <TT>2</TT> (<TT>H_MSG_TRUE</TT>).  If necessary, an exception is
raised.
</p>
<p><code><span data-if="hdevelop" style="display:inline">train_class_mlp</span><span data-if="c" style="display:none">train_class_mlp</span><span data-if="cpp" style="display:none">TrainClassMlp</span><span data-if="com" style="display:none">TrainClassMlp</span><span data-if="dotnet" style="display:none">TrainClassMlp</span><span data-if="python" style="display:none">train_class_mlp</span></code> may return the error 9211 (Matrix is not
positive definite) if <code><span data-if="hdevelop" style="display:inline">Preprocessing</span><span data-if="c" style="display:none">Preprocessing</span><span data-if="cpp" style="display:none">Preprocessing</span><span data-if="com" style="display:none">Preprocessing</span><span data-if="dotnet" style="display:none">preprocessing</span><span data-if="python" style="display:none">preprocessing</span></code> =
<i><span data-if="hdevelop" style="display:inline">'canonical_variates'</span><span data-if="c" style="display:none">"canonical_variates"</span><span data-if="cpp" style="display:none">"canonical_variates"</span><span data-if="com" style="display:none">"canonical_variates"</span><span data-if="dotnet" style="display:none">"canonical_variates"</span><span data-if="python" style="display:none">"canonical_variates"</span></i> is used.  This typically indicates
that not enough training samples have been stored for each class.</p>
<h2 id="sec_predecessors">Possible Predecessors</h2>
<p>
<code><a href="add_sample_class_mlp.html"><span data-if="hdevelop" style="display:inline">add_sample_class_mlp</span><span data-if="c" style="display:none">add_sample_class_mlp</span><span data-if="cpp" style="display:none">AddSampleClassMlp</span><span data-if="com" style="display:none">AddSampleClassMlp</span><span data-if="dotnet" style="display:none">AddSampleClassMlp</span><span data-if="python" style="display:none">add_sample_class_mlp</span></a></code>, 
<code><a href="read_samples_class_mlp.html"><span data-if="hdevelop" style="display:inline">read_samples_class_mlp</span><span data-if="c" style="display:none">read_samples_class_mlp</span><span data-if="cpp" style="display:none">ReadSamplesClassMlp</span><span data-if="com" style="display:none">ReadSamplesClassMlp</span><span data-if="dotnet" style="display:none">ReadSamplesClassMlp</span><span data-if="python" style="display:none">read_samples_class_mlp</span></a></code>, 
<code><a href="set_regularization_params_class_mlp.html"><span data-if="hdevelop" style="display:inline">set_regularization_params_class_mlp</span><span data-if="c" style="display:none">set_regularization_params_class_mlp</span><span data-if="cpp" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="com" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="dotnet" style="display:none">SetRegularizationParamsClassMlp</span><span data-if="python" style="display:none">set_regularization_params_class_mlp</span></a></code>
</p>
<h2 id="sec_successors">Possible Successors</h2>
<p>
<code><a href="evaluate_class_mlp.html"><span data-if="hdevelop" style="display:inline">evaluate_class_mlp</span><span data-if="c" style="display:none">evaluate_class_mlp</span><span data-if="cpp" style="display:none">EvaluateClassMlp</span><span data-if="com" style="display:none">EvaluateClassMlp</span><span data-if="dotnet" style="display:none">EvaluateClassMlp</span><span data-if="python" style="display:none">evaluate_class_mlp</span></a></code>, 
<code><a href="classify_class_mlp.html"><span data-if="hdevelop" style="display:inline">classify_class_mlp</span><span data-if="c" style="display:none">classify_class_mlp</span><span data-if="cpp" style="display:none">ClassifyClassMlp</span><span data-if="com" style="display:none">ClassifyClassMlp</span><span data-if="dotnet" style="display:none">ClassifyClassMlp</span><span data-if="python" style="display:none">classify_class_mlp</span></a></code>, 
<code><a href="write_class_mlp.html"><span data-if="hdevelop" style="display:inline">write_class_mlp</span><span data-if="c" style="display:none">write_class_mlp</span><span data-if="cpp" style="display:none">WriteClassMlp</span><span data-if="com" style="display:none">WriteClassMlp</span><span data-if="dotnet" style="display:none">WriteClassMlp</span><span data-if="python" style="display:none">write_class_mlp</span></a></code>, 
<code><a href="create_class_lut_mlp.html"><span data-if="hdevelop" style="display:inline">create_class_lut_mlp</span><span data-if="c" style="display:none">create_class_lut_mlp</span><span data-if="cpp" style="display:none">CreateClassLutMlp</span><span data-if="com" style="display:none">CreateClassLutMlp</span><span data-if="dotnet" style="display:none">CreateClassLutMlp</span><span data-if="python" style="display:none">create_class_lut_mlp</span></a></code>
</p>
<h2 id="sec_alternatives">Alternatives</h2>
<p>
<code><a href="train_dl_classifier_batch.html"><span data-if="hdevelop" style="display:inline">train_dl_classifier_batch</span><span data-if="c" style="display:none">train_dl_classifier_batch</span><span data-if="cpp" style="display:none">TrainDlClassifierBatch</span><span data-if="com" style="display:none">TrainDlClassifierBatch</span><span data-if="dotnet" style="display:none">TrainDlClassifierBatch</span><span data-if="python" style="display:none">train_dl_classifier_batch</span></a></code>, 
<code><a href="read_class_mlp.html"><span data-if="hdevelop" style="display:inline">read_class_mlp</span><span data-if="c" style="display:none">read_class_mlp</span><span data-if="cpp" style="display:none">ReadClassMlp</span><span data-if="com" style="display:none">ReadClassMlp</span><span data-if="dotnet" style="display:none">ReadClassMlp</span><span data-if="python" style="display:none">read_class_mlp</span></a></code>
</p>
<h2 id="sec_see">See also</h2>
<p>
<code><a href="create_class_mlp.html"><span data-if="hdevelop" style="display:inline">create_class_mlp</span><span data-if="c" style="display:none">create_class_mlp</span><span data-if="cpp" style="display:none">CreateClassMlp</span><span data-if="com" style="display:none">CreateClassMlp</span><span data-if="dotnet" style="display:none">CreateClassMlp</span><span data-if="python" style="display:none">create_class_mlp</span></a></code>
</p>
<h2 id="sec_references">References</h2>
<p>
Christopher M. Bishop: “Neural Networks for Pattern Recognition”;
Oxford University Press, Oxford; 1995.
<br>
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London;
1999.
</p>
<h2 id="sec_module">Module</h2>
<p>
Foundation</p>
<!--OP_REF_FOOTER_START-->
<hr>
<div class="indexlink">
<a href="index_classes.html"><span data-if="dotnet" style="display:none;">Classes</span><span data-if="cpp" style="display:none;">Classes</span></a><span data-if="dotnet" style="display:none;"> | </span><span data-if="cpp" style="display:none;"> | </span><a href="index_by_name.html">Operators</a>
</div>
<div class="footer">
<div class="copyright">HALCON Operator Reference Copyright © 2015-2023 51Halcon</div>
</div>
</div>
</body>
</html>
