<!DOCTYPE html>
<html lang="en-US">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>train_class_svm [HALCON Operator Reference]</title>
<style type="text/css">
      body {
    color: #000000;
    background-color: #ffffff;
    margin: 0;
    font-family: Arial, Helvetica, sans-serif;
}

.body_main {
    margin-left: 35px;
    margin-right: 35px;
}

@media screen and (min-width:992px) {

    .body_main {
        margin-left: 10%;
        margin-right: 10%;
    }

    table.toctable {
        width: 80%
    }
}

@media screen and (min-width:1400px) {

    .body_main {
        margin-left: 15%;
        margin-right: 15%;
    }

    table.toctable {
        width: 70%
    }
}

body>div ul ul {
    margin-left: inherit;
}

a:link {
    color: #0044cc;
}

a:link,
a:visited {
    text-decoration: none;
}

a:link:hover,
a:visited:hover {
    text-decoration: underline;
}

th {
    text-align: left;
}

h1,
h2,
h3,
h4,
h5,
h6 {
    text-rendering: optimizeLegibility;
    color: #666666;
}

code {
    font-family: monospace,monospace;
}

h1 a.halconoperator {
    font-family: Arial, Helvetica, sans-serif;
    color: #666666;
}

h2 a.halconoperator {
    font-family: Arial, Helvetica, sans-serif;
    color: #666666;
}

hr {
    border: 0;
    border-top: solid 1px #f28d26;
}

.pre {
    display: block;
    padding-bottom: 1ex;
    font-family: monospace;
    white-space: pre;
}

pre {
    font-family: monospace, monospace;
    padding: 1ex;
    white-space: pre-wrap;
}

.toc {
    font-size: 80%;
    border-top: 1px dashed #f28d26;
    border-bottom: 1px dashed #f28d26;
    padding-top: 5px;
    padding-bottom: 5px;
}

.inv {
    margin: 0;
    border: 0;
    padding: 0;
}

.banner {
    color: #666666;
    padding-left: 1em;
}

.logo {
    background-color: white;
}

.keyboard {
    font-size: 80%;
    padding-left: 3px;
    padding-right: 3px;
    border-radius: 5px;
    border-width: 1px;
    border-style: solid;
    border-color: #f28d26;
    background-color: #f3f3f3;
}

.warning {
    margin-top: 2ex;
    margin-bottom: 1ex;
    padding: 10px;
    text-align: center;
    border: 1px solid;
    color: #bb0000;
    background-color: #fff7f7
}

.imprint {
    margin-top: 1ex;
    font-size: 80%;
    color: #666666;
}

.imprinthead {
    font-weight: bolder;
    color: #666666;
}

.indexlink {
    text-align: right;
    padding-bottom: 5px;
}

.postscript {
    margin-top: 2ex;
    font-size: 80%;
    color: #666666
}

.evenrow {
    background-color: #e7e7ef;
    vertical-align: top;
}

.oddrow {
    background-color: #f7f7ff;
    vertical-align: top;
}

.headrow {
    background-color: #97979f;
    color: #ffffff;
    vertical-align: top;
}

.logorow {
    vertical-align: top;
}

.error {
    color: red;
}

.var {
    font-style: italic
}

.halconoperator {
    font-family: monospace, monospace;
}

span.operator {
    font-family: monospace, monospace;
}

span.procedure {
    font-family: monospace, monospace;
}

span.operation {
    font-family: monospace, monospace;
}

span.feature {
    font-family: Arial, Helvetica, Homerton, sans-serif;
}

ul {
    padding-left: 1.2em;
}

li.dot {
    list-style-type: square;
    color: #f28d26;
}

.breadcrumb {
    font-size: 80%;
    color: white;
    background-color: #062d64;
    margin-bottom: 20px;
    padding-left: 35px;
    padding-right: 35px;
    padding-bottom: 15px;
}

.webbar {
    font-size: 80%;
    background-color: #dddddd;
    margin-top: 0px;
    margin-left: -35px;
    margin-right: -35px;
    margin-bottom: 0px;
    padding-top: 5px;
    padding-left: 35px;
    padding-right: 35px;
    padding-bottom: 5px;
}

.footer {
    display: flex;
    flex-wrap: wrap;
    justify-content: space-between;
    border-top: 1px dashed #f28d26;
    font-size: 80%;
    color: #666666;
    padding-bottom: 8px;
}

.footer .socialmedia a {
    padding-left: 7px;
}

.socialmedia {
    padding-top: 10px;
}

.copyright {
    margin-top: 19px;
}

.breadcrumb a {
    color: #ffffff;
    border-bottom: 1px solid white;
}

.breadcrumb a:link:hover,
.breadcrumb a:visited:hover {
    text-decoration: none;
    border-bottom: none;
}

.heading {
    margin-top: 1ex;
    font-size: 110%;
    font-weight: bold;
    color: #666666;
}

.text {
    color: black;
}

.example {
    font-size: 80%;
    background-color: #f3f3f3;
    border: 1px dashed #666666;
    padding: 10px;
}

.displaymath {
    display: block;
    text-align: center;
    margin-top: 1ex;
    margin-bottom: 1ex;
}

.title {
    float: left;
    padding-top: 3px;
    padding-bottom: 3px;
}

.signnote {
    font-family: Arial, Helvetica, Homerton, sans-serif;
    font-size: 80%;
    color: #666666;
    font-weight: lighter;
    font-style: italic
}

.par {
    margin-bottom: 1.5em;
}

.parhead {
    text-align: right;
}

.parname {
    float: left;
}

.pardesc {
    font-size: 85%;
    margin-top: 0.5em;
    margin-bottom: 0.5em;
    margin-left: 2em;
}

.parcat {
    color: #666;
    font-weight: bold;
}

*[data-if=cpp],
*[data-if=c],
*[data-if=dotnet],
*[data-if=com],
*[data-if=python] {
    display: none;
}

.tabbar {
    text-align: right;
    border-bottom: 1px solid #f28d26;
    margin-bottom: 0.5em;
}

ul.tabs {
    padding-top: 3px;
    padding-bottom: 3px;
    margin-top: 10px;
    margin-bottom: 0;
    font-size: 80%
}

ul.tabs li {
    padding-top: 3px;
    padding-bottom: 3px;
    display: inline;
    overflow: hidden;
    list-style-type: none;
    margin: 0;
    margin-left: 8px;
    border-top: 1px solid #666;
    border-left: 1px solid #666;
    border-right: 1px solid #666;
}

ul.tabs li.active {
    border-left: 1px solid #f28d26;
    border-right: 1px solid #f28d26;
    border-top: 1px solid #f28d26;
    border-bottom: 1px solid #fff;
}

ul.tabs li.inactive {
    background-color: #eee;
}

ul.tabs li a {
    padding-left: 5px;
    padding-right: 5px;
    color: #666;
}

ul.tabs li a:link:hover {
    text-decoration: none;
}

ul.tabs li.inactive a {
    color: #666;
}

ul.tabs li.active a {
    color: black;
}

dl.generic dd {
    margin-bottom: 1em;
}

.pari {
    color: olive;
}

.paro {
    color: maroon;
}

.comment {
    font-size: 80%;
    color: green;
    white-space: nowrap;
}

table.grid {
    border-collapse: collapse;
}

table.grid td {
    padding: 5px;
    border: 1px solid;
}

table.layout {
    border: 0px;
}

table.layout td {
    padding: 5px;
}

table.table {
    border-collapse: collapse;
}

table.table td {
    padding: 5px;
    border-left: 0px;
    border-right: 0px;
}

table.table tr:last-child {
    border-bottom: 1px solid;
}

table.table th {
    padding: 5px;
    border-top: 1px solid;
    border-bottom: 1px solid;
    border-left: 0px;
    border-right: 0px;
}

.details_summary {
    cursor: pointer;
}

table.toctable {
    width: 100%;
}

table.toctable col:first-child {
    width: 20%;
}

table.toctable col:nth-last-child(2) {
    width: 8%;
}

table.altcolored tr:nth-child(even) {
    background-color: #f3f3f3;
}

    </style>
<!--OP_REF_STYLE_END-->
<script>
    <!--
var active_lang='hdevelop';function switchVisibility(obj,active_lang,new_lang)
{var display_style='inline';
for(var i=0;i<obj.length;i++)
{if(obj.item(i).getAttribute('data-if')==new_lang)
{obj.item(i).style.display=display_style;}
if(obj.item(i).getAttribute('data-if')==active_lang)
{obj.item(i).style.display='none';}}
return;}
function toggleLanguage(new_lang,initial)
{if(active_lang!=new_lang)
{var lis=document.getElementsByTagName('li');for(var i=0;i<lis.length;i++)
{if(lis.item(i).id=='syn-'+new_lang)
{lis.item(i).className='active';}
else
{lis.item(i).className='inactive';}}
var divs=document.getElementsByTagName('div');var spans=document.getElementsByTagName('span');switchVisibility(divs,active_lang,new_lang);switchVisibility(spans,active_lang,new_lang);if(!initial)
{setCookie("halcon_reference_language",new_lang,null,null);}
active_lang=new_lang;}
return;}
function setCookie(name,value,domain,exp_offset,path,secure)
{localStorage.setItem(name,value);}
function getCookie(name)
{return localStorage.getItem(name);}
function initialize()
{var qs=location.href.split('?')[1];var qs_lang_raw=location.href.split('interface=')[1];var qs_lang;if(qs_lang_raw)
{qs_lang=qs_lang_raw.split('#')[0];}
var cookie_lang=getCookie("halcon_reference_language");var new_lang;if((qs_lang=="hdevelop")||(qs_lang=="dotnet")||(qs_lang=="python")||(qs_lang=="cpp")||(qs_lang=="c"))
{new_lang=qs_lang;setCookie("halcon_reference_language",new_lang,null,null);}
else if((cookie_lang=="hdevelop")||(cookie_lang=="dotnet")||(cookie_lang=="python")||(cookie_lang=="cpp")||(cookie_lang=="c"))
{new_lang=cookie_lang;}
else
{new_lang="hdevelop";}
toggleLanguage(new_lang,1);return;}
-->

  </script>
</head>
<body onload="initialize();">
<div class="breadcrumb">
<br class="inv"><a href="index.html">Table of Contents</a> / <a href="toc_classification.html">Classification</a> / <a href="toc_classification_supportvectormachines.html">Support Vector Machines</a><br class="inv">
</div>
<div class="body_main">
<div class="tabbar"><ul class="tabs">
<li id="syn-hdevelop" class="active"><a href="javascript:void(0);" onclick="toggleLanguage('hdevelop')" onfocus="blur()">HDevelop</a></li>
<li id="syn-dotnet" class="inactive"><a href="javascript:void(0);" onclick="toggleLanguage('dotnet')" onfocus="blur()">.NET</a></li>
<li id="syn-python" class="inactive"><a href="javascript:void(0);" onclick="toggleLanguage('python')" onfocus="blur()">Python</a></li>
<li id="syn-cpp" class="inactive"><a href="javascript:void(0);" onclick="toggleLanguage('cpp')" onfocus="blur()">C++</a></li>
<li id="syn-c" class="inactive"><a href="javascript:void(0);" onclick="toggleLanguage('c')" onfocus="blur()">C</a></li>
</ul></div>
<div class="indexlink">
<a href="index_classes.html"><span data-if="dotnet" style="display:none;">Classes</span><span data-if="cpp" style="display:none;">Classes</span></a><span data-if="dotnet" style="display:none;"> | </span><span data-if="cpp" style="display:none;"> | </span><a href="index_by_name.html">Operators</a>
</div>
<!--OP_REF_HEADER_END-->
<h1 id="sec_name">
<span data-if="hdevelop" style="display:inline;">train_class_svm</span><span data-if="c" style="display:none;">T_train_class_svm</span><span data-if="cpp" style="display:none;">TrainClassSvm</span><span data-if="dotnet" style="display:none;">TrainClassSvm</span><span data-if="python" style="display:none;">train_class_svm</span> (Operator)</h1>
<h2>Name</h2>
<p><code><span data-if="hdevelop" style="display:inline;">train_class_svm</span><span data-if="c" style="display:none;">T_train_class_svm</span><span data-if="cpp" style="display:none;">TrainClassSvm</span><span data-if="dotnet" style="display:none;">TrainClassSvm</span><span data-if="python" style="display:none;">train_class_svm</span></code> — Train a support vector machine.</p>
<h2 id="sec_synopsis">Signature</h2>
<div data-if="hdevelop" style="display:inline;">
<p>
<code><b>train_class_svm</b>( :  : <a href="#SVMHandle"><i>SVMHandle</i></a>, <a href="#Epsilon"><i>Epsilon</i></a>, <a href="#TrainMode"><i>TrainMode</i></a> : )</code></p>
</div>
<div data-if="c" style="display:none;">
<p>
<code>Herror <b>T_train_class_svm</b>(const Htuple <a href="#SVMHandle"><i>SVMHandle</i></a>, const Htuple <a href="#Epsilon"><i>Epsilon</i></a>, const Htuple <a href="#TrainMode"><i>TrainMode</i></a>)</code></p>
</div>
<div data-if="cpp" style="display:none;">
<p>
<code>void <b>TrainClassSvm</b>(const HTuple&amp; <a href="#SVMHandle"><i>SVMHandle</i></a>, const HTuple&amp; <a href="#Epsilon"><i>Epsilon</i></a>, const HTuple&amp; <a href="#TrainMode"><i>TrainMode</i></a>)</code></p>
<p>
<code>void <a href="HClassSvm.html">HClassSvm</a>::<b>TrainClassSvm</b>(double <a href="#Epsilon"><i>Epsilon</i></a>, const HTuple&amp; <a href="#TrainMode"><i>TrainMode</i></a>) const</code></p>
<p>
<code>void <a href="HClassSvm.html">HClassSvm</a>::<b>TrainClassSvm</b>(double <a href="#Epsilon"><i>Epsilon</i></a>, const HString&amp; <a href="#TrainMode"><i>TrainMode</i></a>) const</code></p>
<p>
<code>void <a href="HClassSvm.html">HClassSvm</a>::<b>TrainClassSvm</b>(double <a href="#Epsilon"><i>Epsilon</i></a>, const char* <a href="#TrainMode"><i>TrainMode</i></a>) const</code></p>
<p>
<code>void <a href="HClassSvm.html">HClassSvm</a>::<b>TrainClassSvm</b>(double <a href="#Epsilon"><i>Epsilon</i></a>, const wchar_t* <a href="#TrainMode"><i>TrainMode</i></a>) const  <span class="signnote">(Windows only)</span></code></p>
</div>
<div data-if="com" style="display:none;"></div>
<div data-if="dotnet" style="display:none;">
<p>
<code>static void <a href="HOperatorSet.html">HOperatorSet</a>.<b>TrainClassSvm</b>(<a href="HTuple.html">HTuple</a> <a href="#SVMHandle"><i>SVMHandle</i></a>, <a href="HTuple.html">HTuple</a> <a href="#Epsilon"><i>epsilon</i></a>, <a href="HTuple.html">HTuple</a> <a href="#TrainMode"><i>trainMode</i></a>)</code></p>
<p>
<code>void <a href="HClassSvm.html">HClassSvm</a>.<b>TrainClassSvm</b>(double <a href="#Epsilon"><i>epsilon</i></a>, <a href="HTuple.html">HTuple</a> <a href="#TrainMode"><i>trainMode</i></a>)</code></p>
<p>
<code>void <a href="HClassSvm.html">HClassSvm</a>.<b>TrainClassSvm</b>(double <a href="#Epsilon"><i>epsilon</i></a>, string <a href="#TrainMode"><i>trainMode</i></a>)</code></p>
</div>
<div data-if="python" style="display:none;">
<p>
<code>def <b>train_class_svm</b>(<a href="#SVMHandle"><i>svmhandle</i></a>: HHandle, <a href="#Epsilon"><i>epsilon</i></a>: float, <a href="#TrainMode"><i>train_mode</i></a>: Union[str, int]) -&gt; None</code></p>
</div>
<h2 id="sec_description">Description</h2>
<p><code><span data-if="hdevelop" style="display:inline">train_class_svm</span><span data-if="c" style="display:none">train_class_svm</span><span data-if="cpp" style="display:none">TrainClassSvm</span><span data-if="com" style="display:none">TrainClassSvm</span><span data-if="dotnet" style="display:none">TrainClassSvm</span><span data-if="python" style="display:none">train_class_svm</span></code> trains the support vector machine (SVM)
given in <a href="#SVMHandle"><i><code><span data-if="hdevelop" style="display:inline">SVMHandle</span><span data-if="c" style="display:none">SVMHandle</span><span data-if="cpp" style="display:none">SVMHandle</span><span data-if="com" style="display:none">SVMHandle</span><span data-if="dotnet" style="display:none">SVMHandle</span><span data-if="python" style="display:none">svmhandle</span></code></i></a>. Before the SVM can be trained, the
training samples to be used for the training must be added to the
SVM using <a href="add_sample_class_svm.html"><code><span data-if="hdevelop" style="display:inline">add_sample_class_svm</span><span data-if="c" style="display:none">add_sample_class_svm</span><span data-if="cpp" style="display:none">AddSampleClassSvm</span><span data-if="com" style="display:none">AddSampleClassSvm</span><span data-if="dotnet" style="display:none">AddSampleClassSvm</span><span data-if="python" style="display:none">add_sample_class_svm</span></code></a> or
<a href="read_samples_class_svm.html"><code><span data-if="hdevelop" style="display:inline">read_samples_class_svm</span><span data-if="c" style="display:none">read_samples_class_svm</span><span data-if="cpp" style="display:none">ReadSamplesClassSvm</span><span data-if="com" style="display:none">ReadSamplesClassSvm</span><span data-if="dotnet" style="display:none">ReadSamplesClassSvm</span><span data-if="python" style="display:none">read_samples_class_svm</span></code></a>.
</p>
<p>Technically, training an SVM means solving a convex quadratic
optimization problem.  This guarantees that the training terminates
at the global optimum after a finite number of steps.  Termination is
detected when the gradient of the internally optimized function
falls below a threshold, which is set in
<a href="#Epsilon"><i><code><span data-if="hdevelop" style="display:inline">Epsilon</span><span data-if="c" style="display:none">Epsilon</span><span data-if="cpp" style="display:none">Epsilon</span><span data-if="com" style="display:none">Epsilon</span><span data-if="dotnet" style="display:none">epsilon</span><span data-if="python" style="display:none">epsilon</span></code></i></a>.  By default, a value of <i>0.001</i> should be
used for <a href="#Epsilon"><i><code><span data-if="hdevelop" style="display:inline">Epsilon</span><span data-if="c" style="display:none">Epsilon</span><span data-if="cpp" style="display:none">Epsilon</span><span data-if="com" style="display:none">Epsilon</span><span data-if="dotnet" style="display:none">epsilon</span><span data-if="python" style="display:none">epsilon</span></code></i></a> since this yields the best results in
practice.  Too large a value leads to premature termination and may
result in a suboptimal solution; too small a value prolongs the
optimization, often without significantly changing the recognition
rate.  Nevertheless, if longer training times are acceptable, a value
smaller than <i>0.001</i> may be chosen.  There are two common reasons for
changing <a href="#Epsilon"><i><code><span data-if="hdevelop" style="display:inline">Epsilon</span><span data-if="c" style="display:none">Epsilon</span><span data-if="cpp" style="display:none">Epsilon</span><span data-if="com" style="display:none">Epsilon</span><span data-if="dotnet" style="display:none">epsilon</span><span data-if="python" style="display:none">epsilon</span></code></i></a>: First, if you specified a very small value
for <code><span data-if="hdevelop" style="display:inline">Nu</span><span data-if="c" style="display:none">Nu</span><span data-if="cpp" style="display:none">Nu</span><span data-if="com" style="display:none">Nu</span><span data-if="dotnet" style="display:none">nu</span><span data-if="python" style="display:none">nu</span></code> when calling <a href="create_class_svm.html"><code><span data-if="hdevelop" style="display:inline">create_class_svm</span><span data-if="c" style="display:none">create_class_svm</span><span data-if="cpp" style="display:none">CreateClassSvm</span><span data-if="com" style="display:none">CreateClassSvm</span><span data-if="dotnet" style="display:none">CreateClassSvm</span><span data-if="python" style="display:none">create_class_svm</span></code></a>,
e.g., <code><span data-if="hdevelop" style="display:inline">Nu</span><span data-if="c" style="display:none">Nu</span><span data-if="cpp" style="display:none">Nu</span><span data-if="com" style="display:none">Nu</span><span data-if="dotnet" style="display:none">nu</span><span data-if="python" style="display:none">nu</span></code> = <i>0.001</i>, a smaller <a href="#Epsilon"><i><code><span data-if="hdevelop" style="display:inline">Epsilon</span><span data-if="c" style="display:none">Epsilon</span><span data-if="cpp" style="display:none">Epsilon</span><span data-if="com" style="display:none">Epsilon</span><span data-if="dotnet" style="display:none">epsilon</span><span data-if="python" style="display:none">epsilon</span></code></i></a> might
significantly improve the recognition rate.  A second case is the
determination of the optimal kernel function and its parametrization (e.g.,
the <code><span data-if="hdevelop" style="display:inline">KernelParam</span><span data-if="c" style="display:none">KernelParam</span><span data-if="cpp" style="display:none">KernelParam</span><span data-if="com" style="display:none">KernelParam</span><span data-if="dotnet" style="display:none">kernelParam</span><span data-if="python" style="display:none">kernel_param</span></code>-<code><span data-if="hdevelop" style="display:inline">Nu</span><span data-if="c" style="display:none">Nu</span><span data-if="cpp" style="display:none">Nu</span><span data-if="com" style="display:none">Nu</span><span data-if="dotnet" style="display:none">nu</span><span data-if="python" style="display:none">nu</span></code> pair for the RBF kernel) with the
computationally intensive n-fold cross validation.  Here, choosing a
bigger <a href="#Epsilon"><i><code><span data-if="hdevelop" style="display:inline">Epsilon</span><span data-if="c" style="display:none">Epsilon</span><span data-if="cpp" style="display:none">Epsilon</span><span data-if="com" style="display:none">Epsilon</span><span data-if="dotnet" style="display:none">epsilon</span><span data-if="python" style="display:none">epsilon</span></code></i></a> reduces the computational time without
changing the parameters of the optimal kernel that would be obtained
when using the default <a href="#Epsilon"><i><code><span data-if="hdevelop" style="display:inline">Epsilon</span><span data-if="c" style="display:none">Epsilon</span><span data-if="cpp" style="display:none">Epsilon</span><span data-if="com" style="display:none">Epsilon</span><span data-if="dotnet" style="display:none">epsilon</span><span data-if="python" style="display:none">epsilon</span></code></i></a>.  After the optimal
<code><span data-if="hdevelop" style="display:inline">KernelParam</span><span data-if="c" style="display:none">KernelParam</span><span data-if="cpp" style="display:none">KernelParam</span><span data-if="com" style="display:none">KernelParam</span><span data-if="dotnet" style="display:none">kernelParam</span><span data-if="python" style="display:none">kernel_param</span></code>-<code><span data-if="hdevelop" style="display:inline">Nu</span><span data-if="c" style="display:none">Nu</span><span data-if="cpp" style="display:none">Nu</span><span data-if="com" style="display:none">Nu</span><span data-if="dotnet" style="display:none">nu</span><span data-if="python" style="display:none">nu</span></code> pair is obtained, the final
training is conducted with a small <a href="#Epsilon"><i><code><span data-if="hdevelop" style="display:inline">Epsilon</span><span data-if="c" style="display:none">Epsilon</span><span data-if="cpp" style="display:none">Epsilon</span><span data-if="com" style="display:none">Epsilon</span><span data-if="dotnet" style="display:none">epsilon</span><span data-if="python" style="display:none">epsilon</span></code></i></a>.
</p>
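<p>For illustration, a minimal HDevelop sketch of the default training workflow (the kernel parameters and the variables <code>Features</code> and <code>Class</code> are placeholders chosen for this sketch, not values prescribed by this reference):</p>
<div class="example"><pre>
* Create an SVM with an RBF kernel (2 features, 2 classes).
create_class_svm (2, 'rbf', 0.02, 0.05, 2, 'one-versus-one', 'normalization', 2, SVMHandle)
* Add the training samples (one call per sample).
add_sample_class_svm (SVMHandle, Features, Class)
* Train with the recommended default Epsilon.
train_class_svm (SVMHandle, 0.001, 'default')
</pre></div>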
<p>The duration of the training depends on the training data, in
particular on the number of resulting support vectors (SVs), and
<a href="#Epsilon"><i><code><span data-if="hdevelop" style="display:inline">Epsilon</span><span data-if="c" style="display:none">Epsilon</span><span data-if="cpp" style="display:none">Epsilon</span><span data-if="com" style="display:none">Epsilon</span><span data-if="dotnet" style="display:none">epsilon</span><span data-if="python" style="display:none">epsilon</span></code></i></a>. It can range from a few seconds to several hours.
It is therefore recommended to choose the SVM parameter <code><span data-if="hdevelop" style="display:inline">Nu</span><span data-if="c" style="display:none">Nu</span><span data-if="cpp" style="display:none">Nu</span><span data-if="com" style="display:none">Nu</span><span data-if="dotnet" style="display:none">nu</span><span data-if="python" style="display:none">nu</span></code>
in <a href="create_class_svm.html"><code><span data-if="hdevelop" style="display:inline">create_class_svm</span><span data-if="c" style="display:none">create_class_svm</span><span data-if="cpp" style="display:none">CreateClassSvm</span><span data-if="com" style="display:none">CreateClassSvm</span><span data-if="dotnet" style="display:none">CreateClassSvm</span><span data-if="python" style="display:none">create_class_svm</span></code></a> so that as few SVs as possible are
generated without decreasing the recognition rate.  Special care
must be taken with the parameter <code><span data-if="hdevelop" style="display:inline">Nu</span><span data-if="c" style="display:none">Nu</span><span data-if="cpp" style="display:none">Nu</span><span data-if="com" style="display:none">Nu</span><span data-if="dotnet" style="display:none">nu</span><span data-if="python" style="display:none">nu</span></code> in
<a href="create_class_svm.html"><code><span data-if="hdevelop" style="display:inline">create_class_svm</span><span data-if="c" style="display:none">create_class_svm</span><span data-if="cpp" style="display:none">CreateClassSvm</span><span data-if="com" style="display:none">CreateClassSvm</span><span data-if="dotnet" style="display:none">CreateClassSvm</span><span data-if="python" style="display:none">create_class_svm</span></code></a> so that the optimization starts from a
feasible region.  If <code><span data-if="hdevelop" style="display:inline">Nu</span><span data-if="c" style="display:none">Nu</span><span data-if="cpp" style="display:none">Nu</span><span data-if="com" style="display:none">Nu</span><span data-if="dotnet" style="display:none">nu</span><span data-if="python" style="display:none">nu</span></code> is chosen too large, so that too many
training errors result, an exception is raised.  In this case, an
SVM with the same training data but a smaller <code><span data-if="hdevelop" style="display:inline">Nu</span><span data-if="c" style="display:none">Nu</span><span data-if="cpp" style="display:none">Nu</span><span data-if="com" style="display:none">Nu</span><span data-if="dotnet" style="display:none">nu</span><span data-if="python" style="display:none">nu</span></code> must
be trained.
</p>
<p>With the parameter <a href="#TrainMode"><i><code><span data-if="hdevelop" style="display:inline">TrainMode</span><span data-if="c" style="display:none">TrainMode</span><span data-if="cpp" style="display:none">TrainMode</span><span data-if="com" style="display:none">TrainMode</span><span data-if="dotnet" style="display:none">trainMode</span><span data-if="python" style="display:none">train_mode</span></code></i></a> you can choose between different
training modes.  Normally, you train an SVM without additional
information and <a href="#TrainMode"><i><code><span data-if="hdevelop" style="display:inline">TrainMode</span><span data-if="c" style="display:none">TrainMode</span><span data-if="cpp" style="display:none">TrainMode</span><span data-if="com" style="display:none">TrainMode</span><span data-if="dotnet" style="display:none">trainMode</span><span data-if="python" style="display:none">train_mode</span></code></i></a> is set to <i><span data-if="hdevelop" style="display:inline">'default'</span><span data-if="c" style="display:none">"default"</span><span data-if="cpp" style="display:none">"default"</span><span data-if="com" style="display:none">"default"</span><span data-if="dotnet" style="display:none">"default"</span><span data-if="python" style="display:none">"default"</span></i>.  If
multiple SVMs for the same data set but with different kernels are
trained, subsequent training runs can reuse optimization results and
thus speed up the overall training time of all runs. For this mode,
in <a href="#TrainMode"><i><code><span data-if="hdevelop" style="display:inline">TrainMode</span><span data-if="c" style="display:none">TrainMode</span><span data-if="cpp" style="display:none">TrainMode</span><span data-if="com" style="display:none">TrainMode</span><span data-if="dotnet" style="display:none">trainMode</span><span data-if="python" style="display:none">train_mode</span></code></i></a> an SVM handle of a previously trained SVM is
passed. Note that the SVM handle passed in <a href="#SVMHandle"><i><code><span data-if="hdevelop" style="display:inline">SVMHandle</span><span data-if="c" style="display:none">SVMHandle</span><span data-if="cpp" style="display:none">SVMHandle</span><span data-if="com" style="display:none">SVMHandle</span><span data-if="dotnet" style="display:none">SVMHandle</span><span data-if="python" style="display:none">svmhandle</span></code></i></a> and
the SVMHandle passed in <a href="#TrainMode"><i><code><span data-if="hdevelop" style="display:inline">TrainMode</span><span data-if="c" style="display:none">TrainMode</span><span data-if="cpp" style="display:none">TrainMode</span><span data-if="com" style="display:none">TrainMode</span><span data-if="dotnet" style="display:none">trainMode</span><span data-if="python" style="display:none">train_mode</span></code></i></a> must have the same
training data, the same mode and the same number of classes (see
<a href="create_class_svm.html"><code><span data-if="hdevelop" style="display:inline">create_class_svm</span><span data-if="c" style="display:none">create_class_svm</span><span data-if="cpp" style="display:none">CreateClassSvm</span><span data-if="com" style="display:none">CreateClassSvm</span><span data-if="dotnet" style="display:none">CreateClassSvm</span><span data-if="python" style="display:none">create_class_svm</span></code></a>). The application for this training mode is
the evaluation of different kernel functions given the same training
set. In the literature this is referred to as <i>alpha seeding</i>.
</p>
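<p>Sketched in HDevelop (assuming <code>SVMHandle1</code> and <code>SVMHandle2</code> were created on the same training data with the same mode and number of classes, differing only in the kernel):</p>
<div class="example"><pre>
* Train the first SVM normally.
train_class_svm (SVMHandle1, 0.001, 'default')
* Train the second SVM, seeding the optimization with the first one.
train_class_svm (SVMHandle2, 0.001, SVMHandle1)
</pre></div>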
<p>With <a href="#TrainMode"><i><code><span data-if="hdevelop" style="display:inline">TrainMode</span><span data-if="c" style="display:none">TrainMode</span><span data-if="cpp" style="display:none">TrainMode</span><span data-if="com" style="display:none">TrainMode</span><span data-if="dotnet" style="display:none">trainMode</span><span data-if="python" style="display:none">train_mode</span></code></i></a> = <i><span data-if="hdevelop" style="display:inline">'add_sv_to_train_set'</span><span data-if="c" style="display:none">"add_sv_to_train_set"</span><span data-if="cpp" style="display:none">"add_sv_to_train_set"</span><span data-if="com" style="display:none">"add_sv_to_train_set"</span><span data-if="dotnet" style="display:none">"add_sv_to_train_set"</span><span data-if="python" style="display:none">"add_sv_to_train_set"</span></i> it is
possible to append the support vectors that were generated by a
previous call of <code><span data-if="hdevelop" style="display:inline">train_class_svm</span><span data-if="c" style="display:none">train_class_svm</span><span data-if="cpp" style="display:none">TrainClassSvm</span><span data-if="com" style="display:none">TrainClassSvm</span><span data-if="dotnet" style="display:none">TrainClassSvm</span><span data-if="python" style="display:none">train_class_svm</span></code> to the currently saved
training set. This mode has two typical application areas: First, it
is possible to train an SVM incrementally.  For this, the complete
training set is divided into disjoint chunks.  The first chunk is
trained normally using <a href="#TrainMode"><i><code><span data-if="hdevelop" style="display:inline">TrainMode</span><span data-if="c" style="display:none">TrainMode</span><span data-if="cpp" style="display:none">TrainMode</span><span data-if="com" style="display:none">TrainMode</span><span data-if="dotnet" style="display:none">trainMode</span><span data-if="python" style="display:none">train_mode</span></code></i></a> =
<i><span data-if="hdevelop" style="display:inline">'default'</span><span data-if="c" style="display:none">"default"</span><span data-if="cpp" style="display:none">"default"</span><span data-if="com" style="display:none">"default"</span><span data-if="dotnet" style="display:none">"default"</span><span data-if="python" style="display:none">"default"</span></i>. Afterwards, the previous training set is removed
with <a href="clear_samples_class_svm.html"><code><span data-if="hdevelop" style="display:inline">clear_samples_class_svm</span><span data-if="c" style="display:none">clear_samples_class_svm</span><span data-if="cpp" style="display:none">ClearSamplesClassSvm</span><span data-if="com" style="display:none">ClearSamplesClassSvm</span><span data-if="dotnet" style="display:none">ClearSamplesClassSvm</span><span data-if="python" style="display:none">clear_samples_class_svm</span></code></a>, the next chunk is added with
<a href="add_sample_class_svm.html"><code><span data-if="hdevelop" style="display:inline">add_sample_class_svm</span><span data-if="c" style="display:none">add_sample_class_svm</span><span data-if="cpp" style="display:none">AddSampleClassSvm</span><span data-if="com" style="display:none">AddSampleClassSvm</span><span data-if="dotnet" style="display:none">AddSampleClassSvm</span><span data-if="python" style="display:none">add_sample_class_svm</span></code></a> and trained with <a href="#TrainMode"><i><code><span data-if="hdevelop" style="display:inline">TrainMode</span><span data-if="c" style="display:none">TrainMode</span><span data-if="cpp" style="display:none">TrainMode</span><span data-if="com" style="display:none">TrainMode</span><span data-if="dotnet" style="display:none">trainMode</span><span data-if="python" style="display:none">train_mode</span></code></i></a> =
<i><span data-if="hdevelop" style="display:inline">'add_sv_to_train_set'</span><span data-if="c" style="display:none">"add_sv_to_train_set"</span><span data-if="cpp" style="display:none">"add_sv_to_train_set"</span><span data-if="com" style="display:none">"add_sv_to_train_set"</span><span data-if="dotnet" style="display:none">"add_sv_to_train_set"</span><span data-if="python" style="display:none">"add_sv_to_train_set"</span></i>. This is repeated until all chunks
are trained. The advantage of this approach is that even huge
training data sets can be trained in a memory-efficient manner. A
second use of this mode is to specialize a general-purpose
classifier by adding characteristic training samples and then
retraining it. Please note that
the preprocessing (as described in <a href="create_class_svm.html"><code><span data-if="hdevelop" style="display:inline">create_class_svm</span><span data-if="c" style="display:none">create_class_svm</span><span data-if="cpp" style="display:none">CreateClassSvm</span><span data-if="com" style="display:none">CreateClassSvm</span><span data-if="dotnet" style="display:none">CreateClassSvm</span><span data-if="python" style="display:none">create_class_svm</span></code></a>) is not
changed when training with <a href="#TrainMode"><i><code><span data-if="hdevelop" style="display:inline">TrainMode</span><span data-if="c" style="display:none">TrainMode</span><span data-if="cpp" style="display:none">TrainMode</span><span data-if="com" style="display:none">TrainMode</span><span data-if="dotnet" style="display:none">trainMode</span><span data-if="python" style="display:none">train_mode</span></code></i></a> =
<i><span data-if="hdevelop" style="display:inline">'add_sv_to_train_set'</span><span data-if="c" style="display:none">"add_sv_to_train_set"</span><span data-if="cpp" style="display:none">"add_sv_to_train_set"</span><span data-if="com" style="display:none">"add_sv_to_train_set"</span><span data-if="dotnet" style="display:none">"add_sv_to_train_set"</span><span data-if="python" style="display:none">"add_sv_to_train_set"</span></i>.</p>
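<p>The chunked training procedure described above can be sketched as
follows. This is a hypothetical outline, not part of the operator's
interface: the names <code>NumChunks</code>,
<code>NumSamplesInChunk</code>, <code>Features</code>, and
<code>Class</code> are assumed placeholders for the application's own
data handling.</p>
<pre class="example">
* Train the first chunk normally.
train_class_svm (SVMHandle, 0.001, 'default')
for ChunkIdx := 1 to NumChunks - 1 by 1
    * Discard the raw samples; the support vectors stay in SVMHandle.
    clear_samples_class_svm (SVMHandle)
    * Add the samples of the next chunk.
    for SampleIdx := 0 to NumSamplesInChunk - 1 by 1
        * Features: feature vector of the current sample,
        * Class: its class label (both assumed available).
        add_sample_class_svm (SVMHandle, Features, Class)
    endfor
    * Retrain, reusing the support vectors of the previous training.
    train_class_svm (SVMHandle, 0.001, 'add_sv_to_train_set')
endfor
</pre>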
<h2 id="sec_execution">Execution Information</h2>
<ul>
  <li>Multithreading type: reentrant (runs in parallel with non-exclusive operators).</li>
<li>Multithreading scope: global (may be called from any thread).</li>
  <li>Processed without parallelization.</li>
</ul>
<p>This operator modifies the state of the following input parameter:</p>
<ul><li><a href="#SVMHandle"><span data-if="hdevelop" style="display:inline">SVMHandle</span><span data-if="c" style="display:none">SVMHandle</span><span data-if="cpp" style="display:none">SVMHandle</span><span data-if="com" style="display:none">SVMHandle</span><span data-if="dotnet" style="display:none">SVMHandle</span><span data-if="python" style="display:none">svmhandle</span></a></li></ul>
<p>During execution of this operator, access to the value of this parameter must be synchronized if it is used across multiple threads.</p>
<h2 id="sec_parameters">Parameters</h2>
  <div class="par">
<div class="parhead">
<span id="SVMHandle" class="parname"><b><code><span data-if="hdevelop" style="display:inline">SVMHandle</span><span data-if="c" style="display:none">SVMHandle</span><span data-if="cpp" style="display:none">SVMHandle</span><span data-if="com" style="display:none">SVMHandle</span><span data-if="dotnet" style="display:none">SVMHandle</span><span data-if="python" style="display:none">svmhandle</span></code></b> (input_control, state is modified)  </span><span>class_svm <code>→</code> <span data-if="dotnet" style="display:none"><a href="HClassSvm.html">HClassSvm</a>, </span><span data-if="dotnet" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="python" style="display:none">HHandle</span><span data-if="cpp" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="c" style="display:none">Htuple</span><span data-if="hdevelop" style="display:inline"> (handle)</span><span data-if="dotnet" style="display:none"> (<i>IntPtr</i>)</span><span data-if="cpp" style="display:none"> (<i>HHandle</i>)</span><span data-if="c" style="display:none"> (<i>handle</i>)</span></span>
</div>
<p class="pardesc">SVM handle.</p>
</div>
  <div class="par">
<div class="parhead">
<span id="Epsilon" class="parname"><b><code><span data-if="hdevelop" style="display:inline">Epsilon</span><span data-if="c" style="display:none">Epsilon</span><span data-if="cpp" style="display:none">Epsilon</span><span data-if="com" style="display:none">Epsilon</span><span data-if="dotnet" style="display:none">epsilon</span><span data-if="python" style="display:none">epsilon</span></code></b> (input_control)  </span><span>real <code>→</code> <span data-if="dotnet" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="python" style="display:none">float</span><span data-if="cpp" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="c" style="display:none">Htuple</span><span data-if="hdevelop" style="display:inline"> (real)</span><span data-if="dotnet" style="display:none"> (<i>double</i>)</span><span data-if="cpp" style="display:none"> (<i>double</i>)</span><span data-if="c" style="display:none"> (<i>double</i>)</span></span>
</div>
<p class="pardesc">Stop parameter for training.</p>
<p class="pardesc"><span class="parcat">Default:
      </span>0.001</p>
<p class="pardesc"><span class="parcat">Suggested values:
      </span>0.00001, 0.0001, 0.001, 0.01, 0.1</p>
</div>
  <div class="par">
<div class="parhead">
<span id="TrainMode" class="parname"><b><code><span data-if="hdevelop" style="display:inline">TrainMode</span><span data-if="c" style="display:none">TrainMode</span><span data-if="cpp" style="display:none">TrainMode</span><span data-if="com" style="display:none">TrainMode</span><span data-if="dotnet" style="display:none">trainMode</span><span data-if="python" style="display:none">train_mode</span></code></b> (input_control)  </span><span>number <code>→</code> <span data-if="dotnet" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="python" style="display:none">Union[str, int]</span><span data-if="cpp" style="display:none"><a href="HTuple.html">HTuple</a></span><span data-if="c" style="display:none">Htuple</span><span data-if="hdevelop" style="display:inline"> (string / </span><span data-if="hdevelop" style="display:inline">integer)</span><span data-if="dotnet" style="display:none"> (<i>string</i> / </span><span data-if="dotnet" style="display:none">int / </span><span data-if="dotnet" style="display:none">long)</span><span data-if="cpp" style="display:none"> (<i>HString</i> / </span><span data-if="cpp" style="display:none">Hlong)</span><span data-if="c" style="display:none"> (<i>char*</i> / </span><span data-if="c" style="display:none">Hlong)</span></span>
</div>
<p class="pardesc">Mode of training. For normal operation:
'default'. If the support vectors already stored in
the SVM should be used for training:
'add_sv_to_train_set'. For alpha seeding: the handle of a
previously trained SVM.</p>
<p class="pardesc"><span class="parcat">Default:
      </span>
    <span data-if="hdevelop" style="display:inline">'default'</span>
    <span data-if="c" style="display:none">"default"</span>
    <span data-if="cpp" style="display:none">"default"</span>
    <span data-if="com" style="display:none">"default"</span>
    <span data-if="dotnet" style="display:none">"default"</span>
    <span data-if="python" style="display:none">"default"</span>
</p>
<p class="pardesc"><span class="parcat">List of values:
      </span><span data-if="hdevelop" style="display:inline">'add_sv_to_train_set'</span><span data-if="c" style="display:none">"add_sv_to_train_set"</span><span data-if="cpp" style="display:none">"add_sv_to_train_set"</span><span data-if="com" style="display:none">"add_sv_to_train_set"</span><span data-if="dotnet" style="display:none">"add_sv_to_train_set"</span><span data-if="python" style="display:none">"add_sv_to_train_set"</span>, <span data-if="hdevelop" style="display:inline">'default'</span><span data-if="c" style="display:none">"default"</span><span data-if="cpp" style="display:none">"default"</span><span data-if="com" style="display:none">"default"</span><span data-if="dotnet" style="display:none">"default"</span><span data-if="python" style="display:none">"default"</span></p>
</div>
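<p>For alpha seeding, the TrainMode parameter receives the handle of a
previously trained SVM instead of a string. A minimal sketch, assuming
the hypothetical handles <code>SVMHandleOld</code> (already trained) and
<code>SVMHandleNew</code> (created anew, e.g., with a modified kernel
parameter) that were fed the same training samples:</p>
<pre class="example">
* Seed the optimization of the new SVM with the alpha
* coefficients of the previously trained SVM.
train_class_svm (SVMHandleNew, 0.001, SVMHandleOld)
</pre>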
<h2 id="sec_example_all">Example (HDevelop)</h2>
<pre class="example">
* Train an SVM
* Create an SVM with an RBF kernel, 'one-versus-all' multi-class
* mode, and 'normalization' preprocessing.
create_class_svm (NumFeatures, 'rbf', 0.01, 0.01, NumClasses,\
                  'one-versus-all', 'normalization', NumFeatures,\
                  SVMHandle)
* Read previously stored training samples.
read_samples_class_svm (SVMHandle, 'samples.mtf')
* Train the SVM.
train_class_svm (SVMHandle, 0.001, 'default')
* Store the trained classifier to file.
write_class_svm (SVMHandle, 'classifier.svm')
</pre>
<h2 id="sec_result">Result</h2>
<p>If the parameters are valid, the operator <code><span data-if="hdevelop" style="display:inline">train_class_svm</span><span data-if="c" style="display:none">train_class_svm</span><span data-if="cpp" style="display:none">TrainClassSvm</span><span data-if="com" style="display:none">TrainClassSvm</span><span data-if="dotnet" style="display:none">TrainClassSvm</span><span data-if="python" style="display:none">train_class_svm</span></code>
returns the value <tt>2</tt> (<tt>H_MSG_TRUE</tt>).
If necessary, an exception is
raised.</p>
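<p>In HDevelop, such an exception (e.g., training without any added
samples) can be caught with a try/catch block. A minimal sketch;
reading the error code from the first element of the exception tuple
follows the usual HDevelop convention:</p>
<pre class="example">
try
    train_class_svm (SVMHandle, 0.001, 'default')
catch (Exception)
    * The first element of the exception tuple holds the error code.
    ErrorCode := Exception[0]
endtry
</pre>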
<h2 id="sec_predecessors">Possible Predecessors</h2>
<p>
<code><a href="add_sample_class_svm.html"><span data-if="hdevelop" style="display:inline">add_sample_class_svm</span><span data-if="c" style="display:none">add_sample_class_svm</span><span data-if="cpp" style="display:none">AddSampleClassSvm</span><span data-if="com" style="display:none">AddSampleClassSvm</span><span data-if="dotnet" style="display:none">AddSampleClassSvm</span><span data-if="python" style="display:none">add_sample_class_svm</span></a></code>, 
<code><a href="read_samples_class_svm.html"><span data-if="hdevelop" style="display:inline">read_samples_class_svm</span><span data-if="c" style="display:none">read_samples_class_svm</span><span data-if="cpp" style="display:none">ReadSamplesClassSvm</span><span data-if="com" style="display:none">ReadSamplesClassSvm</span><span data-if="dotnet" style="display:none">ReadSamplesClassSvm</span><span data-if="python" style="display:none">read_samples_class_svm</span></a></code>
</p>
<h2 id="sec_successors">Possible Successors</h2>
<p>
<code><a href="classify_class_svm.html"><span data-if="hdevelop" style="display:inline">classify_class_svm</span><span data-if="c" style="display:none">classify_class_svm</span><span data-if="cpp" style="display:none">ClassifyClassSvm</span><span data-if="com" style="display:none">ClassifyClassSvm</span><span data-if="dotnet" style="display:none">ClassifyClassSvm</span><span data-if="python" style="display:none">classify_class_svm</span></a></code>, 
<code><a href="write_class_svm.html"><span data-if="hdevelop" style="display:inline">write_class_svm</span><span data-if="c" style="display:none">write_class_svm</span><span data-if="cpp" style="display:none">WriteClassSvm</span><span data-if="com" style="display:none">WriteClassSvm</span><span data-if="dotnet" style="display:none">WriteClassSvm</span><span data-if="python" style="display:none">write_class_svm</span></a></code>, 
<code><a href="create_class_lut_svm.html"><span data-if="hdevelop" style="display:inline">create_class_lut_svm</span><span data-if="c" style="display:none">create_class_lut_svm</span><span data-if="cpp" style="display:none">CreateClassLutSvm</span><span data-if="com" style="display:none">CreateClassLutSvm</span><span data-if="dotnet" style="display:none">CreateClassLutSvm</span><span data-if="python" style="display:none">create_class_lut_svm</span></a></code>
</p>
<h2 id="sec_alternatives">Alternatives</h2>
<p>
<code><a href="train_dl_classifier_batch.html"><span data-if="hdevelop" style="display:inline">train_dl_classifier_batch</span><span data-if="c" style="display:none">train_dl_classifier_batch</span><span data-if="cpp" style="display:none">TrainDlClassifierBatch</span><span data-if="com" style="display:none">TrainDlClassifierBatch</span><span data-if="dotnet" style="display:none">TrainDlClassifierBatch</span><span data-if="python" style="display:none">train_dl_classifier_batch</span></a></code>, 
<code><a href="read_class_svm.html"><span data-if="hdevelop" style="display:inline">read_class_svm</span><span data-if="c" style="display:none">read_class_svm</span><span data-if="cpp" style="display:none">ReadClassSvm</span><span data-if="com" style="display:none">ReadClassSvm</span><span data-if="dotnet" style="display:none">ReadClassSvm</span><span data-if="python" style="display:none">read_class_svm</span></a></code>
</p>
<h2 id="sec_see">See also</h2>
<p>
<code><a href="create_class_svm.html"><span data-if="hdevelop" style="display:inline">create_class_svm</span><span data-if="c" style="display:none">create_class_svm</span><span data-if="cpp" style="display:none">CreateClassSvm</span><span data-if="com" style="display:none">CreateClassSvm</span><span data-if="dotnet" style="display:none">CreateClassSvm</span><span data-if="python" style="display:none">create_class_svm</span></a></code>
</p>
<h2 id="sec_references">References</h2>
<p>

John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern
Analysis”; Cambridge University Press, Cambridge; 2004.
<br>
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”;
MIT Press, London; 1999.
</p>
<h2 id="sec_module">Module</h2>
<p>
Foundation</p>
<!--OP_REF_FOOTER_START-->
<hr>
<div class="indexlink">
<a href="index_classes.html"><span data-if="dotnet" style="display:none;">Classes</span><span data-if="cpp" style="display:none;">Classes</span></a><span data-if="dotnet" style="display:none;"> | </span><span data-if="cpp" style="display:none;"> | </span><a href="index_by_name.html">Operators</a>
</div>
<div class="footer">
<div class="copyright">HALCON Operator Reference Manual Copyright © 2015-2023 51Halcon</div>
</div>
</div>
</body>
</html>
