<!DOCTYPE html>
<!--

	Modified template for STM32CubeMX.AI purpose

	d0.1: 	jean-michel.delorme@st.com
			add ST logo and ST footer

	d2.0: 	jean-michel.delorme@st.com
			add sidenav support

	d2.1: 	jean-michel.delorme@st.com
			clean-up + optional ai_logo/ai meta data
			
==============================================================================
           "GitHub HTML5 Pandoc Template" v2.1 — by Tristano Ajmone           
==============================================================================
Copyright © Tristano Ajmone, 2017, MIT License (MIT). Project's home:

- https://github.com/tajmone/pandoc-goodies

The CSS in this template reuses source code taken from the following projects:

- GitHub Markdown CSS: Copyright © Sindre Sorhus, MIT License (MIT):
  https://github.com/sindresorhus/github-markdown-css

- Primer CSS: Copyright © 2016-2017 GitHub Inc., MIT License (MIT):
  http://primercss.io/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The MIT License 

Copyright (c) Tristano Ajmone, 2017 (github.com/tajmone/pandoc-goodies)
Copyright (c) Sindre Sorhus <sindresorhus@gmail.com> (sindresorhus.com)
Copyright (c) 2017 GitHub Inc.

"GitHub Pandoc HTML5 Template" is Copyright (c) Tristano Ajmone, 2017, released
under the MIT License (MIT); it contains readaptations of substantial portions
of the following third party softwares:

(1) "GitHub Markdown CSS", Copyright (c) Sindre Sorhus, MIT License (MIT).
(2) "Primer CSS", Copyright (c) 2016 GitHub Inc., MIT License (MIT).

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
==============================================================================-->
<html>
<head>
  <meta charset="utf-8" />
  <meta name="generator" content="pandoc" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
  <meta name="keywords" content="STM32CubeMX, X-CUBE-AI, Neural Network, Quantization support, CLI, Code Generator, Automatic NN mapping tools" />
  <title>Command Line Interface</title>
  <style type="text/css">
.markdown-body{
	-ms-text-size-adjust:100%;
	-webkit-text-size-adjust:100%;
	color:#24292e;
	font-family:-apple-system,system-ui,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol";
	font-size:16px;
	line-height:1.5;
	word-wrap:break-word;
	box-sizing:border-box;
	min-width:200px;
	max-width:980px;
	margin:0 auto;
	padding:45px;
	}
.markdown-body a{
	color:#0366d6;
	background-color:transparent;
	text-decoration:none;
	-webkit-text-decoration-skip:objects}
.markdown-body a:active,.markdown-body a:hover{
	outline-width:0}
.markdown-body a:hover{
	text-decoration:underline}
.markdown-body a:not([href]){
	color:inherit;text-decoration:none}
.markdown-body strong{font-weight:600}
.markdown-body h1,.markdown-body h2,.markdown-body h3,.markdown-body h4,.markdown-body h5,.markdown-body h6{
	margin-top:24px;
	margin-bottom:16px;
	font-weight:600;
	line-height:1.25}
.markdown-body h1{
	font-size:2em;
	margin:.67em 0;
	padding-bottom:.3em;
	border-bottom:1px solid #eaecef}
.markdown-body h2{
	padding-bottom:.3em;
	font-size:1.5em;
	border-bottom:1px solid #eaecef}
.markdown-body h3{font-size:1.25em}
.markdown-body h4{font-size:1em}
.markdown-body h5{font-size:.875em}
.markdown-body h6{font-size:.85em;color:#6a737d}
.markdown-body img{border-style:none}
.markdown-body svg:not(:root){
	overflow:hidden}
.markdown-body hr{
	box-sizing:content-box;
	height:.25em;
	margin:24px 0;
	padding:0;
	overflow:hidden;
	background-color:#e1e4e8;
	border:0}
.markdown-body hr::before{display:table;content:""}
.markdown-body hr::after{display:table;clear:both;content:""}
.markdown-body input{margin:0;overflow:visible;font:inherit;font-family:inherit;font-size:inherit;line-height:inherit}
.markdown-body [type=checkbox]{box-sizing:border-box;padding:0}
.markdown-body *{box-sizing:border-box}.markdown-body blockquote{margin:0}
.markdown-body ol,.markdown-body ul{padding-left:2em}
.markdown-body ol ol,.markdown-body ul ol{list-style-type:lower-roman}
.markdown-body ol ol,.markdown-body ol ul,.markdown-body ul ol,.markdown-body ul ul{margin-top:0;margin-bottom:0}
.markdown-body ol ol ol,.markdown-body ol ul ol,.markdown-body ul ol ol,.markdown-body ul ul ol{list-style-type:lower-alpha}
.markdown-body li>p{margin-top:16px}
.markdown-body li+li{margin-top:.25em}
.markdown-body dd{margin-left:0}
.markdown-body dl{padding:0}
.markdown-body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:600}
.markdown-body dl dd{padding:0 16px;margin-bottom:16px}
.markdown-body code{font-family:SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace}
.markdown-body pre{font:12px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;word-wrap:normal}
.markdown-body blockquote,.markdown-body dl,.markdown-body ol,.markdown-body p,.markdown-body pre,.markdown-body table,.markdown-body ul{margin-top:0;margin-bottom:16px}
.markdown-body blockquote{padding:0 1em;color:#6a737d;border-left:.25em solid #dfe2e5}
.markdown-body blockquote>:first-child{margin-top:0}
.markdown-body blockquote>:last-child{margin-bottom:0}
.markdown-body table{display:block;width:100%;overflow:auto;border-spacing:0;border-collapse:collapse}
.markdown-body table th{font-weight:600}
.markdown-body table td,.markdown-body table th{padding:6px 13px;border:1px solid #dfe2e5}
.markdown-body table tr{background-color:#fff;border-top:1px solid #c6cbd1}
.markdown-body table tr:nth-child(2n){background-color:#f6f8fa}
.markdown-body img{max-width:100%;box-sizing:content-box;background-color:#fff}
.markdown-body code{padding:.2em 0;margin:0;font-size:85%;background-color:rgba(27,31,35,.05);border-radius:3px}
.markdown-body code::after,.markdown-body code::before{letter-spacing:-.2em;content:"\00a0"}
.markdown-body pre>code{padding:0;margin:0;font-size:100%;word-break:normal;white-space:pre;background:0 0;border:0}
.markdown-body .highlight{margin-bottom:16px}
.markdown-body .highlight pre{margin-bottom:0;word-break:normal}
.markdown-body .highlight pre,.markdown-body pre{padding:16px;overflow:auto;font-size:85%;line-height:1.45;background-color:#f6f8fa;border-radius:3px}
.markdown-body pre code{display:inline;max-width:auto;padding:0;margin:0;overflow:visible;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}
.markdown-body pre code::after,.markdown-body pre code::before{content:normal}
.markdown-body .full-commit .btn-outline:not(:disabled):hover{color:#005cc5;border-color:#005cc5}
.markdown-body kbd{box-shadow:inset 0 -1px 0 #959da5;display:inline-block;padding:3px 5px;font:11px/10px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;color:#444d56;vertical-align:middle;background-color:#fcfcfc;border:1px solid #c6cbd1;border-bottom-color:#959da5;border-radius:3px;box-shadow:inset 0 -1px 0 #959da5}
.markdown-body :checked+.radio-label{position:relative;z-index:1;border-color:#0366d6}
.markdown-body .task-list-item{list-style-type:none}
.markdown-body .task-list-item+.task-list-item{margin-top:3px}
.markdown-body .task-list-item input{margin:0 .2em .25em -1.6em;vertical-align:middle}
.markdown-body::before{display:table;content:""}
.markdown-body::after{display:table;clear:both;content:""}
.markdown-body>:first-child{margin-top:0!important}
.markdown-body>:last-child{margin-bottom:0!important}
.Alert,.Error,.Note,.Success,.Warning{padding:11px;margin-bottom:24px;border-style:solid;border-width:1px;border-radius:4px}
.Alert p,.Error p,.Note p,.Success p,.Warning p{margin-top:0}
.Alert p:last-child,.Error p:last-child,.Note p:last-child,.Success p:last-child,.Warning p:last-child{margin-bottom:0}
.Alert{color:#246;background-color:#e2eef9;border-color:#bac6d3}
.Warning{color:#4c4a42;background-color:#fff9ea;border-color:#dfd8c2}
.Error{color:#911;background-color:#fcdede;border-color:#d2b2b2}
.Success{color:#22662c;background-color:#e2f9e5;border-color:#bad3be}
.Note{color:#2f363d;background-color:#f6f8fa;border-color:#d5d8da}
.Alert h1,.Alert h2,.Alert h3,.Alert h4,.Alert h5,.Alert h6{color:#246;margin-bottom:0}
.Warning h1,.Warning h2,.Warning h3,.Warning h4,.Warning h5,.Warning h6{color:#4c4a42;margin-bottom:0}
.Error h1,.Error h2,.Error h3,.Error h4,.Error h5,.Error h6{color:#911;margin-bottom:0}
.Success h1,.Success h2,.Success h3,.Success h4,.Success h5,.Success h6{color:#22662c;margin-bottom:0}
.Note h1,.Note h2,.Note h3,.Note h4,.Note h5,.Note h6{color:#2f363d;margin-bottom:0}
.Alert h1:first-child,.Alert h2:first-child,.Alert h3:first-child,.Alert h4:first-child,.Alert h5:first-child,.Alert h6:first-child,.Error h1:first-child,.Error h2:first-child,.Error h3:first-child,.Error h4:first-child,.Error h5:first-child,.Error h6:first-child,.Note h1:first-child,.Note h2:first-child,.Note h3:first-child,.Note h4:first-child,.Note h5:first-child,.Note h6:first-child,.Success h1:first-child,.Success h2:first-child,.Success h3:first-child,.Success h4:first-child,.Success h5:first-child,.Success h6:first-child,.Warning h1:first-child,.Warning h2:first-child,.Warning h3:first-child,.Warning h4:first-child,.Warning h5:first-child,.Warning h6:first-child{margin-top:0}
h1.title,p.subtitle{text-align:center}
h1.title.followed-by-subtitle{margin-bottom:0}
p.subtitle{font-size:1.5em;font-weight:600;line-height:1.25;margin-top:0;margin-bottom:16px;padding-bottom:.3em}
div.line-block{white-space:pre-line}
  </style>
  <style type="text/css">code{white-space: pre;}</style>
  <style type="text/css">
	code.sourceCode > span { display: inline-block; line-height: 1.25; }
code.sourceCode > span { color: inherit; text-decoration: inherit; }
code.sourceCode > span:empty { height: 1.2em; }
.sourceCode { overflow: visible; }
code.sourceCode { white-space: pre; position: relative; }
div.sourceCode { margin: 1em 0; }
pre.sourceCode { margin: 0; }
@media screen {
div.sourceCode { overflow: auto; }
}
@media print {
code.sourceCode { white-space: pre-wrap; }
code.sourceCode > span { text-indent: -5em; padding-left: 5em; }
}
pre.numberSource code
  { counter-reset: source-line 0; }
pre.numberSource code > span
  { position: relative; left: -4em; counter-increment: source-line; }
pre.numberSource code > span > a:first-child::before
  { content: counter(source-line);
    position: relative; left: -1em; text-align: right; vertical-align: baseline;
    border: none; display: inline-block;
    -webkit-touch-callout: none; -webkit-user-select: none;
    -khtml-user-select: none; -moz-user-select: none;
    -ms-user-select: none; user-select: none;
    padding: 0 4px; width: 4em;
    color: #aaaaaa;
  }
pre.numberSource { margin-left: 3em; border-left: 1px solid #aaaaaa;  padding-left: 4px; }
div.sourceCode
  {   }
@media screen {
code.sourceCode > span > a:first-child::before { text-decoration: underline; }
}
code span.al { color: #ff0000; font-weight: bold; } /* Alert */
code span.an { color: #60a0b0; font-weight: bold; font-style: italic; } /* Annotation */
code span.at { color: #7d9029; } /* Attribute */
code span.bn { color: #40a070; } /* BaseN */
code span.bu { } /* BuiltIn */
code span.cf { color: #007020; font-weight: bold; } /* ControlFlow */
code span.ch { color: #4070a0; } /* Char */
code span.cn { color: #880000; } /* Constant */
code span.co { color: #60a0b0; font-style: italic; } /* Comment */
code span.cv { color: #60a0b0; font-weight: bold; font-style: italic; } /* CommentVar */
code span.do { color: #ba2121; font-style: italic; } /* Documentation */
code span.dt { color: #902000; } /* DataType */
code span.dv { color: #40a070; } /* DecVal */
code span.er { color: #ff0000; font-weight: bold; } /* Error */
code span.ex { } /* Extension */
code span.fl { color: #40a070; } /* Float */
code span.fu { color: #06287e; } /* Function */
code span.im { } /* Import */
code span.in { color: #60a0b0; font-weight: bold; font-style: italic; } /* Information */
code span.kw { color: #007020; font-weight: bold; } /* Keyword */
code span.op { color: #666666; } /* Operator */
code span.ot { color: #007020; } /* Other */
code span.pp { color: #bc7a00; } /* Preprocessor */
code span.sc { color: #4070a0; } /* SpecialChar */
code span.ss { color: #bb6688; } /* SpecialString */
code span.st { color: #4070a0; } /* String */
code span.va { color: #19177c; } /* Variable */
code span.vs { color: #4070a0; } /* VerbatimString */
code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } /* Warning */
  </style>
  <style type="text/css">:root { --main-hx-color: rgb(0,32,88); --sidenav-font-size: 90%;}html {}* {xbox-sizing: border-box;}.st_header h1.title,.st_header p.subtitle {text-align: left;}.st_header h1.title {color: var(--main-hx-color)}.st_header p.subtitle {color: var(--main-hx-color)}.st_header h1.title.followed-by-subtitle {margin-bottom:5px;}.st_header p.revision {display: inline-block;width:70%;}.st_header div.author {font-style: italic;}.st_header div.summary {border-top: solid 1px #C0C0C0;background: #ECECEC;padding: 5px;}.st_footer img {float: right;}.markdown-body #header-section-number {font-size:120%;}.markdown-body h1 {border-bottom:1px solid #74767a;padding-bottom: 2px;padding-top: 10px;}.markdown-body h2 {padding-bottom: 5px;padding-top: 10px;}.markdown-body h2 code {background-color: rgb(255, 255, 255);}#func.sourceCode {border-left-style: solid;border-color: rgb(0, 32, 82);border-color: rgb(255, 244, 191);border-width: 8px;padding:0px;}pre > code {border: solid 1px blue;font-size:60%;}codeXX {border: solid 1px blue;font-size:60%;}#func.sourceXXCode::before {content: "Synopsis";padding-left:10px;font-weight: bold;}figure {padding:0px;margin-left:5px;margin-right:5px;margin-left: auto;margin-right: auto;}img[data-property="center"] {display: block;margin-top: 10px;margin-left: auto;margin-right: auto;padding: 10px;}figcaption {text-align:left;  border-top: 1px dotted #888;padding-bottom: 20px;margin-top: 10px;}section.st_footer {font-size:80%;}div.stnotice {width:80%;}h1 code, h2 code {font-size:120%;}	.markdown-body table {width: 100%;margin-left:auto;margin-right:auto;}.markdown-body img {border-radius: 4px;padding: 5px;display: block;margin-left: auto;margin-right: auto;width: auto;}.markdown-body .st_header img, .markdown-body {border: none;border-radius: none;padding: 5px;display: block;margin-left: auto;margin-right: auto;width: auto;box-shadow: none;}.markdown-body {margin: 10px;padding: 10px;width: auto;font-family: "Arial", sans-serif;color: 
#03234B;}.markdown-body h1, .markdown-body h2, .markdown-body h3 {   color: var(--main-hx-color)}.markdown-body:hover {}.markdown-body .contents {}.markdown-body .toc-title {}.markdown-body .contents li {list-style-type: none;}.markdown-body .contents ul {padding-left: 10px;}.markdown-body .contents a {color: #3CB4E6; }.sidenav {font-family: "Arial", sans-serif;font-family: segoe ui, verdona;color: #3CB4E6; color: #03234B; color: var(--main-hx-color);height: 100%;position: fixed;z-index: 1;top: 0;left: 0;margin-right: 10px;margin-left: 10px; overflow-x: hidden;}hr.new1 {border-width: thin;border-top: 1px solid #3CB4E6; margin-right: 10px;margin-top: -10px;}.sidenav #sidenav_header {margin-top: 10px;border: 1px;}.sidenav #sidenav_header img {float: left;}.sidenav #sidenav_header a {margin-left: 0px;margin-right: 0px;padding-left: 0px;color: #3CB4E6; color: #03234B; color: var(--main-hx-color)}.sidenav #sidenav_header a:hover {background-size: auto;color: #FFD200; }.sidenav #sidenav_header a:active {  }.sidenav > ul {background-color: rgba(57, 169, 220, 0.05);border-radius: 10px;padding-bottom: 10px;padding-top: 10px;padding-right: 10px;margin-right: 10px;}.sidenav a {padding: 2px 2px;text-decoration: none;font-size: var(--sidenav-font-size);  display:table;}.sidenav > ul > li,.sidenav > ul > li > ul > li { padding-right: 5px;padding-left: 5px;}.sidenav > ul > li > a { color: #03234B;  color: var(--main-hx-color)}.sidenav > ul > li > ul > li > a { color: #03234B; color: #3CB4E6; color: #03234B; font-weight: lighter;padding-left: 10px;}.sidenav > ul > li > ul > li > ul > li > a { display: None;}.sidenav li {list-style-type: none;}.sidenav ul {padding-left: 0px;}.sidenav > ul > li > a:hover,.sidenav > ul > li > ul > li > a:hover {background-color: rgba(70, 70, 80, 0.1); background-clip: border-box;margin-left: -10px;padding-left: 10px;}.sidenav > ul > li > a:hover {padding-right: 15px;width: 230px;	}.sidenav > ul > li > ul > li > a:hover {padding-right: 10px;width: 
230px;	}.sidenav > ul > li > a:active { color: #FFD200; }.sidenav code {}.sidenav {width: 280px;}#sidenav {margin-left: 300px;display:block;}.markdown-body .print-contents {visibility:hidden;}.markdown-body .print-toc-title {visibility:hidden;}.markdown-body {max-width: 980px;min-width: 200px;padding: 40px;border-style: solid;border-style: outset;border-color: rgba(104, 167, 238, 0.089);border-radius: 5px;}@media screen and (max-height: 450px) {.sidenav {padding-top: 15px;}.sidenav a {font-size: 18px;}#sidenav {margin-left: 10px; }.sidenav {visibility:hidden;}.markdown-body {margin: 10px;padding: 40px;width: auto;border: 0px;}}@media screen and (max-width: 1024px) {.sidenav {visibility:hidden;}.markdown-body {margin: 10px;padding: 40px;width: auto;border: 0px;}#sidenav {margin-left: 10px;}}@media print {.sidenav {visibility:hidden;}#sidenav {margin-left: 10px;}.markdown-body {margin: 10px;padding: 10px;width:auto;border: 0px;}@page {size: A4;  margin:2cm;padding:2cm;margin-top: 1cm;padding-bottom: 1cm;}* {xbox-sizing: border-box;font-size:90%;}a {font-size: 100%;color: yellow;}.markdown-body article {xbox-sizing: border-box;font-size:100%;}.markdown-body p {windows: 2;orphans: 2;}.pagebreakerafter {page-break-after: always;padding-top:10mm;}.pagebreakbefore {page-break-before: always;}h1, h2, h3, h4 {page-break-after: avoid;}div, code, blockquote, li, span, table, figure {page-break-inside: avoid;}}</style>
  <!--[if lt IE 9]>
    <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
  <![endif]-->




<link href="" rel="shortcut icon">

</head>



<body>

		<div class="sidenav">
		<div id="sidenav_header">
							<img src="" title="STM32CubeMX.AI logo" align="left" height="70" />
										<br />5.2.0<br />
										<a href="#doc_title"> Command Line Interface </a>
					</div>
		<div id="sidenav_header_button">
			 
							<ul>
					<li><p><a id="index" href="index.html">[ Index ]</a></p></li>
				</ul>
						<hr class="new1">
		</div>	

		<ul>
<li><a href="#introduction">Introduction</a><ul>
<li><a href="#synopsis">Synopsis</a></li>
<li><a href="#comparison-with-the-x-cube-ai-ui-plug-in-features">Comparison with the X-CUBE-AI UI plug-in features</a></li>
<li><a href="#command-work-flow">Command work-flow</a></li>
<li><a href="#enable-automl-pipeline-for-resource-constrained-environment">Enable AutoML pipeline for resource-constrained environment</a></li>
<li><a href="#setting-the-environment">Setting the environment</a></li>
<li><a href="#error-handling">Error handling</a></li>
</ul></li>
<li><a href="#ref_com_options">Common arguments</a></li>
<li><a href="#ref_analyze_cmd">Analyze command</a><ul>
<li><a href="#description">Description</a></li>
<li><a href="#specific-arguments">Specific arguments</a></li>
<li><a href="#examples">Examples</a></li>
<li><a href="#ref_dl_fw_detection">DL framework detection</a></li>
<li><a href="#ref_out_of_box_report">Out-of-the-box information</a></li>
<li><a href="#ref_graph_desc">PINNR/IR graph description</a></li>
<li><a href="#ref_complexity_by_layer">MACC/ROM complexity by layer</a></li>
<li><a href="#ref_c_graph_desc">C-graph description</a></li>
</ul></li>
<li><a href="#ref_validate_cmd">Validate command</a><ul>
<li><a href="#description-1">Description</a></li>
<li><a href="#specific-arguments-1">Specific arguments</a></li>
<li><a href="#ref_example_val">Examples</a></li>
<li><a href="#ref_desc_arg">Serial COM port configuration</a></li>
<li><a href="#ref_l2_error">Report of the L2r error for a 32b float model</a></li>
<li><a href="#sec_exec_by_layer">Execution time per layer</a></li>
</ul></li>
<li><a href="#ref_generate_cmd">Generate command</a><ul>
<li><a href="#description-2">Description</a></li>
<li><a href="#specific-arguments-2">Specific arguments</a></li>
<li><a href="#examples-1">Examples</a></li>
<li><a href="#ref_addr_options">Particular network data c-file</a></li>
<li><a href="#ref_fota_support">FOTA support</a></li>
<li><a href="#ref_update_project">Update an ioc-based project</a></li>
<li><a href="#update-a-proprietary-source-tree">Update a proprietary source tree</a></li>
</ul></li>
<li><a href="#references">References</a></li>
<li><a href="#revision-history">Revision history</a></li>
</ul>
	</div>
	<article id="sidenav" class="markdown-body">
	


<header>
<section class="st_header" id="doc_title">

<div class="himage">
	<img src="" title="STM32CubeMX.AI" align="right" height="70" />
	<img src="" title="STM32" align="right" height="90" />
</div>

<h1 class="title followed-by-subtitle">Command Line Interface</h1>

	<p class="subtitle">X-CUBE-AI Expansion Package</p>

	<div class="revision">r2.1</div>

	<div class="ai_platform">
		AI PLATFORM r5.2.0
					(Embedded Inference Client API 1.1.0)
			</div>
			Command Line Interface r1.4.0
	




</section>
</header>




<section id="introduction" class="level1">
<h1>Introduction</h1>
<p>The <code>stm32ai</code> application is a console utility that provides a complete and unified <em>Command Line Interface</em> (CLI) to generate, from a pre-trained model, an optimized neural-network C library for the STM32 device family. It consists of three main commands: <a href="#ref_analyze_cmd"><strong>analyze</strong></a>, <a href="#ref_validate_cmd"><strong>validate</strong></a> and <a href="#ref_generate_cmd"><strong>generate</strong></a>. Each command can be used independently of the others, with the same set of common options (model files, compression factor, output directory…) plus its own specific options. The <a href="quantization.html"><strong>quantize</strong></a> command is a special case that applies a post-training quantization process (refer to <a href="quantization.html">[8]</a>).</p>
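<p>As an illustration, a typical session chains the three commands on the same model. The model file name and output directory below are only placeholders:</p>
<pre class="dosbatch"><code>$ stm32ai analyze  --model my_model.h5 --output build
$ stm32ai validate --model my_model.h5 --output build
$ stm32ai generate --model my_model.h5 --output build</code></pre>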
<section id="synopsis" class="level2">
<h2>Synopsis</h2>
<hr />
<pre class="dosbatch"><code>Neural Network Tools for STM32 v1.4.0 (AI tools v5.2.0)
usage: stm32ai.py [-h] [--version] [--tools-version] [--model FILE]
                  [--verbosity [{0,1,2}]]
                  [--type [keras|tflite|caffe|convnetjs|lasagne|onnx]]
                  [--name STR] [--compression [1|4|8]] [--quantize [FILE]]
                  [--allocate-inputs] [--allocate-outputs] [--workspace DIR]
                  [--output DIR] [--lib DIR] [--series STR] [--split-weights]
                  [--relocatable] [--no-c-files] [--binary] [--address ADDR]
                  [--copy-weights-at ADDR] [--batches INT] [--mode MODE]
                  [--desc DESC] [--valinput FILE [FILE ...]]
                  [--valoutput FILE [FILE ...]] [--full] [--classifier]
                  [--no-check] [--no-exec-model]
                  analyze|generate|validate|quantize</code></pre>
<hr />
<p>A short description can be displayed with the following command:</p>
<div class="sourceCode" id="cb2"><pre class="sourceCode bash"><code class="sourceCode bash"><span id="cb2-1"><a href="#cb2-1"></a>$ <span class="ex">stm32ai</span> --help</span></code></pre></div>
<div class="Note">
<p><strong>Note</strong> — In this article, <strong><em>Netron</em></strong> application (<a href="https://github.com/lutzroeder/netron">https://github.com/lutzroeder/netron</a>) is used to visualize the original neural network model.</p>
</div>
</section>
<section id="comparison-with-the-x-cube-ai-ui-plug-in-features" class="level2">
<h2>Comparison with the X-CUBE-AI UI plug-in features</h2>
<p>As the core entry point, the <code>stm32ai</code> application is used as the back-end of the X-CUBE-AI UI plug-in (refer to <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[2]</a>).</p>
<div id="fig:cli_in_ui" class="fignos">
<figure>
<img src="" property="center" style="width:95.0%" alt /><figcaption><span>Figure 1:</span> CLI as back-end</figcaption>
</figure>
</div>
<p>In comparison with the X-CUBE-AI UI plug-in, the following high-level features are not supported:</p>
<ul>
<li>extra C-code wrapper to manage multiple models. The CLI handles only one model at a time.<br />
</li>
<li>creation of a whole IDE project including the optimized inference runtime library, the AI header files and the C-files related to the HW settings. The CLI can only generate the specialized NN C-files. However, it can update an initial IDE project, STM32CubeMX-based or proprietary source tree (see <a href="#ref_update_project">“Update an ioc-based project”</a> section).<br />
</li>
<li>the check whether a model fits, in terms of memory layout, in a selected STM32 device. The CLI reports (see <a href="#ref_analyze_cmd">“Analyze command”</a> section) the main system-level dimensioning metrics: ROM, RAM and MACC (refer to <a href="evaluation_metrics.html">[6]</a> for the definitions).</li>
<li>for the “<em>Validation process on target</em>”, as a full STM32 project is expected, it must be generated beforehand through the UI. Note that this project can be updated later (see <a href="#ref_update_project">“Update an ioc-based project”</a> section). The “<em>Validation process on desktop</em>” is fully supported through the CLI without restriction.</li>
<li>graphic visualization of the generated c-graph (including the usage of the RAM). The CLI provides only a textual representation (table form) of the c-graph, including a description of the tensors/operators (see <a href="#ref_analyze_cmd">“Analyze command”</a> section).</li>
</ul>
</section>
<section id="command-work-flow" class="level2">
<h2>Command work-flow</h2>
<p>For each command, the same preliminary steps are applied. A report (text file) is systematically created and fully or partially displayed. Additional JSON files (dictionary-based) are generated in the workspace; they are parsed by the X-CUBE-AI plug-in to retrieve the results and can also be used by a non-regression environment. The format of these files is out of the scope of this document.</p>
<pre><code>&lt;workspace-directory-path&gt;\&lt;name&gt;_report.json, &lt;name&gt;_c_graph.json
&lt;output-directory-path&gt;\&lt;name&gt;_&lt;cmd_name&gt;_report.txt</code></pre>
<ul>
<li><code>&#39;analyze&#39;</code> flow
<ul>
<li>import the model<br />
</li>
<li>map, render and optimize internally the model</li>
<li>log and display a report</li>
</ul></li>
<li><code>&#39;validate&#39;</code> flow
<ul>
<li>import the model<br />
</li>
<li>map, render and optimize internally the model</li>
<li>execute the generated C-model (on desktop or from the STM32 target)<br />
</li>
<li>execute the original model using original deep learning runtime framework for x86</li>
<li>evaluate the metrics</li>
<li>log and display a report</li>
</ul></li>
<li><code>&#39;generate&#39;</code> flow
<ul>
<li>import the model<br />
</li>
<li>map, render and optimize internally the model</li>
<li>export the specialized C-files</li>
<li>log and display a report</li>
</ul></li>
</ul>
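<p>For example, with the hypothetical settings below, the <code>&#39;analyze&#39;</code> flow creates the following files (paths derived from the templates above):</p>
<pre class="dosbatch"><code>$ stm32ai analyze --model my_model.h5 --name network --workspace ws --output out

ws\network_report.json, ws\network_c_graph.json
out\network_analyze_report.txt</code></pre>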
</section>
<section id="enable-automl-pipeline-for-resource-constrained-environment" class="level2">
<h2>Enable AutoML pipeline for resource-constrained environment</h2>
<p>The CLI can be integrated in an automatic or manual pipeline to design deployable and effective neural-network architectures for resource-constrained environments (that is, with low memory/computational resources and/or critical power-consumption budgets). The main loop can be extended with a post-analyzing/validating step of the pre-trained model candidates, to check and take into account the end-user target constraints thanks to the respective <a href="#ref_analyze_cmd">“analyze”</a> and <a href="#ref_validate_cmd">“validate”</a> commands.</p>
<div id="fig:auto_ml" class="fignos">
<figure>
<img src="" property="center" style="width:95.0%" alt /><figcaption><span>Figure 2:</span> Possible AutoML flow with post deployment check</figcaption>
</figure>
</div>
<ul>
<li>checking the budgeted memory (ROM/RAM) can be done in the inner loop (topology selection/definition), before the time-consuming training (or re-training) process, to pre-constrain the choice of the neural-network architecture according to the memory budgets.</li>
<li>note that the “analyze” and “X86 validate” steps can be merged, since the “analyze” information is also available in the “validate” reports.</li>
</ul>
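<p>A minimal shell sketch of such a screening loop is shown below. The candidate model files and the <code>check_budget</code> helper, which is assumed to compare the ROM/RAM figures of the generated report against the memory budgets, are hypothetical:</p>
<div class="sourceCode"><pre class="sourceCode bash"><code class="sourceCode bash"># screen each candidate before spending time on (re-)training
for model in candidate_*.h5; do
  stm32ai analyze --model "$model" --name net --workspace ws --output out || continue
  # check_budget is a user-supplied script (hypothetical)
  ./check_budget ws/net_report.json &amp;&amp; echo "$model fits the memory budgets"
done</code></pre></div>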
</section>
<section id="setting-the-environment" class="level2">
<h2>Setting the environment</h2>
<p>The X-CUBE-AI Expansion Package is a complete self-contained application package. No external tool is required to use it. <code>%CUBE_FW_DIR%</code> designates the root path where the X-CUBE expansion packages are installed. Default location on Windows:</p>
<pre><code>C:\Users\&lt;user_name&gt;\STM32Cube\Repository</code></pre>
<div class="Warning">
<p><strong>Note</strong> — For the generation of a relocatable binary network (refer to <a href="relocatable.html">[9]</a>), a <a href="https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm">GNU ARM Embedded tool-chain</a> (<code>&#39;arm-none-eabi-&#39;</code> prefix) should be available in the PATH. When the relocatable binary model is generated through the X-CUBE-AI plug-in, the required pre-built GNU bare-metal tool-chain for 32-bit ARM processors is automatically downloaded and installed in the installation directory.</p>
</div>
<section id="windows-10" class="level3">
<h3>Windows® 10</h3>
<ol type="1">
<li>Open a Windows command prompt</li>
<li>Update system <code>PATH</code> variable</li>
</ol>
<pre class="dosbatch"><code>set CUBE_FW_DIR=C:\Users\&lt;user_name&gt;\STM32Cube\Repository

set X_CUBE_AI_DIR=%CUBE_FW_DIR%\Packs\STMicroelectronics\X-CUBE-AI\5.2.0
set PATH=%X_CUBE_AI_DIR%\Utilities\windows;%PATH%</code></pre>
<ol start="3" type="1">
<li>[ <em>Alternative solution</em> ] Create an alias with <code>doskey</code> command</li>
</ol>
<pre class="dosbatch"><code>doskey stm32ai=&quot;%X_CUBE_AI_DIR%\Utilities\windows\stm32ai.exe&quot; $*</code></pre>
<ol start="4" type="1">
<li>Verify the environment</li>
</ol>
<pre class="dosbatch"><code>&gt;  stm32ai --version
stm32ai - Neural Network Tools for STM32 v1.4.0 (AI tools v5.2.0)</code></pre>
<p>or</p>
<pre class="dosbatch"><code>&gt;  stm32ai --tools_version
Neural Network Tools for STM32 v1.4.0 (AI tools v5.2.0)
- Python version   : 3.5.7
- Numpy version    : 1.17.2
- TF version       : 2.3.0
- TF Keras version : 2.4.0
- Caffe version    : 1.0.0
- Lasagne version  : 0.2.dev1
- ONNX version     : 1.6.0
- ONNX RT version  : 1.1.2
</code></pre>
</section>
<section id="ubuntu-18.4-and-ubuntu-16.4-or-derived" class="level3">
<h3>Ubuntu® 18.04 and Ubuntu® 16.04 (or derived)</h3>
<p>A similar environment setup is expected. A native GCC (x86_64) tool-chain should be accessible in the system path.</p>
<div class="sourceCode" id="cb9"><pre class="sourceCode bash"><code class="sourceCode bash"><span id="cb9-1"><a href="#cb9-1"></a><span class="bu">export</span> <span class="va">X_CUBE_AI_DIR=$CUBE_FW_DIR</span>/Packs/STMicroelectronics/X-CUBE-AI/5.2.0</span>
<span id="cb9-2"><a href="#cb9-2"></a><span class="bu">export</span> <span class="va">PATH=$X_CUBE_AI_DIR</span>/Utilities/linux:<span class="va">$PATH</span></span></code></pre></div>
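<p>To make this setting persistent across sessions, the same <code>export</code> lines can be appended to the shell profile — a sketch assuming a Bash shell and the default repository location under <code>$HOME</code> (the quoted <code>&#39;EOF&#39;</code> delimiter keeps the <code>$</code> references literal in the file, so they are expanded when the profile is sourced):</p>
<pre class="bash"><code>cat &gt;&gt; ~/.bashrc &lt;&lt; &#39;EOF&#39;
export CUBE_FW_DIR=$HOME/STM32Cube/Repository
export X_CUBE_AI_DIR=$CUBE_FW_DIR/Packs/STMicroelectronics/X-CUBE-AI/5.2.0
export PATH=$X_CUBE_AI_DIR/Utilities/linux:$PATH
EOF</code></pre>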
</section>
<section id="macos-x64" class="level3">
<h3>macOS® (x64)</h3>
<p>A similar environment setup is expected. A native GCC (x86_64) tool-chain should be accessible in the system path.</p>
<div class="sourceCode" id="cb10"><pre class="sourceCode bash"><code class="sourceCode bash"><span id="cb10-1"><a href="#cb10-1"></a><span class="bu">export</span> <span class="va">X_CUBE_AI_DIR=$CUBE_FW_DIR</span>/Packs/STMicroelectronics/X-CUBE-AI/5.2.0</span>
<span id="cb10-2"><a href="#cb10-2"></a><span class="bu">export</span> <span class="va">DYLD_LIBRARY_PATH=$X_CUBE_AI_DIR</span>/Utilities/mac</span>
<span id="cb10-3"><a href="#cb10-3"></a><span class="bu">export</span> <span class="va">DYLD_FALLBACK_LIBRARY_PATH=$X_CUBE_AI_DIR</span>/Utilities/mac</span>
<span id="cb10-4"><a href="#cb10-4"></a><span class="bu">export</span> <span class="va">PATH=$X_CUBE_AI_DIR</span>/Utilities/mac:<span class="va">$PATH</span></span></code></pre></div>
</section>
</section>
<section id="error-handling" class="level2">
<h2>Error handling</h2>
<p>During the execution of a given command, if an error is raised after the parsing of the arguments, the <code>stm32ai</code> application returns <code>-1</code>; otherwise it returns <code>0</code>. The description of the error is displayed at the beginning of the summary.</p>
<p>An error message is prefixed by a category and a short description.</p>
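<p>Since the exit code follows this convention, the CLI can be chained in shell scripts; a minimal sketch, assuming <code>stm32ai</code> is on the PATH and <code>model.h5</code> is a placeholder model file:</p>
<pre class="bash"><code># a non-zero exit code signals that an error was raised
if ! stm32ai analyze -m model.h5 -o ./out; then
    echo &quot;stm32ai analyze failed, see ./out/network_analyze_report.txt&quot;
    exit 1
fi</code></pre>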
<table>
<colgroup>
<col style="width: 21%"></col>
<col style="width: 78%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">category</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">LOAD ERROR</td>
<td style="text-align: left;">error during the load/import of the model or the connection with the STM32 board</td>
</tr>
<tr class="even">
<td style="text-align: left;">INVALID MODEL</td>
<td style="text-align: left;">provided network model file is corrupted or cannot be parsed</td>
</tr>
<tr class="odd">
<td style="text-align: left;">INVALID OPTIONS</td>
<td style="text-align: left;">specific model parameter is invalid</td>
</tr>
<tr class="even">
<td style="text-align: left;">NOT IMPLEMENTED</td>
<td style="text-align: left;">expected feature is not implemented</td>
</tr>
<tr class="odd">
<td style="text-align: left;">TOOL ERROR / INTERNAL ERROR</td>
<td style="text-align: left;">internal error</td>
</tr>
<tr class="even">
<td style="text-align: left;">CLI ERROR</td>
<td style="text-align: left;">specific CLI error</td>
</tr>
<tr class="odd">
<td style="text-align: left;">INTERRUPT</td>
<td style="text-align: left;">indicates that the execution of the command has been interrupted by the user (<code>CTRL-C</code> or kill system signal)</td>
</tr>
</tbody>
</table>
<div class="Warning">
<p><strong>Note</strong> — Specific attention is paid to providing explicit and relevant short descriptions of the errors. Unfortunately, this is not always the case; additional tips and tricks can be found in the <a href="faqs.html">FAQs [8]</a> article. Do not hesitate to use the <a href="https://community.st.com/s/topic/0TO0X0000003iUqWAI/stm32-machine-learning-ai">ST Community channel/forum</a> or local support.</p>
</div>
<section id="example-of-error" class="level3 unnumbered">
<h3>Example of error</h3>
<pre class="prose"><code>...
Exec/report summary (analyze 0.000s err=-1)
------------------------------------------------------------------------------------------------------
error           : NOT IMPLEMENTED: Quantizing a compressed tensor is not supported for dense_4_weights
model file      : &lt;full_model_file_path&gt;
type            : keras (keras_dump)
...</code></pre>
</section>
</section>
</section>
<section id="ref_com_options" class="level1">
<h1>Common arguments</h1>
<p>The following table describes the common arguments for the <code>&#39;analyze&#39;</code>, <code>&#39;validate&#39;</code> and <code>&#39;generate&#39;</code> commands. The specific arguments are described in the respective command sections.</p>
<table>
<colgroup>
<col style="width: 23%"></col>
<col style="width: 76%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">parameter</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>-m/--model</code></td>
<td style="text-align: left;">indicates the original model file paths (see <a href="#ref_dl_fw_detection">“DL framework detection”</a> section) - <em>Mandatory</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>-t/--type</code></td>
<td style="text-align: left;">indicates the type of the original DL framework when it cannot be inferred from the extensions of the model files (see <a href="#ref_dl_fw_detection">“DL framework detection”</a> section) - <em>Optional</em></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>-w/--workspace</code></td>
<td style="text-align: left;">indicates a working/temporary directory for the intermediate/temporary files (default: <code>&quot;./stm32ai_ws/&quot;</code> directory) - <em>Optional</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>-o/--output</code></td>
<td style="text-align: left;">indicates the output directory for the generated C-files and report files (default: <code>&quot;./stm32ai_output/&quot;</code> directory) - <em>Optional</em></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>-n/--name</code></td>
<td style="text-align: left;">indicates the C-name (<code>C-string</code> type) of the imported model. It is used to prefix the names of the specialized NN C-files and of the API functions. It is also used for the temporary files, which allows using the same workspace/output directories for different models (default: <code>&quot;network&quot;</code>) - <em>Optional</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>-c/--compression</code></td>
<td style="text-align: left;">indicates the expected global compression factor to apply. Supported values: <code>1|4|8</code> (default: ‘1’). Refer to <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[2], “Graph flow and memory layout optimizer”</a> section. Compression can only be applied to dense-type layers. - <em>Optional</em></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>--allocate-inputs</code></td>
<td style="text-align: left;">if defined, this flag indicates that the “activations” buffer is also used to handle the input buffers; otherwise (default behavior), they must be allocated separately in the user memory space. Depending on the size of the input data, the “activations” buffer may be larger, but overall smaller than the sum of the activations buffer plus the input buffers. To retrieve the addresses of the associated input buffers, refer to <a href="embedded_client_api.html">[5], “IO buffers into activations buffer”</a> section. - <em>Optional</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>--allocate-outputs</code></td>
<td style="text-align: left;">if defined, this flag indicates that the “activations” buffer is also used to handle the output buffers; otherwise (default behavior), they must be allocated separately in the user memory space (refer to <a href="embedded_client_api.html">[5], “IO buffers into activations buffer”</a> section). - <em>Optional</em></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>--split-weights</code></td>
<td style="text-align: left;">if defined, this flag indicates that one C-array is generated per weights/bias data tensor, instead of a single C-array (“weights” buffer) for the whole model (default: disabled) (refer to <a href="embedded_client_api.html">[5], “Split weights buffer”</a> section) - <em>Optional</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>-q/--quantize</code></td>
<td style="text-align: left;">indicates the file path of the <em>tensor format configuration</em> file for a Keras model, or of the configuration file used to perform the Keras post-training quantization process (refer to <a href="quantization.html">[8]</a>) - <em>Optional</em></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>-v/--verbosity</code></td>
<td style="text-align: left;">indicates the level of verbosity (amount of displayed information). Supported values: <code>0|1|2</code> (default: <code>1</code>) - <em>Optional</em></td>
</tr>
</tbody>
</table>
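<p>As an illustration, several common arguments can be combined in a single invocation; a hedged sketch where the model file, C-name and directories are placeholders:</p>
<pre class="dosbatch"><code>$ stm32ai analyze -m my_model.h5 -n my_net -w ./my_ws -o ./my_out -c 4 --allocate-inputs -v 2</code></pre>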
</section>
<section id="ref_analyze_cmd" class="level1">
<h1>Analyze command</h1>
<section id="description" class="level2">
<h2>Description</h2>
<p>The <code>&#39;analyze&#39;</code> command is the primary command to import, parse, check and render an uploaded pre-trained model. The <a href="#ref_out_of_box_report">detailed report</a> provides the main system metrics to determine whether the generated code can be deployed on an STM32 device. It also includes rendering information by layer and/or operator (see <a href="#ref_c_graph_desc">“C-graph description”</a> section). After completion, the user can be fully <em>confident</em> that the imported model is supported in terms of layers/operators.</p>
</section>
<section id="specific-arguments" class="level2">
<h2>Specific arguments</h2>
<p>Only the <a href="#ref_com_options">“Common”</a> arguments are considered.</p>
</section>
<section id="examples" class="level2">
<h2>Examples</h2>
<ul>
<li><p>Analyze a model (simple model file)</p>
<pre class="dosbatch"><code>$ stm32ai analyze -m &lt;model_file_path&gt;</code></pre></li>
<li><p>Analyze multiple model files (Caffe-type example)</p>
<pre class="dosbatch"><code>$ stm32ai analyze -m ./caffe/lenet.prototxt -m ./caffe/lenet.caffemodel</code></pre></li>
<li><p>Analyze a 32b float model with a compression request</p>
<pre class="dosbatch"><code>$ stm32ai analyze -m &lt;model_file_path&gt; -c 8</code></pre></li>
<li><p>Analyze a model with input tensors placed in activations buffer</p>
<pre class="dosbatch"><code>$ stm32ai analyze -m &lt;model_file_path&gt; --allocate-inputs</code></pre></li>
<li><p>Analyze a Keras post-quantized model (refer to <a href="quantization.html">[8]</a>)</p>
<pre class="dosbatch"><code>$ stm32ai analyze -m &lt;modified_model_file&gt;.h5 -q &lt;quant_file_desc&gt;.json</code></pre></li>
</ul>
</section>
<section id="ref_dl_fw_detection" class="level2">
<h2>DL framework detection</h2>
<p>The extensions of the model files are used to identify the DL framework that should be used to import the model. If the auto-detection is ambiguous, the <code>&#39;--type/-t&#39;</code> option should be used to define the correct framework.</p>
<table>
<colgroup>
<col style="width: 21%"></col>
<col style="width: 22%"></col>
<col style="width: 56%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">DL framework</th>
<th style="text-align: left;">type (<code>--type/-t</code>)</th>
<th style="text-align: left;">file extension</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">Keras</td>
<td style="text-align: left;"><code>keras</code></td>
<td style="text-align: left;"><code>.h5</code> or <code>.hdf5</code>, and <code>.json</code> or <code>.yml</code> or <code>.yaml</code></td>
</tr>
<tr class="even">
<td style="text-align: left;">TensorFlow lite</td>
<td style="text-align: left;"><code>tflite</code></td>
<td style="text-align: left;"><code>.tflite</code></td>
</tr>
<tr class="odd">
<td style="text-align: left;">Lasagne</td>
<td style="text-align: left;"><code>lasagne</code></td>
<td style="text-align: left;"><code>.npz</code> and <code>.py</code></td>
</tr>
<tr class="even">
<td style="text-align: left;">Caffe</td>
<td style="text-align: left;"><code>caffe</code></td>
<td style="text-align: left;"><code>.prototxt</code> and <code>.caffemodel</code></td>
</tr>
<tr class="odd">
<td style="text-align: left;">ConvNetJS</td>
<td style="text-align: left;"><code>convnetjs</code></td>
<td style="text-align: left;"><code>.json</code></td>
</tr>
<tr class="even">
<td style="text-align: left;">ONNX</td>
<td style="text-align: left;"><code>onnx</code></td>
<td style="text-align: left;"><code>.onnx</code></td>
</tr>
</tbody>
</table>
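<p>For instance, a <code>.json</code> file alone is ambiguous (it may be a Keras topology or a ConvNetJS model, as shown in the table above); the <code>--type</code> option can then be used to force the expected framework (the file name is a placeholder):</p>
<pre class="dosbatch"><code>$ stm32ai analyze -m model.json -t convnetjs</code></pre>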
<p>If you try to <em>force</em> a type that is not valid, the following typical error is reported:</p>
<pre><code>$ stm32ai /c/ai_lab/fxp_demo/demo_asc_fxp/Session_keras_mod_93_Model.h5 --type tflite
...
TOOL ERROR: Invalid extension on input file
    C:\ai_lab\fxp_demo\demo_asc_fxp\Session_keras_mod_93_Model.h5</code></pre>
</section>
<section id="ref_out_of_box_report" class="level2">
<h2>Out-of-the-box information</h2>
<p>The first part of the log shows the arguments used and the main system-dimensioning C-model properties.</p>
<pre class="dosbatch"><code>$ stm32ai analyze -m ds_cnn.h5
Neural Network Tools for STM32 v1.4.0 (AI tools v5.2.0)
-- Importing model
-- Importing model - done (elapsed time 2.736s)
-- Rendering model
-- Rendering model - done (elapsed time 0.090s)

Creating report file &lt;output-directory-path&gt;\network_analyze_report.txt

Exec/report summary (analyze dur=2.742s err=0)
-----------------------------------------------------------------------------
model file         : &lt;model-directory-path&gt;\ds_cnn.h5
type               : keras (keras_dump) - tf.keras 2.4.0
c_name             : network
compression        : None
quantize           : None
workspace dir      : &lt;workspace-directory-path&gt;
output dir         : &lt;output-directory-path&gt;

model_name         : ds_cnn
model_hash         : b773f449281f9d970d5b982fb57db61f
input              : input_0 [490 items, 1.91 KiB, ai_float, FLOAT32, (49, 10, 1)]
input (total)      : 1.91 KiB
output             : dense_1_nl [12 items, 48 B, ai_float, FLOAT32, (12,)]
output (total)     : 48 B
params #           : 40,140 items (156.80 KiB)
macc               : 4,833,524
weights (ro)       : 159,536 (155.80 KiB) (-0.64%)
activations (rw)   : 64,000 (62.50 KiB)
ram (total)        : 66,008 (64.46 KiB) = 64,000 + 1,960 + 48
...</code></pre>
<p>The initial sub-section recalls the CLI arguments. Note that the full raw command line is saved at the beginning of the generated report file: <code>&lt;output-directory-path&gt;\network_&lt;cmd&gt;_report.txt</code></p>
<table>
<colgroup>
<col style="width: 21%"></col>
<col style="width: 78%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">field</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">model file</td>
<td style="text-align: left;">reports the full path of the original model files (<code>--model</code>). If there are multiple files, there is one line per file.</td>
</tr>
<tr class="even">
<td style="text-align: left;">type</td>
<td style="text-align: left;">reports the <code>--type</code> value or the inferred DL framework type. For a Keras model, the version of the framework used to generate the model is also displayed.</td>
</tr>
<tr class="odd">
<td style="text-align: left;">c_name</td>
<td style="text-align: left;">reports the expected C-name for the generated C-model (<code>--name</code>)</td>
</tr>
<tr class="even">
<td style="text-align: left;">compression</td>
<td style="text-align: left;">reports the expected compression factor (<code>--compression</code>)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">quantize</td>
<td style="text-align: left;">reports the quantization parameter</td>
</tr>
<tr class="even">
<td style="text-align: left;">workspace dir</td>
<td style="text-align: left;">full-path of the workspace directory (<code>--workspace</code>)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">output dir</td>
<td style="text-align: left;">full-path of the output directory (<code>--output</code>)</td>
</tr>
</tbody>
</table>
<p>The second part shows the results of the importing and rendering stages.</p>
<table>
<colgroup>
<col style="width: 21%"></col>
<col style="width: 78%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">field</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">model_name</td>
<td style="text-align: left;">designates the name of the provided model. This is generally the name of the model file.</td>
</tr>
<tr class="even">
<td style="text-align: left;">model_hash</td>
<td style="text-align: left;">provides a calculated MD5 signature of the imported model files.</td>
</tr>
<tr class="odd">
<td style="text-align: left;">input</td>
<td style="text-align: left;">indicates the name, the number of items, the format and the size in bytes of an input tensor. There is one line per input. The <code>&#39;input (total)&#39;</code> field indicates the total size (in bytes) of the inputs.<br />
<br />
<code>&#39;input_0 [490 items, 1.91 KiB, ai_float, FLOAT32, (49, 10, 1)]&#39;</code> indicates that the <code>input_0</code> tensor holds 490 floating-point items (size in bytes = <code>490 x 4 B = 1.91 KiB</code>) with a <code>(49, 10, 1)</code> shape (refer to <a href="embedded_client_api.html">[5] “IO tensor”</a> section)</td>
</tr>
<tr class="even">
<td style="text-align: left;">output</td>
<td style="text-align: left;">indicates the name, the format and the size of an output tensor. There is one line per output. The <code>output (total)</code> field indicates the total size (in bytes) of the outputs.</td>
</tr>
<tr class="odd">
<td style="text-align: left;">param #</td>
<td style="text-align: left;">indicates the total number of parameters of the original model and associated size in bytes.</td>
</tr>
<tr class="even">
<td style="text-align: left;">macc</td>
<td style="text-align: left;">indicates the overall computational complexity of the original model. The value is expressed in <code>MACC</code> operations (multiply-accumulate operations) (refer to <a href="evaluation_metrics.html">[6]</a>)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">weights (ro)</td>
<td style="text-align: left;">indicates the requested size (in bytes) for the generated constant RO parameters (bias and weights tensors). The size is 4-byte aligned. If the value differs from that of the original model files, the ratio is also reported (refer to <a href="evaluation_metrics.html">[6], “Memory-related metrics”</a> section)</td>
</tr>
<tr class="even">
<td style="text-align: left;">activations (rw)</td>
<td style="text-align: left;">indicates the requested size (in bytes) for the working RW memory buffer (also called the activations buffer). It is mainly used as an <em>internal heap</em> for the activations and temporary results (refer to <a href="evaluation_metrics.html">[6], “Memory-related metrics”</a> section)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">ram (total)</td>
<td style="text-align: left;">indicates the requested total size (in bytes) for the RAM including the input and output buffers.</td>
</tr>
</tbody>
</table>
<section id="compressed-model-example" class="level3 unnumbered">
<h3>Compressed model example</h3>
<p>For a <em>“compressed”</em> model, the compression gain for the <code>&#39;weights&#39;</code> size (here <em>-72.90%</em>) is the global difference between the original 32b float model and the generated <em>“compressed”</em> C-model. <em>Note that only the fully-connected (dense) layers can be compressed.</em></p>
<pre class="dosbatch"><code>$ stm32ai analyze -m dnn.h5 -c 4
...
input              : input_0 [490 items, 1.91 KiB, ai_float, FLOAT32, (490,)]
input (total)      : 1.91 KiB
output             : dense_4_nl [12 items, 48 B, ai_float, FLOAT32, (12,)]
output (total)     : 48 B
params #           : 114,204 items (446.11 KiB)
macc               : 114,372
weights (ro)       : 123,792 B (120.89 KiB) (-72.90%)
activations (rw)   : 1,152 B (1.12 KiB)
ram (total)        : 3,160 B (3.09 KiB) = 1,152 + 1,960 + 48
...</code></pre>
</section>
<section id="quantized-keras-model-example---qmn-format" class="level3 unnumbered">
<h3>Quantized Keras model example - Qmn format</h3>
<p>The gain for the weights size (here <em>-75%</em>) is the difference between the original 32b float model and the generated quantized C-model (full 8-bit quantized format). Additional information is displayed in the <a href="#ref_graph_desc">“Graph description”</a> section.</p>
<pre class="dosbatch"><code>$ stm32ai analyze -m &lt;modified_model_file&gt;.h5 -q &lt;quant_file_desc&gt;.json
...
input              : quantize_conv2d_1_input [784 items, 784 B, ai_i8, Q0.7, (28, 28, 1)]
input (total)      : 784 B
output             : softmax_8 [10 items, 40 B, ai_float, FLOAT32, (10,)]
output (total)     : 40 B
params #           : 1,199,882 items (4.58 MiB)
macc               : 12,088,202
weights (ro)       : 1,199,884 B (1171.76 KiB) (-75.00%)
activations (rw)   : 35,072 B (34.25 KiB)
ram (total)        : 35,896 B (35.05 KiB) = 35,072 + 784 + 40
...</code></pre>
</section>
<section id="quantized-tflite-model-example---integer-format" class="level3 unnumbered">
<h3>Quantized TFLite model example - integer format</h3>
<p>The following report shows the case where a TensorFlow lite quantized model is imported and the inputs are placed in the activations buffer. In this case, since the parameters from the imported file are already quantized (8-bit format), no gain in <code>weights</code> size is reported. Note that for each input (or output), the type, scale and zero-point values are reported. Additional information is displayed in the <a href="#ref_graph_desc">“Graph description”</a> section.</p>
<pre class="dosbatch"><code>$ stm32ai analyze -m &lt;quantized_model_file&gt;.tflite --allocate-inputs
...
input              : input_0 [2,107 items, 2.06 KiB, ai_u8, scale=0.5, zero=0, (49, 43, 1)]
input (total)      : 2.06 KiB
output             : nl_2 [4 items, 4 B, ai_u8, scale=0.00390625, zero=0, (4,)]
output (total)     : 4 B
params #           : 18,252 items (17.86 KiB)
macc               : 369,684
weights (ro)       : 18,288 B (17.86 KiB)
activations (rw)   : 6,860 B (6.70 KiB) *
ram (total)        : 6,864 B (6.70 KiB) = 6,860 + 0 + 4

 (*) inputs are placed in the activations buffer
...</code></pre>
</section>
</section>
<section id="ref_graph_desc" class="level2">
<h2>PINNR/IR graph description</h2>
<p>The outlined “graph” section (table form) provides a summary of the topology of the network as it is considered before the optimization, rendering and generation stages. The <code>id</code> column indicates the index of the operator in the original graph; it is generated by the X-CUBE-AI importer. The represented graph is an internal platform-independent neural-network representation, also called <code>PINNR</code> (or <code>IR</code>, internal representation), created during the import. Training-only layers are ignored during the conversion. Note that if no input operator is defined, an “input” layer is added, and the layers are generally un-fused. A complete graphical representation is available through the UI (refer to <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[2]</a>).</p>
<div id="fig:mod_to_ir_muspeech" class="fignos">
<figure>
<img src="" property="center" style="width:95.0%" alt /><figcaption><span>Figure 1:</span> IR Graph (microSpeech)</figcaption>
</figure>
</div>
<p>PINNR operator properties.</p>
<table>
<colgroup>
<col style="width: 21%"></col>
<col style="width: 78%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">field</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">id</td>
<td style="text-align: left;">indicates the layer/operator index of the original model</td>
</tr>
<tr class="even">
<td style="text-align: left;">layer (type)</td>
<td style="text-align: left;">designates the name and the type of the operator. The name is inferred from the original name. When a layer is un-fused, the new layer is created with the original name suffixed with <code>_nl</code> (see the first layer in the next figure)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">output shape</td>
<td style="text-align: left;">indicates the output shape of the layer. It follows the “HWC” layout (channel-last representation): H=height, W=width, C=channel (refer to <a href="embedded_client_api.html">[5] “IO tensor”</a> section)</td>
</tr>
<tr class="even">
<td style="text-align: left;">param #</td>
<td style="text-align: left;">indicates the number of parameters</td>
</tr>
<tr class="odd">
<td style="text-align: left;">connected to</td>
<td style="text-align: left;">designates the name of the incoming layers</td>
</tr>
<tr class="even">
<td style="text-align: left;">macc</td>
<td style="text-align: left;">designates the associated complexity</td>
</tr>
<tr class="odd">
<td style="text-align: left;">rom</td>
<td style="text-align: left;">indicates the ROM size (in bytes and 4-bytes aligned)</td>
</tr>
</tbody>
</table>
<ul>
<li>Note that the <code>&#39;rom&#39;</code> size information can be extended with a global indicator of the C-storage format for the associated weights/bias tensors. See the <a href="#ref_c_graph_desc">“C-graph description”</a> section.</li>
</ul>
<table>
<colgroup>
<col style="width: 21%"></col>
<col style="width: 78%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;"><code>rom</code> suffix</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>nothing</code></td>
<td style="text-align: left;">indicates that all parameters are 32b float numbers (<code>FP32</code>)</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>(c)</code></td>
<td style="text-align: left;">indicates that part of the parameters are compressed. For a given layer, the weights and bias are not necessarily both compressed. The reported <code>rom</code> size also includes the dictionary.</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>(q)</code></td>
<td style="text-align: left;">indicates that part of the parameters are quantized (fixed-point representation, <code>Qmn</code> or power-of-two scaling integer format). Biases are encoded on 32 bits.</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>(i)</code></td>
<td style="text-align: left;">indicates that part of the parameters are quantized (fixed-point representation, integer format). Biases are encoded on 32 bits.</td>
</tr>
</tbody>
</table>
<ul>
<li>The <code>&#39;macc&#39;</code> and <code>&#39;rom&#39;</code> values may be absent. This indicates that, after the rendering and optimizing phases, the referenced layer will be merged with the previous layer (fig. <a href="#fig:mod_to_ir_ds_cnn">2</a> or <a href="#fig:mod_to_ir_mnv2">3</a>) or, as for the reshape operator (fig. <a href="#fig:mod_to_ir_muspeech">1</a>), the operation is ‘free’.</li>
</ul>
<div id="fig:mod_to_ir_ds_cnn" class="fignos">
<figure>
<img src="" property="center" style="width:95.0%" alt /><figcaption><span>Figure 2:</span> IR Graph (ds-cnn)</figcaption>
</figure>
</div>
<p>The following figure illustrates a multi-branch model case.</p>
<div id="fig:mod_to_ir_mnv2" class="fignos">
<figure>
<img src="" property="center" style="width:95.0%" alt /><figcaption><span>Figure 3:</span> IR Graph (part of mobilenetv2)</figcaption>
</figure>
</div>
<section id="compressed-and-quantized-model-example" class="level3 unnumbered">
<h3>Compressed and quantized model example</h3>
<div class="Warning">
<p><strong>Note</strong> — For a compressed or quantized model, the MACC values (by layer or globally) are unchanged: the number of operations is always the same. Only the associated number of CPU cycles per MACC changes, in particular for quantized models.</p>
</div>
<section id="compressed-32b-float-model" class="level4 unnumbered">
<h4>Compressed 32b float model</h4>
<pre class="dosbatch"><code>$ stm32ai analyze -m  &lt;model_file&gt;.h5 -c 4
...
---------------------------------------------------------------------------------------------------
id  layer (type)              output shape      param #  connected to   macc         rom
---------------------------------------------------------------------------------------------------
0   input_0 (Input)           (490,)
    dense_1 (Dense)           (144,)            70,704   input_0        70,560       72,160 (c)
    dense_1_nl (Nonlinearity) (144,)                     dense_1        144
---------------------------------------------------------------------------------------------------
2   dense_2 (Dense)           (144,)            20,880   dense_1_nl     20,736       22,336 (c)
    dense_2_nl (Nonlinearity) (144,)                     dense_2        144
---------------------------------------------------------------------------------------------------
</code></pre>
</section>
<section id="keras-post-training-quantized-model" class="level4 unnumbered">
<h4>Keras post-training quantized model</h4>
<pre class="dosbatch"><code>$ stm32ai analyze -m &lt;modified_model_file&gt;.h5 -q &lt;quant_file_desc&gt;.json
...
---------------------------------------------------------------------------------------------------
id  layer (type)                output shape   param #   connected to   macc         rom
---------------------------------------------------------------------------------------------------
...
---------------------------------------------------------------------------------------------------
1   conv2d_3 (Conv2D)           (49, 10, 60)   2,460     reshape_2      1,205,460    2,460 (q)
---------------------------------------------------------------------------------------------------
2   activation_6 (Nonlinearity) (49, 10, 60)             conv2d_3
---------------------------------------------------------------------------------------------------
3   conv2d_4 (Conv2D)           (25, 10, 76)   182,476   activation_6   45,619,076   182,476 (q)
---------------------------------------------------------------------------------------------------
...</code></pre>
</section>
<section id="tflite-8b-quantized-model" class="level4 unnumbered">
<h4>TFlite 8b quantized model</h4>
<pre class="dosbatch"><code>$ stm32ai analyze -m microSpeech.tflite
...
---------------------------------------------------------------------------------------------------
id  layer (type)        output shape   param #     connected to        macc           rom
---------------------------------------------------------------------------------------------------
0   input_0 (Input)     (49, 40, 1)
    conv2d_0 (Conv2D)   (25, 20, 8)    648         input_0             320,008        672 (i)
---------------------------------------------------------------------------------------------------
1   reshape_1 (Reshape) (4000,)                    conv2d_0
    dense_1 (Dense)     (4,)           16,004      reshape_1           16,008         16,016 (i)
---------------------------------------------------------------------------------------------------
2   nl_2 (Nonlinearity) (4,)                       dense_1             68
---------------------------------------------------------------------------------------------------</code></pre>
</section>
</section>
</section>
<section id="ref_complexity_by_layer" class="level2">
<h2>MACC/ROM complexity by layer</h2>
<p>The last part of the report summarizes the relative network complexity in terms of MACC and associated ROM size by layer. Note that only the operators that contribute to the global <code>rom</code> and <code>macc</code> metrics are reported.</p>
<pre class="dosbatch"><code>Complexity per-layer - macc=4,550,024 rom=159,536
---------------------------------------------------------------------------------------------------
id      layer (type)                       macc                         rom
---------------------------------------------------------------------------------------------------
0       conv2d_1 (Conv2D)                  |||||||||              7.2%  ||||||||||             6.6%
1       batch_normalization_1 (ScaleBias)  |                      0.4%  |                      0.3%
2       separable_conv2d_1 (Conv2D)        |                      0.0%  ||                     1.6%
2       separable_conv2d_1_conv2d (Conv2D) ||||||||||||||||||||  11.3%  ||||||||||||||||||||| 10.4%
4       conv2d_2 (Conv2D)                  ||||||||||||||||||||| 11.4%  ||||||||||||||||||||| 10.4%
5       batch_normalization_3 (ScaleBias)  |                      0.4%  |                      0.3%
6       separable_conv2d_2 (Conv2D)        |                      0.0%  ||||                   1.6%
6       separable_conv2d_2_conv2d (Conv2D) ||||||||||||||||||||  11.3%  ||||||||||||||||||||| 10.4%
8       conv2d_3 (Conv2D)                  ||||||||||||||||||||| 11.4%  ||||||||||||||||||||| 10.4%
9       batch_normalization_5 (ScaleBias)  |                      0.4%  |                      0.3%
10      separable_conv2d_3 (Conv2D)        |                      0.0%  |||                    1.6%
10      separable_conv2d_3_conv2d (Conv2D) ||||||||||||||||||||  11.3%  ||||||||||||||||||||| 10.4%
12      conv2d_4 (Conv2D)                  ||||||||||||||||||||| 11.4%  ||||||||||||||||||||| 10.4%
13      batch_normalization_7 (ScaleBias)  |                      0.4%  |                      0.3%
14      separable_conv2d_4 (Conv2D)        |                      0.0%  |||||                  1.6%
14      separable_conv2d_4_conv2d (Conv2D) ||||||||||||||||||||  11.3%  ||||||||||||||||||||| 10.4%
16      conv2d_5 (Conv2D)                  ||||||||||||||||||||| 11.4%  ||||||||||||||||||||| 10.4%
17      batch_normalization_9 (ScaleBias)  |                      0.4%  |                      0.3%
18      average_pooling2d_1 (Pool)         |                      0.2%  |                      0.0%
20      dense_1 (Dense)                    |                      0.0%  ||||                   2.0%
20      dense_1_nl (Nonlinearity)          |                      0.0%  |                      0.0%
---------------------------------------------------------------------------------------------------</code></pre>
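<p>For illustration, the per-layer percentages are simply each operator's share of the global totals. A minimal sketch (the layer values and the bar scaling below are illustrative assumptions, not the exact rendering rule of the tool):</p>

```python
def complexity_table(layers, total_macc):
    """Render each layer's MACC share as a percentage with an ASCII bar.

    layers: list of (name, macc) tuples. The bar scaling (~2 chars per
    percent) is an assumption, not the tool's exact rule.
    """
    rows = []
    for name, macc in layers:
        pct = 100.0 * macc / total_macc
        bar = "|" * max(1, round(pct * 2))  # at least one bar, as in the report
        rows.append(f"{name:<24} {bar:<24} {pct:4.1f}%")
    return rows

# Illustrative per-layer values only
layers = [("conv2d_1 (Conv2D)", 328_168), ("dense_1 (Dense)", 1_540)]
for row in complexity_table(layers, total_macc=4_550_024):
    print(row)
```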
</section>
<section id="ref_c_graph_desc" class="level2">
<h2>C-graph description</h2>
<p>An additional “Generated C-graph summary” section (also displayed with the <code>&#39;-v 2&#39;</code> argument) is included in the report. It summarizes the main computational and associated elements (c-objects) used by the network runtime C-inference engine. It is based on the c-structures generated inside the <code>&#39;&lt;name&gt;.c&#39;</code> file. A complete graphic representation is available through the UI (refer to <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[2]</a>).</p>
<p>The first part recalls the main structural elements: the c-name, the number of c-nodes (implementations of the layers/operators), the number of c-arrays used for the data storage of the associated tensors, and the names of the network input and output tensors.</p>
<pre class="dosbatch"><code>Generated C-graph summary
---------------------------------------------------------------------------------------------------
model name         : microspeech_01
c-name             : network
c-node #           : 5
c-array #          : 11
activations size   : 4352
weights size       : 16688
macc               : 336084
inputs             : [&#39;Reshape_1_output_array&#39;]
outputs            : [&#39;nl_2_fmt_output_array&#39;]</code></pre>
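<p>These summary values can be cross-checked against the per-node tables below: the global <code>macc</code> is the sum of the per-node macc values, and the weights size is the sum of the per-node rom values. The activations size corresponds to the peak concurrent usage of the activations pool (buffers are reused between nodes), not the plain sum of all activation arrays. Checking with the figures of this report:</p>

```python
# macc and rom per c-node, taken from the "C-Layers" table of this report
node_macc = [320_008, 16_000, 8, 60, 8]
node_rom = [672, 16_016, 0, 0, 0]

assert sum(node_macc) == 336_084   # matches the "macc" summary line
assert sum(node_rom) == 16_688     # matches the "weights size" summary line

# The activations size is the *peak* usage of the pool: here
# conv2d_0_scratch0 (352 B) and conv2d_0_output (4000 B) are live together.
print(352 + 4_000)  # 4352, the "activations size" summary line
```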
<p>As illustrated in the figure <a href="#fig:c_graph_overview">4</a>, the implemented c-graph can be considered as a sequential graph, managed as a simple linked list. The fixed execution order is defined by the C-code optimizer engine according to two main criteria: the data-path dependencies (or I/O tensor dependencies) and the minimization of the peak RAM usage.</p>
<div id="fig:c_graph_overview" class="fignos">
<figure>
<img src="" property="center" style="width:95.0%" alt /><figcaption><span>Figure 4:</span> Computational c-graph objects</figcaption>
</figure>
</div>
<p>Each computational c-node is entirely defined by:</p>
<ul>
<li>operation type, parameters<br />
</li>
<li>input tensors list: [I]<br />
</li>
<li><em>optional</em> weights/bias tensors list: [W]<br />
</li>
<li><em>optional</em> scratches tensors list: [S]</li>
<li>output tensors list: [O]</li>
</ul>
<section id="c-arrays-table" class="level3 unnumbered">
<h3>C-Arrays table</h3>
<p>The <code>&#39;C-Arrays&#39;</code> table lists the objects used to handle the base address, size and metadata of the data memory segments for the different tensors. For each item, the number of elements and size in bytes (<code>item/size</code>), the memory segment location (<code>mem-pool</code>), the C type (<code>c-type</code>) and a short format description (<code>fmt</code>) are reported.</p>
<pre class="dosbatch"><code>C-Arrays (11)
---------------------------------------------------------------------------------------------------
c_id  name (*_array)      item/size           mem-pool     c-type         fmt         comment 
---------------------------------------------------------------------------------------------------
0     conv2d_0_scratch0   352/352             activations  uint8_t        fxp/q(8,0)              
1     dense_1_bias        4/16                weights      const int32_t  int/ss                   
2     dense_1_weights     16000/16000         weights      const uint8_t  int/ua                  
3     conv2d_0_bias       8/32                weights      const int32_t  int/ss                  
4     conv2d_0_weights    640/640             weights      const uint8_t  int/ua                 
5     Reshape_1_output    1960/1960           user         uint8_t        int/us      /input     
6     conv2d_0_output     4000/4000           activations  uint8_t        int/us                 
7     dense_1_output      4/4                 activations  uint8_t        int/ua                 
8     dense_1_fmt_output  4/16                activations  float          float                    
9     nl_2_output         4/16                activations  float          float                  
10    nl_2_fmt_output     4/4                 user         uint8_t        int/us      /output    
---------------------------------------------------------------------------------------------------</code></pre>
<table>
<colgroup>
<col style="width: 21%"></col>
<col style="width: 78%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">mem-pool</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">activations</td>
<td style="text-align: left;">part of the activations buffer</td>
</tr>
<tr class="even">
<td style="text-align: left;">weights</td>
<td style="text-align: left;">part of a <em>ROM</em> segment</td>
</tr>
<tr class="odd">
<td style="text-align: left;">user</td>
<td style="text-align: left;">part of a memory segment owned by the user (client application layer)</td>
</tr>
</tbody>
</table>
<table>
<colgroup>
<col style="width: 21%"></col>
<col style="width: 78%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">fmt</th>
<th style="text-align: left;">format description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">float</td>
<td style="text-align: left;">32b float numbers</td>
</tr>
<tr class="even">
<td style="text-align: left;">c4/c8</td>
<td style="text-align: left;">compressed 32b float numbers. The size includes the dictionary.</td>
</tr>
<tr class="odd">
<td style="text-align: left;">int</td>
<td style="text-align: left;">quantized data memory chunk using integer format (refer to <a href="quantization.html">[8]</a>). <code>&#39;/channel (n)&#39;</code> indicates that per-channel scheme is used (else per-tensor).</td>
</tr>
<tr class="even">
<td style="text-align: left;">fxp</td>
<td style="text-align: left;">quantized data memory chunk using Qmn format (refer to <a href="quantization.html">[8]</a>)</td>
</tr>
</tbody>
</table>
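<p>The <code>item/size</code> column can be cross-checked from the <code>c-type</code>: the size in bytes is the number of items multiplied by the element size (for example, <code>dense_1_bias</code> holds 4 <code>int32_t</code> items, hence 16 bytes). A minimal sketch:</p>

```python
# element size in bytes for the c-types used in the table above
ELEM_SIZE = {"uint8_t": 1, "int32_t": 4, "float": 4}

def array_size(items, c_type):
    """Return the expected byte size of a c-array: items * sizeof(c-type)."""
    return items * ELEM_SIZE[c_type]

# cross-check a few rows of the "C-Arrays" table
assert array_size(4, "int32_t") == 16          # dense_1_bias:    4/16
assert array_size(16_000, "uint8_t") == 16_000 # dense_1_weights: 16000/16000
assert array_size(4, "float") == 16            # nl_2_output:     4/16
assert array_size(1_960, "uint8_t") == 1_960   # Reshape_1_output: 1960/1960
```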
</section>
<section id="c-layers-table" class="level3 unnumbered">
<h3>C-Layers table</h3>
<p>The <code>&#39;C-Layers&#39;</code> table lists the c-nodes. For each node, the c-name (<code>name</code>), type, macc, rom and associated tensors (with the shape of the I/O tensors) are reported. The associated c-array can be retrieved by its name (or array id).</p>
<pre class="dosbatch"><code>C-Layers (5)
---------------------------------------------------------------------------------------------------
c_id  name (*_layer)  id  type    macc        rom         tensors                shape (array id) 
---------------------------------------------------------------------------------------------------
0     conv2d_0        0   conv2d  320008      672         I: Reshape_1_output    [1, 49, 40, 1] (5)
                                                          S: conv2d_0_scratch0                     
                                                          W: conv2d_0_weights                      
                                                          W: conv2d_0_bias                         
                                                          O: conv2d_0_output     [1, 25, 20, 8] (6)
---------------------------------------------------------------------------------------------------
1     dense_1         1   dense   16000       16016       I: conv2d_0_output0    [1, 1, 1, 4000] (6)
                                                          W: dense_1_weights                      
                                                          W: dense_1_bias                      
                                                          O: dense_1_output      [1, 1, 1, 4] (7) 
---------------------------------------------------------------------------------------------------
2     dense_1_fmt     1   nl      8           0           I: dense_1_output      [1, 1, 1, 4] (7) 
                                                          O: dense_1_fmt_output  [1, 1, 1, 4] (8) 
---------------------------------------------------------------------------------------------------
3     nl_2            2   nl      60          0           I: dense_1_fmt_output  [1, 1, 1, 4] (8) 
                                                          O: nl_2_output         [1, 1, 1, 4] (9) 
---------------------------------------------------------------------------------------------------
4     nl_2_fmt        2   nl      8           0           I: nl_2_output         [1, 1, 1, 4] (9)
                                                          O: nl_2_fmt_output     [1, 1, 1, 4] (10)
---------------------------------------------------------------------------------------------------</code></pre>
<ul>
<li><code>&#39;id&#39;</code> designates the layer/operator index from the original model, allowing the link with the implemented node (<code>&#39;c_id&#39;</code>) to be retrieved.</li>
</ul>
<div id="fig:id_map" class="fignos">
<figure>
<img src="" property="center" style="width:95.0%" alt /><figcaption><span>Figure 5:</span> Original <code>&#39;id&#39;</code> and <code>&#39;c_id&#39;</code> mapping</figcaption>
</figure>
</div>
</section>
</section>
</section>
<section id="ref_validate_cmd" class="level1">
<h1>Validate command</h1>
<section id="description-1" class="level2">
<h2>Description</h2>
<p>The <code>&#39;validate&#39;</code> command imports, renders and validates the generated C-files. Please refer to <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[2]</a>, <em>“Validation engine”</em> and <em>“AI validation application”</em> sections for an overview of the underlying process. In particular, for the validation on target (<code>&#39;--mode stm32&#39;</code>), the STM32 board must first be flashed with a validation firmware including the model. A detailed description of the metrics used is given in <a href="evaluation_metrics.html">[6]</a>.</p>
<div class="Error">
<p><strong>Note</strong> — Be aware that the <em>main purpose</em> of the underlying validation process is to test the generated C-files with the associated network runtime library (desktop/x86 or STM32 run-time) by comparison with the imported model. Consequently, only a representative and limited part of a whole validation or test data set can be used. It has not been designed to validate a pre-trained model as during a training/test phase.</p>
</div>
</section>
<section id="specific-arguments-1" class="level2">
<h2>Specific arguments</h2>
<table>
<colgroup>
<col style="width: 25%"></col>
<col style="width: 74%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">parameter</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>--mode</code></td>
<td style="text-align: left;">indicates the mode of validation. <code>x86</code> (<em>default value</em>) performs a validation on desktop. <code>stm32</code>/ <a href="#ref_stm32_io_only"><code>stm32_io_only</code></a> is used to perform a validation on target. - <em>Optional</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>-vi/valinput</code></td>
<td style="text-align: left;">indicates the custom test data set which must be used. If not defined, an internal self-generated random data set is used (refer to <a href="evaluation_metrics.html">[6], “Input validation files”</a> section) - <em>Optional</em></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>-vo/valoutput</code></td>
<td style="text-align: left;">indicates the expected custom output values. If the data are already provided in a single file (<code>*.npz</code>) through the <code>-vi</code> option, this argument can be skipped - <em>Optional</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>-b/--batches</code></td>
<td style="text-align: left;">indicates how many random data samples are generated (default: <code>10</code>) - <em>Optional</em></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>-d/--desc</code></td>
<td style="text-align: left;">describes the COM port which is used to communicate with an STM32 board (see <a href="#ref_valio_arg">“<code>desc</code> argument”</a> section) - <em>Optional</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>--full</code></td>
<td style="text-align: left;">if defined, this flag indicates that an <a href="#ref_l2_error">extended validation</a> process is applied to report the <em>L2r</em> error layer-by-layer; otherwise the <em>L2r</em> error is only evaluated on the last or <em>output</em> layers. For the <code>x86</code> mode, this also allows the relative execution time per layer to be reported. - <em>Optional</em></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>--validate.batch_mode</code></td>
<td style="text-align: left;">when a custom input data set is used, this argument limits the number of samples. Two modes are possible: <code>first</code> indicates that only the first <code>batches</code> samples are used; <code>random</code> indicates that <code>batches</code> samples are randomly selected with a fixed seed (see the last example of the <a href="#ref_example_val">“Examples”</a> section) - <em>Optional</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>--classifier</code></td>
<td style="text-align: left;">if defined, this flag indicates that the provided model should be considered as a classifier rather than a regressor, forcing the computation of the <code>&#39;CM&#39;</code> and <code>&#39;ACC&#39;</code> metrics; otherwise an auto-detection mechanism is used to evaluate whether the model is a classifier or not. - <em>Optional</em></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>--no-check</code></td>
<td style="text-align: left;">if defined and combined with the <code>&#39;stm32&#39;</code> mode, this “debug” flag reduces the full preliminary check-list used to make sure that the flashed STM32 C-model has been generated with the same tools and options. Only the c-name and the network I/O shape/format are checked. - <em>Optional</em></td>
</tr>
</tbody>
</table>
</section>
<section id="ref_example_val" class="level2">
<h2>Examples</h2>
<ul>
<li><p>Minimal command to validate a 32b float model with the self-generated random input data (“Validation on desktop”).</p>
<pre class="dosbatch"><code>$ stm32ai validate -m &lt;model_f32p_file_path&gt;</code></pre></li>
<li><p>To report the <a href="#ref_l2_error">“L2r error”</a> and relative <a href="#sec_exec_by_layer">execution time by layer</a> (“Validation on desktop”).</p>
<pre class="dosbatch"><code>$ stm32ai validate -m &lt;model_f32p_file_path&gt; --full</code></pre></li>
<li><p>Minimal command to validate a 32b float model on STM32 target. Note that a complete profiling report including <a href="#sec_exec_by_layer">execution time by layer</a> is generated by default.</p>
<pre class="dosbatch"><code>$ stm32ai validate -m &lt;model_f32p_file_path&gt; --mode stm32</code></pre></li>
<li><p>Validation of a 32b float model with self-generated random input data and compression factor (“Validation on desktop”)</p>
<pre class="dosbatch"><code>$ stm32ai validate -m &lt;model_f32p_file_path&gt; -c 4</code></pre></li>
<li><p>Validate a model with a custom data set</p>
<pre class="dosbatch"><code>$ stm32ai validate -m &lt;model_file_path&gt; -vi test_data.csv</code></pre></li>
<li><p>Validate a quantized model with a custom data set</p>
<pre class="dosbatch"><code>$ stm32ai validate -m &lt;modified_model_file&gt;.h5 -q &lt;quant_file_desc&gt;.json -vi test_data.npz</code></pre></li>
<li><p>Validate a model with only 20 randomly selected samples from a large custom data set</p>
<pre class="dosbatch"><code>$ stm32ai validate -m &lt;modified_model_file&gt;.h5 -vi test_large_data.npz --validate.batch_mode random -b 20</code></pre></li>
</ul>
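<p>A custom <code>*.npz</code> data set for the <code>-vi</code> option can be prepared with NumPy. In this sketch, the array key names (<code>m_inputs</code>/<code>m_outputs</code>) and the shapes are assumptions to be checked against <a href="evaluation_metrics.html">[6]</a>, <em>“Input validation files”</em> section:</p>

```python
import numpy as np

# Illustrative only: the expected array names inside the .npz file
# ("m_inputs"/"m_outputs") and the shapes are assumptions, to be
# verified against [6], "Input validation files" section.
inputs = np.random.rand(10, 49, 40, 1).astype(np.float32)  # 10 samples
outputs = np.zeros((10, 4), dtype=np.float32)              # expected outputs
np.savez("test_data.npz", m_inputs=inputs, m_outputs=outputs)

# The file is then passed to the tool with:
#   stm32ai validate -m model.h5 -vi test_data.npz
data = np.load("test_data.npz")
print(sorted(data.files))
```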
</section>
<section id="ref_desc_arg" class="level2">
<h2>Serial COM port configuration</h2>
<p>The <code>&#39;--desc/-d&#39;</code> argument should be used to indicate how to configure the serial COM driver to access the STM32 board. Beforehand, the STM32 board must have been flashed with an <em>aiValidation</em> application (refer to <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[2]</a>, <em>“AI validation application”</em> section).</p>
<p>By default, an auto-detection mechanism is applied to discover a connected board at 115200 bauds (default value: <code>default:115200</code>).</p>
<ul>
<li><p>set the baud rate to 921600</p>
<pre class="dosbatch"><code>$ stm32ai validate -m &lt;model_file_path&gt; --mode stm32 -d 921600</code></pre></li>
<li><p>set the COM port to <code>COM16</code> (Windows) or <code>/dev/ttyACM0</code> (Linux)</p>
<pre class="dosbatch"><code>$ stm32ai validate -m &lt;model_file_path&gt; --mode stm32 -d COM16
$ stm32ai validate -m &lt;model_file_path&gt; --mode stm32 -d /dev/ttyACM0</code></pre></li>
<li><p>set the COM port to <code>COM16</code> and the baud rate to 921600</p>
<pre class="dosbatch"><code>$ stm32ai validate -m &lt;model_file_path&gt; --mode stm32 -d COM16:921600</code></pre></li>
</ul>
</section>
<section id="ref_l2_error" class="level2">
<h2>Report of the L2r error for a 32b float model</h2>
<p>The “L2 relative” error (refer to <a href="evaluation_metrics.html">[6]</a>) is the primary metric used to validate a 32b float model. It is reported in the <a href="#ref_complexity_by_layer">“MACC/ROM complexity by layer”</a> table, through an additional <code>&#39;L2r error&#39;</code> column.</p>
<p><em>*</em> indicates the maximum value.</p>
<pre class="dosbatch"><code>...
Complexity/l2r error per-layer - macc=4,833,524 rom=159,536
--------------------------------------------------------------------------------------------------
id  layer (type)                      macc                    rom                   l2r error
--------------------------------------------------------------------------------------------------
0   conv2d_1 (Conv2D)                 |||||||||||||     6.8%  |||||||||       6.6%
...
16  conv2d_5 (Conv2D)                 |||||||||||||||  10.8%  |||||||||||||  10.4%
16  conv2d_5_nl (Nonlinearity)        |                 0.0%  |               0.0%  8.24875428e-07
17  batch_normalization_9 (ScaleBias) |                 0.3%  |               0.3% 
18  average_pooling2d_1 (Pool)        |                 0.2%  |               0.0%
20  dense_1 (Dense)                   |                 0.0%  ||              2.0%
20  dense_1_nl (Nonlinearity)         |                 0.0%  |               0.0%  3.33251955e-06 *
--------------------------------------------------------------------------------------------------
...</code></pre>
<p>By default, the <em>L2r</em> error is only computed on the last layers or outputs. The <code>&#39;--full&#39;</code> flag can be used to compute the <em>L2r</em> error for each hidden layer matching an original layer. Note that this feature is only available for the Keras 32b float models.</p>
<pre class="dosbatch"><code>...
Complexity/l2r error per-layer - macc=4,833,524 rom=159,536
--------------------------------------------------------------------------------------------------
id  layer (type)                      macc                    rom                   l2r error
--------------------------------------------------------------------------------------------------
0   conv2d_1 (Conv2D)                 |||||||||||||     6.8%  |||||||||       6.6%
0   conv2d_1_nl (Nonlinearity)        |                 0.0%  |               0.0%  1.38719429e-07
1   batch_normalization_1 (ScaleBias) |                 0.3%  |               0.3%  1.24893788e-07
2   separable_conv2d_1 (Conv2D)       ||                1.5%  ||              1.6%
...
14  separable_conv2d_4 (Conv2D)       ||                1.5%  ||              1.6%
14  separable_conv2d_4_conv2d (Conv2D)||||||||||||||   10.6%  |||||||||||||  10.4%  7.93936863e-07
16  conv2d_5 (Conv2D)                 |||||||||||||||  10.8%  |||||||||||||  10.4%
16  conv2d_5_nl (Nonlinearity)        |                 0.0%  |               0.0%  8.24875428e-07
17  batch_normalization_9 (ScaleBias) |                 0.3%  |               0.3%  1.42430758e-06
18  average_pooling2d_1 (Pool)        |                 0.2%  |               0.0%  1.31099046e-06
20  dense_1 (Dense)                   |                 0.0%  ||              2.0%
20  dense_1_nl (Nonlinearity)         |                 0.0%  |               0.0%  3.33251955e-06 *
--------------------------------------------------------------------------------------------------
...</code></pre>
<div class="Warning">
<p><strong>Note</strong> — The <code>&#39;--full&#39;</code> option can also be used for validation on target (<code>&#39;--mode stm32&#39;</code>) to report the <em>L2r</em> error per layer; however, be aware that the validation time increases significantly due to the upload of the intermediate results.</p>
</div>
</section>
<section id="sec_exec_by_layer" class="level2">
<h2>Execution time per layer</h2>
<section id="validation-on-target" class="level3">
<h3>Validation on target</h3>
<p>The validation on target provides a <em>full and accurate profiling</em> report, including:</p>
<ul>
<li>inference-time</li>
<li>number of CPU cycles by MACC</li>
<li>execution time per layer</li>
<li>STM32 HW settings/configurations (clock frequency, memory configuration)</li>
</ul>
<div class="Note">
<p><strong>Note</strong> — All this information is also available through the “aiSystemPerformance” application (refer to <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[2]</a>)</p>
</div>
<pre class="dosbatch"><code>...
-- Running STM32 C-model

ON-DEVICE STM32 execution (&quot;network&quot;, auto-detect, 115200)..

&lt;Stm32com id=0x1f23381f3c8 - CONNECTED(COM6/115200) devid=0x431/STM32F411xC/E msg=2.1&gt;
 0x431/STM32F411xC/E @100MHz/100MHz (FPU is present) lat=3 ART: PRFTen ICen DCen
 found network(s): [&#39;network&#39;]
 description    : &#39;network&#39; 1-&gt;[5]-&gt;1 macc=336084 rom=16.30KiB ram=4.25KiB
 tools versions : rt=(5, 2, 0) tool=(5, 2, 0)/(1, 3, 0) api=(1, 1, 0) &quot;Wed Sep 23 11:21:00 2020&quot;

Running with inputs (10, 49, 40, 1)..
...... 1/10
...
...... 10/10
 RUN Stats    : batches=10 dur=3.656s tfx=3.304s 5.805KiB/s (wb=19.141KiB,rb=40B)

Results for 10 inference(s) @100/100MHz (macc:336084)
 device      : 0x431/STM32F411xC/E @100MHz/100MHz (FPU is present) lat=3 ART: PRFTen ICen DCen
 duration    : 38.194 ms (average)
 CPU cycles  : 3819373 (average)
 cycles/MACC : 11.36 (average for all layers)
 c_nodes     : 5

Clayer  id  desc                          oshape          fmt       ms        (%)
-------------------------------------------------------------------------------------
0       0   10004/(2D Convolutional)      (25, 20, 8)     uint8     37.425     98.0%
1       1   10005/(Dense)                 (1, 1, 4)       uint8     0.752       2.0%
2       1   10009/(Nonlinearity)          (1, 1, 4)       float32   0.003       0.0%
3       2   10009/(Nonlinearity)          (1, 1, 4)       float32   0.010       0.0%
4       2   10009/(Nonlinearity)          (1, 1, 4)       uint8     0.004       0.0%
                                                                    38.194 (total)


-- Running STM32 C-model - done (elapsed time 3.585s)
...</code></pre>
<p>This report can be used to identify the main contributors and to refine the model accordingly. The <code>&#39;Clayer&#39;</code> column references the index of the c-node (see the <a href="#ref_c_graph_desc">“C-graph description”</a> section) and <code>&#39;id&#39;</code> indicates the index from the original model.</p>
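<p>The reported figures are mutually consistent: the average cycle count is the average duration multiplied by the core clock frequency, and <code>cycles/MACC</code> is the cycle count divided by the model complexity. A quick check with the numbers above:</p>

```python
freq_hz = 100_000_000    # @100MHz, from the device line of the report
duration_s = 38.194e-3   # average inference duration
macc = 336_084           # model complexity

# duration * frequency gives ~3,819,400 cycles, matching the reported
# average of 3,819,373 (the small gap comes from rounding the duration)
cycles = duration_s * freq_hz
assert abs(cycles - 3_819_373) < 100

cycles_per_macc = 3_819_373 / macc
print(round(cycles_per_macc, 2))  # 11.36, as reported
```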
</section>
<section id="ref_stm32_io_only" class="level3">
<h3>stm32 out-of-the-box execution</h3>
<p>When the <code>&#39;stm32_io_only&#39;</code> mode is used, the STM32 model is only executed out-of-the-box; the execution time and <em>L2r</em> error per layer are no longer computed. This can be used to limit the traffic between the host and the target, decreasing the validation time.</p>
</section>
<section id="validation-on-desktop-and---full-flag" class="level3 unnumbered">
<h3>Validation on desktop and <code>&#39;--full&#39;</code> flag</h3>
<p>For the validation on desktop, the <code>&#39;--full&#39;</code> flag can be used to report a relative execution time per layer. Nevertheless, note that these values are only indicators: unlike the inference times reported by the validation on target, they depend on the workload of the desktop machine.</p>
<pre class="dosbatch"><code>...
-- Running X86 C-model

Results for 10 inference(s)
 c_nodes     : 5
 duration    : 1.011 ms (average)

Clayer  id  desc                          oshape          fmt       exec time (%)
--------------------------------------------------------------------------------
0       0   10004/(2D Convolutional)      (25, 20, 8)     uint8      95.2%
1       1   10005/(Dense)                 (1, 1, 4)       uint8       4.4%
2       1   10009/(Nonlinearity)          (1, 1, 4)       float32     0.2%
3       2   10009/(Nonlinearity)          (1, 1, 4)       float32     0.2%
4       2   10009/(Nonlinearity)          (1, 1, 4)       uint8       0.0%

NOTE: duration and exec time per layer is just an indication. They are dependent
      of the HOST-machine work-load.

-- Running X86 C-model - done (elapsed time 0.302s)
...</code></pre>
<p>The <code>Clayer</code> value is the c-layer index reported in the <a href="#ref_c_graph_desc">“C-graph description”</a>.</p>
</section>
</section>
</section>
<section id="ref_generate_cmd" class="level1">
<h1>Generate command</h1>
<section id="description-2" class="level2">
<h2>Description</h2>
<p>The <code>&#39;generate&#39;</code> command is used to generate the specialized network and data C-files in the specified output directory.</p>
<pre class="dosbatch"><code>$ stm32ai generate -m &lt;model_file_path&gt; -o &lt;output-directory-path&gt;
...
-- Generating C-code
-- Generating C-code - done (elapsed time 0.900s)
Installing..
 &lt;output-directory-path&gt;\&lt;name&gt;.c
 &lt;output-directory-path&gt;\&lt;name&gt;_data.c
 &lt;output-directory-path&gt;\&lt;name&gt;.h
 &lt;output-directory-path&gt;\&lt;name&gt;_data.h

Creating report file &lt;output-directory-path&gt;\network_generate_report.txt
...</code></pre>
<p>The <code>&#39;&lt;name&gt;.c/.h&#39;</code> files contain the topology of the C-model (C-struct definitions of the tensors and the operators), including the embedded inference client API (refer to <a href="embedded_client_api.html">[5]</a>) used to run the generated model on top of the optimized inference runtime library. The <code>&#39;&lt;name&gt;_data.c/.h&#39;</code> files contain, by default, a simple C-array with the data of the weight/bias tensors. The <code>&#39;--split-weights&#39;</code> option generates one C-array per tensor instead (refer to <a href="embedded_client_api.html">[5], “<em>Split weights buffer</em>”</a> section), and the <code>&#39;--binary&#39;</code> option creates a simple binary file with the data of the weight/bias tensors. The <code>&#39;--relocatable&#39;</code> option generates a relocatable binary model including the topology, the requested kernels and the weights in a single binary file (refer to <a href="relocatable.html">[9]</a>).</p>
<ul>
<li>The <code>-v 2</code> argument can be used to report more information, similar to the <a href="#ref_analyze_cmd">“Analyze”</a> command.</li>
</ul>
</section>
<section id="specific-arguments-2" class="level2">
<h2>Specific arguments</h2>
<table>
<colgroup>
<col style="width: 23%"></col>
<col style="width: 76%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">parameter</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>--binary</code></td>
<td style="text-align: left;">if defined, this flag forces the generation of a binary file <code>&#39;&lt;name&gt;_data.bin&#39;</code>. The <code>&#39;&lt;name&gt;_data.c&#39;</code> and <code>&#39;&lt;name&gt;_data.h&#39;</code> files are always generated (see the <a href="#ref_addr_options"><em>“Particular network data c-file”</em></a> section). This binary file contains <strong>ONLY</strong> the data of the different weight/bias tensors; the C-implementation of the topology (including the meta-data, scale/zero-point…) is always generated in the <code>&lt;name&gt;.c/.h</code> files. - <em>Optional</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>--generate.fota</code></td>
<td style="text-align: left;">if the <code>--binary</code> flag is passed, this additional argument with the <code>&#39;True&#39;</code> value adds a specific ST header which is expected for a partial <a href="#ref_fota_support">Firmware Over-The-Air (FOTA)</a> process. The name of the generated file is suffixed with <code>_fota</code>: <code>&lt;name&gt;_data_fota.bin</code>. - <em>Optional</em></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>--address</code></td>
<td style="text-align: left;">with the <code>--binary</code> flag, this helper option can be used to indicate the address where the weights will be located, in order to generate a particular <a href="#ref_addr_options"><code>&#39;&lt;name&gt;_data.c&#39;</code></a> file. - <em>Optional</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>--copy-weights-at</code></td>
<td style="text-align: left;">with the <code>--binary</code> flag and <code>--address</code> option, this helper option can be used to indicate the destination address where the weights should be copied at initialization time, through a particular generated <a href="#ref_addr_options"><code>&#39;&lt;name&gt;_data.c&#39;</code></a> file. - <em>Optional</em></td>
</tr>
</tbody>
</table>
<p>Specific arguments to generate a relocatable binary model (refer to <a href="relocatable.html">[9]</a> for details).</p>
<table>
<colgroup>
<col style="width: 23%"></col>
<col style="width: 76%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">parameter</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>-r/--relocatable</code></td>
<td style="text-align: left;">if defined, this option generates a relocatable binary model. The <code>&#39;--binary&#39;</code> option can be used to produce a separate binary file containing only the data of the weight/bias tensors. - <em>Optional</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>--lib</code></td>
<td style="text-align: left;">indicates the root directory to find the relocatable network runtime libraries. Typical value: <code>&#39;$X_CUBE_AI_DIR/Middlewares/ST/AI&#39;</code>. - <em>Optional</em> (but mandatory if the <code>&#39;--relocatable&#39;</code> option is defined).</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>--series</code></td>
<td style="text-align: left;">indicates the targeted STM32 series for the generation of the relocatable binary model. Possible values: <code>stm32f4</code> (default), <code>stm32f3</code>, <code>stm32l4</code>, <code>stm32f7</code>, <code>stm32h7</code>, <code>stm32l5</code>, <code>stm32wl</code>, <code>stm32mp1</code>. - <em>Optional</em></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>--no-c-files</code></td>
<td style="text-align: left;">if defined, this flag skips the generation of the specific C-files. - <em>Optional</em></td>
</tr>
</tbody>
</table>
<p>Note that <code>&#39;--split-weights&#39;</code>, <code>&#39;--address&#39;</code> and <code>&#39;--copy-weights-at&#39;</code> options are not supported with the <code>&#39;--relocatable&#39;</code> option.</p>
</section>
<section id="examples-1" class="level2">
<h2>Examples</h2>
<ul>
<li><p>Generate the specialized NN C-files for a 32b float model (default options).</p>
<pre class="dosbatch"><code>$ stm32ai generate -m &lt;model_file_path&gt;</code></pre></li>
<li><p>Generate the specialized NN C-files for a 32b float model with compression factor.</p>
<pre class="dosbatch"><code>$ stm32ai generate -m &lt;model_file_path&gt; -c 8</code></pre></li>
<li><p>Generate the specialized NN C-files for quantized model.</p>
<pre class="dosbatch"><code>$ stm32ai generate -m &lt;model_file_path&gt; -q &lt;quant_file_desc&gt;.json</code></pre></li>
<li><p>Generate only the network NN C-file, weights/bias parameters are provided as a binary file/object.</p>
<pre class="dosbatch"><code>$ stm32ai generate -m &lt;model_file_path&gt; -o &lt;output-directory-path&gt; -n &lt;name&gt; --binary

...
-- Generating C-code
-- Generating C-code - done (elapsed time 0.184s)
 &lt;output-directory-path&gt;\&lt;name&gt;.c
 &lt;output-directory-path&gt;\&lt;name&gt;_data.c
 &lt;output-directory-path&gt;\&lt;name&gt;.h
 &lt;output-directory-path&gt;\&lt;name&gt;_data.h
 &lt;output-directory-path&gt;\&lt;name&gt;_data.bin

Creating report file &lt;output-directory-path&gt;\&lt;name&gt;_generate_report.txt
...</code></pre></li>
<li><p>Generate a full relocatable binary file for a STM32H7 series (refer to <a href="relocatable.html">[9]</a>).</p>
<pre class="dosbatch"><code>$ stm32ai generate -m &lt;model_file_path&gt; -o &lt;output-directory-path&gt; --relocatable --series stm32h7

...
-- Generating C-code
-- Generating C-code - done (elapsed time 0.184s)
...
-- Generating C-code - done (elapsed time 2.026s)
Installing..
 &lt;output-directory-path&gt;\network.c
 &lt;output-directory-path&gt;\network_data.c
 &lt;output-directory-path&gt;\network_img_rel.c
 &lt;output-directory-path&gt;\network.h
 &lt;output-directory-path&gt;\network_data.h
 &lt;output-directory-path&gt;\network_img_rel.h
 &lt;output-directory-path&gt;\network_rel.bin

Creating report file &lt;output-directory-path&gt;\network_generate_report.txt
...</code></pre></li>
<li><p>Generate a relocatable binary file w/o the weights for an STM32F4 series. Weights/bias data are generated in a separate binary file (refer to <a href="relocatable.html">[9]</a>).</p>
<pre class="dosbatch"><code>$ stm32ai generate -m &lt;model_file_path&gt; -o &lt;output-directory-path&gt; -n &lt;name&gt; --relocatable --binary

...
-- Generating C-code
-- Generating C-code - done (elapsed time 0.184s)
...
-- Generating C-code - done (elapsed time 2.026s)
Installing..
 &lt;output-directory-path&gt;\&lt;name&gt;.c
 &lt;output-directory-path&gt;\&lt;name&gt;_data.c
 &lt;output-directory-path&gt;\&lt;name&gt;_img_rel.c
 &lt;output-directory-path&gt;\&lt;name&gt;.h
 &lt;output-directory-path&gt;\&lt;name&gt;_data.h
 &lt;output-directory-path&gt;\&lt;name&gt;_img_rel.h
 &lt;output-directory-path&gt;\&lt;name&gt;_rel.bin
 &lt;output-directory-path&gt;\&lt;name&gt;_data.bin

Creating report file &lt;output-directory-path&gt;\&lt;name&gt;_generate_report.txt
...</code></pre></li>
</ul>
</section>
<section id="ref_addr_options" class="level2">
<h2>Particular network data c-file</h2>
<p>The helper <code>&#39;--address&#39;</code> and <code>&#39;--copy-weights-at&#39;</code> arguments are convenience options to generate a specific <code>ai_network_data_weights_get()</code> function. The returned address should be passed to the <code>ai_&lt;network&gt;_init()</code> function through the <code>ai_network_params</code> structure (refer to <a href="embedded_client_api.html">[5]</a>). Note that this (including the copy function) can be fully managed by the application code.</p>
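<p>For orientation, this wiring can be sketched on a host machine with stand-in types. The real <code>ai_handle</code> and <code>ai_network_params</code> definitions come from the generated headers and the embedded client API <a href="embedded_client_api.html">[5]</a>; the field names and the mock init function below are purely illustrative.</p>

```c
#include <stddef.h>

/* Stand-in types: the real ones are defined by the generated headers
   (<name>.h / <name>_data.h) and the embedded client API. */
typedef void *ai_handle;
typedef struct {
  ai_handle weights;      /* as returned by ai_network_data_weights_get() */
  ai_handle activations;  /* scratch buffer */
} ai_network_params;

static unsigned char weights_blob[16];  /* plays the role of '<name>_data.bin' */
static ai_handle init_received;         /* records what the mock init was given */

/* Mirrors the generated helper: returns where the weights live. */
ai_handle ai_network_data_weights_get(void) { return (ai_handle)weights_blob; }

/* Mock standing in for ai_<network>_init(): stores the weights address. */
int ai_network_init_mock(const ai_network_params *p) {
  init_received = p->weights;
  return p->weights != NULL;
}

/* Returns 1 when the weights address flowed from the helper to init. */
int wire_network(void) {
  ai_network_params p = { ai_network_data_weights_get(), (ai_handle)0 };
  if (!ai_network_init_mock(&p)) return 0;
  return init_received == (ai_handle)weights_blob;
}
```

<p>The point of the sketch is only the data flow: the helper supplies the weights address, and the application forwards it through the parameters structure at initialization time.</p>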
<p>If the <code>--binary</code> (or <code>--relocatable</code>) option is passed without the <code>&#39;--address&#39;</code> or <code>&#39;--copy-weights-at&#39;</code> arguments, the following <code>network_data.c</code> file is generated:</p>
<div class="sourceCode" id="cb50"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb50-1"><a href="#cb50-1"></a><span class="pp">#include </span><span class="im">&quot;network_data.h&quot;</span></span>
<span id="cb50-2"><a href="#cb50-2"></a></span>
<span id="cb50-3"><a href="#cb50-3"></a>ai_handle ai_network_data_weights_get(<span class="dt">void</span>)</span>
<span id="cb50-4"><a href="#cb50-4"></a>{</span>
<span id="cb50-5"><a href="#cb50-5"></a>  <span class="cf">return</span> AI_HANDLE_NULL;</span>
<span id="cb50-6"><a href="#cb50-6"></a>}</span></code></pre></div>
<p>Example of generated <code>network_data.c</code> file with the <code>--binary</code> and <code>--address 0x810000</code> options.</p>
<div class="sourceCode" id="cb51"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb51-1"><a href="#cb51-1"></a><span class="pp">#include </span><span class="im">&quot;network_data.h&quot;</span></span>
<span id="cb51-2"><a href="#cb51-2"></a></span>
<span id="cb51-3"><a href="#cb51-3"></a><span class="pp">#define AI_NETWORK_DATA_ADDR 0x810000</span></span>
<span id="cb51-4"><a href="#cb51-4"></a></span>
<span id="cb51-5"><a href="#cb51-5"></a>ai_handle ai_network_data_weights_get(<span class="dt">void</span>)</span>
<span id="cb51-6"><a href="#cb51-6"></a>{</span>
<span id="cb51-7"><a href="#cb51-7"></a>  <span class="cf">return</span> AI_HANDLE_PTR(AI_NETWORK_DATA_ADDR);</span>
<span id="cb51-8"><a href="#cb51-8"></a>}</span></code></pre></div>
<p>Example of generated <code>network_data.c</code> file with the <code>--binary</code>, <code>--address 0x810000</code> and <code>--copy-weights-at 0xD0000000</code> options.</p>
<div class="sourceCode" id="cb52"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb52-1"><a href="#cb52-1"></a><span class="pp">#include </span><span class="im">&lt;string.h&gt;</span></span>
<span id="cb52-2"><a href="#cb52-2"></a><span class="pp">#include </span><span class="im">&quot;network_data.h&quot;</span></span>
<span id="cb52-3"><a href="#cb52-3"></a></span>
<span id="cb52-4"><a href="#cb52-4"></a><span class="pp">#define AI_NETWORK_DATA_ADDR 0x810000</span></span>
<span id="cb52-5"><a href="#cb52-5"></a><span class="pp">#define AI_NETWORK_DATA_DST_ADDR 0xD0000000</span></span>
<span id="cb52-6"><a href="#cb52-6"></a></span>
<span id="cb52-7"><a href="#cb52-7"></a>ai_handle ai_network_data_weights_get(<span class="dt">void</span>)</span>
<span id="cb52-8"><a href="#cb52-8"></a>{</span>
<span id="cb52-9"><a href="#cb52-9"></a>  memcpy((<span class="dt">void</span> *)AI_NETWORK_DATA_DST_ADDR, (<span class="dt">const</span> <span class="dt">void</span> *)AI_NETWORK_DATA_ADDR,</span>
<span id="cb52-10"><a href="#cb52-10"></a>                                            AI_NETWORK_DATA_WEIGHTS_SIZE);</span>
<span id="cb52-11"><a href="#cb52-11"></a>  <span class="cf">return</span> AI_HANDLE_PTR(AI_NETWORK_DATA_DST_ADDR);</span>
<span id="cb52-12"><a href="#cb52-12"></a>}</span></code></pre></div>
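<p>The copy-at-init pattern above can be exercised on a host machine by replacing the flash and RAM addresses with plain buffers. Everything below is a simplified stand-in for the generated file, not the generated code itself.</p>

```c
#include <string.h>

typedef void *ai_handle;
#define AI_HANDLE_PTR(ptr) ((ai_handle)(ptr))
#define AI_NETWORK_DATA_WEIGHTS_SIZE 8

/* Buffers standing in for the flash source (AI_NETWORK_DATA_ADDR) and the
   RAM destination (AI_NETWORK_DATA_DST_ADDR); contents are illustrative. */
static const unsigned char flash_weights[AI_NETWORK_DATA_WEIGHTS_SIZE] =
    {1, 2, 3, 4, 5, 6, 7, 8};
static unsigned char ram_weights[AI_NETWORK_DATA_WEIGHTS_SIZE];

/* Same shape as the generated function: copy once, return the destination. */
ai_handle ai_network_data_weights_get(void) {
  memcpy(ram_weights, flash_weights, AI_NETWORK_DATA_WEIGHTS_SIZE);
  return AI_HANDLE_PTR(ram_weights);
}

/* Returns 1 when the copy produced an identical weights image in 'RAM'. */
int weights_copied_ok(void) {
  return ai_network_data_weights_get() == AI_HANDLE_PTR(ram_weights)
      && memcmp(ram_weights, flash_weights, AI_NETWORK_DATA_WEIGHTS_SIZE) == 0;
}
```

<p>On target, the same <code>memcpy</code> runs once at initialization; subsequent inferences read the weights from the destination region.</p>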
</section>
<section id="ref_fota_support" class="level2">
<h2>FOTA support</h2>
<p>The <code>&#39;--binary&#39;</code> flag enables a simple/limited mechanism to update a model w/o having to do a full firmware update. Only the weights (raw data) of a floating-point model are supported. <em>Topology changes or configuration modifications are not supported.</em> Consequently, a quantized model cannot be updated because the tensor definitions are included in the <code>&#39;&lt;network&gt;.c&#39;</code> file (part of the generated C-struct that handles the model).</p>
<div class="Warning">
<p>The <code>&#39;--relocatable&#39;</code> flag (refer to <a href="relocatable.html">[9]</a>) enables a more flexible mechanism to upgrade a whole C-model, including the used forward kernel functions, w/o having to do a full firmware update.</p>
</div>
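<p>The principle behind this limited mechanism can be sketched as follows: the compiled-in topology only keeps a reference to the raw weights region, so a partial update amounts to overwriting that region with a received blob. The function names, sizes, and values below are hypothetical, chosen only to illustrate why nothing but raw weight data can change.</p>

```c
#include <string.h>

/* Stand-in for the weights region the compiled-in topology points to.
   Size and contents are illustrative. */
#define WEIGHTS_SIZE 4
static unsigned char weights_region[WEIGHTS_SIZE] = {10, 20, 30, 40};

/* Hypothetical helper: applies a received '<name>_data.bin' payload in
   place. Only raw bytes move; no tensor definitions are touched. */
void fota_apply(const unsigned char *payload, size_t len) {
  memcpy(weights_region, payload, len);
}

/* Returns 1 when the region now holds the updated raw weights. */
int fota_demo(void) {
  const unsigned char update[WEIGHTS_SIZE] = {11, 21, 31, 41};
  fota_apply(update, WEIGHTS_SIZE);
  return memcmp(weights_region, update, WEIGHTS_SIZE) == 0;
}
```

<p>Because the tensor shapes, scales, and zero-points live in the generated C code, any change to them requires regenerating and reflashing the firmware, which is exactly the limitation stated above.</p>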
</section>
<section id="ref_update_project" class="level2">
<h2>Update an ioc-based project</h2>
<p>For an X-CUBE-AI IDE project (ioc-based), the user can update only the generated NN C-files. In this case, the <code>&#39;--output&#39;</code> option is used to indicate the root directory of the IDE project, that is, the location of the <code>&#39;.ioc&#39;</code> file. The destinations of the previous NN C-files are automatically discovered in the source tree; otherwise, the output directory is used.</p>
<pre class="dosbatch"><code>$ stm32ai generate -m &lt;model_path&gt; -n &lt;name&gt; -c 4 -o &lt;root_project_folder&gt;
...
IOC file found in output directory
-- Importing model
-- Importing model - done (elapsed time 0.806s)
-- Rendering model
-- Rendering model - done (elapsed time 0.113s)
-- Generating C-code
-- Generating C-code - done (elapsed time 0.833s)
Installing..
 &lt;root_project_folder&gt;\Src\&lt;name&gt;.c
 &lt;root_project_folder&gt;\Src\&lt;name&gt;_data.c
 &lt;root_project_folder&gt;\Inc\&lt;name&gt;.h
 &lt;root_project_folder&gt;\Inc\&lt;name&gt;_data.h

Creating report file &lt;root_project_folder&gt;\&lt;name&gt;_generate_report.txt
...</code></pre>
<div class="Warning">
<p><strong>Note</strong> — For multiple network support, the update mechanism for a particular model is the same. <strong>Users</strong> should be careful to use the correct <em>name</em> (<code>&#39;--name my_name&#39;</code>) to avoid overwriting or updating an incorrect file, and to stay aligned with the multi-network helper functions which are only generated by the X-CUBE-AI UI (<code>&#39;app_x-cube-ai.c/.h&#39;</code> files). If the number of networks is changed, the X-CUBE-AI UI should be used to update the models.</p>
</div>
</section>
<section id="update-a-proprietary-source-tree" class="level2">
<h2>Update a proprietary source tree</h2>
<p>The <code>&#39;--output&#39;</code> option is used to indicate the single destination of the generated NN C-files. An empty file with the <code>&#39;.ioc&#39;</code> extension can be placed in the root directory of the custom source tree to use the same discovery mechanism as for the <a href="#ref_update_project">update of an ioc-based project</a>.</p>
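<p>Preparing such a source tree can be sketched as follows on a POSIX shell (directory and file names are illustrative; on Windows, <code>type nul &gt; my_project.ioc</code> creates the empty marker file):</p>

```shell
# Create the root of a custom source tree with the usual Src/Inc layout
mkdir -p my_project/Src my_project/Inc
# An empty '.ioc' file in the root enables the discovery mechanism
touch my_project/my_project.ioc
ls my_project
```

<p>With this layout, passing <code>-o my_project</code> to the generate command lets the tool discover the <code>Src/Inc</code> destinations as for an ioc-based project.</p>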
</section>
</section>
<section id="references" class="level1">
<h1>References</h1>
<table style="width:92%;">
<colgroup>
<col style="width: 13%"></col>
<col style="width: 77%"></col>
</colgroup>
<tbody>
<tr class="odd">
<td style="text-align: left;">[1]</td>
<td style="text-align: left;">X-CUBE-AI - <em>AI expansion pack for STM32CubeMX</em><br />
<a href="https://www.st.com/en/embedded-software/x-cube-ai.html">https://www.st.com/en/embedded-software/x-cube-ai.html</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[2]</td>
<td style="text-align: left;">User manual - Getting started with X-CUBE-AI Expansion Package for Artificial Intelligence (AI) <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">(pdf)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[3]</td>
<td style="text-align: left;">stm32ai - Command Line Interface <a href="command_line_interface.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[4]</td>
<td style="text-align: left;">Supported Deep Learning toolboxes and layers <a href="layer-support.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[5]</td>
<td style="text-align: left;">Embedded inference client API <a href="embedded_client_api.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[6]</td>
<td style="text-align: left;">Evaluation report and metrics <a href="evaluation_metrics.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[7]</td>
<td style="text-align: left;">FAQs <a href="faqs.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[8]</td>
<td style="text-align: left;">Quantization and quantize command <a href="quantization.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[9]</td>
<td style="text-align: left;">Relocatable binary network support <a href="relocatable.html">(link)</a></td>
</tr>
</tbody>
</table>
</section>
<section id="revision-history" class="level1">
<h1>Revision history</h1>
<table>
<colgroup>
<col style="width: 32%"></col>
<col style="width: 24%"></col>
<col style="width: 44%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">Date</th>
<th style="text-align: left;">version</th>
<th style="text-align: left;">changes</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><strong>2019-06-14</strong></td>
<td style="text-align: left;">r1.0</td>
<td style="text-align: left;">initial version (X-CUBE-AI 4.0)</td>
</tr>
<tr class="even">
<td style="text-align: left;"><strong>2019-09-20</strong></td>
<td style="text-align: left;">r1.1</td>
<td style="text-align: left;">X-CUBE-AI 4.1 update</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><strong>2019-12-03</strong></td>
<td style="text-align: left;">r1.2</td>
<td style="text-align: left;">X-CUBE-AI 5.0 update, add ONNX network type, add <code>full</code>/<code>allocator-inputs</code>, add description of the per-channel quantization support</td>
</tr>
<tr class="even">
<td style="text-align: left;"><strong>2020-05-12</strong></td>
<td style="text-align: left;">r2.0</td>
<td style="text-align: left;">X-CUBE-AI 5.1 update (new options, C-graph description), add new figures to illustrate the generated reports, remove quantize command section (new article has been created)</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><strong>2020-09-15</strong></td>
<td style="text-align: left;">r2.1</td>
<td style="text-align: left;">X-CUBE-AI 5.2 update, add generate options for relocatable binary model, fix typo.</td>
</tr>
</tbody>
</table>
</section>



<section class="st_footer">

<h1> <br> </h1>

<p style="font-family:verdana; text-align:left;">
 Embedded Documentation 

	- <b> Command Line Interface </b>
			<br> X-CUBE-AI Expansion Package
				<br> r2.1
		 - AI PLATFORM r5.2.0
			 (Embedded Inference Client API 1.1.0) 
			 - Command Line Interface r1.4.0 
		
	
</p>

<img src="" title="ST logo" align="right" height="100" />

<div class="stnotice">
Information in this document is provided solely in connection with ST products.
The contents of this document are subject to change without prior notice.
<br>
© Copyright STMicroelectronics 2020. All rights reserved. <a href="http://www.st.com">www.st.com</a>
</div>

<hr size="1" />
</section>


</article>
</body>

</html>
