<!DOCTYPE html>
<!--

	Modified template for STM32CubeMX.AI purpose

	d0.1: 	jean-michel.delorme@st.com
			add ST logo and ST footer

	d2.0: 	jean-michel.delorme@st.com
			add sidenav support

	d2.1: 	jean-michel.delorme@st.com
			clean-up + optional ai_logo/ai meta data
			
==============================================================================
           "GitHub HTML5 Pandoc Template" v2.1 — by Tristano Ajmone           
==============================================================================
Copyright © Tristano Ajmone, 2017, MIT License (MIT). Project's home:

- https://github.com/tajmone/pandoc-goodies

The CSS in this template reuses source code taken from the following projects:

- GitHub Markdown CSS: Copyright © Sindre Sorhus, MIT License (MIT):
  https://github.com/sindresorhus/github-markdown-css

- Primer CSS: Copyright © 2016-2017 GitHub Inc., MIT License (MIT):
  http://primercss.io/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The MIT License 

Copyright (c) Tristano Ajmone, 2017 (github.com/tajmone/pandoc-goodies)
Copyright (c) Sindre Sorhus <sindresorhus@gmail.com> (sindresorhus.com)
Copyright (c) 2017 GitHub Inc.

"GitHub Pandoc HTML5 Template" is Copyright (c) Tristano Ajmone, 2017, released
under the MIT License (MIT); it contains readaptations of substantial portions
of the following third party softwares:

(1) "GitHub Markdown CSS", Copyright (c) Sindre Sorhus, MIT License (MIT).
(2) "Primer CSS", Copyright (c) 2016 GitHub Inc., MIT License (MIT).

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
==============================================================================-->
<html>
<head>
  <meta charset="utf-8" />
  <meta name="generator" content="pandoc" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
  <meta name="keywords" content="STM32CubeMX, X-CUBE-AI, Neural Network, Quantization, CLI, Code Generator, Automatic NN mapping tools" />
  <title>FAQs</title>
  <style type="text/css">
.markdown-body{
	-ms-text-size-adjust:100%;
	-webkit-text-size-adjust:100%;
	color:#24292e;
	font-family:-apple-system,system-ui,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol";
	font-size:16px;
	line-height:1.5;
	word-wrap:break-word;
	box-sizing:border-box;
	min-width:200px;
	max-width:980px;
	margin:0 auto;
	padding:45px;
	}
.markdown-body a{
	color:#0366d6;
	background-color:transparent;
	text-decoration:none;
	-webkit-text-decoration-skip:objects}
.markdown-body a:active,.markdown-body a:hover{
	outline-width:0}
.markdown-body a:hover{
	text-decoration:underline}
.markdown-body a:not([href]){
	color:inherit;text-decoration:none}
.markdown-body strong{font-weight:600}
.markdown-body h1,.markdown-body h2,.markdown-body h3,.markdown-body h4,.markdown-body h5,.markdown-body h6{
	margin-top:24px;
	margin-bottom:16px;
	font-weight:600;
	line-height:1.25}
.markdown-body h1{
	font-size:2em;
	margin:.67em 0;
	padding-bottom:.3em;
	border-bottom:1px solid #eaecef}
.markdown-body h2{
	padding-bottom:.3em;
	font-size:1.5em;
	border-bottom:1px solid #eaecef}
.markdown-body h3{font-size:1.25em}
.markdown-body h4{font-size:1em}
.markdown-body h5{font-size:.875em}
.markdown-body h6{font-size:.85em;color:#6a737d}
.markdown-body img{border-style:none}
.markdown-body svg:not(:root){
	overflow:hidden}
.markdown-body hr{
	box-sizing:content-box;
	height:.25em;
	margin:24px 0;
	padding:0;
	overflow:hidden;
	background-color:#e1e4e8;
	border:0}
.markdown-body hr::before{display:table;content:""}
.markdown-body hr::after{display:table;clear:both;content:""}
.markdown-body input{margin:0;overflow:visible;font:inherit;font-family:inherit;font-size:inherit;line-height:inherit}
.markdown-body [type=checkbox]{box-sizing:border-box;padding:0}
.markdown-body *{box-sizing:border-box}.markdown-body blockquote{margin:0}
.markdown-body ol,.markdown-body ul{padding-left:2em}
.markdown-body ol ol,.markdown-body ul ol{list-style-type:lower-roman}
.markdown-body ol ol,.markdown-body ol ul,.markdown-body ul ol,.markdown-body ul ul{margin-top:0;margin-bottom:0}
.markdown-body ol ol ol,.markdown-body ol ul ol,.markdown-body ul ol ol,.markdown-body ul ul ol{list-style-type:lower-alpha}
.markdown-body li>p{margin-top:16px}
.markdown-body li+li{margin-top:.25em}
.markdown-body dd{margin-left:0}
.markdown-body dl{padding:0}
.markdown-body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:600}
.markdown-body dl dd{padding:0 16px;margin-bottom:16px}
.markdown-body code{font-family:SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace}
.markdown-body pre{font:12px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;word-wrap:normal}
.markdown-body blockquote,.markdown-body dl,.markdown-body ol,.markdown-body p,.markdown-body pre,.markdown-body table,.markdown-body ul{margin-top:0;margin-bottom:16px}
.markdown-body blockquote{padding:0 1em;color:#6a737d;border-left:.25em solid #dfe2e5}
.markdown-body blockquote>:first-child{margin-top:0}
.markdown-body blockquote>:last-child{margin-bottom:0}
.markdown-body table{display:block;width:100%;overflow:auto;border-spacing:0;border-collapse:collapse}
.markdown-body table th{font-weight:600}
.markdown-body table td,.markdown-body table th{padding:6px 13px;border:1px solid #dfe2e5}
.markdown-body table tr{background-color:#fff;border-top:1px solid #c6cbd1}
.markdown-body table tr:nth-child(2n){background-color:#f6f8fa}
.markdown-body img{max-width:100%;box-sizing:content-box;background-color:#fff}
.markdown-body code{padding:.2em 0;margin:0;font-size:85%;background-color:rgba(27,31,35,.05);border-radius:3px}
.markdown-body code::after,.markdown-body code::before{letter-spacing:-.2em;content:"\00a0"}
.markdown-body pre>code{padding:0;margin:0;font-size:100%;word-break:normal;white-space:pre;background:0 0;border:0}
.markdown-body .highlight{margin-bottom:16px}
.markdown-body .highlight pre{margin-bottom:0;word-break:normal}
.markdown-body .highlight pre,.markdown-body pre{padding:16px;overflow:auto;font-size:85%;line-height:1.45;background-color:#f6f8fa;border-radius:3px}
.markdown-body pre code{display:inline;max-width:auto;padding:0;margin:0;overflow:visible;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}
.markdown-body pre code::after,.markdown-body pre code::before{content:normal}
.markdown-body .full-commit .btn-outline:not(:disabled):hover{color:#005cc5;border-color:#005cc5}
.markdown-body kbd{box-shadow:inset 0 -1px 0 #959da5;display:inline-block;padding:3px 5px;font:11px/10px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;color:#444d56;vertical-align:middle;background-color:#fcfcfc;border:1px solid #c6cbd1;border-bottom-color:#959da5;border-radius:3px;box-shadow:inset 0 -1px 0 #959da5}
.markdown-body :checked+.radio-label{position:relative;z-index:1;border-color:#0366d6}
.markdown-body .task-list-item{list-style-type:none}
.markdown-body .task-list-item+.task-list-item{margin-top:3px}
.markdown-body .task-list-item input{margin:0 .2em .25em -1.6em;vertical-align:middle}
.markdown-body::before{display:table;content:""}
.markdown-body::after{display:table;clear:both;content:""}
.markdown-body>:first-child{margin-top:0!important}
.markdown-body>:last-child{margin-bottom:0!important}
.Alert,.Error,.Note,.Success,.Warning,.Tips,.HTips{padding:11px;margin-bottom:24px;border-style:solid;border-width:1px;border-radius:4px}
.Alert p,.Error p,.Note p,.Success p,.Warning p,.Tips p,.HTips p{margin-top:0}
.Alert p:last-child,.Error p:last-child,.Note p:last-child,.Success p:last-child,.Warning p:last-child,.Tips p:last-child,.HTips p:last-child{margin-bottom:0}
.Alert{color:#246;background-color:#e2eef9;border-color:#bac6d3}
.Warning{color:#4c4a42;background-color:#fff9ea;border-color:#dfd8c2}
.Error{color:#911;background-color:#fcdede;border-color:#d2b2b2}
.Success{color:#22662c;background-color:#e2f9e5;border-color:#bad3be}
.Note{color:#2f363d;background-color:#f6f8fa;border-color:#d5d8da}
.Alert h1,.Alert h2,.Alert h3,.Alert h4,.Alert h5,.Alert h6{color:#246;margin-bottom:0}
.Warning h1,.Warning h2,.Warning h3,.Warning h4,.Warning h5,.Warning h6{color:#4c4a42;margin-bottom:0}
.Error h1,.Error h2,.Error h3,.Error h4,.Error h5,.Error h6{color:#911;margin-bottom:0}
.Success h1,.Success h2,.Success h3,.Success h4,.Success h5,.Success h6{color:#22662c;margin-bottom:0}
.Note h1,.Note h2,.Note h3,.Note h4,.Note h5,.Note h6{color:#2f363d;margin-bottom:0}
.Tips h1,.Tips h2,.Tips h3,.Tips h4,.Tips h5,.Tips h6{color:#2f363d;margin-bottom:0}
.HTips h1,.HTips h2,.HTips h3,.HTips h4,.HTips h5,.HTips h6{color:#2f363d;margin-bottom:0}
.Tips h1:first-child,.Tips h2:first-child,.Tips h3:first-child,.Tips h4:first-child,.Tips h5:first-child,.Tips h6:first-child,.Alert h1:first-child,.Alert h2:first-child,.Alert h3:first-child,.Alert h4:first-child,.Alert h5:first-child,.Alert h6:first-child,.Error h1:first-child,.Error h2:first-child,.Error h3:first-child,.Error h4:first-child,.Error h5:first-child,.Error h6:first-child,.Note h1:first-child,.Note h2:first-child,.Note h3:first-child,.Note h4:first-child,.Note h5:first-child,.Note h6:first-child,.Success h1:first-child,.Success h2:first-child,.Success h3:first-child,.Success h4:first-child,.Success h5:first-child,.Success h6:first-child,.Warning h1:first-child,.Warning h2:first-child,.Warning h3:first-child,.Warning h4:first-child,.Warning h5:first-child,.Warning h6:first-child{margin-top:0}
h1.title,p.subtitle{text-align:center}
h1.title.followed-by-subtitle{margin-bottom:0}
p.subtitle{font-size:1.5em;font-weight:600;line-height:1.25;margin-top:0;margin-bottom:16px;padding-bottom:.3em}
div.line-block{white-space:pre-line}
  </style>
  <style type="text/css">code{white-space: pre;}</style>
  <style type="text/css">
	pre > code.sourceCode { white-space: pre; position: relative; }
 pre > code.sourceCode > span { display: inline-block; line-height: 1.25; }
 pre > code.sourceCode > span:empty { height: 1.2em; }
 .sourceCode { overflow: visible; }
 code.sourceCode > span { color: inherit; text-decoration: inherit; }
 div.sourceCode { margin: 1em 0; }
 pre.sourceCode { margin: 0; }
 @media screen {
 div.sourceCode { overflow: auto; }
 }
 @media print {
 pre > code.sourceCode { white-space: pre-wrap; }
 pre > code.sourceCode > span { text-indent: -5em; padding-left: 5em; }
 }
 pre.numberSource code
   { counter-reset: source-line 0; }
 pre.numberSource code > span
   { position: relative; left: -4em; counter-increment: source-line; }
 pre.numberSource code > span > a:first-child::before
   { content: counter(source-line);
     position: relative; left: -1em; text-align: right; vertical-align: baseline;
     border: none; display: inline-block;
     -webkit-touch-callout: none; -webkit-user-select: none;
     -khtml-user-select: none; -moz-user-select: none;
     -ms-user-select: none; user-select: none;
     padding: 0 4px; width: 4em;
     background-color: #ffffff;
     color: #a0a0a0;
   }
 pre.numberSource { margin-left: 3em; border-left: 1px solid #a0a0a0;  padding-left: 4px; }
 div.sourceCode
   { color: #1f1c1b; background-color: #ffffff; }
 @media screen {
 pre > code.sourceCode > span > a:first-child::before { text-decoration: underline; }
 }
 code span { color: #1f1c1b; } /* Normal */
 code span.al { color: #bf0303; background-color: #f7e6e6; font-weight: bold; } /* Alert */
 code span.an { color: #ca60ca; } /* Annotation */
 code span.at { color: #0057ae; } /* Attribute */
 code span.bn { color: #b08000; } /* BaseN */
 code span.bu { color: #644a9b; font-weight: bold; } /* BuiltIn */
 code span.cf { color: #1f1c1b; font-weight: bold; } /* ControlFlow */
 code span.ch { color: #924c9d; } /* Char */
 code span.cn { color: #aa5500; } /* Constant */
 code span.co { color: #898887; } /* Comment */
 code span.cv { color: #0095ff; } /* CommentVar */
 code span.do { color: #607880; } /* Documentation */
 code span.dt { color: #0057ae; } /* DataType */
 code span.dv { color: #b08000; } /* DecVal */
 code span.er { color: #bf0303; text-decoration: underline; } /* Error */
 code span.ex { color: #0095ff; font-weight: bold; } /* Extension */
 code span.fl { color: #b08000; } /* Float */
 code span.fu { color: #644a9b; } /* Function */
 code span.im { color: #ff5500; } /* Import */
 code span.in { color: #b08000; } /* Information */
 code span.kw { color: #1f1c1b; font-weight: bold; } /* Keyword */
 code span.op { color: #1f1c1b; } /* Operator */
 code span.ot { color: #006e28; } /* Other */
 code span.pp { color: #006e28; } /* Preprocessor */
 code span.re { color: #0057ae; background-color: #e0e9f8; } /* RegionMarker */
 code span.sc { color: #3daee9; } /* SpecialChar */
 code span.ss { color: #ff5500; } /* SpecialString */
 code span.st { color: #bf0303; } /* String */
 code span.va { color: #0057ae; } /* Variable */
 code span.vs { color: #bf0303; } /* VerbatimString */
 code span.wa { color: #bf0303; } /* Warning */
  </style>
  <link rel="stylesheet" href="data:text/css,%3Aroot%20%7B%2D%2Dmain%2Ddarkblue%2Dcolor%3A%20rgb%283%2C35%2C75%29%3B%20%2D%2Dmain%2Dlightblue%2Dcolor%3A%20rgb%2860%2C180%2C230%29%3B%20%2D%2Dmain%2Dpink%2Dcolor%3A%20rgb%28230%2C0%2C126%29%3B%20%2D%2Dmain%2Dyellow%2Dcolor%3A%20rgb%28255%2C210%2C0%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%3A%20rgb%2870%2C70%2C80%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%2D25%3A%20rgb%28209%2C209%2C211%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%3A%20rgb%28233%2C233%2C234%29%3B%20%2D%2Dsecondary%2Dlightgreen%2Dcolor%3A%20rgb%2873%2C177%2C112%29%3B%20%2D%2Dsecondary%2Dpurple%2Dcolor%3A%20rgb%28140%2C0%2C120%29%3B%20%2D%2Dsecondary%2Ddarkgreen%2Dcolor%3A%20rgb%284%2C87%2C47%29%3B%20%2D%2Dsidenav%2Dfont%2Dsize%3A%2090%25%3B%7Dhtml%20%7Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3B%7D%2A%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3B%7D%2Est%5Fheader%20h1%2Etitle%2C%2Est%5Fheader%20p%2Esubtitle%20%7Btext%2Dalign%3A%20left%3B%7D%2Est%5Fheader%20h1%2Etitle%20%7Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bmargin%2Dbottom%3A5px%3B%7D%2Est%5Fheader%20p%2Esubtitle%20%7Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dsize%3A90%25%3B%7D%2Est%5Fheader%20h1%2Etitle%2Efollowed%2Dby%2Dsubtitle%20%7Bborder%2Dbottom%3A2px%20solid%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bmargin%2Dbottom%3A5px%3B%7D%2Est%5Fheader%20p%2Erevision%20%7Bdisplay%3A%20inline%2Dblock%3Bwidth%3A70%25%3B%7D%2Est%5Fheader%20div%2Eauthor%20%7Bfont%2Dstyle%3A%20italic%3B%7D%2Est%5Fheader%20div%2Esummary%20%7Bborder%2Dtop%3A%20solid%201px%20%23C0C0C0%3Bbackground%3A%20%23ECECEC%3Bpadding%3A%205px%3B%7D%2Est%5Ffooter%20%7Bfont%2Dsize%3A80%25%3B%7D%2Est%5Ffooter%20img%20%7Bfloat%3A%20right%3B%7D%2Est%5Ffooter%20%2Est%5Fnotice%20%7Bwidth%3A80%25%3B%7D%2Emarkdown%2Dbody%20%23header%2Dsection%2Dnumber%20%7Bfont%2Dsize%3A120%25%3B%7D%2Emarkdown%2Dbody%20h1%20%7Bborder%2Dbottom%3A1px%20solid%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%2
9%3Bpadding%2Dbottom%3A%202px%3Bpadding%2Dtop%3A%2010px%3B%7D%2Emarkdown%2Dbody%20h2%20%7Bpadding%2Dbottom%3A%205px%3Bpadding%2Dtop%3A%2010px%3B%7D%2Emarkdown%2Dbody%20h2%20code%20%7Bbackground%2Dcolor%3A%20rgb%28255%2C%20255%2C%20255%29%3B%7D%23func%2EsourceCode%20%7Bborder%2Dleft%2Dstyle%3A%20solid%3Bborder%2Dcolor%3A%20rgb%280%2C%2032%2C%2082%29%3Bborder%2Dcolor%3A%20rgb%28255%2C%20244%2C%20191%29%3Bborder%2Dwidth%3A%208px%3Bpadding%3A0px%3B%7Dpre%20%3E%20code%20%7Bborder%3A%20solid%201px%20blue%3Bfont%2Dsize%3A60%25%3B%7DcodeXX%20%7Bborder%3A%20solid%201px%20blue%3Bfont%2Dsize%3A60%25%3B%7D%23func%2EsourceXXCode%3A%3Abefore%20%7Bcontent%3A%20%22Synopsis%22%3Bpadding%2Dleft%3A10px%3Bfont%2Dweight%3A%20bold%3B%7Dfigure%20%7Bpadding%3A0px%3Bmargin%2Dleft%3A5px%3Bmargin%2Dright%3A5px%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3B%7Dimg%5Bdata%2Dproperty%3D%22center%22%5D%20%7Bdisplay%3A%20block%3Bmargin%2Dtop%3A%2010px%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bpadding%3A%2010px%3B%7Dfigcaption%20%7Btext%2Dalign%3Aleft%3B%20%20border%2Dtop%3A%201px%20dotted%20%23888%3Bpadding%2Dbottom%3A%2020px%3Bmargin%2Dtop%3A%2010px%3B%7Dh1%20code%2C%20h2%20code%20%7Bfont%2Dsize%3A120%25%3B%7D%09%2Emarkdown%2Dbody%20table%20%7Bwidth%3A%20100%25%3Bmargin%2Dleft%3Aauto%3Bmargin%2Dright%3Aauto%3B%7D%2Emarkdown%2Dbody%20img%20%7Bborder%2Dradius%3A%204px%3Bpadding%3A%205px%3Bdisplay%3A%20block%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bwidth%3A%20auto%3B%7D%2Emarkdown%2Dbody%20%2Est%5Fheader%20img%2C%20%2Emarkdown%2Dbody%20%7Bborder%3A%20none%3Bborder%2Dradius%3A%20none%3Bpadding%3A%205px%3Bdisplay%3A%20block%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bwidth%3A%20auto%3Bbox%2Dshadow%3A%20none%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2010px%3Bwidth%3A%20auto%3Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3Bcolor%3A%20%2303234B%3Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%7D%2Emarkdown%2Dbody%20h1%2C%20%2Emarkdown%
2Dbody%20h2%2C%20%2Emarkdown%2Dbody%20h3%20%7B%20%20%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%7D%2Emarkdown%2Dbody%3Ahover%20%7B%7D%2Emarkdown%2Dbody%20%2Econtents%20%7B%7D%2Emarkdown%2Dbody%20%2Etoc%2Dtitle%20%7B%7D%2Emarkdown%2Dbody%20%2Econtents%20li%20%7Blist%2Dstyle%2Dtype%3A%20none%3B%7D%2Emarkdown%2Dbody%20%2Econtents%20ul%20%7Bpadding%2Dleft%3A%2010px%3B%7D%2Emarkdown%2Dbody%20%2Econtents%20a%20%7Bcolor%3A%20%233CB4E6%3B%20%7D%2Emarkdown%2Dbody%20table%20%2Eheader%20%7Bbackground%2Dcolor%3A%20var%28%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%29%3Bborder%2Dbottom%3A1px%20solid%3Bborder%2Dtop%3A1px%20solid%3Bfont%2Dsize%3A%2090%25%3B%7D%2Emarkdown%2Dbody%20table%20th%20%7Bfont%2Dweight%3A%20bolder%3B%20%7D%2Emarkdown%2Dbody%20table%20td%20%7Bfont%2Dsize%3A%2090%25%3B%7D%2Emarkdown%2Dbody%20code%7Bpadding%3A%200%3Bmargin%3A0%3Bfont%2Dsize%3A95%25%3Bbackground%2Dcolor%3Argba%2827%2C31%2C35%2C%2E05%29%3Bborder%2Dradius%3A1px%3B%7D%2ETips%20%7Bpadding%3A11px%3Bmargin%2Dbottom%3A24px%3Bborder%2Dstyle%3Asolid%3Bborder%2Dwidth%3A1px%3Bborder%2Dradius%3A1px%7D%2ETips%20%7Bcolor%3A%232f363d%3B%20background%2Dcolor%3A%20%23f6f8fa%3Bborder%2Dcolor%3A%23d5d8da%3Bborder%2Dtop%3A1px%20solid%3Bborder%2Dbottom%3A1px%20solid%3B%7D%2EHTips%20%7Bpadding%3A11px%3Bmargin%2Dbottom%3A24px%3Bborder%2Dstyle%3Asolid%3Bborder%2Dwidth%3A1px%3Bborder%2Dradius%3A1px%7D%2EHTips%20%7Bcolor%3A%232f363d%3B%20background%2Dcolor%3A%23fff9ea%3Bborder%2Dcolor%3A%23d5d8da%3Bborder%2Dtop%3A1px%20solid%3Bborder%2Dbottom%3A1px%20solid%3B%7D%2EHTips%20h1%2C%2EHTips%20h2%2C%2EHTips%20h3%2C%2EHTips%20h4%2C%2EHTips%20h5%2C%2EHTips%20h6%20%7Bcolor%3A%232f363d%3Bmargin%2Dbottom%3A0%7D%2Esidenav%20%7Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3B%20%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bheight%3A%20100%25%3Bposition%3A%20fixed%3Bz%2Dindex%3A%201%3Btop%3A%200%3Bleft%3A%200%3Bmargin%2Dright%3A%2010px%3Bmargin%2Dleft%3A%2010px%3B%20overflow%2Dx%3A%20hidden%3B%7D%2Esidenav%20hr%2Enew1%20%
7Bborder%2Dwidth%3A%20thin%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Dlightblue%2Dcolor%29%3Bmargin%2Dright%3A%2010px%3Bmargin%2Dtop%3A%20%2D10px%3B%7D%2Esidenav%20%23sidenav%5Fheader%20%7Bmargin%2Dtop%3A%2010px%3Bborder%3A%201px%3Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Dlightblue%2Dcolor%29%3B%7D%2Esidenav%20%23sidenav%5Fheader%20img%20%7Bfloat%3A%20left%3B%7D%2Esidenav%20%23sidenav%5Fheader%20a%20%7Bmargin%2Dleft%3A%200px%3Bmargin%2Dright%3A%200px%3Bpadding%2Dleft%3A%200px%3B%7D%2Esidenav%20%23sidenav%5Fheader%20a%3Ahover%20%7Bbackground%2Dsize%3A%20auto%3Bcolor%3A%20%23FFD200%3B%20%7D%2Esidenav%20%23sidenav%5Fheader%20a%3Aactive%20%7B%20%20%7D%2Esidenav%20%3E%20ul%20%7Bbackground%2Dcolor%3A%20rgba%2857%2C%20169%2C%20220%2C%200%2E05%29%3B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bborder%2Dradius%3A%2010px%3Bpadding%2Dbottom%3A%2010px%3Bpadding%2Dtop%3A%2010px%3Bpadding%2Dright%3A%2010px%3Bmargin%2Dright%3A%2010px%3B%7D%2Esidenav%20a%20%7Bpadding%3A%202px%202px%3Btext%2Ddecoration%3A%20none%3Bfont%2Dsize%3A%20var%28%2D%2Dsidenav%2Dfont%2Dsize%29%3Bdisplay%3Atable%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%2C%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%7B%20padding%2Dright%3A%205px%3Bpadding%2Dleft%3A%205px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dweight%3A%20lighter%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dsize%3A%2080%25%3Bpadding%2Dleft%3A%2010px%3Btext%2Dalign%2Dlast%3A%20left%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20display%3A%20None%3B%7D%2Esidenav%20li%20%7Blist%2Dstyle%2Dtype%3A%20none%3B%7D%2Esidenav%20ul%20%7Bpadding%2Dleft%3A%200px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%2C%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20
%7Bbackground%2Dcolor%3A%20var%28%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%29%3Bbackground%2Dclip%3A%20border%2Dbox%3Bmargin%2Dleft%3A%20%2D10px%3Bpadding%2Dleft%3A%2010px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bpadding%2Dright%3A%2015px%3Bwidth%3A%20230px%3B%09%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bpadding%2Dright%3A%2010px%3Bwidth%3A%20230px%3B%09%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Aactive%20%7B%20color%3A%20%23FFD200%3B%20%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Aactive%20%7B%20color%3A%20%23FFD200%3B%20%7D%2Esidenav%20code%20%7B%7D%2Esidenav%20%7Bwidth%3A%20280px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%20300px%3Bdisplay%3Ablock%3B%7D%2Emarkdown%2Dbody%20%2Eprint%2Dcontents%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%2Eprint%2Dtoc%2Dtitle%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmax%2Dwidth%3A%20980px%3Bmin%2Dwidth%3A%20200px%3Bpadding%3A%2040px%3Bborder%2Dstyle%3A%20solid%3Bborder%2Dstyle%3A%20outset%3Bborder%2Dcolor%3A%20rgba%28104%2C%20167%2C%20238%2C%200%2E089%29%3Bborder%2Dradius%3A%205px%3B%7D%40media%20screen%20and%20%28max%2Dheight%3A%20450px%29%20%7B%2Esidenav%20%7Bpadding%2Dtop%3A%2015px%3B%7D%2Esidenav%20a%20%7Bfont%2Dsize%3A%2018px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%20%7D%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2040px%3Bwidth%3A%20auto%3Bborder%3A%200px%3B%7D%7D%40media%20screen%20and%20%28max%2Dwidth%3A%201024px%29%20%7B%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2040px%3Bwidth%3A%20auto%3Bborder%3A%200px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%7D%7D%40media%20print%20%7B%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2010px%3Bwidth%3Aauto%3Bborder%3A%200px%3B%7D%40page%20%7Bsize%3A%20A4%3B%20%20margin%3A2cm%3Bpadding%3A2cm
%3Bmargin%2Dtop%3A%201cm%3Bpadding%2Dbottom%3A%201cm%3B%7D%2A%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3Bfont%2Dsize%3A90%25%3B%7Da%20%7Bfont%2Dsize%3A%20100%25%3Bcolor%3A%20yellow%3B%7D%2Emarkdown%2Dbody%20article%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3Bfont%2Dsize%3A100%25%3B%7D%2Emarkdown%2Dbody%20p%20%7Bwindows%3A%202%3Borphans%3A%202%3B%7D%2Epagebreakerafter%20%7Bpage%2Dbreak%2Dafter%3A%20always%3Bpadding%2Dtop%3A10mm%3B%7D%2Epagebreakbefore%20%7Bpage%2Dbreak%2Dbefore%3A%20always%3B%7Dh1%2C%20h2%2C%20h3%2C%20h4%20%7Bpage%2Dbreak%2Dafter%3A%20avoid%3B%7Ddiv%2C%20code%2C%20blockquote%2C%20li%2C%20span%2C%20table%2C%20figure%20%7Bpage%2Dbreak%2Dinside%3A%20avoid%3B%7D%7D">
  <!--[if lt IE 9]>
    <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
  <![endif]-->





<link rel="shortcut icon" href="">

</head>



<body>

		<div class="sidenav">
		<div id="sidenav_header">
							<img src="" title="STM32CubeMX.AI logo" align="left" height="70" />
										<br />7.0.0-dev<br />
										<a href="#doc_title"> FAQs </a>
					</div>
		<div id="sidenav_header_button">
			 
							<ul>
					<li><p><a id="index" href="index.html">[ Index ]</a></p></li>
				</ul>
						<hr class="new1">
		</div>	

		<ul>
  <li><a href="#general">General</a>
  <ul>
  <li><a href="#ref_python_ver">How can I know which versions of the deep-learning framework components are used?</a></li>
  <li><a href="#how-is-used-the-cmsis-nn-library">How is the CMSIS-NN library used?</a></li>
  <li><a href="#what-is-the-eabi-used-for-the-network_runtime-libraries">What is the EABI used for the <em>network_runtime</em> libraries?</a></li>
  <li><a href="#x-cube-ai-python-api-availability">X-CUBE-AI Python API availability?</a></li>
  <li><a href="#tensorflow-keras-tf.keras-vs-keras.io">TensorFlow Keras (tf.keras) vs Keras.io</a></li>
  <li><a href="#it-is-possible-to-update-a-model-on-the-firmware-wo-having-to-do-a-full-firmware-update">Is it possible to update a model in the firmware without a full firmware update?</a></li>
  <li><a href="#keras-model-or-sequential-layer-support">Keras Model or Sequential layer support?</a></li>
  <li><a href="#is-it-possible-to-split-the-weights-buffer">Is it possible to split the weights buffer?</a></li>
  <li><a href="#is-it-possible-to-place-the-activations-buffer-in-different-memory-segments">Is it possible to place the “activations” buffer in different memory segments?</a></li>
  <li><a href="#how-to-compress-the-non-densefully-connected-layers">How to compress the non-dense/fully-connected layers?</a></li>
  <li><a href="#is-it-possible-to-apply-a-compression-factor-different-of-x8-x4">Is it possible to apply a compression factor different from x8 or x4?</a></li>
  <li><a href="#how-to-specify-or-to-indicate-a-compression-factor-by-layer">How to specify a compression factor per layer?</a></li>
  <li><a href="#why-a-small-negative-ratio-is-reported-for-the-weights-size-with-a-model-wo-compression">Why is a small negative ratio reported for the weights size with a model w/o compression?</a></li>
  <li><a href="#is-it-possible-to-dumpcapture-the-intermediate-values-during-the-execution-of-the-inference">Is it possible to dump/capture the intermediate values during the execution of the inference?</a></li>
  </ul></li>
  <li><a href="#validation-process">Validation process</a>
  <ul>
  <li><a href="#how-to-validate-a-specific-network-when-multiple-networks-are-resident-into-the-same-firmware">How to validate a specific network when multiple networks are resident in the same firmware?</a></li>
  <li><a href="#stack_heap_size_issue">Reported STM32 results are incoherent</a></li>
  <li><a href="#unable-to-perform-automatic-validation-on-target">Unable to perform automatic validation on-target</a></li>
  <li><a href="#long-time-process-or-crash-with-a-large-test-data-set">Long processing time or crash with a large test data set</a></li>
  </ul></li>
  <li><a href="#quantization-and-post-training-quantization-process">Quantization and post-training quantization process</a>
  <ul>
  <li><a href="#backward-compatibility-with-x-cube-ai-4.0-and-x-cube-ai-4.1">Backward compatibility with X-CUBE-AI 4.0 and X-CUBE-AI 4.1</a></li>
  <li><a href="#is-it-possible-to-use-the-keras-post-training-quantization-process-through-the-ui">Is it possible to use the Keras post-training quantization process through the UI?</a></li>
  <li><a href="#is-it-possible-to-use-the-keras-post-training-quantization-process-with-a-non-classifier-model">Is it possible to use the Keras post-training quantization process with a non-classifier model?</a></li>
  <li><a href="#is-it-possible-to-use-the-compression-for-a-quantized-model">Is it possible to use the compression for a quantized model?</a></li>
  <li><a href="#how-to-apply-the-keras-post-training-quantization-process-on-a-non-keras-model">How to apply the Keras post-training quantization process on a non-Keras model?</a></li>
  <li><a href="#tensorflow-lite-optimize_for_size-option-support">TensorFlow Lite OPTIMIZE_FOR_SIZE option support</a></li>
  </ul></li>
  <li><a href="#references">References</a></li>
  </ul>
	</div>
	<article id="sidenav" class="markdown-body">
		



 


	<h1 class="toc-title">Contents</h1>
	<div class="contents">
	<ul>
 <li><a href="#general">General</a>
 <ul>
 <li><a href="#ref_python_ver">How can I know which versions of the deep-learning framework components are used?</a></li>
 <li><a href="#how-is-used-the-cmsis-nn-library">How is the CMSIS-NN library used?</a></li>
 <li><a href="#what-is-the-eabi-used-for-the-network_runtime-libraries">What is the EABI used for the <em>network_runtime</em> libraries?</a></li>
 <li><a href="#x-cube-ai-python-api-availability">X-CUBE-AI Python API availability?</a></li>
 <li><a href="#tensorflow-keras-tf.keras-vs-keras.io">TensorFlow Keras (tf.keras) vs Keras.io</a></li>
 <li><a href="#it-is-possible-to-update-a-model-on-the-firmware-wo-having-to-do-a-full-firmware-update">Is it possible to update a model in the firmware without a full firmware update?</a></li>
 <li><a href="#keras-model-or-sequential-layer-support">Keras Model or Sequential layer support?</a></li>
 <li><a href="#is-it-possible-to-split-the-weights-buffer">Is it possible to split the weights buffer?</a></li>
 <li><a href="#is-it-possible-to-place-the-activations-buffer-in-different-memory-segments">Is it possible to place the “activations” buffer in different memory segments?</a></li>
 <li><a href="#how-to-compress-the-non-densefully-connected-layers">How to compress the non-dense/fully-connected layers?</a></li>
 <li><a href="#is-it-possible-to-apply-a-compression-factor-different-of-x8-x4">Is it possible to apply a compression factor different from x8 or x4?</a></li>
 <li><a href="#how-to-specify-or-to-indicate-a-compression-factor-by-layer">How to specify a compression factor per layer?</a></li>
 <li><a href="#why-a-small-negative-ratio-is-reported-for-the-weights-size-with-a-model-wo-compression">Why is a small negative ratio reported for the weights size with a model w/o compression?</a></li>
 <li><a href="#is-it-possible-to-dumpcapture-the-intermediate-values-during-the-execution-of-the-inference">Is it possible to dump/capture the intermediate values during the execution of the inference?</a></li>
 </ul></li>
 <li><a href="#validation-process">Validation process</a>
 <ul>
 <li><a href="#how-to-validate-a-specific-network-when-multiple-networks-are-resident-into-the-same-firmware">How to validate a specific network when multiple networks are resident in the same firmware?</a></li>
 <li><a href="#stack_heap_size_issue">Reported STM32 results are incoherent</a></li>
 <li><a href="#unable-to-perform-automatic-validation-on-target">Unable to perform automatic validation on-target</a></li>
 <li><a href="#long-time-process-or-crash-with-a-large-test-data-set">Long processing time or crash with a large test data set</a></li>
 </ul></li>
 <li><a href="#quantization-and-post-training-quantization-process">Quantization and post-training quantization process</a>
 <ul>
 <li><a href="#backward-compatibility-with-x-cube-ai-4.0-and-x-cube-ai-4.1">Backward compatibility with X-CUBE-AI 4.0 and X-CUBE-AI 4.1</a></li>
 <li><a href="#is-it-possible-to-use-the-keras-post-training-quantization-process-through-the-ui">Is it possible to use the Keras post-training quantization process through the UI?</a></li>
 <li><a href="#is-it-possible-to-use-the-keras-post-training-quantization-process-with-a-non-classifier-model">Is it possible to use the Keras post-training quantization process with a non-classifier model?</a></li>
 <li><a href="#is-it-possible-to-use-the-compression-for-a-quantized-model">Is it possible to use the compression for a quantized model?</a></li>
 <li><a href="#how-to-apply-the-keras-post-training-quantization-process-on-a-non-keras-model">How to apply the Keras post-training quantization process on a non-Keras model?</a></li>
 <li><a href="#tensorflow-lite-optimize_for_size-option-support">TensorFlow lite, OPTIMIZE_FOR_SIZE option support</a></li>
 </ul></li>
 <li><a href="#references">References</a></li>
 </ul>
	</div>




<section id="general" class="level1">
<h1>General</h1>
<section id="ref_python_ver" class="level2">
<h2>How to know the version of the deep-learning framework components which are used?</h2>
<p>The X-CUBE-AI Expansion Package is a complete, self-contained application package; no external tool is required to use it. To know the versions of the main embedded components, the following command can be used:</p>
<pre class="dosbatch"><code>&gt;  stm32ai --tools_version
Neural Network Tools for STM32AI v1.5.1 (STM.ai v7.0.0)
- Python version   : 3.7.9
- Numpy version    : 1.19.5
- TF version       : 2.5.0
- TF Keras version : 2.5.0
- ONNX version     : 1.6.0
- ONNX RT version  : 1.7.0
</code></pre>
<div class="Warning">
<p><strong>Note</strong> — Be aware that the respective Python deep-learning modules are used both to import/parse the original model and to run it. For the supported layers/operators, refer to the descriptions in <a href="supported_ops_tflite.html">[TFLITE]</a>, <a href="supported_ops_keras.html">[KERAS]</a> and <a href="supported_ops_onnx.html">[ONNX]</a>.</p>
</div>
</section>
<section id="how-is-used-the-cmsis-nn-library" class="level2">
<h2>How is the CMSIS-NN library used?</h2>
<p>The network runtime library is partially based on the CMSIS library. CMSIS is a vendor-independent hardware abstraction layer for micro-controllers that are based on Arm® Cortex® processors (<a href="https://arm-software.github.io/CMSIS_5/General/html/index.html">https://arm-software.github.io/CMSIS_5/General/html/index.html</a>). The CMSIS-NN sub-component and a minor part of the CMSIS-DSP sub-component are embedded in the <em>network_runtime.a</em> library to ensure that the required files are compiled with the optimal options.</p>
<table>
<colgroup>
<col style="width: 39%" />
<col style="width: 60%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">component</th>
<th style="text-align: left;">version</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">CMSIS package</td>
<td style="text-align: left;">r5.7.0</td>
</tr>
<tr class="even">
<td style="text-align: left;">CMSIS-Core(M)</td>
<td style="text-align: left;">V5.4.0</td>
</tr>
<tr class="odd">
<td style="text-align: left;">CMSIS-DSP</td>
<td style="text-align: left;">V1.8.0</td>
</tr>
<tr class="even">
<td style="text-align: left;">CMSIS-NN</td>
<td style="text-align: left;">V1.3.0</td>
</tr>
</tbody>
</table>
<p>A part of the forward kernel functions is <em>directly</em> mapped onto CMSIS-NN (legacy support of the Qmn format) when available, but the major part consists of optimized implementations (integer and float formats) based only on the CMSIS type definitions, for portability across the STM32 families.</p>
</section>
<section id="what-is-the-eabi-used-for-the-network_runtime-libraries" class="level2">
<h2>What is the EABI used for the <em>network_runtime</em> libraries?</h2>
<p>For performance reasons, the provided libraries (<code>network_runtime.a</code>) are compiled with the <em>hard</em> floating-point ABI option, which allows the generation of floating-point instructions and uses FPU-specific calling conventions. This implies that the final end-user project must also be compiled and linked with this option.</p>
<div class="Warning">
<p><strong>Note</strong> — An abnormal situation is normally detected at link time, but there is a specific case where the firmware is generated w/o errors and the run-time results are <em>UNPREDICTABLE</em>. For example, in a GCC-based environment, <code>-mfloat-abi=soft</code> or <code>-mfloat-abi=softfp</code> can be set by default for the whole project in order to use another soft-ABI binary library. No link error is raised in this case because the library has been generated with another EABI-compliant tool-chain (IAR for Arm tool-chain).</p>
</div>
<blockquote>
<p>If a specific binary library version is required, contact your local ST support.</p>
</blockquote>
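<p>As an illustration, a hedged sketch of the compiler invocation for a hard-float project is shown below. The target options (here for a hypothetical Cortex®-M7 device) and file names are examples only; the key point is that <code>-mfloat-abi=hard</code> must be used consistently across the whole project and the library.</p>

```shell
# Example flags for a Cortex-M7 (e.g. STM32H7) GCC project: the whole
# project must be built with the same hard-float ABI as the library.
# Device, FPU variant and file names below are illustrative.
arm-none-eabi-gcc -mcpu=cortex-m7 -mthumb \
    -mfloat-abi=hard -mfpu=fpv5-d16 \
    -c network_app.c -o network_app.o
```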
</section>
<section id="x-cube-ai-python-api-availability" class="level2">
<h2>X-CUBE-AI Python API availability?</h2>
<p>No Python API is exposed. X-CUBE-AI is only available through a UI (fully integrated in STM32CubeMX) and through a command-line interface.</p>
</section>
<section id="tensorflow-keras-tf.keras-vs-keras.io" class="level2">
<h2>TensorFlow Keras (tf.keras) vs Keras.io</h2>
<p>The X-CUBE-AI 6.0 pack embeds only TensorFlow 2.3.1; the Keras.io module is no longer used. <code>&#39;tf.keras 2.4.0&#39;</code> is now used to import and validate the Keras models, and the previous <code>&#39;TF_KERAS=False&#39;</code> environment variable can no longer be used to select a Keras.io back-end. This implies that a Keras model with an NCHW tensor format (channel-first) can no longer be validated: code can still be generated, but the validation flow that tests the original model outputs against the outputs of the generated c-model can no longer be used. The following type of error message is generated during the execution of the provided model:</p>
<pre class="dosbatch"><code>...
INTERNAL ERROR: The Conv2D op currently only supports the NHWC tensor format
                on the CPU. The op was given the format: NCHW [Op:Conv2D]
...</code></pre>
<p>To validate the generated c-model in this case, a test data set must be provided. The <code>&#39;--no-exec-model&#39;</code> option can be used to avoid executing the imported model.</p>
<pre class="dosbatch"><code>&gt; stm32ai validate -m &lt;keras_model_with_NCHW&gt;.h5 -vi test.npz --no-exec-model
...

Evaluation report (summary)
------------------------------------------------------------------------------------------------------
Mode              acc      rmse      mae       l2r       tensor
------------------------------------------------------------------------------------------------------
x86 C-model #1    100.00%  0.000000  0.000000  0.000000  activation_6 [ai_float, (1, 1, 2), m_id=20]
</code></pre>
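<p>If no test data set is at hand, a minimal one can be created with Numpy. The sketch below is illustrative only: the tensor shapes and the <code>x_test</code>/<code>y_test</code> keys are assumptions to be adapted to the input/output tensors of the actual model (see <a href="command_line_interface.html">[CLI]</a> for the expected npz layout).</p>

```python
import numpy as np

# Hypothetical shapes: adapt them to the input/output tensors
# of your own model (as reported by the 'analyze' command).
np.random.seed(42)
x_test = np.random.rand(10, 49, 40, 1).astype(np.float32)
y_test = np.random.rand(10, 1, 1, 2).astype(np.float32)

# Save as a Numpy npz file usable with 'stm32ai validate -vi test.npz'
np.savez('test.npz', x_test=x_test, y_test=y_test)
```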
</section>
<section id="it-is-possible-to-update-a-model-on-the-firmware-wo-having-to-do-a-full-firmware-update" class="level2">
<h2>Is it possible to update a model in the firmware w/o having to do a full firmware update?</h2>
<p><strong>Yes</strong> - The <code>&#39;--binary&#39;</code> and <code>&#39;--relocatable&#39;</code> options of the <code>&#39;generate&#39;</code> command make it possible to implement a simple or complete mechanism to upgrade a whole generated c-model w/o having to do a full firmware update (refer to <a href="command_line_interface.html">[CLI]</a> or <a href="relocatable.html">[RELOC]</a>, <em>&quot;Relocatable binary model support</em>&quot; article).</p>
</section>
<section id="keras-model-or-sequential-layer-support" class="level2">
<h2>Keras Model or Sequential layer support?</h2>
<p>Nested topologies are not supported. This can happen when the Keras functional API is used to build the network and a <code>&#39;Model&#39;</code> object is called as a layer.</p>
<p>A possible work-around is to convert the Keras model to a TensorFlow lite model, if all operators are supported.</p>
<div class="sourceCode" id="cb4"><pre class="sourceCode python"><code class="sourceCode python">model = tf.keras.models.load_model(&lt;keras_model_path&gt;)  # TF 2.x API
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()</code></pre></div>
</section>
<section id="is-it-possible-to-split-the-weights-buffer" class="level2">
<h2>Is it possible to split the weights buffer?</h2>
<p><strong>Yes,</strong> by weights/bias tensors, see the <code>&#39;--split-weights&#39;</code> option (refer to <a href="command_line_interface.html">[CLI]</a>).</p>
</section>
<section id="is-it-possible-to-place-the-activations-buffer-in-different-memory-segments" class="level2">
<h2>Is it possible to place the “activations” buffer in different memory segments?</h2>
<p><strong>No,</strong> only a continuous memory-mapped buffer should be provided by the AI application (refer to <a href="embedded_client_api.html#sec_data_placement">[API]</a>).</p>
</section>
<section id="how-to-compress-the-non-densefully-connected-layers" class="level2">
<h2>How to compress the non-dense/fully-connected layers?</h2>
<p>Only the 32b floating-point dense/fully-connected layer can be compressed.</p>
</section>
<section id="is-it-possible-to-apply-a-compression-factor-different-of-x8-x4" class="level2">
<h2>Is it possible to apply a compression factor different from x8 or x4?</h2>
<p><strong>No.</strong> The underlying weight-sharing algorithm (K-means clustering) is based on a dictionary with 16 (x8) or 256 (x4) entries. In the end, the global gain depends on the number of parameters versus the dictionary size. Note that for a given layer, the bias parameters are not necessarily compressed; compression is applied per tensor.</p>
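<p>To illustrate the principle (this is a simplified sketch, not the X-CUBE-AI implementation), weight sharing replaces each 32-bit float weight by a small index into a shared dictionary built by 1-D K-means: with 16 entries, each weight needs only a 4-bit index, hence the ~x8 factor.</p>

```python
import numpy as np

def compress_weights(w, n_bins=16):
    """Naive 1-D k-means weight sharing (illustrative only)."""
    flat = w.ravel().astype(np.float64)
    # initialize the dictionary uniformly over the weight range
    centroids = np.linspace(flat.min(), flat.max(), n_bins)
    for _ in range(20):
        # assign each weight to its nearest centroid
        idx = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        # update each non-empty centroid to the mean of its members
        for k in range(n_bins):
            if np.any(idx == k):
                centroids[k] = flat[idx == k].mean()
    # stored data = 4-bit indices + the 16-entry dictionary (~x8 smaller)
    return centroids.astype(np.float32), idx.reshape(w.shape)
```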
</section>
<section id="how-to-specify-or-to-indicate-a-compression-factor-by-layer" class="level2">
<h2>How to specify a compression factor by layer?</h2>
<p>By default, the compression process tries to apply the same global compression factor (<em>x4</em> or <em>x8</em>) to all dense or fully-connected layers. If the global accuracy is impacted too much, the user can refine the expected compression factor layer by layer.</p>
<p>A JSON file must be defined to indicate the compression factor (<em>8</em> or <em>4</em>) to apply to a given layer. The layer is specified by its original name.</p>
<p>Example of configuration file:</p>
<div class="sourceCode" id="cb5"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a><span class="fu">{</span></span>
<span id="cb5-2"><a href="#cb5-2" aria-hidden="true" tabindex="-1"></a>    <span class="dt">&quot;layers&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb5-3"><a href="#cb5-3" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;dense_1&quot;</span><span class="fu">:</span> <span class="fu">{</span><span class="dt">&quot;factor&quot;</span><span class="fu">:</span> <span class="dv">8</span><span class="fu">},</span></span>
<span id="cb5-4"><a href="#cb5-4" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;dense_2&quot;</span><span class="fu">:</span> <span class="fu">{</span><span class="dt">&quot;factor&quot;</span><span class="fu">:</span> <span class="dv">4</span><span class="fu">},</span></span>
<span id="cb5-5"><a href="#cb5-5" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;dense_3&quot;</span><span class="fu">:</span> <span class="fu">{</span><span class="dt">&quot;factor&quot;</span><span class="fu">:</span> <span class="dv">8</span><span class="fu">}</span></span>
<span id="cb5-6"><a href="#cb5-6" aria-hidden="true" tabindex="-1"></a>    <span class="fu">}</span></span>
<span id="cb5-7"><a href="#cb5-7" aria-hidden="true" tabindex="-1"></a><span class="fu">}</span></span></code></pre></div>
<p>The option <code>-c/--compress</code> can be used to pass the configuration file.</p>
<pre class="dosbatch"><code>$ stm32ai analyze -m &lt;model_file&gt; -c &lt;conf_file&gt;.json</code></pre>
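<p>Such a configuration file can also be written programmatically. A minimal Python sketch follows; the layer names (<code>dense_1</code>, <code>dense_2</code>) are examples and must match the original names in your own model.</p>

```python
import json

# Layer names are examples: use the original names from your model.
# The factor must be 8 (x8) or 4 (x4).
conf = {
    "layers": {
        "dense_1": {"factor": 8},
        "dense_2": {"factor": 4},
    }
}
with open("compress_conf.json", "w") as f:
    json.dump(conf, f, indent=4)
```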
</section>
<section id="why-a-small-negative-ratio-is-reported-for-the-weights-size-with-a-model-wo-compression" class="level2">
<h2>Why is a small negative ratio reported for the weights size with a model w/o compression?</h2>
<p>The X-CUBE-AI optimizer implements different merging engines, which allow, for example, folding a batch-normalization layer into the previous layer. In this case, the reported value takes these optimizations into account: the parameters of the batch-normalization layer are removed from the generated C-model.</p>
<pre class="dosbatch"><code>$ stm32ai analyze -m ds_cnn.h5
...
weights (ro)       : 159,536 (155.80 KiB) (-0.64%)
...</code></pre>
</section>
<section id="is-it-possible-to-dumpcapture-the-intermediate-values-during-the-execution-of-the-inference" class="level2">
<h2>Is it possible to dump/capture the intermediate values during the execution of the inference?</h2>
<p><strong>Yes</strong>, thanks to the Platform Observer API (refer to <a href="embedded_client_api.html#ref_observer_api">[API]</a>, <em>“Platform Observer API”</em> section).</p>
</section>
</section>
<section id="validation-process" class="level1">
<h1>Validation process</h1>
<section id="how-to-validate-a-specific-network-when-multiple-networks-are-resident-into-the-same-firmware" class="level2">
<h2>How to validate a specific network when multiple networks are resident in the same firmware?</h2>
<p>Inside a firmware generated with multiple networks, each generated c-model has, by construction, its own name. The “validate” command should be used with the argument <code>&#39;--name/-n &lt;c-name&gt;&#39;</code> to indicate the c-model to use; otherwise, the default name <em>“network”</em> is used.</p>
<pre class="dosbatch"><code>$ stm32ai validate -m original_model_path --mode stm32 -n net1_name </code></pre>
<p>After the connection phase, the list of the networks embedded in the firmware is reported.</p>
<pre class="dosbatch"><code>...
&lt;Stm32com id=0x28074647320 - CONNECTED(COM35/115200) devid=0x450/STM32H743/753 and STM32H750 msg=2.1&gt;
 0x450/STM32H743/753 and STM32H750 @480MHz/240MHz (FPU is present) lat=4 Core:I$/D$
 found network(s): [&#39;net1_name&#39;, &#39;net2_name&#39;,...]
... </code></pre>
<p>Note that after this step, a basic <em>checking</em> process ensures that the selected c-model is compliant with the original model. This <em>“signature”</em> is based on a set of values: TOOLS versions, MACC, RAM and ROM sizes. Do not forget to also pass the arguments (compression factor, allocate-inputs…) which were used to generate the c-model, otherwise the “signature” will be invalid.</p>
</section>
<section id="stack_heap_size_issue" class="level2">
<h2>Reported STM32 results are incoherent</h2>
<p>Compared to the metrics reported by a validation on desktop, the STM32 results (validation on target, automatic or not) are sometimes different or incoherent. This can happen because the defined stack or heap sizes are not sufficient for the generated project.</p>
<p>The work-around is to generate or modify an <code>aiValidation</code> project and to manually increase the defined stack and/or heap sizes before re-compiling and flashing the STM32 firmware.</p>
<p>GCC-based IDE project</p>
<div class="sourceCode" id="cb10"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb10-1"><a href="#cb10-1" aria-hidden="true" tabindex="-1"></a><span class="co">/* Linker file : STM32XXX.ld */</span></span>
<span id="cb10-2"><a href="#cb10-2" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb10-3"><a href="#cb10-3" aria-hidden="true" tabindex="-1"></a>_Min_Heap_Size <span class="op">=</span> <span class="bn">0x2000</span> <span class="op">;</span>   <span class="co">/* required amount of heap  */</span></span>
<span id="cb10-4"><a href="#cb10-4" aria-hidden="true" tabindex="-1"></a>_Min_Stack_Size <span class="op">=</span> <span class="bn">0x800</span> <span class="op">;</span>   <span class="co">/* required amount of stack */</span></span>
<span id="cb10-5"><a href="#cb10-5" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span></code></pre></div>
<p>IAR Embedded Workbench IDE project</p>
<div id="fig:id_nn_lib_integration" class="fignos">
<figure>
<img src="" property="center" style="width:40.0%" alt="Figure 1: Stack or heap size definition (IAR)" /><figcaption aria-hidden="true"><span>Figure 1:</span> Stack or heap size definition (IAR)</figcaption>
</figure>
</div>
</section>
<section id="unable-to-perform-automatic-validation-on-target" class="level2">
<h2>Unable to perform automatic validation on-target</h2>
<p>Several issues can interrupt the validation process.</p>
<section id="during-the-compilation-phase-of-the-temporary-project" class="level3 unnumbered">
<h3 class="unnumbered">1 - During the compilation phase of the temporary project</h3>
<p>The following typical link issue can appear (IAR Embedded Workbench IDE):</p>
<pre class="batch"><code>Error[Lp011]: section placement failed
            unable to allocate space for sections/blocks with a  
total estimated minimum size of 0x3&#39;d724 bytes (max align  
0x8) in &lt;[0x2000&#39;0000-0x2001&#39;ffff]&gt; (total uncommitted space  
0x2&#39;0000).</code></pre>
<p>This generally indicates that the input/output tensors and/or the activations buffer cannot be placed simultaneously in the default or same RW section. A possible work-around is to place the activations buffer in an external memory device (refer to <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[2]</a>).</p>
<p>Another work-around is to manually place the different buffers in different RAM memory regions.</p>
<div class="sourceCode" id="cb12"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb12-1"><a href="#cb12-1" aria-hidden="true" tabindex="-1"></a><span class="co">/* file: Middlewares/ST/Application/Validation/Src/aiValidation.c */</span></span>
<span id="cb12-2"><a href="#cb12-2" aria-hidden="true" tabindex="-1"></a><span class="co">/* file: Middlewares/ST/Application/SystemPerformance/Src/aiSystemPerformance.c */</span></span>
<span id="cb12-3"><a href="#cb12-3" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb12-4"><a href="#cb12-4" aria-hidden="true" tabindex="-1"></a>AI_ALIGNED<span class="op">(</span><span class="dv">4</span><span class="op">)</span></span>
<span id="cb12-5"><a href="#cb12-5" aria-hidden="true" tabindex="-1"></a><span class="dt">static</span> ai_u8 activations<span class="op">[</span>AI_NETWORK_DATA_ACTIVATIONS_SIZE<span class="op">];</span></span>
<span id="cb12-6"><a href="#cb12-6" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb12-7"><a href="#cb12-7" aria-hidden="true" tabindex="-1"></a>AI_ALIGNED<span class="op">(</span><span class="dv">4</span><span class="op">)</span></span>
<span id="cb12-8"><a href="#cb12-8" aria-hidden="true" tabindex="-1"></a><span class="dt">static</span> ai_u8 in_data<span class="op">[</span>AI_NETWORK_IN_1_SIZE_BYTES<span class="op">];</span></span>
<span id="cb12-9"><a href="#cb12-9" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb12-10"><a href="#cb12-10" aria-hidden="true" tabindex="-1"></a>AI_ALIGNED<span class="op">(</span><span class="dv">4</span><span class="op">)</span></span>
<span id="cb12-11"><a href="#cb12-11" aria-hidden="true" tabindex="-1"></a><span class="dt">static</span> ai_u8 out_data<span class="op">[</span>AI_NETWORK_OUT_1_SIZE_BYTES<span class="op">];</span></span>
<span id="cb12-12"><a href="#cb12-12" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span></code></pre></div>
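<p>One way to achieve this manual placement with GCC is a section attribute tied to a region declared in the linker script. The sketch below is a hedged variant of the buffers above: the section names <code>.ai_activations</code> and <code>.ai_io</code> are examples, not names produced by the code generator, and the matching regions must be added to the <code>.ld</code> file.</p>

```c
/* Hypothetical placement variant (GCC): the section names below are
   examples and must be declared in the linker script (.ld file). */
AI_ALIGNED(4)
__attribute__((section(".ai_activations")))
static ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];

AI_ALIGNED(4)
__attribute__((section(".ai_io")))
static ai_u8 in_data[AI_NETWORK_IN_1_SIZE_BYTES];
```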
</section>
<section id="during-the-set-up-of-the-communication-with-the-board" class="level3 unnumbered">
<h3 class="unnumbered">2 - During the set-up of the communication with the board</h3>
<pre class="dosbatch"><code>...
ON-DEVICE STM32 execution (&quot;network&quot;, auto-detect, 115200)..
LOAD ERROR: STM32 - no connected board(s), invalid firmware or the board should be re-started</code></pre>
<p>This illustrates the case where the device is not correctly flashed/re-started or the board is not connected to the workstation. To check this point, the best way is to use a terminal application (like TeraTerm or PuTTY) and to inspect the init log; the COM port and baud-rate information can then be used to specify the connection through the UI validation interface (refer to <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[2]</a>). This test also allows checking that the firmware is correctly flashed.</p>
<pre class="dosbatch"><code>...
  params            : 16688 bytes
  inputs/outputs    : 1/1
   I[0]  u8, scale=0.101961, zero=0, 1960 bytes, shape=(49,40,1)
   O[0]  u8, scale=0.003906, zero=0, 4 bytes, shape=(1,1,4)
Initializing the network
 Activation buffer  : 0x24002108 (4352 bytes) internal

-------------------------------------------
| READY to receive a CMD from the HOST... |
-------------------------------------------

# Note: At this point, default ASCII-base terminal should be closed
# and a stm32com-base interface should be used
# (i.e. Python stm32com module). Protocol version = 2.1</code></pre>
</section>
<section id="during-the-communication-with-the-board" class="level3 unnumbered">
<h3 class="unnumbered">3 - During the communication with the board</h3>
<p>After the connection/discovery phase, the following read timeout message can appear after the message <em>Running with inputs=…</em></p>
<pre class="dosbatch"><code>...
ON-DEVICE STM32 execution (&quot;network&quot;, auto-detect, 115200)..

&lt;Stm32com id=0x22ba711dd30 - CONNECTED(COM35/115200) devid=0x450/STM32H743/753 and STM32H750 msg=2.1&gt;
 0x450/STM32H743/753 and STM32H750 @480MHz/240MHz (FPU is present) lat=4 Core:I$/D$
 found network(s): [&#39;network&#39;]
 description    : &#39;network&#39; uint8,(49, 40, 1)-[5]-&gt;uint8,(1, 1, 4) macc=336084 rom=16.30KiB ram=4.25KiB
 tools versions : rt=(4, 1, 0) tool=(4, 1, 0)/(1, 3, 0) api=(1, 1, 0) &quot;Wed Sep 18 22:28:49 2019&quot;

Running with inputs=(10, 49, 40, 1)..
.
LOAD ERROR: STM32 - read timeout 50000ms

or

LOAD ERROR: STM32 - read timeout 10000ms
</code></pre>
<p>A reached timeout of <code>10000ms</code> indicates that the host has not received an ACK message after the transfer of the input buffer; <code>50000ms</code> indicates that the STM32 has not sent an end-of-computation message. Both cases generally indicate an STM32 hard fault. The most common reason is an issue with the sizing of the stack and/or the heap (see the previous <a href="#stack_heap_size_issue"><em>“Reported STM32 results are incoherent”</em></a> section).</p>
</section>
</section>
<section id="long-time-process-or-crash-with-a-large-test-data-set" class="level2">
<h2>Long processing time or crash with a large test data set</h2>
<p>With a large validation/test data set, since the data is completely loaded into memory before use, the workstation may hang or crash after an unusually long time, depending on its memory resources. To avoid this situation, only a representative, limited part of the validation or test data set should be provided; the whole data set is not required to evaluate the generated model.</p>
<p>Example of Python script to create a “small” test data set (based on a Numpy npz file):</p>
<div class="sourceCode" id="cb16"><pre class="sourceCode python"><code class="sourceCode python"><span id="cb16-1"><a href="#cb16-1" aria-hidden="true" tabindex="-1"></a><span class="im">from</span> __future__ <span class="im">import</span> absolute_import, division, print_function, unicode_literals</span>
<span id="cb16-2"><a href="#cb16-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb16-3"><a href="#cb16-3" aria-hidden="true" tabindex="-1"></a><span class="im">import</span> numpy <span class="im">as</span> np</span>
<span id="cb16-4"><a href="#cb16-4" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb16-5"><a href="#cb16-5" aria-hidden="true" tabindex="-1"></a><span class="co"># Load the whole data set</span></span>
<span id="cb16-6"><a href="#cb16-6" aria-hidden="true" tabindex="-1"></a>arrays <span class="op">=</span> np.load(<span class="st">&#39;large_data_set.npz&#39;</span>)</span>
<span id="cb16-7"><a href="#cb16-7" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb16-8"><a href="#cb16-8" aria-hidden="true" tabindex="-1"></a><span class="co"># Select the test data set</span></span>
<span id="cb16-9"><a href="#cb16-9" aria-hidden="true" tabindex="-1"></a>x_test, y_test <span class="op">=</span> arrays[<span class="st">&#39;x_test&#39;</span>], arrays[<span class="st">&#39;y_test&#39;</span>]</span>
<span id="cb16-10"><a href="#cb16-10" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb16-11"><a href="#cb16-11" aria-hidden="true" tabindex="-1"></a><span class="co"># Select randomly 100 samples (fixed seed)</span></span>
<span id="cb16-12"><a href="#cb16-12" aria-hidden="true" tabindex="-1"></a>msize <span class="op">=</span> <span class="bu">min</span>(<span class="dv">100</span>, <span class="bu">len</span>(x_test))</span>
<span id="cb16-13"><a href="#cb16-13" aria-hidden="true" tabindex="-1"></a>np.random.seed(<span class="dv">123</span>)</span>
<span id="cb16-14"><a href="#cb16-14" aria-hidden="true" tabindex="-1"></a>rchoice <span class="op">=</span> np.random.choice(<span class="bu">len</span>(x_test), size<span class="op">=</span>msize, replace<span class="op">=</span><span class="va">False</span>)</span>
<span id="cb16-15"><a href="#cb16-15" aria-hidden="true" tabindex="-1"></a>x_test, y_test <span class="op">=</span> x_test[rchoice], y_test[rchoice]</span>
<span id="cb16-16"><a href="#cb16-16" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb16-17"><a href="#cb16-17" aria-hidden="true" tabindex="-1"></a><span class="co"># Save the selected data</span></span>
<span id="cb16-18"><a href="#cb16-18" aria-hidden="true" tabindex="-1"></a>np.savez(<span class="st">&#39;small_data_set.npz&#39;</span>, x_test<span class="op">=</span>x_test, y_test<span class="op">=</span>y_test)</span></code></pre></div>
</section>
</section>
<section id="quantization-and-post-training-quantization-process" class="level1">
<h1>Quantization and post-training quantization process</h1>
<section id="backward-compatibility-with-x-cube-ai-4.0-and-x-cube-ai-4.1" class="level2">
<h2>Backward compatibility with X-CUBE-AI 4.0 and X-CUBE-AI 4.1</h2>
<p>X-CUBE-AI 4.0/4.1 reshaped Keras h5 files and the associated tensor format configuration files (JSON files) are still supported without adaptation.</p>
<p>The Keras post-training configuration file and the associated Python modules can still be used. <strong>Only</strong> the X-CUBE-AI 4.0 configuration file (JSON file) should be upgraded to add the new mandatory fields: <em>“arithmetic”</em>, <em>“weights_integer_scheme”</em> and <em>“activations_integer_scheme”</em> (refer to <a href="quantization.html#ref_quantize_cmd">[QUANT]</a>, <em>“Quantize command”</em> section).</p>
<div class="HTips">
<p><strong>Note</strong> — <strong>Readers</strong> should be aware that the Keras quantization script is fully and exclusively based on the Keras API from the TensorFlow module (version 2.3.1).</p>
</div>
</section>
<section id="is-it-possible-to-use-the-keras-post-training-quantization-process-through-the-ui" class="level2">
<h2>Is it possible to use the Keras post-training quantization process through the UI?</h2>
<p><strong>No.</strong> This feature is only available through the CLI.</p>
</section>
<section id="is-it-possible-to-use-the-keras-post-training-quantization-process-with-a-non-classifier-model" class="level2">
<h2>Is it possible to use the Keras post-training quantization process with a non-classifier model?</h2>
<p><strong>Yes,</strong> classification accuracy or loss metrics are not used for the quantization of the weights and activation tensors.</p>
</section>
<section id="is-it-possible-to-use-the-compression-for-a-quantized-model" class="level2">
<h2>Is it possible to use the compression for a quantized model?</h2>
<p><strong>No.</strong> The <code>&#39;--compression&#39;</code> option is not supported for the quantized layers. If a compression factor is requested, the following error is generated:</p>
<pre><code>NOT IMPLEMENTED: Quantizing a compressed tensor is not supported for &lt;name_layer&gt;</code></pre>
</section>
<section id="how-to-apply-the-keras-post-training-quantization-process-on-a-non-keras-model" class="level2">
<h2>How to apply the Keras post-training quantization process on a non-Keras model?</h2>
<p>This feature is only supported for Keras models (refer to <a href="quantization.html">[QUANT]</a>).</p>
</section>
<section id="tensorflow-lite-optimize_for_size-option-support" class="level2">
<h2>TensorFlow lite, OPTIMIZE_FOR_SIZE option support</h2>
<p><a href="https://www.tensorflow.org/lite/performance/post_training_quantization">https://www.tensorflow.org/lite/performance/post_training_quantization</a></p>
<p>The post-training quantization TensorFlow lite script (<em>TFLiteConverter</em>, TF 1.15) allows generating a <strong>weight-only quantized</strong> file (<code>OPTIMIZE_FOR_SIZE</code> option). This simple scheme (also called “hybrid” quantization) reduces the size of the generated file (~by 4): only the weights are quantized, from floating point to 8 bits of precision. At inference time, the weights are converted back from 8-bit precision to floating point and computed using floating-point kernels.</p>
<p>This quantization scheme is not supported by X-CUBE-AI, in particular by the C-inference engine and the operator implementations (network runtime library), mainly due to MCU resource constraints: additional RAM memory would be required to cache the uncompressed parameters in order to reduce the latency. If such a model is imported, the parameters are converted back to floating point before code generation. Only the <strong>full 8-bit integer quantization of weights and activations</strong> scheme is supported.</p>
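<p>For reference, the supported full-integer scheme is obtained with the TensorFlow Lite converter configured as sketched below (TF 2.x API). This is a configuration sketch only: <code>model</code> and <code>rep_data_gen</code> (a generator yielding calibration samples) are placeholders to be supplied by the user.</p>

```python
import tensorflow as tf

# Full 8-bit integer quantization of weights and activations;
# 'model' and 'rep_data_gen' are user-provided placeholders.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_quant_model = converter.convert()
```

The resulting <code>.tflite</code> file can then be imported by X-CUBE-AI like any other quantized TensorFlow Lite model.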
<!-- External ST resources/links -->
<!-- Internal resources/links -->
<!-- External resources/links -->
<!-- Cross references -->
</section>
</section>
<section id="references" class="level1">
<h1>References</h1>
<table>
<colgroup>
<col style="width: 18%" />
<col style="width: 81%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">ref</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">[DS]</td>
<td style="text-align: left;">X-CUBE-AI - AI expansion pack for STM32CubeMX <a href="https://www.st.com/en/embedded-software/x-cube-ai.html">https://www.st.com/en/embedded-software/x-cube-ai.html</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[UM]</td>
<td style="text-align: left;">User manual - Getting started with X-CUBE-AI Expansion Package for Artificial Intelligence (AI) <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">(pdf)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[CLI]</td>
<td style="text-align: left;">stm32ai - Command Line Interface <a href="command_line_interface.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[API]</td>
<td style="text-align: left;">Embedded inference client API <a href="embedded_client_api.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[METRIC]</td>
<td style="text-align: left;">Evaluation report and metrics <a href="evaluation_metrics.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[TFL]</td>
<td style="text-align: left;">TensorFlow Lite toolbox <a href="supported_ops_tflite.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[KERAS]</td>
<td style="text-align: left;">Keras toolbox <a href="supported_ops_keras.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[ONNX]</td>
<td style="text-align: left;">ONNX toolbox <a href="supported_ops_onnx.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[FAQS]</td>
<td style="text-align: left;">FAQ <a href="faqs.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[QUANT]</td>
<td style="text-align: left;">Quantization and quantize command <a href="quantization.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[RELOC]</td>
<td style="text-align: left;">Relocatable binary network support <a href="relocatable.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[CUST]</td>
<td style="text-align: left;">Support of the Keras Lambda/custom layers <a href="keras_lambda_custom.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[TFLM]</td>
<td style="text-align: left;">TensorFlow Lite for Microcontroller support <a href="tflite_micro_support.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[INST]</td>
<td style="text-align: left;">Setting the environment <a href="setting_env.html">(link)</a></td>
</tr>
</tbody>
</table>
</section>



<section class="st_footer">

<h1> <br> </h1>

<p style="font-family:verdana; text-align:left;">
 Embedded Documentation 

	- <b> FAQs </b>
			<br> X-CUBE-AI Expansion Package
	 
			<br> r3.0
		 - AI PLATFORM r7.0.0-dev
			 (Embedded Inference Client API 1.1.0) 
			 - Command Line Interface r1.5.1 
		
	
</p>

<img src="" title="ST logo" align="right" height="100" />

<div class="st_notice">
Information in this document is provided solely in connection with ST products.
The contents of this document are subject to change without prior notice.
<br>
© Copyright STMicroelectronics 2020. All rights reserved. <a href="http://www.st.com">www.st.com</a>
</div>

<hr size="1" />
</section>


</article>
</body>

</html>
