<!DOCTYPE html>
<!--

	Modified template for STM32CubeMX.AI purpose

	d0.1: 	jean-michel.delorme@st.com
			add ST logo and ST footer

	d2.0: 	jean-michel.delorme@st.com
			add sidenav support

	d2.1: 	jean-michel.delorme@st.com
			clean-up + optional ai_logo/ai meta data
			
==============================================================================
           "GitHub HTML5 Pandoc Template" v2.1 — by Tristano Ajmone           
==============================================================================
Copyright © Tristano Ajmone, 2017, MIT License (MIT). Project's home:

- https://github.com/tajmone/pandoc-goodies

The CSS in this template reuses source code taken from the following projects:

- GitHub Markdown CSS: Copyright © Sindre Sorhus, MIT License (MIT):
  https://github.com/sindresorhus/github-markdown-css

- Primer CSS: Copyright © 2016-2017 GitHub Inc., MIT License (MIT):
  http://primercss.io/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The MIT License 

Copyright (c) Tristano Ajmone, 2017 (github.com/tajmone/pandoc-goodies)
Copyright (c) Sindre Sorhus <sindresorhus@gmail.com> (sindresorhus.com)
Copyright (c) 2017 GitHub Inc.

"GitHub Pandoc HTML5 Template" is Copyright (c) Tristano Ajmone, 2017, released
under the MIT License (MIT); it contains readaptations of substantial portions
of the following third party softwares:

(1) "GitHub Markdown CSS", Copyright (c) Sindre Sorhus, MIT License (MIT).
(2) "Primer CSS", Copyright (c) 2016 GitHub Inc., MIT License (MIT).

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
==============================================================================-->
<html>
<head>
  <meta charset="utf-8" />
  <meta name="generator" content="pandoc" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
  <meta name="keywords" content="STM32CubeMX, X-CUBE-AI, Neural Network, Quantization, CLI, Code Generator" />
  <title>FAQ - Generic aspects</title>
  <style type="text/css">
.markdown-body{
	-ms-text-size-adjust:100%;
	-webkit-text-size-adjust:100%;
	color:#24292e;
	font-family:-apple-system,system-ui,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol";
	font-size:16px;
	line-height:1.5;
	word-wrap:break-word;
	box-sizing:border-box;
	min-width:200px;
	max-width:980px;
	margin:0 auto;
	padding:45px;
	}
.markdown-body a{
	color:#0366d6;
	background-color:transparent;
	text-decoration:none;
	-webkit-text-decoration-skip:objects}
.markdown-body a:active,.markdown-body a:hover{
	outline-width:0}
.markdown-body a:hover{
	text-decoration:underline}
.markdown-body a:not([href]){
	color:inherit;text-decoration:none}
.markdown-body strong{font-weight:600}
.markdown-body h1,.markdown-body h2,.markdown-body h3,.markdown-body h4,.markdown-body h5,.markdown-body h6{
	margin-top:24px;
	margin-bottom:16px;
	font-weight:600;
	line-height:1.25}
.markdown-body h1{
	font-size:2em;
	margin:.67em 0;
	padding-bottom:.3em;
	border-bottom:1px solid #eaecef}
.markdown-body h2{
	padding-bottom:.3em;
	font-size:1.5em;
	border-bottom:1px solid #eaecef}
.markdown-body h3{font-size:1.25em}
.markdown-body h4{font-size:1em}
.markdown-body h5{font-size:.875em}
.markdown-body h6{font-size:.85em;color:#6a737d}
.markdown-body img{border-style:none}
.markdown-body svg:not(:root){
	overflow:hidden}
.markdown-body hr{
	box-sizing:content-box;
	height:.25em;
	margin:24px 0;
	padding:0;
	overflow:hidden;
	background-color:#e1e4e8;
	border:0}
.markdown-body hr::before{display:table;content:""}
.markdown-body hr::after{display:table;clear:both;content:""}
.markdown-body input{margin:0;overflow:visible;font:inherit;font-family:inherit;font-size:inherit;line-height:inherit}
.markdown-body [type=checkbox]{box-sizing:border-box;padding:0}
.markdown-body *{box-sizing:border-box}.markdown-body blockquote{margin:0}
.markdown-body ol,.markdown-body ul{padding-left:2em}
.markdown-body ol ol,.markdown-body ul ol{list-style-type:lower-roman}
.markdown-body ol ol,.markdown-body ol ul,.markdown-body ul ol,.markdown-body ul ul{margin-top:0;margin-bottom:0}
.markdown-body ol ol ol,.markdown-body ol ul ol,.markdown-body ul ol ol,.markdown-body ul ul ol{list-style-type:lower-alpha}
.markdown-body li>p{margin-top:16px}
.markdown-body li+li{margin-top:.25em}
.markdown-body dd{margin-left:0}
.markdown-body dl{padding:0}
.markdown-body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:600}
.markdown-body dl dd{padding:0 16px;margin-bottom:16px}
.markdown-body code{font-family:SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace}
.markdown-body pre{font:12px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;word-wrap:normal}
.markdown-body blockquote,.markdown-body dl,.markdown-body ol,.markdown-body p,.markdown-body pre,.markdown-body table,.markdown-body ul{margin-top:0;margin-bottom:16px}
.markdown-body blockquote{padding:0 1em;color:#6a737d;border-left:.25em solid #dfe2e5}
.markdown-body blockquote>:first-child{margin-top:0}
.markdown-body blockquote>:last-child{margin-bottom:0}
.markdown-body table{display:block;width:100%;overflow:auto;border-spacing:0;border-collapse:collapse}
.markdown-body table th{font-weight:600}
.markdown-body table td,.markdown-body table th{padding:6px 13px;border:1px solid #dfe2e5}
.markdown-body table tr{background-color:#fff;border-top:1px solid #c6cbd1}
.markdown-body table tr:nth-child(2n){background-color:#f6f8fa}
.markdown-body img{max-width:100%;box-sizing:content-box;background-color:#fff}
.markdown-body code{padding:.2em 0;margin:0;font-size:85%;background-color:rgba(27,31,35,.05);border-radius:3px}
.markdown-body code::after,.markdown-body code::before{letter-spacing:-.2em;content:"\00a0"}
.markdown-body pre>code{padding:0;margin:0;font-size:100%;word-break:normal;white-space:pre;background:0 0;border:0}
.markdown-body .highlight{margin-bottom:16px}
.markdown-body .highlight pre{margin-bottom:0;word-break:normal}
.markdown-body .highlight pre,.markdown-body pre{padding:16px;overflow:auto;font-size:85%;line-height:1.45;background-color:#f6f8fa;border-radius:3px}
.markdown-body pre code{display:inline;max-width:auto;padding:0;margin:0;overflow:visible;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}
.markdown-body pre code::after,.markdown-body pre code::before{content:normal}
.markdown-body .full-commit .btn-outline:not(:disabled):hover{color:#005cc5;border-color:#005cc5}
.markdown-body kbd{box-shadow:inset 0 -1px 0 #959da5;display:inline-block;padding:3px 5px;font:11px/10px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;color:#444d56;vertical-align:middle;background-color:#fcfcfc;border:1px solid #c6cbd1;border-bottom-color:#959da5;border-radius:3px;box-shadow:inset 0 -1px 0 #959da5}
.markdown-body :checked+.radio-label{position:relative;z-index:1;border-color:#0366d6}
.markdown-body .task-list-item{list-style-type:none}
.markdown-body .task-list-item+.task-list-item{margin-top:3px}
.markdown-body .task-list-item input{margin:0 .2em .25em -1.6em;vertical-align:middle}
.markdown-body::before{display:table;content:""}
.markdown-body::after{display:table;clear:both;content:""}
.markdown-body>:first-child{margin-top:0!important}
.markdown-body>:last-child{margin-bottom:0!important}
.Alert,.Error,.Note,.Success,.Warning,.Tips,.HTips{padding:11px;margin-bottom:24px;border-style:solid;border-width:1px;border-radius:4px}
.Alert p,.Error p,.Note p,.Success p,.Warning p,.Tips p,.HTips p{margin-top:0}
.Alert p:last-child,.Error p:last-child,.Note p:last-child,.Success p:last-child,.Warning p:last-child,.Tips p:last-child,.HTips p:last-child{margin-bottom:0}
.Alert{color:#246;background-color:#e2eef9;border-color:#bac6d3}
.Warning{color:#4c4a42;background-color:#fff9ea;border-color:#dfd8c2}
.Error{color:#911;background-color:#fcdede;border-color:#d2b2b2}
.Success{color:#22662c;background-color:#e2f9e5;border-color:#bad3be}
.Note{color:#2f363d;background-color:#f6f8fa;border-color:#d5d8da}
.Alert h1,.Alert h2,.Alert h3,.Alert h4,.Alert h5,.Alert h6{color:#246;margin-bottom:0}
.Warning h1,.Warning h2,.Warning h3,.Warning h4,.Warning h5,.Warning h6{color:#4c4a42;margin-bottom:0}
.Error h1,.Error h2,.Error h3,.Error h4,.Error h5,.Error h6{color:#911;margin-bottom:0}
.Success h1,.Success h2,.Success h3,.Success h4,.Success h5,.Success h6{color:#22662c;margin-bottom:0}
.Note h1,.Note h2,.Note h3,.Note h4,.Note h5,.Note h6{color:#2f363d;margin-bottom:0}
.Tips h1,.Tips h2,.Tips h3,.Tips h4,.Tips h5,.Tips h6{color:#2f363d;margin-bottom:0}
.HTips h1,.HTips h2,.HTips h3,.HTips h4,.HTips h5,.HTips h6{color:#2f363d;margin-bottom:0}
.Tips h1:first-child,.Tips h2:first-child,.Tips h3:first-child,.Tips h4:first-child,.Tips h5:first-child,.Tips h6:first-child,.Alert h1:first-child,.Alert h2:first-child,.Alert h3:first-child,.Alert h4:first-child,.Alert h5:first-child,.Alert h6:first-child,.Error h1:first-child,.Error h2:first-child,.Error h3:first-child,.Error h4:first-child,.Error h5:first-child,.Error h6:first-child,.Note h1:first-child,.Note h2:first-child,.Note h3:first-child,.Note h4:first-child,.Note h5:first-child,.Note h6:first-child,.Success h1:first-child,.Success h2:first-child,.Success h3:first-child,.Success h4:first-child,.Success h5:first-child,.Success h6:first-child,.Warning h1:first-child,.Warning h2:first-child,.Warning h3:first-child,.Warning h4:first-child,.Warning h5:first-child,.Warning h6:first-child{margin-top:0}
h1.title,p.subtitle{text-align:center}
h1.title.followed-by-subtitle{margin-bottom:0}
p.subtitle{font-size:1.5em;font-weight:600;line-height:1.25;margin-top:0;margin-bottom:16px;padding-bottom:.3em}
div.line-block{white-space:pre-line}
  </style>
  <style type="text/css">code{white-space: pre;}</style>
  <style type="text/css">
	pre > code.sourceCode { white-space: pre; position: relative; }
 pre > code.sourceCode > span { display: inline-block; line-height: 1.25; }
 pre > code.sourceCode > span:empty { height: 1.2em; }
 .sourceCode { overflow: visible; }
 code.sourceCode > span { color: inherit; text-decoration: inherit; }
 div.sourceCode { margin: 1em 0; }
 pre.sourceCode { margin: 0; }
 @media screen {
 div.sourceCode { overflow: auto; }
 }
 @media print {
 pre > code.sourceCode { white-space: pre-wrap; }
 pre > code.sourceCode > span { text-indent: -5em; padding-left: 5em; }
 }
 pre.numberSource code
   { counter-reset: source-line 0; }
 pre.numberSource code > span
   { position: relative; left: -4em; counter-increment: source-line; }
 pre.numberSource code > span > a:first-child::before
   { content: counter(source-line);
     position: relative; left: -1em; text-align: right; vertical-align: baseline;
     border: none; display: inline-block;
     -webkit-touch-callout: none; -webkit-user-select: none;
     -khtml-user-select: none; -moz-user-select: none;
     -ms-user-select: none; user-select: none;
     padding: 0 4px; width: 4em;
     background-color: #ffffff;
     color: #a0a0a0;
   }
 pre.numberSource { margin-left: 3em; border-left: 1px solid #a0a0a0;  padding-left: 4px; }
 div.sourceCode
   { color: #1f1c1b; background-color: #ffffff; }
 @media screen {
 pre > code.sourceCode > span > a:first-child::before { text-decoration: underline; }
 }
 code span { color: #1f1c1b; } /* Normal */
 code span.al { color: #bf0303; background-color: #f7e6e6; font-weight: bold; } /* Alert */
 code span.an { color: #ca60ca; } /* Annotation */
 code span.at { color: #0057ae; } /* Attribute */
 code span.bn { color: #b08000; } /* BaseN */
 code span.bu { color: #644a9b; font-weight: bold; } /* BuiltIn */
 code span.cf { color: #1f1c1b; font-weight: bold; } /* ControlFlow */
 code span.ch { color: #924c9d; } /* Char */
 code span.cn { color: #aa5500; } /* Constant */
 code span.co { color: #898887; } /* Comment */
 code span.cv { color: #0095ff; } /* CommentVar */
 code span.do { color: #607880; } /* Documentation */
 code span.dt { color: #0057ae; } /* DataType */
 code span.dv { color: #b08000; } /* DecVal */
 code span.er { color: #bf0303; text-decoration: underline; } /* Error */
 code span.ex { color: #0095ff; font-weight: bold; } /* Extension */
 code span.fl { color: #b08000; } /* Float */
 code span.fu { color: #644a9b; } /* Function */
 code span.im { color: #ff5500; } /* Import */
 code span.in { color: #b08000; } /* Information */
 code span.kw { color: #1f1c1b; font-weight: bold; } /* Keyword */
 code span.op { color: #1f1c1b; } /* Operator */
 code span.ot { color: #006e28; } /* Other */
 code span.pp { color: #006e28; } /* Preprocessor */
 code span.re { color: #0057ae; background-color: #e0e9f8; } /* RegionMarker */
 code span.sc { color: #3daee9; } /* SpecialChar */
 code span.ss { color: #ff5500; } /* SpecialString */
 code span.st { color: #bf0303; } /* String */
 code span.va { color: #0057ae; } /* Variable */
 code span.vs { color: #bf0303; } /* VerbatimString */
 code span.wa { color: #bf0303; } /* Warning */
  </style>
  <link rel="stylesheet" href="data:text/css,%3Aroot%20%7B%2D%2Dmain%2Ddarkblue%2Dcolor%3A%20rgb%283%2C35%2C75%29%3B%20%2D%2Dmain%2Dlightblue%2Dcolor%3A%20rgb%2860%2C180%2C230%29%3B%20%2D%2Dmain%2Dpink%2Dcolor%3A%20rgb%28230%2C0%2C126%29%3B%20%2D%2Dmain%2Dyellow%2Dcolor%3A%20rgb%28255%2C210%2C0%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%3A%20rgb%2870%2C70%2C80%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%2D25%3A%20rgb%28209%2C209%2C211%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%3A%20rgb%28233%2C233%2C234%29%3B%20%2D%2Dsecondary%2Dlightgreen%2Dcolor%3A%20rgb%2873%2C177%2C112%29%3B%20%2D%2Dsecondary%2Dpurple%2Dcolor%3A%20rgb%28140%2C0%2C120%29%3B%20%2D%2Dsecondary%2Ddarkgreen%2Dcolor%3A%20rgb%284%2C87%2C47%29%3B%20%2D%2Dsidenav%2Dfont%2Dsize%3A%2090%25%3B%7Dhtml%20%7Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3B%7D%2A%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3B%7D%2Est%5Fheader%20h1%2Etitle%2C%2Est%5Fheader%20p%2Esubtitle%20%7Btext%2Dalign%3A%20left%3B%7D%2Est%5Fheader%20h1%2Etitle%20%7Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bmargin%2Dbottom%3A5px%3B%7D%2Est%5Fheader%20p%2Esubtitle%20%7Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dsize%3A90%25%3B%7D%2Est%5Fheader%20h1%2Etitle%2Efollowed%2Dby%2Dsubtitle%20%7Bborder%2Dbottom%3A2px%20solid%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bmargin%2Dbottom%3A5px%3B%7D%2Est%5Fheader%20p%2Erevision%20%7Bdisplay%3A%20inline%2Dblock%3Bwidth%3A70%25%3B%7D%2Est%5Fheader%20div%2Eauthor%20%7Bfont%2Dstyle%3A%20italic%3B%7D%2Est%5Fheader%20div%2Esummary%20%7Bborder%2Dtop%3A%20solid%201px%20%23C0C0C0%3Bbackground%3A%20%23ECECEC%3Bpadding%3A%205px%3B%7D%2Est%5Ffooter%20%7Bfont%2Dsize%3A80%25%3B%7D%2Est%5Ffooter%20img%20%7Bfloat%3A%20right%3B%7D%2Est%5Ffooter%20%2Est%5Fnotice%20%7Bwidth%3A80%25%3B%7D%2Emarkdown%2Dbody%20%23header%2Dsection%2Dnumber%20%7Bfont%2Dsize%3A120%25%3B%7D%2Emarkdown%2Dbody%20h1%20%7Bborder%2Dbottom%3A1px%20solid%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%2
9%3Bpadding%2Dbottom%3A%202px%3Bpadding%2Dtop%3A%2010px%3B%7D%2Emarkdown%2Dbody%20h2%20%7Bpadding%2Dbottom%3A%205px%3Bpadding%2Dtop%3A%2010px%3B%7D%2Emarkdown%2Dbody%20h2%20code%20%7Bbackground%2Dcolor%3A%20rgb%28255%2C%20255%2C%20255%29%3B%7D%23func%2EsourceCode%20%7Bborder%2Dleft%2Dstyle%3A%20solid%3Bborder%2Dcolor%3A%20rgb%280%2C%2032%2C%2082%29%3Bborder%2Dcolor%3A%20rgb%28255%2C%20244%2C%20191%29%3Bborder%2Dwidth%3A%208px%3Bpadding%3A0px%3B%7Dpre%20%3E%20code%20%7Bborder%3A%20solid%201px%20blue%3Bfont%2Dsize%3A60%25%3B%7DcodeXX%20%7Bborder%3A%20solid%201px%20blue%3Bfont%2Dsize%3A60%25%3B%7D%23func%2EsourceXXCode%3A%3Abefore%20%7Bcontent%3A%20%22Synopsis%22%3Bpadding%2Dleft%3A10px%3Bfont%2Dweight%3A%20bold%3B%7Dfigure%20%7Bpadding%3A0px%3Bmargin%2Dleft%3A5px%3Bmargin%2Dright%3A5px%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3B%7Dimg%5Bdata%2Dproperty%3D%22center%22%5D%20%7Bdisplay%3A%20block%3Bmargin%2Dtop%3A%2010px%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bpadding%3A%2010px%3B%7Dfigcaption%20%7Btext%2Dalign%3Aleft%3B%20%20border%2Dtop%3A%201px%20dotted%20%23888%3Bpadding%2Dbottom%3A%2020px%3Bmargin%2Dtop%3A%2010px%3B%7Dh1%20code%2C%20h2%20code%20%7Bfont%2Dsize%3A120%25%3B%7D%09%2Emarkdown%2Dbody%20table%20%7Bwidth%3A%20100%25%3Bmargin%2Dleft%3Aauto%3Bmargin%2Dright%3Aauto%3B%7D%2Emarkdown%2Dbody%20img%20%7Bborder%2Dradius%3A%204px%3Bpadding%3A%205px%3Bdisplay%3A%20block%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bwidth%3A%20auto%3B%7D%2Emarkdown%2Dbody%20%2Est%5Fheader%20img%2C%20%2Emarkdown%2Dbody%20%7Bborder%3A%20none%3Bborder%2Dradius%3A%20none%3Bpadding%3A%205px%3Bdisplay%3A%20block%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bwidth%3A%20auto%3Bbox%2Dshadow%3A%20none%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2010px%3Bwidth%3A%20auto%3Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3Bcolor%3A%20%2303234B%3Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%7D%2Emarkdown%2Dbody%20h1%2C%20%2Emarkdown%
2Dbody%20h2%2C%20%2Emarkdown%2Dbody%20h3%20%7B%20%20%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%7D%2Emarkdown%2Dbody%3Ahover%20%7B%7D%2Emarkdown%2Dbody%20%2Econtents%20%7B%7D%2Emarkdown%2Dbody%20%2Etoc%2Dtitle%20%7B%7D%2Emarkdown%2Dbody%20%2Econtents%20li%20%7Blist%2Dstyle%2Dtype%3A%20none%3B%7D%2Emarkdown%2Dbody%20%2Econtents%20ul%20%7Bpadding%2Dleft%3A%2010px%3B%7D%2Emarkdown%2Dbody%20%2Econtents%20a%20%7Bcolor%3A%20%233CB4E6%3B%20%7D%2Emarkdown%2Dbody%20table%20%2Eheader%20%7Bbackground%2Dcolor%3A%20var%28%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%29%3Bborder%2Dbottom%3A1px%20solid%3Bborder%2Dtop%3A1px%20solid%3Bfont%2Dsize%3A%2090%25%3B%7D%2Emarkdown%2Dbody%20table%20th%20%7Bfont%2Dweight%3A%20bolder%3B%20%7D%2Emarkdown%2Dbody%20table%20td%20%7Bfont%2Dsize%3A%2090%25%3B%7D%2Emarkdown%2Dbody%20code%7Bpadding%3A%200%3Bmargin%3A0%3Bfont%2Dsize%3A95%25%3Bbackground%2Dcolor%3Argba%2827%2C31%2C35%2C%2E05%29%3Bborder%2Dradius%3A1px%3B%7D%2Et01%20%7Bwidth%3A%20100%25%3Bborder%3A%20None%3Btext%2Dalign%3A%20left%3B%7D%2ETips%20%7Bpadding%3A11px%3Bmargin%2Dbottom%3A24px%3Bborder%2Dstyle%3Asolid%3Bborder%2Dwidth%3A1px%3Bborder%2Dradius%3A1px%7D%2ETips%20%7Bcolor%3A%232f363d%3B%20background%2Dcolor%3A%20%23f6f8fa%3Bborder%2Dcolor%3A%23d5d8da%3Bborder%2Dtop%3A1px%20solid%3Bborder%2Dbottom%3A1px%20solid%3B%7D%2EHTips%20%7Bpadding%3A11px%3Bmargin%2Dbottom%3A24px%3Bborder%2Dstyle%3Asolid%3Bborder%2Dwidth%3A1px%3Bborder%2Dradius%3A1px%7D%2EHTips%20%7Bcolor%3A%232f363d%3B%20background%2Dcolor%3A%23fff9ea%3Bborder%2Dcolor%3A%23d5d8da%3Bborder%2Dtop%3A1px%20solid%3Bborder%2Dbottom%3A1px%20solid%3B%7D%2EHTips%20h1%2C%2EHTips%20h2%2C%2EHTips%20h3%2C%2EHTips%20h4%2C%2EHTips%20h5%2C%2EHTips%20h6%20%7Bcolor%3A%232f363d%3Bmargin%2Dbottom%3A0%7D%2Esidenav%20%7Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3B%20%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bheight%3A%20100%25%3Bposition%3A%20fixed%3Bz%2Dindex%3A%201%3Btop%3A%200%3Bleft%3A%200%3Bmargin%2Dright%3A%2010px%3Bmargin
%2Dleft%3A%2010px%3B%20overflow%2Dx%3A%20hidden%3B%7D%2Esidenav%20hr%2Enew1%20%7Bborder%2Dwidth%3A%20thin%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Dlightblue%2Dcolor%29%3Bmargin%2Dright%3A%2010px%3Bmargin%2Dtop%3A%20%2D10px%3B%7D%2Esidenav%20%23sidenav%5Fheader%20%7Bmargin%2Dtop%3A%2010px%3Bborder%3A%201px%3Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Dlightblue%2Dcolor%29%3B%7D%2Esidenav%20%23sidenav%5Fheader%20img%20%7Bfloat%3A%20left%3B%7D%2Esidenav%20%23sidenav%5Fheader%20a%20%7Bmargin%2Dleft%3A%200px%3Bmargin%2Dright%3A%200px%3Bpadding%2Dleft%3A%200px%3B%7D%2Esidenav%20%23sidenav%5Fheader%20a%3Ahover%20%7Bbackground%2Dsize%3A%20auto%3Bcolor%3A%20%23FFD200%3B%20%7D%2Esidenav%20%23sidenav%5Fheader%20a%3Aactive%20%7B%20%20%7D%2Esidenav%20%3E%20ul%20%7Bbackground%2Dcolor%3A%20rgba%2857%2C%20169%2C%20220%2C%200%2E05%29%3B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bborder%2Dradius%3A%2010px%3Bpadding%2Dbottom%3A%2010px%3Bpadding%2Dtop%3A%2010px%3Bpadding%2Dright%3A%2010px%3Bmargin%2Dright%3A%2010px%3B%7D%2Esidenav%20a%20%7Bpadding%3A%202px%202px%3Btext%2Ddecoration%3A%20none%3Bfont%2Dsize%3A%20var%28%2D%2Dsidenav%2Dfont%2Dsize%29%3Bdisplay%3Atable%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%2C%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%7B%20padding%2Dright%3A%205px%3Bpadding%2Dleft%3A%205px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dweight%3A%20lighter%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dsize%3A%2080%25%3Bpadding%2Dleft%3A%2010px%3Btext%2Dalign%2Dlast%3A%20left%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20display%3A%20None%3B%7D%2Esidenav%20li%20%7Blist%2Dstyle%2Dtype%3A%20none%3B%7D%2Esidenav%20ul%20%7Bpadding%2Dleft%3A%200px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahove
r%2C%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bbackground%2Dcolor%3A%20var%28%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%29%3Bbackground%2Dclip%3A%20border%2Dbox%3Bmargin%2Dleft%3A%20%2D10px%3Bpadding%2Dleft%3A%2010px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bpadding%2Dright%3A%2015px%3Bwidth%3A%20230px%3B%09%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bpadding%2Dright%3A%2010px%3Bwidth%3A%20230px%3B%09%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Aactive%20%7B%20color%3A%20%23FFD200%3B%20%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Aactive%20%7B%20color%3A%20%23FFD200%3B%20%7D%2Esidenav%20code%20%7B%7D%2Esidenav%20%7Bwidth%3A%20280px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%20300px%3Bdisplay%3Ablock%3B%7D%2Emarkdown%2Dbody%20%2Eprint%2Dcontents%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%2Eprint%2Dtoc%2Dtitle%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmax%2Dwidth%3A%20980px%3Bmin%2Dwidth%3A%20200px%3Bpadding%3A%2040px%3Bborder%2Dstyle%3A%20solid%3Bborder%2Dstyle%3A%20outset%3Bborder%2Dcolor%3A%20rgba%28104%2C%20167%2C%20238%2C%200%2E089%29%3Bborder%2Dradius%3A%205px%3B%7D%40media%20screen%20and%20%28max%2Dheight%3A%20450px%29%20%7B%2Esidenav%20%7Bpadding%2Dtop%3A%2015px%3B%7D%2Esidenav%20a%20%7Bfont%2Dsize%3A%2018px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%20%7D%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2040px%3Bwidth%3A%20auto%3Bborder%3A%200px%3B%7D%7D%40media%20screen%20and%20%28max%2Dwidth%3A%201024px%29%20%7B%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2040px%3Bwidth%3A%20auto%3Bborder%3A%200px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%7D%7D%40media%20print%20%7B%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2010px%3Bwidth%3Aauto%3Bbord
er%3A%200px%3B%7D%40page%20%7Bsize%3A%20A4%3B%20%20margin%3A2cm%3Bpadding%3A2cm%3Bmargin%2Dtop%3A%201cm%3Bpadding%2Dbottom%3A%201cm%3B%7D%2A%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3Bfont%2Dsize%3A90%25%3B%7Da%20%7Bfont%2Dsize%3A%20100%25%3Bcolor%3A%20yellow%3B%7D%2Emarkdown%2Dbody%20article%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3Bfont%2Dsize%3A100%25%3B%7D%2Emarkdown%2Dbody%20p%20%7Bwindows%3A%202%3Borphans%3A%202%3B%7D%2Epagebreakerafter%20%7Bpage%2Dbreak%2Dafter%3A%20always%3Bpadding%2Dtop%3A10mm%3B%7D%2Epagebreakbefore%20%7Bpage%2Dbreak%2Dbefore%3A%20always%3B%7Dh1%2C%20h2%2C%20h3%2C%20h4%20%7Bpage%2Dbreak%2Dafter%3A%20avoid%3B%7Ddiv%2C%20code%2C%20blockquote%2C%20li%2C%20span%2C%20table%2C%20figure%20%7Bpage%2Dbreak%2Dinside%3A%20avoid%3B%7D%7D">
  <!--[if lt IE 9]>
    <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
  <![endif]-->





<link rel="shortcut icon" href="">

</head>



<body>

		<div class="sidenav">
		<div id="sidenav_header">
							<img src="" title="STM32CubeMX.AI logo" align="left" height="70" />
										<br />7.0.0<br />
										<a href="#doc_title"> FAQ - Generic aspects </a>
					</div>
		<div id="sidenav_header_button">
			 
							<ul>
					<li><p><a id="index" href="index.html">[ Index ]</a></p></li>
				</ul>
						<hr class="new1">
		</div>	

		<ul>
  <li><a href="#ref_python_ver">How to know which versions of the deep-learning framework components are used?</a></li>
  <li><a href="#onnx_channel_first">Channel-first support for ONNX models</a></li>
  <li><a href="#how-is-used-the-cmsis-nn-library">How is the CMSIS-NN library used?</a></li>
  <li><a href="#what-is-the-eabi-used-for-the-network_runtime-libraries">What is the EABI used for the <em>network_runtime</em> libraries?</a></li>
  <li><a href="#x-cube-ai-python-api-availability">X-CUBE-AI Python API availability?</a></li>
  <li><a href="#stateful-lstm-support">Stateful LSTM support?</a></li>
  <li><a href="#how-is-used-the-onnx-optimizer">How is the ONNX optimizer used?</a></li>
  <li><a href="#how-is-used-the-tflite-interpreter">How is the TFLite interpreter used?</a></li>
  <li><a href="#tensorflow-keras-tf.keras-vs-keras.io">TensorFlow Keras (tf.keras) vs Keras.io</a></li>
  <li><a href="#it-is-possible-to-update-a-model-on-the-firmware-wo-having-to-do-a-full-firmware-update">Is it possible to update a model on the firmware without having to do a full firmware update?</a></li>
  <li><a href="#keras-model-or-sequential-layer-support">Keras Model or Sequential layer support?</a></li>
  <li><a href="#is-it-possible-to-split-the-weights-buffer">Is it possible to split the weights buffer?</a></li>
  <li><a href="#is-it-possible-to-place-the-activations-buffer-in-different-memory-segments">Is it possible to place the “activations” buffer in different memory segments?</a></li>
  <li><a href="#how-to-compress-the-non-densefully-connected-layers">How to compress the non-dense/fully-connected layers?</a></li>
  <li><a href="#is-it-possible-to-apply-a-compression-factor-different-of-x8-x4">Is it possible to apply a compression factor different from x8 or x4?</a></li>
  <li><a href="#how-to-specify-or-to-indicate-a-compression-factor-by-layer">How to specify a compression factor per layer?</a></li>
  <li><a href="#why-a-small-negative-ratio-is-reported-for-the-weights-size-with-a-model-wo-compression">Why is a small negative ratio reported for the weights size with a model without compression?</a></li>
  <li><a href="#is-it-possible-to-dumpcapture-the-intermediate-values-during-the-execution-of-the-inference">Is it possible to dump/capture the intermediate values during the execution of the inference?</a></li>
  <li><a href="#references">References</a></li>
  </ul>
	</div>
	<article id="sidenav" class="markdown-body">
		



<header>
<section class="st_header" id="doc_title">

<div class="himage">
	<img src="" title="STM32CubeMX.AI" align="right" height="70" />
	<img src="" title="STM32" align="right" height="90" />
</div>

<h1 class="title followed-by-subtitle">FAQ - Generic aspects</h1>

	<p class="subtitle">X-CUBE-AI Expansion Package</p>

	<div class="revision">r3.1</div>

	<div class="ai_platform">
		AI PLATFORM r7.0.0
					(Embedded Inference Client API 1.1.0)
			</div>
			Command Line Interface r1.5.1
	




</section>
</header>
 


	<h1 class="toc-title">Contents</h1>
	<div class="contents">
	<ul>
 <li><a href="#ref_python_ver">How to know which versions of the deep-learning framework components are used?</a></li>
 <li><a href="#onnx_channel_first">Channel-first support for ONNX models</a></li>
 <li><a href="#how-is-used-the-cmsis-nn-library">How is the CMSIS-NN library used?</a></li>
 <li><a href="#what-is-the-eabi-used-for-the-network_runtime-libraries">What is the EABI used for the <em>network_runtime</em> libraries?</a></li>
 <li><a href="#x-cube-ai-python-api-availability">X-CUBE-AI Python API availability?</a></li>
 <li><a href="#stateful-lstm-support">Stateful LSTM support?</a></li>
 <li><a href="#how-is-used-the-onnx-optimizer">How is the ONNX optimizer used?</a></li>
 <li><a href="#how-is-used-the-tflite-interpreter">How is the TFLite interpreter used?</a></li>
 <li><a href="#tensorflow-keras-tf.keras-vs-keras.io">TensorFlow Keras (tf.keras) vs Keras.io</a></li>
 <li><a href="#it-is-possible-to-update-a-model-on-the-firmware-wo-having-to-do-a-full-firmware-update">Is it possible to update a model on the firmware without having to do a full firmware update?</a></li>
 <li><a href="#keras-model-or-sequential-layer-support">Keras Model or Sequential layer support?</a></li>
 <li><a href="#is-it-possible-to-split-the-weights-buffer">Is it possible to split the weights buffer?</a></li>
 <li><a href="#is-it-possible-to-place-the-activations-buffer-in-different-memory-segments">Is it possible to place the “activations” buffer in different memory segments?</a></li>
 <li><a href="#how-to-compress-the-non-densefully-connected-layers">How to compress the non-dense/fully-connected layers?</a></li>
 <li><a href="#is-it-possible-to-apply-a-compression-factor-different-of-x8-x4">Is it possible to apply a compression factor different from x8 or x4?</a></li>
 <li><a href="#how-to-specify-or-to-indicate-a-compression-factor-by-layer">How to specify a compression factor per layer?</a></li>
 <li><a href="#why-a-small-negative-ratio-is-reported-for-the-weights-size-with-a-model-wo-compression">Why is a small negative ratio reported for the weights size with a model without compression?</a></li>
 <li><a href="#is-it-possible-to-dumpcapture-the-intermediate-values-during-the-execution-of-the-inference">Is it possible to dump/capture the intermediate values during the execution of the inference?</a></li>
 <li><a href="#references">References</a></li>
 </ul>
	</div>




<ul>
<li><a href="faq_validation.html">FAQ - Validation aspects</a></li>
<li><a href="faq_quantization.html">FAQ - Quantization and post-training quantization process</a></li>
</ul>
<section id="ref_python_ver" class="level2">
<h2>How to know which versions of the deep-learning framework components are used?</h2>
<p>The X-CUBE-AI Expansion Package is a complete, self-contained application package: no external tool is required to use it. The versions of the main embedded components can be displayed with the following command:</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a>$  stm32ai <span class="op">--</span>tools_version</span>
<span id="cb1-2"><a href="#cb1-2" aria-hidden="true" tabindex="-1"></a>Neural Network Tools <span class="kw">for</span> STM32AI v1<span class="op">.</span><span class="fu">5</span><span class="op">.</span><span class="fu">1</span> <span class="op">(</span>STM<span class="op">.</span><span class="fu">ai</span> v7<span class="op">.</span><span class="fu">0</span><span class="op">.</span><span class="fu">0</span><span class="op">)</span></span>
<span id="cb1-3"><a href="#cb1-3" aria-hidden="true" tabindex="-1"></a><span class="op">-</span> Python version   <span class="op">:</span> 3<span class="op">.</span><span class="fu">7</span><span class="op">.</span><span class="fu">9</span></span>
<span id="cb1-4"><a href="#cb1-4" aria-hidden="true" tabindex="-1"></a><span class="op">-</span> Numpy version    <span class="op">:</span> 1<span class="op">.</span><span class="fu">19</span><span class="op">.</span><span class="fu">5</span></span>
<span id="cb1-5"><a href="#cb1-5" aria-hidden="true" tabindex="-1"></a><span class="op">-</span> TF version       <span class="op">:</span> 2<span class="op">.</span><span class="fu">5</span><span class="op">.</span><span class="fu">0</span></span>
<span id="cb1-6"><a href="#cb1-6" aria-hidden="true" tabindex="-1"></a><span class="op">-</span> TF Keras version <span class="op">:</span> 2<span class="op">.</span><span class="fu">5</span><span class="op">.</span><span class="fu">0</span></span>
<span id="cb1-7"><a href="#cb1-7" aria-hidden="true" tabindex="-1"></a><span class="op">-</span> ONNX version     <span class="op">:</span> 1<span class="op">.</span><span class="fu">6</span><span class="op">.</span><span class="fu">0</span></span>
<span id="cb1-8"><a href="#cb1-8" aria-hidden="true" tabindex="-1"></a><span class="op">-</span> ONNX RT version  <span class="op">:</span> 1<span class="op">.</span><span class="fu">7</span><span class="op">.</span><span class="fu">0</span></span></code></pre></div>
<div class="Alert">
<p><strong>Warning</strong> — Users should be aware that the respective Python deep-learning modules are used both to import/parse the model and to run the original model. For the supported layers/operators, refer to the descriptions in <a href="supported_ops_tflite.html">[TFLITE]</a>, <a href="supported_ops_keras.html">[KERAS]</a> and <a href="supported_ops_onnx.html">[ONNX]</a>.</p>
</div>
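<p>As a complementary check, the versions installed in a separate local Python environment (used for example to prepare the models or the data) can be listed and compared with the embedded ones. A minimal, illustrative sketch using only the standard library (this script is not part of the X-CUBE-AI package):</p>

```python
# List the versions of the main deep-learning packages installed in a
# local Python environment, to compare with the ones embedded in X-CUBE-AI.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("numpy", "tensorflow", "onnx", "onnxruntime"):
    try:
        print(f"{pkg:<12}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg:<12}: not installed")
```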
</section>
<section id="onnx_channel_first" class="level2">
<h2>Channel first support for ONNX model</h2>
<p>The data format of the generated C-model is always <a href="embedded_client_api.html#ref_tensor_def">channel-last</a> (NHWC). To preserve the original data arrangement, transpose operators are added by default if:</p>
<ul>
<li>the model is channel-first</li>
<li>the number of input channels is greater than 1</li>
</ul>
<p>This default behavior can be disabled with the <code>--no-onnx-io-transpose</code> option.</p>
<div class="Tips">
<p><strong>Tip</strong> — For the validation of the ONNX model, the user must present the data correctly when building the representative data set, according to the generated and expected input shape.</p>
</div>
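<p>When the transpose operators are inserted, the generated c-model expects channel-last inputs, so a channel-first reference data set can be rearranged with a simple transpose. A minimal sketch with NumPy (shapes and array names are illustrative):</p>

```python
import numpy as np

# Reference samples stored channel-first: (batch, channels, height, width)
x_nchw = np.random.rand(10, 3, 32, 32).astype(np.float32)

# Rearrange to the channel-last layout expected by the generated c-model:
# (batch, height, width, channels)
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))

print(x_nhwc.shape)  # (10, 32, 32, 3)
```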
</section>
<section id="how-is-used-the-cmsis-nn-library" class="level2">
<h2>How is the CMSIS-NN library used?</h2>
<p>The network runtime library is partially based on the CMSIS library. CMSIS is a vendor-independent hardware abstraction layer for microcontrollers based on Arm® Cortex® processors (<a href="https://arm-software.github.io/CMSIS_5/General/html/index.html">https://arm-software.github.io/CMSIS_5/General/html/index.html</a>). The CMSIS-NN sub-component and a minor part of the CMSIS-DSP sub-component are embedded in the <em>network_runtime.a</em> library to ensure that the required files are compiled with the optimal options.</p>
<table>
<thead>
<tr class="header">
<th style="text-align: left;">component</th>
<th style="text-align: left;">version</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">CMSIS package</td>
<td style="text-align: left;">r5.7.0</td>
</tr>
<tr class="even">
<td style="text-align: left;">CMSIS-Core(M)</td>
<td style="text-align: left;">V5.4.0</td>
</tr>
<tr class="odd">
<td style="text-align: left;">CMSIS-DSP</td>
<td style="text-align: left;">V1.8.0</td>
</tr>
<tr class="even">
<td style="text-align: left;">CMSIS-NN</td>
<td style="text-align: left;">V1.3.0</td>
</tr>
</tbody>
</table>
<p>Some of the forward kernel functions are <em>directly</em> mapped onto CMSIS-NN (legacy support of the Qmn format) when available, but most are optimized implementations (integer and float formats) based only on the CMSIS type definitions, for portability across the STM32 families.</p>
</section>
<section id="what-is-the-eabi-used-for-the-network_runtime-libraries" class="level2">
<h2>What is the EABI used for the <em>network_runtime</em> libraries?</h2>
<p>For performance reasons, the provided libraries (<code>network_runtime.a</code>) are compiled with the <em>hard</em> floating-point ABI option, allowing the generation of floating-point instructions and the use of FPU-specific calling conventions. This implies that the final end-user project must also be compiled and linked with this option.</p>
<div class="Alert">
<p><strong>Warning</strong> — An abnormal situation is normally detected at link time, but in one specific case the firmware is generated w/o errors while the run-time results are <em>UNPREDICTABLE</em>. For example, in a GCC-based environment, <code>-mfloat-abi=soft</code> or <code>-mfloat-abi=softfp</code> can be set by default for the whole project in order to use another soft-EABI binary library. No link error is raised in this case, because the library has been generated with another EABI-compliant tool-chain (IAR for Arm tool-chain).</p>
</div>
<blockquote>
<p>If a specific binary library version is needed, contact your local ST support.</p>
</blockquote>
</section>
<section id="x-cube-ai-python-api-availability" class="level2">
<h2>X-CUBE-AI Python API availability?</h2>
<p>X-CUBE-AI is only available through a UI (fully integrated in STM32CubeMX) and through a command-line interface. Only a specific Python module (<code>ai_runner</code>) is provided, to run a generated c-model locally for advanced validation purposes (refer to the <a href="how_to_run_a_model_locally.html">“How to run locally a c-model”</a> article).</p>
</section>
<section id="stateful-lstm-support" class="level2">
<h2>Stateful LSTM support?</h2>
<p>For the <a href="supported_ops_onnx.html">ONNX LSTM</a> and <a href="supported_ops_tflite.html">TFLite UNIDIRECTIONAL_SEQUENCE_LSTM</a> operators, only the stateless mode is supported. There is only limited support for the <a href="keras_lstm_stateful.html">Keras LSTM layer</a> in stateful mode.</p>
</section>
<section id="how-is-used-the-onnx-optimizer" class="level2">
<h2>How is the ONNX optimizer used?</h2>
<p>The import of the ONNX model is based on the ONNX package 1.6.0. The ONNX optimizer is always used with the standard options, applying the minimum graph optimizations for fast inference; the X-CUBE-AI optimization passes are applied afterwards. Some particular models or corner cases are not correctly supported by this version of the ONNX optimizer, generating specific internal errors. The <code>--no-onnx-optimizer</code> option can be used to disable the ONNX optimizer pass when importing the model.</p>
<p>Example of error message:</p>
<div class="sourceCode" id="cb2"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb2-1"><a href="#cb2-1" aria-hidden="true" tabindex="-1"></a><span class="op">(</span>op_type<span class="op">:</span>LeakyRelu<span class="op">,</span> name<span class="op">:</span>LeakyRelu_26<span class="op">):</span> Inferred shape and existing shape differ <span class="kw">in</span> rank<span class="op">:</span> <span class="op">(</span>3<span class="op">)</span> vs <span class="op">(</span>2<span class="op">)</span></span>
<span id="cb2-2"><a href="#cb2-2" aria-hidden="true" tabindex="-1"></a>INTERNAL ERROR<span class="op">:</span> list index out of range</span></code></pre></div>
</section>
<section id="how-is-used-the-tflite-interpreter" class="level2">
<h2>How is the TFLite interpreter used?</h2>
<p>For the <a href="evaluation_metrics.html">built-in validation flow</a>, the TFLite interpreter is used to generate the predictions which are compared to the outputs of the generated c-model. As the X-CUBE-AI c-runtime inference engine is stateless for the supported operators, the <code>reset_all_variables()</code> method is called between two samples.</p>
</section>
<section id="tensorflow-keras-tf.keras-vs-keras.io" class="level2">
<h2>TensorFlow Keras (tf.keras) vs Keras.io</h2>
<p>The X-CUBE-AI 7.0 pack embeds only TensorFlow 2.5.0; the Keras.io module is no longer used. <code>tf.keras 2.5.0</code> is now used to import and to validate the Keras models. The previous <code>TF_KERAS=False</code> environment variable (up to X-CUBE-AI 5.2) can no longer be used to select the Keras.io back-end. This implies that a Keras model with an NCHW tensor format (channel-first) can no longer be validated: code can still be generated, but the validation flow comparing the original outputs against the outputs of the generated c-model can no longer be used. The following type of error message is generated during the execution of the provided model.</p>
<div class="sourceCode" id="cb3"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb3-1"><a href="#cb3-1" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb3-2"><a href="#cb3-2" aria-hidden="true" tabindex="-1"></a>INTERNAL ERROR<span class="op">:</span> The Conv2D op currently only supports the NHWC tensor format</span>
<span id="cb3-3"><a href="#cb3-3" aria-hidden="true" tabindex="-1"></a>                on the CPU<span class="op">.</span> The op was given the format<span class="op">:</span> NCHW <span class="op">[</span>Op<span class="op">:</span>Conv2D<span class="op">]</span></span>
<span id="cb3-4"><a href="#cb3-4" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span></code></pre></div>
<p>To validate the generated c-model, the user must provide a test data set. The <code>--no-exec-model</code> option can be used to avoid executing the imported model.</p>
<div class="sourceCode" id="cb4"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb4-1"><a href="#cb4-1" aria-hidden="true" tabindex="-1"></a><span class="op">&gt;</span> stm32ai validate <span class="op">-</span>m <span class="op">&lt;</span>keras_model_with_NCHW<span class="op">&gt;.</span><span class="fu">h5</span> <span class="op">-</span>vi test<span class="op">.</span><span class="fu">npz</span> <span class="op">--</span>no<span class="op">-</span>exec<span class="op">-</span>model</span>
<span id="cb4-2"><a href="#cb4-2" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb4-3"><a href="#cb4-3" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb4-4"><a href="#cb4-4" aria-hidden="true" tabindex="-1"></a>Evaluation report <span class="op">(</span>summary<span class="op">)</span></span>
<span id="cb4-5"><a href="#cb4-5" aria-hidden="true" tabindex="-1"></a><span class="op">------------------------------------------------------------------------------------------------------</span></span>
<span id="cb4-6"><a href="#cb4-6" aria-hidden="true" tabindex="-1"></a>Mode              acc      rmse      mae       l2r       tensor</span>
<span id="cb4-7"><a href="#cb4-7" aria-hidden="true" tabindex="-1"></a><span class="op">------------------------------------------------------------------------------------------------------</span></span>
<span id="cb4-8"><a href="#cb4-8" aria-hidden="true" tabindex="-1"></a>x86 C<span class="op">-</span>model <span class="co">#1    100.00%  0.000000  0.000000  0.000000  activation_6 [ai_float, (1, 1, 2), m_id=20]</span></span></code></pre></div>
</section>
<section id="it-is-possible-to-update-a-model-on-the-firmware-wo-having-to-do-a-full-firmware-update" class="level2">
<h2>Is it possible to update a model on the firmware w/o having to do a full firmware update?</h2>
<p><strong>Yes</strong> - The <code>--binary</code> and <code>--relocatable</code> options of the <code>generate</code> command allow the implementation of a simple or complete mechanism to upgrade a whole generated c-model w/o having to do a full firmware update (refer to <a href="command_line_interface.html">[CLI]</a> or <a href="relocatable.html">[RELOC]</a>, <em>&quot;Relocatable binary model support&quot;</em> article).</p>
</section>
<section id="keras-model-or-sequential-layer-support" class="level2">
<h2>Keras Model or Sequential layer support?</h2>
<p>Nested topologies are not supported. This can occur when the Keras functional API is used to build the network and a <code>Model</code> object is called as a layer.</p>
<p>A possible work-around is to convert the Keras model to a TensorFlow lite model, if all operators are supported.</p>
<div class="sourceCode" id="cb5"><pre class="sourceCode python"><code class="sourceCode python">import tensorflow as tf  # TF 2.x API; from_keras_model_file is TF 1.x only
model = tf.keras.models.load_model(&lt;keras_model_path&gt;)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()</code></pre></div>
</section>
<section id="is-it-possible-to-split-the-weights-buffer" class="level2">
<h2>Is it possible to split the weights buffer?</h2>
<p><strong>Yes,</strong> by <a href="embedded_client_api.html#ref_split_weights">weights/bias tensors</a>; see the <code>--split-weights</code> option (refer to <a href="command_line_interface.html#common-arguments">[CLI]</a>).</p>
</section>
<section id="is-it-possible-to-place-the-activations-buffer-in-different-memory-segments" class="level2">
<h2>Is it possible to place the “activations” buffer in different memory segments?</h2>
<p><strong>No,</strong> only a single contiguous memory-mapped buffer can be provided by the AI application (refer to <a href="embedded_client_api.html#sec_data_placement">[API]</a>).</p>
</section>
<section id="how-to-compress-the-non-densefully-connected-layers" class="level2">
<h2>How to compress the non-dense/fully-connected layers?</h2>
<p>Only 32-bit floating-point dense/fully-connected layers can be compressed.</p>
</section>
<section id="is-it-possible-to-apply-a-compression-factor-different-of-x8-x4" class="level2">
<h2>Is it possible to apply a compression factor different from x8 or x4?</h2>
<p><strong>No.</strong> The underlying weight-sharing algorithm (K-means clustering) is based on a dictionary with 16 (x8) or 256 (x4) entries. The global gain therefore depends on the number of parameters versus the dictionary size. Note that for a given layer, the bias parameters are not necessarily compressed; compression is applied per tensor.</p>
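<p>The principle of the weight-sharing scheme can be illustrated with a small NumPy sketch (a naive K-means, not the actual X-CUBE-AI implementation): each 32-bit weight is replaced by a 4-bit index (x8 case) into a 16-entry dictionary, so the gain tends towards x8 as the tensor grows.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=1024).astype(np.float32)  # fp32 weight tensor

# Naive K-means: build a 16-entry dictionary (x8 case).
centroids = np.linspace(weights.min(), weights.max(), 16)
for _ in range(10):
    # Assign each weight to its nearest dictionary entry
    idx = np.abs(weights[:, None] - centroids[None, :]).argmin(axis=1)
    for k in range(16):
        if np.any(idx == k):
            centroids[k] = weights[idx == k].mean()

# 1024 x 32-bit weights -> 1024 x 4-bit indices + 16 x 32-bit dictionary
original_bits = weights.size * 32
compressed_bits = weights.size * 4 + centroids.size * 32
print(f"gain: x{original_bits / compressed_bits:.2f}")  # close to x8
```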
</section>
<section id="how-to-specify-or-to-indicate-a-compression-factor-by-layer" class="level2">
<h2>How to specify a compression factor per layer?</h2>
<p>By default, the compression process tries to apply the same compression factor (<em>x4</em> or <em>x8</em>) globally, to all dense/fully-connected layers. If the global accuracy is impacted too much, the user can refine the expected compression factor layer by layer.</p>
<p>A JSON file must be defined to indicate the compression factor (<em>8</em> or <em>4</em>) to apply to a given layer. The layer is specified by its original name.</p>
<p>Example of configuration file:</p>
<div class="sourceCode" id="cb6"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb6-1"><a href="#cb6-1" aria-hidden="true" tabindex="-1"></a><span class="fu">{</span></span>
<span id="cb6-2"><a href="#cb6-2" aria-hidden="true" tabindex="-1"></a>    <span class="dt">&quot;layers&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb6-3"><a href="#cb6-3" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;dense_1&quot;</span><span class="fu">:</span> <span class="fu">{</span><span class="dt">&quot;factor&quot;</span><span class="fu">:</span> <span class="dv">8</span><span class="fu">},</span></span>
<span id="cb6-4"><a href="#cb6-4" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;dense_2&quot;</span><span class="fu">:</span> <span class="fu">{</span><span class="dt">&quot;factor&quot;</span><span class="fu">:</span> <span class="dv">4</span><span class="fu">},</span></span>
<span id="cb6-5"><a href="#cb6-5" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;dense_3&quot;</span><span class="fu">:</span> <span class="fu">{</span><span class="dt">&quot;factor&quot;</span><span class="fu">:</span> <span class="dv">8</span><span class="fu">}</span></span>
<span id="cb6-6"><a href="#cb6-6" aria-hidden="true" tabindex="-1"></a>    <span class="fu">}</span></span>
<span id="cb6-7"><a href="#cb6-7" aria-hidden="true" tabindex="-1"></a><span class="fu">}</span></span></code></pre></div>
<p>The option <code>-c/--compress</code> can be used to pass the configuration file.</p>
<div class="sourceCode" id="cb7"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb7-1"><a href="#cb7-1" aria-hidden="true" tabindex="-1"></a>$ stm32ai analyze <span class="op">-</span>m <span class="op">&lt;</span>model_file<span class="op">&gt;</span> <span class="op">-</span>c <span class="op">&lt;</span>conf_file<span class="op">&gt;.</span><span class="fu">json</span></span></code></pre></div>
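<p>Such a configuration file can also be generated programmatically with the standard <code>json</code> module; a minimal sketch (the layer names and the output file name are illustrative, and the names must match the original model):</p>

```python
import json

# Per-layer compression factors; keys must match the original layer names.
config = {
    "layers": {
        "dense_1": {"factor": 8},
        "dense_2": {"factor": 4},
        "dense_3": {"factor": 8},
    }
}

with open("compress_conf.json", "w") as f:
    json.dump(config, f, indent=4)
```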
</section>
<section id="why-a-small-negative-ratio-is-reported-for-the-weights-size-with-a-model-wo-compression" class="level2">
<h2>Why is a small negative ratio reported for the weights size with a model w/o compression?</h2>
<p>The X-CUBE-AI optimizer implements different merging engines, allowing for example a batch-normalization layer to be folded into the previous layer. In this case, the reported value takes these optimizations into account: the parameters of the batch-normalization layer are removed in the generated C-model.</p>
<div class="sourceCode" id="cb8"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb8-1"><a href="#cb8-1" aria-hidden="true" tabindex="-1"></a>$ stm32ai analyze <span class="op">-</span>m ds_cnn<span class="op">.</span><span class="fu">h5</span></span>
<span id="cb8-2"><a href="#cb8-2" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb8-3"><a href="#cb8-3" aria-hidden="true" tabindex="-1"></a>weights <span class="op">(</span>ro<span class="op">)</span>       <span class="op">:</span> 159<span class="op">,</span>536 <span class="op">(</span>155<span class="op">.</span><span class="fu">80</span> KiB<span class="op">)</span> <span class="op">(-</span>0<span class="op">.</span><span class="fu">64</span><span class="op">%)</span></span>
<span id="cb8-4"><a href="#cb8-4" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span></code></pre></div>
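<p>The batch-normalization folding itself can be sketched with NumPy: in inference mode, BN is a per-channel affine transform, so it can be merged into the weights and bias of the previous dense/convolution layer (the shapes below are illustrative, not taken from a real model):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 4)), rng.normal(size=4)    # dense layer
gamma, beta = rng.normal(size=4), rng.normal(size=4)  # BN scale/offset
mean, var, eps = rng.normal(size=4), rng.uniform(0.5, 2.0, size=4), 1e-3

# Fold the BN parameters into the dense weights/bias
scale = gamma / np.sqrt(var + eps)
W_folded = W * scale                  # broadcast over output channels
b_folded = (b - mean) * scale + beta

x = rng.normal(size=(1, 8))
y_ref = ((x @ W + b) - mean) * scale + beta   # dense + BN
y_folded = x @ W_folded + b_folded            # folded dense only
assert np.allclose(y_ref, y_folded)
```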
</section>
<section id="is-it-possible-to-dumpcapture-the-intermediate-values-during-the-execution-of-the-inference" class="level2">
<h2>Is it possible to dump/capture the intermediate values during the execution of the inference?</h2>
<p><strong>Yes</strong>, thanks to the Platform Observer API (refer to <a href="api_platform_observer.html">[API]</a>, <em>“Platform Observer API”</em> section) in a C environment (STM32 or x86), or in a Python environment with the <code>ai_runner</code> module (refer to the <a href="how_to_run_a_model_locally.html">[C-RUN] “How to run locally a c-model”</a> article).</p>
<!-- External ST resources/links -->
<!-- Internal resources/links -->
<!-- External resources/links -->
<!-- Cross references -->
</section>
<section id="references" class="level1">
<h1>References</h1>
<table>
<colgroup>
<col style="width: 18%" />
<col style="width: 81%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">ref</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">[DS]</td>
<td style="text-align: left;">X-CUBE-AI - AI expansion pack for STM32CubeMX <a href="https://www.st.com/en/embedded-software/x-cube-ai.html">https://www.st.com/en/embedded-software/x-cube-ai.html</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[UM]</td>
<td style="text-align: left;">User manual - Getting started with X-CUBE-AI Expansion Package for Artificial Intelligence (AI) <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">(pdf)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[CLI]</td>
<td style="text-align: left;">stm32ai - Command Line Interface <a href="command_line_interface.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[API]</td>
<td style="text-align: left;">Embedded inference client API <a href="embedded_client_api.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[METRIC]</td>
<td style="text-align: left;">Evaluation report and metrics <a href="evaluation_metrics.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[TFL]</td>
<td style="text-align: left;">TensorFlow Lite toolbox <a href="supported_ops_tflite.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[KERAS]</td>
<td style="text-align: left;">Keras toolbox <a href="supported_ops_keras.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[ONNX]</td>
<td style="text-align: left;">ONNX toolbox <a href="supported_ops_onnx.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[FAQS]</td>
<td style="text-align: left;">FAQ <a href="faq_generic.html">generic</a>, <a href="faq_validation.html">validation</a>, <a href="faq_quantization.html">quantization</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[QUANT]</td>
<td style="text-align: left;">Quantization and quantize command <a href="quantization.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[RELOC]</td>
<td style="text-align: left;">Relocatable binary network support <a href="relocatable.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[CUST]</td>
<td style="text-align: left;">Support of the Keras Lambda/custom layers <a href="keras_lambda_custom.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[TFLM]</td>
<td style="text-align: left;">TensorFlow Lite for Microcontroller support <a href="tflite_micro_support.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[INST]</td>
<td style="text-align: left;">Setting the environment <a href="setting_env.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[OBS]</td>
<td style="text-align: left;">Platform Observer API <a href="api_platform_observer.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[C-RUN]</td>
<td style="text-align: left;">Executing locally a generated c-model <a href="how_to_run_a_model_locally.html">(link)</a></td>
</tr>
</tbody>
</table>
</section>



<section class="st_footer">

<h1> <br> </h1>

<p style="font-family:verdana; text-align:left;">
 Embedded Documentation 

	- <b> FAQ - Generic aspects </b>
			<br> X-CUBE-AI Expansion Package
	 
			<br> r3.1
		 - AI PLATFORM r7.0.0
			 (Embedded Inference Client API 1.1.0) 
			 - Command Line Interface r1.5.1 
		
	
</p>


<div class="st_notice">
Information in this document is provided solely in connection with ST products.
The contents of this document are subject to change without prior notice.
<br>
© Copyright STMicroelectronics 2020. All rights reserved. <a href="http://www.st.com">www.st.com</a>
</div>

<hr size="1" />
</section>


</article>
</body>

</html>
