<!DOCTYPE html>
<!--

	Modified template for STM32CubeMX.AI purpose

	d0.1: 	jean-michel.delorme@st.com
			add ST logo and ST footer

	d2.0: 	jean-michel.delorme@st.com
			add sidenav support

	d2.1: 	jean-michel.delorme@st.com
			clean-up + optional ai_logo/ai meta data
			
==============================================================================
           "GitHub HTML5 Pandoc Template" v2.1 — by Tristano Ajmone           
==============================================================================
Copyright © Tristano Ajmone, 2017, MIT License (MIT). Project's home:

- https://github.com/tajmone/pandoc-goodies

The CSS in this template reuses source code taken from the following projects:

- GitHub Markdown CSS: Copyright © Sindre Sorhus, MIT License (MIT):
  https://github.com/sindresorhus/github-markdown-css

- Primer CSS: Copyright © 2016-2017 GitHub Inc., MIT License (MIT):
  http://primercss.io/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The MIT License 

Copyright (c) Tristano Ajmone, 2017 (github.com/tajmone/pandoc-goodies)
Copyright (c) Sindre Sorhus <sindresorhus@gmail.com> (sindresorhus.com)
Copyright (c) 2017 GitHub Inc.

"GitHub Pandoc HTML5 Template" is Copyright (c) Tristano Ajmone, 2017, released
under the MIT License (MIT); it contains readaptations of substantial portions
of the following third party softwares:

(1) "GitHub Markdown CSS", Copyright (c) Sindre Sorhus, MIT License (MIT).
(2) "Primer CSS", Copyright (c) 2016 GitHub Inc., MIT License (MIT).

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
==============================================================================-->
<html>
<head>
  <meta charset="utf-8" />
  <meta name="generator" content="pandoc" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
  <meta name="keywords" content="STM32CubeMX, X-CUBE-AI, Neural Network, Quantization, CLI, Code Generator, Automatic NN mapping tools" />
  <title>Quantized model and quantize command</title>
  <style type="text/css">
.markdown-body{
	-ms-text-size-adjust:100%;
	-webkit-text-size-adjust:100%;
	color:#24292e;
	font-family:-apple-system,system-ui,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol";
	font-size:16px;
	line-height:1.5;
	word-wrap:break-word;
	box-sizing:border-box;
	min-width:200px;
	max-width:980px;
	margin:0 auto;
	padding:45px;
	}
.markdown-body a{
	color:#0366d6;
	background-color:transparent;
	text-decoration:none;
	-webkit-text-decoration-skip:objects}
.markdown-body a:active,.markdown-body a:hover{
	outline-width:0}
.markdown-body a:hover{
	text-decoration:underline}
.markdown-body a:not([href]){
	color:inherit;text-decoration:none}
.markdown-body strong{font-weight:600}
.markdown-body h1,.markdown-body h2,.markdown-body h3,.markdown-body h4,.markdown-body h5,.markdown-body h6{
	margin-top:24px;
	margin-bottom:16px;
	font-weight:600;
	line-height:1.25}
.markdown-body h1{
	font-size:2em;
	margin:.67em 0;
	padding-bottom:.3em;
	border-bottom:1px solid #eaecef}
.markdown-body h2{
	padding-bottom:.3em;
	font-size:1.5em;
	border-bottom:1px solid #eaecef}
.markdown-body h3{font-size:1.25em}
.markdown-body h4{font-size:1em}
.markdown-body h5{font-size:.875em}
.markdown-body h6{font-size:.85em;color:#6a737d}
.markdown-body img{border-style:none}
.markdown-body svg:not(:root){
	overflow:hidden}
.markdown-body hr{
	box-sizing:content-box;
	height:.25em;
	margin:24px 0;
	padding:0;
	overflow:hidden;
	background-color:#e1e4e8;
	border:0}
.markdown-body hr::before{display:table;content:""}
.markdown-body hr::after{display:table;clear:both;content:""}
.markdown-body input{margin:0;overflow:visible;font:inherit;font-family:inherit;font-size:inherit;line-height:inherit}
.markdown-body [type=checkbox]{box-sizing:border-box;padding:0}
.markdown-body *{box-sizing:border-box}.markdown-body blockquote{margin:0}
.markdown-body ol,.markdown-body ul{padding-left:2em}
.markdown-body ol ol,.markdown-body ul ol{list-style-type:lower-roman}
.markdown-body ol ol,.markdown-body ol ul,.markdown-body ul ol,.markdown-body ul ul{margin-top:0;margin-bottom:0}
.markdown-body ol ol ol,.markdown-body ol ul ol,.markdown-body ul ol ol,.markdown-body ul ul ol{list-style-type:lower-alpha}
.markdown-body li>p{margin-top:16px}
.markdown-body li+li{margin-top:.25em}
.markdown-body dd{margin-left:0}
.markdown-body dl{padding:0}
.markdown-body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:600}
.markdown-body dl dd{padding:0 16px;margin-bottom:16px}
.markdown-body code{font-family:SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace}
.markdown-body pre{font:12px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;word-wrap:normal}
.markdown-body blockquote,.markdown-body dl,.markdown-body ol,.markdown-body p,.markdown-body pre,.markdown-body table,.markdown-body ul{margin-top:0;margin-bottom:16px}
.markdown-body blockquote{padding:0 1em;color:#6a737d;border-left:.25em solid #dfe2e5}
.markdown-body blockquote>:first-child{margin-top:0}
.markdown-body blockquote>:last-child{margin-bottom:0}
.markdown-body table{display:block;width:100%;overflow:auto;border-spacing:0;border-collapse:collapse}
.markdown-body table th{font-weight:600}
.markdown-body table td,.markdown-body table th{padding:6px 13px;border:1px solid #dfe2e5}
.markdown-body table tr{background-color:#fff;border-top:1px solid #c6cbd1}
.markdown-body table tr:nth-child(2n){background-color:#f6f8fa}
.markdown-body img{max-width:100%;box-sizing:content-box;background-color:#fff}
.markdown-body code{padding:.2em 0;margin:0;font-size:85%;background-color:rgba(27,31,35,.05);border-radius:3px}
.markdown-body code::after,.markdown-body code::before{letter-spacing:-.2em;content:"\00a0"}
.markdown-body pre>code{padding:0;margin:0;font-size:100%;word-break:normal;white-space:pre;background:0 0;border:0}
.markdown-body .highlight{margin-bottom:16px}
.markdown-body .highlight pre{margin-bottom:0;word-break:normal}
.markdown-body .highlight pre,.markdown-body pre{padding:16px;overflow:auto;font-size:85%;line-height:1.45;background-color:#f6f8fa;border-radius:3px}
.markdown-body pre code{display:inline;max-width:auto;padding:0;margin:0;overflow:visible;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}
.markdown-body pre code::after,.markdown-body pre code::before{content:normal}
.markdown-body .full-commit .btn-outline:not(:disabled):hover{color:#005cc5;border-color:#005cc5}
.markdown-body kbd{box-shadow:inset 0 -1px 0 #959da5;display:inline-block;padding:3px 5px;font:11px/10px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;color:#444d56;vertical-align:middle;background-color:#fcfcfc;border:1px solid #c6cbd1;border-bottom-color:#959da5;border-radius:3px;box-shadow:inset 0 -1px 0 #959da5}
.markdown-body :checked+.radio-label{position:relative;z-index:1;border-color:#0366d6}
.markdown-body .task-list-item{list-style-type:none}
.markdown-body .task-list-item+.task-list-item{margin-top:3px}
.markdown-body .task-list-item input{margin:0 .2em .25em -1.6em;vertical-align:middle}
.markdown-body::before{display:table;content:""}
.markdown-body::after{display:table;clear:both;content:""}
.markdown-body>:first-child{margin-top:0!important}
.markdown-body>:last-child{margin-bottom:0!important}
.Alert,.Error,.Note,.Success,.Warning,.Tips,.HTips{padding:11px;margin-bottom:24px;border-style:solid;border-width:1px;border-radius:4px}
.Alert p,.Error p,.Note p,.Success p,.Warning p,.Tips p,.HTips p{margin-top:0}
.Alert p:last-child,.Error p:last-child,.Note p:last-child,.Success p:last-child,.Warning p:last-child,.Tips p:last-child,.HTips p:last-child{margin-bottom:0}
.Alert{color:#246;background-color:#e2eef9;border-color:#bac6d3}
.Warning{color:#4c4a42;background-color:#fff9ea;border-color:#dfd8c2}
.Error{color:#911;background-color:#fcdede;border-color:#d2b2b2}
.Success{color:#22662c;background-color:#e2f9e5;border-color:#bad3be}
.Note{color:#2f363d;background-color:#f6f8fa;border-color:#d5d8da}
.Alert h1,.Alert h2,.Alert h3,.Alert h4,.Alert h5,.Alert h6{color:#246;margin-bottom:0}
.Warning h1,.Warning h2,.Warning h3,.Warning h4,.Warning h5,.Warning h6{color:#4c4a42;margin-bottom:0}
.Error h1,.Error h2,.Error h3,.Error h4,.Error h5,.Error h6{color:#911;margin-bottom:0}
.Success h1,.Success h2,.Success h3,.Success h4,.Success h5,.Success h6{color:#22662c;margin-bottom:0}
.Note h1,.Note h2,.Note h3,.Note h4,.Note h5,.Note h6{color:#2f363d;margin-bottom:0}
.Tips h1,.Tips h2,.Tips h3,.Tips h4,.Tips h5,.Tips h6{color:#2f363d;margin-bottom:0}
.HTips h1,.HTips h2,.HTips h3,.HTips h4,.HTips h5,.HTips h6{color:#2f363d;margin-bottom:0}
.Tips h1:first-child,.Tips h2:first-child,.Tips h3:first-child,.Tips h4:first-child,.Tips h5:first-child,.Tips h6:first-child,.Alert h1:first-child,.Alert h2:first-child,.Alert h3:first-child,.Alert h4:first-child,.Alert h5:first-child,.Alert h6:first-child,.Error h1:first-child,.Error h2:first-child,.Error h3:first-child,.Error h4:first-child,.Error h5:first-child,.Error h6:first-child,.Note h1:first-child,.Note h2:first-child,.Note h3:first-child,.Note h4:first-child,.Note h5:first-child,.Note h6:first-child,.Success h1:first-child,.Success h2:first-child,.Success h3:first-child,.Success h4:first-child,.Success h5:first-child,.Success h6:first-child,.Warning h1:first-child,.Warning h2:first-child,.Warning h3:first-child,.Warning h4:first-child,.Warning h5:first-child,.Warning h6:first-child{margin-top:0}
h1.title,p.subtitle{text-align:center}
h1.title.followed-by-subtitle{margin-bottom:0}
p.subtitle{font-size:1.5em;font-weight:600;line-height:1.25;margin-top:0;margin-bottom:16px;padding-bottom:.3em}
div.line-block{white-space:pre-line}
  </style>
  <style type="text/css">code{white-space: pre;}</style>
  <style type="text/css">
	pre > code.sourceCode { white-space: pre; position: relative; }
 pre > code.sourceCode > span { display: inline-block; line-height: 1.25; }
 pre > code.sourceCode > span:empty { height: 1.2em; }
 .sourceCode { overflow: visible; }
 code.sourceCode > span { color: inherit; text-decoration: inherit; }
 div.sourceCode { margin: 1em 0; }
 pre.sourceCode { margin: 0; }
 @media screen {
 div.sourceCode { overflow: auto; }
 }
 @media print {
 pre > code.sourceCode { white-space: pre-wrap; }
 pre > code.sourceCode > span { text-indent: -5em; padding-left: 5em; }
 }
 pre.numberSource code
   { counter-reset: source-line 0; }
 pre.numberSource code > span
   { position: relative; left: -4em; counter-increment: source-line; }
 pre.numberSource code > span > a:first-child::before
   { content: counter(source-line);
     position: relative; left: -1em; text-align: right; vertical-align: baseline;
     border: none; display: inline-block;
     -webkit-touch-callout: none; -webkit-user-select: none;
     -khtml-user-select: none; -moz-user-select: none;
     -ms-user-select: none; user-select: none;
     padding: 0 4px; width: 4em;
     background-color: #ffffff;
     color: #a0a0a0;
   }
 pre.numberSource { margin-left: 3em; border-left: 1px solid #a0a0a0;  padding-left: 4px; }
 div.sourceCode
   { color: #1f1c1b; background-color: #ffffff; }
 @media screen {
 pre > code.sourceCode > span > a:first-child::before { text-decoration: underline; }
 }
 code span { color: #1f1c1b; } /* Normal */
 code span.al { color: #bf0303; background-color: #f7e6e6; font-weight: bold; } /* Alert */
 code span.an { color: #ca60ca; } /* Annotation */
 code span.at { color: #0057ae; } /* Attribute */
 code span.bn { color: #b08000; } /* BaseN */
 code span.bu { color: #644a9b; font-weight: bold; } /* BuiltIn */
 code span.cf { color: #1f1c1b; font-weight: bold; } /* ControlFlow */
 code span.ch { color: #924c9d; } /* Char */
 code span.cn { color: #aa5500; } /* Constant */
 code span.co { color: #898887; } /* Comment */
 code span.cv { color: #0095ff; } /* CommentVar */
 code span.do { color: #607880; } /* Documentation */
 code span.dt { color: #0057ae; } /* DataType */
 code span.dv { color: #b08000; } /* DecVal */
 code span.er { color: #bf0303; text-decoration: underline; } /* Error */
 code span.ex { color: #0095ff; font-weight: bold; } /* Extension */
 code span.fl { color: #b08000; } /* Float */
 code span.fu { color: #644a9b; } /* Function */
 code span.im { color: #ff5500; } /* Import */
 code span.in { color: #b08000; } /* Information */
 code span.kw { color: #1f1c1b; font-weight: bold; } /* Keyword */
 code span.op { color: #1f1c1b; } /* Operator */
 code span.ot { color: #006e28; } /* Other */
 code span.pp { color: #006e28; } /* Preprocessor */
 code span.re { color: #0057ae; background-color: #e0e9f8; } /* RegionMarker */
 code span.sc { color: #3daee9; } /* SpecialChar */
 code span.ss { color: #ff5500; } /* SpecialString */
 code span.st { color: #bf0303; } /* String */
 code span.va { color: #0057ae; } /* Variable */
 code span.vs { color: #bf0303; } /* VerbatimString */
 code span.wa { color: #bf0303; } /* Warning */
  </style>
  <link rel="stylesheet" href="data:text/css,%3Aroot%20%7B%2D%2Dmain%2Ddarkblue%2Dcolor%3A%20rgb%283%2C35%2C75%29%3B%20%2D%2Dmain%2Dlightblue%2Dcolor%3A%20rgb%2860%2C180%2C230%29%3B%20%2D%2Dmain%2Dpink%2Dcolor%3A%20rgb%28230%2C0%2C126%29%3B%20%2D%2Dmain%2Dyellow%2Dcolor%3A%20rgb%28255%2C210%2C0%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%3A%20rgb%2870%2C70%2C80%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%2D25%3A%20rgb%28209%2C209%2C211%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%3A%20rgb%28233%2C233%2C234%29%3B%20%2D%2Dsecondary%2Dlightgreen%2Dcolor%3A%20rgb%2873%2C177%2C112%29%3B%20%2D%2Dsecondary%2Dpurple%2Dcolor%3A%20rgb%28140%2C0%2C120%29%3B%20%2D%2Dsecondary%2Ddarkgreen%2Dcolor%3A%20rgb%284%2C87%2C47%29%3B%20%2D%2Dsidenav%2Dfont%2Dsize%3A%2090%25%3B%7Dhtml%20%7Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3B%7D%2A%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3B%7D%2Est%5Fheader%20h1%2Etitle%2C%2Est%5Fheader%20p%2Esubtitle%20%7Btext%2Dalign%3A%20left%3B%7D%2Est%5Fheader%20h1%2Etitle%20%7Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bmargin%2Dbottom%3A5px%3B%7D%2Est%5Fheader%20p%2Esubtitle%20%7Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dsize%3A90%25%3B%7D%2Est%5Fheader%20h1%2Etitle%2Efollowed%2Dby%2Dsubtitle%20%7Bborder%2Dbottom%3A2px%20solid%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bmargin%2Dbottom%3A5px%3B%7D%2Est%5Fheader%20p%2Erevision%20%7Bdisplay%3A%20inline%2Dblock%3Bwidth%3A70%25%3B%7D%2Est%5Fheader%20div%2Eauthor%20%7Bfont%2Dstyle%3A%20italic%3B%7D%2Est%5Fheader%20div%2Esummary%20%7Bborder%2Dtop%3A%20solid%201px%20%23C0C0C0%3Bbackground%3A%20%23ECECEC%3Bpadding%3A%205px%3B%7D%2Est%5Ffooter%20%7Bfont%2Dsize%3A80%25%3B%7D%2Est%5Ffooter%20img%20%7Bfloat%3A%20right%3B%7D%2Est%5Ffooter%20%2Est%5Fnotice%20%7Bwidth%3A80%25%3B%7D%2Emarkdown%2Dbody%20%23header%2Dsection%2Dnumber%20%7Bfont%2Dsize%3A120%25%3B%7D%2Emarkdown%2Dbody%20h1%20%7Bborder%2Dbottom%3A1px%20solid%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%2
9%3Bpadding%2Dbottom%3A%202px%3Bpadding%2Dtop%3A%2010px%3B%7D%2Emarkdown%2Dbody%20h2%20%7Bpadding%2Dbottom%3A%205px%3Bpadding%2Dtop%3A%2010px%3B%7D%2Emarkdown%2Dbody%20h2%20code%20%7Bbackground%2Dcolor%3A%20rgb%28255%2C%20255%2C%20255%29%3B%7D%23func%2EsourceCode%20%7Bborder%2Dleft%2Dstyle%3A%20solid%3Bborder%2Dcolor%3A%20rgb%280%2C%2032%2C%2082%29%3Bborder%2Dcolor%3A%20rgb%28255%2C%20244%2C%20191%29%3Bborder%2Dwidth%3A%208px%3Bpadding%3A0px%3B%7Dpre%20%3E%20code%20%7Bborder%3A%20solid%201px%20blue%3Bfont%2Dsize%3A60%25%3B%7DcodeXX%20%7Bborder%3A%20solid%201px%20blue%3Bfont%2Dsize%3A60%25%3B%7D%23func%2EsourceXXCode%3A%3Abefore%20%7Bcontent%3A%20%22Synopsis%22%3Bpadding%2Dleft%3A10px%3Bfont%2Dweight%3A%20bold%3B%7Dfigure%20%7Bpadding%3A0px%3Bmargin%2Dleft%3A5px%3Bmargin%2Dright%3A5px%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3B%7Dimg%5Bdata%2Dproperty%3D%22center%22%5D%20%7Bdisplay%3A%20block%3Bmargin%2Dtop%3A%2010px%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bpadding%3A%2010px%3B%7Dfigcaption%20%7Btext%2Dalign%3Aleft%3B%20%20border%2Dtop%3A%201px%20dotted%20%23888%3Bpadding%2Dbottom%3A%2020px%3Bmargin%2Dtop%3A%2010px%3B%7Dh1%20code%2C%20h2%20code%20%7Bfont%2Dsize%3A120%25%3B%7D%09%2Emarkdown%2Dbody%20table%20%7Bwidth%3A%20100%25%3Bmargin%2Dleft%3Aauto%3Bmargin%2Dright%3Aauto%3B%7D%2Emarkdown%2Dbody%20img%20%7Bborder%2Dradius%3A%204px%3Bpadding%3A%205px%3Bdisplay%3A%20block%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bwidth%3A%20auto%3B%7D%2Emarkdown%2Dbody%20%2Est%5Fheader%20img%2C%20%2Emarkdown%2Dbody%20%7Bborder%3A%20none%3Bborder%2Dradius%3A%20none%3Bpadding%3A%205px%3Bdisplay%3A%20block%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bwidth%3A%20auto%3Bbox%2Dshadow%3A%20none%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2010px%3Bwidth%3A%20auto%3Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3Bcolor%3A%20%2303234B%3Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%7D%2Emarkdown%2Dbody%20h1%2C%20%2Emarkdown%
2Dbody%20h2%2C%20%2Emarkdown%2Dbody%20h3%20%7B%20%20%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%7D%2Emarkdown%2Dbody%3Ahover%20%7B%7D%2Emarkdown%2Dbody%20%2Econtents%20%7B%7D%2Emarkdown%2Dbody%20%2Etoc%2Dtitle%20%7B%7D%2Emarkdown%2Dbody%20%2Econtents%20li%20%7Blist%2Dstyle%2Dtype%3A%20none%3B%7D%2Emarkdown%2Dbody%20%2Econtents%20ul%20%7Bpadding%2Dleft%3A%2010px%3B%7D%2Emarkdown%2Dbody%20%2Econtents%20a%20%7Bcolor%3A%20%233CB4E6%3B%20%7D%2Emarkdown%2Dbody%20table%20%2Eheader%20%7Bbackground%2Dcolor%3A%20var%28%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%29%3Bborder%2Dbottom%3A1px%20solid%3Bborder%2Dtop%3A1px%20solid%3Bfont%2Dsize%3A%2090%25%3B%7D%2Emarkdown%2Dbody%20table%20th%20%7Bfont%2Dweight%3A%20bolder%3B%20%7D%2Emarkdown%2Dbody%20table%20td%20%7Bfont%2Dsize%3A%2090%25%3B%7D%2Emarkdown%2Dbody%20code%7Bpadding%3A%200%3Bmargin%3A0%3Bfont%2Dsize%3A95%25%3Bbackground%2Dcolor%3Argba%2827%2C31%2C35%2C%2E05%29%3Bborder%2Dradius%3A1px%3B%7D%2Et01%20%7Bwidth%3A%20100%25%3Bborder%3A%20None%3Btext%2Dalign%3A%20left%3B%7D%2ETips%20%7Bpadding%3A11px%3Bmargin%2Dbottom%3A24px%3Bborder%2Dstyle%3Asolid%3Bborder%2Dwidth%3A1px%3Bborder%2Dradius%3A1px%7D%2ETips%20%7Bcolor%3A%232f363d%3B%20background%2Dcolor%3A%20%23f6f8fa%3Bborder%2Dcolor%3A%23d5d8da%3Bborder%2Dtop%3A1px%20solid%3Bborder%2Dbottom%3A1px%20solid%3B%7D%2EHTips%20%7Bpadding%3A11px%3Bmargin%2Dbottom%3A24px%3Bborder%2Dstyle%3Asolid%3Bborder%2Dwidth%3A1px%3Bborder%2Dradius%3A1px%7D%2EHTips%20%7Bcolor%3A%232f363d%3B%20background%2Dcolor%3A%23fff9ea%3Bborder%2Dcolor%3A%23d5d8da%3Bborder%2Dtop%3A1px%20solid%3Bborder%2Dbottom%3A1px%20solid%3B%7D%2EHTips%20h1%2C%2EHTips%20h2%2C%2EHTips%20h3%2C%2EHTips%20h4%2C%2EHTips%20h5%2C%2EHTips%20h6%20%7Bcolor%3A%232f363d%3Bmargin%2Dbottom%3A0%7D%2Esidenav%20%7Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3B%20%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bheight%3A%20100%25%3Bposition%3A%20fixed%3Bz%2Dindex%3A%201%3Btop%3A%200%3Bleft%3A%200%3Bmargin%2Dright%3A%2010px%3Bmargin
%2Dleft%3A%2010px%3B%20overflow%2Dx%3A%20hidden%3B%7D%2Esidenav%20hr%2Enew1%20%7Bborder%2Dwidth%3A%20thin%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Dlightblue%2Dcolor%29%3Bmargin%2Dright%3A%2010px%3Bmargin%2Dtop%3A%20%2D10px%3B%7D%2Esidenav%20%23sidenav%5Fheader%20%7Bmargin%2Dtop%3A%2010px%3Bborder%3A%201px%3Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Dlightblue%2Dcolor%29%3B%7D%2Esidenav%20%23sidenav%5Fheader%20img%20%7Bfloat%3A%20left%3B%7D%2Esidenav%20%23sidenav%5Fheader%20a%20%7Bmargin%2Dleft%3A%200px%3Bmargin%2Dright%3A%200px%3Bpadding%2Dleft%3A%200px%3B%7D%2Esidenav%20%23sidenav%5Fheader%20a%3Ahover%20%7Bbackground%2Dsize%3A%20auto%3Bcolor%3A%20%23FFD200%3B%20%7D%2Esidenav%20%23sidenav%5Fheader%20a%3Aactive%20%7B%20%20%7D%2Esidenav%20%3E%20ul%20%7Bbackground%2Dcolor%3A%20rgba%2857%2C%20169%2C%20220%2C%200%2E05%29%3B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bborder%2Dradius%3A%2010px%3Bpadding%2Dbottom%3A%2010px%3Bpadding%2Dtop%3A%2010px%3Bpadding%2Dright%3A%2010px%3Bmargin%2Dright%3A%2010px%3B%7D%2Esidenav%20a%20%7Bpadding%3A%202px%202px%3Btext%2Ddecoration%3A%20none%3Bfont%2Dsize%3A%20var%28%2D%2Dsidenav%2Dfont%2Dsize%29%3Bdisplay%3Atable%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%2C%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%7B%20padding%2Dright%3A%205px%3Bpadding%2Dleft%3A%205px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dweight%3A%20lighter%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dsize%3A%2080%25%3Bpadding%2Dleft%3A%2010px%3Btext%2Dalign%2Dlast%3A%20left%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20display%3A%20None%3B%7D%2Esidenav%20li%20%7Blist%2Dstyle%2Dtype%3A%20none%3B%7D%2Esidenav%20ul%20%7Bpadding%2Dleft%3A%200px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahove
r%2C%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bbackground%2Dcolor%3A%20var%28%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%29%3Bbackground%2Dclip%3A%20border%2Dbox%3Bmargin%2Dleft%3A%20%2D10px%3Bpadding%2Dleft%3A%2010px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bpadding%2Dright%3A%2015px%3Bwidth%3A%20230px%3B%09%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bpadding%2Dright%3A%2010px%3Bwidth%3A%20230px%3B%09%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Aactive%20%7B%20color%3A%20%23FFD200%3B%20%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Aactive%20%7B%20color%3A%20%23FFD200%3B%20%7D%2Esidenav%20code%20%7B%7D%2Esidenav%20%7Bwidth%3A%20280px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%20300px%3Bdisplay%3Ablock%3B%7D%2Emarkdown%2Dbody%20%2Eprint%2Dcontents%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%2Eprint%2Dtoc%2Dtitle%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmax%2Dwidth%3A%20980px%3Bmin%2Dwidth%3A%20200px%3Bpadding%3A%2040px%3Bborder%2Dstyle%3A%20solid%3Bborder%2Dstyle%3A%20outset%3Bborder%2Dcolor%3A%20rgba%28104%2C%20167%2C%20238%2C%200%2E089%29%3Bborder%2Dradius%3A%205px%3B%7D%40media%20screen%20and%20%28max%2Dheight%3A%20450px%29%20%7B%2Esidenav%20%7Bpadding%2Dtop%3A%2015px%3B%7D%2Esidenav%20a%20%7Bfont%2Dsize%3A%2018px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%20%7D%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2040px%3Bwidth%3A%20auto%3Bborder%3A%200px%3B%7D%7D%40media%20screen%20and%20%28max%2Dwidth%3A%201024px%29%20%7B%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2040px%3Bwidth%3A%20auto%3Bborder%3A%200px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%7D%7D%40media%20print%20%7B%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2010px%3Bwidth%3Aauto%3Bbord
er%3A%200px%3B%7D%40page%20%7Bsize%3A%20A4%3B%20%20margin%3A2cm%3Bpadding%3A2cm%3Bmargin%2Dtop%3A%201cm%3Bpadding%2Dbottom%3A%201cm%3B%7D%2A%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3Bfont%2Dsize%3A90%25%3B%7Da%20%7Bfont%2Dsize%3A%20100%25%3Bcolor%3A%20yellow%3B%7D%2Emarkdown%2Dbody%20article%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3Bfont%2Dsize%3A100%25%3B%7D%2Emarkdown%2Dbody%20p%20%7Bwindows%3A%202%3Borphans%3A%202%3B%7D%2Epagebreakerafter%20%7Bpage%2Dbreak%2Dafter%3A%20always%3Bpadding%2Dtop%3A10mm%3B%7D%2Epagebreakbefore%20%7Bpage%2Dbreak%2Dbefore%3A%20always%3B%7Dh1%2C%20h2%2C%20h3%2C%20h4%20%7Bpage%2Dbreak%2Dafter%3A%20avoid%3B%7Ddiv%2C%20code%2C%20blockquote%2C%20li%2C%20span%2C%20table%2C%20figure%20%7Bpage%2Dbreak%2Dinside%3A%20avoid%3B%7D%7D">
  <!--[if lt IE 9]>
    <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
  <![endif]-->





<link rel="shortcut icon" href="">

</head>



<body>

		<div class="sidenav">
		<div id="sidenav_header">
							<img src="" title="STM32CubeMX.AI logo" align="left" height="70" />
										<br />7.0.0<br />
										<a href="#doc_title"> Quantized model and quantize command </a>
					</div>
		<div id="sidenav_header_button">
			 
							<ul>
					<li><p><a id="index" href="index.html">[ Index ]</a></p></li>
				</ul>
						<hr class="new1">
		</div>	

		<ul>
  <li><a href="#introduction">Introduction</a>
  <ul>
  <li><a href="#quantized-models">Quantized models</a></li>
  <li><a href="#ref_support_arithmetic">Quantized tensors</a></li>
  <li><a href="#ref_tf_support">TFlite models</a></li>
  </ul></li>
  <li><a href="#ref_quantize_cmd">Quantize command</a>
  <ul>
  <li><a href="#overview">Overview</a></li>
  <li><a href="#examples">Examples</a></li>
  <li><a href="#ref_quant_conf_file">Post-training quantization configuration file</a></li>
  <li><a href="#ref_quant_flow">Keras Post-training quantization process</a></li>
  <li><a href="#ref_test_sets_loading">Test-set considerations</a></li>
  <li><a href="#ref_quant_algo">Quantizers</a></li>
  <li><a href="#ref_tensor_conf_file">Tensor format configuration file</a></li>
  <li><a href="#ref_quant_mnist">Quantize a MNIST model</a></li>
  </ul></li>
  <li><a href="#references">References</a></li>
  </ul>
	</div>
	<article id="sidenav" class="markdown-body">
		



<header>
<section class="st_header" id="doc_title">

<div class="himage">
	<img src="" title="STM32CubeMX.AI" align="right" height="70" />
	<img src="" title="STM32" align="right" height="90" />
</div>

<h1 class="title followed-by-subtitle">Quantized model and quantize command</h1>

	<p class="subtitle">X-CUBE-AI Expansion Package</p>

	<div class="revision">r3.0</div>

	<div class="ai_platform">
		AI PLATFORM r7.0.0
					(Embedded Inference Client API 1.1.0)
			</div>
			Command Line Interface r1.5.1
	




</section>
</header>
 




<section id="introduction" class="level1">
<h1>Introduction</h1>
<section id="quantized-models" class="level2">
<h2>Quantized models</h2>
<p>The X-CUBE-AI code generator can be used to deploy a quantized model (<a href="#ref_support_arithmetic">8b integer format</a>). Quantization (also called calibration) is an optimization technique that compresses a 32-bit floating-point model: it reduces the size (smaller storage size and lower peak memory usage at runtime) and improves CPU/MCU usage and latency (including power consumption), at the cost of a small degradation in accuracy. A quantized model executes some or all of its operations on tensors with integers rather than floating-point values. Quantization is an important part of the various optimization techniques (topology-oriented optimizations, feature-map reduction, pruning, weight compression, and so on) that can be applied to address a resource-constrained runtime environment.</p>
<p>There are two classical quantization methods: post-training quantization and quantization-aware training (QAT). The first is easier to use: it quantizes a pre-trained model with a limited but representative data set. Quantization-aware training is performed during the training process and often yields better model accuracy.</p>
<p>The CLI integrates an internal post-training quantization process (see <a href="#ref_quantize_cmd">“quantize” command</a> section) with different <a href="#ref_support_arithmetic">quantization schemes</a> for an already-trained Keras model.</p>
<figure>
<img src="" property="center" style="width:95.0%" />
</figure>
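<p>At its core, the post-training step derives quantization parameters from the value ranges observed on the representative data set. The following plain-Python sketch illustrates this generic idea for an asymmetric uint8 scheme; it is an illustration only, not the tool&#8217;s internal algorithm:</p>

```python
def calibrate_uint8(samples):
    """Derive (scale, zero_point) for asymmetric uint8 quantization
    from the min/max range observed on a representative data set.
    Generic sketch, not the X-CUBE-AI internal algorithm."""
    rmin = min(min(s) for s in samples)
    rmax = max(max(s) for s in samples)
    # The range must contain 0 so that zero is exactly representable.
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)
    scale = (rmax - rmin) / 255.0          # map the range onto [0, 255]
    zero_point = round(-rmin / scale)      # integer q that represents r = 0
    return scale, zero_point

# Hypothetical calibration batches with values in [-1.0, 2.0]
scale, zp = calibrate_uint8([[-1.0, 0.5], [2.0, 0.1]])
# r is then recovered as: r = scale * (q - zero_point)
```

<p>A larger and more representative calibration set tightens the observed range and therefore the achievable precision.</p>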
<p>X-CUBE-AI can import different types of quantized models:</p>
<ul>
<li>a Keras floating-point model associated with its <a href="#ref_tensor_conf_file">tensor format configuration</a> file. The conversion of each 32b float weight/bias tensor to the 8b integer format is performed directly by the importer, based on the provided settings.</li>
<li>a quantized TensorFlow Lite model (generated by a post-training or training-aware process). In this case, the calibration has been performed by the TensorFlow Lite framework, typically through the &#8220;TFLite converter&#8221; utility that exports the <a href="#ref_tf_support">TensorFlow Lite</a> file.</li>
</ul>
<p>For a given operator, both the weights and the activations should be quantized. The full 8b integer format is required; the <strong>weights-only and float16 TFLite quantization variants</strong> are not supported. Mixed models with convert operators (such as the QUANTIZE and DEQUANTIZE TensorFlow Lite operators), whether explicitly defined or automatically inserted by the X-CUBE-AI code generator, are supported. Finally, the quantized tensors are mapped onto the optimized and specialized C implementation for the supported operators; otherwise the floating-point version of the operator is used.</p>
<div class="Tips">
<p><strong>Tip</strong> — The user can also quantize a Keras model with the TFLite converter utility and/or the X-CUBE-AI <a href="#ref_quantize_cmd">internal process</a>. Despite its current limitations, this internal process offers more <a href="#ref_support_arithmetic">quantization schemes</a>, which can be more interesting in terms of execution time and precision (i.e. accuracy). Results depend entirely on the model size and associated topology. It also currently provides better support for deploying models with recurrent layers, which are kept in float without extra manipulation.</p>
</div>
<p>The “analyze”, “validate” and “generate” commands can be used without limitation. The <code>&#39;-q/--quantize&#39;</code> argument is used to pass the specific “tensor format configuration” file along with the associated reshaped Keras model.</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a>$ stm32ai analyze <span class="op">-</span>m <span class="op">&lt;</span>reshaped_model_file<span class="op">&gt;.</span><span class="fu">h5</span> <span class="op">-</span>q <span class="op">&lt;</span>quant_file_desc<span class="op">&gt;.</span><span class="fu">json</span></span>
<span id="cb1-2"><a href="#cb1-2" aria-hidden="true" tabindex="-1"></a>$ stm32ai analyze <span class="op">-</span>m <span class="op">&lt;</span>quantized_model_file<span class="op">&gt;.</span><span class="fu">tflite</span></span>
<span id="cb1-3"><a href="#cb1-3" aria-hidden="true" tabindex="-1"></a>$ stm32ai validate <span class="op">-</span>m <span class="op">&lt;</span>quantized_model<span class="op">&gt;.</span><span class="fu">tflite</span> <span class="op">-</span>vi test_data<span class="op">.</span><span class="fu">npz</span></span></code></pre></div>
</section>
<section id="ref_support_arithmetic" class="level2">
<h2>Quantized tensors</h2>
<p>X-CUBE-AI supports only 8b integer-based arithmetic for the quantized tensors (<code>&#39;int8&#39;</code> or <code>&#39;uint8&#39;</code> C-type). The <code>&#39;int32&#39;</code> C-type is only considered for the quantization of the bias. Support for the <em>Qm,n</em> arithmetic has been fully removed.</p>
<p>The <strong>integer</strong> arithmetic is based on the representation convention used by Google for quantized models. See the following reference for the underlying rationale.</p>
<ul>
<li>Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference (<a href="https://arxiv.org/abs/1712.05877">https://arxiv.org/abs/1712.05877</a>)</li>
</ul>
<p>Each real number <em>r</em> is represented as a function of the quantized value <em>q</em>, a <em>scale</em> factor (an <em>arbitrary positive real number</em>) and a <em>zero_point</em> parameter. The quantization scheme is an affine mapping of the integers <em>q</em> to the real numbers <em>r</em>. <em>zero_point</em> has the same integer C-type as the <em>q</em> data.</p>
<figure>
<img src="" property="center" style="width:60.0%" />
</figure>
<p>Precision depends on the <em>scale</em> factor, and the quantized values are linearly distributed around the <em>zero_point</em> value. In both cases, the resolution/precision is constant, unlike in the floating-point representation.</p>
<div id="fig:id_quant" class="fignos">
<figure>
<img src="" property="center" style="width:45.0%" alt="Figure 1: Integer precision" /><figcaption aria-hidden="true"><span>Figure 1:</span> Integer precision</figcaption>
</figure>
</div>
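<p>As an illustration, the affine mapping described above can be sketched in a few lines of Python (illustrative helper names, not part of the X-CUBE-AI tools; a symmetric signed 8-bit format is assumed for the example values):</p>
<pre class="python"><code>import numpy as np

def quantize(r, scale, zero_point, qmin=-128, qmax=127):
    """Map real values r to int8 codes: q = round(r / scale) + zero_point."""
    q = np.round(r / scale) + zero_point
    return np.clip(q, qmin, qmax).astype(np.int8)

def dequantize(q, scale, zero_point):
    """Recover the approximated real values: r = scale * (q - zero_point)."""
    return scale * (q.astype(np.int32) - zero_point)

r = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
scale, zero_point = 1.0 / 128, 0             # symmetric example
q = quantize(r, scale, zero_point)           # [-128, 0, 64, 127]
r_hat = dequantize(q, scale, zero_point)     # error bounded by the scale</code></pre>
<p>Note the saturation at the upper bound: the real value 1.0 maps to 127 and is recovered as 127/128.</p>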
<div class="Note">
<p><strong>Info</strong> — For the code generator, quantized tensors are defined as a container storing the quantized data, represented as int8/uint8/int32 C-array and the quantization parameters: <em>scale</em> and <em>zero-point</em> values. These parameters are statically defined in the tensor C-structure definition, <code>ai_tensor</code> object (part of the generated <code>&#39;&lt;network&gt;.c&#39;</code> file) and the data are stored in the <code>&#39;&lt;network&gt;_data.c&#39;</code> file.</p>
</div>
<section id="per-axis-or-per-channel-vs-per-tensor" class="level3">
<h3>Per-axis (or per-channel) vs per-tensor</h3>
<p>Per-tensor means that the same format (i.e. <em>scale</em>/<em>zero_point</em>) is used for the entire tensor. Per-axis (or per-channel), for convolution-based operators, means that there is one <em>scale</em> and/or <em>zero_point</em> per channel slice.</p>
<p>Activation tensors are always in <strong>per-tensor</strong>.</p>
</section>
<section id="symmetric-vs-asymmetric" class="level3">
<h3>Symmetric vs Asymmetric</h3>
<p><em>Asymmetric</em> means that the tensor can have its <em>zero_point</em> anywhere within the signed 8-bit range [-128, 127] or the unsigned 8-bit range [0, 255]. <em>Symmetric</em> means that the tensor is forced to have a <em>zero_point</em> equal to zero. Enforcing a zero <em>zero_point</em> enables some kernel optimizations that limit the cost of the operations (off-line pre-calculation,…). By nature, the activations are asymmetric; consequently, a symmetric format for the activations is not supported. For the weights/bias, both the asymmetric and symmetric formats are supported.</p>
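<p>The practical difference can be seen in how the quantization parameters are derived from an observed real range [r_min, r_max] (hedged sketch with illustrative names, not the X-CUBE-AI implementation):</p>
<pre class="python"><code>def asymmetric_params(r_min, r_max, qmin=-128, qmax=127):
    """zero_point can land anywhere in [qmin, qmax]."""
    scale = (r_max - r_min) / (qmax - qmin)
    zero_point = int(round(qmin - r_min / scale))
    return scale, zero_point

def symmetric_params(r_min, r_max, qmax=127):
    """zero_point is forced to 0; the scale covers the largest magnitude."""
    scale = max(abs(r_min), abs(r_max)) / qmax
    return scale, 0

# e.g. a ReLU6 output range [0.0, 6.0]
a_scale, a_zp = asymmetric_params(0.0, 6.0)   # zero_point lands at -128
s_scale, s_zp = symmetric_params(0.0, 6.0)    # zero_point is 0, half the codes unused</code></pre>
<p>For a one-sided range such as a ReLU output, the symmetric format wastes the negative codes, which is why activations are better served by the asymmetric format.</p>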
</section>
<section id="signed-integer-vs-unsigned-integer---supported-schemes" class="level3">
<h3>Signed integer vs Unsigned integer - supported schemes</h3>
<p>A signed or unsigned integer type can be defined for the weights and/or activations. However, not all the kernels required to support the different optimized combinations of the symmetric and asymmetric formats are implemented or relevant. This implies that <em>only</em> the following integer schemes or combinations are supported:</p>
<table>
<thead>
<tr class="header">
<th style="text-align: left;">scheme</th>
<th style="text-align: left;">weights</th>
<th style="text-align: left;">activations</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">ua/ua</td>
<td style="text-align: left;">unsigned and asymmetric</td>
<td style="text-align: left;">unsigned and asymmetric</td>
</tr>
<tr class="even">
<td style="text-align: left;">ss/sa</td>
<td style="text-align: left;">signed and symmetric</td>
<td style="text-align: left;">signed and asymmetric</td>
</tr>
<tr class="odd">
<td style="text-align: left;">ss/ua</td>
<td style="text-align: left;">signed and symmetric</td>
<td style="text-align: left;">unsigned and asymmetric</td>
</tr>
</tbody>
</table>
</section>
</section>
<section id="ref_tf_support" class="level2">
<h2>TFlite models</h2>
<p>X-CUBE-AI is able to import quantization-aware trained and post-training quantized TensorFlow Lite models. Post-training quantized models (TensorFlow v1.15 or v2.x) are based on the “ss/sa” and per-channel scheme: activations are asymmetric and signed (int8), weights/bias are symmetric and signed (int8). Older quantization-aware trained models are based on the “ua/ua” scheme; the “ss/sa” and per-channel scheme is now also the privileged scheme to address efficiently the <a href="https://coral.ai/docs/edgetpu/models-intro/#compatibility-overview">Coral Edge TPUs</a> or the <a href="https://www.tensorflow.org/lite/microcontrollers">TensorFlow Lite for Microcontrollers</a> runtime.</p>
<p>For X-CUBE-AI, the following code snippet illustrates the recommended <em>TFLiteConverter</em> options to enforce full-integer post-training quantization for all operators, including the input/output tensors.</p>
<div class="HTips">
<p><strong>Note</strong> — Quantization of the input and/or output tensors is optional. They can be kept in float for convenience and ease of deployment, for example to keep the pre- and/or post-processing in float.</p>
</div>
<div class="sourceCode" id="cb2"><pre class="sourceCode python"><code class="sourceCode python"><span id="cb2-1"><a href="#cb2-1" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> representative_dataset_gen():</span>
<span id="cb2-2"><a href="#cb2-2" aria-hidden="true" tabindex="-1"></a>  data <span class="op">=</span> tload(...)</span>
<span id="cb2-3"><a href="#cb2-3" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb2-4"><a href="#cb2-4" aria-hidden="true" tabindex="-1"></a>  <span class="cf">for</span> _ <span class="kw">in</span> <span class="bu">range</span>(num_calibration_steps):</span>
<span id="cb2-5"><a href="#cb2-5" aria-hidden="true" tabindex="-1"></a>    <span class="co"># Get sample input data as a numpy array in a method of your choosing.</span></span>
<span id="cb2-6"><a href="#cb2-6" aria-hidden="true" tabindex="-1"></a>    <span class="bu">input</span> <span class="op">=</span> get_sample(data)</span>
<span id="cb2-7"><a href="#cb2-7" aria-hidden="true" tabindex="-1"></a>    <span class="cf">yield</span> [<span class="bu">input</span>]</span>
<span id="cb2-8"><a href="#cb2-8" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb2-9"><a href="#cb2-9" aria-hidden="true" tabindex="-1"></a>converter <span class="op">=</span> tf.lite.TFLiteConverter.from_keras_model_file(<span class="op">&lt;</span>keras_model_path<span class="op">&gt;</span>)</span>
<span id="cb2-10"><a href="#cb2-10" aria-hidden="true" tabindex="-1"></a>converter.representative_dataset <span class="op">=</span> representative_dataset_gen</span>
<span id="cb2-11"><a href="#cb2-11" aria-hidden="true" tabindex="-1"></a><span class="co"># This enables quantization</span></span>
<span id="cb2-12"><a href="#cb2-12" aria-hidden="true" tabindex="-1"></a>converter.optimizations <span class="op">=</span> [tf.lite.Optimize.DEFAULT]</span>
<span id="cb2-13"><a href="#cb2-13" aria-hidden="true" tabindex="-1"></a><span class="co"># This ensures that if any ops can&#39;t be quantized, the converter throws an error</span></span>
<span id="cb2-14"><a href="#cb2-14" aria-hidden="true" tabindex="-1"></a>converter.target_spec.supported_ops <span class="op">=</span> [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]</span>
<span id="cb2-15"><a href="#cb2-15" aria-hidden="true" tabindex="-1"></a><span class="co"># For full integer quantization, though supported types defaults to int8 only</span></span>
<span id="cb2-16"><a href="#cb2-16" aria-hidden="true" tabindex="-1"></a>converter.target_spec.supported_types <span class="op">=</span> [tf.int8]</span>
<span id="cb2-17"><a href="#cb2-17" aria-hidden="true" tabindex="-1"></a><span class="co"># These set the input and output tensors to uint8 (added in r2.3)</span></span>
<span id="cb2-18"><a href="#cb2-18" aria-hidden="true" tabindex="-1"></a>converter.inference_input_type <span class="op">=</span> tf.uint8  <span class="co"># or tf.int8/tf.float32</span></span>
<span id="cb2-19"><a href="#cb2-19" aria-hidden="true" tabindex="-1"></a>converter.inference_output_type <span class="op">=</span> tf.uint8  <span class="co"># or tf.int8/tf.float32</span></span>
<span id="cb2-20"><a href="#cb2-20" aria-hidden="true" tabindex="-1"></a>quant_model <span class="op">=</span> converter.convert()</span>
<span id="cb2-21"><a href="#cb2-21" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb2-22"><a href="#cb2-22" aria-hidden="true" tabindex="-1"></a><span class="co"># Save the quantized file</span></span>
<span id="cb2-23"><a href="#cb2-23" aria-hidden="true" tabindex="-1"></a><span class="cf">with</span> <span class="bu">open</span>(<span class="op">&lt;</span>tflite_quant_model_path<span class="op">&gt;</span>, <span class="st">&quot;wb&quot;</span>) <span class="im">as</span> f:</span>
<span id="cb2-24"><a href="#cb2-24" aria-hidden="true" tabindex="-1"></a>    f.write(quant_model)</span>
<span id="cb2-25"><a href="#cb2-25" aria-hidden="true" tabindex="-1"></a>...</span></code></pre></div>
<ul>
<li>Post-training quantization: <a href="https://www.tensorflow.org/lite/performance/post_training_quantization">https://www.tensorflow.org/lite/performance/post_training_quantization</a><br />
</li>
<li>Quantization aware training: <a href="https://www.tensorflow.org/model_optimization/guide/quantization/training">https://www.tensorflow.org/model_optimization/guide/quantization/training</a></li>
</ul>
</section>
</section>
<section id="ref_quantize_cmd" class="level1">
<h1>Quantize command</h1>
<section id="overview" class="level2">
<h2>Overview</h2>
<p>The “quantize” command performs a <a href="#ref_quant_flow">post-training quantization process</a> on a 32-bit float Keras model. A reshaped Keras model and the associated <a href="#ref_tensor_conf_file">tensor format configuration</a> are generated. Options are passed through a <a href="#ref_quant_conf_file">post-training quantization configuration</a> JSON file (<code>&#39;-q/--quantize&#39;</code> argument). This JSON file is generated because the Keras h5 format does not natively support storing the quantization parameters (or meta information). Note that the reshaped model file is basically an un-fused version of the original 32-bit float model, which can be used as-is.</p>
<figure>
<img src="" property="center" style="width:95.0%" />
</figure>
<section id="limitations" class="level3">
<h3>Limitations</h3>
<ul>
<li>residual or multi-branch models are not supported</li>
<li>models with multiple inputs or outputs are not supported</li>
<li>only the channel-last (NHWC) tensor representation is supported</li>
<li>only the supported floating-point Keras operators (refer to the <a href="supported_ops_keras.html">[KERAS]</a> article) can be quantized. Layers that are not supported will be kept in floating point.</li>
</ul>
</section>
</section>
<section id="examples" class="level2">
<h2>Examples</h2>
<ul>
<li><p>Perform the post-training quantization process on an already-trained floating-point Keras model.</p>
<pre class="dosbatch"><code>$ stm32ai quantize -q &lt;conf_quant&gt;.json</code></pre></li>
<li><p>Validate a Keras model after Keras post-training quantization</p>
<pre class="dosbatch"><code>$ stm32ai validate -m &lt;reshaped_model_file&gt;.h5 -q &lt;quant_file_desc&gt;.json -vi test_data.npz</code></pre></li>
<li><p>Generate the specialized c-files for a quantized Keras model.</p>
<pre class="dosbatch"><code>$ stm32ai generate -m &lt;expanded_model_file&gt;.h5 -q &lt;quant_file_desc&gt;.json</code></pre></li>
</ul>
</section>
<section id="ref_quant_conf_file" class="level2">
<h2>Post-training quantization configuration file</h2>
<p>To use the Keras post-training quantization process, a configuration file (JSON dictionary) is required with the following keys:</p>
<table>
<colgroup>
<col style="width: 36%" />
<col style="width: 63%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">key</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">“model_name”</td>
<td style="text-align: left;">indicates the name/suffix used for the produced files.</td>
</tr>
<tr class="even">
<td style="text-align: left;">“path_to_floatingpoint_h5”</td>
<td style="text-align: left;">indicates the path to the original model file.</td>
</tr>
<tr class="odd">
<td style="text-align: left;">“algorithm”</td>
<td style="text-align: left;">indicates the used algorithm. Possible values: <em>“User”</em>, or <em>“Minmax”</em> (see <em><a href="#ref_quant_algo">“Quantizers”</a></em> section)</td>
</tr>
<tr class="even">
<td style="text-align: left;">“arithmetic”</td>
<td style="text-align: left;">indicates the expected arithmetic. Possible value: <em>“Integer”</em>, (see <em><a href="#ref_support_arithmetic">“Integer format”</a></em> section)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">“weights_integer_scheme”</td>
<td style="text-align: left;">indicates the expected scheme for the weights (<em>“Integer”</em> arithmetic only). Possible values: <em>“UnsignedAsymmetric”</em>, <em>“SignedSymmetric”</em> (see <em><a href="#ref_support_arithmetic">“Integer format”</a></em> section)</td>
</tr>
<tr class="even">
<td style="text-align: left;">“activations_integer_scheme”</td>
<td style="text-align: left;">indicates the expected scheme for the activations (<em>“Integer”</em> arithmetic only). Possible values: <em>“UnsignedAsymmetric”</em>, <em>“SignedAsymmetric”</em> (see <em><a href="#ref_support_arithmetic">“Integer format”</a></em> section)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">“per_channel”</td>
<td style="text-align: left;">indicates if the <em>per-channel</em> (or <em>per-axis</em>) quantization sub-mode must be applied for the weights/bias. Activation tensors remain in per-tensor mode. This option is only applicable for the “Integer” arithmetic. Possible values: “True” or “False”. If not defined, “False” is considered and the <em>per-tensor</em> quantization sub-mode is used (see the <em><a href="#ref_support_arithmetic">“Integer format”</a></em> section).</td>
</tr>
<tr class="even">
<td style="text-align: left;">“quant_test_set_dir”</td>
<td style="text-align: left;">indicates the <em>quantization test-set</em> directory (see <em><a href="#ref_test_sets_loading">“Test-set considerations”</a></em> section).</td>
</tr>
<tr class="odd">
<td style="text-align: left;">“evaluation_test_set_dir”</td>
<td style="text-align: left;">indicates the <em>evaluation test-set</em> directory (see <em><a href="#ref_test_sets_loading">“Test-set considerations”</a></em> section).</td>
</tr>
<tr class="even">
<td style="text-align: left;">“batch_size”</td>
<td style="text-align: left;">indicates the number of input vectors processed for one <em>evaluation</em>. The user needs to choose this parameter carefully according to the system memory. The recommendation is to start with a small value, for example 32.</td>
</tr>
<tr class="odd">
<td style="text-align: left;">“quant_test_ratio”</td>
<td style="text-align: left;">indicates the ratio <code>[0..1]</code> of the vectors in “quant_test_set_dir” which are used for the quantization. They are randomly selected.</td>
</tr>
<tr class="even">
<td style="text-align: left;">“output_directory”</td>
<td style="text-align: left;">indicates the root directory to store the results. Produced files are stored in the following directory: <code>&lt;output_directory&gt;/&lt;model_name&gt;_&lt;algorithm&gt;_&lt;date&gt;_&lt;time&gt;/</code>.</td>
</tr>
<tr class="odd">
<td style="text-align: left;">“modules_directory”</td>
<td style="text-align: left;">indicates the directory containing the user test-set generation and optional user quantizer Python files.</td>
</tr>
<tr class="even">
<td style="text-align: left;">“filename_test_set_generation”</td>
<td style="text-align: left;">name of the file (with or without the <code>py</code> extension) where the user writes an optional pre-processing of the data and, in any case, loads the test-sets into generators. This file is <strong>mandatory</strong>.</td>
</tr>
<tr class="odd">
<td style="text-align: left;">“filename_quantizer_algos”</td>
<td style="text-align: left;">name of the file (with or without the <code>py</code> extension) where the user can write their own quantizer. This is only needed if the <em>“User”</em> algorithm is requested.</td>
</tr>
</tbody>
</table>
<p>Produced files:</p>
<ul>
<li><code>&lt;model_name&gt;.h5</code> - reshaped model file<br />
</li>
<li><code>&lt;model_name&gt;_Q.json</code> - tensor format configuration file<br />
</li>
<li><code>&lt;model_name&gt;_reference.npz</code> - reference file</li>
</ul>
<div class="HTips">
<p><strong>Note</strong> — For the “path_to_floatingpoint_h5”, “output_directory”, “modules_directory”, “quant_test_set_dir” and “evaluation_test_set_dir” keys, if the values are not prefixed by <code>&quot;./&quot;</code> or <code>&quot;/&quot;</code>, the path is relative to the location of the JSON file. Otherwise, the path is absolute or relative to the current working directory from which the stm32ai application is launched.</p>
</div>
<div class="Alert">
<p><strong>Warning</strong> —<br />
All fields are required. Only <em>“filename_quantizer_algos”</em> can be omitted, if the <em>“User”</em> algorithm is not selected.</p>
</div>
<p><strong>“quant_test_ratio” parameter</strong></p>
<p>If there are <em>1000</em> images in “quant_test_set_dir” and the user sets “quant_test_ratio” to <em>0.8</em>, then only <em>800</em> randomly selected images will be used from the quantization test-set. This parameter can be viewed as a way to control the execution time of the script, which can be long depending on the user system, the size of the quantization test-set, the quantization algorithm and the depth of the network. Note that if the pre-existing Keras <em>ImageDataGenerator()</em> class is used to load the data, Keras constraints impose that <em>“quant_test_ratio”</em> be strictly lower than <em>1</em>.</p>
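<p>The effect of “quant_test_ratio” can be sketched as a simple random sub-sampling (hypothetical helper, not part of the stm32ai tool):</p>
<pre class="python"><code>import random

def select_quant_subset(files, quant_test_ratio, seed=42):
    """Randomly pick ratio * len(files) vectors for the quantization step."""
    n_used = int(quant_test_ratio * len(files))
    rng = random.Random(seed)
    return rng.sample(files, n_used)

images = ["img_%04d.png" % i for i in range(1000)]
subset = select_quant_subset(images, 0.8)   # 800 images are used</code></pre>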
<section id="example-of-configuration-files" class="level3 unnumbered">
<h3 class="unnumbered">Example of configuration files</h3>
<p>The following configuration file sets the “ss/sa” scheme with per-channel quantization.</p>
<div class="sourceCode" id="cb6"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb6-1"><a href="#cb6-1" aria-hidden="true" tabindex="-1"></a><span class="fu">{</span></span>
<span id="cb6-2"><a href="#cb6-2" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;model_name&quot;</span><span class="fu">:</span> <span class="st">&quot;mnist&quot;</span><span class="fu">,</span></span>
<span id="cb6-3"><a href="#cb6-3" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;path_to_floatingpoint_h5&quot;</span><span class="fu">:</span> <span class="st">&quot;mnist_cnn.h5&quot;</span><span class="fu">,</span></span>
<span id="cb6-4"><a href="#cb6-4" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;batch_size&quot;</span><span class="fu">:</span> <span class="st">&quot;128&quot;</span><span class="fu">,</span></span>
<span id="cb6-5"><a href="#cb6-5" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;quant_test_set_dir&quot;</span><span class="fu">:</span> <span class="st">&quot;.</span><span class="ch">\\</span><span class="st">&quot;</span><span class="fu">,</span></span>
<span id="cb6-6"><a href="#cb6-6" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;quant_test_ratio&quot;</span><span class="fu">:</span> <span class="st">&quot;0.3&quot;</span><span class="fu">,</span></span>
<span id="cb6-7"><a href="#cb6-7" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;evaluation_test_set_dir&quot;</span><span class="fu">:</span> <span class="st">&quot;&quot;</span><span class="fu">,</span></span>
<span id="cb6-8"><a href="#cb6-8" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;modules_directory&quot;</span><span class="fu">:</span> <span class="st">&quot;mnist_modules&quot;</span><span class="fu">,</span></span>
<span id="cb6-9"><a href="#cb6-9" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;filename_test_set_generation&quot;</span><span class="fu">:</span> <span class="st">&quot;test_set_generation&quot;</span><span class="fu">,</span></span>
<span id="cb6-10"><a href="#cb6-10" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;filename_quantizer_algos&quot;</span><span class="fu">:</span> <span class="st">&quot;quantizer_user_algo&quot;</span><span class="fu">,</span></span>
<span id="cb6-11"><a href="#cb6-11" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;algorithm&quot;</span><span class="fu">:</span> <span class="st">&quot;MinMax&quot;</span><span class="fu">,</span></span>
<span id="cb6-12"><a href="#cb6-12" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;arithmetic&quot;</span><span class="fu">:</span> <span class="st">&quot;Integer&quot;</span><span class="fu">,</span></span>
<span id="cb6-13"><a href="#cb6-13" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;weights_integer_scheme&quot;</span><span class="fu">:</span> <span class="st">&quot;SignedSymmetric&quot;</span><span class="fu">,</span></span>
<span id="cb6-14"><a href="#cb6-14" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;activations_integer_scheme&quot;</span><span class="fu">:</span> <span class="st">&quot;SignedAsymmetric&quot;</span><span class="fu">,</span></span>
<span id="cb6-15"><a href="#cb6-15" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;output_directory&quot;</span><span class="fu">:</span> <span class="st">&quot;out&quot;</span><span class="fu">,</span></span>
<span id="cb6-16"><a href="#cb6-16" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;per_channel&quot;</span><span class="fu">:</span> <span class="st">&quot;true&quot;</span></span>
<span id="cb6-17"><a href="#cb6-17" aria-hidden="true" tabindex="-1"></a><span class="fu">}</span></span></code></pre></div>
</section>
</section>
<section id="ref_quant_flow" class="level2">
<h2>Keras Post-training quantization process</h2>
<p>The Keras post-training quantization process goes through the following steps. The same flow is applied regardless of the selected algorithm or quantization scheme.</p>
<div class="Alert">
<p><strong>Warning</strong> — Internally, the algorithm is fully based on the tf.keras module from TensorFlow v2.x. It allows importing h5 files generated with the original Keras module v2.x up to v2.3.1, and also with the tf.keras module from TensorFlow v1.15. Consequently, it is recommended to also use the services from the tf.keras module when designing the user modules (the <code>test_set_generation.py</code> and <code>quantizer_user_algo.py</code> modules), to avoid possible incompatibilities.</p>
</div>
<div id="fig:id_quant_steps" class="fignos">
<figure>
<img src="" property="center" style="width:65.0%" alt="Figure 2: Quantization steps" /><figcaption aria-hidden="true"><span>Figure 2:</span> Quantization steps</figcaption>
</figure>
</div>
<p><strong>[1.0]</strong> - Load the <em>evaluation test-set</em>, <em>quantization test-set</em> and original model (see <em><a href="#ref_test_sets_loading">“Test-set considerations”</a></em> section). Note that the Keras <em>load_model()</em> v2.2.4 function is used to load the original model.</p>
<div class="sourceCode" id="cb7"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb7-1"><a href="#cb7-1" aria-hidden="true" tabindex="-1"></a>Neural Network Tools <span class="kw">for</span> STM32AI v1<span class="op">.</span><span class="fu">4</span><span class="op">.</span><span class="fu">1</span> <span class="op">(</span>STM<span class="op">.</span><span class="fu">ai</span> v6<span class="op">.</span><span class="fu">0</span><span class="op">.</span><span class="fu">0</span><span class="op">)</span></span>
<span id="cb7-2"><a href="#cb7-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb7-3"><a href="#cb7-3" aria-hidden="true" tabindex="-1"></a>Output directory    <span class="op">:</span> <span class="op">&lt;</span>output_directory<span class="op">&gt;</span></span>
<span id="cb7-4"><a href="#cb7-4" aria-hidden="true" tabindex="-1"></a>Module directory    <span class="op">:</span> <span class="op">&lt;</span>modules_directory<span class="op">&gt;</span></span>
<span id="cb7-5"><a href="#cb7-5" aria-hidden="true" tabindex="-1"></a>Test directory      <span class="op">:</span> <span class="op">&lt;</span>tests_directory<span class="op">&gt;</span></span>
<span id="cb7-6"><a href="#cb7-6" aria-hidden="true" tabindex="-1"></a>Eval directory      <span class="op">:</span> <span class="op">&lt;</span>eval_directory<span class="op">&gt;</span></span>
<span id="cb7-7"><a href="#cb7-7" aria-hidden="true" tabindex="-1"></a>Original model      <span class="op">:</span> <span class="op">&lt;</span>path_to_floatingpoint_h5<span class="op">&gt;</span></span>
<span id="cb7-8"><a href="#cb7-8" aria-hidden="true" tabindex="-1"></a>Quantization algo   <span class="op">:</span> minmax</span>
<span id="cb7-9"><a href="#cb7-9" aria-hidden="true" tabindex="-1"></a>Arithmetic          <span class="op">:</span> Integer</span>
<span id="cb7-10"><a href="#cb7-10" aria-hidden="true" tabindex="-1"></a>Scheme  <span class="op">(</span>w<span class="op">-</span>a<span class="op">)</span>       <span class="op">:</span> SS<span class="op">-</span>SA<span class="op">-</span>per<span class="op">-</span>tensor</span>
<span id="cb7-11"><a href="#cb7-11" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb7-12"><a href="#cb7-12" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Loading<span class="op">/</span>Initializing dataset</span>
<span id="cb7-13"><a href="#cb7-13" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Loading<span class="op">/</span>Initializing dataset <span class="op">-</span> done <span class="op">(</span>elapsed time 0<span class="op">.</span><span class="fu">016s</span><span class="op">)</span></span></code></pre></div>
<p><strong>[1.1]</strong> - The accuracy of the original model is evaluated with the <em>evaluation test-set</em>. The Keras <em>mean_squared_error</em> function is used as the loss function. If no ground truth or reference values are provided, this step is skipped.</p>
<div class="sourceCode" id="cb8"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb8-1"><a href="#cb8-1" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Testing original model</span>
<span id="cb8-2"><a href="#cb8-2" aria-hidden="true" tabindex="-1"></a> Original model <span class="op">-</span> test accuracy <span class="op">(</span>loss<span class="op">):</span> 0<span class="op">.</span><span class="fu">9943</span> <span class="op">(</span>0<span class="op">.</span><span class="fu">00103</span><span class="op">)</span></span>
<span id="cb8-3"><a href="#cb8-3" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Testing original model <span class="op">-</span> done <span class="op">(</span>elapsed time 0<span class="op">.</span><span class="fu">837s</span><span class="op">)</span></span></code></pre></div>
<p><strong>[1.2]</strong> - Automatic <strong>reshape</strong> of the original model (see <a href="#ref_supported_layers"><em>“Supported Keras layers”</em></a> section).</p>
<ul>
<li>splitting of each SeparableConv2D layer into a DepthwiseConv2D followed by a pointwise Conv2D.</li>
<li>un-fusing of the activations whenever they are merged into a trainable layer in the original floating-point model.</li>
<li>folding of the Batch normalization weights (if there is no non-linearity between the Batch-Norm layer and the previous trainable layer). If folded, the Batch-Norm layer no longer appears in the reshaped model. If it cannot be folded, it is automatically kept in floating point and the following message is displayed:</li>
</ul>
<div class="sourceCode" id="cb9"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb9-1"><a href="#cb9-1" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb9-2"><a href="#cb9-2" aria-hidden="true" tabindex="-1"></a>Batch normalisation layer <span class="co">#9 of output model was not folded into previous layer weights</span></span>
<span id="cb9-3"><a href="#cb9-3" aria-hidden="true" tabindex="-1"></a>Reason<span class="op">:</span> layer <span class="co">#8 &#39;Activation&#39; is not supported for possible BatchNormalization folding</span></span>
<span id="cb9-4"><a href="#cb9-4" aria-hidden="true" tabindex="-1"></a>    supported layers<span class="op">:</span> <span class="op">(</span>&#39;Dense&#39;<span class="op">,</span> &#39;Conv2D&#39;<span class="op">,</span> &#39;DepthwiseConv2D&#39;<span class="op">,</span> &#39;SeparableConv2D&#39;<span class="op">,</span> &#39;Conv1D&#39;<span class="op">,</span></span>
<span id="cb9-5"><a href="#cb9-5" aria-hidden="true" tabindex="-1"></a>        &#39;SeparableConv1D&#39;<span class="op">)</span></span>
<span id="cb9-6"><a href="#cb9-6" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span></code></pre></div>
<div class="sourceCode" id="cb10"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb10-1"><a href="#cb10-1" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Reshaping original model</span>
<span id="cb10-2"><a href="#cb10-2" aria-hidden="true" tabindex="-1"></a> unroll the SeparableConv2D layers<span class="op">...</span></span>
<span id="cb10-3"><a href="#cb10-3" aria-hidden="true" tabindex="-1"></a> unfuse activations<span class="op">...</span></span>
<span id="cb10-4"><a href="#cb10-4" aria-hidden="true" tabindex="-1"></a> fold the BatchNormalization layers<span class="op">...</span></span>
<span id="cb10-5"><a href="#cb10-5" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Reshaping original model <span class="op">-</span> done <span class="op">(</span>elapsed time 1<span class="op">.</span><span class="fu">225s</span><span class="op">)</span></span></code></pre></div>
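<p>The batch-normalization folding step can be illustrated on a simple dense layer: the BN parameters are absorbed into the layer weights and bias so that the folded layer stays mathematically equivalent (illustrative NumPy sketch, not the internal implementation):</p>
<pre class="python"><code>import numpy as np

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-3):
    """Fold y = gamma * (x @ w + b - mean) / sqrt(var + eps) + beta
    into a single affine layer y = x @ w_f + b_f."""
    s = gamma / np.sqrt(var + eps)
    w_f = w * s                      # scale each output channel
    b_f = (b - mean) * s + beta
    return w_f, b_f

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))
w, b = rng.standard_normal((3, 2)), rng.standard_normal(2)
gamma, beta = rng.standard_normal(2), rng.standard_normal(2)
mean, var = rng.standard_normal(2), rng.random(2) + 0.1
eps = 1e-3

y_ref = gamma * (x @ w + b - mean) / np.sqrt(var + eps) + beta
w_f, b_f = fold_bn(w, b, gamma, beta, mean, var, eps)
y_fold = x @ w_f + b_f   # matches y_ref up to float rounding</code></pre>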
<p><strong>[1.3]</strong> - The accuracy of the reshaped model is evaluated with the <em>evaluation test-set</em>. If no ground truth or reference values are provided, this step is skipped.</p>
<div class="HTips">
<p><strong>Note</strong> — The modified network is expected to be mathematically equivalent to the original model.</p>
</div>
<div class="sourceCode" id="cb11"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb11-1"><a href="#cb11-1" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Testing reshaped model</span>
<span id="cb11-2"><a href="#cb11-2" aria-hidden="true" tabindex="-1"></a> Reshaped model <span class="op">-</span> test accuracy <span class="op">(</span>loss<span class="op">):</span> 0<span class="op">.</span><span class="fu">9943</span> <span class="op">(</span>0<span class="op">.</span><span class="fu">00103</span><span class="op">)</span></span>
<span id="cb11-3"><a href="#cb11-3" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Testing reshaped model <span class="op">-</span> done <span class="op">(</span>elapsed time 0<span class="op">.</span><span class="fu">590s</span><span class="op">)</span></span></code></pre></div>
<p><strong>[1.4]</strong> - Save the reshaped model (<code>h5*</code> file creation)</p>
<div class="sourceCode" id="cb12"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb12-1"><a href="#cb12-1" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Saving re<span class="op">-</span>shaped model</span>
<span id="cb12-2"><a href="#cb12-2" aria-hidden="true" tabindex="-1"></a>create <span class="st">&quot;mnist_ss_sa_pc.h5&quot;</span> file</span>
<span id="cb12-3"><a href="#cb12-3" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Saving re<span class="op">-</span>shaped model <span class="op">-</span> done <span class="op">(</span>elapsed time 0<span class="op">.</span><span class="fu">216s</span><span class="op">)</span></span></code></pre></div>
<p><strong>[2.0]</strong> - <strong>Quantize the weights</strong>: the original weights are quantized, by default with the <em>“minmax”</em> algorithm. If the <em>“User”</em> algorithm is set, the <em>WeightsBiasQuantizerUser()</em> function from the user <em>“filename_quantizer_algos.py”</em> module is imported and used (see <em><a href="#ref_quant_algo">“Quantizers”</a></em>). After this step, a function replaces the original weights of the reshaped model with “fake quantized” weights. Note that no <em>evaluation test-set</em>, <em>quantization test-set</em>, or iterative algorithm is used here.</p>
<div class="sourceCode" id="cb13"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb13-1"><a href="#cb13-1" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Quantizing weights with <span class="st">&quot;minmax&quot;</span> algo</span>
<span id="cb13-2"><a href="#cb13-2" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Quantizing weights with <span class="st">&quot;minmax&quot;</span> algo <span class="op">-</span> done <span class="op">(</span>elapsed time 0<span class="op">.</span><span class="fu">256s</span><span class="op">)</span></span></code></pre></div>
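<p>To illustrate the idea behind the <em>“minmax”</em> step, the sketch below derives per-tensor scale/zero-point parameters from the min/max range of a weight tensor and produces signed 8-bit codes. The function name and details are assumptions for illustration only, not the actual script internals.</p>

```python
# Hypothetical sketch of per-tensor "minmax" signed 8-bit weight
# quantization; names are illustrative, not the tool's real API.
import numpy as np

def minmax_quantize_weights(w, num_bits=8):
    """Compute scale/zero_point from the min/max range, then
    return the integer codes and the quantization parameters."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    # The represented range must include 0 so that 0.0 maps exactly.
    w_min, w_max = min(float(w.min()), 0.0), max(float(w.max()), 0.0)
    scale = (w_max - w_min) / (qmax - qmin)
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point
```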
<p><strong>[2.1]</strong> - <strong>Quantize the activations</strong> by passing the <em>quantization test-set</em>. If the <em>“User”</em> algorithm is set, the <em>ActivationsQuantizerUser()</em> function from the user <em>“filename_quantizer_algos.py”</em> module is imported and used (see <em><a href="#ref_quant_algo">“Quantizers”</a></em>).</p>
<p>The following traces show a <em>“Greedy”</em> algorithm execution; for the other cases, a single line is displayed.</p>
<div class="sourceCode" id="cb14"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb14-1"><a href="#cb14-1" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Quantizing activations with <span class="st">&quot;minmax&quot;</span> algo</span>
<span id="cb14-2"><a href="#cb14-2" aria-hidden="true" tabindex="-1"></a> create <span class="st">&quot;mnist_ss_sa_pc_Q.json&quot;</span> file</span>
<span id="cb14-3"><a href="#cb14-3" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Quantizing activations with <span class="st">&quot;minmax&quot;</span> algo <span class="op">-</span> done <span class="op">(</span>elapsed time 1<span class="op">.</span><span class="fu">136s</span><span class="op">)</span></span></code></pre></div>
<p><strong>[2.2]</strong> - Create and save the tensor format configuration file.</p>
<div class="Tips">
<p><strong>Tip</strong> — By default, if a Softmax layer is part of the network, it is automatically kept in float.</p>
</div>
<p><strong>[3.0]</strong> - Evaluate the accuracy of the quantized model on the <em>evaluation test-set</em>. If no ground truth or reference values are provided, this step is skipped.</p>
<div class="sourceCode" id="cb15"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb15-1"><a href="#cb15-1" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Testing final quantized model</span>
<span id="cb15-2"><a href="#cb15-2" aria-hidden="true" tabindex="-1"></a> Final quantized model <span class="op">-</span> test accuracy <span class="op">(</span>loss<span class="op">):</span> 0<span class="op">.</span><span class="fu">9943</span> <span class="op">(</span>0<span class="op">.</span><span class="fu">00101</span><span class="op">)</span></span>
<span id="cb15-3"><a href="#cb15-3" aria-hidden="true" tabindex="-1"></a><span class="op">--</span> Testing final quantized model <span class="op">-</span> done <span class="op">(</span>elapsed time 0<span class="op">.</span><span class="fu">824s</span><span class="op">)</span></span></code></pre></div>
<div class="HTips">
<p><strong>Note</strong> — As Keras layers do not support quantized values, the behavior of the quantized layers is emulated by quantizing the tensor values and then rescaling them back to float; this process is often called <strong>“fake quantization”</strong>. Quantization significantly changes the outputs of the network, so you cannot use the original model to validate its results; however, the script generates reference inputs and outputs that you can use with <strong>X-CUBE-AI</strong>’s validation (see next section).</p>
</div>
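<p>A minimal sketch of the “fake quantization” idea: values are rounded to integer codes and immediately rescaled back to float, so float-only layers can emulate integer arithmetic while carrying the quantization error. The helper name and parameters are assumptions for illustration.</p>

```python
# Sketch of "fake quantization": quantize to the integer grid,
# then rescale back to float, so the quantization error appears
# in an otherwise float-only computation.
import numpy as np

def fake_quantize(x, scale, zero_point, qmin=-128, qmax=127):
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale  # back to float, error included
```

Applying the function twice returns the same values, since the result already lies on the quantization grid.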
<p><strong>[3.1]</strong> - If available, save a batch of inputs (“batch_size”) and the predicted values of the quantized model. They can be used as references to validate the generated C-model.</p>
<p><strong>[3.2]</strong> - Produced files</p>
<div class="sourceCode" id="cb16"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb16-1"><a href="#cb16-1" aria-hidden="true" tabindex="-1"></a><span class="op">&lt;</span>output_directory<span class="op">&gt;/&lt;</span>model_name<span class="op">&gt;</span>_<span class="op">&lt;</span>algorithm<span class="op">&gt;</span>_<span class="op">&lt;</span>date<span class="op">&gt;</span>_<span class="op">&lt;</span>time<span class="op">&gt;/</span>final_accuracy<span class="op">.</span><span class="fu">txt</span></span>
<span id="cb16-2"><a href="#cb16-2" aria-hidden="true" tabindex="-1"></a><span class="op">&lt;</span>output_directory<span class="op">&gt;/&lt;</span>model_name<span class="op">&gt;</span>_<span class="op">&lt;</span>algorithm<span class="op">&gt;</span>_<span class="op">&lt;</span>date<span class="op">&gt;</span>_<span class="op">&lt;</span>time<span class="op">&gt;/&lt;</span>model_name<span class="op">&gt;.</span><span class="fu">h5</span></span>
<span id="cb16-3"><a href="#cb16-3" aria-hidden="true" tabindex="-1"></a><span class="op">&lt;</span>output_directory<span class="op">&gt;/&lt;</span>model_name<span class="op">&gt;</span>_<span class="op">&lt;</span>algorithm<span class="op">&gt;</span>_<span class="op">&lt;</span>date<span class="op">&gt;</span>_<span class="op">&lt;</span>time<span class="op">&gt;/&lt;</span>model_name<span class="op">&gt;</span>_Q<span class="op">.</span><span class="fu">json</span></span>
<span id="cb16-4"><a href="#cb16-4" aria-hidden="true" tabindex="-1"></a><span class="op">&lt;</span>output_directory<span class="op">&gt;/&lt;</span>model_name<span class="op">&gt;</span>_<span class="op">&lt;</span>algorithm<span class="op">&gt;</span>_<span class="op">&lt;</span>date<span class="op">&gt;</span>_<span class="op">&lt;</span>time<span class="op">&gt;/&lt;</span>model_name<span class="op">&gt;</span>_reference<span class="op">.</span><span class="fu">npz</span></span></code></pre></div>
<section id="validation-on-desktop" class="level4 unnumbered">
<h4 class="unnumbered">Validation on desktop</h4>
<p>The following command can be used to evaluate the quantized model with the generated x86 C-model (refer to <a href="evaluation_metrics.html">[METRIC]</a> for the evaluated metrics).</p>
<div class="sourceCode" id="cb17"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb17-1"><a href="#cb17-1" aria-hidden="true" tabindex="-1"></a>$ stm32ai validate <span class="op">&lt;</span>model_name<span class="op">&gt;.</span><span class="fu">h5</span> <span class="op">-</span>q <span class="op">&lt;</span>model_name<span class="op">&gt;</span>_Q<span class="op">.</span><span class="fu">json</span> <span class="op">-</span>vi <span class="op">&lt;</span>model_name<span class="op">&gt;</span>_reference<span class="op">.</span><span class="fu">npz</span></span>
<span id="cb17-2"><a href="#cb17-2" aria-hidden="true" tabindex="-1"></a>Neural Network Tools <span class="kw">for</span> STM32AI v1<span class="op">.</span><span class="fu">4</span><span class="op">.</span><span class="fu">1</span> <span class="op">(</span>STM<span class="op">.</span><span class="fu">ai</span> v6<span class="op">.</span><span class="fu">0</span><span class="op">.</span><span class="fu">0</span><span class="op">)</span></span>
<span id="cb17-3"><a href="#cb17-3" aria-hidden="true" tabindex="-1"></a> <span class="op">...</span></span>
<span id="cb17-4"><a href="#cb17-4" aria-hidden="true" tabindex="-1"></a> model file         <span class="op">:</span> <span class="op">&lt;</span>reshaped_model_name<span class="op">&gt;.</span><span class="fu">h5</span></span>
<span id="cb17-5"><a href="#cb17-5" aria-hidden="true" tabindex="-1"></a> <span class="fu">type</span>               <span class="op">:</span> keras</span>
<span id="cb17-6"><a href="#cb17-6" aria-hidden="true" tabindex="-1"></a> c_name             <span class="op">:</span> network</span>
<span id="cb17-7"><a href="#cb17-7" aria-hidden="true" tabindex="-1"></a> compression        <span class="op">:</span> None</span>
<span id="cb17-8"><a href="#cb17-8" aria-hidden="true" tabindex="-1"></a> quantize           <span class="op">:</span> <span class="op">&lt;</span>model_name<span class="op">&gt;</span>_Q<span class="op">.</span><span class="fu">json</span></span>
<span id="cb17-9"><a href="#cb17-9" aria-hidden="true" tabindex="-1"></a> workspace <span class="fu">dir</span>      <span class="op">:</span> <span class="op">&lt;</span>workspace<span class="op">-</span>directory<span class="op">-</span>path<span class="op">&gt;</span></span>
<span id="cb17-10"><a href="#cb17-10" aria-hidden="true" tabindex="-1"></a> output <span class="fu">dir</span>         <span class="op">:</span> <span class="op">&lt;</span>output<span class="op">-</span>directory<span class="op">-</span>path<span class="op">&gt;</span></span>
<span id="cb17-11"><a href="#cb17-11" aria-hidden="true" tabindex="-1"></a> vinput files       <span class="op">:</span> <span class="op">&lt;</span>model_name<span class="op">&gt;</span>_reference<span class="op">.</span><span class="fu">npz</span></span>
<span id="cb17-12"><a href="#cb17-12" aria-hidden="true" tabindex="-1"></a> <span class="op">...</span></span>
<span id="cb17-13"><a href="#cb17-13" aria-hidden="true" tabindex="-1"></a>input              <span class="op">:</span> quantize_conv2d_1_input <span class="op">[</span>784 items<span class="op">,</span> 784 B<span class="op">,</span> ai_i8<span class="op">,</span></span>
<span id="cb17-14"><a href="#cb17-14" aria-hidden="true" tabindex="-1"></a>                         scale<span class="op">=</span>0<span class="op">.</span><span class="fu">00392156862745098</span><span class="op">,</span> zero_point<span class="op">=-</span>128<span class="op">,</span> <span class="op">(</span>28<span class="op">,</span> 28<span class="op">,</span> 1<span class="op">)]</span></span>
<span id="cb17-15"><a href="#cb17-15" aria-hidden="true" tabindex="-1"></a>inputs <span class="op">(</span>total<span class="op">)</span>     <span class="op">:</span> 784 B</span>
<span id="cb17-16"><a href="#cb17-16" aria-hidden="true" tabindex="-1"></a>output             <span class="op">:</span> softmax_8 <span class="op">[</span>10 items<span class="op">,</span> 40 B<span class="op">,</span> ai_float<span class="op">,</span> FLOAT32<span class="op">,</span> <span class="op">(</span>1<span class="op">,</span> 1<span class="op">,</span> 10<span class="op">)]</span></span>
<span id="cb17-17"><a href="#cb17-17" aria-hidden="true" tabindex="-1"></a>outputs <span class="op">(</span>total<span class="op">)</span>    <span class="op">:</span> 40 B</span>
<span id="cb17-18"><a href="#cb17-18" aria-hidden="true" tabindex="-1"></a>params <span class="co">#           : 1,199,882 items (4.58 MiB)</span></span>
<span id="cb17-19"><a href="#cb17-19" aria-hidden="true" tabindex="-1"></a>macc               <span class="op">:</span> 12<span class="op">,</span>029<span class="op">,</span>716</span>
<span id="cb17-20"><a href="#cb17-20" aria-hidden="true" tabindex="-1"></a>weights <span class="op">(</span>ro<span class="op">)</span>       <span class="op">:</span> 1<span class="op">,</span>200<span class="op">,</span>584 B <span class="op">(</span>1172<span class="op">.</span><span class="fu">45</span> KiB<span class="op">)</span> <span class="op">-</span>3<span class="op">,</span>598<span class="op">,</span>944<span class="op">(-</span>75<span class="op">.</span><span class="fu">0</span><span class="op">%)</span></span>
<span id="cb17-21"><a href="#cb17-21" aria-hidden="true" tabindex="-1"></a>activations <span class="op">(</span>rw<span class="op">)</span>   <span class="op">:</span> 32<span class="op">,</span>704 B <span class="op">(</span>31<span class="op">.</span><span class="fu">94</span> KiB<span class="op">)</span> </span>
<span id="cb17-22"><a href="#cb17-22" aria-hidden="true" tabindex="-1"></a>ram <span class="op">(</span>total<span class="op">)</span>        <span class="op">:</span> 33<span class="op">,</span>528 B <span class="op">(</span>32<span class="op">.</span><span class="fu">74</span> KiB<span class="op">)</span> <span class="op">=</span> 32<span class="op">,</span>704 <span class="op">+</span> 784 <span class="op">+</span> 40</span>
<span id="cb17-23"><a href="#cb17-23" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb17-24"><a href="#cb17-24" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb17-25"><a href="#cb17-25" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb17-26"><a href="#cb17-26" aria-hidden="true" tabindex="-1"></a>Evaluation report <span class="op">(</span>summary<span class="op">)</span></span>
<span id="cb17-27"><a href="#cb17-27" aria-hidden="true" tabindex="-1"></a><span class="op">----------------------------------------------------------------------------------</span> <span class="op">...</span></span>
<span id="cb17-28"><a href="#cb17-28" aria-hidden="true" tabindex="-1"></a>Mode                acc       rmse          mae           l2r           tensor</span>
<span id="cb17-29"><a href="#cb17-29" aria-hidden="true" tabindex="-1"></a><span class="op">----------------------------------------------------------------------------------</span> <span class="op">...</span></span>
<span id="cb17-30"><a href="#cb17-30" aria-hidden="true" tabindex="-1"></a>x86 C<span class="op">-</span>model <span class="co">#1      100.00%   0.000000002   0.000000000   0.000000007   softmax_8, ...</span></span>
<span id="cb17-31"><a href="#cb17-31" aria-hidden="true" tabindex="-1"></a>original model <span class="co">#1   100.00%   0.001213685   0.000121268   0.003879170   softmax_8, ...</span></span>
<span id="cb17-32"><a href="#cb17-32" aria-hidden="true" tabindex="-1"></a>X<span class="op">-</span>cross <span class="co">#1          100.00%   0.001213685   0.000121268   0.003878688   softmax_8, ...</span></span>
<span id="cb17-33"><a href="#cb17-33" aria-hidden="true" tabindex="-1"></a><span class="op">----------------------------------------------------------------------------------</span> <span class="op">...</span></span>
<span id="cb17-34"><a href="#cb17-34" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span></code></pre></div>
</section>
</section>
<section id="ref_test_sets_loading" class="level2">
<h2>Test-set considerations</h2>
<p>The quantization process can load batches of test vectors. Quantization requires knowledge of the value range of every tensor. The weights are constant, so their range is easy to estimate, whereas the <em>activation ranges</em> depend on the input data. An accurate estimate of the activation ranges requires a test set representative of real data:</p>
<ul>
<li>the test set should be balanced over all the labels to be classified<br />
</li>
<li>the test set should be large enough to reflect the variability of the network inputs, and thus be as representative as possible</li>
</ul>
<p>We call it the <strong>quantization test-set</strong>. If this test-set is too small, not representative enough, or unbalanced, quantization is still performed as usual, but the performance of the quantized model may not be satisfactory. The other test-set, called the <em>evaluation test-set</em>, is used to assess the performance of the floating-point model and of the quantized model, and potentially to compare them. The <strong>evaluation test-set</strong> and the <strong>quantization test-set</strong> should be as independent as possible to avoid over-estimating the performance of the quantized model.</p>
<p>The input vectors are loaded with the <em>create_test_generator()</em> function, which is dynamically imported by the core from the user <em>“filename_test_set_generation.py”</em> module. Preprocessing or adaptation of the loaded data should be done in this function (see the <em><a href="#ref_quant_mnist">“Quantize a MNIST model”</a></em> section).</p>
<div class="sourceCode" id="cb18"><pre class="sourceCode python"><code class="sourceCode python"><span id="cb18-1"><a href="#cb18-1" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> create_test_generator(quant_test_set_dir, evaluation_test_set_dir,</span>
<span id="cb18-2"><a href="#cb18-2" aria-hidden="true" tabindex="-1"></a>                          quant_test_ratio, batch_size)</span></code></pre></div>
<div class="HTips">
<p><strong>Note</strong> — If the neural network is not used for a classification task, or if labels are not available for the quantizer, set both <code>quant_test_set_labeled</code> and <code>eval_test_set_labeled</code> to <code>None</code> in the <em>create_test_generator()</em> function.</p>
</div>
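<p>The sketch below shows one possible shape of such a user module. Only the <em>create_test_generator()</em> signature comes from this documentation; the random stand-in data, the split logic, and the batching helper are assumptions chosen to keep the example self-contained.</p>

```python
# Illustrative sketch of a user "filename_test_set_generation.py"
# module; the data source and helpers are assumptions, only the
# create_test_generator() signature comes from the documentation.
import numpy as np

def create_test_generator(quant_test_set_dir, evaluation_test_set_dir,
                          quant_test_ratio, batch_size):
    # Stand-in for loading MNIST-like images from quant_test_set_dir.
    x = np.random.rand(1000, 28, 28, 1).astype(np.float32)
    y = np.random.randint(0, 10, size=1000)
    # Split: a fraction for quantization, the rest for evaluation.
    n_quant = int(len(x) * float(quant_test_ratio))
    quant_set = (x[:n_quant], y[:n_quant])
    eval_set = (x[n_quant:], y[n_quant:])

    def batches(data):
        xs, ys = data
        step = int(batch_size)
        for i in range(0, len(xs), step):
            yield xs[i:i + step], ys[i:i + step]

    return batches(quant_set), batches(eval_set)
```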
</section>
<section id="ref_quant_algo" class="level2">
<h2>Quantizers</h2>
<p>One default quantizer (also called algorithm) can be used to generate the tensor format configuration file: <strong><em>“Minmax”</em></strong>. With the <strong><em>“User”</em></strong> algorithm, users can also provide their own quantization method, with a different compromise between quantized data precision and saturation, by implementing the <em>estimate</em> method in the quantizer class and passing it to the weight and activation quantization functions (see the <em>“filename_quantizer_algos.py”</em> user module).</p>
<ul>
<li><em>“Minmax”</em> invokes a simple and quick quantization process based on the min and max of all the weight and activation tensors. The <em>quantization test-set</em> is used to estimate the activation ranges. The weights are constant, so their ranges are estimated directly.</li>
</ul>
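<p>As an example of the precision/saturation compromise a user quantizer can make, the hypothetical range estimator below clips extreme outliers instead of taking the strict min/max, gaining resolution on the bulk of the values at the cost of saturating a few. The class and method names are assumptions for illustration, not the actual X-CUBE-AI API.</p>

```python
# Hypothetical percentile-based range estimator: trade a little
# saturation on outliers for finer precision on typical values.
import numpy as np

class PercentileRangeEstimator:
    def __init__(self, lower_pct=0.1, upper_pct=99.9):
        self.lower_pct = lower_pct
        self.upper_pct = upper_pct

    def estimate(self, values):
        """Return the (min, max) range to quantize over,
        clipping the outermost 0.1% of values on each side."""
        v = np.asarray(values).ravel()
        return (float(np.percentile(v, self.lower_pct)),
                float(np.percentile(v, self.upper_pct)))
```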
<div class="HTips">
<p><strong>Note</strong> — Whatever the algorithm used, the quantizer script reports the expected accuracy of the quantized model on the <em>evaluation test-set</em> (if labels are present). This is a useful indication that can be used to compare different quantizers, but the recommendation is to verify that the quantized model generalizes well enough for your requirements. This has to be done with real <em>field</em> inputs.</p>
</div>
</section>
<section id="ref_tensor_conf_file" class="level2">
<h2>Tensor format configuration file</h2>
<p>The proprietary <em>tensor format configuration</em> file is a JSON dictionary giving the expected tensor formats. One entry is defined for each quantized tensor. If a tensor is omitted, its format is float by default (unless it is inferred); as a result, there is no way to explicitly declare a float tensor format. The configuration is provided as a JSON file generated from the network structure, and it is specific to a neural network model.</p>
<div class="sourceCode" id="cb19"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb19-1"><a href="#cb19-1" aria-hidden="true" tabindex="-1"></a><span class="fu">{</span></span>
<span id="cb19-2"><a href="#cb19-2" aria-hidden="true" tabindex="-1"></a>    <span class="dt">&quot;version&quot;</span><span class="fu">:</span> <span class="st">&quot;2.0&quot;</span><span class="fu">,</span></span>
<span id="cb19-3"><a href="#cb19-3" aria-hidden="true" tabindex="-1"></a>    <span class="dt">&quot;&lt;layer_type&gt;_&lt;idx&gt;_&lt;tensor_name&gt;&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb19-4"><a href="#cb19-4" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;format&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb19-5"><a href="#cb19-5" aria-hidden="true" tabindex="-1"></a>            <span class="dt">&quot;class&quot;</span><span class="fu">:</span> <span class="st">&quot;Integer&quot;</span><span class="fu">,</span></span>
<span id="cb19-6"><a href="#cb19-6" aria-hidden="true" tabindex="-1"></a>            <span class="dt">&quot;type&quot;</span><span class="fu">:</span> <span class="st">&quot;S8&quot;</span><span class="fu">,</span></span>
<span id="cb19-7"><a href="#cb19-7" aria-hidden="true" tabindex="-1"></a>            <span class="dt">&quot;params&quot;</span><span class="fu">:</span> <span class="ot">[</span> <span class="ot">[</span> <span class="fl">0.0019106452371559892</span> <span class="ot">],[</span> <span class="dv">132</span> <span class="ot">]</span> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb19-8"><a href="#cb19-8" aria-hidden="true" tabindex="-1"></a>        <span class="fu">}</span></span>
<span id="cb19-9"><a href="#cb19-9" aria-hidden="true" tabindex="-1"></a>    <span class="fu">},</span></span>
<span id="cb19-10"><a href="#cb19-10" aria-hidden="true" tabindex="-1"></a><span class="fu">}</span></span></code></pre></div>
<table>
<colgroup>
<col style="width: 23%" />
<col style="width: 76%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">field</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">version</td>
<td style="text-align: left;">version/format of the JSON file</td>
</tr>
<tr class="even">
<td style="text-align: left;">name</td>
<td style="text-align: left;"><code>&lt;layer_type&gt;</code>: name/type of tf.keras layer, <code>&lt;idx&gt;</code>: position of the layer in the Keras network, <code>&lt;tensor_name&gt;</code>: designates the conventional name of the associated tensor: “out”, “weights”, “bias”.</td>
</tr>
<tr class="odd">
<td style="text-align: left;">format.class</td>
<td style="text-align: left;">indicates the arithmetic format: can be “FXP” (for “Qmn”) or “Integer”</td>
</tr>
<tr class="even">
<td style="text-align: left;">format.type</td>
<td style="text-align: left;">indicates the type of data: “U8”, “S8” or “S32” (bias in integer)</td>
</tr>
<tr class="odd">
<td style="text-align: left;">format.params</td>
<td style="text-align: left;">indicates the parameters: for “FXP”, [number of integer bits (M), number of fractional bits (N)]; for “Integer”, [[scale value], [zero_point value]]</td>
</tr>
</tbody>
</table>
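<p>The sketch below shows how an “Integer” entry like the one above can be applied: with <code>params</code> holding <code>[[scale], [zero_point]]</code>, a float value <em>x</em> maps to the integer code <em>q = round(x / scale) + zero_point</em>, and back with <em>x ≈ (q − zero_point) × scale</em>. The helper names are illustrative only.</p>

```python
# Applying the "Integer" format parameters from the JSON entry
# (scale/zero_point affine mapping); helper names are illustrative.
entry = {"format": {"class": "Integer", "type": "S8",
                    "params": [[0.0019106452371559892], [132]]}}

scale = entry["format"]["params"][0][0]
zero_point = entry["format"]["params"][1][0]

def to_integer(x):
    # q = round(x / scale) + zero_point
    return int(round(x / scale)) + zero_point

def to_float(q):
    # x ~= (q - zero_point) * scale
    return (q - zero_point) * scale
```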
<p>For per-channel quantization, the output JSON file may look as follows, in this example for a layer with 8 output channels whose weights are quantized in UnsignedAsymmetric:</p>
<div class="sourceCode" id="cb20"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb20-1"><a href="#cb20-1" aria-hidden="true" tabindex="-1"></a><span class="fu">{</span></span>
<span id="cb20-2"><a href="#cb20-2" aria-hidden="true" tabindex="-1"></a>    <span class="dt">&quot;version&quot;</span><span class="fu">:</span> <span class="st">&quot;2.0&quot;</span><span class="fu">,</span></span>
<span id="cb20-3"><a href="#cb20-3" aria-hidden="true" tabindex="-1"></a>    <span class="dt">&quot;&lt;layer_type&gt;_&lt;idx&gt;_&lt;tensor_name&gt;&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb20-4"><a href="#cb20-4" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;format&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb20-5"><a href="#cb20-5" aria-hidden="true" tabindex="-1"></a>            <span class="dt">&quot;class&quot;</span><span class="fu">:</span> <span class="st">&quot;Integer&quot;</span><span class="fu">,</span></span>
<span id="cb20-6"><a href="#cb20-6" aria-hidden="true" tabindex="-1"></a>            <span class="dt">&quot;type&quot;</span><span class="fu">:</span> <span class="st">&quot;U8&quot;</span><span class="fu">,</span></span>
<span id="cb20-7"><a href="#cb20-7" aria-hidden="true" tabindex="-1"></a>            <span class="dt">&quot;params&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb20-8"><a href="#cb20-8" aria-hidden="true" tabindex="-1"></a>                <span class="ot">[</span></span>
<span id="cb20-9"><a href="#cb20-9" aria-hidden="true" tabindex="-1"></a>                    <span class="fl">0.0018906078235370906</span><span class="ot">,</span></span>
<span id="cb20-10"><a href="#cb20-10" aria-hidden="true" tabindex="-1"></a>                    <span class="fl">0.0021662522019363765</span><span class="ot">,</span></span>
<span id="cb20-11"><a href="#cb20-11" aria-hidden="true" tabindex="-1"></a>                    <span class="fl">0.0017604492311402568</span><span class="ot">,</span></span>
<span id="cb20-12"><a href="#cb20-12" aria-hidden="true" tabindex="-1"></a>                    <span class="fl">0.0015629781043435644</span><span class="ot">,</span></span>
<span id="cb20-13"><a href="#cb20-13" aria-hidden="true" tabindex="-1"></a>                    <span class="fl">0.0019880688096594623</span><span class="ot">,</span></span>
<span id="cb20-14"><a href="#cb20-14" aria-hidden="true" tabindex="-1"></a>                    <span class="fl">0.002500925711759432</span><span class="ot">,</span></span>
<span id="cb20-15"><a href="#cb20-15" aria-hidden="true" tabindex="-1"></a>                    <span class="fl">0.0019362337711289173</span><span class="ot">,</span></span>
<span id="cb20-16"><a href="#cb20-16" aria-hidden="true" tabindex="-1"></a>                    <span class="fl">0.0016825192087278592</span><span class="ot">,</span></span>
<span id="cb20-17"><a href="#cb20-17" aria-hidden="true" tabindex="-1"></a>                <span class="ot">],</span></span>
<span id="cb20-18"><a href="#cb20-18" aria-hidden="true" tabindex="-1"></a>                <span class="ot">[</span></span>
<span id="cb20-19"><a href="#cb20-19" aria-hidden="true" tabindex="-1"></a>                    <span class="dv">132</span><span class="ot">,</span></span>
<span id="cb20-20"><a href="#cb20-20" aria-hidden="true" tabindex="-1"></a>                    <span class="dv">150</span><span class="ot">,</span></span>
<span id="cb20-21"><a href="#cb20-21" aria-hidden="true" tabindex="-1"></a>                    <span class="dv">120</span><span class="ot">,</span></span>
<span id="cb20-22"><a href="#cb20-22" aria-hidden="true" tabindex="-1"></a>                    <span class="dv">165</span><span class="ot">,</span></span>
<span id="cb20-23"><a href="#cb20-23" aria-hidden="true" tabindex="-1"></a>                    <span class="dv">23</span><span class="ot">,</span></span>
<span id="cb20-24"><a href="#cb20-24" aria-hidden="true" tabindex="-1"></a>                    <span class="dv">88</span><span class="ot">,</span></span>
<span id="cb20-25"><a href="#cb20-25" aria-hidden="true" tabindex="-1"></a>                    <span class="dv">230</span><span class="ot">,</span></span>
<span id="cb20-26"><a href="#cb20-26" aria-hidden="true" tabindex="-1"></a>                    <span class="dv">129</span><span class="ot">,</span></span>
<span id="cb20-27"><a href="#cb20-27" aria-hidden="true" tabindex="-1"></a>                <span class="ot">]</span></span>
<span id="cb20-28"><a href="#cb20-28" aria-hidden="true" tabindex="-1"></a>            <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb20-29"><a href="#cb20-29" aria-hidden="true" tabindex="-1"></a>        <span class="fu">}</span></span>
<span id="cb20-30"><a href="#cb20-30" aria-hidden="true" tabindex="-1"></a>    <span class="fu">},</span></span>
<span id="cb20-31"><a href="#cb20-31" aria-hidden="true" tabindex="-1"></a><span class="fu">}</span></span></code></pre></div>
<p>Qmn example</p>
<div class="sourceCode" id="cb21"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb21-1"><a href="#cb21-1" aria-hidden="true" tabindex="-1"></a><span class="fu">{</span></span>
<span id="cb21-2"><a href="#cb21-2" aria-hidden="true" tabindex="-1"></a>    <span class="dt">&quot;version&quot;</span><span class="fu">:</span> <span class="st">&quot;2.0&quot;</span><span class="fu">,</span></span>
<span id="cb21-3"><a href="#cb21-3" aria-hidden="true" tabindex="-1"></a>    <span class="dt">&quot;Input&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb21-4"><a href="#cb21-4" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;format&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb21-5"><a href="#cb21-5" aria-hidden="true" tabindex="-1"></a>            <span class="dt">&quot;type&quot;</span><span class="fu">:</span> <span class="st">&quot;S8&quot;</span><span class="fu">,</span></span>
<span id="cb21-6"><a href="#cb21-6" aria-hidden="true" tabindex="-1"></a>            <span class="dt">&quot;class&quot;</span><span class="fu">:</span> <span class="st">&quot;FXP&quot;</span><span class="fu">,</span></span>
<span id="cb21-7"><a href="#cb21-7" aria-hidden="true" tabindex="-1"></a>            <span class="dt">&quot;params&quot;</span><span class="fu">:</span> <span class="ot">[</span> <span class="dv">0</span><span class="ot">,</span> <span class="dv">7</span><span class="ot">]</span></span>
<span id="cb21-8"><a href="#cb21-8" aria-hidden="true" tabindex="-1"></a>        <span class="fu">}</span></span>
<span id="cb21-9"><a href="#cb21-9" aria-hidden="true" tabindex="-1"></a>    <span class="fu">},</span></span>
<span id="cb21-10"><a href="#cb21-10" aria-hidden="true" tabindex="-1"></a><span class="fu">}</span></span></code></pre></div>
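<p>As a reading aid for the “FXP” entry above: in the Q(m,n) convention, with <code>params</code> [M, N] = [0, 7], a signed 8-bit code <em>q</em> represents the real value <em>q / 2<sup>N</sup></em>. The small sketch below illustrates the conversion; the helper names are assumptions for illustration.</p>

```python
# Sketch of the Q(m,n) fixed-point convention used by the "FXP"
# class: a signed code q represents q / 2**n_frac.
def qmn_to_float(q, n_frac=7):
    return q / (1 << n_frac)

def float_to_qmn(x, n_frac=7, bits=8):
    q = round(x * (1 << n_frac))
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, q))  # saturate to the signed range
```

Note that Q0.7 cannot represent 1.0 exactly; it saturates to 127/128.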
</section>
<section id="ref_quant_mnist" class="level2">
<h2>Quantize a MNIST model</h2>
<p>Inside the <strong>X-CUBE-AI</strong> pack, a typical example of a quantization configuration file and the associated user Python scripts are provided in:</p>
<pre><code>%X-CUBE-AI-DIR%/scripts/quantization/</code></pre>
<p>This example is ready to use, and also serves as <strong>reference code</strong> for the <em>GenericInputBatchGenerator()</em> class, an alternative that works with any type of input tensor format (part of the <code>test_set_generation_mnist.py</code> file).</p>
<div class="sourceCode" id="cb23"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb23-1"><a href="#cb23-1" aria-hidden="true" tabindex="-1"></a>    <span class="op">%</span>X<span class="op">-</span>CUBE<span class="op">-</span>AI<span class="op">-</span>DIR<span class="op">%</span>\scripts\quantization</span>
<span id="cb23-2"><a href="#cb23-2" aria-hidden="true" tabindex="-1"></a>                                <span class="op">|-</span> cfg_mnist_ss_sa_pc<span class="op">.</span><span class="fu">json</span></span>
<span id="cb23-3"><a href="#cb23-3" aria-hidden="true" tabindex="-1"></a>                                <span class="op">|-</span> mnist_cnn<span class="op">.</span><span class="fu">h5</span></span>
<span id="cb23-4"><a href="#cb23-4" aria-hidden="true" tabindex="-1"></a>                                <span class="op">|-</span> mnist<span class="op">.</span><span class="fu">npz</span></span>
<span id="cb23-5"><a href="#cb23-5" aria-hidden="true" tabindex="-1"></a>                                \_ mnist_modules</span>
<span id="cb23-6"><a href="#cb23-6" aria-hidden="true" tabindex="-1"></a>                                    <span class="op">|</span>_ test_set_generation_mnist<span class="op">.</span><span class="fu">py</span></span>
<span id="cb23-7"><a href="#cb23-7" aria-hidden="true" tabindex="-1"></a>                                    \_ quantizer_algos_user<span class="op">.</span><span class="fu">py</span></span></code></pre></div>
<div class="sourceCode" id="cb24"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb24-1"><a href="#cb24-1" aria-hidden="true" tabindex="-1"></a><span class="fu">{</span></span>
<span id="cb24-2"><a href="#cb24-2" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;model_name&quot;</span><span class="fu">:</span> <span class="st">&quot;mnist_ss_sa_pc&quot;</span><span class="fu">,</span></span>
<span id="cb24-3"><a href="#cb24-3" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;path_to_floatingpoint_h5&quot;</span><span class="fu">:</span> <span class="st">&quot;mnist_cnn.h5&quot;</span><span class="fu">,</span></span>
<span id="cb24-4"><a href="#cb24-4" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;quant_test_set_dir&quot;</span><span class="fu">:</span><span class="st">&quot;.</span><span class="ch">\\</span><span class="st">&quot;</span><span class="fu">,</span></span>
<span id="cb24-5"><a href="#cb24-5" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;quant_test_ratio&quot;</span><span class="fu">:</span> <span class="st">&quot;0.3&quot;</span><span class="fu">,</span></span>
<span id="cb24-6"><a href="#cb24-6" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;evaluation_test_set_dir&quot;</span><span class="fu">:</span><span class="st">&quot;&quot;</span><span class="fu">,</span></span>
<span id="cb24-7"><a href="#cb24-7" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;batch_size&quot;</span><span class="fu">:</span> <span class="st">&quot;128&quot;</span><span class="fu">,</span></span>
<span id="cb24-8"><a href="#cb24-8" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;modules_directory&quot;</span><span class="fu">:</span> <span class="st">&quot;mnist_modules&quot;</span><span class="fu">,</span></span>
<span id="cb24-9"><a href="#cb24-9" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;filename_test_set_generation&quot;</span><span class="fu">:</span> <span class="st">&quot;test_set_generation_mnist.py&quot;</span><span class="fu">,</span></span>
<span id="cb24-10"><a href="#cb24-10" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;filename_quantizer_algos&quot;</span><span class="fu">:</span> <span class="st">&quot;quantizer_algos_user.py&quot;</span><span class="fu">,</span></span>
<span id="cb24-11"><a href="#cb24-11" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;algorithm&quot;</span><span class="fu">:</span> <span class="st">&quot;MinMax&quot;</span><span class="fu">,</span></span>
<span id="cb24-12"><a href="#cb24-12" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;arithmetic&quot;</span><span class="fu">:</span> <span class="st">&quot;Integer&quot;</span><span class="fu">,</span></span>
<span id="cb24-13"><a href="#cb24-13" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;weights_integer_scheme&quot;</span><span class="fu">:</span> <span class="st">&quot;SignedSymmetric&quot;</span><span class="fu">,</span></span>
<span id="cb24-14"><a href="#cb24-14" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;activations_integer_scheme&quot;</span><span class="fu">:</span> <span class="st">&quot;SignedSymmetric&quot;</span><span class="fu">,</span></span>
<span id="cb24-15"><a href="#cb24-15" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;output_directory&quot;</span><span class="fu">:</span> <span class="st">&quot;out&quot;</span><span class="fu">,</span></span>
<span id="cb24-16"><a href="#cb24-16" aria-hidden="true" tabindex="-1"></a>        <span class="dt">&quot;per_channel&quot;</span><span class="fu">:</span> <span class="st">&quot;True&quot;</span><span class="fu">,</span></span>
<span id="cb24-17"><a href="#cb24-17" aria-hidden="true" tabindex="-1"></a><span class="fu">}</span></span></code></pre></div>
<div class="Tips">
<p><strong>Tip</strong> — To use this file directly, the original pre-trained Keras model must be downloaded (<a href="https://github.com/EN10/KerasMNIST/raw/master/cnn.h5">https://github.com/EN10/KerasMNIST/raw/master/cnn.h5</a>) and copied into the working directory under the <code>mnist_cnn.h5</code> name. The associated data set is automatically downloaded and cached in the <code>~/.keras/datasets/</code> directory thanks to the Keras <em>mnist.load_data()</em> function (see the <em>create_test_generator()</em> function).</p>
</div>
<p>From the <code>%X-CUBE-AI-DIR%\scripts\quantization</code> directory, the following command launches the quantization process (refer to the <a href="setting_env.html">[INST]</a> article to set the <code>X_CUBE_AI_DIR</code> variable):</p>
<div class="sourceCode" id="cb25"><pre class="sourceCode powershell"><code class="sourceCode powershell"><span id="cb25-1"><a href="#cb25-1" aria-hidden="true" tabindex="-1"></a>$ stm32ai quantize <span class="op">-</span>q cfg_mnist_ss_sa_pc<span class="op">.</span><span class="fu">json</span></span></code></pre></div>
<p>The <em>create_test_generator()</em> function from the <code>test_set_generation_mnist.py</code> module calls a local <em>load_mnist()</em> function to download the public data set. Training samples are not used. The “quant_test_ratio” parameter is used to create the expected <strong>quantization test-set</strong> and <strong>evaluation test-set</strong>.</p>
<p>The following code illustrates the minimal modifications needed to load the data set from the local <em>“quant_test_set_dir”</em> directory. Only a limited, randomly selected part of the original test set is used.</p>
<div class="sourceCode" id="cb26"><pre class="sourceCode python"><code class="sourceCode python"><span id="cb26-1"><a href="#cb26-1" aria-hidden="true" tabindex="-1"></a>...</span>
<span id="cb26-2"><a href="#cb26-2" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> create_test_generator(quant_test_set_dir, evaluation_test_set_dir,</span>
<span id="cb26-3"><a href="#cb26-3" aria-hidden="true" tabindex="-1"></a>                          quant_test_ratio, batch_size):</span>
<span id="cb26-4"><a href="#cb26-4" aria-hidden="true" tabindex="-1"></a>    ...</span>
<span id="cb26-5"><a href="#cb26-5" aria-hidden="true" tabindex="-1"></a>    x_testset,y_testset <span class="op">=</span> load_mnist(quant_test_set_dir)</span>
<span id="cb26-6"><a href="#cb26-6" aria-hidden="true" tabindex="-1"></a>    ...</span>
<span id="cb26-7"><a href="#cb26-7" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb26-8"><a href="#cb26-8" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> load_mnist(test_dir):</span>
<span id="cb26-9"><a href="#cb26-9" aria-hidden="true" tabindex="-1"></a>    <span class="im">import</span> numpy <span class="im">as</span> np</span>
<span id="cb26-10"><a href="#cb26-10" aria-hidden="true" tabindex="-1"></a>    <span class="im">import</span> sys</span>
<span id="cb26-11"><a href="#cb26-11" aria-hidden="true" tabindex="-1"></a>    <span class="im">import</span> os</span>
<span id="cb26-12"><a href="#cb26-12" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb26-13"><a href="#cb26-13" aria-hidden="true" tabindex="-1"></a>    MNIST_SHAPE <span class="op">=</span> (<span class="dv">28</span>, <span class="dv">28</span>, <span class="dv">1</span>)</span>
<span id="cb26-14"><a href="#cb26-14" aria-hidden="true" tabindex="-1"></a>    N_CLASSES <span class="op">=</span> <span class="dv">10</span></span>
<span id="cb26-15"><a href="#cb26-15" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb26-16"><a href="#cb26-16" aria-hidden="true" tabindex="-1"></a>    <span class="bu">print</span>(<span class="st">&#39;Keras version  :&#39;</span>, keras.__version__)</span>
<span id="cb26-17"><a href="#cb26-17" aria-hidden="true" tabindex="-1"></a>    <span class="bu">print</span>(<span class="st">&#39;Python version :&#39;</span>, sys.version)</span>
<span id="cb26-18"><a href="#cb26-18" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb26-19"><a href="#cb26-19" aria-hidden="true" tabindex="-1"></a>    fdata_set <span class="op">=</span> os.path.join(test_dir,<span class="st">&#39;mnist.npz&#39;</span>)</span>
<span id="cb26-20"><a href="#cb26-20" aria-hidden="true" tabindex="-1"></a>    <span class="cf">if</span> <span class="kw">not</span> os.path.isfile(fdata_set):</span>
<span id="cb26-21"><a href="#cb26-21" aria-hidden="true" tabindex="-1"></a>      <span class="bu">print</span>(<span class="st">&#39;Download the mnist data set with the Keras service...&#39;</span>)</span>
<span id="cb26-22"><a href="#cb26-22" aria-hidden="true" tabindex="-1"></a>      mnist <span class="op">=</span> keras.datasets.mnist</span>
<span id="cb26-23"><a href="#cb26-23" aria-hidden="true" tabindex="-1"></a>      _, (x_test, y_test) <span class="op">=</span> mnist.load_data()</span>
<span id="cb26-24"><a href="#cb26-24" aria-hidden="true" tabindex="-1"></a>    <span class="cf">else</span>:</span>
<span id="cb26-25"><a href="#cb26-25" aria-hidden="true" tabindex="-1"></a>      <span class="bu">print</span>(<span class="st">&#39;Use the data set </span><span class="sc">{}</span><span class="st">&#39;</span>.<span class="bu">format</span>(fdata_set))</span>
<span id="cb26-26"><a href="#cb26-26" aria-hidden="true" tabindex="-1"></a>      arrays <span class="op">=</span> np.load(fdata_set)</span>
<span id="cb26-27"><a href="#cb26-27" aria-hidden="true" tabindex="-1"></a>      x_test, y_test <span class="op">=</span> arrays[<span class="st">&#39;x_test&#39;</span>], arrays[<span class="st">&#39;y_test&#39;</span>]</span>
<span id="cb26-28"><a href="#cb26-28" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb26-29"><a href="#cb26-29" aria-hidden="true" tabindex="-1"></a>    msize <span class="op">=</span> <span class="bu">min</span>(<span class="dv">1000</span>, <span class="bu">len</span>(x_test))</span>
<span id="cb26-30"><a href="#cb26-30" aria-hidden="true" tabindex="-1"></a>    np.random.seed(<span class="dv">2</span>)  <span class="co"># deterministic results</span></span>
<span id="cb26-31"><a href="#cb26-31" aria-hidden="true" tabindex="-1"></a>    rchoice <span class="op">=</span> np.random.choice(<span class="bu">len</span>(x_test), size<span class="op">=</span>msize, replace<span class="op">=</span><span class="va">False</span>)</span>
<span id="cb26-32"><a href="#cb26-32" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb26-33"><a href="#cb26-33" aria-hidden="true" tabindex="-1"></a>    x_test, y_test <span class="op">=</span> x_test[rchoice], y_test[rchoice]</span>
<span id="cb26-34"><a href="#cb26-34" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb26-35"><a href="#cb26-35" aria-hidden="true" tabindex="-1"></a>    x_test <span class="op">=</span> x_test.reshape((<span class="op">-</span><span class="dv">1</span>, ) <span class="op">+</span> MNIST_SHAPE).astype(<span class="st">&#39;float32&#39;</span>) <span class="op">/</span> <span class="dv">255</span></span>
<span id="cb26-36"><a href="#cb26-36" aria-hidden="true" tabindex="-1"></a>    y_test <span class="op">=</span> to_categorical(y_test, N_CLASSES)</span>
<span id="cb26-37"><a href="#cb26-37" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb26-38"><a href="#cb26-38" aria-hidden="true" tabindex="-1"></a>    <span class="bu">print</span>(<span class="st">&#39;x_test&#39;</span>, x_test.shape)</span>
<span id="cb26-39"><a href="#cb26-39" aria-hidden="true" tabindex="-1"></a>    <span class="bu">print</span>(<span class="st">&#39;y_test&#39;</span>, y_test.shape)</span>
<span id="cb26-40"><a href="#cb26-40" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb26-41"><a href="#cb26-41" aria-hidden="true" tabindex="-1"></a>    <span class="cf">return</span> x_test, y_test</span></code></pre></div>
<!-- External ST resources/links -->
<!-- Internal resources/links -->
<!-- External resources/links -->
<!-- Cross references -->
</section>
</section>
<section id="references" class="level1">
<h1>References</h1>
<table>
<colgroup>
<col style="width: 18%" />
<col style="width: 81%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">ref</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">[DS]</td>
<td style="text-align: left;">X-CUBE-AI - AI expansion pack for STM32CubeMX <a href="https://www.st.com/en/embedded-software/x-cube-ai.html">https://www.st.com/en/embedded-software/x-cube-ai.html</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[UM]</td>
<td style="text-align: left;">User manual - Getting started with X-CUBE-AI Expansion Package for Artificial Intelligence (AI) <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">(pdf)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[CLI]</td>
<td style="text-align: left;">stm32ai - Command Line Interface <a href="command_line_interface.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[API]</td>
<td style="text-align: left;">Embedded inference client API <a href="embedded_client_api.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[METRIC]</td>
<td style="text-align: left;">Evaluation report and metrics <a href="evaluation_metrics.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[TFL]</td>
<td style="text-align: left;">TensorFlow Lite toolbox <a href="supported_ops_tflite.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[KERAS]</td>
<td style="text-align: left;">Keras toolbox <a href="supported_ops_keras.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[ONNX]</td>
<td style="text-align: left;">ONNX toolbox <a href="supported_ops_onnx.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[FAQS]</td>
<td style="text-align: left;">FAQ <a href="faq_generic.html">generic</a>, <a href="faq_validation.html">validation</a>, <a href="faq_quantization.html">quantization</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[QUANT]</td>
<td style="text-align: left;">Quantization and quantize command <a href="quantization.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[RELOC]</td>
<td style="text-align: left;">Relocatable binary network support <a href="relocatable.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[CUST]</td>
<td style="text-align: left;">Support of the Keras Lambda/custom layers <a href="keras_lambda_custom.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[TFLM]</td>
<td style="text-align: left;">TensorFlow Lite for Microcontroller support <a href="tflite_micro_support.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[INST]</td>
<td style="text-align: left;">Setting the environment <a href="setting_env.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[OBS]</td>
<td style="text-align: left;">Platform Observer API <a href="api_platform_observer.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[C-RUN]</td>
<td style="text-align: left;">Executing locally a generated c-model <a href="how_to_run_a_model_locally.html">(link)</a></td>
</tr>
</tbody>
</table>
</section>



<section class="st_footer">

<h1> <br> </h1>

<p style="font-family:verdana; text-align:left;">
 Embedded Documentation 

	- <b> Quantized model and quantize command </b>
			<br> X-CUBE-AI Expansion Package
	 
			<br> r3.0
		 - AI PLATFORM r7.0.0
			 (Embedded Inference Client API 1.1.0) 
			 - Command Line Interface r1.5.1 
		
	
</p>

<img src="" title="ST logo" align="right" height="100" />

<div class="st_notice">
Information in this document is provided solely in connection with ST products.
The contents of this document are subject to change without prior notice.
<br>
© Copyright STMicroelectronics 2020. All rights reserved. <a href="http://www.st.com">www.st.com</a>
</div>

<hr size="1" />
</section>


</article>
</body>

</html>
