<!DOCTYPE html>
<!--

	Modified template for STM32CubeMX.AI purpose

	d0.1: 	jean-michel.delorme@st.com
			add ST logo and ST footer

	d2.0: 	jean-michel.delorme@st.com
			add sidenav support

	d2.1: 	jean-michel.delorme@st.com
			clean-up + optional ai_logo/ai meta data
			
==============================================================================
           "GitHub HTML5 Pandoc Template" v2.1 — by Tristano Ajmone           
==============================================================================
Copyright © Tristano Ajmone, 2017, MIT License (MIT). Project's home:

- https://github.com/tajmone/pandoc-goodies

The CSS in this template reuses source code taken from the following projects:

- GitHub Markdown CSS: Copyright © Sindre Sorhus, MIT License (MIT):
  https://github.com/sindresorhus/github-markdown-css

- Primer CSS: Copyright © 2016-2017 GitHub Inc., MIT License (MIT):
  http://primercss.io/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The MIT License 

Copyright (c) Tristano Ajmone, 2017 (github.com/tajmone/pandoc-goodies)
Copyright (c) Sindre Sorhus <sindresorhus@gmail.com> (sindresorhus.com)
Copyright (c) 2017 GitHub Inc.

"GitHub Pandoc HTML5 Template" is Copyright (c) Tristano Ajmone, 2017, released
under the MIT License (MIT); it contains readaptations of substantial portions
of the following third party softwares:

(1) "GitHub Markdown CSS", Copyright (c) Sindre Sorhus, MIT License (MIT).
(2) "Primer CSS", Copyright (c) 2016 GitHub Inc., MIT License (MIT).

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
==============================================================================-->
<html>
<head>
  <meta charset="utf-8" />
  <meta name="generator" content="pandoc" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
  <title>Keras toolbox support</title>
  <style type="text/css">
.markdown-body{
	-ms-text-size-adjust:100%;
	-webkit-text-size-adjust:100%;
	color:#24292e;
	font-family:-apple-system,system-ui,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol";
	font-size:16px;
	line-height:1.5;
	word-wrap:break-word;
	box-sizing:border-box;
	min-width:200px;
	max-width:980px;
	margin:0 auto;
	padding:45px;
	}
.markdown-body a{
	color:#0366d6;
	background-color:transparent;
	text-decoration:none;
	-webkit-text-decoration-skip:objects}
.markdown-body a:active,.markdown-body a:hover{
	outline-width:0}
.markdown-body a:hover{
	text-decoration:underline}
.markdown-body a:not([href]){
	color:inherit;text-decoration:none}
.markdown-body strong{font-weight:600}
.markdown-body h1,.markdown-body h2,.markdown-body h3,.markdown-body h4,.markdown-body h5,.markdown-body h6{
	margin-top:24px;
	margin-bottom:16px;
	font-weight:600;
	line-height:1.25}
.markdown-body h1{
	font-size:2em;
	margin:.67em 0;
	padding-bottom:.3em;
	border-bottom:1px solid #eaecef}
.markdown-body h2{
	padding-bottom:.3em;
	font-size:1.5em;
	border-bottom:1px solid #eaecef}
.markdown-body h3{font-size:1.25em}
.markdown-body h4{font-size:1em}
.markdown-body h5{font-size:.875em}
.markdown-body h6{font-size:.85em;color:#6a737d}
.markdown-body img{border-style:none}
.markdown-body svg:not(:root){
	overflow:hidden}
.markdown-body hr{
	box-sizing:content-box;
	height:.25em;
	margin:24px 0;
	padding:0;
	overflow:hidden;
	background-color:#e1e4e8;
	border:0}
.markdown-body hr::before{display:table;content:""}
.markdown-body hr::after{display:table;clear:both;content:""}
.markdown-body input{margin:0;overflow:visible;font:inherit;font-family:inherit;font-size:inherit;line-height:inherit}
.markdown-body [type=checkbox]{box-sizing:border-box;padding:0}
.markdown-body *{box-sizing:border-box}.markdown-body blockquote{margin:0}
.markdown-body ol,.markdown-body ul{padding-left:2em}
.markdown-body ol ol,.markdown-body ul ol{list-style-type:lower-roman}
.markdown-body ol ol,.markdown-body ol ul,.markdown-body ul ol,.markdown-body ul ul{margin-top:0;margin-bottom:0}
.markdown-body ol ol ol,.markdown-body ol ul ol,.markdown-body ul ol ol,.markdown-body ul ul ol{list-style-type:lower-alpha}
.markdown-body li>p{margin-top:16px}
.markdown-body li+li{margin-top:.25em}
.markdown-body dd{margin-left:0}
.markdown-body dl{padding:0}
.markdown-body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:600}
.markdown-body dl dd{padding:0 16px;margin-bottom:16px}
.markdown-body code{font-family:SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace}
.markdown-body pre{font:12px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;word-wrap:normal}
.markdown-body blockquote,.markdown-body dl,.markdown-body ol,.markdown-body p,.markdown-body pre,.markdown-body table,.markdown-body ul{margin-top:0;margin-bottom:16px}
.markdown-body blockquote{padding:0 1em;color:#6a737d;border-left:.25em solid #dfe2e5}
.markdown-body blockquote>:first-child{margin-top:0}
.markdown-body blockquote>:last-child{margin-bottom:0}
.markdown-body table{display:block;width:100%;overflow:auto;border-spacing:0;border-collapse:collapse}
.markdown-body table th{font-weight:600}
.markdown-body table td,.markdown-body table th{padding:6px 13px;border:1px solid #dfe2e5}
.markdown-body table tr{background-color:#fff;border-top:1px solid #c6cbd1}
.markdown-body table tr:nth-child(2n){background-color:#f6f8fa}
.markdown-body img{max-width:100%;box-sizing:content-box;background-color:#fff}
.markdown-body code{padding:.2em 0;margin:0;font-size:85%;background-color:rgba(27,31,35,.05);border-radius:3px}
.markdown-body code::after,.markdown-body code::before{letter-spacing:-.2em;content:"\00a0"}
.markdown-body pre>code{padding:0;margin:0;font-size:100%;word-break:normal;white-space:pre;background:0 0;border:0}
.markdown-body .highlight{margin-bottom:16px}
.markdown-body .highlight pre{margin-bottom:0;word-break:normal}
.markdown-body .highlight pre,.markdown-body pre{padding:16px;overflow:auto;font-size:85%;line-height:1.45;background-color:#f6f8fa;border-radius:3px}
.markdown-body pre code{display:inline;max-width:auto;padding:0;margin:0;overflow:visible;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}
.markdown-body pre code::after,.markdown-body pre code::before{content:normal}
.markdown-body .full-commit .btn-outline:not(:disabled):hover{color:#005cc5;border-color:#005cc5}
.markdown-body kbd{box-shadow:inset 0 -1px 0 #959da5;display:inline-block;padding:3px 5px;font:11px/10px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;color:#444d56;vertical-align:middle;background-color:#fcfcfc;border:1px solid #c6cbd1;border-bottom-color:#959da5;border-radius:3px;box-shadow:inset 0 -1px 0 #959da5}
.markdown-body :checked+.radio-label{position:relative;z-index:1;border-color:#0366d6}
.markdown-body .task-list-item{list-style-type:none}
.markdown-body .task-list-item+.task-list-item{margin-top:3px}
.markdown-body .task-list-item input{margin:0 .2em .25em -1.6em;vertical-align:middle}
.markdown-body::before{display:table;content:""}
.markdown-body::after{display:table;clear:both;content:""}
.markdown-body>:first-child{margin-top:0!important}
.markdown-body>:last-child{margin-bottom:0!important}
.Alert,.Error,.Note,.Success,.Warning,.Tips,.HTips{padding:11px;margin-bottom:24px;border-style:solid;border-width:1px;border-radius:4px}
.Alert p,.Error p,.Note p,.Success p,.Warning p,.Tips p,.HTips p{margin-top:0}
.Alert p:last-child,.Error p:last-child,.Note p:last-child,.Success p:last-child,.Warning p:last-child,.Tips p:last-child,.HTips p:last-child{margin-bottom:0}
.Alert{color:#246;background-color:#e2eef9;border-color:#bac6d3}
.Warning{color:#4c4a42;background-color:#fff9ea;border-color:#dfd8c2}
.Error{color:#911;background-color:#fcdede;border-color:#d2b2b2}
.Success{color:#22662c;background-color:#e2f9e5;border-color:#bad3be}
.Note{color:#2f363d;background-color:#f6f8fa;border-color:#d5d8da}
.Alert h1,.Alert h2,.Alert h3,.Alert h4,.Alert h5,.Alert h6{color:#246;margin-bottom:0}
.Warning h1,.Warning h2,.Warning h3,.Warning h4,.Warning h5,.Warning h6{color:#4c4a42;margin-bottom:0}
.Error h1,.Error h2,.Error h3,.Error h4,.Error h5,.Error h6{color:#911;margin-bottom:0}
.Success h1,.Success h2,.Success h3,.Success h4,.Success h5,.Success h6{color:#22662c;margin-bottom:0}
.Note h1,.Note h2,.Note h3,.Note h4,.Note h5,.Note h6{color:#2f363d;margin-bottom:0}
.Tips h1,.Tips h2,.Tips h3,.Tips h4,.Tips h5,.Tips h6{color:#2f363d;margin-bottom:0}
.HTips h1,.HTips h2,.HTips h3,.HTips h4,.HTips h5,.HTips h6{color:#2f363d;margin-bottom:0}
.Tips h1:first-child,.Tips h2:first-child,.Tips h3:first-child,.Tips h4:first-child,.Tips h5:first-child,.Tips h6:first-child,.Alert h1:first-child,.Alert h2:first-child,.Alert h3:first-child,.Alert h4:first-child,.Alert h5:first-child,.Alert h6:first-child,.Error h1:first-child,.Error h2:first-child,.Error h3:first-child,.Error h4:first-child,.Error h5:first-child,.Error h6:first-child,.Note h1:first-child,.Note h2:first-child,.Note h3:first-child,.Note h4:first-child,.Note h5:first-child,.Note h6:first-child,.Success h1:first-child,.Success h2:first-child,.Success h3:first-child,.Success h4:first-child,.Success h5:first-child,.Success h6:first-child,.Warning h1:first-child,.Warning h2:first-child,.Warning h3:first-child,.Warning h4:first-child,.Warning h5:first-child,.Warning h6:first-child{margin-top:0}
h1.title,p.subtitle{text-align:center}
h1.title.followed-by-subtitle{margin-bottom:0}
p.subtitle{font-size:1.5em;font-weight:600;line-height:1.25;margin-top:0;margin-bottom:16px;padding-bottom:.3em}
div.line-block{white-space:pre-line}
  </style>
  <style type="text/css">code{white-space: pre;}</style>
  <link rel="stylesheet" href="data:text/css,%3Aroot%20%7B%2D%2Dmain%2Ddarkblue%2Dcolor%3A%20rgb%283%2C35%2C75%29%3B%20%2D%2Dmain%2Dlightblue%2Dcolor%3A%20rgb%2860%2C180%2C230%29%3B%20%2D%2Dmain%2Dpink%2Dcolor%3A%20rgb%28230%2C0%2C126%29%3B%20%2D%2Dmain%2Dyellow%2Dcolor%3A%20rgb%28255%2C210%2C0%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%3A%20rgb%2870%2C70%2C80%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%2D25%3A%20rgb%28209%2C209%2C211%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%3A%20rgb%28233%2C233%2C234%29%3B%20%2D%2Dsecondary%2Dlightgreen%2Dcolor%3A%20rgb%2873%2C177%2C112%29%3B%20%2D%2Dsecondary%2Dpurple%2Dcolor%3A%20rgb%28140%2C0%2C120%29%3B%20%2D%2Dsecondary%2Ddarkgreen%2Dcolor%3A%20rgb%284%2C87%2C47%29%3B%20%2D%2Dsidenav%2Dfont%2Dsize%3A%2090%25%3B%7Dhtml%20%7Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3B%7D%2A%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3B%7D%2Est%5Fheader%20h1%2Etitle%2C%2Est%5Fheader%20p%2Esubtitle%20%7Btext%2Dalign%3A%20left%3B%7D%2Est%5Fheader%20h1%2Etitle%20%7Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bmargin%2Dbottom%3A5px%3B%7D%2Est%5Fheader%20p%2Esubtitle%20%7Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dsize%3A90%25%3B%7D%2Est%5Fheader%20h1%2Etitle%2Efollowed%2Dby%2Dsubtitle%20%7Bborder%2Dbottom%3A2px%20solid%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bmargin%2Dbottom%3A5px%3B%7D%2Est%5Fheader%20p%2Erevision%20%7Bdisplay%3A%20inline%2Dblock%3Bwidth%3A70%25%3B%7D%2Est%5Fheader%20div%2Eauthor%20%7Bfont%2Dstyle%3A%20italic%3B%7D%2Est%5Fheader%20div%2Esummary%20%7Bborder%2Dtop%3A%20solid%201px%20%23C0C0C0%3Bbackground%3A%20%23ECECEC%3Bpadding%3A%205px%3B%7D%2Est%5Ffooter%20%7Bfont%2Dsize%3A80%25%3B%7D%2Est%5Ffooter%20img%20%7Bfloat%3A%20right%3B%7D%2Est%5Ffooter%20%2Est%5Fnotice%20%7Bwidth%3A80%25%3B%7D%2Emarkdown%2Dbody%20%23header%2Dsection%2Dnumber%20%7Bfont%2Dsize%3A120%25%3B%7D%2Emarkdown%2Dbody%20h1%20%7Bborder%2Dbottom%3A1px%20solid%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%2
9%3Bpadding%2Dbottom%3A%202px%3Bpadding%2Dtop%3A%2010px%3B%7D%2Emarkdown%2Dbody%20h2%20%7Bpadding%2Dbottom%3A%205px%3Bpadding%2Dtop%3A%2010px%3B%7D%2Emarkdown%2Dbody%20h2%20code%20%7Bbackground%2Dcolor%3A%20rgb%28255%2C%20255%2C%20255%29%3B%7D%23func%2EsourceCode%20%7Bborder%2Dleft%2Dstyle%3A%20solid%3Bborder%2Dcolor%3A%20rgb%280%2C%2032%2C%2082%29%3Bborder%2Dcolor%3A%20rgb%28255%2C%20244%2C%20191%29%3Bborder%2Dwidth%3A%208px%3Bpadding%3A0px%3B%7Dpre%20%3E%20code%20%7Bborder%3A%20solid%201px%20blue%3Bfont%2Dsize%3A60%25%3B%7DcodeXX%20%7Bborder%3A%20solid%201px%20blue%3Bfont%2Dsize%3A60%25%3B%7D%23func%2EsourceXXCode%3A%3Abefore%20%7Bcontent%3A%20%22Synopsis%22%3Bpadding%2Dleft%3A10px%3Bfont%2Dweight%3A%20bold%3B%7Dfigure%20%7Bpadding%3A0px%3Bmargin%2Dleft%3A5px%3Bmargin%2Dright%3A5px%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3B%7Dimg%5Bdata%2Dproperty%3D%22center%22%5D%20%7Bdisplay%3A%20block%3Bmargin%2Dtop%3A%2010px%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bpadding%3A%2010px%3B%7Dfigcaption%20%7Btext%2Dalign%3Aleft%3B%20%20border%2Dtop%3A%201px%20dotted%20%23888%3Bpadding%2Dbottom%3A%2020px%3Bmargin%2Dtop%3A%2010px%3B%7Dh1%20code%2C%20h2%20code%20%7Bfont%2Dsize%3A120%25%3B%7D%09%2Emarkdown%2Dbody%20table%20%7Bwidth%3A%20100%25%3Bmargin%2Dleft%3Aauto%3Bmargin%2Dright%3Aauto%3B%7D%2Emarkdown%2Dbody%20img%20%7Bborder%2Dradius%3A%204px%3Bpadding%3A%205px%3Bdisplay%3A%20block%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bwidth%3A%20auto%3B%7D%2Emarkdown%2Dbody%20%2Est%5Fheader%20img%2C%20%2Emarkdown%2Dbody%20%7Bborder%3A%20none%3Bborder%2Dradius%3A%20none%3Bpadding%3A%205px%3Bdisplay%3A%20block%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bwidth%3A%20auto%3Bbox%2Dshadow%3A%20none%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2010px%3Bwidth%3A%20auto%3Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3Bcolor%3A%20%2303234B%3Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%7D%2Emarkdown%2Dbody%20h1%2C%20%2Emarkdown%
2Dbody%20h2%2C%20%2Emarkdown%2Dbody%20h3%20%7B%20%20%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%7D%2Emarkdown%2Dbody%3Ahover%20%7B%7D%2Emarkdown%2Dbody%20%2Econtents%20%7B%7D%2Emarkdown%2Dbody%20%2Etoc%2Dtitle%20%7B%7D%2Emarkdown%2Dbody%20%2Econtents%20li%20%7Blist%2Dstyle%2Dtype%3A%20none%3B%7D%2Emarkdown%2Dbody%20%2Econtents%20ul%20%7Bpadding%2Dleft%3A%2010px%3B%7D%2Emarkdown%2Dbody%20%2Econtents%20a%20%7Bcolor%3A%20%233CB4E6%3B%20%7D%2Emarkdown%2Dbody%20table%20%2Eheader%20%7Bbackground%2Dcolor%3A%20var%28%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%29%3Bborder%2Dbottom%3A1px%20solid%3Bborder%2Dtop%3A1px%20solid%3Bfont%2Dsize%3A%2090%25%3B%7D%2Emarkdown%2Dbody%20table%20th%20%7Bfont%2Dweight%3A%20bolder%3B%20%7D%2Emarkdown%2Dbody%20table%20td%20%7Bfont%2Dsize%3A%2090%25%3B%7D%2Emarkdown%2Dbody%20code%7Bpadding%3A%200%3Bmargin%3A0%3Bfont%2Dsize%3A95%25%3Bbackground%2Dcolor%3Argba%2827%2C31%2C35%2C%2E05%29%3Bborder%2Dradius%3A1px%3B%7D%2Et01%20%7Bwidth%3A%20100%25%3Bborder%3A%20None%3Btext%2Dalign%3A%20left%3B%7D%2ETips%20%7Bpadding%3A11px%3Bmargin%2Dbottom%3A24px%3Bborder%2Dstyle%3Asolid%3Bborder%2Dwidth%3A1px%3Bborder%2Dradius%3A1px%7D%2ETips%20%7Bcolor%3A%232f363d%3B%20background%2Dcolor%3A%20%23f6f8fa%3Bborder%2Dcolor%3A%23d5d8da%3Bborder%2Dtop%3A1px%20solid%3Bborder%2Dbottom%3A1px%20solid%3B%7D%2EHTips%20%7Bpadding%3A11px%3Bmargin%2Dbottom%3A24px%3Bborder%2Dstyle%3Asolid%3Bborder%2Dwidth%3A1px%3Bborder%2Dradius%3A1px%7D%2EHTips%20%7Bcolor%3A%232f363d%3B%20background%2Dcolor%3A%23fff9ea%3Bborder%2Dcolor%3A%23d5d8da%3Bborder%2Dtop%3A1px%20solid%3Bborder%2Dbottom%3A1px%20solid%3B%7D%2EHTips%20h1%2C%2EHTips%20h2%2C%2EHTips%20h3%2C%2EHTips%20h4%2C%2EHTips%20h5%2C%2EHTips%20h6%20%7Bcolor%3A%232f363d%3Bmargin%2Dbottom%3A0%7D%2Esidenav%20%7Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3B%20%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bheight%3A%20100%25%3Bposition%3A%20fixed%3Bz%2Dindex%3A%201%3Btop%3A%200%3Bleft%3A%200%3Bmargin%2Dright%3A%2010px%3Bmargin
%2Dleft%3A%2010px%3B%20overflow%2Dx%3A%20hidden%3B%7D%2Esidenav%20hr%2Enew1%20%7Bborder%2Dwidth%3A%20thin%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Dlightblue%2Dcolor%29%3Bmargin%2Dright%3A%2010px%3Bmargin%2Dtop%3A%20%2D10px%3B%7D%2Esidenav%20%23sidenav%5Fheader%20%7Bmargin%2Dtop%3A%2010px%3Bborder%3A%201px%3Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Dlightblue%2Dcolor%29%3B%7D%2Esidenav%20%23sidenav%5Fheader%20img%20%7Bfloat%3A%20left%3B%7D%2Esidenav%20%23sidenav%5Fheader%20a%20%7Bmargin%2Dleft%3A%200px%3Bmargin%2Dright%3A%200px%3Bpadding%2Dleft%3A%200px%3B%7D%2Esidenav%20%23sidenav%5Fheader%20a%3Ahover%20%7Bbackground%2Dsize%3A%20auto%3Bcolor%3A%20%23FFD200%3B%20%7D%2Esidenav%20%23sidenav%5Fheader%20a%3Aactive%20%7B%20%20%7D%2Esidenav%20%3E%20ul%20%7Bbackground%2Dcolor%3A%20rgba%2857%2C%20169%2C%20220%2C%200%2E05%29%3B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bborder%2Dradius%3A%2010px%3Bpadding%2Dbottom%3A%2010px%3Bpadding%2Dtop%3A%2010px%3Bpadding%2Dright%3A%2010px%3Bmargin%2Dright%3A%2010px%3B%7D%2Esidenav%20a%20%7Bpadding%3A%202px%202px%3Btext%2Ddecoration%3A%20none%3Bfont%2Dsize%3A%20var%28%2D%2Dsidenav%2Dfont%2Dsize%29%3Bdisplay%3Atable%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%2C%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%7B%20padding%2Dright%3A%205px%3Bpadding%2Dleft%3A%205px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dweight%3A%20lighter%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dsize%3A%2080%25%3Bpadding%2Dleft%3A%2010px%3Btext%2Dalign%2Dlast%3A%20left%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20display%3A%20None%3B%7D%2Esidenav%20li%20%7Blist%2Dstyle%2Dtype%3A%20none%3B%7D%2Esidenav%20ul%20%7Bpadding%2Dleft%3A%200px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahove
r%2C%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bbackground%2Dcolor%3A%20var%28%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%29%3Bbackground%2Dclip%3A%20border%2Dbox%3Bmargin%2Dleft%3A%20%2D10px%3Bpadding%2Dleft%3A%2010px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bpadding%2Dright%3A%2015px%3Bwidth%3A%20230px%3B%09%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bpadding%2Dright%3A%2010px%3Bwidth%3A%20230px%3B%09%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Aactive%20%7B%20color%3A%20%23FFD200%3B%20%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Aactive%20%7B%20color%3A%20%23FFD200%3B%20%7D%2Esidenav%20code%20%7B%7D%2Esidenav%20%7Bwidth%3A%20280px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%20300px%3Bdisplay%3Ablock%3B%7D%2Emarkdown%2Dbody%20%2Eprint%2Dcontents%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%2Eprint%2Dtoc%2Dtitle%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmax%2Dwidth%3A%20980px%3Bmin%2Dwidth%3A%20200px%3Bpadding%3A%2040px%3Bborder%2Dstyle%3A%20solid%3Bborder%2Dstyle%3A%20outset%3Bborder%2Dcolor%3A%20rgba%28104%2C%20167%2C%20238%2C%200%2E089%29%3Bborder%2Dradius%3A%205px%3B%7D%40media%20screen%20and%20%28max%2Dheight%3A%20450px%29%20%7B%2Esidenav%20%7Bpadding%2Dtop%3A%2015px%3B%7D%2Esidenav%20a%20%7Bfont%2Dsize%3A%2018px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%20%7D%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2040px%3Bwidth%3A%20auto%3Bborder%3A%200px%3B%7D%7D%40media%20screen%20and%20%28max%2Dwidth%3A%201024px%29%20%7B%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2040px%3Bwidth%3A%20auto%3Bborder%3A%200px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%7D%7D%40media%20print%20%7B%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2010px%3Bwidth%3Aauto%3Bbord
er%3A%200px%3B%7D%40page%20%7Bsize%3A%20A4%3B%20%20margin%3A2cm%3Bpadding%3A2cm%3Bmargin%2Dtop%3A%201cm%3Bpadding%2Dbottom%3A%201cm%3B%7D%2A%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3Bfont%2Dsize%3A90%25%3B%7Da%20%7Bfont%2Dsize%3A%20100%25%3Bcolor%3A%20yellow%3B%7D%2Emarkdown%2Dbody%20article%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3Bfont%2Dsize%3A100%25%3B%7D%2Emarkdown%2Dbody%20p%20%7Bwindows%3A%202%3Borphans%3A%202%3B%7D%2Epagebreakerafter%20%7Bpage%2Dbreak%2Dafter%3A%20always%3Bpadding%2Dtop%3A10mm%3B%7D%2Epagebreakbefore%20%7Bpage%2Dbreak%2Dbefore%3A%20always%3B%7Dh1%2C%20h2%2C%20h3%2C%20h4%20%7Bpage%2Dbreak%2Dafter%3A%20avoid%3B%7Ddiv%2C%20code%2C%20blockquote%2C%20li%2C%20span%2C%20table%2C%20figure%20%7Bpage%2Dbreak%2Dinside%3A%20avoid%3B%7D%7D">
  <!--[if lt IE 9]>
    <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
  <![endif]-->





<link rel="shortcut icon" href="">

</head>



<body>

		<div class="sidenav">
		<div id="sidenav_header">
							<img src="" title="STM32CubeMX.AI logo" align="left" height="70" />
										<br />7.0.0<br />
										<a href="#doc_title"> Keras toolbox support </a>
					</div>
		<div id="sidenav_header_button">
			 
							<ul>
					<li><p><a id="index" href="index.html">[ Index ]</a></p></li>
				</ul>
						<hr class="new1">
		</div>	

		<ul>
  <li><a href="#overview">Overview</a>
  <ul>
  <li><a href="#summary-table">Summary table</a></li>
  <li><a href="#custom-operators">Custom operators</a></li>
  <li><a href="#common-constraints">Common constraints</a></li>
  </ul></li>
  <li><a href="#operators">Operators</a>
  <ul>
  <li><a href="#activation">Activation</a></li>
  <li><a href="#activityregularization">ActivityRegularization</a></li>
  <li><a href="#add">Add</a></li>
  <li><a href="#alphadropout">AlphaDropout</a></li>
  <li><a href="#average">Average</a></li>
  <li><a href="#averagepooling1d">AveragePooling1D</a></li>
  <li><a href="#averagepooling2d">AveragePooling2D</a></li>
  <li><a href="#batchnormalization">BatchNormalization</a></li>
  <li><a href="#bidirectional">Bidirectional</a></li>
  <li><a href="#concatenate">Concatenate</a></li>
  <li><a href="#conv1d">Conv1D</a></li>
  <li><a href="#conv2d">Conv2D</a></li>
  <li><a href="#conv2dtranspose">Conv2DTranspose</a></li>
  <li><a href="#cropping1d">Cropping1D</a></li>
  <li><a href="#cropping2d">Cropping2D</a></li>
  <li><a href="#customunpack">CustomUnpack</a></li>
  <li><a href="#dense">Dense</a></li>
  <li><a href="#depthwiseconv2d">DepthwiseConv2D</a></li>
  <li><a href="#dropout">Dropout</a></li>
  <li><a href="#elu">ELU</a></li>
  <li><a href="#flatten">Flatten</a></li>
  <li><a href="#gaussiandropout">GaussianDropout</a></li>
  <li><a href="#gaussiannoise">GaussianNoise</a></li>
  <li><a href="#globalaveragepooling1d">GlobalAveragePooling1D</a></li>
  <li><a href="#globalaveragepooling2d">GlobalAveragePooling2D</a></li>
  <li><a href="#globalmaxpooling1d">GlobalMaxPooling1D</a></li>
  <li><a href="#globalmaxpooling2d">GlobalMaxPooling2D</a></li>
  <li><a href="#gru">GRU</a></li>
  <li><a href="#inputlayer">InputLayer</a></li>
  <li><a href="#leakyrelu">LeakyReLU</a></li>
  <li><a href="#lstm">LSTM</a></li>
  <li><a href="#maximum">Maximum</a></li>
  <li><a href="#maxpooling1d">MaxPooling1D</a></li>
  <li><a href="#maxpooling2d">MaxPooling2D</a></li>
  <li><a href="#minimum">Minimum</a></li>
  <li><a href="#multiply">Multiply</a></li>
  <li><a href="#permute">Permute</a></li>
  <li><a href="#prelu">PReLU</a></li>
  <li><a href="#relu">ReLU</a></li>
  <li><a href="#repeatvector">RepeatVector</a></li>
  <li><a href="#reshape">Reshape</a></li>
  <li><a href="#separableconv1d">SeparableConv1D</a></li>
  <li><a href="#separableconv2d">SeparableConv2D</a></li>
  <li><a href="#softmax">Softmax</a></li>
  <li><a href="#spatialdropout1d">SpatialDropout1D</a></li>
  <li><a href="#spatialdropout2d">SpatialDropout2D</a></li>
  <li><a href="#subtract">Subtract</a></li>
  <li><a href="#thresholdedrelu">ThresholdedReLU</a></li>
  <li><a href="#timedistributed">TimeDistributed</a></li>
  <li><a href="#upsampling1d">UpSampling1D</a></li>
  <li><a href="#upsampling2d">UpSampling2D</a></li>
  <li><a href="#zeropadding1d">ZeroPadding1D</a></li>
  <li><a href="#zeropadding2d">ZeroPadding2D</a></li>
  </ul></li>
  <li><a href="#custom-operators-1">Custom operators</a>
  <ul>
  <li><a href="#abs">Abs</a></li>
  <li><a href="#acos">Acos</a></li>
  <li><a href="#acosh">Acosh</a></li>
  <li><a href="#asin">Asin</a></li>
  <li><a href="#asinh">Asinh</a></li>
  <li><a href="#atan">Atan</a></li>
  <li><a href="#atanh">Atanh</a></li>
  <li><a href="#ceil">Ceil</a></li>
  <li><a href="#clip">Clip</a></li>
  <li><a href="#cos">Cos</a></li>
  <li><a href="#exp">Exp</a></li>
  <li><a href="#fill">Fill</a></li>
  <li><a href="#floordiv">FloorDiv</a></li>
  <li><a href="#floormod">FloorMod</a></li>
  <li><a href="#gather">Gather</a></li>
  <li><a href="#lambda">Lambda</a></li>
  <li><a href="#log">Log</a></li>
  <li><a href="#pow">Pow</a></li>
  <li><a href="#reshape-1">Reshape</a></li>
  <li><a href="#round">Round</a></li>
  <li><a href="#shape">Shape</a></li>
  <li><a href="#sign">Sign</a></li>
  <li><a href="#sin">Sin</a></li>
  <li><a href="#sqrt">Sqrt</a></li>
  <li><a href="#square">Square</a></li>
  <li><a href="#tanh">Tanh</a></li>
  </ul></li>
  <li><a href="#references">References</a></li>
  </ul>
	</div>
	<article id="sidenav" class="markdown-body">
		



<header>
<section class="st_header" id="doc_title">

<div class="himage">
	<img src="" title="STM32CubeMX.AI" align="right" height="70" />
	<img src="" title="STM32" align="right" height="90" />
</div>

<h1 class="title followed-by-subtitle">Keras toolbox support</h1>

	<p class="subtitle">X-CUBE-AI Expansion Package</p>


	<div class="ai_platform">
		AI PLATFORM r7.0.0
					(Embedded Inference Client API 1.1.0)
			</div>
			Command Line Interface r1.5.1
	




</section>
</header>
 




<section id="overview" class="level1">
<h1>Overview</h1>
<p>This document lists the layers (or operators) that can be imported and converted. The supported operators make it possible to address a large range of classical topologies targeting a mobile or IoT resource-constrained runtime environment: SqueezeNet, MobileNet V1 or V2, Inception, SSD MobileNet V1, and so on.</p>
<blockquote>
<p>The purpose of this document is to list the operators and their associated constraints or limitations. Please refer to the original documentation for details on a given layer.</p>
</blockquote>
<p><a href="https://keras.io">Keras</a> is supported through the <a href="https://www.tensorflow.org/">TensorFlow</a> backend with channels-last dimension ordering. Keras.io versions 2.0 up to 2.5.1 are supported; networks defined in Keras 1.x are not officially supported. TF Keras is supported up to version 2.5.0.</p>
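As an illustration only (not part of the original tool), the supported-version statement above can be expressed as a small hypothetical helper; the version bounds (Keras.io 2.0 up to 2.5.1, TF Keras up to 2.5.0) come from the paragraph above, while the function name and padding behavior are assumptions of this sketch.

```python
# Hypothetical helper (not part of X-CUBE-AI): check whether a Keras version
# string falls in the supported range stated above.
def is_supported_keras(version: str, tf_keras: bool = False) -> bool:
    parts = tuple(int(p) for p in version.split("."))
    parts += (0,) * (3 - len(parts))          # pad, e.g. "2.5" -> (2, 5, 0)
    upper = (2, 5, 0) if tf_keras else (2, 5, 1)
    return (2, 0, 0) <= parts <= upper

print(is_supported_keras("2.4.3"))                  # True
print(is_supported_keras("1.2.2"))                  # False: Keras 1.x not supported
print(is_supported_keras("2.5.1", tf_keras=True))   # False: TF Keras tops out at 2.5.0
```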
<p><em>This file was automatically generated.</em></p>
<ul>
<li>X-CUBE-AI version : 7.0<br />
</li>
<li>53 operators found<br />
</li>
<li>26 custom operators found</li>
</ul>
<section id="summary-table" class="level2">
<h2>Summary table</h2>
<p>The following table lists the operators that can be imported, provided the constraints or limitations are met. The 26 custom operators are listed in the <a href="#custom-operators">next table</a>.</p>
<ul>
<li>supported optional fused activation (or non-linearity): linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign, abs, acos, acosh, asin, asinh, atan, atanh, ceil, clip, cos, cosh, erf, exp, floor, identity, log, neg, prelu, reciprocal, relu_generic, relu_thresholded, round, rsqrt, sign, sin, sinh, sqrt, swish, tan<br />
</li>
<li>supported optional fused <strong>integer</strong> activation (or non-linearity): prelu, relu, clip, lut, swish, identity, relu6<br />
</li>
<li><em>integer operations</em> are only supported for Keras models that have been quantized with the X-CUBE-AI post-training quantization script. An additional tensor-format configuration file (JSON format) is required.<br />
</li>
<li>if an operator is not supported in integer, the floating-point version is used; converters are automatically added by the code generator.</li>
</ul>
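The float fallback rule above can be modeled as a small sketch (this is an illustration, not the actual X-CUBE-AI implementation; the operator set shown is an excerpt assumed from the table below): an operator runs in integer only when the model is quantized and the operator has an integer kernel, otherwise the float32 kernel is used and converters are inserted around it.

```python
# Illustrative model of the kernel-selection rule described above.
# INTEGER_OPS is an assumed excerpt, not the tool's full list.
INTEGER_OPS = {"Conv2D", "Dense", "AveragePooling2D", "Activation"}

def select_kernel(op: str, quantized: bool) -> str:
    if quantized and op in INTEGER_OPS:
        return "int8"
    # code generator adds int<->float converters around this operator
    return "float32"

print(select_kernel("Conv2D", quantized=True))    # int8
print(select_kernel("Dropout", quantized=True))   # float32 (fallback)
print(select_kernel("Conv2D", quantized=False))   # float32
```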
<table>
<thead>
<tr class="header">
<th style="text-align: left;">operator</th>
<th style="text-align: left;">data types</th>
<th style="text-align: left;">constraints/limitations</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><a href="#activation">Activation</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#activation">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#activityregularization">ActivityRegularization</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#activityregularization">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#add">Add</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#alphadropout">AlphaDropout</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#alphadropout">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#average">Average</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#averagepooling1d">AveragePooling1D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#averagepooling1d">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#averagepooling2d">AveragePooling2D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#averagepooling2d">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#batchnormalization">BatchNormalization</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#batchnormalization">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#bidirectional">Bidirectional</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#bidirectional">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#concatenate">Concatenate</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#concatenate">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#conv1d">Conv1D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#conv1d">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#conv2d">Conv2D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#conv2d">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#conv2dtranspose">Conv2DTranspose</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#conv2dtranspose">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#cropping1d">Cropping1D</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#cropping2d">Cropping2D</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#customunpack">CustomUnpack</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#dense">Dense</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#dense">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#depthwiseconv2d">DepthwiseConv2D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#depthwiseconv2d">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#dropout">Dropout</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#dropout">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#elu">ELU</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#flatten">Flatten</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#flatten">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#gaussiandropout">GaussianDropout</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#gaussiandropout">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#gaussiannoise">GaussianNoise</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#gaussiannoise">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#globalaveragepooling1d">GlobalAveragePooling1D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#globalaveragepooling2d">GlobalAveragePooling2D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#globalmaxpooling1d">GlobalMaxPooling1D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#globalmaxpooling2d">GlobalMaxPooling2D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#gru">GRU</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#gru">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#inputlayer">InputLayer</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#inputlayer">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#leakyrelu">LeakyReLU</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#lstm">LSTM</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#lstm">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#maximum">Maximum</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#maxpooling1d">MaxPooling1D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#maxpooling1d">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#maxpooling2d">MaxPooling2D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#maxpooling2d">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#minimum">Minimum</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#multiply">Multiply</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#permute">Permute</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#permute">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#prelu">PReLU</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#prelu">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#relu">ReLU</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#repeatvector">RepeatVector</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#reshape">Reshape</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#separableconv1d">SeparableConv1D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#separableconv1d">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#separableconv2d">SeparableConv2D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#separableconv2d">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#softmax">Softmax</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#softmax">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#spatialdropout1d">SpatialDropout1D</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#spatialdropout1d">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#spatialdropout2d">SpatialDropout2D</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#spatialdropout2d">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#subtract">Subtract</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#thresholdedrelu">ThresholdedReLU</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#timedistributed">TimeDistributed</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#upsampling1d">UpSampling1D</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#upsampling2d">UpSampling2D</a></td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#zeropadding1d">ZeroPadding1D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#zeropadding2d">ZeroPadding2D</a></td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
</tbody>
</table>
</section>
<section id="custom-operators" class="level2">
<h2>Custom operators</h2>
<p>The following table lists the custom operators that can be imported.</p>
<table>
<thead>
<tr class="header">
<th style="text-align: left;">Operator</th>
<th style="text-align: left;">Data types</th>
<th style="text-align: left;">Constraints/Limitations</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">Abs</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#abs">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">Acos</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#acos">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">Acosh</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#acosh">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">Asin</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#asin">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">Asinh</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#asinh">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">Atan</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#atan">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">Atanh</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#atanh">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">Ceil</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#ceil">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">Clip</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#clip">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">Cos</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#cos">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">Exp</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#exp">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">Fill</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#fill">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">FloorDiv</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#floordiv">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">FloorMod</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#floormod">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">Gather</td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#gather">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">Lambda</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">Log</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#log">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">Pow</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#pow">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">Reshape</td>
<td style="text-align: left;">float32, int8, uint8</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#reshape">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">Round</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#round">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">Shape</td>
<td style="text-align: left;">float32, int8, uint8, int32</td>
<td style="text-align: left;"><a href="#common-constraints">common</a>, <a href="#shape">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">Sign</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#sign">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">Sin</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#sin">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">Sqrt</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#sqrt">specific</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">Square</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#square">specific</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">Tanh</td>
<td style="text-align: left;">float32</td>
<td style="text-align: left;"><a href="#tanh">specific</a></td>
</tr>
</tbody>
</table>
</section>
<section id="common-constraints" class="level2">
<h2>Common constraints</h2>
<ul>
<li>input and output tensors must <strong>not be dynamic</strong>:
<ul>
<li>a variable-length batch dimension (i.e. <code>(None,)</code>) is treated as equal to 1<br />
</li>
<li>must not be greater than 4D<br />
</li>
<li>dimension must be in the range [0, 65536[<br />
</li>
<li>the batch dimension is not supported for the axis parameter<br />
</li>
<li>the data type for the weights/activations tensors must be:
<ul>
<li>float32, int8, uint8<br />
</li>
<li>only int32 is supported for the bias tensor<br />
</li>
</ul></li>
<li>for some operators, bool type is also supported<br />
</li>
</ul></li>
<li>mixed data-type operations (i.e. hybrid operators) are not supported; activations and weights must both be quantized<br />
</li>
<li>the generated C model is always <strong>channel-last</strong> (<code>NHWC</code> format)<br />
</li>
<li>a 1D operator is mapped onto the corresponding 2D operator by adding a singleton dimension to the input: (12, 3) -&gt; (12, 1, 3)</li>
</ul>
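<p>The shape rules above can be sketched as a small check. This is an illustrative helper, not part of the X-CUBE-AI tool; the function names are hypothetical, and shapes are assumed to be Keras-style tuples with <code>None</code> for a variable-length batch dimension:</p>

```python
def check_tensor_shape(shape):
    """Validate a Keras-style shape tuple against the common constraints.

    A variable-length batch dimension (None) is treated as 1; tensors
    must be at most 4D and every dimension must lie in [0, 65536[.
    """
    dims = [1 if d is None else d for d in shape]
    if len(dims) > 4:
        return False, "tensor must not be greater than 4D"
    if any(not 0 <= d < 65536 for d in dims):
        return False, "dimension out of the range [0, 65536["
    return True, "ok"

def lift_1d_to_2d(shape):
    """Map a 1D operator input onto its 2D counterpart by inserting a
    singleton dimension: (12, 3) -> (12, 1, 3)."""
    return (shape[0], 1, shape[1])
```

<p>For example, <code>check_tensor_shape((None, 12, 3))</code> passes, while a 5D shape or a dimension of 70000 is rejected.</p>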
</section>
</section>
<section id="operators" class="level1">
<h1>Operators</h1>
<section id="activation" class="level2">
<h2>Activation</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8<br />
</li>
<li>fused activations (if present): linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>softmax is always kept in float32; converters are added if necessary</li>
</ul>
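<p>The converter pattern behind this constraint can be sketched as follows: a quantized input is dequantized, softmax is computed in float32, and the result stays in float32. This is a minimal illustration under assumed <code>scale</code>/<code>zero_point</code> quantization parameters, not the tool's actual kernel:</p>

```python
import math

def dequantize(q, scale, zero_point):
    """int8 -> float32 conversion: (q - zero_point) * scale."""
    return [(v - zero_point) * scale for v in q]

def softmax_f32(x):
    """Numerically stable float32 softmax."""
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def int8_softmax(q, scale, zero_point):
    """Converter pattern: dequantize the int8 input, then run softmax
    in float32; the softmax output itself remains float32."""
    return softmax_f32(dequantize(q, scale, zero_point))
```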
</section>
<section id="activityregularization" class="level2">
<h2>ActivityRegularization</h2>
<p>Performs regularization during the training phase</p>
<ul>
<li>category: regularization layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>operator is dropped during the conversion</li>
</ul>
</section>
<section id="add" class="level2">
<h2>Add</h2>
<p>Performs an element-wise addition of the inputs</p>
<ul>
<li>category: eltwise operator<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="alphadropout" class="level2">
<h2>AlphaDropout</h2>
<p>Performs regularization during the training phase</p>
<ul>
<li>category: regularization layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>operator is dropped during the conversion</li>
</ul>
</section>
<section id="average" class="level2">
<h2>Average</h2>
<p>Performs an element-wise average of the inputs</p>
<ul>
<li>category: eltwise operator<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="averagepooling1d" class="level2">
<h2>AveragePooling1D</h2>
<p>Downsamples the input</p>
<ul>
<li>category: pooling layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>arbitrary strides, provided that they are smaller than the input size<br />
</li>
<li>arbitrary pool sizes, provided that they are smaller than the input size</li>
</ul>
</section>
<section id="averagepooling2d" class="level2">
<h2>AveragePooling2D</h2>
<p>Downsamples the input</p>
<ul>
<li>category: pooling layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>arbitrary strides, provided that they are smaller than the input size<br />
</li>
<li>arbitrary pool sizes, provided that they are smaller than the input size</li>
</ul>
</section>
<section id="batchnormalization" class="level2">
<h2>BatchNormalization</h2>
<p>Performs the normalization of the input</p>
<ul>
<li>category: normalization layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>Only normalization on the last axis (channels) is supported</li>
</ul>
</section>
<section id="bidirectional" class="level2">
<h2>Bidirectional</h2>
<p>Bidirectional wrapper for RNNs</p>
<ul>
<li>category: recurrent layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>supported layers: <a href="#lstm">LSTM</a>, <a href="#gru">GRU</a> and SimpleRNN<br />
</li>
<li>supported merge mode: <code>concat</code>, <code>mul</code>, <code>ave</code> and <code>sum</code></li>
</ul>
</section>
<section id="concatenate" class="level2">
<h2>Concatenate</h2>
<p>Performs concatenation of a list of inputs</p>
<ul>
<li>category: merge operator<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>concatenating on the batch dimension is not supported</li>
</ul>
</section>
<section id="conv1d" class="level2">
<h2>Conv1D</h2>
<p>Performs convolution operation</p>
<ul>
<li>category: convolutional layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8<br />
</li>
<li>fused activations (if present): linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign<br />
</li>
<li>integer schemes: weights / activations
<ul>
<li>Signed Symmetric / Signed Asymmetric (SSSA)<br />
</li>
<li>Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)<br />
</li>
<li>Signed Symmetric / Unsigned Asymmetric (SSUA)<br />
</li>
<li>Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)<br />
</li>
<li>Unsigned Asymmetric / Unsigned Asymmetric (UAUA)<br />
</li>
<li>Unsigned Asymmetric per channel (or per-axis) / Unsigned Asymmetric (UAUA_CH)</li>
</ul></li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>arbitrary strides, provided that they are smaller than the input size<br />
</li>
<li>arbitrary filter kernel sizes, provided that they are smaller than the input size<br />
</li>
<li>for quantized model, dilation values different from 1 are not supported</li>
</ul>
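<p>As an illustration of the weights side of the per-channel schemes above (e.g. SSSA_CH), signed symmetric per-axis quantization gives each output channel its own scale and fixes the zero-point at 0. This is a hypothetical sketch of the scheme, not the tool's implementation; the helper name and the list-of-channels layout are assumptions:</p>

```python
def quantize_weights_per_channel(weights):
    """Signed symmetric per-channel (per-axis) int8 quantization sketch.

    `weights` is a list of per-output-channel float lists; each channel
    gets its own scale max(|w|) / 127 and a zero-point fixed at 0,
    which is what "signed symmetric" means.
    """
    q_channels, scales = [], []
    for channel in weights:
        # Per-channel scale; fall back to 1.0 for an all-zero channel
        scale = max(abs(w) for w in channel) / 127.0 or 1.0
        scales.append(scale)
        q_channels.append(
            [max(-127, min(127, round(w / scale))) for w in channel]
        )
    return q_channels, scales
```

<p>Per-channel scales typically reduce quantization error compared with a single per-tensor scale when channel magnitudes differ widely.</p>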
</section>
<section id="conv2d" class="level2">
<h2>Conv2D</h2>
<p>Performs convolution operation</p>
<ul>
<li>category: convolutional layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8<br />
</li>
<li>fused activations (if present): linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign<br />
</li>
<li>integer schemes: weights / activations
<ul>
<li>Signed Symmetric / Signed Asymmetric (SSSA)<br />
</li>
<li>Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)<br />
</li>
<li>Signed Symmetric / Unsigned Asymmetric (SSUA)<br />
</li>
<li>Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)<br />
</li>
<li>Unsigned Asymmetric / Unsigned Asymmetric (UAUA)<br />
</li>
<li>Unsigned Asymmetric per channel (or per-axis) / Unsigned Asymmetric (UAUA_CH)</li>
</ul></li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>arbitrary strides, provided that they are smaller than the input size<br />
</li>
<li>arbitrary filter kernel sizes, provided that they are smaller than the input size<br />
</li>
<li>for quantized model, dilation values different from 1 are not supported</li>
</ul>
</section>
<section id="conv2dtranspose" class="level2">
<h2>Conv2DTranspose</h2>
<p>Transposed convolutional layer</p>
<ul>
<li>category: convolutional layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8<br />
</li>
<li>fused activations (if present): linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>arbitrary strides, provided that they are smaller than the input size<br />
</li>
<li>arbitrary filter kernel sizes, provided that they are smaller than the input size</li>
</ul>
</section>
<section id="cropping1d" class="level2">
<h2>Cropping1D</h2>
<p>Crops the input</p>
<ul>
<li>category: reshaping layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="cropping2d" class="level2">
<h2>Cropping2D</h2>
<p>Crops the input</p>
<ul>
<li>category: reshaping layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="customunpack" class="level2">
<h2>CustomUnpack</h2>
<ul>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="dense" class="level2">
<h2>Dense</h2>
<p>Performs a fully-connected operation</p>
<ul>
<li>category: core layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8<br />
</li>
<li>fused activations (if present): linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign<br />
</li>
<li>integer schemes: weights / activations
<ul>
<li>Signed Symmetric / Signed Asymmetric (SSSA)<br />
</li>
<li>Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)<br />
</li>
<li>Signed Symmetric / Unsigned Asymmetric (SSUA)<br />
</li>
<li>Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)<br />
</li>
<li>Unsigned Asymmetric / Unsigned Asymmetric (UAUA)<br />
</li>
<li>Unsigned Asymmetric per channel (or per-axis) / Unsigned Asymmetric (UAUA_CH)</li>
</ul></li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>for floating-point models, weights and/or biases can be compressed during code generation</li>
</ul>
</section>
<section id="depthwiseconv2d" class="level2">
<h2>DepthwiseConv2D</h2>
<p>Performs convolution operation</p>
<ul>
<li>category: convolutional layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8<br />
</li>
<li>fused activations (if present): linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign<br />
</li>
<li>integer schemes: weights / activations
<ul>
<li>Signed Symmetric / Signed Asymmetric (SSSA)<br />
</li>
<li>Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)<br />
</li>
<li>Signed Symmetric / Unsigned Asymmetric (SSUA)<br />
</li>
<li>Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)<br />
</li>
<li>Unsigned Asymmetric / Unsigned Asymmetric (UAUA)<br />
</li>
<li>Unsigned Asymmetric per channel (or per-axis) / Unsigned Asymmetric (UAUA_CH)</li>
</ul></li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>arbitrary strides, provided that they are smaller than the input size<br />
</li>
<li>arbitrary filter kernel sizes, provided that they are smaller than the input size<br />
</li>
<li>for quantized model, dilation values different from 1 are not supported</li>
</ul>
</section>
<section id="dropout" class="level2">
<h2>Dropout</h2>
<p>Applies Dropout to the input</p>
<ul>
<li>category: regularization layers<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>operator is dropped during the conversion</li>
</ul>
</section>
<section id="elu" class="level2">
<h2>ELU</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
</section>
<section id="flatten" class="level2">
<h2>Flatten</h2>
<p>Flattens the non-batch input dimensions to a vector</p>
<ul>
<li>category: Reshaping operation<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>Flatten on the batch dimension is not supported<br />
</li>
<li>operator is dropped during the conversion</li>
</ul>
</section>
<section id="gaussiandropout" class="level2">
<h2>GaussianDropout</h2>
<p>Performs regularization during the training phase</p>
<ul>
<li>category: regularization layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>operator is dropped during the conversion</li>
</ul>
</section>
<section id="gaussiannoise" class="level2">
<h2>GaussianNoise</h2>
<p>Performs regularization during the training phase</p>
<ul>
<li>category: regularization layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>operator is dropped during the conversion</li>
</ul>
</section>
<section id="globalaveragepooling1d" class="level2">
<h2>GlobalAveragePooling1D</h2>
<p>Downsamples the input</p>
<ul>
<li>category: pooling layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
</section>
<section id="globalaveragepooling2d" class="level2">
<h2>GlobalAveragePooling2D</h2>
<p>Downsamples the input</p>
<ul>
<li>category: pooling layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
</section>
<section id="globalmaxpooling1d" class="level2">
<h2>GlobalMaxPooling1D</h2>
<p>Downsamples the input</p>
<ul>
<li>category: pooling layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
</section>
<section id="globalmaxpooling2d" class="level2">
<h2>GlobalMaxPooling2D</h2>
<p>Downsamples the input</p>
<ul>
<li>category: pooling layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
</section>
<section id="gru" class="level2">
<h2>GRU</h2>
<p>Gated Recurrent Unit</p>
<ul>
<li>category: recurrent layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>stateless support only<br />
</li>
<li>fused activation: linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign<br />
</li>
<li>fused recurrent activation: linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign<br />
</li>
<li><code>return_state</code> not supported</li>
</ul>
</section>
<section id="inputlayer" class="level2">
<h2>InputLayer</h2>
<p>Optional placeholder for the network’s input</p>
<ul>
<li>category: core layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>operator is dropped during the conversion</li>
</ul>
</section>
<section id="leakyrelu" class="level2">
<h2>LeakyReLU</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
</section>
<section id="lstm" class="level2">
<h2>LSTM</h2>
<p>Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence (batch=1, timesteps, features)</p>
<ul>
<li>category: recurrent layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>stateless and stateful (batch=1 only) mode support<br />
</li>
<li>in stateful mode, the user must define two C routines to allocate and deallocate the internal layer states.
<ul>
<li>initial state must be provided as part of the allocation routine implementation<br />
</li>
<li>the two functions to implement are:
<ul>
<li><code>void _allocate_lstm_states(ai_float **states, ai_u32 size_in_bytes)</code><br />
</li>
<li><code>void _deallocate_lstm_states(ai_float **states)</code><br />
</li>
</ul></li>
</ul></li>
<li>fused activation: linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign<br />
</li>
<li>fused recurrent activation: linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign<br />
</li>
<li><code>return_state</code> not supported</li>
</ul>
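<p>As a sketch only, a minimal heap-based implementation of the two stateful-mode routines could look like this (the <code>ai_float</code> and <code>ai_u32</code> typedefs below are illustrative stand-ins for the definitions shipped with the generated code, and a zero initial state is assumed):</p>

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-ins for the typedefs provided by the generated
   headers; in a real project, include the generated header instead. */
typedef float        ai_float;
typedef unsigned int ai_u32;

/* Allocates the internal LSTM state buffer and sets the initial state
   (all zeros here; replace with application-specific initial values). */
void _allocate_lstm_states(ai_float **states, ai_u32 size_in_bytes)
{
    *states = (ai_float *)malloc(size_in_bytes);
    if (*states != NULL) {
        memset(*states, 0, size_in_bytes);  /* initial state: zeros */
    }
}

/* Releases the buffer allocated above and clears the pointer. */
void _deallocate_lstm_states(ai_float **states)
{
    free(*states);
    *states = NULL;
}
```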
</section>
<section id="maximum" class="level2">
<h2>Maximum</h2>
<p>Computes the element-wise maximum of a list of inputs</p>
<ul>
<li>category: eltwise operator<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="maxpooling1d" class="level2">
<h2>MaxPooling1D</h2>
<p>Downsamples the input</p>
<ul>
<li>category: pooling layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>arbitrary strides, provided that they are smaller than the input size<br />
</li>
<li>arbitrary pool sizes, provided that they are smaller than the input size</li>
</ul>
</section>
<section id="maxpooling2d" class="level2">
<h2>MaxPooling2D</h2>
<p>Downsamples the input</p>
<ul>
<li>category: pooling layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>arbitrary strides, provided that they are smaller than the input size<br />
</li>
<li>arbitrary pool sizes, provided that they are smaller than the input size</li>
</ul>
</section>
<section id="minimum" class="level2">
<h2>Minimum</h2>
<p>Computes the element-wise minimum of a list of inputs</p>
<ul>
<li>category: eltwise operator<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="multiply" class="level2">
<h2>Multiply</h2>
<p>Performs an element-wise multiplication</p>
<ul>
<li>category: eltwise operator<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="permute" class="level2">
<h2>Permute</h2>
<p>Permutes the dimensions of the input according to a given pattern</p>
<ul>
<li>category: reshaping layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>transposing the batch dimension is not supported</li>
</ul>
</section>
<section id="prelu" class="level2">
<h2>PReLU</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>shared axes in PReLU are supported only for the leading dimensions</li>
</ul>
</section>
<section id="relu" class="level2">
<h2>ReLU</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
</section>
<section id="repeatvector" class="level2">
<h2>RepeatVector</h2>
<p>Repeats the input n times</p>
<ul>
<li>category: reshaping layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="reshape" class="level2">
<h2>Reshape</h2>
<p>Reshapes a tensor</p>
<ul>
<li>category: reshaping operation<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
</section>
<section id="separableconv1d" class="level2">
<h2>SeparableConv1D</h2>
<p>Performs a separable convolution operation</p>
<ul>
<li>category: convolutional layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8<br />
</li>
<li>fused activations (if present): linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign<br />
</li>
<li>integer schemes: weights / activations
<ul>
<li>Signed Symmetric / Signed Asymmetric (SSSA)<br />
</li>
<li>Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)<br />
</li>
<li>Signed Symmetric / Unsigned Asymmetric (SSUA)<br />
</li>
<li>Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)<br />
</li>
<li>Unsigned Asymmetric / Unsigned Asymmetric (UAUA)<br />
</li>
<li>Unsigned Asymmetric per channel (or per-axis) / Unsigned Asymmetric (UAUA_CH)</li>
</ul></li>
</ul>
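<p>All of the integer schemes above are instances of the standard affine quantization rule <code>r = scale * (q - zero_point)</code>; the symmetric schemes fix <code>zero_point = 0</code>, and the per-channel (per-axis) variants carry one <code>(scale, zero_point)</code> pair per output channel. The helper below is purely illustrative (its name is not part of the generated API):</p>

```c
#include <stdint.h>

/* Affine dequantization rule underlying the integer schemes listed above:
   r = scale * (q - zero_point).
   Symmetric schemes use zero_point == 0; per-channel (per-axis) schemes
   use one (scale, zero_point) pair per output channel. */
static float dequantize_s8(int8_t q, float scale, int32_t zero_point)
{
    return scale * (float)(q - zero_point);
}
```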
<p>Specific constraints/recommendations:</p>
<ul>
<li>arbitrary strides, provided that they are smaller than the input size<br />
</li>
<li>arbitrary filter kernel sizes, provided that they are smaller than the input size<br />
</li>
<li>for quantized models, dilation values different from 1 are not supported</li>
</ul>
</section>
<section id="separableconv2d" class="level2">
<h2>SeparableConv2D</h2>
<p>Performs a separable convolution operation</p>
<ul>
<li>category: convolutional layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8<br />
</li>
<li>fused activations (if present): linear, relu, relu_n1_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign<br />
</li>
<li>integer schemes: weights / activations
<ul>
<li>Signed Symmetric / Signed Asymmetric (SSSA)<br />
</li>
<li>Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)<br />
</li>
<li>Signed Symmetric / Unsigned Asymmetric (SSUA)<br />
</li>
<li>Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)<br />
</li>
<li>Unsigned Asymmetric / Unsigned Asymmetric (UAUA)<br />
</li>
<li>Unsigned Asymmetric per channel (or per-axis) / Unsigned Asymmetric (UAUA_CH)</li>
</ul></li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>arbitrary strides, provided that they are smaller than the input size<br />
</li>
<li>arbitrary filter kernel sizes, provided that they are smaller than the input size<br />
</li>
<li>for quantized models, dilation values different from 1 are not supported</li>
</ul>
</section>
<section id="softmax" class="level2">
<h2>Softmax</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>supported only for 1D tensors, and only along the channel dimension</li>
</ul>
</section>
<section id="spatialdropout1d" class="level2">
<h2>SpatialDropout1D</h2>
<p>Performs regularization during the training phase</p>
<ul>
<li>category: regularization layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>operator is dropped during the conversion</li>
</ul>
</section>
<section id="spatialdropout2d" class="level2">
<h2>SpatialDropout2D</h2>
<p>Performs regularization during the training phase</p>
<ul>
<li>category: regularization layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>operator is dropped during the conversion</li>
</ul>
</section>
<section id="subtract" class="level2">
<h2>Subtract</h2>
<p>Performs an element-wise subtraction</p>
<ul>
<li>category: eltwise operator<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="thresholdedrelu" class="level2">
<h2>ThresholdedReLU</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
</section>
<section id="timedistributed" class="level2">
<h2>TimeDistributed</h2>
<p>Applies a layer to every temporal slice of an input</p>
<ul>
<li>category: wrapper layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="upsampling1d" class="level2">
<h2>UpSampling1D</h2>
<p>Upsamples the input</p>
<ul>
<li>category: reshaping layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="upsampling2d" class="level2">
<h2>UpSampling2D</h2>
<p>Upsamples the input</p>
<ul>
<li>category: reshaping layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="zeropadding1d" class="level2">
<h2>ZeroPadding1D</h2>
<p>Pads an input tensor</p>
<ul>
<li>category: reshaping layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
</section>
<section id="zeropadding2d" class="level2">
<h2>ZeroPadding2D</h2>
<p>Pads an input tensor</p>
<ul>
<li>category: reshaping layer<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
</section>
</section>
<section id="custom-operators-1" class="level1">
<h1>Custom operators</h1>
<section id="abs" class="level2">
<h2>Abs</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.abs</li>
</ul>
</section>
<section id="acos" class="level2">
<h2>Acos</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.acos</li>
</ul>
</section>
<section id="acosh" class="level2">
<h2>Acosh</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.acosh</li>
</ul>
</section>
<section id="asin" class="level2">
<h2>Asin</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.asin</li>
</ul>
</section>
<section id="asinh" class="level2">
<h2>Asinh</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.asinh</li>
</ul>
</section>
<section id="atan" class="level2">
<h2>Atan</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.atan</li>
</ul>
</section>
<section id="atanh" class="level2">
<h2>Atanh</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.atanh</li>
</ul>
</section>
<section id="ceil" class="level2">
<h2>Ceil</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.ceil</li>
</ul>
</section>
<section id="clip" class="level2">
<h2>Clip</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.clip_by_value</li>
</ul>
</section>
<section id="cos" class="level2">
<h2>Cos</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.cos</li>
</ul>
</section>
<section id="exp" class="level2">
<h2>Exp</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.exp</li>
</ul>
</section>
<section id="fill" class="level2">
<h2>Fill</h2>
<p>Generates a tensor with a given value and shape</p>
<ul>
<li>category: constant layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.fill</li>
</ul>
</section>
<section id="floordiv" class="level2">
<h2>FloorDiv</h2>
<p>Performs an element-wise floor division</p>
<ul>
<li>category: eltwise operator<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.floordiv</li>
</ul>
</section>
<section id="floormod" class="level2">
<h2>FloorMod</h2>
<p>Performs an element-wise floor modulo operation</p>
<ul>
<li>category: eltwise operator<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.floormod</li>
</ul>
</section>
<section id="gather" class="level2">
<h2>Gather</h2>
<p>Gathers values along a specified axis</p>
<ul>
<li>category: activation function<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.gather<br />
</li>
<li>gather along the batch dimension is not supported</li>
</ul>
</section>
<section id="lambda" class="level2">
<h2>Lambda</h2>
<p>Wraps arbitrary expressions as a custom layer</p>
<ul>
<li>category: custom layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
</section>
<section id="log" class="level2">
<h2>Log</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.log</li>
</ul>
</section>
<section id="pow" class="level2">
<h2>Pow</h2>
<p>Performs an element-wise power operation</p>
<ul>
<li>category: eltwise operator<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.pow</li>
</ul>
</section>
<section id="reshape-1" class="level2">
<h2>Reshape</h2>
<p>Reshapes a tensor</p>
<ul>
<li>category: reshaping operation<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: float32, int8, uint8</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.reshape<br />
</li>
<li>operator is dropped during the conversion</li>
</ul>
</section>
<section id="round" class="level2">
<h2>Round</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.round</li>
</ul>
</section>
<section id="shape" class="level2">
<h2>Shape</h2>
<p>Returns a tensor containing the shape of the input tensor</p>
<ul>
<li>category: reshaping operation<br />
</li>
<li>input data types: float32, int8, uint8<br />
</li>
<li>output data types: int32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.shape</li>
</ul>
</section>
<section id="sign" class="level2">
<h2>Sign</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.sign</li>
</ul>
</section>
<section id="sin" class="level2">
<h2>Sin</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.sin</li>
</ul>
</section>
<section id="sqrt" class="level2">
<h2>Sqrt</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.sqrt</li>
</ul>
</section>
<section id="square" class="level2">
<h2>Square</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.square</li>
</ul>
</section>
<section id="tanh" class="level2">
<h2>Tanh</h2>
<p>Applies an activation function to the input tensor</p>
<ul>
<li>category: activation layer<br />
</li>
<li>input data types: float32<br />
</li>
<li>output data types: float32</li>
</ul>
<p>Specific constraints/recommendations:</p>
<ul>
<li>related TF operator: tf.math.tanh</li>
</ul>
<!-- External ST resources/links -->
<!-- Internal resources/links -->
<!-- External resources/links -->
<!-- Cross references -->
</section>
</section>
<section id="references" class="level1">
<h1>References</h1>
<table>
<colgroup>
<col style="width: 18%" />
<col style="width: 81%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">ref</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">[DS]</td>
<td style="text-align: left;">X-CUBE-AI - AI expansion pack for STM32CubeMX <a href="https://www.st.com/en/embedded-software/x-cube-ai.html">https://www.st.com/en/embedded-software/x-cube-ai.html</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[UM]</td>
<td style="text-align: left;">User manual - Getting started with X-CUBE-AI Expansion Package for Artificial Intelligence (AI) <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">(pdf)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[CLI]</td>
<td style="text-align: left;">stm32ai - Command Line Interface <a href="command_line_interface.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[API]</td>
<td style="text-align: left;">Embedded inference client API <a href="embedded_client_api.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[METRIC]</td>
<td style="text-align: left;">Evaluation report and metrics <a href="evaluation_metrics.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[TFL]</td>
<td style="text-align: left;">TensorFlow Lite toolbox <a href="supported_ops_tflite.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[KERAS]</td>
<td style="text-align: left;">Keras toolbox <a href="supported_ops_keras.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[ONNX]</td>
<td style="text-align: left;">ONNX toolbox <a href="supported_ops_onnx.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[FAQS]</td>
<td style="text-align: left;">FAQ <a href="faq_generic.html">generic</a>, <a href="faq_validation.html">validation</a>, <a href="faq_quantization.html">quantization</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[QUANT]</td>
<td style="text-align: left;">Quantization and quantize command <a href="quantization.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[RELOC]</td>
<td style="text-align: left;">Relocatable binary network support <a href="relocatable.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[CUST]</td>
<td style="text-align: left;">Support of the Keras Lambda/custom layers <a href="keras_lambda_custom.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[TFLM]</td>
<td style="text-align: left;">TensorFlow Lite for Microcontroller support <a href="tflite_micro_support.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[INST]</td>
<td style="text-align: left;">Setting the environment <a href="setting_env.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[OBS]</td>
<td style="text-align: left;">Platform Observer API <a href="api_platform_observer.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[C-RUN]</td>
<td style="text-align: left;">Executing locally a generated c-model <a href="how_to_run_a_model_locally.html">(link)</a></td>
</tr>
</tbody>
</table>
</section>



<section class="st_footer">

<h1> <br> </h1>

<p style="font-family:verdana; text-align:left;">
 Embedded Documentation 

	- <b> Keras toolbox support </b>
			<br> X-CUBE-AI Expansion Package
	 
	
</p>

<img src="" title="ST logo" align="right" height="100" />

<div class="st_notice">
Information in this document is provided solely in connection with ST products.
The contents of this document are subject to change without prior notice.
<br>
© Copyright STMicroelectronics 2020. All rights reserved. <a href="http://www.st.com">www.st.com</a>
</div>

<hr size="1" />
</section>


</article>
</body>

</html>
