<!DOCTYPE html>
<!--

	Modified template for STM32CubeMX.AI purpose

	d0.1: 	jean-michel.delorme@st.com
			add ST logo and ST footer

	d2.0: 	jean-michel.delorme@st.com
			add sidenav support

	d2.1: 	jean-michel.delorme@st.com
			clean-up + optional ai_logo/ai meta data
			
==============================================================================
           "GitHub HTML5 Pandoc Template" v2.1 — by Tristano Ajmone           
==============================================================================
Copyright © Tristano Ajmone, 2017, MIT License (MIT). Project's home:

- https://github.com/tajmone/pandoc-goodies

The CSS in this template reuses source code taken from the following projects:

- GitHub Markdown CSS: Copyright © Sindre Sorhus, MIT License (MIT):
  https://github.com/sindresorhus/github-markdown-css

- Primer CSS: Copyright © 2016-2017 GitHub Inc., MIT License (MIT):
  http://primercss.io/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The MIT License 

Copyright (c) Tristano Ajmone, 2017 (github.com/tajmone/pandoc-goodies)
Copyright (c) Sindre Sorhus <sindresorhus@gmail.com> (sindresorhus.com)
Copyright (c) 2017 GitHub Inc.

"GitHub Pandoc HTML5 Template" is Copyright (c) Tristano Ajmone, 2017, released
under the MIT License (MIT); it contains readaptations of substantial portions
of the following third party softwares:

(1) "GitHub Markdown CSS", Copyright (c) Sindre Sorhus, MIT License (MIT).
(2) "Primer CSS", Copyright (c) 2016 GitHub Inc., MIT License (MIT).

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
==============================================================================-->
<html>
<head>
  <meta charset="utf-8" />
  <meta name="generator" content="pandoc" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
  <meta name="keywords" content="STM32CubeMX, X-CUBE-AI, Neural Network, Quantization support, CLI, Code Generator, Automatic NN mapping tools" />
  <title>Embedded Inference Client API</title>
  <style type="text/css">
.markdown-body{
	-ms-text-size-adjust:100%;
	-webkit-text-size-adjust:100%;
	color:#24292e;
	font-family:-apple-system,system-ui,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol";
	font-size:16px;
	line-height:1.5;
	word-wrap:break-word;
	box-sizing:border-box;
	min-width:200px;
	max-width:980px;
	margin:0 auto;
	padding:45px;
	}
.markdown-body a{
	color:#0366d6;
	background-color:transparent;
	text-decoration:none;
	-webkit-text-decoration-skip:objects}
.markdown-body a:active,.markdown-body a:hover{
	outline-width:0}
.markdown-body a:hover{
	text-decoration:underline}
.markdown-body a:not([href]){
	color:inherit;text-decoration:none}
.markdown-body strong{font-weight:600}
.markdown-body h1,.markdown-body h2,.markdown-body h3,.markdown-body h4,.markdown-body h5,.markdown-body h6{
	margin-top:24px;
	margin-bottom:16px;
	font-weight:600;
	line-height:1.25}
.markdown-body h1{
	font-size:2em;
	margin:.67em 0;
	padding-bottom:.3em;
	border-bottom:1px solid #eaecef}
.markdown-body h2{
	padding-bottom:.3em;
	font-size:1.5em;
	border-bottom:1px solid #eaecef}
.markdown-body h3{font-size:1.25em}
.markdown-body h4{font-size:1em}
.markdown-body h5{font-size:.875em}
.markdown-body h6{font-size:.85em;color:#6a737d}
.markdown-body img{border-style:none}
.markdown-body svg:not(:root){
	overflow:hidden}
.markdown-body hr{
	box-sizing:content-box;
	height:.25em;
	margin:24px 0;
	padding:0;
	overflow:hidden;
	background-color:#e1e4e8;
	border:0}
.markdown-body hr::before{display:table;content:""}
.markdown-body hr::after{display:table;clear:both;content:""}
.markdown-body input{margin:0;overflow:visible;font:inherit;font-family:inherit;font-size:inherit;line-height:inherit}
.markdown-body [type=checkbox]{box-sizing:border-box;padding:0}
.markdown-body *{box-sizing:border-box}.markdown-body blockquote{margin:0}
.markdown-body ol,.markdown-body ul{padding-left:2em}
.markdown-body ol ol,.markdown-body ul ol{list-style-type:lower-roman}
.markdown-body ol ol,.markdown-body ol ul,.markdown-body ul ol,.markdown-body ul ul{margin-top:0;margin-bottom:0}
.markdown-body ol ol ol,.markdown-body ol ul ol,.markdown-body ul ol ol,.markdown-body ul ul ol{list-style-type:lower-alpha}
.markdown-body li>p{margin-top:16px}
.markdown-body li+li{margin-top:.25em}
.markdown-body dd{margin-left:0}
.markdown-body dl{padding:0}
.markdown-body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:600}
.markdown-body dl dd{padding:0 16px;margin-bottom:16px}
.markdown-body code{font-family:SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace}
.markdown-body pre{font:12px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;word-wrap:normal}
.markdown-body blockquote,.markdown-body dl,.markdown-body ol,.markdown-body p,.markdown-body pre,.markdown-body table,.markdown-body ul{margin-top:0;margin-bottom:16px}
.markdown-body blockquote{padding:0 1em;color:#6a737d;border-left:.25em solid #dfe2e5}
.markdown-body blockquote>:first-child{margin-top:0}
.markdown-body blockquote>:last-child{margin-bottom:0}
.markdown-body table{display:block;width:100%;overflow:auto;border-spacing:0;border-collapse:collapse}
.markdown-body table th{font-weight:600}
.markdown-body table td,.markdown-body table th{padding:6px 13px;border:1px solid #dfe2e5}
.markdown-body table tr{background-color:#fff;border-top:1px solid #c6cbd1}
.markdown-body table tr:nth-child(2n){background-color:#f6f8fa}
.markdown-body img{max-width:100%;box-sizing:content-box;background-color:#fff}
.markdown-body code{padding:.2em 0;margin:0;font-size:85%;background-color:rgba(27,31,35,.05);border-radius:3px}
.markdown-body code::after,.markdown-body code::before{letter-spacing:-.2em;content:"\00a0"}
.markdown-body pre>code{padding:0;margin:0;font-size:100%;word-break:normal;white-space:pre;background:0 0;border:0}
.markdown-body .highlight{margin-bottom:16px}
.markdown-body .highlight pre{margin-bottom:0;word-break:normal}
.markdown-body .highlight pre,.markdown-body pre{padding:16px;overflow:auto;font-size:85%;line-height:1.45;background-color:#f6f8fa;border-radius:3px}
.markdown-body pre code{display:inline;max-width:auto;padding:0;margin:0;overflow:visible;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}
.markdown-body pre code::after,.markdown-body pre code::before{content:normal}
.markdown-body .full-commit .btn-outline:not(:disabled):hover{color:#005cc5;border-color:#005cc5}
.markdown-body kbd{box-shadow:inset 0 -1px 0 #959da5;display:inline-block;padding:3px 5px;font:11px/10px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;color:#444d56;vertical-align:middle;background-color:#fcfcfc;border:1px solid #c6cbd1;border-bottom-color:#959da5;border-radius:3px;box-shadow:inset 0 -1px 0 #959da5}
.markdown-body :checked+.radio-label{position:relative;z-index:1;border-color:#0366d6}
.markdown-body .task-list-item{list-style-type:none}
.markdown-body .task-list-item+.task-list-item{margin-top:3px}
.markdown-body .task-list-item input{margin:0 .2em .25em -1.6em;vertical-align:middle}
.markdown-body::before{display:table;content:""}
.markdown-body::after{display:table;clear:both;content:""}
.markdown-body>:first-child{margin-top:0!important}
.markdown-body>:last-child{margin-bottom:0!important}
.Alert,.Error,.Note,.Success,.Warning,.Tips,.HTips{padding:11px;margin-bottom:24px;border-style:solid;border-width:1px;border-radius:4px}
.Alert p,.Error p,.Note p,.Success p,.Warning p,.Tips p,.HTips p{margin-top:0}
.Alert p:last-child,.Error p:last-child,.Note p:last-child,.Success p:last-child,.Warning p:last-child,.Tips p:last-child,.HTips p:last-child{margin-bottom:0}
.Alert{color:#246;background-color:#e2eef9;border-color:#bac6d3}
.Warning{color:#4c4a42;background-color:#fff9ea;border-color:#dfd8c2}
.Error{color:#911;background-color:#fcdede;border-color:#d2b2b2}
.Success{color:#22662c;background-color:#e2f9e5;border-color:#bad3be}
.Note{color:#2f363d;background-color:#f6f8fa;border-color:#d5d8da}
.Alert h1,.Alert h2,.Alert h3,.Alert h4,.Alert h5,.Alert h6{color:#246;margin-bottom:0}
.Warning h1,.Warning h2,.Warning h3,.Warning h4,.Warning h5,.Warning h6{color:#4c4a42;margin-bottom:0}
.Error h1,.Error h2,.Error h3,.Error h4,.Error h5,.Error h6{color:#911;margin-bottom:0}
.Success h1,.Success h2,.Success h3,.Success h4,.Success h5,.Success h6{color:#22662c;margin-bottom:0}
.Note h1,.Note h2,.Note h3,.Note h4,.Note h5,.Note h6{color:#2f363d;margin-bottom:0}
.Tips h1,.Tips h2,.Tips h3,.Tips h4,.Tips h5,.Tips h6{color:#2f363d;margin-bottom:0}
.HTips h1,.HTips h2,.HTips h3,.HTips h4,.HTips h5,.HTips h6{color:#2f363d;margin-bottom:0}
.Tips h1:first-child,.Tips h2:first-child,.Tips h3:first-child,.Tips h4:first-child,.Tips h5:first-child,.Tips h6:first-child,.Alert h1:first-child,.Alert h2:first-child,.Alert h3:first-child,.Alert h4:first-child,.Alert h5:first-child,.Alert h6:first-child,.Error h1:first-child,.Error h2:first-child,.Error h3:first-child,.Error h4:first-child,.Error h5:first-child,.Error h6:first-child,.Note h1:first-child,.Note h2:first-child,.Note h3:first-child,.Note h4:first-child,.Note h5:first-child,.Note h6:first-child,.Success h1:first-child,.Success h2:first-child,.Success h3:first-child,.Success h4:first-child,.Success h5:first-child,.Success h6:first-child,.Warning h1:first-child,.Warning h2:first-child,.Warning h3:first-child,.Warning h4:first-child,.Warning h5:first-child,.Warning h6:first-child{margin-top:0}
h1.title,p.subtitle{text-align:center}
h1.title.followed-by-subtitle{margin-bottom:0}
p.subtitle{font-size:1.5em;font-weight:600;line-height:1.25;margin-top:0;margin-bottom:16px;padding-bottom:.3em}
div.line-block{white-space:pre-line}
  </style>
  <style type="text/css">code{white-space: pre;}</style>
  <style type="text/css">
	pre > code.sourceCode { white-space: pre; position: relative; }
 pre > code.sourceCode > span { display: inline-block; line-height: 1.25; }
 pre > code.sourceCode > span:empty { height: 1.2em; }
 .sourceCode { overflow: visible; }
 code.sourceCode > span { color: inherit; text-decoration: inherit; }
 div.sourceCode { margin: 1em 0; }
 pre.sourceCode { margin: 0; }
 @media screen {
 div.sourceCode { overflow: auto; }
 }
 @media print {
 pre > code.sourceCode { white-space: pre-wrap; }
 pre > code.sourceCode > span { text-indent: -5em; padding-left: 5em; }
 }
 pre.numberSource code
   { counter-reset: source-line 0; }
 pre.numberSource code > span
   { position: relative; left: -4em; counter-increment: source-line; }
 pre.numberSource code > span > a:first-child::before
   { content: counter(source-line);
     position: relative; left: -1em; text-align: right; vertical-align: baseline;
     border: none; display: inline-block;
     -webkit-touch-callout: none; -webkit-user-select: none;
     -khtml-user-select: none; -moz-user-select: none;
     -ms-user-select: none; user-select: none;
     padding: 0 4px; width: 4em;
     background-color: #ffffff;
     color: #a0a0a0;
   }
 pre.numberSource { margin-left: 3em; border-left: 1px solid #a0a0a0;  padding-left: 4px; }
 div.sourceCode
   { color: #1f1c1b; background-color: #ffffff; }
 @media screen {
 pre > code.sourceCode > span > a:first-child::before { text-decoration: underline; }
 }
 code span { color: #1f1c1b; } /* Normal */
 code span.al { color: #bf0303; background-color: #f7e6e6; font-weight: bold; } /* Alert */
 code span.an { color: #ca60ca; } /* Annotation */
 code span.at { color: #0057ae; } /* Attribute */
 code span.bn { color: #b08000; } /* BaseN */
 code span.bu { color: #644a9b; font-weight: bold; } /* BuiltIn */
 code span.cf { color: #1f1c1b; font-weight: bold; } /* ControlFlow */
 code span.ch { color: #924c9d; } /* Char */
 code span.cn { color: #aa5500; } /* Constant */
 code span.co { color: #898887; } /* Comment */
 code span.cv { color: #0095ff; } /* CommentVar */
 code span.do { color: #607880; } /* Documentation */
 code span.dt { color: #0057ae; } /* DataType */
 code span.dv { color: #b08000; } /* DecVal */
 code span.er { color: #bf0303; text-decoration: underline; } /* Error */
 code span.ex { color: #0095ff; font-weight: bold; } /* Extension */
 code span.fl { color: #b08000; } /* Float */
 code span.fu { color: #644a9b; } /* Function */
 code span.im { color: #ff5500; } /* Import */
 code span.in { color: #b08000; } /* Information */
 code span.kw { color: #1f1c1b; font-weight: bold; } /* Keyword */
 code span.op { color: #1f1c1b; } /* Operator */
 code span.ot { color: #006e28; } /* Other */
 code span.pp { color: #006e28; } /* Preprocessor */
 code span.re { color: #0057ae; background-color: #e0e9f8; } /* RegionMarker */
 code span.sc { color: #3daee9; } /* SpecialChar */
 code span.ss { color: #ff5500; } /* SpecialString */
 code span.st { color: #bf0303; } /* String */
 code span.va { color: #0057ae; } /* Variable */
 code span.vs { color: #bf0303; } /* VerbatimString */
 code span.wa { color: #bf0303; } /* Warning */
  </style>
  <link rel="stylesheet" href="data:text/css,%3Aroot%20%7B%2D%2Dmain%2Ddarkblue%2Dcolor%3A%20rgb%283%2C35%2C75%29%3B%20%2D%2Dmain%2Dlightblue%2Dcolor%3A%20rgb%2860%2C180%2C230%29%3B%20%2D%2Dmain%2Dpink%2Dcolor%3A%20rgb%28230%2C0%2C126%29%3B%20%2D%2Dmain%2Dyellow%2Dcolor%3A%20rgb%28255%2C210%2C0%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%3A%20rgb%2870%2C70%2C80%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%2D25%3A%20rgb%28209%2C209%2C211%29%3B%20%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%3A%20rgb%28233%2C233%2C234%29%3B%20%2D%2Dsecondary%2Dlightgreen%2Dcolor%3A%20rgb%2873%2C177%2C112%29%3B%20%2D%2Dsecondary%2Dpurple%2Dcolor%3A%20rgb%28140%2C0%2C120%29%3B%20%2D%2Dsecondary%2Ddarkgreen%2Dcolor%3A%20rgb%284%2C87%2C47%29%3B%20%2D%2Dsidenav%2Dfont%2Dsize%3A%2090%25%3B%7Dhtml%20%7Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3B%7D%2A%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3B%7D%2Est%5Fheader%20h1%2Etitle%2C%2Est%5Fheader%20p%2Esubtitle%20%7Btext%2Dalign%3A%20left%3B%7D%2Est%5Fheader%20h1%2Etitle%20%7Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bmargin%2Dbottom%3A5px%3B%7D%2Est%5Fheader%20p%2Esubtitle%20%7Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dsize%3A90%25%3B%7D%2Est%5Fheader%20h1%2Etitle%2Efollowed%2Dby%2Dsubtitle%20%7Bborder%2Dbottom%3A2px%20solid%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bmargin%2Dbottom%3A5px%3B%7D%2Est%5Fheader%20p%2Erevision%20%7Bdisplay%3A%20inline%2Dblock%3Bwidth%3A70%25%3B%7D%2Est%5Fheader%20div%2Eauthor%20%7Bfont%2Dstyle%3A%20italic%3B%7D%2Est%5Fheader%20div%2Esummary%20%7Bborder%2Dtop%3A%20solid%201px%20%23C0C0C0%3Bbackground%3A%20%23ECECEC%3Bpadding%3A%205px%3B%7D%2Est%5Ffooter%20%7Bfont%2Dsize%3A80%25%3B%7D%2Est%5Ffooter%20img%20%7Bfloat%3A%20right%3B%7D%2Est%5Ffooter%20%2Est%5Fnotice%20%7Bwidth%3A80%25%3B%7D%2Emarkdown%2Dbody%20%23header%2Dsection%2Dnumber%20%7Bfont%2Dsize%3A120%25%3B%7D%2Emarkdown%2Dbody%20h1%20%7Bborder%2Dbottom%3A1px%20solid%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%2
9%3Bpadding%2Dbottom%3A%202px%3Bpadding%2Dtop%3A%2010px%3B%7D%2Emarkdown%2Dbody%20h2%20%7Bpadding%2Dbottom%3A%205px%3Bpadding%2Dtop%3A%2010px%3B%7D%2Emarkdown%2Dbody%20h2%20code%20%7Bbackground%2Dcolor%3A%20rgb%28255%2C%20255%2C%20255%29%3B%7D%23func%2EsourceCode%20%7Bborder%2Dleft%2Dstyle%3A%20solid%3Bborder%2Dcolor%3A%20rgb%280%2C%2032%2C%2082%29%3Bborder%2Dcolor%3A%20rgb%28255%2C%20244%2C%20191%29%3Bborder%2Dwidth%3A%208px%3Bpadding%3A0px%3B%7Dpre%20%3E%20code%20%7Bborder%3A%20solid%201px%20blue%3Bfont%2Dsize%3A60%25%3B%7DcodeXX%20%7Bborder%3A%20solid%201px%20blue%3Bfont%2Dsize%3A60%25%3B%7D%23func%2EsourceXXCode%3A%3Abefore%20%7Bcontent%3A%20%22Synopsis%22%3Bpadding%2Dleft%3A10px%3Bfont%2Dweight%3A%20bold%3B%7Dfigure%20%7Bpadding%3A0px%3Bmargin%2Dleft%3A5px%3Bmargin%2Dright%3A5px%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3B%7Dimg%5Bdata%2Dproperty%3D%22center%22%5D%20%7Bdisplay%3A%20block%3Bmargin%2Dtop%3A%2010px%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bpadding%3A%2010px%3B%7Dfigcaption%20%7Btext%2Dalign%3Aleft%3B%20%20border%2Dtop%3A%201px%20dotted%20%23888%3Bpadding%2Dbottom%3A%2020px%3Bmargin%2Dtop%3A%2010px%3B%7Dh1%20code%2C%20h2%20code%20%7Bfont%2Dsize%3A120%25%3B%7D%09%2Emarkdown%2Dbody%20table%20%7Bwidth%3A%20100%25%3Bmargin%2Dleft%3Aauto%3Bmargin%2Dright%3Aauto%3B%7D%2Emarkdown%2Dbody%20img%20%7Bborder%2Dradius%3A%204px%3Bpadding%3A%205px%3Bdisplay%3A%20block%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bwidth%3A%20auto%3B%7D%2Emarkdown%2Dbody%20%2Est%5Fheader%20img%2C%20%2Emarkdown%2Dbody%20%7Bborder%3A%20none%3Bborder%2Dradius%3A%20none%3Bpadding%3A%205px%3Bdisplay%3A%20block%3Bmargin%2Dleft%3A%20auto%3Bmargin%2Dright%3A%20auto%3Bwidth%3A%20auto%3Bbox%2Dshadow%3A%20none%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2010px%3Bwidth%3A%20auto%3Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3Bcolor%3A%20%2303234B%3Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%7D%2Emarkdown%2Dbody%20h1%2C%20%2Emarkdown%
2Dbody%20h2%2C%20%2Emarkdown%2Dbody%20h3%20%7B%20%20%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%7D%2Emarkdown%2Dbody%3Ahover%20%7B%7D%2Emarkdown%2Dbody%20%2Econtents%20%7B%7D%2Emarkdown%2Dbody%20%2Etoc%2Dtitle%20%7B%7D%2Emarkdown%2Dbody%20%2Econtents%20li%20%7Blist%2Dstyle%2Dtype%3A%20none%3B%7D%2Emarkdown%2Dbody%20%2Econtents%20ul%20%7Bpadding%2Dleft%3A%2010px%3B%7D%2Emarkdown%2Dbody%20%2Econtents%20a%20%7Bcolor%3A%20%233CB4E6%3B%20%7D%2Emarkdown%2Dbody%20table%20%2Eheader%20%7Bbackground%2Dcolor%3A%20var%28%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%29%3Bborder%2Dbottom%3A1px%20solid%3Bborder%2Dtop%3A1px%20solid%3Bfont%2Dsize%3A%2090%25%3B%7D%2Emarkdown%2Dbody%20table%20th%20%7Bfont%2Dweight%3A%20bolder%3B%20%7D%2Emarkdown%2Dbody%20table%20td%20%7Bfont%2Dsize%3A%2090%25%3B%7D%2Emarkdown%2Dbody%20code%7Bpadding%3A%200%3Bmargin%3A0%3Bfont%2Dsize%3A95%25%3Bbackground%2Dcolor%3Argba%2827%2C31%2C35%2C%2E05%29%3Bborder%2Dradius%3A1px%3B%7D%2Et01%20%7Bwidth%3A%20100%25%3Bborder%3A%20None%3Btext%2Dalign%3A%20left%3B%7D%2ETips%20%7Bpadding%3A11px%3Bmargin%2Dbottom%3A24px%3Bborder%2Dstyle%3Asolid%3Bborder%2Dwidth%3A1px%3Bborder%2Dradius%3A1px%7D%2ETips%20%7Bcolor%3A%232f363d%3B%20background%2Dcolor%3A%20%23f6f8fa%3Bborder%2Dcolor%3A%23d5d8da%3Bborder%2Dtop%3A1px%20solid%3Bborder%2Dbottom%3A1px%20solid%3B%7D%2EHTips%20%7Bpadding%3A11px%3Bmargin%2Dbottom%3A24px%3Bborder%2Dstyle%3Asolid%3Bborder%2Dwidth%3A1px%3Bborder%2Dradius%3A1px%7D%2EHTips%20%7Bcolor%3A%232f363d%3B%20background%2Dcolor%3A%23fff9ea%3Bborder%2Dcolor%3A%23d5d8da%3Bborder%2Dtop%3A1px%20solid%3Bborder%2Dbottom%3A1px%20solid%3B%7D%2EHTips%20h1%2C%2EHTips%20h2%2C%2EHTips%20h3%2C%2EHTips%20h4%2C%2EHTips%20h5%2C%2EHTips%20h6%20%7Bcolor%3A%232f363d%3Bmargin%2Dbottom%3A0%7D%2Esidenav%20%7Bfont%2Dfamily%3A%20%22Arial%22%2C%20sans%2Dserif%3B%20%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bheight%3A%20100%25%3Bposition%3A%20fixed%3Bz%2Dindex%3A%201%3Btop%3A%200%3Bleft%3A%200%3Bmargin%2Dright%3A%2010px%3Bmargin
%2Dleft%3A%2010px%3B%20overflow%2Dx%3A%20hidden%3B%7D%2Esidenav%20hr%2Enew1%20%7Bborder%2Dwidth%3A%20thin%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Dlightblue%2Dcolor%29%3Bmargin%2Dright%3A%2010px%3Bmargin%2Dtop%3A%20%2D10px%3B%7D%2Esidenav%20%23sidenav%5Fheader%20%7Bmargin%2Dtop%3A%2010px%3Bborder%3A%201px%3Bcolor%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bborder%2Dcolor%3A%20var%28%2D%2Dmain%2Dlightblue%2Dcolor%29%3B%7D%2Esidenav%20%23sidenav%5Fheader%20img%20%7Bfloat%3A%20left%3B%7D%2Esidenav%20%23sidenav%5Fheader%20a%20%7Bmargin%2Dleft%3A%200px%3Bmargin%2Dright%3A%200px%3Bpadding%2Dleft%3A%200px%3B%7D%2Esidenav%20%23sidenav%5Fheader%20a%3Ahover%20%7Bbackground%2Dsize%3A%20auto%3Bcolor%3A%20%23FFD200%3B%20%7D%2Esidenav%20%23sidenav%5Fheader%20a%3Aactive%20%7B%20%20%7D%2Esidenav%20%3E%20ul%20%7Bbackground%2Dcolor%3A%20rgba%2857%2C%20169%2C%20220%2C%200%2E05%29%3B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bborder%2Dradius%3A%2010px%3Bpadding%2Dbottom%3A%2010px%3Bpadding%2Dtop%3A%2010px%3Bpadding%2Dright%3A%2010px%3Bmargin%2Dright%3A%2010px%3B%7D%2Esidenav%20a%20%7Bpadding%3A%202px%202px%3Btext%2Ddecoration%3A%20none%3Bfont%2Dsize%3A%20var%28%2D%2Dsidenav%2Dfont%2Dsize%29%3Bdisplay%3Atable%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%2C%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%7B%20padding%2Dright%3A%205px%3Bpadding%2Dleft%3A%205px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dweight%3A%20lighter%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20color%3A%20var%28%2D%2Dmain%2Ddarkblue%2Dcolor%29%3Bfont%2Dsize%3A%2080%25%3Bpadding%2Dleft%3A%2010px%3Btext%2Dalign%2Dlast%3A%20left%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%20%7B%20display%3A%20None%3B%7D%2Esidenav%20li%20%7Blist%2Dstyle%2Dtype%3A%20none%3B%7D%2Esidenav%20ul%20%7Bpadding%2Dleft%3A%200px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahove
r%2C%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bbackground%2Dcolor%3A%20var%28%2D%2Dsecondary%2Dgrey%2Dcolor%2D12%29%3Bbackground%2Dclip%3A%20border%2Dbox%3Bmargin%2Dleft%3A%20%2D10px%3Bpadding%2Dleft%3A%2010px%3B%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bpadding%2Dright%3A%2015px%3Bwidth%3A%20230px%3B%09%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Ahover%20%7Bpadding%2Dright%3A%2010px%3Bwidth%3A%20230px%3B%09%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20a%3Aactive%20%7B%20color%3A%20%23FFD200%3B%20%7D%2Esidenav%20%3E%20ul%20%3E%20li%20%3E%20ul%20%3E%20li%20%3E%20a%3Aactive%20%7B%20color%3A%20%23FFD200%3B%20%7D%2Esidenav%20code%20%7B%7D%2Esidenav%20%7Bwidth%3A%20280px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%20300px%3Bdisplay%3Ablock%3B%7D%2Emarkdown%2Dbody%20%2Eprint%2Dcontents%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%2Eprint%2Dtoc%2Dtitle%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmax%2Dwidth%3A%20980px%3Bmin%2Dwidth%3A%20200px%3Bpadding%3A%2040px%3Bborder%2Dstyle%3A%20solid%3Bborder%2Dstyle%3A%20outset%3Bborder%2Dcolor%3A%20rgba%28104%2C%20167%2C%20238%2C%200%2E089%29%3Bborder%2Dradius%3A%205px%3B%7D%40media%20screen%20and%20%28max%2Dheight%3A%20450px%29%20%7B%2Esidenav%20%7Bpadding%2Dtop%3A%2015px%3B%7D%2Esidenav%20a%20%7Bfont%2Dsize%3A%2018px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%20%7D%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2040px%3Bwidth%3A%20auto%3Bborder%3A%200px%3B%7D%7D%40media%20screen%20and%20%28max%2Dwidth%3A%201024px%29%20%7B%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2040px%3Bwidth%3A%20auto%3Bborder%3A%200px%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%7D%7D%40media%20print%20%7B%2Esidenav%20%7Bvisibility%3Ahidden%3B%7D%23sidenav%20%7Bmargin%2Dleft%3A%2010px%3B%7D%2Emarkdown%2Dbody%20%7Bmargin%3A%2010px%3Bpadding%3A%2010px%3Bwidth%3Aauto%3Bbord
er%3A%200px%3B%7D%40page%20%7Bsize%3A%20A4%3B%20%20margin%3A2cm%3Bpadding%3A2cm%3Bmargin%2Dtop%3A%201cm%3Bpadding%2Dbottom%3A%201cm%3B%7D%2A%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3Bfont%2Dsize%3A90%25%3B%7Da%20%7Bfont%2Dsize%3A%20100%25%3Bcolor%3A%20yellow%3B%7D%2Emarkdown%2Dbody%20article%20%7Bxbox%2Dsizing%3A%20border%2Dbox%3Bfont%2Dsize%3A100%25%3B%7D%2Emarkdown%2Dbody%20p%20%7Bwindows%3A%202%3Borphans%3A%202%3B%7D%2Epagebreakerafter%20%7Bpage%2Dbreak%2Dafter%3A%20always%3Bpadding%2Dtop%3A10mm%3B%7D%2Epagebreakbefore%20%7Bpage%2Dbreak%2Dbefore%3A%20always%3B%7Dh1%2C%20h2%2C%20h3%2C%20h4%20%7Bpage%2Dbreak%2Dafter%3A%20avoid%3B%7Ddiv%2C%20code%2C%20blockquote%2C%20li%2C%20span%2C%20table%2C%20figure%20%7Bpage%2Dbreak%2Dinside%3A%20avoid%3B%7D%7D">
  <!--[if lt IE 9]>
    <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
  <![endif]-->





<link rel="shortcut icon" href="">

</head>



<body>

		<div class="sidenav">
		<div id="sidenav_header">
							<img src="" title="STM32CubeMX.AI logo" align="left" height="70" />
										<br />7.0.0<br />
										<a href="#doc_title"> Embedded Inference Client API </a>
					</div>
		<div id="sidenav_header_button">
			 
							<ul>
					<li><p><a id="index" href="index.html">[ Index ]</a></p></li>
				</ul>
						<hr class="new1">
		</div>	

		<ul>
  <li><a href="#introduction">Introduction</a>
  <ul>
  <li><a href="#ref_quick_usage_code">Getting started</a></li>
  <li><a href="#ref_crc_usage">CRC IP usage</a></li>
  <li><a href="#sec_data_placement">AI buffers and privileged placement</a></li>
  <li><a href="#sec_alloc_inputs">I/O buffers inside the “activations” buffer</a></li>
  <li><a href="#ref_split_weights">Split weights buffer</a></li>
  <li><a href="#thread_safety">Re-entrance and thread safety considerations</a></li>
  <li><a href="#debug-support">Debug support</a></li>
  <li><a href="#versioning-and-checking">Versioning and checking</a></li>
  </ul></li>
  <li><a href="#ref_embedded_client_api">Embedded inference client API</a>
  <ul>
  <li><a href="#ref_network_defines">AI_&lt;NAME&gt;_XXX C-defines</a></li>
  <li><a href="#ref_api_create">ai_&lt;name&gt;_create()</a></li>
  <li><a href="#ref_api_init">ai_&lt;name&gt;_init()</a></li>
  <li><a href="#ref_api_run">ai_&lt;name&gt;_run()</a></li>
  <li><a href="#ref_api_get_error">ai_&lt;name&gt;_get_error()</a></li>
  <li><a href="#ref_api_info">ai_&lt;name&gt;_get_info()</a></li>
  </ul></li>
  <li><a href="#ref_tensor_def">IO tensor description</a>
  <ul>
  <li><a href="#ai_buffer-c-structure">ai_buffer C-structure</a></li>
  <li><a href="#ref_data_type">Tensor format</a></li>
  <li><a href="#sec_life_cycle">Life-cycle of the IO tensors</a></li>
  <li><a href="#sec_base_in_address">Base address of the IO buffers</a></li>
  <li><a href="#float32-to-8b-data-type-conversion">float32 to 8b data type conversion</a></li>
  <li><a href="#b-to-float32-data-type-conversion">8b to float32 data type conversion</a></li>
  <li><a href="#c-memory-layouts">C-memory layouts</a></li>
  </ul></li>
  <li><a href="#references">References</a></li>
  </ul>
	</div>
	<article id="sidenav" class="markdown-body">
		



<header>
<section class="st_header" id="doc_title">

<div class="himage">
	<img src="" title="STM32CubeMX.AI" align="right" height="70" />
	<img src="" title="STM32" align="right" height="90" />
</div>

<h1 class="title followed-by-subtitle">Embedded Inference Client API</h1>

	<p class="subtitle">X-CUBE-AI Expansion Package</p>

	<div class="revision">r4.0</div>

	<div class="ai_platform">
		AI PLATFORM r7.0.0
					(Embedded Inference Client API 1.1.0)
			</div>
			Command Line Interface r1.5.1
	




</section>
</header>
 




<section id="introduction" class="level1">
<h1>Introduction</h1>
<p>This article describes the embedded inference client API that a C application layer (the AI client) uses to run a deployed C-model. All model-specific definitions and implementations can be found in the generated C-files: <code>&lt;name&gt;.c</code>, <code>&lt;name&gt;.h</code> and <code>&lt;name&gt;_data.h</code> (refer to the <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[UM], <em>“Generated STM32 NN library”</em></a> section). A <a href="api_platform_observer.html">Platform observer API</a> for debug, advanced use-cases and profiling purposes is also described.</p>
<hr />
<div id="fig:id_nn_lib_integration" class="fignos">
<figure>
<img src="" property="center" style="width:95.0%" alt="Figure 1: MCU integration model/view and dependencies" /><figcaption aria-hidden="true"><span>Figure 1:</span> MCU integration model/view and dependencies</figcaption>
</figure>
</div>
<p>The figure above shows that integrating the AI stack into an application is simple and straightforward. The run-time has few, and only standard, SW/HW dependencies: only the <a href="#ref_crc_usage">STM32 CRC IP</a> must be clocked to use the inference runtime library. The AI client uses the generated model through a set of well-defined <a href="#ref_embedded_client_api"><code>ai_&lt;name&gt;_XXX()</code></a> functions (also called the <em>“Embedded inference client API”</em>). The X-CUBE-AI pack provides a compiled network runtime library for each STM32 series and each supported tool-chain.</p>
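<p>As a minimal sketch (assuming the application uses the STM32 HAL, and that the series-specific header name is adapted to the actual target), the CRC peripheral clock can be enabled once at start-up, before any <code>ai_&lt;name&gt;_XXX()</code> call:</p>

```c
/* Hardware bring-up fragment (assumption: STM32 HAL drivers are available).
 * The CRC IP clock must be enabled before the inference runtime library
 * is used; otherwise ai_<name>_create() reports an error.
 */
#include "stm32l4xx_hal.h"  /* hypothetical: replace with the header of your STM32 series */

static void crc_ip_enable(void)
{
  /* RCC helper macro provided by the STM32 HAL: gates the clock to the CRC IP */
  __HAL_RCC_CRC_CLK_ENABLE();
}
```

<p>Call such a helper early in <code>main()</code>, typically next to the other <code>HAL_Init()</code>/clock-configuration code, so the runtime library finds the CRC IP ready.</p>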
<section id="ref_quick_usage_code" class="level2">
<h2>Getting started</h2>
<p>The following code snippet provides a typical, minimal example of using the API for a 32-bit float model. The pre-trained model is generated with the default options, i.e. the input buffer is not allocated inside the “activations” buffer and the default <code>network</code> c-name is used. Note that all client resources requested by the AI runtime (the activations buffer and the data buffers for the IOs) are allocated at compile time thanks to the generated <a href="#ref_network_defines"><code>AI_NETWORK_XXX_SIZE</code></a> macros, allowing a minimal, easy and quick integration.</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&lt;stdio.h&gt;</span></span>
<span id="cb1-2"><a href="#cb1-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-3"><a href="#cb1-3" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb1-4"><a href="#cb1-4" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network_data.h&quot;</span></span>
<span id="cb1-5"><a href="#cb1-5" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-6"><a href="#cb1-6" aria-hidden="true" tabindex="-1"></a><span class="co">/* Global handle to reference an instantiated C-model */</span></span>
<span id="cb1-7"><a href="#cb1-7" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_handle network <span class="op">=</span> AI_HANDLE_NULL<span class="op">;</span></span>
<span id="cb1-8"><a href="#cb1-8" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-9"><a href="#cb1-9" aria-hidden="true" tabindex="-1"></a><span class="co">/* Global c-array to handle the activations buffer */</span></span>
<span id="cb1-10"><a href="#cb1-10" aria-hidden="true" tabindex="-1"></a>AI_ALIGNED<span class="op">(</span><span class="dv">32</span><span class="op">)</span></span>
<span id="cb1-11"><a href="#cb1-11" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_u8 activations<span class="op">[</span>AI_NETWORK_DATA_ACTIVATIONS_SIZE<span class="op">];</span></span>
<span id="cb1-12"><a href="#cb1-12" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-13"><a href="#cb1-13" aria-hidden="true" tabindex="-1"></a><span class="co">/* Data payload for input tensor */</span></span>
<span id="cb1-14"><a href="#cb1-14" aria-hidden="true" tabindex="-1"></a>AI_ALIGNED<span class="op">(</span><span class="dv">32</span><span class="op">)</span></span>
<span id="cb1-15"><a href="#cb1-15" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_float in_data<span class="op">[</span>AI_NETWORK_IN_1_SIZE<span class="op">];</span></span>
<span id="cb1-16"><a href="#cb1-16" aria-hidden="true" tabindex="-1"></a><span class="co">/* or static ai_u8 in_data[AI_NETWORK_IN_1_SIZE_BYTES]; */</span></span>
<span id="cb1-17"><a href="#cb1-17" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-18"><a href="#cb1-18" aria-hidden="true" tabindex="-1"></a><span class="co">/* Data payload for the output tensor */</span></span>
<span id="cb1-19"><a href="#cb1-19" aria-hidden="true" tabindex="-1"></a>AI_ALIGNED<span class="op">(</span><span class="dv">32</span><span class="op">)</span></span>
<span id="cb1-20"><a href="#cb1-20" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_float out_data<span class="op">[</span>AI_NETWORK_OUT_1_SIZE<span class="op">];</span></span>
<span id="cb1-21"><a href="#cb1-21" aria-hidden="true" tabindex="-1"></a><span class="co">/* static ai_u8 out_data[AI_NETWORK_OUT_1_SIZE_BYTES]; */</span></span>
<span id="cb1-22"><a href="#cb1-22" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-23"><a href="#cb1-23" aria-hidden="true" tabindex="-1"></a><span class="co">/* </span></span>
<span id="cb1-24"><a href="#cb1-24" aria-hidden="true" tabindex="-1"></a><span class="co"> * Bootstrap code</span></span>
<span id="cb1-25"><a href="#cb1-25" aria-hidden="true" tabindex="-1"></a><span class="co"> */</span></span>
<span id="cb1-26"><a href="#cb1-26" aria-hidden="true" tabindex="-1"></a><span class="dt">int</span> aiInit<span class="op">(</span><span class="dt">void</span><span class="op">)</span> <span class="op">{</span></span>
<span id="cb1-27"><a href="#cb1-27" aria-hidden="true" tabindex="-1"></a>  ai_error err<span class="op">;</span></span>
<span id="cb1-28"><a href="#cb1-28" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb1-29"><a href="#cb1-29" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* 1 - Create an instance of the model */</span></span>
<span id="cb1-30"><a href="#cb1-30" aria-hidden="true" tabindex="-1"></a>  err <span class="op">=</span> ai_network_create<span class="op">(&amp;</span>network<span class="op">,</span> AI_NETWORK_DATA_CONFIG <span class="co">/* or NULL */</span><span class="op">);</span></span>
<span id="cb1-31"><a href="#cb1-31" aria-hidden="true" tabindex="-1"></a>  <span class="cf">if</span> <span class="op">(</span>err<span class="op">.</span>type <span class="op">!=</span> AI_ERROR_NONE<span class="op">)</span> <span class="op">{</span></span>
<span id="cb1-32"><a href="#cb1-32" aria-hidden="true" tabindex="-1"></a>    printf<span class="op">(</span><span class="st">&quot;E: AI ai_network_create error - type=</span><span class="sc">%d</span><span class="st"> code=</span><span class="sc">%d\r\n</span><span class="st">&quot;</span><span class="op">,</span> err<span class="op">.</span>type<span class="op">,</span> err<span class="op">.</span>code<span class="op">);</span></span>
<span id="cb1-33"><a href="#cb1-33" aria-hidden="true" tabindex="-1"></a>    <span class="cf">return</span> <span class="op">-</span><span class="dv">1</span><span class="op">;</span></span>
<span id="cb1-34"><a href="#cb1-34" aria-hidden="true" tabindex="-1"></a>  <span class="op">}</span></span>
<span id="cb1-35"><a href="#cb1-35" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-36"><a href="#cb1-36" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* 2 - Initialize the instance */</span></span>
<span id="cb1-37"><a href="#cb1-37" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_network_params params <span class="op">=</span> AI_NETWORK_PARAMS_INIT<span class="op">(</span></span>
<span id="cb1-38"><a href="#cb1-38" aria-hidden="true" tabindex="-1"></a>      AI_NETWORK_DATA_WEIGHTS<span class="op">(</span>ai_network_data_weights_get<span class="op">()),</span></span>
<span id="cb1-39"><a href="#cb1-39" aria-hidden="true" tabindex="-1"></a>      AI_NETWORK_DATA_ACTIVATIONS<span class="op">(</span>activations<span class="op">)</span></span>
<span id="cb1-40"><a href="#cb1-40" aria-hidden="true" tabindex="-1"></a>  <span class="op">);</span></span>
<span id="cb1-41"><a href="#cb1-41" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-42"><a href="#cb1-42" aria-hidden="true" tabindex="-1"></a>  <span class="cf">if</span> <span class="op">(!</span>ai_network_init<span class="op">(</span>network<span class="op">,</span> <span class="op">&amp;</span>params<span class="op">))</span> <span class="op">{</span></span>
<span id="cb1-43"><a href="#cb1-43" aria-hidden="true" tabindex="-1"></a>      err <span class="op">=</span> ai_network_get_error<span class="op">(</span>network<span class="op">);</span></span>
<span id="cb1-44"><a href="#cb1-44" aria-hidden="true" tabindex="-1"></a>      printf<span class="op">(</span><span class="st">&quot;E: AI ai_network_init error - type=</span><span class="sc">%d</span><span class="st"> code=</span><span class="sc">%d\r\n</span><span class="st">&quot;</span><span class="op">,</span> err<span class="op">.</span>type<span class="op">,</span> err<span class="op">.</span>code<span class="op">);</span></span>
<span id="cb1-45"><a href="#cb1-45" aria-hidden="true" tabindex="-1"></a>      <span class="cf">return</span> <span class="op">-</span><span class="dv">1</span><span class="op">;</span></span>
<span id="cb1-46"><a href="#cb1-46" aria-hidden="true" tabindex="-1"></a>    <span class="op">}</span></span>
<span id="cb1-47"><a href="#cb1-47" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-48"><a href="#cb1-48" aria-hidden="true" tabindex="-1"></a>  <span class="cf">return</span> <span class="dv">0</span><span class="op">;</span></span>
<span id="cb1-49"><a href="#cb1-49" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span>
<span id="cb1-50"><a href="#cb1-50" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-51"><a href="#cb1-51" aria-hidden="true" tabindex="-1"></a><span class="co">/* </span></span>
<span id="cb1-52"><a href="#cb1-52" aria-hidden="true" tabindex="-1"></a><span class="co"> * Run inference code</span></span>
<span id="cb1-53"><a href="#cb1-53" aria-hidden="true" tabindex="-1"></a><span class="co"> */</span></span>
<span id="cb1-54"><a href="#cb1-54" aria-hidden="true" tabindex="-1"></a><span class="dt">int</span> aiRun<span class="op">(</span><span class="at">const</span> <span class="dt">void</span> <span class="op">*</span>in_data<span class="op">,</span> <span class="dt">void</span> <span class="op">*</span>out_data<span class="op">)</span></span>
<span id="cb1-55"><a href="#cb1-55" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb1-56"><a href="#cb1-56" aria-hidden="true" tabindex="-1"></a>  ai_i32 n_batch<span class="op">;</span></span>
<span id="cb1-57"><a href="#cb1-57" aria-hidden="true" tabindex="-1"></a>  ai_error err<span class="op">;</span></span>
<span id="cb1-58"><a href="#cb1-58" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-59"><a href="#cb1-59" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* 1 - Create the AI buffer IO handlers with the default definition */</span></span>
<span id="cb1-60"><a href="#cb1-60" aria-hidden="true" tabindex="-1"></a>  ai_buffer ai_input<span class="op">[</span>AI_NETWORK_IN_NUM<span class="op">]</span> <span class="op">=</span> AI_NETWORK_IN <span class="op">;</span></span>
<span id="cb1-61"><a href="#cb1-61" aria-hidden="true" tabindex="-1"></a>  ai_buffer ai_output<span class="op">[</span>AI_NETWORK_OUT_NUM<span class="op">]</span> <span class="op">=</span> AI_NETWORK_OUT <span class="op">;</span></span>
<span id="cb1-62"><a href="#cb1-62" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb1-63"><a href="#cb1-63" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* 2 - Update IO handlers with the data payload */</span></span>
<span id="cb1-64"><a href="#cb1-64" aria-hidden="true" tabindex="-1"></a>  ai_input<span class="op">[</span><span class="dv">0</span><span class="op">].</span>n_batches <span class="op">=</span> <span class="dv">1</span><span class="op">;</span></span>
<span id="cb1-65"><a href="#cb1-65" aria-hidden="true" tabindex="-1"></a>  ai_input<span class="op">[</span><span class="dv">0</span><span class="op">].</span>data <span class="op">=</span> AI_HANDLE_PTR<span class="op">(</span>in_data<span class="op">);</span></span>
<span id="cb1-66"><a href="#cb1-66" aria-hidden="true" tabindex="-1"></a>  ai_output<span class="op">[</span><span class="dv">0</span><span class="op">].</span>n_batches <span class="op">=</span> <span class="dv">1</span><span class="op">;</span></span>
<span id="cb1-67"><a href="#cb1-67" aria-hidden="true" tabindex="-1"></a>  ai_output<span class="op">[</span><span class="dv">0</span><span class="op">].</span>data <span class="op">=</span> AI_HANDLE_PTR<span class="op">(</span>out_data<span class="op">);</span></span>
<span id="cb1-68"><a href="#cb1-68" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-69"><a href="#cb1-69" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* 3 - Perform the inference */</span></span>
<span id="cb1-70"><a href="#cb1-70" aria-hidden="true" tabindex="-1"></a>  n_batch <span class="op">=</span> ai_network_run<span class="op">(</span>network<span class="op">,</span> <span class="op">&amp;</span>ai_input<span class="op">[</span><span class="dv">0</span><span class="op">],</span> <span class="op">&amp;</span>ai_output<span class="op">[</span><span class="dv">0</span><span class="op">]);</span></span>
<span id="cb1-71"><a href="#cb1-71" aria-hidden="true" tabindex="-1"></a>  <span class="cf">if</span> <span class="op">(</span>n_batch <span class="op">!=</span> <span class="dv">1</span><span class="op">)</span> <span class="op">{</span></span>
<span id="cb1-72"><a href="#cb1-72" aria-hidden="true" tabindex="-1"></a>      err <span class="op">=</span> ai_network_get_error<span class="op">(</span>network<span class="op">);</span></span>
<span id="cb1-73"><a href="#cb1-73" aria-hidden="true" tabindex="-1"></a>      printf<span class="op">(</span><span class="st">&quot;E: AI ai_network_run error - type=</span><span class="sc">%d</span><span class="st"> code=</span><span class="sc">%d\r\n</span><span class="st">&quot;</span><span class="op">,</span> err<span class="op">.</span>type<span class="op">,</span> err<span class="op">.</span>code<span class="op">);</span></span>
<span id="cb1-74"><a href="#cb1-74" aria-hidden="true" tabindex="-1"></a>      <span class="cf">return</span> <span class="op">-</span><span class="dv">1</span><span class="op">;</span></span>
<span id="cb1-75"><a href="#cb1-75" aria-hidden="true" tabindex="-1"></a>  <span class="op">}</span></span>
<span id="cb1-76"><a href="#cb1-76" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb1-77"><a href="#cb1-77" aria-hidden="true" tabindex="-1"></a>  <span class="cf">return</span> <span class="dv">0</span><span class="op">;</span></span>
<span id="cb1-78"><a href="#cb1-78" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span>
<span id="cb1-79"><a href="#cb1-79" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-80"><a href="#cb1-80" aria-hidden="true" tabindex="-1"></a><span class="co">/* </span></span>
<span id="cb1-81"><a href="#cb1-81" aria-hidden="true" tabindex="-1"></a><span class="co"> * Example of main loop function</span></span>
<span id="cb1-82"><a href="#cb1-82" aria-hidden="true" tabindex="-1"></a><span class="co"> */</span></span>
<span id="cb1-83"><a href="#cb1-83" aria-hidden="true" tabindex="-1"></a><span class="dt">void</span> main_loop<span class="op">(</span><span class="dt">void</span><span class="op">)</span></span>
<span id="cb1-84"><a href="#cb1-84" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb1-85"><a href="#cb1-85" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* The STM32 CRC IP clock should be enabled to use the network runtime library */</span></span>
<span id="cb1-86"><a href="#cb1-86" aria-hidden="true" tabindex="-1"></a>  __HAL_RCC_CRC_CLK_ENABLE<span class="op">();</span></span>
<span id="cb1-87"><a href="#cb1-87" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-88"><a href="#cb1-88" aria-hidden="true" tabindex="-1"></a>  aiInit<span class="op">();</span></span>
<span id="cb1-89"><a href="#cb1-89" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-90"><a href="#cb1-90" aria-hidden="true" tabindex="-1"></a>  <span class="cf">while</span> <span class="op">(</span><span class="dv">1</span><span class="op">)</span> <span class="op">{</span></span>
<span id="cb1-91"><a href="#cb1-91" aria-hidden="true" tabindex="-1"></a>    <span class="co">/* 1 - Acquire, pre-process and fill the input buffers */</span></span>
<span id="cb1-92"><a href="#cb1-92" aria-hidden="true" tabindex="-1"></a>    acquire_and_process_data<span class="op">(</span>in_data<span class="op">);</span></span>
<span id="cb1-93"><a href="#cb1-93" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-94"><a href="#cb1-94" aria-hidden="true" tabindex="-1"></a>    <span class="co">/* 2 - Call inference engine */</span></span>
<span id="cb1-95"><a href="#cb1-95" aria-hidden="true" tabindex="-1"></a>    aiRun<span class="op">(</span>in_data<span class="op">,</span> out_data<span class="op">);</span></span>
<span id="cb1-96"><a href="#cb1-96" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb1-97"><a href="#cb1-97" aria-hidden="true" tabindex="-1"></a>    <span class="co">/* 3 - Post-process the predictions */</span></span>
<span id="cb1-98"><a href="#cb1-98" aria-hidden="true" tabindex="-1"></a>    post_process<span class="op">(</span>out_data<span class="op">);</span></span>
<span id="cb1-99"><a href="#cb1-99" aria-hidden="true" tabindex="-1"></a>  <span class="op">}</span></span>
<span id="cb1-100"><a href="#cb1-100" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span></code></pre></div>
<p>Only the following <code>CFLAGS/LDFLAGS</code> additions (for a GCC-based embedded Arm tool-chain) are required to compile the specialized C files and to link the inference runtime library in an STM32 Cortex-M4 based project.</p>
<div class="sourceCode" id="cb2"><pre class="sourceCode makefile"><code class="sourceCode makefile"><span id="cb2-1"><a href="#cb2-1" aria-hidden="true" tabindex="-1"></a><span class="dt">CFLAGS </span><span class="ch">+=</span><span class="st"> -mcpu=cortex-m4 -mthumb -mfpu=fpv4-sp-d16  -mfloat-abi=hard</span></span>
<span id="cb2-2"><a href="#cb2-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb2-3"><a href="#cb2-3" aria-hidden="true" tabindex="-1"></a><span class="dt">CFLAGS </span><span class="ch">+=</span><span class="st"> -IMiddlewares/ST/AI/Lib/Inc</span></span>
<span id="cb2-4"><a href="#cb2-4" aria-hidden="true" tabindex="-1"></a><span class="dt">LDFLAGS </span><span class="ch">+=</span><span class="st"> -LMiddlewares/ST/AI/Lib/Lib -l:NetworkRuntime510_CM4_GCC.a</span></span></code></pre></div>
<div class="Alert">
<p><strong>Warning</strong> — Be aware that all the provided inference runtime libraries for the different STM32 series (excluding the STM32WL series) are compiled with the FPU enabled and the <code>hard</code> float ABI option (<code>-mfloat-abi=hard</code>) for performance reasons.</p>
</div>
</section>
<section id="ref_crc_usage" class="level2">
<h2>CRC IP usage</h2>
<p>To use the network run-time library, the STM32 CRC IP must be enabled (clocked); otherwise the application hangs. This IP is used to check that the library is effectively executed on an STM32 device at each call of an <code>ai_&lt;network&gt;_xx()</code> function. If the CRC IP is also used by the application, the application code must save and restore its on-going context around the library calls. Similarly, if power consumption is a priority, the CRC IP can also be enabled and disabled between two calls thanks to an advanced feature; refer to the <a href="crc_ip_support.html">“STM32 CRC IP as shared resource”</a> article for finer-grained control.</p>
</section>
<section id="sec_data_placement" class="level2">
<h2>AI buffers and privileged placement</h2>
<p>From the application/integration point of view, only three memory-related objects dimension the system. They have a fixed size: dynamic tensors are not supported, meaning that all tensor sizes and shapes are defined at generation time. The system heap is not required to use the inference C run-time engine.</p>
<ul>
<li>The “activations” buffer is a simple contiguous memory-mapped buffer placed in a read-write memory segment. It is owned and allocated by the AI client, passed to the network instance (see the <a href="#ref_api_init"><code>ai_&lt;name&gt;_init()</code></a> function), and used as a private heap (or working buffer) during the execution of the inference to store the intermediate results. Between two <em>runs</em>, the associated memory segment can be reused by the application. Its size, <code>AI_&lt;NAME&gt;_DATA_ACTIVATIONS_SIZE</code>, is defined during the code generation and corresponds to the reported <code>RAM</code> value.</li>
<li>The “weights” buffer is a simple contiguous memory-mapped buffer (or multiple memory-mapped buffers with the <a href="#ref_split_weights"><code>--split-weights</code></a> option). It is generally placed in a non-volatile, read-only memory device. Its total size, <code>AI_&lt;NAME&gt;_DATA_WEIGHTS_SIZE</code>, is defined during the code generation and corresponds to the reported <code>ROM</code> value.</li>
<li>The “input” and “output” buffers must also be placed in read-write memory. By default, they are owned and provided by the AI client. Their sizes are model dependent and known at generation time (<code>AI_&lt;NAME&gt;_IN/OUT_SIZE_BYTES</code>). They can also be located inside the <a href="#sec_alloc_inputs">“activations” buffer</a>.</li>
</ul>
<div id="fig:id_mem_layout_default" class="fignos">
<figure>
<img src="" property="center" style="width:80.0%" alt="Figure 2: Default data memory layout" /><figcaption aria-hidden="true"><span>Figure 2:</span> Default data memory layout</figcaption>
</figure>
</div>
<p>The kernels (inference run-time library) are executed in the context of the caller; the minimal required <strong>stack</strong> size can be evaluated at run time by the <em>aiSystemPerformance</em> application (refer to the <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[UM], “AI system performance application”</a> section).</p>
<div class="HTips">
<p><strong>Note</strong> — The placement of these objects is linker- and/or run-time-dependent for the application. Additional ROM and RAM for the network run-time library itself and for the network C files (text/rodata/bss and data sections) can also be considered, but it is generally not significant for dimensioning the system compared with the requested sizes of the “weights” and “activations” buffers. However, for small models, as detailed in the <a href="evaluation_metrics.html#ref_memory_occupancy">“Memory occupancy”</a> section, the <code>--relocatable</code> option makes it possible to determine the requested memory, kernels included, without parsing and analyzing the generated firmware map.</p>
</div>
<p>The following table indicates the preferred placement choices to minimize the inference time. Depending on the model, the most constrained memory object is the “activations” buffer.</p>
<table>
<colgroup>
<col style="width: 38%" />
<col style="width: 61%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">memory object type</th>
<th style="text-align: left;">preferably placed in</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">client stack</td>
<td style="text-align: left;">a low latency &amp; high bandwidth device. STM32 embedded SRAM or data-TCM when available (zero wait-state memory).</td>
</tr>
<tr class="even">
<td style="text-align: left;">activations, inputs/outputs</td>
<td style="text-align: left;">a low/medium latency &amp; high bandwidth device. STM32 embedded SRAM first, or external RAM. The trade-off is mainly driven by the size and by whether the STM32 MCU has a data cache (Cortex-M7 family). If <a href="#sec_alloc_inputs">input buffers</a> are not allocated inside the “activations” buffer, the “activations” buffer should be given priority.</td>
</tr>
<tr class="odd">
<td style="text-align: left;">weights</td>
<td style="text-align: left;">a medium latency &amp; medium bandwidth device. STM32 embedded FLASH memory or external FLASH. The trade-off is driven by the availability of a data cache on the STM32 MCU (Cortex-M7 family); the <a href="#ref_split_weights">weights can be split</a> between different memory devices.</td>
</tr>
</tbody>
</table>
</section>
<section id="sec_alloc_inputs" class="level2">
<h2>I/O buffers inside the “activations” buffer</h2>
<p>The <code>--allocate-inputs</code> (respectively <code>--allocate-outputs</code>) option allows the “activations” buffer to be used to allocate the data of the input tensors (respectively the output tensors). At generation time, the minimal size of the “activations” buffer is adjusted accordingly. Be aware that the base addresses of the respective memory sub-regions depend on the model: they are not necessarily aligned with the base address of the “activations” buffer and are pre-calculated at generation time (see the <a href="#sec_base_in_address">code snippet</a> to retrieve them).</p>
<div id="fig:id_mem_layout_w_inputs" class="fignos">
<figure>
<img src="" property="center" style="width:80.0%" alt="Figure 3: Data memory layout with --allocate-inputs option" /><figcaption aria-hidden="true"><span>Figure 3:</span> Data memory layout with <code>--allocate-inputs</code> option</figcaption>
</figure>
</div>
<ul>
<li>“External” input buffers (that is, buffers allocated outside the “activations” buffer) can always be used, even when the <code>--allocate-inputs</code> option is specified.<br />
</li>
<li>The <code>--allocate-inputs</code> option reserves space for only <em>one</em> buffer per input tensor.<br />
</li>
<li>If a double-buffering scheme must be implemented, the <code>--allocate-inputs</code> flag should not be used.</li>
</ul>
</section>
<section id="ref_split_weights" class="level2">
<h2>Split weights buffer</h2>
<p>The <code>--split-weights</code> option is a convenience that makes it possible to place the weights statically, tensor by tensor, in different STM32 memory segments (on-chip or off-chip) thanks to specific linker directives in the end-user application.</p>
<ul>
<li>It relaxes the constraint of placing one large buffer into a constrained and non-homogeneous memory sub-system.<br />
</li>
<li>After profiling, it allows the global inference time to be improved by placing the critical weights in a low-latency memory or, conversely, it can free a critical resource (for example, the internal flash) for use by the application.</li>
</ul>
<div id="fig:id_mem_split_weights" class="fignos">
<figure>
<img src="" property="center" style="width:65.0%" alt="Figure 4: Split weights buffer (static placement)" /><figcaption aria-hidden="true"><span>Figure 4:</span> Split weights buffer (static placement)</figcaption>
</figure>
</div>
<p>The <code>--split-weights</code> option prevents the generation of a single C array holding all the data of the weight/bias tensors (in the <code>&lt;name&gt;_data.c</code> file), as shown below:</p>
<div class="sourceCode" id="cb3"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb3-1"><a href="#cb3-1" aria-hidden="true" tabindex="-1"></a>ai_handle ai_network_data_weights_get<span class="op">(</span><span class="dt">void</span><span class="op">)</span></span>
<span id="cb3-2"><a href="#cb3-2" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb3-3"><a href="#cb3-3" aria-hidden="true" tabindex="-1"></a>  AI_ALIGNED<span class="op">(</span><span class="dv">4</span><span class="op">)</span></span>
<span id="cb3-4"><a href="#cb3-4" aria-hidden="true" tabindex="-1"></a>  <span class="dt">static</span> <span class="dt">const</span> ai_u8 s_network_weights<span class="op">[</span> <span class="dv">794136</span> <span class="op">]</span> <span class="op">=</span> <span class="op">{</span></span>
<span id="cb3-5"><a href="#cb3-5" aria-hidden="true" tabindex="-1"></a>    <span class="bn">0xcf</span><span class="op">,</span> <span class="bn">0xae</span><span class="op">,</span> <span class="bn">0x9d</span><span class="op">,</span> <span class="bn">0x3d</span><span class="op">,</span> <span class="bn">0x1b</span><span class="op">,</span> <span class="bn">0x0c</span><span class="op">,</span> <span class="bn">0xd1</span><span class="op">,</span> <span class="bn">0xbd</span><span class="op">,</span> <span class="bn">0x63</span><span class="op">,</span> <span class="bn">0x99</span><span class="op">,</span></span>
<span id="cb3-6"><a href="#cb3-6" aria-hidden="true" tabindex="-1"></a>    <span class="bn">0x36</span><span class="op">,</span> <span class="bn">0xbd</span><span class="op">,</span> <span class="bn">0xdb</span><span class="op">,</span> <span class="bn">0x67</span><span class="op">,</span> <span class="bn">0x46</span><span class="op">,</span> <span class="bn">0xbe</span><span class="op">,</span> <span class="bn">0x3b</span><span class="op">,</span> <span class="bn">0xe7</span><span class="op">,</span> <span class="bn">0x0d</span><span class="op">,</span> <span class="bn">0x3e</span><span class="op">,</span></span>
<span id="cb3-7"><a href="#cb3-7" aria-hidden="true" tabindex="-1"></a>    <span class="op">...</span></span>
<span id="cb3-8"><a href="#cb3-8" aria-hidden="true" tabindex="-1"></a>    <span class="bn">0x41</span><span class="op">,</span> <span class="bn">0xbf</span><span class="op">,</span> <span class="bn">0xc6</span><span class="op">,</span> <span class="bn">0x7d</span><span class="op">,</span> <span class="bn">0x69</span><span class="op">,</span> <span class="bn">0x3e</span><span class="op">,</span> <span class="bn">0x18</span><span class="op">,</span> <span class="bn">0x87</span><span class="op">,</span> <span class="bn">0x37</span><span class="op">,</span></span>
<span id="cb3-9"><a href="#cb3-9" aria-hidden="true" tabindex="-1"></a>    <span class="bn">0xbe</span><span class="op">,</span> <span class="bn">0x83</span><span class="op">,</span> <span class="bn">0x63</span><span class="op">,</span> <span class="bn">0x0f</span><span class="op">,</span> <span class="bn">0x3f</span><span class="op">,</span> <span class="bn">0x51</span><span class="op">,</span> <span class="bn">0xa1</span><span class="op">,</span> <span class="bn">0xdd</span><span class="op">,</span> <span class="bn">0xbe</span></span>
<span id="cb3-10"><a href="#cb3-10" aria-hidden="true" tabindex="-1"></a>  <span class="op">};</span></span>
<span id="cb3-11"><a href="#cb3-11" aria-hidden="true" tabindex="-1"></a>  <span class="cf">return</span> AI_HANDLE_PTR<span class="op">(</span>s_network_weights<span class="op">);</span></span>
<span id="cb3-12"><a href="#cb3-12" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span></code></pre></div>
<p>Instead, an <code>s_&lt;network&gt;_&lt;layer_name&gt;_[bias|weights|*]_array_weights[]</code> C array is created to store the data of each weight/bias tensor. A global map table is also built; it is used by the run-time to retrieve the addresses of the different C arrays.</p>
<div class="sourceCode" id="cb4"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb4-1"><a href="#cb4-1" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb4-2"><a href="#cb4-2" aria-hidden="true" tabindex="-1"></a><span class="co">/* conv2d_1_weights_array - FLOAT|CONST */</span></span>
<span id="cb4-3"><a href="#cb4-3" aria-hidden="true" tabindex="-1"></a>AI_ALIGNED<span class="op">(</span><span class="dv">4</span><span class="op">)</span></span>
<span id="cb4-4"><a href="#cb4-4" aria-hidden="true" tabindex="-1"></a><span class="dt">const</span> ai_u8 s_network_conv2d_1_weights_array_weights<span class="op">[</span> <span class="dv">2048</span> <span class="op">]</span> <span class="op">=</span> <span class="op">{</span></span>
<span id="cb4-5"><a href="#cb4-5" aria-hidden="true" tabindex="-1"></a>  <span class="bn">0xcf</span><span class="op">,</span> <span class="bn">0xae</span><span class="op">,</span> <span class="bn">0x9d</span><span class="op">,</span> <span class="bn">0x3d</span><span class="op">,</span> <span class="bn">0x1b</span><span class="op">,</span> <span class="bn">0x0c</span><span class="op">,</span> <span class="bn">0xd1</span><span class="op">,</span> <span class="bn">0xbd</span><span class="op">,</span> <span class="bn">0x63</span><span class="op">,</span> <span class="bn">0x99</span><span class="op">,</span></span>
<span id="cb4-6"><a href="#cb4-6" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb4-7"><a href="#cb4-7" aria-hidden="true" tabindex="-1"></a><span class="op">};</span></span>
<span id="cb4-8"><a href="#cb4-8" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb4-9"><a href="#cb4-9" aria-hidden="true" tabindex="-1"></a><span class="co">/* dense_3_bias_array - FLOAT|CONST */</span></span>
<span id="cb4-10"><a href="#cb4-10" aria-hidden="true" tabindex="-1"></a>AI_ALIGNED<span class="op">(</span><span class="dv">4</span><span class="op">)</span></span>
<span id="cb4-11"><a href="#cb4-11" aria-hidden="true" tabindex="-1"></a><span class="dt">const</span> ai_u8 s_network_dense_3_bias_array_weights<span class="op">[</span> <span class="dv">24</span> <span class="op">]</span> <span class="op">=</span> <span class="op">{</span></span>
<span id="cb4-12"><a href="#cb4-12" aria-hidden="true" tabindex="-1"></a>  <span class="bn">0xa2</span><span class="op">,</span> <span class="bn">0x72</span><span class="op">,</span> <span class="bn">0x82</span><span class="op">,</span> <span class="bn">0x3e</span><span class="op">,</span> <span class="bn">0x5a</span><span class="op">,</span> <span class="bn">0x88</span><span class="op">,</span> <span class="bn">0x41</span><span class="op">,</span> <span class="bn">0xbf</span><span class="op">,</span> <span class="bn">0xc6</span><span class="op">,</span> <span class="bn">0x7d</span><span class="op">,</span></span>
<span id="cb4-13"><a href="#cb4-13" aria-hidden="true" tabindex="-1"></a>  <span class="bn">0x69</span><span class="op">,</span> <span class="bn">0x3e</span><span class="op">,</span> <span class="bn">0x18</span><span class="op">,</span> <span class="bn">0x87</span><span class="op">,</span> <span class="bn">0x37</span><span class="op">,</span> <span class="bn">0xbe</span><span class="op">,</span> <span class="bn">0x83</span><span class="op">,</span> <span class="bn">0x63</span><span class="op">,</span> <span class="bn">0x0f</span><span class="op">,</span> <span class="bn">0x3f</span><span class="op">,</span></span>
<span id="cb4-14"><a href="#cb4-14" aria-hidden="true" tabindex="-1"></a>  <span class="bn">0x51</span><span class="op">,</span> <span class="bn">0xa1</span><span class="op">,</span> <span class="bn">0xdd</span><span class="op">,</span> <span class="bn">0xbe</span></span>
<span id="cb4-15"><a href="#cb4-15" aria-hidden="true" tabindex="-1"></a><span class="op">};</span></span>
<span id="cb4-16"><a href="#cb4-16" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb4-17"><a href="#cb4-17" aria-hidden="true" tabindex="-1"></a><span class="co">/* Entry point to retrieve the address of the c-arrays */</span></span>
<span id="cb4-18"><a href="#cb4-18" aria-hidden="true" tabindex="-1"></a>ai_handle ai_network_data_weights_get<span class="op">(</span><span class="dt">void</span><span class="op">)</span> <span class="op">{</span></span>
<span id="cb4-19"><a href="#cb4-19" aria-hidden="true" tabindex="-1"></a>  <span class="dt">static</span> <span class="dt">const</span> ai_u8<span class="op">*</span> <span class="dt">const</span> s_network_params_map_table<span class="op">[]</span> <span class="op">=</span> <span class="op">{</span></span>
<span id="cb4-20"><a href="#cb4-20" aria-hidden="true" tabindex="-1"></a>    <span class="op">&amp;</span>s_conv2d_1_weights_array_weights<span class="op">[</span><span class="dv">0</span><span class="op">],</span></span>
<span id="cb4-21"><a href="#cb4-21" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb4-22"><a href="#cb4-22" aria-hidden="true" tabindex="-1"></a>    <span class="op">&amp;</span>s_dense_3_bias_array_weights<span class="op">[</span><span class="dv">0</span><span class="op">],</span></span>
<span id="cb4-23"><a href="#cb4-23" aria-hidden="true" tabindex="-1"></a>  <span class="op">};</span></span>
<span id="cb4-24"><a href="#cb4-24" aria-hidden="true" tabindex="-1"></a>  <span class="cf">return</span> AI_HANDLE_PTR<span class="op">(</span>s_network_params_map_table<span class="op">);</span></span>
<span id="cb4-25"><a href="#cb4-25" aria-hidden="true" tabindex="-1"></a><span class="op">};</span></span></code></pre></div>
<ul>
<li>without specific linker directives, these multiple c-arrays are placed in the <code>.rodata</code> section, just like the single c-array.<br />
</li>
<li>the client API is unchanged: the <code>ai_network_data_weights_get()</code> function is still used to pass the entry point of the weights buffer to the <a href="#ref_api_init"><code>ai_&lt;name&gt;_init()</code></a> function.<br />
</li>
<li>as illustrated in the previous figure, the <code>const</code> C-attribute can be manually commented out to rely on the default C start-up behavior, which copies the data into an initialized RAM data section.</li>
</ul>
</section>
<section id="thread_safety" class="level2">
<h2>Re-entrance and thread safety considerations</h2>
<p>No internal synchronization mechanism is implemented to protect the entry points against concurrent accesses. If the API is used in a multi-threaded context, the protection of the instantiated NN(s) must be guaranteed by the application layer itself. To minimize RAM usage, the same activations memory chunk (<code>SizeSHARED</code>) can be shared between multiple networks. In this case, the user must guarantee that an ongoing inference execution cannot be preempted by the execution of another network.</p>
<div class="sourceCode" id="cb5"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a>SizeSHARED <span class="op">=</span> MAX<span class="op">(</span>AI_<span class="op">&lt;</span>name<span class="op">&gt;</span>_DATA_ACTIVATIONS_SIZE<span class="op">)</span> <span class="cf">for</span> name <span class="op">=</span> &quot;net1&quot; … &quot;net2&quot;</span></code></pre></div>
<div class="Tips">
<p><strong>Tip</strong> — If preemption is expected for real-time or latency reasons, each network instance must have its own private activations buffer.</p>
</div>
</section>
<section id="debug-support" class="level2">
<h2>Debug support</h2>
<p>The network runtime library must be considered as an optimized black-box object delivered in binary format (source files are not delivered). There are no run-time services to dump the internal states. The mapping and porting of the model are guaranteed by the X-CUBE-AI generator. Some integration issues can be highlighted by the <code>ai_&lt;name&gt;_get_error()</code> function or through the <a href="api_platform_observer.html">Platform observer API</a>, which allows inspecting the intermediate results.</p>
</section>
<section id="versioning-and-checking" class="level2">
<h2>Versioning and checking</h2>
<p>A dedicated <code>&lt;network&gt;_config.h</code> file is generated with C-defines that indicate the version of the tool used to generate the specialized NN C-files and the versions of the associated run-time APIs.</p>
<div class="Alert">
<p><strong>Warning</strong> — Backward and/or forward compatibility is never guaranteed: if a new version of the tool is used to generate new specialized NN c-files, it is <strong>highly recommended</strong> to also update the associated header files and network run-time library.</p>
</div>
<div class="sourceCode" id="cb6"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb6-1"><a href="#cb6-1" aria-hidden="true" tabindex="-1"></a><span class="co">/* &lt;network&gt;_config.h file */</span></span>
<span id="cb6-2"><a href="#cb6-2" aria-hidden="true" tabindex="-1"></a><span class="pp">#define AI_TOOLS_VERSION_MAJOR 7</span></span>
<span id="cb6-3"><a href="#cb6-3" aria-hidden="true" tabindex="-1"></a><span class="pp">#define AI_TOOLS_VERSION_MINOR 0</span></span>
<span id="cb6-4"><a href="#cb6-4" aria-hidden="true" tabindex="-1"></a><span class="pp">#define AI_TOOLS_VERSION_MICRO 0</span></span>
<span id="cb6-5"><a href="#cb6-5" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb6-6"><a href="#cb6-6" aria-hidden="true" tabindex="-1"></a><span class="pp">#define AI_PLATFORM_API_MAJOR  1</span></span>
<span id="cb6-7"><a href="#cb6-7" aria-hidden="true" tabindex="-1"></a><span class="pp">#define AI_PLATFORM_API_MINOR  1</span></span>
<span id="cb6-8"><a href="#cb6-8" aria-hidden="true" tabindex="-1"></a><span class="pp">#define AI_PLATFORM_API_MICRO  0</span></span>
<span id="cb6-9"><a href="#cb6-9" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb6-10"><a href="#cb6-10" aria-hidden="true" tabindex="-1"></a><span class="pp">#define AI_TOOLS_API_VERSION_MAJOR 1</span></span>
<span id="cb6-11"><a href="#cb6-11" aria-hidden="true" tabindex="-1"></a><span class="pp">#define AI_TOOLS_API_VERSION_MINOR 4</span></span>
<span id="cb6-12"><a href="#cb6-12" aria-hidden="true" tabindex="-1"></a><span class="pp">#define AI_TOOLS_API_VERSION_MICRO 0</span></span></code></pre></div>
<table>
<colgroup>
<col style="width: 34%" />
<col style="width: 65%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">type</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">AI_TOOLS_VERSION_XX</td>
<td style="text-align: left;">indicates the version of the tool</td>
</tr>
<tr class="even">
<td style="text-align: left;">AI_PLATFORM_API_XX</td>
<td style="text-align: left;">indicates the version of the generated API or embedded inference client API. Can be used by the application code to check for an API break (at source level).</td>
</tr>
<tr class="odd">
<td style="text-align: left;">AI_TOOLS_API_VERSION_XX</td>
<td style="text-align: left;">indicates the version of the API used by the generated NN c-files to call the network runtime library.</td>
</tr>
</tbody>
</table>
<p>These C-defines can be used by the application code to check at compile time that the version of the network run-time library in use is aligned with the generated NN c-files. At run-time, the network runtime library strictly checks only the <code>AI_TOOLS_API</code> versions. The following <code>ai_error</code> is returned by the <a href="#ref_api_create"><code>ai_&lt;network&gt;_create()</code></a> function if the versions do not match.</p>
<div class="sourceCode" id="cb7"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb7-1"><a href="#cb7-1" aria-hidden="true" tabindex="-1"></a>  <span class="op">.</span>code <span class="op">=</span> AI_ERROR_CODE_NETWORK</span>
<span id="cb7-2"><a href="#cb7-2" aria-hidden="true" tabindex="-1"></a>  <span class="op">.</span>type <span class="op">=</span> AI_ERROR_TOOL_PLATFORM_API_MISMATCH</span></code></pre></div>
<p>At run-time, the <a href="#ref_api_info"><code>ai_&lt;network&gt;_get_info()</code></a> function allows the application to retrieve the different versions through the <code>ai_network_report</code> C-structure (see the <code>ai_platform.h</code> file).</p>
<div class="sourceCode" id="cb8"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb8-1"><a href="#cb8-1" aria-hidden="true" tabindex="-1"></a>  <span class="op">.</span>tool_version           <span class="co">/* return compiled version of the tool - AI_TOOLS_VERSION_XX */</span></span>
<span id="cb8-2"><a href="#cb8-2" aria-hidden="true" tabindex="-1"></a>  <span class="op">.</span>tool_api_version       <span class="co">/* return compiled version of the tool API - AI_TOOLS_API_VERSION_XX */</span></span>
<span id="cb8-3"><a href="#cb8-3" aria-hidden="true" tabindex="-1"></a>  <span class="op">.</span>api_version            <span class="co">/* return compiled version of the embedded client API - AI_PLATFORM_API_XX */</span></span></code></pre></div>
</section>
</section>
<section id="ref_embedded_client_api" class="level1">
<h1>Embedded inference client API</h1>
<section id="ref_network_defines" class="level2">
<h2>AI_&lt;NAME&gt;_XXX C-defines</h2>
<p>Different C-defines are generated in the <code>&lt;name&gt;.h</code> and <code>&lt;name&gt;_data.h</code> files. They can be used by the application code to allocate the requested buffers (at compile time or dynamically), or for debug purposes. At run-time, <a href="#ref_api_info"><code>ai_&lt;network&gt;_get_info()</code></a> can be used to retrieve the requested sizes.</p>
<table>
<colgroup>
<col style="width: 44%" />
<col style="width: 55%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">C-defines</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_MODEL_NAME</code></td>
<td style="text-align: left;">C-string with the C-name of the model</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_ORIGIN_MODEL_NAME</code></td>
<td style="text-align: left;">C-string with the original name of the model</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT_NUM</code></td>
<td style="text-align: left;">indicates the total number of input/output tensors</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT</code></td>
<td style="text-align: left;">C-table (<code>ai_buffer</code> type) to describe the input/output tensors (see <a href="#ref_api_run"><code>ai_&lt;name&gt;_run()</code></a> function)</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT_SIZE</code></td>
<td style="text-align: left;">C-table (integer type) indicating the number of items per input/output tensor (= H x W x C) (see <a href="#ref_tensor_def">“Input/output xD tensor format”</a> section)</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT_SIZE_BYTES</code></td>
<td style="text-align: left;">C-table (integer type) indicating the size in bytes per input/output tensor</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT_x_SIZE</code></td>
<td style="text-align: left;">indicates the total number of items for the x-th input/output tensor</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT_x_SIZE_BYTES</code></td>
<td style="text-align: left;">indicates the size in bytes for the x-th input/output tensor (see <a href="#ref_api_run"><code>ai_&lt;name&gt;_run()</code></a> function)</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT_x_HEIGHT</code></td>
<td style="text-align: left;">indicates the expected “height” dimension value for the x-th input/output tensor</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT_x_WIDTH</code></td>
<td style="text-align: left;">indicates the expected “width” dimension value for the x-th input/output tensor</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT_x_CHANNEL</code></td>
<td style="text-align: left;">indicates the expected “channel” dimension value for the x-th input/output tensor</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_DATA_ACTIVATIONS_SIZE</code></td>
<td style="text-align: left;">indicates the minimal size in bytes which must be provided by the client application layer as the activations buffer (see <a href="#ref_api_init"><code>ai_&lt;name&gt;_init()</code></a> function)</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_DATA__WEIGHTS_SIZE</code></td>
<td style="text-align: left;">indicates the size in bytes of the generated weights/bias buffer</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_INPUTS_IN_ACTIVATIONS</code></td>
<td style="text-align: left;">indicates that the input buffers can be allocated inside the activations buffer. It is <em>only</em> defined if the <code>--allocate-inputs</code> option is used.</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_OUTPUTS_IN_ACTIVATIONS</code></td>
<td style="text-align: left;">indicates that the output buffers can be allocated inside the activations buffer. It is <em>only</em> defined if the <code>--allocate-outputs</code> option is used.</td>
</tr>
</tbody>
</table>
</section>
<section id="ref_api_create" class="level2">
<h2>ai_&lt;name&gt;_create()</h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1" aria-hidden="true" tabindex="-1"></a>ai_error <span class="va">ai_</span><span class="op">&lt;</span>name<span class="op">&gt;</span>_create<span class="op">(</span>ai_handle<span class="op">*</span> network<span class="op">,</span> <span class="at">const</span> ai_buffer<span class="op">*</span> network_config<span class="op">);</span></span>
<span id="func-2"><a href="#func-2" aria-hidden="true" tabindex="-1"></a>ai_handle <span class="va">ai_</span><span class="op">&lt;</span>name<span class="op">&gt;</span>_destroy<span class="op">(</span>ai_handle network<span class="op">);</span></span></code></pre></div>
<p>This <strong>mandatory</strong> function is the <em>first</em> function which must be called by the application to create an instance of the c-model. The provided <code>ai_handle</code> object is updated to reference a context (opaque object) which must be passed to the other functions.</p>
<ul>
<li>the <code>network_config</code> parameter is a network-specific configuration buffer (opaque structure) coded as an <code>ai_buffer</code>. It is generated by the code generator and should <em>not be modified</em> by the application. Currently, this object is always empty and <code>NULL</code> can be passed, but it is preferable to pass <code>AI_NETWORK_DATA_CONFIG</code> (see the <code>&lt;name&gt;_data.h</code> file).</li>
</ul>
<p>When the instance is no longer used by the application, the <code>ai_&lt;name&gt;_destroy()</code> function should be called to release any allocated resources.</p>
<div class="Alert">
<p><strong>Warning</strong> — The <a href="#ref_crc_usage">STM32 CRC IP</a> must be enabled before calling the <code>ai_&lt;network&gt;_create()</code> function, otherwise the following <code>ai_error</code> is returned: <code>(.type = AI_ERROR_CREATE_FAILED, .code = AI_ERROR_CODE_NETWORK)</code>.</p>
</div>
<div class="Note">
<p><strong>Info</strong> — The current implementation supports only one instance per c-model. Consequently, the same c-model cannot be used concurrently in a pre-emptive runtime environment.</p>
</div>
</section>
<section id="ref_api_init" class="level2">
<h2>ai_&lt;name&gt;_init()</h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1" aria-hidden="true" tabindex="-1"></a>ai_bool <span class="va">ai_</span><span class="op">&lt;</span>name<span class="op">&gt;</span>_init<span class="op">(</span>ai_handle network<span class="op">,</span> <span class="at">const</span> ai_network_params<span class="op">*</span> params<span class="op">);</span></span></code></pre></div>
<p>This <strong>mandatory</strong> function is used by the application to initialize the internal run-time data structures and to set the activations and weights buffers.</p>
<ul>
<li>the <code>params</code> parameter is a structure (<code>ai_network_params</code> type) used to pass the references of the generated weights (<code>params</code> field) and of the activations buffer (<code>activations</code> field)</li>
<li><code>network</code> handle should be a valid handle, see <a href="#ref_api_create"><code>ai_&lt;name&gt;_create()</code></a> function</li>
</ul>
<div class="sourceCode" id="cb9"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb9-1"><a href="#cb9-1" aria-hidden="true" tabindex="-1"></a><span class="co">/* @file: ai_platform.h */</span></span>
<span id="cb9-2"><a href="#cb9-2" aria-hidden="true" tabindex="-1"></a><span class="kw">typedef</span> <span class="kw">struct</span> <span class="va">ai_network_params_</span> <span class="op">{</span></span>
<span id="cb9-3"><a href="#cb9-3" aria-hidden="true" tabindex="-1"></a>  ai_buffer   params<span class="op">;</span>         <span class="co">/*! info about params buffer(required!) */</span></span>
<span id="cb9-4"><a href="#cb9-4" aria-hidden="true" tabindex="-1"></a>  ai_buffer   activations<span class="op">;</span>    <span class="co">/*! info about activations buffer (required!) */</span></span>
<span id="cb9-5"><a href="#cb9-5" aria-hidden="true" tabindex="-1"></a><span class="op">}</span> ai_network_params<span class="op">;</span></span></code></pre></div>
<ul>
<li>the <code>params</code> field references the weights/bias memory buffer</li>
<li>the <code>activations</code> field references the activations buffer used by the inference engine.</li>
<li>the sizes of the associated memory blocks are respectively defined by the following C-defines (see the <code>&lt;name&gt;_data.h</code> file):
<ul>
<li><code>AI_&lt;NAME&gt;_DATA_WEIGHTS_SIZE</code></li>
<li><code>AI_&lt;NAME&gt;_DATA_ACTIVATIONS_SIZE</code></li>
</ul></li>
</ul>
<p>The <code>AI_NETWORK_PARAMS_INIT()</code>, <code>AI_NETWORK_DATA_WEIGHTS()</code> and <code>AI_NETWORK_DATA_ACTIVATIONS()</code> helper macros should be used to populate the requested <code>params</code> structure. Note that the <code>ai_network_data_weights_get()</code> function allows retrieving the base address of the weights buffer (see the <code>&lt;network&gt;_data.c</code> file).</p>
<div class="sourceCode" id="cb10"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb10-1"><a href="#cb10-1" aria-hidden="true" tabindex="-1"></a>AI_ALIGNED<span class="op">(</span><span class="dv">32</span><span class="op">)</span></span>
<span id="cb10-2"><a href="#cb10-2" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_u8 activations<span class="op">[</span>AI_NETWORK_DATA_ACTIVATIONS_SIZE<span class="op">];</span></span>
<span id="cb10-3"><a href="#cb10-3" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb10-4"><a href="#cb10-4" aria-hidden="true" tabindex="-1"></a><span class="at">const</span> ai_network_params params <span class="op">=</span> AI_NETWORK_PARAMS_INIT<span class="op">(</span></span>
<span id="cb10-5"><a href="#cb10-5" aria-hidden="true" tabindex="-1"></a>  AI_NETWORK_DATA_WEIGHTS<span class="op">(</span>ai_network_data_weights_get<span class="op">()),</span></span>
<span id="cb10-6"><a href="#cb10-6" aria-hidden="true" tabindex="-1"></a>  AI_NETWORK_DATA_ACTIVATIONS<span class="op">(</span>activations<span class="op">)</span></span>
<span id="cb10-7"><a href="#cb10-7" aria-hidden="true" tabindex="-1"></a><span class="op">);</span></span>
<span id="cb10-8"><a href="#cb10-8" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb10-9"><a href="#cb10-9" aria-hidden="true" tabindex="-1"></a>ai_network_init<span class="op">(</span>network<span class="op">,</span> <span class="op">&amp;</span>params<span class="op">);</span></span></code></pre></div>
</section>
<section id="ref_api_run" class="level2">
<h2>ai_&lt;name&gt;_run()</h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1" aria-hidden="true" tabindex="-1"></a>ai_i32 <span class="va">ai_</span><span class="op">&lt;</span>name<span class="op">&gt;</span>_run<span class="op">(</span>ai_handle network<span class="op">,</span> <span class="at">const</span> ai_buffer<span class="op">*</span> input<span class="op">,</span> ai_buffer<span class="op">*</span> output<span class="op">);</span></span></code></pre></div>
<p>This function is called to run an inference of the neural network. The input and output buffer parameters (<code>ai_buffer</code> type) are used to provide the input tensors and to store the predicted output tensors respectively (see the “<a href="#ref_tensor_def">Input/output xD tensor format</a>” section).</p>
<ul>
<li>the returned value is the number of input tensors processed when <code>n_batches</code> &gt;= 1. If it is &lt;= 0, the <a href="#ref_api_get_error"><code>ai_network_get_error()</code></a> function should be used to retrieve the error</li>
</ul>
<div class="Tips">
<p><strong>Tip</strong> — Two separate lists of input and output <code>ai_buffer</code> objects can be passed. This makes it possible to support a neural network model with multiple inputs and/or outputs. The <code>AI_NETWORK_IN_NUM</code> and <code>AI_NETWORK_OUT_NUM</code> helper macros can be used to know the number of inputs and outputs at compile time. These values are also reported by the <code>ai_network_report</code> structure (see the <a href="#ref_api_info"><code>ai_&lt;name&gt;_get_info()</code></a> function).</p>
</div>
<p><strong>Typical usages</strong></p>
<p>The default use case is illustrated by the <a href="#ref_quick_usage_code">“Getting started”</a> code snippet. The following code is an example with a c-model which has two input and two output tensors. Note that the data payloads of the <a href="#sec_alloc_inputs">input buffers</a> are also allocated inside the “activations” buffer.</p>
<div class="sourceCode" id="cb11"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb11-1"><a href="#cb11-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&lt;stdio.h&gt;</span></span>
<span id="cb11-2"><a href="#cb11-2" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb11-3"><a href="#cb11-3" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb11-4"><a href="#cb11-4" aria-hidden="true" tabindex="-1"></a><span class="co">/* C-table to store the @ of the input buffers */</span></span>
<span id="cb11-5"><a href="#cb11-5" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_float <span class="op">*</span>in_data<span class="op">[</span>AI_NETWORK_IN_NUM<span class="op">];</span></span>
<span id="cb11-6"><a href="#cb11-6" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb11-7"><a href="#cb11-7" aria-hidden="true" tabindex="-1"></a><span class="co">/* ai input handlers */</span></span>
<span id="cb11-8"><a href="#cb11-8" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_buffer ai_inputs<span class="op">[</span>AI_NETWORK_IN_NUM<span class="op">]</span> <span class="op">=</span> AI_NETWORK_IN <span class="op">;</span></span>
<span id="cb11-9"><a href="#cb11-9" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb11-10"><a href="#cb11-10" aria-hidden="true" tabindex="-1"></a><span class="co">/* ai output handlers */</span></span>
<span id="cb11-11"><a href="#cb11-11" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_buffer ai_outputs<span class="op">[</span>AI_NETWORK_OUT_NUM<span class="op">]</span> <span class="op">=</span> AI_NETWORK_OUT <span class="op">;</span></span>
<span id="cb11-12"><a href="#cb11-12" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb11-13"><a href="#cb11-13" aria-hidden="true" tabindex="-1"></a><span class="co">/* data buffer for the output buffers */</span></span>
<span id="cb11-14"><a href="#cb11-14" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_float out_1_data<span class="op">[</span>AI_NETWORK_OUT_1_SIZE<span class="op">];</span></span>
<span id="cb11-15"><a href="#cb11-15" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_float out_2_data<span class="op">[</span>AI_NETWORK_OUT_2_SIZE<span class="op">];</span></span>
<span id="cb11-16"><a href="#cb11-16" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb11-17"><a href="#cb11-17" aria-hidden="true" tabindex="-1"></a><span class="co">/* C-table to store the @ of the output buffers */</span></span>
<span id="cb11-18"><a href="#cb11-18" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_float<span class="op">*</span> out_data<span class="op">[</span>AI_NETWORK_OUT_NUM<span class="op">]</span> <span class="op">=</span> <span class="op">{</span></span>
<span id="cb11-19"><a href="#cb11-19" aria-hidden="true" tabindex="-1"></a>  <span class="op">&amp;</span>out_1_data<span class="op">[</span><span class="dv">0</span><span class="op">],</span></span>
<span id="cb11-20"><a href="#cb11-20" aria-hidden="true" tabindex="-1"></a>  <span class="op">&amp;</span>out_2_data<span class="op">[</span><span class="dv">0</span><span class="op">]</span></span>
<span id="cb11-21"><a href="#cb11-21" aria-hidden="true" tabindex="-1"></a>  <span class="op">};</span></span>
<span id="cb11-22"><a href="#cb11-22" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb11-23"><a href="#cb11-23" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb11-24"><a href="#cb11-24" aria-hidden="true" tabindex="-1"></a><span class="dt">int</span> aiInit<span class="op">(</span><span class="dt">void</span><span class="op">)</span> <span class="op">{</span></span>
<span id="cb11-25"><a href="#cb11-25" aria-hidden="true" tabindex="-1"></a>  <span class="op">...</span></span>
<span id="cb11-26"><a href="#cb11-26" aria-hidden="true" tabindex="-1"></a>  ai_network_report report<span class="op">;</span> </span>
<span id="cb11-27"><a href="#cb11-27" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb11-28"><a href="#cb11-28" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* 1 - Create and initialize network */</span></span>
<span id="cb11-29"><a href="#cb11-29" aria-hidden="true" tabindex="-1"></a>  <span class="op">...</span></span>
<span id="cb11-30"><a href="#cb11-30" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb11-31"><a href="#cb11-31" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* 2 - Retrieve network infos */</span></span>
<span id="cb11-32"><a href="#cb11-32" aria-hidden="true" tabindex="-1"></a>  ai_network_get_info<span class="op">(</span>network<span class="op">,</span> <span class="op">&amp;</span>report<span class="op">);</span></span>
<span id="cb11-33"><a href="#cb11-33" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb11-34"><a href="#cb11-34" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* 3 - Update the ai input handlers with the effective @ of</span></span>
<span id="cb11-35"><a href="#cb11-35" aria-hidden="true" tabindex="-1"></a><span class="co">         the input buffers  */</span></span>
<span id="cb11-36"><a href="#cb11-36" aria-hidden="true" tabindex="-1"></a>  <span class="cf">for</span> <span class="op">(</span><span class="dt">int</span> i<span class="op">=</span><span class="dv">0</span><span class="op">;</span> i <span class="op">&lt;</span> AI_NETWORK_IN_NUM<span class="op">;</span> i<span class="op">++)</span> <span class="op">{</span></span>
<span id="cb11-37"><a href="#cb11-37" aria-hidden="true" tabindex="-1"></a>    ai_inputs<span class="op">[</span>i<span class="op">].</span>n_batches <span class="op">=</span> <span class="dv">1</span><span class="op">;</span></span>
<span id="cb11-38"><a href="#cb11-38" aria-hidden="true" tabindex="-1"></a>    ai_inputs<span class="op">[</span>i<span class="op">].</span>data <span class="op">=</span> AI_HANDLE_PTR<span class="op">(</span>report<span class="op">.</span>inputs<span class="op">[</span>i<span class="op">].</span>data<span class="op">);</span></span>
<span id="cb11-39"><a href="#cb11-39" aria-hidden="true" tabindex="-1"></a>    in_data<span class="op">[</span>i<span class="op">]</span> <span class="op">=</span> <span class="op">(</span>ai_float <span class="op">*)(</span>ai_inputs<span class="op">[</span>i<span class="op">].</span>data<span class="op">);</span></span>
<span id="cb11-40"><a href="#cb11-40" aria-hidden="true" tabindex="-1"></a>  <span class="op">}</span></span>
<span id="cb11-41"><a href="#cb11-41" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb11-42"><a href="#cb11-42" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* 4- Update the ai output handlers */</span></span>
<span id="cb11-43"><a href="#cb11-43" aria-hidden="true" tabindex="-1"></a>  <span class="cf">for</span> <span class="op">(</span><span class="dt">int</span> i<span class="op">=</span><span class="dv">0</span><span class="op">;</span> i <span class="op">&lt;</span> AI_NETWORK_OUT_NUM<span class="op">;</span> i<span class="op">++)</span> <span class="op">{</span></span>
<span id="cb11-44"><a href="#cb11-44" aria-hidden="true" tabindex="-1"></a>    ai_outputs<span class="op">[</span>i<span class="op">].</span>n_batches <span class="op">=</span> <span class="dv">1</span><span class="op">;</span></span>
<span id="cb11-45"><a href="#cb11-45" aria-hidden="true" tabindex="-1"></a>    ai_outputs<span class="op">[</span>i<span class="op">].</span>data <span class="op">=</span> AI_HANDLE_PTR<span class="op">(</span>out_data<span class="op">[</span>i<span class="op">]);</span></span>
<span id="cb11-46"><a href="#cb11-46" aria-hidden="true" tabindex="-1"></a>  <span class="op">}</span></span>
<span id="cb11-47"><a href="#cb11-47" aria-hidden="true" tabindex="-1"></a>  <span class="op">...</span></span>
<span id="cb11-48"><a href="#cb11-48" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span>
<span id="cb11-49"><a href="#cb11-49" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb11-50"><a href="#cb11-50" aria-hidden="true" tabindex="-1"></a><span class="dt">void</span> main_loop<span class="op">()</span></span>
<span id="cb11-51"><a href="#cb11-51" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb11-52"><a href="#cb11-52" aria-hidden="true" tabindex="-1"></a>  <span class="cf">while</span> <span class="op">(</span><span class="dv">1</span><span class="op">)</span> <span class="op">{</span></span>
<span id="cb11-53"><a href="#cb11-53" aria-hidden="true" tabindex="-1"></a>    <span class="co">/* 1 - Acquire, pre-process and fill the input buffers */</span></span>
<span id="cb11-54"><a href="#cb11-54" aria-hidden="true" tabindex="-1"></a>    acquire_and_process_data<span class="op">(</span>in_data<span class="op">);</span></span>
<span id="cb11-55"><a href="#cb11-55" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb11-56"><a href="#cb11-56" aria-hidden="true" tabindex="-1"></a>    <span class="co">/* 2 - Call inference engine */</span></span>
<span id="cb11-57"><a href="#cb11-57" aria-hidden="true" tabindex="-1"></a>    ai_network_run<span class="op">(</span>network<span class="op">,</span> <span class="op">&amp;</span>ai_inputs<span class="op">[</span><span class="dv">0</span><span class="op">],</span> <span class="op">&amp;</span>ai_outputs<span class="op">[</span><span class="dv">0</span><span class="op">]);</span></span>
<span id="cb11-58"><a href="#cb11-58" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb11-59"><a href="#cb11-59" aria-hidden="true" tabindex="-1"></a>    <span class="co">/* 3 - Post-process the predictions */</span></span>
<span id="cb11-60"><a href="#cb11-60" aria-hidden="true" tabindex="-1"></a>    post_process<span class="op">(</span>out_data<span class="op">);</span></span>
<span id="cb11-61"><a href="#cb11-61" aria-hidden="true" tabindex="-1"></a>  <span class="op">}</span></span>
<span id="cb11-62"><a href="#cb11-62" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span></code></pre></div>
</section>
<section id="ref_api_get_error" class="level2">
<h2>ai_&lt;name&gt;_get_error()</h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1" aria-hidden="true" tabindex="-1"></a>ai_error <span class="va">ai_</span><span class="op">&lt;</span>name<span class="op">&gt;</span>_get_error<span class="op">(</span>ai_handle network<span class="op">);</span></span></code></pre></div>
<p>This function can be used by the client application to retrieve the first error reported during the execution of an <code>ai_&lt;name&gt;_xxx()</code> function.</p>
<ul>
<li>See the <code>ai_platform.h</code> file for the list of returned error types (<code>ai_error_type</code>) and associated codes (<code>ai_error_code</code>).</li>
</ul>
<p><strong>Typical AI error function handler (debug/log purpose)</strong></p>
<div class="sourceCode" id="cb12"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb12-1"><a href="#cb12-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb12-2"><a href="#cb12-2" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb12-3"><a href="#cb12-3" aria-hidden="true" tabindex="-1"></a><span class="dt">void</span> aiLogErr<span class="op">(</span><span class="at">const</span> ai_error err<span class="op">)</span></span>
<span id="cb12-4"><a href="#cb12-4" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb12-5"><a href="#cb12-5" aria-hidden="true" tabindex="-1"></a>  printf<span class="op">(</span><span class="st">&quot;E: AI error - type=</span><span class="sc">%d</span><span class="st"> code=</span><span class="sc">%d\r\n</span><span class="st">&quot;</span><span class="op">,</span> err<span class="op">.</span>type<span class="op">,</span> err<span class="op">.</span>code<span class="op">);</span></span>
<span id="cb12-6"><a href="#cb12-6" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span></code></pre></div>
</section>
<section id="ref_api_info" class="level2">
<h2>ai_&lt;name&gt;_get_info()</h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1" aria-hidden="true" tabindex="-1"></a>ai_bool <span class="va">ai_</span><span class="op">&lt;</span>name<span class="op">&gt;</span>_get_info<span class="op">(</span>ai_handle network<span class="op">,</span> ai_network_report<span class="op">*</span> report<span class="op">);</span></span></code></pre></div>
<p>This function retrieves the run-time data attributes of an instantiated model. Refer to the <code>ai_platform.h</code> file for the details of the returned <code>ai_network_report</code> C-structure. If it is called before the <code>ai_&lt;name&gt;_init()</code> function, the reported information for the <code>activations</code>, <code>params</code>, <code>inputs</code> and <code>outputs</code> fields is incomplete: in particular, the effective addresses of the buffers are not set (the instance is not yet fully initialized). All static information is nevertheless available.</p>
<p><strong>Typical usage</strong></p>
<div class="sourceCode" id="cb13"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb13-1"><a href="#cb13-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb13-2"><a href="#cb13-2" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb13-3"><a href="#cb13-3" aria-hidden="true" tabindex="-1"></a><span class="dt">int</span> aiInit<span class="op">(</span><span class="dt">void</span><span class="op">)</span> <span class="op">{</span></span>
<span id="cb13-4"><a href="#cb13-4" aria-hidden="true" tabindex="-1"></a>  ai_network_report report<span class="op">;</span></span>
<span id="cb13-5"><a href="#cb13-5" aria-hidden="true" tabindex="-1"></a>  ai_bool res<span class="op">;</span></span>
<span id="cb13-6"><a href="#cb13-6" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb13-7"><a href="#cb13-7" aria-hidden="true" tabindex="-1"></a>  res <span class="op">=</span> ai_network_get_info<span class="op">(</span>network<span class="op">,</span> <span class="op">&amp;</span>report<span class="op">);</span></span>
<span id="cb13-8"><a href="#cb13-8" aria-hidden="true" tabindex="-1"></a>  <span class="cf">if</span> <span class="op">(</span>res<span class="op">)</span> <span class="op">{</span></span>
<span id="cb13-9"><a href="#cb13-9" aria-hidden="true" tabindex="-1"></a>    <span class="co">/* display/use the reported data */</span></span>
<span id="cb13-10"><a href="#cb13-10" aria-hidden="true" tabindex="-1"></a>    <span class="op">...</span></span>
<span id="cb13-11"><a href="#cb13-11" aria-hidden="true" tabindex="-1"></a>  <span class="op">}</span></span>
<span id="cb13-12"><a href="#cb13-12" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb13-13"><a href="#cb13-13" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span></code></pre></div>
</section>
</section>
<section id="ref_tensor_def" class="level1">
<h1>IO tensor description</h1>
<p>Up to 4-dimensional tensors are supported, with a fixed format: the <em>BHWC</em> (<em>channel-last</em>) representation. They are defined by a <code>struct ai_buffer</code> C-structure object. The referenced memory buffer (<code>data</code> field) is physically stored and referenced in memory as a standard C-array. Scattered memory buffers are not supported.</p>
<ul>
<li><code>n_batches</code>, <code>height</code>, <code>width</code>, <code>channels</code> - indicate the dimensions of the tensor<br />
</li>
<li><a href="#ref_data_type"><code>format</code></a> - indicates the format of the data<br />
</li>
<li><a href="#ref_data_type"><code>meta_info</code></a> - extra field referencing additional data-dependent parameters that may be needed to handle the buffer</li>
</ul>
<div class="HTips">
<p><strong>Note</strong> — If the dimension order in the original toolbox is different from HWC (for example, ONNX uses CHW), it is up to the application to properly re-arrange the element order.</p>
</div>
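<p>As a minimal illustration of this re-arrangement (the helper below is application code, not part of the generated API), a CHW-ordered float tensor can be copied into the expected HWC layout as follows:</p>

```c
#include <stddef.h>

/* Copy a CHW-ordered (ONNX-style) float tensor into the HWC
   (channel-last) layout expected by the ai_buffer data field.
   Illustrative helper, not part of the generated API. */
static void chw_to_hwc(const float *chw, float *hwc,
                       size_t h, size_t w, size_t c)
{
  for (size_t ci = 0; ci < c; ci++)
    for (size_t hi = 0; hi < h; hi++)
      for (size_t wi = 0; wi < w; wi++)
        /* CHW offset: (ci*h + hi)*w + wi  ->  HWC offset: (hi*w + wi)*c + ci */
        hwc[(hi * w + wi) * c + ci] = chw[(ci * h + hi) * w + wi];
}
```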
<section id="ai_buffer-c-structure" class="level2">
<h2>ai_buffer C-structure</h2>
<div class="sourceCode" id="cb14"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb14-1"><a href="#cb14-1" aria-hidden="true" tabindex="-1"></a><span class="co">/* @file: ai_platform.h */</span></span>
<span id="cb14-2"><a href="#cb14-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb14-3"><a href="#cb14-3" aria-hidden="true" tabindex="-1"></a><span class="kw">typedef</span> <span class="kw">struct</span> <span class="va">ai_buffer_</span> <span class="op">{</span></span>
<span id="cb14-4"><a href="#cb14-4" aria-hidden="true" tabindex="-1"></a>  ai_buffer_format        format<span class="op">;</span>     <span class="co">/*!&lt; buffer format */</span></span>
<span id="cb14-5"><a href="#cb14-5" aria-hidden="true" tabindex="-1"></a>  ai_u16                  n_batches<span class="op">;</span>  <span class="co">/*!&lt; number of batches in the buffer */</span></span>
<span id="cb14-6"><a href="#cb14-6" aria-hidden="true" tabindex="-1"></a>  ai_u16                  height<span class="op">;</span>     <span class="co">/*!&lt; buffer height dimension */</span></span>
<span id="cb14-7"><a href="#cb14-7" aria-hidden="true" tabindex="-1"></a>  ai_u16                  width<span class="op">;</span>      <span class="co">/*!&lt; buffer width dimension */</span></span>
<span id="cb14-8"><a href="#cb14-8" aria-hidden="true" tabindex="-1"></a>  ai_u32                  channels<span class="op">;</span>   <span class="co">/*!&lt; buffer number of channels */</span></span>
<span id="cb14-9"><a href="#cb14-9" aria-hidden="true" tabindex="-1"></a>  ai_handle               data<span class="op">;</span>       <span class="co">/*!&lt; pointer to buffer data */</span></span>
<span id="cb14-10"><a href="#cb14-10" aria-hidden="true" tabindex="-1"></a>  ai_buffer_meta_info<span class="op">*</span>    meta_info<span class="op">;</span>  <span class="co">/*!&lt; pointer to buffer metadata info */</span></span>
<span id="cb14-11"><a href="#cb14-11" aria-hidden="true" tabindex="-1"></a><span class="op">}</span> ai_buffer<span class="op">;</span></span></code></pre></div>
<p>The following table shows the generated mapping of the 1d, 2d and 3d-array tensors:</p>
<table>
<thead>
<tr class="header">
<th style="text-align: left;">tensor shape</th>
<th style="text-align: left;">mapped on (B, H, W, C)</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><a href="#ref_1d">1d-array</a></td>
<td style="text-align: left;">(-, 1, 1, c)</td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#ref_2d">2d-array</a></td>
<td style="text-align: left;">(-, h, 1, c)</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#ref_3d">3d-array</a></td>
<td style="text-align: left;">(-, h, w, c)</td>
</tr>
</tbody>
</table>
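<p>This mapping implies a <em>channel-last</em> linear addressing of the <code>data</code> buffer: the element at coordinates (b, h, w, c) is stored at the flat offset computed below (illustrative helper, not part of the generated API):</p>

```c
#include <stddef.h>

/* Flat offset of element (b, h, w, c) in a BHWC (channel-last) buffer,
   for a tensor with per-batch dimensions H x W x C. */
static size_t bhwc_offset(size_t b, size_t h, size_t w, size_t c,
                          size_t H, size_t W, size_t C)
{
  return ((b * H + h) * W + w) * C + c;
}
```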
<p><strong>Retrieve tensor information</strong></p>
<p>The following code snippets show how to retrieve the tensor information from a buffer descriptor. The <code>format</code> and <code>meta_info</code> fields are described in the next section.</p>
<div class="sourceCode" id="cb15"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb15-1"><a href="#cb15-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb15-2"><a href="#cb15-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb15-3"><a href="#cb15-3" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb15-4"><a href="#cb15-4" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Use the generated macro to set the buffer input descriptors */</span></span>
<span id="cb15-5"><a href="#cb15-5" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_buffer input<span class="op">[]</span> <span class="op">=</span> AI_NETWORK_IN<span class="op">;</span></span>
<span id="cb15-6"><a href="#cb15-6" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb15-7"><a href="#cb15-7" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Extract format of the first input tensor (index 0) */</span></span>
<span id="cb15-8"><a href="#cb15-8" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_buffer_format fmt_1 <span class="op">=</span> AI_BUFFER_FORMAT<span class="op">(&amp;</span>input<span class="op">[</span><span class="dv">0</span><span class="op">]);</span></span>
<span id="cb15-9"><a href="#cb15-9" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb15-10"><a href="#cb15-10" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Extract height, width and channels of the first input tensor */</span></span>
<span id="cb15-11"><a href="#cb15-11" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_u16 height_1 <span class="op">=</span> AI_BUFFER_HEIGHT<span class="op">(&amp;</span>input<span class="op">[</span><span class="dv">0</span><span class="op">]);</span></span>
<span id="cb15-12"><a href="#cb15-12" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_u16 width_1 <span class="op">=</span> AI_BUFFER_WIDTH<span class="op">(&amp;</span>input<span class="op">[</span><span class="dv">0</span><span class="op">]);</span></span>
<span id="cb15-13"><a href="#cb15-13" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_u16 ch_1 <span class="op">=</span> AI_BUFFER_CHANNELS<span class="op">(&amp;</span>input<span class="op">[</span><span class="dv">0</span><span class="op">]);</span></span>
<span id="cb15-14"><a href="#cb15-14" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_u16 size_1 <span class="op">=</span> AI_BUFFER_SIZE<span class="op">(&amp;</span>input<span class="op">[</span><span class="dv">0</span><span class="op">]);</span> <span class="co">/* number of items */</span></span>
<span id="cb15-15"><a href="#cb15-15" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_u32 size_in_bytes_1 <span class="op">=</span> AI_BUFFER_BYTE_SIZE<span class="op">(</span>size_1<span class="op">,</span> fmt_1<span class="op">);</span></span>
<span id="cb15-16"><a href="#cb15-16" aria-hidden="true" tabindex="-1"></a>  <span class="op">...</span></span>
<span id="cb15-17"><a href="#cb15-17" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span></code></pre></div>
<p>or with the <code>ai_network_report</code> structure</p>
<div class="sourceCode" id="cb16"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb16-1"><a href="#cb16-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb16-2"><a href="#cb16-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb16-3"><a href="#cb16-3" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb16-4"><a href="#cb16-4" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Fetch run-time network descriptor */</span></span>
<span id="cb16-5"><a href="#cb16-5" aria-hidden="true" tabindex="-1"></a>  ai_network_report report<span class="op">;</span></span>
<span id="cb16-6"><a href="#cb16-6" aria-hidden="true" tabindex="-1"></a>  ai_network_get_info<span class="op">(</span>network<span class="op">,</span> <span class="op">&amp;</span>report<span class="op">);</span></span>
<span id="cb16-7"><a href="#cb16-7" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb16-8"><a href="#cb16-8" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Set the descriptor of the first input tensor (index 0) */</span></span>
<span id="cb16-9"><a href="#cb16-9" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_buffer <span class="op">*</span>input <span class="op">=</span> <span class="op">&amp;</span>report<span class="op">.</span>inputs<span class="op">[</span><span class="dv">0</span><span class="op">];</span></span>
<span id="cb16-10"><a href="#cb16-10" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb16-11"><a href="#cb16-11" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Extract format of the tensor */</span></span>
<span id="cb16-12"><a href="#cb16-12" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_buffer_format fmt_1 <span class="op">=</span> AI_BUFFER_FORMAT<span class="op">(</span>input<span class="op">);</span></span>
<span id="cb16-13"><a href="#cb16-13" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb16-14"><a href="#cb16-14" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Extract height, width and channels of the tensor */</span></span>
<span id="cb16-15"><a href="#cb16-15" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_u16 height_1 <span class="op">=</span> AI_BUFFER_HEIGHT<span class="op">(</span>input<span class="op">);</span></span>
<span id="cb16-16"><a href="#cb16-16" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_u16 width_1 <span class="op">=</span> AI_BUFFER_WIDTH<span class="op">(</span>input<span class="op">);</span></span>
<span id="cb16-17"><a href="#cb16-17" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_u16 ch_1 <span class="op">=</span> AI_BUFFER_CHANNELS<span class="op">(</span>input<span class="op">);</span></span>
<span id="cb16-18"><a href="#cb16-18" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_u16 size_1 <span class="op">=</span> AI_BUFFER_SIZE<span class="op">(</span>input<span class="op">);</span> <span class="co">/* number of items */</span></span>
<span id="cb16-19"><a href="#cb16-19" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_u32 size_in_bytes_1 <span class="op">=</span> AI_BUFFER_BYTE_SIZE<span class="op">(</span>size_1<span class="op">,</span> fmt_1<span class="op">);</span></span>
<span id="cb16-20"><a href="#cb16-20" aria-hidden="true" tabindex="-1"></a>  <span class="op">...</span></span>
<span id="cb16-21"><a href="#cb16-21" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span></code></pre></div>
</section>
<section id="ref_data_type" class="level2">
<h2>Tensor format</h2>
<p>The format of the data is mainly defined by the <code>format</code> field, a 32-bit word (<code>ai_buffer_format</code> type). Two types are supported.</p>
<div class="sourceCode" id="cb17"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb17-1"><a href="#cb17-1" aria-hidden="true" tabindex="-1"></a><span class="at">const</span> ai_buffer_format fmt <span class="op">=</span> AI_BUFFER_FORMAT<span class="op">(</span><span class="er">@</span>ai_buffer_object<span class="op">);</span></span></code></pre></div>
<table>
<colgroup>
<col style="width: 42%" />
<col style="width: 57%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">type</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>AI_BUFFER_FMT_TYPE_FLOAT</code></td>
<td style="text-align: left;">indicates that the data container handles <strong>floating-point data</strong>, mapped on a 32-bit float C-type (<code>ai_float</code> or <code>float</code>).</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_BUFFER_FMT_TYPE_Q</code></td>
<td style="text-align: left;">indicates that the data container handles <strong>quantized data</strong>, mapped on an 8-bit signed or unsigned integer C-type. See the <a href="quantization.html#ref_support_arithmetic">[QUANT], “Quantized tensors”</a> section for details of the integer arithmetic used.</td>
</tr>
</tbody>
</table>
<p><strong>Helper C-macros</strong></p>
<p>The following C-macros can be used with the <code>format</code> field of the <code>struct ai_buffer</code> C-structure object to extract this information.</p>
<table>
<colgroup>
<col style="width: 36%" />
<col style="width: 63%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">macros</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>AI_BUFFER_FMT_GET_TYPE(fmt)</code></td>
<td style="text-align: left;">returns <code>AI_BUFFER_FMT_TYPE_FLOAT</code> or <code>AI_BUFFER_FMT_TYPE_Q</code> buffer type</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_BUFFER_FMT_GET_FLOAT(fmt)</code></td>
<td style="text-align: left;">returns <code>1</code> if the data is a float type, else <code>0</code></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_BUFFER_FMT_GET_SIGN(fmt)</code></td>
<td style="text-align: left;">returns <code>1</code> if the data is signed, else <code>0</code>.</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_BUFFER_FMT_GET_BITS(fmt)</code></td>
<td style="text-align: left;">returns the total number of bits used to encode the data. For the <code>AI_BUFFER_FMT_TYPE_Q</code> type, this is M+N+sign. Available values: <code>32</code> or <code>8</code></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_BUFFER_FMT_GET_FBITS(fmt)</code></td>
<td style="text-align: left;">returns the number of bits used to encode the fractional part for the 8-bit quantized data type.</td>
</tr>
</tbody>
</table>
<p>Additional macros are defined for the meta parameters:</p>
<div class="sourceCode" id="cb18"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb18-1"><a href="#cb18-1" aria-hidden="true" tabindex="-1"></a><span class="at">const</span> ai_buffer_meta_info <span class="op">*</span> meta_info <span class="op">=</span> AI_BUFFER_META_INFO<span class="op">(</span><span class="er">@</span>ai_buffer_object<span class="op">);</span></span></code></pre></div>
<table>
<colgroup>
<col style="width: 47%" />
<col style="width: 52%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">macros</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>AI_BUFFER_META_INFO_INTQ(meta_info)</code></td>
<td style="text-align: left;">indicates if scale/zero-point meta info is available. If so, a reference to an <code>ai_intq_info</code> object is returned, else <code>NULL</code>.</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_BUFFER_META_INFO_INTQ_GET_SCALE(meta_info, pos)</code></td>
<td style="text-align: left;">generic macro returning the scale value at the pos-th position if available, else <code>0</code>. <code>ai_float</code> type. For the IO tensors, only position 0 is available.</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_BUFFER_META_INFO_INTQ_GET_ZEROPOINT(meta_info, pos)</code></td>
<td style="text-align: left;">generic macro returning the zero-point value at the pos-th position if available, else <code>0</code>. <code>ai_i8</code> or <code>ai_u8</code> type; the type can be deduced from the output of the <code>AI_BUFFER_FMT_GET_SIGN()</code> and <code>AI_BUFFER_FMT_GET_BITS()</code> macros.</td>
</tr>
</tbody>
</table>
<div class="Alert">
<p><strong>Warning</strong> — Be aware that the <code>meta_info</code> field is only available through the returned <code>ai_network_report</code> structure. Otherwise, the values defined by the generated <code>AI_&lt;NAME&gt;_IN/OUT</code> C-defines are <code>NULL</code>.</p>
</div>
<p>The following code snippet illustrates typical code to extract the <code>scale</code> and <code>zero_point</code> values:</p>
<div class="sourceCode" id="cb19"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb19-1"><a href="#cb19-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb19-2"><a href="#cb19-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb19-3"><a href="#cb19-3" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_handle network<span class="op">;</span></span>
<span id="cb19-4"><a href="#cb19-4" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb19-5"><a href="#cb19-5" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb19-6"><a href="#cb19-6" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Fetch run-time network descriptor. This is MANDATORY</span></span>
<span id="cb19-7"><a href="#cb19-7" aria-hidden="true" tabindex="-1"></a><span class="co">     to retrieve the meta parameters. They are NOT available</span></span>
<span id="cb19-8"><a href="#cb19-8" aria-hidden="true" tabindex="-1"></a><span class="co">     through the definition of the AI_&lt;NAME&gt;_IN/OUT macro.</span></span>
<span id="cb19-9"><a href="#cb19-9" aria-hidden="true" tabindex="-1"></a><span class="co">  */</span></span>
<span id="cb19-10"><a href="#cb19-10" aria-hidden="true" tabindex="-1"></a>  ai_network_report report<span class="op">;</span></span>
<span id="cb19-11"><a href="#cb19-11" aria-hidden="true" tabindex="-1"></a>  ai_network_get_info<span class="op">(</span>network<span class="op">,</span> <span class="op">&amp;</span>report<span class="op">);</span></span>
<span id="cb19-12"><a href="#cb19-12" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb19-13"><a href="#cb19-13" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Set the descriptor of the first input tensor (index 0) */</span></span>
<span id="cb19-14"><a href="#cb19-14" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_buffer <span class="op">*</span>input <span class="op">=</span> <span class="op">&amp;</span>report<span class="op">.</span>inputs<span class="op">[</span><span class="dv">0</span><span class="op">];</span></span>
<span id="cb19-15"><a href="#cb19-15" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb19-16"><a href="#cb19-16" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Extract format of the tensor */</span></span>
<span id="cb19-17"><a href="#cb19-17" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_buffer_format fmt_1 <span class="op">=</span> AI_BUFFER_FORMAT<span class="op">(</span>input<span class="op">);</span></span>
<span id="cb19-18"><a href="#cb19-18" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb19-19"><a href="#cb19-19" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Extract the data type */</span></span>
<span id="cb19-20"><a href="#cb19-20" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> <span class="dt">uint32_t</span> type <span class="op">=</span> AI_BUFFER_FMT_GET_TYPE<span class="op">(</span>fmt_1<span class="op">);</span> <span class="co">/* -&gt; AI_BUFFER_FMT_TYPE_Q */</span></span>
<span id="cb19-21"><a href="#cb19-21" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb19-22"><a href="#cb19-22" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Extract sign and number of bits */</span></span>
<span id="cb19-23"><a href="#cb19-23" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_size sign <span class="op">=</span> AI_BUFFER_FMT_GET_SIGN<span class="op">(</span>fmt_1<span class="op">);</span>  <span class="co">/* -&gt; 1 or 0*/</span></span>
<span id="cb19-24"><a href="#cb19-24" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_size bits <span class="op">=</span> AI_BUFFER_FMT_GET_BITS<span class="op">(</span>fmt_1<span class="op">);</span>  <span class="co">/* -&gt; 8 */</span></span>
<span id="cb19-25"><a href="#cb19-25" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb19-26"><a href="#cb19-26" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Extract scale/zero_point values (only pos=0 is currently supported, per-tensor) */</span></span>
<span id="cb19-27"><a href="#cb19-27" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_float scale <span class="op">=</span> AI_BUFFER_META_INFO_INTQ_GET_SCALE<span class="op">(</span>input<span class="op">-&gt;</span>meta_info<span class="op">,</span> <span class="dv">0</span><span class="op">);</span></span>
<span id="cb19-28"><a href="#cb19-28" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> <span class="dt">int</span> zero_point <span class="op">=</span> AI_BUFFER_META_INFO_INTQ_GET_ZEROPOINT<span class="op">(</span>input<span class="op">-&gt;</span>meta_info<span class="op">,</span> <span class="dv">0</span><span class="op">);</span></span>
<span id="cb19-29"><a href="#cb19-29" aria-hidden="true" tabindex="-1"></a>  <span class="op">...</span></span>
<span id="cb19-30"><a href="#cb19-30" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span></code></pre></div>
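<p>With the retrieved <code>scale</code> and <code>zero_point</code>, the application can convert between real and quantized values through the affine relation <code>real = scale * (quantized - zero_point)</code>. A minimal sketch for the signed 8-bit case (these helpers are illustrative, not part of the generated API):</p>

```c
/* Affine conversion helpers for the signed 8-bit quantized case,
   using the scale/zero_point retrieved from the buffer meta-data.
   Illustrative helpers, not part of the generated API. */
static float dequantize_i8(signed char q, float scale, int zero_point)
{
  return scale * (float)((int)q - zero_point);
}

static signed char quantize_i8(float v, float scale, int zero_point)
{
  /* round to nearest, then saturate to the int8 range */
  int q = (int)(v / scale + (v >= 0.0f ? 0.5f : -0.5f)) + zero_point;
  if (q < -128) q = -128;
  if (q >  127) q =  127;
  return (signed char)q;
}
```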
<p>Floating-point case:</p>
<div class="sourceCode" id="cb20"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb20-1"><a href="#cb20-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb20-2"><a href="#cb20-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb20-3"><a href="#cb20-3" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb20-4"><a href="#cb20-4" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Generated macro is used to set the buffer input descriptors */</span></span>
<span id="cb20-5"><a href="#cb20-5" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_buffer input<span class="op">[]</span> <span class="op">=</span> AI_<span class="op">&lt;</span>NAME<span class="op">&gt;</span>_IN<span class="op">;</span></span>
<span id="cb20-6"><a href="#cb20-6" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb20-7"><a href="#cb20-7" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Retrieve format of the first input tensor (index 0) */</span></span>
<span id="cb20-8"><a href="#cb20-8" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_buffer_format fmt_1 <span class="op">=</span> AI_BUFFER_FORMAT<span class="op">(&amp;</span>input<span class="op">[</span><span class="dv">0</span><span class="op">]);</span></span>
<span id="cb20-9"><a href="#cb20-9" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb20-10"><a href="#cb20-10" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Retrieve the data type */</span></span>
<span id="cb20-11"><a href="#cb20-11" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> <span class="dt">uint32_t</span> type <span class="op">=</span> AI_BUFFER_FMT_GET_TYPE<span class="op">(</span>fmt_1<span class="op">);</span> <span class="co">/* -&gt; AI_BUFFER_FMT_TYPE_FLOAT */</span></span>
<span id="cb20-12"><a href="#cb20-12" aria-hidden="true" tabindex="-1"></a>  </span>
<span id="cb20-13"><a href="#cb20-13" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Retrieve sign/size values */</span></span>
<span id="cb20-14"><a href="#cb20-14" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_size sign <span class="op">=</span> AI_BUFFER_FMT_GET_SIGN<span class="op">(</span>fmt_1<span class="op">);</span>   <span class="co">/* -&gt; 1 */</span></span>
<span id="cb20-15"><a href="#cb20-15" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_size bits <span class="op">=</span> AI_BUFFER_FMT_GET_BITS<span class="op">(</span>fmt_1<span class="op">);</span>   <span class="co">/* -&gt; 32 */</span></span>
<span id="cb20-16"><a href="#cb20-16" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_size N <span class="op">=</span> AI_BUFFER_FMT_GET_FBITS<span class="op">(</span>fmt_1<span class="op">);</span>     <span class="co">/* -&gt; 0 */</span></span>
<span id="cb20-17"><a href="#cb20-17" aria-hidden="true" tabindex="-1"></a>  <span class="op">...</span></span>
<span id="cb20-18"><a href="#cb20-18" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span></code></pre></div>
</section>
<section id="sec_life_cycle" class="level2">
<h2>Life-cycle of the IO tensors</h2>
<p>When the input and output buffers are passed to the <a href="#ref_api_run"><code>ai_&lt;name&gt;_run()</code></a> function, the caller should wait for the end of the inference before re-using the associated memory segments. There is no default mechanism to notify the application that the input tensors are released or no longer used by the C inference engine. This is particularly true when the buffers are allocated in the activations buffer. However, when an input buffer is allocated in the user space, the <a href="api_platform_observer.html">Platform Observer API</a> can be used to be notified when the operator has finished (see the <a href="api_platform_observer.html#ref_notify_input">“Processed input buffer notification use-case”</a> section).</p>
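<p>Since <code>ai_&lt;name&gt;_run()</code> is a blocking call, a simple and safe scheme with user-space buffers is to ping-pong between two application-owned input buffers: while one buffer is consumed by the inference, the next frame can be staged in the other (for example from a DMA or interrupt context). The sketch below only shows the buffer-index toggling; all names are illustrative, not part of the generated API:</p>

```c
#define IN_1_SIZE 8  /* stands for the generated AI_<NAME>_IN_1_SIZE value */

/* Two application-owned input buffers (illustrative). */
static float bufs[2][IN_1_SIZE];

/* Toggle the active buffer index after each (synchronous) inference:
   while bufs[active] is processed, acquisition targets bufs[active ^ 1]. */
static int ping_pong_next(int active)
{
  return active ^ 1; /* 0 -> 1 -> 0 -> ... */
}
```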
</section>
<section id="sec_base_in_address" class="level2">
<h2>Base address of the IO buffers</h2>
<p>The following code snippet illustrates the minimum instructions required to retrieve the effective address of a buffer located in the <em>activations</em> buffer. If the <a href="#sec_alloc_inputs"><code>--allocate-inputs</code></a> (or <code>--allocate-outputs</code>) option is not used, <code>NULL</code> is returned. Note that the instance should be fully <a href="#ref_api_init">initialized</a> beforehand, because the returned address depends on the base address of the <em>activations</em> buffer.</p>
<div class="sourceCode" id="cb21"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb21-1"><a href="#cb21-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb21-2"><a href="#cb21-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb21-3"><a href="#cb21-3" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_handle network<span class="op">;</span></span>
<span id="cb21-4"><a href="#cb21-4" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb21-5"><a href="#cb21-5" aria-hidden="true" tabindex="-1"></a><span class="pp">#if defined(AI_NETWORK_INPUTS_IN_ACTIVATIONS)</span></span>
<span id="cb21-6"><a href="#cb21-6" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_u8 <span class="op">*</span>in_data_1<span class="op">;</span></span>
<span id="cb21-7"><a href="#cb21-7" aria-hidden="true" tabindex="-1"></a><span class="pp">#else</span></span>
<span id="cb21-8"><a href="#cb21-8" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Buffer should be allocated by the application </span></span>
<span id="cb21-9"><a href="#cb21-9" aria-hidden="true" tabindex="-1"></a><span class="co">     in this case: input-&gt;data == NULL */</span></span>
<span id="cb21-10"><a href="#cb21-10" aria-hidden="true" tabindex="-1"></a><span class="at">static</span> ai_u8 in_data_1<span class="op">[</span>AI_NETWORK_IN_1_SIZE_BYTES<span class="op">];</span></span>
<span id="cb21-11"><a href="#cb21-11" aria-hidden="true" tabindex="-1"></a><span class="pp">#endif</span></span>
<span id="cb21-12"><a href="#cb21-12" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb21-13"><a href="#cb21-13" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb21-14"><a href="#cb21-14" aria-hidden="true" tabindex="-1"></a>  ai_network_report report<span class="op">;</span></span>
<span id="cb21-15"><a href="#cb21-15" aria-hidden="true" tabindex="-1"></a>  ai_network_get_info<span class="op">(</span>network<span class="op">,</span> <span class="op">&amp;</span>report<span class="op">);</span></span>
<span id="cb21-16"><a href="#cb21-16" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb21-17"><a href="#cb21-17" aria-hidden="true" tabindex="-1"></a><span class="pp">#if defined(AI_NETWORK_INPUTS_IN_ACTIVATIONS)</span></span>
<span id="cb21-18"><a href="#cb21-18" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Set the descriptor of the first input tensor (index 0) */</span></span>
<span id="cb21-19"><a href="#cb21-19" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_buffer <span class="op">*</span>input <span class="op">=</span> <span class="op">&amp;</span>report<span class="op">.</span>inputs<span class="op">[</span><span class="dv">0</span><span class="op">];</span></span>
<span id="cb21-20"><a href="#cb21-20" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Retrieve the @ of the input buffer */</span></span>
<span id="cb21-21"><a href="#cb21-21" aria-hidden="true" tabindex="-1"></a>  in_data_1 <span class="op">=</span> <span class="op">(</span>ai_u8 <span class="op">*)</span>input<span class="op">-&gt;</span>data<span class="op">;</span></span>
<span id="cb21-22"><a href="#cb21-22" aria-hidden="true" tabindex="-1"></a><span class="pp">#endif </span></span>
<span id="cb21-23"><a href="#cb21-23" aria-hidden="true" tabindex="-1"></a><span class="op">...</span></span>
<span id="cb21-24"><a href="#cb21-24" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span></code></pre></div>
</section>
<section id="float32-to-8b-data-type-conversion" class="level2">
<h2>float32 to 8b data type conversion</h2>
<p>The following code snippet illustrates the float (<code>ai_float</code>) to integer (<code>ai_i8</code>/<code>ai_u8</code>) format conversion. The input buffer is used as the destination buffer.</p>
<div class="sourceCode" id="cb22"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb22-1"><a href="#cb22-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&lt;network.h&gt;</span></span>
<span id="cb22-2"><a href="#cb22-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb22-3"><a href="#cb22-3" aria-hidden="true" tabindex="-1"></a><span class="pp">#define _MIN</span><span class="op">(</span><span class="va">x_</span><span class="op">,</span><span class="pp"> </span><span class="va">y_</span><span class="op">)</span><span class="pp"> </span><span class="op">\</span></span>
<span id="cb22-4"><a href="#cb22-4" aria-hidden="true" tabindex="-1"></a><span class="pp">    </span><span class="op">(</span><span class="pp"> </span><span class="op">((</span><span class="va">x_</span><span class="op">)&lt;(</span><span class="va">y_</span><span class="op">))</span><span class="pp"> </span><span class="op">?</span><span class="pp"> </span><span class="op">(</span><span class="va">x_</span><span class="op">)</span><span class="pp"> </span><span class="op">:</span><span class="pp"> </span><span class="op">(</span><span class="va">y_</span><span class="op">)</span><span class="pp"> </span><span class="op">)</span></span>
<span id="cb22-5"><a href="#cb22-5" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb22-6"><a href="#cb22-6" aria-hidden="true" tabindex="-1"></a><span class="pp">#define _MAX</span><span class="op">(</span><span class="va">x_</span><span class="op">,</span><span class="pp"> </span><span class="va">y_</span><span class="op">)</span><span class="pp"> </span><span class="op">\</span></span>
<span id="cb22-7"><a href="#cb22-7" aria-hidden="true" tabindex="-1"></a><span class="pp">    </span><span class="op">(</span><span class="pp"> </span><span class="op">((</span><span class="va">x_</span><span class="op">)&gt;(</span><span class="va">y_</span><span class="op">))</span><span class="pp"> </span><span class="op">?</span><span class="pp"> </span><span class="op">(</span><span class="va">x_</span><span class="op">)</span><span class="pp"> </span><span class="op">:</span><span class="pp"> </span><span class="op">(</span><span class="va">y_</span><span class="op">)</span><span class="pp"> </span><span class="op">)</span></span>
<span id="cb22-8"><a href="#cb22-8" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb22-9"><a href="#cb22-9" aria-hidden="true" tabindex="-1"></a><span class="pp">#define _CLAMP</span><span class="op">(</span><span class="va">x_</span><span class="op">,</span><span class="pp"> </span><span class="va">min_</span><span class="op">,</span><span class="pp"> </span><span class="va">max_</span><span class="op">,</span><span class="pp"> </span><span class="va">type_</span><span class="op">)</span><span class="pp"> </span><span class="op">\</span></span>
<span id="cb22-10"><a href="#cb22-10" aria-hidden="true" tabindex="-1"></a><span class="pp">    </span><span class="op">(</span><span class="va">type_</span><span class="op">)</span><span class="pp"> </span><span class="op">(</span>_MIN<span class="op">(</span>_MAX<span class="op">(</span><span class="va">x_</span><span class="op">,</span><span class="pp"> </span><span class="va">min_</span><span class="op">),</span><span class="pp"> </span><span class="va">max_</span><span class="op">))</span></span>
<span id="cb22-11"><a href="#cb22-11" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb22-12"><a href="#cb22-12" aria-hidden="true" tabindex="-1"></a><span class="pp">#define _ROUND</span><span class="op">(</span><span class="va">v_</span><span class="op">,</span><span class="pp"> </span><span class="va">type_</span><span class="op">)</span><span class="pp"> </span><span class="op">\</span></span>
<span id="cb22-13"><a href="#cb22-13" aria-hidden="true" tabindex="-1"></a><span class="pp">    </span><span class="op">(</span><span class="va">type_</span><span class="op">)</span><span class="pp"> </span><span class="op">(</span><span class="pp"> </span><span class="op">((</span><span class="va">v_</span><span class="op">)&lt;</span><span class="dv">0</span><span class="op">)</span><span class="pp"> </span><span class="op">?</span><span class="pp"> </span><span class="op">((</span><span class="va">v_</span><span class="op">)-</span><span class="fl">0.5</span><span class="bu">f</span><span class="op">)</span><span class="pp"> </span><span class="op">:</span><span class="pp"> </span><span class="op">((</span><span class="va">v_</span><span class="op">)+</span><span class="fl">0.5</span><span class="bu">f</span><span class="op">)</span><span class="pp"> </span><span class="op">)</span></span>
<span id="cb22-14"><a href="#cb22-14" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb22-15"><a href="#cb22-15" aria-hidden="true" tabindex="-1"></a><span class="at">const</span> ai_buffer <span class="op">*</span>get_input_desc<span class="op">(</span><span class="dt">int</span> idx<span class="op">)</span></span>
<span id="cb22-16"><a href="#cb22-16" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb22-17"><a href="#cb22-17" aria-hidden="true" tabindex="-1"></a>  ai_network_report report<span class="op">;</span></span>
<span id="cb22-18"><a href="#cb22-18" aria-hidden="true" tabindex="-1"></a>  ai_network_get_info<span class="op">(</span>network<span class="op">,</span> <span class="op">&amp;</span>report<span class="op">);</span></span>
<span id="cb22-19"><a href="#cb22-19" aria-hidden="true" tabindex="-1"></a>  <span class="cf">return</span> <span class="op">&amp;</span>report<span class="op">.</span>inputs<span class="op">[</span>idx<span class="op">];</span></span>
<span id="cb22-20"><a href="#cb22-20" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span>
<span id="cb22-21"><a href="#cb22-21" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb22-22"><a href="#cb22-22" aria-hidden="true" tabindex="-1"></a>ai_float input_f<span class="op">[</span>AI_<span class="op">&lt;</span>NAME<span class="op">&gt;</span>_IN_1_SIZE<span class="op">];</span></span>
<span id="cb22-23"><a href="#cb22-23" aria-hidden="true" tabindex="-1"></a>ai_i8 input_q<span class="op">[</span>AI_<span class="op">&lt;</span>NAME<span class="op">&gt;</span>_IN_1_SIZE<span class="op">];</span> <span class="co">/* or ai_u8 */</span></span>
<span id="cb22-24"><a href="#cb22-24" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb22-25"><a href="#cb22-25" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb22-26"><a href="#cb22-26" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_buffer <span class="op">*</span>input <span class="op">=</span> get_input_desc<span class="op">(</span><span class="dv">0</span><span class="op">);</span></span>
<span id="cb22-27"><a href="#cb22-27" aria-hidden="true" tabindex="-1"></a>  ai_float scale  <span class="op">=</span> AI_BUFFER_META_INFO_INTQ_GET_SCALE<span class="op">(</span>input<span class="op">-&gt;</span>meta_info<span class="op">,</span> <span class="dv">0</span><span class="op">);</span></span>
<span id="cb22-28"><a href="#cb22-28" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_i32 zp <span class="op">=</span> AI_BUFFER_META_INFO_INTQ_GET_ZEROPOINT<span class="op">(</span>input<span class="op">-&gt;</span>meta_info<span class="op">,</span> <span class="dv">0</span><span class="op">);</span></span>
<span id="cb22-29"><a href="#cb22-29" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb22-30"><a href="#cb22-30" aria-hidden="true" tabindex="-1"></a>  scale <span class="op">=</span> <span class="fl">1.0</span><span class="bu">f</span> <span class="op">/</span> scale<span class="op">;</span></span>
<span id="cb22-31"><a href="#cb22-31" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb22-32"><a href="#cb22-32" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Loop */</span></span>
<span id="cb22-33"><a href="#cb22-33" aria-hidden="true" tabindex="-1"></a>  <span class="cf">for</span> <span class="op">(</span><span class="dt">int</span> i<span class="op">=</span><span class="dv">0</span><span class="op">;</span> i <span class="op">&lt;</span> AI_<span class="op">&lt;</span>NAME<span class="op">&gt;</span>_IN_1_SIZE<span class="op">;</span> i<span class="op">++)</span></span>
<span id="cb22-34"><a href="#cb22-34" aria-hidden="true" tabindex="-1"></a>  <span class="op">{</span></span>
<span id="cb22-35"><a href="#cb22-35" aria-hidden="true" tabindex="-1"></a>    <span class="at">const</span> ai_i32 <span class="va">tmp_</span> <span class="op">=</span> zp <span class="op">+</span> _ROUND<span class="op">(</span>input_f<span class="op">[</span>i<span class="op">]</span> <span class="op">*</span> scale<span class="op">,</span> ai_i32<span class="op">);</span></span>
<span id="cb22-36"><a href="#cb22-36" aria-hidden="true" tabindex="-1"></a>    <span class="co">/* for ai_u8 */</span></span>
<span id="cb22-37"><a href="#cb22-37" aria-hidden="true" tabindex="-1"></a>    input_q<span class="op">[</span>i<span class="op">]</span> <span class="op">=</span> _CLAMP<span class="op">(</span><span class="va">tmp_</span><span class="op">,</span> <span class="dv">0</span><span class="op">,</span> <span class="dv">255</span><span class="op">,</span> ai_u8<span class="op">);</span></span>
<span id="cb22-38"><a href="#cb22-38" aria-hidden="true" tabindex="-1"></a>    <span class="co">/* for ai_i8 */</span></span>
<span id="cb22-39"><a href="#cb22-39" aria-hidden="true" tabindex="-1"></a>    input_q<span class="op">[</span>i<span class="op">]</span> <span class="op">=</span> _CLAMP<span class="op">(</span><span class="va">tmp_</span><span class="op">,</span> <span class="op">-</span><span class="dv">128</span><span class="op">,</span> <span class="dv">127</span><span class="op">,</span> ai_i8<span class="op">);</span></span>
<span id="cb22-40"><a href="#cb22-40" aria-hidden="true" tabindex="-1"></a>  <span class="op">}</span></span>
<span id="cb22-41"><a href="#cb22-41" aria-hidden="true" tabindex="-1"></a>  <span class="op">...</span></span>
<span id="cb22-42"><a href="#cb22-42" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span></code></pre></div>
</section>
<section id="b-to-float32-data-type-conversion" class="level2">
<h2>8b to float32 data type conversion</h2>
<p>The following code snippet illustrates the integer (<code>ai_i8</code>/<code>ai_u8</code>) to float (<code>ai_float</code>) format conversion. The output buffer is used as the source buffer.</p>
<div class="sourceCode" id="cb23"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb23-1"><a href="#cb23-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&lt;network.h&gt;</span></span>
<span id="cb23-2"><a href="#cb23-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb23-3"><a href="#cb23-3" aria-hidden="true" tabindex="-1"></a>ai_i8 output_q<span class="op">[</span>AI_<span class="op">&lt;</span>NAME<span class="op">&gt;</span>_OUT_1_SIZE<span class="op">];</span> <span class="co">/* or ai_u8 */</span></span>
<span id="cb23-4"><a href="#cb23-4" aria-hidden="true" tabindex="-1"></a>ai_float output_f<span class="op">[</span>AI_<span class="op">&lt;</span>NAME<span class="op">&gt;</span>_OUT_1_SIZE<span class="op">];</span></span>
<span id="cb23-5"><a href="#cb23-5" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb23-6"><a href="#cb23-6" aria-hidden="true" tabindex="-1"></a><span class="at">const</span> ai_buffer <span class="op">*</span>get_output_desc<span class="op">(</span><span class="dt">int</span> idx<span class="op">)</span></span>
<span id="cb23-7"><a href="#cb23-7" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb23-8"><a href="#cb23-8" aria-hidden="true" tabindex="-1"></a>  ai_network_report report<span class="op">;</span></span>
<span id="cb23-9"><a href="#cb23-9" aria-hidden="true" tabindex="-1"></a>  ai_network_get_info<span class="op">(</span>network<span class="op">,</span> <span class="op">&amp;</span>report<span class="op">);</span></span>
<span id="cb23-10"><a href="#cb23-10" aria-hidden="true" tabindex="-1"></a>  <span class="cf">return</span> <span class="op">&amp;</span>report<span class="op">.</span>outputs<span class="op">[</span>idx<span class="op">];</span></span>
<span id="cb23-11"><a href="#cb23-11" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span>
<span id="cb23-12"><a href="#cb23-12" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb23-13"><a href="#cb23-13" aria-hidden="true" tabindex="-1"></a><span class="op">{</span></span>
<span id="cb23-14"><a href="#cb23-14" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_buffer <span class="op">*</span>output <span class="op">=</span> get_output_desc<span class="op">(</span><span class="dv">0</span><span class="op">);</span></span>
<span id="cb23-15"><a href="#cb23-15" aria-hidden="true" tabindex="-1"></a>  ai_float scale  <span class="op">=</span> AI_BUFFER_META_INFO_INTQ_GET_SCALE<span class="op">(</span>output<span class="op">-&gt;</span>meta_info<span class="op">,</span> <span class="dv">0</span><span class="op">);</span></span>
<span id="cb23-16"><a href="#cb23-16" aria-hidden="true" tabindex="-1"></a>  <span class="at">const</span> ai_i32 zp <span class="op">=</span> AI_BUFFER_META_INFO_INTQ_GET_ZEROPOINT<span class="op">(</span>output<span class="op">-&gt;</span>meta_info<span class="op">,</span> <span class="dv">0</span><span class="op">);</span></span>
<span id="cb23-17"><a href="#cb23-17" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb23-18"><a href="#cb23-18" aria-hidden="true" tabindex="-1"></a>  <span class="co">/* Loop */</span></span>
<span id="cb23-19"><a href="#cb23-19" aria-hidden="true" tabindex="-1"></a>  <span class="cf">for</span> <span class="op">(</span><span class="dt">int</span> i<span class="op">=</span><span class="dv">0</span><span class="op">;</span> i<span class="op">&lt;</span>AI_<span class="op">&lt;</span>NAME<span class="op">&gt;</span>_OUT_1_SIZE<span class="op">;</span> i<span class="op">++)</span></span>
<span id="cb23-20"><a href="#cb23-20" aria-hidden="true" tabindex="-1"></a>  <span class="op">{</span></span>
<span id="cb23-21"><a href="#cb23-21" aria-hidden="true" tabindex="-1"></a>    output_f<span class="op">[</span>i<span class="op">]</span> <span class="op">=</span> scale <span class="op">*</span> <span class="op">((</span>ai_float<span class="op">)(</span>output_q<span class="op">[</span>i<span class="op">])</span> <span class="op">-</span> zp<span class="op">);</span></span>
<span id="cb23-22"><a href="#cb23-22" aria-hidden="true" tabindex="-1"></a>  <span class="op">}</span></span>
<span id="cb23-23"><a href="#cb23-23" aria-hidden="true" tabindex="-1"></a>  <span class="op">...</span></span>
<span id="cb23-24"><a href="#cb23-24" aria-hidden="true" tabindex="-1"></a><span class="op">}</span></span></code></pre></div>
</section>
<section id="c-memory-layouts" class="level2">
<h2>C-memory layouts</h2>
<p><strong>1d-array tensor</strong></p>
<p>For a 1-D tensor, a standard C-array type with the following memory layout is expected to handle the input and output tensors.</p>
<div class="sourceCode" id="cb24"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb24-1"><a href="#cb24-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb24-2"><a href="#cb24-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb24-3"><a href="#cb24-3" aria-hidden="true" tabindex="-1"></a><span class="pp">#define xx_SIZE  </span>VAL<span class="pp">  </span><span class="co">/* = H * W * C = C */</span></span>
<span id="cb24-4"><a href="#cb24-4" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb24-5"><a href="#cb24-5" aria-hidden="true" tabindex="-1"></a>ai_float xx_data<span class="op">[</span>xx_SIZE<span class="op">];</span>     <span class="co">/* n_batch = 1, height = 1,</span></span>
<span id="cb24-6"><a href="#cb24-6" aria-hidden="true" tabindex="-1"></a><span class="co">                                  width = 1, channels = C */</span></span>
<span id="cb24-7"><a href="#cb24-7" aria-hidden="true" tabindex="-1"></a>ai_float xx_data<span class="op">[</span>B <span class="op">*</span> xx_SIZE<span class="op">];</span> <span class="co">/* n_batch = B, height = 1,</span></span>
<span id="cb24-8"><a href="#cb24-8" aria-hidden="true" tabindex="-1"></a><span class="co">                                  width = 1, channels = C */</span></span>
<span id="cb24-9"><a href="#cb24-9" aria-hidden="true" tabindex="-1"></a>ai_float xx_data<span class="op">[</span>B<span class="op">][</span>xx_SIZE<span class="op">];</span></span></code></pre></div>
<div id="fig:tensor_1d" class="fignos">
<figure>
<img src="" property="center" style="width:75.0%" alt="Figure 5: 1-D Tensor data layout" /><figcaption aria-hidden="true"><span>Figure 5:</span> 1-D Tensor data layout</figcaption>
</figure>
</div>
<p><strong>2d-array tensor</strong></p>
<p>For a 2-D tensor, a standard C array-of-arrays memory arrangement is used to handle the input and output tensors. The two dimensions are mapped to the first two dimensions of the tensor in the original toolbox representation: e.g. H and C in Keras / TensorFlow.</p>
<div class="sourceCode" id="cb25"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb25-1"><a href="#cb25-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb25-2"><a href="#cb25-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb25-3"><a href="#cb25-3" aria-hidden="true" tabindex="-1"></a><span class="pp">#define xx_SIZE  </span>VAL<span class="pp">  </span><span class="co">/* = H * W * C = H * C */</span></span>
<span id="cb25-4"><a href="#cb25-4" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb25-5"><a href="#cb25-5" aria-hidden="true" tabindex="-1"></a>ai_float xx_data<span class="op">[</span>xx_SIZE<span class="op">];</span>  <span class="co">/* n_batch = 1, height = H,</span></span>
<span id="cb25-6"><a href="#cb25-6" aria-hidden="true" tabindex="-1"></a><span class="co">                               width = 1, channels = C */</span></span>
<span id="cb25-7"><a href="#cb25-7" aria-hidden="true" tabindex="-1"></a>ai_float xx_data<span class="op">[</span>H<span class="op">][</span>C<span class="op">];</span></span>
<span id="cb25-8"><a href="#cb25-8" aria-hidden="true" tabindex="-1"></a>ai_float xx_data<span class="op">[</span>B <span class="op">*</span> xx_SIZE<span class="op">];</span> <span class="co">/* n_batch = B, height = H,</span></span>
<span id="cb25-9"><a href="#cb25-9" aria-hidden="true" tabindex="-1"></a><span class="co">                                  width = 1, channels = C */</span></span>
<span id="cb25-10"><a href="#cb25-10" aria-hidden="true" tabindex="-1"></a>ai_float xx_data<span class="op">[</span>B<span class="op">][</span>H<span class="op">][</span>C<span class="op">];</span></span></code></pre></div>
<div id="fig:tensor_2d" class="fignos">
<figure>
<img src="" property="center" style="width:75.0%" alt="Figure 6: 2-D Tensor data layout" /><figcaption aria-hidden="true"><span>Figure 6:</span> 2-D Tensor data layout</figcaption>
</figure>
</div>
<p><strong>3d-array tensor</strong></p>
<p>For a 3-D tensor, a standard C array-of-arrays-of-arrays memory arrangement is used to handle the input and output tensors.</p>
<div class="sourceCode" id="cb26"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="cb26-1"><a href="#cb26-1" aria-hidden="true" tabindex="-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb26-2"><a href="#cb26-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb26-3"><a href="#cb26-3" aria-hidden="true" tabindex="-1"></a><span class="pp">#define xx_SIZE  </span>VAL<span class="pp">  </span><span class="co">/* = H * W * C */</span></span>
<span id="cb26-4"><a href="#cb26-4" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb26-5"><a href="#cb26-5" aria-hidden="true" tabindex="-1"></a>ai_float xx_data<span class="op">[</span>xx_SIZE<span class="op">];</span>  <span class="co">/* n_batch = 1, height = H,</span></span>
<span id="cb26-6"><a href="#cb26-6" aria-hidden="true" tabindex="-1"></a><span class="co">                               width = W, channels = C */</span></span>
<span id="cb26-7"><a href="#cb26-7" aria-hidden="true" tabindex="-1"></a>ai_float xx_data<span class="op">[</span>H<span class="op">][</span>W<span class="op">][</span>C<span class="op">];</span></span>
<span id="cb26-8"><a href="#cb26-8" aria-hidden="true" tabindex="-1"></a>ai_float xx_data<span class="op">[</span>B <span class="op">*</span> xx_SIZE<span class="op">];</span> <span class="co">/* n_batch = B, height = H,</span></span>
<span id="cb26-9"><a href="#cb26-9" aria-hidden="true" tabindex="-1"></a><span class="co">                                  width = W, channels = C */</span></span>
<span id="cb26-10"><a href="#cb26-10" aria-hidden="true" tabindex="-1"></a>ai_float xx_data<span class="op">[</span>B<span class="op">][</span>H<span class="op">][</span>W<span class="op">][</span>C<span class="op">];</span></span></code></pre></div>
<div id="fig:tensor_3d" class="fignos">
<figure>
<img src="" property="center" style="width:95.0%" alt="Figure 7: 3-D Tensor data layout" /><figcaption aria-hidden="true"><span>Figure 7:</span> 3-D Tensor data layout</figcaption>
</figure>
</div>
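<p>The interleaved layouts shown above all follow the same addressing rule: in a contiguous batch/height/width/channel (BHWC) buffer, element <code>(b, h, w, c)</code> lives at flat offset <code>((b*H + h)*W + w)*C + c</code>. A minimal helper illustrating this rule (the dimension names match the figures; the function itself is not part of the generated API):</p>

```c
#include <stddef.h>

/* Flat offset of element (b, h, w, c) in a contiguous BHWC buffer
   with dimensions H, W, C (batch stride = H*W*C, channels innermost).
   Illustrative helper, not part of the generated network API. */
static size_t bhwc_offset(size_t b, size_t h, size_t w, size_t c,
                          size_t H, size_t W, size_t C)
{
  return ((b * H + h) * W + w) * C + c;
}
```

<p>For example, with H = 2, W = 3, C = 4, the element (0, 1, 2, 3) is the last element of the first batch, at offset 23, and each additional batch adds a stride of H*W*C = 24.</p>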
<!-- External ST resources/links -->
<!-- Internal resources/links -->
<!-- External resources/links -->
<!-- Cross references -->
</section>
</section>
<section id="references" class="level1">
<h1>References</h1>
<table>
<colgroup>
<col style="width: 18%" />
<col style="width: 81%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">ref</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">[DS]</td>
<td style="text-align: left;">X-CUBE-AI - AI expansion pack for STM32CubeMX <a href="https://www.st.com/en/embedded-software/x-cube-ai.html">https://www.st.com/en/embedded-software/x-cube-ai.html</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[UM]</td>
<td style="text-align: left;">User manual - Getting started with X-CUBE-AI Expansion Package for Artificial Intelligence (AI) <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">(pdf)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[CLI]</td>
<td style="text-align: left;">stm32ai - Command Line Interface <a href="command_line_interface.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[API]</td>
<td style="text-align: left;">Embedded inference client API <a href="embedded_client_api.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[METRIC]</td>
<td style="text-align: left;">Evaluation report and metrics <a href="evaluation_metrics.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[TFL]</td>
<td style="text-align: left;">TensorFlow Lite toolbox <a href="supported_ops_tflite.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[KERAS]</td>
<td style="text-align: left;">Keras toolbox <a href="supported_ops_keras.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[ONNX]</td>
<td style="text-align: left;">ONNX toolbox <a href="supported_ops_onnx.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[FAQS]</td>
<td style="text-align: left;">FAQ <a href="faq_generic.html">generic</a>, <a href="faq_validation.html">validation</a>, <a href="faq_quantization.html">quantization</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[QUANT]</td>
<td style="text-align: left;">Quantization and quantize command <a href="quantization.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[RELOC]</td>
<td style="text-align: left;">Relocatable binary network support <a href="relocatable.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[CUST]</td>
<td style="text-align: left;">Support of the Keras Lambda/custom layers <a href="keras_lambda_custom.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[TFLM]</td>
<td style="text-align: left;">TensorFlow Lite for Microcontroller support <a href="tflite_micro_support.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[INST]</td>
<td style="text-align: left;">Setting the environment <a href="setting_env.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[OBS]</td>
<td style="text-align: left;">Platform Observer API <a href="api_platform_observer.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[C-RUN]</td>
<td style="text-align: left;">Executing locally a generated c-model <a href="how_to_run_a_model_locally.html">(link)</a></td>
</tr>
</tbody>
</table>
</section>



<section class="st_footer">

<h1> <br> </h1>

<p style="font-family:verdana; text-align:left;">
 Embedded Documentation 

	- <b> Embedded Inference Client API </b>
			<br> X-CUBE-AI Expansion Package
	 
			<br> r4.0
		 - AI PLATFORM r7.0.0
			 (Embedded Inference Client API 1.1.0) 
			 - Command Line Interface r1.5.1 
		
	
</p>

<img src="" title="ST logo" align="right" height="100" />

<div class="st_notice">
Information in this document is provided solely in connection with ST products.
The contents of this document are subject to change without prior notice.
<br>
© Copyright STMicroelectronics 2020. All rights reserved. <a href="http://www.st.com">www.st.com</a>
</div>

<hr size="1" />
</section>


</article>
</body>

</html>
