<!DOCTYPE html>
<!--

	Modified template for STM32CubeMX.AI purpose

	d0.1: 	jean-michel.delorme@st.com
			add ST logo and ST footer

	d2.0: 	jean-michel.delorme@st.com
			add sidenav support

	d2.1: 	jean-michel.delorme@st.com
			clean-up + optional ai_logo/ai meta data
			
==============================================================================
           "GitHub HTML5 Pandoc Template" v2.1 — by Tristano Ajmone           
==============================================================================
Copyright © Tristano Ajmone, 2017, MIT License (MIT). Project's home:

- https://github.com/tajmone/pandoc-goodies

The CSS in this template reuses source code taken from the following projects:

- GitHub Markdown CSS: Copyright © Sindre Sorhus, MIT License (MIT):
  https://github.com/sindresorhus/github-markdown-css

- Primer CSS: Copyright © 2016-2017 GitHub Inc., MIT License (MIT):
  http://primercss.io/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The MIT License 

Copyright (c) Tristano Ajmone, 2017 (github.com/tajmone/pandoc-goodies)
Copyright (c) Sindre Sorhus <sindresorhus@gmail.com> (sindresorhus.com)
Copyright (c) 2017 GitHub Inc.

"GitHub Pandoc HTML5 Template" is Copyright (c) Tristano Ajmone, 2017, released
under the MIT License (MIT); it contains readaptations of substantial portions
of the following third party softwares:

(1) "GitHub Markdown CSS", Copyright (c) Sindre Sorhus, MIT License (MIT).
(2) "Primer CSS", Copyright (c) 2016 GitHub Inc., MIT License (MIT).

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
==============================================================================-->
<html>
<head>
  <meta charset="utf-8" />
  <meta name="generator" content="pandoc" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
  <meta name="keywords" content="STM32CubeMX, X-CUBE-AI, Neural Network, Quantization support, CLI, Code Generator, Automatic NN mapping tools" />
  <title>Embedded Inference Client API</title>
  <style type="text/css">
.markdown-body{
	-ms-text-size-adjust:100%;
	-webkit-text-size-adjust:100%;
	color:#24292e;
	font-family:-apple-system,system-ui,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol";
	font-size:16px;
	line-height:1.5;
	word-wrap:break-word;
	box-sizing:border-box;
	min-width:200px;
	max-width:980px;
	margin:0 auto;
	padding:45px;
	}
.markdown-body a{
	color:#0366d6;
	background-color:transparent;
	text-decoration:none;
	-webkit-text-decoration-skip:objects}
.markdown-body a:active,.markdown-body a:hover{
	outline-width:0}
.markdown-body a:hover{
	text-decoration:underline}
.markdown-body a:not([href]){
	color:inherit;text-decoration:none}
.markdown-body strong{font-weight:600}
.markdown-body h1,.markdown-body h2,.markdown-body h3,.markdown-body h4,.markdown-body h5,.markdown-body h6{
	margin-top:24px;
	margin-bottom:16px;
	font-weight:600;
	line-height:1.25}
.markdown-body h1{
	font-size:2em;
	margin:.67em 0;
	padding-bottom:.3em;
	border-bottom:1px solid #eaecef}
.markdown-body h2{
	padding-bottom:.3em;
	font-size:1.5em;
	border-bottom:1px solid #eaecef}
.markdown-body h3{font-size:1.25em}
.markdown-body h4{font-size:1em}
.markdown-body h5{font-size:.875em}
.markdown-body h6{font-size:.85em;color:#6a737d}
.markdown-body img{border-style:none}
.markdown-body svg:not(:root){
	overflow:hidden}
.markdown-body hr{
	box-sizing:content-box;
	height:.25em;
	margin:24px 0;
	padding:0;
	overflow:hidden;
	background-color:#e1e4e8;
	border:0}
.markdown-body hr::before{display:table;content:""}
.markdown-body hr::after{display:table;clear:both;content:""}
.markdown-body input{margin:0;overflow:visible;font:inherit;font-family:inherit;font-size:inherit;line-height:inherit}
.markdown-body [type=checkbox]{box-sizing:border-box;padding:0}
.markdown-body *{box-sizing:border-box}.markdown-body blockquote{margin:0}
.markdown-body ol,.markdown-body ul{padding-left:2em}
.markdown-body ol ol,.markdown-body ul ol{list-style-type:lower-roman}
.markdown-body ol ol,.markdown-body ol ul,.markdown-body ul ol,.markdown-body ul ul{margin-top:0;margin-bottom:0}
.markdown-body ol ol ol,.markdown-body ol ul ol,.markdown-body ul ol ol,.markdown-body ul ul ol{list-style-type:lower-alpha}
.markdown-body li>p{margin-top:16px}
.markdown-body li+li{margin-top:.25em}
.markdown-body dd{margin-left:0}
.markdown-body dl{padding:0}
.markdown-body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:600}
.markdown-body dl dd{padding:0 16px;margin-bottom:16px}
.markdown-body code{font-family:SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace}
.markdown-body pre{font:12px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;word-wrap:normal}
.markdown-body blockquote,.markdown-body dl,.markdown-body ol,.markdown-body p,.markdown-body pre,.markdown-body table,.markdown-body ul{margin-top:0;margin-bottom:16px}
.markdown-body blockquote{padding:0 1em;color:#6a737d;border-left:.25em solid #dfe2e5}
.markdown-body blockquote>:first-child{margin-top:0}
.markdown-body blockquote>:last-child{margin-bottom:0}
.markdown-body table{display:block;width:100%;overflow:auto;border-spacing:0;border-collapse:collapse}
.markdown-body table th{font-weight:600}
.markdown-body table td,.markdown-body table th{padding:6px 13px;border:1px solid #dfe2e5}
.markdown-body table tr{background-color:#fff;border-top:1px solid #c6cbd1}
.markdown-body table tr:nth-child(2n){background-color:#f6f8fa}
.markdown-body img{max-width:100%;box-sizing:content-box;background-color:#fff}
.markdown-body code{padding:.2em 0;margin:0;font-size:85%;background-color:rgba(27,31,35,.05);border-radius:3px}
.markdown-body code::after,.markdown-body code::before{letter-spacing:-.2em;content:"\00a0"}
.markdown-body pre>code{padding:0;margin:0;font-size:100%;word-break:normal;white-space:pre;background:0 0;border:0}
.markdown-body .highlight{margin-bottom:16px}
.markdown-body .highlight pre{margin-bottom:0;word-break:normal}
.markdown-body .highlight pre,.markdown-body pre{padding:16px;overflow:auto;font-size:85%;line-height:1.45;background-color:#f6f8fa;border-radius:3px}
.markdown-body pre code{display:inline;max-width:auto;padding:0;margin:0;overflow:visible;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}
.markdown-body pre code::after,.markdown-body pre code::before{content:normal}
.markdown-body .full-commit .btn-outline:not(:disabled):hover{color:#005cc5;border-color:#005cc5}
.markdown-body kbd{box-shadow:inset 0 -1px 0 #959da5;display:inline-block;padding:3px 5px;font:11px/10px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;color:#444d56;vertical-align:middle;background-color:#fcfcfc;border:1px solid #c6cbd1;border-bottom-color:#959da5;border-radius:3px;box-shadow:inset 0 -1px 0 #959da5}
.markdown-body :checked+.radio-label{position:relative;z-index:1;border-color:#0366d6}
.markdown-body .task-list-item{list-style-type:none}
.markdown-body .task-list-item+.task-list-item{margin-top:3px}
.markdown-body .task-list-item input{margin:0 .2em .25em -1.6em;vertical-align:middle}
.markdown-body::before{display:table;content:""}
.markdown-body::after{display:table;clear:both;content:""}
.markdown-body>:first-child{margin-top:0!important}
.markdown-body>:last-child{margin-bottom:0!important}
.Alert,.Error,.Note,.Success,.Warning{padding:11px;margin-bottom:24px;border-style:solid;border-width:1px;border-radius:4px}
.Alert p,.Error p,.Note p,.Success p,.Warning p{margin-top:0}
.Alert p:last-child,.Error p:last-child,.Note p:last-child,.Success p:last-child,.Warning p:last-child{margin-bottom:0}
.Alert{color:#246;background-color:#e2eef9;border-color:#bac6d3}
.Warning{color:#4c4a42;background-color:#fff9ea;border-color:#dfd8c2}
.Error{color:#911;background-color:#fcdede;border-color:#d2b2b2}
.Success{color:#22662c;background-color:#e2f9e5;border-color:#bad3be}
.Note{color:#2f363d;background-color:#f6f8fa;border-color:#d5d8da}
.Alert h1,.Alert h2,.Alert h3,.Alert h4,.Alert h5,.Alert h6{color:#246;margin-bottom:0}
.Warning h1,.Warning h2,.Warning h3,.Warning h4,.Warning h5,.Warning h6{color:#4c4a42;margin-bottom:0}
.Error h1,.Error h2,.Error h3,.Error h4,.Error h5,.Error h6{color:#911;margin-bottom:0}
.Success h1,.Success h2,.Success h3,.Success h4,.Success h5,.Success h6{color:#22662c;margin-bottom:0}
.Note h1,.Note h2,.Note h3,.Note h4,.Note h5,.Note h6{color:#2f363d;margin-bottom:0}
.Alert h1:first-child,.Alert h2:first-child,.Alert h3:first-child,.Alert h4:first-child,.Alert h5:first-child,.Alert h6:first-child,.Error h1:first-child,.Error h2:first-child,.Error h3:first-child,.Error h4:first-child,.Error h5:first-child,.Error h6:first-child,.Note h1:first-child,.Note h2:first-child,.Note h3:first-child,.Note h4:first-child,.Note h5:first-child,.Note h6:first-child,.Success h1:first-child,.Success h2:first-child,.Success h3:first-child,.Success h4:first-child,.Success h5:first-child,.Success h6:first-child,.Warning h1:first-child,.Warning h2:first-child,.Warning h3:first-child,.Warning h4:first-child,.Warning h5:first-child,.Warning h6:first-child{margin-top:0}
h1.title,p.subtitle{text-align:center}
h1.title.followed-by-subtitle{margin-bottom:0}
p.subtitle{font-size:1.5em;font-weight:600;line-height:1.25;margin-top:0;margin-bottom:16px;padding-bottom:.3em}
div.line-block{white-space:pre-line}
  </style>
  <style type="text/css">code{white-space: pre;}</style>
  <style type="text/css">
	code.sourceCode > span { display: inline-block; line-height: 1.25; }
code.sourceCode > span { color: inherit; text-decoration: inherit; }
code.sourceCode > span:empty { height: 1.2em; }
.sourceCode { overflow: visible; }
code.sourceCode { white-space: pre; position: relative; }
div.sourceCode { margin: 1em 0; }
pre.sourceCode { margin: 0; }
@media screen {
div.sourceCode { overflow: auto; }
}
@media print {
code.sourceCode { white-space: pre-wrap; }
code.sourceCode > span { text-indent: -5em; padding-left: 5em; }
}
pre.numberSource code
  { counter-reset: source-line 0; }
pre.numberSource code > span
  { position: relative; left: -4em; counter-increment: source-line; }
pre.numberSource code > span > a:first-child::before
  { content: counter(source-line);
    position: relative; left: -1em; text-align: right; vertical-align: baseline;
    border: none; display: inline-block;
    -webkit-touch-callout: none; -webkit-user-select: none;
    -khtml-user-select: none; -moz-user-select: none;
    -ms-user-select: none; user-select: none;
    padding: 0 4px; width: 4em;
    color: #aaaaaa;
  }
pre.numberSource { margin-left: 3em; border-left: 1px solid #aaaaaa;  padding-left: 4px; }
div.sourceCode
  {   }
@media screen {
code.sourceCode > span > a:first-child::before { text-decoration: underline; }
}
code span.al { color: #ff0000; font-weight: bold; } /* Alert */
code span.an { color: #60a0b0; font-weight: bold; font-style: italic; } /* Annotation */
code span.at { color: #7d9029; } /* Attribute */
code span.bn { color: #40a070; } /* BaseN */
code span.bu { } /* BuiltIn */
code span.cf { color: #007020; font-weight: bold; } /* ControlFlow */
code span.ch { color: #4070a0; } /* Char */
code span.cn { color: #880000; } /* Constant */
code span.co { color: #60a0b0; font-style: italic; } /* Comment */
code span.cv { color: #60a0b0; font-weight: bold; font-style: italic; } /* CommentVar */
code span.do { color: #ba2121; font-style: italic; } /* Documentation */
code span.dt { color: #902000; } /* DataType */
code span.dv { color: #40a070; } /* DecVal */
code span.er { color: #ff0000; font-weight: bold; } /* Error */
code span.ex { } /* Extension */
code span.fl { color: #40a070; } /* Float */
code span.fu { color: #06287e; } /* Function */
code span.im { } /* Import */
code span.in { color: #60a0b0; font-weight: bold; font-style: italic; } /* Information */
code span.kw { color: #007020; font-weight: bold; } /* Keyword */
code span.op { color: #666666; } /* Operator */
code span.ot { color: #007020; } /* Other */
code span.pp { color: #bc7a00; } /* Preprocessor */
code span.sc { color: #4070a0; } /* SpecialChar */
code span.ss { color: #bb6688; } /* SpecialString */
code span.st { color: #4070a0; } /* String */
code span.va { color: #19177c; } /* Variable */
code span.vs { color: #4070a0; } /* VerbatimString */
code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } /* Warning */
  </style>
  <style type="text/css">:root { --main-hx-color: rgb(0,32,88); --sidenav-font-size: 90%;}html {}* {xbox-sizing: border-box;}.st_header h1.title,.st_header p.subtitle {text-align: left;}.st_header h1.title {color: var(--main-hx-color)}.st_header p.subtitle {color: var(--main-hx-color)}.st_header h1.title.followed-by-subtitle {margin-bottom:5px;}.st_header p.revision {display: inline-block;width:70%;}.st_header div.author {font-style: italic;}.st_header div.summary {border-top: solid 1px #C0C0C0;background: #ECECEC;padding: 5px;}.st_footer img {float: right;}.markdown-body #header-section-number {font-size:120%;}.markdown-body h1 {border-bottom:1px solid #74767a;padding-bottom: 2px;padding-top: 10px;}.markdown-body h2 {padding-bottom: 5px;padding-top: 10px;}.markdown-body h2 code {background-color: rgb(255, 255, 255);}#func.sourceCode {border-left-style: solid;border-color: rgb(0, 32, 82);border-color: rgb(255, 244, 191);border-width: 8px;padding:0px;}pre > code {border: solid 1px blue;font-size:60%;}codeXX {border: solid 1px blue;font-size:60%;}#func.sourceXXCode::before {content: "Synopsis";padding-left:10px;font-weight: bold;}figure {padding:0px;margin-left:5px;margin-right:5px;margin-left: auto;margin-right: auto;}img[data-property="center"] {display: block;margin-top: 10px;margin-left: auto;margin-right: auto;padding: 10px;}figcaption {text-align:left;  border-top: 1px dotted #888;padding-bottom: 20px;margin-top: 10px;}section.st_footer {font-size:80%;}div.stnotice {width:80%;}h1 code, h2 code {font-size:120%;}	.markdown-body table {width: 100%;margin-left:auto;margin-right:auto;}.markdown-body img {border-radius: 4px;padding: 5px;display: block;margin-left: auto;margin-right: auto;width: auto;}.markdown-body .st_header img, .markdown-body {border: none;border-radius: none;padding: 5px;display: block;margin-left: auto;margin-right: auto;width: auto;box-shadow: none;}.markdown-body {margin: 10px;padding: 10px;width: auto;font-family: "Arial", sans-serif;color: 
#03234B;}.markdown-body h1, .markdown-body h2, .markdown-body h3 {   color: var(--main-hx-color)}.markdown-body:hover {}.markdown-body .contents {}.markdown-body .toc-title {}.markdown-body .contents li {list-style-type: none;}.markdown-body .contents ul {padding-left: 10px;}.markdown-body .contents a {color: #3CB4E6; }.sidenav {font-family: "Arial", sans-serif;font-family: segoe ui, verdona;color: #3CB4E6; color: #03234B; color: var(--main-hx-color);height: 100%;position: fixed;z-index: 1;top: 0;left: 0;margin-right: 10px;margin-left: 10px; overflow-x: hidden;}hr.new1 {border-width: thin;border-top: 1px solid #3CB4E6; margin-right: 10px;margin-top: -10px;}.sidenav #sidenav_header {margin-top: 10px;border: 1px;}.sidenav #sidenav_header img {float: left;}.sidenav #sidenav_header a {margin-left: 0px;margin-right: 0px;padding-left: 0px;color: #3CB4E6; color: #03234B; color: var(--main-hx-color)}.sidenav #sidenav_header a:hover {background-size: auto;color: #FFD200; }.sidenav #sidenav_header a:active {  }.sidenav > ul {background-color: rgba(57, 169, 220, 0.05);border-radius: 10px;padding-bottom: 10px;padding-top: 10px;padding-right: 10px;margin-right: 10px;}.sidenav a {padding: 2px 2px;text-decoration: none;font-size: var(--sidenav-font-size);  display:table;}.sidenav > ul > li,.sidenav > ul > li > ul > li { padding-right: 5px;padding-left: 5px;}.sidenav > ul > li > a { color: #03234B;  color: var(--main-hx-color)}.sidenav > ul > li > ul > li > a { color: #03234B; color: #3CB4E6; color: #03234B; font-weight: lighter;padding-left: 10px;}.sidenav > ul > li > ul > li > ul > li > a { display: None;}.sidenav li {list-style-type: none;}.sidenav ul {padding-left: 0px;}.sidenav > ul > li > a:hover,.sidenav > ul > li > ul > li > a:hover {background-color: rgba(70, 70, 80, 0.1); background-clip: border-box;margin-left: -10px;padding-left: 10px;}.sidenav > ul > li > a:hover {padding-right: 15px;width: 230px;	}.sidenav > ul > li > ul > li > a:hover {padding-right: 10px;width: 
230px;	}.sidenav > ul > li > a:active { color: #FFD200; }.sidenav code {}.sidenav {width: 280px;}#sidenav {margin-left: 300px;display:block;}.markdown-body .print-contents {visibility:hidden;}.markdown-body .print-toc-title {visibility:hidden;}.markdown-body {max-width: 980px;min-width: 200px;padding: 40px;border-style: solid;border-style: outset;border-color: rgba(104, 167, 238, 0.089);border-radius: 5px;}@media screen and (max-height: 450px) {.sidenav {padding-top: 15px;}.sidenav a {font-size: 18px;}#sidenav {margin-left: 10px; }.sidenav {visibility:hidden;}.markdown-body {margin: 10px;padding: 40px;width: auto;border: 0px;}}@media screen and (max-width: 1024px) {.sidenav {visibility:hidden;}.markdown-body {margin: 10px;padding: 40px;width: auto;border: 0px;}#sidenav {margin-left: 10px;}}@media print {.sidenav {visibility:hidden;}#sidenav {margin-left: 10px;}.markdown-body {margin: 10px;padding: 10px;width:auto;border: 0px;}@page {size: A4;  margin:2cm;padding:2cm;margin-top: 1cm;padding-bottom: 1cm;}* {xbox-sizing: border-box;font-size:90%;}a {font-size: 100%;color: yellow;}.markdown-body article {xbox-sizing: border-box;font-size:100%;}.markdown-body p {windows: 2;orphans: 2;}.pagebreakerafter {page-break-after: always;padding-top:10mm;}.pagebreakbefore {page-break-before: always;}h1, h2, h3, h4 {page-break-after: avoid;}div, code, blockquote, li, span, table, figure {page-break-inside: avoid;}}</style>
  <!--[if lt IE 9]>
    <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
  <![endif]-->




<link href="" rel="shortcut icon">

</head>



<body>

		<div class="sidenav">
		<div id="sidenav_header">
							<img src="" title="STM32CubeMX.AI logo" align="left" height="70" />
										<br />5.2.0<br />
										<a href="#doc_title"> Embedded Inference Client API </a>
					</div>
		<div id="sidenav_header_button">
			 
							<ul>
					<li><p><a id="index" href="index.html">[ Index ]</a></p></li>
				</ul>
						<hr class="new1">
		</div>	

		<ul>
<li><a href="#introduction">Introduction</a><ul>
<li><a href="#ref_quick_usage_code">Getting started</a></li>
<li><a href="#sec_data_placement">AI buffers and privileged placement</a></li>
<li><a href="#sec_alloc_inputs">I/O buffers inside the “activations” buffer</a></li>
<li><a href="#ref_split_weights">Split weights buffer</a></li>
<li><a href="#thread_safety">Re-entrance and thread safety considerations</a></li>
<li><a href="#debug-support">Debug support</a></li>
</ul></li>
<li><a href="#embedded_client_api">Embedded Client API</a><ul>
<li><a href="#ai_name_xxx-c-defines"><code>AI_&lt;NAME&gt;_XXX</code> C-defines</a></li>
<li><a href="#ref_api_create"><code>ai_&lt;name&gt;_create()</code></a></li>
<li><a href="#ref_api_init"><code>ai_&lt;name&gt;_init()</code></a></li>
<li><a href="#ref_api_run"><code>ai_&lt;name&gt;_run()</code></a></li>
<li><a href="#ref_api_get_error"><code>ai_&lt;name&gt;_get_error()</code></a></li>
<li><a href="#ref_api_info"><code>ai_&lt;name&gt;_get_info()</code></a></li>
</ul></li>
<li><a href="#ref_tensor_def">IO buffer/tensor description</a><ul>
<li><a href="#ai_buffer-c-struct">ai_buffer C-struct</a></li>
<li><a href="#ref_data_type">Tensor format</a></li>
<li><a href="#sec_life_cycle">Life-cycle of the IO buffers</a></li>
<li><a href="#sec_base_in_address">Base address of the IO buffers</a></li>
<li><a href="#float-to-integer-format-conversion">Float to integer format conversion</a></li>
<li><a href="#integer-to-float-format-conversion">Integer to float format conversion</a></li>
<li><a href="#float-to-qmn-format-conversion">Float to Qmn format conversion</a></li>
<li><a href="#qmn-to-float-format-conversion">Qmn to float format conversion</a></li>
<li><a href="#ref_1d">1d-array tensor</a></li>
<li><a href="#ref_2d">2d-array tensor</a></li>
<li><a href="#ref_3d">3d-array tensor</a></li>
</ul></li>
<li><a href="#ref_observer_api">Platform Observer API</a><ul>
<li><a href="#ref_cb_ex">User call-back registration for profiling use-case</a></li>
<li><a href="#ref_node_info">Node-per-node inspection</a></li>
<li><a href="#copy-before-run-use-case">Copy-before-run use-case</a></li>
<li><a href="#ref_dump_output">Dumping intermediate output use-case</a></li>
<li><a href="#ref_notify_input">End-of-process input buffer notification use-case</a></li>
<li><a href="#ref_obs_node">“ai_observer_node” definition</a></li>
<li><a href="#ai_platform_observer_node_info"><code>ai_platform_observer_node_info()</code></a></li>
<li><a href="#ai_platform_observer_register"><code>ai_platform_observer_register()</code></a></li>
</ul></li>
<li><a href="#references">References</a></li>
<li><a href="#revision-history">Revision history</a></li>
</ul>
	</div>
	<article id="sidenav" class="markdown-body">
	


<header>
<section class="st_header" id="doc_title">

<div class="himage">
	<img src="" title="STM32CubeMX.AI" align="right" height="70" />
	<img src="" title="STM32" align="right" height="90" />
</div>

<h1 class="title followed-by-subtitle">Embedded Inference Client API</h1>

	<p class="subtitle">X-CUBE-AI Expansion Package</p>

	<div class="revision">r2.2</div>

	<div class="ai_platform">
		AI PLATFORM r5.2.0
					(Embedded Inference Client API 1.1.0)
			</div>
			Command Line Interface r1.4.0
	




</section>
</header>




<section id="introduction" class="level1">
<h1>Introduction</h1>
<p>This article describes the embedded inference client API (<code>ai_&lt;name&gt;_XXX()</code> functions) that a C application layer (the AI client) must use to operate a deployed C-model. All model-specific definitions and implementations can be found in the specialized NN C-files <code>&#39;&lt;name&gt;.c&#39;</code> and <code>&#39;&lt;name&gt;.h&#39;</code> (refer to the <em>“Generated STM32 NN library”</em> section of <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[2]</a>). A <a href="#ref_observer_api">Platform observer API</a> for debug, advanced use-cases and profiling purposes is also described.</p>
<hr />
<div id="fig:id_nn_lib_integration" class="fignos">
<figure>
<img src="" property="center" style="width:95.0%" alt /><figcaption><span>Figure 1:</span> MCU integration model/view and dependencies</figcaption>
</figure>
</div>
<p>The figure above shows that integrating the AI stack into an application stack is simple and straightforward. There are only a few standard dependencies on the run-time (SW and/or HW): only the <a href="#ref_api_create">STM32 CRC IP</a> must be clocked to be able to use the inference runtime library. The AI client uses the generated model only through the well-defined, generated <a href="#embedded_client_api"><code>ai_&lt;name&gt;_XXX()</code></a> functions (also called the <em>“Embedded Inference Client API”</em>). The X-CUBE-AI pack provides a compiled library (network runtime library) per STM32 series and per supported tool-chain.</p>
<section id="ref_quick_usage_code" class="level2">
<h2>Getting started</h2>
<p>The following code snippet provides a typical, minimal example using the API for a 32-bit float model. The pre-trained model is generated with the default options, i.e. the input buffer is not allocated in the “activations” buffer and the default c-name (<code>&#39;network&#39;</code>) is used. Note that all client resources requested by the AI stack (the activations buffer and the data buffers for the IO) are allocated at compile time thanks to the generated <code>&#39;AI_NETWORK_XXX_SIZE&#39;</code> macros, allowing a minimal and quick integration.</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb1-1"><a href="#cb1-1"></a><span class="pp">#include </span><span class="im">&lt;stdio.h&gt;</span></span>
<span id="cb1-2"><a href="#cb1-2"></a></span>
<span id="cb1-3"><a href="#cb1-3"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb1-4"><a href="#cb1-4"></a><span class="pp">#include </span><span class="im">&quot;network_data.h&quot;</span></span>
<span id="cb1-5"><a href="#cb1-5"></a></span>
<span id="cb1-6"><a href="#cb1-6"></a><span class="co">/* Global handle to reference an instantiated C-model */</span></span>
<span id="cb1-7"><a href="#cb1-7"></a><span class="dt">static</span> ai_handle network = AI_HANDLE_NULL;</span>
<span id="cb1-8"><a href="#cb1-8"></a></span>
<span id="cb1-9"><a href="#cb1-9"></a><span class="co">/* Global c-array to handle the activations buffer */</span></span>
<span id="cb1-10"><a href="#cb1-10"></a>AI_ALIGNED(<span class="dv">4</span>)</span>
<span id="cb1-11"><a href="#cb1-11"></a><span class="dt">static</span> ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];</span>
<span id="cb1-12"><a href="#cb1-12"></a></span>
<span id="cb1-13"><a href="#cb1-13"></a><span class="co">/* Data payload for input tensor */</span></span>
<span id="cb1-14"><a href="#cb1-14"></a>AI_ALIGNED(<span class="dv">4</span>)</span>
<span id="cb1-15"><a href="#cb1-15"></a><span class="dt">static</span> ai_float in_data[AI_NETWORK_IN_1_SIZE];</span>
<span id="cb1-16"><a href="#cb1-16"></a><span class="co">/* or static ai_u8 in_data[AI_NETWORK_IN_1_SIZE_BYTES]; */</span></span>
<span id="cb1-17"><a href="#cb1-17"></a></span>
<span id="cb1-18"><a href="#cb1-18"></a><span class="co">/* Data payload for the output tensor */</span></span>
<span id="cb1-19"><a href="#cb1-19"></a>AI_ALIGNED(<span class="dv">4</span>)</span>
<span id="cb1-20"><a href="#cb1-20"></a><span class="dt">static</span> ai_float out_data[AI_NETWORK_OUT_1_SIZE];</span>
<span id="cb1-21"><a href="#cb1-21"></a><span class="co">/* static ai_u8 out_data[AI_NETWORK_OUT_1_SIZE_BYTES]; */</span></span>
<span id="cb1-22"><a href="#cb1-22"></a></span>
<span id="cb1-23"><a href="#cb1-23"></a><span class="co">/* </span></span>
<span id="cb1-24"><a href="#cb1-24"></a><span class="co"> * Bootstrap code</span></span>
<span id="cb1-25"><a href="#cb1-25"></a><span class="co"> */</span></span>
<span id="cb1-26"><a href="#cb1-26"></a><span class="dt">int</span> aiInit(<span class="dt">void</span>) {</span>
<span id="cb1-27"><a href="#cb1-27"></a>  ai_error err;</span>
<span id="cb1-28"><a href="#cb1-28"></a>  </span>
<span id="cb1-29"><a href="#cb1-29"></a>  <span class="co">/* 1 - Create an instance of the model */</span></span>
<span id="cb1-30"><a href="#cb1-30"></a>  err = ai_network_create(&amp;network, AI_NETWORK_DATA_CONFIG <span class="co">/* or NULL */</span>);</span>
<span id="cb1-31"><a href="#cb1-31"></a>  <span class="cf">if</span> (err.type != AI_ERROR_NONE) {</span>
<span id="cb1-32"><a href="#cb1-32"></a>    printf(<span class="st">&quot;E: AI ai_network_create error - type=%d code=%d</span><span class="sc">\r\n</span><span class="st">&quot;</span>, err.type, err.code);</span>
<span id="cb1-33"><a href="#cb1-33"></a>    <span class="cf">return</span> -<span class="dv">1</span>;</span>
<span id="cb1-34"><a href="#cb1-34"></a>  }</span>
<span id="cb1-35"><a href="#cb1-35"></a></span>
<span id="cb1-36"><a href="#cb1-36"></a>  <span class="co">/* 2 - Initialize the instance */</span></span>
<span id="cb1-37"><a href="#cb1-37"></a>  <span class="dt">const</span> ai_network_params params = {</span>
<span id="cb1-38"><a href="#cb1-38"></a>      AI_NETWORK_DATA_WEIGHTS(ai_network_data_weights_get()),</span>
<span id="cb1-39"><a href="#cb1-39"></a>      AI_NETWORK_DATA_ACTIVATIONS(activations) };</span>
<span id="cb1-40"><a href="#cb1-40"></a></span>
<span id="cb1-41"><a href="#cb1-41"></a>  <span class="cf">if</span> (!ai_network_init(network, &amp;params)) {</span>
<span id="cb1-42"><a href="#cb1-42"></a>      err = ai_network_get_error(network);</span>
<span id="cb1-43"><a href="#cb1-43"></a>      printf(<span class="st">&quot;E: AI ai_network_init error - type=%d code=%d</span><span class="sc">\r\n</span><span class="st">&quot;</span>, err.type, err.code);</span>
<span id="cb1-44"><a href="#cb1-44"></a>      <span class="cf">return</span> -<span class="dv">1</span>;</span>
<span id="cb1-45"><a href="#cb1-45"></a>    }</span>
<span id="cb1-46"><a href="#cb1-46"></a></span>
<span id="cb1-47"><a href="#cb1-47"></a>  <span class="cf">return</span> <span class="dv">0</span>;</span>
<span id="cb1-48"><a href="#cb1-48"></a>}</span>
<span id="cb1-49"><a href="#cb1-49"></a></span>
<span id="cb1-50"><a href="#cb1-50"></a><span class="co">/* </span></span>
<span id="cb1-51"><a href="#cb1-51"></a><span class="co"> * Run inference code</span></span>
<span id="cb1-52"><a href="#cb1-52"></a><span class="co"> */</span></span>
<span id="cb1-53"><a href="#cb1-53"></a><span class="dt">int</span> aiRun(<span class="dt">const</span> <span class="dt">void</span> *in_data, <span class="dt">void</span> *out_data)</span>
<span id="cb1-54"><a href="#cb1-54"></a>{</span>
<span id="cb1-55"><a href="#cb1-55"></a>  ai_i32 n_batch;</span>
<span id="cb1-56"><a href="#cb1-56"></a>  ai_error err;</span>
<span id="cb1-57"><a href="#cb1-57"></a></span>
<span id="cb1-58"><a href="#cb1-58"></a>  <span class="co">/* 1 - Create the AI buffer IO handlers with the default definition */</span></span>
<span id="cb1-59"><a href="#cb1-59"></a>  ai_buffer ai_input[AI_NETWORK_IN_NUM] = AI_NETWORK_IN ;</span>
<span id="cb1-60"><a href="#cb1-60"></a>  ai_buffer ai_output[AI_NETWORK_OUT_NUM] = AI_NETWORK_OUT ;</span>
<span id="cb1-61"><a href="#cb1-61"></a>  </span>
<span id="cb1-62"><a href="#cb1-62"></a>  <span class="co">/* 2 - Update IO handlers with the data payload */</span></span>
<span id="cb1-63"><a href="#cb1-63"></a>  ai_input[<span class="dv">0</span>].n_batches = <span class="dv">1</span>;</span>
<span id="cb1-64"><a href="#cb1-64"></a>  ai_input[<span class="dv">0</span>].data = AI_HANDLE_PTR(in_data);</span>
<span id="cb1-65"><a href="#cb1-65"></a>  ai_output[<span class="dv">0</span>].n_batches = <span class="dv">1</span>;</span>
<span id="cb1-66"><a href="#cb1-66"></a>  ai_output[<span class="dv">0</span>].data = AI_HANDLE_PTR(out_data);</span>
<span id="cb1-67"><a href="#cb1-67"></a></span>
<span id="cb1-68"><a href="#cb1-68"></a>  <span class="co">/* 3 - Perform the inference */</span></span>
<span id="cb1-69"><a href="#cb1-69"></a>  n_batch = ai_network_run(network, &amp;ai_input[<span class="dv">0</span>], &amp;ai_output[<span class="dv">0</span>]);</span>
<span id="cb1-70"><a href="#cb1-70"></a>  <span class="cf">if</span> (n_batch != <span class="dv">1</span>) {</span>
<span id="cb1-71"><a href="#cb1-71"></a>      err = ai_network_get_error(network);</span>
<span id="cb1-72"><a href="#cb1-72"></a>      printf(<span class="st">&quot;E: AI ai_network_run error - type=%d code=%d</span><span class="sc">\r\n</span><span class="st">&quot;</span>, err.type, err.code);</span>
<span id="cb1-73"><a href="#cb1-73"></a>      <span class="cf">return</span> -<span class="dv">1</span>;</span>
<span id="cb1-74"><a href="#cb1-74"></a>  }</span>
<span id="cb1-75"><a href="#cb1-75"></a>  </span>
<span id="cb1-76"><a href="#cb1-76"></a>  <span class="cf">return</span> <span class="dv">0</span>;</span>
<span id="cb1-77"><a href="#cb1-77"></a>}</span>
<span id="cb1-78"><a href="#cb1-78"></a></span>
<span id="cb1-79"><a href="#cb1-79"></a><span class="co">/* </span></span>
<span id="cb1-80"><a href="#cb1-80"></a><span class="co"> * Example of main loop function</span></span>
<span id="cb1-81"><a href="#cb1-81"></a><span class="co"> */</span></span>
<span id="cb1-82"><a href="#cb1-82"></a><span class="dt">void</span> main_loop()</span>
<span id="cb1-83"><a href="#cb1-83"></a>{</span>
<span id="cb1-84"><a href="#cb1-84"></a>  <span class="co">/* The STM32 CRC IP clock should be enabled to use the network runtime library */</span></span>
<span id="cb1-85"><a href="#cb1-85"></a>  __HAL_RCC_CRC_CLK_ENABLE();</span>
<span id="cb1-86"><a href="#cb1-86"></a></span>
<span id="cb1-87"><a href="#cb1-87"></a>  aiInit();</span>
<span id="cb1-88"><a href="#cb1-88"></a></span>
<span id="cb1-89"><a href="#cb1-89"></a>  <span class="cf">while</span> (<span class="dv">1</span>) {</span>
<span id="cb1-90"><a href="#cb1-90"></a>    <span class="co">/* 1 - Acquire, pre-process and fill the input buffers */</span></span>
<span id="cb1-91"><a href="#cb1-91"></a>    acquire_and_process_data(in_data);</span>
<span id="cb1-92"><a href="#cb1-92"></a></span>
<span id="cb1-93"><a href="#cb1-93"></a>    <span class="co">/* 2 - Call inference engine */</span></span>
<span id="cb1-94"><a href="#cb1-94"></a>    aiRun(in_data, out_data);</span>
<span id="cb1-95"><a href="#cb1-95"></a></span>
<span id="cb1-96"><a href="#cb1-96"></a>    <span class="co">/* 3 - Post-process the predictions */</span></span>
<span id="cb1-97"><a href="#cb1-97"></a>    post_process(out_data);</span>
<span id="cb1-98"><a href="#cb1-98"></a>  }</span>
<span id="cb1-99"><a href="#cb1-99"></a>}</span></code></pre></div>
<p>Only the following <code>CFLAGS/LDFLAGS</code> extensions (for a GCC-based embedded Arm tool-chain) are required to compile the generated c-files and to link the inference runtime library in an STM32 Cortex-M4 based project.</p>
<div class="sourceCode" id="cb2"><pre class="sourceCode makefile"><code class="sourceCode makefile"><span id="cb2-1"><a href="#cb2-1"></a><span class="dt">CFLAGS </span><span class="ch">+=</span><span class="st"> -mcpu=cortex-m4 -mthumb -mfpu=fpv4-sp-d16  -mfloat-abi=hard</span></span>
<span id="cb2-2"><a href="#cb2-2"></a></span>
<span id="cb2-3"><a href="#cb2-3"></a><span class="dt">CFLAGS </span><span class="ch">+=</span><span class="st"> -IMiddlewares/ST/AI/Lib/Inc</span></span>
<span id="cb2-4"><a href="#cb2-4"></a><span class="dt">LDFLAGS </span><span class="ch">+=</span><span class="st"> -LMiddlewares/ST/AI/Lib/Lib -l:NetworkRuntime510_CM4_GCC.a</span></span></code></pre></div>
<div class="Warning">
<p><strong>Note</strong> — Be aware that all provided inference runtime libraries for the different STM32 series are compiled with the FPU enabled and with the <code>hard</code> float ABI for performance reasons.</p>
</div>
</section>
<section id="sec_data_placement" class="level2">
<h2>AI buffers and privileged placement</h2>
<p>From the application/integration point of view, only three fixed-size memory-related objects are considered as system-dimensioning. Dynamic tensors are not supported, which means that the sizes and shapes of all tensors are fixed at generation time. No system heap is required to use the inference run-time engine.</p>
<ul>
<li>The “activations” buffer is a simple contiguous memory-mapped segment placed in a RW memory region. It is owned and allocated by the AI client and passed to the network instance (see the <a href="#ref_api_init"><code>&#39;ai_&lt;name&gt;_init()&#39;</code></a> function) to be used only as a private heap (or working buffer) storing the intermediate results during the execution of an inference. Between two <em>runs</em>, the associated memory segment can be reused by the application. The minimal size, <code>&#39;AI_&lt;NAME&gt;_DATA_ACTIVATIONS_SIZE&#39;</code>, is defined at generation time (see the reported <code>&#39;RAM&#39;</code> value).</li>
<li>The “weights” buffer is a simple contiguous memory-mapped segment (or multiple memory-mapped segments with the <a href="#ref_split_weights"><code>&#39;--split-weights&#39;</code></a> generation option), generally placed in a non-volatile memory device. The total size, <code>&#39;AI_&lt;NAME&gt;_DATA_WEIGHTS_SIZE&#39;</code>, is model-dependent (see the reported <code>&#39;ROM&#39;</code> value).</li>
<li>The “output” and “input” buffers must also be placed in a RW memory region. By default, they are owned and provided by the AI client. Their sizes are model-dependent and known at generation time (<code>AI_&lt;NAME&gt;_IN/OUT_SIZE_BYTES</code>).</li>
</ul>
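<p>The three system-dimensioning objects above can be sketched as plain static allocations. This is a minimal sketch: the sizes below are placeholders, whereas in a real project they come from the generated <code>&#39;AI_&lt;NAME&gt;_XXX&#39;</code> C-defines.</p>

```c
#include <stddef.h>
#include <stdint.h>

/* Placeholder values -- the real ones come from the generated
   <name>.h / <name>_data.h files */
#define AI_NETWORK_DATA_ACTIVATIONS_SIZE (4096)
#define AI_NETWORK_IN_1_SIZE_BYTES       (784 * 4)
#define AI_NETWORK_OUT_1_SIZE_BYTES      (10 * 4)

/* "activations": RW working buffer owned by the application, passed
   to ai_<name>_init() and reusable between two runs */
static uint8_t activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];

/* "input"/"output": RW buffers, owned by the AI client by default */
static uint8_t in_data[AI_NETWORK_IN_1_SIZE_BYTES];
static uint8_t out_data[AI_NETWORK_OUT_1_SIZE_BYTES];

/* The three objects are the only system-dimensioning RAM costs */
size_t total_static_footprint(void)
{
  return sizeof(activations) + sizeof(in_data) + sizeof(out_data);
}
```

Because no system heap is required, this footprint is fully known at link time.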
<div id="fig:id_mem_layout_default" class="fignos">
<figure>
<img src="" property="center" style="width:80.0%" alt /><figcaption><span>Figure 2:</span> Default data memory layout</figcaption>
</figure>
</div>
<p>The kernels (inference runtime library) are executed in the context of the caller; the minimal requested <strong>stack</strong> size can be evaluated at run-time by the <em>aiSystemPerformance</em> application (refer to the <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">[2], “AI system performance application”</a> section)</p>
<div class="Note">
<p><strong>Note</strong> — The placement of these objects is application linker and/or runtime dependent. The additional ROM and RAM for the network runtime library and the network c-file (text/rodata/bss and data sections) are also linker dependent but are not considered as system dimensioning.</p>
</div>
<p>The following table indicates the privileged placement choices to minimize the inference time. Depending on the model, the most constrained memory object is the “activations” buffer.</p>
<table>
<colgroup>
<col style="width: 34%"></col>
<col style="width: 65%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">memory object type</th>
<th style="text-align: left;">preferably placed in</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">client stack</td>
<td style="text-align: left;">a low latency &amp; high bandwidth device. STM32 internal RAM or data-TCM when available (zero wait-state memory).</td>
</tr>
<tr class="even">
<td style="text-align: left;">activations<br />
inputs/outputs</td>
<td style="text-align: left;">a low/medium latency &amp; high bandwidth device. STM32 internal RAM first or external RAM. Trade-off is mainly driven by the size and if the STM32 MCU has a data cache (Cortex-M7 family). If <a href="#sec_alloc_inputs">input buffers</a> are not allocated in the “activations” buffer, the “activations” buffer should be privileged.</td>
</tr>
<tr class="odd">
<td style="text-align: left;">weights</td>
<td style="text-align: left;">a medium latency &amp; medium bandwidth device. STM32 internal FLASH or external FLASH. Trade-off is driven by the STM32 MCU data cache availability (Cortex-M7 family), the <a href="#ref_split_weights">weights can be split</a> between different memory devices.</td>
</tr>
</tbody>
</table>
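<p>With a GCC-based tool-chain, these placement choices can be expressed with section attributes on the memory objects. The sketch below is illustrative only: the section name <code>.dtcm_ram</code> is hypothetical and takes effect only if the application linker script maps it to the intended device.</p>

```c
#include <stdint.h>

#define AI_NETWORK_DATA_ACTIVATIONS_SIZE (2048)  /* placeholder */

/* activations: steer towards zero wait-state internal RAM (e.g. DTCM);
   ".dtcm_ram" is an assumed section name defined by the linker script */
__attribute__((section(".dtcm_ram")))
static uint8_t activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];

/* weights: the 'const' qualifier lands the array in .rodata, which is
   mapped to internal flash on a typical STM32 project */
static const uint8_t s_weights[4] = { 0xcf, 0xae, 0x9d, 0x3d };

const uint8_t *weights_base(void)   { return s_weights; }
uint8_t       *activations_base(void) { return activations; }
```

On a desktop build the custom section is simply created by the compiler; on target, the linker script decides where it is placed.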
</section>
<section id="sec_alloc_inputs" class="level2">
<h2>I/O buffers inside the “activations” buffer</h2>
<p>The <code>&#39;--allocate-inputs&#39;</code> (respectively <code>&#39;--allocate-outputs&#39;</code>) option allows the “activations” buffer to be used to allocate the data of the input tensors (respectively the output tensors). At generation time, the minimal size of the “activations” buffer is adapted accordingly. Be aware that the base addresses of the respective memory sub-regions depend on the model: they are not necessarily aligned with the base address of the “activations” buffer and are pre-calculated at generation time (see the <a href="#sec_base_in_address">snippet code</a> to find them).</p>
<div id="fig:id_mem_layout_w_inputs" class="fignos">
<figure>
<img src="" property="center" style="width:80.0%" alt /><figcaption><span>Figure 3:</span> Data memory layout with <code>&#39;--allocate-inputs&#39;</code> option</figcaption>
</figure>
</div>
<ul>
<li>“external” input buffers (i.e. allocated outside the “activations” buffer) can always be used, even if the <code>&#39;--allocate-inputs&#39;</code> option is enabled.</li>
<li>the <code>&#39;--allocate-inputs&#39;</code> option reserves room for only <em>one</em> buffer per input tensor.</li>
<li>if a double-buffering scheme is to be implemented, the <code>&#39;--allocate-inputs&#39;</code> flag should not be used.</li>
</ul>
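<p>A minimal sketch of how an application can retrieve the effective input address when the input data are placed inside the “activations” buffer. The types and the report-filling function are stubbed here for illustration: in a real project the report is filled by <code>ai_network_get_info()</code> and the offset is model-dependent, pre-calculated at generation time.</p>

```c
#include <stdint.h>

/* Minimal stand-ins for the generated types (real ones: ai_platform.h) */
typedef void *ai_handle;
typedef struct { ai_handle data; } ai_buffer;
typedef struct { ai_buffer inputs[1]; } ai_network_report;

static uint8_t activations[1024];

/* Emulates ai_network_get_info(): the input region sits at a
   model-dependent offset inside the "activations" buffer, not
   necessarily at its base address */
static void fake_get_info(ai_network_report *report)
{
  report->inputs[0].data = (ai_handle)&activations[128];
}

/* The application reads the effective input address from the report
   instead of assuming the base of the "activations" buffer */
uint8_t *input_base(void)
{
  ai_network_report report;
  fake_get_info(&report);
  return (uint8_t *)report.inputs[0].data;
}
```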
</section>
<section id="ref_split_weights" class="level2">
<h2>Split weights buffer</h2>
<p>The <code>&#39;--split-weights&#39;</code> option is a convenience to statically place the weights, tensor by tensor, in different STM32 memory devices (on- or off-chip) thanks to specific end-user application linker directives.</p>
<ul>
<li>it relaxes the constraint of placing one large buffer into a constrained and non-homogeneous memory sub-system.</li>
<li>after profiling, it allows the global inference time to be improved by placing the critical weights into a low-latency memory; conversely, it can free a critical resource (i.e. internal flash) for use by the application.</li>
</ul>
<div id="fig:id_mem_layout_w_inputs" class="fignos">
<figure>
<img src="" property="center" style="width:65.0%" alt /><figcaption><span>Figure 4:</span> Split weights buffer (static placement)</figcaption>
</figure>
</div>
<p>The <code>&#39;--split-weights&#39;</code> option avoids generating a single c-array for the whole data of the weights/bias tensors (<code>&#39;&lt;name&gt;_data.c&#39;</code> file) as follows:</p>
<div class="sourceCode" id="cb3"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb3-1"><a href="#cb3-1"></a>ai_handle ai_network_data_weights_get(<span class="dt">void</span>)</span>
<span id="cb3-2"><a href="#cb3-2"></a>{</span>
<span id="cb3-3"><a href="#cb3-3"></a>  AI_ALIGNED(<span class="dv">4</span>)</span>
<span id="cb3-4"><a href="#cb3-4"></a>  <span class="dt">static</span> <span class="dt">const</span> ai_u8 s_network_weights[ <span class="dv">794136</span> ] = {</span>
<span id="cb3-5"><a href="#cb3-5"></a>    <span class="bn">0xcf</span>, <span class="bn">0xae</span>, <span class="bn">0x9d</span>, <span class="bn">0x3d</span>, <span class="bn">0x1b</span>, <span class="bn">0x0c</span>, <span class="bn">0xd1</span>, <span class="bn">0xbd</span>, <span class="bn">0x63</span>, <span class="bn">0x99</span>,</span>
<span id="cb3-6"><a href="#cb3-6"></a>    <span class="bn">0x36</span>, <span class="bn">0xbd</span>, <span class="bn">0xdb</span>, <span class="bn">0x67</span>, <span class="bn">0x46</span>, <span class="bn">0xbe</span>, <span class="bn">0x3b</span>, <span class="bn">0xe7</span>, <span class="bn">0x0d</span>, <span class="bn">0x3e</span>,</span>
<span id="cb3-7"><a href="#cb3-7"></a>    ...</span>
<span id="cb3-8"><a href="#cb3-8"></a>    <span class="bn">0x41</span>, <span class="bn">0xbf</span>, <span class="bn">0xc6</span>, <span class="bn">0x7d</span>, <span class="bn">0x69</span>, <span class="bn">0x3e</span>, <span class="bn">0x18</span>, <span class="bn">0x87</span>, <span class="bn">0x37</span>,</span>
<span id="cb3-9"><a href="#cb3-9"></a>    <span class="bn">0xbe</span>, <span class="bn">0x83</span>, <span class="bn">0x63</span>, <span class="bn">0x0f</span>, <span class="bn">0x3f</span>, <span class="bn">0x51</span>, <span class="bn">0xa1</span>, <span class="bn">0xdd</span>, <span class="bn">0xbe</span></span>
<span id="cb3-10"><a href="#cb3-10"></a>  };</span>
<span id="cb3-11"><a href="#cb3-11"></a>  <span class="cf">return</span> AI_HANDLE_PTR(s_network_weights);</span>
<span id="cb3-12"><a href="#cb3-12"></a>}</span></code></pre></div>
<p>A <code>&#39;s_&lt;network&gt;_&lt;layer_name&gt;_[bias|weights|*]_array_weights[]&#39;</code> c-array is created to store the data of each weights/bias tensor. A global map table is also built to retrieve the addresses of the different c-arrays.</p>
<div class="sourceCode" id="cb4"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb4-1"><a href="#cb4-1"></a>...</span>
<span id="cb4-2"><a href="#cb4-2"></a><span class="co">/* conv2d_1_weights_array - FLOAT|CONST */</span></span>
<span id="cb4-3"><a href="#cb4-3"></a>AI_ALIGNED(<span class="dv">4</span>)</span>
<span id="cb4-4"><a href="#cb4-4"></a><span class="dt">const</span> ai_u8 s_network_conv2d_1_weights_array_weights[ <span class="dv">2048</span> ] = {</span>
<span id="cb4-5"><a href="#cb4-5"></a>  <span class="bn">0xcf</span>, <span class="bn">0xae</span>, <span class="bn">0x9d</span>, <span class="bn">0x3d</span>, <span class="bn">0x1b</span>, <span class="bn">0x0c</span>, <span class="bn">0xd1</span>, <span class="bn">0xbd</span>, <span class="bn">0x63</span>, <span class="bn">0x99</span>,</span>
<span id="cb4-6"><a href="#cb4-6"></a>...</span>
<span id="cb4-7"><a href="#cb4-7"></a>}</span>
<span id="cb4-8"><a href="#cb4-8"></a>...</span>
<span id="cb4-9"><a href="#cb4-9"></a><span class="co">/* dense_3_bias_array - FLOAT|CONST */</span></span>
<span id="cb4-10"><a href="#cb4-10"></a>AI_ALIGNED(<span class="dv">4</span>)</span>
<span id="cb4-11"><a href="#cb4-11"></a><span class="dt">const</span> ai_u8 s_network_dense_3_bias_array_weights[ <span class="dv">24</span> ] = {</span>
<span id="cb4-12"><a href="#cb4-12"></a>  <span class="bn">0xa2</span>, <span class="bn">0x72</span>, <span class="bn">0x82</span>, <span class="bn">0x3e</span>, <span class="bn">0x5a</span>, <span class="bn">0x88</span>, <span class="bn">0x41</span>, <span class="bn">0xbf</span>, <span class="bn">0xc6</span>, <span class="bn">0x7d</span>,</span>
<span id="cb4-13"><a href="#cb4-13"></a>  <span class="bn">0x69</span>, <span class="bn">0x3e</span>, <span class="bn">0x18</span>, <span class="bn">0x87</span>, <span class="bn">0x37</span>, <span class="bn">0xbe</span>, <span class="bn">0x83</span>, <span class="bn">0x63</span>, <span class="bn">0x0f</span>, <span class="bn">0x3f</span>,</span>
<span id="cb4-14"><a href="#cb4-14"></a>  <span class="bn">0x51</span>, <span class="bn">0xa1</span>, <span class="bn">0xdd</span>, <span class="bn">0xbe</span></span>
<span id="cb4-15"><a href="#cb4-15"></a>};</span>
<span id="cb4-16"><a href="#cb4-16"></a></span>
<span id="cb4-17"><a href="#cb4-17"></a><span class="co">/* Entry point to retrieve the address of the c-arrays */</span></span>
<span id="cb4-18"><a href="#cb4-18"></a>ai_handle ai_network_data_weights_get(<span class="dt">void</span>) {</span>
<span id="cb4-19"><a href="#cb4-19"></a>  <span class="dt">static</span> <span class="dt">const</span> ai_u8* <span class="dt">const</span> s_network_params_map_table[] = {</span>
<span id="cb4-20"><a href="#cb4-20"></a>    &amp;s_conv2d_1_weights_array_weights[<span class="dv">0</span>],</span>
<span id="cb4-21"><a href="#cb4-21"></a>...</span>
<span id="cb4-22"><a href="#cb4-22"></a>    &amp;s_dense_3_bias_array_weights[<span class="dv">0</span>],</span>
<span id="cb4-23"><a href="#cb4-23"></a>  };</span>
<span id="cb4-24"><a href="#cb4-24"></a>  <span class="cf">return</span> AI_HANDLE_PTR(s_network_params_map_table);</span>
<span id="cb4-25"><a href="#cb4-25"></a>};</span></code></pre></div>
<ul>
<li>without particular linker directives, the multiple c-arrays are always placed in a <code>.rodata</code> section, as for the single c-array.</li>
<li>the client API is unchanged: the <code>&#39;ai_network_data_weights_get()&#39;</code> function is used to pass the entry point of the weights buffer to the <a href="#ref_api_init"><code>&#39;ai_&lt;name&gt;_init()&#39;</code></a> function.</li>
<li>as illustrated in figure <a href="#fig:id_mem_layout_w_inputs">4</a>, the <code>&#39;const&#39;</code> C-attribute can be manually commented out to use the default C-startup behavior, which copies the data into an initialized RAM data section.</li>
</ul>
</section>
<section id="thread_safety" class="level2">
<h2>Re-entrance and thread safety considerations</h2>
<p>No internal synchronization mechanism is implemented to protect the entry points against concurrent accesses. If the API is used in a multi-threaded context, the protection of the instantiated NN(s) must be guaranteed by the application layer itself. To minimize RAM usage, the same activation memory chunk (SizeSHARED) can be used to support multiple networks. In this case, the user must guarantee that an on-going inference execution cannot be preempted by the execution of another network.</p>
<div class="sourceCode" id="cb5"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb5-1"><a href="#cb5-1"></a>SizeSHARED = MAX(AI_&lt;name&gt;_DATA_ACTIVATIONS_SIZE) <span class="cf">for</span> name = “net1” … “net2”</span></code></pre></div>
<div class="Warning">
<p><strong>Note</strong> — If the preemption is expected for real-time constraint or latency reasons, each network instance must have its own and private activation buffer.</p>
</div>
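<p>The SizeSHARED rule above can be expressed as a simple compile-time maximum. The per-network sizes below are hypothetical placeholders for the generated <code>&#39;AI_&lt;NAME&gt;_DATA_ACTIVATIONS_SIZE&#39;</code> C-defines.</p>

```c
#include <stddef.h>

/* Hypothetical per-network sizes, standing in for the generated
   per-model activation-size C-defines */
#define AI_NET1_DATA_ACTIVATIONS_SIZE (12 * 1024)
#define AI_NET2_DATA_ACTIVATIONS_SIZE (20 * 1024)

#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* Shared working buffer sized for the most demanding network; valid
   only if the inferences are strictly serialized (no preemption) */
#define SIZE_SHARED MAX(AI_NET1_DATA_ACTIVATIONS_SIZE, \
                        AI_NET2_DATA_ACTIVATIONS_SIZE)

static unsigned char shared_activations[SIZE_SHARED];

size_t shared_size(void) { return sizeof(shared_activations); }
```

The same buffer is then passed to each network's init function in turn, one inference at a time.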
</section>
<section id="debug-support" class="level2">
<h2>Debug support</h2>
<p>The library must be considered as an optimized black box in binary format (source files are not delivered). There is no support for run-time introspection of internal data or states. The mapping and port of the NN is guaranteed by the X-CUBE-AI generator. Some integration issues can be highlighted by the <code>&#39;ai_&lt;name&gt;_get_error()&#39;</code> function or by the usage of the <a href="#ref_observer_api">Platform observer API</a> to inspect the intermediate results.</p>
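<p>Since the error type/code pair is the only run-time diagnostic the black-box library exposes, a small log-and-propagate helper is a common pattern. The <code>ai_error</code> type is stubbed below for illustration (the real definition lives in <code>ai_platform.h</code>), and the "no error" value of <code>0</code> is an assumption.</p>

```c
#include <stdio.h>

/* Minimal stand-in for the 'ai_error' type from ai_platform.h */
typedef struct { int type; int code; } ai_error;

#define AI_ERROR_NONE 0  /* assumed "no error" type value */

/* Logs the type/code diagnostic and maps it to a conventional
   0 / -1 return code for the application layer */
int log_ai_error(ai_error err)
{
  if (err.type != AI_ERROR_NONE) {
    printf("E: AI error - type=%d code=%d\r\n", err.type, err.code);
    return -1;
  }
  return 0;
}
```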
</section>
</section>
<section id="embedded_client_api" class="level1">
<h1>Embedded Client API</h1>
<section id="ai_name_xxx-c-defines" class="level2">
<h2><code>AI_&lt;NAME&gt;_XXX</code> C-defines</h2>
<p>Different C-defines are generated in the <code>&lt;name&gt;.h</code> and <code>&lt;name&gt;_data.h</code> files. They can be used to retrieve the dimensioning values at compile time or for dynamic allocation, or for debug purposes.</p>
<table>
<colgroup>
<col style="width: 40%"></col>
<col style="width: 59%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">C-defines</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_MODEL_NAME</code></td>
<td style="text-align: left;">C-string with the C-name of the model</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT_NUM</code></td>
<td style="text-align: left;">indicates the total number of input/output tensors</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT</code></td>
<td style="text-align: left;">C-table (<code>&#39;ai_buffer&#39;</code> type) to describe the input/output tensors (see <a href="#ref_api_run"><code>&#39;ai_&lt;name&gt;_run()&#39;</code></a> function)</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT_SIZE</code></td>
<td style="text-align: left;">C-table (integer type) indicating the number of items per input/output tensor (= H x W x C) (see <a href="#ref_tensor_def">“Input/output xD tensor format”</a> section)</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT_1_SIZE</code></td>
<td style="text-align: left;">indicates the total number of items for the first input/output tensor</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_IN/OUT_1_SIZE_BYTES</code></td>
<td style="text-align: left;">indicates the size in bytes for the first input/output tensor (see <a href="#ref_api_run"><code>&#39;ai_&lt;name&gt;_run()&#39;</code></a> function)</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_DATA_ACTIVATIONS_SIZE</code></td>
<td style="text-align: left;">indicates the minimal size in bytes which must be provided by a client application layer as a working buffer (see <a href="#ref_api_init"><code>&#39;ai_&lt;name&gt;_init()&#39;</code></a> function)</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_DATA__WEIGHTS_SIZE</code></td>
<td style="text-align: left;">indicates the size in bytes of the generated weights/bias buffer segment</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_INPUTS_IN_ACTIVATIONS</code></td>
<td style="text-align: left;">indicates that the input buffers are allocated inside the activations buffer. It is <em>only</em> defined if the <code>&#39;--allocate-inputs&#39;</code> option is used.</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_&lt;NAME&gt;_OUTPUTS_IN_ACTIVATIONS</code></td>
<td style="text-align: left;">indicates that the output buffers are allocated inside the activations buffer. It is <em>only</em> defined if the <code>&#39;--allocate-outputs&#39;</code> option is used.</td>
</tr>
</tbody>
</table>
</section>
<section id="ref_api_create" class="level2">
<h2><code>ai_&lt;name&gt;_create()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_error <span class="va">ai_</span>&lt;name&gt;_create(ai_handle* network, <span class="at">const</span> ai_buffer* network_config);</span>
<span id="func-2"><a href="#func-2"></a>ai_handle <span class="va">ai_</span>&lt;name&gt;_destroy(ai_handle network);</span></code></pre></div>
<p>This <strong>mandatory</strong> function must be called first by the application to create an instance of the neural network. The provided <code>&#39;ai_handle&#39;</code> object is updated to reference a context (opaque object) which must be passed to the other functions.</p>
<ul>
<li>The <code>&#39;network_config&#39;</code> parameter is a specific network configuration buffer (opaque structure) coded as an <code>&#39;ai_buffer&#39;</code>. It is normally generated by the code generator and should <em>not be modified</em> by the application. For the currently supported STM32 series and models, this object is always empty and <code>NULL</code> can be passed, but it is preferable to pass <code>&#39;AI_NETWORK_DATA_CONFIG&#39;</code> (see the <code>&#39;&lt;name&gt;.h&#39;</code> file).</li>
</ul>
<p>When the instance is no longer used by the application, the <code>&#39;ai_&lt;name&gt;_destroy()&#39;</code> function should be called to release any allocated resources.</p>
<div class="Error">
<p><strong>Warning</strong> — the STM32 CRC IP clock must be enabled before calling the <code>ai_&lt;network&gt;_XXX()</code> functions, otherwise the application hangs.</p>
</div>
<div class="Alert">
<p><strong>NOTE</strong> — The current implementation supports only one instance per c-model. Consequently, the same c-model cannot be instantiated several times, for example to be run concurrently in a pre-emptive runtime environment.</p>
</div>
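<p>The create/destroy contract can be sketched as follows. The functions below are simplified stand-ins for the generated API (the real signatures are in <code>&#39;&lt;name&gt;.h&#39;</code>): they only emulate the lifecycle of creating the instance early, keeping the opaque handle, and destroying it when done.</p>

```c
#include <stddef.h>

/* Minimal stand-ins for the generated API types */
typedef void *ai_handle;
typedef struct { int type; int code; } ai_error;

static int s_instance;   /* only one instance per c-model is supported */

static ai_error ai_network_create(ai_handle *network, const void *cfg)
{
  ai_error err = { 0, 0 };
  (void)cfg;               /* config is empty for current STM32 series */
  *network = &s_instance;  /* opaque context handed back to the caller */
  return err;
}

static ai_handle ai_network_destroy(ai_handle network)
{
  (void)network;
  return NULL;             /* NULL marks the released instance */
}

/* Typical lifecycle: create early, keep the handle, destroy at the end */
int lifecycle(void)
{
  ai_handle network = NULL;
  ai_error err = ai_network_create(&network, NULL);
  if (err.type != 0 || network == NULL)
    return -1;
  network = ai_network_destroy(network);
  return (network == NULL) ? 0 : -1;
}
```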
</section>
<section id="ref_api_init" class="level2">
<h2><code>ai_&lt;name&gt;_init()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_bool <span class="va">ai_</span>&lt;name&gt;_init(ai_handle network, <span class="at">const</span> ai_network_params* params);</span></code></pre></div>
<p>This <strong>mandatory</strong> function is used by the application to initialize the internal run-time data structures and to set the activations buffer and weights buffer.</p>
<ul>
<li>the <code>params</code> parameter is a structure (<code>ai_network_params</code> type) used to pass the references of the generated weights (<code>params</code> field) and of the activations/scratch memory buffer (<code>activations</code> field)</li>
<li><code>network</code> handle should be a valid handle, see <a href="#ref_api_create"><code>&#39;ai_&lt;name&gt;_create()&#39;</code></a> function</li>
</ul>
<div class="sourceCode" id="cb6"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb6-1"><a href="#cb6-1"></a><span class="co">/* @file: ai_platform.h */</span></span>
<span id="cb6-2"><a href="#cb6-2"></a><span class="kw">typedef</span> <span class="kw">struct</span> ai_network_params_ {</span>
<span id="cb6-3"><a href="#cb6-3"></a>  ai_buffer   params;         <span class="co">/*! info about params buffer(required!) */</span></span>
<span id="cb6-4"><a href="#cb6-4"></a>  ai_buffer   activations;    <span class="co">/*! info about activations buffer (required!) */</span></span>
<span id="cb6-5"><a href="#cb6-5"></a>} ai_network_params;</span></code></pre></div>
<ul>
<li><code>params</code> attribute handles the weights/bias memory buffer</li>
<li><code>activations</code> attribute handles the activations buffer which is used by the inference engine.</li>
<li>the sizes of the associated memory blocks are respectively defined by the following C-defines (see the <code>&lt;name&gt;_data.h</code> file):
<ul>
<li><code>AI_&lt;NAME&gt;_DATA_WEIGHTS_SIZE</code></li>
<li><code>AI_&lt;NAME&gt;_DATA_ACTIVATIONS_SIZE</code></li>
</ul></li>
</ul>
<blockquote>
<p>The memory layout of the weights/activations buffers is fully dependent on the implemented neural network.</p>
</blockquote>
<p>The <code>AI_NETWORK_DATA_WEIGHTS()</code> and <code>AI_NETWORK_DATA_ACTIVATIONS()</code> helper macros should be used to populate the requested <code>params</code> structure. Note that the <code>ai_network_data_weights_get()</code> function allows the base address of the weights buffer to be retrieved (see the <code>&#39;&lt;network&gt;_data.h&#39;</code> file).</p>
<div class="sourceCode" id="cb7"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb7-1"><a href="#cb7-1"></a>AI_ALIGNED(<span class="dv">4</span>)</span>
<span id="cb7-2"><a href="#cb7-2"></a><span class="dt">static</span> ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];</span>
<span id="cb7-3"><a href="#cb7-3"></a></span>
<span id="cb7-4"><a href="#cb7-4"></a><span class="dt">const</span> ai_network_params params = {</span>
<span id="cb7-5"><a href="#cb7-5"></a>  AI_NETWORK_DATA_WEIGHTS(ai_network_data_weights_get()),</span>
<span id="cb7-6"><a href="#cb7-6"></a>  AI_NETWORK_DATA_ACTIVATIONS(activations) };</span>
<span id="cb7-7"><a href="#cb7-7"></a></span>
<span id="cb7-8"><a href="#cb7-8"></a>ai_network_init(network, &amp;params);</span></code></pre></div>
</section>
<section id="ref_api_run" class="level2">
<h2><code>ai_&lt;name&gt;_run()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_i32 <span class="va">ai_</span>&lt;name&gt;_run(ai_handle network, <span class="at">const</span> ai_buffer* input, ai_buffer* output);</span></code></pre></div>
<p>This function is called to feed the neural network and run an inference. The input and output buffer parameters (<code>&#39;ai_buffer&#39;</code> type) are used to provide the input tensors and to store the predicted output tensors respectively (see the “<a href="#ref_tensor_def">Input/output xD tensor format</a>” section).</p>
<ul>
<li>The returned value is the number of input tensors processed when n_batches &gt;= 1. If it is &lt;= 0, the <a href="#ref_api_get_error"><code>&#39;ai_network_get_error()&#39;</code></a> function should be used to retrieve the error</li>
</ul>
<div class="Alert">
<p><strong>NOTE</strong> — Two separate lists of input and output <code>&#39;ai_buffer&#39;</code> objects can be passed, which supports neural network models with multiple inputs and/or outputs. The <code>&#39;AI_NETWORK_IN_NUM&#39;</code> and <code>&#39;AI_NETWORK_OUT_NUM&#39;</code> helper macros can be used to know the number of inputs and outputs at compile time. These values are also returned in the <code>&quot;struct ai_network_report&quot;</code> (see the <a href="#ref_api_info"><code>&#39;ai_&lt;name&gt;_get_info()&#39;</code></a> function).</p>
</div>
<section id="typical-usages" class="level3 unnumbered">
<h3>Typical usages</h3>
<p>The default use case is illustrated by the <a href="#ref_quick_usage_code">“Getting started”</a> code snippet. The following code is an example with a C-model which has two input and two output tensors. Note that the data payloads of the <a href="#sec_alloc_inputs">input buffers</a> are also allocated in the “activations” buffer.</p>
<div class="sourceCode" id="cb8"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb8-1"><a href="#cb8-1"></a><span class="pp">#include </span><span class="im">&lt;stdio.h&gt;</span></span>
<span id="cb8-2"><a href="#cb8-2"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb8-3"><a href="#cb8-3"></a>...</span>
<span id="cb8-4"><a href="#cb8-4"></a><span class="co">/* @ of the input buffers */</span></span>
<span id="cb8-5"><a href="#cb8-5"></a><span class="dt">static</span> ai_float *in_data[AI_NETWORK_IN_NUM];</span>
<span id="cb8-6"><a href="#cb8-6"></a></span>
<span id="cb8-7"><a href="#cb8-7"></a><span class="co">/* ai input handlers */</span></span>
<span id="cb8-8"><a href="#cb8-8"></a><span class="dt">static</span> ai_buffer ai_inputs[AI_NETWORK_IN_NUM] = AI_NETWORK_IN ;</span>
<span id="cb8-9"><a href="#cb8-9"></a></span>
<span id="cb8-10"><a href="#cb8-10"></a><span class="co">/* ai output handlers */</span></span>
<span id="cb8-11"><a href="#cb8-11"></a><span class="dt">static</span> ai_buffer ai_outputs[AI_NETWORK_OUT_NUM] = AI_NETWORK_OUT ;</span>
<span id="cb8-12"><a href="#cb8-12"></a></span>
<span id="cb8-13"><a href="#cb8-13"></a><span class="co">/* data buffer for the output buffers */</span></span>
<span id="cb8-14"><a href="#cb8-14"></a><span class="dt">static</span> ai_float out_1_data[AI_NETWORK_OUT_1_SIZE];</span>
<span id="cb8-15"><a href="#cb8-15"></a><span class="dt">static</span> ai_float out_2_data[AI_NETWORK_OUT_2_SIZE];</span>
<span id="cb8-16"><a href="#cb8-16"></a></span>
<span id="cb8-17"><a href="#cb8-17"></a><span class="co">/* @ of the output buffers */</span></span>
<span id="cb8-18"><a href="#cb8-18"></a><span class="dt">static</span> ai_float* out_data[AI_NETWORK_OUT_NUM] = {</span>
<span id="cb8-19"><a href="#cb8-19"></a>  &amp;out_1_data[<span class="dv">0</span>],</span>
<span id="cb8-20"><a href="#cb8-20"></a>  &amp;out_2_data[<span class="dv">0</span>]</span>
<span id="cb8-21"><a href="#cb8-21"></a>  };</span>
<span id="cb8-22"><a href="#cb8-22"></a></span>
<span id="cb8-23"><a href="#cb8-23"></a>...</span>
<span id="cb8-24"><a href="#cb8-24"></a><span class="dt">int</span> aiInit(<span class="dt">void</span>) {</span>
<span id="cb8-25"><a href="#cb8-25"></a>  ...</span>
<span id="cb8-26"><a href="#cb8-26"></a>  ai_network_report report; </span>
<span id="cb8-27"><a href="#cb8-27"></a></span>
<span id="cb8-28"><a href="#cb8-28"></a>  <span class="co">/* 1 - Create and initialize network */</span></span>
<span id="cb8-29"><a href="#cb8-29"></a>  ...</span>
<span id="cb8-30"><a href="#cb8-30"></a></span>
<span id="cb8-31"><a href="#cb8-31"></a>  <span class="co">/* 2 - Retrieve network infos */</span></span>
<span id="cb8-32"><a href="#cb8-32"></a>  ai_network_get_info(network, &amp;report);</span>
<span id="cb8-33"><a href="#cb8-33"></a></span>
<span id="cb8-34"><a href="#cb8-34"></a>  <span class="co">/* 3 - Update the ai input handlers with the effective @ of</span></span>
<span id="cb8-35"><a href="#cb8-35"></a><span class="co">         the input buffers  */</span></span>
<span id="cb8-36"><a href="#cb8-36"></a>  <span class="cf">for</span> (<span class="dt">int</span> i=<span class="dv">0</span>; i &lt; AI_NETWORK_IN_NUM; i++) {</span>
<span id="cb8-37"><a href="#cb8-37"></a>    ai_inputs[i].n_batches = <span class="dv">1</span>;</span>
<span id="cb8-38"><a href="#cb8-38"></a>    ai_inputs[i].data = AI_HANDLE_PTR(report.inputs[i].data);</span>
<span id="cb8-39"><a href="#cb8-39"></a>    in_data[i] = (ai_u8 *)(ai_inputs[i].data);</span>
<span id="cb8-40"><a href="#cb8-40"></a>  }</span>
<span id="cb8-41"><a href="#cb8-41"></a></span>
<span id="cb8-42"><a href="#cb8-42"></a>  <span class="co">/* 4- Update the ai output handlers */</span></span>
<span id="cb8-43"><a href="#cb8-43"></a>  <span class="cf">for</span> (<span class="dt">int</span> i=<span class="dv">0</span>; i &lt; AI_NETWORK_OUT_NUM; i++) {</span>
<span id="cb8-44"><a href="#cb8-44"></a>    ai_outputs[i].n_batches = <span class="dv">1</span>;</span>
<span id="cb8-45"><a href="#cb8-45"></a>    ai_outputs[i].data = AI_HANDLE_PTR(out_data[i]);</span>
<span id="cb8-46"><a href="#cb8-46"></a>  }</span>
<span id="cb8-47"><a href="#cb8-47"></a>  ...</span>
<span id="cb8-48"><a href="#cb8-48"></a>}</span>
<span id="cb8-49"><a href="#cb8-49"></a></span>
<span id="cb8-50"><a href="#cb8-50"></a><span class="dt">void</span> main_loop()</span>
<span id="cb8-51"><a href="#cb8-51"></a>{</span>
<span id="cb8-52"><a href="#cb8-52"></a>  <span class="cf">while</span> (<span class="dv">1</span>) {</span>
<span id="cb8-53"><a href="#cb8-53"></a>    <span class="co">/* 1 - Acquire, pre-process and fill the input buffers */</span></span>
<span id="cb8-54"><a href="#cb8-54"></a>    acquire_and_process_data(&amp;in_data[<span class="dv">0</span>], &amp;in_data[<span class="dv">1</span>]);</span>
<span id="cb8-55"><a href="#cb8-55"></a></span>
<span id="cb8-56"><a href="#cb8-56"></a>    <span class="co">/* 2 - Call inference engine */</span></span>
<span id="cb8-57"><a href="#cb8-57"></a>    ai_network_run(network, &amp;ai_inputs[<span class="dv">0</span>], &amp;ai_outputs[<span class="dv">0</span>]);</span>
<span id="cb8-58"><a href="#cb8-58"></a></span>
<span id="cb8-59"><a href="#cb8-59"></a>    <span class="co">/* 3 - Post-process the predictions */</span></span>
<span id="cb8-60"><a href="#cb8-60"></a>    post_process(&amp;out_data[<span class="dv">0</span>], &amp;out_data[<span class="dv">1</span>]);</span>
<span id="cb8-61"><a href="#cb8-61"></a>  }</span>
<span id="cb8-62"><a href="#cb8-62"></a>}</span></code></pre></div>
</section>
</section>
<section id="ref_api_get_error" class="level2">
<h2><code>ai_&lt;name&gt;_get_error()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_error <span class="va">ai_</span>&lt;name&gt;_get_error(ai_handle network);</span></code></pre></div>
<p>This function can be used by the client application to retrieve the first error reported during the execution of an <code>&#39;ai_&lt;name&gt;_xxx()&#39;</code> function.</p>
<ul>
<li>See the <code>ai_platform.h</code> file for the list of returned error types (<code>&#39;ai_error_type&#39;</code>) and associated codes (<code>&#39;ai_error_code&#39;</code>).</li>
</ul>
<section id="typical-ai-error-function-handler-debuglog-purpose" class="level3 unnumbered">
<h3>Typical AI error function handler (debug/log purpose)</h3>
<div class="sourceCode" id="cb9"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb9-1"><a href="#cb9-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb9-2"><a href="#cb9-2"></a>...</span>
<span id="cb9-3"><a href="#cb9-3"></a><span class="dt">void</span> aiLogErr(<span class="dt">const</span> ai_error err)</span>
<span id="cb9-4"><a href="#cb9-4"></a>{</span>
<span id="cb9-5"><a href="#cb9-5"></a>  printf(<span class="st">&quot;E: AI error - type=%d code=%d</span><span class="sc">\r\n</span><span class="st">&quot;</span>, err.type, err.code);</span>
<span id="cb9-6"><a href="#cb9-6"></a>}</span></code></pre></div>
</section>
</section>
<section id="ref_api_info" class="level2">
<h2><code>ai_&lt;name&gt;_get_info()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_bool <span class="va">ai_</span>&lt;name&gt;_get_info(ai_handle network, ai_network_report* report);</span></code></pre></div>
<p>This function retrieves the run-time data attributes of an instantiated model. Refer to the <code>&#39;ai_platform.h&#39;</code> file for the details of the returned <code>&#39;ai_network_report&#39;</code> C-struct. It should be called after <code>&#39;ai_&lt;name&gt;_init()&#39;</code>.</p>
<section id="typical-usage" class="level3 unnumbered">
<h3>Typical usage</h3>
<div class="sourceCode" id="cb10"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb10-1"><a href="#cb10-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb10-2"><a href="#cb10-2"></a>...</span>
<span id="cb10-3"><a href="#cb10-3"></a><span class="dt">int</span> aiInit(<span class="dt">void</span>) {</span>
<span id="cb10-4"><a href="#cb10-4"></a>  ai_network_report report;</span>
<span id="cb10-5"><a href="#cb10-5"></a>  ai_bool res;</span>
<span id="cb10-6"><a href="#cb10-6"></a>...</span>
<span id="cb10-7"><a href="#cb10-7"></a>  res = ai_network_get_info(network, &amp;report);</span>
<span id="cb10-8"><a href="#cb10-8"></a>  <span class="cf">if</span> (res) {</span>
<span id="cb10-9"><a href="#cb10-9"></a>    <span class="co">/* display/use the reported data */</span></span>
<span id="cb10-10"><a href="#cb10-10"></a>    ...</span>
<span id="cb10-11"><a href="#cb10-11"></a>  }</span>
<span id="cb10-12"><a href="#cb10-12"></a>...</span>
<span id="cb10-13"><a href="#cb10-13"></a>}</span></code></pre></div>
</section>
</section>
</section>
<section id="ref_tensor_def" class="level1">
<h1>IO buffer/tensor description</h1>
<p>Up to 4-dimensional tensors are supported, with a fixed representation: the <strong>BHWC</strong> (batch, height, width, channels) or <em>channel-last</em> format. They are handled by a <code>&#39;struct ai_buffer&#39;</code> C-struct object. The referenced payload/data memory segment (<code>&#39;data&#39;</code> field) is physically stored and referenced in memory as a simple standard C-array type; scattered memory buffers are not supported. <a href="#ref_data_type"><code>&#39;format&#39;</code></a> indicates the format of the data. <a href="#ref_data_type"><code>&#39;meta_info&#39;</code></a> is an extra field referencing the additional data-dependent parameters which may be required to handle a buffer.</p>
<section id="ai_buffer-c-struct" class="level2">
<h2>ai_buffer C-struct</h2>
<div class="sourceCode" id="cb11"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb11-1"><a href="#cb11-1"></a><span class="co">/* @file: ai_platform.h */</span></span>
<span id="cb11-2"><a href="#cb11-2"></a></span>
<span id="cb11-3"><a href="#cb11-3"></a><span class="kw">typedef</span> <span class="kw">struct</span> ai_buffer_ {</span>
<span id="cb11-4"><a href="#cb11-4"></a>  ai_buffer_format        format;     <span class="co">/*!&lt; buffer format */</span></span>
<span id="cb11-5"><a href="#cb11-5"></a>  ai_u16                  n_batches;  <span class="co">/*!&lt; number of batches in the buffer */</span></span>
<span id="cb11-6"><a href="#cb11-6"></a>  ai_u16                  height;     <span class="co">/*!&lt; buffer height dimension */</span></span>
<span id="cb11-7"><a href="#cb11-7"></a>  ai_u16                  width;      <span class="co">/*!&lt; buffer width dimension */</span></span>
<span id="cb11-8"><a href="#cb11-8"></a>  ai_u32                  channels;   <span class="co">/*!&lt; buffer number of channels */</span></span>
<span id="cb11-9"><a href="#cb11-9"></a>  ai_handle               data;       <span class="co">/*!&lt; pointer to buffer data */</span></span>
<span id="cb11-10"><a href="#cb11-10"></a>  ai_buffer_meta_info*    meta_info;  <span class="co">/*!&lt; pointer to buffer metadata info */</span></span>
<span id="cb11-11"><a href="#cb11-11"></a>} ai_buffer;</span></code></pre></div>
<p>The following table shows the expected mapping of 1d, 2d and 3d-array tensors:</p>
<table style="width:69%;">
<colgroup>
<col style="width: 30%"></col>
<col style="width: 38%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">tensor shape</th>
<th style="text-align: left;">mapped on (B, H, W, C)</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><a href="#ref_1d">1d-array</a></td>
<td style="text-align: left;">(-, 1, 1, c)</td>
</tr>
<tr class="even">
<td style="text-align: left;"><a href="#ref_2d">2d-array</a></td>
<td style="text-align: left;">(-, h, 1, c)</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><a href="#ref_3d">3d-array</a></td>
<td style="text-align: left;">(-, h, w, c)</td>
</tr>
</tbody>
</table>
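<p>As an illustration of this <em>channel-last</em> mapping, the following sketch (a hypothetical helper, not part of the generated API) computes the flat offset of an element in the underlying C-array:</p>

```c
#include <assert.h>

/* Hypothetical helper (not part of the generated API): flat offset of
   element (b, h, w, c) in a channel-last (BHWC) buffer of shape
   (B, H, W, C). Channels vary fastest, then width, height and batch. */
static int bhwc_offset(int b, int h, int w, int c,
                       int H, int W, int C)
{
  return ((b * H + h) * W + w) * C + c;
}
```

<p>For example, with a shape (B=2, H=2, W=3, C=4), element (0, 1, 0, 0) lands at offset 12 (one full H-row of W*C=12 elements).</p>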
<section id="retrieve-tensor-information" class="level3 unnumbered">
<h3>Retrieve tensor information</h3>
<p>The following code snippets show how to retrieve the tensor information from a buffer descriptor. The <code>&#39;format&#39;</code> and <code>&#39;meta_info&#39;</code> fields are described in the next section.</p>
<div class="sourceCode" id="cb12"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb12-1"><a href="#cb12-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb12-2"><a href="#cb12-2"></a></span>
<span id="cb12-3"><a href="#cb12-3"></a>{</span>
<span id="cb12-4"><a href="#cb12-4"></a>  <span class="co">/* Use the generated macro to set the buffer input descriptors */</span></span>
<span id="cb12-5"><a href="#cb12-5"></a>  <span class="dt">const</span> ai_buffer input[] = AI_NETWORK_IN;</span>
<span id="cb12-6"><a href="#cb12-6"></a>  </span>
<span id="cb12-7"><a href="#cb12-7"></a>  <span class="co">/* Extract format of the first input tensor (index 0) */</span></span>
<span id="cb12-8"><a href="#cb12-8"></a>  <span class="dt">const</span> ai_buffer_format fmt_1 = AI_BUFFER_FORMAT(&amp;input[<span class="dv">0</span>]);</span>
<span id="cb12-9"><a href="#cb12-9"></a>  </span>
<span id="cb12-10"><a href="#cb12-10"></a>  <span class="co">/* Extract height, width and channels of the first input tensor */</span></span>
<span id="cb12-11"><a href="#cb12-11"></a>  <span class="dt">const</span> ai_u16 height_1 = AI_BUFFER_HEIGHT(&amp;input[<span class="dv">0</span>]);</span>
<span id="cb12-12"><a href="#cb12-12"></a>  <span class="dt">const</span> ai_u16 width_1 = AI_BUFFER_WIDTH(&amp;input[<span class="dv">0</span>]);</span>
<span id="cb12-13"><a href="#cb12-13"></a>  <span class="dt">const</span> ai_u16 ch_1 = AI_BUFFER_CHANNELS(&amp;input[<span class="dv">0</span>]);</span>
<span id="cb12-14"><a href="#cb12-14"></a>  <span class="dt">const</span> ai_u16 size_1 = AI_BUFFER_SIZE(&amp;input[<span class="dv">0</span>]);</span>
<span id="cb12-15"><a href="#cb12-15"></a>  ...</span>
<span id="cb12-16"><a href="#cb12-16"></a>}</span></code></pre></div>
<p>or with the <code>&#39;ai_network_report&#39;</code> structure:</p>
<div class="sourceCode" id="cb13"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb13-1"><a href="#cb13-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb13-2"><a href="#cb13-2"></a></span>
<span id="cb13-3"><a href="#cb13-3"></a>{</span>
<span id="cb13-4"><a href="#cb13-4"></a>  <span class="co">/* Fetch run-time network descriptor */</span></span>
<span id="cb13-5"><a href="#cb13-5"></a>  ai_network_report report;</span>
<span id="cb13-6"><a href="#cb13-6"></a>  ai_network_get_info(network, &amp;report);</span>
<span id="cb13-7"><a href="#cb13-7"></a></span>
<span id="cb13-8"><a href="#cb13-8"></a>  <span class="co">/* Set the descriptor of the first input tensor (index 0) */</span></span>
<span id="cb13-9"><a href="#cb13-9"></a>  <span class="dt">const</span> ai_buffer *input = &amp;report.inputs[<span class="dv">0</span>];</span>
<span id="cb13-10"><a href="#cb13-10"></a></span>
<span id="cb13-11"><a href="#cb13-11"></a>  <span class="co">/* Extract format of the tensor */</span></span>
<span id="cb13-12"><a href="#cb13-12"></a>  <span class="dt">const</span> ai_buffer_format fmt_1 = AI_BUFFER_FORMAT(input);</span>
<span id="cb13-13"><a href="#cb13-13"></a>  </span>
<span id="cb13-14"><a href="#cb13-14"></a>  <span class="co">/* Extract height, width and channels of the tensor */</span></span>
<span id="cb13-15"><a href="#cb13-15"></a>  <span class="dt">const</span> ai_u16 height_1 = AI_BUFFER_HEIGHT(input);</span>
<span id="cb13-16"><a href="#cb13-16"></a>  <span class="dt">const</span> ai_u16 width_1 = AI_BUFFER_WIDTH(input);</span>
<span id="cb13-17"><a href="#cb13-17"></a>  <span class="dt">const</span> ai_u16 ch_1 = AI_BUFFER_CHANNELS(input);</span>
<span id="cb13-18"><a href="#cb13-18"></a>  <span class="dt">const</span> ai_u16 size_1 = AI_BUFFER_SIZE(input);</span>
<span id="cb13-19"><a href="#cb13-19"></a>  ...</span>
<span id="cb13-20"><a href="#cb13-20"></a>}</span></code></pre></div>
</section>
</section>
<section id="ref_data_type" class="level2">
<h2>Tensor format</h2>
<p>The format of the data is mainly defined by the field <code>&#39;format&#39;</code>, a 32b word (<code>&#39;ai_buffer_format&#39;</code> type). Two types are supported.</p>
<div class="sourceCode" id="cb14"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb14-1"><a href="#cb14-1"></a><span class="dt">const</span> ai_buffer_format fmt = AI_BUFFER_FORMAT(@ai_buffer_object);</span></code></pre></div>
<table style="width:99%;">
<colgroup>
<col style="width: 37%"></col>
<col style="width: 61%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">type</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>AI_BUFFER_FMT_TYPE_FLOAT</code></td>
<td style="text-align: left;">indicates that the data container handles <strong>floating-point data</strong>, mapped to a 32b float C-type (<code>&#39;ai_float&#39;</code> or <code>&#39;float&#39;</code>).</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_BUFFER_FMT_TYPE_Q</code></td>
<td style="text-align: left;">indicates that the data container handles <strong>quantized data</strong>, mapped to an 8b signed or unsigned integer C-type, in particular for the IO tensors. A 16b or 32b integer C-type can also be used for the weights and/or bias tensors. Two <strong>arithmetics</strong> are supported: <strong>integer</strong> and <strong>Qm,n</strong> (fixed-point) arithmetic (refer to the <a href="quantization.html">[8], “Quantization and quantize command”</a> article).</td>
</tr>
</tbody>
</table>
<section id="helper-c-macros" class="level3 unnumbered">
<h3>Helper C-macros</h3>
<p>The following C-macros can be used on the <code>&#39;format&#39;</code> field of a <code>&#39;struct ai_buffer&#39;</code> C-struct object to extract this information.</p>
<table>
<colgroup>
<col style="width: 40%"></col>
<col style="width: 59%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">macros</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>AI_BUFFER_FMT_GET_TYPE(fmt)</code></td>
<td style="text-align: left;">returns <code>&#39;AI_BUFFER_FMT_TYPE_FLOAT&#39;</code> or <code>&#39;AI_BUFFER_FMT_TYPE_Q&#39;</code> buffer type</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_BUFFER_FMT_GET_FLOAT(fmt)</code></td>
<td style="text-align: left;">returns <code>&#39;1&#39;</code> if the data is a float type else <code>&#39;0&#39;</code></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_BUFFER_FMT_GET_SIGN(fmt)</code></td>
<td style="text-align: left;">returns <code>&#39;1&#39;</code> if the data is signed else <code>&#39;0&#39;</code>.</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_BUFFER_FMT_GET_BITS(fmt)</code></td>
<td style="text-align: left;">returns the total number of bits used to encode the data. This is M+N+sign for the <code>&#39;AI_BUFFER_FMT_TYPE_Q&#39;</code> type. Available values: <code>&#39;32&#39;</code> or <code>&#39;8&#39;</code></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_BUFFER_FMT_GET_FBITS(fmt)</code></td>
<td style="text-align: left;">returns the number of bits used to encode the fractional part (N) for the 8b quantized data type.</td>
</tr>
</tbody>
</table>
<p>Additional macros are defined for the meta parameters:</p>
<div class="sourceCode" id="cb15"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb15-1"><a href="#cb15-1"></a><span class="dt">const</span> ai_buffer_meta_info * meta_info = AI_BUFFER_META_INFO(@ai_buffer_object);</span></code></pre></div>
<table>
<colgroup>
<col style="width: 51%"></col>
<col style="width: 48%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">macros</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>AI_BUFFER_META_INFO_INTQ(meta_info)</code></td>
<td style="text-align: left;">indicates whether scale/zero-point meta-info is available. If so, a reference to an <code>&#39;ai_intq_info&#39;</code> object is returned, else <code>&#39;NULL&#39;</code>.</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_BUFFER_META_INFO_INTQ_GET_SCALE(meta_info, pos)</code></td>
<td style="text-align: left;">generic macro that returns the scale value at the pos-th position if available, else <code>0</code>. <code>&#39;ai_float&#39;</code> type. For the IO tensors, only position 0 is available.</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_BUFFER_META_INFO_INTQ_GET_ZEROPOINT(meta_info, pos)</code></td>
<td style="text-align: left;">generic macro that returns the zero-point value at the pos-th position if available, else <code>0</code>. <code>&#39;ai_i8&#39;</code> or <code>&#39;ai_u8&#39;</code> type. The type can be deduced from the output of the <code>&#39;AI_BUFFER_FMT_GET_SIGN()&#39;</code> and <code>&#39;AI_BUFFER_FMT_GET_BITS()&#39;</code> macros.</td>
</tr>
</tbody>
</table>
<div class="Alert">
<p><strong>NOTE</strong> — Be aware that the <code>&#39;meta_info&#39;</code> field is only available through the returned <code>&#39;ai_network_report&#39;</code> structure. In the generated <code>&#39;AI_&lt;NAME&gt;_IN/OUT&#39;</code> C-defines, this field is set to <code>&#39;NULL&#39;</code>.</p>
</div>
<p>The following code snippet illustrates typical code to extract the <code>&#39;m,n&#39;</code> values.</p>
<div class="sourceCode" id="cb16"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb16-1"><a href="#cb16-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb16-2"><a href="#cb16-2"></a></span>
<span id="cb16-3"><a href="#cb16-3"></a>{</span>
<span id="cb16-4"><a href="#cb16-4"></a>  <span class="co">/* Use the generated macro to set the buffer input descriptors */</span></span>
<span id="cb16-5"><a href="#cb16-5"></a>  <span class="dt">const</span> ai_buffer input[] = AI_NETWORK_IN;</span>
<span id="cb16-6"><a href="#cb16-6"></a>  </span>
<span id="cb16-7"><a href="#cb16-7"></a>  <span class="co">/* Extract format of the first input tensor (index 0) */</span></span>
<span id="cb16-8"><a href="#cb16-8"></a>  <span class="dt">const</span> ai_buffer_format fmt_1 = AI_BUFFER_FORMAT(&amp;input[<span class="dv">0</span>]);</span>
<span id="cb16-9"><a href="#cb16-9"></a>  </span>
<span id="cb16-10"><a href="#cb16-10"></a>  <span class="co">/* Extract the data type */</span></span>
<span id="cb16-11"><a href="#cb16-11"></a>  <span class="dt">const</span> <span class="dt">uint32_t</span> type = AI_BUFFER_FMT_GET_TYPE(fmt_1); <span class="co">/* -&gt; AI_BUFFER_FMT_TYPE_Q */</span></span>
<span id="cb16-12"><a href="#cb16-12"></a>  </span>
<span id="cb16-13"><a href="#cb16-13"></a>  <span class="co">/* Extract m,n values */</span></span>
<span id="cb16-14"><a href="#cb16-14"></a>  <span class="dt">const</span> ai_size sign = AI_BUFFER_FMT_GET_SIGN(fmt_1);  <span class="co">/* -&gt; 1 */</span></span>
<span id="cb16-15"><a href="#cb16-15"></a>  <span class="dt">const</span> ai_size bits = AI_BUFFER_FMT_GET_BITS(fmt_1);  <span class="co">/* -&gt; 8 */</span></span>
<span id="cb16-16"><a href="#cb16-16"></a>  </span>
<span id="cb16-17"><a href="#cb16-17"></a>  <span class="dt">const</span> ai_i16 N = AI_BUFFER_FMT_GET_FBITS(fmt_1);</span>
<span id="cb16-18"><a href="#cb16-18"></a>  <span class="dt">const</span> ai_size M = bits - sign - N;</span>
<span id="cb16-19"><a href="#cb16-19"></a>  ...</span>
<span id="cb16-20"><a href="#cb16-20"></a>}</span></code></pre></div>
<p>Extraction of the <code>&#39;scale&#39;</code> and <code>&#39;zero_point&#39;</code> values is shown in the following code snippet:</p>
<div class="sourceCode" id="cb17"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb17-1"><a href="#cb17-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb17-2"><a href="#cb17-2"></a></span>
<span id="cb17-3"><a href="#cb17-3"></a><span class="dt">static</span> ai_handle network;</span>
<span id="cb17-4"><a href="#cb17-4"></a></span>
<span id="cb17-5"><a href="#cb17-5"></a>{</span>
<span id="cb17-6"><a href="#cb17-6"></a>  <span class="co">/* Fetch run-time network descriptor. This is MANDATORY</span></span>
<span id="cb17-7"><a href="#cb17-7"></a><span class="co">     to retrieve the meta parameters. They are NOT available</span></span>
<span id="cb17-8"><a href="#cb17-8"></a><span class="co">     with the definition of the AI_&lt;NAME&gt;_IN/OUT macro.</span></span>
<span id="cb17-9"><a href="#cb17-9"></a><span class="co">  */</span></span>
<span id="cb17-10"><a href="#cb17-10"></a>  ai_network_report report;</span>
<span id="cb17-11"><a href="#cb17-11"></a>  ai_network_get_info(network, &amp;report);</span>
<span id="cb17-12"><a href="#cb17-12"></a></span>
<span id="cb17-13"><a href="#cb17-13"></a>  <span class="co">/* Set the descriptor of the first input tensor (index 0) */</span></span>
<span id="cb17-14"><a href="#cb17-14"></a>  <span class="dt">const</span> ai_buffer *input = &amp;report.inputs[<span class="dv">0</span>];</span>
<span id="cb17-15"><a href="#cb17-15"></a>  </span>
<span id="cb17-16"><a href="#cb17-16"></a>  <span class="co">/* Extract format of the tensor */</span></span>
<span id="cb17-17"><a href="#cb17-17"></a>  <span class="dt">const</span> ai_buffer_format fmt_1 = AI_BUFFER_FORMAT(input);</span>
<span id="cb17-18"><a href="#cb17-18"></a>  </span>
<span id="cb17-19"><a href="#cb17-19"></a>  <span class="co">/* Extract the data type */</span></span>
<span id="cb17-20"><a href="#cb17-20"></a>  <span class="dt">const</span> <span class="dt">uint32_t</span> type = AI_BUFFER_FMT_GET_TYPE(fmt_1); <span class="co">/* -&gt; AI_BUFFER_FMT_TYPE_Q */</span></span>
<span id="cb17-21"><a href="#cb17-21"></a>  </span>
<span id="cb17-22"><a href="#cb17-22"></a>  <span class="co">/* Extract sign and number of bits */</span></span>
<span id="cb17-23"><a href="#cb17-23"></a>  <span class="dt">const</span> ai_size sign = AI_BUFFER_FMT_GET_SIGN(fmt_1);  <span class="co">/* -&gt; 1 or 0*/</span></span>
<span id="cb17-24"><a href="#cb17-24"></a>  <span class="dt">const</span> ai_size bits = AI_BUFFER_FMT_GET_BITS(fmt_1);  <span class="co">/* -&gt; 8 */</span></span>
<span id="cb17-25"><a href="#cb17-25"></a>  </span>
<span id="cb17-26"><a href="#cb17-26"></a>  <span class="co">/* Extract scale/zero_point values (only pos=0 is currently supported, per-tensor) */</span></span>
<span id="cb17-27"><a href="#cb17-27"></a>  <span class="dt">const</span> ai_float scale = AI_BUFFER_META_INFO_INTQ_GET_SCALE(input-&gt;meta_info, <span class="dv">0</span>);</span>
<span id="cb17-28"><a href="#cb17-28"></a>  <span class="dt">const</span> <span class="dt">int</span> zero_point = AI_BUFFER_META_INFO_INTQ_GET_ZEROPOINT(input-&gt;meta_info, <span class="dv">0</span>);</span>
<span id="cb17-29"><a href="#cb17-29"></a>  ...</span>
<span id="cb17-30"><a href="#cb17-30"></a>}</span></code></pre></div>
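<p>With these meta parameters, an integer-quantized sample maps back to a real value as <code>r = scale * (q - zero_point)</code>. A minimal sketch (hypothetical helper, not part of the library):</p>

```c
#include <assert.h>
#include <math.h>

/* Hypothetical helper: dequantize an integer sample using the
   scale/zero-point values read from the buffer meta-info with the
   AI_BUFFER_META_INFO_INTQ_GET_xxx() macros. */
static float dequantize(int q, float scale, int zero_point)
{
  return scale * (float)(q - zero_point);
}
```

<p>For instance, with <code>scale = 0.5</code> and <code>zero_point = 128</code> (unsigned 8b case), the raw sample 130 represents the real value 1.0.</p>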
<p>The floating-point case:</p>
<div class="sourceCode" id="cb18"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb18-1"><a href="#cb18-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb18-2"><a href="#cb18-2"></a></span>
<span id="cb18-3"><a href="#cb18-3"></a>{</span>
<span id="cb18-4"><a href="#cb18-4"></a>  <span class="co">/* Generated macro is used to set the buffer input descriptors */</span></span>
<span id="cb18-5"><a href="#cb18-5"></a>  <span class="dt">const</span> ai_buffer input[] = AI_&lt;NAME&gt;_IN;</span>
<span id="cb18-6"><a href="#cb18-6"></a>  </span>
<span id="cb18-7"><a href="#cb18-7"></a>  <span class="co">/* Retrieve format of the first input tensor (index 0) */</span></span>
<span id="cb18-8"><a href="#cb18-8"></a>  <span class="dt">const</span> ai_buffer_format fmt_1 = AI_BUFFER_FORMAT(&amp;input[<span class="dv">0</span>]);</span>
<span id="cb18-9"><a href="#cb18-9"></a>  </span>
<span id="cb18-10"><a href="#cb18-10"></a>  <span class="co">/* Retrieve the data type */</span></span>
<span id="cb18-11"><a href="#cb18-11"></a>  <span class="dt">const</span> <span class="dt">uint32_t</span> type = AI_BUFFER_FMT_GET_TYPE(fmt_1); <span class="co">/* -&gt; AI_BUFFER_FMT_TYPE_FLOAT */</span></span>
<span id="cb18-12"><a href="#cb18-12"></a>  </span>
<span id="cb18-13"><a href="#cb18-13"></a>  <span class="co">/* Retrieve sign/size values */</span></span>
<span id="cb18-14"><a href="#cb18-14"></a>  <span class="dt">const</span> ai_size sign = AI_BUFFER_FMT_GET_SIGN(fmt_1);   <span class="co">/* -&gt; 1 */</span></span>
<span id="cb18-15"><a href="#cb18-15"></a>  <span class="dt">const</span> ai_size bits = AI_BUFFER_FMT_GET_BITS(fmt_1);   <span class="co">/* -&gt; 32 */</span></span>
<span id="cb18-16"><a href="#cb18-16"></a>  <span class="dt">const</span> ai_size N = AI_BUFFER_FMT_GET_FBITS(fmt_1);     <span class="co">/* -&gt; 0 */</span></span>
<span id="cb18-17"><a href="#cb18-17"></a>  ...</span>
<span id="cb18-18"><a href="#cb18-18"></a>}</span></code></pre></div>
</section>
</section>
<section id="sec_life_cycle" class="level2">
<h2>Life-cycle of the IO buffers</h2>
<p>When the input and output buffers are passed to the <a href="#ref_api_run"><code>&#39;ai_&lt;name&gt;_run()&#39;</code></a> function, the caller should wait for the end of the inference before re-using the associated memory segments. There is no default mechanism to notify the application that the input tensors are released or no longer used by the c-inference engine. This is particularly true when the buffers are allocated in the activations buffer. However, if an input buffer is allocated in the user space and the client application knows the <code>&#39;c-id&#39;</code> (refer to <a href="command_line_interface.html">[3]</a>, <em>“C-graph description”</em> section) of the operator which processes the input, the <a href="#ref_observer_api">Platform Observer API</a> can be used to be notified when the operator has finished (see the <a href="#ref_notify_input"><em>“Processed input buffer notification use-case”</em></a> section).</p>
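<p>As an example of this constraint, when the output payload lives in memory re-used by the engine, a client that needs the predictions across iterations can copy them out before the next inference. A sketch with a hypothetical <code>OUT_SIZE</code> stand-in for the generated <code>AI_&lt;NAME&gt;_OUT_1_SIZE</code> define:</p>

```c
#include <assert.h>
#include <string.h>

#define OUT_SIZE 10  /* stand-in for the generated AI_<NAME>_OUT_1_SIZE */

static float saved_out[OUT_SIZE];

/* Copy the predictions while the engine is idle, so the output
   memory segment can be safely re-used by the next inference. */
static void snapshot_output(const float *out_data)
{
  memcpy(saved_out, out_data, OUT_SIZE * sizeof(float));
}
```
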
</section>
<section id="sec_base_in_address" class="level2">
<h2>Base address of the IO buffers</h2>
<p>To retrieve the addresses of the IO buffers allocated in the activations buffer when the <a href="#sec_alloc_inputs"><code>&#39;--allocate-inputs&#39;</code></a> (or <code>&#39;--allocate-outputs&#39;</code>) flag is used, the <a href="#ref_api_info"><code>ai_&lt;name&gt;_get_info()</code></a> function should be used. Note that the instance must first be fully <a href="#ref_api_init">initialized</a>, because the returned address depends on the base address of the provided activations buffer.</p>
<div class="sourceCode" id="cb19"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb19-1"><a href="#cb19-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb19-2"><a href="#cb19-2"></a></span>
<span id="cb19-3"><a href="#cb19-3"></a><span class="dt">static</span> ai_handle network;</span>
<span id="cb19-4"><a href="#cb19-4"></a></span>
<span id="cb19-5"><a href="#cb19-5"></a>{</span>
<span id="cb19-6"><a href="#cb19-6"></a>  ai_network_report report;</span>
<span id="cb19-7"><a href="#cb19-7"></a>  ai_network_get_info(network, &amp;report);</span>
<span id="cb19-8"><a href="#cb19-8"></a></span>
<span id="cb19-9"><a href="#cb19-9"></a>  <span class="co">/* Set the descriptor of the first input tensor (index 0) */</span></span>
<span id="cb19-10"><a href="#cb19-10"></a>  <span class="dt">const</span> ai_buffer *input = &amp;report.inputs[<span class="dv">0</span>];</span>
<span id="cb19-11"><a href="#cb19-11"></a></span>
<span id="cb19-12"><a href="#cb19-12"></a>  <span class="co">/* Retrieve the @ of the input buffer */</span></span>
<span id="cb19-13"><a href="#cb19-13"></a><span class="pp">#if defined(AI_NETWORK_INPUTS_IN_ACTIVATIONS)</span></span>
<span id="cb19-14"><a href="#cb19-14"></a>  ai_u8 *in_data_1 = (ai_u8 *)input-&gt;data;</span>
<span id="cb19-15"><a href="#cb19-15"></a><span class="pp">#else</span></span>
<span id="cb19-16"><a href="#cb19-16"></a>  <span class="co">/* Buffer should be allocated by the application </span></span>
<span id="cb19-17"><a href="#cb19-17"></a><span class="co">     in this case: input-&gt;data == NULL */</span></span>
<span id="cb19-18"><a href="#cb19-18"></a>  ai_u8 in_data_1[AI_NETWORK_IN_1_SIZE_BYTES];</span>
<span id="cb19-19"><a href="#cb19-19"></a><span class="pp">#endif </span></span>
<span id="cb19-20"><a href="#cb19-20"></a>...</span>
<span id="cb19-21"><a href="#cb19-21"></a>}</span></code></pre></div>
<p>Note that the provided <code>AI_NETWORK_INPUTS_IN_ACTIVATIONS</code> or/and <code>AI_NETWORK_OUTPUTS_IN_ACTIVATIONS</code> C-defines can be used to condition the code at compile time.</p>
</section>
<section id="float-to-integer-format-conversion" class="level2">
<h2>Float to integer format conversion</h2>
<p>The following code snippet illustrates the float (<code>ai_float</code>) to integer (<code>ai_i8</code>/<code>ai_u8</code>) format conversion. The input buffer is used as the destination buffer.</p>
<div class="sourceCode" id="cb20"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb20-1"><a href="#cb20-1"></a><span class="pp">#include </span><span class="im">&lt;network.h&gt;</span></span>
<span id="cb20-2"><a href="#cb20-2"></a></span>
<span id="cb20-3"><a href="#cb20-3"></a><span class="pp">#define _MIN(x_, y_) \</span></span>
<span id="cb20-4"><a href="#cb20-4"></a><span class="pp">    ( ((x_)&lt;(y_)) ? (x_) : (y_) )</span></span>
<span id="cb20-5"><a href="#cb20-5"></a></span>
<span id="cb20-6"><a href="#cb20-6"></a><span class="pp">#define _MAX(x_, y_) \</span></span>
<span id="cb20-7"><a href="#cb20-7"></a><span class="pp">    ( ((x_)&gt;(y_)) ? (x_) : (y_) )</span></span>
<span id="cb20-8"><a href="#cb20-8"></a></span>
<span id="cb20-9"><a href="#cb20-9"></a><span class="pp">#define _CLAMP(x_, min_, max_, type_) \</span></span>
<span id="cb20-10"><a href="#cb20-10"></a><span class="pp">    (type_) (_MIN(_MAX(x_, min_), max_))</span></span>
<span id="cb20-11"><a href="#cb20-11"></a></span>
<span id="cb20-12"><a href="#cb20-12"></a><span class="pp">#define _ROUND(v_, type_) \</span></span>
<span id="cb20-13"><a href="#cb20-13"></a><span class="pp">    (type_) ( ((v_)&lt;0) ? ((v_)-0.5f) : ((v_)+0.5f) )</span></span>
<span id="cb20-14"><a href="#cb20-14"></a></span>
<span id="cb20-15"><a href="#cb20-15"></a><span class="dt">const</span> ai_buffer *get_input_desc(<span class="dt">int</span> idx)</span>
<span id="cb20-16"><a href="#cb20-16"></a>{</span>
<span id="cb20-17"><a href="#cb20-17"></a>  ai_network_report report;</span>
<span id="cb20-18"><a href="#cb20-18"></a>  ai_network_get_info(network, &amp;report);</span>
<span id="cb20-19"><a href="#cb20-19"></a>  <span class="cf">return</span> &amp;report.inputs[idx];</span>
<span id="cb20-20"><a href="#cb20-20"></a>}</span>
<span id="cb20-21"><a href="#cb20-21"></a></span>
<span id="cb20-22"><a href="#cb20-22"></a>ai_float input_f[AI_&lt;NAME&gt;_IN_1_SIZE];</span>
<span id="cb20-23"><a href="#cb20-23"></a>ai_i8 input_q[AI_&lt;NAME&gt;_IN_1_SIZE]; <span class="co">/* or ai_u8 */</span></span>
<span id="cb20-24"><a href="#cb20-24"></a></span>
<span id="cb20-25"><a href="#cb20-25"></a>{</span>
<span id="cb20-26"><a href="#cb20-26"></a>  <span class="dt">const</span> ai_buffer *input = get_input_desc(<span class="dv">0</span>);</span>
<span id="cb20-27"><a href="#cb20-27"></a>  ai_float scale  = AI_BUFFER_META_INFO_INTQ_GET_SCALE(input-&gt;meta_info, <span class="dv">0</span>);</span>
<span id="cb20-28"><a href="#cb20-28"></a>  <span class="dt">const</span> ai_i32 zp = AI_BUFFER_META_INFO_INTQ_GET_ZEROPOINT(input-&gt;meta_info, <span class="dv">0</span>);</span>
<span id="cb20-29"><a href="#cb20-29"></a></span>
<span id="cb20-30"><a href="#cb20-30"></a>  scale = <span class="fl">1.0</span><span class="bu">f</span> / scale;</span>
<span id="cb20-31"><a href="#cb20-31"></a></span>
<span id="cb20-32"><a href="#cb20-32"></a>  <span class="co">/* Loop */</span></span>
<span id="cb20-33"><a href="#cb20-33"></a>  <span class="cf">for</span> (<span class="dt">int</span> i=<span class="dv">0</span>; i &lt; AI_&lt;NAME&gt;_IN_1_SIZE; i++)</span>
<span id="cb20-34"><a href="#cb20-34"></a>  {</span>
<span id="cb20-35"><a href="#cb20-35"></a>    <span class="dt">const</span> ai_i32 tmp_ = zp + _ROUND(input_f[i] * scale, ai_i32);</span>
<span id="cb20-36"><a href="#cb20-36"></a>    <span class="co">/* for ai_u8 */</span></span>
<span id="cb20-37"><a href="#cb20-37"></a>    input_q[i] = _CLAMP(tmp_, <span class="dv">0</span>, <span class="dv">255</span>, ai_u8);</span>
<span id="cb20-38"><a href="#cb20-38"></a>    <span class="co">/* for ai_i8 */</span></span>
<span id="cb20-39"><a href="#cb20-39"></a>    input_q[i] = _CLAMP(tmp_, -<span class="dv">128</span>, <span class="dv">127</span>, ai_i8);</span>
<span id="cb20-40"><a href="#cb20-40"></a>  }</span>
<span id="cb20-41"><a href="#cb20-41"></a>  ...</span>
<span id="cb20-42"><a href="#cb20-42"></a>}</span></code></pre></div>
</section>
<section id="integer-to-float-format-conversion" class="level2">
<h2>Integer to float format conversion</h2>
<p>The following code snippet illustrates the integer (ai_i8/ai_u8) to float (ai_float) format conversion. The output buffer is used as the source buffer.</p>
<div class="sourceCode" id="cb21"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb21-1"><a href="#cb21-1"></a><span class="pp">#include </span><span class="im">&lt;network.h&gt;</span></span>
<span id="cb21-2"><a href="#cb21-2"></a></span>
<span id="cb21-3"><a href="#cb21-3"></a>ai_i8 output_q[AI_&lt;NAME&gt;_OUT_1_SIZE]; <span class="co">/* or ai_u8 */</span></span>
<span id="cb21-4"><a href="#cb21-4"></a>ai_float output_f[AI_&lt;NAME&gt;_OUT_1_SIZE];</span>
<span id="cb21-5"><a href="#cb21-5"></a></span>
<span id="cb21-6"><a href="#cb21-6"></a><span class="dt">const</span> ai_buffer *get_output_desc(idx)</span>
<span id="cb21-7"><a href="#cb21-7"></a>{</span>
<span id="cb21-8"><a href="#cb21-8"></a>  ai_network_report report;</span>
<span id="cb21-9"><a href="#cb21-9"></a>  ai_network_get_info(network, &amp;report);</span>
<span id="cb21-10"><a href="#cb21-10"></a>  <span class="cf">return</span> &amp;report.outputs[idx];</span>
<span id="cb21-11"><a href="#cb21-11"></a>}</span>
<span id="cb21-12"><a href="#cb21-12"></a></span>
<span id="cb21-13"><a href="#cb21-13"></a>{</span>
<span id="cb21-14"><a href="#cb21-14"></a>  <span class="dt">const</span> ai_buffer *output = get_output_desc(<span class="dv">0</span>);</span>
<span id="cb21-15"><a href="#cb21-15"></a>  ai_float scale  = AI_BUFFER_META_INFO_INTQ_GET_SCALE(output-&gt;meta_info, <span class="dv">0</span>);</span>
<span id="cb21-16"><a href="#cb21-16"></a>  <span class="dt">const</span> ai_i32 zp = AI_BUFFER_META_INFO_INTQ_GET_ZEROPOINT(output-&gt;meta_info, <span class="dv">0</span>);</span>
<span id="cb21-17"><a href="#cb21-17"></a></span>
<span id="cb21-18"><a href="#cb21-18"></a>  <span class="co">/* Loop */</span></span>
<span id="cb21-19"><a href="#cb21-19"></a>  <span class="cf">for</span> (<span class="dt">int</span> i=<span class="dv">0</span>; i&lt;AI_&lt;NAME&gt;_OUT_1_SIZE; i++)</span>
<span id="cb21-20"><a href="#cb21-20"></a>  {</span>
<span id="cb21-21"><a href="#cb21-21"></a>    output_f[i] = scale * ((ai_float)(output_q[i]) - zp);</span>
<span id="cb21-22"><a href="#cb21-22"></a>  }</span>
<span id="cb21-23"><a href="#cb21-23"></a>  ...</span>
<span id="cb21-24"><a href="#cb21-24"></a>}</span></code></pre></div>
</section>
<section id="float-to-qmn-format-conversion" class="level2">
<h2>Float to Qmn format conversion</h2>
<p>The following code snippet illustrates the float (ai_float) to Qmn (ai_i8) format conversion. The input tensor is used as the destination buffer.</p>
<div class="sourceCode" id="cb22"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb22-1"><a href="#cb22-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb22-2"><a href="#cb22-2"></a></span>
<span id="cb22-3"><a href="#cb22-3"></a><span class="pp">#define _MIN(x_, y_) \</span></span>
<span id="cb22-4"><a href="#cb22-4"></a><span class="pp">  ( ((x_)&lt;(y_)) ? (x_) : (y_) )</span></span>
<span id="cb22-5"><a href="#cb22-5"></a></span>
<span id="cb22-6"><a href="#cb22-6"></a><span class="pp">#define _MAX(x_, y_) \</span></span>
<span id="cb22-7"><a href="#cb22-7"></a><span class="pp">  ( ((x_)&gt;(y_)) ? (x_) : (y_) )</span></span>
<span id="cb22-8"><a href="#cb22-8"></a></span>
<span id="cb22-9"><a href="#cb22-9"></a><span class="pp">#define _CLAMP(x_, min_, max_, type_) \</span></span>
<span id="cb22-10"><a href="#cb22-10"></a><span class="pp">  (type_) (_MIN(_MAX(x_, min_), max_))</span></span>
<span id="cb22-11"><a href="#cb22-11"></a></span>
<span id="cb22-12"><a href="#cb22-12"></a><span class="pp">#define _ROUND(v_, type_) \</span></span>
<span id="cb22-13"><a href="#cb22-13"></a><span class="pp">  (type_) ( ((v_)&lt;0) ? ((v_)-0.5f) : ((v_)+0.5f) )</span></span>
<span id="cb22-14"><a href="#cb22-14"></a>  </span>
<span id="cb22-15"><a href="#cb22-15"></a>ai_float input_f[AI_&lt;NAME&gt;_IN_1_SIZE];</span>
<span id="cb22-16"><a href="#cb22-16"></a>ai_i8 input_q[AI_&lt;NAME&gt;_IN_1_SIZE];</span>
<span id="cb22-17"><a href="#cb22-17"></a></span>
<span id="cb22-18"><a href="#cb22-18"></a>{</span>
<span id="cb22-19"><a href="#cb22-19"></a>  <span class="dt">const</span> ai_buffer input[] = AI_&lt;NAME&gt;_IN;</span>
<span id="cb22-20"><a href="#cb22-20"></a>  </span>
<span id="cb22-21"><a href="#cb22-21"></a>  <span class="co">/* Retrieve format of the output tensor - index 0 */</span></span>
<span id="cb22-22"><a href="#cb22-22"></a>  <span class="dt">const</span> ai_buffer_format fmt_ = AI_BUFFER_FORMAT(&amp;input[<span class="dv">0</span>]);</span>
<span id="cb22-23"><a href="#cb22-23"></a>  </span>
<span id="cb22-24"><a href="#cb22-24"></a>  <span class="co">/* Build the scale factor for conversion */</span></span>
<span id="cb22-25"><a href="#cb22-25"></a>  <span class="dt">const</span> ai_float scale = (<span class="bn">0x1</span><span class="bu">U</span> &lt;&lt; AI_BUFFER_FMT_GET_FBITS(fmt_));</span>
<span id="cb22-26"><a href="#cb22-26"></a>  </span>
<span id="cb22-27"><a href="#cb22-27"></a>  <span class="co">/* Loop */</span></span>
<span id="cb22-28"><a href="#cb22-28"></a>  <span class="cf">for</span> (<span class="dt">int</span> i=<span class="dv">0</span>; i &lt; AI_&lt;NAME&gt;_IN_1_SIZE; i++)</span>
<span id="cb22-29"><a href="#cb22-29"></a>  {</span>
<span id="cb22-30"><a href="#cb22-30"></a>    <span class="dt">const</span> ai_i32 tmp_ = _ROUND((input_f[i] * scale), ai_i32);</span>
<span id="cb22-31"><a href="#cb22-31"></a>    input_q[i] = _CLAMP(tmp_, -<span class="dv">128</span>, <span class="dv">127</span>, ai_i8);</span>
<span id="cb22-32"><a href="#cb22-32"></a>  }</span>
<span id="cb22-33"><a href="#cb22-33"></a>  ...</span>
<span id="cb22-34"><a href="#cb22-34"></a>}</span></code></pre></div>
</section>
<section id="qmn-to-float-format-conversion" class="level2">
<h2>Qmn to float format conversion</h2>
<p>The following code snippet illustrates the Qmn (ai_i8) to float (ai_float) format conversion. The output tensor is used as the source buffer.</p>
<div class="sourceCode" id="cb23"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb23-1"><a href="#cb23-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb23-2"><a href="#cb23-2"></a></span>
<span id="cb23-3"><a href="#cb23-3"></a>ai_i8 output_q[AI_&lt;NAME&gt;_OUT_1_SIZE];</span>
<span id="cb23-4"><a href="#cb23-4"></a>ai_float output_f[AI_&lt;NAME&gt;_OUT_1_SIZE];</span>
<span id="cb23-5"><a href="#cb23-5"></a></span>
<span id="cb23-6"><a href="#cb23-6"></a>{</span>
<span id="cb23-7"><a href="#cb23-7"></a>  <span class="dt">const</span> ai_buffer output[] = AI_&lt;NAME&gt;_OUT;</span>
<span id="cb23-8"><a href="#cb23-8"></a>  </span>
<span id="cb23-9"><a href="#cb23-9"></a>  <span class="co">/* Retrieve format of the output tensor - index 0 */</span></span>
<span id="cb23-10"><a href="#cb23-10"></a>  <span class="dt">const</span> ai_buffer_format fmt_1 = AI_BUFFER_FORMAT(&amp;output[<span class="dv">0</span>]);</span>
<span id="cb23-11"><a href="#cb23-11"></a>  </span>
<span id="cb23-12"><a href="#cb23-12"></a>  <span class="co">/* Build the scale factor for conversion */</span></span>
<span id="cb23-13"><a href="#cb23-13"></a>  <span class="dt">const</span> ai_float scale = <span class="fl">1.0</span><span class="bu">f</span> / (<span class="bn">0x1</span><span class="bu">U</span> &lt;&lt; AI_BUFFER_FMT_GET_FBITS(fmt_1));</span>
<span id="cb23-14"><a href="#cb23-14"></a>  </span>
<span id="cb23-15"><a href="#cb23-15"></a>  <span class="co">/* Loop */</span></span>
<span id="cb23-16"><a href="#cb23-16"></a>  <span class="cf">for</span> (<span class="dt">int</span> i=<span class="dv">0</span>; i&lt;AI_&lt;NAME&gt;_OUT_1_SIZE; i++)</span>
<span id="cb23-17"><a href="#cb23-17"></a>  {</span>
<span id="cb23-18"><a href="#cb23-18"></a>    output_f[i] = (ai_float)(output_q[i]) * scale;</span>
<span id="cb23-19"><a href="#cb23-19"></a>  }</span>
<span id="cb23-20"><a href="#cb23-20"></a>  ...</span>
<span id="cb23-21"><a href="#cb23-21"></a>}</span></code></pre></div>
</section>
<section id="ref_1d" class="level2">
<h2>1d-array tensor</h2>
<p>For a <code>1-D tensor</code>, a standard C-array type is expected to handle the input and output tensors.</p>
<div class="sourceCode" id="cb24"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb24-1"><a href="#cb24-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb24-2"><a href="#cb24-2"></a></span>
<span id="cb24-3"><a href="#cb24-3"></a><span class="pp">#define xx_SIZE  VAL  </span><span class="co">/* = H * W * C = C */</span></span>
<span id="cb24-4"><a href="#cb24-4"></a></span>
<span id="cb24-5"><a href="#cb24-5"></a>ai_float xx_data[xx_SIZE];     <span class="co">/* n_batch = 1, height = 1, width = 1, channels = C */</span></span>
<span id="cb24-6"><a href="#cb24-6"></a></span>
<span id="cb24-7"><a href="#cb24-7"></a>ai_float xx_data[B * xx_SIZE]; <span class="co">/* n_batch = B, height = 1, width = 1, channels = C */</span></span>
<span id="cb24-8"><a href="#cb24-8"></a>ai_float xx_data[B][xx_SIZE];</span></code></pre></div>
<div id="fig:tensor_1d" class="fignos">
<figure>
<img src="" property="center" style="width:75.0%" alt /><figcaption><span>Figure 1:</span> 1-D Tensor data layout</figcaption>
</figure>
</div>
</section>
<section id="ref_2d" class="level2">
<h2>2d-array tensor</h2>
<p>For a <code>2-D tensor</code>, a standard C array-of-arrays memory arrangement is used to handle the input and output tensors. The two dimensions are mapped to the first two dimensions of the tensor in the original toolbox representation: e.g. H and C in Keras / TensorFlow, H and W in Lasagne.</p>
<div class="sourceCode" id="cb25"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb25-1"><a href="#cb25-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb25-2"><a href="#cb25-2"></a></span>
<span id="cb25-3"><a href="#cb25-3"></a><span class="pp">#define xx_SIZE  VAL  </span><span class="co">/* = H * W * C = H * C */</span></span>
<span id="cb25-4"><a href="#cb25-4"></a></span>
<span id="cb25-5"><a href="#cb25-5"></a>ai_float xx_data[xx_SIZE];  <span class="co">/* n_batch = 1, height = H, width = 1, channels = C */</span></span>
<span id="cb25-6"><a href="#cb25-6"></a>ai_float xx_data[H][C];</span>
<span id="cb25-7"><a href="#cb25-7"></a></span>
<span id="cb25-8"><a href="#cb25-8"></a>ai_float xx_data[B * xx_SIZE]; <span class="co">/* n_batch = B, height = H, width = 1, channels = C */</span></span>
<span id="cb25-9"><a href="#cb25-9"></a>ai_float xx_data[B][H][C];</span></code></pre></div>
<div id="fig:tensor_2d" class="fignos">
<figure>
<img src="" property="center" style="width:75.0%" alt /><figcaption><span>Figure 2:</span> 2-D Tensor data layout</figcaption>
</figure>
</div>
<div class="Alert">
<p><strong>NOTE</strong> — If the dimension order in the original toolbox is different from HWC (e.g. Lasagne: CHW), it’s up to the application to properly re-arrange the element order.</p>
</div>
</section>
<section id="ref_3d" class="level2">
<h2>3d-array tensor</h2>
<p>For a <code>3-D tensor</code>, a standard C array-of-arrays-of-arrays memory arrangement is used to handle the input and output tensors.</p>
<div class="sourceCode" id="cb26"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb26-1"><a href="#cb26-1"></a><span class="pp">#include </span><span class="im">&quot;network.h&quot;</span></span>
<span id="cb26-2"><a href="#cb26-2"></a></span>
<span id="cb26-3"><a href="#cb26-3"></a><span class="pp">#define xx_SIZE  VAL  </span><span class="co">/* = H * W * C */</span></span>
<span id="cb26-4"><a href="#cb26-4"></a></span>
<span id="cb26-5"><a href="#cb26-5"></a>ai_float xx_data[xx_SIZE];  <span class="co">/* n_batch = 1, height = H, width = W, channels = C */</span></span>
<span id="cb26-6"><a href="#cb26-6"></a>ai_float xx_data[H][W][C];</span>
<span id="cb26-7"><a href="#cb26-7"></a></span>
<span id="cb26-8"><a href="#cb26-8"></a>ai_float xx_data[B * xx_SIZE]; <span class="co">/* n_batch = B, height = H, width = W, channels = C */</span></span>
<span id="cb26-9"><a href="#cb26-9"></a>ai_float xx_data[B][H][W][C];</span></code></pre></div>
<div id="fig:tensor_3d" class="fignos">
<figure>
<img src="" property="center" style="width:100.0%" alt /><figcaption><span>Figure 3:</span> 3-D Tensor data layout</figcaption>
</figure>
</div>
</section>
</section>
<section id="ref_observer_api" class="level1">
<h1>Platform Observer API</h1>
<p>For advanced run-time, debug or profiling purposes, an AI client can register a call-back function to be notified before and/or after the execution of a c-node. As detailed in the <em>“C-graph description”</em> section from <a href="command_line_interface.html">[3]</a>, each node is identified by its execution index: <code>&#39;c-id&#39;</code>. The call-back can be used to measure the execution time and/or to dump the intermediate values.</p>
<section id="ref_cb_ex" class="level2">
<h2>User call-back registration for profiling use-case</h2>
<p>The previous <a href="#ref_quick_usage_code">minimal code snippet</a> is updated to register a basic call-back function that logs the number of used core cycles after each execution of a node (a more advanced implementation can be found in the <code>&#39;aiSystemPerformance.c&#39;</code> file).</p>
<div class="sourceCode" id="cb27"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb27-1"><a href="#cb27-1"></a><span class="pp">#include </span><span class="im">&quot;ai_platform_interface.h&quot;</span></span>
<span id="cb27-2"><a href="#cb27-2"></a>...</span>
<span id="cb27-3"><a href="#cb27-3"></a><span class="co">/*</span></span>
<span id="cb27-4"><a href="#cb27-4"></a><span class="co"> * Observer initialization</span></span>
<span id="cb27-5"><a href="#cb27-5"></a><span class="co"> */</span></span>
<span id="cb27-6"><a href="#cb27-6"></a></span>
<span id="cb27-7"><a href="#cb27-7"></a><span class="co">/* Minimal ctx to store the timestamp (before execution) */</span></span>
<span id="cb27-8"><a href="#cb27-8"></a><span class="kw">struct</span> u_observer_ctx {</span>
<span id="cb27-9"><a href="#cb27-9"></a>  <span class="dt">uint64_t</span> ts;</span>
<span id="cb27-10"><a href="#cb27-10"></a>  <span class="dt">uint32_t</span> n_events;</span>
<span id="cb27-11"><a href="#cb27-11"></a>};</span>
<span id="cb27-12"><a href="#cb27-12"></a></span>
<span id="cb27-13"><a href="#cb27-13"></a><span class="kw">struct</span> u_observer_ctx u_observer_ctx;</span>
<span id="cb27-14"><a href="#cb27-14"></a></span>
<span id="cb27-15"><a href="#cb27-15"></a><span class="dt">static</span> ai_u32 u_observer_cb(<span class="dt">const</span> ai_handle cookie,</span>
<span id="cb27-16"><a href="#cb27-16"></a>    <span class="dt">const</span> ai_u32 flags,</span>
<span id="cb27-17"><a href="#cb27-17"></a>    <span class="dt">const</span> ai_observer_node *node) {</span>
<span id="cb27-18"><a href="#cb27-18"></a></span>
<span id="cb27-19"><a href="#cb27-19"></a>  <span class="dt">uint64_t</span> ts = dwtGetCycles();  <span class="co">/* time stamp entry */</span></span>
<span id="cb27-20"><a href="#cb27-20"></a>  <span class="kw">struct</span> u_observer_ctx *ctx = (u_observer_ctx *)cookie;</span>
<span id="cb27-21"><a href="#cb27-21"></a></span>
<span id="cb27-22"><a href="#cb27-22"></a>  <span class="cf">if</span> (flags &amp; AI_OBSERVER_POST_EVT) {</span>
<span id="cb27-23"><a href="#cb27-23"></a>    printf(<span class="st">&quot;%d - cpu cycles: %lld</span><span class="sc">\r\n</span><span class="st">&quot;</span>, node-&gt;c_idx, ts - ctx-&gt;ts);</span>
<span id="cb27-24"><a href="#cb27-24"></a>    ctx-&gt;n_events++;</span>
<span id="cb27-25"><a href="#cb27-25"></a>  }</span>
<span id="cb27-26"><a href="#cb27-26"></a>  ctx-&gt;ts = dwtGetCycles(); <span class="co">/* time stamp exit */</span></span>
<span id="cb27-27"><a href="#cb27-27"></a>  <span class="cf">return</span> <span class="dv">0</span>;</span>
<span id="cb27-28"><a href="#cb27-28"></a>}</span>
<span id="cb27-29"><a href="#cb27-29"></a></span>
<span id="cb27-30"><a href="#cb27-30"></a><span class="co">/* Register a call-back to be notified before</span></span>
<span id="cb27-31"><a href="#cb27-31"></a><span class="co">   and after each executing of a c-node */</span></span>
<span id="cb27-32"><a href="#cb27-32"></a><span class="dt">int</span> aiObserverSetup() {</span>
<span id="cb27-33"><a href="#cb27-33"></a></span>
<span id="cb27-34"><a href="#cb27-34"></a>  <span class="cf">if</span> (!ai_platform_observer_register(network,</span>
<span id="cb27-35"><a href="#cb27-35"></a>     u_observer_cb, &amp;u_observer_ctx,</span>
<span id="cb27-36"><a href="#cb27-36"></a>     AI_OBSERVER_PRE_EVT | AI_OBSERVER_POST_EVT)) {</span>
<span id="cb27-37"><a href="#cb27-37"></a>    err = ai_network_get_error(network);</span>
<span id="cb27-38"><a href="#cb27-38"></a>    printf(<span class="st">&quot;E: AI ai_platform_observer_register error - type=%d code=%d</span><span class="sc">\r\n</span><span class="st">&quot;</span>, err.type, err.code);</span>
<span id="cb27-39"><a href="#cb27-39"></a>    <span class="cf">return</span> -<span class="dv">1</span>;</span>
<span id="cb27-40"><a href="#cb27-40"></a>  }</span>
<span id="cb27-41"><a href="#cb27-41"></a>  <span class="cf">return</span> <span class="dv">0</span>;</span>
<span id="cb27-42"><a href="#cb27-42"></a>}</span></code></pre></div>
<div class="Alert">
<p><strong>NOTE</strong> — As for the <code>&#39;ai_&lt;network&gt;_run()&#39;</code> function, the registered callback function is executed synchronously in the context of the caller.</p>
</div>
</section>
<section id="ref_node_info" class="level2">
<h2>Node-per-node inspection</h2>
<p>The <code>&#39;ai_platform_observer_node_info()&#39;</code> function can be used to walk through the executed C-graph structure, retrieving the tensor attributes node-per-node. A set of helper macros (<code>&#39;AI_TENSOR_XXX&#39;</code> from the <code>&#39;ai_platform_interface.h&#39;</code> file) should be used to retrieve or manipulate the returned tensor object: <code>&#39;t&#39;</code>.</p>
<div class="sourceCode" id="cb28"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb28-1"><a href="#cb28-1"></a><span class="pp">#include </span><span class="im">&quot;ai_platform_interface.h&quot;</span></span>
<span id="cb28-2"><a href="#cb28-2"></a></span>
<span id="cb28-3"><a href="#cb28-3"></a>{</span>
<span id="cb28-4"><a href="#cb28-4"></a>  ai_observer_node node_info;</span>
<span id="cb28-5"><a href="#cb28-5"></a>  ai_tensor_list *tl;</span>
<span id="cb28-6"><a href="#cb28-6"></a></span>
<span id="cb28-7"><a href="#cb28-7"></a>  node_info.c_idx = <span class="dv">0</span>; <span class="co">/* starting with the first node */</span></span>
<span id="cb28-8"><a href="#cb28-8"></a>  <span class="cf">while</span> (ai_platform_observer_node_info(network, &amp;node_info)) {</span>
<span id="cb28-9"><a href="#cb28-9"></a>    <span class="co">/* Check if the node is a &quot;Time Distributed&quot; operator. In this</span></span>
<span id="cb28-10"><a href="#cb28-10"></a><span class="co">     * case, weight/bias tensors are provided through the inner object</span></span>
<span id="cb28-11"><a href="#cb28-11"></a><span class="co">     * - node_info.inner_tensors != NULL condition can be also used</span></span>
<span id="cb28-12"><a href="#cb28-12"></a><span class="co">     */</span></span>
<span id="cb28-13"><a href="#cb28-13"></a>    <span class="dt">const</span> ai_bool is_time_dist = (node_info.type &amp; <span class="bn">0x8000</span> != <span class="dv">0</span>);</span>
<span id="cb28-14"><a href="#cb28-14"></a>    node_info.type &amp;= <span class="bn">0x7FFF</span>;</span>
<span id="cb28-15"><a href="#cb28-15"></a>    <span class="co">/* Retrieve the list of the input tensors */</span></span>
<span id="cb28-16"><a href="#cb28-16"></a>    tl = GET_TENSOR_LIST_IN(node_info.tensors);</span>
<span id="cb28-17"><a href="#cb28-17"></a>    <span class="cf">if</span> (tl) {</span>
<span id="cb28-18"><a href="#cb28-18"></a>      AI_FOR_EACH_TENSOR_LIST_DO(i, t, tl) {</span>
<span id="cb28-19"><a href="#cb28-19"></a>        ...</span>
<span id="cb28-20"><a href="#cb28-20"></a>      }</span>
<span id="cb28-21"><a href="#cb28-21"></a>    }</span>
<span id="cb28-22"><a href="#cb28-22"></a>    <span class="co">/* Retrieve the list of the output tensors */</span></span>
<span id="cb28-23"><a href="#cb28-23"></a>    tl = GET_TENSOR_LIST_OUT(node_info.tensors);</span>
<span id="cb28-24"><a href="#cb28-24"></a>    <span class="cf">if</span> (tl) {</span>
<span id="cb28-25"><a href="#cb28-25"></a>      AI_FOR_EACH_TENSOR_LIST_DO(i, t, tl) {</span>
<span id="cb28-26"><a href="#cb28-26"></a>        ...</span>
<span id="cb28-27"><a href="#cb28-27"></a>      }</span>
<span id="cb28-28"><a href="#cb28-28"></a>    }</span>
<span id="cb28-29"><a href="#cb28-29"></a>    <span class="co">/* Retrieve the list of the weight/bias tensors */</span></span>
<span id="cb28-30"><a href="#cb28-30"></a>    <span class="cf">if</span> (is_time_dist)</span>
<span id="cb28-31"><a href="#cb28-31"></a>      tl = GET_TENSOR_LIST_WEIGTHS(node_info.inner_tensors);</span>
<span id="cb28-32"><a href="#cb28-32"></a>    <span class="cf">else</span></span>
<span id="cb28-33"><a href="#cb28-33"></a>      tl = GET_TENSOR_LIST_WEIGTHS(node_info.tensors);</span>
<span id="cb28-34"><a href="#cb28-34"></a>    <span class="cf">if</span> (tl) {</span>
<span id="cb28-35"><a href="#cb28-35"></a>      AI_FOR_EACH_TENSOR_LIST_DO(i, t, tl) {</span>
<span id="cb28-36"><a href="#cb28-36"></a>        ...</span>
<span id="cb28-37"><a href="#cb28-37"></a>      }</span>
<span id="cb28-38"><a href="#cb28-38"></a>    }</span>
<span id="cb28-39"><a href="#cb28-39"></a>    <span class="co">/* Retrieve the list of the scratch tensors */</span></span>
<span id="cb28-40"><a href="#cb28-40"></a>    <span class="cf">if</span> (is_time_dist)</span>
<span id="cb28-41"><a href="#cb28-41"></a>      tl = GET_TENSOR_LIST_SCRATCH(node_info.inner_tensors);</span>
<span id="cb28-42"><a href="#cb28-42"></a>    <span class="cf">else</span></span>
<span id="cb28-43"><a href="#cb28-43"></a>      tl = GET_TENSOR_LIST_SCRATCH(node_info.tensors);</span>
<span id="cb28-44"><a href="#cb28-44"></a>    <span class="cf">if</span> (tl) {</span>
<span id="cb28-45"><a href="#cb28-45"></a>      AI_FOR_EACH_TENSOR_LIST_DO(i, t, tl) {</span>
<span id="cb28-46"><a href="#cb28-46"></a>        ...</span>
<span id="cb28-47"><a href="#cb28-47"></a>      }</span>
<span id="cb28-48"><a href="#cb28-48"></a>    }</span>
<span id="cb28-49"><a href="#cb28-49"></a>    node_info.c_idx++;</span>
<span id="cb28-50"><a href="#cb28-50"></a>  } <span class="co">/* end of the while loop */</span></span>
<span id="cb28-51"><a href="#cb28-51"></a>  ...</span>
<span id="cb28-52"><a href="#cb28-52"></a>}</span></code></pre></div>
<table>
<colgroup>
<col style="width: 50%"></col>
<col style="width: 50%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">macro</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>AI_TENSOR_ARRAY_BYTE_SIZE(t)</code></td>
<td style="text-align: left;">returns the size in byte of the data buffer.</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_TENSOR_ARRAY_GET_DATA_ADDR(t)</code></td>
<td style="text-align: left;">returns the effective address of the data buffer.</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_TENSOR_ARRAY_UPDATE_DATA_ADDR(t, addr)</code></td>
<td style="text-align: left;">set a new effective address. It should be 4-bytes aligned. Previous address is forgotten and not saved (see next section).</td>
</tr>
</tbody>
</table>
<div class="Warning">
<p><strong>Warning</strong> – <code>&#39;ai_platform_observer_node_info()&#39;</code> should be called with an initialized instance, to ensure that the internal runtime data structures (in particular the array objects which handle the tensor data) are fully initialized and ready to use.</p>
</div>
</section>
<section id="copy-before-run-use-case" class="level2">
<h2>Copy-before-run use-case</h2>
<p>Kernels from the network runtime library are designed to take into account flexible data placement, thanks to the use of scratch buffers or stack-based techniques. After a profiling session, and if a static placement approach (based on the <a href="#ref_split_weights"><code>&#39;--split-weights&#39;</code></a> option) is not sufficient or not adapted, it is also possible to improve the inference time by copying the critical weights/bias data buffers into a low-latency memory before running the network (<em>copy-before-run</em>).</p>
<p>The following code snippet illustrates the usage of a software “cache” memory to store the weights/bias of a specific critical layer before calling the <code>&#39;ai_&lt;name&gt;_run()&#39;</code> function. A specific compiler directive (tool-chain dependent) can be used to place the <code>&#39;_w_cache&#39;</code> object.</p>
<div class="sourceCode" id="cb29"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb29-1"><a href="#cb29-1"></a><span class="pp">#include </span><span class="im">&lt;string.h&gt;</span></span>
<span id="cb29-2"><a href="#cb29-2"></a><span class="pp">#include </span><span class="im">&quot;ai_platform_interface.h&quot;</span></span>
<span id="cb29-3"><a href="#cb29-3"></a></span>
<span id="cb29-4"><a href="#cb29-4"></a><span class="pp">#define ALIGN_UP(num, align) \</span></span>
<span id="cb29-5"><a href="#cb29-5"></a><span class="pp">    (((num) + ((align) - 1)) &amp; ~((align) - 1))</span></span>
<span id="cb29-6"><a href="#cb29-6"></a></span>
<span id="cb29-7"><a href="#cb29-7"></a>AI_ALIGN(<span class="dv">4</span>)</span>
<span id="cb29-8"><a href="#cb29-8"></a><span class="dt">static</span> ai_u8 _w_cache[XXX]; <span class="co">/* reserve buffer to cache the weights */</span></span>
<span id="cb29-9"><a href="#cb29-9"></a></span>
<span id="cb29-10"><a href="#cb29-10"></a><span class="dt">int</span> aiCacheWeights(<span class="dt">void</span>) {</span>
<span id="cb29-11"><a href="#cb29-11"></a>  ai_observer_node node_info;</span>
<span id="cb29-12"><a href="#cb29-12"></a>  node_info.c_idx = ID; <span class="co">/* index of the critical node */</span></span>
<span id="cb29-13"><a href="#cb29-13"></a>  <span class="cf">if</span> (ai_platform_observer_node_info(network, &amp;node_info)) {</span>
<span id="cb29-14"><a href="#cb29-14"></a>    ai_tensor_list *tl;</span>
<span id="cb29-15"><a href="#cb29-15"></a>    tl = GET_TENSOR_LIST_WEIGTHS(node_info.tensors);</span>
<span id="cb29-16"><a href="#cb29-16"></a>    <span class="dt">uintptr_t</span> dst_addr = (<span class="dt">uintptr_t</span>)&amp;_w_cache[<span class="dv">0</span>];</span>
<span id="cb29-17"><a href="#cb29-17"></a>    AI_FOR_EACH_TENSOR_LIST_DO(i, t, tl) {</span>
<span id="cb29-18"><a href="#cb29-18"></a>        <span class="co">/* Retrieve the @/size of the data */</span></span>
<span id="cb29-19"><a href="#cb29-19"></a>        <span class="dt">const</span> <span class="dt">uintptr_t</span> src_addr = (<span class="dt">uintptr_t</span>)AI_TENSOR_ARRAY_GET_DATA_ADDR(t);</span>
<span id="cb29-20"><a href="#cb29-20"></a>        <span class="dt">const</span> ai_size sz = AI_TENSOR_ARRAY_BYTE_SIZE(t);</span>
<span id="cb29-21"><a href="#cb29-21"></a>        <span class="co">/* Copy the dta tensor in the SW cache */</span></span>
<span id="cb29-22"><a href="#cb29-22"></a>        memcpy(dst_addr, src_addr, sz);</span>
<span id="cb29-23"><a href="#cb29-23"></a>        <span class="co">/* set the new effective address */</span></span>
<span id="cb29-24"><a href="#cb29-24"></a>        AI_TENSOR_ARRAY_UPDATE_DATA_ADDR(t, dst_addr);</span>
<span id="cb29-25"><a href="#cb29-25"></a>        dst_addr += ALIGN_UP(sz, <span class="dv">4</span>);</span>
<span id="cb29-26"><a href="#cb29-26"></a>      }</span>
<span id="cb29-27"><a href="#cb29-27"></a>  }</span>
<span id="cb29-28"><a href="#cb29-28"></a>  <span class="cf">return</span> <span class="dv">0</span>;</span>
<span id="cb29-29"><a href="#cb29-29"></a>}</span></code></pre></div>
</section>
<section id="ref_dump_output" class="level2">
<h2>Dumping intermediate output use-case</h2>
<p>The following code snippet illustrates a simple call-back to dump the output of a given internal layer <code>&#39;C_ID&#39;</code>. The internal tensor description is converted to an <code>&#39;ai_buffer&#39;</code>-type object.</p>
<div class="sourceCode" id="cb30"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb30-1"><a href="#cb30-1"></a><span class="pp">#include </span><span class="im">&quot;ai_platform_interface.h&quot;</span></span>
<span id="cb30-2"><a href="#cb30-2"></a>...</span>
<span id="cb30-3"><a href="#cb30-3"></a></span>
<span id="cb30-4"><a href="#cb30-4"></a><span class="pp">#define C_ID (12)  </span><span class="co">/* c-id of the operator which must be dumped */</span></span>
<span id="cb30-5"><a href="#cb30-5"></a></span>
<span id="cb30-6"><a href="#cb30-6"></a><span class="dt">static</span> ai_u32 u_observer_cb(<span class="dt">const</span> ai_handle cookie,</span>
<span id="cb30-7"><a href="#cb30-7"></a>    <span class="dt">const</span> ai_u32 flags,</span>
<span id="cb30-8"><a href="#cb30-8"></a>    <span class="dt">const</span> ai_observer_node *node) {</span>
<span id="cb30-9"><a href="#cb30-9"></a></span>
<span id="cb30-10"><a href="#cb30-10"></a>    ai_tensor_list *tl;</span>
<span id="cb30-11"><a href="#cb30-11"></a></span>
<span id="cb30-12"><a href="#cb30-12"></a>    <span class="cf">if</span> (node-&gt;c_idx == C_ID) {</span>
<span id="cb30-13"><a href="#cb30-13"></a>      tl = GET_TENSOR_LIST_OUT(node-&gt;tensors);</span>
<span id="cb30-14"><a href="#cb30-14"></a>      AI_FOR_EACH_TENSOR_LIST_DO(i, t, tl) {</span>
<span id="cb30-15"><a href="#cb30-15"></a>          <span class="co">/* Currently, only ONE output is supported */</span></span>
<span id="cb30-16"><a href="#cb30-16"></a>          ai_buffer buffer;</span>
<span id="cb30-17"><a href="#cb30-17"></a>          ai_float scale = AI_TENSOR_INTEGER_GET_SCALE(t, <span class="dv">0</span>);</span>
<span id="cb30-18"><a href="#cb30-18"></a>          ai_i32 zero_point = <span class="dv">0</span>;</span>
<span id="cb30-19"><a href="#cb30-19"></a></span>
<span id="cb30-20"><a href="#cb30-20"></a>          <span class="cf">if</span> (AI_TENSOR_FMT_GET_SIGN(t))</span>
<span id="cb30-21"><a href="#cb30-21"></a>            zero_point = AI_TENSOR_INTEGER_GET_ZEROPOINT_I8(t, <span class="dv">0</span>);</span>
<span id="cb30-22"><a href="#cb30-22"></a>          <span class="cf">else</span></span>
<span id="cb30-23"><a href="#cb30-23"></a>            zero_point = AI_TENSOR_INTEGER_GET_ZEROPOINT_U8(t, <span class="dv">0</span>);</span>
<span id="cb30-24"><a href="#cb30-24"></a></span>
<span id="cb30-25"><a href="#cb30-25"></a>          buffer.format = AI_TENSOR_GET_FMT(t);</span>
<span id="cb30-26"><a href="#cb30-26"></a>          buffer.n_batches = <span class="dv">1</span>;</span>
<span id="cb30-27"><a href="#cb30-27"></a>          buffer.data = AI_TENSOR_ARRAY_GET_DATA_ADDR(t);</span>
<span id="cb30-28"><a href="#cb30-28"></a>          buffer.height = AI_SHAPE_H(AI_TENSOR_SHAPE(t));</span>
<span id="cb30-29"><a href="#cb30-29"></a>          buffer.width = AI_SHAPE_W(AI_TENSOR_SHAPE(t));</span>
<span id="cb30-30"><a href="#cb30-30"></a>          buffer.channels = AI_SHAPE_CH(AI_TENSOR_SHAPE(t));</span>
<span id="cb30-31"><a href="#cb30-31"></a>          buffer.meta_info = NULL;</span>
<span id="cb30-32"><a href="#cb30-32"></a>          ...</span>
<span id="cb30-33"><a href="#cb30-33"></a>        }</span>
<span id="cb30-34"><a href="#cb30-34"></a>      }</span>
<span id="cb30-35"><a href="#cb30-35"></a></span>
<span id="cb30-36"><a href="#cb30-36"></a></span>
<span id="cb30-37"><a href="#cb30-37"></a>  <span class="cf">return</span> <span class="dv">0</span>;</span>
<span id="cb30-38"><a href="#cb30-38"></a>}</span>
<span id="cb30-39"><a href="#cb30-39"></a></span>
<span id="cb30-40"><a href="#cb30-40"></a>...</span>
<span id="cb30-41"><a href="#cb30-41"></a><span class="co">/* registered call-back is only raised for the POST event */</span></span>
<span id="cb30-42"><a href="#cb30-42"></a>ai_platform_observer_register(network,</span>
<span id="cb30-43"><a href="#cb30-43"></a>     u_observer_cb, &amp;u_observer_ctx, AI_OBSERVER_POST_EVT);</span>
<span id="cb30-44"><a href="#cb30-44"></a>...</span></code></pre></div>
</section>
<section id="ref_notify_input" class="level2">
<h2>End-of-process input buffer notification use-case</h2>
<p>The following code snippet illustrates a simple call-back to notify the application when the input buffer has been processed by a given layer <code>&#39;C_ID&#39;</code>. <strong>Warning</strong>: the input buffer should not be allocated inside the activations buffer, otherwise there is no guarantee that the memory region is not reused by the other operators before the end of the inference. This can be useful to start a HW capture process (DMA-based) before the end of the inference.</p>
<div class="sourceCode" id="cb31"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb31-1"><a href="#cb31-1"></a><span class="pp">#include </span><span class="im">&quot;ai_platform_interface.h&quot;</span></span>
<span id="cb31-2"><a href="#cb31-2"></a>...</span>
<span id="cb31-3"><a href="#cb31-3"></a></span>
<span id="cb31-4"><a href="#cb31-4"></a><span class="pp">#define C_ID (0)  </span><span class="co">/* c-id of the operator which processes the input buffer */</span></span>
<span id="cb31-5"><a href="#cb31-5"></a></span>
<span id="cb31-6"><a href="#cb31-6"></a><span class="dt">static</span> ai_u32 u_observer_cb(<span class="dt">const</span> ai_handle cookie,</span>
<span id="cb31-7"><a href="#cb31-7"></a>    <span class="dt">const</span> ai_u32 flags,</span>
<span id="cb31-8"><a href="#cb31-8"></a>    <span class="dt">const</span> ai_observer_node *node) {</span>
<span id="cb31-9"><a href="#cb31-9"></a></span>
<span id="cb31-10"><a href="#cb31-10"></a>    <span class="cf">if</span> (node-&gt;c_idx == C_ID) {</span>
<span id="cb31-11"><a href="#cb31-11"></a>      <span class="co">/* start a new capture process to fill the input buffer before the</span></span>
<span id="cb31-12"><a href="#cb31-12"></a><span class="co">         end of the inference */</span></span>
<span id="cb31-13"><a href="#cb31-13"></a>        ...</span>
<span id="cb31-14"><a href="#cb31-14"></a>    }</span>
<span id="cb31-15"><a href="#cb31-15"></a></span>
<span id="cb31-16"><a href="#cb31-16"></a>  <span class="cf">return</span> <span class="dv">0</span>;</span>
<span id="cb31-17"><a href="#cb31-17"></a>}</span>
<span id="cb31-18"><a href="#cb31-18"></a></span>
<span id="cb31-19"><a href="#cb31-19"></a>...</span>
<span id="cb31-20"><a href="#cb31-20"></a><span class="co">/* registered call-back is only raised for the POST event */</span></span>
<span id="cb31-21"><a href="#cb31-21"></a>ai_platform_observer_register(network,</span>
<span id="cb31-22"><a href="#cb31-22"></a>     u_observer_cb, &amp;u_observer_ctx, AI_OBSERVER_POST_EVT);</span>
<span id="cb31-23"><a href="#cb31-23"></a>...</span></code></pre></div>
</section>
<section id="ref_obs_node" class="level2">
<h2>“ai_observer_node” definition</h2>
<p>The <code>&#39;ai_platform_observer_node_info()&#39;</code> function and the registered call-back function use the <code>&#39;ai_observer_node&#39;</code> data structure to report the tensor attributes of a given node, identified by <code>&#39;c_idx&#39;</code>.</p>
<div class="sourceCode" id="cb32"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb32-1"><a href="#cb32-1"></a><span class="co">/* @file ai_platform_interface.h */</span></span>
<span id="cb32-2"><a href="#cb32-2"></a></span>
<span id="cb32-3"><a href="#cb32-3"></a><span class="kw">typedef</span> <span class="kw">struct</span> ai_observer_node_s {</span>
<span id="cb32-4"><a href="#cb32-4"></a>  ai_u16            c_idx;   <span class="co">/*!&lt; node index (position in the execution list) */</span></span>
<span id="cb32-5"><a href="#cb32-5"></a>  ai_u16            type;    <span class="co">/*!&lt; node type info */</span></span>
<span id="cb32-6"><a href="#cb32-6"></a>  ai_u16            id;      <span class="co">/*!&lt; node id assigned by code generator to reference the model layer */</span></span>
<span id="cb32-7"><a href="#cb32-7"></a>  ai_u16            unused;  <span class="co">/*!&lt; unused field for alignment */</span></span>
<span id="cb32-8"><a href="#cb32-8"></a>  <span class="dt">const</span> ai_tensor_chain* inner_tensors; <span class="co">/*!&lt; pointer to the inner tensors if available */</span></span>
<span id="cb32-9"><a href="#cb32-9"></a>  <span class="dt">const</span> ai_tensor_chain* tensors;       <span class="co">/*!&lt; pointer to a 4-element array ([I], [O], [W], [S] tensor lists) */</span></span>
<span id="cb32-10"><a href="#cb32-10"></a>} ai_observer_node;</span></code></pre></div>
<table style="width:99%;">
<colgroup>
<col style="width: 33%"></col>
<col style="width: 65%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">field</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">c_idx</td>
<td style="text-align: left;">index of the associated c-node (also called c-id)</td>
</tr>
<tr class="even">
<td style="text-align: left;">type</td>
<td style="text-align: left;">type of the c-operator (see <code>layers_list.h</code> file: <code>100XX</code> values).</td>
</tr>
<tr class="odd">
<td style="text-align: left;">id</td>
<td style="text-align: left;">index of the original operator from the imported model.</td>
</tr>
<tr class="even">
<td style="text-align: left;">tensors</td>
<td style="text-align: left;">entry point to retrieve the list of [I], [O], [W] and [S] tensors.</td>
</tr>
<tr class="odd">
<td style="text-align: left;">inner_tensors</td>
<td style="text-align: left;">if the operator is a “Time Distributed” operator, [W] and [S] tensors are returned through this entry, else this field is NULL.</td>
</tr>
</tbody>
</table>
<p>If <code>&#39;(type &amp; 0x8000) != 0&#39;</code>, the associated operator is a “Time Distributed” operator and the <code>&#39;tensors&#39;</code> and <code>&#39;inner_tensors&#39;</code> fields should both be used to retrieve all of the tensors: [I], [O], [W] and [S] lists (see <a href="#ref_node_info">“Node-by-node inspection”</a> section).</p>
<div class="Warning">
<p><strong>Limitation</strong> — the <code>&#39;inner_tensors&#39;</code> field is always NULL and the most significant bit of <code>&#39;type&#39;</code> is not updated when the call-back is called.</p>
</div>
</section>
<section id="ai_platform_observer_node_info" class="level2">
<h2><code>ai_platform_observer_node_info()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_bool ai_platform_observer_node_info(</span>
<span id="func-2"><a href="#func-2"></a>    ai_handle network, ai_observer_node *node_info);</span></code></pre></div>
<p>This function populates the referenced <a href="#ref_obs_node"><code>&#39;ai_observer_node&#39;</code></a> structure with the node and associated tensor attributes. The requested node index is defined through the <code>&#39;node_info.c_idx&#39;</code> field. If the <code>&#39;network&#39;</code> parameter is not a valid network instance or the index is out-of-range, <code>&#39;ai_false&#39;</code> is returned.</p>
</section>
<section id="ai_platform_observer_register" class="level2">
<h2><code>ai_platform_observer_register()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_bool ai_platform_observer_register(</span>
<span id="func-2"><a href="#func-2"></a>    ai_handle network,</span>
<span id="func-3"><a href="#func-3"></a>    ai_observer_node_cb cb,</span>
<span id="func-4"><a href="#func-4"></a>    ai_handle cookie,</span>
<span id="func-5"><a href="#func-5"></a>    ai_u32 flags);</span>
<span id="func-6"><a href="#func-6"></a>ai_bool ai_platform_observer_unregister(ai_handle network,</span>
<span id="func-7"><a href="#func-7"></a>    ai_observer_node_cb cb, ai_handle cookie);</span></code></pre></div>
<p>This function registers a user call-back function. Only one call-back can be registered at a time per network instance.</p>
<ul>
<li><code>&#39;cb&#39;</code> pointer to a user callback function (see <a href="#ref_cb_ex">“User call-back registration”</a> code snippet)</li>
<li><code>&#39;cookie&#39;</code> reference to a user context/object, which is passed back to the call-back unmodified.</li>
<li><code>&#39;flags&#39;</code> bit-wise mask indicating the type of requested events.</li>
</ul>
<table>
<colgroup>
<col style="width: 38%"></col>
<col style="width: 61%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">flags</th>
<th style="text-align: left;">event type</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>AI_OBSERVER_INIT_EVT</code></td>
<td style="text-align: left;">initialization (at the end of the call of <a href="#ref_api_init"><code>&#39;ai_&lt;name&gt;_init()&#39;</code></a>)</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_OBSERVER_PRE_EVT</code></td>
<td style="text-align: left;">before the execution of the kernel (during the call of <a href="#ref_api_run"><code>&#39;ai_&lt;name&gt;_run()&#39;</code></a>)</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>AI_OBSERVER_POST_EVT</code></td>
<td style="text-align: left;">after the execution of the kernel (during the call of <a href="#ref_api_run"><code>&#39;ai_&lt;name&gt;_run()&#39;</code></a>)</td>
</tr>
</tbody>
</table>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a><span class="kw">typedef</span> ai_u32 (*ai_observer_node_cb)(<span class="at">const</span> ai_handle cookie,</span>
<span id="func-2"><a href="#func-2"></a>    <span class="at">const</span> ai_u32 flags,</span>
<span id="func-3"><a href="#func-3"></a>    <span class="at">const</span> ai_observer_node *node);</span>
<p>When the call-back is called, the previous <code>&#39;flags&#39;</code> event types are extended with the following values:</p>
<table>
<colgroup>
<col style="width: 38%"></col>
<col style="width: 61%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">flags</th>
<th style="text-align: left;">event type</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>AI_OBSERVER_FIRST_EVT</code></td>
<td style="text-align: left;">event related to the first node.</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_OBSERVER_LAST_EVT</code></td>
<td style="text-align: left;">event related to the last node.</td>
</tr>
</tbody>
</table>
</section>
</section>
<section id="references" class="level1">
<h1>References</h1>
<table style="width:92%;">
<colgroup>
<col style="width: 13%"></col>
<col style="width: 77%"></col>
</colgroup>
<tbody>
<tr class="odd">
<td style="text-align: left;">[1]</td>
<td style="text-align: left;">X-CUBE-AI - <em>AI expansion pack for STM32CubeMX</em><br />
<a href="https://www.st.com/en/embedded-software/x-cube-ai.html">https://www.st.com/en/embedded-software/x-cube-ai.html</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[2]</td>
<td style="text-align: left;">User manual - Getting started with X-CUBE-AI Expansion Package for Artificial Intelligence (AI) <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">(pdf)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[3]</td>
<td style="text-align: left;">stm32ai - Command Line Interface <a href="command_line_interface.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[4]</td>
<td style="text-align: left;">Supported Deep Learning toolboxes and layers <a href="layer-support.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[5]</td>
<td style="text-align: left;">Embedded inference client API <a href="embedded_client_api.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[6]</td>
<td style="text-align: left;">Evaluation report and metrics <a href="evaluation_metrics.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[7]</td>
<td style="text-align: left;">FAQs <a href="faqs.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[8]</td>
<td style="text-align: left;">Quantization and quantize command <a href="quantization.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[9]</td>
<td style="text-align: left;">Relocatable binary network support <a href="relocatable.html">(link)</a></td>
</tr>
</tbody>
</table>
</section>
<section id="revision-history" class="level1">
<h1>Revision history</h1>
<table>
<colgroup>
<col style="width: 32%"></col>
<col style="width: 24%"></col>
<col style="width: 44%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">Date</th>
<th style="text-align: left;">version</th>
<th style="text-align: left;">changes</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><strong>2019-06-14</strong></td>
<td style="text-align: left;">r1.0</td>
<td style="text-align: left;">initial version</td>
</tr>
<tr class="even">
<td style="text-align: left;"><strong>2019-09-20</strong></td>
<td style="text-align: left;">r1.1</td>
<td style="text-align: left;">X-CUBE-AI 4.1 update, add integer arithmetic support for quantized data.</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><strong>2019-12-03</strong></td>
<td style="text-align: left;">r1.2</td>
<td style="text-align: left;">X-CUBE-AI 5.0 update, add allocate-inputs and multiple IO snippet code</td>
</tr>
<tr class="even">
<td style="text-align: left;"><strong>2020-05-18</strong></td>
<td style="text-align: left;">r2.0</td>
<td style="text-align: left;">X-CUBE-AI 5.1 update, add platform observer API, add/complete data placement section, minor re-work and clean-up</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><strong>2020-06-11</strong></td>
<td style="text-align: left;">r2.1</td>
<td style="text-align: left;">minor - add warning about the CRC IP clock</td>
</tr>
<tr class="even">
<td style="text-align: left;"><strong>2020-09-15</strong></td>
<td style="text-align: left;">r2.2</td>
<td style="text-align: left;">X-CUBE-AI 5.2 update, allocate-outputs support, add new observer UCs, add some words about the re-entrance and debug aspects, fix typo</td>
</tr>
</tbody>
</table>
</section>



<section class="st_footer">

<h1> <br> </h1>

<p style="font-family:verdana; text-align:left;">
 Embedded Documentation 

	- <b> Embedded Inference Client API </b>
			<br> X-CUBE-AI Expansion Package
				<br> r2.2
		 - AI PLATFORM r5.2.0
			 (Embedded Inference Client API 1.1.0) 
			 - Command Line Interface r1.4.0 
		
	
</p>

<img src="" title="ST logo" align="right" height="100" />

<div class="stnotice">
Information in this document is provided solely in connection with ST products.
The contents of this document are subject to change without prior notice.
<br>
© Copyright STMicroelectronics 2020. All rights reserved. <a href="http://www.st.com">www.st.com</a>
</div>

<hr size="1" />
</section>


</article>
</body>

</html>
