<!DOCTYPE html>
<!--

	Modified template for STM32CubeMX.AI purpose

	d0.1: 	jean-michel.delorme@st.com
			add ST logo and ST footer

	d2.0: 	jean-michel.delorme@st.com
			add sidenav support

	d2.1: 	jean-michel.delorme@st.com
			clean-up + optional ai_logo/ai meta data
			
==============================================================================
           "GitHub HTML5 Pandoc Template" v2.1 — by Tristano Ajmone           
==============================================================================
Copyright © Tristano Ajmone, 2017, MIT License (MIT). Project's home:

- https://github.com/tajmone/pandoc-goodies

The CSS in this template reuses source code taken from the following projects:

- GitHub Markdown CSS: Copyright © Sindre Sorhus, MIT License (MIT):
  https://github.com/sindresorhus/github-markdown-css

- Primer CSS: Copyright © 2016-2017 GitHub Inc., MIT License (MIT):
  http://primercss.io/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The MIT License 

Copyright (c) Tristano Ajmone, 2017 (github.com/tajmone/pandoc-goodies)
Copyright (c) Sindre Sorhus <sindresorhus@gmail.com> (sindresorhus.com)
Copyright (c) 2017 GitHub Inc.

"GitHub Pandoc HTML5 Template" is Copyright (c) Tristano Ajmone, 2017, released
under the MIT License (MIT); it contains readaptations of substantial portions
of the following third party softwares:

(1) "GitHub Markdown CSS", Copyright (c) Sindre Sorhus, MIT License (MIT).
(2) "Primer CSS", Copyright (c) 2016 GitHub Inc., MIT License (MIT).

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
==============================================================================-->
<html>
<head>
  <meta charset="utf-8" />
  <meta name="generator" content="pandoc" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
  <meta name="keywords" content="STM32CubeMX, X-CUBE-AI, Neural Network, Relocatable model, CLI, Code Generator" />
  <title>Relocatable binary model support</title>
  <style type="text/css">
.markdown-body{
	-ms-text-size-adjust:100%;
	-webkit-text-size-adjust:100%;
	color:#24292e;
	font-family:-apple-system,system-ui,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol";
	font-size:16px;
	line-height:1.5;
	word-wrap:break-word;
	box-sizing:border-box;
	min-width:200px;
	max-width:980px;
	margin:0 auto;
	padding:45px;
	}
.markdown-body a{
	color:#0366d6;
	background-color:transparent;
	text-decoration:none;
	-webkit-text-decoration-skip:objects}
.markdown-body a:active,.markdown-body a:hover{
	outline-width:0}
.markdown-body a:hover{
	text-decoration:underline}
.markdown-body a:not([href]){
	color:inherit;text-decoration:none}
.markdown-body strong{font-weight:600}
.markdown-body h1,.markdown-body h2,.markdown-body h3,.markdown-body h4,.markdown-body h5,.markdown-body h6{
	margin-top:24px;
	margin-bottom:16px;
	font-weight:600;
	line-height:1.25}
.markdown-body h1{
	font-size:2em;
	margin:.67em 0;
	padding-bottom:.3em;
	border-bottom:1px solid #eaecef}
.markdown-body h2{
	padding-bottom:.3em;
	font-size:1.5em;
	border-bottom:1px solid #eaecef}
.markdown-body h3{font-size:1.25em}
.markdown-body h4{font-size:1em}
.markdown-body h5{font-size:.875em}
.markdown-body h6{font-size:.85em;color:#6a737d}
.markdown-body img{border-style:none}
.markdown-body svg:not(:root){
	overflow:hidden}
.markdown-body hr{
	box-sizing:content-box;
	height:.25em;
	margin:24px 0;
	padding:0;
	overflow:hidden;
	background-color:#e1e4e8;
	border:0}
.markdown-body hr::before{display:table;content:""}
.markdown-body hr::after{display:table;clear:both;content:""}
.markdown-body input{margin:0;overflow:visible;font:inherit;font-family:inherit;font-size:inherit;line-height:inherit}
.markdown-body [type=checkbox]{box-sizing:border-box;padding:0}
.markdown-body *{box-sizing:border-box}.markdown-body blockquote{margin:0}
.markdown-body ol,.markdown-body ul{padding-left:2em}
.markdown-body ol ol,.markdown-body ul ol{list-style-type:lower-roman}
.markdown-body ol ol,.markdown-body ol ul,.markdown-body ul ol,.markdown-body ul ul{margin-top:0;margin-bottom:0}
.markdown-body ol ol ol,.markdown-body ol ul ol,.markdown-body ul ol ol,.markdown-body ul ul ol{list-style-type:lower-alpha}
.markdown-body li>p{margin-top:16px}
.markdown-body li+li{margin-top:.25em}
.markdown-body dd{margin-left:0}
.markdown-body dl{padding:0}
.markdown-body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:600}
.markdown-body dl dd{padding:0 16px;margin-bottom:16px}
.markdown-body code{font-family:SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace}
.markdown-body pre{font:12px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;word-wrap:normal}
.markdown-body blockquote,.markdown-body dl,.markdown-body ol,.markdown-body p,.markdown-body pre,.markdown-body table,.markdown-body ul{margin-top:0;margin-bottom:16px}
.markdown-body blockquote{padding:0 1em;color:#6a737d;border-left:.25em solid #dfe2e5}
.markdown-body blockquote>:first-child{margin-top:0}
.markdown-body blockquote>:last-child{margin-bottom:0}
.markdown-body table{display:block;width:100%;overflow:auto;border-spacing:0;border-collapse:collapse}
.markdown-body table th{font-weight:600}
.markdown-body table td,.markdown-body table th{padding:6px 13px;border:1px solid #dfe2e5}
.markdown-body table tr{background-color:#fff;border-top:1px solid #c6cbd1}
.markdown-body table tr:nth-child(2n){background-color:#f6f8fa}
.markdown-body img{max-width:100%;box-sizing:content-box;background-color:#fff}
.markdown-body code{padding:.2em 0;margin:0;font-size:85%;background-color:rgba(27,31,35,.05);border-radius:3px}
.markdown-body code::after,.markdown-body code::before{letter-spacing:-.2em;content:"\00a0"}
.markdown-body pre>code{padding:0;margin:0;font-size:100%;word-break:normal;white-space:pre;background:0 0;border:0}
.markdown-body .highlight{margin-bottom:16px}
.markdown-body .highlight pre{margin-bottom:0;word-break:normal}
.markdown-body .highlight pre,.markdown-body pre{padding:16px;overflow:auto;font-size:85%;line-height:1.45;background-color:#f6f8fa;border-radius:3px}
.markdown-body pre code{display:inline;max-width:auto;padding:0;margin:0;overflow:visible;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}
.markdown-body pre code::after,.markdown-body pre code::before{content:normal}
.markdown-body .full-commit .btn-outline:not(:disabled):hover{color:#005cc5;border-color:#005cc5}
.markdown-body kbd{box-shadow:inset 0 -1px 0 #959da5;display:inline-block;padding:3px 5px;font:11px/10px SFMono-Regular,Consolas,"Liberation Mono",Menlo,Courier,monospace;color:#444d56;vertical-align:middle;background-color:#fcfcfc;border:1px solid #c6cbd1;border-bottom-color:#959da5;border-radius:3px;box-shadow:inset 0 -1px 0 #959da5}
.markdown-body :checked+.radio-label{position:relative;z-index:1;border-color:#0366d6}
.markdown-body .task-list-item{list-style-type:none}
.markdown-body .task-list-item+.task-list-item{margin-top:3px}
.markdown-body .task-list-item input{margin:0 .2em .25em -1.6em;vertical-align:middle}
.markdown-body::before{display:table;content:""}
.markdown-body::after{display:table;clear:both;content:""}
.markdown-body>:first-child{margin-top:0!important}
.markdown-body>:last-child{margin-bottom:0!important}
.Alert,.Error,.Note,.Success,.Warning{padding:11px;margin-bottom:24px;border-style:solid;border-width:1px;border-radius:4px}
.Alert p,.Error p,.Note p,.Success p,.Warning p{margin-top:0}
.Alert p:last-child,.Error p:last-child,.Note p:last-child,.Success p:last-child,.Warning p:last-child{margin-bottom:0}
.Alert{color:#246;background-color:#e2eef9;border-color:#bac6d3}
.Warning{color:#4c4a42;background-color:#fff9ea;border-color:#dfd8c2}
.Error{color:#911;background-color:#fcdede;border-color:#d2b2b2}
.Success{color:#22662c;background-color:#e2f9e5;border-color:#bad3be}
.Note{color:#2f363d;background-color:#f6f8fa;border-color:#d5d8da}
.Alert h1,.Alert h2,.Alert h3,.Alert h4,.Alert h5,.Alert h6{color:#246;margin-bottom:0}
.Warning h1,.Warning h2,.Warning h3,.Warning h4,.Warning h5,.Warning h6{color:#4c4a42;margin-bottom:0}
.Error h1,.Error h2,.Error h3,.Error h4,.Error h5,.Error h6{color:#911;margin-bottom:0}
.Success h1,.Success h2,.Success h3,.Success h4,.Success h5,.Success h6{color:#22662c;margin-bottom:0}
.Note h1,.Note h2,.Note h3,.Note h4,.Note h5,.Note h6{color:#2f363d;margin-bottom:0}
.Alert h1:first-child,.Alert h2:first-child,.Alert h3:first-child,.Alert h4:first-child,.Alert h5:first-child,.Alert h6:first-child,.Error h1:first-child,.Error h2:first-child,.Error h3:first-child,.Error h4:first-child,.Error h5:first-child,.Error h6:first-child,.Note h1:first-child,.Note h2:first-child,.Note h3:first-child,.Note h4:first-child,.Note h5:first-child,.Note h6:first-child,.Success h1:first-child,.Success h2:first-child,.Success h3:first-child,.Success h4:first-child,.Success h5:first-child,.Success h6:first-child,.Warning h1:first-child,.Warning h2:first-child,.Warning h3:first-child,.Warning h4:first-child,.Warning h5:first-child,.Warning h6:first-child{margin-top:0}
h1.title,p.subtitle{text-align:center}
h1.title.followed-by-subtitle{margin-bottom:0}
p.subtitle{font-size:1.5em;font-weight:600;line-height:1.25;margin-top:0;margin-bottom:16px;padding-bottom:.3em}
div.line-block{white-space:pre-line}
  </style>
  <style type="text/css">code{white-space: pre;}</style>
  <style type="text/css">
	code.sourceCode > span { display: inline-block; line-height: 1.25; }
code.sourceCode > span { color: inherit; text-decoration: inherit; }
code.sourceCode > span:empty { height: 1.2em; }
.sourceCode { overflow: visible; }
code.sourceCode { white-space: pre; position: relative; }
div.sourceCode { margin: 1em 0; }
pre.sourceCode { margin: 0; }
@media screen {
div.sourceCode { overflow: auto; }
}
@media print {
code.sourceCode { white-space: pre-wrap; }
code.sourceCode > span { text-indent: -5em; padding-left: 5em; }
}
pre.numberSource code
  { counter-reset: source-line 0; }
pre.numberSource code > span
  { position: relative; left: -4em; counter-increment: source-line; }
pre.numberSource code > span > a:first-child::before
  { content: counter(source-line);
    position: relative; left: -1em; text-align: right; vertical-align: baseline;
    border: none; display: inline-block;
    -webkit-touch-callout: none; -webkit-user-select: none;
    -khtml-user-select: none; -moz-user-select: none;
    -ms-user-select: none; user-select: none;
    padding: 0 4px; width: 4em;
    color: #aaaaaa;
  }
pre.numberSource { margin-left: 3em; border-left: 1px solid #aaaaaa;  padding-left: 4px; }
div.sourceCode
  {   }
@media screen {
code.sourceCode > span > a:first-child::before { text-decoration: underline; }
}
code span.al { color: #ff0000; font-weight: bold; } /* Alert */
code span.an { color: #60a0b0; font-weight: bold; font-style: italic; } /* Annotation */
code span.at { color: #7d9029; } /* Attribute */
code span.bn { color: #40a070; } /* BaseN */
code span.bu { } /* BuiltIn */
code span.cf { color: #007020; font-weight: bold; } /* ControlFlow */
code span.ch { color: #4070a0; } /* Char */
code span.cn { color: #880000; } /* Constant */
code span.co { color: #60a0b0; font-style: italic; } /* Comment */
code span.cv { color: #60a0b0; font-weight: bold; font-style: italic; } /* CommentVar */
code span.do { color: #ba2121; font-style: italic; } /* Documentation */
code span.dt { color: #902000; } /* DataType */
code span.dv { color: #40a070; } /* DecVal */
code span.er { color: #ff0000; font-weight: bold; } /* Error */
code span.ex { } /* Extension */
code span.fl { color: #40a070; } /* Float */
code span.fu { color: #06287e; } /* Function */
code span.im { } /* Import */
code span.in { color: #60a0b0; font-weight: bold; font-style: italic; } /* Information */
code span.kw { color: #007020; font-weight: bold; } /* Keyword */
code span.op { color: #666666; } /* Operator */
code span.ot { color: #007020; } /* Other */
code span.pp { color: #bc7a00; } /* Preprocessor */
code span.sc { color: #4070a0; } /* SpecialChar */
code span.ss { color: #bb6688; } /* SpecialString */
code span.st { color: #4070a0; } /* String */
code span.va { color: #19177c; } /* Variable */
code span.vs { color: #4070a0; } /* VerbatimString */
code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } /* Warning */
  </style>
  <style type="text/css">:root { --main-hx-color: rgb(0,32,88); --sidenav-font-size: 90%;}html {}* {xbox-sizing: border-box;}.st_header h1.title,.st_header p.subtitle {text-align: left;}.st_header h1.title {color: var(--main-hx-color)}.st_header p.subtitle {color: var(--main-hx-color)}.st_header h1.title.followed-by-subtitle {margin-bottom:5px;}.st_header p.revision {display: inline-block;width:70%;}.st_header div.author {font-style: italic;}.st_header div.summary {border-top: solid 1px #C0C0C0;background: #ECECEC;padding: 5px;}.st_footer img {float: right;}.markdown-body #header-section-number {font-size:120%;}.markdown-body h1 {border-bottom:1px solid #74767a;padding-bottom: 2px;padding-top: 10px;}.markdown-body h2 {padding-bottom: 5px;padding-top: 10px;}.markdown-body h2 code {background-color: rgb(255, 255, 255);}#func.sourceCode {border-left-style: solid;border-color: rgb(0, 32, 82);border-color: rgb(255, 244, 191);border-width: 8px;padding:0px;}pre > code {border: solid 1px blue;font-size:60%;}codeXX {border: solid 1px blue;font-size:60%;}#func.sourceXXCode::before {content: "Synopsis";padding-left:10px;font-weight: bold;}figure {padding:0px;margin-left:5px;margin-right:5px;margin-left: auto;margin-right: auto;}img[data-property="center"] {display: block;margin-top: 10px;margin-left: auto;margin-right: auto;padding: 10px;}figcaption {text-align:left;  border-top: 1px dotted #888;padding-bottom: 20px;margin-top: 10px;}section.st_footer {font-size:80%;}div.stnotice {width:80%;}h1 code, h2 code {font-size:120%;}	.markdown-body table {width: 100%;margin-left:auto;margin-right:auto;}.markdown-body img {border-radius: 4px;padding: 5px;display: block;margin-left: auto;margin-right: auto;width: auto;}.markdown-body .st_header img, .markdown-body {border: none;border-radius: none;padding: 5px;display: block;margin-left: auto;margin-right: auto;width: auto;box-shadow: none;}.markdown-body {margin: 10px;padding: 10px;width: auto;font-family: "Arial", sans-serif;color: 
#03234B;}.markdown-body h1, .markdown-body h2, .markdown-body h3 {   color: var(--main-hx-color)}.markdown-body:hover {}.markdown-body .contents {}.markdown-body .toc-title {}.markdown-body .contents li {list-style-type: none;}.markdown-body .contents ul {padding-left: 10px;}.markdown-body .contents a {color: #3CB4E6; }.sidenav {font-family: "Arial", sans-serif;font-family: segoe ui, verdona;color: #3CB4E6; color: #03234B; color: var(--main-hx-color);height: 100%;position: fixed;z-index: 1;top: 0;left: 0;margin-right: 10px;margin-left: 10px; overflow-x: hidden;}hr.new1 {border-width: thin;border-top: 1px solid #3CB4E6; margin-right: 10px;margin-top: -10px;}.sidenav #sidenav_header {margin-top: 10px;border: 1px;}.sidenav #sidenav_header img {float: left;}.sidenav #sidenav_header a {margin-left: 0px;margin-right: 0px;padding-left: 0px;color: #3CB4E6; color: #03234B; color: var(--main-hx-color)}.sidenav #sidenav_header a:hover {background-size: auto;color: #FFD200; }.sidenav #sidenav_header a:active {  }.sidenav > ul {background-color: rgba(57, 169, 220, 0.05);border-radius: 10px;padding-bottom: 10px;padding-top: 10px;padding-right: 10px;margin-right: 10px;}.sidenav a {padding: 2px 2px;text-decoration: none;font-size: var(--sidenav-font-size);  display:table;}.sidenav > ul > li,.sidenav > ul > li > ul > li { padding-right: 5px;padding-left: 5px;}.sidenav > ul > li > a { color: #03234B;  color: var(--main-hx-color)}.sidenav > ul > li > ul > li > a { color: #03234B; color: #3CB4E6; color: #03234B; font-weight: lighter;padding-left: 10px;}.sidenav > ul > li > ul > li > ul > li > a { display: None;}.sidenav li {list-style-type: none;}.sidenav ul {padding-left: 0px;}.sidenav > ul > li > a:hover,.sidenav > ul > li > ul > li > a:hover {background-color: rgba(70, 70, 80, 0.1); background-clip: border-box;margin-left: -10px;padding-left: 10px;}.sidenav > ul > li > a:hover {padding-right: 15px;width: 230px;	}.sidenav > ul > li > ul > li > a:hover {padding-right: 10px;width: 
230px;	}.sidenav > ul > li > a:active { color: #FFD200; }.sidenav code {}.sidenav {width: 280px;}#sidenav {margin-left: 300px;display:block;}.markdown-body .print-contents {visibility:hidden;}.markdown-body .print-toc-title {visibility:hidden;}.markdown-body {max-width: 980px;min-width: 200px;padding: 40px;border-style: solid;border-style: outset;border-color: rgba(104, 167, 238, 0.089);border-radius: 5px;}@media screen and (max-height: 450px) {.sidenav {padding-top: 15px;}.sidenav a {font-size: 18px;}#sidenav {margin-left: 10px; }.sidenav {visibility:hidden;}.markdown-body {margin: 10px;padding: 40px;width: auto;border: 0px;}}@media screen and (max-width: 1024px) {.sidenav {visibility:hidden;}.markdown-body {margin: 10px;padding: 40px;width: auto;border: 0px;}#sidenav {margin-left: 10px;}}@media print {.sidenav {visibility:hidden;}#sidenav {margin-left: 10px;}.markdown-body {margin: 10px;padding: 10px;width:auto;border: 0px;}@page {size: A4;  margin:2cm;padding:2cm;margin-top: 1cm;padding-bottom: 1cm;}* {xbox-sizing: border-box;font-size:90%;}a {font-size: 100%;color: yellow;}.markdown-body article {xbox-sizing: border-box;font-size:100%;}.markdown-body p {windows: 2;orphans: 2;}.pagebreakerafter {page-break-after: always;padding-top:10mm;}.pagebreakbefore {page-break-before: always;}h1, h2, h3, h4 {page-break-after: avoid;}div, code, blockquote, li, span, table, figure {page-break-inside: avoid;}}</style>
  <!--[if lt IE 9]>
    <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
  <![endif]-->




<link href="" rel="shortcut icon">

</head>



<body>

		<div class="sidenav">
		<div id="sidenav_header">
							<img src="" title="STM32CubeMX.AI logo" align="left" height="70" />
										<br />5.2.0<br />
										<a href="#doc_title"> Relocatable binary model support </a>
					</div>
		<div id="sidenav_header_button">
			 
							<ul>
					<li><p><a id="index" href="index.html">[ Index ]</a></p></li>
				</ul>
						<hr class="new1">
		</div>	

		<ul>
<li><a href="#introduction">Introduction</a><ul>
<li><a href="#what-is-a-relocatable-binary-model">What is a relocatable binary model?</a></li>
</ul></li>
<li><a href="#getting-started">Getting started</a><ul>
<li><a href="#generating-a-relocatable-binary-model">Generating a relocatable binary model</a></li>
<li><a href="#upgrading-the-firmware-image">Upgrading the firmware image</a></li>
<li><a href="#creating-an-instance">Creating an instance</a></li>
<li><a href="#running-an-inference">Running an inference</a></li>
</ul></li>
<li><a href="#generation-flow">Generation flow</a></li>
<li><a href="#ai-run-time-execution-modes">AI run time execution modes</a><ul>
<li><a href="#rel_xip_mode">XIP execution mode</a></li>
<li><a href="#rel_copy_mode">COPY execution mode</a></li>
<li><a href="#xip-execution-mode-and-separated-weight-binary-file">XIP execution mode and separated weight binary file</a></li>
</ul></li>
<li><a href="#ai_rel_api">AI relocatable run-time API</a><ul>
<li><a href="#ref_rel_rt_info"><code>ai_rel_network_rt_get_info()</code></a></li>
<li><a href="#ref_rel_load"><code>ai_rel_network_load_and_create()</code></a></li>
<li><a href="#ref_rel_init"><code>ai_rel_network_init()</code></a></li>
<li><a href="#ref_rel_get_info"><code>ai_rel_network_get_info()</code></a></li>
<li><a href="#ref_rel_get_error"><code>ai_rel_network_get_error()</code></a></li>
<li><a href="#ref_rel_run"><code>ai_rel_network_run()</code></a></li>
<li><a href="#ref_rel_register"><code>ai_rel_platform_observer_register()</code></a></li>
</ul></li>
<li><a href="#references">References</a></li>
<li><a href="#revision-history">Revision history</a></li>
</ul>
	</div>
	<article id="sidenav" class="markdown-body">
	


<header>
<section class="st_header" id="doc_title">

<div class="himage">
	<img src="" title="STM32CubeMX.AI" align="right" height="70" />
	<img src="" title="STM32" align="right" height="90" />
</div>

<h1 class="title followed-by-subtitle">Relocatable binary model support</h1>

	<p class="subtitle">X-CUBE-AI Expansion Package</p>

	<div class="revision">r1.0</div>

	<div class="ai_platform">
		AI PLATFORM r5.2.0
					(Embedded Inference Client API 1.1.0)
			</div>
			Command Line Interface r1.4.0
	




</section>
</header>




<section id="introduction" class="level1">
<h1>Introduction</h1>
<section id="what-is-a-relocatable-binary-model" class="level2">
<h2>What is a relocatable binary model?</h2>
<p>A relocatable binary model designates a binary object which can be installed and executed anywhere in an STM32 memory sub-system. It contains a compiled version of the generated NN C-files, including the requested forward kernel functions and the weights. The principal objective is to provide a flexible way to upgrade an AI-based application without re-generating and re-flashing the whole end-user firmware. This is, for example, the primary building block for FOTA (Firmware Over-The-Air) upgrades.</p>
<p>The generated binary object is a <em>lightweight</em> plug-in. It can run from any address (position-independent code) and keep its data anywhere in memory (position-independent data). A simple and efficient AI relocatable run-time makes it possible to instantiate and use it. No complex, resource-consuming dynamic linker for Arm Cortex-M MCUs is embedded in the STM32 firmware: the generated object is a self-contained entity, and no external symbols or functions are required at run-time.</p>
<div id="fig:reloc_reloc_obj" class="fignos">
<figure>
<img src="" property="center" style="width:100.0%" alt /><figcaption><span>Figure 1:</span> Relocatable binary object</figcaption>
</figure>
</div>
<blockquote>
<p>In this article, the “static” approach designates the case where the generated NN C-files are compiled and linked directly with the end-user application stack.</p>
</blockquote>
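<p>As an illustration of the position-independent principle, a relocatable object can store base-relative offsets in a GOT-like table which a small loader patches once at install time. The following is a toy sketch only, not the actual X-CUBE-AI image format; <code>toy_image_t</code> and <code>toy_install()</code> are hypothetical names:</p>

```c
#include <stdint.h>

#define N_ENTRIES 3

/* Toy image: offsets are stored relative to the (not yet known) image base. */
typedef struct {
    uint32_t got[N_ENTRIES]; /* GOT-like table of base-relative offsets */
} toy_image_t;

/* At install time, the loader adds the chosen base address to every
 * entry, so the object works wherever it has been placed in memory.
 * No symbol resolution is involved. */
static void toy_install(toy_image_t *img, uint32_t base)
{
    for (int i = 0; i < N_ENTRIES; i++)
        img->got[i] += base;
}
```

<p>Installing the same image at, say, <code>0x20000000</code> or <code>0x08080000</code> only changes the value passed as <code>base</code>; this is why no dynamic linker needs to be embedded in the firmware.</p>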
<section id="comparison-with-tf-lite-for-micro-controllers-solution" class="level3">
<h3>Comparison with TF Lite for micro-controllers solution</h3>
<p>The <a href="https://www.tensorflow.org/lite/microcontrollers">TF Lite for micro-controllers</a> framework also provides a way to upgrade an AI-based application. It is based on the flatbuffer technology (TF Lite file), which is interpreted at run-time to create an executable instance. However, the code of the forward kernel functions must already be available in the initial end-user firmware image.</p>
</section>
</section>
</section>
<section id="getting-started" class="level1">
<h1>Getting started</h1>
<section id="generating-a-relocatable-binary-model" class="level2">
<h2>Generating a relocatable binary model</h2>
<p>To build a relocatable binary file for a given STM32 series, the <code>&#39;--relocatable&#39;</code> option is used with the <code>&#39;generate&#39;</code> command. Note that the specific options to compress the weights or to place the IO buffers inside the activations buffer should always be applied, as for the “standard” approach.</p>
<div class="Alert">
<p><strong>Note</strong> — As mentioned in <a href="command_line_interface.html">[3]</a>, <em>“Setting the environment”</em> section, a <a href="https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm">GNU ARM Embedded tool-chain</a> (<code>&#39;arm-none-eabi-&#39;</code> prefix) should be available in the PATH before launching the command. The <code>&#39;--lib&#39;</code> option indicates the root location of the relocatable network runtime libraries; default value: <code>&#39;$X_CUBE_AI_DIR/Middlewares/ST/AI&#39;</code>.</p>
</div>
<pre class="dosbatch"><code>&gt; stm32ai generate -m &lt;model_file_path&gt; &lt;gen_options&gt; --relocatable --series stm32f4 \
                   --lib $X_CUBE_AI_DIR/Middlewares/ST/AI

Neural Network Tools for STM32 v1.4.0 (AI tools v5.2.0)
-- Importing model
-- Importing model - done (elapsed time 0.580s)
-- Rendering model
-- Rendering model - done (elapsed time 0.074s)
-- Generating C-code
Generating relocatable binary image..

Runtime memory layout (series=&quot;stm32f4&quot;)
--------------------------------------------------------------------------------
section      size (bytes)
--------------------------------------------------------------------------------
header                100 *
txt                 7,864      network+kernel
rodata                128      network+kernel
data                1,756      network+kernel
bss                   132      network+kernel
got                   108 *
rel                   504 *
weights            15,560      network
--------------------------------------------------------------------------------
flash size         25,308 + 712 (+2.81%) *
ram size            1,888 + 108 (+5.72%) *
--------------------------------------------------------------------------------
bin size           26,024      binary image
act. size             192      activations buffer

(*) extra bytes for relocatable support

-- Generating C-code - done (elapsed time 2.086s)
Installing..
     &lt;output-directory-path&gt;\network.c
     &lt;output-directory-path&gt;\network_data.c
     &lt;output-directory-path&gt;\network_img_rel.c
     &lt;output-directory-path&gt;\network.h
     &lt;output-directory-path&gt;\network_data.h
     &lt;output-directory-path&gt;\network_img_rel.h
     &lt;output-directory-path&gt;\network_rel.bin

Creating report file &lt;output-directory-path&gt;\network_generate_report.txt
elapsed time (generate): 2.79s
</code></pre>
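<p>The totals in the report can be cross-checked from the per-section sizes: the flash figure is the sum of the <code>txt</code>, <code>rodata</code>, <code>data</code> and <code>weights</code> sections; the starred extra cost is <code>header + got + rel</code>; and the RAM figure is <code>data + bss</code>, with the <code>got</code> as extra. A minimal sketch (the helper name <code>layout_totals</code> is ours, not part of any API):</p>

```c
/* Re-derive the report totals from the per-section sizes shown above. */
static void layout_totals(int *flash, int *flash_extra,
                          int *ram, int *ram_extra)
{
    const int header = 100, txt = 7864, rodata = 128, data = 1756;
    const int bss = 132, got = 108, rel = 504, weights = 15560;

    *flash = txt + rodata + data + weights; /* 25,308 */
    *flash_extra = header + got + rel;      /* 712 */
    *ram = data + bss;                      /* 1,888 */
    *ram_extra = got;                       /* 108 */
}
```

<p>The relative extra costs follow directly: <code>712 / 25308 ≈ 2.81%</code> of flash and <code>108 / 1888 ≈ 5.72%</code> of RAM, matching the starred lines of the report.</p>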
<table>
<colgroup>
<col style="width: 31%"></col>
<col style="width: 68%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">supported STM32 series</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>stm32f4</code></td>
<td style="text-align: left;">Default series. All STM32F4xx devices with an Arm Cortex-M4 core and single-precision FPU support enabled.</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>stm32f3</code></td>
<td style="text-align: left;">All STM32F3xx devices with an Arm Cortex-M4 core and single-precision FPU support enabled.</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>stm32l4</code></td>
<td style="text-align: left;">All STM32L4xx/STM32L4Rxx devices with an Arm Cortex-M4 core and single-precision FPU support enabled.</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>stm32f7</code></td>
<td style="text-align: left;">All STM32F7xx devices with an Arm Cortex-M7 core and single-precision FPU support enabled.</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>stm32h7</code></td>
<td style="text-align: left;">All STM32H7xx devices with an Arm Cortex-M7 core and double-precision FPU support enabled.</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>stm32wl</code></td>
<td style="text-align: left;">All STM32WLxx devices with an Arm Cortex-M4 core without FPU support.</td>
</tr>
</tbody>
</table>
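<p>For example, to target an STM32H7 device and generate the weights in a separate binary file, the same command can be adjusted with the <code>--series</code> and <code>--binary</code> options described in this article (<code>my_model.h5</code> is a placeholder model path):</p>

```shell
stm32ai generate -m my_model.h5 --relocatable --series stm32h7 --binary \
                 --lib $X_CUBE_AI_DIR/Middlewares/ST/AI
```
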
<section id="generated-files" class="level3">
<h3>Generated files</h3>
<table>
<colgroup>
<col style="width: 28%"></col>
<col style="width: 71%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">file</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>&lt;network&gt;_rel.bin</code></td>
<td style="text-align: left;"><em>Main binary file</em> (i.e. the “relocatable binary model”). It contains the compiled version of the model, including the used forward kernel functions and, by default, the weights. It also embeds the additional sections (.header/.got/.rel) needed to install the model.<br />
Note that if the <code>&#39;--binary&#39;</code> option flag is used, the weights are generated in a separate file: <code>&lt;network&gt;_data.bin</code>.</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>&lt;network&gt;.c/.h</code><br />
<code>&lt;network&gt;_data.c/.h</code></td>
<td style="text-align: left;">The generated NN C-files used to build the relocatable binary model. <code>&lt;network&gt;_data.c</code> is an empty file.</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>&lt;network&gt;_img_rel.c/.h</code></td>
<td style="text-align: left;">These files are generated for test purposes. They facilitate the deployment of the relocatable binary model in a test framework such as the <code>aiSystemPerformance/aiValidation</code> applications. They contain additional macros and a C byte array embedding the binary file, which can be used by the <a href="#ai_rel_api">AI relocatable runtime API</a> to install and use the model. The <code>&#39;--no-c-files&#39;</code> option flag can be used to avoid generating these additional files.</td>
</tr>
</tbody>
</table>
<blockquote>
<p>All files are generated in the <code>&#39;&lt;output-directory-path&gt;&#39;</code> directory.</p>
</blockquote>
</section>
<section id="memory-layout-information" class="level3">
<h3>Memory layout information</h3>
<p>The reported memory layout information completes the <code>&#39;ROM&#39;</code> and <code>&#39;RAM&#39;</code> memory size metrics (refer to <a href="evaluation_metrics.html">[6]</a>, <em>“Memory-related metrics” section</em>) with the AI memory resources, including the specific AI code and data sections which are required to run the AI stack. Apart from the additional sections which manage the relocatable binary model, the sizes of the other sections are the same as with the “static” code generation approach, where the NN C-files are compiled and linked with the application code. Only the size required for the IO buffers is not reported here.</p>
<div id="fig:reloc_report_mem" class="fignos">
<figure>
<img src="" property="center" style="width:95.0%" alt /><figcaption><span>Figure 2:</span> MCU AI memory layout</figcaption>
</figure>
</div>
<p>The following table summarizes the differences in terms of memory layout (in bytes) between the <em>static</em> and <em>relocatable</em> approaches. The size of the network/kernels sections depends on the topology complexity (number of nodes) and on the forward kernel functions which are used. The activations and weights sizes are always the same.</p>
<table>
<colgroup>
<col style="width: 18%"></col>
<col style="width: 13%"></col>
<col style="width: 14%"></col>
<col style="width: 52%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">AI object</th>
<th style="text-align: center;">static</th>
<th style="text-align: center;">reloc</th>
<th style="text-align: left;">typically placed in</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">activations</td>
<td style="text-align: center;">192</td>
<td style="text-align: center;">192</td>
<td style="text-align: left;">RAM type (rw), <code>&#39;.bss&#39;</code> section</td>
</tr>
<tr class="even">
<td style="text-align: left;">weights</td>
<td style="text-align: center;">15,560</td>
<td style="text-align: center;">15,560</td>
<td style="text-align: left;">FLASH type (ro), <code>&#39;.rodata&#39;</code> section</td>
</tr>
<tr class="odd">
<td style="text-align: left;">network/kernels</td>
<td style="text-align: center;">25,308<br />
1,888</td>
<td style="text-align: center;">26,024<br />
1,996</td>
<td style="text-align: left;">FLASH type (rx), <code>&#39;.text/.rodata/(.data)&#39;</code> section<br />
RAM type (rw), <code>&#39;.data/.bss&#39;</code> section</td>
</tr>
</tbody>
</table>
</section>
</section>
<section id="upgrading-the-firmware-image" class="level2">
<h2>Upgrading the firmware image</h2>
<p>This step is out of scope of this article; the underlying process is fully application dependent. For the next steps, the code snippets expect that the image (<code>&lt;network&gt;_rel.bin</code>) has been flashed in a memory-mapped region.</p>
<p>To use a relocatable binary model, a specific <a href="#ai_rel_api">AI relocatable runtime API</a> (a simple C-file) is required to install and use it. It is available in the X-CUBE-AI pack and should be integrated when building the firmware. Note that only the AI runtime header files are required; the <code>&#39;network_runtime.a&#39;</code> library is not necessary.</p>
<div class="sourceCode" id="cb2"><pre class="sourceCode makefile"><code class="sourceCode makefile"><span id="cb2-1"><a href="#cb2-1"></a><span class="dt">CFLAGS </span><span class="ch">+=</span><span class="st"> -mcpu=cortex-m4 -mthumb -mfpu=fpv4-sp-d16  -mfloat-abi=hard</span></span>
<span id="cb2-2"><a href="#cb2-2"></a></span>
<span id="cb2-3"><a href="#cb2-3"></a><span class="dt">C_SOURCES </span><span class="ch">+=</span><span class="st"> $(X_CUBE_AI_DIR)/Middlewares/ST/AI/Reloc/Src/ai_reloc_network.c</span></span>
<span id="cb2-4"><a href="#cb2-4"></a></span>
<span id="cb2-5"><a href="#cb2-5"></a><span class="dt">CFLAGS </span><span class="ch">+=</span><span class="st"> -I$(X_CUBE_AI_DIR)/Middlewares/ST/AI/Inc</span></span>
<span id="cb2-6"><a href="#cb2-6"></a><span class="dt">CFLAGS </span><span class="ch">+=</span><span class="st"> -I$(X_CUBE_AI_DIR)/Middlewares/ST/AI/Reloc/Inc</span></span></code></pre></div>
</section>
<section id="creating-an-instance" class="level2">
<h2>Creating an instance</h2>
<p>After the model (binary file) has been flashed into the STM32 device at the address <code>&#39;BIN_ADDRESS&#39;</code>, the following sequence of code can be used to create and install an instance of the generated model.</p>
<div class="sourceCode" id="cb3"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb3-1"><a href="#cb3-1"></a><span class="pp">#include </span><span class="im">&lt;ai_reloc_network.h&gt;</span></span>
<span id="cb3-2"><a href="#cb3-2"></a></span>
<span id="cb3-3"><a href="#cb3-3"></a>ai_error err;</span>
<span id="cb3-4"><a href="#cb3-4"></a>ai_rel_network_info rt_info;</span>
<span id="cb3-5"><a href="#cb3-5"></a></span>
<span id="cb3-6"><a href="#cb3-6"></a>err = ai_rel_network_rt_get_info(BIN_ADDRESS, &amp;rt_info);</span></code></pre></div>
<p>This retrieves part of the meta-information embedded in the header of the binary.</p>
<div class="sourceCode" id="cb4"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb4-1"><a href="#cb4-1"></a>...</span>
<span id="cb4-2"><a href="#cb4-2"></a>printf(<span class="st">&quot;Load a relocatable binary model, located at the address 0x%08x</span><span class="sc">\r\n</span><span class="st">&quot;</span>, (<span class="dt">int</span>)BIN_ADDRESS);</span>
<span id="cb4-3"><a href="#cb4-3"></a>printf(<span class="st">&quot; model name                : %s</span><span class="sc">\r\n</span><span class="st">&quot;</span>, rt_info.c_name);</span>
<span id="cb4-4"><a href="#cb4-4"></a>printf(<span class="st">&quot; weights size              : %d bytes</span><span class="sc">\r\n</span><span class="st">&quot;</span>, (<span class="dt">int</span>)rt_info.weights_sz);</span>
<span id="cb4-5"><a href="#cb4-5"></a>printf(<span class="st">&quot; activations size          : %d bytes (minimum)</span><span class="sc">\r\n</span><span class="st">&quot;</span>, (<span class="dt">int</span>)rt_info.acts_sz);</span>
<span id="cb4-6"><a href="#cb4-6"></a>printf(<span class="st">&quot; compiled for a Cortex-Mx  : 0x%03X</span><span class="sc">\r\n</span><span class="st">&quot;</span>, (<span class="dt">int</span>)AI_RELOC_RT_GET_CPUID(rt_info.variant));</span>
<span id="cb4-7"><a href="#cb4-7"></a>printf(<span class="st">&quot; FPU should be enabled     : %s</span><span class="sc">\r\n</span><span class="st">&quot;</span>, AI_RELOC_RT_FPU_USED(rt_info.variant)?<span class="st">&quot;yes&quot;</span>:<span class="st">&quot;no&quot;</span>);</span>
<span id="cb4-8"><a href="#cb4-8"></a>printf(<span class="st">&quot; RT RAM minimum size       : %d bytes (%d bytes in COPY mode)</span><span class="sc">\r\n</span><span class="st">&quot;</span>, (<span class="dt">int</span>)rt_info.rt_ram_xip,</span>
<span id="cb4-9"><a href="#cb4-9"></a>        (<span class="dt">int</span>)rt_info.rt_ram_copy);</span>
<span id="cb4-10"><a href="#cb4-10"></a>...</span></code></pre></div>
<p>To create an executable instance of the C-model, a dedicated memory buffer (also called <em>AI RT RAM</em>) should be provided. The minimum required size depends on the model and on the execution mode. For the <code>&#39;XIP&#39;</code> execution mode (<a href="#rel_xip_mode"><code>&#39;AI_RELOC_RT_LOAD_MODE_XIP&#39;</code></a>), only a buffer (a RW memory-mapped region) for the data sections is required (minimum size = <code>rt_info.rt_ram_xip</code>). Note that the allocated buffer should be 4-byte aligned. For the <code>&#39;COPY&#39;</code> execution mode (<a href="#rel_copy_mode"><code>&#39;AI_RELOC_RT_LOAD_MODE_COPY&#39;</code></a>), a minimum size of <code>rt_info.rt_ram_copy</code> is required, so that the code sections can also be copied. In this last case, the provided memory region should be executable.</p>
<div class="sourceCode" id="cb5"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb5-1"><a href="#cb5-1"></a>ai_error err;</span>
<span id="cb5-2"><a href="#cb5-2"></a>ai_handle net = AI_HANDLE_NULL;</span>
<span id="cb5-3"><a href="#cb5-3"></a></span>
<span id="cb5-4"><a href="#cb5-4"></a><span class="dt">uint8_t</span> *rt_ai_ram = malloc(rt_info.rt_ram_xip);</span>
<span id="cb5-5"><a href="#cb5-5"></a></span>
<span id="cb5-6"><a href="#cb5-6"></a>err = ai_rel_network_load_and_create(BIN_ADDRESS, rt_ai_ram, rt_info.rt_ram_xip,</span>
<span id="cb5-7"><a href="#cb5-7"></a>                                     AI_RELOC_RT_LOAD_MODE_XIP, &amp;net);</span></code></pre></div>
<p>Before the instance is installed and set up, the compatibility of the provided binary with the STM32 platform is verified: the Cortex-Mx ID is checked, as well as whether the FPU is enabled (if required by the binary). If all is OK, an instance of the model is ready to be initialized and a handle is returned (<code>&#39;net&#39;</code> parameter).</p>
<p>As for the “static” approach, the next step is to complete the internal data structure with the activations buffer and the weights buffer. Only the addresses of the associated buffers should be provided. If the weights are loaded as a separate file (<code>&#39;--binary&#39;</code> option flag), <code>WEIGHTS_ADDRESS</code> indicates the location where the weights have been placed.</p>
<div class="sourceCode" id="cb6"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb6-1"><a href="#cb6-1"></a>ai_handle weights_addr;</span>
<span id="cb6-2"><a href="#cb6-2"></a>ai_bool res;</span>
<span id="cb6-3"><a href="#cb6-3"></a></span>
<span id="cb6-4"><a href="#cb6-4"></a><span class="dt">uint8_t</span> *act_addr = malloc(rt_info.acts_sz);</span>
<span id="cb6-5"><a href="#cb6-5"></a></span>
<span id="cb6-6"><a href="#cb6-6"></a><span class="cf">if</span> (rt_info.weights)</span>
<span id="cb6-7"><a href="#cb6-7"></a>  weights_addr = rt_info.weights;</span>
<span id="cb6-8"><a href="#cb6-8"></a><span class="cf">else</span></span>
<span id="cb6-9"><a href="#cb6-9"></a>  weights_addr = WEIGHTS_ADDRESS;</span>
<span id="cb6-10"><a href="#cb6-10"></a></span>
<span id="cb6-11"><a href="#cb6-11"></a>res = ai_rel_network_init(net, weights_addr, act_addr);</span></code></pre></div>
<p>At this stage, the instance is fully ready to be used. To retrieve all the attributes of the instantiated model, the <code>&#39;ai_rel_network_get_info()&#39;</code> function can be used.</p>
<div class="sourceCode" id="cb7"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb7-1"><a href="#cb7-1"></a>ai_bool res;</span>
<span id="cb7-2"><a href="#cb7-2"></a>ai_network_report net_info;</span>
<span id="cb7-3"><a href="#cb7-3"></a></span>
<span id="cb7-4"><a href="#cb7-4"></a>res = ai_rel_network_get_info(net, &amp;net_info);</span></code></pre></div>
<div class="Warning">
<p><strong>TIPS</strong> — To avoid allocating the model-dependent memory regions through a system heap, a pre-allocated memory region can be used (<code>&#39;AI_RT_ADDR&#39;</code> address).</p>
<div class="sourceCode" id="cb8"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb8-1"><a href="#cb8-1"></a>rt_ai_ram = (<span class="dt">uint8_t</span> *)AI_RT_ADDR;</span>
<span id="cb8-2"><a href="#cb8-2"></a>act_addr = rt_ai_ram + AI_RELOC_ROUND_UP(rt_info.rt_ram_xip);</span></code></pre></div>
</div>
<div class="Alert">
<p><strong>NOTE</strong> — Unlike the “static” approach, which allows only one instance at a time, there is no limitation here on the number of instances created for the same generated model. Each instance can be created with its own <em>AI RT RAM</em> area and initialized with its own activations buffer, so concurrent use-cases can be implemented without specific synchronization.</p>
</div>
</section>
<section id="running-an-inference" class="level2">
<h2>Running an inference</h2>
<p>Running an inference is done exactly as in the “static” case. The following code snippet illustrates the case where the generated model is defined with single input and output tensors.</p>
<div class="sourceCode" id="cb9"><pre class="sourceCode c"><code class="sourceCode c"><span id="cb9-1"><a href="#cb9-1"></a><span class="dt">static</span> <span class="dt">int</span> ai_run(<span class="dt">void</span> *data_in, <span class="dt">void</span> *data_out)</span>
<span id="cb9-2"><a href="#cb9-2"></a>{</span>
<span id="cb9-3"><a href="#cb9-3"></a>  ai_i32 batch;</span>
<span id="cb9-4"><a href="#cb9-4"></a></span>
<span id="cb9-5"><a href="#cb9-5"></a>  ai_buffer *ai_input = net_info.inputs;</span>
<span id="cb9-6"><a href="#cb9-6"></a>  ai_buffer *ai_output = net_info.outputs;</span>
<span id="cb9-7"><a href="#cb9-7"></a></span>
<span id="cb9-8"><a href="#cb9-8"></a>  ai_input[<span class="dv">0</span>].data = AI_HANDLE_PTR(data_in);</span>
<span id="cb9-9"><a href="#cb9-9"></a>  ai_output[<span class="dv">0</span>].data = AI_HANDLE_PTR(data_out);</span>
<span id="cb9-10"><a href="#cb9-10"></a></span>
<span id="cb9-11"><a href="#cb9-11"></a>  batch = ai_rel_network_run(net, ai_input, ai_output);</span>
<span id="cb9-12"><a href="#cb9-12"></a>  <span class="cf">if</span> (batch != <span class="dv">1</span>) {</span>
<span id="cb9-13"><a href="#cb9-13"></a>    ai_log_err(ai_rel_network_get_error(net),</span>
<span id="cb9-14"><a href="#cb9-14"></a>        <span class="st">&quot;ai_rel_network_run&quot;</span>);</span>
<span id="cb9-15"><a href="#cb9-15"></a>    <span class="cf">return</span> -<span class="dv">1</span>;</span>
<span id="cb9-16"><a href="#cb9-16"></a>  }</span>
<span id="cb9-17"><a href="#cb9-17"></a></span>
<span id="cb9-18"><a href="#cb9-18"></a>  <span class="cf">return</span> <span class="dv">0</span>;</span>
<span id="cb9-19"><a href="#cb9-19"></a>}</span></code></pre></div>
<div class="Warning">
<p><strong>NOTE</strong> — The properties of the input or output tensors are fully accessible through the <code>&#39;ai_network_report&#39;</code> C-struct, as for the “static” approach (refer to <a href="embedded_client_api.html">[5]</a>, <em>“IO buffer/tensor description”</em> section). The payload can be allocated in the activations buffer without restrictions.</p>
</div>
</section>
</section>
<section id="generation-flow" class="level1">
<h1>Generation flow</h1>
<p>The following figure illustrates the flow to generate a relocatable binary model. The first step, importing the model and generating the NN C-files, is the same as in the “static” approach; only the <code>&#39;&lt;network&gt;_data.c/.h&#39;</code> files are not fully generated. The second step compiles and links the generated NN C-files against a specific AI runtime library, which is simply built with the relocatable options and embeds the required mathematical and memcopy/memset functions. The last post-processing step generates the binary file by appending a specific section (<code>&#39;.rel&#39;</code> section) and various pieces of information that will be used by the AI relocatable run-time API. The weights are appended as a <code>&#39;.weights&#39;</code> binary section at the end of the file.</p>
<div id="fig:reloc_gen" class="fignos">
<figure>
<img src="" property="center" style="width:100.0%" alt /><figcaption><span>Figure 1:</span> Generation of the relocatable binary model</figcaption>
</figure>
</div>
<ul>
<li>The code is compiled only with a GCC Arm Embedded tool-chain, using the <code>&#39;-fpic&#39;</code> and <code>&#39;-msingle-pic-base&#39;</code> options. The Arm Cortex-M <code>&#39;r9&#39;</code> register is used as the platform register for the Global Offset Table (GOT). The AI relocatable run-time is in charge of updating the <code>&#39;r9&#39;</code> register before calling the code.</li>
<li>The generated relocatable binary object is independent of the Arm embedded tool-chain used to build the end-user application. Consequently, for the same memory placement and HW settings, the inference time is the same.</li>
</ul>
</section>
<section id="ai-run-time-execution-modes" class="level1">
<h1>AI run time execution modes</h1>
<section id="rel_xip_mode" class="level2">
<h2>XIP execution mode</h2>
<p>This execution mode is the default use-case, where the code and the weights sections are stored in the STM32 embedded/internal flash. In terms of memory placement, this is similar to the “static” approach.</p>
<div id="fig:reloc_reloc_xip" class="fignos">
<figure>
<img src="" property="center" style="width:80.0%" alt /><figcaption><span>Figure 1:</span> XIP execution mode</figcaption>
</figure>
</div>
</section>
<section id="rel_copy_mode" class="level2">
<h2>COPY execution mode</h2>
<p>This alternative execution mode should be considered when the weights must be placed in an external memory device because they do not fit in the internal/embedded STM32 flash. Copying the code from a non-efficient executable memory region to a low-latency executable region significantly improves the inference time. Note that the required <em>AI RT RAM</em> size is larger and that the associated memory region should be executable. Another drawback, for Cortex-M4 based architectures (no core I/D cache available), is the contention caused by the code and data memory accesses, which can degrade performance. To avoid this drawback, the next use-case should be considered.</p>
<div id="fig:reloc_reloc_copy" class="fignos">
<figure>
<img src="" property="center" style="width:80.0%" alt /><figcaption><span>Figure 2:</span> COPY execution mode</figcaption>
</figure>
</div>
</section>
<section id="xip-execution-mode-and-separated-weight-binary-file" class="level2">
<h2>XIP execution mode and separated weight binary file</h2>
<p>This mode is an optimal case where the weights must reside in an external memory device (<code>&#39;&lt;network&gt;_data.bin&#39;</code> file). It implies having a second internal/embedded flash region to store the code (<code>&#39;&lt;network&gt;_rel.bin&#39;</code> file). In this case, the critical code is executed in place. The drawback is having to manage two binary files for the upgrade.</p>
<div id="fig:reloc_reloc_sep" class="fignos">
<figure>
<img src="" property="center" style="width:80.0%" alt /><figcaption><span>Figure 3:</span> XIP execution mode with a separated weight binary file</figcaption>
</figure>
</div>
</section>
</section>
<section id="ai_rel_api" class="level1">
<h1>AI relocatable run-time API</h1>
<p>The proposed API (called the AI relocatable run-time API) used to manage the relocatable binary model is comparable to the embedded inference client API (refer to <a href="embedded_client_api.html">[5]</a>) for the “standard” approach. Only the create and initialize functions have been enhanced to take the specificities into account. All functions are prefixed with <code>&#39;ai_rel_network_&#39;</code> and do not depend on the c-name of the model. They are defined and implemented in the <code>&#39;ai_reloc_network.c/.h&#39;</code> files (<code>&#39;$X_CUBE_AI_DIR/Middlewares/ST/AI/Reloc/&#39;</code> folder).</p>
<section id="ref_rel_rt_info" class="level2">
<h2><code>ai_rel_network_rt_get_info()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_error ai_rel_network_rt_get_info(<span class="at">const</span> <span class="dt">void</span>* obj, ai_rel_network_info* rt);</span></code></pre></div>
<p>Retrieves the dimensioning information needed to instantiate a relocatable binary model.</p>
<ul>
<li>The <code>{AI_ERROR_INVALID_HANDLE, AI_ERROR_CODE_INVALID_PTR}</code> error is returned if the referenced object is not valid (that is, an invalid signature, or an address not aligned on 4 bytes).</li>
</ul>
<p>The following table describes the fields available in the returned <code>&#39;ai_rel_network_info&#39;</code> C-struct.</p>
<table>
<colgroup>
<col style="width: 36%"></col>
<col style="width: 64%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">field</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>c_name</code></td>
<td style="text-align: left;">pointer to the user c-name of the generated model (debug purpose)</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>variant</code></td>
<td style="text-align: left;">32-bit word holding the AI RT version, the required Cortex-M ID,…</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>code_sz</code></td>
<td style="text-align: left;">code size in bytes (w/o the weight section)</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>weights/weights_sz</code></td>
<td style="text-align: left;">address/size (in bytes) of the weight section if available</td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>acts_sz</code></td>
<td style="text-align: left;">required activations size (<code>&#39;RAM&#39;</code> metric) to run the model</td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>rt_ram_xip</code></td>
<td style="text-align: left;">required RAM size (in bytes) to install the model in <a href="#rel_xip_mode">XIP mode</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><code>rt_ram_copy</code></td>
<td style="text-align: left;">required RAM size (in bytes) to install the model in <a href="#rel_copy_mode">COPY mode</a></td>
</tr>
</tbody>
</table>
</section>
<section id="ref_rel_load" class="level2">
<h2><code>ai_rel_network_load_and_create()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_error ai_rel_network_load_and_create(<span class="at">const</span> <span class="dt">void</span>* obj, ai_handle ram_addr,</span>
<span id="func-2"><a href="#func-2"></a>    ai_size ram_size, <span class="dt">uint32_t</span> mode, ai_handle* hdl);</span>
<span id="func-3"><a href="#func-3"></a>ai_handle ai_rel_network_destroy(ai_handle hdl);</span></code></pre></div>
<p>Creates and installs an instance of the relocatable binary model (referenced by <code>&#39;obj&#39;</code>). A RW memory buffer (<code>&#39;ram_addr/ram_size&#39;</code>) should be provided to create the data sections (<code>&#39;.data/.bss/.got&#39;</code>) and to fix/resolve the internal references during the relocation process. The <code>&#39;mode&#39;</code> parameter indicates the expected execution mode. The expected size for the AI RT RAM buffer can be retrieved with the <a href="#ref_rel_rt_info"><code>&#39;ai_rel_network_rt_get_info()&#39;</code></a> function.</p>
<table>
<colgroup>
<col style="width: 40%"></col>
<col style="width: 60%"></col>
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">mode</th>
<th style="text-align: left;">description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><code>AI_RELOC_RT_LOAD_MODE_XIP</code></td>
<td style="text-align: left;"><a href="#rel_xip_mode">XIP execution mode is requested</a></td>
</tr>
<tr class="even">
<td style="text-align: left;"><code>AI_RELOC_RT_LOAD_MODE_COPY</code></td>
<td style="text-align: left;"><a href="#rel_copy_mode">COPY execution mode is requested</a></td>
</tr>
</tbody>
</table>
<ul>
<li>The updated <code>&#39;ai_handle&#39;</code> references a run-time context (opaque object), which must be used for the other functions.</li>
<li>Before the instance is created, the Cortex-M ID is verified. If required, the function also checks that the FPU is enabled.</li>
</ul>
<div class="Alert">
<p><strong>NOTE</strong> — If <code>&#39;ram_addr&#39;</code> and/or <code>&#39;ram_size&#39;</code> are <code>&#39;NULL&#39;</code>, a default allocation is done through the system heap. This behavior can be overridden in the <code>&#39;ai_reloc_network.c&#39;</code> file; see the <code>&#39;AI_RELOC_MALLOC&#39;</code> macro definition.</p>
</div>
</section>
<section id="ref_rel_init" class="level2">
<h2><code>ai_rel_network_init()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_bool ai_rel_network_init(ai_handle hdl, <span class="at">const</span> ai_handle weights,</span>
<span id="func-2"><a href="#func-2"></a>    <span class="at">const</span> ai_handle act);</span></code></pre></div>
<p>Finalizes the initialization of the instance with the addresses of the weights and the activations buffer.</p>
<ul>
<li>If the weights are stored in the relocatable binary object, <a href="#ref_rel_rt_info"><code>&#39;ai_rel_network_rt_get_info()&#39;</code></a> should be used to retrieve their address.</li>
<li>As for the “static” approach, an activations buffer should also be provided.</li>
</ul>
</section>
<section id="ref_rel_get_info" class="level2">
<h2><code>ai_rel_network_get_info()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_bool ai_rel_network_get_info(ai_handle hdl, ai_network_report* report);</span></code></pre></div>
<p>Retrieves the run-time data attributes of an instantiated model. Refer to the <code>&#39;ai_platform.h&#39;</code> file for the details of the returned <code>&#39;ai_network_report&#39;</code> C-struct. It should be called after <code>&#39;ai_rel_network_init()&#39;</code>.</p>
</section>
<section id="ref_rel_get_error" class="level2">
<h2><code>ai_rel_network_get_error()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_error ai_rel_network_get_error(ai_handle hdl);</span></code></pre></div>
<p>Returns the first error reported during the execution of an <code>&#39;ai_rel_network_xxx()&#39;</code> function.</p>
<ul>
<li>See the <code>&#39;ai_platform.h&#39;</code> file for the list of the returned error types (<code>&#39;ai_error_type&#39;</code>) and associated codes (<code>&#39;ai_error_code&#39;</code>).</li>
</ul>
</section>
<section id="ref_rel_run" class="level2">
<h2><code>ai_rel_network_run()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_i32 ai_rel_network_run(ai_handle hdl, <span class="at">const</span> ai_buffer* input, ai_buffer* output);</span></code></pre></div>
<p>Performs one or more inferences. The input and output buffer parameters (<code>&#39;ai_buffer&#39;</code> type) allow providing the input tensors and storing the predicted output tensors, respectively (refer to <a href="embedded_client_api.html">[5]</a>, <em>“Input/output xD tensor format”</em> section).</p>
</section>
<section id="ref_rel_register" class="level2">
<h2><code>ai_rel_platform_observer_register()</code></h2>
<div class="sourceCode" id="func"><pre class="sourceCode cpp"><code class="sourceCode cpp"><span id="func-1"><a href="#func-1"></a>ai_bool ai_rel_platform_observer_register(ai_handle hdl,</span>
<span id="func-2"><a href="#func-2"></a>    ai_observer_node_cb cb, ai_handle cookie, ai_u32 flags);</span>
<span id="func-3"><a href="#func-3"></a>ai_bool ai_rel_platform_observer_unregister(ai_handle hdl,</span>
<span id="func-4"><a href="#func-4"></a>    ai_observer_node_cb cb, ai_handle cookie);</span>
<span id="func-5"><a href="#func-5"></a>ai_bool ai_rel_platform_observer_node_info(ai_handle hdl,</span>
<span id="func-6"><a href="#func-6"></a>    ai_observer_node *node_info);</span></code></pre></div>
<p>As for the “static” approach, these functions allow registering a user callback to be notified before and/or after the execution of a c-node. There is no restriction on the usage of the Platform Observer API with a relocatable binary model (refer to <a href="embedded_client_api.html">[5]</a>, <em>“Platform Observer API”</em> section).</p>
</section>
</section>
<section id="references" class="level1">
<h1>References</h1>
<table style="width:92%;">
<colgroup>
<col style="width: 13%"></col>
<col style="width: 77%"></col>
</colgroup>
<tbody>
<tr class="odd">
<td style="text-align: left;">[1]</td>
<td style="text-align: left;">X-CUBE-AI - <em>AI expansion pack for STM32CubeMX</em><br />
<a href="https://www.st.com/en/embedded-software/x-cube-ai.html">https://www.st.com/en/embedded-software/x-cube-ai.html</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[2]</td>
<td style="text-align: left;">User manual - Getting started with X-CUBE-AI Expansion Package for Artificial Intelligence (AI) <a href="https://www.st.com/resource/en/user_manual/dm00570145.pdf">(pdf)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[3]</td>
<td style="text-align: left;">stm32ai - Command Line Interface <a href="command_line_interface.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[4]</td>
<td style="text-align: left;">Supported Deep Learning toolboxes and layers <a href="layer-support.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[5]</td>
<td style="text-align: left;">Embedded inference client API <a href="embedded_client_api.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[6]</td>
<td style="text-align: left;">Evaluation report and metrics <a href="evaluation_metrics.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[7]</td>
<td style="text-align: left;">FAQs <a href="faqs.html">(link)</a></td>
</tr>
<tr class="even">
<td style="text-align: left;">[8]</td>
<td style="text-align: left;">Quantization and quantize command <a href="quantization.html">(link)</a></td>
</tr>
<tr class="odd">
<td style="text-align: left;">[9]</td>
<td style="text-align: left;">Relocatable binary network support <a href="relocatable.html">(link)</a></td>
</tr>
</tbody>
</table>
</section>
<section id="revision-history" class="level1">
<h1>Revision history</h1>
<table>
<thead>
<tr class="header">
<th style="text-align: left;">Date</th>
<th style="text-align: left;">version</th>
<th style="text-align: left;">changes</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><strong>2020-09-14</strong></td>
<td style="text-align: left;">r1.0</td>
<td style="text-align: left;">initial version (X-CUBE-AI 5.2)</td>
</tr>
</tbody>
</table>
</section>



<section class="st_footer">

<h1> <br> </h1>

<p style="font-family:verdana; text-align:left;">
 Embedded Documentation 

	- <b> Relocatable binary model support </b>
			<br> X-CUBE-AI Expansion Package
				<br> r1.0
		 - AI PLATFORM r5.2.0
			 (Embedded Inference Client API 1.1.0) 
			 - Command Line Interface r1.4.0 
		
	
</p>

<img src="" title="ST logo" align="right" height="100" />

<div class="stnotice">
Information in this document is provided solely in connection with ST products.
The contents of this document are subject to change without prior notice.
<br>
© Copyright STMicroelectronics 2020. All rights reserved. <a href="http://www.st.com">www.st.com</a>
</div>

<hr size="1" />
</section>


</article>
</body>

</html>
