<!-- HTML header for doxygen 1.8.6-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.13"/>
<title>OpenCV: Conversion of TensorFlow Classification Models and Launch with OpenCV Python</title>
<link href="../../opencv.ico" rel="shortcut icon" type="image/x-icon" />
<link href="../../tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="../../jquery.js"></script>
<script type="text/javascript" src="../../dynsections.js"></script>
<script type="text/javascript" src="../../tutorial-utils.js"></script>
<link href="../../search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="../../search/searchdata.js"></script>
<script type="text/javascript" src="../../search/search.js"></script>
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    extensions: ["tex2jax.js", "TeX/AMSmath.js", "TeX/AMSsymbols.js"],
    jax: ["input/TeX","output/HTML-CSS"],
});
//<![CDATA[
MathJax.Hub.Config(
{
  TeX: {
      Macros: {
          matTT: [ "\\[ \\left|\\begin{array}{ccc} #1 & #2 & #3\\\\ #4 & #5 & #6\\\\ #7 & #8 & #9 \\end{array}\\right| \\]", 9],
          fork: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ \\end{array} \\right.", 4],
          forkthree: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ #5 & \\mbox{#6}\\\\ \\end{array} \\right.", 6],
          forkfour: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ #5 & \\mbox{#6}\\\\ #7 & \\mbox{#8}\\\\ \\end{array} \\right.", 8],
          vecthree: ["\\begin{bmatrix} #1\\\\ #2\\\\ #3 \\end{bmatrix}", 3],
          vecthreethree: ["\\begin{bmatrix} #1 & #2 & #3\\\\ #4 & #5 & #6\\\\ #7 & #8 & #9 \\end{bmatrix}", 9],
          cameramatrix: ["#1 = \\begin{bmatrix} f_x & 0 & c_x\\\\ 0 & f_y & c_y\\\\ 0 & 0 & 1 \\end{bmatrix}", 1],
          distcoeffs: ["(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \\tau_x, \\tau_y]]]]) \\text{ of 4, 5, 8, 12 or 14 elements}"],
          distcoeffsfisheye: ["(k_1, k_2, k_3, k_4)"],
          hdotsfor: ["\\dots", 1],
          mathbbm: ["\\mathbb{#1}", 1],
          bordermatrix: ["\\matrix{#1}", 1]
      }
  }
}
);
//]]>
</script><script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js"></script>
<link href="../../doxygen.css" rel="stylesheet" type="text/css" />
<link href="../../stylesheet.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<!--#include virtual="/google-search.html"-->
<table cellspacing="0" cellpadding="0">
 <tbody>
 <tr style="height: 56px;">
  <td id="projectlogo"><img alt="Logo" src="../../opencv-logo-small.png"/></td>
  <td style="padding-left: 0.5em;">
   <div id="projectname">OpenCV
   &#160;<span id="projectnumber">4.5.2</span>
   </div>
   <div id="projectbrief">Open Source Computer Vision</div>
  </td>
 </tr>
 </tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.13 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "../../search",false,'Search');
</script>
<script type="text/javascript" src="../../menudata.js"></script>
<script type="text/javascript" src="../../menu.js"></script>
<script type="text/javascript">
$(function() {
  initMenu('../../',true,false,'search.php','Search');
  $(document).ready(function() { init_search(); });
});
</script>
<div id="main-nav"></div>
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
     onmouseover="return searchBox.OnSearchSelectShow()"
     onmouseout="return searchBox.OnSearchSelectHide()"
     onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>

<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0" 
        name="MSearchResults" id="MSearchResults">
</iframe>
</div>

<div id="nav-path" class="navpath">
  <ul>
<li class="navelem"><a class="el" href="../../d9/df8/tutorial_root.html">OpenCV Tutorials</a></li><li class="navelem"><a class="el" href="../../d2/d58/tutorial_table_of_content_dnn.html">Deep Neural Networks (dnn module)</a></li>  </ul>
</div>
</div><!-- top -->
<div class="header">
  <div class="headertitle">
<div class="title">Conversion of TensorFlow Classification Models and Launch with OpenCV Python </div>  </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><table class="doxtable">
<tr>
<th align="right"></th><th align="left"></th></tr>
<tr>
<td align="right">Original author </td><td align="left">Anastasia Murzova </td></tr>
<tr>
<td align="right">Compatibility </td><td align="left">OpenCV &gt;= 4.5 </td></tr>
</table>
<h2>Goals</h2>
<p>In this tutorial you will learn how to:</p><ul>
<li>obtain frozen graphs of TensorFlow (TF) classification models</li>
<li>run converted TensorFlow model with OpenCV Python API</li>
<li>obtain an evaluation of the TensorFlow and OpenCV DNN models</li>
</ul>
<p>We will explore the above-listed points using the MobileNet architecture as an example.</p>
<h2>Introduction</h2>
<p>Let's briefly review the key concepts involved in the pipeline of converting TensorFlow models with the OpenCV API. The initial step in the conversion of TensorFlow models into <a class="el" href="../../db/d30/classcv_1_1dnn_1_1Net.html" title="This class allows to create and manipulate comprehensive artificial neural networks. ">cv.dnn.Net</a> is obtaining the frozen TF model graph. A frozen graph combines the model graph structure with the values of the required variables, for example, the weights. The frozen graph is usually saved in <a href="https://en.wikipedia.org/wiki/Protocol_Buffers">protobuf</a> (<code>.pb</code>) files. After the model <code>.pb</code> file has been generated, it can be read with the <a class="el" href="../../d6/d0f/group__dnn.html#gad820b280978d06773234ba6841e77e8d" title="Reads a network model stored in TensorFlow framework&#39;s format. ">cv.dnn.readNetFromTensorflow</a> function.</p>
<h2>Requirements</h2>
<p>To experiment with the code below, you will need to install a set of libraries. We will use a virtual environment with Python 3.7+ for this:</p>
<div class="fragment"><div class="line">virtualenv -p /usr/bin/python3.7 &lt;env_dir_path&gt;</div><div class="line">source &lt;env_dir_path&gt;/bin/activate</div></div><!-- fragment --><p>For OpenCV-Python building from source, follow the corresponding instructions from the <a class="el" href="../../da/df6/tutorial_py_table_of_contents_setup.html">Introduction to OpenCV</a>.</p>
<p>Before you start the installation of the libraries, you can customize the <a href="https://github.com/opencv/opencv/tree/master/samples/dnn/dnn_model_runner/dnn_conversion/requirements.txt">requirements.txt</a>, excluding or including (for example, <code>opencv-python</code>) some dependencies. The below line initiates requirements installation into the previously activated virtual environment:</p>
<div class="fragment"><div class="line">pip install -r requirements.txt</div></div><!-- fragment --><h2>Practice</h2>
<p>In this part we are going to cover the following points:</p><ol type="1">
<li>create a TF classification model conversion pipeline and provide the inference</li>
<li>evaluate and test TF classification models</li>
</ol>
<p>If you only want to run the evaluation or test model pipelines, the "Model Conversion Pipeline" tutorial part can be skipped.</p>
<h3>Model Conversion Pipeline</h3>
<p>The code in this subchapter is located in the <code>dnn_model_runner</code> module and can be executed with the line:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.tf.classification.py_to_py_mobilenet</div></div><!-- fragment --><p>The following code implements the below-listed steps:</p><ol type="1">
<li>instantiate TF model</li>
<li>create TF frozen graph</li>
<li>read TF frozen graph with OpenCV API</li>
<li>prepare input data</li>
<li>provide inference</li>
</ol>
<div class="fragment"><div class="line"># initialize TF MobileNet model</div><div class="line">original_tf_model = MobileNet(</div><div class="line">    include_top=True,</div><div class="line">    weights=&quot;imagenet&quot;</div><div class="line">)</div><div class="line"></div><div class="line"># get TF frozen graph path</div><div class="line">full_pb_path = get_tf_model_proto(original_tf_model)</div><div class="line"></div><div class="line"># read frozen graph with OpenCV API</div><div class="line">opencv_net = cv2.dnn.readNetFromTensorflow(full_pb_path)</div><div class="line">print(&quot;OpenCV model was successfully read. Model layers: \n&quot;, opencv_net.getLayerNames())</div><div class="line"></div><div class="line"># get preprocessed image</div><div class="line">input_img = get_preprocessed_img(&quot;../data/squirrel_cls.jpg&quot;)</div><div class="line"></div><div class="line"># get ImageNet labels</div><div class="line">imagenet_labels = get_imagenet_labels(&quot;../data/dnn/classification_classes_ILSVRC2012.txt&quot;)</div><div class="line"></div><div class="line"># obtain OpenCV DNN predictions</div><div class="line">get_opencv_dnn_prediction(opencv_net, input_img, imagenet_labels)</div><div class="line"></div><div class="line"># obtain TF model predictions</div><div class="line">get_tf_dnn_prediction(original_tf_model, input_img, imagenet_labels)</div></div><!-- fragment --><p>To provide model inference we will use the below <a href="https://www.pexels.com/photo/brown-squirrel-eating-1564292">squirrel photo</a> (under <a href="https://www.pexels.com/terms-of-service/">CC0</a> license) corresponding to ImageNet class ID 335: </p><div class="fragment"><div class="line">fox squirrel, eastern fox squirrel, Sciurus niger</div></div><!-- fragment --><div class="image">
<img src="../../squirrel_cls.jpg" alt="squirrel_cls.jpg"/>
<div class="caption">
Classification model input image</div></div>
<p> To decode the labels of the obtained prediction, we also need the <code>classification_classes_ILSVRC2012.txt</code> file, which contains the full list of ImageNet classes.</p>
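<p>The label-reading helper can be as simple as the following sketch (a hypothetical reimplementation of the <code>get_imagenet_labels</code> call used above; the actual helper lives in the <code>dnn_model_runner</code> module):</p>

```python
def get_imagenet_labels(labels_path):
    # each line of the classes file holds one class description,
    # e.g. "fox squirrel, eastern fox squirrel, Sciurus niger";
    # the line index corresponds to the ImageNet class ID
    with open(labels_path) as f:
        return [line.strip() for line in f]
```

<p>The predicted class ID returned by <code>np.argmax</code> can then be used directly as an index into this list.</p>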
<p>Let's go deeper into each step, using the pretrained TF MobileNet as an example:</p><ul>
<li>instantiate TF model:</li>
</ul>
<div class="fragment"><div class="line"># initialize TF MobileNet model</div><div class="line">original_tf_model = MobileNet(</div><div class="line">    include_top=True,</div><div class="line">    weights=&quot;imagenet&quot;</div><div class="line">)</div></div><!-- fragment --><ul>
<li>create TF frozen graph</li>
</ul>
<div class="fragment"><div class="line"># define the directory for .pb model</div><div class="line">pb_model_path = &quot;models&quot;</div><div class="line"></div><div class="line"># define the name of .pb model</div><div class="line">pb_model_name = &quot;mobilenet.pb&quot;</div><div class="line"></div><div class="line"># create directory for further converted model</div><div class="line">os.makedirs(pb_model_path, exist_ok=True)</div><div class="line"></div><div class="line"># get model TF graph</div><div class="line">tf_model_graph = tf.function(lambda x: tf_model(x))</div><div class="line"></div><div class="line"># get concrete function</div><div class="line">tf_model_graph = tf_model_graph.get_concrete_function(</div><div class="line">    tf.TensorSpec(tf_model.inputs[0].shape, tf_model.inputs[0].dtype))</div><div class="line"></div><div class="line"># obtain frozen concrete function</div><div class="line">frozen_tf_func = convert_variables_to_constants_v2(tf_model_graph)</div><div class="line"># get frozen graph</div><div class="line">frozen_tf_func.graph.as_graph_def()</div><div class="line"></div><div class="line"># save full tf model</div><div class="line">tf.io.write_graph(graph_or_graph_def=frozen_tf_func.graph,</div><div class="line">                  logdir=pb_model_path,</div><div class="line">                  name=pb_model_name,</div><div class="line">                  as_text=False)</div></div><!-- fragment --><p>After the successful execution of the above code, we will get a frozen graph in <code>models/mobilenet.pb</code>.</p>
<ul>
<li>read the TF frozen graph with <a class="el" href="../../d6/d0f/group__dnn.html#gad820b280978d06773234ba6841e77e8d" title="Reads a network model stored in TensorFlow framework&#39;s format. ">cv.dnn.readNetFromTensorflow</a>, passing the <code>mobilenet.pb</code> obtained in the previous step into it:</li>
</ul>
<div class="fragment"><div class="line"># get TF frozen graph path</div><div class="line">full_pb_path = get_tf_model_proto(original_tf_model)</div></div><!-- fragment --><ul>
<li>prepare input data with the cv2.dnn.blobFromImage function:</li>
</ul>
<div class="fragment"><div class="line"># read the image</div><div class="line">input_img = cv2.imread(img_path, cv2.IMREAD_COLOR)</div><div class="line">input_img = input_img.astype(np.float32)</div><div class="line"></div><div class="line"># define preprocess parameters</div><div class="line">mean = np.array([1.0, 1.0, 1.0]) * 127.5</div><div class="line">scale = 1 / 127.5</div><div class="line"></div><div class="line"># prepare input blob to fit the model input:</div><div class="line"># 1. subtract mean</div><div class="line"># 2. scale pixel values to the [-1, 1] range</div><div class="line">input_blob = cv2.dnn.blobFromImage(</div><div class="line">    image=input_img,</div><div class="line">    scalefactor=scale,</div><div class="line">    size=(224, 224),  # img target size</div><div class="line">    mean=mean,</div><div class="line">    swapRB=True,  # BGR -&gt; RGB</div><div class="line">    crop=True  # center crop</div><div class="line">)</div><div class="line">print(&quot;Input blob shape: {}\n&quot;.format(input_blob.shape))</div></div><!-- fragment --><p>Please pay attention to the preprocessing order in the cv2.dnn.blobFromImage function: the mean value is subtracted first, and only then are the pixel values multiplied by the defined scale. Therefore, to reproduce the image preprocessing pipeline of the TF <a href="https://github.com/tensorflow/tensorflow/blob/02032fb477e9417197132648ec81e75beee9063a/tensorflow/python/keras/applications/mobilenet.py#L443-L445"><code>mobilenet.preprocess_input</code></a> function, we multiply <code>mean</code> by <code>127.5</code>.</p>
<p>As a result, a 4-dimensional <code>input_blob</code> is obtained:</p>
<p><code>Input blob shape: (1, 3, 224, 224)</code></p>
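<p>The order of operations can be verified with a quick NumPy check: blobFromImage computes <code>(pixel - mean) * scalefactor</code>, which for these parameters reproduces the <code>x / 127.5 - 1</code> mapping of <code>mobilenet.preprocess_input</code> (the pixel values below are illustrative):</p>

```python
import numpy as np

# blobFromImage preprocessing: subtract mean first, then scale
pixel = np.array([0.0, 127.5, 255.0], dtype=np.float32)
mean = np.array([1.0, 1.0, 1.0], dtype=np.float32) * 127.5
scale = 1 / 127.5

opencv_style = (pixel - mean) * scale  # [-1., 0., 1.]
tf_style = pixel / 127.5 - 1.0         # mobilenet.preprocess_input mapping

assert np.allclose(opencv_style, tf_style)
```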
<ul>
<li>provide OpenCV <a class="el" href="../../db/d30/classcv_1_1dnn_1_1Net.html" title="This class allows to create and manipulate comprehensive artificial neural networks. ">cv.dnn.Net</a> inference:</li>
</ul>
<div class="fragment"><div class="line"># set OpenCV DNN input</div><div class="line">opencv_net.setInput(preproc_img)</div><div class="line"></div><div class="line"># OpenCV DNN inference</div><div class="line">out = opencv_net.forward()</div><div class="line">print(&quot;OpenCV DNN prediction: \n&quot;)</div><div class="line">print(&quot;* shape: &quot;, out.shape)</div><div class="line"></div><div class="line"># get the predicted class ID</div><div class="line">imagenet_class_id = np.argmax(out)</div><div class="line"></div><div class="line"># get confidence</div><div class="line">confidence = out[0][imagenet_class_id]</div><div class="line">print(&quot;* class ID: {}, label: {}&quot;.format(imagenet_class_id, imagenet_labels[imagenet_class_id]))</div><div class="line">print(&quot;* confidence: {:.4f}\n&quot;.format(confidence))</div></div><!-- fragment --><p>After executing the above code, we will get the following output:</p>
<div class="fragment"><div class="line">OpenCV DNN prediction:</div><div class="line">* shape:  (1, 1000)</div><div class="line">* class ID: 335, label: fox squirrel, eastern fox squirrel, Sciurus niger</div><div class="line">* confidence: 0.9525</div></div><!-- fragment --><ul>
<li>provide TF MobileNet inference:</li>
</ul>
<div class="fragment"><div class="line"># inference</div><div class="line">preproc_img = preproc_img.transpose(0, 2, 3, 1)</div><div class="line">print(&quot;TF input blob shape: {}\n&quot;.format(preproc_img.shape))</div><div class="line"></div><div class="line">out = original_net(preproc_img)</div><div class="line"></div><div class="line">print(&quot;\nTensorFlow model prediction: \n&quot;)</div><div class="line">print(&quot;* shape: &quot;, out.shape)</div><div class="line"></div><div class="line"># get the predicted class ID</div><div class="line">imagenet_class_id = np.argmax(out)</div><div class="line">print(&quot;* class ID: {}, label: {}&quot;.format(imagenet_class_id, imagenet_labels[imagenet_class_id]))</div><div class="line"></div><div class="line"># get confidence</div><div class="line">confidence = out[0][imagenet_class_id]</div><div class="line">print(&quot;* confidence: {:.4f}&quot;.format(confidence))</div></div><!-- fragment --><p>To fit the TF model input, <code>input_blob</code> was transposed:</p>
<div class="fragment"><div class="line">TF input blob shape: (1, 224, 224, 3)</div></div><!-- fragment --><p>TF inference results are the following:</p>
<div class="fragment"><div class="line">TensorFlow model prediction:</div><div class="line">* shape:  (1, 1000)</div><div class="line">* class ID: 335, label: fox squirrel, eastern fox squirrel, Sciurus niger</div><div class="line">* confidence: 0.9525</div></div><!-- fragment --><p>As can be seen from the experiments, the OpenCV and TF inference results are equal.</p>
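<p>This equality can also be checked programmatically; below is a minimal sketch with a hypothetical <code>compare_predictions</code> helper (the score arrays are stubbed in place of the real <code>out</code> values obtained above):</p>

```python
import numpy as np

def compare_predictions(out_a, out_b, l1_tol=1e-4):
    # two (1, N) score arrays agree if they pick the same class
    # and their element-wise L1 difference is negligible
    same_class = int(np.argmax(out_a)) == int(np.argmax(out_b))
    l1_diff = float(np.abs(out_a - out_b).sum())
    return same_class and l1_diff < l1_tol

# stub scores standing in for the OpenCV and TF outputs
out_opencv = np.zeros((1, 1000), dtype=np.float32)
out_opencv[0, 335] = 0.9525  # fox squirrel class ID
out_tf = out_opencv.copy()

print(compare_predictions(out_opencv, out_tf))  # True
```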
<h3>Evaluation of the Models</h3>
<p>The <code>dnn_model_runner</code> module proposed in <code>dnn/samples</code> allows running the full evaluation pipeline on the ImageNet dataset and test execution for the following TensorFlow classification models:</p><ul>
<li>vgg16</li>
<li>vgg19</li>
<li>resnet50</li>
<li>resnet101</li>
<li>resnet152</li>
<li>densenet121</li>
<li>densenet169</li>
<li>densenet201</li>
<li>inceptionresnetv2</li>
<li>inceptionv3</li>
<li>mobilenet</li>
<li>mobilenetv2</li>
<li>nasnetlarge</li>
<li>nasnetmobile</li>
<li>xception</li>
</ul>
<p>This list can also be extended with further appropriate evaluation pipeline configurations.</p>
<h4>Evaluation Mode</h4>
<p>The below line runs the module in evaluation mode:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.tf.classification.py_to_py_cls --model_name &lt;tf_cls_model_name&gt;</div></div><!-- fragment --><p>The classification model chosen from the list will be read into an OpenCV <code>cv.dnn_Net</code> object. Evaluation results of the TF and OpenCV models (accuracy, inference time, L1 difference) will be written into a log file. Inference time values will also be depicted in a chart to summarize the obtained model information.</p>
<p>Necessary evaluation configurations are defined in <a href="https://github.com/opencv/opencv/tree/master/samples/dnn/dnn_model_runner/dnn_conversion/common/test/configs/test_config.py">test_config.py</a> and can be modified in accordance with the actual data locations:</p>
<div class="fragment"><div class="line">@dataclass</div><div class="line">class TestClsConfig:</div><div class="line">    batch_size: int = 50</div><div class="line">    frame_size: int = 224</div><div class="line">    img_root_dir: str = &quot;./ILSVRC2012_img_val&quot;</div><div class="line">    # location of image-class matching</div><div class="line">    img_cls_file: str = &quot;./val.txt&quot;</div><div class="line">    bgr_to_rgb: bool = True</div></div><!-- fragment --><p>The values from <code>TestClsConfig</code> can be customized in accordance with the chosen model.</p>
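<p>A configuration can also be overridden in code rather than by editing the file, for example with <code>dataclasses.replace</code> (the <code>TestClsConfig</code> definition is repeated here so the sketch is self-contained, and the dataset path is hypothetical):</p>

```python
from dataclasses import dataclass, replace

@dataclass
class TestClsConfig:
    batch_size: int = 50
    frame_size: int = 224
    img_root_dir: str = "./ILSVRC2012_img_val"
    # location of image-class matching
    img_cls_file: str = "./val.txt"
    bgr_to_rgb: bool = True

# point the evaluation at a custom dataset location with a smaller batch
custom_cfg = replace(TestClsConfig(),
                     img_root_dir="/data/ILSVRC2012_img_val",
                     batch_size=25)
print(custom_cfg)
```

<p><code>replace</code> returns a new config object, so the defaults stay untouched for other runs.</p>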
<p>To initiate the evaluation of the TensorFlow MobileNet, run the following line:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.tf.classification.py_to_py_cls --model_name mobilenet</div></div><!-- fragment --><p>After script launch, the log file with evaluation data will be generated in <code>dnn_model_runner/dnn_conversion/logs</code>:</p>
<div class="fragment"><div class="line">===== Running evaluation of the model with the following params:</div><div class="line">    * val data location: ./ILSVRC2012_img_val</div><div class="line">    * log file location: dnn_model_runner/dnn_conversion/logs/TF_mobilenet_log.txt</div></div><!-- fragment --><h4>Test Mode</h4>
<p>The below line runs the module in test mode, i.e. it provides the steps for model inference:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.tf.classification.py_to_py_cls --model_name &lt;tf_cls_model_name&gt; --test True --default_img_preprocess &lt;True/False&gt; --evaluate False</div></div><!-- fragment --><p>Here the <code>default_img_preprocess</code> key defines whether you'd like to parametrize the model test process with particular values or use the default values, for example, <code>scale</code>, <code>mean</code> or <code>std</code>.</p>
<p>Test configuration is represented in <a href="https://github.com/opencv/opencv/tree/master/samples/dnn/dnn_model_runner/dnn_conversion/common/test/configs/test_config.py">test_config.py</a> <code>TestClsModuleConfig</code> class:</p>
<div class="fragment"><div class="line">@dataclass</div><div class="line">class TestClsModuleConfig:</div><div class="line">    cls_test_data_dir: str = &quot;../data&quot;</div><div class="line">    test_module_name: str = &quot;classification&quot;</div><div class="line">    test_module_path: str = &quot;classification.py&quot;</div><div class="line">    input_img: str = os.path.join(cls_test_data_dir, &quot;squirrel_cls.jpg&quot;)</div><div class="line">    model: str = &quot;&quot;</div><div class="line"></div><div class="line">    frame_height: str = str(TestClsConfig.frame_size)</div><div class="line">    frame_width: str = str(TestClsConfig.frame_size)</div><div class="line">    scale: str = &quot;1.0&quot;</div><div class="line">    mean: List[str] = field(default_factory=lambda: [&quot;0.0&quot;, &quot;0.0&quot;, &quot;0.0&quot;])</div><div class="line">    std: List[str] = field(default_factory=list)</div><div class="line">    crop: str = &quot;False&quot;</div><div class="line">    rgb: str = &quot;True&quot;</div><div class="line">    rsz_height: str = &quot;&quot;</div><div class="line">    rsz_width: str = &quot;&quot;</div><div class="line">    classes: str = os.path.join(cls_test_data_dir, &quot;dnn&quot;, &quot;classification_classes_ILSVRC2012.txt&quot;)</div></div><!-- fragment --><p>The default image preprocessing options are defined in <code>default_preprocess_config.py</code>. For instance, for MobileNet:</p>
<div class="fragment"><div class="line">tf_input_blob = {</div><div class="line">    &quot;mean&quot;: [&quot;127.5&quot;, &quot;127.5&quot;, &quot;127.5&quot;],</div><div class="line">    &quot;scale&quot;: str(1 / 127.5),</div><div class="line">    &quot;std&quot;: [],</div><div class="line">    &quot;crop&quot;: &quot;True&quot;,</div><div class="line">    &quot;rgb&quot;: &quot;True&quot;</div><div class="line">}</div></div><!-- fragment --><p>The basis of the model testing is represented in <a href="https://github.com/opencv/opencv/blob/master/samples/dnn/classification.py">samples/dnn/classification.py</a>. <code>classification.py</code> can be executed autonomously with the converted model provided via <code>--input</code> and with populated parameters for <a class="el" href="../../d6/d0f/group__dnn.html#ga29f34df9376379a603acd8df581ac8d7" title="Creates 4-dimensional blob from image. Optionally resizes and crops image from center, subtract mean values, scales values by scalefactor, swap Blue and Red channels. ">cv.dnn.blobFromImage</a>.</p>
<p>To reproduce from scratch the OpenCV steps described in "Model Conversion Pipeline" with <code>dnn_model_runner</code>, execute the below line:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.tf.classification.py_to_py_cls --model_name mobilenet --test True --default_img_preprocess True --evaluate False</div></div><!-- fragment --><p>The network prediction is depicted in the top left corner of the output window:</p>
<div class="image">
<img src="../../tf_mobilenet_opencv_test_res.jpg" alt="tf_mobilenet_opencv_test_res.jpg"/>
<div class="caption">
TF MobileNet OpenCV inference output</div></div>
</div></div><!-- contents -->
<!-- HTML footer for doxygen 1.8.6-->
<!-- start footer part -->
<hr class="footer"/><address class="footer"><small>
Generated on Fri Apr 2 2021 11:36:34 for OpenCV by &#160;<a href="http://www.doxygen.org/index.html">
<img class="footer" src="../../doxygen.png" alt="doxygen"/>
</a> 1.8.13
</small></address>
<script type="text/javascript">
//<![CDATA[
addTutorialsButtons();
//]]>
</script>
</body>
</html>
