<!-- HTML header for doxygen 1.8.6-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.13"/>
<title>OpenCV: Conversion of TensorFlow Segmentation Models and Launch with OpenCV</title>
<link href="../../opencv.ico" rel="shortcut icon" type="image/x-icon" />
<link href="../../tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="../../jquery.js"></script>
<script type="text/javascript" src="../../dynsections.js"></script>
<script type="text/javascript" src="../../tutorial-utils.js"></script>
<link href="../../search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="../../search/searchdata.js"></script>
<script type="text/javascript" src="../../search/search.js"></script>
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    extensions: ["tex2jax.js", "TeX/AMSmath.js", "TeX/AMSsymbols.js"],
    jax: ["input/TeX","output/HTML-CSS"],
});
//<![CDATA[
MathJax.Hub.Config(
{
  TeX: {
      Macros: {
          matTT: [ "\\[ \\left|\\begin{array}{ccc} #1 & #2 & #3\\\\ #4 & #5 & #6\\\\ #7 & #8 & #9 \\end{array}\\right| \\]", 9],
          fork: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ \\end{array} \\right.", 4],
          forkthree: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ #5 & \\mbox{#6}\\\\ \\end{array} \\right.", 6],
          forkfour: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ #5 & \\mbox{#6}\\\\ #7 & \\mbox{#8}\\\\ \\end{array} \\right.", 8],
          vecthree: ["\\begin{bmatrix} #1\\\\ #2\\\\ #3 \\end{bmatrix}", 3],
          vecthreethree: ["\\begin{bmatrix} #1 & #2 & #3\\\\ #4 & #5 & #6\\\\ #7 & #8 & #9 \\end{bmatrix}", 9],
          cameramatrix: ["#1 = \\begin{bmatrix} f_x & 0 & c_x\\\\ 0 & f_y & c_y\\\\ 0 & 0 & 1 \\end{bmatrix}", 1],
          distcoeffs: ["(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \\tau_x, \\tau_y]]]]) \\text{ of 4, 5, 8, 12 or 14 elements}"],
          distcoeffsfisheye: ["(k_1, k_2, k_3, k_4)"],
          hdotsfor: ["\\dots", 1],
          mathbbm: ["\\mathbb{#1}", 1],
          bordermatrix: ["\\matrix{#1}", 1]
      }
  }
}
);
//]]>
</script><script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js"></script>
<link href="../../doxygen.css" rel="stylesheet" type="text/css" />
<link href="../../stylesheet.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<!--#include virtual="/google-search.html"-->
<table cellspacing="0" cellpadding="0">
 <tbody>
 <tr style="height: 56px;">
  <td id="projectlogo"><img alt="Logo" src="../../opencv-logo-small.png"/></td>
  <td style="padding-left: 0.5em;">
   <div id="projectname">OpenCV
   &#160;<span id="projectnumber">4.5.2</span>
   </div>
   <div id="projectbrief">Open Source Computer Vision</div>
  </td>
 </tr>
 </tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.13 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "../../search",false,'Search');
</script>
<script type="text/javascript" src="../../menudata.js"></script>
<script type="text/javascript" src="../../menu.js"></script>
<script type="text/javascript">
$(function() {
  initMenu('../../',true,false,'search.php','Search');
  $(document).ready(function() { init_search(); });
});
</script>
<div id="main-nav"></div>
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
     onmouseover="return searchBox.OnSearchSelectShow()"
     onmouseout="return searchBox.OnSearchSelectHide()"
     onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>

<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0" 
        name="MSearchResults" id="MSearchResults">
</iframe>
</div>

<div id="nav-path" class="navpath">
  <ul>
<li class="navelem"><a class="el" href="../../d9/df8/tutorial_root.html">OpenCV Tutorials</a></li><li class="navelem"><a class="el" href="../../d2/d58/tutorial_table_of_content_dnn.html">Deep Neural Networks (dnn module)</a></li>  </ul>
</div>
</div><!-- top -->
<div class="header">
  <div class="headertitle">
<div class="title">Conversion of TensorFlow Segmentation Models and Launch with OpenCV </div>  </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><h2>Goals</h2>
<p>In this tutorial you will learn how to:</p><ul>
<li>convert TensorFlow (TF) segmentation models</li>
<li>run the converted TensorFlow model with OpenCV</li>
<li>obtain an evaluation of the TensorFlow and OpenCV DNN models</li>
</ul>
<p>We will explore the above-listed points using the DeepLab architecture as an example.</p>
<h2>Introduction</h2>
<p>The key concepts involved in the transition pipeline of the <a href="https://link_to_cls_tutorial">TensorFlow classification</a> and segmentation models with the OpenCV API are almost identical, except for the graph optimization phase. The initial step in the conversion of a TensorFlow model into <a class="el" href="../../db/d30/classcv_1_1dnn_1_1Net.html" title="This class allows to create and manipulate comprehensive artificial neural networks. ">cv.dnn.Net</a> is obtaining the frozen TF model graph. A frozen graph combines the model graph structure with the values of the required variables, for example, the weights. The frozen graph is usually saved in <a href="https://en.wikipedia.org/wiki/Protocol_Buffers">protobuf</a> (<code>.pb</code>) files. To read the generated segmentation model <code>.pb</code> file with <a class="el" href="../../d6/d0f/group__dnn.html#gad820b280978d06773234ba6841e77e8d" title="Reads a network model stored in TensorFlow framework&#39;s format. ">cv.dnn.readNetFromTensorflow</a>, the graph must first be modified with the TF <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms">graph transform tool</a>.</p>
<h2>Practice</h2>
<p>In this part we are going to cover the following points:</p><ol type="1">
<li>create a TF segmentation model conversion pipeline and run the inference</li>
<li>evaluate and test TF segmentation models</li>
</ol>
<p>If you'd like merely to run the evaluation or test pipelines, the "Model Conversion Pipeline" part of the tutorial can be skipped.</p>
<h3>Model Conversion Pipeline</h3>
<p>The code in this subchapter is located in the <code>dnn_model_runner</code> module and can be executed with the line:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.tf.segmentation.py_to_py_deeplab</div></div><!-- fragment --><p>TensorFlow segmentation models can be found in the <a href="https://github.com/tensorflow/models/tree/master/research/#tensorflow-research-models">TensorFlow Research Models</a> section, which contains implementations of models based on published research papers. We will retrieve the archive with the pre-trained TF DeepLabV3 model from the link below:</p>
<div class="fragment"><div class="line">http://download.tensorflow.org/models/deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz</div></div><!-- fragment --><p>The full pipeline for obtaining the frozen graph is described in <code>deeplab_retrievement.py</code>:</p>
<div class="fragment"><div class="line">def get_deeplab_frozen_graph():</div><div class="line">    # define model path to download</div><div class="line">    models_url = &#39;http://download.tensorflow.org/models/&#39;</div><div class="line">    mobilenetv2_voctrainval = &#39;deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz&#39;</div><div class="line"></div><div class="line">    # construct model link to download</div><div class="line">    model_link = models_url + mobilenetv2_voctrainval</div><div class="line"></div><div class="line">    try:</div><div class="line">        urllib.request.urlretrieve(model_link, mobilenetv2_voctrainval)</div><div class="line">    except Exception:</div><div class="line">        print(&quot;TF DeepLabV3 was not retrieved: {}&quot;.format(model_link))</div><div class="line">        return</div><div class="line"></div><div class="line">    tf_model_tar = tarfile.open(mobilenetv2_voctrainval)</div><div class="line"></div><div class="line">    # iterate the obtained model archive</div><div class="line">    for model_tar_elem in tf_model_tar.getmembers():</div><div class="line">        # check whether the model archive contains frozen graph</div><div class="line">        if TF_FROZEN_GRAPH_NAME in os.path.basename(model_tar_elem.name):</div><div class="line">            # extract frozen graph</div><div class="line">            tf_model_tar.extract(model_tar_elem, FROZEN_GRAPH_PATH)</div><div class="line"></div><div class="line">    tf_model_tar.close()</div></div><!-- fragment --><p>After running this script:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.tf.segmentation.deeplab_retrievement</div></div><!-- fragment --><p>we will get <code>frozen_inference_graph.pb</code> in <code>deeplab/deeplabv3_mnv2_pascal_trainval</code>.</p>
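<p>The member-matching logic in <code>deeplab_retrievement.py</code> can be exercised without downloading the real archive. The sketch below builds a tiny stand-in archive with Python's standard <code>tarfile</code> module and picks out the frozen-graph member in the same way (the helper names here are hypothetical):</p>

```python
import io
import os
import tarfile

TF_FROZEN_GRAPH_NAME = "frozen_inference_graph"

def find_frozen_graph_members(tar_path):
    # return archive member names whose basename contains the frozen-graph name
    with tarfile.open(tar_path) as tf_model_tar:
        return [m.name for m in tf_model_tar.getmembers()
                if TF_FROZEN_GRAPH_NAME in os.path.basename(m.name)]

def make_demo_archive(path):
    # build a tiny stand-in for the DeepLab .tar.gz archive
    with tarfile.open(path, "w:gz") as tar:
        for name in ("model/checkpoint", "model/frozen_inference_graph.pb"):
            data = b"dummy"
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

if __name__ == "__main__":
    make_demo_archive("demo.tar.gz")
    print(find_frozen_graph_members("demo.tar.gz"))
    # ['model/frozen_inference_graph.pb']
```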
<p>Before loading the network with OpenCV, the extracted <code>frozen_inference_graph.pb</code> must be optimized. To optimize the graph we use TF <code>TransformGraph</code> with default parameters:</p>
<div class="fragment"><div class="line">DEFAULT_OPT_GRAPH_NAME = &quot;optimized_frozen_inference_graph.pb&quot;</div><div class="line">DEFAULT_INPUTS = &quot;sub_7&quot;</div><div class="line">DEFAULT_OUTPUTS = &quot;ResizeBilinear_3&quot;</div><div class="line">DEFAULT_TRANSFORMS = &quot;remove_nodes(op=Identity)&quot; \</div><div class="line">                     &quot; merge_duplicate_nodes&quot; \</div><div class="line">                     &quot; strip_unused_nodes&quot; \</div><div class="line">                     &quot; fold_constants(ignore_errors=true)&quot; \</div><div class="line">                     &quot; fold_batch_norms&quot; \</div><div class="line">                     &quot; fold_old_batch_norms&quot;</div><div class="line"></div><div class="line"></div><div class="line">def optimize_tf_graph(</div><div class="line">        in_graph,</div><div class="line">        out_graph=DEFAULT_OPT_GRAPH_NAME,</div><div class="line">        inputs=DEFAULT_INPUTS,</div><div class="line">        outputs=DEFAULT_OUTPUTS,</div><div class="line">        transforms=DEFAULT_TRANSFORMS,</div><div class="line">        is_manual=True,</div><div class="line">        was_optimized=True</div><div class="line">):</div><div class="line">    # ...</div><div class="line"></div><div class="line">    tf_opt_graph = TransformGraph(</div><div class="line">        tf_graph,</div><div class="line">        inputs,</div><div class="line">        outputs,</div><div class="line">        transforms</div><div class="line">    )</div></div><!-- fragment --><p>To run the graph optimization process, execute the line:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.tf.segmentation.tf_graph_optimizer --in_graph deeplab/deeplabv3_mnv2_pascal_trainval/frozen_inference_graph.pb</div></div><!-- fragment --><p>As a result, the <code>deeplab/deeplabv3_mnv2_pascal_trainval</code> directory will contain <code>optimized_frozen_inference_graph.pb</code>.</p>
<p>After we have obtained the model graphs, let's examine the steps listed below:</p><ol type="1">
<li>read TF <code>frozen_inference_graph.pb</code> graph</li>
<li>read optimized TF frozen graph with OpenCV API</li>
<li>prepare input data</li>
<li>run inference</li>
<li>get colored masks from predictions</li>
<li>visualize results</li>
</ol>
<div class="fragment"><div class="line"># get TF model graph from the obtained frozen graph</div><div class="line">deeplab_graph = read_deeplab_frozen_graph(deeplab_frozen_graph_path)</div><div class="line"></div><div class="line"># read DeepLab frozen graph with OpenCV API</div><div class="line">opencv_net = cv2.dnn.readNetFromTensorflow(opt_deeplab_frozen_graph_path)</div><div class="line">print(&quot;OpenCV model was successfully read. Model layers: \n&quot;, opencv_net.getLayerNames())</div><div class="line"></div><div class="line"># get processed image</div><div class="line">original_img_shape, tf_input_blob, opencv_input_img = get_processed_imgs(&quot;test_data/sem_segm/2007_000033.jpg&quot;)</div><div class="line"></div><div class="line"># obtain OpenCV DNN predictions</div><div class="line">opencv_prediction = get_opencv_dnn_prediction(opencv_net, opencv_input_img)</div><div class="line"></div><div class="line"># obtain TF model predictions</div><div class="line">tf_prediction = get_tf_dnn_prediction(deeplab_graph, tf_input_blob)</div><div class="line"></div><div class="line"># get PASCAL VOC classes and colors</div><div class="line">pascal_voc_classes, pascal_voc_colors = read_colors_info(&quot;test_data/sem_segm/pascal-classes.txt&quot;)</div><div class="line"></div><div class="line"># obtain colored segmentation masks</div><div class="line">opencv_colored_mask = get_colored_mask(original_img_shape, opencv_prediction, pascal_voc_colors)</div><div class="line">tf_colored_mask = get_tf_colored_mask(original_img_shape, tf_prediction, pascal_voc_colors)</div><div class="line"></div><div class="line"># obtain palette of PASCAL VOC colors</div><div class="line">color_legend = get_legend(pascal_voc_classes, pascal_voc_colors)</div><div class="line"></div><div class="line">cv2.imshow(&#39;TensorFlow Colored Mask&#39;, tf_colored_mask)</div><div class="line">cv2.imshow(&#39;OpenCV DNN Colored Mask&#39;, opencv_colored_mask)</div><div class="line"></div><div 
class="line">cv2.imshow(&#39;Color Legend&#39;, color_legend)</div></div><!-- fragment --><p>To run the model inference we will use the picture below from the <a href="http://host.robots.ox.ac.uk/pascal/VOC/">PASCAL VOC</a> validation dataset:</p>
<div class="image">
<img src="../../images/2007_000033.jpg" alt="PASCAL VOC img"/>
</div>
<p>The target segmented result is:</p>
<div class="image">
<img src="../../images/2007_000033.png" alt="PASCAL VOC ground truth"/>
</div>
<p>To decode the PASCAL VOC colors and map them onto the predicted masks, we also need the <code>pascal-classes.txt</code> file, which contains the full list of the PASCAL VOC classes and their corresponding colors.</p>
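<p>As a rough illustration, a parser for such a file could look like the sketch below. It assumes each line holds a class name followed by three color components; the actual helper in the sample reads from a file path and may differ in details:</p>

```python
def parse_colors_info(text):
    # parse "name R G B" lines into parallel class/color lists
    classes, colors = [], []
    for line in text.strip().splitlines():
        name, r, g, b = line.split()
        classes.append(name)
        colors.append((int(r), int(g), int(b)))
    return classes, colors

# toy stand-in for the contents of pascal-classes.txt
demo = """background 0 0 0
aeroplane 128 0 0
bicycle 0 128 0"""

classes, colors = parse_colors_info(demo)
print(classes)     # ['background', 'aeroplane', 'bicycle']
print(colors[1])   # (128, 0, 0)
```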
<p>Let's go deeper into each step using the pretrained TF DeepLabV3 MobileNetV2 as an example:</p>
<ul>
<li>read TF <code>frozen_inference_graph.pb</code> graph :</li>
</ul>
<div class="fragment"><div class="line"># init deeplab model graph</div><div class="line">model_graph = tf.Graph()</div><div class="line"></div><div class="line"># obtain frozen graph from the .pb file</div><div class="line">with tf.io.gfile.GFile(frozen_graph_path, &#39;rb&#39;) as graph_file:</div><div class="line">    tf_model_graph = GraphDef()</div><div class="line">    tf_model_graph.ParseFromString(graph_file.read())</div><div class="line"></div><div class="line">with model_graph.as_default():</div><div class="line">    tf.import_graph_def(tf_model_graph, name=&#39;&#39;)</div></div><!-- fragment --><ul>
<li>read optimized TF frozen graph with OpenCV API:</li>
</ul>
<div class="fragment"><div class="line"># read DeepLab frozen graph with OpenCV API</div><div class="line">opencv_net = cv2.dnn.readNetFromTensorflow(opt_deeplab_frozen_graph_path)</div></div><!-- fragment --><ul>
<li>prepare input data with the <code>cv2.dnn.blobFromImage</code> function:</li>
</ul>
<div class="fragment"><div class="line"># read the image</div><div class="line">input_img = cv2.imread(img_path, cv2.IMREAD_COLOR)</div><div class="line">input_img = input_img.astype(np.float32)</div><div class="line"></div><div class="line"># preprocess image for TF model input</div><div class="line">tf_preproc_img = cv2.resize(input_img, (513, 513))</div><div class="line">tf_preproc_img = cv2.cvtColor(tf_preproc_img, cv2.COLOR_BGR2RGB)</div><div class="line"></div><div class="line"># define preprocess parameters for OpenCV DNN</div><div class="line">mean = np.array([1.0, 1.0, 1.0]) * 127.5</div><div class="line">scale = 1 / 127.5</div><div class="line"></div><div class="line"># prepare input blob to fit the model input:</div><div class="line"># 1. subtract mean</div><div class="line"># 2. scale to set pixel values from -1 to 1</div><div class="line">input_blob = cv2.dnn.blobFromImage(</div><div class="line">    image=input_img,</div><div class="line">    scalefactor=scale,</div><div class="line">    size=(513, 513),  # img target size</div><div class="line">    mean=mean,</div><div class="line">    swapRB=True,  # BGR -&gt; RGB</div><div class="line">    crop=False  # no center crop</div><div class="line">)</div></div><!-- fragment --><p>Please pay attention to the preprocessing order in the <code>cv2.dnn.blobFromImage</code> function: first the mean value is subtracted, and only then are the pixel values multiplied by the defined scale. Therefore, to reproduce the TF image preprocessing pipeline, we multiply <code>mean</code> by <code>127.5</code>. Another important point is the image preprocessing for TF DeepLab: to pass the image into the TF model we only need to construct an appropriate shape; the rest of the image preprocessing is described in <a href="https://github.com/tensorflow/models/blob/master/research/deeplab/core/feature_extractor.py">feature_extractor.py</a> and is invoked automatically.</p>
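<p>The effect of this mean/scale pair is easy to check with plain NumPy. The snippet below mimics the subtract-then-scale arithmetic of <code>cv2.dnn.blobFromImage</code> (a sketch of the arithmetic only, not OpenCV's implementation) and shows that pixel values land in the <code>[-1, 1]</code> range expected by the TF MobileNet preprocessing:</p>

```python
import numpy as np

mean = 127.5
scale = 1 / 127.5

# darkest, middle and brightest possible 8-bit pixel values
pixels = np.array([0.0, 127.5, 255.0])

# blobFromImage order: subtract mean first, then multiply by scale
normalized = (pixels - mean) * scale
print(normalized)  # [-1.  0.  1.]
```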
<ul>
<li>run OpenCV <code>cv.dnn_Net</code> inference:</li>
</ul>
<div class="fragment"><div class="line"># set OpenCV DNN input</div><div class="line">opencv_net.setInput(preproc_img)</div><div class="line"></div><div class="line"># OpenCV DNN inference</div><div class="line">out = opencv_net.forward()</div><div class="line">print(&quot;OpenCV DNN segmentation prediction: \n&quot;)</div><div class="line">print(&quot;* shape: &quot;, out.shape)</div><div class="line"></div><div class="line"># get IDs of predicted classes</div><div class="line">out_predictions = np.argmax(out[0], axis=0)</div></div><!-- fragment --><p>After executing the above code we will get the following output:</p>
<div class="fragment"><div class="line">OpenCV DNN segmentation prediction:</div><div class="line">* shape:  (1, 21, 513, 513)</div></div><!-- fragment --><p>Each of the 21 prediction channels, where 21 is the number of PASCAL VOC classes, contains scores indicating how likely each pixel corresponds to that PASCAL VOC class.</p>
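<p>The <code>np.argmax</code> step that collapses the 21 score channels into per-pixel class IDs can be illustrated on toy data (hypothetical scores, not real model output):</p>

```python
import numpy as np

# toy stand-in for the (1, 21, H, W) OpenCV DNN output:
# 21 class-score channels for a 2x2 image
num_classes, h, w = 21, 2, 2
rng = np.random.default_rng(0)
out = rng.random((1, num_classes, h, w))
out[0, 15, 0, 0] = 10.0   # force class 15 to win at pixel (0, 0)

# per-pixel class ID = index of the channel with the highest score
out_predictions = np.argmax(out[0], axis=0)
print(out_predictions.shape)   # (2, 2)
print(out_predictions[0, 0])   # 15
```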
<ul>
<li>run TF model inference:</li>
</ul>
<div class="fragment"><div class="line">preproc_img = np.expand_dims(preproc_img, 0)</div><div class="line"></div><div class="line"># init TF session</div><div class="line">tf_session = Session(graph=model_graph)</div><div class="line"></div><div class="line">input_tensor_name = &quot;ImageTensor:0&quot;</div><div class="line">output_tensor_name = &quot;SemanticPredictions:0&quot;</div><div class="line"></div><div class="line"># run inference</div><div class="line">out = tf_session.run(</div><div class="line">    output_tensor_name,</div><div class="line">    feed_dict={input_tensor_name: preproc_img}</div><div class="line">)</div><div class="line"></div><div class="line">print(&quot;TF segmentation model prediction: \n&quot;)</div><div class="line">print(&quot;* shape: &quot;, out.shape)</div></div><!-- fragment --><p>The TF inference results are the following:</p>
<div class="fragment"><div class="line">TF segmentation model prediction:</div><div class="line">* shape:  (1, 513, 513)</div></div><!-- fragment --><p>The TensorFlow prediction contains the indexes of the corresponding PASCAL VOC classes.</p>
<ul>
<li>transform OpenCV prediction into colored mask:</li>
</ul>
<div class="fragment"><div class="line">mask_height = segm_mask.shape[0]</div><div class="line">mask_width = segm_mask.shape[1]</div><div class="line"></div><div class="line">img_height = original_img_shape[0]</div><div class="line">img_width = original_img_shape[1]</div><div class="line"></div><div class="line"># convert mask values into PASCAL VOC colors</div><div class="line">processed_mask = np.stack([colors[color_id] for color_id in segm_mask.flatten()])</div><div class="line"></div><div class="line"># reshape mask into 3-channel image</div><div class="line">processed_mask = processed_mask.reshape(mask_height, mask_width, 3)</div><div class="line">processed_mask = cv2.resize(processed_mask, (img_width, img_height), interpolation=cv2.INTER_NEAREST).astype(</div><div class="line">    np.uint8)</div><div class="line"></div><div class="line"># convert colored mask from BGR to RGB</div><div class="line">processed_mask = cv2.cvtColor(processed_mask, cv2.COLOR_BGR2RGB)</div></div><!-- fragment --><p>In this step we map the class IDs from the segmentation mask to the appropriate colors of the predicted classes. Let's have a look at the results:</p>
<div class="image">
<img src="../../images/colors_legend.png" alt="Color Legend"/>
</div>
<div class="image">
<img src="../../images/deeplab_opencv_colored_mask.png" alt="OpenCV Colored Mask"/>
</div>
<ul>
<li>transform TF prediction into colored mask:</li>
</ul>
<div class="fragment"><div class="line">colors = np.array(colors)</div><div class="line">processed_mask = colors[segm_mask[0]]</div><div class="line"></div><div class="line">img_height = original_img_shape[0]</div><div class="line">img_width = original_img_shape[1]</div><div class="line"></div><div class="line">processed_mask = cv2.resize(processed_mask, (img_width, img_height), interpolation=cv2.INTER_NEAREST).astype(</div><div class="line">    np.uint8)</div><div class="line"></div><div class="line"># convert colored mask from BGR to RGB for compatibility with PASCAL VOC colors</div><div class="line">processed_mask = cv2.cvtColor(processed_mask, cv2.COLOR_BGR2RGB)</div></div><!-- fragment --><p>The result is:</p>
<div class="image">
<img src="../../images/deeplab_tf_colored_mask.png" alt="TF Colored Mask"/>
</div>
<p>As a result, we get two equal segmentation masks.</p>
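<p>The two color-mapping styles used above (a list comprehension over the flattened OpenCV mask versus direct NumPy indexing of the TF mask) are interchangeable, as a toy check with a hypothetical 3-color palette shows:</p>

```python
import numpy as np

# hypothetical 3-color palette and a 2x2 mask of class IDs
colors = np.array([[0, 0, 0], [128, 0, 0], [0, 128, 0]], dtype=np.uint8)
segm_mask = np.array([[0, 1], [2, 1]])

# OpenCV-path style: list comprehension over the flattened mask
mask_a = np.stack([colors[cid] for cid in segm_mask.flatten()])
mask_a = mask_a.reshape(segm_mask.shape[0], segm_mask.shape[1], 3)

# TF-path style: direct NumPy fancy indexing
mask_b = colors[segm_mask]

print(np.array_equal(mask_a, mask_b))  # True
```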
<h3>Evaluation of the Models</h3>
<p>The <code>dnn_model_runner</code> module provided in <code>dnn/samples</code> allows running the full evaluation pipeline on the PASCAL VOC dataset and test execution for the DeepLab MobileNet model.</p>
<h4>Evaluation Mode</h4>
<p>The below line runs the module in evaluation mode:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.tf.segmentation.py_to_py_segm</div></div><!-- fragment --><p>The model will be read into an OpenCV <code>cv.dnn_Net</code> object. Evaluation results of the TF and OpenCV models (pixel accuracy, mean IoU, inference time) will be written into a log file. Inference time values will also be depicted in a chart to summarize the obtained model information.</p>
<p>The necessary evaluation configurations are defined in <a href="https://github.com/opencv/opencv/tree/master/samples/dnn/dnn_model_runner/dnn_conversion/common/test/configs/test_config.py"><code>test_config.py</code></a>:</p>
<div class="fragment"><div class="line">@dataclass</div><div class="line">class TestSegmConfig:</div><div class="line">    frame_size: int = 500</div><div class="line">    img_root_dir: str = &quot;./VOC2012&quot;</div><div class="line">    img_dir: str = os.path.join(img_root_dir, &quot;JPEGImages/&quot;)</div><div class="line">    img_segm_gt_dir: str = os.path.join(img_root_dir, &quot;SegmentationClass/&quot;)</div><div class="line">    # reduced val: https://github.com/shelhamer/fcn.berkeleyvision.org/blob/master/data/pascal/seg11valid.txt</div><div class="line">    segm_val_file: str = os.path.join(img_root_dir, &quot;ImageSets/Segmentation/seg11valid.txt&quot;)</div><div class="line">    colour_file_cls: str = os.path.join(img_root_dir, &quot;ImageSets/Segmentation/pascal-classes.txt&quot;)</div></div><!-- fragment --><p>These values can be modified in accordance with the chosen model pipeline.</p>
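<p>One way to adapt such a configuration without editing the source is <code>dataclasses.replace</code>. The sketch below uses a reduced stand-in for <code>TestSegmConfig</code>; note that in the original class the derived paths (<code>img_dir</code> and friends) are computed from <code>img_root_dir</code> at class-definition time, so they would have to be overridden explicitly as well:</p>

```python
from dataclasses import dataclass, replace

# reduced stand-in for TestSegmConfig (illustration only)
@dataclass
class SegmConfig:
    frame_size: int = 500
    img_root_dir: str = "./VOC2012"

# override fields for a custom dataset layout
custom = replace(SegmConfig(), img_root_dir="./my_voc", frame_size=513)
print(custom.frame_size)    # 513
print(custom.img_root_dir)  # ./my_voc
```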
<h4>Test Mode</h4>
<p>The below line runs the module in test mode, which walks through the model inference steps:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.tf.segmentation.py_to_py_segm --test True --default_img_preprocess &lt;True/False&gt; --evaluate False</div></div><!-- fragment --><p>Here the <code>default_img_preprocess</code> key defines whether you'd like to parameterize the model test process with particular values or use the default values, for example, <code>scale</code>, <code>mean</code> or <code>std</code>.</p>
<p>Test configuration is represented in <a href="https://github.com/opencv/opencv/tree/master/samples/dnn/dnn_model_runner/dnn_conversion/common/test/configs/test_config.py"><code>test_config.py</code></a> <code>TestSegmModuleConfig</code> class:</p>
<div class="fragment"><div class="line">@dataclass</div><div class="line">class TestSegmModuleConfig:</div><div class="line">    segm_test_data_dir: str = &quot;test_data/sem_segm&quot;</div><div class="line">    test_module_name: str = &quot;segmentation&quot;</div><div class="line">    test_module_path: str = &quot;segmentation.py&quot;</div><div class="line">    input_img: str = os.path.join(segm_test_data_dir, &quot;2007_000033.jpg&quot;)</div><div class="line">    model: str = &quot;&quot;</div><div class="line"></div><div class="line">    frame_height: str = str(TestSegmConfig.frame_size)</div><div class="line">    frame_width: str = str(TestSegmConfig.frame_size)</div><div class="line">    scale: float = 1.0</div><div class="line">    mean: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])</div><div class="line">    std: List[float] = field(default_factory=list)</div><div class="line">    crop: bool = False</div><div class="line">    rgb: bool = True</div><div class="line">    classes: str = os.path.join(segm_test_data_dir, &quot;pascal-classes.txt&quot;)</div></div><!-- fragment --><p>The default image preprocessing options are defined in <code>default_preprocess_config.py</code>:</p>
<div class="fragment"><div class="line">tf_segm_input_blob = {</div><div class="line">    &quot;scale&quot;: str(1 / 127.5),</div><div class="line">    &quot;mean&quot;: [&quot;127.5&quot;, &quot;127.5&quot;, &quot;127.5&quot;],</div><div class="line">    &quot;std&quot;: [],</div><div class="line">    &quot;crop&quot;: &quot;False&quot;,</div><div class="line">    &quot;rgb&quot;: &quot;True&quot;</div><div class="line">}</div></div><!-- fragment --><p>The basis of the model testing is represented in <code>samples/dnn/segmentation.py</code>. <code>segmentation.py</code> can be executed autonomously with the converted model provided in <code>--input</code> and populated parameters for <code>cv2.dnn.blobFromImage</code>.</p>
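<p>Since the options above are stored as strings (they are forwarded as command-line arguments), a small helper can convert them into typed keyword arguments for <code>cv2.dnn.blobFromImage</code>. The helper below is a hypothetical illustration, not part of the sample:</p>

```python
# string-valued preprocessing config, as in default_preprocess_config.py
tf_segm_input_blob = {
    "scale": str(1 / 127.5),
    "mean": ["127.5", "127.5", "127.5"],
    "std": [],
    "crop": "False",
    "rgb": "True",
}

def to_blob_kwargs(cfg):
    # convert string config values into typed blobFromImage arguments
    return {
        "scalefactor": float(cfg["scale"]),
        "mean": [float(v) for v in cfg["mean"]],
        "swapRB": cfg["rgb"] == "True",
        "crop": cfg["crop"] == "True",
    }

kwargs = to_blob_kwargs(tf_segm_input_blob)
print(kwargs["swapRB"], kwargs["crop"])  # True False
```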
<p>To reproduce the OpenCV steps described in "Model Conversion Pipeline" from scratch with <code>dnn_model_runner</code>, execute the line below:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.tf.segmentation.py_to_py_segm --test True --default_img_preprocess True --evaluate False</div></div><!-- fragment --> </div></div><!-- contents -->
<!-- HTML footer for doxygen 1.8.6-->
<!-- start footer part -->
<hr class="footer"/><address class="footer"><small>
Generated on Fri Apr 2 2021 11:36:34 for OpenCV by &#160;<a href="http://www.doxygen.org/index.html">
<img class="footer" src="../../doxygen.png" alt="doxygen"/>
</a> 1.8.13
</small></address>
<script type="text/javascript">
//<![CDATA[
addTutorialsButtons();
//]]>
</script>
</body>
</html>
