<!-- HTML header for doxygen 1.8.6-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.13"/>
<title>OpenCV: Conversion of PyTorch Classification Models and Launch with OpenCV Python</title>
<link href="../../opencv.ico" rel="shortcut icon" type="image/x-icon" />
<link href="../../tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="../../jquery.js"></script>
<script type="text/javascript" src="../../dynsections.js"></script>
<script type="text/javascript" src="../../tutorial-utils.js"></script>
<link href="../../search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="../../search/searchdata.js"></script>
<script type="text/javascript" src="../../search/search.js"></script>
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    extensions: ["tex2jax.js", "TeX/AMSmath.js", "TeX/AMSsymbols.js"],
    jax: ["input/TeX","output/HTML-CSS"],
});
//<![CDATA[
MathJax.Hub.Config(
{
  TeX: {
      Macros: {
          matTT: [ "\\[ \\left|\\begin{array}{ccc} #1 & #2 & #3\\\\ #4 & #5 & #6\\\\ #7 & #8 & #9 \\end{array}\\right| \\]", 9],
          fork: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ \\end{array} \\right.", 4],
          forkthree: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ #5 & \\mbox{#6}\\\\ \\end{array} \\right.", 6],
          forkfour: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ #5 & \\mbox{#6}\\\\ #7 & \\mbox{#8}\\\\ \\end{array} \\right.", 8],
          vecthree: ["\\begin{bmatrix} #1\\\\ #2\\\\ #3 \\end{bmatrix}", 3],
          vecthreethree: ["\\begin{bmatrix} #1 & #2 & #3\\\\ #4 & #5 & #6\\\\ #7 & #8 & #9 \\end{bmatrix}", 9],
          cameramatrix: ["#1 = \\begin{bmatrix} f_x & 0 & c_x\\\\ 0 & f_y & c_y\\\\ 0 & 0 & 1 \\end{bmatrix}", 1],
          distcoeffs: ["(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \\tau_x, \\tau_y]]]]) \\text{ of 4, 5, 8, 12 or 14 elements}"],
          distcoeffsfisheye: ["(k_1, k_2, k_3, k_4)"],
          hdotsfor: ["\\dots", 1],
          mathbbm: ["\\mathbb{#1}", 1],
          bordermatrix: ["\\matrix{#1}", 1]
      }
  }
}
);
//]]>
</script><script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js"></script>
<link href="../../doxygen.css" rel="stylesheet" type="text/css" />
<link href="../../stylesheet.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<!--#include virtual="/google-search.html"-->
<table cellspacing="0" cellpadding="0">
 <tbody>
 <tr style="height: 56px;">
  <td id="projectlogo"><img alt="Logo" src="../../opencv-logo-small.png"/></td>
  <td style="padding-left: 0.5em;">
   <div id="projectname">OpenCV
   &#160;<span id="projectnumber">4.5.2</span>
   </div>
   <div id="projectbrief">Open Source Computer Vision</div>
  </td>
 </tr>
 </tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.13 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "../../search",false,'Search');
</script>
<script type="text/javascript" src="../../menudata.js"></script>
<script type="text/javascript" src="../../menu.js"></script>
<script type="text/javascript">
$(function() {
  initMenu('../../',true,false,'search.php','Search');
  $(document).ready(function() { init_search(); });
});
</script>
<div id="main-nav"></div>
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
     onmouseover="return searchBox.OnSearchSelectShow()"
     onmouseout="return searchBox.OnSearchSelectHide()"
     onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>

<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0" 
        name="MSearchResults" id="MSearchResults">
</iframe>
</div>

<div id="nav-path" class="navpath">
  <ul>
<li class="navelem"><a class="el" href="../../d9/df8/tutorial_root.html">OpenCV Tutorials</a></li><li class="navelem"><a class="el" href="../../d2/d58/tutorial_table_of_content_dnn.html">Deep Neural Networks (dnn module)</a></li>  </ul>
</div>
</div><!-- top -->
<div class="header">
  <div class="headertitle">
<div class="title">Conversion of PyTorch Classification Models and Launch with OpenCV Python </div>  </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><p><b>Prev Tutorial:</b> <a class="el" href="../../d9/d1e/tutorial_dnn_OCR.html">How to run custom OCR model</a></p>
<p><b>Next Tutorial:</b> <a class="el" href="../../dd/d55/pytorch_cls_c_tutorial_dnn_conversion.html">Conversion of PyTorch Classification Models and Launch with OpenCV C++</a></p>
<table class="doxtable">
<tr>
<th align="right"></th><th align="left"></th></tr>
<tr>
<td align="right">Original author </td><td align="left">Anastasia Murzova </td></tr>
<tr>
<td align="right">Compatibility </td><td align="left">OpenCV &gt;= 4.5 </td></tr>
</table>
<h2>Goals</h2>
<p>In this tutorial you will learn how to:</p><ul>
<li>convert PyTorch classification models into ONNX format</li>
<li>run the converted PyTorch model with the OpenCV Python API</li>
<li>evaluate the PyTorch and OpenCV DNN models</li>
</ul>
<p>We will explore the above-listed points using the ResNet-50 architecture as an example.</p>
<h2>Introduction</h2>
<p>Let's briefly review the key concepts involved in running PyTorch models with the OpenCV API. The initial step in the conversion of a PyTorch model into <a class="el" href="../../db/d30/classcv_1_1dnn_1_1Net.html" title="This class allows to create and manipulate comprehensive artificial neural networks. ">cv.dnn.Net</a> is exporting the model into <a href="https://onnx.ai/about.html">ONNX</a> format. ONNX aims at the interchangeability of neural networks between various frameworks. PyTorch provides a built-in function for ONNX conversion: <a href="https://pytorch.org/docs/stable/onnx.html#torch.onnx.export"><code>torch.onnx.export</code></a>. The obtained <code>.onnx</code> model is then passed into <a class="el" href="../../d6/d0f/group__dnn.html#ga7faea56041d10c71dbbd6746ca854197" title="Reads a network model ONNX. ">cv.dnn.readNetFromONNX</a>.</p>
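<p>To make this pipeline concrete, the snippet below is a minimal sketch of the whole transition, assuming <code>torch</code>, <code>torchvision</code> and <code>opencv-python</code> are installed; the fully commented version of these steps is covered in the "Practice" part of this tutorial:</p>
<div class="fragment"><div class="line">import cv2</div><div class="line">import torch</div><div class="line">from torchvision import models</div><div class="line"></div><div class="line"># instantiate a pretrained torchvision classification model</div><div class="line">model = models.resnet50(pretrained=True)</div><div class="line">model.eval()</div><div class="line"></div><div class="line"># export the model into ONNX format</div><div class="line">torch.onnx.export(model, torch.randn(1, 3, 224, 224), &quot;resnet50.onnx&quot;)</div><div class="line"></div><div class="line"># read the obtained .onnx model with the OpenCV API</div><div class="line">opencv_net = cv2.dnn.readNetFromONNX(&quot;resnet50.onnx&quot;)</div></div><!-- fragment -->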
<h2>Requirements</h2>
<p>To experiment with the code below, you will need to install a set of libraries. We will use a virtual environment with Python 3.7+ for this:</p>
<div class="fragment"><div class="line">virtualenv -p /usr/bin/python3.7 &lt;env_dir_path&gt;</div><div class="line">source &lt;env_dir_path&gt;/bin/activate</div></div><!-- fragment --><p>For OpenCV-Python building from source, follow the corresponding instructions from the <a class="el" href="../../da/df6/tutorial_py_table_of_contents_setup.html">Introduction to OpenCV</a>.</p>
<p>Before you start installing the libraries, you can customize <a href="https://github.com/opencv/opencv/tree/master/samples/dnn/dnn_model_runner/dnn_conversion/requirements.txt">requirements.txt</a>, excluding or including some dependencies (for example, <code>opencv-python</code>). The line below installs the requirements into the previously activated virtual environment:</p>
<div class="fragment"><div class="line">pip install -r requirements.txt</div></div><!-- fragment --><h2>Practice</h2>
<p>In this part we are going to cover the following points:</p><ol type="1">
<li>create a classification model conversion pipeline and run inference</li>
<li>evaluate and test classification models</li>
</ol>
<p>If you only want to run the evaluation or test pipelines, the "Model Conversion Pipeline" part can be skipped.</p>
<h3>Model Conversion Pipeline</h3>
<p>The code in this subchapter is located in the <code>dnn_model_runner</code> module and can be executed with the line:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.pytorch.classification.py_to_py_resnet50</div></div><!-- fragment --><p>The following code contains the description of the below-listed steps:</p><ol type="1">
<li>instantiate PyTorch model</li>
<li>convert PyTorch model into <code>.onnx</code></li>
<li>read the converted network with OpenCV API</li>
<li>prepare input data</li>
<li>run inference</li>
</ol>
<div class="fragment"><div class="line"># initialize PyTorch ResNet-50 model</div><div class="line">original_model = models.resnet50(pretrained=True)</div><div class="line"></div><div class="line"># get the path to the converted into ONNX PyTorch model</div><div class="line">full_model_path = get_pytorch_onnx_model(original_model)</div><div class="line"></div><div class="line"># read converted .onnx model with OpenCV API</div><div class="line">opencv_net = cv2.dnn.readNetFromONNX(full_model_path)</div><div class="line">print(&quot;OpenCV model was successfully read. Layer IDs: \n&quot;, opencv_net.getLayerNames())</div><div class="line"></div><div class="line"># get preprocessed image</div><div class="line">input_img = get_preprocessed_img(&quot;../data/squirrel_cls.jpg&quot;)</div><div class="line"></div><div class="line"># get ImageNet labels</div><div class="line">imagenet_labels = get_imagenet_labels(&quot;../data/dnn/classification_classes_ILSVRC2012.txt&quot;)</div><div class="line"></div><div class="line"># obtain OpenCV DNN predictions</div><div class="line">get_opencv_dnn_prediction(opencv_net, input_img, imagenet_labels)</div><div class="line"></div><div class="line"># obtain original PyTorch ResNet50 predictions</div><div class="line">get_pytorch_dnn_prediction(original_model, input_img, imagenet_labels)</div></div><!-- fragment --><p>To provide model inference we will use the below <a href="https://www.pexels.com/photo/brown-squirrel-eating-1564292">squirrel photo</a> (under <a href="https://www.pexels.com/terms-of-service/">CC0</a> license) corresponding to ImageNet class ID 335: </p><div class="fragment"><div class="line">fox squirrel, eastern fox squirrel, Sciurus niger</div></div><!-- fragment --><div class="image">
<img src="../../squirrel_cls.jpg" alt="squirrel_cls.jpg"/>
<div class="caption">
Classification model input image</div></div>
<p>To decode the labels of the obtained prediction, we also need the <code>imagenet_classes.txt</code> file, which contains the full list of the ImageNet classes.</p>
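<p>The pipeline code above refers to a <code>get_imagenet_labels</code> helper. Its exact implementation lives in the <code>dnn_model_runner</code> sources; a minimal sketch of such a helper, assuming one class name per line in the labels file, could look like this:</p>
<div class="fragment"><div class="line">def get_imagenet_labels(labels_path):</div><div class="line">    # read the class names, one label per line</div><div class="line">    with open(labels_path) as f:</div><div class="line">        imagenet_labels = [line.strip() for line in f]</div><div class="line">    return imagenet_labels</div></div><!-- fragment -->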
<p>Let's go deeper into each step using the pretrained PyTorch ResNet-50 as an example:</p><ul>
<li>instantiate PyTorch ResNet-50 model:</li>
</ul>
<div class="fragment"><div class="line"># initialize PyTorch ResNet-50 model</div><div class="line">original_model = models.resnet50(pretrained=True)</div></div><!-- fragment --><ul>
<li>convert PyTorch model into ONNX:</li>
</ul>
<div class="fragment"><div class="line"># define the directory for further converted model save</div><div class="line">onnx_model_path = &quot;models&quot;</div><div class="line"># define the name of further converted model</div><div class="line">onnx_model_name = &quot;resnet50.onnx&quot;</div><div class="line"></div><div class="line"># create directory for further converted model</div><div class="line">os.makedirs(onnx_model_path, exist_ok=True)</div><div class="line"></div><div class="line"># get full path to the converted model</div><div class="line">full_model_path = os.path.join(onnx_model_path, onnx_model_name)</div><div class="line"></div><div class="line"># generate model input</div><div class="line">generated_input = Variable(</div><div class="line">    torch.randn(1, 3, 224, 224)</div><div class="line">)</div><div class="line"></div><div class="line"># model export into ONNX format</div><div class="line">torch.onnx.export(</div><div class="line">    original_model,</div><div class="line">    generated_input,</div><div class="line">    full_model_path,</div><div class="line">    verbose=True,</div><div class="line">    input_names=[&quot;input&quot;],</div><div class="line">    output_names=[&quot;output&quot;],</div><div class="line">    opset_version=11</div><div class="line">)</div></div><!-- fragment --><p>After the successful execution of the above code, we will get <code>models/resnet50.onnx</code>.</p>
<ul>
<li>read the converted network with <a class="el" href="../../d6/d0f/group__dnn.html#ga7faea56041d10c71dbbd6746ca854197" title="Reads a network model ONNX. ">cv.dnn.readNetFromONNX</a>, passing the ONNX model obtained in the previous step into it:</li>
</ul>
<div class="fragment"><div class="line"># read converted .onnx model with OpenCV API</div><div class="line">opencv_net = cv2.dnn.readNetFromONNX(full_model_path)</div></div><!-- fragment --><ul>
<li>prepare input data:</li>
</ul>
<div class="fragment"><div class="line"># read the image</div><div class="line">input_img = cv2.imread(img_path, cv2.IMREAD_COLOR)</div><div class="line">input_img = input_img.astype(np.float32)</div><div class="line"></div><div class="line">input_img = cv2.resize(input_img, (256, 256))</div><div class="line"></div><div class="line"># define preprocess parameters</div><div class="line">mean = np.array([0.485, 0.456, 0.406]) * 255.0</div><div class="line">scale = 1 / 255.0</div><div class="line">std = [0.229, 0.224, 0.225]</div><div class="line"></div><div class="line"># prepare input blob to fit the model input:</div><div class="line"># 1. subtract mean</div><div class="line"># 2. scale to set pixel values from 0 to 1</div><div class="line">input_blob = cv2.dnn.blobFromImage(</div><div class="line">    image=input_img,</div><div class="line">    scalefactor=scale,</div><div class="line">    size=(224, 224),  # img target size</div><div class="line">    mean=mean,</div><div class="line">    swapRB=True,  # BGR -&gt; RGB</div><div class="line">    crop=True  # center crop</div><div class="line">)</div><div class="line"># 3. divide by std</div><div class="line">input_blob[0] /= np.asarray(std, dtype=np.float32).reshape(3, 1, 1)</div></div><!-- fragment --><p>In this step we read the image and prepare model input with <a class="el" href="../../d6/d0f/group__dnn.html#ga29f34df9376379a603acd8df581ac8d7" title="Creates 4-dimensional blob from image. Optionally resizes and crops image from center, subtract mean values, scales values by scalefactor, swap Blue and Red channels. ">cv.dnn.blobFromImage</a> function, which returns 4-dimensional blob. It should be noted that firstly in <a class="el" href="../../d6/d0f/group__dnn.html#ga29f34df9376379a603acd8df581ac8d7" title="Creates 4-dimensional blob from image. Optionally resizes and crops image from center, subtract mean values, scales values by scalefactor, swap Blue and Red channels. ">cv.dnn.blobFromImage</a> mean value is subtracted and only then pixel values are multiplied by scale. Thus, <code>mean</code> is multiplied by <code>255.0</code> to reproduce the original image preprocessing order:</p>
<div class="fragment"><div class="line">img /= 255.0</div><div class="line">img -= [0.485, 0.456, 0.406]</div><div class="line">img /= [0.229, 0.224, 0.225]</div></div><!-- fragment --><ul>
<li>OpenCV <a class="el" href="../../db/d30/classcv_1_1dnn_1_1Net.html" title="This class allows to create and manipulate comprehensive artificial neural networks. ">cv.dnn.Net</a> inference:</li>
</ul>
<div class="fragment"><div class="line"># set OpenCV DNN input</div><div class="line">opencv_net.setInput(preproc_img)</div><div class="line"></div><div class="line"># OpenCV DNN inference</div><div class="line">out = opencv_net.forward()</div><div class="line">print(&quot;OpenCV DNN prediction: \n&quot;)</div><div class="line">print(&quot;* shape: &quot;, out.shape)</div><div class="line"></div><div class="line"># get the predicted class ID</div><div class="line">imagenet_class_id = np.argmax(out)</div><div class="line"></div><div class="line"># get confidence</div><div class="line">confidence = out[0][imagenet_class_id]</div><div class="line">print(&quot;* class ID: {}, label: {}&quot;.format(imagenet_class_id, imagenet_labels[imagenet_class_id]))</div><div class="line">print(&quot;* confidence: {:.4f}&quot;.format(confidence))</div></div><!-- fragment --><p>After the above code execution we will get the following output:</p>
<div class="fragment"><div class="line">OpenCV DNN prediction:</div><div class="line">* shape:  (1, 1000)</div><div class="line">* class ID: 335, label: fox squirrel, eastern fox squirrel, Sciurus niger</div><div class="line">* confidence: 14.8308</div></div><!-- fragment --><ul>
<li>PyTorch ResNet-50 model inference:</li>
</ul>
<div class="fragment"><div class="line">original_net.eval()</div><div class="line">preproc_img = torch.FloatTensor(preproc_img)</div><div class="line"></div><div class="line"># inference</div><div class="line">out = original_net(preproc_img)</div><div class="line">print(&quot;\nPyTorch model prediction: \n&quot;)</div><div class="line">print(&quot;* shape: &quot;, out.shape)</div><div class="line"></div><div class="line"># get the predicted class ID</div><div class="line">imagenet_class_id = torch.argmax(out, axis=1).item()</div><div class="line">print(&quot;* class ID: {}, label: {}&quot;.format(imagenet_class_id, imagenet_labels[imagenet_class_id]))</div><div class="line"></div><div class="line"># get confidence</div><div class="line">confidence = out[0][imagenet_class_id]</div><div class="line">print(&quot;* confidence: {:.4f}&quot;.format(confidence.item()))</div></div><!-- fragment --><p>After the above code launching we will get the following output:</p>
<div class="fragment"><div class="line">PyTorch model prediction:</div><div class="line">* shape:  torch.Size([1, 1000])</div><div class="line">* class ID: 335, label: fox squirrel, eastern fox squirrel, Sciurus niger</div><div class="line">* confidence: 14.8308</div></div><!-- fragment --><p>The inference results of the original ResNet-50 model and <a class="el" href="../../db/d30/classcv_1_1dnn_1_1Net.html" title="This class allows to create and manipulate comprehensive artificial neural networks. ">cv.dnn.Net</a> are equal. For the extended evaluation of the models we can use <code>py_to_py_cls</code> of the <code>dnn_model_runner</code> module. This module part will be described in the next subchapter.</p>
<h3>Evaluation of the Models</h3>
<p>The <code>dnn_model_runner</code> module proposed in <code>samples/dnn</code> allows running the full evaluation pipeline on the ImageNet dataset and test execution for the following PyTorch classification models:</p><ul>
<li>alexnet</li>
<li>vgg11</li>
<li>vgg13</li>
<li>vgg16</li>
<li>vgg19</li>
<li>resnet18</li>
<li>resnet34</li>
<li>resnet50</li>
<li>resnet101</li>
<li>resnet152</li>
<li>squeezenet1_0</li>
<li>squeezenet1_1</li>
<li>resnext50_32x4d</li>
<li>resnext101_32x8d</li>
<li>wide_resnet50_2</li>
<li>wide_resnet101_2</li>
</ul>
<p>This list can also be extended with further appropriate evaluation pipeline configuration.</p>
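<p>The exact way to register a new model is defined by the <code>dnn_model_runner</code> configuration files; conceptually, each entry only needs to map a model name to a torchvision constructor so that the same conversion and evaluation code can be reused. The mapping below is a purely illustrative, hypothetical sketch of this idea, not the module's actual configuration:</p>
<div class="fragment"><div class="line">from torchvision import models</div><div class="line"></div><div class="line"># hypothetical name-to-constructor mapping; the real configuration</div><div class="line"># lives in the dnn_model_runner/dnn_conversion configs</div><div class="line">pytorch_cls_models = {</div><div class="line">    &quot;resnet50&quot;: models.resnet50,</div><div class="line">    &quot;vgg16&quot;: models.vgg16,</div><div class="line">    &quot;squeezenet1_0&quot;: models.squeezenet1_0,</div><div class="line">}</div><div class="line"></div><div class="line">original_model = pytorch_cls_models[&quot;resnet50&quot;](pretrained=True)</div></div><!-- fragment -->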
<h4>Evaluation Mode</h4>
<p>The line below runs the module in evaluation mode:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.pytorch.classification.py_to_py_cls --model_name &lt;pytorch_cls_model_name&gt;</div></div><!-- fragment --><p>Chosen from the list classification model will be read into OpenCV <a class="el" href="../../db/d30/classcv_1_1dnn_1_1Net.html" title="This class allows to create and manipulate comprehensive artificial neural networks. ">cv.dnn.Net</a> object. Evaluation results of PyTorch and OpenCV models (accuracy, inference time, L1) will be written into the log file. Inference time values will be also depicted in a chart to generalize the obtained model information.</p>
<p>The necessary evaluation configurations are defined in <a href="https://github.com/opencv/opencv/tree/master/samples/dnn/dnn_model_runner/dnn_conversion/common/test/configs/test_config.py">test_config.py</a> and can be modified in accordance with the actual data locations:</p>
<div class="fragment"><div class="line">@dataclass</div><div class="line">class TestClsConfig:</div><div class="line">    batch_size: int = 50</div><div class="line">    frame_size: int = 224</div><div class="line">    img_root_dir: str = &quot;./ILSVRC2012_img_val&quot;</div><div class="line">    # location of image-class matching</div><div class="line">    img_cls_file: str = &quot;./val.txt&quot;</div><div class="line">    bgr_to_rgb: bool = True</div></div><!-- fragment --><p>To initiate the evaluation of the PyTorch ResNet-50, run the following line:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.pytorch.classification.py_to_py_cls --model_name resnet50</div></div><!-- fragment --><p>After script launch, the log file with evaluation data will be generated in <code>dnn_model_runner/dnn_conversion/logs</code>:</p>
<div class="fragment"><div class="line">The model PyTorch resnet50 was successfully obtained and converted to OpenCV DNN resnet50</div><div class="line">===== Running evaluation of the model with the following params:</div><div class="line">    * val data location: ./ILSVRC2012_img_val</div><div class="line">    * log file location: dnn_model_runner/dnn_conversion/logs/PyTorch_resnet50_log.txt</div></div><!-- fragment --><h4>Test Mode</h4>
<p>The line below runs the module in test mode, i.e. it provides the steps for model inference:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.pytorch.classification.py_to_py_cls --model_name &lt;pytorch_cls_model_name&gt; --test True --default_img_preprocess &lt;True/False&gt; --evaluate False</div></div><!-- fragment --><p>Here <code>default_img_preprocess</code> key defines whether you'd like to parametrize the model test process with some particular values or use the default values, for example, <code>scale</code>, <code>mean</code> or <code>std</code>.</p>
<p>The test configuration is represented by the <code>TestClsModuleConfig</code> class in <a href="https://github.com/opencv/opencv/tree/master/samples/dnn/dnn_model_runner/dnn_conversion/common/test/configs/test_config.py">test_config.py</a>:</p>
<div class="fragment"><div class="line">@dataclass</div><div class="line">class TestClsModuleConfig:</div><div class="line">    cls_test_data_dir: str = &quot;../data&quot;</div><div class="line">    test_module_name: str = &quot;classification&quot;</div><div class="line">    test_module_path: str = &quot;classification.py&quot;</div><div class="line">    input_img: str = os.path.join(cls_test_data_dir, &quot;squirrel_cls.jpg&quot;)</div><div class="line">    model: str = &quot;&quot;</div><div class="line"></div><div class="line">    frame_height: str = str(TestClsConfig.frame_size)</div><div class="line">    frame_width: str = str(TestClsConfig.frame_size)</div><div class="line">    scale: str = &quot;1.0&quot;</div><div class="line">    mean: List[str] = field(default_factory=lambda: [&quot;0.0&quot;, &quot;0.0&quot;, &quot;0.0&quot;])</div><div class="line">    std: List[str] = field(default_factory=list)</div><div class="line">    crop: str = &quot;False&quot;</div><div class="line">    rgb: str = &quot;True&quot;</div><div class="line">    rsz_height: str = &quot;&quot;</div><div class="line">    rsz_width: str = &quot;&quot;</div><div class="line">    classes: str = os.path.join(cls_test_data_dir, &quot;dnn&quot;, &quot;classification_classes_ILSVRC2012.txt&quot;)</div></div><!-- fragment --><p>The default image preprocessing options are defined in <a href="https://github.com/opencv/opencv/tree/master/samples/dnn/dnn_model_runner/dnn_conversion/common/test/configs/default_preprocess_config.py">default_preprocess_config.py</a>. For instance:</p>
<div class="fragment"><div class="line">BASE_IMG_SCALE_FACTOR = 1 / 255.0</div><div class="line">PYTORCH_RSZ_HEIGHT = 256</div><div class="line">PYTORCH_RSZ_WIDTH = 256</div><div class="line"></div><div class="line">pytorch_resize_input_blob = {</div><div class="line">    &quot;mean&quot;: [&quot;123.675&quot;, &quot;116.28&quot;, &quot;103.53&quot;],</div><div class="line">    &quot;scale&quot;: str(BASE_IMG_SCALE_FACTOR),</div><div class="line">    &quot;std&quot;: [&quot;0.229&quot;, &quot;0.224&quot;, &quot;0.225&quot;],</div><div class="line">    &quot;crop&quot;: &quot;True&quot;,</div><div class="line">    &quot;rgb&quot;: &quot;True&quot;,</div><div class="line">    &quot;rsz_height&quot;: str(PYTORCH_RSZ_HEIGHT),</div><div class="line">    &quot;rsz_width&quot;: str(PYTORCH_RSZ_WIDTH)</div><div class="line">}</div></div><!-- fragment --><p>The basis of the model testing is represented in <a href="https://github.com/opencv/opencv/blob/master/samples/dnn/classification.py">samples/dnn/classification.py</a>. <code>classification.py</code> can be executed autonomously with provided converted model in <code>--input</code> and populated parameters for <a class="el" href="../../d6/d0f/group__dnn.html#ga29f34df9376379a603acd8df581ac8d7" title="Creates 4-dimensional blob from image. Optionally resizes and crops image from center, subtract mean values, scales values by scalefactor, swap Blue and Red channels. ">cv.dnn.blobFromImage</a>.</p>
<p>To reproduce the OpenCV steps described in "Model Conversion Pipeline" from scratch with <code>dnn_model_runner</code>, execute the line below:</p>
<div class="fragment"><div class="line">python -m dnn_model_runner.dnn_conversion.pytorch.classification.py_to_py_cls --model_name resnet50 --test True --default_img_preprocess True --evaluate False</div></div><!-- fragment --><p>The network prediction is depicted in the top left corner of the output window:</p>
<div class="image">
<img src="../../pytorch_resnet50_opencv_test_res.jpg" alt="pytorch_resnet50_opencv_test_res.jpg"/>
<div class="caption">
ResNet50 OpenCV inference output</div></div>
</div></div><!-- contents -->
<!-- HTML footer for doxygen 1.8.6-->
<!-- start footer part -->
<hr class="footer"/><address class="footer"><small>
Generated on Fri Apr 2 2021 11:36:34 for OpenCV by &#160;<a href="http://www.doxygen.org/index.html">
<img class="footer" src="../../doxygen.png" alt="doxygen"/>
</a> 1.8.13
</small></address>
<script type="text/javascript">
//<![CDATA[
addTutorialsButtons();
//]]>
</script>
</body>
</html>
