<!-- HTML header for doxygen 1.8.6-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.13"/>
<title>OpenCV: High Level API: TextDetectionModel and TextRecognitionModel</title>
<link href="../../opencv.ico" rel="shortcut icon" type="image/x-icon" />
<link href="../../tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="../../jquery.js"></script>
<script type="text/javascript" src="../../dynsections.js"></script>
<script type="text/javascript" src="../../tutorial-utils.js"></script>
<link href="../../search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="../../search/searchdata.js"></script>
<script type="text/javascript" src="../../search/search.js"></script>
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    extensions: ["tex2jax.js", "TeX/AMSmath.js", "TeX/AMSsymbols.js"],
    jax: ["input/TeX","output/HTML-CSS"],
});
//<![CDATA[
MathJax.Hub.Config(
{
  TeX: {
      Macros: {
          matTT: [ "\\[ \\left|\\begin{array}{ccc} #1 & #2 & #3\\\\ #4 & #5 & #6\\\\ #7 & #8 & #9 \\end{array}\\right| \\]", 9],
          fork: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ \\end{array} \\right.", 4],
          forkthree: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ #5 & \\mbox{#6}\\\\ \\end{array} \\right.", 6],
          forkfour: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ #5 & \\mbox{#6}\\\\ #7 & \\mbox{#8}\\\\ \\end{array} \\right.", 8],
          vecthree: ["\\begin{bmatrix} #1\\\\ #2\\\\ #3 \\end{bmatrix}", 3],
          vecthreethree: ["\\begin{bmatrix} #1 & #2 & #3\\\\ #4 & #5 & #6\\\\ #7 & #8 & #9 \\end{bmatrix}", 9],
          cameramatrix: ["#1 = \\begin{bmatrix} f_x & 0 & c_x\\\\ 0 & f_y & c_y\\\\ 0 & 0 & 1 \\end{bmatrix}", 1],
          distcoeffs: ["(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \\tau_x, \\tau_y]]]]) \\text{ of 4, 5, 8, 12 or 14 elements}"],
          distcoeffsfisheye: ["(k_1, k_2, k_3, k_4)"],
          hdotsfor: ["\\dots", 1],
          mathbbm: ["\\mathbb{#1}", 1],
          bordermatrix: ["\\matrix{#1}", 1]
      }
  }
}
);
//]]>
</script><script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js"></script>
<link href="../../doxygen.css" rel="stylesheet" type="text/css" />
<link href="../../stylesheet.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<!--#include virtual="/google-search.html"-->
<table cellspacing="0" cellpadding="0">
 <tbody>
 <tr style="height: 56px;">
  <td id="projectlogo"><img alt="Logo" src="../../opencv-logo-small.png"/></td>
  <td style="padding-left: 0.5em;">
   <div id="projectname">OpenCV
   &#160;<span id="projectnumber">4.5.2</span>
   </div>
   <div id="projectbrief">Open Source Computer Vision</div>
  </td>
 </tr>
 </tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.13 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "../../search",false,'Search');
</script>
<script type="text/javascript" src="../../menudata.js"></script>
<script type="text/javascript" src="../../menu.js"></script>
<script type="text/javascript">
$(function() {
  initMenu('../../',true,false,'search.php','Search');
  $(document).ready(function() { init_search(); });
});
</script>
<div id="main-nav"></div>
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
     onmouseover="return searchBox.OnSearchSelectShow()"
     onmouseout="return searchBox.OnSearchSelectHide()"
     onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>

<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0" 
        name="MSearchResults" id="MSearchResults">
</iframe>
</div>

<div id="nav-path" class="navpath">
  <ul>
<li class="navelem"><a class="el" href="../../d9/df8/tutorial_root.html">OpenCV Tutorials</a></li><li class="navelem"><a class="el" href="../../d2/d58/tutorial_table_of_content_dnn.html">Deep Neural Networks (dnn module)</a></li>  </ul>
</div>
</div><!-- top -->
<div class="header">
  <div class="headertitle">
<div class="title">High Level API: TextDetectionModel and TextRecognitionModel </div>  </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><p><b>Prev Tutorial:</b> <a class="el" href="../../d9/d1e/tutorial_dnn_OCR.html">How to run custom OCR model</a></p>
<p><b>Next Tutorial:</b> <a class="el" href="../../dc/d70/pytorch_cls_tutorial_dnn_conversion.html">Conversion of PyTorch Classification Models and Launch with OpenCV Python</a></p>
<table class="doxtable">
<tr>
<th align="right"></th><th align="left"></th></tr>
<tr>
<td align="right">Original author </td><td align="left">Wenqing Zhang </td></tr>
<tr>
<td align="right">Compatibility </td><td align="left">OpenCV &gt;= 4.5 </td></tr>
</table>
<h2>Introduction</h2>
<p>In this tutorial, we will introduce the APIs for TextRecognitionModel and TextDetectionModel in detail. </p><hr/>
 <h4>TextRecognitionModel:</h4>
<p>In the current version, <a class="el" href="../../de/dee/classcv_1_1dnn_1_1TextRecognitionModel.html">cv::dnn::TextRecognitionModel</a> supports only CNN+RNN+CTC based algorithms, and provides the greedy decoding method for CTC. For more information, please refer to the <a href="https://arxiv.org/abs/1507.05717">original paper</a>.</p>
<p>Before recognition, you should call <code>setVocabulary</code> and <code>setDecodeType</code>.</p><ul>
<li>For "CTC-greedy", the output of the text recognition model should be a probability matrix with shape <code>(T, B, Dim)</code>, where<ul>
<li><code>T</code> is the sequence length,</li>
<li><code>B</code> is the batch size (only <code>B=1</code> is supported at inference time),</li>
<li>and <code>Dim</code> is the vocabulary length + 1 (the CTC 'blank' token is at index 0 of <code>Dim</code>).</li>
</ul>
</li>
</ul>
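<p>As a sketch of what "CTC-greedy" decoding does with that <code>(T, B, Dim)</code> probability matrix (shown here for a single batch item; this is illustrative only, not the OpenCV implementation, and the function name is hypothetical): pick the argmax class at each timestep, collapse consecutive repeats, and drop the blank at index 0.</p>

```cpp
#include <string>
#include <vector>

// Illustrative sketch of CTC greedy decoding for B = 1.
// probs[t][d] holds the probability of class d at timestep t;
// index 0 is the CTC 'blank', indices 1..N map to vocabulary[0..N-1].
std::string ctcGreedyDecode(const std::vector<std::vector<float>>& probs,
                            const std::vector<std::string>& vocabulary)
{
    std::string result;
    int prev = 0; // previously emitted argmax; 0 = blank
    for (const auto& timestep : probs)
    {
        // Pick the most likely class at this timestep.
        int best = 0;
        for (int d = 1; d < (int)timestep.size(); ++d)
            if (timestep[d] > timestep[best])
                best = d;
        // Collapse repeated classes and drop blanks.
        if (best != 0 && best != prev)
            result += vocabulary[best - 1];
        prev = best;
    }
    return result;
}
```

<p>Note how a blank between two identical argmax classes separates them, which is why the vocabulary size is <code>Dim - 1</code>.</p>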
<p><a class="el" href="../../de/dee/classcv_1_1dnn_1_1TextRecognitionModel.html#ad8bfaa53724392dc9477fb55168a0340">cv::dnn::TextRecognitionModel::recognize()</a> is the main function for text recognition.</p><ul>
<li>The input image should be a cropped text image or an image with <code>roiRects</code></li>
<li>Other decoding methods may be supported in the future. <hr/>
</li>
</ul>
<h4>TextDetectionModel:</h4>
<p><a class="el" href="../../d4/de1/classcv_1_1dnn_1_1TextDetectionModel.html">cv::dnn::TextDetectionModel</a> API provides these methods for text detection:</p><ul>
<li><a class="el" href="../../d4/de1/classcv_1_1dnn_1_1TextDetectionModel.html#a057582352eac7422cf1a47a7e3e463a7" title="Performs detection. ">cv::dnn::TextDetectionModel::detect()</a> returns the results in std::vector&lt;std::vector&lt;Point&gt;&gt; (4-points quadrangles)</li>
<li><a class="el" href="../../d4/de1/classcv_1_1dnn_1_1TextDetectionModel.html#aa85568c1a42dbed91e95c3b7370ee76c" title="Performs detection. ">cv::dnn::TextDetectionModel::detectTextRectangles()</a> returns the results in std::vector&lt;cv::RotatedRect&gt; (RBOX-like)</li>
</ul>
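<p>For instance, to crop detected regions for a recognizer, the 4-point quadrangles returned by <code>detect()</code> can be reduced to axis-aligned bounding boxes. A minimal sketch with plain stand-in structs so it stays self-contained (in real code you would use <code>cv::Point</code> and <code>cv::boundingRect()</code> directly; <code>quadToBoundingRect</code> is a hypothetical helper):</p>

```cpp
#include <algorithm>
#include <vector>

// Plain stand-ins for cv::Point and cv::Rect; with OpenCV you would
// simply call cv::boundingRect(quad).
struct Point { int x, y; };
struct Rect  { int x, y, width, height; };

// Axis-aligned bounding box of one 4-point quadrangle from detect().
Rect quadToBoundingRect(const std::vector<Point>& quad)
{
    int minX = quad[0].x, maxX = quad[0].x;
    int minY = quad[0].y, maxY = quad[0].y;
    for (const Point& p : quad)
    {
        minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
    }
    return {minX, minY, maxX - minX, maxY - minY};
}
```

<p>For rotated text a perspective crop (as in the text spotting example below) preserves more of the text region than an axis-aligned box.</p>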
<p>In the current version, <a class="el" href="../../d4/de1/classcv_1_1dnn_1_1TextDetectionModel.html">cv::dnn::TextDetectionModel</a> supports these algorithms:</p><ul>
<li>use <a class="el" href="../../db/d0f/classcv_1_1dnn_1_1TextDetectionModel__DB.html">cv::dnn::TextDetectionModel_DB</a> with "DB" models</li>
<li>and use <a class="el" href="../../d8/ddc/classcv_1_1dnn_1_1TextDetectionModel__EAST.html">cv::dnn::TextDetectionModel_EAST</a> with "EAST" models</li>
</ul>
<p>The pretrained models provided below are variants of DB (without deformable convolution); their performance is reported in Table 1 of the <a href="https://arxiv.org/abs/1911.08947">paper</a>. For more information, please refer to the <a href="https://github.com/MhLiao/DB">official code</a>.</p><hr/>
<p>You can train your own model with more data, and convert it into ONNX format. We encourage you to add new algorithms to these APIs.</p>
<h2>Pretrained Models</h2>
<h4>TextRecognitionModel:</h4>
<div class="fragment"><div class="line">crnn.onnx:</div><div class="line">url: https://drive.google.com/uc?export=dowload&amp;id=1ooaLR-rkTl8jdpGy1DoQs0-X0lQsB6Fj</div><div class="line">sha: 270d92c9ccb670ada2459a25977e8deeaf8380d3,</div><div class="line">alphabet_36.txt: https://drive.google.com/uc?export=dowload&amp;id=1oPOYx5rQRp8L6XQciUwmwhMCfX0KyO4b</div><div class="line">parameter setting: -rgb=0;</div><div class="line">description: The classification number of this model is 36 (0~9 + a~z).</div><div class="line">             The training dataset is MJSynth.</div><div class="line"></div><div class="line">crnn_cs.onnx:</div><div class="line">url: https://drive.google.com/uc?export=dowload&amp;id=12diBsVJrS9ZEl6BNUiRp9s0xPALBS7kt</div><div class="line">sha: a641e9c57a5147546f7a2dbea4fd322b47197cd5</div><div class="line">alphabet_94.txt: https://drive.google.com/uc?export=dowload&amp;id=1oKXxXKusquimp7XY1mFvj9nwLzldVgBR</div><div class="line">parameter setting: -rgb=1;</div><div class="line">description: The classification number of this model is 94 (0~9 + a~z + A~Z + punctuations).</div><div class="line">             The training datasets are MJsynth and SynthText.</div><div class="line"></div><div class="line">crnn_cs_CN.onnx:</div><div class="line">url: https://drive.google.com/uc?export=dowload&amp;id=1is4eYEUKH7HR7Gl37Sw4WPXx6Ir8oQEG</div><div class="line">sha: 3940942b85761c7f240494cf662dcbf05dc00d14</div><div class="line">alphabet_3944.txt: https://drive.google.com/uc?export=dowload&amp;id=18IZUUdNzJ44heWTndDO6NNfIpJMmN-ul</div><div class="line">parameter setting: -rgb=1;</div><div class="line">description: The classification number of this model is 3944 (0~9 + a~z + A~Z + Chinese characters + special characters).</div><div class="line">             The training dataset is ReCTS (https://rrc.cvc.uab.es/?ch=12).</div></div><!-- fragment --><p>More models can be found in <a 
href="https://drive.google.com/drive/folders/1cTbQ3nuZG-EKWak6emD_s8_hHXWz7lAr?usp=sharing">here</a>, which are taken from <a href="https://github.com/clovaai/deep-text-recognition-benchmark">clovaai</a>. You can train more models by <a href="https://github.com/meijieru/crnn.pytorch">CRNN</a>, and convert models by <code>torch.onnx.export</code>.</p>
<h4>TextDetectionModel:</h4>
<div class="fragment"><div class="line">- DB_IC15_resnet50.onnx:</div><div class="line">url: https://drive.google.com/uc?export=dowload&amp;id=17_ABp79PlFt9yPCxSaarVc_DKTmrSGGf</div><div class="line">sha: bef233c28947ef6ec8c663d20a2b326302421fa3</div><div class="line">recommended parameter setting: -inputHeight=736, -inputWidth=1280;</div><div class="line">description: This model is trained on ICDAR2015, so it can only detect English text instances.</div><div class="line"></div><div class="line">- DB_IC15_resnet18.onnx:</div><div class="line">url: https://drive.google.com/uc?export=dowload&amp;id=1sZszH3pEt8hliyBlTmB-iulxHP1dCQWV</div><div class="line">sha: 19543ce09b2efd35f49705c235cc46d0e22df30b</div><div class="line">recommended parameter setting: -inputHeight=736, -inputWidth=1280;</div><div class="line">description: This model is trained on ICDAR2015, so it can only detect English text instances.</div><div class="line"></div><div class="line">- DB_TD500_resnet50.onnx:</div><div class="line">url: https://drive.google.com/uc?export=dowload&amp;id=19YWhArrNccaoSza0CfkXlA8im4-lAGsR</div><div class="line">sha: 1b4dd21a6baa5e3523156776970895bd3db6960a</div><div class="line">recommended parameter setting: -inputHeight=736, -inputWidth=736;</div><div class="line">description: This model is trained on MSRA-TD500, so it can detect both English and Chinese text instances.</div><div class="line"></div><div class="line">- DB_TD500_resnet18.onnx:</div><div class="line">url: https://drive.google.com/uc?export=dowload&amp;id=1vY_KsDZZZb_svd5RT6pjyI8BS1nPbBSX</div><div class="line">sha: 8a3700bdc13e00336a815fc7afff5dcc1ce08546</div><div class="line">recommended parameter setting: -inputHeight=736, -inputWidth=736;</div><div class="line">description: This model is trained on MSRA-TD500, so it can detect both English and Chinese text instances.</div></div><!-- fragment --><p>We will release more models of DB <a 
href="https://drive.google.com/drive/folders/1qzNCHfUJOS0NEUOIKn69eCtxdlNPpWbq?usp=sharing">here</a> in the future.</p>
<div class="fragment"><div class="line">- EAST:</div><div class="line">Download link: https://www.dropbox.com/s/r2ingd0l3zt8hxs/frozen_east_text_detection.tar.gz?dl=1</div><div class="line">This model is based on https://github.com/argman/EAST</div></div><!-- fragment --><h2>Images for Testing</h2>
<div class="fragment"><div class="line">Text Recognition:</div><div class="line">url: https://drive.google.com/uc?export=dowload&amp;id=1nMcEy68zDNpIlqAn6xCk_kYcUTIeSOtN</div><div class="line">sha: 89205612ce8dd2251effa16609342b69bff67ca3</div><div class="line"></div><div class="line">Text Detection:</div><div class="line">url: https://drive.google.com/uc?export=dowload&amp;id=149tAhIcvfCYeyufRoZ9tmc2mZDKE_XrF</div><div class="line">sha: ced3c03fb7f8d9608169a913acf7e7b93e07109b</div></div><!-- fragment --><h2>Example for Text Recognition</h2>
<p>Step1. Loading images and models with a vocabulary</p>
<div class="fragment"><div class="line"><span class="comment">// Load a cropped text line image</span></div><div class="line"><span class="comment">// you can find cropped images for testing in &quot;Images for Testing&quot;</span></div><div class="line"><span class="keywordtype">int</span> rgb = <a class="code" href="../../d8/d6a/group__imgcodecs__flags.html#gga61d9b0126a3e57d9277ac48327799c80af660544735200cbe942eea09232eb822">IMREAD_COLOR</a>; <span class="comment">// This should be changed according to the model input requirement.</span></div><div class="line">Mat image = <a class="code" href="../../d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56">imread</a>(<span class="stringliteral">&quot;path/to/text_rec_test.png&quot;</span>, rgb);</div><div class="line"></div><div class="line"><span class="comment">// Load models weights</span></div><div class="line">TextRecognitionModel model(<span class="stringliteral">&quot;path/to/crnn_cs.onnx&quot;</span>);</div><div class="line"></div><div class="line"><span class="comment">// The decoding method</span></div><div class="line"><span class="comment">// more methods will be supported in future</span></div><div class="line">model.setDecodeType(<span class="stringliteral">&quot;CTC-greedy&quot;</span>);</div><div class="line"></div><div class="line"><span class="comment">// Load vocabulary</span></div><div class="line"><span class="comment">// vocabulary should be changed according to the text recognition model</span></div><div class="line">std::ifstream vocFile;</div><div class="line">vocFile.open(<span class="stringliteral">&quot;path/to/alphabet_94.txt&quot;</span>);</div><div class="line"><a class="code" href="../../db/de0/group__core__utils.html#gaf62bcd90f70e275191ab95136d85906b">CV_Assert</a>(vocFile.is_open());</div><div class="line"><a class="code" href="../../dc/d84/group__core__basic.html#ga1f6634802eeadfd7245bc75cf3e216c2">String</a> vocLine;</div><div class="line">std::vector&lt;String&gt; 
vocabulary;</div><div class="line"><span class="keywordflow">while</span> (std::getline(vocFile, vocLine)) {</div><div class="line">    vocabulary.push_back(vocLine);</div><div class="line">}</div><div class="line">model.setVocabulary(vocabulary);</div></div><!-- fragment --><p>Step2. Setting Parameters</p>
<div class="fragment"><div class="line"><span class="comment">// Normalization parameters</span></div><div class="line"><span class="keywordtype">double</span> scale = 1.0 / 127.5;</div><div class="line"><a class="code" href="../../dc/d84/group__core__basic.html#ga599fe92e910c027be274233eccad7beb">Scalar</a> mean = <a class="code" href="../../dc/d84/group__core__basic.html#ga599fe92e910c027be274233eccad7beb">Scalar</a>(127.5, 127.5, 127.5);</div><div class="line"></div><div class="line"><span class="comment">// The input shape</span></div><div class="line"><a class="code" href="../../dc/d84/group__core__basic.html#ga346f563897249351a34549137c8532a0">Size</a> inputSize = <a class="code" href="../../dc/d84/group__core__basic.html#ga346f563897249351a34549137c8532a0">Size</a>(100, 32);</div><div class="line"></div><div class="line">model.setInputParams(scale, inputSize, mean);</div></div><!-- fragment --><p> Step3. Inference </p><div class="fragment"><div class="line">std::string recognitionResult = model.recognize(image);</div><div class="line">std::cout &lt;&lt; <span class="stringliteral">&quot;&#39;&quot;</span> &lt;&lt; recognitionResult &lt;&lt; <span class="stringliteral">&quot;&#39;&quot;</span> &lt;&lt; std::endl;</div></div><!-- fragment --><p>Input image:</p>
<div class="image">
<img src="../../text_rec_test.png" alt="text_rec_test.png"/>
<div class="caption">
Picture example</div></div>
<p> Output: </p><div class="fragment"><div class="line">&#39;welcome&#39;</div></div><!-- fragment --><h2>Example for Text Detection</h2>
<p>Step1. Loading images and models </p><div class="fragment"><div class="line"><span class="comment">// Load an image</span></div><div class="line"><span class="comment">// you can find some images for testing in &quot;Images for Testing&quot;</span></div><div class="line">Mat frame = <a class="code" href="../../d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56">imread</a>(<span class="stringliteral">&quot;/path/to/text_det_test.png&quot;</span>);</div></div><!-- fragment --><p>Step2.a Setting Parameters (DB) </p><div class="fragment"><div class="line"><span class="comment">// Load model weights</span></div><div class="line">TextDetectionModel_DB model(<span class="stringliteral">&quot;/path/to/DB_TD500_resnet50.onnx&quot;</span>);</div><div class="line"></div><div class="line"><span class="comment">// Post-processing parameters</span></div><div class="line"><span class="keywordtype">float</span> binThresh = 0.3;</div><div class="line"><span class="keywordtype">float</span> polyThresh = 0.5;</div><div class="line"><a class="code" href="../../d1/d1b/group__core__hal__interface.html#ga4f5fce8c1ef282264f9214809524d836">uint</a> maxCandidates = 200;</div><div class="line"><span class="keywordtype">double</span> unclipRatio = 2.0;</div><div class="line">model.setBinaryThreshold(binThresh)</div><div class="line">     .setPolygonThreshold(polyThresh)</div><div class="line">     .setMaxCandidates(maxCandidates)</div><div class="line">     .setUnclipRatio(unclipRatio)</div><div class="line">;</div><div class="line"></div><div class="line"><span class="comment">// Normalization parameters</span></div><div class="line"><span class="keywordtype">double</span> scale = 1.0 / 255.0;</div><div class="line"><a class="code" href="../../dc/d84/group__core__basic.html#ga599fe92e910c027be274233eccad7beb">Scalar</a> mean = <a class="code" href="../../dc/d84/group__core__basic.html#ga599fe92e910c027be274233eccad7beb">Scalar</a>(122.67891434, 116.66876762, 
104.00698793);</div><div class="line"></div><div class="line"><span class="comment">// The input shape</span></div><div class="line"><a class="code" href="../../dc/d84/group__core__basic.html#ga346f563897249351a34549137c8532a0">Size</a> inputSize = <a class="code" href="../../dc/d84/group__core__basic.html#ga346f563897249351a34549137c8532a0">Size</a>(736, 736);</div><div class="line"></div><div class="line">model.setInputParams(scale, inputSize, mean);</div></div><!-- fragment --><p>Step2.b Setting Parameters (EAST) </p><div class="fragment"><div class="line">TextDetectionModel_EAST model(<span class="stringliteral">&quot;EAST.pb&quot;</span>);</div><div class="line"></div><div class="line"><span class="keywordtype">float</span> confThreshold = 0.5;</div><div class="line"><span class="keywordtype">float</span> nmsThreshold = 0.4;</div><div class="line">model.setConfidenceThreshold(confThreshold)</div><div class="line">     .setNMSThreshold(nmsThreshold)</div><div class="line">;</div><div class="line"></div><div class="line"><span class="keywordtype">double</span> detScale = 1.0;</div><div class="line"><a class="code" href="../../dc/d84/group__core__basic.html#ga346f563897249351a34549137c8532a0">Size</a> detInputSize = <a class="code" href="../../dc/d84/group__core__basic.html#ga346f563897249351a34549137c8532a0">Size</a>(320, 320);</div><div class="line"><a class="code" href="../../dc/d84/group__core__basic.html#ga599fe92e910c027be274233eccad7beb">Scalar</a> detMean = <a class="code" href="../../dc/d84/group__core__basic.html#ga599fe92e910c027be274233eccad7beb">Scalar</a>(123.68, 116.78, 103.94);</div><div class="line"><span class="keywordtype">bool</span> swapRB = <span class="keyword">true</span>;</div><div class="line">model.setInputParams(detScale, detInputSize, detMean, swapRB);</div></div><!-- fragment --><p>Step3. 
Inference </p><div class="fragment"><div class="line">std::vector&lt;std::vector&lt;Point&gt;&gt; detResults;</div><div class="line">model.detect(frame, detResults);</div><div class="line"></div><div class="line"><span class="comment">// Visualization</span></div><div class="line"><a class="code" href="../../d6/d6e/group__imgproc__draw.html#ga1ea127ffbbb7e0bfc4fd6fd2eb64263c">polylines</a>(frame, detResults, <span class="keyword">true</span>, <a class="code" href="../../dc/d84/group__core__basic.html#ga599fe92e910c027be274233eccad7beb">Scalar</a>(0, 255, 0), 2);</div><div class="line"><a class="code" href="../../d7/dfc/group__highgui.html#ga453d42fe4cb60e5723281a89973ee563">imshow</a>(<span class="stringliteral">&quot;Text Detection&quot;</span>, frame);</div><div class="line"><a class="code" href="../../d7/dfc/group__highgui.html#ga5628525ad33f52eab17feebcfba38bd7">waitKey</a>();</div></div><!-- fragment --><p>Output:</p>
<div class="image">
<img src="../../text_det_test_results.jpg" alt="text_det_test_results.jpg"/>
<div class="caption">
Picture example</div></div>
 <h2>Example for Text Spotting</h2>
<p>After following the steps above, it is easy to get the detection results for an input image. You can then apply a perspective transformation and crop the text regions for recognition. For more information, please refer to <b>Detailed Sample</b>. </p><div class="fragment"><div class="line"><span class="comment">// Transform and Crop</span></div><div class="line">Mat cropped;</div><div class="line">fourPointsTransform(recInput, vertices, cropped);</div><div class="line"></div><div class="line"><a class="code" href="../../dc/d84/group__core__basic.html#ga1f6634802eeadfd7245bc75cf3e216c2">String</a> recResult = recognizer.recognize(cropped);</div></div><!-- fragment --><p>Output Examples:</p>
<div class="image">
<img src="../../detect_test1.jpg" alt="detect_test1.jpg"/>
<div class="caption">
Picture example</div></div>
 <div class="image">
<img src="../../detect_test2.jpg" alt="detect_test2.jpg"/>
<div class="caption">
Picture example</div></div>
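<p>Before the perspective transform in the spotting example, the four detected vertices must be in a consistent order. The sketch below is a hypothetical helper using a common sum/difference heuristic (verify the order your crop routine actually expects); it arranges the corners as top-left, top-right, bottom-right, bottom-left:</p>

```cpp
#include <algorithm>
#include <vector>

// Plain stand-in for cv::Point2f so the sketch is self-contained.
struct Point2f { float x, y; };

// Heuristic corner ordering: the top-left corner has the smallest x+y
// and the bottom-right the largest, while x-y separates top-right
// (largest) from bottom-left (smallest). Assumes a roughly upright quad.
std::vector<Point2f> orderQuad(std::vector<Point2f> pts)
{
    auto sum  = [](const Point2f& p) { return p.x + p.y; };
    auto diff = [](const Point2f& p) { return p.x - p.y; };
    std::vector<Point2f> out(4);
    out[0] = *std::min_element(pts.begin(), pts.end(),
        [&](const Point2f& a, const Point2f& b) { return sum(a) < sum(b); });
    out[2] = *std::max_element(pts.begin(), pts.end(),
        [&](const Point2f& a, const Point2f& b) { return sum(a) < sum(b); });
    out[1] = *std::max_element(pts.begin(), pts.end(),
        [&](const Point2f& a, const Point2f& b) { return diff(a) < diff(b); });
    out[3] = *std::min_element(pts.begin(), pts.end(),
        [&](const Point2f& a, const Point2f& b) { return diff(a) < diff(b); });
    return out;
}
```

<p>With the corners ordered, the crop itself would use <code>cv::getPerspectiveTransform</code> and <code>cv::warpPerspective</code>, as done inside the sample's <code>fourPointsTransform</code>.</p>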
 <h2>Source Code</h2>
<p>The <a href="https://github.com/opencv/opencv/blob/master/modules/dnn/src/model.cpp">source code</a> of these APIs can be found in the DNN module.</p>
<h2>Detailed Sample</h2>
<p>For more information, please refer to:</p><ul>
<li><a href="https://github.com/opencv/opencv/blob/master/samples/dnn/scene_text_recognition.cpp">samples/dnn/scene_text_recognition.cpp</a></li>
<li><a href="https://github.com/opencv/opencv/blob/master/samples/dnn/scene_text_detection.cpp">samples/dnn/scene_text_detection.cpp</a></li>
<li><a href="https://github.com/opencv/opencv/blob/master/samples/dnn/text_detection.cpp">samples/dnn/text_detection.cpp</a></li>
<li><a href="https://github.com/opencv/opencv/blob/master/samples/dnn/scene_text_spotting.cpp">samples/dnn/scene_text_spotting.cpp</a></li>
</ul>
<h4>Test with an image</h4>
<p>Examples: </p><div class="fragment"><div class="line">example_dnn_scene_text_recognition -mp=path/to/crnn_cs.onnx -i=path/to/an/image -rgb=1 -vp=/path/to/alphabet_94.txt</div><div class="line">example_dnn_scene_text_detection -mp=path/to/DB_TD500_resnet50.onnx -i=path/to/an/image -ih=736 -iw=736</div><div class="line">example_dnn_scene_text_spotting -dmp=path/to/DB_IC15_resnet50.onnx -rmp=path/to/crnn_cs.onnx -i=path/to/an/image -iw=1280 -ih=736 -rgb=1 -vp=/path/to/alphabet_94.txt</div><div class="line">example_dnn_text_detection -dmp=path/to/EAST.pb -rmp=path/to/crnn_cs.onnx -i=path/to/an/image -rgb=1 -vp=path/to/alphabet_94.txt</div></div><!-- fragment --><h4>Test on public datasets</h4>
<p>Text Recognition:</p>
<p>The download link for the testing images can be found in the <b>Images for Testing</b> section above.</p>
<p>Examples: </p><div class="fragment"><div class="line">example_dnn_scene_text_recognition -mp=path/to/crnn.onnx -e=true -edp=path/to/evaluation_data_rec -vp=/path/to/alphabet_36.txt -rgb=0</div><div class="line">example_dnn_scene_text_recognition -mp=path/to/crnn_cs.onnx -e=true -edp=path/to/evaluation_data_rec -vp=/path/to/alphabet_94.txt -rgb=1</div></div><!-- fragment --><p>Text Detection:</p>
<p>The download links for the testing images can be found in the <b>Images for Testing</b> section above.</p>
<p>Examples: </p><div class="fragment"><div class="line">example_dnn_scene_text_detection -mp=path/to/DB_TD500_resnet50.onnx -e=true -edp=path/to/evaluation_data_det/TD500 -ih=736 -iw=736</div><div class="line">example_dnn_scene_text_detection -mp=path/to/DB_IC15_resnet50.onnx -e=true -edp=path/to/evaluation_data_det/IC15 -ih=736 -iw=1280</div></div><!-- fragment --> </div></div><!-- contents -->
<!-- HTML footer for doxygen 1.8.6-->
<!-- start footer part -->
<hr class="footer"/><address class="footer"><small>
Generated on Fri Apr 2 2021 11:36:34 for OpenCV by &#160;<a href="http://www.doxygen.org/index.html">
<img class="footer" src="../../doxygen.png" alt="doxygen"/>
</a> 1.8.13
</small></address>
<script type="text/javascript">
//<![CDATA[
addTutorialsButtons();
//]]>
</script>
</body>
</html>
