<!-- HTML header for doxygen 1.8.6-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.13"/>
<title>OpenCV: Implementing a face beautification algorithm with G-API</title>
<link href="../../opencv.ico" rel="shortcut icon" type="image/x-icon" />
<link href="../../tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="../../jquery.js"></script>
<script type="text/javascript" src="../../dynsections.js"></script>
<script type="text/javascript" src="../../tutorial-utils.js"></script>
<link href="../../search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="../../search/searchdata.js"></script>
<script type="text/javascript" src="../../search/search.js"></script>
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    extensions: ["tex2jax.js", "TeX/AMSmath.js", "TeX/AMSsymbols.js"],
    jax: ["input/TeX","output/HTML-CSS"],
});
//<![CDATA[
MathJax.Hub.Config(
{
  TeX: {
      Macros: {
          matTT: [ "\\[ \\left|\\begin{array}{ccc} #1 & #2 & #3\\\\ #4 & #5 & #6\\\\ #7 & #8 & #9 \\end{array}\\right| \\]", 9],
          fork: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ \\end{array} \\right.", 4],
          forkthree: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ #5 & \\mbox{#6}\\\\ \\end{array} \\right.", 6],
          forkfour: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ #5 & \\mbox{#6}\\\\ #7 & \\mbox{#8}\\\\ \\end{array} \\right.", 8],
          vecthree: ["\\begin{bmatrix} #1\\\\ #2\\\\ #3 \\end{bmatrix}", 3],
          vecthreethree: ["\\begin{bmatrix} #1 & #2 & #3\\\\ #4 & #5 & #6\\\\ #7 & #8 & #9 \\end{bmatrix}", 9],
          cameramatrix: ["#1 = \\begin{bmatrix} f_x & 0 & c_x\\\\ 0 & f_y & c_y\\\\ 0 & 0 & 1 \\end{bmatrix}", 1],
          distcoeffs: ["(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \\tau_x, \\tau_y]]]]) \\text{ of 4, 5, 8, 12 or 14 elements}"],
          distcoeffsfisheye: ["(k_1, k_2, k_3, k_4)"],
          hdotsfor: ["\\dots", 1],
          mathbbm: ["\\mathbb{#1}", 1],
          bordermatrix: ["\\matrix{#1}", 1]
      }
  }
}
);
//]]>
</script><script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js"></script>
<link href="../../doxygen.css" rel="stylesheet" type="text/css" />
<link href="../../stylesheet.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<!--#include virtual="/google-search.html"-->
<table cellspacing="0" cellpadding="0">
 <tbody>
 <tr style="height: 56px;">
  <td id="projectlogo"><img alt="Logo" src="../../opencv-logo-small.png"/></td>
  <td style="padding-left: 0.5em;">
   <div id="projectname">OpenCV
   &#160;<span id="projectnumber">4.5.2</span>
   </div>
   <div id="projectbrief">Open Source Computer Vision</div>
  </td>
 </tr>
 </tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.13 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "../../search",false,'Search');
</script>
<script type="text/javascript" src="../../menudata.js"></script>
<script type="text/javascript" src="../../menu.js"></script>
<script type="text/javascript">
$(function() {
  initMenu('../../',true,false,'search.php','Search');
  $(document).ready(function() { init_search(); });
});
</script>
<div id="main-nav"></div>
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
     onmouseover="return searchBox.OnSearchSelectShow()"
     onmouseout="return searchBox.OnSearchSelectHide()"
     onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>

<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0" 
        name="MSearchResults" id="MSearchResults">
</iframe>
</div>

<div id="nav-path" class="navpath">
  <ul>
<li class="navelem"><a class="el" href="../../d9/df8/tutorial_root.html">OpenCV Tutorials</a></li><li class="navelem"><a class="el" href="../../df/d7e/tutorial_table_of_content_gapi.html">Graph API (gapi module)</a></li>  </ul>
</div>
</div><!-- top -->
<div class="header">
  <div class="headertitle">
<div class="title">Implementing a face beautification algorithm with G-API </div>  </div>
</div><!--header-->
<div class="contents">
<div class="toc"><h3>Table of Contents</h3>
<ul><li class="level1"><a href="#gapi_fb_intro">Introduction</a><ul><li class="level2"><a href="#gapi_fb_prerec">Prerequisites</a></li>
<li class="level2"><a href="#gapi_fb_algorithm">Face beautification algorithm</a></li>
</ul>
</li>
<li class="level1"><a href="#gapi_fb_pipeline">Constructing a G-API pipeline</a><ul><li class="level2"><a href="#gapi_fb_decl_nets">Declaring Deep Learning topologies</a></li>
<li class="level2"><a href="#gapi_fb_ppline">Describing the processing graph</a></li>
<li class="level2"><a href="#gapi_fb_unsh">Unsharp mask in G-API</a></li>
</ul>
</li>
<li class="level1"><a href="#gapi_fb_proc">Custom operations</a><ul><li class="level2"><a href="#gapi_fb_face_detect">Face detector post-processing</a></li>
<li class="level2"><a href="#gapi_fb_landm_detect">Facial landmarks post-processing</a><ul><li class="level3"><a href="#gapi_fb_ld_eye">Getting an eye contour</a></li>
<li class="level3"><a href="#gapi_fb_ld_fhd">Getting a forehead contour</a></li>
</ul>
</li>
<li class="level2"><a href="#gapi_fb_masks_drw">Drawing masks</a></li>
</ul>
</li>
<li class="level1"><a href="#gapi_fb_comp_args">Configuring and running the pipeline</a><ul><li class="level2"><a href="#gapi_fb_comp_args_net">DNN parameters</a></li>
<li class="level2"><a href="#gapi_fb_comp_args_kernels">Kernel packages</a></li>
<li class="level2"><a href="#gapi_fb_compiling">Compiling the streaming pipeline</a></li>
<li class="level2"><a href="#gapi_fb_running">Running the streaming pipeline</a></li>
</ul>
</li>
<li class="level1"><a href="#gapi_fb_cncl">Conclusion</a></li>
</ul>
</div>
<div class="textblock"><p><b>Prev Tutorial:</b> <a class="el" href="../../d3/d7a/tutorial_gapi_anisotropic_segmentation.html">Porting anisotropic image segmentation on G-API</a></p>
<h1><a class="anchor" id="gapi_fb_intro"></a>
Introduction</h1>
<p>In this tutorial you will learn:</p><ul>
<li>Basics of a sample face beautification algorithm;</li>
<li>How to infer different networks inside a pipeline with G-API;</li>
<li>How to run a G-API pipeline on a video stream.</li>
</ul>
<h2><a class="anchor" id="gapi_fb_prerec"></a>
Prerequisites</h2>
<p>This sample requires:</p><ul>
<li>A PC with GNU/Linux or Microsoft Windows (Apple macOS is supported but was not tested);</li>
<li>OpenCV 4.2 or later built with Intel® Distribution of <a href="https://docs.openvinotoolkit.org/">OpenVINO™ Toolkit</a> (building with <a href="https://www.threadingbuildingblocks.org/intel-tbb-tutorial">Intel® TBB</a> is a plus);</li>
<li>The following topologies from OpenVINO™ Toolkit <a href="https://github.com/opencv/open_model_zoo">Open Model Zoo</a>:<ul>
<li><code>face-detection-adas-0001</code>;</li>
<li><code>facial-landmarks-35-adas-0002</code>.</li>
</ul>
</li>
</ul>
<h2><a class="anchor" id="gapi_fb_algorithm"></a>
Face beautification algorithm</h2>
<p>We will implement a simple face beautification algorithm using a combination of modern Deep Learning techniques and traditional Computer Vision. The general idea behind the algorithm is to make the face skin smoother while preserving face features like eye or mouth contrast. The algorithm identifies parts of the face using DNN inference, applies different filters to the parts found, and then combines them into the final result using basic image arithmetic:</p>
<div class="dotgraph">
<iframe scrolling="no" frameborder="0" src="../../dot_inline_dotgraph_2.svg" width="944" height="352"><p><b>This browser is not able to show SVG: try Firefox, Chrome, Safari, or Opera instead.</b></p></iframe></div>
<p>Briefly, the algorithm works as follows:</p><ul>
<li>Input image \(I\) is passed to unsharp mask and bilateral filters ( \(U\) and \(L\) respectively);</li>
<li>Input image \(I\) is passed to an SSD-based face detector;</li>
<li>SSD result (a \([1 \times 1 \times 200 \times 7]\) blob) is parsed and converted to an array of faces;</li>
<li>Every face is passed to a landmarks detector;</li>
<li>Based on landmarks found for every face, three image masks are generated:<ul>
<li>A background mask \(b\) &ndash; indicating which areas from the original image to keep as-is;</li>
<li>A face part mask \(p\) &ndash; identifying regions to preserve (sharpen);</li>
<li>A face skin mask \(s\) &ndash; identifying regions to blur;</li>
</ul>
</li>
<li>The final result \(O\) is a composition of features above calculated as \(O = b*I + p*U + s*L\).</li>
</ul>
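<p>As an illustration of the composition formula, here is a hypothetical per-pixel sketch in plain C++ (not part of the sample, which performs the same arithmetic on whole images inside the G-API graph), assuming \(b\), \(p\) and \(s\) are binary (0/1) masks that partition the image so every pixel belongs to exactly one region:</p>

```cpp
#include <array>
#include <cstdint>
#include <cstddef>

// Per-pixel composition O = b*I + p*U + s*L, where b, p, s are binary
// (0/1) masks: each pixel is taken from exactly one of the original (I),
// sharpened (U), or blurred (L) images.
template <std::size_t N>
std::array<std::uint8_t, N> compose(const std::array<std::uint8_t, N> &I,  // original
                                    const std::array<std::uint8_t, N> &U,  // sharpened
                                    const std::array<std::uint8_t, N> &L,  // blurred
                                    const std::array<std::uint8_t, N> &b,  // background mask
                                    const std::array<std::uint8_t, N> &p,  // face-parts mask
                                    const std::array<std::uint8_t, N> &s)  // skin mask
{
    std::array<std::uint8_t, N> O{};
    for (std::size_t i = 0; i < N; i++)
    {
        O[i] = static_cast<std::uint8_t>(b[i]*I[i] + p[i]*U[i] + s[i]*L[i]);
    }
    return O;
}
```

<p>In the real pipeline the masks are smoothed (Gaussian-blurred) rather than strictly binary, which blends the three layers at region borders instead of switching between them abruptly.</p>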
<p>Generating face element masks from a limited set of features (just 35 landmarks per face, including all its parts) is not trivial; it is described in the sections below.</p>
<h1><a class="anchor" id="gapi_fb_pipeline"></a>
Constructing a G-API pipeline</h1>
<h2><a class="anchor" id="gapi_fb_decl_nets"></a>
Declaring Deep Learning topologies</h2>
<p>This sample uses two DNN detectors. Each network takes one input and produces one output. In G-API, networks are defined with the <a class="el" href="../../d6/d32/infer_8hpp.html#adfb450a1d7992bc72c9afaa758516f27">G_API_NET()</a> macro:</p>
<div class="fragment"><div class="line"><a class="code" href="../../d6/d32/infer_8hpp.html#adfb450a1d7992bc72c9afaa758516f27">G_API_NET</a>(FaceDetector,  &lt;<a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a>(<a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a>)&gt;, <span class="stringliteral">&quot;face_detector&quot;</span>);</div><div class="line"><a class="code" href="../../d6/d32/infer_8hpp.html#adfb450a1d7992bc72c9afaa758516f27">G_API_NET</a>(LandmDetector, &lt;<a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a>(<a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a>)&gt;, <span class="stringliteral">&quot;landm_detector&quot;</span>);</div></div><!-- fragment --><p> To get more information, see <a class="el" href="../../d8/d24/tutorial_gapi_interactive_face_detection.html#gapi_ifd_declaring_nets">Declaring Deep Learning topologies</a> described in the "Face Analytics pipeline" tutorial.</p>
<h2><a class="anchor" id="gapi_fb_ppline"></a>
Describing the processing graph</h2>
<p>The code below generates a graph for the algorithm above:</p>
<div class="fragment"><div class="line">    <a class="code" href="../../d9/dfe/classcv_1_1GComputation.html">cv::GComputation</a> pipeline([=]()</div><div class="line">    {</div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a>  gimgIn;                                                                           <span class="comment">// input</span></div><div class="line"></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a>  faceOut  = cv::gapi::infer&lt;custom::FaceDetector&gt;(gimgIn);</div><div class="line">        GArrayROI garRects = custom::GFacePostProc::on(faceOut, gimgIn, config::kConfThresh);       <span class="comment">// post-proc</span></div><div class="line"></div><div class="line">        <a class="code" href="../../d3/d44/classcv_1_1GArray.html">cv::GArray&lt;cv::GMat&gt;</a> landmOut  = cv::gapi::infer&lt;custom::LandmDetector&gt;(garRects, gimgIn);</div><div class="line">        <a class="code" href="../../d3/d44/classcv_1_1GArray.html">cv::GArray&lt;Landmarks&gt;</a> garElems;                                                             <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../d3/d44/classcv_1_1GArray.html">cv::GArray&lt;Contour&gt;</a>   garJaws;                                                              <span class="comment">// |output arrays</span></div><div class="line">        std::tie(garElems, garJaws)    = custom::GLandmPostProc::on(landmOut, garRects);            <span class="comment">// post-proc</span></div><div class="line">        <a class="code" href="../../d3/d44/classcv_1_1GArray.html">cv::GArray&lt;Contour&gt;</a> garElsConts;                                                            <span class="comment">// face elements</span></div><div class="line">        <a class="code" href="../../d3/d44/classcv_1_1GArray.html">cv::GArray&lt;Contour&gt;</a> garFaceConts;                           
                                <span class="comment">// whole faces</span></div><div class="line">        std::tie(garElsConts, garFaceConts) = custom::GGetContours::on(garElems, garJaws);          <span class="comment">// interpolation</span></div><div class="line"></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskSharp        = custom::GFillPolyGContours::on(gimgIn, garElsConts);             <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskSharpG       = <a class="code" href="../../da/dc5/group__gapi__filters.html#gaaca00b81d171421032917e53751ac427">cv::gapi::gaussianBlur</a>(mskSharp, config::kGKernelSize,           <span class="comment">// |</span></div><div class="line">                                                          config::kGSigma);                         <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskBlur         = custom::GFillPolyGContours::on(gimgIn, garFaceConts);            <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskBlurG        = <a class="code" href="../../da/dc5/group__gapi__filters.html#gaaca00b81d171421032917e53751ac427">cv::gapi::gaussianBlur</a>(mskBlur, config::kGKernelSize,            <span class="comment">// |</span></div><div class="line">                                                          config::kGSigma);                         <span class="comment">// |draw masks</span></div><div class="line">        <span class="comment">// The first argument in mask() is Blur as we want to subtract from                         // |</span></div><div class="line">        <span class="comment">// BlurG the next step:                                                                     // |</span></div><div 
class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskBlurFinal    = mskBlurG - <a class="code" href="../../da/dd3/group__gapi__math.html#gaba076d51941328cb7ca9348b7b535220">cv::gapi::mask</a>(mskBlurG, mskSharpG);                  <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskFacesGaussed = mskBlurFinal + mskSharpG;                                        <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskFacesWhite   = <a class="code" href="../../d0/d86/group__gapi__matrixop.html#gad538f94c264624d0ea78b853d53adcb2">cv::gapi::threshold</a>(mskFacesGaussed, 0, 255, <a class="code" href="../../d7/d1b/group__imgproc__misc.html#ggaa9e58d2860d4afa658ef70a9b1115576a147222a96556ebc1d948b372bcd7ac59">cv::THRESH_BINARY</a>); <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskNoFaces      = <a class="code" href="../../d1/db2/group__gapi__pixelwise.html#ga02beaca6bb6fe5582d58ea829470da79">cv::gapi::bitwise_not</a>(mskFacesWhite);                            <span class="comment">// |</span></div><div class="line"><span class="comment"></span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> gimgBilat       = custom::GBilatFilter::on(gimgIn, config::kBSize,</div><div class="line">                                                            config::kBSigmaCol, config::kBSigmaSp);</div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> gimgSharp       = custom::unsharpMask(gimgIn, config::kUnshSigma,</div><div class="line">                                                       config::kUnshStrength);</div><div class="line">        <span class="comment">// Applying the 
masks</span></div><div class="line">        <span class="comment">// Custom function mask3C() should be used instead of just gapi::mask()</span></div><div class="line">        <span class="comment">//  as mask() provides CV_8UC1 source only (and we have CV_8U3C)</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> gimgBilatMasked = custom::mask3C(gimgBilat, mskBlurFinal);</div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> gimgSharpMasked = custom::mask3C(gimgSharp, mskSharpG);</div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> gimgInMasked    = custom::mask3C(gimgIn,    mskNoFaces);</div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> gimgBeautif = gimgBilatMasked + gimgSharpMasked + gimgInMasked;</div><div class="line">        <span class="keywordflow">return</span> <a class="code" href="../../d9/dfe/classcv_1_1GComputation.html">cv::GComputation</a>(<a class="code" href="../../d2/d75/namespacecv.html#a8e40d34081b18c79ba4c3cfb9fd0634f">cv::GIn</a>(gimgIn), <a class="code" href="../../d2/d75/namespacecv.html#adaa9a308669926cecc4793c6e4449629">cv::GOut</a>(gimgBeautif,</div><div class="line">                                                          <a class="code" href="../../d6/d91/group__gapi__transform.html#gac782e501961826d93c5556e623fca3c3">cv::gapi::copy</a>(gimgIn),</div><div class="line">                                                          garFaceConts,</div><div class="line">                                                          garElsConts,</div><div class="line">                                                          garRects));</div><div class="line">    });</div></div><!-- fragment --><p> The resulting graph is a mixture of G-API's standard operations, user-defined operations (namespace <code>custom::</code>), and DNN inference. 
The generic function <code><a class="el" href="../../d4/d1c/namespacecv_1_1gapi.html#a4ad741555c257e68e542becbf65b8185" title="Calculates response for the specified network (template parameter) for the specified region in the so...">cv::gapi::infer</a>&lt;&gt;()</code> allows inference to be triggered within the pipeline; the networks to infer are specified as template parameters. The sample code uses two versions of <code><a class="el" href="../../d4/d1c/namespacecv_1_1gapi.html#a4ad741555c257e68e542becbf65b8185" title="Calculates response for the specified network (template parameter) for the specified region in the so...">cv::gapi::infer</a>&lt;&gt;()</code>:</p><ul>
<li>A frame-oriented one is used to detect faces on the input frame.</li>
<li>An ROI-list oriented one is used to run landmarks inference on a list of faces &ndash; this version produces an array of landmarks for every face.</li>
</ul>
<p>More on this in "Face Analytics pipeline" (<a class="el" href="../../d8/d24/tutorial_gapi_interactive_face_detection.html#gapi_ifd_gcomputation">Building a GComputation</a> section).</p>
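<p>The mask arithmetic in the graph above (subtracting the sharpened regions from the blurred-region mask, thresholding, and inverting) can be illustrated per pixel in plain C++. This is a hypothetical sketch, not the sample's code &mdash; the real pipeline applies the same operations to whole images via G-API:</p>

```cpp
#include <cstdint>
#include <algorithm>

// Plain per-pixel sketch of the mask-combination steps from the graph:
struct MaskPixels
{
    std::uint8_t mskBlurFinal;   // blur weight with the sharpened areas removed
    std::uint8_t mskFacesWhite;  // 255 inside any face region, 0 elsewhere
    std::uint8_t mskNoFaces;     // 255 outside the faces
};

MaskPixels combineMasks(std::uint8_t mskSharpG, std::uint8_t mskBlurG)
{
    MaskPixels out{};
    // cv::gapi::mask(mskBlurG, mskSharpG): keep mskBlurG where mskSharpG != 0
    const std::uint8_t masked = (mskSharpG != 0) ? mskBlurG : 0;
    out.mskBlurFinal = static_cast<std::uint8_t>(mskBlurG - masked);
    // saturating add, as OpenCV arithmetic does for CV_8U
    const int gaussed = std::min(255, out.mskBlurFinal + mskSharpG);
    // THRESH_BINARY with thresh = 0: any non-zero pixel becomes 255
    out.mskFacesWhite = static_cast<std::uint8_t>(gaussed > 0 ? 255 : 0);
    out.mskNoFaces    = static_cast<std::uint8_t>(~out.mskFacesWhite); // bitwise_not
    return out;
}
```

<p>A skin pixel (blur mask set, sharpen mask clear) keeps its blur weight and is classified as a face pixel; a pixel covered by the sharpen mask is excluded from the blur mask but still counts as a face pixel; a pixel outside both masks ends up in <code>mskNoFaces</code>.</p>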
<h2><a class="anchor" id="gapi_fb_unsh"></a>
Unsharp mask in G-API</h2>
<p>The unsharp mask \(U\) for image \(I\) is defined as:</p>
<p class="formulaDsp">
\[U = I - s * L(M(I)),\]
</p>
<p>where \(M()\) is a median filter, \(L()\) is the Laplace operator, and \(s\) is a strength coefficient. While G-API doesn't provide this function out-of-the-box, it is expressed naturally with the existing G-API operations:</p>
<div class="fragment"><div class="line"><span class="keyword">inline</span> <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> custom::unsharpMask(<span class="keyword">const</span> <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> &amp;src,</div><div class="line">                                    <span class="keyword">const</span> <span class="keywordtype">int</span>       sigma,</div><div class="line">                                    <span class="keyword">const</span> <span class="keywordtype">float</span>     strength)</div><div class="line">{</div><div class="line">    <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> blurred   = <a class="code" href="../../da/dc5/group__gapi__filters.html#ga90c28c4986e8117ecb1b61300ff3e7e8">cv::gapi::medianBlur</a>(src, sigma);</div><div class="line">    <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> laplacian = custom::GLaplacian::on(blurred, <a class="code" href="../../d1/d1b/group__core__hal__interface.html#ga32b18d904ee2b1731a9416a8eef67d06">CV_8U</a>);</div><div class="line">    <span class="keywordflow">return</span> (src - (laplacian * strength));</div><div class="line">}</div></div><!-- fragment --><p> Note that the code snippet above is a regular C++ function defined with G-API types. Users can write functions like this to simplify graph construction; when called, such a function simply adds the relevant nodes to the pipeline it is used in.</p>
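<p>To see what this chain of operations does numerically, here is a hypothetical 1-D plain C++ sketch of the same formula \(U = I - s * L(M(I))\), using a 3-tap median filter and the discrete <code>[1, -2, 1]</code> Laplacian on float values (the sample works on 2-D CV_8U images with saturation, so the numbers below are only illustrative):</p>

```cpp
#include <algorithm>
#include <vector>
#include <cstddef>

// 1-D illustration of U = I - s * L(M(I)): a 3-tap median filter M,
// the [1, -2, 1] discrete Laplacian L, and strength s.
// Border samples are kept as-is for brevity.
std::vector<float> unsharpMask1D(const std::vector<float> &src, float strength)
{
    const std::size_t n = src.size();
    std::vector<float> med(src), out(src);
    for (std::size_t i = 1; i + 1 < n; i++)
    {
        const float a = src[i-1], b = src[i], c = src[i+1];
        med[i] = std::max(std::min(a, b), std::min(std::max(a, b), c)); // median of 3
    }
    for (std::size_t i = 1; i + 1 < n; i++)
    {
        const float lap = med[i-1] - 2.f*med[i] + med[i+1];  // Laplacian of the median
        out[i] = src[i] - strength * lap;                    // U = I - s*L(M(I))
    }
    return out;
}
```

<p>On a step edge like <code>{0, 0, 10, 20, 20}</code> the Laplacian is negative on the bright side and positive on the dark side of the transition, so subtracting it overshoots the edge in both directions &mdash; which is exactly the sharpening effect; the median pre-filter keeps isolated noise pixels from being amplified the same way.</p>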
<h1><a class="anchor" id="gapi_fb_proc"></a>
Custom operations</h1>
<p>The face beautification graph uses custom operations extensively. This chapter focuses on the most interesting kernels; refer to <a class="el" href="../../d0/d25/gapi_kernel_api.html">G-API Kernel API</a> for general information on defining operations and implementing kernels in G-API.</p>
<h2><a class="anchor" id="gapi_fb_face_detect"></a>
Face detector post-processing</h2>
<p>A face detector output is converted to an array of faces with the following kernel:</p>
<div class="fragment"><div class="line"><span class="keyword">using</span> VectorROI = std::vector&lt;cv::Rect&gt;;</div></div><!-- fragment --><div class="fragment"><div class="line"><a class="code" href="../../da/d73/gcpukernel_8hpp.html#aacef2b3c16c285adbd70a03ae8aedc46">GAPI_OCV_KERNEL</a>(GCPUFacePostProc, GFacePostProc)</div><div class="line">{</div><div class="line">    <span class="keyword">static</span> <span class="keywordtype">void</span> run(<span class="keyword">const</span> <a class="code" href="../../d3/d63/classcv_1_1Mat.html">cv::Mat</a>   &amp;inDetectResult,</div><div class="line">                    <span class="keyword">const</span> <a class="code" href="../../d3/d63/classcv_1_1Mat.html">cv::Mat</a>   &amp;inFrame,</div><div class="line">                    <span class="keyword">const</span> <span class="keywordtype">float</span>      faceConfThreshold,</div><div class="line">                          VectorROI &amp;outFaces)</div><div class="line">    {</div><div class="line">        <span class="keyword">const</span> <span class="keywordtype">int</span> kObjectSize  = 7;</div><div class="line">        <span class="keyword">const</span> <span class="keywordtype">int</span> imgCols = inFrame.<a class="code" href="../../d3/d63/classcv_1_1Mat.html#a146f8e8dda07d1365a575ab83d9828d1">size</a>().width;</div><div class="line">        <span class="keyword">const</span> <span class="keywordtype">int</span> imgRows = inFrame.<a class="code" href="../../d3/d63/classcv_1_1Mat.html#a146f8e8dda07d1365a575ab83d9828d1">size</a>().height;</div><div class="line">        <span class="keyword">const</span> <a class="code" href="../../d2/d44/classcv_1_1Rect__.html">cv::Rect</a> borders({0, 0}, inFrame.<a class="code" href="../../d3/d63/classcv_1_1Mat.html#a146f8e8dda07d1365a575ab83d9828d1">size</a>());</div><div class="line">        outFaces.clear();</div><div class="line">        <span class="keyword">const</span> <span class="keywordtype">int</span>    
numOfDetections = inDetectResult.<a class="code" href="../../d3/d63/classcv_1_1Mat.html#a146f8e8dda07d1365a575ab83d9828d1">size</a>[2];</div><div class="line">        <span class="keyword">const</span> <span class="keywordtype">float</span> *data            = inDetectResult.<a class="code" href="../../d3/d63/classcv_1_1Mat.html#a13acd320291229615ef15f96ff1ff738">ptr</a>&lt;<span class="keywordtype">float</span>&gt;();</div><div class="line">        <span class="keywordflow">for</span> (<span class="keywordtype">int</span> i = 0; i &lt; numOfDetections; i++)</div><div class="line">        {</div><div class="line">            <span class="keyword">const</span> <span class="keywordtype">float</span> faceId         = data[i * kObjectSize + 0];</div><div class="line">            <span class="keywordflow">if</span> (faceId &lt; 0.f)  <span class="comment">// indicates the end of detections</span></div><div class="line">            {</div><div class="line">                <span class="keywordflow">break</span>;</div><div class="line">            }</div><div class="line">            <span class="keyword">const</span> <span class="keywordtype">float</span> faceConfidence = data[i * kObjectSize + 2];</div><div class="line">            <span class="comment">// We can cut detections by the `conf` field</span></div><div class="line">            <span class="comment">//  to avoid mistakes of the detector.</span></div><div class="line">            <span class="keywordflow">if</span> (faceConfidence &gt; faceConfThreshold)</div><div class="line">            {</div><div class="line">                <span class="keyword">const</span> <span class="keywordtype">float</span> left   = data[i * kObjectSize + 3];</div><div class="line">                <span class="keyword">const</span> <span class="keywordtype">float</span> top    = data[i * kObjectSize + 4];</div><div class="line">                <span class="keyword">const</span> <span class="keywordtype">float</span> right  = data[i * 
kObjectSize + 5];</div><div class="line">                <span class="keyword">const</span> <span class="keywordtype">float</span> bottom = data[i * kObjectSize + 6];</div><div class="line">                <span class="comment">// These are normalized coordinates and are between 0 and 1;</span></div><div class="line">                <span class="comment">//  to get the real pixel coordinates we should multiply it by</span></div><div class="line">                <span class="comment">//  the image sizes respectively to the directions:</span></div><div class="line">                <a class="code" href="../../db/d4e/classcv_1_1Point__.html">cv::Point</a> tl(toIntRounded(left   * imgCols),</div><div class="line">                             toIntRounded(top    * imgRows));</div><div class="line">                <a class="code" href="../../db/d4e/classcv_1_1Point__.html">cv::Point</a> br(toIntRounded(right  * imgCols),</div><div class="line">                             toIntRounded(bottom * imgRows));</div><div class="line">                outFaces.push_back(<a class="code" href="../../d2/d44/classcv_1_1Rect__.html">cv::Rect</a>(tl, br) &amp; borders);</div><div class="line">            }</div><div class="line">        }</div><div class="line">    }</div><div class="line">};</div></div><!-- fragment --> <h2><a class="anchor" id="gapi_fb_landm_detect"></a>
Facial landmarks post-processing</h2>
<p>The algorithm infers locations of face elements (like the eyes, the mouth and the head contour itself) using a generic facial landmarks detector (<a href="https://github.com/opencv/open_model_zoo/blob/master/models/intel/facial-landmarks-35-adas-0002/description/facial-landmarks-35-adas-0002.md">details</a>) from OpenVINO™ Open Model Zoo. However, the detected landmarks as-is are not enough to generate masks &mdash; this operation requires regions of interest on the face represented by closed contours, so some interpolation is applied to get them. This landmarks processing and interpolation is performed by the following kernel:</p>
<div class="fragment"><div class="line"><a class="code" href="../../da/d73/gcpukernel_8hpp.html#aacef2b3c16c285adbd70a03ae8aedc46">GAPI_OCV_KERNEL</a>(GCPUGetContours, GGetContours)</div><div class="line">{</div><div class="line">    <span class="keyword">static</span> <span class="keywordtype">void</span> run(<span class="keyword">const</span> std::vector&lt;Landmarks&gt; &amp;vctPtsFaceElems,  <span class="comment">// 18 landmarks of the facial elements</span></div><div class="line">                    <span class="keyword">const</span> std::vector&lt;Contour&gt;   &amp;vctCntJaw,        <span class="comment">// 17 landmarks of a jaw</span></div><div class="line">                          std::vector&lt;Contour&gt;   &amp;vctElemsContours,</div><div class="line">                          std::vector&lt;Contour&gt;   &amp;vctFaceContours)</div><div class="line">    {</div><div class="line">        <span class="keywordtype">size_t</span> numFaces = vctCntJaw.size();</div><div class="line">        <a class="code" href="../../db/de0/group__core__utils.html#gaf62bcd90f70e275191ab95136d85906b">CV_Assert</a>(numFaces == vctPtsFaceElems.size());</div><div class="line">        <a class="code" href="../../db/de0/group__core__utils.html#gaf62bcd90f70e275191ab95136d85906b">CV_Assert</a>(vctElemsContours.size() == 0ul);</div><div class="line">        <a class="code" href="../../db/de0/group__core__utils.html#gaf62bcd90f70e275191ab95136d85906b">CV_Assert</a>(vctFaceContours.size()  == 0ul);</div><div class="line">        <span class="comment">// vctFaceElemsContours will store all the face elements&#39; contours found</span></div><div class="line">        <span class="comment">//  in an input image, namely 4 elements (two eyes, nose, mouth) for every detected face:</span></div><div class="line">        vctElemsContours.reserve(numFaces * 4);</div><div class="line">        <span class="comment">// vctFaceElemsContours will store all the faces&#39; contours found in an input 
image:</span></div><div class="line">        vctFaceContours.reserve(numFaces);</div><div class="line"></div><div class="line">        Contour cntFace, cntLeftEye, cntRightEye, cntNose, cntMouth;</div><div class="line">        cntNose.reserve(4);</div><div class="line"></div><div class="line">        <span class="keywordflow">for</span> (<span class="keywordtype">size_t</span> i = 0ul; i &lt; numFaces; i++)</div><div class="line">        {</div><div class="line">            <span class="comment">// The face element contours</span></div><div class="line"></div><div class="line">            <span class="comment">// A left eye:</span></div><div class="line">            <span class="comment">// Approximating the lower eye contour by a half-ellipse (using eye points) and storing it in cntLeftEye:</span></div><div class="line">            cntLeftEye = getEyeEllipse(vctPtsFaceElems[i][1], vctPtsFaceElems[i][0]);</div><div class="line">            <span class="comment">// Pushing the left eyebrow clock-wise:</span></div><div class="line">            cntLeftEye.insert(cntLeftEye.end(), {vctPtsFaceElems[i][12], vctPtsFaceElems[i][13],</div><div class="line">                                                 vctPtsFaceElems[i][14]});</div><div class="line"></div><div class="line">            <span class="comment">// A right eye:</span></div><div class="line">            <span class="comment">// Approximating the lower eye contour by a half-ellipse (using eye points) and storing it in cntRightEye:</span></div><div class="line">            cntRightEye = getEyeEllipse(vctPtsFaceElems[i][2], vctPtsFaceElems[i][3]);</div><div class="line">            <span class="comment">// Pushing the right eyebrow clock-wise:</span></div><div class="line">            cntRightEye.insert(cntRightEye.end(), {vctPtsFaceElems[i][15], vctPtsFaceElems[i][16],</div><div class="line">                                                   vctPtsFaceElems[i][17]});</div><div class="line"></div><div class="line">            <span class="comment">// A nose:</span></div><div class="line">            <span class="comment">// Storing the nose points clock-wise:</span></div><div class="line">            cntNose.clear();</div><div class="line">            cntNose.insert(cntNose.end(), {vctPtsFaceElems[i][4], vctPtsFaceElems[i][7],</div><div class="line">                                           vctPtsFaceElems[i][5], vctPtsFaceElems[i][6]});</div><div class="line"></div><div class="line">            <span class="comment">// A mouth:</span></div><div class="line">            <span class="comment">// Approximating the mouth contour by two half-ellipses (using mouth points) and storing it in cntMouth:</span></div><div class="line">            cntMouth = getPatchedEllipse(vctPtsFaceElems[i][8], vctPtsFaceElems[i][9],</div><div class="line">                                         vctPtsFaceElems[i][10], vctPtsFaceElems[i][11]);</div><div class="line"></div><div class="line">            <span class="comment">// Storing all the elements in a vector:</span></div><div class="line">            vctElemsContours.insert(vctElemsContours.end(), {cntLeftEye, cntRightEye, cntNose, cntMouth});</div><div class="line"></div><div class="line">            <span class="comment">// The face contour:</span></div><div class="line">            <span class="comment">// Approximating the forehead contour by a half-ellipse (using jaw points) and storing it in cntFace:</span></div><div class="line">            cntFace = getForeheadEllipse(vctCntJaw[i][0], vctCntJaw[i][16], vctCntJaw[i][8]);</div><div class="line">            <span class="comment">// The ellipse is drawn clock-wise, but the jaw contour points go the opposite way, so it&#39;s necessary to push</span></div><div class="line">            <span class="comment">//  cntJaw from the end to the beginning using a reverse iterator:</span></div><div class="line">            <a class="code"
href="../../d6/d91/group__gapi__transform.html#gac782e501961826d93c5556e623fca3c3">std::copy</a>(vctCntJaw[i].crbegin(), vctCntJaw[i].crend(), std::back_inserter(cntFace));</div><div class="line">            <span class="comment">// Storing the face contour in another vector:</span></div><div class="line">            vctFaceContours.push_back(cntFace);</div><div class="line">        }</div><div class="line">    }</div><div class="line">};</div></div><!-- fragment --><p> The kernel takes two arrays of denormalized landmark coordinates and returns two arrays of closed contours: the first holds the face elements' contours (the image areas to be sharpened), and the second holds the whole-face contours (the areas to be smoothed).</p>
<p>Here and below <code>Contour</code> is a vector of points.</p>
<h3><a class="anchor" id="gapi_fb_ld_eye"></a>
Getting an eye contour</h3>
<p>Eye contours are estimated with the following function:</p>
<div class="fragment"><div class="line"><span class="keyword">inline</span> <span class="keywordtype">int</span> custom::getLineInclinationAngleDegrees(<span class="keyword">const</span> <a class="code" href="../../db/d4e/classcv_1_1Point__.html">cv::Point</a> &amp;ptLeft, <span class="keyword">const</span> <a class="code" href="../../db/d4e/classcv_1_1Point__.html">cv::Point</a> &amp;ptRight)</div><div class="line">{</div><div class="line">    <span class="keyword">const</span> <a class="code" href="../../db/d4e/classcv_1_1Point__.html">cv::Point</a> residual = ptRight - ptLeft;</div><div class="line">    <span class="keywordflow">if</span> (residual.<a class="code" href="../../db/d4e/classcv_1_1Point__.html#a157337197338ff199e5df1a393022f15">y</a> == 0 &amp;&amp; residual.<a class="code" href="../../db/d4e/classcv_1_1Point__.html#a4c96fa7bdbfe390be5ed356edb274ff3">x</a> == 0)</div><div class="line">        <span class="keywordflow">return</span> 0;</div><div class="line">    <span class="keywordflow">else</span></div><div class="line">        <span class="keywordflow">return</span> toIntRounded(<a class="code" href="../../df/dfc/group__cudev.html#ga1096ba687de70142e095cc791a8bcd65">atan2</a>(toDouble(residual.<a class="code" href="../../db/d4e/classcv_1_1Point__.html#a157337197338ff199e5df1a393022f15">y</a>), toDouble(residual.<a class="code" href="../../db/d4e/classcv_1_1Point__.html#a4c96fa7bdbfe390be5ed356edb274ff3">x</a>)) * 180.0 / CV_PI);</div><div class="line">}</div></div><!-- fragment --><div class="fragment"><div class="line"><span class="keyword">inline</span> Contour custom::getEyeEllipse(<span class="keyword">const</span> <a class="code" href="../../db/d4e/classcv_1_1Point__.html">cv::Point</a> &amp;ptLeft, <span class="keyword">const</span> <a class="code" href="../../db/d4e/classcv_1_1Point__.html">cv::Point</a> &amp;ptRight)</div><div class="line">{</div><div class="line">    Contour cntEyeBottom;</div><div class="line">    <span 
class="keyword">const</span> <a class="code" href="../../db/d4e/classcv_1_1Point__.html">cv::Point</a> ptEyeCenter((ptRight + ptLeft) / 2);</div><div class="line">    <span class="keyword">const</span> <span class="keywordtype">int</span> angle = getLineInclinationAngleDegrees(ptLeft, ptRight);</div><div class="line">    <span class="keyword">const</span> <span class="keywordtype">int</span> axisX = toIntRounded(<a class="code" href="../../dc/d84/group__core__basic.html#ga4e556cb8ad35a643a1ea66e035711bb9">cv::norm</a>(ptRight - ptLeft) / 2.0);</div><div class="line">    <span class="comment">// According to research, in average a Y axis of an eye is approximately</span></div><div class="line">    <span class="comment">//  1/3 of an X one.</span></div><div class="line">    <span class="keyword">const</span> <span class="keywordtype">int</span> axisY = axisX / 3;</div><div class="line">    <span class="comment">// We need the lower part of an ellipse:</span></div><div class="line">    <span class="keyword">static</span> constexpr <span class="keywordtype">int</span> kAngEyeStart = 0;</div><div class="line">    <span class="keyword">static</span> constexpr <span class="keywordtype">int</span> kAngEyeEnd   = 180;</div><div class="line">    <a class="code" href="../../d6/d6e/group__imgproc__draw.html#ga727a72a3f6a625a2ae035f957c61051f">cv::ellipse2Poly</a>(ptEyeCenter, <a class="code" href="../../d6/d50/classcv_1_1Size__.html">cv::Size</a>(axisX, axisY), angle, kAngEyeStart, kAngEyeEnd, config::kAngDelta,</div><div class="line">                     cntEyeBottom);</div><div class="line">    <span class="keywordflow">return</span> cntEyeBottom;</div><div class="line">}</div></div><!-- fragment --><p> Briefly, this function restores the bottom side of an eye by a half-ellipse based on two points in left and right eye corners. 
In fact, <code><a class="el" href="../../d6/d6e/group__imgproc__draw.html#ga727a72a3f6a625a2ae035f957c61051f" title="Approximates an elliptic arc with a polyline. ">cv::ellipse2Poly()</a></code> does the actual approximation; the function itself only derives the ellipse parameters from the two points:</p><ul>
<li>The ellipse center and the \(X\) half-axis, calculated from the two eye corner points;</li>
<li>The \(Y\) half-axis, calculated under the assumption that an eye's height is on average \(1/3\) of its width;</li>
<li>The start and the end angles which are 0 and 180 (refer to <code><a class="el" href="../../d6/d6e/group__imgproc__draw.html#ga28b2267d35786f5f890ca167236cbc69" title="Draws a simple or thick elliptic arc or fills an ellipse sector. ">cv::ellipse()</a></code> documentation);</li>
<li>The angle delta: how many points to produce in the contour;</li>
<li>The inclination angle of the axes.</li>
</ul>
<p>Using <code><a class="el" href="../../df/dfc/group__cudev.html#ga1096ba687de70142e095cc791a8bcd65">atan2()</a></code> instead of plain <code><a class="el" href="../../d0/de1/group__core.html#ga698c37839d751aaee3331d62fd13528b">atan()</a></code> in <code>custom::getLineInclinationAngleDegrees()</code> is essential: it returns a value whose sign depends on the signs of both <code>x</code> and <code>y</code>, so the angle stays correct even for an upside-down face arrangement (provided the points are passed in the right order, of course).</p>
<h3><a class="anchor" id="gapi_fb_ld_fhd"></a>
Getting a forehead contour</h3>
<p>The function approximates the forehead contour:</p>
<div class="fragment"><div class="line"><span class="keyword">inline</span> Contour custom::getForeheadEllipse(<span class="keyword">const</span> <a class="code" href="../../db/d4e/classcv_1_1Point__.html">cv::Point</a> &amp;ptJawLeft,</div><div class="line">                                          <span class="keyword">const</span> <a class="code" href="../../db/d4e/classcv_1_1Point__.html">cv::Point</a> &amp;ptJawRight,</div><div class="line">                                          <span class="keyword">const</span> <a class="code" href="../../db/d4e/classcv_1_1Point__.html">cv::Point</a> &amp;ptJawLower)</div><div class="line">{</div><div class="line">    Contour cntForehead;</div><div class="line">    <span class="comment">// The point amid the top two points of a jaw:</span></div><div class="line">    <span class="keyword">const</span> <a class="code" href="../../db/d4e/classcv_1_1Point__.html">cv::Point</a> ptFaceCenter((ptJawLeft + ptJawRight) / 2);</div><div class="line">    <span class="comment">// This will be the center of the ellipse.</span></div><div class="line"></div><div class="line">    <span class="comment">// The angle between the jaw and the vertical:</span></div><div class="line">    <span class="keyword">const</span> <span class="keywordtype">int</span> angFace = getLineInclinationAngleDegrees(ptJawLeft, ptJawRight);</div><div class="line">    <span class="comment">// This will be the inclination of the ellipse</span></div><div class="line"></div><div class="line">    <span class="comment">// Counting the half-axis of the ellipse:</span></div><div class="line">    <span class="keyword">const</span> <span class="keywordtype">double</span> jawWidth  = <a class="code" href="../../dc/d84/group__core__basic.html#ga4e556cb8ad35a643a1ea66e035711bb9">cv::norm</a>(ptJawLeft - ptJawRight);</div><div class="line">    <span class="comment">// A forehead width equals the jaw width, and we need a half-axis:</span></div><div class="line">    <span 
class="keyword">const</span> <span class="keywordtype">int</span> axisX        = toIntRounded(jawWidth / 2.0);</div><div class="line"></div><div class="line">    <span class="keyword">const</span> <span class="keywordtype">double</span> jawHeight = <a class="code" href="../../dc/d84/group__core__basic.html#ga4e556cb8ad35a643a1ea66e035711bb9">cv::norm</a>(ptFaceCenter - ptJawLower);</div><div class="line">    <span class="comment">// According to research, on average a forehead is approximately 2/3 of</span></div><div class="line">    <span class="comment">//  a jaw:</span></div><div class="line">    <span class="keyword">const</span> <span class="keywordtype">int</span> axisY        = toIntRounded(jawHeight * 2 / 3.0);</div><div class="line"></div><div class="line">    <span class="comment">// We need the upper part of an ellipse:</span></div><div class="line">    <span class="keyword">static</span> constexpr <span class="keywordtype">int</span> kAngForeheadStart = 180;</div><div class="line">    <span class="keyword">static</span> constexpr <span class="keywordtype">int</span> kAngForeheadEnd   = 360;</div><div class="line">    <a class="code" href="../../d6/d6e/group__imgproc__draw.html#ga727a72a3f6a625a2ae035f957c61051f">cv::ellipse2Poly</a>(ptFaceCenter, <a class="code" href="../../d6/d50/classcv_1_1Size__.html">cv::Size</a>(axisX, axisY), angFace, kAngForeheadStart, kAngForeheadEnd,</div><div class="line">                     config::kAngDelta, cntForehead);</div><div class="line">    <span class="keywordflow">return</span> cntForehead;</div><div class="line">}</div></div><!-- fragment --><p> As only jaw points are present among the detected landmarks, we have to build a half-ellipse from three of them: the leftmost, the rightmost and the lowest one. The forehead width is assumed to be equal to the jaw width, which is calculated from the left and the right points. 
As for the \(Y\) half-axis, there are no landmark points to derive it directly; instead, the forehead height is assumed to be about \(2/3\) of the jaw height, computed as the distance from the face center (the midpoint between the left and the right jaw points) to the lowest jaw point.</p>
<h2><a class="anchor" id="gapi_fb_masks_drw"></a>
Drawing masks</h2>
<p>When we have all the contours needed, we are able to draw masks:</p>
<div class="fragment"><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskSharp        = custom::GFillPolyGContours::on(gimgIn, garElsConts);             <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskSharpG       = <a class="code" href="../../da/dc5/group__gapi__filters.html#gaaca00b81d171421032917e53751ac427">cv::gapi::gaussianBlur</a>(mskSharp, config::kGKernelSize,           <span class="comment">// |</span></div><div class="line">                                                          config::kGSigma);                         <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskBlur         = custom::GFillPolyGContours::on(gimgIn, garFaceConts);            <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskBlurG        = <a class="code" href="../../da/dc5/group__gapi__filters.html#gaaca00b81d171421032917e53751ac427">cv::gapi::gaussianBlur</a>(mskBlur, config::kGKernelSize,            <span class="comment">// |</span></div><div class="line">                                                          config::kGSigma);                         <span class="comment">// |draw masks</span></div><div class="line">        <span class="comment">// The first argument in mask() is Blur as we want to subtract from                         // |</span></div><div class="line">        <span class="comment">// BlurG the next step:                                                                     // |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskBlurFinal    = mskBlurG - <a class="code" href="../../da/dd3/group__gapi__math.html#gaba076d51941328cb7ca9348b7b535220">cv::gapi::mask</a>(mskBlurG, mskSharpG);       
           <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskFacesGaussed = mskBlurFinal + mskSharpG;                                        <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskFacesWhite   = <a class="code" href="../../d0/d86/group__gapi__matrixop.html#gad538f94c264624d0ea78b853d53adcb2">cv::gapi::threshold</a>(mskFacesGaussed, 0, 255, <a class="code" href="../../d7/d1b/group__imgproc__misc.html#ggaa9e58d2860d4afa658ef70a9b1115576a147222a96556ebc1d948b372bcd7ac59">cv::THRESH_BINARY</a>); <span class="comment">// |</span></div><div class="line">        <a class="code" href="../../df/daa/classcv_1_1GMat.html">cv::GMat</a> mskNoFaces      = <a class="code" href="../../d1/db2/group__gapi__pixelwise.html#ga02beaca6bb6fe5582d58ea829470da79">cv::gapi::bitwise_not</a>(mskFacesWhite);                            <span class="comment">// |</span></div></div><!-- fragment --><p> The steps to get the masks are:</p><ul>
<li>the "sharp" mask calculation:<ul>
<li>fill the contours that should be sharpened;</li>
<li>blur that to get the "sharp" mask (<code>mskSharpG</code>);</li>
</ul>
</li>
<li>the "bilateral" mask calculation:<ul>
<li>fill all the face contours fully;</li>
<li>blur that;</li>
<li>subtract areas which intersect with the "sharp" mask &mdash; and get the "bilateral" mask (<code>mskBlurFinal</code>);</li>
</ul>
</li>
<li>the background mask calculation:<ul>
<li>add the two previous masks;</li>
<li>set all non-zero pixels of the result to 255 (by <code><a class="el" href="../../d0/d86/group__gapi__matrixop.html#gad538f94c264624d0ea78b853d53adcb2" title="Applies a fixed-level threshold to each matrix element. ">cv::gapi::threshold()</a></code>);</li>
<li>revert the output (by <code><a class="el" href="../../d1/db2/group__gapi__pixelwise.html#ga02beaca6bb6fe5582d58ea829470da79" title="Inverts every bit of an array. ">cv::gapi::bitwise_not</a></code>) to get the background mask (<code>mskNoFaces</code>).</li>
</ul>
</li>
</ul>
<h1><a class="anchor" id="gapi_fb_comp_args"></a>
Configuring and running the pipeline</h1>
<p>Once the graph is fully expressed, we can finally compile it and run it on real data. G-API graph compilation is the stage where the G-API framework actually understands which kernels and networks to use. This configuration happens via G-API compilation arguments.</p>
<h2><a class="anchor" id="gapi_fb_comp_args_net"></a>
DNN parameters</h2>
<p>This sample uses the OpenVINO™ Toolkit Inference Engine backend for DL inference, which is configured as follows:</p>
<div class="fragment"><div class="line">    <span class="keyword">auto</span> faceParams  = <a class="code" href="../../d7/dda/classcv_1_1gapi_1_1ie_1_1Params.html">cv::gapi::ie::Params&lt;custom::FaceDetector&gt;</a></div><div class="line">    {</div><div class="line">        <span class="comment">/*std::string*/</span> faceXmlPath,</div><div class="line">        <span class="comment">/*std::string*/</span> faceBinPath,</div><div class="line">        <span class="comment">/*std::string*/</span> faceDevice</div><div class="line">    };</div><div class="line">    <span class="keyword">auto</span> landmParams = <a class="code" href="../../d7/dda/classcv_1_1gapi_1_1ie_1_1Params.html">cv::gapi::ie::Params&lt;custom::LandmDetector&gt;</a></div><div class="line">    {</div><div class="line">        <span class="comment">/*std::string*/</span> landmXmlPath,</div><div class="line">        <span class="comment">/*std::string*/</span> landmBinPath,</div><div class="line">        <span class="comment">/*std::string*/</span> landmDevice</div><div class="line">    };</div></div><!-- fragment --><p> Every <code><a class="el" href="../../d7/dda/classcv_1_1gapi_1_1ie_1_1Params.html">cv::gapi::ie::Params</a>&lt;&gt;</code> object is tied to the network specified in its template argument; there we should pass the network type we defined with <code><a class="el" href="../../d6/d32/infer_8hpp.html#adfb450a1d7992bc72c9afaa758516f27">G_API_NET()</a></code> at the very beginning of the tutorial.</p>
<p>Network parameters are then wrapped in <code>cv::gapi::NetworkPackage</code>:</p>
<div class="fragment"><div class="line">    <span class="keyword">auto</span> <a class="code" href="../../d4/d1c/namespacecv_1_1gapi.html#a7d2b842e9369c0d72b383e61b9038581">networks</a>      = <a class="code" href="../../d4/d1c/namespacecv_1_1gapi.html#a7d2b842e9369c0d72b383e61b9038581">cv::gapi::networks</a>(faceParams, landmParams);</div></div><!-- fragment --><p> More details in "Face Analytics Pipeline" (<a class="el" href="../../d8/d24/tutorial_gapi_interactive_face_detection.html#gapi_ifd_configuration">Configuring the pipeline</a> section).</p>
<h2><a class="anchor" id="gapi_fb_comp_args_kernels"></a>
Kernel packages</h2>
<p>This example uses many custom kernels; in addition, it uses the Fluid backend to optimize memory consumption for G-API's standard kernels where applicable. The resulting kernel package is formed like this:</p>
<div class="fragment"><div class="line">    <span class="keyword">auto</span> customKernels = <a class="code" href="../../d9/d29/group__gapi__compile__args.html#ga18c46d5801429bb63848fc6e2391cb20">cv::gapi::kernels</a>&lt;custom::GCPUBilateralFilter,</div><div class="line">                                           custom::GCPULaplacian,</div><div class="line">                                           custom::GCPUFillPolyGContours,</div><div class="line">                                           custom::GCPUPolyLines,</div><div class="line">                                           custom::GCPURectangle,</div><div class="line">                                           custom::GCPUFacePostProc,</div><div class="line">                                           custom::GCPULandmPostProc,</div><div class="line">                                           custom::GCPUGetContours&gt;();</div><div class="line">    <span class="keyword">auto</span> <a class="code" href="../../d9/d29/group__gapi__compile__args.html#ga18c46d5801429bb63848fc6e2391cb20">kernels</a>       = <a class="code" href="../../d4/d1c/namespacecv_1_1gapi.html#ab3c55a390c722279ced6f56523fa01a7">cv::gapi::combine</a>(<a class="code" href="../../d9/d29/group__gapi__compile__args.html#ga18c46d5801429bb63848fc6e2391cb20">cv::gapi::core::fluid::kernels</a>(),</div><div class="line">                                           customKernels);</div></div><!-- fragment --> <h2><a class="anchor" id="gapi_fb_compiling"></a>
Compiling the streaming pipeline</h2>
<p>G-API optimizes execution for video streams when compiled in the "Streaming" mode.</p>
<div class="fragment"><div class="line">        <a class="code" href="../../d1/d9b/classcv_1_1GStreamingCompiled.html">cv::GStreamingCompiled</a> stream = pipeline.compileStreaming(<a class="code" href="../../d9/d29/group__gapi__compile__args.html#ga3ccf2a52953f18bb3e4c01243cc4e679">cv::compile_args</a>(kernels, <a class="code" href="../../d4/d1c/namespacecv_1_1gapi.html#a7d2b842e9369c0d72b383e61b9038581">networks</a>));</div></div><!-- fragment --><p> More on this in "Face Analytics Pipeline" (<a class="el" href="../../d8/d24/tutorial_gapi_interactive_face_detection.html#gapi_ifd_configuration">Configuring the pipeline</a> section).</p>
<h2><a class="anchor" id="gapi_fb_running"></a>
Running the streaming pipeline</h2>
<p>In order to run the G-API streaming pipeline, all we need is to specify the input video source, call <code><a class="el" href="../../d1/d9b/classcv_1_1GStreamingCompiled.html#a3cc45dcb57acab91359b4e8493bb39a4" title="Start the pipeline execution. ">cv::GStreamingCompiled::start()</a></code>, and then fetch the pipeline processing results:</p>
<div class="fragment"><div class="line">        <span class="keywordflow">if</span> (parser.has(<span class="stringliteral">&quot;input&quot;</span>))</div><div class="line">        {</div><div class="line">            stream.<a class="code" href="../../d1/d9b/classcv_1_1GStreamingCompiled.html#a20f82ac711832fec1b2be70e7a3785de">setSource</a>(cv::gapi::wip::make_src&lt;cv::gapi::wip::GCaptureSource&gt;(parser.get&lt;<a class="code" href="../../dc/d84/group__core__basic.html#ga1f6634802eeadfd7245bc75cf3e216c2">cv::String</a>&gt;(<span class="stringliteral">&quot;input&quot;</span>)));</div><div class="line">        }</div></div><!-- fragment --><div class="fragment"><div class="line">            <span class="keyword">auto</span> out_vector = <a class="code" href="../../d2/d75/namespacecv.html#ac82b8a261b82157293b603b55c096a9e">cv::gout</a>(imgBeautif, imgShow, vctFaceConts,</div><div class="line">                                       vctElsConts, vctRects);</div><div class="line">            stream.<a class="code" href="../../d1/d9b/classcv_1_1GStreamingCompiled.html#a3cc45dcb57acab91359b4e8493bb39a4">start</a>();</div><div class="line">            avg.start();</div><div class="line">            <span class="keywordflow">while</span> (stream.<a class="code" href="../../d1/d9b/classcv_1_1GStreamingCompiled.html#a1085c0627d28ea178f607ee225dd5100">running</a>())</div><div class="line">            {</div><div class="line">                <span class="keywordflow">if</span> (!stream.<a class="code" href="../../d1/d9b/classcv_1_1GStreamingCompiled.html#af4a40c09634ba45686b2177f38c49022">try_pull</a>(std::move(out_vector)))</div><div class="line">                {</div><div class="line">                    <span class="comment">// Use a try_pull() to obtain data.</span></div><div class="line">                    <span class="comment">// If there&#39;s no data, let UI refresh (and handle keypress)</span></div><div class="line">                    <span 
class="keywordflow">if</span> (<a class="code" href="../../d7/dfc/group__highgui.html#ga5628525ad33f52eab17feebcfba38bd7">cv::waitKey</a>(1) &gt;= 0) <span class="keywordflow">break</span>;</div><div class="line">                    <span class="keywordflow">else</span> <span class="keywordflow">continue</span>;</div><div class="line">                }</div><div class="line">                frames++;</div><div class="line">                <span class="comment">// Drawing face boxes and landmarks if necessary:</span></div><div class="line">                <span class="keywordflow">if</span> (flgLandmarks == <span class="keyword">true</span>)</div><div class="line">                {</div><div class="line">                    <a class="code" href="../../d6/d6e/group__imgproc__draw.html#ga1ea127ffbbb7e0bfc4fd6fd2eb64263c">cv::polylines</a>(imgShow, vctFaceConts, config::kClosedLine,</div><div class="line">                                  config::kClrYellow);</div><div class="line">                    <a class="code" href="../../d6/d6e/group__imgproc__draw.html#ga1ea127ffbbb7e0bfc4fd6fd2eb64263c">cv::polylines</a>(imgShow, vctElsConts, config::kClosedLine,</div><div class="line">                                  config::kClrYellow);</div><div class="line">                }</div><div class="line">                <span class="keywordflow">if</span> (flgBoxes == <span class="keyword">true</span>)</div><div class="line">                    <span class="keywordflow">for</span> (<span class="keyword">auto</span> rect : vctRects)</div><div class="line">                        <a class="code" href="../../d6/d6e/group__imgproc__draw.html#ga07d2f74cadcf8e305e810ce8eed13bc9">cv::rectangle</a>(imgShow, rect, config::kClrGreen);</div><div class="line">                <a class="code" href="../../d7/dfc/group__highgui.html#ga453d42fe4cb60e5723281a89973ee563">cv::imshow</a>(config::kWinInput,              imgShow);</div><div class="line">                <a class="code" 
href="../../d7/dfc/group__highgui.html#ga453d42fe4cb60e5723281a89973ee563">cv::imshow</a>(config::kWinFaceBeautification, imgBeautif);</div><div class="line">            }</div></div><!-- fragment --><p> Once results are ready and can be pulled from the pipeline, we display them on the screen and handle GUI events.</p>
<p>See <a class="el" href="../../d8/d24/tutorial_gapi_interactive_face_detection.html#gapi_ifd_running">Running the pipeline</a> section in the "Face Analytics Pipeline" tutorial for more details.</p>
<h1><a class="anchor" id="gapi_fb_cncl"></a>
Conclusion</h1>
<p>The tutorial has two goals: to show the use of the brand-new G-API features introduced in OpenCV 4.2, and to give a basic understanding of a sample face beautification algorithm.</p>
<p>The result of the algorithm application:</p>
<div class="image">
<img src="../../example.jpg" alt="example.jpg"/>
<div class="caption">
Face Beautification example</div></div>
<p> On the test machine (Intel® Core™ i7-8700) the G-API-optimized video pipeline outperforms its serial (non-pipelined) version by a factor of <b>2.7</b> &ndash; meaning that for such a non-trivial graph, proper pipelining can bring an almost 3x increase in performance. </p>
</div></div><!-- contents -->
<!-- HTML footer for doxygen 1.8.6-->
<!-- start footer part -->
<hr class="footer"/><address class="footer"><small>
Generated on Fri Apr 2 2021 11:36:34 for OpenCV by &#160;<a href="http://www.doxygen.org/index.html">
<img class="footer" src="../../doxygen.png" alt="doxygen"/>
</a> 1.8.13
</small></address>
<script type="text/javascript">
//<![CDATA[
addTutorialsButtons();
//]]>
</script>
</body>
</html>
