<!-- HTML header for doxygen 1.8.6-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.13"/>
<title>OpenCV: Using Orbbec Astra 3D cameras</title>
<link href="../../opencv.ico" rel="shortcut icon" type="image/x-icon" />
<link href="../../tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="../../jquery.js"></script>
<script type="text/javascript" src="../../dynsections.js"></script>
<script type="text/javascript" src="../../tutorial-utils.js"></script>
<link href="../../search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="../../search/searchdata.js"></script>
<script type="text/javascript" src="../../search/search.js"></script>
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    extensions: ["tex2jax.js", "TeX/AMSmath.js", "TeX/AMSsymbols.js"],
    jax: ["input/TeX","output/HTML-CSS"],
});
//<![CDATA[
MathJax.Hub.Config(
{
  TeX: {
      Macros: {
          matTT: [ "\\[ \\left|\\begin{array}{ccc} #1 & #2 & #3\\\\ #4 & #5 & #6\\\\ #7 & #8 & #9 \\end{array}\\right| \\]", 9],
          fork: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ \\end{array} \\right.", 4],
          forkthree: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ #5 & \\mbox{#6}\\\\ \\end{array} \\right.", 6],
          forkfour: ["\\left\\{ \\begin{array}{l l} #1 & \\mbox{#2}\\\\ #3 & \\mbox{#4}\\\\ #5 & \\mbox{#6}\\\\ #7 & \\mbox{#8}\\\\ \\end{array} \\right.", 8],
          vecthree: ["\\begin{bmatrix} #1\\\\ #2\\\\ #3 \\end{bmatrix}", 3],
          vecthreethree: ["\\begin{bmatrix} #1 & #2 & #3\\\\ #4 & #5 & #6\\\\ #7 & #8 & #9 \\end{bmatrix}", 9],
          cameramatrix: ["#1 = \\begin{bmatrix} f_x & 0 & c_x\\\\ 0 & f_y & c_y\\\\ 0 & 0 & 1 \\end{bmatrix}", 1],
          distcoeffs: ["(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \\tau_x, \\tau_y]]]]) \\text{ of 4, 5, 8, 12 or 14 elements}"],
          distcoeffsfisheye: ["(k_1, k_2, k_3, k_4)"],
          hdotsfor: ["\\dots", 1],
          mathbbm: ["\\mathbb{#1}", 1],
          bordermatrix: ["\\matrix{#1}", 1]
      }
  }
}
);
//]]>
</script><script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js"></script>
<link href="../../doxygen.css" rel="stylesheet" type="text/css" />
<link href="../../stylesheet.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<!--#include virtual="/google-search.html"-->
<table cellspacing="0" cellpadding="0">
 <tbody>
 <tr style="height: 56px;">
  <td id="projectlogo"><img alt="Logo" src="../../opencv-logo-small.png"/></td>
  <td style="padding-left: 0.5em;">
   <div id="projectname">OpenCV
   &#160;<span id="projectnumber">4.5.2</span>
   </div>
   <div id="projectbrief">Open Source Computer Vision</div>
  </td>
 </tr>
 </tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.13 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "../../search",false,'Search');
</script>
<script type="text/javascript" src="../../menudata.js"></script>
<script type="text/javascript" src="../../menu.js"></script>
<script type="text/javascript">
$(function() {
  initMenu('../../',true,false,'search.php','Search');
  $(document).ready(function() { init_search(); });
});
</script>
<div id="main-nav"></div>
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
     onmouseover="return searchBox.OnSearchSelectShow()"
     onmouseout="return searchBox.OnSearchSelectHide()"
     onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>

<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0" 
        name="MSearchResults" id="MSearchResults">
</iframe>
</div>

<div id="nav-path" class="navpath">
  <ul>
<li class="navelem"><a class="el" href="../../d9/df8/tutorial_root.html">OpenCV Tutorials</a></li><li class="navelem"><a class="el" href="../../de/d3d/tutorial_table_of_content_app.html">Application utils (highgui, imgcodecs, videoio modules)</a></li>  </ul>
</div>
</div><!-- top -->
<div class="header">
  <div class="headertitle">
<div class="title">Using Orbbec Astra 3D cameras </div>  </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><p><b>Prev Tutorial:</b> <a class="el" href="../../d7/d6f/tutorial_kinect_openni.html">Using Kinect and other OpenNI compatible depth sensors</a></p>
<p><b>Next Tutorial:</b> <a class="el" href="../../db/d08/tutorial_intelperc.html">Using Creative Senz3D and other Intel RealSense SDK compatible depth sensors</a></p>
<h3>Introduction</h3>
<p>This tutorial is devoted to the Astra Series of Orbbec 3D cameras (<a href="https://orbbec3d.com/product-astra-pro/">https://orbbec3d.com/product-astra-pro/</a>). These cameras have a depth sensor in addition to a common color sensor. The depth sensor can be read using the open source OpenNI API through the <a class="el" href="../../d8/dfe/classcv_1_1VideoCapture.html">cv::VideoCapture</a> class. The video stream is provided through the regular camera interface.</p>
<h3>Installation Instructions</h3>
<p>In order to use the Astra camera's depth sensor with OpenCV, complete the following steps:</p>
<ol type="1">
<li>Download the latest version of the Orbbec OpenNI SDK from <a href="https://orbbec3d.com/develop/">https://orbbec3d.com/develop/</a>. Unzip the archive, choose the build matching your operating system and follow the installation steps provided in the Readme file. For instance, if you use 64-bit GNU/Linux, run: <div class="fragment"><div class="line">$ cd Linux/OpenNI-Linux-x64-2.3.0.63/</div><div class="line">$ sudo ./install.sh</div></div><!-- fragment --> When the installation is done, replug your device for the udev rules to take effect. The camera should now work as a general camera device. Note that your current user must belong to the <code>video</code> group to have access to the camera. Also, make sure to source the <code>OpenNIDevEnvironment</code> file: <div class="fragment"><div class="line">$ source OpenNIDevEnvironment</div></div><!-- fragment --></li>
<li>Run the following commands to verify that the OpenNI library and header files can be found. You should see output similar to this in your terminal: <div class="fragment"><div class="line">$ echo $OPENNI2_INCLUDE</div><div class="line">/home/user/OpenNI_2.3.0.63/Linux/OpenNI-Linux-x64-2.3.0.63/Include</div><div class="line">$ echo $OPENNI2_REDIST</div><div class="line">/home/user/OpenNI_2.3.0.63/Linux/OpenNI-Linux-x64-2.3.0.63/Redist</div></div><!-- fragment --> If the two variables above are empty, you need to source <code>OpenNIDevEnvironment</code> again. Now you can configure OpenCV with OpenNI support by setting the <code>WITH_OPENNI2</code> flag in CMake. You may also want to enable the <code>BUILD_EXAMPLES</code> flag to get a code sample working with your Astra camera. Run the following commands in the directory containing the OpenCV source code: <div class="fragment"><div class="line">$ mkdir build</div><div class="line">$ cd build</div><div class="line">$ cmake -DWITH_OPENNI2=ON ..</div></div><!-- fragment --> If the OpenNI library is found, OpenCV will be built with OpenNI2 support. 
You can see the status of OpenNI2 support in the CMake log: <div class="fragment"><div class="line">--   Video I/O:</div><div class="line">--     DC1394:                      YES (2.2.6)</div><div class="line">--     FFMPEG:                      YES</div><div class="line">--       avcodec:                   YES (58.91.100)</div><div class="line">--       avformat:                  YES (58.45.100)</div><div class="line">--       avutil:                    YES (56.51.100)</div><div class="line">--       swscale:                   YES (5.7.100)</div><div class="line">--       avresample:                NO</div><div class="line">--     GStreamer:                   YES (1.18.1)</div><div class="line">--     OpenNI2:                     YES (2.3.0)</div><div class="line">--     v4l/v4l2:                    YES (linux/videodev2.h)</div></div><!-- fragment --></li>
<li>Build OpenCV: <div class="fragment"><div class="line">$ make</div></div><!-- fragment --></li>
</ol>
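<p>Whether the environment was sourced can also be checked from code before any capture is opened. A minimal sketch (plain C++; <code>openniEnvSourced</code> is a hypothetical helper mirroring the <code>echo</code> checks in step 2, not part of the OpenCV sample):</p>

```cpp
#include <cstdlib>

// Returns true when the environment variables exported by the
// OpenNIDevEnvironment script are visible to the current process.
bool openniEnvSourced()
{
    const char* inc = std::getenv("OPENNI2_INCLUDE");
    const char* redist = std::getenv("OPENNI2_REDIST");
    return inc && *inc && redist && *redist;
}

// Typical use at program start:
//     if (!openniEnvSourced())
//         /* report: "source OpenNIDevEnvironment first" and exit */;
```

<p>Failing fast with a clear message here is friendlier than letting the OpenNI backend fail to load later with a less obvious error.</p>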
<h3>Code</h3>
<p>The Astra Pro camera has two sensors &ndash; a depth sensor and a color sensor. The depth sensor can be read using the OpenNI interface through the <a class="el" href="../../d8/dfe/classcv_1_1VideoCapture.html">cv::VideoCapture</a> class. The color video stream is not available through the OpenNI API and is only provided via the regular camera interface. So, to get both depth and color frames, two <a class="el" href="../../d8/dfe/classcv_1_1VideoCapture.html">cv::VideoCapture</a> objects should be created:</p>
<div class="fragment"><div class="line"><a name="l00028"></a><span class="lineno">   28</span>&#160;    <span class="comment">// Open depth stream</span></div><div class="line"><a name="l00029"></a><span class="lineno">   29</span>&#160;    VideoCapture depthStream(<a class="code" href="../../d4/d15/group__videoio__flags__base.html#gga023786be1ee68a9105bf2e48c700294da9ce2b4f360d124a676a4dd320d23e6c8">CAP_OPENNI2_ASTRA</a>);</div><div class="line"><a name="l00030"></a><span class="lineno">   30</span>&#160;    <span class="comment">// Open color stream</span></div><div class="line"><a name="l00031"></a><span class="lineno">   31</span>&#160;    VideoCapture colorStream(0, <a class="code" href="../../d4/d15/group__videoio__flags__base.html#gga023786be1ee68a9105bf2e48c700294da78d2234242b9be9c53196be7c2bb2537">CAP_V4L2</a>);</div><div class="ttc" id="group__videoio__flags__base_html_gga023786be1ee68a9105bf2e48c700294da9ce2b4f360d124a676a4dd320d23e6c8"><div class="ttname"><a href="../../d4/d15/group__videoio__flags__base.html#gga023786be1ee68a9105bf2e48c700294da9ce2b4f360d124a676a4dd320d23e6c8">cv::CAP_OPENNI2_ASTRA</a></div><div class="ttdoc">OpenNI2 (for Orbbec Astra) </div><div class="ttdef"><b>Definition:</b> videoio.hpp:116</div></div>
<div class="ttc" id="group__videoio__flags__base_html_gga023786be1ee68a9105bf2e48c700294da78d2234242b9be9c53196be7c2bb2537"><div class="ttname"><a href="../../d4/d15/group__videoio__flags__base.html#gga023786be1ee68a9105bf2e48c700294da78d2234242b9be9c53196be7c2bb2537">cv::CAP_V4L2</a></div><div class="ttdoc">Same as CAP_V4L. </div><div class="ttdef"><b>Definition:</b> videoio.hpp:94</div></div>
</div><!-- fragment --><p> The first object will use the OpenNI2 API to retrieve depth data. The second one uses the Video4Linux2 interface to access the color sensor. Note that the example above assumes that the Astra camera is the first camera in the system. If you have more than one camera connected, you may need to explicitly set the proper camera number.</p>
<p>Before using the created VideoCapture objects you may want to set up stream parameters by setting objects' properties. The most important parameters are frame width, frame height and fps. For this example, we’ll configure width and height of both streams to VGA resolution, which is the maximum resolution available for both sensors, and we’d like both stream parameters to be the same for easier color-to-depth data registration:</p>
<div class="fragment"><div class="line"><a name="l00049"></a><span class="lineno">   49</span>&#160;    <span class="comment">// Set color and depth stream parameters</span></div><div class="line"><a name="l00050"></a><span class="lineno">   50</span>&#160;    colorStream.set(<a class="code" href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704dab26d2ba37086662261148e9fe93eecad">CAP_PROP_FRAME_WIDTH</a>,  640);</div><div class="line"><a name="l00051"></a><span class="lineno">   51</span>&#160;    colorStream.set(<a class="code" href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704dad8b57083fd9bd58e0f94e68a54b42b7e">CAP_PROP_FRAME_HEIGHT</a>, 480);</div><div class="line"><a name="l00052"></a><span class="lineno">   52</span>&#160;    depthStream.set(<a class="code" href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704dab26d2ba37086662261148e9fe93eecad">CAP_PROP_FRAME_WIDTH</a>,  640);</div><div class="line"><a name="l00053"></a><span class="lineno">   53</span>&#160;    depthStream.set(<a class="code" href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704dad8b57083fd9bd58e0f94e68a54b42b7e">CAP_PROP_FRAME_HEIGHT</a>, 480);</div><div class="line"><a name="l00054"></a><span class="lineno">   54</span>&#160;    depthStream.set(<a class="code" href="../../dc/dfc/group__videoio__flags__others.html#gga4a5821d9216a2a8593cc349cc7fdf966a4d176582d7510d1dd24816916134178f">CAP_PROP_OPENNI2_MIRROR</a>, 0);</div><div class="ttc" id="group__videoio__flags__others_html_gga4a5821d9216a2a8593cc349cc7fdf966a4d176582d7510d1dd24816916134178f"><div class="ttname"><a href="../../dc/dfc/group__videoio__flags__others.html#gga4a5821d9216a2a8593cc349cc7fdf966a4d176582d7510d1dd24816916134178f">cv::CAP_PROP_OPENNI2_MIRROR</a></div><div class="ttdef"><b>Definition:</b> videoio.hpp:284</div></div>
<div class="ttc" id="group__videoio__flags__base_html_ggaeb8dd9c89c10a5c63c139bf7c4f5704dab26d2ba37086662261148e9fe93eecad"><div class="ttname"><a href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704dab26d2ba37086662261148e9fe93eecad">cv::CAP_PROP_FRAME_WIDTH</a></div><div class="ttdoc">Width of the frames in the video stream. </div><div class="ttdef"><b>Definition:</b> videoio.hpp:138</div></div>
<div class="ttc" id="group__videoio__flags__base_html_ggaeb8dd9c89c10a5c63c139bf7c4f5704dad8b57083fd9bd58e0f94e68a54b42b7e"><div class="ttname"><a href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704dad8b57083fd9bd58e0f94e68a54b42b7e">cv::CAP_PROP_FRAME_HEIGHT</a></div><div class="ttdoc">Height of the frames in the video stream. </div><div class="ttdef"><b>Definition:</b> videoio.hpp:139</div></div>
</div><!-- fragment --><p> To set and retrieve properties of the sensor data generators, use the <a class="el" href="../../d8/dfe/classcv_1_1VideoCapture.html#a8c6d8c2d37505b5ca61ffd4bb54e9a7c">cv::VideoCapture::set</a> and <a class="el" href="../../d8/dfe/classcv_1_1VideoCapture.html#aa6480e6972ef4c00d74814ec841a2939">cv::VideoCapture::get</a> methods respectively, e.g.:</p>
<div class="fragment"><div class="line"><a name="l00063"></a><span class="lineno">   63</span>&#160;    <span class="comment">// Print depth stream parameters</span></div><div class="line"><a name="l00064"></a><span class="lineno">   64</span>&#160;    cout &lt;&lt; <span class="stringliteral">&quot;Depth stream: &quot;</span></div><div class="line"><a name="l00065"></a><span class="lineno">   65</span>&#160;         &lt;&lt; depthStream.get(<a class="code" href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704dab26d2ba37086662261148e9fe93eecad">CAP_PROP_FRAME_WIDTH</a>) &lt;&lt; <span class="stringliteral">&quot;x&quot;</span> &lt;&lt; depthStream.get(<a class="code" href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704dad8b57083fd9bd58e0f94e68a54b42b7e">CAP_PROP_FRAME_HEIGHT</a>)</div><div class="line"><a name="l00066"></a><span class="lineno">   66</span>&#160;         &lt;&lt; <span class="stringliteral">&quot; @&quot;</span> &lt;&lt; depthStream.get(<a class="code" href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704daf01bc92359d2abc9e6eeb5cbe36d9af2">CAP_PROP_FPS</a>) &lt;&lt; <span class="stringliteral">&quot; fps&quot;</span> &lt;&lt; endl;</div><div class="ttc" id="group__videoio__flags__base_html_ggaeb8dd9c89c10a5c63c139bf7c4f5704dab26d2ba37086662261148e9fe93eecad"><div class="ttname"><a href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704dab26d2ba37086662261148e9fe93eecad">cv::CAP_PROP_FRAME_WIDTH</a></div><div class="ttdoc">Width of the frames in the video stream. </div><div class="ttdef"><b>Definition:</b> videoio.hpp:138</div></div>
<div class="ttc" id="group__videoio__flags__base_html_ggaeb8dd9c89c10a5c63c139bf7c4f5704dad8b57083fd9bd58e0f94e68a54b42b7e"><div class="ttname"><a href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704dad8b57083fd9bd58e0f94e68a54b42b7e">cv::CAP_PROP_FRAME_HEIGHT</a></div><div class="ttdoc">Height of the frames in the video stream. </div><div class="ttdef"><b>Definition:</b> videoio.hpp:139</div></div>
<div class="ttc" id="group__videoio__flags__base_html_ggaeb8dd9c89c10a5c63c139bf7c4f5704daf01bc92359d2abc9e6eeb5cbe36d9af2"><div class="ttname"><a href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704daf01bc92359d2abc9e6eeb5cbe36d9af2">cv::CAP_PROP_FPS</a></div><div class="ttdoc">Frame rate. </div><div class="ttdef"><b>Definition:</b> videoio.hpp:140</div></div>
</div><!-- fragment --><p> The following camera properties, available through the OpenNI interface, are supported for the depth generator:</p>
<ul>
<li><a class="el" href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704dab26d2ba37086662261148e9fe93eecad">cv::CAP_PROP_FRAME_WIDTH</a> &ndash; Frame width in pixels.</li>
<li><a class="el" href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704dad8b57083fd9bd58e0f94e68a54b42b7e">cv::CAP_PROP_FRAME_HEIGHT</a> &ndash; Frame height in pixels.</li>
<li><a class="el" href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704daf01bc92359d2abc9e6eeb5cbe36d9af2">cv::CAP_PROP_FPS</a> &ndash; Frame rate in FPS.</li>
<li><a class="el" href="../../dc/dfc/group__videoio__flags__others.html#gga4a5821d9216a2a8593cc349cc7fdf966a051eab1e624d79b5b3b297883eaab23f">cv::CAP_PROP_OPENNI_REGISTRATION</a> &ndash; Flag that enables registration of the depth map to the color image by changing the depth generator's viewpoint (if the flag is "on"), or restores the normal viewpoint (if the flag is "off"). When registration is enabled the resulting images are pixel-aligned, which means that every pixel in the color image corresponds to a pixel in the depth image.</li>
<li><p class="startli"><a class="el" href="../../dc/dfc/group__videoio__flags__others.html#gga4a5821d9216a2a8593cc349cc7fdf966a4d176582d7510d1dd24816916134178f">cv::CAP_PROP_OPENNI2_MIRROR</a> &ndash; Flag to enable or disable mirroring for this stream. Set to 0 to disable mirroring.</p>
<p class="startli">Next properties are available for getting only:</p>
</li>
<li><a class="el" href="../../dc/dfc/group__videoio__flags__others.html#gga4a5821d9216a2a8593cc349cc7fdf966af1babc6cc545ca6774f1690b9c32a036">cv::CAP_PROP_OPENNI_FRAME_MAX_DEPTH</a> &ndash; A maximum supported depth of the camera in mm.</li>
<li><a class="el" href="../../dc/dfc/group__videoio__flags__others.html#gga4a5821d9216a2a8593cc349cc7fdf966a676d008cfa53522d7d118d34ff66336b">cv::CAP_PROP_OPENNI_BASELINE</a> &ndash; Baseline value in mm.</li>
</ul>
<p>After the VideoCapture objects have been set up, you can start reading frames from them.</p>
<dl class="section note"><dt>Note</dt><dd>OpenCV's VideoCapture provides a synchronous API, so frames must be grabbed in separate threads to avoid one stream blocking while another is being read. VideoCapture is not a thread-safe class, so care must be taken to avoid deadlocks and data races.</dd></dl>
<p>As there are two video sources that should be read simultaneously, it’s necessary to create two threads to avoid blocking. The example implementation below grabs frames from each sensor in its own thread and stores them, together with their timestamps, in a list:</p>
<div class="fragment"><div class="line"><a name="l00070"></a><span class="lineno">   70</span>&#160;    <span class="comment">// Create two lists to store frames</span></div><div class="line"><a name="l00071"></a><span class="lineno">   71</span>&#160;    std::list&lt;Frame&gt; depthFrames, colorFrames;</div><div class="line"><a name="l00072"></a><span class="lineno">   72</span>&#160;    <span class="keyword">const</span> std::size_t maxFrames = 64;</div><div class="line"><a name="l00073"></a><span class="lineno">   73</span>&#160;</div><div class="line"><a name="l00074"></a><span class="lineno">   74</span>&#160;    <span class="comment">// Synchronization objects</span></div><div class="line"><a name="l00075"></a><span class="lineno">   75</span>&#160;    std::mutex mtx;</div><div class="line"><a name="l00076"></a><span class="lineno">   76</span>&#160;    std::condition_variable dataReady;</div><div class="line"><a name="l00077"></a><span class="lineno">   77</span>&#160;    std::atomic&lt;bool&gt; isFinish;</div><div class="line"><a name="l00078"></a><span class="lineno">   78</span>&#160;</div><div class="line"><a name="l00079"></a><span class="lineno">   79</span>&#160;    isFinish = <span class="keyword">false</span>;</div><div class="line"><a name="l00080"></a><span class="lineno">   80</span>&#160;</div><div class="line"><a name="l00081"></a><span class="lineno">   81</span>&#160;    <span class="comment">// Start depth reading thread</span></div><div class="line"><a name="l00082"></a><span class="lineno">   82</span>&#160;    std::thread depthReader([&amp;]</div><div class="line"><a name="l00083"></a><span class="lineno">   83</span>&#160;    {</div><div class="line"><a name="l00084"></a><span class="lineno">   84</span>&#160;        <span class="keywordflow">while</span> (!isFinish)</div><div class="line"><a name="l00085"></a><span class="lineno">   85</span>&#160;        {</div><div class="line"><a name="l00086"></a><span class="lineno">   
86</span>&#160;            <span class="comment">// Grab and decode new frame</span></div><div class="line"><a name="l00087"></a><span class="lineno">   87</span>&#160;            <span class="keywordflow">if</span> (depthStream.grab())</div><div class="line"><a name="l00088"></a><span class="lineno">   88</span>&#160;            {</div><div class="line"><a name="l00089"></a><span class="lineno">   89</span>&#160;                Frame f;</div><div class="line"><a name="l00090"></a><span class="lineno">   90</span>&#160;                f.timestamp = <a class="code" href="../../db/de0/group__core__utils.html#gae73f58000611a1af25dd36d496bf4487">cv::getTickCount</a>();</div><div class="line"><a name="l00091"></a><span class="lineno">   91</span>&#160;                depthStream.retrieve(f.frame, <a class="code" href="../../dc/dfc/group__videoio__flags__others.html#ggaeb44e5d729d41902496454c281d5adbfa2278fe906c8f7508e6e7a92f46888661">CAP_OPENNI_DEPTH_MAP</a>);</div><div class="line"><a name="l00092"></a><span class="lineno">   92</span>&#160;                <span class="keywordflow">if</span> (f.frame.empty())</div><div class="line"><a name="l00093"></a><span class="lineno">   93</span>&#160;                {</div><div class="line"><a name="l00094"></a><span class="lineno">   94</span>&#160;                    cerr &lt;&lt; <span class="stringliteral">&quot;ERROR: Failed to decode frame from depth stream&quot;</span> &lt;&lt; endl;</div><div class="line"><a name="l00095"></a><span class="lineno">   95</span>&#160;                    <span class="keywordflow">break</span>;</div><div class="line"><a name="l00096"></a><span class="lineno">   96</span>&#160;                }</div><div class="line"><a name="l00097"></a><span class="lineno">   97</span>&#160;</div><div class="line"><a name="l00098"></a><span class="lineno">   98</span>&#160;                {</div><div class="line"><a name="l00099"></a><span class="lineno">   99</span>&#160;                    
std::lock_guard&lt;std::mutex&gt; lk(mtx);</div><div class="line"><a name="l00100"></a><span class="lineno">  100</span>&#160;                    <span class="keywordflow">if</span> (depthFrames.size() &gt;= maxFrames)</div><div class="line"><a name="l00101"></a><span class="lineno">  101</span>&#160;                        depthFrames.pop_front();</div><div class="line"><a name="l00102"></a><span class="lineno">  102</span>&#160;                    depthFrames.push_back(f);</div><div class="line"><a name="l00103"></a><span class="lineno">  103</span>&#160;                }</div><div class="line"><a name="l00104"></a><span class="lineno">  104</span>&#160;                dataReady.notify_one();</div><div class="line"><a name="l00105"></a><span class="lineno">  105</span>&#160;            }</div><div class="line"><a name="l00106"></a><span class="lineno">  106</span>&#160;        }</div><div class="line"><a name="l00107"></a><span class="lineno">  107</span>&#160;    });</div><div class="line"><a name="l00108"></a><span class="lineno">  108</span>&#160;</div><div class="line"><a name="l00109"></a><span class="lineno">  109</span>&#160;    <span class="comment">// Start color reading thread</span></div><div class="line"><a name="l00110"></a><span class="lineno">  110</span>&#160;    std::thread colorReader([&amp;]</div><div class="line"><a name="l00111"></a><span class="lineno">  111</span>&#160;    {</div><div class="line"><a name="l00112"></a><span class="lineno">  112</span>&#160;        <span class="keywordflow">while</span> (!isFinish)</div><div class="line"><a name="l00113"></a><span class="lineno">  113</span>&#160;        {</div><div class="line"><a name="l00114"></a><span class="lineno">  114</span>&#160;            <span class="comment">// Grab and decode new frame</span></div><div class="line"><a name="l00115"></a><span class="lineno">  115</span>&#160;            <span class="keywordflow">if</span> (colorStream.grab())</div><div class="line"><a 
name="l00116"></a><span class="lineno">  116</span>&#160;            {</div><div class="line"><a name="l00117"></a><span class="lineno">  117</span>&#160;                Frame f;</div><div class="line"><a name="l00118"></a><span class="lineno">  118</span>&#160;                f.timestamp = <a class="code" href="../../db/de0/group__core__utils.html#gae73f58000611a1af25dd36d496bf4487">cv::getTickCount</a>();</div><div class="line"><a name="l00119"></a><span class="lineno">  119</span>&#160;                colorStream.retrieve(f.frame);</div><div class="line"><a name="l00120"></a><span class="lineno">  120</span>&#160;                <span class="keywordflow">if</span> (f.frame.empty())</div><div class="line"><a name="l00121"></a><span class="lineno">  121</span>&#160;                {</div><div class="line"><a name="l00122"></a><span class="lineno">  122</span>&#160;                    cerr &lt;&lt; <span class="stringliteral">&quot;ERROR: Failed to decode frame from color stream&quot;</span> &lt;&lt; endl;</div><div class="line"><a name="l00123"></a><span class="lineno">  123</span>&#160;                    <span class="keywordflow">break</span>;</div><div class="line"><a name="l00124"></a><span class="lineno">  124</span>&#160;                }</div><div class="line"><a name="l00125"></a><span class="lineno">  125</span>&#160;</div><div class="line"><a name="l00126"></a><span class="lineno">  126</span>&#160;                {</div><div class="line"><a name="l00127"></a><span class="lineno">  127</span>&#160;                    std::lock_guard&lt;std::mutex&gt; lk(mtx);</div><div class="line"><a name="l00128"></a><span class="lineno">  128</span>&#160;                    <span class="keywordflow">if</span> (colorFrames.size() &gt;= maxFrames)</div><div class="line"><a name="l00129"></a><span class="lineno">  129</span>&#160;                        colorFrames.pop_front();</div><div class="line"><a name="l00130"></a><span class="lineno">  130</span>&#160;            
        colorFrames.push_back(f);</div><div class="line"><a name="l00131"></a><span class="lineno">  131</span>&#160;                }</div><div class="line"><a name="l00132"></a><span class="lineno">  132</span>&#160;                dataReady.notify_one();</div><div class="line"><a name="l00133"></a><span class="lineno">  133</span>&#160;            }</div><div class="line"><a name="l00134"></a><span class="lineno">  134</span>&#160;        }</div><div class="line"><a name="l00135"></a><span class="lineno">  135</span>&#160;    });</div><div class="ttc" id="group__videoio__flags__others_html_ggaeb44e5d729d41902496454c281d5adbfa2278fe906c8f7508e6e7a92f46888661"><div class="ttname"><a href="../../dc/dfc/group__videoio__flags__others.html#ggaeb44e5d729d41902496454c281d5adbfa2278fe906c8f7508e6e7a92f46888661">cv::CAP_OPENNI_DEPTH_MAP</a></div><div class="ttdoc">Depth values in mm (CV_16UC1) </div><div class="ttdef"><b>Definition:</b> videoio.hpp:299</div></div>
<div class="ttc" id="group__core__utils_html_gae73f58000611a1af25dd36d496bf4487"><div class="ttname"><a href="../../db/de0/group__core__utils.html#gae73f58000611a1af25dd36d496bf4487">cv::getTickCount</a></div><div class="ttdeci">int64 getTickCount()</div><div class="ttdoc">Returns the number of ticks. </div></div>
</div><!-- fragment --><p> VideoCapture can retrieve the following data:</p>
<ol type="1">
<li>data from the depth generator:<ul>
<li><a class="el" href="../../dc/dfc/group__videoio__flags__others.html#ggaeb44e5d729d41902496454c281d5adbfa2278fe906c8f7508e6e7a92f46888661">cv::CAP_OPENNI_DEPTH_MAP</a> - depth values in mm (CV_16UC1)</li>
<li><a class="el" href="../../dc/dfc/group__videoio__flags__others.html#ggaeb44e5d729d41902496454c281d5adbfa9025b0faf50e561b874405f7013f43dd">cv::CAP_OPENNI_POINT_CLOUD_MAP</a> - XYZ in meters (CV_32FC3)</li>
<li><a class="el" href="../../dc/dfc/group__videoio__flags__others.html#ggaeb44e5d729d41902496454c281d5adbfa549229ff705c2c4d637712b93041e06f">cv::CAP_OPENNI_DISPARITY_MAP</a> - disparity in pixels (CV_8UC1)</li>
<li><a class="el" href="../../dc/dfc/group__videoio__flags__others.html#ggaeb44e5d729d41902496454c281d5adbfa471838d1ee511e9555d07abe076db5b1">cv::CAP_OPENNI_DISPARITY_MAP_32F</a> - disparity in pixels (CV_32FC1)</li>
<li><a class="el" href="../../dc/dfc/group__videoio__flags__others.html#ggaeb44e5d729d41902496454c281d5adbfa4cc33615f4869c0a2eda62587b945fb9">cv::CAP_OPENNI_VALID_DEPTH_MASK</a> - mask of valid pixels (not occluded, not shaded, etc.) (CV_8UC1)</li>
</ul>
</li>
<li>data from the color sensor &ndash; a regular BGR image (CV_8UC3).</li>
</ol>
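<p>The depth map arrives as 16-bit millimetre values, too wide a range to display directly, so it is usually rescaled to 8 bits for visualization. A per-pixel sketch of that conversion (plain C++, not part of the sample; the 2500&nbsp;mm range is an assumption matching the <code>convertTo(d8, CV_8U, 255.0 / 2500)</code> call used later, which additionally rounds where this sketch truncates):</p>

```cpp
#include <algorithm>
#include <cstdint>

// Map a CV_16UC1 depth value in millimetres to an 8-bit intensity,
// saturating at maxRangeMm. This is essentially what
// convertTo(d8, CV_8U, 255.0 / maxRangeMm) does for each pixel.
inline std::uint8_t depthToGray(std::uint16_t depthMm, double maxRangeMm = 2500.0)
{
    double v = depthMm * 255.0 / maxRangeMm;
    return static_cast<std::uint8_t>(std::min(255.0, v));
}
```

<p>Anything at or beyond the chosen range saturates to white; pick the range to match the depths you actually expect in the scene.</p>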
<p>When new data is available, a reading thread notifies the main thread using a condition variable. Frames are stored in ordered lists &ndash; the first frame in each list is the earliest captured, the last one is the latest. Because depth and color frames are read from independent sources, the two video streams may drift out of sync even when both are set up for the same frame rate, so a post-synchronization procedure is applied to combine depth and color frames into pairs. The sample code below demonstrates this procedure:</p>
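<p>The pairing rule at the heart of this procedure can be isolated: two frames are paired when their timestamps differ by no more than half a frame period; otherwise the lagging frame is dropped. Note that the sample converts half a period to ticks as <code>1000000000 / (2 * fps)</code>, which assumes a nanosecond tick; portable code should take the real rate from <code>cv::getTickFrequency()</code>. A standalone sketch (plain C++, hypothetical <code>matchFrames</code> helper):</p>

```cpp
#include <cstdint>

enum class PairAction { DropDepth, DropColor, MakePair };

// Decide what to do with the oldest depth/color frames given their
// timestamps (same clock, in ticks) and the common frame rate.
// A pair is accepted when the timestamps differ by at most half the
// frame period; otherwise the frame that lags behind is dropped.
inline PairAction matchFrames(std::int64_t depthT, std::int64_t colorT,
                              double fps, std::int64_t ticksPerSecond)
{
    const std::int64_t maxTdiff =
        static_cast<std::int64_t>(ticksPerSecond / (2.0 * fps));
    if (depthT + maxTdiff < colorT)
        return PairAction::DropDepth;   // depth frame is too old
    if (colorT + maxTdiff < depthT)
        return PairAction::DropColor;   // color frame is too old
    return PairAction::MakePair;
}
```

<p>Dropping the older frame (rather than the newer one) lets the loop converge on a matching pair after at most a few iterations, which is exactly what the <code>pop_front()</code> calls in the sample below achieve.</p>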
<div class="fragment"><div class="line"><a name="l00139"></a><span class="lineno">  139</span>&#160;    <span class="comment">// Pair depth and color frames</span></div><div class="line"><a name="l00140"></a><span class="lineno">  140</span>&#160;    <span class="keywordflow">while</span> (!isFinish)</div><div class="line"><a name="l00141"></a><span class="lineno">  141</span>&#160;    {</div><div class="line"><a name="l00142"></a><span class="lineno">  142</span>&#160;        std::unique_lock&lt;std::mutex&gt; lk(mtx);</div><div class="line"><a name="l00143"></a><span class="lineno">  143</span>&#160;        <span class="keywordflow">while</span> (!isFinish &amp;&amp; (depthFrames.empty() || colorFrames.empty()))</div><div class="line"><a name="l00144"></a><span class="lineno">  144</span>&#160;            dataReady.wait(lk);</div><div class="line"><a name="l00145"></a><span class="lineno">  145</span>&#160;</div><div class="line"><a name="l00146"></a><span class="lineno">  146</span>&#160;        <span class="keywordflow">while</span> (!depthFrames.empty() &amp;&amp; !colorFrames.empty())</div><div class="line"><a name="l00147"></a><span class="lineno">  147</span>&#160;        {</div><div class="line"><a name="l00148"></a><span class="lineno">  148</span>&#160;            <span class="keywordflow">if</span> (!lk.owns_lock())</div><div class="line"><a name="l00149"></a><span class="lineno">  149</span>&#160;                lk.lock();</div><div class="line"><a name="l00150"></a><span class="lineno">  150</span>&#160;</div><div class="line"><a name="l00151"></a><span class="lineno">  151</span>&#160;            <span class="comment">// Get a frame from the list</span></div><div class="line"><a name="l00152"></a><span class="lineno">  152</span>&#160;            Frame depthFrame = depthFrames.front();</div><div class="line"><a name="l00153"></a><span class="lineno">  153</span>&#160;            <a class="code" 
href="../../d1/d1b/group__core__hal__interface.html#ga7cde0074dfd288f2d70c0e035dacb28a">int64</a> depthT = depthFrame.timestamp;</div><div class="line"><a name="l00154"></a><span class="lineno">  154</span>&#160;</div><div class="line"><a name="l00155"></a><span class="lineno">  155</span>&#160;            <span class="comment">// Get a frame from the list</span></div><div class="line"><a name="l00156"></a><span class="lineno">  156</span>&#160;            Frame colorFrame = colorFrames.front();</div><div class="line"><a name="l00157"></a><span class="lineno">  157</span>&#160;            <a class="code" href="../../d1/d1b/group__core__hal__interface.html#ga7cde0074dfd288f2d70c0e035dacb28a">int64</a> colorT = colorFrame.timestamp;</div><div class="line"><a name="l00158"></a><span class="lineno">  158</span>&#160;</div><div class="line"><a name="l00159"></a><span class="lineno">  159</span>&#160;            <span class="comment">// Half of frame period is a maximum time diff between frames</span></div><div class="line"><a name="l00160"></a><span class="lineno">  160</span>&#160;            <span class="keyword">const</span> <a class="code" href="../../d1/d1b/group__core__hal__interface.html#ga7cde0074dfd288f2d70c0e035dacb28a">int64</a> maxTdiff = <a class="code" href="../../d1/d1b/group__core__hal__interface.html#ga7cde0074dfd288f2d70c0e035dacb28a">int64</a>(1000000000 / (2 * colorStream.get(<a class="code" href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704daf01bc92359d2abc9e6eeb5cbe36d9af2">CAP_PROP_FPS</a>)));</div><div class="line"><a name="l00161"></a><span class="lineno">  161</span>&#160;            <span class="keywordflow">if</span> (depthT + maxTdiff &lt; colorT)</div><div class="line"><a name="l00162"></a><span class="lineno">  162</span>&#160;            {</div><div class="line"><a name="l00163"></a><span class="lineno">  163</span>&#160;                depthFrames.pop_front();</div><div class="line"><a 
name="l00164"></a><span class="lineno">  164</span>&#160;                <span class="keywordflow">continue</span>;</div><div class="line"><a name="l00165"></a><span class="lineno">  165</span>&#160;            }</div><div class="line"><a name="l00166"></a><span class="lineno">  166</span>&#160;            <span class="keywordflow">else</span> <span class="keywordflow">if</span> (colorT + maxTdiff &lt; depthT)</div><div class="line"><a name="l00167"></a><span class="lineno">  167</span>&#160;            {</div><div class="line"><a name="l00168"></a><span class="lineno">  168</span>&#160;                colorFrames.pop_front();</div><div class="line"><a name="l00169"></a><span class="lineno">  169</span>&#160;                <span class="keywordflow">continue</span>;</div><div class="line"><a name="l00170"></a><span class="lineno">  170</span>&#160;            }</div><div class="line"><a name="l00171"></a><span class="lineno">  171</span>&#160;            depthFrames.pop_front();</div><div class="line"><a name="l00172"></a><span class="lineno">  172</span>&#160;            colorFrames.pop_front();</div><div class="line"><a name="l00173"></a><span class="lineno">  173</span>&#160;            lk.unlock();</div><div class="line"><a name="l00174"></a><span class="lineno">  174</span>&#160;</div><div class="line"><a name="l00176"></a><span class="lineno">  176</span>&#160;            <span class="comment">// Show depth frame</span></div><div class="line"><a name="l00177"></a><span class="lineno">  177</span>&#160;            Mat d8, dColor;</div><div class="line"><a name="l00178"></a><span class="lineno">  178</span>&#160;            depthFrame.frame.convertTo(d8, <a class="code" href="../../d1/d1b/group__core__hal__interface.html#ga32b18d904ee2b1731a9416a8eef67d06">CV_8U</a>, 255.0 / 2500);</div><div class="line"><a name="l00179"></a><span class="lineno">  179</span>&#160;            <a class="code" 
href="../../d3/d50/group__imgproc__colormap.html#gadf478a5e5ff49d8aa24e726ea6f65d15">applyColorMap</a>(d8, dColor, <a class="code" href="../../d3/d50/group__imgproc__colormap.html#gga9a805d8262bcbe273f16be9ea2055a65a8c4210cb135b2555ba95e2db97f65ace">COLORMAP_OCEAN</a>);</div><div class="line"><a name="l00180"></a><span class="lineno">  180</span>&#160;            <a class="code" href="../../d7/dfc/group__highgui.html#ga453d42fe4cb60e5723281a89973ee563">imshow</a>(<span class="stringliteral">&quot;Depth (colored)&quot;</span>, dColor);</div><div class="line"><a name="l00181"></a><span class="lineno">  181</span>&#160;</div><div class="line"><a name="l00182"></a><span class="lineno">  182</span>&#160;            <span class="comment">// Show color frame</span></div><div class="line"><a name="l00183"></a><span class="lineno">  183</span>&#160;            <a class="code" href="../../d7/dfc/group__highgui.html#ga453d42fe4cb60e5723281a89973ee563">imshow</a>(<span class="stringliteral">&quot;Color&quot;</span>, colorFrame.frame);</div><div class="line"><a name="l00185"></a><span class="lineno">  185</span>&#160;</div><div class="line"><a name="l00186"></a><span class="lineno">  186</span>&#160;            <span class="comment">// Exit on Esc key press</span></div><div class="line"><a name="l00187"></a><span class="lineno">  187</span>&#160;            <span class="keywordtype">int</span> key = <a class="code" href="../../d7/dfc/group__highgui.html#ga5628525ad33f52eab17feebcfba38bd7">waitKey</a>(1);</div><div class="line"><a name="l00188"></a><span class="lineno">  188</span>&#160;            <span class="keywordflow">if</span> (key == 27) <span class="comment">// ESC</span></div><div class="line"><a name="l00189"></a><span class="lineno">  189</span>&#160;            {</div><div class="line"><a name="l00190"></a><span class="lineno">  190</span>&#160;                isFinish = <span class="keyword">true</span>;</div><div class="line"><a name="l00191"></a><span 
class="lineno">  191</span>&#160;                <span class="keywordflow">break</span>;</div><div class="line"><a name="l00192"></a><span class="lineno">  192</span>&#160;            }</div><div class="line"><a name="l00193"></a><span class="lineno">  193</span>&#160;        }</div><div class="line"><a name="l00194"></a><span class="lineno">  194</span>&#160;    }</div><div class="ttc" id="group__imgproc__colormap_html_gga9a805d8262bcbe273f16be9ea2055a65a8c4210cb135b2555ba95e2db97f65ace"><div class="ttname"><a href="../../d3/d50/group__imgproc__colormap.html#gga9a805d8262bcbe273f16be9ea2055a65a8c4210cb135b2555ba95e2db97f65ace">cv::COLORMAP_OCEAN</a></div><div class="ttdoc">ocean</div><div class="ttdef"><b>Definition:</b> imgproc.hpp:4317</div></div>
<div class="ttc" id="group__core__hal__interface_html_ga32b18d904ee2b1731a9416a8eef67d06"><div class="ttname"><a href="../../d1/d1b/group__core__hal__interface.html#ga32b18d904ee2b1731a9416a8eef67d06">CV_8U</a></div><div class="ttdeci">#define CV_8U</div><div class="ttdef"><b>Definition:</b> interface.h:73</div></div>
<div class="ttc" id="group__highgui_html_ga453d42fe4cb60e5723281a89973ee563"><div class="ttname"><a href="../../d7/dfc/group__highgui.html#ga453d42fe4cb60e5723281a89973ee563">cv::imshow</a></div><div class="ttdeci">void imshow(const String &amp;winname, InputArray mat)</div><div class="ttdoc">Displays an image in the specified window. </div></div>
<div class="ttc" id="group__core__hal__interface_html_ga7cde0074dfd288f2d70c0e035dacb28a"><div class="ttname"><a href="../../d1/d1b/group__core__hal__interface.html#ga7cde0074dfd288f2d70c0e035dacb28a">int64</a></div><div class="ttdeci">int64_t int64</div><div class="ttdef"><b>Definition:</b> interface.h:61</div></div>
<div class="ttc" id="group__imgproc__colormap_html_gadf478a5e5ff49d8aa24e726ea6f65d15"><div class="ttname"><a href="../../d3/d50/group__imgproc__colormap.html#gadf478a5e5ff49d8aa24e726ea6f65d15">cv::applyColorMap</a></div><div class="ttdeci">void applyColorMap(InputArray src, OutputArray dst, int colormap)</div><div class="ttdoc">Applies a GNU Octave/MATLAB equivalent colormap on a given image. </div></div>
<div class="ttc" id="group__videoio__flags__base_html_ggaeb8dd9c89c10a5c63c139bf7c4f5704daf01bc92359d2abc9e6eeb5cbe36d9af2"><div class="ttname"><a href="../../d4/d15/group__videoio__flags__base.html#ggaeb8dd9c89c10a5c63c139bf7c4f5704daf01bc92359d2abc9e6eeb5cbe36d9af2">cv::CAP_PROP_FPS</a></div><div class="ttdoc">Frame rate. </div><div class="ttdef"><b>Definition:</b> videoio.hpp:140</div></div>
<div class="ttc" id="group__highgui_html_ga5628525ad33f52eab17feebcfba38bd7"><div class="ttname"><a href="../../d7/dfc/group__highgui.html#ga5628525ad33f52eab17feebcfba38bd7">cv::waitKey</a></div><div class="ttdeci">int waitKey(int delay=0)</div><div class="ttdoc">Waits for a pressed key. </div></div>
</div><!-- fragment --><p> In the code snippet above, execution blocks until both frame lists contain at least one frame. When new frames arrive, their timestamps are checked &ndash; if they differ by more than half of the frame period, the older of the two frames is dropped. If the timestamps are close enough, the two frames are paired. We then have two frames: one containing color information and the other containing depth information. In the example above the retrieved frames are simply shown with the <a class="el" href="../../d7/dfc/group__highgui.html#ga453d42fe4cb60e5723281a89973ee563" title="Displays an image in the specified window. ">cv::imshow</a> function, but you can insert any other processing code here.</p>
<p>In the sample images below you can see a color frame and a depth frame representing the same scene. In the color frame it is hard to distinguish real plant leaves from leaves painted on the wall, but the depth data makes this distinction easy.</p>
<div class="image">
<img src="../../astra_color.jpg" alt="astra_color.jpg"/>
<div class="caption">
Color frame</div></div>
<div class="image">
<img src="../../astra_depth.png" alt="astra_depth.png"/>
<div class="caption">
Depth frame</div></div>
<p> The complete implementation can be found in <a href="https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/videoio/orbbec_astra/orbbec_astra.cpp">orbbec_astra.cpp</a> in the <code>samples/cpp/tutorial_code/videoio</code> directory. </p>
</div></div><!-- contents -->
<!-- HTML footer for doxygen 1.8.6-->
<!-- start footer part -->
<hr class="footer"/><address class="footer"><small>
Generated on Fri Apr 2 2021 11:36:34 for OpenCV by &#160;<a href="http://www.doxygen.org/index.html">
<img class="footer" src="../../doxygen.png" alt="doxygen"/>
</a> 1.8.13
</small></address>
<script type="text/javascript">
//<![CDATA[
addTutorialsButtons();
//]]>
</script>
</body>
</html>
