This section explains how stereo vision works and how it can be used to find the 3-dimensional structure of surrounding objects. Stereo vision involves capturing two or more images of the same scene from varying positions and viewpoints. These images can be obtained using multiple cameras or by repositioning the same camera.
Let’s frame the problem of finding the 3D structure of objects by first looking at the geometry of image formation. As shown in Figure 1, we have a point P in 3D with x, y, z coordinates. Point P gets projected to the camera’s image plane via the pinhole. This can also be viewed as projecting a 3D point to a 2D image plane.
Now, let’s say we are given this 2D image and the pixel coordinates of point P in this image. We want to find the 3D coordinates of point P. Is this possible? Is point P unique, or are there other 3D points that also map to the same pixel coordinates as point P? The answer is that all 3D points lying on the line joining point P and the pinhole map to the same pixel coordinates in the 2D image plane.
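To make this ambiguity concrete, below is a small Python sketch of pinhole projection. The focal length, principal point, and 3D points are made-up values, and the pinhole is placed at the origin; every point along the ray through the pinhole projects to the same pixel.

```python
def project(x, y, z, f=500.0, o_x=320.0, o_y=240.0):
    """Pinhole projection of a 3D point (x, y, z) to pixel coordinates (u, v)."""
    return f * x / z + o_x, f * y / z + o_y

print(project(0.2, -0.1, 1.0))   # (420.0, 190.0)
print(project(0.4, -0.2, 2.0))   # (420.0, 190.0): a different 3D point on the same ray
```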
We aim to solve the problem of determining the 3D structure of objects. In our problem statement, we can represent an object in 3D as a set of 3D points. Finding the 3D coordinates of each of these points helps us determine the 3D structure of the object.
Figure 1: Image formation using a single camera
Let’s assume we are given the following information:
* the pixel coordinates of point P’s projection in the 2D image,
* the position and orientation of the camera (i.e. of its pinhole and image plane), and
* the internal (intrinsic) parameters of the camera.
With the information provided above, we can find a 3D line that originates from the pixel coordinates of point P (the projection of point P in the image plane), passes through the pinhole, and extends to infinity. Based on the principles of image formation geometry, we can conclude that point P must exist somewhere along this line.
Given 2 lines in 3D, there are three possibilities for their intersection:
* the lines do not intersect at all,
* the lines intersect at exactly one point, or
* the lines are collinear and intersect at infinitely many points.
If both images (with the original and new camera positions) contain point P, we can conclude that the 3D lines must intersect at least once and that the intersection point is point P. Furthermore, the two lines can intersect at infinitely many points only if they are collinear, which happens only when the pinhole at the new camera position lies somewhere on the original 3D line. For all other positions and orientations of the new camera, the two 3D lines intersect at precisely one point, which is where point P lies.
Therefore, using 2 images of the same scene point P, known positions and orientations of the camera locations, and known internal parameters of the camera, we can precisely find where point P lies in the 3D space.
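As an illustration of this idea, below is a rough sketch of generic two-ray triangulation: given the two pinhole positions and the ray directions (all assumed known, with made-up values here), it returns the midpoint of the closest points of the two lines, which coincides with point P when the rays truly intersect. This is a generic geometric construction, not a specific library routine.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest points between rays p1 + t*d1 and p2 + s*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    r = p1 - p2
    b = d1 @ d2
    # Closest-point parameters from minimizing ||(p1 + t*d1) - (p2 + s*d2)||^2
    t = (b * (d2 @ r) - (d1 @ r)) / (1 - b * b)
    s = ((d2 @ r) - b * (d1 @ r)) / (1 - b * b)
    return ((p1 + t * d1) + (p2 + s * d2)) / 2

# Made-up example: two pinholes 7.5 cm apart, rays pointing toward the point (10, 5, 100)
P = np.array([10.0, 5.0, 100.0])
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([7.5, 0.0, 0.0])
print(triangulate(p1, P - p1, p2, P - p2))   # ~[10, 5, 100]
```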
Since there are many possible positions and orientations for the camera locations, we can select a configuration that keeps the math simple and reduces the computation required when running on a computer or an embedded device. One popular and commonly used configuration is shown in Figure 2. It uses 2 cameras, which is equivalent to a single camera capturing 2 images from 2 different locations.
Figure 2: Image formation using 2 cameras
With the above configuration in place, we have the equations below, which map a point in 3D to the 2D image planes of the left and right cameras.

$$u_{left} = f_x \cdot \frac{x}{z} + O_x \quad (1.1)$$

$$v_{left} = f_y \cdot \frac{y}{z} + O_y \quad (1.2)$$

$$u_{right} = f_x \cdot \frac{x - b}{z} + O_x \quad (2.1)$$

$$v_{right} = f_y \cdot \frac{y}{z} + O_y \quad (2.2)$$
Different symbols used in the above equations are defined below:
* \(u_{left}\), \(v_{left}\) refer to the pixel coordinates of point P in the left image
* \(u_{right}\), \(v_{right}\) refer to the pixel coordinates of point P in the right image
* \(f_x\) refers to the focal length (in pixels) in the x direction and \(f_y\) refers to the focal length (in pixels) in the y direction. Strictly speaking, a camera has only one focal length, which is the distance from the pinhole (or optical center of the lens) to the image plane. However, pixels may be rectangular rather than perfectly square, resulting in different \(f_x\) and \(f_y\) values when f is expressed in pixels.
* \(O_x\), \(O_y\) refer to the pixel coordinates of the principal point, where the optical axis meets the image plane
* \(b\) refers to the baseline, the distance between the pinholes of the left and right cameras
We have 4 equations above and 3 unknowns: the x, y, and z coordinates of the 3D point P. The intrinsic camera parameters (focal lengths and principal point) are assumed to be known. Equations 1.2 and 2.2 indicate that the v coordinate of point P is the same in the left and right images:
$$v_{left} = v_{right} \quad (3)$$
Using equations 1.1, 1.2, and 2.1, we can derive the x, y, and z coordinates of point P:
$$x = \frac{b \cdot (u_{left} - O_x)}{u_{left} - u_{right}} \quad (4)$$

$$y = \frac{b \cdot f_x \cdot (v_{left} - O_y)}{f_y \cdot (u_{left} - u_{right})} \quad (5)$$

$$z = \frac{b \cdot f_x}{u_{left} - u_{right}} \quad (6)$$
Note that the x and y values above are relative to the left camera, since the origin of the coordinate system is aligned with the left camera. The above equations show that we can find the 3D coordinates of a point P using its 2 images captured from 2 different camera locations. The z value is also referred to as the depth value. Using this technique, we can find the depth values for different pixels within an image along with their real-world x and y coordinates, and we can also find real-world distances between different points in an image.
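To make the relationship concrete, here is a minimal Python sketch of equations 4, 5, and 6; the function name and the example values are made up for illustration.

```python
def stereo_to_3d(u_left, v_left, u_right, f_x, f_y, o_x, o_y, baseline):
    """Recover (x, y, z) of a 3D point from its matched pixel coordinates."""
    disparity = u_left - u_right                               # denominator in eqs. 4-6
    x = baseline * (u_left - o_x) / disparity                  # equation 4
    y = baseline * f_x * (v_left - o_y) / (f_y * disparity)    # equation 5
    z = baseline * f_x / disparity                             # equation 6 (depth)
    return x, y, z

# Hypothetical values: 450 px focal lengths, principal point (320, 240), 7.5 cm baseline
print(stereo_to_3d(u_left=400, v_left=260, u_right=370,
                   f_x=450, f_y=450, o_x=320, o_y=240, baseline=7.5))
```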
We’ll work through an example, capture some images, and perform some calculations to find out if our above assumptions and math work out! For capturing the images, we’ll use a device known as the OAK-D Lite (OAK stands for OpenCV AI Kit). This device has 3 cameras: left and right mono (black and white) cameras and a center color camera. We’ll use the left and right mono cameras for our experiment. A regular smartphone camera could also be used, but the OAK-D Lite has some advantages, described below.
The left and right cameras in the OAK-D Lite are oriented similarly to the geometry of the simplified configuration detailed above. The baseline distance between the left and right cameras is 7.5 cm. Left and right images of a scene captured using this device are shown below. We’ll refer to the horizontal image axis as u and the vertical image axis as v. The figure also shows these images stacked horizontally with a red line drawn at a constant height (i.e. at a constant v value).
Raw Left Image
Raw Right Image
Raw Stacked Left and Right Images
Let’s focus on a single point: the top-left corner of the laptop. As per equation 3 above, \(v_{left} = v_{right}\) for the same point in the left and right images. However, notice that the red line, which is at a constant v value, touches the top-left corner of the laptop in the left image but misses this point by a few pixels in the right image. There are two main reasons for this discrepancy:
* the intrinsic parameters (focal lengths and principal points) of the left and right cameras are not identical, and
* the orientations of the left and right cameras deviate slightly from the ideal configuration assumed above.
We can perform image rectification/post-processing to correct for differences in the intrinsic parameters and orientations of the left and right cameras. This process involves performing 3x3 matrix transformations. In the OAK-D Lite API, a stereo node performs these calculations and outputs the rectified left and right images. Details and source code can be viewed here. In this specific implementation, correction for intrinsic parameters is performed using the intrinsic camera matrices, and correction for orientation is performed using the rotation matrices (part of the calibration parameters) for the left and right cameras. The rectified left image is transformed as if the left camera had the same intrinsic parameters as the right one. Therefore, in all our following calculations, we’ll use the intrinsic parameters of the right camera, i.e. a focal length of 452.9 pixels and a principal point at (298.85, 245.52). In the rectified and stacked images below, notice that the red line at constant v touches the top-left corner of the laptop in both the left and right images.
Rectified Left Image
Rectified Right Image
Rectified and Stacked Left and Right Images
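For reference, below is a rough sketch of how rectified left and right frames can be requested from the device with the DepthAI Python API. It is a minimal illustration only: the node, socket, and stream names follow DepthAI 2.x conventions and may differ across API versions, and this is not necessarily the exact pipeline used to capture the images above.

```python
import depthai as dai

# Build a pipeline: left/right mono cameras feed a StereoDepth node,
# which outputs rectified left and right frames.
pipeline = dai.Pipeline()

mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# Stream the rectified frames back to the host.
xout_left = pipeline.create(dai.node.XLinkOut)
xout_left.setStreamName("rectified_left")
stereo.rectifiedLeft.link(xout_left.input)

xout_right = pipeline.create(dai.node.XLinkOut)
xout_right.setStreamName("rectified_right")
stereo.rectifiedRight.link(xout_right.input)

with dai.Device(pipeline) as device:
    left_queue = device.getOutputQueue("rectified_left", maxSize=4, blocking=False)
    right_queue = device.getOutputQueue("rectified_right", maxSize=4, blocking=False)
    left_frame = left_queue.get().getCvFrame()    # rectified left image (numpy array)
    right_frame = right_queue.get().getCvFrame()  # rectified right image (numpy array)
```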
Let’s also overlap the rectified left and right images to see the difference. We can see that the v values for different points remain mostly constant between the left and right images. However, the u values change, and this difference in the u values is what lets us find the depth of different points in the scene, as shown in equation 6 above. This difference in u values is called disparity, and we can notice that the disparity for points near the camera is greater than for points farther away. Depth z and disparity are inversely proportional, as shown in equation 6.
Rectified and Overlapped Left and Right Images
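Beyond individual points, a dense disparity map can also be estimated directly from the rectified pair, for example with OpenCV’s block matcher. The sketch below is a rough illustration: the file names are placeholders, the block-matching parameters are untuned, and equation 6 is then used to turn disparity into depth.

```python
import cv2

# Placeholder file names for the rectified images shown above
left = cv2.imread("rectified_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("rectified_right.png", cv2.IMREAD_GRAYSCALE)

# Block matching on the rectified pair; OpenCV returns disparity in 1/16-pixel units
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype("float32") / 16.0

# Equation 6: z = f_x * b / disparity (values from the text: f_x = 452.9 px, b = 7.5 cm)
f_x, baseline_cm = 452.9, 7.5
depth_cm = f_x * baseline_cm / disparity   # only valid where disparity > 0
```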
Let’s find the 3D coordinates for some points in the scene. A few points are selected and manually annotated with their (u,v) values, as shown in the figures below. Instead of manual annotation, we could also use template-based matching, feature-detection algorithms like SIFT, etc., to find corresponding points in the left and right images (a minimal sketch follows the annotated images below).
Annotated Left Image
Annotated Right Image
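As mentioned above, feature matching can replace manual annotation. The sketch below uses OpenCV’s SIFT detector with a brute-force matcher and Lowe’s ratio test; the file names are placeholders, and the resulting matches would still need filtering (e.g. keeping only matches with nearly equal v values) before applying equations 4, 5, and 6.

```python
import cv2

left = cv2.imread("rectified_left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
right = cv2.imread("rectified_right.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors in both images
sift = cv2.SIFT_create()
kp_left, des_left = sift.detectAndCompute(left, None)
kp_right, des_right = sift.detectAndCompute(right, None)

# Match descriptors and keep matches passing Lowe's ratio test
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_left, des_right, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Each good match gives a (u_left, v_left) <-> (u_right, v_right) correspondence
pairs = [(kp_left[m.queryIdx].pt, kp_right[m.trainIdx].pt) for m in good]
```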
Twelve points are selected in the scene, and their (u,v) values in the left and right images are tabulated below. Using equations 4, 5, and 6, the (x,y,z) coordinates of these points are also calculated and tabulated. The x and y coordinates are relative to the left camera, with the origin at the left camera’s pinhole (or optical center of the lens). Therefore, 3D points to the left of and above the pinhole have negative x and y values, respectively.
point | \(u_{left}\) | \(v_{left}\) | \(u_{right}\) | \(v_{right}\) | depth/z (cm) | x (cm) | y (cm) |
---|---|---|---|---|---|---|---|
pt1 | 138 | 219 | 102 | 219 | 94.36 | -33.51 | -5.53 |
pt2 | 264 | 216 | 234 | 217 | 113.23 | -8.72 | -7.38 |
pt3 | 137 | 320 | 101 | 321 | 94.36 | -33.72 | 15.52 |
pt4 | 263 | 303 | 233 | 302 | 113.23 | -8.97 | 14.37 |
pt5 | 307 | 211 | 280 | 211 | 125.81 | 2.26 | -9.59 |
pt6 | 367 | 212 | 339 | 212 | 121.32 | 18.25 | -8.98 |
pt7 | 305 | 298 | 278 | 298 | 125.81 | 1.71 | 14.58 |
pt8 | 365 | 299 | 338 | 299 | 125.81 | 18.37 | 14.86 |
pt9 | 466 | 225 | 415 | 225 | 66.61 | 24.58 | -3.02 |
pt10 | 581 | 225 | 530 | 226 | 66.61 | 41.49 | -3.02 |
pt11 | 464 | 387 | 413 | 388 | 66.61 | 24.29 | 20.81 |
pt12 | 579 | 388 | 528 | 390 | 66.61 | 41.2 | 20.95 |
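As a quick sanity check, plugging pt1’s pixel coordinates into equations 4, 5, and 6, with \(f_x = f_y = 452.9\), principal point (298.85, 245.52), and baseline \(b = 7.5\) cm, reproduces the first row of the table up to rounding:

$$z = \frac{7.5 \times 452.9}{138 - 102} \approx 94.35 \text{ cm}, \quad x = \frac{7.5 \times (138 - 298.85)}{36} \approx -33.51 \text{ cm}, \quad y = \frac{7.5 \times 452.9 \times (219 - 245.52)}{452.9 \times 36} \approx -5.53 \text{ cm}$$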
We can also compute the 3D distances between different points from their (x,y,z) values using the formula \(d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}\). Computed distances between some of the points are tabulated below along with their actual measured values. The percentage error is also computed and tabulated. Notice that the calculated and actual values match very well, with a percentage error of 1.2% or less.
dimension | calculated(cm) | actual(cm) | % error |
---|---|---|---|
d1(1-2) | 31.2 | 31.2 | 0 |
d2(1-3) | 21.1 | 21.3 | 0.94 |
d3(5-6) | 16.6 | 16.7 | 0.6 |
d4(5-7) | 24.2 | 24 | 0.83 |
d5(9-10) | 16.9 | 16.7 | 1.2 |
d6(9-11) | 23.8 | 24 | 0.83 |
Calculated Dimension Results

![Calculated Dimension Results](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/3d_stereo_vision_images/calculated_dim_results.png?download=true)
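As a final check, the snippet below recomputes d1 from the tabulated (x,y,z) coordinates of pt1 and pt2 and matches the calculated value of 31.2 cm.

```python
import math

pt1 = (-33.51, -5.53, 94.36)   # (x, y, z) of pt1 in cm, from the table of points above
pt2 = (-8.72, -7.38, 113.23)   # (x, y, z) of pt2 in cm

# Euclidean distance between the two 3D points
d1 = math.dist(pt1, pt2)
print(round(d1, 1))   # 31.2 cm, matching the tabulated d1(1-2)
```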