A point cloud is a collection of individual points, each representing a sample of a surface within a three-dimensional space, denoted by [x, y, z] coordinates. Beyond their spatial coordinates, these points often carry additional attributes such as normals, RGB color, albedo, and a Bidirectional Reflectance Distribution Function (BRDF).
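In code, this structure is usually just an (N, 3) array of coordinates with parallel arrays for per-point attributes. A minimal sketch using NumPy (the variable names here are illustrative, not from any particular library):

```python
import numpy as np

# A point cloud as an (N, 3) array of [x, y, z] coordinates,
# with per-point attributes stored in parallel arrays.
num_points = 5
rng = np.random.default_rng(seed=0)

points = rng.uniform(-1.0, 1.0, size=(num_points, 3))   # x, y, z samples
normals = np.tile([0.0, 0.0, 1.0], (num_points, 1))     # unit surface normals
colors = rng.uniform(0.0, 1.0, size=(num_points, 3))    # RGB in [0, 1]

print(points.shape)  # (5, 3)
```

Every library covered below ultimately stores its point clouds in some variation of this layout.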
Here, albedo is a measure of how much light a surface reflects: the ratio of reflected light to the incident light that strikes the surface. A high albedo indicates a surface that reflects most incoming light, such as snow, while a low albedo indicates a surface that absorbs more light, like asphalt.
The BRDF is a function that describes how light is scattered or reflected at an opaque surface. It details the way light is reflected at an intersection point on a surface, considering the incoming light direction and the outgoing direction. It provides a mathematical description of the surface’s reflective properties, including factors like glossiness, roughness, and the distribution of reflected light over different angles. These attributes serve crucial roles in various applications such as modeling, rendering, and scene comprehension.
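The simplest concrete BRDF is the Lambertian (ideal diffuse) model, where reflectance is the same in every outgoing direction and the BRDF reduces to a constant, albedo / π. A small sketch under that assumption (the function names are hypothetical, chosen for illustration):

```python
import math

def lambertian_brdf(albedo):
    # Ideal diffuse (Lambertian) surface: the BRDF is independent of the
    # incoming and outgoing directions. Dividing by pi ensures the surface
    # never reflects more energy than it receives.
    return albedo / math.pi

def reflected_radiance(albedo, irradiance, theta):
    # Outgoing radiance = BRDF * incident irradiance * cos(theta),
    # where theta is the angle between the light and the surface normal.
    return lambertian_brdf(albedo) * irradiance * math.cos(theta)

# High-albedo snow reflects far more light than low-albedo asphalt
# under the same illumination.
snow = reflected_radiance(albedo=0.9, irradiance=100.0, theta=0.0)
asphalt = reflected_radiance(albedo=0.1, irradiance=100.0, theta=0.0)
```

Real materials need more expressive BRDFs (glossy or rough microfacet models), but the same function signature — incoming direction, outgoing direction, surface properties in; reflectance out — carries over.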
While the concept of point cloud data isn’t new and has been integral in fields like graphics and physics simulation for many years, its significance has notably surged due to two key trends. Firstly, the widespread availability of cost-effective and user-friendly point cloud acquisition devices has significantly increased accessibility.
Secondly, applications such as augmented reality and autonomous vehicles have further underscored their relevance in today's technological landscape.
Now that we know what a point cloud is, what can we do with one?
3D point data is used mainly for self-driving capabilities, but other computer-vision systems such as drones and robots now also use LiDAR for better visual perception. LiDAR is a remote sensing process that collects measurements used to create 3D models and maps of objects and environments. Using ultraviolet, visible, or near-infrared light, LiDAR gauges spatial relationships and shapes by measuring the time it takes for signals to bounce off objects and return to the scanner.
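That time-of-flight measurement converts directly into distance: the pulse travels to the object and back, so the one-way range is half of the speed of light times the elapsed time. A minimal sketch (the function name is illustrative):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second, in vacuum

def lidar_range(round_trip_time_s):
    # The pulse covers the scanner-to-object distance twice (out and back),
    # so the one-way range is c * t / 2.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A pulse returning after ~667 nanoseconds corresponds to a target
# roughly 100 metres away.
distance = lidar_range(667e-9)
```

Repeating this measurement millions of times per second across a sweeping laser is what produces the dense [x, y, z] point clouds described above.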
We will be using the Python libraries point-cloud-utils and Open3D. point-cloud-utils can be installed with:
pip install point-cloud-utils
Open3D can be installed with:
pip install open3d
or, for a smaller CPU-only build:
pip install open3d-cpu
First, we need to understand the formats in which these point clouds are stored, and for that we need to look at mesh formats.
Why? point-cloud-utils supports reading common mesh formats (PLY, STL, OFF, OBJ, 3DS, VRML 2.0, X3D, COLLADA). The type of file is inferred from its file extension. Some of the supported extensions are:
PLY (Polygon File Format)
STL (Standard Tessellation Language)
OFF (Object File Format)
3DS (3D Studio)
X3D (Extensible 3D Graphics)
DAE (Digital Asset Exchange)
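In practice, point-cloud-utils or Open3D handle the parsing for you, but it is worth seeing what one of these formats actually contains. PLY, for example, is a plain-text header describing the element layout, followed by one line per vertex. A hand-rolled sketch (the parser function is hypothetical and handles only the simple ASCII case):

```python
import os
import tempfile

# A minimal ASCII PLY file: the header declares three float properties
# per vertex, then the body lists one vertex per line.
ply_text = """ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
end_header
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
"""

def read_ascii_ply_vertices(path):
    # Parse vertex coordinates out of a simple ASCII PLY file:
    # read the vertex count from the header, then take that many
    # rows after the end_header line.
    with open(path) as f:
        lines = f.read().splitlines()
    count = 0
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            count = int(line.split()[-1])
        if line == "end_header":
            body = lines[i + 1 : i + 1 + count]
            return [tuple(float(v) for v in row.split()) for row in body]
    return []

path = os.path.join(tempfile.mkdtemp(), "triangle.ply")
with open(path, "w") as f:
    f.write(ply_text)

vertices = read_ascii_ply_vertices(path)  # three (x, y, z) tuples
```

Real PLY files can also be binary and carry extra per-vertex properties (normals, colors), which is exactly why we lean on point-cloud-utils and Open3D rather than parsing by hand.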