A point cloud is a collection of individual points, each representing a sample of a surface in three-dimensional space, given by its [x, y, z] coordinates. Beyond these spatial coordinates, points often carry additional attributes such as normals, RGB color, albedo, and a Bidirectional Reflectance Distribution Function (BRDF).
Here, albedo measures how much light a surface reflects: it is the ratio of reflected light to the incident light that strikes the surface. A high albedo indicates a surface that reflects most incoming light, such as snow, while a low albedo indicates a surface that absorbs most of it, like asphalt.
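As a quick worked example (the numbers are made up for illustration):
# Albedo = reflected light / incident light
incident = 100.0  # light energy hitting the surface
reflected = 90.0  # energy bounced back, as for a snow-like surface
albedo = reflected / incident  # 0.9 -> high albedo; asphalt is closer to 0.1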
The BRDF is a function that describes how light is scattered or reflected at an opaque surface: given the incoming light direction and an outgoing direction at a point on the surface, it tells you how much light is reflected. It provides a mathematical description of the surface's reflective properties, including glossiness, roughness, and the distribution of reflected light over different angles. These attributes play crucial roles in applications such as modeling, rendering, and scene comprehension.
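To make the idea concrete, here is a minimal sketch of the simplest BRDF, the Lambertian (perfectly diffuse) model, which reflects light equally in all outgoing directions; the function and its arguments are purely illustrative:
import numpy as np

def lambertian_brdf(albedo, incoming_dir, outgoing_dir):
    # A Lambertian surface scatters light uniformly over the hemisphere,
    # so its BRDF is the constant albedo / pi, independent of both directions
    return albedo / np.pi

print(lambertian_brdf(0.9, np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0])))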
While the concept of point cloud data isn't new and has been integral to fields like graphics and physics simulation for many years, its significance has notably surged due to two key trends. First, the widespread availability of cost-effective and user-friendly point cloud acquisition devices has significantly increased accessibility. Second, applications like Augmented Reality and autonomous vehicles have underscored their relevance in today's technological landscape.
Now that we know what a point cloud is, what can we do with one?
3D point data is mainly used for self-driving capabilities, but other computer-vision systems such as drones and robots now also use LiDAR for better visual perception. LiDAR is a remote sensing process that collects measurements used to create 3D models and maps of objects and environments. Using ultraviolet, visible, or near-infrared light, LiDAR gauges spatial relationships and shapes by measuring the time it takes for signals to bounce off objects and return to the scanner.
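That time-of-flight measurement converts to range with simple arithmetic: the pulse travels out and back, so the distance is half the round-trip time times the speed of light. A quick sketch with an illustrative timing value:
# Time-of-flight ranging: distance = speed of light * round-trip time / 2
c = 299_792_458.0          # speed of light in m/s
round_trip_time = 66.7e-9  # pulse return time in seconds (illustrative value)
distance = c * round_trip_time / 2
print(f"{distance:.2f} m")  # ~10 m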
We will be using the Python libraries point-cloud-utils and open3d. point-cloud-utils can be installed by
pip install point-cloud-utils
and open3d by
pip install open3d
OR a smaller CPU-only version
pip install open3d-cpu
Now, first we need to understand the formats in which these point clouds are stored, and for that, we need to look at meshes.
Why?
point-cloud-utils supports reading common mesh formats (PLY, STL, OFF, OBJ, 3DS, VRML 2.0, X3D, COLLADA). The type of a file is inferred from its file extension. Some of the extensions supported are:
PLY (Polygon File Format)
STL (Standard Tessellation Language)
OFF (Object File Format)
3DS (3D Studio)
X3D (Extensible 3D Graphics)
DAE (Digital Asset Exchange)
We can load meshes with different combinations of per-vertex attributes using the point-cloud-utils library, like this:
import point_cloud_utils as pcu
# Load mesh vertices and faces
vertices, faces = pcu.load_mesh_vf("path/to/mesh")
# Load mesh vertices and per-vertex normals
vertices, normals = pcu.load_mesh_vn("path/to/mesh")
# Load mesh vertices, per-vertex normals, and per-vertex colors
vertices, normals, colors = pcu.load_mesh_vnc("path/to/mesh")
# Load mesh vertices, faces, and per-vertex normals
vertices, faces, normals = pcu.load_mesh_vfn("path/to/mesh")
# Load mesh vertices, faces, per-vertex normals, and per-vertex colors
vertices, faces, normals, colors = pcu.load_mesh_vfnc("path/to/mesh")
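point-cloud-utils also provides matching save_mesh_* functions, and the output format is again inferred from the file extension; for example (the path is a placeholder):
# Save vertices, faces, per-vertex normals, and per-vertex colors;
# the .ply extension selects the PLY format
pcu.save_mesh_vfnc("path/to/output.ply", vertices, faces, normals, colors)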
You can load some point clouds to start practising.
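A good way to practise is to actually look at what you load. Since we installed open3d earlier but haven't used it yet, here is a minimal sketch that hands the loaded arrays to open3d's viewer (the mesh path is a placeholder):
import open3d as o3d
import point_cloud_utils as pcu

# Load vertices and per-vertex normals with point-cloud-utils
vertices, normals = pcu.load_mesh_vn("path/to/mesh")

# Wrap the arrays in an open3d point cloud and open an interactive viewer
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(vertices)
pcd.normals = o3d.utility.Vector3dVector(normals)
o3d.visualization.draw_geometries([pcd])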
In general, the barycentric coordinate system is an important coordinate system for meshes (see the sketch after this list), due to:
Homogeneity
Uniqueness
Affine Invariance
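Concretely, barycentric coordinates express a point inside a triangle as a weighted average of its three corners, with non-negative weights that sum to one; a quick numpy sketch (the triangle and weights are arbitrary):
import numpy as np

# Triangle corners and barycentric weights (u + v + w == 1)
a, b, c = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
u, v, w = 0.2, 0.3, 0.5

# The point is the affine combination of the corners
p = u * a + v * b + w * c
print(p)  # [0.3 0.5 0.] lies inside the triangle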
So, as an example, here is how we can use barycentric coordinates to generate random samples on a mesh:
import point_cloud_utils as pcu
import numpy as np
# Load mesh vertices, faces, and per-vertex normals
vertices, faces, normals = pcu.load_mesh_vfn("my_model.ply")
# Generate random samples on the mesh (vertices, faces, normals)
# f_indices: Face indices of each sample, bary_coords: Barycentric coordinates of the samples within each face
f_indices, bary_coords = pcu.sample_mesh_random(vertices, faces, num_samples=vertices.shape[0] * 40)
# Interpolate to compute sample positions and normals using face indices and barycentric coordinates
sampled_vertices = pcu.interpolate_barycentric_coords(faces, f_indices, bary_coords, vertices)
sampled_normals = pcu.interpolate_barycentric_coords(faces, f_indices, bary_coords, normals)
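The interpolated positions and normals are plain numpy arrays, so you can, for example, write them back out as a point cloud; a short sketch (the output filename is just a placeholder):
# Save the sampled positions and per-vertex normals as a point cloud
pcu.save_mesh_vn("sampled.ply", sampled_vertices, sampled_normals)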