Introduction

Apart from videos, which we discussed in the last chapter, another common form of visual data is 3D data. While 2D images have two spatial dimensions, commonly labelled x and y, 3D images have three, referred to as x, y and z.

“But wait,” I hear you say, “videos also have three dimensions!” That is completely correct - videos have two spatial dimensions, x and y, and a temporal dimension, t. The difference with 3D data is that all three dimensions are spatial. This lets us build a better model of our world and of our own perceptive capabilities. That is why one very common field for 3D applications nowadays is Mixed Reality, in which we try to merge the digital and analog worlds.

Unit Overview

You will learn more about applications of 3D Computer Vision in the first chapter after this introduction. Right after that, we will take a look at the historical development of 3D applications - all the way from the 19th century to today.

After these general topics, we’ll dive right into the terminology and concepts with three chapters about camera models, linear algebra and different 3D representations.

We follow up the theory with some practical fields of use for 3D Computer Vision: starting off with Novel View Synthesis, followed by Stereo Vision, and finishing this unit (for now) with one of the most popular applications right now - Neural Radiance Fields (NeRFs).

Ready? Then get out your 3D goggles and let’s learn! 🌟
