Using a 3D Renderer to Generate Synthetic Data

When creating computer-generated images to use as synthetic training data, we ideally want the images to look as realistic as possible. Physically based rendering (PBR) engines such as Blender Cycles or Unity simulate how light behaves in the real world, producing images that look and feel convincingly real.

Imagine you’re creating an image of a shiny apple. Now, when you color that apple, you want it to look realistic, right? That’s where something called PBR comes in.

(Image: a rendered shiny apple)

Okay, let’s break it down:

Colors and Light: PBR models how light rays bounce off, pass through, and get absorbed by surfaces, so the apple's color changes naturally with the lighting around it.

Materials: Each surface is described by physical properties such as how metallic or rough it is, so a waxy apple skin reflects light differently than a dull cardboard box.

Textures: Texture maps add fine surface detail such as speckles, bruises, and subtle color variation, so the apple does not look artificially smooth.

Realism: Because the shading follows the physics of real light, the rendered apple still looks believable when you move it into a darker room or under a bright lamp.

Layers of Light: PBR combines several contributions, such as diffuse shading, sharp specular highlights, and soft reflections, to build up the final shiny look.

PBR also simplifies the workflow. Instead of manually tweaking many parameters to get the right look, you can use a set of standardized materials and lighting models. This makes the process more intuitive and user-friendly.

Now, think about training AI models like those used in computer vision. If you’re teaching a computer to recognize objects in images, it’s beneficial to have a diverse set of images that closely mimic real-world scenarios. PBR helps in generating synthetic data that looks so real that it can be used to train computer vision models effectively.

There are several 3D rendering engines that you can use for PBR, including Blender Cycles and Unity. We are going to focus on Blender because it is open source and there are plenty of learning resources available for it.

Blender

Blender is a powerful, open-source 3D computer graphics software used for creating animated films, visual effects, art, 3D games, and more. It encompasses a wide range of features, making it a versatile tool for artists, animators, and developers. Let’s start off by walking through a basic example of rendering a synthetic image of an elephant.

Here are the essential steps:

1. Import or model the 3D object (in this case, an elephant mesh).
2. Apply materials and textures to the object.
3. Add one or more light sources to the scene.
4. Position the camera to frame the object.
5. Render the image with the Cycles engine.

Here is the elephant image generated in Blender:

(Image: an elephant rendered in Blender)

It is not completely photorealistic, but it is probably close enough to train a model to monitor elephant populations. Of course, to do that we need to create a large dataset of synthetic elephant images! You can use Blender's Python environment (bpy) to render a large number of images with the location and rotation of the elephant randomized, as in the sketch below. You can also use a script to generate annotations for segmentation, depth, normal, and pose estimation.

Great! How do we get started?

Unfortunately, there is a pretty steep learning curve associated with Blender. None of the steps are too complicated, but wouldn't it be nice if we could render the dataset without having to figure all of this out ourselves? Luckily for us, there is a library called BlenderProc, built on top of Blender, that provides all the scripts we need to render realistic synthetic data and annotations.

BlenderProc

The BlenderProc pipeline was introduced in BlenderProc (Denninger et al.) and is a modular pipeline built on top of Blender. It can be used to generate images for a variety of use cases, including segmentation, depth, normal, and pose estimation.

It is specifically designed to help generate realistic-looking images for training convolutional neural networks.
It has the following properties, which make it a great choice for synthetic data generation:

Procedural: object poses, camera positions, and lighting can be sampled and varied from a script, so every rendered image can be different.

Photorealistic: images are rendered with Blender's physically based Cycles ray tracer.

Annotation-ready: segmentation masks, depth maps, surface normals, and object poses can be written out alongside the color images.

You can install BlenderProc via pip:

pip install blenderproc

Alternately, you can clone the official BlenderProc repository from GitHub using Git:

git clone https://github.com/DLR-RM/BlenderProc

BlenderProc must be run inside Blender's Python environment (bpy), as this is the only way to access the Blender API. The blenderproc command line tool takes care of this for you:

blenderproc run <your_python_script>

You can check out this notebook to try BlenderProc in Google Colab; it demos the basic examples provided here. Here are some images rendered with the basic example:

(Rendered outputs: colors, surface normals, and depth)

Blender Resources

The following video explains how to render a 3D synthetic dataset in Blender:

The following video explains how to create a 3D object using Photogrammetry:

Papers / Blogs

BlenderProc Resources

The following video provides an overview of the BlenderProc pipeline:

Papers
