Novel View Synthesis

We’ve seen in the NeRF chapter how, given a large set of images, we can generate a three-dimensional representation of an object. But sometimes we have only a handful of images or even just one. Novel View Synthesis (NVS) is a collection of methods to generate views from new camera angles that are plausibly consistent with a set of images. Once we have a large, consistent set of images we can use NeRF or a similar algorithm to construct a 3D representation.

Many methods have recently been developed for this task, but they can be broadly divided into two classes: those that generate an intermediate three-dimensional representation, which is then rendered from the new viewing direction, and those that directly produce the new image.

One key difficulty is that this task is almost always underdetermined. For example, given an image of the back of a sign, there are many different things that could plausibly be on the front. Similarly, parts of the object may be occluded, with one part of the object in front of another. If a model is trained to directly predict (regress) the unseen parts, with a loss penalizing errors in reconstructing held-out views, then whenever it is unclear what should be there the model will necessarily predict a blurry, grey region, as noted in NerfDiff. This has spurred interest in the use of generative, diffusion-based models, which are able to sample from multiple plausible possibilities for the unseen regions.

Here we will briefly discuss two approaches, which are representative of the two classes. PixelNeRF directly predicts a NeRF for the scene from an input image. Zero123 adapts the Stable Diffusion latent diffusion model to directly generate new views without an intermediate 3D representation.

PixelNeRF

PixelNeRF is a method that directly generates the parameters of a NeRF from one or more images. In other words, it conditions the NeRF on the input images. Unlike the original NeRF, which trains an MLP that maps spatial points to a density and color, PixelNeRF also uses spatial features generated from the input images.

PixelNeRF diagram image from https://alexyu.net/pixelnerf

The method first passes the input images through a convolutional neural network (ResNet34), bilinearly upsampling features from multiple layers to the same resolution as the input images.
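
As a rough sketch of this encoder step (assuming a torchvision ResNet34 backbone; the layer choices and channel handling here are illustrative, not the exact PixelNeRF code), it might look like:

```python
import torch
import torch.nn.functional as F
import torchvision

# Minimal sketch of a PixelNeRF-style image encoder: take feature maps from
# several ResNet34 stages, bilinearly upsample them to the input resolution,
# and concatenate them into a single feature volume W.
# (The choice of stages and channel counts is an illustrative assumption.)
class ImageFeatureEncoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet34(weights=None)
        self.stem = torch.nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool
        )
        self.stages = torch.nn.ModuleList(
            [backbone.layer1, backbone.layer2, backbone.layer3]
        )

    def forward(self, images):                    # images: (B, 3, H, W)
        h, w = images.shape[-2:]
        feats = []
        x = self.stem(images)
        for stage in self.stages:
            x = stage(x)
            # Upsample each stage's feature map back to the input resolution
            feats.append(F.interpolate(x, size=(h, w), mode="bilinear",
                                       align_corners=False))
        return torch.cat(feats, dim=1)            # (B, C_total, H, W)
```
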
As in a standard NeRF, the new view is generated by volume rendering. However, the NeRF itself has a slightly unusual structure. At each query point $x$ in the rendered volume, the corresponding point in the input image(s) is found by projecting it with the input camera transformation $\pi$. The input image features at this point, $W(\pi(x))$, are then retrieved by bilinear interpolation. As in the original NeRF, the query point $x$ is positionally encoded and concatenated with the viewing direction $d$. The NeRF network consists of a set of ResNet blocks; the input image features $W(\pi(x))$ pass through a linear layer and are added to the features at the start of each of the first three residual blocks. Two more residual blocks then further process these features, before an output layer reduces the number of channels to four (RGB + density). When multiple input views are supplied, these are processed independently for the first three residual blocks, and the features are then averaged before the last two blocks.
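
Below is a minimal sketch of this conditioning scheme in PyTorch. The projection helper, function names, and feature dimensions are illustrative assumptions rather than the actual PixelNeRF implementation.

```python
import torch
import torch.nn.functional as F

def project_to_image(x_world, pose, intrinsics):
    """Project world-space points (B, N, 3) into normalized image coordinates
    in [-1, 1], given a world-to-camera pose (B, 3, 4) and normalized
    intrinsics (B, 4) = [fx, fy, cx, cy] expressed in [0, 1] image units."""
    x_cam = pose[:, :3, :3] @ x_world.transpose(1, 2) + pose[:, :3, 3:]
    x_cam = x_cam.transpose(1, 2)                     # (B, N, 3)
    uv = x_cam[..., :2] / x_cam[..., 2:3]             # perspective divide
    uv = uv * intrinsics[:, None, :2] + intrinsics[:, None, 2:]
    return uv * 2.0 - 1.0                             # map [0, 1] -> [-1, 1]

def sample_image_features(W, uv):
    """Bilinearly sample the feature volume W (B, C, H, W) at the projected
    points uv (B, N, 2), returning per-point features (B, N, C)."""
    sampled = F.grid_sample(W, uv.unsqueeze(2), mode="bilinear",
                            align_corners=False)      # (B, C, N, 1)
    return sampled.squeeze(-1).transpose(1, 2)

class ConditionedResNetBlock(torch.nn.Module):
    """Residual MLP block; the image features W(pi(x)) enter through a
    linear layer and are added at the start of the block."""
    def __init__(self, dim, feat_dim):
        super().__init__()
        self.feat_proj = torch.nn.Linear(feat_dim, dim)
        self.fc1 = torch.nn.Linear(dim, dim)
        self.fc2 = torch.nn.Linear(dim, dim)

    def forward(self, h, img_feat=None):
        if img_feat is not None:
            h = h + self.feat_proj(img_feat)          # inject W(pi(x))
        return h + self.fc2(F.relu(self.fc1(F.relu(h))))
```
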

The original PixelNeRF model was trained on a relatively small set of renderings from the ShapeNet dataset. The model is trained with either one or two input images, and attempts to predict a single novel view from a new camera angle. The loss is the mean-squared error between the rendered and expected novel views. A model was trained separately on each class of object (e.g. planes, benches, cars).
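
The training objective can be sketched as follows; `encode`, `render_rays`, and the variable names are hypothetical stand-ins for illustration, not the actual training code.

```python
import torch

# Hypothetical training step: render rays of the held-out target view through
# the image-conditioned NeRF and penalize the squared error against the
# ground-truth pixel colors.
def training_step(model, input_images, target_rays, target_rgb):
    W = model.encode(input_images)                 # image feature volume(s)
    pred_rgb = model.render_rays(target_rays, W)   # volume rendering
    return torch.nn.functional.mse_loss(pred_rgb, target_rgb)
```
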

Results (from the PixelNeRF website)

Input image of a chair; rotating GIF animation of rendered novel views

image from https://alexyu.net/pixelnerf

The PixelNeRF code can be found on GitHub.

Related methods

In the Objaverse-XL paper, PixelNeRF was trained on the much larger allenai/objaverse-xl dataset.

See also - Generative Query Networks, Scene Representation Networks, LRM

Zero123 (or Zero-1-to-3)

Zero123 takes a different approach: it is a diffusion model. Rather than trying to generate a three-dimensional representation, it directly predicts the image from the new viewpoint. The model takes a single input image, together with the relative viewpoint transformation between the input and novel view directions, and attempts to generate a plausible, 3D-consistent image from the novel view direction.

Zero123 is built upon the Stable Diffusion architecture and was trained by fine-tuning existing weights, with a few new twists. The model starts from the weights of Stable Diffusion Image Variations, which conditions the diffusion U-Net on the CLIP image embedding (the final hidden state) of the input image rather than on a text prompt. In Zero123, this CLIP image embedding is concatenated with the relative viewpoint transformation between the input and novel views, represented in spherical polar coordinates.
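
A rough sketch of how this pose-augmented conditioning might be assembled is shown below; the function name, the `proj` layer, and the embedding dimension are illustrative assumptions, not the actual Zero123 code.

```python
import torch

# Sketch of Zero123-style conditioning: the CLIP image embedding of the input
# view is concatenated with the relative camera change in spherical polar
# coordinates, then projected back to the embedding width so it can be used
# by the U-Net's cross-attention layers. Names and dimensions are assumptions.
def make_view_conditioning(clip_embed, d_theta, d_phi, d_radius, proj):
    # clip_embed: (B, 768) CLIP image embedding of the input view
    # d_theta, d_phi, d_radius: (B,) relative elevation, azimuth, distance
    pose = torch.stack(
        [d_theta, torch.sin(d_phi), torch.cos(d_phi), d_radius], dim=-1
    )                                               # (B, 4)
    cond = torch.cat([clip_embed, pose], dim=-1)    # (B, 772)
    return proj(cond)                               # e.g. nn.Linear(772, 768)
```
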

Zero123 image from https://zero123.cs.columbia.edu

The rest of the architecture is the same as Stable Diffusion, except that the latent representation of the input image is concatenated channel-wise with the noisy latents before they are passed into the denoising U-Net.
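
Concretely, the extra input channels can be sketched like this (a simplified illustration with assumed names, not the actual implementation):

```python
import torch

# Sketch of the Zero123 U-Net input: the (clean) VAE latent of the input view
# is concatenated channel-wise with the noisy latent of the view being
# generated, so the U-Net sees 8 latent channels instead of 4. The U-Net's
# first convolution must be widened accordingly (an assumption of this sketch).
def unet_input(noisy_latent, input_view_latent):
    # noisy_latent:      (B, 4, h, w) noised latent of the target view
    # input_view_latent: (B, 4, h, w) VAE-encoded latent of the input view
    return torch.cat([noisy_latent, input_view_latent], dim=1)  # (B, 8, h, w)
```
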

To explore this model further, see the Live Demo

Related methods

3DiM - X-UNet architecture, with cross-attention between the input and noisy frames.

Zero123-XL - Trained on the larger Objaverse-XL dataset. See also Stable Zero123.

Zero123++ - Generates six new views at fixed relative positions to the input view, with reference attention between the input and generated images.