We have presented a minimal implementation of NeRF to provide an intuition of its core ideas and methodology. This method has influenced numerous subsequent works in computer graphics.
We encourage readers to use this code as a starting point, experiment with the hyperparameters, and visualize the outputs. Below we also provide the outputs of the model trained for more epochs.
| Epochs | GIF of the training step |
|---|---|
| 100 | 100-epoch-training |
| 200 | 200-epoch-training |
References
- NeRF repository: the official repository for NeRF.
- NeRF paper: the original NeRF paper.
- Manim repository: we used manim to build all the animations.
- MathWorks: the camera calibration article.
- Mathew's video: a great video on NeRF.
Compact Convolutional Transformers
As discussed in the Vision Transformers (ViT) paper, a Transformer-based architecture for vision typically requires a larger dataset than usual, as well as a longer pre-training schedule. ImageNet-1k (which has about a million images) is considered to fall under the medium-sized data regime with respect to ViTs. This is primarily because, unlike CNNs, ViTs lack the inductive biases (such as the locality provided by convolutions) that make smaller datasets sufficient. This raises the question: can the benefits of convolutions and the benefits of Transformers be combined in a single architecture that trains well on small datasets?
In Escaping the Big Data Paradigm with Compact Transformers, Hassani et al. present an approach for doing exactly this. They proposed the Compact Convolutional Transformer (CCT) architecture. In this example, we will implement CCT and see how well it performs on the CIFAR-10 dataset.
If you are unfamiliar with the concept of self-attention or Transformers, you can read this chapter from François Chollet's book Deep Learning with Python. This example uses code snippets from another example, Image classification with Vision Transformer.
This example requires TensorFlow 2.5 or higher, as well as TensorFlow Addons, which can be installed using the following command:
!pip install -U -q tensorflow-addons
Imports
from tensorflow.keras import layers
from tensorflow import keras
import matplotlib.pyplot as plt
import tensorflow_addons as tfa
import tensorflow as tf
import numpy as np
Hyperparameters and constants
positional_emb = True
conv_layers = 2
projection_dim = 128
num_heads = 2
transformer_units = [
    projection_dim,
    projection_dim,
]
transformer_layers = 2
stochastic_depth_rate = 0.1
learning_rate = 0.001
weight_decay = 0.0001
batch_size = 128
num_epochs = 30
image_size = 32
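As a sanity check on these constants, we can estimate the length of the token sequence the convolutional tokenizer will hand to the Transformer. This sketch assumes each of the conv_layers stages ends with a stride-2 max-pool using "same" padding (the configuration used later in this example), so the spatial resolution halves per stage:

```python
# Sketch: estimate the token sequence length produced by the conv tokenizer.
# Assumes each of the `conv_layers` stages ends with a stride-2, "same"-padded
# max-pool, so the spatial size halves per stage.
import math

image_size = 32
conv_layers = 2
pooling_stride = 2

spatial = image_size
for _ in range(conv_layers):
    spatial = math.ceil(spatial / pooling_stride)  # "same"-padded pooling

sequence_length = spatial * spatial  # number of tokens fed to the Transformer
print(sequence_length)  # 8 * 8 = 64 tokens
```

With 32x32 CIFAR-10 images and two pooling stages, the Transformer therefore sees 64 tokens per image, far fewer than the raw pixel count.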
Load CIFAR-10 dataset
num_classes = 10
input_shape = (32, 32, 3)

(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

print(f"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}")
print(f"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}")

Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 11s 0us/step
x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 10)
x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 10)
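keras.utils.to_categorical converts the integer class labels into one-hot vectors, which is what the (50000, 10) label shape above reflects. A minimal NumPy equivalent of what it does here (a sketch for intuition, not the Keras implementation):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    # Index the rows of an identity matrix by label: label i maps to the
    # i-th standard basis vector of length num_classes.
    labels = np.asarray(labels).reshape(-1)
    return np.eye(num_classes, dtype="float32")[labels]

y = to_one_hot([3, 0, 9], num_classes=10)
print(y.shape)        # (3, 10)
print(y[0].argmax())  # 3
```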
The CCT tokenizer
The first recipe introduced by the CCT authors is the tokenizer for processing the images. In a standard ViT, images are organized into uniform non-overlapping patches. This eliminates the boundary-level information present in between different patches, which is important for a neural network to effectively exploit the locality information.
We already know that convolutions are quite good at exploiting locality information. So, based on this, the authors introduce an all-convolution mini-network to produce image patches.
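For contrast, here is what ViT-style patching looks like in plain NumPy: the image is cut into non-overlapping blocks, so pixels on either side of a patch boundary never end up in the same token (a sketch for intuition; in practice ViT implements this with a strided convolution or a reshape inside the model):

```python
import numpy as np

def patchify(image, patch_size):
    # Split an (H, W, C) image into non-overlapping (P, P, C) patches,
    # then flatten each patch into a single token vector.
    h, w, c = image.shape
    p = patch_size
    patches = image.reshape(h // p, p, w // p, p, c)
    patches = patches.transpose(0, 2, 1, 3, 4)  # (H/P, W/P, P, P, C)
    return patches.reshape(-1, p * p * c)       # (num_patches, P*P*C)

tokens = patchify(np.zeros((32, 32, 3), dtype="float32"), patch_size=4)
print(tokens.shape)  # (64, 48): 8x8 patches, each 4*4*3 values
```

The CCT tokenizer below replaces this hard partitioning with overlapping convolutional receptive fields, so boundary pixels contribute to several neighboring tokens.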
class CCTTokenizer(layers.Layer):
    def __init__(
        self,
        kernel_size=3,
        stride=1,
        padding=1,
        pooling_kernel_size=3,
        pooling_stride=2,
        num_conv_layers=conv_layers,
        num_output_channels=[64, 128],
        positional_emb=positional_emb,
        **kwargs,
    ):
        super(CCTTokenizer, self).__init__(**kwargs)
        # This is our tokenizer: a small stack of conv + pooling blocks.
        self.conv_model = keras.Sequential()
        for i in range(num_conv_layers):
            self.conv_model.add(
                layers.Conv2D(
                    num_output_channels[i],
                    kernel_size,
                    stride,
                    padding="valid",
                    use_bias=False,
                    activation="relu",
                    kernel_initializer="he_normal",
                )
            )
            self.conv_model.add(layers.ZeroPadding2D(padding))
            self.conv_model.add(
                layers.MaxPool2D(pooling_kernel_size, pooling_stride, "same")
            )
        self.positional_emb = positional_emb

    def call(self, images):
        outputs = self.conv_model(images)
        # Flatten the spatial dimensions of the feature maps so that each
        # spatial location becomes one token in the sequence.
        reshaped = tf.reshape(
            outputs,
            (-1, tf.shape(outputs)[1] * tf.shape(outputs)[2], tf.shape(outputs)[-1]),
        )
        return reshaped