Training a CTC-based model for automatic speech recognition.

Introduction

Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies enabling the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech-to-text (STT). It incorporates knowledge and research from the computer science, linguistics and computer engineering fields.

This demonstration shows how to combine a 2D CNN, an RNN and a Connectionist Temporal Classification (CTC) loss to build an ASR model. CTC is an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems. CTC is used when we don't know how the input aligns with the output (how the characters in the transcript align to the audio). The model we create is similar to DeepSpeech2.

We will use the LJSpeech dataset from the LibriVox project. It consists of short audio clips of a single speaker reading passages from 7 non-fiction books.

We will evaluate the quality of the model using Word Error Rate (WER). WER is obtained by adding up the substitutions, insertions, and deletions that occur in a sequence of recognized words, then dividing that number by the total number of words originally spoken. To get the WER score you need to install the jiwer package. You can use the following command line:

pip install jiwer

References:

LJSpeech Dataset
Speech recognition
Sequence Modeling With CTC
DeepSpeech2

Setup

import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
from IPython import display
from jiwer import wer

Load the LJSpeech Dataset

Let's download the LJSpeech Dataset. The dataset contains 13,100 audio files as wav files in the /wavs/ folder. The label (transcript) for each audio file is a string given in the metadata.csv file. The fields are:

ID: this is the name of the corresponding .wav file
Transcription: words spoken by the reader (UTF-8)
Normalized transcription: transcription with numbers, ordinals, and monetary units expanded into full words (UTF-8).

For this demo we will only use the "Normalized transcription" field.

Each audio file is a single-channel 16-bit PCM WAV with a sample rate of 22,050 Hz.

data_url = "https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2"
data_path = keras.utils.get_file("LJSpeech-1.1", data_url, untar=True)
wavs_path = data_path + "/wavs/"
metadata_path = data_path + "/metadata.csv"

# Read metadata file and parse it
metadata_df = pd.read_csv(metadata_path, sep="|", header=None, quoting=3)
metadata_df.columns = ["file_name", "transcription", "normalized_transcription"]
metadata_df = metadata_df[["file_name", "normalized_transcription"]]
metadata_df = metadata_df.sample(frac=1).reset_index(drop=True)
metadata_df.head(3)

    file_name    normalized_transcription
0   LJ042-0218   to the entire land and complete foundations of...
1   LJ004-0218   a week's allowance at a time, was abolished, a...
2   LJ005-0151   in others women were very properly exempted fr...
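As a quick, optional sanity check of the audio format described above, you can decode one clip and confirm its sample rate and duration. This snippet is not part of the original example, and because the metadata was shuffled the exact clip it picks will vary:

# Decode the first listed clip and inspect its sample rate and duration.
sample_file = wavs_path + metadata_df["file_name"][0] + ".wav"
audio, sample_rate = tf.audio.decode_wav(tf.io.read_file(sample_file))
print(sample_rate.numpy())                    # 22050
print(audio.shape[0] / sample_rate.numpy())   # clip duration in seconds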
We now split the data into training and validation sets.

split = int(len(metadata_df) * 0.90)
df_train = metadata_df[:split]
df_val = metadata_df[split:]

print(f"Size of the training set: {len(df_train)}")
print(f"Size of the validation set: {len(df_val)}")

Size of the training set: 11790
Size of the validation set: 1310

Preprocessing

We first prepare the vocabulary to be used.

# The set of characters accepted in the transcription.
characters = [x for x in "abcdefghijklmnopqrstuvwxyz'?! "]
# Mapping characters to integers
char_to_num = keras.layers.StringLookup(vocabulary=characters, oov_token="")
# Mapping integers back to original characters
num_to_char = keras.layers.StringLookup(
    vocabulary=char_to_num.get_vocabulary(), oov_token="", invert=True
)

print(
    f"The vocabulary is: {char_to_num.get_vocabulary()} "
    f"(size ={char_to_num.vocabulary_size()})"
)

The vocabulary is: ['', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', "'", '?', '!', ' '] (size =31)

2021-09-28 21:16:33.150832: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-09-28 21:16:33.692813: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2021-09-28 21:16:33.692847: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9124 MB memory: -> device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:65:00.0, compute capability: 7.5

Next, we create the function that describes the transformation we apply to each element of our dataset.

# An integer scalar Tensor. The window length in samples.
frame_length = 256
# An integer scalar Tensor. The number of samples to step.
frame_step = 160
# An integer scalar Tensor. The size of the FFT to apply.
# If not provided, uses the smallest power of 2 enclosing frame_length.
fft_length = 384


def encode_single_sample(wav_file, label):
    ###########################################
    ##  Process the Audio
    ##########################################
    # 1. Read wav file
    file = tf.io.read_file(wavs_path + wav_file + ".wav")
    # 2. Decode the wav file
    audio, _ = tf.audio.decode_wav(file)
    audio = tf.squeeze(audio, axis=-1)
    # 3. Change type to float
    audio = tf.cast(audio, tf.float32)
    # 4. Get the spectrogram
    spectrogram = tf.signal.stft(
        audio, frame_length=frame_length, frame_step=frame_step, fft_length=fft_length
    )
    # 5. We only need the magnitude, which can be derived by applying tf.abs
    spectrogram = tf.abs(spectrogram)
    spectrogram = tf.math.pow(spectrogram, 0.5)
    # 6. Normalisation
    means = tf.math.reduce_mean(spectrogram, 1, keepdims=True)
    stddevs = tf.math.reduce_std(spectrogram, 1, keepdims=True)
    spectrogram = (spectrogram - means) / (stddevs + 1e-10)
    ###########################################
    ##  Process the label
    ##########################################
    # 7. Convert label to lowercase
    label = tf.strings.lower(label)
    # 8. Split the label
    label = tf.strings.unicode_split(label, input_encoding="UTF-8")
    # 9. Map the characters in label to numbers
    label = char_to_num(label)
    # 10. Return the spectrogram and the label
    return spectrogram, label
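Before building the datasets, a quick check of the shapes this transformation produces can be useful. With fft_length = 384, each spectrogram frame has fft_length // 2 + 1 = 193 frequency bins, which is exactly the input_dim passed to the model later on. This snippet is not part of the original example; any row of df_train works:

spectrogram, label = encode_single_sample(
    df_train["file_name"][0], df_train["normalized_transcription"][0]
)
print(spectrogram.shape)  # (num_frames, 193); num_frames depends on the clip length
print(label.shape)        # (number of characters in the transcript,)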
Creating Dataset objects

We create a tf.data.Dataset object that yields the transformed elements, in the same order as they appeared in the input.

batch_size = 32
# Define the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices(
    (list(df_train["file_name"]), list(df_train["normalized_transcription"]))
)
train_dataset = (
    train_dataset.map(encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE)
    .padded_batch(batch_size)
    .prefetch(buffer_size=tf.data.AUTOTUNE)
)

# Define the validation dataset
validation_dataset = tf.data.Dataset.from_tensor_slices(
    (list(df_val["file_name"]), list(df_val["normalized_transcription"]))
)
validation_dataset = (
    validation_dataset.map(encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE)
    .padded_batch(batch_size)
    .prefetch(buffer_size=tf.data.AUTOTUNE)
)

Visualize the data

Let's visualize an example in our dataset, including the audio clip, the spectrogram and the corresponding label.

fig = plt.figure(figsize=(8, 5))
for batch in train_dataset.take(1):
    spectrogram = batch[0][0].numpy()
    spectrogram = np.array([np.trim_zeros(x) for x in np.transpose(spectrogram)])
    label = batch[1][0]
    # Spectrogram
    label = tf.strings.reduce_join(num_to_char(label)).numpy().decode("utf-8")
    ax = plt.subplot(2, 1, 1)
    ax.imshow(spectrogram, vmax=1)
    ax.set_title(label)
    ax.axis("off")
    # Wav
    file = tf.io.read_file(wavs_path + list(df_train["file_name"])[0] + ".wav")
    audio, _ = tf.audio.decode_wav(file)
    audio = audio.numpy()
    ax = plt.subplot(2, 1, 2)
    plt.plot(audio)
    ax.set_title("Signal Wave")
    ax.set_xlim(0, len(audio))
    display.display(display.Audio(np.transpose(audio), rate=22050))
    plt.show()

2021-09-28 21:16:34.014170: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)

png

Model

We first define the CTC loss function.

def CTCLoss(y_true, y_pred):
    # Compute the training-time loss value
    batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64")
    input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64")
    label_length = tf.cast(tf.shape(y_true)[1], dtype="int64")

    input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64")
    label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64")

    loss = keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length)
    return loss

We now define our model, similar to DeepSpeech2.

def build_model(input_dim, output_dim, rnn_layers=5, rnn_units=128):
    """Model similar to DeepSpeech2."""
    # Model's input
    input_spectrogram = layers.Input((None, input_dim), name="input")
    # Expand the dimension to use 2D CNN.
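    # (The spectrogram input has shape (batch, time, input_dim); the Reshape
    #  below adds a channels axis, giving (batch, time, input_dim, 1), so the
    #  spectrogram can be processed like an image by the Conv2D layers.)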
x = layers.Reshape((-1, input_dim, 1), name=\"expand_dim\")(input_spectrogram) # Convolution layer 1 x = layers.Conv2D( filters=32, kernel_size=[11, 41], strides=[2, 2], padding=\"same\", use_bias=False, name=\"conv_1\", )(x) x = layers.BatchNormalization(name=\"conv_1_bn\")(x) x = layers.ReLU(name=\"conv_1_relu\")(x) # Convolution layer 2 x = layers.Conv2D( filters=32, kernel_size=[11, 21], strides=[1, 2], padding=\"same\", use_bias=False, name=\"conv_2\", )(x) x = layers.BatchNormalization(name=\"conv_2_bn\")(x) x = layers.ReLU(name=\"conv_2_relu\")(x) # Reshape the resulted volume to feed the RNNs layers x = layers.Reshape((-1, x.shape[-2] * x.shape[-1]))(x) # RNN layers for i in range(1, rnn_layers + 1): recurrent = layers.GRU( units=rnn_units, activation=\"tanh\", recurrent_activation=\"sigmoid\", use_bias=True, return_sequences=True, reset_after=True, name=f\"gru_{i}\", ) x = layers.Bidirectional( recurrent, name=f\"bidirectional_{i}\", merge_mode=\"concat\" )(x) if i < rnn_layers: x = layers.Dropout(rate=0.5)(x) # Dense layer x = layers.Dense(units=rnn_units * 2, name=\"dense_1\")(x) x = layers.ReLU(name=\"dense_1_relu\")(x) x = layers.Dropout(rate=0.5)(x) # Classification layer output = layers.Dense(units=output_dim + 1, activation=\"softmax\")(x) # Model model = keras.Model(input_spectrogram, output, name=\"DeepSpeech_2\") # Optimizer opt = keras.optimizers.Adam(learning_rate=1e-4) # Compile the model and return model.compile(optimizer=opt, loss=CTCLoss) return model # Get the model model = build_model( input_dim=fft_length // 2 + 1, output_dim=char_to_num.vocabulary_size(), rnn_units=512, ) model.summary(line_length=110) Model: \"DeepSpeech_2\" ______________________________________________________________________________________________________________ Layer (type) Output Shape Param # ============================================================================================================== input (InputLayer) [(None, None, 193)] 0 ______________________________________________________________________________________________________________ expand_dim (Reshape) (None, None, 193, 1) 0 ______________________________________________________________________________________________________________ conv_1 (Conv2D) (None, None, 97, 32) 14432 ______________________________________________________________________________________________________________ conv_1_bn (BatchNormalization) (None, None, 97, 32) 128 ______________________________________________________________________________________________________________ conv_1_relu (ReLU) (None, None, 97, 32) 0 ______________________________________________________________________________________________________________ conv_2 (Conv2D) (None, None, 49, 32) 236544 ______________________________________________________________________________________________________________ conv_2_bn (BatchNormalization) (None, None, 49, 32) 128 ______________________________________________________________________________________________________________ conv_2_relu (ReLU) (None, None, 49, 32) 0 ______________________________________________________________________________________________________________ reshape (Reshape) (None, None, 1568) 0 ______________________________________________________________________________________________________________ bidirectional_1 (Bidirectional) (None, None, 1024) 6395904 ______________________________________________________________________________________________________________ dropout (Dropout) (None, None, 1024) 0 
______________________________________________________________________________________________________________ bidirectional_2 (Bidirectional) (None, None, 1024) 4724736 ______________________________________________________________________________________________________________ dropout_1 (Dropout) (None, None, 1024) 0 ______________________________________________________________________________________________________________ bidirectional_3 (Bidirectional) (None, None, 1024) 4724736 ______________________________________________________________________________________________________________ dropout_2 (Dropout) (None, None, 1024) 0 ______________________________________________________________________________________________________________ bidirectional_4 (Bidirectional) (None, None, 1024) 4724736 ______________________________________________________________________________________________________________ dropout_3 (Dropout) (None, None, 1024) 0 ______________________________________________________________________________________________________________ bidirectional_5 (Bidirectional) (None, None, 1024) 4724736 ______________________________________________________________________________________________________________ dense_1 (Dense) (None, None, 1024) 1049600 ______________________________________________________________________________________________________________ dense_1_relu (ReLU) (None, None, 1024) 0 ______________________________________________________________________________________________________________ dropout_4 (Dropout) (None, None, 1024) 0 ______________________________________________________________________________________________________________ dense (Dense) (None, None, 32) 32800 ============================================================================================================== Total params: 26,628,480 Trainable params: 26,628,352 Non-trainable params: 128 ______________________________________________________________________________________________________________ Training and Evaluating # A utility function to decode the output of the network def decode_batch_predictions(pred): input_len = np.ones(pred.shape[0]) * pred.shape[1] # Use greedy search. For complex tasks, you can use beam search results = keras.backend.ctc_decode(pred, input_length=input_len, greedy=True)[0][0] # Iterate over the results and get back the text output_text = [] for result in results: result = tf.strings.reduce_join(num_to_char(result)).numpy().decode(\"utf-8\") output_text.append(result) return output_text # A callback class to output a few transcriptions during training class CallbackEval(keras.callbacks.Callback): \"\"\"Displays a batch of outputs after every epoch.\"\"\" def __init__(self, dataset): super().__init__() self.dataset = dataset def on_epoch_end(self, epoch: int, logs=None): predictions = [] targets = [] for batch in self.dataset: X, y = batch batch_predictions = model.predict(X) batch_predictions = decode_batch_predictions(batch_predictions) predictions.extend(batch_predictions) for label in y: label = ( tf.strings.reduce_join(num_to_char(label)).numpy().decode(\"utf-8\") ) targets.append(label) wer_score = wer(targets, predictions) print(\"-\" * 100) print(f\"Word Error Rate: {wer_score:.4f}\") print(\"-\" * 100) for i in np.random.randint(0, len(predictions), 2): print(f\"Target : {targets[i]}\") print(f\"Prediction: {predictions[i]}\") print(\"-\" * 100) Let's start the training process. # Define the number of epochs. 
epochs = 1 # Callback function to check transcription on the val set. validation_callback = CallbackEval(validation_dataset) # Train the model history = model.fit( train_dataset, validation_data=validation_dataset, epochs=epochs, callbacks=[validation_callback], ) 2021-09-28 21:16:48.067448: I tensorflow/stream_executor/cuda/cuda_dnn.cc:369] Loaded cuDNN version 8100 369/369 [==============================] - 586s 2s/step - loss: 300.4624 - val_loss: 296.1459 ---------------------------------------------------------------------------------------------------- Word Error Rate: 0.9998 ---------------------------------------------------------------------------------------------------- Target : the procession traversed ratcliffe twice halting for a quarter of an hour in front of the victims' dwelling Prediction: s ---------------------------------------------------------------------------------------------------- Target : some difficulty then arose as to gaining admission to the strong room and it was arranged that a man may another custom house clerk Prediction: s ---------------------------------------------------------------------------------------------------- Inference # Let's check results on more validation samples predictions = [] targets = [] for batch in validation_dataset: X, y = batch batch_predictions = model.predict(X) batch_predictions = decode_batch_predictions(batch_predictions) predictions.extend(batch_predictions) for label in y: label = tf.strings.reduce_join(num_to_char(label)).numpy().decode(\"utf-8\") targets.append(label) wer_score = wer(targets, predictions) print(\"-\" * 100) print(f\"Word Error Rate: {wer_score:.4f}\") print(\"-\" * 100) for i in np.random.randint(0, len(predictions), 5): print(f\"Target : {targets[i]}\") print(f\"Prediction: {predictions[i]}\") print(\"-\" * 100) ---------------------------------------------------------------------------------------------------- Word Error Rate: 0.9998 ---------------------------------------------------------------------------------------------------- Target : two of the nine agents returned to their rooms the seven others proceeded to an establishment called the cellar coffee house Prediction: ---------------------------------------------------------------------------------------------------- Target : a scaffold was erected in front of that prison for the execution of several convicts named by the recorder Prediction: sss ---------------------------------------------------------------------------------------------------- Target : it was perpetrated upon a respectable country solicitor Prediction: ss ---------------------------------------------------------------------------------------------------- Target : oswald like all marine recruits received training on the rifle range at distances up to five hundred yards Prediction: ---------------------------------------------------------------------------------------------------- Target : chief rowley testified that agents on duty in such a situation usually stay within the building during their relief Prediction: s ---------------------------------------------------------------------------------------------------- Conclusion In practice, you should train for around 50 epochs or more. Each epoch takes approximately 5-6mn using a GeForce RTX 2080 Ti GPU. The model we trained at 50 epochs has a Word Error Rate (WER) ≈ 16% to 17%. 
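The decode_batch_predictions utility above uses greedy decoding; as noted in its comment, CTC beam search is one way to squeeze out a bit more accuracy at inference time. A hedged sketch follows; the function name and the beam_width value are illustrative assumptions, not part of the original example:

# Beam-search variant of decode_batch_predictions (illustrative sketch).
def decode_batch_predictions_beam(pred, beam_width=100):
    input_len = np.ones(pred.shape[0]) * pred.shape[1]
    # greedy=False switches keras.backend.ctc_decode to beam search.
    results = keras.backend.ctc_decode(
        pred, input_length=input_len, greedy=False, beam_width=beam_width
    )[0][0]
    output_text = []
    for result in results:
        output_text.append(
            tf.strings.reduce_join(num_to_char(result)).numpy().decode("utf-8")
        )
    return output_text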
Some of the transcriptions around epoch 50:

Audio file: LJ017-0009.wav
- Target : sir thomas overbury was undoubtedly poisoned by lord rochester in the reign of james the first
- Prediction: cer thomas overbery was undoubtedly poisoned by lordrochester in the reign of james the first

Audio file: LJ003-0340.wav
- Target : the committee does not seem to have yet understood that newgate could be only and properly replaced
- Prediction: the committee does not seem to have yet understood that newgate could be only and proberly replace

Audio file: LJ011-0136.wav
- Target : still no sentence of death was carried out for the offense and in eighteen thirtytwo
- Prediction: still no sentence of death was carried out for the offense and in eighteen thirtytwo

Training a sequence-to-sequence Transformer for automatic speech recognition.

Introduction

Automatic speech recognition (ASR) consists of transcribing audio speech segments into text. ASR can be treated as a sequence-to-sequence problem, where the audio can be represented as a sequence of feature vectors and the text as a sequence of characters, words, or subword tokens.

For this demonstration, we will use the LJSpeech dataset from the LibriVox project. It consists of short audio clips of a single speaker reading passages from 7 non-fiction books.

Our model will be similar to the original Transformer (both encoder and decoder) as proposed in the paper, "Attention is All You Need".

References:

Attention is All You Need
Very Deep Self-Attention Networks for End-to-End Speech Recognition
Speech Transformers
LJSpeech Dataset

import os
import random
from glob import glob
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

Define the Transformer Input Layer

When processing past target tokens for the decoder, we compute the sum of position embeddings and token embeddings. When processing audio features, we apply convolutional layers to downsample them (via convolution strides) and process local relationships.
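Each of the three Conv1D layers in the SpeechFeatureEmbedding class defined below uses a stride of 2, so the time axis of the audio features is shortened by a factor of 2 ** 3 = 8 before it reaches the encoder, which makes attention over the audio frames much cheaper. Once the class is defined, a quick hedged sanity check can confirm this; the dummy batch shape is an arbitrary assumption matching the 2754-frame, 129-bin spectrograms produced later in this example:

# A dummy batch: 8 utterances, 2754 spectrogram frames, 129 frequency bins each.
dummy_audio = tf.random.normal((8, 2754, 129))
feature_embedding = SpeechFeatureEmbedding(num_hid=64)
print(feature_embedding(dummy_audio).shape)  # (8, 345, 64): 2754 -> 1377 -> 689 -> 345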
class TokenEmbedding(layers.Layer): def __init__(self, num_vocab=1000, maxlen=100, num_hid=64): super().__init__() self.emb = tf.keras.layers.Embedding(num_vocab, num_hid) self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=num_hid) def call(self, x): maxlen = tf.shape(x)[-1] x = self.emb(x) positions = tf.range(start=0, limit=maxlen, delta=1) positions = self.pos_emb(positions) return x + positions class SpeechFeatureEmbedding(layers.Layer): def __init__(self, num_hid=64, maxlen=100): super().__init__() self.conv1 = tf.keras.layers.Conv1D( num_hid, 11, strides=2, padding=\"same\", activation=\"relu\" ) self.conv2 = tf.keras.layers.Conv1D( num_hid, 11, strides=2, padding=\"same\", activation=\"relu\" ) self.conv3 = tf.keras.layers.Conv1D( num_hid, 11, strides=2, padding=\"same\", activation=\"relu\" ) self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=num_hid) def call(self, x): x = self.conv1(x) x = self.conv2(x) return self.conv3(x) Transformer Encoder Layer class TransformerEncoder(layers.Layer): def __init__(self, embed_dim, num_heads, feed_forward_dim, rate=0.1): super().__init__() self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim) self.ffn = keras.Sequential( [ layers.Dense(feed_forward_dim, activation=\"relu\"), layers.Dense(embed_dim), ] ) self.layernorm1 = layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = layers.LayerNormalization(epsilon=1e-6) self.dropout1 = layers.Dropout(rate) self.dropout2 = layers.Dropout(rate) def call(self, inputs, training): attn_output = self.att(inputs, inputs) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(inputs + attn_output) ffn_output = self.ffn(out1) ffn_output = self.dropout2(ffn_output, training=training) return self.layernorm2(out1 + ffn_output) Transformer Decoder Layer class TransformerDecoder(layers.Layer): def __init__(self, embed_dim, num_heads, feed_forward_dim, dropout_rate=0.1): super().__init__() self.layernorm1 = layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = layers.LayerNormalization(epsilon=1e-6) self.layernorm3 = layers.LayerNormalization(epsilon=1e-6) self.self_att = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim ) self.enc_att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim) self.self_dropout = layers.Dropout(0.5) self.enc_dropout = layers.Dropout(0.1) self.ffn_dropout = layers.Dropout(0.1) self.ffn = keras.Sequential( [ layers.Dense(feed_forward_dim, activation=\"relu\"), layers.Dense(embed_dim), ] ) def causal_attention_mask(self, batch_size, n_dest, n_src, dtype): \"\"\"Masks the upper half of the dot product matrix in self attention. This prevents flow of information from future tokens to current token. 1's in the lower triangle, counting from the lower right corner. 
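        For example, with n_dest = n_src = 3, each mask in the batch is
            [[1, 0, 0],
             [1, 1, 0],
             [1, 1, 1]]
        so position i can only attend to positions j <= i.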
\"\"\" i = tf.range(n_dest)[:, None] j = tf.range(n_src) m = i >= j - n_src + n_dest mask = tf.cast(m, dtype) mask = tf.reshape(mask, [1, n_dest, n_src]) mult = tf.concat( [tf.expand_dims(batch_size, -1), tf.constant([1, 1], dtype=tf.int32)], 0 ) return tf.tile(mask, mult) def call(self, enc_out, target): input_shape = tf.shape(target) batch_size = input_shape[0] seq_len = input_shape[1] causal_mask = self.causal_attention_mask(batch_size, seq_len, seq_len, tf.bool) target_att = self.self_att(target, target, attention_mask=causal_mask) target_norm = self.layernorm1(target + self.self_dropout(target_att)) enc_out = self.enc_att(target_norm, enc_out) enc_out_norm = self.layernorm2(self.enc_dropout(enc_out) + target_norm) ffn_out = self.ffn(enc_out_norm) ffn_out_norm = self.layernorm3(enc_out_norm + self.ffn_dropout(ffn_out)) return ffn_out_norm Complete the Transformer model Our model takes audio spectrograms as inputs and predicts a sequence of characters. During training, we give the decoder the target character sequence shifted to the left as input. During inference, the decoder uses its own past predictions to predict the next token. class Transformer(keras.Model): def __init__( self, num_hid=64, num_head=2, num_feed_forward=128, source_maxlen=100, target_maxlen=100, num_layers_enc=4, num_layers_dec=1, num_classes=10, ): super().__init__() self.loss_metric = keras.metrics.Mean(name=\"loss\") self.num_layers_enc = num_layers_enc self.num_layers_dec = num_layers_dec self.target_maxlen = target_maxlen self.num_classes = num_classes self.enc_input = SpeechFeatureEmbedding(num_hid=num_hid, maxlen=source_maxlen) self.dec_input = TokenEmbedding( num_vocab=num_classes, maxlen=target_maxlen, num_hid=num_hid ) self.encoder = keras.Sequential( [self.enc_input] + [ TransformerEncoder(num_hid, num_head, num_feed_forward) for _ in range(num_layers_enc) ] ) for i in range(num_layers_dec): setattr( self, f\"dec_layer_{i}\", TransformerDecoder(num_hid, num_head, num_feed_forward), ) self.classifier = layers.Dense(num_classes) def decode(self, enc_out, target): y = self.dec_input(target) for i in range(self.num_layers_dec): y = getattr(self, f\"dec_layer_{i}\")(enc_out, y) return y def call(self, inputs): source = inputs[0] target = inputs[1] x = self.encoder(source) y = self.decode(x, target) return self.classifier(y) @property def metrics(self): return [self.loss_metric] def train_step(self, batch): \"\"\"Processes one batch inside model.fit().\"\"\" source = batch[\"source\"] target = batch[\"target\"] dec_input = target[:, :-1] dec_target = target[:, 1:] with tf.GradientTape() as tape: preds = self([source, dec_input]) one_hot = tf.one_hot(dec_target, depth=self.num_classes) mask = tf.math.logical_not(tf.math.equal(dec_target, 0)) loss = self.compiled_loss(one_hot, preds, sample_weight=mask) trainable_vars = self.trainable_variables gradients = tape.gradient(loss, trainable_vars) self.optimizer.apply_gradients(zip(gradients, trainable_vars)) self.loss_metric.update_state(loss) return {\"loss\": self.loss_metric.result()} def test_step(self, batch): source = batch[\"source\"] target = batch[\"target\"] dec_input = target[:, :-1] dec_target = target[:, 1:] preds = self([source, dec_input]) one_hot = tf.one_hot(dec_target, depth=self.num_classes) mask = tf.math.logical_not(tf.math.equal(dec_target, 0)) loss = self.compiled_loss(one_hot, preds, sample_weight=mask) self.loss_metric.update_state(loss) return {\"loss\": self.loss_metric.result()} def generate(self, source, target_start_token_idx): 
\"\"\"Performs inference over one batch of inputs using greedy decoding.\"\"\" bs = tf.shape(source)[0] enc = self.encoder(source) dec_input = tf.ones((bs, 1), dtype=tf.int32) * target_start_token_idx dec_logits = [] for i in range(self.target_maxlen - 1): dec_out = self.decode(enc, dec_input) logits = self.classifier(dec_out) logits = tf.argmax(logits, axis=-1, output_type=tf.int32) last_logit = tf.expand_dims(logits[:, -1], axis=-1) dec_logits.append(last_logit) dec_input = tf.concat([dec_input, last_logit], axis=-1) return dec_input Download the dataset Note: This requires ~3.6 GB of disk space and takes ~5 minutes for the extraction of files. keras.utils.get_file( os.path.join(os.getcwd(), \"data.tar.gz\"), \"https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2\", extract=True, archive_format=\"tar\", cache_dir=\".\", ) saveto = \"./datasets/LJSpeech-1.1\" wavs = glob(\"{}/**/*.wav\".format(saveto), recursive=True) id_to_text = {} with open(os.path.join(saveto, \"metadata.csv\"), encoding=\"utf-8\") as f: for line in f: id = line.strip().split(\"|\")[0] text = line.strip().split(\"|\")[2] id_to_text[id] = text def get_data(wavs, id_to_text, maxlen=50): \"\"\" returns mapping of audio paths and transcription texts \"\"\" data = [] for w in wavs: id = w.split(\"/\")[-1].split(\".\")[0] if len(id_to_text[id]) < maxlen: data.append({\"audio\": w, \"text\": id_to_text[id]}) return data Downloading data from https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2 2748579840/2748572632 [==============================] - 57s 0us/step Preprocess the dataset class VectorizeChar: def __init__(self, max_len=50): self.vocab = ( [\"-\", \"#\", \"<\", \">\"] + [chr(i + 96) for i in range(1, 27)] + [\" \", \".\", \",\", \"?\"] ) self.max_len = max_len self.char_to_idx = {} for i, ch in enumerate(self.vocab): self.char_to_idx[ch] = i def __call__(self, text): text = text.lower() text = text[: self.max_len - 2] text = \"<\" + text + \">\" pad_len = self.max_len - len(text) return [self.char_to_idx.get(ch, 1) for ch in text] + [0] * pad_len def get_vocabulary(self): return self.vocab max_target_len = 200 # all transcripts in out data are < 200 characters data = get_data(wavs, id_to_text, max_target_len) vectorizer = VectorizeChar(max_target_len) print(\"vocab size\", len(vectorizer.get_vocabulary())) def create_text_ds(data): texts = [_[\"text\"] for _ in data] text_ds = [vectorizer(t) for t in texts] text_ds = tf.data.Dataset.from_tensor_slices(text_ds) return text_ds def path_to_audio(path): # spectrogram using stft audio = tf.io.read_file(path) audio, _ = tf.audio.decode_wav(audio, 1) audio = tf.squeeze(audio, axis=-1) stfts = tf.signal.stft(audio, frame_length=200, frame_step=80, fft_length=256) x = tf.math.pow(tf.abs(stfts), 0.5) # normalisation means = tf.math.reduce_mean(x, 1, keepdims=True) stddevs = tf.math.reduce_std(x, 1, keepdims=True) x = (x - means) / stddevs audio_len = tf.shape(x)[0] # padding to 10 seconds pad_len = 2754 paddings = tf.constant([[0, pad_len], [0, 0]]) x = tf.pad(x, paddings, \"CONSTANT\")[:pad_len, :] return x def create_audio_ds(data): flist = [_[\"audio\"] for _ in data] audio_ds = tf.data.Dataset.from_tensor_slices(flist) audio_ds = audio_ds.map( path_to_audio, num_parallel_calls=tf.data.AUTOTUNE ) return audio_ds def create_tf_dataset(data, bs=4): audio_ds = create_audio_ds(data) text_ds = create_text_ds(data) ds = tf.data.Dataset.zip((audio_ds, text_ds)) ds = ds.map(lambda x, y: {\"source\": x, \"target\": y}) ds = ds.batch(bs) ds = 
ds.prefetch(tf.data.AUTOTUNE) return ds split = int(len(data) * 0.99) train_data = data[:split] test_data = data[split:] ds = create_tf_dataset(train_data, bs=64) val_ds = create_tf_dataset(test_data, bs=4) vocab size 34 Callbacks to display predictions class DisplayOutputs(keras.callbacks.Callback): def __init__( self, batch, idx_to_token, target_start_token_idx=27, target_end_token_idx=28 ): \"\"\"Displays a batch of outputs after every epoch Args: batch: A test batch containing the keys \"source\" and \"target\" idx_to_token: A List containing the vocabulary tokens corresponding to their indices target_start_token_idx: A start token index in the target vocabulary target_end_token_idx: An end token index in the target vocabulary \"\"\" self.batch = batch self.target_start_token_idx = target_start_token_idx self.target_end_token_idx = target_end_token_idx self.idx_to_char = idx_to_token def on_epoch_end(self, epoch, logs=None): if epoch % 5 != 0: return source = self.batch[\"source\"] target = self.batch[\"target\"].numpy() bs = tf.shape(source)[0] preds = self.model.generate(source, self.target_start_token_idx) preds = preds.numpy() for i in range(bs): target_text = \"\".join([self.idx_to_char[_] for _ in target[i, :]]) prediction = \"\" for idx in preds[i, :]: prediction += self.idx_to_char[idx] if idx == self.target_end_token_idx: break print(f\"target: {target_text.replace('-','')}\") print(f\"prediction: {prediction}\n\") Learning rate schedule class CustomSchedule(keras.optimizers.schedules.LearningRateSchedule): def __init__( self, init_lr=0.00001, lr_after_warmup=0.001, final_lr=0.00001, warmup_epochs=15, decay_epochs=85, steps_per_epoch=203, ): super().__init__() self.init_lr = init_lr self.lr_after_warmup = lr_after_warmup self.final_lr = final_lr self.warmup_epochs = warmup_epochs self.decay_epochs = decay_epochs self.steps_per_epoch = steps_per_epoch def calculate_lr(self, epoch): \"\"\" linear warm up - linear decay \"\"\" warmup_lr = ( self.init_lr + ((self.lr_after_warmup - self.init_lr) / (self.warmup_epochs - 1)) * epoch ) decay_lr = tf.math.maximum( self.final_lr, self.lr_after_warmup - (epoch - self.warmup_epochs) * (self.lr_after_warmup - self.final_lr) / (self.decay_epochs), ) return tf.math.minimum(warmup_lr, decay_lr) def __call__(self, step): epoch = step // self.steps_per_epoch return self.calculate_lr(epoch) Create & train the end-to-end model batch = next(iter(val_ds)) # The vocabulary to convert predicted indices into characters idx_to_char = vectorizer.get_vocabulary() display_cb = DisplayOutputs( batch, idx_to_char, target_start_token_idx=2, target_end_token_idx=3 ) # set the arguments as per vocabulary index for '<' and '>' model = Transformer( num_hid=200, num_head=2, num_feed_forward=400, target_maxlen=max_target_len, num_layers_enc=4, num_layers_dec=1, num_classes=34, ) loss_fn = tf.keras.losses.CategoricalCrossentropy( from_logits=True, label_smoothing=0.1, ) learning_rate = CustomSchedule( init_lr=0.00001, lr_after_warmup=0.001, final_lr=0.00001, warmup_epochs=15, decay_epochs=85, steps_per_epoch=len(ds), ) optimizer = keras.optimizers.Adam(learning_rate) model.compile(optimizer=optimizer, loss=loss_fn) history = model.fit(ds, validation_data=val_ds, callbacks=[display_cb], epochs=1) 203/203 [==============================] - 349s 2s/step - loss: 1.7437 - val_loss: 1.4650 target: prediction: prediction: prediction: prediction: prediction: target: prediction: Inversion of audio from mel-spectograms using the MelGAN architecture and feature matching. 
Introduction Autoregressive vocoders have been ubiquitous for a majority of the history of speech processing, but for most of their existence they have lacked parallelism. MelGAN is a non-autoregressive, fully convolutional vocoder architecture used for purposes ranging from spectral inversion and speech enhancement to present-day state-of-the-art speech synthesis when used as a decoder with models like Tacotron2 or FastSpeech that convert text to mel spectrograms. In this tutorial, we will have a look at the MelGAN architecture and how it can achieve fast spectral inversion, i.e. conversion of spectrograms to audio waves. The MelGAN implemented in this tutorial is similar to the original implementation with only the difference of method of padding for convolutions where we will use 'same' instead of reflect padding. Importing and Defining Hyperparameters !pip install -qqq tensorflow_addons !pip install -qqq tensorflow-io import tensorflow as tf import tensorflow_io as tfio from tensorflow import keras from tensorflow.keras import layers from tensorflow_addons import layers as addon_layers # Setting logger level to avoid input shape warnings tf.get_logger().setLevel(\"ERROR\") # Defining hyperparameters DESIRED_SAMPLES = 8192 LEARNING_RATE_GEN = 1e-5 LEARNING_RATE_DISC = 1e-6 BATCH_SIZE = 16 mse = keras.losses.MeanSquaredError() mae = keras.losses.MeanAbsoluteError() |████████████████████████████████| 1.1 MB 5.1 MB/s |████████████████████████████████| 22.7 MB 1.7 MB/s |████████████████████████████████| 2.1 MB 36.2 MB/s Loading the Dataset This example uses the LJSpeech dataset. The LJSpeech dataset is primarily used for text-to-speech and consists of 13,100 discrete speech samples taken from 7 non-fiction books, having a total length of approximately 24 hours. The MelGAN training is only concerned with the audio waves so we process only the WAV files and ignore the audio annotations. !wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2 !tar -xf /content/LJSpeech-1.1.tar.bz2 --2021-09-16 11:45:24-- https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2 Resolving data.keithito.com (data.keithito.com)... 174.138.79.61 Connecting to data.keithito.com (data.keithito.com)|174.138.79.61|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 2748572632 (2.6G) [application/octet-stream] Saving to: ‘LJSpeech-1.1.tar.bz2’ LJSpeech-1.1.tar.bz 100%[===================>] 2.56G 68.3MB/s in 36s 2021-09-16 11:46:01 (72.2 MB/s) - ‘LJSpeech-1.1.tar.bz2’ saved [2748572632/2748572632] We create a tf.data.Dataset to load and process the audio files on the fly. The preprocess() function takes the file path as input and returns two instances of the wave, one for input and one as the ground truth for comparsion. The input wave will be mapped to a spectrogram using the custom MelSpec layer as shown later in this example. # Splitting the dataset into training and testing splits wavs = tf.io.gfile.glob(\"LJSpeech-1.1/wavs/*.wav\") print(f\"Number of audio files: {len(wavs)}\") # Mapper function for loading the audio. 
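# (decode_wav below is asked for DESIRED_SAMPLES = 8192 samples per clip,
#  which is roughly 8192 / 22050 ≈ 0.37 s of audio at the LJSpeech sample rate.)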
This function returns two instances of the wave def preprocess(filename): audio = tf.audio.decode_wav(tf.io.read_file(filename), 1, DESIRED_SAMPLES).audio return audio, audio # Create tf.data.Dataset objects and apply preprocessing train_dataset = tf.data.Dataset.from_tensor_slices((wavs,)) train_dataset = train_dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE) Number of audio files: 13100 Defining custom layers for MelGAN The MelGAN architecture consists of 3 main modules: The residual block Dilated convolutional block Discriminator block MelGAN Since the network takes a mel-spectrogram as input, we will create an additional custom layer which can convert the raw audio wave to a spectrogram on-the-fly. We use the raw audio tensor from train_dataset and map it to a mel-spectrogram using the MelSpec layer below. # Custom keras layer for on-the-fly audio to spectrogram conversion class MelSpec(layers.Layer): def __init__( self, frame_length=1024, frame_step=256, fft_length=None, sampling_rate=22050, num_mel_channels=80, freq_min=125, freq_max=7600, **kwargs, ): super().__init__(**kwargs) self.frame_length = frame_length self.frame_step = frame_step self.fft_length = fft_length self.sampling_rate = sampling_rate self.num_mel_channels = num_mel_channels self.freq_min = freq_min self.freq_max = freq_max # Defining mel filter. This filter will be multiplied with the STFT output self.mel_filterbank = tf.signal.linear_to_mel_weight_matrix( num_mel_bins=self.num_mel_channels, num_spectrogram_bins=self.frame_length // 2 + 1, sample_rate=self.sampling_rate, lower_edge_hertz=self.freq_min, upper_edge_hertz=self.freq_max, ) def call(self, audio, training=True): # We will only perform the transformation during training. if training: # Taking the Short Time Fourier Transform. Ensure that the audio is padded. # In the paper, the STFT output is padded using the 'REFLECT' strategy. stft = tf.signal.stft( tf.squeeze(audio, -1), self.frame_length, self.frame_step, self.fft_length, pad_end=True, ) # Taking the magnitude of the STFT output magnitude = tf.abs(stft) # Multiplying the Mel-filterbank with the magnitude and scaling it using the db scale mel = tf.matmul(tf.square(magnitude), self.mel_filterbank) log_mel_spec = tfio.audio.dbscale(mel, top_db=80) return log_mel_spec else: return audio def get_config(self): config = super(MelSpec, self).get_config() config.update( { \"frame_length\": self.frame_length, \"frame_step\": self.frame_step, \"fft_length\": self.fft_length, \"sampling_rate\": self.sampling_rate, \"num_mel_channels\": self.num_mel_channels, \"freq_min\": self.freq_min, \"freq_max\": self.freq_max, } ) return config The residual convolutional block extensively uses dilations and has a total receptive field of 27 timesteps per block. The dilations must grow as a power of the kernel_size to ensure reduction of hissing noise in the output. The network proposed by the paper is as follows: ConvBlock # Creating the residual stack block def residual_stack(input, filters): \"\"\"Convolutional residual stack with weight normalization. Args: filter: int, determines filter size for the residual stack. Returns: Residual stack output. 
\"\"\" c1 = addon_layers.WeightNormalization( layers.Conv1D(filters, 3, dilation_rate=1, padding=\"same\"), data_init=False )(input) lrelu1 = layers.LeakyReLU()(c1) c2 = addon_layers.WeightNormalization( layers.Conv1D(filters, 3, dilation_rate=1, padding=\"same\"), data_init=False )(lrelu1) add1 = layers.Add()([c2, input]) lrelu2 = layers.LeakyReLU()(add1) c3 = addon_layers.WeightNormalization( layers.Conv1D(filters, 3, dilation_rate=3, padding=\"same\"), data_init=False )(lrelu2) lrelu3 = layers.LeakyReLU()(c3) c4 = addon_layers.WeightNormalization( layers.Conv1D(filters, 3, dilation_rate=1, padding=\"same\"), data_init=False )(lrelu3) add2 = layers.Add()([add1, c4]) lrelu4 = layers.LeakyReLU()(add2) c5 = addon_layers.WeightNormalization( layers.Conv1D(filters, 3, dilation_rate=9, padding=\"same\"), data_init=False )(lrelu4) lrelu5 = layers.LeakyReLU()(c5) c6 = addon_layers.WeightNormalization( layers.Conv1D(filters, 3, dilation_rate=1, padding=\"same\"), data_init=False )(lrelu5) add3 = layers.Add()([c6, add2]) return add3 Each convolutional block uses the dilations offered by the residual stack and upsamples the input data by the upsampling_factor. # Dilated convolutional block consisting of the Residual stack def conv_block(input, conv_dim, upsampling_factor): \"\"\"Dilated Convolutional Block with weight normalization. Args: conv_dim: int, determines filter size for the block. upsampling_factor: int, scale for upsampling. Returns: Dilated convolution block. \"\"\" conv_t = addon_layers.WeightNormalization( layers.Conv1DTranspose(conv_dim, 16, upsampling_factor, padding=\"same\"), data_init=False, )(input) lrelu1 = layers.LeakyReLU()(conv_t) res_stack = residual_stack(lrelu1, conv_dim) lrelu2 = layers.LeakyReLU()(res_stack) return lrelu2 The discriminator block consists of convolutions and downsampling layers. This block is essential for the implementation of the feature matching technique. Each discriminator outputs a list of feature maps that will be compared during training to compute the feature matching loss. 
def discriminator_block(input): conv1 = addon_layers.WeightNormalization( layers.Conv1D(16, 15, 1, \"same\"), data_init=False )(input) lrelu1 = layers.LeakyReLU()(conv1) conv2 = addon_layers.WeightNormalization( layers.Conv1D(64, 41, 4, \"same\", groups=4), data_init=False )(lrelu1) lrelu2 = layers.LeakyReLU()(conv2) conv3 = addon_layers.WeightNormalization( layers.Conv1D(256, 41, 4, \"same\", groups=16), data_init=False )(lrelu2) lrelu3 = layers.LeakyReLU()(conv3) conv4 = addon_layers.WeightNormalization( layers.Conv1D(1024, 41, 4, \"same\", groups=64), data_init=False )(lrelu3) lrelu4 = layers.LeakyReLU()(conv4) conv5 = addon_layers.WeightNormalization( layers.Conv1D(1024, 41, 4, \"same\", groups=256), data_init=False )(lrelu4) lrelu5 = layers.LeakyReLU()(conv5) conv6 = addon_layers.WeightNormalization( layers.Conv1D(1024, 5, 1, \"same\"), data_init=False )(lrelu5) lrelu6 = layers.LeakyReLU()(conv6) conv7 = addon_layers.WeightNormalization( layers.Conv1D(1, 3, 1, \"same\"), data_init=False )(lrelu6) return [lrelu1, lrelu2, lrelu3, lrelu4, lrelu5, lrelu6, conv7] Create the generator def create_generator(input_shape): inp = keras.Input(input_shape) x = MelSpec()(inp) x = layers.Conv1D(512, 7, padding=\"same\")(x) x = layers.LeakyReLU()(x) x = conv_block(x, 256, 8) x = conv_block(x, 128, 8) x = conv_block(x, 64, 2) x = conv_block(x, 32, 2) x = addon_layers.WeightNormalization( layers.Conv1D(1, 7, padding=\"same\", activation=\"tanh\") )(x) return keras.Model(inp, x) # We use a dynamic input shape for the generator since the model is fully convolutional generator = create_generator((None, 1)) generator.summary() Model: \"model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ mel_spec (MelSpec) (None, None, 80) 0 input_1[0][0] __________________________________________________________________________________________________ conv1d (Conv1D) (None, None, 512) 287232 mel_spec[0][0] __________________________________________________________________________________________________ leaky_re_lu (LeakyReLU) (None, None, 512) 0 conv1d[0][0] __________________________________________________________________________________________________ weight_normalization (WeightNor (None, None, 256) 2097921 leaky_re_lu[0][0] __________________________________________________________________________________________________ leaky_re_lu_1 (LeakyReLU) (None, None, 256) 0 weight_normalization[0][0] __________________________________________________________________________________________________ weight_normalization_1 (WeightN (None, None, 256) 197121 leaky_re_lu_1[0][0] __________________________________________________________________________________________________ leaky_re_lu_2 (LeakyReLU) (None, None, 256) 0 weight_normalization_1[0][0] __________________________________________________________________________________________________ weight_normalization_2 (WeightN (None, None, 256) 197121 leaky_re_lu_2[0][0] __________________________________________________________________________________________________ add (Add) (None, None, 256) 0 weight_normalization_2[0][0] leaky_re_lu_1[0][0] __________________________________________________________________________________________________ 
leaky_re_lu_3 (LeakyReLU) (None, None, 256) 0 add[0][0] __________________________________________________________________________________________________ weight_normalization_3 (WeightN (None, None, 256) 197121 leaky_re_lu_3[0][0] __________________________________________________________________________________________________ leaky_re_lu_4 (LeakyReLU) (None, None, 256) 0 weight_normalization_3[0][0] __________________________________________________________________________________________________ weight_normalization_4 (WeightN (None, None, 256) 197121 leaky_re_lu_4[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, None, 256) 0 add[0][0] weight_normalization_4[0][0] __________________________________________________________________________________________________ leaky_re_lu_5 (LeakyReLU) (None, None, 256) 0 add_1[0][0] __________________________________________________________________________________________________ weight_normalization_5 (WeightN (None, None, 256) 197121 leaky_re_lu_5[0][0] __________________________________________________________________________________________________ leaky_re_lu_6 (LeakyReLU) (None, None, 256) 0 weight_normalization_5[0][0] __________________________________________________________________________________________________ weight_normalization_6 (WeightN (None, None, 256) 197121 leaky_re_lu_6[0][0] __________________________________________________________________________________________________ add_2 (Add) (None, None, 256) 0 weight_normalization_6[0][0] add_1[0][0] __________________________________________________________________________________________________ leaky_re_lu_7 (LeakyReLU) (None, None, 256) 0 add_2[0][0] __________________________________________________________________________________________________ weight_normalization_7 (WeightN (None, None, 128) 524673 leaky_re_lu_7[0][0] __________________________________________________________________________________________________ leaky_re_lu_8 (LeakyReLU) (None, None, 128) 0 weight_normalization_7[0][0] __________________________________________________________________________________________________ weight_normalization_8 (WeightN (None, None, 128) 49409 leaky_re_lu_8[0][0] __________________________________________________________________________________________________ leaky_re_lu_9 (LeakyReLU) (None, None, 128) 0 weight_normalization_8[0][0] __________________________________________________________________________________________________ weight_normalization_9 (WeightN (None, None, 128) 49409 leaky_re_lu_9[0][0] __________________________________________________________________________________________________ add_3 (Add) (None, None, 128) 0 weight_normalization_9[0][0] leaky_re_lu_8[0][0] __________________________________________________________________________________________________ leaky_re_lu_10 (LeakyReLU) (None, None, 128) 0 add_3[0][0] __________________________________________________________________________________________________ weight_normalization_10 (Weight (None, None, 128) 49409 leaky_re_lu_10[0][0] __________________________________________________________________________________________________ leaky_re_lu_11 (LeakyReLU) (None, None, 128) 0 weight_normalization_10[0][0] __________________________________________________________________________________________________ weight_normalization_11 (Weight (None, None, 128) 49409 leaky_re_lu_11[0][0] 
__________________________________________________________________________________________________ add_4 (Add) (None, None, 128) 0 add_3[0][0] weight_normalization_11[0][0] __________________________________________________________________________________________________ leaky_re_lu_12 (LeakyReLU) (None, None, 128) 0 add_4[0][0] __________________________________________________________________________________________________ weight_normalization_12 (Weight (None, None, 128) 49409 leaky_re_lu_12[0][0] __________________________________________________________________________________________________ leaky_re_lu_13 (LeakyReLU) (None, None, 128) 0 weight_normalization_12[0][0] __________________________________________________________________________________________________ weight_normalization_13 (Weight (None, None, 128) 49409 leaky_re_lu_13[0][0] __________________________________________________________________________________________________ add_5 (Add) (None, None, 128) 0 weight_normalization_13[0][0] add_4[0][0] __________________________________________________________________________________________________ leaky_re_lu_14 (LeakyReLU) (None, None, 128) 0 add_5[0][0] __________________________________________________________________________________________________ weight_normalization_14 (Weight (None, None, 64) 131265 leaky_re_lu_14[0][0] __________________________________________________________________________________________________ leaky_re_lu_15 (LeakyReLU) (None, None, 64) 0 weight_normalization_14[0][0] __________________________________________________________________________________________________ weight_normalization_15 (Weight (None, None, 64) 12417 leaky_re_lu_15[0][0] __________________________________________________________________________________________________ leaky_re_lu_16 (LeakyReLU) (None, None, 64) 0 weight_normalization_15[0][0] __________________________________________________________________________________________________ weight_normalization_16 (Weight (None, None, 64) 12417 leaky_re_lu_16[0][0] __________________________________________________________________________________________________ add_6 (Add) (None, None, 64) 0 weight_normalization_16[0][0] leaky_re_lu_15[0][0] __________________________________________________________________________________________________ leaky_re_lu_17 (LeakyReLU) (None, None, 64) 0 add_6[0][0] __________________________________________________________________________________________________ weight_normalization_17 (Weight (None, None, 64) 12417 leaky_re_lu_17[0][0] __________________________________________________________________________________________________ leaky_re_lu_18 (LeakyReLU) (None, None, 64) 0 weight_normalization_17[0][0] __________________________________________________________________________________________________ weight_normalization_18 (Weight (None, None, 64) 12417 leaky_re_lu_18[0][0] __________________________________________________________________________________________________ add_7 (Add) (None, None, 64) 0 add_6[0][0] weight_normalization_18[0][0] __________________________________________________________________________________________________ leaky_re_lu_19 (LeakyReLU) (None, None, 64) 0 add_7[0][0] __________________________________________________________________________________________________ weight_normalization_19 (Weight (None, None, 64) 12417 leaky_re_lu_19[0][0] __________________________________________________________________________________________________ leaky_re_lu_20 
(LeakyReLU) (None, None, 64) 0 weight_normalization_19[0][0] __________________________________________________________________________________________________ weight_normalization_20 (Weight (None, None, 64) 12417 leaky_re_lu_20[0][0] __________________________________________________________________________________________________ add_8 (Add) (None, None, 64) 0 weight_normalization_20[0][0] add_7[0][0] __________________________________________________________________________________________________ leaky_re_lu_21 (LeakyReLU) (None, None, 64) 0 add_8[0][0] __________________________________________________________________________________________________ weight_normalization_21 (Weight (None, None, 32) 32865 leaky_re_lu_21[0][0] __________________________________________________________________________________________________ leaky_re_lu_22 (LeakyReLU) (None, None, 32) 0 weight_normalization_21[0][0] __________________________________________________________________________________________________ weight_normalization_22 (Weight (None, None, 32) 3137 leaky_re_lu_22[0][0] __________________________________________________________________________________________________ leaky_re_lu_23 (LeakyReLU) (None, None, 32) 0 weight_normalization_22[0][0] __________________________________________________________________________________________________ weight_normalization_23 (Weight (None, None, 32) 3137 leaky_re_lu_23[0][0] __________________________________________________________________________________________________ add_9 (Add) (None, None, 32) 0 weight_normalization_23[0][0] leaky_re_lu_22[0][0] __________________________________________________________________________________________________ leaky_re_lu_24 (LeakyReLU) (None, None, 32) 0 add_9[0][0] __________________________________________________________________________________________________ weight_normalization_24 (Weight (None, None, 32) 3137 leaky_re_lu_24[0][0] __________________________________________________________________________________________________ leaky_re_lu_25 (LeakyReLU) (None, None, 32) 0 weight_normalization_24[0][0] __________________________________________________________________________________________________ weight_normalization_25 (Weight (None, None, 32) 3137 leaky_re_lu_25[0][0] __________________________________________________________________________________________________ add_10 (Add) (None, None, 32) 0 add_9[0][0] weight_normalization_25[0][0] __________________________________________________________________________________________________ leaky_re_lu_26 (LeakyReLU) (None, None, 32) 0 add_10[0][0] __________________________________________________________________________________________________ weight_normalization_26 (Weight (None, None, 32) 3137 leaky_re_lu_26[0][0] __________________________________________________________________________________________________ leaky_re_lu_27 (LeakyReLU) (None, None, 32) 0 weight_normalization_26[0][0] __________________________________________________________________________________________________ weight_normalization_27 (Weight (None, None, 32) 3137 leaky_re_lu_27[0][0] __________________________________________________________________________________________________ add_11 (Add) (None, None, 32) 0 weight_normalization_27[0][0] add_10[0][0] __________________________________________________________________________________________________ leaky_re_lu_28 (LeakyReLU) (None, None, 32) 0 add_11[0][0] 
__________________________________________________________________________________________________ weight_normalization_28 (Weight (None, None, 1) 452 leaky_re_lu_28[0][0] ================================================================================================== Total params: 4,646,912 Trainable params: 4,646,658 Non-trainable params: 254 __________________________________________________________________________________________________ Create the discriminator def create_discriminator(input_shape): inp = keras.Input(input_shape) out_map1 = discriminator_block(inp) pool1 = layers.AveragePooling1D()(inp) out_map2 = discriminator_block(pool1) pool2 = layers.AveragePooling1D()(pool1) out_map3 = discriminator_block(pool2) return keras.Model(inp, [out_map1, out_map2, out_map3]) # We use a dynamic input shape for the discriminator # This is done because the input shape for the generator is unknown discriminator = create_discriminator((None, 1)) discriminator.summary() Model: \"model_1\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_2 (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ average_pooling1d (AveragePooli (None, None, 1) 0 input_2[0][0] __________________________________________________________________________________________________ average_pooling1d_1 (AveragePoo (None, None, 1) 0 average_pooling1d[0][0] __________________________________________________________________________________________________ weight_normalization_29 (Weight (None, None, 16) 273 input_2[0][0] __________________________________________________________________________________________________ weight_normalization_36 (Weight (None, None, 16) 273 average_pooling1d[0][0] __________________________________________________________________________________________________ weight_normalization_43 (Weight (None, None, 16) 273 average_pooling1d_1[0][0] __________________________________________________________________________________________________ leaky_re_lu_29 (LeakyReLU) (None, None, 16) 0 weight_normalization_29[0][0] __________________________________________________________________________________________________ leaky_re_lu_35 (LeakyReLU) (None, None, 16) 0 weight_normalization_36[0][0] __________________________________________________________________________________________________ leaky_re_lu_41 (LeakyReLU) (None, None, 16) 0 weight_normalization_43[0][0] __________________________________________________________________________________________________ weight_normalization_30 (Weight (None, None, 64) 10625 leaky_re_lu_29[0][0] __________________________________________________________________________________________________ weight_normalization_37 (Weight (None, None, 64) 10625 leaky_re_lu_35[0][0] __________________________________________________________________________________________________ weight_normalization_44 (Weight (None, None, 64) 10625 leaky_re_lu_41[0][0] __________________________________________________________________________________________________ leaky_re_lu_30 (LeakyReLU) (None, None, 64) 0 weight_normalization_30[0][0] __________________________________________________________________________________________________ leaky_re_lu_36 (LeakyReLU) (None, None, 64) 0 weight_normalization_37[0][0] 
__________________________________________________________________________________________________ leaky_re_lu_42 (LeakyReLU) (None, None, 64) 0 weight_normalization_44[0][0] __________________________________________________________________________________________________ weight_normalization_31 (Weight (None, None, 256) 42497 leaky_re_lu_30[0][0] __________________________________________________________________________________________________ weight_normalization_38 (Weight (None, None, 256) 42497 leaky_re_lu_36[0][0] __________________________________________________________________________________________________ weight_normalization_45 (Weight (None, None, 256) 42497 leaky_re_lu_42[0][0] __________________________________________________________________________________________________ leaky_re_lu_31 (LeakyReLU) (None, None, 256) 0 weight_normalization_31[0][0] __________________________________________________________________________________________________ leaky_re_lu_37 (LeakyReLU) (None, None, 256) 0 weight_normalization_38[0][0] __________________________________________________________________________________________________ leaky_re_lu_43 (LeakyReLU) (None, None, 256) 0 weight_normalization_45[0][0] __________________________________________________________________________________________________ weight_normalization_32 (Weight (None, None, 1024) 169985 leaky_re_lu_31[0][0] __________________________________________________________________________________________________ weight_normalization_39 (Weight (None, None, 1024) 169985 leaky_re_lu_37[0][0] __________________________________________________________________________________________________ weight_normalization_46 (Weight (None, None, 1024) 169985 leaky_re_lu_43[0][0] __________________________________________________________________________________________________ leaky_re_lu_32 (LeakyReLU) (None, None, 1024) 0 weight_normalization_32[0][0] __________________________________________________________________________________________________ leaky_re_lu_38 (LeakyReLU) (None, None, 1024) 0 weight_normalization_39[0][0] __________________________________________________________________________________________________ leaky_re_lu_44 (LeakyReLU) (None, None, 1024) 0 weight_normalization_46[0][0] __________________________________________________________________________________________________ weight_normalization_33 (Weight (None, None, 1024) 169985 leaky_re_lu_32[0][0] __________________________________________________________________________________________________ weight_normalization_40 (Weight (None, None, 1024) 169985 leaky_re_lu_38[0][0] __________________________________________________________________________________________________ weight_normalization_47 (Weight (None, None, 1024) 169985 leaky_re_lu_44[0][0] __________________________________________________________________________________________________ leaky_re_lu_33 (LeakyReLU) (None, None, 1024) 0 weight_normalization_33[0][0] __________________________________________________________________________________________________ leaky_re_lu_39 (LeakyReLU) (None, None, 1024) 0 weight_normalization_40[0][0] __________________________________________________________________________________________________ leaky_re_lu_45 (LeakyReLU) (None, None, 1024) 0 weight_normalization_47[0][0] __________________________________________________________________________________________________ weight_normalization_34 (Weight (None, None, 1024) 5244929 leaky_re_lu_33[0][0] 
__________________________________________________________________________________________________ weight_normalization_41 (Weight (None, None, 1024) 5244929 leaky_re_lu_39[0][0] __________________________________________________________________________________________________ weight_normalization_48 (Weight (None, None, 1024) 5244929 leaky_re_lu_45[0][0] __________________________________________________________________________________________________ leaky_re_lu_34 (LeakyReLU) (None, None, 1024) 0 weight_normalization_34[0][0] __________________________________________________________________________________________________ leaky_re_lu_40 (LeakyReLU) (None, None, 1024) 0 weight_normalization_41[0][0] __________________________________________________________________________________________________ leaky_re_lu_46 (LeakyReLU) (None, None, 1024) 0 weight_normalization_48[0][0] __________________________________________________________________________________________________ weight_normalization_35 (Weight (None, None, 1) 3075 leaky_re_lu_34[0][0] __________________________________________________________________________________________________ weight_normalization_42 (Weight (None, None, 1) 3075 leaky_re_lu_40[0][0] __________________________________________________________________________________________________ weight_normalization_49 (Weight (None, None, 1) 3075 leaky_re_lu_46[0][0] ================================================================================================== Total params: 16,924,107 Trainable params: 16,924,086 Non-trainable params: 21 __________________________________________________________________________________________________ Defining the loss functions Generator Loss The generator uses a combination of two losses. Mean Squared Error: the standard MSE adversarial loss, calculated between ones and the final outputs of the multi-scale discriminator for the generated audio. Feature Matching Loss: this loss extracts the outputs of every intermediate layer of the discriminator for both the generated and the ground-truth audio, and compares each pair of layer outputs using Mean Absolute Error. Discriminator Loss The discriminator uses the Mean Squared Error and compares the real data predictions with ones and the generated predictions with zeros. # Generator loss def generator_loss(real_pred, fake_pred): \"\"\"Loss function for the generator. Args: real_pred: Tensor, output of the ground truth wave passed through the discriminator. fake_pred: Tensor, output of the generator prediction passed through the discriminator. Returns: Loss for the generator. \"\"\" gen_loss = [] for i in range(len(fake_pred)): gen_loss.append(mse(tf.ones_like(fake_pred[i][-1]), fake_pred[i][-1])) return tf.reduce_mean(gen_loss) def feature_matching_loss(real_pred, fake_pred): \"\"\"Implements the feature matching loss. Args: real_pred: Tensor, output of the ground truth wave passed through the discriminator. fake_pred: Tensor, output of the generator prediction passed through the discriminator. Returns: Feature Matching Loss. \"\"\" fm_loss = [] for i in range(len(fake_pred)): for j in range(len(fake_pred[i]) - 1): fm_loss.append(mae(real_pred[i][j], fake_pred[i][j])) return tf.reduce_mean(fm_loss) def discriminator_loss(real_pred, fake_pred): \"\"\"Implements the discriminator loss. Args: real_pred: Tensor, output of the ground truth wave passed through the discriminator. fake_pred: Tensor, output of the generator prediction passed through the discriminator. Returns: Discriminator Loss.
\"\"\" real_loss, fake_loss = [], [] for i in range(len(real_pred)): real_loss.append(mse(tf.ones_like(real_pred[i][-1]), real_pred[i][-1])) fake_loss.append(mse(tf.zeros_like(fake_pred[i][-1]), fake_pred[i][-1])) # Calculating the final discriminator loss after scaling disc_loss = tf.reduce_mean(real_loss) + tf.reduce_mean(fake_loss) return disc_loss Defining the MelGAN model for training. This subclass overrides the train_step() method to implement the training logic. class MelGAN(keras.Model): def __init__(self, generator, discriminator, **kwargs): \"\"\"MelGAN trainer class Args: generator: keras.Model, Generator model discriminator: keras.Model, Discriminator model \"\"\" super().__init__(**kwargs) self.generator = generator self.discriminator = discriminator def compile( self, gen_optimizer, disc_optimizer, generator_loss, feature_matching_loss, discriminator_loss, ): \"\"\"MelGAN compile method. Args: gen_optimizer: keras.optimizer, optimizer to be used for training disc_optimizer: keras.optimizer, optimizer to be used for training generator_loss: callable, loss function for generator feature_matching_loss: callable, loss function for feature matching discriminator_loss: callable, loss function for discriminator \"\"\" super().compile() # Optimizers self.gen_optimizer = gen_optimizer self.disc_optimizer = disc_optimizer # Losses self.generator_loss = generator_loss self.feature_matching_loss = feature_matching_loss self.discriminator_loss = discriminator_loss # Trackers self.gen_loss_tracker = keras.metrics.Mean(name=\"gen_loss\") self.disc_loss_tracker = keras.metrics.Mean(name=\"disc_loss\") def train_step(self, batch): x_batch_train, y_batch_train = batch with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape: # Generating the audio wave gen_audio_wave = generator(x_batch_train, training=True) # Generating the features using the discriminator fake_pred = discriminator(y_batch_train) real_pred = discriminator(gen_audio_wave) # Calculating the generator losses gen_loss = generator_loss(real_pred, fake_pred) fm_loss = feature_matching_loss(real_pred, fake_pred) # Calculating final generator loss gen_fm_loss = gen_loss + 10 * fm_loss # Calculating the discriminator losses disc_loss = discriminator_loss(real_pred, fake_pred) # Calculating and applying the gradients for generator and discriminator grads_gen = gen_tape.gradient(gen_fm_loss, generator.trainable_weights) grads_disc = disc_tape.gradient(disc_loss, discriminator.trainable_weights) gen_optimizer.apply_gradients(zip(grads_gen, generator.trainable_weights)) disc_optimizer.apply_gradients(zip(grads_disc, discriminator.trainable_weights)) self.gen_loss_tracker.update_state(gen_fm_loss) self.disc_loss_tracker.update_state(disc_loss) return { \"gen_loss\": self.gen_loss_tracker.result(), \"disc_loss\": self.disc_loss_tracker.result(), } Training The paper suggests that the training with dynamic shapes takes around 400,000 steps (~500 epochs). For this example, we will run it only for a single epoch (819 steps). Longer training time (greater than 300 epochs) will almost certainly provide better results. 
gen_optimizer = keras.optimizers.Adam( LEARNING_RATE_GEN, beta_1=0.5, beta_2=0.9, clipnorm=1 ) disc_optimizer = keras.optimizers.Adam( LEARNING_RATE_DISC, beta_1=0.5, beta_2=0.9, clipnorm=1 ) # Start training generator = create_generator((None, 1)) discriminator = create_discriminator((None, 1)) mel_gan = MelGAN(generator, discriminator) mel_gan.compile( gen_optimizer, disc_optimizer, generator_loss, feature_matching_loss, discriminator_loss, ) mel_gan.fit( train_dataset.shuffle(200).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE), epochs=1 ) 819/819 [==============================] - 641s 696ms/step - gen_loss: 0.9761 - disc_loss: 0.9350 Testing the model The trained model can now be used for real-time text-to-speech tasks. To test how fast MelGAN inference can be, let us take a sample mel-spectrogram and convert it. Note that the actual model pipeline will not include the MelSpec layer, hence this layer will be disabled during inference. The inference input will be a mel-spectrogram processed similarly to the MelSpec layer configuration. For testing this, we will create a uniformly distributed random tensor to simulate the behavior of the inference pipeline. # Sampling a random tensor to mimic a batch of 128 spectrograms of shape [50, 80] audio_sample = tf.random.uniform([128, 50, 80]) Timing the inference speed of a single sample. Running this, you can see that the average inference time per spectrogram ranges from 8 to 10 milliseconds on a K80 GPU, which is pretty fast. pred = generator.predict(audio_sample, batch_size=32, verbose=1) 4/4 [==============================] - 5s 280ms/step Conclusion MelGAN is a highly effective architecture for spectral inversion: it reaches a Mean Opinion Score (MOS) of 3.61, considerably outperforming the Griffin-Lim algorithm, which has a MOS of just 1.57. At the same time, MelGAN is competitive with the state-of-the-art WaveGlow and WaveNet architectures on text-to-speech and speech enhancement tasks on the LJSpeech and VCTK datasets [1]. This tutorial highlights: The advantages of using dilated convolutions that grow with the filter size Implementation of a custom layer for on-the-fly conversion of audio waves to mel-spectrograms Effectiveness of using the feature matching loss function for training GAN generators. Further reading MelGAN paper (Kundan Kumar et al.) to understand the reasoning behind the architecture and training process For an in-depth understanding of the feature matching loss, you can refer to Improved Techniques for Training GANs (Tim Salimans et al.). Classify speakers using Fast Fourier Transform (FFT) and a 1D Convnet. Introduction This example demonstrates how to create a model to classify speakers from the frequency-domain representation of speech recordings, obtained via Fast Fourier Transform (FFT). It shows the following: How to use tf.data to load, preprocess and feed audio streams into a model How to create a 1D convolutional network with residual connections for audio classification. Our process: We prepare a dataset of speech samples from different speakers, with the speaker as label. We add background noise to these samples to augment our data. We take the FFT of these samples. We train a 1D convnet to predict the correct speaker given a noisy FFT speech sample. Note: This example should be run with TensorFlow 2.3 or higher, or tf-nightly. The noise samples in the dataset need to be resampled to a sampling rate of 16000 Hz before using the code in this example.
In order to do this, you will need to have ffmpeg installed. Setup import os import shutil import numpy as np import tensorflow as tf from tensorflow import keras from pathlib import Path from IPython.display import display, Audio # Get the data from https://www.kaggle.com/kongaevans/speaker-recognition-dataset/download # and save it to the 'Downloads' folder in your HOME directory DATASET_ROOT = os.path.join(os.path.expanduser(\"~\"), \"Downloads/16000_pcm_speeches\") # The folders in which we will put the audio samples and the noise samples AUDIO_SUBFOLDER = \"audio\" NOISE_SUBFOLDER = \"noise\" DATASET_AUDIO_PATH = os.path.join(DATASET_ROOT, AUDIO_SUBFOLDER) DATASET_NOISE_PATH = os.path.join(DATASET_ROOT, NOISE_SUBFOLDER) # Percentage of samples to use for validation VALID_SPLIT = 0.1 # Seed to use when shuffling the dataset and the noise SHUFFLE_SEED = 43 # The sampling rate to use. # This is the one used in all of the audio samples. # We will resample all of the noise to this sampling rate. # This will also be the output size of the audio wave samples # (since all samples are 1 second long) SAMPLING_RATE = 16000 # The factor to multiply the noise with according to: # noisy_sample = sample + noise * prop * scale # where prop = sample_amplitude / noise_amplitude SCALE = 0.5 BATCH_SIZE = 128 EPOCHS = 100 Data preparation The dataset is composed of 7 folders, divided into 2 groups: Speech samples, with 5 folders for 5 different speakers. Each folder contains 1500 audio files, each 1 second long and sampled at 16000 Hz. Background noise samples, with 2 folders and a total of 6 files. These files are longer than 1 second (and originally not sampled at 16000 Hz, but we will resample them to 16000 Hz). We will use those 6 files to create 354 1-second-long noise samples to be used for training.
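Before organizing the files, here is a tiny NumPy illustration of the noise-mixing formula given in the comments above (noisy_sample = sample + noise * prop * scale). The arrays and amplitudes are made up purely for illustration; the actual add_noise function defined later operates on batched tensors.
# Made-up 1-D signals, just to show how SCALE and prop interact.
demo_sample = np.sin(np.linspace(0, 2 * np.pi, 8)).astype('float32')
demo_noise = np.random.uniform(-0.1, 0.1, size=8).astype('float32')
# prop rescales the noise to the amplitude of the speech sample ...
prop = np.abs(demo_sample).max() / np.abs(demo_noise).max()
# ... and SCALE (0.5) controls how loud the rescaled noise ends up.
noisy_demo = demo_sample + demo_noise * prop * SCALE
print('peak |sample|:', np.abs(demo_sample).max())
print('peak |added noise|:', np.abs(demo_noise * prop * SCALE).max())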
Let's sort these 2 categories into 2 folders: An audio folder which will contain all the per-speaker speech sample folders A noise folder which will contain all the noise samples Before sorting the audio and noise categories into 2 folders, we have the following directory structure: main_directory/ ...speaker_a/ ...speaker_b/ ...speaker_c/ ...speaker_d/ ...speaker_e/ ...other/ ..._background_noise_/ After sorting, we end up with the following structure: main_directory/ ...audio/ ......speaker_a/ ......speaker_b/ ......speaker_c/ ......speaker_d/ ......speaker_e/ ...noise/ ......other/ ......_background_noise_/ # If folder `audio` does not exist, create it, otherwise do nothing if not os.path.exists(DATASET_AUDIO_PATH): os.makedirs(DATASET_AUDIO_PATH) # If folder `noise` does not exist, create it, otherwise do nothing if not os.path.exists(DATASET_NOISE_PATH): os.makedirs(DATASET_NOISE_PATH) for folder in os.listdir(DATASET_ROOT): if os.path.isdir(os.path.join(DATASET_ROOT, folder)): if folder in [AUDIO_SUBFOLDER, NOISE_SUBFOLDER]: # If folder is `audio` or `noise`, do nothing continue elif folder in [\"other\", \"_background_noise_\"]: # If folder is one of the folders that contains noise samples, # move it to the `noise` folder shutil.move( os.path.join(DATASET_ROOT, folder), os.path.join(DATASET_NOISE_PATH, folder), ) else: # Otherwise, it should be a speaker folder, then move it to # `audio` folder shutil.move( os.path.join(DATASET_ROOT, folder), os.path.join(DATASET_AUDIO_PATH, folder), ) Noise preparation In this section: We load all noise samples (which should have been resampled to 16000 Hz) We split those noise samples into chunks of 16000 samples, each corresponding to 1 second of audio # Get the list of all noise files noise_paths = [] for subdir in os.listdir(DATASET_NOISE_PATH): subdir_path = Path(DATASET_NOISE_PATH) / subdir if os.path.isdir(subdir_path): noise_paths += [ os.path.join(subdir_path, filepath) for filepath in os.listdir(subdir_path) if filepath.endswith(\".wav\") ] print( \"Found {} files belonging to {} directories\".format( len(noise_paths), len(os.listdir(DATASET_NOISE_PATH)) ) ) Found 6 files belonging to 2 directories Resample all noise samples to 16000 Hz command = ( \"for dir in `ls -1 \" + DATASET_NOISE_PATH + \"`; do \" \"for file in `ls -1 \" + DATASET_NOISE_PATH + \"/$dir/*.wav`; do \" \"sample_rate=`ffprobe -hide_banner -loglevel panic -show_streams \" \"$file | grep sample_rate | cut -f2 -d=`; \" \"if [ $sample_rate -ne 16000 ]; then \" \"ffmpeg -hide_banner -loglevel panic -y \" \"-i $file -ar 16000 temp.wav; \" \"mv temp.wav $file; \" \"fi; done; done\" ) os.system(command) # Split noise into chunks of 16000 each def load_noise_sample(path): sample, sampling_rate = tf.audio.decode_wav( tf.io.read_file(path), desired_channels=1 ) if sampling_rate == SAMPLING_RATE: # Number of slices of 16000 each that can be generated from the noise sample slices = int(sample.shape[0] / SAMPLING_RATE) sample = tf.split(sample[: slices * SAMPLING_RATE], slices) return sample else: print(\"Sampling rate for {} is incorrect. Ignoring it\".format(path)) return None noises = [] for path in noise_paths: sample = load_noise_sample(path) if sample: noises.extend(sample) noises = tf.stack(noises) print( \"{} noise files were split into {} noise samples where each is {} sec. long\".format( len(noise_paths), noises.shape[0], noises.shape[1] // SAMPLING_RATE ) ) 6 noise files were split into 354 noise samples where each is 1 sec.
long Dataset generation def paths_and_labels_to_dataset(audio_paths, labels): \"\"\"Constructs a dataset of audios and labels.\"\"\" path_ds = tf.data.Dataset.from_tensor_slices(audio_paths) audio_ds = path_ds.map(lambda x: path_to_audio(x)) label_ds = tf.data.Dataset.from_tensor_slices(labels) return tf.data.Dataset.zip((audio_ds, label_ds)) def path_to_audio(path): \"\"\"Reads and decodes an audio file.\"\"\" audio = tf.io.read_file(path) audio, _ = tf.audio.decode_wav(audio, 1, SAMPLING_RATE) return audio def add_noise(audio, noises=None, scale=0.5): if noises is not None: # Create a random tensor of the same size as audio ranging from # 0 to the number of noise stream samples that we have. tf_rnd = tf.random.uniform( (tf.shape(audio)[0],), 0, noises.shape[0], dtype=tf.int32 ) noise = tf.gather(noises, tf_rnd, axis=0) # Get the amplitude proportion between the audio and the noise prop = tf.math.reduce_max(audio, axis=1) / tf.math.reduce_max(noise, axis=1) prop = tf.repeat(tf.expand_dims(prop, axis=1), tf.shape(audio)[1], axis=1) # Adding the rescaled noise to audio audio = audio + noise * prop * scale return audio def audio_to_fft(audio): # Since tf.signal.fft applies FFT on the innermost dimension, # we need to squeeze the dimensions and then expand them again # after FFT audio = tf.squeeze(audio, axis=-1) fft = tf.signal.fft( tf.cast(tf.complex(real=audio, imag=tf.zeros_like(audio)), tf.complex64) ) fft = tf.expand_dims(fft, axis=-1) # Return the absolute value of the first half of the FFT # which represents the positive frequencies return tf.math.abs(fft[:, : (audio.shape[1] // 2), :]) # Get the list of audio file paths along with their corresponding labels class_names = os.listdir(DATASET_AUDIO_PATH) print(\"Our class names: {}\".format(class_names,)) audio_paths = [] labels = [] for label, name in enumerate(class_names): print(\"Processing speaker {}\".format(name,)) dir_path = Path(DATASET_AUDIO_PATH) / name speaker_sample_paths = [ os.path.join(dir_path, filepath) for filepath in os.listdir(dir_path) if filepath.endswith(\".wav\") ] audio_paths += speaker_sample_paths labels += [label] * len(speaker_sample_paths) print( \"Found {} files belonging to {} classes.\".format(len(audio_paths), len(class_names)) ) # Shuffle rng = np.random.RandomState(SHUFFLE_SEED) rng.shuffle(audio_paths) rng = np.random.RandomState(SHUFFLE_SEED) rng.shuffle(labels) # Split into training and validation num_val_samples = int(VALID_SPLIT * len(audio_paths)) print(\"Using {} files for training.\".format(len(audio_paths) - num_val_samples)) train_audio_paths = audio_paths[:-num_val_samples] train_labels = labels[:-num_val_samples] print(\"Using {} files for validation.\".format(num_val_samples)) valid_audio_paths = audio_paths[-num_val_samples:] valid_labels = labels[-num_val_samples:] # Create 2 datasets, one for training and the other for validation train_ds = paths_and_labels_to_dataset(train_audio_paths, train_labels) train_ds = train_ds.shuffle(buffer_size=BATCH_SIZE * 8, seed=SHUFFLE_SEED).batch( BATCH_SIZE ) valid_ds = paths_and_labels_to_dataset(valid_audio_paths, valid_labels) valid_ds = valid_ds.shuffle(buffer_size=32 * 8, seed=SHUFFLE_SEED).batch(32) # Add noise to the training set train_ds = train_ds.map( lambda x, y: (add_noise(x, noises, scale=SCALE), y), num_parallel_calls=tf.data.AUTOTUNE, ) # Transform audio wave to the frequency domain using `audio_to_fft` train_ds = train_ds.map( lambda x, y: (audio_to_fft(x), y), num_parallel_calls=tf.data.AUTOTUNE ) train_ds = 
train_ds.prefetch(tf.data.AUTOTUNE) valid_ds = valid_ds.map( lambda x, y: (audio_to_fft(x), y), num_parallel_calls=tf.data.AUTOTUNE ) valid_ds = valid_ds.prefetch(tf.data.AUTOTUNE) Our class names: ['Julia_Gillard', 'Jens_Stoltenberg', 'Nelson_Mandela', 'Magaret_Tarcher', 'Benjamin_Netanyau'] Processing speaker Julia_Gillard Processing speaker Jens_Stoltenberg Processing speaker Nelson_Mandela Processing speaker Magaret_Tarcher Processing speaker Benjamin_Netanyau Found 7501 files belonging to 5 classes. Using 6751 files for training. Using 750 files for validation. Model Definition def residual_block(x, filters, conv_num=3, activation=\"relu\"): # Shortcut s = keras.layers.Conv1D(filters, 1, padding=\"same\")(x) for i in range(conv_num - 1): x = keras.layers.Conv1D(filters, 3, padding=\"same\")(x) x = keras.layers.Activation(activation)(x) x = keras.layers.Conv1D(filters, 3, padding=\"same\")(x) x = keras.layers.Add()([x, s]) x = keras.layers.Activation(activation)(x) return keras.layers.MaxPool1D(pool_size=2, strides=2)(x) def build_model(input_shape, num_classes): inputs = keras.layers.Input(shape=input_shape, name=\"input\") x = residual_block(inputs, 16, 2) x = residual_block(x, 32, 2) x = residual_block(x, 64, 3) x = residual_block(x, 128, 3) x = residual_block(x, 128, 3) x = keras.layers.AveragePooling1D(pool_size=3, strides=3)(x) x = keras.layers.Flatten()(x) x = keras.layers.Dense(256, activation=\"relu\")(x) x = keras.layers.Dense(128, activation=\"relu\")(x) outputs = keras.layers.Dense(num_classes, activation=\"softmax\", name=\"output\")(x) return keras.models.Model(inputs=inputs, outputs=outputs) model = build_model((SAMPLING_RATE // 2, 1), len(class_names)) model.summary() # Compile the model using Adam's default learning rate model.compile( optimizer=\"Adam\", loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"] ) # Add callbacks: # 'EarlyStopping' to stop training when the model is not enhancing anymore # 'ModelCheckPoint' to always keep the model that has the best val_accuracy model_save_filename = \"model.h5\" earlystopping_cb = keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True) mdlcheckpoint_cb = keras.callbacks.ModelCheckpoint( model_save_filename, monitor=\"val_accuracy\", save_best_only=True ) Model: \"model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input (InputLayer) [(None, 8000, 1)] 0 __________________________________________________________________________________________________ conv1d_1 (Conv1D) (None, 8000, 16) 64 input[0][0] __________________________________________________________________________________________________ activation (Activation) (None, 8000, 16) 0 conv1d_1[0][0] __________________________________________________________________________________________________ conv1d_2 (Conv1D) (None, 8000, 16) 784 activation[0][0] __________________________________________________________________________________________________ conv1d (Conv1D) (None, 8000, 16) 32 input[0][0] __________________________________________________________________________________________________ add (Add) (None, 8000, 16) 0 conv1d_2[0][0] conv1d[0][0] __________________________________________________________________________________________________ activation_1 (Activation) (None, 8000, 16) 0 add[0][0] 
__________________________________________________________________________________________________ max_pooling1d (MaxPooling1D) (None, 4000, 16) 0 activation_1[0][0] __________________________________________________________________________________________________ conv1d_4 (Conv1D) (None, 4000, 32) 1568 max_pooling1d[0][0] __________________________________________________________________________________________________ activation_2 (Activation) (None, 4000, 32) 0 conv1d_4[0][0] __________________________________________________________________________________________________ conv1d_5 (Conv1D) (None, 4000, 32) 3104 activation_2[0][0] __________________________________________________________________________________________________ conv1d_3 (Conv1D) (None, 4000, 32) 544 max_pooling1d[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, 4000, 32) 0 conv1d_5[0][0] conv1d_3[0][0] __________________________________________________________________________________________________ activation_3 (Activation) (None, 4000, 32) 0 add_1[0][0] __________________________________________________________________________________________________ max_pooling1d_1 (MaxPooling1D) (None, 2000, 32) 0 activation_3[0][0] __________________________________________________________________________________________________ conv1d_7 (Conv1D) (None, 2000, 64) 6208 max_pooling1d_1[0][0] __________________________________________________________________________________________________ activation_4 (Activation) (None, 2000, 64) 0 conv1d_7[0][0] __________________________________________________________________________________________________ conv1d_8 (Conv1D) (None, 2000, 64) 12352 activation_4[0][0] __________________________________________________________________________________________________ activation_5 (Activation) (None, 2000, 64) 0 conv1d_8[0][0] __________________________________________________________________________________________________ conv1d_9 (Conv1D) (None, 2000, 64) 12352 activation_5[0][0] __________________________________________________________________________________________________ conv1d_6 (Conv1D) (None, 2000, 64) 2112 max_pooling1d_1[0][0] __________________________________________________________________________________________________ add_2 (Add) (None, 2000, 64) 0 conv1d_9[0][0] conv1d_6[0][0] __________________________________________________________________________________________________ activation_6 (Activation) (None, 2000, 64) 0 add_2[0][0] __________________________________________________________________________________________________ max_pooling1d_2 (MaxPooling1D) (None, 1000, 64) 0 activation_6[0][0] __________________________________________________________________________________________________ conv1d_11 (Conv1D) (None, 1000, 128) 24704 max_pooling1d_2[0][0] __________________________________________________________________________________________________ activation_7 (Activation) (None, 1000, 128) 0 conv1d_11[0][0] __________________________________________________________________________________________________ conv1d_12 (Conv1D) (None, 1000, 128) 49280 activation_7[0][0] __________________________________________________________________________________________________ activation_8 (Activation) (None, 1000, 128) 0 conv1d_12[0][0] __________________________________________________________________________________________________ conv1d_13 (Conv1D) (None, 1000, 128) 49280 activation_8[0][0] 
__________________________________________________________________________________________________ conv1d_10 (Conv1D) (None, 1000, 128) 8320 max_pooling1d_2[0][0] __________________________________________________________________________________________________ add_3 (Add) (None, 1000, 128) 0 conv1d_13[0][0] conv1d_10[0][0] __________________________________________________________________________________________________ activation_9 (Activation) (None, 1000, 128) 0 add_3[0][0] __________________________________________________________________________________________________ max_pooling1d_3 (MaxPooling1D) (None, 500, 128) 0 activation_9[0][0] __________________________________________________________________________________________________ conv1d_15 (Conv1D) (None, 500, 128) 49280 max_pooling1d_3[0][0] __________________________________________________________________________________________________ activation_10 (Activation) (None, 500, 128) 0 conv1d_15[0][0] __________________________________________________________________________________________________ conv1d_16 (Conv1D) (None, 500, 128) 49280 activation_10[0][0] __________________________________________________________________________________________________ activation_11 (Activation) (None, 500, 128) 0 conv1d_16[0][0] __________________________________________________________________________________________________ conv1d_17 (Conv1D) (None, 500, 128) 49280 activation_11[0][0] __________________________________________________________________________________________________ conv1d_14 (Conv1D) (None, 500, 128) 16512 max_pooling1d_3[0][0] __________________________________________________________________________________________________ add_4 (Add) (None, 500, 128) 0 conv1d_17[0][0] conv1d_14[0][0] __________________________________________________________________________________________________ activation_12 (Activation) (None, 500, 128) 0 add_4[0][0] __________________________________________________________________________________________________ max_pooling1d_4 (MaxPooling1D) (None, 250, 128) 0 activation_12[0][0] __________________________________________________________________________________________________ average_pooling1d (AveragePooli (None, 83, 128) 0 max_pooling1d_4[0][0] __________________________________________________________________________________________________ flatten (Flatten) (None, 10624) 0 average_pooling1d[0][0] __________________________________________________________________________________________________ dense (Dense) (None, 256) 2720000 flatten[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 128) 32896 dense[0][0] __________________________________________________________________________________________________ output (Dense) (None, 5) 645 dense_1[0][0] ================================================================================================== Total params: 3,088,597 Trainable params: 3,088,597 Non-trainable params: 0 __________________________________________________________________________________________________ Training history = model.fit( train_ds, epochs=EPOCHS, validation_data=valid_ds, callbacks=[earlystopping_cb, mdlcheckpoint_cb], ) Epoch 1/100 53/53 [==============================] - 62s 1s/step - loss: 1.0107 - accuracy: 0.6929 - val_loss: 0.3367 - val_accuracy: 0.8640 Epoch 2/100 53/53 [==============================] - 61s 1s/step - loss: 0.2863 - accuracy: 0.8926 - val_loss: 0.2814 - val_accuracy: 
0.8813 Epoch 3/100 53/53 [==============================] - 61s 1s/step - loss: 0.2293 - accuracy: 0.9104 - val_loss: 0.2054 - val_accuracy: 0.9160 Epoch 4/100 53/53 [==============================] - 63s 1s/step - loss: 0.1750 - accuracy: 0.9320 - val_loss: 0.1668 - val_accuracy: 0.9320 Epoch 5/100 53/53 [==============================] - 61s 1s/step - loss: 0.2044 - accuracy: 0.9206 - val_loss: 0.1658 - val_accuracy: 0.9347 Epoch 6/100 53/53 [==============================] - 61s 1s/step - loss: 0.1407 - accuracy: 0.9415 - val_loss: 0.0888 - val_accuracy: 0.9720 Epoch 7/100 53/53 [==============================] - 61s 1s/step - loss: 0.1047 - accuracy: 0.9600 - val_loss: 0.1113 - val_accuracy: 0.9587 Epoch 8/100 53/53 [==============================] - 60s 1s/step - loss: 0.1077 - accuracy: 0.9573 - val_loss: 0.0819 - val_accuracy: 0.9693 Epoch 9/100 53/53 [==============================] - 61s 1s/step - loss: 0.0998 - accuracy: 0.9640 - val_loss: 0.1586 - val_accuracy: 0.9427 Epoch 10/100 53/53 [==============================] - 63s 1s/step - loss: 0.1004 - accuracy: 0.9621 - val_loss: 0.1504 - val_accuracy: 0.9333 Epoch 11/100 53/53 [==============================] - 60s 1s/step - loss: 0.0902 - accuracy: 0.9695 - val_loss: 0.1016 - val_accuracy: 0.9600 Epoch 12/100 53/53 [==============================] - 61s 1s/step - loss: 0.0773 - accuracy: 0.9714 - val_loss: 0.0647 - val_accuracy: 0.9800 Epoch 13/100 53/53 [==============================] - 63s 1s/step - loss: 0.0797 - accuracy: 0.9699 - val_loss: 0.0485 - val_accuracy: 0.9853 Epoch 14/100 53/53 [==============================] - 61s 1s/step - loss: 0.0750 - accuracy: 0.9727 - val_loss: 0.0601 - val_accuracy: 0.9787 Epoch 15/100 53/53 [==============================] - 62s 1s/step - loss: 0.0629 - accuracy: 0.9766 - val_loss: 0.0476 - val_accuracy: 0.9787 Epoch 16/100 53/53 [==============================] - 63s 1s/step - loss: 0.0564 - accuracy: 0.9793 - val_loss: 0.0565 - val_accuracy: 0.9813 Epoch 17/100 53/53 [==============================] - 61s 1s/step - loss: 0.0545 - accuracy: 0.9809 - val_loss: 0.0325 - val_accuracy: 0.9893 Epoch 18/100 53/53 [==============================] - 61s 1s/step - loss: 0.0415 - accuracy: 0.9859 - val_loss: 0.0776 - val_accuracy: 0.9693 Epoch 19/100 53/53 [==============================] - 61s 1s/step - loss: 0.0537 - accuracy: 0.9810 - val_loss: 0.0647 - val_accuracy: 0.9853 Epoch 20/100 53/53 [==============================] - 62s 1s/step - loss: 0.0556 - accuracy: 0.9802 - val_loss: 0.0500 - val_accuracy: 0.9880 Epoch 21/100 53/53 [==============================] - 63s 1s/step - loss: 0.0486 - accuracy: 0.9828 - val_loss: 0.0470 - val_accuracy: 0.9827 Epoch 22/100 53/53 [==============================] - 61s 1s/step - loss: 0.0479 - accuracy: 0.9825 - val_loss: 0.0918 - val_accuracy: 0.9693 Epoch 23/100 53/53 [==============================] - 61s 1s/step - loss: 0.0446 - accuracy: 0.9834 - val_loss: 0.0429 - val_accuracy: 0.9867 Epoch 24/100 53/53 [==============================] - 61s 1s/step - loss: 0.0309 - accuracy: 0.9889 - val_loss: 0.0473 - val_accuracy: 0.9867 Epoch 25/100 53/53 [==============================] - 63s 1s/step - loss: 0.0341 - accuracy: 0.9895 - val_loss: 0.0244 - val_accuracy: 0.9907 Epoch 26/100 53/53 [==============================] - 60s 1s/step - loss: 0.0357 - accuracy: 0.9874 - val_loss: 0.0289 - val_accuracy: 0.9893 Epoch 27/100 53/53 [==============================] - 61s 1s/step - loss: 0.0331 - accuracy: 0.9893 - val_loss: 0.0246 - val_accuracy: 0.9920 
Epoch 28/100 53/53 [==============================] - 61s 1s/step - loss: 0.0339 - accuracy: 0.9879 - val_loss: 0.0646 - val_accuracy: 0.9787 Epoch 29/100 53/53 [==============================] - 61s 1s/step - loss: 0.0250 - accuracy: 0.9910 - val_loss: 0.0146 - val_accuracy: 0.9947 Epoch 30/100 53/53 [==============================] - 63s 1s/step - loss: 0.0343 - accuracy: 0.9883 - val_loss: 0.0318 - val_accuracy: 0.9893 Epoch 31/100 53/53 [==============================] - 61s 1s/step - loss: 0.0312 - accuracy: 0.9893 - val_loss: 0.0270 - val_accuracy: 0.9880 Epoch 32/100 53/53 [==============================] - 61s 1s/step - loss: 0.0201 - accuracy: 0.9917 - val_loss: 0.0264 - val_accuracy: 0.9893 Epoch 33/100 53/53 [==============================] - 61s 1s/step - loss: 0.0371 - accuracy: 0.9876 - val_loss: 0.0722 - val_accuracy: 0.9773 Epoch 34/100 53/53 [==============================] - 61s 1s/step - loss: 0.0533 - accuracy: 0.9828 - val_loss: 0.0161 - val_accuracy: 0.9947 Epoch 35/100 53/53 [==============================] - 61s 1s/step - loss: 0.0258 - accuracy: 0.9911 - val_loss: 0.0277 - val_accuracy: 0.9867 Epoch 36/100 53/53 [==============================] - 60s 1s/step - loss: 0.0261 - accuracy: 0.9901 - val_loss: 0.0542 - val_accuracy: 0.9787 Epoch 37/100 53/53 [==============================] - 60s 1s/step - loss: 0.0368 - accuracy: 0.9877 - val_loss: 0.0699 - val_accuracy: 0.9813 Epoch 38/100 53/53 [==============================] - 63s 1s/step - loss: 0.0251 - accuracy: 0.9890 - val_loss: 0.0206 - val_accuracy: 0.9907 Epoch 39/100 53/53 [==============================] - 62s 1s/step - loss: 0.0220 - accuracy: 0.9913 - val_loss: 0.0211 - val_accuracy: 0.9947 Evaluation print(model.evaluate(valid_ds)) 24/24 [==============================] - 6s 244ms/step - loss: 0.0146 - accuracy: 0.9947 [0.014629718847572803, 0.9946666955947876] We get ~99% validation accuracy. Demonstration Let's take some samples and: Predict the speaker Compare the prediction with the real speaker Listen to the audio to see that despite the samples being noisy, the model is still pretty accurate SAMPLES_TO_DISPLAY = 10 test_ds = paths_and_labels_to_dataset(valid_audio_paths, valid_labels) test_ds = test_ds.shuffle(buffer_size=BATCH_SIZE * 8, seed=SHUFFLE_SEED).batch( BATCH_SIZE ) test_ds = test_ds.map(lambda x, y: (add_noise(x, noises, scale=SCALE), y)) for audios, labels in test_ds.take(1): # Get the signal FFT ffts = audio_to_fft(audios) # Predict y_pred = model.predict(ffts) # Take random samples rnd = np.random.randint(0, BATCH_SIZE, SAMPLES_TO_DISPLAY) audios = audios.numpy()[rnd, :, :] labels = labels.numpy()[rnd] y_pred = np.argmax(y_pred, axis=-1)[rnd] for index in range(SAMPLES_TO_DISPLAY): # For every sample, print the true and predicted label # as well as run the voice with the noise print( \"Speaker: {} - Predicted: {}\".format( class_names[labels[index]], class_names[y_pred[index]], ) ) display(Audio(audios[index, :, :].squeeze(), rate=SAMPLING_RATE)) Train a 3D convolutional neural network to predict the presence of pneumonia. Introduction This example will show the steps needed to build a 3D convolutional neural network (CNN) to predict the presence of viral pneumonia in computed tomography (CT) scans. 2D CNNs are commonly used to process RGB images (3 channels). A 3D CNN is simply the 3D equivalent: it takes as input a 3D volume or a sequence of 2D frames (e.g. slices in a CT scan). 3D CNNs are a powerful model for learning representations for volumetric data.
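To make the shape handling concrete, a single Conv3D layer consumes a rank-5 batch of volumes, (batch, height, width, depth, channels), and slides its kernel along all three spatial axes. The shapes below are arbitrary illustrative values, not the ones used later in this example.
import tensorflow as tf
from tensorflow.keras import layers

# A dummy batch of two single-channel volumes.
dummy_volumes = tf.random.normal((2, 32, 32, 16, 1))
conv = layers.Conv3D(filters=8, kernel_size=3, activation='relu')
# With 'valid' padding each spatial dimension shrinks by kernel_size - 1.
print(conv(dummy_volumes).shape)  # (2, 30, 30, 14, 8)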
References A survey on Deep Learning Advances on Different 3D Data Representations VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition FusionNet: 3D Object Classification Using Multiple Data Representations Uniformizing Techniques to Process CT scans with 3D CNNs for Tuberculosis Prediction Setup import os import zipfile import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Downloading the MosMedData: Chest CT Scans with COVID-19 Related Findings In this example, we use a subset of the MosMedData: Chest CT Scans with COVID-19 Related Findings. This dataset consists of lung CT scans with COVID-19 related findings, as well as without such findings. We will be using the associated radiological findings of the CT scans as labels to build a classifier to predict the presence of viral pneumonia. Hence, the task is a binary classification problem. # Download url of normal CT scans. url = \"https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-0.zip\" filename = os.path.join(os.getcwd(), \"CT-0.zip\") keras.utils.get_file(filename, url) # Download url of abnormal CT scans. url = \"https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-23.zip\" filename = os.path.join(os.getcwd(), \"CT-23.zip\") keras.utils.get_file(filename, url) # Make a directory to store the data. os.makedirs(\"MosMedData\") # Unzip data in the newly created directory. with zipfile.ZipFile(\"CT-0.zip\", \"r\") as z_fp: z_fp.extractall(\"./MosMedData/\") with zipfile.ZipFile(\"CT-23.zip\", \"r\") as z_fp: z_fp.extractall(\"./MosMedData/\") Downloading data from https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-0.zip 1065476096/1065471431 [==============================] - 236s 0us/step Downloading data from https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-23.zip 271171584/1045162547 [======>.......................] - ETA: 2:56 Loading data and preprocessing The files are provided in the NIfTI format with the extension .nii. To read the scans, we use the nibabel package. You can install the package via pip install nibabel. CT scans store raw voxel intensity in Hounsfield units (HU). They range from -1024 to above 2000 in this dataset. Values above 400 correspond to bone with different radiointensity, so 400 is used as the upper bound. A threshold between -1000 and 400 is commonly used to normalize CT scans. To process the data, we do the following: We first rotate the volumes by 90 degrees, so the orientation is fixed. We scale the HU values to be between 0 and 1. We resize width, height and depth. Here we define several helper functions to process the data. These functions will be used when building training and validation datasets.
import nibabel as nib from scipy import ndimage def read_nifti_file(filepath): \"\"\"Read and load volume\"\"\" # Read file scan = nib.load(filepath) # Get raw data scan = scan.get_fdata() return scan def normalize(volume): \"\"\"Normalize the volume\"\"\" min = -1000 max = 400 volume[volume < min] = min volume[volume > max] = max volume = (volume - min) / (max - min) volume = volume.astype(\"float32\") return volume def resize_volume(img): \"\"\"Resize across z-axis\"\"\" # Set the desired depth desired_depth = 64 desired_width = 128 desired_height = 128 # Get current depth current_depth = img.shape[-1] current_width = img.shape[0] current_height = img.shape[1] # Compute depth factor depth = current_depth / desired_depth width = current_width / desired_width height = current_height / desired_height depth_factor = 1 / depth width_factor = 1 / width height_factor = 1 / height # Rotate img = ndimage.rotate(img, 90, reshape=False) # Resize across z-axis img = ndimage.zoom(img, (width_factor, height_factor, depth_factor), order=1) return img def process_scan(path): \"\"\"Read and resize volume\"\"\" # Read scan volume = read_nifti_file(path) # Normalize volume = normalize(volume) # Resize width, height and depth volume = resize_volume(volume) return volume Let's read the paths of the CT scans from the class directories. # Folder \"CT-0\" consists of CT scans having normal lung tissue, # no CT-signs of viral pneumonia. normal_scan_paths = [ os.path.join(os.getcwd(), \"MosMedData/CT-0\", x) for x in os.listdir(\"MosMedData/CT-0\") ] # Folder \"CT-23\" consists of CT scans having several ground-glass opacifications, # involvement of lung parenchyma. abnormal_scan_paths = [ os.path.join(os.getcwd(), \"MosMedData/CT-23\", x) for x in os.listdir(\"MosMedData/CT-23\") ] print(\"CT scans with normal lung tissue: \" + str(len(normal_scan_paths))) print(\"CT scans with abnormal lung tissue: \" + str(len(abnormal_scan_paths))) CT scans with normal lung tissue: 100 CT scans with abnormal lung tissue: 100 Build train and validation datasets Read the scans from the class directories and assign labels. Downsample the scans to have a shape of 128x128x64. Rescale the raw HU values to the range 0 to 1. Lastly, split the dataset into train and validation subsets. # Read and process the scans. # Each scan is resized across height, width, and depth and rescaled. abnormal_scans = np.array([process_scan(path) for path in abnormal_scan_paths]) normal_scans = np.array([process_scan(path) for path in normal_scan_paths]) # For the CT scans having presence of viral pneumonia # assign 1, for the normal ones assign 0. abnormal_labels = np.array([1 for _ in range(len(abnormal_scans))]) normal_labels = np.array([0 for _ in range(len(normal_scans))]) # Split data in the ratio 70-30 for training and validation. x_train = np.concatenate((abnormal_scans[:70], normal_scans[:70]), axis=0) y_train = np.concatenate((abnormal_labels[:70], normal_labels[:70]), axis=0) x_val = np.concatenate((abnormal_scans[70:], normal_scans[70:]), axis=0) y_val = np.concatenate((abnormal_labels[70:], normal_labels[70:]), axis=0) print( \"Number of samples in train and validation are %d and %d.\" % (x_train.shape[0], x_val.shape[0]) ) Number of samples in train and validation are 140 and 60. Data augmentation The CT scans are also augmented by rotating at random angles during training. Since the data is stored in rank-3 tensors of shape (samples, height, width, depth), we add a dimension of size 1 at axis 4 to be able to perform 3D convolutions on the data.
The new shape is thus (samples, height, width, depth, 1). There are different kinds of preprocessing and augmentation techniques out there; this example shows a few simple ones to get started. import random from scipy import ndimage @tf.function def rotate(volume): \"\"\"Rotate the volume by a few degrees\"\"\" def scipy_rotate(volume): # define some rotation angles angles = [-20, -10, -5, 5, 10, 20] # pick angles at random angle = random.choice(angles) # rotate volume volume = ndimage.rotate(volume, angle, reshape=False) volume[volume < 0] = 0 volume[volume > 1] = 1 return volume augmented_volume = tf.numpy_function(scipy_rotate, [volume], tf.float32) return augmented_volume def train_preprocessing(volume, label): \"\"\"Process training data by rotating and adding a channel.\"\"\" # Rotate volume volume = rotate(volume) volume = tf.expand_dims(volume, axis=3) return volume, label def validation_preprocessing(volume, label): \"\"\"Process validation data by only adding a channel.\"\"\" volume = tf.expand_dims(volume, axis=3) return volume, label While defining the train and validation data loaders, the training data is passed through an augmentation function which randomly rotates the volumes at different angles. Note that both training and validation data are already rescaled to have values between 0 and 1. # Define data loaders. train_loader = tf.data.Dataset.from_tensor_slices((x_train, y_train)) validation_loader = tf.data.Dataset.from_tensor_slices((x_val, y_val)) batch_size = 2 # Augment on the fly during training. train_dataset = ( train_loader.shuffle(len(x_train)) .map(train_preprocessing) .batch(batch_size) .prefetch(2) ) # Only rescale. validation_dataset = ( validation_loader.shuffle(len(x_val)) .map(validation_preprocessing) .batch(batch_size) .prefetch(2) ) Visualize an augmented CT scan. import matplotlib.pyplot as plt data = train_dataset.take(1) images, labels = list(data)[0] images = images.numpy() image = images[0] print(\"Dimension of the CT scan is:\", image.shape) plt.imshow(np.squeeze(image[:, :, 30]), cmap=\"gray\") Dimension of the CT scan is: (128, 128, 64, 1) png Since a CT scan has many slices, let's visualize a montage of the slices. def plot_slices(num_rows, num_columns, width, height, data): \"\"\"Plot a montage of CT slices\"\"\" data = np.rot90(np.array(data)) data = np.transpose(data) data = np.reshape(data, (num_rows, num_columns, width, height)) rows_data, columns_data = data.shape[0], data.shape[1] heights = [slc[0].shape[0] for slc in data] widths = [slc.shape[1] for slc in data[0]] fig_width = 12.0 fig_height = fig_width * sum(heights) / sum(widths) f, axarr = plt.subplots( rows_data, columns_data, figsize=(fig_width, fig_height), gridspec_kw={\"height_ratios\": heights}, ) for i in range(rows_data): for j in range(columns_data): axarr[i, j].imshow(data[i][j], cmap=\"gray\") axarr[i, j].axis(\"off\") plt.subplots_adjust(wspace=0, hspace=0, left=0, right=1, bottom=0, top=1) plt.show() # Visualize montage of slices. # 4 rows and 10 columns for 40 slices of the CT scan. plot_slices(4, 10, 128, 128, image[:, :, :40]) png Define a 3D convolutional neural network To make the model easier to understand, we structure it into blocks. The architecture of the 3D CNN used in this example is based on this paper.
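The model below writes each layer out explicitly. The same repeated Conv3D -> MaxPool3D -> BatchNormalization pattern could also be expressed as a small helper, shown here only as a sketch of the block structure mentioned above (it is not the code used in this example):
from tensorflow.keras import layers  # already imported in the setup above

def conv_block(x, filters):
    # One building block: 3D convolution, spatial downsampling, normalization.
    x = layers.Conv3D(filters=filters, kernel_size=3, activation='relu')(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)
    return x

# Usage sketch, mirroring the filter progression of the model defined below:
# x = conv_block(inputs, 64); x = conv_block(x, 64)
# x = conv_block(x, 128); x = conv_block(x, 256)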
def get_model(width=128, height=128, depth=64): \"\"\"Build a 3D convolutional neural network model.\"\"\" inputs = keras.Input((width, height, depth, 1)) x = layers.Conv3D(filters=64, kernel_size=3, activation=\"relu\")(inputs) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv3D(filters=64, kernel_size=3, activation=\"relu\")(x) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv3D(filters=128, kernel_size=3, activation=\"relu\")(x) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv3D(filters=256, kernel_size=3, activation=\"relu\")(x) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.GlobalAveragePooling3D()(x) x = layers.Dense(units=512, activation=\"relu\")(x) x = layers.Dropout(0.3)(x) outputs = layers.Dense(units=1, activation=\"sigmoid\")(x) # Define the model. model = keras.Model(inputs, outputs, name=\"3dcnn\") return model # Build model. model = get_model(width=128, height=128, depth=64) model.summary() Model: \"3dcnn\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 128, 128, 64, 1)] 0 _________________________________________________________________ conv3d (Conv3D) (None, 126, 126, 62, 64) 1792 _________________________________________________________________ max_pooling3d (MaxPooling3D) (None, 63, 63, 31, 64) 0 _________________________________________________________________ batch_normalization (BatchNo (None, 63, 63, 31, 64) 256 _________________________________________________________________ conv3d_1 (Conv3D) (None, 61, 61, 29, 64) 110656 _________________________________________________________________ max_pooling3d_1 (MaxPooling3 (None, 30, 30, 14, 64) 0 _________________________________________________________________ batch_normalization_1 (Batch (None, 30, 30, 14, 64) 256 _________________________________________________________________ conv3d_2 (Conv3D) (None, 28, 28, 12, 128) 221312 _________________________________________________________________ max_pooling3d_2 (MaxPooling3 (None, 14, 14, 6, 128) 0 _________________________________________________________________ batch_normalization_2 (Batch (None, 14, 14, 6, 128) 512 _________________________________________________________________ conv3d_3 (Conv3D) (None, 12, 12, 4, 256) 884992 _________________________________________________________________ max_pooling3d_3 (MaxPooling3 (None, 6, 6, 2, 256) 0 _________________________________________________________________ batch_normalization_3 (Batch (None, 6, 6, 2, 256) 1024 _________________________________________________________________ global_average_pooling3d (Gl (None, 256) 0 _________________________________________________________________ dense (Dense) (None, 512) 131584 _________________________________________________________________ dropout (Dropout) (None, 512) 0 _________________________________________________________________ dense_1 (Dense) (None, 1) 513 ================================================================= Total params: 1,352,897 Trainable params: 1,351,873 Non-trainable params: 1,024 _________________________________________________________________ Train model # Compile model. 
initial_learning_rate = 0.0001 lr_schedule = keras.optimizers.schedules.ExponentialDecay( initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True ) model.compile( loss=\"binary_crossentropy\", optimizer=keras.optimizers.Adam(learning_rate=lr_schedule), metrics=[\"acc\"], ) # Define callbacks. checkpoint_cb = keras.callbacks.ModelCheckpoint( \"3d_image_classification.h5\", save_best_only=True ) early_stopping_cb = keras.callbacks.EarlyStopping(monitor=\"val_acc\", patience=15) # Train the model, doing validation at the end of each epoch epochs = 100 model.fit( train_dataset, validation_data=validation_dataset, epochs=epochs, shuffle=True, verbose=2, callbacks=[checkpoint_cb, early_stopping_cb], ) Epoch 1/100 70/70 - 12s - loss: 0.7031 - acc: 0.5286 - val_loss: 1.1421 - val_acc: 0.5000 Epoch 2/100 70/70 - 12s - loss: 0.6769 - acc: 0.5929 - val_loss: 1.3491 - val_acc: 0.5000 Epoch 3/100 70/70 - 12s - loss: 0.6543 - acc: 0.6286 - val_loss: 1.5108 - val_acc: 0.5000 Epoch 4/100 70/70 - 12s - loss: 0.6236 - acc: 0.6714 - val_loss: 2.5255 - val_acc: 0.5000 Epoch 5/100 70/70 - 12s - loss: 0.6628 - acc: 0.6000 - val_loss: 1.8446 - val_acc: 0.5000 Epoch 6/100 70/70 - 12s - loss: 0.6621 - acc: 0.6071 - val_loss: 1.9661 - val_acc: 0.5000 Epoch 7/100 70/70 - 12s - loss: 0.6346 - acc: 0.6571 - val_loss: 2.8997 - val_acc: 0.5000 Epoch 8/100 70/70 - 12s - loss: 0.6501 - acc: 0.6071 - val_loss: 1.6101 - val_acc: 0.5000 Epoch 9/100 70/70 - 12s - loss: 0.6065 - acc: 0.6571 - val_loss: 0.8688 - val_acc: 0.6167 Epoch 10/100 70/70 - 12s - loss: 0.5970 - acc: 0.6714 - val_loss: 0.8802 - val_acc: 0.5167 Epoch 11/100 70/70 - 12s - loss: 0.5910 - acc: 0.7143 - val_loss: 0.7282 - val_acc: 0.6333 Epoch 12/100 70/70 - 12s - loss: 0.6147 - acc: 0.6500 - val_loss: 0.5828 - val_acc: 0.7500 Epoch 13/100 70/70 - 12s - loss: 0.5641 - acc: 0.7214 - val_loss: 0.7080 - val_acc: 0.6667 Epoch 14/100 70/70 - 12s - loss: 0.5664 - acc: 0.6857 - val_loss: 0.5641 - val_acc: 0.7000 Epoch 15/100 70/70 - 12s - loss: 0.5924 - acc: 0.6929 - val_loss: 0.7595 - val_acc: 0.6000 Epoch 16/100 70/70 - 12s - loss: 0.5389 - acc: 0.7071 - val_loss: 0.5719 - val_acc: 0.7833 Epoch 17/100 70/70 - 12s - loss: 0.5493 - acc: 0.6714 - val_loss: 0.5234 - val_acc: 0.7500 Epoch 18/100 70/70 - 12s - loss: 0.5050 - acc: 0.7786 - val_loss: 0.7359 - val_acc: 0.6000 Epoch 19/100 70/70 - 12s - loss: 0.5152 - acc: 0.7286 - val_loss: 0.6469 - val_acc: 0.6500 Epoch 20/100 70/70 - 12s - loss: 0.5015 - acc: 0.7786 - val_loss: 0.5651 - val_acc: 0.7333 Epoch 21/100 70/70 - 12s - loss: 0.4975 - acc: 0.7786 - val_loss: 0.8707 - val_acc: 0.5500 Epoch 22/100 70/70 - 12s - loss: 0.4470 - acc: 0.7714 - val_loss: 0.5577 - val_acc: 0.7500 Epoch 23/100 70/70 - 12s - loss: 0.5489 - acc: 0.7071 - val_loss: 0.9929 - val_acc: 0.6500 Epoch 24/100 70/70 - 12s - loss: 0.5045 - acc: 0.7357 - val_loss: 0.5891 - val_acc: 0.7333 Epoch 25/100 70/70 - 12s - loss: 0.5598 - acc: 0.7500 - val_loss: 0.5703 - val_acc: 0.7667 Epoch 26/100 70/70 - 12s - loss: 0.4822 - acc: 0.7429 - val_loss: 0.5631 - val_acc: 0.7333 Epoch 27/100 70/70 - 12s - loss: 0.5572 - acc: 0.7000 - val_loss: 0.6255 - val_acc: 0.6500 Epoch 28/100 70/70 - 12s - loss: 0.4694 - acc: 0.7643 - val_loss: 0.7007 - val_acc: 0.6833 Epoch 29/100 70/70 - 12s - loss: 0.4870 - acc: 0.7571 - val_loss: 1.7148 - val_acc: 0.5667 Epoch 30/100 70/70 - 12s - loss: 0.4794 - acc: 0.7500 - val_loss: 0.5744 - val_acc: 0.7333 Epoch 31/100 70/70 - 12s - loss: 0.4632 - acc: 0.7857 - val_loss: 0.7787 - val_acc: 0.5833 It is important to 
note that the number of samples is very small (only 200) and we don't specify a random seed. As such, you can expect significant variance in the results. The full dataset, which consists of over 1000 CT scans, can be found here. Using the full dataset, an accuracy of 83% was achieved. A variability of 6-7% in the classification performance is observed in both cases. Visualizing model performance Here the model accuracy and loss for the training and the validation sets are plotted. Since the validation set is class-balanced, accuracy provides an unbiased representation of the model's performance. fig, ax = plt.subplots(1, 2, figsize=(20, 3)) ax = ax.ravel() for i, metric in enumerate([\"acc\", \"loss\"]): ax[i].plot(model.history.history[metric]) ax[i].plot(model.history.history[\"val_\" + metric]) ax[i].set_title(\"Model {}\".format(metric)) ax[i].set_xlabel(\"epochs\") ax[i].set_ylabel(metric) ax[i].legend([\"train\", \"val\"]) png Make predictions on a single CT scan # Load best weights. model.load_weights(\"3d_image_classification.h5\") prediction = model.predict(np.expand_dims(x_val[0], axis=0))[0] scores = [1 - prediction[0], prediction[0]] class_names = [\"normal\", \"abnormal\"] for score, name in zip(scores, class_names): print( \"This model is %.2f percent confident that CT scan is %s\" % ((100 * score), name) ) This model is 26.60 percent confident that CT scan is normal This model is 73.40 percent confident that CT scan is abnormal Minimal implementation of volumetric rendering as shown in NeRF. Introduction In this example, we present a minimal implementation of the research paper NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis by Ben Mildenhall et al. The authors have proposed an ingenious way to synthesize novel views of a scene by modelling the volumetric scene function through a neural network. To help you understand this intuitively, let's start with the following question: would it be possible to give a neural network the position of a pixel in an image, and ask the network to predict the color at that position? 2d-train Figure 1: A neural network being given coordinates of an image as input and asked to predict the color at the coordinates. The neural network would hypothetically memorize (overfit on) the image. This means that our neural network would have encoded the entire image in its weights. We could query the neural network with each position, and it would eventually reconstruct the entire image. 2d-test Figure 2: The trained neural network recreates the image from scratch. A question now arises: how do we extend this idea to learn a 3D volumetric scene? Implementing a similar process as above would require knowledge of every voxel (volume pixel). It turns out that this is quite a challenging task. The authors of the paper propose a minimal and elegant way to learn a 3D scene using a few images of the scene. They discard the use of voxels for training. The network learns to model the volumetric scene, thus generating novel views (images) of the 3D scene that the model was not shown at training time. There are a few prerequisites one needs to understand to fully appreciate the process. We structure the example in such a way that you will have all the required knowledge before starting the implementation. Setup # Setting random seed to obtain reproducible results.
import tensorflow as tf tf.random.set_seed(42) import os import glob import imageio import numpy as np from tqdm import tqdm from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt # Initialize global variables. AUTO = tf.data.AUTOTUNE BATCH_SIZE = 5 NUM_SAMPLES = 32 POS_ENCODE_DIMS = 16 EPOCHS = 20 Download and load the data The npz data file contains images, camera poses, and a focal length. The images are taken from multiple camera angles as shown in Figure 3. camera-angles Figure 3: Multiple camera angles Source: NeRF To understand camera poses in this context, we first have to allow ourselves to think of a camera as a mapping between the real world and the 2-D image. mapping Figure 4: 3-D world to 2-D image mapping through a camera Source: Mathworks Consider the following equation: x = PX, where x is the 2-D image point, X is the 3-D world point and P is the camera matrix. P is a 3 x 4 matrix that plays the crucial role of mapping the real world object onto an image plane. The camera matrix is an affine transform matrix that is concatenated with a 3 x 1 column [image height, image width, focal length] to produce the pose matrix. This matrix is of dimensions 3 x 5, where the first 3 x 3 block is in the camera’s point of view. The axes are [down, right, backwards] or [-y, x, z] where the camera is facing forwards -z. camera-mapping Figure 5: The affine transformation. The COLMAP frame is [right, down, forwards] or [x, -y, -z]. Read more about COLMAP here. # Download the data if it does not already exist. file_name = \"tiny_nerf_data.npz\" url = \"https://people.eecs.berkeley.edu/~bmild/nerf/tiny_nerf_data.npz\" if not os.path.exists(file_name): data = keras.utils.get_file(fname=file_name, origin=url) data = np.load(data) images = data[\"images\"] im_shape = images.shape (num_images, H, W, _) = images.shape (poses, focal) = (data[\"poses\"], data[\"focal\"]) # Plot a random image from the dataset for visualization. plt.imshow(images[np.random.randint(low=0, high=num_images)]) plt.show() Downloading data from https://people.eecs.berkeley.edu/~bmild/nerf/tiny_nerf_data.npz 12730368/12727482 [==============================] - 0s 0us/step png Data pipeline Now that you've understood the notion of camera matrix and the mapping from a 3D scene to 2D images, let's talk about the inverse mapping, i.e. from 2D image to the 3D scene. We'll need to talk about volumetric rendering with ray casting and tracing, which are common computer graphics techniques. This section will help you get up to speed with these techniques. Consider an image with N pixels. We shoot a ray through each pixel and sample some points on the ray. A ray is commonly parameterized by the equation r(t) = o + td where t is the parameter, o is the origin and d is the unit directional vector as shown in Figure 6. img Figure 6: r(t) = o + td where t is 3 In Figure 7, we consider a ray, and we sample some random points on the ray. These sample points each have a unique location (x, y, z) and the ray has a viewing angle (theta, phi). The viewing angle is particularly interesting as we can shoot a ray through a single pixel in a lot of different ways, each with a unique viewing angle. Another interesting thing to notice here is the noise that is added to the sampling process. We add uniform noise to each sample so that the samples correspond to a continuous distribution.
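As a standalone illustration (not part of the pipeline that follows), the short sketch below builds jittered sample points along a single hypothetical ray; the near and far bounds of 2.0 and 6.0 are assumed here to match the values used later in this example.

import numpy as np

# A single hypothetical ray: origin o and unit direction d (camera looking along -z).
o = np.array([0.0, 0.0, 0.0])
d = np.array([0.0, 0.0, -1.0])

# Evenly spaced depths between the near and far bounds.
near, far, num_samples = 2.0, 6.0, 5
t = np.linspace(near, far, num_samples)

# Jitter each depth with uniform noise so the samples cover the ray continuously.
t = t + np.random.uniform(size=num_samples) * (far - near) / num_samples

# r(t) = o + t * d gives one 3D sample point per depth value.
points = o + t[:, None] * d
print(points)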
In Figure 7 the blue points are the evenly distributed samples and the white points (t1, t2, t3) are randomly placed between the samples. img Figure 7: Sampling the points from a ray. Figure 8 showcases the entire sampling process in 3D, where you can see the rays coming out of the white image. This means that each pixel will have its corresponding rays and each ray will be sampled at distinct points. 3-d rays Figure 8: Shooting rays from all the pixels of an image in 3-D These sampled points act as the input to the NeRF model. The model is then asked to predict the RGB color and the volume density at that point. 3-Drender Figure 9: Data pipeline Source: NeRF def encode_position(x): \"\"\"Encodes the position into its corresponding Fourier feature. Args: x: The input coordinate. Returns: Fourier features tensors of the position. \"\"\" positions = [x] for i in range(POS_ENCODE_DIMS): for fn in [tf.sin, tf.cos]: positions.append(fn(2.0 ** i * x)) return tf.concat(positions, axis=-1) def get_rays(height, width, focal, pose): \"\"\"Computes origin point and direction vector of rays. Args: height: Height of the image. width: Width of the image. focal: The focal length between the images and the camera. pose: The pose matrix of the camera. Returns: Tuple of origin point and direction vector for rays. \"\"\" # Build a meshgrid for the rays. i, j = tf.meshgrid( tf.range(width, dtype=tf.float32), tf.range(height, dtype=tf.float32), indexing=\"xy\", ) # Normalize the x axis coordinates. transformed_i = (i - width * 0.5) / focal # Normalize the y axis coordinates. transformed_j = (j - height * 0.5) / focal # Create the direction unit vectors. directions = tf.stack([transformed_i, -transformed_j, -tf.ones_like(i)], axis=-1) # Get the camera matrix. camera_matrix = pose[:3, :3] height_width_focal = pose[:3, -1] # Get origins and directions for the rays. transformed_dirs = directions[..., None, :] camera_dirs = transformed_dirs * camera_matrix ray_directions = tf.reduce_sum(camera_dirs, axis=-1) ray_origins = tf.broadcast_to(height_width_focal, tf.shape(ray_directions)) # Return the origins and directions. return (ray_origins, ray_directions) def render_flat_rays(ray_origins, ray_directions, near, far, num_samples, rand=False): \"\"\"Renders the rays and flattens it. Args: ray_origins: The origin points for rays. ray_directions: The direction unit vectors for the rays. near: The near bound of the volumetric scene. far: The far bound of the volumetric scene. num_samples: Number of sample points in a ray. rand: Choice for randomising the sampling strategy. Returns: Tuple of flattened rays and sample points on each rays. \"\"\" # Compute 3D query points. # Equation: r(t) = o+td -> Building the \"t\" here. t_vals = tf.linspace(near, far, num_samples) if rand: # Inject uniform noise into sample space to make the sampling # continuous. shape = list(ray_origins.shape[:-1]) + [num_samples] noise = tf.random.uniform(shape=shape) * (far - near) / num_samples t_vals = t_vals + noise # Equation: r(t) = o + td -> Building the \"r\" here. rays = ray_origins[..., None, :] + ( ray_directions[..., None, :] * t_vals[..., None] ) rays_flat = tf.reshape(rays, [-1, 3]) rays_flat = encode_position(rays_flat) return (rays_flat, t_vals) def map_fn(pose): \"\"\"Maps individual pose to flattened rays and sample points. Args: pose: The pose matrix of the camera. Returns: Tuple of flattened rays and sample points corresponding to the camera pose. 
\"\"\" (ray_origins, ray_directions) = get_rays(height=H, width=W, focal=focal, pose=pose) (rays_flat, t_vals) = render_flat_rays( ray_origins=ray_origins, ray_directions=ray_directions, near=2.0, far=6.0, num_samples=NUM_SAMPLES, rand=True, ) return (rays_flat, t_vals) # Create the training split. split_index = int(num_images * 0.8) # Split the images into training and validation. train_images = images[:split_index] val_images = images[split_index:] # Split the poses into training and validation. train_poses = poses[:split_index] val_poses = poses[split_index:] # Make the training pipeline. train_img_ds = tf.data.Dataset.from_tensor_slices(train_images) train_pose_ds = tf.data.Dataset.from_tensor_slices(train_poses) train_ray_ds = train_pose_ds.map(map_fn, num_parallel_calls=AUTO) training_ds = tf.data.Dataset.zip((train_img_ds, train_ray_ds)) train_ds = ( training_ds.shuffle(BATCH_SIZE) .batch(BATCH_SIZE, drop_remainder=True, num_parallel_calls=AUTO) .prefetch(AUTO) ) # Make the validation pipeline. val_img_ds = tf.data.Dataset.from_tensor_slices(val_images) val_pose_ds = tf.data.Dataset.from_tensor_slices(val_poses) val_ray_ds = val_pose_ds.map(map_fn, num_parallel_calls=AUTO) validation_ds = tf.data.Dataset.zip((val_img_ds, val_ray_ds)) val_ds = ( validation_ds.shuffle(BATCH_SIZE) .batch(BATCH_SIZE, drop_remainder=True, num_parallel_calls=AUTO) .prefetch(AUTO) ) NeRF model The model is a multi-layer perceptron (MLP), with ReLU as its non-linearity. An excerpt from the paper: \"We encourage the representation to be multiview-consistent by restricting the network to predict the volume density sigma as a function of only the location x, while allowing the RGB color c to be predicted as a function of both location and viewing direction. To accomplish this, the MLP first processes the input 3D coordinate x with 8 fully-connected layers (using ReLU activations and 256 channels per layer), and outputs sigma and a 256-dimensional feature vector. This feature vector is then concatenated with the camera ray's viewing direction and passed to one additional fully-connected layer (using a ReLU activation and 128 channels) that output the view-dependent RGB color.\" Here we have gone for a minimal implementation and have used 64 Dense units instead of 256 as mentioned in the paper. def get_nerf_model(num_layers, num_pos): \"\"\"Generates the NeRF neural network. Args: num_layers: The number of MLP layers. num_pos: The number of dimensions of positional encoding. Returns: The [`tf.keras`](https://www.tensorflow.org/api_docs/python/tf/keras) model. \"\"\" inputs = keras.Input(shape=(num_pos, 2 * 3 * POS_ENCODE_DIMS + 3)) x = inputs for i in range(num_layers): x = layers.Dense(units=64, activation=\"relu\")(x) if i % 4 == 0 and i > 0: # Inject residual connection. x = layers.concatenate([x, inputs], axis=-1) outputs = layers.Dense(units=4)(x) return keras.Model(inputs=inputs, outputs=outputs) def render_rgb_depth(model, rays_flat, t_vals, rand=True, train=True): \"\"\"Generates the RGB image and depth map from model prediction. Args: model: The MLP model that is trained to predict the rgb and volume density of the volumetric scene. rays_flat: The flattened rays that serve as the input to the NeRF model. t_vals: The sample points for the rays. rand: Choice to randomise the sampling strategy. train: Whether the model is in the training or testing phase. Returns: Tuple of rgb image and depth map. \"\"\" # Get the predictions from the nerf model and reshape it. 
if train: predictions = model(rays_flat) else: predictions = model.predict(rays_flat) predictions = tf.reshape(predictions, shape=(BATCH_SIZE, H, W, NUM_SAMPLES, 4)) # Slice the predictions into rgb and sigma. rgb = tf.sigmoid(predictions[..., :-1]) sigma_a = tf.nn.relu(predictions[..., -1]) # Get the distance of adjacent intervals. delta = t_vals[..., 1:] - t_vals[..., :-1] # delta shape = (num_samples) # The opacity of each sample is alpha = 1 - exp(-sigma * delta). if rand: delta = tf.concat( [delta, tf.broadcast_to([1e10], shape=(BATCH_SIZE, H, W, 1))], axis=-1 ) alpha = 1.0 - tf.exp(-sigma_a * delta) else: delta = tf.concat( [delta, tf.broadcast_to([1e10], shape=(BATCH_SIZE, 1))], axis=-1 ) alpha = 1.0 - tf.exp(-sigma_a * delta[:, None, None, :]) # Get transmittance: the running product of (1 - alpha), i.e. how much light survives up to each sample. exp_term = 1.0 - alpha epsilon = 1e-10 transmittance = tf.math.cumprod(exp_term + epsilon, axis=-1, exclusive=True) weights = alpha * transmittance # Composite the colors and depths along each ray using the weights. rgb = tf.reduce_sum(weights[..., None] * rgb, axis=-2) if rand: depth_map = tf.reduce_sum(weights * t_vals, axis=-1) else: depth_map = tf.reduce_sum(weights * t_vals[:, None, None], axis=-1) return (rgb, depth_map) Training The training step is implemented as part of a custom keras.Model subclass so that we can make use of the model.fit functionality. class NeRF(keras.Model): def __init__(self, nerf_model): super().__init__() self.nerf_model = nerf_model def compile(self, optimizer, loss_fn): super().compile() self.optimizer = optimizer self.loss_fn = loss_fn self.loss_tracker = keras.metrics.Mean(name=\"loss\") self.psnr_metric = keras.metrics.Mean(name=\"psnr\") def train_step(self, inputs): # Get the images and the rays. (images, rays) = inputs (rays_flat, t_vals) = rays with tf.GradientTape() as tape: # Get the predictions from the model. rgb, _ = render_rgb_depth( model=self.nerf_model, rays_flat=rays_flat, t_vals=t_vals, rand=True ) loss = self.loss_fn(images, rgb) # Get the trainable variables. trainable_variables = self.nerf_model.trainable_variables # Get the gradients of the trainable variables with respect to the loss. gradients = tape.gradient(loss, trainable_variables) # Apply the grads and optimize the model. self.optimizer.apply_gradients(zip(gradients, trainable_variables)) # Get the PSNR of the reconstructed images and the source images. psnr = tf.image.psnr(images, rgb, max_val=1.0) # Compute our own metrics self.loss_tracker.update_state(loss) self.psnr_metric.update_state(psnr) return {\"loss\": self.loss_tracker.result(), \"psnr\": self.psnr_metric.result()} def test_step(self, inputs): # Get the images and the rays. (images, rays) = inputs (rays_flat, t_vals) = rays # Get the predictions from the model. rgb, _ = render_rgb_depth( model=self.nerf_model, rays_flat=rays_flat, t_vals=t_vals, rand=True ) loss = self.loss_fn(images, rgb) # Get the PSNR of the reconstructed images and the source images. psnr = tf.image.psnr(images, rgb, max_val=1.0) # Compute our own metrics self.loss_tracker.update_state(loss) self.psnr_metric.update_state(psnr) return {\"loss\": self.loss_tracker.result(), \"psnr\": self.psnr_metric.result()} @property def metrics(self): return [self.loss_tracker, self.psnr_metric] test_imgs, test_rays = next(iter(train_ds)) test_rays_flat, test_t_vals = test_rays loss_list = [] class TrainMonitor(keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): loss = logs[\"loss\"] loss_list.append(loss) test_recons_images, depth_maps = render_rgb_depth( model=self.model.nerf_model, rays_flat=test_rays_flat, t_vals=test_t_vals, rand=True, train=False, ) # Plot the rgb, depth and the loss plot.
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(20, 5)) ax[0].imshow(keras.preprocessing.image.array_to_img(test_recons_images[0])) ax[0].set_title(f\"Predicted Image: {epoch:03d}\") ax[1].imshow(keras.preprocessing.image.array_to_img(depth_maps[0, ..., None])) ax[1].set_title(f\"Depth Map: {epoch:03d}\") ax[2].plot(loss_list) ax[2].set_xticks(np.arange(0, EPOCHS + 1, 5.0)) ax[2].set_title(f\"Loss Plot: {epoch:03d}\") fig.savefig(f\"images/{epoch:03d}.png\") plt.show() plt.close() num_pos = H * W * NUM_SAMPLES nerf_model = get_nerf_model(num_layers=8, num_pos=num_pos) model = NeRF(nerf_model) model.compile( optimizer=keras.optimizers.Adam(), loss_fn=keras.losses.MeanSquaredError() ) # Create a directory to save the images during training. if not os.path.exists(\"images\"): os.makedirs(\"images\") model.fit( train_ds, validation_data=val_ds, batch_size=BATCH_SIZE, epochs=EPOCHS, callbacks=[TrainMonitor()], steps_per_epoch=split_index // BATCH_SIZE, ) def create_gif(path_to_images, name_gif): filenames = glob.glob(path_to_images) filenames = sorted(filenames) images = [] for filename in tqdm(filenames): images.append(imageio.imread(filename)) kargs = {\"duration\": 0.25} imageio.mimsave(name_gif, images, \"GIF\", **kargs) create_gif(\"images/*.png\", \"training.gif\") Epoch 1/20 16/16 [==============================] - 15s 753ms/step - loss: 0.1134 - psnr: 9.7278 - val_loss: 0.0683 - val_psnr: 12.0722 png Epoch 2/20 16/16 [==============================] - 13s 752ms/step - loss: 0.0648 - psnr: 12.4200 - val_loss: 0.0664 - val_psnr: 12.1765 png Epoch 3/20 16/16 [==============================] - 13s 746ms/step - loss: 0.0607 - psnr: 12.5281 - val_loss: 0.0673 - val_psnr: 12.0121 png Epoch 4/20 16/16 [==============================] - 13s 758ms/step - loss: 0.0595 - psnr: 12.7050 - val_loss: 0.0646 - val_psnr: 12.2768 png Epoch 5/20 16/16 [==============================] - 13s 755ms/step - loss: 0.0583 - psnr: 12.7522 - val_loss: 0.0613 - val_psnr: 12.5351 png Epoch 6/20 16/16 [==============================] - 13s 749ms/step - loss: 0.0545 - psnr: 13.0654 - val_loss: 0.0553 - val_psnr: 12.9512 png Epoch 7/20 16/16 [==============================] - 13s 744ms/step - loss: 0.0480 - psnr: 13.6313 - val_loss: 0.0444 - val_psnr: 13.7838 png Epoch 8/20 16/16 [==============================] - 13s 763ms/step - loss: 0.0359 - psnr: 14.8570 - val_loss: 0.0342 - val_psnr: 14.8823 png Epoch 9/20 16/16 [==============================] - 13s 758ms/step - loss: 0.0299 - psnr: 15.5374 - val_loss: 0.0287 - val_psnr: 15.6171 png Epoch 10/20 16/16 [==============================] - 13s 779ms/step - loss: 0.0273 - psnr: 15.9051 - val_loss: 0.0266 - val_psnr: 15.9319 png Epoch 11/20 16/16 [==============================] - 13s 736ms/step - loss: 0.0255 - psnr: 16.1422 - val_loss: 0.0250 - val_psnr: 16.1568 png Epoch 12/20 16/16 [==============================] - 13s 746ms/step - loss: 0.0236 - psnr: 16.5074 - val_loss: 0.0233 - val_psnr: 16.4793 png Epoch 13/20 16/16 [==============================] - 13s 755ms/step - loss: 0.0217 - psnr: 16.8391 - val_loss: 0.0210 - val_psnr: 16.8950 png Epoch 14/20 16/16 [==============================] - 13s 741ms/step - loss: 0.0197 - psnr: 17.2245 - val_loss: 0.0187 - val_psnr: 17.3766 png Epoch 15/20 16/16 [==============================] - 13s 739ms/step - loss: 0.0179 - psnr: 17.6246 - val_loss: 0.0179 - val_psnr: 17.5445 png Epoch 16/20 16/16 [==============================] - 13s 735ms/step - loss: 0.0175 - psnr: 17.6998 - val_loss: 0.0180 - val_psnr: 17.5154 png 
Epoch 17/20 16/16 [==============================] - 13s 741ms/step - loss: 0.0167 - psnr: 17.9393 - val_loss: 0.0156 - val_psnr: 18.1784 png Epoch 18/20 16/16 [==============================] - 13s 750ms/step - loss: 0.0150 - psnr: 18.3875 - val_loss: 0.0151 - val_psnr: 18.2811 png Epoch 19/20 16/16 [==============================] - 13s 755ms/step - loss: 0.0141 - psnr: 18.6476 - val_loss: 0.0139 - val_psnr: 18.6216 png Epoch 20/20 16/16 [==============================] - 14s 777ms/step - loss: 0.0139 - psnr: 18.7131 - val_loss: 0.0137 - val_psnr: 18.7259 png 100%|██████████| 20/20 [00:00<00:00, 57.59it/s] Visualize the training step Here we see the training step. With the decreasing loss, the rendered image and the depth maps are getting better. In your local system, you will see the training.gif file generated. training-20 Inference In this section, we ask the model to build novel views of the scene. The model was given 106 views of the scene in the training step. The collections of training images cannot contain each and every angle of the scene. A trained model can represent the entire 3-D scene with a sparse set of training images. Here we provide different poses to the model and ask for it to give us the 2-D image corresponding to that camera view. If we infer the model for all the 360-degree views, it should provide an overview of the entire scenery from all around. # Get the trained NeRF model and infer. nerf_model = model.nerf_model test_recons_images, depth_maps = render_rgb_depth( model=nerf_model, rays_flat=test_rays_flat, t_vals=test_t_vals, rand=True, train=False, ) # Create subplots. fig, axes = plt.subplots(nrows=5, ncols=3, figsize=(10, 20)) for ax, ori_img, recons_img, depth_map in zip( axes, test_imgs, test_recons_images, depth_maps ): ax[0].imshow(keras.preprocessing.image.array_to_img(ori_img)) ax[0].set_title(\"Original\") ax[1].imshow(keras.preprocessing.image.array_to_img(recons_img)) ax[1].set_title(\"Reconstructed\") ax[2].imshow( keras.preprocessing.image.array_to_img(depth_map[..., None]), cmap=\"inferno\" ) ax[2].set_title(\"Depth Map\") png Render 3D Scene Here we will synthesize novel 3D views and stitch all of them together to render a video encompassing the 360-degree view. def get_translation_t(t): \"\"\"Get the translation matrix for movement in t.\"\"\" matrix = [ [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, t], [0, 0, 0, 1], ] return tf.convert_to_tensor(matrix, dtype=tf.float32) def get_rotation_phi(phi): \"\"\"Get the rotation matrix for movement in phi.\"\"\" matrix = [ [1, 0, 0, 0], [0, tf.cos(phi), -tf.sin(phi), 0], [0, tf.sin(phi), tf.cos(phi), 0], [0, 0, 0, 1], ] return tf.convert_to_tensor(matrix, dtype=tf.float32) def get_rotation_theta(theta): \"\"\"Get the rotation matrix for movement in theta.\"\"\" matrix = [ [tf.cos(theta), 0, -tf.sin(theta), 0], [0, 1, 0, 0], [tf.sin(theta), 0, tf.cos(theta), 0], [0, 0, 0, 1], ] return tf.convert_to_tensor(matrix, dtype=tf.float32) def pose_spherical(theta, phi, t): \"\"\" Get the camera to world matrix for the corresponding theta, phi and t. \"\"\" c2w = get_translation_t(t) c2w = get_rotation_phi(phi / 180.0 * np.pi) @ c2w c2w = get_rotation_theta(theta / 180.0 * np.pi) @ c2w c2w = np.array([[-1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]) @ c2w return c2w rgb_frames = [] batch_flat = [] batch_t = [] # Iterate over different theta value and generate scenes. for index, theta in tqdm(enumerate(np.linspace(0.0, 360.0, 120, endpoint=False))): # Get the camera to world matrix. 
c2w = pose_spherical(theta, -30.0, 4.0) # Compute the rays for this camera pose. ray_oris, ray_dirs = get_rays(H, W, focal, c2w) rays_flat, t_vals = render_flat_rays( ray_oris, ray_dirs, near=2.0, far=6.0, num_samples=NUM_SAMPLES, rand=False ) if index % BATCH_SIZE == 0 and index > 0: batched_flat = tf.stack(batch_flat, axis=0) batch_flat = [rays_flat] batched_t = tf.stack(batch_t, axis=0) batch_t = [t_vals] rgb, _ = render_rgb_depth( nerf_model, batched_flat, batched_t, rand=False, train=False ) temp_rgb = [np.clip(255 * img, 0.0, 255.0).astype(np.uint8) for img in rgb] rgb_frames = rgb_frames + temp_rgb else: batch_flat.append(rays_flat) batch_t.append(t_vals) rgb_video = \"rgb_video.mp4\" imageio.mimwrite(rgb_video, rgb_frames, fps=30, quality=7, macro_block_size=None) 120it [00:12, 9.24it/s] Visualize the video Here we can see the rendered 360-degree view of the scene. The model has successfully learned the entire volumetric space through the sparse set of images in only 20 epochs. You can view the rendered video saved locally, named rgb_video.mp4. rendered-video Conclusion We have produced a minimal implementation of NeRF to provide an intuition of its core ideas and methodology. This method has been used in various other works in the computer graphics space. We would like to encourage our readers to use this code as an example and play with the hyperparameters and visualize the outputs. Below we have also provided the outputs of the model trained for more epochs. Epochs GIF of the training step 100 100-epoch-training 200 200-epoch-training Reference NeRF repository: The official repository for NeRF. NeRF paper: The paper on NeRF. Manim Repository: We have used manim to build all the animations. Mathworks: Mathworks for the camera calibration article. Mathew's video: A great video on NeRF. Compact Convolutional Transformers As discussed in the Vision Transformers (ViT) paper, a Transformer-based architecture for vision typically requires a larger dataset than usual, as well as a longer pre-training schedule. ImageNet-1k (which has about a million images) is considered to fall under the medium-sized data regime with respect to ViTs. This is primarily because, unlike CNNs, ViTs (or a typical Transformer-based architecture) do not have well-informed inductive biases (such as convolutions for processing images). This begs the question: can't we combine the benefits of convolution and the benefits of Transformers in a single network architecture? These benefits include parameter-efficiency and self-attention to process long-range and global dependencies (interactions between different regions in an image). In Escaping the Big Data Paradigm with Compact Transformers, Hassani et al. present an approach for doing exactly this. They propose the Compact Convolutional Transformer (CCT) architecture. In this example, we will work on an implementation of CCT and we will see how well it performs on the CIFAR-10 dataset. If you are unfamiliar with the concept of self-attention or Transformers, you can read this chapter from François Chollet's book Deep Learning with Python. This example uses code snippets from another example, Image classification with Vision Transformer.
This example requires TensorFlow 2.5 or higher, as well as TensorFlow Addons, which can be installed using the following command: !pip install -U -q tensorflow-addons  |████████████████████████████████| 686kB 5.4MB/s [?25h Imports from tensorflow.keras import layers from tensorflow import keras import matplotlib.pyplot as plt import tensorflow_addons as tfa import tensorflow as tf import numpy as np Hyperparameters and constants positional_emb = True conv_layers = 2 projection_dim = 128 num_heads = 2 transformer_units = [ projection_dim, projection_dim, ] transformer_layers = 2 stochastic_depth_rate = 0.1 learning_rate = 0.001 weight_decay = 0.0001 batch_size = 128 num_epochs = 30 image_size = 32 Load CIFAR-10 dataset num_classes = 10 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) print(f\"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}\") print(f\"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}\") Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz 170500096/170498071 [==============================] - 11s 0us/step x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 10) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 10) The CCT tokenizer The first recipe introduced by the CCT authors is the tokenizer for processing the images. In a standard ViT, images are organized into uniform non-overlapping patches. This eliminates the boundary-level information present in between different patches. This is important for a neural network to effectively exploit the locality information. The figure below presents an illustration of how images are organized into patches. We already know that convolutions are quite good at exploiting locality information. So, based on this, the authors introduce an all-convolution mini-network to produce image patches. class CCTTokenizer(layers.Layer): def __init__( self, kernel_size=3, stride=1, padding=1, pooling_kernel_size=3, pooling_stride=2, num_conv_layers=conv_layers, num_output_channels=[64, 128], positional_emb=positional_emb, **kwargs, ): super(CCTTokenizer, self).__init__(**kwargs) # This is our tokenizer. self.conv_model = keras.Sequential() for i in range(num_conv_layers): self.conv_model.add( layers.Conv2D( num_output_channels[i], kernel_size, stride, padding=\"valid\", use_bias=False, activation=\"relu\", kernel_initializer=\"he_normal\", ) ) self.conv_model.add(layers.ZeroPadding2D(padding)) self.conv_model.add( layers.MaxPool2D(pooling_kernel_size, pooling_stride, \"same\") ) self.positional_emb = positional_emb def call(self, images): outputs = self.conv_model(images) # After passing the images through our mini-network the spatial dimensions # are flattened to form sequences. reshaped = tf.reshape( outputs, (-1, tf.shape(outputs)[1] * tf.shape(outputs)[2], tf.shape(outputs)[-1]), ) return reshaped def positional_embedding(self, image_size): # Positional embeddings are optional in CCT. Here, we calculate # the number of sequences and initialize an `Embedding` layer to # compute the positional embeddings later. 
if self.positional_emb: dummy_inputs = tf.ones((1, image_size, image_size, 3)) dummy_outputs = self.call(dummy_inputs) sequence_length = tf.shape(dummy_outputs)[1] projection_dim = tf.shape(dummy_outputs)[-1] embed_layer = layers.Embedding( input_dim=sequence_length, output_dim=projection_dim ) return embed_layer, sequence_length else: return None Stochastic depth for regularization Stochastic depth is a regularization technique that randomly drops a set of layers. During inference, the layers are kept as they are. It is similar to Dropout, except that it operates on a block of layers rather than on the individual nodes of a layer. In CCT, stochastic depth is used just before the residual blocks of a Transformers encoder. # Referred from: github.com:rwightman/pytorch-image-models. class StochasticDepth(layers.Layer): def __init__(self, drop_prop, **kwargs): super(StochasticDepth, self).__init__(**kwargs) self.drop_prob = drop_prop def call(self, x, training=None): if training: keep_prob = 1 - self.drop_prob shape = (tf.shape(x)[0],) + (1,) * (len(tf.shape(x)) - 1) random_tensor = keep_prob + tf.random.uniform(shape, 0, 1) random_tensor = tf.floor(random_tensor) return (x / keep_prob) * random_tensor return x MLP for the Transformers encoder def mlp(x, hidden_units, dropout_rate): for units in hidden_units: x = layers.Dense(units, activation=tf.nn.gelu)(x) x = layers.Dropout(dropout_rate)(x) return x Data augmentation In the original paper, the authors use AutoAugment to induce stronger regularization. For this example, we will be using standard geometric augmentations like random cropping and flipping. # Note the rescaling layer. These layers have pre-defined inference behavior. data_augmentation = keras.Sequential( [ layers.Rescaling(scale=1.0 / 255), layers.RandomCrop(image_size, image_size), layers.RandomFlip(\"horizontal\"), ], name=\"data_augmentation\", ) The final CCT model Another recipe introduced in CCT is attention pooling or sequence pooling. In ViT, only the feature map corresponding to the class token is pooled and is then used for the subsequent classification task (or any other downstream task). In CCT, outputs from the Transformers encoder are weighted and then passed on to the final task-specific layer (in this example, we do classification). def create_cct_model( image_size=image_size, input_shape=input_shape, num_heads=num_heads, projection_dim=projection_dim, transformer_units=transformer_units, ): inputs = layers.Input(input_shape) # Augment data. augmented = data_augmentation(inputs) # Encode patches. cct_tokenizer = CCTTokenizer() encoded_patches = cct_tokenizer(augmented) # Apply positional embedding. if positional_emb: pos_embed, seq_length = cct_tokenizer.positional_embedding(image_size) positions = tf.range(start=0, limit=seq_length, delta=1) position_embeddings = pos_embed(positions) encoded_patches += position_embeddings # Calculate Stochastic Depth probabilities. dpr = [x for x in np.linspace(0, stochastic_depth_rate, transformer_layers)] # Create multiple layers of the Transformer block. for i in range(transformer_layers): # Layer normalization 1. x1 = layers.LayerNormalization(epsilon=1e-5)(encoded_patches) # Create a multi-head attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=projection_dim, dropout=0.1 )(x1, x1) # Skip connection 1. attention_output = StochasticDepth(dpr[i])(attention_output) x2 = layers.Add()([attention_output, encoded_patches]) # Layer normalization 2.
x3 = layers.LayerNormalization(epsilon=1e-5)(x2) # MLP. x3 = mlp(x3, hidden_units=transformer_units, dropout_rate=0.1) # Skip connection 2. x3 = StochasticDepth(dpr[i])(x3) encoded_patches = layers.Add()([x3, x2]) # Apply sequence pooling. representation = layers.LayerNormalization(epsilon=1e-5)(encoded_patches) attention_weights = tf.nn.softmax(layers.Dense(1)(representation), axis=1) weighted_representation = tf.matmul( attention_weights, representation, transpose_a=True ) weighted_representation = tf.squeeze(weighted_representation, -2) # Classify outputs. logits = layers.Dense(num_classes)(weighted_representation) # Create the Keras model. model = keras.Model(inputs=inputs, outputs=logits) return model Model training and evaluation def run_experiment(model): optimizer = tfa.optimizers.AdamW(learning_rate=0.001, weight_decay=0.0001) model.compile( optimizer=optimizer, loss=keras.losses.CategoricalCrossentropy( from_logits=True, label_smoothing=0.1 ), metrics=[ keras.metrics.CategoricalAccuracy(name=\"accuracy\"), keras.metrics.TopKCategoricalAccuracy(5, name=\"top-5-accuracy\"), ], ) checkpoint_filepath = \"/tmp/checkpoint\" checkpoint_callback = keras.callbacks.ModelCheckpoint( checkpoint_filepath, monitor=\"val_accuracy\", save_best_only=True, save_weights_only=True, ) history = model.fit( x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs, validation_split=0.1, callbacks=[checkpoint_callback], ) model.load_weights(checkpoint_filepath) _, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") print(f\"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%\") return history cct_model = create_cct_model() history = run_experiment(cct_model) Epoch 1/30 352/352 [==============================] - 10s 18ms/step - loss: 1.9181 - accuracy: 0.3277 - top-5-accuracy: 0.8296 - val_loss: 1.7123 - val_accuracy: 0.4250 - val_top-5-accuracy: 0.9028 Epoch 2/30 352/352 [==============================] - 6s 16ms/step - loss: 1.5725 - accuracy: 0.5010 - top-5-accuracy: 0.9295 - val_loss: 1.5026 - val_accuracy: 0.5530 - val_top-5-accuracy: 0.9364 Epoch 3/30 352/352 [==============================] - 6s 16ms/step - loss: 1.4492 - accuracy: 0.5633 - top-5-accuracy: 0.9476 - val_loss: 1.3744 - val_accuracy: 0.6038 - val_top-5-accuracy: 0.9558 Epoch 4/30 352/352 [==============================] - 6s 16ms/step - loss: 1.3658 - accuracy: 0.6055 - top-5-accuracy: 0.9576 - val_loss: 1.3258 - val_accuracy: 0.6148 - val_top-5-accuracy: 0.9648 Epoch 5/30 352/352 [==============================] - 6s 16ms/step - loss: 1.3142 - accuracy: 0.6302 - top-5-accuracy: 0.9640 - val_loss: 1.2723 - val_accuracy: 0.6468 - val_top-5-accuracy: 0.9710 Epoch 6/30 352/352 [==============================] - 6s 16ms/step - loss: 1.2729 - accuracy: 0.6489 - top-5-accuracy: 0.9684 - val_loss: 1.2490 - val_accuracy: 0.6640 - val_top-5-accuracy: 0.9704 Epoch 7/30 352/352 [==============================] - 6s 16ms/step - loss: 1.2371 - accuracy: 0.6664 - top-5-accuracy: 0.9711 - val_loss: 1.1822 - val_accuracy: 0.6906 - val_top-5-accuracy: 0.9744 Epoch 8/30 352/352 [==============================] - 6s 16ms/step - loss: 1.1899 - accuracy: 0.6942 - top-5-accuracy: 0.9735 - val_loss: 1.1799 - val_accuracy: 0.6982 - val_top-5-accuracy: 0.9768 Epoch 9/30 352/352 [==============================] - 6s 16ms/step - loss: 1.1706 - accuracy: 0.6972 - top-5-accuracy: 0.9767 - val_loss: 1.1390 - val_accuracy: 0.7148 - val_top-5-accuracy: 0.9768 Epoch 10/30 352/352 
[==============================] - 6s 16ms/step - loss: 1.1524 - accuracy: 0.7054 - top-5-accuracy: 0.9783 - val_loss: 1.1803 - val_accuracy: 0.7000 - val_top-5-accuracy: 0.9740 Epoch 11/30 352/352 [==============================] - 6s 16ms/step - loss: 1.1219 - accuracy: 0.7222 - top-5-accuracy: 0.9798 - val_loss: 1.1066 - val_accuracy: 0.7254 - val_top-5-accuracy: 0.9812 Epoch 12/30 352/352 [==============================] - 6s 16ms/step - loss: 1.1029 - accuracy: 0.7287 - top-5-accuracy: 0.9811 - val_loss: 1.0844 - val_accuracy: 0.7388 - val_top-5-accuracy: 0.9814 Epoch 13/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0841 - accuracy: 0.7380 - top-5-accuracy: 0.9825 - val_loss: 1.1159 - val_accuracy: 0.7280 - val_top-5-accuracy: 0.9792 Epoch 14/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0677 - accuracy: 0.7462 - top-5-accuracy: 0.9832 - val_loss: 1.0862 - val_accuracy: 0.7444 - val_top-5-accuracy: 0.9834 Epoch 15/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0511 - accuracy: 0.7535 - top-5-accuracy: 0.9846 - val_loss: 1.0613 - val_accuracy: 0.7494 - val_top-5-accuracy: 0.9832 Epoch 16/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0377 - accuracy: 0.7608 - top-5-accuracy: 0.9854 - val_loss: 1.0379 - val_accuracy: 0.7606 - val_top-5-accuracy: 0.9834 Epoch 17/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0304 - accuracy: 0.7650 - top-5-accuracy: 0.9849 - val_loss: 1.0602 - val_accuracy: 0.7562 - val_top-5-accuracy: 0.9814 Epoch 18/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0121 - accuracy: 0.7746 - top-5-accuracy: 0.9869 - val_loss: 1.0430 - val_accuracy: 0.7630 - val_top-5-accuracy: 0.9834 Epoch 19/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0037 - accuracy: 0.7760 - top-5-accuracy: 0.9872 - val_loss: 1.0951 - val_accuracy: 0.7460 - val_top-5-accuracy: 0.9826 Epoch 20/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9964 - accuracy: 0.7805 - top-5-accuracy: 0.9871 - val_loss: 1.0683 - val_accuracy: 0.7538 - val_top-5-accuracy: 0.9834 Epoch 21/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9838 - accuracy: 0.7850 - top-5-accuracy: 0.9886 - val_loss: 1.0185 - val_accuracy: 0.7770 - val_top-5-accuracy: 0.9876 Epoch 22/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9742 - accuracy: 0.7904 - top-5-accuracy: 0.9894 - val_loss: 1.0253 - val_accuracy: 0.7738 - val_top-5-accuracy: 0.9838 Epoch 23/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9662 - accuracy: 0.7935 - top-5-accuracy: 0.9889 - val_loss: 1.0107 - val_accuracy: 0.7786 - val_top-5-accuracy: 0.9860 Epoch 24/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9549 - accuracy: 0.7994 - top-5-accuracy: 0.9897 - val_loss: 1.0089 - val_accuracy: 0.7790 - val_top-5-accuracy: 0.9852 Epoch 25/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9522 - accuracy: 0.8018 - top-5-accuracy: 0.9896 - val_loss: 1.0214 - val_accuracy: 0.7780 - val_top-5-accuracy: 0.9866 Epoch 26/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9469 - accuracy: 0.8023 - top-5-accuracy: 0.9897 - val_loss: 0.9993 - val_accuracy: 0.7816 - val_top-5-accuracy: 0.9882 Epoch 27/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9463 - accuracy: 0.8022 - top-5-accuracy: 0.9906 - val_loss: 1.0071 - val_accuracy: 0.7848 - val_top-5-accuracy: 0.9850 Epoch 
28/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9336 - accuracy: 0.8077 - top-5-accuracy: 0.9909 - val_loss: 1.0113 - val_accuracy: 0.7868 - val_top-5-accuracy: 0.9856 Epoch 29/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9352 - accuracy: 0.8071 - top-5-accuracy: 0.9909 - val_loss: 1.0073 - val_accuracy: 0.7856 - val_top-5-accuracy: 0.9830 Epoch 30/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9273 - accuracy: 0.8112 - top-5-accuracy: 0.9908 - val_loss: 1.0144 - val_accuracy: 0.7792 - val_top-5-accuracy: 0.9836 313/313 [==============================] - 2s 6ms/step - loss: 1.0396 - accuracy: 0.7676 - top-5-accuracy: 0.9839 Test accuracy: 76.76% Test top 5 accuracy: 98.39% Let's now visualize the training progress of the model. plt.plot(history.history[\"loss\"], label=\"train_loss\") plt.plot(history.history[\"val_loss\"], label=\"val_loss\") plt.xlabel(\"Epochs\") plt.ylabel(\"Loss\") plt.title(\"Train and Validation Losses Over Epochs\", fontsize=14) plt.legend() plt.grid() plt.show() png The CCT model we just trained has just 0.4 million parameters, and it gets us to ~78% top-1 accuracy within 30 epochs. The plot above shows no signs of overfitting either. This means we can train this network for longer (perhaps with a bit more regularization) and may obtain even better performance. This performance can be further improved by additional recipes such as a cosine decay learning rate schedule, or other data augmentation techniques such as AutoAugment, MixUp or Cutmix. With these modifications, the authors present 95.1% top-1 accuracy on the CIFAR-10 dataset. The authors also present a number of experiments to study how the number of convolution blocks, Transformers layers, etc. affect the final performance of CCTs. For comparison, a ViT model takes about 4.7 million parameters and 100 epochs of training to reach a top-1 accuracy of 78.22% on the CIFAR-10 dataset. You can refer to this notebook to learn about the experimental setup. The authors also demonstrate the performance of Compact Convolutional Transformers on NLP tasks and they report competitive results there. Training with consistency regularization for robustness against data distribution shifts. Deep learning models excel in many image recognition tasks when the data is independent and identically distributed (i.i.d.). However, they can suffer from performance degradation caused by subtle distribution shifts in the input data (such as random noise, contrast change, and blurring). So, naturally, the question arises: why? As discussed in A Fourier Perspective on Model Robustness in Computer Vision, there's no reason for deep learning models to be robust against such shifts. Standard model training procedures (such as standard image classification training workflows) don't enable a model to learn beyond what's fed to it in the form of training data. In this example, we will be training an image classification model enforcing a sense of consistency inside it by doing the following: Train a standard image classification model. Train an equal or larger model on a noisy version of the dataset (augmented using RandAugment). To do this, we will first obtain predictions of the previous model on the clean images of the dataset. We will then use these predictions and train the second model to match these predictions on the noisy variant of the same images.
This is identical to the workflow of Knowledge Distillation, but since the student model is of equal or larger size, this process is also referred to as Self-Training. This overall training workflow finds its roots in works like FixMatch, Unsupervised Data Augmentation for Consistency Training, and Noisy Student Training. Since this training process encourages a model to yield consistent predictions for clean as well as noisy images, it's often referred to as consistency training or training with consistency regularization. Although this example focuses on using consistency training to enhance the robustness of models to common corruptions, it can also serve as a template for performing weakly supervised learning. This example requires TensorFlow 2.4 or higher, as well as TensorFlow Hub and TensorFlow Models, which can be installed using the following command: !pip install -q tf-models-official tensorflow-addons Imports and setup from official.vision.image_classification.augment import RandAugment from tensorflow.keras import layers import tensorflow as tf import tensorflow_addons as tfa import matplotlib.pyplot as plt tf.random.set_seed(42) Define hyperparameters AUTO = tf.data.AUTOTUNE BATCH_SIZE = 128 EPOCHS = 5 CROP_TO = 72 RESIZE_TO = 96 Load the CIFAR-10 dataset (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() val_samples = 49500 new_train_x, new_y_train = x_train[: val_samples + 1], y_train[: val_samples + 1] val_x, val_y = x_train[val_samples:], y_train[val_samples:] Create TensorFlow Dataset objects # Initialize `RandAugment` object with 2 layers of # augmentation transforms and strength of 9. augmenter = RandAugment(num_layers=2, magnitude=9) For training the teacher model, we will only be using two geometric augmentation transforms: random horizontal flip and random crop. def preprocess_train(image, label, noisy=True): image = tf.image.random_flip_left_right(image) # We first resize the original image to a larger dimension # and then we take random crops from it. image = tf.image.resize(image, [RESIZE_TO, RESIZE_TO]) image = tf.image.random_crop(image, [CROP_TO, CROP_TO, 3]) if noisy: image = augmenter.distort(image) return image, label def preprocess_test(image, label): image = tf.image.resize(image, [CROP_TO, CROP_TO]) return image, label train_ds = tf.data.Dataset.from_tensor_slices((new_train_x, new_y_train)) validation_ds = tf.data.Dataset.from_tensor_slices((val_x, val_y)) test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)) We make sure train_clean_ds and train_noisy_ds are shuffled using the same seed to ensure their orders are exactly the same. This will be helpful when training the student model. # This dataset will be used to train the first model. train_clean_ds = ( train_ds.shuffle(BATCH_SIZE * 10, seed=42) .map(lambda x, y: (preprocess_train(x, y, noisy=False)), num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) # This prepares the `Dataset` object to use RandAugment. train_noisy_ds = ( train_ds.shuffle(BATCH_SIZE * 10, seed=42) .map(preprocess_train, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) validation_ds = ( validation_ds.map(preprocess_test, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) test_ds = ( test_ds.map(preprocess_test, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) # This dataset will be used to train the second model.
consistency_training_ds = tf.data.Dataset.zip((train_clean_ds, train_noisy_ds)) Visualize the datasets sample_images, sample_labels = next(iter(train_clean_ds)) plt.figure(figsize=(10, 10)) for i, image in enumerate(sample_images[:9]): ax = plt.subplot(3, 3, i + 1) plt.imshow(image.numpy().astype(\"int\")) plt.axis(\"off\") sample_images, sample_labels = next(iter(train_noisy_ds)) plt.figure(figsize=(10, 10)) for i, image in enumerate(sample_images[:9]): ax = plt.subplot(3, 3, i + 1) plt.imshow(image.numpy().astype(\"int\")) plt.axis(\"off\") png png Define a model building utility function We now define our model building utility. Our model is based on the ResNet50V2 architecture. def get_training_model(num_classes=10): resnet50_v2 = tf.keras.applications.ResNet50V2( weights=None, include_top=False, input_shape=(CROP_TO, CROP_TO, 3), ) model = tf.keras.Sequential( [ layers.Input((CROP_TO, CROP_TO, 3)), layers.Rescaling(scale=1.0 / 127.5, offset=-1), resnet50_v2, layers.GlobalAveragePooling2D(), layers.Dense(num_classes), ] ) return model In the interest of reproducibility, we serialize the initial random weights of the teacher network. initial_teacher_model = get_training_model() initial_teacher_model.save_weights(\"initial_teacher_model.h5\") Train the teacher model As noted in Noisy Student Training, training the teacher model with geometric ensembling and forcing the student model to mimic it leads to better performance. The original work uses Stochastic Depth and Dropout to bring in the ensembling part, but for this example, we will use Stochastic Weight Averaging (SWA), which also resembles geometric ensembling. # Define the callbacks. reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(patience=3) early_stopping = tf.keras.callbacks.EarlyStopping( patience=10, restore_best_weights=True ) # Initialize SWA from TensorFlow Addons. SWA = tfa.optimizers.SWA # Compile and train the teacher model. teacher_model = get_training_model() teacher_model.load_weights(\"initial_teacher_model.h5\") teacher_model.compile( # Notice that we are wrapping our optimizer within SWA optimizer=SWA(tf.keras.optimizers.Adam()), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[\"accuracy\"], ) history = teacher_model.fit( train_clean_ds, epochs=EPOCHS, validation_data=validation_ds, callbacks=[reduce_lr, early_stopping], ) # Evaluate the teacher model on the test set. _, acc = teacher_model.evaluate(test_ds, verbose=0) print(f\"Test accuracy: {acc*100}%\") Epoch 1/5 387/387 [==============================] - 73s 78ms/step - loss: 1.7785 - accuracy: 0.3582 - val_loss: 2.0589 - val_accuracy: 0.3920 Epoch 2/5 387/387 [==============================] - 28s 71ms/step - loss: 1.2493 - accuracy: 0.5542 - val_loss: 1.4228 - val_accuracy: 0.5380 Epoch 3/5 387/387 [==============================] - 28s 73ms/step - loss: 1.0294 - accuracy: 0.6350 - val_loss: 1.4422 - val_accuracy: 0.5900 Epoch 4/5 387/387 [==============================] - 28s 73ms/step - loss: 0.8954 - accuracy: 0.6864 - val_loss: 1.2189 - val_accuracy: 0.6520 Epoch 5/5 387/387 [==============================] - 28s 73ms/step - loss: 0.7879 - accuracy: 0.7231 - val_loss: 0.9790 - val_accuracy: 0.6500 Test accuracy: 65.83999991416931% Define a self-training utility For this part, we will borrow the Distiller class from this Keras Example.
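Before looking at the full class below, here is a minimal standalone sketch of the objective it optimizes, using hypothetical random logits: the student's cross-entropy on the noisy images and the temperature-scaled KL divergence against the teacher's predictions on the clean images are simply averaged.

import tensorflow as tf

temperature = 10  # same value passed to compile() further below
labels = tf.constant([3, 1])  # hypothetical integer labels for two samples
teacher_logits = tf.random.normal((2, 10))  # teacher's outputs on the clean images
student_logits = tf.random.normal((2, 10))  # student's outputs on the noisy images

student_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)(
    labels, student_logits
)
distillation_loss = tf.keras.losses.KLDivergence()(
    tf.nn.softmax(teacher_logits / temperature, axis=1),
    tf.nn.softmax(student_logits / temperature, axis=1),
)
total_loss = (student_loss + distillation_loss) / 2  # simple average of the two terms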
# Majority of the code is taken from: # https://keras.io/examples/vision/knowledge_distillation/ class SelfTrainer(tf.keras.Model): def __init__(self, student, teacher): super(SelfTrainer, self).__init__() self.student = student self.teacher = teacher def compile( self, optimizer, metrics, student_loss_fn, distillation_loss_fn, temperature=3, ): super(SelfTrainer, self).compile(optimizer=optimizer, metrics=metrics) self.student_loss_fn = student_loss_fn self.distillation_loss_fn = distillation_loss_fn self.temperature = temperature def train_step(self, data): # Since our dataset is a zip of two independent datasets, # after initially parsing them, we segregate the # respective images and labels next. clean_ds, noisy_ds = data clean_images, _ = clean_ds noisy_images, y = noisy_ds # Forward pass of teacher teacher_predictions = self.teacher(clean_images, training=False) with tf.GradientTape() as tape: # Forward pass of student student_predictions = self.student(noisy_images, training=True) # Compute losses student_loss = self.student_loss_fn(y, student_predictions) distillation_loss = self.distillation_loss_fn( tf.nn.softmax(teacher_predictions / self.temperature, axis=1), tf.nn.softmax(student_predictions / self.temperature, axis=1), ) total_loss = (student_loss + distillation_loss) / 2 # Compute gradients trainable_vars = self.student.trainable_variables gradients = tape.gradient(total_loss, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Update the metrics configured in `compile()` self.compiled_metrics.update_state( y, tf.nn.softmax(student_predictions, axis=1) ) # Return a dict of performance results = {m.name: m.result() for m in self.metrics} results.update({\"total_loss\": total_loss}) return results def test_step(self, data): # During inference, we only pass a dataset consisting of images and labels. x, y = data # Compute predictions y_prediction = self.student(x, training=False) # Update the metrics self.compiled_metrics.update_state(y, tf.nn.softmax(y_prediction, axis=1)) # Return a dict of performance results = {m.name: m.result() for m in self.metrics} return results The only difference in this implementation is the way the loss is calculated. Instead of weighting the distillation loss and the student loss differently, we take their average, following Noisy Student Training. Train the student model # Define the callbacks. # We are using a larger decay factor to stabilize the training. reduce_lr = tf.keras.callbacks.ReduceLROnPlateau( patience=3, factor=0.5, monitor=\"val_accuracy\" ) early_stopping = tf.keras.callbacks.EarlyStopping( patience=10, restore_best_weights=True, monitor=\"val_accuracy\" ) # Compile and train the student model. self_trainer = SelfTrainer(student=get_training_model(), teacher=teacher_model) self_trainer.compile( # Notice we are *not* using SWA here. optimizer=\"adam\", metrics=[\"accuracy\"], student_loss_fn=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), distillation_loss_fn=tf.keras.losses.KLDivergence(), temperature=10, ) history = self_trainer.fit( consistency_training_ds, epochs=EPOCHS, validation_data=validation_ds, callbacks=[reduce_lr, early_stopping], ) # Evaluate the student model.
acc = self_trainer.evaluate(test_ds, verbose=0) print(f\"Test accuracy from student model: {acc*100}%\") Epoch 1/5 387/387 [==============================] - 39s 84ms/step - accuracy: 0.2112 - total_loss: 1.0629 - val_accuracy: 0.4180 Epoch 2/5 387/387 [==============================] - 32s 82ms/step - accuracy: 0.3341 - total_loss: 0.9554 - val_accuracy: 0.3900 Epoch 3/5 387/387 [==============================] - 31s 81ms/step - accuracy: 0.3873 - total_loss: 0.8852 - val_accuracy: 0.4580 Epoch 4/5 387/387 [==============================] - 31s 81ms/step - accuracy: 0.4294 - total_loss: 0.8423 - val_accuracy: 0.5660 Epoch 5/5 387/387 [==============================] - 31s 81ms/step - accuracy: 0.4547 - total_loss: 0.8093 - val_accuracy: 0.5880 Test accuracy from student model: 58.490002155303955% Assess the robustness of the models A standard benchmark of assessing the robustness of vision models is to record their performance on corrupted datasets like ImageNet-C and CIFAR-10-C both of which were proposed in Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. For this example, we will be using the CIFAR-10-C dataset which has 19 different corruptions on 5 different severity levels. To assess the robustness of the models on this dataset, we will do the following: Run the pre-trained models on the highest level of severities and obtain the top-1 accuracies. Compute the mean top-1 accuracy. For the purpose of this example, we won't be going through these steps. This is why we trained the models for only 5 epochs. You can check out this repository that demonstrates the full-scale training experiments and also the aforementioned assessment. The figure below presents an executive summary of that assessment: Mean Top-1 results stand for the CIFAR-10-C dataset and Test Top-1 results stand for the CIFAR-10 test set. It's clear that consistency training has an advantage on not only enhancing the model robustness but also on improving the standard test performance. How to train a deep convolutional autoencoder for image denoising. Introduction This example demonstrates how to implement a deep convolutional autoencoder for image denoising, mapping noisy digits images from the MNIST dataset to clean digits images. This implementation is based on an original blog post titled Building Autoencoders in Keras by François Chollet. Setup import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.keras import layers from tensorflow.keras.datasets import mnist from tensorflow.keras.models import Model def preprocess(array): \"\"\" Normalizes the supplied array and reshapes it into the appropriate format. \"\"\" array = array.astype(\"float32\") / 255.0 array = np.reshape(array, (len(array), 28, 28, 1)) return array def noise(array): \"\"\" Adds random noise to each image in the supplied array. \"\"\" noise_factor = 0.4 noisy_array = array + noise_factor * np.random.normal( loc=0.0, scale=1.0, size=array.shape ) return np.clip(noisy_array, 0.0, 1.0) def display(array1, array2): \"\"\" Displays ten random images from each one of the supplied arrays. 
\"\"\" n = 10 indices = np.random.randint(len(array1), size=n) images1 = array1[indices, :] images2 = array2[indices, :] plt.figure(figsize=(20, 4)) for i, (image1, image2) in enumerate(zip(images1, images2)): ax = plt.subplot(2, n, i + 1) plt.imshow(image1.reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax = plt.subplot(2, n, i + 1 + n) plt.imshow(image2.reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() Prepare the data # Since we only need images from the dataset to encode and decode, we # won't use the labels. (train_data, _), (test_data, _) = mnist.load_data() # Normalize and reshape the data train_data = preprocess(train_data) test_data = preprocess(test_data) # Create a copy of the data with added noise noisy_train_data = noise(train_data) noisy_test_data = noise(test_data) # Display the train data and a version of it with added noise display(train_data, noisy_train_data) png Build the autoencoder We are going to use the Functional API to build our convolutional autoencoder. input = layers.Input(shape=(28, 28, 1)) # Encoder x = layers.Conv2D(32, (3, 3), activation=\"relu\", padding=\"same\")(input) x = layers.MaxPooling2D((2, 2), padding=\"same\")(x) x = layers.Conv2D(32, (3, 3), activation=\"relu\", padding=\"same\")(x) x = layers.MaxPooling2D((2, 2), padding=\"same\")(x) # Decoder x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation=\"relu\", padding=\"same\")(x) x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation=\"relu\", padding=\"same\")(x) x = layers.Conv2D(1, (3, 3), activation=\"sigmoid\", padding=\"same\")(x) # Autoencoder autoencoder = Model(input, x) autoencoder.compile(optimizer=\"adam\", loss=\"binary_crossentropy\") autoencoder.summary() Model: \"model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 28, 28, 32) 320 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 14, 14, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 14, 14, 32) 9248 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 7, 7, 32) 0 _________________________________________________________________ conv2d_transpose (Conv2DTran (None, 14, 14, 32) 9248 _________________________________________________________________ conv2d_transpose_1 (Conv2DTr (None, 28, 28, 32) 9248 _________________________________________________________________ conv2d_2 (Conv2D) (None, 28, 28, 1) 289 ================================================================= Total params: 28,353 Trainable params: 28,353 Non-trainable params: 0 _________________________________________________________________ Now we can train our autoencoder using train_data as both our input data and target. Notice we are setting up the validation data using the same format. 
autoencoder.fit( x=train_data, y=train_data, epochs=50, batch_size=128, shuffle=True, validation_data=(test_data, test_data), ) Epoch 1/50 469/469 [==============================] - 20s 43ms/step - loss: 0.1354 - val_loss: 0.0735 Epoch 2/50 469/469 [==============================] - 21s 45ms/step - loss: 0.0719 - val_loss: 0.0698 Epoch 3/50 469/469 [==============================] - 22s 47ms/step - loss: 0.0695 - val_loss: 0.0682 Epoch 4/50 469/469 [==============================] - 23s 50ms/step - loss: 0.0684 - val_loss: 0.0674 Epoch 5/50 469/469 [==============================] - 24s 51ms/step - loss: 0.0676 - val_loss: 0.0669 Epoch 6/50 469/469 [==============================] - 26s 55ms/step - loss: 0.0671 - val_loss: 0.0663 Epoch 7/50 469/469 [==============================] - 27s 57ms/step - loss: 0.0667 - val_loss: 0.0660 Epoch 8/50 469/469 [==============================] - 26s 56ms/step - loss: 0.0663 - val_loss: 0.0657 Epoch 9/50 469/469 [==============================] - 28s 59ms/step - loss: 0.0642 - val_loss: 0.0639 Epoch 21/50 469/469 [==============================] - 28s 60ms/step - loss: 0.0642 - val_loss: 0.0638 Epoch 22/50 469/469 [==============================] - 29s 62ms/step - loss: 0.0632 - val_loss: 0.0629 Epoch 38/50 397/469 [========================>.....] - ETA: 4s - loss: 0.0632 Let's predict on our test dataset and display the original image together with the prediction from our autoencoder. Notice how the predictions are pretty close to the original images, although not quite the same. predictions = autoencoder.predict(test_data) display(test_data, predictions) png Now that we know that our autoencoder works, let's retrain it using the noisy data as our input and the clean data as our target. We want our autoencoder to learn how to denoise the images. autoencoder.fit( x=noisy_train_data, y=train_data, epochs=100, batch_size=128, shuffle=True, validation_data=(noisy_test_data, test_data), ) Epoch 1/100 469/469 [==============================] - 28s 59ms/step - loss: 0.1027 - val_loss: 0.0946 Epoch 2/100 469/469 [==============================] - 27s 57ms/step - loss: 0.0942 - val_loss: 0.0924 Epoch 3/100 469/469 [==============================] - 27s 58ms/step - loss: 0.0925 - val_loss: 0.0913 Epoch 4/100 469/469 [==============================] - 28s 60ms/step - loss: 0.0915 - val_loss: 0.0905 Epoch 5/100 469/469 [==============================] - 31s 66ms/step - loss: 0.0908 - val_loss: 0.0897 Epoch 6/100 469/469 [==============================] - 30s 64ms/step - loss: 0.0902 - val_loss: 0.0893 Epoch 7/100 469/469 [==============================] - 28s 60ms/step - loss: 0.0897 - val_loss: 0.0887 Epoch 8/100 469/469 [==============================] - 31s 66ms/step - loss: 0.0872 - val_loss: 0.0867 Epoch 19/100 469/469 [==============================] - 30s 64ms/step - loss: 0.0860 - val_loss: 0.0854 Epoch 35/100 469/469 [==============================] - 32s 68ms/step - loss: 0.0854 - val_loss: 0.0849 Epoch 52/100 469/469 [==============================] - 28s 60ms/step - loss: 0.0851 - val_loss: 0.0847 Epoch 68/100 469/469 [==============================] - 31s 66ms/step - loss: 0.0851 - val_loss: 0.0848 Epoch 69/100 469/469 [==============================] - 31s 65ms/step - loss: 0.0849 - val_loss: 0.0847 Epoch 84/100 469/469 [==============================] - 29s 63ms/step - loss: 0.0848 - val_loss: 0.0846 Let's now predict on the noisy data and display the results of our autoencoder. 
Notice how the autoencoder does an amazing job at removing the noise from the input images. predictions = autoencoder.predict(noisy_test_data) display(noisy_test_data, predictions) png Data augmentation with CutMix for image classification on CIFAR-10. Introduction CutMix is a data augmentation technique that addresses the issue of information loss and inefficiency present in regional dropout strategies. Instead of removing pixels and filling them with black or grey pixels or Gaussian noise, you replace the removed regions with a patch from another image, while the ground truth labels are mixed proportionally to the number of pixels contributed by each image. CutMix was proposed in CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features (Yun et al., 2019). It's implemented via the following formulas: x̃ = M ⊙ x_A + (1 − M) ⊙ x_B and ỹ = λ y_A + (1 − λ) y_B, where M is the binary mask which indicates the cutout and the fill-in regions from the two randomly drawn images (x_A, y_A) and (x_B, y_B), and λ (in [0, 1]) is drawn from a Beta(α, α) distribution. The coordinates of the bounding box B = (r_x, r_y, r_w, r_h) indicate the cutout and fill-in regions of the images. The bounding box sampling is represented by r_x ~ Unif(0, W), r_w = W√(1 − λ), r_y ~ Unif(0, H), r_h = H√(1 − λ), where r_x, r_y are randomly drawn from a uniform distribution with upper bound W and H (the image width and height), respectively. Setup import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras np.random.seed(42) tf.random.set_seed(42) Load the CIFAR-10 dataset In this example, we will use the CIFAR-10 image classification dataset. (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() y_train = tf.keras.utils.to_categorical(y_train, num_classes=10) y_test = tf.keras.utils.to_categorical(y_test, num_classes=10) print(x_train.shape) print(y_train.shape) print(x_test.shape) print(y_test.shape) class_names = [ \"Airplane\", \"Automobile\", \"Bird\", \"Cat\", \"Deer\", \"Dog\", \"Frog\", \"Horse\", \"Ship\", \"Truck\", ] (50000, 32, 32, 3) (50000, 10) (10000, 32, 32, 3) (10000, 10) Define hyperparameters AUTO = tf.data.AUTOTUNE BATCH_SIZE = 32 IMG_SIZE = 32 Define the image preprocessing function def preprocess_image(image, label): image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE)) image = tf.image.convert_image_dtype(image, tf.float32) / 255.0 return image, label Convert the data into TensorFlow Dataset objects train_ds_one = ( tf.data.Dataset.from_tensor_slices((x_train, y_train)) .shuffle(1024) .map(preprocess_image, num_parallel_calls=AUTO) ) train_ds_two = ( tf.data.Dataset.from_tensor_slices((x_train, y_train)) .shuffle(1024) .map(preprocess_image, num_parallel_calls=AUTO) ) train_ds_simple = tf.data.Dataset.from_tensor_slices((x_train, y_train)) test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)) train_ds_simple = ( train_ds_simple.map(preprocess_image, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) # Combine two shuffled datasets from the same training data. train_ds = tf.data.Dataset.zip((train_ds_one, train_ds_two)) test_ds = ( test_ds.map(preprocess_image, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) Define the CutMix data augmentation function The CutMix function takes two image and label pairs to perform the augmentation. It samples λ from the Beta distribution and returns a bounding box from the get_box function. We then crop a patch from the second image (image2) and pad it back at the same location into an otherwise empty image of the original size.
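To make the relationship between λ and the patch size concrete, here is a small worked check (a sketch that only assumes the IMG_SIZE = 32 constant defined above; the actual cutmix implementation follows below):

import numpy as np

# Suppose the Beta(0.25, 0.25) draw gives lambda = 0.75.
lam = 0.75
cut_rat = np.sqrt(1.0 - lam)  # 0.5
cut_w = int(32 * cut_rat)  # 16-pixel-wide patch
cut_h = int(32 * cut_rat)  # 16-pixel-tall patch

# After cropping and padding, lambda is re-derived from the actual number of
# replaced pixels, exactly as done at the end of the `cutmix` function below.
adjusted_lam = 1 - (cut_w * cut_h) / (32 * 32)
print(adjusted_lam)  # 0.75 again (ignoring clipping at the image border)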
def sample_beta_distribution(size, concentration_0=0.2, concentration_1=0.2): gamma_1_sample = tf.random.gamma(shape=[size], alpha=concentration_1) gamma_2_sample = tf.random.gamma(shape=[size], alpha=concentration_0) return gamma_1_sample / (gamma_1_sample + gamma_2_sample) @tf.function def get_box(lambda_value): cut_rat = tf.math.sqrt(1.0 - lambda_value) cut_w = IMG_SIZE * cut_rat # rw cut_w = tf.cast(cut_w, tf.int32) cut_h = IMG_SIZE * cut_rat # rh cut_h = tf.cast(cut_h, tf.int32) cut_x = tf.random.uniform((1,), minval=0, maxval=IMG_SIZE, dtype=tf.int32) # rx cut_y = tf.random.uniform((1,), minval=0, maxval=IMG_SIZE, dtype=tf.int32) # ry boundaryx1 = tf.clip_by_value(cut_x[0] - cut_w // 2, 0, IMG_SIZE) boundaryy1 = tf.clip_by_value(cut_y[0] - cut_h // 2, 0, IMG_SIZE) bbx2 = tf.clip_by_value(cut_x[0] + cut_w // 2, 0, IMG_SIZE) bby2 = tf.clip_by_value(cut_y[0] + cut_h // 2, 0, IMG_SIZE) target_h = bby2 - boundaryy1 if target_h == 0: target_h += 1 target_w = bbx2 - boundaryx1 if target_w == 0: target_w += 1 return boundaryx1, boundaryy1, target_h, target_w @tf.function def cutmix(train_ds_one, train_ds_two): (image1, label1), (image2, label2) = train_ds_one, train_ds_two alpha = [0.25] beta = [0.25] # Get a sample from the Beta distribution lambda_value = sample_beta_distribution(1, alpha, beta) # Define Lambda lambda_value = lambda_value[0][0] # Get the bounding box offsets, heights and widths boundaryx1, boundaryy1, target_h, target_w = get_box(lambda_value) # Get a patch from the second image (`image2`) crop2 = tf.image.crop_to_bounding_box( image2, boundaryy1, boundaryx1, target_h, target_w ) # Pad the `image2` patch (`crop2`) with the same offset image2 = tf.image.pad_to_bounding_box( crop2, boundaryy1, boundaryx1, IMG_SIZE, IMG_SIZE ) # Get a patch from the first image (`image1`) crop1 = tf.image.crop_to_bounding_box( image1, boundaryy1, boundaryx1, target_h, target_w ) # Pad the `image1` patch (`crop1`) with the same offset img1 = tf.image.pad_to_bounding_box( crop1, boundaryy1, boundaryx1, IMG_SIZE, IMG_SIZE ) # Modify the first image by subtracting the patch from `image1` # (before applying the `image2` patch) image1 = image1 - img1 # Add the modified `image1` and `image2` together to get the CutMix image image = image1 + image2 # Adjust lambda in accordance with the actual pixel ratio of the patch lambda_value = 1 - (target_w * target_h) / (IMG_SIZE * IMG_SIZE) lambda_value = tf.cast(lambda_value, tf.float32) # Combine the labels of both images label = lambda_value * label1 + (1 - lambda_value) * label2 return image, label Note: we are combining two images to create a single one.
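As a quick sanity check, you can apply cutmix eagerly to a single pair of samples and confirm that the mixed one-hot label still sums to one (a sketch that reuses the train_ds_one and train_ds_two pipelines defined earlier):

# Draw one (image, label) pair from each of the two shuffled datasets.
sample_one = next(iter(train_ds_one))
sample_two = next(iter(train_ds_two))

mixed_image, mixed_label = cutmix(sample_one, sample_two)
print(mixed_image.shape)  # (32, 32, 3)
print(float(tf.reduce_sum(mixed_label)))  # ~1.0: a convex combination of two one-hot labels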
Visualize the new dataset after applying the CutMix augmentation # Create the new dataset using our `cutmix` utility train_ds_cmu = ( train_ds.shuffle(1024) .map(cutmix, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) # Let's preview 9 samples from the dataset image_batch, label_batch = next(iter(train_ds_cmu)) plt.figure(figsize=(10, 10)) for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.title(class_names[np.argmax(label_batch[i])]) plt.imshow(image_batch[i]) plt.axis(\"off\") png Define a ResNet-20 model def resnet_layer( inputs, num_filters=16, kernel_size=3, strides=1, activation=\"relu\", batch_normalization=True, conv_first=True, ): conv = keras.layers.Conv2D( num_filters, kernel_size=kernel_size, strides=strides, padding=\"same\", kernel_initializer=\"he_normal\", kernel_regularizer=keras.regularizers.l2(1e-4), ) x = inputs if conv_first: x = conv(x) if batch_normalization: x = keras.layers.BatchNormalization()(x) if activation is not None: x = keras.layers.Activation(activation)(x) else: if batch_normalization: x = keras.layers.BatchNormalization()(x) if activation is not None: x = keras.layers.Activation(activation)(x) x = conv(x) return x def resnet_v20(input_shape, depth, num_classes=10): if (depth - 2) % 6 != 0: raise ValueError(\"depth should be 6n+2 (eg 20, 32, 44 in [a])\") # Start model definition. num_filters = 16 num_res_blocks = int((depth - 2) / 6) inputs = keras.layers.Input(shape=input_shape) x = resnet_layer(inputs=inputs) # Instantiate the stack of residual units for stack in range(3): for res_block in range(num_res_blocks): strides = 1 if stack > 0 and res_block == 0: # first layer but not first stack strides = 2 # downsample y = resnet_layer(inputs=x, num_filters=num_filters, strides=strides) y = resnet_layer(inputs=y, num_filters=num_filters, activation=None) if stack > 0 and res_block == 0: # first layer but not first stack # linear projection residual shortcut connection to match # changed dims x = resnet_layer( inputs=x, num_filters=num_filters, kernel_size=1, strides=strides, activation=None, batch_normalization=False, ) x = keras.layers.add([x, y]) x = keras.layers.Activation(\"relu\")(x) num_filters *= 2 # Add classifier on top. # v1 does not use BN after last shortcut connection-ReLU x = keras.layers.AveragePooling2D(pool_size=8)(x) y = keras.layers.Flatten()(x) outputs = keras.layers.Dense( num_classes, activation=\"softmax\", kernel_initializer=\"he_normal\" )(y) # Instantiate model. 
model = keras.models.Model(inputs=inputs, outputs=outputs) return model def training_model(): return resnet_v20((32, 32, 3), 20) initial_model = training_model() initial_model.save_weights(\"initial_weights.h5\") Train the model with the dataset augmented by CutMix model = training_model() model.load_weights(\"initial_weights.h5\") model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]) model.fit(train_ds_cmu, validation_data=test_ds, epochs=15) test_loss, test_accuracy = model.evaluate(test_ds) print(\"Test accuracy: {:.2f}%\".format(test_accuracy * 100)) Epoch 1/15 1563/1563 [==============================] - 62s 24ms/step - loss: 1.9216 - accuracy: 0.4090 - val_loss: 1.9737 - val_accuracy: 0.4061 Epoch 2/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.6549 - accuracy: 0.5325 - val_loss: 1.5033 - val_accuracy: 0.5061 Epoch 3/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.5536 - accuracy: 0.5840 - val_loss: 1.2913 - val_accuracy: 0.6112 Epoch 4/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.4988 - accuracy: 0.6097 - val_loss: 1.0587 - val_accuracy: 0.7033 Epoch 5/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.4531 - accuracy: 0.6291 - val_loss: 1.0681 - val_accuracy: 0.6841 Epoch 6/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.4173 - accuracy: 0.6464 - val_loss: 1.0265 - val_accuracy: 0.7085 Epoch 7/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.3932 - accuracy: 0.6572 - val_loss: 0.9540 - val_accuracy: 0.7331 Epoch 8/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.3736 - accuracy: 0.6680 - val_loss: 0.9877 - val_accuracy: 0.7240 Epoch 9/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.3575 - accuracy: 0.6782 - val_loss: 0.8944 - val_accuracy: 0.7570 Epoch 10/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.3398 - accuracy: 0.6886 - val_loss: 0.8598 - val_accuracy: 0.7649 Epoch 11/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.3277 - accuracy: 0.6939 - val_loss: 0.9032 - val_accuracy: 0.7603 Epoch 12/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.3131 - accuracy: 0.6964 - val_loss: 0.7934 - val_accuracy: 0.7926 Epoch 13/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.3050 - accuracy: 0.7029 - val_loss: 0.8737 - val_accuracy: 0.7552 Epoch 14/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.2987 - accuracy: 0.7099 - val_loss: 0.8409 - val_accuracy: 0.7766 Epoch 15/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.2953 - accuracy: 0.7099 - val_loss: 0.7850 - val_accuracy: 0.8014 313/313 [==============================] - 3s 9ms/step - loss: 0.7850 - accuracy: 0.8014 Test accuracy: 80.14% Train the model using the original non-augmented dataset model = training_model() model.load_weights(\"initial_weights.h5\") model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]) model.fit(train_ds_simple, validation_data=test_ds, epochs=15) test_loss, test_accuracy = model.evaluate(test_ds) print(\"Test accuracy: {:.2f}%\".format(test_accuracy * 100)) Epoch 1/15 1563/1563 [==============================] - 38s 23ms/step - loss: 1.4864 - accuracy: 0.5173 - val_loss: 1.3694 - val_accuracy: 0.5708 Epoch 2/15 1563/1563 [==============================] - 36s 23ms/step - loss: 1.0682 - accuracy: 0.6779 - 
val_loss: 1.1424 - val_accuracy: 0.6686 Epoch 3/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.8955 - accuracy: 0.7449 - val_loss: 1.0555 - val_accuracy: 0.7007 Epoch 4/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.7890 - accuracy: 0.7878 - val_loss: 1.0575 - val_accuracy: 0.7079 Epoch 5/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.7107 - accuracy: 0.8175 - val_loss: 1.1395 - val_accuracy: 0.7062 Epoch 6/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.6524 - accuracy: 0.8397 - val_loss: 1.1716 - val_accuracy: 0.7042 Epoch 7/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.6098 - accuracy: 0.8594 - val_loss: 1.4120 - val_accuracy: 0.6786 Epoch 8/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.5715 - accuracy: 0.8765 - val_loss: 1.3159 - val_accuracy: 0.7011 Epoch 9/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.5477 - accuracy: 0.8872 - val_loss: 1.2873 - val_accuracy: 0.7182 Epoch 10/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.5233 - accuracy: 0.8988 - val_loss: 1.4118 - val_accuracy: 0.6964 Epoch 11/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.5165 - accuracy: 0.9045 - val_loss: 1.3741 - val_accuracy: 0.7230 Epoch 12/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.5008 - accuracy: 0.9124 - val_loss: 1.3984 - val_accuracy: 0.7181 Epoch 13/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.4896 - accuracy: 0.9190 - val_loss: 1.3642 - val_accuracy: 0.7209 Epoch 14/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.4845 - accuracy: 0.9231 - val_loss: 1.5469 - val_accuracy: 0.6992 Epoch 15/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.4749 - accuracy: 0.9294 - val_loss: 1.4034 - val_accuracy: 0.7362 313/313 [==============================] - 3s 9ms/step - loss: 1.4034 - accuracy: 0.7362 Test accuracy: 73.62% Notes In this example, we trained our model for 15 epochs. In the runs shown above, the model trained with CutMix achieves a better accuracy on the CIFAR-10 test set (80.14%) than the model trained without the augmentation (73.62%). The per-epoch training time with the CutMix augmentation is only marginally higher. You can experiment further with the CutMix technique by following the original paper. Few-shot classification of the Omniglot dataset using Reptile. Introduction The Reptile algorithm was developed by OpenAI to perform model-agnostic meta-learning. Specifically, this algorithm was designed to quickly learn to perform new tasks with minimal training (few-shot learning). Over a fixed number of meta-iterations, the algorithm trains on a mini-batch of never-before-seen data with Stochastic Gradient Descent and then moves the model weights a fraction of the way towards the newly trained weights; in other words, the meta-update is θ ← θ + ε (θ̃ − θ), where θ̃ denotes the weights after the inner SGD steps and ε is the meta step size. import matplotlib.pyplot as plt import numpy as np import random import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_datasets as tfds Define the Hyperparameters learning_rate = 0.003 meta_step_size = 0.25 inner_batch_size = 25 eval_batch_size = 25 meta_iters = 2000 eval_iters = 5 inner_iters = 4 eval_interval = 1 train_shots = 20 shots = 5 classes = 5 Prepare the data The Omniglot dataset is a dataset of 1,623 characters taken from 50 different alphabets, with 20 examples for each character.
The 20 samples for each character were drawn online via Amazon's Mechanical Turk. For the few-shot learning task, k samples (or \"shots\") are drawn randomly from n randomly-chosen classes. These n numerical values are used to create a new set of temporary labels to use to test the model's ability to learn a new task given few examples. In other words, if you are training on 5 classes, your new class labels will be either 0, 1, 2, 3, or 4. Omniglot is a great dataset for this task since there are many different classes to draw from, with a reasonable number of samples for each class. class Dataset: # This class will facilitate the creation of a few-shot dataset # from the Omniglot dataset that can be sampled from quickly while also # allowing to create new labels at the same time. def __init__(self, training): # Download the tfrecord files containing the omniglot data and convert to a # dataset. split = \"train\" if training else \"test\" ds = tfds.load(\"omniglot\", split=split, as_supervised=True, shuffle_files=False) # Iterate over the dataset to get each individual image and its class, # and put that data into a dictionary. self.data = {} def extraction(image, label): # This function will shrink the Omniglot images to the desired size, # scale pixel values and convert the RGB image to grayscale image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.rgb_to_grayscale(image) image = tf.image.resize(image, [28, 28]) return image, label for image, label in ds.map(extraction): image = image.numpy() label = str(label.numpy()) if label not in self.data: self.data[label] = [] self.data[label].append(image) self.labels = list(self.data.keys()) def get_mini_dataset( self, batch_size, repetitions, shots, num_classes, split=False ): temp_labels = np.zeros(shape=(num_classes * shots)) temp_images = np.zeros(shape=(num_classes * shots, 28, 28, 1)) if split: test_labels = np.zeros(shape=(num_classes)) test_images = np.zeros(shape=(num_classes, 28, 28, 1)) # Get a random subset of labels from the entire label set. label_subset = random.choices(self.labels, k=num_classes) for class_idx, class_obj in enumerate(label_subset): # Use enumerated index value as a temporary label for mini-batch in # few shot learning. temp_labels[class_idx * shots : (class_idx + 1) * shots] = class_idx # If creating a split dataset for testing, select an extra sample from each # label to create the test dataset. if split: test_labels[class_idx] = class_idx images_to_split = random.choices( self.data[label_subset[class_idx]], k=shots + 1 ) test_images[class_idx] = images_to_split[-1] temp_images[ class_idx * shots : (class_idx + 1) * shots ] = images_to_split[:-1] else: # For each index in the randomly selected label_subset, sample the # necessary number of images. temp_images[ class_idx * shots : (class_idx + 1) * shots ] = random.choices(self.data[label_subset[class_idx]], k=shots) dataset = tf.data.Dataset.from_tensor_slices( (temp_images.astype(np.float32), temp_labels.astype(np.int32)) ) dataset = dataset.shuffle(100).batch(batch_size).repeat(repetitions) if split: return dataset, test_images, test_labels return dataset import urllib3 urllib3.disable_warnings() # Disable SSL warnings that may happen during download. train_dataset = Dataset(training=True) test_dataset = Dataset(training=False) Downloading and preparing dataset omniglot/3.0.0 (download: 17.95 MiB, generated: Unknown size, total: 17.95 MiB) to /root/tensorflow_datasets/omniglot/3.0.0... 
Shuffling and writing examples to /root/tensorflow_datasets/omniglot/3.0.0.incompleteXTNZJN/omniglot-train.tfrecord Shuffling and writing examples to /root/tensorflow_datasets/omniglot/3.0.0.incompleteXTNZJN/omniglot-test.tfrecord Shuffling and writing examples to /root/tensorflow_datasets/omniglot/3.0.0.incompleteXTNZJN/omniglot-small1.tfrecord Shuffling and writing examples to /root/tensorflow_datasets/omniglot/3.0.0.incompleteXTNZJN/omniglot-small2.tfrecord Dataset omniglot downloaded and prepared to /root/tensorflow_datasets/omniglot/3.0.0. Subsequent calls will reuse this data. Visualize some examples from the dataset _, axarr = plt.subplots(nrows=5, ncols=5, figsize=(20, 20)) sample_keys = list(train_dataset.data.keys()) for a in range(5): for b in range(5): temp_image = train_dataset.data[sample_keys[a]][b] temp_image = np.stack((temp_image[:, :, 0],) * 3, axis=2) temp_image *= 255 temp_image = np.clip(temp_image, 0, 255).astype(\"uint8\") if b == 2: axarr[a, b].set_title(\"Class : \" + sample_keys[a]) axarr[a, b].imshow(temp_image, cmap=\"gray\") axarr[a, b].xaxis.set_visible(False) axarr[a, b].yaxis.set_visible(False) plt.show() png Build the model def conv_bn(x): x = layers.Conv2D(filters=64, kernel_size=3, strides=2, padding=\"same\")(x) x = layers.BatchNormalization()(x) return layers.ReLU()(x) inputs = layers.Input(shape=(28, 28, 1)) x = conv_bn(inputs) x = conv_bn(x) x = conv_bn(x) x = conv_bn(x) x = layers.Flatten()(x) outputs = layers.Dense(classes, activation=\"softmax\")(x) model = keras.Model(inputs=inputs, outputs=outputs) model.compile() optimizer = keras.optimizers.SGD(learning_rate=learning_rate) Train the model training = [] testing = [] for meta_iter in range(meta_iters): frac_done = meta_iter / meta_iters cur_meta_step_size = (1 - frac_done) * meta_step_size # Temporarily save the weights from the model. old_vars = model.get_weights() # Get a sample from the full dataset. mini_dataset = train_dataset.get_mini_dataset( inner_batch_size, inner_iters, train_shots, classes ) for images, labels in mini_dataset: with tf.GradientTape() as tape: preds = model(images) loss = keras.losses.sparse_categorical_crossentropy(labels, preds) grads = tape.gradient(loss, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) new_vars = model.get_weights() # Perform SGD for the meta step. for var in range(len(new_vars)): new_vars[var] = old_vars[var] + ( (new_vars[var] - old_vars[var]) * cur_meta_step_size ) # After the meta-learning step, reload the newly-trained weights into the model.
model.set_weights(new_vars) # Evaluation loop if meta_iter % eval_interval == 0: accuracies = [] for dataset in (train_dataset, test_dataset): # Sample a mini dataset from the full dataset. train_set, test_images, test_labels = dataset.get_mini_dataset( eval_batch_size, eval_iters, shots, classes, split=True ) old_vars = model.get_weights() # Train on the samples and get the resulting accuracies. for images, labels in train_set: with tf.GradientTape() as tape: preds = model(images) loss = keras.losses.sparse_categorical_crossentropy(labels, preds) grads = tape.gradient(loss, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) test_preds = model.predict(test_images) test_preds = tf.argmax(test_preds).numpy() num_correct = (test_preds == test_labels).sum() # Reset the weights after getting the evaluation accuracies. model.set_weights(old_vars) accuracies.append(num_correct / classes) training.append(accuracies[0]) testing.append(accuracies[1]) if meta_iter % 100 == 0: print( \"batch %d: train=%f test=%f\" % (meta_iter, accuracies[0], accuracies[1]) ) batch 0: train=0.000000 test=0.600000 batch 100: train=0.600000 test=0.800000 batch 200: train=1.000000 test=0.600000 batch 300: train=0.600000 test=0.800000 batch 400: train=0.800000 test=1.000000 batch 500: train=1.000000 test=0.600000 batch 600: train=1.000000 test=1.000000 batch 700: train=1.000000 test=1.000000 batch 800: train=1.000000 test=0.600000 batch 900: train=1.000000 test=1.000000 batch 1000: train=0.800000 test=1.000000 batch 1100: train=1.000000 test=0.600000 batch 1200: train=0.800000 test=1.000000 batch 1300: train=0.800000 test=1.000000 batch 1400: train=1.000000 test=1.000000 batch 1500: train=0.800000 test=1.000000 batch 1600: train=1.000000 test=1.000000 batch 1700: train=1.000000 test=0.800000 batch 1800: train=1.000000 test=1.000000 batch 1900: train=0.800000 test=1.000000 Visualize Results # First, some preprocessing to smooth the training and testing arrays for display. window_length = 100 train_s = np.r_[ training[window_length - 1 : 0 : -1], training, training[-1:-window_length:-1] ] test_s = np.r_[ testing[window_length - 1 : 0 : -1], testing, testing[-1:-window_length:-1] ] w = np.hamming(window_length) train_y = np.convolve(w / w.sum(), train_s, mode=\"valid\") test_y = np.convolve(w / w.sum(), test_s, mode=\"valid\") # Display the training accuracies. x = np.arange(0, len(test_y), 1) plt.plot(x, test_y, x, train_y) plt.legend([\"test\", \"train\"]) plt.grid() train_set, test_images, test_labels = dataset.get_mini_dataset( eval_batch_size, eval_iters, shots, classes, split=True ) for images, labels in train_set: with tf.GradientTape() as tape: preds = model(images) loss = keras.losses.sparse_categorical_crossentropy(labels, preds) grads = tape.gradient(loss, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) test_preds = model.predict(test_images) test_preds = tf.argmax(test_preds).numpy() _, axarr = plt.subplots(nrows=1, ncols=5, figsize=(20, 20)) sample_keys = list(train_dataset.data.keys()) for i, ax in zip(range(5), axarr): temp_image = np.stack((test_images[i, :, :, 0],) * 3, axis=2) temp_image *= 255 temp_image = np.clip(temp_image, 0, 255).astype(\"uint8\") ax.set_title( \"Label : {}, Prediction : {}\".format(int(test_labels[i]), test_preds[i]) ) ax.imshow(temp_image, cmap=\"gray\") ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) plt.show() png png Mitigating resolution discrepancy between training and test sets. 
Introduction It is a common practice to use the same input image resolution while training and testing vision models. However, as investigated in Fixing the train-test resolution discrepancy (Touvron et al.), this practice leads to suboptimal performance. Data augmentation is an indispensable part of the training process of deep neural networks. For vision models, we typically use random resized crops during training and center crops during inference. This introduces a discrepancy in the object sizes seen during training and inference. As shown by Touvron et al., if we can fix this discrepancy, we can significantly boost model performance. In this example, we implement the FixRes techniques introduced by Touvron et al. to fix this discrepancy. Imports from tensorflow import keras from tensorflow.keras import layers import tensorflow as tf import tensorflow_datasets as tfds tfds.disable_progress_bar() import matplotlib.pyplot as plt Load the tf_flowers dataset train_dataset, val_dataset = tfds.load( \"tf_flowers\", split=[\"train[:90%]\", \"train[90%:]\"], as_supervised=True ) num_train = train_dataset.cardinality() num_val = val_dataset.cardinality() print(f\"Number of training examples: {num_train}\") print(f\"Number of validation examples: {num_val}\") Number of training examples: 3303 Number of validation examples: 367 Data preprocessing utilities We create three datasets: A dataset with a smaller resolution - 128x128. Two datasets with a larger resolution - 224x224. We will apply different augmentation transforms to the larger-resolution datasets. The idea of FixRes is to first train a model on a smaller resolution dataset and then fine-tune it on a larger resolution dataset. This simple yet effective recipe leads to non-trivial performance improvements. Please refer to the original paper for results. # Reference: https://github.com/facebookresearch/FixRes/blob/main/transforms_v2.py. batch_size = 128 auto = tf.data.AUTOTUNE smaller_size = 128 bigger_size = 224 size_for_resizing = int((bigger_size / smaller_size) * bigger_size) central_crop_layer = layers.CenterCrop(bigger_size, bigger_size) def preprocess_initial(train, image_size): \"\"\"Initial preprocessing function for training on smaller resolution. For training, do random_horizontal_flip -> random_crop. For validation, just resize. No color-jittering has been used. \"\"\" def _pp(image, label, train): if train: channels = image.shape[-1] begin, size, _ = tf.image.sample_distorted_bounding_box( tf.shape(image), tf.zeros([0, 0, 4], tf.float32), area_range=(0.05, 1.0), min_object_covered=0, use_image_if_no_bounding_boxes=True, ) image = tf.slice(image, begin, size) image.set_shape([None, None, channels]) image = tf.image.resize(image, [image_size, image_size]) image = tf.image.random_flip_left_right(image) else: image = tf.image.resize(image, [image_size, image_size]) return image, label return _pp def preprocess_finetune(image, label, train): \"\"\"Preprocessing function for fine-tuning on a higher resolution. For training, resize to a bigger resolution to maintain the ratio -> random_horizontal_flip -> center_crop. For validation, do the same without any horizontal flipping. No color-jittering has been used. 
\"\"\" image = tf.image.resize(image, [size_for_resizing, size_for_resizing]) if train: image = tf.image.random_flip_left_right(image) image = central_crop_layer(image[None, ...])[0] return image, label def make_dataset( dataset: tf.data.Dataset, train: bool, image_size: int = smaller_size, fixres: bool = True, num_parallel_calls=auto, ): if image_size not in [smaller_size, bigger_size]: raise ValueError(f\"{image_size} resolution is not supported.\") # Determine which preprocessing function we are using. if image_size == smaller_size: preprocess_func = preprocess_initial(train, image_size) elif not fixres and image_size == bigger_size: preprocess_func = preprocess_initial(train, image_size) else: preprocess_func = preprocess_finetune if train: dataset = dataset.shuffle(batch_size * 10) return ( dataset.map( lambda x, y: preprocess_func(x, y, train), num_parallel_calls=num_parallel_calls, ) .batch(batch_size) .prefetch(num_parallel_calls) ) Notice how the augmentation transforms vary for the kind of dataset we are preparing. Prepare datasets initial_train_dataset = make_dataset(train_dataset, train=True, image_size=smaller_size) initial_val_dataset = make_dataset(val_dataset, train=False, image_size=smaller_size) finetune_train_dataset = make_dataset(train_dataset, train=True, image_size=bigger_size) finetune_val_dataset = make_dataset(val_dataset, train=False, image_size=bigger_size) vanilla_train_dataset = make_dataset( train_dataset, train=True, image_size=bigger_size, fixres=False ) vanilla_val_dataset = make_dataset( val_dataset, train=False, image_size=bigger_size, fixres=False ) Visualize the datasets def visualize_dataset(batch_images): plt.figure(figsize=(10, 10)) for n in range(25): ax = plt.subplot(5, 5, n + 1) plt.imshow(batch_images[n].numpy().astype(\"int\")) plt.axis(\"off\") plt.show() print(f\"Batch shape: {batch_images.shape}.\") # Smaller resolution. initial_sample_images, _ = next(iter(initial_train_dataset)) visualize_dataset(initial_sample_images) # Bigger resolution, only for fine-tuning. finetune_sample_images, _ = next(iter(finetune_train_dataset)) visualize_dataset(finetune_sample_images) # Bigger resolution, with the same augmentation transforms as # the smaller resolution dataset. vanilla_sample_images, _ = next(iter(vanilla_train_dataset)) visualize_dataset(vanilla_sample_images) 2021-10-11 02:05:26.638594: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead. png Batch shape: (128, 128, 128, 3). 2021-10-11 02:05:28.509752: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead. png Batch shape: (128, 224, 224, 3). 2021-10-11 02:05:30.108623: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. 
This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead. png Batch shape: (128, 224, 224, 3). Model training utilities We train multiple variants of ResNet50V2 (He et al.): On the smaller resolution dataset (128x128). It will be trained from scratch. Then fine-tune the model from 1 on the larger resolution (224x224) dataset. Train another ResNet50V2 from scratch on the larger resolution dataset. As a reminder, the larger resolution datasets differ in terms of their augmentation transforms. def get_training_model(num_classes=5): inputs = layers.Input((None, None, 3)) resnet_base = keras.applications.ResNet50V2( include_top=False, weights=None, pooling=\"avg\" ) resnet_base.trainable = True x = layers.Rescaling(scale=1.0 / 127.5, offset=-1)(inputs) x = resnet_base(x) outputs = layers.Dense(num_classes, activation=\"softmax\")(x) return keras.Model(inputs, outputs) def train_and_evaluate( model, train_ds, val_ds, epochs, learning_rate=1e-3, use_early_stopping=False ): optimizer = keras.optimizers.Adam(learning_rate=learning_rate) model.compile( optimizer=optimizer, loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"], ) if use_early_stopping: es_callback = keras.callbacks.EarlyStopping(patience=5) callbacks = [es_callback] else: callbacks = None model.fit( train_ds, validation_data=val_ds, epochs=epochs, callbacks=callbacks, ) _, accuracy = model.evaluate(val_ds) print(f\"Top-1 accuracy on the validation set: {accuracy*100:.2f}%.\") return model Experiment 1: Train on 128x128 and then fine-tune on 224x224 epochs = 30 smaller_res_model = get_training_model() smaller_res_model = train_and_evaluate( smaller_res_model, initial_train_dataset, initial_val_dataset, epochs ) Epoch 1/30 26/26 [==============================] - 14s 226ms/step - loss: 1.6476 - accuracy: 0.4345 - val_loss: 9.8213 - val_accuracy: 0.2044 Epoch 2/30 26/26 [==============================] - 3s 123ms/step - loss: 1.1561 - accuracy: 0.5495 - val_loss: 6.5521 - val_accuracy: 0.2071 Epoch 3/30 26/26 [==============================] - 3s 123ms/step - loss: 1.0989 - accuracy: 0.5722 - val_loss: 2.6216 - val_accuracy: 0.1935 Epoch 4/30 26/26 [==============================] - 3s 122ms/step - loss: 1.0373 - accuracy: 0.5895 - val_loss: 1.9918 - val_accuracy: 0.2125 Epoch 5/30 26/26 [==============================] - 3s 122ms/step - loss: 0.9960 - accuracy: 0.6119 - val_loss: 2.8505 - val_accuracy: 0.2262 Epoch 6/30 26/26 [==============================] - 3s 122ms/step - loss: 0.9458 - accuracy: 0.6331 - val_loss: 1.8974 - val_accuracy: 0.2834 Epoch 7/30 26/26 [==============================] - 3s 122ms/step - loss: 0.8949 - accuracy: 0.6606 - val_loss: 2.1164 - val_accuracy: 0.2834 Epoch 8/30 26/26 [==============================] - 3s 122ms/step - loss: 0.8581 - accuracy: 0.6709 - val_loss: 1.8858 - val_accuracy: 0.3815 Epoch 9/30 26/26 [==============================] - 3s 123ms/step - loss: 0.8436 - accuracy: 0.6776 - val_loss: 1.5671 - val_accuracy: 0.4687 Epoch 10/30 26/26 [==============================] - 3s 123ms/step - loss: 0.8632 - accuracy: 0.6685 - val_loss: 1.5005 - val_accuracy: 0.5504 Epoch 11/30 26/26 [==============================] - 3s 123ms/step - loss: 0.8316 - accuracy: 0.6918 - val_loss: 1.1421 - val_accuracy: 0.6594 Epoch 12/30 26/26 [==============================] - 3s 123ms/step - loss: 0.7981 - accuracy: 0.6951 - val_loss: 1.2036 - val_accuracy: 0.6403 Epoch 13/30 26/26 
[==============================] - 3s 122ms/step - loss: 0.8275 - accuracy: 0.6806 - val_loss: 2.2632 - val_accuracy: 0.5177 Epoch 14/30 26/26 [==============================] - 3s 122ms/step - loss: 0.8156 - accuracy: 0.6994 - val_loss: 1.1023 - val_accuracy: 0.6649 Epoch 15/30 26/26 [==============================] - 3s 122ms/step - loss: 0.7572 - accuracy: 0.7091 - val_loss: 1.6248 - val_accuracy: 0.6049 Epoch 16/30 26/26 [==============================] - 3s 123ms/step - loss: 0.7757 - accuracy: 0.7024 - val_loss: 2.0600 - val_accuracy: 0.6294 Epoch 17/30 26/26 [==============================] - 3s 122ms/step - loss: 0.7600 - accuracy: 0.7087 - val_loss: 1.5731 - val_accuracy: 0.6131 Epoch 18/30 26/26 [==============================] - 3s 122ms/step - loss: 0.7385 - accuracy: 0.7215 - val_loss: 1.8312 - val_accuracy: 0.5749 Epoch 19/30 26/26 [==============================] - 3s 122ms/step - loss: 0.7493 - accuracy: 0.7224 - val_loss: 3.0382 - val_accuracy: 0.4986 Epoch 20/30 26/26 [==============================] - 3s 122ms/step - loss: 0.7746 - accuracy: 0.7048 - val_loss: 7.8191 - val_accuracy: 0.5123 Epoch 21/30 26/26 [==============================] - 3s 123ms/step - loss: 0.7367 - accuracy: 0.7405 - val_loss: 1.9607 - val_accuracy: 0.6676 Epoch 22/30 26/26 [==============================] - 3s 122ms/step - loss: 0.6970 - accuracy: 0.7357 - val_loss: 3.1944 - val_accuracy: 0.4496 Epoch 23/30 26/26 [==============================] - 3s 122ms/step - loss: 0.7299 - accuracy: 0.7212 - val_loss: 1.4012 - val_accuracy: 0.6567 Epoch 24/30 26/26 [==============================] - 3s 122ms/step - loss: 0.6965 - accuracy: 0.7315 - val_loss: 1.9781 - val_accuracy: 0.6403 Epoch 25/30 26/26 [==============================] - 3s 124ms/step - loss: 0.6811 - accuracy: 0.7408 - val_loss: 0.9287 - val_accuracy: 0.6839 Epoch 26/30 26/26 [==============================] - 3s 123ms/step - loss: 0.6732 - accuracy: 0.7487 - val_loss: 2.9406 - val_accuracy: 0.5504 Epoch 27/30 26/26 [==============================] - 3s 122ms/step - loss: 0.6571 - accuracy: 0.7560 - val_loss: 1.6268 - val_accuracy: 0.5804 Epoch 28/30 26/26 [==============================] - 3s 122ms/step - loss: 0.6662 - accuracy: 0.7548 - val_loss: 0.9067 - val_accuracy: 0.7357 Epoch 29/30 26/26 [==============================] - 3s 122ms/step - loss: 0.6443 - accuracy: 0.7520 - val_loss: 0.7760 - val_accuracy: 0.7520 Epoch 30/30 26/26 [==============================] - 3s 122ms/step - loss: 0.6617 - accuracy: 0.7539 - val_loss: 0.6026 - val_accuracy: 0.7766 3/3 [==============================] - 0s 37ms/step - loss: 0.6026 - accuracy: 0.7766 Top-1 accuracy on the validation set: 77.66%. Freeze all the layers except for the final Batch Normalization layer For fine-tuning, we train only two layers: The final Batch Normalization (Ioffe et al.) layer. The classification layer. We are unfreezing the final Batch Normalization layer to compensate for the change in activation statistics before the global average pooling layer. As shown in the paper, unfreezing the final Batch Normalization layer is enough. For a comprehensive guide on fine-tuning models in Keras, refer to this tutorial. for layer in smaller_res_model.layers[2].layers: layer.trainable = False smaller_res_model.layers[2].get_layer(\"post_bn\").trainable = True epochs = 10 # Use a lower learning rate during fine-tuning. 
bigger_res_model = train_and_evaluate( smaller_res_model, finetune_train_dataset, finetune_val_dataset, epochs, learning_rate=1e-4, ) Epoch 1/10 26/26 [==============================] - 9s 201ms/step - loss: 0.7912 - accuracy: 0.7856 - val_loss: 0.6808 - val_accuracy: 0.7575 Epoch 2/10 26/26 [==============================] - 3s 115ms/step - loss: 0.7732 - accuracy: 0.7938 - val_loss: 0.7028 - val_accuracy: 0.7684 Epoch 3/10 26/26 [==============================] - 3s 115ms/step - loss: 0.7658 - accuracy: 0.7923 - val_loss: 0.7136 - val_accuracy: 0.7629 Epoch 4/10 26/26 [==============================] - 3s 115ms/step - loss: 0.7536 - accuracy: 0.7872 - val_loss: 0.7161 - val_accuracy: 0.7684 Epoch 5/10 26/26 [==============================] - 3s 115ms/step - loss: 0.7346 - accuracy: 0.7947 - val_loss: 0.7154 - val_accuracy: 0.7711 Epoch 6/10 26/26 [==============================] - 3s 115ms/step - loss: 0.7183 - accuracy: 0.7990 - val_loss: 0.7139 - val_accuracy: 0.7684 Epoch 7/10 26/26 [==============================] - 3s 116ms/step - loss: 0.7059 - accuracy: 0.7962 - val_loss: 0.7071 - val_accuracy: 0.7738 Epoch 8/10 26/26 [==============================] - 3s 115ms/step - loss: 0.6959 - accuracy: 0.7923 - val_loss: 0.7002 - val_accuracy: 0.7738 Epoch 9/10 26/26 [==============================] - 3s 116ms/step - loss: 0.6871 - accuracy: 0.8011 - val_loss: 0.6967 - val_accuracy: 0.7711 Epoch 10/10 26/26 [==============================] - 3s 116ms/step - loss: 0.6761 - accuracy: 0.8044 - val_loss: 0.6887 - val_accuracy: 0.7738 3/3 [==============================] - 0s 95ms/step - loss: 0.6887 - accuracy: 0.7738 Top-1 accuracy on the validation set: 77.38%. Experiment 2: Train a model on 224x224 resolution from scratch Now, we train another model from scratch on the larger resolution dataset. Recall that the augmentation transforms used in this dataset are different from before. 
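Before training this baseline, it can be reassuring to confirm that the freezing step above behaved as intended. The following is only a sketch: it relies on the smaller_res_model and the post_bn layer referenced earlier, and the layer index assumes the structure produced by get_training_model (Input -> Rescaling -> ResNet50V2 -> Dense).

# The ResNet50V2 base is the third layer of the functional model.
resnet_base = smaller_res_model.layers[2]

# After the freezing loop, only `post_bn` should still be trainable inside the base.
print([layer.name for layer in resnet_base.layers if layer.trainable])

# The model's remaining trainable weights should come only from `post_bn`
# (gamma and beta) and the Dense classification head (kernel and bias).
print(len(smaller_res_model.trainable_weights))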
epochs = 30 vanilla_bigger_res_model = get_training_model() vanilla_bigger_res_model = train_and_evaluate( vanilla_bigger_res_model, vanilla_train_dataset, vanilla_val_dataset, epochs ) Epoch 1/30 26/26 [==============================] - 15s 389ms/step - loss: 1.5339 - accuracy: 0.4569 - val_loss: 177.5233 - val_accuracy: 0.1907 Epoch 2/30 26/26 [==============================] - 8s 314ms/step - loss: 1.1472 - accuracy: 0.5483 - val_loss: 17.5804 - val_accuracy: 0.1907 Epoch 3/30 26/26 [==============================] - 8s 315ms/step - loss: 1.0708 - accuracy: 0.5792 - val_loss: 2.2719 - val_accuracy: 0.2480 Epoch 4/30 26/26 [==============================] - 8s 315ms/step - loss: 1.0225 - accuracy: 0.6170 - val_loss: 2.1274 - val_accuracy: 0.2398 Epoch 5/30 26/26 [==============================] - 8s 316ms/step - loss: 1.0001 - accuracy: 0.6206 - val_loss: 2.0375 - val_accuracy: 0.2834 Epoch 6/30 26/26 [==============================] - 8s 315ms/step - loss: 0.9602 - accuracy: 0.6355 - val_loss: 1.4412 - val_accuracy: 0.3978 Epoch 7/30 26/26 [==============================] - 8s 316ms/step - loss: 0.9418 - accuracy: 0.6461 - val_loss: 1.5257 - val_accuracy: 0.4305 Epoch 8/30 26/26 [==============================] - 8s 316ms/step - loss: 0.8911 - accuracy: 0.6649 - val_loss: 1.1530 - val_accuracy: 0.5858 Epoch 9/30 26/26 [==============================] - 8s 316ms/step - loss: 0.8834 - accuracy: 0.6694 - val_loss: 1.2026 - val_accuracy: 0.5531 Epoch 10/30 26/26 [==============================] - 8s 316ms/step - loss: 0.8752 - accuracy: 0.6724 - val_loss: 1.4917 - val_accuracy: 0.5695 Epoch 11/30 26/26 [==============================] - 8s 316ms/step - loss: 0.8690 - accuracy: 0.6594 - val_loss: 1.4115 - val_accuracy: 0.6022 Epoch 12/30 26/26 [==============================] - 8s 314ms/step - loss: 0.8586 - accuracy: 0.6761 - val_loss: 1.0692 - val_accuracy: 0.6349 Epoch 13/30 26/26 [==============================] - 8s 315ms/step - loss: 0.8120 - accuracy: 0.6894 - val_loss: 1.5233 - val_accuracy: 0.6567 Epoch 14/30 26/26 [==============================] - 8s 316ms/step - loss: 0.8275 - accuracy: 0.6857 - val_loss: 1.9079 - val_accuracy: 0.5804 Epoch 15/30 26/26 [==============================] - 8s 316ms/step - loss: 0.7624 - accuracy: 0.7127 - val_loss: 0.9543 - val_accuracy: 0.6540 Epoch 16/30 26/26 [==============================] - 8s 315ms/step - loss: 0.7595 - accuracy: 0.7266 - val_loss: 4.5757 - val_accuracy: 0.4877 Epoch 17/30 26/26 [==============================] - 8s 315ms/step - loss: 0.7577 - accuracy: 0.7154 - val_loss: 1.8411 - val_accuracy: 0.5749 Epoch 18/30 26/26 [==============================] - 8s 316ms/step - loss: 0.7596 - accuracy: 0.7163 - val_loss: 1.0660 - val_accuracy: 0.6703 Epoch 19/30 26/26 [==============================] - 8s 315ms/step - loss: 0.7492 - accuracy: 0.7160 - val_loss: 1.2462 - val_accuracy: 0.6485 Epoch 20/30 26/26 [==============================] - 8s 315ms/step - loss: 0.7269 - accuracy: 0.7330 - val_loss: 5.8287 - val_accuracy: 0.3379 Epoch 21/30 26/26 [==============================] - 8s 315ms/step - loss: 0.7193 - accuracy: 0.7275 - val_loss: 4.7058 - val_accuracy: 0.6049 Epoch 22/30 26/26 [==============================] - 8s 316ms/step - loss: 0.7251 - accuracy: 0.7318 - val_loss: 1.5608 - val_accuracy: 0.6485 Epoch 23/30 26/26 [==============================] - 8s 314ms/step - loss: 0.6888 - accuracy: 0.7466 - val_loss: 1.7914 - val_accuracy: 0.6240 Epoch 24/30 26/26 [==============================] - 8s 314ms/step - loss: 0.7051 - 
accuracy: 0.7339 - val_loss: 2.0918 - val_accuracy: 0.6158 Epoch 25/30 26/26 [==============================] - 8s 315ms/step - loss: 0.6920 - accuracy: 0.7454 - val_loss: 0.7284 - val_accuracy: 0.7575 Epoch 26/30 26/26 [==============================] - 8s 316ms/step - loss: 0.6502 - accuracy: 0.7523 - val_loss: 2.5474 - val_accuracy: 0.5313 Epoch 27/30 26/26 [==============================] - 8s 315ms/step - loss: 0.7101 - accuracy: 0.7330 - val_loss: 26.8117 - val_accuracy: 0.3297 Epoch 28/30 26/26 [==============================] - 8s 315ms/step - loss: 0.6632 - accuracy: 0.7548 - val_loss: 20.1011 - val_accuracy: 0.3243 Epoch 29/30 26/26 [==============================] - 8s 315ms/step - loss: 0.6682 - accuracy: 0.7505 - val_loss: 11.5872 - val_accuracy: 0.3297 Epoch 30/30 26/26 [==============================] - 8s 315ms/step - loss: 0.6758 - accuracy: 0.7514 - val_loss: 5.7229 - val_accuracy: 0.4305 3/3 [==============================] - 0s 95ms/step - loss: 5.7229 - accuracy: 0.4305 Top-1 accuracy on the validation set: 43.05%. As we can notice from the above cells, FixRes leads to a better performance. Another advantage of FixRes is the improved total training time and reduction in GPU memory usage. FixRes is model-agnostic, you can use it on any image classification model to potentially boost performance. You can find more results here that were gathered by running the same code with different random seeds. How to obtain a class activation heatmap for an image classification model. Adapted from Deep Learning with Python (2017). Setup import numpy as np import tensorflow as tf from tensorflow import keras # Display from IPython.display import Image, display import matplotlib.pyplot as plt import matplotlib.cm as cm Configurable parameters You can change these to another model. To get the values for last_conv_layer_name use model.summary() to see the names of all layers in the model. 
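If you adapt this to a different architecture, a quick way to find a suitable last_conv_layer_name is to print the names of the last few layers (a minimal sketch; it instantiates the same Xception architecture configured below, with weights=None just to inspect layer names):

inspect_model = keras.applications.xception.Xception(weights=None)

# The last convolutional activation appears right before the pooling and
# classification layers at the end of `inspect_model.summary()`.
for layer in inspect_model.layers[-5:]:
    print(layer.name)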
model_builder = keras.applications.xception.Xception img_size = (299, 299) preprocess_input = keras.applications.xception.preprocess_input decode_predictions = keras.applications.xception.decode_predictions last_conv_layer_name = \"block14_sepconv2_act\" # The local path to our target image img_path = keras.utils.get_file( \"african_elephant.jpg\", \"https://i.imgur.com/Bvro0YD.png\" ) display(Image(img_path)) jpeg The Grad-CAM algorithm def get_img_array(img_path, size): # `img` is a PIL image of size 299x299 img = keras.preprocessing.image.load_img(img_path, target_size=size) # `array` is a float32 Numpy array of shape (299, 299, 3) array = keras.preprocessing.image.img_to_array(img) # We add a dimension to transform our array into a \"batch\" # of size (1, 299, 299, 3) array = np.expand_dims(array, axis=0) return array def make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=None): # First, we create a model that maps the input image to the activations # of the last conv layer as well as the output predictions grad_model = tf.keras.models.Model( [model.inputs], [model.get_layer(last_conv_layer_name).output, model.output] ) # Then, we compute the gradient of the top predicted class for our input image # with respect to the activations of the last conv layer with tf.GradientTape() as tape: last_conv_layer_output, preds = grad_model(img_array) if pred_index is None: pred_index = tf.argmax(preds[0]) class_channel = preds[:, pred_index] # This is the gradient of the output neuron (top predicted or chosen) # with regard to the output feature map of the last conv layer grads = tape.gradient(class_channel, last_conv_layer_output) # This is a vector where each entry is the mean intensity of the gradient # over a specific feature map channel pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2)) # We multiply each channel in the feature map array # by \"how important this channel is\" with regard to the top predicted class # then sum all the channels to obtain the heatmap class activation last_conv_layer_output = last_conv_layer_output[0] heatmap = last_conv_layer_output @ pooled_grads[..., tf.newaxis] heatmap = tf.squeeze(heatmap) # For visualization purpose, we will also normalize the heatmap between 0 & 1 heatmap = tf.maximum(heatmap, 0) / tf.math.reduce_max(heatmap) return heatmap.numpy() Let's test-drive it # Prepare image img_array = preprocess_input(get_img_array(img_path, size=img_size)) # Make model model = model_builder(weights=\"imagenet\") # Remove last layer's softmax model.layers[-1].activation = None # Print what the top predicted class is preds = model.predict(img_array) print(\"Predicted:\", decode_predictions(preds, top=1)[0]) # Generate class activation heatmap heatmap = make_gradcam_heatmap(img_array, model, last_conv_layer_name) # Display heatmap plt.matshow(heatmap) plt.show() Predicted: [('n02504458', 'African_elephant', 9.862388)] png Create a superimposed visualization def save_and_display_gradcam(img_path, heatmap, cam_path=\"cam.jpg\", alpha=0.4): # Load the original image img = keras.preprocessing.image.load_img(img_path) img = keras.preprocessing.image.img_to_array(img) # Rescale heatmap to a range 0-255 heatmap = np.uint8(255 * heatmap) # Use jet colormap to colorize heatmap jet = cm.get_cmap(\"jet\") # Use RGB values of the colormap jet_colors = jet(np.arange(256))[:, :3] jet_heatmap = jet_colors[heatmap] # Create an image with RGB colorized heatmap jet_heatmap = keras.preprocessing.image.array_to_img(jet_heatmap) jet_heatmap = 
jet_heatmap.resize((img.shape[1], img.shape[0])) jet_heatmap = keras.preprocessing.image.img_to_array(jet_heatmap) # Superimpose the heatmap on original image superimposed_img = jet_heatmap * alpha + img superimposed_img = keras.preprocessing.image.array_to_img(superimposed_img) # Save the superimposed image superimposed_img.save(cam_path) # Display Grad CAM display(Image(cam_path)) save_and_display_gradcam(img_path, heatmap) jpeg Let's try another image We will see how Grad-CAM explains the model's outputs for a multi-label image. Let's try an image with a cat and a dog together, and see how Grad-CAM behaves. img_path = keras.utils.get_file( \"cat_and_dog.jpg\", \"https://storage.googleapis.com/petbacker/images/blog/2017/dog-and-cat-cover.jpg\", ) display(Image(img_path)) # Prepare image img_array = preprocess_input(get_img_array(img_path, size=img_size)) # Print what the two top predicted classes are preds = model.predict(img_array) print(\"Predicted:\", decode_predictions(preds, top=2)[0]) jpeg Predicted: [('n02112137', 'chow', 4.611241), ('n02124075', 'Egyptian_cat', 4.3817368)] We generate a class activation heatmap for \"chow\" (class index 260): heatmap = make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=260) save_and_display_gradcam(img_path, heatmap) jpeg We generate a class activation heatmap for \"Egyptian cat\" (class index 285): heatmap = make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=285) save_and_display_gradcam(img_path, heatmap) jpeg Implement Gradient Centralization to improve training performance of DNNs. Introduction This example implements Gradient Centralization, a new optimization technique for Deep Neural Networks by Yong et al., and demonstrates it on Laurence Moroney's Horses or Humans Dataset. Gradient Centralization can both speed up the training process and improve the final generalization performance of DNNs. It operates directly on gradients by centralizing the gradient vectors to have zero mean. Gradient Centralization moreover improves the Lipschitzness of the loss function and its gradient, so that the training process becomes more efficient and stable. This example requires TensorFlow 2.2 or higher as well as tensorflow_datasets, which can be installed with this command: pip install tensorflow-datasets We will be implementing Gradient Centralization in this example, but you could also use it very easily with a package I built, gradient-centralization-tf. Setup from time import time import tensorflow as tf import tensorflow_datasets as tfds from tensorflow.keras import layers from tensorflow.keras.optimizers import RMSprop Prepare the data For this example, we will be using the Horses or Humans dataset. num_classes = 2 input_shape = (300, 300, 3) dataset_name = \"horses_or_humans\" batch_size = 128 AUTOTUNE = tf.data.AUTOTUNE (train_ds, test_ds), metadata = tfds.load( name=dataset_name, split=[tfds.Split.TRAIN, tfds.Split.TEST], with_info=True, as_supervised=True, ) print(f\"Image shape: {metadata.features['image'].shape}\") print(f\"Training images: {metadata.splits['train'].num_examples}\") print(f\"Test images: {metadata.splits['test'].num_examples}\") Image shape: (300, 300, 3) Training images: 1027 Test images: 256 Use Data Augmentation We will rescale the data to [0, 1] and perform simple augmentations on it.
rescale = layers.Rescaling(1.0 / 255) data_augmentation = tf.keras.Sequential( [ layers.RandomFlip(\"horizontal_and_vertical\"), layers.RandomRotation(0.3), layers.RandomZoom(0.2), ] ) def prepare(ds, shuffle=False, augment=False): # Rescale dataset ds = ds.map(lambda x, y: (rescale(x), y), num_parallel_calls=AUTOTUNE) if shuffle: ds = ds.shuffle(1024) # Batch dataset ds = ds.batch(batch_size) # Use data augmentation only on the training set if augment: ds = ds.map( lambda x, y: (data_augmentation(x, training=True), y), num_parallel_calls=AUTOTUNE, ) # Use buffered prefetching return ds.prefetch(buffer_size=AUTOTUNE) Rescale and augment the data train_ds = prepare(train_ds, shuffle=True, augment=True) test_ds = prepare(test_ds) Define a model In this section we will define a convolutional neural network. model = tf.keras.Sequential( [ layers.Conv2D(16, (3, 3), activation=\"relu\", input_shape=(300, 300, 3)), layers.MaxPooling2D(2, 2), layers.Conv2D(32, (3, 3), activation=\"relu\"), layers.Dropout(0.5), layers.MaxPooling2D(2, 2), layers.Conv2D(64, (3, 3), activation=\"relu\"), layers.Dropout(0.5), layers.MaxPooling2D(2, 2), layers.Conv2D(64, (3, 3), activation=\"relu\"), layers.MaxPooling2D(2, 2), layers.Conv2D(64, (3, 3), activation=\"relu\"), layers.MaxPooling2D(2, 2), layers.Flatten(), layers.Dropout(0.5), layers.Dense(512, activation=\"relu\"), layers.Dense(1, activation=\"sigmoid\"), ] ) Implement Gradient Centralization We will now subclass the RMSprop optimizer, modifying its get_gradients() method so that it implements Gradient Centralization. At a high level, the idea is as follows: once we obtain the gradients of a Dense or Convolution layer through backpropagation, we compute the mean of the gradient column vectors of the weight matrix and then remove that mean from each column vector. The experiments in the paper on various applications, including general image classification, fine-grained image classification, detection and segmentation, and Person ReID, demonstrate that GC can consistently improve the performance of DNN learning. Also, for simplicity we are not implementing gradient clipping functionality at the moment, but it is quite easy to add. For now we are only creating a subclass for the RMSprop optimizer, but you could easily reproduce this for any other optimizer, or for a custom optimizer, in the same way. We will be using this class later when we train a model with Gradient Centralization. class GCRMSprop(RMSprop): def get_gradients(self, loss, params): # We here just provide a modified get_gradients() function since we are # trying to just compute the centralized gradients. grads = [] gradients = super().get_gradients(loss, params) for grad in gradients: grad_len = len(grad.shape) if grad_len > 1: # Centralize the gradient: subtract its mean computed over # all axes except the last one. axis = list(range(grad_len - 1)) grad -= tf.reduce_mean(grad, axis=axis, keepdims=True) grads.append(grad) return grads optimizer = GCRMSprop(learning_rate=1e-4) Training utilities We will also create a callback which allows us to easily measure the total training time and the time taken for each epoch, since we are interested in comparing the effect of Gradient Centralization on the model we built above.
class TimeHistory(tf.keras.callbacks.Callback): def on_train_begin(self, logs={}): self.times = [] def on_epoch_begin(self, epoch, logs={}): self.epoch_time_start = time() def on_epoch_end(self, epoch, logs={}): self.times.append(time() - self.epoch_time_start) Train the model without GC We now train the model we built earlier without Gradient Centralization, so that we can compare its training performance to that of the model trained with Gradient Centralization. time_callback_no_gc = TimeHistory() model.compile( loss=\"binary_crossentropy\", optimizer=RMSprop(learning_rate=1e-4), metrics=[\"accuracy\"], ) model.summary() Model: \"sequential_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 298, 298, 16) 448 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 149, 149, 16) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 147, 147, 32) 4640 _________________________________________________________________ dropout (Dropout) (None, 147, 147, 32) 0 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 73, 73, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 71, 71, 64) 18496 _________________________________________________________________ dropout_1 (Dropout) (None, 71, 71, 64) 0 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 35, 35, 64) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 33, 33, 64) 36928 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 16, 16, 64) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 14, 14, 64) 36928 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 7, 7, 64) 0 _________________________________________________________________ flatten (Flatten) (None, 3136) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 3136) 0 _________________________________________________________________ dense (Dense) (None, 512) 1606144 _________________________________________________________________ dense_1 (Dense) (None, 1) 513 ================================================================= Total params: 1,704,097 Trainable params: 1,704,097 Non-trainable params: 0 _________________________________________________________________ We also save the history, since we later want to compare the model trained with and without Gradient Centralization. history_no_gc = model.fit( train_ds, epochs=10, verbose=1, callbacks=[time_callback_no_gc] ) Epoch 1/10 9/9 [==============================] - 5s 571ms/step - loss: 0.7427 - accuracy: 0.5073 Epoch 2/10 9/9 [==============================] - 6s 667ms/step - loss: 0.6757 - accuracy: 0.5433 Epoch 3/10 9/9 [==============================] - 6s 660ms/step - loss: 0.6616 - accuracy: 0.6144 Epoch 4/10 9/9 [==============================] - 6s 642ms/step - loss: 0.6598 - accuracy: 0.6203 Epoch 5/10 9/9 [==============================] - 6s 666ms/step - loss: 0.6782 - accuracy: 0.6329 Epoch 6/10 9/9 [==============================] - 6s 655ms/step - loss: 0.6550 - accuracy: 0.6524 Epoch 7/10 9/9 [==============================] - 6s
645ms/step - loss: 0.6157 - accuracy: 0.7186 Epoch 8/10 9/9 [==============================] - 6s 654ms/step - loss: 0.6095 - accuracy: 0.6913 Epoch 9/10 9/9 [==============================] - 6s 677ms/step - loss: 0.5880 - accuracy: 0.7147 Epoch 10/10 9/9 [==============================] - 6s 663ms/step - loss: 0.5814 - accuracy: 0.6933 Train the model with GC We will now train the same model, this time using Gradient Centralization, notice our optimizer is the one using Gradient Centralization this time. time_callback_gc = TimeHistory() model.compile(loss=\"binary_crossentropy\", optimizer=optimizer, metrics=[\"accuracy\"]) model.summary() history_gc = model.fit(train_ds, epochs=10, verbose=1, callbacks=[time_callback_gc]) Model: \"sequential_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 298, 298, 16) 448 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 149, 149, 16) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 147, 147, 32) 4640 _________________________________________________________________ dropout (Dropout) (None, 147, 147, 32) 0 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 73, 73, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 71, 71, 64) 18496 _________________________________________________________________ dropout_1 (Dropout) (None, 71, 71, 64) 0 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 35, 35, 64) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 33, 33, 64) 36928 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 16, 16, 64) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 14, 14, 64) 36928 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 7, 7, 64) 0 _________________________________________________________________ flatten (Flatten) (None, 3136) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 3136) 0 _________________________________________________________________ dense (Dense) (None, 512) 1606144 _________________________________________________________________ dense_1 (Dense) (None, 1) 513 ================================================================= Total params: 1,704,097 Trainable params: 1,704,097 Non-trainable params: 0 _________________________________________________________________ Epoch 1/10 9/9 [==============================] - 6s 673ms/step - loss: 0.6022 - accuracy: 0.7147 Epoch 2/10 9/9 [==============================] - 6s 662ms/step - loss: 0.5385 - accuracy: 0.7371 Epoch 3/10 9/9 [==============================] - 6s 673ms/step - loss: 0.4832 - accuracy: 0.7945 Epoch 4/10 9/9 [==============================] - 6s 645ms/step - loss: 0.4692 - accuracy: 0.7799 Epoch 5/10 9/9 [==============================] - 6s 720ms/step - loss: 0.4792 - accuracy: 0.7799 Epoch 6/10 9/9 [==============================] - 6s 658ms/step - loss: 0.4623 - accuracy: 0.7838 Epoch 7/10 9/9 [==============================] - 6s 651ms/step - loss: 0.4413 - accuracy: 0.8072 Epoch 8/10 9/9 [==============================] - 6s 682ms/step - 
loss: 0.4542 - accuracy: 0.8014 Epoch 9/10 9/9 [==============================] - 6s 649ms/step - loss: 0.4235 - accuracy: 0.8053 Epoch 10/10 9/9 [==============================] - 6s 686ms/step - loss: 0.4445 - accuracy: 0.7936 Comparing performance print(\"Not using Gradient Centralization\") print(f\"Loss: {history_no_gc.history['loss'][-1]}\") print(f\"Accuracy: {history_no_gc.history['accuracy'][-1]}\") print(f\"Training Time: {sum(time_callback_no_gc.times)}\") print(\"Using Gradient Centralization\") print(f\"Loss: {history_gc.history['loss'][-1]}\") print(f\"Accuracy: {history_gc.history['accuracy'][-1]}\") print(f\"Training Time: {sum(time_callback_gc.times)}\") Not using Gradient Centralization Loss: 0.5814347863197327 Accuracy: 0.6932814121246338 Training Time: 136.35903406143188 Using Gradient Centralization Loss: 0.4444807469844818 Accuracy: 0.7935734987258911 Training Time: 131.61780261993408 Readers are encouraged to try out Gradient Centralization on different datasets from different domains and experiment with its effect. You are strongly advised to check out the original paper as well - the authors present several studies on Gradient Centralization showing how it can improve general performance, generalization, training time, and efficiency. Many thanks to Ali Mustufa Shaikh for reviewing this implementation. Training a handwriting recognition model with variable-length sequences. Introduction This example shows how the Captcha OCR example can be extended to the IAM Dataset, which has variable-length ground-truth targets. Each sample in the dataset is an image of some handwritten text, and its corresponding target is the string present in the image. The IAM Dataset is widely used across many OCR benchmarks, so we hope this example can serve as a good starting point for building OCR systems. Data collection !wget -q https://git.io/J0fjL -O IAM_Words.zip !unzip -qq IAM_Words.zip !mkdir data !mkdir data/words !tar -xf IAM_Words/words.tgz -C data/words !mv IAM_Words/words.txt data Preview how the dataset is organized. Lines starting with \"#\" are just metadata information. !head -20 data/words.txt #--- words.txt ---------------------------------------------------------------# # # iam database word information # # format: a01-000u-00-00 ok 154 1 408 768 27 51 AT A # # a01-000u-00-00 -> word id for line 00 in form a01-000u # ok -> result of word segmentation # ok: word was correctly # er: segmentation of word can be bad # # 154 -> graylevel to binarize the line containing this word # 1 -> number of components for this word # 408 768 27 51 -> bounding box around this word in x,y,w,h format # AT -> the grammatical tag for this word, see the # file tagset.txt for an explanation # A -> the transcription for this word # a01-000u-00-00 ok 154 408 768 27 51 AT A a01-000u-00-01 ok 154 507 766 213 48 NN MOVE Imports from tensorflow.keras.layers.experimental.preprocessing import StringLookup from tensorflow import keras import matplotlib.pyplot as plt import tensorflow as tf import numpy as np import os np.random.seed(42) tf.random.set_seed(42) Dataset splitting base_path = \"data\" words_list = [] words = open(f\"{base_path}/words.txt\", \"r\").readlines() for line in words: if line[0] == \"#\": continue if line.split(\" \")[1] != \"err\": # We don't need to deal with errored entries. words_list.append(line) len(words_list) np.random.shuffle(words_list) We will split the dataset into three subsets with a 90:5:5 ratio (train:validation:test).
split_idx = int(0.9 * len(words_list)) train_samples = words_list[:split_idx] test_samples = words_list[split_idx:] val_split_idx = int(0.5 * len(test_samples)) validation_samples = test_samples[:val_split_idx] test_samples = test_samples[val_split_idx:] assert len(words_list) == len(train_samples) + len(validation_samples) + len( test_samples ) print(f\"Total training samples: {len(train_samples)}\") print(f\"Total validation samples: {len(validation_samples)}\") print(f\"Total test samples: {len(test_samples)}\") Total training samples: 86810 Total validation samples: 4823 Total test samples: 4823 Data input pipeline We start building our data input pipeline by first preparing the image paths. base_image_path = os.path.join(base_path, \"words\") def get_image_paths_and_labels(samples): paths = [] corrected_samples = [] for (i, file_line) in enumerate(samples): line_split = file_line.strip() line_split = line_split.split(\" \") # Each line split will have this format for the corresponding image: # part1/part1-part2/part1-part2-part3.png image_name = line_split[0] partI = image_name.split(\"-\")[0] partII = image_name.split(\"-\")[1] img_path = os.path.join( base_image_path, partI, partI + \"-\" + partII, image_name + \".png\" ) if os.path.getsize(img_path): paths.append(img_path) corrected_samples.append(file_line.split(\"\n\")[0]) return paths, corrected_samples train_img_paths, train_labels = get_image_paths_and_labels(train_samples) validation_img_paths, validation_labels = get_image_paths_and_labels(validation_samples) test_img_paths, test_labels = get_image_paths_and_labels(test_samples) Then we prepare the ground-truth labels. # Find maximum length and the size of the vocabulary in the training data. train_labels_cleaned = [] characters = set() max_len = 0 for label in train_labels: label = label.split(\" \")[-1].strip() for char in label: characters.add(char) max_len = max(max_len, len(label)) train_labels_cleaned.append(label) print(\"Maximum length: \", max_len) print(\"Vocab size: \", len(characters)) # Check some label samples. train_labels_cleaned[:10] Maximum length: 21 Vocab size: 78 ['sure', 'he', 'during', 'of', 'booty', 'gastronomy', 'boy', 'The', 'and', 'in'] Now we clean the validation and the test labels as well. def clean_labels(labels): cleaned_labels = [] for label in labels: label = label.split(\" \")[-1].strip() cleaned_labels.append(label) return cleaned_labels validation_labels_cleaned = clean_labels(validation_labels) test_labels_cleaned = clean_labels(test_labels) Building the character vocabulary Keras provides different preprocessing layers to deal with different modalities of data. This guide provides a comprehensive introduction. Our example involves preprocessing labels at the character level. This means that if there are two labels, e.g. \"cat\" and \"dog\", then our character vocabulary should be {a, c, d, g, o, t} (without any special tokens). We use the StringLookup layer for this purpose. AUTOTUNE = tf.data.AUTOTUNE # Mapping characters to integers. char_to_num = StringLookup(vocabulary=list(characters), mask_token=None) # Mapping integers back to original characters. num_to_char = StringLookup( vocabulary=char_to_num.get_vocabulary(), mask_token=None, invert=True ) Resizing images without distortion Instead of square images, many OCR models work with rectangular images. This will become clearer in a moment when we visualize a few samples from the dataset.
While aspect-unaware resizing of square images does not introduce a significant amount of distortion, this is not the case for rectangular images. But resizing images to a uniform size is a requirement for mini-batching. So we need to perform our resizing such that the following criteria are met: Aspect ratio is preserved. Content of the images is not affected. def distortion_free_resize(image, img_size): w, h = img_size image = tf.image.resize(image, size=(h, w), preserve_aspect_ratio=True) # Compute the amount of padding needed. pad_height = h - tf.shape(image)[0] pad_width = w - tf.shape(image)[1] # Only necessary if you want to do the same amount of padding on both sides. if pad_height % 2 != 0: height = pad_height // 2 pad_height_top = height + 1 pad_height_bottom = height else: pad_height_top = pad_height_bottom = pad_height // 2 if pad_width % 2 != 0: width = pad_width // 2 pad_width_left = width + 1 pad_width_right = width else: pad_width_left = pad_width_right = pad_width // 2 image = tf.pad( image, paddings=[ [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right], [0, 0], ], ) image = tf.transpose(image, perm=[1, 0, 2]) image = tf.image.flip_left_right(image) return image If we just went with plain resizing, the images would look like so: Notice how this resizing would have introduced unnecessary stretching. Putting the utilities together batch_size = 64 padding_token = 99 image_width = 128 image_height = 32 def preprocess_image(image_path, img_size=(image_width, image_height)): image = tf.io.read_file(image_path) image = tf.image.decode_png(image, 1) image = distortion_free_resize(image, img_size) image = tf.cast(image, tf.float32) / 255.0 return image def vectorize_label(label): label = char_to_num(tf.strings.unicode_split(label, input_encoding=\"UTF-8\")) length = tf.shape(label)[0] pad_amount = max_len - length label = tf.pad(label, paddings=[[0, pad_amount]], constant_values=padding_token) return label def process_images_labels(image_path, label): image = preprocess_image(image_path) label = vectorize_label(label) return {\"image\": image, \"label\": label} def prepare_dataset(image_paths, labels): dataset = tf.data.Dataset.from_tensor_slices((image_paths, labels)).map( process_images_labels, num_parallel_calls=AUTOTUNE ) return dataset.batch(batch_size).cache().prefetch(AUTOTUNE) Prepare tf.data.Dataset objects train_ds = prepare_dataset(train_img_paths, train_labels_cleaned) validation_ds = prepare_dataset(validation_img_paths, validation_labels_cleaned) test_ds = prepare_dataset(test_img_paths, test_labels_cleaned) Visualize a few samples for data in train_ds.take(1): images, labels = data[\"image\"], data[\"label\"] _, ax = plt.subplots(4, 4, figsize=(15, 8)) for i in range(16): img = images[i] img = tf.image.flip_left_right(img) img = tf.transpose(img, perm=[1, 0, 2]) img = (img * 255.0).numpy().clip(0, 255).astype(np.uint8) img = img[:, :, 0] # Gather indices where label != padding_token. label = labels[i] indices = tf.gather(label, tf.where(tf.math.not_equal(label, padding_token))) # Convert to string. label = tf.strings.reduce_join(num_to_char(indices)) label = label.numpy().decode(\"utf-8\") ax[i // 4, i % 4].imshow(img, cmap=\"gray\") ax[i // 4, i % 4].set_title(label) ax[i // 4, i % 4].axis(\"off\") plt.show() png You will notice that the content of the original image is kept as faithfully as possible and has been padded accordingly. Model Our model will use the CTC loss as an endpoint layer.
For a detailed understanding of the CTC loss, refer to this post. class CTCLayer(keras.layers.Layer): def __init__(self, name=None): super().__init__(name=name) self.loss_fn = keras.backend.ctc_batch_cost def call(self, y_true, y_pred): batch_len = tf.cast(tf.shape(y_true)[0], dtype=\"int64\") input_length = tf.cast(tf.shape(y_pred)[1], dtype=\"int64\") label_length = tf.cast(tf.shape(y_true)[1], dtype=\"int64\") input_length = input_length * tf.ones(shape=(batch_len, 1), dtype=\"int64\") label_length = label_length * tf.ones(shape=(batch_len, 1), dtype=\"int64\") loss = self.loss_fn(y_true, y_pred, input_length, label_length) self.add_loss(loss) # At test time, just return the computed predictions. return y_pred def build_model(): # Inputs to the model input_img = keras.Input(shape=(image_width, image_height, 1), name=\"image\") labels = keras.layers.Input(name=\"label\", shape=(None,)) # First conv block. x = keras.layers.Conv2D( 32, (3, 3), activation=\"relu\", kernel_initializer=\"he_normal\", padding=\"same\", name=\"Conv1\", )(input_img) x = keras.layers.MaxPooling2D((2, 2), name=\"pool1\")(x) # Second conv block. x = keras.layers.Conv2D( 64, (3, 3), activation=\"relu\", kernel_initializer=\"he_normal\", padding=\"same\", name=\"Conv2\", )(x) x = keras.layers.MaxPooling2D((2, 2), name=\"pool2\")(x) # We have used two max pool with pool size and strides 2. # Hence, downsampled feature maps are 4x smaller. The number of # filters in the last layer is 64. Reshape accordingly before # passing the output to the RNN part of the model. new_shape = ((image_width // 4), (image_height // 4) * 64) x = keras.layers.Reshape(target_shape=new_shape, name=\"reshape\")(x) x = keras.layers.Dense(64, activation=\"relu\", name=\"dense1\")(x) x = keras.layers.Dropout(0.2)(x) # RNNs. x = keras.layers.Bidirectional( keras.layers.LSTM(128, return_sequences=True, dropout=0.25) )(x) x = keras.layers.Bidirectional( keras.layers.LSTM(64, return_sequences=True, dropout=0.25) )(x) # +2 is to account for the two special tokens introduced by the CTC loss. # The recommendation comes here: https://git.io/J0eXP. x = keras.layers.Dense( len(char_to_num.get_vocabulary()) + 2, activation=\"softmax\", name=\"dense2\" )(x) # Add CTC layer for calculating CTC loss at each step. output = CTCLayer(name=\"ctc_loss\")(labels, x) # Define the model. model = keras.models.Model( inputs=[input_img, labels], outputs=output, name=\"handwriting_recognizer\" ) # Optimizer. opt = keras.optimizers.Adam() # Compile the model and return. model.compile(optimizer=opt) return model # Get the model. 
model = build_model() model.summary() Model: \"handwriting_recognizer\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== image (InputLayer) [(None, 128, 32, 1)] 0 __________________________________________________________________________________________________ Conv1 (Conv2D) (None, 128, 32, 32) 320 image[0][0] __________________________________________________________________________________________________ pool1 (MaxPooling2D) (None, 64, 16, 32) 0 Conv1[0][0] __________________________________________________________________________________________________ Conv2 (Conv2D) (None, 64, 16, 64) 18496 pool1[0][0] __________________________________________________________________________________________________ pool2 (MaxPooling2D) (None, 32, 8, 64) 0 Conv2[0][0] __________________________________________________________________________________________________ reshape (Reshape) (None, 32, 512) 0 pool2[0][0] __________________________________________________________________________________________________ dense1 (Dense) (None, 32, 64) 32832 reshape[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 32, 64) 0 dense1[0][0] __________________________________________________________________________________________________ bidirectional (Bidirectional) (None, 32, 256) 197632 dropout[0][0] __________________________________________________________________________________________________ bidirectional_1 (Bidirectional) (None, 32, 128) 164352 bidirectional[0][0] __________________________________________________________________________________________________ label (InputLayer) [(None, None)] 0 __________________________________________________________________________________________________ dense2 (Dense) (None, 32, 81) 10449 bidirectional_1[0][0] __________________________________________________________________________________________________ ctc_loss (CTCLayer) (None, 32, 81) 0 label[0][0] dense2[0][0] ================================================================================================== Total params: 424,081 Trainable params: 424,081 Non-trainable params: 0 __________________________________________________________________________________________________ Evaluation metric Edit Distance is the most widely used metric for evaluating OCR models. In this section, we will implement it and use it as a callback to monitor our model. We first segregate the validation images and their labels for convenience. validation_images = [] validation_labels = [] for batch in validation_ds: validation_images.append(batch[\"image\"]) validation_labels.append(batch[\"label\"]) Now, we create a callback to monitor the edit distances. def calculate_edit_distance(labels, predictions): # Get a single batch and convert its labels to sparse tensors. saprse_labels = tf.cast(tf.sparse.from_dense(labels), dtype=tf.int64) # Make predictions and convert them to sparse tensors. input_len = np.ones(predictions.shape[0]) * predictions.shape[1] predictions_decoded = keras.backend.ctc_decode( predictions, input_length=input_len, greedy=True )[0][0][:, :max_len] sparse_predictions = tf.cast( tf.sparse.from_dense(predictions_decoded), dtype=tf.int64 ) # Compute individual edit distances and average them out. 
edit_distances = tf.edit_distance( sparse_predictions, saprse_labels, normalize=False ) return tf.reduce_mean(edit_distances) class EditDistanceCallback(keras.callbacks.Callback): def __init__(self, pred_model): super().__init__() self.prediction_model = pred_model def on_epoch_end(self, epoch, logs=None): edit_distances = [] for i in range(len(validation_images)): labels = validation_labels[i] predictions = self.prediction_model.predict(validation_images[i]) edit_distances.append(calculate_edit_distance(labels, predictions).numpy()) print( f\"Mean edit distance for epoch {epoch + 1}: {np.mean(edit_distances):.4f}\" ) Training Now we are ready to kick off model training. epochs = 10 # To get good results this should be at least 50. model = build_model() prediction_model = keras.models.Model( model.get_layer(name=\"image\").input, model.get_layer(name=\"dense2\").output ) edit_distance_callback = EditDistanceCallback(prediction_model) # Train the model. history = model.fit( train_ds, validation_data=validation_ds, epochs=epochs, callbacks=[edit_distance_callback], ) Epoch 1/10 1357/1357 [==============================] - 89s 51ms/step - loss: 13.6670 - val_loss: 11.8041 Mean edit distance for epoch 1: 20.5117 Epoch 2/10 1357/1357 [==============================] - 48s 36ms/step - loss: 10.6864 - val_loss: 9.6994 Mean edit distance for epoch 2: 20.1167 Epoch 3/10 1357/1357 [==============================] - 48s 35ms/step - loss: 9.0437 - val_loss: 8.0355 Mean edit distance for epoch 3: 19.7270 Epoch 4/10 1357/1357 [==============================] - 48s 35ms/step - loss: 7.6098 - val_loss: 6.4239 Mean edit distance for epoch 4: 19.1106 Epoch 5/10 1357/1357 [==============================] - 48s 35ms/step - loss: 6.3194 - val_loss: 4.9814 Mean edit distance for epoch 5: 18.4894 Epoch 6/10 1357/1357 [==============================] - 48s 35ms/step - loss: 5.3417 - val_loss: 4.1307 Mean edit distance for epoch 6: 18.1909 Epoch 7/10 1357/1357 [==============================] - 48s 35ms/step - loss: 4.6396 - val_loss: 3.7706 Mean edit distance for epoch 7: 18.1224 Epoch 8/10 1357/1357 [==============================] - 48s 35ms/step - loss: 4.1926 - val_loss: 3.3682 Mean edit distance for epoch 8: 17.9387 Epoch 9/10 1357/1357 [==============================] - 48s 36ms/step - loss: 3.8532 - val_loss: 3.1829 Mean edit distance for epoch 9: 17.9074 Epoch 10/10 1357/1357 [==============================] - 49s 36ms/step - loss: 3.5769 - val_loss: 2.9221 Mean edit distance for epoch 10: 17.7960 Inference # A utility function to decode the output of the network. def decode_batch_predictions(pred): input_len = np.ones(pred.shape[0]) * pred.shape[1] # Use greedy search. For complex tasks, you can use beam search. results = keras.backend.ctc_decode(pred, input_length=input_len, greedy=True)[0][0][ :, :max_len ] # Iterate over the results and get back the text. output_text = [] for res in results: res = tf.gather(res, tf.where(tf.math.not_equal(res, -1))) res = tf.strings.reduce_join(num_to_char(res)).numpy().decode(\"utf-8\") output_text.append(res) return output_text # Let's check results on some test samples. 
for batch in test_ds.take(1): batch_images = batch[\"image\"] _, ax = plt.subplots(4, 4, figsize=(15, 8)) preds = prediction_model.predict(batch_images) pred_texts = decode_batch_predictions(preds) for i in range(16): img = batch_images[i] img = tf.image.flip_left_right(img) img = tf.transpose(img, perm=[1, 0, 2]) img = (img * 255.0).numpy().clip(0, 255).astype(np.uint8) img = img[:, :, 0] title = f\"Prediction: {pred_texts[i]}\" ax[i // 4, i % 4].imshow(img, cmap=\"gray\") ax[i // 4, i % 4].set_title(title) ax[i // 4, i % 4].axis(\"off\") plt.show() png To get better results, the model should be trained for at least 50 epochs. Final remarks The prediction_model is fully compatible with TensorFlow Lite. If you are interested, you can use it inside a mobile application. You may find this notebook to be useful in this regard. Not all the training examples are perfectly aligned, as observed in this example. This can hurt model performance for complex sequences. To this end, we can leverage Spatial Transformer Networks (Jaderberg et al.), which can help the model learn affine transformations that maximize its performance. Implement an image captioning model using a CNN and a Transformer. Setup import os import re import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.applications import efficientnet from tensorflow.keras.layers import TextVectorization seed = 111 np.random.seed(seed) tf.random.set_seed(seed) Download the dataset We will be using the Flickr8K dataset for this tutorial. This dataset comprises over 8,000 images, each paired with five different captions. !wget -q https://github.com/jbrownlee/Datasets/releases/download/Flickr8k/Flickr8k_Dataset.zip !wget -q https://github.com/jbrownlee/Datasets/releases/download/Flickr8k/Flickr8k_text.zip !unzip -qq Flickr8k_Dataset.zip !unzip -qq Flickr8k_text.zip !rm Flickr8k_Dataset.zip Flickr8k_text.zip # Path to the images IMAGES_PATH = \"Flicker8k_Dataset\" # Desired image dimensions IMAGE_SIZE = (299, 299) # Vocabulary size VOCAB_SIZE = 10000 # Fixed length allowed for any sequence SEQ_LENGTH = 25 # Dimension for the image embeddings and token embeddings EMBED_DIM = 512 # Per-layer units in the feed-forward network FF_DIM = 512 # Other training parameters BATCH_SIZE = 64 EPOCHS = 30 AUTOTUNE = tf.data.AUTOTUNE Preparing the dataset def load_captions_data(filename): \"\"\"Loads captions (text) data and maps them to corresponding images. Args: filename: Path to the text file containing caption data. Returns: caption_mapping: Dictionary mapping image names to the corresponding captions text_data: List containing all the available captions \"\"\" with open(filename) as caption_file: caption_data = caption_file.readlines() caption_mapping = {} text_data = [] images_to_skip = set() for line in caption_data: line = line.rstrip(\"\n\") # Image name and captions are separated using a tab img_name, caption = line.split(\"\t\") # Each image is repeated five times for the five different captions.
# Each image name has a suffix `#(caption_number)` img_name = img_name.split(\"#\")[0] img_name = os.path.join(IMAGES_PATH, img_name.strip()) # We will remove captions that are either too short or too long tokens = caption.strip().split() if len(tokens) < 5 or len(tokens) > SEQ_LENGTH: images_to_skip.add(img_name) continue if img_name.endswith(\"jpg\") and img_name not in images_to_skip: # We will add a start and an end token to each caption caption = \"<start> \" + caption.strip() + \" <end>\" text_data.append(caption) if img_name in caption_mapping: caption_mapping[img_name].append(caption) else: caption_mapping[img_name] = [caption] for img_name in images_to_skip: if img_name in caption_mapping: del caption_mapping[img_name] return caption_mapping, text_data def train_val_split(caption_data, train_size=0.8, shuffle=True): \"\"\"Split the captioning dataset into train and validation sets. Args: caption_data (dict): Dictionary containing the mapped caption data train_size (float): Fraction of the full dataset to use as training data shuffle (bool): Whether to shuffle the dataset before splitting Returns: Training and validation datasets as two separate dicts \"\"\" # 1. Get the list of all image names all_images = list(caption_data.keys()) # 2. Shuffle if necessary if shuffle: np.random.shuffle(all_images) # 3. Split into training and validation sets train_size = int(len(caption_data) * train_size) training_data = { img_name: caption_data[img_name] for img_name in all_images[:train_size] } validation_data = { img_name: caption_data[img_name] for img_name in all_images[train_size:] } # 4. Return the splits return training_data, validation_data # Load the dataset captions_mapping, text_data = load_captions_data(\"Flickr8k.token.txt\") # Split the dataset into training and validation sets train_data, valid_data = train_val_split(captions_mapping) print(\"Number of training samples: \", len(train_data)) print(\"Number of validation samples: \", len(valid_data)) Number of training samples: 6114 Number of validation samples: 1529 Vectorizing the text data We'll use the TextVectorization layer to vectorize the text data, that is to say, to turn the original strings into integer sequences where each integer represents the index of a word in a vocabulary. We will use a custom string standardization scheme (strip punctuation characters except < and >) and the default splitting scheme (split on whitespace).
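To make the intent of this standardization concrete, here is a tiny standalone sketch (using a hypothetical caption; the actual TextVectorization setup follows below) of what it should do to a string:

# A standalone illustration of the standardization described above:
# lowercase the caption and drop punctuation, but keep the <start>/<end> markers.
import re
import string

punct_to_strip = \"\".join(c for c in string.punctuation if c not in \"<>\")
sample = \"<start> A dog, running through the park! <end>\"
print(re.sub(\"[%s]\" % re.escape(punct_to_strip), \"\", sample.lower()))
# -> <start> a dog running through the park <end>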
def custom_standardization(input_string): lowercase = tf.strings.lower(input_string) return tf.strings.regex_replace(lowercase, \"[%s]\" % re.escape(strip_chars), \"\") # Punctuation characters to strip; we keep \"<\" and \">\" so that the <start>/<end> tokens survive. import string strip_chars = string.punctuation strip_chars = strip_chars.replace(\"<\", \"\") strip_chars = strip_chars.replace(\">\", \"\") vectorization = TextVectorization( max_tokens=VOCAB_SIZE, output_mode=\"int\", output_sequence_length=SEQ_LENGTH, standardize=custom_standardization, ) vectorization.adapt(text_data) # Data augmentation for image data image_augmentation = keras.Sequential( [ layers.RandomFlip(\"horizontal\"), layers.RandomRotation(0.2), layers.RandomContrast(0.3), ] ) 2021-09-17 05:17:57.047819: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.058177: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.106007: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.107650: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-09-17 05:17:57.134387: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.135154: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.135806: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.680010: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.680785: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.681439: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.682067: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14684 MB memory: -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:04.0, compute capability: 7.0 2021-09-17 05:17:58.229404: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2) Building a tf.data.Dataset pipeline for training We will generate pairs of images and corresponding captions using a
tf.data.Dataset object. The pipeline consists of two steps: reading the image from the disk, and tokenizing all five captions corresponding to the image. def decode_and_resize(img_path): img = tf.io.read_file(img_path) img = tf.image.decode_jpeg(img, channels=3) img = tf.image.resize(img, IMAGE_SIZE) img = tf.image.convert_image_dtype(img, tf.float32) return img def process_input(img_path, captions): return decode_and_resize(img_path), vectorization(captions) def make_dataset(images, captions): # Build a dataset of (image, tokenized captions) pairs. dataset = tf.data.Dataset.from_tensor_slices((images, captions)) dataset = dataset.shuffle(256) dataset = dataset.map(process_input, num_parallel_calls=AUTOTUNE) dataset = dataset.batch(BATCH_SIZE).prefetch(AUTOTUNE) return dataset # Pass the list of images and the list of corresponding captions train_dataset = make_dataset(list(train_data.keys()), list(train_data.values())) valid_dataset = make_dataset(list(valid_data.keys()), list(valid_data.values())) Building the model Our image captioning architecture consists of three models: A CNN: used to extract the image features A TransformerEncoder: The extracted image features are then passed to a Transformer-based encoder that generates a new representation of the inputs A TransformerDecoder: This model takes the encoder output and the text data (sequences) as inputs and tries to learn to generate the caption. def get_cnn_model(): base_model = efficientnet.EfficientNetB0( input_shape=(*IMAGE_SIZE, 3), include_top=False, weights=\"imagenet\", ) # We freeze our feature extractor base_model.trainable = False base_model_out = base_model.output base_model_out = layers.Reshape((-1, base_model_out.shape[-1]))(base_model_out) cnn_model = keras.models.Model(base_model.input, base_model_out) return cnn_model class TransformerEncoderBlock(layers.Layer): def __init__(self, embed_dim, dense_dim, num_heads, **kwargs): super().__init__(**kwargs) self.embed_dim = embed_dim self.dense_dim = dense_dim self.num_heads = num_heads self.attention_1 = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim, dropout=0.0 ) self.layernorm_1 = layers.LayerNormalization() self.layernorm_2 = layers.LayerNormalization() self.dense_1 = layers.Dense(embed_dim, activation=\"relu\") def call(self, inputs, training, mask=None): inputs = self.layernorm_1(inputs) inputs = self.dense_1(inputs) attention_output_1 = self.attention_1( query=inputs, value=inputs, key=inputs, attention_mask=None, training=training, ) out_1 = self.layernorm_2(inputs + attention_output_1) return out_1 class PositionalEmbedding(layers.Layer): def __init__(self, sequence_length, vocab_size, embed_dim, **kwargs): super().__init__(**kwargs) self.token_embeddings = layers.Embedding( input_dim=vocab_size, output_dim=embed_dim ) self.position_embeddings = layers.Embedding( input_dim=sequence_length, output_dim=embed_dim ) self.sequence_length = sequence_length self.vocab_size = vocab_size self.embed_dim = embed_dim self.embed_scale = tf.math.sqrt(tf.cast(embed_dim, tf.float32)) def call(self, inputs): length = tf.shape(inputs)[-1] positions = tf.range(start=0, limit=length, delta=1) embedded_tokens = self.token_embeddings(inputs) embedded_tokens = embedded_tokens * self.embed_scale embedded_positions = self.position_embeddings(positions) return
embedded_tokens + embedded_positions def compute_mask(self, inputs, mask=None): return tf.math.not_equal(inputs, 0) class TransformerDecoderBlock(layers.Layer): def __init__(self, embed_dim, ff_dim, num_heads, **kwargs): super().__init__(**kwargs) self.embed_dim = embed_dim self.ff_dim = ff_dim self.num_heads = num_heads self.attention_1 = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim, dropout=0.1 ) self.attention_2 = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim, dropout=0.1 ) self.ffn_layer_1 = layers.Dense(ff_dim, activation=\"relu\") self.ffn_layer_2 = layers.Dense(embed_dim) self.layernorm_1 = layers.LayerNormalization() self.layernorm_2 = layers.LayerNormalization() self.layernorm_3 = layers.LayerNormalization() self.embedding = PositionalEmbedding( embed_dim=EMBED_DIM, sequence_length=SEQ_LENGTH, vocab_size=VOCAB_SIZE ) self.out = layers.Dense(VOCAB_SIZE, activation=\"softmax\") self.dropout_1 = layers.Dropout(0.3) self.dropout_2 = layers.Dropout(0.5) self.supports_masking = True def call(self, inputs, encoder_outputs, training, mask=None): inputs = self.embedding(inputs) causal_mask = self.get_causal_attention_mask(inputs) if mask is not None: padding_mask = tf.cast(mask[:, :, tf.newaxis], dtype=tf.int32) combined_mask = tf.cast(mask[:, tf.newaxis, :], dtype=tf.int32) combined_mask = tf.minimum(combined_mask, causal_mask) attention_output_1 = self.attention_1( query=inputs, value=inputs, key=inputs, attention_mask=combined_mask, training=training, ) out_1 = self.layernorm_1(inputs + attention_output_1) attention_output_2 = self.attention_2( query=out_1, value=encoder_outputs, key=encoder_outputs, attention_mask=padding_mask, training=training, ) out_2 = self.layernorm_2(out_1 + attention_output_2) ffn_out = self.ffn_layer_1(out_2) ffn_out = self.dropout_1(ffn_out, training=training) ffn_out = self.ffn_layer_2(ffn_out) ffn_out = self.layernorm_3(ffn_out + out_2, training=training) ffn_out = self.dropout_2(ffn_out, training=training) preds = self.out(ffn_out) return preds def get_causal_attention_mask(self, inputs): input_shape = tf.shape(inputs) batch_size, sequence_length = input_shape[0], input_shape[1] i = tf.range(sequence_length)[:, tf.newaxis] j = tf.range(sequence_length) mask = tf.cast(i >= j, dtype=\"int32\") mask = tf.reshape(mask, (1, input_shape[1], input_shape[1])) mult = tf.concat( [tf.expand_dims(batch_size, -1), tf.constant([1, 1], dtype=tf.int32)], axis=0, ) return tf.tile(mask, mult) class ImageCaptioningModel(keras.Model): def __init__( self, cnn_model, encoder, decoder, num_captions_per_image=5, image_aug=None, ): super().__init__() self.cnn_model = cnn_model self.encoder = encoder self.decoder = decoder self.loss_tracker = keras.metrics.Mean(name=\"loss\") self.acc_tracker = keras.metrics.Mean(name=\"accuracy\") self.num_captions_per_image = num_captions_per_image self.image_aug = image_aug def calculate_loss(self, y_true, y_pred, mask): loss = self.loss(y_true, y_pred) mask = tf.cast(mask, dtype=loss.dtype) loss *= mask return tf.reduce_sum(loss) / tf.reduce_sum(mask) def calculate_accuracy(self, y_true, y_pred, mask): accuracy = tf.equal(y_true, tf.argmax(y_pred, axis=2)) accuracy = tf.math.logical_and(mask, accuracy) accuracy = tf.cast(accuracy, dtype=tf.float32) mask = tf.cast(mask, dtype=tf.float32) return tf.reduce_sum(accuracy) / tf.reduce_sum(mask) def _compute_caption_loss_and_acc(self, img_embed, batch_seq, training=True): encoder_out = self.encoder(img_embed, training=training) batch_seq_inp = batch_seq[:, :-1] 
batch_seq_true = batch_seq[:, 1:] mask = tf.math.not_equal(batch_seq_true, 0) batch_seq_pred = self.decoder( batch_seq_inp, encoder_out, training=training, mask=mask ) loss = self.calculate_loss(batch_seq_true, batch_seq_pred, mask) acc = self.calculate_accuracy(batch_seq_true, batch_seq_pred, mask) return loss, acc def train_step(self, batch_data): batch_img, batch_seq = batch_data batch_loss = 0 batch_acc = 0 if self.image_aug: batch_img = self.image_aug(batch_img) # 1. Get image embeddings img_embed = self.cnn_model(batch_img) # 2. Pass each of the five captions one by one to the decoder # along with the encoder outputs and compute the loss as well as accuracy # for each caption. for i in range(self.num_captions_per_image): with tf.GradientTape() as tape: loss, acc = self._compute_caption_loss_and_acc( img_embed, batch_seq[:, i, :], training=True ) # 3. Update loss and accuracy batch_loss += loss batch_acc += acc # 4. Get the list of all the trainable weights train_vars = ( self.encoder.trainable_variables + self.decoder.trainable_variables ) # 5. Get the gradients grads = tape.gradient(loss, train_vars) # 6. Update the trainable weights self.optimizer.apply_gradients(zip(grads, train_vars)) # 7. Update the trackers batch_acc /= float(self.num_captions_per_image) self.loss_tracker.update_state(batch_loss) self.acc_tracker.update_state(batch_acc) # 8. Return the loss and accuracy values return {\"loss\": self.loss_tracker.result(), \"acc\": self.acc_tracker.result()} def test_step(self, batch_data): batch_img, batch_seq = batch_data batch_loss = 0 batch_acc = 0 # 1. Get image embeddings img_embed = self.cnn_model(batch_img) # 2. Pass each of the five captions one by one to the decoder # along with the encoder outputs and compute the loss as well as accuracy # for each caption. for i in range(self.num_captions_per_image): loss, acc = self._compute_caption_loss_and_acc( img_embed, batch_seq[:, i, :], training=False ) # 3. Update batch loss and batch accuracy batch_loss += loss batch_acc += acc batch_acc /= float(self.num_captions_per_image) # 4. Update the trackers self.loss_tracker.update_state(batch_loss) self.acc_tracker.update_state(batch_acc) # 5. Return the loss and accuracy values return {\"loss\": self.loss_tracker.result(), \"acc\": self.acc_tracker.result()} @property def metrics(self): # We need to list our metrics here so the `reset_states()` can be # called automatically. 
return [self.loss_tracker, self.acc_tracker] cnn_model = get_cnn_model() encoder = TransformerEncoderBlock(embed_dim=EMBED_DIM, dense_dim=FF_DIM, num_heads=1) decoder = TransformerDecoderBlock(embed_dim=EMBED_DIM, ff_dim=FF_DIM, num_heads=2) caption_model = ImageCaptioningModel( cnn_model=cnn_model, encoder=encoder, decoder=decoder, image_aug=image_augmentation, ) Model training # Define the loss function cross_entropy = keras.losses.SparseCategoricalCrossentropy( from_logits=False, reduction=\"none\" ) # EarlyStopping criteria early_stopping = keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True) # Learning Rate Scheduler for the optimizer class LRSchedule(keras.optimizers.schedules.LearningRateSchedule): def __init__(self, post_warmup_learning_rate, warmup_steps): super().__init__() self.post_warmup_learning_rate = post_warmup_learning_rate self.warmup_steps = warmup_steps def __call__(self, step): global_step = tf.cast(step, tf.float32) warmup_steps = tf.cast(self.warmup_steps, tf.float32) warmup_progress = global_step / warmup_steps warmup_learning_rate = self.post_warmup_learning_rate * warmup_progress return tf.cond( global_step < warmup_steps, lambda: warmup_learning_rate, lambda: self.post_warmup_learning_rate, ) # Create a learning rate schedule num_train_steps = len(train_dataset) * EPOCHS num_warmup_steps = num_train_steps // 15 lr_schedule = LRSchedule(post_warmup_learning_rate=1e-4, warmup_steps=num_warmup_steps) # Compile the model caption_model.compile(optimizer=keras.optimizers.Adam(lr_schedule), loss=cross_entropy) # Fit the model caption_model.fit( train_dataset, epochs=EPOCHS, validation_data=valid_dataset, callbacks=[early_stopping], ) Epoch 1/30 2021-09-17 05:18:22.943796: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 59 of 256 2021-09-17 05:18:30.137746: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 2021-09-17 05:18:30.598020: I tensorflow/stream_executor/cuda/cuda_dnn.cc:369] Loaded cuDNN version 8005 96/96 [==============================] - 62s 327ms/step - loss: 28.1409 - acc: 0.1313 - val_loss: 20.4968 - val_acc: 0.3116 Epoch 2/30 2021-09-17 05:19:13.829127: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 59 of 256 2021-09-17 05:19:19.872802: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 43s 278ms/step - loss: 19.3393 - acc: 0.3207 - val_loss: 18.0922 - val_acc: 0.3514 Epoch 3/30 2021-09-17 05:19:56.772506: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 61 of 256 2021-09-17 05:20:02.481758: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 278ms/step - loss: 17.4184 - acc: 0.3552 - val_loss: 17.0022 - val_acc: 0.3698 Epoch 4/30 2021-09-17 05:20:39.367542: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 61 of 256 2021-09-17 05:20:45.149089: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 
96/96 [==============================] - 43s 278ms/step - loss: 16.3052 - acc: 0.3760 - val_loss: 16.3026 - val_acc: 0.3845 Epoch 5/30 2021-09-17 05:21:21.930582: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 61 of 256 2021-09-17 05:21:27.608503: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 278ms/step - loss: 15.5097 - acc: 0.3901 - val_loss: 15.8929 - val_acc: 0.3925 Epoch 6/30 2021-09-17 05:22:04.553717: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 61 of 256 2021-09-17 05:22:10.210087: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 278ms/step - loss: 14.8596 - acc: 0.4069 - val_loss: 15.5456 - val_acc: 0.4005 Epoch 7/30 2021-09-17 05:22:47.100594: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 62 of 256 2021-09-17 05:22:52.466539: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 14.3454 - acc: 0.4131 - val_loss: 15.3313 - val_acc: 0.4045 Epoch 8/30 2021-09-17 05:23:29.226300: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 61 of 256 2021-09-17 05:23:34.808841: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 13.8745 - acc: 0.4251 - val_loss: 15.2011 - val_acc: 0.4078 Epoch 9/30 2021-09-17 05:24:11.615058: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 62 of 256 2021-09-17 05:24:17.030769: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 13.4640 - acc: 0.4350 - val_loss: 15.0905 - val_acc: 0.4107 Epoch 10/30 2021-09-17 05:24:53.832807: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 61 of 256 2021-09-17 05:24:59.506573: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 13.0922 - acc: 0.4414 - val_loss: 15.0083 - val_acc: 0.4113 Epoch 11/30 2021-09-17 05:25:36.242501: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 62 of 256 2021-09-17 05:25:41.723206: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 12.7538 - acc: 0.4464 - val_loss: 14.9455 - val_acc: 0.4143 Epoch 12/30 2021-09-17 05:26:18.532009: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 62 of 256 2021-09-17 05:26:23.985106: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 12.4233 - acc: 0.4547 - val_loss: 14.9816 - val_acc: 0.4133 Epoch 13/30 2021-09-17 05:27:00.696082: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 63 of 256 2021-09-17 05:27:05.812571: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 
96/96 [==============================] - 42s 277ms/step - loss: 12.1264 - acc: 0.4636 - val_loss: 14.9451 - val_acc: 0.4158 Epoch 14/30 2021-09-17 05:27:42.513445: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 63 of 256 2021-09-17 05:27:47.675342: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 11.8244 - acc: 0.4724 - val_loss: 14.9751 - val_acc: 0.4148 Epoch 15/30 2021-09-17 05:28:24.371225: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 63 of 256 2021-09-17 05:28:29.829654: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 11.5644 - acc: 0.4776 - val_loss: 15.0377 - val_acc: 0.4167 Epoch 16/30 2021-09-17 05:29:06.564650: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 62 of 256 2021-09-17 05:29:11.945996: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 11.3046 - acc: 0.4852 - val_loss: 15.0575 - val_acc: 0.4135 Check sample predictions vocab = vectorization.get_vocabulary() index_lookup = dict(zip(range(len(vocab)), vocab)) max_decoded_sentence_length = SEQ_LENGTH - 1 valid_images = list(valid_data.keys()) def generate_caption(): # Select a random image from the validation dataset sample_img = np.random.choice(valid_images) # Read the image from the disk sample_img = decode_and_resize(sample_img) img = sample_img.numpy().clip(0, 255).astype(np.uint8) plt.imshow(img) plt.show() # Pass the image to the CNN img = tf.expand_dims(sample_img, 0) img = caption_model.cnn_model(img) # Pass the image features to the Transformer encoder encoded_img = caption_model.encoder(img, training=False) # Generate the caption using the Transformer decoder, starting from the start token decoded_caption = \"<start> \" for i in range(max_decoded_sentence_length): tokenized_caption = vectorization([decoded_caption])[:, :-1] mask = tf.math.not_equal(tokenized_caption, 0) predictions = caption_model.decoder( tokenized_caption, encoded_img, training=False, mask=mask ) sampled_token_index = np.argmax(predictions[0, i, :]) sampled_token = index_lookup[sampled_token_index] if sampled_token == \"<end>\": break decoded_caption += \" \" + sampled_token # Strip the special start/end tokens before printing decoded_caption = decoded_caption.replace(\"<start> \", \"\") decoded_caption = decoded_caption.replace(\" <end>\", \"\").strip() print(\"Predicted Caption: \", decoded_caption) # Check predictions for a few samples generate_caption() generate_caption() generate_caption() png Predicted Caption: a group of dogs race in the snow png Predicted Caption: a man in a blue canoe on a lake png Predicted Caption: a black and white dog is running through a green grass End Notes We saw that the model starts to generate reasonable captions after a few epochs. To keep this example easily runnable, we have trained it with a few constraints, like a minimal number of attention heads. To improve the predictions, you can try changing these training settings and find a good model for your use case. Training an image classifier from scratch on the Kaggle Cats vs Dogs dataset. Introduction This example shows how to do image classification from scratch, starting from JPEG image files on disk, without leveraging pre-trained weights or a pre-made Keras Application model.
We demonstrate the workflow on the Kaggle Cats vs Dogs binary classification dataset. We use the image_dataset_from_directory utility to generate the datasets, and we use Keras image preprocessing layers for image standardization and data augmentation. Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Load the data: the Cats vs Dogs dataset Raw data download First, let's download the 786M ZIP archive of the raw data: !curl -O https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip !unzip -q kagglecatsanddogs_3367a.zip !ls % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 786M 100 786M 0 0 44.4M 0 0:00:17 0:00:17 --:--:-- 49.6M image_classification_from_scratch.ipynb MSR-LA - 3467.docx readme[1].txt kagglecatsanddogs_3367a.zip PetImages Now we have a PetImages folder which contains two subfolders, Cat and Dog. Each subfolder contains image files for each category. !ls PetImages Cat Dog Filter out corrupted images When working with lots of real-world image data, corrupted images are a common occurrence. Let's filter out badly-encoded images that do not feature the string \"JFIF\" in their header. import os num_skipped = 0 for folder_name in (\"Cat\", \"Dog\"): folder_path = os.path.join(\"PetImages\", folder_name) for fname in os.listdir(folder_path): fpath = os.path.join(folder_path, fname) try: fobj = open(fpath, \"rb\") is_jfif = tf.compat.as_bytes(\"JFIF\") in fobj.peek(10) finally: fobj.close() if not is_jfif: num_skipped += 1 # Delete corrupted image os.remove(fpath) print(\"Deleted %d images\" % num_skipped) Deleted 1590 images Generate a Dataset image_size = (180, 180) batch_size = 32 train_ds = tf.keras.preprocessing.image_dataset_from_directory( \"PetImages\", validation_split=0.2, subset=\"training\", seed=1337, image_size=image_size, batch_size=batch_size, ) val_ds = tf.keras.preprocessing.image_dataset_from_directory( \"PetImages\", validation_split=0.2, subset=\"validation\", seed=1337, image_size=image_size, batch_size=batch_size, ) Found 23410 files belonging to 2 classes. Using 18728 files for training. Found 23410 files belonging to 2 classes. Using 4682 files for validation. Visualize the data Here are the first 9 images in the training dataset. As you can see, label 1 is \"dog\" and label 0 is \"cat\". import matplotlib.pyplot as plt plt.figure(figsize=(10, 10)) for images, labels in train_ds.take(1): for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.imshow(images[i].numpy().astype(\"uint8\")) plt.title(int(labels[i])) plt.axis(\"off\") png Using image data augmentation When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random yet realistic transformations to the training images, such as random horizontal flipping or small random rotations. This helps expose the model to different aspects of the training data while slowing down overfitting.
data_augmentation = keras.Sequential( [ layers.RandomFlip(\"horizontal\"), layers.RandomRotation(0.1), ] ) Let's visualize what the augmented samples look like, by applying data_augmentation repeatedly to the first image in the dataset: plt.figure(figsize=(10, 10)) for images, _ in train_ds.take(1): for i in range(9): augmented_images = data_augmentation(images) ax = plt.subplot(3, 3, i + 1) plt.imshow(augmented_images[0].numpy().astype(\"uint8\")) plt.axis(\"off\") png Standardizing the data Our images are already of a standard size (180x180), as they are being yielded as contiguous float32 batches by our dataset. However, their RGB channel values are in the [0, 255] range. This is not ideal for a neural network; in general you should seek to make your input values small. Here, we will standardize values to be in the [0, 1] range by using a Rescaling layer at the start of our model. Two options to preprocess the data There are two ways you could be using the data_augmentation preprocessor: Option 1: Make it part of the model, like this: inputs = keras.Input(shape=input_shape) x = data_augmentation(inputs) x = layers.Rescaling(1./255)(x) ... # Rest of the model With this option, your data augmentation will happen on device, synchronously with the rest of the model execution, meaning that it will benefit from GPU acceleration. Note that data augmentation is inactive at test time, so the input samples will only be augmented during fit(), not when calling evaluate() or predict(). If you're training on GPU, this is the better option. Option 2: apply it to the dataset, so as to obtain a dataset that yields batches of augmented images, like this: augmented_train_ds = train_ds.map( lambda x, y: (data_augmentation(x, training=True), y)) With this option, your data augmentation will happen on CPU, asynchronously, and will be buffered before going into the model. If you're training on CPU, this is the better option, since it makes data augmentation asynchronous and non-blocking. In our case, we'll go with the first option. Configure the dataset for performance Let's make sure to use buffered prefetching so we can yield data from disk without I/O becoming blocking: train_ds = train_ds.prefetch(buffer_size=32) val_ds = val_ds.prefetch(buffer_size=32) Build a model We'll build a small version of the Xception network. We haven't particularly tried to optimize the architecture; if you want to do a systematic search for the best model configuration, consider using KerasTuner. Note that: We start the model with the data_augmentation preprocessor, followed by a Rescaling layer. We include a Dropout layer before the final classification layer.
def make_model(input_shape, num_classes): inputs = keras.Input(shape=input_shape) # Image augmentation block x = data_augmentation(inputs) # Entry block x = layers.Rescaling(1.0 / 255)(x) x = layers.Conv2D(32, 3, strides=2, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) x = layers.Conv2D(64, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) previous_block_activation = x # Set aside residual for size in [128, 256, 512, 728]: x = layers.Activation(\"relu\")(x) x = layers.SeparableConv2D(size, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) x = layers.SeparableConv2D(size, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.MaxPooling2D(3, strides=2, padding=\"same\")(x) # Project residual residual = layers.Conv2D(size, 1, strides=2, padding=\"same\")( previous_block_activation ) x = layers.add([x, residual]) # Add back residual previous_block_activation = x # Set aside next residual x = layers.SeparableConv2D(1024, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) x = layers.GlobalAveragePooling2D()(x) if num_classes == 2: activation = \"sigmoid\" units = 1 else: activation = \"softmax\" units = num_classes x = layers.Dropout(0.5)(x) outputs = layers.Dense(units, activation=activation)(x) return keras.Model(inputs, outputs) model = make_model(input_shape=image_size + (3,), num_classes=2) keras.utils.plot_model(model, show_shapes=True) ('Failed to import pydot. You must `pip install pydot` and install graphviz (https://graphviz.gitlab.io/download/), ', 'for `pydotprint` to work.') Train the model epochs = 50 callbacks = [ keras.callbacks.ModelCheckpoint(\"save_at_{epoch}.h5\"), ] model.compile( optimizer=keras.optimizers.Adam(1e-3), loss=\"binary_crossentropy\", metrics=[\"accuracy\"], ) model.fit( train_ds, epochs=epochs, callbacks=callbacks, validation_data=val_ds, ) Epoch 1/50 586/586 [==============================] - 81s 139ms/step - loss: 0.6233 - accuracy: 0.6700 - val_loss: 0.7698 - val_accuracy: 0.6117 Epoch 2/50 586/586 [==============================] - 80s 137ms/step - loss: 0.4638 - accuracy: 0.7840 - val_loss: 0.4056 - val_accuracy: 0.8178 Epoch 3/50 586/586 [==============================] - 80s 137ms/step - loss: 0.3652 - accuracy: 0.8405 - val_loss: 0.3535 - val_accuracy: 0.8528 Epoch 4/50 586/586 [==============================] - 80s 137ms/step - loss: 0.3112 - accuracy: 0.8675 - val_loss: 0.2673 - val_accuracy: 0.8894 Epoch 5/50 586/586 [==============================] - 80s 137ms/step - loss: 0.2585 - accuracy: 0.8928 - val_loss: 0.6213 - val_accuracy: 0.7294 Epoch 6/50 586/586 [==============================] - 81s 138ms/step - loss: 0.2218 - accuracy: 0.9071 - val_loss: 0.2377 - val_accuracy: 0.8930 Epoch 7/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1992 - accuracy: 0.9169 - val_loss: 1.1273 - val_accuracy: 0.6254 Epoch 8/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1820 - accuracy: 0.9243 - val_loss: 0.1955 - val_accuracy: 0.9173 Epoch 9/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1694 - accuracy: 0.9308 - val_loss: 0.1602 - val_accuracy: 0.9314 Epoch 10/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1623 - accuracy: 0.9333 - val_loss: 0.1777 - val_accuracy: 0.9248 Epoch 11/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1522 - accuracy: 0.9365 - 
val_loss: 0.1562 - val_accuracy: 0.9400 Epoch 12/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1458 - accuracy: 0.9417 - val_loss: 0.1529 - val_accuracy: 0.9338 Epoch 13/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1368 - accuracy: 0.9433 - val_loss: 0.1694 - val_accuracy: 0.9259 Epoch 14/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1301 - accuracy: 0.9461 - val_loss: 0.1250 - val_accuracy: 0.9530 Epoch 15/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1261 - accuracy: 0.9483 - val_loss: 0.1548 - val_accuracy: 0.9353 Epoch 16/50 586/586 [==============================] - 81s 137ms/step - loss: 0.1241 - accuracy: 0.9497 - val_loss: 0.1376 - val_accuracy: 0.9464 Epoch 17/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1193 - accuracy: 0.9535 - val_loss: 0.1093 - val_accuracy: 0.9575 Epoch 18/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1107 - accuracy: 0.9558 - val_loss: 0.1488 - val_accuracy: 0.9432 Epoch 19/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1175 - accuracy: 0.9532 - val_loss: 0.1380 - val_accuracy: 0.9421 Epoch 20/50 586/586 [==============================] - 81s 138ms/step - loss: 0.1026 - accuracy: 0.9584 - val_loss: 0.1293 - val_accuracy: 0.9485 Epoch 21/50 586/586 [==============================] - 80s 137ms/step - loss: 0.0977 - accuracy: 0.9606 - val_loss: 0.1105 - val_accuracy: 0.9573 Epoch 22/50 586/586 [==============================] - 80s 137ms/step - loss: 0.0983 - accuracy: 0.9610 - val_loss: 0.1023 - val_accuracy: 0.9633 Epoch 23/50 586/586 [==============================] - 80s 137ms/step - loss: 0.0776 - accuracy: 0.9694 - val_loss: 0.1176 - val_accuracy: 0.9530 Epoch 38/50 586/586 [==============================] - 80s 136ms/step - loss: 0.0596 - accuracy: 0.9768 - val_loss: 0.0967 - val_accuracy: 0.9633 Epoch 44/50 586/586 [==============================] - 80s 136ms/step - loss: 0.0504 - accuracy: 0.9792 - val_loss: 0.0984 - val_accuracy: 0.9663 Epoch 50/50 586/586 [==============================] - 80s 137ms/step - loss: 0.0486 - accuracy: 0.9817 - val_loss: 0.1157 - val_accuracy: 0.9609 We get to ~96% validation accuracy after training for 50 epochs on the full dataset. Run inference on new data Note that data augmentation and dropout are inactive at inference time. img = keras.preprocessing.image.load_img( \"PetImages/Cat/6779.jpg\", target_size=image_size ) img_array = keras.preprocessing.image.img_to_array(img) img_array = tf.expand_dims(img_array, 0) # Create batch axis predictions = model.predict(img_array) score = predictions[0] print( \"This image is %.2f percent cat and %.2f percent dog.\" % (100 * (1 - score), 100 * score) ) This image is 84.34 percent cat and 15.66 percent dog. BigTransfer (BiT) State-of-the-art transfer learning for image classification. Introduction BigTransfer (also known as BiT) is a state-of-the-art transfer learning method for image classification. Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. BiT revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. The importance of appropriately choosing normalization layers and scaling the architecture capacity as the amount of pre-training data increases. BigTransfer(BiT) is trained on public datasets, along with code in TF2, Jax and Pytorch. 
This helps anyone reach state-of-the-art performance on their task of interest, even with just a handful of labeled images per class. You can find BiT models pre-trained on ImageNet and ImageNet-21k in TFHub as TensorFlow2 SavedModels that you can use easily as Keras Layers. There are a variety of sizes, ranging from a standard ResNet50 to a ResNet152x4 (152 layers deep, 4x wider than a typical ResNet50), for users with larger computational and memory budgets and higher accuracy requirements. Figure: The x-axis shows the number of images used per class, ranging from 1 to the full dataset. On the plots on the left, the curve in blue above is our BiT-L model, whereas the curve below is a ResNet-50 pre-trained on ImageNet (ILSVRC-2012). Setup import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras import tensorflow_hub as hub import tensorflow_datasets as tfds tfds.disable_progress_bar() SEEDS = 42 np.random.seed(SEEDS) tf.random.set_seed(SEEDS) Gather Flower Dataset train_ds, validation_ds = tfds.load( \"tf_flowers\", split=[\"train[:85%]\", \"train[85%:]\"], as_supervised=True, ) Downloading and preparing dataset tf_flowers/3.0.1 (download: 218.21 MiB, generated: 221.83 MiB, total: 440.05 MiB) to /root/tensorflow_datasets/tf_flowers/3.0.1... Dataset tf_flowers downloaded and prepared to /root/tensorflow_datasets/tf_flowers/3.0.1. Subsequent calls will reuse this data. Visualise the dataset plt.figure(figsize=(10, 10)) for i, (image, label) in enumerate(train_ds.take(9)): ax = plt.subplot(3, 3, i + 1) plt.imshow(image) plt.title(int(label)) plt.axis(\"off\") png Define hyperparameters RESIZE_TO = 384 CROP_TO = 224 BATCH_SIZE = 64 STEPS_PER_EPOCH = 10 AUTO = tf.data.AUTOTUNE # optimise the pipeline performance NUM_CLASSES = 5 # number of classes SCHEDULE_LENGTH = ( 500 # we will train on lower resolution images and will still attain good results ) SCHEDULE_BOUNDARIES = [ 200, 300, 400, ] # the larger the dataset, the longer the schedule The hyperparameters SCHEDULE_LENGTH and SCHEDULE_BOUNDARIES are determined based on empirical results. The method is explained in the original paper and in their Google AI Blog Post. SCHEDULE_LENGTH also determines whether to use MixUp augmentation or not. You can also find an easy MixUp implementation in the Keras code examples.
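For reference, here is a minimal sketch of what MixUp does, assuming a batch of images with one-hot labels; the helper name mixup_batch, the alpha value, and the Beta-via-Gamma sampling are illustrative choices and are not part of the BiT recipe used in this example.

import tensorflow as tf

def mixup_batch(images, one_hot_labels, alpha=0.2):
    # Sample one mixing coefficient per sample from Beta(alpha, alpha),
    # using the ratio-of-Gammas construction.
    batch_size = tf.shape(images)[0]
    gamma_1 = tf.random.gamma(shape=[batch_size], alpha=alpha)
    gamma_2 = tf.random.gamma(shape=[batch_size], alpha=alpha)
    lam = gamma_1 / (gamma_1 + gamma_2)
    img_lam = tf.reshape(lam, [-1, 1, 1, 1])
    lab_lam = tf.reshape(lam, [-1, 1])
    # Blend each sample with a shuffled partner from the same batch.
    indices = tf.random.shuffle(tf.range(batch_size))
    mixed_images = img_lam * images + (1.0 - img_lam) * tf.gather(images, indices)
    mixed_labels = lab_lam * one_hot_labels + (1.0 - lab_lam) * tf.gather(one_hot_labels, indices)
    return mixed_images, mixed_labels

Each image/label pair is blended with a random partner from the same batch, so the model trains on convex combinations of samples rather than on the raw samples alone.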
Define preprocessing helper functions SCHEDULE_LENGTH = SCHEDULE_LENGTH * 512 / BATCH_SIZE @tf.function def preprocess_train(image, label): image = tf.image.random_flip_left_right(image) image = tf.image.resize(image, (RESIZE_TO, RESIZE_TO)) image = tf.image.random_crop(image, (CROP_TO, CROP_TO, 3)) image = image / 255.0 return (image, label) @tf.function def preprocess_test(image, label): image = tf.image.resize(image, (RESIZE_TO, RESIZE_TO)) image = image / 255.0 return (image, label) DATASET_NUM_TRAIN_EXAMPLES = train_ds.cardinality().numpy() repeat_count = int( SCHEDULE_LENGTH * BATCH_SIZE / DATASET_NUM_TRAIN_EXAMPLES * STEPS_PER_EPOCH ) repeat_count += 50 + 1 # To ensure at least there are 50 epochs of training Define the data pipeline # Training pipeline pipeline_train = ( train_ds.shuffle(10000) .repeat(repeat_count) # Repeat dataset_size / num_steps .map(preprocess_train, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) # Validation pipeline pipeline_validation = ( validation_ds.map(preprocess_test, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) Visualise the training samples image_batch, label_batch = next(iter(pipeline_train)) plt.figure(figsize=(10, 10)) for n in range(25): ax = plt.subplot(5, 5, n + 1) plt.imshow(image_batch[n]) plt.title(label_batch[n].numpy()) plt.axis(\"off\") png Load pretrained TF-Hub model into a KerasLayer bit_model_url = \"https://tfhub.dev/google/bit/m-r50x1/1\" bit_module = hub.KerasLayer(bit_model_url) Create BigTransfer (BiT) model To create the new model, we: Cut off the BiT model’s original head. This leaves us with the “pre-logits” output. We do not have to do this if we use the ‘feature extractor’ models (i.e. all those in subdirectories titled feature_vectors), since for those models the head has already been cut off. Add a new head with the number of outputs equal to the number of classes of our new task. Note that it is important that we initialise the head to all zeroes. class MyBiTModel(keras.Model): def __init__(self, num_classes, module, **kwargs): super().__init__(**kwargs) self.num_classes = num_classes self.head = keras.layers.Dense(num_classes, kernel_initializer=\"zeros\") self.bit_model = module def call(self, images): bit_embedding = self.bit_model(images) return self.head(bit_embedding) model = MyBiTModel(num_classes=NUM_CLASSES, module=bit_module) Define optimizer and loss learning_rate = 0.003 * BATCH_SIZE / 512 # Decay learning rate by a factor of 10 at SCHEDULE_BOUNDARIES. 
lr_schedule = keras.optimizers.schedules.PiecewiseConstantDecay( boundaries=SCHEDULE_BOUNDARIES, values=[ learning_rate, learning_rate * 0.1, learning_rate * 0.01, learning_rate * 0.001, ], ) optimizer = keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9) loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True) Compile the model model.compile(optimizer=optimizer, loss=loss_fn, metrics=[\"accuracy\"]) Set up callbacks train_callbacks = [ keras.callbacks.EarlyStopping( monitor=\"val_accuracy\", patience=2, restore_best_weights=True ) ] Train the model history = model.fit( pipeline_train, batch_size=BATCH_SIZE, epochs=int(SCHEDULE_LENGTH / STEPS_PER_EPOCH), steps_per_epoch=STEPS_PER_EPOCH, validation_data=pipeline_validation, callbacks=train_callbacks, ) Epoch 1/400 10/10 [==============================] - 41s 1s/step - loss: 0.7440 - accuracy: 0.7844 - val_loss: 0.1837 - val_accuracy: 0.9582 Epoch 2/400 10/10 [==============================] - 8s 904ms/step - loss: 0.1499 - accuracy: 0.9547 - val_loss: 0.1094 - val_accuracy: 0.9709 Epoch 3/400 10/10 [==============================] - 8s 905ms/step - loss: 0.1674 - accuracy: 0.9422 - val_loss: 0.0874 - val_accuracy: 0.9727 Epoch 4/400 10/10 [==============================] - 8s 905ms/step - loss: 0.1314 - accuracy: 0.9578 - val_loss: 0.0829 - val_accuracy: 0.9727 Epoch 5/400 10/10 [==============================] - 8s 903ms/step - loss: 0.1336 - accuracy: 0.9500 - val_loss: 0.0765 - val_accuracy: 0.9727 Plot the training and validation metrics def plot_hist(hist): plt.plot(hist.history[\"accuracy\"]) plt.plot(hist.history[\"val_accuracy\"]) plt.plot(hist.history[\"loss\"]) plt.plot(hist.history[\"val_loss\"]) plt.title(\"Training Progress\") plt.ylabel(\"Accuracy/Loss\") plt.xlabel(\"Epochs\") plt.legend([\"train_acc\", \"val_acc\", \"train_loss\", \"val_loss\"], loc=\"upper left\") plt.show() plot_hist(history) png Evaluate the model accuracy = model.evaluate(pipeline_validation)[1] * 100 print(\"Accuracy: {:.2f}%\".format(accuracy)) 9/9 [==============================] - 6s 646ms/step - loss: 0.0874 - accuracy: 0.9727 Accuracy: 97.27% Conclusion BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. You can experiment further with the BigTransfer Method by following the original paper. Use EfficientNet with weights pre-trained on imagenet for Stanford Dogs classification. Introduction: what is EfficientNet EfficientNet, first introduced in Tan and Le, 2019 is among the most efficient models (i.e. requiring least FLOPS for inference) that reaches State-of-the-Art accuracy on both imagenet and common image classification transfer learning tasks. The smallest base model is similar to MnasNet, which reached near-SOTA with a significantly smaller model. By introducing a heuristic way to scale the model, EfficientNet provides a family of models (B0 to B7) that represents a good combination of efficiency and accuracy on a variety of scales. Such a scaling heuristics (compound-scaling, details see Tan and Le, 2019) allows the efficiency-oriented base model (B0) to surpass models at every scale, while avoiding extensive grid-search of hyperparameters. 
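As a rough illustration of the compound-scaling rule, the sketch below scales depth, width and resolution together using a single coefficient phi. The coefficients alpha=1.2, beta=1.1 and gamma=1.15 are the values reported in Tan and Le, 2019, and the helper name compound_scale is ours; as discussed below, the released B0 to B7 variants are hand-picked rather than generated mechanically from this formula.

def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    # Multipliers relative to the B0 baseline.
    depth_mult = alpha ** phi       # number of layers
    width_mult = beta ** phi        # number of channels
    resolution_mult = gamma ** phi  # input image resolution
    # The paper constrains alpha * beta**2 * gamma**2 to be roughly 2,
    # so each unit increase of phi roughly doubles the FLOPS.
    return depth_mult, width_mult, resolution_mult

print(compound_scale(phi=1))  # roughly the multipliers for a B1-sized model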
A summary of the latest updates on the model is available here, where various augmentation schemes and semi-supervised learning approaches are applied to further improve the imagenet performance of the models. These extensions of the model can be used by updating weights without changing the model architecture. B0 to B7 variants of EfficientNet (This section provides some details on \"compound scaling\", and can be skipped if you're only interested in using the models) Based on the original paper, people may have the impression that EfficientNet is a continuous family of models created by arbitrarily choosing the scaling factor, as in Eq.(3) of the paper. However, the choice of resolution, depth and width is also restricted by many factors: Resolution: Resolutions not divisible by 8, 16, etc. cause zero-padding near the boundaries of some layers, which wastes computational resources. This especially applies to smaller variants of the model, hence the input resolutions for B0 and B1 are chosen as 224 and 240. Depth and width: The building blocks of EfficientNet demand channel sizes to be multiples of 8. Resource limit: Memory limitations may bottleneck resolution when depth and width can still increase. In such a situation, increasing depth and/or width while keeping resolution fixed can still improve performance. As a result, the depth, width and resolution of each variant of the EfficientNet models are hand-picked and proven to produce good results, though they may be significantly off from the compound scaling formula. Therefore, the Keras implementation (detailed below) only provides these 8 models, B0 to B7, instead of allowing arbitrary choices of width / depth / resolution parameters. Keras implementation of EfficientNet An implementation of EfficientNet B0 to B7 has been shipped with tf.keras since TF2.3. To use EfficientNetB0 for classifying 1000 classes of images from imagenet, run: from tensorflow.keras.applications import EfficientNetB0 model = EfficientNetB0(weights='imagenet') This model takes input images of shape (224, 224, 3), and the input data should be in the [0, 255] range. Normalization is included as part of the model. Training EfficientNet on ImageNet takes a tremendous amount of resources and several techniques that are not part of the model architecture itself. Hence the Keras implementation by default loads pre-trained weights obtained via training with AutoAugment. For the B0 to B7 base models, the input shapes are different. Here is the input resolution expected by each model: EfficientNetB0: 224, EfficientNetB1: 240, EfficientNetB2: 260, EfficientNetB3: 300, EfficientNetB4: 380, EfficientNetB5: 456, EfficientNetB6: 528, EfficientNetB7: 600. When the model is intended for transfer learning, the Keras implementation provides an option to remove the top layers: model = EfficientNetB0(include_top=False, weights='imagenet') This option excludes the final Dense layer that turns the 1280 features on the penultimate layer into a prediction of the 1000 ImageNet classes. Replacing the top layer with custom layers allows using EfficientNet as a feature extractor in a transfer learning workflow. Another argument in the model constructor worth noticing is drop_connect_rate, which controls the dropout rate responsible for stochastic depth. This parameter serves as a toggle for extra regularization in finetuning, but does not affect loaded weights. For example, when stronger regularization is desired, try: model = EfficientNetB0(weights='imagenet', drop_connect_rate=0.4) The default value is 0.2.
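As a small convenience, the sketch below pairs each variant with the expected input resolution listed above and builds a headless feature extractor at that resolution. The VARIANTS dictionary and the build_headless helper are our own illustration, not part of keras.applications.

from tensorflow.keras.applications import (
    EfficientNetB0, EfficientNetB1, EfficientNetB2, EfficientNetB3,
    EfficientNetB4, EfficientNetB5, EfficientNetB6, EfficientNetB7,
)

# Expected input resolution for each variant (see the list above).
VARIANTS = {
    EfficientNetB0: 224, EfficientNetB1: 240, EfficientNetB2: 260,
    EfficientNetB3: 300, EfficientNetB4: 380, EfficientNetB5: 456,
    EfficientNetB6: 528, EfficientNetB7: 600,
}

def build_headless(variant=EfficientNetB0):
    resolution = VARIANTS[variant]
    # include_top=False drops the 1000-class Dense head, so the model can be
    # used as a feature extractor at the matching resolution.
    return variant(
        include_top=False,
        weights='imagenet',
        input_shape=(resolution, resolution, 3),
    )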
Example: EfficientNetB0 for Stanford Dogs. EfficientNet is capable of a wide range of image classification tasks. This makes it a good model for transfer learning. As an end-to-end example, we will show using pre-trained EfficientNetB0 on Stanford Dogs dataset. # IMG_SIZE is determined by EfficientNet model choice IMG_SIZE = 224 Setup and data loading This example requires TensorFlow 2.3 or above. To use TPU, the TPU runtime must match current running TensorFlow version. If there is a mismatch, try: from cloud_tpu_client import Client c = Client() c.configure_tpu_version(tf.__version__, restart_type=\"always\") import tensorflow as tf try: tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() print(\"Device:\", tpu.master()) strategy = tf.distribute.TPUStrategy(tpu) except ValueError: print(\"Not connected to a TPU runtime. Using CPU/GPU strategy\") strategy = tf.distribute.MirroredStrategy() Not connected to a TPU runtime. Using CPU/GPU strategy INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',) Loading data Here we load data from tensorflow_datasets (hereafter TFDS). Stanford Dogs dataset is provided in TFDS as stanford_dogs. It features 20,580 images that belong to 120 classes of dog breeds (12,000 for training and 8,580 for testing). By simply changing dataset_name below, you may also try this notebook for other datasets in TFDS such as cifar10, cifar100, food101, etc. When the images are much smaller than the size of EfficientNet input, we can simply upsample the input images. It has been shown in Tan and Le, 2019 that transfer learning result is better for increased resolution even if input images remain small. For TPU: if using TFDS datasets, a GCS bucket location is required to save the datasets. For example: tfds.load(dataset_name, data_dir=\"gs://example-bucket/datapath\") Also, both the current environment and the TPU service account have proper access to the bucket. Alternatively, for small datasets you may try loading data into the memory and use tf.data.Dataset.from_tensor_slices(). import tensorflow_datasets as tfds batch_size = 64 dataset_name = \"stanford_dogs\" (ds_train, ds_test), ds_info = tfds.load( dataset_name, split=[\"train\", \"test\"], with_info=True, as_supervised=True ) NUM_CLASSES = ds_info.features[\"label\"].num_classes When the dataset include images with various size, we need to resize them into a shared size. The Stanford Dogs dataset includes only images at least 200x200 pixels in size. Here we resize the images to the input size needed for EfficientNet. size = (IMG_SIZE, IMG_SIZE) ds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label)) ds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label)) Visualizing the data The following code shows the first 9 images with their labels. import matplotlib.pyplot as plt def format_label(label): string_label = label_info.int2str(label) return string_label.split(\"-\")[1] label_info = ds_info.features[\"label\"] for i, (image, label) in enumerate(ds_train.take(9)): ax = plt.subplot(3, 3, i + 1) plt.imshow(image.numpy().astype(\"uint8\")) plt.title(\"{}\".format(format_label(label))) plt.axis(\"off\") png Data augmentation We can use the preprocessing layers APIs for image augmentation. 
from tensorflow.keras.models import Sequential from tensorflow.keras import layers img_augmentation = Sequential( [ layers.RandomRotation(factor=0.15), layers.RandomTranslation(height_factor=0.1, width_factor=0.1), layers.RandomFlip(), layers.RandomContrast(factor=0.1), ], name=\"img_augmentation\", ) This Sequential model object can be used both as a part of the model we later build, and as a function to preprocess data before feeding into the model. Using them as function makes it easy to visualize the augmented images. Here we plot 9 examples of augmentation result of a given figure. for image, label in ds_train.take(1): for i in range(9): ax = plt.subplot(3, 3, i + 1) aug_img = img_augmentation(tf.expand_dims(image, axis=0)) plt.imshow(aug_img[0].numpy().astype(\"uint8\")) plt.title(\"{}\".format(format_label(label))) plt.axis(\"off\") png Prepare inputs Once we verify the input data and augmentation are working correctly, we prepare dataset for training. The input data are resized to uniform IMG_SIZE. The labels are put into one-hot (a.k.a. categorical) encoding. The dataset is batched. Note: prefetch and AUTOTUNE may in some situation improve performance, but depends on environment and the specific dataset used. See this guide for more information on data pipeline performance. # One-hot / categorical encoding def input_preprocess(image, label): label = tf.one_hot(label, NUM_CLASSES) return image, label ds_train = ds_train.map( input_preprocess, num_parallel_calls=tf.data.AUTOTUNE ) ds_train = ds_train.batch(batch_size=batch_size, drop_remainder=True) ds_train = ds_train.prefetch(tf.data.AUTOTUNE) ds_test = ds_test.map(input_preprocess) ds_test = ds_test.batch(batch_size=batch_size, drop_remainder=True) Training a model from scratch We build an EfficientNetB0 with 120 output classes, that is initialized from scratch: Note: the accuracy will increase very slowly and may overfit. 
from tensorflow.keras.applications import EfficientNetB0 with strategy.scope(): inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)) x = img_augmentation(inputs) outputs = EfficientNetB0(include_top=True, weights=None, classes=NUM_CLASSES)(x) model = tf.keras.Model(inputs, outputs) model.compile( optimizer=\"adam\", loss=\"categorical_crossentropy\", metrics=[\"accuracy\"] ) model.summary() epochs = 40 # @param {type: \"slider\", min:10, max:100} hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2) Model: \"functional_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ img_augmentation (Sequential (None, 224, 224, 3) 0 _________________________________________________________________ efficientnetb0 (Functional) (None, 120) 4203291 ================================================================= Total params: 4,203,291 Trainable params: 4,161,268 Non-trainable params: 42,023 _________________________________________________________________ Epoch 1/40 187/187 - 66s - loss: 4.9221 - accuracy: 0.0119 - val_loss: 4.9835 - val_accuracy: 0.0104 Epoch 2/40 187/187 - 63s - loss: 4.5652 - accuracy: 0.0243 - val_loss: 5.1626 - val_accuracy: 0.0145 Epoch 3/40 187/187 - 63s - loss: 4.4179 - accuracy: 0.0337 - val_loss: 4.7597 - val_accuracy: 0.0237 Epoch 4/40 187/187 - 63s - loss: 4.2964 - accuracy: 0.0421 - val_loss: 4.4028 - val_accuracy: 0.0378 Epoch 5/40 187/187 - 63s - loss: 4.1951 - accuracy: 0.0540 - val_loss: 4.3048 - val_accuracy: 0.0443 Epoch 6/40 187/187 - 63s - loss: 4.1025 - accuracy: 0.0596 - val_loss: 4.1918 - val_accuracy: 0.0526 Epoch 7/40 187/187 - 63s - loss: 4.0157 - accuracy: 0.0728 - val_loss: 4.1482 - val_accuracy: 0.0591 Epoch 8/40 187/187 - 62s - loss: 3.9344 - accuracy: 0.0844 - val_loss: 4.1088 - val_accuracy: 0.0638 Epoch 9/40 187/187 - 63s - loss: 3.8529 - accuracy: 0.0951 - val_loss: 4.0692 - val_accuracy: 0.0770 Epoch 10/40 187/187 - 63s - loss: 3.7650 - accuracy: 0.1040 - val_loss: 4.1468 - val_accuracy: 0.0719 Epoch 11/40 187/187 - 63s - loss: 3.6858 - accuracy: 0.1185 - val_loss: 4.0484 - val_accuracy: 0.0913 Epoch 12/40 187/187 - 63s - loss: 3.5942 - accuracy: 0.1326 - val_loss: 3.8047 - val_accuracy: 0.1072 Epoch 13/40 187/187 - 63s - loss: 3.5028 - accuracy: 0.1447 - val_loss: 3.9513 - val_accuracy: 0.0933 Epoch 14/40 187/187 - 63s - loss: 3.4295 - accuracy: 0.1604 - val_loss: 3.7738 - val_accuracy: 0.1220 Epoch 15/40 187/187 - 63s - loss: 3.3410 - accuracy: 0.1735 - val_loss: 3.9104 - val_accuracy: 0.1104 Epoch 16/40 187/187 - 63s - loss: 3.2511 - accuracy: 0.1890 - val_loss: 3.6904 - val_accuracy: 0.1264 Epoch 17/40 187/187 - 63s - loss: 3.1624 - accuracy: 0.2076 - val_loss: 3.4026 - val_accuracy: 0.1769 Epoch 18/40 187/187 - 63s - loss: 3.0825 - accuracy: 0.2229 - val_loss: 3.4627 - val_accuracy: 0.1744 Epoch 19/40 187/187 - 63s - loss: 3.0041 - accuracy: 0.2355 - val_loss: 3.6061 - val_accuracy: 0.1542 Epoch 20/40 187/187 - 64s - loss: 2.8945 - accuracy: 0.2552 - val_loss: 3.2769 - val_accuracy: 0.2036 Epoch 21/40 187/187 - 63s - loss: 2.8054 - accuracy: 0.2710 - val_loss: 3.5355 - val_accuracy: 0.1834 Epoch 22/40 187/187 - 63s - loss: 2.7342 - accuracy: 0.2904 - val_loss: 3.3540 - val_accuracy: 0.1973 Epoch 23/40 187/187 - 62s - loss: 2.6258 - accuracy: 0.3042 - val_loss: 3.2608 - val_accuracy: 0.2217 Epoch 
24/40 187/187 - 62s - loss: 2.5453 - accuracy: 0.3218 - val_loss: 3.4611 - val_accuracy: 0.1941 Epoch 25/40 187/187 - 63s - loss: 2.4585 - accuracy: 0.3356 - val_loss: 3.4163 - val_accuracy: 0.2070 Epoch 26/40 187/187 - 62s - loss: 2.3606 - accuracy: 0.3647 - val_loss: 3.2558 - val_accuracy: 0.2392 Epoch 27/40 187/187 - 63s - loss: 2.2819 - accuracy: 0.3801 - val_loss: 3.3676 - val_accuracy: 0.2222 Epoch 28/40 187/187 - 62s - loss: 2.2114 - accuracy: 0.3933 - val_loss: 3.6578 - val_accuracy: 0.2022 Epoch 29/40 187/187 - 62s - loss: 2.0964 - accuracy: 0.4215 - val_loss: 3.5366 - val_accuracy: 0.2186 Epoch 30/40 187/187 - 63s - loss: 1.9931 - accuracy: 0.4459 - val_loss: 3.5612 - val_accuracy: 0.2310 Epoch 31/40 187/187 - 63s - loss: 1.8924 - accuracy: 0.4657 - val_loss: 3.4780 - val_accuracy: 0.2359 Epoch 32/40 187/187 - 63s - loss: 1.8095 - accuracy: 0.4874 - val_loss: 3.5776 - val_accuracy: 0.2403 Epoch 33/40 187/187 - 63s - loss: 1.7126 - accuracy: 0.5086 - val_loss: 3.6865 - val_accuracy: 0.2316 Epoch 34/40 187/187 - 63s - loss: 1.6117 - accuracy: 0.5373 - val_loss: 3.6419 - val_accuracy: 0.2513 Epoch 35/40 187/187 - 63s - loss: 1.5532 - accuracy: 0.5514 - val_loss: 3.8050 - val_accuracy: 0.2415 Epoch 36/40 187/187 - 63s - loss: 1.4479 - accuracy: 0.5809 - val_loss: 4.0113 - val_accuracy: 0.2299 Epoch 37/40 187/187 - 62s - loss: 1.3885 - accuracy: 0.5939 - val_loss: 4.1262 - val_accuracy: 0.2158 Epoch 38/40 187/187 - 63s - loss: 1.2979 - accuracy: 0.6217 - val_loss: 4.2519 - val_accuracy: 0.2344 Epoch 39/40 187/187 - 62s - loss: 1.2066 - accuracy: 0.6413 - val_loss: 4.3924 - val_accuracy: 0.2169 Epoch 40/40 187/187 - 62s - loss: 1.1348 - accuracy: 0.6618 - val_loss: 4.2216 - val_accuracy: 0.2374 Training the model is relatively fast (takes only 20 seconds per epoch on TPUv2 that is available on Colab). This might make it sounds easy to simply train EfficientNet on any dataset wanted from scratch. However, training EfficientNet on smaller datasets, especially those with lower resolution like CIFAR-100, faces the significant challenge of overfitting. Hence training from scratch requires very careful choice of hyperparameters and is difficult to find suitable regularization. It would also be much more demanding in resources. Plotting the training and validation accuracy makes it clear that validation accuracy stagnates at a low value. import matplotlib.pyplot as plt def plot_hist(hist): plt.plot(hist.history[\"accuracy\"]) plt.plot(hist.history[\"val_accuracy\"]) plt.title(\"model accuracy\") plt.ylabel(\"accuracy\") plt.xlabel(\"epoch\") plt.legend([\"train\", \"validation\"], loc=\"upper left\") plt.show() plot_hist(hist) png Transfer learning from pre-trained weights Here we initialize the model with pre-trained ImageNet weights, and we fine-tune it on our own dataset. 
def build_model(num_classes): inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)) x = img_augmentation(inputs) model = EfficientNetB0(include_top=False, input_tensor=x, weights=\"imagenet\") # Freeze the pretrained weights model.trainable = False # Rebuild top x = layers.GlobalAveragePooling2D(name=\"avg_pool\")(model.output) x = layers.BatchNormalization()(x) top_dropout_rate = 0.2 x = layers.Dropout(top_dropout_rate, name=\"top_dropout\")(x) outputs = layers.Dense(NUM_CLASSES, activation=\"softmax\", name=\"pred\")(x) # Compile model = tf.keras.Model(inputs, outputs, name=\"EfficientNet\") optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2) model.compile( optimizer=optimizer, loss=\"categorical_crossentropy\", metrics=[\"accuracy\"] ) return model The first step to transfer learning is to freeze all layers and train only the top layers. For this step, a relatively large learning rate (1e-2) can be used. Note that validation accuracy and loss will usually be better than training accuracy and loss. This is because the regularization is strong, which only suppresses training-time metrics. Note that the convergence may take up to 50 epochs depending on choice of learning rate. If image augmentation layers were not applied, the validation accuracy may only reach ~60%. with strategy.scope(): model = build_model(num_classes=NUM_CLASSES) epochs = 25 # @param {type: \"slider\", min:8, max:80} hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2) plot_hist(hist) Epoch 1/25 187/187 - 33s - loss: 3.5673 - accuracy: 0.3624 - val_loss: 1.0288 - val_accuracy: 0.6957 Epoch 2/25 187/187 - 31s - loss: 1.8503 - accuracy: 0.5232 - val_loss: 0.8439 - val_accuracy: 0.7484 Epoch 3/25 187/187 - 31s - loss: 1.5511 - accuracy: 0.5772 - val_loss: 0.7953 - val_accuracy: 0.7563 Epoch 4/25 187/187 - 31s - loss: 1.4660 - accuracy: 0.5878 - val_loss: 0.8061 - val_accuracy: 0.7535 Epoch 5/25 187/187 - 31s - loss: 1.4143 - accuracy: 0.6034 - val_loss: 0.7850 - val_accuracy: 0.7569 Epoch 6/25 187/187 - 31s - loss: 1.4000 - accuracy: 0.6054 - val_loss: 0.7846 - val_accuracy: 0.7646 Epoch 7/25 187/187 - 31s - loss: 1.3678 - accuracy: 0.6173 - val_loss: 0.7850 - val_accuracy: 0.7682 Epoch 8/25 187/187 - 31s - loss: 1.3286 - accuracy: 0.6222 - val_loss: 0.8142 - val_accuracy: 0.7608 Epoch 9/25 187/187 - 31s - loss: 1.3210 - accuracy: 0.6245 - val_loss: 0.7890 - val_accuracy: 0.7669 Epoch 10/25 187/187 - 31s - loss: 1.3086 - accuracy: 0.6278 - val_loss: 0.8368 - val_accuracy: 0.7575 Epoch 11/25 187/187 - 31s - loss: 1.2877 - accuracy: 0.6315 - val_loss: 0.8309 - val_accuracy: 0.7599 Epoch 12/25 187/187 - 31s - loss: 1.2918 - accuracy: 0.6308 - val_loss: 0.8319 - val_accuracy: 0.7535 Epoch 13/25 187/187 - 31s - loss: 1.2738 - accuracy: 0.6373 - val_loss: 0.8567 - val_accuracy: 0.7576 Epoch 14/25 187/187 - 31s - loss: 1.2837 - accuracy: 0.6410 - val_loss: 0.8004 - val_accuracy: 0.7697 Epoch 15/25 187/187 - 31s - loss: 1.2828 - accuracy: 0.6403 - val_loss: 0.8364 - val_accuracy: 0.7625 Epoch 16/25 187/187 - 31s - loss: 1.2749 - accuracy: 0.6405 - val_loss: 0.8558 - val_accuracy: 0.7565 Epoch 17/25 187/187 - 31s - loss: 1.3022 - accuracy: 0.6352 - val_loss: 0.8361 - val_accuracy: 0.7551 Epoch 18/25 187/187 - 31s - loss: 1.2848 - accuracy: 0.6394 - val_loss: 0.8958 - val_accuracy: 0.7479 Epoch 19/25 187/187 - 31s - loss: 1.2791 - accuracy: 0.6420 - val_loss: 0.8875 - val_accuracy: 0.7509 Epoch 20/25 187/187 - 30s - loss: 1.2834 - accuracy: 0.6416 - val_loss: 0.8653 - val_accuracy: 0.7607 Epoch 21/25 
187/187 - 30s - loss: 1.2608 - accuracy: 0.6435 - val_loss: 0.8451 - val_accuracy: 0.7612 Epoch 22/25 187/187 - 30s - loss: 1.2780 - accuracy: 0.6390 - val_loss: 0.9035 - val_accuracy: 0.7486 Epoch 23/25 187/187 - 30s - loss: 1.2742 - accuracy: 0.6473 - val_loss: 0.8837 - val_accuracy: 0.7556 Epoch 24/25 187/187 - 30s - loss: 1.2609 - accuracy: 0.6434 - val_loss: 0.9233 - val_accuracy: 0.7524 Epoch 25/25 187/187 - 31s - loss: 1.2630 - accuracy: 0.6496 - val_loss: 0.9116 - val_accuracy: 0.7584 png The second step is to unfreeze a number of layers and fit the model using smaller learning rate. In this example we show unfreezing all layers, but depending on specific dataset it may be desireble to only unfreeze a fraction of all layers. When the feature extraction with pretrained model works good enough, this step would give a very limited gain on validation accuracy. In our case we only see a small improvement, as ImageNet pretraining already exposed the model to a good amount of dogs. On the other hand, when we use pretrained weights on a dataset that is more different from ImageNet, this fine-tuning step can be crucial as the feature extractor also needs to be adjusted by a considerable amount. Such a situation can be demonstrated if choosing CIFAR-100 dataset instead, where fine-tuning boosts validation accuracy by about 10% to pass 80% on EfficientNetB0. In such a case the convergence may take more than 50 epochs. A side note on freezing/unfreezing models: setting trainable of a Model will simultaneously set all layers belonging to the Model to the same trainable attribute. Each layer is trainable only if both the layer itself and the model containing it are trainable. Hence when we need to partially freeze/unfreeze a model, we need to make sure the trainable attribute of the model is set to True. def unfreeze_model(model): # We unfreeze the top 20 layers while leaving BatchNorm layers frozen for layer in model.layers[-20:]: if not isinstance(layer, layers.BatchNormalization): layer.trainable = True optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4) model.compile( optimizer=optimizer, loss=\"categorical_crossentropy\", metrics=[\"accuracy\"] ) unfreeze_model(model) epochs = 10 # @param {type: \"slider\", min:8, max:50} hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2) plot_hist(hist) Epoch 1/10 187/187 - 33s - loss: 0.9956 - accuracy: 0.7080 - val_loss: 0.7644 - val_accuracy: 0.7856 Epoch 2/10 187/187 - 31s - loss: 0.8885 - accuracy: 0.7352 - val_loss: 0.7696 - val_accuracy: 0.7866 Epoch 3/10 187/187 - 31s - loss: 0.8059 - accuracy: 0.7533 - val_loss: 0.7659 - val_accuracy: 0.7885 Epoch 4/10 187/187 - 32s - loss: 0.7648 - accuracy: 0.7675 - val_loss: 0.7730 - val_accuracy: 0.7866 Epoch 5/10 187/187 - 32s - loss: 0.6982 - accuracy: 0.7833 - val_loss: 0.7691 - val_accuracy: 0.7858 Epoch 6/10 187/187 - 31s - loss: 0.6823 - accuracy: 0.7880 - val_loss: 0.7814 - val_accuracy: 0.7872 Epoch 7/10 187/187 - 31s - loss: 0.6536 - accuracy: 0.7953 - val_loss: 0.7850 - val_accuracy: 0.7873 Epoch 8/10 187/187 - 31s - loss: 0.6104 - accuracy: 0.8111 - val_loss: 0.7774 - val_accuracy: 0.7879 Epoch 9/10 187/187 - 32s - loss: 0.5990 - accuracy: 0.8067 - val_loss: 0.7925 - val_accuracy: 0.7870 Epoch 10/10 187/187 - 31s - loss: 0.5531 - accuracy: 0.8239 - val_loss: 0.7870 - val_accuracy: 0.7836 png Tips for fine tuning EfficientNet On unfreezing layers: The BathcNormalization layers need to be kept frozen (more details). 
If they are also made trainable, the first epoch after unfreezing will significantly reduce accuracy. In some cases it may be beneficial to open up only a portion of layers instead of unfreezing all. This will make fine tuning much faster when going to larger models like B7. Each block needs to be turned on or off in its entirety. This is because the architecture includes a shortcut from the first layer to the last layer of each block. Not respecting blocks also significantly harms the final performance. Some other tips for utilizing EfficientNet: Larger variants of EfficientNet do not guarantee improved performance, especially for tasks with less data or fewer classes. In such a case, the larger the EfficientNet variant chosen, the harder it is to tune hyperparameters. EMA (Exponential Moving Average) is very helpful in training EfficientNet from scratch, but not so much for transfer learning. Do not use the RMSprop setup as in the original paper for transfer learning. The momentum and learning rate are too high for transfer learning. They will easily corrupt the pretrained weights and blow up the loss. A quick check is to see if the loss (as categorical cross entropy) is getting significantly larger than log(NUM_CLASSES) after the same epoch. If so, the initial learning rate/momentum is too high. Smaller batch sizes benefit validation accuracy, possibly by effectively providing regularization. Using the latest EfficientNet weights Since the initial paper, EfficientNet has been improved by various methods for data preprocessing and for using unlabelled data to enhance learning results. These improvements are relatively hard and computationally costly to reproduce, and require extra code; but the weights are readily available in the form of TF checkpoint files. The model architecture has not changed, so loading the improved checkpoints is possible. To use a checkpoint provided at the official model repository, first download the checkpoint. As an example, here we download the noisy-student version of B1: !wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet\ /noisystudent/noisy_student_efficientnet-b1.tar.gz !tar -xf noisy_student_efficientnet-b1.tar.gz Then use the script efficientnet_weight_update_util.py to convert the ckpt file to an h5 file. !python efficientnet_weight_update_util.py --model b1 --notop --ckpt \ efficientnet-b1/model.ckpt --o efficientnetb1_notop.h5 When creating the model, use the following to load the new weights: model = EfficientNetB1(weights=\"efficientnetb1_notop.h5\", include_top=False) An all-convolutional network applied to patches of images. Introduction Vision Transformers (ViT; Dosovitskiy et al.) extract small patches from the input images, linearly project them, and then apply the Transformer (Vaswani et al.) blocks. The application of ViTs to image recognition tasks is quickly becoming a promising area of research, because ViTs eliminate the need for strong inductive biases (such as convolutions) for modeling locality. This presents them as a general computation primitive capable of learning just from the training data, with as few inductive priors as possible. ViTs yield great downstream performance when trained with proper regularization, data augmentation, and relatively large datasets. In the Patches Are All You Need paper (note: at the time of writing, it is a submission to the ICLR 2022 conference), the authors extend the idea of using patches to train an all-convolutional network and demonstrate competitive results.
Their architecture namely ConvMixer uses recipes from the recent isotrophic architectures like ViT, MLP-Mixer (Tolstikhin et al.), such as using the same depth and resolution across different layers in the network, residual connections, and so on. In this example, we will implement the ConvMixer model and demonstrate its performance on the CIFAR-10 dataset. To use the AdamW optimizer, we need to install TensorFlow Addons: pip install -U -q tensorflow-addons Imports from tensorflow.keras import layers from tensorflow import keras import matplotlib.pyplot as plt import tensorflow_addons as tfa import tensorflow as tf import numpy as np Hyperparameters To keep run time short, we will train the model for only 10 epochs. To focus on the core ideas of ConvMixer, we will not use other training-specific elements like RandAugment (Cubuk et al.). If you are interested in learning more about those details, please refer to the original paper. learning_rate = 0.001 weight_decay = 0.0001 batch_size = 128 num_epochs = 10 Load the CIFAR-10 dataset (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() val_split = 0.1 val_indices = int(len(x_train) * val_split) new_x_train, new_y_train = x_train[val_indices:], y_train[val_indices:] x_val, y_val = x_train[:val_indices], y_train[:val_indices] print(f\"Training data samples: {len(new_x_train)}\") print(f\"Validation data samples: {len(x_val)}\") print(f\"Test data samples: {len(x_test)}\") Training data samples: 45000 Validation data samples: 5000 Test data samples: 10000 Prepare tf.data.Dataset objects Our data augmentation pipeline is different from what the authors used for the CIFAR-10 dataset, which is fine for the purpose of the example. image_size = 32 auto = tf.data.AUTOTUNE data_augmentation = keras.Sequential( [layers.RandomCrop(image_size, image_size), layers.RandomFlip(\"horizontal\"),], name=\"data_augmentation\", ) def make_datasets(images, labels, is_train=False): dataset = tf.data.Dataset.from_tensor_slices((images, labels)) if is_train: dataset = dataset.shuffle(batch_size * 10) dataset = dataset.batch(batch_size) if is_train: dataset = dataset.map( lambda x, y: (data_augmentation(x), y), num_parallel_calls=auto ) return dataset.prefetch(auto) train_dataset = make_datasets(new_x_train, new_y_train, is_train=True) val_dataset = make_datasets(x_val, y_val) test_dataset = make_datasets(x_test, y_test) 2021-10-17 03:43:59.588315: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:43:59.596532: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:43:59.597211: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:43:59.622016: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 
2021-10-17 03:43:59.622853: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:43:59.623542: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:43:59.624174: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:44:00.067659: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:44:00.068334: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:44:00.068970: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:44:00.069615: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14684 MB memory: -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:04.0, compute capability: 7.0 ConvMixer utilities The following figure (taken from the original paper) depicts the ConvMixer model: ConvMixer is very similar to the MLP-Mixer, model with the following key differences: Instead of using fully-connected layers, it uses standard convolution layers. Instead of LayerNorm (which is typical for ViTs and MLP-Mixers), it uses BatchNorm. Two types of convolution layers are used in ConvMixer. (1): Depthwise convolutions, for mixing spatial locations of the images, (2): Pointwise convolutions (which follow the depthwise convolutions), for mixing channel-wise information across the patches. Another keypoint is the use of larger kernel sizes to allow a larger receptive field. def activation_block(x): x = layers.Activation(\"gelu\")(x) return layers.BatchNormalization()(x) def conv_stem(x, filters: int, patch_size: int): x = layers.Conv2D(filters, kernel_size=patch_size, strides=patch_size)(x) return activation_block(x) def conv_mixer_block(x, filters: int, kernel_size: int): # Depthwise convolution. x0 = x x = layers.DepthwiseConv2D(kernel_size=kernel_size, padding=\"same\")(x) x = layers.Add()([activation_block(x), x0]) # Residual. # Pointwise convolution. x = layers.Conv2D(filters, kernel_size=1)(x) x = activation_block(x) return x def get_conv_mixer_256_8( image_size=32, filters=256, depth=8, kernel_size=5, patch_size=2, num_classes=10 ): \"\"\"ConvMixer-256/8: https://openreview.net/pdf?id=TVHS5Y4dNvM. The hyperparameter values are taken from the paper. \"\"\" inputs = keras.Input((image_size, image_size, 3)) x = layers.Rescaling(scale=1.0 / 255)(inputs) # Extract patch embeddings. x = conv_stem(x, filters, patch_size) # ConvMixer blocks. for _ in range(depth): x = conv_mixer_block(x, filters, kernel_size) # Classification block. 
x = layers.GlobalAvgPool2D()(x) outputs = layers.Dense(num_classes, activation=\"softmax\")(x) return keras.Model(inputs, outputs) The model used in this experiment is termed as ConvMixer-256/8 where 256 denotes the number of channels and 8 denotes the depth. The resulting model only has 0.8 million parameters. Model training and evaluation utility # Code reference: # https://keras.io/examples/vision/image_classification_with_vision_transformer/. def run_experiment(model): optimizer = tfa.optimizers.AdamW( learning_rate=learning_rate, weight_decay=weight_decay ) model.compile( optimizer=optimizer, loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"], ) checkpoint_filepath = \"/tmp/checkpoint\" checkpoint_callback = keras.callbacks.ModelCheckpoint( checkpoint_filepath, monitor=\"val_accuracy\", save_best_only=True, save_weights_only=True, ) history = model.fit( train_dataset, validation_data=val_dataset, epochs=num_epochs, callbacks=[checkpoint_callback], ) model.load_weights(checkpoint_filepath) _, accuracy = model.evaluate(test_dataset) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") return history, model Train and evaluate model conv_mixer_model = get_conv_mixer_256_8() history, conv_mixer_model = run_experiment(conv_mixer_model) 2021-10-17 03:44:01.291445: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2) Epoch 1/10 2021-10-17 03:44:04.721186: I tensorflow/stream_executor/cuda/cuda_dnn.cc:369] Loaded cuDNN version 8005 352/352 [==============================] - 29s 70ms/step - loss: 1.2272 - accuracy: 0.5592 - val_loss: 3.9422 - val_accuracy: 0.1196 Epoch 2/10 352/352 [==============================] - 24s 69ms/step - loss: 0.7813 - accuracy: 0.7278 - val_loss: 0.8860 - val_accuracy: 0.6898 Epoch 3/10 352/352 [==============================] - 24s 68ms/step - loss: 0.5947 - accuracy: 0.7943 - val_loss: 0.6175 - val_accuracy: 0.7856 Epoch 4/10 352/352 [==============================] - 24s 69ms/step - loss: 0.4801 - accuracy: 0.8330 - val_loss: 0.5634 - val_accuracy: 0.8064 Epoch 5/10 352/352 [==============================] - 24s 68ms/step - loss: 0.4065 - accuracy: 0.8599 - val_loss: 0.5359 - val_accuracy: 0.8166 Epoch 6/10 352/352 [==============================] - 24s 68ms/step - loss: 0.3473 - accuracy: 0.8804 - val_loss: 0.5257 - val_accuracy: 0.8228 Epoch 7/10 352/352 [==============================] - 24s 68ms/step - loss: 0.3071 - accuracy: 0.8944 - val_loss: 0.4982 - val_accuracy: 0.8264 Epoch 8/10 352/352 [==============================] - 24s 68ms/step - loss: 0.2655 - accuracy: 0.9083 - val_loss: 0.5032 - val_accuracy: 0.8346 Epoch 9/10 352/352 [==============================] - 24s 68ms/step - loss: 0.2328 - accuracy: 0.9194 - val_loss: 0.5225 - val_accuracy: 0.8326 Epoch 10/10 352/352 [==============================] - 24s 68ms/step - loss: 0.2115 - accuracy: 0.9278 - val_loss: 0.5063 - val_accuracy: 0.8372 79/79 [==============================] - 2s 19ms/step - loss: 0.5412 - accuracy: 0.8325 Test accuracy: 83.25% The gap in training and validation performance can be mitigated by using additional regularization techniques. Nevertheless, being able to get to ~83% accuracy within 10 epochs with 0.8 million parameters is a strong result. Visualizing the internals of ConvMixer We can visualize the patch embeddings and the learned convolution filters. Recall that each patch embedding and intermediate feature map have the same number of channels (256 in this case). 
This will make our visualization utility easier to implement. # Code reference: https://bit.ly/3awIRbP. def visualization_plot(weights, idx=1): # First, apply min-max normalization to the # given weights to avoid isotropic scaling. p_min, p_max = weights.min(), weights.max() weights = (weights - p_min) / (p_max - p_min) # Visualize all the filters. num_filters = 256 plt.figure(figsize=(8, 8)) for i in range(num_filters): current_weight = weights[:, :, :, i] if current_weight.shape[-1] == 1: current_weight = current_weight.squeeze() ax = plt.subplot(16, 16, idx) ax.set_xticks([]) ax.set_yticks([]) plt.imshow(current_weight) idx += 1 # We first visualize the learned patch embeddings. patch_embeddings = conv_mixer_model.layers[2].get_weights()[0] visualization_plot(patch_embeddings) png Even though we did not train the network to convergence, we can notice that different patches show different patterns. Some share similarities with others, while some are very different. These visualizations are more salient with larger image sizes. Similarly, we can visualize the raw convolution kernels. This can help us understand the patterns to which a given kernel is receptive. # First, print the indices of the convolution layers that are not # pointwise convolutions. for i, layer in enumerate(conv_mixer_model.layers): if isinstance(layer, layers.DepthwiseConv2D): if layer.get_config()["kernel_size"] == (5, 5): print(i, layer) idx = 26 # Taking a kernel from the middle of the network. kernel = conv_mixer_model.layers[idx].get_weights()[0] kernel = np.expand_dims(kernel.squeeze(), axis=2) visualization_plot(kernel) 5 12 19 26 33 40 47 54 png We see that different filters in the kernel have different locality spans, and this pattern is likely to evolve with more training. Final notes There has been a recent trend of fusing convolutions with other data-agnostic operations like self-attention. The following works are along this line of research: ConViT (d'Ascoli et al.) CCT (Hassani et al.) CoAtNet (Dai et al.) Image classification with a Transformer that leverages external attention. Introduction This example implements the EANet model for image classification, and demonstrates it on the CIFAR-100 dataset. EANet introduces a novel attention mechanism named external attention, based on two external, small, learnable, and shared memories, which can be implemented easily by simply using two cascaded linear layers and two normalization layers. It conveniently replaces self-attention as used in existing architectures. External attention has linear complexity, as it only implicitly considers the correlations between all samples.
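To make the idea concrete before the full example, here is a minimal, single-head sketch of external attention, with an illustrative memory size and a hypothetical helper name of our choosing (the complete multi-head implementation actually used in this example appears later, in the "Implement the external attention block" section):

import tensorflow as tf
from tensorflow.keras import layers

def naive_external_attention(x, memory_size=64):
    # x has shape [batch_size, num_patches, dim].
    dim = x.shape[-1]
    m_k = layers.Dense(memory_size)  # External "key" memory M_k.
    m_v = layers.Dense(dim)          # External "value" memory M_v.
    attn = m_k(x)                        # [batch_size, num_patches, memory_size]
    attn = tf.nn.softmax(attn, axis=1)   # Normalize over the patch axis.
    attn = attn / (1e-9 + tf.reduce_sum(attn, axis=-1, keepdims=True))  # Double normalization.
    return m_v(attn)                     # [batch_size, num_patches, dim]

# A batch of 2 inputs with 256 patches of dimension 64.
out = naive_external_attention(tf.random.normal((2, 256, 64)))
print(out.shape)  # (2, 256, 64)

The attention map is computed against the small learned memory rather than against the input itself, which is where the linear complexity comes from.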
This example requires TensorFlow 2.5 or higher, as well as the TensorFlow Addons package, which can be installed using the following command: pip install -U tensorflow-addons Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_addons as tfa import matplotlib.pyplot as plt Prepare the data num_classes = 100 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) print(f"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}") print(f"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}") x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 100) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 100) Configure the hyperparameters weight_decay = 0.0001 learning_rate = 0.001 label_smoothing = 0.1 validation_split = 0.2 batch_size = 128 num_epochs = 50 patch_size = 2 # Size of the patches to be extracted from the input images. num_patches = (input_shape[0] // patch_size) ** 2 # Number of patches. embedding_dim = 64 # Number of hidden units. mlp_dim = 64 dim_coefficient = 4 num_heads = 4 attention_dropout = 0.2 projection_dropout = 0.2 num_transformer_blocks = 8 # Number of repetitions of the transformer layer. print(f"Patch size: {patch_size} X {patch_size} = {patch_size ** 2} ") print(f"Patches per image: {num_patches}") Patch size: 2 X 2 = 4 Patches per image: 256 Use data augmentation data_augmentation = keras.Sequential( [ layers.Normalization(), layers.RandomFlip("horizontal"), layers.RandomRotation(factor=0.1), layers.RandomContrast(factor=0.1), layers.RandomZoom(height_factor=0.2, width_factor=0.2), ], name="data_augmentation", ) # Compute the mean and the variance of the training data for normalization.
data_augmentation.layers[0].adapt(x_train) Implement the patch extraction and encoding layer class PatchExtract(layers.Layer): def __init__(self, patch_size, **kwargs): super(PatchExtract, self).__init__(**kwargs) self.patch_size = patch_size def call(self, images): batch_size = tf.shape(images)[0] patches = tf.image.extract_patches( images=images, sizes=(1, self.patch_size, self.patch_size, 1), strides=(1, self.patch_size, self.patch_size, 1), rates=(1, 1, 1, 1), padding="VALID", ) patch_dim = patches.shape[-1] patch_num = patches.shape[1] return tf.reshape(patches, (batch_size, patch_num * patch_num, patch_dim)) class PatchEmbedding(layers.Layer): def __init__(self, num_patch, embed_dim, **kwargs): super(PatchEmbedding, self).__init__(**kwargs) self.num_patch = num_patch self.proj = layers.Dense(embed_dim) self.pos_embed = layers.Embedding(input_dim=num_patch, output_dim=embed_dim) def call(self, patch): pos = tf.range(start=0, limit=self.num_patch, delta=1) return self.proj(patch) + self.pos_embed(pos) Implement the external attention block def external_attention( x, dim, num_heads, dim_coefficient=4, attention_dropout=0, projection_dropout=0 ): _, num_patch, channel = x.shape assert dim % num_heads == 0 num_heads = num_heads * dim_coefficient x = layers.Dense(dim * dim_coefficient)(x) # Create a tensor of shape [batch_size, num_patches, num_heads, dim * dim_coefficient // num_heads]. x = tf.reshape( x, shape=(-1, num_patch, num_heads, dim * dim_coefficient // num_heads) ) x = tf.transpose(x, perm=[0, 2, 1, 3]) # A linear layer M_k. attn = layers.Dense(dim // dim_coefficient)(x) # Normalize the attention map. attn = layers.Softmax(axis=2)(attn) # Double-normalization. attn = attn / (1e-9 + tf.reduce_sum(attn, axis=-1, keepdims=True)) attn = layers.Dropout(attention_dropout)(attn) # A linear layer M_v. x = layers.Dense(dim * dim_coefficient // num_heads)(attn) x = tf.transpose(x, perm=[0, 2, 1, 3]) x = tf.reshape(x, [-1, num_patch, dim * dim_coefficient]) # A linear layer to project back to the original embedding dim. x = layers.Dense(dim)(x) x = layers.Dropout(projection_dropout)(x) return x Implement the MLP block def mlp(x, embedding_dim, mlp_dim, drop_rate=0.2): x = layers.Dense(mlp_dim, activation=tf.nn.gelu)(x) x = layers.Dropout(drop_rate)(x) x = layers.Dense(embedding_dim)(x) x = layers.Dropout(drop_rate)(x) return x Implement the Transformer block def transformer_encoder( x, embedding_dim, mlp_dim, num_heads, dim_coefficient, attention_dropout, projection_dropout, attention_type="external_attention", ): residual_1 = x x = layers.LayerNormalization(epsilon=1e-5)(x) if attention_type == "external_attention": x = external_attention( x, embedding_dim, num_heads, dim_coefficient, attention_dropout, projection_dropout, ) elif attention_type == "self_attention": x = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embedding_dim, dropout=attention_dropout )(x, x) x = layers.add([x, residual_1]) residual_2 = x x = layers.LayerNormalization(epsilon=1e-5)(x) x = mlp(x, embedding_dim, mlp_dim) x = layers.add([x, residual_2]) return x Implement the EANet model The EANet model leverages external attention. The computational complexity of traditional self-attention is O(d * N ** 2), where d is the embedding size and N is the number of patches. The authors find that most pixels are closely related to just a few other pixels, so an N-to-N attention matrix may be redundant. As an alternative, they propose an external attention module whose computational complexity is O(d * S * N), where S is the size of the external memory.
As d and S are hyper-parameters, the proposed algorithm is linear in the number of pixels. In fact, this is equivalent to a drop patch operation, because a lot of information contained in a patch in an image is redundant and unimportant. def get_model(attention_type=\"external_attention\"): inputs = layers.Input(shape=input_shape) # Image augment x = data_augmentation(inputs) # Extract patches. x = PatchExtract(patch_size)(x) # Create patch embedding. x = PatchEmbedding(num_patches, embedding_dim)(x) # Create Transformer block. for _ in range(num_transformer_blocks): x = transformer_encoder( x, embedding_dim, mlp_dim, num_heads, dim_coefficient, attention_dropout, projection_dropout, attention_type, ) x = layers.GlobalAvgPool1D()(x) outputs = layers.Dense(num_classes, activation=\"softmax\")(x) model = keras.Model(inputs=inputs, outputs=outputs) return model Train on CIFAR-100 model = get_model(attention_type=\"external_attention\") model.compile( loss=keras.losses.CategoricalCrossentropy(label_smoothing=label_smoothing), optimizer=tfa.optimizers.AdamW( learning_rate=learning_rate, weight_decay=weight_decay ), metrics=[ keras.metrics.CategoricalAccuracy(name=\"accuracy\"), keras.metrics.TopKCategoricalAccuracy(5, name=\"top-5-accuracy\"), ], ) history = model.fit( x_train, y_train, batch_size=batch_size, epochs=num_epochs, validation_split=validation_split, ) Epoch 1/50 313/313 [==============================] - 40s 95ms/step - loss: 4.2091 - accuracy: 0.0723 - top-5-accuracy: 0.2384 - val_loss: 3.9706 - val_accuracy: 0.1153 - val_top-5-accuracy: 0.3336 Epoch 2/50 313/313 [==============================] - 29s 91ms/step - loss: 3.8028 - accuracy: 0.1427 - top-5-accuracy: 0.3871 - val_loss: 3.6672 - val_accuracy: 0.1829 - val_top-5-accuracy: 0.4513 Epoch 3/50 313/313 [==============================] - 29s 93ms/step - loss: 3.5493 - accuracy: 0.1978 - top-5-accuracy: 0.4805 - val_loss: 3.5402 - val_accuracy: 0.2141 - val_top-5-accuracy: 0.5038 Epoch 4/50 313/313 [==============================] - 29s 93ms/step - loss: 3.4029 - accuracy: 0.2355 - top-5-accuracy: 0.5328 - val_loss: 3.4496 - val_accuracy: 0.2354 - val_top-5-accuracy: 0.5316 Epoch 5/50 313/313 [==============================] - 29s 92ms/step - loss: 3.2917 - accuracy: 0.2636 - top-5-accuracy: 0.5678 - val_loss: 3.3342 - val_accuracy: 0.2699 - val_top-5-accuracy: 0.5679 Epoch 6/50 313/313 [==============================] - 29s 92ms/step - loss: 3.2116 - accuracy: 0.2830 - top-5-accuracy: 0.5921 - val_loss: 3.2896 - val_accuracy: 0.2749 - val_top-5-accuracy: 0.5874 Epoch 7/50 313/313 [==============================] - 28s 90ms/step - loss: 3.1453 - accuracy: 0.2980 - top-5-accuracy: 0.6100 - val_loss: 3.3090 - val_accuracy: 0.2857 - val_top-5-accuracy: 0.5831 Epoch 8/50 313/313 [==============================] - 29s 94ms/step - loss: 3.0889 - accuracy: 0.3121 - top-5-accuracy: 0.6266 - val_loss: 3.1969 - val_accuracy: 0.2975 - val_top-5-accuracy: 0.6082 Epoch 9/50 313/313 [==============================] - 29s 92ms/step - loss: 3.0390 - accuracy: 0.3252 - top-5-accuracy: 0.6441 - val_loss: 3.1249 - val_accuracy: 0.3175 - val_top-5-accuracy: 0.6330 Epoch 10/50 313/313 [==============================] - 29s 92ms/step - loss: 2.9871 - accuracy: 0.3365 - top-5-accuracy: 0.6615 - val_loss: 3.1121 - val_accuracy: 0.3200 - val_top-5-accuracy: 0.6374 Epoch 11/50 313/313 [==============================] - 29s 92ms/step - loss: 2.9476 - accuracy: 0.3489 - top-5-accuracy: 0.6697 - val_loss: 3.1156 - val_accuracy: 0.3268 - val_top-5-accuracy: 
0.6421 Epoch 12/50 313/313 [==============================] - 29s 91ms/step - loss: 2.9106 - accuracy: 0.3576 - top-5-accuracy: 0.6783 - val_loss: 3.1337 - val_accuracy: 0.3226 - val_top-5-accuracy: 0.6389 Epoch 13/50 313/313 [==============================] - 29s 92ms/step - loss: 2.8772 - accuracy: 0.3662 - top-5-accuracy: 0.6871 - val_loss: 3.0373 - val_accuracy: 0.3348 - val_top-5-accuracy: 0.6624 Epoch 14/50 313/313 [==============================] - 29s 92ms/step - loss: 2.8508 - accuracy: 0.3756 - top-5-accuracy: 0.6944 - val_loss: 3.0297 - val_accuracy: 0.3441 - val_top-5-accuracy: 0.6643 Epoch 15/50 313/313 [==============================] - 28s 90ms/step - loss: 2.8211 - accuracy: 0.3821 - top-5-accuracy: 0.7034 - val_loss: 2.9680 - val_accuracy: 0.3604 - val_top-5-accuracy: 0.6847 Epoch 16/50 313/313 [==============================] - 28s 90ms/step - loss: 2.8017 - accuracy: 0.3864 - top-5-accuracy: 0.7090 - val_loss: 2.9746 - val_accuracy: 0.3584 - val_top-5-accuracy: 0.6855 Epoch 17/50 313/313 [==============================] - 29s 91ms/step - loss: 2.7714 - accuracy: 0.3962 - top-5-accuracy: 0.7169 - val_loss: 2.9104 - val_accuracy: 0.3738 - val_top-5-accuracy: 0.6940 Epoch 18/50 313/313 [==============================] - 29s 92ms/step - loss: 2.7523 - accuracy: 0.4008 - top-5-accuracy: 0.7204 - val_loss: 2.8560 - val_accuracy: 0.3861 - val_top-5-accuracy: 0.7115 Epoch 19/50 313/313 [==============================] - 28s 91ms/step - loss: 2.7320 - accuracy: 0.4051 - top-5-accuracy: 0.7263 - val_loss: 2.8780 - val_accuracy: 0.3820 - val_top-5-accuracy: 0.7101 Epoch 20/50 313/313 [==============================] - 28s 90ms/step - loss: 2.7139 - accuracy: 0.4114 - top-5-accuracy: 0.7290 - val_loss: 2.9831 - val_accuracy: 0.3694 - val_top-5-accuracy: 0.6922 Epoch 21/50 313/313 [==============================] - 28s 91ms/step - loss: 2.6991 - accuracy: 0.4142 - top-5-accuracy: 0.7335 - val_loss: 2.8420 - val_accuracy: 0.3968 - val_top-5-accuracy: 0.7138 Epoch 22/50 313/313 [==============================] - 29s 91ms/step - loss: 2.6842 - accuracy: 0.4195 - top-5-accuracy: 0.7377 - val_loss: 2.7965 - val_accuracy: 0.4088 - val_top-5-accuracy: 0.7266 Epoch 23/50 313/313 [==============================] - 28s 91ms/step - loss: 2.6571 - accuracy: 0.4273 - top-5-accuracy: 0.7436 - val_loss: 2.8620 - val_accuracy: 0.3947 - val_top-5-accuracy: 0.7155 Epoch 24/50 313/313 [==============================] - 29s 91ms/step - loss: 2.6508 - accuracy: 0.4277 - top-5-accuracy: 0.7469 - val_loss: 2.8459 - val_accuracy: 0.3963 - val_top-5-accuracy: 0.7150 Epoch 25/50 313/313 [==============================] - 28s 90ms/step - loss: 2.6403 - accuracy: 0.4283 - top-5-accuracy: 0.7520 - val_loss: 2.7886 - val_accuracy: 0.4128 - val_top-5-accuracy: 0.7283 Epoch 26/50 313/313 [==============================] - 29s 92ms/step - loss: 2.6281 - accuracy: 0.4353 - top-5-accuracy: 0.7523 - val_loss: 2.8493 - val_accuracy: 0.4026 - val_top-5-accuracy: 0.7153 Epoch 27/50 313/313 [==============================] - 29s 92ms/step - loss: 2.6092 - accuracy: 0.4403 - top-5-accuracy: 0.7580 - val_loss: 2.7539 - val_accuracy: 0.4186 - val_top-5-accuracy: 0.7392 Epoch 28/50 313/313 [==============================] - 29s 91ms/step - loss: 2.5992 - accuracy: 0.4423 - top-5-accuracy: 0.7600 - val_loss: 2.8625 - val_accuracy: 0.3964 - val_top-5-accuracy: 0.7174 Epoch 29/50 313/313 [==============================] - 28s 90ms/step - loss: 2.5913 - accuracy: 0.4456 - top-5-accuracy: 0.7598 - val_loss: 2.7911 - val_accuracy: 
0.4162 - val_top-5-accuracy: 0.7329 Epoch 30/50 313/313 [==============================] - 29s 92ms/step - loss: 2.5780 - accuracy: 0.4480 - top-5-accuracy: 0.7649 - val_loss: 2.8158 - val_accuracy: 0.4118 - val_top-5-accuracy: 0.7288 Epoch 31/50 313/313 [==============================] - 28s 91ms/step - loss: 2.5657 - accuracy: 0.4547 - top-5-accuracy: 0.7661 - val_loss: 2.8651 - val_accuracy: 0.4056 - val_top-5-accuracy: 0.7217 Epoch 32/50 313/313 [==============================] - 29s 91ms/step - loss: 2.5637 - accuracy: 0.4480 - top-5-accuracy: 0.7681 - val_loss: 2.8190 - val_accuracy: 0.4094 - val_top-5-accuracy: 0.7267 Epoch 33/50 313/313 [==============================] - 29s 92ms/step - loss: 2.5525 - accuracy: 0.4545 - top-5-accuracy: 0.7693 - val_loss: 2.7985 - val_accuracy: 0.4216 - val_top-5-accuracy: 0.7303 Epoch 34/50 313/313 [==============================] - 28s 91ms/step - loss: 2.5462 - accuracy: 0.4579 - top-5-accuracy: 0.7721 - val_loss: 2.8865 - val_accuracy: 0.4016 - val_top-5-accuracy: 0.7204 Epoch 35/50 313/313 [==============================] - 29s 92ms/step - loss: 2.5329 - accuracy: 0.4616 - top-5-accuracy: 0.7740 - val_loss: 2.7862 - val_accuracy: 0.4232 - val_top-5-accuracy: 0.7389 Epoch 36/50 313/313 [==============================] - 28s 90ms/step - loss: 2.5234 - accuracy: 0.4610 - top-5-accuracy: 0.7765 - val_loss: 2.8234 - val_accuracy: 0.4134 - val_top-5-accuracy: 0.7312 Epoch 37/50 313/313 [==============================] - 29s 91ms/step - loss: 2.5152 - accuracy: 0.4663 - top-5-accuracy: 0.7774 - val_loss: 2.7894 - val_accuracy: 0.4161 - val_top-5-accuracy: 0.7376 Epoch 38/50 313/313 [==============================] - 29s 92ms/step - loss: 2.5117 - accuracy: 0.4674 - top-5-accuracy: 0.7790 - val_loss: 2.8091 - val_accuracy: 0.4142 - val_top-5-accuracy: 0.7360 Epoch 39/50 313/313 [==============================] - 28s 90ms/step - loss: 2.5047 - accuracy: 0.4681 - top-5-accuracy: 0.7805 - val_loss: 2.8199 - val_accuracy: 0.4167 - val_top-5-accuracy: 0.7299 Epoch 40/50 313/313 [==============================] - 28s 90ms/step - loss: 2.4974 - accuracy: 0.4697 - top-5-accuracy: 0.7819 - val_loss: 2.7864 - val_accuracy: 0.4247 - val_top-5-accuracy: 0.7402 Epoch 41/50 313/313 [==============================] - 28s 90ms/step - loss: 2.4889 - accuracy: 0.4749 - top-5-accuracy: 0.7854 - val_loss: 2.8120 - val_accuracy: 0.4217 - val_top-5-accuracy: 0.7358 Epoch 42/50 313/313 [==============================] - 28s 90ms/step - loss: 2.4799 - accuracy: 0.4771 - top-5-accuracy: 0.7866 - val_loss: 2.9003 - val_accuracy: 0.4038 - val_top-5-accuracy: 0.7170 Epoch 43/50 313/313 [==============================] - 28s 90ms/step - loss: 2.4814 - accuracy: 0.4770 - top-5-accuracy: 0.7868 - val_loss: 2.7504 - val_accuracy: 0.4260 - val_top-5-accuracy: 0.7457 Epoch 44/50 313/313 [==============================] - 28s 91ms/step - loss: 2.4747 - accuracy: 0.4757 - top-5-accuracy: 0.7870 - val_loss: 2.8207 - val_accuracy: 0.4166 - val_top-5-accuracy: 0.7363 Epoch 45/50 313/313 [==============================] - 28s 90ms/step - loss: 2.4653 - accuracy: 0.4809 - top-5-accuracy: 0.7924 - val_loss: 2.8663 - val_accuracy: 0.4130 - val_top-5-accuracy: 0.7209 Epoch 46/50 313/313 [==============================] - 28s 90ms/step - loss: 2.4554 - accuracy: 0.4825 - top-5-accuracy: 0.7929 - val_loss: 2.8145 - val_accuracy: 0.4250 - val_top-5-accuracy: 0.7357 Epoch 47/50 313/313 [==============================] - 29s 91ms/step - loss: 2.4602 - accuracy: 0.4823 - top-5-accuracy: 0.7919 - 
val_loss: 2.8352 - val_accuracy: 0.4189 - val_top-5-accuracy: 0.7365 Epoch 48/50 313/313 [==============================] - 28s 91ms/step - loss: 2.4493 - accuracy: 0.4848 - top-5-accuracy: 0.7933 - val_loss: 2.8246 - val_accuracy: 0.4160 - val_top-5-accuracy: 0.7362 Epoch 49/50 313/313 [==============================] - 28s 91ms/step - loss: 2.4454 - accuracy: 0.4846 - top-5-accuracy: 0.7958 - val_loss: 2.7731 - val_accuracy: 0.4320 - val_top-5-accuracy: 0.7436 Epoch 50/50 313/313 [==============================] - 29s 92ms/step - loss: 2.4418 - accuracy: 0.4848 - top-5-accuracy: 0.7951 - val_loss: 2.7926 - val_accuracy: 0.4317 - val_top-5-accuracy: 0.7410 Let's visualize the training progress of the model. plt.plot(history.history["loss"], label="train_loss") plt.plot(history.history["val_loss"], label="val_loss") plt.xlabel("Epochs") plt.ylabel("Loss") plt.title("Train and Validation Losses Over Epochs", fontsize=14) plt.legend() plt.grid() plt.show() png Let's display the final results of the test on CIFAR-100. loss, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f"Test loss: {round(loss, 2)}") print(f"Test accuracy: {round(accuracy * 100, 2)}%") print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%") 313/313 [==============================] - 6s 21ms/step - loss: 2.7574 - accuracy: 0.4391 - top-5-accuracy: 0.7471 Test loss: 2.76 Test accuracy: 43.91% Test top 5 accuracy: 74.71% EANet simply replaces self-attention in ViT with external attention. The traditional ViT achieves ~73% test top-5 accuracy and ~41% top-1 accuracy after training for 50 epochs, with 0.6M parameters. Under the same experimental environment and the same hyperparameters, the EANet model we just trained has only 0.3M parameters, and it gets us to ~73% test top-5 accuracy and ~43% top-1 accuracy. This demonstrates the effectiveness of external attention. We only show the training process of EANet; you can train ViT under the same experimental conditions and observe the test results. Implementing the MLP-Mixer, FNet, and gMLP models for CIFAR-100 image classification. Introduction This example implements three modern attention-free, multi-layer perceptron (MLP) based models for image classification, demonstrated on the CIFAR-100 dataset: The MLP-Mixer model, by Ilya Tolstikhin et al., based on two types of MLPs. The FNet model, by James Lee-Thorp et al., based on the unparameterized Fourier Transform. The gMLP model, by Hanxiao Liu et al., based on MLPs with gating. The purpose of the example is not to compare these models, as they might perform differently on different datasets with well-tuned hyperparameters. Rather, it is to show simple implementations of their main building blocks.
This example requires TensorFlow 2.4 or higher, as well as TensorFlow Addons, which can be installed using the following command: pip install -U tensorflow-addons Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_addons as tfa Prepare the data num_classes = 100 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() print(f\"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}\") print(f\"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}\") x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 1) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 1) Configure the hyperparameters weight_decay = 0.0001 batch_size = 128 num_epochs = 50 dropout_rate = 0.2 image_size = 64 # We'll resize input images to this size. patch_size = 8 # Size of the patches to be extracted from the input images. num_patches = (image_size // patch_size) ** 2 # Size of the data array. embedding_dim = 256 # Number of hidden units. num_blocks = 4 # Number of blocks. print(f\"Image size: {image_size} X {image_size} = {image_size ** 2}\") print(f\"Patch size: {patch_size} X {patch_size} = {patch_size ** 2} \") print(f\"Patches per image: {num_patches}\") print(f\"Elements per patch (3 channels): {(patch_size ** 2) * 3}\") Image size: 64 X 64 = 4096 Patch size: 8 X 8 = 64 Patches per image: 64 Elements per patch (3 channels): 192 Build a classification model We implement a method that builds a classifier given the processing blocks. def build_classifier(blocks, positional_encoding=False): inputs = layers.Input(shape=input_shape) # Augment data. augmented = data_augmentation(inputs) # Create patches. patches = Patches(patch_size, num_patches)(augmented) # Encode patches to generate a [batch_size, num_patches, embedding_dim] tensor. x = layers.Dense(units=embedding_dim)(patches) if positional_encoding: positions = tf.range(start=0, limit=num_patches, delta=1) position_embedding = layers.Embedding( input_dim=num_patches, output_dim=embedding_dim )(positions) x = x + position_embedding # Process x using the module blocks. x = blocks(x) # Apply global average pooling to generate a [batch_size, embedding_dim] representation tensor. representation = layers.GlobalAveragePooling1D()(x) # Apply dropout. representation = layers.Dropout(rate=dropout_rate)(representation) # Compute logits outputs. logits = layers.Dense(num_classes)(representation) # Create the Keras model. return keras.Model(inputs=inputs, outputs=logits) Define an experiment We implement a utility function to compile, train, and evaluate a given model. def run_experiment(model): # Create Adam optimizer with weight decay. optimizer = tfa.optimizers.AdamW( learning_rate=learning_rate, weight_decay=weight_decay, ) # Compile the model. model.compile( optimizer=optimizer, loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[ keras.metrics.SparseCategoricalAccuracy(name=\"acc\"), keras.metrics.SparseTopKCategoricalAccuracy(5, name=\"top5-acc\"), ], ) # Create a learning rate scheduler callback. reduce_lr = keras.callbacks.ReduceLROnPlateau( monitor=\"val_loss\", factor=0.5, patience=5 ) # Create an early stopping callback. early_stopping = tf.keras.callbacks.EarlyStopping( monitor=\"val_loss\", patience=10, restore_best_weights=True ) # Fit the model. 
history = model.fit( x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs, validation_split=0.1, callbacks=[early_stopping, reduce_lr], ) _, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f"Test accuracy: {round(accuracy * 100, 2)}%") print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%") # Return history to plot learning curves. return history Use data augmentation data_augmentation = keras.Sequential( [ layers.Normalization(), layers.Resizing(image_size, image_size), layers.RandomFlip("horizontal"), layers.RandomZoom( height_factor=0.2, width_factor=0.2 ), ], name="data_augmentation", ) # Compute the mean and the variance of the training data for normalization. data_augmentation.layers[0].adapt(x_train) Implement patch extraction as a layer class Patches(layers.Layer): def __init__(self, patch_size, num_patches): super(Patches, self).__init__() self.patch_size = patch_size self.num_patches = num_patches def call(self, images): batch_size = tf.shape(images)[0] patches = tf.image.extract_patches( images=images, sizes=[1, self.patch_size, self.patch_size, 1], strides=[1, self.patch_size, self.patch_size, 1], rates=[1, 1, 1, 1], padding="VALID", ) patch_dims = patches.shape[-1] patches = tf.reshape(patches, [batch_size, self.num_patches, patch_dims]) return patches The MLP-Mixer model The MLP-Mixer is an architecture based exclusively on multi-layer perceptrons (MLPs), which contains two types of MLP layers: One applied independently to image patches, which mixes the per-location features. The other applied across patches (along channels), which mixes spatial information. This is similar to a depthwise separable convolution based model such as the Xception model, but with two chained dense transforms, no max pooling, and layer normalization instead of batch normalization. Implement the MLP-Mixer module class MLPMixerLayer(layers.Layer): def __init__(self, num_patches, hidden_units, dropout_rate, *args, **kwargs): super(MLPMixerLayer, self).__init__(*args, **kwargs) self.mlp1 = keras.Sequential( [ layers.Dense(units=num_patches), tfa.layers.GELU(), layers.Dense(units=num_patches), layers.Dropout(rate=dropout_rate), ] ) self.mlp2 = keras.Sequential( [ layers.Dense(units=num_patches), tfa.layers.GELU(), layers.Dense(units=embedding_dim), layers.Dropout(rate=dropout_rate), ] ) self.normalize = layers.LayerNormalization(epsilon=1e-6) def call(self, inputs): # Apply layer normalization. x = self.normalize(inputs) # Transpose inputs from [num_batches, num_patches, hidden_units] to [num_batches, hidden_units, num_patches]. x_channels = tf.linalg.matrix_transpose(x) # Apply mlp1 on each channel independently. mlp1_outputs = self.mlp1(x_channels) # Transpose mlp1_outputs from [num_batches, hidden_dim, num_patches] to [num_batches, num_patches, hidden_units]. mlp1_outputs = tf.linalg.matrix_transpose(mlp1_outputs) # Add skip connection. x = mlp1_outputs + inputs # Apply layer normalization. x_patches = self.normalize(x) # Apply mlp2 on each patch independently. mlp2_outputs = self.mlp2(x_patches) # Add skip connection. x = x + mlp2_outputs return x Build, train, and evaluate the MLP-Mixer model Note that training the model with the current settings on a V100 GPU takes around 8 seconds per epoch.
mlpmixer_blocks = keras.Sequential( [MLPMixerLayer(num_patches, embedding_dim, dropout_rate) for _ in range(num_blocks)] ) learning_rate = 0.005 mlpmixer_classifier = build_classifier(mlpmixer_blocks) history = run_experiment(mlpmixer_classifier) /opt/conda/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py:390: UserWarning: Default value of `approximate` is changed from `True` to `False` return py_builtins.overload_of(f)(*args) Epoch 1/50 352/352 [==============================] - 13s 25ms/step - loss: 4.1703 - acc: 0.0756 - top5-acc: 0.2322 - val_loss: 3.6202 - val_acc: 0.1532 - val_top5-acc: 0.4140 Epoch 2/50 352/352 [==============================] - 8s 23ms/step - loss: 3.4165 - acc: 0.1789 - top5-acc: 0.4459 - val_loss: 3.1599 - val_acc: 0.2334 - val_top5-acc: 0.5160 Epoch 3/50 352/352 [==============================] - 8s 23ms/step - loss: 3.1367 - acc: 0.2328 - top5-acc: 0.5230 - val_loss: 3.0539 - val_acc: 0.2560 - val_top5-acc: 0.5664 Epoch 4/50 352/352 [==============================] - 8s 23ms/step - loss: 2.9985 - acc: 0.2624 - top5-acc: 0.5600 - val_loss: 2.9498 - val_acc: 0.2798 - val_top5-acc: 0.5856 Epoch 5/50 352/352 [==============================] - 8s 23ms/step - loss: 2.8806 - acc: 0.2809 - top5-acc: 0.5879 - val_loss: 2.8593 - val_acc: 0.2904 - val_top5-acc: 0.6050 Epoch 6/50 352/352 [==============================] - 8s 23ms/step - loss: 2.7860 - acc: 0.3024 - top5-acc: 0.6124 - val_loss: 2.7405 - val_acc: 0.3256 - val_top5-acc: 0.6364 Epoch 7/50 352/352 [==============================] - 8s 23ms/step - loss: 2.7065 - acc: 0.3152 - top5-acc: 0.6280 - val_loss: 2.7548 - val_acc: 0.3328 - val_top5-acc: 0.6450 Epoch 8/50 352/352 [==============================] - 8s 22ms/step - loss: 2.6443 - acc: 0.3263 - top5-acc: 0.6446 - val_loss: 2.6618 - val_acc: 0.3460 - val_top5-acc: 0.6578 Epoch 9/50 352/352 [==============================] - 8s 23ms/step - loss: 2.5886 - acc: 0.3406 - top5-acc: 0.6573 - val_loss: 2.6065 - val_acc: 0.3492 - val_top5-acc: 0.6650 Epoch 10/50 352/352 [==============================] - 8s 23ms/step - loss: 2.5798 - acc: 0.3404 - top5-acc: 0.6591 - val_loss: 2.6546 - val_acc: 0.3502 - val_top5-acc: 0.6630 Epoch 11/50 352/352 [==============================] - 8s 23ms/step - loss: 2.5269 - acc: 0.3498 - top5-acc: 0.6714 - val_loss: 2.6201 - val_acc: 0.3570 - val_top5-acc: 0.6710 Epoch 12/50 352/352 [==============================] - 8s 23ms/step - loss: 2.5003 - acc: 0.3569 - top5-acc: 0.6745 - val_loss: 2.5936 - val_acc: 0.3564 - val_top5-acc: 0.6662 Epoch 13/50 352/352 [==============================] - 8s 22ms/step - loss: 2.4801 - acc: 0.3619 - top5-acc: 0.6792 - val_loss: 2.5236 - val_acc: 0.3700 - val_top5-acc: 0.6786 Epoch 14/50 352/352 [==============================] - 8s 23ms/step - loss: 2.4392 - acc: 0.3676 - top5-acc: 0.6879 - val_loss: 2.4971 - val_acc: 0.3808 - val_top5-acc: 0.6926 Epoch 15/50 352/352 [==============================] - 8s 23ms/step - loss: 2.4073 - acc: 0.3790 - top5-acc: 0.6940 - val_loss: 2.5972 - val_acc: 0.3682 - val_top5-acc: 0.6750 Epoch 16/50 352/352 [==============================] - 8s 23ms/step - loss: 2.3922 - acc: 0.3754 - top5-acc: 0.6980 - val_loss: 2.4317 - val_acc: 0.3964 - val_top5-acc: 0.6992 Epoch 17/50 352/352 [==============================] - 8s 22ms/step - loss: 2.3603 - acc: 0.3891 - top5-acc: 0.7038 - val_loss: 2.4844 - val_acc: 0.3766 - val_top5-acc: 0.6964 Epoch 18/50 352/352 [==============================] - 8s 23ms/step - loss: 2.3560 - acc: 0.3849 - top5-acc: 0.7056 - 
val_loss: 2.4564 - val_acc: 0.3910 - val_top5-acc: 0.6990 Epoch 19/50 352/352 [==============================] - 8s 23ms/step - loss: 2.3367 - acc: 0.3900 - top5-acc: 0.7069 - val_loss: 2.4282 - val_acc: 0.3906 - val_top5-acc: 0.7058 Epoch 20/50 352/352 [==============================] - 8s 22ms/step - loss: 2.3096 - acc: 0.3945 - top5-acc: 0.7180 - val_loss: 2.4297 - val_acc: 0.3930 - val_top5-acc: 0.7082 Epoch 21/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2935 - acc: 0.3996 - top5-acc: 0.7211 - val_loss: 2.4053 - val_acc: 0.3974 - val_top5-acc: 0.7076 Epoch 22/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2823 - acc: 0.3991 - top5-acc: 0.7248 - val_loss: 2.4756 - val_acc: 0.3920 - val_top5-acc: 0.6988 Epoch 23/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2371 - acc: 0.4126 - top5-acc: 0.7294 - val_loss: 2.3802 - val_acc: 0.3972 - val_top5-acc: 0.7100 Epoch 24/50 352/352 [==============================] - 8s 23ms/step - loss: 2.2234 - acc: 0.4140 - top5-acc: 0.7336 - val_loss: 2.4402 - val_acc: 0.3994 - val_top5-acc: 0.7096 Epoch 25/50 352/352 [==============================] - 8s 23ms/step - loss: 2.2320 - acc: 0.4088 - top5-acc: 0.7333 - val_loss: 2.4343 - val_acc: 0.3936 - val_top5-acc: 0.7052 Epoch 26/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2094 - acc: 0.4193 - top5-acc: 0.7347 - val_loss: 2.4154 - val_acc: 0.4058 - val_top5-acc: 0.7192 Epoch 27/50 352/352 [==============================] - 8s 23ms/step - loss: 2.2029 - acc: 0.4180 - top5-acc: 0.7370 - val_loss: 2.3116 - val_acc: 0.4226 - val_top5-acc: 0.7268 Epoch 28/50 352/352 [==============================] - 8s 23ms/step - loss: 2.1959 - acc: 0.4234 - top5-acc: 0.7380 - val_loss: 2.4053 - val_acc: 0.4064 - val_top5-acc: 0.7168 Epoch 29/50 352/352 [==============================] - 8s 23ms/step - loss: 2.1815 - acc: 0.4227 - top5-acc: 0.7415 - val_loss: 2.4020 - val_acc: 0.4078 - val_top5-acc: 0.7192 Epoch 30/50 352/352 [==============================] - 8s 23ms/step - loss: 2.1783 - acc: 0.4245 - top5-acc: 0.7407 - val_loss: 2.4206 - val_acc: 0.3996 - val_top5-acc: 0.7234 Epoch 31/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1686 - acc: 0.4248 - top5-acc: 0.7442 - val_loss: 2.3743 - val_acc: 0.4100 - val_top5-acc: 0.7162 Epoch 32/50 352/352 [==============================] - 8s 23ms/step - loss: 2.1487 - acc: 0.4317 - top5-acc: 0.7472 - val_loss: 2.3882 - val_acc: 0.4018 - val_top5-acc: 0.7266 Epoch 33/50 352/352 [==============================] - 8s 22ms/step - loss: 1.9836 - acc: 0.4644 - top5-acc: 0.7782 - val_loss: 2.1742 - val_acc: 0.4536 - val_top5-acc: 0.7506 Epoch 34/50 352/352 [==============================] - 8s 23ms/step - loss: 1.8723 - acc: 0.4950 - top5-acc: 0.7985 - val_loss: 2.1716 - val_acc: 0.4506 - val_top5-acc: 0.7546 Epoch 35/50 352/352 [==============================] - 8s 23ms/step - loss: 1.8461 - acc: 0.5009 - top5-acc: 0.8003 - val_loss: 2.1661 - val_acc: 0.4480 - val_top5-acc: 0.7542 Epoch 36/50 352/352 [==============================] - 8s 23ms/step - loss: 1.8499 - acc: 0.4944 - top5-acc: 0.8044 - val_loss: 2.1523 - val_acc: 0.4566 - val_top5-acc: 0.7628 Epoch 37/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8322 - acc: 0.5000 - top5-acc: 0.8059 - val_loss: 2.1334 - val_acc: 0.4570 - val_top5-acc: 0.7560 Epoch 38/50 352/352 [==============================] - 8s 23ms/step - loss: 1.8269 - acc: 0.5027 - top5-acc: 0.8085 - val_loss: 2.1024 - val_acc: 0.4614 
- val_top5-acc: 0.7674 Epoch 39/50 352/352 [==============================] - 8s 23ms/step - loss: 1.8242 - acc: 0.4990 - top5-acc: 0.8098 - val_loss: 2.0789 - val_acc: 0.4610 - val_top5-acc: 0.7792 Epoch 40/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7983 - acc: 0.5067 - top5-acc: 0.8122 - val_loss: 2.1514 - val_acc: 0.4546 - val_top5-acc: 0.7628 Epoch 41/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7974 - acc: 0.5112 - top5-acc: 0.8132 - val_loss: 2.1425 - val_acc: 0.4542 - val_top5-acc: 0.7630 Epoch 42/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7972 - acc: 0.5128 - top5-acc: 0.8127 - val_loss: 2.0980 - val_acc: 0.4580 - val_top5-acc: 0.7724 Epoch 43/50 352/352 [==============================] - 8s 23ms/step - loss: 1.8026 - acc: 0.5066 - top5-acc: 0.8115 - val_loss: 2.0922 - val_acc: 0.4684 - val_top5-acc: 0.7678 Epoch 44/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7924 - acc: 0.5092 - top5-acc: 0.8129 - val_loss: 2.0511 - val_acc: 0.4750 - val_top5-acc: 0.7726 Epoch 45/50 352/352 [==============================] - 8s 22ms/step - loss: 1.7695 - acc: 0.5106 - top5-acc: 0.8193 - val_loss: 2.0949 - val_acc: 0.4678 - val_top5-acc: 0.7708 Epoch 46/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7784 - acc: 0.5106 - top5-acc: 0.8141 - val_loss: 2.1094 - val_acc: 0.4656 - val_top5-acc: 0.7704 Epoch 47/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7625 - acc: 0.5155 - top5-acc: 0.8190 - val_loss: 2.0492 - val_acc: 0.4774 - val_top5-acc: 0.7744 Epoch 48/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7441 - acc: 0.5217 - top5-acc: 0.8190 - val_loss: 2.0562 - val_acc: 0.4698 - val_top5-acc: 0.7828 Epoch 49/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7665 - acc: 0.5113 - top5-acc: 0.8196 - val_loss: 2.0348 - val_acc: 0.4708 - val_top5-acc: 0.7730 Epoch 50/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7392 - acc: 0.5201 - top5-acc: 0.8226 - val_loss: 2.0787 - val_acc: 0.4710 - val_top5-acc: 0.7734 313/313 [==============================] - 2s 8ms/step - loss: 2.0571 - acc: 0.4758 - top5-acc: 0.7718 Test accuracy: 47.58% Test top 5 accuracy: 77.18% The MLP-Mixer model tends to have far fewer parameters than convolutional and Transformer-based models, which reduces training and serving computational cost. As mentioned in the MLP-Mixer paper, when pre-trained on large datasets, or with modern regularization schemes, the MLP-Mixer attains scores competitive with state-of-the-art models. You can obtain better results by increasing the embedding dimensions, increasing the number of mixer blocks, and training the model for longer. You may also try to increase the size of the input images and use different patch sizes. The FNet model FNet uses a block similar to the Transformer block. However, FNet replaces the self-attention layer in the Transformer block with a parameter-free 2D Fourier transformation layer: One 1D Fourier Transform is applied along the patches. One 1D Fourier Transform is applied along the channels.
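Before implementing the FNet block, here is a small optional check (a sketch with illustrative shapes, not part of the original example) that the 2D Fourier transform used below is indeed equivalent to applying one 1D transform along the channels followed by one along the patches:

import numpy as np

# Illustrative shape: 64 patches with 256-dimensional embeddings.
x = np.random.rand(64, 256)

# 2D FFT over the (patches, channels) axes...
fft_2d = np.fft.fft2(x)
# ...equals a 1D FFT along the channels followed by a 1D FFT along the patches.
fft_1d_twice = np.fft.fft(np.fft.fft(x, axis=-1), axis=0)

print(np.allclose(fft_2d, fft_1d_twice))  # True

The FNet block below applies the same transform with tf.signal.fft2d and keeps only the real part of the result.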
Implement the FNet module class FNetLayer(layers.Layer): def __init__(self, num_patches, embedding_dim, dropout_rate, *args, **kwargs): super(FNetLayer, self).__init__(*args, **kwargs) self.ffn = keras.Sequential( [ layers.Dense(units=embedding_dim), tfa.layers.GELU(), layers.Dropout(rate=dropout_rate), layers.Dense(units=embedding_dim), ] ) self.normalize1 = layers.LayerNormalization(epsilon=1e-6) self.normalize2 = layers.LayerNormalization(epsilon=1e-6) def call(self, inputs): # Apply fourier transformations. x = tf.cast( tf.signal.fft2d(tf.cast(inputs, dtype=tf.dtypes.complex64)), dtype=tf.dtypes.float32, ) # Add skip connection. x = x + inputs # Apply layer normalization. x = self.normalize1(x) # Apply Feedfowrad network. x_ffn = self.ffn(x) # Add skip connection. x = x + x_ffn # Apply layer normalization. return self.normalize2(x) Build, train, and evaluate the FNet model Note that training the model with the current settings on a V100 GPUs takes around 8 seconds per epoch. fnet_blocks = keras.Sequential( [FNetLayer(num_patches, embedding_dim, dropout_rate) for _ in range(num_blocks)] ) learning_rate = 0.001 fnet_classifier = build_classifier(fnet_blocks, positional_encoding=True) history = run_experiment(fnet_classifier) Epoch 1/50 352/352 [==============================] - 11s 23ms/step - loss: 4.3419 - acc: 0.0470 - top5-acc: 0.1652 - val_loss: 3.8279 - val_acc: 0.1178 - val_top5-acc: 0.3268 Epoch 2/50 352/352 [==============================] - 8s 22ms/step - loss: 3.7814 - acc: 0.1202 - top5-acc: 0.3341 - val_loss: 3.5981 - val_acc: 0.1540 - val_top5-acc: 0.3914 Epoch 3/50 352/352 [==============================] - 8s 22ms/step - loss: 3.5319 - acc: 0.1603 - top5-acc: 0.4086 - val_loss: 3.3309 - val_acc: 0.1956 - val_top5-acc: 0.4656 Epoch 4/50 352/352 [==============================] - 8s 22ms/step - loss: 3.3025 - acc: 0.2001 - top5-acc: 0.4730 - val_loss: 3.1215 - val_acc: 0.2334 - val_top5-acc: 0.5234 Epoch 5/50 352/352 [==============================] - 8s 22ms/step - loss: 3.1621 - acc: 0.2224 - top5-acc: 0.5084 - val_loss: 3.0492 - val_acc: 0.2456 - val_top5-acc: 0.5322 Epoch 6/50 352/352 [==============================] - 8s 22ms/step - loss: 3.0506 - acc: 0.2469 - top5-acc: 0.5400 - val_loss: 2.9519 - val_acc: 0.2684 - val_top5-acc: 0.5652 Epoch 7/50 352/352 [==============================] - 8s 22ms/step - loss: 2.9520 - acc: 0.2618 - top5-acc: 0.5677 - val_loss: 2.8936 - val_acc: 0.2688 - val_top5-acc: 0.5864 Epoch 8/50 352/352 [==============================] - 8s 22ms/step - loss: 2.8377 - acc: 0.2828 - top5-acc: 0.5938 - val_loss: 2.7633 - val_acc: 0.2996 - val_top5-acc: 0.6068 Epoch 9/50 352/352 [==============================] - 8s 22ms/step - loss: 2.7670 - acc: 0.2969 - top5-acc: 0.6107 - val_loss: 2.7309 - val_acc: 0.3112 - val_top5-acc: 0.6136 Epoch 10/50 352/352 [==============================] - 8s 22ms/step - loss: 2.7027 - acc: 0.3148 - top5-acc: 0.6231 - val_loss: 2.6552 - val_acc: 0.3214 - val_top5-acc: 0.6436 Epoch 11/50 352/352 [==============================] - 8s 22ms/step - loss: 2.6375 - acc: 0.3256 - top5-acc: 0.6427 - val_loss: 2.6078 - val_acc: 0.3278 - val_top5-acc: 0.6434 Epoch 12/50 352/352 [==============================] - 8s 22ms/step - loss: 2.5573 - acc: 0.3424 - top5-acc: 0.6576 - val_loss: 2.5617 - val_acc: 0.3438 - val_top5-acc: 0.6534 Epoch 13/50 352/352 [==============================] - 8s 22ms/step - loss: 2.5259 - acc: 0.3488 - top5-acc: 0.6640 - val_loss: 2.5177 - val_acc: 0.3550 - val_top5-acc: 0.6652 Epoch 14/50 352/352 
[==============================] - 8s 22ms/step - loss: 2.4782 - acc: 0.3586 - top5-acc: 0.6739 - val_loss: 2.5113 - val_acc: 0.3558 - val_top5-acc: 0.6718 Epoch 15/50 352/352 [==============================] - 8s 22ms/step - loss: 2.4242 - acc: 0.3712 - top5-acc: 0.6897 - val_loss: 2.4280 - val_acc: 0.3724 - val_top5-acc: 0.6880 Epoch 16/50 352/352 [==============================] - 8s 22ms/step - loss: 2.3884 - acc: 0.3741 - top5-acc: 0.6967 - val_loss: 2.4670 - val_acc: 0.3654 - val_top5-acc: 0.6794 Epoch 17/50 352/352 [==============================] - 8s 22ms/step - loss: 2.3619 - acc: 0.3797 - top5-acc: 0.7001 - val_loss: 2.3941 - val_acc: 0.3752 - val_top5-acc: 0.6922 Epoch 18/50 352/352 [==============================] - 8s 22ms/step - loss: 2.3183 - acc: 0.3931 - top5-acc: 0.7137 - val_loss: 2.4028 - val_acc: 0.3814 - val_top5-acc: 0.6954 Epoch 19/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2919 - acc: 0.3955 - top5-acc: 0.7209 - val_loss: 2.3672 - val_acc: 0.3878 - val_top5-acc: 0.7022 Epoch 20/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2612 - acc: 0.4038 - top5-acc: 0.7224 - val_loss: 2.3529 - val_acc: 0.3954 - val_top5-acc: 0.6934 Epoch 21/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2416 - acc: 0.4068 - top5-acc: 0.7262 - val_loss: 2.3014 - val_acc: 0.3980 - val_top5-acc: 0.7158 Epoch 22/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2087 - acc: 0.4162 - top5-acc: 0.7359 - val_loss: 2.2904 - val_acc: 0.4062 - val_top5-acc: 0.7120 Epoch 23/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1803 - acc: 0.4200 - top5-acc: 0.7442 - val_loss: 2.3181 - val_acc: 0.4096 - val_top5-acc: 0.7120 Epoch 24/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1718 - acc: 0.4246 - top5-acc: 0.7403 - val_loss: 2.2687 - val_acc: 0.4094 - val_top5-acc: 0.7234 Epoch 25/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1559 - acc: 0.4198 - top5-acc: 0.7458 - val_loss: 2.2730 - val_acc: 0.4060 - val_top5-acc: 0.7190 Epoch 26/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1285 - acc: 0.4300 - top5-acc: 0.7495 - val_loss: 2.2566 - val_acc: 0.4082 - val_top5-acc: 0.7306 Epoch 27/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1118 - acc: 0.4386 - top5-acc: 0.7538 - val_loss: 2.2544 - val_acc: 0.4178 - val_top5-acc: 0.7218 Epoch 28/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1007 - acc: 0.4408 - top5-acc: 0.7562 - val_loss: 2.2703 - val_acc: 0.4136 - val_top5-acc: 0.7172 Epoch 29/50 352/352 [==============================] - 8s 22ms/step - loss: 2.0707 - acc: 0.4446 - top5-acc: 0.7634 - val_loss: 2.2244 - val_acc: 0.4168 - val_top5-acc: 0.7332 Epoch 30/50 352/352 [==============================] - 8s 22ms/step - loss: 2.0694 - acc: 0.4428 - top5-acc: 0.7611 - val_loss: 2.2557 - val_acc: 0.4060 - val_top5-acc: 0.7270 Epoch 31/50 352/352 [==============================] - 8s 22ms/step - loss: 2.0485 - acc: 0.4502 - top5-acc: 0.7672 - val_loss: 2.2192 - val_acc: 0.4214 - val_top5-acc: 0.7308 Epoch 32/50 352/352 [==============================] - 8s 22ms/step - loss: 2.0105 - acc: 0.4617 - top5-acc: 0.7718 - val_loss: 2.2065 - val_acc: 0.4222 - val_top5-acc: 0.7286 Epoch 33/50 352/352 [==============================] - 8s 22ms/step - loss: 2.0238 - acc: 0.4556 - top5-acc: 0.7734 - val_loss: 2.1736 - val_acc: 0.4270 - val_top5-acc: 0.7368 Epoch 34/50 352/352 [==============================] - 
8s 22ms/step - loss: 2.0253 - acc: 0.4547 - top5-acc: 0.7712 - val_loss: 2.2231 - val_acc: 0.4280 - val_top5-acc: 0.7308 Epoch 35/50 352/352 [==============================] - 8s 22ms/step - loss: 1.9992 - acc: 0.4593 - top5-acc: 0.7765 - val_loss: 2.1994 - val_acc: 0.4212 - val_top5-acc: 0.7358 Epoch 36/50 352/352 [==============================] - 8s 22ms/step - loss: 1.9849 - acc: 0.4636 - top5-acc: 0.7754 - val_loss: 2.2167 - val_acc: 0.4276 - val_top5-acc: 0.7308 Epoch 37/50 352/352 [==============================] - 8s 22ms/step - loss: 1.9880 - acc: 0.4677 - top5-acc: 0.7783 - val_loss: 2.1746 - val_acc: 0.4270 - val_top5-acc: 0.7416 Epoch 38/50 352/352 [==============================] - 8s 22ms/step - loss: 1.9562 - acc: 0.4720 - top5-acc: 0.7845 - val_loss: 2.1976 - val_acc: 0.4312 - val_top5-acc: 0.7356 Epoch 39/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8736 - acc: 0.4924 - top5-acc: 0.8004 - val_loss: 2.0755 - val_acc: 0.4578 - val_top5-acc: 0.7586 Epoch 40/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8189 - acc: 0.5042 - top5-acc: 0.8076 - val_loss: 2.0804 - val_acc: 0.4508 - val_top5-acc: 0.7600 Epoch 41/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8069 - acc: 0.5062 - top5-acc: 0.8132 - val_loss: 2.0784 - val_acc: 0.4456 - val_top5-acc: 0.7578 Epoch 42/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8156 - acc: 0.5052 - top5-acc: 0.8110 - val_loss: 2.0910 - val_acc: 0.4544 - val_top5-acc: 0.7542 Epoch 43/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8143 - acc: 0.5046 - top5-acc: 0.8105 - val_loss: 2.1037 - val_acc: 0.4466 - val_top5-acc: 0.7562 Epoch 44/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8119 - acc: 0.5032 - top5-acc: 0.8141 - val_loss: 2.0794 - val_acc: 0.4622 - val_top5-acc: 0.7532 Epoch 45/50 352/352 [==============================] - 8s 22ms/step - loss: 1.7611 - acc: 0.5188 - top5-acc: 0.8224 - val_loss: 2.0371 - val_acc: 0.4650 - val_top5-acc: 0.7628 Epoch 46/50 352/352 [==============================] - 8s 22ms/step - loss: 1.7713 - acc: 0.5189 - top5-acc: 0.8226 - val_loss: 2.0245 - val_acc: 0.4630 - val_top5-acc: 0.7644 Epoch 47/50 352/352 [==============================] - 8s 22ms/step - loss: 1.7809 - acc: 0.5130 - top5-acc: 0.8215 - val_loss: 2.0471 - val_acc: 0.4618 - val_top5-acc: 0.7618 Epoch 48/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8052 - acc: 0.5112 - top5-acc: 0.8165 - val_loss: 2.0441 - val_acc: 0.4596 - val_top5-acc: 0.7658 Epoch 49/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8128 - acc: 0.5039 - top5-acc: 0.8178 - val_loss: 2.0569 - val_acc: 0.4600 - val_top5-acc: 0.7614 Epoch 50/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8179 - acc: 0.5089 - top5-acc: 0.8155 - val_loss: 2.0514 - val_acc: 0.4576 - val_top5-acc: 0.7566 313/313 [==============================] - 2s 6ms/step - loss: 2.0142 - acc: 0.4663 - top5-acc: 0.7647 Test accuracy: 46.63% Test top 5 accuracy: 76.47% As shown in the FNet paper, better results can be achieved by increasing the embedding dimensions, increasing the number of FNet blocks, and training the model for longer. You may also try to increase the size of the input images and use different patch sizes. The FNet scales very efficiently to long inputs, runs much faster than attention-based Transformer models, and produces competitive accuracy results. 
The gMLP model The gMLP is an MLP architecture that features a Spatial Gating Unit (SGU). The SGU enables cross-patch interactions across the spatial (channel) dimension, by: Transforming the input spatially by applying linear projection across patches (along channels). Applying element-wise multiplication of the input and its spatial transformation. Implement the gMLP module class gMLPLayer(layers.Layer): def __init__(self, num_patches, embedding_dim, dropout_rate, *args, **kwargs): super(gMLPLayer, self).__init__(*args, **kwargs) self.channel_projection1 = keras.Sequential( [ layers.Dense(units=embedding_dim * 2), tfa.layers.GELU(), layers.Dropout(rate=dropout_rate), ] ) self.channel_projection2 = layers.Dense(units=embedding_dim) self.spatial_projection = layers.Dense( units=num_patches, bias_initializer="Ones" ) self.normalize1 = layers.LayerNormalization(epsilon=1e-6) self.normalize2 = layers.LayerNormalization(epsilon=1e-6) def spatial_gating_unit(self, x): # Split x along the channel dimension. # Tensors u and v will have the shape [batch_size, num_patches, embedding_dim]. u, v = tf.split(x, num_or_size_splits=2, axis=2) # Apply layer normalization. v = self.normalize2(v) # Apply spatial projection. v_channels = tf.linalg.matrix_transpose(v) v_projected = self.spatial_projection(v_channels) v_projected = tf.linalg.matrix_transpose(v_projected) # Apply element-wise multiplication. return u * v_projected def call(self, inputs): # Apply layer normalization. x = self.normalize1(inputs) # Apply the first channel projection. x_projected shape: [batch_size, num_patches, embedding_dim * 2]. x_projected = self.channel_projection1(x) # Apply the spatial gating unit. x_spatial shape: [batch_size, num_patches, embedding_dim]. x_spatial = self.spatial_gating_unit(x_projected) # Apply the second channel projection. x_projected shape: [batch_size, num_patches, embedding_dim]. x_projected = self.channel_projection2(x_spatial) # Add skip connection. return x + x_projected Build, train, and evaluate the gMLP model Note that training the model with the current settings on a V100 GPU takes around 9 seconds per epoch.
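Before launching the training run below, a quick optional sanity check (a sketch assuming the gMLPLayer class and the hyperparameters defined earlier; not part of the original example) confirms that the block preserves the [batch_size, num_patches, embedding_dim] shape of its input, since the spatial projection is applied to the transposed tensor:

# Optional shape check for the gMLP block defined above.
sample = tf.random.normal((2, num_patches, embedding_dim))  # [2, 64, 256]
gmlp_layer = gMLPLayer(num_patches, embedding_dim, dropout_rate)
print(gmlp_layer(sample).shape)  # (2, 64, 256): the block preserves the input shape.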
gmlp_blocks = keras.Sequential( [gMLPLayer(num_patches, embedding_dim, dropout_rate) for _ in range(num_blocks)] ) learning_rate = 0.003 gmlp_classifier = build_classifier(gmlp_blocks) history = run_experiment(gmlp_classifier) Epoch 1/50 352/352 [==============================] - 13s 28ms/step - loss: 4.1713 - acc: 0.0704 - top5-acc: 0.2206 - val_loss: 3.5629 - val_acc: 0.1548 - val_top5-acc: 0.4086 Epoch 2/50 352/352 [==============================] - 9s 27ms/step - loss: 3.5146 - acc: 0.1633 - top5-acc: 0.4172 - val_loss: 3.2899 - val_acc: 0.2066 - val_top5-acc: 0.4900 Epoch 3/50 352/352 [==============================] - 9s 26ms/step - loss: 3.2588 - acc: 0.2017 - top5-acc: 0.4895 - val_loss: 3.1152 - val_acc: 0.2362 - val_top5-acc: 0.5278 Epoch 4/50 352/352 [==============================] - 9s 26ms/step - loss: 3.1037 - acc: 0.2331 - top5-acc: 0.5288 - val_loss: 2.9771 - val_acc: 0.2624 - val_top5-acc: 0.5646 Epoch 5/50 352/352 [==============================] - 9s 26ms/step - loss: 2.9483 - acc: 0.2637 - top5-acc: 0.5680 - val_loss: 2.8807 - val_acc: 0.2784 - val_top5-acc: 0.5840 Epoch 6/50 352/352 [==============================] - 9s 26ms/step - loss: 2.8411 - acc: 0.2821 - top5-acc: 0.5930 - val_loss: 2.7246 - val_acc: 0.3146 - val_top5-acc: 0.6256 Epoch 7/50 352/352 [==============================] - 9s 26ms/step - loss: 2.7221 - acc: 0.3085 - top5-acc: 0.6193 - val_loss: 2.7022 - val_acc: 0.3108 - val_top5-acc: 0.6270 Epoch 8/50 352/352 [==============================] - 9s 26ms/step - loss: 2.6296 - acc: 0.3334 - top5-acc: 0.6420 - val_loss: 2.6289 - val_acc: 0.3324 - val_top5-acc: 0.6494 Epoch 9/50 352/352 [==============================] - 9s 26ms/step - loss: 2.5691 - acc: 0.3413 - top5-acc: 0.6563 - val_loss: 2.5353 - val_acc: 0.3586 - val_top5-acc: 0.6746 Epoch 10/50 352/352 [==============================] - 9s 26ms/step - loss: 2.4854 - acc: 0.3575 - top5-acc: 0.6760 - val_loss: 2.5271 - val_acc: 0.3578 - val_top5-acc: 0.6720 Epoch 11/50 352/352 [==============================] - 9s 26ms/step - loss: 2.4252 - acc: 0.3722 - top5-acc: 0.6870 - val_loss: 2.4553 - val_acc: 0.3684 - val_top5-acc: 0.6850 Epoch 12/50 352/352 [==============================] - 9s 26ms/step - loss: 2.3814 - acc: 0.3822 - top5-acc: 0.6985 - val_loss: 2.3841 - val_acc: 0.3888 - val_top5-acc: 0.6966 Epoch 13/50 352/352 [==============================] - 9s 26ms/step - loss: 2.3119 - acc: 0.3950 - top5-acc: 0.7135 - val_loss: 2.4306 - val_acc: 0.3780 - val_top5-acc: 0.6894 Epoch 14/50 352/352 [==============================] - 9s 26ms/step - loss: 2.2886 - acc: 0.4033 - top5-acc: 0.7168 - val_loss: 2.4053 - val_acc: 0.3932 - val_top5-acc: 0.7010 Epoch 15/50 352/352 [==============================] - 9s 26ms/step - loss: 2.2455 - acc: 0.4080 - top5-acc: 0.7233 - val_loss: 2.3443 - val_acc: 0.4004 - val_top5-acc: 0.7128 Epoch 16/50 352/352 [==============================] - 9s 26ms/step - loss: 2.2128 - acc: 0.4152 - top5-acc: 0.7317 - val_loss: 2.3150 - val_acc: 0.4018 - val_top5-acc: 0.7174 Epoch 17/50 352/352 [==============================] - 9s 26ms/step - loss: 2.1990 - acc: 0.4206 - top5-acc: 0.7357 - val_loss: 2.3590 - val_acc: 0.3978 - val_top5-acc: 0.7086 Epoch 18/50 352/352 [==============================] - 9s 26ms/step - loss: 2.1574 - acc: 0.4258 - top5-acc: 0.7451 - val_loss: 2.3140 - val_acc: 0.4052 - val_top5-acc: 0.7256 Epoch 19/50 352/352 [==============================] - 9s 26ms/step - loss: 2.1369 - acc: 0.4309 - top5-acc: 0.7487 - val_loss: 2.3012 - val_acc: 0.4124 - 
val_top5-acc: 0.7190 Epoch 20/50 352/352 [==============================] - 9s 26ms/step - loss: 2.1222 - acc: 0.4350 - top5-acc: 0.7494 - val_loss: 2.3294 - val_acc: 0.4076 - val_top5-acc: 0.7186 Epoch 21/50 352/352 [==============================] - 9s 26ms/step - loss: 2.0822 - acc: 0.4436 - top5-acc: 0.7576 - val_loss: 2.2498 - val_acc: 0.4302 - val_top5-acc: 0.7276 Epoch 22/50 352/352 [==============================] - 9s 26ms/step - loss: 2.0609 - acc: 0.4518 - top5-acc: 0.7610 - val_loss: 2.2915 - val_acc: 0.4232 - val_top5-acc: 0.7280 Epoch 23/50 352/352 [==============================] - 9s 26ms/step - loss: 2.0482 - acc: 0.4590 - top5-acc: 0.7648 - val_loss: 2.2448 - val_acc: 0.4242 - val_top5-acc: 0.7296 Epoch 24/50 352/352 [==============================] - 9s 26ms/step - loss: 2.0292 - acc: 0.4560 - top5-acc: 0.7705 - val_loss: 2.2526 - val_acc: 0.4334 - val_top5-acc: 0.7324 Epoch 25/50 352/352 [==============================] - 9s 26ms/step - loss: 2.0316 - acc: 0.4544 - top5-acc: 0.7687 - val_loss: 2.2430 - val_acc: 0.4318 - val_top5-acc: 0.7338 Epoch 26/50 352/352 [==============================] - 9s 26ms/step - loss: 1.9988 - acc: 0.4616 - top5-acc: 0.7748 - val_loss: 2.2053 - val_acc: 0.4470 - val_top5-acc: 0.7366 Epoch 27/50 352/352 [==============================] - 9s 26ms/step - loss: 1.9788 - acc: 0.4646 - top5-acc: 0.7806 - val_loss: 2.2313 - val_acc: 0.4378 - val_top5-acc: 0.7420 Epoch 28/50 352/352 [==============================] - 9s 26ms/step - loss: 1.9702 - acc: 0.4688 - top5-acc: 0.7829 - val_loss: 2.2392 - val_acc: 0.4344 - val_top5-acc: 0.7338 Epoch 29/50 352/352 [==============================] - 9s 26ms/step - loss: 1.9488 - acc: 0.4699 - top5-acc: 0.7866 - val_loss: 2.1600 - val_acc: 0.4490 - val_top5-acc: 0.7446 Epoch 30/50 352/352 [==============================] - 9s 26ms/step - loss: 1.9302 - acc: 0.4803 - top5-acc: 0.7878 - val_loss: 2.2069 - val_acc: 0.4410 - val_top5-acc: 0.7486 Epoch 31/50 352/352 [==============================] - 9s 26ms/step - loss: 1.9135 - acc: 0.4806 - top5-acc: 0.7916 - val_loss: 2.1929 - val_acc: 0.4486 - val_top5-acc: 0.7514 Epoch 32/50 352/352 [==============================] - 9s 26ms/step - loss: 1.8890 - acc: 0.4844 - top5-acc: 0.7961 - val_loss: 2.2176 - val_acc: 0.4404 - val_top5-acc: 0.7494 Epoch 33/50 352/352 [==============================] - 9s 26ms/step - loss: 1.8844 - acc: 0.4872 - top5-acc: 0.7980 - val_loss: 2.2321 - val_acc: 0.4444 - val_top5-acc: 0.7460 Epoch 34/50 352/352 [==============================] - 9s 26ms/step - loss: 1.8588 - acc: 0.4912 - top5-acc: 0.8005 - val_loss: 2.1895 - val_acc: 0.4532 - val_top5-acc: 0.7510 Epoch 35/50 352/352 [==============================] - 9s 26ms/step - loss: 1.7259 - acc: 0.5232 - top5-acc: 0.8266 - val_loss: 2.1024 - val_acc: 0.4800 - val_top5-acc: 0.7726 Epoch 36/50 352/352 [==============================] - 9s 26ms/step - loss: 1.6262 - acc: 0.5488 - top5-acc: 0.8437 - val_loss: 2.0712 - val_acc: 0.4830 - val_top5-acc: 0.7754 Epoch 37/50 352/352 [==============================] - 9s 26ms/step - loss: 1.6164 - acc: 0.5481 - top5-acc: 0.8390 - val_loss: 2.1219 - val_acc: 0.4772 - val_top5-acc: 0.7678 Epoch 38/50 352/352 [==============================] - 9s 26ms/step - loss: 1.5850 - acc: 0.5568 - top5-acc: 0.8510 - val_loss: 2.0931 - val_acc: 0.4892 - val_top5-acc: 0.7732 Epoch 39/50 352/352 [==============================] - 9s 26ms/step - loss: 1.5741 - acc: 0.5589 - top5-acc: 0.8507 - val_loss: 2.0910 - val_acc: 0.4910 - val_top5-acc: 0.7700 Epoch 40/50 
352/352 [==============================] - 9s 26ms/step - loss: 1.5546 - acc: 0.5675 - top5-acc: 0.8519 - val_loss: 2.1388 - val_acc: 0.4790 - val_top5-acc: 0.7742 Epoch 41/50 352/352 [==============================] - 9s 26ms/step - loss: 1.5464 - acc: 0.5684 - top5-acc: 0.8561 - val_loss: 2.1121 - val_acc: 0.4786 - val_top5-acc: 0.7718 Epoch 42/50 352/352 [==============================] - 9s 26ms/step - loss: 1.4494 - acc: 0.5890 - top5-acc: 0.8702 - val_loss: 2.1157 - val_acc: 0.4944 - val_top5-acc: 0.7802 Epoch 43/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3847 - acc: 0.6069 - top5-acc: 0.8825 - val_loss: 2.1048 - val_acc: 0.4884 - val_top5-acc: 0.7752 Epoch 44/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3724 - acc: 0.6087 - top5-acc: 0.8832 - val_loss: 2.0681 - val_acc: 0.4924 - val_top5-acc: 0.7868 Epoch 45/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3643 - acc: 0.6116 - top5-acc: 0.8840 - val_loss: 2.0965 - val_acc: 0.4932 - val_top5-acc: 0.7752 Epoch 46/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3517 - acc: 0.6184 - top5-acc: 0.8849 - val_loss: 2.0869 - val_acc: 0.4956 - val_top5-acc: 0.7778 Epoch 47/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3377 - acc: 0.6211 - top5-acc: 0.8891 - val_loss: 2.1120 - val_acc: 0.4882 - val_top5-acc: 0.7764 Epoch 48/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3369 - acc: 0.6186 - top5-acc: 0.8888 - val_loss: 2.1257 - val_acc: 0.4912 - val_top5-acc: 0.7752 Epoch 49/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3266 - acc: 0.6190 - top5-acc: 0.8893 - val_loss: 2.0961 - val_acc: 0.4958 - val_top5-acc: 0.7828 Epoch 50/50 352/352 [==============================] - 9s 26ms/step - loss: 1.2731 - acc: 0.6352 - top5-acc: 0.8976 - val_loss: 2.0897 - val_acc: 0.4982 - val_top5-acc: 0.7788 313/313 [==============================] - 2s 7ms/step - loss: 2.0743 - acc: 0.5064 - top5-acc: 0.7828 Test accuracy: 50.64% Test top 5 accuracy: 78.28% As shown in the gMLP paper, better results can be achieved by increasing the embedding dimensions, increasing the number of gMLP blocks, and training the model for longer. You may also try to increase the size of the input images and use different patch sizes. Note that, the paper used advanced regularization strategies, such as MixUp and CutMix, as well as AutoAugment. Implementing the Perceiver model for image classification. Introduction This example implements the Perceiver: General Perception with Iterative Attention model by Andrew Jaegle et al. for image classification, and demonstrates it on the CIFAR-100 dataset. The Perceiver model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. In other words: let's assume that your input data array (e.g. image) has M elements (i.e. patches), where M is large. In a standard Transformer model, a self-attention operation is performed for the M elements. The complexity of this operation is O(M^2). However, the Perceiver model creates a latent array of size N elements, where N << M, and performs two operations iteratively: Cross-attention Transformer between the latent array and the data array - The complexity of this operation is O(M.N). Self-attention Transformer on the latent array - The complexity of this operation is O(N^2). 
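To make the scaling argument concrete, here is a back-of-the-envelope count for the sizes configured below in this example (M = 1024 patches in the data array, N = 256 elements in the latent array). This is illustrative arithmetic only; it ignores attention heads, projections, feedforward layers, and the number of Transformer blocks.
# Rough count of pairwise attention interactions (illustrative only).
M = 1024  # data array size (number of patches) used in this example
N = 256   # latent array size used in this example
standard_self_attention = M * M          # 1,048,576
perceiver_per_iteration = M * N + N * N  # 262,144 + 65,536 = 327,680
print(standard_self_attention / perceiver_per_iteration)  # ~3.2x fewer interactions per iteration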
This example requires TensorFlow 2.4 or higher, as well as TensorFlow Addons, which can be installed using the following command: pip install -U tensorflow-addons Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_addons as tfa Prepare the data num_classes = 100 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() print(f\"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}\") print(f\"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}\") x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 1) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 1) Configure the hyperparameters learning_rate = 0.001 weight_decay = 0.0001 batch_size = 64 num_epochs = 50 dropout_rate = 0.2 image_size = 64 # We'll resize input images to this size. patch_size = 2 # Size of the patches to be extracted from the input images. num_patches = (image_size // patch_size) ** 2 # Size of the data array. latent_dim = 256 # Size of the latent array. projection_dim = 256 # Embedding size of each element in the data and latent arrays. num_heads = 8 # Number of Transformer heads. ffn_units = [ projection_dim, projection_dim, ] # Size of the Transformer Feedforward network. num_transformer_blocks = 4 num_iterations = 2 # Repetitions of the cross-attention and Transformer modules. classifier_units = [ projection_dim, num_classes, ] # Size of the Feedforward network of the final classifier. print(f\"Image size: {image_size} X {image_size} = {image_size ** 2}\") print(f\"Patch size: {patch_size} X {patch_size} = {patch_size ** 2} \") print(f\"Patches per image: {num_patches}\") print(f\"Elements per patch (3 channels): {(patch_size ** 2) * 3}\") print(f\"Latent array shape: {latent_dim} X {projection_dim}\") print(f\"Data array shape: {num_patches} X {projection_dim}\") Image size: 64 X 64 = 4096 Patch size: 2 X 2 = 4 Patches per image: 1024 Elements per patch (3 channels): 12 Latent array shape: 256 X 256 Data array shape: 1024 X 256 Note that, in order to use each pixel as an individual input in the data array, set patch_size to 1. Use data augmentation data_augmentation = keras.Sequential( [ layers.Normalization(), layers.Resizing(image_size, image_size), layers.RandomFlip(\"horizontal\"), layers.RandomZoom( height_factor=0.2, width_factor=0.2 ), ], name=\"data_augmentation\", ) # Compute the mean and the variance of the training data for normalization. data_augmentation.layers[0].adapt(x_train) Implement Feedforward network (FFN) def create_ffn(hidden_units, dropout_rate): ffn_layers = [] for units in hidden_units[:-1]: ffn_layers.append(layers.Dense(units, activation=tf.nn.gelu)) ffn_layers.append(layers.Dense(units=hidden_units[-1])) ffn_layers.append(layers.Dropout(dropout_rate)) ffn = keras.Sequential(ffn_layers) return ffn Implement patch creation as a layer class Patches(layers.Layer): def __init__(self, patch_size): super(Patches, self).__init__() self.patch_size = patch_size def call(self, images): batch_size = tf.shape(images)[0] patches = tf.image.extract_patches( images=images, sizes=[1, self.patch_size, self.patch_size, 1], strides=[1, self.patch_size, self.patch_size, 1], rates=[1, 1, 1, 1], padding=\"VALID\", ) patch_dims = patches.shape[-1] patches = tf.reshape(patches, [batch_size, -1, patch_dims]) return patches Implement the patch encoding layer The PatchEncoder layer will linearly transform a patch by projecting it into a vector of size latent_dim.
In addition, it adds a learnable position embedding to the projected vector. Note that the original Perceiver paper uses Fourier feature positional encodings. class PatchEncoder(layers.Layer): def __init__(self, num_patches, projection_dim): super(PatchEncoder, self).__init__() self.num_patches = num_patches self.projection = layers.Dense(units=projection_dim) self.position_embedding = layers.Embedding( input_dim=num_patches, output_dim=projection_dim ) def call(self, patches): positions = tf.range(start=0, limit=self.num_patches, delta=1) encoded = self.projection(patches) + self.position_embedding(positions) return encoded Build the Perceiver model The Perceiver consists of two modules: a cross-attention module and a standard Transformer with self-attention. Cross-attention module The cross-attention expects a (latent_dim, projection_dim) latent array, and the (data_dim, projection_dim) data array as inputs, to produce a (latent_dim, projection_dim) latent array as an output. To apply cross-attention, the query vectors are generated from the latent array, while the key and value vectors are generated from the encoded image. Note that the data array in this example is the image, where the data_dim is set to the num_patches. def create_cross_attention_module( latent_dim, data_dim, projection_dim, ffn_units, dropout_rate ): inputs = { # Receive the latent array as an input of shape [1, latent_dim, projection_dim]. \"latent_array\": layers.Input(shape=(latent_dim, projection_dim)), # Receive the data_array (encoded image) as an input of shape [batch_size, data_dim, projection_dim]. \"data_array\": layers.Input(shape=(data_dim, projection_dim)), } # Apply layer norm to the inputs latent_array = layers.LayerNormalization(epsilon=1e-6)(inputs[\"latent_array\"]) data_array = layers.LayerNormalization(epsilon=1e-6)(inputs[\"data_array\"]) # Create query tensor: [1, latent_dim, projection_dim]. query = layers.Dense(units=projection_dim)(latent_array) # Create key tensor: [batch_size, data_dim, projection_dim]. key = layers.Dense(units=projection_dim)(data_array) # Create value tensor: [batch_size, data_dim, projection_dim]. value = layers.Dense(units=projection_dim)(data_array) # Generate cross-attention outputs: [batch_size, latent_dim, projection_dim]. attention_output = layers.Attention(use_scale=True, dropout=0.1)( [query, key, value], return_attention_scores=False ) # Skip connection 1. attention_output = layers.Add()([attention_output, latent_array]) # Apply layer norm. attention_output = layers.LayerNormalization(epsilon=1e-6)(attention_output) # Apply Feedforward network. ffn = create_ffn(hidden_units=ffn_units, dropout_rate=dropout_rate) outputs = ffn(attention_output) # Skip connection 2. outputs = layers.Add()([outputs, attention_output]) # Create the Keras model. model = keras.Model(inputs=inputs, outputs=outputs) return model Transformer module The Transformer expects the output latent vector from the cross-attention module as an input, applies multi-head self-attention to its latent_dim elements, followed by a feedforward network, to produce another (latent_dim, projection_dim) latent array. def create_transformer_module( latent_dim, projection_dim, num_heads, num_transformer_blocks, ffn_units, dropout_rate, ): # input_shape: [1, latent_dim, projection_dim] inputs = layers.Input(shape=(latent_dim, projection_dim)) x0 = inputs # Create multiple layers of the Transformer block. for _ in range(num_transformer_blocks): # Apply layer normalization 1.
x1 = layers.LayerNormalization(epsilon=1e-6)(x0) # Create a multi-head self-attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=projection_dim, dropout=0.1 )(x1, x1) # Skip connection 1. x2 = layers.Add()([attention_output, x0]) # Apply layer normalization 2. x3 = layers.LayerNormalization(epsilon=1e-6)(x2) # Apply Feedforward network. ffn = create_ffn(hidden_units=ffn_units, dropout_rate=dropout_rate) x3 = ffn(x3) # Skip connection 2. x0 = layers.Add()([x3, x2]) # Create the Keras model. model = keras.Model(inputs=inputs, outputs=x0) return model Perceiver model The Perceiver model repeats the cross-attention and Transformer modules num_iterations times (with shared weights and skip connections) to allow the latent array to iteratively extract information from the input image as it is needed. class Perceiver(keras.Model): def __init__( self, patch_size, data_dim, latent_dim, projection_dim, num_heads, num_transformer_blocks, ffn_units, dropout_rate, num_iterations, classifier_units, ): super(Perceiver, self).__init__() self.latent_dim = latent_dim self.data_dim = data_dim self.patch_size = patch_size self.projection_dim = projection_dim self.num_heads = num_heads self.num_transformer_blocks = num_transformer_blocks self.ffn_units = ffn_units self.dropout_rate = dropout_rate self.num_iterations = num_iterations self.classifier_units = classifier_units def build(self, input_shape): # Create latent array. self.latent_array = self.add_weight( shape=(self.latent_dim, self.projection_dim), initializer=\"random_normal\", trainable=True, ) # Create patching module. self.patcher = Patches(self.patch_size) # Create patch encoder. self.patch_encoder = PatchEncoder(self.data_dim, self.projection_dim) # Create cross-attention module. self.cross_attention = create_cross_attention_module( self.latent_dim, self.data_dim, self.projection_dim, self.ffn_units, self.dropout_rate, ) # Create Transformer module. self.transformer = create_transformer_module( self.latent_dim, self.projection_dim, self.num_heads, self.num_transformer_blocks, self.ffn_units, self.dropout_rate, ) # Create global average pooling layer. self.global_average_pooling = layers.GlobalAveragePooling1D() # Create a classification head. self.classification_head = create_ffn( hidden_units=self.classifier_units, dropout_rate=self.dropout_rate ) super(Perceiver, self).build(input_shape) def call(self, inputs): # Augment data. augmented = data_augmentation(inputs) # Create patches. patches = self.patcher(augmented) # Encode patches. encoded_patches = self.patch_encoder(patches) # Prepare cross-attention inputs. cross_attention_inputs = { \"latent_array\": tf.expand_dims(self.latent_array, 0), \"data_array\": encoded_patches, } # Apply the cross-attention and the Transformer modules iteratively. for _ in range(self.num_iterations): # Apply cross-attention from the latent array to the data array. latent_array = self.cross_attention(cross_attention_inputs) # Apply self-attention Transformer to the latent array. latent_array = self.transformer(latent_array) # Set the latent array of the next iteration. cross_attention_inputs[\"latent_array\"] = latent_array # Apply global average pooling to generate a [batch_size, projection_dim] representation tensor. representation = self.global_average_pooling(latent_array) # Generate logits. logits = self.classification_head(representation) return logits Compile, train, and evaluate the model def run_experiment(model): # Create LAMB optimizer with weight decay.
optimizer = tfa.optimizers.LAMB( learning_rate=learning_rate, weight_decay_rate=weight_decay, ) # Compile the model. model.compile( optimizer=optimizer, loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[ keras.metrics.SparseCategoricalAccuracy(name=\"acc\"), keras.metrics.SparseTopKCategoricalAccuracy(5, name=\"top5-acc\"), ], ) # Create a learning rate scheduler callback. reduce_lr = keras.callbacks.ReduceLROnPlateau( monitor=\"val_loss\", factor=0.2, patience=3 ) # Create an early stopping callback. early_stopping = tf.keras.callbacks.EarlyStopping( monitor=\"val_loss\", patience=15, restore_best_weights=True ) # Fit the model. history = model.fit( x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs, validation_split=0.1, callbacks=[early_stopping, reduce_lr], ) _, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") print(f\"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%\") # Return history to plot learning curves. return history Note that training the perceiver model with the current settings on a V100 GPUs takes around 200 seconds. perceiver_classifier = Perceiver( patch_size, num_patches, latent_dim, projection_dim, num_heads, num_transformer_blocks, ffn_units, dropout_rate, num_iterations, classifier_units, ) history = run_experiment(perceiver_classifier) Epoch 1/100 704/704 [==============================] - 305s 405ms/step - loss: 4.4550 - acc: 0.0389 - top5-acc: 0.1407 - val_loss: 4.0544 - val_acc: 0.0802 - val_top5-acc: 0.2516 Epoch 2/100 704/704 [==============================] - 284s 403ms/step - loss: 4.0639 - acc: 0.0889 - top5-acc: 0.2576 - val_loss: 3.7379 - val_acc: 0.1272 - val_top5-acc: 0.3556 Epoch 3/100 704/704 [==============================] - 283s 402ms/step - loss: 3.8400 - acc: 0.1226 - top5-acc: 0.3326 - val_loss: 3.4527 - val_acc: 0.1750 - val_top5-acc: 0.4350 Epoch 4/100 704/704 [==============================] - 283s 402ms/step - loss: 3.5917 - acc: 0.1657 - top5-acc: 0.4063 - val_loss: 3.2160 - val_acc: 0.2176 - val_top5-acc: 0.5048 Epoch 5/100 704/704 [==============================] - 283s 403ms/step - loss: 3.3820 - acc: 0.2082 - top5-acc: 0.4638 - val_loss: 2.9947 - val_acc: 0.2584 - val_top5-acc: 0.5732 Epoch 6/100 704/704 [==============================] - 284s 403ms/step - loss: 3.2487 - acc: 0.2338 - top5-acc: 0.4991 - val_loss: 2.9179 - val_acc: 0.2770 - val_top5-acc: 0.5744 Epoch 7/100 704/704 [==============================] - 283s 402ms/step - loss: 3.1228 - acc: 0.2605 - top5-acc: 0.5295 - val_loss: 2.7958 - val_acc: 0.2994 - val_top5-acc: 0.6100 Epoch 8/100 704/704 [==============================] - 283s 402ms/step - loss: 2.9989 - acc: 0.2862 - top5-acc: 0.5588 - val_loss: 2.7117 - val_acc: 0.3208 - val_top5-acc: 0.6340 Epoch 9/100 704/704 [==============================] - 283s 402ms/step - loss: 2.9294 - acc: 0.3018 - top5-acc: 0.5763 - val_loss: 2.5933 - val_acc: 0.3390 - val_top5-acc: 0.6636 Epoch 10/100 704/704 [==============================] - 283s 402ms/step - loss: 2.8687 - acc: 0.3139 - top5-acc: 0.5934 - val_loss: 2.5030 - val_acc: 0.3614 - val_top5-acc: 0.6764 Epoch 11/100 704/704 [==============================] - 283s 402ms/step - loss: 2.7771 - acc: 0.3341 - top5-acc: 0.6098 - val_loss: 2.4657 - val_acc: 0.3704 - val_top5-acc: 0.6928 Epoch 12/100 704/704 [==============================] - 283s 402ms/step - loss: 2.7306 - acc: 0.3436 - top5-acc: 0.6229 - val_loss: 2.4441 - val_acc: 0.3738 - val_top5-acc: 0.6878 Epoch 13/100 
704/704 [==============================] - 283s 402ms/step - loss: 2.6863 - acc: 0.3546 - top5-acc: 0.6346 - val_loss: 2.3508 - val_acc: 0.3892 - val_top5-acc: 0.7050 Epoch 14/100 704/704 [==============================] - 283s 402ms/step - loss: 2.6107 - acc: 0.3708 - top5-acc: 0.6537 - val_loss: 2.3219 - val_acc: 0.3996 - val_top5-acc: 0.7108 Epoch 15/100 704/704 [==============================] - 283s 402ms/step - loss: 2.5559 - acc: 0.3836 - top5-acc: 0.6664 - val_loss: 2.2748 - val_acc: 0.4140 - val_top5-acc: 0.7242 Epoch 16/100 704/704 [==============================] - 283s 402ms/step - loss: 2.5016 - acc: 0.3942 - top5-acc: 0.6761 - val_loss: 2.2364 - val_acc: 0.4238 - val_top5-acc: 0.7264 Epoch 17/100 704/704 [==============================] - 283s 402ms/step - loss: 2.4554 - acc: 0.4056 - top5-acc: 0.6897 - val_loss: 2.1684 - val_acc: 0.4418 - val_top5-acc: 0.7452 Epoch 18/100 704/704 [==============================] - 283s 402ms/step - loss: 2.3926 - acc: 0.4209 - top5-acc: 0.7024 - val_loss: 2.1614 - val_acc: 0.4372 - val_top5-acc: 0.7428 Epoch 19/100 704/704 [==============================] - 283s 402ms/step - loss: 2.3617 - acc: 0.4264 - top5-acc: 0.7119 - val_loss: 2.1595 - val_acc: 0.4382 - val_top5-acc: 0.7408 Epoch 20/100 704/704 [==============================] - 283s 402ms/step - loss: 2.3355 - acc: 0.4324 - top5-acc: 0.7133 - val_loss: 2.1187 - val_acc: 0.4462 - val_top5-acc: 0.7490 Epoch 21/100 704/704 [==============================] - 283s 402ms/step - loss: 2.2571 - acc: 0.4512 - top5-acc: 0.7299 - val_loss: 2.1095 - val_acc: 0.4424 - val_top5-acc: 0.7534 Epoch 22/100 704/704 [==============================] - 283s 402ms/step - loss: 2.2374 - acc: 0.4559 - top5-acc: 0.7357 - val_loss: 2.0997 - val_acc: 0.4398 - val_top5-acc: 0.7554 Epoch 23/100 704/704 [==============================] - 283s 402ms/step - loss: 2.2108 - acc: 0.4628 - top5-acc: 0.7452 - val_loss: 2.0662 - val_acc: 0.4574 - val_top5-acc: 0.7598 Epoch 24/100 704/704 [==============================] - 283s 402ms/step - loss: 2.1628 - acc: 0.4728 - top5-acc: 0.7555 - val_loss: 2.0564 - val_acc: 0.4564 - val_top5-acc: 0.7584 Epoch 25/100 704/704 [==============================] - 283s 402ms/step - loss: 2.1169 - acc: 0.4834 - top5-acc: 0.7616 - val_loss: 2.0793 - val_acc: 0.4600 - val_top5-acc: 0.7538 Epoch 26/100 704/704 [==============================] - 283s 402ms/step - loss: 2.0938 - acc: 0.4867 - top5-acc: 0.7743 - val_loss: 2.0835 - val_acc: 0.4566 - val_top5-acc: 0.7506 Epoch 27/100 704/704 [==============================] - 283s 402ms/step - loss: 2.0479 - acc: 0.4993 - top5-acc: 0.7816 - val_loss: 2.0790 - val_acc: 0.4610 - val_top5-acc: 0.7556 Epoch 28/100 704/704 [==============================] - 283s 402ms/step - loss: 1.8480 - acc: 0.5493 - top5-acc: 0.8159 - val_loss: 1.8846 - val_acc: 0.5046 - val_top5-acc: 0.7890 Epoch 29/100 704/704 [==============================] - 283s 402ms/step - loss: 1.7532 - acc: 0.5731 - top5-acc: 0.8362 - val_loss: 1.8844 - val_acc: 0.5106 - val_top5-acc: 0.7976 Epoch 30/100 704/704 [==============================] - 283s 402ms/step - loss: 1.7113 - acc: 0.5827 - top5-acc: 0.8434 - val_loss: 1.8792 - val_acc: 0.5096 - val_top5-acc: 0.7928 Epoch 31/100 704/704 [==============================] - 283s 403ms/step - loss: 1.6831 - acc: 0.5891 - top5-acc: 0.8511 - val_loss: 1.8938 - val_acc: 0.5044 - val_top5-acc: 0.7914 Epoch 32/100 704/704 [==============================] - 284s 403ms/step - loss: 1.6480 - acc: 0.5977 - top5-acc: 0.8562 - val_loss: 1.9055 - 
val_acc: 0.5034 - val_top5-acc: 0.7922 Epoch 33/100 704/704 [==============================] - 284s 403ms/step - loss: 1.6320 - acc: 0.6015 - top5-acc: 0.8627 - val_loss: 1.9064 - val_acc: 0.5056 - val_top5-acc: 0.7896 Epoch 34/100 704/704 [==============================] - 283s 403ms/step - loss: 1.5821 - acc: 0.6145 - top5-acc: 0.8673 - val_loss: 1.8912 - val_acc: 0.5138 - val_top5-acc: 0.7936 Epoch 35/100 704/704 [==============================] - 283s 403ms/step - loss: 1.5791 - acc: 0.6163 - top5-acc: 0.8719 - val_loss: 1.8963 - val_acc: 0.5090 - val_top5-acc: 0.7982 Epoch 36/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5680 - acc: 0.6178 - top5-acc: 0.8741 - val_loss: 1.8998 - val_acc: 0.5142 - val_top5-acc: 0.7936 Epoch 37/100 704/704 [==============================] - 284s 403ms/step - loss: 1.5506 - acc: 0.6218 - top5-acc: 0.8743 - val_loss: 1.8941 - val_acc: 0.5142 - val_top5-acc: 0.7952 Epoch 38/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5611 - acc: 0.6216 - top5-acc: 0.8722 - val_loss: 1.8946 - val_acc: 0.5183 - val_top5-acc: 0.7956 Epoch 39/100 704/704 [==============================] - 284s 403ms/step - loss: 1.5541 - acc: 0.6215 - top5-acc: 0.8764 - val_loss: 1.8923 - val_acc: 0.5180 - val_top5-acc: 0.7962 Epoch 40/100 704/704 [==============================] - 283s 403ms/step - loss: 1.5505 - acc: 0.6228 - top5-acc: 0.8773 - val_loss: 1.8934 - val_acc: 0.5232 - val_top5-acc: 0.7962 Epoch 41/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5604 - acc: 0.6224 - top5-acc: 0.8747 - val_loss: 1.8938 - val_acc: 0.5230 - val_top5-acc: 0.7958 Epoch 42/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5545 - acc: 0.6194 - top5-acc: 0.8784 - val_loss: 1.8938 - val_acc: 0.5240 - val_top5-acc: 0.7966 Epoch 43/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5630 - acc: 0.6210 - top5-acc: 0.8758 - val_loss: 1.8939 - val_acc: 0.5240 - val_top5-acc: 0.7958 Epoch 44/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5569 - acc: 0.6198 - top5-acc: 0.8756 - val_loss: 1.8938 - val_acc: 0.5240 - val_top5-acc: 0.7060 Epoch 45/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5569 - acc: 0.6197 - top5-acc: 0.8770 - val_loss: 1.8940 - val_acc: 0.5140 - val_top5-acc: 0.7962 313/313 [==============================] - 22s 69ms/step - loss: 1.8630 - acc: 0.5264 - top5-acc: 0.8087 Test accuracy: 52.64% Test top 5 accuracy: 80.87% After 45 epochs, the Perceiver model achieves around 53% accuracy and 81% top-5 accuracy on the test data. As mentioned in the ablations of the Perceiver paper, you can obtain better results by increasing the latent array size, increasing the (projection) dimensions of the latent array and data array elements, increasing the number of blocks in the Transformer module, and increasing the number of iterations of applying the cross-attention and the latent Transformer modules. You may also try to increase the size of the input images and use different patch sizes. The Perceiver benefits from increasing the model size. However, larger models need bigger accelerators to fit in and train efficiently. This is why the Perceiver paper used 32 TPU cores to run its experiments. Image classification using Swin Transformers, a general-purpose backbone for computer vision. This example implements Swin Transformer: Hierarchical Vision Transformer using Shifted Windows by Liu et al.
for image classification, and demonstrates it on the CIFAR-100 dataset. Swin Transformer (Shifted Window Transformer) can serve as a general-purpose backbone for computer vision. Swin Transformer is a hierarchical Transformer whose representations are computed with shifted windows. The shifted window scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connections. This architecture has the flexibility to model information at various scales and has a linear computational complexity with respect to image size. This example requires TensorFlow 2.5 or higher, as well as TensorFlow Addons, which can be installed using the following commands: !pip install -U tensorflow-addons Collecting tensorflow-addons Downloading tensorflow_addons-0.14.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)  |████████████████████████████████| 1.1 MB 7.9 MB/s [?25hCollecting typeguard>=2.7 Downloading typeguard-2.12.1-py3-none-any.whl (17 kB) Installing collected packages: typeguard, tensorflow-addons Successfully installed tensorflow-addons-0.14.0 typeguard-2.12.1 Setup import matplotlib.pyplot as plt import numpy as np import tensorflow as tf import tensorflow_addons as tfa from tensorflow import keras from tensorflow.keras import layers Prepare the data We load the CIFAR-100 dataset through tf.keras.datasets, normalize the images, and convert the integer labels to one-hot encoded vectors. num_classes = 100 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) print(f\"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}\") print(f\"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}\") plt.figure(figsize=(10, 10)) for i in range(25): plt.subplot(5, 5, i + 1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(x_train[i]) plt.show() Downloading data from https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz 169009152/169001437 [==============================] - 3s 0us/step 169017344/169001437 [==============================] - 3s 0us/step x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 100) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 100) png Configure the hyperparameters A key parameter to pick is the patch_size, the size of the input patches. In order to use each pixel as an individual input, you can set patch_size to (1, 1). Below, we take inspiration from the original paper settings for training on ImageNet-1K, keeping most of the original settings for this example. patch_size = (2, 2) # 2-by-2 sized patches dropout_rate = 0.03 # Dropout rate num_heads = 8 # Attention heads embed_dim = 64 # Embedding dimension num_mlp = 256 # MLP layer size qkv_bias = True # Convert embedded patches to query, key, and values with a learnable additive value window_size = 2 # Size of attention window shift_size = 1 # Size of shifting window image_dimension = 32 # Initial image size num_patch_x = input_shape[0] // patch_size[0] num_patch_y = input_shape[1] // patch_size[1] learning_rate = 1e-3 batch_size = 128 num_epochs = 40 validation_split = 0.1 weight_decay = 0.0001 label_smoothing = 0.1 Helper functions We create two helper functions to help us get a sequence of patches from the image, merge patches, and apply dropout. 
def window_partition(x, window_size): _, height, width, channels = x.shape patch_num_y = height // window_size patch_num_x = width // window_size x = tf.reshape( x, shape=(-1, patch_num_y, window_size, patch_num_x, window_size, channels) ) x = tf.transpose(x, (0, 1, 3, 2, 4, 5)) windows = tf.reshape(x, shape=(-1, window_size, window_size, channels)) return windows def window_reverse(windows, window_size, height, width, channels): patch_num_y = height // window_size patch_num_x = width // window_size x = tf.reshape( windows, shape=(-1, patch_num_y, patch_num_x, window_size, window_size, channels), ) x = tf.transpose(x, perm=(0, 1, 3, 2, 4, 5)) x = tf.reshape(x, shape=(-1, height, width, channels)) return x class DropPath(layers.Layer): def __init__(self, drop_prob=None, **kwargs): super(DropPath, self).__init__(**kwargs) self.drop_prob = drop_prob def call(self, x): input_shape = tf.shape(x) batch_size = input_shape[0] rank = x.shape.rank shape = (batch_size,) + (1,) * (rank - 1) random_tensor = (1 - self.drop_prob) + tf.random.uniform(shape, dtype=x.dtype) path_mask = tf.floor(random_tensor) output = tf.math.divide(x, 1 - self.drop_prob) * path_mask return output Window based multi-head self-attention Usually Transformers perform global self-attention, where the relationships between a token and all other tokens are computed. The global computation leads to quadratic complexity with respect to the number of tokens. Here, as the original paper suggests, we compute self-attention within local windows, in a non-overlapping manner. Global self-attention leads to quadratic computational complexity in the number of patches, whereas window-based self-attention leads to linear complexity and is easily scalable. class WindowAttention(layers.Layer): def __init__( self, dim, window_size, num_heads, qkv_bias=True, dropout_rate=0.0, **kwargs ): super(WindowAttention, self).__init__(**kwargs) self.dim = dim self.window_size = window_size self.num_heads = num_heads self.scale = (dim // num_heads) ** -0.5 self.qkv = layers.Dense(dim * 3, use_bias=qkv_bias) self.dropout = layers.Dropout(dropout_rate) self.proj = layers.Dense(dim) def build(self, input_shape): num_window_elements = (2 * self.window_size[0] - 1) * ( 2 * self.window_size[1] - 1 ) self.relative_position_bias_table = self.add_weight( shape=(num_window_elements, self.num_heads), initializer=tf.initializers.Zeros(), trainable=True, ) coords_h = np.arange(self.window_size[0]) coords_w = np.arange(self.window_size[1]) coords_matrix = np.meshgrid(coords_h, coords_w, indexing=\"ij\") coords = np.stack(coords_matrix) coords_flatten = coords.reshape(2, -1) relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] relative_coords = relative_coords.transpose([1, 2, 0]) relative_coords[:, :, 0] += self.window_size[0] - 1 relative_coords[:, :, 1] += self.window_size[1] - 1 relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 relative_position_index = relative_coords.sum(-1) self.relative_position_index = tf.Variable( initial_value=tf.convert_to_tensor(relative_position_index), trainable=False ) def call(self, x, mask=None): _, size, channels = x.shape head_dim = channels // self.num_heads x_qkv = self.qkv(x) x_qkv = tf.reshape(x_qkv, shape=(-1, size, 3, self.num_heads, head_dim)) x_qkv = tf.transpose(x_qkv, perm=(2, 0, 3, 1, 4)) q, k, v = x_qkv[0], x_qkv[1], x_qkv[2] q = q * self.scale k = tf.transpose(k, perm=(0, 1, 3, 2)) attn = q @ k num_window_elements = self.window_size[0] * self.window_size[1] relative_position_index_flat = 
tf.reshape( self.relative_position_index, shape=(-1,) ) relative_position_bias = tf.gather( self.relative_position_bias_table, relative_position_index_flat ) relative_position_bias = tf.reshape( relative_position_bias, shape=(num_window_elements, num_window_elements, -1) ) relative_position_bias = tf.transpose(relative_position_bias, perm=(2, 0, 1)) attn = attn + tf.expand_dims(relative_position_bias, axis=0) if mask is not None: nW = mask.get_shape()[0] mask_float = tf.cast( tf.expand_dims(tf.expand_dims(mask, axis=1), axis=0), tf.float32 ) attn = ( tf.reshape(attn, shape=(-1, nW, self.num_heads, size, size)) + mask_float ) attn = tf.reshape(attn, shape=(-1, self.num_heads, size, size)) attn = keras.activations.softmax(attn, axis=-1) else: attn = keras.activations.softmax(attn, axis=-1) attn = self.dropout(attn) x_qkv = attn @ v x_qkv = tf.transpose(x_qkv, perm=(0, 2, 1, 3)) x_qkv = tf.reshape(x_qkv, shape=(-1, size, channels)) x_qkv = self.proj(x_qkv) x_qkv = self.dropout(x_qkv) return x_qkv The complete Swin Transformer model Finally, we put together the complete Swin Transformer by replacing the standard multi-head attention (MHA) with shifted windows attention. As suggested in the original paper, we create a model comprising a shifted window-based MHA layer, followed by a 2-layer MLP with GELU nonlinearity in between, applying LayerNormalization before each MSA layer and each MLP, and a residual connection after each of these layers. Notice that we only create a simple MLP with 2 Dense and 2 Dropout layers. Often you will see models using ResNet-50 as the backbone, which is quite standard in the literature; however, in this paper the authors use a 2-layer MLP with GELU nonlinearity in between. class SwinTransformer(layers.Layer): def __init__( self, dim, num_patch, num_heads, window_size=7, shift_size=0, num_mlp=1024, qkv_bias=True, dropout_rate=0.0, **kwargs, ): super(SwinTransformer, self).__init__(**kwargs) self.dim = dim # number of input dimensions self.num_patch = num_patch # number of embedded patches self.num_heads = num_heads # number of attention heads self.window_size = window_size # size of window self.shift_size = shift_size # size of window shift self.num_mlp = num_mlp # number of MLP nodes self.norm1 = layers.LayerNormalization(epsilon=1e-5) self.attn = WindowAttention( dim, window_size=(self.window_size, self.window_size), num_heads=num_heads, qkv_bias=qkv_bias, dropout_rate=dropout_rate, ) self.drop_path = DropPath(dropout_rate) self.norm2 = layers.LayerNormalization(epsilon=1e-5) self.mlp = keras.Sequential( [ layers.Dense(num_mlp), layers.Activation(keras.activations.gelu), layers.Dropout(dropout_rate), layers.Dense(dim), layers.Dropout(dropout_rate), ] ) if min(self.num_patch) < self.window_size: self.shift_size = 0 self.window_size = min(self.num_patch) def build(self, input_shape): if self.shift_size == 0: self.attn_mask = None else: height, width = self.num_patch h_slices = ( slice(0, -self.window_size), slice(-self.window_size, -self.shift_size), slice(-self.shift_size, None), ) w_slices = ( slice(0, -self.window_size), slice(-self.window_size, -self.shift_size), slice(-self.shift_size, None), ) mask_array = np.zeros((1, height, width, 1)) count = 0 for h in h_slices: for w in w_slices: mask_array[:, h, w, :] = count count += 1 mask_array = tf.convert_to_tensor(mask_array) # mask array to windows mask_windows = window_partition(mask_array, self.window_size) mask_windows = tf.reshape( mask_windows, shape=[-1, self.window_size * self.window_size] ) attn_mask =
tf.expand_dims(mask_windows, axis=1) - tf.expand_dims( mask_windows, axis=2 ) attn_mask = tf.where(attn_mask != 0, -100.0, attn_mask) attn_mask = tf.where(attn_mask == 0, 0.0, attn_mask) self.attn_mask = tf.Variable(initial_value=attn_mask, trainable=False) def call(self, x): height, width = self.num_patch _, num_patches_before, channels = x.shape x_skip = x x = self.norm1(x) x = tf.reshape(x, shape=(-1, height, width, channels)) if self.shift_size > 0: shifted_x = tf.roll( x, shift=[-self.shift_size, -self.shift_size], axis=[1, 2] ) else: shifted_x = x x_windows = window_partition(shifted_x, self.window_size) x_windows = tf.reshape( x_windows, shape=(-1, self.window_size * self.window_size, channels) ) attn_windows = self.attn(x_windows, mask=self.attn_mask) attn_windows = tf.reshape( attn_windows, shape=(-1, self.window_size, self.window_size, channels) ) shifted_x = window_reverse( attn_windows, self.window_size, height, width, channels ) if self.shift_size > 0: x = tf.roll( shifted_x, shift=[self.shift_size, self.shift_size], axis=[1, 2] ) else: x = shifted_x x = tf.reshape(x, shape=(-1, height * width, channels)) x = self.drop_path(x) x = x_skip + x x_skip = x x = self.norm2(x) x = self.mlp(x) x = self.drop_path(x) x = x_skip + x return x Model training and evaluation Extract and embed patches We first create 3 layers to help us extract, embed and merge patches from the images on top of which we will later use the Swin Transformer class we built. class PatchExtract(layers.Layer): def __init__(self, patch_size, **kwargs): super(PatchExtract, self).__init__(**kwargs) self.patch_size_x = patch_size[0] self.patch_size_y = patch_size[0] def call(self, images): batch_size = tf.shape(images)[0] patches = tf.image.extract_patches( images=images, sizes=(1, self.patch_size_x, self.patch_size_y, 1), strides=(1, self.patch_size_x, self.patch_size_y, 1), rates=(1, 1, 1, 1), padding=\"VALID\", ) patch_dim = patches.shape[-1] patch_num = patches.shape[1] return tf.reshape(patches, (batch_size, patch_num * patch_num, patch_dim)) class PatchEmbedding(layers.Layer): def __init__(self, num_patch, embed_dim, **kwargs): super(PatchEmbedding, self).__init__(**kwargs) self.num_patch = num_patch self.proj = layers.Dense(embed_dim) self.pos_embed = layers.Embedding(input_dim=num_patch, output_dim=embed_dim) def call(self, patch): pos = tf.range(start=0, limit=self.num_patch, delta=1) return self.proj(patch) + self.pos_embed(pos) class PatchMerging(tf.keras.layers.Layer): def __init__(self, num_patch, embed_dim): super(PatchMerging, self).__init__() self.num_patch = num_patch self.embed_dim = embed_dim self.linear_trans = layers.Dense(2 * embed_dim, use_bias=False) def call(self, x): height, width = self.num_patch _, _, C = x.get_shape().as_list() x = tf.reshape(x, shape=(-1, height, width, C)) x0 = x[:, 0::2, 0::2, :] x1 = x[:, 1::2, 0::2, :] x2 = x[:, 0::2, 1::2, :] x3 = x[:, 1::2, 1::2, :] x = tf.concat((x0, x1, x2, x3), axis=-1) x = tf.reshape(x, shape=(-1, (height // 2) * (width // 2), 4 * C)) return self.linear_trans(x) Build the model We put together the Swin Transformer model. 
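Before wiring the full model together, a quick shape walkthrough of the three patch layers on a dummy batch can help confirm the expected tensor sizes. This is a minimal sketch that is not part of the original example; it reuses the hyperparameters configured above (patch_size=(2, 2), embed_dim=64, num_patch_x = num_patch_y = 16 for 32 x 32 inputs).
# Illustrative shape check of the patch pipeline on a single random image.
dummy_images = tf.random.normal((1, image_dimension, image_dimension, 3))
dummy_patches = PatchExtract(patch_size)(dummy_images)
print(dummy_patches.shape)  # (1, 256, 12): 16 x 16 patches, each holding 2 * 2 * 3 pixel values
dummy_embedded = PatchEmbedding(num_patch_x * num_patch_y, embed_dim)(dummy_patches)
print(dummy_embedded.shape)  # (1, 256, 64): patches projected to embed_dim plus position embeddings
dummy_merged = PatchMerging((num_patch_x, num_patch_y), embed_dim)(dummy_embedded)
print(dummy_merged.shape)  # (1, 64, 128): 2 x 2 neighbouring patches merged, channels grow to 2 * embed_dim
The merged output is what the global average pooling layer aggregates in the model below.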
input = layers.Input(input_shape) x = layers.RandomCrop(image_dimension, image_dimension)(input) x = layers.RandomFlip(\"horizontal\")(x) x = PatchExtract(patch_size)(x) x = PatchEmbedding(num_patch_x * num_patch_y, embed_dim)(x) x = SwinTransformer( dim=embed_dim, num_patch=(num_patch_x, num_patch_y), num_heads=num_heads, window_size=window_size, shift_size=0, num_mlp=num_mlp, qkv_bias=qkv_bias, dropout_rate=dropout_rate, )(x) x = SwinTransformer( dim=embed_dim, num_patch=(num_patch_x, num_patch_y), num_heads=num_heads, window_size=window_size, shift_size=shift_size, num_mlp=num_mlp, qkv_bias=qkv_bias, dropout_rate=dropout_rate, )(x) x = PatchMerging((num_patch_x, num_patch_y), embed_dim=embed_dim)(x) x = layers.GlobalAveragePooling1D()(x) output = layers.Dense(num_classes, activation=\"softmax\")(x) 2021-09-13 08:03:19.266695: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-13 08:03:19.275199: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-13 08:03:19.275997: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-13 08:03:19.277483: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-09-13 08:03:19.278433: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-13 08:03:19.279102: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-13 08:03:19.279706: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-13 08:03:21.258771: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-13 08:03:21.259481: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-13 08:03:21.260191: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-13 08:03:21.261723: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14684 MB memory: -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:04.0, compute capability: 7.0 Train on CIFAR-100 We train the model on CIFAR-100. Here, we only train the model for 40 epochs to keep the training time short in this example. 
In practice, you should train for 150 epochs to reach convergence. model = keras.Model(input, output) model.compile( loss=keras.losses.CategoricalCrossentropy(label_smoothing=label_smoothing), optimizer=tfa.optimizers.AdamW( learning_rate=learning_rate, weight_decay=weight_decay ), metrics=[ keras.metrics.CategoricalAccuracy(name=\"accuracy\"), keras.metrics.TopKCategoricalAccuracy(5, name=\"top-5-accuracy\"), ], ) history = model.fit( x_train, y_train, batch_size=batch_size, epochs=num_epochs, validation_split=validation_split, ) 2021-09-13 08:03:23.935873: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2) Epoch 1/40 352/352 [==============================] - 19s 34ms/step - loss: 4.1679 - accuracy: 0.0817 - top-5-accuracy: 0.2551 - val_loss: 3.8964 - val_accuracy: 0.1242 - val_top-5-accuracy: 0.3568 Epoch 2/40 352/352 [==============================] - 11s 32ms/step - loss: 3.7278 - accuracy: 0.1617 - top-5-accuracy: 0.4246 - val_loss: 3.6518 - val_accuracy: 0.1756 - val_top-5-accuracy: 0.4580 Epoch 3/40 352/352 [==============================] - 11s 32ms/step - loss: 3.5245 - accuracy: 0.2077 - top-5-accuracy: 0.4946 - val_loss: 3.4609 - val_accuracy: 0.2248 - val_top-5-accuracy: 0.5222 Epoch 4/40 352/352 [==============================] - 11s 32ms/step - loss: 3.3856 - accuracy: 0.2408 - top-5-accuracy: 0.5430 - val_loss: 3.3515 - val_accuracy: 0.2514 - val_top-5-accuracy: 0.5540 Epoch 5/40 352/352 [==============================] - 11s 32ms/step - loss: 3.2772 - accuracy: 0.2697 - top-5-accuracy: 0.5760 - val_loss: 3.3012 - val_accuracy: 0.2712 - val_top-5-accuracy: 0.5758 Epoch 6/40 352/352 [==============================] - 11s 32ms/step - loss: 3.1845 - accuracy: 0.2915 - top-5-accuracy: 0.6071 - val_loss: 3.2104 - val_accuracy: 0.2866 - val_top-5-accuracy: 0.5994 Epoch 7/40 352/352 [==============================] - 11s 32ms/step - loss: 3.1104 - accuracy: 0.3126 - top-5-accuracy: 0.6288 - val_loss: 3.1408 - val_accuracy: 0.3038 - val_top-5-accuracy: 0.6176 Epoch 8/40 352/352 [==============================] - 11s 32ms/step - loss: 3.0616 - accuracy: 0.3268 - top-5-accuracy: 0.6423 - val_loss: 3.0853 - val_accuracy: 0.3138 - val_top-5-accuracy: 0.6408 Epoch 9/40 352/352 [==============================] - 11s 31ms/step - loss: 3.0237 - accuracy: 0.3349 - top-5-accuracy: 0.6541 - val_loss: 3.0882 - val_accuracy: 0.3130 - val_top-5-accuracy: 0.6370 Epoch 10/40 352/352 [==============================] - 11s 31ms/step - loss: 2.9877 - accuracy: 0.3438 - top-5-accuracy: 0.6649 - val_loss: 3.0532 - val_accuracy: 0.3298 - val_top-5-accuracy: 0.6482 Epoch 11/40 352/352 [==============================] - 11s 31ms/step - loss: 2.9571 - accuracy: 0.3520 - top-5-accuracy: 0.6712 - val_loss: 3.0547 - val_accuracy: 0.3320 - val_top-5-accuracy: 0.6450 Epoch 12/40 352/352 [==============================] - 11s 31ms/step - loss: 2.9238 - accuracy: 0.3640 - top-5-accuracy: 0.6798 - val_loss: 2.9833 - val_accuracy: 0.3462 - val_top-5-accuracy: 0.6602 Epoch 13/40 352/352 [==============================] - 11s 31ms/step - loss: 2.9048 - accuracy: 0.3674 - top-5-accuracy: 0.6869 - val_loss: 2.9779 - val_accuracy: 0.3458 - val_top-5-accuracy: 0.6724 Epoch 14/40 352/352 [==============================] - 11s 31ms/step - loss: 2.8822 - accuracy: 0.3717 - top-5-accuracy: 0.6923 - val_loss: 2.9549 - val_accuracy: 0.3552 - val_top-5-accuracy: 0.6748 Epoch 15/40 352/352 [==============================] - 11s 31ms/step - loss: 2.8578 
- accuracy: 0.3826 - top-5-accuracy: 0.6981 - val_loss: 2.9447 - val_accuracy: 0.3584 - val_top-5-accuracy: 0.6786 Epoch 16/40 352/352 [==============================] - 11s 31ms/step - loss: 2.8404 - accuracy: 0.3852 - top-5-accuracy: 0.7024 - val_loss: 2.9087 - val_accuracy: 0.3650 - val_top-5-accuracy: 0.6842 Epoch 17/40 352/352 [==============================] - 11s 31ms/step - loss: 2.8234 - accuracy: 0.3910 - top-5-accuracy: 0.7076 - val_loss: 2.8884 - val_accuracy: 0.3748 - val_top-5-accuracy: 0.6868 Epoch 18/40 352/352 [==============================] - 11s 31ms/step - loss: 2.8014 - accuracy: 0.3974 - top-5-accuracy: 0.7124 - val_loss: 2.8979 - val_accuracy: 0.3696 - val_top-5-accuracy: 0.6908 Epoch 19/40 352/352 [==============================] - 11s 31ms/step - loss: 2.7928 - accuracy: 0.3961 - top-5-accuracy: 0.7172 - val_loss: 2.8873 - val_accuracy: 0.3756 - val_top-5-accuracy: 0.6924 Epoch 20/40 352/352 [==============================] - 11s 31ms/step - loss: 2.7800 - accuracy: 0.4026 - top-5-accuracy: 0.7186 - val_loss: 2.8544 - val_accuracy: 0.3834 - val_top-5-accuracy: 0.7004 Epoch 21/40 352/352 [==============================] - 11s 31ms/step - loss: 2.7659 - accuracy: 0.4095 - top-5-accuracy: 0.7236 - val_loss: 2.8626 - val_accuracy: 0.3840 - val_top-5-accuracy: 0.6896 Epoch 22/40 352/352 [==============================] - 11s 31ms/step - loss: 2.7499 - accuracy: 0.4098 - top-5-accuracy: 0.7278 - val_loss: 2.8621 - val_accuracy: 0.3868 - val_top-5-accuracy: 0.6944 Epoch 23/40 352/352 [==============================] - 11s 31ms/step - loss: 2.7389 - accuracy: 0.4136 - top-5-accuracy: 0.7305 - val_loss: 2.8527 - val_accuracy: 0.3834 - val_top-5-accuracy: 0.7002 Epoch 24/40 352/352 [==============================] - 11s 31ms/step - loss: 2.7219 - accuracy: 0.4198 - top-5-accuracy: 0.7360 - val_loss: 2.9078 - val_accuracy: 0.3738 - val_top-5-accuracy: 0.6796 Epoch 25/40 352/352 [==============================] - 11s 32ms/step - loss: 2.7119 - accuracy: 0.4195 - top-5-accuracy: 0.7373 - val_loss: 2.8470 - val_accuracy: 0.3840 - val_top-5-accuracy: 0.6994 Epoch 26/40 352/352 [==============================] - 11s 32ms/step - loss: 2.7079 - accuracy: 0.4214 - top-5-accuracy: 0.7355 - val_loss: 2.8101 - val_accuracy: 0.3934 - val_top-5-accuracy: 0.7130 Epoch 27/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6925 - accuracy: 0.4280 - top-5-accuracy: 0.7398 - val_loss: 2.8660 - val_accuracy: 0.3804 - val_top-5-accuracy: 0.6996 Epoch 28/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6864 - accuracy: 0.4273 - top-5-accuracy: 0.7430 - val_loss: 2.7863 - val_accuracy: 0.4014 - val_top-5-accuracy: 0.7234 Epoch 29/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6763 - accuracy: 0.4324 - top-5-accuracy: 0.7472 - val_loss: 2.7852 - val_accuracy: 0.4030 - val_top-5-accuracy: 0.7158 Epoch 30/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6656 - accuracy: 0.4356 - top-5-accuracy: 0.7489 - val_loss: 2.7991 - val_accuracy: 0.3940 - val_top-5-accuracy: 0.7104 Epoch 31/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6589 - accuracy: 0.4383 - top-5-accuracy: 0.7512 - val_loss: 2.7938 - val_accuracy: 0.3966 - val_top-5-accuracy: 0.7148 Epoch 32/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6509 - accuracy: 0.4367 - top-5-accuracy: 0.7530 - val_loss: 2.8226 - val_accuracy: 0.3944 - val_top-5-accuracy: 0.7092 Epoch 33/40 352/352 [==============================] - 
11s 31ms/step - loss: 2.6384 - accuracy: 0.4432 - top-5-accuracy: 0.7565 - val_loss: 2.8171 - val_accuracy: 0.3964 - val_top-5-accuracy: 0.7060 Epoch 34/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6317 - accuracy: 0.4446 - top-5-accuracy: 0.7561 - val_loss: 2.7923 - val_accuracy: 0.3970 - val_top-5-accuracy: 0.7134 Epoch 35/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6241 - accuracy: 0.4447 - top-5-accuracy: 0.7574 - val_loss: 2.7664 - val_accuracy: 0.4108 - val_top-5-accuracy: 0.7180 Epoch 36/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6199 - accuracy: 0.4467 - top-5-accuracy: 0.7586 - val_loss: 2.7480 - val_accuracy: 0.4078 - val_top-5-accuracy: 0.7242 Epoch 37/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6127 - accuracy: 0.4506 - top-5-accuracy: 0.7608 - val_loss: 2.7651 - val_accuracy: 0.4052 - val_top-5-accuracy: 0.7218 Epoch 38/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6025 - accuracy: 0.4520 - top-5-accuracy: 0.7620 - val_loss: 2.7641 - val_accuracy: 0.4114 - val_top-5-accuracy: 0.7254 Epoch 39/40 352/352 [==============================] - 11s 31ms/step - loss: 2.5934 - accuracy: 0.4542 - top-5-accuracy: 0.7670 - val_loss: 2.7453 - val_accuracy: 0.4120 - val_top-5-accuracy: 0.7200 Epoch 40/40 352/352 [==============================] - 11s 31ms/step - loss: 2.5859 - accuracy: 0.4565 - top-5-accuracy: 0.7688 - val_loss: 2.7504 - val_accuracy: 0.4118 - val_top-5-accuracy: 0.7268 Let's visualize the training progress of the model. plt.plot(history.history[\"loss\"], label=\"train_loss\") plt.plot(history.history[\"val_loss\"], label=\"val_loss\") plt.xlabel(\"Epochs\") plt.ylabel(\"Loss\") plt.title(\"Train and Validation Losses Over Epochs\", fontsize=14) plt.legend() plt.grid() plt.show() png Let's display the final results of the training on CIFAR-100. loss, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f\"Test loss: {round(loss, 2)}\") print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") print(f\"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%\") 313/313 [==============================] - 3s 8ms/step - loss: 2.7039 - accuracy: 0.4288 - top-5-accuracy: 0.7366 Test loss: 2.7 Test accuracy: 42.88% Test top 5 accuracy: 73.66% The Swin Transformer model we just trained has only 152K parameters, and it gets us to ~75% test top-5 accuracy within just 40 epochs without any signs of overfitting, as seen in the graph above. This means we can train this network for longer (perhaps with a bit more regularization) and obtain even better performance. This performance can be further improved by additional techniques such as a cosine decay learning rate schedule and other data augmentation strategies. While experimenting, I tried training the model for 150 epochs with a slightly higher dropout rate and greater embedding dimensions, which pushes the performance to ~72% test accuracy on CIFAR-100, as you can see in the screenshot. Results of training for longer The authors present a top-1 accuracy of 87.3% on ImageNet. The authors also present a number of experiments studying how input sizes, optimizers, etc. affect the final performance of this model. The authors further show how this model can be used for object detection, semantic segmentation, and instance segmentation, and report competitive results for these tasks. You are strongly advised to also check out the original paper.
This example takes inspiration from the official PyTorch and TensorFlow implementations. Implementing the Vision Transformer (ViT) model for image classification. Introduction This example implements the Vision Transformer (ViT) model by Alexey Dosovitskiy et al. for image classification, and demonstrates it on the CIFAR-100 dataset. The ViT model applies the Transformer architecture with self-attention to sequences of image patches, without using convolution layers. This example requires TensorFlow 2.4 or higher, as well as TensorFlow Addons, which can be installed using the following command: pip install -U tensorflow-addons Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_addons as tfa Prepare the data num_classes = 100 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() print(f\"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}\") print(f\"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}\") x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 1) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 1) Configure the hyperparameters learning_rate = 0.001 weight_decay = 0.0001 batch_size = 256 num_epochs = 100 image_size = 72 # We'll resize input images to this size patch_size = 6 # Size of the patches to be extracted from the input images num_patches = (image_size // patch_size) ** 2 projection_dim = 64 num_heads = 4 transformer_units = [ projection_dim * 2, projection_dim, ] # Size of the transformer layers transformer_layers = 8 mlp_head_units = [2048, 1024] # Size of the dense layers of the final classifier Use data augmentation data_augmentation = keras.Sequential( [ layers.Normalization(), layers.Resizing(image_size, image_size), layers.RandomFlip(\"horizontal\"), layers.RandomRotation(factor=0.02), layers.RandomZoom( height_factor=0.2, width_factor=0.2 ), ], name=\"data_augmentation\", ) # Compute the mean and the variance of the training data for normalization.
data_augmentation.layers[0].adapt(x_train) Implement multilayer perceptron (MLP) def mlp(x, hidden_units, dropout_rate): for units in hidden_units: x = layers.Dense(units, activation=tf.nn.gelu)(x) x = layers.Dropout(dropout_rate)(x) return x Implement patch creation as a layer class Patches(layers.Layer): def __init__(self, patch_size): super(Patches, self).__init__() self.patch_size = patch_size def call(self, images): batch_size = tf.shape(images)[0] patches = tf.image.extract_patches( images=images, sizes=[1, self.patch_size, self.patch_size, 1], strides=[1, self.patch_size, self.patch_size, 1], rates=[1, 1, 1, 1], padding=\"VALID\", ) patch_dims = patches.shape[-1] patches = tf.reshape(patches, [batch_size, -1, patch_dims]) return patches Let's display patches for a sample image import matplotlib.pyplot as plt plt.figure(figsize=(4, 4)) image = x_train[np.random.choice(range(x_train.shape[0]))] plt.imshow(image.astype(\"uint8\")) plt.axis(\"off\") resized_image = tf.image.resize( tf.convert_to_tensor([image]), size=(image_size, image_size) ) patches = Patches(patch_size)(resized_image) print(f\"Image size: {image_size} X {image_size}\") print(f\"Patch size: {patch_size} X {patch_size}\") print(f\"Patches per image: {patches.shape[1]}\") print(f\"Elements per patch: {patches.shape[-1]}\") n = int(np.sqrt(patches.shape[1])) plt.figure(figsize=(4, 4)) for i, patch in enumerate(patches[0]): ax = plt.subplot(n, n, i + 1) patch_img = tf.reshape(patch, (patch_size, patch_size, 3)) plt.imshow(patch_img.numpy().astype(\"uint8\")) plt.axis(\"off\") Image size: 72 X 72 Patch size: 6 X 6 Patches per image: 144 Elements per patch: 108 png png Implement the patch encoding layer The PatchEncoder layer will linearly transform a patch by projecting it into a vector of size projection_dim. In addition, it adds a learnable position embedding to the projected vector. class PatchEncoder(layers.Layer): def __init__(self, num_patches, projection_dim): super(PatchEncoder, self).__init__() self.num_patches = num_patches self.projection = layers.Dense(units=projection_dim) self.position_embedding = layers.Embedding( input_dim=num_patches, output_dim=projection_dim ) def call(self, patch): positions = tf.range(start=0, limit=self.num_patches, delta=1) encoded = self.projection(patch) + self.position_embedding(positions) return encoded Build the ViT model The ViT model consists of multiple Transformer blocks, which use the layers.MultiHeadAttention layer as a self-attention mechanism applied to the sequence of patches. The Transformer blocks produce a [batch_size, num_patches, projection_dim] tensor, which is processed via a classifier head with softmax to produce the final class probabilities output. Unlike the technique described in the paper, which prepends a learnable embedding to the sequence of encoded patches to serve as the image representation, all the outputs of the final Transformer block are reshaped with layers.Flatten() and used as the image representation input to the classifier head. Note that the layers.GlobalAveragePooling1D layer could also be used instead to aggregate the outputs of the Transformer block, especially when the number of patches and the projection dimensions are large; a sketch of this variant is shown after the training results below. def create_vit_classifier(): inputs = layers.Input(shape=input_shape) # Augment data. augmented = data_augmentation(inputs) # Create patches. patches = Patches(patch_size)(augmented) # Encode patches.
encoded_patches = PatchEncoder(num_patches, projection_dim)(patches) # Create multiple layers of the Transformer block. for _ in range(transformer_layers): # Layer normalization 1. x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches) # Create a multi-head attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=projection_dim, dropout=0.1 )(x1, x1) # Skip connection 1. x2 = layers.Add()([attention_output, encoded_patches]) # Layer normalization 2. x3 = layers.LayerNormalization(epsilon=1e-6)(x2) # MLP. x3 = mlp(x3, hidden_units=transformer_units, dropout_rate=0.1) # Skip connection 2. encoded_patches = layers.Add()([x3, x2]) # Create a [batch_size, num_patches * projection_dim] tensor. representation = layers.LayerNormalization(epsilon=1e-6)(encoded_patches) representation = layers.Flatten()(representation) representation = layers.Dropout(0.5)(representation) # Add MLP. features = mlp(representation, hidden_units=mlp_head_units, dropout_rate=0.5) # Classify outputs. logits = layers.Dense(num_classes)(features) # Create the Keras model. model = keras.Model(inputs=inputs, outputs=logits) return model Compile, train, and evaluate the model def run_experiment(model): optimizer = tfa.optimizers.AdamW( learning_rate=learning_rate, weight_decay=weight_decay ) model.compile( optimizer=optimizer, loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[ keras.metrics.SparseCategoricalAccuracy(name=\"accuracy\"), keras.metrics.SparseTopKCategoricalAccuracy(5, name=\"top-5-accuracy\"), ], ) checkpoint_filepath = \"/tmp/checkpoint\" checkpoint_callback = keras.callbacks.ModelCheckpoint( checkpoint_filepath, monitor=\"val_accuracy\", save_best_only=True, save_weights_only=True, ) history = model.fit( x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs, validation_split=0.1, callbacks=[checkpoint_callback], ) model.load_weights(checkpoint_filepath) _, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") print(f\"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%\") return history vit_classifier = create_vit_classifier() history = run_experiment(vit_classifier) Epoch 1/100 176/176 [==============================] - 33s 136ms/step - loss: 4.8863 - accuracy: 0.0294 - top-5-accuracy: 0.1117 - val_loss: 3.9661 - val_accuracy: 0.0992 - val_top-5-accuracy: 0.3056 Epoch 2/100 176/176 [==============================] - 22s 127ms/step - loss: 4.0162 - accuracy: 0.0865 - top-5-accuracy: 0.2683 - val_loss: 3.5691 - val_accuracy: 0.1630 - val_top-5-accuracy: 0.4226 Epoch 3/100 176/176 [==============================] - 22s 127ms/step - loss: 3.7313 - accuracy: 0.1254 - top-5-accuracy: 0.3535 - val_loss: 3.3455 - val_accuracy: 0.1976 - val_top-5-accuracy: 0.4756 Epoch 4/100 176/176 [==============================] - 23s 128ms/step - loss: 3.5411 - accuracy: 0.1541 - top-5-accuracy: 0.4121 - val_loss: 3.1925 - val_accuracy: 0.2274 - val_top-5-accuracy: 0.5126 Epoch 5/100 176/176 [==============================] - 22s 127ms/step - loss: 3.3749 - accuracy: 0.1847 - top-5-accuracy: 0.4572 - val_loss: 3.1043 - val_accuracy: 0.2388 - val_top-5-accuracy: 0.5320 Epoch 6/100 176/176 [==============================] - 22s 127ms/step - loss: 3.2589 - accuracy: 0.2057 - top-5-accuracy: 0.4906 - val_loss: 2.9319 - val_accuracy: 0.2782 - val_top-5-accuracy: 0.5756 Epoch 7/100 176/176 [==============================] - 22s 127ms/step - loss: 3.1165 - accuracy: 0.2331 - top-5-accuracy: 0.5273 - val_loss: 2.8072
- val_accuracy: 0.2972 - val_top-5-accuracy: 0.5946 Epoch 8/100 176/176 [==============================] - 22s 127ms/step - loss: 2.9902 - accuracy: 0.2563 - top-5-accuracy: 0.5556 - val_loss: 2.7207 - val_accuracy: 0.3188 - val_top-5-accuracy: 0.6258 Epoch 9/100 176/176 [==============================] - 22s 127ms/step - loss: 2.8828 - accuracy: 0.2800 - top-5-accuracy: 0.5827 - val_loss: 2.6396 - val_accuracy: 0.3244 - val_top-5-accuracy: 0.6402 Epoch 10/100 176/176 [==============================] - 23s 128ms/step - loss: 2.7824 - accuracy: 0.2997 - top-5-accuracy: 0.6110 - val_loss: 2.5580 - val_accuracy: 0.3494 - val_top-5-accuracy: 0.6568 Epoch 11/100 176/176 [==============================] - 23s 130ms/step - loss: 2.6743 - accuracy: 0.3209 - top-5-accuracy: 0.6333 - val_loss: 2.5000 - val_accuracy: 0.3594 - val_top-5-accuracy: 0.6726 Epoch 12/100 176/176 [==============================] - 23s 130ms/step - loss: 2.5800 - accuracy: 0.3431 - top-5-accuracy: 0.6522 - val_loss: 2.3900 - val_accuracy: 0.3798 - val_top-5-accuracy: 0.6878 Epoch 13/100 176/176 [==============================] - 23s 128ms/step - loss: 2.5019 - accuracy: 0.3559 - top-5-accuracy: 0.6671 - val_loss: 2.3464 - val_accuracy: 0.3960 - val_top-5-accuracy: 0.7002 Epoch 14/100 176/176 [==============================] - 22s 128ms/step - loss: 2.4207 - accuracy: 0.3728 - top-5-accuracy: 0.6905 - val_loss: 2.3130 - val_accuracy: 0.4032 - val_top-5-accuracy: 0.7040 Epoch 15/100 176/176 [==============================] - 23s 128ms/step - loss: 2.3371 - accuracy: 0.3932 - top-5-accuracy: 0.7093 - val_loss: 2.2447 - val_accuracy: 0.4136 - val_top-5-accuracy: 0.7202 Epoch 16/100 176/176 [==============================] - 23s 128ms/step - loss: 2.2650 - accuracy: 0.4077 - top-5-accuracy: 0.7201 - val_loss: 2.2101 - val_accuracy: 0.4222 - val_top-5-accuracy: 0.7246 Epoch 17/100 176/176 [==============================] - 22s 127ms/step - loss: 2.1822 - accuracy: 0.4204 - top-5-accuracy: 0.7376 - val_loss: 2.1446 - val_accuracy: 0.4344 - val_top-5-accuracy: 0.7416 Epoch 18/100 176/176 [==============================] - 22s 128ms/step - loss: 2.1485 - accuracy: 0.4284 - top-5-accuracy: 0.7476 - val_loss: 2.1094 - val_accuracy: 0.4432 - val_top-5-accuracy: 0.7454 Epoch 19/100 176/176 [==============================] - 22s 128ms/step - loss: 2.0717 - accuracy: 0.4464 - top-5-accuracy: 0.7618 - val_loss: 2.0718 - val_accuracy: 0.4584 - val_top-5-accuracy: 0.7570 Epoch 20/100 176/176 [==============================] - 22s 127ms/step - loss: 2.0031 - accuracy: 0.4605 - top-5-accuracy: 0.7731 - val_loss: 2.0286 - val_accuracy: 0.4610 - val_top-5-accuracy: 0.7654 Epoch 21/100 176/176 [==============================] - 22s 127ms/step - loss: 1.9650 - accuracy: 0.4700 - top-5-accuracy: 0.7820 - val_loss: 2.0225 - val_accuracy: 0.4642 - val_top-5-accuracy: 0.7628 Epoch 22/100 176/176 [==============================] - 22s 127ms/step - loss: 1.9066 - accuracy: 0.4839 - top-5-accuracy: 0.7904 - val_loss: 1.9961 - val_accuracy: 0.4746 - val_top-5-accuracy: 0.7656 Epoch 23/100 176/176 [==============================] - 22s 127ms/step - loss: 1.8564 - accuracy: 0.4952 - top-5-accuracy: 0.8030 - val_loss: 1.9769 - val_accuracy: 0.4828 - val_top-5-accuracy: 0.7742 Epoch 24/100 176/176 [==============================] - 22s 128ms/step - loss: 1.8167 - accuracy: 0.5034 - top-5-accuracy: 0.8099 - val_loss: 1.9730 - val_accuracy: 0.4766 - val_top-5-accuracy: 0.7728 Epoch 25/100 176/176 [==============================] - 22s 128ms/step - loss: 1.7788 - 
accuracy: 0.5124 - top-5-accuracy: 0.8174 - val_loss: 1.9187 - val_accuracy: 0.4926 - val_top-5-accuracy: 0.7854 Epoch 26/100 176/176 [==============================] - 23s 128ms/step - loss: 1.7437 - accuracy: 0.5187 - top-5-accuracy: 0.8206 - val_loss: 1.9732 - val_accuracy: 0.4792 - val_top-5-accuracy: 0.7772 Epoch 27/100 176/176 [==============================] - 23s 128ms/step - loss: 1.6929 - accuracy: 0.5300 - top-5-accuracy: 0.8287 - val_loss: 1.9109 - val_accuracy: 0.4928 - val_top-5-accuracy: 0.7912 Epoch 28/100 176/176 [==============================] - 23s 129ms/step - loss: 1.6647 - accuracy: 0.5400 - top-5-accuracy: 0.8362 - val_loss: 1.9031 - val_accuracy: 0.4984 - val_top-5-accuracy: 0.7824 Epoch 29/100 176/176 [==============================] - 23s 129ms/step - loss: 1.6295 - accuracy: 0.5488 - top-5-accuracy: 0.8402 - val_loss: 1.8744 - val_accuracy: 0.4982 - val_top-5-accuracy: 0.7910 Epoch 30/100 176/176 [==============================] - 22s 128ms/step - loss: 1.5860 - accuracy: 0.5548 - top-5-accuracy: 0.8504 - val_loss: 1.8551 - val_accuracy: 0.5108 - val_top-5-accuracy: 0.7946 Epoch 31/100 176/176 [==============================] - 22s 127ms/step - loss: 1.5666 - accuracy: 0.5614 - top-5-accuracy: 0.8548 - val_loss: 1.8720 - val_accuracy: 0.5076 - val_top-5-accuracy: 0.7960 Epoch 32/100 176/176 [==============================] - 22s 127ms/step - loss: 1.5272 - accuracy: 0.5712 - top-5-accuracy: 0.8596 - val_loss: 1.8840 - val_accuracy: 0.5106 - val_top-5-accuracy: 0.7966 Epoch 33/100 176/176 [==============================] - 22s 128ms/step - loss: 1.4995 - accuracy: 0.5779 - top-5-accuracy: 0.8651 - val_loss: 1.8660 - val_accuracy: 0.5116 - val_top-5-accuracy: 0.7904 Epoch 34/100 176/176 [==============================] - 22s 128ms/step - loss: 1.4686 - accuracy: 0.5849 - top-5-accuracy: 0.8685 - val_loss: 1.8544 - val_accuracy: 0.5126 - val_top-5-accuracy: 0.7954 Epoch 35/100 176/176 [==============================] - 22s 127ms/step - loss: 1.4276 - accuracy: 0.5992 - top-5-accuracy: 0.8743 - val_loss: 1.8497 - val_accuracy: 0.5164 - val_top-5-accuracy: 0.7990 Epoch 36/100 176/176 [==============================] - 22s 127ms/step - loss: 1.4102 - accuracy: 0.5970 - top-5-accuracy: 0.8768 - val_loss: 1.8496 - val_accuracy: 0.5198 - val_top-5-accuracy: 0.7948 Epoch 37/100 176/176 [==============================] - 22s 126ms/step - loss: 1.3800 - accuracy: 0.6112 - top-5-accuracy: 0.8814 - val_loss: 1.8033 - val_accuracy: 0.5284 - val_top-5-accuracy: 0.8068 Epoch 38/100 176/176 [==============================] - 22s 126ms/step - loss: 1.3500 - accuracy: 0.6103 - top-5-accuracy: 0.8862 - val_loss: 1.8092 - val_accuracy: 0.5214 - val_top-5-accuracy: 0.8128 Epoch 39/100 176/176 [==============================] - 22s 127ms/step - loss: 1.3575 - accuracy: 0.6127 - top-5-accuracy: 0.8857 - val_loss: 1.8175 - val_accuracy: 0.5198 - val_top-5-accuracy: 0.8086 Epoch 40/100 176/176 [==============================] - 22s 126ms/step - loss: 1.3030 - accuracy: 0.6283 - top-5-accuracy: 0.8927 - val_loss: 1.8361 - val_accuracy: 0.5170 - val_top-5-accuracy: 0.8056 Epoch 41/100 176/176 [==============================] - 22s 125ms/step - loss: 1.3160 - accuracy: 0.6247 - top-5-accuracy: 0.8923 - val_loss: 1.8074 - val_accuracy: 0.5260 - val_top-5-accuracy: 0.8082 Epoch 42/100 176/176 [==============================] - 22s 126ms/step - loss: 1.2679 - accuracy: 0.6329 - top-5-accuracy: 0.9002 - val_loss: 1.8430 - val_accuracy: 0.5244 - val_top-5-accuracy: 0.8100 Epoch 43/100 176/176 
[==============================] - 22s 126ms/step - loss: 1.2514 - accuracy: 0.6375 - top-5-accuracy: 0.9034 - val_loss: 1.8318 - val_accuracy: 0.5196 - val_top-5-accuracy: 0.8034 Epoch 44/100 176/176 [==============================] - 22s 126ms/step - loss: 1.2311 - accuracy: 0.6431 - top-5-accuracy: 0.9067 - val_loss: 1.8283 - val_accuracy: 0.5218 - val_top-5-accuracy: 0.8050 Epoch 45/100 176/176 [==============================] - 22s 125ms/step - loss: 1.2073 - accuracy: 0.6484 - top-5-accuracy: 0.9098 - val_loss: 1.8384 - val_accuracy: 0.5302 - val_top-5-accuracy: 0.8056 Epoch 46/100 176/176 [==============================] - 22s 125ms/step - loss: 1.1775 - accuracy: 0.6558 - top-5-accuracy: 0.9117 - val_loss: 1.8409 - val_accuracy: 0.5294 - val_top-5-accuracy: 0.8078 Epoch 47/100 176/176 [==============================] - 22s 126ms/step - loss: 1.1891 - accuracy: 0.6563 - top-5-accuracy: 0.9103 - val_loss: 1.8167 - val_accuracy: 0.5346 - val_top-5-accuracy: 0.8142 Epoch 48/100 176/176 [==============================] - 22s 127ms/step - loss: 1.1586 - accuracy: 0.6621 - top-5-accuracy: 0.9161 - val_loss: 1.8285 - val_accuracy: 0.5314 - val_top-5-accuracy: 0.8086 Epoch 49/100 176/176 [==============================] - 22s 126ms/step - loss: 1.1586 - accuracy: 0.6634 - top-5-accuracy: 0.9154 - val_loss: 1.8189 - val_accuracy: 0.5366 - val_top-5-accuracy: 0.8134 Epoch 50/100 176/176 [==============================] - 22s 126ms/step - loss: 1.1306 - accuracy: 0.6682 - top-5-accuracy: 0.9199 - val_loss: 1.8442 - val_accuracy: 0.5254 - val_top-5-accuracy: 0.8096 Epoch 51/100 176/176 [==============================] - 22s 126ms/step - loss: 1.1175 - accuracy: 0.6708 - top-5-accuracy: 0.9227 - val_loss: 1.8513 - val_accuracy: 0.5230 - val_top-5-accuracy: 0.8104 Epoch 52/100 176/176 [==============================] - 22s 126ms/step - loss: 1.1104 - accuracy: 0.6743 - top-5-accuracy: 0.9226 - val_loss: 1.8041 - val_accuracy: 0.5332 - val_top-5-accuracy: 0.8142 Epoch 53/100 176/176 [==============================] - 22s 127ms/step - loss: 1.0914 - accuracy: 0.6809 - top-5-accuracy: 0.9236 - val_loss: 1.8213 - val_accuracy: 0.5342 - val_top-5-accuracy: 0.8094 Epoch 54/100 176/176 [==============================] - 22s 126ms/step - loss: 1.0681 - accuracy: 0.6856 - top-5-accuracy: 0.9270 - val_loss: 1.8429 - val_accuracy: 0.5328 - val_top-5-accuracy: 0.8086 Epoch 55/100 176/176 [==============================] - 22s 126ms/step - loss: 1.0625 - accuracy: 0.6862 - top-5-accuracy: 0.9301 - val_loss: 1.8316 - val_accuracy: 0.5364 - val_top-5-accuracy: 0.8090 Epoch 56/100 176/176 [==============================] - 22s 127ms/step - loss: 1.0474 - accuracy: 0.6920 - top-5-accuracy: 0.9308 - val_loss: 1.8310 - val_accuracy: 0.5440 - val_top-5-accuracy: 0.8132 Epoch 57/100 176/176 [==============================] - 22s 127ms/step - loss: 1.0381 - accuracy: 0.6974 - top-5-accuracy: 0.9297 - val_loss: 1.8447 - val_accuracy: 0.5368 - val_top-5-accuracy: 0.8126 Epoch 58/100 176/176 [==============================] - 22s 126ms/step - loss: 1.0230 - accuracy: 0.7011 - top-5-accuracy: 0.9341 - val_loss: 1.8241 - val_accuracy: 0.5418 - val_top-5-accuracy: 0.8094 Epoch 59/100 176/176 [==============================] - 22s 127ms/step - loss: 1.0113 - accuracy: 0.7023 - top-5-accuracy: 0.9361 - val_loss: 1.8216 - val_accuracy: 0.5380 - val_top-5-accuracy: 0.8134 Epoch 60/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9953 - accuracy: 0.7031 - top-5-accuracy: 0.9386 - val_loss: 1.8356 - 
val_accuracy: 0.5422 - val_top-5-accuracy: 0.8122 Epoch 61/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9928 - accuracy: 0.7084 - top-5-accuracy: 0.9375 - val_loss: 1.8514 - val_accuracy: 0.5342 - val_top-5-accuracy: 0.8182 Epoch 62/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9740 - accuracy: 0.7121 - top-5-accuracy: 0.9387 - val_loss: 1.8674 - val_accuracy: 0.5366 - val_top-5-accuracy: 0.8092 Epoch 63/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9742 - accuracy: 0.7112 - top-5-accuracy: 0.9413 - val_loss: 1.8274 - val_accuracy: 0.5414 - val_top-5-accuracy: 0.8144 Epoch 64/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9633 - accuracy: 0.7147 - top-5-accuracy: 0.9393 - val_loss: 1.8250 - val_accuracy: 0.5434 - val_top-5-accuracy: 0.8180 Epoch 65/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9407 - accuracy: 0.7221 - top-5-accuracy: 0.9444 - val_loss: 1.8456 - val_accuracy: 0.5424 - val_top-5-accuracy: 0.8120 Epoch 66/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9410 - accuracy: 0.7194 - top-5-accuracy: 0.9447 - val_loss: 1.8559 - val_accuracy: 0.5460 - val_top-5-accuracy: 0.8144 Epoch 67/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9359 - accuracy: 0.7252 - top-5-accuracy: 0.9421 - val_loss: 1.8352 - val_accuracy: 0.5458 - val_top-5-accuracy: 0.8110 Epoch 68/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9232 - accuracy: 0.7254 - top-5-accuracy: 0.9460 - val_loss: 1.8479 - val_accuracy: 0.5444 - val_top-5-accuracy: 0.8132 Epoch 69/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9138 - accuracy: 0.7283 - top-5-accuracy: 0.9456 - val_loss: 1.8697 - val_accuracy: 0.5312 - val_top-5-accuracy: 0.8052 Epoch 70/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9095 - accuracy: 0.7295 - top-5-accuracy: 0.9478 - val_loss: 1.8550 - val_accuracy: 0.5376 - val_top-5-accuracy: 0.8170 Epoch 71/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8945 - accuracy: 0.7332 - top-5-accuracy: 0.9504 - val_loss: 1.8286 - val_accuracy: 0.5436 - val_top-5-accuracy: 0.8198 Epoch 72/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8936 - accuracy: 0.7344 - top-5-accuracy: 0.9479 - val_loss: 1.8727 - val_accuracy: 0.5438 - val_top-5-accuracy: 0.8182 Epoch 73/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8775 - accuracy: 0.7355 - top-5-accuracy: 0.9510 - val_loss: 1.8522 - val_accuracy: 0.5404 - val_top-5-accuracy: 0.8170 Epoch 74/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8660 - accuracy: 0.7390 - top-5-accuracy: 0.9513 - val_loss: 1.8432 - val_accuracy: 0.5448 - val_top-5-accuracy: 0.8156 Epoch 75/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8583 - accuracy: 0.7441 - top-5-accuracy: 0.9532 - val_loss: 1.8419 - val_accuracy: 0.5462 - val_top-5-accuracy: 0.8226 Epoch 76/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8549 - accuracy: 0.7443 - top-5-accuracy: 0.9529 - val_loss: 1.8757 - val_accuracy: 0.5454 - val_top-5-accuracy: 0.8086 Epoch 77/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8578 - accuracy: 0.7384 - top-5-accuracy: 0.9531 - val_loss: 1.9051 - val_accuracy: 0.5462 - val_top-5-accuracy: 0.8136 Epoch 78/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8530 - 
accuracy: 0.7442 - top-5-accuracy: 0.9526 - val_loss: 1.8496 - val_accuracy: 0.5384 - val_top-5-accuracy: 0.8124 Epoch 79/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8403 - accuracy: 0.7485 - top-5-accuracy: 0.9542 - val_loss: 1.8701 - val_accuracy: 0.5550 - val_top-5-accuracy: 0.8228 Epoch 80/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8410 - accuracy: 0.7491 - top-5-accuracy: 0.9538 - val_loss: 1.8737 - val_accuracy: 0.5502 - val_top-5-accuracy: 0.8150 Epoch 81/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8275 - accuracy: 0.7547 - top-5-accuracy: 0.9532 - val_loss: 1.8391 - val_accuracy: 0.5534 - val_top-5-accuracy: 0.8156 Epoch 82/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8221 - accuracy: 0.7528 - top-5-accuracy: 0.9562 - val_loss: 1.8775 - val_accuracy: 0.5428 - val_top-5-accuracy: 0.8120 Epoch 83/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8270 - accuracy: 0.7526 - top-5-accuracy: 0.9550 - val_loss: 1.8464 - val_accuracy: 0.5468 - val_top-5-accuracy: 0.8148 Epoch 84/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8080 - accuracy: 0.7551 - top-5-accuracy: 0.9576 - val_loss: 1.8789 - val_accuracy: 0.5486 - val_top-5-accuracy: 0.8204 Epoch 85/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8058 - accuracy: 0.7593 - top-5-accuracy: 0.9573 - val_loss: 1.8691 - val_accuracy: 0.5446 - val_top-5-accuracy: 0.8156 Epoch 86/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8092 - accuracy: 0.7564 - top-5-accuracy: 0.9560 - val_loss: 1.8588 - val_accuracy: 0.5524 - val_top-5-accuracy: 0.8172 Epoch 87/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7897 - accuracy: 0.7613 - top-5-accuracy: 0.9604 - val_loss: 1.8649 - val_accuracy: 0.5490 - val_top-5-accuracy: 0.8166 Epoch 88/100 176/176 [==============================] - 22s 126ms/step - loss: 0.7890 - accuracy: 0.7635 - top-5-accuracy: 0.9598 - val_loss: 1.9060 - val_accuracy: 0.5446 - val_top-5-accuracy: 0.8112 Epoch 89/100 176/176 [==============================] - 22s 126ms/step - loss: 0.7682 - accuracy: 0.7687 - top-5-accuracy: 0.9620 - val_loss: 1.8645 - val_accuracy: 0.5474 - val_top-5-accuracy: 0.8150 Epoch 90/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7958 - accuracy: 0.7617 - top-5-accuracy: 0.9600 - val_loss: 1.8549 - val_accuracy: 0.5496 - val_top-5-accuracy: 0.8140 Epoch 91/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7978 - accuracy: 0.7603 - top-5-accuracy: 0.9590 - val_loss: 1.9169 - val_accuracy: 0.5440 - val_top-5-accuracy: 0.8140 Epoch 92/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7898 - accuracy: 0.7630 - top-5-accuracy: 0.9594 - val_loss: 1.9015 - val_accuracy: 0.5540 - val_top-5-accuracy: 0.8174 Epoch 93/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7550 - accuracy: 0.7722 - top-5-accuracy: 0.9622 - val_loss: 1.9219 - val_accuracy: 0.5410 - val_top-5-accuracy: 0.8098 Epoch 94/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7692 - accuracy: 0.7689 - top-5-accuracy: 0.9599 - val_loss: 1.8928 - val_accuracy: 0.5506 - val_top-5-accuracy: 0.8184 Epoch 95/100 176/176 [==============================] - 22s 126ms/step - loss: 0.7783 - accuracy: 0.7661 - top-5-accuracy: 0.9597 - val_loss: 1.8646 - val_accuracy: 0.5490 - val_top-5-accuracy: 0.8166 Epoch 96/100 176/176 
[==============================] - 22s 125ms/step - loss: 0.7547 - accuracy: 0.7711 - top-5-accuracy: 0.9638 - val_loss: 1.9347 - val_accuracy: 0.5484 - val_top-5-accuracy: 0.8150 Epoch 97/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7603 - accuracy: 0.7692 - top-5-accuracy: 0.9616 - val_loss: 1.8966 - val_accuracy: 0.5522 - val_top-5-accuracy: 0.8144 Epoch 98/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7595 - accuracy: 0.7730 - top-5-accuracy: 0.9610 - val_loss: 1.8728 - val_accuracy: 0.5470 - val_top-5-accuracy: 0.8170 Epoch 99/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7542 - accuracy: 0.7736 - top-5-accuracy: 0.9622 - val_loss: 1.9132 - val_accuracy: 0.5504 - val_top-5-accuracy: 0.8156 Epoch 100/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7410 - accuracy: 0.7787 - top-5-accuracy: 0.9635 - val_loss: 1.9233 - val_accuracy: 0.5428 - val_top-5-accuracy: 0.8120 313/313 [==============================] - 4s 12ms/step - loss: 1.8487 - accuracy: 0.5514 - top-5-accuracy: 0.8186 Test accuracy: 55.14% Test top 5 accuracy: 81.86% After 100 epochs, the ViT model achieves around 55% accuracy and 82% top-5 accuracy on the test data. These are not competitive results on the CIFAR-100 dataset, as a ResNet50V2 trained from scratch on the same data can achieve 67% accuracy. Note that the state of the art results reported in the paper are achieved by pre-training the ViT model using the JFT-300M dataset, then fine-tuning it on the target dataset. To improve the model quality without pre-training, you can try to train the model for more epochs, use a larger number of Transformer layers, resize the input images, change the patch size, or increase the projection dimensions. Besides, as mentioned in the paper, the quality of the model is affected not only by architecture choices, but also by parameters such as the learning rate schedule, optimizer, weight decay, etc. In practice, it's recommended to fine-tune a ViT model that was pre-trained using a large, high-resolution dataset. 
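As noted in the model description above, a layers.GlobalAveragePooling1D layer can replace the Flatten-based image representation, keeping the classifier head small when the number of patches or the projection dimension grows. Below is a minimal sketch of that variant; it reuses the Patches, PatchEncoder, and mlp helpers and the hyperparameters defined earlier, and it is an illustration rather than the configuration that produced the results reported above.

# Sketch: alternative classifier head using global average pooling instead of
# Flatten. Reuses Patches, PatchEncoder, mlp, and the hyperparameters defined
# above; illustrative, not the configuration used for the reported results.
def create_vit_classifier_gap():
    inputs = layers.Input(shape=input_shape)
    augmented = data_augmentation(inputs)
    patches = Patches(patch_size)(augmented)
    encoded_patches = PatchEncoder(num_patches, projection_dim)(patches)
    for _ in range(transformer_layers):
        x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
        attention_output = layers.MultiHeadAttention(
            num_heads=num_heads, key_dim=projection_dim, dropout=0.1
        )(x1, x1)
        x2 = layers.Add()([attention_output, encoded_patches])
        x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
        x3 = mlp(x3, hidden_units=transformer_units, dropout_rate=0.1)
        encoded_patches = layers.Add()([x3, x2])
    representation = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
    # Average over the patch dimension: [batch_size, projection_dim].
    representation = layers.GlobalAveragePooling1D()(representation)
    representation = layers.Dropout(0.5)(representation)
    features = mlp(representation, hidden_units=mlp_head_units, dropout_rate=0.5)
    logits = layers.Dense(num_classes)(features)
    return keras.Model(inputs=inputs, outputs=logits)

This variant can be trained with the same run_experiment function, e.g. history = run_experiment(create_vit_classifier_gap()).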
Image segmentation model trained from scratch on the Oxford Pets dataset Download the data !curl -O https://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz !curl -O https://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz !tar -xf images.tar.gz !tar -xf annotations.tar.gz % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 755M 100 755M 0 0 6943k 0 0:01:51 0:01:51 --:--:-- 7129k % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 18.2M 100 18.2M 0 0 5692k 0 0:00:03 0:00:03 --:--:-- 5692k Prepare paths of input images and target segmentation masks import os input_dir = \"images/\" target_dir = \"annotations/trimaps/\" img_size = (160, 160) num_classes = 3 batch_size = 32 input_img_paths = sorted( [ os.path.join(input_dir, fname) for fname in os.listdir(input_dir) if fname.endswith(\".jpg\") ] ) target_img_paths = sorted( [ os.path.join(target_dir, fname) for fname in os.listdir(target_dir) if fname.endswith(\".png\") and not fname.startswith(\".\") ] ) print(\"Number of samples:\", len(input_img_paths)) for input_path, target_path in zip(input_img_paths[:10], target_img_paths[:10]): print(input_path, \"|\", target_path) Number of samples: 7390 images/Abyssinian_1.jpg | annotations/trimaps/Abyssinian_1.png images/Abyssinian_10.jpg | annotations/trimaps/Abyssinian_10.png images/Abyssinian_100.jpg | annotations/trimaps/Abyssinian_100.png images/Abyssinian_101.jpg | annotations/trimaps/Abyssinian_101.png images/Abyssinian_102.jpg | annotations/trimaps/Abyssinian_102.png images/Abyssinian_103.jpg | annotations/trimaps/Abyssinian_103.png images/Abyssinian_104.jpg | annotations/trimaps/Abyssinian_104.png images/Abyssinian_105.jpg | annotations/trimaps/Abyssinian_105.png images/Abyssinian_106.jpg | annotations/trimaps/Abyssinian_106.png images/Abyssinian_107.jpg | annotations/trimaps/Abyssinian_107.png What does one input image and corresponding segmentation mask look like? 
from IPython.display import Image, display from tensorflow.keras.preprocessing.image import load_img import PIL from PIL import ImageOps # Display input image #7 display(Image(filename=input_img_paths[9])) # Display auto-contrast version of corresponding target (per-pixel categories) img = PIL.ImageOps.autocontrast(load_img(target_img_paths[9])) display(img) jpeg png Prepare Sequence class to load & vectorize batches of data from tensorflow import keras import numpy as np from tensorflow.keras.preprocessing.image import load_img class OxfordPets(keras.utils.Sequence): \"\"\"Helper to iterate over the data (as Numpy arrays).\"\"\" def __init__(self, batch_size, img_size, input_img_paths, target_img_paths): self.batch_size = batch_size self.img_size = img_size self.input_img_paths = input_img_paths self.target_img_paths = target_img_paths def __len__(self): return len(self.target_img_paths) // self.batch_size def __getitem__(self, idx): \"\"\"Returns tuple (input, target) correspond to batch #idx.\"\"\" i = idx * self.batch_size batch_input_img_paths = self.input_img_paths[i : i + self.batch_size] batch_target_img_paths = self.target_img_paths[i : i + self.batch_size] x = np.zeros((self.batch_size,) + self.img_size + (3,), dtype=\"float32\") for j, path in enumerate(batch_input_img_paths): img = load_img(path, target_size=self.img_size) x[j] = img y = np.zeros((self.batch_size,) + self.img_size + (1,), dtype=\"uint8\") for j, path in enumerate(batch_target_img_paths): img = load_img(path, target_size=self.img_size, color_mode=\"grayscale\") y[j] = np.expand_dims(img, 2) # Ground truth labels are 1, 2, 3. Subtract one to make them 0, 1, 2: y[j] -= 1 return x, y Prepare U-Net Xception-style model from tensorflow.keras import layers def get_model(img_size, num_classes): inputs = keras.Input(shape=img_size + (3,)) ### [First half of the network: downsampling inputs] ### # Entry block x = layers.Conv2D(32, 3, strides=2, padding=\"same\")(inputs) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) previous_block_activation = x # Set aside residual # Blocks 1, 2, 3 are identical apart from the feature depth. 
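# Each block in the loop below applies two SeparableConv2D layers (each
# preceded by a ReLU activation and followed by BatchNormalization), then a
# strided MaxPooling2D that halves the spatial resolution. A strided 1x1
# Conv2D projects the previous block's output so it can be added back as a
# residual connection, Xception-style.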
for filters in [64, 128, 256]: x = layers.Activation(\"relu\")(x) x = layers.SeparableConv2D(filters, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) x = layers.SeparableConv2D(filters, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.MaxPooling2D(3, strides=2, padding=\"same\")(x) # Project residual residual = layers.Conv2D(filters, 1, strides=2, padding=\"same\")( previous_block_activation ) x = layers.add([x, residual]) # Add back residual previous_block_activation = x # Set aside next residual ### [Second half of the network: upsampling inputs] ### for filters in [256, 128, 64, 32]: x = layers.Activation(\"relu\")(x) x = layers.Conv2DTranspose(filters, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) x = layers.Conv2DTranspose(filters, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.UpSampling2D(2)(x) # Project residual residual = layers.UpSampling2D(2)(previous_block_activation) residual = layers.Conv2D(filters, 1, padding=\"same\")(residual) x = layers.add([x, residual]) # Add back residual previous_block_activation = x # Set aside next residual # Add a per-pixel classification layer outputs = layers.Conv2D(num_classes, 3, activation=\"softmax\", padding=\"same\")(x) # Define the model model = keras.Model(inputs, outputs) return model # Free up RAM in case the model definition cells were run multiple times keras.backend.clear_session() # Build model model = get_model(img_size, num_classes) model.summary() Model: \"functional_1\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 160, 160, 3) 0 __________________________________________________________________________________________________ conv2d (Conv2D) (None, 80, 80, 32) 896 input_1[0][0] __________________________________________________________________________________________________ batch_normalization (BatchNorma (None, 80, 80, 32) 128 conv2d[0][0] __________________________________________________________________________________________________ activation (Activation) (None, 80, 80, 32) 0 batch_normalization[0][0] __________________________________________________________________________________________________ activation_1 (Activation) (None, 80, 80, 32) 0 activation[0][0] __________________________________________________________________________________________________ separable_conv2d (SeparableConv (None, 80, 80, 64) 2400 activation_1[0][0] __________________________________________________________________________________________________ batch_normalization_1 (BatchNor (None, 80, 80, 64) 256 separable_conv2d[0][0] __________________________________________________________________________________________________ activation_2 (Activation) (None, 80, 80, 64) 0 batch_normalization_1[0][0] __________________________________________________________________________________________________ separable_conv2d_1 (SeparableCo (None, 80, 80, 64) 4736 activation_2[0][0] __________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 80, 80, 64) 256 separable_conv2d_1[0][0] __________________________________________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 40, 40, 
64) 0 batch_normalization_2[0][0] __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 40, 40, 64) 2112 activation[0][0] __________________________________________________________________________________________________ add (Add) (None, 40, 40, 64) 0 max_pooling2d[0][0] conv2d_1[0][0] __________________________________________________________________________________________________ activation_3 (Activation) (None, 40, 40, 64) 0 add[0][0] __________________________________________________________________________________________________ separable_conv2d_2 (SeparableCo (None, 40, 40, 128) 8896 activation_3[0][0] __________________________________________________________________________________________________ batch_normalization_3 (BatchNor (None, 40, 40, 128) 512 separable_conv2d_2[0][0] __________________________________________________________________________________________________ activation_4 (Activation) (None, 40, 40, 128) 0 batch_normalization_3[0][0] __________________________________________________________________________________________________ separable_conv2d_3 (SeparableCo (None, 40, 40, 128) 17664 activation_4[0][0] __________________________________________________________________________________________________ batch_normalization_4 (BatchNor (None, 40, 40, 128) 512 separable_conv2d_3[0][0] __________________________________________________________________________________________________ max_pooling2d_1 (MaxPooling2D) (None, 20, 20, 128) 0 batch_normalization_4[0][0] __________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 20, 20, 128) 8320 add[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, 20, 20, 128) 0 max_pooling2d_1[0][0] conv2d_2[0][0] __________________________________________________________________________________________________ activation_5 (Activation) (None, 20, 20, 128) 0 add_1[0][0] __________________________________________________________________________________________________ separable_conv2d_4 (SeparableCo (None, 20, 20, 256) 34176 activation_5[0][0] __________________________________________________________________________________________________ batch_normalization_5 (BatchNor (None, 20, 20, 256) 1024 separable_conv2d_4[0][0] __________________________________________________________________________________________________ activation_6 (Activation) (None, 20, 20, 256) 0 batch_normalization_5[0][0] __________________________________________________________________________________________________ separable_conv2d_5 (SeparableCo (None, 20, 20, 256) 68096 activation_6[0][0] __________________________________________________________________________________________________ batch_normalization_6 (BatchNor (None, 20, 20, 256) 1024 separable_conv2d_5[0][0] __________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 10, 10, 256) 0 batch_normalization_6[0][0] __________________________________________________________________________________________________ conv2d_3 (Conv2D) (None, 10, 10, 256) 33024 add_1[0][0] __________________________________________________________________________________________________ add_2 (Add) (None, 10, 10, 256) 0 max_pooling2d_2[0][0] conv2d_3[0][0] __________________________________________________________________________________________________ 
activation_7 (Activation) (None, 10, 10, 256) 0 add_2[0][0] __________________________________________________________________________________________________ conv2d_transpose (Conv2DTranspo (None, 10, 10, 256) 590080 activation_7[0][0] __________________________________________________________________________________________________ batch_normalization_7 (BatchNor (None, 10, 10, 256) 1024 conv2d_transpose[0][0] __________________________________________________________________________________________________ activation_8 (Activation) (None, 10, 10, 256) 0 batch_normalization_7[0][0] __________________________________________________________________________________________________ conv2d_transpose_1 (Conv2DTrans (None, 10, 10, 256) 590080 activation_8[0][0] __________________________________________________________________________________________________ batch_normalization_8 (BatchNor (None, 10, 10, 256) 1024 conv2d_transpose_1[0][0] __________________________________________________________________________________________________ up_sampling2d_1 (UpSampling2D) (None, 20, 20, 256) 0 add_2[0][0] __________________________________________________________________________________________________ up_sampling2d (UpSampling2D) (None, 20, 20, 256) 0 batch_normalization_8[0][0] __________________________________________________________________________________________________ conv2d_4 (Conv2D) (None, 20, 20, 256) 65792 up_sampling2d_1[0][0] __________________________________________________________________________________________________ add_3 (Add) (None, 20, 20, 256) 0 up_sampling2d[0][0] conv2d_4[0][0] __________________________________________________________________________________________________ activation_9 (Activation) (None, 20, 20, 256) 0 add_3[0][0] __________________________________________________________________________________________________ conv2d_transpose_2 (Conv2DTrans (None, 20, 20, 128) 295040 activation_9[0][0] __________________________________________________________________________________________________ batch_normalization_9 (BatchNor (None, 20, 20, 128) 512 conv2d_transpose_2[0][0] __________________________________________________________________________________________________ activation_10 (Activation) (None, 20, 20, 128) 0 batch_normalization_9[0][0] __________________________________________________________________________________________________ conv2d_transpose_3 (Conv2DTrans (None, 20, 20, 128) 147584 activation_10[0][0] __________________________________________________________________________________________________ batch_normalization_10 (BatchNo (None, 20, 20, 128) 512 conv2d_transpose_3[0][0] __________________________________________________________________________________________________ up_sampling2d_3 (UpSampling2D) (None, 40, 40, 256) 0 add_3[0][0] __________________________________________________________________________________________________ up_sampling2d_2 (UpSampling2D) (None, 40, 40, 128) 0 batch_normalization_10[0][0] __________________________________________________________________________________________________ conv2d_5 (Conv2D) (None, 40, 40, 128) 32896 up_sampling2d_3[0][0] __________________________________________________________________________________________________ add_4 (Add) (None, 40, 40, 128) 0 up_sampling2d_2[0][0] conv2d_5[0][0] __________________________________________________________________________________________________ activation_11 (Activation) (None, 40, 40, 128) 0 add_4[0][0] 
__________________________________________________________________________________________________ conv2d_transpose_4 (Conv2DTrans (None, 40, 40, 64) 73792 activation_11[0][0] __________________________________________________________________________________________________ batch_normalization_11 (BatchNo (None, 40, 40, 64) 256 conv2d_transpose_4[0][0] __________________________________________________________________________________________________ activation_12 (Activation) (None, 40, 40, 64) 0 batch_normalization_11[0][0] __________________________________________________________________________________________________ conv2d_transpose_5 (Conv2DTrans (None, 40, 40, 64) 36928 activation_12[0][0] __________________________________________________________________________________________________ batch_normalization_12 (BatchNo (None, 40, 40, 64) 256 conv2d_transpose_5[0][0] __________________________________________________________________________________________________ up_sampling2d_5 (UpSampling2D) (None, 80, 80, 128) 0 add_4[0][0] __________________________________________________________________________________________________ up_sampling2d_4 (UpSampling2D) (None, 80, 80, 64) 0 batch_normalization_12[0][0] __________________________________________________________________________________________________ conv2d_6 (Conv2D) (None, 80, 80, 64) 8256 up_sampling2d_5[0][0] __________________________________________________________________________________________________ add_5 (Add) (None, 80, 80, 64) 0 up_sampling2d_4[0][0] conv2d_6[0][0] __________________________________________________________________________________________________ activation_13 (Activation) (None, 80, 80, 64) 0 add_5[0][0] __________________________________________________________________________________________________ conv2d_transpose_6 (Conv2DTrans (None, 80, 80, 32) 18464 activation_13[0][0] __________________________________________________________________________________________________ batch_normalization_13 (BatchNo (None, 80, 80, 32) 128 conv2d_transpose_6[0][0] __________________________________________________________________________________________________ activation_14 (Activation) (None, 80, 80, 32) 0 batch_normalization_13[0][0] __________________________________________________________________________________________________ conv2d_transpose_7 (Conv2DTrans (None, 80, 80, 32) 9248 activation_14[0][0] __________________________________________________________________________________________________ batch_normalization_14 (BatchNo (None, 80, 80, 32) 128 conv2d_transpose_7[0][0] __________________________________________________________________________________________________ up_sampling2d_7 (UpSampling2D) (None, 160, 160, 64) 0 add_5[0][0] __________________________________________________________________________________________________ up_sampling2d_6 (UpSampling2D) (None, 160, 160, 32) 0 batch_normalization_14[0][0] __________________________________________________________________________________________________ conv2d_7 (Conv2D) (None, 160, 160, 32) 2080 up_sampling2d_7[0][0] __________________________________________________________________________________________________ add_6 (Add) (None, 160, 160, 32) 0 up_sampling2d_6[0][0] conv2d_7[0][0] __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 160, 160, 3) 867 add_6[0][0] ================================================================================================== Total params: 
2,058,979 Trainable params: 2,055,203 Non-trainable params: 3,776 __________________________________________________________________________________________________ Set aside a validation split import random # Split our img paths into a training and a validation set val_samples = 1000 random.Random(1337).shuffle(input_img_paths) random.Random(1337).shuffle(target_img_paths) train_input_img_paths = input_img_paths[:-val_samples] train_target_img_paths = target_img_paths[:-val_samples] val_input_img_paths = input_img_paths[-val_samples:] val_target_img_paths = target_img_paths[-val_samples:] # Instantiate data Sequences for each split train_gen = OxfordPets( batch_size, img_size, train_input_img_paths, train_target_img_paths ) val_gen = OxfordPets(batch_size, img_size, val_input_img_paths, val_target_img_paths) Train the model # Configure the model for training. # We use the \"sparse\" version of categorical_crossentropy # because our target data is integers. model.compile(optimizer=\"rmsprop\", loss=\"sparse_categorical_crossentropy\") callbacks = [ keras.callbacks.ModelCheckpoint(\"oxford_segmentation.h5\", save_best_only=True) ] # Train the model, doing validation at the end of each epoch. epochs = 15 model.fit(train_gen, epochs=epochs, validation_data=val_gen, callbacks=callbacks) Epoch 1/15 2/199 [..............................] - ETA: 13s - loss: 5.4602WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0462s vs `on_train_batch_end` time: 0.0935s). Check your callbacks. 199/199 [==============================] - 32s 161ms/step - loss: 0.9396 - val_loss: 3.7159 Epoch 2/15 199/199 [==============================] - 32s 159ms/step - loss: 0.4911 - val_loss: 2.2709 Epoch 3/15 199/199 [==============================] - 32s 160ms/step - loss: 0.4205 - val_loss: 0.5184 Epoch 4/15 199/199 [==============================] - 32s 159ms/step - loss: 0.3739 - val_loss: 0.4584 Epoch 5/15 199/199 [==============================] - 32s 160ms/step - loss: 0.3416 - val_loss: 0.3968 Epoch 6/15 199/199 [==============================] - 32s 159ms/step - loss: 0.3131 - val_loss: 0.4059 Epoch 7/15 199/199 [==============================] - 31s 157ms/step - loss: 0.2895 - val_loss: 0.3963 Epoch 8/15 199/199 [==============================] - 31s 156ms/step - loss: 0.2695 - val_loss: 0.4035 Epoch 9/15 199/199 [==============================] - 31s 157ms/step - loss: 0.2528 - val_loss: 0.4184 Epoch 10/15 199/199 [==============================] - 31s 157ms/step - loss: 0.2360 - val_loss: 0.3950 Epoch 11/15 199/199 [==============================] - 31s 157ms/step - loss: 0.2247 - val_loss: 0.4139 Epoch 12/15 199/199 [==============================] - 31s 157ms/step - loss: 0.2126 - val_loss: 0.3861 Epoch 13/15 199/199 [==============================] - 31s 157ms/step - loss: 0.2026 - val_loss: 0.4138 Epoch 14/15 199/199 [==============================] - 31s 156ms/step - loss: 0.1932 - val_loss: 0.4265 Epoch 15/15 199/199 [==============================] - 31s 157ms/step - loss: 0.1857 - val_loss: 0.3959 Visualize predictions # Generate predictions for all images in the validation set val_gen = OxfordPets(batch_size, img_size, val_input_img_paths, val_target_img_paths) val_preds = model.predict(val_gen) def display_mask(i): \"\"\"Quick utility to display a model's prediction.\"\"\" mask = np.argmax(val_preds[i], axis=-1) mask = np.expand_dims(mask, axis=-1) img = PIL.ImageOps.autocontrast(keras.preprocessing.image.array_to_img(mask)) display(img) # Display 
results for validation image #10 i = 10 # Display input image display(Image(filename=val_input_img_paths[i])) # Display ground-truth target mask img = PIL.ImageOps.autocontrast(load_img(val_target_img_paths[i])) display(img) # Display mask predicted by our model display_mask(i) # Note that the model only sees inputs at 160x160. jpeg png png Similarity learning using a siamese network trained with a contrastive loss. Introduction Siamese Networks are neural networks which share weights between two or more sister networks, each producing embedding vectors of its respective inputs. In supervised similarity learning, the networks are then trained to maximize the contrast (distance) between embeddings of inputs of different classes, while minimizing the distance between embeddings of similar classes, resulting in embedding spaces that reflect the class segmentation of the training inputs. Setup import random import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt Hyperparameters epochs = 10 batch_size = 16 margin = 1 # Margin for contrastive loss. Load the MNIST dataset (x_train_val, y_train_val), (x_test, y_test) = keras.datasets.mnist.load_data() # Change the data type to a floating point format x_train_val = x_train_val.astype(\"float32\") x_test = x_test.astype(\"float32\") Define training and validation sets # Keep 50% of train_val in validation set x_train, x_val = x_train_val[:30000], x_train_val[30000:] y_train, y_val = y_train_val[:30000], y_train_val[30000:] del x_train_val, y_train_val Create pairs of images We will train the model to differentiate between digits of different classes. For example, digit 0 needs to be differentiated from the rest of the digits (1 through 9), digit 1 - from 0 and 2 through 9, and so on. To carry this out, we will select N random images from class A (for example, for digit 0) and pair them with N random images from another class B (for example, for digit 1). Then, we can repeat this process for all classes of digits (until digit 9). Once we have paired digit 0 with other digits, we can repeat this process for the remaining classes for the rest of the digits (from 1 until 9). def make_pairs(x, y): \"\"\"Creates a tuple containing image pairs with corresponding labels. Arguments: x: List containing images, each index in this list corresponds to one image. y: List containing labels, each label with datatype of `int`. Returns: Tuple containing two numpy arrays as (pairs_of_samples, labels), where pairs_of_samples' shape is (2 * len(x), 2, n_features_dims) and labels are a binary array of shape (2 * len(x),).
\"\"\" num_classes = max(y) + 1 digit_indices = [np.where(y == i)[0] for i in range(num_classes)] pairs = [] labels = [] for idx1 in range(len(x)): # add a matching example x1 = x[idx1] label1 = y[idx1] idx2 = random.choice(digit_indices[label1]) x2 = x[idx2] pairs += [[x1, x2]] labels += [1] # add a non-matching example label2 = random.randint(0, num_classes - 1) while label2 == label1: label2 = random.randint(0, num_classes - 1) idx2 = random.choice(digit_indices[label2]) x2 = x[idx2] pairs += [[x1, x2]] labels += [0] return np.array(pairs), np.array(labels).astype(\"float32\") # make train pairs pairs_train, labels_train = make_pairs(x_train, y_train) # make validation pairs pairs_val, labels_val = make_pairs(x_val, y_val) # make test pairs pairs_test, labels_test = make_pairs(x_test, y_test) We get: pairs_train.shape = (60000, 2, 28, 28) We have 60,000 pairs Each pair contains 2 images Each image has shape (28, 28) Split the training pairs x_train_1 = pairs_train[:, 0] # x_train_1.shape is (60000, 28, 28) x_train_2 = pairs_train[:, 1] Split the validation pairs x_val_1 = pairs_val[:, 0] # x_val_1.shape = (60000, 28, 28) x_val_2 = pairs_val[:, 1] Split the test pairs x_test_1 = pairs_test[:, 0] # x_test_1.shape = (20000, 28, 28) x_test_2 = pairs_test[:, 1] Visualize pairs and their labels def visualize(pairs, labels, to_show=6, num_col=3, predictions=None, test=False): \"\"\"Creates a plot of pairs and labels, and prediction if it's test dataset. Arguments: pairs: Numpy Array, of pairs to visualize, having shape (Number of pairs, 2, 28, 28). to_show: Int, number of examples to visualize (default is 6) `to_show` must be an integral multiple of `num_col`. Otherwise it will be trimmed if it is greater than num_col, and incremented if if it is less then num_col. num_col: Int, number of images in one row - (default is 3) For test and train respectively, it should not exceed 3 and 7. predictions: Numpy Array of predictions with shape (to_show, 1) - (default is None) Must be passed when test=True. test: Boolean telling whether the dataset being visualized is train dataset or test dataset - (default False). Returns: None. 
\"\"\" # Define num_row # If to_show % num_col != 0 # trim to_show, # to trim to_show limit num_row to the point where # to_show % num_col == 0 # # If to_show//num_col == 0 # then it means num_col is greater then to_show # increment to_show # to increment to_show set num_row to 1 num_row = to_show // num_col if to_show // num_col != 0 else 1 # `to_show` must be an integral multiple of `num_col` # we found num_row and we have num_col # to increment or decrement to_show # to make it integral multiple of `num_col` # simply set it equal to num_row * num_col to_show = num_row * num_col # Plot the images fig, axes = plt.subplots(num_row, num_col, figsize=(5, 5)) for i in range(to_show): # If the number of rows is 1, the axes array is one-dimensional if num_row == 1: ax = axes[i % num_col] else: ax = axes[i // num_col, i % num_col] ax.imshow(tf.concat([pairs[i][0], pairs[i][1]], axis=1), cmap=\"gray\") ax.set_axis_off() if test: ax.set_title(\"True: {} | Pred: {:.5f}\".format(labels[i], predictions[i][0])) else: ax.set_title(\"Label: {}\".format(labels[i])) if test: plt.tight_layout(rect=(0, 0, 1.9, 1.9), w_pad=0.0) else: plt.tight_layout(rect=(0, 0, 1.5, 1.5)) plt.show() Inspect training pairs visualize(pairs_train[:-1], labels_train[:-1], to_show=4, num_col=4) png Inspect validation pairs visualize(pairs_val[:-1], labels_val[:-1], to_show=4, num_col=4) png Inspect test pairs visualize(pairs_test[:-1], labels_test[:-1], to_show=4, num_col=4) png Define the model There are be two input layers, each leading to its own network, which produces embeddings. A Lambda layer then merges them using an Euclidean distance and the merged output is fed to the final network. # Provided two tensors t1 and t2 # Euclidean distance = sqrt(sum(square(t1-t2))) def euclidean_distance(vects): \"\"\"Find the Euclidean distance between two vectors. Arguments: vects: List containing two tensors of same length. Returns: Tensor containing euclidean distance (as floating point value) between vectors. \"\"\" x, y = vects sum_square = tf.math.reduce_sum(tf.math.square(x - y), axis=1, keepdims=True) return tf.math.sqrt(tf.math.maximum(sum_square, tf.keras.backend.epsilon())) input = layers.Input((28, 28, 1)) x = tf.keras.layers.BatchNormalization()(input) x = layers.Conv2D(4, (5, 5), activation=\"tanh\")(x) x = layers.AveragePooling2D(pool_size=(2, 2))(x) x = layers.Conv2D(16, (5, 5), activation=\"tanh\")(x) x = layers.AveragePooling2D(pool_size=(2, 2))(x) x = layers.Flatten()(x) x = tf.keras.layers.BatchNormalization()(x) x = layers.Dense(10, activation=\"tanh\")(x) embedding_network = keras.Model(input, x) input_1 = layers.Input((28, 28, 1)) input_2 = layers.Input((28, 28, 1)) # As mentioned above, Siamese Network share weights between # tower networks (sister networks). To allow this, we will use # same embedding network for both tower networks. tower_1 = embedding_network(input_1) tower_2 = embedding_network(input_2) merge_layer = layers.Lambda(euclidean_distance)([tower_1, tower_2]) normal_layer = tf.keras.layers.BatchNormalization()(merge_layer) output_layer = layers.Dense(1, activation=\"sigmoid\")(normal_layer) siamese = keras.Model(inputs=[input_1, input_2], outputs=output_layer) Define the constrastive Loss def loss(margin=1): \"\"\"Provides 'constrastive_loss' an enclosing scope with variable 'margin'. Arguments: margin: Integer, defines the baseline for distance for which pairs should be classified as dissimilar. - (default is 1). Returns: 'constrastive_loss' function with data ('margin') attached. 
\"\"\" # Contrastive loss = mean( (1-true_value) * square(prediction) + # true_value * square( max(margin-prediction, 0) )) def contrastive_loss(y_true, y_pred): \"\"\"Calculates the constrastive loss. Arguments: y_true: List of labels, each label is of type float32. y_pred: List of predictions of same length as of y_true, each label is of type float32. Returns: A tensor containing constrastive loss as floating point value. \"\"\" square_pred = tf.math.square(y_pred) margin_square = tf.math.square(tf.math.maximum(margin - (y_pred), 0)) return tf.math.reduce_mean( (1 - y_true) * square_pred + (y_true) * margin_square ) return contrastive_loss Compile the model with the contrastive loss siamese.compile(loss=loss(margin=margin), optimizer=\"RMSprop\", metrics=[\"accuracy\"]) siamese.summary() Model: \"model_1\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_2 (InputLayer) [(None, 28, 28, 1)] 0 __________________________________________________________________________________________________ input_3 (InputLayer) [(None, 28, 28, 1)] 0 __________________________________________________________________________________________________ model (Functional) (None, 10) 5318 input_2[0][0] input_3[0][0] __________________________________________________________________________________________________ lambda (Lambda) (None, 1) 0 model[0][0] model[1][0] __________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 1) 4 lambda[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 1) 2 batch_normalization_2[0][0] ================================================================================================== Total params: 5,324 Trainable params: 4,808 Non-trainable params: 516 __________________________________________________________________________________________________ Train the model history = siamese.fit( [x_train_1, x_train_2], labels_train, validation_data=([x_val_1, x_val_2], labels_val), batch_size=batch_size, epochs=epochs, ) Epoch 1/10 3750/3750 [==============================] - 25s 6ms/step - loss: 0.1993 - accuracy: 0.6626 - val_loss: 0.0525 - val_accuracy: 0.9331 Epoch 2/10 3750/3750 [==============================] - 23s 6ms/step - loss: 0.0611 - accuracy: 0.9187 - val_loss: 0.0277 - val_accuracy: 0.9644 Epoch 3/10 3750/3750 [==============================] - 24s 6ms/step - loss: 0.0455 - accuracy: 0.9409 - val_loss: 0.0214 - val_accuracy: 0.9719 Epoch 4/10 3750/3750 [==============================] - 27s 7ms/step - loss: 0.0386 - accuracy: 0.9506 - val_loss: 0.0198 - val_accuracy: 0.9743 Epoch 5/10 3750/3750 [==============================] - 45s 12ms/step - loss: 0.0362 - accuracy: 0.9529 - val_loss: 0.0169 - val_accuracy: 0.9783 Epoch 6/10 2497/3750 [==================>...........] - ETA: 10s - loss: 0.0343 - accuracy: 0.9552 Visualize results def plt_metric(history, metric, title, has_valid=True): \"\"\"Plots the given 'metric' from 'history'. Arguments: history: history attribute of History object returned from Model.fit. metric: Metric to plot, a string value present as key in 'history'. title: A string to be used as title of plot. has_valid: Boolean, true if valid data was passed to Model.fit else false. Returns: None. 
\"\"\" plt.plot(history[metric]) if has_valid: plt.plot(history[\"val_\" + metric]) plt.legend([\"train\", \"validation\"], loc=\"upper left\") plt.title(title) plt.ylabel(metric) plt.xlabel(\"epoch\") plt.show() # Plot the accuracy plt_metric(history=history.history, metric=\"accuracy\", title=\"Model accuracy\") # Plot the constrastive loss plt_metric(history=history.history, metric=\"loss\", title=\"Constrastive Loss\") png png Evaluate the model results = siamese.evaluate([x_test_1, x_test_2], labels_test) print(\"test loss, test acc:\", results) 625/625 [==============================] - 3s 4ms/step - loss: 0.0150 - accuracy: 0.9810 test loss, test acc: [0.015001337975263596, 0.9810000061988831] Visualize the predictions predictions = siamese.predict([x_test_1, x_test_2]) visualize(pairs_test, labels_test, to_show=3, predictions=predictions, test=True) png Training a Siamese Network to compare the similarity of images using a triplet loss function. Introduction A Siamese Network is a type of network architecture that contains two or more identical subnetworks used to generate feature vectors for each input and compare them. Siamese Networks can be applied to different use cases, like detecting duplicates, finding anomalies, and face recognition. This example uses a Siamese Network with three identical subnetworks. We will provide three images to the model, where two of them will be similar (anchor and positive samples), and the third will be unrelated (a negative example.) Our goal is for the model to learn to estimate the similarity between images. For the network to learn, we use a triplet loss function. You can find an introduction to triplet loss in the FaceNet paper by Schroff et al,. 2015. In this example, we define the triplet loss function as follows: L(A, P, N) = max(‖f(A) - f(P)‖² - ‖f(A) - f(N)‖² + margin, 0) This example uses the Totally Looks Like dataset by Rosenfeld et al., 2018. Setup import matplotlib.pyplot as plt import numpy as np import os import random import tensorflow as tf from pathlib import Path from tensorflow.keras import applications from tensorflow.keras import layers from tensorflow.keras import losses from tensorflow.keras import optimizers from tensorflow.keras import metrics from tensorflow.keras import Model from tensorflow.keras.applications import resnet target_shape = (200, 200) Load the dataset We are going to load the Totally Looks Like dataset and unzip it inside the ~/.keras directory in the local environment. The dataset consists of two separate files: left.zip contains the images that we will use as the anchor. right.zip contains the images that we will use as the positive sample (an image that looks like the anchor). cache_dir = Path(Path.home()) / \".keras\" anchor_images_path = cache_dir / \"left\" positive_images_path = cache_dir / \"right\" !gdown --id 1jvkbTr_giSP3Ru8OwGNCg6B4PvVbcO34 !gdown --id 1EzBZUb_mh_Dp_FKD0P4XiYYSd0QBH5zW !unzip -oq left.zip -d $cache_dir !unzip -oq right.zip -d $cache_dir zsh:1: command not found: gdown zsh:1: command not found: gdown unzip: cannot find or open left.zip, left.zip.zip or left.zip.ZIP. unzip: cannot find or open right.zip, right.zip.zip or right.zip.ZIP. Preparing the data We are going to use a tf.data pipeline to load the data and generate the triplets that we need to train the Siamese network. We'll set up the pipeline using a zipped list with anchor, positive, and negative filenames as the source. The pipeline will load and preprocess the corresponding images. 
def preprocess_image(filename): \"\"\" Load the specified file as a JPEG image, preprocess it and resize it to the target shape. \"\"\" image_string = tf.io.read_file(filename) image = tf.image.decode_jpeg(image_string, channels=3) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, target_shape) return image def preprocess_triplets(anchor, positive, negative): \"\"\" Given the filenames corresponding to the three images, load and preprocess them. \"\"\" return ( preprocess_image(anchor), preprocess_image(positive), preprocess_image(negative), ) Let's setup our data pipeline using a zipped list with an anchor, positive, and negative image filename as the source. The output of the pipeline contains the same triplet with every image loaded and preprocessed. # We need to make sure both the anchor and positive images are loaded in # sorted order so we can match them together. anchor_images = sorted( [str(anchor_images_path / f) for f in os.listdir(anchor_images_path)] ) positive_images = sorted( [str(positive_images_path / f) for f in os.listdir(positive_images_path)] ) image_count = len(anchor_images) anchor_dataset = tf.data.Dataset.from_tensor_slices(anchor_images) positive_dataset = tf.data.Dataset.from_tensor_slices(positive_images) # To generate the list of negative images, let's randomize the list of # available images and concatenate them together. rng = np.random.RandomState(seed=42) rng.shuffle(anchor_images) rng.shuffle(positive_images) negative_images = anchor_images + positive_images np.random.RandomState(seed=32).shuffle(negative_images) negative_dataset = tf.data.Dataset.from_tensor_slices(negative_images) negative_dataset = negative_dataset.shuffle(buffer_size=4096) dataset = tf.data.Dataset.zip((anchor_dataset, positive_dataset, negative_dataset)) dataset = dataset.shuffle(buffer_size=1024) dataset = dataset.map(preprocess_triplets) # Let's now split our dataset in train and validation. train_dataset = dataset.take(round(image_count * 0.8)) val_dataset = dataset.skip(round(image_count * 0.8)) train_dataset = train_dataset.batch(32, drop_remainder=False) train_dataset = train_dataset.prefetch(8) val_dataset = val_dataset.batch(32, drop_remainder=False) val_dataset = val_dataset.prefetch(8) Let's take a look at a few examples of triplets. Notice how the first two images look alike while the third one is always different. def visualize(anchor, positive, negative): \"\"\"Visualize a few triplets from the supplied batches.\"\"\" def show(ax, image): ax.imshow(image) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig = plt.figure(figsize=(9, 9)) axs = fig.subplots(3, 3) for i in range(3): show(axs[i, 0], anchor[i]) show(axs[i, 1], positive[i]) show(axs[i, 2], negative[i]) visualize(*list(train_dataset.take(1).as_numpy_iterator())[0]) png Setting up the embedding generator model Our Siamese Network will generate embeddings for each of the images of the triplet. To do this, we will use a ResNet50 model pretrained on ImageNet and connect a few Dense layers to it so we can learn to separate these embeddings. We will freeze the weights of all the layers of the model up until the layer conv5_block1_out. This is important to avoid affecting the weights that the model has already learned. We are going to leave the bottom few layers trainable, so that we can fine-tune their weights during training. 
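As an optional sanity check (a sketch, not part of the original example), once base_cnn has been created in the next code block you can list the ResNet50 layer names around the freezing boundary to confirm where conv5_block1_out sits:

# Optional: inspect the layer names around the freezing boundary.
# Run this only after `base_cnn` is defined below.
block_names = [layer.name for layer in base_cnn.layers if "conv5_block1" in layer.name]
print(block_names)  # the block should end with "conv5_block1_out"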
base_cnn = resnet.ResNet50( weights=\"imagenet\", input_shape=target_shape + (3,), include_top=False ) flatten = layers.Flatten()(base_cnn.output) dense1 = layers.Dense(512, activation=\"relu\")(flatten) dense1 = layers.BatchNormalization()(dense1) dense2 = layers.Dense(256, activation=\"relu\")(dense1) dense2 = layers.BatchNormalization()(dense2) output = layers.Dense(256)(dense2) embedding = Model(base_cnn.input, output, name=\"Embedding\") trainable = False for layer in base_cnn.layers: if layer.name == \"conv5_block1_out\": trainable = True layer.trainable = trainable Setting up the Siamese Network model The Siamese network will receive each of the triplet images as an input, generate the embeddings, and output the distance between the anchor and the positive embedding, as well as the distance between the anchor and the negative embedding. To compute the distance, we can use a custom layer DistanceLayer that returns both values as a tuple. class DistanceLayer(layers.Layer): \"\"\" This layer is responsible for computing the distance between the anchor embedding and the positive embedding, and the anchor embedding and the negative embedding. \"\"\" def __init__(self, **kwargs): super().__init__(**kwargs) def call(self, anchor, positive, negative): ap_distance = tf.reduce_sum(tf.square(anchor - positive), -1) an_distance = tf.reduce_sum(tf.square(anchor - negative), -1) return (ap_distance, an_distance) anchor_input = layers.Input(name=\"anchor\", shape=target_shape + (3,)) positive_input = layers.Input(name=\"positive\", shape=target_shape + (3,)) negative_input = layers.Input(name=\"negative\", shape=target_shape + (3,)) distances = DistanceLayer()( embedding(resnet.preprocess_input(anchor_input)), embedding(resnet.preprocess_input(positive_input)), embedding(resnet.preprocess_input(negative_input)), ) siamese_network = Model( inputs=[anchor_input, positive_input, negative_input], outputs=distances ) Putting everything together We now need to implement a model with custom training loop so we can compute the triplet loss using the three embeddings produced by the Siamese network. Let's create a Mean metric instance to track the loss of the training process. class SiameseModel(Model): \"\"\"The Siamese Network model with a custom training and testing loops. Computes the triplet loss using the three embeddings produced by the Siamese Network. The triplet loss is defined as: L(A, P, N) = max(‖f(A) - f(P)‖² - ‖f(A) - f(N)‖² + margin, 0) \"\"\" def __init__(self, siamese_network, margin=0.5): super(SiameseModel, self).__init__() self.siamese_network = siamese_network self.margin = margin self.loss_tracker = metrics.Mean(name=\"loss\") def call(self, inputs): return self.siamese_network(inputs) def train_step(self, data): # GradientTape is a context manager that records every operation that # you do inside. We are using it here to compute the loss so we can get # the gradients and apply them using the optimizer specified in # `compile()`. with tf.GradientTape() as tape: loss = self._compute_loss(data) # Storing the gradients of the loss function with respect to the # weights/parameters. gradients = tape.gradient(loss, self.siamese_network.trainable_weights) # Applying the gradients on the model using the specified optimizer self.optimizer.apply_gradients( zip(gradients, self.siamese_network.trainable_weights) ) # Let's update and return the training loss metric. 
self.loss_tracker.update_state(loss) return {\"loss\": self.loss_tracker.result()} def test_step(self, data): loss = self._compute_loss(data) # Let's update and return the loss metric. self.loss_tracker.update_state(loss) return {\"loss\": self.loss_tracker.result()} def _compute_loss(self, data): # The output of the network is a tuple containing the distances # between the anchor and the positive example, and the anchor and # the negative example. ap_distance, an_distance = self.siamese_network(data) # Computing the Triplet Loss by subtracting both distances and # making sure we don't get a negative value. loss = ap_distance - an_distance loss = tf.maximum(loss + self.margin, 0.0) return loss @property def metrics(self): # We need to list our metrics here so the `reset_states()` can be # called automatically. return [self.loss_tracker] Training We are now ready to train our model. siamese_model = SiameseModel(siamese_network) siamese_model.compile(optimizer=optimizers.Adam(0.0001)) siamese_model.fit(train_dataset, epochs=10, validation_data=val_dataset) Epoch 1/10 151/151 [==============================] - 277s 2s/step - loss: 0.5014 - val_loss: 0.3719 Epoch 2/10 151/151 [==============================] - 276s 2s/step - loss: 0.3884 - val_loss: 0.3632 Epoch 3/10 151/151 [==============================] - 287s 2s/step - loss: 0.3711 - val_loss: 0.3509 Epoch 4/10 151/151 [==============================] - 295s 2s/step - loss: 0.3585 - val_loss: 0.3287 Epoch 5/10 151/151 [==============================] - 299s 2s/step - loss: 0.3420 - val_loss: 0.3301 Epoch 6/10 151/151 [==============================] - 297s 2s/step - loss: 0.3181 - val_loss: 0.3419 Epoch 7/10 151/151 [==============================] - 290s 2s/step - loss: 0.3131 - val_loss: 0.3201 Epoch 8/10 151/151 [==============================] - 295s 2s/step - loss: 0.3102 - val_loss: 0.3152 Epoch 9/10 151/151 [==============================] - 286s 2s/step - loss: 0.2905 - val_loss: 0.2937 Epoch 10/10 151/151 [==============================] - 270s 2s/step - loss: 0.2921 - val_loss: 0.2952 Inspecting what the network has learned At this point, we can check how the network learned to separate the embeddings depending on whether they belong to similar images. We can use cosine similarity to measure the similarity between embeddings. Let's pick a sample from the dataset to check the similarity between the embeddings generated for each image. sample = next(iter(train_dataset)) visualize(*sample) anchor, positive, negative = sample anchor_embedding, positive_embedding, negative_embedding = ( embedding(resnet.preprocess_input(anchor)), embedding(resnet.preprocess_input(positive)), embedding(resnet.preprocess_input(negative)), ) png Finally, we can compute the cosine similarity between the anchor and positive images and compare it with the similarity between the anchor and the negative images. We should expect the similarity between the anchor and positive images to be larger than the similarity between the anchor and the negative images. cosine_similarity = metrics.CosineSimilarity() positive_similarity = cosine_similarity(anchor_embedding, positive_embedding) print(\"Positive similarity:\", positive_similarity.numpy()) negative_similarity = cosine_similarity(anchor_embedding, negative_embedding) print(\"Negative similarity\", negative_similarity.numpy()) Positive similarity: 0.9940324 Negative similarity 0.9918252 Summary The tf.data API enables you to build efficient input pipelines for your model. 
It is particularly useful if you have a large dataset. You can learn more about tf.data pipelines in tf.data: Build TensorFlow input pipelines. In this example, we use a pre-trained ResNet50 as part of the subnetwork that generates the feature embeddings. By using transfer learning, we can significantly reduce the training time and the size of the dataset needed for this task. Implementing Super-Resolution using Efficient sub-pixel model on BSDS500. Introduction ESPCN (Efficient Sub-Pixel CNN), proposed by Shi, 2016, is a model that reconstructs a high-resolution version of an image given a low-resolution version. It leverages efficient "sub-pixel convolution" layers, which learn an array of image upscaling filters. In this code example, we will implement the model from the paper and train it on a small dataset, BSDS500. Setup import tensorflow as tf import os import math import numpy as np from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.preprocessing.image import load_img from tensorflow.keras.preprocessing.image import array_to_img from tensorflow.keras.preprocessing.image import img_to_array from tensorflow.keras.preprocessing import image_dataset_from_directory from IPython.display import display Load data: BSDS500 dataset Download dataset We use the built-in keras.utils.get_file utility to retrieve the dataset. dataset_url = "http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/BSR_bsds500.tgz" data_dir = keras.utils.get_file(origin=dataset_url, fname="BSR", untar=True) root_dir = os.path.join(data_dir, "BSDS500/data") We create training and validation datasets via image_dataset_from_directory. crop_size = 300 upscale_factor = 3 input_size = crop_size // upscale_factor batch_size = 8 train_ds = image_dataset_from_directory( root_dir, batch_size=batch_size, image_size=(crop_size, crop_size), validation_split=0.2, subset="training", seed=1337, label_mode=None, ) valid_ds = image_dataset_from_directory( root_dir, batch_size=batch_size, image_size=(crop_size, crop_size), validation_split=0.2, subset="validation", seed=1337, label_mode=None, ) Found 500 files belonging to 2 classes. Using 400 files for training. Found 500 files belonging to 2 classes. Using 100 files for validation. We rescale the images to take values in the range [0, 1]. def scaling(input_image): input_image = input_image / 255.0 return input_image # Scale from (0, 255) to (0, 1) train_ds = train_ds.map(scaling) valid_ds = valid_ds.map(scaling) Let's visualize a few sample images: for batch in train_ds.take(1): for img in batch: display(array_to_img(img)) png png png png png png png png We prepare a dataset of test image paths that we will use for visual evaluation at the end of this example. dataset = os.path.join(root_dir, "images") test_path = os.path.join(dataset, "test") test_img_paths = sorted( [ os.path.join(test_path, fname) for fname in os.listdir(test_path) if fname.endswith(".jpg") ] ) Crop and resize images Let's process the image data. First, we convert our images from the RGB color space to the YUV color space. For the input data (low-resolution images), we crop the image, retrieve the y channel (luminance), and resize it with the area method (use BICUBIC if you use PIL). We only consider the luminance channel in the YUV color space because humans are more sensitive to changes in luminance. For the target data (high-resolution images), we just crop the image and retrieve the y channel. # Use TF Ops to process.
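# process_input below keeps only the luminance (Y) channel and downscales it to
# input_size x input_size with the "area" method, while process_target keeps the
# full-resolution Y channel, so the model learns to upscale luminance only.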
def process_input(input, input_size, upscale_factor): input = tf.image.rgb_to_yuv(input) last_dimension_axis = len(input.shape) - 1 y, u, v = tf.split(input, 3, axis=last_dimension_axis) return tf.image.resize(y, [input_size, input_size], method=\"area\") def process_target(input): input = tf.image.rgb_to_yuv(input) last_dimension_axis = len(input.shape) - 1 y, u, v = tf.split(input, 3, axis=last_dimension_axis) return y train_ds = train_ds.map( lambda x: (process_input(x, input_size, upscale_factor), process_target(x)) ) train_ds = train_ds.prefetch(buffer_size=32) valid_ds = valid_ds.map( lambda x: (process_input(x, input_size, upscale_factor), process_target(x)) ) valid_ds = valid_ds.prefetch(buffer_size=32) Let's take a look at the input and target data. for batch in train_ds.take(1): for img in batch[0]: display(array_to_img(img)) for img in batch[1]: display(array_to_img(img)) png png png png png png png png png png png png png png png png Build a model Compared to the paper, we add one more layer and we use the relu activation function instead of tanh. It achieves better performance even though we train the model for fewer epochs. def get_model(upscale_factor=3, channels=1): conv_args = { \"activation\": \"relu\", \"kernel_initializer\": \"Orthogonal\", \"padding\": \"same\", } inputs = keras.Input(shape=(None, None, channels)) x = layers.Conv2D(64, 5, **conv_args)(inputs) x = layers.Conv2D(64, 3, **conv_args)(x) x = layers.Conv2D(32, 3, **conv_args)(x) x = layers.Conv2D(channels * (upscale_factor ** 2), 3, **conv_args)(x) outputs = tf.nn.depth_to_space(x, upscale_factor) return keras.Model(inputs, outputs) Define utility functions We need to define several utility functions to monitor our results: plot_results to plot an save an image. get_lowres_image to convert an image to its low-resolution version. upscale_image to turn a low-resolution image to a high-resolution version reconstructed by the model. In this function, we use the y channel from the YUV color space as input to the model and then combine the output with the other channels to obtain an RGB image. import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes from mpl_toolkits.axes_grid1.inset_locator import mark_inset import PIL def plot_results(img, prefix, title): \"\"\"Plot the result with zoom-in area.\"\"\" img_array = img_to_array(img) img_array = img_array.astype(\"float32\") / 255.0 # Create a new figure with a default 111 subplot. fig, ax = plt.subplots() im = ax.imshow(img_array[::-1], origin=\"lower\") plt.title(title) # zoom-factor: 2.0, location: upper-left axins = zoomed_inset_axes(ax, 2, loc=2) axins.imshow(img_array[::-1], origin=\"lower\") # Specify the limits. x1, x2, y1, y2 = 200, 300, 100, 200 # Apply the x-limits. axins.set_xlim(x1, x2) # Apply the y-limits. axins.set_ylim(y1, y2) plt.yticks(visible=False) plt.xticks(visible=False) # Make the line. 
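# mark_inset outlines the zoomed region on the main axes and connects it to the
# inset axes with lines at corners loc1 and loc2 (drawn in blue, with no fill).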
mark_inset(ax, axins, loc1=1, loc2=3, fc=\"none\", ec=\"blue\") plt.savefig(str(prefix) + \"-\" + title + \".png\") plt.show() def get_lowres_image(img, upscale_factor): \"\"\"Return low-resolution image to use as model input.\"\"\" return img.resize( (img.size[0] // upscale_factor, img.size[1] // upscale_factor), PIL.Image.BICUBIC, ) def upscale_image(model, img): \"\"\"Predict the result based on input image and restore the image as RGB.\"\"\" ycbcr = img.convert(\"YCbCr\") y, cb, cr = ycbcr.split() y = img_to_array(y) y = y.astype(\"float32\") / 255.0 input = np.expand_dims(y, axis=0) out = model.predict(input) out_img_y = out[0] out_img_y *= 255.0 # Restore the image in RGB color space. out_img_y = out_img_y.clip(0, 255) out_img_y = out_img_y.reshape((np.shape(out_img_y)[0], np.shape(out_img_y)[1])) out_img_y = PIL.Image.fromarray(np.uint8(out_img_y), mode=\"L\") out_img_cb = cb.resize(out_img_y.size, PIL.Image.BICUBIC) out_img_cr = cr.resize(out_img_y.size, PIL.Image.BICUBIC) out_img = PIL.Image.merge(\"YCbCr\", (out_img_y, out_img_cb, out_img_cr)).convert( \"RGB\" ) return out_img Define callbacks to monitor training The ESPCNCallback object will compute and display the PSNR metric. This is the main metric we use to evaluate super-resolution performance. class ESPCNCallback(keras.callbacks.Callback): def __init__(self): super(ESPCNCallback, self).__init__() self.test_img = get_lowres_image(load_img(test_img_paths[0]), upscale_factor) # Store PSNR value in each epoch. def on_epoch_begin(self, epoch, logs=None): self.psnr = [] def on_epoch_end(self, epoch, logs=None): print(\"Mean PSNR for epoch: %.2f\" % (np.mean(self.psnr))) if epoch % 20 == 0: prediction = upscale_image(self.model, self.test_img) plot_results(prediction, \"epoch-\" + str(epoch), \"prediction\") def on_test_batch_end(self, batch, logs=None): self.psnr.append(10 * math.log10(1 / logs[\"loss\"])) Define ModelCheckpoint and EarlyStopping callbacks. 
early_stopping_callback = keras.callbacks.EarlyStopping(monitor=\"loss\", patience=10) checkpoint_filepath = \"/tmp/checkpoint\" model_checkpoint_callback = keras.callbacks.ModelCheckpoint( filepath=checkpoint_filepath, save_weights_only=True, monitor=\"loss\", mode=\"min\", save_best_only=True, ) model = get_model(upscale_factor=upscale_factor, channels=1) model.summary() callbacks = [ESPCNCallback(), early_stopping_callback, model_checkpoint_callback] loss_fn = keras.losses.MeanSquaredError() optimizer = keras.optimizers.Adam(learning_rate=0.001) Model: \"model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, None, None, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, None, None, 64) 1664 _________________________________________________________________ conv2d_1 (Conv2D) (None, None, None, 64) 36928 _________________________________________________________________ conv2d_2 (Conv2D) (None, None, None, 32) 18464 _________________________________________________________________ conv2d_3 (Conv2D) (None, None, None, 9) 2601 _________________________________________________________________ tf.nn.depth_to_space (TFOpLa (None, None, None, 1) 0 ================================================================= Total params: 59,657 Trainable params: 59,657 Non-trainable params: 0 _________________________________________________________________ Train the model epochs = 100 model.compile( optimizer=optimizer, loss=loss_fn, ) model.fit( train_ds, epochs=epochs, callbacks=callbacks, validation_data=valid_ds, verbose=2 ) # The model weights (that are considered the best) are loaded into the model. model.load_weights(checkpoint_filepath) WARNING: Logging before flag parsing goes to stderr. W0828 11:01:31.262773 4528061888 callbacks.py:1270] Automatic model reloading for interrupted job was removed from the `ModelCheckpoint` callback in multi-worker mode, please use the [`keras.callbacks.experimental.BackupAndRestore`](/api/callbacks/backup_and_restore#backupandrestore-class) callback instead. See this tutorial for details: https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#backupandrestore_callback. 
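The "Mean PSNR for epoch" values in the log below come from the callback's 10 * log10(1 / MSE) formula; because the images were rescaled to [0, 1], this matches the standard PSNR definition with a peak value of 1. Here is a quick sketch with a made-up MSE value (not part of the original example) to illustrate:

import math
import tensorflow as tf

mse = 0.0025  # hypothetical validation MSE, similar in magnitude to the log below
print(10 * math.log10(1 / mse))  # ~26.02 dB, the same formula the callback uses

# tf.image.psnr applies the same definition directly to a pair of image tensors.
a = tf.random.uniform((1, 64, 64, 1))
b = tf.clip_by_value(a + tf.random.normal((1, 64, 64, 1), stddev=0.05), 0.0, 1.0)
print(tf.image.psnr(a, b, max_val=1.0))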
Epoch 1/100 Mean PSNR for epoch: 22.03 png 50/50 - 14s - loss: 0.0259 - val_loss: 0.0063 Epoch 2/100 Mean PSNR for epoch: 24.55 50/50 - 13s - loss: 0.0049 - val_loss: 0.0034 Epoch 3/100 Mean PSNR for epoch: 25.57 50/50 - 13s - loss: 0.0035 - val_loss: 0.0029 Epoch 4/100 Mean PSNR for epoch: 26.35 50/50 - 13s - loss: 0.0031 - val_loss: 0.0026 Epoch 5/100 Mean PSNR for epoch: 25.88 50/50 - 13s - loss: 0.0029 - val_loss: 0.0026 Epoch 6/100 Mean PSNR for epoch: 26.23 50/50 - 13s - loss: 0.0030 - val_loss: 0.0025 Epoch 7/100 Mean PSNR for epoch: 26.30 50/50 - 13s - loss: 0.0028 - val_loss: 0.0025 Epoch 8/100 Mean PSNR for epoch: 26.27 50/50 - 13s - loss: 0.0028 - val_loss: 0.0025 Epoch 9/100 Mean PSNR for epoch: 26.38 50/50 - 12s - loss: 0.0028 - val_loss: 0.0025 Epoch 10/100 Mean PSNR for epoch: 26.25 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 11/100 Mean PSNR for epoch: 26.19 50/50 - 12s - loss: 0.0027 - val_loss: 0.0025 Epoch 12/100 Mean PSNR for epoch: 25.97 50/50 - 12s - loss: 0.0028 - val_loss: 0.0025 Epoch 13/100 Mean PSNR for epoch: 26.30 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 14/100 Mean PSNR for epoch: 26.43 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 15/100 Mean PSNR for epoch: 26.49 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 16/100 Mean PSNR for epoch: 26.41 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 17/100 Mean PSNR for epoch: 25.86 50/50 - 13s - loss: 0.0027 - val_loss: 0.0024 Epoch 18/100 Mean PSNR for epoch: 26.11 50/50 - 12s - loss: 0.0027 - val_loss: 0.0025 Epoch 19/100 Mean PSNR for epoch: 26.78 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 20/100 Mean PSNR for epoch: 26.59 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 21/100 Mean PSNR for epoch: 26.52 png 50/50 - 13s - loss: 0.0026 - val_loss: 0.0024 Epoch 22/100 Mean PSNR for epoch: 26.21 50/50 - 13s - loss: 0.0026 - val_loss: 0.0024 Epoch 23/100 Mean PSNR for epoch: 26.32 50/50 - 13s - loss: 0.0031 - val_loss: 0.0025 Epoch 24/100 Mean PSNR for epoch: 26.68 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 25/100 Mean PSNR for epoch: 27.03 50/50 - 12s - loss: 0.0026 - val_loss: 0.0023 Epoch 26/100 Mean PSNR for epoch: 26.31 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 27/100 Mean PSNR for epoch: 27.20 50/50 - 12s - loss: 0.0026 - val_loss: 0.0023 Epoch 28/100 Mean PSNR for epoch: 26.53 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 29/100 Mean PSNR for epoch: 26.63 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 30/100 Mean PSNR for epoch: 26.43 50/50 - 12s - loss: 0.0026 - val_loss: 0.0023 Epoch 31/100 Mean PSNR for epoch: 26.13 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 32/100 Mean PSNR for epoch: 26.50 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 33/100 Mean PSNR for epoch: 26.91 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 34/100 Mean PSNR for epoch: 26.48 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 35/100 Mean PSNR for epoch: 26.68 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 36/100 Mean PSNR for epoch: 26.82 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 37/100 Mean PSNR for epoch: 26.53 50/50 - 14s - loss: 0.0026 - val_loss: 0.0023 Epoch 38/100 Mean PSNR for epoch: 26.73 50/50 - 14s - loss: 0.0026 - val_loss: 0.0023 Epoch 39/100 Mean PSNR for epoch: 26.07 50/50 - 13s - loss: 0.0026 - val_loss: 0.0026 Epoch 40/100 Mean PSNR for epoch: 26.36 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 41/100 Mean PSNR for epoch: 26.43 png 50/50 - 14s - loss: 0.0026 - val_loss: 0.0023 Epoch 42/100 Mean PSNR 
for epoch: 26.67 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 43/100 Mean PSNR for epoch: 26.83 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 44/100 Mean PSNR for epoch: 26.81 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 45/100 Mean PSNR for epoch: 26.45 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 46/100 Mean PSNR for epoch: 26.25 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 47/100 Mean PSNR for epoch: 26.56 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 48/100 Mean PSNR for epoch: 26.28 50/50 - 13s - loss: 0.0028 - val_loss: 0.0023 Epoch 49/100 Mean PSNR for epoch: 26.52 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 50/100 Mean PSNR for epoch: 26.41 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 51/100 Mean PSNR for epoch: 26.69 50/50 - 12s - loss: 0.0025 - val_loss: 0.0023 Epoch 52/100 Mean PSNR for epoch: 26.44 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 53/100 Mean PSNR for epoch: 26.90 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 54/100 Mean PSNR for epoch: 26.43 50/50 - 13s - loss: 0.0026 - val_loss: 0.0024 Epoch 55/100 Mean PSNR for epoch: 26.83 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 56/100 Mean PSNR for epoch: 26.77 50/50 - 14s - loss: 0.0025 - val_loss: 0.0023 Epoch 57/100 Mean PSNR for epoch: 26.67 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 58/100 Mean PSNR for epoch: 26.45 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 59/100 Mean PSNR for epoch: 26.85 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 60/100 Mean PSNR for epoch: 26.41 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 61/100 Mean PSNR for epoch: 26.36 png 50/50 - 14s - loss: 0.0026 - val_loss: 0.0024 Epoch 62/100 Mean PSNR for epoch: 26.21 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 63/100 Mean PSNR for epoch: 26.36 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 64/100 Mean PSNR for epoch: 27.31 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 65/100 Mean PSNR for epoch: 26.88 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 66/100 Mean PSNR for epoch: 26.34 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 67/100 Mean PSNR for epoch: 26.65 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 68/100 Mean PSNR for epoch: 24.88 50/50 - 13s - loss: 0.0030 - val_loss: 0.0034 Epoch 69/100 Mean PSNR for epoch: 26.41 50/50 - 13s - loss: 0.0027 - val_loss: 0.0023 Epoch 70/100 Mean PSNR for epoch: 26.71 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 71/100 Mean PSNR for epoch: 26.70 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 72/100 Mean PSNR for epoch: 26.88 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 73/100 Mean PSNR for epoch: 26.72 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 74/100 Mean PSNR for epoch: 26.53 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 75/100 Mean PSNR for epoch: 26.85 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 76/100 Mean PSNR for epoch: 26.53 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 77/100 Mean PSNR for epoch: 26.50 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 78/100 Mean PSNR for epoch: 26.90 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 79/100 Mean PSNR for epoch: 26.92 50/50 - 15s - loss: 0.0025 - val_loss: 0.0022 Epoch 80/100 Mean PSNR for epoch: 27.00 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 81/100 Mean PSNR for epoch: 26.89 png 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 82/100 Mean PSNR for epoch: 26.62 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 83/100 Mean PSNR for epoch: 26.85 
50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 84/100 Mean PSNR for epoch: 26.69 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 85/100 Mean PSNR for epoch: 26.81 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 86/100 Mean PSNR for epoch: 26.16 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 87/100 Mean PSNR for epoch: 26.48 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 88/100 Mean PSNR for epoch: 25.62 50/50 - 14s - loss: 0.0026 - val_loss: 0.0027 Epoch 89/100 Mean PSNR for epoch: 26.55 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 90/100 Mean PSNR for epoch: 26.20 50/50 - 14s - loss: 0.0025 - val_loss: 0.0023 Epoch 91/100 Mean PSNR for epoch: 26.35 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 92/100 Mean PSNR for epoch: 26.85 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 93/100 Mean PSNR for epoch: 26.83 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 94/100 Mean PSNR for epoch: 26.63 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 95/100 Mean PSNR for epoch: 25.94 50/50 - 13s - loss: 0.0025 - val_loss: 0.0024 Epoch 96/100 Mean PSNR for epoch: 26.47 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 97/100 Mean PSNR for epoch: 26.42 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 98/100 Mean PSNR for epoch: 26.33 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 99/100 Mean PSNR for epoch: 26.55 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 100/100 Mean PSNR for epoch: 27.08 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Run model prediction and plot the results Let's compute the reconstructed version of a few images and save the results. total_bicubic_psnr = 0.0 total_test_psnr = 0.0 for index, test_img_path in enumerate(test_img_paths[50:60]): img = load_img(test_img_path) lowres_input = get_lowres_image(img, upscale_factor) w = lowres_input.size[0] * upscale_factor h = lowres_input.size[1] * upscale_factor highres_img = img.resize((w, h)) prediction = upscale_image(model, lowres_input) lowres_img = lowres_input.resize((w, h)) lowres_img_arr = img_to_array(lowres_img) highres_img_arr = img_to_array(highres_img) predict_img_arr = img_to_array(prediction) bicubic_psnr = tf.image.psnr(lowres_img_arr, highres_img_arr, max_val=255) test_psnr = tf.image.psnr(predict_img_arr, highres_img_arr, max_val=255) total_bicubic_psnr += bicubic_psnr total_test_psnr += test_psnr print( \"PSNR of low resolution image and high resolution image is %.4f\" % bicubic_psnr ) print(\"PSNR of predict and high resolution is %.4f\" % test_psnr) plot_results(lowres_img, index, \"lowres\") plot_results(highres_img, index, \"highres\") plot_results(prediction, index, \"prediction\") print(\"Avg. PSNR of lowres images is %.4f\" % (total_bicubic_psnr / 10)) print(\"Avg. 
PSNR of reconstructions is %.4f" % (total_test_psnr / 10)) PSNR of low resolution image and high resolution image is 28.2682 PSNR of predict and high resolution is 29.7881 png png png PSNR of low resolution image and high resolution image is 23.0465 PSNR of predict and high resolution is 25.1304 png png png PSNR of low resolution image and high resolution image is 25.4113 PSNR of predict and high resolution is 27.3936 png png png PSNR of low resolution image and high resolution image is 26.5175 PSNR of predict and high resolution is 27.1014 png png png PSNR of low resolution image and high resolution image is 24.2559 PSNR of predict and high resolution is 25.7635 png png png PSNR of low resolution image and high resolution image is 23.9661 PSNR of predict and high resolution is 25.9522 png png png PSNR of low resolution image and high resolution image is 24.3061 PSNR of predict and high resolution is 26.3963 png png png PSNR of low resolution image and high resolution image is 21.7309 PSNR of predict and high resolution is 23.8342 png png png PSNR of low resolution image and high resolution image is 28.8549 PSNR of predict and high resolution is 29.6143 png png png PSNR of low resolution image and high resolution image is 23.9198 PSNR of predict and high resolution is 25.2592 png png png Avg. PSNR of lowres images is 25.0277 Avg. PSNR of reconstructions is 26.6233 Deep dive into location-specific and channel-agnostic involution kernels. Introduction Convolution has been the basis of most modern neural networks for computer vision. A convolution kernel is spatial-agnostic and channel-specific. Because of this, it isn't able to adapt to different visual patterns with respect to different spatial locations. Along with location-related problems, the receptive field of convolution creates challenges with regard to capturing long-range spatial interactions. To address the above issues, Li et al. rethink the properties of convolution in Involution: Inverting the Inherence of Convolution for Visual Recognition. The authors propose the "involution kernel", which is location-specific and channel-agnostic. Due to the location-specific nature of the operation, the authors say that self-attention falls under the design paradigm of involution. This example describes the involution kernel, compares two image classification models, one with convolution and the other with involution, and also tries drawing a parallel with the self-attention layer. Setup import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt # Set seed for reproducibility. tf.random.set_seed(42) Convolution Convolution remains the mainstay of deep neural networks for computer vision. To understand Involution, it is necessary to talk about the convolution operation. Imgur Consider an input tensor X with dimensions H, W and C_in. We take a collection of C_out convolution kernels, each of shape K, K, C_in. With the multiply-add operation between the input tensor and the kernels, we obtain an output tensor Y with dimensions H, W, C_out. In the diagram above, C_out=3. This makes the output tensor have shape H, W and 3. One can notice that the convolution kernel does not depend on the spatial position of the input tensor, which makes it location-agnostic. On the other hand, each channel in the output tensor is based on a specific convolution filter, which makes it channel-specific. Involution The idea is to have an operation that is both location-specific and channel-agnostic.
Trying to implement these specific properties poses a challenge. With a fixed number of involution kernels (for each spatial position) we will not be able to process variable-resolution input tensors. To solve this problem, the authors have considered generating each kernel conditioned on specific spatial positions. With this method, we should be able to process variable-resolution input tensors with ease. The diagram below provides an intuition on this kernel generation method. Imgur class Involution(keras.layers.Layer): def __init__( self, channel, group_number, kernel_size, stride, reduction_ratio, name ): super().__init__(name=name) # Initialize the parameters. self.channel = channel self.group_number = group_number self.kernel_size = kernel_size self.stride = stride self.reduction_ratio = reduction_ratio def build(self, input_shape): # Get the shape of the input. (_, height, width, num_channels) = input_shape # Scale the height and width with respect to the strides. height = height // self.stride width = width // self.stride # Define a layer that average pools the input tensor # if stride is more than 1. self.stride_layer = ( keras.layers.AveragePooling2D( pool_size=self.stride, strides=self.stride, padding=\"same\" ) if self.stride > 1 else tf.identity ) # Define the kernel generation layer. self.kernel_gen = keras.Sequential( [ keras.layers.Conv2D( filters=self.channel // self.reduction_ratio, kernel_size=1 ), keras.layers.BatchNormalization(), keras.layers.ReLU(), keras.layers.Conv2D( filters=self.kernel_size * self.kernel_size * self.group_number, kernel_size=1, ), ] ) # Define reshape layers self.kernel_reshape = keras.layers.Reshape( target_shape=( height, width, self.kernel_size * self.kernel_size, 1, self.group_number, ) ) self.input_patches_reshape = keras.layers.Reshape( target_shape=( height, width, self.kernel_size * self.kernel_size, num_channels // self.group_number, self.group_number, ) ) self.output_reshape = keras.layers.Reshape( target_shape=(height, width, num_channels) ) def call(self, x): # Generate the kernel with respect to the input tensor. # B, H, W, K*K*G kernel_input = self.stride_layer(x) kernel = self.kernel_gen(kernel_input) # reshape the kerenl # B, H, W, K*K, 1, G kernel = self.kernel_reshape(kernel) # Extract input patches. # B, H, W, K*K*C input_patches = tf.image.extract_patches( images=x, sizes=[1, self.kernel_size, self.kernel_size, 1], strides=[1, self.stride, self.stride, 1], rates=[1, 1, 1, 1], padding=\"SAME\", ) # Reshape the input patches to align with later operations. # B, H, W, K*K, C//G, G input_patches = self.input_patches_reshape(input_patches) # Compute the multiply-add operation of kernels and patches. # B, H, W, K*K, C//G, G output = tf.multiply(kernel, input_patches) # B, H, W, C//G, G output = tf.reduce_sum(output, axis=3) # Reshape the output kernel. # B, H, W, C output = self.output_reshape(output) # Return the output tensor and the kernel. return output, kernel Testing the Involution layer # Define the input tensor. input_tensor = tf.random.normal((32, 256, 256, 3)) # Compute involution with stride 1. output_tensor, _ = Involution( channel=3, group_number=1, kernel_size=5, stride=1, reduction_ratio=1, name=\"inv_1\" )(input_tensor) print(f\"with stride 1 ouput shape: {output_tensor.shape}\") # Compute involution with stride 2. 
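# With stride 2, the layer average-pools its kernel-generation input and strides
# the patch extraction, so the spatial size is halved (256 -> 128) while the
# channel count stays at 3.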
output_tensor, _ = Involution( channel=3, group_number=1, kernel_size=5, stride=2, reduction_ratio=1, name=\"inv_2\" )(input_tensor) print(f\"with stride 2 ouput shape: {output_tensor.shape}\") # Compute involution with stride 1, channel 16 and reduction ratio 2. output_tensor, _ = Involution( channel=16, group_number=1, kernel_size=5, stride=1, reduction_ratio=2, name=\"inv_3\" )(input_tensor) print( \"with channel 16 and reduction ratio 2 ouput shape: {}\".format(output_tensor.shape) ) with stride 1 ouput shape: (32, 256, 256, 3) with stride 2 ouput shape: (32, 128, 128, 3) with channel 16 and reduction ratio 2 ouput shape: (32, 256, 256, 3) Image Classification In this section, we will build an image-classifier model. There will be two models one with convolutions and the other with involutions. The image-classification model is heavily inspired by this Convolutional Neural Network (CNN) tutorial from Google. Get the CIFAR10 Dataset # Load the CIFAR10 dataset. print(\"loading the CIFAR10 dataset...\") (train_images, train_labels), ( test_images, test_labels, ) = keras.datasets.cifar10.load_data() # Normalize pixel values to be between 0 and 1. (train_images, test_images) = (train_images / 255.0, test_images / 255.0) # Shuffle and batch the dataset. train_ds = ( tf.data.Dataset.from_tensor_slices((train_images, train_labels)) .shuffle(256) .batch(256) ) test_ds = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(256) loading the CIFAR10 dataset... Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz 170500096/170498071 [==============================] - 3s 0us/step Visualise the data class_names = [ \"airplane\", \"automobile\", \"bird\", \"cat\", \"deer\", \"dog\", \"frog\", \"horse\", \"ship\", \"truck\", ] plt.figure(figsize=(10, 10)) for i in range(25): plt.subplot(5, 5, i + 1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i]) plt.xlabel(class_names[train_labels[i][0]]) plt.show() png Convolutional Neural Network # Build the conv model. print(\"building the convolution model...\") conv_model = keras.Sequential( [ keras.layers.Conv2D(32, (3, 3), input_shape=(32, 32, 3), padding=\"same\"), keras.layers.ReLU(name=\"relu1\"), keras.layers.MaxPooling2D((2, 2)), keras.layers.Conv2D(64, (3, 3), padding=\"same\"), keras.layers.ReLU(name=\"relu2\"), keras.layers.MaxPooling2D((2, 2)), keras.layers.Conv2D(64, (3, 3), padding=\"same\"), keras.layers.ReLU(name=\"relu3\"), keras.layers.Flatten(), keras.layers.Dense(64, activation=\"relu\"), keras.layers.Dense(10), ] ) # Compile the mode with the necessary loss function and optimizer. print(\"compiling the convolution model...\") conv_model.compile( optimizer=\"adam\", loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[\"accuracy\"], ) # Train the model. print(\"conv model training...\") conv_hist = conv_model.fit(train_ds, epochs=20, validation_data=test_ds) building the convolution model... compiling the convolution model... conv model training... 
Epoch 1/20 196/196 [==============================] - 16s 16ms/step - loss: 1.6367 - accuracy: 0.4041 - val_loss: 1.3283 - val_accuracy: 0.5275 Epoch 2/20 196/196 [==============================] - 3s 16ms/step - loss: 1.2207 - accuracy: 0.5675 - val_loss: 1.1365 - val_accuracy: 0.5965 Epoch 3/20 196/196 [==============================] - 3s 16ms/step - loss: 1.0649 - accuracy: 0.6267 - val_loss: 1.0219 - val_accuracy: 0.6378 Epoch 4/20 196/196 [==============================] - 3s 16ms/step - loss: 0.9642 - accuracy: 0.6613 - val_loss: 0.9741 - val_accuracy: 0.6601 Epoch 5/20 196/196 [==============================] - 3s 16ms/step - loss: 0.8779 - accuracy: 0.6939 - val_loss: 0.9145 - val_accuracy: 0.6826 Epoch 6/20 196/196 [==============================] - 3s 16ms/step - loss: 0.8126 - accuracy: 0.7180 - val_loss: 0.8841 - val_accuracy: 0.6913 Epoch 7/20 196/196 [==============================] - 3s 16ms/step - loss: 0.7641 - accuracy: 0.7334 - val_loss: 0.8667 - val_accuracy: 0.7049 Epoch 8/20 196/196 [==============================] - 3s 16ms/step - loss: 0.7210 - accuracy: 0.7503 - val_loss: 0.8363 - val_accuracy: 0.7089 Epoch 9/20 196/196 [==============================] - 3s 16ms/step - loss: 0.6796 - accuracy: 0.7630 - val_loss: 0.8150 - val_accuracy: 0.7203 Epoch 10/20 196/196 [==============================] - 3s 15ms/step - loss: 0.6370 - accuracy: 0.7793 - val_loss: 0.9021 - val_accuracy: 0.6964 Epoch 11/20 196/196 [==============================] - 3s 15ms/step - loss: 0.6089 - accuracy: 0.7886 - val_loss: 0.8336 - val_accuracy: 0.7207 Epoch 12/20 196/196 [==============================] - 3s 15ms/step - loss: 0.5723 - accuracy: 0.8022 - val_loss: 0.8326 - val_accuracy: 0.7246 Epoch 13/20 196/196 [==============================] - 3s 15ms/step - loss: 0.5375 - accuracy: 0.8144 - val_loss: 0.8482 - val_accuracy: 0.7223 Epoch 14/20 196/196 [==============================] - 3s 15ms/step - loss: 0.5121 - accuracy: 0.8230 - val_loss: 0.8244 - val_accuracy: 0.7306 Epoch 15/20 196/196 [==============================] - 3s 15ms/step - loss: 0.4786 - accuracy: 0.8363 - val_loss: 0.8313 - val_accuracy: 0.7363 Epoch 16/20 196/196 [==============================] - 3s 15ms/step - loss: 0.4518 - accuracy: 0.8458 - val_loss: 0.8634 - val_accuracy: 0.7293 Epoch 17/20 196/196 [==============================] - 3s 16ms/step - loss: 0.4403 - accuracy: 0.8489 - val_loss: 0.8683 - val_accuracy: 0.7290 Epoch 18/20 196/196 [==============================] - 3s 16ms/step - loss: 0.4094 - accuracy: 0.8576 - val_loss: 0.8982 - val_accuracy: 0.7272 Epoch 19/20 196/196 [==============================] - 3s 16ms/step - loss: 0.3941 - accuracy: 0.8630 - val_loss: 0.9537 - val_accuracy: 0.7200 Epoch 20/20 196/196 [==============================] - 3s 15ms/step - loss: 0.3778 - accuracy: 0.8691 - val_loss: 0.9780 - val_accuracy: 0.7184 Involutional Neural Network # Build the involution model. 
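# The involution classifier mirrors the convolutional model above, but each
# Conv2D block is replaced by an Involution layer with kernel_size=3 that keeps
# the channel count at 3; the ReLU / pooling layout and the dense head stay the same.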
print(\"building the involution model...\") inputs = keras.Input(shape=(32, 32, 3)) x, _ = Involution( channel=3, group_number=1, kernel_size=3, stride=1, reduction_ratio=2, name=\"inv_1\" )(inputs) x = keras.layers.ReLU()(x) x = keras.layers.MaxPooling2D((2, 2))(x) x, _ = Involution( channel=3, group_number=1, kernel_size=3, stride=1, reduction_ratio=2, name=\"inv_2\" )(x) x = keras.layers.ReLU()(x) x = keras.layers.MaxPooling2D((2, 2))(x) x, _ = Involution( channel=3, group_number=1, kernel_size=3, stride=1, reduction_ratio=2, name=\"inv_3\" )(x) x = keras.layers.ReLU()(x) x = keras.layers.Flatten()(x) x = keras.layers.Dense(64, activation=\"relu\")(x) outputs = keras.layers.Dense(10)(x) inv_model = keras.Model(inputs=[inputs], outputs=[outputs], name=\"inv_model\") # Compile the mode with the necessary loss function and optimizer. print(\"compiling the involution model...\") inv_model.compile( optimizer=\"adam\", loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[\"accuracy\"], ) # train the model print(\"inv model training...\") inv_hist = inv_model.fit(train_ds, epochs=20, validation_data=test_ds) building the involution model... compiling the involution model... inv model training... Epoch 1/20 196/196 [==============================] - 5s 21ms/step - loss: 2.1570 - accuracy: 0.2266 - val_loss: 2.2712 - val_accuracy: 0.1557 Epoch 2/20 196/196 [==============================] - 4s 20ms/step - loss: 1.9445 - accuracy: 0.3054 - val_loss: 1.9762 - val_accuracy: 0.2963 Epoch 3/20 196/196 [==============================] - 4s 20ms/step - loss: 1.8469 - accuracy: 0.3433 - val_loss: 1.8044 - val_accuracy: 0.3669 Epoch 4/20 196/196 [==============================] - 4s 20ms/step - loss: 1.7837 - accuracy: 0.3646 - val_loss: 1.7640 - val_accuracy: 0.3761 Epoch 5/20 196/196 [==============================] - 4s 20ms/step - loss: 1.7369 - accuracy: 0.3784 - val_loss: 1.7180 - val_accuracy: 0.3907 Epoch 6/20 196/196 [==============================] - 4s 19ms/step - loss: 1.7031 - accuracy: 0.3917 - val_loss: 1.6839 - val_accuracy: 0.4004 Epoch 7/20 196/196 [==============================] - 4s 19ms/step - loss: 1.6748 - accuracy: 0.3988 - val_loss: 1.6786 - val_accuracy: 0.4037 Epoch 8/20 196/196 [==============================] - 4s 19ms/step - loss: 1.6592 - accuracy: 0.4052 - val_loss: 1.6550 - val_accuracy: 0.4103 Epoch 9/20 196/196 [==============================] - 4s 19ms/step - loss: 1.6412 - accuracy: 0.4106 - val_loss: 1.6346 - val_accuracy: 0.4158 Epoch 10/20 196/196 [==============================] - 4s 19ms/step - loss: 1.6251 - accuracy: 0.4178 - val_loss: 1.6330 - val_accuracy: 0.4145 Epoch 11/20 196/196 [==============================] - 4s 19ms/step - loss: 1.6124 - accuracy: 0.4206 - val_loss: 1.6214 - val_accuracy: 0.4218 Epoch 12/20 196/196 [==============================] - 4s 19ms/step - loss: 1.5978 - accuracy: 0.4252 - val_loss: 1.6121 - val_accuracy: 0.4239 Epoch 13/20 196/196 [==============================] - 4s 19ms/step - loss: 1.5868 - accuracy: 0.4301 - val_loss: 1.5974 - val_accuracy: 0.4284 Epoch 14/20 196/196 [==============================] - 4s 19ms/step - loss: 1.5759 - accuracy: 0.4353 - val_loss: 1.5939 - val_accuracy: 0.4325 Epoch 15/20 196/196 [==============================] - 4s 19ms/step - loss: 1.5677 - accuracy: 0.4369 - val_loss: 1.5889 - val_accuracy: 0.4372 Epoch 16/20 196/196 [==============================] - 4s 20ms/step - loss: 1.5586 - accuracy: 0.4413 - val_loss: 1.5817 - val_accuracy: 0.4376 Epoch 17/20 196/196 
[==============================] - 4s 20ms/step - loss: 1.5507 - accuracy: 0.4447 - val_loss: 1.5776 - val_accuracy: 0.4381 Epoch 18/20 196/196 [==============================] - 4s 20ms/step - loss: 1.5420 - accuracy: 0.4477 - val_loss: 1.5785 - val_accuracy: 0.4378 Epoch 19/20 196/196 [==============================] - 4s 20ms/step - loss: 1.5357 - accuracy: 0.4484 - val_loss: 1.5639 - val_accuracy: 0.4431 Epoch 20/20 196/196 [==============================] - 4s 20ms/step - loss: 1.5305 - accuracy: 0.4530 - val_loss: 1.5661 - val_accuracy: 0.4418 Comparisons In this section, we will be looking at both the models and compare a few pointers. Parameters One can see that with a similar architecture the parameters in a CNN is much larger than that of an INN (Involutional Neural Network). conv_model.summary() inv_model.summary() Model: \"sequential_3\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_6 (Conv2D) (None, 32, 32, 32) 896 _________________________________________________________________ relu1 (ReLU) (None, 32, 32, 32) 0 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 16, 16, 32) 0 _________________________________________________________________ conv2d_7 (Conv2D) (None, 16, 16, 64) 18496 _________________________________________________________________ relu2 (ReLU) (None, 16, 16, 64) 0 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 8, 8, 64) 0 _________________________________________________________________ conv2d_8 (Conv2D) (None, 8, 8, 64) 36928 _________________________________________________________________ relu3 (ReLU) (None, 8, 8, 64) 0 _________________________________________________________________ flatten (Flatten) (None, 4096) 0 _________________________________________________________________ dense (Dense) (None, 64) 262208 _________________________________________________________________ dense_1 (Dense) (None, 10) 650 ================================================================= Total params: 319,178 Trainable params: 319,178 Non-trainable params: 0 _________________________________________________________________ Model: \"inv_model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 32, 32, 3)] 0 _________________________________________________________________ inv_1 (Involution) ((None, 32, 32, 3), (None 26 _________________________________________________________________ re_lu_3 (ReLU) (None, 32, 32, 3) 0 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 16, 16, 3) 0 _________________________________________________________________ inv_2 (Involution) ((None, 16, 16, 3), (None 26 _________________________________________________________________ re_lu_4 (ReLU) (None, 16, 16, 3) 0 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 8, 8, 3) 0 _________________________________________________________________ inv_3 (Involution) ((None, 8, 8, 3), (None, 26 _________________________________________________________________ re_lu_5 (ReLU) (None, 8, 8, 3) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 192) 0 
_________________________________________________________________ dense_2 (Dense) (None, 64) 12352 _________________________________________________________________ dense_3 (Dense) (None, 10) 650 ================================================================= Total params: 13,080 Trainable params: 13,074 Non-trainable params: 6 _________________________________________________________________ Loss and Accuracy Plots Here, the loss and the accuracy plots demonstrate that INNs are slow learners (with fewer parameters). plt.figure(figsize=(20, 5)) plt.subplot(1, 2, 1) plt.title(\"Convolution Loss\") plt.plot(conv_hist.history[\"loss\"], label=\"loss\") plt.plot(conv_hist.history[\"val_loss\"], label=\"val_loss\") plt.legend() plt.subplot(1, 2, 2) plt.title(\"Involution Loss\") plt.plot(inv_hist.history[\"loss\"], label=\"loss\") plt.plot(inv_hist.history[\"val_loss\"], label=\"val_loss\") plt.legend() plt.show() plt.figure(figsize=(20, 5)) plt.subplot(1, 2, 1) plt.title(\"Convolution Accuracy\") plt.plot(conv_hist.history[\"accuracy\"], label=\"accuracy\") plt.plot(conv_hist.history[\"val_accuracy\"], label=\"val_accuracy\") plt.legend() plt.subplot(1, 2, 2) plt.title(\"Involution Accuracy\") plt.plot(inv_hist.history[\"accuracy\"], label=\"accuracy\") plt.plot(inv_hist.history[\"val_accuracy\"], label=\"val_accuracy\") plt.legend() plt.show() png png Visualizing Involution Kernels To visualize the kernels, we take the sum of the K×K values from each involution kernel. The representatives from all spatial locations together form the corresponding heat map. The authors mention: \"Our proposed involution is reminiscent of self-attention and essentially could become a generalized version of it.\" With the visualization of the kernels, we can indeed obtain an attention map of the image. The learned involution kernels provide attention to individual spatial positions of the input tensor. This location-specific property makes involution a generic space of models to which self-attention belongs. layer_names = [\"inv_1\", \"inv_2\", \"inv_3\"] outputs = [inv_model.get_layer(name).output for name in layer_names] vis_model = keras.Model(inv_model.input, outputs) fig, axes = plt.subplots(nrows=10, ncols=4, figsize=(10, 30)) for ax, test_image in zip(axes, test_images[:10]): (inv1_out, inv2_out, inv3_out) = vis_model.predict(test_image[None, ...]) _, inv1_kernel = inv1_out _, inv2_kernel = inv2_out _, inv3_kernel = inv3_out inv1_kernel = tf.reduce_sum(inv1_kernel, axis=[-1, -2, -3]) inv2_kernel = tf.reduce_sum(inv2_kernel, axis=[-1, -2, -3]) inv3_kernel = tf.reduce_sum(inv3_kernel, axis=[-1, -2, -3]) ax[0].imshow(keras.preprocessing.image.array_to_img(test_image)) ax[0].set_title(\"Input Image\") ax[1].imshow(keras.preprocessing.image.array_to_img(inv1_kernel[0, ..., None])) ax[1].set_title(\"Involution Kernel 1\") ax[2].imshow(keras.preprocessing.image.array_to_img(inv2_kernel[0, ..., None])) ax[2].set_title(\"Involution Kernel 2\") ax[3].imshow(keras.preprocessing.image.array_to_img(inv3_kernel[0, ..., None])) ax[3].set_title(\"Involution Kernel 3\") png Conclusions In this example, the main focus was to build an Involution layer that can be easily reused. While our comparisons were based on a specific task, feel free to use the layer for different tasks and report your results. In my view, the key takeaway of involution is its relationship with self-attention. The intuition behind location-specific and channel-specific processing makes sense in a lot of tasks.
Moving forward one can: Look at Yannick's video on involution for a better understanding. Experiment with the various hyperparameters of the involution layer. Build different models with the involution layer. Try building a different kernel generation method altogether. Training a keypoint detector with data augmentation and transfer learning. Keypoint detection consists of locating key object parts. For example, the key parts of our faces include nose tips, eyebrows, eye corners, and so on. These parts help to represent the underlying object in a feature-rich manner. Keypoint detection has applications that include pose estimation, face detection, etc. In this example, we will build a keypoint detector using the StanfordExtra dataset, using transfer learning. This example requires TensorFlow 2.4 or higher, as well as imgaug library, which can be installed using the following command: !pip install -q -U imgaug Data collection The StanfordExtra dataset contains 12,000 images of dogs together with keypoints and segmentation maps. It is developed from the Stanford dogs dataset. It can be downloaded with the command below: !wget -q http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar Annotations are provided as a single JSON file in the StanfordExtra dataset and one needs to fill this form to get access to it. The authors explicitly instruct users not to share the JSON file, and this example respects this wish: you should obtain the JSON file yourself. The JSON file is expected to be locally available as stanfordextra_v12.zip. After the files are downloaded, we can extract the archives. !tar xf images.tar !unzip -qq ~/stanfordextra_v12.zip Imports from tensorflow.keras import layers from tensorflow import keras import tensorflow as tf from imgaug.augmentables.kps import KeypointsOnImage from imgaug.augmentables.kps import Keypoint import imgaug.augmenters as iaa from PIL import Image from sklearn.model_selection import train_test_split from matplotlib import pyplot as plt import pandas as pd import numpy as np import json import os Define hyperparameters IMG_SIZE = 224 BATCH_SIZE = 64 EPOCHS = 5 NUM_KEYPOINTS = 24 * 2 # 24 pairs each having x and y coordinates Load data The authors also provide a metadata file that specifies additional information about the keypoints, like color information, animal pose name, etc. We will load this file in a pandas dataframe to extract information for visualization purposes. IMG_DIR = \"Images\" JSON = \"StanfordExtra_V12/StanfordExtra_v12.json\" KEYPOINT_DEF = ( \"https://github.com/benjiebob/StanfordExtra/raw/master/keypoint_definitions.csv\" ) # Load the ground-truth annotations. with open(JSON) as infile: json_data = json.load(infile) # Set up a dictionary, mapping all the ground-truth information # with respect to the path of the image. 
json_dict = {i[\"img_path\"]: i for i in json_data} A single entry of json_dict looks like the following: 'n02085782-Japanese_spaniel/n02085782_2886.jpg': {'img_bbox': [205, 20, 116, 201], 'img_height': 272, 'img_path': 'n02085782-Japanese_spaniel/n02085782_2886.jpg', 'img_width': 350, 'is_multiple_dogs': False, 'joints': [[108.66666666666667, 252.0, 1], [147.66666666666666, 229.0, 1], [163.5, 208.5, 1], [0, 0, 0], [0, 0, 0], [0, 0, 0], [54.0, 244.0, 1], [77.33333333333333, 225.33333333333334, 1], [79.0, 196.5, 1], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [150.66666666666666, 86.66666666666667, 1], [88.66666666666667, 73.0, 1], [116.0, 106.33333333333333, 1], [109.0, 123.33333333333333, 1], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], 'seg': ...} In this example, the keys we are interested in are: img_path joints There are a total of 24 entries present inside joints. Each entry has 3 values: x-coordinate y-coordinate visibility flag of the keypoints (1 indicates visibility and 0 indicates non-visibility) As we can see joints contain multiple [0, 0, 0] entries which denote that those keypoints were not labeled. In this example, we will consider both non-visible as well as unlabeled keypoints in order to allow mini-batch learning. # Load the metdata definition file and preview it. keypoint_def = pd.read_csv(KEYPOINT_DEF) keypoint_def.head() # Extract the colours and labels. colours = keypoint_def[\"Hex colour\"].values.tolist() colours = [\"#\" + colour for colour in colours] labels = keypoint_def[\"Name\"].values.tolist() # Utility for reading an image and for getting its annotations. def get_dog(name): data = json_dict[name] img_data = plt.imread(os.path.join(IMG_DIR, data[\"img_path\"])) # If the image is RGBA convert it to RGB. if img_data.shape[-1] == 4: img_data = img_data.astype(np.uint8) img_data = Image.fromarray(img_data) img_data = np.array(img_data.convert(\"RGB\")) data[\"img_data\"] = img_data return data Visualize data Now, we write a utility function to visualize the images and their keypoints. # Parts of this code come from here: # https://github.com/benjiebob/StanfordExtra/blob/master/demo.ipynb def visualize_keypoints(images, keypoints): fig, axes = plt.subplots(nrows=len(images), ncols=2, figsize=(16, 12)) [ax.axis(\"off\") for ax in np.ravel(axes)] for (ax_orig, ax_all), image, current_keypoint in zip(axes, images, keypoints): ax_orig.imshow(image) ax_all.imshow(image) # If the keypoints were formed by `imgaug` then the coordinates need # to be iterated differently. if isinstance(current_keypoint, KeypointsOnImage): for idx, kp in enumerate(current_keypoint.keypoints): ax_all.scatter( [kp.x], [kp.y], c=colours[idx], marker=\"x\", s=50, linewidths=5 ) else: current_keypoint = np.array(current_keypoint) # Since the last entry is the visibility flag, we discard it. current_keypoint = current_keypoint[:, :2] for idx, (x, y) in enumerate(current_keypoint): ax_all.scatter([x], [y], c=colours[idx], marker=\"x\", s=50, linewidths=5) plt.tight_layout(pad=2.0) plt.show() # Select four samples randomly for visualization. 
samples = list(json_dict.keys()) num_samples = 4 selected_samples = np.random.choice(samples, num_samples, replace=False) images, keypoints = [], [] for sample in selected_samples: data = get_dog(sample) image = data[\"img_data\"] keypoint = data[\"joints\"] images.append(image) keypoints.append(keypoint) visualize_keypoints(images, keypoints) png The plots show that we have images of non-uniform sizes, which is expected in most real-world scenarios. However, if we resize these images to have a uniform shape (for instance (224 x 224)) their ground-truth annotations will also be affected. The same applies if we apply any geometric transformation (horizontal flip, for e.g.) to an image. Fortunately, imgaug provides utilities that can handle this issue. In the next section, we will write a data generator inheriting the [keras.utils.Sequence](/api/utils/python_utils#sequence-class) class that applies data augmentation on batches of data using imgaug. Prepare data generator class KeyPointsDataset(keras.utils.Sequence): def __init__(self, image_keys, aug, batch_size=BATCH_SIZE, train=True): self.image_keys = image_keys self.aug = aug self.batch_size = batch_size self.train = train self.on_epoch_end() def __len__(self): return len(self.image_keys) // self.batch_size def on_epoch_end(self): self.indexes = np.arange(len(self.image_keys)) if self.train: np.random.shuffle(self.indexes) def __getitem__(self, index): indexes = self.indexes[index * self.batch_size : (index + 1) * self.batch_size] image_keys_temp = [self.image_keys[k] for k in indexes] (images, keypoints) = self.__data_generation(image_keys_temp) return (images, keypoints) def __data_generation(self, image_keys_temp): batch_images = np.empty((self.batch_size, IMG_SIZE, IMG_SIZE, 3), dtype=\"int\") batch_keypoints = np.empty( (self.batch_size, 1, 1, NUM_KEYPOINTS), dtype=\"float32\" ) for i, key in enumerate(image_keys_temp): data = get_dog(key) current_keypoint = np.array(data[\"joints\"])[:, :2] kps = [] # To apply our data augmentation pipeline, we first need to # form Keypoint objects with the original coordinates. for j in range(0, len(current_keypoint)): kps.append(Keypoint(x=current_keypoint[j][0], y=current_keypoint[j][1])) # We then project the original image and its keypoint coordinates. current_image = data[\"img_data\"] kps_obj = KeypointsOnImage(kps, shape=current_image.shape) # Apply the augmentation pipeline. (new_image, new_kps_obj) = self.aug(image=current_image, keypoints=kps_obj) batch_images[i,] = new_image # Parse the coordinates from the new keypoint object. kp_temp = [] for keypoint in new_kps_obj: kp_temp.append(np.nan_to_num(keypoint.x)) kp_temp.append(np.nan_to_num(keypoint.y)) # More on why this reshaping later. batch_keypoints[i,] = np.array(kp_temp).reshape(1, 1, 24 * 2) # Scale the coordinates to [0, 1] range. batch_keypoints = batch_keypoints / IMG_SIZE return (batch_images, batch_keypoints) To know more about how to operate with keypoints in imgaug check out this document. Define augmentation transforms train_aug = iaa.Sequential( [ iaa.Resize(IMG_SIZE, interpolation=\"linear\"), iaa.Fliplr(0.3), # `Sometimes()` applies a function randomly to the inputs with # a given probability (0.3, in this case). 
iaa.Sometimes(0.3, iaa.Affine(rotate=10, scale=(0.5, 0.7))), ] ) test_aug = iaa.Sequential([iaa.Resize(IMG_SIZE, interpolation=\"linear\")]) Create training and validation splits np.random.shuffle(samples) train_keys, validation_keys = ( samples[int(len(samples) * 0.15) :], samples[: int(len(samples) * 0.15)], ) Data generator investigation train_dataset = KeyPointsDataset(train_keys, train_aug) validation_dataset = KeyPointsDataset(validation_keys, test_aug, train=False) print(f\"Total batches in training set: {len(train_dataset)}\") print(f\"Total batches in validation set: {len(validation_dataset)}\") sample_images, sample_keypoints = next(iter(train_dataset)) assert sample_keypoints.max() == 1.0 assert sample_keypoints.min() == 0.0 sample_keypoints = sample_keypoints[:4].reshape(-1, 24, 2) * IMG_SIZE visualize_keypoints(sample_images[:4], sample_keypoints) Total batches in training set: 166 Total batches in validation set: 29 png Model building The Stanford dogs dataset (on which the StanfordExtra dataset is based) was built using the ImageNet-1k dataset. So, it is likely that the models pretrained on the ImageNet-1k dataset would be useful for this task. We will use a MobileNetV2 pre-trained on this dataset as a backbone to extract meaningful features from the images and then pass those to a custom regression head for predicting coordinates. def get_model(): # Load the pre-trained weights of MobileNetV2 and freeze the weights backbone = keras.applications.MobileNetV2( weights=\"imagenet\", include_top=False, input_shape=(IMG_SIZE, IMG_SIZE, 3) ) backbone.trainable = False inputs = layers.Input((IMG_SIZE, IMG_SIZE, 3)) x = keras.applications.mobilenet_v2.preprocess_input(inputs) x = backbone(x) x = layers.Dropout(0.3)(x) x = layers.SeparableConv2D( NUM_KEYPOINTS, kernel_size=5, strides=1, activation=\"relu\" )(x) outputs = layers.SeparableConv2D( NUM_KEYPOINTS, kernel_size=3, strides=1, activation=\"sigmoid\" )(x) return keras.Model(inputs, outputs, name=\"keypoint_detector\") Our custom network is fully-convolutional which makes it more parameter-friendly than the same version of the network having fully-connected dense layers. get_model().summary() Model: \"keypoint_detector\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ tf.math.truediv (TFOpLambda) (None, 224, 224, 3) 0 _________________________________________________________________ tf.math.subtract (TFOpLambda (None, 224, 224, 3) 0 _________________________________________________________________ mobilenetv2_1.00_224 (Functi (None, 7, 7, 1280) 2257984 _________________________________________________________________ dropout (Dropout) (None, 7, 7, 1280) 0 _________________________________________________________________ separable_conv2d (SeparableC (None, 3, 3, 48) 93488 _________________________________________________________________ separable_conv2d_1 (Separabl (None, 1, 1, 48) 2784 ================================================================= Total params: 2,354,256 Trainable params: 96,272 Non-trainable params: 2,257,984 _________________________________________________________________ Notice the output shape of the network: (None, 1, 1, 48). This is why we have reshaped the coordinates as: batch_keypoints[i, :] = np.array(kp_temp).reshape(1, 1, 24 * 2). 
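To make that correspondence concrete, here is a minimal sketch (using randomly generated values as a stand-in for real predictions) of how a (None, 1, 1, 48) output unpacks back into 24 (x, y) coordinate pairs, mirroring the reshape we will use when visualizing predictions later on.

```python
import numpy as np

# Hypothetical stand-in for a batch of model outputs with shape (batch, 1, 1, 48):
# 24 (x, y) pairs per image, scaled to the [0, 1] range.
batch_preds = np.random.rand(8, 1, 1, 48).astype("float32")

# Undo the (1, 1, 48) packing to recover 24 keypoints per image, then
# rescale to pixel coordinates (IMG_SIZE is 224 in this example).
keypoints = batch_preds.reshape(-1, 24, 2) * 224
print(keypoints.shape)  # (8, 24, 2)
```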
Model compilation and training For this example, we will train the network only for five epochs. model = get_model() model.compile(loss=\"mse\", optimizer=keras.optimizers.Adam(1e-4)) model.fit(train_dataset, validation_data=validation_dataset, epochs=EPOCHS) Epoch 1/5 166/166 [==============================] - 85s 486ms/step - loss: 0.1087 - val_loss: 0.0950 Epoch 2/5 166/166 [==============================] - 78s 471ms/step - loss: 0.0830 - val_loss: 0.0778 Epoch 3/5 166/166 [==============================] - 78s 468ms/step - loss: 0.0778 - val_loss: 0.0739 Epoch 4/5 166/166 [==============================] - 78s 470ms/step - loss: 0.0753 - val_loss: 0.0711 Epoch 5/5 166/166 [==============================] - 78s 468ms/step - loss: 0.0735 - val_loss: 0.0692 Make predictions and visualize them sample_val_images, sample_val_keypoints = next(iter(validation_dataset)) sample_val_images = sample_val_images[:4] sample_val_keypoints = sample_val_keypoints[:4].reshape(-1, 24, 2) * IMG_SIZE predictions = model.predict(sample_val_images).reshape(-1, 24, 2) * IMG_SIZE # Ground-truth visualize_keypoints(sample_val_images, sample_val_keypoints) # Predictions visualize_keypoints(sample_val_images, predictions) png png Predictions will likely improve with more training. Going further Try using other augmentation transforms from imgaug to investigate how that changes the results. Here, we transferred the features from the pre-trained network linearly that is we did not fine-tune it. You are encouraged to fine-tune it on this task and see if that improves the performance. You can also try different architectures and see how they affect the final performance. Implementation of classical Knowledge Distillation. Introduction to Knowledge Distillation Knowledge Distillation is a procedure for model compression, in which a small (student) model is trained to match a large pre-trained (teacher) model. Knowledge is transferred from the teacher model to the student by minimizing a loss function, aimed at matching softened teacher logits as well as ground-truth labels. The logits are softened by applying a \"temperature\" scaling function in the softmax, effectively smoothing out the probability distribution and revealing inter-class relationships learned by the teacher. Reference: Hinton et al. (2015) Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import numpy as np Construct Distiller() class The custom Distiller() class, overrides the Model methods train_step, test_step, and compile(). In order to use the distiller, we need: A trained teacher model A student model to train A student loss function on the difference between student predictions and ground-truth A distillation loss function, along with a temperature, on the difference between the soft student predictions and the soft teacher labels An alpha factor to weight the student and distillation loss An optimizer for the student and (optional) metrics to evaluate performance In the train_step method, we perform a forward pass of both the teacher and student, calculate the loss with weighting of the student_loss and distillation_loss by alpha and 1 - alpha, respectively, and perform the backward pass. Note: only the student weights are updated, and therefore we only calculate the gradients for the student weights. In the test_step method, we evaluate the student model on the provided dataset. 
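Before walking through the class below, the effect of the temperature is easy to see in isolation. The following minimal sketch uses made-up logits to show how dividing by a temperature before the softmax flattens the distribution, which is exactly what the distillation loss operates on inside train_step.

```python
import tensorflow as tf

# Made-up teacher logits for a 3-class problem.
logits = tf.constant([[8.0, 2.0, 0.5]])

# Standard softmax: nearly all of the probability mass sits on the first class.
print(tf.nn.softmax(logits, axis=1).numpy())

# Softmax at temperature T = 10: a much softer distribution that keeps the same
# ordering, exposing the relative similarities between classes.
temperature = 10
print(tf.nn.softmax(logits / temperature, axis=1).numpy())
```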
class Distiller(keras.Model): def __init__(self, student, teacher): super(Distiller, self).__init__() self.teacher = teacher self.student = student def compile( self, optimizer, metrics, student_loss_fn, distillation_loss_fn, alpha=0.1, temperature=3, ): \"\"\" Configure the distiller. Args: optimizer: Keras optimizer for the student weights metrics: Keras metrics for evaluation student_loss_fn: Loss function of difference between student predictions and ground-truth distillation_loss_fn: Loss function of difference between soft student predictions and soft teacher predictions alpha: weight to student_loss_fn and 1-alpha to distillation_loss_fn temperature: Temperature for softening probability distributions. Larger temperature gives softer distributions. \"\"\" super(Distiller, self).compile(optimizer=optimizer, metrics=metrics) self.student_loss_fn = student_loss_fn self.distillation_loss_fn = distillation_loss_fn self.alpha = alpha self.temperature = temperature def train_step(self, data): # Unpack data x, y = data # Forward pass of teacher teacher_predictions = self.teacher(x, training=False) with tf.GradientTape() as tape: # Forward pass of student student_predictions = self.student(x, training=True) # Compute losses student_loss = self.student_loss_fn(y, student_predictions) distillation_loss = self.distillation_loss_fn( tf.nn.softmax(teacher_predictions / self.temperature, axis=1), tf.nn.softmax(student_predictions / self.temperature, axis=1), ) loss = self.alpha * student_loss + (1 - self.alpha) * distillation_loss # Compute gradients trainable_vars = self.student.trainable_variables gradients = tape.gradient(loss, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Update the metrics configured in `compile()`. self.compiled_metrics.update_state(y, student_predictions) # Return a dict of performance results = {m.name: m.result() for m in self.metrics} results.update( {\"student_loss\": student_loss, \"distillation_loss\": distillation_loss} ) return results def test_step(self, data): # Unpack the data x, y = data # Compute predictions y_prediction = self.student(x, training=False) # Calculate the loss student_loss = self.student_loss_fn(y, y_prediction) # Update the metrics. self.compiled_metrics.update_state(y, y_prediction) # Return a dict of performance results = {m.name: m.result() for m in self.metrics} results.update({\"student_loss\": student_loss}) return results Create student and teacher models Initialy, we create a teacher model and a smaller student model. Both models are convolutional neural networks and created using Sequential(), but could be any Keras model. 
# Create the teacher teacher = keras.Sequential( [ keras.Input(shape=(28, 28, 1)), layers.Conv2D(256, (3, 3), strides=(2, 2), padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding=\"same\"), layers.Conv2D(512, (3, 3), strides=(2, 2), padding=\"same\"), layers.Flatten(), layers.Dense(10), ], name=\"teacher\", ) # Create the student student = keras.Sequential( [ keras.Input(shape=(28, 28, 1)), layers.Conv2D(16, (3, 3), strides=(2, 2), padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding=\"same\"), layers.Conv2D(32, (3, 3), strides=(2, 2), padding=\"same\"), layers.Flatten(), layers.Dense(10), ], name=\"student\", ) # Clone student for later comparison student_scratch = keras.models.clone_model(student) Prepare the dataset The dataset used for training the teacher and distilling the teacher is MNIST, and the procedure would be equivalent for any other dataset, e.g. CIFAR-10, with a suitable choice of models. Both the student and teacher are trained on the training set and evaluated on the test set. # Prepare the train and test dataset. batch_size = 64 (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() # Normalize data x_train = x_train.astype(\"float32\") / 255.0 x_train = np.reshape(x_train, (-1, 28, 28, 1)) x_test = x_test.astype(\"float32\") / 255.0 x_test = np.reshape(x_test, (-1, 28, 28, 1)) Train the teacher In knowledge distillation we assume that the teacher is trained and fixed. Thus, we start by training the teacher model on the training set in the usual way. # Train teacher as usual teacher.compile( optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy()], ) # Train and evaluate teacher on data. teacher.fit(x_train, y_train, epochs=5) teacher.evaluate(x_test, y_test) Epoch 1/5 1875/1875 [==============================] - 248s 132ms/step - loss: 0.2438 - sparse_categorical_accuracy: 0.9220 Epoch 2/5 1875/1875 [==============================] - 263s 140ms/step - loss: 0.0881 - sparse_categorical_accuracy: 0.9738 Epoch 3/5 1875/1875 [==============================] - 245s 131ms/step - loss: 0.0650 - sparse_categorical_accuracy: 0.9811 Epoch 5/5 363/1875 [====>.........................] - ETA: 3:18 - loss: 0.0555 - sparse_categorical_accuracy: 0.9839 Distill teacher to student We have already trained the teacher model, and we only need to initialize a Distiller(student, teacher) instance, compile() it with the desired losses, hyperparameters and optimizer, and distill the teacher to the student. # Initialize and compile distiller distiller = Distiller(student=student, teacher=teacher) distiller.compile( optimizer=keras.optimizers.Adam(), metrics=[keras.metrics.SparseCategoricalAccuracy()], student_loss_fn=keras.losses.SparseCategoricalCrossentropy(from_logits=True), distillation_loss_fn=keras.losses.KLDivergence(), alpha=0.1, temperature=10, ) # Distill teacher to student distiller.fit(x_train, y_train, epochs=3) # Evaluate student on test dataset distiller.evaluate(x_test, y_test) Epoch 1/3 1875/1875 [==============================] - 242s 129ms/step - sparse_categorical_accuracy: 0.9761 - student_loss: 0.1526 - distillation_loss: 0.0226 Epoch 2/3 1875/1875 [==============================] - 281s 150ms/step - sparse_categorical_accuracy: 0.9863 - student_loss: 0.1384 - distillation_loss: 0.0185 Epoch 3/3 399/1875 [=====>........................] 
- ETA: 3:27 - sparse_categorical_accuracy: 0.9896 - student_loss: 0.1300 - distillation_loss: 0.0182 Train student from scratch for comparison We can also train an equivalent student model from scratch without the teacher, in order to evaluate the performance gain obtained by knowledge distillation. # Train student as done usually student_scratch.compile( optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy()], ) # Train and evaluate student trained from scratch. student_scratch.fit(x_train, y_train, epochs=3) student_scratch.evaluate(x_test, y_test) Epoch 1/3 1875/1875 [==============================] - 4s 2ms/step - loss: 0.4731 - sparse_categorical_accuracy: 0.8550 Epoch 2/3 1875/1875 [==============================] - 4s 2ms/step - loss: 0.0966 - sparse_categorical_accuracy: 0.9710 Epoch 3/3 1875/1875 [==============================] - 4s 2ms/step - loss: 0.0750 - sparse_categorical_accuracy: 0.9773 313/313 [==============================] - 0s 963us/step - loss: 0.0691 - sparse_categorical_accuracy: 0.9778 [0.06905383616685867, 0.9778000116348267] If the teacher is trained for 5 full epochs and the student is distilled on this teacher for 3 full epochs, you should in this example experience a performance boost compared to training the same student model from scratch, and even compared to the teacher itself. You should expect the teacher to have accuracy around 97.6%, the student trained from scratch should be around 97.6%, and the distilled student should be around 98.1%. Remove or try out different seeds to use different weight initializations. How to optimally learn representations of images for a given resolution. It is a common belief that if we constrain vision models to perceive things as humans do, their performance can be improved. For example, in this work, Geirhos et al. showed that the vision models pre-trained on the ImageNet-1k dataset are biased toward texture, whereas human beings mostly use the shape descriptor to develop a common perception. But does this belief always apply, especially when it comes to improving the performance of vision models? It turns out it may not always be the case. When training vision models, it is common to resize images to a lower dimension ((224 x 224), (299 x 299), etc.) to allow mini-batch learning and also to stay within compute limitations. We generally make use of image resizing methods like bilinear interpolation for this step, and the resized images do not lose much of their perceptual character to the human eye. In Learning to Resize Images for Computer Vision Tasks, Talebi et al. show that if we try to optimize the perceptual quality of the images for the vision models rather than the human eyes, their performance can further be improved. They investigate the following question: For a given image resolution and a model, how to best resize the given images? As shown in the paper, this idea helps to consistently improve the performance of the common vision models (pre-trained on ImageNet-1k) like DenseNet-121, ResNet-50, MobileNetV2, and EfficientNets. In this example, we will implement the learnable image resizing module as proposed in the paper and demonstrate that on the Cats and Dogs dataset using the DenseNet-121 architecture. This example requires TensorFlow 2.4 or higher.
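For reference, this is what the conventional, non-learnable resizing step looks like. The sketch below uses a dummy batch and tf.image.resize with bilinear interpolation; this fixed operation is the baseline that the learnable module in this example is meant to improve upon.

```python
import tensorflow as tf

# A dummy batch standing in for real images.
images = tf.random.uniform((4, 300, 300, 3))

# The fixed resizing step most training pipelines use: bilinear interpolation
# down to the target resolution.
resized = tf.image.resize(images, (150, 150), method="bilinear")
print(resized.shape)  # (4, 150, 150, 3)
```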
Setup from tensorflow.keras import layers from tensorflow import keras import tensorflow as tf import tensorflow_datasets as tfds tfds.disable_progress_bar() import matplotlib.pyplot as plt import numpy as np Define hyperparameters In order to facilitate mini-batch learning, we need to have a fixed shape for the images inside a given batch. This is why an initial resizing is required. We first resize all the images to (300 x 300) shape and then learn their optimal representation for the (150 x 150) resolution. INP_SIZE = (300, 300) TARGET_SIZE = (150, 150) INTERPOLATION = \"bilinear\" AUTO = tf.data.AUTOTUNE BATCH_SIZE = 64 EPOCHS = 5 In this example, we will use the bilinear interpolation but the learnable image resizer module is not dependent on any specific interpolation method. We can also use others, such as bicubic. Load and prepare the dataset For this example, we will only use 40% of the total training dataset. train_ds, validation_ds = tfds.load( \"cats_vs_dogs\", # Reserve 10% for validation split=[\"train[:40%]\", \"train[40%:50%]\"], as_supervised=True, ) def preprocess_dataset(image, label): image = tf.image.resize(image, (INP_SIZE[0], INP_SIZE[1])) label = tf.one_hot(label, depth=2) return (image, label) train_ds = ( train_ds.shuffle(BATCH_SIZE * 100) .map(preprocess_dataset, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) validation_ds = ( validation_ds.map(preprocess_dataset, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) Downloading and preparing dataset 786.68 MiB (download: 786.68 MiB, generated: Unknown size, total: 786.68 MiB) to /home/jupyter/tensorflow_datasets/cats_vs_dogs/4.0.0... WARNING:absl:1738 images were corrupted and were skipped Dataset cats_vs_dogs downloaded and prepared to /home/jupyter/tensorflow_datasets/cats_vs_dogs/4.0.0. Subsequent calls will reuse this data. Define the learnable resizer utilities The figure below (courtesy: Learning to Resize Images for Computer Vision Tasks) presents the structure of the learnable resizing module: def conv_block(x, filters, kernel_size, strides, activation=layers.LeakyReLU(0.2)): x = layers.Conv2D(filters, kernel_size, strides, padding=\"same\", use_bias=False)(x) x = layers.BatchNormalization()(x) if activation: x = activation(x) return x def res_block(x): inputs = x x = conv_block(x, 16, 3, 1) x = conv_block(x, 16, 3, 1, activation=None) return layers.Add()([inputs, x]) def get_learnable_resizer(filters=16, num_res_blocks=1, interpolation=INTERPOLATION): inputs = layers.Input(shape=[None, None, 3]) # First, perform naive resizing. naive_resize = layers.Resizing( *TARGET_SIZE, interpolation=interpolation )(inputs) # First convolution block without batch normalization. x = layers.Conv2D(filters=filters, kernel_size=7, strides=1, padding=\"same\")(inputs) x = layers.LeakyReLU(0.2)(x) # Second convolution block with batch normalization. x = layers.Conv2D(filters=filters, kernel_size=1, strides=1, padding=\"same\")(x) x = layers.LeakyReLU(0.2)(x) x = layers.BatchNormalization()(x) # Intermediate resizing as a bottleneck. bottleneck = layers.Resizing( *TARGET_SIZE, interpolation=interpolation )(x) # Residual passes. for _ in range(num_res_blocks): x = res_block(bottleneck) # Projection. x = layers.Conv2D( filters=filters, kernel_size=3, strides=1, padding=\"same\", use_bias=False )(x) x = layers.BatchNormalization()(x) # Skip connection. x = layers.Add()([bottleneck, x]) # Final resized image. 
x = layers.Conv2D(filters=3, kernel_size=7, strides=1, padding=\"same\")(x) final_resize = layers.Add()([naive_resize, x]) return tf.keras.Model(inputs, final_resize, name=\"learnable_resizer\") learnable_resizer = get_learnable_resizer() Visualize the outputs of the learnable resizing module Here, we visualize how the resized images would look like after being passed through the random weights of the resizer. sample_images, _ = next(iter(train_ds)) plt.figure(figsize=(16, 10)) for i, image in enumerate(sample_images[:6]): image = image / 255 ax = plt.subplot(3, 4, 2 * i + 1) plt.title(\"Input Image\") plt.imshow(image.numpy().squeeze()) plt.axis(\"off\") ax = plt.subplot(3, 4, 2 * i + 2) resized_image = learnable_resizer(image[None, ...]) plt.title(\"Resized Image\") plt.imshow(resized_image.numpy().squeeze()) plt.axis(\"off\") WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). png Model building utility def get_model(): backbone = tf.keras.applications.DenseNet121( weights=None, include_top=True, classes=2, input_shape=((TARGET_SIZE[0], TARGET_SIZE[1], 3)), ) backbone.trainable = True inputs = layers.Input((INP_SIZE[0], INP_SIZE[1], 3)) x = layers.Rescaling(scale=1.0 / 255)(inputs) x = learnable_resizer(x) outputs = backbone(x) return tf.keras.Model(inputs, outputs) The structure of the learnable image resizer module allows for flexible integrations with different vision models. 
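As a purely hypothetical illustration of that flexibility (not part of the original example, which proceeds with DenseNet-121 below), the same wiring could be reused with a different keras.applications backbone, keeping the resizer in front of the classifier:

```python
# Hypothetical variant: swap the backbone while reusing the `learnable_resizer`,
# `TARGET_SIZE`, and `INP_SIZE` defined above.
def get_resnet_model():
    backbone = tf.keras.applications.ResNet50V2(
        weights=None,
        include_top=True,
        classes=2,
        input_shape=(TARGET_SIZE[0], TARGET_SIZE[1], 3),
    )
    inputs = layers.Input((INP_SIZE[0], INP_SIZE[1], 3))
    x = layers.Rescaling(scale=1.0 / 255)(inputs)
    x = learnable_resizer(x)
    outputs = backbone(x)
    return tf.keras.Model(inputs, outputs)
```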
Compile and train our model with learnable resizer model = get_model() model.compile( loss=keras.losses.CategoricalCrossentropy(label_smoothing=0.1), optimizer=\"sgd\", metrics=[\"accuracy\"], ) model.fit(train_ds, validation_data=validation_ds, epochs=EPOCHS) Epoch 1/5 146/146 [==============================] - 49s 247ms/step - loss: 0.6956 - accuracy: 0.5697 - val_loss: 0.6958 - val_accuracy: 0.5103 Epoch 2/5 146/146 [==============================] - 33s 216ms/step - loss: 0.6685 - accuracy: 0.6117 - val_loss: 0.6955 - val_accuracy: 0.5387 Epoch 3/5 146/146 [==============================] - 33s 216ms/step - loss: 0.6542 - accuracy: 0.6190 - val_loss: 0.7410 - val_accuracy: 0.5684 Epoch 4/5 146/146 [==============================] - 33s 216ms/step - loss: 0.6357 - accuracy: 0.6576 - val_loss: 0.9322 - val_accuracy: 0.5314 Epoch 5/5 146/146 [==============================] - 33s 215ms/step - loss: 0.6224 - accuracy: 0.6745 - val_loss: 0.6526 - val_accuracy: 0.6672 Visualize the outputs of the trained resizer plt.figure(figsize=(16, 10)) for i, image in enumerate(sample_images[:6]): image = image / 255 ax = plt.subplot(3, 4, 2 * i + 1) plt.title(\"Input Image\") plt.imshow(image.numpy().squeeze()) plt.axis(\"off\") ax = plt.subplot(3, 4, 2 * i + 2) resized_image = learnable_resizer(image[None, ...]) plt.title(\"Resized Image\") plt.imshow(resized_image.numpy().squeeze() / 10) plt.axis(\"off\") WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). png The plot shows that the visuals of the images have improved with training. The following table shows the benefits of using the resizing module in comparison to using bilinear interpolation:

| Model | Number of parameters (Million) | Top-1 accuracy |
| --- | --- | --- |
| With the learnable resizer | 7.051717 | 67.67% |
| Without the learnable resizer | 7.039554 | 60.19% |

For more details, you can check out this repository. Note that the above-reported models were trained for 10 epochs on 90% of the training set of Cats and Dogs, unlike this example. Also, note that the increase in the number of parameters due to the resizing module is very negligible. To ensure that the improvement in the performance is not due to stochasticity, the models were trained using the same initial random weights. Now, a question worth asking here is: isn't the improved accuracy simply a consequence of adding more layers (the resizer is a mini network after all) to the model, compared to the baseline? To show that it is not the case, the authors conduct the following experiment: Take a pre-trained model trained at some size, say (224 x 224). Now, first, use it to infer predictions on images resized to a lower resolution. Record the performance. For the second experiment, plug in the resizer module at the top of the pre-trained model and warm-start the training.
Record the performance. Now, the authors argue that using the second option is better because it helps the model learn how to adjust the representations better with respect to the given resolution. Since the results purely are empirical, a few more experiments such as analyzing the cross-channel interaction would have been even better. It is worth noting that elements like Squeeze and Excitation (SE) blocks, Global Context (GC) blocks also add a few parameters to an existing network but they are known to help a network process information in systematic ways to improve the overall performance. Notes To impose shape bias inside the vision models, Geirhos et al. trained them with a combination of natural and stylized images. It might be interesting to investigate if this learnable resizing module could achieve something similar as the outputs seem to discard the texture information. The resizer module can handle arbitrary resolutions and aspect ratios which is very important for tasks like object detection and segmentation. There is another closely related topic on adaptive image resizing that attempts to resize images/feature maps adaptively during training. EfficientV2 uses this idea. Implementing the MIRNet architecture for low-light image enhancement. Introduction With the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as in photography, security, medical imaging, and remote sensing. In this example, we implement the MIRNet model for low-light image enhancement, a fully-convolutional architecture that learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. References: Learning Enriched Features for Real Image Restoration and Enhancement The Retinex Theory of Color Vision Two deterministic half-quadratic regularization algorithms for computed imaging Downloading LOLDataset The LoL Dataset has been created for low-light image enhancement. It provides 485 images for training and 15 for testing. Each image pair in the dataset consists of a low-light input image and its corresponding well-exposed reference image. import os import cv2 import random import numpy as np from glob import glob from PIL import Image, ImageOps import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers !gdown https://drive.google.com/uc?id=1DdGIJ4PZPlF2ikl8mNM9V-PdVxVLbQi6 !unzip -q lol_dataset.zip Downloading... From: https://drive.google.com/uc?id=1DdGIJ4PZPlF2ikl8mNM9V-PdVxVLbQi6 To: /content/keras-io/scripts/tmp_2614641/lol_dataset.zip 347MB [00:03, 108MB/s] Creating a TensorFlow Dataset We use 300 image pairs from the LoL Dataset's training set for training, and we use the remaining 185 image pairs for validation. We generate random crops of size 128 x 128 from the image pairs to be used for both training and validation. 
random.seed(10) IMAGE_SIZE = 128 BATCH_SIZE = 4 MAX_TRAIN_IMAGES = 300 def read_image(image_path): image = tf.io.read_file(image_path) image = tf.image.decode_png(image, channels=3) image.set_shape([None, None, 3]) image = tf.cast(image, dtype=tf.float32) / 255.0 return image def random_crop(low_image, enhanced_image): low_image_shape = tf.shape(low_image)[:2] low_w = tf.random.uniform( shape=(), maxval=low_image_shape[1] - IMAGE_SIZE + 1, dtype=tf.int32 ) low_h = tf.random.uniform( shape=(), maxval=low_image_shape[0] - IMAGE_SIZE + 1, dtype=tf.int32 ) enhanced_w = low_w enhanced_h = low_h low_image_cropped = low_image[ low_h : low_h + IMAGE_SIZE, low_w : low_w + IMAGE_SIZE ] enhanced_image_cropped = enhanced_image[ enhanced_h : enhanced_h + IMAGE_SIZE, enhanced_w : enhanced_w + IMAGE_SIZE ] return low_image_cropped, enhanced_image_cropped def load_data(low_light_image_path, enhanced_image_path): low_light_image = read_image(low_light_image_path) enhanced_image = read_image(enhanced_image_path) low_light_image, enhanced_image = random_crop(low_light_image, enhanced_image) return low_light_image, enhanced_image def get_dataset(low_light_images, enhanced_images): dataset = tf.data.Dataset.from_tensor_slices((low_light_images, enhanced_images)) dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) return dataset train_low_light_images = sorted(glob(\"./lol_dataset/our485/low/*\"))[:MAX_TRAIN_IMAGES] train_enhanced_images = sorted(glob(\"./lol_dataset/our485/high/*\"))[:MAX_TRAIN_IMAGES] val_low_light_images = sorted(glob(\"./lol_dataset/our485/low/*\"))[MAX_TRAIN_IMAGES:] val_enhanced_images = sorted(glob(\"./lol_dataset/our485/high/*\"))[MAX_TRAIN_IMAGES:] test_low_light_images = sorted(glob(\"./lol_dataset/eval15/low/*\")) test_enhanced_images = sorted(glob(\"./lol_dataset/eval15/high/*\")) train_dataset = get_dataset(train_low_light_images, train_enhanced_images) val_dataset = get_dataset(val_low_light_images, val_enhanced_images) print(\"Train Dataset:\", train_dataset) print(\"Val Dataset:\", val_dataset) Train Dataset: Val Dataset: MIRNet Model Here are the main features of the MIRNet model: A feature extraction model that computes a complementary set of features across multiple spatial scales, while maintaining the original high-resolution features to preserve precise spatial details. A regularly repeated mechanism for information exchange, where the features across multi-resolution branches are progressively fused together for improved representation learning. A new approach to fuse multi-scale features using a selective kernel network that dynamically combines variable receptive fields and faithfully preserves the original feature information at each spatial resolution. A recursive residual design that progressively breaks down the input signal in order to simplify the overall learning process, and allows the construction of very deep networks. Selective Kernel Feature Fusion The Selective Kernel Feature Fusion or SKFF module performs dynamic adjustment of receptive fields via two operations: Fuse and Select. The Fuse operator generates global feature descriptors by combining the information from multi-resolution streams. The Select operator uses these descriptors to recalibrate the feature maps (of different streams) followed by their aggregation. Fuse: The SKFF receives inputs from three parallel convolution streams carrying different scales of information. 
We first combine these multi-scale features using an element-wise sum, on which we apply Global Average Pooling (GAP) across the spatial dimension. Next, we apply a channel- downscaling convolution layer to generate a compact feature representation which passes through three parallel channel-upscaling convolution layers (one for each resolution stream) and provides us with three feature descriptors. Select: This operator applies the softmax function to the feature descriptors to obtain the corresponding activations that are used to adaptively recalibrate multi-scale feature maps. The aggregated features are defined as the sum of product of the corresponding multi-scale feature and the feature descriptor. def selective_kernel_feature_fusion( multi_scale_feature_1, multi_scale_feature_2, multi_scale_feature_3 ): channels = list(multi_scale_feature_1.shape)[-1] combined_feature = layers.Add()( [multi_scale_feature_1, multi_scale_feature_2, multi_scale_feature_3] ) gap = layers.GlobalAveragePooling2D()(combined_feature) channel_wise_statistics = tf.reshape(gap, shape=(-1, 1, 1, channels)) compact_feature_representation = layers.Conv2D( filters=channels // 8, kernel_size=(1, 1), activation=\"relu\" )(channel_wise_statistics) feature_descriptor_1 = layers.Conv2D( channels, kernel_size=(1, 1), activation=\"softmax\" )(compact_feature_representation) feature_descriptor_2 = layers.Conv2D( channels, kernel_size=(1, 1), activation=\"softmax\" )(compact_feature_representation) feature_descriptor_3 = layers.Conv2D( channels, kernel_size=(1, 1), activation=\"softmax\" )(compact_feature_representation) feature_1 = multi_scale_feature_1 * feature_descriptor_1 feature_2 = multi_scale_feature_2 * feature_descriptor_2 feature_3 = multi_scale_feature_3 * feature_descriptor_3 aggregated_feature = layers.Add()([feature_1, feature_2, feature_3]) return aggregated_feature Dual Attention Unit The Dual Attention Unit or DAU is used to extract features in the convolutional streams. While the SKFF block fuses information across multi-resolution branches, we also need a mechanism to share information within a feature tensor, both along the spatial and the channel dimensions which is done by the DAU block. The DAU suppresses less useful features and only allows more informative ones to pass further. This feature recalibration is achieved by using Channel Attention and Spatial Attention mechanisms. The Channel Attention branch exploits the inter-channel relationships of the convolutional feature maps by applying squeeze and excitation operations. Given a feature map, the squeeze operation applies Global Average Pooling across spatial dimensions to encode global context, thus yielding a feature descriptor. The excitation operator passes this feature descriptor through two convolutional layers followed by the sigmoid gating and generates activations. Finally, the output of Channel Attention branch is obtained by rescaling the input feature map with the output activations. The Spatial Attention branch is designed to exploit the inter-spatial dependencies of convolutional features. The goal of Spatial Attention is to generate a spatial attention map and use it to recalibrate the incoming features. 
To generate the spatial attention map, the Spatial Attention branch first independently applies Global Average Pooling and Max Pooling operations on the input features along the channel dimension and concatenates the outputs to form a resultant feature map, which is then passed through a convolution and sigmoid activation to obtain the spatial attention map. This spatial attention map is then used to rescale the input feature map. def spatial_attention_block(input_tensor): average_pooling = tf.reduce_mean(input_tensor, axis=-1) average_pooling = tf.expand_dims(average_pooling, axis=-1) max_pooling = tf.reduce_max(input_tensor, axis=-1) max_pooling = tf.expand_dims(max_pooling, axis=-1) concatenated = layers.Concatenate(axis=-1)([average_pooling, max_pooling]) feature_map = layers.Conv2D(1, kernel_size=(1, 1))(concatenated) feature_map = tf.nn.sigmoid(feature_map) return input_tensor * feature_map def channel_attention_block(input_tensor): channels = list(input_tensor.shape)[-1] average_pooling = layers.GlobalAveragePooling2D()(input_tensor) feature_descriptor = tf.reshape(average_pooling, shape=(-1, 1, 1, channels)) feature_activations = layers.Conv2D( filters=channels // 8, kernel_size=(1, 1), activation=\"relu\" )(feature_descriptor) feature_activations = layers.Conv2D( filters=channels, kernel_size=(1, 1), activation=\"sigmoid\" )(feature_activations) return input_tensor * feature_activations def dual_attention_unit_block(input_tensor): channels = list(input_tensor.shape)[-1] feature_map = layers.Conv2D( channels, kernel_size=(3, 3), padding=\"same\", activation=\"relu\" )(input_tensor) feature_map = layers.Conv2D(channels, kernel_size=(3, 3), padding=\"same\")( feature_map ) channel_attention = channel_attention_block(feature_map) spatial_attention = spatial_attention_block(feature_map) concatenation = layers.Concatenate(axis=-1)([channel_attention, spatial_attention]) concatenation = layers.Conv2D(channels, kernel_size=(1, 1))(concatenation) return layers.Add()([input_tensor, concatenation]) Multi-Scale Residual Block The Multi-Scale Residual Block is capable of generating a spatially-precise output by maintaining high-resolution representations, while receiving rich contextual information from low resolutions. The MRB consists of multiple (three in this paper) fully-convolutional streams connected in parallel. It allows information exchange across parallel streams in order to consolidate the high-resolution features with the help of low-resolution features, and vice versa. The MIRNet employs a recursive residual design (with skip connections) to ease the flow of information during the learning process. In order to maintain the residual nature of our architecture, residual resizing modules are used to perform the downsampling and upsampling operations that are used in the Multi-Scale Residual Block.
# Recursive Residual Modules def down_sampling_module(input_tensor): channels = list(input_tensor.shape)[-1] main_branch = layers.Conv2D(channels, kernel_size=(1, 1), activation=\"relu\")( input_tensor ) main_branch = layers.Conv2D( channels, kernel_size=(3, 3), padding=\"same\", activation=\"relu\" )(main_branch) main_branch = layers.MaxPooling2D()(main_branch) main_branch = layers.Conv2D(channels * 2, kernel_size=(1, 1))(main_branch) skip_branch = layers.MaxPooling2D()(input_tensor) skip_branch = layers.Conv2D(channels * 2, kernel_size=(1, 1))(skip_branch) return layers.Add()([skip_branch, main_branch]) def up_sampling_module(input_tensor): channels = list(input_tensor.shape)[-1] main_branch = layers.Conv2D(channels, kernel_size=(1, 1), activation=\"relu\")( input_tensor ) main_branch = layers.Conv2D( channels, kernel_size=(3, 3), padding=\"same\", activation=\"relu\" )(main_branch) main_branch = layers.UpSampling2D()(main_branch) main_branch = layers.Conv2D(channels // 2, kernel_size=(1, 1))(main_branch) skip_branch = layers.UpSampling2D()(input_tensor) skip_branch = layers.Conv2D(channels // 2, kernel_size=(1, 1))(skip_branch) return layers.Add()([skip_branch, main_branch]) # MRB Block def multi_scale_residual_block(input_tensor, channels): # features level1 = input_tensor level2 = down_sampling_module(input_tensor) level3 = down_sampling_module(level2) # DAU level1_dau = dual_attention_unit_block(level1) level2_dau = dual_attention_unit_block(level2) level3_dau = dual_attention_unit_block(level3) # SKFF level1_skff = selective_kernel_feature_fusion( level1_dau, up_sampling_module(level2_dau), up_sampling_module(up_sampling_module(level3_dau)), ) level2_skff = selective_kernel_feature_fusion( down_sampling_module(level1_dau), level2_dau, up_sampling_module(level3_dau) ) level3_skff = selective_kernel_feature_fusion( down_sampling_module(down_sampling_module(level1_dau)), down_sampling_module(level2_dau), level3_dau, ) # DAU 2 level1_dau_2 = dual_attention_unit_block(level1_skff) level2_dau_2 = up_sampling_module(dual_attention_unit_block(level2_skff)) level3_dau_2 = up_sampling_module( up_sampling_module(dual_attention_unit_block(level3_skff)) ) # SKFF 2 skff_ = selective_kernel_feature_fusion(level1_dau_2, level2_dau_2, level3_dau_2) conv = layers.Conv2D(channels, kernel_size=(3, 3), padding=\"same\")(skff_) return layers.Add()([input_tensor, conv]) MIRNet Model def recursive_residual_group(input_tensor, num_mrb, channels): conv1 = layers.Conv2D(channels, kernel_size=(3, 3), padding=\"same\")(input_tensor) for _ in range(num_mrb): conv1 = multi_scale_residual_block(conv1, channels) conv2 = layers.Conv2D(channels, kernel_size=(3, 3), padding=\"same\")(conv1) return layers.Add()([conv2, input_tensor]) def mirnet_model(num_rrg, num_mrb, channels): input_tensor = keras.Input(shape=[None, None, 3]) x1 = layers.Conv2D(channels, kernel_size=(3, 3), padding=\"same\")(input_tensor) for _ in range(num_rrg): x1 = recursive_residual_group(x1, num_mrb, channels) conv = layers.Conv2D(3, kernel_size=(3, 3), padding=\"same\")(x1) output_tensor = layers.Add()([input_tensor, conv]) return keras.Model(input_tensor, output_tensor) model = mirnet_model(num_rrg=3, num_mrb=2, channels=64) Training We train MIRNet using Charbonnier Loss as the loss function and the Adam optimizer with a learning rate of 1e-4.
We use Peak Signal Noise Ratio or PSNR as a metric which is an expression for the ratio between the maximum possible value (power) of a signal and the power of distorting noise that affects the quality of its representation. def charbonnier_loss(y_true, y_pred): return tf.reduce_mean(tf.sqrt(tf.square(y_true - y_pred) + tf.square(1e-3))) def peak_signal_noise_ratio(y_true, y_pred): return tf.image.psnr(y_pred, y_true, max_val=255.0) optimizer = keras.optimizers.Adam(learning_rate=1e-4) model.compile( optimizer=optimizer, loss=charbonnier_loss, metrics=[peak_signal_noise_ratio] ) history = model.fit( train_dataset, validation_data=val_dataset, epochs=50, callbacks=[ keras.callbacks.ReduceLROnPlateau( monitor=\"val_peak_signal_noise_ratio\", factor=0.5, patience=5, verbose=1, min_delta=1e-7, mode=\"max\", ) ], ) plt.plot(history.history[\"loss\"], label=\"train_loss\") plt.plot(history.history[\"val_loss\"], label=\"val_loss\") plt.xlabel(\"Epochs\") plt.ylabel(\"Loss\") plt.title(\"Train and Validation Losses Over Epochs\", fontsize=14) plt.legend() plt.grid() plt.show() plt.plot(history.history[\"peak_signal_noise_ratio\"], label=\"train_psnr\") plt.plot(history.history[\"val_peak_signal_noise_ratio\"], label=\"val_psnr\") plt.xlabel(\"Epochs\") plt.ylabel(\"PSNR\") plt.title(\"Train and Validation PSNR Over Epochs\", fontsize=14) plt.legend() plt.grid() plt.show() Epoch 1/50 75/75 [==============================] - 109s 731ms/step - loss: 0.2125 - peak_signal_noise_ratio: 62.0458 - val_loss: 0.1592 - val_peak_signal_noise_ratio: 64.1833 Epoch 2/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1764 - peak_signal_noise_ratio: 63.1356 - val_loss: 0.1257 - val_peak_signal_noise_ratio: 65.6498 Epoch 3/50 75/75 [==============================] - 49s 652ms/step - loss: 0.1724 - peak_signal_noise_ratio: 63.3172 - val_loss: 0.1245 - val_peak_signal_noise_ratio: 65.6902 Epoch 4/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1670 - peak_signal_noise_ratio: 63.4917 - val_loss: 0.1206 - val_peak_signal_noise_ratio: 65.8893 Epoch 5/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1651 - peak_signal_noise_ratio: 63.6555 - val_loss: 0.1333 - val_peak_signal_noise_ratio: 65.6338 Epoch 6/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1572 - peak_signal_noise_ratio: 64.1984 - val_loss: 0.1142 - val_peak_signal_noise_ratio: 66.7711 Epoch 7/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1592 - peak_signal_noise_ratio: 64.0062 - val_loss: 0.1205 - val_peak_signal_noise_ratio: 66.1075 Epoch 8/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1493 - peak_signal_noise_ratio: 64.4675 - val_loss: 0.1170 - val_peak_signal_noise_ratio: 66.1355 Epoch 9/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1446 - peak_signal_noise_ratio: 64.7416 - val_loss: 0.1301 - val_peak_signal_noise_ratio: 66.0207 Epoch 10/50 75/75 [==============================] - 49s 655ms/step - loss: 0.1539 - peak_signal_noise_ratio: 64.3999 - val_loss: 0.1220 - val_peak_signal_noise_ratio: 66.7203 Epoch 11/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1451 - peak_signal_noise_ratio: 64.7352 - val_loss: 0.1219 - val_peak_signal_noise_ratio: 66.3140 Epoch 00011: ReduceLROnPlateau reducing learning rate to 4.999999873689376e-05. 
Epoch 12/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1492 - peak_signal_noise_ratio: 64.7238 - val_loss: 0.1204 - val_peak_signal_noise_ratio: 66.4726 Epoch 13/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1456 - peak_signal_noise_ratio: 64.9666 - val_loss: 0.1109 - val_peak_signal_noise_ratio: 67.1270 Epoch 14/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1372 - peak_signal_noise_ratio: 65.3932 - val_loss: 0.1150 - val_peak_signal_noise_ratio: 66.9255 Epoch 15/50 75/75 [==============================] - 49s 650ms/step - loss: 0.1340 - peak_signal_noise_ratio: 65.5611 - val_loss: 0.1111 - val_peak_signal_noise_ratio: 67.2009 Epoch 16/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1377 - peak_signal_noise_ratio: 65.3355 - val_loss: 0.1140 - val_peak_signal_noise_ratio: 67.0495 Epoch 17/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1340 - peak_signal_noise_ratio: 65.6484 - val_loss: 0.1132 - val_peak_signal_noise_ratio: 67.0257 Epoch 18/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1360 - peak_signal_noise_ratio: 65.4871 - val_loss: 0.1070 - val_peak_signal_noise_ratio: 67.4185 Epoch 19/50 75/75 [==============================] - 49s 649ms/step - loss: 0.1349 - peak_signal_noise_ratio: 65.4856 - val_loss: 0.1112 - val_peak_signal_noise_ratio: 67.2248 Epoch 20/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1273 - peak_signal_noise_ratio: 66.0817 - val_loss: 0.1185 - val_peak_signal_noise_ratio: 67.0208 Epoch 21/50 75/75 [==============================] - 49s 656ms/step - loss: 0.1393 - peak_signal_noise_ratio: 65.3710 - val_loss: 0.1102 - val_peak_signal_noise_ratio: 67.0362 Epoch 22/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1326 - peak_signal_noise_ratio: 65.8781 - val_loss: 0.1059 - val_peak_signal_noise_ratio: 67.4949 Epoch 23/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1260 - peak_signal_noise_ratio: 66.1770 - val_loss: 0.1187 - val_peak_signal_noise_ratio: 66.6312 Epoch 24/50 75/75 [==============================] - 49s 650ms/step - loss: 0.1331 - peak_signal_noise_ratio: 65.8160 - val_loss: 0.1075 - val_peak_signal_noise_ratio: 67.2668 Epoch 25/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1288 - peak_signal_noise_ratio: 66.0734 - val_loss: 0.1027 - val_peak_signal_noise_ratio: 67.9508 Epoch 26/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1306 - peak_signal_noise_ratio: 66.0349 - val_loss: 0.1076 - val_peak_signal_noise_ratio: 67.3821 Epoch 27/50 75/75 [==============================] - 49s 655ms/step - loss: 0.1356 - peak_signal_noise_ratio: 65.7978 - val_loss: 0.1079 - val_peak_signal_noise_ratio: 67.4785 Epoch 28/50 75/75 [==============================] - 49s 655ms/step - loss: 0.1270 - peak_signal_noise_ratio: 66.2681 - val_loss: 0.1116 - val_peak_signal_noise_ratio: 67.3327 Epoch 29/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1297 - peak_signal_noise_ratio: 66.0506 - val_loss: 0.1057 - val_peak_signal_noise_ratio: 67.5432 Epoch 30/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1275 - peak_signal_noise_ratio: 66.3542 - val_loss: 0.1034 - val_peak_signal_noise_ratio: 67.4624 Epoch 00030: ReduceLROnPlateau reducing learning rate to 2.499999936844688e-05. 
Epoch 31/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1258 - peak_signal_noise_ratio: 66.2724 - val_loss: 0.1066 - val_peak_signal_noise_ratio: 67.5729 Epoch 32/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1153 - peak_signal_noise_ratio: 67.0384 - val_loss: 0.1064 - val_peak_signal_noise_ratio: 67.4336 Epoch 33/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1189 - peak_signal_noise_ratio: 66.7662 - val_loss: 0.1062 - val_peak_signal_noise_ratio: 67.5128 Epoch 34/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1159 - peak_signal_noise_ratio: 66.9257 - val_loss: 0.1003 - val_peak_signal_noise_ratio: 67.8672 Epoch 35/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1191 - peak_signal_noise_ratio: 66.7690 - val_loss: 0.1043 - val_peak_signal_noise_ratio: 67.4840 Epoch 00035: ReduceLROnPlateau reducing learning rate to 1.249999968422344e-05. Epoch 36/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1158 - peak_signal_noise_ratio: 67.0264 - val_loss: 0.1057 - val_peak_signal_noise_ratio: 67.6526 Epoch 37/50 75/75 [==============================] - 49s 652ms/step - loss: 0.1128 - peak_signal_noise_ratio: 67.1950 - val_loss: 0.1104 - val_peak_signal_noise_ratio: 67.1770 Epoch 38/50 75/75 [==============================] - 49s 652ms/step - loss: 0.1200 - peak_signal_noise_ratio: 66.7623 - val_loss: 0.1048 - val_peak_signal_noise_ratio: 67.7003 Epoch 39/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1112 - peak_signal_noise_ratio: 67.3895 - val_loss: 0.1031 - val_peak_signal_noise_ratio: 67.6530 Epoch 40/50 75/75 [==============================] - 49s 650ms/step - loss: 0.1125 - peak_signal_noise_ratio: 67.1694 - val_loss: 0.1034 - val_peak_signal_noise_ratio: 67.6437 Epoch 00040: ReduceLROnPlateau reducing learning rate to 6.24999984211172e-06. Epoch 41/50 75/75 [==============================] - 49s 650ms/step - loss: 0.1131 - peak_signal_noise_ratio: 67.2471 - val_loss: 0.1152 - val_peak_signal_noise_ratio: 66.8625 Epoch 42/50 75/75 [==============================] - 49s 650ms/step - loss: 0.1069 - peak_signal_noise_ratio: 67.5794 - val_loss: 0.1119 - val_peak_signal_noise_ratio: 67.1944 Epoch 43/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1118 - peak_signal_noise_ratio: 67.2779 - val_loss: 0.1147 - val_peak_signal_noise_ratio: 66.9731 Epoch 44/50 75/75 [==============================] - 48s 647ms/step - loss: 0.1101 - peak_signal_noise_ratio: 67.2777 - val_loss: 0.1107 - val_peak_signal_noise_ratio: 67.2580 Epoch 45/50 75/75 [==============================] - 49s 649ms/step - loss: 0.1076 - peak_signal_noise_ratio: 67.6359 - val_loss: 0.1103 - val_peak_signal_noise_ratio: 67.2720 Epoch 00045: ReduceLROnPlateau reducing learning rate to 3.12499992105586e-06. 
Epoch 46/50 75/75 [==============================] - 49s 648ms/step - loss: 0.1066 - peak_signal_noise_ratio: 67.4869 - val_loss: 0.1077 - val_peak_signal_noise_ratio: 67.4986 Epoch 47/50 75/75 [==============================] - 49s 649ms/step - loss: 0.1072 - peak_signal_noise_ratio: 67.4890 - val_loss: 0.1140 - val_peak_signal_noise_ratio: 67.1755 Epoch 48/50 75/75 [==============================] - 49s 649ms/step - loss: 0.1065 - peak_signal_noise_ratio: 67.6796 - val_loss: 0.1091 - val_peak_signal_noise_ratio: 67.3442 Epoch 49/50 75/75 [==============================] - 49s 648ms/step - loss: 0.1098 - peak_signal_noise_ratio: 67.3909 - val_loss: 0.1082 - val_peak_signal_noise_ratio: 67.4616 Epoch 50/50 75/75 [==============================] - 49s 648ms/step - loss: 0.1090 - peak_signal_noise_ratio: 67.5139 - val_loss: 0.1124 - val_peak_signal_noise_ratio: 67.1488 Epoch 00050: ReduceLROnPlateau reducing learning rate to 1.56249996052793e-06. png png Inference def plot_results(images, titles, figure_size=(12, 12)): fig = plt.figure(figsize=figure_size) for i in range(len(images)): fig.add_subplot(1, len(images), i + 1).set_title(titles[i]) _ = plt.imshow(images[i]) plt.axis(\"off\") plt.show() def infer(original_image): image = keras.preprocessing.image.img_to_array(original_image) image = image.astype(\"float32\") / 255.0 image = np.expand_dims(image, axis=0) output = model.predict(image) output_image = output[0] * 255.0 output_image = output_image.clip(0, 255) output_image = output_image.reshape( (np.shape(output_image)[0], np.shape(output_image)[1], 3) ) output_image = Image.fromarray(np.uint8(output_image)) original_image = Image.fromarray(np.uint8(original_image)) return output_image Inference on Test Images We compare the test images from LOLDataset enhanced by MIRNet with images enhanced via the PIL.ImageOps.autocontrast() function. for low_light_image in random.sample(test_low_light_images, 6): original_image = Image.open(low_light_image) enhanced_image = infer(original_image) plot_results( [original_image, ImageOps.autocontrast(original_image), enhanced_image], [\"Original\", \"PIL Autocontrast\", \"MIRNet Enhanced\"], (20, 12), ) png png png png png png Implementing Masked Autoencoders for self-supervised pretraining. Introduction In deep learning, models with growing capacity and capability can easily overfit on large datasets (ImageNet-1K). In the field of natural language processing, the appetite for data has been successfully addressed by self-supervised pretraining. In the academic paper Masked Autoencoders Are Scalable Vision Learners by He et. al. the authors propose a simple yet effective method to pretrain large vision models (here ViT Huge). Inspired from the pretraining algorithm of BERT (Devlin et al.), they mask patches of an image and, through an autoencoder predict the masked patches. In the spirit of \"masked language modeling\", this pretraining task could be referred to as \"masked image modeling\". In this example, we implement Masked Autoencoders Are Scalable Vision Learners with the CIFAR-10 dataset. After pretraining a scaled down version of ViT, we also implement the linear evaluation pipeline on CIFAR-10. This implementation covers (MAE refers to Masked Autoencoder): The masking algorithm MAE encoder MAE decoder Evaluation with linear probing As a reference, we reuse some of the code presented in this example. 
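Before diving into the implementation, it may help to see the core masking idea in isolation. The snippet below is a small NumPy illustration, not part of the example itself; the example later does the equivalent in TensorFlow inside `PatchEncoder.get_random_indices`, using the patch count and mask proportion configured in the hyperparameters below (64 patches, 75% masked).

```python
import numpy as np

# 48x48 images with 6x6 patches give (48 // 6) ** 2 = 64 patches,
# of which 75% are hidden from the encoder.
num_patches, mask_proportion = 64, 0.75

rng = np.random.default_rng(42)
shuffled = rng.permutation(num_patches)
num_mask = int(mask_proportion * num_patches)
mask_indices = shuffled[:num_mask]    # 48 patches the decoder must reconstruct
unmask_indices = shuffled[num_mask:]  # 16 patches the encoder actually sees
print(len(mask_indices), len(unmask_indices))  # 48 16
```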
Imports This example requires TensorFlow Addons, which can be installed using the following command: pip install -U tensorflow-addons from tensorflow.keras import layers import tensorflow_addons as tfa from tensorflow import keras import tensorflow as tf import matplotlib.pyplot as plt import numpy as np import random # Setting seeds for reproducibility. SEED = 42 keras.utils.set_random_seed(SEED) Hyperparameters for pretraining Please feel free to change the hyperparameters and check your results. The best way to get an intuition about the architecture is to experiment with it. Our hyperparameters are heavily inspired by the design guidelines laid out by the authors in the original paper. # DATA BUFFER_SIZE = 1024 BATCH_SIZE = 256 AUTO = tf.data.AUTOTUNE INPUT_SHAPE = (32, 32, 3) NUM_CLASSES = 10 # OPTIMIZER LEARNING_RATE = 5e-3 WEIGHT_DECAY = 1e-4 # PRETRAINING EPOCHS = 100 # AUGMENTATION IMAGE_SIZE = 48 # We will resize input images to this size. PATCH_SIZE = 6 # Size of the patches to be extracted from the input images. NUM_PATCHES = (IMAGE_SIZE // PATCH_SIZE) ** 2 MASK_PROPORTION = 0.75 # We have found 75% masking to give us the best results. # ENCODER and DECODER LAYER_NORM_EPS = 1e-6 ENC_PROJECTION_DIM = 128 DEC_PROJECTION_DIM = 64 ENC_NUM_HEADS = 4 ENC_LAYERS = 6 DEC_NUM_HEADS = 4 DEC_LAYERS = ( 2 # The decoder is lightweight but should be reasonably deep for reconstruction. ) ENC_TRANSFORMER_UNITS = [ ENC_PROJECTION_DIM * 2, ENC_PROJECTION_DIM, ] # Size of the transformer layers. DEC_TRANSFORMER_UNITS = [ DEC_PROJECTION_DIM * 2, DEC_PROJECTION_DIM, ] Load and prepare the CIFAR-10 dataset (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() (x_train, y_train), (x_val, y_val) = ( (x_train[:40000], y_train[:40000]), (x_train[40000:], y_train[40000:]), ) print(f\"Training samples: {len(x_train)}\") print(f\"Validation samples: {len(x_val)}\") print(f\"Testing samples: {len(x_test)}\") train_ds = tf.data.Dataset.from_tensor_slices(x_train) train_ds = train_ds.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(AUTO) val_ds = tf.data.Dataset.from_tensor_slices(x_val) val_ds = val_ds.batch(BATCH_SIZE).prefetch(AUTO) test_ds = tf.data.Dataset.from_tensor_slices(x_test) test_ds = test_ds.batch(BATCH_SIZE).prefetch(AUTO) Training samples: 40000 Validation samples: 10000 Testing samples: 10000 2021-11-24 01:10:52.088318: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-11-24 01:10:54.356762: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 38444 MB memory: -> device: 0, name: A100-SXM4-40GB, pci bus id: 0000:00:04.0, compute capability: 8.0 Data augmentation In previous self-supervised pretraining methodologies (SimCLR alike), we have noticed that the data augmentation pipeline plays an important role. On the other hand the authors of this paper point out that Masked Autoencoders do not rely on augmentations. 
They propose a simple augmentation pipeline of: Resizing Random cropping (fixed-sized or random sized) Random horizontal flipping def get_train_augmentation_model(): model = keras.Sequential( [ layers.Rescaling(1 / 255.0), layers.Resizing(INPUT_SHAPE[0] + 20, INPUT_SHAPE[0] + 20), layers.RandomCrop(IMAGE_SIZE, IMAGE_SIZE), layers.RandomFlip(\"horizontal\"), ], name=\"train_data_augmentation\", ) return model def get_test_augmentation_model(): model = keras.Sequential( [layers.Rescaling(1 / 255.0), layers.Resizing(IMAGE_SIZE, IMAGE_SIZE),], name=\"test_data_augmentation\", ) return model A layer for extracting patches from images This layer takes images as input and divides them into patches. The layer also includes two utility method: show_patched_image -- Takes a batch of images and its corresponding patches to plot a random pair of image and patches. reconstruct_from_patch -- Takes a single instance of patches and stitches them together into the original image. class Patches(layers.Layer): def __init__(self, patch_size=PATCH_SIZE, **kwargs): super().__init__(**kwargs) self.patch_size = patch_size # Assuming the image has three channels each patch would be # of size (patch_size, patch_size, 3). self.resize = layers.Reshape((-1, patch_size * patch_size * 3)) def call(self, images): # Create patches from the input images patches = tf.image.extract_patches( images=images, sizes=[1, self.patch_size, self.patch_size, 1], strides=[1, self.patch_size, self.patch_size, 1], rates=[1, 1, 1, 1], padding=\"VALID\", ) # Reshape the patches to (batch, num_patches, patch_area) and return it. patches = self.resize(patches) return patches def show_patched_image(self, images, patches): # This is a utility function which accepts a batch of images and its # corresponding patches and help visualize one image and its patches # side by side. idx = np.random.choice(patches.shape[0]) print(f\"Index selected: {idx}.\") plt.figure(figsize=(4, 4)) plt.imshow(keras.utils.array_to_img(images[idx])) plt.axis(\"off\") plt.show() n = int(np.sqrt(patches.shape[1])) plt.figure(figsize=(4, 4)) for i, patch in enumerate(patches[idx]): ax = plt.subplot(n, n, i + 1) patch_img = tf.reshape(patch, (self.patch_size, self.patch_size, 3)) plt.imshow(keras.utils.img_to_array(patch_img)) plt.axis(\"off\") plt.show() # Return the index chosen to validate it outside the method. return idx # taken from https://stackoverflow.com/a/58082878/10319735 def reconstruct_from_patch(self, patch): # This utility function takes patches from a *single* image and # reconstructs it back into the image. This is useful for the train # monitor callback. num_patches = patch.shape[0] n = int(np.sqrt(num_patches)) patch = tf.reshape(patch, (num_patches, self.patch_size, self.patch_size, 3)) rows = tf.split(patch, n, axis=0) rows = [tf.concat(tf.unstack(x), axis=1) for x in rows] reconstructed = tf.concat(rows, axis=0) return reconstructed Let's visualize the image patches. # Get a batch of images. image_batch = next(iter(train_ds)) # Augment the images. augmentation_model = get_train_augmentation_model() augmented_images = augmentation_model(image_batch) # Define the patch layer. patch_layer = Patches() # Get the patches from the batched images. patches = patch_layer(images=augmented_images) # Now pass the images and the corresponding patches # to the `show_patched_image` method. random_index = patch_layer.show_patched_image(images=augmented_images, patches=patches) # Chose the same chose image and try reconstructing the patches # into the original image. 
image = patch_layer.reconstruct_from_patch(patches[random_index]) plt.imshow(image) plt.axis(\"off\") plt.show() Index selected: 102. png png png Patch encoding with masking Quoting the paper Following ViT, we divide an image into regular non-overlapping patches. Then we sample a subset of patches and mask (i.e., remove) the remaining ones. Our sampling strategy is straightforward: we sample random patches without replacement, following a uniform distribution. We simply refer to this as “random sampling”. This layer includes masking and encoding the patches. The utility methods of the layer are: get_random_indices -- Provides the mask and unmask indices. generate_masked_image -- Takes patches and unmask indices, results in a random masked image. This is an essential utility method for our training monitor callback (defined later). class PatchEncoder(layers.Layer): def __init__( self, patch_size=PATCH_SIZE, projection_dim=ENC_PROJECTION_DIM, mask_proportion=MASK_PROPORTION, downstream=False, **kwargs, ): super().__init__(**kwargs) self.patch_size = patch_size self.projection_dim = projection_dim self.mask_proportion = mask_proportion self.downstream = downstream # This is a trainable mask token initialized randomly from a normal # distribution. self.mask_token = tf.Variable( tf.random.normal([1, patch_size * patch_size * 3]), trainable=True ) def build(self, input_shape): (_, self.num_patches, self.patch_area) = input_shape # Create the projection layer for the patches. self.projection = layers.Dense(units=self.projection_dim) # Create the positional embedding layer. self.position_embedding = layers.Embedding( input_dim=self.num_patches, output_dim=self.projection_dim ) # Number of patches that will be masked. self.num_mask = int(self.mask_proportion * self.num_patches) def call(self, patches): # Get the positional embeddings. batch_size = tf.shape(patches)[0] positions = tf.range(start=0, limit=self.num_patches, delta=1) pos_embeddings = self.position_embedding(positions[tf.newaxis, ...]) pos_embeddings = tf.tile( pos_embeddings, [batch_size, 1, 1] ) # (B, num_patches, projection_dim) # Embed the patches. patch_embeddings = ( self.projection(patches) + pos_embeddings ) # (B, num_patches, projection_dim) if self.downstream: return patch_embeddings else: mask_indices, unmask_indices = self.get_random_indices(batch_size) # The encoder input is the unmasked patch embeddings. Here we gather # all the patches that should be unmasked. unmasked_embeddings = tf.gather( patch_embeddings, unmask_indices, axis=1, batch_dims=1 ) # (B, unmask_numbers, projection_dim) # Get the unmasked and masked position embeddings. We will need them # for the decoder. unmasked_positions = tf.gather( pos_embeddings, unmask_indices, axis=1, batch_dims=1 ) # (B, unmask_numbers, projection_dim) masked_positions = tf.gather( pos_embeddings, mask_indices, axis=1, batch_dims=1 ) # (B, mask_numbers, projection_dim) # Repeat the mask token number of mask times. # Mask tokens replace the masks of the image. mask_tokens = tf.repeat(self.mask_token, repeats=self.num_mask, axis=0) mask_tokens = tf.repeat( mask_tokens[tf.newaxis, ...], repeats=batch_size, axis=0 ) # Get the masked embeddings for the tokens. masked_embeddings = self.projection(mask_tokens) + masked_positions return ( unmasked_embeddings, # Input to the encoder. masked_embeddings, # First part of input to the decoder. unmasked_positions, # Added to the encoder outputs. mask_indices, # The indices that were masked. unmask_indices, # The indices that were unmaksed. 
) def get_random_indices(self, batch_size): # Create random indices from a uniform distribution and then split # it into mask and unmask indices. rand_indices = tf.argsort( tf.random.uniform(shape=(batch_size, self.num_patches)), axis=-1 ) mask_indices = rand_indices[:, : self.num_mask] unmask_indices = rand_indices[:, self.num_mask :] return mask_indices, unmask_indices def generate_masked_image(self, patches, unmask_indices): # Choose a random patch and it corresponding unmask index. idx = np.random.choice(patches.shape[0]) patch = patches[idx] unmask_index = unmask_indices[idx] # Build a numpy array of same shape as patch. new_patch = np.zeros_like(patch) # Iterate of the new_patch and plug the unmasked patches. count = 0 for i in range(unmask_index.shape[0]): new_patch[unmask_index[i]] = patch[unmask_index[i]] return new_patch, idx Let's see the masking process in action on a sample image. # Create the patch encoder layer. patch_encoder = PatchEncoder() # Get the embeddings and positions. ( unmasked_embeddings, masked_embeddings, unmasked_positions, mask_indices, unmask_indices, ) = patch_encoder(patches=patches) # Show a maksed patch image. new_patch, random_index = patch_encoder.generate_masked_image(patches, unmask_indices) plt.figure(figsize=(10, 10)) plt.subplot(1, 2, 1) img = patch_layer.reconstruct_from_patch(new_patch) plt.imshow(keras.utils.array_to_img(img)) plt.axis(\"off\") plt.title(\"Masked\") plt.subplot(1, 2, 2) img = augmented_images[random_index] plt.imshow(keras.utils.array_to_img(img)) plt.axis(\"off\") plt.title(\"Original\") plt.show() 2021-11-24 01:11:00.182447: I tensorflow/stream_executor/cuda/cuda_blas.cc:1774] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once. png MLP This serves as the fully connected feed forward network of the transformer architecture. def mlp(x, dropout_rate, hidden_units): for units in hidden_units: x = layers.Dense(units, activation=tf.nn.gelu)(x) x = layers.Dropout(dropout_rate)(x) return x MAE encoder The MAE encoder is ViT. The only point to note here is that the encoder outputs a layer normalized output. def create_encoder(num_heads=ENC_NUM_HEADS, num_layers=ENC_LAYERS): inputs = layers.Input((None, ENC_PROJECTION_DIM)) x = inputs for _ in range(num_layers): # Layer normalization 1. x1 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x) # Create a multi-head attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=ENC_PROJECTION_DIM, dropout=0.1 )(x1, x1) # Skip connection 1. x2 = layers.Add()([attention_output, x]) # Layer normalization 2. x3 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x2) # MLP. x3 = mlp(x3, hidden_units=ENC_TRANSFORMER_UNITS, dropout_rate=0.1) # Skip connection 2. x = layers.Add()([x3, x2]) outputs = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x) return keras.Model(inputs, outputs, name=\"mae_encoder\") MAE decoder The authors point out that they use an asymmetric autoencoder model. They use a lightweight decoder that takes \"<10% computation per token vs. the encoder\". We are not specific with the \"<10% computation\" in our implementation but have used a smaller decoder (both in terms of depth and projection dimensions). def create_decoder( num_layers=DEC_LAYERS, num_heads=DEC_NUM_HEADS, image_size=IMAGE_SIZE ): inputs = layers.Input((NUM_PATCHES, ENC_PROJECTION_DIM)) x = layers.Dense(DEC_PROJECTION_DIM)(inputs) for _ in range(num_layers): # Layer normalization 1. 
x1 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x) # Create a multi-head attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=DEC_PROJECTION_DIM, dropout=0.1 )(x1, x1) # Skip connection 1. x2 = layers.Add()([attention_output, x]) # Layer normalization 2. x3 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x2) # MLP. x3 = mlp(x3, hidden_units=DEC_TRANSFORMER_UNITS, dropout_rate=0.1) # Skip connection 2. x = layers.Add()([x3, x2]) x = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x) x = layers.Flatten()(x) pre_final = layers.Dense(units=image_size * image_size * 3, activation=\"sigmoid\")(x) outputs = layers.Reshape((image_size, image_size, 3))(pre_final) return keras.Model(inputs, outputs, name=\"mae_decoder\") MAE trainer This is the trainer module. We wrap the encoder and decoder inside of a tf.keras.Model subclass. This allows us to customize what happens in the model.fit() loop. class MaskedAutoencoder(keras.Model): def __init__( self, train_augmentation_model, test_augmentation_model, patch_layer, patch_encoder, encoder, decoder, **kwargs, ): super().__init__(**kwargs) self.train_augmentation_model = train_augmentation_model self.test_augmentation_model = test_augmentation_model self.patch_layer = patch_layer self.patch_encoder = patch_encoder self.encoder = encoder self.decoder = decoder def calculate_loss(self, images, test=False): # Augment the input images. if test: augmented_images = self.test_augmentation_model(images) else: augmented_images = self.train_augmentation_model(images) # Patch the augmented images. patches = self.patch_layer(augmented_images) # Encode the patches. ( unmasked_embeddings, masked_embeddings, unmasked_positions, mask_indices, unmask_indices, ) = self.patch_encoder(patches) # Pass the unmaksed patche to the encoder. encoder_outputs = self.encoder(unmasked_embeddings) # Create the decoder inputs. encoder_outputs = encoder_outputs + unmasked_positions decoder_inputs = tf.concat([encoder_outputs, masked_embeddings], axis=1) # Decode the inputs. decoder_outputs = self.decoder(decoder_inputs) decoder_patches = self.patch_layer(decoder_outputs) loss_patch = tf.gather(patches, mask_indices, axis=1, batch_dims=1) loss_output = tf.gather(decoder_patches, mask_indices, axis=1, batch_dims=1) # Compute the total loss. total_loss = self.compiled_loss(loss_patch, loss_output) return total_loss, loss_patch, loss_output def train_step(self, images): with tf.GradientTape() as tape: total_loss, loss_patch, loss_output = self.calculate_loss(images) # Apply gradients. train_vars = [ self.train_augmentation_model.trainable_variables, self.patch_layer.trainable_variables, self.patch_encoder.trainable_variables, self.encoder.trainable_variables, self.decoder.trainable_variables, ] grads = tape.gradient(total_loss, train_vars) tv_list = [] for (grad, var) in zip(grads, train_vars): for g, v in zip(grad, var): tv_list.append((g, v)) self.optimizer.apply_gradients(tv_list) # Report progress. self.compiled_metrics.update_state(loss_patch, loss_output) return {m.name: m.result() for m in self.metrics} def test_step(self, images): total_loss, loss_patch, loss_output = self.calculate_loss(images, test=True) # Update the trackers. 
self.compiled_metrics.update_state(loss_patch, loss_output) return {m.name: m.result() for m in self.metrics} Model initialization train_augmentation_model = get_train_augmentation_model() test_augmentation_model = get_test_augmentation_model() patch_layer = Patches() patch_encoder = PatchEncoder() encoder = create_encoder() decoder = create_decoder() mae_model = MaskedAutoencoder( train_augmentation_model=train_augmentation_model, test_augmentation_model=test_augmentation_model, patch_layer=patch_layer, patch_encoder=patch_encoder, encoder=encoder, decoder=decoder, ) Training callbacks Visualization callback # Taking a batch of test inputs to measure the model's progress. test_images = next(iter(test_ds)) class TrainMonitor(keras.callbacks.Callback): def __init__(self, epoch_interval=None): self.epoch_interval = epoch_interval def on_epoch_end(self, epoch, logs=None): if self.epoch_interval and epoch % self.epoch_interval == 0: test_augmented_images = self.model.test_augmentation_model(test_images) test_patches = self.model.patch_layer(test_augmented_images) ( test_unmasked_embeddings, test_masked_embeddings, test_unmasked_positions, test_mask_indices, test_unmask_indices, ) = self.model.patch_encoder(test_patches) test_encoder_outputs = self.model.encoder(test_unmasked_embeddings) test_encoder_outputs = test_encoder_outputs + test_unmasked_positions test_decoder_inputs = tf.concat( [test_encoder_outputs, test_masked_embeddings], axis=1 ) test_decoder_outputs = self.model.decoder(test_decoder_inputs) # Show a masked patch image. test_masked_patch, idx = self.model.patch_encoder.generate_masked_image( test_patches, test_unmask_indices ) print(f\"\nIdx chosen: {idx}\") original_image = test_augmented_images[idx] masked_image = self.model.patch_layer.reconstruct_from_patch( test_masked_patch ) reconstructed_image = test_decoder_outputs[idx] fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(15, 5)) ax[0].imshow(original_image) ax[0].set_title(f\"Original: {epoch:03d}\") ax[1].imshow(masked_image) ax[1].set_title(f\"Masked: {epoch:03d}\") ax[2].imshow(reconstructed_image) ax[2].set_title(f\"Reconstructed: {epoch:03d}\") plt.show() plt.close() Learning rate scheduler # Some code is taken from: # https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2.
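# The class below implements a standard "warmup + cosine decay" schedule:
# for the first `warmup_steps` steps the learning rate increases linearly
# from `warmup_learning_rate` to `learning_rate_base`; after that it follows
#   0.5 * learning_rate_base * (1 + cos(pi * (step - warmup_steps) / (total_steps - warmup_steps)))
# and is clamped to 0.0 once `step` exceeds `total_steps`.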
class WarmUpCosine(keras.optimizers.schedules.LearningRateSchedule): def __init__( self, learning_rate_base, total_steps, warmup_learning_rate, warmup_steps ): super(WarmUpCosine, self).__init__() self.learning_rate_base = learning_rate_base self.total_steps = total_steps self.warmup_learning_rate = warmup_learning_rate self.warmup_steps = warmup_steps self.pi = tf.constant(np.pi) def __call__(self, step): if self.total_steps < self.warmup_steps: raise ValueError(\"Total_steps must be larger or equal to warmup_steps.\") cos_annealed_lr = tf.cos( self.pi * (tf.cast(step, tf.float32) - self.warmup_steps) / float(self.total_steps - self.warmup_steps) ) learning_rate = 0.5 * self.learning_rate_base * (1 + cos_annealed_lr) if self.warmup_steps > 0: if self.learning_rate_base < self.warmup_learning_rate: raise ValueError( \"Learning_rate_base must be larger or equal to \" \"warmup_learning_rate.\" ) slope = ( self.learning_rate_base - self.warmup_learning_rate ) / self.warmup_steps warmup_rate = slope * tf.cast(step, tf.float32) + self.warmup_learning_rate learning_rate = tf.where( step < self.warmup_steps, warmup_rate, learning_rate ) return tf.where( step > self.total_steps, 0.0, learning_rate, name=\"learning_rate\" ) total_steps = int((len(x_train) / BATCH_SIZE) * EPOCHS) warmup_epoch_percentage = 0.15 warmup_steps = int(total_steps * warmup_epoch_percentage) scheduled_lrs = WarmUpCosine( learning_rate_base=LEARNING_RATE, total_steps=total_steps, warmup_learning_rate=0.0, warmup_steps=warmup_steps, ) lrs = [scheduled_lrs(step) for step in range(total_steps)] plt.plot(lrs) plt.xlabel(\"Step\", fontsize=14) plt.ylabel(\"LR\", fontsize=14) plt.show() # Assemble the callbacks. train_callbacks = [TrainMonitor(epoch_interval=5)] png Model compilation and training optimizer = tfa.optimizers.AdamW(learning_rate=scheduled_lrs, weight_decay=WEIGHT_DECAY) # Compile and pretrain the model. mae_model.compile( optimizer=optimizer, loss=keras.losses.MeanSquaredError(), metrics=[\"mae\"] ) history = mae_model.fit( train_ds, epochs=EPOCHS, validation_data=val_ds, callbacks=train_callbacks, ) # Measure its performance. loss, mae = mae_model.evaluate(test_ds) print(f\"Loss: {loss:.2f}\") print(f\"MAE: {mae:.2f}\") Epoch 1/100 157/157 [==============================] - ETA: 0s - loss: 0.0507 - mae: 0.1811 Idx chosen: 92 png 157/157 [==============================] - 19s 54ms/step - loss: 0.0507 - mae: 0.1811 - val_loss: 0.0417 - val_mae: 0.1630 Epoch 2/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0385 - mae: 0.1550 - val_loss: 0.0349 - val_mae: 0.1460 Epoch 3/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0336 - mae: 0.1420 - val_loss: 0.0311 - val_mae: 0.1352 Epoch 4/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0299 - mae: 0.1325 - val_loss: 0.0302 - val_mae: 0.1321 Epoch 5/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0269 - mae: 0.1246 - val_loss: 0.0256 - val_mae: 0.1207 Epoch 6/100 156/157 [============================>.] 
- ETA: 0s - loss: 0.0246 - mae: 0.1181 Idx chosen: 14 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0246 - mae: 0.1181 - val_loss: 0.0241 - val_mae: 0.1166 Epoch 7/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0232 - mae: 0.1142 - val_loss: 0.0237 - val_mae: 0.1152 Epoch 8/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0222 - mae: 0.1113 - val_loss: 0.0216 - val_mae: 0.1088 Epoch 9/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0214 - mae: 0.1086 - val_loss: 0.0217 - val_mae: 0.1096 Epoch 10/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0206 - mae: 0.1064 - val_loss: 0.0215 - val_mae: 0.1100 Epoch 11/100 157/157 [==============================] - ETA: 0s - loss: 0.0203 - mae: 0.1053 Idx chosen: 106 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0203 - mae: 0.1053 - val_loss: 0.0205 - val_mae: 0.1052 Epoch 12/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0200 - mae: 0.1043 - val_loss: 0.0196 - val_mae: 0.1028 Epoch 13/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0196 - mae: 0.1030 - val_loss: 0.0198 - val_mae: 0.1043 Epoch 14/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0193 - mae: 0.1019 - val_loss: 0.0192 - val_mae: 0.1004 Epoch 15/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0191 - mae: 0.1013 - val_loss: 0.0198 - val_mae: 0.1031 Epoch 16/100 157/157 [==============================] - ETA: 0s - loss: 0.0189 - mae: 0.1007 Idx chosen: 71 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0189 - mae: 0.1007 - val_loss: 0.0188 - val_mae: 0.1003 Epoch 17/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0185 - mae: 0.0992 - val_loss: 0.0187 - val_mae: 0.0993 Epoch 18/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0185 - mae: 0.0992 - val_loss: 0.0192 - val_mae: 0.1021 Epoch 19/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0182 - mae: 0.0984 - val_loss: 0.0181 - val_mae: 0.0967 Epoch 20/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0180 - mae: 0.0975 - val_loss: 0.0183 - val_mae: 0.0996 Epoch 21/100 156/157 [============================>.] 
- ETA: 0s - loss: 0.0180 - mae: 0.0975 Idx chosen: 188 png 157/157 [==============================] - 7s 47ms/step - loss: 0.0180 - mae: 0.0975 - val_loss: 0.0185 - val_mae: 0.0992 Epoch 22/100 157/157 [==============================] - 7s 45ms/step - loss: 0.0179 - mae: 0.0971 - val_loss: 0.0181 - val_mae: 0.0977 Epoch 23/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0178 - mae: 0.0966 - val_loss: 0.0179 - val_mae: 0.0962 Epoch 24/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0178 - mae: 0.0966 - val_loss: 0.0176 - val_mae: 0.0952 Epoch 25/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0176 - mae: 0.0960 - val_loss: 0.0182 - val_mae: 0.0984 Epoch 26/100 157/157 [==============================] - ETA: 0s - loss: 0.0175 - mae: 0.0958 Idx chosen: 20 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0175 - mae: 0.0958 - val_loss: 0.0176 - val_mae: 0.0958 Epoch 27/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0175 - mae: 0.0957 - val_loss: 0.0175 - val_mae: 0.0948 Epoch 28/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0175 - mae: 0.0956 - val_loss: 0.0173 - val_mae: 0.0947 Epoch 29/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0172 - mae: 0.0949 - val_loss: 0.0174 - val_mae: 0.0948 Epoch 30/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0172 - mae: 0.0948 - val_loss: 0.0174 - val_mae: 0.0944 Epoch 31/100 157/157 [==============================] - ETA: 0s - loss: 0.0172 - mae: 0.0945 Idx chosen: 102 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0172 - mae: 0.0945 - val_loss: 0.0169 - val_mae: 0.0932 Epoch 32/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0172 - mae: 0.0947 - val_loss: 0.0174 - val_mae: 0.0961 Epoch 33/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0171 - mae: 0.0945 - val_loss: 0.0171 - val_mae: 0.0937 Epoch 34/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0170 - mae: 0.0938 - val_loss: 0.0171 - val_mae: 0.0941 Epoch 35/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0170 - mae: 0.0940 - val_loss: 0.0171 - val_mae: 0.0948 Epoch 36/100 157/157 [==============================] - ETA: 0s - loss: 0.0168 - mae: 0.0933 Idx chosen: 121 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0168 - mae: 0.0933 - val_loss: 0.0170 - val_mae: 0.0935 Epoch 37/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0169 - mae: 0.0935 - val_loss: 0.0168 - val_mae: 0.0933 Epoch 38/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0168 - mae: 0.0933 - val_loss: 0.0170 - val_mae: 0.0935 Epoch 39/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0167 - mae: 0.0931 - val_loss: 0.0169 - val_mae: 0.0934 Epoch 40/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0167 - mae: 0.0930 - val_loss: 0.0169 - val_mae: 0.0934 Epoch 41/100 157/157 [==============================] - ETA: 0s - loss: 0.0167 - mae: 0.0929 Idx chosen: 210 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0167 - mae: 0.0929 - val_loss: 0.0169 - val_mae: 0.0930 Epoch 42/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0167 - mae: 0.0928 - val_loss: 0.0170 - val_mae: 0.0941 Epoch 43/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0166 - mae: 0.0925 - val_loss: 0.0169 - val_mae: 0.0931 
Epoch 44/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0165 - mae: 0.0921 - val_loss: 0.0165 - val_mae: 0.0914 Epoch 45/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0165 - mae: 0.0922 - val_loss: 0.0165 - val_mae: 0.0915 Epoch 46/100 157/157 [==============================] - ETA: 0s - loss: 0.0165 - mae: 0.0922 Idx chosen: 214 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0165 - mae: 0.0922 - val_loss: 0.0166 - val_mae: 0.0914 Epoch 47/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0164 - mae: 0.0919 - val_loss: 0.0164 - val_mae: 0.0912 Epoch 48/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0163 - mae: 0.0914 - val_loss: 0.0166 - val_mae: 0.0923 Epoch 49/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0163 - mae: 0.0914 - val_loss: 0.0164 - val_mae: 0.0914 Epoch 50/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0162 - mae: 0.0912 - val_loss: 0.0164 - val_mae: 0.0916 Epoch 51/100 157/157 [==============================] - ETA: 0s - loss: 0.0162 - mae: 0.0913 Idx chosen: 74 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0162 - mae: 0.0913 - val_loss: 0.0165 - val_mae: 0.0919 Epoch 52/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0162 - mae: 0.0909 - val_loss: 0.0163 - val_mae: 0.0912 Epoch 53/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0161 - mae: 0.0908 - val_loss: 0.0161 - val_mae: 0.0903 Epoch 54/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0161 - mae: 0.0908 - val_loss: 0.0162 - val_mae: 0.0901 Epoch 55/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0161 - mae: 0.0907 - val_loss: 0.0162 - val_mae: 0.0909 Epoch 56/100 156/157 [============================>.] 
- ETA: 0s - loss: 0.0160 - mae: 0.0904 Idx chosen: 202 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0160 - mae: 0.0904 - val_loss: 0.0160 - val_mae: 0.0908 Epoch 57/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0159 - mae: 0.0902 - val_loss: 0.0160 - val_mae: 0.0899 Epoch 58/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0159 - mae: 0.0901 - val_loss: 0.0162 - val_mae: 0.0916 Epoch 59/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0159 - mae: 0.0898 - val_loss: 0.0160 - val_mae: 0.0903 Epoch 60/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0159 - mae: 0.0898 - val_loss: 0.0159 - val_mae: 0.0897 Epoch 61/100 157/157 [==============================] - ETA: 0s - loss: 0.0158 - mae: 0.0894 Idx chosen: 87 png 157/157 [==============================] - 7s 48ms/step - loss: 0.0158 - mae: 0.0894 - val_loss: 0.0160 - val_mae: 0.0895 Epoch 62/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0158 - mae: 0.0895 - val_loss: 0.0161 - val_mae: 0.0905 Epoch 63/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0157 - mae: 0.0891 - val_loss: 0.0158 - val_mae: 0.0894 Epoch 64/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0157 - mae: 0.0890 - val_loss: 0.0158 - val_mae: 0.0889 Epoch 65/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0157 - mae: 0.0890 - val_loss: 0.0159 - val_mae: 0.0893 Epoch 66/100 157/157 [==============================] - ETA: 0s - loss: 0.0156 - mae: 0.0888 Idx chosen: 116 png 157/157 [==============================] - 7s 47ms/step - loss: 0.0156 - mae: 0.0888 - val_loss: 0.0160 - val_mae: 0.0903 Epoch 67/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0156 - mae: 0.0886 - val_loss: 0.0156 - val_mae: 0.0881 Epoch 68/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0155 - mae: 0.0883 - val_loss: 0.0156 - val_mae: 0.0885 Epoch 69/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0154 - mae: 0.0881 - val_loss: 0.0155 - val_mae: 0.0878 Epoch 70/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0154 - mae: 0.0881 - val_loss: 0.0158 - val_mae: 0.0891 Epoch 71/100 156/157 [============================>.] 
- ETA: 0s - loss: 0.0154 - mae: 0.0879 Idx chosen: 99 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0154 - mae: 0.0879 - val_loss: 0.0155 - val_mae: 0.0884 Epoch 72/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0153 - mae: 0.0877 - val_loss: 0.0154 - val_mae: 0.0878 Epoch 73/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0153 - mae: 0.0876 - val_loss: 0.0155 - val_mae: 0.0879 Epoch 74/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0152 - mae: 0.0874 - val_loss: 0.0153 - val_mae: 0.0876 Epoch 75/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0152 - mae: 0.0872 - val_loss: 0.0153 - val_mae: 0.0872 Epoch 76/100 157/157 [==============================] - ETA: 0s - loss: 0.0151 - mae: 0.0870 Idx chosen: 103 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0151 - mae: 0.0870 - val_loss: 0.0153 - val_mae: 0.0873 Epoch 77/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0151 - mae: 0.0869 - val_loss: 0.0152 - val_mae: 0.0872 Epoch 78/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0151 - mae: 0.0867 - val_loss: 0.0152 - val_mae: 0.0869 Epoch 79/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0151 - mae: 0.0867 - val_loss: 0.0151 - val_mae: 0.0863 Epoch 80/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0150 - mae: 0.0865 - val_loss: 0.0150 - val_mae: 0.0860 Epoch 81/100 157/157 [==============================] - ETA: 0s - loss: 0.0150 - mae: 0.0865 Idx chosen: 151 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0150 - mae: 0.0865 - val_loss: 0.0151 - val_mae: 0.0862 Epoch 82/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0149 - mae: 0.0861 - val_loss: 0.0151 - val_mae: 0.0859 Epoch 83/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0149 - mae: 0.0861 - val_loss: 0.0149 - val_mae: 0.0857 Epoch 84/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0149 - mae: 0.0860 - val_loss: 0.0151 - val_mae: 0.0865 Epoch 85/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0148 - mae: 0.0858 - val_loss: 0.0150 - val_mae: 0.0856 Epoch 86/100 157/157 [==============================] - ETA: 0s - loss: 0.0148 - mae: 0.0856 Idx chosen: 130 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0148 - mae: 0.0856 - val_loss: 0.0149 - val_mae: 0.0855 Epoch 87/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0148 - mae: 0.0855 - val_loss: 0.0148 - val_mae: 0.0851 Epoch 88/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0148 - mae: 0.0856 - val_loss: 0.0149 - val_mae: 0.0855 Epoch 89/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0147 - mae: 0.0853 - val_loss: 0.0148 - val_mae: 0.0852 Epoch 90/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0147 - mae: 0.0853 - val_loss: 0.0148 - val_mae: 0.0850 Epoch 91/100 157/157 [==============================] - ETA: 0s - loss: 0.0147 - mae: 0.0852 Idx chosen: 149 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0147 - mae: 0.0852 - val_loss: 0.0148 - val_mae: 0.0851 Epoch 92/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0146 - mae: 0.0851 - val_loss: 0.0147 - val_mae: 0.0849 Epoch 93/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0147 - mae: 0.0853 - val_loss: 0.0147 - val_mae: 0.0849 
Epoch 94/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0147 - mae: 0.0852 - val_loss: 0.0148 - val_mae: 0.0850 Epoch 95/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0147 - mae: 0.0852 - val_loss: 0.0148 - val_mae: 0.0853 Epoch 96/100 157/157 [==============================] - ETA: 0s - loss: 0.0147 - mae: 0.0853 Idx chosen: 52 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0147 - mae: 0.0853 - val_loss: 0.0148 - val_mae: 0.0853 Epoch 97/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0148 - mae: 0.0856 - val_loss: 0.0149 - val_mae: 0.0855 Epoch 98/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0148 - mae: 0.0857 - val_loss: 0.0149 - val_mae: 0.0858 Epoch 99/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0149 - mae: 0.0863 - val_loss: 0.0150 - val_mae: 0.0865 Epoch 100/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0150 - mae: 0.0873 - val_loss: 0.0153 - val_mae: 0.0881 40/40 [==============================] - 1s 15ms/step - loss: 0.0154 - mae: 0.0882 Loss: 0.02 MAE: 0.09 Evaluation with linear probing Extract the encoder model along with other layers # Extract the augmentation layers. train_augmentation_model = mae_model.train_augmentation_model test_augmentation_model = mae_model.test_augmentation_model # Extract the patchers. patch_layer = mae_model.patch_layer patch_encoder = mae_model.patch_encoder patch_encoder.downstream = True # Switch the downstream flag to True. # Extract the encoder. encoder = mae_model.encoder # Pack as a model. downstream_model = keras.Sequential( [ layers.Input((IMAGE_SIZE, IMAGE_SIZE, 3)), patch_layer, patch_encoder, encoder, layers.BatchNormalization(), # Refer to A.1 (Linear probing). layers.GlobalAveragePooling1D(), layers.Dense(NUM_CLASSES, activation=\"softmax\"), ], name=\"linear_probe_model\", ) # Only the final classification layer of the `downstream_model` should be trainable. for layer in downstream_model.layers[:-1]: layer.trainable = False downstream_model.summary() Model: \"linear_probe_model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= patches_1 (Patches) (None, 64, 108) 0 patch_encoder_1 (PatchEncoder) (None, 64, 128) 22252 mae_encoder (Functional) (None, None, 128) 1981696 batch_normalization (BatchNormalization) (None, 64, 128) 512 global_average_pooling1d (GlobalAveragePooling1D) (None, 128) 0 dense_19 (Dense) (None, 10) 1290 ================================================================= Total params: 2,005,750 Trainable params: 1,290 Non-trainable params: 2,004,460 _________________________________________________________________ We are using average pooling to extract learned representations from the MAE encoder. Another approach would be to use a learnable dummy token inside the encoder during pretraining (resembling the [CLS] token). We could then extract representations from that token during the downstream tasks.
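The following is only a sketch of that alternative, not something this example implements: a small layer that prepends a trainable token to the patch embeddings, so that the downstream head could read the representation from position 0 instead of average pooling.

```python
class LearnableClassToken(layers.Layer):
    """Prepends a trainable token to a sequence of patch embeddings."""

    def build(self, input_shape):
        # input_shape: (batch, num_patches, projection_dim)
        self.token = self.add_weight(
            name="class_token",
            shape=(1, 1, input_shape[-1]),
            initializer="random_normal",
            trainable=True,
        )

    def call(self, patch_embeddings):
        batch_size = tf.shape(patch_embeddings)[0]
        token = tf.tile(self.token, [batch_size, 1, 1])
        # Output: (batch, num_patches + 1, projection_dim). Downstream, the
        # representation would be read from position 0, e.g. with
        # layers.Lambda(lambda x: x[:, 0]), instead of average pooling.
        return tf.concat([token, patch_embeddings], axis=1)
```

To use such a token faithfully it would also have to be carried through the encoder during pretraining, which is why this example keeps the simpler average-pooling readout.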
Prepare datasets for linear probing def prepare_data(images, labels, is_train=True): if is_train: augmentation_model = train_augmentation_model else: augmentation_model = test_augmentation_model dataset = tf.data.Dataset.from_tensor_slices((images, labels)) if is_train: dataset = dataset.shuffle(BUFFER_SIZE) dataset = dataset.batch(BATCH_SIZE).map( lambda x, y: (augmentation_model(x), y), num_parallel_calls=AUTO ) return dataset.prefetch(AUTO) train_ds = prepare_data(x_train, y_train) val_ds = prepare_data(x_train, y_train, is_train=False) test_ds = prepare_data(x_test, y_test, is_train=False) Perform linear probing linear_probe_epochs = 50 linear_prob_lr = 0.1 warm_epoch_percentage = 0.1 steps = int((len(x_train) // BATCH_SIZE) * linear_probe_epochs) warmup_steps = int(steps * warm_epoch_percentage) scheduled_lrs = WarmUpCosine( learning_rate_base=linear_prob_lr, total_steps=steps, warmup_learning_rate=0.0, warmup_steps=warmup_steps, ) optimizer = keras.optimizers.SGD(learning_rate=scheduled_lrs, momentum=0.9) downstream_model.compile( optimizer=optimizer, loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"] ) downstream_model.fit(train_ds, validation_data=val_ds, epochs=linear_probe_epochs) loss, accuracy = downstream_model.evaluate(test_ds) accuracy = round(accuracy * 100, 2) print(f\"Accuracy on the test set: {accuracy}%.\") Epoch 1/50 157/157 [==============================] - 11s 43ms/step - loss: 2.2131 - accuracy: 0.1838 - val_loss: 2.0249 - val_accuracy: 0.2986 Epoch 2/50 157/157 [==============================] - 6s 36ms/step - loss: 1.9065 - accuracy: 0.3498 - val_loss: 1.7813 - val_accuracy: 0.3913 Epoch 3/50 157/157 [==============================] - 6s 36ms/step - loss: 1.7443 - accuracy: 0.3995 - val_loss: 1.6705 - val_accuracy: 0.4195 Epoch 4/50 157/157 [==============================] - 6s 36ms/step - loss: 1.6645 - accuracy: 0.4201 - val_loss: 1.6107 - val_accuracy: 0.4344 Epoch 5/50 157/157 [==============================] - 6s 36ms/step - loss: 1.6169 - accuracy: 0.4320 - val_loss: 1.5747 - val_accuracy: 0.4435 Epoch 6/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5843 - accuracy: 0.4364 - val_loss: 1.5476 - val_accuracy: 0.4496 Epoch 7/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5634 - accuracy: 0.4418 - val_loss: 1.5294 - val_accuracy: 0.4540 Epoch 8/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5462 - accuracy: 0.4452 - val_loss: 1.5158 - val_accuracy: 0.4575 Epoch 9/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5365 - accuracy: 0.4468 - val_loss: 1.5068 - val_accuracy: 0.4602 Epoch 10/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5237 - accuracy: 0.4541 - val_loss: 1.4971 - val_accuracy: 0.4616 Epoch 11/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5171 - accuracy: 0.4539 - val_loss: 1.4902 - val_accuracy: 0.4620 Epoch 12/50 157/157 [==============================] - 6s 37ms/step - loss: 1.5127 - accuracy: 0.4552 - val_loss: 1.4850 - val_accuracy: 0.4640 Epoch 13/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5027 - accuracy: 0.4590 - val_loss: 1.4796 - val_accuracy: 0.4669 Epoch 14/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4985 - accuracy: 0.4587 - val_loss: 1.4747 - val_accuracy: 0.4673 Epoch 15/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4975 - accuracy: 0.4588 - val_loss: 1.4694 - val_accuracy: 0.4694 Epoch 16/50 157/157 
[==============================] - 6s 36ms/step - loss: 1.4933 - accuracy: 0.4596 - val_loss: 1.4661 - val_accuracy: 0.4698 Epoch 17/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4889 - accuracy: 0.4608 - val_loss: 1.4628 - val_accuracy: 0.4721 Epoch 18/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4869 - accuracy: 0.4659 - val_loss: 1.4623 - val_accuracy: 0.4721 Epoch 19/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4826 - accuracy: 0.4639 - val_loss: 1.4585 - val_accuracy: 0.4716 Epoch 20/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4813 - accuracy: 0.4653 - val_loss: 1.4559 - val_accuracy: 0.4743 Epoch 21/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4824 - accuracy: 0.4644 - val_loss: 1.4542 - val_accuracy: 0.4746 Epoch 22/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4768 - accuracy: 0.4667 - val_loss: 1.4526 - val_accuracy: 0.4757 Epoch 23/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4775 - accuracy: 0.4644 - val_loss: 1.4507 - val_accuracy: 0.4751 Epoch 24/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4750 - accuracy: 0.4670 - val_loss: 1.4481 - val_accuracy: 0.4756 Epoch 25/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4726 - accuracy: 0.4663 - val_loss: 1.4467 - val_accuracy: 0.4767 Epoch 26/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4706 - accuracy: 0.4681 - val_loss: 1.4450 - val_accuracy: 0.4781 Epoch 27/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4660 - accuracy: 0.4706 - val_loss: 1.4456 - val_accuracy: 0.4766 Epoch 28/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4664 - accuracy: 0.4707 - val_loss: 1.4443 - val_accuracy: 0.4776 Epoch 29/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4678 - accuracy: 0.4674 - val_loss: 1.4411 - val_accuracy: 0.4802 Epoch 30/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4654 - accuracy: 0.4704 - val_loss: 1.4411 - val_accuracy: 0.4801 Epoch 31/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4655 - accuracy: 0.4702 - val_loss: 1.4402 - val_accuracy: 0.4787 Epoch 32/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4620 - accuracy: 0.4735 - val_loss: 1.4402 - val_accuracy: 0.4781 Epoch 33/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4668 - accuracy: 0.4699 - val_loss: 1.4397 - val_accuracy: 0.4783 Epoch 34/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4619 - accuracy: 0.4724 - val_loss: 1.4382 - val_accuracy: 0.4793 Epoch 35/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4652 - accuracy: 0.4697 - val_loss: 1.4374 - val_accuracy: 0.4800 Epoch 36/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4618 - accuracy: 0.4707 - val_loss: 1.4372 - val_accuracy: 0.4794 Epoch 37/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4606 - accuracy: 0.4710 - val_loss: 1.4369 - val_accuracy: 0.4793 Epoch 38/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4613 - accuracy: 0.4706 - val_loss: 1.4363 - val_accuracy: 0.4806 Epoch 39/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4631 - accuracy: 0.4713 - val_loss: 1.4361 - val_accuracy: 0.4804 Epoch 40/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4620 - accuracy: 0.4695 - val_loss: 
1.4357 - val_accuracy: 0.4802 Epoch 41/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4639 - accuracy: 0.4706 - val_loss: 1.4355 - val_accuracy: 0.4801 Epoch 42/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4588 - accuracy: 0.4735 - val_loss: 1.4352 - val_accuracy: 0.4802 Epoch 43/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4573 - accuracy: 0.4734 - val_loss: 1.4352 - val_accuracy: 0.4794 Epoch 44/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4597 - accuracy: 0.4723 - val_loss: 1.4350 - val_accuracy: 0.4796 Epoch 45/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4572 - accuracy: 0.4741 - val_loss: 1.4349 - val_accuracy: 0.4799 Epoch 46/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4561 - accuracy: 0.4756 - val_loss: 1.4348 - val_accuracy: 0.4801 Epoch 47/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4593 - accuracy: 0.4730 - val_loss: 1.4348 - val_accuracy: 0.4801 Epoch 48/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4613 - accuracy: 0.4733 - val_loss: 1.4348 - val_accuracy: 0.4802 Epoch 49/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4591 - accuracy: 0.4710 - val_loss: 1.4348 - val_accuracy: 0.4803 Epoch 50/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4566 - accuracy: 0.4766 - val_loss: 1.4348 - val_accuracy: 0.4803 40/40 [==============================] - 1s 17ms/step - loss: 1.4375 - accuracy: 0.4790 Accuracy on the test set: 47.9%. We believe that with a more sophisticated hyperparameter tuning process and a longer pretraining it is possible to improve this performance further. For comparison, we took the encoder architecture and trained it from scratch in a fully supervised manner. This gave us ~76% test top-1 accuracy. The authors of MAE demonstrates strong performance on the ImageNet-1k dataset as well as other downstream tasks like object detection and semantic segmentation. Final notes We refer the interested readers to other examples on self-supervised learning present on keras.io: SimCLR NNCLR SimSiam This idea of using BERT flavored pretraining in computer vision was also explored in Selfie, but it could not demonstrate strong results. Another concurrent work that explores the idea of masked image modeling is SimMIM. Finally, as a fun fact, we, the authors of this example also explored the idea of \"reconstruction as a pretext task\" in 2020 but we could not prevent the network from representation collapse, and hence we did not get strong downstream performance. We would like to thank Xinlei Chen (one of the authors of MAE) for helpful discussions. We are grateful to JarvisLabs and Google Developers Experts program for helping with GPU credits Example of using similarity metric learning on CIFAR-10 images. Overview This example is based on the \"Metric learning for image similarity search\" example. We aim to use the same data set but implement the model using TensorFlow Similarity. Metric learning aims to train models that can embed inputs into a high-dimensional space such that \"similar\" inputs are pulled closer to each other and \"dissimilar\" inputs are pushed farther apart. Once trained, these models can produce embeddings for downstream systems where such similarity is useful, for instance as a ranking signal for search or as a form of pretrained embedding model for another supervised problem. 
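As a concrete, simplified illustration of the "ranking signal" use case (not part of the example itself): once embeddings are L2-normalized, ranking a set of candidates against a query reduces to a dot-product sort.

```python
import numpy as np

def rank_by_similarity(query_embedding, candidate_embeddings):
    # Both arguments are assumed to be L2-normalized:
    # query_embedding has shape (dim,), candidate_embeddings has shape (n, dim).
    similarities = candidate_embeddings @ query_embedding
    return np.argsort(-similarities)  # candidate indices, most similar first
```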
For a more detailed overview of metric learning, see: What is metric learning? \"Using crossentropy for metric learning\" tutorial Setup This tutorial will use the TensorFlow Similarity library to learn and evaluate the similarity embedding. TensorFlow Similarity provides components that: Make training contrastive models simple and fast. Make it easier to ensure that batches contain pairs of examples. Enable the evaluation of the quality of the embedding. import random from matplotlib import pyplot as plt from mpl_toolkits import axes_grid1 import numpy as np import tensorflow as tf from tensorflow import keras import tensorflow_similarity as tfsim tfsim.utils.tf_cap_memory() print(\"TensorFlow:\", tf.__version__) print(\"TensorFlow Similarity:\", tfsim.__version__) TensorFlow: 2.6.0 TensorFlow Similarity: 0.14 Dataset samplers We will be using the CIFAR-10 dataset for this tutorial. For a similarity model to learn efficiently, each batch must contain at least 2 examples of each class. To make this easy, TensorFlow Similarity offers Sampler objects that enable you to set both the number of classes and the minimum number of examples of each class per batch. The train and validation datasets will be created using the TFDatasetMultiShotMemorySampler object. This creates a sampler that loads datasets from TensorFlow Datasets and yields batches containing a target number of classes and a target number of examples per class. Additionally, we can restrict the sampler to only yield the subset of classes defined in class_list, enabling us to train on a subset of the classes and then test how the embedding generalizes to the unseen classes. This can be useful when working on few-shot learning problems. The following cell creates a train_ds sampler that: Loads the CIFAR-10 dataset from TFDS and then takes examples_per_class_per_batch examples per class per batch. Ensures the sampler restricts the classes to those defined in class_list. Ensures each batch contains 10 different classes with 8 examples each. We also create a validation dataset in the same way, but we limit the total number of examples per class to 100 and set the examples per class per batch to the default of 2. # This determines the number of classes used during training. # Here we are using all the classes. num_known_classes = 10 class_list = random.sample(population=range(10), k=num_known_classes) classes_per_batch = 10 # Passing multiple examples per class per batch ensures that each example has # multiple positive pairs. This can be useful when performing triplet mining or # when using losses like `MultiSimilarityLoss` or `CircleLoss` as these can # take a weighted mix of all the positive pairs. In general, more examples per # class will lead to more information for the positive pairs, while more classes # per batch will provide more varied information in the negative pairs. However, # the losses compute the pairwise distance between the examples in a batch so # the upper limit of the batch size is restricted by the memory.
examples_per_class_per_batch = 8 print( \"Batch size is: \" f\"{min(classes_per_batch, num_known_classes) * examples_per_class_per_batch}\" ) print(\" Create Training Data \".center(34, \"#\")) train_ds = tfsim.samplers.TFDatasetMultiShotMemorySampler( \"cifar10\", classes_per_batch=min(classes_per_batch, num_known_classes), splits=\"train\", steps_per_epoch=4000, examples_per_class_per_batch=examples_per_class_per_batch, class_list=class_list, ) print(\"\n\" + \" Create Validation Data \".center(34, \"#\")) val_ds = tfsim.samplers.TFDatasetMultiShotMemorySampler( \"cifar10\", classes_per_batch=classes_per_batch, splits=\"test\", total_examples_per_class=100, ) Batch size is: 80 ###### Create Training Data ###### 2021-10-07 22:48:06.609114: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. converting train: 0%| | 0/50000 [00:00<?, ?it/s] similarities = tf.einsum( \"ae,pe->ap\", anchor_embeddings, positive_embeddings ) # Since we intend to use these as logits we scale them by a temperature. # This value would normally be chosen as a hyper parameter. temperature = 0.2 similarities /= temperature # We use these similarities as logits for a softmax. The labels for # this call are just the sequence [0, 1, 2, ..., num_classes] since we # want the main diagonal values, which correspond to the anchor/positive # pairs, to be high. This loss will move embeddings for the # anchor/positive pairs together and move all other pairs apart. sparse_labels = tf.range(num_classes) loss = self.compiled_loss(sparse_labels, similarities) # Calculate gradients and apply via optimizer. gradients = tape.gradient(loss, self.trainable_variables) self.optimizer.apply_gradients(zip(gradients, self.trainable_variables)) # Update and return metrics (specifically the one for the loss value). self.compiled_metrics.update_state(sparse_labels, similarities) return {m.name: m.result() for m in self.metrics} Next we describe the architecture that maps from an image to an embedding. This model simply consists of a sequence of 2d convolutions followed by global pooling with a final linear projection to an embedding space. As is common in metric learning we normalise the embeddings so that we can use simple dot products to measure similarity. For simplicity this model is intentionally small. inputs = layers.Input(shape=(height_width, height_width, 3)) x = layers.Conv2D(filters=32, kernel_size=3, strides=2, activation=\"relu\")(inputs) x = layers.Conv2D(filters=64, kernel_size=3, strides=2, activation=\"relu\")(x) x = layers.Conv2D(filters=128, kernel_size=3, strides=2, activation=\"relu\")(x) x = layers.GlobalAveragePooling2D()(x) embeddings = layers.Dense(units=8, activation=None)(x) embeddings = tf.nn.l2_normalize(embeddings, axis=-1) model = EmbeddingModel(inputs, embeddings) Finally we run the training. On a Google Colab GPU instance this takes about a minute.
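Before running it, here is a small self-contained sketch (ours, not part of the original example) of the anchor/positive loss described above, with random unit-length vectors standing in for real embeddings. It reproduces the diagonal-label trick outside of the training loop.

import tensorflow as tf

# Toy batch of 3 anchor/positive pairs with 8-dimensional unit-length embeddings.
anchor_embeddings = tf.math.l2_normalize(tf.random.normal((3, 8)), axis=-1)
positive_embeddings = tf.math.l2_normalize(tf.random.normal((3, 8)), axis=-1)

# Pairwise similarities: entry [i, j] compares anchor i with positive j.
similarities = tf.einsum("ae,pe->ap", anchor_embeddings, positive_embeddings) / 0.2

# The "correct" logits sit on the main diagonal, so the sparse labels are [0, 1, 2].
sparse_labels = tf.range(3)
loss = tf.keras.losses.sparse_categorical_crossentropy(
    sparse_labels, similarities, from_logits=True
)
print(loss.numpy())  # one loss value per anchor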
model.compile( optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), ) history = model.fit(AnchorPositivePairs(num_batchs=1000), epochs=20) plt.plot(history.history[\"loss\"]) plt.show() Epoch 1/20 1000/1000 [==============================] - 4s 4ms/step - loss: 2.2475 Epoch 2/20 1000/1000 [==============================] - 5s 5ms/step - loss: 2.1246 Epoch 3/20 1000/1000 [==============================] - 7s 7ms/step - loss: 2.0519 Epoch 4/20 1000/1000 [==============================] - 8s 8ms/step - loss: 2.0011 Epoch 5/20 1000/1000 [==============================] - 9s 9ms/step - loss: 1.9601 Epoch 6/20 1000/1000 [==============================] - 9s 9ms/step - loss: 1.9214 Epoch 7/20 1000/1000 [==============================] - 9s 9ms/step - loss: 1.9094 Epoch 8/20 1000/1000 [==============================] - 10s 10ms/step - loss: 1.8669 Epoch 9/20 1000/1000 [==============================] - 10s 10ms/step - loss: 1.8462 Epoch 10/20 1000/1000 [==============================] - 10s 10ms/step - loss: 1.8095 Epoch 11/20 1000/1000 [==============================] - 10s 10ms/step - loss: 1.7854 Epoch 12/20 1000/1000 [==============================] - 11s 11ms/step - loss: 1.7595 Epoch 13/20 1000/1000 [==============================] - 11s 11ms/step - loss: 1.7538 Epoch 14/20 1000/1000 [==============================] - 11s 11ms/step - loss: 1.7198 Epoch 15/20 906/1000 [==========================>...] - ETA: 1s - loss: 1.7017 Testing We can review the quality of this model by applying it to the test set and considering near neighbours in the embedding space. First we embed the test set and calculate all near neighbours. Recall that since the embeddings are unit length we can calculate cosine similarity via dot products. near_neighbours_per_example = 10 embeddings = model.predict(x_test) gram_matrix = np.einsum(\"ae,be->ab\", embeddings, embeddings) near_neighbours = np.argsort(gram_matrix.T)[:, -(near_neighbours_per_example + 1) :] As a visual check of these embeddings we can build a collage of the near neighbours for 5 random examples. The first column of the image below is a randomly selected image, the following 10 columns show the nearest neighbours in order of similarity. num_collage_examples = 5 examples = np.empty( ( num_collage_examples, near_neighbours_per_example + 1, height_width, height_width, 3, ), dtype=np.float32, ) for row_idx in range(num_collage_examples): examples[row_idx, 0] = x_test[row_idx] anchor_near_neighbours = reversed(near_neighbours[row_idx][:-1]) for col_idx, nn_idx in enumerate(anchor_near_neighbours): examples[row_idx, col_idx + 1] = x_test[nn_idx] show_collage(examples) png We can also get a quantified view of the performance by considering the correctness of near neighbours in terms of a confusion matrix. Let us sample 10 examples from each of the 10 classes and consider their near neighbours as a form of prediction; that is, does the example and its near neighbours share the same class? We observe that each animal class does generally well, and is confused the most with the other animal classes. The vehicle classes follow the same pattern. confusion_matrix = np.zeros((num_classes, num_classes)) # For each class. for class_idx in range(num_classes): # Consider 10 examples. example_idxs = class_idx_to_test_idxs[class_idx][:10] for y_test_idx in example_idxs: # And count the classes of its near neighbours. 
for nn_idx in near_neighbours[y_test_idx][:-1]: nn_class_idx = y_test[nn_idx] confusion_matrix[class_idx, nn_class_idx] += 1 # Display a confusion matrix. labels = [ \"Airplane\", \"Automobile\", \"Bird\", \"Cat\", \"Deer\", \"Dog\", \"Frog\", \"Horse\", \"Ship\", \"Truck\", ] disp = ConfusionMatrixDisplay(confusion_matrix=confusion_matrix, display_labels=labels) disp.plot(include_values=True, cmap=\"viridis\", ax=None, xticks_rotation=\"vertical\") plt.show() png
Data augmentation using the mixup technique for image classification. Introduction mixup is a domain-agnostic data augmentation technique proposed in mixup: Beyond Empirical Risk Minimization by Zhang et al. It's implemented with the following formulas: new_x = lambda * x1 + (1 - lambda) * x2 and new_y = lambda * y1 + (1 - lambda) * y2, where (x1, y1) and (x2, y2) are two randomly drawn examples. (Note that the lambda values are values within the [0, 1] range and are sampled from the Beta distribution.) The technique is quite systematically named - we are literally mixing up the features and their corresponding labels. Implementation-wise it's simple. Neural networks are prone to memorizing corrupt labels. mixup relaxes this by combining different features with one another (same happens for the labels too) so that a network does not get overconfident about the relationship between the features and their labels. mixup is specifically useful when we are not sure about selecting a set of augmentation transforms for a given dataset, medical imaging datasets, for example. mixup can be extended to a variety of data modalities such as computer vision, natural language processing, speech, and so on. This example requires TensorFlow 2.4 or higher. Setup import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.keras import layers Prepare the dataset In this example, we will be using the FashionMNIST dataset. But this same recipe can be used for other classification datasets as well. (x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data() x_train = x_train.astype(\"float32\") / 255.0 x_train = np.reshape(x_train, (-1, 28, 28, 1)) y_train = tf.one_hot(y_train, 10) x_test = x_test.astype(\"float32\") / 255.0 x_test = np.reshape(x_test, (-1, 28, 28, 1)) y_test = tf.one_hot(y_test, 10) Define hyperparameters AUTO = tf.data.AUTOTUNE BATCH_SIZE = 64 EPOCHS = 10 Convert the data into TensorFlow Dataset objects # Put aside a few samples to create our validation set val_samples = 2000 x_val, y_val = x_train[:val_samples], y_train[:val_samples] new_x_train, new_y_train = x_train[val_samples:], y_train[val_samples:] train_ds_one = ( tf.data.Dataset.from_tensor_slices((new_x_train, new_y_train)) .shuffle(BATCH_SIZE * 100) .batch(BATCH_SIZE) ) train_ds_two = ( tf.data.Dataset.from_tensor_slices((new_x_train, new_y_train)) .shuffle(BATCH_SIZE * 100) .batch(BATCH_SIZE) ) # Because we will be mixing up the images and their corresponding labels, we will be # combining two shuffled datasets from the same training data. train_ds = tf.data.Dataset.zip((train_ds_one, train_ds_two)) val_ds = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(BATCH_SIZE) test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE) Define the mixup technique function To perform the mixup routine, we create new virtual datasets using the training data from the same dataset, and apply a lambda value within the [0, 1] range sampled from a Beta distribution — such that, for example, new_x = lambda * x1 + (1 - lambda) * x2 (where x1 and x2 are images) and the same equation is applied to the labels as well.
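As a quick worked example (illustrative only, not from the original notebook): mixing a class-2 one-hot label with a class-7 one-hot label at lambda = 0.7 produces a soft label with 0.7 and 0.3 in those two positions, and the images are blended with the same weights.

import numpy as np

lam = 0.7
x1, x2 = np.random.rand(28, 28, 1), np.random.rand(28, 28, 1)  # stand-ins for two images
y1, y2 = np.eye(10)[2], np.eye(10)[7]                          # one-hot labels for classes 2 and 7

new_x = lam * x1 + (1 - lam) * x2
new_y = lam * y1 + (1 - lam) * y2
print(new_y)  # approximately [0. 0. 0.7 0. 0. 0. 0. 0.3 0. 0.]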
def sample_beta_distribution(size, concentration_0=0.2, concentration_1=0.2): gamma_1_sample = tf.random.gamma(shape=[size], alpha=concentration_1) gamma_2_sample = tf.random.gamma(shape=[size], alpha=concentration_0) return gamma_1_sample / (gamma_1_sample + gamma_2_sample) def mix_up(ds_one, ds_two, alpha=0.2): # Unpack two datasets images_one, labels_one = ds_one images_two, labels_two = ds_two batch_size = tf.shape(images_one)[0] # Sample lambda and reshape it to do the mixup l = sample_beta_distribution(batch_size, alpha, alpha) x_l = tf.reshape(l, (batch_size, 1, 1, 1)) y_l = tf.reshape(l, (batch_size, 1)) # Perform mixup on both images and labels by combining a pair of images/labels # (one from each dataset) into one image/label images = images_one * x_l + images_two * (1 - x_l) labels = labels_one * y_l + labels_two * (1 - y_l) return (images, labels) Note that here , we are combining two images to create a single one. Theoretically, we can combine as many we want but that comes at an increased computation cost. In certain cases, it may not help improve the performance as well. Visualize the new augmented dataset # First create the new dataset using our `mix_up` utility train_ds_mu = train_ds.map( lambda ds_one, ds_two: mix_up(ds_one, ds_two, alpha=0.2), num_parallel_calls=AUTO ) # Let's preview 9 samples from the dataset sample_images, sample_labels = next(iter(train_ds_mu)) plt.figure(figsize=(10, 10)) for i, (image, label) in enumerate(zip(sample_images[:9], sample_labels[:9])): ax = plt.subplot(3, 3, i + 1) plt.imshow(image.numpy().squeeze()) print(label.numpy().tolist()) plt.axis(\"off\") [0.01706075668334961, 0.0, 0.0, 0.9829392433166504, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] [0.0, 0.5761554837226868, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.42384451627731323, 0.0] [0.0, 0.0, 0.9999957084655762, 0.0, 4.291534423828125e-06, 0.0, 0.0, 0.0, 0.0, 0.0] [0.0, 0.0, 0.03438800573348999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.96561199426651, 0.0] [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0] [0.0, 0.0, 0.9808260202407837, 0.0, 0.0, 0.0, 0.01917397230863571, 0.0, 0.0, 0.0] [0.0, 0.9999748468399048, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.5153160095214844e-05] [0.0, 0.0, 0.0, 0.0002035107754636556, 0.0, 0.9997965097427368, 0.0, 0.0, 0.0, 0.0] [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2410212755203247, 0.0, 0.0, 0.7589787244796753] png Model building def get_training_model(): model = tf.keras.Sequential( [ layers.Conv2D(16, (5, 5), activation=\"relu\", input_shape=(28, 28, 1)), layers.MaxPooling2D(pool_size=(2, 2)), layers.Conv2D(32, (5, 5), activation=\"relu\"), layers.MaxPooling2D(pool_size=(2, 2)), layers.Dropout(0.2), layers.GlobalAvgPool2D(), layers.Dense(128, activation=\"relu\"), layers.Dense(10, activation=\"softmax\"), ] ) return model For the sake of reproducibility, we serialize the initial random weights of our shallow network. initial_model = get_training_model() initial_model.save_weights(\"initial_weights.h5\") 1. 
Train the model with the mixed up dataset model = get_training_model() model.load_weights(\"initial_weights.h5\") model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]) model.fit(train_ds_mu, validation_data=val_ds, epochs=EPOCHS) _, test_acc = model.evaluate(test_ds) print(\"Test accuracy: {:.2f}%\".format(test_acc * 100)) Epoch 1/10 907/907 [==============================] - 38s 41ms/step - loss: 1.4440 - accuracy: 0.5173 - val_loss: 0.7120 - val_accuracy: 0.7405 Epoch 2/10 907/907 [==============================] - 38s 42ms/step - loss: 0.9869 - accuracy: 0.7074 - val_loss: 0.5996 - val_accuracy: 0.7780 Epoch 3/10 907/907 [==============================] - 38s 42ms/step - loss: 0.9096 - accuracy: 0.7451 - val_loss: 0.5197 - val_accuracy: 0.8285 Epoch 4/10 907/907 [==============================] - 38s 42ms/step - loss: 0.8485 - accuracy: 0.7741 - val_loss: 0.4830 - val_accuracy: 0.8380 Epoch 5/10 907/907 [==============================] - 38s 42ms/step - loss: 0.8032 - accuracy: 0.7916 - val_loss: 0.4543 - val_accuracy: 0.8445 Epoch 6/10 907/907 [==============================] - 38s 42ms/step - loss: 0.7675 - accuracy: 0.8032 - val_loss: 0.4398 - val_accuracy: 0.8470 Epoch 7/10 907/907 [==============================] - 38s 42ms/step - loss: 0.7474 - accuracy: 0.8098 - val_loss: 0.4262 - val_accuracy: 0.8495 Epoch 8/10 907/907 [==============================] - 38s 42ms/step - loss: 0.7337 - accuracy: 0.8145 - val_loss: 0.3950 - val_accuracy: 0.8650 Epoch 9/10 907/907 [==============================] - 38s 42ms/step - loss: 0.7154 - accuracy: 0.8218 - val_loss: 0.3822 - val_accuracy: 0.8725 Epoch 10/10 907/907 [==============================] - 38s 42ms/step - loss: 0.7095 - accuracy: 0.8224 - val_loss: 0.3563 - val_accuracy: 0.8720 157/157 [==============================] - 2s 14ms/step - loss: 0.3821 - accuracy: 0.8726 Test accuracy: 87.26% 2. 
Train the model without the mixed up dataset model = get_training_model() model.load_weights(\"initial_weights.h5\") model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]) # Notice that we are NOT using the mixed up dataset here model.fit(train_ds_one, validation_data=val_ds, epochs=EPOCHS) _, test_acc = model.evaluate(test_ds) print(\"Test accuracy: {:.2f}%\".format(test_acc * 100)) Epoch 1/10 907/907 [==============================] - 37s 40ms/step - loss: 1.2037 - accuracy: 0.5553 - val_loss: 0.6732 - val_accuracy: 0.7565 Epoch 2/10 907/907 [==============================] - 37s 40ms/step - loss: 0.6724 - accuracy: 0.7462 - val_loss: 0.5715 - val_accuracy: 0.7940 Epoch 3/10 907/907 [==============================] - 37s 40ms/step - loss: 0.5828 - accuracy: 0.7897 - val_loss: 0.5042 - val_accuracy: 0.8210 Epoch 4/10 907/907 [==============================] - 37s 40ms/step - loss: 0.5203 - accuracy: 0.8115 - val_loss: 0.4587 - val_accuracy: 0.8405 Epoch 5/10 907/907 [==============================] - 36s 40ms/step - loss: 0.4802 - accuracy: 0.8255 - val_loss: 0.4602 - val_accuracy: 0.8340 Epoch 6/10 907/907 [==============================] - 36s 40ms/step - loss: 0.4566 - accuracy: 0.8351 - val_loss: 0.3985 - val_accuracy: 0.8700 Epoch 7/10 907/907 [==============================] - 37s 40ms/step - loss: 0.4273 - accuracy: 0.8457 - val_loss: 0.3764 - val_accuracy: 0.8685 Epoch 8/10 907/907 [==============================] - 36s 40ms/step - loss: 0.4133 - accuracy: 0.8481 - val_loss: 0.3704 - val_accuracy: 0.8735 Epoch 9/10 907/907 [==============================] - 36s 40ms/step - loss: 0.3951 - accuracy: 0.8543 - val_loss: 0.3715 - val_accuracy: 0.8680 Epoch 10/10 907/907 [==============================] - 36s 40ms/step - loss: 0.3850 - accuracy: 0.8586 - val_loss: 0.3458 - val_accuracy: 0.8735 157/157 [==============================] - 2s 13ms/step - loss: 0.3817 - accuracy: 0.8636 Test accuracy: 86.36% Readers are encouraged to try out mixup on different datasets from different domains and experiment with the lambda parameter. You are strongly advised to check out the original paper as well - the authors present several ablation studies on mixup showing how it can improve generalization, as well as show their results of combining more than two images to create a single one. Notes With mixup, you can create synthetic examples — especially when you lack a large dataset - without incurring high computational costs. Label smoothing and mixup usually do not work well together because label smoothing already modifies the hard labels by some factor. mixup does not work well when you are using Supervised Contrastive Learning (SCL) since SCL expects the true labels during its pre-training phase. A few other benefits of mixup include (as described in the paper) robustness to adversarial examples and stabilized GAN (Generative Adversarial Networks) training. There are a number of data augmentation techniques that extend mixup such as CutMix and AugMix. MobileViT for image classification with combined benefits of convolutions and Transformers. Introduction In this example, we implement the MobileViT architecture (Mehta et al.), which combines the benefits of Transformers (Vaswani et al.) and convolutions. With Transformers, we can capture long-range dependencies that result in global representations. With convolutions, we can capture spatial relationships that model locality. 
Besides combining the properties of Transformers and convolutions, the authors introduce MobileViT as a general-purpose mobile-friendly backbone for different image recognition tasks. Their findings suggest that, performance-wise, MobileViT is better than other models with the same or higher complexity (MobileNetV3, for example), while being efficient on mobile devices. Imports import tensorflow as tf from keras.applications import imagenet_utils from tensorflow.keras import layers from tensorflow import keras import tensorflow_datasets as tfds import tensorflow_addons as tfa tfds.disable_progress_bar() Hyperparameters # Values are from table 4. patch_size = 4 # 2x2, for the Transformer blocks. image_size = 256 expansion_factor = 2 # expansion factor for the MobileNetV2 blocks. MobileViT utilities The MobileViT architecture is comprised of the following blocks: Strided 3x3 convolutions that process the input image. MobileNetV2-style inverted residual blocks for downsampling the resolution of the intermediate feature maps. MobileViT blocks that combine the benefits of Transformers and convolutions. It is presented in the figure below (taken from the original paper): def conv_block(x, filters=16, kernel_size=3, strides=2): conv_layer = layers.Conv2D( filters, kernel_size, strides=strides, activation=tf.nn.swish, padding=\"same\" ) return conv_layer(x) # Reference: https://git.io/JKgtC def inverted_residual_block(x, expanded_channels, output_channels, strides=1): m = layers.Conv2D(expanded_channels, 1, padding=\"same\", use_bias=False)(x) m = layers.BatchNormalization()(m) m = tf.nn.swish(m) if strides == 2: m = layers.ZeroPadding2D(padding=imagenet_utils.correct_pad(m, 3))(m) m = layers.DepthwiseConv2D( 3, strides=strides, padding=\"same\" if strides == 1 else \"valid\", use_bias=False )(m) m = layers.BatchNormalization()(m) m = tf.nn.swish(m) m = layers.Conv2D(output_channels, 1, padding=\"same\", use_bias=False)(m) m = layers.BatchNormalization()(m) if tf.math.equal(x.shape[-1], output_channels) and strides == 1: return layers.Add()([m, x]) return m # Reference: # https://keras.io/examples/vision/image_classification_with_vision_transformer/ def mlp(x, hidden_units, dropout_rate): for units in hidden_units: x = layers.Dense(units, activation=tf.nn.swish)(x) x = layers.Dropout(dropout_rate)(x) return x def transformer_block(x, transformer_layers, projection_dim, num_heads=2): for _ in range(transformer_layers): # Layer normalization 1. x1 = layers.LayerNormalization(epsilon=1e-6)(x) # Create a multi-head attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=projection_dim, dropout=0.1 )(x1, x1) # Skip connection 1. x2 = layers.Add()([attention_output, x]) # Layer normalization 2. x3 = layers.LayerNormalization(epsilon=1e-6)(x2) # MLP. x3 = mlp(x3, hidden_units=[x.shape[-1] * 2, x.shape[-1]], dropout_rate=0.1,) # Skip connection 2. x = layers.Add()([x3, x2]) return x def mobilevit_block(x, num_blocks, projection_dim, strides=1): # Local projection with convolutions. local_features = conv_block(x, filters=projection_dim, strides=strides) local_features = conv_block( local_features, filters=projection_dim, kernel_size=1, strides=strides ) # Unfold into patches and then pass through Transformers. 
num_patches = int((local_features.shape[1] * local_features.shape[2]) / patch_size) non_overlapping_patches = layers.Reshape((patch_size, num_patches, projection_dim))( local_features ) global_features = transformer_block( non_overlapping_patches, num_blocks, projection_dim ) # Fold into conv-like feature-maps. folded_feature_map = layers.Reshape((*local_features.shape[1:-1], projection_dim))( global_features ) # Apply point-wise conv -> concatenate with the input features. folded_feature_map = conv_block( folded_feature_map, filters=x.shape[-1], kernel_size=1, strides=strides ) local_global_features = layers.Concatenate(axis=-1)([x, folded_feature_map]) # Fuse the local and global features using a convolution layer. local_global_features = conv_block( local_global_features, filters=projection_dim, strides=strides ) return local_global_features More on the MobileViT block: First, the feature representations (A) go through convolution blocks that capture local relationships. The expected shape of a single entry here would be (h, w, num_channels). Then they get unfolded into another vector with shape (p, n, num_channels), where p is the area of a small patch, and n is (h * w) / p. So, we end up with n non-overlapping patches. This unfolded vector is then passed through a Transformer block that captures global relationships between the patches. The output vector (B) is again folded into a vector of shape (h, w, num_channels) resembling a feature map coming out of convolutions. Vectors A and B are then passed through two more convolutional layers to fuse the local and global representations. Notice how the spatial resolution of the final vector remains unchanged at this point. The authors also present an explanation of how the MobileViT block resembles a convolution block of a CNN. For more details, please refer to the original paper. Next, we combine these blocks together and implement the MobileViT architecture (XXS variant). The following figure (taken from the original paper) presents a schematic representation of the architecture: def create_mobilevit(num_classes=5): inputs = keras.Input((image_size, image_size, 3)) x = layers.Rescaling(scale=1.0 / 255)(inputs) # Initial conv-stem -> MV2 block. x = conv_block(x, filters=16) x = inverted_residual_block( x, expanded_channels=16 * expansion_factor, output_channels=16 ) # Downsampling with MV2 block. x = inverted_residual_block( x, expanded_channels=16 * expansion_factor, output_channels=24, strides=2 ) x = inverted_residual_block( x, expanded_channels=24 * expansion_factor, output_channels=24 ) x = inverted_residual_block( x, expanded_channels=24 * expansion_factor, output_channels=24 ) # First MV2 -> MobileViT block. x = inverted_residual_block( x, expanded_channels=24 * expansion_factor, output_channels=48, strides=2 ) x = mobilevit_block(x, num_blocks=2, projection_dim=64) # Second MV2 -> MobileViT block. x = inverted_residual_block( x, expanded_channels=64 * expansion_factor, output_channels=64, strides=2 ) x = mobilevit_block(x, num_blocks=4, projection_dim=80) # Third MV2 -> MobileViT block. x = inverted_residual_block( x, expanded_channels=80 * expansion_factor, output_channels=80, strides=2 ) x = mobilevit_block(x, num_blocks=3, projection_dim=96) x = conv_block(x, filters=320, kernel_size=1, strides=1) # Classification head.
x = layers.GlobalAvgPool2D()(x) outputs = layers.Dense(num_classes, activation=\"softmax\")(x) return keras.Model(inputs, outputs) mobilevit_xxs = create_mobilevit() mobilevit_xxs.summary() Model: \"model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 256, 256, 3) 0 __________________________________________________________________________________________________ rescaling (Rescaling) (None, 256, 256, 3) 0 input_1[0][0] __________________________________________________________________________________________________ conv2d (Conv2D) (None, 128, 128, 16) 448 rescaling[0][0] __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 128, 128, 32) 512 conv2d[0][0] __________________________________________________________________________________________________ batch_normalization (BatchNorma (None, 128, 128, 32) 128 conv2d_1[0][0] __________________________________________________________________________________________________ tf.nn.silu (TFOpLambda) (None, 128, 128, 32) 0 batch_normalization[0][0] __________________________________________________________________________________________________ depthwise_conv2d (DepthwiseConv (None, 128, 128, 32) 288 tf.nn.silu[0][0] __________________________________________________________________________________________________ batch_normalization_1 (BatchNor (None, 128, 128, 32) 128 depthwise_conv2d[0][0] __________________________________________________________________________________________________ tf.nn.silu_1 (TFOpLambda) (None, 128, 128, 32) 0 batch_normalization_1[0][0] __________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 128, 128, 16) 512 tf.nn.silu_1[0][0] __________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 128, 128, 16) 64 conv2d_2[0][0] __________________________________________________________________________________________________ add (Add) (None, 128, 128, 16) 0 batch_normalization_2[0][0] conv2d[0][0] __________________________________________________________________________________________________ conv2d_3 (Conv2D) (None, 128, 128, 32) 512 add[0][0] __________________________________________________________________________________________________ batch_normalization_3 (BatchNor (None, 128, 128, 32) 128 conv2d_3[0][0] __________________________________________________________________________________________________ tf.nn.silu_2 (TFOpLambda) (None, 128, 128, 32) 0 batch_normalization_3[0][0] __________________________________________________________________________________________________ zero_padding2d (ZeroPadding2D) (None, 129, 129, 32) 0 tf.nn.silu_2[0][0] __________________________________________________________________________________________________ depthwise_conv2d_1 (DepthwiseCo (None, 64, 64, 32) 288 zero_padding2d[0][0] __________________________________________________________________________________________________ batch_normalization_4 (BatchNor (None, 64, 64, 32) 128 depthwise_conv2d_1[0][0] __________________________________________________________________________________________________ tf.nn.silu_3 (TFOpLambda) (None, 64, 64, 32) 0 batch_normalization_4[0][0] 
__________________________________________________________________________________________________ conv2d_4 (Conv2D) (None, 64, 64, 24) 768 tf.nn.silu_3[0][0] __________________________________________________________________________________________________ batch_normalization_5 (BatchNor (None, 64, 64, 24) 96 conv2d_4[0][0] __________________________________________________________________________________________________ conv2d_5 (Conv2D) (None, 64, 64, 48) 1152 batch_normalization_5[0][0] __________________________________________________________________________________________________ batch_normalization_6 (BatchNor (None, 64, 64, 48) 192 conv2d_5[0][0] __________________________________________________________________________________________________ tf.nn.silu_4 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_6[0][0] __________________________________________________________________________________________________ depthwise_conv2d_2 (DepthwiseCo (None, 64, 64, 48) 432 tf.nn.silu_4[0][0] __________________________________________________________________________________________________ batch_normalization_7 (BatchNor (None, 64, 64, 48) 192 depthwise_conv2d_2[0][0] __________________________________________________________________________________________________ tf.nn.silu_5 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_7[0][0] __________________________________________________________________________________________________ conv2d_6 (Conv2D) (None, 64, 64, 24) 1152 tf.nn.silu_5[0][0] __________________________________________________________________________________________________ batch_normalization_8 (BatchNor (None, 64, 64, 24) 96 conv2d_6[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, 64, 64, 24) 0 batch_normalization_8[0][0] batch_normalization_5[0][0] __________________________________________________________________________________________________ conv2d_7 (Conv2D) (None, 64, 64, 48) 1152 add_1[0][0] __________________________________________________________________________________________________ batch_normalization_9 (BatchNor (None, 64, 64, 48) 192 conv2d_7[0][0] __________________________________________________________________________________________________ tf.nn.silu_6 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_9[0][0] __________________________________________________________________________________________________ depthwise_conv2d_3 (DepthwiseCo (None, 64, 64, 48) 432 tf.nn.silu_6[0][0] __________________________________________________________________________________________________ batch_normalization_10 (BatchNo (None, 64, 64, 48) 192 depthwise_conv2d_3[0][0] __________________________________________________________________________________________________ tf.nn.silu_7 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_10[0][0] __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 64, 64, 24) 1152 tf.nn.silu_7[0][0] __________________________________________________________________________________________________ batch_normalization_11 (BatchNo (None, 64, 64, 24) 96 conv2d_8[0][0] __________________________________________________________________________________________________ add_2 (Add) (None, 64, 64, 24) 0 batch_normalization_11[0][0] add_1[0][0] __________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 64, 64, 48) 1152 
add_2[0][0] __________________________________________________________________________________________________ batch_normalization_12 (BatchNo (None, 64, 64, 48) 192 conv2d_9[0][0] __________________________________________________________________________________________________ tf.nn.silu_8 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_12[0][0] __________________________________________________________________________________________________ zero_padding2d_1 (ZeroPadding2D (None, 65, 65, 48) 0 tf.nn.silu_8[0][0] __________________________________________________________________________________________________ depthwise_conv2d_4 (DepthwiseCo (None, 32, 32, 48) 432 zero_padding2d_1[0][0] __________________________________________________________________________________________________ batch_normalization_13 (BatchNo (None, 32, 32, 48) 192 depthwise_conv2d_4[0][0] __________________________________________________________________________________________________ tf.nn.silu_9 (TFOpLambda) (None, 32, 32, 48) 0 batch_normalization_13[0][0] __________________________________________________________________________________________________ conv2d_10 (Conv2D) (None, 32, 32, 48) 2304 tf.nn.silu_9[0][0] __________________________________________________________________________________________________ batch_normalization_14 (BatchNo (None, 32, 32, 48) 192 conv2d_10[0][0] __________________________________________________________________________________________________ conv2d_11 (Conv2D) (None, 32, 32, 64) 27712 batch_normalization_14[0][0] __________________________________________________________________________________________________ conv2d_12 (Conv2D) (None, 32, 32, 64) 4160 conv2d_11[0][0] __________________________________________________________________________________________________ reshape (Reshape) (None, 4, 256, 64) 0 conv2d_12[0][0] __________________________________________________________________________________________________ layer_normalization (LayerNorma (None, 4, 256, 64) 128 reshape[0][0] __________________________________________________________________________________________________ multi_head_attention (MultiHead (None, 4, 256, 64) 33216 layer_normalization[0][0] layer_normalization[0][0] __________________________________________________________________________________________________ add_3 (Add) (None, 4, 256, 64) 0 multi_head_attention[0][0] reshape[0][0] __________________________________________________________________________________________________ layer_normalization_1 (LayerNor (None, 4, 256, 64) 128 add_3[0][0] __________________________________________________________________________________________________ dense (Dense) (None, 4, 256, 128) 8320 layer_normalization_1[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 4, 256, 128) 0 dense[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 4, 256, 64) 8256 dropout[0][0] __________________________________________________________________________________________________ dropout_1 (Dropout) (None, 4, 256, 64) 0 dense_1[0][0] __________________________________________________________________________________________________ add_4 (Add) (None, 4, 256, 64) 0 dropout_1[0][0] add_3[0][0] __________________________________________________________________________________________________ layer_normalization_2 (LayerNor (None, 4, 256, 64) 128 add_4[0][0] 
__________________________________________________________________________________________________ multi_head_attention_1 (MultiHe (None, 4, 256, 64) 33216 layer_normalization_2[0][0] layer_normalization_2[0][0] __________________________________________________________________________________________________ add_5 (Add) (None, 4, 256, 64) 0 multi_head_attention_1[0][0] add_4[0][0] __________________________________________________________________________________________________ layer_normalization_3 (LayerNor (None, 4, 256, 64) 128 add_5[0][0] __________________________________________________________________________________________________ dense_2 (Dense) (None, 4, 256, 128) 8320 layer_normalization_3[0][0] __________________________________________________________________________________________________ dropout_2 (Dropout) (None, 4, 256, 128) 0 dense_2[0][0] __________________________________________________________________________________________________ dense_3 (Dense) (None, 4, 256, 64) 8256 dropout_2[0][0] __________________________________________________________________________________________________ dropout_3 (Dropout) (None, 4, 256, 64) 0 dense_3[0][0] __________________________________________________________________________________________________ add_6 (Add) (None, 4, 256, 64) 0 dropout_3[0][0] add_5[0][0] __________________________________________________________________________________________________ reshape_1 (Reshape) (None, 32, 32, 64) 0 add_6[0][0] __________________________________________________________________________________________________ conv2d_13 (Conv2D) (None, 32, 32, 48) 3120 reshape_1[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 32, 32, 96) 0 batch_normalization_14[0][0] conv2d_13[0][0] __________________________________________________________________________________________________ conv2d_14 (Conv2D) (None, 32, 32, 64) 55360 concatenate[0][0] __________________________________________________________________________________________________ conv2d_15 (Conv2D) (None, 32, 32, 128) 8192 conv2d_14[0][0] __________________________________________________________________________________________________ batch_normalization_15 (BatchNo (None, 32, 32, 128) 512 conv2d_15[0][0] __________________________________________________________________________________________________ tf.nn.silu_10 (TFOpLambda) (None, 32, 32, 128) 0 batch_normalization_15[0][0] __________________________________________________________________________________________________ zero_padding2d_2 (ZeroPadding2D (None, 33, 33, 128) 0 tf.nn.silu_10[0][0] __________________________________________________________________________________________________ depthwise_conv2d_5 (DepthwiseCo (None, 16, 16, 128) 1152 zero_padding2d_2[0][0] __________________________________________________________________________________________________ batch_normalization_16 (BatchNo (None, 16, 16, 128) 512 depthwise_conv2d_5[0][0] __________________________________________________________________________________________________ tf.nn.silu_11 (TFOpLambda) (None, 16, 16, 128) 0 batch_normalization_16[0][0] __________________________________________________________________________________________________ conv2d_16 (Conv2D) (None, 16, 16, 64) 8192 tf.nn.silu_11[0][0] __________________________________________________________________________________________________ batch_normalization_17 (BatchNo (None, 16, 16, 64) 256 conv2d_16[0][0] 
__________________________________________________________________________________________________ conv2d_17 (Conv2D) (None, 16, 16, 80) 46160 batch_normalization_17[0][0] __________________________________________________________________________________________________ conv2d_18 (Conv2D) (None, 16, 16, 80) 6480 conv2d_17[0][0] __________________________________________________________________________________________________ reshape_2 (Reshape) (None, 4, 64, 80) 0 conv2d_18[0][0] __________________________________________________________________________________________________ layer_normalization_4 (LayerNor (None, 4, 64, 80) 160 reshape_2[0][0] __________________________________________________________________________________________________ multi_head_attention_2 (MultiHe (None, 4, 64, 80) 51760 layer_normalization_4[0][0] layer_normalization_4[0][0] __________________________________________________________________________________________________ add_7 (Add) (None, 4, 64, 80) 0 multi_head_attention_2[0][0] reshape_2[0][0] __________________________________________________________________________________________________ layer_normalization_5 (LayerNor (None, 4, 64, 80) 160 add_7[0][0] __________________________________________________________________________________________________ dense_4 (Dense) (None, 4, 64, 160) 12960 layer_normalization_5[0][0] __________________________________________________________________________________________________ dropout_4 (Dropout) (None, 4, 64, 160) 0 dense_4[0][0] __________________________________________________________________________________________________ dense_5 (Dense) (None, 4, 64, 80) 12880 dropout_4[0][0] __________________________________________________________________________________________________ dropout_5 (Dropout) (None, 4, 64, 80) 0 dense_5[0][0] __________________________________________________________________________________________________ add_8 (Add) (None, 4, 64, 80) 0 dropout_5[0][0] add_7[0][0] __________________________________________________________________________________________________ layer_normalization_6 (LayerNor (None, 4, 64, 80) 160 add_8[0][0] __________________________________________________________________________________________________ multi_head_attention_3 (MultiHe (None, 4, 64, 80) 51760 layer_normalization_6[0][0] layer_normalization_6[0][0] __________________________________________________________________________________________________ add_9 (Add) (None, 4, 64, 80) 0 multi_head_attention_3[0][0] add_8[0][0] __________________________________________________________________________________________________ layer_normalization_7 (LayerNor (None, 4, 64, 80) 160 add_9[0][0] __________________________________________________________________________________________________ dense_6 (Dense) (None, 4, 64, 160) 12960 layer_normalization_7[0][0] __________________________________________________________________________________________________ dropout_6 (Dropout) (None, 4, 64, 160) 0 dense_6[0][0] __________________________________________________________________________________________________ dense_7 (Dense) (None, 4, 64, 80) 12880 dropout_6[0][0] __________________________________________________________________________________________________ dropout_7 (Dropout) (None, 4, 64, 80) 0 dense_7[0][0] __________________________________________________________________________________________________ add_10 (Add) (None, 4, 64, 80) 0 dropout_7[0][0] add_9[0][0] 
__________________________________________________________________________________________________ layer_normalization_8 (LayerNor (None, 4, 64, 80) 160 add_10[0][0] __________________________________________________________________________________________________ multi_head_attention_4 (MultiHe (None, 4, 64, 80) 51760 layer_normalization_8[0][0] layer_normalization_8[0][0] __________________________________________________________________________________________________ add_11 (Add) (None, 4, 64, 80) 0 multi_head_attention_4[0][0] add_10[0][0] __________________________________________________________________________________________________ layer_normalization_9 (LayerNor (None, 4, 64, 80) 160 add_11[0][0] __________________________________________________________________________________________________ dense_8 (Dense) (None, 4, 64, 160) 12960 layer_normalization_9[0][0] __________________________________________________________________________________________________ dropout_8 (Dropout) (None, 4, 64, 160) 0 dense_8[0][0] __________________________________________________________________________________________________ dense_9 (Dense) (None, 4, 64, 80) 12880 dropout_8[0][0] __________________________________________________________________________________________________ dropout_9 (Dropout) (None, 4, 64, 80) 0 dense_9[0][0] __________________________________________________________________________________________________ add_12 (Add) (None, 4, 64, 80) 0 dropout_9[0][0] add_11[0][0] __________________________________________________________________________________________________ layer_normalization_10 (LayerNo (None, 4, 64, 80) 160 add_12[0][0] __________________________________________________________________________________________________ multi_head_attention_5 (MultiHe (None, 4, 64, 80) 51760 layer_normalization_10[0][0] layer_normalization_10[0][0] __________________________________________________________________________________________________ add_13 (Add) (None, 4, 64, 80) 0 multi_head_attention_5[0][0] add_12[0][0] __________________________________________________________________________________________________ layer_normalization_11 (LayerNo (None, 4, 64, 80) 160 add_13[0][0] __________________________________________________________________________________________________ dense_10 (Dense) (None, 4, 64, 160) 12960 layer_normalization_11[0][0] __________________________________________________________________________________________________ dropout_10 (Dropout) (None, 4, 64, 160) 0 dense_10[0][0] __________________________________________________________________________________________________ dense_11 (Dense) (None, 4, 64, 80) 12880 dropout_10[0][0] __________________________________________________________________________________________________ dropout_11 (Dropout) (None, 4, 64, 80) 0 dense_11[0][0] __________________________________________________________________________________________________ add_14 (Add) (None, 4, 64, 80) 0 dropout_11[0][0] add_13[0][0] __________________________________________________________________________________________________ reshape_3 (Reshape) (None, 16, 16, 80) 0 add_14[0][0] __________________________________________________________________________________________________ conv2d_19 (Conv2D) (None, 16, 16, 64) 5184 reshape_3[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 16, 16, 128) 0 batch_normalization_17[0][0] conv2d_19[0][0] 
__________________________________________________________________________________________________ conv2d_20 (Conv2D) (None, 16, 16, 80) 92240 concatenate_1[0][0] __________________________________________________________________________________________________ conv2d_21 (Conv2D) (None, 16, 16, 160) 12800 conv2d_20[0][0] __________________________________________________________________________________________________ batch_normalization_18 (BatchNo (None, 16, 16, 160) 640 conv2d_21[0][0] __________________________________________________________________________________________________ tf.nn.silu_12 (TFOpLambda) (None, 16, 16, 160) 0 batch_normalization_18[0][0] __________________________________________________________________________________________________ zero_padding2d_3 (ZeroPadding2D (None, 17, 17, 160) 0 tf.nn.silu_12[0][0] __________________________________________________________________________________________________ depthwise_conv2d_6 (DepthwiseCo (None, 8, 8, 160) 1440 zero_padding2d_3[0][0] __________________________________________________________________________________________________ batch_normalization_19 (BatchNo (None, 8, 8, 160) 640 depthwise_conv2d_6[0][0] __________________________________________________________________________________________________ tf.nn.silu_13 (TFOpLambda) (None, 8, 8, 160) 0 batch_normalization_19[0][0] __________________________________________________________________________________________________ conv2d_22 (Conv2D) (None, 8, 8, 80) 12800 tf.nn.silu_13[0][0] __________________________________________________________________________________________________ batch_normalization_20 (BatchNo (None, 8, 8, 80) 320 conv2d_22[0][0] __________________________________________________________________________________________________ conv2d_23 (Conv2D) (None, 8, 8, 96) 69216 batch_normalization_20[0][0] __________________________________________________________________________________________________ conv2d_24 (Conv2D) (None, 8, 8, 96) 9312 conv2d_23[0][0] __________________________________________________________________________________________________ reshape_4 (Reshape) (None, 4, 16, 96) 0 conv2d_24[0][0] __________________________________________________________________________________________________ layer_normalization_12 (LayerNo (None, 4, 16, 96) 192 reshape_4[0][0] __________________________________________________________________________________________________ multi_head_attention_6 (MultiHe (None, 4, 16, 96) 74400 layer_normalization_12[0][0] layer_normalization_12[0][0] __________________________________________________________________________________________________ add_15 (Add) (None, 4, 16, 96) 0 multi_head_attention_6[0][0] reshape_4[0][0] __________________________________________________________________________________________________ layer_normalization_13 (LayerNo (None, 4, 16, 96) 192 add_15[0][0] __________________________________________________________________________________________________ dense_12 (Dense) (None, 4, 16, 192) 18624 layer_normalization_13[0][0] __________________________________________________________________________________________________ dropout_12 (Dropout) (None, 4, 16, 192) 0 dense_12[0][0] __________________________________________________________________________________________________ dense_13 (Dense) (None, 4, 16, 96) 18528 dropout_12[0][0] __________________________________________________________________________________________________ dropout_13 (Dropout) (None, 4, 16, 96) 0 dense_13[0][0] 
__________________________________________________________________________________________________ add_16 (Add) (None, 4, 16, 96) 0 dropout_13[0][0] add_15[0][0] __________________________________________________________________________________________________ layer_normalization_14 (LayerNo (None, 4, 16, 96) 192 add_16[0][0] __________________________________________________________________________________________________ multi_head_attention_7 (MultiHe (None, 4, 16, 96) 74400 layer_normalization_14[0][0] layer_normalization_14[0][0] __________________________________________________________________________________________________ add_17 (Add) (None, 4, 16, 96) 0 multi_head_attention_7[0][0] add_16[0][0] __________________________________________________________________________________________________ layer_normalization_15 (LayerNo (None, 4, 16, 96) 192 add_17[0][0] __________________________________________________________________________________________________ dense_14 (Dense) (None, 4, 16, 192) 18624 layer_normalization_15[0][0] __________________________________________________________________________________________________ dropout_14 (Dropout) (None, 4, 16, 192) 0 dense_14[0][0] __________________________________________________________________________________________________ dense_15 (Dense) (None, 4, 16, 96) 18528 dropout_14[0][0] __________________________________________________________________________________________________ dropout_15 (Dropout) (None, 4, 16, 96) 0 dense_15[0][0] __________________________________________________________________________________________________ add_18 (Add) (None, 4, 16, 96) 0 dropout_15[0][0] add_17[0][0] __________________________________________________________________________________________________ layer_normalization_16 (LayerNo (None, 4, 16, 96) 192 add_18[0][0] __________________________________________________________________________________________________ multi_head_attention_8 (MultiHe (None, 4, 16, 96) 74400 layer_normalization_16[0][0] layer_normalization_16[0][0] __________________________________________________________________________________________________ add_19 (Add) (None, 4, 16, 96) 0 multi_head_attention_8[0][0] add_18[0][0] __________________________________________________________________________________________________ layer_normalization_17 (LayerNo (None, 4, 16, 96) 192 add_19[0][0] __________________________________________________________________________________________________ dense_16 (Dense) (None, 4, 16, 192) 18624 layer_normalization_17[0][0] __________________________________________________________________________________________________ dropout_16 (Dropout) (None, 4, 16, 192) 0 dense_16[0][0] __________________________________________________________________________________________________ dense_17 (Dense) (None, 4, 16, 96) 18528 dropout_16[0][0] __________________________________________________________________________________________________ dropout_17 (Dropout) (None, 4, 16, 96) 0 dense_17[0][0] __________________________________________________________________________________________________ add_20 (Add) (None, 4, 16, 96) 0 dropout_17[0][0] add_19[0][0] __________________________________________________________________________________________________ reshape_5 (Reshape) (None, 8, 8, 96) 0 add_20[0][0] __________________________________________________________________________________________________ conv2d_25 (Conv2D) (None, 8, 8, 80) 7760 reshape_5[0][0] 
__________________________________________________________________________________________________ concatenate_2 (Concatenate) (None, 8, 8, 160) 0 batch_normalization_20[0][0] conv2d_25[0][0] __________________________________________________________________________________________________ conv2d_26 (Conv2D) (None, 8, 8, 96) 138336 concatenate_2[0][0] __________________________________________________________________________________________________ conv2d_27 (Conv2D) (None, 8, 8, 320) 31040 conv2d_26[0][0] __________________________________________________________________________________________________ global_average_pooling2d (Globa (None, 320) 0 conv2d_27[0][0] __________________________________________________________________________________________________ dense_18 (Dense) (None, 5) 1605 global_average_pooling2d[0][0] ================================================================================================== Total params: 1,307,621 Trainable params: 1,305,077 Non-trainable params: 2,544 __________________________________________________________________________________________________ Dataset preparation We will be using the tf_flowers dataset to demonstrate the model. Unlike other Transformer-based architectures, MobileViT uses a simple augmentation pipeline primarily because it has the properties of a CNN. batch_size = 64 auto = tf.data.AUTOTUNE resize_bigger = 280 num_classes = 5 def preprocess_dataset(is_training=True): def _pp(image, label): if is_training: # Resize to a bigger spatial resolution and take the random # crops. image = tf.image.resize(image, (resize_bigger, resize_bigger)) image = tf.image.random_crop(image, (image_size, image_size, 3)) image = tf.image.random_flip_left_right(image) else: image = tf.image.resize(image, (image_size, image_size)) label = tf.one_hot(label, depth=num_classes) return image, label return _pp def prepare_dataset(dataset, is_training=True): if is_training: dataset = dataset.shuffle(batch_size * 10) dataset = dataset.map(preprocess_dataset(is_training), num_parallel_calls=auto) return dataset.batch(batch_size).prefetch(auto) The authors use a multi-scale data sampler to help the model learn representations of varied scales. In this example, we discard this part. 
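For readers curious about the part we skip, the following is only a rough sketch of the multi-scale idea (picking a different input resolution for each batch so the model sees objects at varied scales). It is an assumption-laden illustration, not the authors' sampler; the sizes and the per-batch mapping are made up for demonstration.

import tensorflow as tf

multi_scale_sizes = [160, 192, 224, 256]

def random_scale_batch(images, labels):
    # Choose one resolution at random and resize the whole batch to it.
    size = tf.random.shuffle(tf.constant(multi_scale_sizes))[0]
    images = tf.image.resize(images, tf.stack([size, size]))
    return images, labels

# Possible usage on an already-batched dataset of (image, label) pairs:
# train_dataset = train_dataset.map(random_scale_batch, num_parallel_calls=tf.data.AUTOTUNE)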
Load and prepare the dataset train_dataset, val_dataset = tfds.load( \"tf_flowers\", split=[\"train[:90%]\", \"train[90%:]\"], as_supervised=True ) num_train = train_dataset.cardinality() num_val = val_dataset.cardinality() print(f\"Number of training examples: {num_train}\") print(f\"Number of validation examples: {num_val}\") train_dataset = prepare_dataset(train_dataset, is_training=True) val_dataset = prepare_dataset(val_dataset, is_training=False) Number of training examples: 3303 Number of validation examples: 367 Train a MobileViT (XXS) model learning_rate = 0.002 label_smoothing_factor = 0.1 epochs = 30 optimizer = keras.optimizers.Adam(learning_rate=learning_rate) loss_fn = keras.losses.CategoricalCrossentropy(label_smoothing=label_smoothing_factor) def run_experiment(epochs=epochs): mobilevit_xxs = create_mobilevit(num_classes=num_classes) mobilevit_xxs.compile(optimizer=optimizer, loss=loss_fn, metrics=[\"accuracy\"]) checkpoint_filepath = \"/tmp/checkpoint\" checkpoint_callback = keras.callbacks.ModelCheckpoint( checkpoint_filepath, monitor=\"val_accuracy\", save_best_only=True, save_weights_only=True, ) mobilevit_xxs.fit( train_dataset, validation_data=val_dataset, epochs=epochs, callbacks=[checkpoint_callback], ) mobilevit_xxs.load_weights(checkpoint_filepath) _, accuracy = mobilevit_xxs.evaluate(val_dataset) print(f\"Validation accuracy: {round(accuracy * 100, 2)}%\") return mobilevit_xxs mobilevit_xxs = run_experiment() Epoch 1/30 52/52 [==============================] - 47s 459ms/step - loss: 1.3397 - accuracy: 0.4832 - val_loss: 1.7250 - val_accuracy: 0.1662 Epoch 2/30 52/52 [==============================] - 21s 404ms/step - loss: 1.1167 - accuracy: 0.6210 - val_loss: 1.9844 - val_accuracy: 0.1907 Epoch 3/30 52/52 [==============================] - 21s 403ms/step - loss: 1.0217 - accuracy: 0.6709 - val_loss: 1.8187 - val_accuracy: 0.1907 Epoch 4/30 52/52 [==============================] - 21s 409ms/step - loss: 0.9682 - accuracy: 0.7048 - val_loss: 2.0329 - val_accuracy: 0.1907 Epoch 5/30 52/52 [==============================] - 21s 408ms/step - loss: 0.9552 - accuracy: 0.7196 - val_loss: 2.1150 - val_accuracy: 0.1907 Epoch 6/30 52/52 [==============================] - 21s 407ms/step - loss: 0.9186 - accuracy: 0.7318 - val_loss: 2.9713 - val_accuracy: 0.1907 Epoch 7/30 52/52 [==============================] - 21s 407ms/step - loss: 0.8986 - accuracy: 0.7457 - val_loss: 3.2062 - val_accuracy: 0.1907 Epoch 8/30 52/52 [==============================] - 21s 408ms/step - loss: 0.8831 - accuracy: 0.7542 - val_loss: 3.8631 - val_accuracy: 0.1907 Epoch 9/30 52/52 [==============================] - 21s 408ms/step - loss: 0.8433 - accuracy: 0.7714 - val_loss: 1.8029 - val_accuracy: 0.3542 Epoch 10/30 52/52 [==============================] - 21s 408ms/step - loss: 0.8489 - accuracy: 0.7763 - val_loss: 1.7920 - val_accuracy: 0.4796 Epoch 11/30 52/52 [==============================] - 21s 409ms/step - loss: 0.8256 - accuracy: 0.7884 - val_loss: 1.4992 - val_accuracy: 0.5477 Epoch 12/30 52/52 [==============================] - 21s 407ms/step - loss: 0.7859 - accuracy: 0.8123 - val_loss: 0.9236 - val_accuracy: 0.7330 Epoch 13/30 52/52 [==============================] - 21s 409ms/step - loss: 0.7702 - accuracy: 0.8159 - val_loss: 0.8059 - val_accuracy: 0.8011 Epoch 14/30 52/52 [==============================] - 21s 403ms/step - loss: 0.7670 - accuracy: 0.8153 - val_loss: 1.1535 - val_accuracy: 0.7084 Epoch 15/30 52/52 [==============================] - 21s 408ms/step - loss: 0.7332 - 
accuracy: 0.8344 - val_loss: 0.7746 - val_accuracy: 0.8147 Epoch 16/30 52/52 [==============================] - 21s 404ms/step - loss: 0.7284 - accuracy: 0.8335 - val_loss: 1.0342 - val_accuracy: 0.7330 Epoch 17/30 52/52 [==============================] - 21s 409ms/step - loss: 0.7484 - accuracy: 0.8262 - val_loss: 1.0523 - val_accuracy: 0.7112 Epoch 18/30 52/52 [==============================] - 21s 408ms/step - loss: 0.7209 - accuracy: 0.8450 - val_loss: 0.8146 - val_accuracy: 0.8174 Epoch 19/30 52/52 [==============================] - 21s 409ms/step - loss: 0.7141 - accuracy: 0.8435 - val_loss: 0.8016 - val_accuracy: 0.7875 Epoch 20/30 52/52 [==============================] - 21s 410ms/step - loss: 0.7075 - accuracy: 0.8435 - val_loss: 0.9352 - val_accuracy: 0.7439 Epoch 21/30 52/52 [==============================] - 21s 406ms/step - loss: 0.7066 - accuracy: 0.8504 - val_loss: 1.0171 - val_accuracy: 0.7139 Epoch 22/30 52/52 [==============================] - 21s 405ms/step - loss: 0.6913 - accuracy: 0.8532 - val_loss: 0.7059 - val_accuracy: 0.8610 Epoch 23/30 52/52 [==============================] - 21s 408ms/step - loss: 0.6681 - accuracy: 0.8671 - val_loss: 0.8007 - val_accuracy: 0.8147 Epoch 24/30 52/52 [==============================] - 21s 409ms/step - loss: 0.6636 - accuracy: 0.8747 - val_loss: 0.9490 - val_accuracy: 0.7302 Epoch 25/30 52/52 [==============================] - 21s 408ms/step - loss: 0.6637 - accuracy: 0.8722 - val_loss: 0.6913 - val_accuracy: 0.8556 Epoch 26/30 52/52 [==============================] - 21s 406ms/step - loss: 0.6443 - accuracy: 0.8837 - val_loss: 1.0483 - val_accuracy: 0.7139 Epoch 27/30 52/52 [==============================] - 21s 407ms/step - loss: 0.6555 - accuracy: 0.8695 - val_loss: 0.9448 - val_accuracy: 0.7602 Epoch 28/30 52/52 [==============================] - 21s 409ms/step - loss: 0.6409 - accuracy: 0.8807 - val_loss: 0.9337 - val_accuracy: 0.7302 Epoch 29/30 52/52 [==============================] - 21s 408ms/step - loss: 0.6300 - accuracy: 0.8910 - val_loss: 0.7461 - val_accuracy: 0.8256 Epoch 30/30 52/52 [==============================] - 21s 408ms/step - loss: 0.6093 - accuracy: 0.8968 - val_loss: 0.8651 - val_accuracy: 0.7766 6/6 [==============================] - 0s 65ms/step - loss: 0.7059 - accuracy: 0.8610 Validation accuracy: 86.1%
Results and TFLite conversion With about one million parameters, getting to ~85% top-1 accuracy on 256x256 resolution is a strong result. This MobileViT model is fully compatible with TensorFlow Lite (TFLite) and can be converted with the following code: # Serialize the model as a SavedModel. mobilevit_xxs.save(\"mobilevit_xxs\") # Convert to TFLite. This form of quantization is called # post-training dynamic-range quantization in TFLite. converter = tf.lite.TFLiteConverter.from_saved_model(\"mobilevit_xxs\") converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.target_spec.supported_ops = [ tf.lite.OpsSet.TFLITE_BUILTINS, # Enable TensorFlow Lite ops. tf.lite.OpsSet.SELECT_TF_OPS, # Enable TensorFlow ops. ] tflite_model = converter.convert() open(\"mobilevit_xxs.tflite\", \"wb\").write(tflite_model) To learn more about different quantization recipes available in TFLite and running inference with TFLite models, check out this official resource.
How to obtain integrated gradients for a classification model. Integrated Gradients Integrated Gradients is a technique for attributing a classification model's prediction to its input features.
It is a model interpretability technique: you can use it to visualize the relationship between input features and model predictions. Integrated Gradients is a variation on computing the gradient of the prediction output with regard to features of the input. To compute integrated gradients, we need to perform the following steps: Identify the input and the output. In our case, the input is an image and the output is the last layer of our model (dense layer with softmax activation). Compute which features are important to a neural network when making a prediction on a particular data point. To identify these features, we need to choose a baseline input. A baseline input can be a black image (all pixel values set to zero) or random noise. The shape of the baseline input needs to be the same as our input image, e.g. (299, 299, 3). Interpolate the baseline for a given number of steps. The number of steps represents the steps we need in the gradient approximation for a given input image. The number of steps is a hyperparameter. The authors recommend using anywhere between 20 and 1000 steps. Preprocess these interpolated images and do a forward pass. Get the gradients for these interpolated images. Approximate the gradients integral using the trapezoidal rule. To read in-depth about integrated gradients and why this method works, consider reading this excellent article. References: Integrated Gradients original paper Original implementation Setup import numpy as np import matplotlib.pyplot as plt from scipy import ndimage from IPython.display import Image import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.applications import xception # Size of the input image img_size = (299, 299, 3) # Load Xception model with imagenet weights model = xception.Xception(weights=\"imagenet\") # The local path to our target image img_path = keras.utils.get_file(\"elephant.jpg\", \"https://i.imgur.com/Bvro0YD.png\") display(Image(img_path)) Downloading data from https://i.imgur.com/Bvro0YD.png 4218880/4217496 [==============================] - 0s 0us/step jpeg Integrated Gradients algorithm def get_img_array(img_path, size=(299, 299)): # `img` is a PIL image of size 299x299 img = keras.preprocessing.image.load_img(img_path, target_size=size) # `array` is a float32 Numpy array of shape (299, 299, 3) array = keras.preprocessing.image.img_to_array(img) # We add a dimension to transform our array into a \"batch\" # of size (1, 299, 299, 3) array = np.expand_dims(array, axis=0) return array def get_gradients(img_input, top_pred_idx): \"\"\"Computes the gradients of outputs w.r.t input image. Args: img_input: 4D image tensor top_pred_idx: Predicted label for the input image Returns: Gradients of the predictions w.r.t img_input \"\"\" images = tf.cast(img_input, tf.float32) with tf.GradientTape() as tape: tape.watch(images) preds = model(images) top_class = preds[:, top_pred_idx] grads = tape.gradient(top_class, images) return grads def get_integrated_gradients(img_input, top_pred_idx, baseline=None, num_steps=50): \"\"\"Computes Integrated Gradients for a predicted label. Args: img_input (ndarray): Original image top_pred_idx: Predicted label for the input image baseline (ndarray): The baseline image to start with for interpolation num_steps: Number of interpolation steps between the baseline and the input used in the computation of integrated gradients. These steps along determine the integral approximation error. By default, num_steps is set to 50. 
Returns: Integrated gradients w.r.t input image \"\"\" # If baseline is not provided, start with a black image # having same size as the input image. if baseline is None: baseline = np.zeros(img_size).astype(np.float32) else: baseline = baseline.astype(np.float32) # 1. Do interpolation. img_input = img_input.astype(np.float32) interpolated_image = [ baseline + (step / num_steps) * (img_input - baseline) for step in range(num_steps + 1) ] interpolated_image = np.array(interpolated_image).astype(np.float32) # 2. Preprocess the interpolated images interpolated_image = xception.preprocess_input(interpolated_image) # 3. Get the gradients grads = [] for i, img in enumerate(interpolated_image): img = tf.expand_dims(img, axis=0) grad = get_gradients(img, top_pred_idx=top_pred_idx) grads.append(grad[0]) grads = tf.convert_to_tensor(grads, dtype=tf.float32) # 4. Approximate the integral using the trapezoidal rule grads = (grads[:-1] + grads[1:]) / 2.0 avg_grads = tf.reduce_mean(grads, axis=0) # 5. Calculate integrated gradients and return integrated_grads = (img_input - baseline) * avg_grads return integrated_grads def random_baseline_integrated_gradients( img_input, top_pred_idx, num_steps=50, num_runs=2 ): \"\"\"Generates a number of random baseline images. Args: img_input (ndarray): 3D image top_pred_idx: Predicted label for the input image num_steps: Number of interpolation steps between the baseline and the input used in the computation of integrated gradients. These steps along determine the integral approximation error. By default, num_steps is set to 50. num_runs: number of baseline images to generate Returns: Averaged integrated gradients for `num_runs` baseline images \"\"\" # 1. List to keep track of Integrated Gradients (IG) for all the images integrated_grads = [] # 2. Get the integrated gradients for all the baselines for run in range(num_runs): baseline = np.random.random(img_size) * 255 igrads = get_integrated_gradients( img_input=img_input, top_pred_idx=top_pred_idx, baseline=baseline, num_steps=num_steps, ) integrated_grads.append(igrads) # 3. Return the average integrated gradients for the image integrated_grads = tf.convert_to_tensor(integrated_grads) return tf.reduce_mean(integrated_grads, axis=0) Helper class for visualizing gradients and integrated gradients class GradVisualizer: \"\"\"Plot gradients of the outputs w.r.t an input image.\"\"\" def __init__(self, positive_channel=None, negative_channel=None): if positive_channel is None: self.positive_channel = [0, 255, 0] else: self.positive_channel = positive_channel if negative_channel is None: self.negative_channel = [255, 0, 0] else: self.negative_channel = negative_channel def apply_polarity(self, attributions, polarity): if polarity == \"positive\": return np.clip(attributions, 0, 1) else: return np.clip(attributions, -1, 0) def apply_linear_transformation( self, attributions, clip_above_percentile=99.9, clip_below_percentile=70.0, lower_end=0.2, ): # 1. Get the thresholds m = self.get_thresholded_attributions( attributions, percentage=100 - clip_above_percentile ) e = self.get_thresholded_attributions( attributions, percentage=100 - clip_below_percentile ) # 2. Transform the attributions by a linear function f(x) = a*x + b such that # f(m) = 1.0 and f(e) = lower_end transformed_attributions = (1 - lower_end) * (np.abs(attributions) - e) / ( m - e ) + lower_end # 3. Make sure that the sign of transformed attributions is the same as original attributions transformed_attributions *= np.sign(attributions) # 4. 
Only keep values that are bigger than the lower_end transformed_attributions *= transformed_attributions >= lower_end # 5. Clip values and return transformed_attributions = np.clip(transformed_attributions, 0.0, 1.0) return transformed_attributions def get_thresholded_attributions(self, attributions, percentage): if percentage == 100.0: return np.min(attributions) # 1. Flatten the attributions flatten_attr = attributions.flatten() # 2. Get the sum of the attributions total = np.sum(flatten_attr) # 3. Sort the attributions from largest to smallest. sorted_attributions = np.sort(np.abs(flatten_attr))[::-1] # 4. Calculate the percentage of the total sum that each attribution # and the values about it contribute. cum_sum = 100.0 * np.cumsum(sorted_attributions) / total # 5. Threshold the attributions by the percentage indices_to_consider = np.where(cum_sum >= percentage)[0][0] # 6. Select the desired attributions and return attributions = sorted_attributions[indices_to_consider] return attributions def binarize(self, attributions, threshold=0.001): return attributions > threshold def morphological_cleanup_fn(self, attributions, structure=np.ones((4, 4))): closed = ndimage.grey_closing(attributions, structure=structure) opened = ndimage.grey_opening(closed, structure=structure) return opened def draw_outlines( self, attributions, percentage=90, connected_component_structure=np.ones((3, 3)) ): # 1. Binarize the attributions. attributions = self.binarize(attributions) # 2. Fill the gaps attributions = ndimage.binary_fill_holes(attributions) # 3. Compute connected components connected_components, num_comp = ndimage.measurements.label( attributions, structure=connected_component_structure ) # 4. Sum up the attributions for each component total = np.sum(attributions[connected_components > 0]) component_sums = [] for comp in range(1, num_comp + 1): mask = connected_components == comp component_sum = np.sum(attributions[mask]) component_sums.append((component_sum, mask)) # 5. Compute the percentage of top components to keep sorted_sums_and_masks = sorted(component_sums, key=lambda x: x[0], reverse=True) sorted_sums = list(zip(*sorted_sums_and_masks))[0] cumulative_sorted_sums = np.cumsum(sorted_sums) cutoff_threshold = percentage * total / 100 cutoff_idx = np.where(cumulative_sorted_sums >= cutoff_threshold)[0][0] if cutoff_idx > 2: cutoff_idx = 2 # 6. Set the values for the kept components border_mask = np.zeros_like(attributions) for i in range(cutoff_idx + 1): border_mask[sorted_sums_and_masks[i][1]] = 1 # 7. Make the mask hollow and show only the border eroded_mask = ndimage.binary_erosion(border_mask, iterations=1) border_mask[eroded_mask] = 0 # 8. Return the outlined mask return border_mask def process_grads( self, image, attributions, polarity=\"positive\", clip_above_percentile=99.9, clip_below_percentile=0, morphological_cleanup=False, structure=np.ones((3, 3)), outlines=False, outlines_component_percentage=90, overlay=True, ): if polarity not in [\"positive\", \"negative\"]: raise ValueError( f\"\"\" Allowed polarity values: 'positive' or 'negative' but provided {polarity}\"\"\" ) if clip_above_percentile < 0 or clip_above_percentile > 100: raise ValueError(\"clip_above_percentile must be in [0, 100]\") if clip_below_percentile < 0 or clip_below_percentile > 100: raise ValueError(\"clip_below_percentile must be in [0, 100]\") # 1. 
Apply polarity if polarity == \"positive\": attributions = self.apply_polarity(attributions, polarity=polarity) channel = self.positive_channel else: attributions = self.apply_polarity(attributions, polarity=polarity) attributions = np.abs(attributions) channel = self.negative_channel # 2. Take average over the channels attributions = np.average(attributions, axis=2) # 3. Apply linear transformation to the attributions attributions = self.apply_linear_transformation( attributions, clip_above_percentile=clip_above_percentile, clip_below_percentile=clip_below_percentile, lower_end=0.0, ) # 4. Cleanup if morphological_cleanup: attributions = self.morphological_cleanup_fn( attributions, structure=structure ) # 5. Draw the outlines if outlines: attributions = self.draw_outlines( attributions, percentage=outlines_component_percentage ) # 6. Expand the channel axis and convert to RGB attributions = np.expand_dims(attributions, 2) * channel # 7.Superimpose on the original image if overlay: attributions = np.clip((attributions * 0.8 + image), 0, 255) return attributions def visualize( self, image, gradients, integrated_gradients, polarity=\"positive\", clip_above_percentile=99.9, clip_below_percentile=0, morphological_cleanup=False, structure=np.ones((3, 3)), outlines=False, outlines_component_percentage=90, overlay=True, figsize=(15, 8), ): # 1. Make two copies of the original image img1 = np.copy(image) img2 = np.copy(image) # 2. Process the normal gradients grads_attr = self.process_grads( image=img1, attributions=gradients, polarity=polarity, clip_above_percentile=clip_above_percentile, clip_below_percentile=clip_below_percentile, morphological_cleanup=morphological_cleanup, structure=structure, outlines=outlines, outlines_component_percentage=outlines_component_percentage, overlay=overlay, ) # 3. Process the integrated gradients igrads_attr = self.process_grads( image=img2, attributions=integrated_gradients, polarity=polarity, clip_above_percentile=clip_above_percentile, clip_below_percentile=clip_below_percentile, morphological_cleanup=morphological_cleanup, structure=structure, outlines=outlines, outlines_component_percentage=outlines_component_percentage, overlay=overlay, ) _, ax = plt.subplots(1, 3, figsize=figsize) ax[0].imshow(image) ax[1].imshow(grads_attr.astype(np.uint8)) ax[2].imshow(igrads_attr.astype(np.uint8)) ax[0].set_title(\"Input\") ax[1].set_title(\"Normal gradients\") ax[2].set_title(\"Integrated gradients\") plt.show() Let's test-drive it # 1. Convert the image to numpy array img = get_img_array(img_path) # 2. Keep a copy of the original image orig_img = np.copy(img[0]).astype(np.uint8) # 3. Preprocess the image img_processed = tf.cast(xception.preprocess_input(img), dtype=tf.float32) # 4. Get model predictions preds = model.predict(img_processed) top_pred_idx = tf.argmax(preds[0]) print(\"Predicted:\", top_pred_idx, xception.decode_predictions(preds, top=1)[0]) # 5. Get the gradients of the last layer for the predicted label grads = get_gradients(img_processed, top_pred_idx=top_pred_idx) # 6. Get the integrated gradients igrads = random_baseline_integrated_gradients( np.copy(orig_img), top_pred_idx=top_pred_idx, num_steps=50, num_runs=2 ) # 7. 
Process the gradients and plot vis = GradVisualizer() vis.visualize( image=orig_img, gradients=grads[0].numpy(), integrated_gradients=igrads.numpy(), clip_above_percentile=99, clip_below_percentile=0, ) vis.visualize( image=orig_img, gradients=grads[0].numpy(), integrated_gradients=igrads.numpy(), clip_above_percentile=95, clip_below_percentile=28, morphological_cleanup=True, outlines=True, ) Predicted: tf.Tensor(386, shape=(), dtype=int64) [('n02504458', 'African_elephant', 0.8871446)]
Implement a depth estimation model with a convnet. Introduction Depth estimation is a crucial step towards inferring scene geometry from 2D images. The goal in monocular depth estimation is to predict the depth value of each pixel, i.e. to infer depth information, given only a single RGB image as input. This example will show an approach to build a depth estimation model with a convnet and simple loss functions. depth
Setup import os import sys import tensorflow as tf from tensorflow.keras import layers import pandas as pd import numpy as np import cv2 import matplotlib.pyplot as plt tf.random.set_seed(123)
Downloading the dataset We will be using the dataset DIODE: A Dense Indoor and Outdoor Depth Dataset for this tutorial. However, we use the validation set to generate training and evaluation subsets for our model. The reason we use the validation set rather than the training set of the original dataset is that the training set consists of 81 GB of data, which is challenging to download compared to the validation set, which is only 2.6 GB. Other datasets that you could use are NYU-v2 and KITTI. annotation_folder = \"/dataset/\" if not os.path.exists(os.path.abspath(\".\") + annotation_folder): annotation_zip = tf.keras.utils.get_file( \"val.tar.gz\", cache_subdir=os.path.abspath(\".\"), origin=\"http://diode-dataset.s3.amazonaws.com/val.tar.gz\", extract=True, ) Downloading data from http://diode-dataset.s3.amazonaws.com/val.tar.gz 2774630400/2774625282 [==============================] - 90s 0us/step 2774638592/2774625282 [==============================] - 90s 0us/step
Preparing the dataset We only use the indoor images to train our depth estimation model. path = \"val/indoors\" filelist = [] for root, dirs, files in os.walk(path): for file in files: filelist.append(os.path.join(root, file)) filelist.sort() data = { \"image\": [x for x in filelist if x.endswith(\".png\")], \"depth\": [x for x in filelist if x.endswith(\"_depth.npy\")], \"mask\": [x for x in filelist if x.endswith(\"_depth_mask.npy\")], } df = pd.DataFrame(data) df = df.sample(frac=1, random_state=42)
Preparing hyperparameters HEIGHT = 256 WIDTH = 256 LR = 0.0002 EPOCHS = 30 BATCH_SIZE = 32
Building a data pipeline The pipeline takes a dataframe containing the paths for the RGB images, as well as the depth and depth mask files. It reads and resizes the RGB images. It reads the depth and depth mask files, processes them to generate the depth map image, and resizes it. It returns the RGB images and the depth map images for a batch.
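Before looking at the generator itself, here is a tiny sketch (on made-up values) of the depth-target construction it performs: the raw depth is clipped to a plausible range, moved to log space to compress its dynamic range, and invalid pixels are masked out so they can be drawn in black later.

import numpy as np

# Made-up 2x2 depth map and validity mask, just to illustrate the processing chain.
depth_map = np.array([[0.05, 2.0], [50.0, 400.0]], dtype=np.float32)
mask = np.array([[False, True], [True, True]])  # False marks an invalid reading

min_depth = 0.1
max_depth = min(300, np.percentile(depth_map, 99))
depth_map = np.clip(depth_map, min_depth, max_depth)   # clamp to a plausible range
depth_map = np.log(depth_map, where=mask)              # log-compress the valid depths
depth_map = np.ma.masked_where(~mask, depth_map)       # hide invalid pixels
depth_map = np.clip(depth_map, 0.1, np.log(max_depth))
print(depth_map)  # masked array; the invalid entry shows up as "--"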
class DataGenerator(tf.keras.utils.Sequence): def __init__(self, data, batch_size=6, dim=(768, 1024), n_channels=3, shuffle=True): \"\"\" Initialization \"\"\" self.data = data self.indices = self.data.index.tolist() self.dim = dim self.n_channels = n_channels self.batch_size = batch_size self.shuffle = shuffle self.min_depth = 0.1 self.on_epoch_end() def __len__(self): return int(np.ceil(len(self.data) / self.batch_size)) def __getitem__(self, index): if (index + 1) * self.batch_size > len(self.indices): self.batch_size = len(self.indices) - index * self.batch_size # Generate one batch of data # Generate indices of the batch index = self.indices[index * self.batch_size : (index + 1) * self.batch_size] # Find list of IDs batch = [self.indices[k] for k in index] x, y = self.data_generation(batch) return x, y def on_epoch_end(self): \"\"\" Updates indexes after each epoch \"\"\" self.index = np.arange(len(self.indices)) if self.shuffle == True: np.random.shuffle(self.index) def load(self, image_path, depth_map, mask): \"\"\"Load input and target image.\"\"\" image_ = cv2.imread(image_path) image_ = cv2.cvtColor(image_, cv2.COLOR_BGR2RGB) image_ = cv2.resize(image_, self.dim) image_ = tf.image.convert_image_dtype(image_, tf.float32) depth_map = np.load(depth_map).squeeze() mask = np.load(mask) mask = mask > 0 max_depth = min(300, np.percentile(depth_map, 99)) depth_map = np.clip(depth_map, self.min_depth, max_depth) depth_map = np.log(depth_map, where=mask) depth_map = np.ma.masked_where(~mask, depth_map) depth_map = np.clip(depth_map, 0.1, np.log(max_depth)) depth_map = cv2.resize(depth_map, self.dim) depth_map = np.expand_dims(depth_map, axis=2) depth_map = tf.image.convert_image_dtype(depth_map, tf.float32) return image_, depth_map def data_generation(self, batch): x = np.empty((self.batch_size, *self.dim, self.n_channels)) y = np.empty((self.batch_size, *self.dim, 1)) for i, batch_id in enumerate(batch): x[i,], y[i,] = self.load( self.data[\"image\"][batch_id], self.data[\"depth\"][batch_id], self.data[\"mask\"][batch_id], ) return x, y
Visualizing samples def visualize_depth_map(samples, test=False, model=None): input, target = samples cmap = plt.cm.jet cmap.set_bad(color=\"black\") if test: pred = model.predict(input) fig, ax = plt.subplots(6, 3, figsize=(50, 50)) for i in range(6): ax[i, 0].imshow((input[i].squeeze())) ax[i, 1].imshow((target[i].squeeze()), cmap=cmap) ax[i, 2].imshow((pred[i].squeeze()), cmap=cmap) else: fig, ax = plt.subplots(6, 2, figsize=(50, 50)) for i in range(6): ax[i, 0].imshow((input[i].squeeze())) ax[i, 1].imshow((target[i].squeeze()), cmap=cmap) visualize_samples = next( iter(DataGenerator(data=df, batch_size=6, dim=(HEIGHT, WIDTH))) ) visualize_depth_map(visualize_samples) png
3D point cloud visualization depth_vis = np.flipud(visualize_samples[1][1].squeeze()) # target img_vis = np.flipud(visualize_samples[0][1].squeeze()) # input fig = plt.figure(figsize=(15, 10)) ax = plt.axes(projection=\"3d\") STEP = 3 for x in range(0, img_vis.shape[0], STEP): for y in range(0, img_vis.shape[1], STEP): ax.scatter( [depth_vis[x, y]] * 3, [y] * 3, [x] * 3, c=tuple(img_vis[x, y, :3] / 255), s=3, ) ax.view_init(45, 135) png
Building the model The basic model is from U-Net. Additive skip-connections are implemented in the downscaling block.
class DownscaleBlock(layers.Layer): def __init__( self, filters, kernel_size=(3, 3), padding=\"same\", strides=1, **kwargs ): super().__init__(**kwargs) self.convA = layers.Conv2D(filters, kernel_size, strides, padding) self.convB = layers.Conv2D(filters, kernel_size, strides, padding) self.reluA = layers.LeakyReLU(alpha=0.2) self.reluB = layers.LeakyReLU(alpha=0.2) self.bn2a = tf.keras.layers.BatchNormalization() self.bn2b = tf.keras.layers.BatchNormalization() self.pool = layers.MaxPool2D((2, 2), (2, 2)) def call(self, input_tensor): d = self.convA(input_tensor) x = self.bn2a(d) x = self.reluA(x) x = self.convB(x) x = self.bn2b(x) x = self.reluB(x) x += d p = self.pool(x) return x, p class UpscaleBlock(layers.Layer): def __init__( self, filters, kernel_size=(3, 3), padding=\"same\", strides=1, **kwargs ): super().__init__(**kwargs) self.us = layers.UpSampling2D((2, 2)) self.convA = layers.Conv2D(filters, kernel_size, strides, padding) self.convB = layers.Conv2D(filters, kernel_size, strides, padding) self.reluA = layers.LeakyReLU(alpha=0.2) self.reluB = layers.LeakyReLU(alpha=0.2) self.bn2a = tf.keras.layers.BatchNormalization() self.bn2b = tf.keras.layers.BatchNormalization() self.conc = layers.Concatenate() def call(self, x, skip): x = self.us(x) concat = self.conc([x, skip]) x = self.convA(concat) x = self.bn2a(x) x = self.reluA(x) x = self.convB(x) x = self.bn2b(x) x = self.reluB(x) return x class BottleNeckBlock(layers.Layer): def __init__( self, filters, kernel_size=(3, 3), padding=\"same\", strides=1, **kwargs ): super().__init__(**kwargs) self.convA = layers.Conv2D(filters, kernel_size, strides, padding) self.convB = layers.Conv2D(filters, kernel_size, strides, padding) self.reluA = layers.LeakyReLU(alpha=0.2) self.reluB = layers.LeakyReLU(alpha=0.2) def call(self, x): x = self.convA(x) x = self.reluA(x) x = self.convB(x) x = self.reluB(x) return x
Defining the loss We will optimize 3 losses in our model: 1. Structural similarity index (SSIM). 2. L1-loss, or Point-wise depth in our case. 3. Depth smoothness loss. Out of the three loss functions, SSIM contributes the most to improving model performance.
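As a quick sanity check of the dominant term, the sketch below (on random dummy tensors, with an assumed max_val of 1.0 for this toy data) shows that the 1 - SSIM loss vanishes for identical images and grows as the prediction diverges structurally from the target; in the model below the three terms are combined with weights 0.85, 0.1 and 0.9.

import tensorflow as tf

# Dummy "depth maps" in [0, 1]; max_val=1.0 is an assumption for this toy data,
# not the setting used by the model below.
target = tf.random.uniform((1, 64, 64, 1), maxval=1.0)
identical = tf.identity(target)
noisy = tf.clip_by_value(target + tf.random.normal(tf.shape(target), stddev=0.2), 0.0, 1.0)

def ssim_loss(y_true, y_pred, max_val=1.0):
    # Mean structural dissimilarity: ~0 for identical inputs, larger otherwise.
    return tf.reduce_mean(1 - tf.image.ssim(y_true, y_pred, max_val=max_val, filter_size=7))

print(ssim_loss(target, identical).numpy())  # ~0.0
print(ssim_loss(target, noisy).numpy())      # noticeably > 0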
class DepthEstimationModel(tf.keras.Model): def __init__(self): super().__init__() self.ssim_loss_weight = 0.85 self.l1_loss_weight = 0.1 self.edge_loss_weight = 0.9 self.loss_metric = tf.keras.metrics.Mean(name=\"loss\") f = [16, 32, 64, 128, 256] self.downscale_blocks = [ DownscaleBlock(f[0]), DownscaleBlock(f[1]), DownscaleBlock(f[2]), DownscaleBlock(f[3]), ] self.bottle_neck_block = BottleNeckBlock(f[4]) self.upscale_blocks = [ UpscaleBlock(f[3]), UpscaleBlock(f[2]), UpscaleBlock(f[1]), UpscaleBlock(f[0]), ] self.conv_layer = layers.Conv2D(1, (1, 1), padding=\"same\", activation=\"tanh\") def calculate_loss(self, target, pred): # Edges dy_true, dx_true = tf.image.image_gradients(target) dy_pred, dx_pred = tf.image.image_gradients(pred) weights_x = tf.exp(tf.reduce_mean(tf.abs(dx_true))) weights_y = tf.exp(tf.reduce_mean(tf.abs(dy_true))) # Depth smoothness smoothness_x = dx_pred * weights_x smoothness_y = dy_pred * weights_y depth_smoothness_loss = tf.reduce_mean(abs(smoothness_x)) + tf.reduce_mean( abs(smoothness_y) ) # Structural similarity (SSIM) index ssim_loss = tf.reduce_mean( 1 - tf.image.ssim( target, pred, max_val=WIDTH, filter_size=7, k1=0.01 ** 2, k2=0.03 ** 2 ) ) # Point-wise depth l1_loss = tf.reduce_mean(tf.abs(target - pred)) loss = ( (self.ssim_loss_weight * ssim_loss) + (self.l1_loss_weight * l1_loss) + (self.edge_loss_weight * depth_smoothness_loss) ) return loss @property def metrics(self): return [self.loss_metric] def train_step(self, batch_data): input, target = batch_data with tf.GradientTape() as tape: pred = self(input, training=True) loss = self.calculate_loss(target, pred) gradients = tape.gradient(loss, self.trainable_variables) self.optimizer.apply_gradients(zip(gradients, self.trainable_variables)) self.loss_metric.update_state(loss) return { \"loss\": self.loss_metric.result(), } def test_step(self, batch_data): input, target = batch_data pred = self(input, training=False) loss = self.calculate_loss(target, pred) self.loss_metric.update_state(loss) return { \"loss\": self.loss_metric.result(), } def call(self, x): c1, p1 = self.downscale_blocks[0](x) c2, p2 = self.downscale_blocks[1](p1) c3, p3 = self.downscale_blocks[2](p2) c4, p4 = self.downscale_blocks[3](p3) bn = self.bottle_neck_block(p4) u1 = self.upscale_blocks[0](bn, c4) u2 = self.upscale_blocks[1](u1, c3) u3 = self.upscale_blocks[2](u2, c2) u4 = self.upscale_blocks[3](u3, c1) return self.conv_layer(u4) Model training optimizer = tf.keras.optimizers.Adam( learning_rate=LR, amsgrad=False, ) model = DepthEstimationModel() # Define the loss function cross_entropy = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True, reduction=\"none\" ) # Compile the model model.compile(optimizer, loss=cross_entropy) train_loader = DataGenerator( data=df[:260].reset_index(drop=\"true\"), batch_size=BATCH_SIZE, dim=(HEIGHT, WIDTH) ) validation_loader = DataGenerator( data=df[260:].reset_index(drop=\"true\"), batch_size=BATCH_SIZE, dim=(HEIGHT, WIDTH) ) model.fit( train_loader, epochs=EPOCHS, validation_data=validation_loader, ) Epoch 1/30 9/9 [==============================] - 18s 1s/step - loss: 1.1543 - val_loss: 1.4281 Epoch 2/30 9/9 [==============================] - 3s 390ms/step - loss: 0.8727 - val_loss: 1.0686 Epoch 3/30 9/9 [==============================] - 4s 428ms/step - loss: 0.6659 - val_loss: 0.7884 Epoch 4/30 9/9 [==============================] - 3s 334ms/step - loss: 0.6462 - val_loss: 0.6198 Epoch 5/30 9/9 [==============================] - 3s 355ms/step - loss: 0.5689 - val_loss: 
0.6207 Epoch 6/30 9/9 [==============================] - 3s 361ms/step - loss: 0.5067 - val_loss: 0.4876 Epoch 7/30 9/9 [==============================] - 3s 357ms/step - loss: 0.4680 - val_loss: 0.4698 Epoch 8/30 9/9 [==============================] - 3s 325ms/step - loss: 0.4622 - val_loss: 0.7249 Epoch 9/30 9/9 [==============================] - 3s 393ms/step - loss: 0.4215 - val_loss: 0.3826 Epoch 10/30 9/9 [==============================] - 3s 337ms/step - loss: 0.3788 - val_loss: 0.3289 Epoch 11/30 9/9 [==============================] - 3s 345ms/step - loss: 0.3347 - val_loss: 0.3032 Epoch 12/30 9/9 [==============================] - 3s 327ms/step - loss: 0.3488 - val_loss: 0.2631 Epoch 13/30 9/9 [==============================] - 3s 326ms/step - loss: 0.3315 - val_loss: 0.2383 Epoch 14/30 9/9 [==============================] - 3s 331ms/step - loss: 0.3349 - val_loss: 0.2379 Epoch 15/30 9/9 [==============================] - 3s 333ms/step - loss: 0.3394 - val_loss: 0.2151 Epoch 16/30 9/9 [==============================] - 3s 337ms/step - loss: 0.3073 - val_loss: 0.2243 Epoch 17/30 9/9 [==============================] - 3s 355ms/step - loss: 0.3951 - val_loss: 0.2627 Epoch 18/30 9/9 [==============================] - 3s 335ms/step - loss: 0.3657 - val_loss: 0.2175 Epoch 19/30 9/9 [==============================] - 3s 321ms/step - loss: 0.3404 - val_loss: 0.2073 Epoch 20/30 9/9 [==============================] - 3s 320ms/step - loss: 0.3549 - val_loss: 0.1972 Epoch 21/30 9/9 [==============================] - 3s 317ms/step - loss: 0.2802 - val_loss: 0.1936 Epoch 22/30 9/9 [==============================] - 3s 316ms/step - loss: 0.2632 - val_loss: 0.1893 Epoch 23/30 9/9 [==============================] - 3s 318ms/step - loss: 0.2862 - val_loss: 0.1807 Epoch 24/30 9/9 [==============================] - 3s 328ms/step - loss: 0.3083 - val_loss: 0.1923 Epoch 25/30 9/9 [==============================] - 3s 312ms/step - loss: 0.3666 - val_loss: 0.1795 Epoch 26/30 9/9 [==============================] - 3s 316ms/step - loss: 0.2928 - val_loss: 0.1753 Epoch 27/30 9/9 [==============================] - 3s 325ms/step - loss: 0.2945 - val_loss: 0.1790 Epoch 28/30 9/9 [==============================] - 3s 325ms/step - loss: 0.2642 - val_loss: 0.1775 Epoch 29/30 9/9 [==============================] - 3s 333ms/step - loss: 0.2546 - val_loss: 0.1810 Epoch 30/30 9/9 [==============================] - 3s 315ms/step - loss: 0.2650 - val_loss: 0.1795 Visualizing model output We visualize the model output over the validation set. The first image is the RGB image, the second image is the ground truth depth map image and the third one is the predicted depth map image. test_loader = next( iter( DataGenerator( data=df[265:].reset_index(drop=\"true\"), batch_size=6, dim=(HEIGHT, WIDTH) ) ) ) visualize_depth_map(test_loader, test=True, model=model) test_loader = next( iter( DataGenerator( data=df[300:].reset_index(drop=\"true\"), batch_size=6, dim=(HEIGHT, WIDTH) ) ) ) visualize_depth_map(test_loader, test=True, model=model) png png Possible improvements You can improve this model by replacing the encoding part of the U-Net with a pretrained DenseNet or ResNet. Loss functions play an important role in solving this problem. Tuning the loss functions may yield significant improvement. References The following papers go deeper into possible approaches for depth estimation. 1. Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos 2. 
Digging Into Self-Supervised Monocular Depth Estimation 3. Deeper Depth Prediction with Fully Convolutional Residual Networks You can also find helpful implementations on the Papers with Code depth estimation task page.
Implement DeepLabV3+ architecture for Multi-class Semantic Segmentation. Introduction Semantic segmentation, with the goal of assigning semantic labels to every pixel in an image, is an essential computer vision task. In this example, we implement the DeepLabV3+ model for multi-class semantic segmentation, a fully-convolutional architecture that performs well on semantic segmentation benchmarks. References: Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation Rethinking Atrous Convolution for Semantic Image Segmentation DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
Downloading the data We will use the Crowd Instance-level Human Parsing Dataset for training our model. The Crowd Instance-level Human Parsing (CIHP) dataset has 38,280 diverse human images. Each image in CIHP is labeled with pixel-wise annotations for 20 categories, as well as instance-level identification. This dataset can be used for the \"human part segmentation\" task. import os import cv2 import numpy as np from glob import glob from scipy.io import loadmat import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers !gdown https://drive.google.com/uc?id=1B9A9UCJYMwTL4oBEo4RZfbMZMaZhKJaz !unzip -q instance-level-human-parsing.zip Downloading... From: https://drive.google.com/uc?id=1B9A9UCJYMwTL4oBEo4RZfbMZMaZhKJaz To: /content/keras-io/scripts/tmp_4374681/instance-level-human-parsing.zip 2.91GB [00:36, 79.6MB/s]
Creating a TensorFlow Dataset Training on the entire CIHP dataset with 38,280 images takes a lot of time, hence we will be using a smaller subset of 1,000 images (plus 50 validation images) for training our model in this example.
IMAGE_SIZE = 512 BATCH_SIZE = 4 NUM_CLASSES = 20 DATA_DIR = \"./instance-level_human_parsing/instance-level_human_parsing/Training\" NUM_TRAIN_IMAGES = 1000 NUM_VAL_IMAGES = 50 train_images = sorted(glob(os.path.join(DATA_DIR, \"Images/*\")))[:NUM_TRAIN_IMAGES] train_masks = sorted(glob(os.path.join(DATA_DIR, \"Category_ids/*\")))[:NUM_TRAIN_IMAGES] val_images = sorted(glob(os.path.join(DATA_DIR, \"Images/*\")))[ NUM_TRAIN_IMAGES : NUM_VAL_IMAGES + NUM_TRAIN_IMAGES ] val_masks = sorted(glob(os.path.join(DATA_DIR, \"Category_ids/*\")))[ NUM_TRAIN_IMAGES : NUM_VAL_IMAGES + NUM_TRAIN_IMAGES ] def read_image(image_path, mask=False): image = tf.io.read_file(image_path) if mask: image = tf.image.decode_png(image, channels=1) image.set_shape([None, None, 1]) image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE]) else: image = tf.image.decode_png(image, channels=3) image.set_shape([None, None, 3]) image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE]) image = image / 127.5 - 1 return image def load_data(image_list, mask_list): image = read_image(image_list) mask = read_image(mask_list, mask=True) return image, mask def data_generator(image_list, mask_list): dataset = tf.data.Dataset.from_tensor_slices((image_list, mask_list)) dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) return dataset train_dataset = data_generator(train_images, train_masks) val_dataset = data_generator(val_images, val_masks) print(\"Train Dataset:\", train_dataset) print(\"Val Dataset:\", val_dataset) Train Dataset: Val Dataset: Building the DeepLabV3+ model DeepLabv3+ extends DeepLabv3 by adding an encoder-decoder structure. The encoder module processes multiscale contextual information by applying dilated convolution at multiple scales, while the decoder module refines the segmentation results along object boundaries. Dilated convolution: With dilated convolution, as we go deeper in the network, we can keep the stride constant but with larger field-of-view without increasing the number of parameters or the amount of computation. Besides, it enables larger output feature maps, which is useful for semantic segmentation. The reason for using Dilated Spatial Pyramid Pooling is that it was shown that as the sampling rate becomes larger, the number of valid filter weights (i.e., weights that are applied to the valid feature region, instead of padded zeros) becomes smaller. 
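To make the dilation argument concrete, the short check below (not part of the model code) compares a plain 3x3 convolution with a 3x3 convolution using dilation_rate=6: the dilated filter covers a 13x13 neighborhood while keeping exactly the same parameter count and output shape.

import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 32, 32, 256))  # dummy feature map

dense_conv = layers.Conv2D(256, kernel_size=3, padding="same")
dilated_conv = layers.Conv2D(256, kernel_size=3, dilation_rate=6, padding="same")

print(dense_conv(x).shape, dilated_conv(x).shape)              # same spatial output shape
print(dense_conv.count_params(), dilated_conv.count_params())  # identical parameter count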
def convolution_block( block_input, num_filters=256, kernel_size=3, dilation_rate=1, padding=\"same\", use_bias=False, ): x = layers.Conv2D( num_filters, kernel_size=kernel_size, dilation_rate=dilation_rate, padding=\"same\", use_bias=use_bias, kernel_initializer=keras.initializers.HeNormal(), )(block_input) x = layers.BatchNormalization()(x) return tf.nn.relu(x) def DilatedSpatialPyramidPooling(dspp_input): dims = dspp_input.shape x = layers.AveragePooling2D(pool_size=(dims[-3], dims[-2]))(dspp_input) x = convolution_block(x, kernel_size=1, use_bias=True) out_pool = layers.UpSampling2D( size=(dims[-3] // x.shape[1], dims[-2] // x.shape[2]), interpolation=\"bilinear\", )(x) out_1 = convolution_block(dspp_input, kernel_size=1, dilation_rate=1) out_6 = convolution_block(dspp_input, kernel_size=3, dilation_rate=6) out_12 = convolution_block(dspp_input, kernel_size=3, dilation_rate=12) out_18 = convolution_block(dspp_input, kernel_size=3, dilation_rate=18) x = layers.Concatenate(axis=-1)([out_pool, out_1, out_6, out_12, out_18]) output = convolution_block(x, kernel_size=1) return output The encoder features are first bilinearly upsampled by a factor 4, and then concatenated with the corresponding low-level features from the network backbone that have the same spatial resolution. For this example, we use a ResNet50 pretrained on ImageNet as the backbone model, and we use the low-level features from the conv4_block6_2_relu block of the backbone. def DeeplabV3Plus(image_size, num_classes): model_input = keras.Input(shape=(image_size, image_size, 3)) resnet50 = keras.applications.ResNet50( weights=\"imagenet\", include_top=False, input_tensor=model_input ) x = resnet50.get_layer(\"conv4_block6_2_relu\").output x = DilatedSpatialPyramidPooling(x) input_a = layers.UpSampling2D( size=(image_size // 4 // x.shape[1], image_size // 4 // x.shape[2]), interpolation=\"bilinear\", )(x) input_b = resnet50.get_layer(\"conv2_block3_2_relu\").output input_b = convolution_block(input_b, num_filters=48, kernel_size=1) x = layers.Concatenate(axis=-1)([input_a, input_b]) x = convolution_block(x) x = convolution_block(x) x = layers.UpSampling2D( size=(image_size // x.shape[1], image_size // x.shape[2]), interpolation=\"bilinear\", )(x) model_output = layers.Conv2D(num_classes, kernel_size=(1, 1), padding=\"same\")(x) return keras.Model(inputs=model_input, outputs=model_output) model = DeeplabV3Plus(image_size=IMAGE_SIZE, num_classes=NUM_CLASSES) model.summary() Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5 94773248/94765736 [==============================] - 1s 0us/step 94781440/94765736 [==============================] - 1s 0us/step Model: \"model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 512, 512, 3) 0 __________________________________________________________________________________________________ conv1_pad (ZeroPadding2D) (None, 518, 518, 3) 0 input_1[0][0] __________________________________________________________________________________________________ conv1_conv (Conv2D) (None, 256, 256, 64) 9472 conv1_pad[0][0] __________________________________________________________________________________________________ conv1_bn (BatchNormalization) (None, 256, 256, 64) 256 conv1_conv[0][0] 
__________________________________________________________________________________________________ conv1_relu (Activation) (None, 256, 256, 64) 0 conv1_bn[0][0] __________________________________________________________________________________________________ pool1_pad (ZeroPadding2D) (None, 258, 258, 64) 0 conv1_relu[0][0] __________________________________________________________________________________________________ pool1_pool (MaxPooling2D) (None, 128, 128, 64) 0 pool1_pad[0][0] __________________________________________________________________________________________________ conv2_block1_1_conv (Conv2D) (None, 128, 128, 64) 4160 pool1_pool[0][0] __________________________________________________________________________________________________ conv2_block1_1_bn (BatchNormali (None, 128, 128, 64) 256 conv2_block1_1_conv[0][0] __________________________________________________________________________________________________ conv2_block1_1_relu (Activation (None, 128, 128, 64) 0 conv2_block1_1_bn[0][0] __________________________________________________________________________________________________ conv2_block1_2_conv (Conv2D) (None, 128, 128, 64) 36928 conv2_block1_1_relu[0][0] __________________________________________________________________________________________________ conv2_block1_2_bn (BatchNormali (None, 128, 128, 64) 256 conv2_block1_2_conv[0][0] __________________________________________________________________________________________________ conv2_block1_2_relu (Activation (None, 128, 128, 64) 0 conv2_block1_2_bn[0][0] __________________________________________________________________________________________________ conv2_block1_0_conv (Conv2D) (None, 128, 128, 256 16640 pool1_pool[0][0] __________________________________________________________________________________________________ conv2_block1_3_conv (Conv2D) (None, 128, 128, 256 16640 conv2_block1_2_relu[0][0] __________________________________________________________________________________________________ conv2_block1_0_bn (BatchNormali (None, 128, 128, 256 1024 conv2_block1_0_conv[0][0] __________________________________________________________________________________________________ conv2_block1_3_bn (BatchNormali (None, 128, 128, 256 1024 conv2_block1_3_conv[0][0] __________________________________________________________________________________________________ conv2_block1_add (Add) (None, 128, 128, 256 0 conv2_block1_0_bn[0][0] conv2_block1_3_bn[0][0] __________________________________________________________________________________________________ conv2_block1_out (Activation) (None, 128, 128, 256 0 conv2_block1_add[0][0] __________________________________________________________________________________________________ conv2_block2_1_conv (Conv2D) (None, 128, 128, 64) 16448 conv2_block1_out[0][0] __________________________________________________________________________________________________ conv2_block2_1_bn (BatchNormali (None, 128, 128, 64) 256 conv2_block2_1_conv[0][0] __________________________________________________________________________________________________ conv2_block2_1_relu (Activation (None, 128, 128, 64) 0 conv2_block2_1_bn[0][0] __________________________________________________________________________________________________ conv2_block2_2_conv (Conv2D) (None, 128, 128, 64) 36928 conv2_block2_1_relu[0][0] __________________________________________________________________________________________________ conv2_block2_2_bn (BatchNormali (None, 128, 128, 64) 256 conv2_block2_2_conv[0][0] 
__________________________________________________________________________________________________ conv2_block2_2_relu (Activation (None, 128, 128, 64) 0 conv2_block2_2_bn[0][0] __________________________________________________________________________________________________ conv2_block2_3_conv (Conv2D) (None, 128, 128, 256 16640 conv2_block2_2_relu[0][0] __________________________________________________________________________________________________ conv2_block2_3_bn (BatchNormali (None, 128, 128, 256 1024 conv2_block2_3_conv[0][0] __________________________________________________________________________________________________ conv2_block2_add (Add) (None, 128, 128, 256 0 conv2_block1_out[0][0] conv2_block2_3_bn[0][0] __________________________________________________________________________________________________ conv2_block2_out (Activation) (None, 128, 128, 256 0 conv2_block2_add[0][0] __________________________________________________________________________________________________ conv2_block3_1_conv (Conv2D) (None, 128, 128, 64) 16448 conv2_block2_out[0][0] __________________________________________________________________________________________________ conv2_block3_1_bn (BatchNormali (None, 128, 128, 64) 256 conv2_block3_1_conv[0][0] __________________________________________________________________________________________________ conv2_block3_1_relu (Activation (None, 128, 128, 64) 0 conv2_block3_1_bn[0][0] __________________________________________________________________________________________________ conv2_block3_2_conv (Conv2D) (None, 128, 128, 64) 36928 conv2_block3_1_relu[0][0] __________________________________________________________________________________________________ conv2_block3_2_bn (BatchNormali (None, 128, 128, 64) 256 conv2_block3_2_conv[0][0] __________________________________________________________________________________________________ conv2_block3_2_relu (Activation (None, 128, 128, 64) 0 conv2_block3_2_bn[0][0] __________________________________________________________________________________________________ conv2_block3_3_conv (Conv2D) (None, 128, 128, 256 16640 conv2_block3_2_relu[0][0] __________________________________________________________________________________________________ conv2_block3_3_bn (BatchNormali (None, 128, 128, 256 1024 conv2_block3_3_conv[0][0] __________________________________________________________________________________________________ conv2_block3_add (Add) (None, 128, 128, 256 0 conv2_block2_out[0][0] conv2_block3_3_bn[0][0] __________________________________________________________________________________________________ conv2_block3_out (Activation) (None, 128, 128, 256 0 conv2_block3_add[0][0] __________________________________________________________________________________________________ conv3_block1_1_conv (Conv2D) (None, 64, 64, 128) 32896 conv2_block3_out[0][0] __________________________________________________________________________________________________ conv3_block1_1_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block1_1_conv[0][0] __________________________________________________________________________________________________ conv3_block1_1_relu (Activation (None, 64, 64, 128) 0 conv3_block1_1_bn[0][0] __________________________________________________________________________________________________ conv3_block1_2_conv (Conv2D) (None, 64, 64, 128) 147584 conv3_block1_1_relu[0][0] __________________________________________________________________________________________________ conv3_block1_2_bn 
(BatchNormali (None, 64, 64, 128) 512 conv3_block1_2_conv[0][0] __________________________________________________________________________________________________ conv3_block1_2_relu (Activation (None, 64, 64, 128) 0 conv3_block1_2_bn[0][0] __________________________________________________________________________________________________ conv3_block1_0_conv (Conv2D) (None, 64, 64, 512) 131584 conv2_block3_out[0][0] __________________________________________________________________________________________________ conv3_block1_3_conv (Conv2D) (None, 64, 64, 512) 66048 conv3_block1_2_relu[0][0] __________________________________________________________________________________________________ conv3_block1_0_bn (BatchNormali (None, 64, 64, 512) 2048 conv3_block1_0_conv[0][0] __________________________________________________________________________________________________ conv3_block1_3_bn (BatchNormali (None, 64, 64, 512) 2048 conv3_block1_3_conv[0][0] __________________________________________________________________________________________________ conv3_block1_add (Add) (None, 64, 64, 512) 0 conv3_block1_0_bn[0][0] conv3_block1_3_bn[0][0] __________________________________________________________________________________________________ conv3_block1_out (Activation) (None, 64, 64, 512) 0 conv3_block1_add[0][0] __________________________________________________________________________________________________ conv3_block2_1_conv (Conv2D) (None, 64, 64, 128) 65664 conv3_block1_out[0][0] __________________________________________________________________________________________________ conv3_block2_1_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block2_1_conv[0][0] __________________________________________________________________________________________________ conv3_block2_1_relu (Activation (None, 64, 64, 128) 0 conv3_block2_1_bn[0][0] __________________________________________________________________________________________________ conv3_block2_2_conv (Conv2D) (None, 64, 64, 128) 147584 conv3_block2_1_relu[0][0] __________________________________________________________________________________________________ conv3_block2_2_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block2_2_conv[0][0] __________________________________________________________________________________________________ conv3_block2_2_relu (Activation (None, 64, 64, 128) 0 conv3_block2_2_bn[0][0] __________________________________________________________________________________________________ conv3_block2_3_conv (Conv2D) (None, 64, 64, 512) 66048 conv3_block2_2_relu[0][0] __________________________________________________________________________________________________ conv3_block2_3_bn (BatchNormali (None, 64, 64, 512) 2048 conv3_block2_3_conv[0][0] __________________________________________________________________________________________________ conv3_block2_add (Add) (None, 64, 64, 512) 0 conv3_block1_out[0][0] conv3_block2_3_bn[0][0] __________________________________________________________________________________________________ conv3_block2_out (Activation) (None, 64, 64, 512) 0 conv3_block2_add[0][0] __________________________________________________________________________________________________ conv3_block3_1_conv (Conv2D) (None, 64, 64, 128) 65664 conv3_block2_out[0][0] __________________________________________________________________________________________________ conv3_block3_1_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block3_1_conv[0][0] 
__________________________________________________________________________________________________ conv3_block3_1_relu (Activation (None, 64, 64, 128) 0 conv3_block3_1_bn[0][0] __________________________________________________________________________________________________ conv3_block3_2_conv (Conv2D) (None, 64, 64, 128) 147584 conv3_block3_1_relu[0][0] __________________________________________________________________________________________________ conv3_block3_2_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block3_2_conv[0][0] __________________________________________________________________________________________________ conv3_block3_2_relu (Activation (None, 64, 64, 128) 0 conv3_block3_2_bn[0][0] __________________________________________________________________________________________________ conv3_block3_3_conv (Conv2D) (None, 64, 64, 512) 66048 conv3_block3_2_relu[0][0] __________________________________________________________________________________________________ conv3_block3_3_bn (BatchNormali (None, 64, 64, 512) 2048 conv3_block3_3_conv[0][0] __________________________________________________________________________________________________ conv3_block3_add (Add) (None, 64, 64, 512) 0 conv3_block2_out[0][0] conv3_block3_3_bn[0][0] __________________________________________________________________________________________________ conv3_block3_out (Activation) (None, 64, 64, 512) 0 conv3_block3_add[0][0] __________________________________________________________________________________________________ conv3_block4_1_conv (Conv2D) (None, 64, 64, 128) 65664 conv3_block3_out[0][0] __________________________________________________________________________________________________ conv3_block4_1_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block4_1_conv[0][0] __________________________________________________________________________________________________ conv3_block4_1_relu (Activation (None, 64, 64, 128) 0 conv3_block4_1_bn[0][0] __________________________________________________________________________________________________ conv3_block4_2_conv (Conv2D) (None, 64, 64, 128) 147584 conv3_block4_1_relu[0][0] __________________________________________________________________________________________________ conv3_block4_2_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block4_2_conv[0][0] __________________________________________________________________________________________________ conv3_block4_2_relu (Activation (None, 64, 64, 128) 0 conv3_block4_2_bn[0][0] __________________________________________________________________________________________________ conv3_block4_3_conv (Conv2D) (None, 64, 64, 512) 66048 conv3_block4_2_relu[0][0] __________________________________________________________________________________________________ conv3_block4_3_bn (BatchNormali (None, 64, 64, 512) 2048 conv3_block4_3_conv[0][0] __________________________________________________________________________________________________ conv3_block4_add (Add) (None, 64, 64, 512) 0 conv3_block3_out[0][0] conv3_block4_3_bn[0][0] __________________________________________________________________________________________________ conv3_block4_out (Activation) (None, 64, 64, 512) 0 conv3_block4_add[0][0] __________________________________________________________________________________________________ conv4_block1_1_conv (Conv2D) (None, 32, 32, 256) 131328 conv3_block4_out[0][0] __________________________________________________________________________________________________ conv4_block1_1_bn (BatchNormali 
(None, 32, 32, 256) 1024 conv4_block1_1_conv[0][0] __________________________________________________________________________________________________ conv4_block1_1_relu (Activation (None, 32, 32, 256) 0 conv4_block1_1_bn[0][0] __________________________________________________________________________________________________ conv4_block1_2_conv (Conv2D) (None, 32, 32, 256) 590080 conv4_block1_1_relu[0][0] __________________________________________________________________________________________________ conv4_block1_2_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block1_2_conv[0][0] __________________________________________________________________________________________________ conv4_block1_2_relu (Activation (None, 32, 32, 256) 0 conv4_block1_2_bn[0][0] __________________________________________________________________________________________________ conv4_block1_0_conv (Conv2D) (None, 32, 32, 1024) 525312 conv3_block4_out[0][0] __________________________________________________________________________________________________ conv4_block1_3_conv (Conv2D) (None, 32, 32, 1024) 263168 conv4_block1_2_relu[0][0] __________________________________________________________________________________________________ conv4_block1_0_bn (BatchNormali (None, 32, 32, 1024) 4096 conv4_block1_0_conv[0][0] __________________________________________________________________________________________________ conv4_block1_3_bn (BatchNormali (None, 32, 32, 1024) 4096 conv4_block1_3_conv[0][0] __________________________________________________________________________________________________ conv4_block1_add (Add) (None, 32, 32, 1024) 0 conv4_block1_0_bn[0][0] conv4_block1_3_bn[0][0] __________________________________________________________________________________________________ conv4_block1_out (Activation) (None, 32, 32, 1024) 0 conv4_block1_add[0][0] __________________________________________________________________________________________________ conv4_block2_1_conv (Conv2D) (None, 32, 32, 256) 262400 conv4_block1_out[0][0] __________________________________________________________________________________________________ conv4_block2_1_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block2_1_conv[0][0] __________________________________________________________________________________________________ conv4_block2_1_relu (Activation (None, 32, 32, 256) 0 conv4_block2_1_bn[0][0] __________________________________________________________________________________________________ conv4_block2_2_conv (Conv2D) (None, 32, 32, 256) 590080 conv4_block2_1_relu[0][0] __________________________________________________________________________________________________ conv4_block2_2_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block2_2_conv[0][0] __________________________________________________________________________________________________ conv4_block2_2_relu (Activation (None, 32, 32, 256) 0 conv4_block2_2_bn[0][0] __________________________________________________________________________________________________ conv4_block2_3_conv (Conv2D) (None, 32, 32, 1024) 263168 conv4_block2_2_relu[0][0] __________________________________________________________________________________________________ conv4_block2_3_bn (BatchNormali (None, 32, 32, 1024) 4096 conv4_block2_3_conv[0][0] __________________________________________________________________________________________________ conv4_block2_add (Add) (None, 32, 32, 1024) 0 conv4_block1_out[0][0] conv4_block2_3_bn[0][0] 
__________________________________________________________________________________________________ conv4_block2_out (Activation) (None, 32, 32, 1024) 0 conv4_block2_add[0][0] __________________________________________________________________________________________________ conv4_block3_1_conv (Conv2D) (None, 32, 32, 256) 262400 conv4_block2_out[0][0] __________________________________________________________________________________________________ conv4_block3_1_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block3_1_conv[0][0] __________________________________________________________________________________________________ conv4_block3_1_relu (Activation (None, 32, 32, 256) 0 conv4_block3_1_bn[0][0] __________________________________________________________________________________________________ conv4_block3_2_conv (Conv2D) (None, 32, 32, 256) 590080 conv4_block3_1_relu[0][0] __________________________________________________________________________________________________ conv4_block3_2_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block3_2_conv[0][0] __________________________________________________________________________________________________ conv4_block3_2_relu (Activation (None, 32, 32, 256) 0 conv4_block3_2_bn[0][0] __________________________________________________________________________________________________ conv4_block3_3_conv (Conv2D) (None, 32, 32, 1024) 263168 conv4_block3_2_relu[0][0] __________________________________________________________________________________________________ conv4_block3_3_bn (BatchNormali (None, 32, 32, 1024) 4096 conv4_block3_3_conv[0][0] __________________________________________________________________________________________________ conv4_block3_add (Add) (None, 32, 32, 1024) 0 conv4_block2_out[0][0] conv4_block3_3_bn[0][0] __________________________________________________________________________________________________ conv4_block3_out (Activation) (None, 32, 32, 1024) 0 conv4_block3_add[0][0] __________________________________________________________________________________________________ conv4_block4_1_conv (Conv2D) (None, 32, 32, 256) 262400 conv4_block3_out[0][0] __________________________________________________________________________________________________ conv4_block4_1_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block4_1_conv[0][0] __________________________________________________________________________________________________ conv4_block4_1_relu (Activation (None, 32, 32, 256) 0 conv4_block4_1_bn[0][0] __________________________________________________________________________________________________ conv4_block4_2_conv (Conv2D) (None, 32, 32, 256) 590080 conv4_block4_1_relu[0][0] __________________________________________________________________________________________________ conv4_block4_2_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block4_2_conv[0][0] __________________________________________________________________________________________________ conv4_block4_2_relu (Activation (None, 32, 32, 256) 0 conv4_block4_2_bn[0][0] __________________________________________________________________________________________________ conv4_block4_3_conv (Conv2D) (None, 32, 32, 1024) 263168 conv4_block4_2_relu[0][0] __________________________________________________________________________________________________ conv4_block4_3_bn (BatchNormali (None, 32, 32, 1024) 4096 conv4_block4_3_conv[0][0] __________________________________________________________________________________________________ conv4_block4_add (Add) (None, 
32, 32, 1024) 0 conv4_block3_out[0][0] conv4_block4_3_bn[0][0] __________________________________________________________________________________________________ conv4_block4_out (Activation) (None, 32, 32, 1024) 0 conv4_block4_add[0][0] __________________________________________________________________________________________________ conv4_block5_1_conv (Conv2D) (None, 32, 32, 256) 262400 conv4_block4_out[0][0] __________________________________________________________________________________________________ conv4_block5_1_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block5_1_conv[0][0] __________________________________________________________________________________________________ conv4_block5_1_relu (Activation (None, 32, 32, 256) 0 conv4_block5_1_bn[0][0] __________________________________________________________________________________________________ conv4_block5_2_conv (Conv2D) (None, 32, 32, 256) 590080 conv4_block5_1_relu[0][0] __________________________________________________________________________________________________ conv4_block5_2_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block5_2_conv[0][0] __________________________________________________________________________________________________ conv4_block5_2_relu (Activation (None, 32, 32, 256) 0 conv4_block5_2_bn[0][0] __________________________________________________________________________________________________ conv4_block5_3_conv (Conv2D) (None, 32, 32, 1024) 263168 conv4_block5_2_relu[0][0] __________________________________________________________________________________________________ conv4_block5_3_bn (BatchNormali (None, 32, 32, 1024) 4096 conv4_block5_3_conv[0][0] __________________________________________________________________________________________________ conv4_block5_add (Add) (None, 32, 32, 1024) 0 conv4_block4_out[0][0] conv4_block5_3_bn[0][0] __________________________________________________________________________________________________ conv4_block5_out (Activation) (None, 32, 32, 1024) 0 conv4_block5_add[0][0] __________________________________________________________________________________________________ conv4_block6_1_conv (Conv2D) (None, 32, 32, 256) 262400 conv4_block5_out[0][0] __________________________________________________________________________________________________ conv4_block6_1_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block6_1_conv[0][0] __________________________________________________________________________________________________ conv4_block6_1_relu (Activation (None, 32, 32, 256) 0 conv4_block6_1_bn[0][0] __________________________________________________________________________________________________ conv4_block6_2_conv (Conv2D) (None, 32, 32, 256) 590080 conv4_block6_1_relu[0][0] __________________________________________________________________________________________________ conv4_block6_2_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block6_2_conv[0][0] __________________________________________________________________________________________________ conv4_block6_2_relu (Activation (None, 32, 32, 256) 0 conv4_block6_2_bn[0][0] __________________________________________________________________________________________________ average_pooling2d (AveragePooli (None, 1, 1, 256) 0 conv4_block6_2_relu[0][0] __________________________________________________________________________________________________ conv2d (Conv2D) (None, 1, 1, 256) 65792 average_pooling2d[0][0] 
__________________________________________________________________________________________________ batch_normalization (BatchNorma (None, 1, 1, 256) 1024 conv2d[0][0] __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 32, 32, 256) 65536 conv4_block6_2_relu[0][0] __________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 32, 32, 256) 589824 conv4_block6_2_relu[0][0] __________________________________________________________________________________________________ conv2d_3 (Conv2D) (None, 32, 32, 256) 589824 conv4_block6_2_relu[0][0] __________________________________________________________________________________________________ conv2d_4 (Conv2D) (None, 32, 32, 256) 589824 conv4_block6_2_relu[0][0] __________________________________________________________________________________________________ tf.nn.relu (TFOpLambda) (None, 1, 1, 256) 0 batch_normalization[0][0] __________________________________________________________________________________________________ batch_normalization_1 (BatchNor (None, 32, 32, 256) 1024 conv2d_1[0][0] __________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 32, 32, 256) 1024 conv2d_2[0][0] __________________________________________________________________________________________________ batch_normalization_3 (BatchNor (None, 32, 32, 256) 1024 conv2d_3[0][0] __________________________________________________________________________________________________ batch_normalization_4 (BatchNor (None, 32, 32, 256) 1024 conv2d_4[0][0] __________________________________________________________________________________________________ up_sampling2d (UpSampling2D) (None, 32, 32, 256) 0 tf.nn.relu[0][0] __________________________________________________________________________________________________ tf.nn.relu_1 (TFOpLambda) (None, 32, 32, 256) 0 batch_normalization_1[0][0] __________________________________________________________________________________________________ tf.nn.relu_2 (TFOpLambda) (None, 32, 32, 256) 0 batch_normalization_2[0][0] __________________________________________________________________________________________________ tf.nn.relu_3 (TFOpLambda) (None, 32, 32, 256) 0 batch_normalization_3[0][0] __________________________________________________________________________________________________ tf.nn.relu_4 (TFOpLambda) (None, 32, 32, 256) 0 batch_normalization_4[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 32, 32, 1280) 0 up_sampling2d[0][0] tf.nn.relu_1[0][0] tf.nn.relu_2[0][0] tf.nn.relu_3[0][0] tf.nn.relu_4[0][0] __________________________________________________________________________________________________ conv2d_5 (Conv2D) (None, 32, 32, 256) 327680 concatenate[0][0] __________________________________________________________________________________________________ batch_normalization_5 (BatchNor (None, 32, 32, 256) 1024 conv2d_5[0][0] __________________________________________________________________________________________________ conv2d_6 (Conv2D) (None, 128, 128, 48) 3072 conv2_block3_2_relu[0][0] __________________________________________________________________________________________________ tf.nn.relu_5 (TFOpLambda) (None, 32, 32, 256) 0 batch_normalization_5[0][0] 
__________________________________________________________________________________________________ batch_normalization_6 (BatchNor (None, 128, 128, 48) 192 conv2d_6[0][0] __________________________________________________________________________________________________ up_sampling2d_1 (UpSampling2D) (None, 128, 128, 256 0 tf.nn.relu_5[0][0] __________________________________________________________________________________________________ tf.nn.relu_6 (TFOpLambda) (None, 128, 128, 48) 0 batch_normalization_6[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 128, 128, 304 0 up_sampling2d_1[0][0] tf.nn.relu_6[0][0] __________________________________________________________________________________________________ conv2d_7 (Conv2D) (None, 128, 128, 256 700416 concatenate_1[0][0] __________________________________________________________________________________________________ batch_normalization_7 (BatchNor (None, 128, 128, 256 1024 conv2d_7[0][0] __________________________________________________________________________________________________ tf.nn.relu_7 (TFOpLambda) (None, 128, 128, 256 0 batch_normalization_7[0][0] __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 128, 128, 256 589824 tf.nn.relu_7[0][0] __________________________________________________________________________________________________ batch_normalization_8 (BatchNor (None, 128, 128, 256 1024 conv2d_8[0][0] __________________________________________________________________________________________________ tf.nn.relu_8 (TFOpLambda) (None, 128, 128, 256 0 batch_normalization_8[0][0] __________________________________________________________________________________________________ up_sampling2d_2 (UpSampling2D) (None, 512, 512, 256 0 tf.nn.relu_8[0][0] __________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 512, 512, 20) 5140 up_sampling2d_2[0][0] ================================================================================================== Total params: 11,857,236 Trainable params: 11,824,500 Non-trainable params: 32,736 __________________________________________________________________________________________________ Training We train the model using sparse categorical crossentropy as the loss function, and Adam as the optimizer. 
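Because each ground-truth mask stores a single integer class id per pixel (rather than a one-hot vector), sparse categorical crossentropy can be applied directly to the model's per-pixel logits of shape (N, 512, 512, 20). As a minimal sketch, using dummy tensors of the same shapes:

import tensorflow as tf
from tensorflow import keras

# Dummy logits: one score per class for every pixel -> (batch, height, width, num_classes).
dummy_logits = tf.random.normal((1, 512, 512, 20))
# Dummy mask: one integer class id per pixel -> (batch, height, width); no one-hot encoding needed.
dummy_mask = tf.zeros((1, 512, 512), dtype=tf.int32)
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(float(loss_fn(dummy_mask, dummy_logits)))  # scalar loss averaged over all pixels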
loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile( optimizer=keras.optimizers.Adam(learning_rate=0.001), loss=loss, metrics=[\"accuracy\"], ) history = model.fit(train_dataset, validation_data=val_dataset, epochs=25) plt.plot(history.history[\"loss\"]) plt.title(\"Training Loss\") plt.ylabel(\"loss\") plt.xlabel(\"epoch\") plt.show() plt.plot(history.history[\"accuracy\"]) plt.title(\"Training Accuracy\") plt.ylabel(\"accuracy\") plt.xlabel(\"epoch\") plt.show() plt.plot(history.history[\"val_loss\"]) plt.title(\"Validation Loss\") plt.ylabel(\"val_loss\") plt.xlabel(\"epoch\") plt.show() plt.plot(history.history[\"val_accuracy\"]) plt.title(\"Validation Accuracy\") plt.ylabel(\"val_accuracy\") plt.xlabel(\"epoch\") plt.show() Epoch 1/25 250/250 [==============================] - 115s 359ms/step - loss: 1.1765 - accuracy: 0.6424 - val_loss: 2.3559 - val_accuracy: 0.5960 Epoch 2/25 250/250 [==============================] - 92s 366ms/step - loss: 0.9413 - accuracy: 0.6998 - val_loss: 1.7349 - val_accuracy: 0.5593 Epoch 3/25 250/250 [==============================] - 93s 371ms/step - loss: 0.8415 - accuracy: 0.7310 - val_loss: 1.3097 - val_accuracy: 0.6281 Epoch 4/25 250/250 [==============================] - 93s 372ms/step - loss: 0.7640 - accuracy: 0.7552 - val_loss: 1.0175 - val_accuracy: 0.6885 Epoch 5/25 250/250 [==============================] - 93s 372ms/step - loss: 0.7139 - accuracy: 0.7706 - val_loss: 1.2226 - val_accuracy: 0.6107 Epoch 6/25 250/250 [==============================] - 93s 373ms/step - loss: 0.6647 - accuracy: 0.7867 - val_loss: 0.8583 - val_accuracy: 0.7178 Epoch 7/25 250/250 [==============================] - 94s 375ms/step - loss: 0.5986 - accuracy: 0.8080 - val_loss: 0.9724 - val_accuracy: 0.7135 Epoch 8/25 250/250 [==============================] - 93s 372ms/step - loss: 0.5599 - accuracy: 0.8212 - val_loss: 0.9722 - val_accuracy: 0.7064 Epoch 9/25 250/250 [==============================] - 93s 372ms/step - loss: 0.5161 - accuracy: 0.8364 - val_loss: 0.9023 - val_accuracy: 0.7471 Epoch 10/25 250/250 [==============================] - 93s 373ms/step - loss: 0.4719 - accuracy: 0.8515 - val_loss: 0.8803 - val_accuracy: 0.7540 Epoch 11/25 250/250 [==============================] - 93s 372ms/step - loss: 0.4337 - accuracy: 0.8636 - val_loss: 0.9682 - val_accuracy: 0.7377 Epoch 12/25 250/250 [==============================] - 93s 373ms/step - loss: 0.4079 - accuracy: 0.8718 - val_loss: 0.9586 - val_accuracy: 0.7551 Epoch 13/25 250/250 [==============================] - 93s 373ms/step - loss: 0.3694 - accuracy: 0.8856 - val_loss: 0.9676 - val_accuracy: 0.7606 Epoch 14/25 250/250 [==============================] - 93s 373ms/step - loss: 0.3493 - accuracy: 0.8913 - val_loss: 0.8375 - val_accuracy: 0.7706 Epoch 15/25 250/250 [==============================] - 93s 373ms/step - loss: 0.3217 - accuracy: 0.9008 - val_loss: 0.9956 - val_accuracy: 0.7469 Epoch 16/25 250/250 [==============================] - 93s 372ms/step - loss: 0.3018 - accuracy: 0.9075 - val_loss: 0.9614 - val_accuracy: 0.7474 Epoch 17/25 250/250 [==============================] - 93s 372ms/step - loss: 0.2870 - accuracy: 0.9122 - val_loss: 0.9652 - val_accuracy: 0.7626 Epoch 18/25 250/250 [==============================] - 93s 373ms/step - loss: 0.2685 - accuracy: 0.9182 - val_loss: 0.8913 - val_accuracy: 0.7824 Epoch 19/25 250/250 [==============================] - 93s 373ms/step - loss: 0.2574 - accuracy: 0.9216 - val_loss: 1.0205 - val_accuracy: 0.7417 Epoch 20/25 250/250 
[==============================] - 93s 372ms/step - loss: 0.2619 - accuracy: 0.9199 - val_loss: 0.9237 - val_accuracy: 0.7788 Epoch 21/25 250/250 [==============================] - 93s 372ms/step - loss: 0.2372 - accuracy: 0.9280 - val_loss: 0.9076 - val_accuracy: 0.7796 Epoch 22/25 250/250 [==============================] - 93s 372ms/step - loss: 0.2175 - accuracy: 0.9344 - val_loss: 0.9797 - val_accuracy: 0.7742 Epoch 23/25 250/250 [==============================] - 93s 372ms/step - loss: 0.2084 - accuracy: 0.9370 - val_loss: 0.9981 - val_accuracy: 0.7870 Epoch 24/25 250/250 [==============================] - 93s 373ms/step - loss: 0.2077 - accuracy: 0.9370 - val_loss: 1.0494 - val_accuracy: 0.7767 Epoch 25/25 250/250 [==============================] - 93s 372ms/step - loss: 0.2059 - accuracy: 0.9377 - val_loss: 0.9640 - val_accuracy: 0.7651 png png png png Inference using Colormap Overlay The raw predictions from the model represent a one-hot encoded tensor of shape (N, 512, 512, 20) where each one of the 20 channels is a binary mask corresponding to a predicted label. In order to visualize the results, we plot them as RGB segmentation masks where each pixel is represented by a unique color corresponding to the particular label predicted. We can easily find the color corresponding to each label from the human_colormap.mat file provided as part of the dataset. We would also plot an overlay of the RGB segmentation mask on the input image as this further helps us to identify the different categories present in the image more intuitively. # Loading the Colormap colormap = loadmat( \"./instance-level_human_parsing/instance-level_human_parsing/human_colormap.mat\" )[\"colormap\"] colormap = colormap * 100 colormap = colormap.astype(np.uint8) def infer(model, image_tensor): predictions = model.predict(np.expand_dims((image_tensor), axis=0)) predictions = np.squeeze(predictions) predictions = np.argmax(predictions, axis=2) return predictions def decode_segmentation_masks(mask, colormap, n_classes): r = np.zeros_like(mask).astype(np.uint8) g = np.zeros_like(mask).astype(np.uint8) b = np.zeros_like(mask).astype(np.uint8) for l in range(0, n_classes): idx = mask == l r[idx] = colormap[l, 0] g[idx] = colormap[l, 1] b[idx] = colormap[l, 2] rgb = np.stack([r, g, b], axis=2) return rgb def get_overlay(image, colored_mask): image = tf.keras.preprocessing.image.array_to_img(image) image = np.array(image).astype(np.uint8) overlay = cv2.addWeighted(image, 0.35, colored_mask, 0.65, 0) return overlay def plot_samples_matplotlib(display_list, figsize=(5, 3)): _, axes = plt.subplots(nrows=1, ncols=len(display_list), figsize=figsize) for i in range(len(display_list)): if display_list[i].shape[-1] == 3: axes[i].imshow(tf.keras.preprocessing.image.array_to_img(display_list[i])) else: axes[i].imshow(display_list[i]) plt.show() def plot_predictions(images_list, colormap, model): for image_file in images_list: image_tensor = read_image(image_file) prediction_mask = infer(image_tensor=image_tensor, model=model) prediction_colormap = decode_segmentation_masks(prediction_mask, colormap, 20) overlay = get_overlay(image_tensor, prediction_colormap) plot_samples_matplotlib( [image_tensor, overlay, prediction_colormap], figsize=(18, 14) ) Inference on Train Images plot_predictions(train_images[:4], colormap, model=model) png png png png Inference on Validation Images plot_predictions(val_images[:4], colormap, model=model) png png png png Building a near-duplicate image search utility using deep learning and 
locality-sensitive hashing. Introduction Fetching similar images in (near) real time is an important use case of information retrieval systems. Some popular products utilizing it include Pinterest, Google Image Search, etc. In this example, we will build a similar image search utility using Locality Sensitive Hashing (LSH) and random projection on top of the image representations computed by a pretrained image classifier. This kind of search engine is also known as a near-duplicate (or near-dup) image detector. We will also look into optimizing the inference performance of our search utility on GPU using TensorRT. There are other examples under keras.io/examples/vision that are worth checking out in this regard: Metric learning for image similarity search Image similarity estimation using a Siamese Network with a triplet loss Finally, this example uses the following resource as a reference and as such reuses some of its code: Locality Sensitive Hashing for Similar Item Search. Note that in order to optimize the performance of our parser, you should have a GPU runtime available. Imports import matplotlib.pyplot as plt import tensorflow as tf import numpy as np import time import tensorflow_datasets as tfds tfds.disable_progress_bar() Load the dataset and create a training set of 1,000 images To keep the run time of the example short, we will be using a subset of 1,000 images from the tf_flowers dataset (available through TensorFlow Datasets) to build our vocabulary. train_ds, validation_ds = tfds.load( \"tf_flowers\", split=[\"train[:85%]\", \"train[85%:]\"], as_supervised=True ) IMAGE_SIZE = 224 NUM_IMAGES = 1000 images = [] labels = [] for (image, label) in train_ds.take(NUM_IMAGES): image = tf.image.resize(image, (IMAGE_SIZE, IMAGE_SIZE)) images.append(image.numpy()) labels.append(label.numpy()) images = np.array(images) labels = np.array(labels) Load a pre-trained model In this section, we load an image classification model that was trained on the tf_flowers dataset. 85% of the total images were used to build the training set. For more details on the training, refer to this notebook. The underlying model is a BiT-ResNet (proposed in Big Transfer (BiT): General Visual Representation Learning). The BiT-ResNet family of models is known to provide excellent transfer performance across a wide variety of different downstream tasks. !wget -q https://git.io/JuMq0 -O flower_model_bit_0.96875.zip !unzip -qq flower_model_bit_0.96875.zip bit_model = tf.keras.models.load_model(\"flower_model_bit_0.96875\") bit_model.count_params() 23510597 Create an embedding model To retrieve similar images given a query image, we need to first generate vector representations of all the images involved. We do this via an embedding model that extracts output features from our pretrained classifier and normalizes the resulting feature vectors. 
embedding_model = tf.keras.Sequential( [ tf.keras.layers.Input((IMAGE_SIZE, IMAGE_SIZE, 3)), tf.keras.layers.Rescaling(scale=1.0 / 255), bit_model.layers[1], tf.keras.layers.Normalization(mean=0, variance=1), ], name=\"embedding_model\", ) embedding_model.summary() Model: \"embedding_model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= rescaling (Rescaling) (None, 224, 224, 3) 0 _________________________________________________________________ keras_layer (KerasLayer) (None, 2048) 23500352 _________________________________________________________________ normalization (Normalization (None, 2048) 0 ================================================================= Total params: 23,500,352 Trainable params: 23,500,352 Non-trainable params: 0 _________________________________________________________________ Take note of the normalization layer inside the model. It is used to project the representation vectors onto the unit sphere. Hashing utilities def hash_func(embedding, random_vectors): embedding = np.array(embedding) # Random projection. bools = np.dot(embedding, random_vectors) > 0 return [bool2int(bool_vec) for bool_vec in bools] def bool2int(x): y = 0 for i, j in enumerate(x): if j: y += 1 << i return y The shape of the vectors coming out of embedding_model is (2048,), and considering practical aspects (storage, retrieval performance, etc.) it is quite large. So, there arises a need to reduce the dimensionality of the embedding vectors without reducing their information content. This is where random projection comes into the picture. It is based on the principle that if the distance between a group of points on a given plane is approximately preserved, the dimensionality of that plane can further be reduced. Inside hash_func(), we first reduce the dimensionality of the embedding vectors. Then we compute the bitwise hash values of the images to determine their hash buckets. Images with the same hash value are likely to go into the same hash bucket. From a deployment perspective, bitwise hash values are cheaper to store and operate on. Query utilities The Table class is responsible for building a single hash table. Each entry in the hash table is a mapping between the reduced embedding of an image from our dataset and a unique identifier. Because our dimensionality reduction technique involves randomness, similar images may not be mapped to the same hash bucket every time the process is run. To reduce this effect, we will take results from multiple tables into consideration -- the number of tables and the reduction dimensionality are the key hyperparameters here. Crucially, you wouldn't reimplement locality-sensitive hashing yourself when working on real-world applications. Instead, you'd likely use one of the following popular libraries: ScaNN Annoy Vald class Table: def __init__(self, hash_size, dim): self.table = {} self.hash_size = hash_size self.random_vectors = np.random.randn(hash_size, dim).T def add(self, id, vectors, label): # Create a unique identifier. entry = {\"id_label\": str(id) + \"_\" + str(label)} # Compute the hash values. hashes = hash_func(vectors, self.random_vectors) # Add the hash values to the current table. for h in hashes: if h in self.table: self.table[h].append(entry) else: self.table[h] = [entry] def query(self, vectors): # Compute hash value for the query vector.
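# hash_func() reuses the random hyperplanes stored in self.random_vectors
# (the same ones used in add()), so a query is routed to the same buckets
# as the indexed images it lies close to.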
hashes = hash_func(vectors, self.random_vectors) results = [] # Loop over the query hashes and determine if they exist in # the current table. for h in hashes: if h in self.table: results.extend(self.table[h]) return results In the following LSH class we will pack the utilities to have multiple hash tables. class LSH: def __init__(self, hash_size, dim, num_tables): self.num_tables = num_tables self.tables = [] for i in range(self.num_tables): self.tables.append(Table(hash_size, dim)) def add(self, id, vectors, label): for table in self.tables: table.add(id, vectors, label) def query(self, vectors): results = [] for table in self.tables: results.extend(table.query(vectors)) return results Now we can encapsulate the logic for building and operating with the master LSH table (a collection of many tables) inside a class. It has two methods: train(): Responsible for building the final LSH table. query(): Computes the number of matches given a query image and also quantifies the similarity score. class BuildLSHTable: def __init__( self, prediction_model, concrete_function=False, hash_size=8, dim=2048, num_tables=10, ): self.hash_size = hash_size self.dim = dim self.num_tables = num_tables self.lsh = LSH(self.hash_size, self.dim, self.num_tables) self.prediction_model = prediction_model self.concrete_function = concrete_function def train(self, training_files): for id, training_file in enumerate(training_files): # Unpack the data. image, label = training_file if len(image.shape) < 4: image = image[None, ...] # Compute embeddings and update the LSH tables. # More on `self.concrete_function()` later. if self.concrete_function: features = self.prediction_model(tf.constant(image))[ \"normalization\" ].numpy() else: features = self.prediction_model.predict(image) self.lsh.add(id, features, label) def query(self, image, verbose=True): # Compute the embeddings of the query image and fetch the results. if len(image.shape) < 4: image = image[None, ...] if self.concrete_function: features = self.prediction_model(tf.constant(image))[ \"normalization\" ].numpy() else: features = self.prediction_model.predict(image) results = self.lsh.query(features) if verbose: print(\"Matches:\", len(results)) # Calculate Jaccard index to quantify the similarity. counts = {} for r in results: if r[\"id_label\"] in counts: counts[r[\"id_label\"]] += 1 else: counts[r[\"id_label\"]] = 1 for k in counts: counts[k] = float(counts[k]) / self.dim return counts Create LSH tables With our helper utilities and classes implemented, we can now build our LSH table. Since we will be benchmarking performance between optimized and unoptimized embedding models, we will also warm up our GPU to avoid any unfair comparison. # Utility to warm up the GPU. def warmup(): dummy_sample = tf.ones((1, IMAGE_SIZE, IMAGE_SIZE, 3)) for _ in range(100): _ = embedding_model.predict(dummy_sample) Now we can first do the GPU wam-up and proceed to build the master LSH table with embedding_model. warmup() training_files = zip(images, labels) lsh_builder = BuildLSHTable(embedding_model) lsh_builder.train(training_files) At the time of writing, the wall time was 54.1 seconds on a Tesla T4 GPU. This timing may vary based on the GPU you are using. Optimize the model with TensorRT For NVIDIA-based GPUs, the TensorRT framework can be used to dramatically enhance the inference latency by using various model optimization techniques like pruning, constant folding, layer fusion, and so on. 
Here we will use the tf.experimental.tensorrt module to optimize our embedding model. # First serialize the embedding model as a SavedModel. embedding_model.save(\"embedding_model\") # Initialize the conversion parameters. params = tf.experimental.tensorrt.ConversionParams( precision_mode=\"FP16\", maximum_cached_engines=16 ) # Run the conversion. converter = tf.experimental.tensorrt.Converter( input_saved_model_dir=\"embedding_model\", conversion_params=params ) converter.convert() converter.save(\"tensorrt_embedding_model\") WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model. INFO:tensorflow:Assets written to: embedding_model/assets INFO:tensorflow:Linked TensorRT version: (0, 0, 0) INFO:tensorflow:Loaded TensorRT version: (0, 0, 0) INFO:tensorflow:Assets written to: tensorrt_embedding_model/assets Notes on the parameters inside tf.experimental.tensorrt.ConversionParams(): precision_mode defines the numerical precision of the operations in the to-be-converted model. maximum_cached_engines specifies the maximum number of TRT engines that will be cached to handle dynamic operations (operations with unknown shapes). To learn more about the other options, refer to the official documentation. You can also explore the different quantization options provided by the tf.experimental.tensorrt module. # Load the converted model. root = tf.saved_model.load(\"tensorrt_embedding_model\") trt_model_function = root.signatures[\"serving_default\"] Build LSH tables with optimized model warmup() training_files = zip(images, labels) lsh_builder_trt = BuildLSHTable(trt_model_function, concrete_function=True) lsh_builder_trt.train(training_files) Notice the difference in the wall time, which is now 13.1 seconds. Earlier, with the unoptimized model, it was 54.1 seconds. We can take a closer look at one of the hash tables and get an idea of how they are represented. idx = 0 for hash, entry in lsh_builder_trt.lsh.tables[0].table.items(): if idx == 5: break if len(entry) < 5: print(hash, entry) idx += 1 145 [{'id_label': '3_4'}, {'id_label': '727_3'}] 5 [{'id_label': '12_4'}] 128 [{'id_label': '30_2'}, {'id_label': '480_2'}] 208 [{'id_label': '34_2'}, {'id_label': '132_2'}, {'id_label': '984_2'}] 188 [{'id_label': '42_0'}, {'id_label': '135_3'}, {'id_label': '436_3'}, {'id_label': '670_3'}] Visualize results on validation images In this section we will first write a couple of utility functions to visualize the similar-image retrieval process. Then we will benchmark the query performance of the models with and without optimization. First, we take 100 images from the validation set for testing purposes.
validation_images = [] validation_labels = [] for image, label in validation_ds.take(100): image = tf.image.resize(image, (224, 224)) validation_images.append(image.numpy()) validation_labels.append(label.numpy()) validation_images = np.array(validation_images) validation_labels = np.array(validation_labels) validation_images.shape, validation_labels.shape ((100, 224, 224, 3), (100,)) Now we write our visualization utilities. def plot_images(images, labels): plt.figure(figsize=(20, 10)) columns = 5 for (i, image) in enumerate(images): ax = plt.subplot(len(images) // columns + 1, columns, i + 1) if i == 0: ax.set_title(\"Query Image\n\" + \"Label: {}\".format(labels[i])) else: ax.set_title(\"Similar Image # \" + str(i) + \"\nLabel: {}\".format(labels[i])) plt.imshow(image.astype(\"int\")) plt.axis(\"off\") def visualize_lsh(lsh_class): idx = np.random.choice(len(validation_images)) image = validation_images[idx] label = validation_labels[idx] results = lsh_class.query(image) candidates = [] labels = [] overlaps = [] for idx, r in enumerate(sorted(results, key=results.get, reverse=True)): if idx == 4: break image_id, label = r.split(\"_\")[0], r.split(\"_\")[1] candidates.append(images[int(image_id)]) labels.append(label) overlaps.append(results[r]) candidates.insert(0, image) labels.insert(0, label) plot_images(candidates, labels) Non-TRT model for _ in range(5): visualize_lsh(lsh_builder) visualize_lsh(lsh_builder) Matches: 507 Matches: 554 Matches: 438 Matches: 370 Matches: 407 Matches: 306 png png png png png png TRT model for _ in range(5): visualize_lsh(lsh_builder_trt) Matches: 458 Matches: 181 Matches: 280 Matches: 280 Matches: 503 png png png png png As you may have noticed, there are a couple of incorrect results. This can be mitigated in a few ways: Better models for generating the initial embeddings, especially for noisy samples. We can use techniques like ArcFace, Supervised Contrastive Learning, etc. that implicitly encourage better learning of representations for retrieval purposes. The trade-off between the number of tables and the reduction dimensionality is crucial and helps set the right recall required for your application. Benchmarking query performance def benchmark(lsh_class): warmup() start_time = time.time() for _ in range(1000): image = np.ones((1, 224, 224, 3)).astype(\"float32\") _ = lsh_class.query(image, verbose=False) end_time = time.time() - start_time print(f\"Time taken: {end_time:.3f}\") benchmark(lsh_builder) benchmark(lsh_builder_trt) Time taken: 54.359 Time taken: 13.963 We can immediately notice a stark difference between the query performance of the two models. Final remarks In this example, we explored the TensorRT framework from NVIDIA for optimizing our model. It's best suited for GPU-based inference servers. There are other choices for such frameworks that cater to different hardware platforms: TensorFlow Lite for mobile and edge devices. ONNX for commodity CPU-based servers. Apache TVM, a compiler for machine learning models covering various platforms. Here are a few resources you might want to check out to learn more about applications based on vector similarity search in general: ANN Benchmarks Accelerating Large-Scale Inference with Anisotropic Vector Quantization (ScaNN) Spreading vectors for similarity search Building a real-time embeddings similarity matching system How to build and train a convolutional LSTM model for next-frame video prediction.
Introduction The Convolutional LSTM architectures bring together time series processing and computer vision by introducing a convolutional recurrent cell in a LSTM layer. In this example, we will explore the Convolutional LSTM model in an application to next-frame prediction, the process of predicting what video frames come next given a series of past frames. Setup import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import io import imageio from IPython.display import Image, display from ipywidgets import widgets, Layout, HBox Dataset Construction For this example, we will be using the Moving MNIST dataset. We will download the dataset and then construct and preprocess training and validation sets. For next-frame prediction, our model will be using a previous frame, which we'll call f_n, to predict a new frame, called f_(n + 1). To allow the model to create these predictions, we'll need to process the data such that we have \"shifted\" inputs and outputs, where the input data is frame x_n, being used to predict frame y_(n + 1). # Download and load the dataset. fpath = keras.utils.get_file( \"moving_mnist.npy\", \"http://www.cs.toronto.edu/~nitish/unsupervised_video/mnist_test_seq.npy\", ) dataset = np.load(fpath) # Swap the axes representing the number of frames and number of data samples. dataset = np.swapaxes(dataset, 0, 1) # We'll pick out 1000 of the 10000 total examples and use those. dataset = dataset[:1000, ...] # Add a channel dimension since the images are grayscale. dataset = np.expand_dims(dataset, axis=-1) # Split into train and validation sets using indexing to optimize memory. indexes = np.arange(dataset.shape[0]) np.random.shuffle(indexes) train_index = indexes[: int(0.9 * dataset.shape[0])] val_index = indexes[int(0.9 * dataset.shape[0]) :] train_dataset = dataset[train_index] val_dataset = dataset[val_index] # Normalize the data to the 0-1 range. train_dataset = train_dataset / 255 val_dataset = val_dataset / 255 # We'll define a helper function to shift the frames, where # `x` is frames 0 to n - 1, and `y` is frames 1 to n. def create_shifted_frames(data): x = data[:, 0 : data.shape[1] - 1, :, :] y = data[:, 1 : data.shape[1], :, :] return x, y # Apply the processing function to the datasets. x_train, y_train = create_shifted_frames(train_dataset) x_val, y_val = create_shifted_frames(val_dataset) # Inspect the dataset. print(\"Training Dataset Shapes: \" + str(x_train.shape) + \", \" + str(y_train.shape)) print(\"Validation Dataset Shapes: \" + str(x_val.shape) + \", \" + str(y_val.shape)) Training Dataset Shapes: (900, 19, 64, 64, 1), (900, 19, 64, 64, 1) Validation Dataset Shapes: (100, 19, 64, 64, 1), (100, 19, 64, 64, 1) Data Visualization Our data consists of sequences of frames, each of which are used to predict the upcoming frame. Let's take a look at some of these sequential frames. # Construct a figure on which we will visualize the images. fig, axes = plt.subplots(4, 5, figsize=(10, 8)) # Plot each of the sequential images for one random data example. data_choice = np.random.choice(range(len(train_dataset)), size=1)[0] for idx, ax in enumerate(axes.flat): ax.imshow(np.squeeze(train_dataset[data_choice][idx]), cmap=\"gray\") ax.set_title(f\"Frame {idx + 1}\") ax.axis(\"off\") # Print information and display the figure. print(f\"Displaying frames for example {data_choice}.\") plt.show() Displaying frames for example 130. 
png Model Construction To build a Convolutional LSTM model, we will use the ConvLSTM2D layer, which will accept inputs of shape (batch_size, num_frames, width, height, channels), and return a prediction movie of the same shape. # Construct the input layer with no definite frame size. inp = layers.Input(shape=(None, *x_train.shape[2:])) # We will construct 3 `ConvLSTM2D` layers with batch normalization, # followed by a `Conv3D` layer for the spatiotemporal outputs. x = layers.ConvLSTM2D( filters=64, kernel_size=(5, 5), padding=\"same\", return_sequences=True, activation=\"relu\", )(inp) x = layers.BatchNormalization()(x) x = layers.ConvLSTM2D( filters=64, kernel_size=(3, 3), padding=\"same\", return_sequences=True, activation=\"relu\", )(x) x = layers.BatchNormalization()(x) x = layers.ConvLSTM2D( filters=64, kernel_size=(1, 1), padding=\"same\", return_sequences=True, activation=\"relu\", )(x) x = layers.Conv3D( filters=1, kernel_size=(3, 3, 3), activation=\"sigmoid\", padding=\"same\" )(x) # Next, we will build the complete model and compile it. model = keras.models.Model(inp, x) model.compile( loss=keras.losses.binary_crossentropy, optimizer=keras.optimizers.Adam(), ) Model Training With our model and data constructed, we can now train the model. # Define some callbacks to improve training. early_stopping = keras.callbacks.EarlyStopping(monitor=\"val_loss\", patience=10) reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor=\"val_loss\", patience=5) # Define modifiable training hyperparameters. epochs = 20 batch_size = 5 # Fit the model to the training data. model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_val, y_val), callbacks=[early_stopping, reduce_lr], ) Frame Prediction Visualizations With our model now constructed and trained, we can generate some example frame predictions based on a new video. We'll pick a random example from the validation set and then choose the first ten frames from them. From there, we can allow the model to predict 10 new frames, which we can compare to the ground truth frame predictions. # Select a random example from the validation dataset. example = val_dataset[np.random.choice(range(len(val_dataset)), size=1)[0]] # Pick the first/last ten frames from the example. frames = example[:10, ...] original_frames = example[10:, ...] # Predict a new set of 10 frames. for _ in range(10): # Extract the model's prediction and post-process it. new_prediction = model.predict(np.expand_dims(frames, axis=0)) new_prediction = np.squeeze(new_prediction, axis=0) predicted_frame = np.expand_dims(new_prediction[-1, ...], axis=0) # Extend the set of prediction frames. frames = np.concatenate((frames, predicted_frame), axis=0) # Construct a figure for the original and new frames. fig, axes = plt.subplots(2, 10, figsize=(20, 4)) # Plot the original frames. for idx, ax in enumerate(axes[0]): ax.imshow(np.squeeze(original_frames[idx]), cmap=\"gray\") ax.set_title(f\"Frame {idx + 11}\") ax.axis(\"off\") # Plot the new frames. new_frames = frames[10:, ...] for idx, ax in enumerate(axes[1]): ax.imshow(np.squeeze(new_frames[idx]), cmap=\"gray\") ax.set_title(f\"Frame {idx + 11}\") ax.axis(\"off\") # Display the figure. plt.show() png Predicted Videos Finally, we'll pick a few examples from the validation set and construct some GIFs with them to see the model's predicted videos. # Select a few random examples from the dataset. 
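# Unlike the loop above, which fed predicted frames back into the model,
# the loop below conditions each prediction on the first 10 + i + 1
# ground-truth frames and keeps only the model's final output frame.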
examples = val_dataset[np.random.choice(range(len(val_dataset)), size=5)] # Iterate over the examples and predict the frames. predicted_videos = [] for example in examples: # Pick the first/last ten frames from the example. frames = example[:10, ...] original_frames = example[10:, ...] new_predictions = np.zeros(shape=(10, *frames[0].shape)) # Predict a new set of 10 frames. for i in range(10): # Extract the model's prediction and post-process it. frames = example[: 10 + i + 1, ...] new_prediction = model.predict(np.expand_dims(frames, axis=0)) new_prediction = np.squeeze(new_prediction, axis=0) predicted_frame = np.expand_dims(new_prediction[-1, ...], axis=0) # Extend the set of prediction frames. new_predictions[i] = predicted_frame # Create and save GIFs for each of the ground truth/prediction images. for frame_set in [original_frames, new_predictions]: # Construct a GIF from the selected video frames. current_frames = np.squeeze(frame_set) current_frames = current_frames[..., np.newaxis] * np.ones(3) current_frames = (current_frames * 255).astype(np.uint8) current_frames = list(current_frames) # Construct a GIF from the frames. with io.BytesIO() as gif: imageio.mimsave(gif, current_frames, \"GIF\", fps=5) predicted_videos.append(gif.getvalue()) # Display the videos. print(\" Truth\tPrediction\") for i in range(0, len(predicted_videos), 2): # Construct and display an `HBox` with the ground truth and prediction. box = HBox( [ widgets.Image(value=predicted_videos[i]), widgets.Image(value=predicted_videos[i + 1]), ] ) display(box) Truth Prediction Imgur Implementing RetinaNet: Focal Loss for Dense Object Detection. Introduction Object detection is a very important problem in computer vision. Here the model is tasked with localizing the objects present in an image, and at the same time, classifying them into different categories. Object detection models can be broadly classified into \"single-stage\" and \"two-stage\" detectors. Two-stage detectors are often more accurate, but at the cost of being slower. In this example, we will implement RetinaNet, a popular single-stage detector, which is accurate and runs fast. RetinaNet uses a feature pyramid network to efficiently detect objects at multiple scales and introduces a new loss, the Focal loss function, to alleviate the problem of the extreme foreground-background class imbalance (a short illustrative sketch of this loss is given below). References: RetinaNet Paper Feature Pyramid Network Paper import os import re import zipfile import numpy as np import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt import tensorflow_datasets as tfds Downloading the COCO2017 dataset Training on the entire COCO2017 dataset, which has around 118k images, takes a lot of time; hence we will be using a smaller subset of ~500 images for training in this example.
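Before downloading the data, here is a minimal sketch of the focal loss referenced in the introduction above. It only illustrates the formula from the RetinaNet paper, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t); the function name and default values are illustrative and independent of the code used elsewhere in this example.

import tensorflow as tf

def focal_loss_sketch(y_true, y_pred_prob, alpha=0.25, gamma=2.0):
    # p_t is the probability the model assigns to the ground-truth class.
    p_t = tf.where(tf.equal(y_true, 1.0), y_pred_prob, 1.0 - y_pred_prob)
    alpha_t = tf.where(tf.equal(y_true, 1.0), alpha, 1.0 - alpha)
    # The (1 - p_t) ** gamma factor shrinks the loss of easy, well-classified
    # examples, which counteracts the extreme foreground-background imbalance.
    return -alpha_t * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t)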
url = \"https://github.com/srihari-humbarwadi/datasets/releases/download/v0.1.0/data.zip\" filename = os.path.join(os.getcwd(), \"data.zip\") keras.utils.get_file(filename, url) with zipfile.ZipFile(\"data.zip\", \"r\") as z_fp: z_fp.extractall(\"./\") Downloading data from https://github.com/srihari-humbarwadi/datasets/releases/download/v0.1.0/data.zip 560529408/560525318 [==============================] - 304s 1us/step Implementing utility functions Bounding boxes can be represented in multiple ways, the most common formats are: Storing the coordinates of the corners [xmin, ymin, xmax, ymax] Storing the coordinates of the center and the box dimensions [x, y, width, height] Since we require both formats, we will be implementing functions for converting between the formats. def swap_xy(boxes): \"\"\"Swaps order the of x and y coordinates of the boxes. Arguments: boxes: A tensor with shape `(num_boxes, 4)` representing bounding boxes. Returns: swapped boxes with shape same as that of boxes. \"\"\" return tf.stack([boxes[:, 1], boxes[:, 0], boxes[:, 3], boxes[:, 2]], axis=-1) def convert_to_xywh(boxes): \"\"\"Changes the box format to center, width and height. Arguments: boxes: A tensor of rank 2 or higher with a shape of `(..., num_boxes, 4)` representing bounding boxes where each box is of the format `[xmin, ymin, xmax, ymax]`. Returns: converted boxes with shape same as that of boxes. \"\"\" return tf.concat( [(boxes[..., :2] + boxes[..., 2:]) / 2.0, boxes[..., 2:] - boxes[..., :2]], axis=-1, ) def convert_to_corners(boxes): \"\"\"Changes the box format to corner coordinates Arguments: boxes: A tensor of rank 2 or higher with a shape of `(..., num_boxes, 4)` representing bounding boxes where each box is of the format `[x, y, width, height]`. Returns: converted boxes with shape same as that of boxes. \"\"\" return tf.concat( [boxes[..., :2] - boxes[..., 2:] / 2.0, boxes[..., :2] + boxes[..., 2:] / 2.0], axis=-1, ) Computing pairwise Intersection Over Union (IOU) As we will see later in the example, we would be assigning ground truth boxes to anchor boxes based on the extent of overlapping. This will require us to calculate the Intersection Over Union (IOU) between all the anchor boxes and ground truth boxes pairs. def compute_iou(boxes1, boxes2): \"\"\"Computes pairwise IOU matrix for given two sets of boxes Arguments: boxes1: A tensor with shape `(N, 4)` representing bounding boxes where each box is of the format `[x, y, width, height]`. boxes2: A tensor with shape `(M, 4)` representing bounding boxes where each box is of the format `[x, y, width, height]`. Returns: pairwise IOU matrix with shape `(N, M)`, where the value at ith row jth column holds the IOU between ith box and jth box from boxes1 and boxes2 respectively. 
\"\"\" boxes1_corners = convert_to_corners(boxes1) boxes2_corners = convert_to_corners(boxes2) lu = tf.maximum(boxes1_corners[:, None, :2], boxes2_corners[:, :2]) rd = tf.minimum(boxes1_corners[:, None, 2:], boxes2_corners[:, 2:]) intersection = tf.maximum(0.0, rd - lu) intersection_area = intersection[:, :, 0] * intersection[:, :, 1] boxes1_area = boxes1[:, 2] * boxes1[:, 3] boxes2_area = boxes2[:, 2] * boxes2[:, 3] union_area = tf.maximum( boxes1_area[:, None] + boxes2_area - intersection_area, 1e-8 ) return tf.clip_by_value(intersection_area / union_area, 0.0, 1.0) def visualize_detections( image, boxes, classes, scores, figsize=(7, 7), linewidth=1, color=[0, 0, 1] ): \"\"\"Visualize Detections\"\"\" image = np.array(image, dtype=np.uint8) plt.figure(figsize=figsize) plt.axis(\"off\") plt.imshow(image) ax = plt.gca() for box, _cls, score in zip(boxes, classes, scores): text = \"{}: {:.2f}\".format(_cls, score) x1, y1, x2, y2 = box w, h = x2 - x1, y2 - y1 patch = plt.Rectangle( [x1, y1], w, h, fill=False, edgecolor=color, linewidth=linewidth ) ax.add_patch(patch) ax.text( x1, y1, text, bbox={\"facecolor\": color, \"alpha\": 0.4}, clip_box=ax.clipbox, clip_on=True, ) plt.show() return ax Implementing Anchor generator Anchor boxes are fixed sized boxes that the model uses to predict the bounding box for an object. It does this by regressing the offset between the location of the object's center and the center of an anchor box, and then uses the width and height of the anchor box to predict a relative scale of the object. In the case of RetinaNet, each location on a given feature map has nine anchor boxes (at three scales and three ratios). class AnchorBox: \"\"\"Generates anchor boxes. This class has operations to generate anchor boxes for feature maps at strides `[8, 16, 32, 64, 128]`. Where each anchor each box is of the format `[x, y, width, height]`. Attributes: aspect_ratios: A list of float values representing the aspect ratios of the anchor boxes at each location on the feature map scales: A list of float values representing the scale of the anchor boxes at each location on the feature map. num_anchors: The number of anchor boxes at each location on feature map areas: A list of float values representing the areas of the anchor boxes for each feature map in the feature pyramid. strides: A list of float value representing the strides for each feature map in the feature pyramid. \"\"\" def __init__(self): self.aspect_ratios = [0.5, 1.0, 2.0] self.scales = [2 ** x for x in [0, 1 / 3, 2 / 3]] self._num_anchors = len(self.aspect_ratios) * len(self.scales) self._strides = [2 ** i for i in range(3, 8)] self._areas = [x ** 2 for x in [32.0, 64.0, 128.0, 256.0, 512.0]] self._anchor_dims = self._compute_dims() def _compute_dims(self): \"\"\"Computes anchor box dimensions for all ratios and scales at all levels of the feature pyramid. \"\"\" anchor_dims_all = [] for area in self._areas: anchor_dims = [] for ratio in self.aspect_ratios: anchor_height = tf.math.sqrt(area / ratio) anchor_width = area / anchor_height dims = tf.reshape( tf.stack([anchor_width, anchor_height], axis=-1), [1, 1, 2] ) for scale in self.scales: anchor_dims.append(scale * dims) anchor_dims_all.append(tf.stack(anchor_dims, axis=-2)) return anchor_dims_all def _get_anchors(self, feature_height, feature_width, level): \"\"\"Generates anchor boxes for a given feature map size and level Arguments: feature_height: An integer representing the height of the feature map. 
feature_width: An integer representing the width of the feature map. level: An integer representing the level of the feature map in the feature pyramid. Returns: anchor boxes with the shape `(feature_height * feature_width * num_anchors, 4)` \"\"\" rx = tf.range(feature_width, dtype=tf.float32) + 0.5 ry = tf.range(feature_height, dtype=tf.float32) + 0.5 centers = tf.stack(tf.meshgrid(rx, ry), axis=-1) * self._strides[level - 3] centers = tf.expand_dims(centers, axis=-2) centers = tf.tile(centers, [1, 1, self._num_anchors, 1]) dims = tf.tile( self._anchor_dims[level - 3], [feature_height, feature_width, 1, 1] ) anchors = tf.concat([centers, dims], axis=-1) return tf.reshape( anchors, [feature_height * feature_width * self._num_anchors, 4] ) def get_anchors(self, image_height, image_width): \"\"\"Generates anchor boxes for all the feature maps of the feature pyramid. Arguments: image_height: Height of the input image. image_width: Width of the input image. Returns: anchor boxes for all the feature maps, stacked as a single tensor with shape `(total_anchors, 4)` \"\"\" anchors = [ self._get_anchors( tf.math.ceil(image_height / 2 ** i), tf.math.ceil(image_width / 2 ** i), i, ) for i in range(3, 8) ] return tf.concat(anchors, axis=0) Preprocessing data Preprocessing the images involves two steps: Resizing the image: Images are resized such that the shortest size is equal to 800 px, after resizing if the longest side of the image exceeds 1333 px, the image is resized such that the longest size is now capped at 1333 px. Applying augmentation: Random scale jittering and random horizontal flipping are the only augmentations applied to the images. Along with the images, bounding boxes are rescaled and flipped if required. def random_flip_horizontal(image, boxes): \"\"\"Flips image and boxes horizontally with 50% chance Arguments: image: A 3-D tensor of shape `(height, width, channels)` representing an image. boxes: A tensor with shape `(num_boxes, 4)` representing bounding boxes, having normalized coordinates. Returns: Randomly flipped image and boxes \"\"\" if tf.random.uniform(()) > 0.5: image = tf.image.flip_left_right(image) boxes = tf.stack( [1 - boxes[:, 2], boxes[:, 1], 1 - boxes[:, 0], boxes[:, 3]], axis=-1 ) return image, boxes def resize_and_pad_image( image, min_side=800.0, max_side=1333.0, jitter=[640, 1024], stride=128.0 ): \"\"\"Resizes and pads image while preserving aspect ratio. 1. Resizes images so that the shorter side is equal to `min_side` 2. If the longer side is greater than `max_side`, then resize the image with longer side equal to `max_side` 3. Pad with zeros on right and bottom to make the image shape divisible by `stride` Arguments: image: A 3-D tensor of shape `(height, width, channels)` representing an image. min_side: The shorter side of the image is resized to this value, if `jitter` is set to None. max_side: If the longer side of the image exceeds this value after resizing, the image is resized such that the longer side now equals to this value. jitter: A list of floats containing minimum and maximum size for scale jittering. If available, the shorter side of the image will be resized to a random value in this range. stride: The stride of the smallest feature map in the feature pyramid. Can be calculated using `image_size / feature_map_size`. Returns: image: Resized and padded image. image_shape: Shape of the image before padding. 
ratio: The scaling factor used to resize the image \"\"\" image_shape = tf.cast(tf.shape(image)[:2], dtype=tf.float32) if jitter is not None: min_side = tf.random.uniform((), jitter[0], jitter[1], dtype=tf.float32) ratio = min_side / tf.reduce_min(image_shape) if ratio * tf.reduce_max(image_shape) > max_side: ratio = max_side / tf.reduce_max(image_shape) image_shape = ratio * image_shape image = tf.image.resize(image, tf.cast(image_shape, dtype=tf.int32)) padded_image_shape = tf.cast( tf.math.ceil(image_shape / stride) * stride, dtype=tf.int32 ) image = tf.image.pad_to_bounding_box( image, 0, 0, padded_image_shape[0], padded_image_shape[1] ) return image, image_shape, ratio def preprocess_data(sample): \"\"\"Applies preprocessing step to a single sample Arguments: sample: A dict representing a single training sample. Returns: image: Resized and padded image with random horizontal flipping applied. bbox: Bounding boxes with the shape `(num_objects, 4)` where each box is of the format `[x, y, width, height]`. class_id: An tensor representing the class id of the objects, having shape `(num_objects,)`. \"\"\" image = sample[\"image\"] bbox = swap_xy(sample[\"objects\"][\"bbox\"]) class_id = tf.cast(sample[\"objects\"][\"label\"], dtype=tf.int32) image, bbox = random_flip_horizontal(image, bbox) image, image_shape, _ = resize_and_pad_image(image) bbox = tf.stack( [ bbox[:, 0] * image_shape[1], bbox[:, 1] * image_shape[0], bbox[:, 2] * image_shape[1], bbox[:, 3] * image_shape[0], ], axis=-1, ) bbox = convert_to_xywh(bbox) return image, bbox, class_id Encoding labels The raw labels, consisting of bounding boxes and class ids need to be transformed into targets for training. This transformation consists of the following steps: Generating anchor boxes for the given image dimensions Assigning ground truth boxes to the anchor boxes The anchor boxes that are not assigned any objects, are either assigned the background class or ignored depending on the IOU Generating the classification and regression targets using anchor boxes class LabelEncoder: \"\"\"Transforms the raw labels into targets for training. This class has operations to generate targets for a batch of samples which is made up of the input images, bounding boxes for the objects present and their class ids. Attributes: anchor_box: Anchor box generator to encode the bounding boxes. box_variance: The scaling factors used to scale the bounding box targets. \"\"\" def __init__(self): self._anchor_box = AnchorBox() self._box_variance = tf.convert_to_tensor( [0.1, 0.1, 0.2, 0.2], dtype=tf.float32 ) def _match_anchor_boxes( self, anchor_boxes, gt_boxes, match_iou=0.5, ignore_iou=0.4 ): \"\"\"Matches ground truth boxes to anchor boxes based on IOU. 1. Calculates the pairwise IOU for the M `anchor_boxes` and N `gt_boxes` to get a `(M, N)` shaped matrix. 2. The ground truth box with the maximum IOU in each row is assigned to the anchor box provided the IOU is greater than `match_iou`. 3. If the maximum IOU in a row is less than `ignore_iou`, the anchor box is assigned with the background class. 4. The remaining anchor boxes that do not have any class assigned are ignored during training. Arguments: anchor_boxes: A float tensor with the shape `(total_anchors, 4)` representing all the anchor boxes for a given input image shape, where each anchor box is of the format `[x, y, width, height]`. gt_boxes: A float tensor with shape `(num_objects, 4)` representing the ground truth boxes, where each box is of the format `[x, y, width, height]`. 
match_iou: A float value representing the minimum IOU threshold for determining if a ground truth box can be assigned to an anchor box. ignore_iou: A float value representing the IOU threshold under which an anchor box is assigned to the background class. Returns: matched_gt_idx: Index of the matched object positive_mask: A mask for anchor boxes that have been assigned ground truth boxes. ignore_mask: A mask for anchor boxes that need to by ignored during training \"\"\" iou_matrix = compute_iou(anchor_boxes, gt_boxes) max_iou = tf.reduce_max(iou_matrix, axis=1) matched_gt_idx = tf.argmax(iou_matrix, axis=1) positive_mask = tf.greater_equal(max_iou, match_iou) negative_mask = tf.less(max_iou, ignore_iou) ignore_mask = tf.logical_not(tf.logical_or(positive_mask, negative_mask)) return ( matched_gt_idx, tf.cast(positive_mask, dtype=tf.float32), tf.cast(ignore_mask, dtype=tf.float32), ) def _compute_box_target(self, anchor_boxes, matched_gt_boxes): \"\"\"Transforms the ground truth boxes into targets for training\"\"\" box_target = tf.concat( [ (matched_gt_boxes[:, :2] - anchor_boxes[:, :2]) / anchor_boxes[:, 2:], tf.math.log(matched_gt_boxes[:, 2:] / anchor_boxes[:, 2:]), ], axis=-1, ) box_target = box_target / self._box_variance return box_target def _encode_sample(self, image_shape, gt_boxes, cls_ids): \"\"\"Creates box and classification targets for a single sample\"\"\" anchor_boxes = self._anchor_box.get_anchors(image_shape[1], image_shape[2]) cls_ids = tf.cast(cls_ids, dtype=tf.float32) matched_gt_idx, positive_mask, ignore_mask = self._match_anchor_boxes( anchor_boxes, gt_boxes ) matched_gt_boxes = tf.gather(gt_boxes, matched_gt_idx) box_target = self._compute_box_target(anchor_boxes, matched_gt_boxes) matched_gt_cls_ids = tf.gather(cls_ids, matched_gt_idx) cls_target = tf.where( tf.not_equal(positive_mask, 1.0), -1.0, matched_gt_cls_ids ) cls_target = tf.where(tf.equal(ignore_mask, 1.0), -2.0, cls_target) cls_target = tf.expand_dims(cls_target, axis=-1) label = tf.concat([box_target, cls_target], axis=-1) return label def encode_batch(self, batch_images, gt_boxes, cls_ids): \"\"\"Creates box and classification targets for a batch\"\"\" images_shape = tf.shape(batch_images) batch_size = images_shape[0] labels = tf.TensorArray(dtype=tf.float32, size=batch_size, dynamic_size=True) for i in range(batch_size): label = self._encode_sample(images_shape, gt_boxes[i], cls_ids[i]) labels = labels.write(i, label) batch_images = tf.keras.applications.resnet.preprocess_input(batch_images) return batch_images, labels.stack() Building the ResNet50 backbone RetinaNet uses a ResNet based backbone, using which a feature pyramid network is constructed. In the example we use ResNet50 as the backbone, and return the feature maps at strides 8, 16 and 32. def get_backbone(): \"\"\"Builds ResNet50 with pre-trained imagenet weights\"\"\" backbone = keras.applications.ResNet50( include_top=False, input_shape=[None, None, 3] ) c3_output, c4_output, c5_output = [ backbone.get_layer(layer_name).output for layer_name in [\"conv3_block4_out\", \"conv4_block6_out\", \"conv5_block3_out\"] ] return keras.Model( inputs=[backbone.inputs], outputs=[c3_output, c4_output, c5_output] ) Building Feature Pyramid Network as a custom layer class FeaturePyramid(keras.layers.Layer): \"\"\"Builds the Feature Pyramid with the feature maps from the backbone. Attributes: num_classes: Number of classes in the dataset. backbone: The backbone to build the feature pyramid from. Currently supports ResNet50 only. 
\"\"\" def __init__(self, backbone=None, **kwargs): super(FeaturePyramid, self).__init__(name=\"FeaturePyramid\", **kwargs) self.backbone = backbone if backbone else get_backbone() self.conv_c3_1x1 = keras.layers.Conv2D(256, 1, 1, \"same\") self.conv_c4_1x1 = keras.layers.Conv2D(256, 1, 1, \"same\") self.conv_c5_1x1 = keras.layers.Conv2D(256, 1, 1, \"same\") self.conv_c3_3x3 = keras.layers.Conv2D(256, 3, 1, \"same\") self.conv_c4_3x3 = keras.layers.Conv2D(256, 3, 1, \"same\") self.conv_c5_3x3 = keras.layers.Conv2D(256, 3, 1, \"same\") self.conv_c6_3x3 = keras.layers.Conv2D(256, 3, 2, \"same\") self.conv_c7_3x3 = keras.layers.Conv2D(256, 3, 2, \"same\") self.upsample_2x = keras.layers.UpSampling2D(2) def call(self, images, training=False): c3_output, c4_output, c5_output = self.backbone(images, training=training) p3_output = self.conv_c3_1x1(c3_output) p4_output = self.conv_c4_1x1(c4_output) p5_output = self.conv_c5_1x1(c5_output) p4_output = p4_output + self.upsample_2x(p5_output) p3_output = p3_output + self.upsample_2x(p4_output) p3_output = self.conv_c3_3x3(p3_output) p4_output = self.conv_c4_3x3(p4_output) p5_output = self.conv_c5_3x3(p5_output) p6_output = self.conv_c6_3x3(c5_output) p7_output = self.conv_c7_3x3(tf.nn.relu(p6_output)) return p3_output, p4_output, p5_output, p6_output, p7_output Building the classification and box regression heads. The RetinaNet model has separate heads for bounding box regression and for predicting class probabilities for the objects. These heads are shared between all the feature maps of the feature pyramid. def build_head(output_filters, bias_init): \"\"\"Builds the class/box predictions head. Arguments: output_filters: Number of convolution filters in the final layer. bias_init: Bias Initializer for the final convolution layer. Returns: A keras sequential model representing either the classification or the box regression head depending on `output_filters`. \"\"\" head = keras.Sequential([keras.Input(shape=[None, None, 256])]) kernel_init = tf.initializers.RandomNormal(0.0, 0.01) for _ in range(4): head.add( keras.layers.Conv2D(256, 3, padding=\"same\", kernel_initializer=kernel_init) ) head.add(keras.layers.ReLU()) head.add( keras.layers.Conv2D( output_filters, 3, 1, padding=\"same\", kernel_initializer=kernel_init, bias_initializer=bias_init, ) ) return head Building RetinaNet using a subclassed model class RetinaNet(keras.Model): \"\"\"A subclassed Keras model implementing the RetinaNet architecture. Attributes: num_classes: Number of classes in the dataset. backbone: The backbone to build the feature pyramid from. Currently supports ResNet50 only. 
\"\"\" def __init__(self, num_classes, backbone=None, **kwargs): super(RetinaNet, self).__init__(name=\"RetinaNet\", **kwargs) self.fpn = FeaturePyramid(backbone) self.num_classes = num_classes prior_probability = tf.constant_initializer(-np.log((1 - 0.01) / 0.01)) self.cls_head = build_head(9 * num_classes, prior_probability) self.box_head = build_head(9 * 4, \"zeros\") def call(self, image, training=False): features = self.fpn(image, training=training) N = tf.shape(image)[0] cls_outputs = [] box_outputs = [] for feature in features: box_outputs.append(tf.reshape(self.box_head(feature), [N, -1, 4])) cls_outputs.append( tf.reshape(self.cls_head(feature), [N, -1, self.num_classes]) ) cls_outputs = tf.concat(cls_outputs, axis=1) box_outputs = tf.concat(box_outputs, axis=1) return tf.concat([box_outputs, cls_outputs], axis=-1) Implementing a custom layer to decode predictions class DecodePredictions(tf.keras.layers.Layer): \"\"\"A Keras layer that decodes predictions of the RetinaNet model. Attributes: num_classes: Number of classes in the dataset confidence_threshold: Minimum class probability, below which detections are pruned. nms_iou_threshold: IOU threshold for the NMS operation max_detections_per_class: Maximum number of detections to retain per class. max_detections: Maximum number of detections to retain across all classes. box_variance: The scaling factors used to scale the bounding box predictions. \"\"\" def __init__( self, num_classes=80, confidence_threshold=0.05, nms_iou_threshold=0.5, max_detections_per_class=100, max_detections=100, box_variance=[0.1, 0.1, 0.2, 0.2], **kwargs ): super(DecodePredictions, self).__init__(**kwargs) self.num_classes = num_classes self.confidence_threshold = confidence_threshold self.nms_iou_threshold = nms_iou_threshold self.max_detections_per_class = max_detections_per_class self.max_detections = max_detections self._anchor_box = AnchorBox() self._box_variance = tf.convert_to_tensor( [0.1, 0.1, 0.2, 0.2], dtype=tf.float32 ) def _decode_box_predictions(self, anchor_boxes, box_predictions): boxes = box_predictions * self._box_variance boxes = tf.concat( [ boxes[:, :, :2] * anchor_boxes[:, :, 2:] + anchor_boxes[:, :, :2], tf.math.exp(boxes[:, :, 2:]) * anchor_boxes[:, :, 2:], ], axis=-1, ) boxes_transformed = convert_to_corners(boxes) return boxes_transformed def call(self, images, predictions): image_shape = tf.cast(tf.shape(images), dtype=tf.float32) anchor_boxes = self._anchor_box.get_anchors(image_shape[1], image_shape[2]) box_predictions = predictions[:, :, :4] cls_predictions = tf.nn.sigmoid(predictions[:, :, 4:]) boxes = self._decode_box_predictions(anchor_boxes[None, ...], box_predictions) return tf.image.combined_non_max_suppression( tf.expand_dims(boxes, axis=2), cls_predictions, self.max_detections_per_class, self.max_detections, self.nms_iou_threshold, self.confidence_threshold, clip_boxes=False, ) Implementing Smooth L1 loss and Focal Loss as keras custom losses class RetinaNetBoxLoss(tf.losses.Loss): \"\"\"Implements Smooth L1 loss\"\"\" def __init__(self, delta): super(RetinaNetBoxLoss, self).__init__( reduction=\"none\", name=\"RetinaNetBoxLoss\" ) self._delta = delta def call(self, y_true, y_pred): difference = y_true - y_pred absolute_difference = tf.abs(difference) squared_difference = difference ** 2 loss = tf.where( tf.less(absolute_difference, self._delta), 0.5 * squared_difference, absolute_difference - 0.5, ) return tf.reduce_sum(loss, axis=-1) class RetinaNetClassificationLoss(tf.losses.Loss): \"\"\"Implements Focal 
loss\"\"\" def __init__(self, alpha, gamma): super(RetinaNetClassificationLoss, self).__init__( reduction=\"none\", name=\"RetinaNetClassificationLoss\" ) self._alpha = alpha self._gamma = gamma def call(self, y_true, y_pred): cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits( labels=y_true, logits=y_pred ) probs = tf.nn.sigmoid(y_pred) alpha = tf.where(tf.equal(y_true, 1.0), self._alpha, (1.0 - self._alpha)) pt = tf.where(tf.equal(y_true, 1.0), probs, 1 - probs) loss = alpha * tf.pow(1.0 - pt, self._gamma) * cross_entropy return tf.reduce_sum(loss, axis=-1) class RetinaNetLoss(tf.losses.Loss): \"\"\"Wrapper to combine both the losses\"\"\" def __init__(self, num_classes=80, alpha=0.25, gamma=2.0, delta=1.0): super(RetinaNetLoss, self).__init__(reduction=\"auto\", name=\"RetinaNetLoss\") self._clf_loss = RetinaNetClassificationLoss(alpha, gamma) self._box_loss = RetinaNetBoxLoss(delta) self._num_classes = num_classes def call(self, y_true, y_pred): y_pred = tf.cast(y_pred, dtype=tf.float32) box_labels = y_true[:, :, :4] box_predictions = y_pred[:, :, :4] cls_labels = tf.one_hot( tf.cast(y_true[:, :, 4], dtype=tf.int32), depth=self._num_classes, dtype=tf.float32, ) cls_predictions = y_pred[:, :, 4:] positive_mask = tf.cast(tf.greater(y_true[:, :, 4], -1.0), dtype=tf.float32) ignore_mask = tf.cast(tf.equal(y_true[:, :, 4], -2.0), dtype=tf.float32) clf_loss = self._clf_loss(cls_labels, cls_predictions) box_loss = self._box_loss(box_labels, box_predictions) clf_loss = tf.where(tf.equal(ignore_mask, 1.0), 0.0, clf_loss) box_loss = tf.where(tf.equal(positive_mask, 1.0), box_loss, 0.0) normalizer = tf.reduce_sum(positive_mask, axis=-1) clf_loss = tf.math.divide_no_nan(tf.reduce_sum(clf_loss, axis=-1), normalizer) box_loss = tf.math.divide_no_nan(tf.reduce_sum(box_loss, axis=-1), normalizer) loss = clf_loss + box_loss return loss Setting up training parameters model_dir = \"retinanet/\" label_encoder = LabelEncoder() num_classes = 80 batch_size = 2 learning_rates = [2.5e-06, 0.000625, 0.00125, 0.0025, 0.00025, 2.5e-05] learning_rate_boundaries = [125, 250, 500, 240000, 360000] learning_rate_fn = tf.optimizers.schedules.PiecewiseConstantDecay( boundaries=learning_rate_boundaries, values=learning_rates ) Initializing and compiling model resnet50_backbone = get_backbone() loss_fn = RetinaNetLoss(num_classes) model = RetinaNet(num_classes, resnet50_backbone) optimizer = tf.optimizers.SGD(learning_rate=learning_rate_fn, momentum=0.9) model.compile(loss=loss_fn, optimizer=optimizer) Setting up callbacks callbacks_list = [ tf.keras.callbacks.ModelCheckpoint( filepath=os.path.join(model_dir, \"weights\" + \"_epoch_{epoch}\"), monitor=\"loss\", save_best_only=False, save_weights_only=True, verbose=1, ) ] Load the COCO2017 dataset using TensorFlow Datasets # set `data_dir=None` to load the complete dataset (train_dataset, val_dataset), dataset_info = tfds.load( \"coco/2017\", split=[\"train\", \"validation\"], with_info=True, data_dir=\"data\" ) Setting up a tf.data pipeline To ensure that the model is fed with data efficiently we will be using tf.data API to create our input pipeline. The input pipeline consists for the following major processing steps: Apply the preprocessing function to the samples Create batches with fixed batch size. 
Since images in the batch can have different dimensions, and can also have different number of objects, we use padded_batch to the add the necessary padding to create rectangular tensors Create targets for each sample in the batch using LabelEncoder autotune = tf.data.AUTOTUNE train_dataset = train_dataset.map(preprocess_data, num_parallel_calls=autotune) train_dataset = train_dataset.shuffle(8 * batch_size) train_dataset = train_dataset.padded_batch( batch_size=batch_size, padding_values=(0.0, 1e-8, -1), drop_remainder=True ) train_dataset = train_dataset.map( label_encoder.encode_batch, num_parallel_calls=autotune ) train_dataset = train_dataset.apply(tf.data.experimental.ignore_errors()) train_dataset = train_dataset.prefetch(autotune) val_dataset = val_dataset.map(preprocess_data, num_parallel_calls=autotune) val_dataset = val_dataset.padded_batch( batch_size=1, padding_values=(0.0, 1e-8, -1), drop_remainder=True ) val_dataset = val_dataset.map(label_encoder.encode_batch, num_parallel_calls=autotune) val_dataset = val_dataset.apply(tf.data.experimental.ignore_errors()) val_dataset = val_dataset.prefetch(autotune) Training the model # Uncomment the following lines, when training on full dataset # train_steps_per_epoch = dataset_info.splits[\"train\"].num_examples // batch_size # val_steps_per_epoch = \ # dataset_info.splits[\"validation\"].num_examples // batch_size # train_steps = 4 * 100000 # epochs = train_steps // train_steps_per_epoch epochs = 1 # Running 100 training and 50 validation steps, # remove `.take` when training on the full dataset model.fit( train_dataset.take(100), validation_data=val_dataset.take(50), epochs=epochs, callbacks=callbacks_list, verbose=1, ) 100/100 [==============================] - ETA: 0s - loss: 4.0953 Epoch 00001: saving model to retinanet/weights_epoch_1 100/100 [==============================] - 68s 679ms/step - loss: 4.0953 - val_loss: 4.0821 Loading weights # Change this to `model_dir` when not using the downloaded weights weights_dir = \"data\" latest_checkpoint = tf.train.latest_checkpoint(weights_dir) model.load_weights(latest_checkpoint) Building inference model image = tf.keras.Input(shape=[None, None, 3], name=\"image\") predictions = model(image, training=False) detections = DecodePredictions(confidence_threshold=0.5)(image, predictions) inference_model = tf.keras.Model(inputs=image, outputs=detections) Generating detections def prepare_image(image): image, _, ratio = resize_and_pad_image(image, jitter=None) image = tf.keras.applications.resnet.preprocess_input(image) return tf.expand_dims(image, axis=0), ratio val_dataset = tfds.load(\"coco/2017\", split=\"validation\", data_dir=\"data\") int2str = dataset_info.features[\"objects\"][\"label\"].int2str for sample in val_dataset.take(2): image = tf.cast(sample[\"image\"], dtype=tf.float32) input_image, ratio = prepare_image(image) detections = inference_model.predict(input_image) num_detections = detections.valid_detections[0] class_names = [ int2str(int(x)) for x in detections.nmsed_classes[0][:num_detections] ] visualize_detections( image, detections.nmsed_boxes[0][:num_detections] / ratio, class_names, detections.nmsed_scores[0][:num_detections], ) png png How to implement an OCR model using CNNs, RNNs and CTC loss. Introduction This example demonstrates a simple OCR model built with the Functional API. Apart from combining CNN and RNN, it also illustrates how you can instantiate a new layer and use it as an \"Endpoint layer\" for implementing CTC loss. 
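To make the "endpoint layer" idea concrete before going further: such a layer receives both the targets and the predictions, computes a loss, registers it with self.add_loss(), and simply returns the predictions. The sketch below is only a generic illustration of the pattern under assumed names (it uses a plain mean squared error and made-up layer/input names, not this example's CTC loss; the actual CTCLayer is defined further down in the Model section).

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


class MSEEndpoint(layers.Layer):
    # Generic endpoint layer: computes a loss inside the model graph and
    # passes the predictions through unchanged.
    def call(self, y_true, y_pred):
        # Register the loss on the layer; Keras adds it to the total loss.
        self.add_loss(tf.reduce_mean(tf.square(y_true - y_pred)))
        return y_pred


# Both the features and the targets are model inputs, so no `loss=`
# argument is needed in `compile()`.
features = keras.Input(shape=(4,), name="features")
targets = keras.Input(shape=(1,), name="targets")
preds = layers.Dense(1)(features)
preds = MSEEndpoint()(targets, preds)

model = keras.Model(inputs=[features, targets], outputs=preds)
model.compile(optimizer="adam")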
For a detailed guide to layer subclassing, please check out this page in the developer guides. Setup import os import numpy as np import matplotlib.pyplot as plt from pathlib import Path from collections import Counter import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Load the data: Captcha Images Let's download the data. !curl -LO https://github.com/AakashKumarNain/CaptchaCracker/raw/master/captcha_images_v2.zip !unzip -qq captcha_images_v2.zip % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 159 100 159 0 0 164 0 --:--:-- --:--:-- --:--:-- 164 100 8863k 100 8863k 0 0 4882k 0 0:00:01 0:00:01 --:--:-- 33.0M The dataset contains 1040 captcha files as png images. The label for each sample is a string, the name of the file (minus the file extension). We will map each character in the string to an integer for training the model. Similary, we will need to map the predictions of the model back to strings. For this purpose we will maintain two dictionaries, mapping characters to integers, and integers to characters, respectively. # Path to the data directory data_dir = Path(\"./captcha_images_v2/\") # Get list of all the images images = sorted(list(map(str, list(data_dir.glob(\"*.png\"))))) labels = [img.split(os.path.sep)[-1].split(\".png\")[0] for img in images] characters = set(char for label in labels for char in label) print(\"Number of images found: \", len(images)) print(\"Number of labels found: \", len(labels)) print(\"Number of unique characters: \", len(characters)) print(\"Characters present: \", characters) # Batch size for training and validation batch_size = 16 # Desired image dimensions img_width = 200 img_height = 50 # Factor by which the image is going to be downsampled # by the convolutional blocks. We will be using two # convolution blocks and each block will have # a pooling layer which downsample the features by a factor of 2. # Hence total downsampling factor would be 4. downsample_factor = 4 # Maximum length of any captcha in the dataset max_length = max([len(label) for label in labels]) Number of images found: 1040 Number of labels found: 1040 Number of unique characters: 19 Characters present: {'d', 'w', 'y', '4', 'f', '6', 'g', 'e', '3', '5', 'p', 'x', '2', 'c', '7', 'n', 'b', '8', 'm'} Preprocessing # Mapping characters to integers char_to_num = layers.StringLookup( vocabulary=list(characters), mask_token=None ) # Mapping integers back to original characters num_to_char = layers.StringLookup( vocabulary=char_to_num.get_vocabulary(), mask_token=None, invert=True ) def split_data(images, labels, train_size=0.9, shuffle=True): # 1. Get the total size of the dataset size = len(images) # 2. Make an indices array and shuffle it, if required indices = np.arange(size) if shuffle: np.random.shuffle(indices) # 3. Get the size of training samples train_samples = int(size * train_size) # 4. Split data into training and validation sets x_train, y_train = images[indices[:train_samples]], labels[indices[:train_samples]] x_valid, y_valid = images[indices[train_samples:]], labels[indices[train_samples:]] return x_train, x_valid, y_train, y_valid # Splitting data into training and validation sets x_train, x_valid, y_train, y_valid = split_data(np.array(images), np.array(labels)) def encode_single_sample(img_path, label): # 1. Read image img = tf.io.read_file(img_path) # 2. Decode and convert to grayscale img = tf.io.decode_png(img, channels=1) # 3. 
Convert to float32 in [0, 1] range img = tf.image.convert_image_dtype(img, tf.float32) # 4. Resize to the desired size img = tf.image.resize(img, [img_height, img_width]) # 5. Transpose the image because we want the time # dimension to correspond to the width of the image. img = tf.transpose(img, perm=[1, 0, 2]) # 6. Map the characters in label to numbers label = char_to_num(tf.strings.unicode_split(label, input_encoding=\"UTF-8\")) # 7. Return a dict as our model is expecting two inputs return {\"image\": img, \"label\": label} Create Dataset objects train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = ( train_dataset.map( encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE ) .batch(batch_size) .prefetch(buffer_size=tf.data.AUTOTUNE) ) validation_dataset = tf.data.Dataset.from_tensor_slices((x_valid, y_valid)) validation_dataset = ( validation_dataset.map( encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE ) .batch(batch_size) .prefetch(buffer_size=tf.data.AUTOTUNE) ) Visualize the data _, ax = plt.subplots(4, 4, figsize=(10, 5)) for batch in train_dataset.take(1): images = batch[\"image\"] labels = batch[\"label\"] for i in range(16): img = (images[i] * 255).numpy().astype(\"uint8\") label = tf.strings.reduce_join(num_to_char(labels[i])).numpy().decode(\"utf-8\") ax[i // 4, i % 4].imshow(img[:, :, 0].T, cmap=\"gray\") ax[i // 4, i % 4].set_title(label) ax[i // 4, i % 4].axis(\"off\") plt.show() png Model class CTCLayer(layers.Layer): def __init__(self, name=None): super().__init__(name=name) self.loss_fn = keras.backend.ctc_batch_cost def call(self, y_true, y_pred): # Compute the training-time loss value and add it # to the layer using `self.add_loss()`. batch_len = tf.cast(tf.shape(y_true)[0], dtype=\"int64\") input_length = tf.cast(tf.shape(y_pred)[1], dtype=\"int64\") label_length = tf.cast(tf.shape(y_true)[1], dtype=\"int64\") input_length = input_length * tf.ones(shape=(batch_len, 1), dtype=\"int64\") label_length = label_length * tf.ones(shape=(batch_len, 1), dtype=\"int64\") loss = self.loss_fn(y_true, y_pred, input_length, label_length) self.add_loss(loss) # At test time, just return the computed predictions return y_pred def build_model(): # Inputs to the model input_img = layers.Input( shape=(img_width, img_height, 1), name=\"image\", dtype=\"float32\" ) labels = layers.Input(name=\"label\", shape=(None,), dtype=\"float32\") # First conv block x = layers.Conv2D( 32, (3, 3), activation=\"relu\", kernel_initializer=\"he_normal\", padding=\"same\", name=\"Conv1\", )(input_img) x = layers.MaxPooling2D((2, 2), name=\"pool1\")(x) # Second conv block x = layers.Conv2D( 64, (3, 3), activation=\"relu\", kernel_initializer=\"he_normal\", padding=\"same\", name=\"Conv2\", )(x) x = layers.MaxPooling2D((2, 2), name=\"pool2\")(x) # We have used two max pool with pool size and strides 2. # Hence, downsampled feature maps are 4x smaller. The number of # filters in the last layer is 64. 
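    # With the default image size used in this example (img_width = 200,
    # img_height = 50), this works out to 200 // 4 = 50 time steps for the
    # RNN and (50 // 4) * 64 = 12 * 64 = 768 features per step, i.e. the
    # (None, 50, 768) reshape output seen in the model summary.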
Reshape accordingly before # passing the output to the RNN part of the model new_shape = ((img_width // 4), (img_height // 4) * 64) x = layers.Reshape(target_shape=new_shape, name=\"reshape\")(x) x = layers.Dense(64, activation=\"relu\", name=\"dense1\")(x) x = layers.Dropout(0.2)(x) # RNNs x = layers.Bidirectional(layers.LSTM(128, return_sequences=True, dropout=0.25))(x) x = layers.Bidirectional(layers.LSTM(64, return_sequences=True, dropout=0.25))(x) # Output layer x = layers.Dense( len(char_to_num.get_vocabulary()) + 1, activation=\"softmax\", name=\"dense2\" )(x) # Add CTC layer for calculating CTC loss at each step output = CTCLayer(name=\"ctc_loss\")(labels, x) # Define the model model = keras.models.Model( inputs=[input_img, labels], outputs=output, name=\"ocr_model_v1\" ) # Optimizer opt = keras.optimizers.Adam() # Compile the model and return model.compile(optimizer=opt) return model # Get the model model = build_model() model.summary() Model: \"ocr_model_v1\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== image (InputLayer) [(None, 200, 50, 1)] 0 __________________________________________________________________________________________________ Conv1 (Conv2D) (None, 200, 50, 32) 320 image[0][0] __________________________________________________________________________________________________ pool1 (MaxPooling2D) (None, 100, 25, 32) 0 Conv1[0][0] __________________________________________________________________________________________________ Conv2 (Conv2D) (None, 100, 25, 64) 18496 pool1[0][0] __________________________________________________________________________________________________ pool2 (MaxPooling2D) (None, 50, 12, 64) 0 Conv2[0][0] __________________________________________________________________________________________________ reshape (Reshape) (None, 50, 768) 0 pool2[0][0] __________________________________________________________________________________________________ dense1 (Dense) (None, 50, 64) 49216 reshape[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 50, 64) 0 dense1[0][0] __________________________________________________________________________________________________ bidirectional (Bidirectional) (None, 50, 256) 197632 dropout[0][0] __________________________________________________________________________________________________ bidirectional_1 (Bidirectional) (None, 50, 128) 164352 bidirectional[0][0] __________________________________________________________________________________________________ label (InputLayer) [(None, None)] 0 __________________________________________________________________________________________________ dense2 (Dense) (None, 50, 20) 2580 bidirectional_1[0][0] __________________________________________________________________________________________________ ctc_loss (CTCLayer) (None, 50, 20) 0 label[0][0] dense2[0][0] ================================================================================================== Total params: 432,596 Trainable params: 432,596 Non-trainable params: 0 __________________________________________________________________________________________________ Training epochs = 100 early_stopping_patience = 10 # Add early stopping early_stopping = keras.callbacks.EarlyStopping( monitor=\"val_loss\", 
patience=early_stopping_patience, restore_best_weights=True ) # Train the model history = model.fit( train_dataset, validation_data=validation_dataset, epochs=epochs, callbacks=[early_stopping], ) Epoch 1/100 59/59 [==============================] - 3s 53ms/step - loss: 21.5722 - val_loss: 16.3351 Epoch 2/100 59/59 [==============================] - 2s 27ms/step - loss: 16.3335 - val_loss: 16.3062 Epoch 3/100 59/59 [==============================] - 2s 27ms/step - loss: 16.3360 - val_loss: 16.3116 Epoch 4/100 59/59 [==============================] - 2s 27ms/step - loss: 16.3318 - val_loss: 16.3167 Epoch 5/100 59/59 [==============================] - 2s 27ms/step - loss: 16.3256 - val_loss: 16.3152 Epoch 6/100 59/59 [==============================] - 2s 29ms/step - loss: 16.3229 - val_loss: 16.3123 Epoch 7/100 59/59 [==============================] - 2s 30ms/step - loss: 16.3119 - val_loss: 16.3116 Epoch 8/100 59/59 [==============================] - 2s 27ms/step - loss: 16.2977 - val_loss: 16.3107 Epoch 9/100 59/59 [==============================] - 2s 28ms/step - loss: 16.2801 - val_loss: 16.2552 Epoch 10/100 59/59 [==============================] - 2s 28ms/step - loss: 16.2199 - val_loss: 16.1008 Epoch 11/100 59/59 [==============================] - 2s 28ms/step - loss: 16.1136 - val_loss: 15.9867 Epoch 12/100 59/59 [==============================] - 2s 30ms/step - loss: 16.0138 - val_loss: 15.8825 Epoch 13/100 59/59 [==============================] - 2s 29ms/step - loss: 15.9670 - val_loss: 15.8413 Epoch 14/100 59/59 [==============================] - 2s 29ms/step - loss: 15.9315 - val_loss: 15.8263 Epoch 15/100 59/59 [==============================] - 2s 31ms/step - loss: 15.9162 - val_loss: 15.7971 Epoch 16/100 59/59 [==============================] - 2s 31ms/step - loss: 15.8916 - val_loss: 15.7844 Epoch 17/100 59/59 [==============================] - 2s 31ms/step - loss: 15.8653 - val_loss: 15.7624 Epoch 18/100 59/59 [==============================] - 2s 31ms/step - loss: 15.8543 - val_loss: 15.7620 Epoch 19/100 59/59 [==============================] - 2s 28ms/step - loss: 15.8373 - val_loss: 15.7559 Epoch 20/100 59/59 [==============================] - 2s 27ms/step - loss: 15.8319 - val_loss: 15.7495 Epoch 21/100 59/59 [==============================] - 2s 27ms/step - loss: 15.8104 - val_loss: 15.7430 Epoch 22/100 59/59 [==============================] - 2s 29ms/step - loss: 15.8037 - val_loss: 15.7260 Epoch 23/100 59/59 [==============================] - 2s 29ms/step - loss: 15.8021 - val_loss: 15.7204 Epoch 24/100 59/59 [==============================] - 2s 28ms/step - loss: 15.7901 - val_loss: 15.7174 Epoch 25/100 59/59 [==============================] - 2s 29ms/step - loss: 15.7851 - val_loss: 15.7074 Epoch 26/100 59/59 [==============================] - 2s 27ms/step - loss: 15.7701 - val_loss: 15.7097 Epoch 27/100 59/59 [==============================] - 2s 28ms/step - loss: 15.7694 - val_loss: 15.7040 Epoch 28/100 59/59 [==============================] - 2s 28ms/step - loss: 15.7544 - val_loss: 15.7012 Epoch 29/100 59/59 [==============================] - 2s 31ms/step - loss: 15.7498 - val_loss: 15.7015 Epoch 30/100 59/59 [==============================] - 2s 31ms/step - loss: 15.7521 - val_loss: 15.6880 Epoch 31/100 59/59 [==============================] - 2s 29ms/step - loss: 15.7165 - val_loss: 15.6734 Epoch 32/100 59/59 [==============================] - 2s 27ms/step - loss: 15.6650 - val_loss: 15.5789 Epoch 33/100 59/59 [==============================] - 2s 27ms/step - 
loss: 15.5300 - val_loss: 15.4026 Epoch 34/100 59/59 [==============================] - 2s 27ms/step - loss: 15.3519 - val_loss: 15.2115 Epoch 35/100 59/59 [==============================] - 2s 27ms/step - loss: 15.1165 - val_loss: 14.7826 Epoch 36/100 59/59 [==============================] - 2s 27ms/step - loss: 14.7086 - val_loss: 14.4432 Epoch 37/100 59/59 [==============================] - 2s 29ms/step - loss: 14.3317 - val_loss: 13.9445 Epoch 38/100 59/59 [==============================] - 2s 29ms/step - loss: 13.9658 - val_loss: 13.6972 Epoch 39/100 59/59 [==============================] - 2s 29ms/step - loss: 13.6728 - val_loss: 13.3388 Epoch 40/100 59/59 [==============================] - 2s 28ms/step - loss: 13.3454 - val_loss: 13.0102 Epoch 41/100 59/59 [==============================] - 2s 27ms/step - loss: 13.0448 - val_loss: 12.8307 Epoch 42/100 59/59 [==============================] - 2s 28ms/step - loss: 12.7552 - val_loss: 12.6071 Epoch 43/100 59/59 [==============================] - 2s 29ms/step - loss: 12.4573 - val_loss: 12.2800 Epoch 44/100 59/59 [==============================] - 2s 31ms/step - loss: 12.1055 - val_loss: 11.9209 Epoch 45/100 59/59 [==============================] - 2s 28ms/step - loss: 11.8148 - val_loss: 11.9132 Epoch 46/100 59/59 [==============================] - 2s 28ms/step - loss: 11.4530 - val_loss: 11.4357 Epoch 47/100 59/59 [==============================] - 2s 29ms/step - loss: 11.0592 - val_loss: 11.1121 Epoch 48/100 59/59 [==============================] - 2s 27ms/step - loss: 10.7746 - val_loss: 10.8532 Epoch 49/100 59/59 [==============================] - 2s 28ms/step - loss: 10.2616 - val_loss: 10.3643 Epoch 50/100 59/59 [==============================] - 2s 28ms/step - loss: 9.8708 - val_loss: 10.0987 Epoch 51/100 59/59 [==============================] - 2s 30ms/step - loss: 9.4077 - val_loss: 9.6371 Epoch 52/100 59/59 [==============================] - 2s 29ms/step - loss: 9.0663 - val_loss: 9.2463 Epoch 53/100 59/59 [==============================] - 2s 28ms/step - loss: 8.4546 - val_loss: 8.7581 Epoch 54/100 59/59 [==============================] - 2s 28ms/step - loss: 7.9226 - val_loss: 8.1805 Epoch 55/100 59/59 [==============================] - 2s 27ms/step - loss: 7.4927 - val_loss: 7.8858 Epoch 56/100 59/59 [==============================] - 2s 28ms/step - loss: 7.0499 - val_loss: 7.3202 Epoch 57/100 59/59 [==============================] - 2s 27ms/step - loss: 6.6383 - val_loss: 7.0875 Epoch 58/100 59/59 [==============================] - 2s 28ms/step - loss: 6.1446 - val_loss: 6.9619 Epoch 59/100 59/59 [==============================] - 2s 28ms/step - loss: 5.8533 - val_loss: 6.3855 Epoch 60/100 59/59 [==============================] - 2s 28ms/step - loss: 5.5107 - val_loss: 5.9797 Epoch 61/100 59/59 [==============================] - 2s 31ms/step - loss: 5.1181 - val_loss: 5.7549 Epoch 62/100 59/59 [==============================] - 2s 31ms/step - loss: 4.6952 - val_loss: 5.5488 Epoch 63/100 59/59 [==============================] - 2s 29ms/step - loss: 4.4189 - val_loss: 5.3030 Epoch 64/100 59/59 [==============================] - 2s 28ms/step - loss: 4.1358 - val_loss: 5.1772 Epoch 65/100 59/59 [==============================] - 2s 28ms/step - loss: 3.8560 - val_loss: 5.1071 Epoch 66/100 59/59 [==============================] - 2s 28ms/step - loss: 3.5342 - val_loss: 4.6958 Epoch 67/100 59/59 [==============================] - 2s 28ms/step - loss: 3.3336 - val_loss: 4.5865 Epoch 68/100 59/59 [==============================] - 
2s 27ms/step - loss: 3.0925 - val_loss: 4.3647 Epoch 69/100 59/59 [==============================] - 2s 28ms/step - loss: 2.8751 - val_loss: 4.3005 Epoch 70/100 59/59 [==============================] - 2s 27ms/step - loss: 2.7444 - val_loss: 4.0820 Epoch 71/100 59/59 [==============================] - 2s 27ms/step - loss: 2.5921 - val_loss: 4.1694 Epoch 72/100 59/59 [==============================] - 2s 28ms/step - loss: 2.3246 - val_loss: 3.9142 Epoch 73/100 59/59 [==============================] - 2s 28ms/step - loss: 2.0769 - val_loss: 3.9135 Epoch 74/100 59/59 [==============================] - 2s 29ms/step - loss: 2.0872 - val_loss: 3.9808 Epoch 75/100 59/59 [==============================] - 2s 29ms/step - loss: 1.9498 - val_loss: 3.9935 Epoch 76/100 59/59 [==============================] - 2s 28ms/step - loss: 1.8178 - val_loss: 3.7735 Epoch 77/100 59/59 [==============================] - 2s 29ms/step - loss: 1.7661 - val_loss: 3.6309 Epoch 78/100 59/59 [==============================] - 2s 31ms/step - loss: 1.6236 - val_loss: 3.7410 Epoch 79/100 59/59 [==============================] - 2s 29ms/step - loss: 1.4652 - val_loss: 3.6756 Epoch 80/100 59/59 [==============================] - 2s 27ms/step - loss: 1.3552 - val_loss: 3.4979 Epoch 81/100 59/59 [==============================] - 2s 29ms/step - loss: 1.2655 - val_loss: 3.5306 Epoch 82/100 59/59 [==============================] - 2s 29ms/step - loss: 1.2632 - val_loss: 3.2885 Epoch 83/100 59/59 [==============================] - 2s 28ms/step - loss: 1.2316 - val_loss: 3.2482 Epoch 84/100 59/59 [==============================] - 2s 30ms/step - loss: 1.1260 - val_loss: 3.4285 Epoch 85/100 59/59 [==============================] - 2s 28ms/step - loss: 1.0745 - val_loss: 3.2985 Epoch 86/100 59/59 [==============================] - 2s 29ms/step - loss: 1.0133 - val_loss: 3.2209 Epoch 87/100 59/59 [==============================] - 2s 31ms/step - loss: 0.9417 - val_loss: 3.2203 Epoch 88/100 59/59 [==============================] - 2s 28ms/step - loss: 0.9104 - val_loss: 3.1121 Epoch 89/100 59/59 [==============================] - 2s 30ms/step - loss: 0.8516 - val_loss: 3.2070 Epoch 90/100 59/59 [==============================] - 2s 28ms/step - loss: 0.8275 - val_loss: 3.0335 Epoch 91/100 59/59 [==============================] - 2s 28ms/step - loss: 0.8056 - val_loss: 3.2085 Epoch 92/100 59/59 [==============================] - 2s 28ms/step - loss: 0.7373 - val_loss: 3.0326 Epoch 93/100 59/59 [==============================] - 2s 28ms/step - loss: 0.7753 - val_loss: 2.9935 Epoch 94/100 59/59 [==============================] - 2s 28ms/step - loss: 0.7688 - val_loss: 2.9940 Epoch 95/100 59/59 [==============================] - 2s 27ms/step - loss: 0.6765 - val_loss: 3.0432 Epoch 96/100 59/59 [==============================] - 2s 29ms/step - loss: 0.6674 - val_loss: 3.1233 Epoch 97/100 59/59 [==============================] - 2s 29ms/step - loss: 0.6018 - val_loss: 2.8405 Epoch 98/100 59/59 [==============================] - 2s 28ms/step - loss: 0.6322 - val_loss: 2.8323 Epoch 99/100 59/59 [==============================] - 2s 29ms/step - loss: 0.5889 - val_loss: 2.8786 Epoch 100/100 59/59 [==============================] - 2s 28ms/step - loss: 0.5616 - val_loss: 2.9697 Inference # Get the prediction model by extracting layers till the output layer prediction_model = keras.models.Model( model.get_layer(name=\"image\").input, model.get_layer(name=\"dense2\").output ) prediction_model.summary() # A utility function to decode the output of 
the network def decode_batch_predictions(pred): input_len = np.ones(pred.shape[0]) * pred.shape[1] # Use greedy search. For complex tasks, you can use beam search results = keras.backend.ctc_decode(pred, input_length=input_len, greedy=True)[0][0][ :, :max_length ] # Iterate over the results and get back the text output_text = [] for res in results: res = tf.strings.reduce_join(num_to_char(res)).numpy().decode(\"utf-8\") output_text.append(res) return output_text # Let's check results on some validation samples for batch in validation_dataset.take(1): batch_images = batch[\"image\"] batch_labels = batch[\"label\"] preds = prediction_model.predict(batch_images) pred_texts = decode_batch_predictions(preds) orig_texts = [] for label in batch_labels: label = tf.strings.reduce_join(num_to_char(label)).numpy().decode(\"utf-8\") orig_texts.append(label) _, ax = plt.subplots(4, 4, figsize=(15, 5)) for i in range(len(pred_texts)): img = (batch_images[i, :, :, 0] * 255).numpy().astype(np.uint8) img = img.T title = f\"Prediction: {pred_texts[i]}\" ax[i // 4, i % 4].imshow(img, cmap=\"gray\") ax[i // 4, i % 4].set_title(title) ax[i // 4, i % 4].axis(\"off\") plt.show() Model: \"functional_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= image (InputLayer) [(None, 200, 50, 1)] 0 _________________________________________________________________ Conv1 (Conv2D) (None, 200, 50, 32) 320 _________________________________________________________________ pool1 (MaxPooling2D) (None, 100, 25, 32) 0 _________________________________________________________________ Conv2 (Conv2D) (None, 100, 25, 64) 18496 _________________________________________________________________ pool2 (MaxPooling2D) (None, 50, 12, 64) 0 _________________________________________________________________ reshape (Reshape) (None, 50, 768) 0 _________________________________________________________________ dense1 (Dense) (None, 50, 64) 49216 _________________________________________________________________ dropout (Dropout) (None, 50, 64) 0 _________________________________________________________________ bidirectional (Bidirectional (None, 50, 256) 197632 _________________________________________________________________ bidirectional_1 (Bidirection (None, 50, 128) 164352 _________________________________________________________________ dense2 (Dense) (None, 50, 20) 2580 ================================================================= Total params: 432,596 Trainable params: 432,596 Non-trainable params: 0 _________________________________________________________________ png Medical image classification on TPU. Introduction + Set-up This tutorial will explain how to build an X-ray image classification model to predict whether an X-ray scan shows presence of pneumonia. 
import re import os import random import numpy as np import pandas as pd import tensorflow as tf import matplotlib.pyplot as plt try: tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() print(\"Device:\", tpu.master()) strategy = tf.distribute.TPUStrategy(tpu) except: strategy = tf.distribute.get_strategy() print(\"Number of replicas:\", strategy.num_replicas_in_sync) Device: grpc://10.0.27.122:8470 INFO:tensorflow:Initializing the TPU system: grpc://10.0.27.122:8470 INFO:tensorflow:Initializing the TPU system: grpc://10.0.27.122:8470 INFO:tensorflow:Clearing out eager caches INFO:tensorflow:Clearing out eager caches INFO:tensorflow:Finished initializing TPU system. INFO:tensorflow:Finished initializing TPU system. WARNING:absl:[`tf.distribute.TPUStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/TPUStrategy) is deprecated, please use the non experimental symbol [`tf.distribute.TPUStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/TPUStrategy) instead. INFO:tensorflow:Found TPU system: INFO:tensorflow:Found TPU system: INFO:tensorflow:*** Num TPU Cores: 8 INFO:tensorflow:*** Num TPU Cores: 8 INFO:tensorflow:*** Num TPU Workers: 1 INFO:tensorflow:*** Num TPU Workers: 1 INFO:tensorflow:*** Num TPU Cores Per Worker: 8 INFO:tensorflow:*** Num TPU Cores Per Worker: 8 INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 
0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0) Number of replicas: 8 We need a Google Cloud link to our data to load the data using a TPU. Below, we define key configuration parameters we'll use in this example. To run on TPU, this example must be on Colab with the TPU runtime selected. AUTOTUNE = tf.data.AUTOTUNE BATCH_SIZE = 25 * strategy.num_replicas_in_sync IMAGE_SIZE = [180, 180] CLASS_NAMES = [\"NORMAL\", \"PNEUMONIA\"] Load the data The Chest X-ray data we are using from Cell divides the data into training and test files. Let's first load in the training TFRecords. train_images = tf.data.TFRecordDataset( \"gs://download.tensorflow.org/data/ChestXRay2017/train/images.tfrec\" ) train_paths = tf.data.TFRecordDataset( \"gs://download.tensorflow.org/data/ChestXRay2017/train/paths.tfrec\" ) ds = tf.data.Dataset.zip((train_images, train_paths)) Let's count how many healthy/normal chest X-rays we have and how many pneumonia chest X-rays we have: COUNT_NORMAL = len( [ filename for filename in train_paths if \"NORMAL\" in filename.numpy().decode(\"utf-8\") ] ) print(\"Normal images count in training set: \" + str(COUNT_NORMAL)) COUNT_PNEUMONIA = len( [ filename for filename in train_paths if \"PNEUMONIA\" in filename.numpy().decode(\"utf-8\") ] ) print(\"Pneumonia images count in training set: \" + str(COUNT_PNEUMONIA)) Normal images count in training set: 1349 Pneumonia images count in training set: 3883 Notice that there are way more images that are classified as pneumonia than normal. This shows that we have an imbalance in our data. We will correct for this imbalance later on in our notebook. We want to map each filename to the corresponding (image, label) pair. The following methods will help us do that. As we only have two labels, we will encode the label so that 1 or True indicates pneumonia and 0 or False indicates normal. def get_label(file_path): # convert the path to a list of path components parts = tf.strings.split(file_path, \"/\") # The second to last is the class-directory return parts[-2] == \"PNEUMONIA\" def decode_img(img): # convert the compressed string to a 3D uint8 tensor img = tf.image.decode_jpeg(img, channels=3) # resize the image to the desired size. return tf.image.resize(img, IMAGE_SIZE) def process_path(image, path): label = get_label(path) # load the raw data from the file as a string img = decode_img(image) return img, label ds = ds.map(process_path, num_parallel_calls=AUTOTUNE) Let's split the data into a training and validation datasets. ds = ds.shuffle(10000) train_ds = ds.take(4200) val_ds = ds.skip(4200) Let's visualize the shape of an (image, label) pair. for image, label in train_ds.take(1): print(\"Image shape: \", image.numpy().shape) print(\"Label: \", label.numpy()) Image shape: (180, 180, 3) Label: False Load and format the test data as well. 
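(One brief aside before loading the test split: the label parsing above can be sanity-checked with a made-up path. The file path below is purely illustrative and is not an actual file from the dataset; get_label simply looks at the second-to-last path component.)

import tensorflow as tf

# Hypothetical path, for illustration only.
path = tf.constant("train/PNEUMONIA/person1_bacteria_1.jpeg")
parts = tf.strings.split(path, "/")
print(parts[-2].numpy())                   # b'PNEUMONIA'
print((parts[-2] == "PNEUMONIA").numpy())  # True, i.e. encoded as pneumonia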
test_images = tf.data.TFRecordDataset(
    "gs://download.tensorflow.org/data/ChestXRay2017/test/images.tfrec"
)
test_paths = tf.data.TFRecordDataset(
    "gs://download.tensorflow.org/data/ChestXRay2017/test/paths.tfrec"
)
test_ds = tf.data.Dataset.zip((test_images, test_paths))

test_ds = test_ds.map(process_path, num_parallel_calls=AUTOTUNE)
test_ds = test_ds.batch(BATCH_SIZE)

Visualize the dataset

First, let's use buffered prefetching so we can yield data from disk without having I/O become blocking. Please note that large image datasets should not be cached in memory. We do it here because the dataset is not very large and we want to train on TPU.

def prepare_for_training(ds, cache=True):
    # This is a small dataset, only load it once, and keep it in memory.
    # Use `.cache(filename)` to cache preprocessing work for datasets that
    # don't fit in memory.
    if cache:
        if isinstance(cache, str):
            ds = ds.cache(cache)
        else:
            ds = ds.cache()
    ds = ds.batch(BATCH_SIZE)
    # `prefetch` lets the dataset fetch batches in the background while the
    # model is training.
    ds = ds.prefetch(buffer_size=AUTOTUNE)
    return ds

Prepare the training and validation datasets, and fetch the first batch of training data.

train_ds = prepare_for_training(train_ds)
val_ds = prepare_for_training(val_ds)

image_batch, label_batch = next(iter(train_ds))

Define the method to show the images in the batch.

def show_batch(image_batch, label_batch):
    plt.figure(figsize=(10, 10))
    for n in range(25):
        ax = plt.subplot(5, 5, n + 1)
        plt.imshow(image_batch[n] / 255)
        if label_batch[n]:
            plt.title("PNEUMONIA")
        else:
            plt.title("NORMAL")
        plt.axis("off")

As the method takes NumPy arrays as its parameters, call the numpy function on the batches to return the tensors in NumPy array form.

show_batch(image_batch.numpy(), label_batch.numpy())

png

Build the CNN

To make our model more modular and easier to understand, let's define some blocks. As we're building a convolutional neural network, we'll create a convolution block and a dense layer block. The architecture for this CNN has been inspired by this article.

from tensorflow import keras
from tensorflow.keras import layers


def conv_block(filters, inputs):
    x = layers.SeparableConv2D(filters, 3, activation="relu", padding="same")(inputs)
    x = layers.SeparableConv2D(filters, 3, activation="relu", padding="same")(x)
    x = layers.BatchNormalization()(x)
    outputs = layers.MaxPool2D()(x)
    return outputs


def dense_block(units, dropout_rate, inputs):
    x = layers.Dense(units, activation="relu")(inputs)
    x = layers.BatchNormalization()(x)
    outputs = layers.Dropout(dropout_rate)(x)
    return outputs

The following function builds our model. The images originally have values that range from [0, 255]. CNNs work better with smaller numbers, so we will scale this down for our input. The Dropout layers are important, as they reduce the likelihood of the model overfitting. We want to end the model with a Dense layer with one node, as this will be the binary output that determines if an X-ray shows presence of pneumonia.
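Before the full build_model definition below, here is a quick, throwaway shape check (not part of the original notebook, and assuming the imports and IMAGE_SIZE defined earlier in this example) showing what the blocks above do to tensor shapes: each conv_block halves the spatial resolution through its pooling layer, while dense_block only changes the feature dimension.

# Quick shape check, reusing `conv_block` and `dense_block` from above.
dummy = keras.Input(shape=(IMAGE_SIZE[0], IMAGE_SIZE[1], 3))  # (180, 180, 3)
x = conv_block(32, dummy)
print(x.shape)  # (None, 90, 90, 32): MaxPool2D halves height and width
x = conv_block(64, x)
print(x.shape)  # (None, 45, 45, 64)
x = layers.Flatten()(x)
x = dense_block(512, 0.7, x)
print(x.shape)  # (None, 512)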
def build_model():
    inputs = keras.Input(shape=(IMAGE_SIZE[0], IMAGE_SIZE[1], 3))

    x = layers.Rescaling(1.0 / 255)(inputs)
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    x = layers.MaxPool2D()(x)

    x = conv_block(32, x)
    x = conv_block(64, x)

    x = conv_block(128, x)
    x = layers.Dropout(0.2)(x)

    x = conv_block(256, x)
    x = layers.Dropout(0.2)(x)

    x = layers.Flatten()(x)
    x = dense_block(512, 0.7, x)
    x = dense_block(128, 0.5, x)
    x = dense_block(64, 0.3, x)

    outputs = layers.Dense(1, activation="sigmoid")(x)

    model = keras.Model(inputs=inputs, outputs=outputs)
    return model

Correct for data imbalance

We saw earlier in this example that the data was imbalanced, with more images classified as pneumonia than normal. We will correct for that by using class weighting:

initial_bias = np.log([COUNT_PNEUMONIA / COUNT_NORMAL])
print("Initial bias: {:.5f}".format(initial_bias[0]))

TRAIN_IMG_COUNT = COUNT_NORMAL + COUNT_PNEUMONIA
weight_for_0 = (1 / COUNT_NORMAL) * (TRAIN_IMG_COUNT) / 2.0
weight_for_1 = (1 / COUNT_PNEUMONIA) * (TRAIN_IMG_COUNT) / 2.0

class_weight = {0: weight_for_0, 1: weight_for_1}

print("Weight for class 0: {:.2f}".format(weight_for_0))
print("Weight for class 1: {:.2f}".format(weight_for_1))

Initial bias: 1.05724
Weight for class 0: 1.94
Weight for class 1: 0.67

The weight for class 0 (Normal) is a lot higher than the weight for class 1 (Pneumonia). Because there are fewer normal images, each normal image is weighted more heavily to balance the data, as the CNN works best when the training data is balanced.

Train the model

Defining callbacks

The checkpoint callback saves the best weights of the model, so the next time we want to use the model, we do not have to spend time training it. The early stopping callback stops the training process when the model starts becoming stagnant, or even worse, when the model starts overfitting.

checkpoint_cb = tf.keras.callbacks.ModelCheckpoint("xray_model.h5", save_best_only=True)
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
    patience=10, restore_best_weights=True
)

We also want to tune our learning rate. Too high a learning rate will cause the model to diverge; too low a learning rate will make training too slow. We implement the exponential learning rate scheduling method below.

initial_learning_rate = 0.015
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)

Fit the model

For our metrics, we want to include precision and recall, as they will provide us with a more informed picture of how good our model is. Accuracy tells us what fraction of the labels is correct. Since our data is not balanced, accuracy might give a skewed sense of a good model (i.e. a model that always predicts PNEUMONIA will be 74% accurate, but is not a good model). Precision is the number of true positives (TP) over the sum of TP and false positives (FP). It shows what fraction of predicted positives are actually correct. Recall is the number of TP over the sum of TP and false negatives (FN). It shows what fraction of actual positives are identified correctly. Since there are only two possible labels for the image, we will be using the binary crossentropy loss. When we fit the model, remember to specify the class weights, which we defined earlier. Because we are using a TPU, training will be quick: less than 2 minutes.
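As a quick back-of-the-envelope check of the numbers above (using only the training-set counts printed earlier in this example, 1,349 normal and 3,883 pneumonia images), both the 74% baseline and the printed class weights can be reproduced by hand:

count_normal, count_pneumonia = 1349, 3883
total = count_normal + count_pneumonia  # 5232 training images

# A classifier that always predicts PNEUMONIA:
print(count_pneumonia / total)  # ~0.742, the "74% accurate" baseline

# The class-weight formula used above:
print((1 / count_normal) * total / 2.0)     # ~1.94, weight for class 0 (NORMAL)
print((1 / count_pneumonia) * total / 2.0)  # ~0.67, weight for class 1 (PNEUMONIA)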
with strategy.scope(): model = build_model() METRICS = [ tf.keras.metrics.BinaryAccuracy(), tf.keras.metrics.Precision(name=\"precision\"), tf.keras.metrics.Recall(name=\"recall\"), ] model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule), loss=\"binary_crossentropy\", metrics=METRICS, ) history = model.fit( train_ds, epochs=100, validation_data=val_ds, class_weight=class_weight, callbacks=[checkpoint_cb, early_stopping_cb], ) Epoch 1/100 WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.data.Iterator.get_next_as_optional()` instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.data.Iterator.get_next_as_optional()` instead. 21/21 [==============================] - 12s 568ms/step - loss: 0.5857 - binary_accuracy: 0.6960 - precision: 0.8887 - recall: 0.6733 - val_loss: 34.0149 - val_binary_accuracy: 0.7180 - val_precision: 0.7180 - val_recall: 1.0000 Epoch 2/100 21/21 [==============================] - 3s 128ms/step - loss: 0.2916 - binary_accuracy: 0.8755 - precision: 0.9540 - recall: 0.8738 - val_loss: 97.5194 - val_binary_accuracy: 0.7180 - val_precision: 0.7180 - val_recall: 1.0000 Epoch 3/100 21/21 [==============================] - 4s 167ms/step - loss: 0.2384 - binary_accuracy: 0.9002 - precision: 0.9663 - recall: 0.8964 - val_loss: 27.7902 - val_binary_accuracy: 0.7180 - val_precision: 0.7180 - val_recall: 1.0000 Epoch 4/100 21/21 [==============================] - 4s 173ms/step - loss: 0.2046 - binary_accuracy: 0.9145 - precision: 0.9725 - recall: 0.9102 - val_loss: 10.8302 - val_binary_accuracy: 0.7180 - val_precision: 0.7180 - val_recall: 1.0000 Epoch 5/100 21/21 [==============================] - 4s 174ms/step - loss: 0.1841 - binary_accuracy: 0.9279 - precision: 0.9733 - recall: 0.9279 - val_loss: 3.5860 - val_binary_accuracy: 0.7103 - val_precision: 0.7162 - val_recall: 0.9879 Epoch 6/100 21/21 [==============================] - 4s 185ms/step - loss: 0.1600 - binary_accuracy: 0.9362 - precision: 0.9791 - recall: 0.9337 - val_loss: 0.3014 - val_binary_accuracy: 0.8895 - val_precision: 0.8973 - val_recall: 0.9555 Epoch 7/100 21/21 [==============================] - 3s 130ms/step - loss: 0.1567 - binary_accuracy: 0.9393 - precision: 0.9798 - recall: 0.9372 - val_loss: 0.6763 - val_binary_accuracy: 0.7810 - val_precision: 0.7760 - val_recall: 0.9771 Epoch 8/100 21/21 [==============================] - 3s 131ms/step - loss: 0.1532 - binary_accuracy: 0.9421 - precision: 0.9825 - recall: 0.9385 - val_loss: 0.3169 - val_binary_accuracy: 0.8895 - val_precision: 0.8684 - val_recall: 0.9973 Epoch 9/100 21/21 [==============================] - 4s 184ms/step - loss: 0.1457 - binary_accuracy: 0.9431 - precision: 0.9822 - recall: 0.9401 - val_loss: 0.2064 - val_binary_accuracy: 0.9273 - val_precision: 0.9840 - val_recall: 0.9136 Epoch 10/100 21/21 [==============================] - 3s 132ms/step - loss: 0.1201 - binary_accuracy: 0.9521 - precision: 0.9869 - recall: 0.9479 - val_loss: 0.4364 - val_binary_accuracy: 0.8605 - val_precision: 0.8443 - val_recall: 0.9879 Epoch 11/100 21/21 
[==============================] - 3s 127ms/step - loss: 0.1200 - binary_accuracy: 0.9510 - precision: 0.9863 - recall: 0.9469 - val_loss: 0.5197 - val_binary_accuracy: 0.8508 - val_precision: 1.0000 - val_recall: 0.7922 Epoch 12/100 21/21 [==============================] - 4s 186ms/step - loss: 0.1077 - binary_accuracy: 0.9581 - precision: 0.9870 - recall: 0.9559 - val_loss: 0.1349 - val_binary_accuracy: 0.9486 - val_precision: 0.9587 - val_recall: 0.9703 Epoch 13/100 21/21 [==============================] - 4s 173ms/step - loss: 0.0918 - binary_accuracy: 0.9650 - precision: 0.9914 - recall: 0.9611 - val_loss: 0.0926 - val_binary_accuracy: 0.9700 - val_precision: 0.9837 - val_recall: 0.9744 Epoch 14/100 21/21 [==============================] - 3s 130ms/step - loss: 0.0996 - binary_accuracy: 0.9612 - precision: 0.9913 - recall: 0.9559 - val_loss: 0.1811 - val_binary_accuracy: 0.9419 - val_precision: 0.9956 - val_recall: 0.9231 Epoch 15/100 21/21 [==============================] - 3s 129ms/step - loss: 0.0898 - binary_accuracy: 0.9643 - precision: 0.9901 - recall: 0.9614 - val_loss: 0.1525 - val_binary_accuracy: 0.9486 - val_precision: 0.9986 - val_recall: 0.9298 Epoch 16/100 21/21 [==============================] - 3s 128ms/step - loss: 0.0941 - binary_accuracy: 0.9621 - precision: 0.9904 - recall: 0.9582 - val_loss: 0.5101 - val_binary_accuracy: 0.8527 - val_precision: 1.0000 - val_recall: 0.7949 Epoch 17/100 21/21 [==============================] - 3s 125ms/step - loss: 0.0798 - binary_accuracy: 0.9636 - precision: 0.9897 - recall: 0.9607 - val_loss: 0.1239 - val_binary_accuracy: 0.9622 - val_precision: 0.9875 - val_recall: 0.9595 Epoch 18/100 21/21 [==============================] - 3s 126ms/step - loss: 0.0821 - binary_accuracy: 0.9657 - precision: 0.9911 - recall: 0.9623 - val_loss: 0.1597 - val_binary_accuracy: 0.9322 - val_precision: 0.9956 - val_recall: 0.9096 Epoch 19/100 21/21 [==============================] - 3s 143ms/step - loss: 0.0800 - binary_accuracy: 0.9657 - precision: 0.9917 - recall: 0.9617 - val_loss: 0.2538 - val_binary_accuracy: 0.9109 - val_precision: 1.0000 - val_recall: 0.8758 Epoch 20/100 21/21 [==============================] - 3s 127ms/step - loss: 0.0605 - binary_accuracy: 0.9738 - precision: 0.9950 - recall: 0.9694 - val_loss: 0.6594 - val_binary_accuracy: 0.8566 - val_precision: 1.0000 - val_recall: 0.8003 Epoch 21/100 21/21 [==============================] - 4s 167ms/step - loss: 0.0726 - binary_accuracy: 0.9733 - precision: 0.9937 - recall: 0.9701 - val_loss: 0.0593 - val_binary_accuracy: 0.9816 - val_precision: 0.9945 - val_recall: 0.9798 Epoch 22/100 21/21 [==============================] - 3s 126ms/step - loss: 0.0577 - binary_accuracy: 0.9783 - precision: 0.9951 - recall: 0.9755 - val_loss: 0.1087 - val_binary_accuracy: 0.9729 - val_precision: 0.9931 - val_recall: 0.9690 Epoch 23/100 21/21 [==============================] - 3s 125ms/step - loss: 0.0652 - binary_accuracy: 0.9729 - precision: 0.9924 - recall: 0.9707 - val_loss: 1.8465 - val_binary_accuracy: 0.7180 - val_precision: 0.7180 - val_recall: 1.0000 Epoch 24/100 21/21 [==============================] - 3s 124ms/step - loss: 0.0538 - binary_accuracy: 0.9783 - precision: 0.9951 - recall: 0.9755 - val_loss: 1.5769 - val_binary_accuracy: 0.7180 - val_precision: 0.7180 - val_recall: 1.0000 Epoch 25/100 21/21 [==============================] - 4s 167ms/step - loss: 0.0549 - binary_accuracy: 0.9776 - precision: 0.9954 - recall: 0.9743 - val_loss: 0.0590 - val_binary_accuracy: 0.9777 - val_precision: 
0.9904 - val_recall: 0.9784 Epoch 26/100 21/21 [==============================] - 3s 131ms/step - loss: 0.0677 - binary_accuracy: 0.9719 - precision: 0.9924 - recall: 0.9694 - val_loss: 2.6008 - val_binary_accuracy: 0.6928 - val_precision: 0.9977 - val_recall: 0.5735 Epoch 27/100 21/21 [==============================] - 3s 127ms/step - loss: 0.0469 - binary_accuracy: 0.9833 - precision: 0.9971 - recall: 0.9804 - val_loss: 1.0184 - val_binary_accuracy: 0.8605 - val_precision: 0.9983 - val_recall: 0.8070 Epoch 28/100 21/21 [==============================] - 3s 126ms/step - loss: 0.0501 - binary_accuracy: 0.9790 - precision: 0.9961 - recall: 0.9755 - val_loss: 0.3737 - val_binary_accuracy: 0.9089 - val_precision: 0.9954 - val_recall: 0.8772 Epoch 29/100 21/21 [==============================] - 3s 128ms/step - loss: 0.0548 - binary_accuracy: 0.9798 - precision: 0.9941 - recall: 0.9784 - val_loss: 1.2928 - val_binary_accuracy: 0.7907 - val_precision: 1.0000 - val_recall: 0.7085 Epoch 30/100 21/21 [==============================] - 3s 129ms/step - loss: 0.0370 - binary_accuracy: 0.9860 - precision: 0.9980 - recall: 0.9829 - val_loss: 0.1370 - val_binary_accuracy: 0.9612 - val_precision: 0.9972 - val_recall: 0.9487 Epoch 31/100 21/21 [==============================] - 3s 125ms/step - loss: 0.0585 - binary_accuracy: 0.9819 - precision: 0.9951 - recall: 0.9804 - val_loss: 1.1955 - val_binary_accuracy: 0.6870 - val_precision: 0.9976 - val_recall: 0.5655 Epoch 32/100 21/21 [==============================] - 3s 140ms/step - loss: 0.0813 - binary_accuracy: 0.9695 - precision: 0.9934 - recall: 0.9652 - val_loss: 1.0394 - val_binary_accuracy: 0.8576 - val_precision: 0.9853 - val_recall: 0.8138 Epoch 33/100 21/21 [==============================] - 3s 128ms/step - loss: 0.1111 - binary_accuracy: 0.9555 - precision: 0.9870 - recall: 0.9524 - val_loss: 4.9438 - val_binary_accuracy: 0.5911 - val_precision: 1.0000 - val_recall: 0.4305 Epoch 34/100 21/21 [==============================] - 3s 130ms/step - loss: 0.0680 - binary_accuracy: 0.9726 - precision: 0.9921 - recall: 0.9707 - val_loss: 2.8822 - val_binary_accuracy: 0.7267 - val_precision: 0.9978 - val_recall: 0.6208 Epoch 35/100 21/21 [==============================] - 4s 187ms/step - loss: 0.0784 - binary_accuracy: 0.9712 - precision: 0.9892 - recall: 0.9717 - val_loss: 0.3940 - val_binary_accuracy: 0.9390 - val_precision: 0.9942 - val_recall: 0.9204 Visualizing model performance Let's plot the model accuracy and loss for the training and the validating set. Note that no random seed is specified for this notebook. For your notebook, there might be slight variance. fig, ax = plt.subplots(1, 4, figsize=(20, 3)) ax = ax.ravel() for i, met in enumerate([\"precision\", \"recall\", \"binary_accuracy\", \"loss\"]): ax[i].plot(history.history[met]) ax[i].plot(history.history[\"val_\" + met]) ax[i].set_title(\"Model {}\".format(met)) ax[i].set_xlabel(\"epochs\") ax[i].set_ylabel(met) ax[i].legend([\"train\", \"val\"]) png We see that the accuracy for our model is around 95%. Predict and evaluate results Let's evaluate the model on our test data! model.evaluate(test_ds, return_dict=True) 4/4 [==============================] - 3s 708ms/step - loss: 0.9718 - binary_accuracy: 0.7901 - precision: 0.7524 - recall: 0.9897 {'binary_accuracy': 0.7900640964508057, 'loss': 0.9717951416969299, 'precision': 0.752436637878418, 'recall': 0.9897436499595642} We see that our accuracy on our test data is lower than the accuracy for our validating set. This may indicate overfitting. 
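As an optional follow-up, you could break the test-set errors down with a confusion matrix. This is a minimal sketch that assumes the model and the batched, unshuffled test_ds defined above, and that process_path yields boolean pneumonia labels as in the training pipeline:

# Sketch: collect labels and thresholded predictions over the (unshuffled) test set.
y_true = np.concatenate([labels.numpy().astype(int) for _, labels in test_ds])
y_prob = model.predict(test_ds).ravel()
y_pred = (y_prob > 0.5).astype(int)

# Rows are true classes (0 = NORMAL, 1 = PNEUMONIA), columns are predicted classes.
print(tf.math.confusion_matrix(y_true, y_pred, num_classes=2).numpy())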
Our recall is greater than our precision, indicating that almost all pneumonia images are correctly identified but some normal images are incorrectly identified as pneumonia. We should aim to increase our precision.

for image, label in test_ds.take(1):
    plt.imshow(image[0] / 255.0)
    plt.title(CLASS_NAMES[label[0].numpy()])

prediction = model.predict(test_ds.take(1))[0]
scores = [1 - prediction, prediction]

for score, name in zip(scores, CLASS_NAMES):
    print("This image is %.2f percent %s" % ((100 * score), name))

/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:3: DeprecationWarning: In future, it will be an error for 'np.bool_' scalars to be interpreted as an index
  This is separate from the ipykernel package so we can avoid doing imports until

This image is 47.19 percent NORMAL
This image is 52.81 percent PNEUMONIA

png

Implementation of PointNet for ModelNet10 classification.

Introduction

Classification, detection and segmentation of unordered 3D point sets, i.e. point clouds, is a core problem in computer vision. This example implements the seminal point cloud deep learning paper PointNet (Qi et al., 2017). For a detailed introduction to PointNet see this blog post.

Setup

If using Colab, first install trimesh with !pip install trimesh.

import os
import glob
import trimesh
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from matplotlib import pyplot as plt

tf.random.set_seed(1234)

Load dataset

We use the ModelNet10 dataset, the smaller 10-class version of the ModelNet40 dataset. First download the data:

DATA_DIR = tf.keras.utils.get_file(
    "modelnet.zip",
    "http://3dvision.princeton.edu/projects/2014/3DShapeNets/ModelNet10.zip",
    extract=True,
)
DATA_DIR = os.path.join(os.path.dirname(DATA_DIR), "ModelNet10")

Downloading data from http://3dvision.princeton.edu/projects/2014/3DShapeNets/ModelNet10.zip
473407488/473402300 [==============================] - 13s 0us/step

We can use the trimesh package to read and visualize the .off mesh files.

mesh = trimesh.load(os.path.join(DATA_DIR, "chair/train/chair_0001.off"))
mesh.show()

To convert a mesh file to a point cloud we first need to sample points on the mesh surface. .sample() performs uniform random sampling. Here we sample at 2048 locations and visualize in matplotlib.

points = mesh.sample(2048)

fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(111, projection="3d")
ax.scatter(points[:, 0], points[:, 1], points[:, 2])
ax.set_axis_off()
plt.show()

png

To generate a tf.data.Dataset() we first need to parse the ModelNet data folders. Each mesh is loaded and sampled into a point cloud before being added to a standard Python list and converted to a NumPy array. We also store the current enumerate index value as the object label and use a dictionary to recall this later.
def parse_dataset(num_points=2048):
    train_points = []
    train_labels = []
    test_points = []
    test_labels = []
    class_map = {}
    folders = glob.glob(os.path.join(DATA_DIR, "[!README]*"))

    for i, folder in enumerate(folders):
        print("processing class: {}".format(os.path.basename(folder)))
        # store folder name with ID so we can retrieve later
        class_map[i] = folder.split("/")[-1]
        # gather all files
        train_files = glob.glob(os.path.join(folder, "train/*"))
        test_files = glob.glob(os.path.join(folder, "test/*"))

        for f in train_files:
            train_points.append(trimesh.load(f).sample(num_points))
            train_labels.append(i)

        for f in test_files:
            test_points.append(trimesh.load(f).sample(num_points))
            test_labels.append(i)

    return (
        np.array(train_points),
        np.array(test_points),
        np.array(train_labels),
        np.array(test_labels),
        class_map,
    )

Set the number of points to sample and the batch size, then parse the dataset. This can take ~5 minutes to complete.

NUM_POINTS = 2048
NUM_CLASSES = 10
BATCH_SIZE = 32

train_points, test_points, train_labels, test_labels, CLASS_MAP = parse_dataset(
    NUM_POINTS
)

processing class: bathtub
processing class: desk
processing class: monitor
processing class: sofa
processing class: chair
processing class: toilet
processing class: dresser
processing class: table
processing class: bed
processing class: night_stand

Our data can now be read into a tf.data.Dataset() object. We set the shuffle buffer size to the entire size of the dataset, as prior to this the data is ordered by class. Data augmentation is important when working with point cloud data. We create an augmentation function to jitter and shuffle the train dataset.

def augment(points, label):
    # jitter points
    points += tf.random.uniform(points.shape, -0.005, 0.005, dtype=tf.float64)
    # shuffle points
    points = tf.random.shuffle(points)
    return points, label


train_dataset = tf.data.Dataset.from_tensor_slices((train_points, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_points, test_labels))

train_dataset = train_dataset.shuffle(len(train_points)).map(augment).batch(BATCH_SIZE)
test_dataset =