Training a CTC-based model for automatic speech recognition. Introduction Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies enabling the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech-to-text (STT). It incorporates knowledge and research from the computer science, linguistics and computer engineering fields. This demonstration shows how to combine a 2D CNN, an RNN and a Connectionist Temporal Classification (CTC) loss to build an ASR model. CTC is an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems. CTC is used when we don't know how the input aligns with the output (how the characters in the transcript align to the audio). The model we create is similar to DeepSpeech2. We will use the LJSpeech dataset from the LibriVox project. It consists of short audio clips of a single speaker reading passages from 7 non-fiction books. We will evaluate the quality of the model using Word Error Rate (WER). WER is obtained by adding up the substitutions, insertions, and deletions that occur in a sequence of recognized words, and dividing that count by the total number of words originally spoken. To compute the WER score you need to install the jiwer package. You can use the following command line: pip install jiwer References: LJSpeech Dataset Speech recognition Sequence Modeling With CTC DeepSpeech2 Setup import pandas as pd import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt from IPython import display from jiwer import wer Load the LJSpeech Dataset Let's download the LJSpeech Dataset. The dataset contains 13,100 audio files as wav files in the /wavs/ folder. The label (transcript) for each audio file is a string given in the metadata.csv file. The fields are: ID: this is the name of the corresponding .wav file Transcription: words spoken by the reader (UTF-8) Normalized transcription: transcription with numbers, ordinals, and monetary units expanded into full words (UTF-8). For this demo we will use the \"Normalized transcription\" field. Each audio file is a single-channel 16-bit PCM WAV with a sample rate of 22,050 Hz. data_url = \"https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2\" data_path = keras.utils.get_file(\"LJSpeech-1.1\", data_url, untar=True) wavs_path = data_path + \"/wavs/\" metadata_path = data_path + \"/metadata.csv\" # Read metadata file and parse it metadata_df = pd.read_csv(metadata_path, sep=\"|\", header=None, quoting=3) metadata_df.columns = [\"file_name\", \"transcription\", \"normalized_transcription\"] metadata_df = metadata_df[[\"file_name\", \"normalized_transcription\"]] metadata_df = metadata_df.sample(frac=1).reset_index(drop=True) metadata_df.head(3) file_name normalized_transcription 0 LJ042-0218 to the entire land and complete foundations of... 1 LJ004-0218 a week's allowance at a time, was abolished, a... 2 LJ005-0151 in others women were very properly exempted fr... We now split the data into training and validation sets.
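Note that metadata_df.sample(frac=1) shuffles the rows at random, so the exact 90/10 split below will change from run to run. If you want a reproducible split, you can optionally shuffle with a fixed seed instead; this is a small tweak on top of the example, and the seed value 42 is arbitrary.

# Optional (not in the original example): re-shuffle with a fixed seed so that the
# train/validation split below is reproducible across runs.
metadata_df = metadata_df.sample(frac=1, random_state=42).reset_index(drop=True)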
split = int(len(metadata_df) * 0.90) df_train = metadata_df[:split] df_val = metadata_df[split:] print(f\"Size of the training set: {len(df_train)}\") print(f\"Size of the validation set: {len(df_val)}\") Size of the training set: 11790 Size of the validation set: 1310 Preprocessing We first prepare the vocabulary to be used. # The set of characters accepted in the transcription. characters = [x for x in \"abcdefghijklmnopqrstuvwxyz'?! \"] # Mapping characters to integers char_to_num = keras.layers.StringLookup(vocabulary=characters, oov_token=\"\") # Mapping integers back to original characters num_to_char = keras.layers.StringLookup( vocabulary=char_to_num.get_vocabulary(), oov_token=\"\", invert=True ) print( f\"The vocabulary is: {char_to_num.get_vocabulary()} \" f\"(size ={char_to_num.vocabulary_size()})\" ) The vocabulary is: ['', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', \"'\", '?', '!', ' '] (size =31) 2021-09-28 21:16:33.150832: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-09-28 21:16:33.692813: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0. 2021-09-28 21:16:33.692847: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9124 MB memory: -> device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:65:00.0, compute capability: 7.5 Next, we create the function that describes the transformation that we apply to each element of our dataset. # An integer scalar Tensor. The window length in samples. frame_length = 256 # An integer scalar Tensor. The number of samples to step. frame_step = 160 # An integer scalar Tensor. The size of the FFT to apply. # If not provided, uses the smallest power of 2 enclosing frame_length. fft_length = 384 def encode_single_sample(wav_file, label): ########################################### ## Process the Audio ########################################## # 1. Read wav file file = tf.io.read_file(wavs_path + wav_file + \".wav\") # 2. Decode the wav file audio, _ = tf.audio.decode_wav(file) audio = tf.squeeze(audio, axis=-1) # 3. Change type to float audio = tf.cast(audio, tf.float32) # 4. Get the spectrogram spectrogram = tf.signal.stft( audio, frame_length=frame_length, frame_step=frame_step, fft_length=fft_length ) # 5. We only need the magnitude, which can be derived by applying tf.abs spectrogram = tf.abs(spectrogram) spectrogram = tf.math.pow(spectrogram, 0.5) # 6. Normalisation means = tf.math.reduce_mean(spectrogram, 1, keepdims=True) stddevs = tf.math.reduce_std(spectrogram, 1, keepdims=True) spectrogram = (spectrogram - means) / (stddevs + 1e-10) ########################################### ## Process the label ########################################## # 7. Convert label to lowercase label = tf.strings.lower(label) # 8. Split the label label = tf.strings.unicode_split(label, input_encoding=\"UTF-8\") # 9. Map the characters in label to numbers label = char_to_num(label) # 10.
Return the spectrogram and its label as a pair of tensors: the spectrogram is the model input and the label is consumed by the CTC loss return spectrogram, label Creating Dataset objects We create a tf.data.Dataset object that yields the transformed elements, in the same order as they appeared in the input. batch_size = 32 # Define the training dataset train_dataset = tf.data.Dataset.from_tensor_slices( (list(df_train[\"file_name\"]), list(df_train[\"normalized_transcription\"])) ) train_dataset = ( train_dataset.map(encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE) .padded_batch(batch_size) .prefetch(buffer_size=tf.data.AUTOTUNE) ) # Define the validation dataset validation_dataset = tf.data.Dataset.from_tensor_slices( (list(df_val[\"file_name\"]), list(df_val[\"normalized_transcription\"])) ) validation_dataset = ( validation_dataset.map(encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE) .padded_batch(batch_size) .prefetch(buffer_size=tf.data.AUTOTUNE) ) Visualize the data Let's visualize an example in our dataset, including the audio clip, the spectrogram and the corresponding label. fig = plt.figure(figsize=(8, 5)) for batch in train_dataset.take(1): spectrogram = batch[0][0].numpy() spectrogram = np.array([np.trim_zeros(x) for x in np.transpose(spectrogram)]) label = batch[1][0] # Spectrogram label = tf.strings.reduce_join(num_to_char(label)).numpy().decode(\"utf-8\") ax = plt.subplot(2, 1, 1) ax.imshow(spectrogram, vmax=1) ax.set_title(label) ax.axis(\"off\") # Wav file = tf.io.read_file(wavs_path + list(df_train[\"file_name\"])[0] + \".wav\") audio, _ = tf.audio.decode_wav(file) audio = audio.numpy() ax = plt.subplot(2, 1, 2) plt.plot(audio) ax.set_title(\"Signal Wave\") ax.set_xlim(0, len(audio)) # LJSpeech audio is sampled at 22,050 Hz display.display(display.Audio(np.transpose(audio), rate=22050)) plt.show() 2021-09-28 21:16:34.014170: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2) (plot: spectrogram and signal wave of the first training example) Model We first define the CTC loss function. def CTCLoss(y_true, y_pred): # Compute the training-time loss value batch_len = tf.cast(tf.shape(y_true)[0], dtype=\"int64\") input_length = tf.cast(tf.shape(y_pred)[1], dtype=\"int64\") label_length = tf.cast(tf.shape(y_true)[1], dtype=\"int64\") input_length = input_length * tf.ones(shape=(batch_len, 1), dtype=\"int64\") label_length = label_length * tf.ones(shape=(batch_len, 1), dtype=\"int64\") loss = keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length) return loss We now define our model, which is similar to DeepSpeech2. def build_model(input_dim, output_dim, rnn_layers=5, rnn_units=128): \"\"\"Model similar to DeepSpeech2.\"\"\" # Model's input input_spectrogram = layers.Input((None, input_dim), name=\"input\") # Expand the dimension to use 2D CNN.
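# The spectrogram has shape (batch, time, input_dim); adding a trailing channel axis
# gives (batch, time, input_dim, 1), so the 2D convolutions below can treat
# (time, frequency) as a one-channel image.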
x = layers.Reshape((-1, input_dim, 1), name=\"expand_dim\")(input_spectrogram) # Convolution layer 1 x = layers.Conv2D( filters=32, kernel_size=[11, 41], strides=[2, 2], padding=\"same\", use_bias=False, name=\"conv_1\", )(x) x = layers.BatchNormalization(name=\"conv_1_bn\")(x) x = layers.ReLU(name=\"conv_1_relu\")(x) # Convolution layer 2 x = layers.Conv2D( filters=32, kernel_size=[11, 21], strides=[1, 2], padding=\"same\", use_bias=False, name=\"conv_2\", )(x) x = layers.BatchNormalization(name=\"conv_2_bn\")(x) x = layers.ReLU(name=\"conv_2_relu\")(x) # Reshape the resulted volume to feed the RNNs layers x = layers.Reshape((-1, x.shape[-2] * x.shape[-1]))(x) # RNN layers for i in range(1, rnn_layers + 1): recurrent = layers.GRU( units=rnn_units, activation=\"tanh\", recurrent_activation=\"sigmoid\", use_bias=True, return_sequences=True, reset_after=True, name=f\"gru_{i}\", ) x = layers.Bidirectional( recurrent, name=f\"bidirectional_{i}\", merge_mode=\"concat\" )(x) if i < rnn_layers: x = layers.Dropout(rate=0.5)(x) # Dense layer x = layers.Dense(units=rnn_units * 2, name=\"dense_1\")(x) x = layers.ReLU(name=\"dense_1_relu\")(x) x = layers.Dropout(rate=0.5)(x) # Classification layer output = layers.Dense(units=output_dim + 1, activation=\"softmax\")(x) # Model model = keras.Model(input_spectrogram, output, name=\"DeepSpeech_2\") # Optimizer opt = keras.optimizers.Adam(learning_rate=1e-4) # Compile the model and return model.compile(optimizer=opt, loss=CTCLoss) return model # Get the model model = build_model( input_dim=fft_length // 2 + 1, output_dim=char_to_num.vocabulary_size(), rnn_units=512, ) model.summary(line_length=110) Model: \"DeepSpeech_2\" ______________________________________________________________________________________________________________ Layer (type) Output Shape Param # ============================================================================================================== input (InputLayer) [(None, None, 193)] 0 ______________________________________________________________________________________________________________ expand_dim (Reshape) (None, None, 193, 1) 0 ______________________________________________________________________________________________________________ conv_1 (Conv2D) (None, None, 97, 32) 14432 ______________________________________________________________________________________________________________ conv_1_bn (BatchNormalization) (None, None, 97, 32) 128 ______________________________________________________________________________________________________________ conv_1_relu (ReLU) (None, None, 97, 32) 0 ______________________________________________________________________________________________________________ conv_2 (Conv2D) (None, None, 49, 32) 236544 ______________________________________________________________________________________________________________ conv_2_bn (BatchNormalization) (None, None, 49, 32) 128 ______________________________________________________________________________________________________________ conv_2_relu (ReLU) (None, None, 49, 32) 0 ______________________________________________________________________________________________________________ reshape (Reshape) (None, None, 1568) 0 ______________________________________________________________________________________________________________ bidirectional_1 (Bidirectional) (None, None, 1024) 6395904 ______________________________________________________________________________________________________________ dropout (Dropout) (None, None, 1024) 0 
______________________________________________________________________________________________________________ bidirectional_2 (Bidirectional) (None, None, 1024) 4724736 ______________________________________________________________________________________________________________ dropout_1 (Dropout) (None, None, 1024) 0 ______________________________________________________________________________________________________________ bidirectional_3 (Bidirectional) (None, None, 1024) 4724736 ______________________________________________________________________________________________________________ dropout_2 (Dropout) (None, None, 1024) 0 ______________________________________________________________________________________________________________ bidirectional_4 (Bidirectional) (None, None, 1024) 4724736 ______________________________________________________________________________________________________________ dropout_3 (Dropout) (None, None, 1024) 0 ______________________________________________________________________________________________________________ bidirectional_5 (Bidirectional) (None, None, 1024) 4724736 ______________________________________________________________________________________________________________ dense_1 (Dense) (None, None, 1024) 1049600 ______________________________________________________________________________________________________________ dense_1_relu (ReLU) (None, None, 1024) 0 ______________________________________________________________________________________________________________ dropout_4 (Dropout) (None, None, 1024) 0 ______________________________________________________________________________________________________________ dense (Dense) (None, None, 32) 32800 ============================================================================================================== Total params: 26,628,480 Trainable params: 26,628,352 Non-trainable params: 128 ______________________________________________________________________________________________________________ Training and Evaluating # A utility function to decode the output of the network def decode_batch_predictions(pred): input_len = np.ones(pred.shape[0]) * pred.shape[1] # Use greedy search. For complex tasks, you can use beam search results = keras.backend.ctc_decode(pred, input_length=input_len, greedy=True)[0][0] # Iterate over the results and get back the text output_text = [] for result in results: result = tf.strings.reduce_join(num_to_char(result)).numpy().decode(\"utf-8\") output_text.append(result) return output_text # A callback class to output a few transcriptions during training class CallbackEval(keras.callbacks.Callback): \"\"\"Displays a batch of outputs after every epoch.\"\"\" def __init__(self, dataset): super().__init__() self.dataset = dataset def on_epoch_end(self, epoch: int, logs=None): predictions = [] targets = [] for batch in self.dataset: X, y = batch batch_predictions = model.predict(X) batch_predictions = decode_batch_predictions(batch_predictions) predictions.extend(batch_predictions) for label in y: label = ( tf.strings.reduce_join(num_to_char(label)).numpy().decode(\"utf-8\") ) targets.append(label) wer_score = wer(targets, predictions) print(\"-\" * 100) print(f\"Word Error Rate: {wer_score:.4f}\") print(\"-\" * 100) for i in np.random.randint(0, len(predictions), 2): print(f\"Target : {targets[i]}\") print(f\"Prediction: {predictions[i]}\") print(\"-\" * 100) Let's start the training process. # Define the number of epochs. 
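# Note: a single epoch is used here only to keep the demo fast; as mentioned in the
# Conclusion below, around 50 epochs are needed before the transcriptions become usable.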
epochs = 1 # Callback function to check transcription on the val set. validation_callback = CallbackEval(validation_dataset) # Train the model history = model.fit( train_dataset, validation_data=validation_dataset, epochs=epochs, callbacks=[validation_callback], ) 2021-09-28 21:16:48.067448: I tensorflow/stream_executor/cuda/cuda_dnn.cc:369] Loaded cuDNN version 8100 369/369 [==============================] - 586s 2s/step - loss: 300.4624 - val_loss: 296.1459 ---------------------------------------------------------------------------------------------------- Word Error Rate: 0.9998 ---------------------------------------------------------------------------------------------------- Target : the procession traversed ratcliffe twice halting for a quarter of an hour in front of the victims' dwelling Prediction: s ---------------------------------------------------------------------------------------------------- Target : some difficulty then arose as to gaining admission to the strong room and it was arranged that a man may another custom house clerk Prediction: s ---------------------------------------------------------------------------------------------------- Inference # Let's check results on more validation samples predictions = [] targets = [] for batch in validation_dataset: X, y = batch batch_predictions = model.predict(X) batch_predictions = decode_batch_predictions(batch_predictions) predictions.extend(batch_predictions) for label in y: label = tf.strings.reduce_join(num_to_char(label)).numpy().decode(\"utf-8\") targets.append(label) wer_score = wer(targets, predictions) print(\"-\" * 100) print(f\"Word Error Rate: {wer_score:.4f}\") print(\"-\" * 100) for i in np.random.randint(0, len(predictions), 5): print(f\"Target : {targets[i]}\") print(f\"Prediction: {predictions[i]}\") print(\"-\" * 100) ---------------------------------------------------------------------------------------------------- Word Error Rate: 0.9998 ---------------------------------------------------------------------------------------------------- Target : two of the nine agents returned to their rooms the seven others proceeded to an establishment called the cellar coffee house Prediction: ---------------------------------------------------------------------------------------------------- Target : a scaffold was erected in front of that prison for the execution of several convicts named by the recorder Prediction: sss ---------------------------------------------------------------------------------------------------- Target : it was perpetrated upon a respectable country solicitor Prediction: ss ---------------------------------------------------------------------------------------------------- Target : oswald like all marine recruits received training on the rifle range at distances up to five hundred yards Prediction: ---------------------------------------------------------------------------------------------------- Target : chief rowley testified that agents on duty in such a situation usually stay within the building during their relief Prediction: s ---------------------------------------------------------------------------------------------------- Conclusion In practice, you should train for around 50 epochs or more. Each epoch takes approximately 5-6mn using a GeForce RTX 2080 Ti GPU. The model we trained at 50 epochs has a Word Error Rate (WER) ≈ 16% to 17%. 
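For a training run of that length it is worth saving the best weights as you go. Below is a minimal sketch, assuming the model, datasets and validation_callback defined above are still in scope; the checkpoint file name, the patience value and the callbacks themselves are illustrative additions rather than part of the original example.

# Hedged sketch: a longer training run with checkpointing and early stopping.
checkpoint_cb = keras.callbacks.ModelCheckpoint(
    "asr_ctc_best.h5",  # arbitrary file name
    monitor="val_loss",
    save_best_only=True,
    save_weights_only=True,
)
early_stopping_cb = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5)
history = model.fit(
    train_dataset,
    validation_data=validation_dataset,
    epochs=50,
    callbacks=[validation_callback, checkpoint_cb, early_stopping_cb],
)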
Some of the transcriptions around epoch 50: Audio file: LJ017-0009.wav - Target : sir thomas overbury was undoubtedly poisoned by lord rochester in the reign of james the first - Prediction: cer thomas overbery was undoubtedly poisoned by lordrochester in the reign of james the first Audio file: LJ003-0340.wav - Target : the committee does not seem to have yet understood that newgate could be only and properly replaced - Prediction: the committee does not seem to have yet understood that newgate could be only and proberly replace Audio file: LJ011-0136.wav - Target : still no sentence of death was carried out for the offense and in eighteen thirtytwo - Prediction: still no sentence of death was carried out for the offense and in eighteen thirtytwo Training a sequence-to-sequence Transformer for automatic speech recognition. Introduction Automatic speech recognition (ASR) consists of transcribing audio speech segments into text. ASR can be treated as a sequence-to-sequence problem, where the audio can be represented as a sequence of feature vectors and the text as a sequence of characters, words, or subword tokens. For this demonstration, we will use the LJSpeech dataset from the LibriVox project. It consists of short audio clips of a single speaker reading passages from 7 non-fiction books. Our model will be similar to the original Transformer (both encoder and decoder) as proposed in the paper, \"Attention is All You Need\". References: Attention is All You Need Very Deep Self-Attention Networks for End-to-End Speech Recognition Speech Transformers LJSpeech Dataset import os import random from glob import glob import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Define the Transformer Input Layer When processing past target tokens for the decoder, we compute the sum of position embeddings and token embeddings. When processing audio features, we apply convolutional layers to downsample them (via convolution strides) and process local relationships.
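As a quick check on the downsampling factor: the speech feature embedding defined below stacks three Conv1D layers with stride 2, so the time axis is shortened by a factor of 2 × 2 × 2 = 8. The standalone snippet below (illustrative only, run on random data) confirms this; 2754 is the padded spectrogram length and 129 the number of frequency bins used later in this example.

# With "same" padding and stride 2, each convolution maps a sequence of length L to
# one of length ceil(L / 2), so three of them give ceil(L / 8).
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 2754, 129))  # (batch, time, features) dummy spectrogram
for _ in range(3):
    x = layers.Conv1D(64, 11, strides=2, padding="same", activation="relu")(x)
print(x.shape)  # (1, 345, 64), since ceil(2754 / 8) = 345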
class TokenEmbedding(layers.Layer): def __init__(self, num_vocab=1000, maxlen=100, num_hid=64): super().__init__() self.emb = tf.keras.layers.Embedding(num_vocab, num_hid) self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=num_hid) def call(self, x): maxlen = tf.shape(x)[-1] x = self.emb(x) positions = tf.range(start=0, limit=maxlen, delta=1) positions = self.pos_emb(positions) return x + positions class SpeechFeatureEmbedding(layers.Layer): def __init__(self, num_hid=64, maxlen=100): super().__init__() self.conv1 = tf.keras.layers.Conv1D( num_hid, 11, strides=2, padding=\"same\", activation=\"relu\" ) self.conv2 = tf.keras.layers.Conv1D( num_hid, 11, strides=2, padding=\"same\", activation=\"relu\" ) self.conv3 = tf.keras.layers.Conv1D( num_hid, 11, strides=2, padding=\"same\", activation=\"relu\" ) self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=num_hid) def call(self, x): x = self.conv1(x) x = self.conv2(x) return self.conv3(x) Transformer Encoder Layer class TransformerEncoder(layers.Layer): def __init__(self, embed_dim, num_heads, feed_forward_dim, rate=0.1): super().__init__() self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim) self.ffn = keras.Sequential( [ layers.Dense(feed_forward_dim, activation=\"relu\"), layers.Dense(embed_dim), ] ) self.layernorm1 = layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = layers.LayerNormalization(epsilon=1e-6) self.dropout1 = layers.Dropout(rate) self.dropout2 = layers.Dropout(rate) def call(self, inputs, training): attn_output = self.att(inputs, inputs) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(inputs + attn_output) ffn_output = self.ffn(out1) ffn_output = self.dropout2(ffn_output, training=training) return self.layernorm2(out1 + ffn_output) Transformer Decoder Layer class TransformerDecoder(layers.Layer): def __init__(self, embed_dim, num_heads, feed_forward_dim, dropout_rate=0.1): super().__init__() self.layernorm1 = layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = layers.LayerNormalization(epsilon=1e-6) self.layernorm3 = layers.LayerNormalization(epsilon=1e-6) self.self_att = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim ) self.enc_att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim) self.self_dropout = layers.Dropout(0.5) self.enc_dropout = layers.Dropout(0.1) self.ffn_dropout = layers.Dropout(0.1) self.ffn = keras.Sequential( [ layers.Dense(feed_forward_dim, activation=\"relu\"), layers.Dense(embed_dim), ] ) def causal_attention_mask(self, batch_size, n_dest, n_src, dtype): \"\"\"Masks the upper half of the dot product matrix in self attention. This prevents flow of information from future tokens to current token. 1's in the lower triangle, counting from the lower right corner. 
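For example, with n_dest = n_src = 3 the mask is [[1, 0, 0], [1, 1, 0], [1, 1, 1]], i.e. position i may only attend to positions j <= i.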
\"\"\" i = tf.range(n_dest)[:, None] j = tf.range(n_src) m = i >= j - n_src + n_dest mask = tf.cast(m, dtype) mask = tf.reshape(mask, [1, n_dest, n_src]) mult = tf.concat( [tf.expand_dims(batch_size, -1), tf.constant([1, 1], dtype=tf.int32)], 0 ) return tf.tile(mask, mult) def call(self, enc_out, target): input_shape = tf.shape(target) batch_size = input_shape[0] seq_len = input_shape[1] causal_mask = self.causal_attention_mask(batch_size, seq_len, seq_len, tf.bool) target_att = self.self_att(target, target, attention_mask=causal_mask) target_norm = self.layernorm1(target + self.self_dropout(target_att)) enc_out = self.enc_att(target_norm, enc_out) enc_out_norm = self.layernorm2(self.enc_dropout(enc_out) + target_norm) ffn_out = self.ffn(enc_out_norm) ffn_out_norm = self.layernorm3(enc_out_norm + self.ffn_dropout(ffn_out)) return ffn_out_norm Complete the Transformer model Our model takes audio spectrograms as inputs and predicts a sequence of characters. During training, we give the decoder the target character sequence shifted to the left as input. During inference, the decoder uses its own past predictions to predict the next token. class Transformer(keras.Model): def __init__( self, num_hid=64, num_head=2, num_feed_forward=128, source_maxlen=100, target_maxlen=100, num_layers_enc=4, num_layers_dec=1, num_classes=10, ): super().__init__() self.loss_metric = keras.metrics.Mean(name=\"loss\") self.num_layers_enc = num_layers_enc self.num_layers_dec = num_layers_dec self.target_maxlen = target_maxlen self.num_classes = num_classes self.enc_input = SpeechFeatureEmbedding(num_hid=num_hid, maxlen=source_maxlen) self.dec_input = TokenEmbedding( num_vocab=num_classes, maxlen=target_maxlen, num_hid=num_hid ) self.encoder = keras.Sequential( [self.enc_input] + [ TransformerEncoder(num_hid, num_head, num_feed_forward) for _ in range(num_layers_enc) ] ) for i in range(num_layers_dec): setattr( self, f\"dec_layer_{i}\", TransformerDecoder(num_hid, num_head, num_feed_forward), ) self.classifier = layers.Dense(num_classes) def decode(self, enc_out, target): y = self.dec_input(target) for i in range(self.num_layers_dec): y = getattr(self, f\"dec_layer_{i}\")(enc_out, y) return y def call(self, inputs): source = inputs[0] target = inputs[1] x = self.encoder(source) y = self.decode(x, target) return self.classifier(y) @property def metrics(self): return [self.loss_metric] def train_step(self, batch): \"\"\"Processes one batch inside model.fit().\"\"\" source = batch[\"source\"] target = batch[\"target\"] dec_input = target[:, :-1] dec_target = target[:, 1:] with tf.GradientTape() as tape: preds = self([source, dec_input]) one_hot = tf.one_hot(dec_target, depth=self.num_classes) mask = tf.math.logical_not(tf.math.equal(dec_target, 0)) loss = self.compiled_loss(one_hot, preds, sample_weight=mask) trainable_vars = self.trainable_variables gradients = tape.gradient(loss, trainable_vars) self.optimizer.apply_gradients(zip(gradients, trainable_vars)) self.loss_metric.update_state(loss) return {\"loss\": self.loss_metric.result()} def test_step(self, batch): source = batch[\"source\"] target = batch[\"target\"] dec_input = target[:, :-1] dec_target = target[:, 1:] preds = self([source, dec_input]) one_hot = tf.one_hot(dec_target, depth=self.num_classes) mask = tf.math.logical_not(tf.math.equal(dec_target, 0)) loss = self.compiled_loss(one_hot, preds, sample_weight=mask) self.loss_metric.update_state(loss) return {\"loss\": self.loss_metric.result()} def generate(self, source, target_start_token_idx): 
\"\"\"Performs inference over one batch of inputs using greedy decoding.\"\"\" bs = tf.shape(source)[0] enc = self.encoder(source) dec_input = tf.ones((bs, 1), dtype=tf.int32) * target_start_token_idx dec_logits = [] for i in range(self.target_maxlen - 1): dec_out = self.decode(enc, dec_input) logits = self.classifier(dec_out) logits = tf.argmax(logits, axis=-1, output_type=tf.int32) last_logit = tf.expand_dims(logits[:, -1], axis=-1) dec_logits.append(last_logit) dec_input = tf.concat([dec_input, last_logit], axis=-1) return dec_input Download the dataset Note: This requires ~3.6 GB of disk space and takes ~5 minutes for the extraction of files. keras.utils.get_file( os.path.join(os.getcwd(), \"data.tar.gz\"), \"https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2\", extract=True, archive_format=\"tar\", cache_dir=\".\", ) saveto = \"./datasets/LJSpeech-1.1\" wavs = glob(\"{}/**/*.wav\".format(saveto), recursive=True) id_to_text = {} with open(os.path.join(saveto, \"metadata.csv\"), encoding=\"utf-8\") as f: for line in f: id = line.strip().split(\"|\")[0] text = line.strip().split(\"|\")[2] id_to_text[id] = text def get_data(wavs, id_to_text, maxlen=50): \"\"\" returns mapping of audio paths and transcription texts \"\"\" data = [] for w in wavs: id = w.split(\"/\")[-1].split(\".\")[0] if len(id_to_text[id]) < maxlen: data.append({\"audio\": w, \"text\": id_to_text[id]}) return data Downloading data from https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2 2748579840/2748572632 [==============================] - 57s 0us/step Preprocess the dataset class VectorizeChar: def __init__(self, max_len=50): self.vocab = ( [\"-\", \"#\", \"<\", \">\"] + [chr(i + 96) for i in range(1, 27)] + [\" \", \".\", \",\", \"?\"] ) self.max_len = max_len self.char_to_idx = {} for i, ch in enumerate(self.vocab): self.char_to_idx[ch] = i def __call__(self, text): text = text.lower() text = text[: self.max_len - 2] text = \"<\" + text + \">\" pad_len = self.max_len - len(text) return [self.char_to_idx.get(ch, 1) for ch in text] + [0] * pad_len def get_vocabulary(self): return self.vocab max_target_len = 200 # all transcripts in out data are < 200 characters data = get_data(wavs, id_to_text, max_target_len) vectorizer = VectorizeChar(max_target_len) print(\"vocab size\", len(vectorizer.get_vocabulary())) def create_text_ds(data): texts = [_[\"text\"] for _ in data] text_ds = [vectorizer(t) for t in texts] text_ds = tf.data.Dataset.from_tensor_slices(text_ds) return text_ds def path_to_audio(path): # spectrogram using stft audio = tf.io.read_file(path) audio, _ = tf.audio.decode_wav(audio, 1) audio = tf.squeeze(audio, axis=-1) stfts = tf.signal.stft(audio, frame_length=200, frame_step=80, fft_length=256) x = tf.math.pow(tf.abs(stfts), 0.5) # normalisation means = tf.math.reduce_mean(x, 1, keepdims=True) stddevs = tf.math.reduce_std(x, 1, keepdims=True) x = (x - means) / stddevs audio_len = tf.shape(x)[0] # padding to 10 seconds pad_len = 2754 paddings = tf.constant([[0, pad_len], [0, 0]]) x = tf.pad(x, paddings, \"CONSTANT\")[:pad_len, :] return x def create_audio_ds(data): flist = [_[\"audio\"] for _ in data] audio_ds = tf.data.Dataset.from_tensor_slices(flist) audio_ds = audio_ds.map( path_to_audio, num_parallel_calls=tf.data.AUTOTUNE ) return audio_ds def create_tf_dataset(data, bs=4): audio_ds = create_audio_ds(data) text_ds = create_text_ds(data) ds = tf.data.Dataset.zip((audio_ds, text_ds)) ds = ds.map(lambda x, y: {\"source\": x, \"target\": y}) ds = ds.batch(bs) ds = 
ds.prefetch(tf.data.AUTOTUNE) return ds split = int(len(data) * 0.99) train_data = data[:split] test_data = data[split:] ds = create_tf_dataset(train_data, bs=64) val_ds = create_tf_dataset(test_data, bs=4) vocab size 34 Callbacks to display predictions class DisplayOutputs(keras.callbacks.Callback): def __init__( self, batch, idx_to_token, target_start_token_idx=27, target_end_token_idx=28 ): \"\"\"Displays a batch of outputs after every epoch Args: batch: A test batch containing the keys \"source\" and \"target\" idx_to_token: A List containing the vocabulary tokens corresponding to their indices target_start_token_idx: A start token index in the target vocabulary target_end_token_idx: An end token index in the target vocabulary \"\"\" self.batch = batch self.target_start_token_idx = target_start_token_idx self.target_end_token_idx = target_end_token_idx self.idx_to_char = idx_to_token def on_epoch_end(self, epoch, logs=None): if epoch % 5 != 0: return source = self.batch[\"source\"] target = self.batch[\"target\"].numpy() bs = tf.shape(source)[0] preds = self.model.generate(source, self.target_start_token_idx) preds = preds.numpy() for i in range(bs): target_text = \"\".join([self.idx_to_char[_] for _ in target[i, :]]) prediction = \"\" for idx in preds[i, :]: prediction += self.idx_to_char[idx] if idx == self.target_end_token_idx: break print(f\"target: {target_text.replace('-','')}\") print(f\"prediction: {prediction}\n\") Learning rate schedule class CustomSchedule(keras.optimizers.schedules.LearningRateSchedule): def __init__( self, init_lr=0.00001, lr_after_warmup=0.001, final_lr=0.00001, warmup_epochs=15, decay_epochs=85, steps_per_epoch=203, ): super().__init__() self.init_lr = init_lr self.lr_after_warmup = lr_after_warmup self.final_lr = final_lr self.warmup_epochs = warmup_epochs self.decay_epochs = decay_epochs self.steps_per_epoch = steps_per_epoch def calculate_lr(self, epoch): \"\"\" linear warm up - linear decay \"\"\" warmup_lr = ( self.init_lr + ((self.lr_after_warmup - self.init_lr) / (self.warmup_epochs - 1)) * epoch ) decay_lr = tf.math.maximum( self.final_lr, self.lr_after_warmup - (epoch - self.warmup_epochs) * (self.lr_after_warmup - self.final_lr) / (self.decay_epochs), ) return tf.math.minimum(warmup_lr, decay_lr) def __call__(self, step): epoch = step // self.steps_per_epoch return self.calculate_lr(epoch) Create & train the end-to-end model batch = next(iter(val_ds)) # The vocabulary to convert predicted indices into characters idx_to_char = vectorizer.get_vocabulary() display_cb = DisplayOutputs( batch, idx_to_char, target_start_token_idx=2, target_end_token_idx=3 ) # set the arguments as per vocabulary index for '<' and '>' model = Transformer( num_hid=200, num_head=2, num_feed_forward=400, target_maxlen=max_target_len, num_layers_enc=4, num_layers_dec=1, num_classes=34, ) loss_fn = tf.keras.losses.CategoricalCrossentropy( from_logits=True, label_smoothing=0.1, ) learning_rate = CustomSchedule( init_lr=0.00001, lr_after_warmup=0.001, final_lr=0.00001, warmup_epochs=15, decay_epochs=85, steps_per_epoch=len(ds), ) optimizer = keras.optimizers.Adam(learning_rate) model.compile(optimizer=optimizer, loss=loss_fn) history = model.fit(ds, validation_data=val_ds, callbacks=[display_cb], epochs=1) 203/203 [==============================] - 349s 2s/step - loss: 1.7437 - val_loss: 1.4650 target: prediction: prediction: prediction: prediction: prediction: target: prediction: Inversion of audio from mel-spectograms using the MelGAN architecture and feature matching. 
Introduction Autoregressive vocoders have been ubiquitous for a majority of the history of speech processing, but for most of their existence they have lacked parallelism. MelGAN is a non-autoregressive, fully convolutional vocoder architecture used for purposes ranging from spectral inversion and speech enhancement to present-day state-of-the-art speech synthesis when used as a decoder with models like Tacotron2 or FastSpeech that convert text to mel spectrograms. In this tutorial, we will have a look at the MelGAN architecture and how it can achieve fast spectral inversion, i.e. conversion of spectrograms to audio waves. The MelGAN implemented in this tutorial is similar to the original implementation with only the difference of method of padding for convolutions where we will use 'same' instead of reflect padding. Importing and Defining Hyperparameters !pip install -qqq tensorflow_addons !pip install -qqq tensorflow-io import tensorflow as tf import tensorflow_io as tfio from tensorflow import keras from tensorflow.keras import layers from tensorflow_addons import layers as addon_layers # Setting logger level to avoid input shape warnings tf.get_logger().setLevel(\"ERROR\") # Defining hyperparameters DESIRED_SAMPLES = 8192 LEARNING_RATE_GEN = 1e-5 LEARNING_RATE_DISC = 1e-6 BATCH_SIZE = 16 mse = keras.losses.MeanSquaredError() mae = keras.losses.MeanAbsoluteError() |████████████████████████████████| 1.1 MB 5.1 MB/s |████████████████████████████████| 22.7 MB 1.7 MB/s |████████████████████████████████| 2.1 MB 36.2 MB/s Loading the Dataset This example uses the LJSpeech dataset. The LJSpeech dataset is primarily used for text-to-speech and consists of 13,100 discrete speech samples taken from 7 non-fiction books, having a total length of approximately 24 hours. The MelGAN training is only concerned with the audio waves so we process only the WAV files and ignore the audio annotations. !wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2 !tar -xf /content/LJSpeech-1.1.tar.bz2 --2021-09-16 11:45:24-- https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2 Resolving data.keithito.com (data.keithito.com)... 174.138.79.61 Connecting to data.keithito.com (data.keithito.com)|174.138.79.61|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 2748572632 (2.6G) [application/octet-stream] Saving to: ‘LJSpeech-1.1.tar.bz2’ LJSpeech-1.1.tar.bz 100%[===================>] 2.56G 68.3MB/s in 36s 2021-09-16 11:46:01 (72.2 MB/s) - ‘LJSpeech-1.1.tar.bz2’ saved [2748572632/2748572632] We create a tf.data.Dataset to load and process the audio files on the fly. The preprocess() function takes the file path as input and returns two instances of the wave, one for input and one as the ground truth for comparsion. The input wave will be mapped to a spectrogram using the custom MelSpec layer as shown later in this example. # Splitting the dataset into training and testing splits wavs = tf.io.gfile.glob(\"LJSpeech-1.1/wavs/*.wav\") print(f\"Number of audio files: {len(wavs)}\") # Mapper function for loading the audio. 
This function returns two instances of the wave def preprocess(filename): audio = tf.audio.decode_wav(tf.io.read_file(filename), 1, DESIRED_SAMPLES).audio return audio, audio # Create tf.data.Dataset objects and apply preprocessing train_dataset = tf.data.Dataset.from_tensor_slices((wavs,)) train_dataset = train_dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE) Number of audio files: 13100 Defining custom layers for MelGAN The MelGAN architecture consists of 3 main modules: The residual block Dilated convolutional block Discriminator block MelGAN Since the network takes a mel-spectrogram as input, we will create an additional custom layer which can convert the raw audio wave to a spectrogram on-the-fly. We use the raw audio tensor from train_dataset and map it to a mel-spectrogram using the MelSpec layer below. # Custom keras layer for on-the-fly audio to spectrogram conversion class MelSpec(layers.Layer): def __init__( self, frame_length=1024, frame_step=256, fft_length=None, sampling_rate=22050, num_mel_channels=80, freq_min=125, freq_max=7600, **kwargs, ): super().__init__(**kwargs) self.frame_length = frame_length self.frame_step = frame_step self.fft_length = fft_length self.sampling_rate = sampling_rate self.num_mel_channels = num_mel_channels self.freq_min = freq_min self.freq_max = freq_max # Defining mel filter. This filter will be multiplied with the STFT output self.mel_filterbank = tf.signal.linear_to_mel_weight_matrix( num_mel_bins=self.num_mel_channels, num_spectrogram_bins=self.frame_length // 2 + 1, sample_rate=self.sampling_rate, lower_edge_hertz=self.freq_min, upper_edge_hertz=self.freq_max, ) def call(self, audio, training=True): # We will only perform the transformation during training. if training: # Taking the Short Time Fourier Transform. Ensure that the audio is padded. # In the paper, the STFT output is padded using the 'REFLECT' strategy. stft = tf.signal.stft( tf.squeeze(audio, -1), self.frame_length, self.frame_step, self.fft_length, pad_end=True, ) # Taking the magnitude of the STFT output magnitude = tf.abs(stft) # Multiplying the Mel-filterbank with the magnitude and scaling it using the db scale mel = tf.matmul(tf.square(magnitude), self.mel_filterbank) log_mel_spec = tfio.audio.dbscale(mel, top_db=80) return log_mel_spec else: return audio def get_config(self): config = super(MelSpec, self).get_config() config.update( { \"frame_length\": self.frame_length, \"frame_step\": self.frame_step, \"fft_length\": self.fft_length, \"sampling_rate\": self.sampling_rate, \"num_mel_channels\": self.num_mel_channels, \"freq_min\": self.freq_min, \"freq_max\": self.freq_max, } ) return config The residual convolutional block extensively uses dilations and has a total receptive field of 27 timesteps per block. The dilations must grow as a power of the kernel_size to ensure reduction of hissing noise in the output. The network proposed by the paper is as follows: ConvBlock # Creating the residual stack block def residual_stack(input, filters): \"\"\"Convolutional residual stack with weight normalization. Args: filter: int, determines filter size for the residual stack. Returns: Residual stack output. 
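Note: dilated convolutions with rates 1, 3 and 9 (powers of the kernel size 3) are each followed by a dilation-1 convolution and a residual addition, mirroring the MelGAN generator's residual stack.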
\"\"\" c1 = addon_layers.WeightNormalization( layers.Conv1D(filters, 3, dilation_rate=1, padding=\"same\"), data_init=False )(input) lrelu1 = layers.LeakyReLU()(c1) c2 = addon_layers.WeightNormalization( layers.Conv1D(filters, 3, dilation_rate=1, padding=\"same\"), data_init=False )(lrelu1) add1 = layers.Add()([c2, input]) lrelu2 = layers.LeakyReLU()(add1) c3 = addon_layers.WeightNormalization( layers.Conv1D(filters, 3, dilation_rate=3, padding=\"same\"), data_init=False )(lrelu2) lrelu3 = layers.LeakyReLU()(c3) c4 = addon_layers.WeightNormalization( layers.Conv1D(filters, 3, dilation_rate=1, padding=\"same\"), data_init=False )(lrelu3) add2 = layers.Add()([add1, c4]) lrelu4 = layers.LeakyReLU()(add2) c5 = addon_layers.WeightNormalization( layers.Conv1D(filters, 3, dilation_rate=9, padding=\"same\"), data_init=False )(lrelu4) lrelu5 = layers.LeakyReLU()(c5) c6 = addon_layers.WeightNormalization( layers.Conv1D(filters, 3, dilation_rate=1, padding=\"same\"), data_init=False )(lrelu5) add3 = layers.Add()([c6, add2]) return add3 Each convolutional block uses the dilations offered by the residual stack and upsamples the input data by the upsampling_factor. # Dilated convolutional block consisting of the Residual stack def conv_block(input, conv_dim, upsampling_factor): \"\"\"Dilated Convolutional Block with weight normalization. Args: conv_dim: int, determines filter size for the block. upsampling_factor: int, scale for upsampling. Returns: Dilated convolution block. \"\"\" conv_t = addon_layers.WeightNormalization( layers.Conv1DTranspose(conv_dim, 16, upsampling_factor, padding=\"same\"), data_init=False, )(input) lrelu1 = layers.LeakyReLU()(conv_t) res_stack = residual_stack(lrelu1, conv_dim) lrelu2 = layers.LeakyReLU()(res_stack) return lrelu2 The discriminator block consists of convolutions and downsampling layers. This block is essential for the implementation of the feature matching technique. Each discriminator outputs a list of feature maps that will be compared during training to compute the feature matching loss. 
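To make the feature-matching idea concrete, here is a minimal sketch of one way such a loss can be computed from the nested lists of feature maps returned by the discriminator defined below; it reuses the mae loss object created earlier and is an illustration, not necessarily the exact formulation used later in the example.

# Sketch of a feature-matching loss: L1 distance between the intermediate feature maps
# produced for real and generated audio, accumulated over all layers of all three
# discriminator scales. `real_pred` and `fake_pred` are assumed to be the nested lists
# returned by the discriminator model (one list of feature maps per scale).
def feature_matching_loss(real_pred, fake_pred):
    fm_loss = 0.0
    for real_scale, fake_scale in zip(real_pred, fake_pred):
        # The last element of each list is the discriminator's output, not a feature map.
        for real_fmap, fake_fmap in zip(real_scale[:-1], fake_scale[:-1]):
            fm_loss += mae(real_fmap, fake_fmap)
    return fm_loss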
def discriminator_block(input): conv1 = addon_layers.WeightNormalization( layers.Conv1D(16, 15, 1, \"same\"), data_init=False )(input) lrelu1 = layers.LeakyReLU()(conv1) conv2 = addon_layers.WeightNormalization( layers.Conv1D(64, 41, 4, \"same\", groups=4), data_init=False )(lrelu1) lrelu2 = layers.LeakyReLU()(conv2) conv3 = addon_layers.WeightNormalization( layers.Conv1D(256, 41, 4, \"same\", groups=16), data_init=False )(lrelu2) lrelu3 = layers.LeakyReLU()(conv3) conv4 = addon_layers.WeightNormalization( layers.Conv1D(1024, 41, 4, \"same\", groups=64), data_init=False )(lrelu3) lrelu4 = layers.LeakyReLU()(conv4) conv5 = addon_layers.WeightNormalization( layers.Conv1D(1024, 41, 4, \"same\", groups=256), data_init=False )(lrelu4) lrelu5 = layers.LeakyReLU()(conv5) conv6 = addon_layers.WeightNormalization( layers.Conv1D(1024, 5, 1, \"same\"), data_init=False )(lrelu5) lrelu6 = layers.LeakyReLU()(conv6) conv7 = addon_layers.WeightNormalization( layers.Conv1D(1, 3, 1, \"same\"), data_init=False )(lrelu6) return [lrelu1, lrelu2, lrelu3, lrelu4, lrelu5, lrelu6, conv7] Create the generator def create_generator(input_shape): inp = keras.Input(input_shape) x = MelSpec()(inp) x = layers.Conv1D(512, 7, padding=\"same\")(x) x = layers.LeakyReLU()(x) x = conv_block(x, 256, 8) x = conv_block(x, 128, 8) x = conv_block(x, 64, 2) x = conv_block(x, 32, 2) x = addon_layers.WeightNormalization( layers.Conv1D(1, 7, padding=\"same\", activation=\"tanh\") )(x) return keras.Model(inp, x) # We use a dynamic input shape for the generator since the model is fully convolutional generator = create_generator((None, 1)) generator.summary() Model: \"model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ mel_spec (MelSpec) (None, None, 80) 0 input_1[0][0] __________________________________________________________________________________________________ conv1d (Conv1D) (None, None, 512) 287232 mel_spec[0][0] __________________________________________________________________________________________________ leaky_re_lu (LeakyReLU) (None, None, 512) 0 conv1d[0][0] __________________________________________________________________________________________________ weight_normalization (WeightNor (None, None, 256) 2097921 leaky_re_lu[0][0] __________________________________________________________________________________________________ leaky_re_lu_1 (LeakyReLU) (None, None, 256) 0 weight_normalization[0][0] __________________________________________________________________________________________________ weight_normalization_1 (WeightN (None, None, 256) 197121 leaky_re_lu_1[0][0] __________________________________________________________________________________________________ leaky_re_lu_2 (LeakyReLU) (None, None, 256) 0 weight_normalization_1[0][0] __________________________________________________________________________________________________ weight_normalization_2 (WeightN (None, None, 256) 197121 leaky_re_lu_2[0][0] __________________________________________________________________________________________________ add (Add) (None, None, 256) 0 weight_normalization_2[0][0] leaky_re_lu_1[0][0] __________________________________________________________________________________________________ 
leaky_re_lu_3 (LeakyReLU) (None, None, 256) 0 add[0][0] __________________________________________________________________________________________________ weight_normalization_3 (WeightN (None, None, 256) 197121 leaky_re_lu_3[0][0] __________________________________________________________________________________________________ leaky_re_lu_4 (LeakyReLU) (None, None, 256) 0 weight_normalization_3[0][0] __________________________________________________________________________________________________ weight_normalization_4 (WeightN (None, None, 256) 197121 leaky_re_lu_4[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, None, 256) 0 add[0][0] weight_normalization_4[0][0] __________________________________________________________________________________________________ leaky_re_lu_5 (LeakyReLU) (None, None, 256) 0 add_1[0][0] __________________________________________________________________________________________________ weight_normalization_5 (WeightN (None, None, 256) 197121 leaky_re_lu_5[0][0] __________________________________________________________________________________________________ leaky_re_lu_6 (LeakyReLU) (None, None, 256) 0 weight_normalization_5[0][0] __________________________________________________________________________________________________ weight_normalization_6 (WeightN (None, None, 256) 197121 leaky_re_lu_6[0][0] __________________________________________________________________________________________________ add_2 (Add) (None, None, 256) 0 weight_normalization_6[0][0] add_1[0][0] __________________________________________________________________________________________________ leaky_re_lu_7 (LeakyReLU) (None, None, 256) 0 add_2[0][0] __________________________________________________________________________________________________ weight_normalization_7 (WeightN (None, None, 128) 524673 leaky_re_lu_7[0][0] __________________________________________________________________________________________________ leaky_re_lu_8 (LeakyReLU) (None, None, 128) 0 weight_normalization_7[0][0] __________________________________________________________________________________________________ weight_normalization_8 (WeightN (None, None, 128) 49409 leaky_re_lu_8[0][0] __________________________________________________________________________________________________ leaky_re_lu_9 (LeakyReLU) (None, None, 128) 0 weight_normalization_8[0][0] __________________________________________________________________________________________________ weight_normalization_9 (WeightN (None, None, 128) 49409 leaky_re_lu_9[0][0] __________________________________________________________________________________________________ add_3 (Add) (None, None, 128) 0 weight_normalization_9[0][0] leaky_re_lu_8[0][0] __________________________________________________________________________________________________ leaky_re_lu_10 (LeakyReLU) (None, None, 128) 0 add_3[0][0] __________________________________________________________________________________________________ weight_normalization_10 (Weight (None, None, 128) 49409 leaky_re_lu_10[0][0] __________________________________________________________________________________________________ leaky_re_lu_11 (LeakyReLU) (None, None, 128) 0 weight_normalization_10[0][0] __________________________________________________________________________________________________ weight_normalization_11 (Weight (None, None, 128) 49409 leaky_re_lu_11[0][0] 
__________________________________________________________________________________________________ add_4 (Add) (None, None, 128) 0 add_3[0][0] weight_normalization_11[0][0] __________________________________________________________________________________________________ leaky_re_lu_12 (LeakyReLU) (None, None, 128) 0 add_4[0][0] __________________________________________________________________________________________________ weight_normalization_12 (Weight (None, None, 128) 49409 leaky_re_lu_12[0][0] __________________________________________________________________________________________________ leaky_re_lu_13 (LeakyReLU) (None, None, 128) 0 weight_normalization_12[0][0] __________________________________________________________________________________________________ weight_normalization_13 (Weight (None, None, 128) 49409 leaky_re_lu_13[0][0] __________________________________________________________________________________________________ add_5 (Add) (None, None, 128) 0 weight_normalization_13[0][0] add_4[0][0] __________________________________________________________________________________________________ leaky_re_lu_14 (LeakyReLU) (None, None, 128) 0 add_5[0][0] __________________________________________________________________________________________________ weight_normalization_14 (Weight (None, None, 64) 131265 leaky_re_lu_14[0][0] __________________________________________________________________________________________________ leaky_re_lu_15 (LeakyReLU) (None, None, 64) 0 weight_normalization_14[0][0] __________________________________________________________________________________________________ weight_normalization_15 (Weight (None, None, 64) 12417 leaky_re_lu_15[0][0] __________________________________________________________________________________________________ leaky_re_lu_16 (LeakyReLU) (None, None, 64) 0 weight_normalization_15[0][0] __________________________________________________________________________________________________ weight_normalization_16 (Weight (None, None, 64) 12417 leaky_re_lu_16[0][0] __________________________________________________________________________________________________ add_6 (Add) (None, None, 64) 0 weight_normalization_16[0][0] leaky_re_lu_15[0][0] __________________________________________________________________________________________________ leaky_re_lu_17 (LeakyReLU) (None, None, 64) 0 add_6[0][0] __________________________________________________________________________________________________ weight_normalization_17 (Weight (None, None, 64) 12417 leaky_re_lu_17[0][0] __________________________________________________________________________________________________ leaky_re_lu_18 (LeakyReLU) (None, None, 64) 0 weight_normalization_17[0][0] __________________________________________________________________________________________________ weight_normalization_18 (Weight (None, None, 64) 12417 leaky_re_lu_18[0][0] __________________________________________________________________________________________________ add_7 (Add) (None, None, 64) 0 add_6[0][0] weight_normalization_18[0][0] __________________________________________________________________________________________________ leaky_re_lu_19 (LeakyReLU) (None, None, 64) 0 add_7[0][0] __________________________________________________________________________________________________ weight_normalization_19 (Weight (None, None, 64) 12417 leaky_re_lu_19[0][0] __________________________________________________________________________________________________ leaky_re_lu_20 
(LeakyReLU) (None, None, 64) 0 weight_normalization_19[0][0] __________________________________________________________________________________________________ weight_normalization_20 (Weight (None, None, 64) 12417 leaky_re_lu_20[0][0] __________________________________________________________________________________________________ add_8 (Add) (None, None, 64) 0 weight_normalization_20[0][0] add_7[0][0] __________________________________________________________________________________________________ leaky_re_lu_21 (LeakyReLU) (None, None, 64) 0 add_8[0][0] __________________________________________________________________________________________________ weight_normalization_21 (Weight (None, None, 32) 32865 leaky_re_lu_21[0][0] __________________________________________________________________________________________________ leaky_re_lu_22 (LeakyReLU) (None, None, 32) 0 weight_normalization_21[0][0] __________________________________________________________________________________________________ weight_normalization_22 (Weight (None, None, 32) 3137 leaky_re_lu_22[0][0] __________________________________________________________________________________________________ leaky_re_lu_23 (LeakyReLU) (None, None, 32) 0 weight_normalization_22[0][0] __________________________________________________________________________________________________ weight_normalization_23 (Weight (None, None, 32) 3137 leaky_re_lu_23[0][0] __________________________________________________________________________________________________ add_9 (Add) (None, None, 32) 0 weight_normalization_23[0][0] leaky_re_lu_22[0][0] __________________________________________________________________________________________________ leaky_re_lu_24 (LeakyReLU) (None, None, 32) 0 add_9[0][0] __________________________________________________________________________________________________ weight_normalization_24 (Weight (None, None, 32) 3137 leaky_re_lu_24[0][0] __________________________________________________________________________________________________ leaky_re_lu_25 (LeakyReLU) (None, None, 32) 0 weight_normalization_24[0][0] __________________________________________________________________________________________________ weight_normalization_25 (Weight (None, None, 32) 3137 leaky_re_lu_25[0][0] __________________________________________________________________________________________________ add_10 (Add) (None, None, 32) 0 add_9[0][0] weight_normalization_25[0][0] __________________________________________________________________________________________________ leaky_re_lu_26 (LeakyReLU) (None, None, 32) 0 add_10[0][0] __________________________________________________________________________________________________ weight_normalization_26 (Weight (None, None, 32) 3137 leaky_re_lu_26[0][0] __________________________________________________________________________________________________ leaky_re_lu_27 (LeakyReLU) (None, None, 32) 0 weight_normalization_26[0][0] __________________________________________________________________________________________________ weight_normalization_27 (Weight (None, None, 32) 3137 leaky_re_lu_27[0][0] __________________________________________________________________________________________________ add_11 (Add) (None, None, 32) 0 weight_normalization_27[0][0] add_10[0][0] __________________________________________________________________________________________________ leaky_re_lu_28 (LeakyReLU) (None, None, 32) 0 add_11[0][0] 
__________________________________________________________________________________________________ weight_normalization_28 (Weight (None, None, 1) 452 leaky_re_lu_28[0][0] ================================================================================================== Total params: 4,646,912 Trainable params: 4,646,658 Non-trainable params: 254 __________________________________________________________________________________________________ Create the discriminator def create_discriminator(input_shape): inp = keras.Input(input_shape) out_map1 = discriminator_block(inp) pool1 = layers.AveragePooling1D()(inp) out_map2 = discriminator_block(pool1) pool2 = layers.AveragePooling1D()(pool1) out_map3 = discriminator_block(pool2) return keras.Model(inp, [out_map1, out_map2, out_map3]) # We use a dynamic input shape for the discriminator # This is done because the input shape for the generator is unknown discriminator = create_discriminator((None, 1)) discriminator.summary() Model: \"model_1\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_2 (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ average_pooling1d (AveragePooli (None, None, 1) 0 input_2[0][0] __________________________________________________________________________________________________ average_pooling1d_1 (AveragePoo (None, None, 1) 0 average_pooling1d[0][0] __________________________________________________________________________________________________ weight_normalization_29 (Weight (None, None, 16) 273 input_2[0][0] __________________________________________________________________________________________________ weight_normalization_36 (Weight (None, None, 16) 273 average_pooling1d[0][0] __________________________________________________________________________________________________ weight_normalization_43 (Weight (None, None, 16) 273 average_pooling1d_1[0][0] __________________________________________________________________________________________________ leaky_re_lu_29 (LeakyReLU) (None, None, 16) 0 weight_normalization_29[0][0] __________________________________________________________________________________________________ leaky_re_lu_35 (LeakyReLU) (None, None, 16) 0 weight_normalization_36[0][0] __________________________________________________________________________________________________ leaky_re_lu_41 (LeakyReLU) (None, None, 16) 0 weight_normalization_43[0][0] __________________________________________________________________________________________________ weight_normalization_30 (Weight (None, None, 64) 10625 leaky_re_lu_29[0][0] __________________________________________________________________________________________________ weight_normalization_37 (Weight (None, None, 64) 10625 leaky_re_lu_35[0][0] __________________________________________________________________________________________________ weight_normalization_44 (Weight (None, None, 64) 10625 leaky_re_lu_41[0][0] __________________________________________________________________________________________________ leaky_re_lu_30 (LeakyReLU) (None, None, 64) 0 weight_normalization_30[0][0] __________________________________________________________________________________________________ leaky_re_lu_36 (LeakyReLU) (None, None, 64) 0 weight_normalization_37[0][0] 
__________________________________________________________________________________________________ leaky_re_lu_42 (LeakyReLU) (None, None, 64) 0 weight_normalization_44[0][0] __________________________________________________________________________________________________ weight_normalization_31 (Weight (None, None, 256) 42497 leaky_re_lu_30[0][0] __________________________________________________________________________________________________ weight_normalization_38 (Weight (None, None, 256) 42497 leaky_re_lu_36[0][0] __________________________________________________________________________________________________ weight_normalization_45 (Weight (None, None, 256) 42497 leaky_re_lu_42[0][0] __________________________________________________________________________________________________ leaky_re_lu_31 (LeakyReLU) (None, None, 256) 0 weight_normalization_31[0][0] __________________________________________________________________________________________________ leaky_re_lu_37 (LeakyReLU) (None, None, 256) 0 weight_normalization_38[0][0] __________________________________________________________________________________________________ leaky_re_lu_43 (LeakyReLU) (None, None, 256) 0 weight_normalization_45[0][0] __________________________________________________________________________________________________ weight_normalization_32 (Weight (None, None, 1024) 169985 leaky_re_lu_31[0][0] __________________________________________________________________________________________________ weight_normalization_39 (Weight (None, None, 1024) 169985 leaky_re_lu_37[0][0] __________________________________________________________________________________________________ weight_normalization_46 (Weight (None, None, 1024) 169985 leaky_re_lu_43[0][0] __________________________________________________________________________________________________ leaky_re_lu_32 (LeakyReLU) (None, None, 1024) 0 weight_normalization_32[0][0] __________________________________________________________________________________________________ leaky_re_lu_38 (LeakyReLU) (None, None, 1024) 0 weight_normalization_39[0][0] __________________________________________________________________________________________________ leaky_re_lu_44 (LeakyReLU) (None, None, 1024) 0 weight_normalization_46[0][0] __________________________________________________________________________________________________ weight_normalization_33 (Weight (None, None, 1024) 169985 leaky_re_lu_32[0][0] __________________________________________________________________________________________________ weight_normalization_40 (Weight (None, None, 1024) 169985 leaky_re_lu_38[0][0] __________________________________________________________________________________________________ weight_normalization_47 (Weight (None, None, 1024) 169985 leaky_re_lu_44[0][0] __________________________________________________________________________________________________ leaky_re_lu_33 (LeakyReLU) (None, None, 1024) 0 weight_normalization_33[0][0] __________________________________________________________________________________________________ leaky_re_lu_39 (LeakyReLU) (None, None, 1024) 0 weight_normalization_40[0][0] __________________________________________________________________________________________________ leaky_re_lu_45 (LeakyReLU) (None, None, 1024) 0 weight_normalization_47[0][0] __________________________________________________________________________________________________ weight_normalization_34 (Weight (None, None, 1024) 5244929 leaky_re_lu_33[0][0] 
__________________________________________________________________________________________________ weight_normalization_41 (Weight (None, None, 1024) 5244929 leaky_re_lu_39[0][0] __________________________________________________________________________________________________ weight_normalization_48 (Weight (None, None, 1024) 5244929 leaky_re_lu_45[0][0] __________________________________________________________________________________________________ leaky_re_lu_34 (LeakyReLU) (None, None, 1024) 0 weight_normalization_34[0][0] __________________________________________________________________________________________________ leaky_re_lu_40 (LeakyReLU) (None, None, 1024) 0 weight_normalization_41[0][0] __________________________________________________________________________________________________ leaky_re_lu_46 (LeakyReLU) (None, None, 1024) 0 weight_normalization_48[0][0] __________________________________________________________________________________________________ weight_normalization_35 (Weight (None, None, 1) 3075 leaky_re_lu_34[0][0] __________________________________________________________________________________________________ weight_normalization_42 (Weight (None, None, 1) 3075 leaky_re_lu_40[0][0] __________________________________________________________________________________________________ weight_normalization_49 (Weight (None, None, 1) 3075 leaky_re_lu_46[0][0] ================================================================================================== Total params: 16,924,107 Trainable params: 16,924,086 Non-trainable params: 21 __________________________________________________________________________________________________ Defining the loss functions Generator Loss The generator architecture uses a combination of two losses Mean Squared Error: This is the standard MSE generator loss calculated between ones and the outputs from the discriminator with N layers. Feature Matching Loss: This loss involves extracting the outputs of every layer from the discriminator for both the generator and ground truth and compare each layer output k using Mean Absolute Error. Discriminator Loss The discriminator uses the Mean Absolute Error and compares the real data predictions with ones and generated predictions with zeros. # Generator loss def generator_loss(real_pred, fake_pred): \"\"\"Loss function for the generator. Args: real_pred: Tensor, output of the ground truth wave passed through the discriminator. fake_pred: Tensor, output of the generator prediction passed through the discriminator. Returns: Loss for the generator. \"\"\" gen_loss = [] for i in range(len(fake_pred)): gen_loss.append(mse(tf.ones_like(fake_pred[i][-1]), fake_pred[i][-1])) return tf.reduce_mean(gen_loss) def feature_matching_loss(real_pred, fake_pred): \"\"\"Implements the feature matching loss. Args: real_pred: Tensor, output of the ground truth wave passed through the discriminator. fake_pred: Tensor, output of the generator prediction passed through the discriminator. Returns: Feature Matching Loss. \"\"\" fm_loss = [] for i in range(len(fake_pred)): for j in range(len(fake_pred[i]) - 1): fm_loss.append(mae(real_pred[i][j], fake_pred[i][j])) return tf.reduce_mean(fm_loss) def discriminator_loss(real_pred, fake_pred): \"\"\"Implements the discriminator loss. Args: real_pred: Tensor, output of the ground truth wave passed through the discriminator. fake_pred: Tensor, output of the generator prediction passed through the discriminator. Returns: Discriminator Loss. 
\"\"\" real_loss, fake_loss = [], [] for i in range(len(real_pred)): real_loss.append(mse(tf.ones_like(real_pred[i][-1]), real_pred[i][-1])) fake_loss.append(mse(tf.zeros_like(fake_pred[i][-1]), fake_pred[i][-1])) # Calculating the final discriminator loss after scaling disc_loss = tf.reduce_mean(real_loss) + tf.reduce_mean(fake_loss) return disc_loss Defining the MelGAN model for training. This subclass overrides the train_step() method to implement the training logic. class MelGAN(keras.Model): def __init__(self, generator, discriminator, **kwargs): \"\"\"MelGAN trainer class Args: generator: keras.Model, Generator model discriminator: keras.Model, Discriminator model \"\"\" super().__init__(**kwargs) self.generator = generator self.discriminator = discriminator def compile( self, gen_optimizer, disc_optimizer, generator_loss, feature_matching_loss, discriminator_loss, ): \"\"\"MelGAN compile method. Args: gen_optimizer: keras.optimizer, optimizer to be used for training disc_optimizer: keras.optimizer, optimizer to be used for training generator_loss: callable, loss function for generator feature_matching_loss: callable, loss function for feature matching discriminator_loss: callable, loss function for discriminator \"\"\" super().compile() # Optimizers self.gen_optimizer = gen_optimizer self.disc_optimizer = disc_optimizer # Losses self.generator_loss = generator_loss self.feature_matching_loss = feature_matching_loss self.discriminator_loss = discriminator_loss # Trackers self.gen_loss_tracker = keras.metrics.Mean(name=\"gen_loss\") self.disc_loss_tracker = keras.metrics.Mean(name=\"disc_loss\") def train_step(self, batch): x_batch_train, y_batch_train = batch with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape: # Generating the audio wave gen_audio_wave = generator(x_batch_train, training=True) # Generating the features using the discriminator fake_pred = discriminator(y_batch_train) real_pred = discriminator(gen_audio_wave) # Calculating the generator losses gen_loss = generator_loss(real_pred, fake_pred) fm_loss = feature_matching_loss(real_pred, fake_pred) # Calculating final generator loss gen_fm_loss = gen_loss + 10 * fm_loss # Calculating the discriminator losses disc_loss = discriminator_loss(real_pred, fake_pred) # Calculating and applying the gradients for generator and discriminator grads_gen = gen_tape.gradient(gen_fm_loss, generator.trainable_weights) grads_disc = disc_tape.gradient(disc_loss, discriminator.trainable_weights) gen_optimizer.apply_gradients(zip(grads_gen, generator.trainable_weights)) disc_optimizer.apply_gradients(zip(grads_disc, discriminator.trainable_weights)) self.gen_loss_tracker.update_state(gen_fm_loss) self.disc_loss_tracker.update_state(disc_loss) return { \"gen_loss\": self.gen_loss_tracker.result(), \"disc_loss\": self.disc_loss_tracker.result(), } Training The paper suggests that the training with dynamic shapes takes around 400,000 steps (~500 epochs). For this example, we will run it only for a single epoch (819 steps). Longer training time (greater than 300 epochs) will almost certainly provide better results. 
gen_optimizer = keras.optimizers.Adam( LEARNING_RATE_GEN, beta_1=0.5, beta_2=0.9, clipnorm=1 ) disc_optimizer = keras.optimizers.Adam( LEARNING_RATE_DISC, beta_1=0.5, beta_2=0.9, clipnorm=1 ) # Start training generator = create_generator((None, 1)) discriminator = create_discriminator((None, 1)) mel_gan = MelGAN(generator, discriminator) mel_gan.compile( gen_optimizer, disc_optimizer, generator_loss, feature_matching_loss, discriminator_loss, ) mel_gan.fit( train_dataset.shuffle(200).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE), epochs=1 ) 819/819 [==============================] - 641s 696ms/step - gen_loss: 0.9761 - disc_loss: 0.9350
Testing the model The trained model can now be used for real-time text-to-speech tasks. To test how fast MelGAN inference can be, let us take a sample mel-spectrogram and convert it. Note that the actual model pipeline will not include the MelSpec layer, and hence this layer will be disabled during inference. The inference input will be a mel-spectrogram processed similarly to the MelSpec layer configuration. To test this, we will create a uniformly distributed random tensor to simulate the behavior of the inference pipeline. # Sampling a random tensor to mimic a batch of 128 spectrograms of shape [50, 80] audio_sample = tf.random.uniform([128, 50, 80]) Timing the inference speed of a single sample: running this, you can see that the average inference time per spectrogram ranges from 8 to 10 milliseconds on a K80 GPU, which is pretty fast. pred = generator.predict(audio_sample, batch_size=32, verbose=1) 4/4 [==============================] - 5s 280ms/step
Conclusion MelGAN is a highly effective architecture for spectral inversion: it achieves a Mean Opinion Score (MOS) of 3.61, considerably outperforming the Griffin-Lim algorithm, which has a MOS of just 1.57, and it is competitive with the state-of-the-art WaveGlow and WaveNet architectures on text-to-speech and speech enhancement tasks on the LJSpeech and VCTK datasets [1]. This tutorial highlights: The advantages of using dilated convolutions that grow with the filter size Implementation of a custom layer for on-the-fly conversion of audio waves to mel-spectrograms Effectiveness of using the feature matching loss function for training GAN generators.
Further reading MelGAN paper (Kundan Kumar et al.) to understand the reasoning behind the architecture and training process For an in-depth understanding of the feature matching loss, you can refer to Improved Techniques for Training GANs (Tim Salimans et al.).
Classify speakers using Fast Fourier Transform (FFT) and a 1D Convnet. Introduction This example demonstrates how to create a model to classify speakers from the frequency-domain representation of speech recordings, obtained via Fast Fourier Transform (FFT). It shows the following: How to use tf.data to load, preprocess and feed audio streams into a model How to create a 1D convolutional network with residual connections for audio classification. Our process: We prepare a dataset of speech samples from different speakers, with the speaker as label. We add background noise to these samples to augment our data. We take the FFT of these samples. We train a 1D convnet to predict the correct speaker given a noisy FFT speech sample. Note: This example should be run with TensorFlow 2.3 or higher, or tf-nightly. The noise samples in the dataset need to be resampled to a sampling rate of 16000 Hz before using the code in this example.
To do this, you will need to have ffmpeg installed. Setup import os import shutil import numpy as np import tensorflow as tf from tensorflow import keras from pathlib import Path from IPython.display import display, Audio # Get the data from https://www.kaggle.com/kongaevans/speaker-recognition-dataset/download # and save it to the 'Downloads' folder in your HOME directory DATASET_ROOT = os.path.join(os.path.expanduser(\"~\"), \"Downloads/16000_pcm_speeches\") # The folders in which we will put the audio samples and the noise samples AUDIO_SUBFOLDER = \"audio\" NOISE_SUBFOLDER = \"noise\" DATASET_AUDIO_PATH = os.path.join(DATASET_ROOT, AUDIO_SUBFOLDER) DATASET_NOISE_PATH = os.path.join(DATASET_ROOT, NOISE_SUBFOLDER) # Percentage of samples to use for validation VALID_SPLIT = 0.1 # Seed to use when shuffling the dataset and the noise SHUFFLE_SEED = 43 # The sampling rate to use. # This is the one used in all of the audio samples. # We will resample all of the noise to this sampling rate. # This will also be the output size of the audio wave samples # (since all samples are 1 second long) SAMPLING_RATE = 16000 # The factor to multiply the noise with, according to: # noisy_sample = sample + noise * prop * scale # where prop = sample_amplitude / noise_amplitude SCALE = 0.5 BATCH_SIZE = 128 EPOCHS = 100
Data preparation The dataset is composed of 7 folders, divided into 2 groups: Speech samples, with 5 folders for 5 different speakers. Each folder contains 1500 audio files, each 1 second long and sampled at 16000 Hz. Background noise samples, with 2 folders and a total of 6 files. These files are longer than 1 second (and originally not sampled at 16000 Hz, but we will resample them to 16000 Hz). We will use those 6 files to create 354 1-second-long noise samples to be used for training.
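For reference, the SCALE constant above enters through the mixing rule noisy_sample = sample + noise * prop * scale, which the add_noise helper implements later in this example. Here is a minimal NumPy sketch of that rule on made-up one-second signals, just to make the arithmetic concrete:

import numpy as np

rng = np.random.default_rng(0)
sample = 0.30 * rng.standard_normal(16000).astype('float32')  # stand-in for a 1-second speech clip
noise = 0.05 * rng.standard_normal(16000).astype('float32')   # stand-in for a 1-second noise clip

scale = 0.5                        # same value as SCALE above
prop = sample.max() / noise.max()  # amplitude ratio, mirroring add_noise
noisy_sample = sample + noise * prop * scale

print(prop, np.abs(noisy_sample).max())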
Let's sort these 2 categories into 2 folders: An audio folder which will contain all the per-speaker speech sample folders A noise folder which will contain all the noise samples Before sorting the audio and noise categories into 2 folders, main_directory/ ...speaker_a/ ...speaker_b/ ...speaker_c/ ...speaker_d/ ...speaker_e/ ...other/ ..._background_noise_/ After sorting, we end up with the following structure: main_directory/ ...audio/ ......speaker_a/ ......speaker_b/ ......speaker_c/ ......speaker_d/ ......speaker_e/ ...noise/ ......other/ ......_background_noise_/ # If folder `audio`, does not exist, create it, otherwise do nothing if os.path.exists(DATASET_AUDIO_PATH) is False: os.makedirs(DATASET_AUDIO_PATH) # If folder `noise`, does not exist, create it, otherwise do nothing if os.path.exists(DATASET_NOISE_PATH) is False: os.makedirs(DATASET_NOISE_PATH) for folder in os.listdir(DATASET_ROOT): if os.path.isdir(os.path.join(DATASET_ROOT, folder)): if folder in [AUDIO_SUBFOLDER, NOISE_SUBFOLDER]: # If folder is `audio` or `noise`, do nothing continue elif folder in [\"other\", \"_background_noise_\"]: # If folder is one of the folders that contains noise samples, # move it to the `noise` folder shutil.move( os.path.join(DATASET_ROOT, folder), os.path.join(DATASET_NOISE_PATH, folder), ) else: # Otherwise, it should be a speaker folder, then move it to # `audio` folder shutil.move( os.path.join(DATASET_ROOT, folder), os.path.join(DATASET_AUDIO_PATH, folder), ) Noise preparation In this section: We load all noise samples (which should have been resampled to 16000) We split those noise samples to chuncks of 16000 samples which correspond to 1 second duration each # Get the list of all noise files noise_paths = [] for subdir in os.listdir(DATASET_NOISE_PATH): subdir_path = Path(DATASET_NOISE_PATH) / subdir if os.path.isdir(subdir_path): noise_paths += [ os.path.join(subdir_path, filepath) for filepath in os.listdir(subdir_path) if filepath.endswith(\".wav\") ] print( \"Found {} files belonging to {} directories\".format( len(noise_paths), len(os.listdir(DATASET_NOISE_PATH)) ) ) Found 6 files belonging to 2 directories Resample all noise samples to 16000 Hz command = ( \"for dir in `ls -1 \" + DATASET_NOISE_PATH + \"`; do \" \"for file in `ls -1 \" + DATASET_NOISE_PATH + \"/$dir/*.wav`; do \" \"sample_rate=`ffprobe -hide_banner -loglevel panic -show_streams \" \"$file | grep sample_rate | cut -f2 -d=`; \" \"if [ $sample_rate -ne 16000 ]; then \" \"ffmpeg -hide_banner -loglevel panic -y \" \"-i $file -ar 16000 temp.wav; \" \"mv temp.wav $file; \" \"fi; done; done\" ) os.system(command) # Split noise into chunks of 16000 each def load_noise_sample(path): sample, sampling_rate = tf.audio.decode_wav( tf.io.read_file(path), desired_channels=1 ) if sampling_rate == SAMPLING_RATE: # Number of slices of 16000 each that can be generated from the noise sample slices = int(sample.shape[0] / SAMPLING_RATE) sample = tf.split(sample[: slices * SAMPLING_RATE], slices) return sample else: print(\"Sampling rate for {} is incorrect. Ignoring it\".format(path)) return None noises = [] for path in noise_paths: sample = load_noise_sample(path) if sample: noises.extend(sample) noises = tf.stack(noises) print( \"{} noise files were split into {} noise samples where each is {} sec. long\".format( len(noise_paths), noises.shape[0], noises.shape[1] // SAMPLING_RATE ) ) 6 noise files were split into 354 noise samples where each is 1 sec. 
long Dataset generation def paths_and_labels_to_dataset(audio_paths, labels): \"\"\"Constructs a dataset of audios and labels.\"\"\" path_ds = tf.data.Dataset.from_tensor_slices(audio_paths) audio_ds = path_ds.map(lambda x: path_to_audio(x)) label_ds = tf.data.Dataset.from_tensor_slices(labels) return tf.data.Dataset.zip((audio_ds, label_ds)) def path_to_audio(path): \"\"\"Reads and decodes an audio file.\"\"\" audio = tf.io.read_file(path) audio, _ = tf.audio.decode_wav(audio, 1, SAMPLING_RATE) return audio def add_noise(audio, noises=None, scale=0.5): if noises is not None: # Create a random tensor of the same size as audio ranging from # 0 to the number of noise stream samples that we have. tf_rnd = tf.random.uniform( (tf.shape(audio)[0],), 0, noises.shape[0], dtype=tf.int32 ) noise = tf.gather(noises, tf_rnd, axis=0) # Get the amplitude proportion between the audio and the noise prop = tf.math.reduce_max(audio, axis=1) / tf.math.reduce_max(noise, axis=1) prop = tf.repeat(tf.expand_dims(prop, axis=1), tf.shape(audio)[1], axis=1) # Adding the rescaled noise to audio audio = audio + noise * prop * scale return audio def audio_to_fft(audio): # Since tf.signal.fft applies FFT on the innermost dimension, # we need to squeeze the dimensions and then expand them again # after FFT audio = tf.squeeze(audio, axis=-1) fft = tf.signal.fft( tf.cast(tf.complex(real=audio, imag=tf.zeros_like(audio)), tf.complex64) ) fft = tf.expand_dims(fft, axis=-1) # Return the absolute value of the first half of the FFT # which represents the positive frequencies return tf.math.abs(fft[:, : (audio.shape[1] // 2), :]) # Get the list of audio file paths along with their corresponding labels class_names = os.listdir(DATASET_AUDIO_PATH) print(\"Our class names: {}\".format(class_names,)) audio_paths = [] labels = [] for label, name in enumerate(class_names): print(\"Processing speaker {}\".format(name,)) dir_path = Path(DATASET_AUDIO_PATH) / name speaker_sample_paths = [ os.path.join(dir_path, filepath) for filepath in os.listdir(dir_path) if filepath.endswith(\".wav\") ] audio_paths += speaker_sample_paths labels += [label] * len(speaker_sample_paths) print( \"Found {} files belonging to {} classes.\".format(len(audio_paths), len(class_names)) ) # Shuffle rng = np.random.RandomState(SHUFFLE_SEED) rng.shuffle(audio_paths) rng = np.random.RandomState(SHUFFLE_SEED) rng.shuffle(labels) # Split into training and validation num_val_samples = int(VALID_SPLIT * len(audio_paths)) print(\"Using {} files for training.\".format(len(audio_paths) - num_val_samples)) train_audio_paths = audio_paths[:-num_val_samples] train_labels = labels[:-num_val_samples] print(\"Using {} files for validation.\".format(num_val_samples)) valid_audio_paths = audio_paths[-num_val_samples:] valid_labels = labels[-num_val_samples:] # Create 2 datasets, one for training and the other for validation train_ds = paths_and_labels_to_dataset(train_audio_paths, train_labels) train_ds = train_ds.shuffle(buffer_size=BATCH_SIZE * 8, seed=SHUFFLE_SEED).batch( BATCH_SIZE ) valid_ds = paths_and_labels_to_dataset(valid_audio_paths, valid_labels) valid_ds = valid_ds.shuffle(buffer_size=32 * 8, seed=SHUFFLE_SEED).batch(32) # Add noise to the training set train_ds = train_ds.map( lambda x, y: (add_noise(x, noises, scale=SCALE), y), num_parallel_calls=tf.data.AUTOTUNE, ) # Transform audio wave to the frequency domain using `audio_to_fft` train_ds = train_ds.map( lambda x, y: (audio_to_fft(x), y), num_parallel_calls=tf.data.AUTOTUNE ) train_ds = 
train_ds.prefetch(tf.data.AUTOTUNE) valid_ds = valid_ds.map( lambda x, y: (audio_to_fft(x), y), num_parallel_calls=tf.data.AUTOTUNE ) valid_ds = valid_ds.prefetch(tf.data.AUTOTUNE) Our class names: ['Julia_Gillard', 'Jens_Stoltenberg', 'Nelson_Mandela', 'Magaret_Tarcher', 'Benjamin_Netanyau'] Processing speaker Julia_Gillard Processing speaker Jens_Stoltenberg Processing speaker Nelson_Mandela Processing speaker Magaret_Tarcher Processing speaker Benjamin_Netanyau Found 7501 files belonging to 5 classes. Using 6751 files for training. Using 750 files for validation. Model Definition def residual_block(x, filters, conv_num=3, activation=\"relu\"): # Shortcut s = keras.layers.Conv1D(filters, 1, padding=\"same\")(x) for i in range(conv_num - 1): x = keras.layers.Conv1D(filters, 3, padding=\"same\")(x) x = keras.layers.Activation(activation)(x) x = keras.layers.Conv1D(filters, 3, padding=\"same\")(x) x = keras.layers.Add()([x, s]) x = keras.layers.Activation(activation)(x) return keras.layers.MaxPool1D(pool_size=2, strides=2)(x) def build_model(input_shape, num_classes): inputs = keras.layers.Input(shape=input_shape, name=\"input\") x = residual_block(inputs, 16, 2) x = residual_block(x, 32, 2) x = residual_block(x, 64, 3) x = residual_block(x, 128, 3) x = residual_block(x, 128, 3) x = keras.layers.AveragePooling1D(pool_size=3, strides=3)(x) x = keras.layers.Flatten()(x) x = keras.layers.Dense(256, activation=\"relu\")(x) x = keras.layers.Dense(128, activation=\"relu\")(x) outputs = keras.layers.Dense(num_classes, activation=\"softmax\", name=\"output\")(x) return keras.models.Model(inputs=inputs, outputs=outputs) model = build_model((SAMPLING_RATE // 2, 1), len(class_names)) model.summary() # Compile the model using Adam's default learning rate model.compile( optimizer=\"Adam\", loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"] ) # Add callbacks: # 'EarlyStopping' to stop training when the model is not enhancing anymore # 'ModelCheckPoint' to always keep the model that has the best val_accuracy model_save_filename = \"model.h5\" earlystopping_cb = keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True) mdlcheckpoint_cb = keras.callbacks.ModelCheckpoint( model_save_filename, monitor=\"val_accuracy\", save_best_only=True ) Model: \"model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input (InputLayer) [(None, 8000, 1)] 0 __________________________________________________________________________________________________ conv1d_1 (Conv1D) (None, 8000, 16) 64 input[0][0] __________________________________________________________________________________________________ activation (Activation) (None, 8000, 16) 0 conv1d_1[0][0] __________________________________________________________________________________________________ conv1d_2 (Conv1D) (None, 8000, 16) 784 activation[0][0] __________________________________________________________________________________________________ conv1d (Conv1D) (None, 8000, 16) 32 input[0][0] __________________________________________________________________________________________________ add (Add) (None, 8000, 16) 0 conv1d_2[0][0] conv1d[0][0] __________________________________________________________________________________________________ activation_1 (Activation) (None, 8000, 16) 0 add[0][0] 
__________________________________________________________________________________________________ max_pooling1d (MaxPooling1D) (None, 4000, 16) 0 activation_1[0][0] __________________________________________________________________________________________________ conv1d_4 (Conv1D) (None, 4000, 32) 1568 max_pooling1d[0][0] __________________________________________________________________________________________________ activation_2 (Activation) (None, 4000, 32) 0 conv1d_4[0][0] __________________________________________________________________________________________________ conv1d_5 (Conv1D) (None, 4000, 32) 3104 activation_2[0][0] __________________________________________________________________________________________________ conv1d_3 (Conv1D) (None, 4000, 32) 544 max_pooling1d[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, 4000, 32) 0 conv1d_5[0][0] conv1d_3[0][0] __________________________________________________________________________________________________ activation_3 (Activation) (None, 4000, 32) 0 add_1[0][0] __________________________________________________________________________________________________ max_pooling1d_1 (MaxPooling1D) (None, 2000, 32) 0 activation_3[0][0] __________________________________________________________________________________________________ conv1d_7 (Conv1D) (None, 2000, 64) 6208 max_pooling1d_1[0][0] __________________________________________________________________________________________________ activation_4 (Activation) (None, 2000, 64) 0 conv1d_7[0][0] __________________________________________________________________________________________________ conv1d_8 (Conv1D) (None, 2000, 64) 12352 activation_4[0][0] __________________________________________________________________________________________________ activation_5 (Activation) (None, 2000, 64) 0 conv1d_8[0][0] __________________________________________________________________________________________________ conv1d_9 (Conv1D) (None, 2000, 64) 12352 activation_5[0][0] __________________________________________________________________________________________________ conv1d_6 (Conv1D) (None, 2000, 64) 2112 max_pooling1d_1[0][0] __________________________________________________________________________________________________ add_2 (Add) (None, 2000, 64) 0 conv1d_9[0][0] conv1d_6[0][0] __________________________________________________________________________________________________ activation_6 (Activation) (None, 2000, 64) 0 add_2[0][0] __________________________________________________________________________________________________ max_pooling1d_2 (MaxPooling1D) (None, 1000, 64) 0 activation_6[0][0] __________________________________________________________________________________________________ conv1d_11 (Conv1D) (None, 1000, 128) 24704 max_pooling1d_2[0][0] __________________________________________________________________________________________________ activation_7 (Activation) (None, 1000, 128) 0 conv1d_11[0][0] __________________________________________________________________________________________________ conv1d_12 (Conv1D) (None, 1000, 128) 49280 activation_7[0][0] __________________________________________________________________________________________________ activation_8 (Activation) (None, 1000, 128) 0 conv1d_12[0][0] __________________________________________________________________________________________________ conv1d_13 (Conv1D) (None, 1000, 128) 49280 activation_8[0][0] 
__________________________________________________________________________________________________ conv1d_10 (Conv1D) (None, 1000, 128) 8320 max_pooling1d_2[0][0] __________________________________________________________________________________________________ add_3 (Add) (None, 1000, 128) 0 conv1d_13[0][0] conv1d_10[0][0] __________________________________________________________________________________________________ activation_9 (Activation) (None, 1000, 128) 0 add_3[0][0] __________________________________________________________________________________________________ max_pooling1d_3 (MaxPooling1D) (None, 500, 128) 0 activation_9[0][0] __________________________________________________________________________________________________ conv1d_15 (Conv1D) (None, 500, 128) 49280 max_pooling1d_3[0][0] __________________________________________________________________________________________________ activation_10 (Activation) (None, 500, 128) 0 conv1d_15[0][0] __________________________________________________________________________________________________ conv1d_16 (Conv1D) (None, 500, 128) 49280 activation_10[0][0] __________________________________________________________________________________________________ activation_11 (Activation) (None, 500, 128) 0 conv1d_16[0][0] __________________________________________________________________________________________________ conv1d_17 (Conv1D) (None, 500, 128) 49280 activation_11[0][0] __________________________________________________________________________________________________ conv1d_14 (Conv1D) (None, 500, 128) 16512 max_pooling1d_3[0][0] __________________________________________________________________________________________________ add_4 (Add) (None, 500, 128) 0 conv1d_17[0][0] conv1d_14[0][0] __________________________________________________________________________________________________ activation_12 (Activation) (None, 500, 128) 0 add_4[0][0] __________________________________________________________________________________________________ max_pooling1d_4 (MaxPooling1D) (None, 250, 128) 0 activation_12[0][0] __________________________________________________________________________________________________ average_pooling1d (AveragePooli (None, 83, 128) 0 max_pooling1d_4[0][0] __________________________________________________________________________________________________ flatten (Flatten) (None, 10624) 0 average_pooling1d[0][0] __________________________________________________________________________________________________ dense (Dense) (None, 256) 2720000 flatten[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 128) 32896 dense[0][0] __________________________________________________________________________________________________ output (Dense) (None, 5) 645 dense_1[0][0] ================================================================================================== Total params: 3,088,597 Trainable params: 3,088,597 Non-trainable params: 0 __________________________________________________________________________________________________ Training history = model.fit( train_ds, epochs=EPOCHS, validation_data=valid_ds, callbacks=[earlystopping_cb, mdlcheckpoint_cb], ) Epoch 1/100 53/53 [==============================] - 62s 1s/step - loss: 1.0107 - accuracy: 0.6929 - val_loss: 0.3367 - val_accuracy: 0.8640 Epoch 2/100 53/53 [==============================] - 61s 1s/step - loss: 0.2863 - accuracy: 0.8926 - val_loss: 0.2814 - val_accuracy: 
0.8813 Epoch 3/100 53/53 [==============================] - 61s 1s/step - loss: 0.2293 - accuracy: 0.9104 - val_loss: 0.2054 - val_accuracy: 0.9160 Epoch 4/100 53/53 [==============================] - 63s 1s/step - loss: 0.1750 - accuracy: 0.9320 - val_loss: 0.1668 - val_accuracy: 0.9320 Epoch 5/100 53/53 [==============================] - 61s 1s/step - loss: 0.2044 - accuracy: 0.9206 - val_loss: 0.1658 - val_accuracy: 0.9347 Epoch 6/100 53/53 [==============================] - 61s 1s/step - loss: 0.1407 - accuracy: 0.9415 - val_loss: 0.0888 - val_accuracy: 0.9720 Epoch 7/100 53/53 [==============================] - 61s 1s/step - loss: 0.1047 - accuracy: 0.9600 - val_loss: 0.1113 - val_accuracy: 0.9587 Epoch 8/100 53/53 [==============================] - 60s 1s/step - loss: 0.1077 - accuracy: 0.9573 - val_loss: 0.0819 - val_accuracy: 0.9693 Epoch 9/100 53/53 [==============================] - 61s 1s/step - loss: 0.0998 - accuracy: 0.9640 - val_loss: 0.1586 - val_accuracy: 0.9427 Epoch 10/100 53/53 [==============================] - 63s 1s/step - loss: 0.1004 - accuracy: 0.9621 - val_loss: 0.1504 - val_accuracy: 0.9333 Epoch 11/100 53/53 [==============================] - 60s 1s/step - loss: 0.0902 - accuracy: 0.9695 - val_loss: 0.1016 - val_accuracy: 0.9600 Epoch 12/100 53/53 [==============================] - 61s 1s/step - loss: 0.0773 - accuracy: 0.9714 - val_loss: 0.0647 - val_accuracy: 0.9800 Epoch 13/100 53/53 [==============================] - 63s 1s/step - loss: 0.0797 - accuracy: 0.9699 - val_loss: 0.0485 - val_accuracy: 0.9853 Epoch 14/100 53/53 [==============================] - 61s 1s/step - loss: 0.0750 - accuracy: 0.9727 - val_loss: 0.0601 - val_accuracy: 0.9787 Epoch 15/100 53/53 [==============================] - 62s 1s/step - loss: 0.0629 - accuracy: 0.9766 - val_loss: 0.0476 - val_accuracy: 0.9787 Epoch 16/100 53/53 [==============================] - 63s 1s/step - loss: 0.0564 - accuracy: 0.9793 - val_loss: 0.0565 - val_accuracy: 0.9813 Epoch 17/100 53/53 [==============================] - 61s 1s/step - loss: 0.0545 - accuracy: 0.9809 - val_loss: 0.0325 - val_accuracy: 0.9893 Epoch 18/100 53/53 [==============================] - 61s 1s/step - loss: 0.0415 - accuracy: 0.9859 - val_loss: 0.0776 - val_accuracy: 0.9693 Epoch 19/100 53/53 [==============================] - 61s 1s/step - loss: 0.0537 - accuracy: 0.9810 - val_loss: 0.0647 - val_accuracy: 0.9853 Epoch 20/100 53/53 [==============================] - 62s 1s/step - loss: 0.0556 - accuracy: 0.9802 - val_loss: 0.0500 - val_accuracy: 0.9880 Epoch 21/100 53/53 [==============================] - 63s 1s/step - loss: 0.0486 - accuracy: 0.9828 - val_loss: 0.0470 - val_accuracy: 0.9827 Epoch 22/100 53/53 [==============================] - 61s 1s/step - loss: 0.0479 - accuracy: 0.9825 - val_loss: 0.0918 - val_accuracy: 0.9693 Epoch 23/100 53/53 [==============================] - 61s 1s/step - loss: 0.0446 - accuracy: 0.9834 - val_loss: 0.0429 - val_accuracy: 0.9867 Epoch 24/100 53/53 [==============================] - 61s 1s/step - loss: 0.0309 - accuracy: 0.9889 - val_loss: 0.0473 - val_accuracy: 0.9867 Epoch 25/100 53/53 [==============================] - 63s 1s/step - loss: 0.0341 - accuracy: 0.9895 - val_loss: 0.0244 - val_accuracy: 0.9907 Epoch 26/100 53/53 [==============================] - 60s 1s/step - loss: 0.0357 - accuracy: 0.9874 - val_loss: 0.0289 - val_accuracy: 0.9893 Epoch 27/100 53/53 [==============================] - 61s 1s/step - loss: 0.0331 - accuracy: 0.9893 - val_loss: 0.0246 - val_accuracy: 0.9920 
Epoch 28/100 53/53 [==============================] - 61s 1s/step - loss: 0.0339 - accuracy: 0.9879 - val_loss: 0.0646 - val_accuracy: 0.9787 Epoch 29/100 53/53 [==============================] - 61s 1s/step - loss: 0.0250 - accuracy: 0.9910 - val_loss: 0.0146 - val_accuracy: 0.9947 Epoch 30/100 53/53 [==============================] - 63s 1s/step - loss: 0.0343 - accuracy: 0.9883 - val_loss: 0.0318 - val_accuracy: 0.9893 Epoch 31/100 53/53 [==============================] - 61s 1s/step - loss: 0.0312 - accuracy: 0.9893 - val_loss: 0.0270 - val_accuracy: 0.9880 Epoch 32/100 53/53 [==============================] - 61s 1s/step - loss: 0.0201 - accuracy: 0.9917 - val_loss: 0.0264 - val_accuracy: 0.9893 Epoch 33/100 53/53 [==============================] - 61s 1s/step - loss: 0.0371 - accuracy: 0.9876 - val_loss: 0.0722 - val_accuracy: 0.9773 Epoch 34/100 53/53 [==============================] - 61s 1s/step - loss: 0.0533 - accuracy: 0.9828 - val_loss: 0.0161 - val_accuracy: 0.9947 Epoch 35/100 53/53 [==============================] - 61s 1s/step - loss: 0.0258 - accuracy: 0.9911 - val_loss: 0.0277 - val_accuracy: 0.9867 Epoch 36/100 53/53 [==============================] - 60s 1s/step - loss: 0.0261 - accuracy: 0.9901 - val_loss: 0.0542 - val_accuracy: 0.9787 Epoch 37/100 53/53 [==============================] - 60s 1s/step - loss: 0.0368 - accuracy: 0.9877 - val_loss: 0.0699 - val_accuracy: 0.9813 Epoch 38/100 53/53 [==============================] - 63s 1s/step - loss: 0.0251 - accuracy: 0.9890 - val_loss: 0.0206 - val_accuracy: 0.9907 Epoch 39/100 53/53 [==============================] - 62s 1s/step - loss: 0.0220 - accuracy: 0.9913 - val_loss: 0.0211 - val_accuracy: 0.9947 Evaluation print(model.evaluate(valid_ds)) 24/24 [==============================] - 6s 244ms/step - loss: 0.0146 - accuracy: 0.9947 [0.014629718847572803, 0.9946666955947876] We get ~ 98% validation accuracy. Demonstration Let's take some samples and: Predict the speaker Compare the prediction with the real speaker Listen to the audio to see that despite the samples being noisy, the model is still pretty accurate SAMPLES_TO_DISPLAY = 10 test_ds = paths_and_labels_to_dataset(valid_audio_paths, valid_labels) test_ds = test_ds.shuffle(buffer_size=BATCH_SIZE * 8, seed=SHUFFLE_SEED).batch( BATCH_SIZE ) test_ds = test_ds.map(lambda x, y: (add_noise(x, noises, scale=SCALE), y)) for audios, labels in test_ds.take(1): # Get the signal FFT ffts = audio_to_fft(audios) # Predict y_pred = model.predict(ffts) # Take random samples rnd = np.random.randint(0, BATCH_SIZE, SAMPLES_TO_DISPLAY) audios = audios.numpy()[rnd, :, :] labels = labels.numpy()[rnd] y_pred = np.argmax(y_pred, axis=-1)[rnd] for index in range(SAMPLES_TO_DISPLAY): # For every sample, print the true and predicted label # as well as run the voice with the noise print( \"Speaker: {} - Predicted: {}\".format( class_names[labels[index]], class_names[y_pred[index]], ) ) display(Audio(audios[index, :, :].squeeze(), rate=SAMPLING_RATE)) Train a 3D convolutional neural network to predict presence of pneumonia. Introduction This example will show the steps needed to build a 3D convolutional neural network (CNN) to predict the presence of viral pneumonia in computer tomography (CT) scans. 2D CNNs are commonly used to process RGB images (3 channels). A 3D CNN is simply the 3D equivalent: it takes as input a 3D volume or a sequence of 2D frames (e.g. slices in a CT scan), 3D CNNs are a powerful model for learning representations for volumetric data. 
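To make the 2D-versus-3D point concrete, here is a minimal sketch (using an arbitrary dummy volume, not the dataset below) showing that a Conv3D layer consumes a rank-5 batch of shape (batch, height, width, depth, channels) and slides its kernel over all three spatial dimensions:

import tensorflow as tf
from tensorflow.keras import layers

volume = tf.random.normal([1, 128, 128, 64, 1])  # one dummy CT-like volume
features = layers.Conv3D(filters=8, kernel_size=3, activation='relu')(volume)
print(features.shape)  # (1, 126, 126, 62, 8) with the default 'valid' padding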
References A survey on Deep Learning Advances on Different 3D DataRepresentations VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition FusionNet: 3D Object Classification Using MultipleData Representations Uniformizing Techniques to Process CT scans with 3D CNNs for Tuberculosis Prediction Setup import os import zipfile import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Downloading the MosMedData: Chest CT Scans with COVID-19 Related Findings In this example, we use a subset of the MosMedData: Chest CT Scans with COVID-19 Related Findings. This dataset consists of lung CT scans with COVID-19 related findings, as well as without such findings. We will be using the associated radiological findings of the CT scans as labels to build a classifier to predict presence of viral pneumonia. Hence, the task is a binary classification problem. # Download url of normal CT scans. url = \"https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-0.zip\" filename = os.path.join(os.getcwd(), \"CT-0.zip\") keras.utils.get_file(filename, url) # Download url of abnormal CT scans. url = \"https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-23.zip\" filename = os.path.join(os.getcwd(), \"CT-23.zip\") keras.utils.get_file(filename, url) # Make a directory to store the data. os.makedirs(\"MosMedData\") # Unzip data in the newly created directory. with zipfile.ZipFile(\"CT-0.zip\", \"r\") as z_fp: z_fp.extractall(\"./MosMedData/\") with zipfile.ZipFile(\"CT-23.zip\", \"r\") as z_fp: z_fp.extractall(\"./MosMedData/\") Downloading data from https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-0.zip 1065476096/1065471431 [==============================] - 236s 0us/step Downloading data from https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-23.zip 271171584/1045162547 [======>.......................] - ETA: 2:56 Loading data and preprocessing The files are provided in Nifti format with the extension .nii. To read the scans, we use the nibabel package. You can install the package via pip install nibabel. CT scans store raw voxel intensity in Hounsfield units (HU). They range from -1024 to above 2000 in this dataset. Above 400 are bones with different radiointensity, so this is used as a higher bound. A threshold between -1000 and 400 is commonly used to normalize CT scans. To process the data, we do the following: We first rotate the volumes by 90 degrees, so the orientation is fixed We scale the HU values to be between 0 and 1. We resize width, height and depth. Here we define several helper functions to process the data. These functions will be used when building training and validation datasets. 
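As a quick, standalone numeric check of the windowing described above: clipping to the [-1000, 400] HU range and rescaling linearly maps -1000 HU to 0, 400 HU to 1, and, for example, -300 HU to 0.5.

import numpy as np

hu_min, hu_max = -1000.0, 400.0
voxels = np.array([-2000.0, -1000.0, -300.0, 0.0, 400.0, 2000.0])
scaled = (np.clip(voxels, hu_min, hu_max) - hu_min) / (hu_max - hu_min)
print(scaled)  # approximately [0, 0, 0.5, 0.71, 1, 1]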
import nibabel as nib from scipy import ndimage def read_nifti_file(filepath): \"\"\"Read and load volume\"\"\" # Read file scan = nib.load(filepath) # Get raw data scan = scan.get_fdata() return scan def normalize(volume): \"\"\"Normalize the volume\"\"\" min = -1000 max = 400 volume[volume < min] = min volume[volume > max] = max volume = (volume - min) / (max - min) volume = volume.astype(\"float32\") return volume def resize_volume(img): \"\"\"Resize across z-axis\"\"\" # Set the desired depth desired_depth = 64 desired_width = 128 desired_height = 128 # Get current depth current_depth = img.shape[-1] current_width = img.shape[0] current_height = img.shape[1] # Compute depth factor depth = current_depth / desired_depth width = current_width / desired_width height = current_height / desired_height depth_factor = 1 / depth width_factor = 1 / width height_factor = 1 / height # Rotate img = ndimage.rotate(img, 90, reshape=False) # Resize across z-axis img = ndimage.zoom(img, (width_factor, height_factor, depth_factor), order=1) return img def process_scan(path): \"\"\"Read and resize volume\"\"\" # Read scan volume = read_nifti_file(path) # Normalize volume = normalize(volume) # Resize width, height and depth volume = resize_volume(volume) return volume Let's read the paths of the CT scans from the class directories. # Folder \"CT-0\" consist of CT scans having normal lung tissue, # no CT-signs of viral pneumonia. normal_scan_paths = [ os.path.join(os.getcwd(), \"MosMedData/CT-0\", x) for x in os.listdir(\"MosMedData/CT-0\") ] # Folder \"CT-23\" consist of CT scans having several ground-glass opacifications, # involvement of lung parenchyma. abnormal_scan_paths = [ os.path.join(os.getcwd(), \"MosMedData/CT-23\", x) for x in os.listdir(\"MosMedData/CT-23\") ] print(\"CT scans with normal lung tissue: \" + str(len(normal_scan_paths))) print(\"CT scans with abnormal lung tissue: \" + str(len(abnormal_scan_paths))) CT scans with normal lung tissue: 100 CT scans with abnormal lung tissue: 100 Build train and validation datasets Read the scans from the class directories and assign labels. Downsample the scans to have shape of 128x128x64. Rescale the raw HU values to the range 0 to 1. Lastly, split the dataset into train and validation subsets. # Read and process the scans. # Each scan is resized across height, width, and depth and rescaled. abnormal_scans = np.array([process_scan(path) for path in abnormal_scan_paths]) normal_scans = np.array([process_scan(path) for path in normal_scan_paths]) # For the CT scans having presence of viral pneumonia # assign 1, for the normal ones assign 0. abnormal_labels = np.array([1 for _ in range(len(abnormal_scans))]) normal_labels = np.array([0 for _ in range(len(normal_scans))]) # Split data in the ratio 70-30 for training and validation. x_train = np.concatenate((abnormal_scans[:70], normal_scans[:70]), axis=0) y_train = np.concatenate((abnormal_labels[:70], normal_labels[:70]), axis=0) x_val = np.concatenate((abnormal_scans[70:], normal_scans[70:]), axis=0) y_val = np.concatenate((abnormal_labels[70:], normal_labels[70:]), axis=0) print( \"Number of samples in train and validation are %d and %d.\" % (x_train.shape[0], x_val.shape[0]) ) Number of samples in train and validation are 140 and 60. Data augmentation The CT scans also augmented by rotating at random angles during training. Since the data is stored in rank-3 tensors of shape (samples, height, width, depth), we add a dimension of size 1 at axis 4 to be able to perform 3D convolutions on the data. 
The new shape is thus (samples, height, width, depth, 1). There are different kinds of preprocessing and augmentation techniques out there, this example shows a few simple ones to get started. import random from scipy import ndimage @tf.function def rotate(volume): \"\"\"Rotate the volume by a few degrees\"\"\" def scipy_rotate(volume): # define some rotation angles angles = [-20, -10, -5, 5, 10, 20] # pick angles at random angle = random.choice(angles) # rotate volume volume = ndimage.rotate(volume, angle, reshape=False) volume[volume < 0] = 0 volume[volume > 1] = 1 return volume augmented_volume = tf.numpy_function(scipy_rotate, [volume], tf.float32) return augmented_volume def train_preprocessing(volume, label): \"\"\"Process training data by rotating and adding a channel.\"\"\" # Rotate volume volume = rotate(volume) volume = tf.expand_dims(volume, axis=3) return volume, label def validation_preprocessing(volume, label): \"\"\"Process validation data by only adding a channel.\"\"\" volume = tf.expand_dims(volume, axis=3) return volume, label While defining the train and validation data loader, the training data is passed through and augmentation function which randomly rotates volume at different angles. Note that both training and validation data are already rescaled to have values between 0 and 1. # Define data loaders. train_loader = tf.data.Dataset.from_tensor_slices((x_train, y_train)) validation_loader = tf.data.Dataset.from_tensor_slices((x_val, y_val)) batch_size = 2 # Augment the on the fly during training. train_dataset = ( train_loader.shuffle(len(x_train)) .map(train_preprocessing) .batch(batch_size) .prefetch(2) ) # Only rescale. validation_dataset = ( validation_loader.shuffle(len(x_val)) .map(validation_preprocessing) .batch(batch_size) .prefetch(2) ) Visualize an augmented CT scan. import matplotlib.pyplot as plt data = train_dataset.take(1) images, labels = list(data)[0] images = images.numpy() image = images[0] print(\"Dimension of the CT scan is:\", image.shape) plt.imshow(np.squeeze(image[:, :, 30]), cmap=\"gray\") Dimension of the CT scan is: (128, 128, 64, 1) png Since a CT scan has many slices, let's visualize a montage of the slices. def plot_slices(num_rows, num_columns, width, height, data): \"\"\"Plot a montage of 20 CT slices\"\"\" data = np.rot90(np.array(data)) data = np.transpose(data) data = np.reshape(data, (num_rows, num_columns, width, height)) rows_data, columns_data = data.shape[0], data.shape[1] heights = [slc[0].shape[0] for slc in data] widths = [slc.shape[1] for slc in data[0]] fig_width = 12.0 fig_height = fig_width * sum(heights) / sum(widths) f, axarr = plt.subplots( rows_data, columns_data, figsize=(fig_width, fig_height), gridspec_kw={\"height_ratios\": heights}, ) for i in range(rows_data): for j in range(columns_data): axarr[i, j].imshow(data[i][j], cmap=\"gray\") axarr[i, j].axis(\"off\") plt.subplots_adjust(wspace=0, hspace=0, left=0, right=1, bottom=0, top=1) plt.show() # Visualize montage of slices. # 4 rows and 10 columns for 100 slices of the CT scan. plot_slices(4, 10, 128, 128, image[:, :, :40]) png Define a 3D convolutional neural network To make the model easier to understand, we structure it into blocks. The architecture of the 3D CNN used in this example is based on this paper. 
def get_model(width=128, height=128, depth=64): \"\"\"Build a 3D convolutional neural network model.\"\"\" inputs = keras.Input((width, height, depth, 1)) x = layers.Conv3D(filters=64, kernel_size=3, activation=\"relu\")(inputs) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv3D(filters=64, kernel_size=3, activation=\"relu\")(x) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv3D(filters=128, kernel_size=3, activation=\"relu\")(x) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv3D(filters=256, kernel_size=3, activation=\"relu\")(x) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.GlobalAveragePooling3D()(x) x = layers.Dense(units=512, activation=\"relu\")(x) x = layers.Dropout(0.3)(x) outputs = layers.Dense(units=1, activation=\"sigmoid\")(x) # Define the model. model = keras.Model(inputs, outputs, name=\"3dcnn\") return model # Build model. model = get_model(width=128, height=128, depth=64) model.summary() Model: \"3dcnn\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 128, 128, 64, 1)] 0 _________________________________________________________________ conv3d (Conv3D) (None, 126, 126, 62, 64) 1792 _________________________________________________________________ max_pooling3d (MaxPooling3D) (None, 63, 63, 31, 64) 0 _________________________________________________________________ batch_normalization (BatchNo (None, 63, 63, 31, 64) 256 _________________________________________________________________ conv3d_1 (Conv3D) (None, 61, 61, 29, 64) 110656 _________________________________________________________________ max_pooling3d_1 (MaxPooling3 (None, 30, 30, 14, 64) 0 _________________________________________________________________ batch_normalization_1 (Batch (None, 30, 30, 14, 64) 256 _________________________________________________________________ conv3d_2 (Conv3D) (None, 28, 28, 12, 128) 221312 _________________________________________________________________ max_pooling3d_2 (MaxPooling3 (None, 14, 14, 6, 128) 0 _________________________________________________________________ batch_normalization_2 (Batch (None, 14, 14, 6, 128) 512 _________________________________________________________________ conv3d_3 (Conv3D) (None, 12, 12, 4, 256) 884992 _________________________________________________________________ max_pooling3d_3 (MaxPooling3 (None, 6, 6, 2, 256) 0 _________________________________________________________________ batch_normalization_3 (Batch (None, 6, 6, 2, 256) 1024 _________________________________________________________________ global_average_pooling3d (Gl (None, 256) 0 _________________________________________________________________ dense (Dense) (None, 512) 131584 _________________________________________________________________ dropout (Dropout) (None, 512) 0 _________________________________________________________________ dense_1 (Dense) (None, 1) 513 ================================================================= Total params: 1,352,897 Trainable params: 1,351,873 Non-trainable params: 1,024 _________________________________________________________________ Train model # Compile model. 
initial_learning_rate = 0.0001 lr_schedule = keras.optimizers.schedules.ExponentialDecay( initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True ) model.compile( loss=\"binary_crossentropy\", optimizer=keras.optimizers.Adam(learning_rate=lr_schedule), metrics=[\"acc\"], ) # Define callbacks. checkpoint_cb = keras.callbacks.ModelCheckpoint( \"3d_image_classification.h5\", save_best_only=True ) early_stopping_cb = keras.callbacks.EarlyStopping(monitor=\"val_acc\", patience=15) # Train the model, doing validation at the end of each epoch epochs = 100 model.fit( train_dataset, validation_data=validation_dataset, epochs=epochs, shuffle=True, verbose=2, callbacks=[checkpoint_cb, early_stopping_cb], ) Epoch 1/100 70/70 - 12s - loss: 0.7031 - acc: 0.5286 - val_loss: 1.1421 - val_acc: 0.5000 Epoch 2/100 70/70 - 12s - loss: 0.6769 - acc: 0.5929 - val_loss: 1.3491 - val_acc: 0.5000 Epoch 3/100 70/70 - 12s - loss: 0.6543 - acc: 0.6286 - val_loss: 1.5108 - val_acc: 0.5000 Epoch 4/100 70/70 - 12s - loss: 0.6236 - acc: 0.6714 - val_loss: 2.5255 - val_acc: 0.5000 Epoch 5/100 70/70 - 12s - loss: 0.6628 - acc: 0.6000 - val_loss: 1.8446 - val_acc: 0.5000 Epoch 6/100 70/70 - 12s - loss: 0.6621 - acc: 0.6071 - val_loss: 1.9661 - val_acc: 0.5000 Epoch 7/100 70/70 - 12s - loss: 0.6346 - acc: 0.6571 - val_loss: 2.8997 - val_acc: 0.5000 Epoch 8/100 70/70 - 12s - loss: 0.6501 - acc: 0.6071 - val_loss: 1.6101 - val_acc: 0.5000 Epoch 9/100 70/70 - 12s - loss: 0.6065 - acc: 0.6571 - val_loss: 0.8688 - val_acc: 0.6167 Epoch 10/100 70/70 - 12s - loss: 0.5970 - acc: 0.6714 - val_loss: 0.8802 - val_acc: 0.5167 Epoch 11/100 70/70 - 12s - loss: 0.5910 - acc: 0.7143 - val_loss: 0.7282 - val_acc: 0.6333 Epoch 12/100 70/70 - 12s - loss: 0.6147 - acc: 0.6500 - val_loss: 0.5828 - val_acc: 0.7500 Epoch 13/100 70/70 - 12s - loss: 0.5641 - acc: 0.7214 - val_loss: 0.7080 - val_acc: 0.6667 Epoch 14/100 70/70 - 12s - loss: 0.5664 - acc: 0.6857 - val_loss: 0.5641 - val_acc: 0.7000 Epoch 15/100 70/70 - 12s - loss: 0.5924 - acc: 0.6929 - val_loss: 0.7595 - val_acc: 0.6000 Epoch 16/100 70/70 - 12s - loss: 0.5389 - acc: 0.7071 - val_loss: 0.5719 - val_acc: 0.7833 Epoch 17/100 70/70 - 12s - loss: 0.5493 - acc: 0.6714 - val_loss: 0.5234 - val_acc: 0.7500 Epoch 18/100 70/70 - 12s - loss: 0.5050 - acc: 0.7786 - val_loss: 0.7359 - val_acc: 0.6000 Epoch 19/100 70/70 - 12s - loss: 0.5152 - acc: 0.7286 - val_loss: 0.6469 - val_acc: 0.6500 Epoch 20/100 70/70 - 12s - loss: 0.5015 - acc: 0.7786 - val_loss: 0.5651 - val_acc: 0.7333 Epoch 21/100 70/70 - 12s - loss: 0.4975 - acc: 0.7786 - val_loss: 0.8707 - val_acc: 0.5500 Epoch 22/100 70/70 - 12s - loss: 0.4470 - acc: 0.7714 - val_loss: 0.5577 - val_acc: 0.7500 Epoch 23/100 70/70 - 12s - loss: 0.5489 - acc: 0.7071 - val_loss: 0.9929 - val_acc: 0.6500 Epoch 24/100 70/70 - 12s - loss: 0.5045 - acc: 0.7357 - val_loss: 0.5891 - val_acc: 0.7333 Epoch 25/100 70/70 - 12s - loss: 0.5598 - acc: 0.7500 - val_loss: 0.5703 - val_acc: 0.7667 Epoch 26/100 70/70 - 12s - loss: 0.4822 - acc: 0.7429 - val_loss: 0.5631 - val_acc: 0.7333 Epoch 27/100 70/70 - 12s - loss: 0.5572 - acc: 0.7000 - val_loss: 0.6255 - val_acc: 0.6500 Epoch 28/100 70/70 - 12s - loss: 0.4694 - acc: 0.7643 - val_loss: 0.7007 - val_acc: 0.6833 Epoch 29/100 70/70 - 12s - loss: 0.4870 - acc: 0.7571 - val_loss: 1.7148 - val_acc: 0.5667 Epoch 30/100 70/70 - 12s - loss: 0.4794 - acc: 0.7500 - val_loss: 0.5744 - val_acc: 0.7333 Epoch 31/100 70/70 - 12s - loss: 0.4632 - acc: 0.7857 - val_loss: 0.7787 - val_acc: 0.5833 It is important to 
note that the number of samples is very small (only 200) and we don't specify a random seed. As such, you can expect significant variance in the results. The full dataset, which consists of over 1000 CT scans, can be found here. Using the full dataset, an accuracy of 83% was achieved. A variability of 6-7% in the classification performance is observed in both cases. Visualizing model performance Here the model accuracy and loss for the training and the validation sets are plotted. Since the validation set is class-balanced, accuracy provides an unbiased representation of the model's performance. fig, ax = plt.subplots(1, 2, figsize=(20, 3)) ax = ax.ravel() for i, metric in enumerate([\"acc\", \"loss\"]): ax[i].plot(model.history.history[metric]) ax[i].plot(model.history.history[\"val_\" + metric]) ax[i].set_title(\"Model {}\".format(metric)) ax[i].set_xlabel(\"epochs\") ax[i].set_ylabel(metric) ax[i].legend([\"train\", \"val\"]) png Make predictions on a single CT scan # Load best weights. model.load_weights(\"3d_image_classification.h5\") prediction = model.predict(np.expand_dims(x_val[0], axis=0))[0] scores = [1 - prediction[0], prediction[0]] class_names = [\"normal\", \"abnormal\"] for score, name in zip(scores, class_names): print( \"This model is %.2f percent confident that CT scan is %s\" % ((100 * score), name) ) This model is 26.60 percent confident that CT scan is normal This model is 73.40 percent confident that CT scan is abnormal Minimal implementation of volumetric rendering as shown in NeRF. Introduction In this example, we present a minimal implementation of the research paper NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis by Ben Mildenhall et al. The authors have proposed an ingenious way to synthesize novel views of a scene by modelling the volumetric scene function through a neural network. To help you understand this intuitively, let's start with the following question: would it be possible to give a neural network the position of a pixel in an image, and ask the network to predict the color at that position? 2d-train Figure 1: A neural network being given coordinates of an image as input and asked to predict the color at the coordinates. The neural network would hypothetically memorize (overfit on) the image. This means that our neural network would have encoded the entire image in its weights. We could query the neural network with each position, and it would eventually reconstruct the entire image. 2d-test Figure 2: The trained neural network recreates the image from scratch. A question now arises: how do we extend this idea to learn a 3D volumetric scene? Implementing a similar process as above would require knowledge of every voxel (volume pixel), which turns out to be quite challenging. The authors of the paper propose a minimal and elegant way to learn a 3D scene using just a few images of the scene. They discard the use of voxels for training. The network learns to model the volumetric scene, thus generating novel views (images) of the 3D scene that the model was not shown at training time. There are a few prerequisites one needs to understand to fully appreciate the process. We structure the example in such a way that you will have all the required knowledge before starting the implementation.
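To make the pixel-coordinate idea from Figures 1 and 2 concrete, here is a minimal, hypothetical sketch (not part of the original example) of a tiny MLP that maps a normalized (x, y) coordinate to an RGB color. Trained long enough on a single image, such a network memorizes (overfits on) that image, and querying it at every coordinate reconstructs an approximation of the image:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# A toy image to memorize (any (H, W, 3) array with values in [0, 1] would do).
H, W = 32, 32
img = np.random.rand(H, W, 3).astype(\"float32\")

# Every pixel becomes one training pair: normalized (x, y) coordinate -> RGB color.
ys, xs = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W), indexing=\"ij\")
coords = np.stack([xs.ravel(), ys.ravel()], axis=-1)  # Shape: (H * W, 2).
colors = img.reshape(-1, 3)  # Shape: (H * W, 3).

# A small MLP that maps a coordinate to a color.
coord_to_rgb = keras.Sequential(
    [
        layers.Input(shape=(2,)),
        layers.Dense(256, activation=\"relu\"),
        layers.Dense(256, activation=\"relu\"),
        layers.Dense(3, activation=\"sigmoid\"),
    ]
)
coord_to_rgb.compile(optimizer=\"adam\", loss=\"mse\")
coord_to_rgb.fit(coords, colors, epochs=200, batch_size=256, verbose=0)

# Querying the network at every coordinate reconstructs (an approximation of) the image.
reconstruction = coord_to_rgb.predict(coords, verbose=0).reshape(H, W, 3)

NeRF extends this coordinate-to-value idea from 2D pixel positions to 3D points sampled along camera rays.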
Setup # Setting random seed to obtain reproducible results. import tensorflow as tf tf.random.set_seed(42) import os import glob import imageio import numpy as np from tqdm import tqdm from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt # Initialize global variables. AUTO = tf.data.AUTOTUNE BATCH_SIZE = 5 NUM_SAMPLES = 32 POS_ENCODE_DIMS = 16 EPOCHS = 20 Download and load the data The npz data file contains images, camera poses, and a focal length. The images are taken from multiple camera angles as shown in Figure 3. camera-angles Figure 3: Multiple camera angles Source: NeRF To understand camera poses in this context, we first have to allow ourselves to think of a camera as a mapping between the real world and the 2-D image. mapping Figure 4: 3-D world to 2-D image mapping through a camera Source: Mathworks Consider the following equation: x = PX, where x is the 2-D image point, X is the 3-D world point and P is the camera matrix. P is a 3 x 4 matrix that plays the crucial role of mapping the real-world object onto an image plane. The camera matrix is an affine transform matrix that is concatenated with a 3 x 1 column [image height, image width, focal length] to produce the pose matrix. This matrix is of dimensions 3 x 5 where the first 3 x 3 block is in the camera’s point of view. The axes are [down, right, backwards] or [-y, x, z] where the camera is facing forwards -z. camera-mapping Figure 5: The affine transformation. The COLMAP frame is [right, down, forwards] or [x, -y, -z]. Read more about COLMAP here. # Download the data if it does not already exist. file_name = \"tiny_nerf_data.npz\" url = \"https://people.eecs.berkeley.edu/~bmild/nerf/tiny_nerf_data.npz\" if not os.path.exists(file_name): data = keras.utils.get_file(fname=file_name, origin=url) data = np.load(data) images = data[\"images\"] im_shape = images.shape (num_images, H, W, _) = images.shape (poses, focal) = (data[\"poses\"], data[\"focal\"]) # Plot a random image from the dataset for visualization. plt.imshow(images[np.random.randint(low=0, high=num_images)]) plt.show() Downloading data from https://people.eecs.berkeley.edu/~bmild/nerf/tiny_nerf_data.npz 12730368/12727482 [==============================] - 0s 0us/step png Data pipeline Now that you've understood the notion of the camera matrix and the mapping from a 3D scene to 2D images, let's talk about the inverse mapping, i.e. from 2D image to the 3D scene. We'll need to talk about volumetric rendering with ray casting and tracing, which are common computer graphics techniques. This section will help you get up to speed with these techniques. Consider an image with N pixels. We shoot a ray through each pixel and sample some points on the ray. A ray is commonly parameterized by the equation r(t) = o + td where t is the parameter, o is the origin and d is the unit directional vector as shown in Figure 6. img Figure 6: r(t) = o + td where t is 3 In Figure 7, we consider a ray, and we sample some random points on the ray. These sample points each have a unique location (x, y, z) and the ray has a viewing angle (theta, phi). The viewing angle is particularly interesting as we can shoot a ray through a single pixel in a lot of different ways, each with a unique viewing angle. Another interesting thing to notice here is the noise that is added to the sampling process. We add uniform noise to each sample so that the samples correspond to a continuous distribution.
In Figure 7 the blue points are the evenly distributed samples and the white points (t1, t2, t3) are randomly placed between the samples. img Figure 7: Sampling the points from a ray. Figure 8 showcases the entire sampling process in 3D, where you can see the rays coming out of the white image. This means that each pixel will have its corresponding rays and each ray will be sampled at distinct points. 3-d rays Figure 8: Shooting rays from all the pixels of an image in 3-D These sampled points act as the input to the NeRF model. The model is then asked to predict the RGB color and the volume density at that point. 3-Drender Figure 9: Data pipeline Source: NeRF def encode_position(x): \"\"\"Encodes the position into its corresponding Fourier feature. Args: x: The input coordinate. Returns: Fourier features tensors of the position. \"\"\" positions = [x] for i in range(POS_ENCODE_DIMS): for fn in [tf.sin, tf.cos]: positions.append(fn(2.0 ** i * x)) return tf.concat(positions, axis=-1) def get_rays(height, width, focal, pose): \"\"\"Computes origin point and direction vector of rays. Args: height: Height of the image. width: Width of the image. focal: The focal length between the images and the camera. pose: The pose matrix of the camera. Returns: Tuple of origin point and direction vector for rays. \"\"\" # Build a meshgrid for the rays. i, j = tf.meshgrid( tf.range(width, dtype=tf.float32), tf.range(height, dtype=tf.float32), indexing=\"xy\", ) # Normalize the x axis coordinates. transformed_i = (i - width * 0.5) / focal # Normalize the y axis coordinates. transformed_j = (j - height * 0.5) / focal # Create the direction unit vectors. directions = tf.stack([transformed_i, -transformed_j, -tf.ones_like(i)], axis=-1) # Get the camera matrix. camera_matrix = pose[:3, :3] height_width_focal = pose[:3, -1] # Get origins and directions for the rays. transformed_dirs = directions[..., None, :] camera_dirs = transformed_dirs * camera_matrix ray_directions = tf.reduce_sum(camera_dirs, axis=-1) ray_origins = tf.broadcast_to(height_width_focal, tf.shape(ray_directions)) # Return the origins and directions. return (ray_origins, ray_directions) def render_flat_rays(ray_origins, ray_directions, near, far, num_samples, rand=False): \"\"\"Renders the rays and flattens it. Args: ray_origins: The origin points for rays. ray_directions: The direction unit vectors for the rays. near: The near bound of the volumetric scene. far: The far bound of the volumetric scene. num_samples: Number of sample points in a ray. rand: Choice for randomising the sampling strategy. Returns: Tuple of flattened rays and sample points on each rays. \"\"\" # Compute 3D query points. # Equation: r(t) = o+td -> Building the \"t\" here. t_vals = tf.linspace(near, far, num_samples) if rand: # Inject uniform noise into sample space to make the sampling # continuous. shape = list(ray_origins.shape[:-1]) + [num_samples] noise = tf.random.uniform(shape=shape) * (far - near) / num_samples t_vals = t_vals + noise # Equation: r(t) = o + td -> Building the \"r\" here. rays = ray_origins[..., None, :] + ( ray_directions[..., None, :] * t_vals[..., None] ) rays_flat = tf.reshape(rays, [-1, 3]) rays_flat = encode_position(rays_flat) return (rays_flat, t_vals) def map_fn(pose): \"\"\"Maps individual pose to flattened rays and sample points. Args: pose: The pose matrix of the camera. Returns: Tuple of flattened rays and sample points corresponding to the camera pose. 
\"\"\" (ray_origins, ray_directions) = get_rays(height=H, width=W, focal=focal, pose=pose) (rays_flat, t_vals) = render_flat_rays( ray_origins=ray_origins, ray_directions=ray_directions, near=2.0, far=6.0, num_samples=NUM_SAMPLES, rand=True, ) return (rays_flat, t_vals) # Create the training split. split_index = int(num_images * 0.8) # Split the images into training and validation. train_images = images[:split_index] val_images = images[split_index:] # Split the poses into training and validation. train_poses = poses[:split_index] val_poses = poses[split_index:] # Make the training pipeline. train_img_ds = tf.data.Dataset.from_tensor_slices(train_images) train_pose_ds = tf.data.Dataset.from_tensor_slices(train_poses) train_ray_ds = train_pose_ds.map(map_fn, num_parallel_calls=AUTO) training_ds = tf.data.Dataset.zip((train_img_ds, train_ray_ds)) train_ds = ( training_ds.shuffle(BATCH_SIZE) .batch(BATCH_SIZE, drop_remainder=True, num_parallel_calls=AUTO) .prefetch(AUTO) ) # Make the validation pipeline. val_img_ds = tf.data.Dataset.from_tensor_slices(val_images) val_pose_ds = tf.data.Dataset.from_tensor_slices(val_poses) val_ray_ds = val_pose_ds.map(map_fn, num_parallel_calls=AUTO) validation_ds = tf.data.Dataset.zip((val_img_ds, val_ray_ds)) val_ds = ( validation_ds.shuffle(BATCH_SIZE) .batch(BATCH_SIZE, drop_remainder=True, num_parallel_calls=AUTO) .prefetch(AUTO) ) NeRF model The model is a multi-layer perceptron (MLP), with ReLU as its non-linearity. An excerpt from the paper: \"We encourage the representation to be multiview-consistent by restricting the network to predict the volume density sigma as a function of only the location x, while allowing the RGB color c to be predicted as a function of both location and viewing direction. To accomplish this, the MLP first processes the input 3D coordinate x with 8 fully-connected layers (using ReLU activations and 256 channels per layer), and outputs sigma and a 256-dimensional feature vector. This feature vector is then concatenated with the camera ray's viewing direction and passed to one additional fully-connected layer (using a ReLU activation and 128 channels) that output the view-dependent RGB color.\" Here we have gone for a minimal implementation and have used 64 Dense units instead of 256 as mentioned in the paper. def get_nerf_model(num_layers, num_pos): \"\"\"Generates the NeRF neural network. Args: num_layers: The number of MLP layers. num_pos: The number of dimensions of positional encoding. Returns: The [`tf.keras`](https://www.tensorflow.org/api_docs/python/tf/keras) model. \"\"\" inputs = keras.Input(shape=(num_pos, 2 * 3 * POS_ENCODE_DIMS + 3)) x = inputs for i in range(num_layers): x = layers.Dense(units=64, activation=\"relu\")(x) if i % 4 == 0 and i > 0: # Inject residual connection. x = layers.concatenate([x, inputs], axis=-1) outputs = layers.Dense(units=4)(x) return keras.Model(inputs=inputs, outputs=outputs) def render_rgb_depth(model, rays_flat, t_vals, rand=True, train=True): \"\"\"Generates the RGB image and depth map from model prediction. Args: model: The MLP model that is trained to predict the rgb and volume density of the volumetric scene. rays_flat: The flattened rays that serve as the input to the NeRF model. t_vals: The sample points for the rays. rand: Choice to randomise the sampling strategy. train: Whether the model is in the training or testing phase. Returns: Tuple of rgb image and depth map. \"\"\" # Get the predictions from the nerf model and reshape it. 
if train: predictions = model(rays_flat) else: predictions = model.predict(rays_flat) predictions = tf.reshape(predictions, shape=(BATCH_SIZE, H, W, NUM_SAMPLES, 4)) # Slice the predictions into rgb and sigma. rgb = tf.sigmoid(predictions[..., :-1]) sigma_a = tf.nn.relu(predictions[..., -1]) # Get the distance of adjacent intervals. delta = t_vals[..., 1:] - t_vals[..., :-1] # delta shape = (num_samples) if rand: delta = tf.concat( [delta, tf.broadcast_to([1e10], shape=(BATCH_SIZE, H, W, 1))], axis=-1 ) alpha = 1.0 - tf.exp(-sigma_a * delta) else: delta = tf.concat( [delta, tf.broadcast_to([1e10], shape=(BATCH_SIZE, 1))], axis=-1 ) alpha = 1.0 - tf.exp(-sigma_a * delta[:, None, None, :]) # Get transmittance. exp_term = 1.0 - alpha epsilon = 1e-10 transmittance = tf.math.cumprod(exp_term + epsilon, axis=-1, exclusive=True) weights = alpha * transmittance rgb = tf.reduce_sum(weights[..., None] * rgb, axis=-2) if rand: depth_map = tf.reduce_sum(weights * t_vals, axis=-1) else: depth_map = tf.reduce_sum(weights * t_vals[:, None, None], axis=-1) return (rgb, depth_map) Training The training step is implemented as part of a custom keras.Model subclass so that we can make use of the model.fit functionality. class NeRF(keras.Model): def __init__(self, nerf_model): super().__init__() self.nerf_model = nerf_model def compile(self, optimizer, loss_fn): super().compile() self.optimizer = optimizer self.loss_fn = loss_fn self.loss_tracker = keras.metrics.Mean(name=\"loss\") self.psnr_metric = keras.metrics.Mean(name=\"psnr\") def train_step(self, inputs): # Get the images and the rays. (images, rays) = inputs (rays_flat, t_vals) = rays with tf.GradientTape() as tape: # Get the predictions from the model. rgb, _ = render_rgb_depth( model=self.nerf_model, rays_flat=rays_flat, t_vals=t_vals, rand=True ) loss = self.loss_fn(images, rgb) # Get the trainable variables. trainable_variables = self.nerf_model.trainable_variables # Get the gradeints of the trainiable variables with respect to the loss. gradients = tape.gradient(loss, trainable_variables) # Apply the grads and optimize the model. self.optimizer.apply_gradients(zip(gradients, trainable_variables)) # Get the PSNR of the reconstructed images and the source images. psnr = tf.image.psnr(images, rgb, max_val=1.0) # Compute our own metrics self.loss_tracker.update_state(loss) self.psnr_metric.update_state(psnr) return {\"loss\": self.loss_tracker.result(), \"psnr\": self.psnr_metric.result()} def test_step(self, inputs): # Get the images and the rays. (images, rays) = inputs (rays_flat, t_vals) = rays # Get the predictions from the model. rgb, _ = render_rgb_depth( model=self.nerf_model, rays_flat=rays_flat, t_vals=t_vals, rand=True ) loss = self.loss_fn(images, rgb) # Get the PSNR of the reconstructed images and the source images. psnr = tf.image.psnr(images, rgb, max_val=1.0) # Compute our own metrics self.loss_tracker.update_state(loss) self.psnr_metric.update_state(psnr) return {\"loss\": self.loss_tracker.result(), \"psnr\": self.psnr_metric.result()} @property def metrics(self): return [self.loss_tracker, self.psnr_metric] test_imgs, test_rays = next(iter(train_ds)) test_rays_flat, test_t_vals = test_rays loss_list = [] class TrainMonitor(keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): loss = logs[\"loss\"] loss_list.append(loss) test_recons_images, depth_maps = render_rgb_depth( model=self.model.nerf_model, rays_flat=test_rays_flat, t_vals=test_t_vals, rand=True, train=False, ) # Plot the rgb, depth and the loss plot. 
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(20, 5)) ax[0].imshow(keras.preprocessing.image.array_to_img(test_recons_images[0])) ax[0].set_title(f\"Predicted Image: {epoch:03d}\") ax[1].imshow(keras.preprocessing.image.array_to_img(depth_maps[0, ..., None])) ax[1].set_title(f\"Depth Map: {epoch:03d}\") ax[2].plot(loss_list) ax[2].set_xticks(np.arange(0, EPOCHS + 1, 5.0)) ax[2].set_title(f\"Loss Plot: {epoch:03d}\") fig.savefig(f\"images/{epoch:03d}.png\") plt.show() plt.close() num_pos = H * W * NUM_SAMPLES nerf_model = get_nerf_model(num_layers=8, num_pos=num_pos) model = NeRF(nerf_model) model.compile( optimizer=keras.optimizers.Adam(), loss_fn=keras.losses.MeanSquaredError() ) # Create a directory to save the images during training. if not os.path.exists(\"images\"): os.makedirs(\"images\") model.fit( train_ds, validation_data=val_ds, batch_size=BATCH_SIZE, epochs=EPOCHS, callbacks=[TrainMonitor()], steps_per_epoch=split_index // BATCH_SIZE, ) def create_gif(path_to_images, name_gif): filenames = glob.glob(path_to_images) filenames = sorted(filenames) images = [] for filename in tqdm(filenames): images.append(imageio.imread(filename)) kargs = {\"duration\": 0.25} imageio.mimsave(name_gif, images, \"GIF\", **kargs) create_gif(\"images/*.png\", \"training.gif\") Epoch 1/20 16/16 [==============================] - 15s 753ms/step - loss: 0.1134 - psnr: 9.7278 - val_loss: 0.0683 - val_psnr: 12.0722 png Epoch 2/20 16/16 [==============================] - 13s 752ms/step - loss: 0.0648 - psnr: 12.4200 - val_loss: 0.0664 - val_psnr: 12.1765 png Epoch 3/20 16/16 [==============================] - 13s 746ms/step - loss: 0.0607 - psnr: 12.5281 - val_loss: 0.0673 - val_psnr: 12.0121 png Epoch 4/20 16/16 [==============================] - 13s 758ms/step - loss: 0.0595 - psnr: 12.7050 - val_loss: 0.0646 - val_psnr: 12.2768 png Epoch 5/20 16/16 [==============================] - 13s 755ms/step - loss: 0.0583 - psnr: 12.7522 - val_loss: 0.0613 - val_psnr: 12.5351 png Epoch 6/20 16/16 [==============================] - 13s 749ms/step - loss: 0.0545 - psnr: 13.0654 - val_loss: 0.0553 - val_psnr: 12.9512 png Epoch 7/20 16/16 [==============================] - 13s 744ms/step - loss: 0.0480 - psnr: 13.6313 - val_loss: 0.0444 - val_psnr: 13.7838 png Epoch 8/20 16/16 [==============================] - 13s 763ms/step - loss: 0.0359 - psnr: 14.8570 - val_loss: 0.0342 - val_psnr: 14.8823 png Epoch 9/20 16/16 [==============================] - 13s 758ms/step - loss: 0.0299 - psnr: 15.5374 - val_loss: 0.0287 - val_psnr: 15.6171 png Epoch 10/20 16/16 [==============================] - 13s 779ms/step - loss: 0.0273 - psnr: 15.9051 - val_loss: 0.0266 - val_psnr: 15.9319 png Epoch 11/20 16/16 [==============================] - 13s 736ms/step - loss: 0.0255 - psnr: 16.1422 - val_loss: 0.0250 - val_psnr: 16.1568 png Epoch 12/20 16/16 [==============================] - 13s 746ms/step - loss: 0.0236 - psnr: 16.5074 - val_loss: 0.0233 - val_psnr: 16.4793 png Epoch 13/20 16/16 [==============================] - 13s 755ms/step - loss: 0.0217 - psnr: 16.8391 - val_loss: 0.0210 - val_psnr: 16.8950 png Epoch 14/20 16/16 [==============================] - 13s 741ms/step - loss: 0.0197 - psnr: 17.2245 - val_loss: 0.0187 - val_psnr: 17.3766 png Epoch 15/20 16/16 [==============================] - 13s 739ms/step - loss: 0.0179 - psnr: 17.6246 - val_loss: 0.0179 - val_psnr: 17.5445 png Epoch 16/20 16/16 [==============================] - 13s 735ms/step - loss: 0.0175 - psnr: 17.6998 - val_loss: 0.0180 - val_psnr: 17.5154 png 
Epoch 17/20 16/16 [==============================] - 13s 741ms/step - loss: 0.0167 - psnr: 17.9393 - val_loss: 0.0156 - val_psnr: 18.1784 png Epoch 18/20 16/16 [==============================] - 13s 750ms/step - loss: 0.0150 - psnr: 18.3875 - val_loss: 0.0151 - val_psnr: 18.2811 png Epoch 19/20 16/16 [==============================] - 13s 755ms/step - loss: 0.0141 - psnr: 18.6476 - val_loss: 0.0139 - val_psnr: 18.6216 png Epoch 20/20 16/16 [==============================] - 14s 777ms/step - loss: 0.0139 - psnr: 18.7131 - val_loss: 0.0137 - val_psnr: 18.7259 png 100%|██████████| 20/20 [00:00<00:00, 57.59it/s] Visualize the training step Here we see the training step. With the decreasing loss, the rendered image and the depth maps are getting better. In your local system, you will see the training.gif file generated. training-20 Inference In this section, we ask the model to build novel views of the scene. The model was given 106 views of the scene in the training step. The collections of training images cannot contain each and every angle of the scene. A trained model can represent the entire 3-D scene with a sparse set of training images. Here we provide different poses to the model and ask for it to give us the 2-D image corresponding to that camera view. If we infer the model for all the 360-degree views, it should provide an overview of the entire scenery from all around. # Get the trained NeRF model and infer. nerf_model = model.nerf_model test_recons_images, depth_maps = render_rgb_depth( model=nerf_model, rays_flat=test_rays_flat, t_vals=test_t_vals, rand=True, train=False, ) # Create subplots. fig, axes = plt.subplots(nrows=5, ncols=3, figsize=(10, 20)) for ax, ori_img, recons_img, depth_map in zip( axes, test_imgs, test_recons_images, depth_maps ): ax[0].imshow(keras.preprocessing.image.array_to_img(ori_img)) ax[0].set_title(\"Original\") ax[1].imshow(keras.preprocessing.image.array_to_img(recons_img)) ax[1].set_title(\"Reconstructed\") ax[2].imshow( keras.preprocessing.image.array_to_img(depth_map[..., None]), cmap=\"inferno\" ) ax[2].set_title(\"Depth Map\") png Render 3D Scene Here we will synthesize novel 3D views and stitch all of them together to render a video encompassing the 360-degree view. def get_translation_t(t): \"\"\"Get the translation matrix for movement in t.\"\"\" matrix = [ [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, t], [0, 0, 0, 1], ] return tf.convert_to_tensor(matrix, dtype=tf.float32) def get_rotation_phi(phi): \"\"\"Get the rotation matrix for movement in phi.\"\"\" matrix = [ [1, 0, 0, 0], [0, tf.cos(phi), -tf.sin(phi), 0], [0, tf.sin(phi), tf.cos(phi), 0], [0, 0, 0, 1], ] return tf.convert_to_tensor(matrix, dtype=tf.float32) def get_rotation_theta(theta): \"\"\"Get the rotation matrix for movement in theta.\"\"\" matrix = [ [tf.cos(theta), 0, -tf.sin(theta), 0], [0, 1, 0, 0], [tf.sin(theta), 0, tf.cos(theta), 0], [0, 0, 0, 1], ] return tf.convert_to_tensor(matrix, dtype=tf.float32) def pose_spherical(theta, phi, t): \"\"\" Get the camera to world matrix for the corresponding theta, phi and t. \"\"\" c2w = get_translation_t(t) c2w = get_rotation_phi(phi / 180.0 * np.pi) @ c2w c2w = get_rotation_theta(theta / 180.0 * np.pi) @ c2w c2w = np.array([[-1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]) @ c2w return c2w rgb_frames = [] batch_flat = [] batch_t = [] # Iterate over different theta value and generate scenes. for index, theta in tqdm(enumerate(np.linspace(0.0, 360.0, 120, endpoint=False))): # Get the camera to world matrix. 
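# Note (added for clarity): pose_spherical(theta, phi, t) composes a
# translation of t units along the z-axis with an elevation rotation (phi,
# fixed at -30 degrees here) and an azimuth rotation (theta, swept over 360
# degrees), then remaps the axes to the dataset's coordinate convention.
# Each resulting camera-to-world matrix places a virtual camera on a ring
# around the scene, looking inwards, which is what lets us render the full
# 360-degree video.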
c2w = pose_spherical(theta, -30.0, 4.0) ray_oris, ray_dirs = get_rays(H, W, focal, c2w) rays_flat, t_vals = render_flat_rays( ray_oris, ray_dirs, near=2.0, far=6.0, num_samples=NUM_SAMPLES, rand=False ) if index % BATCH_SIZE == 0 and index > 0: batched_flat = tf.stack(batch_flat, axis=0) batch_flat = [rays_flat] batched_t = tf.stack(batch_t, axis=0) batch_t = [t_vals] rgb, _ = render_rgb_depth( nerf_model, batched_flat, batched_t, rand=False, train=False ) temp_rgb = [np.clip(255 * img, 0.0, 255.0).astype(np.uint8) for img in rgb] rgb_frames = rgb_frames + temp_rgb else: batch_flat.append(rays_flat) batch_t.append(t_vals) rgb_video = \"rgb_video.mp4\" imageio.mimwrite(rgb_video, rgb_frames, fps=30, quality=7, macro_block_size=None) 120it [00:12, 9.24it/s] Visualize the video Here we can see the rendered 360-degree view of the scene. The model has successfully learned the entire volumetric space through the sparse set of images in only 20 epochs. You can view the rendered video saved locally, named rgb_video.mp4. rendered-video Conclusion We have produced a minimal implementation of NeRF to provide an intuition of its core ideas and methodology. This method has been used in various other works in the computer graphics space. We encourage our readers to use this code as an example, play with the hyperparameters, and visualize the outputs. Below we have also provided the outputs of the model trained for more epochs. Epochs GIF of the training step 100 100-epoch-training 200 200-epoch-training Reference NeRF repository: The official repository for NeRF. NeRF paper: The paper on NeRF. Manim Repository: We have used manim to build all the animations. Mathworks: Mathworks for the camera calibration article. Mathew's video: A great video on NeRF. Compact Convolutional Transformers As discussed in the Vision Transformers (ViT) paper, a Transformer-based architecture for vision typically requires a larger dataset than usual, as well as a longer pre-training schedule. ImageNet-1k (which has about a million images) is considered to fall under the medium-sized data regime with respect to ViTs. This is primarily because, unlike CNNs, ViTs (or a typical Transformer-based architecture) do not have well-informed inductive biases (such as convolutions for processing images). This begs the question: can't we combine the benefits of convolution and the benefits of Transformers in a single network architecture? These benefits include parameter-efficiency and self-attention to process long-range and global dependencies (interactions between different regions in an image). In Escaping the Big Data Paradigm with Compact Transformers, Hassani et al. present an approach for doing exactly this: they propose the Compact Convolutional Transformer (CCT) architecture. In this example, we will work on an implementation of CCT and we will see how well it performs on the CIFAR-10 dataset. If you are unfamiliar with the concept of self-attention or Transformers, you can read this chapter from François Chollet's book Deep Learning with Python. This example uses code snippets from another example, Image classification with Vision Transformer.
This example requires TensorFlow 2.5 or higher, as well as TensorFlow Addons, which can be installed using the following command: !pip install -U -q tensorflow-addons  |████████████████████████████████| 686kB 5.4MB/s [?25h Imports from tensorflow.keras import layers from tensorflow import keras import matplotlib.pyplot as plt import tensorflow_addons as tfa import tensorflow as tf import numpy as np Hyperparameters and constants positional_emb = True conv_layers = 2 projection_dim = 128 num_heads = 2 transformer_units = [ projection_dim, projection_dim, ] transformer_layers = 2 stochastic_depth_rate = 0.1 learning_rate = 0.001 weight_decay = 0.0001 batch_size = 128 num_epochs = 30 image_size = 32 Load CIFAR-10 dataset num_classes = 10 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) print(f\"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}\") print(f\"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}\") Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz 170500096/170498071 [==============================] - 11s 0us/step x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 10) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 10) The CCT tokenizer The first recipe introduced by the CCT authors is the tokenizer for processing the images. In a standard ViT, images are organized into uniform non-overlapping patches. This eliminates the boundary-level information present in between different patches. This is important for a neural network to effectively exploit the locality information. The figure below presents an illustration of how images are organized into patches. We already know that convolutions are quite good at exploiting locality information. So, based on this, the authors introduce an all-convolution mini-network to produce image patches. class CCTTokenizer(layers.Layer): def __init__( self, kernel_size=3, stride=1, padding=1, pooling_kernel_size=3, pooling_stride=2, num_conv_layers=conv_layers, num_output_channels=[64, 128], positional_emb=positional_emb, **kwargs, ): super(CCTTokenizer, self).__init__(**kwargs) # This is our tokenizer. self.conv_model = keras.Sequential() for i in range(num_conv_layers): self.conv_model.add( layers.Conv2D( num_output_channels[i], kernel_size, stride, padding=\"valid\", use_bias=False, activation=\"relu\", kernel_initializer=\"he_normal\", ) ) self.conv_model.add(layers.ZeroPadding2D(padding)) self.conv_model.add( layers.MaxPool2D(pooling_kernel_size, pooling_stride, \"same\") ) self.positional_emb = positional_emb def call(self, images): outputs = self.conv_model(images) # After passing the images through our mini-network the spatial dimensions # are flattened to form sequences. reshaped = tf.reshape( outputs, (-1, tf.shape(outputs)[1] * tf.shape(outputs)[2], tf.shape(outputs)[-1]), ) return reshaped def positional_embedding(self, image_size): # Positional embeddings are optional in CCT. Here, we calculate # the number of sequences and initialize an `Embedding` layer to # compute the positional embeddings later. 
if self.positional_emb: dummy_inputs = tf.ones((1, image_size, image_size, 3)) dummy_outputs = self.call(dummy_inputs) sequence_length = tf.shape(dummy_outputs)[1] projection_dim = tf.shape(dummy_outputs)[-1] embed_layer = layers.Embedding( input_dim=sequence_length, output_dim=projection_dim ) return embed_layer, sequence_length else: return None Stochastic depth for regularization Stochastic depth is a regularization technique that randomly drops a set of layers. During inference, the layers are kept as they are. It is very much similar to Dropout but only that it operates on a block of layers rather than individual nodes present inside a layer. In CCT, stochastic depth is used just before the residual blocks of a Transformers encoder. # Referred from: github.com:rwightman/pytorch-image-models. class StochasticDepth(layers.Layer): def __init__(self, drop_prop, **kwargs): super(StochasticDepth, self).__init__(**kwargs) self.drop_prob = drop_prop def call(self, x, training=None): if training: keep_prob = 1 - self.drop_prob shape = (tf.shape(x)[0],) + (1,) * (len(tf.shape(x)) - 1) random_tensor = keep_prob + tf.random.uniform(shape, 0, 1) random_tensor = tf.floor(random_tensor) return (x / keep_prob) * random_tensor return x MLP for the Transformers encoder def mlp(x, hidden_units, dropout_rate): for units in hidden_units: x = layers.Dense(units, activation=tf.nn.gelu)(x) x = layers.Dropout(dropout_rate)(x) return x Data augmentation In the original paper, the authors use AutoAugment to induce stronger regularization. For this example, we will be using the standard geometric augmentations like random cropping and flipping. # Note the rescaling layer. These layers have pre-defined inference behavior. data_augmentation = keras.Sequential( [ layers.Rescaling(scale=1.0 / 255), layers.RandomCrop(image_size, image_size), layers.RandomFlip(\"horizontal\"), ], name=\"data_augmentation\", ) The final CCT model Another recipe introduced in CCT is attention pooling or sequence pooling. In ViT, only the feature map corresponding to the class token is pooled and is then used for the subsequent classification task (or any other downstream task). In CCT, outputs from the Transformers encoder are weighted and then passed on to the final task-specific layer (in this example, we do classification). def create_cct_model( image_size=image_size, input_shape=input_shape, num_heads=num_heads, projection_dim=projection_dim, transformer_units=transformer_units, ): inputs = layers.Input(input_shape) # Augment data. augmented = data_augmentation(inputs) # Encode patches. cct_tokenizer = CCTTokenizer() encoded_patches = cct_tokenizer(augmented) # Apply positional embedding. if positional_emb: pos_embed, seq_length = cct_tokenizer.positional_embedding(image_size) positions = tf.range(start=0, limit=seq_length, delta=1) position_embeddings = pos_embed(positions) encoded_patches += position_embeddings # Calculate Stochastic Depth probabilities. dpr = [x for x in np.linspace(0, stochastic_depth_rate, transformer_layers)] # Create multiple layers of the Transformer block. for i in range(transformer_layers): # Layer normalization 1. x1 = layers.LayerNormalization(epsilon=1e-5)(encoded_patches) # Create a multi-head attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=projection_dim, dropout=0.1 )(x1, x1) # Skip connection 1. attention_output = StochasticDepth(dpr[i])(attention_output) x2 = layers.Add()([attention_output, encoded_patches]) # Layer normalization 2. 
x3 = layers.LayerNormalization(epsilon=1e-5)(x2) # MLP. x3 = mlp(x3, hidden_units=transformer_units, dropout_rate=0.1) # Skip connection 2. x3 = StochasticDepth(dpr[i])(x3) encoded_patches = layers.Add()([x3, x2]) # Apply sequence pooling. representation = layers.LayerNormalization(epsilon=1e-5)(encoded_patches) attention_weights = tf.nn.softmax(layers.Dense(1)(representation), axis=1) weighted_representation = tf.matmul( attention_weights, representation, transpose_a=True ) weighted_representation = tf.squeeze(weighted_representation, -2) # Classify outputs. logits = layers.Dense(num_classes)(weighted_representation) # Create the Keras model. model = keras.Model(inputs=inputs, outputs=logits) return model Model training and evaluation def run_experiment(model): optimizer = tfa.optimizers.AdamW(learning_rate=0.001, weight_decay=0.0001) model.compile( optimizer=optimizer, loss=keras.losses.CategoricalCrossentropy( from_logits=True, label_smoothing=0.1 ), metrics=[ keras.metrics.CategoricalAccuracy(name=\"accuracy\"), keras.metrics.TopKCategoricalAccuracy(5, name=\"top-5-accuracy\"), ], ) checkpoint_filepath = \"/tmp/checkpoint\" checkpoint_callback = keras.callbacks.ModelCheckpoint( checkpoint_filepath, monitor=\"val_accuracy\", save_best_only=True, save_weights_only=True, ) history = model.fit( x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs, validation_split=0.1, callbacks=[checkpoint_callback], ) model.load_weights(checkpoint_filepath) _, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") print(f\"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%\") return history cct_model = create_cct_model() history = run_experiment(cct_model) Epoch 1/30 352/352 [==============================] - 10s 18ms/step - loss: 1.9181 - accuracy: 0.3277 - top-5-accuracy: 0.8296 - val_loss: 1.7123 - val_accuracy: 0.4250 - val_top-5-accuracy: 0.9028 Epoch 2/30 352/352 [==============================] - 6s 16ms/step - loss: 1.5725 - accuracy: 0.5010 - top-5-accuracy: 0.9295 - val_loss: 1.5026 - val_accuracy: 0.5530 - val_top-5-accuracy: 0.9364 Epoch 3/30 352/352 [==============================] - 6s 16ms/step - loss: 1.4492 - accuracy: 0.5633 - top-5-accuracy: 0.9476 - val_loss: 1.3744 - val_accuracy: 0.6038 - val_top-5-accuracy: 0.9558 Epoch 4/30 352/352 [==============================] - 6s 16ms/step - loss: 1.3658 - accuracy: 0.6055 - top-5-accuracy: 0.9576 - val_loss: 1.3258 - val_accuracy: 0.6148 - val_top-5-accuracy: 0.9648 Epoch 5/30 352/352 [==============================] - 6s 16ms/step - loss: 1.3142 - accuracy: 0.6302 - top-5-accuracy: 0.9640 - val_loss: 1.2723 - val_accuracy: 0.6468 - val_top-5-accuracy: 0.9710 Epoch 6/30 352/352 [==============================] - 6s 16ms/step - loss: 1.2729 - accuracy: 0.6489 - top-5-accuracy: 0.9684 - val_loss: 1.2490 - val_accuracy: 0.6640 - val_top-5-accuracy: 0.9704 Epoch 7/30 352/352 [==============================] - 6s 16ms/step - loss: 1.2371 - accuracy: 0.6664 - top-5-accuracy: 0.9711 - val_loss: 1.1822 - val_accuracy: 0.6906 - val_top-5-accuracy: 0.9744 Epoch 8/30 352/352 [==============================] - 6s 16ms/step - loss: 1.1899 - accuracy: 0.6942 - top-5-accuracy: 0.9735 - val_loss: 1.1799 - val_accuracy: 0.6982 - val_top-5-accuracy: 0.9768 Epoch 9/30 352/352 [==============================] - 6s 16ms/step - loss: 1.1706 - accuracy: 0.6972 - top-5-accuracy: 0.9767 - val_loss: 1.1390 - val_accuracy: 0.7148 - val_top-5-accuracy: 0.9768 Epoch 10/30 352/352 
[==============================] - 6s 16ms/step - loss: 1.1524 - accuracy: 0.7054 - top-5-accuracy: 0.9783 - val_loss: 1.1803 - val_accuracy: 0.7000 - val_top-5-accuracy: 0.9740 Epoch 11/30 352/352 [==============================] - 6s 16ms/step - loss: 1.1219 - accuracy: 0.7222 - top-5-accuracy: 0.9798 - val_loss: 1.1066 - val_accuracy: 0.7254 - val_top-5-accuracy: 0.9812 Epoch 12/30 352/352 [==============================] - 6s 16ms/step - loss: 1.1029 - accuracy: 0.7287 - top-5-accuracy: 0.9811 - val_loss: 1.0844 - val_accuracy: 0.7388 - val_top-5-accuracy: 0.9814 Epoch 13/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0841 - accuracy: 0.7380 - top-5-accuracy: 0.9825 - val_loss: 1.1159 - val_accuracy: 0.7280 - val_top-5-accuracy: 0.9792 Epoch 14/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0677 - accuracy: 0.7462 - top-5-accuracy: 0.9832 - val_loss: 1.0862 - val_accuracy: 0.7444 - val_top-5-accuracy: 0.9834 Epoch 15/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0511 - accuracy: 0.7535 - top-5-accuracy: 0.9846 - val_loss: 1.0613 - val_accuracy: 0.7494 - val_top-5-accuracy: 0.9832 Epoch 16/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0377 - accuracy: 0.7608 - top-5-accuracy: 0.9854 - val_loss: 1.0379 - val_accuracy: 0.7606 - val_top-5-accuracy: 0.9834 Epoch 17/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0304 - accuracy: 0.7650 - top-5-accuracy: 0.9849 - val_loss: 1.0602 - val_accuracy: 0.7562 - val_top-5-accuracy: 0.9814 Epoch 18/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0121 - accuracy: 0.7746 - top-5-accuracy: 0.9869 - val_loss: 1.0430 - val_accuracy: 0.7630 - val_top-5-accuracy: 0.9834 Epoch 19/30 352/352 [==============================] - 6s 16ms/step - loss: 1.0037 - accuracy: 0.7760 - top-5-accuracy: 0.9872 - val_loss: 1.0951 - val_accuracy: 0.7460 - val_top-5-accuracy: 0.9826 Epoch 20/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9964 - accuracy: 0.7805 - top-5-accuracy: 0.9871 - val_loss: 1.0683 - val_accuracy: 0.7538 - val_top-5-accuracy: 0.9834 Epoch 21/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9838 - accuracy: 0.7850 - top-5-accuracy: 0.9886 - val_loss: 1.0185 - val_accuracy: 0.7770 - val_top-5-accuracy: 0.9876 Epoch 22/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9742 - accuracy: 0.7904 - top-5-accuracy: 0.9894 - val_loss: 1.0253 - val_accuracy: 0.7738 - val_top-5-accuracy: 0.9838 Epoch 23/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9662 - accuracy: 0.7935 - top-5-accuracy: 0.9889 - val_loss: 1.0107 - val_accuracy: 0.7786 - val_top-5-accuracy: 0.9860 Epoch 24/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9549 - accuracy: 0.7994 - top-5-accuracy: 0.9897 - val_loss: 1.0089 - val_accuracy: 0.7790 - val_top-5-accuracy: 0.9852 Epoch 25/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9522 - accuracy: 0.8018 - top-5-accuracy: 0.9896 - val_loss: 1.0214 - val_accuracy: 0.7780 - val_top-5-accuracy: 0.9866 Epoch 26/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9469 - accuracy: 0.8023 - top-5-accuracy: 0.9897 - val_loss: 0.9993 - val_accuracy: 0.7816 - val_top-5-accuracy: 0.9882 Epoch 27/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9463 - accuracy: 0.8022 - top-5-accuracy: 0.9906 - val_loss: 1.0071 - val_accuracy: 0.7848 - val_top-5-accuracy: 0.9850 Epoch 
28/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9336 - accuracy: 0.8077 - top-5-accuracy: 0.9909 - val_loss: 1.0113 - val_accuracy: 0.7868 - val_top-5-accuracy: 0.9856 Epoch 29/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9352 - accuracy: 0.8071 - top-5-accuracy: 0.9909 - val_loss: 1.0073 - val_accuracy: 0.7856 - val_top-5-accuracy: 0.9830 Epoch 30/30 352/352 [==============================] - 6s 16ms/step - loss: 0.9273 - accuracy: 0.8112 - top-5-accuracy: 0.9908 - val_loss: 1.0144 - val_accuracy: 0.7792 - val_top-5-accuracy: 0.9836 313/313 [==============================] - 2s 6ms/step - loss: 1.0396 - accuracy: 0.7676 - top-5-accuracy: 0.9839 Test accuracy: 76.76% Test top 5 accuracy: 98.39% Let's now visualize the training progress of the model. plt.plot(history.history[\"loss\"], label=\"train_loss\") plt.plot(history.history[\"val_loss\"], label=\"val_loss\") plt.xlabel(\"Epochs\") plt.ylabel(\"Loss\") plt.title(\"Train and Validation Losses Over Epochs\", fontsize=14) plt.legend() plt.grid() plt.show() png The CCT model we just trained has only about 0.4 million parameters, and it gets us to ~78% top-1 accuracy within 30 epochs. The plot above shows no signs of overfitting either. This means we can train this network for longer (perhaps with a bit more regularization) and obtain even better performance. This performance can be improved further with additional recipes such as a cosine decay learning rate schedule and other data augmentation techniques such as AutoAugment, MixUp, or CutMix. With these modifications, the authors report 95.1% top-1 accuracy on the CIFAR-10 dataset. The authors also present a number of experiments to study how the number of convolution blocks, Transformer layers, etc. affect the final performance of CCTs. For comparison, a ViT model takes about 4.7 million parameters and 100 epochs of training to reach a top-1 accuracy of 78.22% on the CIFAR-10 dataset. You can refer to this notebook to learn about the experimental setup. The authors also demonstrate the performance of Compact Convolutional Transformers on NLP tasks, where they report competitive results. Training with consistency regularization for robustness against data distribution shifts. Deep learning models excel in many image recognition tasks when the data is independent and identically distributed (i.i.d.). However, they can suffer from performance degradation caused by subtle distribution shifts in the input data (such as random noise, contrast change, and blurring). So, naturally, the question arises: why is that? As discussed in A Fourier Perspective on Model Robustness in Computer Vision, there's no reason for deep learning models to be robust against such shifts. Standard model training procedures (such as standard image classification training workflows) don't enable a model to learn beyond what's fed to it in the form of training data. In this example, we will train an image classification model while enforcing a sense of consistency inside it, by doing the following: Train a standard image classification model. Train an equal or larger model on a noisy version of the dataset (augmented using RandAugment). To do this, we will first obtain predictions of the previous model on the clean images of the dataset. We will then use these predictions and train the second model to match them on the noisy variant of the same images (a minimal sketch of this training step is shown below).
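The snippet below is a hypothetical, minimal sketch of that training step, included only to make the idea concrete; the full SelfTrainer implementation used in this example appears later. The teacher predicts on the clean batch, and the student is updated on the noisy version of the same batch so that its temperature-softened predictions match the teacher's, in addition to fitting the ground-truth labels. The helper name and signature are assumptions, not part of the original example.

import tensorflow as tf

def consistency_train_step(teacher, student, clean_images, noisy_images, labels,
                           optimizer, temperature=3.0):
    # Hypothetical helper: one consistency-training update for the student.
    # Teacher predictions on the clean images; no gradients flow through the teacher.
    teacher_logits = teacher(clean_images, training=False)
    with tf.GradientTape() as tape:
        # Student predictions on the noisy version of the same images.
        student_logits = student(noisy_images, training=True)
        # Supervised loss against the ground-truth labels.
        supervised_loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(
                labels, student_logits, from_logits=True
            )
        )
        # Consistency (distillation) loss between the softened distributions.
        consistency_loss = tf.reduce_mean(
            tf.keras.losses.kl_divergence(
                tf.nn.softmax(teacher_logits / temperature, axis=1),
                tf.nn.softmax(student_logits / temperature, axis=1),
            )
        )
        # Average the two terms, mirroring the SelfTrainer defined later.
        total_loss = (supervised_loss + consistency_loss) / 2
    gradients = tape.gradient(total_loss, student.trainable_variables)
    optimizer.apply_gradients(zip(gradients, student.trainable_variables))
    return total_loss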
This is identical to the workflow of Knowledge Distillation, but since the student model is of equal or larger size, this process is also referred to as Self-Training. This overall training workflow finds its roots in works like FixMatch, Unsupervised Data Augmentation for Consistency Training, and Noisy Student Training. Since this training process encourages a model to yield consistent predictions for clean as well as noisy images, it's often referred to as consistency training or training with consistency regularization. Although the example focuses on using consistency training to enhance the robustness of models to common corruptions, it can also serve as a template for performing weakly supervised learning. This example requires TensorFlow 2.4 or higher, as well as TensorFlow Hub and TensorFlow Models, which can be installed using the following command: !pip install -q tf-models-official tensorflow-addons Imports and setup from official.vision.image_classification.augment import RandAugment from tensorflow.keras import layers import tensorflow as tf import tensorflow_addons as tfa import matplotlib.pyplot as plt tf.random.set_seed(42) Define hyperparameters AUTO = tf.data.AUTOTUNE BATCH_SIZE = 128 EPOCHS = 5 CROP_TO = 72 RESIZE_TO = 96 Load the CIFAR-10 dataset (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() val_samples = 49500 new_train_x, new_y_train = x_train[: val_samples + 1], y_train[: val_samples + 1] val_x, val_y = x_train[val_samples:], y_train[val_samples:] Create TensorFlow Dataset objects # Initialize `RandAugment` object with 2 layers of # augmentation transforms and strength of 9. augmenter = RandAugment(num_layers=2, magnitude=9) For training the teacher model, we will only be using two geometric augmentation transforms: random horizontal flip and random crop. def preprocess_train(image, label, noisy=True): image = tf.image.random_flip_left_right(image) # We first resize the original image to a larger dimension # and then we take random crops from it. image = tf.image.resize(image, [RESIZE_TO, RESIZE_TO]) image = tf.image.random_crop(image, [CROP_TO, CROP_TO, 3]) if noisy: image = augmenter.distort(image) return image, label def preprocess_test(image, label): image = tf.image.resize(image, [CROP_TO, CROP_TO]) return image, label train_ds = tf.data.Dataset.from_tensor_slices((new_train_x, new_y_train)) validation_ds = tf.data.Dataset.from_tensor_slices((val_x, val_y)) test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)) We make sure train_clean_ds and train_noisy_ds are shuffled using the same seed to ensure their orders are exactly the same. This will be helpful when training the student model. # This dataset will be used to train the first model. train_clean_ds = ( train_ds.shuffle(BATCH_SIZE * 10, seed=42) .map(lambda x, y: (preprocess_train(x, y, noisy=False)), num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) # This prepares the `Dataset` object to use RandAugment. train_noisy_ds = ( train_ds.shuffle(BATCH_SIZE * 10, seed=42) .map(preprocess_train, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) validation_ds = ( validation_ds.map(preprocess_test, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) test_ds = ( test_ds.map(preprocess_test, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) # This dataset will be used to train the second model.
consistency_training_ds = tf.data.Dataset.zip((train_clean_ds, train_noisy_ds)) Visualize the datasets sample_images, sample_labels = next(iter(train_clean_ds)) plt.figure(figsize=(10, 10)) for i, image in enumerate(sample_images[:9]): ax = plt.subplot(3, 3, i + 1) plt.imshow(image.numpy().astype(\"int\")) plt.axis(\"off\") sample_images, sample_labels = next(iter(train_noisy_ds)) plt.figure(figsize=(10, 10)) for i, image in enumerate(sample_images[:9]): ax = plt.subplot(3, 3, i + 1) plt.imshow(image.numpy().astype(\"int\")) plt.axis(\"off\") png png Define a model building utility function We now define our model building utility. Our model is based on the ResNet50V2 architecture. def get_training_model(num_classes=10): resnet50_v2 = tf.keras.applications.ResNet50V2( weights=None, include_top=False, input_shape=(CROP_TO, CROP_TO, 3), ) model = tf.keras.Sequential( [ layers.Input((CROP_TO, CROP_TO, 3)), layers.Rescaling(scale=1.0 / 127.5, offset=-1), resnet50_v2, layers.GlobalAveragePooling2D(), layers.Dense(num_classes), ] ) return model In the interest of reproducibility, we serialize the initial random weights of the teacher network. initial_teacher_model = get_training_model() initial_teacher_model.save_weights(\"initial_teacher_model.h5\") Train the teacher model As noted in Noisy Student Training, if the teacher model is trained with geometric ensembling and when the student model is forced to mimic that, it leads to better performance. The original work uses Stochastic Depth and Dropout to bring in the ensembling part but for this example, we will use Stochastic Weight Averaging (SWA) which also resembles geometric ensembling. # Define the callbacks. reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(patience=3) early_stopping = tf.keras.callbacks.EarlyStopping( patience=10, restore_best_weights=True ) # Initialize SWA from tf-hub. SWA = tfa.optimizers.SWA # Compile and train the teacher model. teacher_model = get_training_model() teacher_model.load_weights(\"initial_teacher_model.h5\") teacher_model.compile( # Notice that we are wrapping our optimizer within SWA optimizer=SWA(tf.keras.optimizers.Adam()), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[\"accuracy\"], ) history = teacher_model.fit( train_clean_ds, epochs=EPOCHS, validation_data=validation_ds, callbacks=[reduce_lr, early_stopping], ) # Evaluate the teacher model on the test set. _, acc = teacher_model.evaluate(test_ds, verbose=0) print(f\"Test accuracy: {acc*100}%\") Epoch 1/5 387/387 [==============================] - 73s 78ms/step - loss: 1.7785 - accuracy: 0.3582 - val_loss: 2.0589 - val_accuracy: 0.3920 Epoch 2/5 387/387 [==============================] - 28s 71ms/step - loss: 1.2493 - accuracy: 0.5542 - val_loss: 1.4228 - val_accuracy: 0.5380 Epoch 3/5 387/387 [==============================] - 28s 73ms/step - loss: 1.0294 - accuracy: 0.6350 - val_loss: 1.4422 - val_accuracy: 0.5900 Epoch 4/5 387/387 [==============================] - 28s 73ms/step - loss: 0.8954 - accuracy: 0.6864 - val_loss: 1.2189 - val_accuracy: 0.6520 Epoch 5/5 387/387 [==============================] - 28s 73ms/step - loss: 0.7879 - accuracy: 0.7231 - val_loss: 0.9790 - val_accuracy: 0.6500 Test accuracy: 65.83999991416931% Define a self-training utility For this part, we will borrow the Distiller class from this Keras Example. 
# Majority of the code is taken from: # https://keras.io/examples/vision/knowledge_distillation/ class SelfTrainer(tf.keras.Model): def __init__(self, student, teacher): super(SelfTrainer, self).__init__() self.student = student self.teacher = teacher def compile( self, optimizer, metrics, student_loss_fn, distillation_loss_fn, temperature=3, ): super(SelfTrainer, self).compile(optimizer=optimizer, metrics=metrics) self.student_loss_fn = student_loss_fn self.distillation_loss_fn = distillation_loss_fn self.temperature = temperature def train_step(self, data): # Since our dataset is a zip of two independent datasets, # after initially parsing them, we segregate the # respective images and labels next. clean_ds, noisy_ds = data clean_images, _ = clean_ds noisy_images, y = noisy_ds # Forward pass of teacher teacher_predictions = self.teacher(clean_images, training=False) with tf.GradientTape() as tape: # Forward pass of student student_predictions = self.student(noisy_images, training=True) # Compute losses student_loss = self.student_loss_fn(y, student_predictions) distillation_loss = self.distillation_loss_fn( tf.nn.softmax(teacher_predictions / self.temperature, axis=1), tf.nn.softmax(student_predictions / self.temperature, axis=1), ) total_loss = (student_loss + distillation_loss) / 2 # Compute gradients trainable_vars = self.student.trainable_variables gradients = tape.gradient(total_loss, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Update the metrics configured in `compile()` self.compiled_metrics.update_state( y, tf.nn.softmax(student_predictions, axis=1) ) # Return a dict of performance results = {m.name: m.result() for m in self.metrics} results.update({\"total_loss\": total_loss}) return results def test_step(self, data): # During inference, we only pass a dataset consisting of images and labels. x, y = data # Compute predictions y_prediction = self.student(x, training=False) # Update the metrics self.compiled_metrics.update_state(y, tf.nn.softmax(y_prediction, axis=1)) # Return a dict of performance results = {m.name: m.result() for m in self.metrics} return results The only difference in this implementation is the way the loss is calculated. Instead of weighting the distillation loss and the student loss differently, we take their average, following Noisy Student Training. Train the student model # Define the callbacks. # We are using a larger decay factor to stabilize the training. reduce_lr = tf.keras.callbacks.ReduceLROnPlateau( patience=3, factor=0.5, monitor=\"val_accuracy\" ) early_stopping = tf.keras.callbacks.EarlyStopping( patience=10, restore_best_weights=True, monitor=\"val_accuracy\" ) # Compile and train the student model. self_trainer = SelfTrainer(student=get_training_model(), teacher=teacher_model) self_trainer.compile( # Notice we are *not* using SWA here. optimizer=\"adam\", metrics=[\"accuracy\"], student_loss_fn=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), distillation_loss_fn=tf.keras.losses.KLDivergence(), temperature=10, ) history = self_trainer.fit( consistency_training_ds, epochs=EPOCHS, validation_data=validation_ds, callbacks=[reduce_lr, early_stopping], ) # Evaluate the student model.
acc = self_trainer.evaluate(test_ds, verbose=0) print(f\"Test accuracy from student model: {acc*100}%\") Epoch 1/5 387/387 [==============================] - 39s 84ms/step - accuracy: 0.2112 - total_loss: 1.0629 - val_accuracy: 0.4180 Epoch 2/5 387/387 [==============================] - 32s 82ms/step - accuracy: 0.3341 - total_loss: 0.9554 - val_accuracy: 0.3900 Epoch 3/5 387/387 [==============================] - 31s 81ms/step - accuracy: 0.3873 - total_loss: 0.8852 - val_accuracy: 0.4580 Epoch 4/5 387/387 [==============================] - 31s 81ms/step - accuracy: 0.4294 - total_loss: 0.8423 - val_accuracy: 0.5660 Epoch 5/5 387/387 [==============================] - 31s 81ms/step - accuracy: 0.4547 - total_loss: 0.8093 - val_accuracy: 0.5880 Test accuracy from student model: 58.490002155303955% Assess the robustness of the models A standard benchmark of assessing the robustness of vision models is to record their performance on corrupted datasets like ImageNet-C and CIFAR-10-C both of which were proposed in Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. For this example, we will be using the CIFAR-10-C dataset which has 19 different corruptions on 5 different severity levels. To assess the robustness of the models on this dataset, we will do the following: Run the pre-trained models on the highest level of severities and obtain the top-1 accuracies. Compute the mean top-1 accuracy. For the purpose of this example, we won't be going through these steps. This is why we trained the models for only 5 epochs. You can check out this repository that demonstrates the full-scale training experiments and also the aforementioned assessment. The figure below presents an executive summary of that assessment: Mean Top-1 results stand for the CIFAR-10-C dataset and Test Top-1 results stand for the CIFAR-10 test set. It's clear that consistency training has an advantage on not only enhancing the model robustness but also on improving the standard test performance. How to train a deep convolutional autoencoder for image denoising. Introduction This example demonstrates how to implement a deep convolutional autoencoder for image denoising, mapping noisy digits images from the MNIST dataset to clean digits images. This implementation is based on an original blog post titled Building Autoencoders in Keras by François Chollet. Setup import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.keras import layers from tensorflow.keras.datasets import mnist from tensorflow.keras.models import Model def preprocess(array): \"\"\" Normalizes the supplied array and reshapes it into the appropriate format. \"\"\" array = array.astype(\"float32\") / 255.0 array = np.reshape(array, (len(array), 28, 28, 1)) return array def noise(array): \"\"\" Adds random noise to each image in the supplied array. \"\"\" noise_factor = 0.4 noisy_array = array + noise_factor * np.random.normal( loc=0.0, scale=1.0, size=array.shape ) return np.clip(noisy_array, 0.0, 1.0) def display(array1, array2): \"\"\" Displays ten random images from each one of the supplied arrays. 
\"\"\" n = 10 indices = np.random.randint(len(array1), size=n) images1 = array1[indices, :] images2 = array2[indices, :] plt.figure(figsize=(20, 4)) for i, (image1, image2) in enumerate(zip(images1, images2)): ax = plt.subplot(2, n, i + 1) plt.imshow(image1.reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax = plt.subplot(2, n, i + 1 + n) plt.imshow(image2.reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() Prepare the data # Since we only need images from the dataset to encode and decode, we # won't use the labels. (train_data, _), (test_data, _) = mnist.load_data() # Normalize and reshape the data train_data = preprocess(train_data) test_data = preprocess(test_data) # Create a copy of the data with added noise noisy_train_data = noise(train_data) noisy_test_data = noise(test_data) # Display the train data and a version of it with added noise display(train_data, noisy_train_data) png Build the autoencoder We are going to use the Functional API to build our convolutional autoencoder. input = layers.Input(shape=(28, 28, 1)) # Encoder x = layers.Conv2D(32, (3, 3), activation=\"relu\", padding=\"same\")(input) x = layers.MaxPooling2D((2, 2), padding=\"same\")(x) x = layers.Conv2D(32, (3, 3), activation=\"relu\", padding=\"same\")(x) x = layers.MaxPooling2D((2, 2), padding=\"same\")(x) # Decoder x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation=\"relu\", padding=\"same\")(x) x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation=\"relu\", padding=\"same\")(x) x = layers.Conv2D(1, (3, 3), activation=\"sigmoid\", padding=\"same\")(x) # Autoencoder autoencoder = Model(input, x) autoencoder.compile(optimizer=\"adam\", loss=\"binary_crossentropy\") autoencoder.summary() Model: \"model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 28, 28, 32) 320 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 14, 14, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 14, 14, 32) 9248 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 7, 7, 32) 0 _________________________________________________________________ conv2d_transpose (Conv2DTran (None, 14, 14, 32) 9248 _________________________________________________________________ conv2d_transpose_1 (Conv2DTr (None, 28, 28, 32) 9248 _________________________________________________________________ conv2d_2 (Conv2D) (None, 28, 28, 1) 289 ================================================================= Total params: 28,353 Trainable params: 28,353 Non-trainable params: 0 _________________________________________________________________ Now we can train our autoencoder using train_data as both our input data and target. Notice we are setting up the validation data using the same format. 
autoencoder.fit( x=train_data, y=train_data, epochs=50, batch_size=128, shuffle=True, validation_data=(test_data, test_data), ) Epoch 1/50 469/469 [==============================] - 20s 43ms/step - loss: 0.1354 - val_loss: 0.0735 Epoch 2/50 469/469 [==============================] - 21s 45ms/step - loss: 0.0719 - val_loss: 0.0698 Epoch 3/50 469/469 [==============================] - 22s 47ms/step - loss: 0.0695 - val_loss: 0.0682 Epoch 4/50 469/469 [==============================] - 23s 50ms/step - loss: 0.0684 - val_loss: 0.0674 Epoch 5/50 469/469 [==============================] - 24s 51ms/step - loss: 0.0676 - val_loss: 0.0669 Epoch 6/50 469/469 [==============================] - 26s 55ms/step - loss: 0.0671 - val_loss: 0.0663 Epoch 7/50 469/469 [==============================] - 27s 57ms/step - loss: 0.0667 - val_loss: 0.0660 Epoch 8/50 469/469 [==============================] - 26s 56ms/step - loss: 0.0663 - val_loss: 0.0657 Epoch 9/50 469/469 [==============================] - 28s 59ms/step - loss: 0.0642 - val_loss: 0.0639 Epoch 21/50 469/469 [==============================] - 28s 60ms/step - loss: 0.0642 - val_loss: 0.0638 Epoch 22/50 469/469 [==============================] - 29s 62ms/step - loss: 0.0632 - val_loss: 0.0629 Epoch 38/50 397/469 [========================>.....] - ETA: 4s - loss: 0.0632 Let's predict on our test dataset and display the original image together with the prediction from our autoencoder. Notice how the predictions are pretty close to the original images, although not quite the same. predictions = autoencoder.predict(test_data) display(test_data, predictions) png Now that we know that our autoencoder works, let's retrain it using the noisy data as our input and the clean data as our target. We want our autoencoder to learn how to denoise the images. autoencoder.fit( x=noisy_train_data, y=train_data, epochs=100, batch_size=128, shuffle=True, validation_data=(noisy_test_data, test_data), ) Epoch 1/100 469/469 [==============================] - 28s 59ms/step - loss: 0.1027 - val_loss: 0.0946 Epoch 2/100 469/469 [==============================] - 27s 57ms/step - loss: 0.0942 - val_loss: 0.0924 Epoch 3/100 469/469 [==============================] - 27s 58ms/step - loss: 0.0925 - val_loss: 0.0913 Epoch 4/100 469/469 [==============================] - 28s 60ms/step - loss: 0.0915 - val_loss: 0.0905 Epoch 5/100 469/469 [==============================] - 31s 66ms/step - loss: 0.0908 - val_loss: 0.0897 Epoch 6/100 469/469 [==============================] - 30s 64ms/step - loss: 0.0902 - val_loss: 0.0893 Epoch 7/100 469/469 [==============================] - 28s 60ms/step - loss: 0.0897 - val_loss: 0.0887 Epoch 8/100 469/469 [==============================] - 31s 66ms/step - loss: 0.0872 - val_loss: 0.0867 Epoch 19/100 469/469 [==============================] - 30s 64ms/step - loss: 0.0860 - val_loss: 0.0854 Epoch 35/100 469/469 [==============================] - 32s 68ms/step - loss: 0.0854 - val_loss: 0.0849 Epoch 52/100 469/469 [==============================] - 28s 60ms/step - loss: 0.0851 - val_loss: 0.0847 Epoch 68/100 469/469 [==============================] - 31s 66ms/step - loss: 0.0851 - val_loss: 0.0848 Epoch 69/100 469/469 [==============================] - 31s 65ms/step - loss: 0.0849 - val_loss: 0.0847 Epoch 84/100 469/469 [==============================] - 29s 63ms/step - loss: 0.0848 - val_loss: 0.0846 Let's now predict on the noisy data and display the results of our autoencoder. 
Notice how the autoencoder does an amazing job at removing the noise from the input images.

predictions = autoencoder.predict(noisy_test_data)
display(noisy_test_data, predictions)

png

Data augmentation with CutMix for image classification on CIFAR-10.

Introduction

CutMix is a data augmentation technique that addresses the issue of information loss and inefficiency present in regional dropout strategies. Instead of removing pixels and filling them with black or grey pixels or Gaussian noise, you replace the removed regions with a patch from another image, while the ground truth labels are mixed proportionally to the number of pixels contributed by each image. CutMix was proposed in CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features (Yun et al., 2019).

It is implemented via the following formulas:

\tilde{x} = \mathbf{M} \odot x_A + (\mathbf{1} - \mathbf{M}) \odot x_B
\tilde{y} = \lambda y_A + (1 - \lambda) y_B

where M is the binary mask which indicates the cutout and the fill-in regions from the two randomly drawn images, and λ (in [0, 1]) is drawn from a Beta(α, α) distribution.

The coordinates of the bounding box are B = (r_x, r_y, r_w, r_h), which indicates the cutout and fill-in regions of the two images. The bounding box sampling is represented by:

r_x \sim \mathrm{Unif}(0, W), \quad r_w = W\sqrt{1 - \lambda}
r_y \sim \mathrm{Unif}(0, H), \quad r_h = H\sqrt{1 - \lambda}

where r_x, r_y are randomly drawn from a uniform distribution whose upper bounds are the image width W and height H, respectively.

Setup

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras

np.random.seed(42)
tf.random.set_seed(42)

Load the CIFAR-10 dataset

In this example, we will use the CIFAR-10 image classification dataset.

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)

print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)

class_names = [
    "Airplane",
    "Automobile",
    "Bird",
    "Cat",
    "Deer",
    "Dog",
    "Frog",
    "Horse",
    "Ship",
    "Truck",
]

(50000, 32, 32, 3)
(50000, 10)
(10000, 32, 32, 3)
(10000, 10)

Define hyperparameters

AUTO = tf.data.AUTOTUNE
BATCH_SIZE = 32
IMG_SIZE = 32

Define the image preprocessing function

def preprocess_image(image, label):
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    image = tf.image.convert_image_dtype(image, tf.float32) / 255.0
    return image, label

Convert the data into TensorFlow Dataset objects

train_ds_one = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(1024)
    .map(preprocess_image, num_parallel_calls=AUTO)
)
train_ds_two = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(1024)
    .map(preprocess_image, num_parallel_calls=AUTO)
)

train_ds_simple = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test))

train_ds_simple = (
    train_ds_simple.map(preprocess_image, num_parallel_calls=AUTO)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)

# Combine two shuffled datasets from the same training data.
train_ds = tf.data.Dataset.zip((train_ds_one, train_ds_two))

test_ds = (
    test_ds.map(preprocess_image, num_parallel_calls=AUTO)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)

Define the CutMix data augmentation function

The cutmix function takes two image-and-label pairs to perform the augmentation. It samples λ from the Beta distribution and obtains a bounding box from the get_box function. We then crop a patch from the second image (image2) and pad it back into a full-sized image at the same location. For example, if the sampled box covers an 8x8 patch of a 32x32 image, the adjusted λ is 1 - 64/1024 ≈ 0.94, so the mixed label is roughly 0.94 * label1 + 0.06 * label2.
def sample_beta_distribution(size, concentration_0=0.2, concentration_1=0.2): gamma_1_sample = tf.random.gamma(shape=[size], alpha=concentration_1) gamma_2_sample = tf.random.gamma(shape=[size], alpha=concentration_0) return gamma_1_sample / (gamma_1_sample + gamma_2_sample) @tf.function def get_box(lambda_value): cut_rat = tf.math.sqrt(1.0 - lambda_value) cut_w = IMG_SIZE * cut_rat # rw cut_w = tf.cast(cut_w, tf.int32) cut_h = IMG_SIZE * cut_rat # rh cut_h = tf.cast(cut_h, tf.int32) cut_x = tf.random.uniform((1,), minval=0, maxval=IMG_SIZE, dtype=tf.int32) # rx cut_y = tf.random.uniform((1,), minval=0, maxval=IMG_SIZE, dtype=tf.int32) # ry boundaryx1 = tf.clip_by_value(cut_x[0] - cut_w // 2, 0, IMG_SIZE) boundaryy1 = tf.clip_by_value(cut_y[0] - cut_h // 2, 0, IMG_SIZE) bbx2 = tf.clip_by_value(cut_x[0] + cut_w // 2, 0, IMG_SIZE) bby2 = tf.clip_by_value(cut_y[0] + cut_h // 2, 0, IMG_SIZE) target_h = bby2 - boundaryy1 if target_h == 0: target_h += 1 target_w = bbx2 - boundaryx1 if target_w == 0: target_w += 1 return boundaryx1, boundaryy1, target_h, target_w @tf.function def cutmix(train_ds_one, train_ds_two): (image1, label1), (image2, label2) = train_ds_one, train_ds_two alpha = [0.25] beta = [0.25] # Get a sample from the Beta distribution lambda_value = sample_beta_distribution(1, alpha, beta) # Define Lambda lambda_value = lambda_value[0][0] # Get the bounding box offsets, heights and widths boundaryx1, boundaryy1, target_h, target_w = get_box(lambda_value) # Get a patch from the second image (`image2`) crop2 = tf.image.crop_to_bounding_box( image2, boundaryy1, boundaryx1, target_h, target_w ) # Pad the `image2` patch (`crop2`) with the same offset image2 = tf.image.pad_to_bounding_box( crop2, boundaryy1, boundaryx1, IMG_SIZE, IMG_SIZE ) # Get a patch from the first image (`image1`) crop1 = tf.image.crop_to_bounding_box( image1, boundaryy1, boundaryx1, target_h, target_w ) # Pad the `image1` patch (`crop1`) with the same offset img1 = tf.image.pad_to_bounding_box( crop1, boundaryy1, boundaryx1, IMG_SIZE, IMG_SIZE ) # Modify the first image by subtracting the patch from `image1` # (before applying the `image2` patch) image1 = image1 - img1 # Add the modified `image1` and `image2` together to get the CutMix image image = image1 + image2 # Adjust Lambda in accordance to the pixel ration lambda_value = 1 - (target_w * target_h) / (IMG_SIZE * IMG_SIZE) lambda_value = tf.cast(lambda_value, tf.float32) # Combine the labels of both images label = lambda_value * label1 + (1 - lambda_value) * label2 return image, label Note: we are combining two images to create a single one. 
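As a quick sanity check (not part of the original example), you can call cutmix on a single pair of samples drawn from the two shuffled datasets and verify that the output image has the expected shape and that the mixed label is still a valid probability distribution:

# Illustrative check only: apply `cutmix` to one sample pair and inspect the result.
sample_one = next(iter(train_ds_one))  # (image1, label1)
sample_two = next(iter(train_ds_two))  # (image2, label2)
mixed_image, mixed_label = cutmix(sample_one, sample_two)
print(mixed_image.shape)                   # (32, 32, 3)
print(tf.reduce_sum(mixed_label).numpy())  # ~1.0, since the one-hot labels are mixed convexly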
Visualize the new dataset after applying the CutMix augmentation # Create the new dataset using our `cutmix` utility train_ds_cmu = ( train_ds.shuffle(1024) .map(cutmix, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) # Let's preview 9 samples from the dataset image_batch, label_batch = next(iter(train_ds_cmu)) plt.figure(figsize=(10, 10)) for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.title(class_names[np.argmax(label_batch[i])]) plt.imshow(image_batch[i]) plt.axis(\"off\") png Define a ResNet-20 model def resnet_layer( inputs, num_filters=16, kernel_size=3, strides=1, activation=\"relu\", batch_normalization=True, conv_first=True, ): conv = keras.layers.Conv2D( num_filters, kernel_size=kernel_size, strides=strides, padding=\"same\", kernel_initializer=\"he_normal\", kernel_regularizer=keras.regularizers.l2(1e-4), ) x = inputs if conv_first: x = conv(x) if batch_normalization: x = keras.layers.BatchNormalization()(x) if activation is not None: x = keras.layers.Activation(activation)(x) else: if batch_normalization: x = keras.layers.BatchNormalization()(x) if activation is not None: x = keras.layers.Activation(activation)(x) x = conv(x) return x def resnet_v20(input_shape, depth, num_classes=10): if (depth - 2) % 6 != 0: raise ValueError(\"depth should be 6n+2 (eg 20, 32, 44 in [a])\") # Start model definition. num_filters = 16 num_res_blocks = int((depth - 2) / 6) inputs = keras.layers.Input(shape=input_shape) x = resnet_layer(inputs=inputs) # Instantiate the stack of residual units for stack in range(3): for res_block in range(num_res_blocks): strides = 1 if stack > 0 and res_block == 0: # first layer but not first stack strides = 2 # downsample y = resnet_layer(inputs=x, num_filters=num_filters, strides=strides) y = resnet_layer(inputs=y, num_filters=num_filters, activation=None) if stack > 0 and res_block == 0: # first layer but not first stack # linear projection residual shortcut connection to match # changed dims x = resnet_layer( inputs=x, num_filters=num_filters, kernel_size=1, strides=strides, activation=None, batch_normalization=False, ) x = keras.layers.add([x, y]) x = keras.layers.Activation(\"relu\")(x) num_filters *= 2 # Add classifier on top. # v1 does not use BN after last shortcut connection-ReLU x = keras.layers.AveragePooling2D(pool_size=8)(x) y = keras.layers.Flatten()(x) outputs = keras.layers.Dense( num_classes, activation=\"softmax\", kernel_initializer=\"he_normal\" )(y) # Instantiate model. 
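# With depth=20, the builder creates (20 - 2) / 6 = 3 residual blocks per stage
# across 3 stages (16 -> 32 -> 64 filters), i.e. a standard CIFAR-10 ResNet-20.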
model = keras.models.Model(inputs=inputs, outputs=outputs) return model def training_model(): return resnet_v20((32, 32, 3), 20) initial_model = training_model() initial_model.save_weights(\"initial_weights.h5\") Train the model with the dataset augmented by CutMix model = training_model() model.load_weights(\"initial_weights.h5\") model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]) model.fit(train_ds_cmu, validation_data=test_ds, epochs=15) test_loss, test_accuracy = model.evaluate(test_ds) print(\"Test accuracy: {:.2f}%\".format(test_accuracy * 100)) Epoch 1/15 1563/1563 [==============================] - 62s 24ms/step - loss: 1.9216 - accuracy: 0.4090 - val_loss: 1.9737 - val_accuracy: 0.4061 Epoch 2/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.6549 - accuracy: 0.5325 - val_loss: 1.5033 - val_accuracy: 0.5061 Epoch 3/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.5536 - accuracy: 0.5840 - val_loss: 1.2913 - val_accuracy: 0.6112 Epoch 4/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.4988 - accuracy: 0.6097 - val_loss: 1.0587 - val_accuracy: 0.7033 Epoch 5/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.4531 - accuracy: 0.6291 - val_loss: 1.0681 - val_accuracy: 0.6841 Epoch 6/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.4173 - accuracy: 0.6464 - val_loss: 1.0265 - val_accuracy: 0.7085 Epoch 7/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.3932 - accuracy: 0.6572 - val_loss: 0.9540 - val_accuracy: 0.7331 Epoch 8/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.3736 - accuracy: 0.6680 - val_loss: 0.9877 - val_accuracy: 0.7240 Epoch 9/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.3575 - accuracy: 0.6782 - val_loss: 0.8944 - val_accuracy: 0.7570 Epoch 10/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.3398 - accuracy: 0.6886 - val_loss: 0.8598 - val_accuracy: 0.7649 Epoch 11/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.3277 - accuracy: 0.6939 - val_loss: 0.9032 - val_accuracy: 0.7603 Epoch 12/15 1563/1563 [==============================] - 38s 24ms/step - loss: 1.3131 - accuracy: 0.6964 - val_loss: 0.7934 - val_accuracy: 0.7926 Epoch 13/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.3050 - accuracy: 0.7029 - val_loss: 0.8737 - val_accuracy: 0.7552 Epoch 14/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.2987 - accuracy: 0.7099 - val_loss: 0.8409 - val_accuracy: 0.7766 Epoch 15/15 1563/1563 [==============================] - 37s 24ms/step - loss: 1.2953 - accuracy: 0.7099 - val_loss: 0.7850 - val_accuracy: 0.8014 313/313 [==============================] - 3s 9ms/step - loss: 0.7850 - accuracy: 0.8014 Test accuracy: 80.14% Train the model using the original non-augmented dataset model = training_model() model.load_weights(\"initial_weights.h5\") model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]) model.fit(train_ds_simple, validation_data=test_ds, epochs=15) test_loss, test_accuracy = model.evaluate(test_ds) print(\"Test accuracy: {:.2f}%\".format(test_accuracy * 100)) Epoch 1/15 1563/1563 [==============================] - 38s 23ms/step - loss: 1.4864 - accuracy: 0.5173 - val_loss: 1.3694 - val_accuracy: 0.5708 Epoch 2/15 1563/1563 [==============================] - 36s 23ms/step - loss: 1.0682 - accuracy: 0.6779 - 
val_loss: 1.1424 - val_accuracy: 0.6686 Epoch 3/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.8955 - accuracy: 0.7449 - val_loss: 1.0555 - val_accuracy: 0.7007 Epoch 4/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.7890 - accuracy: 0.7878 - val_loss: 1.0575 - val_accuracy: 0.7079 Epoch 5/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.7107 - accuracy: 0.8175 - val_loss: 1.1395 - val_accuracy: 0.7062 Epoch 6/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.6524 - accuracy: 0.8397 - val_loss: 1.1716 - val_accuracy: 0.7042 Epoch 7/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.6098 - accuracy: 0.8594 - val_loss: 1.4120 - val_accuracy: 0.6786 Epoch 8/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.5715 - accuracy: 0.8765 - val_loss: 1.3159 - val_accuracy: 0.7011 Epoch 9/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.5477 - accuracy: 0.8872 - val_loss: 1.2873 - val_accuracy: 0.7182 Epoch 10/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.5233 - accuracy: 0.8988 - val_loss: 1.4118 - val_accuracy: 0.6964 Epoch 11/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.5165 - accuracy: 0.9045 - val_loss: 1.3741 - val_accuracy: 0.7230 Epoch 12/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.5008 - accuracy: 0.9124 - val_loss: 1.3984 - val_accuracy: 0.7181 Epoch 13/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.4896 - accuracy: 0.9190 - val_loss: 1.3642 - val_accuracy: 0.7209 Epoch 14/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.4845 - accuracy: 0.9231 - val_loss: 1.5469 - val_accuracy: 0.6992 Epoch 15/15 1563/1563 [==============================] - 36s 23ms/step - loss: 0.4749 - accuracy: 0.9294 - val_loss: 1.4034 - val_accuracy: 0.7362 313/313 [==============================] - 3s 9ms/step - loss: 1.4034 - accuracy: 0.7362 Test accuracy: 73.62% Notes In this example, we trained our model for 15 epochs. In our experiment, the model with CutMix achieves a better accuracy on the CIFAR-10 dataset (80.36% in our experiment) compared to the model that doesn't use the augmentation (72.70%). You may notice it takes less time to train the model with the CutMix augmentation. You can experiment further with the CutMix technique by following the original paper. Few-shot classification of the Omniglot dataset using Reptile. Introduction The Reptile algorithm was developed by OpenAI to perform model agnostic meta-learning. Specifically, this algorithm was designed to quickly learn to perform new tasks with minimal training (few-shot learning). The algorithm works by performing Stochastic Gradient Descent using the difference between weights trained on a mini-batch of never before seen data and the model weights prior to training over a fixed number of meta-iterations. import matplotlib.pyplot as plt import numpy as np import random import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_datasets as tfds Define the Hyperparameters learning_rate = 0.003 meta_step_size = 0.25 inner_batch_size = 25 eval_batch_size = 25 meta_iters = 2000 eval_iters = 5 inner_iters = 4 eval_interval = 1 train_shots = 20 shots = 5 classes = 5 Prepare the data The Omniglot dataset is a dataset of 1,623 characters taken from 50 different alphabets, with 20 examples for each character. 
The 20 samples for each character were drawn online via Amazon's Mechanical Turk. For the few-shot learning task, k samples (or \"shots\") are drawn randomly from n randomly-chosen classes. These n numerical values are used to create a new set of temporary labels to use to test the model's ability to learn a new task given few examples. In other words, if you are training on 5 classes, your new class labels will be either 0, 1, 2, 3, or 4. Omniglot is a great dataset for this task since there are many different classes to draw from, with a reasonable number of samples for each class. class Dataset: # This class will facilitate the creation of a few-shot dataset # from the Omniglot dataset that can be sampled from quickly while also # allowing to create new labels at the same time. def __init__(self, training): # Download the tfrecord files containing the omniglot data and convert to a # dataset. split = \"train\" if training else \"test\" ds = tfds.load(\"omniglot\", split=split, as_supervised=True, shuffle_files=False) # Iterate over the dataset to get each individual image and its class, # and put that data into a dictionary. self.data = {} def extraction(image, label): # This function will shrink the Omniglot images to the desired size, # scale pixel values and convert the RGB image to grayscale image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.rgb_to_grayscale(image) image = tf.image.resize(image, [28, 28]) return image, label for image, label in ds.map(extraction): image = image.numpy() label = str(label.numpy()) if label not in self.data: self.data[label] = [] self.data[label].append(image) self.labels = list(self.data.keys()) def get_mini_dataset( self, batch_size, repetitions, shots, num_classes, split=False ): temp_labels = np.zeros(shape=(num_classes * shots)) temp_images = np.zeros(shape=(num_classes * shots, 28, 28, 1)) if split: test_labels = np.zeros(shape=(num_classes)) test_images = np.zeros(shape=(num_classes, 28, 28, 1)) # Get a random subset of labels from the entire label set. label_subset = random.choices(self.labels, k=num_classes) for class_idx, class_obj in enumerate(label_subset): # Use enumerated index value as a temporary label for mini-batch in # few shot learning. temp_labels[class_idx * shots : (class_idx + 1) * shots] = class_idx # If creating a split dataset for testing, select an extra sample from each # label to create the test dataset. if split: test_labels[class_idx] = class_idx images_to_split = random.choices( self.data[label_subset[class_idx]], k=shots + 1 ) test_images[class_idx] = images_to_split[-1] temp_images[ class_idx * shots : (class_idx + 1) * shots ] = images_to_split[:-1] else: # For each index in the randomly selected label_subset, sample the # necessary number of images. temp_images[ class_idx * shots : (class_idx + 1) * shots ] = random.choices(self.data[label_subset[class_idx]], k=shots) dataset = tf.data.Dataset.from_tensor_slices( (temp_images.astype(np.float32), temp_labels.astype(np.int32)) ) dataset = dataset.shuffle(100).batch(batch_size).repeat(repetitions) if split: return dataset, test_images, test_labels return dataset import urllib3 urllib3.disable_warnings() # Disable SSL warnings that may happen during download. train_dataset = Dataset(training=True) test_dataset = Dataset(training=False) Downloading and preparing dataset omniglot/3.0.0 (download: 17.95 MiB, generated: Unknown size, total: 17.95 MiB) to /root/tensorflow_datasets/omniglot/3.0.0... 
Shuffling and writing examples to /root/tensorflow_datasets/omniglot/3.0.0.incompleteXTNZJN/omniglot-train.tfrecord Shuffling and writing examples to /root/tensorflow_datasets/omniglot/3.0.0.incompleteXTNZJN/omniglot-test.tfrecord Shuffling and writing examples to /root/tensorflow_datasets/omniglot/3.0.0.incompleteXTNZJN/omniglot-small1.tfrecord Shuffling and writing examples to /root/tensorflow_datasets/omniglot/3.0.0.incompleteXTNZJN/omniglot-small2.tfrecord Dataset omniglot downloaded and prepared to /root/tensorflow_datasets/omniglot/3.0.0. Subsequent calls will reuse this data. Visualize some examples from the dataset _, axarr = plt.subplots(nrows=5, ncols=5, figsize=(20, 20)) sample_keys = list(train_dataset.data.keys()) for a in range(5): for b in range(5): temp_image = train_dataset.data[sample_keys[a]][b] temp_image = np.stack((temp_image[:, :, 0],) * 3, axis=2) temp_image *= 255 temp_image = np.clip(temp_image, 0, 255).astype("uint8") if b == 2: axarr[a, b].set_title("Class : " + sample_keys[a]) axarr[a, b].imshow(temp_image, cmap="gray") axarr[a, b].xaxis.set_visible(False) axarr[a, b].yaxis.set_visible(False) plt.show() png Build the model def conv_bn(x): x = layers.Conv2D(filters=64, kernel_size=3, strides=2, padding="same")(x) x = layers.BatchNormalization()(x) return layers.ReLU()(x) inputs = layers.Input(shape=(28, 28, 1)) x = conv_bn(inputs) x = conv_bn(x) x = conv_bn(x) x = conv_bn(x) x = layers.Flatten()(x) outputs = layers.Dense(classes, activation="softmax")(x) model = keras.Model(inputs=inputs, outputs=outputs) model.compile() optimizer = keras.optimizers.SGD(learning_rate=learning_rate) Train the model training = [] testing = [] for meta_iter in range(meta_iters): frac_done = meta_iter / meta_iters cur_meta_step_size = (1 - frac_done) * meta_step_size # Temporarily save the weights from the model. old_vars = model.get_weights() # Get a sample from the full dataset. mini_dataset = train_dataset.get_mini_dataset( inner_batch_size, inner_iters, train_shots, classes ) for images, labels in mini_dataset: with tf.GradientTape() as tape: preds = model(images) loss = keras.losses.sparse_categorical_crossentropy(labels, preds) grads = tape.gradient(loss, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) new_vars = model.get_weights() # Perform SGD for the meta step. for var in range(len(new_vars)): new_vars[var] = old_vars[var] + ( (new_vars[var] - old_vars[var]) * cur_meta_step_size ) # After the meta-learning step, reload the newly-trained weights into the model.
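# The interpolation loop above is the Reptile outer update:
#     theta <- theta + eps * (phi - theta)
# where theta (`old_vars`) are the weights before the inner loop, phi are the
# weights produced by the inner SGD steps on the sampled task, and
# eps (`cur_meta_step_size`) is linearly annealed from `meta_step_size` to 0.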
model.set_weights(new_vars) # Evaluation loop if meta_iter % eval_interval == 0: accuracies = [] for dataset in (train_dataset, test_dataset): # Sample a mini dataset from the full dataset. train_set, test_images, test_labels = dataset.get_mini_dataset( eval_batch_size, eval_iters, shots, classes, split=True ) old_vars = model.get_weights() # Train on the samples and get the resulting accuracies. for images, labels in train_set: with tf.GradientTape() as tape: preds = model(images) loss = keras.losses.sparse_categorical_crossentropy(labels, preds) grads = tape.gradient(loss, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) test_preds = model.predict(test_images) test_preds = tf.argmax(test_preds).numpy() num_correct = (test_preds == test_labels).sum() # Reset the weights after getting the evaluation accuracies. model.set_weights(old_vars) accuracies.append(num_correct / classes) training.append(accuracies[0]) testing.append(accuracies[1]) if meta_iter % 100 == 0: print( \"batch %d: train=%f test=%f\" % (meta_iter, accuracies[0], accuracies[1]) ) batch 0: train=0.000000 test=0.600000 batch 100: train=0.600000 test=0.800000 batch 200: train=1.000000 test=0.600000 batch 300: train=0.600000 test=0.800000 batch 400: train=0.800000 test=1.000000 batch 500: train=1.000000 test=0.600000 batch 600: train=1.000000 test=1.000000 batch 700: train=1.000000 test=1.000000 batch 800: train=1.000000 test=0.600000 batch 900: train=1.000000 test=1.000000 batch 1000: train=0.800000 test=1.000000 batch 1100: train=1.000000 test=0.600000 batch 1200: train=0.800000 test=1.000000 batch 1300: train=0.800000 test=1.000000 batch 1400: train=1.000000 test=1.000000 batch 1500: train=0.800000 test=1.000000 batch 1600: train=1.000000 test=1.000000 batch 1700: train=1.000000 test=0.800000 batch 1800: train=1.000000 test=1.000000 batch 1900: train=0.800000 test=1.000000 Visualize Results # First, some preprocessing to smooth the training and testing arrays for display. window_length = 100 train_s = np.r_[ training[window_length - 1 : 0 : -1], training, training[-1:-window_length:-1] ] test_s = np.r_[ testing[window_length - 1 : 0 : -1], testing, testing[-1:-window_length:-1] ] w = np.hamming(window_length) train_y = np.convolve(w / w.sum(), train_s, mode=\"valid\") test_y = np.convolve(w / w.sum(), test_s, mode=\"valid\") # Display the training accuracies. x = np.arange(0, len(test_y), 1) plt.plot(x, test_y, x, train_y) plt.legend([\"test\", \"train\"]) plt.grid() train_set, test_images, test_labels = dataset.get_mini_dataset( eval_batch_size, eval_iters, shots, classes, split=True ) for images, labels in train_set: with tf.GradientTape() as tape: preds = model(images) loss = keras.losses.sparse_categorical_crossentropy(labels, preds) grads = tape.gradient(loss, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) test_preds = model.predict(test_images) test_preds = tf.argmax(test_preds).numpy() _, axarr = plt.subplots(nrows=1, ncols=5, figsize=(20, 20)) sample_keys = list(train_dataset.data.keys()) for i, ax in zip(range(5), axarr): temp_image = np.stack((test_images[i, :, :, 0],) * 3, axis=2) temp_image *= 255 temp_image = np.clip(temp_image, 0, 255).astype(\"uint8\") ax.set_title( \"Label : {}, Prediction : {}\".format(int(test_labels[i]), test_preds[i]) ) ax.imshow(temp_image, cmap=\"gray\") ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) plt.show() png png Mitigating resolution discrepancy between training and test sets. 
Introduction It is a common practice to use the same input image resolution while training and testing vision models. However, as investigated in Fixing the train-test resolution discrepancy (Touvron et al.), this practice leads to suboptimal performance. Data augmentation is an indispensable part of the training process of deep neural networks. For vision models, we typically use random resized crops during training and center crops during inference. This introduces a discrepancy in the object sizes seen during training and inference. As shown by Touvron et al., if we can fix this discrepancy, we can significantly boost model performance. In this example, we implement the FixRes techniques introduced by Touvron et al. to fix this discrepancy. Imports from tensorflow import keras from tensorflow.keras import layers import tensorflow as tf import tensorflow_datasets as tfds tfds.disable_progress_bar() import matplotlib.pyplot as plt Load the tf_flowers dataset train_dataset, val_dataset = tfds.load( \"tf_flowers\", split=[\"train[:90%]\", \"train[90%:]\"], as_supervised=True ) num_train = train_dataset.cardinality() num_val = val_dataset.cardinality() print(f\"Number of training examples: {num_train}\") print(f\"Number of validation examples: {num_val}\") Number of training examples: 3303 Number of validation examples: 367 Data preprocessing utilities We create three datasets: A dataset with a smaller resolution - 128x128. Two datasets with a larger resolution - 224x224. We will apply different augmentation transforms to the larger-resolution datasets. The idea of FixRes is to first train a model on a smaller resolution dataset and then fine-tune it on a larger resolution dataset. This simple yet effective recipe leads to non-trivial performance improvements. Please refer to the original paper for results. # Reference: https://github.com/facebookresearch/FixRes/blob/main/transforms_v2.py. batch_size = 128 auto = tf.data.AUTOTUNE smaller_size = 128 bigger_size = 224 size_for_resizing = int((bigger_size / smaller_size) * bigger_size) central_crop_layer = layers.CenterCrop(bigger_size, bigger_size) def preprocess_initial(train, image_size): \"\"\"Initial preprocessing function for training on smaller resolution. For training, do random_horizontal_flip -> random_crop. For validation, just resize. No color-jittering has been used. \"\"\" def _pp(image, label, train): if train: channels = image.shape[-1] begin, size, _ = tf.image.sample_distorted_bounding_box( tf.shape(image), tf.zeros([0, 0, 4], tf.float32), area_range=(0.05, 1.0), min_object_covered=0, use_image_if_no_bounding_boxes=True, ) image = tf.slice(image, begin, size) image.set_shape([None, None, channels]) image = tf.image.resize(image, [image_size, image_size]) image = tf.image.random_flip_left_right(image) else: image = tf.image.resize(image, [image_size, image_size]) return image, label return _pp def preprocess_finetune(image, label, train): \"\"\"Preprocessing function for fine-tuning on a higher resolution. For training, resize to a bigger resolution to maintain the ratio -> random_horizontal_flip -> center_crop. For validation, do the same without any horizontal flipping. No color-jittering has been used. 
\"\"\" image = tf.image.resize(image, [size_for_resizing, size_for_resizing]) if train: image = tf.image.random_flip_left_right(image) image = central_crop_layer(image[None, ...])[0] return image, label def make_dataset( dataset: tf.data.Dataset, train: bool, image_size: int = smaller_size, fixres: bool = True, num_parallel_calls=auto, ): if image_size not in [smaller_size, bigger_size]: raise ValueError(f\"{image_size} resolution is not supported.\") # Determine which preprocessing function we are using. if image_size == smaller_size: preprocess_func = preprocess_initial(train, image_size) elif not fixres and image_size == bigger_size: preprocess_func = preprocess_initial(train, image_size) else: preprocess_func = preprocess_finetune if train: dataset = dataset.shuffle(batch_size * 10) return ( dataset.map( lambda x, y: preprocess_func(x, y, train), num_parallel_calls=num_parallel_calls, ) .batch(batch_size) .prefetch(num_parallel_calls) ) Notice how the augmentation transforms vary for the kind of dataset we are preparing. Prepare datasets initial_train_dataset = make_dataset(train_dataset, train=True, image_size=smaller_size) initial_val_dataset = make_dataset(val_dataset, train=False, image_size=smaller_size) finetune_train_dataset = make_dataset(train_dataset, train=True, image_size=bigger_size) finetune_val_dataset = make_dataset(val_dataset, train=False, image_size=bigger_size) vanilla_train_dataset = make_dataset( train_dataset, train=True, image_size=bigger_size, fixres=False ) vanilla_val_dataset = make_dataset( val_dataset, train=False, image_size=bigger_size, fixres=False ) Visualize the datasets def visualize_dataset(batch_images): plt.figure(figsize=(10, 10)) for n in range(25): ax = plt.subplot(5, 5, n + 1) plt.imshow(batch_images[n].numpy().astype(\"int\")) plt.axis(\"off\") plt.show() print(f\"Batch shape: {batch_images.shape}.\") # Smaller resolution. initial_sample_images, _ = next(iter(initial_train_dataset)) visualize_dataset(initial_sample_images) # Bigger resolution, only for fine-tuning. finetune_sample_images, _ = next(iter(finetune_train_dataset)) visualize_dataset(finetune_sample_images) # Bigger resolution, with the same augmentation transforms as # the smaller resolution dataset. vanilla_sample_images, _ = next(iter(vanilla_train_dataset)) visualize_dataset(vanilla_sample_images) 2021-10-11 02:05:26.638594: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead. png Batch shape: (128, 128, 128, 3). 2021-10-11 02:05:28.509752: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead. png Batch shape: (128, 224, 224, 3). 2021-10-11 02:05:30.108623: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. 
This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead. png Batch shape: (128, 224, 224, 3). Model training utilities We train multiple variants of ResNet50V2 (He et al.): On the smaller resolution dataset (128x128). It will be trained from scratch. Then fine-tune the model from 1 on the larger resolution (224x224) dataset. Train another ResNet50V2 from scratch on the larger resolution dataset. As a reminder, the larger resolution datasets differ in terms of their augmentation transforms. def get_training_model(num_classes=5): inputs = layers.Input((None, None, 3)) resnet_base = keras.applications.ResNet50V2( include_top=False, weights=None, pooling=\"avg\" ) resnet_base.trainable = True x = layers.Rescaling(scale=1.0 / 127.5, offset=-1)(inputs) x = resnet_base(x) outputs = layers.Dense(num_classes, activation=\"softmax\")(x) return keras.Model(inputs, outputs) def train_and_evaluate( model, train_ds, val_ds, epochs, learning_rate=1e-3, use_early_stopping=False ): optimizer = keras.optimizers.Adam(learning_rate=learning_rate) model.compile( optimizer=optimizer, loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"], ) if use_early_stopping: es_callback = keras.callbacks.EarlyStopping(patience=5) callbacks = [es_callback] else: callbacks = None model.fit( train_ds, validation_data=val_ds, epochs=epochs, callbacks=callbacks, ) _, accuracy = model.evaluate(val_ds) print(f\"Top-1 accuracy on the validation set: {accuracy*100:.2f}%.\") return model Experiment 1: Train on 128x128 and then fine-tune on 224x224 epochs = 30 smaller_res_model = get_training_model() smaller_res_model = train_and_evaluate( smaller_res_model, initial_train_dataset, initial_val_dataset, epochs ) Epoch 1/30 26/26 [==============================] - 14s 226ms/step - loss: 1.6476 - accuracy: 0.4345 - val_loss: 9.8213 - val_accuracy: 0.2044 Epoch 2/30 26/26 [==============================] - 3s 123ms/step - loss: 1.1561 - accuracy: 0.5495 - val_loss: 6.5521 - val_accuracy: 0.2071 Epoch 3/30 26/26 [==============================] - 3s 123ms/step - loss: 1.0989 - accuracy: 0.5722 - val_loss: 2.6216 - val_accuracy: 0.1935 Epoch 4/30 26/26 [==============================] - 3s 122ms/step - loss: 1.0373 - accuracy: 0.5895 - val_loss: 1.9918 - val_accuracy: 0.2125 Epoch 5/30 26/26 [==============================] - 3s 122ms/step - loss: 0.9960 - accuracy: 0.6119 - val_loss: 2.8505 - val_accuracy: 0.2262 Epoch 6/30 26/26 [==============================] - 3s 122ms/step - loss: 0.9458 - accuracy: 0.6331 - val_loss: 1.8974 - val_accuracy: 0.2834 Epoch 7/30 26/26 [==============================] - 3s 122ms/step - loss: 0.8949 - accuracy: 0.6606 - val_loss: 2.1164 - val_accuracy: 0.2834 Epoch 8/30 26/26 [==============================] - 3s 122ms/step - loss: 0.8581 - accuracy: 0.6709 - val_loss: 1.8858 - val_accuracy: 0.3815 Epoch 9/30 26/26 [==============================] - 3s 123ms/step - loss: 0.8436 - accuracy: 0.6776 - val_loss: 1.5671 - val_accuracy: 0.4687 Epoch 10/30 26/26 [==============================] - 3s 123ms/step - loss: 0.8632 - accuracy: 0.6685 - val_loss: 1.5005 - val_accuracy: 0.5504 Epoch 11/30 26/26 [==============================] - 3s 123ms/step - loss: 0.8316 - accuracy: 0.6918 - val_loss: 1.1421 - val_accuracy: 0.6594 Epoch 12/30 26/26 [==============================] - 3s 123ms/step - loss: 0.7981 - accuracy: 0.6951 - val_loss: 1.2036 - val_accuracy: 0.6403 Epoch 13/30 26/26 
[==============================] - 3s 122ms/step - loss: 0.8275 - accuracy: 0.6806 - val_loss: 2.2632 - val_accuracy: 0.5177 Epoch 14/30 26/26 [==============================] - 3s 122ms/step - loss: 0.8156 - accuracy: 0.6994 - val_loss: 1.1023 - val_accuracy: 0.6649 Epoch 15/30 26/26 [==============================] - 3s 122ms/step - loss: 0.7572 - accuracy: 0.7091 - val_loss: 1.6248 - val_accuracy: 0.6049 Epoch 16/30 26/26 [==============================] - 3s 123ms/step - loss: 0.7757 - accuracy: 0.7024 - val_loss: 2.0600 - val_accuracy: 0.6294 Epoch 17/30 26/26 [==============================] - 3s 122ms/step - loss: 0.7600 - accuracy: 0.7087 - val_loss: 1.5731 - val_accuracy: 0.6131 Epoch 18/30 26/26 [==============================] - 3s 122ms/step - loss: 0.7385 - accuracy: 0.7215 - val_loss: 1.8312 - val_accuracy: 0.5749 Epoch 19/30 26/26 [==============================] - 3s 122ms/step - loss: 0.7493 - accuracy: 0.7224 - val_loss: 3.0382 - val_accuracy: 0.4986 Epoch 20/30 26/26 [==============================] - 3s 122ms/step - loss: 0.7746 - accuracy: 0.7048 - val_loss: 7.8191 - val_accuracy: 0.5123 Epoch 21/30 26/26 [==============================] - 3s 123ms/step - loss: 0.7367 - accuracy: 0.7405 - val_loss: 1.9607 - val_accuracy: 0.6676 Epoch 22/30 26/26 [==============================] - 3s 122ms/step - loss: 0.6970 - accuracy: 0.7357 - val_loss: 3.1944 - val_accuracy: 0.4496 Epoch 23/30 26/26 [==============================] - 3s 122ms/step - loss: 0.7299 - accuracy: 0.7212 - val_loss: 1.4012 - val_accuracy: 0.6567 Epoch 24/30 26/26 [==============================] - 3s 122ms/step - loss: 0.6965 - accuracy: 0.7315 - val_loss: 1.9781 - val_accuracy: 0.6403 Epoch 25/30 26/26 [==============================] - 3s 124ms/step - loss: 0.6811 - accuracy: 0.7408 - val_loss: 0.9287 - val_accuracy: 0.6839 Epoch 26/30 26/26 [==============================] - 3s 123ms/step - loss: 0.6732 - accuracy: 0.7487 - val_loss: 2.9406 - val_accuracy: 0.5504 Epoch 27/30 26/26 [==============================] - 3s 122ms/step - loss: 0.6571 - accuracy: 0.7560 - val_loss: 1.6268 - val_accuracy: 0.5804 Epoch 28/30 26/26 [==============================] - 3s 122ms/step - loss: 0.6662 - accuracy: 0.7548 - val_loss: 0.9067 - val_accuracy: 0.7357 Epoch 29/30 26/26 [==============================] - 3s 122ms/step - loss: 0.6443 - accuracy: 0.7520 - val_loss: 0.7760 - val_accuracy: 0.7520 Epoch 30/30 26/26 [==============================] - 3s 122ms/step - loss: 0.6617 - accuracy: 0.7539 - val_loss: 0.6026 - val_accuracy: 0.7766 3/3 [==============================] - 0s 37ms/step - loss: 0.6026 - accuracy: 0.7766 Top-1 accuracy on the validation set: 77.66%. Freeze all the layers except for the final Batch Normalization layer For fine-tuning, we train only two layers: The final Batch Normalization (Ioffe et al.) layer. The classification layer. We are unfreezing the final Batch Normalization layer to compensate for the change in activation statistics before the global average pooling layer. As shown in the paper, unfreezing the final Batch Normalization layer is enough. For a comprehensive guide on fine-tuning models in Keras, refer to this tutorial. for layer in smaller_res_model.layers[2].layers: layer.trainable = False smaller_res_model.layers[2].get_layer(\"post_bn\").trainable = True epochs = 10 # Use a lower learning rate during fine-tuning. 
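# Optional sanity check (not part of the original example): confirm that only the
# final BatchNormalization layer of the ResNet backbone is still trainable.
print([layer.name for layer in smaller_res_model.layers[2].layers if layer.trainable])
# This should print a list containing just "post_bn"; the classifier head remains
# trainable as well, but it lives outside `layers[2]`.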
bigger_res_model = train_and_evaluate( smaller_res_model, finetune_train_dataset, finetune_val_dataset, epochs, learning_rate=1e-4, ) Epoch 1/10 26/26 [==============================] - 9s 201ms/step - loss: 0.7912 - accuracy: 0.7856 - val_loss: 0.6808 - val_accuracy: 0.7575 Epoch 2/10 26/26 [==============================] - 3s 115ms/step - loss: 0.7732 - accuracy: 0.7938 - val_loss: 0.7028 - val_accuracy: 0.7684 Epoch 3/10 26/26 [==============================] - 3s 115ms/step - loss: 0.7658 - accuracy: 0.7923 - val_loss: 0.7136 - val_accuracy: 0.7629 Epoch 4/10 26/26 [==============================] - 3s 115ms/step - loss: 0.7536 - accuracy: 0.7872 - val_loss: 0.7161 - val_accuracy: 0.7684 Epoch 5/10 26/26 [==============================] - 3s 115ms/step - loss: 0.7346 - accuracy: 0.7947 - val_loss: 0.7154 - val_accuracy: 0.7711 Epoch 6/10 26/26 [==============================] - 3s 115ms/step - loss: 0.7183 - accuracy: 0.7990 - val_loss: 0.7139 - val_accuracy: 0.7684 Epoch 7/10 26/26 [==============================] - 3s 116ms/step - loss: 0.7059 - accuracy: 0.7962 - val_loss: 0.7071 - val_accuracy: 0.7738 Epoch 8/10 26/26 [==============================] - 3s 115ms/step - loss: 0.6959 - accuracy: 0.7923 - val_loss: 0.7002 - val_accuracy: 0.7738 Epoch 9/10 26/26 [==============================] - 3s 116ms/step - loss: 0.6871 - accuracy: 0.8011 - val_loss: 0.6967 - val_accuracy: 0.7711 Epoch 10/10 26/26 [==============================] - 3s 116ms/step - loss: 0.6761 - accuracy: 0.8044 - val_loss: 0.6887 - val_accuracy: 0.7738 3/3 [==============================] - 0s 95ms/step - loss: 0.6887 - accuracy: 0.7738 Top-1 accuracy on the validation set: 77.38%. Experiment 2: Train a model on 224x224 resolution from scratch Now, we train another model from scratch on the larger resolution dataset. Recall that the augmentation transforms used in this dataset are different from before. 
epochs = 30 vanilla_bigger_res_model = get_training_model() vanilla_bigger_res_model = train_and_evaluate( vanilla_bigger_res_model, vanilla_train_dataset, vanilla_val_dataset, epochs ) Epoch 1/30 26/26 [==============================] - 15s 389ms/step - loss: 1.5339 - accuracy: 0.4569 - val_loss: 177.5233 - val_accuracy: 0.1907 Epoch 2/30 26/26 [==============================] - 8s 314ms/step - loss: 1.1472 - accuracy: 0.5483 - val_loss: 17.5804 - val_accuracy: 0.1907 Epoch 3/30 26/26 [==============================] - 8s 315ms/step - loss: 1.0708 - accuracy: 0.5792 - val_loss: 2.2719 - val_accuracy: 0.2480 Epoch 4/30 26/26 [==============================] - 8s 315ms/step - loss: 1.0225 - accuracy: 0.6170 - val_loss: 2.1274 - val_accuracy: 0.2398 Epoch 5/30 26/26 [==============================] - 8s 316ms/step - loss: 1.0001 - accuracy: 0.6206 - val_loss: 2.0375 - val_accuracy: 0.2834 Epoch 6/30 26/26 [==============================] - 8s 315ms/step - loss: 0.9602 - accuracy: 0.6355 - val_loss: 1.4412 - val_accuracy: 0.3978 Epoch 7/30 26/26 [==============================] - 8s 316ms/step - loss: 0.9418 - accuracy: 0.6461 - val_loss: 1.5257 - val_accuracy: 0.4305 Epoch 8/30 26/26 [==============================] - 8s 316ms/step - loss: 0.8911 - accuracy: 0.6649 - val_loss: 1.1530 - val_accuracy: 0.5858 Epoch 9/30 26/26 [==============================] - 8s 316ms/step - loss: 0.8834 - accuracy: 0.6694 - val_loss: 1.2026 - val_accuracy: 0.5531 Epoch 10/30 26/26 [==============================] - 8s 316ms/step - loss: 0.8752 - accuracy: 0.6724 - val_loss: 1.4917 - val_accuracy: 0.5695 Epoch 11/30 26/26 [==============================] - 8s 316ms/step - loss: 0.8690 - accuracy: 0.6594 - val_loss: 1.4115 - val_accuracy: 0.6022 Epoch 12/30 26/26 [==============================] - 8s 314ms/step - loss: 0.8586 - accuracy: 0.6761 - val_loss: 1.0692 - val_accuracy: 0.6349 Epoch 13/30 26/26 [==============================] - 8s 315ms/step - loss: 0.8120 - accuracy: 0.6894 - val_loss: 1.5233 - val_accuracy: 0.6567 Epoch 14/30 26/26 [==============================] - 8s 316ms/step - loss: 0.8275 - accuracy: 0.6857 - val_loss: 1.9079 - val_accuracy: 0.5804 Epoch 15/30 26/26 [==============================] - 8s 316ms/step - loss: 0.7624 - accuracy: 0.7127 - val_loss: 0.9543 - val_accuracy: 0.6540 Epoch 16/30 26/26 [==============================] - 8s 315ms/step - loss: 0.7595 - accuracy: 0.7266 - val_loss: 4.5757 - val_accuracy: 0.4877 Epoch 17/30 26/26 [==============================] - 8s 315ms/step - loss: 0.7577 - accuracy: 0.7154 - val_loss: 1.8411 - val_accuracy: 0.5749 Epoch 18/30 26/26 [==============================] - 8s 316ms/step - loss: 0.7596 - accuracy: 0.7163 - val_loss: 1.0660 - val_accuracy: 0.6703 Epoch 19/30 26/26 [==============================] - 8s 315ms/step - loss: 0.7492 - accuracy: 0.7160 - val_loss: 1.2462 - val_accuracy: 0.6485 Epoch 20/30 26/26 [==============================] - 8s 315ms/step - loss: 0.7269 - accuracy: 0.7330 - val_loss: 5.8287 - val_accuracy: 0.3379 Epoch 21/30 26/26 [==============================] - 8s 315ms/step - loss: 0.7193 - accuracy: 0.7275 - val_loss: 4.7058 - val_accuracy: 0.6049 Epoch 22/30 26/26 [==============================] - 8s 316ms/step - loss: 0.7251 - accuracy: 0.7318 - val_loss: 1.5608 - val_accuracy: 0.6485 Epoch 23/30 26/26 [==============================] - 8s 314ms/step - loss: 0.6888 - accuracy: 0.7466 - val_loss: 1.7914 - val_accuracy: 0.6240 Epoch 24/30 26/26 [==============================] - 8s 314ms/step - loss: 0.7051 - 
accuracy: 0.7339 - val_loss: 2.0918 - val_accuracy: 0.6158 Epoch 25/30 26/26 [==============================] - 8s 315ms/step - loss: 0.6920 - accuracy: 0.7454 - val_loss: 0.7284 - val_accuracy: 0.7575 Epoch 26/30 26/26 [==============================] - 8s 316ms/step - loss: 0.6502 - accuracy: 0.7523 - val_loss: 2.5474 - val_accuracy: 0.5313 Epoch 27/30 26/26 [==============================] - 8s 315ms/step - loss: 0.7101 - accuracy: 0.7330 - val_loss: 26.8117 - val_accuracy: 0.3297 Epoch 28/30 26/26 [==============================] - 8s 315ms/step - loss: 0.6632 - accuracy: 0.7548 - val_loss: 20.1011 - val_accuracy: 0.3243 Epoch 29/30 26/26 [==============================] - 8s 315ms/step - loss: 0.6682 - accuracy: 0.7505 - val_loss: 11.5872 - val_accuracy: 0.3297 Epoch 30/30 26/26 [==============================] - 8s 315ms/step - loss: 0.6758 - accuracy: 0.7514 - val_loss: 5.7229 - val_accuracy: 0.4305 3/3 [==============================] - 0s 95ms/step - loss: 5.7229 - accuracy: 0.4305 Top-1 accuracy on the validation set: 43.05%. As we can notice from the above cells, FixRes leads to a better performance. Another advantage of FixRes is the improved total training time and reduction in GPU memory usage. FixRes is model-agnostic, you can use it on any image classification model to potentially boost performance. You can find more results here that were gathered by running the same code with different random seeds. How to obtain a class activation heatmap for an image classification model. Adapted from Deep Learning with Python (2017). Setup import numpy as np import tensorflow as tf from tensorflow import keras # Display from IPython.display import Image, display import matplotlib.pyplot as plt import matplotlib.cm as cm Configurable parameters You can change these to another model. To get the values for last_conv_layer_name use model.summary() to see the names of all layers in the model. 
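If you prefer to locate this layer programmatically, here is a small illustrative helper (not part of the original example; the function name is ours) that walks the model's layers in reverse and returns the first one whose output is still a 4D feature map:

def find_last_conv_layer_name(model):
    # Walk the layers from the top of the network down and return the first
    # layer that outputs a 4D feature map (batch, height, width, channels).
    for layer in reversed(model.layers):
        if len(layer.output.shape) == 4:
            return layer.name
    raise ValueError("No 4D feature map layer found.")

# Building with `weights=None` is enough to inspect layer names. For Xception
# this should return "block14_sepconv2_act", matching the value used below.
print(find_last_conv_layer_name(keras.applications.xception.Xception(weights=None)))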
model_builder = keras.applications.xception.Xception img_size = (299, 299) preprocess_input = keras.applications.xception.preprocess_input decode_predictions = keras.applications.xception.decode_predictions last_conv_layer_name = \"block14_sepconv2_act\" # The local path to our target image img_path = keras.utils.get_file( \"african_elephant.jpg\", \"https://i.imgur.com/Bvro0YD.png\" ) display(Image(img_path)) jpeg The Grad-CAM algorithm def get_img_array(img_path, size): # `img` is a PIL image of size 299x299 img = keras.preprocessing.image.load_img(img_path, target_size=size) # `array` is a float32 Numpy array of shape (299, 299, 3) array = keras.preprocessing.image.img_to_array(img) # We add a dimension to transform our array into a \"batch\" # of size (1, 299, 299, 3) array = np.expand_dims(array, axis=0) return array def make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=None): # First, we create a model that maps the input image to the activations # of the last conv layer as well as the output predictions grad_model = tf.keras.models.Model( [model.inputs], [model.get_layer(last_conv_layer_name).output, model.output] ) # Then, we compute the gradient of the top predicted class for our input image # with respect to the activations of the last conv layer with tf.GradientTape() as tape: last_conv_layer_output, preds = grad_model(img_array) if pred_index is None: pred_index = tf.argmax(preds[0]) class_channel = preds[:, pred_index] # This is the gradient of the output neuron (top predicted or chosen) # with regard to the output feature map of the last conv layer grads = tape.gradient(class_channel, last_conv_layer_output) # This is a vector where each entry is the mean intensity of the gradient # over a specific feature map channel pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2)) # We multiply each channel in the feature map array # by \"how important this channel is\" with regard to the top predicted class # then sum all the channels to obtain the heatmap class activation last_conv_layer_output = last_conv_layer_output[0] heatmap = last_conv_layer_output @ pooled_grads[..., tf.newaxis] heatmap = tf.squeeze(heatmap) # For visualization purpose, we will also normalize the heatmap between 0 & 1 heatmap = tf.maximum(heatmap, 0) / tf.math.reduce_max(heatmap) return heatmap.numpy() Let's test-drive it # Prepare image img_array = preprocess_input(get_img_array(img_path, size=img_size)) # Make model model = model_builder(weights=\"imagenet\") # Remove last layer's softmax model.layers[-1].activation = None # Print what the top predicted class is preds = model.predict(img_array) print(\"Predicted:\", decode_predictions(preds, top=1)[0]) # Generate class activation heatmap heatmap = make_gradcam_heatmap(img_array, model, last_conv_layer_name) # Display heatmap plt.matshow(heatmap) plt.show() Predicted: [('n02504458', 'African_elephant', 9.862388)] png Create a superimposed visualization def save_and_display_gradcam(img_path, heatmap, cam_path=\"cam.jpg\", alpha=0.4): # Load the original image img = keras.preprocessing.image.load_img(img_path) img = keras.preprocessing.image.img_to_array(img) # Rescale heatmap to a range 0-255 heatmap = np.uint8(255 * heatmap) # Use jet colormap to colorize heatmap jet = cm.get_cmap(\"jet\") # Use RGB values of the colormap jet_colors = jet(np.arange(256))[:, :3] jet_heatmap = jet_colors[heatmap] # Create an image with RGB colorized heatmap jet_heatmap = keras.preprocessing.image.array_to_img(jet_heatmap) jet_heatmap = 
jet_heatmap.resize((img.shape[1], img.shape[0])) jet_heatmap = keras.preprocessing.image.img_to_array(jet_heatmap) # Superimpose the heatmap on original image superimposed_img = jet_heatmap * alpha + img superimposed_img = keras.preprocessing.image.array_to_img(superimposed_img) # Save the superimposed image superimposed_img.save(cam_path) # Display Grad CAM display(Image(cam_path)) save_and_display_gradcam(img_path, heatmap) jpeg Let's try another image We will see how the grad cam explains the model's outputs for a multi-label image. Let's try an image with a cat and a dog together, and see how the grad cam behaves. img_path = keras.utils.get_file( \"cat_and_dog.jpg\", \"https://storage.googleapis.com/petbacker/images/blog/2017/dog-and-cat-cover.jpg\", ) display(Image(img_path)) # Prepare image img_array = preprocess_input(get_img_array(img_path, size=img_size)) # Print what the two top predicted classes are preds = model.predict(img_array) print(\"Predicted:\", decode_predictions(preds, top=2)[0]) jpeg Predicted: [('n02112137', 'chow', 4.611241), ('n02124075', 'Egyptian_cat', 4.3817368)] We generate class activation heatmap for \"chow,\" the class index is 260 heatmap = make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=260) save_and_display_gradcam(img_path, heatmap) jpeg We generate class activation heatmap for \"egyptian cat,\" the class index is 285 heatmap = make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=285) save_and_display_gradcam(img_path, heatmap) jpeg Implement Gradient Centralization to improve training performance of DNNs. Introduction This example implements Gradient Centralization, a new optimization technique for Deep Neural Networks by Yong et al., and demonstrates it on Laurence Moroney's Horses or Humans Dataset. Gradient Centralization can both speedup training process and improve the final generalization performance of DNNs. It operates directly on gradients by centralizing the gradient vectors to have zero mean. Gradient Centralization morever improves the Lipschitzness of the loss function and its gradient so that the training process becomes more efficient and stable. This example requires TensorFlow 2.2 or higher as well as tensorflow_datasets which can be installed with this command: pip install tensorflow-datasets We will be implementing Gradient Centralization in this example but you could also use this very easily with a package I built, gradient-centralization-tf. Setup from time import time import tensorflow as tf import tensorflow_datasets as tfds from tensorflow.keras import layers from tensorflow.keras.optimizers import RMSprop Prepare the data For this example, we will be using the Horses or Humans dataset. num_classes = 2 input_shape = (300, 300, 3) dataset_name = \"horses_or_humans\" batch_size = 128 AUTOTUNE = tf.data.AUTOTUNE (train_ds, test_ds), metadata = tfds.load( name=dataset_name, split=[tfds.Split.TRAIN, tfds.Split.TEST], with_info=True, as_supervised=True, ) print(f\"Image shape: {metadata.features['image'].shape}\") print(f\"Training images: {metadata.splits['train'].num_examples}\") print(f\"Test images: {metadata.splits['test'].num_examples}\") Image shape: (300, 300, 3) Training images: 1027 Test images: 256 Use Data Augmentation We will rescale the data to [0, 1] and perform simple augmentations to our data. 
rescale = layers.Rescaling(1.0 / 255) data_augmentation = tf.keras.Sequential( [ layers.RandomFlip(\"horizontal_and_vertical\"), layers.RandomRotation(0.3), layers.RandomZoom(0.2), ] ) def prepare(ds, shuffle=False, augment=False): # Rescale dataset ds = ds.map(lambda x, y: (rescale(x), y), num_parallel_calls=AUTOTUNE) if shuffle: ds = ds.shuffle(1024) # Batch dataset ds = ds.batch(batch_size) # Use data augmentation only on the training set if augment: ds = ds.map( lambda x, y: (data_augmentation(x, training=True), y), num_parallel_calls=AUTOTUNE, ) # Use buffered prefetching return ds.prefetch(buffer_size=AUTOTUNE) Rescale and augment the data train_ds = prepare(train_ds, shuffle=True, augment=True) test_ds = prepare(test_ds) Define a model In this section we will define a convolutional neural network. model = tf.keras.Sequential( [ layers.Conv2D(16, (3, 3), activation=\"relu\", input_shape=(300, 300, 3)), layers.MaxPooling2D(2, 2), layers.Conv2D(32, (3, 3), activation=\"relu\"), layers.Dropout(0.5), layers.MaxPooling2D(2, 2), layers.Conv2D(64, (3, 3), activation=\"relu\"), layers.Dropout(0.5), layers.MaxPooling2D(2, 2), layers.Conv2D(64, (3, 3), activation=\"relu\"), layers.MaxPooling2D(2, 2), layers.Conv2D(64, (3, 3), activation=\"relu\"), layers.MaxPooling2D(2, 2), layers.Flatten(), layers.Dropout(0.5), layers.Dense(512, activation=\"relu\"), layers.Dense(1, activation=\"sigmoid\"), ] ) Implement Gradient Centralization We will now subclass the RMSprop optimizer class, modifying the tf.keras.optimizers.Optimizer.get_gradients() method to implement Gradient Centralization. On a high level, the idea is that once we obtain the gradients for a Dense or Convolution layer through backpropagation, we compute the mean of each column vector of the gradient matrix and then remove that mean from each column vector. The experiments in this paper on various applications, including general image classification, fine-grained image classification, detection and segmentation and Person ReID demonstrate that GC can consistently improve the performance of DNN learning. Also, for simplicity, we are not implementing gradient clipping functionality at the moment; however, it is quite easy to add. At the moment we are just creating a subclass for the RMSprop optimizer, but you could easily reproduce this for any other optimizer or a custom optimizer in the same way. We will be using this class in the later section when we train a model with Gradient Centralization. class GCRMSprop(RMSprop): def get_gradients(self, loss, params): # We here just provide a modified get_gradients() function since we are # trying to just compute the centralized gradients. grads = [] gradients = super().get_gradients(loss, params) for grad in gradients: grad_len = len(grad.shape) if grad_len > 1: axis = list(range(grad_len - 1)) grad -= tf.reduce_mean(grad, axis=axis, keepdims=True) grads.append(grad) return grads optimizer = GCRMSprop(learning_rate=1e-4) Training utilities We will also create a callback which allows us to easily measure the total training time and the time taken for each epoch, since we are interested in comparing the effect of Gradient Centralization on the model we built above.
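Before defining that callback, here is a quick standalone sanity check of the centralization operation itself (this snippet is ours and is not part of the original example). It builds a toy gradient for a Dense kernel and verifies that, after centralization, every column of the gradient has zero mean.

import tensorflow as tf

# Toy \"gradient\" for a Dense kernel of shape (in_features=4, out_features=3).
grad = tf.constant(
    [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]
)

# Centralize: subtract the mean over every axis except the last one,
# exactly as done inside GCRMSprop above.
axis = list(range(len(grad.shape) - 1))
centralized = grad - tf.reduce_mean(grad, axis=axis, keepdims=True)

# Each column of the centralized gradient now averages to zero.
print(tf.reduce_mean(centralized, axis=0))  # -> [0. 0. 0.]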
class TimeHistory(tf.keras.callbacks.Callback): def on_train_begin(self, logs={}): self.times = [] def on_epoch_begin(self, batch, logs={}): self.epoch_time_start = time() def on_epoch_end(self, batch, logs={}): self.times.append(time() - self.epoch_time_start) Train the model without GC We now train the model we built earlier without Gradient Centralization which we can compare to the training performance of the model trained with Gradient Centralization. time_callback_no_gc = TimeHistory() model.compile( loss=\"binary_crossentropy\", optimizer=RMSprop(learning_rate=1e-4), metrics=[\"accuracy\"], ) model.summary() Model: \"sequential_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 298, 298, 16) 448 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 149, 149, 16) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 147, 147, 32) 4640 _________________________________________________________________ dropout (Dropout) (None, 147, 147, 32) 0 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 73, 73, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 71, 71, 64) 18496 _________________________________________________________________ dropout_1 (Dropout) (None, 71, 71, 64) 0 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 35, 35, 64) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 33, 33, 64) 36928 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 16, 16, 64) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 14, 14, 64) 36928 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 7, 7, 64) 0 _________________________________________________________________ flatten (Flatten) (None, 3136) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 3136) 0 _________________________________________________________________ dense (Dense) (None, 512) 1606144 _________________________________________________________________ dense_1 (Dense) (None, 1) 513 ================================================================= Total params: 1,704,097 Trainable params: 1,704,097 Non-trainable params: 0 _________________________________________________________________ We also save the history since we later want to compare our model trained with and not trained with Gradient Centralization history_no_gc = model.fit( train_ds, epochs=10, verbose=1, callbacks=[time_callback_no_gc] ) Epoch 1/10 9/9 [==============================] - 5s 571ms/step - loss: 0.7427 - accuracy: 0.5073 Epoch 2/10 9/9 [==============================] - 6s 667ms/step - loss: 0.6757 - accuracy: 0.5433 Epoch 3/10 9/9 [==============================] - 6s 660ms/step - loss: 0.6616 - accuracy: 0.6144 Epoch 4/10 9/9 [==============================] - 6s 642ms/step - loss: 0.6598 - accuracy: 0.6203 Epoch 5/10 9/9 [==============================] - 6s 666ms/step - loss: 0.6782 - accuracy: 0.6329 Epoch 6/10 9/9 [==============================] - 6s 655ms/step - loss: 0.6550 - accuracy: 0.6524 Epoch 7/10 9/9 [==============================] - 6s 
645ms/step - loss: 0.6157 - accuracy: 0.7186 Epoch 8/10 9/9 [==============================] - 6s 654ms/step - loss: 0.6095 - accuracy: 0.6913 Epoch 9/10 9/9 [==============================] - 6s 677ms/step - loss: 0.5880 - accuracy: 0.7147 Epoch 10/10 9/9 [==============================] - 6s 663ms/step - loss: 0.5814 - accuracy: 0.6933 Train the model with GC We will now train the same model, this time using Gradient Centralization, notice our optimizer is the one using Gradient Centralization this time. time_callback_gc = TimeHistory() model.compile(loss=\"binary_crossentropy\", optimizer=optimizer, metrics=[\"accuracy\"]) model.summary() history_gc = model.fit(train_ds, epochs=10, verbose=1, callbacks=[time_callback_gc]) Model: \"sequential_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 298, 298, 16) 448 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 149, 149, 16) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 147, 147, 32) 4640 _________________________________________________________________ dropout (Dropout) (None, 147, 147, 32) 0 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 73, 73, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 71, 71, 64) 18496 _________________________________________________________________ dropout_1 (Dropout) (None, 71, 71, 64) 0 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 35, 35, 64) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 33, 33, 64) 36928 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 16, 16, 64) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 14, 14, 64) 36928 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 7, 7, 64) 0 _________________________________________________________________ flatten (Flatten) (None, 3136) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 3136) 0 _________________________________________________________________ dense (Dense) (None, 512) 1606144 _________________________________________________________________ dense_1 (Dense) (None, 1) 513 ================================================================= Total params: 1,704,097 Trainable params: 1,704,097 Non-trainable params: 0 _________________________________________________________________ Epoch 1/10 9/9 [==============================] - 6s 673ms/step - loss: 0.6022 - accuracy: 0.7147 Epoch 2/10 9/9 [==============================] - 6s 662ms/step - loss: 0.5385 - accuracy: 0.7371 Epoch 3/10 9/9 [==============================] - 6s 673ms/step - loss: 0.4832 - accuracy: 0.7945 Epoch 4/10 9/9 [==============================] - 6s 645ms/step - loss: 0.4692 - accuracy: 0.7799 Epoch 5/10 9/9 [==============================] - 6s 720ms/step - loss: 0.4792 - accuracy: 0.7799 Epoch 6/10 9/9 [==============================] - 6s 658ms/step - loss: 0.4623 - accuracy: 0.7838 Epoch 7/10 9/9 [==============================] - 6s 651ms/step - loss: 0.4413 - accuracy: 0.8072 Epoch 8/10 9/9 [==============================] - 6s 682ms/step - 
loss: 0.4542 - accuracy: 0.8014 Epoch 9/10 9/9 [==============================] - 6s 649ms/step - loss: 0.4235 - accuracy: 0.8053 Epoch 10/10 9/9 [==============================] - 6s 686ms/step - loss: 0.4445 - accuracy: 0.7936 Comparing performance print(\"Not using Gradient Centralization\") print(f\"Loss: {history_no_gc.history['loss'][-1]}\") print(f\"Accuracy: {history_no_gc.history['accuracy'][-1]}\") print(f\"Training Time: {sum(time_callback_no_gc.times)}\") print(\"Using Gradient Centralization\") print(f\"Loss: {history_gc.history['loss'][-1]}\") print(f\"Accuracy: {history_gc.history['accuracy'][-1]}\") print(f\"Training Time: {sum(time_callback_gc.times)}\") Not using Gradient Centralization Loss: 0.5814347863197327 Accuracy: 0.6932814121246338 Training Time: 136.35903406143188 Using Gradient Centralization Loss: 0.4444807469844818 Accuracy: 0.7935734987258911 Training Time: 131.61780261993408 Readers are encouraged to try out Gradient Centralization on different datasets from different domains and experiment with it's effect. You are strongly advised to check out the original paper as well - the authors present several studies on Gradient Centralization showing how it can improve general performance, generalization, training time as well as more efficient. Many thanks to Ali Mustufa Shaikh for reviewing this implementation. Training a handwriting recognition model with variable-length sequences. Introduction This example shows how the Captcha OCR example can be extended to the IAM Dataset, which has variable length ground-truth targets. Each sample in the dataset is an image of some handwritten text, and its corresponding target is the string present in the image. The IAM Dataset is widely used across many OCR benchmarks, so we hope this example can serve as a good starting point for building OCR systems. Data collection !wget -q https://git.io/J0fjL -O IAM_Words.zip !unzip -qq IAM_Words.zip ! !mkdir data !mkdir data/words !tar -xf IAM_Words/words.tgz -C data/words !mv IAM_Words/words.txt data Preview how the dataset is organized. Lines prepended by \"#\" are just metadata information. !head -20 data/words.txt #--- words.txt ---------------------------------------------------------------# # # iam database word information # # format: a01-000u-00-00 ok 154 1 408 768 27 51 AT A # # a01-000u-00-00 -> word id for line 00 in form a01-000u # ok -> result of word segmentation # ok: word was correctly # er: segmentation of word can be bad # # 154 -> graylevel to binarize the line containing this word # 1 -> number of components for this word # 408 768 27 51 -> bounding box around this word in x,y,w,h format # AT -> the grammatical tag for this word, see the # file tagset.txt for an explanation # A -> the transcription for this word # a01-000u-00-00 ok 154 408 768 27 51 AT A a01-000u-00-01 ok 154 507 766 213 48 NN MOVE Imports from tensorflow.keras.layers.experimental.preprocessing import StringLookup from tensorflow import keras import matplotlib.pyplot as plt import tensorflow as tf import numpy as np import os np.random.seed(42) tf.random.set_seed(42) Dataset splitting base_path = \"data\" words_list = [] words = open(f\"{base_path}/words.txt\", \"r\").readlines() for line in words: if line[0] == \"#\": continue if line.split(\" \")[1] != \"err\": # We don't need to deal with errored entries. words_list.append(line) len(words_list) np.random.shuffle(words_list) We will split the dataset into three subsets with a 90:5:5 ratio (train:validation:test). 
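Before performing the split below, it is worth noting which fields of a words.txt entry the pipeline actually relies on: only the segmentation flag and the transcription at the end of the line. The following snippet is ours (not part of the original example) and simply parses the sample line shown in the preview above.

# Illustrative only: parse one retained words.txt entry.
sample = \"a01-000u-00-01 ok 154 507 766 213 48 NN MOVE\"
parts = sample.strip().split(\" \")

word_id = parts[0]  # form a01-000u, line 00, word 01
keep = parts[1] != \"err\"  # entries whose segmentation is marked \"err\" are dropped
transcription = parts[-1]  # the training label, here \"MOVE\"
print(word_id, keep, transcription)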
split_idx = int(0.9 * len(words_list)) train_samples = words_list[:split_idx] test_samples = words_list[split_idx:] val_split_idx = int(0.5 * len(test_samples)) validation_samples = test_samples[:val_split_idx] test_samples = test_samples[val_split_idx:] assert len(words_list) == len(train_samples) + len(validation_samples) + len( test_samples ) print(f\"Total training samples: {len(train_samples)}\") print(f\"Total validation samples: {len(validation_samples)}\") print(f\"Total test samples: {len(test_samples)}\") Total training samples: 86810 Total validation samples: 4823 Total test samples: 4823 Data input pipeline We start building our data input pipeline by first preparing the image paths. base_image_path = os.path.join(base_path, \"words\") def get_image_paths_and_labels(samples): paths = [] corrected_samples = [] for (i, file_line) in enumerate(samples): line_split = file_line.strip() line_split = line_split.split(\" \") # Each line split will have this format for the corresponding image: # part1/part1-part2/part1-part2-part3.png image_name = line_split[0] partI = image_name.split(\"-\")[0] partII = image_name.split(\"-\")[1] img_path = os.path.join( base_image_path, partI, partI + \"-\" + partII, image_name + \".png\" ) if os.path.getsize(img_path): paths.append(img_path) corrected_samples.append(file_line.split(\"\n\")[0]) return paths, corrected_samples train_img_paths, train_labels = get_image_paths_and_labels(train_samples) validation_img_paths, validation_labels = get_image_paths_and_labels(validation_samples) test_img_paths, test_labels = get_image_paths_and_labels(test_samples) Then we prepare the ground-truth labels. # Find maximum length and the size of the vocabulary in the training data. train_labels_cleaned = [] characters = set() max_len = 0 for label in train_labels: label = label.split(\" \")[-1].strip() for char in label: characters.add(char) max_len = max(max_len, len(label)) train_labels_cleaned.append(label) print(\"Maximum length: \", max_len) print(\"Vocab size: \", len(characters)) # Check some label samples. train_labels_cleaned[:10] Maximum length: 21 Vocab size: 78 ['sure', 'he', 'during', 'of', 'booty', 'gastronomy', 'boy', 'The', 'and', 'in'] Now we clean the validation and the test labels as well. def clean_labels(labels): cleaned_labels = [] for label in labels: label = label.split(\" \")[-1].strip() cleaned_labels.append(label) return cleaned_labels validation_labels_cleaned = clean_labels(validation_labels) test_labels_cleaned = clean_labels(test_labels) Building the character vocabulary Keras provides different preprocessing layers to deal with different modalities of data. This guide provids a comprehensive introduction. Our example involves preprocessing labels at the character level. This means that if there are two labels, e.g. \"cat\" and \"dog\", then our character vocabulary should be {a, c, d, g, o, t} (without any special tokens). We use the StringLookup layer for this purpose. AUTOTUNE = tf.data.AUTOTUNE # Mapping characters to integers. char_to_num = StringLookup(vocabulary=list(characters), mask_token=None) # Mapping integers back to original characters. num_to_char = StringLookup( vocabulary=char_to_num.get_vocabulary(), mask_token=None, invert=True ) Resizing images without distortion Instead of square images, many OCR models work with rectangular images. This will become clearer in a moment when we will visualize a few samples from the dataset. 
While aspect-unaware resizing square images does not introduce a significant amount of distortion this is not the case for rectangular images. But resizing images to a uniform size is a requirement for mini-batching. So we need to perform our resizing such that the following criteria are met: Aspect ratio is preserved. Content of the images is not affected. def distortion_free_resize(image, img_size): w, h = img_size image = tf.image.resize(image, size=(h, w), preserve_aspect_ratio=True) # Check tha amount of padding needed to be done. pad_height = h - tf.shape(image)[0] pad_width = w - tf.shape(image)[1] # Only necessary if you want to do same amount of padding on both sides. if pad_height % 2 != 0: height = pad_height // 2 pad_height_top = height + 1 pad_height_bottom = height else: pad_height_top = pad_height_bottom = pad_height // 2 if pad_width % 2 != 0: width = pad_width // 2 pad_width_left = width + 1 pad_width_right = width else: pad_width_left = pad_width_right = pad_width // 2 image = tf.pad( image, paddings=[ [pad_height_top, pad_height_bottom], [pad_width_left, pad_width_right], [0, 0], ], ) image = tf.transpose(image, perm=[1, 0, 2]) image = tf.image.flip_left_right(image) return image If we just go with the plain resizing then the images would look like so: Notice how this resizing would have introduced unnecessary stretching. Putting the utilities together batch_size = 64 padding_token = 99 image_width = 128 image_height = 32 def preprocess_image(image_path, img_size=(image_width, image_height)): image = tf.io.read_file(image_path) image = tf.image.decode_png(image, 1) image = distortion_free_resize(image, img_size) image = tf.cast(image, tf.float32) / 255.0 return image def vectorize_label(label): label = char_to_num(tf.strings.unicode_split(label, input_encoding=\"UTF-8\")) length = tf.shape(label)[0] pad_amount = max_len - length label = tf.pad(label, paddings=[[0, pad_amount]], constant_values=padding_token) return label def process_images_labels(image_path, label): image = preprocess_image(image_path) label = vectorize_label(label) return {\"image\": image, \"label\": label} def prepare_dataset(image_paths, labels): dataset = tf.data.Dataset.from_tensor_slices((image_paths, labels)).map( process_images_labels, num_parallel_calls=AUTOTUNE ) return dataset.batch(batch_size).cache().prefetch(AUTOTUNE) Prepare tf.data.Dataset objects train_ds = prepare_dataset(train_img_paths, train_labels_cleaned) validation_ds = prepare_dataset(validation_img_paths, validation_labels_cleaned) test_ds = prepare_dataset(test_img_paths, test_labels_cleaned) Visualize a few samples for data in train_ds.take(1): images, labels = data[\"image\"], data[\"label\"] _, ax = plt.subplots(4, 4, figsize=(15, 8)) for i in range(16): img = images[i] img = tf.image.flip_left_right(img) img = tf.transpose(img, perm=[1, 0, 2]) img = (img * 255.0).numpy().clip(0, 255).astype(np.uint8) img = img[:, :, 0] # Gather indices where label!= padding_token. label = labels[i] indices = tf.gather(label, tf.where(tf.math.not_equal(label, padding_token))) # Convert to string. label = tf.strings.reduce_join(num_to_char(indices)) label = label.numpy().decode(\"utf-8\") ax[i // 4, i % 4].imshow(img, cmap=\"gray\") ax[i // 4, i % 4].set_title(label) ax[i // 4, i % 4].axis(\"off\") plt.show() png You will notice that the content of original image is kept as faithful as possible and has been padded accordingly. Model Our model will use the CTC loss as an endpoint layer. 
For a detailed understanding of the CTC loss, refer to this post. class CTCLayer(keras.layers.Layer): def __init__(self, name=None): super().__init__(name=name) self.loss_fn = keras.backend.ctc_batch_cost def call(self, y_true, y_pred): batch_len = tf.cast(tf.shape(y_true)[0], dtype=\"int64\") input_length = tf.cast(tf.shape(y_pred)[1], dtype=\"int64\") label_length = tf.cast(tf.shape(y_true)[1], dtype=\"int64\") input_length = input_length * tf.ones(shape=(batch_len, 1), dtype=\"int64\") label_length = label_length * tf.ones(shape=(batch_len, 1), dtype=\"int64\") loss = self.loss_fn(y_true, y_pred, input_length, label_length) self.add_loss(loss) # At test time, just return the computed predictions. return y_pred def build_model(): # Inputs to the model input_img = keras.Input(shape=(image_width, image_height, 1), name=\"image\") labels = keras.layers.Input(name=\"label\", shape=(None,)) # First conv block. x = keras.layers.Conv2D( 32, (3, 3), activation=\"relu\", kernel_initializer=\"he_normal\", padding=\"same\", name=\"Conv1\", )(input_img) x = keras.layers.MaxPooling2D((2, 2), name=\"pool1\")(x) # Second conv block. x = keras.layers.Conv2D( 64, (3, 3), activation=\"relu\", kernel_initializer=\"he_normal\", padding=\"same\", name=\"Conv2\", )(x) x = keras.layers.MaxPooling2D((2, 2), name=\"pool2\")(x) # We have used two max pool with pool size and strides 2. # Hence, downsampled feature maps are 4x smaller. The number of # filters in the last layer is 64. Reshape accordingly before # passing the output to the RNN part of the model. new_shape = ((image_width // 4), (image_height // 4) * 64) x = keras.layers.Reshape(target_shape=new_shape, name=\"reshape\")(x) x = keras.layers.Dense(64, activation=\"relu\", name=\"dense1\")(x) x = keras.layers.Dropout(0.2)(x) # RNNs. x = keras.layers.Bidirectional( keras.layers.LSTM(128, return_sequences=True, dropout=0.25) )(x) x = keras.layers.Bidirectional( keras.layers.LSTM(64, return_sequences=True, dropout=0.25) )(x) # +2 is to account for the two special tokens introduced by the CTC loss. # The recommendation comes here: https://git.io/J0eXP. x = keras.layers.Dense( len(char_to_num.get_vocabulary()) + 2, activation=\"softmax\", name=\"dense2\" )(x) # Add CTC layer for calculating CTC loss at each step. output = CTCLayer(name=\"ctc_loss\")(labels, x) # Define the model. model = keras.models.Model( inputs=[input_img, labels], outputs=output, name=\"handwriting_recognizer\" ) # Optimizer. opt = keras.optimizers.Adam() # Compile the model and return. model.compile(optimizer=opt) return model # Get the model. 
model = build_model() model.summary() Model: \"handwriting_recognizer\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== image (InputLayer) [(None, 128, 32, 1)] 0 __________________________________________________________________________________________________ Conv1 (Conv2D) (None, 128, 32, 32) 320 image[0][0] __________________________________________________________________________________________________ pool1 (MaxPooling2D) (None, 64, 16, 32) 0 Conv1[0][0] __________________________________________________________________________________________________ Conv2 (Conv2D) (None, 64, 16, 64) 18496 pool1[0][0] __________________________________________________________________________________________________ pool2 (MaxPooling2D) (None, 32, 8, 64) 0 Conv2[0][0] __________________________________________________________________________________________________ reshape (Reshape) (None, 32, 512) 0 pool2[0][0] __________________________________________________________________________________________________ dense1 (Dense) (None, 32, 64) 32832 reshape[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 32, 64) 0 dense1[0][0] __________________________________________________________________________________________________ bidirectional (Bidirectional) (None, 32, 256) 197632 dropout[0][0] __________________________________________________________________________________________________ bidirectional_1 (Bidirectional) (None, 32, 128) 164352 bidirectional[0][0] __________________________________________________________________________________________________ label (InputLayer) [(None, None)] 0 __________________________________________________________________________________________________ dense2 (Dense) (None, 32, 81) 10449 bidirectional_1[0][0] __________________________________________________________________________________________________ ctc_loss (CTCLayer) (None, 32, 81) 0 label[0][0] dense2[0][0] ================================================================================================== Total params: 424,081 Trainable params: 424,081 Non-trainable params: 0 __________________________________________________________________________________________________ Evaluation metric Edit Distance is the most widely used metric for evaluating OCR models. In this section, we will implement it and use it as a callback to monitor our model. We first segregate the validation images and their labels for convenience. validation_images = [] validation_labels = [] for batch in validation_ds: validation_images.append(batch[\"image\"]) validation_labels.append(batch[\"label\"]) Now, we create a callback to monitor the edit distances. def calculate_edit_distance(labels, predictions): # Get a single batch and convert its labels to sparse tensors. saprse_labels = tf.cast(tf.sparse.from_dense(labels), dtype=tf.int64) # Make predictions and convert them to sparse tensors. input_len = np.ones(predictions.shape[0]) * predictions.shape[1] predictions_decoded = keras.backend.ctc_decode( predictions, input_length=input_len, greedy=True )[0][0][:, :max_len] sparse_predictions = tf.cast( tf.sparse.from_dense(predictions_decoded), dtype=tf.int64 ) # Compute individual edit distances and average them out. 
edit_distances = tf.edit_distance( sparse_predictions, saprse_labels, normalize=False ) return tf.reduce_mean(edit_distances) class EditDistanceCallback(keras.callbacks.Callback): def __init__(self, pred_model): super().__init__() self.prediction_model = pred_model def on_epoch_end(self, epoch, logs=None): edit_distances = [] for i in range(len(validation_images)): labels = validation_labels[i] predictions = self.prediction_model.predict(validation_images[i]) edit_distances.append(calculate_edit_distance(labels, predictions).numpy()) print( f\"Mean edit distance for epoch {epoch + 1}: {np.mean(edit_distances):.4f}\" ) Training Now we are ready to kick off model training. epochs = 10 # To get good results this should be at least 50. model = build_model() prediction_model = keras.models.Model( model.get_layer(name=\"image\").input, model.get_layer(name=\"dense2\").output ) edit_distance_callback = EditDistanceCallback(prediction_model) # Train the model. history = model.fit( train_ds, validation_data=validation_ds, epochs=epochs, callbacks=[edit_distance_callback], ) Epoch 1/10 1357/1357 [==============================] - 89s 51ms/step - loss: 13.6670 - val_loss: 11.8041 Mean edit distance for epoch 1: 20.5117 Epoch 2/10 1357/1357 [==============================] - 48s 36ms/step - loss: 10.6864 - val_loss: 9.6994 Mean edit distance for epoch 2: 20.1167 Epoch 3/10 1357/1357 [==============================] - 48s 35ms/step - loss: 9.0437 - val_loss: 8.0355 Mean edit distance for epoch 3: 19.7270 Epoch 4/10 1357/1357 [==============================] - 48s 35ms/step - loss: 7.6098 - val_loss: 6.4239 Mean edit distance for epoch 4: 19.1106 Epoch 5/10 1357/1357 [==============================] - 48s 35ms/step - loss: 6.3194 - val_loss: 4.9814 Mean edit distance for epoch 5: 18.4894 Epoch 6/10 1357/1357 [==============================] - 48s 35ms/step - loss: 5.3417 - val_loss: 4.1307 Mean edit distance for epoch 6: 18.1909 Epoch 7/10 1357/1357 [==============================] - 48s 35ms/step - loss: 4.6396 - val_loss: 3.7706 Mean edit distance for epoch 7: 18.1224 Epoch 8/10 1357/1357 [==============================] - 48s 35ms/step - loss: 4.1926 - val_loss: 3.3682 Mean edit distance for epoch 8: 17.9387 Epoch 9/10 1357/1357 [==============================] - 48s 36ms/step - loss: 3.8532 - val_loss: 3.1829 Mean edit distance for epoch 9: 17.9074 Epoch 10/10 1357/1357 [==============================] - 49s 36ms/step - loss: 3.5769 - val_loss: 2.9221 Mean edit distance for epoch 10: 17.7960 Inference # A utility function to decode the output of the network. def decode_batch_predictions(pred): input_len = np.ones(pred.shape[0]) * pred.shape[1] # Use greedy search. For complex tasks, you can use beam search. results = keras.backend.ctc_decode(pred, input_length=input_len, greedy=True)[0][0][ :, :max_len ] # Iterate over the results and get back the text. output_text = [] for res in results: res = tf.gather(res, tf.where(tf.math.not_equal(res, -1))) res = tf.strings.reduce_join(num_to_char(res)).numpy().decode(\"utf-8\") output_text.append(res) return output_text # Let's check results on some test samples. 
for batch in test_ds.take(1): batch_images = batch[\"image\"] _, ax = plt.subplots(4, 4, figsize=(15, 8)) preds = prediction_model.predict(batch_images) pred_texts = decode_batch_predictions(preds) for i in range(16): img = batch_images[i] img = tf.image.flip_left_right(img) img = tf.transpose(img, perm=[1, 0, 2]) img = (img * 255.0).numpy().clip(0, 255).astype(np.uint8) img = img[:, :, 0] title = f\"Prediction: {pred_texts[i]}\" ax[i // 4, i % 4].imshow(img, cmap=\"gray\") ax[i // 4, i % 4].set_title(title) ax[i // 4, i % 4].axis(\"off\") plt.show() png To get better results the model should be trained for at least 50 epochs. Final remarks The prediction_model is fully compatible with TensorFlow Lite. If you are interested, you can use it inside a mobile application. You may find this notebook to be useful in this regard. Not all the training examples are perfectly aligned as observed in this example. This can hurt model performance for complex sequences. To this end, we can leverage Spatial Transformer Networks (Jaderberg et al.) that can help the model learn affine transformations that maximize its performance. Implement an image captioning model using a CNN and a Transformer. Setup import os import re import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.applications import efficientnet from tensorflow.keras.layers import TextVectorization seed = 111 np.random.seed(seed) tf.random.set_seed(seed) Download the dataset We will be using the Flickr8K dataset for this tutorial. This dataset comprises over 8,000 images, that are each paired with five different captions. !wget -q https://github.com/jbrownlee/Datasets/releases/download/Flickr8k/Flickr8k_Dataset.zip !wget -q https://github.com/jbrownlee/Datasets/releases/download/Flickr8k/Flickr8k_text.zip !unzip -qq Flickr8k_Dataset.zip !unzip -qq Flickr8k_text.zip !rm Flickr8k_Dataset.zip Flickr8k_text.zip # Path to the images IMAGES_PATH = \"Flicker8k_Dataset\" # Desired image dimensions IMAGE_SIZE = (299, 299) # Vocabulary size VOCAB_SIZE = 10000 # Fixed length allowed for any sequence SEQ_LENGTH = 25 # Dimension for the image embeddings and token embeddings EMBED_DIM = 512 # Per-layer units in the feed-forward network FF_DIM = 512 # Other training parameters BATCH_SIZE = 64 EPOCHS = 30 AUTOTUNE = tf.data.AUTOTUNE Preparing the dataset def load_captions_data(filename): \"\"\"Loads captions (text) data and maps them to corresponding images. Args: filename: Path to the text file containing caption data. Returns: caption_mapping: Dictionary mapping image names and the corresponding captions text_data: List containing all the available captions \"\"\" with open(filename) as caption_file: caption_data = caption_file.readlines() caption_mapping = {} text_data = [] images_to_skip = set() for line in caption_data: line = line.rstrip(\"\n\") # Image name and captions are separated using a tab img_name, caption = line.split(\"\t\") # Each image is repeated five times for the five different captions. 
# Each image name has a suffix `#(caption_number)` img_name = img_name.split(\"#\")[0] img_name = os.path.join(IMAGES_PATH, img_name.strip()) # We will remove captions that are either too short or too long tokens = caption.strip().split() if len(tokens) < 5 or len(tokens) > SEQ_LENGTH: images_to_skip.add(img_name) continue if img_name.endswith(\"jpg\") and img_name not in images_to_skip: # We will add a start and an end token to each caption caption = \"<start> \" + caption.strip() + \" <end>\" text_data.append(caption) if img_name in caption_mapping: caption_mapping[img_name].append(caption) else: caption_mapping[img_name] = [caption] for img_name in images_to_skip: if img_name in caption_mapping: del caption_mapping[img_name] return caption_mapping, text_data def train_val_split(caption_data, train_size=0.8, shuffle=True): \"\"\"Split the captioning dataset into train and validation sets. Args: caption_data (dict): Dictionary containing the mapped caption data train_size (float): Fraction of the full dataset to use as training data shuffle (bool): Whether to shuffle the dataset before splitting Returns: Training and validation datasets as two separate dicts \"\"\" # 1. Get the list of all image names all_images = list(caption_data.keys()) # 2. Shuffle if necessary if shuffle: np.random.shuffle(all_images) # 3. Split into training and validation sets train_size = int(len(caption_data) * train_size) training_data = { img_name: caption_data[img_name] for img_name in all_images[:train_size] } validation_data = { img_name: caption_data[img_name] for img_name in all_images[train_size:] } # 4. Return the splits return training_data, validation_data # Load the dataset captions_mapping, text_data = load_captions_data(\"Flickr8k.token.txt\") # Split the dataset into training and validation sets train_data, valid_data = train_val_split(captions_mapping) print(\"Number of training samples: \", len(train_data)) print(\"Number of validation samples: \", len(valid_data)) Number of training samples: 6114 Number of validation samples: 1529 Vectorizing the text data We'll use the TextVectorization layer to vectorize the text data, that is to say, to turn the original strings into integer sequences where each integer represents the index of a word in a vocabulary. We will use a custom string standardization scheme (strip punctuation characters except < and >) and the default splitting scheme (split on whitespace).
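To make the effect of this standardization concrete, here is a small self-contained demo (ours, not part of the original code). It assumes the stripped character set is Python's string.punctuation with the angle brackets removed, so that the <start> and <end> tokens survive standardization.

import re
import string

import tensorflow as tf

# Assumed punctuation set: everything in string.punctuation except \"<\" and \">\".
demo_strip_chars = string.punctuation.replace(\"<\", \"\").replace(\">\", \"\")

def demo_standardization(s):
    s = tf.strings.lower(s)
    return tf.strings.regex_replace(s, \"[%s]\" % re.escape(demo_strip_chars), \"\")

print(demo_standardization(tf.constant(\"<start> A dog, running fast! <end>\")))
# -> tf.Tensor(b'<start> a dog running fast <end>', shape=(), dtype=string)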
def custom_standardization(input_string): lowercase = tf.strings.lower(input_string) return tf.strings.regex_replace(lowercase, \"[%s]\" % re.escape(strip_chars), \"\") # [KERASBERT PROCESSING] removed definition of special chars for import strip_chars = strip_chars.replace(\"<\", \"\") strip_chars = strip_chars.replace(\">\", \"\") vectorization = TextVectorization( max_tokens=VOCAB_SIZE, output_mode=\"int\", output_sequence_length=SEQ_LENGTH, standardize=custom_standardization, ) vectorization.adapt(text_data) # Data augmentation for image data image_augmentation = keras.Sequential( [ layers.RandomFlip(\"horizontal\"), layers.RandomRotation(0.2), layers.RandomContrast(0.3), ] ) 2021-09-17 05:17:57.047819: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.058177: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.106007: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.107650: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-09-17 05:17:57.134387: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.135154: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.135806: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.680010: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.680785: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.681439: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-17 05:17:57.682067: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14684 MB memory: -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:04.0, compute capability: 7.0 2021-09-17 05:17:58.229404: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2) Building a tf.data.Dataset pipeline for training We will generate pairs of images and corresponding captions using a 
tf.data.Dataset object. The pipeline consists of two steps: Read the image from the disk Tokenize all the five captions corresponding to the image def decode_and_resize(img_path): img = tf.io.read_file(img_path) img = tf.image.decode_jpeg(img, channels=3) img = tf.image.resize(img, IMAGE_SIZE) img = tf.image.convert_image_dtype(img, tf.float32) return img def process_input(img_path, captions): return decode_and_resize(img_path), vectorization(captions) def make_dataset(images, captions): if split == \"train\": img_dataset = tf.data.Dataset.from_tensor_slices(images).map( read_train_image, num_parallel_calls=AUTOTUNE ) else: img_dataset = tf.data.Dataset.from_tensor_slices(images).map( read_valid_image, num_parallel_calls=AUTOTUNE ) cap_dataset = tf.data.Dataset.from_tensor_slices(captions).map( vectorization, num_parallel_calls=AUTOTUNE ) dataset = tf.data.Dataset.zip((img_dataset, cap_dataset)) dataset = dataset.batch(BATCH_SIZE).shuffle(256).prefetch(AUTOTUNE) return dataset # Pass the list of images and the list of corresponding captions train_dataset = make_dataset(list(train_data.keys()), list(train_data.values())) valid_dataset = make_dataset(list(valid_data.keys()), list(valid_data.values())) Building the model Our image captioning architecture consists of three models: A CNN: used to extract the image features A TransformerEncoder: The extracted image features are then passed to a Transformer based encoder that generates a new representation of the inputs A TransformerDecoder: This model takes the encoder output and the text data (sequences) as inputs and tries to learn to generate the caption. def get_cnn_model(): base_model = efficientnet.EfficientNetB0( input_shape=(*IMAGE_SIZE, 3), include_top=False, weights=\"imagenet\", ) # We freeze our feature extractor base_model.trainable = False base_model_out = base_model.output base_model_out = layers.Reshape((-1, base_model_out.shape[-1]))(base_model_out) cnn_model = keras.models.Model(base_model.input, base_model_out) return cnn_model class TransformerEncoderBlock(layers.Layer): def __init__(self, embed_dim, dense_dim, num_heads, **kwargs): super().__init__(**kwargs) self.embed_dim = embed_dim self.dense_dim = dense_dim self.num_heads = num_heads self.attention_1 = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim, dropout=0.0 ) self.layernorm_1 = layers.LayerNormalization() self.layernorm_2 = layers.LayerNormalization() self.dense_1 = layers.Dense(embed_dim, activation=\"relu\") def call(self, inputs, training, mask=None): inputs = self.layernorm_1(inputs) inputs = self.dense_1(inputs) attention_output_1 = self.attention_1( query=inputs, value=inputs, key=inputs, attention_mask=None, training=training, ) out_1 = self.layernorm_2(inputs + attention_output_1) return out_1 class PositionalEmbedding(layers.Layer): def __init__(self, sequence_length, vocab_size, embed_dim, **kwargs): super().__init__(**kwargs) self.token_embeddings = layers.Embedding( input_dim=vocab_size, output_dim=embed_dim ) self.position_embeddings = layers.Embedding( input_dim=sequence_length, output_dim=embed_dim ) self.sequence_length = sequence_length self.vocab_size = vocab_size self.embed_dim = embed_dim self.embed_scale = tf.math.sqrt(tf.cast(embed_dim, tf.float32)) def call(self, inputs): length = tf.shape(inputs)[-1] positions = tf.range(start=0, limit=length, delta=1) embedded_tokens = self.token_embeddings(inputs) embedded_tokens = embedded_tokens * self.embed_scale embedded_positions = self.position_embeddings(positions) return 
embedded_tokens + embedded_positions def compute_mask(self, inputs, mask=None): return tf.math.not_equal(inputs, 0) class TransformerDecoderBlock(layers.Layer): def __init__(self, embed_dim, ff_dim, num_heads, **kwargs): super().__init__(**kwargs) self.embed_dim = embed_dim self.ff_dim = ff_dim self.num_heads = num_heads self.attention_1 = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim, dropout=0.1 ) self.attention_2 = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim, dropout=0.1 ) self.ffn_layer_1 = layers.Dense(ff_dim, activation=\"relu\") self.ffn_layer_2 = layers.Dense(embed_dim) self.layernorm_1 = layers.LayerNormalization() self.layernorm_2 = layers.LayerNormalization() self.layernorm_3 = layers.LayerNormalization() self.embedding = PositionalEmbedding( embed_dim=EMBED_DIM, sequence_length=SEQ_LENGTH, vocab_size=VOCAB_SIZE ) self.out = layers.Dense(VOCAB_SIZE, activation=\"softmax\") self.dropout_1 = layers.Dropout(0.3) self.dropout_2 = layers.Dropout(0.5) self.supports_masking = True def call(self, inputs, encoder_outputs, training, mask=None): inputs = self.embedding(inputs) causal_mask = self.get_causal_attention_mask(inputs) if mask is not None: padding_mask = tf.cast(mask[:, :, tf.newaxis], dtype=tf.int32) combined_mask = tf.cast(mask[:, tf.newaxis, :], dtype=tf.int32) combined_mask = tf.minimum(combined_mask, causal_mask) attention_output_1 = self.attention_1( query=inputs, value=inputs, key=inputs, attention_mask=combined_mask, training=training, ) out_1 = self.layernorm_1(inputs + attention_output_1) attention_output_2 = self.attention_2( query=out_1, value=encoder_outputs, key=encoder_outputs, attention_mask=padding_mask, training=training, ) out_2 = self.layernorm_2(out_1 + attention_output_2) ffn_out = self.ffn_layer_1(out_2) ffn_out = self.dropout_1(ffn_out, training=training) ffn_out = self.ffn_layer_2(ffn_out) ffn_out = self.layernorm_3(ffn_out + out_2, training=training) ffn_out = self.dropout_2(ffn_out, training=training) preds = self.out(ffn_out) return preds def get_causal_attention_mask(self, inputs): input_shape = tf.shape(inputs) batch_size, sequence_length = input_shape[0], input_shape[1] i = tf.range(sequence_length)[:, tf.newaxis] j = tf.range(sequence_length) mask = tf.cast(i >= j, dtype=\"int32\") mask = tf.reshape(mask, (1, input_shape[1], input_shape[1])) mult = tf.concat( [tf.expand_dims(batch_size, -1), tf.constant([1, 1], dtype=tf.int32)], axis=0, ) return tf.tile(mask, mult) class ImageCaptioningModel(keras.Model): def __init__( self, cnn_model, encoder, decoder, num_captions_per_image=5, image_aug=None, ): super().__init__() self.cnn_model = cnn_model self.encoder = encoder self.decoder = decoder self.loss_tracker = keras.metrics.Mean(name=\"loss\") self.acc_tracker = keras.metrics.Mean(name=\"accuracy\") self.num_captions_per_image = num_captions_per_image self.image_aug = image_aug def calculate_loss(self, y_true, y_pred, mask): loss = self.loss(y_true, y_pred) mask = tf.cast(mask, dtype=loss.dtype) loss *= mask return tf.reduce_sum(loss) / tf.reduce_sum(mask) def calculate_accuracy(self, y_true, y_pred, mask): accuracy = tf.equal(y_true, tf.argmax(y_pred, axis=2)) accuracy = tf.math.logical_and(mask, accuracy) accuracy = tf.cast(accuracy, dtype=tf.float32) mask = tf.cast(mask, dtype=tf.float32) return tf.reduce_sum(accuracy) / tf.reduce_sum(mask) def _compute_caption_loss_and_acc(self, img_embed, batch_seq, training=True): encoder_out = self.encoder(img_embed, training=training) batch_seq_inp = batch_seq[:, :-1] 
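        # (Added note) Teacher forcing: the decoder input above drops the final
        # token, while the target sequence on the next line is the same caption
        # shifted left by one position; the padding mask computed just below
        # removes padded positions from the loss and accuracy.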
batch_seq_true = batch_seq[:, 1:] mask = tf.math.not_equal(batch_seq_true, 0) batch_seq_pred = self.decoder( batch_seq_inp, encoder_out, training=training, mask=mask ) loss = self.calculate_loss(batch_seq_true, batch_seq_pred, mask) acc = self.calculate_accuracy(batch_seq_true, batch_seq_pred, mask) return loss, acc def train_step(self, batch_data): batch_img, batch_seq = batch_data batch_loss = 0 batch_acc = 0 if self.image_aug: batch_img = self.image_aug(batch_img) # 1. Get image embeddings img_embed = self.cnn_model(batch_img) # 2. Pass each of the five captions one by one to the decoder # along with the encoder outputs and compute the loss as well as accuracy # for each caption. for i in range(self.num_captions_per_image): with tf.GradientTape() as tape: loss, acc = self._compute_caption_loss_and_acc( img_embed, batch_seq[:, i, :], training=True ) # 3. Update loss and accuracy batch_loss += loss batch_acc += acc # 4. Get the list of all the trainable weights train_vars = ( self.encoder.trainable_variables + self.decoder.trainable_variables ) # 5. Get the gradients grads = tape.gradient(loss, train_vars) # 6. Update the trainable weights self.optimizer.apply_gradients(zip(grads, train_vars)) # 7. Update the trackers batch_acc /= float(self.num_captions_per_image) self.loss_tracker.update_state(batch_loss) self.acc_tracker.update_state(batch_acc) # 8. Return the loss and accuracy values return {\"loss\": self.loss_tracker.result(), \"acc\": self.acc_tracker.result()} def test_step(self, batch_data): batch_img, batch_seq = batch_data batch_loss = 0 batch_acc = 0 # 1. Get image embeddings img_embed = self.cnn_model(batch_img) # 2. Pass each of the five captions one by one to the decoder # along with the encoder outputs and compute the loss as well as accuracy # for each caption. for i in range(self.num_captions_per_image): loss, acc = self._compute_caption_loss_and_acc( img_embed, batch_seq[:, i, :], training=False ) # 3. Update batch loss and batch accuracy batch_loss += loss batch_acc += acc batch_acc /= float(self.num_captions_per_image) # 4. Update the trackers self.loss_tracker.update_state(batch_loss) self.acc_tracker.update_state(batch_acc) # 5. Return the loss and accuracy values return {\"loss\": self.loss_tracker.result(), \"acc\": self.acc_tracker.result()} @property def metrics(self): # We need to list our metrics here so the `reset_states()` can be # called automatically. 
return [self.loss_tracker, self.acc_tracker] cnn_model = get_cnn_model() encoder = TransformerEncoderBlock(embed_dim=EMBED_DIM, dense_dim=FF_DIM, num_heads=1) decoder = TransformerDecoderBlock(embed_dim=EMBED_DIM, ff_dim=FF_DIM, num_heads=2) caption_model = ImageCaptioningModel( cnn_model=cnn_model, encoder=encoder, decoder=decoder, image_aug=image_augmentation, ) Model training # Define the loss function cross_entropy = keras.losses.SparseCategoricalCrossentropy( from_logits=False, reduction=\"none\" ) # EarlyStopping criteria early_stopping = keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True) # Learning Rate Scheduler for the optimizer class LRSchedule(keras.optimizers.schedules.LearningRateSchedule): def __init__(self, post_warmup_learning_rate, warmup_steps): super().__init__() self.post_warmup_learning_rate = post_warmup_learning_rate self.warmup_steps = warmup_steps def __call__(self, step): global_step = tf.cast(step, tf.float32) warmup_steps = tf.cast(self.warmup_steps, tf.float32) warmup_progress = global_step / warmup_steps warmup_learning_rate = self.post_warmup_learning_rate * warmup_progress return tf.cond( global_step < warmup_steps, lambda: warmup_learning_rate, lambda: self.post_warmup_learning_rate, ) # Create a learning rate schedule num_train_steps = len(train_dataset) * EPOCHS num_warmup_steps = num_train_steps // 15 lr_schedule = LRSchedule(post_warmup_learning_rate=1e-4, warmup_steps=num_warmup_steps) # Compile the model caption_model.compile(optimizer=keras.optimizers.Adam(lr_schedule), loss=cross_entropy) # Fit the model caption_model.fit( train_dataset, epochs=EPOCHS, validation_data=valid_dataset, callbacks=[early_stopping], ) Epoch 1/30 2021-09-17 05:18:22.943796: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 59 of 256 2021-09-17 05:18:30.137746: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 2021-09-17 05:18:30.598020: I tensorflow/stream_executor/cuda/cuda_dnn.cc:369] Loaded cuDNN version 8005 96/96 [==============================] - 62s 327ms/step - loss: 28.1409 - acc: 0.1313 - val_loss: 20.4968 - val_acc: 0.3116 Epoch 2/30 2021-09-17 05:19:13.829127: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 59 of 256 2021-09-17 05:19:19.872802: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 43s 278ms/step - loss: 19.3393 - acc: 0.3207 - val_loss: 18.0922 - val_acc: 0.3514 Epoch 3/30 2021-09-17 05:19:56.772506: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 61 of 256 2021-09-17 05:20:02.481758: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 278ms/step - loss: 17.4184 - acc: 0.3552 - val_loss: 17.0022 - val_acc: 0.3698 Epoch 4/30 2021-09-17 05:20:39.367542: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 61 of 256 2021-09-17 05:20:45.149089: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 
96/96 [==============================] - 43s 278ms/step - loss: 16.3052 - acc: 0.3760 - val_loss: 16.3026 - val_acc: 0.3845 Epoch 5/30 2021-09-17 05:21:21.930582: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 61 of 256 2021-09-17 05:21:27.608503: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 278ms/step - loss: 15.5097 - acc: 0.3901 - val_loss: 15.8929 - val_acc: 0.3925 Epoch 6/30 2021-09-17 05:22:04.553717: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 61 of 256 2021-09-17 05:22:10.210087: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 278ms/step - loss: 14.8596 - acc: 0.4069 - val_loss: 15.5456 - val_acc: 0.4005 Epoch 7/30 2021-09-17 05:22:47.100594: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 62 of 256 2021-09-17 05:22:52.466539: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 14.3454 - acc: 0.4131 - val_loss: 15.3313 - val_acc: 0.4045 Epoch 8/30 2021-09-17 05:23:29.226300: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 61 of 256 2021-09-17 05:23:34.808841: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 13.8745 - acc: 0.4251 - val_loss: 15.2011 - val_acc: 0.4078 Epoch 9/30 2021-09-17 05:24:11.615058: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 62 of 256 2021-09-17 05:24:17.030769: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 13.4640 - acc: 0.4350 - val_loss: 15.0905 - val_acc: 0.4107 Epoch 10/30 2021-09-17 05:24:53.832807: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 61 of 256 2021-09-17 05:24:59.506573: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 13.0922 - acc: 0.4414 - val_loss: 15.0083 - val_acc: 0.4113 Epoch 11/30 2021-09-17 05:25:36.242501: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 62 of 256 2021-09-17 05:25:41.723206: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 12.7538 - acc: 0.4464 - val_loss: 14.9455 - val_acc: 0.4143 Epoch 12/30 2021-09-17 05:26:18.532009: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 62 of 256 2021-09-17 05:26:23.985106: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 12.4233 - acc: 0.4547 - val_loss: 14.9816 - val_acc: 0.4133 Epoch 13/30 2021-09-17 05:27:00.696082: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 63 of 256 2021-09-17 05:27:05.812571: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 
96/96 [==============================] - 42s 277ms/step - loss: 12.1264 - acc: 0.4636 - val_loss: 14.9451 - val_acc: 0.4158 Epoch 14/30 2021-09-17 05:27:42.513445: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 63 of 256 2021-09-17 05:27:47.675342: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 11.8244 - acc: 0.4724 - val_loss: 14.9751 - val_acc: 0.4148 Epoch 15/30 2021-09-17 05:28:24.371225: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 63 of 256 2021-09-17 05:28:29.829654: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 11.5644 - acc: 0.4776 - val_loss: 15.0377 - val_acc: 0.4167 Epoch 16/30 2021-09-17 05:29:06.564650: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:175] Filling up shuffle buffer (this may take a while): 62 of 256 2021-09-17 05:29:11.945996: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:228] Shuffle buffer filled. 96/96 [==============================] - 42s 277ms/step - loss: 11.3046 - acc: 0.4852 - val_loss: 15.0575 - val_acc: 0.4135 Check sample predictions vocab = vectorization.get_vocabulary() index_lookup = dict(zip(range(len(vocab)), vocab)) max_decoded_sentence_length = SEQ_LENGTH - 1 valid_images = list(valid_data.keys()) def generate_caption(): # Select a random image from the validation dataset sample_img = np.random.choice(valid_images) # Read the image from the disk sample_img = decode_and_resize(sample_img) img = sample_img.numpy().clip(0, 255).astype(np.uint8) plt.imshow(img) plt.show() # Pass the image to the CNN img = tf.expand_dims(sample_img, 0) img = caption_model.cnn_model(img) # Pass the image features to the Transformer encoder encoded_img = caption_model.encoder(img, training=False) # Generate the caption using the Transformer decoder decoded_caption = \" \" for i in range(max_decoded_sentence_length): tokenized_caption = vectorization([decoded_caption])[:, :-1] mask = tf.math.not_equal(tokenized_caption, 0) predictions = caption_model.decoder( tokenized_caption, encoded_img, training=False, mask=mask ) sampled_token_index = np.argmax(predictions[0, i, :]) sampled_token = index_lookup[sampled_token_index] if sampled_token == \" \": break decoded_caption += \" \" + sampled_token decoded_caption = decoded_caption.replace(\" \", \"\") decoded_caption = decoded_caption.replace(\" \", \"\").strip() print(\"Predicted Caption: \", decoded_caption) # Check predictions for a few samples generate_caption() generate_caption() generate_caption() png Predicted Caption: a group of dogs race in the snow png Predicted Caption: a man in a blue canoe on a lake png Predicted Caption: a black and white dog is running through a green grass End Notes We saw that the model starts to generate reasonable captions after a few epochs. To keep this example easily runnable, we have trained it with a few constraints, like a minimal number of attention heads. To improve the predictions, you can try changing these training settings and find a good model for your use case. Training an image classifier from scratch on the Kaggle Cats vs Dogs dataset. Introduction This example shows how to do image classification from scratch, starting from JPEG image files on disk, without leveraging pre-trained weights or a pre-made Keras Application model. 
We demonstrate the workflow on the Kaggle Cats vs Dogs binary classification dataset. We use the image_dataset_from_directory utility to generate the datasets, and we use Keras image preprocessing layers for image standardization and data augmentation. Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Load the data: the Cats vs Dogs dataset Raw data download First, let's download the 786M ZIP archive of the raw data: !curl -O https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip !unzip -q kagglecatsanddogs_3367a.zip !ls % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 786M 100 786M 0 0 44.4M 0 0:00:17 0:00:17 --:--:-- 49.6M image_classification_from_scratch.ipynb MSR-LA - 3467.docx readme[1].txt kagglecatsanddogs_3367a.zip PetImages Now we have a PetImages folder which contains two subfolders, Cat and Dog. Each subfolder contains image files for each category. !ls PetImages Cat Dog Filter out corrupted images When working with lots of real-world image data, corrupted images are a common occurrence. Let's filter out badly-encoded images that do not feature the string "JFIF" in their header. import os num_skipped = 0 for folder_name in (\"Cat\", \"Dog\"): folder_path = os.path.join(\"PetImages\", folder_name) for fname in os.listdir(folder_path): fpath = os.path.join(folder_path, fname) try: fobj = open(fpath, \"rb\") is_jfif = tf.compat.as_bytes(\"JFIF\") in fobj.peek(10) finally: fobj.close() if not is_jfif: num_skipped += 1 # Delete corrupted image os.remove(fpath) print(\"Deleted %d images\" % num_skipped) Deleted 1590 images Generate a Dataset image_size = (180, 180) batch_size = 32 train_ds = tf.keras.preprocessing.image_dataset_from_directory( \"PetImages\", validation_split=0.2, subset=\"training\", seed=1337, image_size=image_size, batch_size=batch_size, ) val_ds = tf.keras.preprocessing.image_dataset_from_directory( \"PetImages\", validation_split=0.2, subset=\"validation\", seed=1337, image_size=image_size, batch_size=batch_size, ) Found 23410 files belonging to 2 classes. Using 18728 files for training. Found 23410 files belonging to 2 classes. Using 4682 files for validation. Visualize the data Here are the first 9 images in the training dataset. As you can see, label 1 is \"dog\" and label 0 is "cat". import matplotlib.pyplot as plt plt.figure(figsize=(10, 10)) for images, labels in train_ds.take(1): for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.imshow(images[i].numpy().astype(\"uint8\")) plt.title(int(labels[i])) plt.axis(\"off\") png Using image data augmentation When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random yet realistic transformations to the training images, such as random horizontal flipping or small random rotations. This helps expose the model to different aspects of the training data while slowing down overfitting.
data_augmentation = keras.Sequential( [ layers.RandomFlip(\"horizontal\"), layers.RandomRotation(0.1), ] ) Let's visualize what the augmented samples look like, by applying data_augmentation repeatedly to the first image in the dataset: plt.figure(figsize=(10, 10)) for images, _ in train_ds.take(1): for i in range(9): augmented_images = data_augmentation(images) ax = plt.subplot(3, 3, i + 1) plt.imshow(augmented_images[0].numpy().astype(\"uint8\")) plt.axis(\"off\") png Standardizing the data Our image are already in a standard size (180x180), as they are being yielded as contiguous float32 batches by our dataset. However, their RGB channel values are in the [0, 255] range. This is not ideal for a neural network; in general you should seek to make your input values small. Here, we will standardize values to be in the [0, 1] by using a Rescaling layer at the start of our model. Two options to preprocess the data There are two ways you could be using the data_augmentation preprocessor: Option 1: Make it part of the model, like this: inputs = keras.Input(shape=input_shape) x = data_augmentation(inputs) x = layers.Rescaling(1./255)(x) ... # Rest of the model With this option, your data augmentation will happen on device, synchronously with the rest of the model execution, meaning that it will benefit from GPU acceleration. Note that data augmentation is inactive at test time, so the input samples will only be augmented during fit(), not when calling evaluate() or predict(). If you're training on GPU, this is the better option. Option 2: apply it to the dataset, so as to obtain a dataset that yields batches of augmented images, like this: augmented_train_ds = train_ds.map( lambda x, y: (data_augmentation(x, training=True), y)) With this option, your data augmentation will happen on CPU, asynchronously, and will be buffered before going into the model. If you're training on CPU, this is the better option, since it makes data augmentation asynchronous and non-blocking. In our case, we'll go with the first option. Configure the dataset for performance Let's make sure to use buffered prefetching so we can yield data from disk without having I/O becoming blocking: train_ds = train_ds.prefetch(buffer_size=32) val_ds = val_ds.prefetch(buffer_size=32) Build a model We'll build a small version of the Xception network. We haven't particularly tried to optimize the architecture; if you want to do a systematic search for the best model configuration, consider using KerasTuner. Note that: We start the model with the data_augmentation preprocessor, followed by a Rescaling layer. We include a Dropout layer before the final classification layer. 
def make_model(input_shape, num_classes): inputs = keras.Input(shape=input_shape) # Image augmentation block x = data_augmentation(inputs) # Entry block x = layers.Rescaling(1.0 / 255)(x) x = layers.Conv2D(32, 3, strides=2, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) x = layers.Conv2D(64, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) previous_block_activation = x # Set aside residual for size in [128, 256, 512, 728]: x = layers.Activation(\"relu\")(x) x = layers.SeparableConv2D(size, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) x = layers.SeparableConv2D(size, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.MaxPooling2D(3, strides=2, padding=\"same\")(x) # Project residual residual = layers.Conv2D(size, 1, strides=2, padding=\"same\")( previous_block_activation ) x = layers.add([x, residual]) # Add back residual previous_block_activation = x # Set aside next residual x = layers.SeparableConv2D(1024, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) x = layers.GlobalAveragePooling2D()(x) if num_classes == 2: activation = \"sigmoid\" units = 1 else: activation = \"softmax\" units = num_classes x = layers.Dropout(0.5)(x) outputs = layers.Dense(units, activation=activation)(x) return keras.Model(inputs, outputs) model = make_model(input_shape=image_size + (3,), num_classes=2) keras.utils.plot_model(model, show_shapes=True) ('Failed to import pydot. You must `pip install pydot` and install graphviz (https://graphviz.gitlab.io/download/), ', 'for `pydotprint` to work.') Train the model epochs = 50 callbacks = [ keras.callbacks.ModelCheckpoint(\"save_at_{epoch}.h5\"), ] model.compile( optimizer=keras.optimizers.Adam(1e-3), loss=\"binary_crossentropy\", metrics=[\"accuracy\"], ) model.fit( train_ds, epochs=epochs, callbacks=callbacks, validation_data=val_ds, ) Epoch 1/50 586/586 [==============================] - 81s 139ms/step - loss: 0.6233 - accuracy: 0.6700 - val_loss: 0.7698 - val_accuracy: 0.6117 Epoch 2/50 586/586 [==============================] - 80s 137ms/step - loss: 0.4638 - accuracy: 0.7840 - val_loss: 0.4056 - val_accuracy: 0.8178 Epoch 3/50 586/586 [==============================] - 80s 137ms/step - loss: 0.3652 - accuracy: 0.8405 - val_loss: 0.3535 - val_accuracy: 0.8528 Epoch 4/50 586/586 [==============================] - 80s 137ms/step - loss: 0.3112 - accuracy: 0.8675 - val_loss: 0.2673 - val_accuracy: 0.8894 Epoch 5/50 586/586 [==============================] - 80s 137ms/step - loss: 0.2585 - accuracy: 0.8928 - val_loss: 0.6213 - val_accuracy: 0.7294 Epoch 6/50 586/586 [==============================] - 81s 138ms/step - loss: 0.2218 - accuracy: 0.9071 - val_loss: 0.2377 - val_accuracy: 0.8930 Epoch 7/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1992 - accuracy: 0.9169 - val_loss: 1.1273 - val_accuracy: 0.6254 Epoch 8/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1820 - accuracy: 0.9243 - val_loss: 0.1955 - val_accuracy: 0.9173 Epoch 9/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1694 - accuracy: 0.9308 - val_loss: 0.1602 - val_accuracy: 0.9314 Epoch 10/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1623 - accuracy: 0.9333 - val_loss: 0.1777 - val_accuracy: 0.9248 Epoch 11/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1522 - accuracy: 0.9365 - 
val_loss: 0.1562 - val_accuracy: 0.9400 Epoch 12/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1458 - accuracy: 0.9417 - val_loss: 0.1529 - val_accuracy: 0.9338 Epoch 13/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1368 - accuracy: 0.9433 - val_loss: 0.1694 - val_accuracy: 0.9259 Epoch 14/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1301 - accuracy: 0.9461 - val_loss: 0.1250 - val_accuracy: 0.9530 Epoch 15/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1261 - accuracy: 0.9483 - val_loss: 0.1548 - val_accuracy: 0.9353 Epoch 16/50 586/586 [==============================] - 81s 137ms/step - loss: 0.1241 - accuracy: 0.9497 - val_loss: 0.1376 - val_accuracy: 0.9464 Epoch 17/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1193 - accuracy: 0.9535 - val_loss: 0.1093 - val_accuracy: 0.9575 Epoch 18/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1107 - accuracy: 0.9558 - val_loss: 0.1488 - val_accuracy: 0.9432 Epoch 19/50 586/586 [==============================] - 80s 137ms/step - loss: 0.1175 - accuracy: 0.9532 - val_loss: 0.1380 - val_accuracy: 0.9421 Epoch 20/50 586/586 [==============================] - 81s 138ms/step - loss: 0.1026 - accuracy: 0.9584 - val_loss: 0.1293 - val_accuracy: 0.9485 Epoch 21/50 586/586 [==============================] - 80s 137ms/step - loss: 0.0977 - accuracy: 0.9606 - val_loss: 0.1105 - val_accuracy: 0.9573 Epoch 22/50 586/586 [==============================] - 80s 137ms/step - loss: 0.0983 - accuracy: 0.9610 - val_loss: 0.1023 - val_accuracy: 0.9633 Epoch 23/50 586/586 [==============================] - 80s 137ms/step - loss: 0.0776 - accuracy: 0.9694 - val_loss: 0.1176 - val_accuracy: 0.9530 Epoch 38/50 586/586 [==============================] - 80s 136ms/step - loss: 0.0596 - accuracy: 0.9768 - val_loss: 0.0967 - val_accuracy: 0.9633 Epoch 44/50 586/586 [==============================] - 80s 136ms/step - loss: 0.0504 - accuracy: 0.9792 - val_loss: 0.0984 - val_accuracy: 0.9663 Epoch 50/50 586/586 [==============================] - 80s 137ms/step - loss: 0.0486 - accuracy: 0.9817 - val_loss: 0.1157 - val_accuracy: 0.9609 We get to ~96% validation accuracy after training for 50 epochs on the full dataset. Run inference on new data Note that data augmentation and dropout are inactive at inference time. img = keras.preprocessing.image.load_img( \"PetImages/Cat/6779.jpg\", target_size=image_size ) img_array = keras.preprocessing.image.img_to_array(img) img_array = tf.expand_dims(img_array, 0) # Create batch axis predictions = model.predict(img_array) score = predictions[0] print( \"This image is %.2f percent cat and %.2f percent dog.\" % (100 * (1 - score), 100 * score) ) This image is 84.34 percent cat and 15.66 percent dog. BigTransfer (BiT) State-of-the-art transfer learning for image classification. Introduction BigTransfer (also known as BiT) is a state-of-the-art transfer learning method for image classification. Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. BiT revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. The importance of appropriately choosing normalization layers and scaling the architecture capacity as the amount of pre-training data increases. BigTransfer(BiT) is trained on public datasets, along with code in TF2, Jax and Pytorch. 
This will help anyone reach state-of-the-art performance on their task of interest, even with just a handful of labeled images per class. You can find BiT models pre-trained on ImageNet and ImageNet-21k on TFHub as TensorFlow 2 SavedModels that you can easily use as Keras layers. There are a variety of sizes, ranging from a standard ResNet50 to a ResNet152x4 (152 layers deep, 4x wider than a typical ResNet50), for users with larger computational and memory budgets and higher accuracy requirements. Figure: The x-axis shows the number of images used per class, ranging from 1 to the full dataset. On the plots on the left, the curve in blue above is our BiT-L model, whereas the curve below is a ResNet-50 pre-trained on ImageNet (ILSVRC-2012). Setup import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras import tensorflow_hub as hub import tensorflow_datasets as tfds tfds.disable_progress_bar() SEEDS = 42 np.random.seed(SEEDS) tf.random.set_seed(SEEDS) Gather Flower Dataset train_ds, validation_ds = tfds.load( \"tf_flowers\", split=[\"train[:85%]\", \"train[85%:]\"], as_supervised=True, ) Downloading and preparing dataset tf_flowers/3.0.1 (download: 218.21 MiB, generated: 221.83 MiB, total: 440.05 MiB) to /root/tensorflow_datasets/tf_flowers/3.0.1... Dataset tf_flowers downloaded and prepared to /root/tensorflow_datasets/tf_flowers/3.0.1. Subsequent calls will reuse this data. Visualise the dataset plt.figure(figsize=(10, 10)) for i, (image, label) in enumerate(train_ds.take(9)): ax = plt.subplot(3, 3, i + 1) plt.imshow(image) plt.title(int(label)) plt.axis(\"off\") png Define hyperparameters RESIZE_TO = 384 CROP_TO = 224 BATCH_SIZE = 64 STEPS_PER_EPOCH = 10 AUTO = tf.data.AUTOTUNE # optimise the pipeline performance NUM_CLASSES = 5 # number of classes SCHEDULE_LENGTH = ( 500 # we will train on lower resolution images and will still attain good results ) SCHEDULE_BOUNDARIES = [ 200, 300, 400, ] # the larger the dataset, the longer the schedule Hyperparameters like SCHEDULE_LENGTH and SCHEDULE_BOUNDARIES are determined based on empirical results. The method is explained in the original paper and in their Google AI Blog Post. SCHEDULE_LENGTH also determines whether to use MixUp augmentation or not. You can also find an easy MixUp implementation in the Keras code examples; a minimal sketch is shown below.
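For reference, here is a minimal sketch of what MixUp does, assuming a batch of images and one-hot labels. It is not used in this example (which keeps integer labels with a sparse loss), and the helper name mix_up, the per-example mixing coefficients, and the Beta-via-Gamma sampling trick are illustrative choices, not part of the BiT recipe.

import tensorflow as tf

def mix_up(images, one_hot_labels, alpha=0.2):
    # Sample per-example mixing coefficients from Beta(alpha, alpha)
    # using two Gamma draws, since tf.random has no direct Beta sampler.
    batch_size = tf.shape(images)[0]
    g1 = tf.random.gamma([batch_size], alpha)
    g2 = tf.random.gamma([batch_size], alpha)
    lam = g1 / (g1 + g2)

    # Mix every example with a randomly chosen partner from the same batch.
    indices = tf.random.shuffle(tf.range(batch_size))
    lam_img = tf.reshape(lam, [-1, 1, 1, 1])
    lam_lab = tf.reshape(lam, [-1, 1])
    mixed_images = lam_img * images + (1.0 - lam_img) * tf.gather(images, indices)
    mixed_labels = lam_lab * one_hot_labels + (1.0 - lam_lab) * tf.gather(one_hot_labels, indices)
    return mixed_images, mixed_labels

If it were enabled, this would be applied with dataset.map after batching, and the hard labels would become soft mixed labels, so the loss would also need to switch to a non-sparse categorical cross-entropy.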
Define preprocessing helper functions SCHEDULE_LENGTH = SCHEDULE_LENGTH * 512 / BATCH_SIZE @tf.function def preprocess_train(image, label): image = tf.image.random_flip_left_right(image) image = tf.image.resize(image, (RESIZE_TO, RESIZE_TO)) image = tf.image.random_crop(image, (CROP_TO, CROP_TO, 3)) image = image / 255.0 return (image, label) @tf.function def preprocess_test(image, label): image = tf.image.resize(image, (RESIZE_TO, RESIZE_TO)) image = image / 255.0 return (image, label) DATASET_NUM_TRAIN_EXAMPLES = train_ds.cardinality().numpy() repeat_count = int( SCHEDULE_LENGTH * BATCH_SIZE / DATASET_NUM_TRAIN_EXAMPLES * STEPS_PER_EPOCH ) repeat_count += 50 + 1 # To ensure at least there are 50 epochs of training Define the data pipeline # Training pipeline pipeline_train = ( train_ds.shuffle(10000) .repeat(repeat_count) # Repeat dataset_size / num_steps .map(preprocess_train, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) # Validation pipeline pipeline_validation = ( validation_ds.map(preprocess_test, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) Visualise the training samples image_batch, label_batch = next(iter(pipeline_train)) plt.figure(figsize=(10, 10)) for n in range(25): ax = plt.subplot(5, 5, n + 1) plt.imshow(image_batch[n]) plt.title(label_batch[n].numpy()) plt.axis(\"off\") png Load pretrained TF-Hub model into a KerasLayer bit_model_url = \"https://tfhub.dev/google/bit/m-r50x1/1\" bit_module = hub.KerasLayer(bit_model_url) Create BigTransfer (BiT) model To create the new model, we: Cut off the BiT model’s original head. This leaves us with the “pre-logits” output. We do not have to do this if we use the ‘feature extractor’ models (i.e. all those in subdirectories titled feature_vectors), since for those models the head has already been cut off. Add a new head with the number of outputs equal to the number of classes of our new task. Note that it is important that we initialise the head to all zeroes. class MyBiTModel(keras.Model): def __init__(self, num_classes, module, **kwargs): super().__init__(**kwargs) self.num_classes = num_classes self.head = keras.layers.Dense(num_classes, kernel_initializer=\"zeros\") self.bit_model = module def call(self, images): bit_embedding = self.bit_model(images) return self.head(bit_embedding) model = MyBiTModel(num_classes=NUM_CLASSES, module=bit_module) Define optimizer and loss learning_rate = 0.003 * BATCH_SIZE / 512 # Decay learning rate by a factor of 10 at SCHEDULE_BOUNDARIES. 
lr_schedule = keras.optimizers.schedules.PiecewiseConstantDecay( boundaries=SCHEDULE_BOUNDARIES, values=[ learning_rate, learning_rate * 0.1, learning_rate * 0.01, learning_rate * 0.001, ], ) optimizer = keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9) loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True) Compile the model model.compile(optimizer=optimizer, loss=loss_fn, metrics=[\"accuracy\"]) Set up callbacks train_callbacks = [ keras.callbacks.EarlyStopping( monitor=\"val_accuracy\", patience=2, restore_best_weights=True ) ] Train the model history = model.fit( pipeline_train, batch_size=BATCH_SIZE, epochs=int(SCHEDULE_LENGTH / STEPS_PER_EPOCH), steps_per_epoch=STEPS_PER_EPOCH, validation_data=pipeline_validation, callbacks=train_callbacks, ) Epoch 1/400 10/10 [==============================] - 41s 1s/step - loss: 0.7440 - accuracy: 0.7844 - val_loss: 0.1837 - val_accuracy: 0.9582 Epoch 2/400 10/10 [==============================] - 8s 904ms/step - loss: 0.1499 - accuracy: 0.9547 - val_loss: 0.1094 - val_accuracy: 0.9709 Epoch 3/400 10/10 [==============================] - 8s 905ms/step - loss: 0.1674 - accuracy: 0.9422 - val_loss: 0.0874 - val_accuracy: 0.9727 Epoch 4/400 10/10 [==============================] - 8s 905ms/step - loss: 0.1314 - accuracy: 0.9578 - val_loss: 0.0829 - val_accuracy: 0.9727 Epoch 5/400 10/10 [==============================] - 8s 903ms/step - loss: 0.1336 - accuracy: 0.9500 - val_loss: 0.0765 - val_accuracy: 0.9727 Plot the training and validation metrics def plot_hist(hist): plt.plot(hist.history[\"accuracy\"]) plt.plot(hist.history[\"val_accuracy\"]) plt.plot(hist.history[\"loss\"]) plt.plot(hist.history[\"val_loss\"]) plt.title(\"Training Progress\") plt.ylabel(\"Accuracy/Loss\") plt.xlabel(\"Epochs\") plt.legend([\"train_acc\", \"val_acc\", \"train_loss\", \"val_loss\"], loc=\"upper left\") plt.show() plot_hist(history) png Evaluate the model accuracy = model.evaluate(pipeline_validation)[1] * 100 print(\"Accuracy: {:.2f}%\".format(accuracy)) 9/9 [==============================] - 6s 646ms/step - loss: 0.0874 - accuracy: 0.9727 Accuracy: 97.27% Conclusion BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. You can experiment further with the BigTransfer Method by following the original paper. Use EfficientNet with weights pre-trained on imagenet for Stanford Dogs classification. Introduction: what is EfficientNet EfficientNet, first introduced in Tan and Le, 2019 is among the most efficient models (i.e. requiring least FLOPS for inference) that reaches State-of-the-Art accuracy on both imagenet and common image classification transfer learning tasks. The smallest base model is similar to MnasNet, which reached near-SOTA with a significantly smaller model. By introducing a heuristic way to scale the model, EfficientNet provides a family of models (B0 to B7) that represents a good combination of efficiency and accuracy on a variety of scales. Such a scaling heuristics (compound-scaling, details see Tan and Le, 2019) allows the efficiency-oriented base model (B0) to surpass models at every scale, while avoiding extensive grid-search of hyperparameters. 
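To make the compound-scaling idea concrete, here is a rough sketch assuming the coefficients reported in the paper (alpha = 1.2 for depth, beta = 1.1 for width, gamma = 1.15 for resolution, chosen so that alpha * beta^2 * gamma^2 is roughly 2). The helper name and the rounding are illustrative only, and, as the "B0 to B7 variants of EfficientNet" section below explains, the released variants are hand-picked rather than produced by this exact formula.

# Compound scaling sketch: depth, width and resolution all grow with a
# single compound coefficient phi (Tan and Le, 2019).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # depth, width, resolution multipliers

def compound_scale(phi, base_resolution=224):
    depth_multiplier = ALPHA ** phi
    width_multiplier = BETA ** phi
    resolution = int(round(base_resolution * GAMMA ** phi))
    return depth_multiplier, width_multiplier, resolution

for phi in range(4):
    print(phi, compound_scale(phi))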
A summary of the latest updates on the model is available here, where various augmentation schemes and semi-supervised learning approaches are applied to further improve the ImageNet performance of the models. These extensions of the model can be used by updating the weights without changing the model architecture. B0 to B7 variants of EfficientNet (This section provides some details on \"compound scaling\", and can be skipped if you're only interested in using the models) Based on the original paper, people may have the impression that EfficientNet is a continuous family of models created by arbitrarily choosing the scaling factor as in Eq.(3) of the paper. However, the choice of resolution, depth and width is also restricted by many factors: Resolution: Resolutions not divisible by 8, 16, etc. cause zero-padding near the boundaries of some layers, which wastes computational resources. This especially applies to smaller variants of the model, hence the input resolutions for B0 and B1 are chosen as 224 and 240. Depth and width: The building blocks of EfficientNet require channel sizes to be multiples of 8. Resource limit: Memory limitations may bottleneck resolution when depth and width can still increase. In such a situation, increasing depth and/or width while keeping resolution fixed can still improve performance. As a result, the depth, width and resolution of each variant of the EfficientNet models are hand-picked and proven to produce good results, though they may be significantly off from the compound scaling formula. Therefore, the Keras implementation (detailed below) only provides these 8 models, B0 to B7, instead of allowing an arbitrary choice of width / depth / resolution parameters. Keras implementation of EfficientNet An implementation of EfficientNet B0 to B7 has been shipped with tf.keras since TF2.3. To use EfficientNetB0 for classifying 1000 classes of images from ImageNet, run: from tensorflow.keras.applications import EfficientNetB0 model = EfficientNetB0(weights='imagenet') This model takes input images of shape (224, 224, 3), and the input data should be in the range [0, 255]. Normalization is included as part of the model. Because training EfficientNet on ImageNet takes a tremendous amount of resources and relies on several techniques that are not part of the model architecture itself, the Keras implementation by default loads pre-trained weights obtained via training with AutoAugment. For B0 to B7 base models, the input shapes are different. Here is the input resolution expected for each model:
EfficientNetB0: 224
EfficientNetB1: 240
EfficientNetB2: 260
EfficientNetB3: 300
EfficientNetB4: 380
EfficientNetB5: 456
EfficientNetB6: 528
EfficientNetB7: 600
When the model is intended for transfer learning, the Keras implementation provides an option to remove the top layers: model = EfficientNetB0(include_top=False, weights='imagenet') This option excludes the final Dense layer that turns the 1280 features of the penultimate layer into predictions for the 1000 ImageNet classes. Replacing the top layer with custom layers allows using EfficientNet as a feature extractor in a transfer learning workflow. Another argument in the model constructor worth noticing is drop_connect_rate, which controls the dropout rate responsible for stochastic depth. This parameter serves as a toggle for extra regularization in fine-tuning, but does not affect loaded weights. For example, when stronger regularization is desired, try: model = EfficientNetB0(weights='imagenet', drop_connect_rate=0.4) The default value is 0.2.
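Since each variant expects a different input resolution, it can be convenient to keep the table above in code and pick the constructor and image size together. The dictionary and helper below are an illustrative sketch, not part of the Keras API.

from tensorflow.keras import applications

# Expected input resolution for each variant (from the table above).
EFFICIENTNET_VARIANTS = {
    "B0": (applications.EfficientNetB0, 224),
    "B1": (applications.EfficientNetB1, 240),
    "B2": (applications.EfficientNetB2, 260),
    "B3": (applications.EfficientNetB3, 300),
    "B4": (applications.EfficientNetB4, 380),
    "B5": (applications.EfficientNetB5, 456),
    "B6": (applications.EfficientNetB6, 528),
    "B7": (applications.EfficientNetB7, 600),
}

def build_backbone(variant="B0"):
    constructor, img_size = EFFICIENTNET_VARIANTS[variant]
    backbone = constructor(include_top=False, weights="imagenet")
    return backbone, img_size

backbone, img_size = build_backbone("B0")
print(backbone.name, img_size)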
Example: EfficientNetB0 for Stanford Dogs. EfficientNet is capable of a wide range of image classification tasks. This makes it a good model for transfer learning. As an end-to-end example, we will show using pre-trained EfficientNetB0 on Stanford Dogs dataset. # IMG_SIZE is determined by EfficientNet model choice IMG_SIZE = 224 Setup and data loading This example requires TensorFlow 2.3 or above. To use TPU, the TPU runtime must match current running TensorFlow version. If there is a mismatch, try: from cloud_tpu_client import Client c = Client() c.configure_tpu_version(tf.__version__, restart_type=\"always\") import tensorflow as tf try: tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() print(\"Device:\", tpu.master()) strategy = tf.distribute.TPUStrategy(tpu) except ValueError: print(\"Not connected to a TPU runtime. Using CPU/GPU strategy\") strategy = tf.distribute.MirroredStrategy() Not connected to a TPU runtime. Using CPU/GPU strategy INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',) Loading data Here we load data from tensorflow_datasets (hereafter TFDS). Stanford Dogs dataset is provided in TFDS as stanford_dogs. It features 20,580 images that belong to 120 classes of dog breeds (12,000 for training and 8,580 for testing). By simply changing dataset_name below, you may also try this notebook for other datasets in TFDS such as cifar10, cifar100, food101, etc. When the images are much smaller than the size of EfficientNet input, we can simply upsample the input images. It has been shown in Tan and Le, 2019 that transfer learning result is better for increased resolution even if input images remain small. For TPU: if using TFDS datasets, a GCS bucket location is required to save the datasets. For example: tfds.load(dataset_name, data_dir=\"gs://example-bucket/datapath\") Also, both the current environment and the TPU service account have proper access to the bucket. Alternatively, for small datasets you may try loading data into the memory and use tf.data.Dataset.from_tensor_slices(). import tensorflow_datasets as tfds batch_size = 64 dataset_name = \"stanford_dogs\" (ds_train, ds_test), ds_info = tfds.load( dataset_name, split=[\"train\", \"test\"], with_info=True, as_supervised=True ) NUM_CLASSES = ds_info.features[\"label\"].num_classes When the dataset include images with various size, we need to resize them into a shared size. The Stanford Dogs dataset includes only images at least 200x200 pixels in size. Here we resize the images to the input size needed for EfficientNet. size = (IMG_SIZE, IMG_SIZE) ds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label)) ds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label)) Visualizing the data The following code shows the first 9 images with their labels. import matplotlib.pyplot as plt def format_label(label): string_label = label_info.int2str(label) return string_label.split(\"-\")[1] label_info = ds_info.features[\"label\"] for i, (image, label) in enumerate(ds_train.take(9)): ax = plt.subplot(3, 3, i + 1) plt.imshow(image.numpy().astype(\"uint8\")) plt.title(\"{}\".format(format_label(label))) plt.axis(\"off\") png Data augmentation We can use the preprocessing layers APIs for image augmentation. 
from tensorflow.keras.models import Sequential from tensorflow.keras import layers img_augmentation = Sequential( [ layers.RandomRotation(factor=0.15), layers.RandomTranslation(height_factor=0.1, width_factor=0.1), layers.RandomFlip(), layers.RandomContrast(factor=0.1), ], name=\"img_augmentation\", ) This Sequential model object can be used both as a part of the model we later build, and as a function to preprocess data before feeding into the model. Using them as function makes it easy to visualize the augmented images. Here we plot 9 examples of augmentation result of a given figure. for image, label in ds_train.take(1): for i in range(9): ax = plt.subplot(3, 3, i + 1) aug_img = img_augmentation(tf.expand_dims(image, axis=0)) plt.imshow(aug_img[0].numpy().astype(\"uint8\")) plt.title(\"{}\".format(format_label(label))) plt.axis(\"off\") png Prepare inputs Once we verify the input data and augmentation are working correctly, we prepare dataset for training. The input data are resized to uniform IMG_SIZE. The labels are put into one-hot (a.k.a. categorical) encoding. The dataset is batched. Note: prefetch and AUTOTUNE may in some situation improve performance, but depends on environment and the specific dataset used. See this guide for more information on data pipeline performance. # One-hot / categorical encoding def input_preprocess(image, label): label = tf.one_hot(label, NUM_CLASSES) return image, label ds_train = ds_train.map( input_preprocess, num_parallel_calls=tf.data.AUTOTUNE ) ds_train = ds_train.batch(batch_size=batch_size, drop_remainder=True) ds_train = ds_train.prefetch(tf.data.AUTOTUNE) ds_test = ds_test.map(input_preprocess) ds_test = ds_test.batch(batch_size=batch_size, drop_remainder=True) Training a model from scratch We build an EfficientNetB0 with 120 output classes, that is initialized from scratch: Note: the accuracy will increase very slowly and may overfit. 
from tensorflow.keras.applications import EfficientNetB0 with strategy.scope(): inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)) x = img_augmentation(inputs) outputs = EfficientNetB0(include_top=True, weights=None, classes=NUM_CLASSES)(x) model = tf.keras.Model(inputs, outputs) model.compile( optimizer=\"adam\", loss=\"categorical_crossentropy\", metrics=[\"accuracy\"] ) model.summary() epochs = 40 # @param {type: \"slider\", min:10, max:100} hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2) Model: \"functional_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ img_augmentation (Sequential (None, 224, 224, 3) 0 _________________________________________________________________ efficientnetb0 (Functional) (None, 120) 4203291 ================================================================= Total params: 4,203,291 Trainable params: 4,161,268 Non-trainable params: 42,023 _________________________________________________________________ Epoch 1/40 187/187 - 66s - loss: 4.9221 - accuracy: 0.0119 - val_loss: 4.9835 - val_accuracy: 0.0104 Epoch 2/40 187/187 - 63s - loss: 4.5652 - accuracy: 0.0243 - val_loss: 5.1626 - val_accuracy: 0.0145 Epoch 3/40 187/187 - 63s - loss: 4.4179 - accuracy: 0.0337 - val_loss: 4.7597 - val_accuracy: 0.0237 Epoch 4/40 187/187 - 63s - loss: 4.2964 - accuracy: 0.0421 - val_loss: 4.4028 - val_accuracy: 0.0378 Epoch 5/40 187/187 - 63s - loss: 4.1951 - accuracy: 0.0540 - val_loss: 4.3048 - val_accuracy: 0.0443 Epoch 6/40 187/187 - 63s - loss: 4.1025 - accuracy: 0.0596 - val_loss: 4.1918 - val_accuracy: 0.0526 Epoch 7/40 187/187 - 63s - loss: 4.0157 - accuracy: 0.0728 - val_loss: 4.1482 - val_accuracy: 0.0591 Epoch 8/40 187/187 - 62s - loss: 3.9344 - accuracy: 0.0844 - val_loss: 4.1088 - val_accuracy: 0.0638 Epoch 9/40 187/187 - 63s - loss: 3.8529 - accuracy: 0.0951 - val_loss: 4.0692 - val_accuracy: 0.0770 Epoch 10/40 187/187 - 63s - loss: 3.7650 - accuracy: 0.1040 - val_loss: 4.1468 - val_accuracy: 0.0719 Epoch 11/40 187/187 - 63s - loss: 3.6858 - accuracy: 0.1185 - val_loss: 4.0484 - val_accuracy: 0.0913 Epoch 12/40 187/187 - 63s - loss: 3.5942 - accuracy: 0.1326 - val_loss: 3.8047 - val_accuracy: 0.1072 Epoch 13/40 187/187 - 63s - loss: 3.5028 - accuracy: 0.1447 - val_loss: 3.9513 - val_accuracy: 0.0933 Epoch 14/40 187/187 - 63s - loss: 3.4295 - accuracy: 0.1604 - val_loss: 3.7738 - val_accuracy: 0.1220 Epoch 15/40 187/187 - 63s - loss: 3.3410 - accuracy: 0.1735 - val_loss: 3.9104 - val_accuracy: 0.1104 Epoch 16/40 187/187 - 63s - loss: 3.2511 - accuracy: 0.1890 - val_loss: 3.6904 - val_accuracy: 0.1264 Epoch 17/40 187/187 - 63s - loss: 3.1624 - accuracy: 0.2076 - val_loss: 3.4026 - val_accuracy: 0.1769 Epoch 18/40 187/187 - 63s - loss: 3.0825 - accuracy: 0.2229 - val_loss: 3.4627 - val_accuracy: 0.1744 Epoch 19/40 187/187 - 63s - loss: 3.0041 - accuracy: 0.2355 - val_loss: 3.6061 - val_accuracy: 0.1542 Epoch 20/40 187/187 - 64s - loss: 2.8945 - accuracy: 0.2552 - val_loss: 3.2769 - val_accuracy: 0.2036 Epoch 21/40 187/187 - 63s - loss: 2.8054 - accuracy: 0.2710 - val_loss: 3.5355 - val_accuracy: 0.1834 Epoch 22/40 187/187 - 63s - loss: 2.7342 - accuracy: 0.2904 - val_loss: 3.3540 - val_accuracy: 0.1973 Epoch 23/40 187/187 - 62s - loss: 2.6258 - accuracy: 0.3042 - val_loss: 3.2608 - val_accuracy: 0.2217 Epoch 
24/40 187/187 - 62s - loss: 2.5453 - accuracy: 0.3218 - val_loss: 3.4611 - val_accuracy: 0.1941 Epoch 25/40 187/187 - 63s - loss: 2.4585 - accuracy: 0.3356 - val_loss: 3.4163 - val_accuracy: 0.2070 Epoch 26/40 187/187 - 62s - loss: 2.3606 - accuracy: 0.3647 - val_loss: 3.2558 - val_accuracy: 0.2392 Epoch 27/40 187/187 - 63s - loss: 2.2819 - accuracy: 0.3801 - val_loss: 3.3676 - val_accuracy: 0.2222 Epoch 28/40 187/187 - 62s - loss: 2.2114 - accuracy: 0.3933 - val_loss: 3.6578 - val_accuracy: 0.2022 Epoch 29/40 187/187 - 62s - loss: 2.0964 - accuracy: 0.4215 - val_loss: 3.5366 - val_accuracy: 0.2186 Epoch 30/40 187/187 - 63s - loss: 1.9931 - accuracy: 0.4459 - val_loss: 3.5612 - val_accuracy: 0.2310 Epoch 31/40 187/187 - 63s - loss: 1.8924 - accuracy: 0.4657 - val_loss: 3.4780 - val_accuracy: 0.2359 Epoch 32/40 187/187 - 63s - loss: 1.8095 - accuracy: 0.4874 - val_loss: 3.5776 - val_accuracy: 0.2403 Epoch 33/40 187/187 - 63s - loss: 1.7126 - accuracy: 0.5086 - val_loss: 3.6865 - val_accuracy: 0.2316 Epoch 34/40 187/187 - 63s - loss: 1.6117 - accuracy: 0.5373 - val_loss: 3.6419 - val_accuracy: 0.2513 Epoch 35/40 187/187 - 63s - loss: 1.5532 - accuracy: 0.5514 - val_loss: 3.8050 - val_accuracy: 0.2415 Epoch 36/40 187/187 - 63s - loss: 1.4479 - accuracy: 0.5809 - val_loss: 4.0113 - val_accuracy: 0.2299 Epoch 37/40 187/187 - 62s - loss: 1.3885 - accuracy: 0.5939 - val_loss: 4.1262 - val_accuracy: 0.2158 Epoch 38/40 187/187 - 63s - loss: 1.2979 - accuracy: 0.6217 - val_loss: 4.2519 - val_accuracy: 0.2344 Epoch 39/40 187/187 - 62s - loss: 1.2066 - accuracy: 0.6413 - val_loss: 4.3924 - val_accuracy: 0.2169 Epoch 40/40 187/187 - 62s - loss: 1.1348 - accuracy: 0.6618 - val_loss: 4.2216 - val_accuracy: 0.2374 Training the model is relatively fast (takes only 20 seconds per epoch on TPUv2 that is available on Colab). This might make it sounds easy to simply train EfficientNet on any dataset wanted from scratch. However, training EfficientNet on smaller datasets, especially those with lower resolution like CIFAR-100, faces the significant challenge of overfitting. Hence training from scratch requires very careful choice of hyperparameters and is difficult to find suitable regularization. It would also be much more demanding in resources. Plotting the training and validation accuracy makes it clear that validation accuracy stagnates at a low value. import matplotlib.pyplot as plt def plot_hist(hist): plt.plot(hist.history[\"accuracy\"]) plt.plot(hist.history[\"val_accuracy\"]) plt.title(\"model accuracy\") plt.ylabel(\"accuracy\") plt.xlabel(\"epoch\") plt.legend([\"train\", \"validation\"], loc=\"upper left\") plt.show() plot_hist(hist) png Transfer learning from pre-trained weights Here we initialize the model with pre-trained ImageNet weights, and we fine-tune it on our own dataset. 
def build_model(num_classes): inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)) x = img_augmentation(inputs) model = EfficientNetB0(include_top=False, input_tensor=x, weights=\"imagenet\") # Freeze the pretrained weights model.trainable = False # Rebuild top x = layers.GlobalAveragePooling2D(name=\"avg_pool\")(model.output) x = layers.BatchNormalization()(x) top_dropout_rate = 0.2 x = layers.Dropout(top_dropout_rate, name=\"top_dropout\")(x) outputs = layers.Dense(NUM_CLASSES, activation=\"softmax\", name=\"pred\")(x) # Compile model = tf.keras.Model(inputs, outputs, name=\"EfficientNet\") optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2) model.compile( optimizer=optimizer, loss=\"categorical_crossentropy\", metrics=[\"accuracy\"] ) return model The first step to transfer learning is to freeze all layers and train only the top layers. For this step, a relatively large learning rate (1e-2) can be used. Note that validation accuracy and loss will usually be better than training accuracy and loss. This is because the regularization is strong, which only suppresses training-time metrics. Note that the convergence may take up to 50 epochs depending on choice of learning rate. If image augmentation layers were not applied, the validation accuracy may only reach ~60%. with strategy.scope(): model = build_model(num_classes=NUM_CLASSES) epochs = 25 # @param {type: \"slider\", min:8, max:80} hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2) plot_hist(hist) Epoch 1/25 187/187 - 33s - loss: 3.5673 - accuracy: 0.3624 - val_loss: 1.0288 - val_accuracy: 0.6957 Epoch 2/25 187/187 - 31s - loss: 1.8503 - accuracy: 0.5232 - val_loss: 0.8439 - val_accuracy: 0.7484 Epoch 3/25 187/187 - 31s - loss: 1.5511 - accuracy: 0.5772 - val_loss: 0.7953 - val_accuracy: 0.7563 Epoch 4/25 187/187 - 31s - loss: 1.4660 - accuracy: 0.5878 - val_loss: 0.8061 - val_accuracy: 0.7535 Epoch 5/25 187/187 - 31s - loss: 1.4143 - accuracy: 0.6034 - val_loss: 0.7850 - val_accuracy: 0.7569 Epoch 6/25 187/187 - 31s - loss: 1.4000 - accuracy: 0.6054 - val_loss: 0.7846 - val_accuracy: 0.7646 Epoch 7/25 187/187 - 31s - loss: 1.3678 - accuracy: 0.6173 - val_loss: 0.7850 - val_accuracy: 0.7682 Epoch 8/25 187/187 - 31s - loss: 1.3286 - accuracy: 0.6222 - val_loss: 0.8142 - val_accuracy: 0.7608 Epoch 9/25 187/187 - 31s - loss: 1.3210 - accuracy: 0.6245 - val_loss: 0.7890 - val_accuracy: 0.7669 Epoch 10/25 187/187 - 31s - loss: 1.3086 - accuracy: 0.6278 - val_loss: 0.8368 - val_accuracy: 0.7575 Epoch 11/25 187/187 - 31s - loss: 1.2877 - accuracy: 0.6315 - val_loss: 0.8309 - val_accuracy: 0.7599 Epoch 12/25 187/187 - 31s - loss: 1.2918 - accuracy: 0.6308 - val_loss: 0.8319 - val_accuracy: 0.7535 Epoch 13/25 187/187 - 31s - loss: 1.2738 - accuracy: 0.6373 - val_loss: 0.8567 - val_accuracy: 0.7576 Epoch 14/25 187/187 - 31s - loss: 1.2837 - accuracy: 0.6410 - val_loss: 0.8004 - val_accuracy: 0.7697 Epoch 15/25 187/187 - 31s - loss: 1.2828 - accuracy: 0.6403 - val_loss: 0.8364 - val_accuracy: 0.7625 Epoch 16/25 187/187 - 31s - loss: 1.2749 - accuracy: 0.6405 - val_loss: 0.8558 - val_accuracy: 0.7565 Epoch 17/25 187/187 - 31s - loss: 1.3022 - accuracy: 0.6352 - val_loss: 0.8361 - val_accuracy: 0.7551 Epoch 18/25 187/187 - 31s - loss: 1.2848 - accuracy: 0.6394 - val_loss: 0.8958 - val_accuracy: 0.7479 Epoch 19/25 187/187 - 31s - loss: 1.2791 - accuracy: 0.6420 - val_loss: 0.8875 - val_accuracy: 0.7509 Epoch 20/25 187/187 - 30s - loss: 1.2834 - accuracy: 0.6416 - val_loss: 0.8653 - val_accuracy: 0.7607 Epoch 21/25 
187/187 - 30s - loss: 1.2608 - accuracy: 0.6435 - val_loss: 0.8451 - val_accuracy: 0.7612 Epoch 22/25 187/187 - 30s - loss: 1.2780 - accuracy: 0.6390 - val_loss: 0.9035 - val_accuracy: 0.7486 Epoch 23/25 187/187 - 30s - loss: 1.2742 - accuracy: 0.6473 - val_loss: 0.8837 - val_accuracy: 0.7556 Epoch 24/25 187/187 - 30s - loss: 1.2609 - accuracy: 0.6434 - val_loss: 0.9233 - val_accuracy: 0.7524 Epoch 25/25 187/187 - 31s - loss: 1.2630 - accuracy: 0.6496 - val_loss: 0.9116 - val_accuracy: 0.7584 png The second step is to unfreeze a number of layers and fit the model using smaller learning rate. In this example we show unfreezing all layers, but depending on specific dataset it may be desireble to only unfreeze a fraction of all layers. When the feature extraction with pretrained model works good enough, this step would give a very limited gain on validation accuracy. In our case we only see a small improvement, as ImageNet pretraining already exposed the model to a good amount of dogs. On the other hand, when we use pretrained weights on a dataset that is more different from ImageNet, this fine-tuning step can be crucial as the feature extractor also needs to be adjusted by a considerable amount. Such a situation can be demonstrated if choosing CIFAR-100 dataset instead, where fine-tuning boosts validation accuracy by about 10% to pass 80% on EfficientNetB0. In such a case the convergence may take more than 50 epochs. A side note on freezing/unfreezing models: setting trainable of a Model will simultaneously set all layers belonging to the Model to the same trainable attribute. Each layer is trainable only if both the layer itself and the model containing it are trainable. Hence when we need to partially freeze/unfreeze a model, we need to make sure the trainable attribute of the model is set to True. def unfreeze_model(model): # We unfreeze the top 20 layers while leaving BatchNorm layers frozen for layer in model.layers[-20:]: if not isinstance(layer, layers.BatchNormalization): layer.trainable = True optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4) model.compile( optimizer=optimizer, loss=\"categorical_crossentropy\", metrics=[\"accuracy\"] ) unfreeze_model(model) epochs = 10 # @param {type: \"slider\", min:8, max:50} hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2) plot_hist(hist) Epoch 1/10 187/187 - 33s - loss: 0.9956 - accuracy: 0.7080 - val_loss: 0.7644 - val_accuracy: 0.7856 Epoch 2/10 187/187 - 31s - loss: 0.8885 - accuracy: 0.7352 - val_loss: 0.7696 - val_accuracy: 0.7866 Epoch 3/10 187/187 - 31s - loss: 0.8059 - accuracy: 0.7533 - val_loss: 0.7659 - val_accuracy: 0.7885 Epoch 4/10 187/187 - 32s - loss: 0.7648 - accuracy: 0.7675 - val_loss: 0.7730 - val_accuracy: 0.7866 Epoch 5/10 187/187 - 32s - loss: 0.6982 - accuracy: 0.7833 - val_loss: 0.7691 - val_accuracy: 0.7858 Epoch 6/10 187/187 - 31s - loss: 0.6823 - accuracy: 0.7880 - val_loss: 0.7814 - val_accuracy: 0.7872 Epoch 7/10 187/187 - 31s - loss: 0.6536 - accuracy: 0.7953 - val_loss: 0.7850 - val_accuracy: 0.7873 Epoch 8/10 187/187 - 31s - loss: 0.6104 - accuracy: 0.8111 - val_loss: 0.7774 - val_accuracy: 0.7879 Epoch 9/10 187/187 - 32s - loss: 0.5990 - accuracy: 0.8067 - val_loss: 0.7925 - val_accuracy: 0.7870 Epoch 10/10 187/187 - 31s - loss: 0.5531 - accuracy: 0.8239 - val_loss: 0.7870 - val_accuracy: 0.7836 png Tips for fine tuning EfficientNet On unfreezing layers: The BathcNormalization layers need to be kept frozen (more details). 
If they are also made trainable, the first epoch after unfreezing will significantly reduce accuracy. In some cases it may be beneficial to open up only a portion of layers instead of unfreezing all of them. This will make fine-tuning much faster when going to larger models like B7. Each block needs to be turned on or off as a whole. This is because the architecture includes a shortcut from the first layer to the last layer of each block. Not respecting blocks also significantly harms the final performance. Some other tips for utilizing EfficientNet: Larger variants of EfficientNet do not guarantee improved performance, especially for tasks with less data or fewer classes. In such a case, the larger the EfficientNet variant chosen, the harder it is to tune hyperparameters. EMA (Exponential Moving Average) is very helpful in training EfficientNet from scratch, but not so much for transfer learning. Do not use the RMSprop setup as in the original paper for transfer learning. The momentum and learning rate are too high for transfer learning; they will easily corrupt the pretrained weights and blow up the loss. A quick check is to see if the loss (as categorical cross entropy) is getting significantly larger than log(NUM_CLASSES) after the same epoch. If so, the initial learning rate/momentum is too high. A smaller batch size benefits validation accuracy, possibly by effectively providing regularization. Using the latest EfficientNet weights Since the initial paper, EfficientNet has been improved by various methods for data preprocessing and for using unlabelled data to enhance learning results. These improvements are relatively hard and computationally costly to reproduce, and require extra code; but the weights are readily available in the form of TF checkpoint files. The model architecture has not changed, so loading the improved checkpoints is possible. To use a checkpoint provided at the official model repository, first download the checkpoint. As an example, here we download the noisy-student version of B1: !wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet\ /noisystudent/noisy_student_efficientnet-b1.tar.gz !tar -xf noisy_student_efficientnet-b1.tar.gz Then use the script efficientnet_weight_update_util.py to convert the ckpt file to an h5 file. !python efficientnet_weight_update_util.py --model b1 --notop --ckpt \ efficientnet-b1/model.ckpt --o efficientnetb1_notop.h5 When creating the model, use the following to load the new weights: model = EfficientNetB1(weights=\"efficientnetb1_notop.h5\", include_top=False) An all-convolutional network applied to patches of images. Introduction Vision Transformers (ViT; Dosovitskiy et al.) extract small patches from the input images, linearly project them, and then apply the Transformer (Vaswani et al.) blocks. The application of ViTs to image recognition tasks is quickly becoming a promising area of research, because ViTs eliminate the need for strong inductive biases (such as convolutions) for modeling locality. This presents them as a general computation primitive capable of learning just from the training data with minimal inductive priors. ViTs yield great downstream performance when trained with proper regularization, data augmentation, and relatively large datasets. In the Patches Are All You Need paper (note: at the time of writing, it is a submission to the ICLR 2022 conference), the authors extend the idea of using patches to train an all-convolutional network and demonstrate competitive results.
Their architecture namely ConvMixer uses recipes from the recent isotrophic architectures like ViT, MLP-Mixer (Tolstikhin et al.), such as using the same depth and resolution across different layers in the network, residual connections, and so on. In this example, we will implement the ConvMixer model and demonstrate its performance on the CIFAR-10 dataset. To use the AdamW optimizer, we need to install TensorFlow Addons: pip install -U -q tensorflow-addons Imports from tensorflow.keras import layers from tensorflow import keras import matplotlib.pyplot as plt import tensorflow_addons as tfa import tensorflow as tf import numpy as np Hyperparameters To keep run time short, we will train the model for only 10 epochs. To focus on the core ideas of ConvMixer, we will not use other training-specific elements like RandAugment (Cubuk et al.). If you are interested in learning more about those details, please refer to the original paper. learning_rate = 0.001 weight_decay = 0.0001 batch_size = 128 num_epochs = 10 Load the CIFAR-10 dataset (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() val_split = 0.1 val_indices = int(len(x_train) * val_split) new_x_train, new_y_train = x_train[val_indices:], y_train[val_indices:] x_val, y_val = x_train[:val_indices], y_train[:val_indices] print(f\"Training data samples: {len(new_x_train)}\") print(f\"Validation data samples: {len(x_val)}\") print(f\"Test data samples: {len(x_test)}\") Training data samples: 45000 Validation data samples: 5000 Test data samples: 10000 Prepare tf.data.Dataset objects Our data augmentation pipeline is different from what the authors used for the CIFAR-10 dataset, which is fine for the purpose of the example. image_size = 32 auto = tf.data.AUTOTUNE data_augmentation = keras.Sequential( [layers.RandomCrop(image_size, image_size), layers.RandomFlip(\"horizontal\"),], name=\"data_augmentation\", ) def make_datasets(images, labels, is_train=False): dataset = tf.data.Dataset.from_tensor_slices((images, labels)) if is_train: dataset = dataset.shuffle(batch_size * 10) dataset = dataset.batch(batch_size) if is_train: dataset = dataset.map( lambda x, y: (data_augmentation(x), y), num_parallel_calls=auto ) return dataset.prefetch(auto) train_dataset = make_datasets(new_x_train, new_y_train, is_train=True) val_dataset = make_datasets(x_val, y_val) test_dataset = make_datasets(x_test, y_test) 2021-10-17 03:43:59.588315: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:43:59.596532: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:43:59.597211: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:43:59.622016: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 
2021-10-17 03:43:59.622853: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:43:59.623542: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:43:59.624174: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:44:00.067659: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:44:00.068334: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:44:00.068970: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-10-17 03:44:00.069615: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14684 MB memory: -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:04.0, compute capability: 7.0 ConvMixer utilities The following figure (taken from the original paper) depicts the ConvMixer model: ConvMixer is very similar to the MLP-Mixer, model with the following key differences: Instead of using fully-connected layers, it uses standard convolution layers. Instead of LayerNorm (which is typical for ViTs and MLP-Mixers), it uses BatchNorm. Two types of convolution layers are used in ConvMixer. (1): Depthwise convolutions, for mixing spatial locations of the images, (2): Pointwise convolutions (which follow the depthwise convolutions), for mixing channel-wise information across the patches. Another keypoint is the use of larger kernel sizes to allow a larger receptive field. def activation_block(x): x = layers.Activation(\"gelu\")(x) return layers.BatchNormalization()(x) def conv_stem(x, filters: int, patch_size: int): x = layers.Conv2D(filters, kernel_size=patch_size, strides=patch_size)(x) return activation_block(x) def conv_mixer_block(x, filters: int, kernel_size: int): # Depthwise convolution. x0 = x x = layers.DepthwiseConv2D(kernel_size=kernel_size, padding=\"same\")(x) x = layers.Add()([activation_block(x), x0]) # Residual. # Pointwise convolution. x = layers.Conv2D(filters, kernel_size=1)(x) x = activation_block(x) return x def get_conv_mixer_256_8( image_size=32, filters=256, depth=8, kernel_size=5, patch_size=2, num_classes=10 ): \"\"\"ConvMixer-256/8: https://openreview.net/pdf?id=TVHS5Y4dNvM. The hyperparameter values are taken from the paper. \"\"\" inputs = keras.Input((image_size, image_size, 3)) x = layers.Rescaling(scale=1.0 / 255)(inputs) # Extract patch embeddings. x = conv_stem(x, filters, patch_size) # ConvMixer blocks. for _ in range(depth): x = conv_mixer_block(x, filters, kernel_size) # Classification block. 
x = layers.GlobalAvgPool2D()(x) outputs = layers.Dense(num_classes, activation=\"softmax\")(x) return keras.Model(inputs, outputs) The model used in this experiment is termed as ConvMixer-256/8 where 256 denotes the number of channels and 8 denotes the depth. The resulting model only has 0.8 million parameters. Model training and evaluation utility # Code reference: # https://keras.io/examples/vision/image_classification_with_vision_transformer/. def run_experiment(model): optimizer = tfa.optimizers.AdamW( learning_rate=learning_rate, weight_decay=weight_decay ) model.compile( optimizer=optimizer, loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"], ) checkpoint_filepath = \"/tmp/checkpoint\" checkpoint_callback = keras.callbacks.ModelCheckpoint( checkpoint_filepath, monitor=\"val_accuracy\", save_best_only=True, save_weights_only=True, ) history = model.fit( train_dataset, validation_data=val_dataset, epochs=num_epochs, callbacks=[checkpoint_callback], ) model.load_weights(checkpoint_filepath) _, accuracy = model.evaluate(test_dataset) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") return history, model Train and evaluate model conv_mixer_model = get_conv_mixer_256_8() history, conv_mixer_model = run_experiment(conv_mixer_model) 2021-10-17 03:44:01.291445: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2) Epoch 1/10 2021-10-17 03:44:04.721186: I tensorflow/stream_executor/cuda/cuda_dnn.cc:369] Loaded cuDNN version 8005 352/352 [==============================] - 29s 70ms/step - loss: 1.2272 - accuracy: 0.5592 - val_loss: 3.9422 - val_accuracy: 0.1196 Epoch 2/10 352/352 [==============================] - 24s 69ms/step - loss: 0.7813 - accuracy: 0.7278 - val_loss: 0.8860 - val_accuracy: 0.6898 Epoch 3/10 352/352 [==============================] - 24s 68ms/step - loss: 0.5947 - accuracy: 0.7943 - val_loss: 0.6175 - val_accuracy: 0.7856 Epoch 4/10 352/352 [==============================] - 24s 69ms/step - loss: 0.4801 - accuracy: 0.8330 - val_loss: 0.5634 - val_accuracy: 0.8064 Epoch 5/10 352/352 [==============================] - 24s 68ms/step - loss: 0.4065 - accuracy: 0.8599 - val_loss: 0.5359 - val_accuracy: 0.8166 Epoch 6/10 352/352 [==============================] - 24s 68ms/step - loss: 0.3473 - accuracy: 0.8804 - val_loss: 0.5257 - val_accuracy: 0.8228 Epoch 7/10 352/352 [==============================] - 24s 68ms/step - loss: 0.3071 - accuracy: 0.8944 - val_loss: 0.4982 - val_accuracy: 0.8264 Epoch 8/10 352/352 [==============================] - 24s 68ms/step - loss: 0.2655 - accuracy: 0.9083 - val_loss: 0.5032 - val_accuracy: 0.8346 Epoch 9/10 352/352 [==============================] - 24s 68ms/step - loss: 0.2328 - accuracy: 0.9194 - val_loss: 0.5225 - val_accuracy: 0.8326 Epoch 10/10 352/352 [==============================] - 24s 68ms/step - loss: 0.2115 - accuracy: 0.9278 - val_loss: 0.5063 - val_accuracy: 0.8372 79/79 [==============================] - 2s 19ms/step - loss: 0.5412 - accuracy: 0.8325 Test accuracy: 83.25% The gap in training and validation performance can be mitigated by using additional regularization techniques. Nevertheless, being able to get to ~83% accuracy within 10 epochs with 0.8 million parameters is a strong result. Visualizing the internals of ConvMixer We can visualize the patch embeddings and the learned convolution filters. Recall that each patch embedding and intermediate feature map have the same number of channels (256 in this case). 
This will make our visualization utility easier to implement. # Code reference: https://bit.ly/3awIRbP. def visualization_plot(weights, idx=1): # First, apply min-max normalization to the # given weights to avoid isotropic scaling. p_min, p_max = weights.min(), weights.max() weights = (weights - p_min) / (p_max - p_min) # Visualize all the filters. num_filters = 256 plt.figure(figsize=(8, 8)) for i in range(num_filters): current_weight = weights[:, :, :, i] if current_weight.shape[-1] == 1: current_weight = current_weight.squeeze() ax = plt.subplot(16, 16, idx) ax.set_xticks([]) ax.set_yticks([]) plt.imshow(current_weight) idx += 1 # We first visualize the learned patch embeddings. patch_embeddings = conv_mixer_model.layers[2].get_weights()[0] visualization_plot(patch_embeddings) png Even though we did not train the network to convergence, we notice that different patches show different patterns. Some look similar to others, while some are very different. These visualizations are more salient with larger image sizes. Similarly, we can visualize the raw convolution kernels. This can help us understand the patterns to which a given kernel is receptive. # First, print the indices of the convolution layers that are not # pointwise convolutions. for i, layer in enumerate(conv_mixer_model.layers): if isinstance(layer, layers.DepthwiseConv2D): if layer.get_config()[\"kernel_size\"] == (5, 5): print(i, layer) idx = 26 # Taking a kernel from the middle of the network. kernel = conv_mixer_model.layers[idx].get_weights()[0] kernel = np.expand_dims(kernel.squeeze(), axis=2) visualization_plot(kernel) 5 12 19 26 33 40 47 54 png We see that different filters in the kernel have different locality spans, and this pattern is likely to evolve with more training. Final notes There has been a recent trend of fusing convolutions with other data-agnostic operations like self-attention. The following works are along this line of research: ConViT (d'Ascoli et al.) CCT (Hassani et al.) CoAtNet (Dai et al.) Image classification with a Transformer that leverages external attention. Introduction This example implements the EANet model for image classification, and demonstrates it on the CIFAR-100 dataset. EANet introduces a novel attention mechanism named external attention, based on two external, small, learnable, and shared memories, which can be implemented easily by simply using two cascaded linear layers and two normalization layers. It conveniently replaces self-attention as used in existing architectures. External attention has linear complexity, as it only implicitly considers the correlations between all samples.
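To make that description concrete before the full implementation below, here is a minimal, single-head sketch of the external-attention idea, assuming the imports used in this example (tf and layers) and an input of shape [batch_size, num_patches, dim]; the memory size S and the function name are illustrative, and the multi-head version actually used in this example appears in the implementation section.

def external_attention_sketch(x, dim, S=64):
    # Query the external key memory M_k (a small Dense layer): [batch, num_patches, S].
    attn = layers.Dense(S)(x)
    # Normalize over the patch axis, then apply a second (sum) normalization.
    attn = layers.Softmax(axis=1)(attn)
    attn = attn / (1e-9 + tf.reduce_sum(attn, axis=-1, keepdims=True))
    # Read from the external value memory M_v: back to [batch, num_patches, dim].
    return layers.Dense(dim)(attn)

Because S is a fixed hyperparameter, the cost grows linearly with the number of patches, which is the source of the linear-complexity claim above.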
This example requires TensorFlow 2.5 or higher, as well as the TensorFlow Addons package, which can be installed using the following command: pip install -U tensorflow-addons Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_addons as tfa import matplotlib.pyplot as plt Prepare the data num_classes = 100 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) print(f\"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}\") print(f\"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}\") x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 100) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 100) Configure the hyperparameters weight_decay = 0.0001 learning_rate = 0.001 label_smoothing = 0.1 validation_split = 0.2 batch_size = 128 num_epochs = 50 patch_size = 2 # Size of the patches to be extracted from the input images. num_patches = (input_shape[0] // patch_size) ** 2 # Number of patches embedding_dim = 64 # Number of hidden units. mlp_dim = 64 dim_coefficient = 4 num_heads = 4 attention_dropout = 0.2 projection_dropout = 0.2 num_transformer_blocks = 8 # Number of repetitions of the transformer layer print(f\"Patch size: {patch_size} X {patch_size} = {patch_size ** 2} \") print(f\"Patches per image: {num_patches}\") Patch size: 2 X 2 = 4 Patches per image: 256 Use data augmentation data_augmentation = keras.Sequential( [ layers.Normalization(), layers.RandomFlip(\"horizontal\"), layers.RandomRotation(factor=0.1), layers.RandomContrast(factor=0.1), layers.RandomZoom(height_factor=0.2, width_factor=0.2), ], name=\"data_augmentation\", ) # Compute the mean and the variance of the training data for normalization.
data_augmentation.layers[0].adapt(x_train) Implement the patch extraction and encoding layer class PatchExtract(layers.Layer): def __init__(self, patch_size, **kwargs): super(PatchExtract, self).__init__(**kwargs) self.patch_size = patch_size def call(self, images): batch_size = tf.shape(images)[0] patches = tf.image.extract_patches( images=images, sizes=(1, self.patch_size, self.patch_size, 1), strides=(1, self.patch_size, self.patch_size, 1), rates=(1, 1, 1, 1), padding=\"VALID\", ) patch_dim = patches.shape[-1] patch_num = patches.shape[1] return tf.reshape(patches, (batch_size, patch_num * patch_num, patch_dim)) class PatchEmbedding(layers.Layer): def __init__(self, num_patch, embed_dim, **kwargs): super(PatchEmbedding, self).__init__(**kwargs) self.num_patch = num_patch self.proj = layers.Dense(embed_dim) self.pos_embed = layers.Embedding(input_dim=num_patch, output_dim=embed_dim) def call(self, patch): pos = tf.range(start=0, limit=self.num_patch, delta=1) return self.proj(patch) + self.pos_embed(pos) Implement the external attention block def external_attention( x, dim, num_heads, dim_coefficient=4, attention_dropout=0, projection_dropout=0 ): _, num_patch, channel = x.shape assert dim % num_heads == 0 num_heads = num_heads * dim_coefficient x = layers.Dense(dim * dim_coefficient)(x) # create tensor [batch_size, num_patches, num_heads, dim*dim_coefficient//num_heads] x = tf.reshape( x, shape=(-1, num_patch, num_heads, dim * dim_coefficient // num_heads) ) x = tf.transpose(x, perm=[0, 2, 1, 3]) # a linear layer M_k attn = layers.Dense(dim // dim_coefficient)(x) # normalize attention map attn = layers.Softmax(axis=2)(attn) # double normalization attn = attn / (1e-9 + tf.reduce_sum(attn, axis=-1, keepdims=True)) attn = layers.Dropout(attention_dropout)(attn) # a linear layer M_v x = layers.Dense(dim * dim_coefficient // num_heads)(attn) x = tf.transpose(x, perm=[0, 2, 1, 3]) x = tf.reshape(x, [-1, num_patch, dim * dim_coefficient]) # a linear layer to project back to the original dim x = layers.Dense(dim)(x) x = layers.Dropout(projection_dropout)(x) return x Implement the MLP block def mlp(x, embedding_dim, mlp_dim, drop_rate=0.2): x = layers.Dense(mlp_dim, activation=tf.nn.gelu)(x) x = layers.Dropout(drop_rate)(x) x = layers.Dense(embedding_dim)(x) x = layers.Dropout(drop_rate)(x) return x Implement the Transformer block def transformer_encoder( x, embedding_dim, mlp_dim, num_heads, dim_coefficient, attention_dropout, projection_dropout, attention_type=\"external_attention\", ): residual_1 = x x = layers.LayerNormalization(epsilon=1e-5)(x) if attention_type == \"external_attention\": x = external_attention( x, embedding_dim, num_heads, dim_coefficient, attention_dropout, projection_dropout, ) elif attention_type == \"self_attention\": x = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embedding_dim, dropout=attention_dropout )(x, x) x = layers.add([x, residual_1]) residual_2 = x x = layers.LayerNormalization(epsilon=1e-5)(x) x = mlp(x, embedding_dim, mlp_dim) x = layers.add([x, residual_2]) return x Implement the EANet model The EANet model leverages external attention. The computational complexity of traditional self-attention is O(d * N ** 2), where d is the embedding size, and N is the number of patches. The authors find that most pixels are closely related to just a few other pixels, and an N-to-N attention matrix may be redundant. So, they propose an external attention module as an alternative, whose computational complexity is O(d * S * N).
As d and S are hyper-parameters, the proposed algorithm is linear in the number of pixels. In fact, this is equivalent to a drop patch operation, because a lot of information contained in a patch in an image is redundant and unimportant. def get_model(attention_type=\"external_attention\"): inputs = layers.Input(shape=input_shape) # Image augment x = data_augmentation(inputs) # Extract patches. x = PatchExtract(patch_size)(x) # Create patch embedding. x = PatchEmbedding(num_patches, embedding_dim)(x) # Create Transformer block. for _ in range(num_transformer_blocks): x = transformer_encoder( x, embedding_dim, mlp_dim, num_heads, dim_coefficient, attention_dropout, projection_dropout, attention_type, ) x = layers.GlobalAvgPool1D()(x) outputs = layers.Dense(num_classes, activation=\"softmax\")(x) model = keras.Model(inputs=inputs, outputs=outputs) return model Train on CIFAR-100 model = get_model(attention_type=\"external_attention\") model.compile( loss=keras.losses.CategoricalCrossentropy(label_smoothing=label_smoothing), optimizer=tfa.optimizers.AdamW( learning_rate=learning_rate, weight_decay=weight_decay ), metrics=[ keras.metrics.CategoricalAccuracy(name=\"accuracy\"), keras.metrics.TopKCategoricalAccuracy(5, name=\"top-5-accuracy\"), ], ) history = model.fit( x_train, y_train, batch_size=batch_size, epochs=num_epochs, validation_split=validation_split, ) Epoch 1/50 313/313 [==============================] - 40s 95ms/step - loss: 4.2091 - accuracy: 0.0723 - top-5-accuracy: 0.2384 - val_loss: 3.9706 - val_accuracy: 0.1153 - val_top-5-accuracy: 0.3336 Epoch 2/50 313/313 [==============================] - 29s 91ms/step - loss: 3.8028 - accuracy: 0.1427 - top-5-accuracy: 0.3871 - val_loss: 3.6672 - val_accuracy: 0.1829 - val_top-5-accuracy: 0.4513 Epoch 3/50 313/313 [==============================] - 29s 93ms/step - loss: 3.5493 - accuracy: 0.1978 - top-5-accuracy: 0.4805 - val_loss: 3.5402 - val_accuracy: 0.2141 - val_top-5-accuracy: 0.5038 Epoch 4/50 313/313 [==============================] - 29s 93ms/step - loss: 3.4029 - accuracy: 0.2355 - top-5-accuracy: 0.5328 - val_loss: 3.4496 - val_accuracy: 0.2354 - val_top-5-accuracy: 0.5316 Epoch 5/50 313/313 [==============================] - 29s 92ms/step - loss: 3.2917 - accuracy: 0.2636 - top-5-accuracy: 0.5678 - val_loss: 3.3342 - val_accuracy: 0.2699 - val_top-5-accuracy: 0.5679 Epoch 6/50 313/313 [==============================] - 29s 92ms/step - loss: 3.2116 - accuracy: 0.2830 - top-5-accuracy: 0.5921 - val_loss: 3.2896 - val_accuracy: 0.2749 - val_top-5-accuracy: 0.5874 Epoch 7/50 313/313 [==============================] - 28s 90ms/step - loss: 3.1453 - accuracy: 0.2980 - top-5-accuracy: 0.6100 - val_loss: 3.3090 - val_accuracy: 0.2857 - val_top-5-accuracy: 0.5831 Epoch 8/50 313/313 [==============================] - 29s 94ms/step - loss: 3.0889 - accuracy: 0.3121 - top-5-accuracy: 0.6266 - val_loss: 3.1969 - val_accuracy: 0.2975 - val_top-5-accuracy: 0.6082 Epoch 9/50 313/313 [==============================] - 29s 92ms/step - loss: 3.0390 - accuracy: 0.3252 - top-5-accuracy: 0.6441 - val_loss: 3.1249 - val_accuracy: 0.3175 - val_top-5-accuracy: 0.6330 Epoch 10/50 313/313 [==============================] - 29s 92ms/step - loss: 2.9871 - accuracy: 0.3365 - top-5-accuracy: 0.6615 - val_loss: 3.1121 - val_accuracy: 0.3200 - val_top-5-accuracy: 0.6374 Epoch 11/50 313/313 [==============================] - 29s 92ms/step - loss: 2.9476 - accuracy: 0.3489 - top-5-accuracy: 0.6697 - val_loss: 3.1156 - val_accuracy: 0.3268 - val_top-5-accuracy: 
0.6421 Epoch 12/50 313/313 [==============================] - 29s 91ms/step - loss: 2.9106 - accuracy: 0.3576 - top-5-accuracy: 0.6783 - val_loss: 3.1337 - val_accuracy: 0.3226 - val_top-5-accuracy: 0.6389 Epoch 13/50 313/313 [==============================] - 29s 92ms/step - loss: 2.8772 - accuracy: 0.3662 - top-5-accuracy: 0.6871 - val_loss: 3.0373 - val_accuracy: 0.3348 - val_top-5-accuracy: 0.6624 Epoch 14/50 313/313 [==============================] - 29s 92ms/step - loss: 2.8508 - accuracy: 0.3756 - top-5-accuracy: 0.6944 - val_loss: 3.0297 - val_accuracy: 0.3441 - val_top-5-accuracy: 0.6643 Epoch 15/50 313/313 [==============================] - 28s 90ms/step - loss: 2.8211 - accuracy: 0.3821 - top-5-accuracy: 0.7034 - val_loss: 2.9680 - val_accuracy: 0.3604 - val_top-5-accuracy: 0.6847 Epoch 16/50 313/313 [==============================] - 28s 90ms/step - loss: 2.8017 - accuracy: 0.3864 - top-5-accuracy: 0.7090 - val_loss: 2.9746 - val_accuracy: 0.3584 - val_top-5-accuracy: 0.6855 Epoch 17/50 313/313 [==============================] - 29s 91ms/step - loss: 2.7714 - accuracy: 0.3962 - top-5-accuracy: 0.7169 - val_loss: 2.9104 - val_accuracy: 0.3738 - val_top-5-accuracy: 0.6940 Epoch 18/50 313/313 [==============================] - 29s 92ms/step - loss: 2.7523 - accuracy: 0.4008 - top-5-accuracy: 0.7204 - val_loss: 2.8560 - val_accuracy: 0.3861 - val_top-5-accuracy: 0.7115 Epoch 19/50 313/313 [==============================] - 28s 91ms/step - loss: 2.7320 - accuracy: 0.4051 - top-5-accuracy: 0.7263 - val_loss: 2.8780 - val_accuracy: 0.3820 - val_top-5-accuracy: 0.7101 Epoch 20/50 313/313 [==============================] - 28s 90ms/step - loss: 2.7139 - accuracy: 0.4114 - top-5-accuracy: 0.7290 - val_loss: 2.9831 - val_accuracy: 0.3694 - val_top-5-accuracy: 0.6922 Epoch 21/50 313/313 [==============================] - 28s 91ms/step - loss: 2.6991 - accuracy: 0.4142 - top-5-accuracy: 0.7335 - val_loss: 2.8420 - val_accuracy: 0.3968 - val_top-5-accuracy: 0.7138 Epoch 22/50 313/313 [==============================] - 29s 91ms/step - loss: 2.6842 - accuracy: 0.4195 - top-5-accuracy: 0.7377 - val_loss: 2.7965 - val_accuracy: 0.4088 - val_top-5-accuracy: 0.7266 Epoch 23/50 313/313 [==============================] - 28s 91ms/step - loss: 2.6571 - accuracy: 0.4273 - top-5-accuracy: 0.7436 - val_loss: 2.8620 - val_accuracy: 0.3947 - val_top-5-accuracy: 0.7155 Epoch 24/50 313/313 [==============================] - 29s 91ms/step - loss: 2.6508 - accuracy: 0.4277 - top-5-accuracy: 0.7469 - val_loss: 2.8459 - val_accuracy: 0.3963 - val_top-5-accuracy: 0.7150 Epoch 25/50 313/313 [==============================] - 28s 90ms/step - loss: 2.6403 - accuracy: 0.4283 - top-5-accuracy: 0.7520 - val_loss: 2.7886 - val_accuracy: 0.4128 - val_top-5-accuracy: 0.7283 Epoch 26/50 313/313 [==============================] - 29s 92ms/step - loss: 2.6281 - accuracy: 0.4353 - top-5-accuracy: 0.7523 - val_loss: 2.8493 - val_accuracy: 0.4026 - val_top-5-accuracy: 0.7153 Epoch 27/50 313/313 [==============================] - 29s 92ms/step - loss: 2.6092 - accuracy: 0.4403 - top-5-accuracy: 0.7580 - val_loss: 2.7539 - val_accuracy: 0.4186 - val_top-5-accuracy: 0.7392 Epoch 28/50 313/313 [==============================] - 29s 91ms/step - loss: 2.5992 - accuracy: 0.4423 - top-5-accuracy: 0.7600 - val_loss: 2.8625 - val_accuracy: 0.3964 - val_top-5-accuracy: 0.7174 Epoch 29/50 313/313 [==============================] - 28s 90ms/step - loss: 2.5913 - accuracy: 0.4456 - top-5-accuracy: 0.7598 - val_loss: 2.7911 - val_accuracy: 
0.4162 - val_top-5-accuracy: 0.7329 Epoch 30/50 313/313 [==============================] - 29s 92ms/step - loss: 2.5780 - accuracy: 0.4480 - top-5-accuracy: 0.7649 - val_loss: 2.8158 - val_accuracy: 0.4118 - val_top-5-accuracy: 0.7288 Epoch 31/50 313/313 [==============================] - 28s 91ms/step - loss: 2.5657 - accuracy: 0.4547 - top-5-accuracy: 0.7661 - val_loss: 2.8651 - val_accuracy: 0.4056 - val_top-5-accuracy: 0.7217 Epoch 32/50 313/313 [==============================] - 29s 91ms/step - loss: 2.5637 - accuracy: 0.4480 - top-5-accuracy: 0.7681 - val_loss: 2.8190 - val_accuracy: 0.4094 - val_top-5-accuracy: 0.7267 Epoch 33/50 313/313 [==============================] - 29s 92ms/step - loss: 2.5525 - accuracy: 0.4545 - top-5-accuracy: 0.7693 - val_loss: 2.7985 - val_accuracy: 0.4216 - val_top-5-accuracy: 0.7303 Epoch 34/50 313/313 [==============================] - 28s 91ms/step - loss: 2.5462 - accuracy: 0.4579 - top-5-accuracy: 0.7721 - val_loss: 2.8865 - val_accuracy: 0.4016 - val_top-5-accuracy: 0.7204 Epoch 35/50 313/313 [==============================] - 29s 92ms/step - loss: 2.5329 - accuracy: 0.4616 - top-5-accuracy: 0.7740 - val_loss: 2.7862 - val_accuracy: 0.4232 - val_top-5-accuracy: 0.7389 Epoch 36/50 313/313 [==============================] - 28s 90ms/step - loss: 2.5234 - accuracy: 0.4610 - top-5-accuracy: 0.7765 - val_loss: 2.8234 - val_accuracy: 0.4134 - val_top-5-accuracy: 0.7312 Epoch 37/50 313/313 [==============================] - 29s 91ms/step - loss: 2.5152 - accuracy: 0.4663 - top-5-accuracy: 0.7774 - val_loss: 2.7894 - val_accuracy: 0.4161 - val_top-5-accuracy: 0.7376 Epoch 38/50 313/313 [==============================] - 29s 92ms/step - loss: 2.5117 - accuracy: 0.4674 - top-5-accuracy: 0.7790 - val_loss: 2.8091 - val_accuracy: 0.4142 - val_top-5-accuracy: 0.7360 Epoch 39/50 313/313 [==============================] - 28s 90ms/step - loss: 2.5047 - accuracy: 0.4681 - top-5-accuracy: 0.7805 - val_loss: 2.8199 - val_accuracy: 0.4167 - val_top-5-accuracy: 0.7299 Epoch 40/50 313/313 [==============================] - 28s 90ms/step - loss: 2.4974 - accuracy: 0.4697 - top-5-accuracy: 0.7819 - val_loss: 2.7864 - val_accuracy: 0.4247 - val_top-5-accuracy: 0.7402 Epoch 41/50 313/313 [==============================] - 28s 90ms/step - loss: 2.4889 - accuracy: 0.4749 - top-5-accuracy: 0.7854 - val_loss: 2.8120 - val_accuracy: 0.4217 - val_top-5-accuracy: 0.7358 Epoch 42/50 313/313 [==============================] - 28s 90ms/step - loss: 2.4799 - accuracy: 0.4771 - top-5-accuracy: 0.7866 - val_loss: 2.9003 - val_accuracy: 0.4038 - val_top-5-accuracy: 0.7170 Epoch 43/50 313/313 [==============================] - 28s 90ms/step - loss: 2.4814 - accuracy: 0.4770 - top-5-accuracy: 0.7868 - val_loss: 2.7504 - val_accuracy: 0.4260 - val_top-5-accuracy: 0.7457 Epoch 44/50 313/313 [==============================] - 28s 91ms/step - loss: 2.4747 - accuracy: 0.4757 - top-5-accuracy: 0.7870 - val_loss: 2.8207 - val_accuracy: 0.4166 - val_top-5-accuracy: 0.7363 Epoch 45/50 313/313 [==============================] - 28s 90ms/step - loss: 2.4653 - accuracy: 0.4809 - top-5-accuracy: 0.7924 - val_loss: 2.8663 - val_accuracy: 0.4130 - val_top-5-accuracy: 0.7209 Epoch 46/50 313/313 [==============================] - 28s 90ms/step - loss: 2.4554 - accuracy: 0.4825 - top-5-accuracy: 0.7929 - val_loss: 2.8145 - val_accuracy: 0.4250 - val_top-5-accuracy: 0.7357 Epoch 47/50 313/313 [==============================] - 29s 91ms/step - loss: 2.4602 - accuracy: 0.4823 - top-5-accuracy: 0.7919 - 
val_loss: 2.8352 - val_accuracy: 0.4189 - val_top-5-accuracy: 0.7365 Epoch 48/50 313/313 [==============================] - 28s 91ms/step - loss: 2.4493 - accuracy: 0.4848 - top-5-accuracy: 0.7933 - val_loss: 2.8246 - val_accuracy: 0.4160 - val_top-5-accuracy: 0.7362 Epoch 49/50 313/313 [==============================] - 28s 91ms/step - loss: 2.4454 - accuracy: 0.4846 - top-5-accuracy: 0.7958 - val_loss: 2.7731 - val_accuracy: 0.4320 - val_top-5-accuracy: 0.7436 Epoch 50/50 313/313 [==============================] - 29s 92ms/step - loss: 2.4418 - accuracy: 0.4848 - top-5-accuracy: 0.7951 - val_loss: 2.7926 - val_accuracy: 0.4317 - val_top-5-accuracy: 0.7410 Let's visualize the training progress of the model. plt.plot(history.history[\"loss\"], label=\"train_loss\") plt.plot(history.history[\"val_loss\"], label=\"val_loss\") plt.xlabel(\"Epochs\") plt.ylabel(\"Loss\") plt.title(\"Train and Validation Losses Over Epochs\", fontsize=14) plt.legend() plt.grid() plt.show() png Let's display the final results of the test on CIFAR-100. loss, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f\"Test loss: {round(loss, 2)}\") print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") print(f\"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%\") 313/313 [==============================] - 6s 21ms/step - loss: 2.7574 - accuracy: 0.4391 - top-5-accuracy: 0.7471 Test loss: 2.76 Test accuracy: 43.91% Test top 5 accuracy: 74.71% EANet simply replaces self-attention in ViT with external attention. The traditional ViT achieved ~73% test top-5 accuracy and ~41% top-1 accuracy after training for 50 epochs, using 0.6M parameters. Under the same experimental environment and the same hyperparameters, the EANet model we just trained has just 0.3M parameters, and it gets us to ~73% test top-5 accuracy and ~43% top-1 accuracy. This demonstrates the effectiveness of external attention. We only show the training process of EANet; you can train ViT under the same experimental conditions and observe the test results. Implementing the MLP-Mixer, FNet, and gMLP models for CIFAR-100 image classification. Introduction This example implements three modern attention-free, multi-layer perceptron (MLP) based models for image classification, demonstrated on the CIFAR-100 dataset: The MLP-Mixer model, by Ilya Tolstikhin et al., based on two types of MLPs. The FNet model, by James Lee-Thorp et al., based on unparameterized Fourier Transform. The gMLP model, by Hanxiao Liu et al., based on MLP with gating. The purpose of the example is not to compare these models, as they might perform differently on different datasets with well-tuned hyperparameters. Rather, it is to show simple implementations of their main building blocks.
This example requires TensorFlow 2.4 or higher, as well as TensorFlow Addons, which can be installed using the following command: pip install -U tensorflow-addons Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_addons as tfa Prepare the data num_classes = 100 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() print(f\"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}\") print(f\"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}\") x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 1) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 1) Configure the hyperparameters weight_decay = 0.0001 batch_size = 128 num_epochs = 50 dropout_rate = 0.2 image_size = 64 # We'll resize input images to this size. patch_size = 8 # Size of the patches to be extracted from the input images. num_patches = (image_size // patch_size) ** 2 # Size of the data array. embedding_dim = 256 # Number of hidden units. num_blocks = 4 # Number of blocks. print(f\"Image size: {image_size} X {image_size} = {image_size ** 2}\") print(f\"Patch size: {patch_size} X {patch_size} = {patch_size ** 2} \") print(f\"Patches per image: {num_patches}\") print(f\"Elements per patch (3 channels): {(patch_size ** 2) * 3}\") Image size: 64 X 64 = 4096 Patch size: 8 X 8 = 64 Patches per image: 64 Elements per patch (3 channels): 192 Build a classification model We implement a method that builds a classifier given the processing blocks. def build_classifier(blocks, positional_encoding=False): inputs = layers.Input(shape=input_shape) # Augment data. augmented = data_augmentation(inputs) # Create patches. patches = Patches(patch_size, num_patches)(augmented) # Encode patches to generate a [batch_size, num_patches, embedding_dim] tensor. x = layers.Dense(units=embedding_dim)(patches) if positional_encoding: positions = tf.range(start=0, limit=num_patches, delta=1) position_embedding = layers.Embedding( input_dim=num_patches, output_dim=embedding_dim )(positions) x = x + position_embedding # Process x using the module blocks. x = blocks(x) # Apply global average pooling to generate a [batch_size, embedding_dim] representation tensor. representation = layers.GlobalAveragePooling1D()(x) # Apply dropout. representation = layers.Dropout(rate=dropout_rate)(representation) # Compute logits outputs. logits = layers.Dense(num_classes)(representation) # Create the Keras model. return keras.Model(inputs=inputs, outputs=logits) Define an experiment We implement a utility function to compile, train, and evaluate a given model. def run_experiment(model): # Create Adam optimizer with weight decay. optimizer = tfa.optimizers.AdamW( learning_rate=learning_rate, weight_decay=weight_decay, ) # Compile the model. model.compile( optimizer=optimizer, loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[ keras.metrics.SparseCategoricalAccuracy(name=\"acc\"), keras.metrics.SparseTopKCategoricalAccuracy(5, name=\"top5-acc\"), ], ) # Create a learning rate scheduler callback. reduce_lr = keras.callbacks.ReduceLROnPlateau( monitor=\"val_loss\", factor=0.5, patience=5 ) # Create an early stopping callback. early_stopping = tf.keras.callbacks.EarlyStopping( monitor=\"val_loss\", patience=10, restore_best_weights=True ) # Fit the model. 
history = model.fit( x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs, validation_split=0.1, callbacks=[early_stopping, reduce_lr], ) _, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") print(f\"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%\") # Return history to plot learning curves. return history Use data augmentation data_augmentation = keras.Sequential( [ layers.Normalization(), layers.Resizing(image_size, image_size), layers.RandomFlip(\"horizontal\"), layers.RandomZoom( height_factor=0.2, width_factor=0.2 ), ], name=\"data_augmentation\", ) # Compute the mean and the variance of the training data for normalization. data_augmentation.layers[0].adapt(x_train) Implement patch extraction as a layer class Patches(layers.Layer): def __init__(self, patch_size, num_patches): super(Patches, self).__init__() self.patch_size = patch_size self.num_patches = num_patches def call(self, images): batch_size = tf.shape(images)[0] patches = tf.image.extract_patches( images=images, sizes=[1, self.patch_size, self.patch_size, 1], strides=[1, self.patch_size, self.patch_size, 1], rates=[1, 1, 1, 1], padding=\"VALID\", ) patch_dims = patches.shape[-1] patches = tf.reshape(patches, [batch_size, self.num_patches, patch_dims]) return patches The MLP-Mixer model The MLP-Mixer is an architecture based exclusively on multi-layer perceptrons (MLPs). It contains two types of MLP layers: One applied independently to image patches, which mixes the per-location features. The other applied across patches (along channels), which mixes spatial information. This is similar to a depthwise separable convolution based model such as the Xception model, but with two chained dense transforms, no max pooling, and layer normalization instead of batch normalization. Implement the MLP-Mixer module class MLPMixerLayer(layers.Layer): def __init__(self, num_patches, hidden_units, dropout_rate, *args, **kwargs): super(MLPMixerLayer, self).__init__(*args, **kwargs) self.mlp1 = keras.Sequential( [ layers.Dense(units=num_patches), tfa.layers.GELU(), layers.Dense(units=num_patches), layers.Dropout(rate=dropout_rate), ] ) self.mlp2 = keras.Sequential( [ layers.Dense(units=num_patches), tfa.layers.GELU(), layers.Dense(units=embedding_dim), layers.Dropout(rate=dropout_rate), ] ) self.normalize = layers.LayerNormalization(epsilon=1e-6) def call(self, inputs): # Apply layer normalization. x = self.normalize(inputs) # Transpose inputs from [num_batches, num_patches, hidden_units] to [num_batches, hidden_units, num_patches]. x_channels = tf.linalg.matrix_transpose(x) # Apply mlp1 on each channel independently. mlp1_outputs = self.mlp1(x_channels) # Transpose mlp1_outputs from [num_batches, hidden_dim, num_patches] to [num_batches, num_patches, hidden_units]. mlp1_outputs = tf.linalg.matrix_transpose(mlp1_outputs) # Add skip connection. x = mlp1_outputs + inputs # Apply layer normalization. x_patches = self.normalize(x) # Apply mlp2 on each patch independently. mlp2_outputs = self.mlp2(x_patches) # Add skip connection. x = x + mlp2_outputs return x Build, train, and evaluate the MLP-Mixer model Note that training the model with the current settings on a V100 GPU takes around 8 seconds per epoch.
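As an optional sanity check (not part of the original example), we can pass a dummy batch through a single MLPMixerLayer to confirm that the token-mixing and channel-mixing steps preserve the [batch_size, num_patches, embedding_dim] shape; this assumes the MLPMixerLayer class, imports, and hyperparameters defined above.

# Optional shape check; assumes MLPMixerLayer, num_patches, embedding_dim and dropout_rate from above.
dummy_inputs = tf.random.normal((4, num_patches, embedding_dim))
mixer_block = MLPMixerLayer(num_patches, embedding_dim, dropout_rate)
print(mixer_block(dummy_inputs).shape)  # Expected: (4, 64, 256) with the settings above.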
mlpmixer_blocks = keras.Sequential( [MLPMixerLayer(num_patches, embedding_dim, dropout_rate) for _ in range(num_blocks)] ) learning_rate = 0.005 mlpmixer_classifier = build_classifier(mlpmixer_blocks) history = run_experiment(mlpmixer_classifier) /opt/conda/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py:390: UserWarning: Default value of `approximate` is changed from `True` to `False` return py_builtins.overload_of(f)(*args) Epoch 1/50 352/352 [==============================] - 13s 25ms/step - loss: 4.1703 - acc: 0.0756 - top5-acc: 0.2322 - val_loss: 3.6202 - val_acc: 0.1532 - val_top5-acc: 0.4140 Epoch 2/50 352/352 [==============================] - 8s 23ms/step - loss: 3.4165 - acc: 0.1789 - top5-acc: 0.4459 - val_loss: 3.1599 - val_acc: 0.2334 - val_top5-acc: 0.5160 Epoch 3/50 352/352 [==============================] - 8s 23ms/step - loss: 3.1367 - acc: 0.2328 - top5-acc: 0.5230 - val_loss: 3.0539 - val_acc: 0.2560 - val_top5-acc: 0.5664 Epoch 4/50 352/352 [==============================] - 8s 23ms/step - loss: 2.9985 - acc: 0.2624 - top5-acc: 0.5600 - val_loss: 2.9498 - val_acc: 0.2798 - val_top5-acc: 0.5856 Epoch 5/50 352/352 [==============================] - 8s 23ms/step - loss: 2.8806 - acc: 0.2809 - top5-acc: 0.5879 - val_loss: 2.8593 - val_acc: 0.2904 - val_top5-acc: 0.6050 Epoch 6/50 352/352 [==============================] - 8s 23ms/step - loss: 2.7860 - acc: 0.3024 - top5-acc: 0.6124 - val_loss: 2.7405 - val_acc: 0.3256 - val_top5-acc: 0.6364 Epoch 7/50 352/352 [==============================] - 8s 23ms/step - loss: 2.7065 - acc: 0.3152 - top5-acc: 0.6280 - val_loss: 2.7548 - val_acc: 0.3328 - val_top5-acc: 0.6450 Epoch 8/50 352/352 [==============================] - 8s 22ms/step - loss: 2.6443 - acc: 0.3263 - top5-acc: 0.6446 - val_loss: 2.6618 - val_acc: 0.3460 - val_top5-acc: 0.6578 Epoch 9/50 352/352 [==============================] - 8s 23ms/step - loss: 2.5886 - acc: 0.3406 - top5-acc: 0.6573 - val_loss: 2.6065 - val_acc: 0.3492 - val_top5-acc: 0.6650 Epoch 10/50 352/352 [==============================] - 8s 23ms/step - loss: 2.5798 - acc: 0.3404 - top5-acc: 0.6591 - val_loss: 2.6546 - val_acc: 0.3502 - val_top5-acc: 0.6630 Epoch 11/50 352/352 [==============================] - 8s 23ms/step - loss: 2.5269 - acc: 0.3498 - top5-acc: 0.6714 - val_loss: 2.6201 - val_acc: 0.3570 - val_top5-acc: 0.6710 Epoch 12/50 352/352 [==============================] - 8s 23ms/step - loss: 2.5003 - acc: 0.3569 - top5-acc: 0.6745 - val_loss: 2.5936 - val_acc: 0.3564 - val_top5-acc: 0.6662 Epoch 13/50 352/352 [==============================] - 8s 22ms/step - loss: 2.4801 - acc: 0.3619 - top5-acc: 0.6792 - val_loss: 2.5236 - val_acc: 0.3700 - val_top5-acc: 0.6786 Epoch 14/50 352/352 [==============================] - 8s 23ms/step - loss: 2.4392 - acc: 0.3676 - top5-acc: 0.6879 - val_loss: 2.4971 - val_acc: 0.3808 - val_top5-acc: 0.6926 Epoch 15/50 352/352 [==============================] - 8s 23ms/step - loss: 2.4073 - acc: 0.3790 - top5-acc: 0.6940 - val_loss: 2.5972 - val_acc: 0.3682 - val_top5-acc: 0.6750 Epoch 16/50 352/352 [==============================] - 8s 23ms/step - loss: 2.3922 - acc: 0.3754 - top5-acc: 0.6980 - val_loss: 2.4317 - val_acc: 0.3964 - val_top5-acc: 0.6992 Epoch 17/50 352/352 [==============================] - 8s 22ms/step - loss: 2.3603 - acc: 0.3891 - top5-acc: 0.7038 - val_loss: 2.4844 - val_acc: 0.3766 - val_top5-acc: 0.6964 Epoch 18/50 352/352 [==============================] - 8s 23ms/step - loss: 2.3560 - acc: 0.3849 - top5-acc: 0.7056 - 
val_loss: 2.4564 - val_acc: 0.3910 - val_top5-acc: 0.6990 Epoch 19/50 352/352 [==============================] - 8s 23ms/step - loss: 2.3367 - acc: 0.3900 - top5-acc: 0.7069 - val_loss: 2.4282 - val_acc: 0.3906 - val_top5-acc: 0.7058 Epoch 20/50 352/352 [==============================] - 8s 22ms/step - loss: 2.3096 - acc: 0.3945 - top5-acc: 0.7180 - val_loss: 2.4297 - val_acc: 0.3930 - val_top5-acc: 0.7082 Epoch 21/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2935 - acc: 0.3996 - top5-acc: 0.7211 - val_loss: 2.4053 - val_acc: 0.3974 - val_top5-acc: 0.7076 Epoch 22/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2823 - acc: 0.3991 - top5-acc: 0.7248 - val_loss: 2.4756 - val_acc: 0.3920 - val_top5-acc: 0.6988 Epoch 23/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2371 - acc: 0.4126 - top5-acc: 0.7294 - val_loss: 2.3802 - val_acc: 0.3972 - val_top5-acc: 0.7100 Epoch 24/50 352/352 [==============================] - 8s 23ms/step - loss: 2.2234 - acc: 0.4140 - top5-acc: 0.7336 - val_loss: 2.4402 - val_acc: 0.3994 - val_top5-acc: 0.7096 Epoch 25/50 352/352 [==============================] - 8s 23ms/step - loss: 2.2320 - acc: 0.4088 - top5-acc: 0.7333 - val_loss: 2.4343 - val_acc: 0.3936 - val_top5-acc: 0.7052 Epoch 26/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2094 - acc: 0.4193 - top5-acc: 0.7347 - val_loss: 2.4154 - val_acc: 0.4058 - val_top5-acc: 0.7192 Epoch 27/50 352/352 [==============================] - 8s 23ms/step - loss: 2.2029 - acc: 0.4180 - top5-acc: 0.7370 - val_loss: 2.3116 - val_acc: 0.4226 - val_top5-acc: 0.7268 Epoch 28/50 352/352 [==============================] - 8s 23ms/step - loss: 2.1959 - acc: 0.4234 - top5-acc: 0.7380 - val_loss: 2.4053 - val_acc: 0.4064 - val_top5-acc: 0.7168 Epoch 29/50 352/352 [==============================] - 8s 23ms/step - loss: 2.1815 - acc: 0.4227 - top5-acc: 0.7415 - val_loss: 2.4020 - val_acc: 0.4078 - val_top5-acc: 0.7192 Epoch 30/50 352/352 [==============================] - 8s 23ms/step - loss: 2.1783 - acc: 0.4245 - top5-acc: 0.7407 - val_loss: 2.4206 - val_acc: 0.3996 - val_top5-acc: 0.7234 Epoch 31/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1686 - acc: 0.4248 - top5-acc: 0.7442 - val_loss: 2.3743 - val_acc: 0.4100 - val_top5-acc: 0.7162 Epoch 32/50 352/352 [==============================] - 8s 23ms/step - loss: 2.1487 - acc: 0.4317 - top5-acc: 0.7472 - val_loss: 2.3882 - val_acc: 0.4018 - val_top5-acc: 0.7266 Epoch 33/50 352/352 [==============================] - 8s 22ms/step - loss: 1.9836 - acc: 0.4644 - top5-acc: 0.7782 - val_loss: 2.1742 - val_acc: 0.4536 - val_top5-acc: 0.7506 Epoch 34/50 352/352 [==============================] - 8s 23ms/step - loss: 1.8723 - acc: 0.4950 - top5-acc: 0.7985 - val_loss: 2.1716 - val_acc: 0.4506 - val_top5-acc: 0.7546 Epoch 35/50 352/352 [==============================] - 8s 23ms/step - loss: 1.8461 - acc: 0.5009 - top5-acc: 0.8003 - val_loss: 2.1661 - val_acc: 0.4480 - val_top5-acc: 0.7542 Epoch 36/50 352/352 [==============================] - 8s 23ms/step - loss: 1.8499 - acc: 0.4944 - top5-acc: 0.8044 - val_loss: 2.1523 - val_acc: 0.4566 - val_top5-acc: 0.7628 Epoch 37/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8322 - acc: 0.5000 - top5-acc: 0.8059 - val_loss: 2.1334 - val_acc: 0.4570 - val_top5-acc: 0.7560 Epoch 38/50 352/352 [==============================] - 8s 23ms/step - loss: 1.8269 - acc: 0.5027 - top5-acc: 0.8085 - val_loss: 2.1024 - val_acc: 0.4614 
- val_top5-acc: 0.7674 Epoch 39/50 352/352 [==============================] - 8s 23ms/step - loss: 1.8242 - acc: 0.4990 - top5-acc: 0.8098 - val_loss: 2.0789 - val_acc: 0.4610 - val_top5-acc: 0.7792 Epoch 40/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7983 - acc: 0.5067 - top5-acc: 0.8122 - val_loss: 2.1514 - val_acc: 0.4546 - val_top5-acc: 0.7628 Epoch 41/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7974 - acc: 0.5112 - top5-acc: 0.8132 - val_loss: 2.1425 - val_acc: 0.4542 - val_top5-acc: 0.7630 Epoch 42/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7972 - acc: 0.5128 - top5-acc: 0.8127 - val_loss: 2.0980 - val_acc: 0.4580 - val_top5-acc: 0.7724 Epoch 43/50 352/352 [==============================] - 8s 23ms/step - loss: 1.8026 - acc: 0.5066 - top5-acc: 0.8115 - val_loss: 2.0922 - val_acc: 0.4684 - val_top5-acc: 0.7678 Epoch 44/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7924 - acc: 0.5092 - top5-acc: 0.8129 - val_loss: 2.0511 - val_acc: 0.4750 - val_top5-acc: 0.7726 Epoch 45/50 352/352 [==============================] - 8s 22ms/step - loss: 1.7695 - acc: 0.5106 - top5-acc: 0.8193 - val_loss: 2.0949 - val_acc: 0.4678 - val_top5-acc: 0.7708 Epoch 46/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7784 - acc: 0.5106 - top5-acc: 0.8141 - val_loss: 2.1094 - val_acc: 0.4656 - val_top5-acc: 0.7704 Epoch 47/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7625 - acc: 0.5155 - top5-acc: 0.8190 - val_loss: 2.0492 - val_acc: 0.4774 - val_top5-acc: 0.7744 Epoch 48/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7441 - acc: 0.5217 - top5-acc: 0.8190 - val_loss: 2.0562 - val_acc: 0.4698 - val_top5-acc: 0.7828 Epoch 49/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7665 - acc: 0.5113 - top5-acc: 0.8196 - val_loss: 2.0348 - val_acc: 0.4708 - val_top5-acc: 0.7730 Epoch 50/50 352/352 [==============================] - 8s 23ms/step - loss: 1.7392 - acc: 0.5201 - top5-acc: 0.8226 - val_loss: 2.0787 - val_acc: 0.4710 - val_top5-acc: 0.7734 313/313 [==============================] - 2s 8ms/step - loss: 2.0571 - acc: 0.4758 - top5-acc: 0.7718 Test accuracy: 47.58% Test top 5 accuracy: 77.18% The MLP-Mixer model tends to have far fewer parameters than convolutional and transformer-based models, which leads to lower training and serving computational cost. As mentioned in the MLP-Mixer paper, when pre-trained on large datasets, or with modern regularization schemes, the MLP-Mixer attains scores competitive with state-of-the-art models. You can obtain better results by increasing the embedding dimensions, increasing the number of mixer blocks, and training the model for longer. You may also try to increase the size of the input images and use different patch sizes. The FNet model The FNet uses a similar block to the Transformer block. However, FNet replaces the self-attention layer in the Transformer block with a parameter-free 2D Fourier transformation layer: One 1D Fourier Transform is applied along the patches. One 1D Fourier Transform is applied along the channels.
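The implementation below realizes these two 1D transforms as a single 2D FFT over the (patches, channels) plane and keeps only the real part of the result. As a small illustrative check (not part of the original example), the two formulations agree:

import numpy as np

x = np.random.rand(4, 64, 256)  # [batch_size, num_patches, embedding_dim]
two_1d_ffts = np.fft.fft(np.fft.fft(x, axis=-1), axis=-2)  # along channels, then along patches
single_2d_fft = np.fft.fft2(x)  # fft2 acts on the last two axes by default
print(np.allclose(two_1d_ffts, single_2d_fft))  # True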
Implement the FNet module class FNetLayer(layers.Layer): def __init__(self, num_patches, embedding_dim, dropout_rate, *args, **kwargs): super(FNetLayer, self).__init__(*args, **kwargs) self.ffn = keras.Sequential( [ layers.Dense(units=embedding_dim), tfa.layers.GELU(), layers.Dropout(rate=dropout_rate), layers.Dense(units=embedding_dim), ] ) self.normalize1 = layers.LayerNormalization(epsilon=1e-6) self.normalize2 = layers.LayerNormalization(epsilon=1e-6) def call(self, inputs): # Apply fourier transformations. x = tf.cast( tf.signal.fft2d(tf.cast(inputs, dtype=tf.dtypes.complex64)), dtype=tf.dtypes.float32, ) # Add skip connection. x = x + inputs # Apply layer normalization. x = self.normalize1(x) # Apply Feedfowrad network. x_ffn = self.ffn(x) # Add skip connection. x = x + x_ffn # Apply layer normalization. return self.normalize2(x) Build, train, and evaluate the FNet model Note that training the model with the current settings on a V100 GPUs takes around 8 seconds per epoch. fnet_blocks = keras.Sequential( [FNetLayer(num_patches, embedding_dim, dropout_rate) for _ in range(num_blocks)] ) learning_rate = 0.001 fnet_classifier = build_classifier(fnet_blocks, positional_encoding=True) history = run_experiment(fnet_classifier) Epoch 1/50 352/352 [==============================] - 11s 23ms/step - loss: 4.3419 - acc: 0.0470 - top5-acc: 0.1652 - val_loss: 3.8279 - val_acc: 0.1178 - val_top5-acc: 0.3268 Epoch 2/50 352/352 [==============================] - 8s 22ms/step - loss: 3.7814 - acc: 0.1202 - top5-acc: 0.3341 - val_loss: 3.5981 - val_acc: 0.1540 - val_top5-acc: 0.3914 Epoch 3/50 352/352 [==============================] - 8s 22ms/step - loss: 3.5319 - acc: 0.1603 - top5-acc: 0.4086 - val_loss: 3.3309 - val_acc: 0.1956 - val_top5-acc: 0.4656 Epoch 4/50 352/352 [==============================] - 8s 22ms/step - loss: 3.3025 - acc: 0.2001 - top5-acc: 0.4730 - val_loss: 3.1215 - val_acc: 0.2334 - val_top5-acc: 0.5234 Epoch 5/50 352/352 [==============================] - 8s 22ms/step - loss: 3.1621 - acc: 0.2224 - top5-acc: 0.5084 - val_loss: 3.0492 - val_acc: 0.2456 - val_top5-acc: 0.5322 Epoch 6/50 352/352 [==============================] - 8s 22ms/step - loss: 3.0506 - acc: 0.2469 - top5-acc: 0.5400 - val_loss: 2.9519 - val_acc: 0.2684 - val_top5-acc: 0.5652 Epoch 7/50 352/352 [==============================] - 8s 22ms/step - loss: 2.9520 - acc: 0.2618 - top5-acc: 0.5677 - val_loss: 2.8936 - val_acc: 0.2688 - val_top5-acc: 0.5864 Epoch 8/50 352/352 [==============================] - 8s 22ms/step - loss: 2.8377 - acc: 0.2828 - top5-acc: 0.5938 - val_loss: 2.7633 - val_acc: 0.2996 - val_top5-acc: 0.6068 Epoch 9/50 352/352 [==============================] - 8s 22ms/step - loss: 2.7670 - acc: 0.2969 - top5-acc: 0.6107 - val_loss: 2.7309 - val_acc: 0.3112 - val_top5-acc: 0.6136 Epoch 10/50 352/352 [==============================] - 8s 22ms/step - loss: 2.7027 - acc: 0.3148 - top5-acc: 0.6231 - val_loss: 2.6552 - val_acc: 0.3214 - val_top5-acc: 0.6436 Epoch 11/50 352/352 [==============================] - 8s 22ms/step - loss: 2.6375 - acc: 0.3256 - top5-acc: 0.6427 - val_loss: 2.6078 - val_acc: 0.3278 - val_top5-acc: 0.6434 Epoch 12/50 352/352 [==============================] - 8s 22ms/step - loss: 2.5573 - acc: 0.3424 - top5-acc: 0.6576 - val_loss: 2.5617 - val_acc: 0.3438 - val_top5-acc: 0.6534 Epoch 13/50 352/352 [==============================] - 8s 22ms/step - loss: 2.5259 - acc: 0.3488 - top5-acc: 0.6640 - val_loss: 2.5177 - val_acc: 0.3550 - val_top5-acc: 0.6652 Epoch 14/50 352/352 
[==============================] - 8s 22ms/step - loss: 2.4782 - acc: 0.3586 - top5-acc: 0.6739 - val_loss: 2.5113 - val_acc: 0.3558 - val_top5-acc: 0.6718 Epoch 15/50 352/352 [==============================] - 8s 22ms/step - loss: 2.4242 - acc: 0.3712 - top5-acc: 0.6897 - val_loss: 2.4280 - val_acc: 0.3724 - val_top5-acc: 0.6880 Epoch 16/50 352/352 [==============================] - 8s 22ms/step - loss: 2.3884 - acc: 0.3741 - top5-acc: 0.6967 - val_loss: 2.4670 - val_acc: 0.3654 - val_top5-acc: 0.6794 Epoch 17/50 352/352 [==============================] - 8s 22ms/step - loss: 2.3619 - acc: 0.3797 - top5-acc: 0.7001 - val_loss: 2.3941 - val_acc: 0.3752 - val_top5-acc: 0.6922 Epoch 18/50 352/352 [==============================] - 8s 22ms/step - loss: 2.3183 - acc: 0.3931 - top5-acc: 0.7137 - val_loss: 2.4028 - val_acc: 0.3814 - val_top5-acc: 0.6954 Epoch 19/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2919 - acc: 0.3955 - top5-acc: 0.7209 - val_loss: 2.3672 - val_acc: 0.3878 - val_top5-acc: 0.7022 Epoch 20/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2612 - acc: 0.4038 - top5-acc: 0.7224 - val_loss: 2.3529 - val_acc: 0.3954 - val_top5-acc: 0.6934 Epoch 21/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2416 - acc: 0.4068 - top5-acc: 0.7262 - val_loss: 2.3014 - val_acc: 0.3980 - val_top5-acc: 0.7158 Epoch 22/50 352/352 [==============================] - 8s 22ms/step - loss: 2.2087 - acc: 0.4162 - top5-acc: 0.7359 - val_loss: 2.2904 - val_acc: 0.4062 - val_top5-acc: 0.7120 Epoch 23/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1803 - acc: 0.4200 - top5-acc: 0.7442 - val_loss: 2.3181 - val_acc: 0.4096 - val_top5-acc: 0.7120 Epoch 24/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1718 - acc: 0.4246 - top5-acc: 0.7403 - val_loss: 2.2687 - val_acc: 0.4094 - val_top5-acc: 0.7234 Epoch 25/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1559 - acc: 0.4198 - top5-acc: 0.7458 - val_loss: 2.2730 - val_acc: 0.4060 - val_top5-acc: 0.7190 Epoch 26/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1285 - acc: 0.4300 - top5-acc: 0.7495 - val_loss: 2.2566 - val_acc: 0.4082 - val_top5-acc: 0.7306 Epoch 27/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1118 - acc: 0.4386 - top5-acc: 0.7538 - val_loss: 2.2544 - val_acc: 0.4178 - val_top5-acc: 0.7218 Epoch 28/50 352/352 [==============================] - 8s 22ms/step - loss: 2.1007 - acc: 0.4408 - top5-acc: 0.7562 - val_loss: 2.2703 - val_acc: 0.4136 - val_top5-acc: 0.7172 Epoch 29/50 352/352 [==============================] - 8s 22ms/step - loss: 2.0707 - acc: 0.4446 - top5-acc: 0.7634 - val_loss: 2.2244 - val_acc: 0.4168 - val_top5-acc: 0.7332 Epoch 30/50 352/352 [==============================] - 8s 22ms/step - loss: 2.0694 - acc: 0.4428 - top5-acc: 0.7611 - val_loss: 2.2557 - val_acc: 0.4060 - val_top5-acc: 0.7270 Epoch 31/50 352/352 [==============================] - 8s 22ms/step - loss: 2.0485 - acc: 0.4502 - top5-acc: 0.7672 - val_loss: 2.2192 - val_acc: 0.4214 - val_top5-acc: 0.7308 Epoch 32/50 352/352 [==============================] - 8s 22ms/step - loss: 2.0105 - acc: 0.4617 - top5-acc: 0.7718 - val_loss: 2.2065 - val_acc: 0.4222 - val_top5-acc: 0.7286 Epoch 33/50 352/352 [==============================] - 8s 22ms/step - loss: 2.0238 - acc: 0.4556 - top5-acc: 0.7734 - val_loss: 2.1736 - val_acc: 0.4270 - val_top5-acc: 0.7368 Epoch 34/50 352/352 [==============================] - 
8s 22ms/step - loss: 2.0253 - acc: 0.4547 - top5-acc: 0.7712 - val_loss: 2.2231 - val_acc: 0.4280 - val_top5-acc: 0.7308 Epoch 35/50 352/352 [==============================] - 8s 22ms/step - loss: 1.9992 - acc: 0.4593 - top5-acc: 0.7765 - val_loss: 2.1994 - val_acc: 0.4212 - val_top5-acc: 0.7358 Epoch 36/50 352/352 [==============================] - 8s 22ms/step - loss: 1.9849 - acc: 0.4636 - top5-acc: 0.7754 - val_loss: 2.2167 - val_acc: 0.4276 - val_top5-acc: 0.7308 Epoch 37/50 352/352 [==============================] - 8s 22ms/step - loss: 1.9880 - acc: 0.4677 - top5-acc: 0.7783 - val_loss: 2.1746 - val_acc: 0.4270 - val_top5-acc: 0.7416 Epoch 38/50 352/352 [==============================] - 8s 22ms/step - loss: 1.9562 - acc: 0.4720 - top5-acc: 0.7845 - val_loss: 2.1976 - val_acc: 0.4312 - val_top5-acc: 0.7356 Epoch 39/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8736 - acc: 0.4924 - top5-acc: 0.8004 - val_loss: 2.0755 - val_acc: 0.4578 - val_top5-acc: 0.7586 Epoch 40/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8189 - acc: 0.5042 - top5-acc: 0.8076 - val_loss: 2.0804 - val_acc: 0.4508 - val_top5-acc: 0.7600 Epoch 41/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8069 - acc: 0.5062 - top5-acc: 0.8132 - val_loss: 2.0784 - val_acc: 0.4456 - val_top5-acc: 0.7578 Epoch 42/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8156 - acc: 0.5052 - top5-acc: 0.8110 - val_loss: 2.0910 - val_acc: 0.4544 - val_top5-acc: 0.7542 Epoch 43/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8143 - acc: 0.5046 - top5-acc: 0.8105 - val_loss: 2.1037 - val_acc: 0.4466 - val_top5-acc: 0.7562 Epoch 44/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8119 - acc: 0.5032 - top5-acc: 0.8141 - val_loss: 2.0794 - val_acc: 0.4622 - val_top5-acc: 0.7532 Epoch 45/50 352/352 [==============================] - 8s 22ms/step - loss: 1.7611 - acc: 0.5188 - top5-acc: 0.8224 - val_loss: 2.0371 - val_acc: 0.4650 - val_top5-acc: 0.7628 Epoch 46/50 352/352 [==============================] - 8s 22ms/step - loss: 1.7713 - acc: 0.5189 - top5-acc: 0.8226 - val_loss: 2.0245 - val_acc: 0.4630 - val_top5-acc: 0.7644 Epoch 47/50 352/352 [==============================] - 8s 22ms/step - loss: 1.7809 - acc: 0.5130 - top5-acc: 0.8215 - val_loss: 2.0471 - val_acc: 0.4618 - val_top5-acc: 0.7618 Epoch 48/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8052 - acc: 0.5112 - top5-acc: 0.8165 - val_loss: 2.0441 - val_acc: 0.4596 - val_top5-acc: 0.7658 Epoch 49/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8128 - acc: 0.5039 - top5-acc: 0.8178 - val_loss: 2.0569 - val_acc: 0.4600 - val_top5-acc: 0.7614 Epoch 50/50 352/352 [==============================] - 8s 22ms/step - loss: 1.8179 - acc: 0.5089 - top5-acc: 0.8155 - val_loss: 2.0514 - val_acc: 0.4576 - val_top5-acc: 0.7566 313/313 [==============================] - 2s 6ms/step - loss: 2.0142 - acc: 0.4663 - top5-acc: 0.7647 Test accuracy: 46.63% Test top 5 accuracy: 76.47% As shown in the FNet paper, better results can be achieved by increasing the embedding dimensions, increasing the number of FNet blocks, and training the model for longer. You may also try to increase the size of the input images and use different patch sizes. The FNet scales very efficiently to long inputs, runs much faster than attention-based Transformer models, and produces competitive accuracy results. 
The gMLP model The gMLP is an MLP architecture that features a Spatial Gating Unit (SGU). The SGU enables cross-patch interactions across the spatial (channel) dimension, by: Transforming the input spatially by applying linear projection across patches (along channels). Applying element-wise multiplication of the input and its spatial transformation. Implement the gMLP module class gMLPLayer(layers.Layer): def __init__(self, num_patches, embedding_dim, dropout_rate, *args, **kwargs): super(gMLPLayer, self).__init__(*args, **kwargs) self.channel_projection1 = keras.Sequential( [ layers.Dense(units=embedding_dim * 2), tfa.layers.GELU(), layers.Dropout(rate=dropout_rate), ] ) self.channel_projection2 = layers.Dense(units=embedding_dim) self.spatial_projection = layers.Dense( units=num_patches, bias_initializer=\"Ones\" ) self.normalize1 = layers.LayerNormalization(epsilon=1e-6) self.normalize2 = layers.LayerNormalization(epsilon=1e-6) def spatial_gating_unit(self, x): # Split x along the channel dimension. # Tensors u and v will be in the shape of [batch_size, num_patches, embedding_dim]. u, v = tf.split(x, num_or_size_splits=2, axis=2) # Apply layer normalization. v = self.normalize2(v) # Apply spatial projection. v_channels = tf.linalg.matrix_transpose(v) v_projected = self.spatial_projection(v_channels) v_projected = tf.linalg.matrix_transpose(v_projected) # Apply element-wise multiplication. return u * v_projected def call(self, inputs): # Apply layer normalization. x = self.normalize1(inputs) # Apply the first channel projection. x_projected shape: [batch_size, num_patches, embedding_dim * 2]. x_projected = self.channel_projection1(x) # Apply the spatial gating unit. x_spatial shape: [batch_size, num_patches, embedding_dim]. x_spatial = self.spatial_gating_unit(x_projected) # Apply the second channel projection. x_projected shape: [batch_size, num_patches, embedding_dim]. x_projected = self.channel_projection2(x_spatial) # Add skip connection. return x + x_projected Build, train, and evaluate the gMLP model Note that training the model with the current settings on a V100 GPU takes around 9 seconds per epoch.
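As an optional sanity check (not part of the original example), running a dummy batch through a single gMLPLayer confirms the shapes described in the comments above: channel_projection1 widens the channels to 2 * embedding_dim, the spatial gating unit splits that into u and v and projects v across patches, and the block output matches the input shape so blocks can be stacked. This assumes the gMLPLayer class, imports, and hyperparameters defined above.

# Optional shape check; assumes gMLPLayer, num_patches, embedding_dim and dropout_rate from above.
dummy_inputs = tf.random.normal((4, num_patches, embedding_dim))
gmlp_block = gMLPLayer(num_patches, embedding_dim, dropout_rate)
print(gmlp_block(dummy_inputs).shape)  # Expected: (4, 64, 256) with the settings above.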
gmlp_blocks = keras.Sequential( [gMLPLayer(num_patches, embedding_dim, dropout_rate) for _ in range(num_blocks)] ) learning_rate = 0.003 gmlp_classifier = build_classifier(gmlp_blocks) history = run_experiment(gmlp_classifier) Epoch 1/50 352/352 [==============================] - 13s 28ms/step - loss: 4.1713 - acc: 0.0704 - top5-acc: 0.2206 - val_loss: 3.5629 - val_acc: 0.1548 - val_top5-acc: 0.4086 Epoch 2/50 352/352 [==============================] - 9s 27ms/step - loss: 3.5146 - acc: 0.1633 - top5-acc: 0.4172 - val_loss: 3.2899 - val_acc: 0.2066 - val_top5-acc: 0.4900 Epoch 3/50 352/352 [==============================] - 9s 26ms/step - loss: 3.2588 - acc: 0.2017 - top5-acc: 0.4895 - val_loss: 3.1152 - val_acc: 0.2362 - val_top5-acc: 0.5278 Epoch 4/50 352/352 [==============================] - 9s 26ms/step - loss: 3.1037 - acc: 0.2331 - top5-acc: 0.5288 - val_loss: 2.9771 - val_acc: 0.2624 - val_top5-acc: 0.5646 Epoch 5/50 352/352 [==============================] - 9s 26ms/step - loss: 2.9483 - acc: 0.2637 - top5-acc: 0.5680 - val_loss: 2.8807 - val_acc: 0.2784 - val_top5-acc: 0.5840 Epoch 6/50 352/352 [==============================] - 9s 26ms/step - loss: 2.8411 - acc: 0.2821 - top5-acc: 0.5930 - val_loss: 2.7246 - val_acc: 0.3146 - val_top5-acc: 0.6256 Epoch 7/50 352/352 [==============================] - 9s 26ms/step - loss: 2.7221 - acc: 0.3085 - top5-acc: 0.6193 - val_loss: 2.7022 - val_acc: 0.3108 - val_top5-acc: 0.6270 Epoch 8/50 352/352 [==============================] - 9s 26ms/step - loss: 2.6296 - acc: 0.3334 - top5-acc: 0.6420 - val_loss: 2.6289 - val_acc: 0.3324 - val_top5-acc: 0.6494 Epoch 9/50 352/352 [==============================] - 9s 26ms/step - loss: 2.5691 - acc: 0.3413 - top5-acc: 0.6563 - val_loss: 2.5353 - val_acc: 0.3586 - val_top5-acc: 0.6746 Epoch 10/50 352/352 [==============================] - 9s 26ms/step - loss: 2.4854 - acc: 0.3575 - top5-acc: 0.6760 - val_loss: 2.5271 - val_acc: 0.3578 - val_top5-acc: 0.6720 Epoch 11/50 352/352 [==============================] - 9s 26ms/step - loss: 2.4252 - acc: 0.3722 - top5-acc: 0.6870 - val_loss: 2.4553 - val_acc: 0.3684 - val_top5-acc: 0.6850 Epoch 12/50 352/352 [==============================] - 9s 26ms/step - loss: 2.3814 - acc: 0.3822 - top5-acc: 0.6985 - val_loss: 2.3841 - val_acc: 0.3888 - val_top5-acc: 0.6966 Epoch 13/50 352/352 [==============================] - 9s 26ms/step - loss: 2.3119 - acc: 0.3950 - top5-acc: 0.7135 - val_loss: 2.4306 - val_acc: 0.3780 - val_top5-acc: 0.6894 Epoch 14/50 352/352 [==============================] - 9s 26ms/step - loss: 2.2886 - acc: 0.4033 - top5-acc: 0.7168 - val_loss: 2.4053 - val_acc: 0.3932 - val_top5-acc: 0.7010 Epoch 15/50 352/352 [==============================] - 9s 26ms/step - loss: 2.2455 - acc: 0.4080 - top5-acc: 0.7233 - val_loss: 2.3443 - val_acc: 0.4004 - val_top5-acc: 0.7128 Epoch 16/50 352/352 [==============================] - 9s 26ms/step - loss: 2.2128 - acc: 0.4152 - top5-acc: 0.7317 - val_loss: 2.3150 - val_acc: 0.4018 - val_top5-acc: 0.7174 Epoch 17/50 352/352 [==============================] - 9s 26ms/step - loss: 2.1990 - acc: 0.4206 - top5-acc: 0.7357 - val_loss: 2.3590 - val_acc: 0.3978 - val_top5-acc: 0.7086 Epoch 18/50 352/352 [==============================] - 9s 26ms/step - loss: 2.1574 - acc: 0.4258 - top5-acc: 0.7451 - val_loss: 2.3140 - val_acc: 0.4052 - val_top5-acc: 0.7256 Epoch 19/50 352/352 [==============================] - 9s 26ms/step - loss: 2.1369 - acc: 0.4309 - top5-acc: 0.7487 - val_loss: 2.3012 - val_acc: 0.4124 - 
val_top5-acc: 0.7190 Epoch 20/50 352/352 [==============================] - 9s 26ms/step - loss: 2.1222 - acc: 0.4350 - top5-acc: 0.7494 - val_loss: 2.3294 - val_acc: 0.4076 - val_top5-acc: 0.7186 Epoch 21/50 352/352 [==============================] - 9s 26ms/step - loss: 2.0822 - acc: 0.4436 - top5-acc: 0.7576 - val_loss: 2.2498 - val_acc: 0.4302 - val_top5-acc: 0.7276 Epoch 22/50 352/352 [==============================] - 9s 26ms/step - loss: 2.0609 - acc: 0.4518 - top5-acc: 0.7610 - val_loss: 2.2915 - val_acc: 0.4232 - val_top5-acc: 0.7280 Epoch 23/50 352/352 [==============================] - 9s 26ms/step - loss: 2.0482 - acc: 0.4590 - top5-acc: 0.7648 - val_loss: 2.2448 - val_acc: 0.4242 - val_top5-acc: 0.7296 Epoch 24/50 352/352 [==============================] - 9s 26ms/step - loss: 2.0292 - acc: 0.4560 - top5-acc: 0.7705 - val_loss: 2.2526 - val_acc: 0.4334 - val_top5-acc: 0.7324 Epoch 25/50 352/352 [==============================] - 9s 26ms/step - loss: 2.0316 - acc: 0.4544 - top5-acc: 0.7687 - val_loss: 2.2430 - val_acc: 0.4318 - val_top5-acc: 0.7338 Epoch 26/50 352/352 [==============================] - 9s 26ms/step - loss: 1.9988 - acc: 0.4616 - top5-acc: 0.7748 - val_loss: 2.2053 - val_acc: 0.4470 - val_top5-acc: 0.7366 Epoch 27/50 352/352 [==============================] - 9s 26ms/step - loss: 1.9788 - acc: 0.4646 - top5-acc: 0.7806 - val_loss: 2.2313 - val_acc: 0.4378 - val_top5-acc: 0.7420 Epoch 28/50 352/352 [==============================] - 9s 26ms/step - loss: 1.9702 - acc: 0.4688 - top5-acc: 0.7829 - val_loss: 2.2392 - val_acc: 0.4344 - val_top5-acc: 0.7338 Epoch 29/50 352/352 [==============================] - 9s 26ms/step - loss: 1.9488 - acc: 0.4699 - top5-acc: 0.7866 - val_loss: 2.1600 - val_acc: 0.4490 - val_top5-acc: 0.7446 Epoch 30/50 352/352 [==============================] - 9s 26ms/step - loss: 1.9302 - acc: 0.4803 - top5-acc: 0.7878 - val_loss: 2.2069 - val_acc: 0.4410 - val_top5-acc: 0.7486 Epoch 31/50 352/352 [==============================] - 9s 26ms/step - loss: 1.9135 - acc: 0.4806 - top5-acc: 0.7916 - val_loss: 2.1929 - val_acc: 0.4486 - val_top5-acc: 0.7514 Epoch 32/50 352/352 [==============================] - 9s 26ms/step - loss: 1.8890 - acc: 0.4844 - top5-acc: 0.7961 - val_loss: 2.2176 - val_acc: 0.4404 - val_top5-acc: 0.7494 Epoch 33/50 352/352 [==============================] - 9s 26ms/step - loss: 1.8844 - acc: 0.4872 - top5-acc: 0.7980 - val_loss: 2.2321 - val_acc: 0.4444 - val_top5-acc: 0.7460 Epoch 34/50 352/352 [==============================] - 9s 26ms/step - loss: 1.8588 - acc: 0.4912 - top5-acc: 0.8005 - val_loss: 2.1895 - val_acc: 0.4532 - val_top5-acc: 0.7510 Epoch 35/50 352/352 [==============================] - 9s 26ms/step - loss: 1.7259 - acc: 0.5232 - top5-acc: 0.8266 - val_loss: 2.1024 - val_acc: 0.4800 - val_top5-acc: 0.7726 Epoch 36/50 352/352 [==============================] - 9s 26ms/step - loss: 1.6262 - acc: 0.5488 - top5-acc: 0.8437 - val_loss: 2.0712 - val_acc: 0.4830 - val_top5-acc: 0.7754 Epoch 37/50 352/352 [==============================] - 9s 26ms/step - loss: 1.6164 - acc: 0.5481 - top5-acc: 0.8390 - val_loss: 2.1219 - val_acc: 0.4772 - val_top5-acc: 0.7678 Epoch 38/50 352/352 [==============================] - 9s 26ms/step - loss: 1.5850 - acc: 0.5568 - top5-acc: 0.8510 - val_loss: 2.0931 - val_acc: 0.4892 - val_top5-acc: 0.7732 Epoch 39/50 352/352 [==============================] - 9s 26ms/step - loss: 1.5741 - acc: 0.5589 - top5-acc: 0.8507 - val_loss: 2.0910 - val_acc: 0.4910 - val_top5-acc: 0.7700 Epoch 40/50 
352/352 [==============================] - 9s 26ms/step - loss: 1.5546 - acc: 0.5675 - top5-acc: 0.8519 - val_loss: 2.1388 - val_acc: 0.4790 - val_top5-acc: 0.7742 Epoch 41/50 352/352 [==============================] - 9s 26ms/step - loss: 1.5464 - acc: 0.5684 - top5-acc: 0.8561 - val_loss: 2.1121 - val_acc: 0.4786 - val_top5-acc: 0.7718 Epoch 42/50 352/352 [==============================] - 9s 26ms/step - loss: 1.4494 - acc: 0.5890 - top5-acc: 0.8702 - val_loss: 2.1157 - val_acc: 0.4944 - val_top5-acc: 0.7802 Epoch 43/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3847 - acc: 0.6069 - top5-acc: 0.8825 - val_loss: 2.1048 - val_acc: 0.4884 - val_top5-acc: 0.7752 Epoch 44/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3724 - acc: 0.6087 - top5-acc: 0.8832 - val_loss: 2.0681 - val_acc: 0.4924 - val_top5-acc: 0.7868 Epoch 45/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3643 - acc: 0.6116 - top5-acc: 0.8840 - val_loss: 2.0965 - val_acc: 0.4932 - val_top5-acc: 0.7752 Epoch 46/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3517 - acc: 0.6184 - top5-acc: 0.8849 - val_loss: 2.0869 - val_acc: 0.4956 - val_top5-acc: 0.7778 Epoch 47/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3377 - acc: 0.6211 - top5-acc: 0.8891 - val_loss: 2.1120 - val_acc: 0.4882 - val_top5-acc: 0.7764 Epoch 48/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3369 - acc: 0.6186 - top5-acc: 0.8888 - val_loss: 2.1257 - val_acc: 0.4912 - val_top5-acc: 0.7752 Epoch 49/50 352/352 [==============================] - 9s 26ms/step - loss: 1.3266 - acc: 0.6190 - top5-acc: 0.8893 - val_loss: 2.0961 - val_acc: 0.4958 - val_top5-acc: 0.7828 Epoch 50/50 352/352 [==============================] - 9s 26ms/step - loss: 1.2731 - acc: 0.6352 - top5-acc: 0.8976 - val_loss: 2.0897 - val_acc: 0.4982 - val_top5-acc: 0.7788 313/313 [==============================] - 2s 7ms/step - loss: 2.0743 - acc: 0.5064 - top5-acc: 0.7828 Test accuracy: 50.64% Test top 5 accuracy: 78.28% As shown in the gMLP paper, better results can be achieved by increasing the embedding dimensions, increasing the number of gMLP blocks, and training the model for longer. You may also try to increase the size of the input images and use different patch sizes. Note that, the paper used advanced regularization strategies, such as MixUp and CutMix, as well as AutoAugment. Implementing the Perceiver model for image classification. Introduction This example implements the Perceiver: General Perception with Iterative Attention model by Andrew Jaegle et al. for image classification, and demonstrates it on the CIFAR-100 dataset. The Perceiver model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. In other words: let's assume that your input data array (e.g. image) has M elements (i.e. patches), where M is large. In a standard Transformer model, a self-attention operation is performed for the M elements. The complexity of this operation is O(M^2). However, the Perceiver model creates a latent array of size N elements, where N << M, and performs two operations iteratively: Cross-attention Transformer between the latent array and the data array - The complexity of this operation is O(M.N). Self-attention Transformer on the latent array - The complexity of this operation is O(N^2). 
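To make the complexity comparison above concrete, here is a back-of-the-envelope sketch of the number of attention score pairs computed per layer. The values of M and N below simply mirror the patch count and latent array size configured later in this example; the counts ignore constant factors such as the number of heads.
# Back-of-the-envelope attention-cost comparison for the complexity claims above.
# M is the number of input elements (patches), N is the latent array size.
M, N = 1024, 256

standard_self_attention = M * M              # O(M^2): every input element attends to every other
perceiver_cross_attention = M * N            # O(M.N): latent queries attend to the data array
perceiver_latent_self_attention = N * N      # O(N^2): self-attention inside the latent array

print(standard_self_attention)                                       # 1048576
print(perceiver_cross_attention + perceiver_latent_self_attention)   # 327680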
This example requires TensorFlow 2.4 or higher, as well as TensorFlow Addons, which can be installed using the following command: pip install -U tensorflow-addons Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_addons as tfa Prepare the data num_classes = 100 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() print(f\"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}\") print(f\"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}\") x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 1) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 1) Configure the hyperparameters learning_rate = 0.001 weight_decay = 0.0001 batch_size = 64 num_epochs = 50 dropout_rate = 0.2 image_size = 64 # We'll resize input images to this size. patch_size = 2 # Size of the patches to be extract from the input images. num_patches = (image_size // patch_size) ** 2 # Size of the data array. latent_dim = 256 # Size of the latent array. projection_dim = 256 # Embedding size of each element in the data and latent arrays. num_heads = 8 # Number of Transformer heads. ffn_units = [ projection_dim, projection_dim, ] # Size of the Transformer Feedforward network. num_transformer_blocks = 4 num_iterations = 2 # Repetitions of the cross-attention and Transformer modules. classifier_units = [ projection_dim, num_classes, ] # Size of the Feedforward network of the final classifier. print(f\"Image size: {image_size} X {image_size} = {image_size ** 2}\") print(f\"Patch size: {patch_size} X {patch_size} = {patch_size ** 2} \") print(f\"Patches per image: {num_patches}\") print(f\"Elements per patch (3 channels): {(patch_size ** 2) * 3}\") print(f\"Latent array shape: {latent_dim} X {projection_dim}\") print(f\"Data array shape: {num_patches} X {projection_dim}\") Image size: 64 X 64 = 4096 Patch size: 2 X 2 = 4 Patches per image: 1024 Elements per patch (3 channels): 12 Latent array shape: 256 X 256 Data array shape: 1024 X 256 Note that, in order to use each pixel as an individual input in the data array, set patch_size to 1. Use data augmentation data_augmentation = keras.Sequential( [ layers.Normalization(), layers.Resizing(image_size, image_size), layers.RandomFlip(\"horizontal\"), layers.RandomZoom( height_factor=0.2, width_factor=0.2 ), ], name=\"data_augmentation\", ) # Compute the mean and the variance of the training data for normalization. data_augmentation.layers[0].adapt(x_train) Implement Feedforward network (FFN) def create_ffn(hidden_units, dropout_rate): ffn_layers = [] for units in hidden_units[:-1]: ffn_layers.append(layers.Dense(units, activation=tf.nn.gelu)) ffn_layers.append(layers.Dense(units=hidden_units[-1])) ffn_layers.append(layers.Dropout(dropout_rate)) ffn = keras.Sequential(ffn_layers) return ffn Implement patch creation as a layer class Patches(layers.Layer): def __init__(self, patch_size): super(Patches, self).__init__() self.patch_size = patch_size def call(self, images): batch_size = tf.shape(images)[0] patches = tf.image.extract_patches( images=images, sizes=[1, self.patch_size, self.patch_size, 1], strides=[1, self.patch_size, self.patch_size, 1], rates=[1, 1, 1, 1], padding=\"VALID\", ) patch_dims = patches.shape[-1] patches = tf.reshape(patches, [batch_size, -1, patch_dims]) return patches Implement the patch encoding layer The PatchEncoder layer will linearly transform a patch by projecting it into a vector of size latent_dim. 
In addition, it adds a learnable position embedding to the projected vector. Note that the original Perceiver paper uses Fourier feature positional encodings. class PatchEncoder(layers.Layer): def __init__(self, num_patches, projection_dim): super(PatchEncoder, self).__init__() self.num_patches = num_patches self.projection = layers.Dense(units=projection_dim) self.position_embedding = layers.Embedding( input_dim=num_patches, output_dim=projection_dim ) def call(self, patches): positions = tf.range(start=0, limit=self.num_patches, delta=1) encoded = self.projection(patches) + self.position_embedding(positions) return encoded Build the Perceiver model The Perceiver consists of two modules: a cross-attention module and a standard Transformer with self-attention. Cross-attention module The cross-attention expects a (latent_dim, projection_dim) latent array, and the (data_dim, projection_dim) data array as inputs, to produce a (latent_dim, projection_dim) latent array as an output. To apply cross-attention, the query vectors are generated from the latent array, while the key and value vectors are generated from the encoded image. Note that the data array in this example is the image, where data_dim is set to num_patches. def create_cross_attention_module( latent_dim, data_dim, projection_dim, ffn_units, dropout_rate ): inputs = { # Receive the latent array as an input of shape [1, latent_dim, projection_dim]. \"latent_array\": layers.Input(shape=(latent_dim, projection_dim)), # Receive the data_array (encoded image) as an input of shape [batch_size, data_dim, projection_dim]. \"data_array\": layers.Input(shape=(data_dim, projection_dim)), } # Apply layer norm to the inputs latent_array = layers.LayerNormalization(epsilon=1e-6)(inputs[\"latent_array\"]) data_array = layers.LayerNormalization(epsilon=1e-6)(inputs[\"data_array\"]) # Create query tensor: [1, latent_dim, projection_dim]. query = layers.Dense(units=projection_dim)(latent_array) # Create key tensor: [batch_size, data_dim, projection_dim]. key = layers.Dense(units=projection_dim)(data_array) # Create value tensor: [batch_size, data_dim, projection_dim]. value = layers.Dense(units=projection_dim)(data_array) # Generate cross-attention outputs: [batch_size, latent_dim, projection_dim]. attention_output = layers.Attention(use_scale=True, dropout=0.1)( [query, key, value], return_attention_scores=False ) # Skip connection 1. attention_output = layers.Add()([attention_output, latent_array]) # Apply layer norm. attention_output = layers.LayerNormalization(epsilon=1e-6)(attention_output) # Apply Feedforward network. ffn = create_ffn(hidden_units=ffn_units, dropout_rate=dropout_rate) outputs = ffn(attention_output) # Skip connection 2. outputs = layers.Add()([outputs, attention_output]) # Create the Keras model. model = keras.Model(inputs=inputs, outputs=outputs) return model Transformer module The Transformer expects the output latent vector from the cross-attention module as an input, applies multi-head self-attention to its latent_dim elements, followed by a feedforward network, to produce another (latent_dim, projection_dim) latent array. def create_transformer_module( latent_dim, projection_dim, num_heads, num_transformer_blocks, ffn_units, dropout_rate, ): # input_shape: [1, latent_dim, projection_dim] inputs = layers.Input(shape=(latent_dim, projection_dim)) x0 = inputs # Create multiple layers of the Transformer block. for _ in range(num_transformer_blocks): # Apply layer normalization 1.
x1 = layers.LayerNormalization(epsilon=1e-6)(x0) # Create a multi-head self-attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=projection_dim, dropout=0.1 )(x1, x1) # Skip connection 1. x2 = layers.Add()([attention_output, x0]) # Apply layer normalization 2. x3 = layers.LayerNormalization(epsilon=1e-6)(x2) # Apply Feedforward network. ffn = create_ffn(hidden_units=ffn_units, dropout_rate=dropout_rate) x3 = ffn(x3) # Skip connection 2. x0 = layers.Add()([x3, x2]) # Create the Keras model. model = keras.Model(inputs=inputs, outputs=x0) return model Perceiver model The Perceiver model repeats the cross-attention and Transformer modules num_iterations times (with shared weights and skip connections) to allow the latent array to iteratively extract information from the input image as it is needed. class Perceiver(keras.Model): def __init__( self, patch_size, data_dim, latent_dim, projection_dim, num_heads, num_transformer_blocks, ffn_units, dropout_rate, num_iterations, classifier_units, ): super(Perceiver, self).__init__() self.latent_dim = latent_dim self.data_dim = data_dim self.patch_size = patch_size self.projection_dim = projection_dim self.num_heads = num_heads self.num_transformer_blocks = num_transformer_blocks self.ffn_units = ffn_units self.dropout_rate = dropout_rate self.num_iterations = num_iterations self.classifier_units = classifier_units def build(self, input_shape): # Create latent array. self.latent_array = self.add_weight( shape=(self.latent_dim, self.projection_dim), initializer=\"random_normal\", trainable=True, ) # Create patching module. self.patcher = Patches(self.patch_size) # Create patch encoder. self.patch_encoder = PatchEncoder(self.data_dim, self.projection_dim) # Create cross-attention module. self.cross_attention = create_cross_attention_module( self.latent_dim, self.data_dim, self.projection_dim, self.ffn_units, self.dropout_rate, ) # Create Transformer module. self.transformer = create_transformer_module( self.latent_dim, self.projection_dim, self.num_heads, self.num_transformer_blocks, self.ffn_units, self.dropout_rate, ) # Create global average pooling layer. self.global_average_pooling = layers.GlobalAveragePooling1D() # Create a classification head. self.classification_head = create_ffn( hidden_units=self.classifier_units, dropout_rate=self.dropout_rate ) super(Perceiver, self).build(input_shape) def call(self, inputs): # Augment data. augmented = data_augmentation(inputs) # Create patches. patches = self.patcher(augmented) # Encode patches. encoded_patches = self.patch_encoder(patches) # Prepare cross-attention inputs. cross_attention_inputs = { \"latent_array\": tf.expand_dims(self.latent_array, 0), \"data_array\": encoded_patches, } # Apply the cross-attention and the Transformer modules iteratively. for _ in range(self.num_iterations): # Apply cross-attention from the latent array to the data array. latent_array = self.cross_attention(cross_attention_inputs) # Apply self-attention Transformer to the latent array. latent_array = self.transformer(latent_array) # Set the latent array of the next iteration. cross_attention_inputs[\"latent_array\"] = latent_array # Apply global average pooling to generate a [batch_size, projection_dim] representation tensor. representation = self.global_average_pooling(latent_array) # Generate logits. logits = self.classification_head(representation) return logits Compile, train, and evaluate the model def run_experiment(model): # Create LAMB optimizer with weight decay.
optimizer = tfa.optimizers.LAMB( learning_rate=learning_rate, weight_decay_rate=weight_decay, ) # Compile the model. model.compile( optimizer=optimizer, loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[ keras.metrics.SparseCategoricalAccuracy(name=\"acc\"), keras.metrics.SparseTopKCategoricalAccuracy(5, name=\"top5-acc\"), ], ) # Create a learning rate scheduler callback. reduce_lr = keras.callbacks.ReduceLROnPlateau( monitor=\"val_loss\", factor=0.2, patience=3 ) # Create an early stopping callback. early_stopping = tf.keras.callbacks.EarlyStopping( monitor=\"val_loss\", patience=15, restore_best_weights=True ) # Fit the model. history = model.fit( x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs, validation_split=0.1, callbacks=[early_stopping, reduce_lr], ) _, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") print(f\"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%\") # Return history to plot learning curves. return history Note that training the perceiver model with the current settings on a V100 GPUs takes around 200 seconds. perceiver_classifier = Perceiver( patch_size, num_patches, latent_dim, projection_dim, num_heads, num_transformer_blocks, ffn_units, dropout_rate, num_iterations, classifier_units, ) history = run_experiment(perceiver_classifier) Epoch 1/100 704/704 [==============================] - 305s 405ms/step - loss: 4.4550 - acc: 0.0389 - top5-acc: 0.1407 - val_loss: 4.0544 - val_acc: 0.0802 - val_top5-acc: 0.2516 Epoch 2/100 704/704 [==============================] - 284s 403ms/step - loss: 4.0639 - acc: 0.0889 - top5-acc: 0.2576 - val_loss: 3.7379 - val_acc: 0.1272 - val_top5-acc: 0.3556 Epoch 3/100 704/704 [==============================] - 283s 402ms/step - loss: 3.8400 - acc: 0.1226 - top5-acc: 0.3326 - val_loss: 3.4527 - val_acc: 0.1750 - val_top5-acc: 0.4350 Epoch 4/100 704/704 [==============================] - 283s 402ms/step - loss: 3.5917 - acc: 0.1657 - top5-acc: 0.4063 - val_loss: 3.2160 - val_acc: 0.2176 - val_top5-acc: 0.5048 Epoch 5/100 704/704 [==============================] - 283s 403ms/step - loss: 3.3820 - acc: 0.2082 - top5-acc: 0.4638 - val_loss: 2.9947 - val_acc: 0.2584 - val_top5-acc: 0.5732 Epoch 6/100 704/704 [==============================] - 284s 403ms/step - loss: 3.2487 - acc: 0.2338 - top5-acc: 0.4991 - val_loss: 2.9179 - val_acc: 0.2770 - val_top5-acc: 0.5744 Epoch 7/100 704/704 [==============================] - 283s 402ms/step - loss: 3.1228 - acc: 0.2605 - top5-acc: 0.5295 - val_loss: 2.7958 - val_acc: 0.2994 - val_top5-acc: 0.6100 Epoch 8/100 704/704 [==============================] - 283s 402ms/step - loss: 2.9989 - acc: 0.2862 - top5-acc: 0.5588 - val_loss: 2.7117 - val_acc: 0.3208 - val_top5-acc: 0.6340 Epoch 9/100 704/704 [==============================] - 283s 402ms/step - loss: 2.9294 - acc: 0.3018 - top5-acc: 0.5763 - val_loss: 2.5933 - val_acc: 0.3390 - val_top5-acc: 0.6636 Epoch 10/100 704/704 [==============================] - 283s 402ms/step - loss: 2.8687 - acc: 0.3139 - top5-acc: 0.5934 - val_loss: 2.5030 - val_acc: 0.3614 - val_top5-acc: 0.6764 Epoch 11/100 704/704 [==============================] - 283s 402ms/step - loss: 2.7771 - acc: 0.3341 - top5-acc: 0.6098 - val_loss: 2.4657 - val_acc: 0.3704 - val_top5-acc: 0.6928 Epoch 12/100 704/704 [==============================] - 283s 402ms/step - loss: 2.7306 - acc: 0.3436 - top5-acc: 0.6229 - val_loss: 2.4441 - val_acc: 0.3738 - val_top5-acc: 0.6878 Epoch 13/100 
704/704 [==============================] - 283s 402ms/step - loss: 2.6863 - acc: 0.3546 - top5-acc: 0.6346 - val_loss: 2.3508 - val_acc: 0.3892 - val_top5-acc: 0.7050 Epoch 14/100 704/704 [==============================] - 283s 402ms/step - loss: 2.6107 - acc: 0.3708 - top5-acc: 0.6537 - val_loss: 2.3219 - val_acc: 0.3996 - val_top5-acc: 0.7108 Epoch 15/100 704/704 [==============================] - 283s 402ms/step - loss: 2.5559 - acc: 0.3836 - top5-acc: 0.6664 - val_loss: 2.2748 - val_acc: 0.4140 - val_top5-acc: 0.7242 Epoch 16/100 704/704 [==============================] - 283s 402ms/step - loss: 2.5016 - acc: 0.3942 - top5-acc: 0.6761 - val_loss: 2.2364 - val_acc: 0.4238 - val_top5-acc: 0.7264 Epoch 17/100 704/704 [==============================] - 283s 402ms/step - loss: 2.4554 - acc: 0.4056 - top5-acc: 0.6897 - val_loss: 2.1684 - val_acc: 0.4418 - val_top5-acc: 0.7452 Epoch 18/100 704/704 [==============================] - 283s 402ms/step - loss: 2.3926 - acc: 0.4209 - top5-acc: 0.7024 - val_loss: 2.1614 - val_acc: 0.4372 - val_top5-acc: 0.7428 Epoch 19/100 704/704 [==============================] - 283s 402ms/step - loss: 2.3617 - acc: 0.4264 - top5-acc: 0.7119 - val_loss: 2.1595 - val_acc: 0.4382 - val_top5-acc: 0.7408 Epoch 20/100 704/704 [==============================] - 283s 402ms/step - loss: 2.3355 - acc: 0.4324 - top5-acc: 0.7133 - val_loss: 2.1187 - val_acc: 0.4462 - val_top5-acc: 0.7490 Epoch 21/100 704/704 [==============================] - 283s 402ms/step - loss: 2.2571 - acc: 0.4512 - top5-acc: 0.7299 - val_loss: 2.1095 - val_acc: 0.4424 - val_top5-acc: 0.7534 Epoch 22/100 704/704 [==============================] - 283s 402ms/step - loss: 2.2374 - acc: 0.4559 - top5-acc: 0.7357 - val_loss: 2.0997 - val_acc: 0.4398 - val_top5-acc: 0.7554 Epoch 23/100 704/704 [==============================] - 283s 402ms/step - loss: 2.2108 - acc: 0.4628 - top5-acc: 0.7452 - val_loss: 2.0662 - val_acc: 0.4574 - val_top5-acc: 0.7598 Epoch 24/100 704/704 [==============================] - 283s 402ms/step - loss: 2.1628 - acc: 0.4728 - top5-acc: 0.7555 - val_loss: 2.0564 - val_acc: 0.4564 - val_top5-acc: 0.7584 Epoch 25/100 704/704 [==============================] - 283s 402ms/step - loss: 2.1169 - acc: 0.4834 - top5-acc: 0.7616 - val_loss: 2.0793 - val_acc: 0.4600 - val_top5-acc: 0.7538 Epoch 26/100 704/704 [==============================] - 283s 402ms/step - loss: 2.0938 - acc: 0.4867 - top5-acc: 0.7743 - val_loss: 2.0835 - val_acc: 0.4566 - val_top5-acc: 0.7506 Epoch 27/100 704/704 [==============================] - 283s 402ms/step - loss: 2.0479 - acc: 0.4993 - top5-acc: 0.7816 - val_loss: 2.0790 - val_acc: 0.4610 - val_top5-acc: 0.7556 Epoch 28/100 704/704 [==============================] - 283s 402ms/step - loss: 1.8480 - acc: 0.5493 - top5-acc: 0.8159 - val_loss: 1.8846 - val_acc: 0.5046 - val_top5-acc: 0.7890 Epoch 29/100 704/704 [==============================] - 283s 402ms/step - loss: 1.7532 - acc: 0.5731 - top5-acc: 0.8362 - val_loss: 1.8844 - val_acc: 0.5106 - val_top5-acc: 0.7976 Epoch 30/100 704/704 [==============================] - 283s 402ms/step - loss: 1.7113 - acc: 0.5827 - top5-acc: 0.8434 - val_loss: 1.8792 - val_acc: 0.5096 - val_top5-acc: 0.7928 Epoch 31/100 704/704 [==============================] - 283s 403ms/step - loss: 1.6831 - acc: 0.5891 - top5-acc: 0.8511 - val_loss: 1.8938 - val_acc: 0.5044 - val_top5-acc: 0.7914 Epoch 32/100 704/704 [==============================] - 284s 403ms/step - loss: 1.6480 - acc: 0.5977 - top5-acc: 0.8562 - val_loss: 1.9055 - 
val_acc: 0.5034 - val_top5-acc: 0.7922 Epoch 33/100 704/704 [==============================] - 284s 403ms/step - loss: 1.6320 - acc: 0.6015 - top5-acc: 0.8627 - val_loss: 1.9064 - val_acc: 0.5056 - val_top5-acc: 0.7896 Epoch 34/100 704/704 [==============================] - 283s 403ms/step - loss: 1.5821 - acc: 0.6145 - top5-acc: 0.8673 - val_loss: 1.8912 - val_acc: 0.5138 - val_top5-acc: 0.7936 Epoch 35/100 704/704 [==============================] - 283s 403ms/step - loss: 1.5791 - acc: 0.6163 - top5-acc: 0.8719 - val_loss: 1.8963 - val_acc: 0.5090 - val_top5-acc: 0.7982 Epoch 36/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5680 - acc: 0.6178 - top5-acc: 0.8741 - val_loss: 1.8998 - val_acc: 0.5142 - val_top5-acc: 0.7936 Epoch 37/100 704/704 [==============================] - 284s 403ms/step - loss: 1.5506 - acc: 0.6218 - top5-acc: 0.8743 - val_loss: 1.8941 - val_acc: 0.5142 - val_top5-acc: 0.7952 Epoch 38/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5611 - acc: 0.6216 - top5-acc: 0.8722 - val_loss: 1.8946 - val_acc: 0.5183 - val_top5-acc: 0.7956 Epoch 39/100 704/704 [==============================] - 284s 403ms/step - loss: 1.5541 - acc: 0.6215 - top5-acc: 0.8764 - val_loss: 1.8923 - val_acc: 0.5180 - val_top5-acc: 0.7962 Epoch 40/100 704/704 [==============================] - 283s 403ms/step - loss: 1.5505 - acc: 0.6228 - top5-acc: 0.8773 - val_loss: 1.8934 - val_acc: 0.5232 - val_top5-acc: 0.7962 Epoch 41/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5604 - acc: 0.6224 - top5-acc: 0.8747 - val_loss: 1.8938 - val_acc: 0.5230 - val_top5-acc: 0.7958 Epoch 42/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5545 - acc: 0.6194 - top5-acc: 0.8784 - val_loss: 1.8938 - val_acc: 0.5240 - val_top5-acc: 0.7966 Epoch 43/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5630 - acc: 0.6210 - top5-acc: 0.8758 - val_loss: 1.8939 - val_acc: 0.5240 - val_top5-acc: 0.7958 Epoch 44/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5569 - acc: 0.6198 - top5-acc: 0.8756 - val_loss: 1.8938 - val_acc: 0.5240 - val_top5-acc: 0.7060 Epoch 45/100 704/704 [==============================] - 283s 402ms/step - loss: 1.5569 - acc: 0.6197 - top5-acc: 0.8770 - val_loss: 1.8940 - val_acc: 0.5140 - val_top5-acc: 0.7962 313/313 [==============================] - 22s 69ms/step - loss: 1.8630 - acc: 0.5264 - top5-acc: 0.8087 Test accuracy: 52.64% Test top 5 accuracy: 80.87% After 45 epochs, the Perceiver model achieves around 53% accuracy and 81% top-5 accuracy on the test data. As mentioned in the ablations of the Perceiver paper, you can obtain better results by increasing the latent array size, increasing the (projection) dimensions of the latent array and data array elements, increasing the number of blocks in the Transformer module, and increasing the number of iterations of applying the cross-attention and the latent Transformer modules. You may also try to increase the size of the input images and use different patch sizes. The Perceiver benefits from increasing the model size. However, larger models need bigger accelerators to fit in and train efficiently. This is why the Perceiver paper used 32 TPU cores to run its experiments. Image classification using Swin Transformers, a general-purpose backbone for computer vision. This example implements Swin Transformer: Hierarchical Vision Transformer using Shifted Windows by Liu et al.
for image classification, and demonstrates it on the CIFAR-100 dataset. Swin Transformer (Shifted Window Transformer) can serve as a general-purpose backbone for computer vision. Swin Transformer is a hierarchical Transformer whose representations are computed with shifted windows. The shifted window scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connections. This architecture has the flexibility to model information at various scales and has a linear computational complexity with respect to image size. This example requires TensorFlow 2.5 or higher, as well as TensorFlow Addons, which can be installed using the following command: !pip install -U tensorflow-addons Successfully installed tensorflow-addons-0.14.0 typeguard-2.12.1 Setup import matplotlib.pyplot as plt import numpy as np import tensorflow as tf import tensorflow_addons as tfa from tensorflow import keras from tensorflow.keras import layers Prepare the data We load the CIFAR-100 dataset through tf.keras.datasets, normalize the images, and convert the integer labels to one-hot encoded vectors. num_classes = 100 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) print(f\"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}\") print(f\"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}\") plt.figure(figsize=(10, 10)) for i in range(25): plt.subplot(5, 5, i + 1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(x_train[i]) plt.show() Downloading data from https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz 169017344/169001437 [==============================] - 3s 0us/step x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 100) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 100) png Configure the hyperparameters A key parameter to pick is the patch_size, the size of the input patches. In order to use each pixel as an individual input, you can set patch_size to (1, 1). Below, we take inspiration from the original paper settings for training on ImageNet-1K, keeping most of the original settings for this example. patch_size = (2, 2) # 2-by-2 sized patches dropout_rate = 0.03 # Dropout rate num_heads = 8 # Attention heads embed_dim = 64 # Embedding dimension num_mlp = 256 # MLP layer size qkv_bias = True # Convert embedded patches to query, key, and values with a learnable additive value window_size = 2 # Size of attention window shift_size = 1 # Size of shifting window image_dimension = 32 # Initial image size num_patch_x = input_shape[0] // patch_size[0] num_patch_y = input_shape[1] // patch_size[1] learning_rate = 1e-3 batch_size = 128 num_epochs = 40 validation_split = 0.1 weight_decay = 0.0001 label_smoothing = 0.1 Helper functions We create two helper functions to partition a feature map into windows and merge the windows back, along with a DropPath layer to apply stochastic depth regularization.
def window_partition(x, window_size): _, height, width, channels = x.shape patch_num_y = height // window_size patch_num_x = width // window_size x = tf.reshape( x, shape=(-1, patch_num_y, window_size, patch_num_x, window_size, channels) ) x = tf.transpose(x, (0, 1, 3, 2, 4, 5)) windows = tf.reshape(x, shape=(-1, window_size, window_size, channels)) return windows def window_reverse(windows, window_size, height, width, channels): patch_num_y = height // window_size patch_num_x = width // window_size x = tf.reshape( windows, shape=(-1, patch_num_y, patch_num_x, window_size, window_size, channels), ) x = tf.transpose(x, perm=(0, 1, 3, 2, 4, 5)) x = tf.reshape(x, shape=(-1, height, width, channels)) return x class DropPath(layers.Layer): def __init__(self, drop_prob=None, **kwargs): super(DropPath, self).__init__(**kwargs) self.drop_prob = drop_prob def call(self, x): input_shape = tf.shape(x) batch_size = input_shape[0] rank = x.shape.rank shape = (batch_size,) + (1,) * (rank - 1) random_tensor = (1 - self.drop_prob) + tf.random.uniform(shape, dtype=x.dtype) path_mask = tf.floor(random_tensor) output = tf.math.divide(x, 1 - self.drop_prob) * path_mask return output Window based multi-head self-attention Usually Transformers perform global self-attention, where the relationships between a token and all other tokens are computed. The global computation leads to quadratic complexity with respect to the number of tokens. Here, as the original paper suggests, we compute self-attention within local windows, in a non-overlapping manner. Global self-attention leads to quadratic computational complexity in the number of patches, whereas window-based self-attention leads to linear complexity and is easily scalable. class WindowAttention(layers.Layer): def __init__( self, dim, window_size, num_heads, qkv_bias=True, dropout_rate=0.0, **kwargs ): super(WindowAttention, self).__init__(**kwargs) self.dim = dim self.window_size = window_size self.num_heads = num_heads self.scale = (dim // num_heads) ** -0.5 self.qkv = layers.Dense(dim * 3, use_bias=qkv_bias) self.dropout = layers.Dropout(dropout_rate) self.proj = layers.Dense(dim) def build(self, input_shape): num_window_elements = (2 * self.window_size[0] - 1) * ( 2 * self.window_size[1] - 1 ) self.relative_position_bias_table = self.add_weight( shape=(num_window_elements, self.num_heads), initializer=tf.initializers.Zeros(), trainable=True, ) coords_h = np.arange(self.window_size[0]) coords_w = np.arange(self.window_size[1]) coords_matrix = np.meshgrid(coords_h, coords_w, indexing=\"ij\") coords = np.stack(coords_matrix) coords_flatten = coords.reshape(2, -1) relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] relative_coords = relative_coords.transpose([1, 2, 0]) relative_coords[:, :, 0] += self.window_size[0] - 1 relative_coords[:, :, 1] += self.window_size[1] - 1 relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 relative_position_index = relative_coords.sum(-1) self.relative_position_index = tf.Variable( initial_value=tf.convert_to_tensor(relative_position_index), trainable=False ) def call(self, x, mask=None): _, size, channels = x.shape head_dim = channels // self.num_heads x_qkv = self.qkv(x) x_qkv = tf.reshape(x_qkv, shape=(-1, size, 3, self.num_heads, head_dim)) x_qkv = tf.transpose(x_qkv, perm=(2, 0, 3, 1, 4)) q, k, v = x_qkv[0], x_qkv[1], x_qkv[2] q = q * self.scale k = tf.transpose(k, perm=(0, 1, 3, 2)) attn = q @ k num_window_elements = self.window_size[0] * self.window_size[1] relative_position_index_flat = 
tf.reshape( self.relative_position_index, shape=(-1,) ) relative_position_bias = tf.gather( self.relative_position_bias_table, relative_position_index_flat ) relative_position_bias = tf.reshape( relative_position_bias, shape=(num_window_elements, num_window_elements, -1) ) relative_position_bias = tf.transpose(relative_position_bias, perm=(2, 0, 1)) attn = attn + tf.expand_dims(relative_position_bias, axis=0) if mask is not None: nW = mask.get_shape()[0] mask_float = tf.cast( tf.expand_dims(tf.expand_dims(mask, axis=1), axis=0), tf.float32 ) attn = ( tf.reshape(attn, shape=(-1, nW, self.num_heads, size, size)) + mask_float ) attn = tf.reshape(attn, shape=(-1, self.num_heads, size, size)) attn = keras.activations.softmax(attn, axis=-1) else: attn = keras.activations.softmax(attn, axis=-1) attn = self.dropout(attn) x_qkv = attn @ v x_qkv = tf.transpose(x_qkv, perm=(0, 2, 1, 3)) x_qkv = tf.reshape(x_qkv, shape=(-1, size, channels)) x_qkv = self.proj(x_qkv) x_qkv = self.dropout(x_qkv) return x_qkv The complete Swin Transformer model Finally, we put together the complete Swin Transformer by replacing the standard multi-head attention (MHA) with shifted windows attention. As suggested in the original paper, we create a model comprising of a shifted window-based MHA layer, followed by a 2-layer MLP with GELU nonlinearity in between, applying LayerNormalization before each MSA layer and each MLP, and a residual connection after each of these layers. Notice that we only create a simple MLP with 2 Dense and 2 Dropout layers. Often you will see models using ResNet-50 as the MLP which is quite standard in the literature. However in this paper the authors use a 2-layer MLP with GELU nonlinearity in between. class SwinTransformer(layers.Layer): def __init__( self, dim, num_patch, num_heads, window_size=7, shift_size=0, num_mlp=1024, qkv_bias=True, dropout_rate=0.0, **kwargs, ): super(SwinTransformer, self).__init__(**kwargs) self.dim = dim # number of input dimensions self.num_patch = num_patch # number of embedded patches self.num_heads = num_heads # number of attention heads self.window_size = window_size # size of window self.shift_size = shift_size # size of window shift self.num_mlp = num_mlp # number of MLP nodes self.norm1 = layers.LayerNormalization(epsilon=1e-5) self.attn = WindowAttention( dim, window_size=(self.window_size, self.window_size), num_heads=num_heads, qkv_bias=qkv_bias, dropout_rate=dropout_rate, ) self.drop_path = DropPath(dropout_rate) self.norm2 = layers.LayerNormalization(epsilon=1e-5) self.mlp = keras.Sequential( [ layers.Dense(num_mlp), layers.Activation(keras.activations.gelu), layers.Dropout(dropout_rate), layers.Dense(dim), layers.Dropout(dropout_rate), ] ) if min(self.num_patch) < self.window_size: self.shift_size = 0 self.window_size = min(self.num_patch) def build(self, input_shape): if self.shift_size == 0: self.attn_mask = None else: height, width = self.num_patch h_slices = ( slice(0, -self.window_size), slice(-self.window_size, -self.shift_size), slice(-self.shift_size, None), ) w_slices = ( slice(0, -self.window_size), slice(-self.window_size, -self.shift_size), slice(-self.shift_size, None), ) mask_array = np.zeros((1, height, width, 1)) count = 0 for h in h_slices: for w in w_slices: mask_array[:, h, w, :] = count count += 1 mask_array = tf.convert_to_tensor(mask_array) # mask array to windows mask_windows = window_partition(mask_array, self.window_size) mask_windows = tf.reshape( mask_windows, shape=[-1, self.window_size * self.window_size] ) attn_mask = 
tf.expand_dims(mask_windows, axis=1) - tf.expand_dims( mask_windows, axis=2 ) attn_mask = tf.where(attn_mask != 0, -100.0, attn_mask) attn_mask = tf.where(attn_mask == 0, 0.0, attn_mask) self.attn_mask = tf.Variable(initial_value=attn_mask, trainable=False) def call(self, x): height, width = self.num_patch _, num_patches_before, channels = x.shape x_skip = x x = self.norm1(x) x = tf.reshape(x, shape=(-1, height, width, channels)) if self.shift_size > 0: shifted_x = tf.roll( x, shift=[-self.shift_size, -self.shift_size], axis=[1, 2] ) else: shifted_x = x x_windows = window_partition(shifted_x, self.window_size) x_windows = tf.reshape( x_windows, shape=(-1, self.window_size * self.window_size, channels) ) attn_windows = self.attn(x_windows, mask=self.attn_mask) attn_windows = tf.reshape( attn_windows, shape=(-1, self.window_size, self.window_size, channels) ) shifted_x = window_reverse( attn_windows, self.window_size, height, width, channels ) if self.shift_size > 0: x = tf.roll( shifted_x, shift=[self.shift_size, self.shift_size], axis=[1, 2] ) else: x = shifted_x x = tf.reshape(x, shape=(-1, height * width, channels)) x = self.drop_path(x) x = x_skip + x x_skip = x x = self.norm2(x) x = self.mlp(x) x = self.drop_path(x) x = x_skip + x return x Model training and evaluation Extract and embed patches We first create 3 layers to help us extract, embed and merge patches from the images on top of which we will later use the Swin Transformer class we built. class PatchExtract(layers.Layer): def __init__(self, patch_size, **kwargs): super(PatchExtract, self).__init__(**kwargs) self.patch_size_x = patch_size[0] self.patch_size_y = patch_size[0] def call(self, images): batch_size = tf.shape(images)[0] patches = tf.image.extract_patches( images=images, sizes=(1, self.patch_size_x, self.patch_size_y, 1), strides=(1, self.patch_size_x, self.patch_size_y, 1), rates=(1, 1, 1, 1), padding=\"VALID\", ) patch_dim = patches.shape[-1] patch_num = patches.shape[1] return tf.reshape(patches, (batch_size, patch_num * patch_num, patch_dim)) class PatchEmbedding(layers.Layer): def __init__(self, num_patch, embed_dim, **kwargs): super(PatchEmbedding, self).__init__(**kwargs) self.num_patch = num_patch self.proj = layers.Dense(embed_dim) self.pos_embed = layers.Embedding(input_dim=num_patch, output_dim=embed_dim) def call(self, patch): pos = tf.range(start=0, limit=self.num_patch, delta=1) return self.proj(patch) + self.pos_embed(pos) class PatchMerging(tf.keras.layers.Layer): def __init__(self, num_patch, embed_dim): super(PatchMerging, self).__init__() self.num_patch = num_patch self.embed_dim = embed_dim self.linear_trans = layers.Dense(2 * embed_dim, use_bias=False) def call(self, x): height, width = self.num_patch _, _, C = x.get_shape().as_list() x = tf.reshape(x, shape=(-1, height, width, C)) x0 = x[:, 0::2, 0::2, :] x1 = x[:, 1::2, 0::2, :] x2 = x[:, 0::2, 1::2, :] x3 = x[:, 1::2, 1::2, :] x = tf.concat((x0, x1, x2, x3), axis=-1) x = tf.reshape(x, shape=(-1, (height // 2) * (width // 2), 4 * C)) return self.linear_trans(x) Build the model We put together the Swin Transformer model. 
input = layers.Input(input_shape) x = layers.RandomCrop(image_dimension, image_dimension)(input) x = layers.RandomFlip(\"horizontal\")(x) x = PatchExtract(patch_size)(x) x = PatchEmbedding(num_patch_x * num_patch_y, embed_dim)(x) x = SwinTransformer( dim=embed_dim, num_patch=(num_patch_x, num_patch_y), num_heads=num_heads, window_size=window_size, shift_size=0, num_mlp=num_mlp, qkv_bias=qkv_bias, dropout_rate=dropout_rate, )(x) x = SwinTransformer( dim=embed_dim, num_patch=(num_patch_x, num_patch_y), num_heads=num_heads, window_size=window_size, shift_size=shift_size, num_mlp=num_mlp, qkv_bias=qkv_bias, dropout_rate=dropout_rate, )(x) x = PatchMerging((num_patch_x, num_patch_y), embed_dim=embed_dim)(x) x = layers.GlobalAveragePooling1D()(x) output = layers.Dense(num_classes, activation=\"softmax\")(x) 2021-09-13 08:03:19.277483: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-09-13 08:03:21.261723: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14684 MB memory: -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:04.0, compute capability: 7.0 Train on CIFAR-100 We train the model on CIFAR-100. Here, we only train the model for 40 epochs to keep the training time short in this example.
In practice, you should train for 150 epochs to reach convergence. model = keras.Model(input, output) model.compile( loss=keras.losses.CategoricalCrossentropy(label_smoothing=label_smoothing), optimizer=tfa.optimizers.AdamW( learning_rate=learning_rate, weight_decay=weight_decay ), metrics=[ keras.metrics.CategoricalAccuracy(name=\"accuracy\"), keras.metrics.TopKCategoricalAccuracy(5, name=\"top-5-accuracy\"), ], ) history = model.fit( x_train, y_train, batch_size=batch_size, epochs=num_epochs, validation_split=validation_split, ) 2021-09-13 08:03:23.935873: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2) Epoch 1/40 352/352 [==============================] - 19s 34ms/step - loss: 4.1679 - accuracy: 0.0817 - top-5-accuracy: 0.2551 - val_loss: 3.8964 - val_accuracy: 0.1242 - val_top-5-accuracy: 0.3568 Epoch 2/40 352/352 [==============================] - 11s 32ms/step - loss: 3.7278 - accuracy: 0.1617 - top-5-accuracy: 0.4246 - val_loss: 3.6518 - val_accuracy: 0.1756 - val_top-5-accuracy: 0.4580 Epoch 3/40 352/352 [==============================] - 11s 32ms/step - loss: 3.5245 - accuracy: 0.2077 - top-5-accuracy: 0.4946 - val_loss: 3.4609 - val_accuracy: 0.2248 - val_top-5-accuracy: 0.5222 Epoch 4/40 352/352 [==============================] - 11s 32ms/step - loss: 3.3856 - accuracy: 0.2408 - top-5-accuracy: 0.5430 - val_loss: 3.3515 - val_accuracy: 0.2514 - val_top-5-accuracy: 0.5540 Epoch 5/40 352/352 [==============================] - 11s 32ms/step - loss: 3.2772 - accuracy: 0.2697 - top-5-accuracy: 0.5760 - val_loss: 3.3012 - val_accuracy: 0.2712 - val_top-5-accuracy: 0.5758 Epoch 6/40 352/352 [==============================] - 11s 32ms/step - loss: 3.1845 - accuracy: 0.2915 - top-5-accuracy: 0.6071 - val_loss: 3.2104 - val_accuracy: 0.2866 - val_top-5-accuracy: 0.5994 Epoch 7/40 352/352 [==============================] - 11s 32ms/step - loss: 3.1104 - accuracy: 0.3126 - top-5-accuracy: 0.6288 - val_loss: 3.1408 - val_accuracy: 0.3038 - val_top-5-accuracy: 0.6176 Epoch 8/40 352/352 [==============================] - 11s 32ms/step - loss: 3.0616 - accuracy: 0.3268 - top-5-accuracy: 0.6423 - val_loss: 3.0853 - val_accuracy: 0.3138 - val_top-5-accuracy: 0.6408 Epoch 9/40 352/352 [==============================] - 11s 31ms/step - loss: 3.0237 - accuracy: 0.3349 - top-5-accuracy: 0.6541 - val_loss: 3.0882 - val_accuracy: 0.3130 - val_top-5-accuracy: 0.6370 Epoch 10/40 352/352 [==============================] - 11s 31ms/step - loss: 2.9877 - accuracy: 0.3438 - top-5-accuracy: 0.6649 - val_loss: 3.0532 - val_accuracy: 0.3298 - val_top-5-accuracy: 0.6482 Epoch 11/40 352/352 [==============================] - 11s 31ms/step - loss: 2.9571 - accuracy: 0.3520 - top-5-accuracy: 0.6712 - val_loss: 3.0547 - val_accuracy: 0.3320 - val_top-5-accuracy: 0.6450 Epoch 12/40 352/352 [==============================] - 11s 31ms/step - loss: 2.9238 - accuracy: 0.3640 - top-5-accuracy: 0.6798 - val_loss: 2.9833 - val_accuracy: 0.3462 - val_top-5-accuracy: 0.6602 Epoch 13/40 352/352 [==============================] - 11s 31ms/step - loss: 2.9048 - accuracy: 0.3674 - top-5-accuracy: 0.6869 - val_loss: 2.9779 - val_accuracy: 0.3458 - val_top-5-accuracy: 0.6724 Epoch 14/40 352/352 [==============================] - 11s 31ms/step - loss: 2.8822 - accuracy: 0.3717 - top-5-accuracy: 0.6923 - val_loss: 2.9549 - val_accuracy: 0.3552 - val_top-5-accuracy: 0.6748 Epoch 15/40 352/352 [==============================] - 11s 31ms/step - loss: 2.8578 
- accuracy: 0.3826 - top-5-accuracy: 0.6981 - val_loss: 2.9447 - val_accuracy: 0.3584 - val_top-5-accuracy: 0.6786 Epoch 16/40 352/352 [==============================] - 11s 31ms/step - loss: 2.8404 - accuracy: 0.3852 - top-5-accuracy: 0.7024 - val_loss: 2.9087 - val_accuracy: 0.3650 - val_top-5-accuracy: 0.6842 Epoch 17/40 352/352 [==============================] - 11s 31ms/step - loss: 2.8234 - accuracy: 0.3910 - top-5-accuracy: 0.7076 - val_loss: 2.8884 - val_accuracy: 0.3748 - val_top-5-accuracy: 0.6868 Epoch 18/40 352/352 [==============================] - 11s 31ms/step - loss: 2.8014 - accuracy: 0.3974 - top-5-accuracy: 0.7124 - val_loss: 2.8979 - val_accuracy: 0.3696 - val_top-5-accuracy: 0.6908 Epoch 19/40 352/352 [==============================] - 11s 31ms/step - loss: 2.7928 - accuracy: 0.3961 - top-5-accuracy: 0.7172 - val_loss: 2.8873 - val_accuracy: 0.3756 - val_top-5-accuracy: 0.6924 Epoch 20/40 352/352 [==============================] - 11s 31ms/step - loss: 2.7800 - accuracy: 0.4026 - top-5-accuracy: 0.7186 - val_loss: 2.8544 - val_accuracy: 0.3834 - val_top-5-accuracy: 0.7004 Epoch 21/40 352/352 [==============================] - 11s 31ms/step - loss: 2.7659 - accuracy: 0.4095 - top-5-accuracy: 0.7236 - val_loss: 2.8626 - val_accuracy: 0.3840 - val_top-5-accuracy: 0.6896 Epoch 22/40 352/352 [==============================] - 11s 31ms/step - loss: 2.7499 - accuracy: 0.4098 - top-5-accuracy: 0.7278 - val_loss: 2.8621 - val_accuracy: 0.3868 - val_top-5-accuracy: 0.6944 Epoch 23/40 352/352 [==============================] - 11s 31ms/step - loss: 2.7389 - accuracy: 0.4136 - top-5-accuracy: 0.7305 - val_loss: 2.8527 - val_accuracy: 0.3834 - val_top-5-accuracy: 0.7002 Epoch 24/40 352/352 [==============================] - 11s 31ms/step - loss: 2.7219 - accuracy: 0.4198 - top-5-accuracy: 0.7360 - val_loss: 2.9078 - val_accuracy: 0.3738 - val_top-5-accuracy: 0.6796 Epoch 25/40 352/352 [==============================] - 11s 32ms/step - loss: 2.7119 - accuracy: 0.4195 - top-5-accuracy: 0.7373 - val_loss: 2.8470 - val_accuracy: 0.3840 - val_top-5-accuracy: 0.6994 Epoch 26/40 352/352 [==============================] - 11s 32ms/step - loss: 2.7079 - accuracy: 0.4214 - top-5-accuracy: 0.7355 - val_loss: 2.8101 - val_accuracy: 0.3934 - val_top-5-accuracy: 0.7130 Epoch 27/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6925 - accuracy: 0.4280 - top-5-accuracy: 0.7398 - val_loss: 2.8660 - val_accuracy: 0.3804 - val_top-5-accuracy: 0.6996 Epoch 28/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6864 - accuracy: 0.4273 - top-5-accuracy: 0.7430 - val_loss: 2.7863 - val_accuracy: 0.4014 - val_top-5-accuracy: 0.7234 Epoch 29/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6763 - accuracy: 0.4324 - top-5-accuracy: 0.7472 - val_loss: 2.7852 - val_accuracy: 0.4030 - val_top-5-accuracy: 0.7158 Epoch 30/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6656 - accuracy: 0.4356 - top-5-accuracy: 0.7489 - val_loss: 2.7991 - val_accuracy: 0.3940 - val_top-5-accuracy: 0.7104 Epoch 31/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6589 - accuracy: 0.4383 - top-5-accuracy: 0.7512 - val_loss: 2.7938 - val_accuracy: 0.3966 - val_top-5-accuracy: 0.7148 Epoch 32/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6509 - accuracy: 0.4367 - top-5-accuracy: 0.7530 - val_loss: 2.8226 - val_accuracy: 0.3944 - val_top-5-accuracy: 0.7092 Epoch 33/40 352/352 [==============================] - 
11s 31ms/step - loss: 2.6384 - accuracy: 0.4432 - top-5-accuracy: 0.7565 - val_loss: 2.8171 - val_accuracy: 0.3964 - val_top-5-accuracy: 0.7060 Epoch 34/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6317 - accuracy: 0.4446 - top-5-accuracy: 0.7561 - val_loss: 2.7923 - val_accuracy: 0.3970 - val_top-5-accuracy: 0.7134 Epoch 35/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6241 - accuracy: 0.4447 - top-5-accuracy: 0.7574 - val_loss: 2.7664 - val_accuracy: 0.4108 - val_top-5-accuracy: 0.7180 Epoch 36/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6199 - accuracy: 0.4467 - top-5-accuracy: 0.7586 - val_loss: 2.7480 - val_accuracy: 0.4078 - val_top-5-accuracy: 0.7242 Epoch 37/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6127 - accuracy: 0.4506 - top-5-accuracy: 0.7608 - val_loss: 2.7651 - val_accuracy: 0.4052 - val_top-5-accuracy: 0.7218 Epoch 38/40 352/352 [==============================] - 11s 31ms/step - loss: 2.6025 - accuracy: 0.4520 - top-5-accuracy: 0.7620 - val_loss: 2.7641 - val_accuracy: 0.4114 - val_top-5-accuracy: 0.7254 Epoch 39/40 352/352 [==============================] - 11s 31ms/step - loss: 2.5934 - accuracy: 0.4542 - top-5-accuracy: 0.7670 - val_loss: 2.7453 - val_accuracy: 0.4120 - val_top-5-accuracy: 0.7200 Epoch 40/40 352/352 [==============================] - 11s 31ms/step - loss: 2.5859 - accuracy: 0.4565 - top-5-accuracy: 0.7688 - val_loss: 2.7504 - val_accuracy: 0.4118 - val_top-5-accuracy: 0.7268 Let's visualize the training progress of the model. plt.plot(history.history[\"loss\"], label=\"train_loss\") plt.plot(history.history[\"val_loss\"], label=\"val_loss\") plt.xlabel(\"Epochs\") plt.ylabel(\"Loss\") plt.title(\"Train and Validation Losses Over Epochs\", fontsize=14) plt.legend() plt.grid() plt.show() png Let's display the final results of the training on CIFAR-100. loss, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f\"Test loss: {round(loss, 2)}\") print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") print(f\"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%\") 313/313 [==============================] - 3s 8ms/step - loss: 2.7039 - accuracy: 0.4288 - top-5-accuracy: 0.7366 Test loss: 2.7 Test accuracy: 42.88% Test top 5 accuracy: 73.66% The Swin Transformer model we just trained has just 152K parameters, and it gets us to ~75% test top-5 accuracy within only 40 epochs, without any signs of overfitting, as seen in the graph above. This means we can train this network for longer (perhaps with a bit more regularization) and obtain even better performance. This performance can be improved further by additional techniques such as a cosine decay learning rate schedule and other data augmentation techniques. While experimenting, I tried training the model for 150 epochs with a slightly higher dropout and greater embedding dimensions, which pushed the performance to ~72% test accuracy on CIFAR-100, as you can see in the screenshot. Results of training for longer The authors present a top-1 accuracy of 87.3% on ImageNet. The authors also present a number of experiments to study how input sizes, optimizers, etc. affect the final performance of this model. The authors further demonstrate the use of this model for object detection, semantic segmentation, and instance segmentation, and report competitive results for these tasks. You are strongly advised to also check out the original paper.
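As a concrete illustration of one of the improvements mentioned above, the constant learning rate could be swapped for a cosine decay schedule inside the same AdamW optimizer. This is only a sketch, not part of the trained example; the step counts are illustrative and assume the 10% validation split and batch size of 128 used above.
import tensorflow as tf
import tensorflow_addons as tfa

# Sketch: decay the learning rate with a cosine schedule over a longer (e.g. 150-epoch) run.
steps_per_epoch = 45000 // 128  # 50,000 CIFAR-100 training images minus the 10% validation split
cosine_lr = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=1e-3,      # same starting learning rate as above
    decay_steps=steps_per_epoch * 150,
)
optimizer = tfa.optimizers.AdamW(
    learning_rate=cosine_lr, weight_decay=1e-4  # same weight decay as above
)
# The optimizer can then be passed to model.compile() exactly as before.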
This example takes inspiration from the official PyTorch and TensorFlow implementations. Implementing the Vision Transformer (ViT) model for image classification. Introduction This example implements the Vision Transformer (ViT) model by Alexey Dosovitskiy et al. for image classification, and demonstrates it on the CIFAR-100 dataset. The ViT model applies the Transformer architecture with self-attention to sequences of image patches, without using convolution layers. This example requires TensorFlow 2.4 or higher, as well as TensorFlow Addons, which can be installed using the following command: pip install -U tensorflow-addons Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_addons as tfa Prepare the data num_classes = 100 input_shape = (32, 32, 3) (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() print(f\"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}\") print(f\"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}\") x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 1) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 1) Configure the hyperparameters learning_rate = 0.001 weight_decay = 0.0001 batch_size = 256 num_epochs = 100 image_size = 72 # We'll resize input images to this size patch_size = 6 # Size of the patches to be extract from the input images num_patches = (image_size // patch_size) ** 2 projection_dim = 64 num_heads = 4 transformer_units = [ projection_dim * 2, projection_dim, ] # Size of the transformer layers transformer_layers = 8 mlp_head_units = [2048, 1024] # Size of the dense layers of the final classifier Use data augmentation data_augmentation = keras.Sequential( [ layers.Normalization(), layers.Resizing(image_size, image_size), layers.RandomFlip(\"horizontal\"), layers.RandomRotation(factor=0.02), layers.RandomZoom( height_factor=0.2, width_factor=0.2 ), ], name=\"data_augmentation\", ) # Compute the mean and the variance of the training data for normalization. 
data_augmentation.layers[0].adapt(x_train) Implement multilayer perceptron (MLP) def mlp(x, hidden_units, dropout_rate): for units in hidden_units: x = layers.Dense(units, activation=tf.nn.gelu)(x) x = layers.Dropout(dropout_rate)(x) return x Implement patch creation as a layer class Patches(layers.Layer): def __init__(self, patch_size): super(Patches, self).__init__() self.patch_size = patch_size def call(self, images): batch_size = tf.shape(images)[0] patches = tf.image.extract_patches( images=images, sizes=[1, self.patch_size, self.patch_size, 1], strides=[1, self.patch_size, self.patch_size, 1], rates=[1, 1, 1, 1], padding=\"VALID\", ) patch_dims = patches.shape[-1] patches = tf.reshape(patches, [batch_size, -1, patch_dims]) return patches Let's display patches for a sample image import matplotlib.pyplot as plt plt.figure(figsize=(4, 4)) image = x_train[np.random.choice(range(x_train.shape[0]))] plt.imshow(image.astype(\"uint8\")) plt.axis(\"off\") resized_image = tf.image.resize( tf.convert_to_tensor([image]), size=(image_size, image_size) ) patches = Patches(patch_size)(resized_image) print(f\"Image size: {image_size} X {image_size}\") print(f\"Patch size: {patch_size} X {patch_size}\") print(f\"Patches per image: {patches.shape[1]}\") print(f\"Elements per patch: {patches.shape[-1]}\") n = int(np.sqrt(patches.shape[1])) plt.figure(figsize=(4, 4)) for i, patch in enumerate(patches[0]): ax = plt.subplot(n, n, i + 1) patch_img = tf.reshape(patch, (patch_size, patch_size, 3)) plt.imshow(patch_img.numpy().astype(\"uint8\")) plt.axis(\"off\") Image size: 72 X 72 Patch size: 6 X 6 Patches per image: 144 Elements per patch: 108 png png Implement the patch encoding layer The PatchEncoder layer will linearly transform a patch by projecting it into a vector of size projection_dim. In addition, it adds a learnable position embedding to the projected vector. class PatchEncoder(layers.Layer): def __init__(self, num_patches, projection_dim): super(PatchEncoder, self).__init__() self.num_patches = num_patches self.projection = layers.Dense(units=projection_dim) self.position_embedding = layers.Embedding( input_dim=num_patches, output_dim=projection_dim ) def call(self, patch): positions = tf.range(start=0, limit=self.num_patches, delta=1) encoded = self.projection(patch) + self.position_embedding(positions) return encoded Build the ViT model The ViT model consists of multiple Transformer blocks, which use the layers.MultiHeadAttention layer as a self-attention mechanism applied to the sequence of patches. The Transformer blocks produce a [batch_size, num_patches, projection_dim] tensor, which is processed via a classifier head with softmax to produce the final class probabilities output. Unlike the technique described in the paper, which prepends a learnable embedding to the sequence of encoded patches to serve as the image representation, all the outputs of the final Transformer block are reshaped with layers.Flatten() and used as the image representation input to the classifier head. Note that the layers.GlobalAveragePooling1D layer could also be used instead to aggregate the outputs of the Transformer block, especially when the number of patches and the projection dimensions are large. def create_vit_classifier(): inputs = layers.Input(shape=input_shape) # Augment data. augmented = data_augmentation(inputs) # Create patches. patches = Patches(patch_size)(augmented) # Encode patches.
encoded_patches = PatchEncoder(num_patches, projection_dim)(patches) # Create multiple layers of the Transformer block. for _ in range(transformer_layers): # Layer normalization 1. x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches) # Create a multi-head attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=projection_dim, dropout=0.1 )(x1, x1) # Skip connection 1. x2 = layers.Add()([attention_output, encoded_patches]) # Layer normalization 2. x3 = layers.LayerNormalization(epsilon=1e-6)(x2) # MLP. x3 = mlp(x3, hidden_units=transformer_units, dropout_rate=0.1) # Skip connection 2. encoded_patches = layers.Add()([x3, x2]) # Create a [batch_size, projection_dim] tensor. representation = layers.LayerNormalization(epsilon=1e-6)(encoded_patches) representation = layers.Flatten()(representation) representation = layers.Dropout(0.5)(representation) # Add MLP. features = mlp(representation, hidden_units=mlp_head_units, dropout_rate=0.5) # Classify outputs. logits = layers.Dense(num_classes)(features) # Create the Keras model. model = keras.Model(inputs=inputs, outputs=logits) return model Compile, train, and evaluate the model def run_experiment(model): optimizer = tfa.optimizers.AdamW( learning_rate=learning_rate, weight_decay=weight_decay ) model.compile( optimizer=optimizer, loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[ keras.metrics.SparseCategoricalAccuracy(name=\"accuracy\"), keras.metrics.SparseTopKCategoricalAccuracy(5, name=\"top-5-accuracy\"), ], ) checkpoint_filepath = \"/tmp/checkpoint\" checkpoint_callback = keras.callbacks.ModelCheckpoint( checkpoint_filepath, monitor=\"val_accuracy\", save_best_only=True, save_weights_only=True, ) history = model.fit( x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs, validation_split=0.1, callbacks=[checkpoint_callback], ) model.load_weights(checkpoint_filepath) _, accuracy, top_5_accuracy = model.evaluate(x_test, y_test) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") print(f\"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%\") return history vit_classifier = create_vit_classifier() history = run_experiment(vit_classifier) Epoch 1/100 176/176 [==============================] - 33s 136ms/step - loss: 4.8863 - accuracy: 0.0294 - top-5-accuracy: 0.1117 - val_loss: 3.9661 - val_accuracy: 0.0992 - val_top-5-accuracy: 0.3056 Epoch 2/100 176/176 [==============================] - 22s 127ms/step - loss: 4.0162 - accuracy: 0.0865 - top-5-accuracy: 0.2683 - val_loss: 3.5691 - val_accuracy: 0.1630 - val_top-5-accuracy: 0.4226 Epoch 3/100 176/176 [==============================] - 22s 127ms/step - loss: 3.7313 - accuracy: 0.1254 - top-5-accuracy: 0.3535 - val_loss: 3.3455 - val_accuracy: 0.1976 - val_top-5-accuracy: 0.4756 Epoch 4/100 176/176 [==============================] - 23s 128ms/step - loss: 3.5411 - accuracy: 0.1541 - top-5-accuracy: 0.4121 - val_loss: 3.1925 - val_accuracy: 0.2274 - val_top-5-accuracy: 0.5126 Epoch 5/100 176/176 [==============================] - 22s 127ms/step - loss: 3.3749 - accuracy: 0.1847 - top-5-accuracy: 0.4572 - val_loss: 3.1043 - val_accuracy: 0.2388 - val_top-5-accuracy: 0.5320 Epoch 6/100 176/176 [==============================] - 22s 127ms/step - loss: 3.2589 - accuracy: 0.2057 - top-5-accuracy: 0.4906 - val_loss: 2.9319 - val_accuracy: 0.2782 - val_top-5-accuracy: 0.5756 Epoch 7/100 176/176 [==============================] - 22s 127ms/step - loss: 3.1165 - accuracy: 0.2331 - top-5-accuracy: 0.5273 - val_loss: 2.8072
- val_accuracy: 0.2972 - val_top-5-accuracy: 0.5946 Epoch 8/100 176/176 [==============================] - 22s 127ms/step - loss: 2.9902 - accuracy: 0.2563 - top-5-accuracy: 0.5556 - val_loss: 2.7207 - val_accuracy: 0.3188 - val_top-5-accuracy: 0.6258 Epoch 9/100 176/176 [==============================] - 22s 127ms/step - loss: 2.8828 - accuracy: 0.2800 - top-5-accuracy: 0.5827 - val_loss: 2.6396 - val_accuracy: 0.3244 - val_top-5-accuracy: 0.6402 Epoch 10/100 176/176 [==============================] - 23s 128ms/step - loss: 2.7824 - accuracy: 0.2997 - top-5-accuracy: 0.6110 - val_loss: 2.5580 - val_accuracy: 0.3494 - val_top-5-accuracy: 0.6568 Epoch 11/100 176/176 [==============================] - 23s 130ms/step - loss: 2.6743 - accuracy: 0.3209 - top-5-accuracy: 0.6333 - val_loss: 2.5000 - val_accuracy: 0.3594 - val_top-5-accuracy: 0.6726 Epoch 12/100 176/176 [==============================] - 23s 130ms/step - loss: 2.5800 - accuracy: 0.3431 - top-5-accuracy: 0.6522 - val_loss: 2.3900 - val_accuracy: 0.3798 - val_top-5-accuracy: 0.6878 Epoch 13/100 176/176 [==============================] - 23s 128ms/step - loss: 2.5019 - accuracy: 0.3559 - top-5-accuracy: 0.6671 - val_loss: 2.3464 - val_accuracy: 0.3960 - val_top-5-accuracy: 0.7002 Epoch 14/100 176/176 [==============================] - 22s 128ms/step - loss: 2.4207 - accuracy: 0.3728 - top-5-accuracy: 0.6905 - val_loss: 2.3130 - val_accuracy: 0.4032 - val_top-5-accuracy: 0.7040 Epoch 15/100 176/176 [==============================] - 23s 128ms/step - loss: 2.3371 - accuracy: 0.3932 - top-5-accuracy: 0.7093 - val_loss: 2.2447 - val_accuracy: 0.4136 - val_top-5-accuracy: 0.7202 Epoch 16/100 176/176 [==============================] - 23s 128ms/step - loss: 2.2650 - accuracy: 0.4077 - top-5-accuracy: 0.7201 - val_loss: 2.2101 - val_accuracy: 0.4222 - val_top-5-accuracy: 0.7246 Epoch 17/100 176/176 [==============================] - 22s 127ms/step - loss: 2.1822 - accuracy: 0.4204 - top-5-accuracy: 0.7376 - val_loss: 2.1446 - val_accuracy: 0.4344 - val_top-5-accuracy: 0.7416 Epoch 18/100 176/176 [==============================] - 22s 128ms/step - loss: 2.1485 - accuracy: 0.4284 - top-5-accuracy: 0.7476 - val_loss: 2.1094 - val_accuracy: 0.4432 - val_top-5-accuracy: 0.7454 Epoch 19/100 176/176 [==============================] - 22s 128ms/step - loss: 2.0717 - accuracy: 0.4464 - top-5-accuracy: 0.7618 - val_loss: 2.0718 - val_accuracy: 0.4584 - val_top-5-accuracy: 0.7570 Epoch 20/100 176/176 [==============================] - 22s 127ms/step - loss: 2.0031 - accuracy: 0.4605 - top-5-accuracy: 0.7731 - val_loss: 2.0286 - val_accuracy: 0.4610 - val_top-5-accuracy: 0.7654 Epoch 21/100 176/176 [==============================] - 22s 127ms/step - loss: 1.9650 - accuracy: 0.4700 - top-5-accuracy: 0.7820 - val_loss: 2.0225 - val_accuracy: 0.4642 - val_top-5-accuracy: 0.7628 Epoch 22/100 176/176 [==============================] - 22s 127ms/step - loss: 1.9066 - accuracy: 0.4839 - top-5-accuracy: 0.7904 - val_loss: 1.9961 - val_accuracy: 0.4746 - val_top-5-accuracy: 0.7656 Epoch 23/100 176/176 [==============================] - 22s 127ms/step - loss: 1.8564 - accuracy: 0.4952 - top-5-accuracy: 0.8030 - val_loss: 1.9769 - val_accuracy: 0.4828 - val_top-5-accuracy: 0.7742 Epoch 24/100 176/176 [==============================] - 22s 128ms/step - loss: 1.8167 - accuracy: 0.5034 - top-5-accuracy: 0.8099 - val_loss: 1.9730 - val_accuracy: 0.4766 - val_top-5-accuracy: 0.7728 Epoch 25/100 176/176 [==============================] - 22s 128ms/step - loss: 1.7788 - 
accuracy: 0.5124 - top-5-accuracy: 0.8174 - val_loss: 1.9187 - val_accuracy: 0.4926 - val_top-5-accuracy: 0.7854 Epoch 26/100 176/176 [==============================] - 23s 128ms/step - loss: 1.7437 - accuracy: 0.5187 - top-5-accuracy: 0.8206 - val_loss: 1.9732 - val_accuracy: 0.4792 - val_top-5-accuracy: 0.7772 Epoch 27/100 176/176 [==============================] - 23s 128ms/step - loss: 1.6929 - accuracy: 0.5300 - top-5-accuracy: 0.8287 - val_loss: 1.9109 - val_accuracy: 0.4928 - val_top-5-accuracy: 0.7912 Epoch 28/100 176/176 [==============================] - 23s 129ms/step - loss: 1.6647 - accuracy: 0.5400 - top-5-accuracy: 0.8362 - val_loss: 1.9031 - val_accuracy: 0.4984 - val_top-5-accuracy: 0.7824 Epoch 29/100 176/176 [==============================] - 23s 129ms/step - loss: 1.6295 - accuracy: 0.5488 - top-5-accuracy: 0.8402 - val_loss: 1.8744 - val_accuracy: 0.4982 - val_top-5-accuracy: 0.7910 Epoch 30/100 176/176 [==============================] - 22s 128ms/step - loss: 1.5860 - accuracy: 0.5548 - top-5-accuracy: 0.8504 - val_loss: 1.8551 - val_accuracy: 0.5108 - val_top-5-accuracy: 0.7946 Epoch 31/100 176/176 [==============================] - 22s 127ms/step - loss: 1.5666 - accuracy: 0.5614 - top-5-accuracy: 0.8548 - val_loss: 1.8720 - val_accuracy: 0.5076 - val_top-5-accuracy: 0.7960 Epoch 32/100 176/176 [==============================] - 22s 127ms/step - loss: 1.5272 - accuracy: 0.5712 - top-5-accuracy: 0.8596 - val_loss: 1.8840 - val_accuracy: 0.5106 - val_top-5-accuracy: 0.7966 Epoch 33/100 176/176 [==============================] - 22s 128ms/step - loss: 1.4995 - accuracy: 0.5779 - top-5-accuracy: 0.8651 - val_loss: 1.8660 - val_accuracy: 0.5116 - val_top-5-accuracy: 0.7904 Epoch 34/100 176/176 [==============================] - 22s 128ms/step - loss: 1.4686 - accuracy: 0.5849 - top-5-accuracy: 0.8685 - val_loss: 1.8544 - val_accuracy: 0.5126 - val_top-5-accuracy: 0.7954 Epoch 35/100 176/176 [==============================] - 22s 127ms/step - loss: 1.4276 - accuracy: 0.5992 - top-5-accuracy: 0.8743 - val_loss: 1.8497 - val_accuracy: 0.5164 - val_top-5-accuracy: 0.7990 Epoch 36/100 176/176 [==============================] - 22s 127ms/step - loss: 1.4102 - accuracy: 0.5970 - top-5-accuracy: 0.8768 - val_loss: 1.8496 - val_accuracy: 0.5198 - val_top-5-accuracy: 0.7948 Epoch 37/100 176/176 [==============================] - 22s 126ms/step - loss: 1.3800 - accuracy: 0.6112 - top-5-accuracy: 0.8814 - val_loss: 1.8033 - val_accuracy: 0.5284 - val_top-5-accuracy: 0.8068 Epoch 38/100 176/176 [==============================] - 22s 126ms/step - loss: 1.3500 - accuracy: 0.6103 - top-5-accuracy: 0.8862 - val_loss: 1.8092 - val_accuracy: 0.5214 - val_top-5-accuracy: 0.8128 Epoch 39/100 176/176 [==============================] - 22s 127ms/step - loss: 1.3575 - accuracy: 0.6127 - top-5-accuracy: 0.8857 - val_loss: 1.8175 - val_accuracy: 0.5198 - val_top-5-accuracy: 0.8086 Epoch 40/100 176/176 [==============================] - 22s 126ms/step - loss: 1.3030 - accuracy: 0.6283 - top-5-accuracy: 0.8927 - val_loss: 1.8361 - val_accuracy: 0.5170 - val_top-5-accuracy: 0.8056 Epoch 41/100 176/176 [==============================] - 22s 125ms/step - loss: 1.3160 - accuracy: 0.6247 - top-5-accuracy: 0.8923 - val_loss: 1.8074 - val_accuracy: 0.5260 - val_top-5-accuracy: 0.8082 Epoch 42/100 176/176 [==============================] - 22s 126ms/step - loss: 1.2679 - accuracy: 0.6329 - top-5-accuracy: 0.9002 - val_loss: 1.8430 - val_accuracy: 0.5244 - val_top-5-accuracy: 0.8100 Epoch 43/100 176/176 
[==============================] - 22s 126ms/step - loss: 1.2514 - accuracy: 0.6375 - top-5-accuracy: 0.9034 - val_loss: 1.8318 - val_accuracy: 0.5196 - val_top-5-accuracy: 0.8034 Epoch 44/100 176/176 [==============================] - 22s 126ms/step - loss: 1.2311 - accuracy: 0.6431 - top-5-accuracy: 0.9067 - val_loss: 1.8283 - val_accuracy: 0.5218 - val_top-5-accuracy: 0.8050 Epoch 45/100 176/176 [==============================] - 22s 125ms/step - loss: 1.2073 - accuracy: 0.6484 - top-5-accuracy: 0.9098 - val_loss: 1.8384 - val_accuracy: 0.5302 - val_top-5-accuracy: 0.8056 Epoch 46/100 176/176 [==============================] - 22s 125ms/step - loss: 1.1775 - accuracy: 0.6558 - top-5-accuracy: 0.9117 - val_loss: 1.8409 - val_accuracy: 0.5294 - val_top-5-accuracy: 0.8078 Epoch 47/100 176/176 [==============================] - 22s 126ms/step - loss: 1.1891 - accuracy: 0.6563 - top-5-accuracy: 0.9103 - val_loss: 1.8167 - val_accuracy: 0.5346 - val_top-5-accuracy: 0.8142 Epoch 48/100 176/176 [==============================] - 22s 127ms/step - loss: 1.1586 - accuracy: 0.6621 - top-5-accuracy: 0.9161 - val_loss: 1.8285 - val_accuracy: 0.5314 - val_top-5-accuracy: 0.8086 Epoch 49/100 176/176 [==============================] - 22s 126ms/step - loss: 1.1586 - accuracy: 0.6634 - top-5-accuracy: 0.9154 - val_loss: 1.8189 - val_accuracy: 0.5366 - val_top-5-accuracy: 0.8134 Epoch 50/100 176/176 [==============================] - 22s 126ms/step - loss: 1.1306 - accuracy: 0.6682 - top-5-accuracy: 0.9199 - val_loss: 1.8442 - val_accuracy: 0.5254 - val_top-5-accuracy: 0.8096 Epoch 51/100 176/176 [==============================] - 22s 126ms/step - loss: 1.1175 - accuracy: 0.6708 - top-5-accuracy: 0.9227 - val_loss: 1.8513 - val_accuracy: 0.5230 - val_top-5-accuracy: 0.8104 Epoch 52/100 176/176 [==============================] - 22s 126ms/step - loss: 1.1104 - accuracy: 0.6743 - top-5-accuracy: 0.9226 - val_loss: 1.8041 - val_accuracy: 0.5332 - val_top-5-accuracy: 0.8142 Epoch 53/100 176/176 [==============================] - 22s 127ms/step - loss: 1.0914 - accuracy: 0.6809 - top-5-accuracy: 0.9236 - val_loss: 1.8213 - val_accuracy: 0.5342 - val_top-5-accuracy: 0.8094 Epoch 54/100 176/176 [==============================] - 22s 126ms/step - loss: 1.0681 - accuracy: 0.6856 - top-5-accuracy: 0.9270 - val_loss: 1.8429 - val_accuracy: 0.5328 - val_top-5-accuracy: 0.8086 Epoch 55/100 176/176 [==============================] - 22s 126ms/step - loss: 1.0625 - accuracy: 0.6862 - top-5-accuracy: 0.9301 - val_loss: 1.8316 - val_accuracy: 0.5364 - val_top-5-accuracy: 0.8090 Epoch 56/100 176/176 [==============================] - 22s 127ms/step - loss: 1.0474 - accuracy: 0.6920 - top-5-accuracy: 0.9308 - val_loss: 1.8310 - val_accuracy: 0.5440 - val_top-5-accuracy: 0.8132 Epoch 57/100 176/176 [==============================] - 22s 127ms/step - loss: 1.0381 - accuracy: 0.6974 - top-5-accuracy: 0.9297 - val_loss: 1.8447 - val_accuracy: 0.5368 - val_top-5-accuracy: 0.8126 Epoch 58/100 176/176 [==============================] - 22s 126ms/step - loss: 1.0230 - accuracy: 0.7011 - top-5-accuracy: 0.9341 - val_loss: 1.8241 - val_accuracy: 0.5418 - val_top-5-accuracy: 0.8094 Epoch 59/100 176/176 [==============================] - 22s 127ms/step - loss: 1.0113 - accuracy: 0.7023 - top-5-accuracy: 0.9361 - val_loss: 1.8216 - val_accuracy: 0.5380 - val_top-5-accuracy: 0.8134 Epoch 60/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9953 - accuracy: 0.7031 - top-5-accuracy: 0.9386 - val_loss: 1.8356 - 
val_accuracy: 0.5422 - val_top-5-accuracy: 0.8122 Epoch 61/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9928 - accuracy: 0.7084 - top-5-accuracy: 0.9375 - val_loss: 1.8514 - val_accuracy: 0.5342 - val_top-5-accuracy: 0.8182 Epoch 62/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9740 - accuracy: 0.7121 - top-5-accuracy: 0.9387 - val_loss: 1.8674 - val_accuracy: 0.5366 - val_top-5-accuracy: 0.8092 Epoch 63/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9742 - accuracy: 0.7112 - top-5-accuracy: 0.9413 - val_loss: 1.8274 - val_accuracy: 0.5414 - val_top-5-accuracy: 0.8144 Epoch 64/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9633 - accuracy: 0.7147 - top-5-accuracy: 0.9393 - val_loss: 1.8250 - val_accuracy: 0.5434 - val_top-5-accuracy: 0.8180 Epoch 65/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9407 - accuracy: 0.7221 - top-5-accuracy: 0.9444 - val_loss: 1.8456 - val_accuracy: 0.5424 - val_top-5-accuracy: 0.8120 Epoch 66/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9410 - accuracy: 0.7194 - top-5-accuracy: 0.9447 - val_loss: 1.8559 - val_accuracy: 0.5460 - val_top-5-accuracy: 0.8144 Epoch 67/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9359 - accuracy: 0.7252 - top-5-accuracy: 0.9421 - val_loss: 1.8352 - val_accuracy: 0.5458 - val_top-5-accuracy: 0.8110 Epoch 68/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9232 - accuracy: 0.7254 - top-5-accuracy: 0.9460 - val_loss: 1.8479 - val_accuracy: 0.5444 - val_top-5-accuracy: 0.8132 Epoch 69/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9138 - accuracy: 0.7283 - top-5-accuracy: 0.9456 - val_loss: 1.8697 - val_accuracy: 0.5312 - val_top-5-accuracy: 0.8052 Epoch 70/100 176/176 [==============================] - 22s 126ms/step - loss: 0.9095 - accuracy: 0.7295 - top-5-accuracy: 0.9478 - val_loss: 1.8550 - val_accuracy: 0.5376 - val_top-5-accuracy: 0.8170 Epoch 71/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8945 - accuracy: 0.7332 - top-5-accuracy: 0.9504 - val_loss: 1.8286 - val_accuracy: 0.5436 - val_top-5-accuracy: 0.8198 Epoch 72/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8936 - accuracy: 0.7344 - top-5-accuracy: 0.9479 - val_loss: 1.8727 - val_accuracy: 0.5438 - val_top-5-accuracy: 0.8182 Epoch 73/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8775 - accuracy: 0.7355 - top-5-accuracy: 0.9510 - val_loss: 1.8522 - val_accuracy: 0.5404 - val_top-5-accuracy: 0.8170 Epoch 74/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8660 - accuracy: 0.7390 - top-5-accuracy: 0.9513 - val_loss: 1.8432 - val_accuracy: 0.5448 - val_top-5-accuracy: 0.8156 Epoch 75/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8583 - accuracy: 0.7441 - top-5-accuracy: 0.9532 - val_loss: 1.8419 - val_accuracy: 0.5462 - val_top-5-accuracy: 0.8226 Epoch 76/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8549 - accuracy: 0.7443 - top-5-accuracy: 0.9529 - val_loss: 1.8757 - val_accuracy: 0.5454 - val_top-5-accuracy: 0.8086 Epoch 77/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8578 - accuracy: 0.7384 - top-5-accuracy: 0.9531 - val_loss: 1.9051 - val_accuracy: 0.5462 - val_top-5-accuracy: 0.8136 Epoch 78/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8530 - 
accuracy: 0.7442 - top-5-accuracy: 0.9526 - val_loss: 1.8496 - val_accuracy: 0.5384 - val_top-5-accuracy: 0.8124 Epoch 79/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8403 - accuracy: 0.7485 - top-5-accuracy: 0.9542 - val_loss: 1.8701 - val_accuracy: 0.5550 - val_top-5-accuracy: 0.8228 Epoch 80/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8410 - accuracy: 0.7491 - top-5-accuracy: 0.9538 - val_loss: 1.8737 - val_accuracy: 0.5502 - val_top-5-accuracy: 0.8150 Epoch 81/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8275 - accuracy: 0.7547 - top-5-accuracy: 0.9532 - val_loss: 1.8391 - val_accuracy: 0.5534 - val_top-5-accuracy: 0.8156 Epoch 82/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8221 - accuracy: 0.7528 - top-5-accuracy: 0.9562 - val_loss: 1.8775 - val_accuracy: 0.5428 - val_top-5-accuracy: 0.8120 Epoch 83/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8270 - accuracy: 0.7526 - top-5-accuracy: 0.9550 - val_loss: 1.8464 - val_accuracy: 0.5468 - val_top-5-accuracy: 0.8148 Epoch 84/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8080 - accuracy: 0.7551 - top-5-accuracy: 0.9576 - val_loss: 1.8789 - val_accuracy: 0.5486 - val_top-5-accuracy: 0.8204 Epoch 85/100 176/176 [==============================] - 22s 125ms/step - loss: 0.8058 - accuracy: 0.7593 - top-5-accuracy: 0.9573 - val_loss: 1.8691 - val_accuracy: 0.5446 - val_top-5-accuracy: 0.8156 Epoch 86/100 176/176 [==============================] - 22s 126ms/step - loss: 0.8092 - accuracy: 0.7564 - top-5-accuracy: 0.9560 - val_loss: 1.8588 - val_accuracy: 0.5524 - val_top-5-accuracy: 0.8172 Epoch 87/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7897 - accuracy: 0.7613 - top-5-accuracy: 0.9604 - val_loss: 1.8649 - val_accuracy: 0.5490 - val_top-5-accuracy: 0.8166 Epoch 88/100 176/176 [==============================] - 22s 126ms/step - loss: 0.7890 - accuracy: 0.7635 - top-5-accuracy: 0.9598 - val_loss: 1.9060 - val_accuracy: 0.5446 - val_top-5-accuracy: 0.8112 Epoch 89/100 176/176 [==============================] - 22s 126ms/step - loss: 0.7682 - accuracy: 0.7687 - top-5-accuracy: 0.9620 - val_loss: 1.8645 - val_accuracy: 0.5474 - val_top-5-accuracy: 0.8150 Epoch 90/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7958 - accuracy: 0.7617 - top-5-accuracy: 0.9600 - val_loss: 1.8549 - val_accuracy: 0.5496 - val_top-5-accuracy: 0.8140 Epoch 91/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7978 - accuracy: 0.7603 - top-5-accuracy: 0.9590 - val_loss: 1.9169 - val_accuracy: 0.5440 - val_top-5-accuracy: 0.8140 Epoch 92/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7898 - accuracy: 0.7630 - top-5-accuracy: 0.9594 - val_loss: 1.9015 - val_accuracy: 0.5540 - val_top-5-accuracy: 0.8174 Epoch 93/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7550 - accuracy: 0.7722 - top-5-accuracy: 0.9622 - val_loss: 1.9219 - val_accuracy: 0.5410 - val_top-5-accuracy: 0.8098 Epoch 94/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7692 - accuracy: 0.7689 - top-5-accuracy: 0.9599 - val_loss: 1.8928 - val_accuracy: 0.5506 - val_top-5-accuracy: 0.8184 Epoch 95/100 176/176 [==============================] - 22s 126ms/step - loss: 0.7783 - accuracy: 0.7661 - top-5-accuracy: 0.9597 - val_loss: 1.8646 - val_accuracy: 0.5490 - val_top-5-accuracy: 0.8166 Epoch 96/100 176/176 
[==============================] - 22s 125ms/step - loss: 0.7547 - accuracy: 0.7711 - top-5-accuracy: 0.9638 - val_loss: 1.9347 - val_accuracy: 0.5484 - val_top-5-accuracy: 0.8150 Epoch 97/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7603 - accuracy: 0.7692 - top-5-accuracy: 0.9616 - val_loss: 1.8966 - val_accuracy: 0.5522 - val_top-5-accuracy: 0.8144 Epoch 98/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7595 - accuracy: 0.7730 - top-5-accuracy: 0.9610 - val_loss: 1.8728 - val_accuracy: 0.5470 - val_top-5-accuracy: 0.8170 Epoch 99/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7542 - accuracy: 0.7736 - top-5-accuracy: 0.9622 - val_loss: 1.9132 - val_accuracy: 0.5504 - val_top-5-accuracy: 0.8156 Epoch 100/100 176/176 [==============================] - 22s 125ms/step - loss: 0.7410 - accuracy: 0.7787 - top-5-accuracy: 0.9635 - val_loss: 1.9233 - val_accuracy: 0.5428 - val_top-5-accuracy: 0.8120 313/313 [==============================] - 4s 12ms/step - loss: 1.8487 - accuracy: 0.5514 - top-5-accuracy: 0.8186 Test accuracy: 55.14% Test top 5 accuracy: 81.86% After 100 epochs, the ViT model achieves around 55% accuracy and 82% top-5 accuracy on the test data. These are not competitive results on the CIFAR-100 dataset, as a ResNet50V2 trained from scratch on the same data can achieve 67% accuracy. Note that the state of the art results reported in the paper are achieved by pre-training the ViT model using the JFT-300M dataset, then fine-tuning it on the target dataset. To improve the model quality without pre-training, you can try to train the model for more epochs, use a larger number of Transformer layers, resize the input images, change the patch size, or increase the projection dimensions. Besides, as mentioned in the paper, the quality of the model is affected not only by architecture choices, but also by parameters such as the learning rate schedule, optimizer, weight decay, etc. In practice, it's recommended to fine-tune a ViT model that was pre-trained using a large, high-resolution dataset. 
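As noted in the model description above, layers.GlobalAveragePooling1D can be used instead of layers.Flatten() to aggregate the outputs of the final Transformer block. The snippet below is a minimal sketch of that variant of the classifier head, reusing the names defined in create_vit_classifier; it is an illustration, not code from the experiment above.

# Alternative aggregation inside create_vit_classifier: average over the patch axis
# instead of flattening all patch embeddings.
representation = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
representation = layers.GlobalAveragePooling1D()(representation)  # [batch_size, projection_dim]
representation = layers.Dropout(0.5)(representation)
features = mlp(representation, hidden_units=mlp_head_units, dropout_rate=0.5)
logits = layers.Dense(num_classes)(features)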
Image segmentation model trained from scratch on the Oxford Pets dataset Download the data !curl -O https://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz !curl -O https://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz !tar -xf images.tar.gz !tar -xf annotations.tar.gz % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 755M 100 755M 0 0 6943k 0 0:01:51 0:01:51 --:--:-- 7129k % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 18.2M 100 18.2M 0 0 5692k 0 0:00:03 0:00:03 --:--:-- 5692k Prepare paths of input images and target segmentation masks import os input_dir = \"images/\" target_dir = \"annotations/trimaps/\" img_size = (160, 160) num_classes = 3 batch_size = 32 input_img_paths = sorted( [ os.path.join(input_dir, fname) for fname in os.listdir(input_dir) if fname.endswith(\".jpg\") ] ) target_img_paths = sorted( [ os.path.join(target_dir, fname) for fname in os.listdir(target_dir) if fname.endswith(\".png\") and not fname.startswith(\".\") ] ) print(\"Number of samples:\", len(input_img_paths)) for input_path, target_path in zip(input_img_paths[:10], target_img_paths[:10]): print(input_path, \"|\", target_path) Number of samples: 7390 images/Abyssinian_1.jpg | annotations/trimaps/Abyssinian_1.png images/Abyssinian_10.jpg | annotations/trimaps/Abyssinian_10.png images/Abyssinian_100.jpg | annotations/trimaps/Abyssinian_100.png images/Abyssinian_101.jpg | annotations/trimaps/Abyssinian_101.png images/Abyssinian_102.jpg | annotations/trimaps/Abyssinian_102.png images/Abyssinian_103.jpg | annotations/trimaps/Abyssinian_103.png images/Abyssinian_104.jpg | annotations/trimaps/Abyssinian_104.png images/Abyssinian_105.jpg | annotations/trimaps/Abyssinian_105.png images/Abyssinian_106.jpg | annotations/trimaps/Abyssinian_106.png images/Abyssinian_107.jpg | annotations/trimaps/Abyssinian_107.png What does one input image and corresponding segmentation mask look like? 
from IPython.display import Image, display from tensorflow.keras.preprocessing.image import load_img import PIL from PIL import ImageOps # Display input image #9 display(Image(filename=input_img_paths[9])) # Display auto-contrast version of corresponding target (per-pixel categories) img = PIL.ImageOps.autocontrast(load_img(target_img_paths[9])) display(img) jpeg png Prepare Sequence class to load & vectorize batches of data from tensorflow import keras import numpy as np from tensorflow.keras.preprocessing.image import load_img class OxfordPets(keras.utils.Sequence): \"\"\"Helper to iterate over the data (as Numpy arrays).\"\"\" def __init__(self, batch_size, img_size, input_img_paths, target_img_paths): self.batch_size = batch_size self.img_size = img_size self.input_img_paths = input_img_paths self.target_img_paths = target_img_paths def __len__(self): return len(self.target_img_paths) // self.batch_size def __getitem__(self, idx): \"\"\"Returns tuple (input, target) corresponding to batch #idx.\"\"\" i = idx * self.batch_size batch_input_img_paths = self.input_img_paths[i : i + self.batch_size] batch_target_img_paths = self.target_img_paths[i : i + self.batch_size] x = np.zeros((self.batch_size,) + self.img_size + (3,), dtype=\"float32\") for j, path in enumerate(batch_input_img_paths): img = load_img(path, target_size=self.img_size) x[j] = img y = np.zeros((self.batch_size,) + self.img_size + (1,), dtype=\"uint8\") for j, path in enumerate(batch_target_img_paths): img = load_img(path, target_size=self.img_size, color_mode=\"grayscale\") y[j] = np.expand_dims(img, 2) # Ground truth labels are 1, 2, 3. Subtract one to make them 0, 1, 2: y[j] -= 1 return x, y Prepare U-Net Xception-style model from tensorflow.keras import layers def get_model(img_size, num_classes): inputs = keras.Input(shape=img_size + (3,)) ### [First half of the network: downsampling inputs] ### # Entry block x = layers.Conv2D(32, 3, strides=2, padding=\"same\")(inputs) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) previous_block_activation = x # Set aside residual # Blocks 1, 2, 3 are identical apart from the feature depth.
for filters in [64, 128, 256]: x = layers.Activation(\"relu\")(x) x = layers.SeparableConv2D(filters, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) x = layers.SeparableConv2D(filters, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.MaxPooling2D(3, strides=2, padding=\"same\")(x) # Project residual residual = layers.Conv2D(filters, 1, strides=2, padding=\"same\")( previous_block_activation ) x = layers.add([x, residual]) # Add back residual previous_block_activation = x # Set aside next residual ### [Second half of the network: upsampling inputs] ### for filters in [256, 128, 64, 32]: x = layers.Activation(\"relu\")(x) x = layers.Conv2DTranspose(filters, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.Activation(\"relu\")(x) x = layers.Conv2DTranspose(filters, 3, padding=\"same\")(x) x = layers.BatchNormalization()(x) x = layers.UpSampling2D(2)(x) # Project residual residual = layers.UpSampling2D(2)(previous_block_activation) residual = layers.Conv2D(filters, 1, padding=\"same\")(residual) x = layers.add([x, residual]) # Add back residual previous_block_activation = x # Set aside next residual # Add a per-pixel classification layer outputs = layers.Conv2D(num_classes, 3, activation=\"softmax\", padding=\"same\")(x) # Define the model model = keras.Model(inputs, outputs) return model # Free up RAM in case the model definition cells were run multiple times keras.backend.clear_session() # Build model model = get_model(img_size, num_classes) model.summary() Model: \"functional_1\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 160, 160, 3) 0 __________________________________________________________________________________________________ conv2d (Conv2D) (None, 80, 80, 32) 896 input_1[0][0] __________________________________________________________________________________________________ batch_normalization (BatchNorma (None, 80, 80, 32) 128 conv2d[0][0] __________________________________________________________________________________________________ activation (Activation) (None, 80, 80, 32) 0 batch_normalization[0][0] __________________________________________________________________________________________________ activation_1 (Activation) (None, 80, 80, 32) 0 activation[0][0] __________________________________________________________________________________________________ separable_conv2d (SeparableConv (None, 80, 80, 64) 2400 activation_1[0][0] __________________________________________________________________________________________________ batch_normalization_1 (BatchNor (None, 80, 80, 64) 256 separable_conv2d[0][0] __________________________________________________________________________________________________ activation_2 (Activation) (None, 80, 80, 64) 0 batch_normalization_1[0][0] __________________________________________________________________________________________________ separable_conv2d_1 (SeparableCo (None, 80, 80, 64) 4736 activation_2[0][0] __________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 80, 80, 64) 256 separable_conv2d_1[0][0] __________________________________________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 40, 40, 
64) 0 batch_normalization_2[0][0] __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 40, 40, 64) 2112 activation[0][0] __________________________________________________________________________________________________ add (Add) (None, 40, 40, 64) 0 max_pooling2d[0][0] conv2d_1[0][0] __________________________________________________________________________________________________ activation_3 (Activation) (None, 40, 40, 64) 0 add[0][0] __________________________________________________________________________________________________ separable_conv2d_2 (SeparableCo (None, 40, 40, 128) 8896 activation_3[0][0] __________________________________________________________________________________________________ batch_normalization_3 (BatchNor (None, 40, 40, 128) 512 separable_conv2d_2[0][0] __________________________________________________________________________________________________ activation_4 (Activation) (None, 40, 40, 128) 0 batch_normalization_3[0][0] __________________________________________________________________________________________________ separable_conv2d_3 (SeparableCo (None, 40, 40, 128) 17664 activation_4[0][0] __________________________________________________________________________________________________ batch_normalization_4 (BatchNor (None, 40, 40, 128) 512 separable_conv2d_3[0][0] __________________________________________________________________________________________________ max_pooling2d_1 (MaxPooling2D) (None, 20, 20, 128) 0 batch_normalization_4[0][0] __________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 20, 20, 128) 8320 add[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, 20, 20, 128) 0 max_pooling2d_1[0][0] conv2d_2[0][0] __________________________________________________________________________________________________ activation_5 (Activation) (None, 20, 20, 128) 0 add_1[0][0] __________________________________________________________________________________________________ separable_conv2d_4 (SeparableCo (None, 20, 20, 256) 34176 activation_5[0][0] __________________________________________________________________________________________________ batch_normalization_5 (BatchNor (None, 20, 20, 256) 1024 separable_conv2d_4[0][0] __________________________________________________________________________________________________ activation_6 (Activation) (None, 20, 20, 256) 0 batch_normalization_5[0][0] __________________________________________________________________________________________________ separable_conv2d_5 (SeparableCo (None, 20, 20, 256) 68096 activation_6[0][0] __________________________________________________________________________________________________ batch_normalization_6 (BatchNor (None, 20, 20, 256) 1024 separable_conv2d_5[0][0] __________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 10, 10, 256) 0 batch_normalization_6[0][0] __________________________________________________________________________________________________ conv2d_3 (Conv2D) (None, 10, 10, 256) 33024 add_1[0][0] __________________________________________________________________________________________________ add_2 (Add) (None, 10, 10, 256) 0 max_pooling2d_2[0][0] conv2d_3[0][0] __________________________________________________________________________________________________ 
activation_7 (Activation) (None, 10, 10, 256) 0 add_2[0][0] __________________________________________________________________________________________________ conv2d_transpose (Conv2DTranspo (None, 10, 10, 256) 590080 activation_7[0][0] __________________________________________________________________________________________________ batch_normalization_7 (BatchNor (None, 10, 10, 256) 1024 conv2d_transpose[0][0] __________________________________________________________________________________________________ activation_8 (Activation) (None, 10, 10, 256) 0 batch_normalization_7[0][0] __________________________________________________________________________________________________ conv2d_transpose_1 (Conv2DTrans (None, 10, 10, 256) 590080 activation_8[0][0] __________________________________________________________________________________________________ batch_normalization_8 (BatchNor (None, 10, 10, 256) 1024 conv2d_transpose_1[0][0] __________________________________________________________________________________________________ up_sampling2d_1 (UpSampling2D) (None, 20, 20, 256) 0 add_2[0][0] __________________________________________________________________________________________________ up_sampling2d (UpSampling2D) (None, 20, 20, 256) 0 batch_normalization_8[0][0] __________________________________________________________________________________________________ conv2d_4 (Conv2D) (None, 20, 20, 256) 65792 up_sampling2d_1[0][0] __________________________________________________________________________________________________ add_3 (Add) (None, 20, 20, 256) 0 up_sampling2d[0][0] conv2d_4[0][0] __________________________________________________________________________________________________ activation_9 (Activation) (None, 20, 20, 256) 0 add_3[0][0] __________________________________________________________________________________________________ conv2d_transpose_2 (Conv2DTrans (None, 20, 20, 128) 295040 activation_9[0][0] __________________________________________________________________________________________________ batch_normalization_9 (BatchNor (None, 20, 20, 128) 512 conv2d_transpose_2[0][0] __________________________________________________________________________________________________ activation_10 (Activation) (None, 20, 20, 128) 0 batch_normalization_9[0][0] __________________________________________________________________________________________________ conv2d_transpose_3 (Conv2DTrans (None, 20, 20, 128) 147584 activation_10[0][0] __________________________________________________________________________________________________ batch_normalization_10 (BatchNo (None, 20, 20, 128) 512 conv2d_transpose_3[0][0] __________________________________________________________________________________________________ up_sampling2d_3 (UpSampling2D) (None, 40, 40, 256) 0 add_3[0][0] __________________________________________________________________________________________________ up_sampling2d_2 (UpSampling2D) (None, 40, 40, 128) 0 batch_normalization_10[0][0] __________________________________________________________________________________________________ conv2d_5 (Conv2D) (None, 40, 40, 128) 32896 up_sampling2d_3[0][0] __________________________________________________________________________________________________ add_4 (Add) (None, 40, 40, 128) 0 up_sampling2d_2[0][0] conv2d_5[0][0] __________________________________________________________________________________________________ activation_11 (Activation) (None, 40, 40, 128) 0 add_4[0][0] 
__________________________________________________________________________________________________ conv2d_transpose_4 (Conv2DTrans (None, 40, 40, 64) 73792 activation_11[0][0] __________________________________________________________________________________________________ batch_normalization_11 (BatchNo (None, 40, 40, 64) 256 conv2d_transpose_4[0][0] __________________________________________________________________________________________________ activation_12 (Activation) (None, 40, 40, 64) 0 batch_normalization_11[0][0] __________________________________________________________________________________________________ conv2d_transpose_5 (Conv2DTrans (None, 40, 40, 64) 36928 activation_12[0][0] __________________________________________________________________________________________________ batch_normalization_12 (BatchNo (None, 40, 40, 64) 256 conv2d_transpose_5[0][0] __________________________________________________________________________________________________ up_sampling2d_5 (UpSampling2D) (None, 80, 80, 128) 0 add_4[0][0] __________________________________________________________________________________________________ up_sampling2d_4 (UpSampling2D) (None, 80, 80, 64) 0 batch_normalization_12[0][0] __________________________________________________________________________________________________ conv2d_6 (Conv2D) (None, 80, 80, 64) 8256 up_sampling2d_5[0][0] __________________________________________________________________________________________________ add_5 (Add) (None, 80, 80, 64) 0 up_sampling2d_4[0][0] conv2d_6[0][0] __________________________________________________________________________________________________ activation_13 (Activation) (None, 80, 80, 64) 0 add_5[0][0] __________________________________________________________________________________________________ conv2d_transpose_6 (Conv2DTrans (None, 80, 80, 32) 18464 activation_13[0][0] __________________________________________________________________________________________________ batch_normalization_13 (BatchNo (None, 80, 80, 32) 128 conv2d_transpose_6[0][0] __________________________________________________________________________________________________ activation_14 (Activation) (None, 80, 80, 32) 0 batch_normalization_13[0][0] __________________________________________________________________________________________________ conv2d_transpose_7 (Conv2DTrans (None, 80, 80, 32) 9248 activation_14[0][0] __________________________________________________________________________________________________ batch_normalization_14 (BatchNo (None, 80, 80, 32) 128 conv2d_transpose_7[0][0] __________________________________________________________________________________________________ up_sampling2d_7 (UpSampling2D) (None, 160, 160, 64) 0 add_5[0][0] __________________________________________________________________________________________________ up_sampling2d_6 (UpSampling2D) (None, 160, 160, 32) 0 batch_normalization_14[0][0] __________________________________________________________________________________________________ conv2d_7 (Conv2D) (None, 160, 160, 32) 2080 up_sampling2d_7[0][0] __________________________________________________________________________________________________ add_6 (Add) (None, 160, 160, 32) 0 up_sampling2d_6[0][0] conv2d_7[0][0] __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 160, 160, 3) 867 add_6[0][0] ================================================================================================== Total params: 
2,058,979 Trainable params: 2,055,203 Non-trainable params: 3,776 __________________________________________________________________________________________________ Set aside a validation split import random # Split our img paths into a training and a validation set val_samples = 1000 random.Random(1337).shuffle(input_img_paths) random.Random(1337).shuffle(target_img_paths) train_input_img_paths = input_img_paths[:-val_samples] train_target_img_paths = target_img_paths[:-val_samples] val_input_img_paths = input_img_paths[-val_samples:] val_target_img_paths = target_img_paths[-val_samples:] # Instantiate data Sequences for each split train_gen = OxfordPets( batch_size, img_size, train_input_img_paths, train_target_img_paths ) val_gen = OxfordPets(batch_size, img_size, val_input_img_paths, val_target_img_paths) Train the model # Configure the model for training. # We use the \"sparse\" version of categorical_crossentropy # because our target data is integers. model.compile(optimizer=\"rmsprop\", loss=\"sparse_categorical_crossentropy\") callbacks = [ keras.callbacks.ModelCheckpoint(\"oxford_segmentation.h5\", save_best_only=True) ] # Train the model, doing validation at the end of each epoch. epochs = 15 model.fit(train_gen, epochs=epochs, validation_data=val_gen, callbacks=callbacks) Epoch 1/15 2/199 [..............................] - ETA: 13s - loss: 5.4602WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0462s vs `on_train_batch_end` time: 0.0935s). Check your callbacks. 199/199 [==============================] - 32s 161ms/step - loss: 0.9396 - val_loss: 3.7159 Epoch 2/15 199/199 [==============================] - 32s 159ms/step - loss: 0.4911 - val_loss: 2.2709 Epoch 3/15 199/199 [==============================] - 32s 160ms/step - loss: 0.4205 - val_loss: 0.5184 Epoch 4/15 199/199 [==============================] - 32s 159ms/step - loss: 0.3739 - val_loss: 0.4584 Epoch 5/15 199/199 [==============================] - 32s 160ms/step - loss: 0.3416 - val_loss: 0.3968 Epoch 6/15 199/199 [==============================] - 32s 159ms/step - loss: 0.3131 - val_loss: 0.4059 Epoch 7/15 199/199 [==============================] - 31s 157ms/step - loss: 0.2895 - val_loss: 0.3963 Epoch 8/15 199/199 [==============================] - 31s 156ms/step - loss: 0.2695 - val_loss: 0.4035 Epoch 9/15 199/199 [==============================] - 31s 157ms/step - loss: 0.2528 - val_loss: 0.4184 Epoch 10/15 199/199 [==============================] - 31s 157ms/step - loss: 0.2360 - val_loss: 0.3950 Epoch 11/15 199/199 [==============================] - 31s 157ms/step - loss: 0.2247 - val_loss: 0.4139 Epoch 12/15 199/199 [==============================] - 31s 157ms/step - loss: 0.2126 - val_loss: 0.3861 Epoch 13/15 199/199 [==============================] - 31s 157ms/step - loss: 0.2026 - val_loss: 0.4138 Epoch 14/15 199/199 [==============================] - 31s 156ms/step - loss: 0.1932 - val_loss: 0.4265 Epoch 15/15 199/199 [==============================] - 31s 157ms/step - loss: 0.1857 - val_loss: 0.3959 Visualize predictions # Generate predictions for all images in the validation set val_gen = OxfordPets(batch_size, img_size, val_input_img_paths, val_target_img_paths) val_preds = model.predict(val_gen) def display_mask(i): \"\"\"Quick utility to display a model's prediction.\"\"\" mask = np.argmax(val_preds[i], axis=-1) mask = np.expand_dims(mask, axis=-1) img = PIL.ImageOps.autocontrast(keras.preprocessing.image.array_to_img(mask)) display(img) # Display 
results for validation image #10 i = 10 # Display input image display(Image(filename=val_input_img_paths[i])) # Display ground-truth target mask img = PIL.ImageOps.autocontrast(load_img(val_target_img_paths[i])) display(img) # Display mask predicted by our model display_mask(i) # Note that the model only sees inputs at 160x160. jpeg png png Similarity learning using a siamese network trained with a contrastive loss. Introduction Siamese Networks are neural networks which share weights between two or more sister networks, each producing embedding vectors of its respective inputs. In supervised similarity learning, the networks are then trained to maximize the contrast (distance) between embeddings of inputs of different classes, while minimizing the distance between embeddings of similar classes, resulting in embedding spaces that reflect the class segmentation of the training inputs. Setup import random import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt Hyperparameters epochs = 10 batch_size = 16 margin = 1 # Margin for contrastive loss. Load the MNIST dataset (x_train_val, y_train_val), (x_test, y_test) = keras.datasets.mnist.load_data() # Change the data type to a floating point format x_train_val = x_train_val.astype(\"float32\") x_test = x_test.astype(\"float32\") Define training and validation sets # Keep 50% of train_val in validation set x_train, x_val = x_train_val[:30000], x_train_val[30000:] y_train, y_val = y_train_val[:30000], y_train_val[30000:] del x_train_val, y_train_val Create pairs of images We will train the model to differentiate between digits of different classes. For example, digit 0 needs to be differentiated from the rest of the digits (1 through 9), digit 1 from 0 and 2 through 9, and so on. To carry this out, we will select N random images from class A (for example, for digit 0) and pair them with N random images from another class B (for example, for digit 1). Then, we can repeat this process for all other classes of digits (up to digit 9). Once we have paired digit 0 with the other digits, we can repeat this process for the remaining classes (digits 1 through 9). def make_pairs(x, y): \"\"\"Creates a tuple containing image pairs with corresponding label. Arguments: x: List containing images, each index in this list corresponds to one image. y: List containing labels, each label with datatype of `int`. Returns: Tuple containing two numpy arrays as (pairs_of_samples, labels), where pairs_of_samples' shape is (2 * len(x), 2, n_features_dims) and labels are a binary array of shape (2 * len(x)).
\"\"\" num_classes = max(y) + 1 digit_indices = [np.where(y == i)[0] for i in range(num_classes)] pairs = [] labels = [] for idx1 in range(len(x)): # add a matching example x1 = x[idx1] label1 = y[idx1] idx2 = random.choice(digit_indices[label1]) x2 = x[idx2] pairs += [[x1, x2]] labels += [1] # add a non-matching example label2 = random.randint(0, num_classes - 1) while label2 == label1: label2 = random.randint(0, num_classes - 1) idx2 = random.choice(digit_indices[label2]) x2 = x[idx2] pairs += [[x1, x2]] labels += [0] return np.array(pairs), np.array(labels).astype(\"float32\") # make train pairs pairs_train, labels_train = make_pairs(x_train, y_train) # make validation pairs pairs_val, labels_val = make_pairs(x_val, y_val) # make test pairs pairs_test, labels_test = make_pairs(x_test, y_test) We get: pairs_train.shape = (60000, 2, 28, 28) We have 60,000 pairs Each pair contains 2 images Each image has shape (28, 28) Split the training pairs x_train_1 = pairs_train[:, 0] # x_train_1.shape is (60000, 28, 28) x_train_2 = pairs_train[:, 1] Split the validation pairs x_val_1 = pairs_val[:, 0] # x_val_1.shape = (60000, 28, 28) x_val_2 = pairs_val[:, 1] Split the test pairs x_test_1 = pairs_test[:, 0] # x_test_1.shape = (20000, 28, 28) x_test_2 = pairs_test[:, 1] Visualize pairs and their labels def visualize(pairs, labels, to_show=6, num_col=3, predictions=None, test=False): \"\"\"Creates a plot of pairs and labels, and prediction if it's test dataset. Arguments: pairs: Numpy Array, of pairs to visualize, having shape (Number of pairs, 2, 28, 28). to_show: Int, number of examples to visualize (default is 6) `to_show` must be an integral multiple of `num_col`. Otherwise it will be trimmed if it is greater than num_col, and incremented if if it is less then num_col. num_col: Int, number of images in one row - (default is 3) For test and train respectively, it should not exceed 3 and 7. predictions: Numpy Array of predictions with shape (to_show, 1) - (default is None) Must be passed when test=True. test: Boolean telling whether the dataset being visualized is train dataset or test dataset - (default False). Returns: None. 
\"\"\" # Define num_row # If to_show % num_col != 0 # trim to_show, # to trim to_show limit num_row to the point where # to_show % num_col == 0 # # If to_show//num_col == 0 # then it means num_col is greater then to_show # increment to_show # to increment to_show set num_row to 1 num_row = to_show // num_col if to_show // num_col != 0 else 1 # `to_show` must be an integral multiple of `num_col` # we found num_row and we have num_col # to increment or decrement to_show # to make it integral multiple of `num_col` # simply set it equal to num_row * num_col to_show = num_row * num_col # Plot the images fig, axes = plt.subplots(num_row, num_col, figsize=(5, 5)) for i in range(to_show): # If the number of rows is 1, the axes array is one-dimensional if num_row == 1: ax = axes[i % num_col] else: ax = axes[i // num_col, i % num_col] ax.imshow(tf.concat([pairs[i][0], pairs[i][1]], axis=1), cmap=\"gray\") ax.set_axis_off() if test: ax.set_title(\"True: {} | Pred: {:.5f}\".format(labels[i], predictions[i][0])) else: ax.set_title(\"Label: {}\".format(labels[i])) if test: plt.tight_layout(rect=(0, 0, 1.9, 1.9), w_pad=0.0) else: plt.tight_layout(rect=(0, 0, 1.5, 1.5)) plt.show() Inspect training pairs visualize(pairs_train[:-1], labels_train[:-1], to_show=4, num_col=4) png Inspect validation pairs visualize(pairs_val[:-1], labels_val[:-1], to_show=4, num_col=4) png Inspect test pairs visualize(pairs_test[:-1], labels_test[:-1], to_show=4, num_col=4) png Define the model There are be two input layers, each leading to its own network, which produces embeddings. A Lambda layer then merges them using an Euclidean distance and the merged output is fed to the final network. # Provided two tensors t1 and t2 # Euclidean distance = sqrt(sum(square(t1-t2))) def euclidean_distance(vects): \"\"\"Find the Euclidean distance between two vectors. Arguments: vects: List containing two tensors of same length. Returns: Tensor containing euclidean distance (as floating point value) between vectors. \"\"\" x, y = vects sum_square = tf.math.reduce_sum(tf.math.square(x - y), axis=1, keepdims=True) return tf.math.sqrt(tf.math.maximum(sum_square, tf.keras.backend.epsilon())) input = layers.Input((28, 28, 1)) x = tf.keras.layers.BatchNormalization()(input) x = layers.Conv2D(4, (5, 5), activation=\"tanh\")(x) x = layers.AveragePooling2D(pool_size=(2, 2))(x) x = layers.Conv2D(16, (5, 5), activation=\"tanh\")(x) x = layers.AveragePooling2D(pool_size=(2, 2))(x) x = layers.Flatten()(x) x = tf.keras.layers.BatchNormalization()(x) x = layers.Dense(10, activation=\"tanh\")(x) embedding_network = keras.Model(input, x) input_1 = layers.Input((28, 28, 1)) input_2 = layers.Input((28, 28, 1)) # As mentioned above, Siamese Network share weights between # tower networks (sister networks). To allow this, we will use # same embedding network for both tower networks. tower_1 = embedding_network(input_1) tower_2 = embedding_network(input_2) merge_layer = layers.Lambda(euclidean_distance)([tower_1, tower_2]) normal_layer = tf.keras.layers.BatchNormalization()(merge_layer) output_layer = layers.Dense(1, activation=\"sigmoid\")(normal_layer) siamese = keras.Model(inputs=[input_1, input_2], outputs=output_layer) Define the constrastive Loss def loss(margin=1): \"\"\"Provides 'constrastive_loss' an enclosing scope with variable 'margin'. Arguments: margin: Integer, defines the baseline for distance for which pairs should be classified as dissimilar. - (default is 1). Returns: 'constrastive_loss' function with data ('margin') attached. 
\"\"\" # Contrastive loss = mean( (1-true_value) * square(prediction) + # true_value * square( max(margin-prediction, 0) )) def contrastive_loss(y_true, y_pred): \"\"\"Calculates the constrastive loss. Arguments: y_true: List of labels, each label is of type float32. y_pred: List of predictions of same length as of y_true, each label is of type float32. Returns: A tensor containing constrastive loss as floating point value. \"\"\" square_pred = tf.math.square(y_pred) margin_square = tf.math.square(tf.math.maximum(margin - (y_pred), 0)) return tf.math.reduce_mean( (1 - y_true) * square_pred + (y_true) * margin_square ) return contrastive_loss Compile the model with the contrastive loss siamese.compile(loss=loss(margin=margin), optimizer=\"RMSprop\", metrics=[\"accuracy\"]) siamese.summary() Model: \"model_1\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_2 (InputLayer) [(None, 28, 28, 1)] 0 __________________________________________________________________________________________________ input_3 (InputLayer) [(None, 28, 28, 1)] 0 __________________________________________________________________________________________________ model (Functional) (None, 10) 5318 input_2[0][0] input_3[0][0] __________________________________________________________________________________________________ lambda (Lambda) (None, 1) 0 model[0][0] model[1][0] __________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 1) 4 lambda[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 1) 2 batch_normalization_2[0][0] ================================================================================================== Total params: 5,324 Trainable params: 4,808 Non-trainable params: 516 __________________________________________________________________________________________________ Train the model history = siamese.fit( [x_train_1, x_train_2], labels_train, validation_data=([x_val_1, x_val_2], labels_val), batch_size=batch_size, epochs=epochs, ) Epoch 1/10 3750/3750 [==============================] - 25s 6ms/step - loss: 0.1993 - accuracy: 0.6626 - val_loss: 0.0525 - val_accuracy: 0.9331 Epoch 2/10 3750/3750 [==============================] - 23s 6ms/step - loss: 0.0611 - accuracy: 0.9187 - val_loss: 0.0277 - val_accuracy: 0.9644 Epoch 3/10 3750/3750 [==============================] - 24s 6ms/step - loss: 0.0455 - accuracy: 0.9409 - val_loss: 0.0214 - val_accuracy: 0.9719 Epoch 4/10 3750/3750 [==============================] - 27s 7ms/step - loss: 0.0386 - accuracy: 0.9506 - val_loss: 0.0198 - val_accuracy: 0.9743 Epoch 5/10 3750/3750 [==============================] - 45s 12ms/step - loss: 0.0362 - accuracy: 0.9529 - val_loss: 0.0169 - val_accuracy: 0.9783 Epoch 6/10 2497/3750 [==================>...........] - ETA: 10s - loss: 0.0343 - accuracy: 0.9552 Visualize results def plt_metric(history, metric, title, has_valid=True): \"\"\"Plots the given 'metric' from 'history'. Arguments: history: history attribute of History object returned from Model.fit. metric: Metric to plot, a string value present as key in 'history'. title: A string to be used as title of plot. has_valid: Boolean, true if valid data was passed to Model.fit else false. Returns: None. 
\"\"\" plt.plot(history[metric]) if has_valid: plt.plot(history[\"val_\" + metric]) plt.legend([\"train\", \"validation\"], loc=\"upper left\") plt.title(title) plt.ylabel(metric) plt.xlabel(\"epoch\") plt.show() # Plot the accuracy plt_metric(history=history.history, metric=\"accuracy\", title=\"Model accuracy\") # Plot the constrastive loss plt_metric(history=history.history, metric=\"loss\", title=\"Constrastive Loss\") png png Evaluate the model results = siamese.evaluate([x_test_1, x_test_2], labels_test) print(\"test loss, test acc:\", results) 625/625 [==============================] - 3s 4ms/step - loss: 0.0150 - accuracy: 0.9810 test loss, test acc: [0.015001337975263596, 0.9810000061988831] Visualize the predictions predictions = siamese.predict([x_test_1, x_test_2]) visualize(pairs_test, labels_test, to_show=3, predictions=predictions, test=True) png Training a Siamese Network to compare the similarity of images using a triplet loss function. Introduction A Siamese Network is a type of network architecture that contains two or more identical subnetworks used to generate feature vectors for each input and compare them. Siamese Networks can be applied to different use cases, like detecting duplicates, finding anomalies, and face recognition. This example uses a Siamese Network with three identical subnetworks. We will provide three images to the model, where two of them will be similar (anchor and positive samples), and the third will be unrelated (a negative example.) Our goal is for the model to learn to estimate the similarity between images. For the network to learn, we use a triplet loss function. You can find an introduction to triplet loss in the FaceNet paper by Schroff et al,. 2015. In this example, we define the triplet loss function as follows: L(A, P, N) = max(‖f(A) - f(P)‖² - ‖f(A) - f(N)‖² + margin, 0) This example uses the Totally Looks Like dataset by Rosenfeld et al., 2018. Setup import matplotlib.pyplot as plt import numpy as np import os import random import tensorflow as tf from pathlib import Path from tensorflow.keras import applications from tensorflow.keras import layers from tensorflow.keras import losses from tensorflow.keras import optimizers from tensorflow.keras import metrics from tensorflow.keras import Model from tensorflow.keras.applications import resnet target_shape = (200, 200) Load the dataset We are going to load the Totally Looks Like dataset and unzip it inside the ~/.keras directory in the local environment. The dataset consists of two separate files: left.zip contains the images that we will use as the anchor. right.zip contains the images that we will use as the positive sample (an image that looks like the anchor). cache_dir = Path(Path.home()) / \".keras\" anchor_images_path = cache_dir / \"left\" positive_images_path = cache_dir / \"right\" !gdown --id 1jvkbTr_giSP3Ru8OwGNCg6B4PvVbcO34 !gdown --id 1EzBZUb_mh_Dp_FKD0P4XiYYSd0QBH5zW !unzip -oq left.zip -d $cache_dir !unzip -oq right.zip -d $cache_dir zsh:1: command not found: gdown zsh:1: command not found: gdown unzip: cannot find or open left.zip, left.zip.zip or left.zip.ZIP. unzip: cannot find or open right.zip, right.zip.zip or right.zip.ZIP. Preparing the data We are going to use a tf.data pipeline to load the data and generate the triplets that we need to train the Siamese network. We'll set up the pipeline using a zipped list with anchor, positive, and negative filenames as the source. The pipeline will load and preprocess the corresponding images. 
def preprocess_image(filename): \"\"\" Load the specified file as a JPEG image, preprocess it and resize it to the target shape. \"\"\" image_string = tf.io.read_file(filename) image = tf.image.decode_jpeg(image_string, channels=3) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, target_shape) return image def preprocess_triplets(anchor, positive, negative): \"\"\" Given the filenames corresponding to the three images, load and preprocess them. \"\"\" return ( preprocess_image(anchor), preprocess_image(positive), preprocess_image(negative), ) Let's setup our data pipeline using a zipped list with an anchor, positive, and negative image filename as the source. The output of the pipeline contains the same triplet with every image loaded and preprocessed. # We need to make sure both the anchor and positive images are loaded in # sorted order so we can match them together. anchor_images = sorted( [str(anchor_images_path / f) for f in os.listdir(anchor_images_path)] ) positive_images = sorted( [str(positive_images_path / f) for f in os.listdir(positive_images_path)] ) image_count = len(anchor_images) anchor_dataset = tf.data.Dataset.from_tensor_slices(anchor_images) positive_dataset = tf.data.Dataset.from_tensor_slices(positive_images) # To generate the list of negative images, let's randomize the list of # available images and concatenate them together. rng = np.random.RandomState(seed=42) rng.shuffle(anchor_images) rng.shuffle(positive_images) negative_images = anchor_images + positive_images np.random.RandomState(seed=32).shuffle(negative_images) negative_dataset = tf.data.Dataset.from_tensor_slices(negative_images) negative_dataset = negative_dataset.shuffle(buffer_size=4096) dataset = tf.data.Dataset.zip((anchor_dataset, positive_dataset, negative_dataset)) dataset = dataset.shuffle(buffer_size=1024) dataset = dataset.map(preprocess_triplets) # Let's now split our dataset in train and validation. train_dataset = dataset.take(round(image_count * 0.8)) val_dataset = dataset.skip(round(image_count * 0.8)) train_dataset = train_dataset.batch(32, drop_remainder=False) train_dataset = train_dataset.prefetch(8) val_dataset = val_dataset.batch(32, drop_remainder=False) val_dataset = val_dataset.prefetch(8) Let's take a look at a few examples of triplets. Notice how the first two images look alike while the third one is always different. def visualize(anchor, positive, negative): \"\"\"Visualize a few triplets from the supplied batches.\"\"\" def show(ax, image): ax.imshow(image) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig = plt.figure(figsize=(9, 9)) axs = fig.subplots(3, 3) for i in range(3): show(axs[i, 0], anchor[i]) show(axs[i, 1], positive[i]) show(axs[i, 2], negative[i]) visualize(*list(train_dataset.take(1).as_numpy_iterator())[0]) png Setting up the embedding generator model Our Siamese Network will generate embeddings for each of the images of the triplet. To do this, we will use a ResNet50 model pretrained on ImageNet and connect a few Dense layers to it so we can learn to separate these embeddings. We will freeze the weights of all the layers of the model up until the layer conv5_block1_out. This is important to avoid affecting the weights that the model has already learned. We are going to leave the bottom few layers trainable, so that we can fine-tune their weights during training. 
base_cnn = resnet.ResNet50( weights=\"imagenet\", input_shape=target_shape + (3,), include_top=False ) flatten = layers.Flatten()(base_cnn.output) dense1 = layers.Dense(512, activation=\"relu\")(flatten) dense1 = layers.BatchNormalization()(dense1) dense2 = layers.Dense(256, activation=\"relu\")(dense1) dense2 = layers.BatchNormalization()(dense2) output = layers.Dense(256)(dense2) embedding = Model(base_cnn.input, output, name=\"Embedding\") trainable = False for layer in base_cnn.layers: if layer.name == \"conv5_block1_out\": trainable = True layer.trainable = trainable Setting up the Siamese Network model The Siamese network will receive each of the triplet images as an input, generate the embeddings, and output the distance between the anchor and the positive embedding, as well as the distance between the anchor and the negative embedding. To compute the distance, we can use a custom layer DistanceLayer that returns both values as a tuple. class DistanceLayer(layers.Layer): \"\"\" This layer is responsible for computing the distance between the anchor embedding and the positive embedding, and the anchor embedding and the negative embedding. \"\"\" def __init__(self, **kwargs): super().__init__(**kwargs) def call(self, anchor, positive, negative): ap_distance = tf.reduce_sum(tf.square(anchor - positive), -1) an_distance = tf.reduce_sum(tf.square(anchor - negative), -1) return (ap_distance, an_distance) anchor_input = layers.Input(name=\"anchor\", shape=target_shape + (3,)) positive_input = layers.Input(name=\"positive\", shape=target_shape + (3,)) negative_input = layers.Input(name=\"negative\", shape=target_shape + (3,)) distances = DistanceLayer()( embedding(resnet.preprocess_input(anchor_input)), embedding(resnet.preprocess_input(positive_input)), embedding(resnet.preprocess_input(negative_input)), ) siamese_network = Model( inputs=[anchor_input, positive_input, negative_input], outputs=distances ) Putting everything together We now need to implement a model with custom training loop so we can compute the triplet loss using the three embeddings produced by the Siamese network. Let's create a Mean metric instance to track the loss of the training process. class SiameseModel(Model): \"\"\"The Siamese Network model with a custom training and testing loops. Computes the triplet loss using the three embeddings produced by the Siamese Network. The triplet loss is defined as: L(A, P, N) = max(‖f(A) - f(P)‖² - ‖f(A) - f(N)‖² + margin, 0) \"\"\" def __init__(self, siamese_network, margin=0.5): super(SiameseModel, self).__init__() self.siamese_network = siamese_network self.margin = margin self.loss_tracker = metrics.Mean(name=\"loss\") def call(self, inputs): return self.siamese_network(inputs) def train_step(self, data): # GradientTape is a context manager that records every operation that # you do inside. We are using it here to compute the loss so we can get # the gradients and apply them using the optimizer specified in # `compile()`. with tf.GradientTape() as tape: loss = self._compute_loss(data) # Storing the gradients of the loss function with respect to the # weights/parameters. gradients = tape.gradient(loss, self.siamese_network.trainable_weights) # Applying the gradients on the model using the specified optimizer self.optimizer.apply_gradients( zip(gradients, self.siamese_network.trainable_weights) ) # Let's update and return the training loss metric. 
self.loss_tracker.update_state(loss) return {\"loss\": self.loss_tracker.result()} def test_step(self, data): loss = self._compute_loss(data) # Let's update and return the loss metric. self.loss_tracker.update_state(loss) return {\"loss\": self.loss_tracker.result()} def _compute_loss(self, data): # The output of the network is a tuple containing the distances # between the anchor and the positive example, and the anchor and # the negative example. ap_distance, an_distance = self.siamese_network(data) # Computing the Triplet Loss by subtracting both distances and # making sure we don't get a negative value. loss = ap_distance - an_distance loss = tf.maximum(loss + self.margin, 0.0) return loss @property def metrics(self): # We need to list our metrics here so the `reset_states()` can be # called automatically. return [self.loss_tracker] Training We are now ready to train our model. siamese_model = SiameseModel(siamese_network) siamese_model.compile(optimizer=optimizers.Adam(0.0001)) siamese_model.fit(train_dataset, epochs=10, validation_data=val_dataset) Epoch 1/10 151/151 [==============================] - 277s 2s/step - loss: 0.5014 - val_loss: 0.3719 Epoch 2/10 151/151 [==============================] - 276s 2s/step - loss: 0.3884 - val_loss: 0.3632 Epoch 3/10 151/151 [==============================] - 287s 2s/step - loss: 0.3711 - val_loss: 0.3509 Epoch 4/10 151/151 [==============================] - 295s 2s/step - loss: 0.3585 - val_loss: 0.3287 Epoch 5/10 151/151 [==============================] - 299s 2s/step - loss: 0.3420 - val_loss: 0.3301 Epoch 6/10 151/151 [==============================] - 297s 2s/step - loss: 0.3181 - val_loss: 0.3419 Epoch 7/10 151/151 [==============================] - 290s 2s/step - loss: 0.3131 - val_loss: 0.3201 Epoch 8/10 151/151 [==============================] - 295s 2s/step - loss: 0.3102 - val_loss: 0.3152 Epoch 9/10 151/151 [==============================] - 286s 2s/step - loss: 0.2905 - val_loss: 0.2937 Epoch 10/10 151/151 [==============================] - 270s 2s/step - loss: 0.2921 - val_loss: 0.2952 Inspecting what the network has learned At this point, we can check how the network learned to separate the embeddings depending on whether they belong to similar images. We can use cosine similarity to measure the similarity between embeddings. Let's pick a sample from the dataset to check the similarity between the embeddings generated for each image. sample = next(iter(train_dataset)) visualize(*sample) anchor, positive, negative = sample anchor_embedding, positive_embedding, negative_embedding = ( embedding(resnet.preprocess_input(anchor)), embedding(resnet.preprocess_input(positive)), embedding(resnet.preprocess_input(negative)), ) png Finally, we can compute the cosine similarity between the anchor and positive images and compare it with the similarity between the anchor and the negative images. We should expect the similarity between the anchor and positive images to be larger than the similarity between the anchor and the negative images. cosine_similarity = metrics.CosineSimilarity() positive_similarity = cosine_similarity(anchor_embedding, positive_embedding) print(\"Positive similarity:\", positive_similarity.numpy()) negative_similarity = cosine_similarity(anchor_embedding, negative_embedding) print(\"Negative similarity\", negative_similarity.numpy()) Positive similarity: 0.9940324 Negative similarity 0.9918252 Summary The tf.data API enables you to build efficient input pipelines for your model. 
It is particularly useful if you have a large dataset. You can learn more about tf.data pipelines in tf.data: Build TensorFlow input pipelines. In this example, we use a pre-trained ResNet50 as part of the subnetwork that generates the feature embeddings, which lets us benefit from transfer learning. Implementing Super-Resolution using Efficient sub-pixel model on BSDS500. Introduction ESPCN (Efficient Sub-Pixel CNN), proposed by Shi, 2016, is a model that reconstructs a high-resolution version of an image given a low-resolution version. It leverages efficient \"sub-pixel convolution\" layers, which learn an array of image upscaling filters. In this code example, we will implement the model from the paper and train it on a small dataset, BSDS500. Setup import tensorflow as tf import os import math import numpy as np from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.preprocessing.image import load_img from tensorflow.keras.preprocessing.image import array_to_img from tensorflow.keras.preprocessing.image import img_to_array from tensorflow.keras.preprocessing import image_dataset_from_directory from IPython.display import display Load data: BSDS500 dataset Download dataset We use the built-in keras.utils.get_file utility to retrieve the dataset. dataset_url = \"http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/BSR_bsds500.tgz\" data_dir = keras.utils.get_file(origin=dataset_url, fname=\"BSR\", untar=True) root_dir = os.path.join(data_dir, \"BSDS500/data\") We create training and validation datasets via image_dataset_from_directory. crop_size = 300 upscale_factor = 3 input_size = crop_size // upscale_factor batch_size = 8 train_ds = image_dataset_from_directory( root_dir, batch_size=batch_size, image_size=(crop_size, crop_size), validation_split=0.2, subset=\"training\", seed=1337, label_mode=None, ) valid_ds = image_dataset_from_directory( root_dir, batch_size=batch_size, image_size=(crop_size, crop_size), validation_split=0.2, subset=\"validation\", seed=1337, label_mode=None, ) Found 500 files belonging to 2 classes. Using 400 files for training. Found 500 files belonging to 2 classes. Using 100 files for validation. We rescale the images to take values in the range [0, 1]. def scaling(input_image): input_image = input_image / 255.0 return input_image # Scale from (0, 255) to (0, 1) train_ds = train_ds.map(scaling) valid_ds = valid_ds.map(scaling) Let's visualize a few sample images: for batch in train_ds.take(1): for img in batch: display(array_to_img(img)) png png png png png png png png We prepare a dataset of test image paths that we will use for visual evaluation at the end of this example. dataset = os.path.join(root_dir, \"images\") test_path = os.path.join(dataset, \"test\") test_img_paths = sorted( [ os.path.join(test_path, fname) for fname in os.listdir(test_path) if fname.endswith(\".jpg\") ] ) Crop and resize images Let's process the image data. First, we convert our images from the RGB color space to the YUV color space. For the input data (low-resolution images), we crop the image, retrieve the y channel (luminance), and resize it with the area method (use BICUBIC if you use PIL). We only consider the luminance channel in the YUV color space because humans are more sensitive to luminance change. For the target data (high-resolution images), we just crop the image and retrieve the y channel. # Use TF Ops to process.
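# process_input below converts a batch of images from RGB to YUV, keeps only the
# luminance (y) channel and resizes it down to input_size x input_size, while
# process_target keeps the full-resolution y channel as the training target.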
def process_input(input, input_size, upscale_factor): input = tf.image.rgb_to_yuv(input) last_dimension_axis = len(input.shape) - 1 y, u, v = tf.split(input, 3, axis=last_dimension_axis) return tf.image.resize(y, [input_size, input_size], method=\"area\") def process_target(input): input = tf.image.rgb_to_yuv(input) last_dimension_axis = len(input.shape) - 1 y, u, v = tf.split(input, 3, axis=last_dimension_axis) return y train_ds = train_ds.map( lambda x: (process_input(x, input_size, upscale_factor), process_target(x)) ) train_ds = train_ds.prefetch(buffer_size=32) valid_ds = valid_ds.map( lambda x: (process_input(x, input_size, upscale_factor), process_target(x)) ) valid_ds = valid_ds.prefetch(buffer_size=32) Let's take a look at the input and target data. for batch in train_ds.take(1): for img in batch[0]: display(array_to_img(img)) for img in batch[1]: display(array_to_img(img)) png png png png png png png png png png png png png png png png Build a model Compared to the paper, we add one more layer and we use the relu activation function instead of tanh. It achieves better performance even though we train the model for fewer epochs. def get_model(upscale_factor=3, channels=1): conv_args = { \"activation\": \"relu\", \"kernel_initializer\": \"Orthogonal\", \"padding\": \"same\", } inputs = keras.Input(shape=(None, None, channels)) x = layers.Conv2D(64, 5, **conv_args)(inputs) x = layers.Conv2D(64, 3, **conv_args)(x) x = layers.Conv2D(32, 3, **conv_args)(x) x = layers.Conv2D(channels * (upscale_factor ** 2), 3, **conv_args)(x) outputs = tf.nn.depth_to_space(x, upscale_factor) return keras.Model(inputs, outputs) Define utility functions We need to define several utility functions to monitor our results: plot_results to plot an save an image. get_lowres_image to convert an image to its low-resolution version. upscale_image to turn a low-resolution image to a high-resolution version reconstructed by the model. In this function, we use the y channel from the YUV color space as input to the model and then combine the output with the other channels to obtain an RGB image. import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes from mpl_toolkits.axes_grid1.inset_locator import mark_inset import PIL def plot_results(img, prefix, title): \"\"\"Plot the result with zoom-in area.\"\"\" img_array = img_to_array(img) img_array = img_array.astype(\"float32\") / 255.0 # Create a new figure with a default 111 subplot. fig, ax = plt.subplots() im = ax.imshow(img_array[::-1], origin=\"lower\") plt.title(title) # zoom-factor: 2.0, location: upper-left axins = zoomed_inset_axes(ax, 2, loc=2) axins.imshow(img_array[::-1], origin=\"lower\") # Specify the limits. x1, x2, y1, y2 = 200, 300, 100, 200 # Apply the x-limits. axins.set_xlim(x1, x2) # Apply the y-limits. axins.set_ylim(y1, y2) plt.yticks(visible=False) plt.xticks(visible=False) # Make the line. 
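# mark_inset draws the box around the magnified region and the connector lines
# linking it to the zoomed inset axes.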
mark_inset(ax, axins, loc1=1, loc2=3, fc=\"none\", ec=\"blue\") plt.savefig(str(prefix) + \"-\" + title + \".png\") plt.show() def get_lowres_image(img, upscale_factor): \"\"\"Return low-resolution image to use as model input.\"\"\" return img.resize( (img.size[0] // upscale_factor, img.size[1] // upscale_factor), PIL.Image.BICUBIC, ) def upscale_image(model, img): \"\"\"Predict the result based on input image and restore the image as RGB.\"\"\" ycbcr = img.convert(\"YCbCr\") y, cb, cr = ycbcr.split() y = img_to_array(y) y = y.astype(\"float32\") / 255.0 input = np.expand_dims(y, axis=0) out = model.predict(input) out_img_y = out[0] out_img_y *= 255.0 # Restore the image in RGB color space. out_img_y = out_img_y.clip(0, 255) out_img_y = out_img_y.reshape((np.shape(out_img_y)[0], np.shape(out_img_y)[1])) out_img_y = PIL.Image.fromarray(np.uint8(out_img_y), mode=\"L\") out_img_cb = cb.resize(out_img_y.size, PIL.Image.BICUBIC) out_img_cr = cr.resize(out_img_y.size, PIL.Image.BICUBIC) out_img = PIL.Image.merge(\"YCbCr\", (out_img_y, out_img_cb, out_img_cr)).convert( \"RGB\" ) return out_img Define callbacks to monitor training The ESPCNCallback object will compute and display the PSNR metric. This is the main metric we use to evaluate super-resolution performance. class ESPCNCallback(keras.callbacks.Callback): def __init__(self): super(ESPCNCallback, self).__init__() self.test_img = get_lowres_image(load_img(test_img_paths[0]), upscale_factor) # Store PSNR value in each epoch. def on_epoch_begin(self, epoch, logs=None): self.psnr = [] def on_epoch_end(self, epoch, logs=None): print(\"Mean PSNR for epoch: %.2f\" % (np.mean(self.psnr))) if epoch % 20 == 0: prediction = upscale_image(self.model, self.test_img) plot_results(prediction, \"epoch-\" + str(epoch), \"prediction\") def on_test_batch_end(self, batch, logs=None): self.psnr.append(10 * math.log10(1 / logs[\"loss\"])) Define ModelCheckpoint and EarlyStopping callbacks. 
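Before the ModelCheckpoint and EarlyStopping callbacks are defined, a brief aside on the PSNR value printed by ESPCNCallback: because the images were rescaled to [0, 1], PSNR can be read straight off the MSE loss. The minimal sketch below (not part of the original example) spells out that relationship; the sample MSE value is simply an illustrative number in the range of the losses reported later.

import math

mse = 0.0025  # illustrative MSE, in the range of the training losses shown below
# PSNR = 10 * log10(MAX^2 / MSE); with pixel values scaled to [0, 1], MAX = 1,
# which reduces to the expression used in ESPCNCallback.on_test_batch_end.
psnr = 10 * math.log10(1.0 / mse)
print(round(psnr, 2))  # -> 26.02 dB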
early_stopping_callback = keras.callbacks.EarlyStopping(monitor=\"loss\", patience=10) checkpoint_filepath = \"/tmp/checkpoint\" model_checkpoint_callback = keras.callbacks.ModelCheckpoint( filepath=checkpoint_filepath, save_weights_only=True, monitor=\"loss\", mode=\"min\", save_best_only=True, ) model = get_model(upscale_factor=upscale_factor, channels=1) model.summary() callbacks = [ESPCNCallback(), early_stopping_callback, model_checkpoint_callback] loss_fn = keras.losses.MeanSquaredError() optimizer = keras.optimizers.Adam(learning_rate=0.001) Model: \"model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, None, None, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, None, None, 64) 1664 _________________________________________________________________ conv2d_1 (Conv2D) (None, None, None, 64) 36928 _________________________________________________________________ conv2d_2 (Conv2D) (None, None, None, 32) 18464 _________________________________________________________________ conv2d_3 (Conv2D) (None, None, None, 9) 2601 _________________________________________________________________ tf.nn.depth_to_space (TFOpLa (None, None, None, 1) 0 ================================================================= Total params: 59,657 Trainable params: 59,657 Non-trainable params: 0 _________________________________________________________________ Train the model epochs = 100 model.compile( optimizer=optimizer, loss=loss_fn, ) model.fit( train_ds, epochs=epochs, callbacks=callbacks, validation_data=valid_ds, verbose=2 ) # The model weights (that are considered the best) are loaded into the model. model.load_weights(checkpoint_filepath) WARNING: Logging before flag parsing goes to stderr. W0828 11:01:31.262773 4528061888 callbacks.py:1270] Automatic model reloading for interrupted job was removed from the `ModelCheckpoint` callback in multi-worker mode, please use the [`keras.callbacks.experimental.BackupAndRestore`](/api/callbacks/backup_and_restore#backupandrestore-class) callback instead. See this tutorial for details: https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#backupandrestore_callback. 
Epoch 1/100 Mean PSNR for epoch: 22.03 png 50/50 - 14s - loss: 0.0259 - val_loss: 0.0063 Epoch 2/100 Mean PSNR for epoch: 24.55 50/50 - 13s - loss: 0.0049 - val_loss: 0.0034 Epoch 3/100 Mean PSNR for epoch: 25.57 50/50 - 13s - loss: 0.0035 - val_loss: 0.0029 Epoch 4/100 Mean PSNR for epoch: 26.35 50/50 - 13s - loss: 0.0031 - val_loss: 0.0026 Epoch 5/100 Mean PSNR for epoch: 25.88 50/50 - 13s - loss: 0.0029 - val_loss: 0.0026 Epoch 6/100 Mean PSNR for epoch: 26.23 50/50 - 13s - loss: 0.0030 - val_loss: 0.0025 Epoch 7/100 Mean PSNR for epoch: 26.30 50/50 - 13s - loss: 0.0028 - val_loss: 0.0025 Epoch 8/100 Mean PSNR for epoch: 26.27 50/50 - 13s - loss: 0.0028 - val_loss: 0.0025 Epoch 9/100 Mean PSNR for epoch: 26.38 50/50 - 12s - loss: 0.0028 - val_loss: 0.0025 Epoch 10/100 Mean PSNR for epoch: 26.25 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 11/100 Mean PSNR for epoch: 26.19 50/50 - 12s - loss: 0.0027 - val_loss: 0.0025 Epoch 12/100 Mean PSNR for epoch: 25.97 50/50 - 12s - loss: 0.0028 - val_loss: 0.0025 Epoch 13/100 Mean PSNR for epoch: 26.30 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 14/100 Mean PSNR for epoch: 26.43 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 15/100 Mean PSNR for epoch: 26.49 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 16/100 Mean PSNR for epoch: 26.41 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 17/100 Mean PSNR for epoch: 25.86 50/50 - 13s - loss: 0.0027 - val_loss: 0.0024 Epoch 18/100 Mean PSNR for epoch: 26.11 50/50 - 12s - loss: 0.0027 - val_loss: 0.0025 Epoch 19/100 Mean PSNR for epoch: 26.78 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 20/100 Mean PSNR for epoch: 26.59 50/50 - 12s - loss: 0.0027 - val_loss: 0.0024 Epoch 21/100 Mean PSNR for epoch: 26.52 png 50/50 - 13s - loss: 0.0026 - val_loss: 0.0024 Epoch 22/100 Mean PSNR for epoch: 26.21 50/50 - 13s - loss: 0.0026 - val_loss: 0.0024 Epoch 23/100 Mean PSNR for epoch: 26.32 50/50 - 13s - loss: 0.0031 - val_loss: 0.0025 Epoch 24/100 Mean PSNR for epoch: 26.68 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 25/100 Mean PSNR for epoch: 27.03 50/50 - 12s - loss: 0.0026 - val_loss: 0.0023 Epoch 26/100 Mean PSNR for epoch: 26.31 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 27/100 Mean PSNR for epoch: 27.20 50/50 - 12s - loss: 0.0026 - val_loss: 0.0023 Epoch 28/100 Mean PSNR for epoch: 26.53 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 29/100 Mean PSNR for epoch: 26.63 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 30/100 Mean PSNR for epoch: 26.43 50/50 - 12s - loss: 0.0026 - val_loss: 0.0023 Epoch 31/100 Mean PSNR for epoch: 26.13 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 32/100 Mean PSNR for epoch: 26.50 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 33/100 Mean PSNR for epoch: 26.91 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 34/100 Mean PSNR for epoch: 26.48 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 35/100 Mean PSNR for epoch: 26.68 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 36/100 Mean PSNR for epoch: 26.82 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 37/100 Mean PSNR for epoch: 26.53 50/50 - 14s - loss: 0.0026 - val_loss: 0.0023 Epoch 38/100 Mean PSNR for epoch: 26.73 50/50 - 14s - loss: 0.0026 - val_loss: 0.0023 Epoch 39/100 Mean PSNR for epoch: 26.07 50/50 - 13s - loss: 0.0026 - val_loss: 0.0026 Epoch 40/100 Mean PSNR for epoch: 26.36 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 41/100 Mean PSNR for epoch: 26.43 png 50/50 - 14s - loss: 0.0026 - val_loss: 0.0023 Epoch 42/100 Mean PSNR 
for epoch: 26.67 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 43/100 Mean PSNR for epoch: 26.83 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 44/100 Mean PSNR for epoch: 26.81 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 45/100 Mean PSNR for epoch: 26.45 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 46/100 Mean PSNR for epoch: 26.25 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 47/100 Mean PSNR for epoch: 26.56 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 48/100 Mean PSNR for epoch: 26.28 50/50 - 13s - loss: 0.0028 - val_loss: 0.0023 Epoch 49/100 Mean PSNR for epoch: 26.52 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 50/100 Mean PSNR for epoch: 26.41 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 51/100 Mean PSNR for epoch: 26.69 50/50 - 12s - loss: 0.0025 - val_loss: 0.0023 Epoch 52/100 Mean PSNR for epoch: 26.44 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 53/100 Mean PSNR for epoch: 26.90 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 54/100 Mean PSNR for epoch: 26.43 50/50 - 13s - loss: 0.0026 - val_loss: 0.0024 Epoch 55/100 Mean PSNR for epoch: 26.83 50/50 - 13s - loss: 0.0026 - val_loss: 0.0023 Epoch 56/100 Mean PSNR for epoch: 26.77 50/50 - 14s - loss: 0.0025 - val_loss: 0.0023 Epoch 57/100 Mean PSNR for epoch: 26.67 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 58/100 Mean PSNR for epoch: 26.45 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 59/100 Mean PSNR for epoch: 26.85 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 60/100 Mean PSNR for epoch: 26.41 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 61/100 Mean PSNR for epoch: 26.36 png 50/50 - 14s - loss: 0.0026 - val_loss: 0.0024 Epoch 62/100 Mean PSNR for epoch: 26.21 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 63/100 Mean PSNR for epoch: 26.36 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 64/100 Mean PSNR for epoch: 27.31 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 65/100 Mean PSNR for epoch: 26.88 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 66/100 Mean PSNR for epoch: 26.34 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 67/100 Mean PSNR for epoch: 26.65 50/50 - 13s - loss: 0.0025 - val_loss: 0.0023 Epoch 68/100 Mean PSNR for epoch: 24.88 50/50 - 13s - loss: 0.0030 - val_loss: 0.0034 Epoch 69/100 Mean PSNR for epoch: 26.41 50/50 - 13s - loss: 0.0027 - val_loss: 0.0023 Epoch 70/100 Mean PSNR for epoch: 26.71 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 71/100 Mean PSNR for epoch: 26.70 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 72/100 Mean PSNR for epoch: 26.88 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 73/100 Mean PSNR for epoch: 26.72 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 74/100 Mean PSNR for epoch: 26.53 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 75/100 Mean PSNR for epoch: 26.85 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 76/100 Mean PSNR for epoch: 26.53 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 77/100 Mean PSNR for epoch: 26.50 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 78/100 Mean PSNR for epoch: 26.90 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 79/100 Mean PSNR for epoch: 26.92 50/50 - 15s - loss: 0.0025 - val_loss: 0.0022 Epoch 80/100 Mean PSNR for epoch: 27.00 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 81/100 Mean PSNR for epoch: 26.89 png 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 82/100 Mean PSNR for epoch: 26.62 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 83/100 Mean PSNR for epoch: 26.85 
50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 84/100 Mean PSNR for epoch: 26.69 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 85/100 Mean PSNR for epoch: 26.81 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 86/100 Mean PSNR for epoch: 26.16 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 87/100 Mean PSNR for epoch: 26.48 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 88/100 Mean PSNR for epoch: 25.62 50/50 - 14s - loss: 0.0026 - val_loss: 0.0027 Epoch 89/100 Mean PSNR for epoch: 26.55 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 90/100 Mean PSNR for epoch: 26.20 50/50 - 14s - loss: 0.0025 - val_loss: 0.0023 Epoch 91/100 Mean PSNR for epoch: 26.35 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 92/100 Mean PSNR for epoch: 26.85 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 93/100 Mean PSNR for epoch: 26.83 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 94/100 Mean PSNR for epoch: 26.63 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 95/100 Mean PSNR for epoch: 25.94 50/50 - 13s - loss: 0.0025 - val_loss: 0.0024 Epoch 96/100 Mean PSNR for epoch: 26.47 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 97/100 Mean PSNR for epoch: 26.42 50/50 - 14s - loss: 0.0025 - val_loss: 0.0022 Epoch 98/100 Mean PSNR for epoch: 26.33 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 99/100 Mean PSNR for epoch: 26.55 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Epoch 100/100 Mean PSNR for epoch: 27.08 50/50 - 13s - loss: 0.0025 - val_loss: 0.0022 Run model prediction and plot the results Let's compute the reconstructed version of a few images and save the results. total_bicubic_psnr = 0.0 total_test_psnr = 0.0 for index, test_img_path in enumerate(test_img_paths[50:60]): img = load_img(test_img_path) lowres_input = get_lowres_image(img, upscale_factor) w = lowres_input.size[0] * upscale_factor h = lowres_input.size[1] * upscale_factor highres_img = img.resize((w, h)) prediction = upscale_image(model, lowres_input) lowres_img = lowres_input.resize((w, h)) lowres_img_arr = img_to_array(lowres_img) highres_img_arr = img_to_array(highres_img) predict_img_arr = img_to_array(prediction) bicubic_psnr = tf.image.psnr(lowres_img_arr, highres_img_arr, max_val=255) test_psnr = tf.image.psnr(predict_img_arr, highres_img_arr, max_val=255) total_bicubic_psnr += bicubic_psnr total_test_psnr += test_psnr print( \"PSNR of low resolution image and high resolution image is %.4f\" % bicubic_psnr ) print(\"PSNR of predict and high resolution is %.4f\" % test_psnr) plot_results(lowres_img, index, \"lowres\") plot_results(highres_img, index, \"highres\") plot_results(prediction, index, \"prediction\") print(\"Avg. PSNR of lowres images is %.4f\" % (total_bicubic_psnr / 10)) print(\"Avg. 
PSNR of reconstructions is %.4f\" % (total_test_psnr / 10)) PSNR of low resolution image and high resolution image is 28.2682 PSNR of predict and high resolution is 29.7881 png png png PSNR of low resolution image and high resolution image is 23.0465 PSNR of predict and high resolution is 25.1304 png png png PSNR of low resolution image and high resolution image is 25.4113 PSNR of predict and high resolution is 27.3936 png png png PSNR of low resolution image and high resolution image is 26.5175 PSNR of predict and high resolution is 27.1014 png png png PSNR of low resolution image and high resolution image is 24.2559 PSNR of predict and high resolution is 25.7635 png png png PSNR of low resolution image and high resolution image is 23.9661 PSNR of predict and high resolution is 25.9522 png png png PSNR of low resolution image and high resolution image is 24.3061 PSNR of predict and high resolution is 26.3963 png png png PSNR of low resolution image and high resolution image is 21.7309 PSNR of predict and high resolution is 23.8342 png png png PSNR of low resolution image and high resolution image is 28.8549 PSNR of predict and high resolution is 29.6143 png png png PSNR of low resolution image and high resolution image is 23.9198 PSNR of predict and high resolution is 25.2592 png png png Avg. PSNR of lowres images is 25.0277 Avg. PSNR of reconstructions is 26.6233 Deep dive into location-specific and channel-agnostic involution kernels. Introduction Convolution has been the basis of most modern neural networks for computer vision. A convolution kernel is spatial-agnostic and channel-specific. Because of this, it isn't able to adapt to different visual patterns with respect to different spatial locations. Along with location-related problems, the receptive field of convolution creates challenges with regard to capturing long-range spatial interactions. To address the above issues, Li et al. rethink the properties of convolution in Involution: Inverting the Inherence of Convolution for Visual Recognition. The authors propose the \"involution kernel\", which is location-specific and channel-agnostic. Due to the location-specific nature of the operation, the authors say that self-attention falls under the design paradigm of involution. This example describes the involution kernel, compares two image classification models, one with convolution and the other with involution, and also tries drawing a parallel with the self-attention layer. Setup import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt # Set seed for reproducibility. tf.random.set_seed(42) Convolution Convolution remains the mainstay of deep neural networks for computer vision. To understand Involution, it is necessary to talk about the convolution operation. Imgur Consider an input tensor X with dimensions H, W and C_in. We take a collection of C_out convolution kernels, each of shape K, K, C_in. With the multiply-add operation between the input tensor and the kernels, we obtain an output tensor Y with dimensions H, W, C_out. In the diagram above, C_out=3, which makes the output tensor of shape H, W and 3. One can notice that the convolution kernel does not depend on the spatial position of the input tensor, which makes it location-agnostic. On the other hand, each channel in the output tensor is based on a specific convolution filter, which makes it channel-specific. Involution The idea is to have an operation that is both location-specific and channel-agnostic.
Trying to implement these specific properties poses a challenge. With a fixed number of involution kernels (for each spatial position) we will not be able to process variable-resolution input tensors. To solve this problem, the authors have considered generating each kernel conditioned on specific spatial positions. With this method, we should be able to process variable-resolution input tensors with ease. The diagram below provides an intuition on this kernel generation method. Imgur class Involution(keras.layers.Layer): def __init__( self, channel, group_number, kernel_size, stride, reduction_ratio, name ): super().__init__(name=name) # Initialize the parameters. self.channel = channel self.group_number = group_number self.kernel_size = kernel_size self.stride = stride self.reduction_ratio = reduction_ratio def build(self, input_shape): # Get the shape of the input. (_, height, width, num_channels) = input_shape # Scale the height and width with respect to the strides. height = height // self.stride width = width // self.stride # Define a layer that average pools the input tensor # if stride is more than 1. self.stride_layer = ( keras.layers.AveragePooling2D( pool_size=self.stride, strides=self.stride, padding=\"same\" ) if self.stride > 1 else tf.identity ) # Define the kernel generation layer. self.kernel_gen = keras.Sequential( [ keras.layers.Conv2D( filters=self.channel // self.reduction_ratio, kernel_size=1 ), keras.layers.BatchNormalization(), keras.layers.ReLU(), keras.layers.Conv2D( filters=self.kernel_size * self.kernel_size * self.group_number, kernel_size=1, ), ] ) # Define reshape layers self.kernel_reshape = keras.layers.Reshape( target_shape=( height, width, self.kernel_size * self.kernel_size, 1, self.group_number, ) ) self.input_patches_reshape = keras.layers.Reshape( target_shape=( height, width, self.kernel_size * self.kernel_size, num_channels // self.group_number, self.group_number, ) ) self.output_reshape = keras.layers.Reshape( target_shape=(height, width, num_channels) ) def call(self, x): # Generate the kernel with respect to the input tensor. # B, H, W, K*K*G kernel_input = self.stride_layer(x) kernel = self.kernel_gen(kernel_input) # reshape the kerenl # B, H, W, K*K, 1, G kernel = self.kernel_reshape(kernel) # Extract input patches. # B, H, W, K*K*C input_patches = tf.image.extract_patches( images=x, sizes=[1, self.kernel_size, self.kernel_size, 1], strides=[1, self.stride, self.stride, 1], rates=[1, 1, 1, 1], padding=\"SAME\", ) # Reshape the input patches to align with later operations. # B, H, W, K*K, C//G, G input_patches = self.input_patches_reshape(input_patches) # Compute the multiply-add operation of kernels and patches. # B, H, W, K*K, C//G, G output = tf.multiply(kernel, input_patches) # B, H, W, C//G, G output = tf.reduce_sum(output, axis=3) # Reshape the output kernel. # B, H, W, C output = self.output_reshape(output) # Return the output tensor and the kernel. return output, kernel Testing the Involution layer # Define the input tensor. input_tensor = tf.random.normal((32, 256, 256, 3)) # Compute involution with stride 1. output_tensor, _ = Involution( channel=3, group_number=1, kernel_size=5, stride=1, reduction_ratio=1, name=\"inv_1\" )(input_tensor) print(f\"with stride 1 ouput shape: {output_tensor.shape}\") # Compute involution with stride 2. 
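# With stride=2 the layer first average-pools the input before generating the
# kernels and extracts patches with a stride of 2, so the 256x256 input below
# comes out with a 128x128 spatial resolution.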
output_tensor, _ = Involution( channel=3, group_number=1, kernel_size=5, stride=2, reduction_ratio=1, name=\"inv_2\" )(input_tensor) print(f\"with stride 2 ouput shape: {output_tensor.shape}\") # Compute involution with stride 1, channel 16 and reduction ratio 2. output_tensor, _ = Involution( channel=16, group_number=1, kernel_size=5, stride=1, reduction_ratio=2, name=\"inv_3\" )(input_tensor) print( \"with channel 16 and reduction ratio 2 ouput shape: {}\".format(output_tensor.shape) ) with stride 1 ouput shape: (32, 256, 256, 3) with stride 2 ouput shape: (32, 128, 128, 3) with channel 16 and reduction ratio 2 ouput shape: (32, 256, 256, 3) Image Classification In this section, we will build an image-classifier model. There will be two models one with convolutions and the other with involutions. The image-classification model is heavily inspired by this Convolutional Neural Network (CNN) tutorial from Google. Get the CIFAR10 Dataset # Load the CIFAR10 dataset. print(\"loading the CIFAR10 dataset...\") (train_images, train_labels), ( test_images, test_labels, ) = keras.datasets.cifar10.load_data() # Normalize pixel values to be between 0 and 1. (train_images, test_images) = (train_images / 255.0, test_images / 255.0) # Shuffle and batch the dataset. train_ds = ( tf.data.Dataset.from_tensor_slices((train_images, train_labels)) .shuffle(256) .batch(256) ) test_ds = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(256) loading the CIFAR10 dataset... Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz 170500096/170498071 [==============================] - 3s 0us/step Visualise the data class_names = [ \"airplane\", \"automobile\", \"bird\", \"cat\", \"deer\", \"dog\", \"frog\", \"horse\", \"ship\", \"truck\", ] plt.figure(figsize=(10, 10)) for i in range(25): plt.subplot(5, 5, i + 1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i]) plt.xlabel(class_names[train_labels[i][0]]) plt.show() png Convolutional Neural Network # Build the conv model. print(\"building the convolution model...\") conv_model = keras.Sequential( [ keras.layers.Conv2D(32, (3, 3), input_shape=(32, 32, 3), padding=\"same\"), keras.layers.ReLU(name=\"relu1\"), keras.layers.MaxPooling2D((2, 2)), keras.layers.Conv2D(64, (3, 3), padding=\"same\"), keras.layers.ReLU(name=\"relu2\"), keras.layers.MaxPooling2D((2, 2)), keras.layers.Conv2D(64, (3, 3), padding=\"same\"), keras.layers.ReLU(name=\"relu3\"), keras.layers.Flatten(), keras.layers.Dense(64, activation=\"relu\"), keras.layers.Dense(10), ] ) # Compile the mode with the necessary loss function and optimizer. print(\"compiling the convolution model...\") conv_model.compile( optimizer=\"adam\", loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[\"accuracy\"], ) # Train the model. print(\"conv model training...\") conv_hist = conv_model.fit(train_ds, epochs=20, validation_data=test_ds) building the convolution model... compiling the convolution model... conv model training... 
Epoch 1/20 196/196 [==============================] - 16s 16ms/step - loss: 1.6367 - accuracy: 0.4041 - val_loss: 1.3283 - val_accuracy: 0.5275 Epoch 2/20 196/196 [==============================] - 3s 16ms/step - loss: 1.2207 - accuracy: 0.5675 - val_loss: 1.1365 - val_accuracy: 0.5965 Epoch 3/20 196/196 [==============================] - 3s 16ms/step - loss: 1.0649 - accuracy: 0.6267 - val_loss: 1.0219 - val_accuracy: 0.6378 Epoch 4/20 196/196 [==============================] - 3s 16ms/step - loss: 0.9642 - accuracy: 0.6613 - val_loss: 0.9741 - val_accuracy: 0.6601 Epoch 5/20 196/196 [==============================] - 3s 16ms/step - loss: 0.8779 - accuracy: 0.6939 - val_loss: 0.9145 - val_accuracy: 0.6826 Epoch 6/20 196/196 [==============================] - 3s 16ms/step - loss: 0.8126 - accuracy: 0.7180 - val_loss: 0.8841 - val_accuracy: 0.6913 Epoch 7/20 196/196 [==============================] - 3s 16ms/step - loss: 0.7641 - accuracy: 0.7334 - val_loss: 0.8667 - val_accuracy: 0.7049 Epoch 8/20 196/196 [==============================] - 3s 16ms/step - loss: 0.7210 - accuracy: 0.7503 - val_loss: 0.8363 - val_accuracy: 0.7089 Epoch 9/20 196/196 [==============================] - 3s 16ms/step - loss: 0.6796 - accuracy: 0.7630 - val_loss: 0.8150 - val_accuracy: 0.7203 Epoch 10/20 196/196 [==============================] - 3s 15ms/step - loss: 0.6370 - accuracy: 0.7793 - val_loss: 0.9021 - val_accuracy: 0.6964 Epoch 11/20 196/196 [==============================] - 3s 15ms/step - loss: 0.6089 - accuracy: 0.7886 - val_loss: 0.8336 - val_accuracy: 0.7207 Epoch 12/20 196/196 [==============================] - 3s 15ms/step - loss: 0.5723 - accuracy: 0.8022 - val_loss: 0.8326 - val_accuracy: 0.7246 Epoch 13/20 196/196 [==============================] - 3s 15ms/step - loss: 0.5375 - accuracy: 0.8144 - val_loss: 0.8482 - val_accuracy: 0.7223 Epoch 14/20 196/196 [==============================] - 3s 15ms/step - loss: 0.5121 - accuracy: 0.8230 - val_loss: 0.8244 - val_accuracy: 0.7306 Epoch 15/20 196/196 [==============================] - 3s 15ms/step - loss: 0.4786 - accuracy: 0.8363 - val_loss: 0.8313 - val_accuracy: 0.7363 Epoch 16/20 196/196 [==============================] - 3s 15ms/step - loss: 0.4518 - accuracy: 0.8458 - val_loss: 0.8634 - val_accuracy: 0.7293 Epoch 17/20 196/196 [==============================] - 3s 16ms/step - loss: 0.4403 - accuracy: 0.8489 - val_loss: 0.8683 - val_accuracy: 0.7290 Epoch 18/20 196/196 [==============================] - 3s 16ms/step - loss: 0.4094 - accuracy: 0.8576 - val_loss: 0.8982 - val_accuracy: 0.7272 Epoch 19/20 196/196 [==============================] - 3s 16ms/step - loss: 0.3941 - accuracy: 0.8630 - val_loss: 0.9537 - val_accuracy: 0.7200 Epoch 20/20 196/196 [==============================] - 3s 15ms/step - loss: 0.3778 - accuracy: 0.8691 - val_loss: 0.9780 - val_accuracy: 0.7184 Involutional Neural Network # Build the involution model. 
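# Each Involution layer below uses channel=3 and stride=1, so only the
# MaxPooling2D layers reduce the spatial resolution and the feature maps keep
# 3 channels throughout, which is largely why the involution model compared
# later ends up with far fewer parameters than the convolutional one.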
print(\"building the involution model...\") inputs = keras.Input(shape=(32, 32, 3)) x, _ = Involution( channel=3, group_number=1, kernel_size=3, stride=1, reduction_ratio=2, name=\"inv_1\" )(inputs) x = keras.layers.ReLU()(x) x = keras.layers.MaxPooling2D((2, 2))(x) x, _ = Involution( channel=3, group_number=1, kernel_size=3, stride=1, reduction_ratio=2, name=\"inv_2\" )(x) x = keras.layers.ReLU()(x) x = keras.layers.MaxPooling2D((2, 2))(x) x, _ = Involution( channel=3, group_number=1, kernel_size=3, stride=1, reduction_ratio=2, name=\"inv_3\" )(x) x = keras.layers.ReLU()(x) x = keras.layers.Flatten()(x) x = keras.layers.Dense(64, activation=\"relu\")(x) outputs = keras.layers.Dense(10)(x) inv_model = keras.Model(inputs=[inputs], outputs=[outputs], name=\"inv_model\") # Compile the mode with the necessary loss function and optimizer. print(\"compiling the involution model...\") inv_model.compile( optimizer=\"adam\", loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[\"accuracy\"], ) # train the model print(\"inv model training...\") inv_hist = inv_model.fit(train_ds, epochs=20, validation_data=test_ds) building the involution model... compiling the involution model... inv model training... Epoch 1/20 196/196 [==============================] - 5s 21ms/step - loss: 2.1570 - accuracy: 0.2266 - val_loss: 2.2712 - val_accuracy: 0.1557 Epoch 2/20 196/196 [==============================] - 4s 20ms/step - loss: 1.9445 - accuracy: 0.3054 - val_loss: 1.9762 - val_accuracy: 0.2963 Epoch 3/20 196/196 [==============================] - 4s 20ms/step - loss: 1.8469 - accuracy: 0.3433 - val_loss: 1.8044 - val_accuracy: 0.3669 Epoch 4/20 196/196 [==============================] - 4s 20ms/step - loss: 1.7837 - accuracy: 0.3646 - val_loss: 1.7640 - val_accuracy: 0.3761 Epoch 5/20 196/196 [==============================] - 4s 20ms/step - loss: 1.7369 - accuracy: 0.3784 - val_loss: 1.7180 - val_accuracy: 0.3907 Epoch 6/20 196/196 [==============================] - 4s 19ms/step - loss: 1.7031 - accuracy: 0.3917 - val_loss: 1.6839 - val_accuracy: 0.4004 Epoch 7/20 196/196 [==============================] - 4s 19ms/step - loss: 1.6748 - accuracy: 0.3988 - val_loss: 1.6786 - val_accuracy: 0.4037 Epoch 8/20 196/196 [==============================] - 4s 19ms/step - loss: 1.6592 - accuracy: 0.4052 - val_loss: 1.6550 - val_accuracy: 0.4103 Epoch 9/20 196/196 [==============================] - 4s 19ms/step - loss: 1.6412 - accuracy: 0.4106 - val_loss: 1.6346 - val_accuracy: 0.4158 Epoch 10/20 196/196 [==============================] - 4s 19ms/step - loss: 1.6251 - accuracy: 0.4178 - val_loss: 1.6330 - val_accuracy: 0.4145 Epoch 11/20 196/196 [==============================] - 4s 19ms/step - loss: 1.6124 - accuracy: 0.4206 - val_loss: 1.6214 - val_accuracy: 0.4218 Epoch 12/20 196/196 [==============================] - 4s 19ms/step - loss: 1.5978 - accuracy: 0.4252 - val_loss: 1.6121 - val_accuracy: 0.4239 Epoch 13/20 196/196 [==============================] - 4s 19ms/step - loss: 1.5868 - accuracy: 0.4301 - val_loss: 1.5974 - val_accuracy: 0.4284 Epoch 14/20 196/196 [==============================] - 4s 19ms/step - loss: 1.5759 - accuracy: 0.4353 - val_loss: 1.5939 - val_accuracy: 0.4325 Epoch 15/20 196/196 [==============================] - 4s 19ms/step - loss: 1.5677 - accuracy: 0.4369 - val_loss: 1.5889 - val_accuracy: 0.4372 Epoch 16/20 196/196 [==============================] - 4s 20ms/step - loss: 1.5586 - accuracy: 0.4413 - val_loss: 1.5817 - val_accuracy: 0.4376 Epoch 17/20 196/196 
[==============================] - 4s 20ms/step - loss: 1.5507 - accuracy: 0.4447 - val_loss: 1.5776 - val_accuracy: 0.4381 Epoch 18/20 196/196 [==============================] - 4s 20ms/step - loss: 1.5420 - accuracy: 0.4477 - val_loss: 1.5785 - val_accuracy: 0.4378 Epoch 19/20 196/196 [==============================] - 4s 20ms/step - loss: 1.5357 - accuracy: 0.4484 - val_loss: 1.5639 - val_accuracy: 0.4431 Epoch 20/20 196/196 [==============================] - 4s 20ms/step - loss: 1.5305 - accuracy: 0.4530 - val_loss: 1.5661 - val_accuracy: 0.4418 Comparisons In this section, we will be looking at both the models and compare a few pointers. Parameters One can see that with a similar architecture the parameters in a CNN is much larger than that of an INN (Involutional Neural Network). conv_model.summary() inv_model.summary() Model: \"sequential_3\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_6 (Conv2D) (None, 32, 32, 32) 896 _________________________________________________________________ relu1 (ReLU) (None, 32, 32, 32) 0 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 16, 16, 32) 0 _________________________________________________________________ conv2d_7 (Conv2D) (None, 16, 16, 64) 18496 _________________________________________________________________ relu2 (ReLU) (None, 16, 16, 64) 0 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 8, 8, 64) 0 _________________________________________________________________ conv2d_8 (Conv2D) (None, 8, 8, 64) 36928 _________________________________________________________________ relu3 (ReLU) (None, 8, 8, 64) 0 _________________________________________________________________ flatten (Flatten) (None, 4096) 0 _________________________________________________________________ dense (Dense) (None, 64) 262208 _________________________________________________________________ dense_1 (Dense) (None, 10) 650 ================================================================= Total params: 319,178 Trainable params: 319,178 Non-trainable params: 0 _________________________________________________________________ Model: \"inv_model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 32, 32, 3)] 0 _________________________________________________________________ inv_1 (Involution) ((None, 32, 32, 3), (None 26 _________________________________________________________________ re_lu_3 (ReLU) (None, 32, 32, 3) 0 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 16, 16, 3) 0 _________________________________________________________________ inv_2 (Involution) ((None, 16, 16, 3), (None 26 _________________________________________________________________ re_lu_4 (ReLU) (None, 16, 16, 3) 0 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 8, 8, 3) 0 _________________________________________________________________ inv_3 (Involution) ((None, 8, 8, 3), (None, 26 _________________________________________________________________ re_lu_5 (ReLU) (None, 8, 8, 3) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 192) 0 
_________________________________________________________________ dense_2 (Dense) (None, 64) 12352 _________________________________________________________________ dense_3 (Dense) (None, 10) 650 ================================================================= Total params: 13,080 Trainable params: 13,074 Non-trainable params: 6 _________________________________________________________________ Loss and Accuracy Plots Here, the loss and accuracy plots demonstrate that INNs are slower learners (with fewer parameters). plt.figure(figsize=(20, 5)) plt.subplot(1, 2, 1) plt.title(\"Convolution Loss\") plt.plot(conv_hist.history[\"loss\"], label=\"loss\") plt.plot(conv_hist.history[\"val_loss\"], label=\"val_loss\") plt.legend() plt.subplot(1, 2, 2) plt.title(\"Involution Loss\") plt.plot(inv_hist.history[\"loss\"], label=\"loss\") plt.plot(inv_hist.history[\"val_loss\"], label=\"val_loss\") plt.legend() plt.show() plt.figure(figsize=(20, 5)) plt.subplot(1, 2, 1) plt.title(\"Convolution Accuracy\") plt.plot(conv_hist.history[\"accuracy\"], label=\"accuracy\") plt.plot(conv_hist.history[\"val_accuracy\"], label=\"val_accuracy\") plt.legend() plt.subplot(1, 2, 2) plt.title(\"Involution Accuracy\") plt.plot(inv_hist.history[\"accuracy\"], label=\"accuracy\") plt.plot(inv_hist.history[\"val_accuracy\"], label=\"val_accuracy\") plt.legend() plt.show() png png Visualizing Involution Kernels To visualize the kernels, we take the sum of the K×K values from each involution kernel. The summed values at different spatial locations form the corresponding heat map. The authors mention: \"Our proposed involution is reminiscent of self-attention and essentially could become a generalized version of it.\" With the visualization of the kernel we can indeed obtain an attention map of the image. The learned involution kernels provide attention to individual spatial positions of the input tensor. This location-specific property makes involution a generic space of models to which self-attention belongs. layer_names = [\"inv_1\", \"inv_2\", \"inv_3\"] outputs = [inv_model.get_layer(name).output for name in layer_names] vis_model = keras.Model(inv_model.input, outputs) fig, axes = plt.subplots(nrows=10, ncols=4, figsize=(10, 30)) for ax, test_image in zip(axes, test_images[:10]): (inv1_out, inv2_out, inv3_out) = vis_model.predict(test_image[None, ...]) _, inv1_kernel = inv1_out _, inv2_kernel = inv2_out _, inv3_kernel = inv3_out inv1_kernel = tf.reduce_sum(inv1_kernel, axis=[-1, -2, -3]) inv2_kernel = tf.reduce_sum(inv2_kernel, axis=[-1, -2, -3]) inv3_kernel = tf.reduce_sum(inv3_kernel, axis=[-1, -2, -3]) ax[0].imshow(keras.preprocessing.image.array_to_img(test_image)) ax[0].set_title(\"Input Image\") ax[1].imshow(keras.preprocessing.image.array_to_img(inv1_kernel[0, ..., None])) ax[1].set_title(\"Involution Kernel 1\") ax[2].imshow(keras.preprocessing.image.array_to_img(inv2_kernel[0, ..., None])) ax[2].set_title(\"Involution Kernel 2\") ax[3].imshow(keras.preprocessing.image.array_to_img(inv3_kernel[0, ..., None])) ax[3].set_title(\"Involution Kernel 3\") png Conclusions In this example, the main focus was to build an Involution layer that can be easily reused. While our comparisons were based on a specific task, feel free to use the layer for different tasks and report your results. In my opinion, the key take-away of involution is its relationship with self-attention. The intuition behind location-specific and channel-specific processing makes sense in a lot of tasks. 
Moving forward one can: Look at Yannick's video on involution for a better understanding. Experiment with the various hyperparameters of the involution layer. Build different models with the involution layer. Try building a different kernel generation method altogether. Training a keypoint detector with data augmentation and transfer learning. Keypoint detection consists of locating key object parts. For example, the key parts of our faces include nose tips, eyebrows, eye corners, and so on. These parts help to represent the underlying object in a feature-rich manner. Keypoint detection has applications that include pose estimation, face detection, etc. In this example, we will build a keypoint detector using the StanfordExtra dataset, using transfer learning. This example requires TensorFlow 2.4 or higher, as well as imgaug library, which can be installed using the following command: !pip install -q -U imgaug Data collection The StanfordExtra dataset contains 12,000 images of dogs together with keypoints and segmentation maps. It is developed from the Stanford dogs dataset. It can be downloaded with the command below: !wget -q http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar Annotations are provided as a single JSON file in the StanfordExtra dataset and one needs to fill this form to get access to it. The authors explicitly instruct users not to share the JSON file, and this example respects this wish: you should obtain the JSON file yourself. The JSON file is expected to be locally available as stanfordextra_v12.zip. After the files are downloaded, we can extract the archives. !tar xf images.tar !unzip -qq ~/stanfordextra_v12.zip Imports from tensorflow.keras import layers from tensorflow import keras import tensorflow as tf from imgaug.augmentables.kps import KeypointsOnImage from imgaug.augmentables.kps import Keypoint import imgaug.augmenters as iaa from PIL import Image from sklearn.model_selection import train_test_split from matplotlib import pyplot as plt import pandas as pd import numpy as np import json import os Define hyperparameters IMG_SIZE = 224 BATCH_SIZE = 64 EPOCHS = 5 NUM_KEYPOINTS = 24 * 2 # 24 pairs each having x and y coordinates Load data The authors also provide a metadata file that specifies additional information about the keypoints, like color information, animal pose name, etc. We will load this file in a pandas dataframe to extract information for visualization purposes. IMG_DIR = \"Images\" JSON = \"StanfordExtra_V12/StanfordExtra_v12.json\" KEYPOINT_DEF = ( \"https://github.com/benjiebob/StanfordExtra/raw/master/keypoint_definitions.csv\" ) # Load the ground-truth annotations. with open(JSON) as infile: json_data = json.load(infile) # Set up a dictionary, mapping all the ground-truth information # with respect to the path of the image. 
json_dict = {i[\"img_path\"]: i for i in json_data} A single entry of json_dict looks like the following: 'n02085782-Japanese_spaniel/n02085782_2886.jpg': {'img_bbox': [205, 20, 116, 201], 'img_height': 272, 'img_path': 'n02085782-Japanese_spaniel/n02085782_2886.jpg', 'img_width': 350, 'is_multiple_dogs': False, 'joints': [[108.66666666666667, 252.0, 1], [147.66666666666666, 229.0, 1], [163.5, 208.5, 1], [0, 0, 0], [0, 0, 0], [0, 0, 0], [54.0, 244.0, 1], [77.33333333333333, 225.33333333333334, 1], [79.0, 196.5, 1], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [150.66666666666666, 86.66666666666667, 1], [88.66666666666667, 73.0, 1], [116.0, 106.33333333333333, 1], [109.0, 123.33333333333333, 1], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], 'seg': ...} In this example, the keys we are interested in are: img_path joints There are a total of 24 entries present inside joints. Each entry has 3 values: x-coordinate y-coordinate visibility flag of the keypoints (1 indicates visibility and 0 indicates non-visibility) As we can see joints contain multiple [0, 0, 0] entries which denote that those keypoints were not labeled. In this example, we will consider both non-visible as well as unlabeled keypoints in order to allow mini-batch learning. # Load the metdata definition file and preview it. keypoint_def = pd.read_csv(KEYPOINT_DEF) keypoint_def.head() # Extract the colours and labels. colours = keypoint_def[\"Hex colour\"].values.tolist() colours = [\"#\" + colour for colour in colours] labels = keypoint_def[\"Name\"].values.tolist() # Utility for reading an image and for getting its annotations. def get_dog(name): data = json_dict[name] img_data = plt.imread(os.path.join(IMG_DIR, data[\"img_path\"])) # If the image is RGBA convert it to RGB. if img_data.shape[-1] == 4: img_data = img_data.astype(np.uint8) img_data = Image.fromarray(img_data) img_data = np.array(img_data.convert(\"RGB\")) data[\"img_data\"] = img_data return data Visualize data Now, we write a utility function to visualize the images and their keypoints. # Parts of this code come from here: # https://github.com/benjiebob/StanfordExtra/blob/master/demo.ipynb def visualize_keypoints(images, keypoints): fig, axes = plt.subplots(nrows=len(images), ncols=2, figsize=(16, 12)) [ax.axis(\"off\") for ax in np.ravel(axes)] for (ax_orig, ax_all), image, current_keypoint in zip(axes, images, keypoints): ax_orig.imshow(image) ax_all.imshow(image) # If the keypoints were formed by `imgaug` then the coordinates need # to be iterated differently. if isinstance(current_keypoint, KeypointsOnImage): for idx, kp in enumerate(current_keypoint.keypoints): ax_all.scatter( [kp.x], [kp.y], c=colours[idx], marker=\"x\", s=50, linewidths=5 ) else: current_keypoint = np.array(current_keypoint) # Since the last entry is the visibility flag, we discard it. current_keypoint = current_keypoint[:, :2] for idx, (x, y) in enumerate(current_keypoint): ax_all.scatter([x], [y], c=colours[idx], marker=\"x\", s=50, linewidths=5) plt.tight_layout(pad=2.0) plt.show() # Select four samples randomly for visualization. 
samples = list(json_dict.keys()) num_samples = 4 selected_samples = np.random.choice(samples, num_samples, replace=False) images, keypoints = [], [] for sample in selected_samples: data = get_dog(sample) image = data[\"img_data\"] keypoint = data[\"joints\"] images.append(image) keypoints.append(keypoint) visualize_keypoints(images, keypoints) png The plots show that we have images of non-uniform sizes, which is expected in most real-world scenarios. However, if we resize these images to have a uniform shape (for instance (224 x 224)) their ground-truth annotations will also be affected. The same applies if we apply any geometric transformation (horizontal flip, for e.g.) to an image. Fortunately, imgaug provides utilities that can handle this issue. In the next section, we will write a data generator inheriting the [keras.utils.Sequence](/api/utils/python_utils#sequence-class) class that applies data augmentation on batches of data using imgaug. Prepare data generator class KeyPointsDataset(keras.utils.Sequence): def __init__(self, image_keys, aug, batch_size=BATCH_SIZE, train=True): self.image_keys = image_keys self.aug = aug self.batch_size = batch_size self.train = train self.on_epoch_end() def __len__(self): return len(self.image_keys) // self.batch_size def on_epoch_end(self): self.indexes = np.arange(len(self.image_keys)) if self.train: np.random.shuffle(self.indexes) def __getitem__(self, index): indexes = self.indexes[index * self.batch_size : (index + 1) * self.batch_size] image_keys_temp = [self.image_keys[k] for k in indexes] (images, keypoints) = self.__data_generation(image_keys_temp) return (images, keypoints) def __data_generation(self, image_keys_temp): batch_images = np.empty((self.batch_size, IMG_SIZE, IMG_SIZE, 3), dtype=\"int\") batch_keypoints = np.empty( (self.batch_size, 1, 1, NUM_KEYPOINTS), dtype=\"float32\" ) for i, key in enumerate(image_keys_temp): data = get_dog(key) current_keypoint = np.array(data[\"joints\"])[:, :2] kps = [] # To apply our data augmentation pipeline, we first need to # form Keypoint objects with the original coordinates. for j in range(0, len(current_keypoint)): kps.append(Keypoint(x=current_keypoint[j][0], y=current_keypoint[j][1])) # We then project the original image and its keypoint coordinates. current_image = data[\"img_data\"] kps_obj = KeypointsOnImage(kps, shape=current_image.shape) # Apply the augmentation pipeline. (new_image, new_kps_obj) = self.aug(image=current_image, keypoints=kps_obj) batch_images[i,] = new_image # Parse the coordinates from the new keypoint object. kp_temp = [] for keypoint in new_kps_obj: kp_temp.append(np.nan_to_num(keypoint.x)) kp_temp.append(np.nan_to_num(keypoint.y)) # More on why this reshaping later. batch_keypoints[i,] = np.array(kp_temp).reshape(1, 1, 24 * 2) # Scale the coordinates to [0, 1] range. batch_keypoints = batch_keypoints / IMG_SIZE return (batch_images, batch_keypoints) To know more about how to operate with keypoints in imgaug check out this document. Define augmentation transforms train_aug = iaa.Sequential( [ iaa.Resize(IMG_SIZE, interpolation=\"linear\"), iaa.Fliplr(0.3), # `Sometimes()` applies a function randomly to the inputs with # a given probability (0.3, in this case). 
iaa.Sometimes(0.3, iaa.Affine(rotate=10, scale=(0.5, 0.7))), ] ) test_aug = iaa.Sequential([iaa.Resize(IMG_SIZE, interpolation=\"linear\")]) Create training and validation splits np.random.shuffle(samples) train_keys, validation_keys = ( samples[int(len(samples) * 0.15) :], samples[: int(len(samples) * 0.15)], ) Data generator investigation train_dataset = KeyPointsDataset(train_keys, train_aug) validation_dataset = KeyPointsDataset(validation_keys, test_aug, train=False) print(f\"Total batches in training set: {len(train_dataset)}\") print(f\"Total batches in validation set: {len(validation_dataset)}\") sample_images, sample_keypoints = next(iter(train_dataset)) assert sample_keypoints.max() == 1.0 assert sample_keypoints.min() == 0.0 sample_keypoints = sample_keypoints[:4].reshape(-1, 24, 2) * IMG_SIZE visualize_keypoints(sample_images[:4], sample_keypoints) Total batches in training set: 166 Total batches in validation set: 29 png Model building The Stanford dogs dataset (on which the StanfordExtra dataset is based) was built using the ImageNet-1k dataset. So, it is likely that the models pretrained on the ImageNet-1k dataset would be useful for this task. We will use a MobileNetV2 pre-trained on this dataset as a backbone to extract meaningful features from the images and then pass those to a custom regression head for predicting coordinates. def get_model(): # Load the pre-trained weights of MobileNetV2 and freeze the weights backbone = keras.applications.MobileNetV2( weights=\"imagenet\", include_top=False, input_shape=(IMG_SIZE, IMG_SIZE, 3) ) backbone.trainable = False inputs = layers.Input((IMG_SIZE, IMG_SIZE, 3)) x = keras.applications.mobilenet_v2.preprocess_input(inputs) x = backbone(x) x = layers.Dropout(0.3)(x) x = layers.SeparableConv2D( NUM_KEYPOINTS, kernel_size=5, strides=1, activation=\"relu\" )(x) outputs = layers.SeparableConv2D( NUM_KEYPOINTS, kernel_size=3, strides=1, activation=\"sigmoid\" )(x) return keras.Model(inputs, outputs, name=\"keypoint_detector\") Our custom network is fully-convolutional which makes it more parameter-friendly than the same version of the network having fully-connected dense layers. get_model().summary() Model: \"keypoint_detector\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, 224, 224, 3)] 0 _________________________________________________________________ tf.math.truediv (TFOpLambda) (None, 224, 224, 3) 0 _________________________________________________________________ tf.math.subtract (TFOpLambda (None, 224, 224, 3) 0 _________________________________________________________________ mobilenetv2_1.00_224 (Functi (None, 7, 7, 1280) 2257984 _________________________________________________________________ dropout (Dropout) (None, 7, 7, 1280) 0 _________________________________________________________________ separable_conv2d (SeparableC (None, 3, 3, 48) 93488 _________________________________________________________________ separable_conv2d_1 (Separabl (None, 1, 1, 48) 2784 ================================================================= Total params: 2,354,256 Trainable params: 96,272 Non-trainable params: 2,257,984 _________________________________________________________________ Notice the output shape of the network: (None, 1, 1, 48). This is why we have reshaped the coordinates as: batch_keypoints[i, :] = np.array(kp_temp).reshape(1, 1, 24 * 2). 
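To make the mapping concrete, the same reshaping can be inverted on the model's output: a prediction of shape (None, 1, 1, 48) flattens back into 24 (x, y) pairs that can be rescaled to pixel coordinates, which is exactly what we will do later when visualizing predictions. Below is a minimal sketch with a random dummy array standing in for a real prediction (purely illustrative, not part of the original pipeline):

import numpy as np

# A dummy batch of one prediction with the network's output shape (1, 1, 1, 48):
# a 1 x 1 spatial map holding 24 keypoints x 2 normalized coordinates.
dummy_pred = np.random.rand(1, 1, 1, 48).astype(np.float32)

# Undo the reshaping: flatten to (batch, 24, 2) and rescale from [0, 1]
# back to pixel coordinates of the (IMG_SIZE x IMG_SIZE) resized image.
keypoints = dummy_pred.reshape(-1, 24, 2) * 224  # 224 == IMG_SIZE
print(keypoints.shape)  # (1, 24, 2)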
Model compilation and training For this example, we will train the network only for five epochs. model = get_model() model.compile(loss=\"mse\", optimizer=keras.optimizers.Adam(1e-4)) model.fit(train_dataset, validation_data=validation_dataset, epochs=EPOCHS) Epoch 1/5 166/166 [==============================] - 85s 486ms/step - loss: 0.1087 - val_loss: 0.0950 Epoch 2/5 166/166 [==============================] - 78s 471ms/step - loss: 0.0830 - val_loss: 0.0778 Epoch 3/5 166/166 [==============================] - 78s 468ms/step - loss: 0.0778 - val_loss: 0.0739 Epoch 4/5 166/166 [==============================] - 78s 470ms/step - loss: 0.0753 - val_loss: 0.0711 Epoch 5/5 166/166 [==============================] - 78s 468ms/step - loss: 0.0735 - val_loss: 0.0692 Make predictions and visualize them sample_val_images, sample_val_keypoints = next(iter(validation_dataset)) sample_val_images = sample_val_images[:4] sample_val_keypoints = sample_val_keypoints[:4].reshape(-1, 24, 2) * IMG_SIZE predictions = model.predict(sample_val_images).reshape(-1, 24, 2) * IMG_SIZE # Ground-truth visualize_keypoints(sample_val_images, sample_val_keypoints) # Predictions visualize_keypoints(sample_val_images, predictions) png png Predictions will likely improve with more training. Going further Try using other augmentation transforms from imgaug to investigate how that changes the results. Here, we transferred the features from the pre-trained network linearly that is we did not fine-tune it. You are encouraged to fine-tune it on this task and see if that improves the performance. You can also try different architectures and see how they affect the final performance. Implementation of classical Knowledge Distillation. Introduction to Knowledge Distillation Knowledge Distillation is a procedure for model compression, in which a small (student) model is trained to match a large pre-trained (teacher) model. Knowledge is transferred from the teacher model to the student by minimizing a loss function, aimed at matching softened teacher logits as well as ground-truth labels. The logits are softened by applying a \"temperature\" scaling function in the softmax, effectively smoothing out the probability distribution and revealing inter-class relationships learned by the teacher. Reference: Hinton et al. (2015) Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import numpy as np Construct Distiller() class The custom Distiller() class, overrides the Model methods train_step, test_step, and compile(). In order to use the distiller, we need: A trained teacher model A student model to train A student loss function on the difference between student predictions and ground-truth A distillation loss function, along with a temperature, on the difference between the soft student predictions and the soft teacher labels An alpha factor to weight the student and distillation loss An optimizer for the student and (optional) metrics to evaluate performance In the train_step method, we perform a forward pass of both the teacher and student, calculate the loss with weighting of the student_loss and distillation_loss by alpha and 1 - alpha, respectively, and perform the backward pass. Note: only the student weights are updated, and therefore we only calculate the gradients for the student weights. In the test_step method, we evaluate the student model on the provided dataset. 
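Written out, the objective minimized in train_step below is (notation mine: z_s and z_t denote the student and teacher logits, T the temperature, and CE the sparse categorical cross-entropy computed from logits):

$$
\mathcal{L} \;=\; \alpha \, \mathrm{CE}\big(y, z_s\big) \;+\; (1 - \alpha) \, \mathrm{KL}\Big(\mathrm{softmax}\big(z_t / T\big) \,\Big\|\, \mathrm{softmax}\big(z_s / T\big)\Big)
$$

Note that Hinton et al. additionally scale the soft-target term by T^2 to keep its gradient magnitude comparable to that of the hard-label term; this example keeps the distillation term unscaled.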
class Distiller(keras.Model): def __init__(self, student, teacher): super(Distiller, self).__init__() self.teacher = teacher self.student = student def compile( self, optimizer, metrics, student_loss_fn, distillation_loss_fn, alpha=0.1, temperature=3, ): \"\"\" Configure the distiller. Args: optimizer: Keras optimizer for the student weights metrics: Keras metrics for evaluation student_loss_fn: Loss function of difference between student predictions and ground-truth distillation_loss_fn: Loss function of difference between soft student predictions and soft teacher predictions alpha: weight to student_loss_fn and 1-alpha to distillation_loss_fn temperature: Temperature for softening probability distributions. Larger temperature gives softer distributions. \"\"\" super(Distiller, self).compile(optimizer=optimizer, metrics=metrics) self.student_loss_fn = student_loss_fn self.distillation_loss_fn = distillation_loss_fn self.alpha = alpha self.temperature = temperature def train_step(self, data): # Unpack data x, y = data # Forward pass of teacher teacher_predictions = self.teacher(x, training=False) with tf.GradientTape() as tape: # Forward pass of student student_predictions = self.student(x, training=True) # Compute losses student_loss = self.student_loss_fn(y, student_predictions) distillation_loss = self.distillation_loss_fn( tf.nn.softmax(teacher_predictions / self.temperature, axis=1), tf.nn.softmax(student_predictions / self.temperature, axis=1), ) loss = self.alpha * student_loss + (1 - self.alpha) * distillation_loss # Compute gradients trainable_vars = self.student.trainable_variables gradients = tape.gradient(loss, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Update the metrics configured in `compile()`. self.compiled_metrics.update_state(y, student_predictions) # Return a dict of performance results = {m.name: m.result() for m in self.metrics} results.update( {\"student_loss\": student_loss, \"distillation_loss\": distillation_loss} ) return results def test_step(self, data): # Unpack the data x, y = data # Compute predictions y_prediction = self.student(x, training=False) # Calculate the loss student_loss = self.student_loss_fn(y, y_prediction) # Update the metrics. self.compiled_metrics.update_state(y, y_prediction) # Return a dict of performance results = {m.name: m.result() for m in self.metrics} results.update({\"student_loss\": student_loss}) return results Create student and teacher models Initialy, we create a teacher model and a smaller student model. Both models are convolutional neural networks and created using Sequential(), but could be any Keras model. 
# Create the teacher teacher = keras.Sequential( [ keras.Input(shape=(28, 28, 1)), layers.Conv2D(256, (3, 3), strides=(2, 2), padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding=\"same\"), layers.Conv2D(512, (3, 3), strides=(2, 2), padding=\"same\"), layers.Flatten(), layers.Dense(10), ], name=\"teacher\", ) # Create the student student = keras.Sequential( [ keras.Input(shape=(28, 28, 1)), layers.Conv2D(16, (3, 3), strides=(2, 2), padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding=\"same\"), layers.Conv2D(32, (3, 3), strides=(2, 2), padding=\"same\"), layers.Flatten(), layers.Dense(10), ], name=\"student\", ) # Clone student for later comparison student_scratch = keras.models.clone_model(student) Prepare the dataset The dataset used for training the teacher and distilling the teacher is MNIST, and the procedure would be equivalent for any other dataset, e.g. CIFAR-10, with a suitable choice of models. Both the student and teacher are trained on the training set and evaluated on the test set. # Prepare the train and test dataset. batch_size = 64 (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() # Normalize data x_train = x_train.astype(\"float32\") / 255.0 x_train = np.reshape(x_train, (-1, 28, 28, 1)) x_test = x_test.astype(\"float32\") / 255.0 x_test = np.reshape(x_test, (-1, 28, 28, 1)) Train the teacher In knowledge distillation we assume that the teacher is trained and fixed. Thus, we start by training the teacher model on the training set in the usual way. # Train teacher as usual teacher.compile( optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy()], ) # Train and evaluate teacher on data. teacher.fit(x_train, y_train, epochs=5) teacher.evaluate(x_test, y_test) Epoch 1/5 1875/1875 [==============================] - 248s 132ms/step - loss: 0.2438 - sparse_categorical_accuracy: 0.9220 Epoch 2/5 1875/1875 [==============================] - 263s 140ms/step - loss: 0.0881 - sparse_categorical_accuracy: 0.9738 Epoch 3/5 1875/1875 [==============================] - 245s 131ms/step - loss: 0.0650 - sparse_categorical_accuracy: 0.9811 Epoch 5/5 363/1875 [====>.........................] - ETA: 3:18 - loss: 0.0555 - sparse_categorical_accuracy: 0.9839 Distill teacher to student We have already trained the teacher model, and we only need to initialize a Distiller(student, teacher) instance, compile() it with the desired losses, hyperparameters and optimizer, and distill the teacher to the student. # Initialize and compile distiller distiller = Distiller(student=student, teacher=teacher) distiller.compile( optimizer=keras.optimizers.Adam(), metrics=[keras.metrics.SparseCategoricalAccuracy()], student_loss_fn=keras.losses.SparseCategoricalCrossentropy(from_logits=True), distillation_loss_fn=keras.losses.KLDivergence(), alpha=0.1, temperature=10, ) # Distill teacher to student distiller.fit(x_train, y_train, epochs=3) # Evaluate student on test dataset distiller.evaluate(x_test, y_test) Epoch 1/3 1875/1875 [==============================] - 242s 129ms/step - sparse_categorical_accuracy: 0.9761 - student_loss: 0.1526 - distillation_loss: 0.0226 Epoch 2/3 1875/1875 [==============================] - 281s 150ms/step - sparse_categorical_accuracy: 0.9863 - student_loss: 0.1384 - distillation_loss: 0.0185 Epoch 3/3 399/1875 [=====>........................] 
- ETA: 3:27 - sparse_categorical_accuracy: 0.9896 - student_loss: 0.1300 - distillation_loss: 0.0182 Train student from scratch for comparison We can also train an equivalent student model from scratch without the teacher, in order to evaluate the performance gain obtained by knowledge distillation. # Train student as doen usually student_scratch.compile( optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy()], ) # Train and evaluate student trained from scratch. student_scratch.fit(x_train, y_train, epochs=3) student_scratch.evaluate(x_test, y_test) Epoch 1/3 1875/1875 [==============================] - 4s 2ms/step - loss: 0.4731 - sparse_categorical_accuracy: 0.8550 Epoch 2/3 1875/1875 [==============================] - 4s 2ms/step - loss: 0.0966 - sparse_categorical_accuracy: 0.9710 Epoch 3/3 1875/1875 [==============================] - 4s 2ms/step - loss: 0.0750 - sparse_categorical_accuracy: 0.9773 313/313 [==============================] - 0s 963us/step - loss: 0.0691 - sparse_categorical_accuracy: 0.9778 [0.06905383616685867, 0.9778000116348267] If the teacher is trained for 5 full epochs and the student is distilled on this teacher for 3 full epochs, you should in this example experience a performance boost compared to training the same student model from scratch, and even compared to the teacher itself. You should expect the teacher to have accuracy around 97.6%, the student trained from scratch should be around 97.6%, and the distilled student should be around 98.1%. Remove or try out different seeds to use different weight initializations. How to optimally learn representations of images for a given resolution. It is a common belief that if we constrain vision models to perceive things as humans do, their performance can be improved. For example, in this work, Geirhos et al. showed that the vision models pre-trained on the ImageNet-1k dataset are biased toward texture whereas human beings mostly use the shape descriptor to develop a common perception. But does this belief always apply especially when it comes to improving the performance of vision models? It turns out it may not always be the case. When training vision models, it is common to resize images to a lower dimension ((224 x 224), (299 x 299), etc.) to allow mini-batch learning and also to keep up the compute limitations. We generally make use of image resizing methods like bilinear interpolation for this step and the resized images do not lose much of their perceptual character to the human eyes. In Learning to Resize Images for Computer Vision Tasks, Talebi et al. show that if we try to optimize the perceptual quality of the images for the vision models rather than the human eyes, their performance can further be improved. They investigate the following question: For a given image resolution and a model, how to best resize the given images? As shown in the paper, this idea helps to consistently improve the performance of the common vision models (pre-trained on ImageNet-1k) like DenseNet-121, ResNet-50, MobileNetV2, and EfficientNets. In this example, we will implement the learnable image resizing module as proposed in the paper and demonstrate that on the Cats and Dogs dataset using the DenseNet-121 architecture. This example requires TensorFlow 2.4 or higher. 
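For context, the conventional, non-learnable preprocessing that this example improves upon is a single fixed interpolation call, sketched below (the file name is hypothetical, and (150 x 150) is the target resolution we will use later in this example):

import tensorflow as tf

# Conventional preprocessing: every image is interpolated down to a fixed
# resolution with bilinear interpolation; there are no learnable parameters.
image = tf.io.decode_jpeg(tf.io.read_file(\"example.jpg\"))  # hypothetical file
resized = tf.image.resize(image, (150, 150), method=\"bilinear\")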
Setup from tensorflow.keras import layers from tensorflow import keras import tensorflow as tf import tensorflow_datasets as tfds tfds.disable_progress_bar() import matplotlib.pyplot as plt import numpy as np Define hyperparameters In order to facilitate mini-batch learning, we need to have a fixed shape for the images inside a given batch. This is why an initial resizing is required. We first resize all the images to (300 x 300) shape and then learn their optimal representation for the (150 x 150) resolution. INP_SIZE = (300, 300) TARGET_SIZE = (150, 150) INTERPOLATION = \"bilinear\" AUTO = tf.data.AUTOTUNE BATCH_SIZE = 64 EPOCHS = 5 In this example, we will use the bilinear interpolation but the learnable image resizer module is not dependent on any specific interpolation method. We can also use others, such as bicubic. Load and prepare the dataset For this example, we will only use 40% of the total training dataset. train_ds, validation_ds = tfds.load( \"cats_vs_dogs\", # Reserve 10% for validation split=[\"train[:40%]\", \"train[40%:50%]\"], as_supervised=True, ) def preprocess_dataset(image, label): image = tf.image.resize(image, (INP_SIZE[0], INP_SIZE[1])) label = tf.one_hot(label, depth=2) return (image, label) train_ds = ( train_ds.shuffle(BATCH_SIZE * 100) .map(preprocess_dataset, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) validation_ds = ( validation_ds.map(preprocess_dataset, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) Downloading and preparing dataset 786.68 MiB (download: 786.68 MiB, generated: Unknown size, total: 786.68 MiB) to /home/jupyter/tensorflow_datasets/cats_vs_dogs/4.0.0... WARNING:absl:1738 images were corrupted and were skipped Dataset cats_vs_dogs downloaded and prepared to /home/jupyter/tensorflow_datasets/cats_vs_dogs/4.0.0. Subsequent calls will reuse this data. Define the learnable resizer utilities The figure below (courtesy: Learning to Resize Images for Computer Vision Tasks) presents the structure of the learnable resizing module: def conv_block(x, filters, kernel_size, strides, activation=layers.LeakyReLU(0.2)): x = layers.Conv2D(filters, kernel_size, strides, padding=\"same\", use_bias=False)(x) x = layers.BatchNormalization()(x) if activation: x = activation(x) return x def res_block(x): inputs = x x = conv_block(x, 16, 3, 1) x = conv_block(x, 16, 3, 1, activation=None) return layers.Add()([inputs, x]) def get_learnable_resizer(filters=16, num_res_blocks=1, interpolation=INTERPOLATION): inputs = layers.Input(shape=[None, None, 3]) # First, perform naive resizing. naive_resize = layers.Resizing( *TARGET_SIZE, interpolation=interpolation )(inputs) # First convolution block without batch normalization. x = layers.Conv2D(filters=filters, kernel_size=7, strides=1, padding=\"same\")(inputs) x = layers.LeakyReLU(0.2)(x) # Second convolution block with batch normalization. x = layers.Conv2D(filters=filters, kernel_size=1, strides=1, padding=\"same\")(x) x = layers.LeakyReLU(0.2)(x) x = layers.BatchNormalization()(x) # Intermediate resizing as a bottleneck. bottleneck = layers.Resizing( *TARGET_SIZE, interpolation=interpolation )(x) # Residual passes. for _ in range(num_res_blocks): x = res_block(bottleneck) # Projection. x = layers.Conv2D( filters=filters, kernel_size=3, strides=1, padding=\"same\", use_bias=False )(x) x = layers.BatchNormalization()(x) # Skip connection. x = layers.Add()([bottleneck, x]) # Final resized image. 
x = layers.Conv2D(filters=3, kernel_size=7, strides=1, padding=\"same\")(x) final_resize = layers.Add()([naive_resize, x]) return tf.keras.Model(inputs, final_resize, name=\"learnable_resizer\") learnable_resizer = get_learnable_resizer() Visualize the outputs of the learnable resizing module Here, we visualize how the resized images would look like after being passed through the random weights of the resizer. sample_images, _ = next(iter(train_ds)) plt.figure(figsize=(16, 10)) for i, image in enumerate(sample_images[:6]): image = image / 255 ax = plt.subplot(3, 4, 2 * i + 1) plt.title(\"Input Image\") plt.imshow(image.numpy().squeeze()) plt.axis(\"off\") ax = plt.subplot(3, 4, 2 * i + 2) resized_image = learnable_resizer(image[None, ...]) plt.title(\"Resized Image\") plt.imshow(resized_image.numpy().squeeze()) plt.axis(\"off\") WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). png Model building utility def get_model(): backbone = tf.keras.applications.DenseNet121( weights=None, include_top=True, classes=2, input_shape=((TARGET_SIZE[0], TARGET_SIZE[1], 3)), ) backbone.trainable = True inputs = layers.Input((INP_SIZE[0], INP_SIZE[1], 3)) x = layers.Rescaling(scale=1.0 / 255)(inputs) x = learnable_resizer(x) outputs = backbone(x) return tf.keras.Model(inputs, outputs) The structure of the learnable image resizer module allows for flexible integrations with different vision models. 
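For instance, since the resizer accepts inputs of arbitrary resolution and always emits outputs at the (150 x 150) target resolution, swapping in a different classifier only requires changing the backbone. The sketch below reuses the learnable_resizer and the constants defined above; MobileNetV2 is my choice purely for illustration and is not part of the paper's reported comparison:

def get_mobilenet_model():
    # Any classifier that consumes TARGET_SIZE inputs can sit on top of the
    # learnable resizer; MobileNetV2 here is only an illustration.
    backbone = tf.keras.applications.MobileNetV2(
        weights=None,
        include_top=True,
        classes=2,
        input_shape=(TARGET_SIZE[0], TARGET_SIZE[1], 3),
    )
    inputs = layers.Input((INP_SIZE[0], INP_SIZE[1], 3))
    x = layers.Rescaling(scale=1.0 / 255)(inputs)
    x = learnable_resizer(x)
    outputs = backbone(x)
    return tf.keras.Model(inputs, outputs)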
Compile and train our model with learnable resizer model = get_model() model.compile( loss=keras.losses.CategoricalCrossentropy(label_smoothing=0.1), optimizer=\"sgd\", metrics=[\"accuracy\"], ) model.fit(train_ds, validation_data=validation_ds, epochs=EPOCHS) Epoch 1/5 146/146 [==============================] - 49s 247ms/step - loss: 0.6956 - accuracy: 0.5697 - val_loss: 0.6958 - val_accuracy: 0.5103 Epoch 2/5 146/146 [==============================] - 33s 216ms/step - loss: 0.6685 - accuracy: 0.6117 - val_loss: 0.6955 - val_accuracy: 0.5387 Epoch 3/5 146/146 [==============================] - 33s 216ms/step - loss: 0.6542 - accuracy: 0.6190 - val_loss: 0.7410 - val_accuracy: 0.5684 Epoch 4/5 146/146 [==============================] - 33s 216ms/step - loss: 0.6357 - accuracy: 0.6576 - val_loss: 0.9322 - val_accuracy: 0.5314 Epoch 5/5 146/146 [==============================] - 33s 215ms/step - loss: 0.6224 - accuracy: 0.6745 - val_loss: 0.6526 - val_accuracy: 0.6672 Visualize the outputs of the trained visualizer plt.figure(figsize=(16, 10)) for i, image in enumerate(sample_images[:6]): image = image / 255 ax = plt.subplot(3, 4, 2 * i + 1) plt.title(\"Input Image\") plt.imshow(image.numpy().squeeze()) plt.axis(\"off\") ax = plt.subplot(3, 4, 2 * i + 2) resized_image = learnable_resizer(image[None, ...]) plt.title(\"Resized Image\") plt.imshow(resized_image.numpy().squeeze() / 10) plt.axis(\"off\") WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). WARNING:matplotlib.image:Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). png The plot shows that the visuals of the images have improved with training. The following table shows the benefits of using the resizing module in comparison to using the bilinear interpolation: Model Number of parameters (Million) Top-1 accuracy With the learnable resizer 7.051717 67.67% Without the learnable resizer 7.039554 60.19% For more details, you can check out this repository. Note the above-reported models were trained for 10 epochs on 90% of the training set of Cats and Dogs unlike this example. Also, note that the increase in the number of parameters due to the resizing module is very negligible. To ensure that the improvement in the performance is not due to stochasticity, the models were trained using the same initial random weights. Now, a question worth asking here is - isn't the improved accuracy simply a consequence of adding more layers (the resizer is a mini network after all) to the model, compared to the baseline? To show that it is not the case, the authors conduct the following experiment: Take a pre-trained model trained some size, say (224 x 224). Now, first, use it to infer predictions on images resized to a lower resolution. Record the performance. For the second experiment, plug in the resizer module at the top of the pre-trained model and warm-start the training. 
Record the performance. Now, the authors argue that using the second option is better because it helps the model learn how to adjust the representations better with respect to the given resolution. Since the results are purely empirical, a few more experiments, such as analyzing the cross-channel interaction, would have been even better. It is worth noting that elements like Squeeze-and-Excitation (SE) blocks and Global Context (GC) blocks also add a few parameters to an existing network, but they are known to help a network process information in systematic ways to improve the overall performance. Notes To impose shape bias inside the vision models, Geirhos et al. trained them with a combination of natural and stylized images. It might be interesting to investigate if this learnable resizing module could achieve something similar, as the outputs seem to discard the texture information. The resizer module can handle arbitrary resolutions and aspect ratios, which is very important for tasks like object detection and segmentation. There is another closely related topic on adaptive image resizing that attempts to resize images/feature maps adaptively during training. EfficientNetV2 uses this idea. Implementing the MIRNet architecture for low-light image enhancement. Introduction With the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as in photography, security, medical imaging, and remote sensing. In this example, we implement the MIRNet model for low-light image enhancement, a fully-convolutional architecture that learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. References: Learning Enriched Features for Real Image Restoration and Enhancement The Retinex Theory of Color Vision Two deterministic half-quadratic regularization algorithms for computed imaging Downloading LOLDataset The LoL Dataset has been created for low-light image enhancement. It provides 485 images for training and 15 for testing. Each image pair in the dataset consists of a low-light input image and its corresponding well-exposed reference image. import os import cv2 import random import numpy as np from glob import glob from PIL import Image, ImageOps import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers !gdown https://drive.google.com/uc?id=1DdGIJ4PZPlF2ikl8mNM9V-PdVxVLbQi6 !unzip -q lol_dataset.zip Downloading... From: https://drive.google.com/uc?id=1DdGIJ4PZPlF2ikl8mNM9V-PdVxVLbQi6 To: /content/keras-io/scripts/tmp_2614641/lol_dataset.zip 347MB [00:03, 108MB/s] Creating a TensorFlow Dataset We use 300 image pairs from the LoL Dataset's training set for training, and we use the remaining 185 image pairs for validation. We generate random crops of size 128 x 128 from the image pairs to be used for both training and validation. 
random.seed(10) IMAGE_SIZE = 128 BATCH_SIZE = 4 MAX_TRAIN_IMAGES = 300 def read_image(image_path): image = tf.io.read_file(image_path) image = tf.image.decode_png(image, channels=3) image.set_shape([None, None, 3]) image = tf.cast(image, dtype=tf.float32) / 255.0 return image def random_crop(low_image, enhanced_image): low_image_shape = tf.shape(low_image)[:2] low_w = tf.random.uniform( shape=(), maxval=low_image_shape[1] - IMAGE_SIZE + 1, dtype=tf.int32 ) low_h = tf.random.uniform( shape=(), maxval=low_image_shape[0] - IMAGE_SIZE + 1, dtype=tf.int32 ) enhanced_w = low_w enhanced_h = low_h low_image_cropped = low_image[ low_h : low_h + IMAGE_SIZE, low_w : low_w + IMAGE_SIZE ] enhanced_image_cropped = enhanced_image[ enhanced_h : enhanced_h + IMAGE_SIZE, enhanced_w : enhanced_w + IMAGE_SIZE ] return low_image_cropped, enhanced_image_cropped def load_data(low_light_image_path, enhanced_image_path): low_light_image = read_image(low_light_image_path) enhanced_image = read_image(enhanced_image_path) low_light_image, enhanced_image = random_crop(low_light_image, enhanced_image) return low_light_image, enhanced_image def get_dataset(low_light_images, enhanced_images): dataset = tf.data.Dataset.from_tensor_slices((low_light_images, enhanced_images)) dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) return dataset train_low_light_images = sorted(glob(\"./lol_dataset/our485/low/*\"))[:MAX_TRAIN_IMAGES] train_enhanced_images = sorted(glob(\"./lol_dataset/our485/high/*\"))[:MAX_TRAIN_IMAGES] val_low_light_images = sorted(glob(\"./lol_dataset/our485/low/*\"))[MAX_TRAIN_IMAGES:] val_enhanced_images = sorted(glob(\"./lol_dataset/our485/high/*\"))[MAX_TRAIN_IMAGES:] test_low_light_images = sorted(glob(\"./lol_dataset/eval15/low/*\")) test_enhanced_images = sorted(glob(\"./lol_dataset/eval15/high/*\")) train_dataset = get_dataset(train_low_light_images, train_enhanced_images) val_dataset = get_dataset(val_low_light_images, val_enhanced_images) print(\"Train Dataset:\", train_dataset) print(\"Val Dataset:\", val_dataset) Train Dataset: Val Dataset: MIRNet Model Here are the main features of the MIRNet model: A feature extraction model that computes a complementary set of features across multiple spatial scales, while maintaining the original high-resolution features to preserve precise spatial details. A regularly repeated mechanism for information exchange, where the features across multi-resolution branches are progressively fused together for improved representation learning. A new approach to fuse multi-scale features using a selective kernel network that dynamically combines variable receptive fields and faithfully preserves the original feature information at each spatial resolution. A recursive residual design that progressively breaks down the input signal in order to simplify the overall learning process, and allows the construction of very deep networks. Selective Kernel Feature Fusion The Selective Kernel Feature Fusion or SKFF module performs dynamic adjustment of receptive fields via two operations: Fuse and Select. The Fuse operator generates global feature descriptors by combining the information from multi-resolution streams. The Select operator uses these descriptors to recalibrate the feature maps (of different streams) followed by their aggregation. Fuse: The SKFF receives inputs from three parallel convolution streams carrying different scales of information. 
We first combine these multi-scale features using an element-wise sum, on which we apply Global Average Pooling (GAP) across the spatial dimension. Next, we apply a channel-downscaling convolution layer to generate a compact feature representation, which passes through three parallel channel-upscaling convolution layers (one for each resolution stream) and provides us with three feature descriptors. Select: This operator applies the softmax function to the feature descriptors to obtain the corresponding activations that are used to adaptively recalibrate multi-scale feature maps. The aggregated features are defined as the sum of the products of the corresponding multi-scale features and feature descriptors. def selective_kernel_feature_fusion( multi_scale_feature_1, multi_scale_feature_2, multi_scale_feature_3 ): channels = list(multi_scale_feature_1.shape)[-1] combined_feature = layers.Add()( [multi_scale_feature_1, multi_scale_feature_2, multi_scale_feature_3] ) gap = layers.GlobalAveragePooling2D()(combined_feature) channel_wise_statistics = tf.reshape(gap, shape=(-1, 1, 1, channels)) compact_feature_representation = layers.Conv2D( filters=channels // 8, kernel_size=(1, 1), activation=\"relu\" )(channel_wise_statistics) feature_descriptor_1 = layers.Conv2D( channels, kernel_size=(1, 1), activation=\"softmax\" )(compact_feature_representation) feature_descriptor_2 = layers.Conv2D( channels, kernel_size=(1, 1), activation=\"softmax\" )(compact_feature_representation) feature_descriptor_3 = layers.Conv2D( channels, kernel_size=(1, 1), activation=\"softmax\" )(compact_feature_representation) feature_1 = multi_scale_feature_1 * feature_descriptor_1 feature_2 = multi_scale_feature_2 * feature_descriptor_2 feature_3 = multi_scale_feature_3 * feature_descriptor_3 aggregated_feature = layers.Add()([feature_1, feature_2, feature_3]) return aggregated_feature Dual Attention Unit The Dual Attention Unit or DAU is used to extract features in the convolutional streams. While the SKFF block fuses information across multi-resolution branches, we also need a mechanism to share information within a feature tensor, both along the spatial and the channel dimensions, which is done by the DAU block. The DAU suppresses less useful features and only allows more informative ones to pass further. This feature recalibration is achieved by using Channel Attention and Spatial Attention mechanisms. The Channel Attention branch exploits the inter-channel relationships of the convolutional feature maps by applying squeeze and excitation operations. Given a feature map, the squeeze operation applies Global Average Pooling across spatial dimensions to encode global context, thus yielding a feature descriptor. The excitation operator passes this feature descriptor through two convolutional layers followed by the sigmoid gating and generates activations. Finally, the output of the Channel Attention branch is obtained by rescaling the input feature map with the output activations. The Spatial Attention branch is designed to exploit the inter-spatial dependencies of convolutional features. The goal of Spatial Attention is to generate a spatial attention map and use it to recalibrate the incoming features. 
To generate the spatial attention map, the Spatial Attention branch first independently applies Global Average Pooling and Max Pooling operations on the input features along the channel dimension and concatenates the outputs to form a resultant feature map, which is then passed through a convolution and sigmoid activation to obtain the spatial attention map. This spatial attention map is then used to rescale the input feature map. def spatial_attention_block(input_tensor): max_pooling = tf.reduce_max(input_tensor, axis=-1) max_pooling = tf.expand_dims(max_pooling, axis=-1) average_pooling = tf.reduce_mean(input_tensor, axis=-1) average_pooling = tf.expand_dims(average_pooling, axis=-1) concatenated = layers.Concatenate(axis=-1)([max_pooling, average_pooling]) feature_map = layers.Conv2D(1, kernel_size=(1, 1))(concatenated) feature_map = tf.nn.sigmoid(feature_map) return input_tensor * feature_map def channel_attention_block(input_tensor): channels = list(input_tensor.shape)[-1] average_pooling = layers.GlobalAveragePooling2D()(input_tensor) feature_descriptor = tf.reshape(average_pooling, shape=(-1, 1, 1, channels)) feature_activations = layers.Conv2D( filters=channels // 8, kernel_size=(1, 1), activation=\"relu\" )(feature_descriptor) feature_activations = layers.Conv2D( filters=channels, kernel_size=(1, 1), activation=\"sigmoid\" )(feature_activations) return input_tensor * feature_activations def dual_attention_unit_block(input_tensor): channels = list(input_tensor.shape)[-1] feature_map = layers.Conv2D( channels, kernel_size=(3, 3), padding=\"same\", activation=\"relu\" )(input_tensor) feature_map = layers.Conv2D(channels, kernel_size=(3, 3), padding=\"same\")( feature_map ) channel_attention = channel_attention_block(feature_map) spatial_attention = spatial_attention_block(feature_map) concatenation = layers.Concatenate(axis=-1)([channel_attention, spatial_attention]) concatenation = layers.Conv2D(channels, kernel_size=(1, 1))(concatenation) return layers.Add()([input_tensor, concatenation]) 
# Recursive Residual Modules def down_sampling_module(input_tensor): channels = list(input_tensor.shape)[-1] main_branch = layers.Conv2D(channels, kernel_size=(1, 1), activation=\"relu\")( input_tensor ) main_branch = layers.Conv2D( channels, kernel_size=(3, 3), padding=\"same\", activation=\"relu\" )(main_branch) main_branch = layers.MaxPooling2D()(main_branch) main_branch = layers.Conv2D(channels * 2, kernel_size=(1, 1))(main_branch) skip_branch = layers.MaxPooling2D()(input_tensor) skip_branch = layers.Conv2D(channels * 2, kernel_size=(1, 1))(skip_branch) return layers.Add()([skip_branch, main_branch]) def up_sampling_module(input_tensor): channels = list(input_tensor.shape)[-1] main_branch = layers.Conv2D(channels, kernel_size=(1, 1), activation=\"relu\")( input_tensor ) main_branch = layers.Conv2D( channels, kernel_size=(3, 3), padding=\"same\", activation=\"relu\" )(main_branch) main_branch = layers.UpSampling2D()(main_branch) main_branch = layers.Conv2D(channels // 2, kernel_size=(1, 1))(main_branch) skip_branch = layers.UpSampling2D()(input_tensor) skip_branch = layers.Conv2D(channels // 2, kernel_size=(1, 1))(skip_branch) return layers.Add()([skip_branch, main_branch]) # MRB Block def multi_scale_residual_block(input_tensor, channels): # features level1 = input_tensor level2 = down_sampling_module(input_tensor) level3 = down_sampling_module(level2) # DAU level1_dau = dual_attention_unit_block(level1) level2_dau = dual_attention_unit_block(level2) level3_dau = dual_attention_unit_block(level3) # SKFF level1_skff = selective_kernel_feature_fusion( level1_dau, up_sampling_module(level2_dau), up_sampling_module(up_sampling_module(level3_dau)), ) level2_skff = selective_kernel_feature_fusion( down_sampling_module(level1_dau), level2_dau, up_sampling_module(level3_dau) ) level3_skff = selective_kernel_feature_fusion( down_sampling_module(down_sampling_module(level1_dau)), down_sampling_module(level2_dau), level3_dau, ) # DAU 2 level1_dau_2 = dual_attention_unit_block(level1_skff) level2_dau_2 = up_sampling_module(dual_attention_unit_block(level2_skff)) level3_dau_2 = up_sampling_module( up_sampling_module(dual_attention_unit_block(level3_skff)) ) # SKFF 2: fuse all three (now equally-sized) streams. skff_ = selective_kernel_feature_fusion(level1_dau_2, level2_dau_2, level3_dau_2) conv = layers.Conv2D(channels, kernel_size=(3, 3), padding=\"same\")(skff_) return layers.Add()([input_tensor, conv]) MIRNet Model def recursive_residual_group(input_tensor, num_mrb, channels): conv1 = layers.Conv2D(channels, kernel_size=(3, 3), padding=\"same\")(input_tensor) for _ in range(num_mrb): conv1 = multi_scale_residual_block(conv1, channels) conv2 = layers.Conv2D(channels, kernel_size=(3, 3), padding=\"same\")(conv1) return layers.Add()([conv2, input_tensor]) def mirnet_model(num_rrg, num_mrb, channels): input_tensor = keras.Input(shape=[None, None, 3]) x1 = layers.Conv2D(channels, kernel_size=(3, 3), padding=\"same\")(input_tensor) for _ in range(num_rrg): x1 = recursive_residual_group(x1, num_mrb, channels) conv = layers.Conv2D(3, kernel_size=(3, 3), padding=\"same\")(x1) output_tensor = layers.Add()([input_tensor, conv]) return keras.Model(input_tensor, output_tensor) model = mirnet_model(num_rrg=3, num_mrb=2, channels=64) Training We train MIRNet using Charbonnier Loss as the loss function and the Adam optimizer with a learning rate of 1e-4. 
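For reference, the Charbonnier loss implemented in charbonnier_loss() below is a smooth, differentiable relative of the L1 loss; the epsilon term corresponds to the 1e-3 constant in the code:

$$
\mathcal{L}_{\mathrm{char}}(y, \hat{y}) \;=\; \frac{1}{N} \sum_{i=1}^{N} \sqrt{\big(y_i - \hat{y}_i\big)^2 + \varepsilon^2}, \qquad \varepsilon = 10^{-3}
$$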
We use Peak Signal Noise Ratio or PSNR as a metric which is an expression for the ratio between the maximum possible value (power) of a signal and the power of distorting noise that affects the quality of its representation. def charbonnier_loss(y_true, y_pred): return tf.reduce_mean(tf.sqrt(tf.square(y_true - y_pred) + tf.square(1e-3))) def peak_signal_noise_ratio(y_true, y_pred): return tf.image.psnr(y_pred, y_true, max_val=255.0) optimizer = keras.optimizers.Adam(learning_rate=1e-4) model.compile( optimizer=optimizer, loss=charbonnier_loss, metrics=[peak_signal_noise_ratio] ) history = model.fit( train_dataset, validation_data=val_dataset, epochs=50, callbacks=[ keras.callbacks.ReduceLROnPlateau( monitor=\"val_peak_signal_noise_ratio\", factor=0.5, patience=5, verbose=1, min_delta=1e-7, mode=\"max\", ) ], ) plt.plot(history.history[\"loss\"], label=\"train_loss\") plt.plot(history.history[\"val_loss\"], label=\"val_loss\") plt.xlabel(\"Epochs\") plt.ylabel(\"Loss\") plt.title(\"Train and Validation Losses Over Epochs\", fontsize=14) plt.legend() plt.grid() plt.show() plt.plot(history.history[\"peak_signal_noise_ratio\"], label=\"train_psnr\") plt.plot(history.history[\"val_peak_signal_noise_ratio\"], label=\"val_psnr\") plt.xlabel(\"Epochs\") plt.ylabel(\"PSNR\") plt.title(\"Train and Validation PSNR Over Epochs\", fontsize=14) plt.legend() plt.grid() plt.show() Epoch 1/50 75/75 [==============================] - 109s 731ms/step - loss: 0.2125 - peak_signal_noise_ratio: 62.0458 - val_loss: 0.1592 - val_peak_signal_noise_ratio: 64.1833 Epoch 2/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1764 - peak_signal_noise_ratio: 63.1356 - val_loss: 0.1257 - val_peak_signal_noise_ratio: 65.6498 Epoch 3/50 75/75 [==============================] - 49s 652ms/step - loss: 0.1724 - peak_signal_noise_ratio: 63.3172 - val_loss: 0.1245 - val_peak_signal_noise_ratio: 65.6902 Epoch 4/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1670 - peak_signal_noise_ratio: 63.4917 - val_loss: 0.1206 - val_peak_signal_noise_ratio: 65.8893 Epoch 5/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1651 - peak_signal_noise_ratio: 63.6555 - val_loss: 0.1333 - val_peak_signal_noise_ratio: 65.6338 Epoch 6/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1572 - peak_signal_noise_ratio: 64.1984 - val_loss: 0.1142 - val_peak_signal_noise_ratio: 66.7711 Epoch 7/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1592 - peak_signal_noise_ratio: 64.0062 - val_loss: 0.1205 - val_peak_signal_noise_ratio: 66.1075 Epoch 8/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1493 - peak_signal_noise_ratio: 64.4675 - val_loss: 0.1170 - val_peak_signal_noise_ratio: 66.1355 Epoch 9/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1446 - peak_signal_noise_ratio: 64.7416 - val_loss: 0.1301 - val_peak_signal_noise_ratio: 66.0207 Epoch 10/50 75/75 [==============================] - 49s 655ms/step - loss: 0.1539 - peak_signal_noise_ratio: 64.3999 - val_loss: 0.1220 - val_peak_signal_noise_ratio: 66.7203 Epoch 11/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1451 - peak_signal_noise_ratio: 64.7352 - val_loss: 0.1219 - val_peak_signal_noise_ratio: 66.3140 Epoch 00011: ReduceLROnPlateau reducing learning rate to 4.999999873689376e-05. 
Epoch 12/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1492 - peak_signal_noise_ratio: 64.7238 - val_loss: 0.1204 - val_peak_signal_noise_ratio: 66.4726 Epoch 13/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1456 - peak_signal_noise_ratio: 64.9666 - val_loss: 0.1109 - val_peak_signal_noise_ratio: 67.1270 Epoch 14/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1372 - peak_signal_noise_ratio: 65.3932 - val_loss: 0.1150 - val_peak_signal_noise_ratio: 66.9255 Epoch 15/50 75/75 [==============================] - 49s 650ms/step - loss: 0.1340 - peak_signal_noise_ratio: 65.5611 - val_loss: 0.1111 - val_peak_signal_noise_ratio: 67.2009 Epoch 16/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1377 - peak_signal_noise_ratio: 65.3355 - val_loss: 0.1140 - val_peak_signal_noise_ratio: 67.0495 Epoch 17/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1340 - peak_signal_noise_ratio: 65.6484 - val_loss: 0.1132 - val_peak_signal_noise_ratio: 67.0257 Epoch 18/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1360 - peak_signal_noise_ratio: 65.4871 - val_loss: 0.1070 - val_peak_signal_noise_ratio: 67.4185 Epoch 19/50 75/75 [==============================] - 49s 649ms/step - loss: 0.1349 - peak_signal_noise_ratio: 65.4856 - val_loss: 0.1112 - val_peak_signal_noise_ratio: 67.2248 Epoch 20/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1273 - peak_signal_noise_ratio: 66.0817 - val_loss: 0.1185 - val_peak_signal_noise_ratio: 67.0208 Epoch 21/50 75/75 [==============================] - 49s 656ms/step - loss: 0.1393 - peak_signal_noise_ratio: 65.3710 - val_loss: 0.1102 - val_peak_signal_noise_ratio: 67.0362 Epoch 22/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1326 - peak_signal_noise_ratio: 65.8781 - val_loss: 0.1059 - val_peak_signal_noise_ratio: 67.4949 Epoch 23/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1260 - peak_signal_noise_ratio: 66.1770 - val_loss: 0.1187 - val_peak_signal_noise_ratio: 66.6312 Epoch 24/50 75/75 [==============================] - 49s 650ms/step - loss: 0.1331 - peak_signal_noise_ratio: 65.8160 - val_loss: 0.1075 - val_peak_signal_noise_ratio: 67.2668 Epoch 25/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1288 - peak_signal_noise_ratio: 66.0734 - val_loss: 0.1027 - val_peak_signal_noise_ratio: 67.9508 Epoch 26/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1306 - peak_signal_noise_ratio: 66.0349 - val_loss: 0.1076 - val_peak_signal_noise_ratio: 67.3821 Epoch 27/50 75/75 [==============================] - 49s 655ms/step - loss: 0.1356 - peak_signal_noise_ratio: 65.7978 - val_loss: 0.1079 - val_peak_signal_noise_ratio: 67.4785 Epoch 28/50 75/75 [==============================] - 49s 655ms/step - loss: 0.1270 - peak_signal_noise_ratio: 66.2681 - val_loss: 0.1116 - val_peak_signal_noise_ratio: 67.3327 Epoch 29/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1297 - peak_signal_noise_ratio: 66.0506 - val_loss: 0.1057 - val_peak_signal_noise_ratio: 67.5432 Epoch 30/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1275 - peak_signal_noise_ratio: 66.3542 - val_loss: 0.1034 - val_peak_signal_noise_ratio: 67.4624 Epoch 00030: ReduceLROnPlateau reducing learning rate to 2.499999936844688e-05. 
Epoch 31/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1258 - peak_signal_noise_ratio: 66.2724 - val_loss: 0.1066 - val_peak_signal_noise_ratio: 67.5729 Epoch 32/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1153 - peak_signal_noise_ratio: 67.0384 - val_loss: 0.1064 - val_peak_signal_noise_ratio: 67.4336 Epoch 33/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1189 - peak_signal_noise_ratio: 66.7662 - val_loss: 0.1062 - val_peak_signal_noise_ratio: 67.5128 Epoch 34/50 75/75 [==============================] - 49s 654ms/step - loss: 0.1159 - peak_signal_noise_ratio: 66.9257 - val_loss: 0.1003 - val_peak_signal_noise_ratio: 67.8672 Epoch 35/50 75/75 [==============================] - 49s 653ms/step - loss: 0.1191 - peak_signal_noise_ratio: 66.7690 - val_loss: 0.1043 - val_peak_signal_noise_ratio: 67.4840 Epoch 00035: ReduceLROnPlateau reducing learning rate to 1.249999968422344e-05. Epoch 36/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1158 - peak_signal_noise_ratio: 67.0264 - val_loss: 0.1057 - val_peak_signal_noise_ratio: 67.6526 Epoch 37/50 75/75 [==============================] - 49s 652ms/step - loss: 0.1128 - peak_signal_noise_ratio: 67.1950 - val_loss: 0.1104 - val_peak_signal_noise_ratio: 67.1770 Epoch 38/50 75/75 [==============================] - 49s 652ms/step - loss: 0.1200 - peak_signal_noise_ratio: 66.7623 - val_loss: 0.1048 - val_peak_signal_noise_ratio: 67.7003 Epoch 39/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1112 - peak_signal_noise_ratio: 67.3895 - val_loss: 0.1031 - val_peak_signal_noise_ratio: 67.6530 Epoch 40/50 75/75 [==============================] - 49s 650ms/step - loss: 0.1125 - peak_signal_noise_ratio: 67.1694 - val_loss: 0.1034 - val_peak_signal_noise_ratio: 67.6437 Epoch 00040: ReduceLROnPlateau reducing learning rate to 6.24999984211172e-06. Epoch 41/50 75/75 [==============================] - 49s 650ms/step - loss: 0.1131 - peak_signal_noise_ratio: 67.2471 - val_loss: 0.1152 - val_peak_signal_noise_ratio: 66.8625 Epoch 42/50 75/75 [==============================] - 49s 650ms/step - loss: 0.1069 - peak_signal_noise_ratio: 67.5794 - val_loss: 0.1119 - val_peak_signal_noise_ratio: 67.1944 Epoch 43/50 75/75 [==============================] - 49s 651ms/step - loss: 0.1118 - peak_signal_noise_ratio: 67.2779 - val_loss: 0.1147 - val_peak_signal_noise_ratio: 66.9731 Epoch 44/50 75/75 [==============================] - 48s 647ms/step - loss: 0.1101 - peak_signal_noise_ratio: 67.2777 - val_loss: 0.1107 - val_peak_signal_noise_ratio: 67.2580 Epoch 45/50 75/75 [==============================] - 49s 649ms/step - loss: 0.1076 - peak_signal_noise_ratio: 67.6359 - val_loss: 0.1103 - val_peak_signal_noise_ratio: 67.2720 Epoch 00045: ReduceLROnPlateau reducing learning rate to 3.12499992105586e-06. 
Epoch 46/50 75/75 [==============================] - 49s 648ms/step - loss: 0.1066 - peak_signal_noise_ratio: 67.4869 - val_loss: 0.1077 - val_peak_signal_noise_ratio: 67.4986 Epoch 47/50 75/75 [==============================] - 49s 649ms/step - loss: 0.1072 - peak_signal_noise_ratio: 67.4890 - val_loss: 0.1140 - val_peak_signal_noise_ratio: 67.1755 Epoch 48/50 75/75 [==============================] - 49s 649ms/step - loss: 0.1065 - peak_signal_noise_ratio: 67.6796 - val_loss: 0.1091 - val_peak_signal_noise_ratio: 67.3442 Epoch 49/50 75/75 [==============================] - 49s 648ms/step - loss: 0.1098 - peak_signal_noise_ratio: 67.3909 - val_loss: 0.1082 - val_peak_signal_noise_ratio: 67.4616 Epoch 50/50 75/75 [==============================] - 49s 648ms/step - loss: 0.1090 - peak_signal_noise_ratio: 67.5139 - val_loss: 0.1124 - val_peak_signal_noise_ratio: 67.1488 Epoch 00050: ReduceLROnPlateau reducing learning rate to 1.56249996052793e-06. png png Inference def plot_results(images, titles, figure_size=(12, 12)): fig = plt.figure(figsize=figure_size) for i in range(len(images)): fig.add_subplot(1, len(images), i + 1).set_title(titles[i]) _ = plt.imshow(images[i]) plt.axis(\"off\") plt.show() def infer(original_image): image = keras.preprocessing.image.img_to_array(original_image) image = image.astype(\"float32\") / 255.0 image = np.expand_dims(image, axis=0) output = model.predict(image) output_image = output[0] * 255.0 output_image = output_image.clip(0, 255) output_image = output_image.reshape( (np.shape(output_image)[0], np.shape(output_image)[1], 3) ) output_image = Image.fromarray(np.uint8(output_image)) original_image = Image.fromarray(np.uint8(original_image)) return output_image Inference on Test Images We compare the test images from LOLDataset enhanced by MIRNet with images enhanced via the PIL.ImageOps.autocontrast() function. for low_light_image in random.sample(test_low_light_images, 6): original_image = Image.open(low_light_image) enhanced_image = infer(original_image) plot_results( [original_image, ImageOps.autocontrast(original_image), enhanced_image], [\"Original\", \"PIL Autocontrast\", \"MIRNet Enhanced\"], (20, 12), ) png png png png png png Implementing Masked Autoencoders for self-supervised pretraining. Introduction In deep learning, models with growing capacity and capability can easily overfit on large datasets (ImageNet-1K). In the field of natural language processing, the appetite for data has been successfully addressed by self-supervised pretraining. In the academic paper Masked Autoencoders Are Scalable Vision Learners by He et. al. the authors propose a simple yet effective method to pretrain large vision models (here ViT Huge). Inspired from the pretraining algorithm of BERT (Devlin et al.), they mask patches of an image and, through an autoencoder predict the masked patches. In the spirit of \"masked language modeling\", this pretraining task could be referred to as \"masked image modeling\". In this example, we implement Masked Autoencoders Are Scalable Vision Learners with the CIFAR-10 dataset. After pretraining a scaled down version of ViT, we also implement the linear evaluation pipeline on CIFAR-10. This implementation covers (MAE refers to Masked Autoencoder): The masking algorithm MAE encoder MAE decoder Evaluation with linear probing As a reference, we reuse some of the code presented in this example. 
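Before getting into the implementation, a toy NumPy sketch (added for illustration; the real logic appears later in PatchEncoder.get_random_indices) shows the bookkeeping at the heart of the method: with the 75% masking ratio recommended by the paper and the 64 patches used in this example, the encoder only ever sees 16 patches per image, and the decoder has to reconstruct the other 48.

import numpy as np

num_patches = 64        # (48 // 6) ** 2 patches per image, as configured below
mask_proportion = 0.75  # masking ratio from the paper

shuffled = np.random.permutation(num_patches)
num_mask = int(mask_proportion * num_patches)
mask_indices = shuffled[:num_mask]     # patches the decoder must reconstruct (48)
unmask_indices = shuffled[num_mask:]   # patches visible to the encoder (16)
print(len(mask_indices), len(unmask_indices))  # 48 16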
Imports This example requires TensorFlow Addons, which can be installed using the following command: pip install -U tensorflow-addons from tensorflow.keras import layers import tensorflow_addons as tfa from tensorflow import keras import tensorflow as tf import matplotlib.pyplot as plt import numpy as np import random # Setting seeds for reproducibility. SEED = 42 keras.utils.set_random_seed(SEED) Hyperparameters for pretraining Please feel free to change the hyperparameters and check your results. The best way to get an intuition about the architecture is to experiment with it. Our hyperparameters are heavily inspired by the design guidelines laid out by the authors in the original paper. # DATA BUFFER_SIZE = 1024 BATCH_SIZE = 256 AUTO = tf.data.AUTOTUNE INPUT_SHAPE = (32, 32, 3) NUM_CLASSES = 10 # OPTIMIZER LEARNING_RATE = 5e-3 WEIGHT_DECAY = 1e-4 # PRETRAINING EPOCHS = 100 # AUGMENTATION IMAGE_SIZE = 48 # We will resize input images to this size. PATCH_SIZE = 6 # Size of the patches to be extracted from the input images. NUM_PATCHES = (IMAGE_SIZE // PATCH_SIZE) ** 2 MASK_PROPORTION = 0.75 # We have found 75% masking to give us the best results. # ENCODER and DECODER LAYER_NORM_EPS = 1e-6 ENC_PROJECTION_DIM = 128 DEC_PROJECTION_DIM = 64 ENC_NUM_HEADS = 4 ENC_LAYERS = 6 DEC_NUM_HEADS = 4 DEC_LAYERS = ( 2 # The decoder is lightweight but should be reasonably deep for reconstruction. ) ENC_TRANSFORMER_UNITS = [ ENC_PROJECTION_DIM * 2, ENC_PROJECTION_DIM, ] # Size of the transformer layers. DEC_TRANSFORMER_UNITS = [ DEC_PROJECTION_DIM * 2, DEC_PROJECTION_DIM, ] Load and prepare the CIFAR-10 dataset (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() (x_train, y_train), (x_val, y_val) = ( (x_train[:40000], y_train[:40000]), (x_train[40000:], y_train[40000:]), ) print(f\"Training samples: {len(x_train)}\") print(f\"Validation samples: {len(x_val)}\") print(f\"Testing samples: {len(x_test)}\") train_ds = tf.data.Dataset.from_tensor_slices(x_train) train_ds = train_ds.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(AUTO) val_ds = tf.data.Dataset.from_tensor_slices(x_val) val_ds = val_ds.batch(BATCH_SIZE).prefetch(AUTO) test_ds = tf.data.Dataset.from_tensor_slices(x_test) test_ds = test_ds.batch(BATCH_SIZE).prefetch(AUTO) Training samples: 40000 Validation samples: 10000 Testing samples: 10000 2021-11-24 01:10:52.088318: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-11-24 01:10:54.356762: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 38444 MB memory: -> device: 0, name: A100-SXM4-40GB, pci bus id: 0000:00:04.0, compute capability: 8.0 Data augmentation In previous self-supervised pretraining methodologies (SimCLR alike), we have noticed that the data augmentation pipeline plays an important role. On the other hand the authors of this paper point out that Masked Autoencoders do not rely on augmentations. 
They propose a simple augmentation pipeline of: Resizing Random cropping (fixed-sized or random sized) Random horizontal flipping def get_train_augmentation_model(): model = keras.Sequential( [ layers.Rescaling(1 / 255.0), layers.Resizing(INPUT_SHAPE[0] + 20, INPUT_SHAPE[0] + 20), layers.RandomCrop(IMAGE_SIZE, IMAGE_SIZE), layers.RandomFlip(\"horizontal\"), ], name=\"train_data_augmentation\", ) return model def get_test_augmentation_model(): model = keras.Sequential( [layers.Rescaling(1 / 255.0), layers.Resizing(IMAGE_SIZE, IMAGE_SIZE),], name=\"test_data_augmentation\", ) return model A layer for extracting patches from images This layer takes images as input and divides them into patches. The layer also includes two utility method: show_patched_image -- Takes a batch of images and its corresponding patches to plot a random pair of image and patches. reconstruct_from_patch -- Takes a single instance of patches and stitches them together into the original image. class Patches(layers.Layer): def __init__(self, patch_size=PATCH_SIZE, **kwargs): super().__init__(**kwargs) self.patch_size = patch_size # Assuming the image has three channels each patch would be # of size (patch_size, patch_size, 3). self.resize = layers.Reshape((-1, patch_size * patch_size * 3)) def call(self, images): # Create patches from the input images patches = tf.image.extract_patches( images=images, sizes=[1, self.patch_size, self.patch_size, 1], strides=[1, self.patch_size, self.patch_size, 1], rates=[1, 1, 1, 1], padding=\"VALID\", ) # Reshape the patches to (batch, num_patches, patch_area) and return it. patches = self.resize(patches) return patches def show_patched_image(self, images, patches): # This is a utility function which accepts a batch of images and its # corresponding patches and help visualize one image and its patches # side by side. idx = np.random.choice(patches.shape[0]) print(f\"Index selected: {idx}.\") plt.figure(figsize=(4, 4)) plt.imshow(keras.utils.array_to_img(images[idx])) plt.axis(\"off\") plt.show() n = int(np.sqrt(patches.shape[1])) plt.figure(figsize=(4, 4)) for i, patch in enumerate(patches[idx]): ax = plt.subplot(n, n, i + 1) patch_img = tf.reshape(patch, (self.patch_size, self.patch_size, 3)) plt.imshow(keras.utils.img_to_array(patch_img)) plt.axis(\"off\") plt.show() # Return the index chosen to validate it outside the method. return idx # taken from https://stackoverflow.com/a/58082878/10319735 def reconstruct_from_patch(self, patch): # This utility function takes patches from a *single* image and # reconstructs it back into the image. This is useful for the train # monitor callback. num_patches = patch.shape[0] n = int(np.sqrt(num_patches)) patch = tf.reshape(patch, (num_patches, self.patch_size, self.patch_size, 3)) rows = tf.split(patch, n, axis=0) rows = [tf.concat(tf.unstack(x), axis=1) for x in rows] reconstructed = tf.concat(rows, axis=0) return reconstructed Let's visualize the image patches. # Get a batch of images. image_batch = next(iter(train_ds)) # Augment the images. augmentation_model = get_train_augmentation_model() augmented_images = augmentation_model(image_batch) # Define the patch layer. patch_layer = Patches() # Get the patches from the batched images. patches = patch_layer(images=augmented_images) # Now pass the images and the corresponding patches # to the `show_patched_image` method. random_index = patch_layer.show_patched_image(images=augmented_images, patches=patches) # Chose the same chose image and try reconstructing the patches # into the original image. 
image = patch_layer.reconstruct_from_patch(patches[random_index]) plt.imshow(image) plt.axis(\"off\") plt.show() Index selected: 102. png png png Patch encoding with masking Quoting the paper Following ViT, we divide an image into regular non-overlapping patches. Then we sample a subset of patches and mask (i.e., remove) the remaining ones. Our sampling strategy is straightforward: we sample random patches without replacement, following a uniform distribution. We simply refer to this as “random sampling”. This layer includes masking and encoding the patches. The utility methods of the layer are: get_random_indices -- Provides the mask and unmask indices. generate_masked_image -- Takes patches and unmask indices, results in a random masked image. This is an essential utility method for our training monitor callback (defined later). class PatchEncoder(layers.Layer): def __init__( self, patch_size=PATCH_SIZE, projection_dim=ENC_PROJECTION_DIM, mask_proportion=MASK_PROPORTION, downstream=False, **kwargs, ): super().__init__(**kwargs) self.patch_size = patch_size self.projection_dim = projection_dim self.mask_proportion = mask_proportion self.downstream = downstream # This is a trainable mask token initialized randomly from a normal # distribution. self.mask_token = tf.Variable( tf.random.normal([1, patch_size * patch_size * 3]), trainable=True ) def build(self, input_shape): (_, self.num_patches, self.patch_area) = input_shape # Create the projection layer for the patches. self.projection = layers.Dense(units=self.projection_dim) # Create the positional embedding layer. self.position_embedding = layers.Embedding( input_dim=self.num_patches, output_dim=self.projection_dim ) # Number of patches that will be masked. self.num_mask = int(self.mask_proportion * self.num_patches) def call(self, patches): # Get the positional embeddings. batch_size = tf.shape(patches)[0] positions = tf.range(start=0, limit=self.num_patches, delta=1) pos_embeddings = self.position_embedding(positions[tf.newaxis, ...]) pos_embeddings = tf.tile( pos_embeddings, [batch_size, 1, 1] ) # (B, num_patches, projection_dim) # Embed the patches. patch_embeddings = ( self.projection(patches) + pos_embeddings ) # (B, num_patches, projection_dim) if self.downstream: return patch_embeddings else: mask_indices, unmask_indices = self.get_random_indices(batch_size) # The encoder input is the unmasked patch embeddings. Here we gather # all the patches that should be unmasked. unmasked_embeddings = tf.gather( patch_embeddings, unmask_indices, axis=1, batch_dims=1 ) # (B, unmask_numbers, projection_dim) # Get the unmasked and masked position embeddings. We will need them # for the decoder. unmasked_positions = tf.gather( pos_embeddings, unmask_indices, axis=1, batch_dims=1 ) # (B, unmask_numbers, projection_dim) masked_positions = tf.gather( pos_embeddings, mask_indices, axis=1, batch_dims=1 ) # (B, mask_numbers, projection_dim) # Repeat the mask token number of mask times. # Mask tokens replace the masks of the image. mask_tokens = tf.repeat(self.mask_token, repeats=self.num_mask, axis=0) mask_tokens = tf.repeat( mask_tokens[tf.newaxis, ...], repeats=batch_size, axis=0 ) # Get the masked embeddings for the tokens. masked_embeddings = self.projection(mask_tokens) + masked_positions return ( unmasked_embeddings, # Input to the encoder. masked_embeddings, # First part of input to the decoder. unmasked_positions, # Added to the encoder outputs. mask_indices, # The indices that were masked. unmask_indices, # The indices that were unmaksed. 
) def get_random_indices(self, batch_size): # Create random indices from a uniform distribution and then split # it into mask and unmask indices. rand_indices = tf.argsort( tf.random.uniform(shape=(batch_size, self.num_patches)), axis=-1 ) mask_indices = rand_indices[:, : self.num_mask] unmask_indices = rand_indices[:, self.num_mask :] return mask_indices, unmask_indices def generate_masked_image(self, patches, unmask_indices): # Choose a random patch and it corresponding unmask index. idx = np.random.choice(patches.shape[0]) patch = patches[idx] unmask_index = unmask_indices[idx] # Build a numpy array of same shape as patch. new_patch = np.zeros_like(patch) # Iterate of the new_patch and plug the unmasked patches. count = 0 for i in range(unmask_index.shape[0]): new_patch[unmask_index[i]] = patch[unmask_index[i]] return new_patch, idx Let's see the masking process in action on a sample image. # Create the patch encoder layer. patch_encoder = PatchEncoder() # Get the embeddings and positions. ( unmasked_embeddings, masked_embeddings, unmasked_positions, mask_indices, unmask_indices, ) = patch_encoder(patches=patches) # Show a maksed patch image. new_patch, random_index = patch_encoder.generate_masked_image(patches, unmask_indices) plt.figure(figsize=(10, 10)) plt.subplot(1, 2, 1) img = patch_layer.reconstruct_from_patch(new_patch) plt.imshow(keras.utils.array_to_img(img)) plt.axis(\"off\") plt.title(\"Masked\") plt.subplot(1, 2, 2) img = augmented_images[random_index] plt.imshow(keras.utils.array_to_img(img)) plt.axis(\"off\") plt.title(\"Original\") plt.show() 2021-11-24 01:11:00.182447: I tensorflow/stream_executor/cuda/cuda_blas.cc:1774] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once. png MLP This serves as the fully connected feed forward network of the transformer architecture. def mlp(x, dropout_rate, hidden_units): for units in hidden_units: x = layers.Dense(units, activation=tf.nn.gelu)(x) x = layers.Dropout(dropout_rate)(x) return x MAE encoder The MAE encoder is ViT. The only point to note here is that the encoder outputs a layer normalized output. def create_encoder(num_heads=ENC_NUM_HEADS, num_layers=ENC_LAYERS): inputs = layers.Input((None, ENC_PROJECTION_DIM)) x = inputs for _ in range(num_layers): # Layer normalization 1. x1 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x) # Create a multi-head attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=ENC_PROJECTION_DIM, dropout=0.1 )(x1, x1) # Skip connection 1. x2 = layers.Add()([attention_output, x]) # Layer normalization 2. x3 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x2) # MLP. x3 = mlp(x3, hidden_units=ENC_TRANSFORMER_UNITS, dropout_rate=0.1) # Skip connection 2. x = layers.Add()([x3, x2]) outputs = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x) return keras.Model(inputs, outputs, name=\"mae_encoder\") MAE decoder The authors point out that they use an asymmetric autoencoder model. They use a lightweight decoder that takes \"<10% computation per token vs. the encoder\". We are not specific with the \"<10% computation\" in our implementation but have used a smaller decoder (both in terms of depth and projection dimensions). def create_decoder( num_layers=DEC_LAYERS, num_heads=DEC_NUM_HEADS, image_size=IMAGE_SIZE ): inputs = layers.Input((NUM_PATCHES, ENC_PROJECTION_DIM)) x = layers.Dense(DEC_PROJECTION_DIM)(inputs) for _ in range(num_layers): # Layer normalization 1. 
x1 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x) # Create a multi-head attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=DEC_PROJECTION_DIM, dropout=0.1 )(x1, x1) # Skip connection 1. x2 = layers.Add()([attention_output, x]) # Layer normalization 2. x3 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x2) # MLP. x3 = mlp(x3, hidden_units=DEC_TRANSFORMER_UNITS, dropout_rate=0.1) # Skip connection 2. x = layers.Add()([x3, x2]) x = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x) x = layers.Flatten()(x) pre_final = layers.Dense(units=image_size * image_size * 3, activation=\"sigmoid\")(x) outputs = layers.Reshape((image_size, image_size, 3))(pre_final) return keras.Model(inputs, outputs, name=\"mae_decoder\") MAE trainer This is the trainer module. We wrap the encoder and decoder inside of a tf.keras.Model subclass. This allows us to customize what happens in the model.fit() loop. class MaskedAutoencoder(keras.Model): def __init__( self, train_augmentation_model, test_augmentation_model, patch_layer, patch_encoder, encoder, decoder, **kwargs, ): super().__init__(**kwargs) self.train_augmentation_model = train_augmentation_model self.test_augmentation_model = test_augmentation_model self.patch_layer = patch_layer self.patch_encoder = patch_encoder self.encoder = encoder self.decoder = decoder def calculate_loss(self, images, test=False): # Augment the input images. if test: augmented_images = self.test_augmentation_model(images) else: augmented_images = self.train_augmentation_model(images) # Patch the augmented images. patches = self.patch_layer(augmented_images) # Encode the patches. ( unmasked_embeddings, masked_embeddings, unmasked_positions, mask_indices, unmask_indices, ) = self.patch_encoder(patches) # Pass the unmaksed patche to the encoder. encoder_outputs = self.encoder(unmasked_embeddings) # Create the decoder inputs. encoder_outputs = encoder_outputs + unmasked_positions decoder_inputs = tf.concat([encoder_outputs, masked_embeddings], axis=1) # Decode the inputs. decoder_outputs = self.decoder(decoder_inputs) decoder_patches = self.patch_layer(decoder_outputs) loss_patch = tf.gather(patches, mask_indices, axis=1, batch_dims=1) loss_output = tf.gather(decoder_patches, mask_indices, axis=1, batch_dims=1) # Compute the total loss. total_loss = self.compiled_loss(loss_patch, loss_output) return total_loss, loss_patch, loss_output def train_step(self, images): with tf.GradientTape() as tape: total_loss, loss_patch, loss_output = self.calculate_loss(images) # Apply gradients. train_vars = [ self.train_augmentation_model.trainable_variables, self.patch_layer.trainable_variables, self.patch_encoder.trainable_variables, self.encoder.trainable_variables, self.decoder.trainable_variables, ] grads = tape.gradient(total_loss, train_vars) tv_list = [] for (grad, var) in zip(grads, train_vars): for g, v in zip(grad, var): tv_list.append((g, v)) self.optimizer.apply_gradients(tv_list) # Report progress. self.compiled_metrics.update_state(loss_patch, loss_output) return {m.name: m.result() for m in self.metrics} def test_step(self, images): total_loss, loss_patch, loss_output = self.calculate_loss(images, test=True) # Update the trackers. 
self.compiled_metrics.update_state(loss_patch, loss_output) return {m.name: m.result() for m in self.metrics} Model initialization train_augmentation_model = get_train_augmentation_model() test_augmentation_model = get_test_augmentation_model() patch_layer = Patches() patch_encoder = PatchEncoder() encoder = create_encoder() decoder = create_decoder() mae_model = MaskedAutoencoder( train_augmentation_model=train_augmentation_model, test_augmentation_model=test_augmentation_model, patch_layer=patch_layer, patch_encoder=patch_encoder, encoder=encoder, decoder=decoder, ) Training callbacks Visualization callback # Taking a batch of test inputs to measure model's progress. test_images = next(iter(test_ds)) class TrainMonitor(keras.callbacks.Callback): def __init__(self, epoch_interval=None): self.epoch_interval = epoch_interval def on_epoch_end(self, epoch, logs=None): if self.epoch_interval and epoch % self.epoch_interval == 0: test_augmented_images = self.model.test_augmentation_model(test_images) test_patches = self.model.patch_layer(test_augmented_images) ( test_unmasked_embeddings, test_masked_embeddings, test_unmasked_positions, test_mask_indices, test_unmask_indices, ) = self.model.patch_encoder(test_patches) test_encoder_outputs = self.model.encoder(test_unmasked_embeddings) test_encoder_outputs = test_encoder_outputs + test_unmasked_positions test_decoder_inputs = tf.concat( [test_encoder_outputs, test_masked_embeddings], axis=1 ) test_decoder_outputs = self.model.decoder(test_decoder_inputs) # Show a maksed patch image. test_masked_patch, idx = self.model.patch_encoder.generate_masked_image( test_patches, test_unmask_indices ) print(f\"\nIdx chosen: {idx}\") original_image = test_augmented_images[idx] masked_image = self.model.patch_layer.reconstruct_from_patch( test_masked_patch ) reconstructed_image = test_decoder_outputs[idx] fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(15, 5)) ax[0].imshow(original_image) ax[0].set_title(f\"Original: {epoch:03d}\") ax[1].imshow(masked_image) ax[1].set_title(f\"Masked: {epoch:03d}\") ax[2].imshow(reconstructed_image) ax[2].set_title(f\"Resonstructed: {epoch:03d}\") plt.show() plt.close() Learning rate scheduler # Some code is taken from: # https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2. 
class WarmUpCosine(keras.optimizers.schedules.LearningRateSchedule): def __init__( self, learning_rate_base, total_steps, warmup_learning_rate, warmup_steps ): super(WarmUpCosine, self).__init__() self.learning_rate_base = learning_rate_base self.total_steps = total_steps self.warmup_learning_rate = warmup_learning_rate self.warmup_steps = warmup_steps self.pi = tf.constant(np.pi) def __call__(self, step): if self.total_steps < self.warmup_steps: raise ValueError(\"Total_steps must be larger or equal to warmup_steps.\") cos_annealed_lr = tf.cos( self.pi * (tf.cast(step, tf.float32) - self.warmup_steps) / float(self.total_steps - self.warmup_steps) ) learning_rate = 0.5 * self.learning_rate_base * (1 + cos_annealed_lr) if self.warmup_steps > 0: if self.learning_rate_base < self.warmup_learning_rate: raise ValueError( \"Learning_rate_base must be larger or equal to \" \"warmup_learning_rate.\" ) slope = ( self.learning_rate_base - self.warmup_learning_rate ) / self.warmup_steps warmup_rate = slope * tf.cast(step, tf.float32) + self.warmup_learning_rate learning_rate = tf.where( step < self.warmup_steps, warmup_rate, learning_rate ) return tf.where( step > self.total_steps, 0.0, learning_rate, name=\"learning_rate\" ) total_steps = int((len(x_train) / BATCH_SIZE) * EPOCHS) warmup_epoch_percentage = 0.15 warmup_steps = int(total_steps * warmup_epoch_percentage) scheduled_lrs = WarmUpCosine( learning_rate_base=LEARNING_RATE, total_steps=total_steps, warmup_learning_rate=0.0, warmup_steps=warmup_steps, ) lrs = [scheduled_lrs(step) for step in range(total_steps)] plt.plot(lrs) plt.xlabel(\"Step\", fontsize=14) plt.ylabel(\"LR\", fontsize=14) plt.show() # Assemble the callbacks. train_callbacks = [TrainMonitor(epoch_interval=5)] png Model compilation and training optimizer = tfa.optimizers.AdamW(learning_rate=scheduled_lrs, weight_decay=WEIGHT_DECAY) # Compile and pretrain the model. mae_model.compile( optimizer=optimizer, loss=keras.losses.MeanSquaredError(), metrics=[\"mae\"] ) history = mae_model.fit( train_ds, epochs=EPOCHS, validation_data=val_ds, callbacks=train_callbacks, ) # Measure its performance. loss, mae = mae_model.evaluate(test_ds) print(f\"Loss: {loss:.2f}\") print(f\"MAE: {mae:.2f}\") Epoch 1/100 157/157 [==============================] - ETA: 0s - loss: 0.0507 - mae: 0.1811 Idx chosen: 92 png 157/157 [==============================] - 19s 54ms/step - loss: 0.0507 - mae: 0.1811 - val_loss: 0.0417 - val_mae: 0.1630 Epoch 2/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0385 - mae: 0.1550 - val_loss: 0.0349 - val_mae: 0.1460 Epoch 3/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0336 - mae: 0.1420 - val_loss: 0.0311 - val_mae: 0.1352 Epoch 4/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0299 - mae: 0.1325 - val_loss: 0.0302 - val_mae: 0.1321 Epoch 5/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0269 - mae: 0.1246 - val_loss: 0.0256 - val_mae: 0.1207 Epoch 6/100 156/157 [============================>.] 
- ETA: 0s - loss: 0.0246 - mae: 0.1181 Idx chosen: 14 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0246 - mae: 0.1181 - val_loss: 0.0241 - val_mae: 0.1166 Epoch 7/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0232 - mae: 0.1142 - val_loss: 0.0237 - val_mae: 0.1152 Epoch 8/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0222 - mae: 0.1113 - val_loss: 0.0216 - val_mae: 0.1088 Epoch 9/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0214 - mae: 0.1086 - val_loss: 0.0217 - val_mae: 0.1096 Epoch 10/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0206 - mae: 0.1064 - val_loss: 0.0215 - val_mae: 0.1100 Epoch 11/100 157/157 [==============================] - ETA: 0s - loss: 0.0203 - mae: 0.1053 Idx chosen: 106 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0203 - mae: 0.1053 - val_loss: 0.0205 - val_mae: 0.1052 Epoch 12/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0200 - mae: 0.1043 - val_loss: 0.0196 - val_mae: 0.1028 Epoch 13/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0196 - mae: 0.1030 - val_loss: 0.0198 - val_mae: 0.1043 Epoch 14/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0193 - mae: 0.1019 - val_loss: 0.0192 - val_mae: 0.1004 Epoch 15/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0191 - mae: 0.1013 - val_loss: 0.0198 - val_mae: 0.1031 Epoch 16/100 157/157 [==============================] - ETA: 0s - loss: 0.0189 - mae: 0.1007 Idx chosen: 71 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0189 - mae: 0.1007 - val_loss: 0.0188 - val_mae: 0.1003 Epoch 17/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0185 - mae: 0.0992 - val_loss: 0.0187 - val_mae: 0.0993 Epoch 18/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0185 - mae: 0.0992 - val_loss: 0.0192 - val_mae: 0.1021 Epoch 19/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0182 - mae: 0.0984 - val_loss: 0.0181 - val_mae: 0.0967 Epoch 20/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0180 - mae: 0.0975 - val_loss: 0.0183 - val_mae: 0.0996 Epoch 21/100 156/157 [============================>.] 
- ETA: 0s - loss: 0.0180 - mae: 0.0975 Idx chosen: 188 png 157/157 [==============================] - 7s 47ms/step - loss: 0.0180 - mae: 0.0975 - val_loss: 0.0185 - val_mae: 0.0992 Epoch 22/100 157/157 [==============================] - 7s 45ms/step - loss: 0.0179 - mae: 0.0971 - val_loss: 0.0181 - val_mae: 0.0977 Epoch 23/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0178 - mae: 0.0966 - val_loss: 0.0179 - val_mae: 0.0962 Epoch 24/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0178 - mae: 0.0966 - val_loss: 0.0176 - val_mae: 0.0952 Epoch 25/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0176 - mae: 0.0960 - val_loss: 0.0182 - val_mae: 0.0984 Epoch 26/100 157/157 [==============================] - ETA: 0s - loss: 0.0175 - mae: 0.0958 Idx chosen: 20 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0175 - mae: 0.0958 - val_loss: 0.0176 - val_mae: 0.0958 Epoch 27/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0175 - mae: 0.0957 - val_loss: 0.0175 - val_mae: 0.0948 Epoch 28/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0175 - mae: 0.0956 - val_loss: 0.0173 - val_mae: 0.0947 Epoch 29/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0172 - mae: 0.0949 - val_loss: 0.0174 - val_mae: 0.0948 Epoch 30/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0172 - mae: 0.0948 - val_loss: 0.0174 - val_mae: 0.0944 Epoch 31/100 157/157 [==============================] - ETA: 0s - loss: 0.0172 - mae: 0.0945 Idx chosen: 102 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0172 - mae: 0.0945 - val_loss: 0.0169 - val_mae: 0.0932 Epoch 32/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0172 - mae: 0.0947 - val_loss: 0.0174 - val_mae: 0.0961 Epoch 33/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0171 - mae: 0.0945 - val_loss: 0.0171 - val_mae: 0.0937 Epoch 34/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0170 - mae: 0.0938 - val_loss: 0.0171 - val_mae: 0.0941 Epoch 35/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0170 - mae: 0.0940 - val_loss: 0.0171 - val_mae: 0.0948 Epoch 36/100 157/157 [==============================] - ETA: 0s - loss: 0.0168 - mae: 0.0933 Idx chosen: 121 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0168 - mae: 0.0933 - val_loss: 0.0170 - val_mae: 0.0935 Epoch 37/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0169 - mae: 0.0935 - val_loss: 0.0168 - val_mae: 0.0933 Epoch 38/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0168 - mae: 0.0933 - val_loss: 0.0170 - val_mae: 0.0935 Epoch 39/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0167 - mae: 0.0931 - val_loss: 0.0169 - val_mae: 0.0934 Epoch 40/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0167 - mae: 0.0930 - val_loss: 0.0169 - val_mae: 0.0934 Epoch 41/100 157/157 [==============================] - ETA: 0s - loss: 0.0167 - mae: 0.0929 Idx chosen: 210 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0167 - mae: 0.0929 - val_loss: 0.0169 - val_mae: 0.0930 Epoch 42/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0167 - mae: 0.0928 - val_loss: 0.0170 - val_mae: 0.0941 Epoch 43/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0166 - mae: 0.0925 - val_loss: 0.0169 - val_mae: 0.0931 
Epoch 44/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0165 - mae: 0.0921 - val_loss: 0.0165 - val_mae: 0.0914 Epoch 45/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0165 - mae: 0.0922 - val_loss: 0.0165 - val_mae: 0.0915 Epoch 46/100 157/157 [==============================] - ETA: 0s - loss: 0.0165 - mae: 0.0922 Idx chosen: 214 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0165 - mae: 0.0922 - val_loss: 0.0166 - val_mae: 0.0914 Epoch 47/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0164 - mae: 0.0919 - val_loss: 0.0164 - val_mae: 0.0912 Epoch 48/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0163 - mae: 0.0914 - val_loss: 0.0166 - val_mae: 0.0923 Epoch 49/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0163 - mae: 0.0914 - val_loss: 0.0164 - val_mae: 0.0914 Epoch 50/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0162 - mae: 0.0912 - val_loss: 0.0164 - val_mae: 0.0916 Epoch 51/100 157/157 [==============================] - ETA: 0s - loss: 0.0162 - mae: 0.0913 Idx chosen: 74 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0162 - mae: 0.0913 - val_loss: 0.0165 - val_mae: 0.0919 Epoch 52/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0162 - mae: 0.0909 - val_loss: 0.0163 - val_mae: 0.0912 Epoch 53/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0161 - mae: 0.0908 - val_loss: 0.0161 - val_mae: 0.0903 Epoch 54/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0161 - mae: 0.0908 - val_loss: 0.0162 - val_mae: 0.0901 Epoch 55/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0161 - mae: 0.0907 - val_loss: 0.0162 - val_mae: 0.0909 Epoch 56/100 156/157 [============================>.] 
- ETA: 0s - loss: 0.0160 - mae: 0.0904 Idx chosen: 202 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0160 - mae: 0.0904 - val_loss: 0.0160 - val_mae: 0.0908 Epoch 57/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0159 - mae: 0.0902 - val_loss: 0.0160 - val_mae: 0.0899 Epoch 58/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0159 - mae: 0.0901 - val_loss: 0.0162 - val_mae: 0.0916 Epoch 59/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0159 - mae: 0.0898 - val_loss: 0.0160 - val_mae: 0.0903 Epoch 60/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0159 - mae: 0.0898 - val_loss: 0.0159 - val_mae: 0.0897 Epoch 61/100 157/157 [==============================] - ETA: 0s - loss: 0.0158 - mae: 0.0894 Idx chosen: 87 png 157/157 [==============================] - 7s 48ms/step - loss: 0.0158 - mae: 0.0894 - val_loss: 0.0160 - val_mae: 0.0895 Epoch 62/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0158 - mae: 0.0895 - val_loss: 0.0161 - val_mae: 0.0905 Epoch 63/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0157 - mae: 0.0891 - val_loss: 0.0158 - val_mae: 0.0894 Epoch 64/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0157 - mae: 0.0890 - val_loss: 0.0158 - val_mae: 0.0889 Epoch 65/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0157 - mae: 0.0890 - val_loss: 0.0159 - val_mae: 0.0893 Epoch 66/100 157/157 [==============================] - ETA: 0s - loss: 0.0156 - mae: 0.0888 Idx chosen: 116 png 157/157 [==============================] - 7s 47ms/step - loss: 0.0156 - mae: 0.0888 - val_loss: 0.0160 - val_mae: 0.0903 Epoch 67/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0156 - mae: 0.0886 - val_loss: 0.0156 - val_mae: 0.0881 Epoch 68/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0155 - mae: 0.0883 - val_loss: 0.0156 - val_mae: 0.0885 Epoch 69/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0154 - mae: 0.0881 - val_loss: 0.0155 - val_mae: 0.0878 Epoch 70/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0154 - mae: 0.0881 - val_loss: 0.0158 - val_mae: 0.0891 Epoch 71/100 156/157 [============================>.] 
- ETA: 0s - loss: 0.0154 - mae: 0.0879 Idx chosen: 99 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0154 - mae: 0.0879 - val_loss: 0.0155 - val_mae: 0.0884 Epoch 72/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0153 - mae: 0.0877 - val_loss: 0.0154 - val_mae: 0.0878 Epoch 73/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0153 - mae: 0.0876 - val_loss: 0.0155 - val_mae: 0.0879 Epoch 74/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0152 - mae: 0.0874 - val_loss: 0.0153 - val_mae: 0.0876 Epoch 75/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0152 - mae: 0.0872 - val_loss: 0.0153 - val_mae: 0.0872 Epoch 76/100 157/157 [==============================] - ETA: 0s - loss: 0.0151 - mae: 0.0870 Idx chosen: 103 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0151 - mae: 0.0870 - val_loss: 0.0153 - val_mae: 0.0873 Epoch 77/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0151 - mae: 0.0869 - val_loss: 0.0152 - val_mae: 0.0872 Epoch 78/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0151 - mae: 0.0867 - val_loss: 0.0152 - val_mae: 0.0869 Epoch 79/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0151 - mae: 0.0867 - val_loss: 0.0151 - val_mae: 0.0863 Epoch 80/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0150 - mae: 0.0865 - val_loss: 0.0150 - val_mae: 0.0860 Epoch 81/100 157/157 [==============================] - ETA: 0s - loss: 0.0150 - mae: 0.0865 Idx chosen: 151 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0150 - mae: 0.0865 - val_loss: 0.0151 - val_mae: 0.0862 Epoch 82/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0149 - mae: 0.0861 - val_loss: 0.0151 - val_mae: 0.0859 Epoch 83/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0149 - mae: 0.0861 - val_loss: 0.0149 - val_mae: 0.0857 Epoch 84/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0149 - mae: 0.0860 - val_loss: 0.0151 - val_mae: 0.0865 Epoch 85/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0148 - mae: 0.0858 - val_loss: 0.0150 - val_mae: 0.0856 Epoch 86/100 157/157 [==============================] - ETA: 0s - loss: 0.0148 - mae: 0.0856 Idx chosen: 130 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0148 - mae: 0.0856 - val_loss: 0.0149 - val_mae: 0.0855 Epoch 87/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0148 - mae: 0.0855 - val_loss: 0.0148 - val_mae: 0.0851 Epoch 88/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0148 - mae: 0.0856 - val_loss: 0.0149 - val_mae: 0.0855 Epoch 89/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0147 - mae: 0.0853 - val_loss: 0.0148 - val_mae: 0.0852 Epoch 90/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0147 - mae: 0.0853 - val_loss: 0.0148 - val_mae: 0.0850 Epoch 91/100 157/157 [==============================] - ETA: 0s - loss: 0.0147 - mae: 0.0852 Idx chosen: 149 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0147 - mae: 0.0852 - val_loss: 0.0148 - val_mae: 0.0851 Epoch 92/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0146 - mae: 0.0851 - val_loss: 0.0147 - val_mae: 0.0849 Epoch 93/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0147 - mae: 0.0853 - val_loss: 0.0147 - val_mae: 0.0849 
Epoch 94/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0147 - mae: 0.0852 - val_loss: 0.0148 - val_mae: 0.0850 Epoch 95/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0147 - mae: 0.0852 - val_loss: 0.0148 - val_mae: 0.0853 Epoch 96/100 157/157 [==============================] - ETA: 0s - loss: 0.0147 - mae: 0.0853 Idx chosen: 52 png 157/157 [==============================] - 7s 46ms/step - loss: 0.0147 - mae: 0.0853 - val_loss: 0.0148 - val_mae: 0.0853 Epoch 97/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0148 - mae: 0.0856 - val_loss: 0.0149 - val_mae: 0.0855 Epoch 98/100 157/157 [==============================] - 7s 43ms/step - loss: 0.0148 - mae: 0.0857 - val_loss: 0.0149 - val_mae: 0.0858 Epoch 99/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0149 - mae: 0.0863 - val_loss: 0.0150 - val_mae: 0.0865 Epoch 100/100 157/157 [==============================] - 7s 44ms/step - loss: 0.0150 - mae: 0.0873 - val_loss: 0.0153 - val_mae: 0.0881 40/40 [==============================] - 1s 15ms/step - loss: 0.0154 - mae: 0.0882 Loss: 0.02 MAE: 0.09 Evaluation with linear probing Extract the encoder model along with other layers # Extract the augmentation layers. train_augmentation_model = mae_model.train_augmentation_model test_augmentation_model = mae_model.test_augmentation_model # Extract the patchers. patch_layer = mae_model.patch_layer patch_encoder = mae_model.patch_encoder patch_encoder.downstream = True # Swtich the downstream flag to True. # Extract the encoder. encoder = mae_model.encoder # Pack as a model. downstream_model = keras.Sequential( [ layers.Input((IMAGE_SIZE, IMAGE_SIZE, 3)), patch_layer, patch_encoder, encoder, layers.BatchNormalization(), # Refer to A.1 (Linear probing). layers.GlobalAveragePooling1D(), layers.Dense(NUM_CLASSES, activation=\"softmax\"), ], name=\"linear_probe_model\", ) # Only the final classification layer of the `downstream_model` should be trainable. for layer in downstream_model.layers[:-1]: layer.trainable = False downstream_model.summary() Model: \"linear_probe_model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= patches_1 (Patches) (None, 64, 108) 0 patch_encoder_1 (PatchEncod (None, 64, 128) 22252 er) mae_encoder (Functional) (None, None, 128) 1981696 batch_normalization (BatchN (None, 64, 128) 512 ormalization) global_average_pooling1d (G (None, 128) 0 lobalAveragePooling1D) dense_19 (Dense) (None, 10) 1290 ================================================================= Total params: 2,005,750 Trainable params: 1,290 Non-trainable params: 2,004,460 _________________________________________________________________ We are using average pooling to extract learned representations from the MAE encoder. Another approach would be to use a learnable dummy token inside the encoder during pretraining (resembling the [CLS] token). Then we can extract representations from that token during the downstream tasks. 
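As a rough sketch of that alternative (hypothetical code, not used in this example), one could prepend a trainable token to the patch embeddings before the encoder during pretraining and then read the representation at position 0 downstream, instead of average pooling:

class ClassTokenPooling(layers.Layer):
    # Hypothetical layer: prepends a learnable token (resembling BERT's [CLS])
    # to a sequence of patch embeddings.
    def build(self, input_shape):
        self.cls_token = self.add_weight(
            name="cls_token",
            shape=(1, 1, input_shape[-1]),
            initializer="random_normal",
            trainable=True,
        )

    def call(self, patch_embeddings):
        batch_size = tf.shape(patch_embeddings)[0]
        cls_tokens = tf.repeat(self.cls_token, repeats=batch_size, axis=0)
        # Output shape: (B, 1 + num_patches, projection_dim). The downstream head
        # would then take outputs[:, 0] instead of GlobalAveragePooling1D.
        return tf.concat([cls_tokens, patch_embeddings], axis=1)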
Prepare datasets for linear probing def prepare_data(images, labels, is_train=True): if is_train: augmentation_model = train_augmentation_model else: augmentation_model = test_augmentation_model dataset = tf.data.Dataset.from_tensor_slices((images, labels)) if is_train: dataset = dataset.shuffle(BUFFER_SIZE) dataset = dataset.batch(BATCH_SIZE).map( lambda x, y: (augmentation_model(x), y), num_parallel_calls=AUTO ) return dataset.prefetch(AUTO) train_ds = prepare_data(x_train, y_train) val_ds = prepare_data(x_train, y_train, is_train=False) test_ds = prepare_data(x_test, y_test, is_train=False) Perform linear probing linear_probe_epochs = 50 linear_prob_lr = 0.1 warm_epoch_percentage = 0.1 steps = int((len(x_train) // BATCH_SIZE) * linear_probe_epochs) warmup_steps = int(steps * warm_epoch_percentage) scheduled_lrs = WarmUpCosine( learning_rate_base=linear_prob_lr, total_steps=steps, warmup_learning_rate=0.0, warmup_steps=warmup_steps, ) optimizer = keras.optimizers.SGD(learning_rate=scheduled_lrs, momentum=0.9) downstream_model.compile( optimizer=optimizer, loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"] ) downstream_model.fit(train_ds, validation_data=val_ds, epochs=linear_probe_epochs) loss, accuracy = downstream_model.evaluate(test_ds) accuracy = round(accuracy * 100, 2) print(f\"Accuracy on the test set: {accuracy}%.\") Epoch 1/50 157/157 [==============================] - 11s 43ms/step - loss: 2.2131 - accuracy: 0.1838 - val_loss: 2.0249 - val_accuracy: 0.2986 Epoch 2/50 157/157 [==============================] - 6s 36ms/step - loss: 1.9065 - accuracy: 0.3498 - val_loss: 1.7813 - val_accuracy: 0.3913 Epoch 3/50 157/157 [==============================] - 6s 36ms/step - loss: 1.7443 - accuracy: 0.3995 - val_loss: 1.6705 - val_accuracy: 0.4195 Epoch 4/50 157/157 [==============================] - 6s 36ms/step - loss: 1.6645 - accuracy: 0.4201 - val_loss: 1.6107 - val_accuracy: 0.4344 Epoch 5/50 157/157 [==============================] - 6s 36ms/step - loss: 1.6169 - accuracy: 0.4320 - val_loss: 1.5747 - val_accuracy: 0.4435 Epoch 6/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5843 - accuracy: 0.4364 - val_loss: 1.5476 - val_accuracy: 0.4496 Epoch 7/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5634 - accuracy: 0.4418 - val_loss: 1.5294 - val_accuracy: 0.4540 Epoch 8/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5462 - accuracy: 0.4452 - val_loss: 1.5158 - val_accuracy: 0.4575 Epoch 9/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5365 - accuracy: 0.4468 - val_loss: 1.5068 - val_accuracy: 0.4602 Epoch 10/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5237 - accuracy: 0.4541 - val_loss: 1.4971 - val_accuracy: 0.4616 Epoch 11/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5171 - accuracy: 0.4539 - val_loss: 1.4902 - val_accuracy: 0.4620 Epoch 12/50 157/157 [==============================] - 6s 37ms/step - loss: 1.5127 - accuracy: 0.4552 - val_loss: 1.4850 - val_accuracy: 0.4640 Epoch 13/50 157/157 [==============================] - 6s 36ms/step - loss: 1.5027 - accuracy: 0.4590 - val_loss: 1.4796 - val_accuracy: 0.4669 Epoch 14/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4985 - accuracy: 0.4587 - val_loss: 1.4747 - val_accuracy: 0.4673 Epoch 15/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4975 - accuracy: 0.4588 - val_loss: 1.4694 - val_accuracy: 0.4694 Epoch 16/50 157/157 
[==============================] - 6s 36ms/step - loss: 1.4933 - accuracy: 0.4596 - val_loss: 1.4661 - val_accuracy: 0.4698 Epoch 17/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4889 - accuracy: 0.4608 - val_loss: 1.4628 - val_accuracy: 0.4721 Epoch 18/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4869 - accuracy: 0.4659 - val_loss: 1.4623 - val_accuracy: 0.4721 Epoch 19/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4826 - accuracy: 0.4639 - val_loss: 1.4585 - val_accuracy: 0.4716 Epoch 20/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4813 - accuracy: 0.4653 - val_loss: 1.4559 - val_accuracy: 0.4743 Epoch 21/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4824 - accuracy: 0.4644 - val_loss: 1.4542 - val_accuracy: 0.4746 Epoch 22/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4768 - accuracy: 0.4667 - val_loss: 1.4526 - val_accuracy: 0.4757 Epoch 23/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4775 - accuracy: 0.4644 - val_loss: 1.4507 - val_accuracy: 0.4751 Epoch 24/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4750 - accuracy: 0.4670 - val_loss: 1.4481 - val_accuracy: 0.4756 Epoch 25/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4726 - accuracy: 0.4663 - val_loss: 1.4467 - val_accuracy: 0.4767 Epoch 26/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4706 - accuracy: 0.4681 - val_loss: 1.4450 - val_accuracy: 0.4781 Epoch 27/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4660 - accuracy: 0.4706 - val_loss: 1.4456 - val_accuracy: 0.4766 Epoch 28/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4664 - accuracy: 0.4707 - val_loss: 1.4443 - val_accuracy: 0.4776 Epoch 29/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4678 - accuracy: 0.4674 - val_loss: 1.4411 - val_accuracy: 0.4802 Epoch 30/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4654 - accuracy: 0.4704 - val_loss: 1.4411 - val_accuracy: 0.4801 Epoch 31/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4655 - accuracy: 0.4702 - val_loss: 1.4402 - val_accuracy: 0.4787 Epoch 32/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4620 - accuracy: 0.4735 - val_loss: 1.4402 - val_accuracy: 0.4781 Epoch 33/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4668 - accuracy: 0.4699 - val_loss: 1.4397 - val_accuracy: 0.4783 Epoch 34/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4619 - accuracy: 0.4724 - val_loss: 1.4382 - val_accuracy: 0.4793 Epoch 35/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4652 - accuracy: 0.4697 - val_loss: 1.4374 - val_accuracy: 0.4800 Epoch 36/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4618 - accuracy: 0.4707 - val_loss: 1.4372 - val_accuracy: 0.4794 Epoch 37/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4606 - accuracy: 0.4710 - val_loss: 1.4369 - val_accuracy: 0.4793 Epoch 38/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4613 - accuracy: 0.4706 - val_loss: 1.4363 - val_accuracy: 0.4806 Epoch 39/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4631 - accuracy: 0.4713 - val_loss: 1.4361 - val_accuracy: 0.4804 Epoch 40/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4620 - accuracy: 0.4695 - val_loss: 
1.4357 - val_accuracy: 0.4802 Epoch 41/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4639 - accuracy: 0.4706 - val_loss: 1.4355 - val_accuracy: 0.4801 Epoch 42/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4588 - accuracy: 0.4735 - val_loss: 1.4352 - val_accuracy: 0.4802 Epoch 43/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4573 - accuracy: 0.4734 - val_loss: 1.4352 - val_accuracy: 0.4794 Epoch 44/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4597 - accuracy: 0.4723 - val_loss: 1.4350 - val_accuracy: 0.4796 Epoch 45/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4572 - accuracy: 0.4741 - val_loss: 1.4349 - val_accuracy: 0.4799 Epoch 46/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4561 - accuracy: 0.4756 - val_loss: 1.4348 - val_accuracy: 0.4801 Epoch 47/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4593 - accuracy: 0.4730 - val_loss: 1.4348 - val_accuracy: 0.4801 Epoch 48/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4613 - accuracy: 0.4733 - val_loss: 1.4348 - val_accuracy: 0.4802 Epoch 49/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4591 - accuracy: 0.4710 - val_loss: 1.4348 - val_accuracy: 0.4803 Epoch 50/50 157/157 [==============================] - 6s 36ms/step - loss: 1.4566 - accuracy: 0.4766 - val_loss: 1.4348 - val_accuracy: 0.4803 40/40 [==============================] - 1s 17ms/step - loss: 1.4375 - accuracy: 0.4790 Accuracy on the test set: 47.9%. We believe that with a more sophisticated hyperparameter tuning process and a longer pretraining it is possible to improve this performance further. For comparison, we took the encoder architecture and trained it from scratch in a fully supervised manner. This gave us ~76% test top-1 accuracy. The authors of MAE demonstrates strong performance on the ImageNet-1k dataset as well as other downstream tasks like object detection and semantic segmentation. Final notes We refer the interested readers to other examples on self-supervised learning present on keras.io: SimCLR NNCLR SimSiam This idea of using BERT flavored pretraining in computer vision was also explored in Selfie, but it could not demonstrate strong results. Another concurrent work that explores the idea of masked image modeling is SimMIM. Finally, as a fun fact, we, the authors of this example also explored the idea of \"reconstruction as a pretext task\" in 2020 but we could not prevent the network from representation collapse, and hence we did not get strong downstream performance. We would like to thank Xinlei Chen (one of the authors of MAE) for helpful discussions. We are grateful to JarvisLabs and Google Developers Experts program for helping with GPU credits Example of using similarity metric learning on CIFAR-10 images. Overview This example is based on the \"Metric learning for image similarity search\" example. We aim to use the same data set but implement the model using TensorFlow Similarity. Metric learning aims to train models that can embed inputs into a high-dimensional space such that \"similar\" inputs are pulled closer to each other and \"dissimilar\" inputs are pushed farther apart. Once trained, these models can produce embeddings for downstream systems where such similarity is useful, for instance as a ranking signal for search or as a form of pretrained embedding model for another supervised problem. 
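To make that downstream use concrete, here is a small illustrative sketch (not from the original example): with L2-normalized embeddings, the dot product equals the cosine similarity, so a nearest-neighbour lookup over an indexed set of embeddings is just a matrix-vector product followed by a sort.

import numpy as np

def top_k_similar(query_embedding, index_embeddings, k=5):
    # Both arguments are assumed to be L2-normalized, so the dot product is the
    # cosine similarity. query_embedding: (dim,); index_embeddings: (n, dim).
    similarities = index_embeddings @ query_embedding
    top_k = np.argsort(similarities)[::-1][:k]
    return top_k, similarities[top_k]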
For a more detailed overview of metric learning, see: What is metric learning? "Using crossentropy for metric learning" tutorial Setup This tutorial will use the TensorFlow Similarity library to learn and evaluate the similarity embedding. TensorFlow Similarity provides components that: Make training contrastive models simple and fast. Make it easier to ensure that batches contain pairs of examples. Enable the evaluation of the quality of the embedding. import random from matplotlib import pyplot as plt from mpl_toolkits import axes_grid1 import numpy as np import tensorflow as tf from tensorflow import keras import tensorflow_similarity as tfsim tfsim.utils.tf_cap_memory() print("TensorFlow:", tf.__version__) print("TensorFlow Similarity:", tfsim.__version__) TensorFlow: 2.6.0 TensorFlow Similarity: 0.14 Dataset samplers We will be using the CIFAR-10 dataset for this tutorial. For a similarity model to learn efficiently, each batch must contain at least 2 examples of each class. To make this easy, TensorFlow Similarity offers Sampler objects that enable you to set both the number of classes and the minimum number of examples of each class per batch. The train and validation datasets will be created using the TFDatasetMultiShotMemorySampler object. This creates a sampler that loads datasets from TensorFlow Datasets and yields batches containing a target number of classes and a target number of examples per class. Additionally, we can restrict the sampler to only yield the subset of classes defined in class_list, enabling us to train on a subset of the classes and then test how the embedding generalizes to the unseen classes. This can be useful when working on few-shot learning problems. The following cell creates a train_ds sampler that: Loads the CIFAR-10 dataset from TFDS and then takes examples_per_class_per_batch examples from each class. Ensures the sampler restricts the classes to those defined in class_list. Ensures each batch contains 10 different classes with 8 examples each. We also create a validation dataset in the same way, but we limit the total number of examples per class to 100 and set the number of examples per class per batch to the default of 2. # This determines the number of classes used during training. # Here we are using all the classes. num_known_classes = 10 class_list = random.sample(population=range(10), k=num_known_classes) classes_per_batch = 10 # Passing multiple examples per class per batch ensures that each example has # multiple positive pairs. This can be useful when performing triplet mining or # when using losses like `MultiSimilarityLoss` or `CircleLoss`, as these can # take a weighted mix of all the positive pairs. In general, more examples per # class will lead to more information for the positive pairs, while more classes # per batch will provide more varied information in the negative pairs. However, # the losses compute the pairwise distance between the examples in a batch, so # the upper limit of the batch size is restricted by the memory.
examples_per_class_per_batch = 8 print( "Batch size is: " f"{min(classes_per_batch, num_known_classes) * examples_per_class_per_batch}" ) print(" Create Training Data ".center(34, "#")) train_ds = tfsim.samplers.TFDatasetMultiShotMemorySampler( "cifar10", classes_per_batch=min(classes_per_batch, num_known_classes), splits="train", steps_per_epoch=4000, examples_per_class_per_batch=examples_per_class_per_batch, class_list=class_list, ) print("\n" + " Create Validation Data ".center(34, "#")) val_ds = tfsim.samplers.TFDatasetMultiShotMemorySampler( "cifar10", classes_per_batch=classes_per_batch, splits="test", total_examples_per_class=100, ) Batch size is: 80 ###### Create Training Data ###### 2021-10-07 22:48:06.609114: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. similarities = tf.einsum( "ae,pe->ap", anchor_embeddings, positive_embeddings ) # Since we intend to use these as logits we scale them by a temperature. # This value would normally be chosen as a hyperparameter. temperature = 0.2 similarities /= temperature # We use these similarities as logits for a softmax. The labels for # this call are just the sequence [0, 1, 2, ..., num_classes] since we # want the main diagonal values, which correspond to the anchor/positive # pairs, to be high. This loss will move embeddings for the # anchor/positive pairs together and move all other pairs apart. sparse_labels = tf.range(num_classes) loss = self.compiled_loss(sparse_labels, similarities) # Calculate gradients and apply via optimizer. gradients = tape.gradient(loss, self.trainable_variables) self.optimizer.apply_gradients(zip(gradients, self.trainable_variables)) # Update and return metrics (specifically the one for the loss value). self.compiled_metrics.update_state(sparse_labels, similarities) return {m.name: m.result() for m in self.metrics} Next we describe the architecture that maps from an image to an embedding. This model simply consists of a sequence of 2D convolutions followed by global pooling with a final linear projection to an embedding space. As is common in metric learning, we normalise the embeddings so that we can use simple dot products to measure similarity. For simplicity, this model is intentionally small. inputs = layers.Input(shape=(height_width, height_width, 3)) x = layers.Conv2D(filters=32, kernel_size=3, strides=2, activation="relu")(inputs) x = layers.Conv2D(filters=64, kernel_size=3, strides=2, activation="relu")(x) x = layers.Conv2D(filters=128, kernel_size=3, strides=2, activation="relu")(x) x = layers.GlobalAveragePooling2D()(x) embeddings = layers.Dense(units=8, activation=None)(x) embeddings = tf.nn.l2_normalize(embeddings, axis=-1) model = EmbeddingModel(inputs, embeddings) Finally we run the training. On a Google Colab GPU instance this takes about a minute.
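The fit call below draws its batches from an AnchorPositivePairs sequence whose definition is not shown above. As a point of reference, here is a minimal sketch of what such a generator could look like; it assumes x_train, class_idx_to_train_idxs, num_classes, and height_width from the data-preparation step, plus the random, numpy (np), and keras imports, and it is an illustration rather than the exact code of this example.

class AnchorPositivePairs(keras.utils.Sequence):
    def __init__(self, num_batchs):
        self.num_batchs = num_batchs

    def __len__(self):
        return self.num_batchs

    def __getitem__(self, _idx):
        # x[0] holds one anchor image per class; x[1] holds a different
        # (positive) image of the same class, so each batch contains exactly
        # one anchor/positive pair per class.
        x = np.empty((2, num_classes, height_width, height_width, 3), dtype=np.float32)
        for class_idx in range(num_classes):
            examples_for_class = class_idx_to_train_idxs[class_idx]
            anchor_idx = random.choice(examples_for_class)
            positive_idx = random.choice(examples_for_class)
            while positive_idx == anchor_idx:
                positive_idx = random.choice(examples_for_class)
            x[0, class_idx] = x_train[anchor_idx]
            x[1, class_idx] = x_train[positive_idx]
        return x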
model.compile( optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), ) history = model.fit(AnchorPositivePairs(num_batchs=1000), epochs=20) plt.plot(history.history["loss"]) plt.show() Epoch 1/20 1000/1000 [==============================] - 4s 4ms/step - loss: 2.2475 Epoch 2/20 1000/1000 [==============================] - 5s 5ms/step - loss: 2.1246 Epoch 3/20 1000/1000 [==============================] - 7s 7ms/step - loss: 2.0519 Epoch 4/20 1000/1000 [==============================] - 8s 8ms/step - loss: 2.0011 Epoch 5/20 1000/1000 [==============================] - 9s 9ms/step - loss: 1.9601 Epoch 6/20 1000/1000 [==============================] - 9s 9ms/step - loss: 1.9214 Epoch 7/20 1000/1000 [==============================] - 9s 9ms/step - loss: 1.9094 Epoch 8/20 1000/1000 [==============================] - 10s 10ms/step - loss: 1.8669 Epoch 9/20 1000/1000 [==============================] - 10s 10ms/step - loss: 1.8462 Epoch 10/20 1000/1000 [==============================] - 10s 10ms/step - loss: 1.8095 Epoch 11/20 1000/1000 [==============================] - 10s 10ms/step - loss: 1.7854 Epoch 12/20 1000/1000 [==============================] - 11s 11ms/step - loss: 1.7595 Epoch 13/20 1000/1000 [==============================] - 11s 11ms/step - loss: 1.7538 Epoch 14/20 1000/1000 [==============================] - 11s 11ms/step - loss: 1.7198 Epoch 15/20 906/1000 [==========================>...] - ETA: 1s - loss: 1.7017 Testing We can review the quality of this model by applying it to the test set and considering near neighbours in the embedding space. First we embed the test set and calculate all near neighbours. Recall that since the embeddings are unit length, we can calculate cosine similarity via dot products. near_neighbours_per_example = 10 embeddings = model.predict(x_test) gram_matrix = np.einsum("ae,be->ab", embeddings, embeddings) near_neighbours = np.argsort(gram_matrix.T)[:, -(near_neighbours_per_example + 1) :] As a visual check of these embeddings, we can build a collage of the near neighbours for 5 random examples. The first column of the image below is a randomly selected image; the following 10 columns show the nearest neighbours in order of similarity. num_collage_examples = 5 examples = np.empty( ( num_collage_examples, near_neighbours_per_example + 1, height_width, height_width, 3, ), dtype=np.float32, ) for row_idx in range(num_collage_examples): examples[row_idx, 0] = x_test[row_idx] anchor_near_neighbours = reversed(near_neighbours[row_idx][:-1]) for col_idx, nn_idx in enumerate(anchor_near_neighbours): examples[row_idx, col_idx + 1] = x_test[nn_idx] show_collage(examples) png We can also get a quantified view of the performance by considering the correctness of near neighbours in terms of a confusion matrix. Let us sample 10 examples from each of the 10 classes and consider their near neighbours as a form of prediction; that is, do the example and its near neighbours share the same class? We observe that each animal class generally does well, and is confused the most with the other animal classes. The vehicle classes follow the same pattern. confusion_matrix = np.zeros((num_classes, num_classes)) # For each class. for class_idx in range(num_classes): # Consider 10 examples. example_idxs = class_idx_to_test_idxs[class_idx][:10] for y_test_idx in example_idxs: # And count the classes of its near neighbours.
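        # Note: argsort puts the most similar entries last, and the very last
        # entry is the example itself (a unit-length embedding has maximal
        # similarity with itself), so the [:-1] below keeps only the true
        # neighbours.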
for nn_idx in near_neighbours[y_test_idx][:-1]: nn_class_idx = y_test[nn_idx] confusion_matrix[class_idx, nn_class_idx] += 1 # Display a confusion matrix. labels = [ "Airplane", "Automobile", "Bird", "Cat", "Deer", "Dog", "Frog", "Horse", "Ship", "Truck", ] disp = ConfusionMatrixDisplay(confusion_matrix=confusion_matrix, display_labels=labels) disp.plot(include_values=True, cmap="viridis", ax=None, xticks_rotation="vertical") plt.show() png Data augmentation using the mixup technique for image classification. Introduction mixup is a domain-agnostic data augmentation technique proposed in mixup: Beyond Empirical Risk Minimization by Zhang et al. It's implemented with the following formulas: new_x = lambda * x1 + (1 - lambda) * x2 and new_y = lambda * y1 + (1 - lambda) * y2, where (x1, y1) and (x2, y2) are two examples drawn from the training data. (Note that the lambda values are values within the [0, 1] range and are sampled from the Beta distribution.) The technique is quite systematically named - we are literally mixing up the features and their corresponding labels. Implementation-wise it's simple. Neural networks are prone to memorizing corrupt labels. mixup relaxes this by combining different features with one another (the same happens for the labels too) so that a network does not get overconfident about the relationship between the features and their labels. mixup is especially useful when we are not sure about selecting a set of augmentation transforms for a given dataset (medical imaging datasets, for example). mixup can be extended to a variety of data modalities such as computer vision, natural language processing, speech, and so on. This example requires TensorFlow 2.4 or higher. Setup import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.keras import layers Prepare the dataset In this example, we will be using the FashionMNIST dataset. But this same recipe can be used for other classification datasets as well. (x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data() x_train = x_train.astype("float32") / 255.0 x_train = np.reshape(x_train, (-1, 28, 28, 1)) y_train = tf.one_hot(y_train, 10) x_test = x_test.astype("float32") / 255.0 x_test = np.reshape(x_test, (-1, 28, 28, 1)) y_test = tf.one_hot(y_test, 10) Define hyperparameters AUTO = tf.data.AUTOTUNE BATCH_SIZE = 64 EPOCHS = 10 Convert the data into TensorFlow Dataset objects # Put aside a few samples to create our validation set val_samples = 2000 x_val, y_val = x_train[:val_samples], y_train[:val_samples] new_x_train, new_y_train = x_train[val_samples:], y_train[val_samples:] train_ds_one = ( tf.data.Dataset.from_tensor_slices((new_x_train, new_y_train)) .shuffle(BATCH_SIZE * 100) .batch(BATCH_SIZE) ) train_ds_two = ( tf.data.Dataset.from_tensor_slices((new_x_train, new_y_train)) .shuffle(BATCH_SIZE * 100) .batch(BATCH_SIZE) ) # Because we will be mixing up the images and their corresponding labels, we will be # combining two shuffled datasets from the same training data. train_ds = tf.data.Dataset.zip((train_ds_one, train_ds_two)) val_ds = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(BATCH_SIZE) test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE) Define the mixup technique function To perform the mixup routine, we create new virtual datasets using the training data from the same dataset, and apply a lambda value within the [0, 1] range sampled from a Beta distribution — such that, for example, new_x = lambda * x1 + (1 - lambda) * x2 (where x1 and x2 are images) and the same equation is applied to the labels as well.
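Before looking at the implementation, here is a tiny worked example of the formulas above; the arrays and the lambda value are illustrative only and are not part of the original example.

import numpy as np

lam = 0.7  # pretend this lambda was drawn from a Beta(0.2, 0.2) distribution

x1, x2 = np.array([0.0, 1.0]), np.array([1.0, 0.0])   # two toy "images" (flattened)
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # their one-hot labels

new_x = lam * x1 + (1 - lam) * x2   # -> [0.3, 0.7]
new_y = lam * y1 + (1 - lam) * y2   # -> [0.7, 0.3], a soft label that mixes both classes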
def sample_beta_distribution(size, concentration_0=0.2, concentration_1=0.2): gamma_1_sample = tf.random.gamma(shape=[size], alpha=concentration_1) gamma_2_sample = tf.random.gamma(shape=[size], alpha=concentration_0) return gamma_1_sample / (gamma_1_sample + gamma_2_sample) def mix_up(ds_one, ds_two, alpha=0.2): # Unpack two datasets images_one, labels_one = ds_one images_two, labels_two = ds_two batch_size = tf.shape(images_one)[0] # Sample lambda and reshape it to do the mixup l = sample_beta_distribution(batch_size, alpha, alpha) x_l = tf.reshape(l, (batch_size, 1, 1, 1)) y_l = tf.reshape(l, (batch_size, 1)) # Perform mixup on both images and labels by combining a pair of images/labels # (one from each dataset) into one image/label images = images_one * x_l + images_two * (1 - x_l) labels = labels_one * y_l + labels_two * (1 - y_l) return (images, labels) Note that here, we are combining two images to create a single one. Theoretically, we can combine as many as we want, but that comes at an increased computation cost. In certain cases, it may not improve the performance either. Visualize the new augmented dataset # First create the new dataset using our `mix_up` utility train_ds_mu = train_ds.map( lambda ds_one, ds_two: mix_up(ds_one, ds_two, alpha=0.2), num_parallel_calls=AUTO ) # Let's preview 9 samples from the dataset sample_images, sample_labels = next(iter(train_ds_mu)) plt.figure(figsize=(10, 10)) for i, (image, label) in enumerate(zip(sample_images[:9], sample_labels[:9])): ax = plt.subplot(3, 3, i + 1) plt.imshow(image.numpy().squeeze()) print(label.numpy().tolist()) plt.axis("off") [0.01706075668334961, 0.0, 0.0, 0.9829392433166504, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] [0.0, 0.5761554837226868, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.42384451627731323, 0.0] [0.0, 0.0, 0.9999957084655762, 0.0, 4.291534423828125e-06, 0.0, 0.0, 0.0, 0.0, 0.0] [0.0, 0.0, 0.03438800573348999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.96561199426651, 0.0] [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0] [0.0, 0.0, 0.9808260202407837, 0.0, 0.0, 0.0, 0.01917397230863571, 0.0, 0.0, 0.0] [0.0, 0.9999748468399048, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.5153160095214844e-05] [0.0, 0.0, 0.0, 0.0002035107754636556, 0.0, 0.9997965097427368, 0.0, 0.0, 0.0, 0.0] [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2410212755203247, 0.0, 0.0, 0.7589787244796753] png Model building def get_training_model(): model = tf.keras.Sequential( [ layers.Conv2D(16, (5, 5), activation="relu", input_shape=(28, 28, 1)), layers.MaxPooling2D(pool_size=(2, 2)), layers.Conv2D(32, (5, 5), activation="relu"), layers.MaxPooling2D(pool_size=(2, 2)), layers.Dropout(0.2), layers.GlobalAvgPool2D(), layers.Dense(128, activation="relu"), layers.Dense(10, activation="softmax"), ] ) return model For the sake of reproducibility, we serialize the initial random weights of our shallow network. initial_model = get_training_model() initial_model.save_weights("initial_weights.h5") 1.
Train the model with the mixed up dataset model = get_training_model() model.load_weights(\"initial_weights.h5\") model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]) model.fit(train_ds_mu, validation_data=val_ds, epochs=EPOCHS) _, test_acc = model.evaluate(test_ds) print(\"Test accuracy: {:.2f}%\".format(test_acc * 100)) Epoch 1/10 907/907 [==============================] - 38s 41ms/step - loss: 1.4440 - accuracy: 0.5173 - val_loss: 0.7120 - val_accuracy: 0.7405 Epoch 2/10 907/907 [==============================] - 38s 42ms/step - loss: 0.9869 - accuracy: 0.7074 - val_loss: 0.5996 - val_accuracy: 0.7780 Epoch 3/10 907/907 [==============================] - 38s 42ms/step - loss: 0.9096 - accuracy: 0.7451 - val_loss: 0.5197 - val_accuracy: 0.8285 Epoch 4/10 907/907 [==============================] - 38s 42ms/step - loss: 0.8485 - accuracy: 0.7741 - val_loss: 0.4830 - val_accuracy: 0.8380 Epoch 5/10 907/907 [==============================] - 38s 42ms/step - loss: 0.8032 - accuracy: 0.7916 - val_loss: 0.4543 - val_accuracy: 0.8445 Epoch 6/10 907/907 [==============================] - 38s 42ms/step - loss: 0.7675 - accuracy: 0.8032 - val_loss: 0.4398 - val_accuracy: 0.8470 Epoch 7/10 907/907 [==============================] - 38s 42ms/step - loss: 0.7474 - accuracy: 0.8098 - val_loss: 0.4262 - val_accuracy: 0.8495 Epoch 8/10 907/907 [==============================] - 38s 42ms/step - loss: 0.7337 - accuracy: 0.8145 - val_loss: 0.3950 - val_accuracy: 0.8650 Epoch 9/10 907/907 [==============================] - 38s 42ms/step - loss: 0.7154 - accuracy: 0.8218 - val_loss: 0.3822 - val_accuracy: 0.8725 Epoch 10/10 907/907 [==============================] - 38s 42ms/step - loss: 0.7095 - accuracy: 0.8224 - val_loss: 0.3563 - val_accuracy: 0.8720 157/157 [==============================] - 2s 14ms/step - loss: 0.3821 - accuracy: 0.8726 Test accuracy: 87.26% 2. 
Train the model without the mixed up dataset model = get_training_model() model.load_weights(\"initial_weights.h5\") model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]) # Notice that we are NOT using the mixed up dataset here model.fit(train_ds_one, validation_data=val_ds, epochs=EPOCHS) _, test_acc = model.evaluate(test_ds) print(\"Test accuracy: {:.2f}%\".format(test_acc * 100)) Epoch 1/10 907/907 [==============================] - 37s 40ms/step - loss: 1.2037 - accuracy: 0.5553 - val_loss: 0.6732 - val_accuracy: 0.7565 Epoch 2/10 907/907 [==============================] - 37s 40ms/step - loss: 0.6724 - accuracy: 0.7462 - val_loss: 0.5715 - val_accuracy: 0.7940 Epoch 3/10 907/907 [==============================] - 37s 40ms/step - loss: 0.5828 - accuracy: 0.7897 - val_loss: 0.5042 - val_accuracy: 0.8210 Epoch 4/10 907/907 [==============================] - 37s 40ms/step - loss: 0.5203 - accuracy: 0.8115 - val_loss: 0.4587 - val_accuracy: 0.8405 Epoch 5/10 907/907 [==============================] - 36s 40ms/step - loss: 0.4802 - accuracy: 0.8255 - val_loss: 0.4602 - val_accuracy: 0.8340 Epoch 6/10 907/907 [==============================] - 36s 40ms/step - loss: 0.4566 - accuracy: 0.8351 - val_loss: 0.3985 - val_accuracy: 0.8700 Epoch 7/10 907/907 [==============================] - 37s 40ms/step - loss: 0.4273 - accuracy: 0.8457 - val_loss: 0.3764 - val_accuracy: 0.8685 Epoch 8/10 907/907 [==============================] - 36s 40ms/step - loss: 0.4133 - accuracy: 0.8481 - val_loss: 0.3704 - val_accuracy: 0.8735 Epoch 9/10 907/907 [==============================] - 36s 40ms/step - loss: 0.3951 - accuracy: 0.8543 - val_loss: 0.3715 - val_accuracy: 0.8680 Epoch 10/10 907/907 [==============================] - 36s 40ms/step - loss: 0.3850 - accuracy: 0.8586 - val_loss: 0.3458 - val_accuracy: 0.8735 157/157 [==============================] - 2s 13ms/step - loss: 0.3817 - accuracy: 0.8636 Test accuracy: 86.36% Readers are encouraged to try out mixup on different datasets from different domains and experiment with the lambda parameter. You are strongly advised to check out the original paper as well - the authors present several ablation studies on mixup showing how it can improve generalization, as well as show their results of combining more than two images to create a single one. Notes With mixup, you can create synthetic examples — especially when you lack a large dataset - without incurring high computational costs. Label smoothing and mixup usually do not work well together because label smoothing already modifies the hard labels by some factor. mixup does not work well when you are using Supervised Contrastive Learning (SCL) since SCL expects the true labels during its pre-training phase. A few other benefits of mixup include (as described in the paper) robustness to adversarial examples and stabilized GAN (Generative Adversarial Networks) training. There are a number of data augmentation techniques that extend mixup such as CutMix and AugMix. MobileViT for image classification with combined benefits of convolutions and Transformers. Introduction In this example, we implement the MobileViT architecture (Mehta et al.), which combines the benefits of Transformers (Vaswani et al.) and convolutions. With Transformers, we can capture long-range dependencies that result in global representations. With convolutions, we can capture spatial relationships that model locality. 
Besides combining the properties of Transformers and convolutions, the authors introduce MobileViT as a general-purpose mobile-friendly backbone for different image recognition tasks. Their findings suggest that, performance-wise, MobileViT is better than other models with the same or higher complexity (MobileNetV3, for example), while being efficient on mobile devices. Imports import tensorflow as tf from keras.applications import imagenet_utils from tensorflow.keras import layers from tensorflow import keras import tensorflow_datasets as tfds import tensorflow_addons as tfa tfds.disable_progress_bar() Hyperparameters # Values are from table 4. patch_size = 4 # 2x2, for the Transformer blocks. image_size = 256 expansion_factor = 2 # expansion factor for the MobileNetV2 blocks. MobileViT utilities The MobileViT architecture is comprised of the following blocks: Strided 3x3 convolutions that process the input image. MobileNetV2-style inverted residual blocks for downsampling the resolution of the intermediate feature maps. MobileViT blocks that combine the benefits of Transformers and convolutions. It is presented in the figure below (taken from the original paper): def conv_block(x, filters=16, kernel_size=3, strides=2): conv_layer = layers.Conv2D( filters, kernel_size, strides=strides, activation=tf.nn.swish, padding=\"same\" ) return conv_layer(x) # Reference: https://git.io/JKgtC def inverted_residual_block(x, expanded_channels, output_channels, strides=1): m = layers.Conv2D(expanded_channels, 1, padding=\"same\", use_bias=False)(x) m = layers.BatchNormalization()(m) m = tf.nn.swish(m) if strides == 2: m = layers.ZeroPadding2D(padding=imagenet_utils.correct_pad(m, 3))(m) m = layers.DepthwiseConv2D( 3, strides=strides, padding=\"same\" if strides == 1 else \"valid\", use_bias=False )(m) m = layers.BatchNormalization()(m) m = tf.nn.swish(m) m = layers.Conv2D(output_channels, 1, padding=\"same\", use_bias=False)(m) m = layers.BatchNormalization()(m) if tf.math.equal(x.shape[-1], output_channels) and strides == 1: return layers.Add()([m, x]) return m # Reference: # https://keras.io/examples/vision/image_classification_with_vision_transformer/ def mlp(x, hidden_units, dropout_rate): for units in hidden_units: x = layers.Dense(units, activation=tf.nn.swish)(x) x = layers.Dropout(dropout_rate)(x) return x def transformer_block(x, transformer_layers, projection_dim, num_heads=2): for _ in range(transformer_layers): # Layer normalization 1. x1 = layers.LayerNormalization(epsilon=1e-6)(x) # Create a multi-head attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=projection_dim, dropout=0.1 )(x1, x1) # Skip connection 1. x2 = layers.Add()([attention_output, x]) # Layer normalization 2. x3 = layers.LayerNormalization(epsilon=1e-6)(x2) # MLP. x3 = mlp(x3, hidden_units=[x.shape[-1] * 2, x.shape[-1]], dropout_rate=0.1,) # Skip connection 2. x = layers.Add()([x3, x2]) return x def mobilevit_block(x, num_blocks, projection_dim, strides=1): # Local projection with convolutions. local_features = conv_block(x, filters=projection_dim, strides=strides) local_features = conv_block( local_features, filters=projection_dim, kernel_size=1, strides=strides ) # Unfold into patches and then pass through Transformers. 
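    # Shape bookkeeping: for the first MobileViT block the local features have
    # shape (batch, 32, 32, 64); with patch_size = 4 (2x2 patches) this gives
    # num_patches = (32 * 32) / 4 = 256, so the tensor fed to the Transformer
    # below has shape (batch, 4, 256, 64).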
num_patches = int((local_features.shape[1] * local_features.shape[2]) / patch_size) non_overlapping_patches = layers.Reshape((patch_size, num_patches, projection_dim))( local_features ) global_features = transformer_block( non_overlapping_patches, num_blocks, projection_dim ) # Fold into conv-like feature-maps. folded_feature_map = layers.Reshape((*local_features.shape[1:-1], projection_dim))( global_features ) # Apply point-wise conv -> concatenate with the input features. folded_feature_map = conv_block( folded_feature_map, filters=x.shape[-1], kernel_size=1, strides=strides ) local_global_features = layers.Concatenate(axis=-1)([x, folded_feature_map]) # Fuse the local and global features using a convolution layer. local_global_features = conv_block( local_global_features, filters=projection_dim, strides=strides ) return local_global_features More on the MobileViT block: First, the feature representations (A) go through convolution blocks that capture local relationships. The expected shape of a single entry here would be (h, w, num_channels). Then they get unfolded into another vector with shape (p, n, num_channels), where p is the area of a small patch, and n is (h * w) / p. So, we end up with n non-overlapping patches. This unfolded vector is then passed through a Transformer block that captures global relationships between the patches. The output vector (B) is again folded into a vector of shape (h, w, num_channels) resembling a feature map coming out of convolutions. Vectors A and B are then passed through two more convolutional layers to fuse the local and global representations. Notice how the spatial resolution of the final vector remains unchanged at this point. The authors also present an explanation of how the MobileViT block resembles a convolution block of a CNN. For more details, please refer to the original paper. Next, we combine these blocks together and implement the MobileViT architecture (XXS variant). The following figure (taken from the original paper) presents a schematic representation of the architecture: def create_mobilevit(num_classes=5): inputs = keras.Input((image_size, image_size, 3)) x = layers.Rescaling(scale=1.0 / 255)(inputs) # Initial conv-stem -> MV2 block. x = conv_block(x, filters=16) x = inverted_residual_block( x, expanded_channels=16 * expansion_factor, output_channels=16 ) # Downsampling with MV2 block. x = inverted_residual_block( x, expanded_channels=16 * expansion_factor, output_channels=24, strides=2 ) x = inverted_residual_block( x, expanded_channels=24 * expansion_factor, output_channels=24 ) x = inverted_residual_block( x, expanded_channels=24 * expansion_factor, output_channels=24 ) # First MV2 -> MobileViT block. x = inverted_residual_block( x, expanded_channels=24 * expansion_factor, output_channels=48, strides=2 ) x = mobilevit_block(x, num_blocks=2, projection_dim=64) # Second MV2 -> MobileViT block. x = inverted_residual_block( x, expanded_channels=64 * expansion_factor, output_channels=64, strides=2 ) x = mobilevit_block(x, num_blocks=4, projection_dim=80) # Third MV2 -> MobileViT block. x = inverted_residual_block( x, expanded_channels=80 * expansion_factor, output_channels=80, strides=2 ) x = mobilevit_block(x, num_blocks=3, projection_dim=96) x = conv_block(x, filters=320, kernel_size=1, strides=1) # Classification head.
x = layers.GlobalAvgPool2D()(x) outputs = layers.Dense(num_classes, activation=\"softmax\")(x) return keras.Model(inputs, outputs) mobilevit_xxs = create_mobilevit() mobilevit_xxs.summary() Model: \"model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 256, 256, 3) 0 __________________________________________________________________________________________________ rescaling (Rescaling) (None, 256, 256, 3) 0 input_1[0][0] __________________________________________________________________________________________________ conv2d (Conv2D) (None, 128, 128, 16) 448 rescaling[0][0] __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 128, 128, 32) 512 conv2d[0][0] __________________________________________________________________________________________________ batch_normalization (BatchNorma (None, 128, 128, 32) 128 conv2d_1[0][0] __________________________________________________________________________________________________ tf.nn.silu (TFOpLambda) (None, 128, 128, 32) 0 batch_normalization[0][0] __________________________________________________________________________________________________ depthwise_conv2d (DepthwiseConv (None, 128, 128, 32) 288 tf.nn.silu[0][0] __________________________________________________________________________________________________ batch_normalization_1 (BatchNor (None, 128, 128, 32) 128 depthwise_conv2d[0][0] __________________________________________________________________________________________________ tf.nn.silu_1 (TFOpLambda) (None, 128, 128, 32) 0 batch_normalization_1[0][0] __________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 128, 128, 16) 512 tf.nn.silu_1[0][0] __________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 128, 128, 16) 64 conv2d_2[0][0] __________________________________________________________________________________________________ add (Add) (None, 128, 128, 16) 0 batch_normalization_2[0][0] conv2d[0][0] __________________________________________________________________________________________________ conv2d_3 (Conv2D) (None, 128, 128, 32) 512 add[0][0] __________________________________________________________________________________________________ batch_normalization_3 (BatchNor (None, 128, 128, 32) 128 conv2d_3[0][0] __________________________________________________________________________________________________ tf.nn.silu_2 (TFOpLambda) (None, 128, 128, 32) 0 batch_normalization_3[0][0] __________________________________________________________________________________________________ zero_padding2d (ZeroPadding2D) (None, 129, 129, 32) 0 tf.nn.silu_2[0][0] __________________________________________________________________________________________________ depthwise_conv2d_1 (DepthwiseCo (None, 64, 64, 32) 288 zero_padding2d[0][0] __________________________________________________________________________________________________ batch_normalization_4 (BatchNor (None, 64, 64, 32) 128 depthwise_conv2d_1[0][0] __________________________________________________________________________________________________ tf.nn.silu_3 (TFOpLambda) (None, 64, 64, 32) 0 batch_normalization_4[0][0] 
__________________________________________________________________________________________________ conv2d_4 (Conv2D) (None, 64, 64, 24) 768 tf.nn.silu_3[0][0] __________________________________________________________________________________________________ batch_normalization_5 (BatchNor (None, 64, 64, 24) 96 conv2d_4[0][0] __________________________________________________________________________________________________ conv2d_5 (Conv2D) (None, 64, 64, 48) 1152 batch_normalization_5[0][0] __________________________________________________________________________________________________ batch_normalization_6 (BatchNor (None, 64, 64, 48) 192 conv2d_5[0][0] __________________________________________________________________________________________________ tf.nn.silu_4 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_6[0][0] __________________________________________________________________________________________________ depthwise_conv2d_2 (DepthwiseCo (None, 64, 64, 48) 432 tf.nn.silu_4[0][0] __________________________________________________________________________________________________ batch_normalization_7 (BatchNor (None, 64, 64, 48) 192 depthwise_conv2d_2[0][0] __________________________________________________________________________________________________ tf.nn.silu_5 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_7[0][0] __________________________________________________________________________________________________ conv2d_6 (Conv2D) (None, 64, 64, 24) 1152 tf.nn.silu_5[0][0] __________________________________________________________________________________________________ batch_normalization_8 (BatchNor (None, 64, 64, 24) 96 conv2d_6[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, 64, 64, 24) 0 batch_normalization_8[0][0] batch_normalization_5[0][0] __________________________________________________________________________________________________ conv2d_7 (Conv2D) (None, 64, 64, 48) 1152 add_1[0][0] __________________________________________________________________________________________________ batch_normalization_9 (BatchNor (None, 64, 64, 48) 192 conv2d_7[0][0] __________________________________________________________________________________________________ tf.nn.silu_6 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_9[0][0] __________________________________________________________________________________________________ depthwise_conv2d_3 (DepthwiseCo (None, 64, 64, 48) 432 tf.nn.silu_6[0][0] __________________________________________________________________________________________________ batch_normalization_10 (BatchNo (None, 64, 64, 48) 192 depthwise_conv2d_3[0][0] __________________________________________________________________________________________________ tf.nn.silu_7 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_10[0][0] __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 64, 64, 24) 1152 tf.nn.silu_7[0][0] __________________________________________________________________________________________________ batch_normalization_11 (BatchNo (None, 64, 64, 24) 96 conv2d_8[0][0] __________________________________________________________________________________________________ add_2 (Add) (None, 64, 64, 24) 0 batch_normalization_11[0][0] add_1[0][0] __________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 64, 64, 48) 1152 
add_2[0][0] __________________________________________________________________________________________________ batch_normalization_12 (BatchNo (None, 64, 64, 48) 192 conv2d_9[0][0] __________________________________________________________________________________________________ tf.nn.silu_8 (TFOpLambda) (None, 64, 64, 48) 0 batch_normalization_12[0][0] __________________________________________________________________________________________________ zero_padding2d_1 (ZeroPadding2D (None, 65, 65, 48) 0 tf.nn.silu_8[0][0] __________________________________________________________________________________________________ depthwise_conv2d_4 (DepthwiseCo (None, 32, 32, 48) 432 zero_padding2d_1[0][0] __________________________________________________________________________________________________ batch_normalization_13 (BatchNo (None, 32, 32, 48) 192 depthwise_conv2d_4[0][0] __________________________________________________________________________________________________ tf.nn.silu_9 (TFOpLambda) (None, 32, 32, 48) 0 batch_normalization_13[0][0] __________________________________________________________________________________________________ conv2d_10 (Conv2D) (None, 32, 32, 48) 2304 tf.nn.silu_9[0][0] __________________________________________________________________________________________________ batch_normalization_14 (BatchNo (None, 32, 32, 48) 192 conv2d_10[0][0] __________________________________________________________________________________________________ conv2d_11 (Conv2D) (None, 32, 32, 64) 27712 batch_normalization_14[0][0] __________________________________________________________________________________________________ conv2d_12 (Conv2D) (None, 32, 32, 64) 4160 conv2d_11[0][0] __________________________________________________________________________________________________ reshape (Reshape) (None, 4, 256, 64) 0 conv2d_12[0][0] __________________________________________________________________________________________________ layer_normalization (LayerNorma (None, 4, 256, 64) 128 reshape[0][0] __________________________________________________________________________________________________ multi_head_attention (MultiHead (None, 4, 256, 64) 33216 layer_normalization[0][0] layer_normalization[0][0] __________________________________________________________________________________________________ add_3 (Add) (None, 4, 256, 64) 0 multi_head_attention[0][0] reshape[0][0] __________________________________________________________________________________________________ layer_normalization_1 (LayerNor (None, 4, 256, 64) 128 add_3[0][0] __________________________________________________________________________________________________ dense (Dense) (None, 4, 256, 128) 8320 layer_normalization_1[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 4, 256, 128) 0 dense[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 4, 256, 64) 8256 dropout[0][0] __________________________________________________________________________________________________ dropout_1 (Dropout) (None, 4, 256, 64) 0 dense_1[0][0] __________________________________________________________________________________________________ add_4 (Add) (None, 4, 256, 64) 0 dropout_1[0][0] add_3[0][0] __________________________________________________________________________________________________ layer_normalization_2 (LayerNor (None, 4, 256, 64) 128 add_4[0][0] 
__________________________________________________________________________________________________ multi_head_attention_1 (MultiHe (None, 4, 256, 64) 33216 layer_normalization_2[0][0] layer_normalization_2[0][0] __________________________________________________________________________________________________ add_5 (Add) (None, 4, 256, 64) 0 multi_head_attention_1[0][0] add_4[0][0] __________________________________________________________________________________________________ layer_normalization_3 (LayerNor (None, 4, 256, 64) 128 add_5[0][0] __________________________________________________________________________________________________ dense_2 (Dense) (None, 4, 256, 128) 8320 layer_normalization_3[0][0] __________________________________________________________________________________________________ dropout_2 (Dropout) (None, 4, 256, 128) 0 dense_2[0][0] __________________________________________________________________________________________________ dense_3 (Dense) (None, 4, 256, 64) 8256 dropout_2[0][0] __________________________________________________________________________________________________ dropout_3 (Dropout) (None, 4, 256, 64) 0 dense_3[0][0] __________________________________________________________________________________________________ add_6 (Add) (None, 4, 256, 64) 0 dropout_3[0][0] add_5[0][0] __________________________________________________________________________________________________ reshape_1 (Reshape) (None, 32, 32, 64) 0 add_6[0][0] __________________________________________________________________________________________________ conv2d_13 (Conv2D) (None, 32, 32, 48) 3120 reshape_1[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 32, 32, 96) 0 batch_normalization_14[0][0] conv2d_13[0][0] __________________________________________________________________________________________________ conv2d_14 (Conv2D) (None, 32, 32, 64) 55360 concatenate[0][0] __________________________________________________________________________________________________ conv2d_15 (Conv2D) (None, 32, 32, 128) 8192 conv2d_14[0][0] __________________________________________________________________________________________________ batch_normalization_15 (BatchNo (None, 32, 32, 128) 512 conv2d_15[0][0] __________________________________________________________________________________________________ tf.nn.silu_10 (TFOpLambda) (None, 32, 32, 128) 0 batch_normalization_15[0][0] __________________________________________________________________________________________________ zero_padding2d_2 (ZeroPadding2D (None, 33, 33, 128) 0 tf.nn.silu_10[0][0] __________________________________________________________________________________________________ depthwise_conv2d_5 (DepthwiseCo (None, 16, 16, 128) 1152 zero_padding2d_2[0][0] __________________________________________________________________________________________________ batch_normalization_16 (BatchNo (None, 16, 16, 128) 512 depthwise_conv2d_5[0][0] __________________________________________________________________________________________________ tf.nn.silu_11 (TFOpLambda) (None, 16, 16, 128) 0 batch_normalization_16[0][0] __________________________________________________________________________________________________ conv2d_16 (Conv2D) (None, 16, 16, 64) 8192 tf.nn.silu_11[0][0] __________________________________________________________________________________________________ batch_normalization_17 (BatchNo (None, 16, 16, 64) 256 conv2d_16[0][0] 
__________________________________________________________________________________________________ conv2d_17 (Conv2D) (None, 16, 16, 80) 46160 batch_normalization_17[0][0] __________________________________________________________________________________________________ conv2d_18 (Conv2D) (None, 16, 16, 80) 6480 conv2d_17[0][0] __________________________________________________________________________________________________ reshape_2 (Reshape) (None, 4, 64, 80) 0 conv2d_18[0][0] __________________________________________________________________________________________________ layer_normalization_4 (LayerNor (None, 4, 64, 80) 160 reshape_2[0][0] __________________________________________________________________________________________________ multi_head_attention_2 (MultiHe (None, 4, 64, 80) 51760 layer_normalization_4[0][0] layer_normalization_4[0][0] __________________________________________________________________________________________________ add_7 (Add) (None, 4, 64, 80) 0 multi_head_attention_2[0][0] reshape_2[0][0] __________________________________________________________________________________________________ layer_normalization_5 (LayerNor (None, 4, 64, 80) 160 add_7[0][0] __________________________________________________________________________________________________ dense_4 (Dense) (None, 4, 64, 160) 12960 layer_normalization_5[0][0] __________________________________________________________________________________________________ dropout_4 (Dropout) (None, 4, 64, 160) 0 dense_4[0][0] __________________________________________________________________________________________________ dense_5 (Dense) (None, 4, 64, 80) 12880 dropout_4[0][0] __________________________________________________________________________________________________ dropout_5 (Dropout) (None, 4, 64, 80) 0 dense_5[0][0] __________________________________________________________________________________________________ add_8 (Add) (None, 4, 64, 80) 0 dropout_5[0][0] add_7[0][0] __________________________________________________________________________________________________ layer_normalization_6 (LayerNor (None, 4, 64, 80) 160 add_8[0][0] __________________________________________________________________________________________________ multi_head_attention_3 (MultiHe (None, 4, 64, 80) 51760 layer_normalization_6[0][0] layer_normalization_6[0][0] __________________________________________________________________________________________________ add_9 (Add) (None, 4, 64, 80) 0 multi_head_attention_3[0][0] add_8[0][0] __________________________________________________________________________________________________ layer_normalization_7 (LayerNor (None, 4, 64, 80) 160 add_9[0][0] __________________________________________________________________________________________________ dense_6 (Dense) (None, 4, 64, 160) 12960 layer_normalization_7[0][0] __________________________________________________________________________________________________ dropout_6 (Dropout) (None, 4, 64, 160) 0 dense_6[0][0] __________________________________________________________________________________________________ dense_7 (Dense) (None, 4, 64, 80) 12880 dropout_6[0][0] __________________________________________________________________________________________________ dropout_7 (Dropout) (None, 4, 64, 80) 0 dense_7[0][0] __________________________________________________________________________________________________ add_10 (Add) (None, 4, 64, 80) 0 dropout_7[0][0] add_9[0][0] 
__________________________________________________________________________________________________ layer_normalization_8 (LayerNor (None, 4, 64, 80) 160 add_10[0][0] __________________________________________________________________________________________________ multi_head_attention_4 (MultiHe (None, 4, 64, 80) 51760 layer_normalization_8[0][0] layer_normalization_8[0][0] __________________________________________________________________________________________________ add_11 (Add) (None, 4, 64, 80) 0 multi_head_attention_4[0][0] add_10[0][0] __________________________________________________________________________________________________ layer_normalization_9 (LayerNor (None, 4, 64, 80) 160 add_11[0][0] __________________________________________________________________________________________________ dense_8 (Dense) (None, 4, 64, 160) 12960 layer_normalization_9[0][0] __________________________________________________________________________________________________ dropout_8 (Dropout) (None, 4, 64, 160) 0 dense_8[0][0] __________________________________________________________________________________________________ dense_9 (Dense) (None, 4, 64, 80) 12880 dropout_8[0][0] __________________________________________________________________________________________________ dropout_9 (Dropout) (None, 4, 64, 80) 0 dense_9[0][0] __________________________________________________________________________________________________ add_12 (Add) (None, 4, 64, 80) 0 dropout_9[0][0] add_11[0][0] __________________________________________________________________________________________________ layer_normalization_10 (LayerNo (None, 4, 64, 80) 160 add_12[0][0] __________________________________________________________________________________________________ multi_head_attention_5 (MultiHe (None, 4, 64, 80) 51760 layer_normalization_10[0][0] layer_normalization_10[0][0] __________________________________________________________________________________________________ add_13 (Add) (None, 4, 64, 80) 0 multi_head_attention_5[0][0] add_12[0][0] __________________________________________________________________________________________________ layer_normalization_11 (LayerNo (None, 4, 64, 80) 160 add_13[0][0] __________________________________________________________________________________________________ dense_10 (Dense) (None, 4, 64, 160) 12960 layer_normalization_11[0][0] __________________________________________________________________________________________________ dropout_10 (Dropout) (None, 4, 64, 160) 0 dense_10[0][0] __________________________________________________________________________________________________ dense_11 (Dense) (None, 4, 64, 80) 12880 dropout_10[0][0] __________________________________________________________________________________________________ dropout_11 (Dropout) (None, 4, 64, 80) 0 dense_11[0][0] __________________________________________________________________________________________________ add_14 (Add) (None, 4, 64, 80) 0 dropout_11[0][0] add_13[0][0] __________________________________________________________________________________________________ reshape_3 (Reshape) (None, 16, 16, 80) 0 add_14[0][0] __________________________________________________________________________________________________ conv2d_19 (Conv2D) (None, 16, 16, 64) 5184 reshape_3[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 16, 16, 128) 0 batch_normalization_17[0][0] conv2d_19[0][0] 
__________________________________________________________________________________________________ conv2d_20 (Conv2D) (None, 16, 16, 80) 92240 concatenate_1[0][0] __________________________________________________________________________________________________ conv2d_21 (Conv2D) (None, 16, 16, 160) 12800 conv2d_20[0][0] __________________________________________________________________________________________________ batch_normalization_18 (BatchNo (None, 16, 16, 160) 640 conv2d_21[0][0] __________________________________________________________________________________________________ tf.nn.silu_12 (TFOpLambda) (None, 16, 16, 160) 0 batch_normalization_18[0][0] __________________________________________________________________________________________________ zero_padding2d_3 (ZeroPadding2D (None, 17, 17, 160) 0 tf.nn.silu_12[0][0] __________________________________________________________________________________________________ depthwise_conv2d_6 (DepthwiseCo (None, 8, 8, 160) 1440 zero_padding2d_3[0][0] __________________________________________________________________________________________________ batch_normalization_19 (BatchNo (None, 8, 8, 160) 640 depthwise_conv2d_6[0][0] __________________________________________________________________________________________________ tf.nn.silu_13 (TFOpLambda) (None, 8, 8, 160) 0 batch_normalization_19[0][0] __________________________________________________________________________________________________ conv2d_22 (Conv2D) (None, 8, 8, 80) 12800 tf.nn.silu_13[0][0] __________________________________________________________________________________________________ batch_normalization_20 (BatchNo (None, 8, 8, 80) 320 conv2d_22[0][0] __________________________________________________________________________________________________ conv2d_23 (Conv2D) (None, 8, 8, 96) 69216 batch_normalization_20[0][0] __________________________________________________________________________________________________ conv2d_24 (Conv2D) (None, 8, 8, 96) 9312 conv2d_23[0][0] __________________________________________________________________________________________________ reshape_4 (Reshape) (None, 4, 16, 96) 0 conv2d_24[0][0] __________________________________________________________________________________________________ layer_normalization_12 (LayerNo (None, 4, 16, 96) 192 reshape_4[0][0] __________________________________________________________________________________________________ multi_head_attention_6 (MultiHe (None, 4, 16, 96) 74400 layer_normalization_12[0][0] layer_normalization_12[0][0] __________________________________________________________________________________________________ add_15 (Add) (None, 4, 16, 96) 0 multi_head_attention_6[0][0] reshape_4[0][0] __________________________________________________________________________________________________ layer_normalization_13 (LayerNo (None, 4, 16, 96) 192 add_15[0][0] __________________________________________________________________________________________________ dense_12 (Dense) (None, 4, 16, 192) 18624 layer_normalization_13[0][0] __________________________________________________________________________________________________ dropout_12 (Dropout) (None, 4, 16, 192) 0 dense_12[0][0] __________________________________________________________________________________________________ dense_13 (Dense) (None, 4, 16, 96) 18528 dropout_12[0][0] __________________________________________________________________________________________________ dropout_13 (Dropout) (None, 4, 16, 96) 0 dense_13[0][0] 
__________________________________________________________________________________________________ add_16 (Add) (None, 4, 16, 96) 0 dropout_13[0][0] add_15[0][0] __________________________________________________________________________________________________ layer_normalization_14 (LayerNo (None, 4, 16, 96) 192 add_16[0][0] __________________________________________________________________________________________________ multi_head_attention_7 (MultiHe (None, 4, 16, 96) 74400 layer_normalization_14[0][0] layer_normalization_14[0][0] __________________________________________________________________________________________________ add_17 (Add) (None, 4, 16, 96) 0 multi_head_attention_7[0][0] add_16[0][0] __________________________________________________________________________________________________ layer_normalization_15 (LayerNo (None, 4, 16, 96) 192 add_17[0][0] __________________________________________________________________________________________________ dense_14 (Dense) (None, 4, 16, 192) 18624 layer_normalization_15[0][0] __________________________________________________________________________________________________ dropout_14 (Dropout) (None, 4, 16, 192) 0 dense_14[0][0] __________________________________________________________________________________________________ dense_15 (Dense) (None, 4, 16, 96) 18528 dropout_14[0][0] __________________________________________________________________________________________________ dropout_15 (Dropout) (None, 4, 16, 96) 0 dense_15[0][0] __________________________________________________________________________________________________ add_18 (Add) (None, 4, 16, 96) 0 dropout_15[0][0] add_17[0][0] __________________________________________________________________________________________________ layer_normalization_16 (LayerNo (None, 4, 16, 96) 192 add_18[0][0] __________________________________________________________________________________________________ multi_head_attention_8 (MultiHe (None, 4, 16, 96) 74400 layer_normalization_16[0][0] layer_normalization_16[0][0] __________________________________________________________________________________________________ add_19 (Add) (None, 4, 16, 96) 0 multi_head_attention_8[0][0] add_18[0][0] __________________________________________________________________________________________________ layer_normalization_17 (LayerNo (None, 4, 16, 96) 192 add_19[0][0] __________________________________________________________________________________________________ dense_16 (Dense) (None, 4, 16, 192) 18624 layer_normalization_17[0][0] __________________________________________________________________________________________________ dropout_16 (Dropout) (None, 4, 16, 192) 0 dense_16[0][0] __________________________________________________________________________________________________ dense_17 (Dense) (None, 4, 16, 96) 18528 dropout_16[0][0] __________________________________________________________________________________________________ dropout_17 (Dropout) (None, 4, 16, 96) 0 dense_17[0][0] __________________________________________________________________________________________________ add_20 (Add) (None, 4, 16, 96) 0 dropout_17[0][0] add_19[0][0] __________________________________________________________________________________________________ reshape_5 (Reshape) (None, 8, 8, 96) 0 add_20[0][0] __________________________________________________________________________________________________ conv2d_25 (Conv2D) (None, 8, 8, 80) 7760 reshape_5[0][0] 
__________________________________________________________________________________________________ concatenate_2 (Concatenate) (None, 8, 8, 160) 0 batch_normalization_20[0][0] conv2d_25[0][0] __________________________________________________________________________________________________ conv2d_26 (Conv2D) (None, 8, 8, 96) 138336 concatenate_2[0][0] __________________________________________________________________________________________________ conv2d_27 (Conv2D) (None, 8, 8, 320) 31040 conv2d_26[0][0] __________________________________________________________________________________________________ global_average_pooling2d (Globa (None, 320) 0 conv2d_27[0][0] __________________________________________________________________________________________________ dense_18 (Dense) (None, 5) 1605 global_average_pooling2d[0][0] ================================================================================================== Total params: 1,307,621 Trainable params: 1,305,077 Non-trainable params: 2,544 __________________________________________________________________________________________________ Dataset preparation We will be using the tf_flowers dataset to demonstrate the model. Unlike other Transformer-based architectures, MobileViT uses a simple augmentation pipeline primarily because it has the properties of a CNN. batch_size = 64 auto = tf.data.AUTOTUNE resize_bigger = 280 num_classes = 5 def preprocess_dataset(is_training=True): def _pp(image, label): if is_training: # Resize to a bigger spatial resolution and take the random # crops. image = tf.image.resize(image, (resize_bigger, resize_bigger)) image = tf.image.random_crop(image, (image_size, image_size, 3)) image = tf.image.random_flip_left_right(image) else: image = tf.image.resize(image, (image_size, image_size)) label = tf.one_hot(label, depth=num_classes) return image, label return _pp def prepare_dataset(dataset, is_training=True): if is_training: dataset = dataset.shuffle(batch_size * 10) dataset = dataset.map(preprocess_dataset(is_training), num_parallel_calls=auto) return dataset.batch(batch_size).prefetch(auto) The authors use a multi-scale data sampler to help the model learn representations of varied scales. In this example, we discard this part. 
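For readers curious about what the discarded part would involve, here is a rough sketch of one way to approximate multi-scale training; the candidate sizes and the helper name random_scale_batch are illustrative assumptions, not the authors' actual sampler.

# Hypothetical multi-scale helper: resize each (already batched) set of images to a
# randomly chosen spatial resolution so the model sees several scales during training.
# (The model would also need to accept variable input resolutions for this to work.)
candidate_sizes = [160, 192, 224, 256]  # illustrative choices

def random_scale_batch(images, labels):
    idx = tf.random.uniform((), maxval=len(candidate_sizes), dtype=tf.int32)
    size = tf.gather(candidate_sizes, idx)
    return tf.image.resize(images, tf.stack([size, size])), labels

# Sketch of usage: map over the batched training pipeline.
# multi_scale_train_dataset = train_dataset.map(random_scale_batch, num_parallel_calls=auto)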
Load and prepare the dataset train_dataset, val_dataset = tfds.load( \"tf_flowers\", split=[\"train[:90%]\", \"train[90%:]\"], as_supervised=True ) num_train = train_dataset.cardinality() num_val = val_dataset.cardinality() print(f\"Number of training examples: {num_train}\") print(f\"Number of validation examples: {num_val}\") train_dataset = prepare_dataset(train_dataset, is_training=True) val_dataset = prepare_dataset(val_dataset, is_training=False) Number of training examples: 3303 Number of validation examples: 367 Train a MobileViT (XXS) model learning_rate = 0.002 label_smoothing_factor = 0.1 epochs = 30 optimizer = keras.optimizers.Adam(learning_rate=learning_rate) loss_fn = keras.losses.CategoricalCrossentropy(label_smoothing=label_smoothing_factor) def run_experiment(epochs=epochs): mobilevit_xxs = create_mobilevit(num_classes=num_classes) mobilevit_xxs.compile(optimizer=optimizer, loss=loss_fn, metrics=[\"accuracy\"]) checkpoint_filepath = \"/tmp/checkpoint\" checkpoint_callback = keras.callbacks.ModelCheckpoint( checkpoint_filepath, monitor=\"val_accuracy\", save_best_only=True, save_weights_only=True, ) mobilevit_xxs.fit( train_dataset, validation_data=val_dataset, epochs=epochs, callbacks=[checkpoint_callback], ) mobilevit_xxs.load_weights(checkpoint_filepath) _, accuracy = mobilevit_xxs.evaluate(val_dataset) print(f\"Validation accuracy: {round(accuracy * 100, 2)}%\") return mobilevit_xxs mobilevit_xxs = run_experiment() Epoch 1/30 52/52 [==============================] - 47s 459ms/step - loss: 1.3397 - accuracy: 0.4832 - val_loss: 1.7250 - val_accuracy: 0.1662 Epoch 2/30 52/52 [==============================] - 21s 404ms/step - loss: 1.1167 - accuracy: 0.6210 - val_loss: 1.9844 - val_accuracy: 0.1907 Epoch 3/30 52/52 [==============================] - 21s 403ms/step - loss: 1.0217 - accuracy: 0.6709 - val_loss: 1.8187 - val_accuracy: 0.1907 Epoch 4/30 52/52 [==============================] - 21s 409ms/step - loss: 0.9682 - accuracy: 0.7048 - val_loss: 2.0329 - val_accuracy: 0.1907 Epoch 5/30 52/52 [==============================] - 21s 408ms/step - loss: 0.9552 - accuracy: 0.7196 - val_loss: 2.1150 - val_accuracy: 0.1907 Epoch 6/30 52/52 [==============================] - 21s 407ms/step - loss: 0.9186 - accuracy: 0.7318 - val_loss: 2.9713 - val_accuracy: 0.1907 Epoch 7/30 52/52 [==============================] - 21s 407ms/step - loss: 0.8986 - accuracy: 0.7457 - val_loss: 3.2062 - val_accuracy: 0.1907 Epoch 8/30 52/52 [==============================] - 21s 408ms/step - loss: 0.8831 - accuracy: 0.7542 - val_loss: 3.8631 - val_accuracy: 0.1907 Epoch 9/30 52/52 [==============================] - 21s 408ms/step - loss: 0.8433 - accuracy: 0.7714 - val_loss: 1.8029 - val_accuracy: 0.3542 Epoch 10/30 52/52 [==============================] - 21s 408ms/step - loss: 0.8489 - accuracy: 0.7763 - val_loss: 1.7920 - val_accuracy: 0.4796 Epoch 11/30 52/52 [==============================] - 21s 409ms/step - loss: 0.8256 - accuracy: 0.7884 - val_loss: 1.4992 - val_accuracy: 0.5477 Epoch 12/30 52/52 [==============================] - 21s 407ms/step - loss: 0.7859 - accuracy: 0.8123 - val_loss: 0.9236 - val_accuracy: 0.7330 Epoch 13/30 52/52 [==============================] - 21s 409ms/step - loss: 0.7702 - accuracy: 0.8159 - val_loss: 0.8059 - val_accuracy: 0.8011 Epoch 14/30 52/52 [==============================] - 21s 403ms/step - loss: 0.7670 - accuracy: 0.8153 - val_loss: 1.1535 - val_accuracy: 0.7084 Epoch 15/30 52/52 [==============================] - 21s 408ms/step - loss: 0.7332 - 
accuracy: 0.8344 - val_loss: 0.7746 - val_accuracy: 0.8147 Epoch 16/30 52/52 [==============================] - 21s 404ms/step - loss: 0.7284 - accuracy: 0.8335 - val_loss: 1.0342 - val_accuracy: 0.7330 Epoch 17/30 52/52 [==============================] - 21s 409ms/step - loss: 0.7484 - accuracy: 0.8262 - val_loss: 1.0523 - val_accuracy: 0.7112 Epoch 18/30 52/52 [==============================] - 21s 408ms/step - loss: 0.7209 - accuracy: 0.8450 - val_loss: 0.8146 - val_accuracy: 0.8174 Epoch 19/30 52/52 [==============================] - 21s 409ms/step - loss: 0.7141 - accuracy: 0.8435 - val_loss: 0.8016 - val_accuracy: 0.7875 Epoch 20/30 52/52 [==============================] - 21s 410ms/step - loss: 0.7075 - accuracy: 0.8435 - val_loss: 0.9352 - val_accuracy: 0.7439 Epoch 21/30 52/52 [==============================] - 21s 406ms/step - loss: 0.7066 - accuracy: 0.8504 - val_loss: 1.0171 - val_accuracy: 0.7139 Epoch 22/30 52/52 [==============================] - 21s 405ms/step - loss: 0.6913 - accuracy: 0.8532 - val_loss: 0.7059 - val_accuracy: 0.8610 Epoch 23/30 52/52 [==============================] - 21s 408ms/step - loss: 0.6681 - accuracy: 0.8671 - val_loss: 0.8007 - val_accuracy: 0.8147 Epoch 24/30 52/52 [==============================] - 21s 409ms/step - loss: 0.6636 - accuracy: 0.8747 - val_loss: 0.9490 - val_accuracy: 0.7302 Epoch 25/30 52/52 [==============================] - 21s 408ms/step - loss: 0.6637 - accuracy: 0.8722 - val_loss: 0.6913 - val_accuracy: 0.8556 Epoch 26/30 52/52 [==============================] - 21s 406ms/step - loss: 0.6443 - accuracy: 0.8837 - val_loss: 1.0483 - val_accuracy: 0.7139 Epoch 27/30 52/52 [==============================] - 21s 407ms/step - loss: 0.6555 - accuracy: 0.8695 - val_loss: 0.9448 - val_accuracy: 0.7602 Epoch 28/30 52/52 [==============================] - 21s 409ms/step - loss: 0.6409 - accuracy: 0.8807 - val_loss: 0.9337 - val_accuracy: 0.7302 Epoch 29/30 52/52 [==============================] - 21s 408ms/step - loss: 0.6300 - accuracy: 0.8910 - val_loss: 0.7461 - val_accuracy: 0.8256 Epoch 30/30 52/52 [==============================] - 21s 408ms/step - loss: 0.6093 - accuracy: 0.8968 - val_loss: 0.8651 - val_accuracy: 0.7766 6/6 [==============================] - 0s 65ms/step - loss: 0.7059 - accuracy: 0.8610 Validation accuracy: 86.1% Results and TFLite conversion With about one million parameters, getting to ~85% top-1 accuracy on 256x256 resolution is a strong result. This MobileViT mobile is fully compatible with TensorFlow Lite (TFLite) and can be converted with the following code: # Serialize the model as a SavedModel. mobilevit_xxs.save(\"mobilevit_xxs\") # Convert to TFLite. This form of quantization is called # post-training dynamic-range quantization in TFLite. converter = tf.lite.TFLiteConverter.from_saved_model(\"mobilevit_xxs\") converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.target_spec.supported_ops = [ tf.lite.OpsSet.TFLITE_BUILTINS, # Enable TensorFlow Lite ops. tf.lite.OpsSet.SELECT_TF_OPS, # Enable TensorFlow ops. ] tflite_model = converter.convert() open(\"mobilevit_xxs.tflite\", \"wb\").write(tflite_model) To learn more about different quantization recipes available in TFLite and running inference with TFLite models, check out this official resource. How to obtain integrated gradients for a classification model. Integrated Gradients Integrated Gradients is a technique for attributing a classification model's prediction to its input features. 
It is a model interpretability technique: you can use it to visualize the relationship between input features and model predictions. Integrated Gradients is a variation on computing the gradient of the prediction output with regard to features of the input. To compute integrated gradients, we need to perform the following steps: Identify the input and the output. In our case, the input is an image and the output is the last layer of our model (dense layer with softmax activation). Compute which features are important to a neural network when making a prediction on a particular data point. To identify these features, we need to choose a baseline input. A baseline input can be a black image (all pixel values set to zero) or random noise. The shape of the baseline input needs to be the same as our input image, e.g. (299, 299, 3). Interpolate the baseline for a given number of steps. The number of steps represents the steps we need in the gradient approximation for a given input image. The number of steps is a hyperparameter. The authors recommend using anywhere between 20 and 1000 steps. Preprocess these interpolated images and do a forward pass. Get the gradients for these interpolated images. Approximate the gradients integral using the trapezoidal rule. To read in-depth about integrated gradients and why this method works, consider reading this excellent article. References: Integrated Gradients original paper Original implementation Setup import numpy as np import matplotlib.pyplot as plt from scipy import ndimage from IPython.display import Image import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.applications import xception # Size of the input image img_size = (299, 299, 3) # Load Xception model with imagenet weights model = xception.Xception(weights=\"imagenet\") # The local path to our target image img_path = keras.utils.get_file(\"elephant.jpg\", \"https://i.imgur.com/Bvro0YD.png\") display(Image(img_path)) Downloading data from https://i.imgur.com/Bvro0YD.png 4218880/4217496 [==============================] - 0s 0us/step jpeg Integrated Gradients algorithm def get_img_array(img_path, size=(299, 299)): # `img` is a PIL image of size 299x299 img = keras.preprocessing.image.load_img(img_path, target_size=size) # `array` is a float32 Numpy array of shape (299, 299, 3) array = keras.preprocessing.image.img_to_array(img) # We add a dimension to transform our array into a \"batch\" # of size (1, 299, 299, 3) array = np.expand_dims(array, axis=0) return array def get_gradients(img_input, top_pred_idx): \"\"\"Computes the gradients of outputs w.r.t input image. Args: img_input: 4D image tensor top_pred_idx: Predicted label for the input image Returns: Gradients of the predictions w.r.t img_input \"\"\" images = tf.cast(img_input, tf.float32) with tf.GradientTape() as tape: tape.watch(images) preds = model(images) top_class = preds[:, top_pred_idx] grads = tape.gradient(top_class, images) return grads def get_integrated_gradients(img_input, top_pred_idx, baseline=None, num_steps=50): \"\"\"Computes Integrated Gradients for a predicted label. Args: img_input (ndarray): Original image top_pred_idx: Predicted label for the input image baseline (ndarray): The baseline image to start with for interpolation num_steps: Number of interpolation steps between the baseline and the input used in the computation of integrated gradients. These steps along determine the integral approximation error. By default, num_steps is set to 50. 
Returns: Integrated gradients w.r.t input image \"\"\" # If baseline is not provided, start with a black image # having same size as the input image. if baseline is None: baseline = np.zeros(img_size).astype(np.float32) else: baseline = baseline.astype(np.float32) # 1. Do interpolation. img_input = img_input.astype(np.float32) interpolated_image = [ baseline + (step / num_steps) * (img_input - baseline) for step in range(num_steps + 1) ] interpolated_image = np.array(interpolated_image).astype(np.float32) # 2. Preprocess the interpolated images interpolated_image = xception.preprocess_input(interpolated_image) # 3. Get the gradients grads = [] for i, img in enumerate(interpolated_image): img = tf.expand_dims(img, axis=0) grad = get_gradients(img, top_pred_idx=top_pred_idx) grads.append(grad[0]) grads = tf.convert_to_tensor(grads, dtype=tf.float32) # 4. Approximate the integral using the trapezoidal rule grads = (grads[:-1] + grads[1:]) / 2.0 avg_grads = tf.reduce_mean(grads, axis=0) # 5. Calculate integrated gradients and return integrated_grads = (img_input - baseline) * avg_grads return integrated_grads def random_baseline_integrated_gradients( img_input, top_pred_idx, num_steps=50, num_runs=2 ): \"\"\"Generates a number of random baseline images. Args: img_input (ndarray): 3D image top_pred_idx: Predicted label for the input image num_steps: Number of interpolation steps between the baseline and the input used in the computation of integrated gradients. These steps along determine the integral approximation error. By default, num_steps is set to 50. num_runs: number of baseline images to generate Returns: Averaged integrated gradients for `num_runs` baseline images \"\"\" # 1. List to keep track of Integrated Gradients (IG) for all the images integrated_grads = [] # 2. Get the integrated gradients for all the baselines for run in range(num_runs): baseline = np.random.random(img_size) * 255 igrads = get_integrated_gradients( img_input=img_input, top_pred_idx=top_pred_idx, baseline=baseline, num_steps=num_steps, ) integrated_grads.append(igrads) # 3. Return the average integrated gradients for the image integrated_grads = tf.convert_to_tensor(integrated_grads) return tf.reduce_mean(integrated_grads, axis=0) Helper class for visualizing gradients and integrated gradients class GradVisualizer: \"\"\"Plot gradients of the outputs w.r.t an input image.\"\"\" def __init__(self, positive_channel=None, negative_channel=None): if positive_channel is None: self.positive_channel = [0, 255, 0] else: self.positive_channel = positive_channel if negative_channel is None: self.negative_channel = [255, 0, 0] else: self.negative_channel = negative_channel def apply_polarity(self, attributions, polarity): if polarity == \"positive\": return np.clip(attributions, 0, 1) else: return np.clip(attributions, -1, 0) def apply_linear_transformation( self, attributions, clip_above_percentile=99.9, clip_below_percentile=70.0, lower_end=0.2, ): # 1. Get the thresholds m = self.get_thresholded_attributions( attributions, percentage=100 - clip_above_percentile ) e = self.get_thresholded_attributions( attributions, percentage=100 - clip_below_percentile ) # 2. Transform the attributions by a linear function f(x) = a*x + b such that # f(m) = 1.0 and f(e) = lower_end transformed_attributions = (1 - lower_end) * (np.abs(attributions) - e) / ( m - e ) + lower_end # 3. Make sure that the sign of transformed attributions is the same as original attributions transformed_attributions *= np.sign(attributions) # 4. 
Only keep values that are bigger than the lower_end transformed_attributions *= transformed_attributions >= lower_end # 5. Clip values and return transformed_attributions = np.clip(transformed_attributions, 0.0, 1.0) return transformed_attributions def get_thresholded_attributions(self, attributions, percentage): if percentage == 100.0: return np.min(attributions) # 1. Flatten the attributions flatten_attr = attributions.flatten() # 2. Get the sum of the attributions total = np.sum(flatten_attr) # 3. Sort the attributions from largest to smallest. sorted_attributions = np.sort(np.abs(flatten_attr))[::-1] # 4. Calculate the percentage of the total sum that each attribution # and the values about it contribute. cum_sum = 100.0 * np.cumsum(sorted_attributions) / total # 5. Threshold the attributions by the percentage indices_to_consider = np.where(cum_sum >= percentage)[0][0] # 6. Select the desired attributions and return attributions = sorted_attributions[indices_to_consider] return attributions def binarize(self, attributions, threshold=0.001): return attributions > threshold def morphological_cleanup_fn(self, attributions, structure=np.ones((4, 4))): closed = ndimage.grey_closing(attributions, structure=structure) opened = ndimage.grey_opening(closed, structure=structure) return opened def draw_outlines( self, attributions, percentage=90, connected_component_structure=np.ones((3, 3)) ): # 1. Binarize the attributions. attributions = self.binarize(attributions) # 2. Fill the gaps attributions = ndimage.binary_fill_holes(attributions) # 3. Compute connected components connected_components, num_comp = ndimage.measurements.label( attributions, structure=connected_component_structure ) # 4. Sum up the attributions for each component total = np.sum(attributions[connected_components > 0]) component_sums = [] for comp in range(1, num_comp + 1): mask = connected_components == comp component_sum = np.sum(attributions[mask]) component_sums.append((component_sum, mask)) # 5. Compute the percentage of top components to keep sorted_sums_and_masks = sorted(component_sums, key=lambda x: x[0], reverse=True) sorted_sums = list(zip(*sorted_sums_and_masks))[0] cumulative_sorted_sums = np.cumsum(sorted_sums) cutoff_threshold = percentage * total / 100 cutoff_idx = np.where(cumulative_sorted_sums >= cutoff_threshold)[0][0] if cutoff_idx > 2: cutoff_idx = 2 # 6. Set the values for the kept components border_mask = np.zeros_like(attributions) for i in range(cutoff_idx + 1): border_mask[sorted_sums_and_masks[i][1]] = 1 # 7. Make the mask hollow and show only the border eroded_mask = ndimage.binary_erosion(border_mask, iterations=1) border_mask[eroded_mask] = 0 # 8. Return the outlined mask return border_mask def process_grads( self, image, attributions, polarity=\"positive\", clip_above_percentile=99.9, clip_below_percentile=0, morphological_cleanup=False, structure=np.ones((3, 3)), outlines=False, outlines_component_percentage=90, overlay=True, ): if polarity not in [\"positive\", \"negative\"]: raise ValueError( f\"\"\" Allowed polarity values: 'positive' or 'negative' but provided {polarity}\"\"\" ) if clip_above_percentile < 0 or clip_above_percentile > 100: raise ValueError(\"clip_above_percentile must be in [0, 100]\") if clip_below_percentile < 0 or clip_below_percentile > 100: raise ValueError(\"clip_below_percentile must be in [0, 100]\") # 1. 
Apply polarity if polarity == \"positive\": attributions = self.apply_polarity(attributions, polarity=polarity) channel = self.positive_channel else: attributions = self.apply_polarity(attributions, polarity=polarity) attributions = np.abs(attributions) channel = self.negative_channel # 2. Take average over the channels attributions = np.average(attributions, axis=2) # 3. Apply linear transformation to the attributions attributions = self.apply_linear_transformation( attributions, clip_above_percentile=clip_above_percentile, clip_below_percentile=clip_below_percentile, lower_end=0.0, ) # 4. Cleanup if morphological_cleanup: attributions = self.morphological_cleanup_fn( attributions, structure=structure ) # 5. Draw the outlines if outlines: attributions = self.draw_outlines( attributions, percentage=outlines_component_percentage ) # 6. Expand the channel axis and convert to RGB attributions = np.expand_dims(attributions, 2) * channel # 7.Superimpose on the original image if overlay: attributions = np.clip((attributions * 0.8 + image), 0, 255) return attributions def visualize( self, image, gradients, integrated_gradients, polarity=\"positive\", clip_above_percentile=99.9, clip_below_percentile=0, morphological_cleanup=False, structure=np.ones((3, 3)), outlines=False, outlines_component_percentage=90, overlay=True, figsize=(15, 8), ): # 1. Make two copies of the original image img1 = np.copy(image) img2 = np.copy(image) # 2. Process the normal gradients grads_attr = self.process_grads( image=img1, attributions=gradients, polarity=polarity, clip_above_percentile=clip_above_percentile, clip_below_percentile=clip_below_percentile, morphological_cleanup=morphological_cleanup, structure=structure, outlines=outlines, outlines_component_percentage=outlines_component_percentage, overlay=overlay, ) # 3. Process the integrated gradients igrads_attr = self.process_grads( image=img2, attributions=integrated_gradients, polarity=polarity, clip_above_percentile=clip_above_percentile, clip_below_percentile=clip_below_percentile, morphological_cleanup=morphological_cleanup, structure=structure, outlines=outlines, outlines_component_percentage=outlines_component_percentage, overlay=overlay, ) _, ax = plt.subplots(1, 3, figsize=figsize) ax[0].imshow(image) ax[1].imshow(grads_attr.astype(np.uint8)) ax[2].imshow(igrads_attr.astype(np.uint8)) ax[0].set_title(\"Input\") ax[1].set_title(\"Normal gradients\") ax[2].set_title(\"Integrated gradients\") plt.show() Let's test-drive it # 1. Convert the image to numpy array img = get_img_array(img_path) # 2. Keep a copy of the original image orig_img = np.copy(img[0]).astype(np.uint8) # 3. Preprocess the image img_processed = tf.cast(xception.preprocess_input(img), dtype=tf.float32) # 4. Get model predictions preds = model.predict(img_processed) top_pred_idx = tf.argmax(preds[0]) print(\"Predicted:\", top_pred_idx, xception.decode_predictions(preds, top=1)[0]) # 5. Get the gradients of the last layer for the predicted label grads = get_gradients(img_processed, top_pred_idx=top_pred_idx) # 6. Get the integrated gradients igrads = random_baseline_integrated_gradients( np.copy(orig_img), top_pred_idx=top_pred_idx, num_steps=50, num_runs=2 ) # 7. 
Process the gradients and plot vis = GradVisualizer() vis.visualize( image=orig_img, gradients=grads[0].numpy(), integrated_gradients=igrads.numpy(), clip_above_percentile=99, clip_below_percentile=0, ) vis.visualize( image=orig_img, gradients=grads[0].numpy(), integrated_gradients=igrads.numpy(), clip_above_percentile=95, clip_below_percentile=28, morphological_cleanup=True, outlines=True, ) Predicted: tf.Tensor(386, shape=(), dtype=int64) [('n02504458', 'African_elephant', 0.8871446)] Implement a depth estimation model with a convnet. Introduction Depth estimation is a crucial step towards inferring scene geometry from 2D images. The goal in monocular depth estimation is to predict the depth value of each pixel, or to infer depth information, given only a single RGB image as input. This example will show an approach to build a depth estimation model with a convnet and simple loss functions. Setup import os import sys import tensorflow as tf from tensorflow.keras import layers import pandas as pd import numpy as np import cv2 import matplotlib.pyplot as plt tf.random.set_seed(123) Downloading the dataset We will be using the dataset DIODE: A Dense Indoor and Outdoor Depth Dataset for this tutorial. However, we use the validation set to generate training and evaluation subsets for our model. The reason we use the validation set rather than the training set of the original dataset is that the training set consists of 81GB of data, which is challenging to download compared to the validation set which is only 2.6GB. Other datasets that you could use are NYU-v2 and KITTI. annotation_folder = \"/dataset/\" if not os.path.exists(os.path.abspath(\".\") + annotation_folder): annotation_zip = tf.keras.utils.get_file( \"val.tar.gz\", cache_subdir=os.path.abspath(\".\"), origin=\"http://diode-dataset.s3.amazonaws.com/val.tar.gz\", extract=True, ) Downloading data from http://diode-dataset.s3.amazonaws.com/val.tar.gz 2774630400/2774625282 [==============================] - 90s 0us/step 2774638592/2774625282 [==============================] - 90s 0us/step Preparing the dataset We only use the indoor images to train our depth estimation model. path = \"val/indoors\" filelist = [] for root, dirs, files in os.walk(path): for file in files: filelist.append(os.path.join(root, file)) filelist.sort() data = { \"image\": [x for x in filelist if x.endswith(\".png\")], \"depth\": [x for x in filelist if x.endswith(\"_depth.npy\")], \"mask\": [x for x in filelist if x.endswith(\"_depth_mask.npy\")], } df = pd.DataFrame(data) df = df.sample(frac=1, random_state=42) Preparing hyperparameters HEIGHT = 256 WIDTH = 256 LR = 0.0002 EPOCHS = 30 BATCH_SIZE = 32 Building a data pipeline The pipeline takes a dataframe containing the paths for the RGB images, as well as the depth and depth mask files. It reads and resizes the RGB images. It reads the depth and depth mask files, processes them to generate the depth map image, and resizes it. It returns the RGB images and the depth map images for a batch.
class DataGenerator(tf.keras.utils.Sequence): def __init__(self, data, batch_size=6, dim=(768, 1024), n_channels=3, shuffle=True): \"\"\" Initialization \"\"\" self.data = data self.indices = self.data.index.tolist() self.dim = dim self.n_channels = n_channels self.batch_size = batch_size self.shuffle = shuffle self.min_depth = 0.1 self.on_epoch_end() def __len__(self): return int(np.ceil(len(self.data) / self.batch_size)) def __getitem__(self, index): if (index + 1) * self.batch_size > len(self.indices): self.batch_size = len(self.indices) - index * self.batch_size # Generate one batch of data # Generate indices of the batch index = self.indices[index * self.batch_size : (index + 1) * self.batch_size] # Find list of IDs batch = [self.indices[k] for k in index] x, y = self.data_generation(batch) return x, y def on_epoch_end(self): \"\"\" Updates indexes after each epoch \"\"\" self.index = np.arange(len(self.indices)) if self.shuffle == True: np.random.shuffle(self.index) def load(self, image_path, depth_map, mask): \"\"\"Load input and target image.\"\"\" image_ = cv2.imread(image_path) image_ = cv2.cvtColor(image_, cv2.COLOR_BGR2RGB) image_ = cv2.resize(image_, self.dim) image_ = tf.image.convert_image_dtype(image_, tf.float32) depth_map = np.load(depth_map).squeeze() mask = np.load(mask) mask = mask > 0 max_depth = min(300, np.percentile(depth_map, 99)) depth_map = np.clip(depth_map, self.min_depth, max_depth) depth_map = np.log(depth_map, where=mask) depth_map = np.ma.masked_where(~mask, depth_map) depth_map = np.clip(depth_map, 0.1, np.log(max_depth)) depth_map = cv2.resize(depth_map, self.dim) depth_map = np.expand_dims(depth_map, axis=2) depth_map = tf.image.convert_image_dtype(depth_map, tf.float32) return image_, depth_map def data_generation(self, batch): x = np.empty((self.batch_size, *self.dim, self.n_channels)) y = np.empty((self.batch_size, *self.dim, 1)) for i, batch_id in enumerate(batch): x[i,], y[i,] = self.load( self.data[\"image\"][batch_id], self.data[\"depth\"][batch_id], self.data[\"mask\"][batch_id], ) return x, y Visualizing samples def visualize_depth_map(samples, test=False, model=None): input, target = samples cmap = plt.cm.jet cmap.set_bad(color=\"black\") if test: pred = model.predict(input) fig, ax = plt.subplots(6, 3, figsize=(50, 50)) for i in range(6): ax[i, 0].imshow((input[i].squeeze())) ax[i, 1].imshow((target[i].squeeze()), cmap=cmap) ax[i, 2].imshow((pred[i].squeeze()), cmap=cmap) else: fig, ax = plt.subplots(6, 2, figsize=(50, 50)) for i in range(6): ax[i, 0].imshow((input[i].squeeze())) ax[i, 1].imshow((target[i].squeeze()), cmap=cmap) visualize_samples = next( iter(DataGenerator(data=df, batch_size=6, dim=(HEIGHT, WIDTH))) ) visualize_depth_map(visualize_samples) png 3D point cloud visualization depth_vis = np.flipud(visualize_samples[1][1].squeeze()) # target img_vis = np.flipud(visualize_samples[0][1].squeeze()) # input fig = plt.figure(figsize=(15, 10)) ax = plt.axes(projection=\"3d\") STEP = 3 for x in range(0, img_vis.shape[0], STEP): for y in range(0, img_vis.shape[1], STEP): ax.scatter( [depth_vis[x, y]] * 3, [y] * 3, [x] * 3, c=tuple(img_vis[x, y, :3] / 255), s=3, ) ax.view_init(45, 135) png Building the model The basic model is from U-Net. Addditive skip-connections are implemented in the downscaling block. 
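To make the structure easier to follow, here is a rough shape walkthrough derived from the block definitions and the HEIGHT/WIDTH hyperparameters in this example (assuming 256x256 inputs):

# Encoder:    256x256x3 -> DownscaleBlocks (16, 32, 64, 128 filters) -> 16x16x128
# Bottleneck: BottleNeckBlock (256 filters)                          -> 16x16x256
# Decoder:    UpscaleBlocks (128, 64, 32, 16 filters) with skips     -> 256x256x16
# Head:       1x1 Conv2D with tanh activation                        -> 256x256x1 depth map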
class DownscaleBlock(layers.Layer): def __init__( self, filters, kernel_size=(3, 3), padding=\"same\", strides=1, **kwargs ): super().__init__(**kwargs) self.convA = layers.Conv2D(filters, kernel_size, strides, padding) self.convB = layers.Conv2D(filters, kernel_size, strides, padding) self.reluA = layers.LeakyReLU(alpha=0.2) self.reluB = layers.LeakyReLU(alpha=0.2) self.bn2a = tf.keras.layers.BatchNormalization() self.bn2b = tf.keras.layers.BatchNormalization() self.pool = layers.MaxPool2D((2, 2), (2, 2)) def call(self, input_tensor): d = self.convA(input_tensor) x = self.bn2a(d) x = self.reluA(x) x = self.convB(x) x = self.bn2b(x) x = self.reluB(x) x += d p = self.pool(x) return x, p class UpscaleBlock(layers.Layer): def __init__( self, filters, kernel_size=(3, 3), padding=\"same\", strides=1, **kwargs ): super().__init__(**kwargs) self.us = layers.UpSampling2D((2, 2)) self.convA = layers.Conv2D(filters, kernel_size, strides, padding) self.convB = layers.Conv2D(filters, kernel_size, strides, padding) self.reluA = layers.LeakyReLU(alpha=0.2) self.reluB = layers.LeakyReLU(alpha=0.2) self.bn2a = tf.keras.layers.BatchNormalization() self.bn2b = tf.keras.layers.BatchNormalization() self.conc = layers.Concatenate() def call(self, x, skip): x = self.us(x) concat = self.conc([x, skip]) x = self.convA(concat) x = self.bn2a(x) x = self.reluA(x) x = self.convB(x) x = self.bn2b(x) x = self.reluB(x) return x class BottleNeckBlock(layers.Layer): def __init__( self, filters, kernel_size=(3, 3), padding=\"same\", strides=1, **kwargs ): super().__init__(**kwargs) self.convA = layers.Conv2D(filters, kernel_size, strides, padding) self.convB = layers.Conv2D(filters, kernel_size, strides, padding) self.reluA = layers.LeakyReLU(alpha=0.2) self.reluB = layers.LeakyReLU(alpha=0.2) def call(self, x): x = self.convA(x) x = self.reluA(x) x = self.convB(x) x = self.reluB(x) return x Defining the loss We will optimize 3 losses in our model: 1. Structural similarity index (SSIM). 2. L1-loss, or point-wise depth in our case. 3. Depth smoothness loss. Out of the three loss functions, SSIM contributes the most to improving model performance.
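For reference, here is a compact summary of the weighted combination that calculate_loss() below implements; the weights are taken from the model definition that follows, so this is only a restatement of that code, not an additional loss:

# loss = 0.85 * ssim_loss               # mean(1 - SSIM(target, pred))
#      + 0.10 * l1_loss                 # mean(|target - pred|)
#      + 0.90 * depth_smoothness_loss   # mean |image gradients of pred|, scaled by
#                                       # exp(mean |image gradients of target|)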
class DepthEstimationModel(tf.keras.Model): def __init__(self): super().__init__() self.ssim_loss_weight = 0.85 self.l1_loss_weight = 0.1 self.edge_loss_weight = 0.9 self.loss_metric = tf.keras.metrics.Mean(name=\"loss\") f = [16, 32, 64, 128, 256] self.downscale_blocks = [ DownscaleBlock(f[0]), DownscaleBlock(f[1]), DownscaleBlock(f[2]), DownscaleBlock(f[3]), ] self.bottle_neck_block = BottleNeckBlock(f[4]) self.upscale_blocks = [ UpscaleBlock(f[3]), UpscaleBlock(f[2]), UpscaleBlock(f[1]), UpscaleBlock(f[0]), ] self.conv_layer = layers.Conv2D(1, (1, 1), padding=\"same\", activation=\"tanh\") def calculate_loss(self, target, pred): # Edges dy_true, dx_true = tf.image.image_gradients(target) dy_pred, dx_pred = tf.image.image_gradients(pred) weights_x = tf.exp(tf.reduce_mean(tf.abs(dx_true))) weights_y = tf.exp(tf.reduce_mean(tf.abs(dy_true))) # Depth smoothness smoothness_x = dx_pred * weights_x smoothness_y = dy_pred * weights_y depth_smoothness_loss = tf.reduce_mean(abs(smoothness_x)) + tf.reduce_mean( abs(smoothness_y) ) # Structural similarity (SSIM) index ssim_loss = tf.reduce_mean( 1 - tf.image.ssim( target, pred, max_val=WIDTH, filter_size=7, k1=0.01 ** 2, k2=0.03 ** 2 ) ) # Point-wise depth l1_loss = tf.reduce_mean(tf.abs(target - pred)) loss = ( (self.ssim_loss_weight * ssim_loss) + (self.l1_loss_weight * l1_loss) + (self.edge_loss_weight * depth_smoothness_loss) ) return loss @property def metrics(self): return [self.loss_metric] def train_step(self, batch_data): input, target = batch_data with tf.GradientTape() as tape: pred = self(input, training=True) loss = self.calculate_loss(target, pred) gradients = tape.gradient(loss, self.trainable_variables) self.optimizer.apply_gradients(zip(gradients, self.trainable_variables)) self.loss_metric.update_state(loss) return { \"loss\": self.loss_metric.result(), } def test_step(self, batch_data): input, target = batch_data pred = self(input, training=False) loss = self.calculate_loss(target, pred) self.loss_metric.update_state(loss) return { \"loss\": self.loss_metric.result(), } def call(self, x): c1, p1 = self.downscale_blocks[0](x) c2, p2 = self.downscale_blocks[1](p1) c3, p3 = self.downscale_blocks[2](p2) c4, p4 = self.downscale_blocks[3](p3) bn = self.bottle_neck_block(p4) u1 = self.upscale_blocks[0](bn, c4) u2 = self.upscale_blocks[1](u1, c3) u3 = self.upscale_blocks[2](u2, c2) u4 = self.upscale_blocks[3](u3, c1) return self.conv_layer(u4) Model training optimizer = tf.keras.optimizers.Adam( learning_rate=LR, amsgrad=False, ) model = DepthEstimationModel() # Define the loss function cross_entropy = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True, reduction=\"none\" ) # Compile the model model.compile(optimizer, loss=cross_entropy) train_loader = DataGenerator( data=df[:260].reset_index(drop=\"true\"), batch_size=BATCH_SIZE, dim=(HEIGHT, WIDTH) ) validation_loader = DataGenerator( data=df[260:].reset_index(drop=\"true\"), batch_size=BATCH_SIZE, dim=(HEIGHT, WIDTH) ) model.fit( train_loader, epochs=EPOCHS, validation_data=validation_loader, ) Epoch 1/30 9/9 [==============================] - 18s 1s/step - loss: 1.1543 - val_loss: 1.4281 Epoch 2/30 9/9 [==============================] - 3s 390ms/step - loss: 0.8727 - val_loss: 1.0686 Epoch 3/30 9/9 [==============================] - 4s 428ms/step - loss: 0.6659 - val_loss: 0.7884 Epoch 4/30 9/9 [==============================] - 3s 334ms/step - loss: 0.6462 - val_loss: 0.6198 Epoch 5/30 9/9 [==============================] - 3s 355ms/step - loss: 0.5689 - val_loss: 
0.6207 Epoch 6/30 9/9 [==============================] - 3s 361ms/step - loss: 0.5067 - val_loss: 0.4876 Epoch 7/30 9/9 [==============================] - 3s 357ms/step - loss: 0.4680 - val_loss: 0.4698 Epoch 8/30 9/9 [==============================] - 3s 325ms/step - loss: 0.4622 - val_loss: 0.7249 Epoch 9/30 9/9 [==============================] - 3s 393ms/step - loss: 0.4215 - val_loss: 0.3826 Epoch 10/30 9/9 [==============================] - 3s 337ms/step - loss: 0.3788 - val_loss: 0.3289 Epoch 11/30 9/9 [==============================] - 3s 345ms/step - loss: 0.3347 - val_loss: 0.3032 Epoch 12/30 9/9 [==============================] - 3s 327ms/step - loss: 0.3488 - val_loss: 0.2631 Epoch 13/30 9/9 [==============================] - 3s 326ms/step - loss: 0.3315 - val_loss: 0.2383 Epoch 14/30 9/9 [==============================] - 3s 331ms/step - loss: 0.3349 - val_loss: 0.2379 Epoch 15/30 9/9 [==============================] - 3s 333ms/step - loss: 0.3394 - val_loss: 0.2151 Epoch 16/30 9/9 [==============================] - 3s 337ms/step - loss: 0.3073 - val_loss: 0.2243 Epoch 17/30 9/9 [==============================] - 3s 355ms/step - loss: 0.3951 - val_loss: 0.2627 Epoch 18/30 9/9 [==============================] - 3s 335ms/step - loss: 0.3657 - val_loss: 0.2175 Epoch 19/30 9/9 [==============================] - 3s 321ms/step - loss: 0.3404 - val_loss: 0.2073 Epoch 20/30 9/9 [==============================] - 3s 320ms/step - loss: 0.3549 - val_loss: 0.1972 Epoch 21/30 9/9 [==============================] - 3s 317ms/step - loss: 0.2802 - val_loss: 0.1936 Epoch 22/30 9/9 [==============================] - 3s 316ms/step - loss: 0.2632 - val_loss: 0.1893 Epoch 23/30 9/9 [==============================] - 3s 318ms/step - loss: 0.2862 - val_loss: 0.1807 Epoch 24/30 9/9 [==============================] - 3s 328ms/step - loss: 0.3083 - val_loss: 0.1923 Epoch 25/30 9/9 [==============================] - 3s 312ms/step - loss: 0.3666 - val_loss: 0.1795 Epoch 26/30 9/9 [==============================] - 3s 316ms/step - loss: 0.2928 - val_loss: 0.1753 Epoch 27/30 9/9 [==============================] - 3s 325ms/step - loss: 0.2945 - val_loss: 0.1790 Epoch 28/30 9/9 [==============================] - 3s 325ms/step - loss: 0.2642 - val_loss: 0.1775 Epoch 29/30 9/9 [==============================] - 3s 333ms/step - loss: 0.2546 - val_loss: 0.1810 Epoch 30/30 9/9 [==============================] - 3s 315ms/step - loss: 0.2650 - val_loss: 0.1795 Visualizing model output We visualize the model output over the validation set. The first image is the RGB image, the second image is the ground truth depth map image and the third one is the predicted depth map image. test_loader = next( iter( DataGenerator( data=df[265:].reset_index(drop=\"true\"), batch_size=6, dim=(HEIGHT, WIDTH) ) ) ) visualize_depth_map(test_loader, test=True, model=model) test_loader = next( iter( DataGenerator( data=df[300:].reset_index(drop=\"true\"), batch_size=6, dim=(HEIGHT, WIDTH) ) ) ) visualize_depth_map(test_loader, test=True, model=model) png png Possible improvements You can improve this model by replacing the encoding part of the U-Net with a pretrained DenseNet or ResNet. Loss functions play an important role in solving this problem. Tuning the loss functions may yield significant improvement. References The following papers go deeper into possible approaches for depth estimation. 1. Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos 2. 
Digging Into Self-Supervised Monocular Depth Estimation 3. Deeper Depth Prediction with Fully Convolutional Residual Networks You can also find helpful implementations in the Papers With Code depth estimation task. Implement DeepLabV3+ architecture for Multi-class Semantic Segmentation. Introduction Semantic segmentation, with the goal of assigning semantic labels to every pixel in an image, is an essential computer vision task. In this example, we implement the DeepLabV3+ model for multi-class semantic segmentation, a fully-convolutional architecture that performs well on semantic segmentation benchmarks. References: Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation Rethinking Atrous Convolution for Semantic Image Segmentation DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs Downloading the data We will use the Crowd Instance-level Human Parsing Dataset for training our model. The Crowd Instance-level Human Parsing (CIHP) dataset has 38,280 diverse human images. Each image in CIHP is labeled with pixel-wise annotations for 20 categories, as well as instance-level identification. This dataset can be used for the \"human part segmentation\" task. import os import cv2 import numpy as np from glob import glob from scipy.io import loadmat import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers !gdown https://drive.google.com/uc?id=1B9A9UCJYMwTL4oBEo4RZfbMZMaZhKJaz !unzip -q instance-level-human-parsing.zip Downloading... From: https://drive.google.com/uc?id=1B9A9UCJYMwTL4oBEo4RZfbMZMaZhKJaz To: /content/keras-io/scripts/tmp_4374681/instance-level-human-parsing.zip 2.91GB [00:36, 79.6MB/s] Creating a TensorFlow Dataset Training on the entire CIHP dataset with 38,280 images takes a lot of time, hence we will be using a smaller subset (1,000 images for training and 50 for validation) in this example.
IMAGE_SIZE = 512 BATCH_SIZE = 4 NUM_CLASSES = 20 DATA_DIR = \"./instance-level_human_parsing/instance-level_human_parsing/Training\" NUM_TRAIN_IMAGES = 1000 NUM_VAL_IMAGES = 50 train_images = sorted(glob(os.path.join(DATA_DIR, \"Images/*\")))[:NUM_TRAIN_IMAGES] train_masks = sorted(glob(os.path.join(DATA_DIR, \"Category_ids/*\")))[:NUM_TRAIN_IMAGES] val_images = sorted(glob(os.path.join(DATA_DIR, \"Images/*\")))[ NUM_TRAIN_IMAGES : NUM_VAL_IMAGES + NUM_TRAIN_IMAGES ] val_masks = sorted(glob(os.path.join(DATA_DIR, \"Category_ids/*\")))[ NUM_TRAIN_IMAGES : NUM_VAL_IMAGES + NUM_TRAIN_IMAGES ] def read_image(image_path, mask=False): image = tf.io.read_file(image_path) if mask: image = tf.image.decode_png(image, channels=1) image.set_shape([None, None, 1]) image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE]) else: image = tf.image.decode_png(image, channels=3) image.set_shape([None, None, 3]) image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE]) image = image / 127.5 - 1 return image def load_data(image_list, mask_list): image = read_image(image_list) mask = read_image(mask_list, mask=True) return image, mask def data_generator(image_list, mask_list): dataset = tf.data.Dataset.from_tensor_slices((image_list, mask_list)) dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) return dataset train_dataset = data_generator(train_images, train_masks) val_dataset = data_generator(val_images, val_masks) print(\"Train Dataset:\", train_dataset) print(\"Val Dataset:\", val_dataset) Train Dataset: Val Dataset: Building the DeepLabV3+ model DeepLabv3+ extends DeepLabv3 by adding an encoder-decoder structure. The encoder module processes multiscale contextual information by applying dilated convolution at multiple scales, while the decoder module refines the segmentation results along object boundaries. Dilated convolution: With dilated convolution, as we go deeper in the network, we can keep the stride constant but with larger field-of-view without increasing the number of parameters or the amount of computation. Besides, it enables larger output feature maps, which is useful for semantic segmentation. The reason for using Dilated Spatial Pyramid Pooling is that it was shown that as the sampling rate becomes larger, the number of valid filter weights (i.e., weights that are applied to the valid feature region, instead of padded zeros) becomes smaller. 
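To make the parameter/field-of-view trade-off concrete, here is a small illustrative check (not part of the original example): a 3x3 convolution has the same number of parameters at any dilation rate, while its receptive field grows from 3x3 at dilation_rate=1 to 13x13 at dilation_rate=6.

# Illustrative only: dilation enlarges the field-of-view without adding parameters.
dummy = tf.random.normal((1, 64, 64, 256))
conv_standard = layers.Conv2D(256, 3, padding="same", dilation_rate=1)
conv_dilated = layers.Conv2D(256, 3, padding="same", dilation_rate=6)
print(conv_standard(dummy).shape, conv_dilated(dummy).shape)  # same output shape
print(conv_standard.count_params() == conv_dilated.count_params())  # True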
def convolution_block( block_input, num_filters=256, kernel_size=3, dilation_rate=1, padding=\"same\", use_bias=False, ): x = layers.Conv2D( num_filters, kernel_size=kernel_size, dilation_rate=dilation_rate, padding=\"same\", use_bias=use_bias, kernel_initializer=keras.initializers.HeNormal(), )(block_input) x = layers.BatchNormalization()(x) return tf.nn.relu(x) def DilatedSpatialPyramidPooling(dspp_input): dims = dspp_input.shape x = layers.AveragePooling2D(pool_size=(dims[-3], dims[-2]))(dspp_input) x = convolution_block(x, kernel_size=1, use_bias=True) out_pool = layers.UpSampling2D( size=(dims[-3] // x.shape[1], dims[-2] // x.shape[2]), interpolation=\"bilinear\", )(x) out_1 = convolution_block(dspp_input, kernel_size=1, dilation_rate=1) out_6 = convolution_block(dspp_input, kernel_size=3, dilation_rate=6) out_12 = convolution_block(dspp_input, kernel_size=3, dilation_rate=12) out_18 = convolution_block(dspp_input, kernel_size=3, dilation_rate=18) x = layers.Concatenate(axis=-1)([out_pool, out_1, out_6, out_12, out_18]) output = convolution_block(x, kernel_size=1) return output The encoder features are first bilinearly upsampled by a factor 4, and then concatenated with the corresponding low-level features from the network backbone that have the same spatial resolution. For this example, we use a ResNet50 pretrained on ImageNet as the backbone model, and we use the low-level features from the conv4_block6_2_relu block of the backbone. def DeeplabV3Plus(image_size, num_classes): model_input = keras.Input(shape=(image_size, image_size, 3)) resnet50 = keras.applications.ResNet50( weights=\"imagenet\", include_top=False, input_tensor=model_input ) x = resnet50.get_layer(\"conv4_block6_2_relu\").output x = DilatedSpatialPyramidPooling(x) input_a = layers.UpSampling2D( size=(image_size // 4 // x.shape[1], image_size // 4 // x.shape[2]), interpolation=\"bilinear\", )(x) input_b = resnet50.get_layer(\"conv2_block3_2_relu\").output input_b = convolution_block(input_b, num_filters=48, kernel_size=1) x = layers.Concatenate(axis=-1)([input_a, input_b]) x = convolution_block(x) x = convolution_block(x) x = layers.UpSampling2D( size=(image_size // x.shape[1], image_size // x.shape[2]), interpolation=\"bilinear\", )(x) model_output = layers.Conv2D(num_classes, kernel_size=(1, 1), padding=\"same\")(x) return keras.Model(inputs=model_input, outputs=model_output) model = DeeplabV3Plus(image_size=IMAGE_SIZE, num_classes=NUM_CLASSES) model.summary() Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5 94773248/94765736 [==============================] - 1s 0us/step 94781440/94765736 [==============================] - 1s 0us/step Model: \"model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 512, 512, 3) 0 __________________________________________________________________________________________________ conv1_pad (ZeroPadding2D) (None, 518, 518, 3) 0 input_1[0][0] __________________________________________________________________________________________________ conv1_conv (Conv2D) (None, 256, 256, 64) 9472 conv1_pad[0][0] __________________________________________________________________________________________________ conv1_bn (BatchNormalization) (None, 256, 256, 64) 256 conv1_conv[0][0] 
__________________________________________________________________________________________________ conv1_relu (Activation) (None, 256, 256, 64) 0 conv1_bn[0][0] __________________________________________________________________________________________________ pool1_pad (ZeroPadding2D) (None, 258, 258, 64) 0 conv1_relu[0][0] __________________________________________________________________________________________________ pool1_pool (MaxPooling2D) (None, 128, 128, 64) 0 pool1_pad[0][0] __________________________________________________________________________________________________ conv2_block1_1_conv (Conv2D) (None, 128, 128, 64) 4160 pool1_pool[0][0] __________________________________________________________________________________________________ conv2_block1_1_bn (BatchNormali (None, 128, 128, 64) 256 conv2_block1_1_conv[0][0] __________________________________________________________________________________________________ conv2_block1_1_relu (Activation (None, 128, 128, 64) 0 conv2_block1_1_bn[0][0] __________________________________________________________________________________________________ conv2_block1_2_conv (Conv2D) (None, 128, 128, 64) 36928 conv2_block1_1_relu[0][0] __________________________________________________________________________________________________ conv2_block1_2_bn (BatchNormali (None, 128, 128, 64) 256 conv2_block1_2_conv[0][0] __________________________________________________________________________________________________ conv2_block1_2_relu (Activation (None, 128, 128, 64) 0 conv2_block1_2_bn[0][0] __________________________________________________________________________________________________ conv2_block1_0_conv (Conv2D) (None, 128, 128, 256 16640 pool1_pool[0][0] __________________________________________________________________________________________________ conv2_block1_3_conv (Conv2D) (None, 128, 128, 256 16640 conv2_block1_2_relu[0][0] __________________________________________________________________________________________________ conv2_block1_0_bn (BatchNormali (None, 128, 128, 256 1024 conv2_block1_0_conv[0][0] __________________________________________________________________________________________________ conv2_block1_3_bn (BatchNormali (None, 128, 128, 256 1024 conv2_block1_3_conv[0][0] __________________________________________________________________________________________________ conv2_block1_add (Add) (None, 128, 128, 256 0 conv2_block1_0_bn[0][0] conv2_block1_3_bn[0][0] __________________________________________________________________________________________________ conv2_block1_out (Activation) (None, 128, 128, 256 0 conv2_block1_add[0][0] __________________________________________________________________________________________________ conv2_block2_1_conv (Conv2D) (None, 128, 128, 64) 16448 conv2_block1_out[0][0] __________________________________________________________________________________________________ conv2_block2_1_bn (BatchNormali (None, 128, 128, 64) 256 conv2_block2_1_conv[0][0] __________________________________________________________________________________________________ conv2_block2_1_relu (Activation (None, 128, 128, 64) 0 conv2_block2_1_bn[0][0] __________________________________________________________________________________________________ conv2_block2_2_conv (Conv2D) (None, 128, 128, 64) 36928 conv2_block2_1_relu[0][0] __________________________________________________________________________________________________ conv2_block2_2_bn (BatchNormali (None, 128, 128, 64) 256 conv2_block2_2_conv[0][0] 
__________________________________________________________________________________________________ conv2_block2_2_relu (Activation (None, 128, 128, 64) 0 conv2_block2_2_bn[0][0] __________________________________________________________________________________________________ conv2_block2_3_conv (Conv2D) (None, 128, 128, 256 16640 conv2_block2_2_relu[0][0] __________________________________________________________________________________________________ conv2_block2_3_bn (BatchNormali (None, 128, 128, 256 1024 conv2_block2_3_conv[0][0] __________________________________________________________________________________________________ conv2_block2_add (Add) (None, 128, 128, 256 0 conv2_block1_out[0][0] conv2_block2_3_bn[0][0] __________________________________________________________________________________________________ conv2_block2_out (Activation) (None, 128, 128, 256 0 conv2_block2_add[0][0] __________________________________________________________________________________________________ conv2_block3_1_conv (Conv2D) (None, 128, 128, 64) 16448 conv2_block2_out[0][0] __________________________________________________________________________________________________ conv2_block3_1_bn (BatchNormali (None, 128, 128, 64) 256 conv2_block3_1_conv[0][0] __________________________________________________________________________________________________ conv2_block3_1_relu (Activation (None, 128, 128, 64) 0 conv2_block3_1_bn[0][0] __________________________________________________________________________________________________ conv2_block3_2_conv (Conv2D) (None, 128, 128, 64) 36928 conv2_block3_1_relu[0][0] __________________________________________________________________________________________________ conv2_block3_2_bn (BatchNormali (None, 128, 128, 64) 256 conv2_block3_2_conv[0][0] __________________________________________________________________________________________________ conv2_block3_2_relu (Activation (None, 128, 128, 64) 0 conv2_block3_2_bn[0][0] __________________________________________________________________________________________________ conv2_block3_3_conv (Conv2D) (None, 128, 128, 256 16640 conv2_block3_2_relu[0][0] __________________________________________________________________________________________________ conv2_block3_3_bn (BatchNormali (None, 128, 128, 256 1024 conv2_block3_3_conv[0][0] __________________________________________________________________________________________________ conv2_block3_add (Add) (None, 128, 128, 256 0 conv2_block2_out[0][0] conv2_block3_3_bn[0][0] __________________________________________________________________________________________________ conv2_block3_out (Activation) (None, 128, 128, 256 0 conv2_block3_add[0][0] __________________________________________________________________________________________________ conv3_block1_1_conv (Conv2D) (None, 64, 64, 128) 32896 conv2_block3_out[0][0] __________________________________________________________________________________________________ conv3_block1_1_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block1_1_conv[0][0] __________________________________________________________________________________________________ conv3_block1_1_relu (Activation (None, 64, 64, 128) 0 conv3_block1_1_bn[0][0] __________________________________________________________________________________________________ conv3_block1_2_conv (Conv2D) (None, 64, 64, 128) 147584 conv3_block1_1_relu[0][0] __________________________________________________________________________________________________ conv3_block1_2_bn 
(BatchNormali (None, 64, 64, 128) 512 conv3_block1_2_conv[0][0] __________________________________________________________________________________________________ conv3_block1_2_relu (Activation (None, 64, 64, 128) 0 conv3_block1_2_bn[0][0] __________________________________________________________________________________________________ conv3_block1_0_conv (Conv2D) (None, 64, 64, 512) 131584 conv2_block3_out[0][0] __________________________________________________________________________________________________ conv3_block1_3_conv (Conv2D) (None, 64, 64, 512) 66048 conv3_block1_2_relu[0][0] __________________________________________________________________________________________________ conv3_block1_0_bn (BatchNormali (None, 64, 64, 512) 2048 conv3_block1_0_conv[0][0] __________________________________________________________________________________________________ conv3_block1_3_bn (BatchNormali (None, 64, 64, 512) 2048 conv3_block1_3_conv[0][0] __________________________________________________________________________________________________ conv3_block1_add (Add) (None, 64, 64, 512) 0 conv3_block1_0_bn[0][0] conv3_block1_3_bn[0][0] __________________________________________________________________________________________________ conv3_block1_out (Activation) (None, 64, 64, 512) 0 conv3_block1_add[0][0] __________________________________________________________________________________________________ conv3_block2_1_conv (Conv2D) (None, 64, 64, 128) 65664 conv3_block1_out[0][0] __________________________________________________________________________________________________ conv3_block2_1_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block2_1_conv[0][0] __________________________________________________________________________________________________ conv3_block2_1_relu (Activation (None, 64, 64, 128) 0 conv3_block2_1_bn[0][0] __________________________________________________________________________________________________ conv3_block2_2_conv (Conv2D) (None, 64, 64, 128) 147584 conv3_block2_1_relu[0][0] __________________________________________________________________________________________________ conv3_block2_2_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block2_2_conv[0][0] __________________________________________________________________________________________________ conv3_block2_2_relu (Activation (None, 64, 64, 128) 0 conv3_block2_2_bn[0][0] __________________________________________________________________________________________________ conv3_block2_3_conv (Conv2D) (None, 64, 64, 512) 66048 conv3_block2_2_relu[0][0] __________________________________________________________________________________________________ conv3_block2_3_bn (BatchNormali (None, 64, 64, 512) 2048 conv3_block2_3_conv[0][0] __________________________________________________________________________________________________ conv3_block2_add (Add) (None, 64, 64, 512) 0 conv3_block1_out[0][0] conv3_block2_3_bn[0][0] __________________________________________________________________________________________________ conv3_block2_out (Activation) (None, 64, 64, 512) 0 conv3_block2_add[0][0] __________________________________________________________________________________________________ conv3_block3_1_conv (Conv2D) (None, 64, 64, 128) 65664 conv3_block2_out[0][0] __________________________________________________________________________________________________ conv3_block3_1_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block3_1_conv[0][0] 
__________________________________________________________________________________________________ conv3_block3_1_relu (Activation (None, 64, 64, 128) 0 conv3_block3_1_bn[0][0] __________________________________________________________________________________________________ conv3_block3_2_conv (Conv2D) (None, 64, 64, 128) 147584 conv3_block3_1_relu[0][0] __________________________________________________________________________________________________ conv3_block3_2_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block3_2_conv[0][0] __________________________________________________________________________________________________ conv3_block3_2_relu (Activation (None, 64, 64, 128) 0 conv3_block3_2_bn[0][0] __________________________________________________________________________________________________ conv3_block3_3_conv (Conv2D) (None, 64, 64, 512) 66048 conv3_block3_2_relu[0][0] __________________________________________________________________________________________________ conv3_block3_3_bn (BatchNormali (None, 64, 64, 512) 2048 conv3_block3_3_conv[0][0] __________________________________________________________________________________________________ conv3_block3_add (Add) (None, 64, 64, 512) 0 conv3_block2_out[0][0] conv3_block3_3_bn[0][0] __________________________________________________________________________________________________ conv3_block3_out (Activation) (None, 64, 64, 512) 0 conv3_block3_add[0][0] __________________________________________________________________________________________________ conv3_block4_1_conv (Conv2D) (None, 64, 64, 128) 65664 conv3_block3_out[0][0] __________________________________________________________________________________________________ conv3_block4_1_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block4_1_conv[0][0] __________________________________________________________________________________________________ conv3_block4_1_relu (Activation (None, 64, 64, 128) 0 conv3_block4_1_bn[0][0] __________________________________________________________________________________________________ conv3_block4_2_conv (Conv2D) (None, 64, 64, 128) 147584 conv3_block4_1_relu[0][0] __________________________________________________________________________________________________ conv3_block4_2_bn (BatchNormali (None, 64, 64, 128) 512 conv3_block4_2_conv[0][0] __________________________________________________________________________________________________ conv3_block4_2_relu (Activation (None, 64, 64, 128) 0 conv3_block4_2_bn[0][0] __________________________________________________________________________________________________ conv3_block4_3_conv (Conv2D) (None, 64, 64, 512) 66048 conv3_block4_2_relu[0][0] __________________________________________________________________________________________________ conv3_block4_3_bn (BatchNormali (None, 64, 64, 512) 2048 conv3_block4_3_conv[0][0] __________________________________________________________________________________________________ conv3_block4_add (Add) (None, 64, 64, 512) 0 conv3_block3_out[0][0] conv3_block4_3_bn[0][0] __________________________________________________________________________________________________ conv3_block4_out (Activation) (None, 64, 64, 512) 0 conv3_block4_add[0][0] __________________________________________________________________________________________________ conv4_block1_1_conv (Conv2D) (None, 32, 32, 256) 131328 conv3_block4_out[0][0] __________________________________________________________________________________________________ conv4_block1_1_bn (BatchNormali 
(None, 32, 32, 256) 1024 conv4_block1_1_conv[0][0] __________________________________________________________________________________________________ conv4_block1_1_relu (Activation (None, 32, 32, 256) 0 conv4_block1_1_bn[0][0] __________________________________________________________________________________________________ conv4_block1_2_conv (Conv2D) (None, 32, 32, 256) 590080 conv4_block1_1_relu[0][0] __________________________________________________________________________________________________ conv4_block1_2_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block1_2_conv[0][0] __________________________________________________________________________________________________ conv4_block1_2_relu (Activation (None, 32, 32, 256) 0 conv4_block1_2_bn[0][0] __________________________________________________________________________________________________ conv4_block1_0_conv (Conv2D) (None, 32, 32, 1024) 525312 conv3_block4_out[0][0] __________________________________________________________________________________________________ conv4_block1_3_conv (Conv2D) (None, 32, 32, 1024) 263168 conv4_block1_2_relu[0][0] __________________________________________________________________________________________________ conv4_block1_0_bn (BatchNormali (None, 32, 32, 1024) 4096 conv4_block1_0_conv[0][0] __________________________________________________________________________________________________ conv4_block1_3_bn (BatchNormali (None, 32, 32, 1024) 4096 conv4_block1_3_conv[0][0] __________________________________________________________________________________________________ conv4_block1_add (Add) (None, 32, 32, 1024) 0 conv4_block1_0_bn[0][0] conv4_block1_3_bn[0][0] __________________________________________________________________________________________________ conv4_block1_out (Activation) (None, 32, 32, 1024) 0 conv4_block1_add[0][0] __________________________________________________________________________________________________ conv4_block2_1_conv (Conv2D) (None, 32, 32, 256) 262400 conv4_block1_out[0][0] __________________________________________________________________________________________________ conv4_block2_1_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block2_1_conv[0][0] __________________________________________________________________________________________________ conv4_block2_1_relu (Activation (None, 32, 32, 256) 0 conv4_block2_1_bn[0][0] __________________________________________________________________________________________________ conv4_block2_2_conv (Conv2D) (None, 32, 32, 256) 590080 conv4_block2_1_relu[0][0] __________________________________________________________________________________________________ conv4_block2_2_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block2_2_conv[0][0] __________________________________________________________________________________________________ conv4_block2_2_relu (Activation (None, 32, 32, 256) 0 conv4_block2_2_bn[0][0] __________________________________________________________________________________________________ conv4_block2_3_conv (Conv2D) (None, 32, 32, 1024) 263168 conv4_block2_2_relu[0][0] __________________________________________________________________________________________________ conv4_block2_3_bn (BatchNormali (None, 32, 32, 1024) 4096 conv4_block2_3_conv[0][0] __________________________________________________________________________________________________ conv4_block2_add (Add) (None, 32, 32, 1024) 0 conv4_block1_out[0][0] conv4_block2_3_bn[0][0] 
__________________________________________________________________________________________________ conv4_block2_out (Activation) (None, 32, 32, 1024) 0 conv4_block2_add[0][0] __________________________________________________________________________________________________ conv4_block3_1_conv (Conv2D) (None, 32, 32, 256) 262400 conv4_block2_out[0][0] __________________________________________________________________________________________________ conv4_block3_1_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block3_1_conv[0][0] __________________________________________________________________________________________________ conv4_block3_1_relu (Activation (None, 32, 32, 256) 0 conv4_block3_1_bn[0][0] __________________________________________________________________________________________________ conv4_block3_2_conv (Conv2D) (None, 32, 32, 256) 590080 conv4_block3_1_relu[0][0] __________________________________________________________________________________________________ conv4_block3_2_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block3_2_conv[0][0] __________________________________________________________________________________________________ conv4_block3_2_relu (Activation (None, 32, 32, 256) 0 conv4_block3_2_bn[0][0] __________________________________________________________________________________________________ conv4_block3_3_conv (Conv2D) (None, 32, 32, 1024) 263168 conv4_block3_2_relu[0][0] __________________________________________________________________________________________________ conv4_block3_3_bn (BatchNormali (None, 32, 32, 1024) 4096 conv4_block3_3_conv[0][0] __________________________________________________________________________________________________ conv4_block3_add (Add) (None, 32, 32, 1024) 0 conv4_block2_out[0][0] conv4_block3_3_bn[0][0] __________________________________________________________________________________________________ conv4_block3_out (Activation) (None, 32, 32, 1024) 0 conv4_block3_add[0][0] __________________________________________________________________________________________________ conv4_block4_1_conv (Conv2D) (None, 32, 32, 256) 262400 conv4_block3_out[0][0] __________________________________________________________________________________________________ conv4_block4_1_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block4_1_conv[0][0] __________________________________________________________________________________________________ conv4_block4_1_relu (Activation (None, 32, 32, 256) 0 conv4_block4_1_bn[0][0] __________________________________________________________________________________________________ conv4_block4_2_conv (Conv2D) (None, 32, 32, 256) 590080 conv4_block4_1_relu[0][0] __________________________________________________________________________________________________ conv4_block4_2_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block4_2_conv[0][0] __________________________________________________________________________________________________ conv4_block4_2_relu (Activation (None, 32, 32, 256) 0 conv4_block4_2_bn[0][0] __________________________________________________________________________________________________ conv4_block4_3_conv (Conv2D) (None, 32, 32, 1024) 263168 conv4_block4_2_relu[0][0] __________________________________________________________________________________________________ conv4_block4_3_bn (BatchNormali (None, 32, 32, 1024) 4096 conv4_block4_3_conv[0][0] __________________________________________________________________________________________________ conv4_block4_add (Add) (None, 
32, 32, 1024) 0 conv4_block3_out[0][0] conv4_block4_3_bn[0][0] __________________________________________________________________________________________________ conv4_block4_out (Activation) (None, 32, 32, 1024) 0 conv4_block4_add[0][0] __________________________________________________________________________________________________ conv4_block5_1_conv (Conv2D) (None, 32, 32, 256) 262400 conv4_block4_out[0][0] __________________________________________________________________________________________________ conv4_block5_1_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block5_1_conv[0][0] __________________________________________________________________________________________________ conv4_block5_1_relu (Activation (None, 32, 32, 256) 0 conv4_block5_1_bn[0][0] __________________________________________________________________________________________________ conv4_block5_2_conv (Conv2D) (None, 32, 32, 256) 590080 conv4_block5_1_relu[0][0] __________________________________________________________________________________________________ conv4_block5_2_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block5_2_conv[0][0] __________________________________________________________________________________________________ conv4_block5_2_relu (Activation (None, 32, 32, 256) 0 conv4_block5_2_bn[0][0] __________________________________________________________________________________________________ conv4_block5_3_conv (Conv2D) (None, 32, 32, 1024) 263168 conv4_block5_2_relu[0][0] __________________________________________________________________________________________________ conv4_block5_3_bn (BatchNormali (None, 32, 32, 1024) 4096 conv4_block5_3_conv[0][0] __________________________________________________________________________________________________ conv4_block5_add (Add) (None, 32, 32, 1024) 0 conv4_block4_out[0][0] conv4_block5_3_bn[0][0] __________________________________________________________________________________________________ conv4_block5_out (Activation) (None, 32, 32, 1024) 0 conv4_block5_add[0][0] __________________________________________________________________________________________________ conv4_block6_1_conv (Conv2D) (None, 32, 32, 256) 262400 conv4_block5_out[0][0] __________________________________________________________________________________________________ conv4_block6_1_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block6_1_conv[0][0] __________________________________________________________________________________________________ conv4_block6_1_relu (Activation (None, 32, 32, 256) 0 conv4_block6_1_bn[0][0] __________________________________________________________________________________________________ conv4_block6_2_conv (Conv2D) (None, 32, 32, 256) 590080 conv4_block6_1_relu[0][0] __________________________________________________________________________________________________ conv4_block6_2_bn (BatchNormali (None, 32, 32, 256) 1024 conv4_block6_2_conv[0][0] __________________________________________________________________________________________________ conv4_block6_2_relu (Activation (None, 32, 32, 256) 0 conv4_block6_2_bn[0][0] __________________________________________________________________________________________________ average_pooling2d (AveragePooli (None, 1, 1, 256) 0 conv4_block6_2_relu[0][0] __________________________________________________________________________________________________ conv2d (Conv2D) (None, 1, 1, 256) 65792 average_pooling2d[0][0] 
__________________________________________________________________________________________________ batch_normalization (BatchNorma (None, 1, 1, 256) 1024 conv2d[0][0] __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 32, 32, 256) 65536 conv4_block6_2_relu[0][0] __________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 32, 32, 256) 589824 conv4_block6_2_relu[0][0] __________________________________________________________________________________________________ conv2d_3 (Conv2D) (None, 32, 32, 256) 589824 conv4_block6_2_relu[0][0] __________________________________________________________________________________________________ conv2d_4 (Conv2D) (None, 32, 32, 256) 589824 conv4_block6_2_relu[0][0] __________________________________________________________________________________________________ tf.nn.relu (TFOpLambda) (None, 1, 1, 256) 0 batch_normalization[0][0] __________________________________________________________________________________________________ batch_normalization_1 (BatchNor (None, 32, 32, 256) 1024 conv2d_1[0][0] __________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 32, 32, 256) 1024 conv2d_2[0][0] __________________________________________________________________________________________________ batch_normalization_3 (BatchNor (None, 32, 32, 256) 1024 conv2d_3[0][0] __________________________________________________________________________________________________ batch_normalization_4 (BatchNor (None, 32, 32, 256) 1024 conv2d_4[0][0] __________________________________________________________________________________________________ up_sampling2d (UpSampling2D) (None, 32, 32, 256) 0 tf.nn.relu[0][0] __________________________________________________________________________________________________ tf.nn.relu_1 (TFOpLambda) (None, 32, 32, 256) 0 batch_normalization_1[0][0] __________________________________________________________________________________________________ tf.nn.relu_2 (TFOpLambda) (None, 32, 32, 256) 0 batch_normalization_2[0][0] __________________________________________________________________________________________________ tf.nn.relu_3 (TFOpLambda) (None, 32, 32, 256) 0 batch_normalization_3[0][0] __________________________________________________________________________________________________ tf.nn.relu_4 (TFOpLambda) (None, 32, 32, 256) 0 batch_normalization_4[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 32, 32, 1280) 0 up_sampling2d[0][0] tf.nn.relu_1[0][0] tf.nn.relu_2[0][0] tf.nn.relu_3[0][0] tf.nn.relu_4[0][0] __________________________________________________________________________________________________ conv2d_5 (Conv2D) (None, 32, 32, 256) 327680 concatenate[0][0] __________________________________________________________________________________________________ batch_normalization_5 (BatchNor (None, 32, 32, 256) 1024 conv2d_5[0][0] __________________________________________________________________________________________________ conv2d_6 (Conv2D) (None, 128, 128, 48) 3072 conv2_block3_2_relu[0][0] __________________________________________________________________________________________________ tf.nn.relu_5 (TFOpLambda) (None, 32, 32, 256) 0 batch_normalization_5[0][0] 
__________________________________________________________________________________________________ batch_normalization_6 (BatchNor (None, 128, 128, 48) 192 conv2d_6[0][0] __________________________________________________________________________________________________ up_sampling2d_1 (UpSampling2D) (None, 128, 128, 256 0 tf.nn.relu_5[0][0] __________________________________________________________________________________________________ tf.nn.relu_6 (TFOpLambda) (None, 128, 128, 48) 0 batch_normalization_6[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 128, 128, 304 0 up_sampling2d_1[0][0] tf.nn.relu_6[0][0] __________________________________________________________________________________________________ conv2d_7 (Conv2D) (None, 128, 128, 256 700416 concatenate_1[0][0] __________________________________________________________________________________________________ batch_normalization_7 (BatchNor (None, 128, 128, 256 1024 conv2d_7[0][0] __________________________________________________________________________________________________ tf.nn.relu_7 (TFOpLambda) (None, 128, 128, 256 0 batch_normalization_7[0][0] __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 128, 128, 256 589824 tf.nn.relu_7[0][0] __________________________________________________________________________________________________ batch_normalization_8 (BatchNor (None, 128, 128, 256 1024 conv2d_8[0][0] __________________________________________________________________________________________________ tf.nn.relu_8 (TFOpLambda) (None, 128, 128, 256 0 batch_normalization_8[0][0] __________________________________________________________________________________________________ up_sampling2d_2 (UpSampling2D) (None, 512, 512, 256 0 tf.nn.relu_8[0][0] __________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 512, 512, 20) 5140 up_sampling2d_2[0][0] ================================================================================================== Total params: 11,857,236 Trainable params: 11,824,500 Non-trainable params: 32,736 __________________________________________________________________________________________________ Training We train the model using sparse categorical crossentropy as the loss function, and Adam as the optimizer. 
loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile( optimizer=keras.optimizers.Adam(learning_rate=0.001), loss=loss, metrics=[\"accuracy\"], ) history = model.fit(train_dataset, validation_data=val_dataset, epochs=25) plt.plot(history.history[\"loss\"]) plt.title(\"Training Loss\") plt.ylabel(\"loss\") plt.xlabel(\"epoch\") plt.show() plt.plot(history.history[\"accuracy\"]) plt.title(\"Training Accuracy\") plt.ylabel(\"accuracy\") plt.xlabel(\"epoch\") plt.show() plt.plot(history.history[\"val_loss\"]) plt.title(\"Validation Loss\") plt.ylabel(\"val_loss\") plt.xlabel(\"epoch\") plt.show() plt.plot(history.history[\"val_accuracy\"]) plt.title(\"Validation Accuracy\") plt.ylabel(\"val_accuracy\") plt.xlabel(\"epoch\") plt.show() Epoch 1/25 250/250 [==============================] - 115s 359ms/step - loss: 1.1765 - accuracy: 0.6424 - val_loss: 2.3559 - val_accuracy: 0.5960 Epoch 2/25 250/250 [==============================] - 92s 366ms/step - loss: 0.9413 - accuracy: 0.6998 - val_loss: 1.7349 - val_accuracy: 0.5593 Epoch 3/25 250/250 [==============================] - 93s 371ms/step - loss: 0.8415 - accuracy: 0.7310 - val_loss: 1.3097 - val_accuracy: 0.6281 Epoch 4/25 250/250 [==============================] - 93s 372ms/step - loss: 0.7640 - accuracy: 0.7552 - val_loss: 1.0175 - val_accuracy: 0.6885 Epoch 5/25 250/250 [==============================] - 93s 372ms/step - loss: 0.7139 - accuracy: 0.7706 - val_loss: 1.2226 - val_accuracy: 0.6107 Epoch 6/25 250/250 [==============================] - 93s 373ms/step - loss: 0.6647 - accuracy: 0.7867 - val_loss: 0.8583 - val_accuracy: 0.7178 Epoch 7/25 250/250 [==============================] - 94s 375ms/step - loss: 0.5986 - accuracy: 0.8080 - val_loss: 0.9724 - val_accuracy: 0.7135 Epoch 8/25 250/250 [==============================] - 93s 372ms/step - loss: 0.5599 - accuracy: 0.8212 - val_loss: 0.9722 - val_accuracy: 0.7064 Epoch 9/25 250/250 [==============================] - 93s 372ms/step - loss: 0.5161 - accuracy: 0.8364 - val_loss: 0.9023 - val_accuracy: 0.7471 Epoch 10/25 250/250 [==============================] - 93s 373ms/step - loss: 0.4719 - accuracy: 0.8515 - val_loss: 0.8803 - val_accuracy: 0.7540 Epoch 11/25 250/250 [==============================] - 93s 372ms/step - loss: 0.4337 - accuracy: 0.8636 - val_loss: 0.9682 - val_accuracy: 0.7377 Epoch 12/25 250/250 [==============================] - 93s 373ms/step - loss: 0.4079 - accuracy: 0.8718 - val_loss: 0.9586 - val_accuracy: 0.7551 Epoch 13/25 250/250 [==============================] - 93s 373ms/step - loss: 0.3694 - accuracy: 0.8856 - val_loss: 0.9676 - val_accuracy: 0.7606 Epoch 14/25 250/250 [==============================] - 93s 373ms/step - loss: 0.3493 - accuracy: 0.8913 - val_loss: 0.8375 - val_accuracy: 0.7706 Epoch 15/25 250/250 [==============================] - 93s 373ms/step - loss: 0.3217 - accuracy: 0.9008 - val_loss: 0.9956 - val_accuracy: 0.7469 Epoch 16/25 250/250 [==============================] - 93s 372ms/step - loss: 0.3018 - accuracy: 0.9075 - val_loss: 0.9614 - val_accuracy: 0.7474 Epoch 17/25 250/250 [==============================] - 93s 372ms/step - loss: 0.2870 - accuracy: 0.9122 - val_loss: 0.9652 - val_accuracy: 0.7626 Epoch 18/25 250/250 [==============================] - 93s 373ms/step - loss: 0.2685 - accuracy: 0.9182 - val_loss: 0.8913 - val_accuracy: 0.7824 Epoch 19/25 250/250 [==============================] - 93s 373ms/step - loss: 0.2574 - accuracy: 0.9216 - val_loss: 1.0205 - val_accuracy: 0.7417 Epoch 20/25 250/250 
[==============================] - 93s 372ms/step - loss: 0.2619 - accuracy: 0.9199 - val_loss: 0.9237 - val_accuracy: 0.7788 Epoch 21/25 250/250 [==============================] - 93s 372ms/step - loss: 0.2372 - accuracy: 0.9280 - val_loss: 0.9076 - val_accuracy: 0.7796 Epoch 22/25 250/250 [==============================] - 93s 372ms/step - loss: 0.2175 - accuracy: 0.9344 - val_loss: 0.9797 - val_accuracy: 0.7742 Epoch 23/25 250/250 [==============================] - 93s 372ms/step - loss: 0.2084 - accuracy: 0.9370 - val_loss: 0.9981 - val_accuracy: 0.7870 Epoch 24/25 250/250 [==============================] - 93s 373ms/step - loss: 0.2077 - accuracy: 0.9370 - val_loss: 1.0494 - val_accuracy: 0.7767 Epoch 25/25 250/250 [==============================] - 93s 372ms/step - loss: 0.2059 - accuracy: 0.9377 - val_loss: 0.9640 - val_accuracy: 0.7651 png png png png Inference using Colormap Overlay The raw predictions from the model represent a one-hot encoded tensor of shape (N, 512, 512, 20) where each one of the 20 channels is a binary mask corresponding to a predicted label. In order to visualize the results, we plot them as RGB segmentation masks where each pixel is represented by a unique color corresponding to the particular label predicted. We can easily find the color corresponding to each label from the human_colormap.mat file provided as part of the dataset. We would also plot an overlay of the RGB segmentation mask on the input image as this further helps us to identify the different categories present in the image more intuitively. # Loading the Colormap colormap = loadmat( \"./instance-level_human_parsing/instance-level_human_parsing/human_colormap.mat\" )[\"colormap\"] colormap = colormap * 100 colormap = colormap.astype(np.uint8) def infer(model, image_tensor): predictions = model.predict(np.expand_dims((image_tensor), axis=0)) predictions = np.squeeze(predictions) predictions = np.argmax(predictions, axis=2) return predictions def decode_segmentation_masks(mask, colormap, n_classes): r = np.zeros_like(mask).astype(np.uint8) g = np.zeros_like(mask).astype(np.uint8) b = np.zeros_like(mask).astype(np.uint8) for l in range(0, n_classes): idx = mask == l r[idx] = colormap[l, 0] g[idx] = colormap[l, 1] b[idx] = colormap[l, 2] rgb = np.stack([r, g, b], axis=2) return rgb def get_overlay(image, colored_mask): image = tf.keras.preprocessing.image.array_to_img(image) image = np.array(image).astype(np.uint8) overlay = cv2.addWeighted(image, 0.35, colored_mask, 0.65, 0) return overlay def plot_samples_matplotlib(display_list, figsize=(5, 3)): _, axes = plt.subplots(nrows=1, ncols=len(display_list), figsize=figsize) for i in range(len(display_list)): if display_list[i].shape[-1] == 3: axes[i].imshow(tf.keras.preprocessing.image.array_to_img(display_list[i])) else: axes[i].imshow(display_list[i]) plt.show() def plot_predictions(images_list, colormap, model): for image_file in images_list: image_tensor = read_image(image_file) prediction_mask = infer(image_tensor=image_tensor, model=model) prediction_colormap = decode_segmentation_masks(prediction_mask, colormap, 20) overlay = get_overlay(image_tensor, prediction_colormap) plot_samples_matplotlib( [image_tensor, overlay, prediction_colormap], figsize=(18, 14) ) Inference on Train Images plot_predictions(train_images[:4], colormap, model=model) png png png png Inference on Validation Images plot_predictions(val_images[:4], colormap, model=model) png png png png Building a near-duplicate image search utility using deep learning and 
locality-sensitive hashing. Introduction Fetching similar images in (near) real time is an important use case of information retrieval systems. Some popular products utilizing it include Pinterest, Google Image Search, etc. In this example, we will build a similar image search utility using Locality Sensitive Hashing (LSH) and random projection on top of the image representations computed by a pretrained image classifier. This kind of search engine is also known as a near-duplicate (or near-dup) image detector. We will also look into optimizing the inference performance of our search utility on GPU using TensorRT. There are other examples under keras.io/examples/vision that are worth checking out in this regard: Metric learning for image similarity search Image similarity estimation using a Siamese Network with a triplet loss Finally, this example uses the following resource as a reference and as such reuses some of its code: Locality Sensitive Hashing for Similar Item Search. Note that in order to optimize the performance of our parser, you should have a GPU runtime available. Imports import matplotlib.pyplot as plt import tensorflow as tf import numpy as np import time import tensorflow_datasets as tfds tfds.disable_progress_bar() Load the dataset and create a training set of 1,000 images To keep the run time of the example short, we will be using a subset of 1,000 images from the tf_flowers dataset (available through TensorFlow Datasets) to build our vocabulary. train_ds, validation_ds = tfds.load( \"tf_flowers\", split=[\"train[:85%]\", \"train[85%:]\"], as_supervised=True ) IMAGE_SIZE = 224 NUM_IMAGES = 1000 images = [] labels = [] for (image, label) in train_ds.take(NUM_IMAGES): image = tf.image.resize(image, (IMAGE_SIZE, IMAGE_SIZE)) images.append(image.numpy()) labels.append(label.numpy()) images = np.array(images) labels = np.array(labels) Load a pre-trained model In this section, we load an image classification model that was trained on the tf_flowers dataset. 85% of the total images were used to build the training set. For more details on the training, refer to this notebook. The underlying model is a BiT-ResNet (proposed in Big Transfer (BiT): General Visual Representation Learning). The BiT-ResNet family of models is known to provide excellent transfer performance across a wide variety of different downstream tasks. !wget -q https://git.io/JuMq0 -O flower_model_bit_0.96875.zip !unzip -qq flower_model_bit_0.96875.zip bit_model = tf.keras.models.load_model(\"flower_model_bit_0.96875\") bit_model.count_params() 23510597 Create an embedding model To retrieve similar images given a query image, we need to first generate vector representations of all the images involved. We do this via an embedding model that extracts output features from our pretrained classifier and normalizes the resulting feature vectors. 
embedding_model = tf.keras.Sequential( [ tf.keras.layers.Input((IMAGE_SIZE, IMAGE_SIZE, 3)), tf.keras.layers.Rescaling(scale=1.0 / 255), bit_model.layers[1], tf.keras.layers.Normalization(mean=0, variance=1), ], name=\"embedding_model\", ) embedding_model.summary() Model: \"embedding_model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= rescaling (Rescaling) (None, 224, 224, 3) 0 _________________________________________________________________ keras_layer (KerasLayer) (None, 2048) 23500352 _________________________________________________________________ normalization (Normalization (None, 2048) 0 ================================================================= Total params: 23,500,352 Trainable params: 23,500,352 Non-trainable params: 0 _________________________________________________________________ Take note of the normalization layer inside the model. It is used to project the representation vectors to the space of unit-spheres. Hashing utilities def hash_func(embedding, random_vectors): embedding = np.array(embedding) # Random projection. bools = np.dot(embedding, random_vectors) > 0 return [bool2int(bool_vec) for bool_vec in bools] def bool2int(x): y = 0 for i, j in enumerate(x): if j: y += 1 << i return y The shape of the vectors coming out of embedding_model is (2048,), and considering practical aspects (storage, retrieval performance, etc.) it is quite large. So, there arises a need to reduce the dimensionality of the embedding vectors without reducing their information content. This is where random projection comes into the picture. It is based on the principle that if the distance between a group of points on a given plane is approximately preserved, the dimensionality of that plane can further be reduced. Inside hash_func(), we first reduce the dimensionality of the embedding vectors. Then we compute the bitwise hash values of the images to determine their hash buckets. Images having same hash values are likely to go into the same hash bucket. From a deployment perspective, bitwise hash values are cheaper to store and operate on. Query utilities The Table class is responsible for building a single hash table. Each entry in the hash table is a mapping between the reduced embedding of an image from our dataset and a unique identifier. Because our dimensionality reduction technique involves randomness, it can so happen that similar images are not mapped to the same hash bucket everytime the process run. To reduce this effect, we will take results from multiple tables into consideration -- the number of tables and the reduction dimensionality are the key hyperparameters here. Crucially, you wouldn't reimplement locality-sensitive hashing yourself when working with real world applications. Instead, you'd likely use one of the following popular libraries: ScaNN Annoy Vald class Table: def __init__(self, hash_size, dim): self.table = {} self.hash_size = hash_size self.random_vectors = np.random.randn(hash_size, dim).T def add(self, id, vectors, label): # Create a unique indentifier. entry = {\"id_label\": str(id) + \"_\" + str(label)} # Compute the hash values. hashes = hash_func(vectors, self.random_vectors) # Add the hash values to the current table. for h in hashes: if h in self.table: self.table[h].append(entry) else: self.table[h] = [entry] def query(self, vectors): # Compute hash value for the query vector. 
hashes = hash_func(vectors, self.random_vectors) results = [] # Loop over the query hashes and determine if they exist in # the current table. for h in hashes: if h in self.table: results.extend(self.table[h]) return results In the following LSH class we will pack the utilities to have multiple hash tables. class LSH: def __init__(self, hash_size, dim, num_tables): self.num_tables = num_tables self.tables = [] for i in range(self.num_tables): self.tables.append(Table(hash_size, dim)) def add(self, id, vectors, label): for table in self.tables: table.add(id, vectors, label) def query(self, vectors): results = [] for table in self.tables: results.extend(table.query(vectors)) return results Now we can encapsulate the logic for building and operating with the master LSH table (a collection of many tables) inside a class. It has two methods: train(): Responsible for building the final LSH table. query(): Computes the number of matches given a query image and also quantifies the similarity score. class BuildLSHTable: def __init__( self, prediction_model, concrete_function=False, hash_size=8, dim=2048, num_tables=10, ): self.hash_size = hash_size self.dim = dim self.num_tables = num_tables self.lsh = LSH(self.hash_size, self.dim, self.num_tables) self.prediction_model = prediction_model self.concrete_function = concrete_function def train(self, training_files): for id, training_file in enumerate(training_files): # Unpack the data. image, label = training_file if len(image.shape) < 4: image = image[None, ...] # Compute embeddings and update the LSH tables. # More on `self.concrete_function()` later. if self.concrete_function: features = self.prediction_model(tf.constant(image))[ \"normalization\" ].numpy() else: features = self.prediction_model.predict(image) self.lsh.add(id, features, label) def query(self, image, verbose=True): # Compute the embeddings of the query image and fetch the results. if len(image.shape) < 4: image = image[None, ...] if self.concrete_function: features = self.prediction_model(tf.constant(image))[ \"normalization\" ].numpy() else: features = self.prediction_model.predict(image) results = self.lsh.query(features) if verbose: print(\"Matches:\", len(results)) # Calculate Jaccard index to quantify the similarity. counts = {} for r in results: if r[\"id_label\"] in counts: counts[r[\"id_label\"]] += 1 else: counts[r[\"id_label\"]] = 1 for k in counts: counts[k] = float(counts[k]) / self.dim return counts Create LSH tables With our helper utilities and classes implemented, we can now build our LSH table. Since we will be benchmarking performance between optimized and unoptimized embedding models, we will also warm up our GPU to avoid any unfair comparison. # Utility to warm up the GPU. def warmup(): dummy_sample = tf.ones((1, IMAGE_SIZE, IMAGE_SIZE, 3)) for _ in range(100): _ = embedding_model.predict(dummy_sample) Now we can first do the GPU wam-up and proceed to build the master LSH table with embedding_model. warmup() training_files = zip(images, labels) lsh_builder = BuildLSHTable(embedding_model) lsh_builder.train(training_files) At the time of writing, the wall time was 54.1 seconds on a Tesla T4 GPU. This timing may vary based on the GPU you are using. Optimize the model with TensorRT For NVIDIA-based GPUs, the TensorRT framework can be used to dramatically enhance the inference latency by using various model optimization techniques like pruning, constant folding, layer fusion, and so on. 
Here we will use the tf.experimental.tensorrt module to optimize our embedding model. # First serialize the embedding model as a SavedModel. embedding_model.save(\"embedding_model\") # Initialize the conversion parameters. params = tf.experimental.tensorrt.ConversionParams( precision_mode=\"FP16\", maximum_cached_engines=16 ) # Run the conversion. converter = tf.experimental.tensorrt.Converter( input_saved_model_dir=\"embedding_model\", conversion_params=params ) converter.convert() converter.save(\"tensorrt_embedding_model\") WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model. WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model. INFO:tensorflow:Assets written to: embedding_model/assets INFO:tensorflow:Assets written to: embedding_model/assets INFO:tensorflow:Linked TensorRT version: (0, 0, 0) INFO:tensorflow:Linked TensorRT version: (0, 0, 0) INFO:tensorflow:Loaded TensorRT version: (0, 0, 0) INFO:tensorflow:Loaded TensorRT version: (0, 0, 0) INFO:tensorflow:Assets written to: tensorrt_embedding_model/assets INFO:tensorflow:Assets written to: tensorrt_embedding_model/assets Notes on the parameters inside of tf.experimental.tensorrt.ConversionParams(): precision_mode defines the numerical precision of the operations in the to-be-converted model. maximum_cached_engines specifies the maximum number of TRT engines that will be cached to handle dynamic operations (operations with unknown shapes). To learn more about the other options, refer to the official documentation. You can also explore the different quantization options provided by the tf.experimental.tensorrt module. # Load the converted model. root = tf.saved_model.load(\"tensorrt_embedding_model\") trt_model_function = root.signatures[\"serving_default\"] Build LSH tables with optimized model warmup() training_files = zip(images, labels) lsh_builder_trt = BuildLSHTable(trt_model_function, concrete_function=True) lsh_builder_trt.train(training_files) Notice the difference in the wall time which is 13.1 seconds. Earlier, with the unoptimized model it was 54.1 seconds. We can take a closer look into one of the hash tables and get an idea of how they are represented. idx = 0 for hash, entry in lsh_builder_trt.lsh.tables[0].table.items(): if idx == 5: break if len(entry) < 5: print(hash, entry) idx += 1 145 [{'id_label': '3_4'}, {'id_label': '727_3'}] 5 [{'id_label': '12_4'}] 128 [{'id_label': '30_2'}, {'id_label': '480_2'}] 208 [{'id_label': '34_2'}, {'id_label': '132_2'}, {'id_label': '984_2'}] 188 [{'id_label': '42_0'}, {'id_label': '135_3'}, {'id_label': '436_3'}, {'id_label': '670_3'}] Visualize results on validation images In this section we will first writing a couple of utility functions to visualize the similar image parsing process. Then we will benchmark the query performance of the models with and without optimization. First, we take 100 images from the validation set for testing purposes. 
validation_images = [] validation_labels = [] for image, label in validation_ds.take(100): image = tf.image.resize(image, (224, 224)) validation_images.append(image.numpy()) validation_labels.append(label.numpy()) validation_images = np.array(validation_images) validation_labels = np.array(validation_labels) validation_images.shape, validation_labels.shape ((100, 224, 224, 3), (100,)) Now we write our visualization utilities. def plot_images(images, labels): plt.figure(figsize=(20, 10)) columns = 5 for (i, image) in enumerate(images): ax = plt.subplot(len(images) / columns + 1, columns, i + 1) if i == 0: ax.set_title(\"Query Image\n\" + \"Label: {}\".format(labels[i])) else: ax.set_title(\"Similar Image # \" + str(i) + \"\nLabel: {}\".format(labels[i])) plt.imshow(image.astype(\"int\")) plt.axis(\"off\") def visualize_lsh(lsh_class): idx = np.random.choice(len(validation_images)) image = validation_images[idx] label = validation_labels[idx] results = lsh_class.query(image) candidates = [] labels = [] overlaps = [] for idx, r in enumerate(sorted(results, key=results.get, reverse=True)): if idx == 4: break image_id, label = r.split(\"_\")[0], r.split(\"_\")[1] candidates.append(images[int(image_id)]) labels.append(label) overlaps.append(results[r]) candidates.insert(0, image) labels.insert(0, label) plot_images(candidates, labels) Non-TRT model for _ in range(5): visualize_lsh(lsh_builder) visualize_lsh(lsh_builder) Matches: 507 Matches: 554 Matches: 438 Matches: 370 Matches: 407 Matches: 306 png png png png png png TRT model for _ in range(5): visualize_lsh(lsh_builder_trt) Matches: 458 Matches: 181 Matches: 280 Matches: 280 Matches: 503 png png png png png As you may have noticed, there are a couple of incorrect results. This can be mitigated in a few ways: Better models for generating the initial embeddings especially for noisy samples. We can use techniques like ArcFace, Supervised Contrastive Learning, etc. that implicitly encourage better learning of representations for retrieval purposes. The trade-off between the number of tables and the reduction dimensionality is crucial and helps set the right recall required for your application. Benchmarking query performance def benchmark(lsh_class): warmup() start_time = time.time() for _ in range(1000): image = np.ones((1, 224, 224, 3)).astype(\"float32\") _ = lsh_class.query(image, verbose=False) end_time = time.time() - start_time print(f\"Time taken: {end_time:.3f}\") benchmark(lsh_builder) benchmark(lsh_builder_trt) Time taken: 54.359 Time taken: 13.963 We can immediately notice a stark difference between the query performance of the two models. Final remarks In this example, we explored the TensorRT framework from NVIDIA for optimizing our model. It's best suited for GPU-based inference servers. There are other choices for such frameworks that cater to different hardware platforms: TensorFlow Lite for mobile and edge devices. ONNX for commodity CPU-based servers. Apache TVM, compiler for machine learning models covering various platforms. Here are a few resources you might want to check out to learn more about applications based on vector similary search in general: ANN Benchmarks Accelerating Large-Scale Inference with Anisotropic Vector Quantization(ScaNN) Spreading vectors for similarity search Building a real-time embeddings similarity matching system How to build and train a convolutional LSTM model for next-frame video prediction. 
Introduction The Convolutional LSTM architectures bring together time series processing and computer vision by introducing a convolutional recurrent cell in a LSTM layer. In this example, we will explore the Convolutional LSTM model in an application to next-frame prediction, the process of predicting what video frames come next given a series of past frames. Setup import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import io import imageio from IPython.display import Image, display from ipywidgets import widgets, Layout, HBox Dataset Construction For this example, we will be using the Moving MNIST dataset. We will download the dataset and then construct and preprocess training and validation sets. For next-frame prediction, our model will be using a previous frame, which we'll call f_n, to predict a new frame, called f_(n + 1). To allow the model to create these predictions, we'll need to process the data such that we have \"shifted\" inputs and outputs, where the input data is frame x_n, being used to predict frame y_(n + 1). # Download and load the dataset. fpath = keras.utils.get_file( \"moving_mnist.npy\", \"http://www.cs.toronto.edu/~nitish/unsupervised_video/mnist_test_seq.npy\", ) dataset = np.load(fpath) # Swap the axes representing the number of frames and number of data samples. dataset = np.swapaxes(dataset, 0, 1) # We'll pick out 1000 of the 10000 total examples and use those. dataset = dataset[:1000, ...] # Add a channel dimension since the images are grayscale. dataset = np.expand_dims(dataset, axis=-1) # Split into train and validation sets using indexing to optimize memory. indexes = np.arange(dataset.shape[0]) np.random.shuffle(indexes) train_index = indexes[: int(0.9 * dataset.shape[0])] val_index = indexes[int(0.9 * dataset.shape[0]) :] train_dataset = dataset[train_index] val_dataset = dataset[val_index] # Normalize the data to the 0-1 range. train_dataset = train_dataset / 255 val_dataset = val_dataset / 255 # We'll define a helper function to shift the frames, where # `x` is frames 0 to n - 1, and `y` is frames 1 to n. def create_shifted_frames(data): x = data[:, 0 : data.shape[1] - 1, :, :] y = data[:, 1 : data.shape[1], :, :] return x, y # Apply the processing function to the datasets. x_train, y_train = create_shifted_frames(train_dataset) x_val, y_val = create_shifted_frames(val_dataset) # Inspect the dataset. print(\"Training Dataset Shapes: \" + str(x_train.shape) + \", \" + str(y_train.shape)) print(\"Validation Dataset Shapes: \" + str(x_val.shape) + \", \" + str(y_val.shape)) Training Dataset Shapes: (900, 19, 64, 64, 1), (900, 19, 64, 64, 1) Validation Dataset Shapes: (100, 19, 64, 64, 1), (100, 19, 64, 64, 1) Data Visualization Our data consists of sequences of frames, each of which are used to predict the upcoming frame. Let's take a look at some of these sequential frames. # Construct a figure on which we will visualize the images. fig, axes = plt.subplots(4, 5, figsize=(10, 8)) # Plot each of the sequential images for one random data example. data_choice = np.random.choice(range(len(train_dataset)), size=1)[0] for idx, ax in enumerate(axes.flat): ax.imshow(np.squeeze(train_dataset[data_choice][idx]), cmap=\"gray\") ax.set_title(f\"Frame {idx + 1}\") ax.axis(\"off\") # Print information and display the figure. print(f\"Displaying frames for example {data_choice}.\") plt.show() Displaying frames for example 130. 
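As a quick sanity check of the frame-shifting logic defined above, here is a tiny, purely illustrative example on a toy array (not part of the Moving MNIST data): create_shifted_frames pairs every frame with the frame that follows it.

import numpy as np

# Toy sample: 1 sequence of 5 frames of shape 2x2x1, where frame k is filled with the value k.
toy = np.stack([np.full((2, 2, 1), k) for k in range(5)])[np.newaxis, ...]
x_toy, y_toy = create_shifted_frames(toy)
print(x_toy.shape, y_toy.shape)  # (1, 4, 2, 2, 1) (1, 4, 2, 2, 1)
print(x_toy[0, :, 0, 0, 0])  # [0 1 2 3]
print(y_toy[0, :, 0, 0, 0])  # [1 2 3 4]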
png Model Construction To build a Convolutional LSTM model, we will use the ConvLSTM2D layer, which will accept inputs of shape (batch_size, num_frames, width, height, channels), and return a prediction movie of the same shape. # Construct the input layer with no definite frame size. inp = layers.Input(shape=(None, *x_train.shape[2:])) # We will construct 3 `ConvLSTM2D` layers with batch normalization, # followed by a `Conv3D` layer for the spatiotemporal outputs. x = layers.ConvLSTM2D( filters=64, kernel_size=(5, 5), padding=\"same\", return_sequences=True, activation=\"relu\", )(inp) x = layers.BatchNormalization()(x) x = layers.ConvLSTM2D( filters=64, kernel_size=(3, 3), padding=\"same\", return_sequences=True, activation=\"relu\", )(x) x = layers.BatchNormalization()(x) x = layers.ConvLSTM2D( filters=64, kernel_size=(1, 1), padding=\"same\", return_sequences=True, activation=\"relu\", )(x) x = layers.Conv3D( filters=1, kernel_size=(3, 3, 3), activation=\"sigmoid\", padding=\"same\" )(x) # Next, we will build the complete model and compile it. model = keras.models.Model(inp, x) model.compile( loss=keras.losses.binary_crossentropy, optimizer=keras.optimizers.Adam(), ) Model Training With our model and data constructed, we can now train the model. # Define some callbacks to improve training. early_stopping = keras.callbacks.EarlyStopping(monitor=\"val_loss\", patience=10) reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor=\"val_loss\", patience=5) # Define modifiable training hyperparameters. epochs = 20 batch_size = 5 # Fit the model to the training data. model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_val, y_val), callbacks=[early_stopping, reduce_lr], ) Frame Prediction Visualizations With our model now constructed and trained, we can generate some example frame predictions based on a new video. We'll pick a random example from the validation set and then choose the first ten frames from them. From there, we can allow the model to predict 10 new frames, which we can compare to the ground truth frame predictions. # Select a random example from the validation dataset. example = val_dataset[np.random.choice(range(len(val_dataset)), size=1)[0]] # Pick the first/last ten frames from the example. frames = example[:10, ...] original_frames = example[10:, ...] # Predict a new set of 10 frames. for _ in range(10): # Extract the model's prediction and post-process it. new_prediction = model.predict(np.expand_dims(frames, axis=0)) new_prediction = np.squeeze(new_prediction, axis=0) predicted_frame = np.expand_dims(new_prediction[-1, ...], axis=0) # Extend the set of prediction frames. frames = np.concatenate((frames, predicted_frame), axis=0) # Construct a figure for the original and new frames. fig, axes = plt.subplots(2, 10, figsize=(20, 4)) # Plot the original frames. for idx, ax in enumerate(axes[0]): ax.imshow(np.squeeze(original_frames[idx]), cmap=\"gray\") ax.set_title(f\"Frame {idx + 11}\") ax.axis(\"off\") # Plot the new frames. new_frames = frames[10:, ...] for idx, ax in enumerate(axes[1]): ax.imshow(np.squeeze(new_frames[idx]), cmap=\"gray\") ax.set_title(f\"Frame {idx + 11}\") ax.axis(\"off\") # Display the figure. plt.show() png Predicted Videos Finally, we'll pick a few examples from the validation set and construct some GIFs with them to see the model's predicted videos. # Select a few random examples from the dataset. 
examples = val_dataset[np.random.choice(range(len(val_dataset)), size=5)] # Iterate over the examples and predict the frames. predicted_videos = [] for example in examples: # Pick the first/last ten frames from the example. frames = example[:10, ...] original_frames = example[10:, ...] new_predictions = np.zeros(shape=(10, *frames[0].shape)) # Predict a new set of 10 frames. for i in range(10): # Extract the model's prediction and post-process it. frames = example[: 10 + i + 1, ...] new_prediction = model.predict(np.expand_dims(frames, axis=0)) new_prediction = np.squeeze(new_prediction, axis=0) predicted_frame = np.expand_dims(new_prediction[-1, ...], axis=0) # Extend the set of prediction frames. new_predictions[i] = predicted_frame # Create and save GIFs for each of the ground truth/prediction images. for frame_set in [original_frames, new_predictions]: # Construct a GIF from the selected video frames. current_frames = np.squeeze(frame_set) current_frames = current_frames[..., np.newaxis] * np.ones(3) current_frames = (current_frames * 255).astype(np.uint8) current_frames = list(current_frames) # Construct a GIF from the frames. with io.BytesIO() as gif: imageio.mimsave(gif, current_frames, \"GIF\", fps=5) predicted_videos.append(gif.getvalue()) # Display the videos. print(\" Truth\tPrediction\") for i in range(0, len(predicted_videos), 2): # Construct and display an `HBox` with the ground truth and prediction. box = HBox( [ widgets.Image(value=predicted_videos[i]), widgets.Image(value=predicted_videos[i + 1]), ] ) display(box) Truth Prediction Implementing RetinaNet: Focal Loss for Dense Object Detection. Introduction Object detection is a very important problem in computer vision. Here the model is tasked with localizing the objects present in an image, and at the same time, classifying them into different categories. Object detection models can be broadly classified into \"single-stage\" and \"two-stage\" detectors. Two-stage detectors are often more accurate but at the cost of being slower. Here in this example, we will implement RetinaNet, a popular single-stage detector, which is accurate and runs fast. RetinaNet uses a feature pyramid network to efficiently detect objects at multiple scales and introduces a new loss, the Focal loss function, to alleviate the problem of the extreme foreground-background class imbalance. References: RetinaNet Paper Feature Pyramid Network Paper import os import re import zipfile import numpy as np import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt import tensorflow_datasets as tfds Downloading the COCO2017 dataset Training on the entire COCO2017 dataset which has around 118k images takes a lot of time, hence we will be using a smaller subset of ~500 images for training in this example.
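Before downloading the data, here is a brief, hedged sketch of the focal loss idea mentioned in the introduction. It is a generic illustration (the function name, the binary formulation, and the defaults alpha = 0.25 and gamma = 2.0 are assumptions following the RetinaNet paper, not necessarily the exact loss used in the rest of this example): the cross-entropy of each prediction is scaled by (1 - p_t)^gamma, so confidently correct, easy examples contribute far less to the total loss than hard ones.

import tensorflow as tf

def focal_loss_sketch(y_true, logits, alpha=0.25, gamma=2.0):
    # Per-element focal loss on binary targets; names and shapes are illustrative only.
    ce = tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=logits)
    probs = tf.nn.sigmoid(logits)
    p_t = tf.where(tf.equal(y_true, 1.0), probs, 1.0 - probs)  # probability of the true class
    alpha_t = tf.where(tf.equal(y_true, 1.0), alpha, 1.0 - alpha)  # class-balancing weight
    return alpha_t * tf.pow(1.0 - p_t, gamma) * ce

# A confident correct prediction (logit 4.0) is down-weighted far more than a confident wrong one (logit -4.0).
print(focal_loss_sketch(tf.constant([1.0, 1.0]), tf.constant([4.0, -4.0])))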
url = \"https://github.com/srihari-humbarwadi/datasets/releases/download/v0.1.0/data.zip\" filename = os.path.join(os.getcwd(), \"data.zip\") keras.utils.get_file(filename, url) with zipfile.ZipFile(\"data.zip\", \"r\") as z_fp: z_fp.extractall(\"./\") Downloading data from https://github.com/srihari-humbarwadi/datasets/releases/download/v0.1.0/data.zip 560529408/560525318 [==============================] - 304s 1us/step Implementing utility functions Bounding boxes can be represented in multiple ways, the most common formats are: Storing the coordinates of the corners [xmin, ymin, xmax, ymax] Storing the coordinates of the center and the box dimensions [x, y, width, height] Since we require both formats, we will be implementing functions for converting between the formats. def swap_xy(boxes): \"\"\"Swaps order the of x and y coordinates of the boxes. Arguments: boxes: A tensor with shape `(num_boxes, 4)` representing bounding boxes. Returns: swapped boxes with shape same as that of boxes. \"\"\" return tf.stack([boxes[:, 1], boxes[:, 0], boxes[:, 3], boxes[:, 2]], axis=-1) def convert_to_xywh(boxes): \"\"\"Changes the box format to center, width and height. Arguments: boxes: A tensor of rank 2 or higher with a shape of `(..., num_boxes, 4)` representing bounding boxes where each box is of the format `[xmin, ymin, xmax, ymax]`. Returns: converted boxes with shape same as that of boxes. \"\"\" return tf.concat( [(boxes[..., :2] + boxes[..., 2:]) / 2.0, boxes[..., 2:] - boxes[..., :2]], axis=-1, ) def convert_to_corners(boxes): \"\"\"Changes the box format to corner coordinates Arguments: boxes: A tensor of rank 2 or higher with a shape of `(..., num_boxes, 4)` representing bounding boxes where each box is of the format `[x, y, width, height]`. Returns: converted boxes with shape same as that of boxes. \"\"\" return tf.concat( [boxes[..., :2] - boxes[..., 2:] / 2.0, boxes[..., :2] + boxes[..., 2:] / 2.0], axis=-1, ) Computing pairwise Intersection Over Union (IOU) As we will see later in the example, we would be assigning ground truth boxes to anchor boxes based on the extent of overlapping. This will require us to calculate the Intersection Over Union (IOU) between all the anchor boxes and ground truth boxes pairs. def compute_iou(boxes1, boxes2): \"\"\"Computes pairwise IOU matrix for given two sets of boxes Arguments: boxes1: A tensor with shape `(N, 4)` representing bounding boxes where each box is of the format `[x, y, width, height]`. boxes2: A tensor with shape `(M, 4)` representing bounding boxes where each box is of the format `[x, y, width, height]`. Returns: pairwise IOU matrix with shape `(N, M)`, where the value at ith row jth column holds the IOU between ith box and jth box from boxes1 and boxes2 respectively. 
\"\"\" boxes1_corners = convert_to_corners(boxes1) boxes2_corners = convert_to_corners(boxes2) lu = tf.maximum(boxes1_corners[:, None, :2], boxes2_corners[:, :2]) rd = tf.minimum(boxes1_corners[:, None, 2:], boxes2_corners[:, 2:]) intersection = tf.maximum(0.0, rd - lu) intersection_area = intersection[:, :, 0] * intersection[:, :, 1] boxes1_area = boxes1[:, 2] * boxes1[:, 3] boxes2_area = boxes2[:, 2] * boxes2[:, 3] union_area = tf.maximum( boxes1_area[:, None] + boxes2_area - intersection_area, 1e-8 ) return tf.clip_by_value(intersection_area / union_area, 0.0, 1.0) def visualize_detections( image, boxes, classes, scores, figsize=(7, 7), linewidth=1, color=[0, 0, 1] ): \"\"\"Visualize Detections\"\"\" image = np.array(image, dtype=np.uint8) plt.figure(figsize=figsize) plt.axis(\"off\") plt.imshow(image) ax = plt.gca() for box, _cls, score in zip(boxes, classes, scores): text = \"{}: {:.2f}\".format(_cls, score) x1, y1, x2, y2 = box w, h = x2 - x1, y2 - y1 patch = plt.Rectangle( [x1, y1], w, h, fill=False, edgecolor=color, linewidth=linewidth ) ax.add_patch(patch) ax.text( x1, y1, text, bbox={\"facecolor\": color, \"alpha\": 0.4}, clip_box=ax.clipbox, clip_on=True, ) plt.show() return ax Implementing Anchor generator Anchor boxes are fixed sized boxes that the model uses to predict the bounding box for an object. It does this by regressing the offset between the location of the object's center and the center of an anchor box, and then uses the width and height of the anchor box to predict a relative scale of the object. In the case of RetinaNet, each location on a given feature map has nine anchor boxes (at three scales and three ratios). class AnchorBox: \"\"\"Generates anchor boxes. This class has operations to generate anchor boxes for feature maps at strides `[8, 16, 32, 64, 128]`. Where each anchor each box is of the format `[x, y, width, height]`. Attributes: aspect_ratios: A list of float values representing the aspect ratios of the anchor boxes at each location on the feature map scales: A list of float values representing the scale of the anchor boxes at each location on the feature map. num_anchors: The number of anchor boxes at each location on feature map areas: A list of float values representing the areas of the anchor boxes for each feature map in the feature pyramid. strides: A list of float value representing the strides for each feature map in the feature pyramid. \"\"\" def __init__(self): self.aspect_ratios = [0.5, 1.0, 2.0] self.scales = [2 ** x for x in [0, 1 / 3, 2 / 3]] self._num_anchors = len(self.aspect_ratios) * len(self.scales) self._strides = [2 ** i for i in range(3, 8)] self._areas = [x ** 2 for x in [32.0, 64.0, 128.0, 256.0, 512.0]] self._anchor_dims = self._compute_dims() def _compute_dims(self): \"\"\"Computes anchor box dimensions for all ratios and scales at all levels of the feature pyramid. \"\"\" anchor_dims_all = [] for area in self._areas: anchor_dims = [] for ratio in self.aspect_ratios: anchor_height = tf.math.sqrt(area / ratio) anchor_width = area / anchor_height dims = tf.reshape( tf.stack([anchor_width, anchor_height], axis=-1), [1, 1, 2] ) for scale in self.scales: anchor_dims.append(scale * dims) anchor_dims_all.append(tf.stack(anchor_dims, axis=-2)) return anchor_dims_all def _get_anchors(self, feature_height, feature_width, level): \"\"\"Generates anchor boxes for a given feature map size and level Arguments: feature_height: An integer representing the height of the feature map. 
feature_width: An integer representing the width of the feature map. level: An integer representing the level of the feature map in the feature pyramid. Returns: anchor boxes with the shape `(feature_height * feature_width * num_anchors, 4)` \"\"\" rx = tf.range(feature_width, dtype=tf.float32) + 0.5 ry = tf.range(feature_height, dtype=tf.float32) + 0.5 centers = tf.stack(tf.meshgrid(rx, ry), axis=-1) * self._strides[level - 3] centers = tf.expand_dims(centers, axis=-2) centers = tf.tile(centers, [1, 1, self._num_anchors, 1]) dims = tf.tile( self._anchor_dims[level - 3], [feature_height, feature_width, 1, 1] ) anchors = tf.concat([centers, dims], axis=-1) return tf.reshape( anchors, [feature_height * feature_width * self._num_anchors, 4] ) def get_anchors(self, image_height, image_width): \"\"\"Generates anchor boxes for all the feature maps of the feature pyramid. Arguments: image_height: Height of the input image. image_width: Width of the input image. Returns: anchor boxes for all the feature maps, stacked as a single tensor with shape `(total_anchors, 4)` \"\"\" anchors = [ self._get_anchors( tf.math.ceil(image_height / 2 ** i), tf.math.ceil(image_width / 2 ** i), i, ) for i in range(3, 8) ] return tf.concat(anchors, axis=0) Preprocessing data Preprocessing the images involves two steps: Resizing the image: Images are resized such that the shortest size is equal to 800 px, after resizing if the longest side of the image exceeds 1333 px, the image is resized such that the longest size is now capped at 1333 px. Applying augmentation: Random scale jittering and random horizontal flipping are the only augmentations applied to the images. Along with the images, bounding boxes are rescaled and flipped if required. def random_flip_horizontal(image, boxes): \"\"\"Flips image and boxes horizontally with 50% chance Arguments: image: A 3-D tensor of shape `(height, width, channels)` representing an image. boxes: A tensor with shape `(num_boxes, 4)` representing bounding boxes, having normalized coordinates. Returns: Randomly flipped image and boxes \"\"\" if tf.random.uniform(()) > 0.5: image = tf.image.flip_left_right(image) boxes = tf.stack( [1 - boxes[:, 2], boxes[:, 1], 1 - boxes[:, 0], boxes[:, 3]], axis=-1 ) return image, boxes def resize_and_pad_image( image, min_side=800.0, max_side=1333.0, jitter=[640, 1024], stride=128.0 ): \"\"\"Resizes and pads image while preserving aspect ratio. 1. Resizes images so that the shorter side is equal to `min_side` 2. If the longer side is greater than `max_side`, then resize the image with longer side equal to `max_side` 3. Pad with zeros on right and bottom to make the image shape divisible by `stride` Arguments: image: A 3-D tensor of shape `(height, width, channels)` representing an image. min_side: The shorter side of the image is resized to this value, if `jitter` is set to None. max_side: If the longer side of the image exceeds this value after resizing, the image is resized such that the longer side now equals to this value. jitter: A list of floats containing minimum and maximum size for scale jittering. If available, the shorter side of the image will be resized to a random value in this range. stride: The stride of the smallest feature map in the feature pyramid. Can be calculated using `image_size / feature_map_size`. Returns: image: Resized and padded image. image_shape: Shape of the image before padding. 
ratio: The scaling factor used to resize the image \"\"\" image_shape = tf.cast(tf.shape(image)[:2], dtype=tf.float32) if jitter is not None: min_side = tf.random.uniform((), jitter[0], jitter[1], dtype=tf.float32) ratio = min_side / tf.reduce_min(image_shape) if ratio * tf.reduce_max(image_shape) > max_side: ratio = max_side / tf.reduce_max(image_shape) image_shape = ratio * image_shape image = tf.image.resize(image, tf.cast(image_shape, dtype=tf.int32)) padded_image_shape = tf.cast( tf.math.ceil(image_shape / stride) * stride, dtype=tf.int32 ) image = tf.image.pad_to_bounding_box( image, 0, 0, padded_image_shape[0], padded_image_shape[1] ) return image, image_shape, ratio def preprocess_data(sample): \"\"\"Applies preprocessing step to a single sample Arguments: sample: A dict representing a single training sample. Returns: image: Resized and padded image with random horizontal flipping applied. bbox: Bounding boxes with the shape `(num_objects, 4)` where each box is of the format `[x, y, width, height]`. class_id: An tensor representing the class id of the objects, having shape `(num_objects,)`. \"\"\" image = sample[\"image\"] bbox = swap_xy(sample[\"objects\"][\"bbox\"]) class_id = tf.cast(sample[\"objects\"][\"label\"], dtype=tf.int32) image, bbox = random_flip_horizontal(image, bbox) image, image_shape, _ = resize_and_pad_image(image) bbox = tf.stack( [ bbox[:, 0] * image_shape[1], bbox[:, 1] * image_shape[0], bbox[:, 2] * image_shape[1], bbox[:, 3] * image_shape[0], ], axis=-1, ) bbox = convert_to_xywh(bbox) return image, bbox, class_id Encoding labels The raw labels, consisting of bounding boxes and class ids need to be transformed into targets for training. This transformation consists of the following steps: Generating anchor boxes for the given image dimensions Assigning ground truth boxes to the anchor boxes The anchor boxes that are not assigned any objects, are either assigned the background class or ignored depending on the IOU Generating the classification and regression targets using anchor boxes class LabelEncoder: \"\"\"Transforms the raw labels into targets for training. This class has operations to generate targets for a batch of samples which is made up of the input images, bounding boxes for the objects present and their class ids. Attributes: anchor_box: Anchor box generator to encode the bounding boxes. box_variance: The scaling factors used to scale the bounding box targets. \"\"\" def __init__(self): self._anchor_box = AnchorBox() self._box_variance = tf.convert_to_tensor( [0.1, 0.1, 0.2, 0.2], dtype=tf.float32 ) def _match_anchor_boxes( self, anchor_boxes, gt_boxes, match_iou=0.5, ignore_iou=0.4 ): \"\"\"Matches ground truth boxes to anchor boxes based on IOU. 1. Calculates the pairwise IOU for the M `anchor_boxes` and N `gt_boxes` to get a `(M, N)` shaped matrix. 2. The ground truth box with the maximum IOU in each row is assigned to the anchor box provided the IOU is greater than `match_iou`. 3. If the maximum IOU in a row is less than `ignore_iou`, the anchor box is assigned with the background class. 4. The remaining anchor boxes that do not have any class assigned are ignored during training. Arguments: anchor_boxes: A float tensor with the shape `(total_anchors, 4)` representing all the anchor boxes for a given input image shape, where each anchor box is of the format `[x, y, width, height]`. gt_boxes: A float tensor with shape `(num_objects, 4)` representing the ground truth boxes, where each box is of the format `[x, y, width, height]`. 
match_iou: A float value representing the minimum IOU threshold for determining if a ground truth box can be assigned to an anchor box. ignore_iou: A float value representing the IOU threshold under which an anchor box is assigned to the background class. Returns: matched_gt_idx: Index of the matched object positive_mask: A mask for anchor boxes that have been assigned ground truth boxes. ignore_mask: A mask for anchor boxes that need to by ignored during training \"\"\" iou_matrix = compute_iou(anchor_boxes, gt_boxes) max_iou = tf.reduce_max(iou_matrix, axis=1) matched_gt_idx = tf.argmax(iou_matrix, axis=1) positive_mask = tf.greater_equal(max_iou, match_iou) negative_mask = tf.less(max_iou, ignore_iou) ignore_mask = tf.logical_not(tf.logical_or(positive_mask, negative_mask)) return ( matched_gt_idx, tf.cast(positive_mask, dtype=tf.float32), tf.cast(ignore_mask, dtype=tf.float32), ) def _compute_box_target(self, anchor_boxes, matched_gt_boxes): \"\"\"Transforms the ground truth boxes into targets for training\"\"\" box_target = tf.concat( [ (matched_gt_boxes[:, :2] - anchor_boxes[:, :2]) / anchor_boxes[:, 2:], tf.math.log(matched_gt_boxes[:, 2:] / anchor_boxes[:, 2:]), ], axis=-1, ) box_target = box_target / self._box_variance return box_target def _encode_sample(self, image_shape, gt_boxes, cls_ids): \"\"\"Creates box and classification targets for a single sample\"\"\" anchor_boxes = self._anchor_box.get_anchors(image_shape[1], image_shape[2]) cls_ids = tf.cast(cls_ids, dtype=tf.float32) matched_gt_idx, positive_mask, ignore_mask = self._match_anchor_boxes( anchor_boxes, gt_boxes ) matched_gt_boxes = tf.gather(gt_boxes, matched_gt_idx) box_target = self._compute_box_target(anchor_boxes, matched_gt_boxes) matched_gt_cls_ids = tf.gather(cls_ids, matched_gt_idx) cls_target = tf.where( tf.not_equal(positive_mask, 1.0), -1.0, matched_gt_cls_ids ) cls_target = tf.where(tf.equal(ignore_mask, 1.0), -2.0, cls_target) cls_target = tf.expand_dims(cls_target, axis=-1) label = tf.concat([box_target, cls_target], axis=-1) return label def encode_batch(self, batch_images, gt_boxes, cls_ids): \"\"\"Creates box and classification targets for a batch\"\"\" images_shape = tf.shape(batch_images) batch_size = images_shape[0] labels = tf.TensorArray(dtype=tf.float32, size=batch_size, dynamic_size=True) for i in range(batch_size): label = self._encode_sample(images_shape, gt_boxes[i], cls_ids[i]) labels = labels.write(i, label) batch_images = tf.keras.applications.resnet.preprocess_input(batch_images) return batch_images, labels.stack() Building the ResNet50 backbone RetinaNet uses a ResNet based backbone, using which a feature pyramid network is constructed. In the example we use ResNet50 as the backbone, and return the feature maps at strides 8, 16 and 32. def get_backbone(): \"\"\"Builds ResNet50 with pre-trained imagenet weights\"\"\" backbone = keras.applications.ResNet50( include_top=False, input_shape=[None, None, 3] ) c3_output, c4_output, c5_output = [ backbone.get_layer(layer_name).output for layer_name in [\"conv3_block4_out\", \"conv4_block6_out\", \"conv5_block3_out\"] ] return keras.Model( inputs=[backbone.inputs], outputs=[c3_output, c4_output, c5_output] ) Building Feature Pyramid Network as a custom layer class FeaturePyramid(keras.layers.Layer): \"\"\"Builds the Feature Pyramid with the feature maps from the backbone. Attributes: num_classes: Number of classes in the dataset. backbone: The backbone to build the feature pyramid from. Currently supports ResNet50 only. 
\"\"\" def __init__(self, backbone=None, **kwargs): super(FeaturePyramid, self).__init__(name=\"FeaturePyramid\", **kwargs) self.backbone = backbone if backbone else get_backbone() self.conv_c3_1x1 = keras.layers.Conv2D(256, 1, 1, \"same\") self.conv_c4_1x1 = keras.layers.Conv2D(256, 1, 1, \"same\") self.conv_c5_1x1 = keras.layers.Conv2D(256, 1, 1, \"same\") self.conv_c3_3x3 = keras.layers.Conv2D(256, 3, 1, \"same\") self.conv_c4_3x3 = keras.layers.Conv2D(256, 3, 1, \"same\") self.conv_c5_3x3 = keras.layers.Conv2D(256, 3, 1, \"same\") self.conv_c6_3x3 = keras.layers.Conv2D(256, 3, 2, \"same\") self.conv_c7_3x3 = keras.layers.Conv2D(256, 3, 2, \"same\") self.upsample_2x = keras.layers.UpSampling2D(2) def call(self, images, training=False): c3_output, c4_output, c5_output = self.backbone(images, training=training) p3_output = self.conv_c3_1x1(c3_output) p4_output = self.conv_c4_1x1(c4_output) p5_output = self.conv_c5_1x1(c5_output) p4_output = p4_output + self.upsample_2x(p5_output) p3_output = p3_output + self.upsample_2x(p4_output) p3_output = self.conv_c3_3x3(p3_output) p4_output = self.conv_c4_3x3(p4_output) p5_output = self.conv_c5_3x3(p5_output) p6_output = self.conv_c6_3x3(c5_output) p7_output = self.conv_c7_3x3(tf.nn.relu(p6_output)) return p3_output, p4_output, p5_output, p6_output, p7_output Building the classification and box regression heads. The RetinaNet model has separate heads for bounding box regression and for predicting class probabilities for the objects. These heads are shared between all the feature maps of the feature pyramid. def build_head(output_filters, bias_init): \"\"\"Builds the class/box predictions head. Arguments: output_filters: Number of convolution filters in the final layer. bias_init: Bias Initializer for the final convolution layer. Returns: A keras sequential model representing either the classification or the box regression head depending on `output_filters`. \"\"\" head = keras.Sequential([keras.Input(shape=[None, None, 256])]) kernel_init = tf.initializers.RandomNormal(0.0, 0.01) for _ in range(4): head.add( keras.layers.Conv2D(256, 3, padding=\"same\", kernel_initializer=kernel_init) ) head.add(keras.layers.ReLU()) head.add( keras.layers.Conv2D( output_filters, 3, 1, padding=\"same\", kernel_initializer=kernel_init, bias_initializer=bias_init, ) ) return head Building RetinaNet using a subclassed model class RetinaNet(keras.Model): \"\"\"A subclassed Keras model implementing the RetinaNet architecture. Attributes: num_classes: Number of classes in the dataset. backbone: The backbone to build the feature pyramid from. Currently supports ResNet50 only. 
\"\"\" def __init__(self, num_classes, backbone=None, **kwargs): super(RetinaNet, self).__init__(name=\"RetinaNet\", **kwargs) self.fpn = FeaturePyramid(backbone) self.num_classes = num_classes prior_probability = tf.constant_initializer(-np.log((1 - 0.01) / 0.01)) self.cls_head = build_head(9 * num_classes, prior_probability) self.box_head = build_head(9 * 4, \"zeros\") def call(self, image, training=False): features = self.fpn(image, training=training) N = tf.shape(image)[0] cls_outputs = [] box_outputs = [] for feature in features: box_outputs.append(tf.reshape(self.box_head(feature), [N, -1, 4])) cls_outputs.append( tf.reshape(self.cls_head(feature), [N, -1, self.num_classes]) ) cls_outputs = tf.concat(cls_outputs, axis=1) box_outputs = tf.concat(box_outputs, axis=1) return tf.concat([box_outputs, cls_outputs], axis=-1) Implementing a custom layer to decode predictions class DecodePredictions(tf.keras.layers.Layer): \"\"\"A Keras layer that decodes predictions of the RetinaNet model. Attributes: num_classes: Number of classes in the dataset confidence_threshold: Minimum class probability, below which detections are pruned. nms_iou_threshold: IOU threshold for the NMS operation max_detections_per_class: Maximum number of detections to retain per class. max_detections: Maximum number of detections to retain across all classes. box_variance: The scaling factors used to scale the bounding box predictions. \"\"\" def __init__( self, num_classes=80, confidence_threshold=0.05, nms_iou_threshold=0.5, max_detections_per_class=100, max_detections=100, box_variance=[0.1, 0.1, 0.2, 0.2], **kwargs ): super(DecodePredictions, self).__init__(**kwargs) self.num_classes = num_classes self.confidence_threshold = confidence_threshold self.nms_iou_threshold = nms_iou_threshold self.max_detections_per_class = max_detections_per_class self.max_detections = max_detections self._anchor_box = AnchorBox() self._box_variance = tf.convert_to_tensor( [0.1, 0.1, 0.2, 0.2], dtype=tf.float32 ) def _decode_box_predictions(self, anchor_boxes, box_predictions): boxes = box_predictions * self._box_variance boxes = tf.concat( [ boxes[:, :, :2] * anchor_boxes[:, :, 2:] + anchor_boxes[:, :, :2], tf.math.exp(boxes[:, :, 2:]) * anchor_boxes[:, :, 2:], ], axis=-1, ) boxes_transformed = convert_to_corners(boxes) return boxes_transformed def call(self, images, predictions): image_shape = tf.cast(tf.shape(images), dtype=tf.float32) anchor_boxes = self._anchor_box.get_anchors(image_shape[1], image_shape[2]) box_predictions = predictions[:, :, :4] cls_predictions = tf.nn.sigmoid(predictions[:, :, 4:]) boxes = self._decode_box_predictions(anchor_boxes[None, ...], box_predictions) return tf.image.combined_non_max_suppression( tf.expand_dims(boxes, axis=2), cls_predictions, self.max_detections_per_class, self.max_detections, self.nms_iou_threshold, self.confidence_threshold, clip_boxes=False, ) Implementing Smooth L1 loss and Focal Loss as keras custom losses class RetinaNetBoxLoss(tf.losses.Loss): \"\"\"Implements Smooth L1 loss\"\"\" def __init__(self, delta): super(RetinaNetBoxLoss, self).__init__( reduction=\"none\", name=\"RetinaNetBoxLoss\" ) self._delta = delta def call(self, y_true, y_pred): difference = y_true - y_pred absolute_difference = tf.abs(difference) squared_difference = difference ** 2 loss = tf.where( tf.less(absolute_difference, self._delta), 0.5 * squared_difference, absolute_difference - 0.5, ) return tf.reduce_sum(loss, axis=-1) class RetinaNetClassificationLoss(tf.losses.Loss): \"\"\"Implements Focal 
loss\"\"\" def __init__(self, alpha, gamma): super(RetinaNetClassificationLoss, self).__init__( reduction=\"none\", name=\"RetinaNetClassificationLoss\" ) self._alpha = alpha self._gamma = gamma def call(self, y_true, y_pred): cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits( labels=y_true, logits=y_pred ) probs = tf.nn.sigmoid(y_pred) alpha = tf.where(tf.equal(y_true, 1.0), self._alpha, (1.0 - self._alpha)) pt = tf.where(tf.equal(y_true, 1.0), probs, 1 - probs) loss = alpha * tf.pow(1.0 - pt, self._gamma) * cross_entropy return tf.reduce_sum(loss, axis=-1) class RetinaNetLoss(tf.losses.Loss): \"\"\"Wrapper to combine both the losses\"\"\" def __init__(self, num_classes=80, alpha=0.25, gamma=2.0, delta=1.0): super(RetinaNetLoss, self).__init__(reduction=\"auto\", name=\"RetinaNetLoss\") self._clf_loss = RetinaNetClassificationLoss(alpha, gamma) self._box_loss = RetinaNetBoxLoss(delta) self._num_classes = num_classes def call(self, y_true, y_pred): y_pred = tf.cast(y_pred, dtype=tf.float32) box_labels = y_true[:, :, :4] box_predictions = y_pred[:, :, :4] cls_labels = tf.one_hot( tf.cast(y_true[:, :, 4], dtype=tf.int32), depth=self._num_classes, dtype=tf.float32, ) cls_predictions = y_pred[:, :, 4:] positive_mask = tf.cast(tf.greater(y_true[:, :, 4], -1.0), dtype=tf.float32) ignore_mask = tf.cast(tf.equal(y_true[:, :, 4], -2.0), dtype=tf.float32) clf_loss = self._clf_loss(cls_labels, cls_predictions) box_loss = self._box_loss(box_labels, box_predictions) clf_loss = tf.where(tf.equal(ignore_mask, 1.0), 0.0, clf_loss) box_loss = tf.where(tf.equal(positive_mask, 1.0), box_loss, 0.0) normalizer = tf.reduce_sum(positive_mask, axis=-1) clf_loss = tf.math.divide_no_nan(tf.reduce_sum(clf_loss, axis=-1), normalizer) box_loss = tf.math.divide_no_nan(tf.reduce_sum(box_loss, axis=-1), normalizer) loss = clf_loss + box_loss return loss Setting up training parameters model_dir = \"retinanet/\" label_encoder = LabelEncoder() num_classes = 80 batch_size = 2 learning_rates = [2.5e-06, 0.000625, 0.00125, 0.0025, 0.00025, 2.5e-05] learning_rate_boundaries = [125, 250, 500, 240000, 360000] learning_rate_fn = tf.optimizers.schedules.PiecewiseConstantDecay( boundaries=learning_rate_boundaries, values=learning_rates ) Initializing and compiling model resnet50_backbone = get_backbone() loss_fn = RetinaNetLoss(num_classes) model = RetinaNet(num_classes, resnet50_backbone) optimizer = tf.optimizers.SGD(learning_rate=learning_rate_fn, momentum=0.9) model.compile(loss=loss_fn, optimizer=optimizer) Setting up callbacks callbacks_list = [ tf.keras.callbacks.ModelCheckpoint( filepath=os.path.join(model_dir, \"weights\" + \"_epoch_{epoch}\"), monitor=\"loss\", save_best_only=False, save_weights_only=True, verbose=1, ) ] Load the COCO2017 dataset using TensorFlow Datasets # set `data_dir=None` to load the complete dataset (train_dataset, val_dataset), dataset_info = tfds.load( \"coco/2017\", split=[\"train\", \"validation\"], with_info=True, data_dir=\"data\" ) Setting up a tf.data pipeline To ensure that the model is fed with data efficiently we will be using tf.data API to create our input pipeline. The input pipeline consists for the following major processing steps: Apply the preprocessing function to the samples Create batches with fixed batch size. 
Since images in the batch can have different dimensions, and can also have different number of objects, we use padded_batch to the add the necessary padding to create rectangular tensors Create targets for each sample in the batch using LabelEncoder autotune = tf.data.AUTOTUNE train_dataset = train_dataset.map(preprocess_data, num_parallel_calls=autotune) train_dataset = train_dataset.shuffle(8 * batch_size) train_dataset = train_dataset.padded_batch( batch_size=batch_size, padding_values=(0.0, 1e-8, -1), drop_remainder=True ) train_dataset = train_dataset.map( label_encoder.encode_batch, num_parallel_calls=autotune ) train_dataset = train_dataset.apply(tf.data.experimental.ignore_errors()) train_dataset = train_dataset.prefetch(autotune) val_dataset = val_dataset.map(preprocess_data, num_parallel_calls=autotune) val_dataset = val_dataset.padded_batch( batch_size=1, padding_values=(0.0, 1e-8, -1), drop_remainder=True ) val_dataset = val_dataset.map(label_encoder.encode_batch, num_parallel_calls=autotune) val_dataset = val_dataset.apply(tf.data.experimental.ignore_errors()) val_dataset = val_dataset.prefetch(autotune) Training the model # Uncomment the following lines, when training on full dataset # train_steps_per_epoch = dataset_info.splits[\"train\"].num_examples // batch_size # val_steps_per_epoch = \ # dataset_info.splits[\"validation\"].num_examples // batch_size # train_steps = 4 * 100000 # epochs = train_steps // train_steps_per_epoch epochs = 1 # Running 100 training and 50 validation steps, # remove `.take` when training on the full dataset model.fit( train_dataset.take(100), validation_data=val_dataset.take(50), epochs=epochs, callbacks=callbacks_list, verbose=1, ) 100/100 [==============================] - ETA: 0s - loss: 4.0953 Epoch 00001: saving model to retinanet/weights_epoch_1 100/100 [==============================] - 68s 679ms/step - loss: 4.0953 - val_loss: 4.0821 Loading weights # Change this to `model_dir` when not using the downloaded weights weights_dir = \"data\" latest_checkpoint = tf.train.latest_checkpoint(weights_dir) model.load_weights(latest_checkpoint) Building inference model image = tf.keras.Input(shape=[None, None, 3], name=\"image\") predictions = model(image, training=False) detections = DecodePredictions(confidence_threshold=0.5)(image, predictions) inference_model = tf.keras.Model(inputs=image, outputs=detections) Generating detections def prepare_image(image): image, _, ratio = resize_and_pad_image(image, jitter=None) image = tf.keras.applications.resnet.preprocess_input(image) return tf.expand_dims(image, axis=0), ratio val_dataset = tfds.load(\"coco/2017\", split=\"validation\", data_dir=\"data\") int2str = dataset_info.features[\"objects\"][\"label\"].int2str for sample in val_dataset.take(2): image = tf.cast(sample[\"image\"], dtype=tf.float32) input_image, ratio = prepare_image(image) detections = inference_model.predict(input_image) num_detections = detections.valid_detections[0] class_names = [ int2str(int(x)) for x in detections.nmsed_classes[0][:num_detections] ] visualize_detections( image, detections.nmsed_boxes[0][:num_detections] / ratio, class_names, detections.nmsed_scores[0][:num_detections], ) png png How to implement an OCR model using CNNs, RNNs and CTC loss. Introduction This example demonstrates a simple OCR model built with the Functional API. Apart from combining CNN and RNN, it also illustrates how you can instantiate a new layer and use it as an \"Endpoint layer\" for implementing CTC loss. 
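To make the \"Endpoint layer\" idea concrete before diving into the code, here is a minimal, generic sketch of the pattern, using a placeholder mean-squared-error loss rather than CTC (the actual CTCLayer used by this example is defined further below): a layer that receives both targets and predictions, registers the loss via self.add_loss(), and simply passes the predictions through.
import tensorflow as tf
from tensorflow import keras

class LossEndpoint(keras.layers.Layer):
    def call(self, y_true, y_pred):
        # Register any differentiable loss on the layer; Keras adds it to the total loss.
        self.add_loss(tf.reduce_mean(tf.square(y_true - y_pred)))
        # The layer is otherwise transparent: it just returns the predictions.
        return y_pred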
For a detailed guide to layer subclassing, please check out this page in the developer guides. Setup import os import numpy as np import matplotlib.pyplot as plt from pathlib import Path from collections import Counter import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Load the data: Captcha Images Let's download the data. !curl -LO https://github.com/AakashKumarNain/CaptchaCracker/raw/master/captcha_images_v2.zip !unzip -qq captcha_images_v2.zip % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 159 100 159 0 0 164 0 --:--:-- --:--:-- --:--:-- 164 100 8863k 100 8863k 0 0 4882k 0 0:00:01 0:00:01 --:--:-- 33.0M The dataset contains 1040 captcha files as png images. The label for each sample is a string, the name of the file (minus the file extension). We will map each character in the string to an integer for training the model. Similarly, we will need to map the predictions of the model back to strings. For this purpose we will maintain two dictionaries, mapping characters to integers, and integers to characters, respectively. # Path to the data directory data_dir = Path(\"./captcha_images_v2/\") # Get list of all the images images = sorted(list(map(str, list(data_dir.glob(\"*.png\"))))) labels = [img.split(os.path.sep)[-1].split(\".png\")[0] for img in images] characters = set(char for label in labels for char in label) print(\"Number of images found: \", len(images)) print(\"Number of labels found: \", len(labels)) print(\"Number of unique characters: \", len(characters)) print(\"Characters present: \", characters) # Batch size for training and validation batch_size = 16 # Desired image dimensions img_width = 200 img_height = 50 # Factor by which the image is going to be downsampled # by the convolutional blocks. We will be using two # convolution blocks and each block will have # a pooling layer which downsamples the features by a factor of 2. # Hence, the total downsampling factor is 4. downsample_factor = 4 # Maximum length of any captcha in the dataset max_length = max([len(label) for label in labels]) Number of images found: 1040 Number of labels found: 1040 Number of unique characters: 19 Characters present: {'d', 'w', 'y', '4', 'f', '6', 'g', 'e', '3', '5', 'p', 'x', '2', 'c', '7', 'n', 'b', '8', 'm'} Preprocessing # Mapping characters to integers char_to_num = layers.StringLookup( vocabulary=list(characters), mask_token=None ) # Mapping integers back to original characters num_to_char = layers.StringLookup( vocabulary=char_to_num.get_vocabulary(), mask_token=None, invert=True ) def split_data(images, labels, train_size=0.9, shuffle=True): # 1. Get the total size of the dataset size = len(images) # 2. Make an indices array and shuffle it, if required indices = np.arange(size) if shuffle: np.random.shuffle(indices) # 3. Get the size of training samples train_samples = int(size * train_size) # 4. Split data into training and validation sets x_train, y_train = images[indices[:train_samples]], labels[indices[:train_samples]] x_valid, y_valid = images[indices[train_samples:]], labels[indices[train_samples:]] return x_train, x_valid, y_train, y_valid # Splitting data into training and validation sets x_train, x_valid, y_train, y_valid = split_data(np.array(images), np.array(labels)) def encode_single_sample(img_path, label): # 1. Read image img = tf.io.read_file(img_path) # 2. Decode and convert to grayscale img = tf.io.decode_png(img, channels=1) # 3.
Convert to float32 in [0, 1] range img = tf.image.convert_image_dtype(img, tf.float32) # 4. Resize to the desired size img = tf.image.resize(img, [img_height, img_width]) # 5. Transpose the image because we want the time # dimension to correspond to the width of the image. img = tf.transpose(img, perm=[1, 0, 2]) # 6. Map the characters in label to numbers label = char_to_num(tf.strings.unicode_split(label, input_encoding=\"UTF-8\")) # 7. Return a dict as our model is expecting two inputs return {\"image\": img, \"label\": label} Create Dataset objects train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = ( train_dataset.map( encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE ) .batch(batch_size) .prefetch(buffer_size=tf.data.AUTOTUNE) ) validation_dataset = tf.data.Dataset.from_tensor_slices((x_valid, y_valid)) validation_dataset = ( validation_dataset.map( encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE ) .batch(batch_size) .prefetch(buffer_size=tf.data.AUTOTUNE) ) Visualize the data _, ax = plt.subplots(4, 4, figsize=(10, 5)) for batch in train_dataset.take(1): images = batch[\"image\"] labels = batch[\"label\"] for i in range(16): img = (images[i] * 255).numpy().astype(\"uint8\") label = tf.strings.reduce_join(num_to_char(labels[i])).numpy().decode(\"utf-8\") ax[i // 4, i % 4].imshow(img[:, :, 0].T, cmap=\"gray\") ax[i // 4, i % 4].set_title(label) ax[i // 4, i % 4].axis(\"off\") plt.show() png Model class CTCLayer(layers.Layer): def __init__(self, name=None): super().__init__(name=name) self.loss_fn = keras.backend.ctc_batch_cost def call(self, y_true, y_pred): # Compute the training-time loss value and add it # to the layer using `self.add_loss()`. batch_len = tf.cast(tf.shape(y_true)[0], dtype=\"int64\") input_length = tf.cast(tf.shape(y_pred)[1], dtype=\"int64\") label_length = tf.cast(tf.shape(y_true)[1], dtype=\"int64\") input_length = input_length * tf.ones(shape=(batch_len, 1), dtype=\"int64\") label_length = label_length * tf.ones(shape=(batch_len, 1), dtype=\"int64\") loss = self.loss_fn(y_true, y_pred, input_length, label_length) self.add_loss(loss) # At test time, just return the computed predictions return y_pred def build_model(): # Inputs to the model input_img = layers.Input( shape=(img_width, img_height, 1), name=\"image\", dtype=\"float32\" ) labels = layers.Input(name=\"label\", shape=(None,), dtype=\"float32\") # First conv block x = layers.Conv2D( 32, (3, 3), activation=\"relu\", kernel_initializer=\"he_normal\", padding=\"same\", name=\"Conv1\", )(input_img) x = layers.MaxPooling2D((2, 2), name=\"pool1\")(x) # Second conv block x = layers.Conv2D( 64, (3, 3), activation=\"relu\", kernel_initializer=\"he_normal\", padding=\"same\", name=\"Conv2\", )(x) x = layers.MaxPooling2D((2, 2), name=\"pool2\")(x) # We have used two max pool with pool size and strides 2. # Hence, downsampled feature maps are 4x smaller. The number of # filters in the last layer is 64. 
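# (Concretely: the transposed input is 200 x 50, so after the two 2x2 poolings the feature
# map is 50 x 12 with 64 channels, i.e. 12 * 64 = 768 features per time step, which matches
# the (None, 50, 768) reshape output shown in the model summary below.)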
Reshape accordingly before # passing the output to the RNN part of the model new_shape = ((img_width // 4), (img_height // 4) * 64) x = layers.Reshape(target_shape=new_shape, name=\"reshape\")(x) x = layers.Dense(64, activation=\"relu\", name=\"dense1\")(x) x = layers.Dropout(0.2)(x) # RNNs x = layers.Bidirectional(layers.LSTM(128, return_sequences=True, dropout=0.25))(x) x = layers.Bidirectional(layers.LSTM(64, return_sequences=True, dropout=0.25))(x) # Output layer x = layers.Dense( len(char_to_num.get_vocabulary()) + 1, activation=\"softmax\", name=\"dense2\" )(x) # Add CTC layer for calculating CTC loss at each step output = CTCLayer(name=\"ctc_loss\")(labels, x) # Define the model model = keras.models.Model( inputs=[input_img, labels], outputs=output, name=\"ocr_model_v1\" ) # Optimizer opt = keras.optimizers.Adam() # Compile the model and return model.compile(optimizer=opt) return model # Get the model model = build_model() model.summary() Model: \"ocr_model_v1\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== image (InputLayer) [(None, 200, 50, 1)] 0 __________________________________________________________________________________________________ Conv1 (Conv2D) (None, 200, 50, 32) 320 image[0][0] __________________________________________________________________________________________________ pool1 (MaxPooling2D) (None, 100, 25, 32) 0 Conv1[0][0] __________________________________________________________________________________________________ Conv2 (Conv2D) (None, 100, 25, 64) 18496 pool1[0][0] __________________________________________________________________________________________________ pool2 (MaxPooling2D) (None, 50, 12, 64) 0 Conv2[0][0] __________________________________________________________________________________________________ reshape (Reshape) (None, 50, 768) 0 pool2[0][0] __________________________________________________________________________________________________ dense1 (Dense) (None, 50, 64) 49216 reshape[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 50, 64) 0 dense1[0][0] __________________________________________________________________________________________________ bidirectional (Bidirectional) (None, 50, 256) 197632 dropout[0][0] __________________________________________________________________________________________________ bidirectional_1 (Bidirectional) (None, 50, 128) 164352 bidirectional[0][0] __________________________________________________________________________________________________ label (InputLayer) [(None, None)] 0 __________________________________________________________________________________________________ dense2 (Dense) (None, 50, 20) 2580 bidirectional_1[0][0] __________________________________________________________________________________________________ ctc_loss (CTCLayer) (None, 50, 20) 0 label[0][0] dense2[0][0] ================================================================================================== Total params: 432,596 Trainable params: 432,596 Non-trainable params: 0 __________________________________________________________________________________________________ Training epochs = 100 early_stopping_patience = 10 # Add early stopping early_stopping = keras.callbacks.EarlyStopping( monitor=\"val_loss\", 
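# Stop when `val_loss` has not improved for `early_stopping_patience` (10) epochs,
# and roll the model back to the weights from its best epoch.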
patience=early_stopping_patience, restore_best_weights=True ) # Train the model history = model.fit( train_dataset, validation_data=validation_dataset, epochs=epochs, callbacks=[early_stopping], ) Epoch 1/100 59/59 [==============================] - 3s 53ms/step - loss: 21.5722 - val_loss: 16.3351 Epoch 2/100 59/59 [==============================] - 2s 27ms/step - loss: 16.3335 - val_loss: 16.3062 Epoch 3/100 59/59 [==============================] - 2s 27ms/step - loss: 16.3360 - val_loss: 16.3116 Epoch 4/100 59/59 [==============================] - 2s 27ms/step - loss: 16.3318 - val_loss: 16.3167 Epoch 5/100 59/59 [==============================] - 2s 27ms/step - loss: 16.3256 - val_loss: 16.3152 Epoch 6/100 59/59 [==============================] - 2s 29ms/step - loss: 16.3229 - val_loss: 16.3123 Epoch 7/100 59/59 [==============================] - 2s 30ms/step - loss: 16.3119 - val_loss: 16.3116 Epoch 8/100 59/59 [==============================] - 2s 27ms/step - loss: 16.2977 - val_loss: 16.3107 Epoch 9/100 59/59 [==============================] - 2s 28ms/step - loss: 16.2801 - val_loss: 16.2552 Epoch 10/100 59/59 [==============================] - 2s 28ms/step - loss: 16.2199 - val_loss: 16.1008 Epoch 11/100 59/59 [==============================] - 2s 28ms/step - loss: 16.1136 - val_loss: 15.9867 Epoch 12/100 59/59 [==============================] - 2s 30ms/step - loss: 16.0138 - val_loss: 15.8825 Epoch 13/100 59/59 [==============================] - 2s 29ms/step - loss: 15.9670 - val_loss: 15.8413 Epoch 14/100 59/59 [==============================] - 2s 29ms/step - loss: 15.9315 - val_loss: 15.8263 Epoch 15/100 59/59 [==============================] - 2s 31ms/step - loss: 15.9162 - val_loss: 15.7971 Epoch 16/100 59/59 [==============================] - 2s 31ms/step - loss: 15.8916 - val_loss: 15.7844 Epoch 17/100 59/59 [==============================] - 2s 31ms/step - loss: 15.8653 - val_loss: 15.7624 Epoch 18/100 59/59 [==============================] - 2s 31ms/step - loss: 15.8543 - val_loss: 15.7620 Epoch 19/100 59/59 [==============================] - 2s 28ms/step - loss: 15.8373 - val_loss: 15.7559 Epoch 20/100 59/59 [==============================] - 2s 27ms/step - loss: 15.8319 - val_loss: 15.7495 Epoch 21/100 59/59 [==============================] - 2s 27ms/step - loss: 15.8104 - val_loss: 15.7430 Epoch 22/100 59/59 [==============================] - 2s 29ms/step - loss: 15.8037 - val_loss: 15.7260 Epoch 23/100 59/59 [==============================] - 2s 29ms/step - loss: 15.8021 - val_loss: 15.7204 Epoch 24/100 59/59 [==============================] - 2s 28ms/step - loss: 15.7901 - val_loss: 15.7174 Epoch 25/100 59/59 [==============================] - 2s 29ms/step - loss: 15.7851 - val_loss: 15.7074 Epoch 26/100 59/59 [==============================] - 2s 27ms/step - loss: 15.7701 - val_loss: 15.7097 Epoch 27/100 59/59 [==============================] - 2s 28ms/step - loss: 15.7694 - val_loss: 15.7040 Epoch 28/100 59/59 [==============================] - 2s 28ms/step - loss: 15.7544 - val_loss: 15.7012 Epoch 29/100 59/59 [==============================] - 2s 31ms/step - loss: 15.7498 - val_loss: 15.7015 Epoch 30/100 59/59 [==============================] - 2s 31ms/step - loss: 15.7521 - val_loss: 15.6880 Epoch 31/100 59/59 [==============================] - 2s 29ms/step - loss: 15.7165 - val_loss: 15.6734 Epoch 32/100 59/59 [==============================] - 2s 27ms/step - loss: 15.6650 - val_loss: 15.5789 Epoch 33/100 59/59 [==============================] - 2s 27ms/step - 
loss: 15.5300 - val_loss: 15.4026 Epoch 34/100 59/59 [==============================] - 2s 27ms/step - loss: 15.3519 - val_loss: 15.2115 Epoch 35/100 59/59 [==============================] - 2s 27ms/step - loss: 15.1165 - val_loss: 14.7826 Epoch 36/100 59/59 [==============================] - 2s 27ms/step - loss: 14.7086 - val_loss: 14.4432 Epoch 37/100 59/59 [==============================] - 2s 29ms/step - loss: 14.3317 - val_loss: 13.9445 Epoch 38/100 59/59 [==============================] - 2s 29ms/step - loss: 13.9658 - val_loss: 13.6972 Epoch 39/100 59/59 [==============================] - 2s 29ms/step - loss: 13.6728 - val_loss: 13.3388 Epoch 40/100 59/59 [==============================] - 2s 28ms/step - loss: 13.3454 - val_loss: 13.0102 Epoch 41/100 59/59 [==============================] - 2s 27ms/step - loss: 13.0448 - val_loss: 12.8307 Epoch 42/100 59/59 [==============================] - 2s 28ms/step - loss: 12.7552 - val_loss: 12.6071 Epoch 43/100 59/59 [==============================] - 2s 29ms/step - loss: 12.4573 - val_loss: 12.2800 Epoch 44/100 59/59 [==============================] - 2s 31ms/step - loss: 12.1055 - val_loss: 11.9209 Epoch 45/100 59/59 [==============================] - 2s 28ms/step - loss: 11.8148 - val_loss: 11.9132 Epoch 46/100 59/59 [==============================] - 2s 28ms/step - loss: 11.4530 - val_loss: 11.4357 Epoch 47/100 59/59 [==============================] - 2s 29ms/step - loss: 11.0592 - val_loss: 11.1121 Epoch 48/100 59/59 [==============================] - 2s 27ms/step - loss: 10.7746 - val_loss: 10.8532 Epoch 49/100 59/59 [==============================] - 2s 28ms/step - loss: 10.2616 - val_loss: 10.3643 Epoch 50/100 59/59 [==============================] - 2s 28ms/step - loss: 9.8708 - val_loss: 10.0987 Epoch 51/100 59/59 [==============================] - 2s 30ms/step - loss: 9.4077 - val_loss: 9.6371 Epoch 52/100 59/59 [==============================] - 2s 29ms/step - loss: 9.0663 - val_loss: 9.2463 Epoch 53/100 59/59 [==============================] - 2s 28ms/step - loss: 8.4546 - val_loss: 8.7581 Epoch 54/100 59/59 [==============================] - 2s 28ms/step - loss: 7.9226 - val_loss: 8.1805 Epoch 55/100 59/59 [==============================] - 2s 27ms/step - loss: 7.4927 - val_loss: 7.8858 Epoch 56/100 59/59 [==============================] - 2s 28ms/step - loss: 7.0499 - val_loss: 7.3202 Epoch 57/100 59/59 [==============================] - 2s 27ms/step - loss: 6.6383 - val_loss: 7.0875 Epoch 58/100 59/59 [==============================] - 2s 28ms/step - loss: 6.1446 - val_loss: 6.9619 Epoch 59/100 59/59 [==============================] - 2s 28ms/step - loss: 5.8533 - val_loss: 6.3855 Epoch 60/100 59/59 [==============================] - 2s 28ms/step - loss: 5.5107 - val_loss: 5.9797 Epoch 61/100 59/59 [==============================] - 2s 31ms/step - loss: 5.1181 - val_loss: 5.7549 Epoch 62/100 59/59 [==============================] - 2s 31ms/step - loss: 4.6952 - val_loss: 5.5488 Epoch 63/100 59/59 [==============================] - 2s 29ms/step - loss: 4.4189 - val_loss: 5.3030 Epoch 64/100 59/59 [==============================] - 2s 28ms/step - loss: 4.1358 - val_loss: 5.1772 Epoch 65/100 59/59 [==============================] - 2s 28ms/step - loss: 3.8560 - val_loss: 5.1071 Epoch 66/100 59/59 [==============================] - 2s 28ms/step - loss: 3.5342 - val_loss: 4.6958 Epoch 67/100 59/59 [==============================] - 2s 28ms/step - loss: 3.3336 - val_loss: 4.5865 Epoch 68/100 59/59 [==============================] - 
2s 27ms/step - loss: 3.0925 - val_loss: 4.3647 Epoch 69/100 59/59 [==============================] - 2s 28ms/step - loss: 2.8751 - val_loss: 4.3005 Epoch 70/100 59/59 [==============================] - 2s 27ms/step - loss: 2.7444 - val_loss: 4.0820 Epoch 71/100 59/59 [==============================] - 2s 27ms/step - loss: 2.5921 - val_loss: 4.1694 Epoch 72/100 59/59 [==============================] - 2s 28ms/step - loss: 2.3246 - val_loss: 3.9142 Epoch 73/100 59/59 [==============================] - 2s 28ms/step - loss: 2.0769 - val_loss: 3.9135 Epoch 74/100 59/59 [==============================] - 2s 29ms/step - loss: 2.0872 - val_loss: 3.9808 Epoch 75/100 59/59 [==============================] - 2s 29ms/step - loss: 1.9498 - val_loss: 3.9935 Epoch 76/100 59/59 [==============================] - 2s 28ms/step - loss: 1.8178 - val_loss: 3.7735 Epoch 77/100 59/59 [==============================] - 2s 29ms/step - loss: 1.7661 - val_loss: 3.6309 Epoch 78/100 59/59 [==============================] - 2s 31ms/step - loss: 1.6236 - val_loss: 3.7410 Epoch 79/100 59/59 [==============================] - 2s 29ms/step - loss: 1.4652 - val_loss: 3.6756 Epoch 80/100 59/59 [==============================] - 2s 27ms/step - loss: 1.3552 - val_loss: 3.4979 Epoch 81/100 59/59 [==============================] - 2s 29ms/step - loss: 1.2655 - val_loss: 3.5306 Epoch 82/100 59/59 [==============================] - 2s 29ms/step - loss: 1.2632 - val_loss: 3.2885 Epoch 83/100 59/59 [==============================] - 2s 28ms/step - loss: 1.2316 - val_loss: 3.2482 Epoch 84/100 59/59 [==============================] - 2s 30ms/step - loss: 1.1260 - val_loss: 3.4285 Epoch 85/100 59/59 [==============================] - 2s 28ms/step - loss: 1.0745 - val_loss: 3.2985 Epoch 86/100 59/59 [==============================] - 2s 29ms/step - loss: 1.0133 - val_loss: 3.2209 Epoch 87/100 59/59 [==============================] - 2s 31ms/step - loss: 0.9417 - val_loss: 3.2203 Epoch 88/100 59/59 [==============================] - 2s 28ms/step - loss: 0.9104 - val_loss: 3.1121 Epoch 89/100 59/59 [==============================] - 2s 30ms/step - loss: 0.8516 - val_loss: 3.2070 Epoch 90/100 59/59 [==============================] - 2s 28ms/step - loss: 0.8275 - val_loss: 3.0335 Epoch 91/100 59/59 [==============================] - 2s 28ms/step - loss: 0.8056 - val_loss: 3.2085 Epoch 92/100 59/59 [==============================] - 2s 28ms/step - loss: 0.7373 - val_loss: 3.0326 Epoch 93/100 59/59 [==============================] - 2s 28ms/step - loss: 0.7753 - val_loss: 2.9935 Epoch 94/100 59/59 [==============================] - 2s 28ms/step - loss: 0.7688 - val_loss: 2.9940 Epoch 95/100 59/59 [==============================] - 2s 27ms/step - loss: 0.6765 - val_loss: 3.0432 Epoch 96/100 59/59 [==============================] - 2s 29ms/step - loss: 0.6674 - val_loss: 3.1233 Epoch 97/100 59/59 [==============================] - 2s 29ms/step - loss: 0.6018 - val_loss: 2.8405 Epoch 98/100 59/59 [==============================] - 2s 28ms/step - loss: 0.6322 - val_loss: 2.8323 Epoch 99/100 59/59 [==============================] - 2s 29ms/step - loss: 0.5889 - val_loss: 2.8786 Epoch 100/100 59/59 [==============================] - 2s 28ms/step - loss: 0.5616 - val_loss: 2.9697 Inference # Get the prediction model by extracting layers till the output layer prediction_model = keras.models.Model( model.get_layer(name=\"image\").input, model.get_layer(name=\"dense2\").output ) prediction_model.summary() # A utility function to decode the output of 
the network def decode_batch_predictions(pred): input_len = np.ones(pred.shape[0]) * pred.shape[1] # Use greedy search. For complex tasks, you can use beam search results = keras.backend.ctc_decode(pred, input_length=input_len, greedy=True)[0][0][ :, :max_length ] # Iterate over the results and get back the text output_text = [] for res in results: res = tf.strings.reduce_join(num_to_char(res)).numpy().decode(\"utf-8\") output_text.append(res) return output_text # Let's check results on some validation samples for batch in validation_dataset.take(1): batch_images = batch[\"image\"] batch_labels = batch[\"label\"] preds = prediction_model.predict(batch_images) pred_texts = decode_batch_predictions(preds) orig_texts = [] for label in batch_labels: label = tf.strings.reduce_join(num_to_char(label)).numpy().decode(\"utf-8\") orig_texts.append(label) _, ax = plt.subplots(4, 4, figsize=(15, 5)) for i in range(len(pred_texts)): img = (batch_images[i, :, :, 0] * 255).numpy().astype(np.uint8) img = img.T title = f\"Prediction: {pred_texts[i]}\" ax[i // 4, i % 4].imshow(img, cmap=\"gray\") ax[i // 4, i % 4].set_title(title) ax[i // 4, i % 4].axis(\"off\") plt.show() Model: \"functional_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= image (InputLayer) [(None, 200, 50, 1)] 0 _________________________________________________________________ Conv1 (Conv2D) (None, 200, 50, 32) 320 _________________________________________________________________ pool1 (MaxPooling2D) (None, 100, 25, 32) 0 _________________________________________________________________ Conv2 (Conv2D) (None, 100, 25, 64) 18496 _________________________________________________________________ pool2 (MaxPooling2D) (None, 50, 12, 64) 0 _________________________________________________________________ reshape (Reshape) (None, 50, 768) 0 _________________________________________________________________ dense1 (Dense) (None, 50, 64) 49216 _________________________________________________________________ dropout (Dropout) (None, 50, 64) 0 _________________________________________________________________ bidirectional (Bidirectional (None, 50, 256) 197632 _________________________________________________________________ bidirectional_1 (Bidirection (None, 50, 128) 164352 _________________________________________________________________ dense2 (Dense) (None, 50, 20) 2580 ================================================================= Total params: 432,596 Trainable params: 432,596 Non-trainable params: 0 _________________________________________________________________ png Medical image classification on TPU. Introduction + Set-up This tutorial will explain how to build an X-ray image classification model to predict whether an X-ray scan shows presence of pneumonia. 
import re import os import random import numpy as np import pandas as pd import tensorflow as tf import matplotlib.pyplot as plt try: tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() print(\"Device:\", tpu.master()) strategy = tf.distribute.TPUStrategy(tpu) except: strategy = tf.distribute.get_strategy() print(\"Number of replicas:\", strategy.num_replicas_in_sync) Device: grpc://10.0.27.122:8470 INFO:tensorflow:Initializing the TPU system: grpc://10.0.27.122:8470 INFO:tensorflow:Initializing the TPU system: grpc://10.0.27.122:8470 INFO:tensorflow:Clearing out eager caches INFO:tensorflow:Clearing out eager caches INFO:tensorflow:Finished initializing TPU system. INFO:tensorflow:Finished initializing TPU system. WARNING:absl:[`tf.distribute.TPUStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/TPUStrategy) is deprecated, please use the non experimental symbol [`tf.distribute.TPUStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/TPUStrategy) instead. INFO:tensorflow:Found TPU system: INFO:tensorflow:Found TPU system: INFO:tensorflow:*** Num TPU Cores: 8 INFO:tensorflow:*** Num TPU Cores: 8 INFO:tensorflow:*** Num TPU Workers: 1 INFO:tensorflow:*** Num TPU Workers: 1 INFO:tensorflow:*** Num TPU Cores Per Worker: 8 INFO:tensorflow:*** Num TPU Cores Per Worker: 8 INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 
0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0) Number of replicas: 8 We need a Google Cloud link to our data to load the data using a TPU. Below, we define key configuration parameters we'll use in this example. To run on TPU, this example must be on Colab with the TPU runtime selected. AUTOTUNE = tf.data.AUTOTUNE BATCH_SIZE = 25 * strategy.num_replicas_in_sync IMAGE_SIZE = [180, 180] CLASS_NAMES = [\"NORMAL\", \"PNEUMONIA\"] Load the data The Chest X-ray data we are using from Cell divides the data into training and test files. Let's first load in the training TFRecords. train_images = tf.data.TFRecordDataset( \"gs://download.tensorflow.org/data/ChestXRay2017/train/images.tfrec\" ) train_paths = tf.data.TFRecordDataset( \"gs://download.tensorflow.org/data/ChestXRay2017/train/paths.tfrec\" ) ds = tf.data.Dataset.zip((train_images, train_paths)) Let's count how many healthy/normal chest X-rays we have and how many pneumonia chest X-rays we have: COUNT_NORMAL = len( [ filename for filename in train_paths if \"NORMAL\" in filename.numpy().decode(\"utf-8\") ] ) print(\"Normal images count in training set: \" + str(COUNT_NORMAL)) COUNT_PNEUMONIA = len( [ filename for filename in train_paths if \"PNEUMONIA\" in filename.numpy().decode(\"utf-8\") ] ) print(\"Pneumonia images count in training set: \" + str(COUNT_PNEUMONIA)) Normal images count in training set: 1349 Pneumonia images count in training set: 3883 Notice that there are way more images that are classified as pneumonia than normal. This shows that we have an imbalance in our data. We will correct for this imbalance later on in our notebook. We want to map each filename to the corresponding (image, label) pair. The following methods will help us do that. As we only have two labels, we will encode the label so that 1 or True indicates pneumonia and 0 or False indicates normal. def get_label(file_path): # convert the path to a list of path components parts = tf.strings.split(file_path, \"/\") # The second to last is the class-directory return parts[-2] == \"PNEUMONIA\" def decode_img(img): # convert the compressed string to a 3D uint8 tensor img = tf.image.decode_jpeg(img, channels=3) # resize the image to the desired size. return tf.image.resize(img, IMAGE_SIZE) def process_path(image, path): label = get_label(path) # load the raw data from the file as a string img = decode_img(image) return img, label ds = ds.map(process_path, num_parallel_calls=AUTOTUNE) Let's split the data into a training and validation datasets. ds = ds.shuffle(10000) train_ds = ds.take(4200) val_ds = ds.skip(4200) Let's visualize the shape of an (image, label) pair. for image, label in train_ds.take(1): print(\"Image shape: \", image.numpy().shape) print(\"Label: \", label.numpy()) Image shape: (180, 180, 3) Label: False Load and format the test data as well. 
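Aside: a quick sanity check of the label parsing that process_path relies on. The file path below is invented purely for illustration; it just shows that get_label keys off the second-to-last path component.
# Hypothetical path, used only to illustrate get_label's behaviour
example_path = tf.constant(\"ChestXRay2017/train/PNEUMONIA/person1_bacteria_1.jpeg\")
print(get_label(example_path).numpy())  # True, because the parent folder is \"PNEUMONIA\"
The test TFRecords below are decoded and labeled with the same process_path function.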
test_images = tf.data.TFRecordDataset( \"gs://download.tensorflow.org/data/ChestXRay2017/test/images.tfrec\" ) test_paths = tf.data.TFRecordDataset( \"gs://download.tensorflow.org/data/ChestXRay2017/test/paths.tfrec\" ) test_ds = tf.data.Dataset.zip((test_images, test_paths)) test_ds = test_ds.map(process_path, num_parallel_calls=AUTOTUNE) test_ds = test_ds.batch(BATCH_SIZE) Visualize the dataset First, let's use buffered prefetching so we can yield data from disk without having I/O become blocking. Please note that large image datasets should not be cached in memory. We do it here because the dataset is not very large and we want to train on TPU. def prepare_for_training(ds, cache=True): # This is a small dataset, only load it once, and keep it in memory. # use `.cache(filename)` to cache preprocessing work for datasets that don't # fit in memory. if cache: if isinstance(cache, str): ds = ds.cache(cache) else: ds = ds.cache() ds = ds.batch(BATCH_SIZE) # `prefetch` lets the dataset fetch batches in the background while the model # is training. ds = ds.prefetch(buffer_size=AUTOTUNE) return ds Prepare the training and validation datasets, then fetch the first batch of training data. train_ds = prepare_for_training(train_ds) val_ds = prepare_for_training(val_ds) image_batch, label_batch = next(iter(train_ds)) Define the method to show the images in the batch. def show_batch(image_batch, label_batch): plt.figure(figsize=(10, 10)) for n in range(25): ax = plt.subplot(5, 5, n + 1) plt.imshow(image_batch[n] / 255) if label_batch[n]: plt.title(\"PNEUMONIA\") else: plt.title(\"NORMAL\") plt.axis(\"off\") As the method takes in NumPy arrays as its parameters, call the numpy function on the batches to return the tensor in NumPy array form. show_batch(image_batch.numpy(), label_batch.numpy()) png Build the CNN To make our model more modular and easier to understand, let's define some blocks. As we're building a convolutional neural network, we'll create a convolution block and a dense layer block. The architecture for this CNN has been inspired by this article. from tensorflow import keras from tensorflow.keras import layers def conv_block(filters, inputs): x = layers.SeparableConv2D(filters, 3, activation=\"relu\", padding=\"same\")(inputs) x = layers.SeparableConv2D(filters, 3, activation=\"relu\", padding=\"same\")(x) x = layers.BatchNormalization()(x) outputs = layers.MaxPool2D()(x) return outputs def dense_block(units, dropout_rate, inputs): x = layers.Dense(units, activation=\"relu\")(inputs) x = layers.BatchNormalization()(x) outputs = layers.Dropout(dropout_rate)(x) return outputs The following function builds the model for us. The images originally have values in the range [0, 255]. CNNs work better with smaller numbers so we will scale this down for our input. The Dropout layers are important, as they reduce the likelihood of the model overfitting. We want to end the model with a Dense layer with one node, as this will be the binary output that determines if an X-ray shows presence of pneumonia.
def build_model(): inputs = keras.Input(shape=(IMAGE_SIZE[0], IMAGE_SIZE[1], 3)) x = layers.Rescaling(1.0 / 255)(inputs) x = layers.Conv2D(16, 3, activation=\"relu\", padding=\"same\")(x) x = layers.Conv2D(16, 3, activation=\"relu\", padding=\"same\")(x) x = layers.MaxPool2D()(x) x = conv_block(32, x) x = conv_block(64, x) x = conv_block(128, x) x = layers.Dropout(0.2)(x) x = conv_block(256, x) x = layers.Dropout(0.2)(x) x = layers.Flatten()(x) x = dense_block(512, 0.7, x) x = dense_block(128, 0.5, x) x = dense_block(64, 0.3, x) outputs = layers.Dense(1, activation=\"sigmoid\")(x) model = keras.Model(inputs=inputs, outputs=outputs) return model Correct for data imbalance We saw earlier in this example that the data was imbalanced, with more images classified as pneumonia than normal. We will correct for that by using class weighting: initial_bias = np.log([COUNT_PNEUMONIA / COUNT_NORMAL]) print(\"Initial bias: {:.5f}\".format(initial_bias[0])) TRAIN_IMG_COUNT = COUNT_NORMAL + COUNT_PNEUMONIA weight_for_0 = (1 / COUNT_NORMAL) * (TRAIN_IMG_COUNT) / 2.0 weight_for_1 = (1 / COUNT_PNEUMONIA) * (TRAIN_IMG_COUNT) / 2.0 class_weight = {0: weight_for_0, 1: weight_for_1} print(\"Weight for class 0: {:.2f}\".format(weight_for_0)) print(\"Weight for class 1: {:.2f}\".format(weight_for_1)) Initial bias: 1.05724 Weight for class 0: 1.94 Weight for class 1: 0.67 The weight for class 0 (Normal) is a lot higher than the weight for class 1 (Pneumonia). Because there are fewer normal images, each normal image is weighted more heavily to balance the data, since the CNN works best when the training data is balanced. Train the model Defining callbacks The checkpoint callback saves the best weights of the model, so next time we want to use the model, we do not have to spend time training it. The early stopping callback stops the training process when the model's improvement stagnates, or even worse, when the model starts overfitting. checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(\"xray_model.h5\", save_best_only=True) early_stopping_cb = tf.keras.callbacks.EarlyStopping( patience=10, restore_best_weights=True ) We also want to tune our learning rate. Too high a learning rate will cause the model to diverge; too small a learning rate will make training too slow. We implement the exponential learning rate scheduling method below. initial_learning_rate = 0.015 lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay( initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True ) Fit the model For our metrics, we want to include precision and recall as they will provide us with a more informed picture of how good our model is. Accuracy tells us what fraction of the labels is correct. Since our data is not balanced, accuracy might give a skewed sense of a good model (i.e. a model that always predicts PNEUMONIA will be 74% accurate but is not a good model). Precision is the number of true positives (TP) over the sum of TP and false positives (FP). It shows what fraction of predicted positives are actually correct. Recall is the number of TP over the sum of TP and false negatives (FN). It shows what fraction of the actual positives the model correctly identifies. Since there are only two possible labels for the image, we will be using the binary crossentropy loss. When we fit the model, remember to specify the class weights, which we defined earlier. Because we are using a TPU, training will be quick - less than 2 minutes.
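To make the precision and recall definitions above concrete, here is a tiny worked example with invented counts (illustrative numbers only, not results from this model):
# Invented counts, purely to illustrate the formulas
tp, fp, fn = 90, 10, 30
precision = tp / (tp + fp)  # 0.90: of all scans flagged PNEUMONIA, 90% really were
recall = tp / (tp + fn)     # 0.75: of all true PNEUMONIA scans, 75% were caught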
with strategy.scope(): model = build_model() METRICS = [ tf.keras.metrics.BinaryAccuracy(), tf.keras.metrics.Precision(name=\"precision\"), tf.keras.metrics.Recall(name=\"recall\"), ] model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule), loss=\"binary_crossentropy\", metrics=METRICS, ) history = model.fit( train_ds, epochs=100, validation_data=val_ds, class_weight=class_weight, callbacks=[checkpoint_cb, early_stopping_cb], ) Epoch 1/100 WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.data.Iterator.get_next_as_optional()` instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.data.Iterator.get_next_as_optional()` instead. 21/21 [==============================] - 12s 568ms/step - loss: 0.5857 - binary_accuracy: 0.6960 - precision: 0.8887 - recall: 0.6733 - val_loss: 34.0149 - val_binary_accuracy: 0.7180 - val_precision: 0.7180 - val_recall: 1.0000 Epoch 2/100 21/21 [==============================] - 3s 128ms/step - loss: 0.2916 - binary_accuracy: 0.8755 - precision: 0.9540 - recall: 0.8738 - val_loss: 97.5194 - val_binary_accuracy: 0.7180 - val_precision: 0.7180 - val_recall: 1.0000 Epoch 3/100 21/21 [==============================] - 4s 167ms/step - loss: 0.2384 - binary_accuracy: 0.9002 - precision: 0.9663 - recall: 0.8964 - val_loss: 27.7902 - val_binary_accuracy: 0.7180 - val_precision: 0.7180 - val_recall: 1.0000 Epoch 4/100 21/21 [==============================] - 4s 173ms/step - loss: 0.2046 - binary_accuracy: 0.9145 - precision: 0.9725 - recall: 0.9102 - val_loss: 10.8302 - val_binary_accuracy: 0.7180 - val_precision: 0.7180 - val_recall: 1.0000 Epoch 5/100 21/21 [==============================] - 4s 174ms/step - loss: 0.1841 - binary_accuracy: 0.9279 - precision: 0.9733 - recall: 0.9279 - val_loss: 3.5860 - val_binary_accuracy: 0.7103 - val_precision: 0.7162 - val_recall: 0.9879 Epoch 6/100 21/21 [==============================] - 4s 185ms/step - loss: 0.1600 - binary_accuracy: 0.9362 - precision: 0.9791 - recall: 0.9337 - val_loss: 0.3014 - val_binary_accuracy: 0.8895 - val_precision: 0.8973 - val_recall: 0.9555 Epoch 7/100 21/21 [==============================] - 3s 130ms/step - loss: 0.1567 - binary_accuracy: 0.9393 - precision: 0.9798 - recall: 0.9372 - val_loss: 0.6763 - val_binary_accuracy: 0.7810 - val_precision: 0.7760 - val_recall: 0.9771 Epoch 8/100 21/21 [==============================] - 3s 131ms/step - loss: 0.1532 - binary_accuracy: 0.9421 - precision: 0.9825 - recall: 0.9385 - val_loss: 0.3169 - val_binary_accuracy: 0.8895 - val_precision: 0.8684 - val_recall: 0.9973 Epoch 9/100 21/21 [==============================] - 4s 184ms/step - loss: 0.1457 - binary_accuracy: 0.9431 - precision: 0.9822 - recall: 0.9401 - val_loss: 0.2064 - val_binary_accuracy: 0.9273 - val_precision: 0.9840 - val_recall: 0.9136 Epoch 10/100 21/21 [==============================] - 3s 132ms/step - loss: 0.1201 - binary_accuracy: 0.9521 - precision: 0.9869 - recall: 0.9479 - val_loss: 0.4364 - val_binary_accuracy: 0.8605 - val_precision: 0.8443 - val_recall: 0.9879 Epoch 11/100 21/21 
[==============================] - 3s 127ms/step - loss: 0.1200 - binary_accuracy: 0.9510 - precision: 0.9863 - recall: 0.9469 - val_loss: 0.5197 - val_binary_accuracy: 0.8508 - val_precision: 1.0000 - val_recall: 0.7922 Epoch 12/100 21/21 [==============================] - 4s 186ms/step - loss: 0.1077 - binary_accuracy: 0.9581 - precision: 0.9870 - recall: 0.9559 - val_loss: 0.1349 - val_binary_accuracy: 0.9486 - val_precision: 0.9587 - val_recall: 0.9703 Epoch 13/100 21/21 [==============================] - 4s 173ms/step - loss: 0.0918 - binary_accuracy: 0.9650 - precision: 0.9914 - recall: 0.9611 - val_loss: 0.0926 - val_binary_accuracy: 0.9700 - val_precision: 0.9837 - val_recall: 0.9744 Epoch 14/100 21/21 [==============================] - 3s 130ms/step - loss: 0.0996 - binary_accuracy: 0.9612 - precision: 0.9913 - recall: 0.9559 - val_loss: 0.1811 - val_binary_accuracy: 0.9419 - val_precision: 0.9956 - val_recall: 0.9231 Epoch 15/100 21/21 [==============================] - 3s 129ms/step - loss: 0.0898 - binary_accuracy: 0.9643 - precision: 0.9901 - recall: 0.9614 - val_loss: 0.1525 - val_binary_accuracy: 0.9486 - val_precision: 0.9986 - val_recall: 0.9298 Epoch 16/100 21/21 [==============================] - 3s 128ms/step - loss: 0.0941 - binary_accuracy: 0.9621 - precision: 0.9904 - recall: 0.9582 - val_loss: 0.5101 - val_binary_accuracy: 0.8527 - val_precision: 1.0000 - val_recall: 0.7949 Epoch 17/100 21/21 [==============================] - 3s 125ms/step - loss: 0.0798 - binary_accuracy: 0.9636 - precision: 0.9897 - recall: 0.9607 - val_loss: 0.1239 - val_binary_accuracy: 0.9622 - val_precision: 0.9875 - val_recall: 0.9595 Epoch 18/100 21/21 [==============================] - 3s 126ms/step - loss: 0.0821 - binary_accuracy: 0.9657 - precision: 0.9911 - recall: 0.9623 - val_loss: 0.1597 - val_binary_accuracy: 0.9322 - val_precision: 0.9956 - val_recall: 0.9096 Epoch 19/100 21/21 [==============================] - 3s 143ms/step - loss: 0.0800 - binary_accuracy: 0.9657 - precision: 0.9917 - recall: 0.9617 - val_loss: 0.2538 - val_binary_accuracy: 0.9109 - val_precision: 1.0000 - val_recall: 0.8758 Epoch 20/100 21/21 [==============================] - 3s 127ms/step - loss: 0.0605 - binary_accuracy: 0.9738 - precision: 0.9950 - recall: 0.9694 - val_loss: 0.6594 - val_binary_accuracy: 0.8566 - val_precision: 1.0000 - val_recall: 0.8003 Epoch 21/100 21/21 [==============================] - 4s 167ms/step - loss: 0.0726 - binary_accuracy: 0.9733 - precision: 0.9937 - recall: 0.9701 - val_loss: 0.0593 - val_binary_accuracy: 0.9816 - val_precision: 0.9945 - val_recall: 0.9798 Epoch 22/100 21/21 [==============================] - 3s 126ms/step - loss: 0.0577 - binary_accuracy: 0.9783 - precision: 0.9951 - recall: 0.9755 - val_loss: 0.1087 - val_binary_accuracy: 0.9729 - val_precision: 0.9931 - val_recall: 0.9690 Epoch 23/100 21/21 [==============================] - 3s 125ms/step - loss: 0.0652 - binary_accuracy: 0.9729 - precision: 0.9924 - recall: 0.9707 - val_loss: 1.8465 - val_binary_accuracy: 0.7180 - val_precision: 0.7180 - val_recall: 1.0000 Epoch 24/100 21/21 [==============================] - 3s 124ms/step - loss: 0.0538 - binary_accuracy: 0.9783 - precision: 0.9951 - recall: 0.9755 - val_loss: 1.5769 - val_binary_accuracy: 0.7180 - val_precision: 0.7180 - val_recall: 1.0000 Epoch 25/100 21/21 [==============================] - 4s 167ms/step - loss: 0.0549 - binary_accuracy: 0.9776 - precision: 0.9954 - recall: 0.9743 - val_loss: 0.0590 - val_binary_accuracy: 0.9777 - val_precision: 
0.9904 - val_recall: 0.9784 Epoch 26/100 21/21 [==============================] - 3s 131ms/step - loss: 0.0677 - binary_accuracy: 0.9719 - precision: 0.9924 - recall: 0.9694 - val_loss: 2.6008 - val_binary_accuracy: 0.6928 - val_precision: 0.9977 - val_recall: 0.5735 Epoch 27/100 21/21 [==============================] - 3s 127ms/step - loss: 0.0469 - binary_accuracy: 0.9833 - precision: 0.9971 - recall: 0.9804 - val_loss: 1.0184 - val_binary_accuracy: 0.8605 - val_precision: 0.9983 - val_recall: 0.8070 Epoch 28/100 21/21 [==============================] - 3s 126ms/step - loss: 0.0501 - binary_accuracy: 0.9790 - precision: 0.9961 - recall: 0.9755 - val_loss: 0.3737 - val_binary_accuracy: 0.9089 - val_precision: 0.9954 - val_recall: 0.8772 Epoch 29/100 21/21 [==============================] - 3s 128ms/step - loss: 0.0548 - binary_accuracy: 0.9798 - precision: 0.9941 - recall: 0.9784 - val_loss: 1.2928 - val_binary_accuracy: 0.7907 - val_precision: 1.0000 - val_recall: 0.7085 Epoch 30/100 21/21 [==============================] - 3s 129ms/step - loss: 0.0370 - binary_accuracy: 0.9860 - precision: 0.9980 - recall: 0.9829 - val_loss: 0.1370 - val_binary_accuracy: 0.9612 - val_precision: 0.9972 - val_recall: 0.9487 Epoch 31/100 21/21 [==============================] - 3s 125ms/step - loss: 0.0585 - binary_accuracy: 0.9819 - precision: 0.9951 - recall: 0.9804 - val_loss: 1.1955 - val_binary_accuracy: 0.6870 - val_precision: 0.9976 - val_recall: 0.5655 Epoch 32/100 21/21 [==============================] - 3s 140ms/step - loss: 0.0813 - binary_accuracy: 0.9695 - precision: 0.9934 - recall: 0.9652 - val_loss: 1.0394 - val_binary_accuracy: 0.8576 - val_precision: 0.9853 - val_recall: 0.8138 Epoch 33/100 21/21 [==============================] - 3s 128ms/step - loss: 0.1111 - binary_accuracy: 0.9555 - precision: 0.9870 - recall: 0.9524 - val_loss: 4.9438 - val_binary_accuracy: 0.5911 - val_precision: 1.0000 - val_recall: 0.4305 Epoch 34/100 21/21 [==============================] - 3s 130ms/step - loss: 0.0680 - binary_accuracy: 0.9726 - precision: 0.9921 - recall: 0.9707 - val_loss: 2.8822 - val_binary_accuracy: 0.7267 - val_precision: 0.9978 - val_recall: 0.6208 Epoch 35/100 21/21 [==============================] - 4s 187ms/step - loss: 0.0784 - binary_accuracy: 0.9712 - precision: 0.9892 - recall: 0.9717 - val_loss: 0.3940 - val_binary_accuracy: 0.9390 - val_precision: 0.9942 - val_recall: 0.9204 Visualizing model performance Let's plot the model accuracy and loss for the training and the validating set. Note that no random seed is specified for this notebook. For your notebook, there might be slight variance. fig, ax = plt.subplots(1, 4, figsize=(20, 3)) ax = ax.ravel() for i, met in enumerate([\"precision\", \"recall\", \"binary_accuracy\", \"loss\"]): ax[i].plot(history.history[met]) ax[i].plot(history.history[\"val_\" + met]) ax[i].set_title(\"Model {}\".format(met)) ax[i].set_xlabel(\"epochs\") ax[i].set_ylabel(met) ax[i].legend([\"train\", \"val\"]) png We see that the accuracy for our model is around 95%. Predict and evaluate results Let's evaluate the model on our test data! model.evaluate(test_ds, return_dict=True) 4/4 [==============================] - 3s 708ms/step - loss: 0.9718 - binary_accuracy: 0.7901 - precision: 0.7524 - recall: 0.9897 {'binary_accuracy': 0.7900640964508057, 'loss': 0.9717951416969299, 'precision': 0.752436637878418, 'recall': 0.9897436499595642} We see that our accuracy on our test data is lower than the accuracy for our validating set. This may indicate overfitting. 
Our recall is greater than our precision, indicating that almost all pneumonia images are correctly identified but some normal images are falsely identified. We should aim to increase our precision. for image, label in test_ds.take(1): plt.imshow(image[0] / 255.0) plt.title(CLASS_NAMES[label[0].numpy()]) prediction = model.predict(test_ds.take(1))[0] scores = [1 - prediction, prediction] for score, name in zip(scores, CLASS_NAMES): print(\"This image is %.2f percent %s\" % ((100 * score), name)) This image is 47.19 percent NORMAL This image is 52.81 percent PNEUMONIA png Implementation of PointNet for ModelNet10 classification. Introduction Classification, detection and segmentation of unordered 3D point sets, i.e. point clouds, is a core problem in computer vision. This example implements the seminal point cloud deep learning paper PointNet (Qi et al., 2017). For a detailed introduction on PointNet see this blog post. Setup If using Colab, first install trimesh with !pip install trimesh. import os import glob import trimesh import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from matplotlib import pyplot as plt tf.random.set_seed(1234) Load dataset We use the ModelNet10 model dataset, the smaller 10-class version of the ModelNet40 dataset. First download the data: DATA_DIR = tf.keras.utils.get_file( \"modelnet.zip\", \"http://3dvision.princeton.edu/projects/2014/3DShapeNets/ModelNet10.zip\", extract=True, ) DATA_DIR = os.path.join(os.path.dirname(DATA_DIR), \"ModelNet10\") Downloading data from http://3dvision.princeton.edu/projects/2014/3DShapeNets/ModelNet10.zip 473407488/473402300 [==============================] - 13s 0us/step We can use the trimesh package to read and visualize the .off mesh files. mesh = trimesh.load(os.path.join(DATA_DIR, \"chair/train/chair_0001.off\")) mesh.show() To convert a mesh file to a point cloud we first need to sample points on the mesh surface. .sample() performs a uniform random sampling. Here we sample at 2048 locations and visualize in matplotlib. points = mesh.sample(2048) fig = plt.figure(figsize=(5, 5)) ax = fig.add_subplot(111, projection=\"3d\") ax.scatter(points[:, 0], points[:, 1], points[:, 2]) ax.set_axis_off() plt.show() png To generate a tf.data.Dataset() we need to first parse through the ModelNet data folders. Each mesh is loaded and sampled into a point cloud before being added to a standard Python list and converted to a numpy array. We also store the current enumerate index value as the object label and use a dictionary to recall this later.
def parse_dataset(num_points=2048): train_points = [] train_labels = [] test_points = [] test_labels = [] class_map = {} folders = glob.glob(os.path.join(DATA_DIR, \"[!README]*\")) for i, folder in enumerate(folders): print(\"processing class: {}\".format(os.path.basename(folder))) # store folder name with ID so we can retrieve later class_map[i] = folder.split(\"/\")[-1] # gather all files train_files = glob.glob(os.path.join(folder, \"train/*\")) test_files = glob.glob(os.path.join(folder, \"test/*\")) for f in train_files: train_points.append(trimesh.load(f).sample(num_points)) train_labels.append(i) for f in test_files: test_points.append(trimesh.load(f).sample(num_points)) test_labels.append(i) return ( np.array(train_points), np.array(test_points), np.array(train_labels), np.array(test_labels), class_map, ) Set the number of points to sample and the batch size, and parse the dataset. This can take ~5 minutes to complete. NUM_POINTS = 2048 NUM_CLASSES = 10 BATCH_SIZE = 32 train_points, test_points, train_labels, test_labels, CLASS_MAP = parse_dataset( NUM_POINTS ) processing class: bathtub processing class: desk processing class: monitor processing class: sofa processing class: chair processing class: toilet processing class: dresser processing class: table processing class: bed processing class: night_stand Our data can now be read into a tf.data.Dataset() object. We set the shuffle buffer size to the entire size of the dataset, since prior to this the data is ordered by class. Data augmentation is important when working with point cloud data. We create an augmentation function to jitter and shuffle the train dataset. def augment(points, label): # jitter points points += tf.random.uniform(points.shape, -0.005, 0.005, dtype=tf.float64) # shuffle points points = tf.random.shuffle(points) return points, label train_dataset = tf.data.Dataset.from_tensor_slices((train_points, train_labels)) test_dataset = tf.data.Dataset.from_tensor_slices((test_points, test_labels)) train_dataset = train_dataset.shuffle(len(train_points)).map(augment).batch(BATCH_SIZE) test_dataset = test_dataset.shuffle(len(test_points)).batch(BATCH_SIZE) Build a model Each convolution and fully-connected layer (with the exception of the end layers) consists of Convolution / Dense -> Batch Normalization -> ReLU Activation. def conv_bn(x, filters): x = layers.Conv1D(filters, kernel_size=1, padding=\"valid\")(x) x = layers.BatchNormalization(momentum=0.0)(x) return layers.Activation(\"relu\")(x) def dense_bn(x, filters): x = layers.Dense(filters)(x) x = layers.BatchNormalization(momentum=0.0)(x) return layers.Activation(\"relu\")(x) PointNet consists of two core components: the primary MLP network and the transformer net (T-net). The T-net aims to learn an affine transformation matrix with its own mini network. The T-net is used twice: first to transform the input points (n, 3) into a canonical representation, and a second time to align the learned point features (n, 32) in feature space. As per the original paper, we constrain the transformation to be close to an orthogonal matrix (i.e. ||X X^T - I|| = 0).
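For intuition, here is a minimal numeric check of that penalty, written independently of the regularizer class defined next: an orthogonal matrix incurs zero cost, while stretching one axis is penalized.

import tensorflow as tf

l2reg = 0.001
identity = tf.eye(3)

def ortho_penalty(x):
    # Penalty on how far X @ X^T is from the identity matrix.
    xxt = tf.matmul(x, x, transpose_b=True)
    return tf.reduce_sum(l2reg * tf.square(xxt - identity))

print(ortho_penalty(tf.eye(3)).numpy())                        # 0.0   -- orthogonal, no penalty
print(ortho_penalty(tf.linalg.diag([2.0, 1.0, 1.0])).numpy())  # 0.009 -- scaled axis is penalized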
class OrthogonalRegularizer(keras.regularizers.Regularizer): def __init__(self, num_features, l2reg=0.001): self.num_features = num_features self.l2reg = l2reg self.eye = tf.eye(num_features) def __call__(self, x): x = tf.reshape(x, (-1, self.num_features, self.num_features)) xxt = tf.tensordot(x, x, axes=(2, 2)) xxt = tf.reshape(xxt, (-1, self.num_features, self.num_features)) return tf.reduce_sum(self.l2reg * tf.square(xxt - self.eye)) We can then define a general function to build T-net layers. def tnet(inputs, num_features): # Initalise bias as the indentity matrix bias = keras.initializers.Constant(np.eye(num_features).flatten()) reg = OrthogonalRegularizer(num_features) x = conv_bn(inputs, 32) x = conv_bn(x, 64) x = conv_bn(x, 512) x = layers.GlobalMaxPooling1D()(x) x = dense_bn(x, 256) x = dense_bn(x, 128) x = layers.Dense( num_features * num_features, kernel_initializer=\"zeros\", bias_initializer=bias, activity_regularizer=reg, )(x) feat_T = layers.Reshape((num_features, num_features))(x) # Apply affine transformation to input features return layers.Dot(axes=(2, 1))([inputs, feat_T]) The main network can be then implemented in the same manner where the t-net mini models can be dropped in a layers in the graph. Here we replicate the network architecture published in the original paper but with half the number of weights at each layer as we are using the smaller 10 class ModelNet dataset. inputs = keras.Input(shape=(NUM_POINTS, 3)) x = tnet(inputs, 3) x = conv_bn(x, 32) x = conv_bn(x, 32) x = tnet(x, 32) x = conv_bn(x, 32) x = conv_bn(x, 64) x = conv_bn(x, 512) x = layers.GlobalMaxPooling1D()(x) x = dense_bn(x, 256) x = layers.Dropout(0.3)(x) x = dense_bn(x, 128) x = layers.Dropout(0.3)(x) outputs = layers.Dense(NUM_CLASSES, activation=\"softmax\")(x) model = keras.Model(inputs=inputs, outputs=outputs, name=\"pointnet\") model.summary() Model: \"pointnet\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 2048, 3)] 0 __________________________________________________________________________________________________ conv1d (Conv1D) (None, 2048, 32) 128 input_1[0][0] __________________________________________________________________________________________________ batch_normalization (BatchNorma (None, 2048, 32) 128 conv1d[0][0] __________________________________________________________________________________________________ activation (Activation) (None, 2048, 32) 0 batch_normalization[0][0] __________________________________________________________________________________________________ conv1d_1 (Conv1D) (None, 2048, 64) 2112 activation[0][0] __________________________________________________________________________________________________ batch_normalization_1 (BatchNor (None, 2048, 64) 256 conv1d_1[0][0] __________________________________________________________________________________________________ activation_1 (Activation) (None, 2048, 64) 0 batch_normalization_1[0][0] __________________________________________________________________________________________________ conv1d_2 (Conv1D) (None, 2048, 512) 33280 activation_1[0][0] __________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 2048, 512) 2048 conv1d_2[0][0] 
__________________________________________________________________________________________________ activation_2 (Activation) (None, 2048, 512) 0 batch_normalization_2[0][0] __________________________________________________________________________________________________ global_max_pooling1d (GlobalMax (None, 512) 0 activation_2[0][0] __________________________________________________________________________________________________ dense (Dense) (None, 256) 131328 global_max_pooling1d[0][0] __________________________________________________________________________________________________ batch_normalization_3 (BatchNor (None, 256) 1024 dense[0][0] __________________________________________________________________________________________________ activation_3 (Activation) (None, 256) 0 batch_normalization_3[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 128) 32896 activation_3[0][0] __________________________________________________________________________________________________ batch_normalization_4 (BatchNor (None, 128) 512 dense_1[0][0] __________________________________________________________________________________________________ activation_4 (Activation) (None, 128) 0 batch_normalization_4[0][0] __________________________________________________________________________________________________ dense_2 (Dense) (None, 9) 1161 activation_4[0][0] __________________________________________________________________________________________________ reshape (Reshape) (None, 3, 3) 0 dense_2[0][0] __________________________________________________________________________________________________ dot (Dot) (None, 2048, 3) 0 input_1[0][0] reshape[0][0] __________________________________________________________________________________________________ conv1d_3 (Conv1D) (None, 2048, 32) 128 dot[0][0] __________________________________________________________________________________________________ batch_normalization_5 (BatchNor (None, 2048, 32) 128 conv1d_3[0][0] __________________________________________________________________________________________________ activation_5 (Activation) (None, 2048, 32) 0 batch_normalization_5[0][0] __________________________________________________________________________________________________ conv1d_4 (Conv1D) (None, 2048, 32) 1056 activation_5[0][0] __________________________________________________________________________________________________ batch_normalization_6 (BatchNor (None, 2048, 32) 128 conv1d_4[0][0] __________________________________________________________________________________________________ activation_6 (Activation) (None, 2048, 32) 0 batch_normalization_6[0][0] __________________________________________________________________________________________________ conv1d_5 (Conv1D) (None, 2048, 32) 1056 activation_6[0][0] __________________________________________________________________________________________________ batch_normalization_7 (BatchNor (None, 2048, 32) 128 conv1d_5[0][0] __________________________________________________________________________________________________ activation_7 (Activation) (None, 2048, 32) 0 batch_normalization_7[0][0] __________________________________________________________________________________________________ conv1d_6 (Conv1D) (None, 2048, 64) 2112 activation_7[0][0] __________________________________________________________________________________________________ batch_normalization_8 (BatchNor (None, 2048, 64) 256 
conv1d_6[0][0] __________________________________________________________________________________________________ activation_8 (Activation) (None, 2048, 64) 0 batch_normalization_8[0][0] __________________________________________________________________________________________________ conv1d_7 (Conv1D) (None, 2048, 512) 33280 activation_8[0][0] __________________________________________________________________________________________________ batch_normalization_9 (BatchNor (None, 2048, 512) 2048 conv1d_7[0][0] __________________________________________________________________________________________________ activation_9 (Activation) (None, 2048, 512) 0 batch_normalization_9[0][0] __________________________________________________________________________________________________ global_max_pooling1d_1 (GlobalM (None, 512) 0 activation_9[0][0] __________________________________________________________________________________________________ dense_3 (Dense) (None, 256) 131328 global_max_pooling1d_1[0][0] __________________________________________________________________________________________________ batch_normalization_10 (BatchNo (None, 256) 1024 dense_3[0][0] __________________________________________________________________________________________________ activation_10 (Activation) (None, 256) 0 batch_normalization_10[0][0] __________________________________________________________________________________________________ dense_4 (Dense) (None, 128) 32896 activation_10[0][0] __________________________________________________________________________________________________ batch_normalization_11 (BatchNo (None, 128) 512 dense_4[0][0] __________________________________________________________________________________________________ activation_11 (Activation) (None, 128) 0 batch_normalization_11[0][0] __________________________________________________________________________________________________ dense_5 (Dense) (None, 1024) 132096 activation_11[0][0] __________________________________________________________________________________________________ reshape_1 (Reshape) (None, 32, 32) 0 dense_5[0][0] __________________________________________________________________________________________________ dot_1 (Dot) (None, 2048, 32) 0 activation_6[0][0] reshape_1[0][0] __________________________________________________________________________________________________ conv1d_8 (Conv1D) (None, 2048, 32) 1056 dot_1[0][0] __________________________________________________________________________________________________ batch_normalization_12 (BatchNo (None, 2048, 32) 128 conv1d_8[0][0] __________________________________________________________________________________________________ activation_12 (Activation) (None, 2048, 32) 0 batch_normalization_12[0][0] __________________________________________________________________________________________________ conv1d_9 (Conv1D) (None, 2048, 64) 2112 activation_12[0][0] __________________________________________________________________________________________________ batch_normalization_13 (BatchNo (None, 2048, 64) 256 conv1d_9[0][0] __________________________________________________________________________________________________ activation_13 (Activation) (None, 2048, 64) 0 batch_normalization_13[0][0] __________________________________________________________________________________________________ conv1d_10 (Conv1D) (None, 2048, 512) 33280 activation_13[0][0] __________________________________________________________________________________________________ 
batch_normalization_14 (BatchNo (None, 2048, 512) 2048 conv1d_10[0][0] __________________________________________________________________________________________________ activation_14 (Activation) (None, 2048, 512) 0 batch_normalization_14[0][0] __________________________________________________________________________________________________ global_max_pooling1d_2 (GlobalM (None, 512) 0 activation_14[0][0] __________________________________________________________________________________________________ dense_6 (Dense) (None, 256) 131328 global_max_pooling1d_2[0][0] __________________________________________________________________________________________________ batch_normalization_15 (BatchNo (None, 256) 1024 dense_6[0][0] __________________________________________________________________________________________________ activation_15 (Activation) (None, 256) 0 batch_normalization_15[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 256) 0 activation_15[0][0] __________________________________________________________________________________________________ dense_7 (Dense) (None, 128) 32896 dropout[0][0] __________________________________________________________________________________________________ batch_normalization_16 (BatchNo (None, 128) 512 dense_7[0][0] __________________________________________________________________________________________________ activation_16 (Activation) (None, 128) 0 batch_normalization_16[0][0] __________________________________________________________________________________________________ dropout_1 (Dropout) (None, 128) 0 activation_16[0][0] __________________________________________________________________________________________________ dense_8 (Dense) (None, 10) 1290 dropout_1[0][0] ================================================================================================== Total params: 748,979 Trainable params: 742,899 Non-trainable params: 6,080 __________________________________________________________________________________________________ Train model Once the model is defined it can be trained like any other standard classification model using .compile() and .fit(). 
model.compile( loss=\"sparse_categorical_crossentropy\", optimizer=keras.optimizers.Adam(learning_rate=0.001), metrics=[\"sparse_categorical_accuracy\"], ) model.fit(train_dataset, epochs=20, validation_data=test_dataset) Epoch 1/20 125/125 [==============================] - 28s 221ms/step - loss: 3.5897 - sparse_categorical_accuracy: 0.2724 - val_loss: 5804697916006203392.0000 - val_sparse_categorical_accuracy: 0.3073 Epoch 2/20 125/125 [==============================] - 27s 215ms/step - loss: 3.1970 - sparse_categorical_accuracy: 0.3443 - val_loss: 836343949164544.0000 - val_sparse_categorical_accuracy: 0.3425 Epoch 3/20 125/125 [==============================] - 27s 215ms/step - loss: 2.8959 - sparse_categorical_accuracy: 0.4260 - val_loss: 15107376738729984.0000 - val_sparse_categorical_accuracy: 0.3084 Epoch 4/20 125/125 [==============================] - 27s 215ms/step - loss: 2.7148 - sparse_categorical_accuracy: 0.4939 - val_loss: 6823221.0000 - val_sparse_categorical_accuracy: 0.3304 Epoch 5/20 125/125 [==============================] - 27s 215ms/step - loss: 2.5500 - sparse_categorical_accuracy: 0.5560 - val_loss: 675110905872323182592.0000 - val_sparse_categorical_accuracy: 0.4493 Epoch 6/20 125/125 [==============================] - 27s 215ms/step - loss: 2.3595 - sparse_categorical_accuracy: 0.6081 - val_loss: 600389124096.0000 - val_sparse_categorical_accuracy: 0.5749 Epoch 7/20 125/125 [==============================] - 27s 215ms/step - loss: 2.2485 - sparse_categorical_accuracy: 0.6394 - val_loss: 680423464582760103936.0000 - val_sparse_categorical_accuracy: 0.4912 Epoch 8/20 125/125 [==============================] - 27s 215ms/step - loss: 2.1945 - sparse_categorical_accuracy: 0.6575 - val_loss: 44108689408.0000 - val_sparse_categorical_accuracy: 0.6410 Epoch 9/20 125/125 [==============================] - 27s 215ms/step - loss: 2.1318 - sparse_categorical_accuracy: 0.6725 - val_loss: 873314112.0000 - val_sparse_categorical_accuracy: 0.6112 Epoch 10/20 125/125 [==============================] - 27s 215ms/step - loss: 2.0140 - sparse_categorical_accuracy: 0.7018 - val_loss: 13168980992.0000 - val_sparse_categorical_accuracy: 0.6784 Epoch 11/20 125/125 [==============================] - 27s 215ms/step - loss: 1.9929 - sparse_categorical_accuracy: 0.7056 - val_loss: 36888236785664.0000 - val_sparse_categorical_accuracy: 0.6586 Epoch 12/20 125/125 [==============================] - 27s 215ms/step - loss: 1.9542 - sparse_categorical_accuracy: 0.7166 - val_loss: 85375.9844 - val_sparse_categorical_accuracy: 0.7026 Epoch 13/20 125/125 [==============================] - 27s 215ms/step - loss: 1.8648 - sparse_categorical_accuracy: 0.7447 - val_loss: 7.7962 - val_sparse_categorical_accuracy: 0.5441 Epoch 14/20 125/125 [==============================] - 27s 215ms/step - loss: 1.9016 - sparse_categorical_accuracy: 0.7444 - val_loss: 66469.9062 - val_sparse_categorical_accuracy: 0.6134 Epoch 15/20 125/125 [==============================] - 27s 215ms/step - loss: 1.8003 - sparse_categorical_accuracy: 0.7695 - val_loss: 519227186348032.0000 - val_sparse_categorical_accuracy: 0.6949 Epoch 16/20 125/125 [==============================] - 27s 215ms/step - loss: 1.8019 - sparse_categorical_accuracy: 0.7702 - val_loss: 5263462156149188460544.0000 - val_sparse_categorical_accuracy: 0.6520 Epoch 17/20 125/125 [==============================] - 27s 215ms/step - loss: 1.7177 - sparse_categorical_accuracy: 0.7903 - val_loss: 142240048.0000 - val_sparse_categorical_accuracy: 0.7941 Epoch 18/20 
125/125 [==============================] - 27s 216ms/step - loss: 1.7548 - sparse_categorical_accuracy: 0.7855 - val_loss: 2.6049 - val_sparse_categorical_accuracy: 0.5022 Epoch 19/20 125/125 [==============================] - 27s 215ms/step - loss: 1.7101 - sparse_categorical_accuracy: 0.8003 - val_loss: 1152819181305987072.0000 - val_sparse_categorical_accuracy: 0.7753 Epoch 20/20 125/125 [==============================] - 27s 215ms/step - loss: 1.6812 - sparse_categorical_accuracy: 0.8176 - val_loss: 12854714433536.0000 - val_sparse_categorical_accuracy: 0.7390 Visualize predictions We can use matplotlib to visualize our trained model performance. data = test_dataset.take(1) points, labels = list(data)[0] points = points[:8, ...] labels = labels[:8, ...] # run test data through model preds = model.predict(points) preds = tf.math.argmax(preds, -1) points = points.numpy() # plot points with predicted class and label fig = plt.figure(figsize=(15, 10)) for i in range(8): ax = fig.add_subplot(2, 4, i + 1, projection=\"3d\") ax.scatter(points[i, :, 0], points[i, :, 1], points[i, :, 2]) ax.set_title( \"pred: {:}, label: {:}\".format( CLASS_MAP[preds[i].numpy()], CLASS_MAP[labels.numpy()[i]] ) ) ax.set_axis_off() plt.show() png Implementation of a PointNet-based model for segmenting point clouds. Introduction A \"point cloud\" is an important type of data structure for storing geometric shape data. Due to its irregular format, it's often transformed into regular 3D voxel grids or collections of images before being used in deep learning applications, a step which makes the data unnecessarily large. The PointNet family of models solves this problem by directly consuming point clouds, respecting the permutation-invariance property of the point data. The PointNet family of models provides a simple, unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. In this example, we demonstrate the implementation of the PointNet architecture for shape segmentation. References PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation Point cloud classification with PointNet Spatial Transformer Networks Imports import os import json import random import numpy as np import pandas as pd from tqdm import tqdm from glob import glob import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt Downloading Dataset The ShapeNet dataset is an ongoing effort to establish a richly-annotated, large-scale dataset of 3D shapes. ShapeNetCore is a subset of the full ShapeNet dataset with clean single 3D models and manually verified category and alignment annotations. It covers 55 common object categories, with about 51,300 unique 3D models. For this example, we use one of the 12 object categories of PASCAL 3D+, included as part of the ShapenetCore dataset. dataset_url = \"https://git.io/JiY4i\" dataset_path = keras.utils.get_file( fname=\"shapenet.zip\", origin=dataset_url, cache_subdir=\"datasets\", hash_algorithm=\"auto\", extract=True, archive_format=\"auto\", cache_dir=\"datasets\", ) Loading the dataset We parse the dataset metadata in order to easily map model categories to their respective directories and segmentation classes to colors for the purpose of visualization. 
with open(\"/tmp/.keras/datasets/PartAnnotation/metadata.json\") as json_file: metadata = json.load(json_file) print(metadata) {'Airplane': {'directory': '02691156', 'lables': ['wing', 'body', 'tail', 'engine'], 'colors': ['blue', 'green', 'red', 'pink']}, 'Bag': {'directory': '02773838', 'lables': ['handle', 'body'], 'colors': ['blue', 'green']}, 'Cap': {'directory': '02954340', 'lables': ['panels', 'peak'], 'colors': ['blue', 'green']}, 'Car': {'directory': '02958343', 'lables': ['wheel', 'hood', 'roof'], 'colors': ['blue', 'green', 'red']}, 'Chair': {'directory': '03001627', 'lables': ['leg', 'arm', 'back', 'seat'], 'colors': ['blue', 'green', 'red', 'pink']}, 'Earphone': {'directory': '03261776', 'lables': ['earphone', 'headband'], 'colors': ['blue', 'green']}, 'Guitar': {'directory': '03467517', 'lables': ['head', 'body', 'neck'], 'colors': ['blue', 'green', 'red']}, 'Knife': {'directory': '03624134', 'lables': ['handle', 'blade'], 'colors': ['blue', 'green']}, 'Lamp': {'directory': '03636649', 'lables': ['canopy', 'lampshade', 'base'], 'colors': ['blue', 'green', 'red']}, 'Laptop': {'directory': '03642806', 'lables': ['keyboard'], 'colors': ['blue']}, 'Motorbike': {'directory': '03790512', 'lables': ['wheel', 'handle', 'gas_tank', 'light', 'seat'], 'colors': ['blue', 'green', 'red', 'pink', 'yellow']}, 'Mug': {'directory': '03797390', 'lables': ['handle'], 'colors': ['blue']}, 'Pistol': {'directory': '03948459', 'lables': ['trigger_and_guard', 'handle', 'barrel'], 'colors': ['blue', 'green', 'red']}, 'Rocket': {'directory': '04099429', 'lables': ['nose', 'body', 'fin'], 'colors': ['blue', 'green', 'red']}, 'Skateboard': {'directory': '04225987', 'lables': ['wheel', 'deck'], 'colors': ['blue', 'green']}, 'Table': {'directory': '04379243', 'lables': ['leg', 'top'], 'colors': ['blue', 'green']}} In this example, we train PointNet to segment the parts of an Airplane model. points_dir = \"/tmp/.keras/datasets/PartAnnotation/{}/points\".format( metadata[\"Airplane\"][\"directory\"] ) labels_dir = \"/tmp/.keras/datasets/PartAnnotation/{}/points_label\".format( metadata[\"Airplane\"][\"directory\"] ) LABELS = metadata[\"Airplane\"][\"lables\"] COLORS = metadata[\"Airplane\"][\"colors\"] VAL_SPLIT = 0.2 NUM_SAMPLE_POINTS = 1024 BATCH_SIZE = 32 EPOCHS = 60 INITIAL_LR = 1e-3 Structuring the dataset We generate the following in-memory data structures from the Airplane point clouds and their labels: point_clouds is a list of np.array objects that represent the point cloud data in the form of x, y and z coordinates. Axis 0 represents the number of points in the point cloud, while axis 1 represents the coordinates. all_labels is the list that represents the label of each coordinate as a string (needed mainly for visualization purposes). test_point_clouds is in the same format as point_clouds, but doesn't have corresponding the labels of the point clouds. all_labels is a list of np.array objects that represent the point cloud labels for each coordinate, corresponding to the point_clouds list. point_cloud_labels is a list of np.array objects that represent the point cloud labels for each coordinate in one-hot encoded form, corresponding to the point_clouds list. point_clouds, test_point_clouds = [], [] point_cloud_labels, all_labels = [], [] points_files = glob(os.path.join(points_dir, \"*.pts\")) for point_file in tqdm(points_files): point_cloud = np.loadtxt(point_file) if point_cloud.shape[0] < NUM_SAMPLE_POINTS: continue # Get the file-id of the current point cloud for parsing its # labels. 
file_id = point_file.split(\"/\")[-1].split(\".\")[0] label_data, num_labels = {}, 0 for label in LABELS: label_file = os.path.join(labels_dir, label, file_id + \".seg\") if os.path.exists(label_file): label_data[label] = np.loadtxt(label_file).astype(\"float32\") num_labels = len(label_data[label]) # Point clouds having labels will be our training samples. try: label_map = [\"none\"] * num_labels for label in LABELS: for i, data in enumerate(label_data[label]): label_map[i] = label if data == 1 else label_map[i] label_data = [ LABELS.index(label) if label != \"none\" else len(LABELS) for label in label_map ] # Apply one-hot encoding to the dense label representation. label_data = keras.utils.to_categorical(label_data, num_classes=len(LABELS) + 1) point_clouds.append(point_cloud) point_cloud_labels.append(label_data) all_labels.append(label_map) except KeyError: test_point_clouds.append(point_cloud) 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4045/4045 [03:35<00:00, 18.76it/s] Next, we take a look at some samples from the in-memory arrays we just generated: for _ in range(5): i = random.randint(0, len(point_clouds) - 1) print(f\"point_clouds[{i}].shape:\", point_clouds[0].shape) print(f\"point_cloud_labels[{i}].shape:\", point_cloud_labels[0].shape) for j in range(5): print( f\"all_labels[{i}][{j}]:\", all_labels[i][j], f\"\tpoint_cloud_labels[{i}][{j}]:\", point_cloud_labels[i][j], \"\n\", ) point_clouds[475].shape: (2602, 3) point_cloud_labels[475].shape: (2602, 5) all_labels[475][0]: body point_cloud_labels[475][0]: [0. 1. 0. 0. 0.] all_labels[475][1]: engine point_cloud_labels[475][1]: [0. 0. 0. 1. 0.] all_labels[475][2]: body point_cloud_labels[475][2]: [0. 1. 0. 0. 0.] all_labels[475][3]: body point_cloud_labels[475][3]: [0. 1. 0. 0. 0.] all_labels[475][4]: wing point_cloud_labels[475][4]: [1. 0. 0. 0. 0.] point_clouds[2712].shape: (2602, 3) point_cloud_labels[2712].shape: (2602, 5) all_labels[2712][0]: tail point_cloud_labels[2712][0]: [0. 0. 1. 0. 0.] all_labels[2712][1]: wing point_cloud_labels[2712][1]: [1. 0. 0. 0. 0.] all_labels[2712][2]: engine point_cloud_labels[2712][2]: [0. 0. 0. 1. 0.] all_labels[2712][3]: wing point_cloud_labels[2712][3]: [1. 0. 0. 0. 0.] all_labels[2712][4]: wing point_cloud_labels[2712][4]: [1. 0. 0. 0. 0.] point_clouds[1413].shape: (2602, 3) point_cloud_labels[1413].shape: (2602, 5) all_labels[1413][0]: body point_cloud_labels[1413][0]: [0. 1. 0. 0. 0.] all_labels[1413][1]: tail point_cloud_labels[1413][1]: [0. 0. 1. 0. 0.] all_labels[1413][2]: tail point_cloud_labels[1413][2]: [0. 0. 1. 0. 0.] all_labels[1413][3]: tail point_cloud_labels[1413][3]: [0. 0. 1. 0. 0.] all_labels[1413][4]: tail point_cloud_labels[1413][4]: [0. 0. 1. 0. 0.] point_clouds[1207].shape: (2602, 3) point_cloud_labels[1207].shape: (2602, 5) all_labels[1207][0]: tail point_cloud_labels[1207][0]: [0. 0. 1. 0. 0.] all_labels[1207][1]: wing point_cloud_labels[1207][1]: [1. 0. 0. 0. 0.] all_labels[1207][2]: wing point_cloud_labels[1207][2]: [1. 0. 0. 0. 0.] all_labels[1207][3]: body point_cloud_labels[1207][3]: [0. 1. 0. 0. 0.] all_labels[1207][4]: body point_cloud_labels[1207][4]: [0. 1. 0. 0. 0.] point_clouds[2492].shape: (2602, 3) point_cloud_labels[2492].shape: (2602, 5) all_labels[2492][0]: engine point_cloud_labels[2492][0]: [0. 0. 0. 1. 0.] all_labels[2492][1]: body point_cloud_labels[2492][1]: [0. 1. 0. 0. 0.] 
all_labels[2492][2]: body point_cloud_labels[2492][2]: [0. 1. 0. 0. 0.] all_labels[2492][3]: body point_cloud_labels[2492][3]: [0. 1. 0. 0. 0.] all_labels[2492][4]: engine point_cloud_labels[2492][4]: [0. 0. 0. 1. 0.] Now, let's visualize some of the point clouds along with their labels. def visualize_data(point_cloud, labels): df = pd.DataFrame( data={ \"x\": point_cloud[:, 0], \"y\": point_cloud[:, 1], \"z\": point_cloud[:, 2], \"label\": labels, } ) fig = plt.figure(figsize=(15, 10)) ax = plt.axes(projection=\"3d\") for index, label in enumerate(LABELS): c_df = df[df[\"label\"] == label] try: ax.scatter( c_df[\"x\"], c_df[\"y\"], c_df[\"z\"], label=label, alpha=0.5, c=COLORS[index] ) except IndexError: pass ax.legend() plt.show() visualize_data(point_clouds[0], all_labels[0]) visualize_data(point_clouds[300], all_labels[300]) png png Preprocessing Note that all the point clouds that we have loaded consist of a variable number of points, which makes it difficult for us to batch them together. In order to overcome this problem, we randomly sample a fixed number of points from each point cloud. We also normalize the point clouds in order to make the data scale-invariant. for index in tqdm(range(len(point_clouds))): current_point_cloud = point_clouds[index] current_label_cloud = point_cloud_labels[index] current_labels = all_labels[index] num_points = len(current_point_cloud) # Randomly sampling respective indices. sampled_indices = random.sample(list(range(num_points)), NUM_SAMPLE_POINTS) # Sampling points corresponding to sampled indices. sampled_point_cloud = np.array([current_point_cloud[i] for i in sampled_indices]) # Sampling corresponding one-hot encoded labels. sampled_label_cloud = np.array([current_label_cloud[i] for i in sampled_indices]) # Sampling corresponding labels for visualization. sampled_labels = np.array([current_labels[i] for i in sampled_indices]) # Normalizing sampled point cloud. norm_point_cloud = sampled_point_cloud - np.mean(sampled_point_cloud, axis=0) norm_point_cloud /= np.max(np.linalg.norm(norm_point_cloud, axis=1)) point_clouds[index] = norm_point_cloud point_cloud_labels[index] = sampled_label_cloud all_labels[index] = sampled_labels 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3694/3694 [00:07<00:00, 478.67it/s] Let's visualize the sampled and normalized point clouds along with their corresponding labels. visualize_data(point_clouds[0], all_labels[0]) visualize_data(point_clouds[300], all_labels[300]) png png Creating TensorFlow datasets We create tf.data.Dataset objects for the training and validation data. We also augment the training point clouds by applying random jitter to them. 
def load_data(point_cloud_batch, label_cloud_batch): point_cloud_batch.set_shape([NUM_SAMPLE_POINTS, 3]) label_cloud_batch.set_shape([NUM_SAMPLE_POINTS, len(LABELS) + 1]) return point_cloud_batch, label_cloud_batch def augment(point_cloud_batch, label_cloud_batch): noise = tf.random.uniform( tf.shape(label_cloud_batch), -0.005, 0.005, dtype=tf.float64 ) point_cloud_batch += noise[:, :, :3] return point_cloud_batch, label_cloud_batch def generate_dataset(point_clouds, label_clouds, is_training=True): dataset = tf.data.Dataset.from_tensor_slices((point_clouds, label_clouds)) dataset = dataset.shuffle(BATCH_SIZE * 100) if is_training else dataset dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE) dataset = dataset.batch(batch_size=BATCH_SIZE) dataset = ( dataset.map(augment, num_parallel_calls=tf.data.AUTOTUNE) if is_training else dataset ) return dataset split_index = int(len(point_clouds) * (1 - VAL_SPLIT)) train_point_clouds = point_clouds[:split_index] train_label_cloud = point_cloud_labels[:split_index] total_training_examples = len(train_point_clouds) val_point_clouds = point_clouds[split_index:] val_label_cloud = point_cloud_labels[split_index:] print(\"Num train point clouds:\", len(train_point_clouds)) print(\"Num train point cloud labels:\", len(train_label_cloud)) print(\"Num val point clouds:\", len(val_point_clouds)) print(\"Num val point cloud labels:\", len(val_label_cloud)) train_dataset = generate_dataset(train_point_clouds, train_label_cloud) val_dataset = generate_dataset(val_point_clouds, val_label_cloud, is_training=False) print(\"Train Dataset:\", train_dataset) print(\"Validation Dataset:\", val_dataset) Num train point clouds: 2955 Num train point cloud labels: 2955 Num val point clouds: 739 Num val point cloud labels: 739 Train Dataset: Validation Dataset: PointNet model The figure below depicts the internals of the PointNet model family: Given that PointNet is meant to consume an unordered set of coordinates as its input data, its architecture needs to match the following characteristic properties of point cloud data: Permutation invariance Given the unstructured nature of point cloud data, a scan made up of n points has n! permutations. The subsequent data processing must be invariant to the different representations. In order to make PointNet invariant to input permutations, we use a symmetric function (such as max-pooling) once the n input points are mapped to higher-dimensional space. The result is a global feature vector that aims to capture an aggregate signature of the n input points. The global feature vector is used alongside local point features for segmentation. Transformation invariance Segmentation outputs should be unchanged if the object undergoes certain transformations, such as translation or scaling. For a given input point cloud, we apply an appropriate rigid or affine transformation to achieve pose normalization. Because each of the n input points are represented as a vector and are mapped to the embedding spaces independently, applying a geometric transformation simply amounts to matrix multiplying each point with a transformation matrix. This is motivated by the concept of Spatial Transformer Networks. The operations comprising the T-Net are motivated by the higher-level architecture of PointNet. 
MLPs (or fully-connected layers) are used to map the input points independently and identically to a higher-dimensional space; max-pooling is used to encode a global feature vector whose dimensionality is then reduced with fully-connected layers. The input-dependent features at the final fully-connected layer are then combined with globally trainable weights and biases, resulting in a 3-by-3 transformation matrix. Point interactions The interaction between neighboring points often carries useful information (i.e., a single point should not be treated in isolation). Whereas classification need only make use of global features, segmentation must be able to leverage local point features along with global point features. Note: The figures presented in this section have been taken from the original paper. Now that we know the pieces that compose the PointNet model, we can implement the model. We start by implementing the basic blocks, i.e., the convolutional block and the multi-layer perceptron block. def conv_block(x: tf.Tensor, filters: int, name: str) -> tf.Tensor: x = layers.Conv1D(filters, kernel_size=1, padding=\"valid\", name=f\"{name}_conv\")(x) x = layers.BatchNormalization(momentum=0.0, name=f\"{name}_batch_norm\")(x) return layers.Activation(\"relu\", name=f\"{name}_relu\")(x) def mlp_block(x: tf.Tensor, filters: int, name: str) -> tf.Tensor: x = layers.Dense(filters, name=f\"{name}_dense\")(x) x = layers.BatchNormalization(momentum=0.0, name=f\"{name}_batch_norm\")(x) return layers.Activation(\"relu\", name=f\"{name}_relu\")(x) We implement a regularizer (taken from this example) to enforce orthogonality in the feature space. This is needed to ensure that the magnitudes of the transformed features do not vary too much. class OrthogonalRegularizer(keras.regularizers.Regularizer): \"\"\"Reference: https://keras.io/examples/vision/pointnet/#build-a-model\"\"\" def __init__(self, num_features, l2reg=0.001): self.num_features = num_features self.l2reg = l2reg self.identity = tf.eye(num_features) def __call__(self, x): x = tf.reshape(x, (-1, self.num_features, self.num_features)) xxt = tf.tensordot(x, x, axes=(2, 2)) xxt = tf.reshape(xxt, (-1, self.num_features, self.num_features)) return tf.reduce_sum(self.l2reg * tf.square(xxt - self.identity)) def get_config(self): # Return the constructor arguments so the regularizer can be re-created from its config. return {\"num_features\": self.num_features, \"l2reg\": self.l2reg} The next piece is the transformation network, which we explained earlier. def transformation_net(inputs: tf.Tensor, num_features: int, name: str) -> tf.Tensor: \"\"\" Reference: https://keras.io/examples/vision/pointnet/#build-a-model. The `filters` values come from the original paper: https://arxiv.org/abs/1612.00593.
\"\"\" x = conv_block(inputs, filters=64, name=f\"{name}_1\") x = conv_block(x, filters=128, name=f\"{name}_2\") x = conv_block(x, filters=1024, name=f\"{name}_3\") x = layers.GlobalMaxPooling1D()(x) x = mlp_block(x, filters=512, name=f\"{name}_1_1\") x = mlp_block(x, filters=256, name=f\"{name}_2_1\") return layers.Dense( num_features * num_features, kernel_initializer=\"zeros\", bias_initializer=keras.initializers.Constant(np.eye(num_features).flatten()), activity_regularizer=OrthogonalRegularizer(num_features), name=f\"{name}_final\", )(x) def transformation_block(inputs: tf.Tensor, num_features: int, name: str) -> tf.Tensor: transformed_features = transformation_net(inputs, num_features, name=name) transformed_features = layers.Reshape((num_features, num_features))( transformed_features ) return layers.Dot(axes=(2, 1), name=f\"{name}_mm\")([inputs, transformed_features]) Finally, we piece the above blocks together and implement the segmentation model. def get_shape_segmentation_model(num_points: int, num_classes: int) -> keras.Model: input_points = keras.Input(shape=(None, 3)) # PointNet Classification Network. transformed_inputs = transformation_block( input_points, num_features=3, name=\"input_transformation_block\" ) features_64 = conv_block(transformed_inputs, filters=64, name=\"features_64\") features_128_1 = conv_block(features_64, filters=128, name=\"features_128_1\") features_128_2 = conv_block(features_128_1, filters=128, name=\"features_128_2\") transformed_features = transformation_block( features_128_2, num_features=128, name=\"transformed_features\" ) features_512 = conv_block(transformed_features, filters=512, name=\"features_512\") features_2048 = conv_block(features_512, filters=2048, name=\"pre_maxpool_block\") global_features = layers.MaxPool1D(pool_size=num_points, name=\"global_features\")( features_2048 ) global_features = tf.tile(global_features, [1, num_points, 1]) # Segmentation head. 
segmentation_input = layers.Concatenate(name=\"segmentation_input\")( [ features_64, features_128_1, features_128_2, transformed_features, features_512, global_features, ] ) segmentation_features = conv_block( segmentation_input, filters=128, name=\"segmentation_features\" ) outputs = layers.Conv1D( num_classes, kernel_size=1, activation=\"softmax\", name=\"segmentation_head\" )(segmentation_features) return keras.Model(input_points, outputs) Instantiate the model x, y = next(iter(train_dataset)) num_points = x.shape[1] num_classes = y.shape[-1] segmentation_model = get_shape_segmentation_model(num_points, num_classes) segmentation_model.summary() 2021-10-25 01:26:33.563133: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2) Model: \"model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, None, 3)] 0 __________________________________________________________________________________________________ input_transformation_block_1_co (None, None, 64) 256 input_1[0][0] __________________________________________________________________________________________________ input_transformation_block_1_ba (None, None, 64) 256 input_transformation_block_1_conv __________________________________________________________________________________________________ input_transformation_block_1_re (None, None, 64) 0 input_transformation_block_1_batc __________________________________________________________________________________________________ input_transformation_block_2_co (None, None, 128) 8320 input_transformation_block_1_relu __________________________________________________________________________________________________ input_transformation_block_2_ba (None, None, 128) 512 input_transformation_block_2_conv __________________________________________________________________________________________________ input_transformation_block_2_re (None, None, 128) 0 input_transformation_block_2_batc __________________________________________________________________________________________________ input_transformation_block_3_co (None, None, 1024) 132096 input_transformation_block_2_relu __________________________________________________________________________________________________ input_transformation_block_3_ba (None, None, 1024) 4096 input_transformation_block_3_conv __________________________________________________________________________________________________ input_transformation_block_3_re (None, None, 1024) 0 input_transformation_block_3_batc __________________________________________________________________________________________________ global_max_pooling1d (GlobalMax (None, 1024) 0 input_transformation_block_3_relu __________________________________________________________________________________________________ input_transformation_block_1_1_ (None, 512) 524800 global_max_pooling1d[0][0] __________________________________________________________________________________________________ input_transformation_block_1_1_ (None, 512) 2048 input_transformation_block_1_1_de __________________________________________________________________________________________________ input_transformation_block_1_1_ (None, 512) 0 input_transformation_block_1_1_ba 
__________________________________________________________________________________________________ input_transformation_block_2_1_ (None, 256) 131328 input_transformation_block_1_1_re __________________________________________________________________________________________________ input_transformation_block_2_1_ (None, 256) 1024 input_transformation_block_2_1_de __________________________________________________________________________________________________ input_transformation_block_2_1_ (None, 256) 0 input_transformation_block_2_1_ba __________________________________________________________________________________________________ input_transformation_block_fina (None, 9) 2313 input_transformation_block_2_1_re __________________________________________________________________________________________________ reshape (Reshape) (None, 3, 3) 0 input_transformation_block_final[ __________________________________________________________________________________________________ input_transformation_block_mm ( (None, None, 3) 0 input_1[0][0] reshape[0][0] __________________________________________________________________________________________________ features_64_conv (Conv1D) (None, None, 64) 256 input_transformation_block_mm[0][ __________________________________________________________________________________________________ features_64_batch_norm (BatchNo (None, None, 64) 256 features_64_conv[0][0] __________________________________________________________________________________________________ features_64_relu (Activation) (None, None, 64) 0 features_64_batch_norm[0][0] __________________________________________________________________________________________________ features_128_1_conv (Conv1D) (None, None, 128) 8320 features_64_relu[0][0] __________________________________________________________________________________________________ features_128_1_batch_norm (Batc (None, None, 128) 512 features_128_1_conv[0][0] __________________________________________________________________________________________________ features_128_1_relu (Activation (None, None, 128) 0 features_128_1_batch_norm[0][0] __________________________________________________________________________________________________ features_128_2_conv (Conv1D) (None, None, 128) 16512 features_128_1_relu[0][0] __________________________________________________________________________________________________ features_128_2_batch_norm (Batc (None, None, 128) 512 features_128_2_conv[0][0] __________________________________________________________________________________________________ features_128_2_relu (Activation (None, None, 128) 0 features_128_2_batch_norm[0][0] __________________________________________________________________________________________________ transformed_features_1_conv (Co (None, None, 64) 8256 features_128_2_relu[0][0] __________________________________________________________________________________________________ transformed_features_1_batch_no (None, None, 64) 256 transformed_features_1_conv[0][0] __________________________________________________________________________________________________ transformed_features_1_relu (Ac (None, None, 64) 0 transformed_features_1_batch_norm __________________________________________________________________________________________________ transformed_features_2_conv (Co (None, None, 128) 8320 transformed_features_1_relu[0][0] __________________________________________________________________________________________________ transformed_features_2_batch_no (None, 
None, 128) 512 transformed_features_2_conv[0][0] __________________________________________________________________________________________________ transformed_features_2_relu (Ac (None, None, 128) 0 transformed_features_2_batch_norm __________________________________________________________________________________________________ transformed_features_3_conv (Co (None, None, 1024) 132096 transformed_features_2_relu[0][0] __________________________________________________________________________________________________ transformed_features_3_batch_no (None, None, 1024) 4096 transformed_features_3_conv[0][0] __________________________________________________________________________________________________ transformed_features_3_relu (Ac (None, None, 1024) 0 transformed_features_3_batch_norm __________________________________________________________________________________________________ global_max_pooling1d_1 (GlobalM (None, 1024) 0 transformed_features_3_relu[0][0] __________________________________________________________________________________________________ transformed_features_1_1_dense (None, 512) 524800 global_max_pooling1d_1[0][0] __________________________________________________________________________________________________ transformed_features_1_1_batch_ (None, 512) 2048 transformed_features_1_1_dense[0] __________________________________________________________________________________________________ transformed_features_1_1_relu ( (None, 512) 0 transformed_features_1_1_batch_no __________________________________________________________________________________________________ transformed_features_2_1_dense (None, 256) 131328 transformed_features_1_1_relu[0][ __________________________________________________________________________________________________ transformed_features_2_1_batch_ (None, 256) 1024 transformed_features_2_1_dense[0] __________________________________________________________________________________________________ transformed_features_2_1_relu ( (None, 256) 0 transformed_features_2_1_batch_no __________________________________________________________________________________________________ transformed_features_final (Den (None, 16384) 4210688 transformed_features_2_1_relu[0][ __________________________________________________________________________________________________ reshape_1 (Reshape) (None, 128, 128) 0 transformed_features_final[0][0] __________________________________________________________________________________________________ transformed_features_mm (Dot) (None, None, 128) 0 features_128_2_relu[0][0] reshape_1[0][0] __________________________________________________________________________________________________ features_512_conv (Conv1D) (None, None, 512) 66048 transformed_features_mm[0][0] __________________________________________________________________________________________________ features_512_batch_norm (BatchN (None, None, 512) 2048 features_512_conv[0][0] __________________________________________________________________________________________________ features_512_relu (Activation) (None, None, 512) 0 features_512_batch_norm[0][0] __________________________________________________________________________________________________ pre_maxpool_block_conv (Conv1D) (None, None, 2048) 1050624 features_512_relu[0][0] __________________________________________________________________________________________________ pre_maxpool_block_batch_norm (B (None, None, 2048) 8192 pre_maxpool_block_conv[0][0] 
__________________________________________________________________________________________________ pre_maxpool_block_relu (Activat (None, None, 2048) 0 pre_maxpool_block_batch_norm[0][0 __________________________________________________________________________________________________ global_features (MaxPooling1D) (None, None, 2048) 0 pre_maxpool_block_relu[0][0] __________________________________________________________________________________________________ tf.tile (TFOpLambda) (None, None, 2048) 0 global_features[0][0] __________________________________________________________________________________________________ segmentation_input (Concatenate (None, None, 3008) 0 features_64_relu[0][0] features_128_1_relu[0][0] features_128_2_relu[0][0] transformed_features_mm[0][0] features_512_relu[0][0] tf.tile[0][0] __________________________________________________________________________________________________ segmentation_features_conv (Con (None, None, 128) 385152 segmentation_input[0][0] __________________________________________________________________________________________________ segmentation_features_batch_nor (None, None, 128) 512 segmentation_features_conv[0][0] __________________________________________________________________________________________________ segmentation_features_relu (Act (None, None, 128) 0 segmentation_features_batch_norm[ __________________________________________________________________________________________________ segmentation_head (Conv1D) (None, None, 5) 645 segmentation_features_relu[0][0] ================================================================================================== Total params: 7,370,062 Trainable params: 7,356,110 Non-trainable params: 13,952 __________________________________________________________________________________________________ Training For the training the authors recommend using a learning rate schedule that decays the initial learning rate by half every 20 epochs. In this example, we resort to 15 epochs. training_step_size = total_training_examples // BATCH_SIZE total_training_steps = training_step_size * EPOCHS print(f\"Total training steps: {total_training_steps}.\") lr_schedule = keras.optimizers.schedules.PiecewiseConstantDecay( boundaries=[training_step_size * 15, training_step_size * 15], values=[INITIAL_LR, INITIAL_LR * 0.5, INITIAL_LR * 0.25], ) steps = tf.range(total_training_steps, dtype=tf.int32) lrs = [lr_schedule(step) for step in steps] plt.plot(lrs) plt.xlabel(\"Steps\") plt.ylabel(\"Learning Rate\") plt.show() Total training steps: 5520. png Finally, we implement a utility for running our experiments and launch model training. 
def run_experiment(epochs): segmentation_model = get_shape_segmentation_model(num_points, num_classes) segmentation_model.compile( optimizer=keras.optimizers.Adam(learning_rate=lr_schedule), loss=keras.losses.CategoricalCrossentropy(), metrics=[\"accuracy\"], ) checkpoint_filepath = \"/tmp/checkpoint\" checkpoint_callback = keras.callbacks.ModelCheckpoint( checkpoint_filepath, monitor=\"val_loss\", save_best_only=True, save_weights_only=True, ) history = segmentation_model.fit( train_dataset, validation_data=val_dataset, epochs=epochs, callbacks=[checkpoint_callback], ) segmentation_model.load_weights(checkpoint_filepath) return segmentation_model, history segmentation_model, history = run_experiment(epochs=EPOCHS) Epoch 1/60 93/93 [==============================] - 28s 127ms/step - loss: 5.3556 - accuracy: 0.7448 - val_loss: 5.8386 - val_accuracy: 0.7471 Epoch 2/60 93/93 [==============================] - 11s 117ms/step - loss: 4.7077 - accuracy: 0.8181 - val_loss: 5.2614 - val_accuracy: 0.7793 Epoch 3/60 93/93 [==============================] - 11s 118ms/step - loss: 4.6566 - accuracy: 0.8301 - val_loss: 4.7907 - val_accuracy: 0.8269 Epoch 4/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6059 - accuracy: 0.8406 - val_loss: 4.6031 - val_accuracy: 0.8482 Epoch 5/60 93/93 [==============================] - 11s 118ms/step - loss: 4.5828 - accuracy: 0.8444 - val_loss: 4.7692 - val_accuracy: 0.8220 Epoch 6/60 93/93 [==============================] - 11s 118ms/step - loss: 4.6150 - accuracy: 0.8408 - val_loss: 5.4460 - val_accuracy: 0.8192 Epoch 7/60 93/93 [==============================] - 11s 117ms/step - loss: 67.5943 - accuracy: 0.7378 - val_loss: 1617.1846 - val_accuracy: 0.5191 Epoch 8/60 93/93 [==============================] - 11s 117ms/step - loss: 15.2910 - accuracy: 0.6651 - val_loss: 8.1014 - val_accuracy: 0.7046 Epoch 9/60 93/93 [==============================] - 11s 117ms/step - loss: 6.8878 - accuracy: 0.7368 - val_loss: 14.2311 - val_accuracy: 0.6949 Epoch 10/60 93/93 [==============================] - 11s 117ms/step - loss: 5.8362 - accuracy: 0.7549 - val_loss: 14.6942 - val_accuracy: 0.6350 Epoch 11/60 93/93 [==============================] - 11s 117ms/step - loss: 5.4777 - accuracy: 0.7648 - val_loss: 44.1037 - val_accuracy: 0.6422 Epoch 12/60 93/93 [==============================] - 11s 117ms/step - loss: 5.2688 - accuracy: 0.7712 - val_loss: 4.9977 - val_accuracy: 0.7692 Epoch 13/60 93/93 [==============================] - 11s 117ms/step - loss: 5.1041 - accuracy: 0.7837 - val_loss: 6.0642 - val_accuracy: 0.7577 Epoch 14/60 93/93 [==============================] - 11s 117ms/step - loss: 5.0011 - accuracy: 0.7862 - val_loss: 4.9313 - val_accuracy: 0.7840 Epoch 15/60 93/93 [==============================] - 11s 117ms/step - loss: 4.8910 - accuracy: 0.7953 - val_loss: 5.8368 - val_accuracy: 0.7725 Epoch 16/60 93/93 [==============================] - 11s 117ms/step - loss: 4.8698 - accuracy: 0.8074 - val_loss: 73.0260 - val_accuracy: 0.7251 Epoch 17/60 93/93 [==============================] - 11s 117ms/step - loss: 4.8299 - accuracy: 0.8109 - val_loss: 17.1503 - val_accuracy: 0.7415 Epoch 18/60 93/93 [==============================] - 11s 117ms/step - loss: 4.8147 - accuracy: 0.8111 - val_loss: 62.2765 - val_accuracy: 0.7344 Epoch 19/60 93/93 [==============================] - 11s 117ms/step - loss: 4.8316 - accuracy: 0.8141 - val_loss: 5.2200 - val_accuracy: 0.7890 Epoch 20/60 93/93 [==============================] - 11s 117ms/step - loss: 4.7853 - 
accuracy: 0.8142 - val_loss: 5.7062 - val_accuracy: 0.7719 Epoch 21/60 93/93 [==============================] - 11s 117ms/step - loss: 4.7753 - accuracy: 0.8157 - val_loss: 6.2089 - val_accuracy: 0.7839 Epoch 22/60 93/93 [==============================] - 11s 117ms/step - loss: 4.7681 - accuracy: 0.8161 - val_loss: 5.1077 - val_accuracy: 0.8021 Epoch 23/60 93/93 [==============================] - 11s 117ms/step - loss: 4.7554 - accuracy: 0.8187 - val_loss: 4.7912 - val_accuracy: 0.7912 Epoch 24/60 93/93 [==============================] - 11s 117ms/step - loss: 4.7355 - accuracy: 0.8197 - val_loss: 4.9164 - val_accuracy: 0.7978 Epoch 25/60 93/93 [==============================] - 11s 117ms/step - loss: 4.7483 - accuracy: 0.8197 - val_loss: 13.4724 - val_accuracy: 0.7631 Epoch 26/60 93/93 [==============================] - 11s 117ms/step - loss: 4.7200 - accuracy: 0.8218 - val_loss: 8.3074 - val_accuracy: 0.7596 Epoch 27/60 93/93 [==============================] - 11s 118ms/step - loss: 4.7192 - accuracy: 0.8231 - val_loss: 12.4468 - val_accuracy: 0.7591 Epoch 28/60 93/93 [==============================] - 11s 117ms/step - loss: 4.7151 - accuracy: 0.8241 - val_loss: 23.8681 - val_accuracy: 0.7689 Epoch 29/60 93/93 [==============================] - 11s 117ms/step - loss: 4.7096 - accuracy: 0.8237 - val_loss: 4.9069 - val_accuracy: 0.8104 Epoch 30/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6991 - accuracy: 0.8257 - val_loss: 4.9858 - val_accuracy: 0.7950 Epoch 31/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6852 - accuracy: 0.8260 - val_loss: 5.0130 - val_accuracy: 0.7678 Epoch 32/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6630 - accuracy: 0.8286 - val_loss: 4.8523 - val_accuracy: 0.7676 Epoch 33/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6837 - accuracy: 0.8281 - val_loss: 5.4347 - val_accuracy: 0.8095 Epoch 34/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6571 - accuracy: 0.8296 - val_loss: 10.4595 - val_accuracy: 0.7410 Epoch 35/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6460 - accuracy: 0.8321 - val_loss: 4.9189 - val_accuracy: 0.8083 Epoch 36/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6430 - accuracy: 0.8327 - val_loss: 5.8674 - val_accuracy: 0.7911 Epoch 37/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6530 - accuracy: 0.8309 - val_loss: 4.7946 - val_accuracy: 0.8032 Epoch 38/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6391 - accuracy: 0.8318 - val_loss: 5.0111 - val_accuracy: 0.8024 Epoch 39/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6521 - accuracy: 0.8336 - val_loss: 8.1558 - val_accuracy: 0.7727 Epoch 40/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6443 - accuracy: 0.8329 - val_loss: 42.8513 - val_accuracy: 0.7688 Epoch 41/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6316 - accuracy: 0.8342 - val_loss: 5.0960 - val_accuracy: 0.8066 Epoch 42/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6322 - accuracy: 0.8335 - val_loss: 5.0634 - val_accuracy: 0.8158 Epoch 43/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6175 - accuracy: 0.8370 - val_loss: 6.0642 - val_accuracy: 0.8062 Epoch 44/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6175 - accuracy: 0.8371 - val_loss: 11.1805 - val_accuracy: 0.7790 Epoch 45/60 93/93 
[==============================] - 11s 117ms/step - loss: 4.6056 - accuracy: 0.8377 - val_loss: 4.7359 - val_accuracy: 0.8145 Epoch 46/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6108 - accuracy: 0.8383 - val_loss: 5.7125 - val_accuracy: 0.7713 Epoch 47/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6103 - accuracy: 0.8377 - val_loss: 6.3271 - val_accuracy: 0.8105 Epoch 48/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6020 - accuracy: 0.8383 - val_loss: 14.2876 - val_accuracy: 0.7529 Epoch 49/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6035 - accuracy: 0.8382 - val_loss: 4.8244 - val_accuracy: 0.8143 Epoch 50/60 93/93 [==============================] - 11s 117ms/step - loss: 4.6076 - accuracy: 0.8381 - val_loss: 8.2636 - val_accuracy: 0.7528 Epoch 51/60 93/93 [==============================] - 11s 117ms/step - loss: 4.5927 - accuracy: 0.8399 - val_loss: 4.6473 - val_accuracy: 0.8266 Epoch 52/60 93/93 [==============================] - 11s 117ms/step - loss: 4.5927 - accuracy: 0.8408 - val_loss: 4.6443 - val_accuracy: 0.8276 Epoch 53/60 93/93 [==============================] - 11s 117ms/step - loss: 4.5852 - accuracy: 0.8413 - val_loss: 5.1300 - val_accuracy: 0.7768 Epoch 54/60 93/93 [==============================] - 11s 117ms/step - loss: 4.5787 - accuracy: 0.8426 - val_loss: 8.9590 - val_accuracy: 0.7582 Epoch 55/60 93/93 [==============================] - 11s 117ms/step - loss: 4.5837 - accuracy: 0.8410 - val_loss: 5.1501 - val_accuracy: 0.8117 Epoch 56/60 93/93 [==============================] - 11s 117ms/step - loss: 4.5875 - accuracy: 0.8422 - val_loss: 31.3518 - val_accuracy: 0.7590 Epoch 57/60 93/93 [==============================] - 11s 117ms/step - loss: 4.5821 - accuracy: 0.8427 - val_loss: 4.8853 - val_accuracy: 0.8144 Epoch 58/60 93/93 [==============================] - 11s 117ms/step - loss: 4.5751 - accuracy: 0.8446 - val_loss: 4.6653 - val_accuracy: 0.8222 Epoch 59/60 93/93 [==============================] - 11s 117ms/step - loss: 4.5752 - accuracy: 0.8447 - val_loss: 6.0078 - val_accuracy: 0.8014 Epoch 60/60 93/93 [==============================] - 11s 118ms/step - loss: 4.5695 - accuracy: 0.8452 - val_loss: 4.8178 - val_accuracy: 0.8192 Visualize the training landscape def plot_result(item): plt.plot(history.history[item], label=item) plt.plot(history.history[\"val_\" + item], label=\"val_\" + item) plt.xlabel(\"Epochs\") plt.ylabel(item) plt.title(\"Train and Validation {} Over Epochs\".format(item), fontsize=14) plt.legend() plt.grid() plt.show() plot_result(\"loss\") plot_result(\"accuracy\") png png Inference validation_batch = next(iter(val_dataset)) val_predictions = segmentation_model.predict(validation_batch[0]) print(f\"Validation prediction shape: {val_predictions.shape}\") def visualize_single_point_cloud(point_clouds, label_clouds, idx): label_map = LABELS + [\"none\"] point_cloud = point_clouds[idx] label_cloud = label_clouds[idx] visualize_data(point_cloud, [label_map[np.argmax(label)] for label in label_cloud]) idx = np.random.choice(len(validation_batch[0])) print(f\"Index selected: {idx}\") # Plotting with ground-truth. visualize_single_point_cloud(validation_batch[0], validation_batch[1], idx) # Plotting with predicted labels. 
visualize_single_point_cloud(validation_batch[0], val_predictions, idx) Validation prediction shape: (32, 1024, 5) Index selected: 24 png png Final notes If you are interested in learning more about this topic, you may find this repository useful. RandAugment for training an image classification model with improved robustness. Data augmentation is a very useful technique that can help to improve the translational invariance of convolutional neural networks (CNN). RandAugment is a stochastic data augmentation routine for vision data and was proposed in RandAugment: Practical automated data augmentation with a reduced search space. It is composed of strong augmentation transforms like color jitters, Gaussian blurs, saturations, etc. along with more traditional augmentation transforms such as random crops. RandAugment has two parameters: n that denotes the number of randomly selected augmentation transforms to apply sequentially m strength of all the augmentation transforms These parameters are tuned for a given dataset and a network architecture. The authors of RandAugment also provide pseudocode of RandAugment in the original paper (Figure 2). Recently, it has been a key component of works like Noisy Student Training and Unsupervised Data Augmentation for Consistency Training. It has been also central to the success of EfficientNets. This example requires TensorFlow 2.4 or higher, as well as imgaug, which can be installed using the following command: pip install imgaug Imports & setup import tensorflow as tf import numpy as np import matplotlib.pyplot as plt from tensorflow.keras import layers import tensorflow_datasets as tfds from imgaug import augmenters as iaa import imgaug as ia tfds.disable_progress_bar() tf.random.set_seed(42) ia.seed(42) Load the CIFAR10 dataset For this example, we will be using the CIFAR10 dataset. (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() print(f\"Total training examples: {len(x_train)}\") print(f\"Total test examples: {len(x_test)}\") Total training examples: 50000 Total test examples: 10000 Define hyperparameters AUTO = tf.data.AUTOTUNE BATCH_SIZE = 128 EPOCHS = 1 IMAGE_SIZE = 72 Initialize RandAugment object Now, we will initialize a RandAugment object from the imgaug.augmenters module with the parameters suggested by the RandAugment authors. rand_aug = iaa.RandAugment(n=3, m=7) def augment(images): # Input to `augment()` is a TensorFlow tensor which # is not supported by `imgaug`. This is why we first # convert it to its `numpy` variant. images = tf.cast(images, tf.uint8) return rand_aug(images=images.numpy()) Create TensorFlow Dataset objects Because RandAugment can only process NumPy arrays, it cannot be applied directly as part of the Dataset object (which expects TensorFlow tensors). To make RandAugment part of the dataset, we need to wrap it in a [tf.py_function](https://www.tensorflow.org/api_docs/python/tf/py_function). A tf.py_function is a TensorFlow operation (which, like any other TensorFlow operation, takes TF tensors as arguments and returns TensorFlow tensors) that is capable of running arbitrary Python code. Naturally, this Python code can only be executed on CPU (whereas the rest of the TensorFlow graph can be accelerated on GPU), which in some cases can cause significant slowdowns -- however, in this case, the Dataset pipeline will run asynchronously together with the model, and doing preprocessing on CPU will remain performant. 
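To make the mechanics concrete, here is a minimal, self-contained sketch of wrapping a NumPy-only routine with tf.py_function; the add_noise function below is a hypothetical stand-in for augment() and is not part of the original example.

```python
import numpy as np
import tensorflow as tf

def add_noise(images):
    # Runs eagerly in Python, so plain NumPy code works here.
    images = images.numpy()
    noise = np.random.normal(scale=0.1, size=images.shape).astype("float32")
    return images + noise

# A tiny in-memory dataset of 8 blank "images", batched first (as recommended below).
ds = tf.data.Dataset.from_tensor_slices(np.zeros((8, 4, 4, 3), dtype="float32")).batch(4)
# Wrap the NumPy routine so it can run inside the tf.data pipeline.
ds = ds.map(lambda x: tf.py_function(add_noise, inp=[x], Tout=tf.float32))
for batch in ds.take(1):
    print(batch.shape)  # (4, 4, 4, 3)
```

The actual pipeline below applies the same pattern, with augment() in place of the toy function.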
train_ds_rand = ( tf.data.Dataset.from_tensor_slices((x_train, y_train)) .shuffle(BATCH_SIZE * 100) .batch(BATCH_SIZE) .map( lambda x, y: (tf.image.resize(x, (IMAGE_SIZE, IMAGE_SIZE)), y), num_parallel_calls=AUTO, ) .map( lambda x, y: (tf.py_function(augment, [x], [tf.float32])[0], y), num_parallel_calls=AUTO, ) .prefetch(AUTO) ) test_ds = ( tf.data.Dataset.from_tensor_slices((x_test, y_test)) .batch(BATCH_SIZE) .map( lambda x, y: (tf.image.resize(x, (IMAGE_SIZE, IMAGE_SIZE)), y), num_parallel_calls=AUTO, ) .prefetch(AUTO) ) Note about using tf.py_function: As our augment() function is not a native TensorFlow operation chances are likely that it can turn into an expensive operation. This is why it is much better to apply it after batching our dataset. tf.py_function is not compatible with TPUs. So, if you have distributed TensorFlow training pipelines that use TPUs you cannot use tf.py_function. In that case, consider switching to a multi-GPU environment, or rewriting the contents of the function in pure TensorFlow. For comparison purposes, let's also define a simple augmentation pipeline consisting of random flips, random rotations, and random zoomings. simple_aug = tf.keras.Sequential( [ layers.Resizing(IMAGE_SIZE, IMAGE_SIZE), layers.RandomFlip(\"horizontal\"), layers.RandomRotation(factor=0.02), layers.RandomZoom( height_factor=0.2, width_factor=0.2 ), ] ) # Now, map the augmentation pipeline to our training dataset train_ds_simple = ( tf.data.Dataset.from_tensor_slices((x_train, y_train)) .shuffle(BATCH_SIZE * 100) .batch(BATCH_SIZE) .map(lambda x, y: (simple_aug(x), y), num_parallel_calls=AUTO) .prefetch(AUTO) ) Visualize the dataset augmented with RandAugment sample_images, _ = next(iter(train_ds_rand)) plt.figure(figsize=(10, 10)) for i, image in enumerate(sample_images[:9]): ax = plt.subplot(3, 3, i + 1) plt.imshow(image.numpy().astype(\"int\")) plt.axis(\"off\") png You are encouraged to run the above code block a couple of times to see different variations. Visualize the dataset augmented with simple_aug sample_images, _ = next(iter(train_ds_simple)) plt.figure(figsize=(10, 10)) for i, image in enumerate(sample_images[:9]): ax = plt.subplot(3, 3, i + 1) plt.imshow(image.numpy().astype(\"int\")) plt.axis(\"off\") png Define a model building utility function Now, we define a CNN model that is based on the ResNet50V2 architecture. Also, notice that the network already has a rescaling layer inside it. This eliminates the need to do any separate preprocessing on our dataset and is specifically very useful for deployment purposes. 
def get_training_model(): resnet50_v2 = tf.keras.applications.ResNet50V2( weights=None, include_top=True, input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3), classes=10, ) model = tf.keras.Sequential( [ layers.Input((IMAGE_SIZE, IMAGE_SIZE, 3)), layers.Rescaling(scale=1.0 / 127.5, offset=-1), resnet50_v2, ] ) return model get_training_model().summary() Model: \"sequential_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= rescaling (Rescaling) (None, 72, 72, 3) 0 _________________________________________________________________ resnet50v2 (Functional) (None, 10) 23585290 ================================================================= Total params: 23,585,290 Trainable params: 23,539,850 Non-trainable params: 45,440 _________________________________________________________________ We will train this network on two different versions of our dataset: One augmented with RandAugment. Another one augmented with simple_aug. Since RandAugment is known to enhance the robustness of models to common perturbations and corruptions, we will also evaluate our models on the CIFAR-10-C dataset, proposed in Benchmarking Neural Network Robustness to Common Corruptions and Perturbations by Hendrycks et al. The CIFAR-10-C dataset consists of 19 different image corruptions and perturbations (for example speckle noise, fog, Gaussian blur, etc.), each at varying severity levels. For this example we will be using the following configuration: cifar10_corrupted/saturate_5. The images from this configuration look like so: In the interest of reproducibility, we serialize the initial random weights of our network. initial_model = get_training_model() initial_model.save_weights(\"initial_weights.h5\") Train model with RandAugment rand_aug_model = get_training_model() rand_aug_model.load_weights(\"initial_weights.h5\") rand_aug_model.compile( loss=\"sparse_categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"] ) rand_aug_model.fit(train_ds_rand, validation_data=test_ds, epochs=EPOCHS) _, test_acc = rand_aug_model.evaluate(test_ds) print(\"Test accuracy: {:.2f}%\".format(test_acc * 100)) 391/391 [==============================] - 1199s 3s/step - loss: 2.0652 - accuracy: 0.2744 - val_loss: 1.6281 - val_accuracy: 0.4319 79/79 [==============================] - 46s 580ms/step - loss: 1.6281 - accuracy: 0.4319 Test accuracy: 43.19% Train model with simple_aug simple_aug_model = get_training_model() simple_aug_model.load_weights(\"initial_weights.h5\") simple_aug_model.compile( loss=\"sparse_categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"] ) simple_aug_model.fit(train_ds_simple, validation_data=test_ds, epochs=EPOCHS) _, test_acc = simple_aug_model.evaluate(test_ds) print(\"Test accuracy: {:.2f}%\".format(test_acc * 100)) 391/391 [==============================] - 1169s 3s/step - loss: 1.7628 - accuracy: 0.3862 - val_loss: 1.3458 - val_accuracy: 0.5305 79/79 [==============================] - 42s 527ms/step - loss: 1.3458 - accuracy: 0.5305 Test accuracy: 53.05% Load the CIFAR-10-C dataset and evaluate performance # Load and prepare the CIFAR-10-C dataset # (If it's not already downloaded, it takes ~10 minutes to download) cifar_10_c = tfds.load(\"cifar10_corrupted/saturate_5\", split=\"test\", as_supervised=True) cifar_10_c = cifar_10_c.batch(BATCH_SIZE).map( lambda x, y: (tf.image.resize(x, (IMAGE_SIZE, IMAGE_SIZE)), y), num_parallel_calls=AUTO, ) #
Evaluate `rand_aug_model` _, test_acc = rand_aug_model.evaluate(cifar_10_c, verbose=0) print( \"Accuracy with RandAugment on CIFAR-10-C (saturate_5): {:.2f}%\".format( test_acc * 100 ) ) # Evaluate `simple_aug_model` _, test_acc = simple_aug_model.evaluate(cifar_10_c, verbose=0) print( \"Accuracy with simple_aug on CIFAR-10-C (saturate_5): {:.2f}%\".format( test_acc * 100 ) ) Accuracy with RandAugment on CIFAR-10-C (saturate_5): 35.90% Accuracy with simple_aug on CIFAR-10-C (saturate_5): 47.34% For the purpose of this example, we trained the models for only a single epoch. On the CIFAR-10-C dataset, the model trained with RandAugment can reach a higher accuracy (for example, 76.64% in one experiment) than the model trained with simple_aug (e.g., 64.80%). RandAugment can also help stabilize the training. You can explore this notebook to check some of the results. In the notebook, you may notice that, at the expense of increased training time with RandAugment, we are able to carve out far better performance on the CIFAR-10-C dataset. You can run the same experiment on the other corruption and perturbation settings that come with the CIFAR-10-C dataset and see if RandAugment helps. You can also experiment with the different values of n and m in the RandAugment object. In the original paper, the authors show the impact of the individual augmentation transforms for a particular task and a range of ablation studies. You are welcome to check them out. RandAugment has shown great progress in improving the robustness of deep models for computer vision as shown in works like Noisy Student Training and FixMatch. This makes RandAugment quite a useful recipe for training different vision models. Implementation of NNCLR, a self-supervised learning method for computer vision. Introduction Self-supervised learning Self-supervised representation learning aims to obtain robust representations of samples from raw data without expensive labels or annotations. Early methods in this field focused on defining pretraining tasks which involved a surrogate task on a domain with ample weak supervision labels. Encoders trained to solve such tasks are expected to learn general features that might be useful for other downstream tasks requiring expensive annotations like image classification. Contrastive Learning A broad category of self-supervised learning techniques is those that use contrastive losses, which have been used in a wide range of computer vision applications like image similarity, dimensionality reduction (DrLIM) and face verification/identification. These methods learn a latent space that clusters positive samples together while pushing apart negative samples. NNCLR In this example, we implement NNCLR as proposed in the paper With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations, by Google Research and DeepMind. NNCLR learns self-supervised representations that go beyond single-instance positives, which allows for learning better features that are invariant to different viewpoints, deformations, and even intra-class variations. Clustering-based methods offer a great approach to go beyond single-instance positives, but assuming the entire cluster to be positives could hurt performance due to early over-generalization. Instead, NNCLR uses nearest neighbors in the learned representation space as positives.
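To make the nearest-neighbour idea concrete, here is an illustrative sketch (under assumptions, not the exact implementation used in this example) of looking up the closest embedding in a support set of previously computed embeddings via cosine similarity; the names nearest_neighbour and support_set are hypothetical.

```python
import tensorflow as tf

def nearest_neighbour(projections, support_set):
    """For each projected embedding, return the most similar entry in the support set."""
    projections = tf.math.l2_normalize(projections, axis=1)   # [batch, dim]
    support_set = tf.math.l2_normalize(support_set, axis=1)   # [queue, dim]
    # Cosine similarity between every projection and every support-set entry.
    similarities = tf.matmul(projections, support_set, transpose_b=True)  # [batch, queue]
    nn_indices = tf.argmax(similarities, axis=1)
    return tf.gather(support_set, nn_indices)                 # [batch, dim]

# Usage with random embeddings:
positives = nearest_neighbour(tf.random.normal((8, 128)), tf.random.normal((1000, 128)))
print(positives.shape)  # (8, 128)
```

Because the embeddings are L2-normalized, the dot product directly ranks neighbours by cosine similarity.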
In addition, NNCLR increases the performance of existing contrastive learning methods like SimCLR(Keras Example) and reduces the reliance of self-supervised methods on data augmentation strategies. Here is a great visualization by the paper authors showing how NNCLR builds on ideas from SimCLR: We can see that SimCLR uses two views of the same image as the positive pair. These two views, which are produced using random data augmentations, are fed through an encoder to obtain the positive embedding pair, we end up using two augmentations. NNCLR instead keeps a support set of embeddings representing the full data distribution, and forms the positive pairs using nearest-neighbours. A support set is used as memory during training, similar to a queue (i.e. first-in-first-out) as in MoCo. This example requires TensorFlow 2.6 or higher, as well as tensorflow_datasets, which can be installed with this command: !pip install tensorflow-datasets Requirement already satisfied: tensorflow-datasets in /opt/conda/lib/python3.7/site-packages (4.3.0) Requirement already satisfied: requests>=2.19.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (2.25.1) Requirement already satisfied: typing-extensions in /home/jupyter/.local/lib/python3.7/site-packages (from tensorflow-datasets) (3.7.4.3) Requirement already satisfied: tensorflow-metadata in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (1.2.0) Requirement already satisfied: absl-py in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (0.13.0) Requirement already satisfied: promise in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (2.3) Requirement already satisfied: six in /home/jupyter/.local/lib/python3.7/site-packages (from tensorflow-datasets) (1.15.0) Requirement already satisfied: termcolor in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (1.1.0) Requirement already satisfied: protobuf>=3.12.2 in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (3.16.0) Requirement already satisfied: tqdm in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (4.62.2) Requirement already satisfied: attrs>=18.1.0 in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (21.2.0) Requirement already satisfied: future in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (0.18.2) Requirement already satisfied: dill in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (0.3.4) Requirement already satisfied: importlib-resources in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (5.2.2) Requirement already satisfied: numpy in /opt/conda/lib/python3.7/site-packages (from tensorflow-datasets) (1.19.5) Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->tensorflow-datasets) (2021.5.30) Requirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->tensorflow-datasets) (4.0.0) Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->tensorflow-datasets) (2.10) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->tensorflow-datasets) (1.26.6) Requirement already satisfied: zipp>=3.1.0 in /opt/conda/lib/python3.7/site-packages (from importlib-resources->tensorflow-datasets) (3.5.0) Requirement already satisfied: googleapis-common-protos<2,>=1.52.0 in 
/opt/conda/lib/python3.7/site-packages (from tensorflow-metadata->tensorflow-datasets) (1.53.0) Collecting absl-py Downloading absl_py-0.12.0-py3-none-any.whl (129 kB)  |████████████████████████████████| 129 kB 8.1 MB/s [?25hInstalling collected packages: absl-py Attempting uninstall: absl-py Found existing installation: absl-py 0.13.0 Uninstalling absl-py-0.13.0: Successfully uninstalled absl-py-0.13.0 ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '_flagvalues.cpython-37.pyc' Consider using the `--user` option or check the permissions.  Setup import matplotlib.pyplot as plt import tensorflow as tf import tensorflow_datasets as tfds from tensorflow import keras from tensorflow.keras import layers Hyperparameters A greater queue_size most likely means better performance as shown in the original paper, but introduces significant computational overhead. The authors show that the best results of NNCLR are achieved with a queue size of 98,304 (the largest queue_size they experimented on). We here use 10,000 to show a working example. AUTOTUNE = tf.data.AUTOTUNE shuffle_buffer = 5000 # The below two values are taken from https://www.tensorflow.org/datasets/catalog/stl10 labelled_train_images = 5000 unlabelled_images = 100000 temperature = 0.1 queue_size = 10000 contrastive_augmenter = { \"brightness\": 0.5, \"name\": \"contrastive_augmenter\", \"scale\": (0.2, 1.0), } classification_augmenter = { \"brightness\": 0.2, \"name\": \"classification_augmenter\", \"scale\": (0.5, 1.0), } input_shape = (96, 96, 3) width = 128 num_epochs = 25 steps_per_epoch = 200 Load the Dataset We load the STL-10 dataset from TensorFlow Datasets, an image recognition dataset for developing unsupervised feature learning, deep learning, self-taught learning algorithms. It is inspired by the CIFAR-10 dataset, with some modifications. dataset_name = \"stl10\" def prepare_dataset(): unlabeled_batch_size = unlabelled_images // steps_per_epoch labeled_batch_size = labelled_train_images // steps_per_epoch batch_size = unlabeled_batch_size + labeled_batch_size unlabeled_train_dataset = ( tfds.load( dataset_name, split=\"unlabelled\", as_supervised=True, shuffle_files=True ) .shuffle(buffer_size=shuffle_buffer) .batch(unlabeled_batch_size, drop_remainder=True) ) labeled_train_dataset = ( tfds.load(dataset_name, split=\"train\", as_supervised=True, shuffle_files=True) .shuffle(buffer_size=shuffle_buffer) .batch(labeled_batch_size, drop_remainder=True) ) test_dataset = ( tfds.load(dataset_name, split=\"test\", as_supervised=True) .batch(batch_size) .prefetch(buffer_size=AUTOTUNE) ) train_dataset = tf.data.Dataset.zip( (unlabeled_train_dataset, labeled_train_dataset) ).prefetch(buffer_size=AUTOTUNE) return batch_size, train_dataset, labeled_train_dataset, test_dataset batch_size, train_dataset, labeled_train_dataset, test_dataset = prepare_dataset() Downloading and preparing dataset 2.46 GiB (download: 2.46 GiB, generated: 1.86 GiB, total: 4.32 GiB) to /home/jupyter/tensorflow_datasets/stl10/1.0.0... Dl Completed...: 0 url [00:00, ? url/s] Dl Size...: 0 MiB [00:00, ? MiB/s] Extraction completed...: 0 file [00:00, ? 
file/s] Generating splits...: 0%| | 0/3 [00:00 device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:04.0, compute capability: 7.0 Shuffling stl10-train.tfrecord...: 0%| | 0/5000 [00:00bik\", tf.expand_dims(anchor_clustering, axis=1), neighbours_clustering ) # similarity shape: [batch_size, k_neighbours] similarity = layers.Lambda(lambda x: tf.squeeze(x, axis=1), name=\"similarity\")( similarity ) # Create the model. model = keras.Model( inputs=[anchor, neighbours], outputs=[similarity, anchor_clustering], name=\"clustering_learner\", ) return model Train model # If tune_encoder_during_clustering is set to False, # then freeze the encoder weights. for layer in encoder.layers: layer.trainable = tune_encoder_during_clustering # Create the clustering model and learner. clustering_model = create_clustering_model(encoder, num_clusters, name=\"clustering\") clustering_learner = create_clustering_learner(clustering_model) # Instantiate the model losses. losses = [ClustersConsistencyLoss(), ClustersEntropyLoss(entropy_loss_weight=5)] # Create the model inputs and labels. inputs = {\"anchors\": x_data, \"neighbours\": tf.gather(x_data, neighbours)} labels = tf.ones(shape=(x_data.shape[0])) # Compile the model. clustering_learner.compile( optimizer=tfa.optimizers.AdamW(learning_rate=0.0005, weight_decay=0.0001), loss=losses, ) # Begin training the model. clustering_learner.fit(x=inputs, y=labels, batch_size=512, epochs=50) Epoch 1/50 118/118 [==============================] - 20s 95ms/step - loss: 0.6655 - similarity_loss: 0.6642 - clustering_loss: 0.0013 Epoch 2/50 118/118 [==============================] - 10s 86ms/step - loss: 0.6361 - similarity_loss: 0.6325 - clustering_loss: 0.0036 Epoch 3/50 118/118 [==============================] - 10s 85ms/step - loss: 0.6129 - similarity_loss: 0.6070 - clustering_loss: 0.0059 Epoch 4/50 118/118 [==============================] - 10s 85ms/step - loss: 0.6005 - similarity_loss: 0.5930 - clustering_loss: 0.0075 Epoch 5/50 118/118 [==============================] - 10s 85ms/step - loss: 0.5923 - similarity_loss: 0.5849 - clustering_loss: 0.0074 Epoch 6/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5879 - similarity_loss: 0.5795 - clustering_loss: 0.0084 Epoch 7/50 118/118 [==============================] - 10s 85ms/step - loss: 0.5841 - similarity_loss: 0.5754 - clustering_loss: 0.0087 Epoch 8/50 118/118 [==============================] - 10s 85ms/step - loss: 0.5817 - similarity_loss: 0.5733 - clustering_loss: 0.0084 Epoch 9/50 118/118 [==============================] - 10s 85ms/step - loss: 0.5811 - similarity_loss: 0.5717 - clustering_loss: 0.0094 Epoch 10/50 118/118 [==============================] - 10s 85ms/step - loss: 0.5797 - similarity_loss: 0.5697 - clustering_loss: 0.0100 Epoch 11/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5767 - similarity_loss: 0.5676 - clustering_loss: 0.0091 Epoch 12/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5771 - similarity_loss: 0.5667 - clustering_loss: 0.0104 Epoch 13/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5755 - similarity_loss: 0.5661 - clustering_loss: 0.0094 Epoch 14/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5746 - similarity_loss: 0.5653 - clustering_loss: 0.0093 Epoch 15/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5743 - similarity_loss: 0.5640 - clustering_loss: 0.0103 Epoch 16/50 118/118 [==============================] - 10s 86ms/step - loss: 
0.5738 - similarity_loss: 0.5636 - clustering_loss: 0.0102 Epoch 17/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5732 - similarity_loss: 0.5627 - clustering_loss: 0.0106 Epoch 18/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5723 - similarity_loss: 0.5621 - clustering_loss: 0.0102 Epoch 19/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5711 - similarity_loss: 0.5615 - clustering_loss: 0.0096 Epoch 20/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5693 - similarity_loss: 0.5596 - clustering_loss: 0.0097 Epoch 21/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5699 - similarity_loss: 0.5600 - clustering_loss: 0.0099 Epoch 22/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5694 - similarity_loss: 0.5592 - clustering_loss: 0.0102 Epoch 23/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5703 - similarity_loss: 0.5595 - clustering_loss: 0.0108 Epoch 24/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5687 - similarity_loss: 0.5587 - clustering_loss: 0.0101 Epoch 25/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5688 - similarity_loss: 0.5585 - clustering_loss: 0.0103 Epoch 26/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5690 - similarity_loss: 0.5583 - clustering_loss: 0.0108 Epoch 27/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5679 - similarity_loss: 0.5572 - clustering_loss: 0.0107 Epoch 28/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5681 - similarity_loss: 0.5573 - clustering_loss: 0.0108 Epoch 29/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5682 - similarity_loss: 0.5572 - clustering_loss: 0.0111 Epoch 30/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5675 - similarity_loss: 0.5571 - clustering_loss: 0.0104 Epoch 31/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5679 - similarity_loss: 0.5562 - clustering_loss: 0.0116 Epoch 32/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5663 - similarity_loss: 0.5554 - clustering_loss: 0.0109 Epoch 33/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5665 - similarity_loss: 0.5556 - clustering_loss: 0.0109 Epoch 34/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5679 - similarity_loss: 0.5568 - clustering_loss: 0.0111 Epoch 35/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5680 - similarity_loss: 0.5563 - clustering_loss: 0.0117 Epoch 36/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5665 - similarity_loss: 0.5553 - clustering_loss: 0.0112 Epoch 37/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5674 - similarity_loss: 0.5556 - clustering_loss: 0.0118 Epoch 38/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5648 - similarity_loss: 0.5543 - clustering_loss: 0.0105 Epoch 39/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5653 - similarity_loss: 0.5549 - clustering_loss: 0.0103 Epoch 40/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5656 - similarity_loss: 0.5544 - clustering_loss: 0.0113 Epoch 41/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5644 - similarity_loss: 0.5542 - clustering_loss: 0.0102 Epoch 42/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5658 - 
similarity_loss: 0.5540 - clustering_loss: 0.0118 Epoch 43/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5655 - similarity_loss: 0.5539 - clustering_loss: 0.0116 Epoch 44/50 118/118 [==============================] - 10s 87ms/step - loss: 0.5662 - similarity_loss: 0.5543 - clustering_loss: 0.0119 Epoch 45/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5651 - similarity_loss: 0.5537 - clustering_loss: 0.0114 Epoch 46/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5635 - similarity_loss: 0.5534 - clustering_loss: 0.0101 Epoch 47/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5633 - similarity_loss: 0.5529 - clustering_loss: 0.0103 Epoch 48/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5643 - similarity_loss: 0.5526 - clustering_loss: 0.0117 Epoch 49/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5653 - similarity_loss: 0.5532 - clustering_loss: 0.0121 Epoch 50/50 118/118 [==============================] - 10s 86ms/step - loss: 0.5641 - similarity_loss: 0.5525 - clustering_loss: 0.0117 Plot training loss plt.plot(history.history[\"loss\"]) plt.ylabel(\"loss\") plt.xlabel(\"epoch\") plt.show() png Cluster analysis Assign images to clusters # Get the cluster probability distribution of the input images. clustering_probs = clustering_model.predict(x_data, batch_size=batch_size, verbose=1) # Get the cluster of the highest probability. cluster_assignments = tf.math.argmax(clustering_probs, axis=-1).numpy() # Store the clustering confidence. # Images with the highest clustering confidence are considered the 'prototypes' # of the clusters. cluster_confidence = tf.math.reduce_max(clustering_probs, axis=-1).numpy() 120/120 [==============================] - 3s 20ms/step Let's compute the cluster sizes clusters = defaultdict(list) for idx, c in enumerate(cluster_assignments): clusters[c].append((idx, cluster_confidence[idx])) for c in range(num_clusters): print(\"cluster\", c, \":\", len(clusters[c])) cluster 0 : 4132 cluster 1 : 4057 cluster 2 : 1713 cluster 3 : 2801 cluster 4 : 2511 cluster 5 : 2655 cluster 6 : 2517 cluster 7 : 4493 cluster 8 : 3687 cluster 9 : 1716 cluster 10 : 3397 cluster 11 : 3606 cluster 12 : 3325 cluster 13 : 4010 cluster 14 : 2188 cluster 15 : 3278 cluster 16 : 1902 cluster 17 : 1858 cluster 18 : 3828 cluster 19 : 2326 Notice that the clusters have roughly balanced sizes. Visualize cluster images Display the prototypes—instances with the highest clustering confidence—of each cluster: num_images = 8 plt.figure(figsize=(15, 15)) position = 1 for c in range(num_clusters): cluster_instances = sorted(clusters[c], key=lambda kv: kv[1], reverse=True) for j in range(num_images): image_idx = cluster_instances[j][0] plt.subplot(num_clusters, num_images, position) plt.imshow(x_data[image_idx].astype(\"uint8\")) plt.title(classes[y_data[image_idx][0]]) plt.axis(\"off\") position += 1 png Compute clustering accuracy First, we assign a label for each cluster based on the majority label of its images. Then, we compute the accuracy of each cluster by dividing the number of image with the majority label by the size of the cluster. 
cluster_label_counts = dict() for c in range(num_clusters): cluster_label_counts[c] = [0] * num_classes instances = clusters[c] for i, _ in instances: cluster_label_counts[c][y_data[i][0]] += 1 cluster_label_idx = np.argmax(cluster_label_counts[c]) correct_count = np.max(cluster_label_counts[c]) cluster_size = len(clusters[c]) accuracy = ( np.round((correct_count / cluster_size) * 100, 2) if cluster_size > 0 else 0 ) cluster_label = classes[cluster_label_idx] print(\"cluster\", c, \"label is:\", cluster_label, \" - accuracy:\", accuracy, \"%\") cluster 0 label is: frog - accuracy: 23.11 % cluster 1 label is: truck - accuracy: 23.56 % cluster 2 label is: bird - accuracy: 29.01 % cluster 3 label is: dog - accuracy: 16.67 % cluster 4 label is: truck - accuracy: 27.8 % cluster 5 label is: ship - accuracy: 36.91 % cluster 6 label is: deer - accuracy: 27.89 % cluster 7 label is: dog - accuracy: 23.84 % cluster 8 label is: airplane - accuracy: 21.7 % cluster 9 label is: bird - accuracy: 22.38 % cluster 10 label is: automobile - accuracy: 24.76 % cluster 11 label is: automobile - accuracy: 24.15 % cluster 12 label is: cat - accuracy: 17.44 % cluster 13 label is: truck - accuracy: 23.44 % cluster 14 label is: ship - accuracy: 31.67 % cluster 15 label is: airplane - accuracy: 41.06 % cluster 16 label is: deer - accuracy: 22.77 % cluster 17 label is: airplane - accuracy: 15.18 % cluster 18 label is: frog - accuracy: 33.31 % cluster 19 label is: deer - accuracy: 18.7 % Conclusion To improve the accuracy results, you can: 1) increase the number of epochs in the representation learning and the clustering phases; 2) allow the encoder weights to be tuned during the clustering phase; and 3) perform a final fine-tuning step through self-labeling, as described in the original SCAN paper. Note that unsupervised image clustering techniques are not expected to outperform the accuracy of supervised image classification techniques, rather showing that they can learn the semantics of the images and group them into clusters that are similar to their original classes. Contrastive pretraining with SimCLR for semi-supervised image classification on the STL-10 dataset. Introduction Semi-supervised learning Semi-supervised learning is a machine learning paradigm that deals with partially labeled datasets. When applying deep learning in the real world, one usually has to gather a large dataset to make it work well. However, while the cost of labeling scales linearly with the dataset size (labeling each example takes a constant time), model performance only scales sublinearly with it. This means that labeling more and more samples becomes less and less cost-efficient, while gathering unlabeled data is generally cheap, as it is usually readily available in large quantities. Semi-supervised learning offers to solve this problem by only requiring a partially labeled dataset, and by being label-efficient by utilizing the unlabeled examples for learning as well. In this example, we will pretrain an encoder with contrastive learning on the STL-10 semi-supervised dataset using no labels at all, and then fine-tune it using only its labeled subset. Contrastive learning On the highest level, the main idea behind contrastive learning is to learn representations that are invariant to image augmentations in a self-supervised manner. One problem with this objective is that it has a trivial degenerate solution: the case where the representations are constant, and do not depend at all on the input images. 
Contrastive learning avoids this trap by modifying the objective in the following way: it pulls representations of augmented versions/views of the same image closer to each other (contracting positives), while simultaneously pushing different images away from each other (contrasting negatives) in representation space. One such contrastive approach is SimCLR, which essentially identifies the core components needed to optimize this objective, and can achieve high performance by scaling this simple approach. Another approach is SimSiam (Keras example), whose main difference from SimCLR is that the former does not use any negatives in its loss. Therefore, it does not explicitly prevent the trivial solution, and, instead, avoids it implicitly by architecture design (asymmetric encoding paths using a predictor network and batch normalization (BatchNorm) are applied in the final layers). For further reading about SimCLR, check out the official Google AI blog post, and for an overview of self-supervised learning across both vision and language check out this blog post. Setup import matplotlib.pyplot as plt import tensorflow as tf import tensorflow_datasets as tfds from tensorflow import keras from tensorflow.keras import layers Hyperparameter setup # Dataset hyperparameters unlabeled_dataset_size = 100000 labeled_dataset_size = 5000 image_size = 96 image_channels = 3 # Algorithm hyperparameters num_epochs = 20 batch_size = 525 # Corresponds to 200 steps per epoch width = 128 temperature = 0.1 # Stronger augmentations for contrastive, weaker ones for supervised training contrastive_augmentation = {\"min_area\": 0.25, \"brightness\": 0.6, \"jitter\": 0.2} classification_augmentation = {\"min_area\": 0.75, \"brightness\": 0.3, \"jitter\": 0.1} Dataset During training we will simultaneously load a large batch of unlabeled images along with a smaller batch of labeled images. 
def prepare_dataset(): # Labeled and unlabeled samples are loaded synchronously # with batch sizes selected accordingly steps_per_epoch = (unlabeled_dataset_size + labeled_dataset_size) // batch_size unlabeled_batch_size = unlabeled_dataset_size // steps_per_epoch labeled_batch_size = labeled_dataset_size // steps_per_epoch print( f\"batch size is {unlabeled_batch_size} (unlabeled) + {labeled_batch_size} (labeled)\" ) unlabeled_train_dataset = ( tfds.load(\"stl10\", split=\"unlabelled\", as_supervised=True, shuffle_files=True) .shuffle(buffer_size=10 * unlabeled_batch_size) .batch(unlabeled_batch_size) ) labeled_train_dataset = ( tfds.load(\"stl10\", split=\"train\", as_supervised=True, shuffle_files=True) .shuffle(buffer_size=10 * labeled_batch_size) .batch(labeled_batch_size) ) test_dataset = ( tfds.load(\"stl10\", split=\"test\", as_supervised=True) .batch(batch_size) .prefetch(buffer_size=tf.data.AUTOTUNE) ) # Labeled and unlabeled datasets are zipped together train_dataset = tf.data.Dataset.zip( (unlabeled_train_dataset, labeled_train_dataset) ).prefetch(buffer_size=tf.data.AUTOTUNE) return train_dataset, labeled_train_dataset, test_dataset # Load STL10 dataset train_dataset, labeled_train_dataset, test_dataset = prepare_dataset() batch size is 500 (unlabeled) + 25 (labeled) Image augmentations The two most important image augmentations for contrastive learning are the following: Cropping: forces the model to encode different parts of the same image similarly, we implement it with the RandomTranslation and RandomZoom layers Color jitter: prevents a trivial color histogram-based solution to the task by distorting color histograms. A principled way to implement that is by affine transformations in color space. In this example we use random horizontal flips as well. Stronger augmentations are applied for contrastive learning, along with weaker ones for supervised classification to avoid overfitting on the few labeled examples. We implement random color jitter as a custom preprocessing layer. 
Using preprocessing layers for data augmentation has the following two advantages: The data augmentation will run on GPU in batches, so the training will not be bottlenecked by the data pipeline in environments with constrained CPU resources (such as a Colab Notebook, or a personal machine) Deployment is easier as the data preprocessing pipeline is encapsulated in the model, and does not have to be reimplemented when deploying it # Distorts the color distibutions of images class RandomColorAffine(layers.Layer): def __init__(self, brightness=0, jitter=0, **kwargs): super().__init__(**kwargs) self.brightness = brightness self.jitter = jitter def call(self, images, training=True): if training: batch_size = tf.shape(images)[0] # Same for all colors brightness_scales = 1 + tf.random.uniform( (batch_size, 1, 1, 1), minval=-self.brightness, maxval=self.brightness ) # Different for all colors jitter_matrices = tf.random.uniform( (batch_size, 1, 3, 3), minval=-self.jitter, maxval=self.jitter ) color_transforms = ( tf.eye(3, batch_shape=[batch_size, 1]) * brightness_scales + jitter_matrices ) images = tf.clip_by_value(tf.matmul(images, color_transforms), 0, 1) return images # Image augmentation module def get_augmenter(min_area, brightness, jitter): zoom_factor = 1.0 - tf.sqrt(min_area) return keras.Sequential( [ keras.Input(shape=(image_size, image_size, image_channels)), layers.Rescaling(1 / 255), layers.RandomFlip(\"horizontal\"), layers.RandomTranslation(zoom_factor / 2, zoom_factor / 2), layers.RandomZoom((-zoom_factor, 0.0), (-zoom_factor, 0.0)), RandomColorAffine(brightness, jitter), ] ) def visualize_augmentations(num_images): # Sample a batch from a dataset images = next(iter(train_dataset))[0][0][:num_images] # Apply augmentations augmented_images = zip( images, get_augmenter(**classification_augmentation)(images), get_augmenter(**contrastive_augmentation)(images), get_augmenter(**contrastive_augmentation)(images), ) row_titles = [ \"Original:\", \"Weakly augmented:\", \"Strongly augmented:\", \"Strongly augmented:\", ] plt.figure(figsize=(num_images * 2.2, 4 * 2.2), dpi=100) for column, image_row in enumerate(augmented_images): for row, image in enumerate(image_row): plt.subplot(4, num_images, row * num_images + column + 1) plt.imshow(image) if column == 0: plt.title(row_titles[row], loc=\"left\") plt.axis(\"off\") plt.tight_layout() visualize_augmentations(num_images=8) png Encoder architecture # Define the encoder architecture def get_encoder(): return keras.Sequential( [ keras.Input(shape=(image_size, image_size, image_channels)), layers.Conv2D(width, kernel_size=3, strides=2, activation=\"relu\"), layers.Conv2D(width, kernel_size=3, strides=2, activation=\"relu\"), layers.Conv2D(width, kernel_size=3, strides=2, activation=\"relu\"), layers.Conv2D(width, kernel_size=3, strides=2, activation=\"relu\"), layers.Flatten(), layers.Dense(width, activation=\"relu\"), ], name=\"encoder\", ) Supervised baseline model A baseline supervised model is trained using random initialization. 
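Before building the baseline, a brief aside on the encoder just defined (a worked shape calculation, not part of the original example): every Conv2D uses kernel_size=3, strides=2 and the default "valid" padding, so the spatial size shrinks as n -> (n - 3) // 2 + 1 per layer, taking the 96x96 input through 47, 23 and 11 down to 5. The Flatten layer therefore feeds 5 * 5 * 128 = 3200 values into the final Dense layer, which matches the encoder summary printed during pretraining below.

# Illustration only: shape check for the encoder defined above.
n = image_size  # 96
for _ in range(4):  # four Conv2D layers with kernel_size=3, strides=2, "valid" padding
    n = (n - 3) // 2 + 1
print(n)              # 5, so the last feature maps have shape (5, 5, width)
print(n * n * width)  # 3200 inputs to the Flatten -> Dense(width) head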
# Baseline supervised training with random initialization baseline_model = keras.Sequential( [ keras.Input(shape=(image_size, image_size, image_channels)), get_augmenter(**classification_augmentation), get_encoder(), layers.Dense(10), ], name=\"baseline_model\", ) baseline_model.compile( optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy(name=\"acc\")], ) baseline_history = baseline_model.fit( labeled_train_dataset, epochs=num_epochs, validation_data=test_dataset ) print( \"Maximal validation accuracy: {:.2f}%\".format( max(baseline_history.history[\"val_acc\"]) * 100 ) ) Epoch 1/20 200/200 [==============================] - 8s 26ms/step - loss: 2.1769 - acc: 0.1794 - val_loss: 1.7424 - val_acc: 0.3341 Epoch 2/20 200/200 [==============================] - 3s 16ms/step - loss: 1.8366 - acc: 0.3139 - val_loss: 1.6184 - val_acc: 0.3989 Epoch 3/20 200/200 [==============================] - 3s 16ms/step - loss: 1.6331 - acc: 0.3912 - val_loss: 1.5344 - val_acc: 0.4125 Epoch 4/20 200/200 [==============================] - 3s 16ms/step - loss: 1.5439 - acc: 0.4216 - val_loss: 1.4052 - val_acc: 0.4712 Epoch 5/20 200/200 [==============================] - 4s 17ms/step - loss: 1.4576 - acc: 0.4575 - val_loss: 1.4337 - val_acc: 0.4729 Epoch 6/20 200/200 [==============================] - 3s 17ms/step - loss: 1.3723 - acc: 0.4875 - val_loss: 1.4054 - val_acc: 0.4746 Epoch 7/20 200/200 [==============================] - 3s 17ms/step - loss: 1.3445 - acc: 0.5066 - val_loss: 1.3030 - val_acc: 0.5200 Epoch 8/20 200/200 [==============================] - 3s 17ms/step - loss: 1.3015 - acc: 0.5255 - val_loss: 1.2720 - val_acc: 0.5378 Epoch 9/20 200/200 [==============================] - 3s 16ms/step - loss: 1.2244 - acc: 0.5452 - val_loss: 1.3211 - val_acc: 0.5220 Epoch 10/20 200/200 [==============================] - 3s 17ms/step - loss: 1.2204 - acc: 0.5494 - val_loss: 1.2898 - val_acc: 0.5381 Epoch 11/20 200/200 [==============================] - 4s 17ms/step - loss: 1.1359 - acc: 0.5766 - val_loss: 1.2138 - val_acc: 0.5648 Epoch 12/20 200/200 [==============================] - 3s 17ms/step - loss: 1.1228 - acc: 0.5855 - val_loss: 1.2602 - val_acc: 0.5429 Epoch 13/20 200/200 [==============================] - 3s 17ms/step - loss: 1.0853 - acc: 0.6000 - val_loss: 1.2716 - val_acc: 0.5591 Epoch 14/20 200/200 [==============================] - 3s 17ms/step - loss: 1.0632 - acc: 0.6078 - val_loss: 1.2832 - val_acc: 0.5591 Epoch 15/20 200/200 [==============================] - 3s 16ms/step - loss: 1.0268 - acc: 0.6157 - val_loss: 1.1712 - val_acc: 0.5882 Epoch 16/20 200/200 [==============================] - 3s 17ms/step - loss: 0.9594 - acc: 0.6440 - val_loss: 1.2904 - val_acc: 0.5573 Epoch 17/20 200/200 [==============================] - 3s 17ms/step - loss: 0.9524 - acc: 0.6517 - val_loss: 1.1854 - val_acc: 0.5955 Epoch 18/20 200/200 [==============================] - 3s 17ms/step - loss: 0.9118 - acc: 0.6672 - val_loss: 1.1974 - val_acc: 0.5845 Epoch 19/20 200/200 [==============================] - 3s 17ms/step - loss: 0.9187 - acc: 0.6686 - val_loss: 1.1703 - val_acc: 0.6025 Epoch 20/20 200/200 [==============================] - 3s 17ms/step - loss: 0.8520 - acc: 0.6911 - val_loss: 1.1312 - val_acc: 0.6149 Maximal validation accuracy: 61.49% Self-supervised model for contrastive pretraining We pretrain an encoder on unlabeled images with a contrastive loss. 
A nonlinear projection head is attached to the top of the encoder, as it improves the quality of the encoder's representations. We use the InfoNCE/NT-Xent/N-pairs loss, which can be interpreted in the following way: we treat each image in the batch as if it had its own class, so each "class" has two examples (a pair of augmented views). Each view's representation is then compared to the representations of every view in the other augmented batch (and vice versa), using the temperature-scaled cosine similarities of the compared representations as logits. Finally, categorical cross-entropy is used as the "classification" loss. The following two metrics are used for monitoring the pretraining performance: Contrastive accuracy (SimCLR Table 5): a self-supervised metric, the ratio of cases in which the representation of an image is more similar to its differently augmented version than to the representation of any other image in the current batch. Self-supervised metrics can be used for hyperparameter tuning even when there are no labeled examples. Linear probing accuracy: linear probing is a popular metric for evaluating self-supervised classifiers. It is computed as the accuracy of a logistic regression classifier trained on top of the encoder's features. In our case, this is done by training a single dense layer on top of the frozen encoder. Note that, contrary to the traditional approach where the classifier is trained after the pretraining phase, in this example we train it during pretraining. This might slightly decrease its accuracy, but it lets us monitor its value during training, which helps with experimentation and debugging. Another widely used supervised metric is KNN accuracy, the accuracy of a KNN classifier trained on top of the encoder's features; it is not implemented in this example.
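For reference, the per-example form of the NT-Xent loss described above (a standard statement of the loss, written to match the implementation that follows) is:

\ell_i = -\log \frac{\exp\left(\mathrm{sim}(z_i, z'_i) / \tau\right)}{\sum_{j=1}^{N} \exp\left(\mathrm{sim}(z_i, z'_j) / \tau\right)}, \qquad \mathrm{sim}(u, v) = \frac{u^{\top} v}{\lVert u \rVert \, \lVert v \rVert}

where z_i and z'_i are the projections of the two augmented views of image i, \tau is the temperature, and N is the batch size. The code below additionally averages the loss over both view orders (loss_1_2 and loss_2_1), i.e. the symmetrized version mentioned in its comments.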
# Define the contrastive model with model-subclassing class ContrastiveModel(keras.Model): def __init__(self): super().__init__() self.temperature = temperature self.contrastive_augmenter = get_augmenter(**contrastive_augmentation) self.classification_augmenter = get_augmenter(**classification_augmentation) self.encoder = get_encoder() # Non-linear MLP as projection head self.projection_head = keras.Sequential( [ keras.Input(shape=(width,)), layers.Dense(width, activation=\"relu\"), layers.Dense(width), ], name=\"projection_head\", ) # Single dense layer for linear probing self.linear_probe = keras.Sequential( [layers.Input(shape=(width,)), layers.Dense(10)], name=\"linear_probe\" ) self.encoder.summary() self.projection_head.summary() self.linear_probe.summary() def compile(self, contrastive_optimizer, probe_optimizer, **kwargs): super().compile(**kwargs) self.contrastive_optimizer = contrastive_optimizer self.probe_optimizer = probe_optimizer # self.contrastive_loss will be defined as a method self.probe_loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True) self.contrastive_loss_tracker = keras.metrics.Mean(name=\"c_loss\") self.contrastive_accuracy = keras.metrics.SparseCategoricalAccuracy( name=\"c_acc\" ) self.probe_loss_tracker = keras.metrics.Mean(name=\"p_loss\") self.probe_accuracy = keras.metrics.SparseCategoricalAccuracy(name=\"p_acc\") @property def metrics(self): return [ self.contrastive_loss_tracker, self.contrastive_accuracy, self.probe_loss_tracker, self.probe_accuracy, ] def contrastive_loss(self, projections_1, projections_2): # InfoNCE loss (information noise-contrastive estimation) # NT-Xent loss (normalized temperature-scaled cross entropy) # Cosine similarity: the dot product of the l2-normalized feature vectors projections_1 = tf.math.l2_normalize(projections_1, axis=1) projections_2 = tf.math.l2_normalize(projections_2, axis=1) similarities = ( tf.matmul(projections_1, projections_2, transpose_b=True) / self.temperature ) # The similarity between the representations of two augmented views of the # same image should be higher than their similarity with other views batch_size = tf.shape(projections_1)[0] contrastive_labels = tf.range(batch_size) self.contrastive_accuracy.update_state(contrastive_labels, similarities) self.contrastive_accuracy.update_state( contrastive_labels, tf.transpose(similarities) ) # The temperature-scaled similarities are used as logits for cross-entropy # a symmetrized version of the loss is used here loss_1_2 = keras.losses.sparse_categorical_crossentropy( contrastive_labels, similarities, from_logits=True ) loss_2_1 = keras.losses.sparse_categorical_crossentropy( contrastive_labels, tf.transpose(similarities), from_logits=True ) return (loss_1_2 + loss_2_1) / 2 def train_step(self, data): (unlabeled_images, _), (labeled_images, labels) = data # Both labeled and unlabeled images are used, without labels images = tf.concat((unlabeled_images, labeled_images), axis=0) # Each image is augmented twice, differently augmented_images_1 = self.contrastive_augmenter(images, training=True) augmented_images_2 = self.contrastive_augmenter(images, training=True) with tf.GradientTape() as tape: features_1 = self.encoder(augmented_images_1, training=True) features_2 = self.encoder(augmented_images_2, training=True) # The representations are passed through a projection mlp projections_1 = self.projection_head(features_1, training=True) projections_2 = self.projection_head(features_2, training=True) contrastive_loss = 
self.contrastive_loss(projections_1, projections_2) gradients = tape.gradient( contrastive_loss, self.encoder.trainable_weights + self.projection_head.trainable_weights, ) self.contrastive_optimizer.apply_gradients( zip( gradients, self.encoder.trainable_weights + self.projection_head.trainable_weights, ) ) self.contrastive_loss_tracker.update_state(contrastive_loss) # Labels are only used in evalutation for an on-the-fly logistic regression preprocessed_images = self.classification_augmenter( labeled_images, training=True ) with tf.GradientTape() as tape: # the encoder is used in inference mode here to avoid regularization # and updating the batch normalization paramers if they are used features = self.encoder(preprocessed_images, training=False) class_logits = self.linear_probe(features, training=True) probe_loss = self.probe_loss(labels, class_logits) gradients = tape.gradient(probe_loss, self.linear_probe.trainable_weights) self.probe_optimizer.apply_gradients( zip(gradients, self.linear_probe.trainable_weights) ) self.probe_loss_tracker.update_state(probe_loss) self.probe_accuracy.update_state(labels, class_logits) return {m.name: m.result() for m in self.metrics} def test_step(self, data): labeled_images, labels = data # For testing the components are used with a training=False flag preprocessed_images = self.classification_augmenter( labeled_images, training=False ) features = self.encoder(preprocessed_images, training=False) class_logits = self.linear_probe(features, training=False) probe_loss = self.probe_loss(labels, class_logits) self.probe_loss_tracker.update_state(probe_loss) self.probe_accuracy.update_state(labels, class_logits) # Only the probe metrics are logged at test time return {m.name: m.result() for m in self.metrics[2:]} # Contrastive pretraining pretraining_model = ContrastiveModel() pretraining_model.compile( contrastive_optimizer=keras.optimizers.Adam(), probe_optimizer=keras.optimizers.Adam(), ) pretraining_history = pretraining_model.fit( train_dataset, epochs=num_epochs, validation_data=test_dataset ) print( \"Maximal validation accuracy: {:.2f}%\".format( max(pretraining_history.history[\"val_p_acc\"]) * 100 ) ) Model: \"encoder\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_4 (Conv2D) (None, 47, 47, 128) 3584 _________________________________________________________________ conv2d_5 (Conv2D) (None, 23, 23, 128) 147584 _________________________________________________________________ conv2d_6 (Conv2D) (None, 11, 11, 128) 147584 _________________________________________________________________ conv2d_7 (Conv2D) (None, 5, 5, 128) 147584 _________________________________________________________________ flatten_1 (Flatten) (None, 3200) 0 _________________________________________________________________ dense_2 (Dense) (None, 128) 409728 ================================================================= Total params: 856,064 Trainable params: 856,064 Non-trainable params: 0 _________________________________________________________________ Model: \"projection_head\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_3 (Dense) (None, 128) 16512 _________________________________________________________________ dense_4 (Dense) (None, 128) 16512 ================================================================= Total params: 
33,024 Trainable params: 33,024 Non-trainable params: 0 _________________________________________________________________ Model: \"linear_probe\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_5 (Dense) (None, 10) 1290 ================================================================= Total params: 1,290 Trainable params: 1,290 Non-trainable params: 0 _________________________________________________________________ Epoch 1/20 200/200 [==============================] - 70s 325ms/step - c_loss: 4.7788 - c_acc: 0.1340 - p_loss: 2.2030 - p_acc: 0.1922 - val_p_loss: 2.1043 - val_p_acc: 0.2540 Epoch 2/20 200/200 [==============================] - 67s 323ms/step - c_loss: 3.4836 - c_acc: 0.3047 - p_loss: 2.0159 - p_acc: 0.3030 - val_p_loss: 1.9833 - val_p_acc: 0.3120 Epoch 3/20 200/200 [==============================] - 65s 322ms/step - c_loss: 2.9157 - c_acc: 0.4187 - p_loss: 1.8896 - p_acc: 0.3598 - val_p_loss: 1.8621 - val_p_acc: 0.3556 Epoch 4/20 200/200 [==============================] - 67s 322ms/step - c_loss: 2.5837 - c_acc: 0.4867 - p_loss: 1.7965 - p_acc: 0.3912 - val_p_loss: 1.7400 - val_p_acc: 0.4006 Epoch 5/20 200/200 [==============================] - 67s 322ms/step - c_loss: 2.3462 - c_acc: 0.5403 - p_loss: 1.6961 - p_acc: 0.4138 - val_p_loss: 1.6655 - val_p_acc: 0.4190 Epoch 6/20 200/200 [==============================] - 65s 321ms/step - c_loss: 2.2214 - c_acc: 0.5714 - p_loss: 1.6325 - p_acc: 0.4322 - val_p_loss: 1.6242 - val_p_acc: 0.4366 Epoch 7/20 200/200 [==============================] - 67s 322ms/step - c_loss: 2.0618 - c_acc: 0.6098 - p_loss: 1.5793 - p_acc: 0.4470 - val_p_loss: 1.5348 - val_p_acc: 0.4663 Epoch 8/20 200/200 [==============================] - 65s 322ms/step - c_loss: 1.9532 - c_acc: 0.6360 - p_loss: 1.5173 - p_acc: 0.4652 - val_p_loss: 1.5248 - val_p_acc: 0.4700 Epoch 9/20 200/200 [==============================] - 65s 322ms/step - c_loss: 1.8487 - c_acc: 0.6602 - p_loss: 1.4631 - p_acc: 0.4798 - val_p_loss: 1.4587 - val_p_acc: 0.4905 Epoch 10/20 200/200 [==============================] - 65s 322ms/step - c_loss: 1.7837 - c_acc: 0.6767 - p_loss: 1.4310 - p_acc: 0.4992 - val_p_loss: 1.4265 - val_p_acc: 0.4924 Epoch 11/20 200/200 [==============================] - 65s 321ms/step - c_loss: 1.7133 - c_acc: 0.6955 - p_loss: 1.3764 - p_acc: 0.5090 - val_p_loss: 1.3663 - val_p_acc: 0.5169 Epoch 12/20 200/200 [==============================] - 66s 322ms/step - c_loss: 1.6655 - c_acc: 0.7064 - p_loss: 1.3511 - p_acc: 0.5140 - val_p_loss: 1.3779 - val_p_acc: 0.5071 Epoch 13/20 200/200 [==============================] - 67s 322ms/step - c_loss: 1.6110 - c_acc: 0.7198 - p_loss: 1.3182 - p_acc: 0.5282 - val_p_loss: 1.3259 - val_p_acc: 0.5303 Epoch 14/20 200/200 [==============================] - 66s 321ms/step - c_loss: 1.5727 - c_acc: 0.7312 - p_loss: 1.2965 - p_acc: 0.5308 - val_p_loss: 1.2858 - val_p_acc: 0.5422 Epoch 15/20 200/200 [==============================] - 67s 322ms/step - c_loss: 1.5477 - c_acc: 0.7361 - p_loss: 1.2751 - p_acc: 0.5432 - val_p_loss: 1.2795 - val_p_acc: 0.5472 Epoch 16/20 200/200 [==============================] - 65s 321ms/step - c_loss: 1.5127 - c_acc: 0.7448 - p_loss: 1.2562 - p_acc: 0.5498 - val_p_loss: 1.2731 - val_p_acc: 0.5461 Epoch 17/20 200/200 [==============================] - 67s 321ms/step - c_loss: 1.4811 - c_acc: 0.7517 - p_loss: 1.2306 - p_acc: 0.5574 - val_p_loss: 1.2439 - val_p_acc: 0.5630 Epoch 
18/20 200/200 [==============================] - 67s 321ms/step - c_loss: 1.4598 - c_acc: 0.7576 - p_loss: 1.2215 - p_acc: 0.5544 - val_p_loss: 1.2352 - val_p_acc: 0.5623 Epoch 19/20 200/200 [==============================] - 65s 321ms/step - c_loss: 1.4349 - c_acc: 0.7631 - p_loss: 1.2161 - p_acc: 0.5662 - val_p_loss: 1.2670 - val_p_acc: 0.5479 Epoch 20/20 200/200 [==============================] - 66s 321ms/step - c_loss: 1.4159 - c_acc: 0.7691 - p_loss: 1.2044 - p_acc: 0.5656 - val_p_loss: 1.2204 - val_p_acc: 0.5624 Maximal validation accuracy: 56.30% Supervised finetuning of the pretrained encoder We then finetune the encoder on the labeled examples, by attaching a single randomly initalized fully connected classification layer on its top. # Supervised finetuning of the pretrained encoder finetuning_model = keras.Sequential( [ layers.Input(shape=(image_size, image_size, image_channels)), get_augmenter(**classification_augmentation), pretraining_model.encoder, layers.Dense(10), ], name=\"finetuning_model\", ) finetuning_model.compile( optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy(name=\"acc\")], ) finetuning_history = finetuning_model.fit( labeled_train_dataset, epochs=num_epochs, validation_data=test_dataset ) print( \"Maximal validation accuracy: {:.2f}%\".format( max(finetuning_history.history[\"val_acc\"]) * 100 ) ) Epoch 1/20 200/200 [==============================] - 4s 17ms/step - loss: 1.9942 - acc: 0.2554 - val_loss: 1.4278 - val_acc: 0.4647 Epoch 2/20 200/200 [==============================] - 3s 16ms/step - loss: 1.5209 - acc: 0.4373 - val_loss: 1.3119 - val_acc: 0.5170 Epoch 3/20 200/200 [==============================] - 3s 17ms/step - loss: 1.3210 - acc: 0.5132 - val_loss: 1.2328 - val_acc: 0.5529 Epoch 4/20 200/200 [==============================] - 3s 17ms/step - loss: 1.1932 - acc: 0.5603 - val_loss: 1.1328 - val_acc: 0.5872 Epoch 5/20 200/200 [==============================] - 3s 17ms/step - loss: 1.1217 - acc: 0.5984 - val_loss: 1.1508 - val_acc: 0.5906 Epoch 6/20 200/200 [==============================] - 3s 16ms/step - loss: 1.0665 - acc: 0.6176 - val_loss: 1.2544 - val_acc: 0.5753 Epoch 7/20 200/200 [==============================] - 3s 16ms/step - loss: 0.9890 - acc: 0.6510 - val_loss: 1.0107 - val_acc: 0.6409 Epoch 8/20 200/200 [==============================] - 3s 16ms/step - loss: 0.9775 - acc: 0.6468 - val_loss: 1.0907 - val_acc: 0.6150 Epoch 9/20 200/200 [==============================] - 3s 17ms/step - loss: 0.9105 - acc: 0.6736 - val_loss: 1.1057 - val_acc: 0.6183 Epoch 10/20 200/200 [==============================] - 3s 17ms/step - loss: 0.8658 - acc: 0.6895 - val_loss: 1.1794 - val_acc: 0.5938 Epoch 11/20 200/200 [==============================] - 3s 17ms/step - loss: 0.8503 - acc: 0.6946 - val_loss: 1.0764 - val_acc: 0.6325 Epoch 12/20 200/200 [==============================] - 3s 17ms/step - loss: 0.7973 - acc: 0.7193 - val_loss: 1.0065 - val_acc: 0.6561 Epoch 13/20 200/200 [==============================] - 3s 16ms/step - loss: 0.7516 - acc: 0.7319 - val_loss: 1.0955 - val_acc: 0.6345 Epoch 14/20 200/200 [==============================] - 3s 16ms/step - loss: 0.7504 - acc: 0.7406 - val_loss: 1.1041 - val_acc: 0.6386 Epoch 15/20 200/200 [==============================] - 3s 16ms/step - loss: 0.7419 - acc: 0.7324 - val_loss: 1.0680 - val_acc: 0.6492 Epoch 16/20 200/200 [==============================] - 3s 17ms/step - loss: 0.7318 - acc: 0.7265 - 
val_loss: 1.1635 - val_acc: 0.6313 Epoch 17/20 200/200 [==============================] - 3s 17ms/step - loss: 0.6904 - acc: 0.7505 - val_loss: 1.0826 - val_acc: 0.6503 Epoch 18/20 200/200 [==============================] - 3s 17ms/step - loss: 0.6389 - acc: 0.7714 - val_loss: 1.1260 - val_acc: 0.6364 Epoch 19/20 200/200 [==============================] - 3s 16ms/step - loss: 0.6355 - acc: 0.7829 - val_loss: 1.0750 - val_acc: 0.6554 Epoch 20/20 200/200 [==============================] - 3s 17ms/step - loss: 0.6279 - acc: 0.7758 - val_loss: 1.0465 - val_acc: 0.6604 Maximal validation accuracy: 66.04% Comparison against the baseline # The classification accuracies of the baseline and the pretraining + finetuning process: def plot_training_curves(pretraining_history, finetuning_history, baseline_history): for metric_key, metric_name in zip([\"acc\", \"loss\"], [\"accuracy\", \"loss\"]): plt.figure(figsize=(8, 5), dpi=100) plt.plot( baseline_history.history[f\"val_{metric_key}\"], label=\"supervised ba Unifying semi-supervised learning and unsupervised domain adaptation with AdaMatch. Introduction In this example, we will implement the AdaMatch algorithm, proposed in AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation by Berthelot et al. It sets a new state-of-the-art in unsupervised domain adaptation (as of June 2021). AdaMatch is particularly interesting because it unifies semi-supervised learning (SSL) and unsupervised domain adaptation (UDA) under one framework. It thereby provides a way to perform semi-supervised domain adaptation (SSDA). This example requires TensorFlow 2.5 or higher, as well as TensorFlow Models, which can be installed using the following command: !pip install -q tf-models-official Before we proceed, let's review a few preliminary concepts underlying this example. Preliminaries In semi-supervised learning (SSL), we use a small amount of labeled data to train models on a bigger unlabeled dataset. Popular semi-supervised learning methods for computer vision include FixMatch, MixMatch, Noisy Student Training, etc. You can refer to this example to get an idea of what a standard SSL workflow looks like. In unsupervised domain adaptation, we have access to a source labeled dataset and a target unlabeled dataset. Then the task is to learn a model that can generalize well to the target dataset. The source and the target datasets vary in terms of distribution. The following figure provides an illustration of this idea. In the present example, we use the MNIST dataset as the source dataset, while the target dataset is SVHN, which consists of images of house numbers. Both datasets have various varying factors in terms of texture, viewpoint, appearence, etc.: their domains, or distributions, are different from one another. Popular domain adaptation algorithms in deep learning include Deep CORAL, Moment Matching, etc. 
Setup import tensorflow as tf tf.random.set_seed(42) import numpy as np from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras import regularizers from official.vision.image_classification.augment import RandAugment import tensorflow_datasets as tfds tfds.disable_progress_bar() Prepare the data # MNIST ( (mnist_x_train, mnist_y_train), (mnist_x_test, mnist_y_test), ) = keras.datasets.mnist.load_data() # Add a channel dimension mnist_x_train = tf.expand_dims(mnist_x_train, -1) mnist_x_test = tf.expand_dims(mnist_x_test, -1) # Convert the labels to one-hot encoded vectors mnist_y_train = tf.one_hot(mnist_y_train, 10).numpy() # SVHN svhn_train, svhn_test = tfds.load( \"svhn_cropped\", split=[\"train\", \"test\"], as_supervised=True ) Define constants and hyperparameters RESIZE_TO = 32 SOURCE_BATCH_SIZE = 64 TARGET_BATCH_SIZE = 3 * SOURCE_BATCH_SIZE # Reference: Section 3.2 EPOCHS = 10 STEPS_PER_EPOCH = len(mnist_x_train) // SOURCE_BATCH_SIZE TOTAL_STEPS = EPOCHS * STEPS_PER_EPOCH AUTO = tf.data.AUTOTUNE LEARNING_RATE = 0.03 WEIGHT_DECAY = 0.0005 INIT = \"he_normal\" DEPTH = 28 WIDTH_MULT = 2 Data augmentation utilities A standard element of SSL algorithms is to feed weakly and strongly augmented versions of the same images to the learning model to make its predictions consistent. For strong augmentation, RandAugment is a standard choice. For weak augmentation, we will use horizontal flipping and random cropping. # Initialize `RandAugment` object with 2 layers of # augmentation transforms and strength of 5. augmenter = RandAugment(num_layers=2, magnitude=5) def weak_augment(image, source=True): if image.dtype != tf.float32: image = tf.cast(image, tf.float32) # MNIST images are grayscale, this is why we first convert them to # RGB images. if source: image = tf.image.resize_with_pad(image, RESIZE_TO, RESIZE_TO) image = tf.tile(image, [1, 1, 3]) image = tf.image.random_flip_left_right(image) image = tf.image.random_crop(image, (RESIZE_TO, RESIZE_TO, 3)) return image def strong_augment(image, source=True): if image.dtype != tf.float32: image = tf.cast(image, tf.float32) if source: image = tf.image.resize_with_pad(image, RESIZE_TO, RESIZE_TO) image = tf.tile(image, [1, 1, 3]) image = augmenter.distort(image) return image Data loading utilities def create_individual_ds(ds, aug_func, source=True): if source: batch_size = SOURCE_BATCH_SIZE else: # During training 3x more target unlabeled samples are shown # to the model in AdaMatch (Section 3.2 of the paper). batch_size = TARGET_BATCH_SIZE ds = ds.shuffle(batch_size * 10, seed=42) if source: ds = ds.map(lambda x, y: (aug_func(x), y), num_parallel_calls=AUTO) else: ds = ds.map(lambda x, y: (aug_func(x, False), y), num_parallel_calls=AUTO) ds = ds.batch(batch_size).prefetch(AUTO) return ds _w and _s suffixes denote weak and strong respectively. 
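As a quick sanity check (illustration only, assuming the utilities defined above), the two augmentation functions can be called eagerly on a single MNIST digit; both should return a 32x32 RGB image ready for the model:

# Illustration only: apply the weak and strong augmentations to one source image.
sample = mnist_x_train[0]                     # a single MNIST digit, shape (28, 28, 1)
weak = weak_augment(sample, source=True)      # resize/pad to 32x32, tile to RGB, flip, crop
strong = strong_augment(sample, source=True)  # resize/pad to 32x32, tile to RGB, RandAugment
print(weak.shape, strong.shape)               # both (32, 32, 3)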
source_ds = tf.data.Dataset.from_tensor_slices((mnist_x_train, mnist_y_train)) source_ds_w = create_individual_ds(source_ds, weak_augment) source_ds_s = create_individual_ds(source_ds, strong_augment) final_source_ds = tf.data.Dataset.zip((source_ds_w, source_ds_s)) target_ds_w = create_individual_ds(svhn_train, weak_augment, source=False) target_ds_s = create_individual_ds(svhn_train, strong_augment, source=False) final_target_ds = tf.data.Dataset.zip((target_ds_w, target_ds_s)) Here's what a single image batch looks like: Loss computation utilities def compute_loss_source(source_labels, logits_source_w, logits_source_s): loss_func = keras.losses.CategoricalCrossentropy(from_logits=True) # First compute the losses between original source labels and # predictions made on the weakly and strongly augmented versions # of the same images. w_loss = loss_func(source_labels, logits_source_w) s_loss = loss_func(source_labels, logits_source_s) return w_loss + s_loss def compute_loss_target(target_pseudo_labels_w, logits_target_s, mask): loss_func = keras.losses.CategoricalCrossentropy(from_logits=True, reduction=\"none\") target_pseudo_labels_w = tf.stop_gradient(target_pseudo_labels_w) # For calculating loss for the target samples, we treat the pseudo labels # as the ground-truth. These are not considered during backpropagation # which is a standard SSL practice. target_loss = loss_func(target_pseudo_labels_w, logits_target_s) # More on `mask` later. mask = tf.cast(mask, target_loss.dtype) target_loss *= mask return tf.reduce_mean(target_loss, 0) Subclassed model for AdaMatch training The figure below presents the overall workflow of AdaMatch (taken from the original paper): Here's a brief step-by-step breakdown of the workflow: We first retrieve the weakly and strongly augmented pairs of images from the source and target datasets. We prepare two concatenated copies: i. One where both pairs are concatenated. ii. One where only the source data image pair is concatenated. We run two forward passes through the model: i. The first forward pass uses the concatenated copy obtained from 2.i. In this forward pass, the Batch Normalization statistics are updated. ii. In the second forward pass, we only use the concatenated copy obtained from 2.ii. Batch Normalization layers are run in inference mode. The respective logits are computed for both the forward passes. The logits go through a series of transformations, introduced in the paper (which we will discuss shortly). We compute the loss and update the gradients of the underlying model. class AdaMatch(keras.Model): def __init__(self, model, total_steps, tau=0.9): super(AdaMatch, self).__init__() self.model = model self.tau = tau # Denotes the confidence threshold self.loss_tracker = tf.keras.metrics.Mean(name=\"loss\") self.total_steps = total_steps self.current_step = tf.Variable(0, dtype=\"int64\") @property def metrics(self): return [self.loss_tracker] # This is a warmup schedule to update the weight of the # loss contributed by the target unlabeled samples. More # on this in the text. def compute_mu(self): pi = tf.constant(np.pi, dtype=\"float32\") step = tf.cast(self.current_step, dtype=\"float32\") return 0.5 - tf.cos(tf.math.minimum(pi, (2 * pi * step) / self.total_steps)) / 2 def train_step(self, data): ## Unpack and organize the data ## source_ds, target_ds = data (source_w, source_labels), (source_s, _) = source_ds ( (target_w, _), (target_s, _), ) = target_ds # Notice that we are NOT using any labels here. 
combined_images = tf.concat([source_w, source_s, target_w, target_s], 0) combined_source = tf.concat([source_w, source_s], 0) total_source = tf.shape(combined_source)[0] total_target = tf.shape(tf.concat([target_w, target_s], 0))[0] with tf.GradientTape() as tape: ## Forward passes ## combined_logits = self.model(combined_images, training=True) z_d_prime_source = self.model( combined_source, training=False ) # No BatchNorm update. z_prime_source = combined_logits[:total_source] ## 1. Random logit interpolation for the source images ## lambd = tf.random.uniform((total_source, 10), 0, 1) final_source_logits = (lambd * z_prime_source) + ( (1 - lambd) * z_d_prime_source ) ## 2. Distribution alignment (only consider weakly augmented images) ## # Compute softmax for logits of the WEAKLY augmented SOURCE images. y_hat_source_w = tf.nn.softmax(final_source_logits[: tf.shape(source_w)[0]]) # Extract logits for the WEAKLY augmented TARGET images and compute softmax. logits_target = combined_logits[total_source:] logits_target_w = logits_target[: tf.shape(target_w)[0]] y_hat_target_w = tf.nn.softmax(logits_target_w) # Align the target label distribution to that of the source. expectation_ratio = tf.reduce_mean(y_hat_source_w) / tf.reduce_mean( y_hat_target_w ) y_tilde_target_w = tf.math.l2_normalize( y_hat_target_w * expectation_ratio, 1 ) ## 3. Relative confidence thresholding ## row_wise_max = tf.reduce_max(y_hat_source_w, axis=-1) final_sum = tf.reduce_mean(row_wise_max, 0) c_tau = self.tau * final_sum mask = tf.reduce_max(y_tilde_target_w, axis=-1) >= c_tau ## Compute losses (pay attention to the indexing) ## source_loss = compute_loss_source( source_labels, final_source_logits[: tf.shape(source_w)[0]], final_source_logits[tf.shape(source_w)[0] :], ) target_loss = compute_loss_target( y_tilde_target_w, logits_target[tf.shape(target_w)[0] :], mask ) t = self.compute_mu() # Compute weight for the target loss total_loss = source_loss + (t * target_loss) self.current_step.assign_add( 1 ) # Update current training step for the scheduler gradients = tape.gradient(total_loss, self.model.trainable_variables) self.optimizer.apply_gradients(zip(gradients, self.model.trainable_variables)) self.loss_tracker.update_state(total_loss) return {\"loss\": self.loss_tracker.result()} The authors introduce three improvements in the paper: In AdaMatch, we perform two forward passes, and only one of them is respsonsible for updating the Batch Normalization statistics. This is done to account for distribution shifts in the target dataset. In the other forward pass, we only use the source sample, and the Batch Normalization layers are run in inference mode. Logits for the source samples (weakly and strongly augmented versions) from these two passes are slightly different from one another because of how Batch Normalization layers are run. Final logits for the source samples are computed by linearly interpolating between these two different pairs of logits. This induces a form of consistency regularization. This step is referred to as random logit interpolation. Distribution alignment is used to align the source and target label distributions. This further helps the underlying model learn domain-invariant representations. In case of unsupervised domain adaptation, we don't have access to any labels of the target dataset. This is why pseudo labels are generated from the underlying model. The underlying model generates pseudo-labels for the target samples. It's likely that the model would make faulty predictions. 
Those faulty predictions can propagate back as training progresses and hurt the overall performance. To compensate for that, we keep only the high-confidence predictions, based on a threshold (hence the use of mask inside compute_loss_target()). In AdaMatch, this threshold is adjusted relative to the model's confidence on the weakly augmented source samples, which is why it is called relative confidence thresholding. For more details on these methods and how each of them contributes, please refer to the paper. About compute_mu(): rather than using a fixed scalar quantity, AdaMatch uses a varying scalar that denotes the weight of the loss contributed by the target samples. The weight schedule increases the weight of the target-domain loss from 0 to 1 over the first half of training, and then keeps it at 1 for the second half. Instantiate a Wide-ResNet-28-2 The authors use a WideResNet-28-2 for the dataset pairs we are using in this example. Most of the following code has been adapted from this script. Note that the following model has a scaling layer inside it that scales the pixel values to [0, 1]. def wide_basic(x, n_input_plane, n_output_plane, stride): conv_params = [[3, 3, stride, "same"], [3, 3, (1, 1), "same"]] n_bottleneck_plane = n_output_plane # Residual block for i, v in enumerate(conv_params): if i == 0: if n_input_plane != n_output_plane: x = layers.BatchNormalization()(x) x = layers.Activation("relu")(x) convs = x else: convs = layers.BatchNormalization()(x) convs = layers.Activation("relu")(convs) convs = layers.Conv2D( n_bottleneck_plane, (v[0], v[1]), strides=v[2], padding=v[3], kernel_initializer=INIT, kernel_regularizer=regularizers.l2(WEIGHT_DECAY), use_bias=False, )(convs) else: convs = layers.BatchNormalization()(convs) convs = layers.Activation("relu")(convs) convs = layers.Conv2D( n_bottleneck_plane, (v[0], v[1]), strides=v[2], padding=v[3], kernel_initializer=INIT, kernel_regularizer=regularizers.l2(WEIGHT_DECAY), use_bias=False, )(convs) # Shortcut connection: identity function or 1x1 # convolutional # (depends on difference between input & output shape - this # corresponds to whether we are using the first block in # each # group; see `block_series()`).
if n_input_plane != n_output_plane: shortcut = layers.Conv2D( n_output_plane, (1, 1), strides=stride, padding=\"same\", kernel_initializer=INIT, kernel_regularizer=regularizers.l2(WEIGHT_DECAY), use_bias=False, )(x) else: shortcut = x return layers.Add()([convs, shortcut]) # Stacking residual units on the same stage def block_series(x, n_input_plane, n_output_plane, count, stride): x = wide_basic(x, n_input_plane, n_output_plane, stride) for i in range(2, int(count + 1)): x = wide_basic(x, n_output_plane, n_output_plane, stride=1) return x def get_network(image_size=32, num_classes=10): n = (DEPTH - 4) / 6 n_stages = [16, 16 * WIDTH_MULT, 32 * WIDTH_MULT, 64 * WIDTH_MULT] inputs = keras.Input(shape=(image_size, image_size, 3)) x = layers.Rescaling(scale=1.0 / 255)(inputs) conv1 = layers.Conv2D( n_stages[0], (3, 3), strides=1, padding=\"same\", kernel_initializer=INIT, kernel_regularizer=regularizers.l2(WEIGHT_DECAY), use_bias=False, )(x) ## Add wide residual blocks ## conv2 = block_series( conv1, n_input_plane=n_stages[0], n_output_plane=n_stages[1], count=n, stride=(1, 1), ) # Stage 1 conv3 = block_series( conv2, n_input_plane=n_stages[1], n_output_plane=n_stages[2], count=n, stride=(2, 2), ) # Stage 2 conv4 = block_series( conv3, n_input_plane=n_stages[2], n_output_plane=n_stages[3], count=n, stride=(2, 2), ) # Stage 3 batch_norm = layers.BatchNormalization()(conv4) relu = layers.Activation(\"relu\")(batch_norm) # Classifier trunk_outputs = layers.GlobalAveragePooling2D()(relu) outputs = layers.Dense( num_classes, kernel_regularizer=regularizers.l2(WEIGHT_DECAY) )(trunk_outputs) return keras.Model(inputs, outputs) We can now instantiate a Wide ResNet model like so. Note that the purpose of using a Wide ResNet here is to keep the implementation as close to the original one as possible. wrn_model = get_network() print(f\"Model has {wrn_model.count_params()/1e6} Million parameters.\") Model has 1.471226 Million parameters. Instantiate AdaMatch model and compile it reduce_lr = keras.optimizers.schedules.CosineDecay(LEARNING_RATE, TOTAL_STEPS, 0.25) optimizer = keras.optimizers.Adam(reduce_lr) adamatch_trainer = AdaMatch(model=wrn_model, total_steps=TOTAL_STEPS) adamatch_trainer.compile(optimizer=optimizer) Model training total_ds = tf.data.Dataset.zip((final_source_ds, final_target_ds)) adamatch_trainer.fit(total_ds, epochs=EPOCHS) Epoch 1/10 382/382 [==============================] - 53s 96ms/step - loss: 117866954752.0000 Epoch 2/10 382/382 [==============================] - 36s 95ms/step - loss: 2.6231 Epoch 3/10 382/382 [==============================] - 36s 94ms/step - loss: 4.1699 Epoch 4/10 382/382 [==============================] - 36s 95ms/step - loss: 8.2748 Epoch 5/10 382/382 [==============================] - 36s 95ms/step - loss: 28.8679 Epoch 6/10 382/382 [==============================] - 36s 94ms/step - loss: 14.7112 Epoch 7/10 382/382 [==============================] - 36s 94ms/step - loss: 7.8206 Epoch 8/10 382/382 [==============================] - 36s 94ms/step - loss: 18.1182 Epoch 9/10 382/382 [==============================] - 36s 94ms/step - loss: 22.4258 Epoch 10/10 382/382 [==============================] - 36s 95ms/step - loss: 22.1107 Evaluation on the target and source test sets # Compile the AdaMatch model to yield accuracy. adamatch_trained_model = adamatch_trainer.model adamatch_trained_model.compile(metrics=keras.metrics.SparseCategoricalAccuracy()) # Score on the target test set. 
svhn_test = svhn_test.batch(TARGET_BATCH_SIZE).prefetch(AUTO) _, accuracy = adamatch_trained_model.evaluate(svhn_test) print(f\"Accuracy on target test set: {accuracy * 100:.2f}%\") 136/136 [==============================] - 2s 10ms/step - loss: 572.9810 - sparse_categorical_accuracy: 0.1960 Accuracy on target test set: 19.11% With more training, this score improves. When this same network is trained with standard classification objective, it yields an accuracy of 7.20% which is significantly lower than what we got with AdaMatch. You can check out this notebook to learn more about the hyperparameters and other experimental details. # Utility function for preprocessing the source test set. def prepare_test_ds_source(image, label): image = tf.image.resize_with_pad(image, RESIZE_TO, RESIZE_TO) image = tf.tile(image, [1, 1, 3]) return image, label source_test_ds = tf.data.Dataset.from_tensor_slices((mnist_x_test, mnist_y_test)) source_test_ds = ( source_test_ds.map(prepare_test_ds_source, num_parallel_calls=AUTO) .batch(TARGET_BATCH_SIZE) .prefetch(AUTO) ) # Evaluation on the source test set. _, accuracy = adamatch_trained_model.evaluate(source_test_ds) print(f\"Accuracy on source test set: {accuracy * 100:.2f}%\") 53/53 [==============================] - 1s 10ms/step - loss: 572.9810 - sparse_categorical_accuracy: 0.6532 Accuracy on source test set: 65.32% You can reproduce the results by using these model weights. A simple convnet that achieves ~99% test accuracy on MNIST. Setup import numpy as np from tensorflow import keras from tensorflow.keras import layers Prepare the data # Model / data parameters num_classes = 10 input_shape = (28, 28, 1) # the data, split between train and test sets (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() # Scale images to the [0, 1] range x_train = x_train.astype(\"float32\") / 255 x_test = x_test.astype(\"float32\") / 255 # Make sure images have shape (28, 28, 1) x_train = np.expand_dims(x_train, -1) x_test = np.expand_dims(x_test, -1) print(\"x_train shape:\", x_train.shape) print(x_train.shape[0], \"train samples\") print(x_test.shape[0], \"test samples\") # convert class vectors to binary class matrices y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) x_train shape: (60000, 28, 28, 1) 60000 train samples 10000 test samples Build the model model = keras.Sequential( [ keras.Input(shape=input_shape), layers.Conv2D(32, kernel_size=(3, 3), activation=\"relu\"), layers.MaxPooling2D(pool_size=(2, 2)), layers.Conv2D(64, kernel_size=(3, 3), activation=\"relu\"), layers.MaxPooling2D(pool_size=(2, 2)), layers.Flatten(), layers.Dropout(0.5), layers.Dense(num_classes, activation=\"softmax\"), ] ) model.summary() Model: \"sequential\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 13, 13, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 11, 11, 64) 18496 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64) 0 _________________________________________________________________ flatten (Flatten) (None, 1600) 0 _________________________________________________________________ dropout (Dropout) (None, 1600) 0 
_________________________________________________________________ dense (Dense) (None, 10) 16010 ================================================================= Total params: 34,826 Trainable params: 34,826 Non-trainable params: 0 _________________________________________________________________ Train the model batch_size = 128 epochs = 15 model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=["accuracy"]) model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1) Epoch 1/15 422/422 [==============================] - 13s 29ms/step - loss: 0.7840 - accuracy: 0.7643 - val_loss: 0.0780 - val_accuracy: 0.9780 Epoch 2/15 422/422 [==============================] - 13s 31ms/step - loss: 0.1199 - accuracy: 0.9639 - val_loss: 0.0559 - val_accuracy: 0.9843 Epoch 3/15 422/422 [==============================] - 14s 33ms/step - loss: 0.0845 - accuracy: 0.9737 - val_loss: 0.0469 - val_accuracy: 0.9877 Epoch 4/15 422/422 [==============================] - 14s 33ms/step - loss: 0.0762 - accuracy: 0.9756 - val_loss: 0.0398 - val_accuracy: 0.9895 Epoch 5/15 422/422 [==============================] - 15s 35ms/step - loss: 0.0621 - accuracy: 0.9812 - val_loss: 0.0378 - val_accuracy: 0.9890 Epoch 6/15 422/422 [==============================] - 17s 40ms/step - loss: 0.0547 - accuracy: 0.9825 - val_loss: 0.0360 - val_accuracy: 0.9910 Epoch 7/15 422/422 [==============================] - 17s 41ms/step - loss: 0.0497 - accuracy: 0.9840 - val_loss: 0.0311 - val_accuracy: 0.9920 Epoch 8/15 422/422 [==============================] - 16s 39ms/step - loss: 0.0443 - accuracy: 0.9862 - val_loss: 0.0346 - val_accuracy: 0.9910 Epoch 9/15 422/422 [==============================] - 17s 39ms/step - loss: 0.0436 - accuracy: 0.9860 - val_loss: 0.0325 - val_accuracy: 0.9915 Epoch 10/15 422/422 [==============================] - 16s 38ms/step - loss: 0.0407 - accuracy: 0.9865 - val_loss: 0.0301 - val_accuracy: 0.9920 Epoch 11/15 422/422 [==============================] - 16s 37ms/step - loss: 0.0406 - accuracy: 0.9874 - val_loss: 0.0303 - val_accuracy: 0.9920 Epoch 12/15 237/422 [===============>..............] - ETA: 7s - loss: 0.0398 - accuracy: 0.9877 Evaluate the trained model score = model.evaluate(x_test, y_test, verbose=0) print(\"Test loss:\", score[0]) print(\"Test accuracy:\", score[1]) Test loss: 0.023950600996613503 Test accuracy: 0.9922000169754028 Using supervised contrastive learning for image classification. Introduction Supervised Contrastive Learning (Prannay Khosla et al.) is a training methodology that outperforms supervised training with crossentropy on classification tasks. Essentially, training an image classification model with Supervised Contrastive Learning is performed in two phases: Training an encoder to learn to produce vector representations of input images such that representations of images in the same class will be more similar compared to representations of images in different classes. Training a classifier on top of the frozen encoder. 
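The objective optimized in the first phase is the supervised contrastive loss of Khosla et al., which for a batch with l2-normalized projections z and temperature \tau can be written as:

\mathcal{L}^{sup} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp\left(z_i \cdot z_p / \tau\right)}{\sum_{a \in A(i)} \exp\left(z_i \cdot z_a / \tau\right)}

where P(i) is the set of other samples in the batch that share the label of sample i, and A(i) is the set of all other samples in the batch. This is a reference statement of the loss from the paper; the implementation below realizes a closely related objective by passing the temperature-scaled similarity matrix to tfa.losses.npairs_loss.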
Note that this example requires TensorFlow Addons, which you can install using the following command: pip install tensorflow-addons Setup import tensorflow as tf import tensorflow_addons as tfa import numpy as np from tensorflow import keras from tensorflow.keras import layers Prepare the data num_classes = 10 input_shape = (32, 32, 3) # Load the train and test data splits (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() # Display shapes of train and test datasets print(f\"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}\") print(f\"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}\") x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 1) x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 1) Using image data augmentation data_augmentation = keras.Sequential( [ layers.Normalization(), layers.RandomFlip(\"horizontal\"), layers.RandomRotation(0.02), layers.RandomWidth(0.2), layers.RandomHeight(0.2), ] ) # Setting the state of the normalization layer. data_augmentation.layers[0].adapt(x_train) Build the encoder model The encoder model takes the image as input and turns it into a 2048-dimensional feature vector. def create_encoder(): resnet = keras.applications.ResNet50V2( include_top=False, weights=None, input_shape=input_shape, pooling=\"avg\" ) inputs = keras.Input(shape=input_shape) augmented = data_augmentation(inputs) outputs = resnet(augmented) model = keras.Model(inputs=inputs, outputs=outputs, name=\"cifar10-encoder\") return model encoder = create_encoder() encoder.summary() learning_rate = 0.001 batch_size = 265 hidden_units = 512 projection_units = 128 num_epochs = 50 dropout_rate = 0.5 temperature = 0.05 Model: \"cifar10-encoder\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, 32, 32, 3)] 0 _________________________________________________________________ sequential (Sequential) (None, None, None, 3) 7 _________________________________________________________________ resnet50v2 (Functional) (None, 2048) 23564800 ================================================================= Total params: 23,564,807 Trainable params: 23,519,360 Non-trainable params: 45,447 _________________________________________________________________ Build the classification model The classification model adds a fully-connected layer on top of the encoder, plus a softmax layer with the target classes. def create_classifier(encoder, trainable=True): for layer in encoder.layers: layer.trainable = trainable inputs = keras.Input(shape=input_shape) features = encoder(inputs) features = layers.Dropout(dropout_rate)(features) features = layers.Dense(hidden_units, activation=\"relu\")(features) features = layers.Dropout(dropout_rate)(features) outputs = layers.Dense(num_classes, activation=\"softmax\")(features) model = keras.Model(inputs=inputs, outputs=outputs, name=\"cifar10-classifier\") model.compile( optimizer=keras.optimizers.Adam(learning_rate), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=[keras.metrics.SparseCategoricalAccuracy()], ) return model Experiment 1: Train the baseline classification model In this experiment, a baseline classifier is trained as usual, i.e., the encoder and the classifier parts are trained together as a single model to minimize the crossentropy loss. 
encoder = create_encoder() classifier = create_classifier(encoder) classifier.summary() history = classifier.fit(x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs) accuracy = classifier.evaluate(x_test, y_test)[1] print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") Model: \"cifar10-classifier\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_5 (InputLayer) [(None, 32, 32, 3)] 0 _________________________________________________________________ cifar10-encoder (Functional) (None, 2048) 23564807 _________________________________________________________________ dropout (Dropout) (None, 2048) 0 _________________________________________________________________ dense (Dense) (None, 512) 1049088 _________________________________________________________________ dropout_1 (Dropout) (None, 512) 0 _________________________________________________________________ dense_1 (Dense) (None, 10) 5130 ================================================================= Total params: 24,619,025 Trainable params: 24,573,578 Non-trainable params: 45,447 _________________________________________________________________ Epoch 1/50 189/189 [==============================] - 15s 77ms/step - loss: 1.9369 - sparse_categorical_accuracy: 0.2874 Epoch 2/50 189/189 [==============================] - 11s 57ms/step - loss: 1.5133 - sparse_categorical_accuracy: 0.4505 Epoch 3/50 189/189 [==============================] - 11s 57ms/step - loss: 1.3468 - sparse_categorical_accuracy: 0.5204 Epoch 4/50 189/189 [==============================] - 11s 60ms/step - loss: 1.2159 - sparse_categorical_accuracy: 0.5733 Epoch 5/50 189/189 [==============================] - 11s 56ms/step - loss: 1.1516 - sparse_categorical_accuracy: 0.6032 Epoch 6/50 189/189 [==============================] - 11s 58ms/step - loss: 1.0769 - sparse_categorical_accuracy: 0.6254 Epoch 7/50 189/189 [==============================] - 11s 58ms/step - loss: 0.9964 - sparse_categorical_accuracy: 0.6547 Epoch 8/50 189/189 [==============================] - 10s 55ms/step - loss: 0.9563 - sparse_categorical_accuracy: 0.6703 Epoch 9/50 189/189 [==============================] - 10s 55ms/step - loss: 0.8952 - sparse_categorical_accuracy: 0.6925 Epoch 10/50 189/189 [==============================] - 11s 56ms/step - loss: 0.8986 - sparse_categorical_accuracy: 0.6922 Epoch 11/50 189/189 [==============================] - 10s 55ms/step - loss: 0.8381 - sparse_categorical_accuracy: 0.7145 Epoch 12/50 189/189 [==============================] - 10s 55ms/step - loss: 0.8513 - sparse_categorical_accuracy: 0.7086 Epoch 13/50 189/189 [==============================] - 11s 56ms/step - loss: 0.7557 - sparse_categorical_accuracy: 0.7448 Epoch 14/50 189/189 [==============================] - 11s 56ms/step - loss: 0.7168 - sparse_categorical_accuracy: 0.7548 Epoch 15/50 189/189 [==============================] - 10s 55ms/step - loss: 0.6772 - sparse_categorical_accuracy: 0.7690 Epoch 16/50 189/189 [==============================] - 11s 56ms/step - loss: 0.7587 - sparse_categorical_accuracy: 0.7416 Epoch 17/50 189/189 [==============================] - 10s 55ms/step - loss: 0.6873 - sparse_categorical_accuracy: 0.7665 Epoch 18/50 189/189 [==============================] - 11s 56ms/step - loss: 0.6418 - sparse_categorical_accuracy: 0.7804 Epoch 19/50 189/189 [==============================] - 11s 56ms/step - loss: 0.6086 - sparse_categorical_accuracy: 
0.7927 Epoch 20/50 189/189 [==============================] - 10s 55ms/step - loss: 0.5903 - sparse_categorical_accuracy: 0.7978 Epoch 21/50 189/189 [==============================] - 11s 56ms/step - loss: 0.5636 - sparse_categorical_accuracy: 0.8083 Epoch 22/50 189/189 [==============================] - 11s 56ms/step - loss: 0.5527 - sparse_categorical_accuracy: 0.8123 Epoch 23/50 189/189 [==============================] - 11s 56ms/step - loss: 0.5308 - sparse_categorical_accuracy: 0.8191 Epoch 24/50 189/189 [==============================] - 10s 55ms/step - loss: 0.5282 - sparse_categorical_accuracy: 0.8223 Epoch 25/50 189/189 [==============================] - 10s 55ms/step - loss: 0.5090 - sparse_categorical_accuracy: 0.8263 Epoch 26/50 189/189 [==============================] - 10s 55ms/step - loss: 0.5497 - sparse_categorical_accuracy: 0.8181 Epoch 27/50 189/189 [==============================] - 10s 55ms/step - loss: 0.4950 - sparse_categorical_accuracy: 0.8332 Epoch 28/50 189/189 [==============================] - 11s 56ms/step - loss: 0.4727 - sparse_categorical_accuracy: 0.8391 Epoch 29/50 167/189 [=========================>....] - ETA: 1s - loss: 0.4594 - sparse_categorical_accuracy: 0.8444 Experiment 2: Use supervised contrastive learning In this experiment, the model is trained in two phases. In the first phase, the encoder is pretrained to optimize the supervised contrastive loss, described in Prannay Khosla et al.. In the second phase, the classifier is trained using the trained encoder with its weights freezed; only the weights of fully-connected layers with the softmax are optimized. 1. Supervised contrastive learning loss function class SupervisedContrastiveLoss(keras.losses.Loss): def __init__(self, temperature=1, name=None): super(SupervisedContrastiveLoss, self).__init__(name=name) self.temperature = temperature def __call__(self, labels, feature_vectors, sample_weight=None): # Normalize feature vectors feature_vectors_normalized = tf.math.l2_normalize(feature_vectors, axis=1) # Compute logits logits = tf.divide( tf.matmul( feature_vectors_normalized, tf.transpose(feature_vectors_normalized) ), self.temperature, ) return tfa.losses.npairs_loss(tf.squeeze(labels), logits) def add_projection_head(encoder): inputs = keras.Input(shape=input_shape) features = encoder(inputs) outputs = layers.Dense(projection_units, activation=\"relu\")(features) model = keras.Model( inputs=inputs, outputs=outputs, name=\"cifar-encoder_with_projection-head\" ) return model 2. 
Pretrain the encoder encoder = create_encoder() encoder_with_projection_head = add_projection_head(encoder) encoder_with_projection_head.compile( optimizer=keras.optimizers.Adam(learning_rate), loss=SupervisedContrastiveLoss(temperature), ) encoder_with_projection_head.summary() history = encoder_with_projection_head.fit( x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs ) Model: \"cifar-encoder_with_projection-head\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_8 (InputLayer) [(None, 32, 32, 3)] 0 _________________________________________________________________ cifar10-encoder (Functional) (None, 2048) 23564807 _________________________________________________________________ dense_2 (Dense) (None, 128) 262272 ================================================================= Total params: 23,827,079 Trainable params: 23,781,632 Non-trainable params: 45,447 _________________________________________________________________ Epoch 1/50 189/189 [==============================] - 11s 56ms/step - loss: 5.3730 Epoch 2/50 189/189 [==============================] - 11s 56ms/step - loss: 5.1583 Epoch 3/50 189/189 [==============================] - 10s 55ms/step - loss: 5.0368 Epoch 4/50 189/189 [==============================] - 11s 56ms/step - loss: 4.9349 Epoch 5/50 189/189 [==============================] - 10s 55ms/step - loss: 4.8262 Epoch 6/50 189/189 [==============================] - 11s 56ms/step - loss: 4.7470 Epoch 7/50 189/189 [==============================] - 11s 56ms/step - loss: 4.6835 Epoch 8/50 189/189 [==============================] - 11s 56ms/step - loss: 4.6120 Epoch 9/50 189/189 [==============================] - 11s 56ms/step - loss: 4.5608 Epoch 10/50 189/189 [==============================] - 10s 55ms/step - loss: 4.5075 Epoch 11/50 189/189 [==============================] - 11s 56ms/step - loss: 4.4674 Epoch 12/50 189/189 [==============================] - 10s 56ms/step - loss: 4.4362 Epoch 13/50 189/189 [==============================] - 11s 56ms/step - loss: 4.3899 Epoch 14/50 189/189 [==============================] - 10s 55ms/step - loss: 4.3664 Epoch 15/50 189/189 [==============================] - 11s 56ms/step - loss: 4.3188 Epoch 16/50 189/189 [==============================] - 10s 56ms/step - loss: 4.3030 Epoch 17/50 189/189 [==============================] - 11s 57ms/step - loss: 4.2725 Epoch 18/50 189/189 [==============================] - 10s 55ms/step - loss: 4.2523 Epoch 19/50 189/189 [==============================] - 11s 56ms/step - loss: 4.2100 Epoch 20/50 189/189 [==============================] - 10s 55ms/step - loss: 4.2033 Epoch 21/50 189/189 [==============================] - 11s 56ms/step - loss: 4.1741 Epoch 22/50 189/189 [==============================] - 11s 56ms/step - loss: 4.1443 Epoch 23/50 189/189 [==============================] - 11s 56ms/step - loss: 4.1350 Epoch 24/50 189/189 [==============================] - 11s 57ms/step - loss: 4.1192 Epoch 25/50 189/189 [==============================] - 11s 56ms/step - loss: 4.1002 Epoch 26/50 189/189 [==============================] - 11s 57ms/step - loss: 4.0797 Epoch 27/50 189/189 [==============================] - 11s 56ms/step - loss: 4.0547 Epoch 28/50 189/189 [==============================] - 11s 56ms/step - loss: 4.0336 Epoch 29/50 189/189 [==============================] - 11s 56ms/step - loss: 4.0299 Epoch 30/50 189/189 
[==============================] - 11s 56ms/step - loss: 4.0031 Epoch 31/50 189/189 [==============================] - 11s 56ms/step - loss: 3.9979 Epoch 32/50 189/189 [==============================] - 11s 56ms/step - loss: 3.9777 Epoch 33/50 189/189 [==============================] - 10s 55ms/step - loss: 3.9800 Epoch 34/50 189/189 [==============================] - 11s 56ms/step - loss: 3.9538 Epoch 35/50 189/189 [==============================] - 11s 56ms/step - loss: 3.9298 Epoch 36/50 189/189 [==============================] - 11s 57ms/step - loss: 3.9241 Epoch 37/50 189/189 [==============================] - 11s 56ms/step - loss: 3.9102 Epoch 38/50 189/189 [==============================] - 11s 56ms/step - loss: 3.9075 Epoch 39/50 189/189 [==============================] - 11s 56ms/step - loss: 3.8897 Epoch 40/50 189/189 [==============================] - 11s 57ms/step - loss: 3.8871 Epoch 41/50 189/189 [==============================] - 11s 56ms/step - loss: 3.8596 Epoch 42/50 189/189 [==============================] - 10s 56ms/step - loss: 3.8526 Epoch 43/50 189/189 [==============================] - 11s 56ms/step - loss: 3.8417 Epoch 44/50 189/189 [==============================] - 10s 55ms/step - loss: 3.8239 Epoch 45/50 189/189 [==============================] - 11s 56ms/step - loss: 3.8178 Epoch 46/50 189/189 [==============================] - 11s 56ms/step - loss: 3.8065 Epoch 47/50 189/189 [==============================] - 11s 56ms/step - loss: 3.8185 Epoch 48/50 189/189 [==============================] - 11s 56ms/step - loss: 3.8022 Epoch 49/50 189/189 [==============================] - 11s 56ms/step - loss: 3.7815 Epoch 50/50 189/189 [==============================] - 11s 56ms/step - loss: 3.7601 3. Train the classifier with the frozen encoder classifier = create_classifier(encoder, trainable=False) history = classifier.fit(x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs) accuracy = classifier.evaluate(x_test, y_test)[1] print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") Epoch 1/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3979 - sparse_categorical_accuracy: 0.8869 Epoch 2/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3422 - sparse_categorical_accuracy: 0.8959 Epoch 3/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3251 - sparse_categorical_accuracy: 0.9004 Epoch 4/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3313 - sparse_categorical_accuracy: 0.8963 Epoch 5/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3213 - sparse_categorical_accuracy: 0.9006 Epoch 6/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3221 - sparse_categorical_accuracy: 0.9001 Epoch 7/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3134 - sparse_categorical_accuracy: 0.9001 Epoch 8/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3245 - sparse_categorical_accuracy: 0.8978 Epoch 9/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3144 - sparse_categorical_accuracy: 0.9001 Epoch 10/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3191 - sparse_categorical_accuracy: 0.8984 Epoch 11/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3104 - sparse_categorical_accuracy: 0.9025 Epoch 12/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3261 - sparse_categorical_accuracy: 0.8958 Epoch 13/50 189/189 [==============================] - 3s 16ms/step 
- loss: 0.3130 - sparse_categorical_accuracy: 0.9001 Epoch 14/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3147 - sparse_categorical_accuracy: 0.9003 Epoch 15/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3113 - sparse_categorical_accuracy: 0.9016 Epoch 16/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3114 - sparse_categorical_accuracy: 0.9008 Epoch 17/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3044 - sparse_categorical_accuracy: 0.9026 Epoch 18/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3142 - sparse_categorical_accuracy: 0.8987 Epoch 19/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3139 - sparse_categorical_accuracy: 0.9018 Epoch 20/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3199 - sparse_categorical_accuracy: 0.8987 Epoch 21/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3125 - sparse_categorical_accuracy: 0.8994 Epoch 22/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3291 - sparse_categorical_accuracy: 0.8967 Epoch 23/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3208 - sparse_categorical_accuracy: 0.8963 Epoch 24/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3065 - sparse_categorical_accuracy: 0.9041 Epoch 25/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3099 - sparse_categorical_accuracy: 0.9006 Epoch 26/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3181 - sparse_categorical_accuracy: 0.8986 Epoch 27/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3112 - sparse_categorical_accuracy: 0.9013 Epoch 28/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3136 - sparse_categorical_accuracy: 0.8996 Epoch 29/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3217 - sparse_categorical_accuracy: 0.8969 Epoch 30/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3161 - sparse_categorical_accuracy: 0.8998 Epoch 31/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3151 - sparse_categorical_accuracy: 0.8999 Epoch 32/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3092 - sparse_categorical_accuracy: 0.9009 Epoch 33/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3246 - sparse_categorical_accuracy: 0.8961 Epoch 34/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3143 - sparse_categorical_accuracy: 0.8995 Epoch 35/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3106 - sparse_categorical_accuracy: 0.9002 Epoch 36/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3210 - sparse_categorical_accuracy: 0.8980 Epoch 37/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3178 - sparse_categorical_accuracy: 0.9009 Epoch 38/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3064 - sparse_categorical_accuracy: 0.9032 Epoch 39/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3196 - sparse_categorical_accuracy: 0.8981 Epoch 40/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3177 - sparse_categorical_accuracy: 0.8988 Epoch 41/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3167 - sparse_categorical_accuracy: 0.8987 Epoch 42/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3110 - sparse_categorical_accuracy: 
0.9014 Epoch 43/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3124 - sparse_categorical_accuracy: 0.9002 Epoch 44/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3128 - sparse_categorical_accuracy: 0.8999 Epoch 45/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3131 - sparse_categorical_accuracy: 0.8991 Epoch 46/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3149 - sparse_categorical_accuracy: 0.8992 Epoch 47/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3082 - sparse_categorical_accuracy: 0.9021 Epoch 48/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3223 - sparse_categorical_accuracy: 0.8959 Epoch 49/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3195 - sparse_categorical_accuracy: 0.8981 Epoch 50/50 189/189 [==============================] - 3s 16ms/step - loss: 0.3240 - sparse_categorical_accuracy: 0.8962 313/313 [==============================] - 2s 7ms/step - loss: 0.7332 - sparse_categorical_accuracy: 0.8162 Test accuracy: 81.62% We get to an improved test accuracy. Conclusion As shown in the experiments, using the supervised contrastive learning technique outperformed the conventional technique in terms of the test accuracy. Note that the same training budget (i.e., number of epochs) was given to each technique. Supervised contrastive learning pays off when the encoder involves a complex architecture, like ResNet, and multi-class problems with many labels. In addition, large batch sizes and multi-layer projection heads improve its effectiveness. See the Supervised Contrastive Learning paper for more details. Training a video classifier with transfer learning and a recurrent model on the UCF101 dataset. This example demonstrates video classification, an important use-case with applications in recommendations, security, and so on. We will be using the UCF101 dataset to build our video classifier. The dataset consists of videos categorized into different actions, like cricket shot, punching, biking, etc. This dataset is commonly used to build action recognizers, which are an application of video classification. A video consists of an ordered sequence of frames. Each frame contains spatial information, and the sequence of those frames contains temporal information. To model both of these aspects, we use a hybrid architecture that consists of convolutions (for spatial processing) as well as recurrent layers (for temporal processing). Specifically, we'll use a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) consisting of GRU layers. This kind of hybrid architecture is popularly known as a CNN-RNN. This example requires TensorFlow 2.5 or higher, as well as TensorFlow Docs, which can be installed using the following command: !pip install -q git+https://github.com/tensorflow/docs Data collection In order to keep the runtime of this example relatively short, we will be using a subsampled version of the original UCF101 dataset. You can refer to this notebook to know how the subsampling was done. 
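Before downloading the data, here is a minimal, self-contained sketch of the CNN-RNN pattern described above: a small CNN processes every frame (spatial information) and a GRU summarizes the per-frame features over time (temporal information). The layer sizes are illustrative placeholders only; the actual model in this example instead precomputes frame features with a pre-trained CNN and feeds them to a GRU-based sequence model.

from tensorflow import keras
from tensorflow.keras import layers

# Illustrative values only; these are not the hyperparameters used below.
NUM_FRAMES, FRAME_SIZE, NUM_CLASSES = 20, 64, 5

# A small per-frame CNN.
frame_cnn = keras.Sequential(
    [
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
    ]
)

inputs = keras.Input((NUM_FRAMES, FRAME_SIZE, FRAME_SIZE, 3))
x = layers.TimeDistributed(frame_cnn)(inputs)  # apply the CNN to each frame
x = layers.GRU(16)(x)                          # aggregate features over time
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
toy_cnn_rnn = keras.Model(inputs, outputs, name="toy_cnn_rnn")
toy_cnn_rnn.summary()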
!wget -q https://git.io/JGc31 -O ucf101_top5.tar.gz !tar xf ucf101_top5.tar.gz Setup from tensorflow_docs.vis import embed from tensorflow import keras from imutils import paths import matplotlib.pyplot as plt import tensorflow as tf import pandas as pd import numpy as np import imageio import cv2 import os 2021-09-13 14:08:15.945527: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-09-13 14:08:15.945551: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Define hyperparameters IMG_SIZE = 224 BATCH_SIZE = 64 EPOCHS = 10 MAX_SEQ_LENGTH = 20 NUM_FEATURES = 2048 Data preparation train_df = pd.read_csv(\"train.csv\") test_df = pd.read_csv(\"test.csv\") print(f\"Total videos for training: {len(train_df)}\") print(f\"Total videos for testing: {len(test_df)}\") train_df.sample(10) Total videos for training: 594 Total videos for testing: 224 video_name tag 149 v_PlayingCello_g12_c05.avi PlayingCello 317 v_Punch_g19_c05.avi Punch 438 v_ShavingBeard_g20_c03.avi ShavingBeard 559 v_TennisSwing_g20_c02.avi TennisSwing 368 v_ShavingBeard_g09_c03.avi ShavingBeard 241 v_Punch_g08_c04.avi Punch 398 v_ShavingBeard_g14_c03.avi ShavingBeard 111 v_CricketShot_g25_c01.avi CricketShot 119 v_PlayingCello_g08_c02.avi PlayingCello 249 v_Punch_g09_c05.avi Punch One of the many challenges of training video classifiers is figuring out a way to feed the videos to a network. This blog post discusses five such methods. Since a video is an ordered sequence of frames, we could just extract the frames and put them in a 3D tensor. But the number of frames may differ from video to video, which would prevent us from stacking them into batches (unless we use padding). As an alternative, we can save video frames at a fixed interval until a maximum frame count is reached. In this example we will do the following: Capture the frames of a video. Extract frames from the video until a maximum frame count is reached. If a video's frame count is less than the maximum frame count, we will pad the video with zeros. Note that this workflow is identical to problems involving text sequences. Videos in the UCF101 dataset are known to not contain extreme variations in objects and actions across frames. Because of this, it may be okay to only consider a few frames for the learning task. But this approach may not generalize well to other video classification problems. We will be using OpenCV's VideoCapture() method to read frames from videos. # The following two methods are taken from this tutorial: # https://www.tensorflow.org/hub/tutorials/action_recognition_with_tf_hub def crop_center_square(frame): y, x = frame.shape[0:2] min_dim = min(y, x) start_x = (x // 2) - (min_dim // 2) start_y = (y // 2) - (min_dim // 2) return frame[start_y : start_y + min_dim, start_x : start_x + min_dim] def load_video(path, max_frames=0, resize=(IMG_SIZE, IMG_SIZE)): cap = cv2.VideoCapture(path) frames = [] try: while True: ret, frame = cap.read() if not ret: break frame = crop_center_square(frame) frame = cv2.resize(frame, resize) frame = frame[:, :, [2, 1, 0]] frames.append(frame) if len(frames) == max_frames: break finally: cap.release() return np.array(frames) We can use a pre-trained network to extract meaningful features from the extracted frames.
The Keras Applications module provides a number of state-of-the-art models pre-trained on the ImageNet-1k dataset. We will be using the InceptionV3 model for this purpose. def build_feature_extractor(): feature_extractor = keras.applications.InceptionV3( weights=\"imagenet\", include_top=False, pooling=\"avg\", input_shape=(IMG_SIZE, IMG_SIZE, 3), ) preprocess_input = keras.applications.inception_v3.preprocess_input inputs = keras.Input((IMG_SIZE, IMG_SIZE, 3)) preprocessed = preprocess_input(inputs) outputs = feature_extractor(preprocessed) return keras.Model(inputs, outputs, name=\"feature_extractor\") feature_extractor = build_feature_extractor() 2021-09-13 14:08:17.043898: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-13 14:08:17.044381: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-09-13 14:08:17.044436: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublas.so.11'; dlerror: libcublas.so.11: cannot open shared object file: No such file or directory 2021-09-13 14:08:17.044470: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublasLt.so.11'; dlerror: libcublasLt.so.11: cannot open shared object file: No such file or directory 2021-09-13 14:08:17.055998: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusolver.so.11'; dlerror: libcusolver.so.11: cannot open shared object file: No such file or directory 2021-09-13 14:08:17.056056: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusparse.so.11'; dlerror: libcusparse.so.11: cannot open shared object file: No such file or directory 2021-09-13 14:08:17.056646: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1835] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... 2021-09-13 14:08:17.056971: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. The labels of the videos are strings. Neural networks do not understand string values, so they must be converted to some numerical form before they are fed to the model. Here we will use the StringLookup layer encode the class labels as integers. label_processor = keras.layers.StringLookup( num_oov_indices=0, vocabulary=np.unique(train_df[\"tag\"]) ) print(label_processor.get_vocabulary()) ['CricketShot', 'PlayingCello', 'Punch', 'ShavingBeard', 'TennisSwing'] Finally, we can put all the pieces together to create our data processing utility. 
def prepare_all_videos(df, root_dir): num_samples = len(df) video_paths = df[\"video_name\"].values.tolist() labels = df[\"tag\"].values labels = label_processor(labels[..., None]).numpy() # `frame_masks` and `frame_features` are what we will feed to our sequence model. # `frame_masks` will contain a bunch of booleans denoting if a timestep is # masked with padding or not. frame_masks = np.zeros(shape=(num_samples, MAX_SEQ_LENGTH), dtype=\"bool\") frame_features = np.zeros( shape=(num_samples, MAX_SEQ_LENGTH, NUM_FEATURES), dtype=\"float32\" ) # For each video. for idx, path in enumerate(video_paths): # Gather all its frames and add a batch dimension. frames = load_video(os.path.join(root_dir, path)) frames = frames[None, ...] # Initialize placeholders to store the masks and features of the current video. temp_frame_mask = np.zeros(shape=(1, MAX_SEQ_LENGTH,), dtype=\"bool\") temp_frame_features = np.zeros( shape=(1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype=\"float32\" ) # Extract features from the frames of the current video. for i, batch in enumerate(frames): video_length = batch.shape[0] length = min(MAX_SEQ_LENGTH, video_length) for j in range(length): temp_frame_features[i, j, :] = feature_extractor.predict( batch[None, j, :] ) temp_frame_mask[i, :length] = 1 # 1 = not masked, 0 = masked frame_features[idx,] = temp_frame_features.squeeze() frame_masks[idx,] = temp_frame_mask.squeeze() return (frame_features, frame_masks), labels train_data, train_labels = prepare_all_videos(train_df, \"train\") test_data, test_labels = prepare_all_videos(test_df, \"test\") print(f\"Frame features in train set: {train_data[0].shape}\") print(f\"Frame masks in train set: {train_data[1].shape}\") 2021-09-13 14:08:18.486751: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2) Frame features in train set: (594, 20, 2048) Frame masks in train set: (594, 20) The above code block will take ~20 minutes to execute depending on the machine it's being executed. The sequence model Now, we can feed this data to a sequence model consisting of recurrent layers like GRU. # Utility for our sequence model. def get_sequence_model(): class_vocab = label_processor.get_vocabulary() frame_features_input = keras.Input((MAX_SEQ_LENGTH, NUM_FEATURES)) mask_input = keras.Input((MAX_SEQ_LENGTH,), dtype=\"bool\") # Refer to the following tutorial to understand the significance of using `mask`: # https://keras.io/api/layers/recurrent_layers/gru/ x = keras.layers.GRU(16, return_sequences=True)( frame_features_input, mask=mask_input ) x = keras.layers.GRU(8)(x) x = keras.layers.Dropout(0.4)(x) x = keras.layers.Dense(8, activation=\"relu\")(x) output = keras.layers.Dense(len(class_vocab), activation=\"softmax\")(x) rnn_model = keras.Model([frame_features_input, mask_input], output) rnn_model.compile( loss=\"sparse_categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"] ) return rnn_model # Utility for running experiments. 
def run_experiment(): filepath = \"/tmp/video_classifier\" checkpoint = keras.callbacks.ModelCheckpoint( filepath, save_weights_only=True, save_best_only=True, verbose=1 ) seq_model = get_sequence_model() history = seq_model.fit( [train_data[0], train_data[1]], train_labels, validation_split=0.3, epochs=EPOCHS, callbacks=[checkpoint], ) seq_model.load_weights(filepath) _, accuracy = seq_model.evaluate([test_data[0], test_data[1]], test_labels) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") return history, seq_model _, sequence_model = run_experiment() Epoch 1/10 13/13 [==============================] - 4s 101ms/step - loss: 1.5259 - accuracy: 0.3157 - val_loss: 1.4732 - val_accuracy: 0.3408 Epoch 00001: val_loss improved from inf to 1.47325, saving model to /tmp/video_classifier Epoch 2/10 13/13 [==============================] - 0s 21ms/step - loss: 1.3087 - accuracy: 0.5880 - val_loss: 1.4751 - val_accuracy: 0.3408 Epoch 00002: val_loss did not improve from 1.47325 Epoch 3/10 13/13 [==============================] - 0s 20ms/step - loss: 1.1532 - accuracy: 0.6795 - val_loss: 1.5020 - val_accuracy: 0.3408 Epoch 00003: val_loss did not improve from 1.47325 Epoch 4/10 13/13 [==============================] - 0s 20ms/step - loss: 1.0586 - accuracy: 0.7325 - val_loss: 1.5205 - val_accuracy: 0.3464 Epoch 00004: val_loss did not improve from 1.47325 Epoch 5/10 13/13 [==============================] - 0s 21ms/step - loss: 0.9556 - accuracy: 0.7422 - val_loss: 1.5748 - val_accuracy: 0.3464 Epoch 00005: val_loss did not improve from 1.47325 Epoch 6/10 13/13 [==============================] - 0s 21ms/step - loss: 0.8988 - accuracy: 0.7783 - val_loss: 1.6144 - val_accuracy: 0.3464 Epoch 00006: val_loss did not improve from 1.47325 Epoch 7/10 13/13 [==============================] - 0s 21ms/step - loss: 0.8242 - accuracy: 0.8072 - val_loss: 1.7030 - val_accuracy: 0.3408 Epoch 00007: val_loss did not improve from 1.47325 Epoch 8/10 13/13 [==============================] - 0s 20ms/step - loss: 0.7479 - accuracy: 0.8434 - val_loss: 1.7466 - val_accuracy: 0.3464 Epoch 00008: val_loss did not improve from 1.47325 Epoch 9/10 13/13 [==============================] - 0s 20ms/step - loss: 0.6740 - accuracy: 0.8627 - val_loss: 1.8800 - val_accuracy: 0.3464 Epoch 00009: val_loss did not improve from 1.47325 Epoch 10/10 13/13 [==============================] - 0s 20ms/step - loss: 0.6519 - accuracy: 0.8265 - val_loss: 1.9150 - val_accuracy: 0.3464 Epoch 00010: val_loss did not improve from 1.47325 7/7 [==============================] - 1s 5ms/step - loss: 1.3806 - accuracy: 0.6875 Test accuracy: 68.75% Note: To keep the runtime of this example relatively short, we just used a few training examples. This number of training examples is low with respect to the sequence model being used that has 99,909 trainable parameters. You are encouraged to sample more data from the UCF101 dataset using the notebook mentioned above and train the same model. Inference def prepare_single_video(frames): frames = frames[None, ...] 
frame_mask = np.zeros(shape=(1, MAX_SEQ_LENGTH,), dtype=\"bool\") frame_features = np.zeros(shape=(1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype=\"float32\") for i, batch in enumerate(frames): video_length = batch.shape[0] length = min(MAX_SEQ_LENGTH, video_length) for j in range(length): frame_features[i, j, :] = feature_extractor.predict(batch[None, j, :]) frame_mask[i, :length] = 1 # 1 = not masked, 0 = masked return frame_features, frame_mask def sequence_prediction(path): class_vocab = label_processor.get_vocabulary() frames = load_video(os.path.join(\"test\", path)) frame_features, frame_mask = prepare_single_video(frames) probabilities = sequence_model.predict([frame_features, frame_mask])[0] for i in np.argsort(probabilities)[::-1]: print(f\" {class_vocab[i]}: {probabilities[i] * 100:5.2f}%\") return frames # This utility is for visualization. # Referenced from: # https://www.tensorflow.org/hub/tutorials/action_recognition_with_tf_hub def to_gif(images): converted_images = images.astype(np.uint8) imageio.mimsave(\"animation.gif\", converted_images, fps=10) return embed.embed_file(\"animation.gif\") test_video = np.random.choice(test_df[\"video_name\"].values.tolist()) print(f\"Test video path: {test_video}\") test_frames = sequence_prediction(test_video) to_gif(test_frames[:MAX_SEQ_LENGTH]) Test video path: v_PlayingCello_g05_c03.avi PlayingCello: 25.61% CricketShot: 24.82% ShavingBeard: 19.38% TennisSwing: 17.43% Punch: 12.77% Next steps In this example, we made use of transfer learning for extracting meaningful features from video frames. You could also fine-tune the pre-trained network to notice how that affects the end results. For speed-accuracy trade-offs, you can try out other models present inside tf.keras.applications. Try different values of MAX_SEQ_LENGTH to observe how that affects the performance. Train on a higher number of classes and see if you are able to get good performance. Following this tutorial, try a pre-trained action recognition model from DeepMind. Rolling-averaging can be a useful technique for video classification, and it can be combined with a standard image classification model to run inference on videos. This tutorial will help you understand how to use rolling-averaging with an image classifier. When there are variations between the frames of a video, not all frames might be equally important for deciding its category. In those situations, putting a self-attention layer in the sequence model will likely yield better results. Following this book chapter, you can implement Transformer-based models for processing videos. Training a video classifier with hybrid transformers. This example is a follow-up to the Video Classification with a CNN-RNN Architecture example. This time, we will be using a Transformer-based model (Vaswani et al.) to classify videos. You can follow this book chapter in case you need an introduction to Transformers (with code). After reading this example, you will know how to develop hybrid Transformer-based models for video classification that operate on CNN feature maps. This example requires TensorFlow 2.5 or higher, as well as TensorFlow Docs, which can be installed using the following command: !pip install -q git+https://github.com/tensorflow/docs  WARNING: Built wheel for tensorflow-docs is invalid: Metadata 1.2 mandates PEP 440 version, but '0.0.0543363dfdc669b09def1e06abdd34b76337fba4e-' is not  DEPRECATION: tensorflow-docs was installed using the legacy 'setup.py install' method, because a wheel could not be built for it.
A possible replacement is to fix the wheel build issue reported above. You can find discussion regarding this at https://github.com/pypa/pip/issues/8368. Data collection As done in the predecessor to this example, we will be using a subsampled version of the UCF101 dataset, a well-known benchmark dataset. In case you want to operate on a larger subsample or even the entire dataset, please refer to this notebook. !wget -q https://git.io/JGc31 -O ucf101_top5.tar.gz !tar xf ucf101_top5.tar.gz Setup from tensorflow_docs.vis import embed from tensorflow.keras import layers from tensorflow import keras import matplotlib.pyplot as plt import tensorflow as tf import pandas as pd import numpy as np import imageio import cv2 import os 2021-09-14 13:26:26.593418: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-09-14 13:26:26.593444: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Define hyperparameters MAX_SEQ_LENGTH = 20 NUM_FEATURES = 1024 IMG_SIZE = 128 EPOCHS = 5 Data preparation We will mostly be following the same data preparation steps in this example, except for the following changes: We reduce the image size to 128x128 instead of 224x224 to speed up computation. Instead of using a pre-trained InceptionV3 network, we use a pre-trained DenseNet121 for feature extraction. We directly pad shorter videos to length MAX_SEQ_LENGTH. First, let's load up the DataFrames. train_df = pd.read_csv(\"train.csv\") test_df = pd.read_csv(\"test.csv\") print(f\"Total videos for training: {len(train_df)}\") print(f\"Total videos for testing: {len(test_df)}\") center_crop_layer = layers.CenterCrop(IMG_SIZE, IMG_SIZE) def crop_center(frame): cropped = center_crop_layer(frame[None, ...]) cropped = cropped.numpy().squeeze() return cropped # Following method is modified from this tutorial: # https://www.tensorflow.org/hub/tutorials/action_recognition_with_tf_hub def load_video(path, max_frames=0): cap = cv2.VideoCapture(path) frames = [] try: while True: ret, frame = cap.read() if not ret: break frame = crop_center(frame) frame = frame[:, :, [2, 1, 0]] frames.append(frame) if len(frames) == max_frames: break finally: cap.release() return np.array(frames) def build_feature_extractor(): feature_extractor = keras.applications.DenseNet121( weights=\"imagenet\", include_top=False, pooling=\"avg\", input_shape=(IMG_SIZE, IMG_SIZE, 3), ) preprocess_input = keras.applications.densenet.preprocess_input inputs = keras.Input((IMG_SIZE, IMG_SIZE, 3)) preprocessed = preprocess_input(inputs) outputs = feature_extractor(preprocessed) return keras.Model(inputs, outputs, name=\"feature_extractor\") feature_extractor = build_feature_extractor() # Label preprocessing with StringLookup. label_processor = keras.layers.StringLookup( num_oov_indices=0, vocabulary=np.unique(train_df[\"tag\"]), mask_token=None ) print(label_processor.get_vocabulary()) def prepare_all_videos(df, root_dir): num_samples = len(df) video_paths = df[\"video_name\"].values.tolist() labels = df[\"tag\"].values labels = label_processor(labels[..., None]).numpy() # `frame_features` are what we will feed to our sequence model. frame_features = np.zeros( shape=(num_samples, MAX_SEQ_LENGTH, NUM_FEATURES), dtype=\"float32\" ) # For each video. for idx, path in enumerate(video_paths): # Gather all its frames and add a batch dimension. 
frames = load_video(os.path.join(root_dir, path)) # Pad shorter videos. if len(frames) < MAX_SEQ_LENGTH: diff = MAX_SEQ_LENGTH - len(frames) padding = np.zeros((diff, IMG_SIZE, IMG_SIZE, 3)) frames = np.concatenate([frames, padding]) frames = frames[None, ...] # Initialize placeholder to store the features of the current video. temp_frame_features = np.zeros( shape=(1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype=\"float32\" ) # Extract features from the frames of the current video. for i, batch in enumerate(frames): video_length = batch.shape[0] length = min(MAX_SEQ_LENGTH, video_length) for j in range(length): if np.mean(batch[j, :]) > 0.0: temp_frame_features[i, j, :] = feature_extractor.predict( batch[None, j, :] ) else: temp_frame_features[i, j, :] = 0.0 frame_features[idx,] = temp_frame_features.squeeze() return frame_features, labels Total videos for training: 594 Total videos for testing: 224 2021-09-14 13:26:28.169035: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2021-09-14 13:26:28.169629: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-09-14 13:26:28.169696: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublas.so.11'; dlerror: libcublas.so.11: cannot open shared object file: No such file or directory 2021-09-14 13:26:28.169746: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublasLt.so.11'; dlerror: libcublasLt.so.11: cannot open shared object file: No such file or directory 2021-09-14 13:26:28.179403: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusolver.so.11'; dlerror: libcusolver.so.11: cannot open shared object file: No such file or directory 2021-09-14 13:26:28.179462: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusparse.so.11'; dlerror: libcusparse.so.11: cannot open shared object file: No such file or directory 2021-09-14 13:26:28.180051: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1835] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... 2021-09-14 13:26:28.180325: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/densenet/densenet121_weights_tf_dim_ordering_tf_kernels_notop.h5 29089792/29084464 [==============================] - 1s 0us/step 29097984/29084464 [==============================] - 1s 0us/step ['CricketShot', 'PlayingCello', 'Punch', 'ShavingBeard', 'TennisSwing'] Calling prepare_all_videos() on train_df and test_df takes ~20 minutes to complete.
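Much of that time is spent calling feature_extractor.predict() once per frame. As a possible optimization (not part of the original example), the frames of a video can be pushed through the feature extractor in a single batched call. The sketch below assumes the feature_extractor, MAX_SEQ_LENGTH and NUM_FEATURES defined above are in scope; extract_video_features is a hypothetical helper, not used elsewhere in this example.

import numpy as np

def extract_video_features(frames, max_seq_length=MAX_SEQ_LENGTH):
    # Padded positions are simply left at zero instead of being run
    # through the network, which matches the intent of the loop above.
    features = np.zeros((max_seq_length, NUM_FEATURES), dtype="float32")
    length = min(max_seq_length, len(frames))
    if length > 0:
        # One batched forward pass over all real frames of the video.
        features[:length] = feature_extractor.predict(frames[:length])
    return features

Even so, extracting features for every video in the dataset still takes a while.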
For this reason, to save time, here we download already preprocessed NumPy arrays: !wget -q https://git.io/JZmf4 -O top5_data_prepared.tar.gz !tar xf top5_data_prepared.tar.gz train_data, train_labels = np.load(\"train_data.npy\"), np.load(\"train_labels.npy\") test_data, test_labels = np.load(\"test_data.npy\"), np.load(\"test_labels.npy\") print(f\"Frame features in train set: {train_data.shape}\") Frame features in train set: (594, 20, 1024) Building the Transformer-based model We will be building on top of the code shared in this book chapter of Deep Learning with Python (Second ed.) by François Chollet. First, self-attention layers that form the basic blocks of a Transformer are order-agnostic. Since videos are ordered sequences of frames, we need our Transformer model to take into account order information. We do this via positional encoding. We simply embed the positions of the frames present inside videos with an Embedding layer. We then add these positional embeddings to the precomputed CNN feature maps. class PositionalEmbedding(layers.Layer): def __init__(self, sequence_length, output_dim, **kwargs): super().__init__(**kwargs) self.position_embeddings = layers.Embedding( input_dim=sequence_length, output_dim=output_dim ) self.sequence_length = sequence_length self.output_dim = output_dim def call(self, inputs): # The inputs are of shape: `(batch_size, frames, num_features)` length = tf.shape(inputs)[1] positions = tf.range(start=0, limit=length, delta=1) embedded_positions = self.position_embeddings(positions) return inputs + embedded_positions def compute_mask(self, inputs, mask=None): mask = tf.reduce_any(tf.cast(inputs, \"bool\"), axis=-1) return mask Now, we can create a subclassed layer for the Transformer. class TransformerEncoder(layers.Layer): def __init__(self, embed_dim, dense_dim, num_heads, **kwargs): super().__init__(**kwargs) self.embed_dim = embed_dim self.dense_dim = dense_dim self.num_heads = num_heads self.attention = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim, dropout=0.3 ) self.dense_proj = keras.Sequential( [layers.Dense(dense_dim, activation=tf.nn.gelu), layers.Dense(embed_dim),] ) self.layernorm_1 = layers.LayerNormalization() self.layernorm_2 = layers.LayerNormalization() def call(self, inputs, mask=None): if mask is not None: mask = mask[:, tf.newaxis, :] attention_output = self.attention(inputs, inputs, attention_mask=mask) proj_input = self.layernorm_1(inputs + attention_output) proj_output = self.dense_proj(proj_input) return self.layernorm_2(proj_input + proj_output) Utility functions for training def get_compiled_model(): sequence_length = MAX_SEQ_LENGTH embed_dim = NUM_FEATURES dense_dim = 4 num_heads = 1 classes = len(label_processor.get_vocabulary()) inputs = keras.Input(shape=(None, None)) x = PositionalEmbedding( sequence_length, embed_dim, name=\"frame_position_embedding\" )(inputs) x = TransformerEncoder(embed_dim, dense_dim, num_heads, name=\"transformer_layer\")(x) x = layers.GlobalMaxPooling1D()(x) x = layers.Dropout(0.5)(x) outputs = layers.Dense(classes, activation=\"softmax\")(x) model = keras.Model(inputs, outputs) model.compile( optimizer=\"adam\", loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"] ) return model def run_experiment(): filepath = \"/tmp/video_classifier\" checkpoint = keras.callbacks.ModelCheckpoint( filepath, save_weights_only=True, save_best_only=True, verbose=1 ) model = get_compiled_model() history = model.fit( train_data, train_labels, validation_split=0.15, epochs=EPOCHS, 
callbacks=[checkpoint], ) model.load_weights(filepath) _, accuracy = model.evaluate(test_data, test_labels) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") return model Model training and inference trained_model = run_experiment() 2021-09-14 13:27:55.649167: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2) Epoch 1/5 16/16 [==============================] - 2s 69ms/step - loss: 1.7206 - accuracy: 0.6548 - val_loss: 1.6100 - val_accuracy: 0.2889 Epoch 00001: val_loss improved from inf to 1.61001, saving model to /tmp/video_classifier Epoch 2/5 16/16 [==============================] - 1s 58ms/step - loss: 0.1306 - accuracy: 0.9524 - val_loss: 1.9321 - val_accuracy: 0.4111 Epoch 00002: val_loss did not improve from 1.61001 Epoch 3/5 16/16 [==============================] - 1s 58ms/step - loss: 0.0704 - accuracy: 0.9742 - val_loss: 0.7381 - val_accuracy: 0.7556 Epoch 00003: val_loss improved from 1.61001 to 0.73814, saving model to /tmp/video_classifier Epoch 4/5 16/16 [==============================] - 1s 56ms/step - loss: 0.0208 - accuracy: 0.9901 - val_loss: 0.8953 - val_accuracy: 0.7778 Epoch 00004: val_loss did not improve from 0.73814 Epoch 5/5 16/16 [==============================] - 1s 56ms/step - loss: 0.0076 - accuracy: 0.9980 - val_loss: 1.5643 - val_accuracy: 0.7111 Epoch 00005: val_loss did not improve from 0.73814 7/7 [==============================] - 0s 20ms/step - loss: 0.5903 - accuracy: 0.8750 Test accuracy: 87.5% Note: This model has ~4.23 million parameters, which is far more than the sequence model (99918 parameters) we used in the prequel of this example. This kind of Transformer model works best with a larger dataset and a longer pre-training schedule. def prepare_single_video(frames): frame_features = np.zeros(shape=(1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype=\"float32\") # Pad shorter videos. if len(frames) < MAX_SEQ_LENGTH: diff = MAX_SEQ_LENGTH - len(frames) padding = np.zeros((diff, IMG_SIZE, IMG_SIZE, 3)) frames = np.concatenate([frames, padding]) frames = frames[None, ...] # Extract features from the frames of the current video. for i, batch in enumerate(frames): video_length = batch.shape[0] length = min(MAX_SEQ_LENGTH, video_length) for j in range(length): if np.mean(batch[j, :]) > 0.0: frame_features[i, j, :] = feature_extractor.predict(batch[None, j, :]) else: frame_features[i, j, :] = 0.0 return frame_features def predict_action(path): class_vocab = label_processor.get_vocabulary() frames = load_video(os.path.join(\"test\", path)) frame_features = prepare_single_video(frames) probabilities = trained_model.predict(frame_features)[0] for i in np.argsort(probabilities)[::-1]: print(f\" {class_vocab[i]}: {probabilities[i] * 100:5.2f}%\") return frames # This utility is for visualization. # Referenced from: # https://www.tensorflow.org/hub/tutorials/action_recognition_with_tf_hub def to_gif(images): converted_images = images.astype(np.uint8) imageio.mimsave(\"animation.gif\", converted_images, fps=10) return embed.embed_file(\"animation.gif\") test_video = np.random.choice(test_df[\"video_name\"].values.tolist()) print(f\"Test video path: {test_video}\") test_frames = predict_action(test_video) to_gif(test_frames[:MAX_SEQ_LENGTH]) Test video path: v_TennisSwing_g05_c06.avi TennisSwing: 98.90% CricketShot: 1.10% Punch: 0.00% ShavingBeard: 0.00% PlayingCello: 0.00% The performance of our model is far from optimal, because it was trained on a small dataset.
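As a brief aside, the masking behaviour described above can be checked directly. The sketch below assumes the PositionalEmbedding layer and the MAX_SEQ_LENGTH / NUM_FEATURES constants defined earlier in this example are in scope: zero-padded timesteps are mapped to False by compute_mask() and are therefore ignored by the attention layer downstream.

import numpy as np
import tensorflow as tf

# A dummy feature tensor with 5 "real" frames followed by zero padding.
dummy_features = np.zeros((1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype="float32")
dummy_features[0, :5, :] = np.random.rand(5, NUM_FEATURES)

pos_embed = PositionalEmbedding(MAX_SEQ_LENGTH, NUM_FEATURES)
mask = pos_embed.compute_mask(tf.constant(dummy_features))
print(mask.numpy()[0])  # [ True  True  True  True  True False ... False]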
Displaying the visual patterns that convnet filters respond to. Introduction In this example, we look into what sort of visual patterns image classification models learn. We'll be using the ResNet50V2 model, trained on the ImageNet dataset. Our process is simple: we will create input images that maximize the activation of specific filters in a target layer (picked somewhere in the middle of the model: layer conv3_block4_out). Such images represent a visualization of the pattern that the filter responds to. Setup import numpy as np import tensorflow as tf from tensorflow import keras # The dimensions of our input image img_width = 180 img_height = 180 # Our target layer: we will visualize the filters from this layer. # See `model.summary()` for list of layer names, if you want to change this. layer_name = \"conv3_block4_out\" Build a feature extraction model # Build a ResNet50V2 model loaded with pre-trained ImageNet weights model = keras.applications.ResNet50V2(weights=\"imagenet\", include_top=False) # Set up a model that returns the activation values for our target layer layer = model.get_layer(name=layer_name) feature_extractor = keras.Model(inputs=model.inputs, outputs=layer.output) Set up the gradient ascent process The \"loss\" we will maximize is simply the mean of the activation of a specific filter in our target layer. To avoid border effects, we exclude border pixels. def compute_loss(input_image, filter_index): activation = feature_extractor(input_image) # We avoid border artifacts by only involving non-border pixels in the loss. filter_activation = activation[:, 2:-2, 2:-2, filter_index] return tf.reduce_mean(filter_activation) Our gradient ascent function simply computes the gradients of the loss above with regard to the input image, and updates the input image so as to move it towards a state that will activate the target filter more strongly. @tf.function def gradient_ascent_step(img, filter_index, learning_rate): with tf.GradientTape() as tape: tape.watch(img) loss = compute_loss(img, filter_index) # Compute gradients. grads = tape.gradient(loss, img) # Normalize gradients. grads = tf.math.l2_normalize(grads) img += learning_rate * grads return loss, img Set up the end-to-end filter visualization loop Our process is as follows: Start from a random image that is close to \"all gray\" (i.e. visually neutral) Repeatedly apply the gradient ascent step function defined above Convert the resulting input image back to a displayable form, by normalizing it, center-cropping it, and restricting it to the [0, 255] range. def initialize_image(): # We start from a gray image with some random noise img = tf.random.uniform((1, img_width, img_height, 3)) # ResNet50V2 expects inputs in the range [-1, +1].
# Here we scale our random inputs to [-0.125, +0.125] return (img - 0.5) * 0.25 def visualize_filter(filter_index): # We run gradient ascent for 30 steps iterations = 30 learning_rate = 10.0 img = initialize_image() for iteration in range(iterations): loss, img = gradient_ascent_step(img, filter_index, learning_rate) # Decode the resulting input image img = deprocess_image(img[0].numpy()) return loss, img def deprocess_image(img): # Normalize array: center on 0., ensure standard deviation is 0.15 img -= img.mean() img /= img.std() + 1e-5 img *= 0.15 # Center crop img = img[25:-25, 25:-25, :] # Clip to [0, 1] img += 0.5 img = np.clip(img, 0, 1) # Convert to RGB array img *= 255 img = np.clip(img, 0, 255).astype(\"uint8\") return img Let's try it out with filter 0 in the target layer: from IPython.display import Image, display loss, img = visualize_filter(0) keras.preprocessing.image.save_img(\"0.png\", img) This is what an input that maximizes the response of filter 0 in the target layer would look like: display(Image(\"0.png\")) png Visualize the first 64 filters in the target layer Now, let's make an 8x8 grid of the first 64 filters in the target layer to get a feel for the range of different visual patterns that the model has learned. # Compute image inputs that maximize per-filter activations # for the first 64 filters of our target layer all_imgs = [] for filter_index in range(64): print(\"Processing filter %d\" % (filter_index,)) loss, img = visualize_filter(filter_index) all_imgs.append(img) # Build a black picture with enough space for # our 8 x 8 filters of size 130 x 130, with a 5px margin in between margin = 5 n = 8 cropped_width = img_width - 25 * 2 cropped_height = img_height - 25 * 2 width = n * cropped_width + (n - 1) * margin height = n * cropped_height + (n - 1) * margin stitched_filters = np.zeros((width, height, 3)) # Fill the picture with our saved filters for i in range(n): for j in range(n): img = all_imgs[i * n + j] stitched_filters[ (cropped_width + margin) * i : (cropped_width + margin) * i + cropped_width, (cropped_height + margin) * j : (cropped_height + margin) * j + cropped_height, :, ] = img keras.preprocessing.image.save_img(\"stiched_filters.png\", stitched_filters) from IPython.display import Image, display display(Image(\"stiched_filters.png\")) Processing filter 0 Processing filter 1 Processing filter 2 Processing filter 3 Processing filter 4 Processing filter 5 Processing filter 6 Processing filter 7 Processing filter 8 Processing filter 9 Processing filter 10 Processing filter 11 Processing filter 12 Processing filter 13 Processing filter 14 Processing filter 15 Processing filter 16 Processing filter 17 Processing filter 18 Processing filter 19 Processing filter 20 Processing filter 21 Processing filter 22 Processing filter 23 Processing filter 24 Processing filter 25 Processing filter 26 Processing filter 27 Processing filter 28 Processing filter 29 Processing filter 30 Processing filter 31 Processing filter 32 Processing filter 33 Processing filter 34 Processing filter 35 Processing filter 36 Processing filter 37 Processing filter 38 Processing filter 39 Processing filter 40 Processing filter 41 Processing filter 42 Processing filter 43 Processing filter 44 Processing filter 45 Processing filter 46 Processing filter 47 Processing filter 48 Processing filter 49 Processing filter 50 Processing filter 51 Processing filter 52 Processing filter 53 Processing filter 54 Processing filter 55 Processing filter 56 Processing filter 57 Processing filter 58 Processing
filter 59 Processing filter 60 Processing filter 61 Processing filter 62 Processing filter 63 png Image classification models see the world by decomposing their inputs over a \"vector basis\" of texture filters such as these. See also this old blog post for analysis and interpretation. Implementing Zero-Reference Deep Curve Estimation for low-light image enhancement Introduction Zero-Reference Deep Curve Estimation or Zero-DCE formulates low-light image enhancement as the task of estimating an image-specific tonal curve with a deep neural network. In this example, we train a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order tonal curves for dynamic range adjustment of a given image. Zero-DCE takes a low-light image as input and produces high-order tonal curves as its output. These curves are then used for pixel-wise adjustment on the dynamic range of the input to obtain an enhanced image. The curve estimation process is done in such a way that it maintains the range of the enhanced image and preserves the contrast of neighboring pixels. This curve estimation is inspired by curves adjustment used in photo editing software such as Adobe Photoshop, where users can adjust points throughout an image's tonal range. Zero-DCE is appealing because of its relaxed assumptions with regard to reference images: it does not require any input/output image pairs during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and guide the training of the network. References Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement Curves adjustment in Adobe Photoshop Downloading LOLDataset The LoL Dataset has been created for low-light image enhancement. It provides 485 images for training and 15 for testing. Each image pair in the dataset consists of a low-light input image and its corresponding well-exposed reference image. import os import random import numpy as np from glob import glob from PIL import Image, ImageOps import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers !gdown https://drive.google.com/uc?id=1DdGIJ4PZPlF2ikl8mNM9V-PdVxVLbQi6 !unzip -q lol_dataset.zip Downloading... From: https://drive.google.com/uc?id=1DdGIJ4PZPlF2ikl8mNM9V-PdVxVLbQi6 To: /content/keras-io/scripts/tmp_4644685/lol_dataset.zip 347MB [00:03, 93.3MB/s] Creating a TensorFlow Dataset We use 400 low-light images from the LoL Dataset training set for training, and we use the remaining 85 low-light images for validation. We resize the images to size 256 x 256 to be used for both training and validation. Note that in order to train the DCE-Net, we will not require the corresponding enhanced images.
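As a quick, optional check, the extracted folders can be inspected with glob before building the pipeline; the paths below are the same ones used by the data generator in the next code block.

from glob import glob

# 485 low-light training images and 15 low-light test images are expected.
num_train_low = len(glob("./lol_dataset/our485/low/*"))
num_test_low = len(glob("./lol_dataset/eval15/low/*"))
print(f"Low-light training images: {num_train_low}")  # expected: 485
print(f"Low-light test images: {num_test_low}")       # expected: 15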
IMAGE_SIZE = 256 BATCH_SIZE = 16 MAX_TRAIN_IMAGES = 400 def load_data(image_path): image = tf.io.read_file(image_path) image = tf.image.decode_png(image, channels=3) image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE]) image = image / 255.0 return image def data_generator(low_light_images): dataset = tf.data.Dataset.from_tensor_slices((low_light_images)) dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) return dataset train_low_light_images = sorted(glob(\"./lol_dataset/our485/low/*\"))[:MAX_TRAIN_IMAGES] val_low_light_images = sorted(glob(\"./lol_dataset/our485/low/*\"))[MAX_TRAIN_IMAGES:] test_low_light_images = sorted(glob(\"./lol_dataset/eval15/low/*\")) train_dataset = data_generator(train_low_light_images) val_dataset = data_generator(val_low_light_images) print(\"Train Dataset:\", train_dataset) print(\"Validation Dataset:\", val_dataset) Train Dataset: Validation Dataset: The Zero-DCE Framework The goal of DCE-Net is to estimate a set of best-fitting light-enhancement curves (LE-curves) given an input image. The framework then maps all pixels of the input's RGB channels by applying the curves iteratively to obtain the final enhanced image. Understanding light-enhancement curves A light-enhancement curve is a kind of curve that can map a low-light image to its enhanced version automatically, where the self-adaptive curve parameters are solely dependent on the input image. When designing such a curve, three objectives should be taken into account: Each pixel value of the enhanced image should be in the normalized range [0,1], in order to avoid information loss induced by overflow truncation. It should be monotonic, to preserve the contrast between neighboring pixels. The shape of this curve should be as simple as possible, and the curve should be differentiable to allow backpropagation. The light-enhancement curve is separately applied to three RGB channels instead of solely on the illumination channel. The three-channel adjustment can better preserve the inherent color and reduce the risk of over-saturation. DCE-Net The DCE-Net is a lightweight deep neural network that learns the mapping between an input image and its best-fitting curve parameter maps. The input to the DCE-Net is a low-light image, while the outputs are a set of pixel-wise curve parameter maps for corresponding higher-order curves. It is a plain CNN of seven convolutional layers with symmetrical concatenation. Each layer consists of 32 convolutional kernels of size 3×3 and stride 1 followed by the ReLU activation function. The last convolutional layer is followed by the Tanh activation function, which produces 24 parameter maps for 8 iterations, where each iteration requires three curve parameter maps for the three channels.
def build_dce_net(): input_img = keras.Input(shape=[None, None, 3]) conv1 = layers.Conv2D( 32, (3, 3), strides=(1, 1), activation=\"relu\", padding=\"same\" )(input_img) conv2 = layers.Conv2D( 32, (3, 3), strides=(1, 1), activation=\"relu\", padding=\"same\" )(conv1) conv3 = layers.Conv2D( 32, (3, 3), strides=(1, 1), activation=\"relu\", padding=\"same\" )(conv2) conv4 = layers.Conv2D( 32, (3, 3), strides=(1, 1), activation=\"relu\", padding=\"same\" )(conv3) int_con1 = layers.Concatenate(axis=-1)([conv4, conv3]) conv5 = layers.Conv2D( 32, (3, 3), strides=(1, 1), activation=\"relu\", padding=\"same\" )(int_con1) int_con2 = layers.Concatenate(axis=-1)([conv5, conv2]) conv6 = layers.Conv2D( 32, (3, 3), strides=(1, 1), activation=\"relu\", padding=\"same\" )(int_con2) int_con3 = layers.Concatenate(axis=-1)([conv6, conv1]) x_r = layers.Conv2D(24, (3, 3), strides=(1, 1), activation=\"tanh\", padding=\"same\")( int_con3 ) return keras.Model(inputs=input_img, outputs=x_r) Loss functions To enable zero-reference learning in DCE-Net, we use a set of differentiable zero-reference losses that allow us to evaluate the quality of enhanced images. Color constancy loss The color constancy loss is used to correct the potential color deviations in the enhanced image. def color_constancy_loss(x): mean_rgb = tf.reduce_mean(x, axis=(1, 2), keepdims=True) mr, mg, mb = mean_rgb[:, :, :, 0], mean_rgb[:, :, :, 1], mean_rgb[:, :, :, 2] d_rg = tf.square(mr - mg) d_rb = tf.square(mr - mb) d_gb = tf.square(mb - mg) return tf.sqrt(tf.square(d_rg) + tf.square(d_rb) + tf.square(d_gb)) Exposure loss To restrain under-/over-exposed regions, we use the exposure control loss. It measures the distance between the average intensity value of a local region and a preset well-exposedness level (set to 0.6). def exposure_loss(x, mean_val=0.6): x = tf.reduce_mean(x, axis=3, keepdims=True) mean = tf.nn.avg_pool2d(x, ksize=16, strides=16, padding=\"VALID\") return tf.reduce_mean(tf.square(mean - mean_val)) Illumination smoothness loss To preserve the monotonicity relations between neighboring pixels, the illumination smoothness loss is added to each curve parameter map. def illumination_smoothness_loss(x): batch_size = tf.shape(x)[0] h_x = tf.shape(x)[1] w_x = tf.shape(x)[2] count_h = (tf.shape(x)[2] - 1) * tf.shape(x)[3] count_w = tf.shape(x)[2] * (tf.shape(x)[3] - 1) h_tv = tf.reduce_sum(tf.square((x[:, 1:, :, :] - x[:, : h_x - 1, :, :]))) w_tv = tf.reduce_sum(tf.square((x[:, :, 1:, :] - x[:, :, : w_x - 1, :]))) batch_size = tf.cast(batch_size, dtype=tf.float32) count_h = tf.cast(count_h, dtype=tf.float32) count_w = tf.cast(count_w, dtype=tf.float32) return 2 * (h_tv / count_h + w_tv / count_w) / batch_size Spatial consistency loss The spatial consistency loss encourages spatial coherence of the enhanced image by preserving the contrast between neighboring regions across the input image and its enhanced version. 
class SpatialConsistencyLoss(keras.losses.Loss): def __init__(self, **kwargs): super(SpatialConsistencyLoss, self).__init__(reduction=\"none\") self.left_kernel = tf.constant( [[[[0, 0, 0]], [[-1, 1, 0]], [[0, 0, 0]]]], dtype=tf.float32 ) self.right_kernel = tf.constant( [[[[0, 0, 0]], [[0, 1, -1]], [[0, 0, 0]]]], dtype=tf.float32 ) self.up_kernel = tf.constant( [[[[0, -1, 0]], [[0, 1, 0]], [[0, 0, 0]]]], dtype=tf.float32 ) self.down_kernel = tf.constant( [[[[0, 0, 0]], [[0, 1, 0]], [[0, -1, 0]]]], dtype=tf.float32 ) def call(self, y_true, y_pred): original_mean = tf.reduce_mean(y_true, 3, keepdims=True) enhanced_mean = tf.reduce_mean(y_pred, 3, keepdims=True) original_pool = tf.nn.avg_pool2d( original_mean, ksize=4, strides=4, padding=\"VALID\" ) enhanced_pool = tf.nn.avg_pool2d( enhanced_mean, ksize=4, strides=4, padding=\"VALID\" ) d_original_left = tf.nn.conv2d( original_pool, self.left_kernel, strides=[1, 1, 1, 1], padding=\"SAME\" ) d_original_right = tf.nn.conv2d( original_pool, self.right_kernel, strides=[1, 1, 1, 1], padding=\"SAME\" ) d_original_up = tf.nn.conv2d( original_pool, self.up_kernel, strides=[1, 1, 1, 1], padding=\"SAME\" ) d_original_down = tf.nn.conv2d( original_pool, self.down_kernel, strides=[1, 1, 1, 1], padding=\"SAME\" ) d_enhanced_left = tf.nn.conv2d( enhanced_pool, self.left_kernel, strides=[1, 1, 1, 1], padding=\"SAME\" ) d_enhanced_right = tf.nn.conv2d( enhanced_pool, self.right_kernel, strides=[1, 1, 1, 1], padding=\"SAME\" ) d_enhanced_up = tf.nn.conv2d( enhanced_pool, self.up_kernel, strides=[1, 1, 1, 1], padding=\"SAME\" ) d_enhanced_down = tf.nn.conv2d( enhanced_pool, self.down_kernel, strides=[1, 1, 1, 1], padding=\"SAME\" ) d_left = tf.square(d_original_left - d_enhanced_left) d_right = tf.square(d_original_right - d_enhanced_right) d_up = tf.square(d_original_up - d_enhanced_up) d_down = tf.square(d_original_down - d_enhanced_down) return d_left + d_right + d_up + d_down Deep curve estimation model We implement the Zero-DCE framework as a Keras subclassed model. 
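One detail worth noting before the model: SpatialConsistencyLoss is constructed with reduction disabled, so calling it returns a spatially resolved map over the pooled 4x4 regions rather than a single number; ZeroDCE.compute_losses below averages that map itself with tf.reduce_mean. A minimal, illustrative check on random stand-in tensors (not example data):

import tensorflow as tf

# Illustrative only: with reduction disabled, the loss keeps its spatial layout,
# one set of directional differences per pooled region.
spatial_loss = SpatialConsistencyLoss()
original = tf.random.uniform((2, 256, 256, 3))
enhanced = tf.random.uniform((2, 256, 256, 3))
print(spatial_loss(original, enhanced).shape)  # a non-scalar, per-region map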
class ZeroDCE(keras.Model): def __init__(self, **kwargs): super(ZeroDCE, self).__init__(**kwargs) self.dce_model = build_dce_net() def compile(self, learning_rate, **kwargs): super(ZeroDCE, self).compile(**kwargs) self.optimizer = keras.optimizers.Adam(learning_rate=learning_rate) self.spatial_constancy_loss = SpatialConsistencyLoss(reduction=\"none\") def get_enhanced_image(self, data, output): r1 = output[:, :, :, :3] r2 = output[:, :, :, 3:6] r3 = output[:, :, :, 6:9] r4 = output[:, :, :, 9:12] r5 = output[:, :, :, 12:15] r6 = output[:, :, :, 15:18] r7 = output[:, :, :, 18:21] r8 = output[:, :, :, 21:24] x = data + r1 * (tf.square(data) - data) x = x + r2 * (tf.square(x) - x) x = x + r3 * (tf.square(x) - x) enhanced_image = x + r4 * (tf.square(x) - x) x = enhanced_image + r5 * (tf.square(enhanced_image) - enhanced_image) x = x + r6 * (tf.square(x) - x) x = x + r7 * (tf.square(x) - x) enhanced_image = x + r8 * (tf.square(x) - x) return enhanced_image def call(self, data): dce_net_output = self.dce_model(data) return self.get_enhanced_image(data, dce_net_output) def compute_losses(self, data, output): enhanced_image = self.get_enhanced_image(data, output) loss_illumination = 200 * illumination_smoothness_loss(output) loss_spatial_constancy = tf.reduce_mean( self.spatial_constancy_loss(enhanced_image, data) ) loss_color_constancy = 5 * tf.reduce_mean(color_constancy_loss(enhanced_image)) loss_exposure = 10 * tf.reduce_mean(exposure_loss(enhanced_image)) total_loss = ( loss_illumination + loss_spatial_constancy + loss_color_constancy + loss_exposure ) return { \"total_loss\": total_loss, \"illumination_smoothness_loss\": loss_illumination, \"spatial_constancy_loss\": loss_spatial_constancy, \"color_constancy_loss\": loss_color_constancy, \"exposure_loss\": loss_exposure, } def train_step(self, data): with tf.GradientTape() as tape: output = self.dce_model(data) losses = self.compute_losses(data, output) gradients = tape.gradient( losses[\"total_loss\"], self.dce_model.trainable_weights ) self.optimizer.apply_gradients(zip(gradients, self.dce_model.trainable_weights)) return losses def test_step(self, data): output = self.dce_model(data) return self.compute_losses(data, output) def save_weights(self, filepath, overwrite=True, save_format=None, options=None): \"\"\"While saving the weights, we simply save the weights of the DCE-Net\"\"\" self.dce_model.save_weights( filepath, overwrite=overwrite, save_format=save_format, options=options ) def load_weights(self, filepath, by_name=False, skip_mismatch=False, options=None): \"\"\"While loading the weights, we simply load the weights of the DCE-Net\"\"\" self.dce_model.load_weights( filepath=filepath, by_name=by_name, skip_mismatch=skip_mismatch, options=options, ) Training zero_dce_model = ZeroDCE() zero_dce_model.compile(learning_rate=1e-4) history = zero_dce_model.fit(train_dataset, validation_data=val_dataset, epochs=100) def plot_result(item): plt.plot(history.history[item], label=item) plt.plot(history.history[\"val_\" + item], label=\"val_\" + item) plt.xlabel(\"Epochs\") plt.ylabel(item) plt.title(\"Train and Validation {} Over Epochs\".format(item), fontsize=14) plt.legend() plt.grid() plt.show() plot_result(\"total_loss\") plot_result(\"illumination_smoothness_loss\") plot_result(\"spatial_constancy_loss\") plot_result(\"color_constancy_loss\") plot_result(\"exposure_loss\") Epoch 1/100 25/25 [==============================] - 13s 271ms/step - total_loss: 4.8773 - illumination_smoothness_loss: 1.9298 - spatial_constancy_loss: 4.2610e-06 
- color_constancy_loss: 0.0027 - exposure_loss: 2.9448 - val_total_loss: 4.3163 - val_illumination_smoothness_loss: 1.3040 - val_spatial_constancy_loss: 1.4072e-06 - val_color_constancy_loss: 5.3277e-04 - val_exposure_loss: 3.0117 Epoch 2/100 25/25 [==============================] - 7s 270ms/step - total_loss: 4.1537 - illumination_smoothness_loss: 1.2237 - spatial_constancy_loss: 6.5297e-06 - color_constancy_loss: 0.0027 - exposure_loss: 2.9273 - val_total_loss: 3.8239 - val_illumination_smoothness_loss: 0.8263 - val_spatial_constancy_loss: 1.3503e-05 - val_color_constancy_loss: 4.8064e-04 - val_exposure_loss: 2.9971 Epoch 3/100 25/25 [==============================] - 7s 270ms/step - total_loss: 3.7458 - illumination_smoothness_loss: 0.8320 - spatial_constancy_loss: 2.9476e-05 - color_constancy_loss: 0.0028 - exposure_loss: 2.9110 - val_total_loss: 3.5389 - val_illumination_smoothness_loss: 0.5565 - val_spatial_constancy_loss: 4.3614e-05 - val_color_constancy_loss: 4.4507e-04 - val_exposure_loss: 2.9818 Epoch 4/100 25/25 [==============================] - 7s 271ms/step - total_loss: 3.4913 - illumination_smoothness_loss: 0.5945 - spatial_constancy_loss: 7.2733e-05 - color_constancy_loss: 0.0029 - exposure_loss: 2.8939 - val_total_loss: 3.3690 - val_illumination_smoothness_loss: 0.4014 - val_spatial_constancy_loss: 8.7945e-05 - val_color_constancy_loss: 4.3541e-04 - val_exposure_loss: 2.9671 Epoch 5/100 25/25 [==============================] - 7s 271ms/step - total_loss: 3.3210 - illumination_smoothness_loss: 0.4399 - spatial_constancy_loss: 1.2652e-04 - color_constancy_loss: 0.0030 - exposure_loss: 2.8781 - val_total_loss: 3.2557 - val_illumination_smoothness_loss: 0.3019 - val_spatial_constancy_loss: 1.3960e-04 - val_color_constancy_loss: 4.4128e-04 - val_exposure_loss: 2.9533 Epoch 6/100 25/25 [==============================] - 7s 272ms/step - total_loss: 3.1971 - illumination_smoothness_loss: 0.3310 - spatial_constancy_loss: 1.8674e-04 - color_constancy_loss: 0.0031 - exposure_loss: 2.8628 - val_total_loss: 3.1741 - val_illumination_smoothness_loss: 0.2338 - val_spatial_constancy_loss: 1.9747e-04 - val_color_constancy_loss: 4.5618e-04 - val_exposure_loss: 2.9397 Epoch 7/100 25/25 [==============================] - 7s 263ms/step - total_loss: 3.1008 - illumination_smoothness_loss: 0.2506 - spatial_constancy_loss: 2.5713e-04 - color_constancy_loss: 0.0032 - exposure_loss: 2.8468 - val_total_loss: 3.1062 - val_illumination_smoothness_loss: 0.1804 - val_spatial_constancy_loss: 2.6610e-04 - val_color_constancy_loss: 4.7632e-04 - val_exposure_loss: 2.9251 Epoch 8/100 25/25 [==============================] - 7s 272ms/step - total_loss: 3.0244 - illumination_smoothness_loss: 0.1915 - spatial_constancy_loss: 3.4287e-04 - color_constancy_loss: 0.0033 - exposure_loss: 2.8293 - val_total_loss: 3.0512 - val_illumination_smoothness_loss: 0.1415 - val_spatial_constancy_loss: 3.5449e-04 - val_color_constancy_loss: 5.0079e-04 - val_exposure_loss: 2.9088 Epoch 9/100 25/25 [==============================] - 7s 272ms/step - total_loss: 2.9666 - illumination_smoothness_loss: 0.1531 - spatial_constancy_loss: 4.5557e-04 - color_constancy_loss: 0.0035 - exposure_loss: 2.8096 - val_total_loss: 3.0084 - val_illumination_smoothness_loss: 0.1172 - val_spatial_constancy_loss: 4.7605e-04 - val_color_constancy_loss: 5.3119e-04 - val_exposure_loss: 2.8902 Epoch 10/100 25/25 [==============================] - 7s 263ms/step - total_loss: 2.9216 - illumination_smoothness_loss: 0.1294 - spatial_constancy_loss: 
6.0396e-04 - color_constancy_loss: 0.0037 - exposure_loss: 2.7879 - val_total_loss: 2.9737 - val_illumination_smoothness_loss: 0.1028 - val_spatial_constancy_loss: 6.3615e-04 - val_color_constancy_loss: 5.6798e-04 - val_exposure_loss: 2.8697 Epoch 11/100 25/25 [==============================] - 7s 264ms/step - total_loss: 2.8823 - illumination_smoothness_loss: 0.1141 - spatial_constancy_loss: 8.0172e-04 - color_constancy_loss: 0.0039 - exposure_loss: 2.7635 - val_total_loss: 2.9422 - val_illumination_smoothness_loss: 0.0951 - val_spatial_constancy_loss: 8.5813e-04 - val_color_constancy_loss: 6.1538e-04 - val_exposure_loss: 2.8456 Epoch 12/100 25/25 [==============================] - 7s 273ms/step - total_loss: 2.8443 - illumination_smoothness_loss: 0.1049 - spatial_constancy_loss: 0.0011 - color_constancy_loss: 0.0043 - exposure_loss: 2.7341 - val_total_loss: 2.9096 - val_illumination_smoothness_loss: 0.0936 - val_spatial_constancy_loss: 0.0012 - val_color_constancy_loss: 6.7707e-04 - val_exposure_loss: 2.8142 Epoch 13/100 25/25 [==============================] - 7s 274ms/step - total_loss: 2.7997 - illumination_smoothness_loss: 0.1031 - spatial_constancy_loss: 0.0016 - color_constancy_loss: 0.0047 - exposure_loss: 2.6903 - val_total_loss: 2.8666 - val_illumination_smoothness_loss: 0.1034 - val_spatial_constancy_loss: 0.0019 - val_color_constancy_loss: 8.0413e-04 - val_exposure_loss: 2.7604 Epoch 14/100 25/25 [==============================] - 7s 275ms/step - total_loss: 2.7249 - illumination_smoothness_loss: 0.1149 - spatial_constancy_loss: 0.0030 - color_constancy_loss: 0.0057 - exposure_loss: 2.6013 - val_total_loss: 2.7764 - val_illumination_smoothness_loss: 0.1291 - val_spatial_constancy_loss: 0.0042 - val_color_constancy_loss: 0.0011 - val_exposure_loss: 2.6419 Epoch 15/100 25/25 [==============================] - 7s 265ms/step - total_loss: 2.5184 - illumination_smoothness_loss: 0.1584 - spatial_constancy_loss: 0.0103 - color_constancy_loss: 0.0093 - exposure_loss: 2.3403 - val_total_loss: 2.4698 - val_illumination_smoothness_loss: 0.1949 - val_spatial_constancy_loss: 0.0194 - val_color_constancy_loss: 0.0031 - val_exposure_loss: 2.2524 Epoch 16/100 25/25 [==============================] - 7s 275ms/step - total_loss: 1.8216 - illumination_smoothness_loss: 0.2401 - spatial_constancy_loss: 0.0934 - color_constancy_loss: 0.0348 - exposure_loss: 1.4532 - val_total_loss: 1.6855 - val_illumination_smoothness_loss: 0.2599 - val_spatial_constancy_loss: 0.1776 - val_color_constancy_loss: 0.0229 - val_exposure_loss: 1.2250 Epoch 17/100 25/25 [==============================] - 7s 267ms/step - total_loss: 1.3387 - illumination_smoothness_loss: 0.2350 - spatial_constancy_loss: 0.2752 - color_constancy_loss: 0.0814 - exposure_loss: 0.7471 - val_total_loss: 1.5451 - val_illumination_smoothness_loss: 0.1862 - val_spatial_constancy_loss: 0.2320 - val_color_constancy_loss: 0.0331 - val_exposure_loss: 1.0938 Epoch 18/100 25/25 [==============================] - 7s 267ms/step - total_loss: 1.2646 - illumination_smoothness_loss: 0.1724 - spatial_constancy_loss: 0.2605 - color_constancy_loss: 0.0720 - exposure_loss: 0.7597 - val_total_loss: 1.5153 - val_illumination_smoothness_loss: 0.1533 - val_spatial_constancy_loss: 0.2295 - val_color_constancy_loss: 0.0343 - val_exposure_loss: 1.0981 Epoch 19/100 25/25 [==============================] - 7s 267ms/step - total_loss: 1.2439 - illumination_smoothness_loss: 0.1559 - spatial_constancy_loss: 0.2706 - color_constancy_loss: 0.0730 - exposure_loss: 0.7443 - 
val_total_loss: 1.4994 - val_illumination_smoothness_loss: 0.1423 - val_spatial_constancy_loss: 0.2359 - val_color_constancy_loss: 0.0363 - val_exposure_loss: 1.0850 Epoch 20/100 25/25 [==============================] - 7s 276ms/step - total_loss: 1.2311 - illumination_smoothness_loss: 0.1449 - spatial_constancy_loss: 0.2720 - color_constancy_loss: 0.0731 - exposure_loss: 0.7411 - val_total_loss: 1.4889 - val_illumination_smoothness_loss: 0.1299 - val_spatial_constancy_loss: 0.2331 - val_color_constancy_loss: 0.0358 - val_exposure_loss: 1.0901 Epoch 21/100 25/25 [==============================] - 7s 266ms/step - total_loss: 1.2262 - illumination_smoothness_loss: 0.1400 - spatial_constancy_loss: 0.2726 - color_constancy_loss: 0.0734 - exposure_loss: 0.7402 - val_total_loss: 1.4806 - val_illumination_smoothness_loss: 0.1233 - val_spatial_constancy_loss: 0.2356 - val_color_constancy_loss: 0.0371 - val_exposure_loss: 1.0847 Epoch 22/100 25/25 [==============================] - 7s 266ms/step - total_loss: 1.2202 - illumination_smoothness_loss: 0.1325 - spatial_constancy_loss: 0.2739 - color_constancy_loss: 0.0734 - exposure_loss: 0.7404 - val_total_loss: 1.4765 - val_illumination_smoothness_loss: 0.1231 - val_spatial_constancy_loss: 0.2408 - val_color_constancy_loss: 0.0381 - val_exposure_loss: 1.0745 Epoch 23/100 25/25 [==============================] - 7s 277ms/step - total_loss: 1.2122 - illumination_smoothness_loss: 0.1247 - spatial_constancy_loss: 0.2752 - color_constancy_loss: 0.0739 - exposure_loss: 0.7384 - val_total_loss: 1.4757 - val_illumination_smoothness_loss: 0.1253 - val_spatial_constancy_loss: 0.2453 - val_color_constancy_loss: 0.0393 - val_exposure_loss: 1.0658 Epoch 24/100 25/25 [==============================] - 7s 276ms/step - total_loss: 1.2015 - illumination_smoothness_loss: 0.1149 - spatial_constancy_loss: 0.2766 - color_constancy_loss: 0.0740 - exposure_loss: 0.7360 - val_total_loss: 1.4667 - val_illumination_smoothness_loss: 0.1168 - val_spatial_constancy_loss: 0.2456 - val_color_constancy_loss: 0.0390 - val_exposure_loss: 1.0652 Epoch 25/100 25/25 [==============================] - 7s 267ms/step - total_loss: 1.1940 - illumination_smoothness_loss: 0.1087 - spatial_constancy_loss: 0.2783 - color_constancy_loss: 0.0746 - exposure_loss: 0.7324 - val_total_loss: 1.4597 - val_illumination_smoothness_loss: 0.1109 - val_spatial_constancy_loss: 0.2476 - val_color_constancy_loss: 0.0399 - val_exposure_loss: 1.0613 Epoch 26/100 25/25 [==============================] - 7s 277ms/step - total_loss: 1.1878 - illumination_smoothness_loss: 0.1028 - spatial_constancy_loss: 0.2800 - color_constancy_loss: 0.0748 - exposure_loss: 0.7302 - val_total_loss: 1.4537 - val_illumination_smoothness_loss: 0.1054 - val_spatial_constancy_loss: 0.2479 - val_color_constancy_loss: 0.0398 - val_exposure_loss: 1.0606 Epoch 27/100 25/25 [==============================] - 7s 268ms/step - total_loss: 1.1827 - illumination_smoothness_loss: 0.0979 - spatial_constancy_loss: 0.2802 - color_constancy_loss: 0.0750 - exposure_loss: 0.7296 - val_total_loss: 1.4488 - val_illumination_smoothness_loss: 0.1015 - val_spatial_constancy_loss: 0.2496 - val_color_constancy_loss: 0.0404 - val_exposure_loss: 1.0573 Epoch 28/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.1774 - illumination_smoothness_loss: 0.0928 - spatial_constancy_loss: 0.2814 - color_constancy_loss: 0.0749 - exposure_loss: 0.7283 - val_total_loss: 1.4439 - val_illumination_smoothness_loss: 0.0968 - val_spatial_constancy_loss: 
0.2491 - val_color_constancy_loss: 0.0397 - val_exposure_loss: 1.0583 Epoch 29/100 25/25 [==============================] - 7s 277ms/step - total_loss: 1.1720 - illumination_smoothness_loss: 0.0882 - spatial_constancy_loss: 0.2821 - color_constancy_loss: 0.0754 - exposure_loss: 0.7264 - val_total_loss: 1.4372 - val_illumination_smoothness_loss: 0.0907 - val_spatial_constancy_loss: 0.2504 - val_color_constancy_loss: 0.0405 - val_exposure_loss: 1.0557 Epoch 30/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.1660 - illumination_smoothness_loss: 0.0825 - spatial_constancy_loss: 0.2841 - color_constancy_loss: 0.0757 - exposure_loss: 0.7238 - val_total_loss: 1.4307 - val_illumination_smoothness_loss: 0.0840 - val_spatial_constancy_loss: 0.2500 - val_color_constancy_loss: 0.0406 - val_exposure_loss: 1.0561 Epoch 31/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.1626 - illumination_smoothness_loss: 0.0790 - spatial_constancy_loss: 0.2834 - color_constancy_loss: 0.0753 - exposure_loss: 0.7248 - val_total_loss: 1.4285 - val_illumination_smoothness_loss: 0.0829 - val_spatial_constancy_loss: 0.2508 - val_color_constancy_loss: 0.0399 - val_exposure_loss: 1.0549 Epoch 32/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.1576 - illumination_smoothness_loss: 0.0744 - spatial_constancy_loss: 0.2851 - color_constancy_loss: 0.0759 - exposure_loss: 0.7222 - val_total_loss: 1.4213 - val_illumination_smoothness_loss: 0.0756 - val_spatial_constancy_loss: 0.2509 - val_color_constancy_loss: 0.0403 - val_exposure_loss: 1.0545 Epoch 33/100 25/25 [==============================] - 7s 268ms/step - total_loss: 1.1529 - illumination_smoothness_loss: 0.0702 - spatial_constancy_loss: 0.2856 - color_constancy_loss: 0.0757 - exposure_loss: 0.7215 - val_total_loss: 1.4164 - val_illumination_smoothness_loss: 0.0720 - val_spatial_constancy_loss: 0.2525 - val_color_constancy_loss: 0.0403 - val_exposure_loss: 1.0515 Epoch 34/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.1486 - illumination_smoothness_loss: 0.0659 - spatial_constancy_loss: 0.2871 - color_constancy_loss: 0.0762 - exposure_loss: 0.7195 - val_total_loss: 1.4120 - val_illumination_smoothness_loss: 0.0675 - val_spatial_constancy_loss: 0.2528 - val_color_constancy_loss: 0.0410 - val_exposure_loss: 1.0507 Epoch 35/100 25/25 [==============================] - 7s 268ms/step - total_loss: 1.1439 - illumination_smoothness_loss: 0.0617 - spatial_constancy_loss: 0.2876 - color_constancy_loss: 0.0761 - exposure_loss: 0.7184 - val_total_loss: 1.4064 - val_illumination_smoothness_loss: 0.0628 - val_spatial_constancy_loss: 0.2538 - val_color_constancy_loss: 0.0408 - val_exposure_loss: 1.0490 Epoch 36/100 25/25 [==============================] - 7s 279ms/step - total_loss: 1.1393 - illumination_smoothness_loss: 0.0575 - spatial_constancy_loss: 0.2891 - color_constancy_loss: 0.0766 - exposure_loss: 0.7161 - val_total_loss: 1.4016 - val_illumination_smoothness_loss: 0.0574 - val_spatial_constancy_loss: 0.2529 - val_color_constancy_loss: 0.0408 - val_exposure_loss: 1.0505 Epoch 37/100 25/25 [==============================] - 7s 270ms/step - total_loss: 1.1360 - illumination_smoothness_loss: 0.0539 - spatial_constancy_loss: 0.2891 - color_constancy_loss: 0.0763 - exposure_loss: 0.7166 - val_total_loss: 1.3975 - val_illumination_smoothness_loss: 0.0545 - val_spatial_constancy_loss: 0.2547 - val_color_constancy_loss: 0.0410 - val_exposure_loss: 1.0473 Epoch 38/100 25/25 
[==============================] - 7s 279ms/step - total_loss: 1.1327 - illumination_smoothness_loss: 0.0512 - spatial_constancy_loss: 0.2907 - color_constancy_loss: 0.0770 - exposure_loss: 0.7138 - val_total_loss: 1.3946 - val_illumination_smoothness_loss: 0.0515 - val_spatial_constancy_loss: 0.2546 - val_color_constancy_loss: 0.0414 - val_exposure_loss: 1.0471 Epoch 39/100 25/25 [==============================] - 7s 279ms/step - total_loss: 1.1283 - illumination_smoothness_loss: 0.0465 - spatial_constancy_loss: 0.2916 - color_constancy_loss: 0.0768 - exposure_loss: 0.7133 - val_total_loss: 1.3906 - val_illumination_smoothness_loss: 0.0473 - val_spatial_constancy_loss: 0.2538 - val_color_constancy_loss: 0.0411 - val_exposure_loss: 1.0485 Epoch 40/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.1257 - illumination_smoothness_loss: 0.0441 - spatial_constancy_loss: 0.2907 - color_constancy_loss: 0.0768 - exposure_loss: 0.7141 - val_total_loss: 1.3889 - val_illumination_smoothness_loss: 0.0477 - val_spatial_constancy_loss: 0.2577 - val_color_constancy_loss: 0.0419 - val_exposure_loss: 1.0416 Epoch 41/100 25/25 [==============================] - 7s 271ms/step - total_loss: 1.1225 - illumination_smoothness_loss: 0.0412 - spatial_constancy_loss: 0.2928 - color_constancy_loss: 0.0772 - exposure_loss: 0.7114 - val_total_loss: 1.3848 - val_illumination_smoothness_loss: 0.0433 - val_spatial_constancy_loss: 0.2569 - val_color_constancy_loss: 0.0417 - val_exposure_loss: 1.0428 Epoch 42/100 25/25 [==============================] - 7s 270ms/step - total_loss: 1.1202 - illumination_smoothness_loss: 0.0391 - spatial_constancy_loss: 0.2929 - color_constancy_loss: 0.0771 - exposure_loss: 0.7110 - val_total_loss: 1.3831 - val_illumination_smoothness_loss: 0.0425 - val_spatial_constancy_loss: 0.2583 - val_color_constancy_loss: 0.0420 - val_exposure_loss: 1.0403 Epoch 43/100 25/25 [==============================] - 7s 270ms/step - total_loss: 1.1177 - illumination_smoothness_loss: 0.0365 - spatial_constancy_loss: 0.2932 - color_constancy_loss: 0.0772 - exposure_loss: 0.7107 - val_total_loss: 1.3784 - val_illumination_smoothness_loss: 0.0376 - val_spatial_constancy_loss: 0.2578 - val_color_constancy_loss: 0.0418 - val_exposure_loss: 1.0412 Epoch 44/100 25/25 [==============================] - 7s 279ms/step - total_loss: 1.1155 - illumination_smoothness_loss: 0.0349 - spatial_constancy_loss: 0.2953 - color_constancy_loss: 0.0777 - exposure_loss: 0.7077 - val_total_loss: 1.3767 - val_illumination_smoothness_loss: 0.0341 - val_spatial_constancy_loss: 0.2545 - val_color_constancy_loss: 0.0413 - val_exposure_loss: 1.0467 Epoch 45/100 25/25 [==============================] - 7s 279ms/step - total_loss: 1.1133 - illumination_smoothness_loss: 0.0321 - spatial_constancy_loss: 0.2931 - color_constancy_loss: 0.0770 - exposure_loss: 0.7110 - val_total_loss: 1.3755 - val_illumination_smoothness_loss: 0.0353 - val_spatial_constancy_loss: 0.2590 - val_color_constancy_loss: 0.0424 - val_exposure_loss: 1.0387 Epoch 46/100 25/25 [==============================] - 7s 280ms/step - total_loss: 1.1112 - illumination_smoothness_loss: 0.0304 - spatial_constancy_loss: 0.2952 - color_constancy_loss: 0.0776 - exposure_loss: 0.7080 - val_total_loss: 1.3728 - val_illumination_smoothness_loss: 0.0328 - val_spatial_constancy_loss: 0.2591 - val_color_constancy_loss: 0.0424 - val_exposure_loss: 1.0385 Epoch 47/100 25/25 [==============================] - 7s 279ms/step - total_loss: 1.1094 - 
illumination_smoothness_loss: 0.0287 - spatial_constancy_loss: 0.2955 - color_constancy_loss: 0.0775 - exposure_loss: 0.7076 - val_total_loss: 1.3720 - val_illumination_smoothness_loss: 0.0329 - val_spatial_constancy_loss: 0.2605 - val_color_constancy_loss: 0.0425 - val_exposure_loss: 1.0361 Epoch 48/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.1079 - illumination_smoothness_loss: 0.0276 - spatial_constancy_loss: 0.2955 - color_constancy_loss: 0.0777 - exposure_loss: 0.7072 - val_total_loss: 1.3707 - val_illumination_smoothness_loss: 0.0316 - val_spatial_constancy_loss: 0.2606 - val_color_constancy_loss: 0.0426 - val_exposure_loss: 1.0359 Epoch 49/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.1056 - illumination_smoothness_loss: 0.0252 - spatial_constancy_loss: 0.2967 - color_constancy_loss: 0.0777 - exposure_loss: 0.7061 - val_total_loss: 1.3672 - val_illumination_smoothness_loss: 0.0277 - val_spatial_constancy_loss: 0.2597 - val_color_constancy_loss: 0.0426 - val_exposure_loss: 1.0372 Epoch 50/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.1047 - illumination_smoothness_loss: 0.0243 - spatial_constancy_loss: 0.2962 - color_constancy_loss: 0.0776 - exposure_loss: 0.7066 - val_total_loss: 1.3653 - val_illumination_smoothness_loss: 0.0256 - val_spatial_constancy_loss: 0.2590 - val_color_constancy_loss: 0.0423 - val_exposure_loss: 1.0383 Epoch 51/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.1038 - illumination_smoothness_loss: 0.0237 - spatial_constancy_loss: 0.2968 - color_constancy_loss: 0.0778 - exposure_loss: 0.7054 - val_total_loss: 1.3657 - val_illumination_smoothness_loss: 0.0273 - val_spatial_constancy_loss: 0.2617 - val_color_constancy_loss: 0.0431 - val_exposure_loss: 1.0335 Epoch 52/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.1020 - illumination_smoothness_loss: 0.0220 - spatial_constancy_loss: 0.2979 - color_constancy_loss: 0.0779 - exposure_loss: 0.7042 - val_total_loss: 1.3635 - val_illumination_smoothness_loss: 0.0234 - val_spatial_constancy_loss: 0.2579 - val_color_constancy_loss: 0.0422 - val_exposure_loss: 1.0400 Epoch 53/100 25/25 [==============================] - 7s 270ms/step - total_loss: 1.1012 - illumination_smoothness_loss: 0.0208 - spatial_constancy_loss: 0.2967 - color_constancy_loss: 0.0775 - exposure_loss: 0.7064 - val_total_loss: 1.3636 - val_illumination_smoothness_loss: 0.0250 - val_spatial_constancy_loss: 0.2607 - val_color_constancy_loss: 0.0428 - val_exposure_loss: 1.0352 Epoch 54/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.1002 - illumination_smoothness_loss: 0.0205 - spatial_constancy_loss: 0.2970 - color_constancy_loss: 0.0777 - exposure_loss: 0.7049 - val_total_loss: 1.3615 - val_illumination_smoothness_loss: 0.0233 - val_spatial_constancy_loss: 0.2611 - val_color_constancy_loss: 0.0427 - val_exposure_loss: 1.0345 Epoch 55/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0989 - illumination_smoothness_loss: 0.0193 - spatial_constancy_loss: 0.2985 - color_constancy_loss: 0.0780 - exposure_loss: 0.7032 - val_total_loss: 1.3608 - val_illumination_smoothness_loss: 0.0225 - val_spatial_constancy_loss: 0.2609 - val_color_constancy_loss: 0.0428 - val_exposure_loss: 1.0346 Epoch 56/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0986 - illumination_smoothness_loss: 0.0190 - spatial_constancy_loss: 0.2971 - color_constancy_loss: 
0.0777 - exposure_loss: 0.7048 - val_total_loss: 1.3615 - val_illumination_smoothness_loss: 0.0238 - val_spatial_constancy_loss: 0.2621 - val_color_constancy_loss: 0.0430 - val_exposure_loss: 1.0327 Epoch 57/100 25/25 [==============================] - 7s 279ms/step - total_loss: 1.0977 - illumination_smoothness_loss: 0.0182 - spatial_constancy_loss: 0.2987 - color_constancy_loss: 0.0780 - exposure_loss: 0.7028 - val_total_loss: 1.3601 - val_illumination_smoothness_loss: 0.0226 - val_spatial_constancy_loss: 0.2623 - val_color_constancy_loss: 0.0431 - val_exposure_loss: 1.0321 Epoch 58/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.0971 - illumination_smoothness_loss: 0.0174 - spatial_constancy_loss: 0.2979 - color_constancy_loss: 0.0778 - exposure_loss: 0.7040 - val_total_loss: 1.3596 - val_illumination_smoothness_loss: 0.0218 - val_spatial_constancy_loss: 0.2615 - val_color_constancy_loss: 0.0428 - val_exposure_loss: 1.0334 Epoch 59/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.0974 - illumination_smoothness_loss: 0.0180 - spatial_constancy_loss: 0.2985 - color_constancy_loss: 0.0780 - exposure_loss: 0.7029 - val_total_loss: 1.3611 - val_illumination_smoothness_loss: 0.0246 - val_spatial_constancy_loss: 0.2645 - val_color_constancy_loss: 0.0437 - val_exposure_loss: 1.0282 Epoch 60/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0956 - illumination_smoothness_loss: 0.0165 - spatial_constancy_loss: 0.2985 - color_constancy_loss: 0.0780 - exposure_loss: 0.7026 - val_total_loss: 1.3581 - val_illumination_smoothness_loss: 0.0209 - val_spatial_constancy_loss: 0.2623 - val_color_constancy_loss: 0.0430 - val_exposure_loss: 1.0320 Epoch 61/100 25/25 [==============================] - 7s 268ms/step - total_loss: 1.0953 - illumination_smoothness_loss: 0.0159 - spatial_constancy_loss: 0.2992 - color_constancy_loss: 0.0782 - exposure_loss: 0.7020 - val_total_loss: 1.3579 - val_illumination_smoothness_loss: 0.0213 - val_spatial_constancy_loss: 0.2637 - val_color_constancy_loss: 0.0436 - val_exposure_loss: 1.0293 Epoch 62/100 25/25 [==============================] - 7s 277ms/step - total_loss: 1.0945 - illumination_smoothness_loss: 0.0154 - spatial_constancy_loss: 0.2982 - color_constancy_loss: 0.0780 - exposure_loss: 0.7029 - val_total_loss: 1.3571 - val_illumination_smoothness_loss: 0.0199 - val_spatial_constancy_loss: 0.2620 - val_color_constancy_loss: 0.0429 - val_exposure_loss: 1.0323 Epoch 63/100 25/25 [==============================] - 7s 268ms/step - total_loss: 1.0948 - illumination_smoothness_loss: 0.0156 - spatial_constancy_loss: 0.2989 - color_constancy_loss: 0.0781 - exposure_loss: 0.7021 - val_total_loss: 1.3577 - val_illumination_smoothness_loss: 0.0215 - val_spatial_constancy_loss: 0.2641 - val_color_constancy_loss: 0.0435 - val_exposure_loss: 1.0287 Epoch 64/100 25/25 [==============================] - 7s 268ms/step - total_loss: 1.0935 - illumination_smoothness_loss: 0.0146 - spatial_constancy_loss: 0.2994 - color_constancy_loss: 0.0782 - exposure_loss: 0.7014 - val_total_loss: 1.3565 - val_illumination_smoothness_loss: 0.0200 - val_spatial_constancy_loss: 0.2632 - val_color_constancy_loss: 0.0433 - val_exposure_loss: 1.0300 Epoch 65/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0933 - illumination_smoothness_loss: 0.0144 - spatial_constancy_loss: 0.2992 - color_constancy_loss: 0.0781 - exposure_loss: 0.7015 - val_total_loss: 1.3570 - val_illumination_smoothness_loss: 0.0211 
- val_spatial_constancy_loss: 0.2648 - val_color_constancy_loss: 0.0439 - val_exposure_loss: 1.0273 Epoch 66/100 25/25 [==============================] - 7s 268ms/step - total_loss: 1.0927 - illumination_smoothness_loss: 0.0141 - spatial_constancy_loss: 0.2993 - color_constancy_loss: 0.0781 - exposure_loss: 0.7012 - val_total_loss: 1.3549 - val_illumination_smoothness_loss: 0.0179 - val_spatial_constancy_loss: 0.2618 - val_color_constancy_loss: 0.0429 - val_exposure_loss: 1.0323 Epoch 67/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0930 - illumination_smoothness_loss: 0.0141 - spatial_constancy_loss: 0.2992 - color_constancy_loss: 0.0781 - exposure_loss: 0.7016 - val_total_loss: 1.3565 - val_illumination_smoothness_loss: 0.0208 - val_spatial_constancy_loss: 0.2652 - val_color_constancy_loss: 0.0441 - val_exposure_loss: 1.0265 Epoch 68/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0919 - illumination_smoothness_loss: 0.0135 - spatial_constancy_loss: 0.3001 - color_constancy_loss: 0.0782 - exposure_loss: 0.7002 - val_total_loss: 1.3543 - val_illumination_smoothness_loss: 0.0173 - val_spatial_constancy_loss: 0.2617 - val_color_constancy_loss: 0.0429 - val_exposure_loss: 1.0323 Epoch 69/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0925 - illumination_smoothness_loss: 0.0136 - spatial_constancy_loss: 0.2989 - color_constancy_loss: 0.0780 - exposure_loss: 0.7019 - val_total_loss: 1.3562 - val_illumination_smoothness_loss: 0.0203 - val_spatial_constancy_loss: 0.2646 - val_color_constancy_loss: 0.0440 - val_exposure_loss: 1.0272 Epoch 70/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.0916 - illumination_smoothness_loss: 0.0130 - spatial_constancy_loss: 0.3005 - color_constancy_loss: 0.0782 - exposure_loss: 0.7000 - val_total_loss: 1.3530 - val_illumination_smoothness_loss: 0.0156 - val_spatial_constancy_loss: 0.2606 - val_color_constancy_loss: 0.0428 - val_exposure_loss: 1.0341 Epoch 71/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0918 - illumination_smoothness_loss: 0.0128 - spatial_constancy_loss: 0.2985 - color_constancy_loss: 0.0778 - exposure_loss: 0.7028 - val_total_loss: 1.3550 - val_illumination_smoothness_loss: 0.0194 - val_spatial_constancy_loss: 0.2645 - val_color_constancy_loss: 0.0439 - val_exposure_loss: 1.0273 Epoch 72/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.0911 - illumination_smoothness_loss: 0.0127 - spatial_constancy_loss: 0.3001 - color_constancy_loss: 0.0782 - exposure_loss: 0.7001 - val_total_loss: 1.3535 - val_illumination_smoothness_loss: 0.0175 - val_spatial_constancy_loss: 0.2638 - val_color_constancy_loss: 0.0438 - val_exposure_loss: 1.0284 Epoch 73/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0906 - illumination_smoothness_loss: 0.0121 - spatial_constancy_loss: 0.2998 - color_constancy_loss: 0.0780 - exposure_loss: 0.7006 - val_total_loss: 1.3521 - val_illumination_smoothness_loss: 0.0153 - val_spatial_constancy_loss: 0.2615 - val_color_constancy_loss: 0.0430 - val_exposure_loss: 1.0323 Epoch 74/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.0914 - illumination_smoothness_loss: 0.0127 - spatial_constancy_loss: 0.2993 - color_constancy_loss: 0.0780 - exposure_loss: 0.7014 - val_total_loss: 1.3547 - val_illumination_smoothness_loss: 0.0189 - val_spatial_constancy_loss: 0.2642 - val_color_constancy_loss: 0.0441 - val_exposure_loss: 
1.0275 Epoch 75/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0908 - illumination_smoothness_loss: 0.0125 - spatial_constancy_loss: 0.2994 - color_constancy_loss: 0.0781 - exposure_loss: 0.7008 - val_total_loss: 1.3533 - val_illumination_smoothness_loss: 0.0174 - val_spatial_constancy_loss: 0.2636 - val_color_constancy_loss: 0.0436 - val_exposure_loss: 1.0286 Epoch 76/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.0909 - illumination_smoothness_loss: 0.0126 - spatial_constancy_loss: 0.2998 - color_constancy_loss: 0.0782 - exposure_loss: 0.7004 - val_total_loss: 1.3544 - val_illumination_smoothness_loss: 0.0194 - val_spatial_constancy_loss: 0.2655 - val_color_constancy_loss: 0.0442 - val_exposure_loss: 1.0253 Epoch 77/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0897 - illumination_smoothness_loss: 0.0116 - spatial_constancy_loss: 0.3002 - color_constancy_loss: 0.0783 - exposure_loss: 0.6996 - val_total_loss: 1.3516 - val_illumination_smoothness_loss: 0.0159 - val_spatial_constancy_loss: 0.2635 - val_color_constancy_loss: 0.0436 - val_exposure_loss: 1.0286 Epoch 78/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.0900 - illumination_smoothness_loss: 0.0120 - spatial_constancy_loss: 0.2998 - color_constancy_loss: 0.0781 - exposure_loss: 0.7001 - val_total_loss: 1.3528 - val_illumination_smoothness_loss: 0.0174 - val_spatial_constancy_loss: 0.2641 - val_color_constancy_loss: 0.0437 - val_exposure_loss: 1.0277 Epoch 79/100 25/25 [==============================] - 7s 279ms/step - total_loss: 1.0904 - illumination_smoothness_loss: 0.0122 - spatial_constancy_loss: 0.2999 - color_constancy_loss: 0.0782 - exposure_loss: 0.7001 - val_total_loss: 1.3528 - val_illumination_smoothness_loss: 0.0178 - val_spatial_constancy_loss: 0.2647 - val_color_constancy_loss: 0.0439 - val_exposure_loss: 1.0264 Epoch 80/100 25/25 [==============================] - 7s 279ms/step - total_loss: 1.0895 - illumination_smoothness_loss: 0.0114 - spatial_constancy_loss: 0.2995 - color_constancy_loss: 0.0782 - exposure_loss: 0.7003 - val_total_loss: 1.3520 - val_illumination_smoothness_loss: 0.0168 - val_spatial_constancy_loss: 0.2643 - val_color_constancy_loss: 0.0438 - val_exposure_loss: 1.0270 Epoch 81/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.0895 - illumination_smoothness_loss: 0.0116 - spatial_constancy_loss: 0.3002 - color_constancy_loss: 0.0783 - exposure_loss: 0.6995 - val_total_loss: 1.3520 - val_illumination_smoothness_loss: 0.0170 - val_spatial_constancy_loss: 0.2645 - val_color_constancy_loss: 0.0439 - val_exposure_loss: 1.0267 Epoch 82/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0898 - illumination_smoothness_loss: 0.0116 - spatial_constancy_loss: 0.3001 - color_constancy_loss: 0.0782 - exposure_loss: 0.6999 - val_total_loss: 1.3532 - val_illumination_smoothness_loss: 0.0185 - val_spatial_constancy_loss: 0.2655 - val_color_constancy_loss: 0.0443 - val_exposure_loss: 1.0249 Epoch 83/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.0888 - illumination_smoothness_loss: 0.0112 - spatial_constancy_loss: 0.3002 - color_constancy_loss: 0.0782 - exposure_loss: 0.6992 - val_total_loss: 1.3517 - val_illumination_smoothness_loss: 0.0166 - val_spatial_constancy_loss: 0.2642 - val_color_constancy_loss: 0.0438 - val_exposure_loss: 1.0271 Epoch 84/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0887 - 
illumination_smoothness_loss: 0.0106 - spatial_constancy_loss: 0.3004 - color_constancy_loss: 0.0781 - exposure_loss: 0.6996 - val_total_loss: 1.3500 - val_illumination_smoothness_loss: 0.0148 - val_spatial_constancy_loss: 0.2639 - val_color_constancy_loss: 0.0439 - val_exposure_loss: 1.0275 Epoch 85/100 25/25 [==============================] - 7s 268ms/step - total_loss: 1.0886 - illumination_smoothness_loss: 0.0110 - spatial_constancy_loss: 0.3000 - color_constancy_loss: 0.0781 - exposure_loss: 0.6994 - val_total_loss: 1.3511 - val_illumination_smoothness_loss: 0.0163 - val_spatial_constancy_loss: 0.2644 - val_color_constancy_loss: 0.0438 - val_exposure_loss: 1.0266 Epoch 86/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0889 - illumination_smoothness_loss: 0.0110 - spatial_constancy_loss: 0.3004 - color_constancy_loss: 0.0782 - exposure_loss: 0.6993 - val_total_loss: 1.3513 - val_illumination_smoothness_loss: 0.0166 - val_spatial_constancy_loss: 0.2649 - val_color_constancy_loss: 0.0442 - val_exposure_loss: 1.0257 Epoch 87/100 25/25 [==============================] - 7s 269ms/step - total_loss: 1.0885 - illumination_smoothness_loss: 0.0111 - spatial_constancy_loss: 0.3001 - color_constancy_loss: 0.0781 - exposure_loss: 0.6992 - val_total_loss: 1.3504 - val_illumination_smoothness_loss: 0.0154 - val_spatial_constancy_loss: 0.2639 - val_color_constancy_loss: 0.0437 - val_exposure_loss: 1.0274 Epoch 88/100 25/25 [==============================] - 7s 268ms/step - total_loss: 1.0889 - illumination_smoothness_loss: 0.0111 - spatial_constancy_loss: 0.3000 - color_constancy_loss: 0.0781 - exposure_loss: 0.6997 - val_total_loss: 1.3512 - val_illumination_smoothness_loss: 0.0165 - val_spatial_constancy_loss: 0.2650 - val_color_constancy_loss: 0.0443 - val_exposure_loss: 1.0254 Epoch 89/100 25/25 [==============================] - 7s 268ms/step - total_loss: 1.0883 - illumination_smoothness_loss: 0.0109 - spatial_constancy_loss: 0.3003 - color_constancy_loss: 0.0781 - exposure_loss: 0.6990 - val_total_loss: 1.3506 - val_illumination_smoothness_loss: 0.0160 - val_spatial_constancy_loss: 0.2645 - val_color_constancy_loss: 0.0439 - val_exposure_loss: 1.0262 Epoch 90/100 25/25 [==============================] - 7s 268ms/step - total_loss: 1.0883 - illumination_smoothness_loss: 0.0106 - spatial_constancy_loss: 0.3003 - color_constancy_loss: 0.0781 - exposure_loss: 0.6993 - val_total_loss: 1.3498 - val_illumination_smoothness_loss: 0.0149 - val_spatial_constancy_loss: 0.2640 - val_color_constancy_loss: 0.0440 - val_exposure_loss: 1.0270 Epoch 91/100 25/25 [==============================] - 7s 277ms/step - total_loss: 1.0883 - illumination_smoothness_loss: 0.0107 - spatial_constancy_loss: 0.3000 - color_constancy_loss: 0.0780 - exposure_loss: 0.6995 - val_total_loss: 1.3492 - val_illumination_smoothness_loss: 0.0146 - val_spatial_constancy_loss: 0.2644 - val_color_constancy_loss: 0.0440 - val_exposure_loss: 1.0262 Epoch 92/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0884 - illumination_smoothness_loss: 0.0108 - spatial_constancy_loss: 0.3007 - color_constancy_loss: 0.0782 - exposure_loss: 0.6987 - val_total_loss: 1.3496 - val_illumination_smoothness_loss: 0.0148 - val_spatial_constancy_loss: 0.2642 - val_color_constancy_loss: 0.0441 - val_exposure_loss: 1.0265 Epoch 93/100 25/25 [==============================] - 7s 277ms/step - total_loss: 1.0878 - illumination_smoothness_loss: 0.0105 - spatial_constancy_loss: 0.2994 - color_constancy_loss: 
0.0780 - exposure_loss: 0.6999 - val_total_loss: 1.3497 - val_illumination_smoothness_loss: 0.0150 - val_spatial_constancy_loss: 0.2643 - val_color_constancy_loss: 0.0440 - val_exposure_loss: 1.0263 Epoch 94/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0876 - illumination_smoothness_loss: 0.0098 - spatial_constancy_loss: 0.3005 - color_constancy_loss: 0.0781 - exposure_loss: 0.6992 - val_total_loss: 1.3471 - val_illumination_smoothness_loss: 0.0120 - val_spatial_constancy_loss: 0.2633 - val_color_constancy_loss: 0.0439 - val_exposure_loss: 1.0279 Epoch 95/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0876 - illumination_smoothness_loss: 0.0103 - spatial_constancy_loss: 0.3002 - color_constancy_loss: 0.0782 - exposure_loss: 0.6989 - val_total_loss: 1.3493 - val_illumination_smoothness_loss: 0.0147 - val_spatial_constancy_loss: 0.2642 - val_color_constancy_loss: 0.0441 - val_exposure_loss: 1.0263 Epoch 96/100 25/25 [==============================] - 7s 277ms/step - total_loss: 1.0880 - illumination_smoothness_loss: 0.0105 - spatial_constancy_loss: 0.3001 - color_constancy_loss: 0.0781 - exposure_loss: 0.6994 - val_total_loss: 1.3485 - val_illumination_smoothness_loss: 0.0140 - val_spatial_constancy_loss: 0.2644 - val_color_constancy_loss: 0.0440 - val_exposure_loss: 1.0261 Epoch 97/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0878 - illumination_smoothness_loss: 0.0102 - spatial_constancy_loss: 0.3005 - color_constancy_loss: 0.0782 - exposure_loss: 0.6990 - val_total_loss: 1.3485 - val_illumination_smoothness_loss: 0.0140 - val_spatial_constancy_loss: 0.2645 - val_color_constancy_loss: 0.0443 - val_exposure_loss: 1.0257 Epoch 98/100 25/25 [==============================] - 7s 278ms/step - total_loss: 1.0875 - illumination_smoothness_loss: 0.0104 - spatial_constancy_loss: 0.3003 - color_constancy_loss: 0.0781 - exposure_loss: 0.6987 - val_total_loss: 1.3485 - val_illumination_smoothness_loss: 0.0140 - val_spatial_constancy_loss: 0.2641 - val_color_constancy_loss: 0.0440 - val_exposure_loss: 1.0264 Epoch 99/100 25/25 [==============================] - 7s 277ms/step - total_loss: 1.0879 - illumination_smoothness_loss: 0.0104 - spatial_constancy_loss: 0.3005 - color_constancy_loss: 0.0782 - exposure_loss: 0.6988 - val_total_loss: 1.3486 - val_illumination_smoothness_loss: 0.0140 - val_spatial_constancy_loss: 0.2642 - val_color_constancy_loss: 0.0443 - val_exposure_loss: 1.0260 Epoch 100/100 25/25 [==============================] - 7s 277ms/step - total_loss: 1.0873 - illumination_smoothness_loss: 0.0102 - spatial_constancy_loss: 0.3001 - color_constancy_loss: 0.0780 - exposure_loss: 0.6991 - val_total_loss: 1.3481 - val_illumination_smoothness_loss: 0.0134 - val_spatial_constancy_loss: 0.2635 - val_color_constancy_loss: 0.0439 - val_exposure_loss: 1.0273 [Plots: training vs. validation curves for total_loss, illumination_smoothness_loss, spatial_constancy_loss, color_constancy_loss, and exposure_loss.] Inference def plot_results(images, titles, figure_size=(12, 12)): fig = plt.figure(figsize=figure_size) for i in range(len(images)): fig.add_subplot(1, len(images), i + 1).set_title(titles[i]) _ = plt.imshow(images[i]) plt.axis(\"off\") plt.show() def infer(original_image): image = keras.preprocessing.image.img_to_array(original_image) image = image.astype(\"float32\") / 255.0 image = np.expand_dims(image, axis=0) output_image = zero_dce_model(image) output_image = tf.cast((output_image[0, :, :, :] * 255), dtype=np.uint8) output_image = Image.fromarray(output_image.numpy()) return output_image Inference on test images We compare the test
images from LOLDataset enhanced by the Zero-DCE model with images enhanced via the PIL.ImageOps.autocontrast() function. for val_image_file in test_low_light_images: original_image = Image.open(val_image_file) enhanced_image = infer(original_image) plot_results( [original_image, ImageOps.autocontrast(original_image), enhanced_image], [\"Original\", \"PIL Autocontrast\", \"Enhanced\"], (20, 12), ) [15 side-by-side image triplets: Original, PIL Autocontrast, Enhanced.] Generate text from Nietzsche's writings with a character-level LSTM. Character-level text generation with LSTM Introduction This example demonstrates how to use an LSTM model to generate text character-by-character. At least 20 epochs are required before the generated text starts sounding locally coherent. It is recommended to run this script on GPU, as recurrent networks are quite computationally intensive. If you try this script on new data, make sure your corpus has at least ~100k characters. ~1M is better. Setup from tensorflow import keras from tensorflow.keras import layers import numpy as np import random import io Prepare the data path = keras.utils.get_file( \"nietzsche.txt\", origin=\"https://s3.amazonaws.com/text-datasets/nietzsche.txt\" ) with io.open(path, encoding=\"utf-8\") as f: text = f.read().lower() text = text.replace(\"\n\", \" \") # We remove newline chars for nicer display print(\"Corpus length:\", len(text)) chars = sorted(list(set(text))) print(\"Total chars:\", len(chars)) char_indices = dict((c, i) for i, c in enumerate(chars)) indices_char = dict((i, c) for i, c in enumerate(chars)) # cut the text in semi-redundant sequences of maxlen characters maxlen = 40 step = 3 sentences = [] next_chars = [] for i in range(0, len(text) - maxlen, step): sentences.append(text[i : i + maxlen]) next_chars.append(text[i + maxlen]) print(\"Number of sequences:\", len(sentences)) x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool) y = np.zeros((len(sentences), len(chars)), dtype=bool) for i, sentence in enumerate(sentences): for t, char in enumerate(sentence): x[i, t, char_indices[char]] = 1 y[i, char_indices[next_chars[i]]] = 1 Corpus length: 600893 Total chars: 56 Number of sequences: 200285 Build the model: a single LSTM layer model = keras.Sequential( [ keras.Input(shape=(maxlen, len(chars))), layers.LSTM(128), layers.Dense(len(chars), activation=\"softmax\"), ] ) optimizer = keras.optimizers.RMSprop(learning_rate=0.01) model.compile(loss=\"categorical_crossentropy\", optimizer=optimizer) Prepare the text sampling function def sample(preds, temperature=1.0): # helper function to sample an index from a probability array preds = np.asarray(preds).astype(\"float64\") preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) Train the model epochs = 40 batch_size = 128 for epoch in range(epochs): model.fit(x, y, batch_size=batch_size, epochs=1) print() print(\"Generating text after epoch: %d\" % epoch) start_index = random.randint(0, len(text) - maxlen - 1) for diversity in [0.2, 0.5, 1.0, 1.2]: print(\"...Diversity:\", diversity) generated = \"\" sentence = text[start_index : start_index + maxlen] print('...Generating with seed: \"' + sentence + '\"') for i in range(400): x_pred = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(sentence): x_pred[0, t, char_indices[char]] = 1.0 preds = model.predict(x_pred, verbose=0)[0] next_index = sample(preds, diversity) next_char = indices_char[next_index] sentence =
sentence[1:] + next_char generated += next_char print(\"...Generated: \", generated) print() 1565/1565 [==============================] - 7s 4ms/step - loss: 1.9237 Generating text after epoch: 0 ...Diversity: 0.2 ...Generating with seed: \" calm, rational reflection. a church vib\" ...Generated: le and the sugress and the science the sore and and the sore and the such and that the prection and the soul of the sore and the some and the such and the some and the stranstifical the prection and the same to the strange and the stranstification of the some and the sore and the sore to the sould and and the consibely the same and the same the such and the some of the same and the some and and to ...Diversity: 0.5 ...Generating with seed: \" calm, rational reflection. a church vib\" ...Generated: tion and dererations and the prodited to ordingual common the expecial the problight knowledge and and the masters and with the for the sension the spirition the hass and be possing unceater of do extonstitions of ness the consiberent for the more more more and that the extrations and contral to the of the more and and more and the most precisely do of forther the supprable the point hecest of the ...Diversity: 1.0 ...Generating with seed: \" calm, rational reflection. a church vib\" ...Generated: ti, when an extrated and really ye; be atsessical right deally of the once very and man\" there than the own sorm and proartingishient supptishy and itsmed that word \"for monsouranbly asd for ensisiance, this par in ond consintions! ir : call and retrods them is to themstucies of every alortehic hand perony of regarding and beandly child tran be ed firerishe? as neigherness. oncishime--awfate and a ...Diversity: 1.2 ...Generating with seed: \" calm, rational reflection. a church vib\" ...Generated: tion. innot prede for \"prestan\"witimencesition=-s\"phines4 faro-revery insoiviept prictide and coverve als; and \"be mork un. of this ne. \"inthing is pribty require oo edical for mores recance mens, of there is nomuthomd more phile--and gred is not extre shan or the preectirabled reapever of enowe, sucpible--to bedical trreouk. it withoue from himselfin evols ot know on 'tronsly gidest behing ave e 1565/1565 [==============================] - 7s 4ms/step - loss: 1.5699 Generating text after epoch: 1 ...Diversity: 0.2 ...Generating with seed: \" church. so, too, it will not be admitte\" ...Generated: of the soul of the subrice the something and the same of the the strengtion of the subsing the strength, and as the superitional and into a something of the sense of the strange the sense of the the something of the subsimation of the same of the subsiciated and all the such a the strength. the such a the strange the some the strength, and the such a man the subsiciated to the such a something th ...Diversity: 0.5 ...Generating with seed: \" church. so, too, it will not be admitte\" ...Generated: of the self-reads become us it is conciritus of a strick under the formarily respect which a great man should of be contrady, all sense of the among of the interman some us to the experices in such a longing in his interprated to the unitions of the principoral the subrilation, the most philosopher to be proutiation of the concerned and to a not which errors of have a regation of the learness to ...Diversity: 1.0 ...Generating with seed: \" church. so, too, it will not be admitte\" ...Generated: d trasus the vering of the spirits as served, no laves which spiritus is heaktrd? 
he is those most my should and insidnanpences all didfect revelopication loutter morals of them. but no been belage that is discoving, morality, itself, med, the certainea: to tster that is this organtt: whatever ferress. in celplance--thus a he basful, streeds and it vering, that the might, then the con can mastry u ...Diversity: 1.2 ...Generating with seed: \" church. so, too, it will not be admitte\" ...Generated: r ging, leagns in this foot in philosoph, pressevcupaise -goad rewappoodwelaved, with religglcated and assivinger--flowark-remails with it have, bli, the hutele whicurarit, he rome, perelogy . rirpompnances! benawating refusacrounce, almost once with supchre droubt and allowings at noncieht lengless! a \"who i strriviging the, was nothing, a ot thingmanny yim xw\"-foot? \"he as -probention thus love 1565/1565 [==============================] - 7s 5ms/step - loss: 1.4793 Generating text after epoch: 2 ...Diversity: 0.2 ...Generating with seed: \"d within myself what it is, by what stan\" ...Generated: dary still the still as a still the could and and the still the still to the higher, and the themselves in the still the still to the still the still to the profound the most desires the still concerning and and the problem of the still the still the still the still the still the stric and the still most which the most the still profound the and the still the still the superioration of the stands ...Diversity: 0.5 ...Generating with seed: \"d within myself what it is, by what stan\" ...Generated: dal, and because the sates a something and it with the order to such a simple still be religion of such his soul of the concerness and long to desponsible still to man of our object baspess of the profound as a propess as a different and the still the striction and who se respect, and the schopenhauer perstical the higher completion of the still smeth and he self-resides, the remoran enough of the ...Diversity: 1.0 ...Generating with seed: \"d within myself what it is, by what stan\" ...Generated: terdun; the people has for something almo, in cimps of master things has even him tray as a goal in exore of magoty-chulty, the milssesishelf in comportude, that the nature of amble powerful, bettienness and greatimal dreative could anot a cruest also which can he them. unders or that marmulpanting of leadians always them? at the a fessiid of vicnour example alne, petcoss. 
Training log (epochs 3 to 34). Each epoch runs 1565 steps in roughly 7 s (about 5 ms/step) and reports its loss; after every epoch, sample text is generated from a 40-character seed taken from the source text at four diversity settings (0.2, 0.5, 1.0 and 1.2). The per-epoch losses are:

epoch  3: 1.4307   epoch  4: 1.3999   epoch  5: 1.3780   epoch  6: 1.3593
epoch  7: 1.3452   epoch  8: 1.3329   epoch  9: 1.3237   epoch 10: 1.3137
epoch 11: 1.3050   epoch 12: 1.2986   epoch 13: 1.2925   epoch 14: 1.2863
epoch 15: 1.2793   epoch 16: 1.2754   epoch 17: 1.2697   epoch 18: 1.2653
epoch 19: 1.2610   epoch 20: 1.2574   epoch 21: 1.2536   epoch 22: 1.2487
epoch 23: 1.2462   epoch 24: 1.2446   epoch 25: 1.2411   epoch 26: 1.2377
epoch 27: 1.2342   epoch 28: 1.2326   epoch 29: 1.2293   epoch 30: 1.2283
epoch 31: 1.2258   epoch 32: 1.2227   epoch 33: 1.2212   epoch 34: 1.2202

The loss falls steadily from 1.43 to 1.22 over these epochs, and the generated samples show the expected trade-off between the diversity settings. At diversity 0.2 the output is well-formed but extremely repetitive; after epoch 34 (seed "comfort to the sufferers, courage to the") it begins: "success of the sense of the such a person of the sense of the such as the form of the consider of the such and with the sense of the such a sense of the consider of the such ...". At diversity 1.2 the same epoch produces more varied text with largely invented words: "little unconscious piteso, as the sinvigs cannorour namle hearts, go out of out of pitises one let more byromfulness grow morally the ording the old designates." During the epoch-29 generation the log also shows "RuntimeWarning: divide by zero encountered in log", raised by the sampling step (see the note below).
been demand as possible itself self by wilf greess of .=--he first appear to befe; he be: alg-other were, however, thus understanding, and this vanit srefring has it could barbarieiest, nepiens, because of his word as being the will of fi 1565/1565 [==============================] - 7s 5ms/step - loss: 1.2171 Generating text after epoch: 35 ...Diversity: 0.2 ...Generating with seed: \"ngle personalities, hence builds upon th\" ...Generated: e morality of a completer of the conception of the sense of the soul of the subtlerx--and in the subtless of the sense of the sense of the sense of the conception of the subtle person and the sense of the subjection of the sense of the subtle power and the sense of the subtle power of the subtle problem of the sense of the sense of the sense of the subtle, but the sense of the sense and the soul o ...Diversity: 0.5 ...Generating with seed: \"ngle personalities, hence builds upon th\" ...Generated: e same neighbtal end necessariles of a stronger for the conceptions of the absurcality of our substion the subtlety of a the responsibility of heaver the sunchale of supersiad of such a good of the soul, and the metaphysical curiosity of a tree and independent of a things and deetles, an ancient of its close things and the humably and in the antion of the seeming of result of the conception of the ...Diversity: 1.0 ...Generating with seed: \"ngle personalities, hence builds upon th\" ...Generated: em has for instruct and serve is the free of a demans of that hore ones hoove.ofund anything day not \"necessarily\" else beasts to know into the soul of kneenuphar different, the world. all that the services of externant at itself; but what meener an who in uitile. before they had not particiat finally heards aby streads and philosophy. the undeed and coud nature. with the same result of untijwes o ...Diversity: 1.2 ...Generating with seed: \"ngle personalities, hence builds upon th\" ...Generated: e datry. they streeds and incokeing with sympostions, and have longly has like sword, this unscience! the world has it evaning ro, at reto\"us,\" theremather intoloner passible?--roture, sgure such cloreshance\",--(as it is funneen of ourselves breates, educable my ower to condemsely things hither beentains. sudh often-r-devolosis said we schooler time to be nadjerity. 
let us enourve loves euddwings 1565/1565 [==============================] - 7s 5ms/step - loss: 1.2146 Generating text after epoch: 36 ...Diversity: 0.2 ...Generating with seed: \" her sexual gratification serves as an a\" ...Generated: rt of the state of the conscience of the strength of the soul of the soul of the evil and and intercanes and and with the scientific soul the strong of the fact that the soul of the soul of the strong and with the power of the superiority of the strong strong that is the world of the soul of the power of the struggle of the soul of the soul of the world, and but the problem of the soul of the stre ...Diversity: 0.5 ...Generating with seed: \" her sexual gratification serves as an a\" ...Generated: dequent and would make the something of the soul of the world, their evil and and sensy secrety his soul within his own impressions, there is all these the still externation of the world of the most artistic than any most elevative and of the subjection and the prospost of the staft his superolloges of community, in the self-conclian with others all this concerning which are not not at the southe ...Diversity: 1.0 ...Generating with seed: \" her sexual gratification serves as an a\" ...Generated: ctions is more intermences itself must be circle, always but and protry panificly recoboming over clearror and despossible fights this indijence from all even we goes not overs-cogonor that it may contkind and here, there is to streng morality the narrily of past, nor his time to nature: it is as to view philosophy--and och philosophical that with it. limit high son. indread uttile advancaincolous ...Diversity: 1.2 ...Generating with seed: \" her sexual gratification serves as an a\" ...Generated: cal suhs. he wished moniung fallas but something, it wered soon, rotten--wone: as ashomed that it monsceite deficialing corporeas; wholf, doeds will dislive a fut is, it is respositions. is as possible and imply and mismboldarwing. 99 =mally individualed men in egritancy ruiscluty, book that a questionify folly painfully in to befpress of acts my philosophoke, and long of every anti-unswardy th 1565/1565 [==============================] - 8s 5ms/step - loss: 1.2140 Generating text after epoch: 37 ...Diversity: 0.2 ...Generating with seed: \"ere then the same as those of the spoken\" ...Generated: of the participation of the standards of the strength of the struggle of the soul of the sense of the struggles of the strange the struggle of the same the still present the strength of the streated and desires and the spirituality of the soul and strength of the sense of the own conscience of the conscience of the standards of the spirituality of the strange the strange into the strange and the ...Diversity: 0.5 ...Generating with seed: \"ere then the same as those of the spoken\" ...Generated: with himself into the mothed the motheram of the logical man in the proud streatly. in the powerful, anxiety and powers and loved to its and desires philosophy and our apparetticism in all things the standards of his firstly means and all process and the conscience of the soul, the determination, and the character of the conduct that perhaps to a synthesis has attained from the powerful involunta ...Diversity: 1.0 ...Generating with seed: \"ere then the same as those of the spoken\" ...Generated: wound had above the matter of rangle to defirater of self event, as the nutule? prease, bro\"-conscience.\"--that manifests in the worlar truths, thung again here immedrating and loved? is earthy? 
one luckbfarce, cevtsly backs, in some supermouather. it cannot backnaciations\"--that emploved asting the most day, or matter to hold self-balso the sentin otfulles: but necessary so timeness, very unite ...Diversity: 1.2 ...Generating with seed: \"ere then the same as those of the spoken\" ...Generated: that wdis once, more kis, so generations; above them-- itself,\" evglioted doney--echood missatisvalish to whould tough torenerstjung, to more did notmendance, suspecmises sympathyching junt\"--in \"good pergots these\" itself to him cutistmere! only \"epvess: \"know anjer of \"fe.a--a \"standargoj\"ing\" before totve exidarly overwad, morality--stapw\"ings\"efknowledge,\" ire for sometimes, soce-carificabl 1565/1565 [==============================] - 8s 5ms/step - loss: 1.2118 Generating text after epoch: 38 ...Diversity: 0.2 ...Generating with seed: \"he midday-friend,--no, do not ask me who\" ...Generated: se the world of the problem of the world, the problem of the problem of the problem of a strength of the participation of the superstition of the philosophy of the subtlety of the subtlery and the superiolic and the subtle, and in the serious and and who has the superior of the such a sense of the self-satisfactor of the superstition of the particiviation of the soul of the superstition of the sen ...Diversity: 0.5 ...Generating with seed: \"he midday-friend,--no, do not ask me who\" ...Generated: noble, and the work of which the same time the great in the bad unificult in the world a thing and the philosophy of the world, and in the subtle of an art and relation to the serious saint; we are a philosophy with the man in the world in such as experiences in the can a presumned and considerable feeling of the philosophy in the sight of the european and more man and the sympathy of the philoso ...Diversity: 1.0 ...Generating with seed: \"he midday-friend,--no, do not ask me who\" ...Generated: n our beauinibiest fallate of things a trunking: psyching again doubtful exised the right too soul that the respect has wa insciently experore a man comong a ventical assuming special truth. flamee. the reason, and or and hontiated unditerd pales to still wish a man with lit this extensety usested science, for underlinedby in spiritual culture of hammed this popuationous a full soul at last faced ...Diversity: 1.2 ...Generating with seed: \"he midday-friend,--no, do not ask me who\" ...Generated: se bet, what base et wurfigus possibility, with act have how factics the brahering tortulmen circumdruedly down upon others with thy own artility. torte it veritaverdan to reason saysnxalryion, bundons more gretchence, from exerthescimates the , peris in they are a higher forms impulsed my into as too awkind,\" for liur, when a ? 
.apobatersty, neither an image an inse possible, previded during th 1565/1565 [==============================] - 7s 5ms/step - loss: 1.2106 Generating text after epoch: 39 ...Diversity: 0.2 ...Generating with seed: \"spread over his music the twilight of et\" ...Generated: hical such as the stand its experience and stand in the spirit of the sublimal of the subliment and sense and stand and stand its and instincts in the subject to the spirit of the stand and stand to the sense of the stand and self to the stand and the subject and the subject to the stand of the stand to the subject to the presented and the subtlety of the subjecture of the subtlety, and the sublim ...Diversity: 0.5 ...Generating with seed: \"spread over his music the twilight of et\" ...Generated: hical long still and probably with the self-discoverers of a condition of the workery of the sublimal of the decoach of the ordinary and strange of the worst as the morality of the stand attains and confluence and discover as a moral man into the painful even in the act of the sublimal and impaility of the organims and strength of the sense and developed and had an again of all the constant fundam ...Diversity: 1.0 ...Generating with seed: \"spread over his music the twilight of et\" ...Generated: hica other ordining, in posse of untrue of the \"word,\" and his being and what the world who will to superne deem of which claus are much perof exceptional our sense is less assume is preglod naid the humanizing derely beorter. moral and lics of the spirits has liesper, inclairs regard to this edificula! known to the reychinges iss, which morality are distractes hesis and instinct: calminds and exa ...Diversity: 1.2 ...Generating with seed: \"spread over his music the twilight of et\" ...Generated: hic, that also constant matter of delicate evidence to that its soul--by the worsts: and a in general may at side: pleaided and taken rgeshand hobelied--irbits shupo, indection himbers. to seevary time, do runis. hit\"--at dekinged! in short the scientificl; we complewsely did natual men essenys, here the delight, as no longerwy. what mak i divine, which teachers love it, iillwy capacity are cluth Training a GAN conditioned on class labels to generate handwritten digits. Generative Adversarial Networks (GANs) let us generate novel image data, video data, or audio data from a random input. Typically, the random input is sampled from a normal distribution, before going through a series of transformations that turn it into something plausible (image, video, audio, etc.). However, a simple DCGAN doesn't let us control the appearance (e.g. class) of the samples we're generating. For instance, with a GAN that generates MNIST handwritten digits, a simple DCGAN wouldn't let us choose the class of digits we're generating. To be able to control what we generate, we need to condition the GAN output on a semantic input, such as the class of an image. In this example, we'll build a Conditional GAN that can generate MNIST handwritten digits conditioned on a given class. Such a model can have various useful applications: let's say you are dealing with an imbalanced image dataset, and you'd like to gather more examples for the skewed class to balance the dataset. Data collection can be a costly process on its own. You could instead train a Conditional GAN and use it to generate novel images for the class that needs balancing. 
Since the generator learns to associate the generated samples with the class labels, its representations can also be used for other downstream tasks. Following are the references used for developing this example: Conditional Generative Adversarial Nets Lecture on Conditional Generation from Coursera If you need a refresher on GANs, you can refer to the \"Generative adversarial networks\" section of this resource. This example requires TensorFlow 2.5 or higher, as well as TensorFlow Docs, which can be installed using the following command: !pip install -q git+https://github.com/tensorflow/docs Building wheel for tensorflow-docs (setup.py) ... [?25l[?25hdone Imports from tensorflow import keras from tensorflow.keras import layers from tensorflow_docs.vis import embed import matplotlib.pyplot as plt import tensorflow as tf import numpy as np import imageio Constants and hyperparameters batch_size = 64 num_channels = 1 num_classes = 10 image_size = 28 latent_dim = 128 Loading the MNIST dataset and preprocessing it # We'll use all the available examples from both the training and test # sets. (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() all_digits = np.concatenate([x_train, x_test]) all_labels = np.concatenate([y_train, y_test]) # Scale the pixel values to [0, 1] range, add a channel dimension to # the images, and one-hot encode the labels. all_digits = all_digits.astype(\"float32\") / 255.0 all_digits = np.reshape(all_digits, (-1, 28, 28, 1)) all_labels = keras.utils.to_categorical(all_labels, 10) # Create tf.data.Dataset. dataset = tf.data.Dataset.from_tensor_slices((all_digits, all_labels)) dataset = dataset.shuffle(buffer_size=1024).batch(batch_size) print(f\"Shape of training images: {all_digits.shape}\") print(f\"Shape of training labels: {all_labels.shape}\") Shape of training images: (70000, 28, 28, 1) Shape of training labels: (70000, 10) Calculating the number of input channel for the generator and discriminator In a regular (unconditional) GAN, we start by sampling noise (of some fixed dimension) from a normal distribution. In our case, we also need to account for the class labels. We will have to add the number of classes to the input channels of the generator (noise input) as well as the discriminator (generated image input). generator_in_channels = latent_dim + num_classes discriminator_in_channels = num_channels + num_classes print(generator_in_channels, discriminator_in_channels) 138 11 Creating the discriminator and generator The model definitions (discriminator, generator, and ConditionalGAN) have been adapted from this example. # Create the discriminator. discriminator = keras.Sequential( [ keras.layers.InputLayer((28, 28, discriminator_in_channels)), layers.Conv2D(64, (3, 3), strides=(2, 2), padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, (3, 3), strides=(2, 2), padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.GlobalMaxPooling2D(), layers.Dense(1), ], name=\"discriminator\", ) # Create the generator. generator = keras.Sequential( [ keras.layers.InputLayer((generator_in_channels,)), # We want to generate 128 + num_classes coefficients to reshape into a # 7x7x(128 + num_classes) map. 
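# With latent_dim = 128 and num_classes = 10, generator_in_channels is 138, so the
# Dense layer below produces 7 * 7 * 138 values that are reshaped to (7, 7, 138)
# and then upsampled twice (7 -> 14 -> 28) by the strided Conv2DTranspose layers.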
layers.Dense(7 * 7 * generator_in_channels), layers.LeakyReLU(alpha=0.2), layers.Reshape((7, 7, generator_in_channels)), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(1, (7, 7), padding=\"same\", activation=\"sigmoid\"), ], name=\"generator\", ) Creating a ConditionalGAN model class ConditionalGAN(keras.Model): def __init__(self, discriminator, generator, latent_dim): super(ConditionalGAN, self).__init__() self.discriminator = discriminator self.generator = generator self.latent_dim = latent_dim self.gen_loss_tracker = keras.metrics.Mean(name=\"generator_loss\") self.disc_loss_tracker = keras.metrics.Mean(name=\"discriminator_loss\") @property def metrics(self): return [self.gen_loss_tracker, self.disc_loss_tracker] def compile(self, d_optimizer, g_optimizer, loss_fn): super(ConditionalGAN, self).compile() self.d_optimizer = d_optimizer self.g_optimizer = g_optimizer self.loss_fn = loss_fn def train_step(self, data): # Unpack the data. real_images, one_hot_labels = data # Add dummy dimensions to the labels so that they can be concatenated with # the images. This is for the discriminator. image_one_hot_labels = one_hot_labels[:, :, None, None] image_one_hot_labels = tf.repeat( image_one_hot_labels, repeats=[image_size * image_size] ) image_one_hot_labels = tf.reshape( image_one_hot_labels, (-1, image_size, image_size, num_classes) ) # Sample random points in the latent space and concatenate the labels. # This is for the generator. batch_size = tf.shape(real_images)[0] random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim)) random_vector_labels = tf.concat( [random_latent_vectors, one_hot_labels], axis=1 ) # Decode the noise (guided by labels) to fake images. generated_images = self.generator(random_vector_labels) # Combine them with real images. Note that we are concatenating the labels # with these images here. fake_image_and_labels = tf.concat([generated_images, image_one_hot_labels], -1) real_image_and_labels = tf.concat([real_images, image_one_hot_labels], -1) combined_images = tf.concat( [fake_image_and_labels, real_image_and_labels], axis=0 ) # Assemble labels discriminating real from fake images. labels = tf.concat( [tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0 ) # Train the discriminator. with tf.GradientTape() as tape: predictions = self.discriminator(combined_images) d_loss = self.loss_fn(labels, predictions) grads = tape.gradient(d_loss, self.discriminator.trainable_weights) self.d_optimizer.apply_gradients( zip(grads, self.discriminator.trainable_weights) ) # Sample random points in the latent space. random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim)) random_vector_labels = tf.concat( [random_latent_vectors, one_hot_labels], axis=1 ) # Assemble labels that say \"all real images\". misleading_labels = tf.zeros((batch_size, 1)) # Train the generator (note that we should *not* update the weights # of the discriminator)! with tf.GradientTape() as tape: fake_images = self.generator(random_vector_labels) fake_image_and_labels = tf.concat([fake_images, image_one_hot_labels], -1) predictions = self.discriminator(fake_image_and_labels) g_loss = self.loss_fn(misleading_labels, predictions) grads = tape.gradient(g_loss, self.generator.trainable_weights) self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights)) # Monitor loss. 
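# The two Mean trackers created in __init__ are exposed through the `metrics`
# property, so Keras resets them at the start of each epoch and fit() logs their
# running averages under the keys returned below.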
self.gen_loss_tracker.update_state(g_loss) self.disc_loss_tracker.update_state(d_loss) return { \"g_loss\": self.gen_loss_tracker.result(), \"d_loss\": self.disc_loss_tracker.result(), } Training the Conditional GAN cond_gan = ConditionalGAN( discriminator=discriminator, generator=generator, latent_dim=latent_dim ) cond_gan.compile( d_optimizer=keras.optimizers.Adam(learning_rate=0.0003), g_optimizer=keras.optimizers.Adam(learning_rate=0.0003), loss_fn=keras.losses.BinaryCrossentropy(from_logits=True), ) cond_gan.fit(dataset, epochs=20) Epoch 1/20 1094/1094 [==============================] - 34s 16ms/step - g_loss: 1.4316 - d_loss: 0.4501 Epoch 2/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 1.2608 - d_loss: 0.4962 Epoch 3/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 1.4321 - d_loss: 0.4443 Epoch 4/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 1.9275 - d_loss: 0.2990 Epoch 5/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 2.2511 - d_loss: 0.2491 Epoch 6/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 0.9803 - d_loss: 0.6354 Epoch 7/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 0.8971 - d_loss: 0.6596 Epoch 8/20 1094/1094 [==============================] - 17s 16ms/step - g_loss: 0.8358 - d_loss: 0.6748 Epoch 9/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 0.8089 - d_loss: 0.6726 Epoch 10/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 0.7995 - d_loss: 0.6739 Epoch 11/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 0.7873 - d_loss: 0.6789 Epoch 12/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 0.7666 - d_loss: 0.6820 Epoch 13/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 0.7637 - d_loss: 0.6839 Epoch 14/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 0.7572 - d_loss: 0.6840 Epoch 15/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 0.7563 - d_loss: 0.6795 Epoch 16/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 0.7469 - d_loss: 0.6855 Epoch 17/20 1094/1094 [==============================] - 18s 16ms/step - g_loss: 0.7623 - d_loss: 0.6798 Epoch 18/20 889/1094 [=======================>......] - ETA: 3s - g_loss: 0.7421 - d_loss: 0.6802 Interpolating between classes with the trained generator # We first extract the trained generator from our Conditiona GAN. trained_gen = cond_gan.generator # Choose the number of intermediate images that would be generated in # between the interpolation + 2 (start and last images). num_interpolation = 9 # @param {type:\"integer\"} # Sample noise for the interpolation. interpolation_noise = tf.random.normal(shape=(1, latent_dim)) interpolation_noise = tf.repeat(interpolation_noise, repeats=num_interpolation) interpolation_noise = tf.reshape(interpolation_noise, (num_interpolation, latent_dim)) def interpolate_class(first_number, second_number): # Convert the start and end labels to one-hot encoded vectors. first_label = keras.utils.to_categorical([first_number], num_classes) second_label = keras.utils.to_categorical([second_number], num_classes) first_label = tf.cast(first_label, tf.float32) second_label = tf.cast(second_label, tf.float32) # Calculate the interpolation vector between the two labels. 
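# percent_second_label ramps linearly from 0.0 to 1.0 over num_interpolation steps,
# so each interpolated label is a convex combination of the two one-hot class vectors
# while the noise input stays fixed.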
percent_second_label = tf.linspace(0, 1, num_interpolation)[:, None] percent_second_label = tf.cast(percent_second_label, tf.float32) interpolation_labels = ( first_label * (1 - percent_second_label) + second_label * percent_second_label ) # Combine the noise and the labels and run inference with the generator. noise_and_labels = tf.concat([interpolation_noise, interpolation_labels], 1) fake = trained_gen.predict(noise_and_labels) return fake start_class = 1 # @param {type:\"slider\", min:0, max:9, step:1} end_class = 5 # @param {type:\"slider\", min:0, max:9, step:1} fake_images = interpolate_class(start_class, end_class) Here, we first sample noise from a normal distribution and then we repeat that for num_interpolation times and reshape the result accordingly. We then distribute it uniformly for num_interpolation with the label indentities being present in some proportion. fake_images *= 255.0 converted_images = fake_images.astype(np.uint8) converted_images = tf.image.resize(converted_images, (96, 96)).numpy().astype(np.uint8) imageio.mimsave(\"animation.gif\", converted_images, fps=1) embed.embed_file(\"animation.gif\") We can further improve the performance of this model with recipes like WGAN-GP. Conditional generation is also widely used in many modern image generation architectures like VQ-GANs, DALL-E, etc. Implementation of CycleGAN. CycleGAN CycleGAN is a model that aims to solve the image-to-image translation problem. The goal of the image-to-image translation problem is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, obtaining paired examples isn't always feasible. CycleGAN tries to learn this mapping without requiring paired input-output images, using cycle-consistent adversarial networks. Paper Original implementation Setup import os import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_addons as tfa import tensorflow_datasets as tfds tfds.disable_progress_bar() autotune = tf.data.AUTOTUNE Prepare the dataset In this example, we will be using the horse to zebra dataset. # Load the horse-zebra dataset using tensorflow-datasets. dataset, _ = tfds.load(\"cycle_gan/horse2zebra\", with_info=True, as_supervised=True) train_horses, train_zebras = dataset[\"trainA\"], dataset[\"trainB\"] test_horses, test_zebras = dataset[\"testA\"], dataset[\"testB\"] # Define the standard image size. orig_img_size = (286, 286) # Size of the random crops to be used during training. input_img_size = (256, 256, 3) # Weights initializer for the layers. kernel_init = keras.initializers.RandomNormal(mean=0.0, stddev=0.02) # Gamma initializer for instance normalization. gamma_init = keras.initializers.RandomNormal(mean=0.0, stddev=0.02) buffer_size = 256 batch_size = 1 def normalize_img(img): img = tf.cast(img, dtype=tf.float32) # Map values in the range [-1, 1] return (img / 127.5) - 1.0 def preprocess_train_image(img, label): # Random flip img = tf.image.random_flip_left_right(img) # Resize to the original size first img = tf.image.resize(img, [*orig_img_size]) # Random crop to 256X256 img = tf.image.random_crop(img, size=[*input_img_size]) # Normalize the pixel values in the range [-1, 1] img = normalize_img(img) return img def preprocess_test_image(img, label): # Only resizing and normalization for the test images. 
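# (no random flip or crop here, so test-time preprocessing is deterministic)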
img = tf.image.resize(img, [input_img_size[0], input_img_size[1]]) img = normalize_img(img) return img Create Dataset objects # Apply the preprocessing operations to the training data train_horses = ( train_horses.map(preprocess_train_image, num_parallel_calls=autotune) .cache() .shuffle(buffer_size) .batch(batch_size) ) train_zebras = ( train_zebras.map(preprocess_train_image, num_parallel_calls=autotune) .cache() .shuffle(buffer_size) .batch(batch_size) ) # Apply the preprocessing operations to the test data test_horses = ( test_horses.map(preprocess_test_image, num_parallel_calls=autotune) .cache() .shuffle(buffer_size) .batch(batch_size) ) test_zebras = ( test_zebras.map(preprocess_test_image, num_parallel_calls=autotune) .cache() .shuffle(buffer_size) .batch(batch_size) ) Visualize some samples _, ax = plt.subplots(4, 2, figsize=(10, 15)) for i, samples in enumerate(zip(train_horses.take(4), train_zebras.take(4))): horse = (((samples[0][0] * 127.5) + 127.5).numpy()).astype(np.uint8) zebra = (((samples[1][0] * 127.5) + 127.5).numpy()).astype(np.uint8) ax[i, 0].imshow(horse) ax[i, 1].imshow(zebra) plt.show() png Building blocks used in the CycleGAN generators and discriminators class ReflectionPadding2D(layers.Layer): \"\"\"Implements Reflection Padding as a layer. Args: padding(tuple): Amount of padding for the spatial dimensions. Returns: A padded tensor with the same type as the input tensor. \"\"\" def __init__(self, padding=(1, 1), **kwargs): self.padding = tuple(padding) super(ReflectionPadding2D, self).__init__(**kwargs) def call(self, input_tensor, mask=None): padding_width, padding_height = self.padding padding_tensor = [ [0, 0], [padding_height, padding_height], [padding_width, padding_width], [0, 0], ] return tf.pad(input_tensor, padding_tensor, mode=\"REFLECT\") def residual_block( x, activation, kernel_initializer=kernel_init, kernel_size=(3, 3), strides=(1, 1), padding=\"valid\", gamma_initializer=gamma_init, use_bias=False, ): dim = x.shape[-1] input_tensor = x x = ReflectionPadding2D()(input_tensor) x = layers.Conv2D( dim, kernel_size, strides=strides, kernel_initializer=kernel_initializer, padding=padding, use_bias=use_bias, )(x) x = tfa.layers.InstanceNormalization(gamma_initializer=gamma_initializer)(x) x = activation(x) x = ReflectionPadding2D()(x) x = layers.Conv2D( dim, kernel_size, strides=strides, kernel_initializer=kernel_initializer, padding=padding, use_bias=use_bias, )(x) x = tfa.layers.InstanceNormalization(gamma_initializer=gamma_initializer)(x) x = layers.add([input_tensor, x]) return x def downsample( x, filters, activation, kernel_initializer=kernel_init, kernel_size=(3, 3), strides=(2, 2), padding=\"same\", gamma_initializer=gamma_init, use_bias=False, ): x = layers.Conv2D( filters, kernel_size, strides=strides, kernel_initializer=kernel_initializer, padding=padding, use_bias=use_bias, )(x) x = tfa.layers.InstanceNormalization(gamma_initializer=gamma_initializer)(x) if activation: x = activation(x) return x def upsample( x, filters, activation, kernel_size=(3, 3), strides=(2, 2), padding=\"same\", kernel_initializer=kernel_init, gamma_initializer=gamma_init, use_bias=False, ): x = layers.Conv2DTranspose( filters, kernel_size, strides=strides, padding=padding, kernel_initializer=kernel_initializer, use_bias=use_bias, )(x) x = tfa.layers.InstanceNormalization(gamma_initializer=gamma_initializer)(x) if activation: x = activation(x) return x Build the generators The generator consists of downsampling blocks: nine residual blocks and upsampling blocks. 
The structure of the generator is the following: c7s1-64 ==> Conv block with `relu` activation, filter size of 7 d128 ====| |-> 2 downsampling blocks d256 ====| R256 ====| R256 | R256 | R256 | R256 |-> 9 residual blocks R256 | R256 | R256 | R256 ====| u128 ====| |-> 2 upsampling blocks u64 ====| c7s1-3 => Last conv block with `tanh` activation, filter size of 7. def get_resnet_generator( filters=64, num_downsampling_blocks=2, num_residual_blocks=9, num_upsample_blocks=2, gamma_initializer=gamma_init, name=None, ): img_input = layers.Input(shape=input_img_size, name=name + \"_img_input\") x = ReflectionPadding2D(padding=(3, 3))(img_input) x = layers.Conv2D(filters, (7, 7), kernel_initializer=kernel_init, use_bias=False)( x ) x = tfa.layers.InstanceNormalization(gamma_initializer=gamma_initializer)(x) x = layers.Activation(\"relu\")(x) # Downsampling for _ in range(num_downsampling_blocks): filters *= 2 x = downsample(x, filters=filters, activation=layers.Activation(\"relu\")) # Residual blocks for _ in range(num_residual_blocks): x = residual_block(x, activation=layers.Activation(\"relu\")) # Upsampling for _ in range(num_upsample_blocks): filters //= 2 x = upsample(x, filters, activation=layers.Activation(\"relu\")) # Final block x = ReflectionPadding2D(padding=(3, 3))(x) x = layers.Conv2D(3, (7, 7), padding=\"valid\")(x) x = layers.Activation(\"tanh\")(x) model = keras.models.Model(img_input, x, name=name) return model Build the discriminators The discriminators implement the following architecture: C64->C128->C256->C512 def get_discriminator( filters=64, kernel_initializer=kernel_init, num_downsampling=3, name=None ): img_input = layers.Input(shape=input_img_size, name=name + \"_img_input\") x = layers.Conv2D( filters, (4, 4), strides=(2, 2), padding=\"same\", kernel_initializer=kernel_initializer, )(img_input) x = layers.LeakyReLU(0.2)(x) num_filters = filters for num_downsample_block in range(3): num_filters *= 2 if num_downsample_block < 2: x = downsample( x, filters=num_filters, activation=layers.LeakyReLU(0.2), kernel_size=(4, 4), strides=(2, 2), ) else: x = downsample( x, filters=num_filters, activation=layers.LeakyReLU(0.2), kernel_size=(4, 4), strides=(1, 1), ) x = layers.Conv2D( 1, (4, 4), strides=(1, 1), padding=\"same\", kernel_initializer=kernel_initializer )(x) model = keras.models.Model(inputs=img_input, outputs=x, name=name) return model # Get the generators gen_G = get_resnet_generator(name=\"generator_G\") gen_F = get_resnet_generator(name=\"generator_F\") # Get the discriminators disc_X = get_discriminator(name=\"discriminator_X\") disc_Y = get_discriminator(name=\"discriminator_Y\") Build the CycleGAN model We will override the train_step() method of the Model class for training via fit(). 
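Before the full CycleGan class, here is a minimal sketch of that pattern, with a hypothetical model name and a single stand-in layer (none of this is part of the original example): losses are computed under a tf.GradientTape, gradients are applied with optimizers stored by a custom compile(), and a dictionary of scalars is returned for fit() to log.
import tensorflow as tf
from tensorflow import keras

class MinimalCustomModel(keras.Model):  # hypothetical name, for illustration only
    def __init__(self):
        super().__init__()
        # stand-in for the generators/discriminators used below
        self.dense = keras.layers.Dense(1)

    def call(self, inputs):
        return self.dense(inputs)

    def compile(self, optimizer, loss_fn):
        super().compile()
        self.opt = optimizer
        self.loss_fn = loss_fn

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            loss = self.loss_fn(y, self(x, training=True))
        grads = tape.gradient(loss, self.trainable_weights)
        self.opt.apply_gradients(zip(grads, self.trainable_weights))
        # fit() displays whatever dictionary is returned here
        return {"loss": loss}
Such a model is driven exactly like a built-in one, e.g. model.compile(keras.optimizers.Adam(), keras.losses.MeanSquaredError()) followed by model.fit(x, y). The CycleGan class below follows the same structure, only with two generators, two discriminators, four optimizers, and several loss terms.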
class CycleGan(keras.Model): def __init__( self, generator_G, generator_F, discriminator_X, discriminator_Y, lambda_cycle=10.0, lambda_identity=0.5, ): super(CycleGan, self).__init__() self.gen_G = generator_G self.gen_F = generator_F self.disc_X = discriminator_X self.disc_Y = discriminator_Y self.lambda_cycle = lambda_cycle self.lambda_identity = lambda_identity def compile( self, gen_G_optimizer, gen_F_optimizer, disc_X_optimizer, disc_Y_optimizer, gen_loss_fn, disc_loss_fn, ): super(CycleGan, self).compile() self.gen_G_optimizer = gen_G_optimizer self.gen_F_optimizer = gen_F_optimizer self.disc_X_optimizer = disc_X_optimizer self.disc_Y_optimizer = disc_Y_optimizer self.generator_loss_fn = gen_loss_fn self.discriminator_loss_fn = disc_loss_fn self.cycle_loss_fn = keras.losses.MeanAbsoluteError() self.identity_loss_fn = keras.losses.MeanAbsoluteError() def train_step(self, batch_data): # x is Horse and y is zebra real_x, real_y = batch_data # For CycleGAN, we need to calculate different # kinds of losses for the generators and discriminators. # We will perform the following steps here: # # 1. Pass real images through the generators and get the generated images # 2. Pass the generated images back to the generators to check if we # we can predict the original image from the generated image. # 3. Do an identity mapping of the real images using the generators. # 4. Pass the generated images in 1) to the corresponding discriminators. # 5. Calculate the generators total loss (adverserial + cycle + identity) # 6. Calculate the discriminators loss # 7. Update the weights of the generators # 8. Update the weights of the discriminators # 9. Return the losses in a dictionary with tf.GradientTape(persistent=True) as tape: # Horse to fake zebra fake_y = self.gen_G(real_x, training=True) # Zebra to fake horse -> y2x fake_x = self.gen_F(real_y, training=True) # Cycle (Horse to fake zebra to fake horse): x -> y -> x cycled_x = self.gen_F(fake_y, training=True) # Cycle (Zebra to fake horse to fake zebra) y -> x -> y cycled_y = self.gen_G(fake_x, training=True) # Identity mapping same_x = self.gen_F(real_x, training=True) same_y = self.gen_G(real_y, training=True) # Discriminator output disc_real_x = self.disc_X(real_x, training=True) disc_fake_x = self.disc_X(fake_x, training=True) disc_real_y = self.disc_Y(real_y, training=True) disc_fake_y = self.disc_Y(fake_y, training=True) # Generator adverserial loss gen_G_loss = self.generator_loss_fn(disc_fake_y) gen_F_loss = self.generator_loss_fn(disc_fake_x) # Generator cycle loss cycle_loss_G = self.cycle_loss_fn(real_y, cycled_y) * self.lambda_cycle cycle_loss_F = self.cycle_loss_fn(real_x, cycled_x) * self.lambda_cycle # Generator identity loss id_loss_G = ( self.identity_loss_fn(real_y, same_y) * self.lambda_cycle * self.lambda_identity ) id_loss_F = ( self.identity_loss_fn(real_x, same_x) * self.lambda_cycle * self.lambda_identity ) # Total generator loss total_loss_G = gen_G_loss + cycle_loss_G + id_loss_G total_loss_F = gen_F_loss + cycle_loss_F + id_loss_F # Discriminator loss disc_X_loss = self.discriminator_loss_fn(disc_real_x, disc_fake_x) disc_Y_loss = self.discriminator_loss_fn(disc_real_y, disc_fake_y) # Get the gradients for the generators grads_G = tape.gradient(total_loss_G, self.gen_G.trainable_variables) grads_F = tape.gradient(total_loss_F, self.gen_F.trainable_variables) # Get the gradients for the discriminators disc_X_grads = tape.gradient(disc_X_loss, self.disc_X.trainable_variables) disc_Y_grads = tape.gradient(disc_Y_loss, 
self.disc_Y.trainable_variables) # Update the weights of the generators self.gen_G_optimizer.apply_gradients( zip(grads_G, self.gen_G.trainable_variables) ) self.gen_F_optimizer.apply_gradients( zip(grads_F, self.gen_F.trainable_variables) ) # Update the weights of the discriminators self.disc_X_optimizer.apply_gradients( zip(disc_X_grads, self.disc_X.trainable_variables) ) self.disc_Y_optimizer.apply_gradients( zip(disc_Y_grads, self.disc_Y.trainable_variables) ) return { \"G_loss\": total_loss_G, \"F_loss\": total_loss_F, \"D_X_loss\": disc_X_loss, \"D_Y_loss\": disc_Y_loss, } Create a callback that periodically saves generated images class GANMonitor(keras.callbacks.Callback): \"\"\"A callback to generate and save images after each epoch\"\"\" def __init__(self, num_img=4): self.num_img = num_img def on_epoch_end(self, epoch, logs=None): _, ax = plt.subplots(4, 2, figsize=(12, 12)) for i, img in enumerate(test_horses.take(self.num_img)): prediction = self.model.gen_G(img)[0].numpy() prediction = (prediction * 127.5 + 127.5).astype(np.uint8) img = (img[0] * 127.5 + 127.5).numpy().astype(np.uint8) ax[i, 0].imshow(img) ax[i, 1].imshow(prediction) ax[i, 0].set_title(\"Input image\") ax[i, 1].set_title(\"Translated image\") ax[i, 0].axis(\"off\") ax[i, 1].axis(\"off\") prediction = keras.preprocessing.image.array_to_img(prediction) prediction.save( \"generated_img_{i}_{epoch}.png\".format(i=i, epoch=epoch + 1) ) plt.show() plt.close() Train the end-to-end model # Loss function for evaluating adversarial loss adv_loss_fn = keras.losses.MeanSquaredError() # Define the loss function for the generators def generator_loss_fn(fake): fake_loss = adv_loss_fn(tf.ones_like(fake), fake) return fake_loss # Define the loss function for the discriminators def discriminator_loss_fn(real, fake): real_loss = adv_loss_fn(tf.ones_like(real), real) fake_loss = adv_loss_fn(tf.zeros_like(fake), fake) return (real_loss + fake_loss) * 0.5 # Create cycle gan model cycle_gan_model = CycleGan( generator_G=gen_G, generator_F=gen_F, discriminator_X=disc_X, discriminator_Y=disc_Y ) # Compile the model cycle_gan_model.compile( gen_G_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5), gen_F_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5), disc_X_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5), disc_Y_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5), gen_loss_fn=generator_loss_fn, disc_loss_fn=discriminator_loss_fn, ) # Callbacks plotter = GANMonitor() checkpoint_filepath = \"./model_checkpoints/cyclegan_checkpoints.{epoch:03d}\" model_checkpoint_callback = keras.callbacks.ModelCheckpoint( filepath=checkpoint_filepath ) # Here we will train the model for just one epoch as each epoch takes around # 7 minutes on a single P100 backed machine. cycle_gan_model.fit( tf.data.Dataset.zip((train_horses, train_zebras)), epochs=1, callbacks=[plotter, model_checkpoint_callback], ) 1067/1067 [==============================] - ETA: 0s - G_loss: 4.4794 - F_loss: 4.1048 - D_X_loss: 0.1584 - D_Y_loss: 0.1233 png 1067/1067 [==============================] - 390s 366ms/step - G_loss: 4.4783 - F_loss: 4.1035 - D_X_loss: 0.1584 - D_Y_loss: 0.1232 Test the performance of the model. # This model was trained for 90 epochs. We will be loading those weights # here. Once the weights are loaded, we will take a few samples from the test # data and check the model's performance. 
!curl -LO https://github.com/AakashKumarNain/CycleGAN_TF2/releases/download/v1.0/saved_checkpoints.zip !unzip -qq saved_checkpoints.zip # Load the checkpoints weight_file = "./saved_checkpoints/cyclegan_checkpoints.090" cycle_gan_model.load_weights(weight_file).expect_partial() print("Weights loaded successfully") _, ax = plt.subplots(4, 2, figsize=(10, 15)) for i, img in enumerate(test_horses.take(4)): prediction = cycle_gan_model.gen_G(img, training=False)[0].numpy() prediction = (prediction * 127.5 + 127.5).astype(np.uint8) img = (img[0] * 127.5 + 127.5).numpy().astype(np.uint8) ax[i, 0].imshow(img) ax[i, 1].imshow(prediction) ax[i, 0].set_title("Input image") ax[i, 1].set_title("Translated image") ax[i, 0].axis("off") ax[i, 1].axis("off") prediction = keras.preprocessing.image.array_to_img(prediction) prediction.save("predicted_img_{i}.png".format(i=i)) plt.tight_layout() plt.show() % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 634 100 634 0 0 2874 0 --:--:-- --:--:-- --:--:-- 2881 100 273M 100 273M 0 0 1736k 0 0:02:41 0:02:41 --:--:-- 2049k Weights loaded successfully png
Generating images from limited data using the Caltech Birds dataset. Introduction GANs Generative Adversarial Networks (GANs) are a popular class of generative deep learning models, commonly used for image generation. They consist of a pair of dueling neural networks, called the discriminator and the generator. The discriminator's task is to distinguish real images from generated (fake) ones, while the generator network tries to fool the discriminator by generating more and more realistic images. If the discriminator is however too easy or too hard to fool, it might fail to provide a useful learning signal for the generator, which is why training GANs is usually considered a difficult task. Data augmentation for GANs Data augmentation, a popular technique in deep learning, is the process of randomly applying semantics-preserving transformations to the input data to generate multiple realistic versions of it, thereby effectively multiplying the amount of training data available. The simplest example is left-right flipping an image, which preserves its contents while generating a second unique training sample. Data augmentation is commonly used in supervised learning to prevent overfitting and enhance generalization. The authors of StyleGAN2-ADA show that discriminator overfitting can be an issue in GANs, especially when only low amounts of training data are available. They propose Adaptive Discriminator Augmentation to mitigate this issue. Applying data augmentation to GANs however is not straightforward. Since the generator is updated using the discriminator's gradients, if the generated images are augmented, the augmentation pipeline has to be differentiable and also has to be GPU-compatible for computational efficiency. Luckily, the Keras image augmentation layers fulfill both these requirements, and are therefore very well suited for this task. Invertible data augmentation A possible difficulty when using data augmentation in generative models is the issue of "leaky augmentations" (section 2.2), namely when the model generates images that are already augmented. This would mean that it was not able to separate the augmentation from the underlying data distribution, which can be caused by using non-invertible data transformations.
For example, if either 0, 90, 180 or 270 degree rotations are performed with equal probability, the original orientation of the images is impossible to infer, and this information is destroyed. A simple trick to make data augmentations invertible is to only apply them with some probability. That way the original version of the images will be more common, and the data distribution can be infered. By properly choosing this probability, one can effectively regularize the discriminator without making the augmentations leaky. Setup import matplotlib.pyplot as plt import tensorflow as tf import tensorflow_datasets as tfds from tensorflow import keras from tensorflow.keras import layers Hyperparameterers # data num_epochs = 10 # train for 400 epochs for good results image_size = 64 # resolution of Kernel Inception Distance measurement, see related section kid_image_size = 75 padding = 0.25 dataset_name = \"caltech_birds2011\" # adaptive discriminator augmentation max_translation = 0.125 max_rotation = 0.125 max_zoom = 0.25 target_accuracy = 0.85 integration_steps = 1000 # architecture noise_size = 64 depth = 4 width = 128 leaky_relu_slope = 0.2 dropout_rate = 0.4 # optimization batch_size = 128 learning_rate = 2e-4 beta_1 = 0.5 # not using the default value of 0.9 is important ema = 0.99 Data pipeline In this example, we will use the Caltech Birds (2011) dataset for generating images of birds, which is a diverse natural dataset containing less then 6000 images for training. When working with such low amounts of data, one has to take extra care to retain as high data quality as possible. In this example, we use the provided bounding boxes of the birds to cut them out with square crops while preserving their aspect ratios when possible. def round_to_int(float_value): return tf.cast(tf.math.round(float_value), dtype=tf.int32) def preprocess_image(data): # unnormalize bounding box coordinates height = tf.cast(tf.shape(data[\"image\"])[0], dtype=tf.float32) width = tf.cast(tf.shape(data[\"image\"])[1], dtype=tf.float32) bounding_box = data[\"bbox\"] * tf.stack([height, width, height, width]) # calculate center and length of longer side, add padding target_center_y = 0.5 * (bounding_box[0] + bounding_box[2]) target_center_x = 0.5 * (bounding_box[1] + bounding_box[3]) target_size = tf.maximum( (1.0 + padding) * (bounding_box[2] - bounding_box[0]), (1.0 + padding) * (bounding_box[3] - bounding_box[1]), ) # modify crop size to fit into image target_height = tf.reduce_min( [target_size, 2.0 * target_center_y, 2.0 * (height - target_center_y)] ) target_width = tf.reduce_min( [target_size, 2.0 * target_center_x, 2.0 * (width - target_center_x)] ) # crop image image = tf.image.crop_to_bounding_box( data[\"image\"], offset_height=round_to_int(target_center_y - 0.5 * target_height), offset_width=round_to_int(target_center_x - 0.5 * target_width), target_height=round_to_int(target_height), target_width=round_to_int(target_width), ) # resize and clip # for image downsampling, area interpolation is the preferred method image = tf.image.resize( image, size=[image_size, image_size], method=tf.image.ResizeMethod.AREA ) return tf.clip_by_value(image / 255.0, 0.0, 1.0) def prepare_dataset(split): # the validation dataset is shuffled as well, because data order matters # for the KID calculation return ( tfds.load(dataset_name, split=split, shuffle_files=True) .map(preprocess_image, num_parallel_calls=tf.data.AUTOTUNE) .cache() .shuffle(10 * batch_size) .batch(batch_size, drop_remainder=True) 
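# drop_remainder=True keeps every batch at exactly batch_size images; the GAN model
# below samples noise and builds real/fake label tensors using the global batch_size,
# so a smaller final batch would break those shapes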
.prefetch(buffer_size=tf.data.AUTOTUNE) ) train_dataset = prepare_dataset(\"train\") val_dataset = prepare_dataset(\"test\") After preprocessing the training images look like the following: birds dataset Kernel inception distance Kernel Inception Distance (KID) was proposed as a replacement for the popular Frechet Inception Distance (FID) metric for measuring image generation quality. Both metrics measure the difference in the generated and training distributions in the representation space of an InceptionV3 network pretrained on ImageNet. According to the paper, KID was proposed because FID has no unbiased estimator, its expected value is higher when it is measured on fewer images. KID is more suitable for small datasets because its expected value does not depend on the number of samples it is measured on. In my experience it is also computationally lighter, numerically more stable, and simpler to implement because it can be estimated in a per-batch manner. In this example, the images are evaluated at the minimal possible resolution of the Inception network (75x75 instead of 299x299), and the metric is only measured on the validation set for computational efficiency. class KID(keras.metrics.Metric): def __init__(self, name=\"kid\", **kwargs): super().__init__(name=name, **kwargs) # KID is estimated per batch and is averaged across batches self.kid_tracker = keras.metrics.Mean() # a pretrained InceptionV3 is used without its classification layer # transform the pixel values to the 0-255 range, then use the same # preprocessing as during pretraining self.encoder = keras.Sequential( [ layers.InputLayer(input_shape=(image_size, image_size, 3)), layers.Rescaling(255.0), layers.Resizing(height=kid_image_size, width=kid_image_size), layers.Lambda(keras.applications.inception_v3.preprocess_input), keras.applications.InceptionV3( include_top=False, input_shape=(kid_image_size, kid_image_size, 3), weights=\"imagenet\", ), layers.GlobalAveragePooling2D(), ], name=\"inception_encoder\", ) def polynomial_kernel(self, features_1, features_2): feature_dimensions = tf.cast(tf.shape(features_1)[1], dtype=tf.float32) return (features_1 @ tf.transpose(features_2) / feature_dimensions + 1.0) ** 3.0 def update_state(self, real_images, generated_images, sample_weight=None): real_features = self.encoder(real_images, training=False) generated_features = self.encoder(generated_images, training=False) # compute polynomial kernels using the two sets of features kernel_real = self.polynomial_kernel(real_features, real_features) kernel_generated = self.polynomial_kernel( generated_features, generated_features ) kernel_cross = self.polynomial_kernel(real_features, generated_features) # estimate the squared maximum mean discrepancy using the average kernel values batch_size = tf.shape(real_features)[0] batch_size_f = tf.cast(batch_size, dtype=tf.float32) mean_kernel_real = tf.reduce_sum(kernel_real * (1.0 - tf.eye(batch_size))) / ( batch_size_f * (batch_size_f - 1.0) ) mean_kernel_generated = tf.reduce_sum( kernel_generated * (1.0 - tf.eye(batch_size)) ) / (batch_size_f * (batch_size_f - 1.0)) mean_kernel_cross = tf.reduce_mean(kernel_cross) kid = mean_kernel_real + mean_kernel_generated - 2.0 * mean_kernel_cross # update the average KID estimate self.kid_tracker.update_state(kid) def result(self): return self.kid_tracker.result() def reset_state(self): self.kid_tracker.reset_state() Adaptive discriminator augmentation The authors of StyleGAN2-ADA propose to change the augmentation probability adaptively during 
training. Though it is explained differently in the paper, they use integral control on the augmentation probability to keep the discriminator's accuracy on real images close to a target value. Note, that their controlled variable is actually the average sign of the discriminator logits (r_t in the paper), which corresponds to 2 * accuracy - 1. This method requires two hyperparameters: target_accuracy: the target value for the discriminator's accuracy on real images. I recommend selecting its value from the 80-90% range. integration_steps: the number of update steps required for an accuracy error of 100% to transform into an augmentation probability increase of 100%. To give an intuition, this defines how slowly the augmentation probability is changed. I recommend setting this to a relatively high value (1000 in this case) so that the augmentation strength is only adjusted slowly. The main motivation for this procedure is that the optimal value of the target accuracy is similar across different dataset sizes (see figure 4 and 5 in the paper), so it does not have to be retuned, because the process automatically applies stronger data augmentation when it is needed. # \"hard sigmoid\", useful for binary accuracy calculation from logits def step(values): # negative values -> 0.0, positive values -> 1.0 return 0.5 * (1.0 + tf.sign(values)) # augments images with a probability that is dynamically updated during training class AdaptiveAugmenter(keras.Model): def __init__(self): super().__init__() # stores the current probability of an image being augmented self.probability = tf.Variable(0.0) # the corresponding augmentation names from the paper are shown above each layer # the authors show (see figure 4), that the blitting and geometric augmentations # are the most helpful in the low-data regime self.augmenter = keras.Sequential( [ layers.InputLayer(input_shape=(image_size, image_size, 3)), # blitting/x-flip: layers.RandomFlip(\"horizontal\"), # blitting/integer translation: layers.RandomTranslation( height_factor=max_translation, width_factor=max_translation, interpolation=\"nearest\", ), # geometric/rotation: layers.RandomRotation(factor=max_rotation), # geometric/isotropic and anisotropic scaling: layers.RandomZoom( height_factor=(-max_zoom, 0.0), width_factor=(-max_zoom, 0.0) ), ], name=\"adaptive_augmenter\", ) def call(self, images, training): if training: augmented_images = self.augmenter(images, training) # during training either the original or the augmented images are selected # based on self.probability augmentation_values = tf.random.uniform( shape=(batch_size, 1, 1, 1), minval=0.0, maxval=1.0 ) augmentation_bools = tf.math.less(augmentation_values, self.probability) images = tf.where(augmentation_bools, augmented_images, images) return images def update(self, real_logits): current_accuracy = tf.reduce_mean(step(real_logits)) # the augmentation probability is updated based on the dicriminator's # accuracy on real images accuracy_error = current_accuracy - target_accuracy self.probability.assign( tf.clip_by_value( self.probability + accuracy_error / integration_steps, 0.0, 1.0 ) ) Network architecture Here we specify the architecture of the two networks: generator: maps a random vector to an image, which should be as realistic as possible discriminator: maps an image to a scalar score, which should be high for real and low for generated images GANs tend to be sensitive to the network architecture, I implemented a DCGAN architecture in this example, because it is relatively stable 
during training while being simple to implement. We use a constant number of filters throughout the network, use a sigmoid instead of tanh in the last layer of the generator, and use default initialization instead of random normal as further simplifications. As a good practice, we disable the learnable scale parameter in the batch normalization layers, because on one hand the following relu + convolutional layers make it redundant (as noted in the documentation). But also because it should be disabled based on theory when using spectral normalization (section 4.1), which is not used here, but is common in GANs. We also disable the bias in the fully connected and convolutional layers, because the following batch normalization makes it redundant. # DCGAN generator def get_generator(): noise_input = keras.Input(shape=(noise_size,)) x = layers.Dense(4 * 4 * width, use_bias=False)(noise_input) x = layers.BatchNormalization(scale=False)(x) x = layers.ReLU()(x) x = layers.Reshape(target_shape=(4, 4, width))(x) for _ in range(depth - 1): x = layers.Conv2DTranspose( width, kernel_size=4, strides=2, padding=\"same\", use_bias=False, )(x) x = layers.BatchNormalization(scale=False)(x) x = layers.ReLU()(x) image_output = layers.Conv2DTranspose( 3, kernel_size=4, strides=2, padding=\"same\", activation=\"sigmoid\", )(x) return keras.Model(noise_input, image_output, name=\"generator\") # DCGAN discriminator def get_discriminator(): image_input = keras.Input(shape=(image_size, image_size, 3)) x = image_input for _ in range(depth): x = layers.Conv2D( width, kernel_size=4, strides=2, padding=\"same\", use_bias=False, )(x) x = layers.BatchNormalization(scale=False)(x) x = layers.LeakyReLU(alpha=leaky_relu_slope)(x) x = layers.Flatten()(x) x = layers.Dropout(dropout_rate)(x) output_score = layers.Dense(1)(x) return keras.Model(image_input, output_score, name=\"discriminator\") GAN model class GAN_ADA(keras.Model): def __init__(self): super().__init__() self.augmenter = AdaptiveAugmenter() self.generator = get_generator() self.ema_generator = keras.models.clone_model(self.generator) self.discriminator = get_discriminator() self.generator.summary() self.discriminator.summary() def compile(self, generator_optimizer, discriminator_optimizer, **kwargs): super().compile(**kwargs) # separate optimizers for the two networks self.generator_optimizer = generator_optimizer self.discriminator_optimizer = discriminator_optimizer self.generator_loss_tracker = keras.metrics.Mean(name=\"g_loss\") self.discriminator_loss_tracker = keras.metrics.Mean(name=\"d_loss\") self.real_accuracy = keras.metrics.BinaryAccuracy(name=\"real_acc\") self.generated_accuracy = keras.metrics.BinaryAccuracy(name=\"gen_acc\") self.augmentation_probability_tracker = keras.metrics.Mean(name=\"aug_p\") self.kid = KID() @property def metrics(self): return [ self.generator_loss_tracker, self.discriminator_loss_tracker, self.real_accuracy, self.generated_accuracy, self.augmentation_probability_tracker, self.kid, ] def generate(self, batch_size, training): latent_samples = tf.random.normal(shape=(batch_size, noise_size)) # use ema_generator during inference if training: generated_images = self.generator(latent_samples, training) else: generated_images = self.ema_generator(latent_samples, training) return generated_images def adversarial_loss(self, real_logits, generated_logits): # this is usually called the non-saturating GAN loss real_labels = tf.ones(shape=(batch_size, 1)) generated_labels = tf.zeros(shape=(batch_size, 1)) # the generator tries to 
produce images that the discriminator considers as real generator_loss = keras.losses.binary_crossentropy( real_labels, generated_logits, from_logits=True ) # the discriminator tries to determine if images are real or generated discriminator_loss = keras.losses.binary_crossentropy( tf.concat([real_labels, generated_labels], axis=0), tf.concat([real_logits, generated_logits], axis=0), from_logits=True, ) return tf.reduce_mean(generator_loss), tf.reduce_mean(discriminator_loss) def train_step(self, real_images): real_images = self.augmenter(real_images, training=True) # use persistent gradient tape because gradients will be calculated twice with tf.GradientTape(persistent=True) as tape: generated_images = self.generate(batch_size, training=True) # gradient is calculated through the image augmentation generated_images = self.augmenter(generated_images, training=True) # separate forward passes for the real and generated images, meaning # that batch normalization is applied separately real_logits = self.discriminator(real_images, training=True) generated_logits = self.discriminator(generated_images, training=True) generator_loss, discriminator_loss = self.adversarial_loss( real_logits, generated_logits ) # calculate gradients and update weights generator_gradients = tape.gradient( generator_loss, self.generator.trainable_weights ) discriminator_gradients = tape.gradient( discriminator_loss, self.discriminator.trainable_weights ) self.generator_optimizer.apply_gradients( zip(generator_gradients, self.generator.trainable_weights) ) self.discriminator_optimizer.apply_gradients( zip(discriminator_gradients, self.discriminator.trainable_weights) ) # update the augmentation probability based on the discriminator's performance self.augmenter.update(real_logits) self.generator_loss_tracker.update_state(generator_loss) self.discriminator_loss_tracker.update_state(discriminator_loss) self.real_accuracy.update_state(1.0, step(real_logits)) self.generated_accuracy.update_state(0.0, step(generated_logits)) self.augmentation_probability_tracker.update_state(self.augmenter.probability) # track the exponential moving average of the generator's weights to decrease # variance in the generation quality for weight, ema_weight in zip( self.generator.weights, self.ema_generator.weights ): ema_weight.assign(ema * ema_weight + (1 - ema) * weight) # KID is not measured during the training phase for computational efficiency return {m.name: m.result() for m in self.metrics[:-1]} def test_step(self, real_images): generated_images = self.generate(batch_size, training=False) self.kid.update_state(real_images, generated_images) # only KID is measured during the evaluation phase for computational efficiency return {self.kid.name: self.kid.result()} def plot_images(self, epoch=None, logs=None, num_rows=3, num_cols=6, interval=5): # plot random generated images for visual evaluation of generation quality if epoch is None or (epoch + 1) % interval == 0: num_images = num_rows * num_cols generated_images = self.generate(num_images, training=False) plt.figure(figsize=(num_cols * 2.0, num_rows * 2.0)) for row in range(num_rows): for col in range(num_cols): index = row * num_cols + col plt.subplot(num_rows, num_cols, index + 1) plt.imshow(generated_images[index]) plt.axis(\"off\") plt.tight_layout() plt.show() plt.close() Training One should see from the metrics during training that if the real accuracy (the discriminator's accuracy on real images) is below the target accuracy, the augmentation probability is increased, and vice
versa. In my experience, during a healthy GAN training, the discriminator accuracy should stay in the 80-95% range. Below that, the discriminator is too weak, above that it is too strong. Note that we track the exponential moving average of the generator's weights, and use that for image generation and KID evaluation. # create and compile the model model = GAN_ADA() model.compile( generator_optimizer=keras.optimizers.Adam(learning_rate, beta_1), discriminator_optimizer=keras.optimizers.Adam(learning_rate, beta_1), ) # save the best model based on the validation KID metric checkpoint_path = \"gan_model\" checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_path, save_weights_only=True, monitor=\"val_kid\", mode=\"min\", save_best_only=True, ) # run training and plot generated images periodically model.fit( train_dataset, epochs=num_epochs, validation_data=val_dataset, callbacks=[ keras.callbacks.LambdaCallback(on_epoch_end=model.plot_images), checkpoint_callback, ], ) Model: \"generator\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, 64)] 0 _________________________________________________________________ dense (Dense) (None, 2048) 131072 _________________________________________________________________ batch_normalization (BatchNo (None, 2048) 6144 _________________________________________________________________ re_lu (ReLU) (None, 2048) 0 _________________________________________________________________ reshape (Reshape) (None, 4, 4, 128) 0 _________________________________________________________________ conv2d_transpose (Conv2DTran (None, 8, 8, 128) 262144 _________________________________________________________________ batch_normalization_1 (Batch (None, 8, 8, 128) 384 _________________________________________________________________ re_lu_1 (ReLU) (None, 8, 8, 128) 0 _________________________________________________________________ conv2d_transpose_1 (Conv2DTr (None, 16, 16, 128) 262144 _________________________________________________________________ batch_normalization_2 (Batch (None, 16, 16, 128) 384 _________________________________________________________________ re_lu_2 (ReLU) (None, 16, 16, 128) 0 _________________________________________________________________ conv2d_transpose_2 (Conv2DTr (None, 32, 32, 128) 262144 _________________________________________________________________ batch_normalization_3 (Batch (None, 32, 32, 128) 384 _________________________________________________________________ re_lu_3 (ReLU) (None, 32, 32, 128) 0 _________________________________________________________________ conv2d_transpose_3 (Conv2DTr (None, 64, 64, 3) 6147 ================================================================= Total params: 930,947 Trainable params: 926,083 Non-trainable params: 4,864 _________________________________________________________________ Model: \"discriminator\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_3 (InputLayer) [(None, 64, 64, 3)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 32, 32, 128) 6144 _________________________________________________________________ batch_normalization_4 (Batch (None, 32, 32, 128) 384 _________________________________________________________________ leaky_re_lu (LeakyReLU) (None, 32, 
32, 128) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 16, 16, 128) 262144 _________________________________________________________________ batch_normalization_5 (Batch (None, 16, 16, 128) 384 _________________________________________________________________ leaky_re_lu_1 (LeakyReLU) (None, 16, 16, 128) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 8, 8, 128) 262144 _________________________________________________________________ batch_normalization_6 (Batch (None, 8, 8, 128) 384 _________________________________________________________________ leaky_re_lu_2 (LeakyReLU) (None, 8, 8, 128) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 4, 4, 128) 262144 _________________________________________________________________ batch_normalization_7 (Batch (None, 4, 4, 128) 384 _________________________________________________________________ leaky_re_lu_3 (LeakyReLU) (None, 4, 4, 128) 0 _________________________________________________________________ flatten (Flatten) (None, 2048) 0 _________________________________________________________________ dropout (Dropout) (None, 2048) 0 _________________________________________________________________ dense_1 (Dense) (None, 1) 2049 ================================================================= Total params: 796,161 Trainable params: 795,137 Non-trainable params: 1,024 _________________________________________________________________ Epoch 1/10 46/46 [==============================] - 36s 307ms/step - g_loss: 3.3293 - d_loss: 0.1576 - real_acc: 0.9387 - gen_acc: 0.9579 - aug_p: 0.0020 - val_kid: 9.0999 Epoch 2/10 46/46 [==============================] - 10s 215ms/step - g_loss: 4.9824 - d_loss: 0.0912 - real_acc: 0.9704 - gen_acc: 0.9798 - aug_p: 0.0077 - val_kid: 8.3523 Epoch 3/10 46/46 [==============================] - 10s 218ms/step - g_loss: 5.0587 - d_loss: 0.1248 - real_acc: 0.9530 - gen_acc: 0.9625 - aug_p: 0.0131 - val_kid: 6.8116 Epoch 4/10 46/46 [==============================] - 10s 221ms/step - g_loss: 4.2580 - d_loss: 0.1002 - real_acc: 0.9686 - gen_acc: 0.9740 - aug_p: 0.0179 - val_kid: 5.2327 Epoch 5/10 46/46 [==============================] - 10s 225ms/step - g_loss: 4.6022 - d_loss: 0.0847 - real_acc: 0.9655 - gen_acc: 0.9852 - aug_p: 0.0234 - val_kid: 3.9004 png Epoch 6/10 46/46 [==============================] - 10s 224ms/step - g_loss: 4.9362 - d_loss: 0.0671 - real_acc: 0.9791 - gen_acc: 0.9895 - aug_p: 0.0291 - val_kid: 6.6020 Epoch 7/10 46/46 [==============================] - 10s 222ms/step - g_loss: 4.4272 - d_loss: 0.1184 - real_acc: 0.9570 - gen_acc: 0.9657 - aug_p: 0.0345 - val_kid: 3.3644 Epoch 8/10 46/46 [==============================] - 10s 220ms/step - g_loss: 4.5060 - d_loss: 0.1635 - real_acc: 0.9421 - gen_acc: 0.9594 - aug_p: 0.0392 - val_kid: 3.1381 Epoch 9/10 46/46 [==============================] - 10s 219ms/step - g_loss: 3.8264 - d_loss: 0.1667 - real_acc: 0.9383 - gen_acc: 0.9484 - aug_p: 0.0433 - val_kid: 2.9423 Epoch 10/10 46/46 [==============================] - 10s 219ms/step - g_loss: 3.4063 - d_loss: 0.1757 - real_acc: 0.9314 - gen_acc: 0.9475 - aug_p: 0.0473 - val_kid: 2.9112 png Inference # load the best model and generate images model.load_weights(checkpoint_path) model.plot_images() png Results By running the training for 400 epochs (which takes 2-3 hours in a Colab notebook), one can get high quality image generations using this code example. 
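The latent-space interpolation shown in the results below can be reproduced with a minimal sketch along these lines; it reuses model.ema_generator and noise_size from the code above, and num_interpolation_steps is an arbitrary choice for illustration:

# Sketch: linearly interpolate between two random latent vectors and decode the
# intermediate points with the EMA generator (used for its lower-variance outputs).
num_interpolation_steps = 8
z_start = tf.random.normal(shape=(1, noise_size))
z_end = tf.random.normal(shape=(1, noise_size))
ratios = tf.reshape(tf.linspace(0.0, 1.0, num_interpolation_steps), (-1, 1))
latent_points = (1.0 - ratios) * z_start + ratios * z_end
interpolated_images = model.ema_generator(latent_points, training=False)

plt.figure(figsize=(2.0 * num_interpolation_steps, 2.0))
for i in range(num_interpolation_steps):
    plt.subplot(1, num_interpolation_steps, i + 1)
    plt.imshow(interpolated_images[i])
    plt.axis("off")
plt.show()

Decoding through the EMA generator rather than the raw generator tends to give smoother, less noisy transitions, for the same variance-reduction reason it is used for image generation and KID evaluation above.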
The evolution of a random batch of images over a 400-epoch training (ema=0.999 for animation smoothness): birds evolution gif Latent-space interpolation between a batch of selected images: birds interpolation gif I also recommend trying out training on other datasets, such as CelebA. In my experience good results can be achieved without changing any hyperparameters (though discriminator augmentation might not be necessary). GAN tips and tricks My goal with this example was to find a good tradeoff between ease of implementation and generation quality for GANs. During preparation I have run numerous ablations using this repository. In this section I list the lessons learned and my recommendations in my subjective order of importance. I recommend checking out the DCGAN paper, this NeurIPS talk, and this large scale GAN study for others' takes on this subject. Architectural tips resolution: Training GANs at higher resolutions tends to get more difficult; I recommend experimenting at 32x32 or 64x64 resolutions initially. initialization: If you see strong colorful patterns early on in the training, the initialization might be the issue. Set the kernel_initializer parameters of layers to random normal, and decrease the standard deviation (recommended value: 0.02, following DCGAN) until the issue disappears. upsampling: There are two main methods for upsampling in the generator. Transposed convolution is faster, but can lead to checkerboard artifacts, which can be reduced by using a kernel size that is divisible by the stride (recommended kernel size is 4 for a stride of 2). Upsampling + standard convolution can have slightly lower quality, but checkerboard artifacts are not an issue. I recommend using nearest-neighbor interpolation over bilinear for it. batch normalization in discriminator: Sometimes has a high impact; I recommend trying it both ways. spectral normalization: A popular technique for training GANs that can help with stability. I recommend disabling batch normalization's learnable scale parameters along with it. residual connections: While residual discriminators behave similarly, residual generators are more difficult to train in my experience. They are however necessary for training large and deep architectures. I recommend starting with non-residual architectures. dropout: Using dropout before the last layer of the discriminator improves generation quality in my experience. The recommended dropout rate is below 0.5. leaky ReLU: Use leaky ReLU activations in the discriminator to make its gradients less sparse. The recommended slope/alpha is 0.2, following DCGAN. Algorithmic tips loss functions: Numerous losses have been proposed over the years for training GANs, promising improved performance and stability. I have implemented 5 of them in this repository, and my experience is in line with this GAN study: no loss seems to consistently outperform the default non-saturating GAN loss. I recommend using that as a default. Adam's beta_1 parameter: The beta_1 parameter in Adam can be interpreted as the momentum of mean gradient estimation. Using 0.5 or even 0.0 instead of the default 0.9 value was proposed in DCGAN and is important. This example would not work using its default value. separate batch normalization for generated and real images: The forward pass of the discriminator should be separate for the generated and real images. Doing otherwise can lead to artifacts (45-degree stripes in my case) and decreased performance.
exponential moving average of generator's weights: This helps to reduce the variance of the KID measurement, and helps in averaging out the rapid color palette changes during training. different learning rate for generator and discriminator: If one has the resources, it can help to tune the learning rates of the two networks separately. A similar idea is to update either network's (usually the discriminator's) weights multiple times for each of the other network's updates. I recommend using the same learning rate of 2e-4 (Adam), following DCGAN for both networks, and only updating both of them once as a default. label noise: One-sided label smoothing (using less than 1.0 for real labels), or adding noise to the labels can regularize the discriminator not to get overconfident, however in my case they did not improve performance. adaptive data augmentation: Since it adds another dynamic component to the training process, disable it as a default, and only enable it when the other components already work well. Related works Other GAN-related Keras code examples: DCGAN + CelebA WGAN + FashionMNIST WGAN + Molecules ConditionalGAN + MNIST CycleGAN + Horse2Zebra StyleGAN Modern GAN architecture-lines: SAGAN, BigGAN ProgressiveGAN, StyleGAN, StyleGAN2, StyleGAN2-ADA, AliasFreeGAN Concurrent papers on discriminator data augmentation: 1, 2, 3 Recent literature overview on GANs: talk A simple DCGAN trained using fit() by overriding train_step on CelebA images. Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import numpy as np import matplotlib.pyplot as plt import os import gdown from zipfile import ZipFile Prepare CelebA data We'll use face images from the CelebA dataset, resized to 64x64. os.makedirs(\"celeba_gan\") url = \"https://drive.google.com/uc?id=1O7m1010EJjLE5QxLZiM9Fpjs7Oj6e684\" output = \"celeba_gan/data.zip\" gdown.download(url, output, quiet=True) with ZipFile(\"celeba_gan/data.zip\", \"r\") as zipobj: zipobj.extractall(\"celeba_gan\") Create a dataset from our folder, and rescale the images to the [0-1] range: dataset = keras.preprocessing.image_dataset_from_directory( \"celeba_gan\", label_mode=None, image_size=(64, 64), batch_size=32 ) dataset = dataset.map(lambda x: x / 255.0) Found 202599 files belonging to 1 classes. Let's display a sample image: for x in dataset: plt.axis(\"off\") plt.imshow((x.numpy() * 255).astype(\"int32\")[0]) break png Create the discriminator It maps a 64x64 image to a binary classification score. 
discriminator = keras.Sequential( [ keras.Input(shape=(64, 64, 3)), layers.Conv2D(64, kernel_size=4, strides=2, padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, kernel_size=4, strides=2, padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, kernel_size=4, strides=2, padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.Flatten(), layers.Dropout(0.2), layers.Dense(1, activation=\"sigmoid\"), ], name=\"discriminator\", ) discriminator.summary() Model: \"discriminator\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 32, 32, 64) 3136 _________________________________________________________________ leaky_re_lu (LeakyReLU) (None, 32, 32, 64) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 16, 16, 128) 131200 _________________________________________________________________ leaky_re_lu_1 (LeakyReLU) (None, 16, 16, 128) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 8, 8, 128) 262272 _________________________________________________________________ leaky_re_lu_2 (LeakyReLU) (None, 8, 8, 128) 0 _________________________________________________________________ flatten (Flatten) (None, 8192) 0 _________________________________________________________________ dropout (Dropout) (None, 8192) 0 _________________________________________________________________ dense (Dense) (None, 1) 8193 ================================================================= Total params: 404,801 Trainable params: 404,801 Non-trainable params: 0 _________________________________________________________________ Create the generator It mirrors the discriminator, replacing Conv2D layers with Conv2DTranspose layers. 
latent_dim = 128 generator = keras.Sequential( [ keras.Input(shape=(latent_dim,)), layers.Dense(8 * 8 * 128), layers.Reshape((8, 8, 128)), layers.Conv2DTranspose(128, kernel_size=4, strides=2, padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(256, kernel_size=4, strides=2, padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(512, kernel_size=4, strides=2, padding=\"same\"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(3, kernel_size=5, padding=\"same\", activation=\"sigmoid\"), ], name=\"generator\", ) generator.summary() Model: \"generator\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_1 (Dense) (None, 8192) 1056768 _________________________________________________________________ reshape (Reshape) (None, 8, 8, 128) 0 _________________________________________________________________ conv2d_transpose (Conv2DTran (None, 16, 16, 128) 262272 _________________________________________________________________ leaky_re_lu_3 (LeakyReLU) (None, 16, 16, 128) 0 _________________________________________________________________ conv2d_transpose_1 (Conv2DTr (None, 32, 32, 256) 524544 _________________________________________________________________ leaky_re_lu_4 (LeakyReLU) (None, 32, 32, 256) 0 _________________________________________________________________ conv2d_transpose_2 (Conv2DTr (None, 64, 64, 512) 2097664 _________________________________________________________________ leaky_re_lu_5 (LeakyReLU) (None, 64, 64, 512) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 64, 64, 3) 38403 ================================================================= Total params: 3,979,651 Trainable params: 3,979,651 Non-trainable params: 0 _________________________________________________________________ Override train_step class GAN(keras.Model): def __init__(self, discriminator, generator, latent_dim): super(GAN, self).__init__() self.discriminator = discriminator self.generator = generator self.latent_dim = latent_dim def compile(self, d_optimizer, g_optimizer, loss_fn): super(GAN, self).compile() self.d_optimizer = d_optimizer self.g_optimizer = g_optimizer self.loss_fn = loss_fn self.d_loss_metric = keras.metrics.Mean(name=\"d_loss\") self.g_loss_metric = keras.metrics.Mean(name=\"g_loss\") @property def metrics(self): return [self.d_loss_metric, self.g_loss_metric] def train_step(self, real_images): # Sample random points in the latent space batch_size = tf.shape(real_images)[0] random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim)) # Decode them to fake images generated_images = self.generator(random_latent_vectors) # Combine them with real images combined_images = tf.concat([generated_images, real_images], axis=0) # Assemble labels discriminating real from fake images labels = tf.concat( [tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0 ) # Add random noise to the labels - important trick! 
labels += 0.05 * tf.random.uniform(tf.shape(labels)) # Train the discriminator with tf.GradientTape() as tape: predictions = self.discriminator(combined_images) d_loss = self.loss_fn(labels, predictions) grads = tape.gradient(d_loss, self.discriminator.trainable_weights) self.d_optimizer.apply_gradients( zip(grads, self.discriminator.trainable_weights) ) # Sample random points in the latent space random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim)) # Assemble labels that say \"all real images\" misleading_labels = tf.zeros((batch_size, 1)) # Train the generator (note that we should *not* update the weights # of the discriminator)! with tf.GradientTape() as tape: predictions = self.discriminator(self.generator(random_latent_vectors)) g_loss = self.loss_fn(misleading_labels, predictions) grads = tape.gradient(g_loss, self.generator.trainable_weights) self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights)) # Update metrics self.d_loss_metric.update_state(d_loss) self.g_loss_metric.update_state(g_loss) return { \"d_loss\": self.d_loss_metric.result(), \"g_loss\": self.g_loss_metric.result(), } Create a callback that periodically saves generated images class GANMonitor(keras.callbacks.Callback): def __init__(self, num_img=3, latent_dim=128): self.num_img = num_img self.latent_dim = latent_dim def on_epoch_end(self, epoch, logs=None): random_latent_vectors = tf.random.normal(shape=(self.num_img, self.latent_dim)) generated_images = self.model.generator(random_latent_vectors) generated_images *= 255 generated_images.numpy() for i in range(self.num_img): img = keras.preprocessing.image.array_to_img(generated_images[i]) img.save(\"generated_img_%03d_%d.png\" % (epoch, i)) Train the end-to-end model epochs = 1 # In practice, use ~100 epochs gan = GAN(discriminator=discriminator, generator=generator, latent_dim=latent_dim) gan.compile( d_optimizer=keras.optimizers.Adam(learning_rate=0.0001), g_optimizer=keras.optimizers.Adam(learning_rate=0.0001), loss_fn=keras.losses.BinaryCrossentropy(), ) gan.fit( dataset, epochs=epochs, callbacks=[GANMonitor(num_img=10, latent_dim=latent_dim)] ) 6332/6332 [==============================] - 605s 96ms/step - d_loss: 0.6113 - g_loss: 1.1976 Some of the last generated images around epoch 30 (results keep improving after that): results Generating Deep Dreams with Keras. Introduction \"Deep dream\" is an image-filtering technique which consists of taking an image classification model, and running gradient ascent over an input image to try to maximize the activations of specific layers (and sometimes, specific units in specific layers) for this input. It produces hallucination-like visuals. It was first introduced by Alexander Mordvintsev from Google in July 2015. Process: Load the original image. Define a number of processing scales (\"octaves\"), from smallest to largest. Resize the original image to the smallest scale. For every scale, starting with the smallest (i.e. current one): - Run gradient ascent - Upscale image to the next scale - Reinject the detail that was lost at upscaling time Stop when we are back to the original size. To obtain the detail lost during upscaling, we simply take the original image, shrink it down, upscale it, and compare the result to the (resized) original image. 
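As a toy sketch of that detail-reinjection step (random stand-in tensors are used here purely for illustration; the actual loop over octaves appears further below):

import tensorflow as tf

# Toy illustration of reinjecting the detail lost by working at a smaller scale.
original_img = tf.random.uniform((1, 64, 64, 3))  # stand-in for the full-size image
shrunk_original_img = tf.image.resize(original_img, (32, 32))  # original at the previous octave
img = tf.image.resize(original_img, (48, 48))  # stand-in for the dreamed image
shape = (48, 48)  # current octave size

upscaled_shrunk_original = tf.image.resize(shrunk_original_img, shape)  # blurry: detail was lost
same_size_original = tf.image.resize(original_img, shape)  # sharp reference at this scale
lost_detail = same_size_original - upscaled_shrunk_original  # the high-frequency detail
img += lost_detail  # reinject it into the dreamed image

The difference between the sharp resized original and the blurry upscaled copy is exactly the detail that disappeared when shrinking, so adding it back keeps the dreamed image from looking washed out at each new scale.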
Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras.applications import inception_v3 base_image_path = keras.utils.get_file(\"sky.jpg\", \"https://i.imgur.com/aGBdQyK.jpg\") result_prefix = \"sky_dream\" # These are the names of the layers # for which we try to maximize activation, # as well as their weight in the final loss # we try to maximize. # You can tweak these setting to obtain new visual effects. layer_settings = { \"mixed4\": 1.0, \"mixed5\": 1.5, \"mixed6\": 2.0, \"mixed7\": 2.5, } # Playing with these hyperparameters will also allow you to achieve new effects step = 0.01 # Gradient ascent step size num_octave = 3 # Number of scales at which to run gradient ascent octave_scale = 1.4 # Size ratio between scales iterations = 20 # Number of ascent steps per scale max_loss = 15.0 This is our base image: from IPython.display import Image, display display(Image(base_image_path)) jpeg Let's set up some image preprocessing/deprocessing utilities: def preprocess_image(image_path): # Util function to open, resize and format pictures # into appropriate arrays. img = keras.preprocessing.image.load_img(image_path) img = keras.preprocessing.image.img_to_array(img) img = np.expand_dims(img, axis=0) img = inception_v3.preprocess_input(img) return img def deprocess_image(x): # Util function to convert a NumPy array into a valid image. x = x.reshape((x.shape[1], x.shape[2], 3)) # Undo inception v3 preprocessing x /= 2.0 x += 0.5 x *= 255.0 # Convert to uint8 and clip to the valid range [0, 255] x = np.clip(x, 0, 255).astype(\"uint8\") return x Compute the Deep Dream loss First, build a feature extraction model to retrieve the activations of our target layers given an input image. # Build an InceptionV3 model loaded with pre-trained ImageNet weights model = inception_v3.InceptionV3(weights=\"imagenet\", include_top=False) # Get the symbolic outputs of each \"key\" layer (we gave them unique names). outputs_dict = dict( [ (layer.name, layer.output) for layer in [model.get_layer(name) for name in layer_settings.keys()] ] ) # Set up a model that returns the activation values for every target layer # (as a dict) feature_extractor = keras.Model(inputs=model.inputs, outputs=outputs_dict) The actual loss computation is very simple: def compute_loss(input_image): features = feature_extractor(input_image) # Initialize the loss loss = tf.zeros(shape=()) for name in features.keys(): coeff = layer_settings[name] activation = features[name] # We avoid border artifacts by only involving non-border pixels in the loss. scaling = tf.reduce_prod(tf.cast(tf.shape(activation), \"float32\")) loss += coeff * tf.reduce_sum(tf.square(activation[:, 2:-2, 2:-2, :])) / scaling return loss Set up the gradient ascent loop for one octave @tf.function def gradient_ascent_step(img, learning_rate): with tf.GradientTape() as tape: tape.watch(img) loss = compute_loss(img) # Compute gradients. grads = tape.gradient(loss, img) # Normalize gradients. grads /= tf.maximum(tf.reduce_mean(tf.abs(grads)), 1e-6) img += learning_rate * grads return loss, img def gradient_ascent_loop(img, iterations, learning_rate, max_loss=None): for i in range(iterations): loss, img = gradient_ascent_step(img, learning_rate) if max_loss is not None and loss > max_loss: break print(\"... 
Loss value at step %d: %.2f\" % (i, loss)) return img Run the training loop, iterating over different octaves original_img = preprocess_image(base_image_path) original_shape = original_img.shape[1:3] successive_shapes = [original_shape] for i in range(1, num_octave): shape = tuple([int(dim / (octave_scale ** i)) for dim in original_shape]) successive_shapes.append(shape) successive_shapes = successive_shapes[::-1] shrunk_original_img = tf.image.resize(original_img, successive_shapes[0]) img = tf.identity(original_img) # Make a copy for i, shape in enumerate(successive_shapes): print(\"Processing octave %d with shape %s\" % (i, shape)) img = tf.image.resize(img, shape) img = gradient_ascent_loop( img, iterations=iterations, learning_rate=step, max_loss=max_loss ) upscaled_shrunk_original_img = tf.image.resize(shrunk_original_img, shape) same_size_original = tf.image.resize(original_img, shape) lost_detail = same_size_original - upscaled_shrunk_original_img img += lost_detail shrunk_original_img = tf.image.resize(original_img, shape) keras.preprocessing.image.save_img(result_prefix + \".png\", deprocess_image(img.numpy())) Processing octave 0 with shape (326, 489) ... Loss value at step 0: 0.44 ... Loss value at step 1: 0.62 ... Loss value at step 2: 0.90 ... Loss value at step 3: 1.25 ... Loss value at step 4: 1.57 ... Loss value at step 5: 1.92 ... Loss value at step 6: 2.20 ... Loss value at step 7: 2.52 ... Loss value at step 8: 2.82 ... Loss value at step 9: 3.11 ... Loss value at step 10: 3.39 ... Loss value at step 11: 3.67 ... Loss value at step 12: 3.93 ... Loss value at step 13: 4.19 ... Loss value at step 14: 4.42 ... Loss value at step 15: 4.69 ... Loss value at step 16: 4.93 ... Loss value at step 17: 5.18 ... Loss value at step 18: 5.47 ... Loss value at step 19: 5.70 Processing octave 1 with shape (457, 685) ... Loss value at step 0: 1.08 ... Loss value at step 1: 1.74 ... Loss value at step 2: 2.30 ... Loss value at step 3: 2.79 ... Loss value at step 4: 3.21 ... Loss value at step 5: 3.64 ... Loss value at step 6: 4.04 ... Loss value at step 7: 4.42 ... Loss value at step 8: 4.78 ... Loss value at step 9: 5.13 ... Loss value at step 10: 5.49 ... Loss value at step 11: 5.82 ... Loss value at step 12: 6.14 ... Loss value at step 13: 6.43 ... Loss value at step 14: 6.78 ... Loss value at step 15: 7.07 ... Loss value at step 16: 7.36 ... Loss value at step 17: 7.64 ... Loss value at step 18: 7.94 ... Loss value at step 19: 8.21 Processing octave 2 with shape (640, 960) ... Loss value at step 0: 1.25 ... Loss value at step 1: 2.02 ... Loss value at step 2: 2.65 ... Loss value at step 3: 3.18 ... Loss value at step 4: 3.68 ... Loss value at step 5: 4.18 ... Loss value at step 6: 4.63 ... Loss value at step 7: 5.09 ... Loss value at step 8: 5.49 ... Loss value at step 9: 5.90 ... Loss value at step 10: 6.24 ... Loss value at step 11: 6.57 ... Loss value at step 12: 6.84 ... Loss value at step 13: 7.21 ... Loss value at step 14: 7.59 ... Loss value at step 15: 7.89 ... Loss value at step 16: 8.18 ... Loss value at step 17: 8.55 ... Loss value at step 18: 8.84 ... Loss value at step 19: 9.13 Display the result. display(Image(result_prefix + \".png\")) pngpng Estimating the density distribution of the 'double moon' dataset. Introduction The aim of this work is to map a simple distribution - which is easy to sample and whose density is simple to estimate - to a more complex one learned from the data. This kind of generative model is also known as \"normalizing flow\". 
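For reference (notation ours), writing z = f(x) for the invertible map from data space to latent space and p_Z for the simple latent density, the change-of-variable formula gives the data log-density as

\log p_X(x) = \log p_Z(f(x)) + \log \left| \det \frac{\partial f(x)}{\partial x} \right|

The Jacobian log-determinant is cheap to compute for affine coupling layers, and the log_loss method defined below simply returns the negative mean of these two terms over a batch.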
In order to do this, the model is trained via the maximum likelihood principle, using the \"change of variable\" formula. We will use an affine coupling function. We create it such that its inverse, as well as the determinant of the Jacobian, are easy to obtain (more details in the referenced paper). Requirements: Tensorflow 2.3 Tensorflow probability 0.11.0 Reference: Density estimation using Real NVP Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras import regularizers from sklearn.datasets import make_moons import numpy as np import matplotlib.pyplot as plt import tensorflow_probability as tfp Load the data data = make_moons(3000, noise=0.05)[0].astype(\"float32\") norm = layers.Normalization() norm.adapt(data) normalized_data = norm(data) Affine coupling layer # Creating a custom layer with keras API. output_dim = 256 reg = 0.01 def Coupling(input_shape): input = keras.layers.Input(shape=input_shape) t_layer_1 = keras.layers.Dense( output_dim, activation=\"relu\", kernel_regularizer=regularizers.l2(reg) )(input) t_layer_2 = keras.layers.Dense( output_dim, activation=\"relu\", kernel_regularizer=regularizers.l2(reg) )(t_layer_1) t_layer_3 = keras.layers.Dense( output_dim, activation=\"relu\", kernel_regularizer=regularizers.l2(reg) )(t_layer_2) t_layer_4 = keras.layers.Dense( output_dim, activation=\"relu\", kernel_regularizer=regularizers.l2(reg) )(t_layer_3) t_layer_5 = keras.layers.Dense( input_shape, activation=\"linear\", kernel_regularizer=regularizers.l2(reg) )(t_layer_4) s_layer_1 = keras.layers.Dense( output_dim, activation=\"relu\", kernel_regularizer=regularizers.l2(reg) )(input) s_layer_2 = keras.layers.Dense( output_dim, activation=\"relu\", kernel_regularizer=regularizers.l2(reg) )(s_layer_1) s_layer_3 = keras.layers.Dense( output_dim, activation=\"relu\", kernel_regularizer=regularizers.l2(reg) )(s_layer_2) s_layer_4 = keras.layers.Dense( output_dim, activation=\"relu\", kernel_regularizer=regularizers.l2(reg) )(s_layer_3) s_layer_5 = keras.layers.Dense( input_shape, activation=\"tanh\", kernel_regularizer=regularizers.l2(reg) )(s_layer_4) return keras.Model(inputs=input, outputs=[s_layer_5, t_layer_5]) Real NVP class RealNVP(keras.Model): def __init__(self, num_coupling_layers): super(RealNVP, self).__init__() self.num_coupling_layers = num_coupling_layers # Distribution of the latent space. self.distribution = tfp.distributions.MultivariateNormalDiag( loc=[0.0, 0.0], scale_diag=[1.0, 1.0] ) self.masks = np.array( [[0, 1], [1, 0]] * (num_coupling_layers // 2), dtype=\"float32\" ) self.loss_tracker = keras.metrics.Mean(name=\"loss\") self.layers_list = [Coupling(2) for i in range(num_coupling_layers)] @property def metrics(self): \"\"\"List of the model's metrics. We make sure the loss tracker is listed as part of `model.metrics` so that `fit()` and `evaluate()` are able to `reset()` the loss tracker at the start of each epoch and at the start of an `evaluate()` call. 
\"\"\" return [self.loss_tracker] def call(self, x, training=True): log_det_inv = 0 direction = 1 if training: direction = -1 for i in range(self.num_coupling_layers)[::direction]: x_masked = x * self.masks[i] reversed_mask = 1 - self.masks[i] s, t = self.layers_list[i](x_masked) s *= reversed_mask t *= reversed_mask gate = (direction - 1) / 2 x = ( reversed_mask * (x * tf.exp(direction * s) + direction * t * tf.exp(gate * s)) + x_masked ) log_det_inv += gate * tf.reduce_sum(s, [1]) return x, log_det_inv # Log likelihood of the normal distribution plus the log determinant of the jacobian. def log_loss(self, x): y, logdet = self(x) log_likelihood = self.distribution.log_prob(y) + logdet return -tf.reduce_mean(log_likelihood) def train_step(self, data): with tf.GradientTape() as tape: loss = self.log_loss(data) g = tape.gradient(loss, self.trainable_variables) self.optimizer.apply_gradients(zip(g, self.trainable_variables)) self.loss_tracker.update_state(loss) return {\"loss\": self.loss_tracker.result()} def test_step(self, data): loss = self.log_loss(data) self.loss_tracker.update_state(loss) return {\"loss\": self.loss_tracker.result()} Model training model = RealNVP(num_coupling_layers=6) model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0001)) history = model.fit( normalized_data, batch_size=256, epochs=300, verbose=2, validation_split=0.2 ) Epoch 1/300 10/10 - 1s - loss: 2.7178 - val_loss: 2.5872 Epoch 2/300 10/10 - 0s - loss: 2.6151 - val_loss: 2.5421 Epoch 3/300 10/10 - 0s - loss: 2.5702 - val_loss: 2.5001 Epoch 4/300 10/10 - 0s - loss: 2.5241 - val_loss: 2.4650 Epoch 5/300 10/10 - 0s - loss: 2.4934 - val_loss: 2.4377 Epoch 6/300 10/10 - 0s - loss: 2.4684 - val_loss: 2.4236 Epoch 7/300 10/10 - 0s - loss: 2.4420 - val_loss: 2.3976 Epoch 8/300 10/10 - 0s - loss: 2.4185 - val_loss: 2.3722 Epoch 9/300 10/10 - 0s - loss: 2.3857 - val_loss: 2.3591 Epoch 10/300 10/10 - 0s - loss: 2.3611 - val_loss: 2.3341 Epoch 11/300 10/10 - 0s - loss: 2.3323 - val_loss: 2.2999 Epoch 12/300 10/10 - 0s - loss: 2.3035 - val_loss: 2.2688 Epoch 13/300 10/10 - 0s - loss: 2.2694 - val_loss: 2.2435 Epoch 14/300 10/10 - 0s - loss: 2.2359 - val_loss: 2.2137 Epoch 15/300 10/10 - 0s - loss: 2.2053 - val_loss: 2.1877 Epoch 16/300 10/10 - 0s - loss: 2.1775 - val_loss: 2.1626 Epoch 17/300 10/10 - 0s - loss: 2.1546 - val_loss: 2.1257 Epoch 18/300 10/10 - 0s - loss: 2.1310 - val_loss: 2.1022 Epoch 19/300 10/10 - 0s - loss: 2.1258 - val_loss: 2.1022 Epoch 20/300 10/10 - 0s - loss: 2.1097 - val_loss: 2.0670 Epoch 21/300 10/10 - 0s - loss: 2.0811 - val_loss: 2.0502 Epoch 22/300 10/10 - 0s - loss: 2.0407 - val_loss: 2.0235 Epoch 23/300 10/10 - 0s - loss: 2.0169 - val_loss: 1.9946 Epoch 24/300 10/10 - 0s - loss: 2.0011 - val_loss: 1.9843 Epoch 25/300 10/10 - 0s - loss: 2.0151 - val_loss: 1.9728 Epoch 26/300 10/10 - 0s - loss: 1.9427 - val_loss: 1.9473 Epoch 27/300 10/10 - 0s - loss: 1.9266 - val_loss: 1.9245 Epoch 28/300 10/10 - 0s - loss: 1.8574 - val_loss: 1.7811 Epoch 29/300 10/10 - 0s - loss: 1.7765 - val_loss: 1.7016 Epoch 30/300 10/10 - 0s - loss: 1.7020 - val_loss: 1.6801 Epoch 31/300 10/10 - 0s - loss: 1.6935 - val_loss: 1.6644 Epoch 32/300 10/10 - 0s - loss: 1.6643 - val_loss: 1.6998 Epoch 33/300 10/10 - 0s - loss: 1.6733 - val_loss: 1.7054 Epoch 34/300 10/10 - 0s - loss: 1.6405 - val_loss: 1.6217 Epoch 35/300 10/10 - 0s - loss: 1.6035 - val_loss: 1.6094 Epoch 36/300 10/10 - 0s - loss: 1.5700 - val_loss: 1.6086 Epoch 37/300 10/10 - 0s - loss: 1.5750 - val_loss: 1.6160 Epoch 38/300 10/10 - 0s - loss: 1.5512 
- val_loss: 1.6023 Epoch 39/300 10/10 - 0s - loss: 1.5664 - val_loss: 1.5859 Epoch 40/300 10/10 - 0s - loss: 1.5949 - val_loss: 1.6684 Epoch 41/300 10/10 - 0s - loss: 1.6125 - val_loss: 1.5688 Epoch 42/300 10/10 - 0s - loss: 1.5855 - val_loss: 1.5783 Epoch 43/300 10/10 - 0s - loss: 1.5394 - val_loss: 1.5332 Epoch 44/300 10/10 - 0s - loss: 1.5093 - val_loss: 1.6073 Epoch 45/300 10/10 - 0s - loss: 1.5417 - val_loss: 1.5910 Epoch 46/300 10/10 - 0s - loss: 1.5095 - val_loss: 1.5061 Epoch 47/300 10/10 - 0s - loss: 1.4626 - val_loss: 1.5143 Epoch 48/300 10/10 - 0s - loss: 1.4588 - val_loss: 1.5005 Epoch 49/300 10/10 - 0s - loss: 1.4683 - val_loss: 1.5071 Epoch 50/300 10/10 - 0s - loss: 1.4285 - val_loss: 1.5894 Epoch 51/300 10/10 - 0s - loss: 1.4110 - val_loss: 1.4964 Epoch 52/300 10/10 - 0s - loss: 1.4510 - val_loss: 1.5608 Epoch 53/300 10/10 - 0s - loss: 1.4584 - val_loss: 1.5640 Epoch 54/300 10/10 - 0s - loss: 1.4393 - val_loss: 1.5073 Epoch 55/300 10/10 - 0s - loss: 1.4248 - val_loss: 1.5284 Epoch 56/300 10/10 - 0s - loss: 1.4659 - val_loss: 1.4654 Epoch 57/300 10/10 - 0s - loss: 1.4572 - val_loss: 1.4633 Epoch 58/300 10/10 - 0s - loss: 1.4254 - val_loss: 1.4536 Epoch 59/300 10/10 - 0s - loss: 1.3927 - val_loss: 1.4672 Epoch 60/300 10/10 - 0s - loss: 1.3782 - val_loss: 1.4166 Epoch 61/300 10/10 - 0s - loss: 1.3674 - val_loss: 1.4340 Epoch 62/300 10/10 - 0s - loss: 1.3521 - val_loss: 1.4302 Epoch 63/300 10/10 - 0s - loss: 1.3656 - val_loss: 1.4610 Epoch 64/300 10/10 - 0s - loss: 1.3916 - val_loss: 1.5597 Epoch 65/300 10/10 - 0s - loss: 1.4478 - val_loss: 1.4781 Epoch 66/300 10/10 - 0s - loss: 1.3987 - val_loss: 1.5077 Epoch 67/300 10/10 - 0s - loss: 1.3553 - val_loss: 1.4511 Epoch 68/300 10/10 - 0s - loss: 1.3901 - val_loss: 1.4013 Epoch 69/300 10/10 - 0s - loss: 1.3682 - val_loss: 1.4378 Epoch 70/300 10/10 - 0s - loss: 1.3688 - val_loss: 1.4445 Epoch 71/300 10/10 - 0s - loss: 1.3341 - val_loss: 1.4139 Epoch 72/300 10/10 - 0s - loss: 1.3621 - val_loss: 1.5097 Epoch 73/300 10/10 - 0s - loss: 1.4158 - val_loss: 1.4735 Epoch 74/300 10/10 - 0s - loss: 1.4013 - val_loss: 1.4390 Epoch 75/300 10/10 - 0s - loss: 1.3637 - val_loss: 1.4306 Epoch 76/300 10/10 - 0s - loss: 1.3278 - val_loss: 1.4007 Epoch 77/300 10/10 - 0s - loss: 1.3153 - val_loss: 1.4226 Epoch 78/300 10/10 - 0s - loss: 1.3687 - val_loss: 1.4315 Epoch 79/300 10/10 - 0s - loss: 1.3377 - val_loss: 1.4520 Epoch 80/300 10/10 - 0s - loss: 1.3214 - val_loss: 1.4643 Epoch 81/300 10/10 - 0s - loss: 1.2906 - val_loss: 1.5738 Epoch 82/300 10/10 - 0s - loss: 1.3231 - val_loss: 1.8303 Epoch 83/300 10/10 - 0s - loss: 1.3099 - val_loss: 1.4406 Epoch 84/300 10/10 - 0s - loss: 1.3427 - val_loss: 1.5539 Epoch 85/300 10/10 - 0s - loss: 1.3270 - val_loss: 1.5454 Epoch 86/300 10/10 - 0s - loss: 1.3959 - val_loss: 1.4328 Epoch 87/300 10/10 - 0s - loss: 1.3469 - val_loss: 1.4087 Epoch 88/300 10/10 - 0s - loss: 1.3383 - val_loss: 1.4003 Epoch 89/300 10/10 - 0s - loss: 1.2968 - val_loss: 1.4284 Epoch 90/300 10/10 - 0s - loss: 1.4229 - val_loss: 1.4831 Epoch 91/300 10/10 - 0s - loss: 1.4664 - val_loss: 1.4332 Epoch 92/300 10/10 - 0s - loss: 1.4076 - val_loss: 1.4708 Epoch 93/300 10/10 - 0s - loss: 1.3508 - val_loss: 1.3865 Epoch 94/300 10/10 - 0s - loss: 1.3170 - val_loss: 1.3794 Epoch 95/300 10/10 - 0s - loss: 1.3266 - val_loss: 1.5315 Epoch 96/300 10/10 - 0s - loss: 1.3247 - val_loss: 1.4001 Epoch 97/300 10/10 - 0s - loss: 1.2963 - val_loss: 1.4036 Epoch 98/300 10/10 - 0s - loss: 1.2839 - val_loss: 1.4195 Epoch 99/300 10/10 - 0s - loss: 1.3517 - val_loss: 
1.4023 Epoch 100/300 10/10 - 0s - loss: 1.3468 - val_loss: 1.4460 Epoch 101/300 10/10 - 0s - loss: 1.3938 - val_loss: 1.4292 Epoch 102/300 10/10 - 0s - loss: 1.3313 - val_loss: 1.4288 Epoch 103/300 10/10 - 0s - loss: 1.3267 - val_loss: 1.3968 Epoch 104/300 10/10 - 0s - loss: 1.3321 - val_loss: 1.4145 Epoch 105/300 10/10 - 0s - loss: 1.2973 - val_loss: 1.3500 Epoch 106/300 10/10 - 0s - loss: 1.2455 - val_loss: 1.4672 Epoch 107/300 10/10 - 0s - loss: 1.3255 - val_loss: 1.4633 Epoch 108/300 10/10 - 0s - loss: 1.3379 - val_loss: 1.3717 Epoch 109/300 10/10 - 0s - loss: 1.3243 - val_loss: 1.4118 Epoch 110/300 10/10 - 0s - loss: 1.3184 - val_loss: 1.3922 Epoch 111/300 10/10 - 0s - loss: 1.2779 - val_loss: 1.3783 Epoch 112/300 10/10 - 0s - loss: 1.3495 - val_loss: 1.6651 Epoch 113/300 10/10 - 0s - loss: 1.5595 - val_loss: 1.5984 Epoch 114/300 10/10 - 0s - loss: 1.4541 - val_loss: 1.4844 Epoch 115/300 10/10 - 0s - loss: 1.4001 - val_loss: 1.4477 Epoch 116/300 10/10 - 0s - loss: 1.3305 - val_loss: 1.4097 Epoch 117/300 10/10 - 0s - loss: 1.3084 - val_loss: 1.3643 Epoch 118/300 10/10 - 0s - loss: 1.2993 - val_loss: 1.3726 Epoch 119/300 10/10 - 0s - loss: 1.2624 - val_loss: 1.3927 Epoch 120/300 10/10 - 0s - loss: 1.3288 - val_loss: 1.3912 Epoch 121/300 10/10 - 0s - loss: 1.2925 - val_loss: 1.3809 Epoch 122/300 10/10 - 0s - loss: 1.2756 - val_loss: 1.3434 Epoch 123/300 10/10 - 0s - loss: 1.2540 - val_loss: 1.3699 Epoch 124/300 10/10 - 0s - loss: 1.3008 - val_loss: 1.3272 Epoch 125/300 10/10 - 0s - loss: 1.2932 - val_loss: 1.3365 Epoch 126/300 10/10 - 0s - loss: 1.2844 - val_loss: 1.3824 Epoch 127/300 10/10 - 0s - loss: 1.2688 - val_loss: 1.3413 Epoch 128/300 10/10 - 0s - loss: 1.2636 - val_loss: 1.3659 Epoch 129/300 10/10 - 0s - loss: 1.2590 - val_loss: 1.3724 Epoch 130/300 10/10 - 0s - loss: 1.4471 - val_loss: 1.4119 Epoch 131/300 10/10 - 0s - loss: 1.5125 - val_loss: 1.5486 Epoch 132/300 10/10 - 0s - loss: 1.5826 - val_loss: 1.4578 Epoch 133/300 10/10 - 0s - loss: 1.4168 - val_loss: 1.4405 Epoch 134/300 10/10 - 0s - loss: 1.3739 - val_loss: 1.4728 Epoch 135/300 10/10 - 0s - loss: 1.3304 - val_loss: 1.3734 Epoch 136/300 10/10 - 0s - loss: 1.2987 - val_loss: 1.3769 Epoch 137/300 10/10 - 0s - loss: 1.2883 - val_loss: 1.3542 Epoch 138/300 10/10 - 0s - loss: 1.2805 - val_loss: 1.4974 Epoch 139/300 10/10 - 0s - loss: 1.3558 - val_loss: 1.3958 Epoch 140/300 10/10 - 0s - loss: 1.3244 - val_loss: 1.3705 Epoch 141/300 10/10 - 0s - loss: 1.3043 - val_loss: 1.3563 Epoch 142/300 10/10 - 0s - loss: 1.3302 - val_loss: 1.3611 Epoch 143/300 10/10 - 0s - loss: 1.3188 - val_loss: 1.4500 Epoch 144/300 10/10 - 0s - loss: 1.3100 - val_loss: 1.3893 Epoch 145/300 10/10 - 0s - loss: 1.2864 - val_loss: 1.3436 Epoch 146/300 10/10 - 0s - loss: 1.3013 - val_loss: 1.3548 Epoch 147/300 10/10 - 0s - loss: 1.2672 - val_loss: 1.4179 Epoch 148/300 10/10 - 0s - loss: 1.2650 - val_loss: 1.3705 Epoch 149/300 10/10 - 0s - loss: 1.2931 - val_loss: 1.3274 Epoch 150/300 10/10 - 0s - loss: 1.3365 - val_loss: 1.4164 Epoch 151/300 10/10 - 0s - loss: 1.3562 - val_loss: 1.3815 Epoch 152/300 10/10 - 0s - loss: 1.3067 - val_loss: 1.4100 Epoch 153/300 10/10 - 0s - loss: 1.2752 - val_loss: 1.3928 Epoch 154/300 10/10 - 0s - loss: 1.2659 - val_loss: 1.3512 Epoch 155/300 10/10 - 0s - loss: 1.2696 - val_loss: 1.3715 Epoch 156/300 10/10 - 0s - loss: 1.2719 - val_loss: 1.3366 Epoch 157/300 10/10 - 0s - loss: 1.2718 - val_loss: 1.5284 Epoch 158/300 10/10 - 0s - loss: 1.3099 - val_loss: 1.3342 Epoch 159/300 10/10 - 0s - loss: 1.2655 - val_loss: 1.3692 Epoch 
160/300 10/10 - 0s - loss: 1.2694 - val_loss: 1.5034 Epoch 161/300 10/10 - 0s - loss: 1.3370 - val_loss: 1.3611 Epoch 162/300 10/10 - 0s - loss: 1.2799 - val_loss: 1.3745 Epoch 163/300 10/10 - 0s - loss: 1.2714 - val_loss: 1.3639 Epoch 164/300 10/10 - 0s - loss: 1.2711 - val_loss: 1.3178 Epoch 165/300 10/10 - 0s - loss: 1.2754 - val_loss: 1.3722 Epoch 166/300 10/10 - 0s - loss: 1.2515 - val_loss: 1.3407 Epoch 167/300 10/10 - 0s - loss: 1.2431 - val_loss: 1.4075 Epoch 168/300 10/10 - 0s - loss: 1.2534 - val_loss: 1.3128 Epoch 169/300 10/10 - 0s - loss: 1.2159 - val_loss: 1.3614 Epoch 170/300 10/10 - 0s - loss: 1.2591 - val_loss: 1.3247 Epoch 171/300 10/10 - 0s - loss: 1.2424 - val_loss: 1.3186 Epoch 172/300 10/10 - 0s - loss: 1.2218 - val_loss: 1.3259 Epoch 173/300 10/10 - 0s - loss: 1.2328 - val_loss: 1.3401 Epoch 174/300 10/10 - 0s - loss: 1.2168 - val_loss: 1.3092 Epoch 175/300 10/10 - 0s - loss: 1.2779 - val_loss: 1.3349 Epoch 176/300 10/10 - 0s - loss: 1.2560 - val_loss: 1.3331 Epoch 177/300 10/10 - 0s - loss: 1.2445 - val_loss: 1.3119 Epoch 178/300 10/10 - 0s - loss: 1.2250 - val_loss: 1.3168 Epoch 179/300 10/10 - 0s - loss: 1.2139 - val_loss: 1.3217 Epoch 180/300 10/10 - 0s - loss: 1.2020 - val_loss: 1.2753 Epoch 181/300 10/10 - 0s - loss: 1.1906 - val_loss: 1.2765 Epoch 182/300 10/10 - 0s - loss: 1.2045 - val_loss: 1.2821 Epoch 183/300 10/10 - 0s - loss: 1.2229 - val_loss: 1.2810 Epoch 184/300 10/10 - 0s - loss: 1.1967 - val_loss: 1.3295 Epoch 185/300 10/10 - 0s - loss: 1.1852 - val_loss: 1.2866 Epoch 186/300 10/10 - 0s - loss: 1.1941 - val_loss: 1.3126 Epoch 187/300 10/10 - 0s - loss: 1.1783 - val_loss: 1.3282 Epoch 188/300 10/10 - 0s - loss: 1.1758 - val_loss: 1.2702 Epoch 189/300 10/10 - 0s - loss: 1.1763 - val_loss: 1.2694 Epoch 190/300 10/10 - 0s - loss: 1.1802 - val_loss: 1.3377 Epoch 191/300 10/10 - 0s - loss: 1.1989 - val_loss: 1.2996 Epoch 192/300 10/10 - 0s - loss: 1.1998 - val_loss: 1.2948 Epoch 193/300 10/10 - 0s - loss: 1.1977 - val_loss: 1.3324 Epoch 194/300 10/10 - 0s - loss: 1.1756 - val_loss: 1.3388 Epoch 195/300 10/10 - 0s - loss: 1.1738 - val_loss: 1.3121 Epoch 196/300 10/10 - 0s - loss: 1.1752 - val_loss: 1.2886 Epoch 197/300 10/10 - 0s - loss: 1.1894 - val_loss: 1.2996 Epoch 198/300 10/10 - 0s - loss: 1.1771 - val_loss: 1.2697 Epoch 199/300 10/10 - 0s - loss: 1.1741 - val_loss: 1.2830 Epoch 200/300 10/10 - 0s - loss: 1.1775 - val_loss: 1.3095 Epoch 201/300 10/10 - 0s - loss: 1.1814 - val_loss: 1.2873 Epoch 202/300 10/10 - 0s - loss: 1.1782 - val_loss: 1.2748 Epoch 203/300 10/10 - 0s - loss: 1.1623 - val_loss: 1.2861 Epoch 204/300 10/10 - 0s - loss: 1.1691 - val_loss: 1.2960 Epoch 205/300 10/10 - 0s - loss: 1.1722 - val_loss: 1.3015 Epoch 206/300 10/10 - 0s - loss: 1.2002 - val_loss: 1.2970 Epoch 207/300 10/10 - 0s - loss: 1.1916 - val_loss: 1.3317 Epoch 208/300 10/10 - 0s - loss: 1.1938 - val_loss: 1.3479 Epoch 209/300 10/10 - 0s - loss: 1.2207 - val_loss: 1.2718 Epoch 210/300 10/10 - 0s - loss: 1.1927 - val_loss: 1.2947 Epoch 211/300 10/10 - 0s - loss: 1.1799 - val_loss: 1.2910 Epoch 212/300 10/10 - 0s - loss: 1.1877 - val_loss: 1.3001 Epoch 213/300 10/10 - 0s - loss: 1.1671 - val_loss: 1.2740 Epoch 214/300 10/10 - 0s - loss: 1.2021 - val_loss: 1.3010 Epoch 215/300 10/10 - 0s - loss: 1.1937 - val_loss: 1.2906 Epoch 216/300 10/10 - 0s - loss: 1.1659 - val_loss: 1.2879 Epoch 217/300 10/10 - 0s - loss: 1.1914 - val_loss: 1.2839 Epoch 218/300 10/10 - 0s - loss: 1.1787 - val_loss: 1.2966 Epoch 219/300 10/10 - 0s - loss: 1.1651 - val_loss: 1.2927 Epoch 220/300 10/10 
- 0s - loss: 1.1803 - val_loss: 1.2818 Epoch 221/300 10/10 - 0s - loss: 1.1701 - val_loss: 1.2787 Epoch 222/300 10/10 - 0s - loss: 1.2009 - val_loss: 1.3056 Epoch 223/300 10/10 - 0s - loss: 1.1741 - val_loss: 1.3055 Epoch 224/300 10/10 - 0s - loss: 1.1955 - val_loss: 1.3187 Epoch 225/300 10/10 - 0s - loss: 1.2137 - val_loss: 1.2908 Epoch 226/300 10/10 - 0s - loss: 1.1723 - val_loss: 1.2808 Epoch 227/300 10/10 - 0s - loss: 1.1682 - val_loss: 1.2974 Epoch 228/300 10/10 - 0s - loss: 1.1569 - val_loss: 1.3180 Epoch 229/300 10/10 - 0s - loss: 1.1848 - val_loss: 1.2840 Epoch 230/300 10/10 - 0s - loss: 1.1912 - val_loss: 1.2940 Epoch 231/300 10/10 - 0s - loss: 1.1633 - val_loss: 1.2905 Epoch 232/300 10/10 - 0s - loss: 1.1539 - val_loss: 1.2985 Epoch 233/300 10/10 - 0s - loss: 1.1574 - val_loss: 1.2750 Epoch 234/300 10/10 - 0s - loss: 1.1555 - val_loss: 1.2690 Epoch 235/300 10/10 - 0s - loss: 1.1519 - val_loss: 1.2961 Epoch 236/300 10/10 - 0s - loss: 1.1763 - val_loss: 1.2750 Epoch 237/300 10/10 - 0s - loss: 1.1670 - val_loss: 1.3295 Epoch 238/300 10/10 - 0s - loss: 1.1574 - val_loss: 1.2904 Epoch 239/300 10/10 - 0s - loss: 1.1588 - val_loss: 1.3034 Epoch 240/300 10/10 - 0s - loss: 1.1630 - val_loss: 1.2803 Epoch 241/300 10/10 - 0s - loss: 1.1688 - val_loss: 1.2860 Epoch 242/300 10/10 - 0s - loss: 1.1730 - val_loss: 1.3309 Epoch 243/300 10/10 - 0s - loss: 1.2057 - val_loss: 1.3330 Epoch 244/300 10/10 - 0s - loss: 1.1706 - val_loss: 1.3037 Epoch 245/300 10/10 - 0s - loss: 1.1526 - val_loss: 1.2910 Epoch 246/300 10/10 - 0s - loss: 1.1625 - val_loss: 1.2869 Epoch 247/300 10/10 - 0s - loss: 1.1555 - val_loss: 1.3253 Epoch 248/300 10/10 - 0s - loss: 1.1527 - val_loss: 1.3349 Epoch 249/300 10/10 - 0s - loss: 1.1544 - val_loss: 1.2894 Epoch 250/300 10/10 - 0s - loss: 1.1434 - val_loss: 1.2844 Epoch 251/300 10/10 - 0s - loss: 1.1479 - val_loss: 1.3500 Epoch 252/300 10/10 - 0s - loss: 1.1594 - val_loss: 1.3206 Epoch 253/300 10/10 - 0s - loss: 1.1975 - val_loss: 1.2897 Epoch 254/300 10/10 - 0s - loss: 1.1800 - val_loss: 1.2983 Epoch 255/300 10/10 - 0s - loss: 1.1656 - val_loss: 1.2979 Epoch 256/300 10/10 - 0s - loss: 1.1658 - val_loss: 1.3044 Epoch 257/300 10/10 - 0s - loss: 1.1665 - val_loss: 1.2955 Epoch 258/300 10/10 - 0s - loss: 1.1577 - val_loss: 1.2998 Epoch 259/300 10/10 - 0s - loss: 1.1625 - val_loss: 1.3247 Epoch 260/300 10/10 - 0s - loss: 1.1652 - val_loss: 1.3172 Epoch 261/300 10/10 - 0s - loss: 1.1551 - val_loss: 1.2899 Epoch 262/300 10/10 - 0s - loss: 1.1433 - val_loss: 1.2832 Epoch 263/300 10/10 - 0s - loss: 1.1498 - val_loss: 1.2781 Epoch 264/300 10/10 - 0s - loss: 1.1599 - val_loss: 1.3124 Epoch 265/300 10/10 - 0s - loss: 1.1693 - val_loss: 1.2873 Epoch 266/300 10/10 - 0s - loss: 1.1663 - val_loss: 1.2625 Epoch 267/300 10/10 - 0s - loss: 1.1706 - val_loss: 1.2935 Epoch 268/300 10/10 - 0s - loss: 1.1641 - val_loss: 1.2688 Epoch 269/300 10/10 - 0s - loss: 1.1564 - val_loss: 1.2748 Epoch 270/300 10/10 - 0s - loss: 1.1558 - val_loss: 1.2903 Epoch 271/300 10/10 - 0s - loss: 1.1699 - val_loss: 1.3047 Epoch 272/300 10/10 - 0s - loss: 1.1511 - val_loss: 1.3155 Epoch 273/300 10/10 - 0s - loss: 1.1574 - val_loss: 1.3227 Epoch 274/300 10/10 - 0s - loss: 1.2026 - val_loss: 1.2986 Epoch 275/300 10/10 - 0s - loss: 1.1880 - val_loss: 1.3880 Epoch 276/300 10/10 - 0s - loss: 1.1912 - val_loss: 1.3257 Epoch 277/300 10/10 - 0s - loss: 1.2500 - val_loss: 1.3678 Epoch 278/300 10/10 - 0s - loss: 1.2577 - val_loss: 1.3459 Epoch 279/300 10/10 - 0s - loss: 1.2060 - val_loss: 1.3124 Epoch 280/300 10/10 - 0s - loss: 
1.1785 - val_loss: 1.2839 Epoch 281/300 10/10 - 0s - loss: 1.1617 - val_loss: 1.2958 Epoch 282/300 10/10 - 0s - loss: 1.1535 - val_loss: 1.2837 Epoch 283/300 10/10 - 0s - loss: 1.1544 - val_loss: 1.2685 Epoch 284/300 10/10 - 0s - loss: 1.1444 - val_loss: 1.2963 Epoch 285/300 10/10 - 0s - loss: 1.1540 - val_loss: 1.3266 Epoch 286/300 10/10 - 0s - loss: 1.1817 - val_loss: 1.2867 Epoch 287/300 10/10 - 0s - loss: 1.1504 - val_loss: 1.2798 Epoch 288/300 10/10 - 0s - loss: 1.1495 - val_loss: 1.3050 Epoch 289/300 10/10 - 0s - loss: 1.1667 - val_loss: 1.2821 Epoch 290/300 10/10 - 0s - loss: 1.1761 - val_loss: 1.3154 Epoch 291/300 10/10 - 0s - loss: 1.1608 - val_loss: 1.3160 Epoch 292/300 10/10 - 0s - loss: 1.1688 - val_loss: 1.3394 Epoch 293/300 10/10 - 0s - loss: 1.1595 - val_loss: 1.3182 Epoch 294/300 10/10 - 0s - loss: 1.1630 - val_loss: 1.3249 Epoch 295/300 10/10 - 0s - loss: 1.1427 - val_loss: 1.3061 Epoch 296/300 10/10 - 0s - loss: 1.1473 - val_loss: 1.2985 Epoch 297/300 10/10 - 0s - loss: 1.1393 - val_loss: 1.3054 Epoch 298/300 10/10 - 0s - loss: 1.1641 - val_loss: 1.3133 Epoch 299/300 10/10 - 0s - loss: 1.1740 - val_loss: 1.2902 Epoch 300/300 10/10 - 0s - loss: 1.1717 - val_loss: 1.2780 Performance evaluation plt.figure(figsize=(15, 10)) plt.plot(history.history[\"loss\"]) plt.plot(history.history[\"val_loss\"]) plt.title(\"model loss\") plt.legend([\"train\", \"validation\"], loc=\"upper right\") plt.ylabel(\"loss\") plt.xlabel(\"epoch\") # From data to latent space. z, _ = model(normalized_data) # From latent space to data. samples = model.distribution.sample(3000) x, _ = model.predict(samples) f, axes = plt.subplots(2, 2) f.set_size_inches(20, 15) axes[0, 0].scatter(normalized_data[:, 0], normalized_data[:, 1], color=\"r\") axes[0, 0].set(title=\"Inference data space X\", xlabel=\"x\", ylabel=\"y\") axes[0, 1].scatter(z[:, 0], z[:, 1], color=\"r\") axes[0, 1].set(title=\"Inference latent space Z\", xlabel=\"x\", ylabel=\"y\") axes[0, 1].set_xlim([-3.5, 4]) axes[0, 1].set_ylim([-4, 4]) axes[1, 0].scatter(samples[:, 0], samples[:, 1], color=\"g\") axes[1, 0].set(title=\"Generated latent space Z\", xlabel=\"x\", ylabel=\"y\") axes[1, 1].scatter(x[:, 0], x[:, 1], color=\"g\") axes[1, 1].set(title=\"Generated data space X\", label=\"x\", ylabel=\"y\") axes[1, 1].set_xlim([-2, 2]) axes[1, 1].set_ylim([-2, 2]) (-2.0, 2.0) png png Implementation of StyleGAN Introduction The key idea of StyleGAN is to progressively increase the resolution of the generated images and to incorporate style features in the generative process.This StyleGAN implementation is based on the book Hands-on Image Generation with TensorFlow. The code from the book's Github repository was refactored to leverage a custom train_step() to enable faster training time via compilation and distribution. Setup import os import random import math import numpy as np import matplotlib.pyplot as plt from enum import Enum from glob import glob from functools import partial import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.models import Sequential from tensorflow_addons.layers import InstanceNormalization import tensorflow_datasets as tfds Prepare the dataset In this example, we will train using the CelebA from TensorFlow Datasets. def log2(x): return int(np.log2(x)) # we use different batch size for different resolution, so larger image size # could fit into GPU memory. 
The keys is image resolution in log2 batch_sizes = {2: 16, 3: 16, 4: 16, 5: 16, 6: 16, 7: 8, 8: 4, 9: 2, 10: 1} # We adjust the train step accordingly train_step_ratio = {k: batch_sizes[2] / v for k, v in batch_sizes.items()} ds_train = tfds.load(\"celeb_a\", split=\"train\") def resize_image(res, sample): image = sample[\"image\"] # only donwsampling, so use nearest neighbor that is faster to run image = tf.image.resize( image, (res, res), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR ) image = tf.cast(image, tf.float32) / 127.5 - 1.0 return image def create_dataloader(res): batch_size = batch_sizes[log2(res)] dl = ds_train.map(partial(resize_image, res), num_parallel_calls=tf.data.AUTOTUNE) dl = dl.shuffle(200).batch(batch_size, drop_remainder=True).prefetch(1).repeat() return dl Utility function to display images after each epoch def plot_images(images, log2_res, fname=\"\"): scales = {2: 0.5, 3: 1, 4: 2, 5: 3, 6: 4, 7: 5, 8: 6, 9: 7, 10: 8} scale = scales[log2_res] grid_col = min(images.shape[0], int(32 // scale)) grid_row = 1 f, axarr = plt.subplots( grid_row, grid_col, figsize=(grid_col * scale, grid_row * scale) ) for row in range(grid_row): ax = axarr if grid_row == 1 else axarr[row] for col in range(grid_col): ax[col].imshow(images[row * grid_col + col]) ax[col].axis(\"off\") plt.show() if fname: f.savefig(fname) Custom Layers The following are building blocks that will be used to construct the generators and discriminators of the StyleGAN model. def fade_in(alpha, a, b): return alpha * a + (1.0 - alpha) * b def wasserstein_loss(y_true, y_pred): return -tf.reduce_mean(y_true * y_pred) def pixel_norm(x, epsilon=1e-8): return x / tf.math.sqrt(tf.reduce_mean(x ** 2, axis=-1, keepdims=True) + epsilon) def minibatch_std(input_tensor, epsilon=1e-8): n, h, w, c = tf.shape(input_tensor) group_size = tf.minimum(4, n) x = tf.reshape(input_tensor, [group_size, -1, h, w, c]) group_mean, group_var = tf.nn.moments(x, axes=(0), keepdims=False) group_std = tf.sqrt(group_var + epsilon) avg_std = tf.reduce_mean(group_std, axis=[1, 2, 3], keepdims=True) x = tf.tile(avg_std, [group_size, h, w, 1]) return tf.concat([input_tensor, x], axis=-1) class EqualizedConv(layers.Layer): def __init__(self, out_channels, kernel=3, gain=2, **kwargs): super(EqualizedConv, self).__init__(**kwargs) self.kernel = kernel self.out_channels = out_channels self.gain = gain self.pad = kernel != 1 def build(self, input_shape): self.in_channels = input_shape[-1] initializer = keras.initializers.RandomNormal(mean=0.0, stddev=1.0) self.w = self.add_weight( shape=[self.kernel, self.kernel, self.in_channels, self.out_channels], initializer=initializer, trainable=True, name=\"kernel\", ) self.b = self.add_weight( shape=(self.out_channels,), initializer=\"zeros\", trainable=True, name=\"bias\" ) fan_in = self.kernel * self.kernel * self.in_channels self.scale = tf.sqrt(self.gain / fan_in) def call(self, inputs): if self.pad: x = tf.pad(inputs, [[0, 0], [1, 1], [1, 1], [0, 0]], mode=\"REFLECT\") else: x = inputs output = ( tf.nn.conv2d(x, self.scale * self.w, strides=1, padding=\"VALID\") + self.b ) return output class EqualizedDense(layers.Layer): def __init__(self, units, gain=2, learning_rate_multiplier=1, **kwargs): super(EqualizedDense, self).__init__(**kwargs) self.units = units self.gain = gain self.learning_rate_multiplier = learning_rate_multiplier def build(self, input_shape): self.in_channels = input_shape[-1] initializer = keras.initializers.RandomNormal( mean=0.0, stddev=1.0 / self.learning_rate_multiplier ) self.w = 
self.add_weight( shape=[self.in_channels, self.units], initializer=initializer, trainable=True, name=\"kernel\", ) self.b = self.add_weight( shape=(self.units,), initializer=\"zeros\", trainable=True, name=\"bias\" ) fan_in = self.in_channels self.scale = tf.sqrt(self.gain / fan_in) def call(self, inputs): output = tf.add(tf.matmul(inputs, self.scale * self.w), self.b) return output * self.learning_rate_multiplier class AddNoise(layers.Layer): def build(self, input_shape): n, h, w, c = input_shape[0] initializer = keras.initializers.RandomNormal(mean=0.0, stddev=1.0) self.b = self.add_weight( shape=[1, 1, 1, c], initializer=initializer, trainable=True, name=\"kernel\" ) def call(self, inputs): x, noise = inputs output = x + self.b * noise return output class AdaIN(layers.Layer): def __init__(self, gain=1, **kwargs): super(AdaIN, self).__init__(**kwargs) self.gain = gain def build(self, input_shapes): x_shape = input_shapes[0] w_shape = input_shapes[1] self.w_channels = w_shape[-1] self.x_channels = x_shape[-1] self.dense_1 = EqualizedDense(self.x_channels, gain=1) self.dense_2 = EqualizedDense(self.x_channels, gain=1) def call(self, inputs): x, w = inputs ys = tf.reshape(self.dense_1(w), (-1, 1, 1, self.x_channels)) yb = tf.reshape(self.dense_2(w), (-1, 1, 1, self.x_channels)) return ys * x + yb Next we build the following: A model mapping to map the random noise into style code The generator The discriminator For the generator, we build generator blocks at multiple resolutions, e.g. 4x4, 8x8, ...up to 1024x1024. We only use 4x4 in the beginning and we use progressively larger-resolution blocks as the training proceeds. Same for the discriminator. def Mapping(num_stages, input_shape=512): z = layers.Input(shape=(input_shape)) w = pixel_norm(z) for i in range(8): w = EqualizedDense(512, learning_rate_multiplier=0.01)(w) w = layers.LeakyReLU(0.2)(w) w = tf.tile(tf.expand_dims(w, 1), (1, num_stages, 1)) return keras.Model(z, w, name=\"mapping\") class Generator: def __init__(self, start_res_log2, target_res_log2): self.start_res_log2 = start_res_log2 self.target_res_log2 = target_res_log2 self.num_stages = target_res_log2 - start_res_log2 + 1 # list of generator blocks at increasing resolution self.g_blocks = [] # list of layers to convert g_block activation to RGB self.to_rgb = [] # list of noise input of different resolutions into g_blocks self.noise_inputs = [] # filter size to use at each stage, keys are log2(resolution) self.filter_nums = { 0: 512, 1: 512, 2: 512, # 4x4 3: 512, # 8x8 4: 512, # 16x16 5: 512, # 32x32 6: 256, # 64x64 7: 128, # 128x128 8: 64, # 256x256 9: 32, # 512x512 10: 16, } # 1024x1024 start_res = 2 ** start_res_log2 self.input_shape = (start_res, start_res, self.filter_nums[start_res_log2]) self.g_input = layers.Input(self.input_shape, name=\"generator_input\") for i in range(start_res_log2, target_res_log2 + 1): filter_num = self.filter_nums[i] res = 2 ** i self.noise_inputs.append( layers.Input(shape=(res, res, 1), name=f\"noise_{res}x{res}\") ) to_rgb = Sequential( [ layers.InputLayer(input_shape=(res, res, filter_num)), EqualizedConv(3, 1, gain=1), ], name=f\"to_rgb_{res}x{res}\", ) self.to_rgb.append(to_rgb) is_base = i == self.start_res_log2 if is_base: input_shape = (res, res, self.filter_nums[i - 1]) else: input_shape = (2 ** (i - 1), 2 ** (i - 1), self.filter_nums[i - 1]) g_block = self.build_block( filter_num, res=res, input_shape=input_shape, is_base=is_base ) self.g_blocks.append(g_block) def build_block(self, filter_num, res, input_shape, is_base): 
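# Each generator block receives the previous stage's activation, a per-resolution noise map and the style vector w;
# non-base blocks first upsample by 2x, and every block applies EqualizedConv, AddNoise, LeakyReLU,
# InstanceNormalization and AdaIN (conditioned on w) before being wrapped in a Keras model for its resolution.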
input_tensor = layers.Input(shape=input_shape, name=f\"g_{res}\") noise = layers.Input(shape=(res, res, 1), name=f\"noise_{res}\") w = layers.Input(shape=512) x = input_tensor if not is_base: x = layers.UpSampling2D((2, 2))(x) x = EqualizedConv(filter_num, 3)(x) x = AddNoise()([x, noise]) x = layers.LeakyReLU(0.2)(x) x = InstanceNormalization()(x) x = AdaIN()([x, w]) x = EqualizedConv(filter_num, 3)(x) x = AddNoise()([x, noise]) x = layers.LeakyReLU(0.2)(x) x = InstanceNormalization()(x) x = AdaIN()([x, w]) return keras.Model([input_tensor, w, noise], x, name=f\"genblock_{res}x{res}\") def grow(self, res_log2): res = 2 ** res_log2 num_stages = res_log2 - self.start_res_log2 + 1 w = layers.Input(shape=(self.num_stages, 512), name=\"w\") alpha = layers.Input(shape=(1), name=\"g_alpha\") x = self.g_blocks[0]([self.g_input, w[:, 0], self.noise_inputs[0]]) if num_stages == 1: rgb = self.to_rgb[0](x) else: for i in range(1, num_stages - 1): x = self.g_blocks[i]([x, w[:, i], self.noise_inputs[i]]) old_rgb = self.to_rgb[num_stages - 2](x) old_rgb = layers.UpSampling2D((2, 2))(old_rgb) i = num_stages - 1 x = self.g_blocks[i]([x, w[:, i], self.noise_inputs[i]]) new_rgb = self.to_rgb[i](x) rgb = fade_in(alpha[0], new_rgb, old_rgb) return keras.Model( [self.g_input, w, self.noise_inputs, alpha], rgb, name=f\"generator_{res}_x_{res}\", ) class Discriminator: def __init__(self, start_res_log2, target_res_log2): self.start_res_log2 = start_res_log2 self.target_res_log2 = target_res_log2 self.num_stages = target_res_log2 - start_res_log2 + 1 # filter size to use at each stage, keys are log2(resolution) self.filter_nums = { 0: 512, 1: 512, 2: 512, # 4x4 3: 512, # 8x8 4: 512, # 16x16 5: 512, # 32x32 6: 256, # 64x64 7: 128, # 128x128 8: 64, # 256x256 9: 32, # 512x512 10: 16, } # 1024x1024 # list of discriminator blocks at increasing resolution self.d_blocks = [] # list of layers to convert RGB into activation for d_blocks inputs self.from_rgb = [] for res_log2 in range(self.start_res_log2, self.target_res_log2 + 1): res = 2 ** res_log2 filter_num = self.filter_nums[res_log2] from_rgb = Sequential( [ layers.InputLayer( input_shape=(res, res, 3), name=f\"from_rgb_input_{res}\" ), EqualizedConv(filter_num, 1), layers.LeakyReLU(0.2), ], name=f\"from_rgb_{res}\", ) self.from_rgb.append(from_rgb) input_shape = (res, res, filter_num) if len(self.d_blocks) == 0: d_block = self.build_base(filter_num, res) else: d_block = self.build_block( filter_num, self.filter_nums[res_log2 - 1], res ) self.d_blocks.append(d_block) def build_base(self, filter_num, res): input_tensor = layers.Input(shape=(res, res, filter_num), name=f\"d_{res}\") x = minibatch_std(input_tensor) x = EqualizedConv(filter_num, 3)(x) x = layers.LeakyReLU(0.2)(x) x = layers.Flatten()(x) x = EqualizedDense(filter_num)(x) x = layers.LeakyReLU(0.2)(x) x = EqualizedDense(1)(x) return keras.Model(input_tensor, x, name=f\"d_{res}\") def build_block(self, filter_num_1, filter_num_2, res): input_tensor = layers.Input(shape=(res, res, filter_num_1), name=f\"d_{res}\") x = EqualizedConv(filter_num_1, 3)(input_tensor) x = layers.LeakyReLU(0.2)(x) x = EqualizedConv(filter_num_2)(x) x = layers.LeakyReLU(0.2)(x) x = layers.AveragePooling2D((2, 2))(x) return keras.Model(input_tensor, x, name=f\"d_{res}\") def grow(self, res_log2): res = 2 ** res_log2 idx = res_log2 - self.start_res_log2 alpha = layers.Input(shape=(1), name=\"d_alpha\") input_image = layers.Input(shape=(res, res, 3), name=\"input_image\") x = self.from_rgb[idx](input_image) x = self.d_blocks[idx](x) if 
idx > 0: idx -= 1 downsized_image = layers.AveragePooling2D((2, 2))(input_image) y = self.from_rgb[idx](downsized_image) x = fade_in(alpha[0], x, y) for i in range(idx, -1, -1): x = self.d_blocks[i](x) return keras.Model([input_image, alpha], x, name=f\"discriminator_{res}_x_{res}\") Build StyleGAN with custom train step class StyleGAN(tf.keras.Model): def __init__(self, z_dim=512, target_res=64, start_res=4): super(StyleGAN, self).__init__() self.z_dim = z_dim self.target_res_log2 = log2(target_res) self.start_res_log2 = log2(start_res) self.current_res_log2 = self.target_res_log2 self.num_stages = self.target_res_log2 - self.start_res_log2 + 1 self.alpha = tf.Variable(1.0, dtype=tf.float32, trainable=False, name=\"alpha\") self.mapping = Mapping(num_stages=self.num_stages) self.d_builder = Discriminator(self.start_res_log2, self.target_res_log2) self.g_builder = Generator(self.start_res_log2, self.target_res_log2) self.g_input_shape = self.g_builder.input_shape self.phase = None self.train_step_counter = tf.Variable(0, dtype=tf.int32, trainable=False) self.loss_weights = {\"gradient_penalty\": 10, \"drift\": 0.001} def grow_model(self, res): tf.keras.backend.clear_session() res_log2 = log2(res) self.generator = self.g_builder.grow(res_log2) self.discriminator = self.d_builder.grow(res_log2) self.current_res_log2 = res_log2 print(f\"\nModel resolution:{res}x{res}\") def compile( self, steps_per_epoch, phase, res, d_optimizer, g_optimizer, *args, **kwargs ): self.loss_weights = kwargs.pop(\"loss_weights\", self.loss_weights) self.steps_per_epoch = steps_per_epoch if res != 2 ** self.current_res_log2: self.grow_model(res) self.d_optimizer = d_optimizer self.g_optimizer = g_optimizer self.train_step_counter.assign(0) self.phase = phase self.d_loss_metric = keras.metrics.Mean(name=\"d_loss\") self.g_loss_metric = keras.metrics.Mean(name=\"g_loss\") super(StyleGAN, self).compile(*args, **kwargs) @property def metrics(self): return [self.d_loss_metric, self.g_loss_metric] def generate_noise(self, batch_size): noise = [ tf.random.normal((batch_size, 2 ** res, 2 ** res, 1)) for res in range(self.start_res_log2, self.target_res_log2 + 1) ] return noise def gradient_loss(self, grad): loss = tf.square(grad) loss = tf.reduce_sum(loss, axis=tf.range(1, tf.size(tf.shape(loss)))) loss = tf.sqrt(loss) loss = tf.reduce_mean(tf.square(loss - 1)) return loss def train_step(self, real_images): self.train_step_counter.assign_add(1) if self.phase == \"TRANSITION\": self.alpha.assign( tf.cast(self.train_step_counter / self.steps_per_epoch, tf.float32) ) elif self.phase == \"STABLE\": self.alpha.assign(1.0) else: raise NotImplementedError alpha = tf.expand_dims(self.alpha, 0) batch_size = tf.shape(real_images)[0] real_labels = tf.ones(batch_size) fake_labels = -tf.ones(batch_size) z = tf.random.normal((batch_size, self.z_dim)) const_input = tf.ones(tuple([batch_size] + list(self.g_input_shape))) noise = self.generate_noise(batch_size) # generator with tf.GradientTape() as g_tape: w = self.mapping(z) fake_images = self.generator([const_input, w, noise, alpha]) pred_fake = self.discriminator([fake_images, alpha]) g_loss = wasserstein_loss(real_labels, pred_fake) trainable_weights = ( self.mapping.trainable_weights + self.generator.trainable_weights ) gradients = g_tape.gradient(g_loss, trainable_weights) self.g_optimizer.apply_gradients(zip(gradients, trainable_weights)) # discriminator with tf.GradientTape() as gradient_tape, tf.GradientTape() as total_tape: # forward pass pred_fake = 
self.discriminator([fake_images, alpha]) pred_real = self.discriminator([real_images, alpha]) epsilon = tf.random.uniform((batch_size, 1, 1, 1)) interpolates = epsilon * real_images + (1 - epsilon) * fake_images gradient_tape.watch(interpolates) pred_fake_grad = self.discriminator([interpolates, alpha]) # calculate losses loss_fake = wasserstein_loss(fake_labels, pred_fake) loss_real = wasserstein_loss(real_labels, pred_real) loss_fake_grad = wasserstein_loss(fake_labels, pred_fake_grad) # gradient penalty gradients_fake = gradient_tape.gradient(loss_fake_grad, [interpolates]) gradient_penalty = self.loss_weights[ \"gradient_penalty\" ] * self.gradient_loss(gradients_fake) # drift loss all_pred = tf.concat([pred_fake, pred_real], axis=0) drift_loss = self.loss_weights[\"drift\"] * tf.reduce_mean(all_pred ** 2) d_loss = loss_fake + loss_real + gradient_penalty + drift_loss gradients = total_tape.gradient( d_loss, self.discriminator.trainable_weights ) self.d_optimizer.apply_gradients( zip(gradients, self.discriminator.trainable_weights) ) # Update metrics self.d_loss_metric.update_state(d_loss) self.g_loss_metric.update_state(g_loss) return { \"d_loss\": self.d_loss_metric.result(), \"g_loss\": self.g_loss_metric.result(), } def call(self, inputs: dict()): style_code = inputs.get(\"style_code\", None) z = inputs.get(\"z\", None) noise = inputs.get(\"noise\", None) batch_size = inputs.get(\"batch_size\", 1) alpha = inputs.get(\"alpha\", 1.0) alpha = tf.expand_dims(alpha, 0) if style_code is None: if z is None: z = tf.random.normal((batch_size, self.z_dim)) style_code = self.mapping(z) if noise is None: noise = self.generate_noise(batch_size) # self.alpha.assign(alpha) const_input = tf.ones(tuple([batch_size] + list(self.g_input_shape))) images = self.generator([const_input, style_code, noise, alpha]) images = np.clip((images * 0.5 + 0.5) * 255, 0, 255).astype(np.uint8) return images Training We first build the StyleGAN at the smallest resolution, such as 4x4 or 8x8. Then we progressively grow the model to higher resolutions by appending new generator and discriminator blocks. START_RES = 4 TARGET_RES = 128 style_gan = StyleGAN(start_res=START_RES, target_res=TARGET_RES) The training for each new resolution happens in two phases - \"transition\" and \"stable\". In the transition phase, the features from the previous resolution are mixed with the current resolution. This allows for a smoother transition when scaling up. We use each epoch in model.fit() as a phase.
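To make the transition phase concrete, the following minimal sketch (not part of the original example; the tensors and step count are hypothetical) shows how the blend moves from the previous resolution's output to the new block's output as alpha ramps from 0 to 1, which is exactly what fade_in and train_step implement above.

import tensorflow as tf

old_rgb = tf.zeros((1, 8, 8, 3))  # hypothetical stand-in for the upsampled RGB output of the previous 4x4 stage
new_rgb = tf.ones((1, 8, 8, 3))   # hypothetical stand-in for the RGB output of the newly added 8x8 stage
steps_per_epoch = 4               # hypothetical; the real value is passed to compile()

for step in range(steps_per_epoch + 1):
    alpha = step / steps_per_epoch                      # what train_step assigns during the TRANSITION phase
    mixed = alpha * new_rgb + (1.0 - alpha) * old_rgb   # the same blend computed by fade_in(alpha, new_rgb, old_rgb)
    print(f\"alpha={alpha:.2f} mean={float(tf.reduce_mean(mixed)):.2f}\")

# alpha=0.00 reproduces the previous resolution's output; alpha=1.00 uses only the new block's output.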
def train( start_res=START_RES, target_res=TARGET_RES, steps_per_epoch=5000, display_images=True, ): opt_cfg = {\"learning_rate\": 1e-3, \"beta_1\": 0.0, \"beta_2\": 0.99, \"epsilon\": 1e-8} val_batch_size = 16 val_z = tf.random.normal((val_batch_size, style_gan.z_dim)) val_noise = style_gan.generate_noise(val_batch_size) start_res_log2 = int(np.log2(start_res)) target_res_log2 = int(np.log2(target_res)) for res_log2 in range(start_res_log2, target_res_log2 + 1): res = 2 ** res_log2 for phase in [\"TRANSITION\", \"STABLE\"]: if res == start_res and phase == \"TRANSITION\": continue train_dl = create_dataloader(res) steps = int(train_step_ratio[res_log2] * steps_per_epoch) style_gan.compile( d_optimizer=tf.keras.optimizers.Adam(**opt_cfg), g_optimizer=tf.keras.optimizers.Adam(**opt_cfg), loss_weights={\"gradient_penalty\": 10, \"drift\": 0.001}, steps_per_epoch=steps, res=res, phase=phase, run_eagerly=False, ) prefix = f\"res_{res}x{res}_{style_gan.phase}\" ckpt_cb = keras.callbacks.ModelCheckpoint( f\"checkpoints/stylegan_{res}x{res}.ckpt\", save_weights_only=True, verbose=0, ) print(phase) style_gan.fit( train_dl, epochs=1, steps_per_epoch=steps, callbacks=[ckpt_cb] ) if display_images: images = style_gan({\"z\": val_z, \"noise\": val_noise, \"alpha\": 1.0}) plot_images(images, res_log2) StyleGAN can take a long time to train, in the code below, a small steps_per_epoch value of 1 is used to sanity-check the code is working alright. In practice, a larger steps_per_epoch value (over 10000) is required to get decent results. train(start_res=4, target_res=16, steps_per_epoch=1, display_images=False) Model resolution:4x4 STABLE 1/1 [==============================] - 3s 3s/step - d_loss: 2.0971 - g_loss: 2.5965 Model resolution:8x8 TRANSITION 1/1 [==============================] - 5s 5s/step - d_loss: 6.6954 - g_loss: 0.3432 STABLE 1/1 [==============================] - 4s 4s/step - d_loss: 3.3558 - g_loss: 3.7813 Model resolution:16x16 TRANSITION 1/1 [==============================] - 10s 10s/step - d_loss: 3.3166 - g_loss: 6.6047 STABLE WARNING:tensorflow:5 out of the last 5 calls to .train_function at 0x7f7f0e7005e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. WARNING:tensorflow:5 out of the last 5 calls to .train_function at 0x7f7f0e7005e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. 
1/1 [==============================] - 8s 8s/step - d_loss: -6.1128 - g_loss: 17.0095 Results We can now run some inference using pre-trained 128x128 checkpoints. In general, the image fidelity increases with the resolution. You can try to train this StyleGAN to resolutions above 128x128 with the CelebA HQ dataset. url = \"https://github.com/soon-yau/stylegan_keras/releases/download/keras_example_v1.0/stylegan_128x128.ckpt.zip\" weights_path = keras.utils.get_file( \"stylegan_128x128.ckpt.zip\", url, extract=True, cache_dir=os.path.abspath(\".\"), cache_subdir=\"pretrained\", ) style_gan.grow_model(128) style_gan.load_weights(os.path.join(\"pretrained/stylegan_128x128.ckpt\")) tf.random.set_seed(196) batch_size = 2 z = tf.random.normal((batch_size, style_gan.z_dim)) w = style_gan.mapping(z) noise = style_gan.generate_noise(batch_size=batch_size) images = style_gan({\"style_code\": w, \"noise\": noise, \"alpha\": 1.0}) plot_images(images, 5) Downloading data from https://github.com/soon-yau/stylegan_keras/releases/download/keras_example_v1.0/stylegan_128x128.ckpt.zip 540540928/540534982 [==============================] - 30s 0us/step png Style Mixing We can also mix styles from two images to create a new image. alpha = 0.4 w_mix = np.expand_dims(alpha * w[0] + (1 - alpha) * w[1], 0) noise_a = [np.expand_dims(n[0], 0) for n in noise] mix_images = style_gan({\"style_code\": w_mix, \"noise\": noise_a}) image_row = np.hstack([images[0], images[1], mix_images[0]]) plt.figure(figsize=(9, 3)) plt.imshow(image_row) plt.axis(\"off\") (-0.5, 383.5, 127.5, -0.5) png Neural Style Transfer with Adaptive Instance Normalization. Introduction Neural Style Transfer is the process of transferring the style of one image onto the content of another. This was first introduced in the seminal paper \"A Neural Algorithm of Artistic Style\" by Gatys et al. A major limitation of the technique proposed in this work is in its runtime, as the algorithm uses a slow iterative optimization process. Follow-up papers that introduced Batch Normalization, Instance Normalization and Conditional Instance Normalization allowed Style Transfer to be performed in new ways, no longer requiring a slow iterative process. Following these papers, the authors Xun Huang and Serge Belongie propose Adaptive Instance Normalization (AdaIN), which allows arbitrary style transfer in real time. In this example, we implement Adaptive Instance Normalization for Neural Style Transfer. The figure below shows the output of our AdaIN model trained for only 30 epochs. Style transfer sample gallery You can also try out the model with your own images with this Hugging Face demo. Setup We begin by importing the necessary packages. We also set the seed for reproducibility. The global variables are hyperparameters which we can change as we like. import os import glob import imageio import numpy as np from tqdm import tqdm import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt import tensorflow_datasets as tfds from tensorflow.keras import layers # Defining the global variables. IMAGE_SIZE = (224, 224) BATCH_SIZE = 64 # Training for a single epoch due to time constraints. # Please use at least 30 epochs to see good results. EPOCHS = 1 AUTOTUNE = tf.data.AUTOTUNE Style transfer sample gallery For Neural Style Transfer we need style images and content images. In this example we will use the Best Artworks of All Time as our style dataset and Pascal VOC as our content dataset.
This is a deviation from the original paper implementation by the authors, where they use WIKI-Art as style and MSCOCO as content datasets respectively. We do this to create a minimal yet reproducible example. Downloading the dataset from Kaggle The Best Artworks of All Time dataset is hosted on Kaggle and one can easily download it in Colab by following these steps: Follow the instructions here in order to obtain your Kaggle API keys in case you don't have them. Use the following command to upload the Kaggle API keys. from google.colab import files files.upload() Use the following commands to move the API keys to the proper directory and download the dataset. $ mkdir ~/.kaggle $ cp kaggle.json ~/.kaggle/ $ chmod 600 ~/.kaggle/kaggle.json $ kaggle datasets download ikarus777/best-artworks-of-all-time $ unzip -qq best-artworks-of-all-time.zip $ rm -rf images $ mv resized artwork $ rm best-artworks-of-all-time.zip artists.csv tf.data pipeline In this section, we will build the tf.data pipeline for the project. For the style dataset, we decode, convert and resize the images from the folder. For the content images we are already presented with a tf.data dataset as we use the tfds module. After we have our style and content data pipeline ready, we zip the two together to obtain the data pipeline that our model will consume. def decode_and_resize(image_path): \"\"\"Decodes and resizes an image from the image file path. Args: image_path: The image file path. size: The size of the image to be resized to. Returns: A resized image. \"\"\" image = tf.io.read_file(image_path) image = tf.image.decode_jpeg(image, channels=3) image = tf.image.convert_image_dtype(image, dtype=\"float32\") image = tf.image.resize(image, IMAGE_SIZE) return image def extract_image_from_voc(element): \"\"\"Extracts image from the PascalVOC dataset. Args: element: A dictionary of data. size: The size of the image to be resized to. Returns: A resized image. \"\"\" image = element[\"image\"] image = tf.image.convert_image_dtype(image, dtype=\"float32\") image = tf.image.resize(image, IMAGE_SIZE) return image # Get the image file paths for the style images. style_images = os.listdir(\"/content/artwork/resized\") style_images = [os.path.join(\"/content/artwork/resized\", path) for path in style_images] # split the style images in train, val and test total_style_images = len(style_images) train_style = style_images[: int(0.8 * total_style_images)] val_style = style_images[int(0.8 * total_style_images) : int(0.9 * total_style_images)] test_style = style_images[int(0.9 * total_style_images) :] # Build the style and content tf.data datasets. train_style_ds = ( tf.data.Dataset.from_tensor_slices(train_style) .map(decode_and_resize, num_parallel_calls=AUTOTUNE) .repeat() ) train_content_ds = tfds.load(\"voc\", split=\"train\").map(extract_image_from_voc).repeat() val_style_ds = ( tf.data.Dataset.from_tensor_slices(val_style) .map(decode_and_resize, num_parallel_calls=AUTOTUNE) .repeat() ) val_content_ds = ( tfds.load(\"voc\", split=\"validation\").map(extract_image_from_voc).repeat() ) test_style_ds = ( tf.data.Dataset.from_tensor_slices(test_style) .map(decode_and_resize, num_parallel_calls=AUTOTUNE) .repeat() ) test_content_ds = ( tfds.load(\"voc\", split=\"test\") .map(extract_image_from_voc, num_parallel_calls=AUTOTUNE) .repeat() ) # Zipping the style and content datasets. 
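# After zipping, shuffling and batching, each dataset element is a (style_batch, content_batch) pair of tensors,
# each with shape (BATCH_SIZE, 224, 224, 3).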
train_ds = ( tf.data.Dataset.zip((train_style_ds, train_content_ds)) .shuffle(BATCH_SIZE * 2) .batch(BATCH_SIZE) .prefetch(AUTOTUNE) ) val_ds = ( tf.data.Dataset.zip((val_style_ds, val_content_ds)) .shuffle(BATCH_SIZE * 2) .batch(BATCH_SIZE) .prefetch(AUTOTUNE) ) test_ds = ( tf.data.Dataset.zip((test_style_ds, test_content_ds)) .shuffle(BATCH_SIZE * 2) .batch(BATCH_SIZE) .prefetch(AUTOTUNE) ) Downloading and preparing dataset voc/2007/4.0.0 (download: 868.85 MiB, generated: Unknown size, total: 868.85 MiB) to /root/tensorflow_datasets/voc/2007/4.0.0... Dl Completed...: 0 url [00:00, ? url/s] Dl Size...: 0 MiB [00:00, ? MiB/s] Extraction completed...: 0 file [00:00, ? file/s] 0 examples [00:00, ? examples/s] Shuffling and writing examples to /root/tensorflow_datasets/voc/2007/4.0.0.incompleteP16YU5/voc-test.tfrecord 0%| | 0/4952 [00:00'RGB' x = x[:, :, ::-1] x = np.clip(x, 0, 255).astype(\"uint8\") return x Compute the style transfer loss First, we need to define 4 utility functions: gram_matrix (used to compute the style loss) The style_loss function, which keeps the generated image close to the local textures of the style reference image The content_loss function, which keeps the high-level representation of the generated image close to that of the base image The total_variation_loss function, a regularization loss which keeps the generated image locally-coherent # The gram matrix of an image tensor (feature-wise outer product) def gram_matrix(x): x = tf.transpose(x, (2, 0, 1)) features = tf.reshape(x, (tf.shape(x)[0], -1)) gram = tf.matmul(features, tf.transpose(features)) return gram # The \"style loss\" is designed to maintain # the style of the reference image in the generated image. # It is based on the gram matrices (which capture style) of # feature maps from the style reference image # and from the generated image def style_loss(style, combination): S = gram_matrix(style) C = gram_matrix(combination) channels = 3 size = img_nrows * img_ncols return tf.reduce_sum(tf.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2)) # An auxiliary loss function # designed to maintain the \"content\" of the # base image in the generated image def content_loss(base, combination): return tf.reduce_sum(tf.square(combination - base)) # The 3rd loss function, total variation loss, # designed to keep the generated image locally coherent def total_variation_loss(x): a = tf.square( x[:, : img_nrows - 1, : img_ncols - 1, :] - x[:, 1:, : img_ncols - 1, :] ) b = tf.square( x[:, : img_nrows - 1, : img_ncols - 1, :] - x[:, : img_nrows - 1, 1:, :] ) return tf.reduce_sum(tf.pow(a + b, 1.25)) Next, let's create a feature extraction model that retrieves the intermediate activations of VGG19 (as a dict, by name). # Build a VGG19 model loaded with pre-trained ImageNet weights model = vgg19.VGG19(weights=\"imagenet\", include_top=False) # Get the symbolic outputs of each \"key\" layer (we gave them unique names). outputs_dict = dict([(layer.name, layer.output) for layer in model.layers]) # Set up a model that returns the activation values for every layer in # VGG19 (as a dict). feature_extractor = keras.Model(inputs=model.inputs, outputs=outputs_dict) Finally, here's the code that computes the style transfer loss. # List of layers to use for the style loss. style_layer_names = [ \"block1_conv1\", \"block2_conv1\", \"block3_conv1\", \"block4_conv1\", \"block5_conv1\", ] # The layer to use for the content loss. 
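# block5_conv2 is one of the deepest layers in VGG19, so its activations describe high-level content
# rather than the low-level textures captured by the block*_conv1 style layers above.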
content_layer_name = \"block5_conv2\" def compute_loss(combination_image, base_image, style_reference_image): input_tensor = tf.concat( [base_image, style_reference_image, combination_image], axis=0 ) features = feature_extractor(input_tensor) # Initialize the loss loss = tf.zeros(shape=()) # Add content loss layer_features = features[content_layer_name] base_image_features = layer_features[0, :, :, :] combination_features = layer_features[2, :, :, :] loss = loss + content_weight * content_loss( base_image_features, combination_features ) # Add style loss for layer_name in style_layer_names: layer_features = features[layer_name] style_reference_features = layer_features[1, :, :, :] combination_features = layer_features[2, :, :, :] sl = style_loss(style_reference_features, combination_features) loss += (style_weight / len(style_layer_names)) * sl # Add total variation loss loss += total_variation_weight * total_variation_loss(combination_image) return loss Add a tf.function decorator to loss & gradient computation To compile it, and thus make it fast. @tf.function def compute_loss_and_grads(combination_image, base_image, style_reference_image): with tf.GradientTape() as tape: loss = compute_loss(combination_image, base_image, style_reference_image) grads = tape.gradient(loss, combination_image) return loss, grads The training loop Repeatedly run vanilla gradient descent steps to minimize the loss, and save the resulting image every 100 iterations. We decay the learning rate by 0.96 every 100 steps. optimizer = keras.optimizers.SGD( keras.optimizers.schedules.ExponentialDecay( initial_learning_rate=100.0, decay_steps=100, decay_rate=0.96 ) ) base_image = preprocess_image(base_image_path) style_reference_image = preprocess_image(style_reference_image_path) combination_image = tf.Variable(preprocess_image(base_image_path)) iterations = 4000 for i in range(1, iterations + 1): loss, grads = compute_loss_and_grads( combination_image, base_image, style_reference_image ) optimizer.apply_gradients([(grads, combination_image)]) if i % 100 == 0: print(\"Iteration %d: loss=%.2f\" % (i, loss)) img = deprocess_image(combination_image.numpy()) fname = result_prefix + \"_at_iteration_%d.png\" % i keras.preprocessing.image.save_img(fname, img) Iteration 100: loss=11018.36 Iteration 200: loss=8514.28 Iteration 300: loss=7571.70 Iteration 400: loss=7064.09 Iteration 500: loss=6736.33 Iteration 600: loss=6501.82 Iteration 700: loss=6323.21 Iteration 800: loss=6181.44 Iteration 900: loss=6065.30 Iteration 1000: loss=5967.72 Iteration 1100: loss=5884.61 Iteration 1200: loss=5812.84 Iteration 1300: loss=5750.36 Iteration 1400: loss=5695.61 Iteration 1500: loss=5647.19 Iteration 1600: loss=5604.15 Iteration 1700: loss=5565.45 Iteration 1800: loss=5530.61 Iteration 1900: loss=5498.99 Iteration 2000: loss=5470.26 Iteration 2100: loss=5444.05 Iteration 2200: loss=5420.09 Iteration 2300: loss=5398.12 Iteration 2400: loss=5377.92 Iteration 2500: loss=5359.31 Iteration 2600: loss=5342.14 Iteration 2700: loss=5326.28 Iteration 2800: loss=5311.56 Iteration 2900: loss=5297.89 Iteration 3000: loss=5285.14 Iteration 3100: loss=5273.21 Iteration 3200: loss=5262.05 Iteration 3300: loss=5251.60 Iteration 3400: loss=5241.82 Iteration 3500: loss=5232.64 Iteration 3600: loss=5224.02 Iteration 3700: loss=5215.90 Iteration 3800: loss=5208.26 Iteration 3900: loss=5201.06 Iteration 4000: loss=5194.26 After 4000 iterations, you get the following result: display(Image(result_prefix + \"_at_iteration_4000.png\")) png PixelCNN implemented 
in Keras. Introduction PixelCNN is a generative model proposed in 2016 by van den Oord et al. (reference: Conditional Image Generation with PixelCNN Decoders). It is designed to generate images (or other data types) iteratively from an input vector where the probability distribution of prior elements dictates the probability distribution of later elements. In the following example, images are generated in this fashion, pixel-by-pixel, via a masked convolution kernel that only looks at data from previously generated pixels (origin at the top left) to generate later pixels. During inference, the output of the network is used as a probability distribution from which new pixel values are sampled to generate a new image (here, with MNIST, the pixel values range from white (0) to black (255)). import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tqdm import tqdm Getting the data # Model / data parameters num_classes = 10 input_shape = (28, 28, 1) n_residual_blocks = 5 # The data, split between train and test sets (x, _), (y, _) = keras.datasets.mnist.load_data() # Concatenate all of the images together data = np.concatenate((x, y), axis=0) # Round all pixel values less than 33% of the max 256 value to 0 # anything above this value gets rounded up to 1 so that all values are either # 0 or 1 data = np.where(data < (0.33 * 256), 0, 1) data = data.astype(np.float32) Create two classes for the requisite Layers for the model # The first layer is the PixelCNN layer. This layer simply # builds on the 2D convolutional layer, but includes masking. class PixelConvLayer(layers.Layer): def __init__(self, mask_type, **kwargs): super(PixelConvLayer, self).__init__() self.mask_type = mask_type self.conv = layers.Conv2D(**kwargs) def build(self, input_shape): # Build the conv2d layer to initialize kernel variables self.conv.build(input_shape) # Use the initialized kernel to create the mask kernel_shape = self.conv.kernel.get_shape() self.mask = np.zeros(shape=kernel_shape) self.mask[: kernel_shape[0] // 2, ...] = 1.0 self.mask[kernel_shape[0] // 2, : kernel_shape[1] // 2, ...] = 1.0 if self.mask_type == \"B\": self.mask[kernel_shape[0] // 2, kernel_shape[1] // 2, ...] = 1.0 def call(self, inputs): self.conv.kernel.assign(self.conv.kernel * self.mask) return self.conv(inputs) # Next, we build our residual block layer. # This is just a normal residual block, but based on the PixelConvLayer. 
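# The masked 3x3 convolution runs at half the channel count (filters // 2), the surrounding 1x1 convolutions
# restore the original width, and the block input is added back to the output as a skip connection.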
class ResidualBlock(keras.layers.Layer): def __init__(self, filters, **kwargs): super(ResidualBlock, self).__init__(**kwargs) self.conv1 = keras.layers.Conv2D( filters=filters, kernel_size=1, activation=\"relu\" ) self.pixel_conv = PixelConvLayer( mask_type=\"B\", filters=filters // 2, kernel_size=3, activation=\"relu\", padding=\"same\", ) self.conv2 = keras.layers.Conv2D( filters=filters, kernel_size=1, activation=\"relu\" ) def call(self, inputs): x = self.conv1(inputs) x = self.pixel_conv(x) x = self.conv2(x) return keras.layers.add([inputs, x]) Build the model based on the original paper inputs = keras.Input(shape=input_shape) x = PixelConvLayer( mask_type=\"A\", filters=128, kernel_size=7, activation=\"relu\", padding=\"same\" )(inputs) for _ in range(n_residual_blocks): x = ResidualBlock(filters=128)(x) for _ in range(2): x = PixelConvLayer( mask_type=\"B\", filters=128, kernel_size=1, strides=1, activation=\"relu\", padding=\"valid\", )(x) out = keras.layers.Conv2D( filters=1, kernel_size=1, strides=1, activation=\"sigmoid\", padding=\"valid\" )(x) pixel_cnn = keras.Model(inputs, out) adam = keras.optimizers.Adam(learning_rate=0.0005) pixel_cnn.compile(optimizer=adam, loss=\"binary_crossentropy\") pixel_cnn.summary() pixel_cnn.fit( x=data, y=data, batch_size=128, epochs=50, validation_split=0.1, verbose=2 ) Model: \"model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ pixel_conv_layer (PixelConvL (None, 28, 28, 128) 6400 _________________________________________________________________ residual_block (ResidualBloc (None, 28, 28, 128) 98624 _________________________________________________________________ residual_block_1 (ResidualBl (None, 28, 28, 128) 98624 _________________________________________________________________ residual_block_2 (ResidualBl (None, 28, 28, 128) 98624 _________________________________________________________________ residual_block_3 (ResidualBl (None, 28, 28, 128) 98624 _________________________________________________________________ residual_block_4 (ResidualBl (None, 28, 28, 128) 98624 _________________________________________________________________ pixel_conv_layer_6 (PixelCon (None, 28, 28, 128) 16512 _________________________________________________________________ pixel_conv_layer_7 (PixelCon (None, 28, 28, 128) 16512 _________________________________________________________________ conv2d_18 (Conv2D) (None, 28, 28, 1) 129 ================================================================= Total params: 532,673 Trainable params: 532,673 Non-trainable params: 0 _________________________________________________________________ Epoch 1/50 493/493 - 18s - loss: 0.1163 - val_loss: 0.0937 Epoch 2/50 493/493 - 18s - loss: 0.0911 - val_loss: 0.0908 Epoch 3/50 493/493 - 18s - loss: 0.0889 - val_loss: 0.0890 Epoch 4/50 493/493 - 18s - loss: 0.0878 - val_loss: 0.0879 Epoch 5/50 493/493 - 18s - loss: 0.0871 - val_loss: 0.0868 Epoch 6/50 493/493 - 18s - loss: 0.0865 - val_loss: 0.0875 Epoch 7/50 493/493 - 18s - loss: 0.0861 - val_loss: 0.0857 Epoch 8/50 493/493 - 18s - loss: 0.0857 - val_loss: 0.0860 Epoch 9/50 493/493 - 18s - loss: 0.0854 - val_loss: 0.0855 Epoch 10/50 493/493 - 18s - loss: 0.0850 - val_loss: 0.0853 Epoch 11/50 493/493 - 18s - loss: 0.0848 - val_loss: 0.0849 Epoch 12/50 493/493 - 18s - loss: 0.0846 - val_loss: 0.0850 
Epoch 13/50 493/493 - 18s - loss: 0.0844 - val_loss: 0.0849 Epoch 14/50 493/493 - 18s - loss: 0.0842 - val_loss: 0.0845 Epoch 15/50 493/493 - 18s - loss: 0.0840 - val_loss: 0.0850 Epoch 16/50 493/493 - 18s - loss: 0.0839 - val_loss: 0.0850 Epoch 17/50 493/493 - 18s - loss: 0.0837 - val_loss: 0.0843 Epoch 18/50 493/493 - 18s - loss: 0.0836 - val_loss: 0.0842 Epoch 19/50 493/493 - 18s - loss: 0.0835 - val_loss: 0.0840 Epoch 20/50 493/493 - 18s - loss: 0.0834 - val_loss: 0.0842 Epoch 21/50 493/493 - 18s - loss: 0.0832 - val_loss: 0.0837 Epoch 22/50 493/493 - 18s - loss: 0.0831 - val_loss: 0.0839 Epoch 23/50 493/493 - 18s - loss: 0.0830 - val_loss: 0.0835 Epoch 24/50 493/493 - 18s - loss: 0.0829 - val_loss: 0.0839 Epoch 25/50 493/493 - 18s - loss: 0.0829 - val_loss: 0.0835 Epoch 26/50 493/493 - 18s - loss: 0.0827 - val_loss: 0.0836 Epoch 27/50 493/493 - 18s - loss: 0.0827 - val_loss: 0.0834 Epoch 28/50 493/493 - 18s - loss: 0.0826 - val_loss: 0.0834 Epoch 29/50 493/493 - 18s - loss: 0.0825 - val_loss: 0.0834 Epoch 30/50 493/493 - 18s - loss: 0.0824 - val_loss: 0.0834 Epoch 31/50 493/493 - 18s - loss: 0.0823 - val_loss: 0.0832 Epoch 32/50 493/493 - 18s - loss: 0.0823 - val_loss: 0.0832 Epoch 33/50 493/493 - 18s - loss: 0.0822 - val_loss: 0.0833 Epoch 34/50 493/493 - 18s - loss: 0.0821 - val_loss: 0.0835 Epoch 35/50 493/493 - 18s - loss: 0.0821 - val_loss: 0.0834 Epoch 36/50 493/493 - 18s - loss: 0.0820 - val_loss: 0.0837 Epoch 37/50 493/493 - 18s - loss: 0.0820 - val_loss: 0.0832 Epoch 38/50 493/493 - 18s - loss: 0.0819 - val_loss: 0.0834 Epoch 39/50 493/493 - 18s - loss: 0.0818 - val_loss: 0.0834 Epoch 40/50 493/493 - 18s - loss: 0.0818 - val_loss: 0.0832 Epoch 41/50 493/493 - 18s - loss: 0.0817 - val_loss: 0.0834 Epoch 42/50 493/493 - 18s - loss: 0.0817 - val_loss: 0.0836 Epoch 43/50 493/493 - 18s - loss: 0.0816 - val_loss: 0.0833 Epoch 44/50 493/493 - 18s - loss: 0.0816 - val_loss: 0.0835 Epoch 45/50 493/493 - 18s - loss: 0.0815 - val_loss: 0.0832 Epoch 46/50 493/493 - 18s - loss: 0.0815 - val_loss: 0.0830 Epoch 47/50 493/493 - 18s - loss: 0.0814 - val_loss: 0.0831 Epoch 48/50 493/493 - 18s - loss: 0.0813 - val_loss: 0.0832 Epoch 49/50 493/493 - 18s - loss: 0.0813 - val_loss: 0.0834 Epoch 50/50 493/493 - 18s - loss: 0.0813 - val_loss: 0.0832 Demonstration The PixelCNN cannot generate the full image at once. Instead, it must generate each pixel in order, append the last generated pixel to the current image, and feed the image back into the model to repeat the process. from IPython.display import Image, display # Create an empty array of pixels. batch = 4 pixels = np.zeros(shape=(batch,) + (pixel_cnn.input_shape)[1:]) batch, rows, cols, channels = pixels.shape # Iterate over the pixels because generation has to be done sequentially pixel by pixel. for row in tqdm(range(rows)): for col in range(cols): for channel in range(channels): # Feed the whole array and retrieving the pixel value probabilities for the next # pixel. probs = pixel_cnn.predict(pixels)[:, row, col, channel] # Use the probabilities to pick pixel values and append the values to the image # frame. pixels[:, row, col, channel] = tf.math.ceil( probs - tf.random.uniform(probs.shape) ) def deprocess_image(x): # Stack the single channeled black and white image to RGB values. x = np.stack((x, x, x), 2) # Undo preprocessing x *= 255.0 # Convert to uint8 and clip to the valid range [0, 255] x = np.clip(x, 0, 255).astype(\"uint8\") return x # Iterate over the generated images and plot them with matplotlib. 
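# deprocess_image stacks the single grayscale channel into RGB and rescales values to the 0-255 uint8 range,
# so each generated sample can be saved and displayed as an ordinary PNG.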
for i, pic in enumerate(pixels): keras.preprocessing.image.save_img( \"generated_image_{}.png\".format(i), deprocess_image(np.squeeze(pic, -1)) ) display(Image(\"generated_image_0.png\")) display(Image(\"generated_image_1.png\")) display(Image(\"generated_image_2.png\")) display(Image(\"generated_image_3.png\")) 100%|██████████| 28/28 [00:18<00:00, 1.51it/s] png png png Implement a miniature version of GPT and train it to generate text. Introduction This example demonstrates how to implement an autoregressive language model using a miniature version of the GPT model. The model consists of a single Transformer block with causal masking in its attention layer. We use the text from the IMDB sentiment classification dataset for training and generate new movie reviews for a given prompt. When using this script with your own dataset, make sure it has at least 1 million words. This example should be run with tf-nightly>=2.3.0-dev20200531 or with TensorFlow 2.3 or higher. References: GPT GPT-2 GPT-3 Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.layers import TextVectorization import numpy as np import os import re import string import random Implement a Transformer block as a layer def causal_attention_mask(batch_size, n_dest, n_src, dtype): \"\"\" Mask the upper half of the dot product matrix in self attention. This prevents flow of information from future tokens to current token. 1's in the lower triangle, counting from the lower right corner. \"\"\" i = tf.range(n_dest)[:, None] j = tf.range(n_src) m = i >= j - n_src + n_dest mask = tf.cast(m, dtype) mask = tf.reshape(mask, [1, n_dest, n_src]) mult = tf.concat( [tf.expand_dims(batch_size, -1), tf.constant([1, 1], dtype=tf.int32)], 0 ) return tf.tile(mask, mult) class TransformerBlock(layers.Layer): def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1): super(TransformerBlock, self).__init__() self.att = layers.MultiHeadAttention(num_heads, embed_dim) self.ffn = keras.Sequential( [layers.Dense(ff_dim, activation=\"relu\"), layers.Dense(embed_dim),] ) self.layernorm1 = layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = layers.LayerNormalization(epsilon=1e-6) self.dropout1 = layers.Dropout(rate) self.dropout2 = layers.Dropout(rate) def call(self, inputs): input_shape = tf.shape(inputs) batch_size = input_shape[0] seq_len = input_shape[1] causal_mask = causal_attention_mask(batch_size, seq_len, seq_len, tf.bool) attention_output = self.att(inputs, inputs, attention_mask=causal_mask) attention_output = self.dropout1(attention_output) out1 = self.layernorm1(inputs + attention_output) ffn_output = self.ffn(out1) ffn_output = self.dropout2(ffn_output) return self.layernorm2(out1 + ffn_output) Implement an embedding layer Create two seperate embedding layers: one for tokens and one for token index (positions). 
class TokenAndPositionEmbedding(layers.Layer): def __init__(self, maxlen, vocab_size, embed_dim): super(TokenAndPositionEmbedding, self).__init__() self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim) self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim) def call(self, x): maxlen = tf.shape(x)[-1] positions = tf.range(start=0, limit=maxlen, delta=1) positions = self.pos_emb(positions) x = self.token_emb(x) return x + positions Implement the miniature GPT model vocab_size = 20000 # Only consider the top 20k words maxlen = 80 # Max sequence size embed_dim = 256 # Embedding size for each token num_heads = 2 # Number of attention heads feed_forward_dim = 256 # Hidden layer size in feed forward network inside transformer def create_model(): inputs = layers.Input(shape=(maxlen,), dtype=tf.int32) embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim) x = embedding_layer(inputs) transformer_block = TransformerBlock(embed_dim, num_heads, feed_forward_dim) x = transformer_block(x) outputs = layers.Dense(vocab_size)(x) model = keras.Model(inputs=inputs, outputs=[outputs, x]) loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile( \"adam\", loss=[loss_fn, None], ) # No loss and optimization based on word embeddings from transformer block return model Prepare the data for word-level language modelling Download the IMDB dataset and combine the training and test sets for a text generation task. !curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -xf aclImdb_v1.tar.gz batch_size = 128 # The dataset contains each review in a separate text file # The text files are present in four different folders # Create a list of all files filenames = [] directories = [ \"aclImdb/train/pos\", \"aclImdb/train/neg\", \"aclImdb/test/pos\", \"aclImdb/test/neg\", ] for dir in directories: for f in os.listdir(dir): filenames.append(os.path.join(dir, f)) print(f\"{len(filenames)} files\") # Create a dataset from text files random.shuffle(filenames) text_ds = tf.data.TextLineDataset(filenames) text_ds = text_ds.shuffle(buffer_size=256) text_ds = text_ds.batch(batch_size) def custom_standardization(input_string): \"\"\" Remove html line-break tags and handle punctuation \"\"\" lowercased = tf.strings.lower(input_string) stripped_html = tf.strings.regex_replace(lowercased, \"<br />
\", \" \") return tf.strings.regex_replace(stripped_html, f\"([{string.punctuation}])\", r\" \1\") # Create a vectorization layer and adapt it to the text vectorize_layer = TextVectorization( standardize=custom_standardization, max_tokens=vocab_size - 1, output_mode=\"int\", output_sequence_length=maxlen + 1, ) vectorize_layer.adapt(text_ds) vocab = vectorize_layer.get_vocabulary() # To get words back from token indices def prepare_lm_inputs_labels(text): \"\"\" Shift word sequences by 1 position so that the target for position (i) is word at position (i+1). The model will use all words up till position (i) to predict the next word. \"\"\" text = tf.expand_dims(text, -1) tokenized_sentences = vectorize_layer(text) x = tokenized_sentences[:, :-1] y = tokenized_sentences[:, 1:] return x, y text_ds = text_ds.map(prepare_lm_inputs_labels) text_ds = text_ds.prefetch(tf.data.AUTOTUNE) % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 80.2M 100 80.2M 0 0 24.2M 0 0:00:03 0:00:03 --:--:-- 24.2M 50000 files Implement a Keras callback for generating text class TextGenerator(keras.callbacks.Callback): \"\"\"A callback to generate text from a trained model. 1. Feed some starting prompt to the model 2. Predict probabilities for the next token 3. Sample the next token and add it to the next input Arguments: max_tokens: Integer, the number of tokens to be generated after prompt. start_tokens: List of integers, the token indices for the starting prompt. index_to_word: List of strings, obtained from the TextVectorization layer. top_k: Integer, sample from the `top_k` token predictions. print_every: Integer, print after this many epochs. \"\"\" def __init__( self, max_tokens, start_tokens, index_to_word, top_k=10, print_every=1 ): self.max_tokens = max_tokens self.start_tokens = start_tokens self.index_to_word = index_to_word self.print_every = print_every self.k = top_k def sample_from(self, logits): logits, indices = tf.math.top_k(logits, k=self.k, sorted=True) indices = np.asarray(indices).astype(\"int32\") preds = keras.activations.softmax(tf.expand_dims(logits, 0))[0] preds = np.asarray(preds).astype(\"float32\") return np.random.choice(indices, p=preds) def detokenize(self, number): return self.index_to_word[number] def on_epoch_end(self, epoch, logs=None): start_tokens = [_ for _ in self.start_tokens] if (epoch + 1) % self.print_every != 0: return num_tokens_generated = 0 tokens_generated = [] while num_tokens_generated <= self.max_tokens: pad_len = maxlen - len(start_tokens) sample_index = len(start_tokens) - 1 if pad_len < 0: x = start_tokens[:maxlen] sample_index = maxlen - 1 elif pad_len > 0: x = start_tokens + [0] * pad_len else: x = start_tokens x = np.array([x]) y, _ = self.model.predict(x) sample_token = self.sample_from(y[0][sample_index]) tokens_generated.append(sample_token) start_tokens.append(sample_token) num_tokens_generated = len(tokens_generated) txt = \" \".join( [self.detokenize(_) for _ in self.start_tokens + tokens_generated] ) print(f\"generated text:\n{txt}\n\") # Tokenize starting prompt word_to_index = {} for index, word in enumerate(vocab): word_to_index[word] = index start_prompt = \"this movie is\" start_tokens = [word_to_index.get(_, 1) for _ in start_prompt.split()] num_tokens_generated = 40 text_gen_callback = TextGenerator(num_tokens_generated, start_tokens, vocab) Train the model Note: This code should preferably be run on GPU. 
model = create_model() model.fit(text_ds, verbose=2, epochs=25, callbacks=[text_gen_callback]) Epoch 1/25 391/391 - 135s - loss: 5.5949 - dense_2_loss: 5.5949 generated text: this movie is a great movie . the film is so many other comments . the plot and some people were [UNK] to do . i think the story is about that it is not a good movie . there are very good actors Epoch 2/25 391/391 - 135s - loss: 4.7108 - dense_2_loss: 4.7108 generated text: this movie is one of the worst movies i have ever seen . i have no doubt the better movies of this one 's worst movies i have ever seen . i don 't know what the hell , and i 'm not going Epoch 3/25 391/391 - 135s - loss: 4.4620 - dense_2_loss: 4.4620 generated text: this movie is a very good movie , i think i am not a kid . the story is a great movie . the director who is a great director who likes the director 's film . this was not funny and the director Epoch 4/25 391/391 - 136s - loss: 4.3047 - dense_2_loss: 4.3047 generated text: this movie is a very good story and very well . this movie is one of the worst movies i have ever seen , and there are some good actors and actresses in the movie , it is not the worst . the script Epoch 5/25 391/391 - 135s - loss: 4.1840 - dense_2_loss: 4.1840 generated text: this movie is a very good movie . it is the best thing about it 's a very good movie . it 's not funny , very , it 's so bad that it 's so funny , it 's like most romantic movie Epoch 6/25 391/391 - 135s - loss: 4.0834 - dense_2_loss: 4.0834 generated text: this movie is the worst . the acting is awful . i have to admit that you 're just watching this film as i have to say that it is a [UNK] with [UNK] [UNK] \" in the last ten years . i think Epoch 7/25 391/391 - 135s - loss: 3.9987 - dense_2_loss: 3.9987 generated text: this movie is really about the acting is good and the script . i don 't think this is just a waste of movie . it was so terrible that it wasn 't funny , but that 's what it was made in movies Epoch 8/25 391/391 - 134s - loss: 3.9242 - dense_2_loss: 3.9242 generated text: this movie is so bad . the story itself is about a family guy named jack , who is told by a father , who is trying to get to help him to commit . he has the same problem and the [UNK] . Epoch 9/25 391/391 - 135s - loss: 3.8579 - dense_2_loss: 3.8579 generated text: this movie is not bad , it does not deserve one . i can say that i was able to sit at , relax [UNK] . i was wrong , and i think i was able to buy the dvd , i would say Epoch 10/25 391/391 - 134s - loss: 3.7989 - dense_2_loss: 3.7989 generated text: this movie is very funny ! its very funny . a touching movie about three women who don 't know who is not to go on with a movie that has a lot of fun to watch . it is funny . the main Epoch 11/25 391/391 - 134s - loss: 3.7459 - dense_2_loss: 3.7459 generated text: this movie is not the best movie i 've seen in a long time . this movie was just about a guy who gets killed for one . . i saw this movie at a time when i first saw it in the movie Epoch 12/25 391/391 - 134s - loss: 3.6974 - dense_2_loss: 3.6974 generated text: this movie is a good example of how many films have seen and many films , that are often overlooked , in the seventies , in fact it is more enjoyable than the average viewer has some interesting parallels . this movie is based Epoch 13/25 391/391 - 134s - loss: 3.6534 - dense_2_loss: 3.6534 generated text: this movie is so bad ! i think this is one . 
i really didn 't think anybody who gets the impression that the people who is trying to find themselves to be funny . . there 's the humor is no punchline ? Epoch 14/25 391/391 - 134s - loss: 3.6123 - dense_2_loss: 3.6123 generated text: this movie is really bad . the actors are good ,the acting is great . a must see [UNK] the worst in history of all time . the plot is so bad that you can 't even make a bad movie about the bad Epoch 15/25 391/391 - 134s - loss: 3.5745 - dense_2_loss: 3.5745 generated text: this movie is one of the worst movies i 've ever had . the acting and direction are terrible . what i 've seen , i 've watched it several times , and i can 't really believe how to make a movie about Epoch 16/25 391/391 - 134s - loss: 3.5404 - dense_2_loss: 3.5404 generated text: this movie is so bad it is . that it is supposed to be a comedy . the script , which is just as bad as some movies are bad . if you 're looking for it , if you 're in the mood Epoch 17/25 391/391 - 134s - loss: 3.5083 - dense_2_loss: 3.5083 generated text: this movie is one of all bad movies i have a fan ever seen . i have seen a good movies , this isn 't the worst . i 've seen in a long time . the story involves twins , a priest and Epoch 18/25 391/391 - 134s - loss: 3.4789 - dense_2_loss: 3.4789 generated text: this movie is a great movie . it 's a shame that it was hard to see that it was . this movie is a good movie . the movie itself is a complete waste of time and time you have a bad rant Epoch 19/25 391/391 - 134s - loss: 3.4513 - dense_2_loss: 3.4513 generated text: this movie is not one of the most moving movies i have ever seen . the story is about the plot is just so ridiculous that i could have done it with the actors . the actors are great and the acting is great Epoch 20/25 391/391 - 134s - loss: 3.4251 - dense_2_loss: 3.4251 generated text: this movie is about a man named todd . it is a funny movie that has a lot of nerve on screen . it is not just the right ingredients and a movie . it is a great film , and it is a Epoch 21/25 391/391 - 134s - loss: 3.4011 - dense_2_loss: 3.4011 generated text: this movie is not only funny , but i have never seen it before . the other comments i am not kidding or have been [UNK] and the worst movie i have to be . . there is something that is no where else Epoch 22/25 391/391 - 134s - loss: 3.3787 - dense_2_loss: 3.3787 generated text: this movie is a very entertaining , very funny , and very funny , very well written and very nicely directed movie . this was done , very well done , with very good acting and a wonderful script , a very good movie Epoch 23/25 391/391 - 133s - loss: 3.3575 - dense_2_loss: 3.3575 generated text: this movie is the kind of movie you will not be disappointed . it 's like an [UNK] [UNK] , who is a movie . it 's a great story and the characters are great , the actors are good , their [UNK] , Epoch 24/25 391/391 - 134s - loss: 3.3372 - dense_2_loss: 3.3372 generated text: this movie is a classic 80s horror movie . this has a great premise and the characters is a bit too typical [UNK] and [UNK] \" with the [UNK] \" . it 's all that makes sense . the characters were shallow and unrealistic Epoch 25/25 391/391 - 134s - loss: 3.3182 - dense_2_loss: 3.3182 generated text: this movie is not the worst movie i have ever seen . it 's a movie where i 've never seen it before and i 've seen it again and again , again , i can 't believe it was made in a theatre Convolutional Variational AutoEncoder (VAE) trained on MNIST digits. 
Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Create a sampling layer class Sampling(layers.Layer): \"\"\"Uses (z_mean, z_log_var) to sample z, the vector encoding a digit.\"\"\" def call(self, inputs): z_mean, z_log_var = inputs batch = tf.shape(z_mean)[0] dim = tf.shape(z_mean)[1] epsilon = tf.keras.backend.random_normal(shape=(batch, dim)) return z_mean + tf.exp(0.5 * z_log_var) * epsilon Build the encoder latent_dim = 2 encoder_inputs = keras.Input(shape=(28, 28, 1)) x = layers.Conv2D(32, 3, activation=\"relu\", strides=2, padding=\"same\")(encoder_inputs) x = layers.Conv2D(64, 3, activation=\"relu\", strides=2, padding=\"same\")(x) x = layers.Flatten()(x) x = layers.Dense(16, activation=\"relu\")(x) z_mean = layers.Dense(latent_dim, name=\"z_mean\")(x) z_log_var = layers.Dense(latent_dim, name=\"z_log_var\")(x) z = Sampling()([z_mean, z_log_var]) encoder = keras.Model(encoder_inputs, [z_mean, z_log_var, z], name=\"encoder\") encoder.summary() Model: \"encoder\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 28, 28, 1)] 0 __________________________________________________________________________________________________ conv2d (Conv2D) (None, 14, 14, 32) 320 input_1[0][0] __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 7, 7, 64) 18496 conv2d[0][0] __________________________________________________________________________________________________ flatten (Flatten) (None, 3136) 0 conv2d_1[0][0] __________________________________________________________________________________________________ dense (Dense) (None, 16) 50192 flatten[0][0] __________________________________________________________________________________________________ z_mean (Dense) (None, 2) 34 dense[0][0] __________________________________________________________________________________________________ z_log_var (Dense) (None, 2) 34 dense[0][0] __________________________________________________________________________________________________ sampling (Sampling) (None, 2) 0 z_mean[0][0] z_log_var[0][0] ================================================================================================== Total params: 69,076 Trainable params: 69,076 Non-trainable params: 0 __________________________________________________________________________________________________ Build the decoder latent_inputs = keras.Input(shape=(latent_dim,)) x = layers.Dense(7 * 7 * 64, activation=\"relu\")(latent_inputs) x = layers.Reshape((7, 7, 64))(x) x = layers.Conv2DTranspose(64, 3, activation=\"relu\", strides=2, padding=\"same\")(x) x = layers.Conv2DTranspose(32, 3, activation=\"relu\", strides=2, padding=\"same\")(x) decoder_outputs = layers.Conv2DTranspose(1, 3, activation=\"sigmoid\", padding=\"same\")(x) decoder = keras.Model(latent_inputs, decoder_outputs, name=\"decoder\") decoder.summary() Model: \"decoder\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, 2)] 0 _________________________________________________________________ dense_1 (Dense) (None, 3136) 9408 _________________________________________________________________ 
reshape (Reshape) (None, 7, 7, 64) 0 _________________________________________________________________ conv2d_transpose (Conv2DTran (None, 14, 14, 64) 36928 _________________________________________________________________ conv2d_transpose_1 (Conv2DTr (None, 28, 28, 32) 18464 _________________________________________________________________ conv2d_transpose_2 (Conv2DTr (None, 28, 28, 1) 289 ================================================================= Total params: 65,089 Trainable params: 65,089 Non-trainable params: 0 _________________________________________________________________ Define the VAE as a Model with a custom train_step class VAE(keras.Model): def __init__(self, encoder, decoder, **kwargs): super(VAE, self).__init__(**kwargs) self.encoder = encoder self.decoder = decoder self.total_loss_tracker = keras.metrics.Mean(name=\"total_loss\") self.reconstruction_loss_tracker = keras.metrics.Mean( name=\"reconstruction_loss\" ) self.kl_loss_tracker = keras.metrics.Mean(name=\"kl_loss\") @property def metrics(self): return [ self.total_loss_tracker, self.reconstruction_loss_tracker, self.kl_loss_tracker, ] def train_step(self, data): with tf.GradientTape() as tape: z_mean, z_log_var, z = self.encoder(data) reconstruction = self.decoder(z) reconstruction_loss = tf.reduce_mean( tf.reduce_sum( keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2) ) ) kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var)) kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1)) total_loss = reconstruction_loss + kl_loss grads = tape.gradient(total_loss, self.trainable_weights) self.optimizer.apply_gradients(zip(grads, self.trainable_weights)) self.total_loss_tracker.update_state(total_loss) self.reconstruction_loss_tracker.update_state(reconstruction_loss) self.kl_loss_tracker.update_state(kl_loss) return { \"loss\": self.total_loss_tracker.result(), \"reconstruction_loss\": self.reconstruction_loss_tracker.result(), \"kl_loss\": self.kl_loss_tracker.result(), } Train the VAE (x_train, _), (x_test, _) = keras.datasets.mnist.load_data() mnist_digits = np.concatenate([x_train, x_test], axis=0) mnist_digits = np.expand_dims(mnist_digits, -1).astype(\"float32\") / 255 vae = VAE(encoder, decoder) vae.compile(optimizer=keras.optimizers.Adam()) vae.fit(mnist_digits, epochs=30, batch_size=128) Epoch 1/30 547/547 [==============================] - 35s 62ms/step - loss: 255.8020 - reconstruction_loss: 208.5391 - kl_loss: 2.9673 Epoch 2/30 547/547 [==============================] - 38s 69ms/step - loss: 178.8786 - reconstruction_loss: 168.4294 - kl_loss: 5.4217 Epoch 3/30 547/547 [==============================] - 39s 72ms/step - loss: 166.0320 - reconstruction_loss: 158.7979 - kl_loss: 5.8015 Epoch 4/30 547/547 [==============================] - 38s 69ms/step - loss: 161.1647 - reconstruction_loss: 154.5963 - kl_loss: 5.9926 Epoch 5/30 547/547 [==============================] - 40s 72ms/step - loss: 152.0941 - reconstruction_loss: 145.7407 - kl_loss: 6.4654 Epoch 14/30 547/547 [==============================] - 38s 70ms/step - loss: 148.8709 - reconstruction_loss: 142.5713 - kl_loss: 6.6179 Epoch 27/30 191/547 [=========>....................] 
- ETA: 25s - loss: 149.0829 - reconstruction_loss: 142.2507 - kl_loss: 6.6429 Display a grid of sampled digits import matplotlib.pyplot as plt def plot_latent_space(vae, n=30, figsize=15): # display a n*n 2D manifold of digits digit_size = 28 scale = 1.0 figure = np.zeros((digit_size * n, digit_size * n)) # linearly spaced coordinates corresponding to the 2D plot # of digit classes in the latent space grid_x = np.linspace(-scale, scale, n) grid_y = np.linspace(-scale, scale, n)[::-1] for i, yi in enumerate(grid_y): for j, xi in enumerate(grid_x): z_sample = np.array([[xi, yi]]) x_decoded = vae.decoder.predict(z_sample) digit = x_decoded[0].reshape(digit_size, digit_size) figure[ i * digit_size : (i + 1) * digit_size, j * digit_size : (j + 1) * digit_size, ] = digit plt.figure(figsize=(figsize, figsize)) start_range = digit_size // 2 end_range = n * digit_size + start_range pixel_range = np.arange(start_range, end_range, digit_size) sample_range_x = np.round(grid_x, 1) sample_range_y = np.round(grid_y, 1) plt.xticks(pixel_range, sample_range_x) plt.yticks(pixel_range, sample_range_y) plt.xlabel(\"z[0]\") plt.ylabel(\"z[1]\") plt.imshow(figure, cmap=\"Greys_r\") plt.show() plot_latent_space(vae) png Display how the latent space clusters different digit classes def plot_label_clusters(vae, data, labels): # display a 2D plot of the digit classes in the latent space z_mean, _, _ = vae.encoder.predict(data) plt.figure(figsize=(12, 10)) plt.scatter(z_mean[:, 0], z_mean[:, 1], c=labels) plt.colorbar() plt.xlabel(\"z[0]\") plt.ylabel(\"z[1]\") plt.show() (x_train, y_train), _ = keras.datasets.mnist.load_data() x_train = np.expand_dims(x_train, -1).astype(\"float32\") / 255 plot_label_clusters(vae, x_train, y_train) png Training a VQ-VAE for image reconstruction and codebook sampling for generation. In this example, we will develop a Vector Quantized Variational Autoencoder (VQ-VAE). VQ-VAE was proposed in Neural Discrete Representation Learning by van der Oord et al. In traditional VAEs, the latent space is continuous and is sampled from a Gaussian distribution. It is generally harder to learn such a continuous distribution via gradient descent. VQ-VAEs, on the other hand, operate on a discrete latent space, making the optimization problem simpler. It does so by maintaining a discrete codebook. The codebook is developed by discretizing the distance between continuous embeddings and the encoded outputs. These discrete code words are then fed to the decoder, which is trained to generate reconstructed samples. For a detailed overview of VQ-VAEs, please refer to the original paper and this video explanation. If you need a refresher on VAEs, you can refer to this book chapter. VQ-VAEs are one of the main recipes behind DALL-E and the idea of a codebook is used in VQ-GANs. This example uses references from the official VQ-VAE tutorial from DeepMind. To run this example, you will need TensorFlow 2.5 or higher, as well as TensorFlow Probability, which can be installed using the command below. !pip install -q tensorflow-probability Imports import numpy as np import matplotlib.pyplot as plt from tensorflow import keras from tensorflow.keras import layers import tensorflow_probability as tfp import tensorflow as tf VectorQuantizer layer Here, we will implement a custom layer to encapsulate the vector quantizer logic, which is the central component of VQ-VAEs. Consider an output from the encoder, with shape (batch_size, height, width, num_channels). 
The vector quantizer will first flatten this output, only keeping the num_channels dimension intact. So, the shape would become (batch_size * height * width, num_channels). The rationale behind this is to treat the total number of channels as the space for the latent embeddings. An embedding table is then initialized to learn a codebook. We measure the (squared) L2 distance between the flattened encoder outputs and the code words of this codebook. We take the code that yields the minimum distance, and we apply one-hot encoding to achieve quantization. This way, the code yielding the minimum distance to the corresponding encoder output is mapped as one and the remaining codes are mapped as zeros. Since the quantization process is not differentiable, we apply a straight-through estimator in between the decoder and the encoder, so that the decoder gradients are directly propagated to the encoder. As the encoder and decoder share the same channel space, the hope is that the decoder gradients will still be meaningful to the encoder.
class VectorQuantizer(layers.Layer): def __init__(self, num_embeddings, embedding_dim, beta=0.25, **kwargs): super().__init__(**kwargs) self.embedding_dim = embedding_dim self.num_embeddings = num_embeddings self.beta = ( beta # This parameter is best kept between [0.25, 2] as per the paper. ) # Initialize the embeddings which we will quantize. w_init = tf.random_uniform_initializer() self.embeddings = tf.Variable( initial_value=w_init( shape=(self.embedding_dim, self.num_embeddings), dtype=\"float32\" ), trainable=True, name=\"embeddings_vqvae\", ) def call(self, x): # Calculate the input shape and then flatten the inputs, keeping `embedding_dim` intact. input_shape = tf.shape(x) flattened = tf.reshape(x, [-1, self.embedding_dim]) # Quantization. encoding_indices = self.get_code_indices(flattened) encodings = tf.one_hot(encoding_indices, self.num_embeddings) quantized = tf.matmul(encodings, self.embeddings, transpose_b=True) quantized = tf.reshape(quantized, input_shape) # Calculate vector quantization loss and add that to the layer. You can learn more # about adding losses to different layers here: # https://keras.io/guides/making_new_layers_and_models_via_subclassing/. Check # the original paper to get a handle on the formulation of the loss function. commitment_loss = self.beta * tf.reduce_mean( (tf.stop_gradient(quantized) - x) ** 2 ) codebook_loss = tf.reduce_mean((quantized - tf.stop_gradient(x)) ** 2) self.add_loss(commitment_loss + codebook_loss) # Straight-through estimator. quantized = x + tf.stop_gradient(quantized - x) return quantized def get_code_indices(self, flattened_inputs): # Calculate the squared L2 distance between the inputs and the codes. similarity = tf.matmul(flattened_inputs, self.embeddings) distances = ( tf.reduce_sum(flattened_inputs ** 2, axis=1, keepdims=True) + tf.reduce_sum(self.embeddings ** 2, axis=0) - 2 * similarity ) # Derive the indices for minimum distances. encoding_indices = tf.argmin(distances, axis=1) return encoding_indices
A note on straight-through estimation: This line of code does the straight-through estimation part: quantized = x + tf.stop_gradient(quantized - x). During backpropagation, (quantized - x) won't be included in the computation graph and the gradients obtained for quantized will be copied to the inputs. Thanks to this video for helping me understand this technique.
Encoder and decoder We will now implement the encoder and the decoder for the VQ-VAE.
We will keep them small so that their capacity is a good fit for the MNIST dataset, which we will use to demonstrate the results. The definitions of the encoder and decoder come from this example. def get_encoder(latent_dim=16): encoder_inputs = keras.Input(shape=(28, 28, 1)) x = layers.Conv2D(32, 3, activation=\"relu\", strides=2, padding=\"same\")( encoder_inputs ) x = layers.Conv2D(64, 3, activation=\"relu\", strides=2, padding=\"same\")(x) encoder_outputs = layers.Conv2D(latent_dim, 1, padding=\"same\")(x) return keras.Model(encoder_inputs, encoder_outputs, name=\"encoder\") def get_decoder(latent_dim=16): latent_inputs = keras.Input(shape=get_encoder().output.shape[1:]) x = layers.Conv2DTranspose(64, 3, activation=\"relu\", strides=2, padding=\"same\")( latent_inputs ) x = layers.Conv2DTranspose(32, 3, activation=\"relu\", strides=2, padding=\"same\")(x) decoder_outputs = layers.Conv2DTranspose(1, 3, padding=\"same\")(x) return keras.Model(latent_inputs, decoder_outputs, name=\"decoder\") Standalone VQ-VAE model def get_vqvae(latent_dim=16, num_embeddings=64): vq_layer = VectorQuantizer(num_embeddings, latent_dim, name=\"vector_quantizer\") encoder = get_encoder(latent_dim) decoder = get_decoder(latent_dim) inputs = keras.Input(shape=(28, 28, 1)) encoder_outputs = encoder(inputs) quantized_latents = vq_layer(encoder_outputs) reconstructions = decoder(quantized_latents) return keras.Model(inputs, reconstructions, name=\"vq_vae\") get_vqvae().summary() Model: \"vq_vae\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_4 (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ encoder (Functional) (None, 7, 7, 16) 19856 _________________________________________________________________ vector_quantizer (VectorQuan (None, 7, 7, 16) 1024 _________________________________________________________________ decoder (Functional) (None, 28, 28, 1) 28033 ================================================================= Total params: 48,913 Trainable params: 48,913 Non-trainable params: 0 _________________________________________________________________ Note that the output channels of the encoder should match the latent_dim for the vector quantizer. Wrapping up the training loop inside VQVAETrainer class VQVAETrainer(keras.models.Model): def __init__(self, train_variance, latent_dim=32, num_embeddings=128, **kwargs): super(VQVAETrainer, self).__init__(**kwargs) self.train_variance = train_variance self.latent_dim = latent_dim self.num_embeddings = num_embeddings self.vqvae = get_vqvae(self.latent_dim, self.num_embeddings) self.total_loss_tracker = keras.metrics.Mean(name=\"total_loss\") self.reconstruction_loss_tracker = keras.metrics.Mean( name=\"reconstruction_loss\" ) self.vq_loss_tracker = keras.metrics.Mean(name=\"vq_loss\") @property def metrics(self): return [ self.total_loss_tracker, self.reconstruction_loss_tracker, self.vq_loss_tracker, ] def train_step(self, x): with tf.GradientTape() as tape: # Outputs from the VQ-VAE. reconstructions = self.vqvae(x) # Calculate the losses. reconstruction_loss = ( tf.reduce_mean((x - reconstructions) ** 2) / self.train_variance ) total_loss = reconstruction_loss + sum(self.vqvae.losses) # Backpropagation. grads = tape.gradient(total_loss, self.vqvae.trainable_variables) self.optimizer.apply_gradients(zip(grads, self.vqvae.trainable_variables)) # Loss tracking. 
self.total_loss_tracker.update_state(total_loss) self.reconstruction_loss_tracker.update_state(reconstruction_loss) self.vq_loss_tracker.update_state(sum(self.vqvae.losses)) # Log results. return { \"loss\": self.total_loss_tracker.result(), \"reconstruction_loss\": self.reconstruction_loss_tracker.result(), \"vqvae_loss\": self.vq_loss_tracker.result(), } Load and preprocess the MNIST dataset (x_train, _), (x_test, _) = keras.datasets.mnist.load_data() x_train = np.expand_dims(x_train, -1) x_test = np.expand_dims(x_test, -1) x_train_scaled = (x_train / 255.0) - 0.5 x_test_scaled = (x_test / 255.0) - 0.5 data_variance = np.var(x_train / 255.0) Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz 11493376/11490434 [==============================] - 0s 0us/step Train the VQ-VAE model vqvae_trainer = VQVAETrainer(data_variance, latent_dim=16, num_embeddings=128) vqvae_trainer.compile(optimizer=keras.optimizers.Adam()) vqvae_trainer.fit(x_train_scaled, epochs=30, batch_size=128) Epoch 1/30 469/469 [==============================] - 18s 6ms/step - loss: 2.2962 - reconstruction_loss: 0.3869 - vqvae_loss: 1.5950 Epoch 2/30 469/469 [==============================] - 3s 6ms/step - loss: 2.2980 - reconstruction_loss: 0.1692 - vqvae_loss: 2.1108 Epoch 3/30 469/469 [==============================] - 3s 6ms/step - loss: 1.1356 - reconstruction_loss: 0.1281 - vqvae_loss: 0.9997 Epoch 4/30 469/469 [==============================] - 3s 6ms/step - loss: 0.6112 - reconstruction_loss: 0.1030 - vqvae_loss: 0.5031 Epoch 5/30 469/469 [==============================] - 3s 6ms/step - loss: 0.4375 - reconstruction_loss: 0.0883 - vqvae_loss: 0.3464 Epoch 6/30 469/469 [==============================] - 3s 6ms/step - loss: 0.3579 - reconstruction_loss: 0.0788 - vqvae_loss: 0.2775 Epoch 7/30 469/469 [==============================] - 3s 5ms/step - loss: 0.3197 - reconstruction_loss: 0.0725 - vqvae_loss: 0.2457 Epoch 8/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2960 - reconstruction_loss: 0.0673 - vqvae_loss: 0.2277 Epoch 9/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2798 - reconstruction_loss: 0.0640 - vqvae_loss: 0.2152 Epoch 10/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2681 - reconstruction_loss: 0.0612 - vqvae_loss: 0.2061 Epoch 11/30 469/469 [==============================] - 3s 6ms/step - loss: 0.2578 - reconstruction_loss: 0.0590 - vqvae_loss: 0.1986 Epoch 12/30 469/469 [==============================] - 3s 6ms/step - loss: 0.2551 - reconstruction_loss: 0.0574 - vqvae_loss: 0.1974 Epoch 13/30 469/469 [==============================] - 3s 6ms/step - loss: 0.2526 - reconstruction_loss: 0.0560 - vqvae_loss: 0.1961 Epoch 14/30 469/469 [==============================] - 3s 6ms/step - loss: 0.2485 - reconstruction_loss: 0.0546 - vqvae_loss: 0.1936 Epoch 15/30 469/469 [==============================] - 3s 6ms/step - loss: 0.2462 - reconstruction_loss: 0.0533 - vqvae_loss: 0.1926 Epoch 16/30 469/469 [==============================] - 3s 6ms/step - loss: 0.2445 - reconstruction_loss: 0.0523 - vqvae_loss: 0.1920 Epoch 17/30 469/469 [==============================] - 3s 6ms/step - loss: 0.2427 - reconstruction_loss: 0.0515 - vqvae_loss: 0.1911 Epoch 18/30 469/469 [==============================] - 3s 6ms/step - loss: 0.2405 - reconstruction_loss: 0.0505 - vqvae_loss: 0.1898 Epoch 19/30 469/469 [==============================] - 3s 6ms/step - loss: 0.2368 - reconstruction_loss: 0.0495 - vqvae_loss: 0.1871 Epoch 
20/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2310 - reconstruction_loss: 0.0486 - vqvae_loss: 0.1822 Epoch 21/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2245 - reconstruction_loss: 0.0475 - vqvae_loss: 0.1769 Epoch 22/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2205 - reconstruction_loss: 0.0469 - vqvae_loss: 0.1736 Epoch 23/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2195 - reconstruction_loss: 0.0465 - vqvae_loss: 0.1730 Epoch 24/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2187 - reconstruction_loss: 0.0461 - vqvae_loss: 0.1726 Epoch 25/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2180 - reconstruction_loss: 0.0458 - vqvae_loss: 0.1721 Epoch 26/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2163 - reconstruction_loss: 0.0454 - vqvae_loss: 0.1709 Epoch 27/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2156 - reconstruction_loss: 0.0452 - vqvae_loss: 0.1704 Epoch 28/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2146 - reconstruction_loss: 0.0449 - vqvae_loss: 0.1696 Epoch 29/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2139 - reconstruction_loss: 0.0447 - vqvae_loss: 0.1692 Epoch 30/30 469/469 [==============================] - 3s 5ms/step - loss: 0.2127 - reconstruction_loss: 0.0444 - vqvae_loss: 0.1682 Reconstruction results on the test set def show_subplot(original, reconstructed): plt.subplot(1, 2, 1) plt.imshow(original.squeeze() + 0.5) plt.title(\"Original\") plt.axis(\"off\") plt.subplot(1, 2, 2) plt.imshow(reconstructed.squeeze() + 0.5) plt.title(\"Reconstructed\") plt.axis(\"off\") plt.show() trained_vqvae_model = vqvae_trainer.vqvae idx = np.random.choice(len(x_test_scaled), 10) test_images = x_test_scaled[idx] reconstructions_test = trained_vqvae_model.predict(test_images) for test_image, reconstructed_image in zip(test_images, reconstructions_test): show_subplot(test_image, reconstructed_image) png png png png png png png png png png These results look decent. You are encouraged to play with different hyperparameters (especially the number of embeddings and the dimensions of the embeddings) and observe how they affect the results. Visualizing the discrete codes encoder = vqvae_trainer.vqvae.get_layer(\"encoder\") quantizer = vqvae_trainer.vqvae.get_layer(\"vector_quantizer\") encoded_outputs = encoder.predict(test_images) flat_enc_outputs = encoded_outputs.reshape(-1, encoded_outputs.shape[-1]) codebook_indices = quantizer.get_code_indices(flat_enc_outputs) codebook_indices = codebook_indices.numpy().reshape(encoded_outputs.shape[:-1]) for i in range(len(test_images)): plt.subplot(1, 2, 1) plt.imshow(test_images[i].squeeze() + 0.5) plt.title(\"Original\") plt.axis(\"off\") plt.subplot(1, 2, 2) plt.imshow(codebook_indices[i]) plt.title(\"Code\") plt.axis(\"off\") plt.show() png png png png png png png png png png The figure above shows that the discrete codes have been able to capture some regularities from the dataset. Now, you might wonder, how do we use these codes to generate new samples? Specifically, how do we sample from this codebook to create novel examples? Since these codes are discrete and we imposed a categorical distribution on them, we cannot use them yet to generate anything meaningful. These codes were not updated during the training process as well. 
So, they need to be adjusted further so that we can use them for the subsequent image generation task. The authors use a PixelCNN to train these codes so that they can be used as powerful priors to generate novel examples. PixelCNN was proposed in Conditional Image Generation with PixelCNN Decoders by van der Oord et al. We will borrow code from this example to develop a PixelCNN. It's an auto-regressive generative model where the current outputs are conditioned on the prior ones. In other words, a PixelCNN generates an image on a pixel-by-pixel basis.
PixelCNN hyperparameters num_residual_blocks = 2 num_pixelcnn_layers = 2 pixelcnn_input_shape = encoded_outputs.shape[1:-1] print(f\"Input shape of the PixelCNN: {pixelcnn_input_shape}\") Input shape of the PixelCNN: (7, 7) Don't worry about the input shape. It'll become clear in the following sections.
PixelCNN model The majority of this comes from this example. # The first layer is the PixelCNN layer. This layer simply # builds on the 2D convolutional layer, but includes masking. class PixelConvLayer(layers.Layer): def __init__(self, mask_type, **kwargs): super(PixelConvLayer, self).__init__() self.mask_type = mask_type self.conv = layers.Conv2D(**kwargs) def build(self, input_shape): # Build the conv2d layer to initialize kernel variables self.conv.build(input_shape) # Use the initialized kernel to create the mask. Mask type \"A\" blocks the center # pixel (used for the first layer), while type \"B\" allows it (used in later layers). kernel_shape = self.conv.kernel.get_shape() self.mask = np.zeros(shape=kernel_shape) self.mask[: kernel_shape[0] // 2, ...] = 1.0 self.mask[kernel_shape[0] // 2, : kernel_shape[1] // 2, ...] = 1.0 if self.mask_type == \"B\": self.mask[kernel_shape[0] // 2, kernel_shape[1] // 2, ...] = 1.0 def call(self, inputs): self.conv.kernel.assign(self.conv.kernel * self.mask) return self.conv(inputs) # Next, we build our residual block layer. # This is just a normal residual block, but based on the PixelConvLayer.
class ResidualBlock(keras.layers.Layer): def __init__(self, filters, **kwargs): super(ResidualBlock, self).__init__(**kwargs) self.conv1 = keras.layers.Conv2D( filters=filters, kernel_size=1, activation=\"relu\" ) self.pixel_conv = PixelConvLayer( mask_type=\"B\", filters=filters // 2, kernel_size=3, activation=\"relu\", padding=\"same\", ) self.conv2 = keras.layers.Conv2D( filters=filters, kernel_size=1, activation=\"relu\" ) def call(self, inputs): x = self.conv1(inputs) x = self.pixel_conv(x) x = self.conv2(x) return keras.layers.add([inputs, x]) pixelcnn_inputs = keras.Input(shape=pixelcnn_input_shape, dtype=tf.int32) ohe = tf.one_hot(pixelcnn_inputs, vqvae_trainer.num_embeddings) x = PixelConvLayer( mask_type=\"A\", filters=128, kernel_size=7, activation=\"relu\", padding=\"same\" )(ohe) for _ in range(num_residual_blocks): x = ResidualBlock(filters=128)(x) for _ in range(num_pixelcnn_layers): x = PixelConvLayer( mask_type=\"B\", filters=128, kernel_size=1, strides=1, activation=\"relu\", padding=\"valid\", )(x) out = keras.layers.Conv2D( filters=vqvae_trainer.num_embeddings, kernel_size=1, strides=1, padding=\"valid\" )(x) pixel_cnn = keras.Model(pixelcnn_inputs, out, name=\"pixel_cnn\") pixel_cnn.summary() Model: \"pixel_cnn\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_9 (InputLayer) [(None, 7, 7)] 0 _________________________________________________________________ tf.one_hot (TFOpLambda) (None, 7, 7, 128) 0 _________________________________________________________________ pixel_conv_layer (PixelConvL (None, 7, 7, 128) 802944 _________________________________________________________________ residual_block (ResidualBloc (None, 7, 7, 128) 98624 _________________________________________________________________ residual_block_1 (ResidualBl (None, 7, 7, 128) 98624 _________________________________________________________________ pixel_conv_layer_3 (PixelCon (None, 7, 7, 128) 16512 _________________________________________________________________ pixel_conv_layer_4 (PixelCon (None, 7, 7, 128) 16512 _________________________________________________________________ conv2d_21 (Conv2D) (None, 7, 7, 128) 16512 ================================================================= Total params: 1,049,728 Trainable params: 1,049,728 Non-trainable params: 0 _________________________________________________________________ Prepare data to train the PixelCNN We will train the PixelCNN to learn a categorical distribution of the discrete codes. First, we will generate code indices using the encoder and vector quantizer we just trained. Our training objective will be to minimize the crossentropy loss between these indices and the PixelCNN outputs. Here, the number of categories is equal to the number of embeddings present in our codebook (128 in our case). The PixelCNN model is trained to learn a distribution (as opposed to minimizing the L1/L2 loss), which is where it gets its generative capabilities from. # Generate the codebook indices. 
encoded_outputs = encoder.predict(x_train_scaled) flat_enc_outputs = encoded_outputs.reshape(-1, encoded_outputs.shape[-1]) codebook_indices = quantizer.get_code_indices(flat_enc_outputs) codebook_indices = codebook_indices.numpy().reshape(encoded_outputs.shape[:-1]) print(f\"Shape of the training data for PixelCNN: {codebook_indices.shape}\") Shape of the training data for PixelCNN: (60000, 7, 7) PixelCNN training pixel_cnn.compile( optimizer=keras.optimizers.Adam(3e-4), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[\"accuracy\"], ) pixel_cnn.fit( x=codebook_indices, y=codebook_indices, batch_size=128, epochs=30, validation_split=0.1, ) Epoch 1/30 422/422 [==============================] - 4s 8ms/step - loss: 1.8550 - accuracy: 0.5959 - val_loss: 1.3127 - val_accuracy: 0.6268 Epoch 2/30 422/422 [==============================] - 3s 7ms/step - loss: 1.2207 - accuracy: 0.6402 - val_loss: 1.1722 - val_accuracy: 0.6482 Epoch 3/30 422/422 [==============================] - 3s 7ms/step - loss: 1.1412 - accuracy: 0.6536 - val_loss: 1.1313 - val_accuracy: 0.6552 Epoch 4/30 422/422 [==============================] - 3s 7ms/step - loss: 1.1060 - accuracy: 0.6601 - val_loss: 1.1058 - val_accuracy: 0.6596 Epoch 5/30 422/422 [==============================] - 3s 7ms/step - loss: 1.0828 - accuracy: 0.6646 - val_loss: 1.1020 - val_accuracy: 0.6603 Epoch 6/30 422/422 [==============================] - 3s 7ms/step - loss: 1.0649 - accuracy: 0.6682 - val_loss: 1.0809 - val_accuracy: 0.6638 Epoch 7/30 422/422 [==============================] - 3s 7ms/step - loss: 1.0515 - accuracy: 0.6710 - val_loss: 1.0712 - val_accuracy: 0.6659 Epoch 8/30 422/422 [==============================] - 3s 7ms/step - loss: 1.0406 - accuracy: 0.6733 - val_loss: 1.0647 - val_accuracy: 0.6671 Epoch 9/30 422/422 [==============================] - 3s 7ms/step - loss: 1.0312 - accuracy: 0.6752 - val_loss: 1.0633 - val_accuracy: 0.6674 Epoch 10/30 422/422 [==============================] - 3s 7ms/step - loss: 1.0235 - accuracy: 0.6771 - val_loss: 1.0554 - val_accuracy: 0.6695 Epoch 11/30 422/422 [==============================] - 3s 7ms/step - loss: 1.0162 - accuracy: 0.6788 - val_loss: 1.0518 - val_accuracy: 0.6694 Epoch 12/30 422/422 [==============================] - 3s 7ms/step - loss: 1.0105 - accuracy: 0.6799 - val_loss: 1.0541 - val_accuracy: 0.6693 Epoch 13/30 422/422 [==============================] - 3s 7ms/step - loss: 1.0050 - accuracy: 0.6811 - val_loss: 1.0481 - val_accuracy: 0.6705 Epoch 14/30 422/422 [==============================] - 3s 7ms/step - loss: 1.0011 - accuracy: 0.6820 - val_loss: 1.0462 - val_accuracy: 0.6709 Epoch 15/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9964 - accuracy: 0.6831 - val_loss: 1.0459 - val_accuracy: 0.6709 Epoch 16/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9922 - accuracy: 0.6840 - val_loss: 1.0444 - val_accuracy: 0.6704 Epoch 17/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9884 - accuracy: 0.6848 - val_loss: 1.0405 - val_accuracy: 0.6725 Epoch 18/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9846 - accuracy: 0.6859 - val_loss: 1.0400 - val_accuracy: 0.6722 Epoch 19/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9822 - accuracy: 0.6864 - val_loss: 1.0394 - val_accuracy: 0.6728 Epoch 20/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9787 - accuracy: 0.6872 - val_loss: 1.0393 - val_accuracy: 0.6717 Epoch 21/30 422/422 
[==============================] - 3s 7ms/step - loss: 0.9761 - accuracy: 0.6878 - val_loss: 1.0398 - val_accuracy: 0.6725 Epoch 22/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9733 - accuracy: 0.6884 - val_loss: 1.0376 - val_accuracy: 0.6726 Epoch 23/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9708 - accuracy: 0.6890 - val_loss: 1.0352 - val_accuracy: 0.6732 Epoch 24/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9685 - accuracy: 0.6894 - val_loss: 1.0369 - val_accuracy: 0.6723 Epoch 25/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9660 - accuracy: 0.6901 - val_loss: 1.0384 - val_accuracy: 0.6733 Epoch 26/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9638 - accuracy: 0.6908 - val_loss: 1.0355 - val_accuracy: 0.6728 Epoch 27/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9619 - accuracy: 0.6912 - val_loss: 1.0325 - val_accuracy: 0.6739 Epoch 28/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9594 - accuracy: 0.6917 - val_loss: 1.0334 - val_accuracy: 0.6736 Epoch 29/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9582 - accuracy: 0.6920 - val_loss: 1.0366 - val_accuracy: 0.6733 Epoch 30/30 422/422 [==============================] - 3s 7ms/step - loss: 0.9561 - accuracy: 0.6926 - val_loss: 1.0336 - val_accuracy: 0.6728 We can improve these scores with more training and hyperparameter tuning. Codebook sampling Now that our PixelCNN is trained, we can sample distinct codes from its outputs and pass them to our decoder to generate novel images. # Create a mini sampler model. inputs = layers.Input(shape=pixel_cnn.input_shape[1:]) x = pixel_cnn(inputs, training=False) dist = tfp.distributions.Categorical(logits=x) sampled = dist.sample() sampler = keras.Model(inputs, sampled) We now construct a prior to generate images. Here, we will generate 10 images. # Create an empty array of priors. batch = 10 priors = np.zeros(shape=(batch,) + (pixel_cnn.input_shape)[1:]) batch, rows, cols = priors.shape # Iterate over the priors because generation has to be done sequentially pixel by pixel. for row in range(rows): for col in range(cols): # Feed the whole array and retrieving the pixel value probabilities for the next # pixel. probs = sampler.predict(priors) # Use the probabilities to pick pixel values and append the values to the priors. priors[:, row, col] = probs[:, row, col] print(f\"Prior shape: {priors.shape}\") Prior shape: (10, 7, 7) We can now use our decoder to generate the images. # Perform an embedding lookup. pretrained_embeddings = quantizer.embeddings priors_ohe = tf.one_hot(priors.astype(\"int32\"), vqvae_trainer.num_embeddings).numpy() quantized = tf.matmul( priors_ohe.astype(\"float32\"), pretrained_embeddings, transpose_b=True ) quantized = tf.reshape(quantized, (-1, *(encoded_outputs.shape[1:]))) # Generate novel images. decoder = vqvae_trainer.vqvae.get_layer(\"decoder\") generated_samples = decoder.predict(quantized) for i in range(batch): plt.subplot(1, 2, 1) plt.imshow(priors[i]) plt.title(\"Code\") plt.axis(\"off\") plt.subplot(1, 2, 2) plt.imshow(generated_samples[i].squeeze() + 0.5) plt.title(\"Generated Sample\") plt.axis(\"off\") plt.show() png png png png png png png png png png We can enhance the quality of these generated samples by tweaking the PixelCNN. 
Additional notes After the VQ-VAE paper was initially released, the authors developed an exponential moving averaging scheme to update the embeddings inside the quantizer. If you're interested, you can check out this snippet. To further enhance the quality of the generated samples, VQ-VAE-2 was proposed, which follows a cascaded approach to learn the codebook and to generate the images.
Implementation of Wasserstein GAN with Gradient Penalty. Wasserstein GAN (WGAN) with Gradient Penalty (GP) The original Wasserstein GAN leverages the Wasserstein distance to produce a value function that has better theoretical properties than the value function used in the original GAN paper. WGAN requires that the discriminator (aka the critic) lie within the space of 1-Lipschitz functions. The authors proposed the idea of weight clipping to achieve this constraint. Though weight clipping works, it can be a problematic way to enforce the 1-Lipschitz constraint and can cause undesirable behavior, e.g. a very deep WGAN discriminator (critic) often fails to converge. The WGAN-GP method proposes an alternative to weight clipping to ensure smooth training. Instead of clipping the weights, the authors proposed a \"gradient penalty\" by adding a loss term that keeps the L2 norm of the discriminator gradients close to 1.
Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers
Prepare the Fashion-MNIST data To demonstrate how to train WGAN-GP, we will be using the Fashion-MNIST dataset. Each sample in this dataset is a 28x28 grayscale image associated with a label from 10 classes (e.g. trouser, pullover, sneaker, etc.). IMG_SHAPE = (28, 28, 1) BATCH_SIZE = 512 # Size of the noise vector noise_dim = 128 fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() print(f\"Number of examples: {len(train_images)}\") print(f\"Shape of the images in the dataset: {train_images.shape[1:]}\") # Reshape each sample to (28, 28, 1) and normalize the pixel values to the [-1, 1] range train_images = train_images.reshape(train_images.shape[0], *IMG_SHAPE).astype(\"float32\") train_images = (train_images - 127.5) / 127.5 Number of examples: 60000 Shape of the images in the dataset: (28, 28)
Create the discriminator (the critic in the original WGAN) The samples in the dataset have a (28, 28, 1) shape. Because we will be using strided convolutions, this can result in a shape with odd dimensions. For example, (28, 28) -> Conv_s2 -> (14, 14) -> Conv_s2 -> (7, 7) -> Conv_s2 -> (3, 3). While performing upsampling in the generator part of the network, we won't get the same input shape as the original images if we aren't careful. To avoid this, we will do something much simpler: - In the discriminator: \"zero pad\" the input to change the shape to (32, 32, 1) for each sample; and - In the generator: crop the final output to match the input shape. def conv_block( x, filters, activation, kernel_size=(3, 3), strides=(1, 1), padding=\"same\", use_bias=True, use_bn=False, use_dropout=False, drop_value=0.5, ): x = layers.Conv2D( filters, kernel_size, strides=strides, padding=padding, use_bias=use_bias )(x) if use_bn: x = layers.BatchNormalization()(x) x = activation(x) if use_dropout: x = layers.Dropout(drop_value)(x) return x def get_discriminator_model(): img_input = layers.Input(shape=IMG_SHAPE) # Zero pad the input to make the image size (32, 32, 1).
x = layers.ZeroPadding2D((2, 2))(img_input) x = conv_block( x, 64, kernel_size=(5, 5), strides=(2, 2), use_bn=False, use_bias=True, activation=layers.LeakyReLU(0.2), use_dropout=False, drop_value=0.3, ) x = conv_block( x, 128, kernel_size=(5, 5), strides=(2, 2), use_bn=False, activation=layers.LeakyReLU(0.2), use_bias=True, use_dropout=True, drop_value=0.3, ) x = conv_block( x, 256, kernel_size=(5, 5), strides=(2, 2), use_bn=False, activation=layers.LeakyReLU(0.2), use_bias=True, use_dropout=True, drop_value=0.3, ) x = conv_block( x, 512, kernel_size=(5, 5), strides=(2, 2), use_bn=False, activation=layers.LeakyReLU(0.2), use_bias=True, use_dropout=False, drop_value=0.3, ) x = layers.Flatten()(x) x = layers.Dropout(0.2)(x) x = layers.Dense(1)(x) d_model = keras.models.Model(img_input, x, name=\"discriminator\") return d_model d_model = get_discriminator_model() d_model.summary() Model: \"discriminator\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ zero_padding2d (ZeroPadding2 (None, 32, 32, 1) 0 _________________________________________________________________ conv2d (Conv2D) (None, 16, 16, 64) 1664 _________________________________________________________________ leaky_re_lu (LeakyReLU) (None, 16, 16, 64) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 8, 8, 128) 204928 _________________________________________________________________ leaky_re_lu_1 (LeakyReLU) (None, 8, 8, 128) 0 _________________________________________________________________ dropout (Dropout) (None, 8, 8, 128) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 4, 4, 256) 819456 _________________________________________________________________ leaky_re_lu_2 (LeakyReLU) (None, 4, 4, 256) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 4, 4, 256) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 2, 2, 512) 3277312 _________________________________________________________________ leaky_re_lu_3 (LeakyReLU) (None, 2, 2, 512) 0 _________________________________________________________________ flatten (Flatten) (None, 2048) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 2048) 0 _________________________________________________________________ dense (Dense) (None, 1) 2049 ================================================================= Total params: 4,305,409 Trainable params: 4,305,409 Non-trainable params: 0 _________________________________________________________________ Create the generator def upsample_block( x, filters, activation, kernel_size=(3, 3), strides=(1, 1), up_size=(2, 2), padding=\"same\", use_bn=False, use_bias=True, use_dropout=False, drop_value=0.3, ): x = layers.UpSampling2D(up_size)(x) x = layers.Conv2D( filters, kernel_size, strides=strides, padding=padding, use_bias=use_bias )(x) if use_bn: x = layers.BatchNormalization()(x) if activation: x = activation(x) if use_dropout: x = layers.Dropout(drop_value)(x) return x def get_generator_model(): noise = layers.Input(shape=(noise_dim,)) x = layers.Dense(4 * 4 * 256, use_bias=False)(noise) x = layers.BatchNormalization()(x) x = layers.LeakyReLU(0.2)(x) x = layers.Reshape((4, 4, 256))(x) x = upsample_block( x, 
128, layers.LeakyReLU(0.2), strides=(1, 1), use_bias=False, use_bn=True, padding=\"same\", use_dropout=False, ) x = upsample_block( x, 64, layers.LeakyReLU(0.2), strides=(1, 1), use_bias=False, use_bn=True, padding=\"same\", use_dropout=False, ) x = upsample_block( x, 1, layers.Activation(\"tanh\"), strides=(1, 1), use_bias=False, use_bn=True ) # At this point, we have an output which has the same shape as the input, (32, 32, 1). # We will use a Cropping2D layer to make it (28, 28, 1). x = layers.Cropping2D((2, 2))(x) g_model = keras.models.Model(noise, x, name=\"generator\") return g_model g_model = get_generator_model() g_model.summary() Model: \"generator\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, 128)] 0 _________________________________________________________________ dense_1 (Dense) (None, 4096) 524288 _________________________________________________________________ batch_normalization (BatchNo (None, 4096) 16384 _________________________________________________________________ leaky_re_lu_4 (LeakyReLU) (None, 4096) 0 _________________________________________________________________ reshape (Reshape) (None, 4, 4, 256) 0 _________________________________________________________________ up_sampling2d (UpSampling2D) (None, 8, 8, 256) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 8, 8, 128) 294912 _________________________________________________________________ batch_normalization_1 (Batch (None, 8, 8, 128) 512 _________________________________________________________________ leaky_re_lu_5 (LeakyReLU) (None, 8, 8, 128) 0 _________________________________________________________________ up_sampling2d_1 (UpSampling2 (None, 16, 16, 128) 0 _________________________________________________________________ conv2d_5 (Conv2D) (None, 16, 16, 64) 73728 _________________________________________________________________ batch_normalization_2 (Batch (None, 16, 16, 64) 256 _________________________________________________________________ leaky_re_lu_6 (LeakyReLU) (None, 16, 16, 64) 0 _________________________________________________________________ up_sampling2d_2 (UpSampling2 (None, 32, 32, 64) 0 _________________________________________________________________ conv2d_6 (Conv2D) (None, 32, 32, 1) 576 _________________________________________________________________ batch_normalization_3 (Batch (None, 32, 32, 1) 4 _________________________________________________________________ activation (Activation) (None, 32, 32, 1) 0 _________________________________________________________________ cropping2d (Cropping2D) (None, 28, 28, 1) 0 ================================================================= Total params: 910,660 Trainable params: 902,082 Non-trainable params: 8,578 _________________________________________________________________ Create the WGAN-GP model Now that we have defined our generator and discriminator, it's time to implement the WGAN-GP model. We will also override the train_step for training. 
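For reference, the losses that the train_step below implements can be written as follows (a standard statement of the WGAN-GP objective; here lambda corresponds to the gp_weight argument and \hat{x} denotes the random interpolates between real and fake images used in gradient_penalty):

L_{critic} = \mathbb{E}_{\tilde{x} \sim P_g}\big[D(\tilde{x})\big] - \mathbb{E}_{x \sim P_r}\big[D(x)\big] + \lambda \, \mathbb{E}_{\hat{x}}\Big[\big(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\big)^2\Big]

L_{generator} = -\mathbb{E}_{\tilde{x} \sim P_g}\big[D(\tilde{x})\big]

where P_r and P_g are the real and generated data distributions, and D is the critic.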
class WGAN(keras.Model): def __init__( self, discriminator, generator, latent_dim, discriminator_extra_steps=3, gp_weight=10.0, ): super(WGAN, self).__init__() self.discriminator = discriminator self.generator = generator self.latent_dim = latent_dim self.d_steps = discriminator_extra_steps self.gp_weight = gp_weight def compile(self, d_optimizer, g_optimizer, d_loss_fn, g_loss_fn): super(WGAN, self).compile() self.d_optimizer = d_optimizer self.g_optimizer = g_optimizer self.d_loss_fn = d_loss_fn self.g_loss_fn = g_loss_fn def gradient_penalty(self, batch_size, real_images, fake_images): \"\"\" Calculates the gradient penalty. This loss is calculated on an interpolated image and added to the discriminator loss. \"\"\" # Get the interpolated image alpha = tf.random.normal([batch_size, 1, 1, 1], 0.0, 1.0) diff = fake_images - real_images interpolated = real_images + alpha * diff with tf.GradientTape() as gp_tape: gp_tape.watch(interpolated) # 1. Get the discriminator output for this interpolated image. pred = self.discriminator(interpolated, training=True) # 2. Calculate the gradients w.r.t to this interpolated image. grads = gp_tape.gradient(pred, [interpolated])[0] # 3. Calculate the norm of the gradients. norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3])) gp = tf.reduce_mean((norm - 1.0) ** 2) return gp def train_step(self, real_images): if isinstance(real_images, tuple): real_images = real_images[0] # Get the batch size batch_size = tf.shape(real_images)[0] # For each batch, we are going to perform the # following steps as laid out in the original paper: # 1. Train the generator and get the generator loss # 2. Train the discriminator and get the discriminator loss # 3. Calculate the gradient penalty # 4. Multiply this gradient penalty with a constant weight factor # 5. Add the gradient penalty to the discriminator loss # 6. Return the generator and discriminator losses as a loss dictionary # Train the discriminator first. The original paper recommends training # the discriminator for `x` more steps (typically 5) as compared to # one step of the generator. Here we will train it for 3 extra steps # as compared to 5 to reduce the training time. 
for i in range(self.d_steps): # Get the latent vector random_latent_vectors = tf.random.normal( shape=(batch_size, self.latent_dim) ) with tf.GradientTape() as tape: # Generate fake images from the latent vector fake_images = self.generator(random_latent_vectors, training=True) # Get the logits for the fake images fake_logits = self.discriminator(fake_images, training=True) # Get the logits for the real images real_logits = self.discriminator(real_images, training=True) # Calculate the discriminator loss using the fake and real image logits d_cost = self.d_loss_fn(real_img=real_logits, fake_img=fake_logits) # Calculate the gradient penalty gp = self.gradient_penalty(batch_size, real_images, fake_images) # Add the gradient penalty to the original discriminator loss d_loss = d_cost + gp * self.gp_weight # Get the gradients w.r.t the discriminator loss d_gradient = tape.gradient(d_loss, self.discriminator.trainable_variables) # Update the weights of the discriminator using the discriminator optimizer self.d_optimizer.apply_gradients( zip(d_gradient, self.discriminator.trainable_variables) ) # Train the generator # Get the latent vector random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim)) with tf.GradientTape() as tape: # Generate fake images using the generator generated_images = self.generator(random_latent_vectors, training=True) # Get the discriminator logits for fake images gen_img_logits = self.discriminator(generated_images, training=True) # Calculate the generator loss g_loss = self.g_loss_fn(gen_img_logits) # Get the gradients w.r.t the generator loss gen_gradient = tape.gradient(g_loss, self.generator.trainable_variables) # Update the weights of the generator using the generator optimizer self.g_optimizer.apply_gradients( zip(gen_gradient, self.generator.trainable_variables) ) return {\"d_loss\": d_loss, \"g_loss\": g_loss}
Create a Keras callback that periodically saves generated images class GANMonitor(keras.callbacks.Callback): def __init__(self, num_img=6, latent_dim=128): self.num_img = num_img self.latent_dim = latent_dim def on_epoch_end(self, epoch, logs=None): random_latent_vectors = tf.random.normal(shape=(self.num_img, self.latent_dim)) generated_images = self.model.generator(random_latent_vectors) generated_images = (generated_images * 127.5) + 127.5 for i in range(self.num_img): img = generated_images[i].numpy() img = keras.preprocessing.image.array_to_img(img) img.save(\"generated_img_{i}_{epoch}.png\".format(i=i, epoch=epoch))
Train the end-to-end model # Instantiate the optimizer for both networks # (learning_rate=0.0002, beta_1=0.5 are recommended) generator_optimizer = keras.optimizers.Adam( learning_rate=0.0002, beta_1=0.5, beta_2=0.9 ) discriminator_optimizer = keras.optimizers.Adam( learning_rate=0.0002, beta_1=0.5, beta_2=0.9 ) # Define the loss function for the discriminator, # which should be (fake_loss - real_loss). # We will add the gradient penalty later to this loss function. def discriminator_loss(real_img, fake_img): real_loss = tf.reduce_mean(real_img) fake_loss = tf.reduce_mean(fake_img) return fake_loss - real_loss # Define the loss function for the generator. def generator_loss(fake_img): return -tf.reduce_mean(fake_img) # Set the number of epochs for training. epochs = 20 # Instantiate the custom `GANMonitor` Keras callback. cbk = GANMonitor(num_img=3, latent_dim=noise_dim) # Instantiate the WGAN model.
wgan = WGAN( discriminator=d_model, generator=g_model, latent_dim=noise_dim, discriminator_extra_steps=3, ) # Compile the WGAN model. wgan.compile( d_optimizer=discriminator_optimizer, g_optimizer=generator_optimizer, g_loss_fn=generator_loss, d_loss_fn=discriminator_loss, ) # Start training the model. wgan.fit(train_images, batch_size=BATCH_SIZE, epochs=epochs, callbacks=[cbk]) Epoch 1/20 118/118 [==============================] - 39s 334ms/step - d_loss: -7.6571 - g_loss: -16.9272 Epoch 2/20 118/118 [==============================] - 39s 334ms/step - d_loss: -7.2396 - g_loss: -8.5466 Epoch 3/20 118/118 [==============================] - 40s 335ms/step - d_loss: -6.3892 - g_loss: 1.3971 Epoch 4/20 118/118 [==============================] - 40s 335ms/step - d_loss: -5.7705 - g_loss: 6.5997 Epoch 5/20 118/118 [==============================] - 40s 336ms/step - d_loss: -5.2659 - g_loss: 7.4743 Epoch 6/20 118/118 [==============================] - 40s 335ms/step - d_loss: -4.9563 - g_loss: 6.2071 Epoch 7/20 118/118 [==============================] - 40s 335ms/step - d_loss: -4.5759 - g_loss: 6.4767 Epoch 8/20 118/118 [==============================] - 40s 335ms/step - d_loss: -4.3748 - g_loss: 5.4304 Epoch 9/20 118/118 [==============================] - 40s 335ms/step - d_loss: -4.1142 - g_loss: 6.4326 Epoch 10/20 118/118 [==============================] - 40s 335ms/step - d_loss: -3.7956 - g_loss: 7.1200 Epoch 11/20 118/118 [==============================] - 40s 335ms/step - d_loss: -3.5723 - g_loss: 7.1837 Epoch 12/20 118/118 [==============================] - 40s 335ms/step - d_loss: -3.4374 - g_loss: 9.0537 Epoch 13/20 118/118 [==============================] - 40s 335ms/step - d_loss: -3.3402 - g_loss: 8.4949 Epoch 14/20 118/118 [==============================] - 40s 335ms/step - d_loss: -3.1252 - g_loss: 8.6130 Epoch 15/20 118/118 [==============================] - 40s 336ms/step - d_loss: -3.0130 - g_loss: 9.4563 Epoch 16/20 118/118 [==============================] - 40s 335ms/step - d_loss: -2.9330 - g_loss: 8.8075 Epoch 17/20 118/118 [==============================] - 40s 336ms/step - d_loss: -2.7980 - g_loss: 8.0775 Epoch 18/20 118/118 [==============================] - 40s 335ms/step - d_loss: -2.7835 - g_loss: 8.7983 Epoch 19/20 118/118 [==============================] - 40s 335ms/step - d_loss: -2.6409 - g_loss: 7.8309 Epoch 20/20 118/118 [==============================] - 40s 336ms/step - d_loss: -2.5134 - g_loss: 8.6653 Display the last generated images: from IPython.display import Image, display display(Image(\"generated_img_0_19.png\")) display(Image(\"generated_img_1_19.png\")) display(Image(\"generated_img_2_19.png\")) png png png Complete implementation of WGAN-GP with R-GCN to generate novel molecules. Introduction In this tutorial, we implement a generative model for graphs and use it to generate novel molecules. Motivation: The development of new drugs (molecules) can be extremely time-consuming and costly. The use of deep learning models can alleviate the search for good candidate drugs, by predicting properties of known molecules (e.g., solubility, toxicity, affinity to target protein, etc.). As the number of possible molecules is astronomical, the space in which we search for/explore molecules is just a fraction of the entire space. Therefore, it's arguably desirable to implement generative models that can learn to generate novel molecules (which would otherwise have never been explored). 
References (implementation) The implementation in this tutorial is based on/inspired by the MolGAN paper and DeepChem's Basic MolGAN. Further reading (generative models) Recent implementations of generative models for molecular graphs also include Mol-CycleGAN, GraphVAE and JT-VAE. For more information on generative adversarial networks, see GAN, WGAN and WGAN-GP.
Setup Install RDKit RDKit is a collection of cheminformatics and machine-learning software written in C++ and Python. In this tutorial, RDKit is used to conveniently and efficiently transform SMILES to molecule objects, and then from those obtain sets of atoms and bonds. SMILES expresses the structure of a given molecule in the form of an ASCII string. The SMILES string is a compact encoding which, for smaller molecules, is relatively human-readable. Encoding molecules as strings both simplifies and facilitates database and/or web searching of a given molecule. RDKit uses algorithms to accurately transform a given SMILES to a molecule object, which can then be used to compute a great number of molecular properties/features. Note that RDKit is commonly installed via Conda. However, thanks to rdkit_platform_wheels, rdkit can now (for the sake of this tutorial) be installed easily via pip, as follows: pip -q install rdkit-pypi And to allow easy visualization of molecule objects, Pillow needs to be installed: pip -q install Pillow
Import packages from rdkit import Chem, RDLogger from rdkit.Chem.Draw import IPythonConsole, MolsToGridImage import numpy as np import tensorflow as tf from tensorflow import keras RDLogger.DisableLog(\"rdApp.*\")
Dataset The dataset used in this tutorial is a quantum mechanics dataset (QM9), obtained from MoleculeNet. Although many feature and label columns come with the dataset, we'll only focus on the SMILES column. The QM9 dataset is a good first dataset to work with for generating graphs, as the maximum number of heavy (non-hydrogen) atoms found in a molecule is only nine. csv_path = tf.keras.utils.get_file( \"qm9.csv\", \"https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/qm9.csv\" ) data = [] with open(csv_path, \"r\") as f: for line in f.readlines()[1:]: data.append(line.split(\",\")[1]) # Let's look at a molecule from the dataset smiles = data[1000] print(\"SMILES:\", smiles) molecule = Chem.MolFromSmiles(smiles) print(\"Num heavy atoms:\", molecule.GetNumHeavyAtoms()) molecule SMILES: Cn1cncc1O Num heavy atoms: 7 png
Define helper functions These helper functions will help convert SMILES to graphs and graphs to molecule objects. Representing a molecular graph. Molecules can naturally be expressed as undirected graphs G = (V, E), where V is a set of vertices (atoms), and E a set of edges (bonds). In this implementation, each graph (molecule) will be represented as an adjacency tensor A, which encodes the existence/non-existence of bonds between atom pairs, with the one-hot encoded bond types stretching an extra dimension, and a feature tensor H, which one-hot encodes the atom type of each atom. Note that, as hydrogen atoms can be inferred by RDKit, hydrogen atoms are excluded from A and H for easier modeling.
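As a minimal sketch of what this (A, H) encoding looks like, the snippet below hand-builds the tensors for a single carbon-oxygen single bond, assuming the atom_mapping, bond_mapping and tensor shapes (BOND_DIM = ATOM_DIM = 5, NUM_ATOMS = 9) defined in the code that follows; it is only an illustration of the encoding, not a chemically complete molecule.

import numpy as np

NUM_ATOMS, ATOM_DIM, BOND_DIM = 9, 5, 5  # same values as used later in this tutorial

adjacency = np.zeros((BOND_DIM, NUM_ATOMS, NUM_ATOMS), "float32")
features = np.zeros((NUM_ATOMS, ATOM_DIM), "float32")

features[0, 0] = 1.0    # atom 0 is carbon (atom_mapping["C"] == 0)
features[1, 2] = 1.0    # atom 1 is oxygen (atom_mapping["O"] == 2)
features[2:, -1] = 1.0  # unused atom slots are marked "no atom" in the last column

# A single bond (bond_mapping["SINGLE"] == 0) between atoms 0 and 1, stored symmetrically.
adjacency[0, 0, 1] = adjacency[0, 1, 0] = 1.0
# Every remaining atom pair gets a 1 in the last channel, meaning "no bond".
adjacency[-1, np.sum(adjacency[:-1], axis=0) == 0] = 1.0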
atom_mapping = { \"C\": 0, 0: \"C\", \"N\": 1, 1: \"N\", \"O\": 2, 2: \"O\", \"F\": 3, 3: \"F\", } bond_mapping = { \"SINGLE\": 0, 0: Chem.BondType.SINGLE, \"DOUBLE\": 1, 1: Chem.BondType.DOUBLE, \"TRIPLE\": 2, 2: Chem.BondType.TRIPLE, \"AROMATIC\": 3, 3: Chem.BondType.AROMATIC, } NUM_ATOMS = 9 # Maximum number of atoms ATOM_DIM = 4 + 1 # Number of atom types BOND_DIM = 4 + 1 # Number of bond types LATENT_DIM = 64 # Size of the latent space def smiles_to_graph(smiles): # Converts SMILES to molecule object molecule = Chem.MolFromSmiles(smiles) # Initialize adjacency and feature tensor adjacency = np.zeros((BOND_DIM, NUM_ATOMS, NUM_ATOMS), \"float32\") features = np.zeros((NUM_ATOMS, ATOM_DIM), \"float32\") # loop over each atom in molecule for atom in molecule.GetAtoms(): i = atom.GetIdx() atom_type = atom_mapping[atom.GetSymbol()] features[i] = np.eye(ATOM_DIM)[atom_type] # loop over one-hop neighbors for neighbor in atom.GetNeighbors(): j = neighbor.GetIdx() bond = molecule.GetBondBetweenAtoms(i, j) bond_type_idx = bond_mapping[bond.GetBondType().name] adjacency[bond_type_idx, [i, j], [j, i]] = 1 # Where no bond, add 1 to last channel (indicating \"non-bond\") # Notice: channels-first adjacency[-1, np.sum(adjacency, axis=0) == 0] = 1 # Where no atom, add 1 to last column (indicating \"non-atom\") features[np.where(np.sum(features, axis=1) == 0)[0], -1] = 1 return adjacency, features def graph_to_molecule(graph): # Unpack graph adjacency, features = graph # RWMol is a molecule object intended to be edited molecule = Chem.RWMol() # Remove \"no atoms\" & atoms with no bonds keep_idx = np.where( (np.argmax(features, axis=1) != ATOM_DIM - 1) & (np.sum(adjacency[:-1], axis=(0, 1)) != 0) )[0] features = features[keep_idx] adjacency = adjacency[:, keep_idx, :][:, :, keep_idx] # Add atoms to molecule for atom_type_idx in np.argmax(features, axis=1): atom = Chem.Atom(atom_mapping[atom_type_idx]) _ = molecule.AddAtom(atom) # Add bonds between atoms in molecule; based on the upper triangles # of the [symmetric] adjacency tensor (bonds_ij, atoms_i, atoms_j) = np.where(np.triu(adjacency) == 1) for (bond_ij, atom_i, atom_j) in zip(bonds_ij, atoms_i, atoms_j): if atom_i == atom_j or bond_ij == BOND_DIM - 1: continue bond_type = bond_mapping[bond_ij] molecule.AddBond(int(atom_i), int(atom_j), bond_type) # Sanitize the molecule; for more information on sanitization, see # https://www.rdkit.org/docs/RDKit_Book.html#molecular-sanitization flag = Chem.SanitizeMol(molecule, catchErrors=True) # Let's be strict. If sanitization fails, return None if flag != Chem.SanitizeFlags.SANITIZE_NONE: return None return molecule # Test helper functions graph_to_molecule(smiles_to_graph(smiles)) png Generate training set To save training time, we'll only use a tenth of the QM9 dataset. adjacency_tensor, feature_tensor = [], [] for smiles in data[::10]: adjacency, features = smiles_to_graph(smiles) adjacency_tensor.append(adjacency) feature_tensor.append(features) adjacency_tensor = np.array(adjacency_tensor) feature_tensor = np.array(feature_tensor) print(\"adjacency_tensor.shape =\", adjacency_tensor.shape) print(\"feature_tensor.shape =\", feature_tensor.shape) adjacency_tensor.shape = (13389, 5, 9, 9) feature_tensor.shape = (13389, 9, 5) Model The idea is to implement a generator network and a discriminator network via WGAN-GP, that will result in a generator network that can generate small novel molecules (small graphs). 
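Before diving into the generator and discriminator, it can be helpful to sanity-check the two helpers defined above on a slice of the data: how many molecules survive the SMILES → graph → molecule round trip. This is a small optional sketch, not part of the original tutorial; it only reuses data, smiles_to_graph and graph_to_molecule as defined above, and the subsample size is arbitrary.

# Round-trip check: SMILES -> graph -> molecule; graph_to_molecule returns None
# whenever RDKit sanitization of the reconstructed molecule fails.
sample_smiles = data[::1000]  # small, arbitrary subsample
recovered = [graph_to_molecule(smiles_to_graph(s)) for s in sample_smiles]
num_ok = sum(m is not None for m in recovered)
print(f"{num_ok}/{len(sample_smiles)} molecules survived the round trip")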
The generator network needs to be able to map (for each example in the batch) a vector z to a 3-D adjacency tensor (A) and 2-D feature tensor (H). For this, z will first be passed through a fully-connected network, for which the output will be further passed through two separate fully-connected networks. Each of these two fully-connected networks will then output (for each example in the batch) a tanh-activated vector followed by a reshape and softmax to match that of a multi-dimensional adjacency/feature tensor. As the discriminator network will recieves as input a graph (A, H) from either the genrator or from the training set, we'll need to implement graph convolutional layers, which allows us to operate on graphs. This means that input to the discriminator network will first pass through graph convolutional layers, then an average-pooling layer, and finally a few fully-connected layers. The final output should be a scalar (for each example in the batch) which indicates the \"realness\" of the associated input (in this case a \"fake\" or \"real\" molecule). Graph generator def GraphGenerator( dense_units, dropout_rate, latent_dim, adjacency_shape, feature_shape, ): z = keras.layers.Input(shape=(LATENT_DIM,)) # Propagate through one or more densely connected layers x = z for units in dense_units: x = keras.layers.Dense(units, activation=\"tanh\")(x) x = keras.layers.Dropout(dropout_rate)(x) # Map outputs of previous layer (x) to [continuous] adjacency tensors (x_adjacency) x_adjacency = keras.layers.Dense(tf.math.reduce_prod(adjacency_shape))(x) x_adjacency = keras.layers.Reshape(adjacency_shape)(x_adjacency) # Symmetrify tensors in the last two dimensions x_adjacency = (x_adjacency + tf.transpose(x_adjacency, (0, 1, 3, 2))) / 2 x_adjacency = keras.layers.Softmax(axis=1)(x_adjacency) # Map outputs of previous layer (x) to [continuous] feature tensors (x_features) x_features = keras.layers.Dense(tf.math.reduce_prod(feature_shape))(x) x_features = keras.layers.Reshape(feature_shape)(x_features) x_features = keras.layers.Softmax(axis=2)(x_features) return keras.Model(inputs=z, outputs=[x_adjacency, x_features], name=\"Generator\") generator = GraphGenerator( dense_units=[128, 256, 512], dropout_rate=0.2, latent_dim=LATENT_DIM, adjacency_shape=(BOND_DIM, NUM_ATOMS, NUM_ATOMS), feature_shape=(NUM_ATOMS, ATOM_DIM), ) generator.summary() Model: \"Generator\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 64)] 0 __________________________________________________________________________________________________ dense (Dense) (None, 128) 8320 input_1[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 128) 0 dense[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 256) 33024 dropout[0][0] __________________________________________________________________________________________________ dropout_1 (Dropout) (None, 256) 0 dense_1[0][0] __________________________________________________________________________________________________ dense_2 (Dense) (None, 512) 131584 dropout_1[0][0] __________________________________________________________________________________________________ dropout_2 (Dropout) (None, 512) 0 dense_2[0][0] 
__________________________________________________________________________________________________ dense_3 (Dense) (None, 405) 207765 dropout_2[0][0] __________________________________________________________________________________________________ reshape (Reshape) (None, 5, 9, 9) 0 dense_3[0][0] __________________________________________________________________________________________________ tf.compat.v1.transpose (TFOpLam (None, 5, 9, 9) 0 reshape[0][0] __________________________________________________________________________________________________ tf.__operators__.add (TFOpLambd (None, 5, 9, 9) 0 reshape[0][0] tf.compat.v1.transpose[0][0] __________________________________________________________________________________________________ dense_4 (Dense) (None, 45) 23085 dropout_2[0][0] __________________________________________________________________________________________________ tf.math.truediv (TFOpLambda) (None, 5, 9, 9) 0 tf.__operators__.add[0][0] __________________________________________________________________________________________________ reshape_1 (Reshape) (None, 9, 5) 0 dense_4[0][0] __________________________________________________________________________________________________ softmax (Softmax) (None, 5, 9, 9) 0 tf.math.truediv[0][0] __________________________________________________________________________________________________ softmax_1 (Softmax) (None, 9, 5) 0 reshape_1[0][0] ================================================================================================== Total params: 403,778 Trainable params: 403,778 Non-trainable params: 0 __________________________________________________________________________________________________ Graph discriminator Graph convolutional layer. The relational graph convolutional layer implements non-linearly transformed neighborhood aggregations. We can define these layers as follows: H^{l+1} = σ(D^{-1} @ A @ H^{l} @ W^{l}) Where σ denotes the non-linear transformation (commonly a ReLU activation), A the adjacency tensor, H^{l} the feature tensor at the l:th layer, D^{-1} the inverse diagonal degree tensor of A, and W^{l} the trainable weight tensor at the l:th layer. Specifically, for each bond type (relation), the degree tensor expresses, in the diagonal, the number of bonds attached to each atom. Notice, in this tutorial D^{-1} is omitted, for two reasons: (1) it's not obvious how to apply this normalization on the continuous adjacency tensors (generated by the generator), and (2) the WGAN seems to work just fine without normalization. Furthermore, in contrast to the original paper, no self-loop is defined, as we don't want to train the generator to predict \"self-bonding\".
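As a minimal standalone sketch of this aggregation rule (random toy tensors, with shapes chosen only for illustration), the per-bond-type neighborhood aggregation followed by the sum over bond types can be written as follows; the real layer, defined next, implements the same pattern inside its call() method.

import tensorflow as tf

# Toy shapes: batch of 2 graphs, 5 bond types, 9 atoms, 16 input features, 8 output units.
adjacency = tf.random.uniform((2, 5, 9, 9))    # A
features = tf.random.uniform((2, 9, 16))       # H^{l}
kernel = tf.random.uniform((5, 16, 8))         # W^{l}, one weight matrix per bond type

x = tf.matmul(adjacency, features[:, None, :, :])  # aggregate neighbors per bond type: A @ H^{l}
x = tf.matmul(x, kernel)                           # linear transformation: (A @ H^{l}) @ W^{l}
h_next = tf.nn.relu(tf.reduce_sum(x, axis=1))      # sum over bond types, then the non-linearity
print(h_next.shape)  # (2, 9, 8)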
class RelationalGraphConvLayer(keras.layers.Layer): def __init__( self, units=128, activation=\"relu\", use_bias=False, kernel_initializer=\"glorot_uniform\", bias_initializer=\"zeros\", kernel_regularizer=None, bias_regularizer=None, **kwargs ): super().__init__(**kwargs) self.units = units self.activation = keras.activations.get(activation) self.use_bias = use_bias self.kernel_initializer = keras.initializers.get(kernel_initializer) self.bias_initializer = keras.initializers.get(bias_initializer) self.kernel_regularizer = keras.regularizers.get(kernel_regularizer) self.bias_regularizer = keras.regularizers.get(bias_regularizer) def build(self, input_shape): bond_dim = input_shape[0][1] atom_dim = input_shape[1][2] self.kernel = self.add_weight( shape=(bond_dim, atom_dim, self.units), initializer=self.kernel_initializer, regularizer=self.kernel_regularizer, trainable=True, name=\"W\", dtype=tf.float32, ) if self.use_bias: self.bias = self.add_weight( shape=(bond_dim, 1, self.units), initializer=self.bias_initializer, regularizer=self.bias_regularizer, trainable=True, name=\"b\", dtype=tf.float32, ) self.built = True def call(self, inputs, training=False): adjacency, features = inputs # Aggregate information from neighbors x = tf.matmul(adjacency, features[:, None, :, :]) # Apply linear transformation x = tf.matmul(x, self.kernel) if self.use_bias: x += self.bias # Reduce bond types dim x_reduced = tf.reduce_sum(x, axis=1) # Apply non-linear transformation return self.activation(x_reduced) def GraphDiscriminator( gconv_units, dense_units, dropout_rate, adjacency_shape, feature_shape ): adjacency = keras.layers.Input(shape=adjacency_shape) features = keras.layers.Input(shape=feature_shape) # Propagate through one or more graph convolutional layers features_transformed = features for units in gconv_units: features_transformed = RelationalGraphConvLayer(units)( [adjacency, features_transformed] ) # Reduce 2-D representation of molecule to 1-D x = keras.layers.GlobalAveragePooling1D()(features_transformed) # Propagate through one or more densely connected layers for units in dense_units: x = keras.layers.Dense(units, activation=\"relu\")(x) x = keras.layers.Dropout(dropout_rate)(x) # For each molecule, output a single scalar value expressing the # \"realness\" of the inputted molecule x_out = keras.layers.Dense(1, dtype=\"float32\")(x) return keras.Model(inputs=[adjacency, features], outputs=x_out) discriminator = GraphDiscriminator( gconv_units=[128, 128, 128, 128], dense_units=[512, 512], dropout_rate=0.2, adjacency_shape=(BOND_DIM, NUM_ATOMS, NUM_ATOMS), feature_shape=(NUM_ATOMS, ATOM_DIM), ) discriminator.summary() Model: \"model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_2 (InputLayer) [(None, 5, 9, 9)] 0 __________________________________________________________________________________________________ input_3 (InputLayer) [(None, 9, 5)] 0 __________________________________________________________________________________________________ relational_graph_conv_layer (Re (None, 9, 128) 3200 input_2[0][0] input_3[0][0] __________________________________________________________________________________________________ relational_graph_conv_layer_1 ( (None, 9, 128) 81920 input_2[0][0] relational_graph_conv_layer[0][0] 
__________________________________________________________________________________________________ relational_graph_conv_layer_2 ( (None, 9, 128) 81920 input_2[0][0] relational_graph_conv_layer_1[0][ __________________________________________________________________________________________________ relational_graph_conv_layer_3 ( (None, 9, 128) 81920 input_2[0][0] relational_graph_conv_layer_2[0][ __________________________________________________________________________________________________ global_average_pooling1d (Globa (None, 128) 0 relational_graph_conv_layer_3[0][ __________________________________________________________________________________________________ dense_5 (Dense) (None, 512) 66048 global_average_pooling1d[0][0] __________________________________________________________________________________________________ dropout_3 (Dropout) (None, 512) 0 dense_5[0][0] __________________________________________________________________________________________________ dense_6 (Dense) (None, 512) 262656 dropout_3[0][0] __________________________________________________________________________________________________ dropout_4 (Dropout) (None, 512) 0 dense_6[0][0] __________________________________________________________________________________________________ dense_7 (Dense) (None, 1) 513 dropout_4[0][0] ================================================================================================== Total params: 578,177 Trainable params: 578,177 Non-trainable params: 0 __________________________________________________________________________________________________ WGAN-GP class GraphWGAN(keras.Model): def __init__( self, generator, discriminator, discriminator_steps=1, generator_steps=1, gp_weight=10, **kwargs ): super().__init__(**kwargs) self.generator = generator self.discriminator = discriminator self.discriminator_steps = discriminator_steps self.generator_steps = generator_steps self.gp_weight = gp_weight self.latent_dim = self.generator.input_shape[-1] def compile(self, optimizer_generator, optimizer_discriminator, **kwargs): super().compile(**kwargs) self.optimizer_generator = optimizer_generator self.optimizer_discriminator = optimizer_discriminator self.metric_generator = keras.metrics.Mean(name=\"loss_gen\") self.metric_discriminator = keras.metrics.Mean(name=\"loss_dis\") def train_step(self, inputs): if isinstance(inputs[0], tuple): inputs = inputs[0] graph_real = inputs self.batch_size = tf.shape(inputs[0])[0] # Train the discriminator for one or more steps for _ in range(self.discriminator_steps): z = tf.random.normal((self.batch_size, self.latent_dim)) with tf.GradientTape() as tape: graph_generated = self.generator(z, training=True) loss = self._loss_discriminator(graph_real, graph_generated) grads = tape.gradient(loss, self.discriminator.trainable_weights) self.optimizer_discriminator.apply_gradients( zip(grads, self.discriminator.trainable_weights) ) self.metric_discriminator.update_state(loss) # Train the generator for one or more steps for _ in range(self.generator_steps): z = tf.random.normal((self.batch_size, self.latent_dim)) with tf.GradientTape() as tape: graph_generated = self.generator(z, training=True) loss = self._loss_generator(graph_generated) grads = tape.gradient(loss, self.generator.trainable_weights) self.optimizer_generator.apply_gradients( zip(grads, self.generator.trainable_weights) ) self.metric_generator.update_state(loss) return {m.name: m.result() for m in self.metrics} def _loss_discriminator(self, graph_real, graph_generated): 
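# WGAN-GP critic loss: mean score on generated graphs minus mean score on real
# graphs, plus the gradient penalty term scaled by gp_weight (all computed below).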
logits_real = self.discriminator(graph_real, training=True) logits_generated = self.discriminator(graph_generated, training=True) loss = tf.reduce_mean(logits_generated) - tf.reduce_mean(logits_real) loss_gp = self._gradient_penalty(graph_real, graph_generated) return loss + loss_gp * self.gp_weight def _loss_generator(self, graph_generated): logits_generated = self.discriminator(graph_generated, training=True) return -tf.reduce_mean(logits_generated) def _gradient_penalty(self, graph_real, graph_generated): # Unpack graphs adjacency_real, features_real = graph_real adjacency_generated, features_generated = graph_generated # Generate interpolated graphs (adjacency_interp and features_interp) alpha = tf.random.uniform([self.batch_size]) alpha = tf.reshape(alpha, (self.batch_size, 1, 1, 1)) adjacency_interp = (adjacency_real * alpha) + (1 - alpha) * adjacency_generated alpha = tf.reshape(alpha, (self.batch_size, 1, 1)) features_interp = (features_real * alpha) + (1 - alpha) * features_generated # Compute the logits of interpolated graphs with tf.GradientTape() as tape: tape.watch(adjacency_interp) tape.watch(features_interp) logits = self.discriminator( [adjacency_interp, features_interp], training=True ) # Compute the gradients with respect to the interpolated graphs grads = tape.gradient(logits, [adjacency_interp, features_interp]) # Compute the gradient penalty grads_adjacency_penalty = (1 - tf.norm(grads[0], axis=1)) ** 2 grads_features_penalty = (1 - tf.norm(grads[1], axis=2)) ** 2 return tf.reduce_mean( tf.reduce_mean(grads_adjacency_penalty, axis=(-2, -1)) + tf.reduce_mean(grads_features_penalty, axis=(-1)) ) Train the model To save time (if run on a CPU), we'll only train the model for 10 epochs. wgan = GraphWGAN(generator, discriminator, discriminator_steps=1) wgan.compile( optimizer_generator=keras.optimizers.Adam(5e-4), optimizer_discriminator=keras.optimizers.Adam(5e-4), ) wgan.fit([adjacency_tensor, feature_tensor], epochs=10, batch_size=16) Epoch 1/10 837/837 [==============================] - 27s 29ms/step - loss_gen: 1.2595 - loss_dis: -3.7314 Epoch 2/10 837/837 [==============================] - 24s 29ms/step - loss_gen: 0.2039 - loss_dis: -1.4319 Epoch 3/10 837/837 [==============================] - 25s 29ms/step - loss_gen: 0.2395 - loss_dis: -1.4390 Epoch 4/10 837/837 [==============================] - 26s 31ms/step - loss_gen: -0.0859 - loss_dis: -1.2093 Epoch 5/10 837/837 [==============================] - 25s 29ms/step - loss_gen: 0.3703 - loss_dis: -1.4996 Epoch 6/10 837/837 [==============================] - 24s 29ms/step - loss_gen: 0.9488 - loss_dis: -1.9018 Epoch 7/10 837/837 [==============================] - 24s 29ms/step - loss_gen: 0.8143 - loss_dis: -2.0511 Epoch 8/10 837/837 [==============================] - 25s 30ms/step - loss_gen: 0.9974 - loss_dis: -2.0642 Epoch 9/10 837/837 [==============================] - 24s 29ms/step - loss_gen: 1.2580 - loss_dis: -2.3094 Epoch 10/10 837/837 [==============================] - 24s 29ms/step - loss_gen: 1.6188 - loss_dis: -2.5193 Sample novel molecules with the generator def sample(generator, batch_size): z = tf.random.normal((batch_size, LATENT_DIM)) graph = generator.predict(z) # obtain one-hot encoded adjacency tensor adjacency = tf.argmax(graph[0], axis=1) adjacency = tf.one_hot(adjacency, depth=BOND_DIM, axis=1) # Remove potential self-loops from adjacency adjacency = tf.linalg.set_diag(adjacency, tf.zeros(tf.shape(adjacency)[:-1])) # obtain one-hot encoded feature tensor features = tf.argmax(graph[1], axis=2) 
features = tf.one_hot(features, depth=ATOM_DIM, axis=2) return [ graph_to_molecule([adjacency[i].numpy(), features[i].numpy()]) for i in range(batch_size) ] molecules = sample(wgan.generator, batch_size=48) MolsToGridImage( [m for m in molecules if m is not None][:25], molsPerRow=5, subImgSize=(150, 150) ) png Concluding thoughts Inspecting the results. Ten epochs of training seemed enough to generate some decent-looking molecules! Notice, in contrast to the MolGAN paper, the uniqueness of the generated molecules in this tutorial seems really high, which is great! What we've learned, and prospects. In this tutorial, a generative model for molecular graphs was successfully implemented, which allowed us to generate novel molecules. In the future, it would be interesting to implement generative models that can modify existing molecules (for instance, to optimize solubility or protein-binding of an existing molecule). For that, however, a reconstruction loss would likely be needed, which is tricky to implement as there's no easy and obvious way to compute similarity between two molecular graphs. Implementation of a Proximal Policy Optimization (PPO) agent for the CartPole-v0 environment. Introduction This code example solves the CartPole-v0 environment using a Proximal Policy Optimization (PPO) agent. CartPole-v0 A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center. After 200 steps the episode ends. Thus, the highest return we can get is equal to 200. CartPole-v0 Proximal Policy Optimization PPO is a policy gradient method and can be used for environments with either discrete or continuous action spaces. It trains a stochastic policy in an on-policy way, and it uses the actor-critic method: the actor maps an observation to an action, while the critic estimates the expected return for that observation. First, PPO collects a set of trajectories for each epoch by sampling from the latest version of the stochastic policy. Then, the rewards-to-go and the advantage estimates are computed in order to update the policy and fit the value function. The policy is updated via a stochastic gradient ascent optimizer, while the value function is fitted via some gradient descent algorithm. This procedure is applied for many epochs until the environment is solved. Algorithm PPO Original Paper OpenAI Spinning Up docs - PPO Note This code example uses Keras and Tensorflow v2. It is based on the PPO Original Paper, OpenAI's Spinning Up docs for PPO, and OpenAI's Spinning Up implementation of PPO using Tensorflow v1.
OpenAI Spinning Up Github - PPO Libraries For this example the following libraries are used: numpy for n-dimensional arrays tensorflow and keras for building the deep RL PPO agent gym for getting everything we need about the environment scipy.signal for calculating the discounted cumulative sums of vectors import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import gym import scipy.signal import time Functions and class def discounted_cumulative_sums(x, discount): # Discounted cumulative sums of vectors for computing rewards-to-go and advantage estimates return scipy.signal.lfilter([1], [1, float(-discount)], x[::-1], axis=0)[::-1] class Buffer: # Buffer for storing trajectories def __init__(self, observation_dimensions, size, gamma=0.99, lam=0.95): # Buffer initialization self.observation_buffer = np.zeros( (size, observation_dimensions), dtype=np.float32 ) self.action_buffer = np.zeros(size, dtype=np.int32) self.advantage_buffer = np.zeros(size, dtype=np.float32) self.reward_buffer = np.zeros(size, dtype=np.float32) self.return_buffer = np.zeros(size, dtype=np.float32) self.value_buffer = np.zeros(size, dtype=np.float32) self.logprobability_buffer = np.zeros(size, dtype=np.float32) self.gamma, self.lam = gamma, lam self.pointer, self.trajectory_start_index = 0, 0 def store(self, observation, action, reward, value, logprobability): # Append one step of agent-environment interaction self.observation_buffer[self.pointer] = observation self.action_buffer[self.pointer] = action self.reward_buffer[self.pointer] = reward self.value_buffer[self.pointer] = value self.logprobability_buffer[self.pointer] = logprobability self.pointer += 1 def finish_trajectory(self, last_value=0): # Finish the trajectory by computing advantage estimates and rewards-to-go path_slice = slice(self.trajectory_start_index, self.pointer) rewards = np.append(self.reward_buffer[path_slice], last_value) values = np.append(self.value_buffer[path_slice], last_value) deltas = rewards[:-1] + self.gamma * values[1:] - values[:-1] self.advantage_buffer[path_slice] = discounted_cumulative_sums( deltas, self.gamma * self.lam ) self.return_buffer[path_slice] = discounted_cumulative_sums( rewards, self.gamma )[:-1] self.trajectory_start_index = self.pointer def get(self): # Get all data of the buffer and normalize the advantages self.pointer, self.trajectory_start_index = 0, 0 advantage_mean, advantage_std = ( np.mean(self.advantage_buffer), np.std(self.advantage_buffer), ) self.advantage_buffer = (self.advantage_buffer - advantage_mean) / advantage_std return ( self.observation_buffer, self.action_buffer, self.advantage_buffer, self.return_buffer, self.logprobability_buffer, ) def mlp(x, sizes, activation=tf.tanh, output_activation=None): # Build a feedforward neural network for size in sizes[:-1]: x = layers.Dense(units=size, activation=activation)(x) return layers.Dense(units=sizes[-1], activation=output_activation)(x) def logprobabilities(logits, a): # Compute the log-probabilities of taking actions a by using the logits (i.e. 
the output of the actor) logprobabilities_all = tf.nn.log_softmax(logits) logprobability = tf.reduce_sum( tf.one_hot(a, num_actions) * logprobabilities_all, axis=1 ) return logprobability # Sample action from actor @tf.function def sample_action(observation): logits = actor(observation) action = tf.squeeze(tf.random.categorical(logits, 1), axis=1) return logits, action # Train the policy by maxizing the PPO-Clip objective @tf.function def train_policy( observation_buffer, action_buffer, logprobability_buffer, advantage_buffer ): with tf.GradientTape() as tape: # Record operations for automatic differentiation. ratio = tf.exp( logprobabilities(actor(observation_buffer), action_buffer) - logprobability_buffer ) min_advantage = tf.where( advantage_buffer > 0, (1 + clip_ratio) * advantage_buffer, (1 - clip_ratio) * advantage_buffer, ) policy_loss = -tf.reduce_mean( tf.minimum(ratio * advantage_buffer, min_advantage) ) policy_grads = tape.gradient(policy_loss, actor.trainable_variables) policy_optimizer.apply_gradients(zip(policy_grads, actor.trainable_variables)) kl = tf.reduce_mean( logprobability_buffer - logprobabilities(actor(observation_buffer), action_buffer) ) kl = tf.reduce_sum(kl) return kl # Train the value function by regression on mean-squared error @tf.function def train_value_function(observation_buffer, return_buffer): with tf.GradientTape() as tape: # Record operations for automatic differentiation. value_loss = tf.reduce_mean((return_buffer - critic(observation_buffer)) ** 2) value_grads = tape.gradient(value_loss, critic.trainable_variables) value_optimizer.apply_gradients(zip(value_grads, critic.trainable_variables)) Hyperparameters # Hyperparameters of the PPO algorithm steps_per_epoch = 4000 epochs = 30 gamma = 0.99 clip_ratio = 0.2 policy_learning_rate = 3e-4 value_function_learning_rate = 1e-3 train_policy_iterations = 80 train_value_iterations = 80 lam = 0.97 target_kl = 0.01 hidden_sizes = (64, 64) # True if you want to render the environment render = False Initializations # Initialize the environment and get the dimensionality of the # observation space and the number of possible actions env = gym.make(\"CartPole-v0\") observation_dimensions = env.observation_space.shape[0] num_actions = env.action_space.n # Initialize the buffer buffer = Buffer(observation_dimensions, steps_per_epoch) # Initialize the actor and the critic as keras models observation_input = keras.Input(shape=(observation_dimensions,), dtype=tf.float32) logits = mlp(observation_input, list(hidden_sizes) + [num_actions], tf.tanh, None) actor = keras.Model(inputs=observation_input, outputs=logits) value = tf.squeeze( mlp(observation_input, list(hidden_sizes) + [1], tf.tanh, None), axis=1 ) critic = keras.Model(inputs=observation_input, outputs=value) # Initialize the policy and the value function optimizers policy_optimizer = keras.optimizers.Adam(learning_rate=policy_learning_rate) value_optimizer = keras.optimizers.Adam(learning_rate=value_function_learning_rate) # Initialize the observation, episode return and episode length observation, episode_return, episode_length = env.reset(), 0, 0 Train # Iterate over the number of epochs for epoch in range(epochs): # Initialize the sum of the returns, lengths and number of episodes for each epoch sum_return = 0 sum_length = 0 num_episodes = 0 # Iterate over the steps of each epoch for t in range(steps_per_epoch): if render: env.render() # Get the logits, action, and take one step in the environment observation = observation.reshape(1, -1) logits, action = 
sample_action(observation) observation_new, reward, done, _ = env.step(action[0].numpy()) episode_return += reward episode_length += 1 # Get the value and log-probability of the action value_t = critic(observation) logprobability_t = logprobabilities(logits, action) # Store obs, act, rew, v_t, logp_pi_t buffer.store(observation, action, reward, value_t, logprobability_t) # Update the observation observation = observation_new # Finish trajectory if reached to a terminal state terminal = done if terminal or (t == steps_per_epoch - 1): last_value = 0 if done else critic(observation.reshape(1, -1)) buffer.finish_trajectory(last_value) sum_return += episode_return sum_length += episode_length num_episodes += 1 observation, episode_return, episode_length = env.reset(), 0, 0 # Get values from the buffer ( observation_buffer, action_buffer, advantage_buffer, return_buffer, logprobability_buffer, ) = buffer.get() # Update the policy and implement early stopping using KL divergence for _ in range(train_policy_iterations): kl = train_policy( observation_buffer, action_buffer, logprobability_buffer, advantage_buffer ) if kl > 1.5 * target_kl: # Early Stopping break # Update the value function for _ in range(train_value_iterations): train_value_function(observation_buffer, return_buffer) # Print mean return and length for each epoch print( f\" Epoch: {epoch + 1}. Mean Return: {sum_return / num_episodes}. Mean Length: {sum_length / num_episodes}\" ) Epoch: 1. Mean Return: 18.01801801801802. Mean Length: 18.01801801801802 Epoch: 2. Mean Return: 21.978021978021978. Mean Length: 21.978021978021978 Epoch: 3. Mean Return: 27.397260273972602. Mean Length: 27.397260273972602 Epoch: 4. Mean Return: 36.69724770642202. Mean Length: 36.69724770642202 Epoch: 5. Mean Return: 48.19277108433735. Mean Length: 48.19277108433735 Epoch: 6. Mean Return: 66.66666666666667. Mean Length: 66.66666666666667 Epoch: 7. Mean Return: 133.33333333333334. Mean Length: 133.33333333333334 Epoch: 8. Mean Return: 166.66666666666666. Mean Length: 166.66666666666666 Epoch: 9. Mean Return: 181.8181818181818. Mean Length: 181.8181818181818 Epoch: 10. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 11. Mean Return: 200.0. Mean Length: 200.0 Epoch: 12. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 13. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 14. Mean Return: 181.8181818181818. Mean Length: 181.8181818181818 Epoch: 15. Mean Return: 181.8181818181818. Mean Length: 181.8181818181818 Epoch: 16. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 17. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 18. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 19. Mean Return: 200.0. Mean Length: 200.0 Epoch: 20. Mean Return: 200.0. Mean Length: 200.0 Epoch: 21. Mean Return: 200.0. Mean Length: 200.0 Epoch: 22. Mean Return: 200.0. Mean Length: 200.0 Epoch: 23. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 24. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 25. Mean Return: 200.0. Mean Length: 200.0 Epoch: 26. Mean Return: 200.0. Mean Length: 200.0 Epoch: 27. Mean Return: 200.0. Mean Length: 200.0 Epoch: 28. Mean Return: 200.0. Mean Length: 200.0 Epoch: 29. Mean Return: 200.0. Mean Length: 200.0 Epoch: 30. Mean Return: 200.0. 
Mean Length: 200.0 Visualizations Before training: Imgur After 8 epochs of training: Imgur After 20 epochs of training: Imgur Implementing the node2vec model to generate embeddings for movies from the MovieLens dataset. Introduction Learning useful representations from objects structured as graphs is useful for a variety of machine learning (ML) applications—such as social and communication networks analysis, biomedicine studies, and recommendation systems. Graph representation Learning aims to learn embeddings for the graph nodes, which can be used for a variety of ML tasks such as node label prediction (e.g. categorizing an article based on its citations) and link prediction (e.g. recommending an interest group to a user in a social network). node2vec is a simple, yet scalable and effective technique for learning low-dimensional embeddings for nodes in a graph by optimizing a neighborhood-preserving objective. The aim is to learn similar embeddings for neighboring nodes, with respect to the graph structure. Given your data items structured as a graph (where the items are represented as nodes and the relationship between items are represented as edges), node2vec works as follows: Generate item sequences using (biased) random walk. Create positive and negative training examples from these sequences. Train a word2vec model (skip-gram) to learn embeddings for the items. In this example, we demonstrate the node2vec technique on the small version of the Movielens dataset to learn movie embeddings. Such a dataset can be represented as a graph by treating the movies as nodes, and creating edges between movies that have similar ratings by the users. The learnt movie embeddings can be used for tasks such as movie recommendation, or movie genres prediction. This example requires networkx package, which can be installed using the following command: pip install networkx Setup import os from collections import defaultdict import math import networkx as nx import random from tqdm import tqdm from zipfile import ZipFile from urllib.request import urlretrieve import numpy as np import pandas as pd import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt Download the MovieLens dataset and prepare the data The small version of the MovieLens dataset includes around 100k ratings from 610 users on 9,742 movies. First, let's download the dataset. The downloaded folder will contain three data files: users.csv, movies.csv, and ratings.csv. In this example, we will only need the movies.dat, and ratings.dat data files. urlretrieve( \"http://files.grouplens.org/datasets/movielens/ml-latest-small.zip\", \"movielens.zip\" ) ZipFile(\"movielens.zip\", \"r\").extractall() Then, we load the data into a Pandas DataFrame and perform some basic preprocessing. # Load movies to a DataFrame. movies = pd.read_csv(\"ml-latest-small/movies.csv\") # Create a `movieId` string. movies[\"movieId\"] = movies[\"movieId\"].apply(lambda x: f\"movie_{x}\") # Load ratings to a DataFrame. ratings = pd.read_csv(\"ml-latest-small/ratings.csv\") # Convert the `ratings` to floating point ratings[\"rating\"] = ratings[\"rating\"].apply(lambda x: float(x)) # Create the `movie_id` string. ratings[\"movieId\"] = ratings[\"movieId\"].apply(lambda x: f\"movie_{x}\") print(\"Movies data shape:\", movies.shape) print(\"Ratings data shape:\", ratings.shape) Movies data shape: (9742, 3) Ratings data shape: (100836, 4) Let's inspect a sample instance of the ratings DataFrame. 
ratings.head() userId movieId rating timestamp 0 1 movie_1 4.0 964982703 1 1 movie_3 4.0 964981247 2 1 movie_6 4.0 964982224 3 1 movie_47 5.0 964983815 4 1 movie_50 5.0 964982931 Next, let's check a sample instance of the movies DataFrame. movies.head() movieId title genres 0 movie_1 Toy Story (1995) Adventure|Animation|Children|Comedy|Fantasy 1 movie_2 Jumanji (1995) Adventure|Children|Fantasy 2 movie_3 Grumpier Old Men (1995) Comedy|Romance 3 movie_4 Waiting to Exhale (1995) Comedy|Drama|Romance 4 movie_5 Father of the Bride Part II (1995) Comedy Implement two utility functions for the movies DataFrame. def get_movie_title_by_id(movieId): return list(movies[movies.movieId == movieId].title)[0] def get_movie_id_by_title(title): return list(movies[movies.title == title].movieId)[0] Construct the Movies graph We create an edge between two movie nodes in the graph if both movies are rated by the same user >= min_rating. The weight of the edge will be based on the pointwise mutual information between the two movies, which is computed as: log(xy) - log(x) - log(y) + log(D), where: xy is how many users rated both movie x and movie y with >= min_rating. x is how many users rated movie x >= min_rating. y is how many users rated movie y >= min_rating. D total number of movie ratings >= min_rating. Step 1: create the weighted edges between movies. min_rating = 5 pair_frequency = defaultdict(int) item_frequency = defaultdict(int) # Filter instances where rating is greater than or equal to min_rating. rated_movies = ratings[ratings.rating >= min_rating] # Group instances by user. movies_grouped_by_users = list(rated_movies.groupby(\"userId\")) for group in tqdm( movies_grouped_by_users, position=0, leave=True, desc=\"Compute movie rating frequencies\", ): # Get a list of movies rated by the user. current_movies = list(group[1][\"movieId\"]) for i in range(len(current_movies)): item_frequency[current_movies[i]] += 1 for j in range(i + 1, len(current_movies)): x = min(current_movies[i], current_movies[j]) y = max(current_movies[i], current_movies[j]) pair_frequency[(x, y)] += 1 Compute movie rating frequencies: 100%|██████████| 573/573 [00:00<00:00, 1041.36it/s] Step 2: create the graph with the nodes and the edges To reduce the number of edges between nodes, we only add an edge between movies if the weight of the edge is greater than min_weight. min_weight = 10 D = math.log(sum(item_frequency.values())) # Create the movies undirected graph. movies_graph = nx.Graph() # Add weighted edges between movies. # This automatically adds the movie nodes to the graph. for pair in tqdm( pair_frequency, position=0, leave=True, desc=\"Creating the movie graph\" ): x, y = pair xy_frequency = pair_frequency[pair] x_frequency = item_frequency[x] y_frequency = item_frequency[y] pmi = math.log(xy_frequency) - math.log(x_frequency) - math.log(y_frequency) + D weight = pmi * xy_frequency # Only include edges with weight >= min_weight. if weight >= min_weight: movies_graph.add_edge(x, y, weight=weight) Creating the movie graph: 100%|██████████| 298586/298586 [00:00<00:00, 762305.97it/s] Let's display the total number of nodes and edges in the graph. Note that the number of nodes is less than the total number of movies, since only the movies that have edges to other movies are added. 
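As a quick aside before the graph statistics, here is a worked example of the PMI-based edge weight defined above; all counts are made up purely for illustration and are not taken from the dataset.

import math

# Hypothetical counts (illustration only):
xy = 16          # users who rated both movies >= min_rating
x, y = 150, 120  # users who rated each movie >= min_rating
total = 30000    # total number of ratings >= min_rating

pmi = math.log(xy) - math.log(x) - math.log(y) + math.log(total)
weight = pmi * xy
print(round(pmi, 2), round(weight, 1))  # 3.28 52.5 -> edge is kept, since 52.5 >= min_weight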
print(\"Total number of graph nodes:\", movies_graph.number_of_nodes()) print(\"Total number of graph edges:\", movies_graph.number_of_edges()) Total number of graph nodes: 1405 Total number of graph edges: 40043 Let's display the average node degree (number of neighbours) in the graph. degrees = [] for node in movies_graph.nodes: degrees.append(movies_graph.degree[node]) print(\"Average node degree:\", round(sum(degrees) / len(degrees), 2)) Average node degree: 57.0 Step 3: Create vocabulary and a mapping from tokens to integer indices The vocabulary is the nodes (movie IDs) in the graph. vocabulary = [\"NA\"] + list(movies_graph.nodes) vocabulary_lookup = {token: idx for idx, token in enumerate(vocabulary)} Implement the biased random walk A random walk starts from a given node, and randomly picks a neighbour node to move to. If the edges are weighted, the neighbour is selected probabilistically with respect to weights of the edges between the current node and its neighbours. This procedure is repeated for num_steps to generate a sequence of related nodes. The biased random walk balances between breadth-first sampling (where only local neighbours are visited) and depth-first sampling (where distant neighbours are visited) by introducing the following two parameters: Return parameter (p): Controls the likelihood of immediately revisiting a node in the walk. Setting it to a high value encourages moderate exploration, while setting it to a low value would keep the walk local. In-out parameter (q): Allows the search to differentiate between inward and outward nodes. Setting it to a high value biases the random walk towards local nodes, while setting it to a low value biases the walk to visit nodes which are further away. def next_step(graph, previous, current, p, q): neighbors = list(graph.neighbors(current)) weights = [] # Adjust the weights of the edges to the neighbors with respect to p and q. for neighbor in neighbors: if neighbor == previous: # Control the probability to return to the previous node. weights.append(graph[current][neighbor][\"weight\"] / p) elif graph.has_edge(neighbor, previous): # The probability of visiting a local node. weights.append(graph[current][neighbor][\"weight\"]) else: # Control the probability to move forward. weights.append(graph[current][neighbor][\"weight\"] / q) # Compute the probabilities of visiting each neighbor. weight_sum = sum(weights) probabilities = [weight / weight_sum for weight in weights] # Probabilistically select a neighbor to visit. next = np.random.choice(neighbors, size=1, p=probabilities)[0] return next def random_walk(graph, num_walks, num_steps, p, q): walks = [] nodes = list(graph.nodes()) # Perform multiple iterations of the random walk. for walk_iteration in range(num_walks): random.shuffle(nodes) for node in tqdm( nodes, position=0, leave=True, desc=f\"Random walks iteration {walk_iteration + 1} of {num_walks}\", ): # Start the walk with a random node from the graph. walk = [node] # Randomly walk for num_steps. while len(walk) < num_steps: current = walk[-1] previous = walk[-2] if len(walk) > 1 else None # Compute the next node to visit. next = next_step(graph, previous, current, p, q) walk.append(next) # Replace node ids (movie ids) in the walk with token ids. walk = [vocabulary_lookup[token] for token in walk] # Add the walk to the generated sequence. walks.append(walk) return walks Generate training data using the biased random walk You can explore different configurations of p and q to different results of related movies. 
# Random walk return parameter. p = 1 # Random walk in-out parameter. q = 1 # Number of iterations of random walks. num_walks = 5 # Number of steps of each random walk. num_steps = 10 walks = random_walk(movies_graph, num_walks, num_steps, p, q) print(\"Number of walks generated:\", len(walks)) Random walks iteration 1 of 5: 100%|██████████| 1405/1405 [00:04<00:00, 296.67it/s] Random walks iteration 2 of 5: 100%|██████████| 1405/1405 [00:05<00:00, 274.60it/s] Random walks iteration 3 of 5: 100%|██████████| 1405/1405 [00:04<00:00, 281.69it/s] Random walks iteration 4 of 5: 100%|██████████| 1405/1405 [00:04<00:00, 285.56it/s] Random walks iteration 5 of 5: 100%|██████████| 1405/1405 [00:04<00:00, 301.79it/s] Number of walks generated: 7025 Generate positive and negative examples To train a skip-gram model, we use the generated walks to create positive and negative training examples. Each example includes the following features: target: A movie in a walk sequence. context: Another movie in a walk sequence. weight: How many times these two movies occured in walk sequences. label: The label is 1 if these two movies are samples from the walk sequences, otherwise (i.e., if randomly sampled) the label is 0. Generate examples def generate_examples(sequences, window_size, num_negative_samples, vocabulary_size): example_weights = defaultdict(int) # Iterate over all sequences (walks). for sequence in tqdm( sequences, position=0, leave=True, desc=f\"Generating postive and negative examples\", ): # Generate positive and negative skip-gram pairs for a sequence (walk). pairs, labels = keras.preprocessing.sequence.skipgrams( sequence, vocabulary_size=vocabulary_size, window_size=window_size, negative_samples=num_negative_samples, ) for idx in range(len(pairs)): pair = pairs[idx] label = labels[idx] target, context = min(pair[0], pair[1]), max(pair[0], pair[1]) if target == context: continue entry = (target, context, label) example_weights[entry] += 1 targets, contexts, labels, weights = [], [], [], [] for entry in example_weights: weight = example_weights[entry] target, context, label = entry targets.append(target) contexts.append(context) labels.append(label) weights.append(weight) return np.array(targets), np.array(contexts), np.array(labels), np.array(weights) num_negative_samples = 4 targets, contexts, labels, weights = generate_examples( sequences=walks, window_size=num_steps, num_negative_samples=num_negative_samples, vocabulary_size=len(vocabulary), ) Generating postive and negative examples: 100%|██████████| 7025/7025 [00:11<00:00, 638.29it/s] Let's display the shapes of the outputs print(f\"Targets shape: {targets.shape}\") print(f\"Contexts shape: {contexts.shape}\") print(f\"Labels shape: {labels.shape}\") print(f\"Weights shape: {weights.shape}\") Targets shape: (880170,) Contexts shape: (880170,) Labels shape: (880170,) Weights shape: (880170,) Convert the data into tf.data.Dataset objects batch_size = 1024 def create_dataset(targets, contexts, labels, weights, batch_size): inputs = { \"target\": targets, \"context\": contexts, } dataset = tf.data.Dataset.from_tensor_slices((inputs, labels, weights)) dataset = dataset.shuffle(buffer_size=batch_size * 2) dataset = dataset.batch(batch_size, drop_remainder=True) dataset = dataset.prefetch(tf.data.AUTOTUNE) return dataset dataset = create_dataset( targets=targets, contexts=contexts, labels=labels, weights=weights, batch_size=batch_size, ) Train the skip-gram model Our skip-gram is a simple binary classification model that works as follows: An 
embedding is looked up for the target movie. An embedding is looked up for the context movie. The dot product is computed between these two embeddings. The result (after a sigmoid activation) is compared to the label. A binary crossentropy loss is used. learning_rate = 0.001 embedding_dim = 50 num_epochs = 10 Implement the model def create_model(vocabulary_size, embedding_dim): inputs = { \"target\": layers.Input(name=\"target\", shape=(), dtype=\"int32\"), \"context\": layers.Input(name=\"context\", shape=(), dtype=\"int32\"), } # Initialize item embeddings. embed_item = layers.Embedding( input_dim=vocabulary_size, output_dim=embedding_dim, embeddings_initializer=\"he_normal\", embeddings_regularizer=keras.regularizers.l2(1e-6), name=\"item_embeddings\", ) # Lookup embeddings for target. target_embeddings = embed_item(inputs[\"target\"]) # Lookup embeddings for context. context_embeddings = embed_item(inputs[\"context\"]) # Compute dot similarity between target and context embeddings. logits = layers.Dot(axes=1, normalize=False, name=\"dot_similarity\")( [target_embeddings, context_embeddings] ) # Create the model. model = keras.Model(inputs=inputs, outputs=logits) return model Train the model We instantiate the model and compile it. model = create_model(len(vocabulary), embedding_dim) model.compile( optimizer=keras.optimizers.Adam(learning_rate), loss=keras.losses.BinaryCrossentropy(from_logits=True), ) Let's plot the model. keras.utils.plot_model( model, show_shapes=True, show_dtype=True, show_layer_names=True, ) ('Failed to import pydot. You must `pip install pydot` and install graphviz (https://graphviz.gitlab.io/download/), ', 'for `pydotprint` to work.') Now we train the model on the dataset. history = model.fit(dataset, epochs=num_epochs) Epoch 1/10 859/859 [==============================] - 3s 3ms/step - loss: 3.4761 Epoch 2/10 859/859 [==============================] - 2s 3ms/step - loss: 3.3149 Epoch 3/10 859/859 [==============================] - 2s 3ms/step - loss: 3.2930 Epoch 4/10 859/859 [==============================] - 3s 3ms/step - loss: 3.2771 Epoch 5/10 859/859 [==============================] - 2s 3ms/step - loss: 3.2673 Epoch 6/10 859/859 [==============================] - 2s 3ms/step - loss: 3.2592 Epoch 7/10 859/859 [==============================] - 2s 3ms/step - loss: 3.2508 Epoch 8/10 859/859 [==============================] - 3s 3ms/step - loss: 3.2418 Epoch 9/10 859/859 [==============================] - 2s 3ms/step - loss: 3.2354 Epoch 10/10 859/859 [==============================] - 3s 3ms/step - loss: 3.2273 Finally we plot the learning history. plt.plot(history.history[\"loss\"]) plt.ylabel(\"loss\") plt.xlabel(\"epoch\") plt.show() png Analyze the learnt embeddings. movie_embeddings = model.get_layer(\"item_embeddings\").get_weights()[0] print(\"Embeddings shape:\", movie_embeddings.shape) Embeddings shape: (1406, 50) Find related movies Define a list with some movies called query_movies. query_movies = [ \"Matrix, The (1999)\", \"Star Wars: Episode IV - A New Hope (1977)\", \"Lion King, The (1994)\", \"Terminator 2: Judgment Day (1991)\", \"Godfather, The (1972)\", ] Get the embeddings of the movies in query_movies. 
query_embeddings = [] for movie_title in query_movies: movieId = get_movie_id_by_title(movie_title) token_id = vocabulary_lookup[movieId] movie_embedding = movie_embeddings[token_id] query_embeddings.append(movie_embedding) query_embeddings = np.array(query_embeddings) Compute the consine similarity between the embeddings of query_movies and all the other movies, then pick the top k for each. similarities = tf.linalg.matmul( tf.math.l2_normalize(query_embeddings), tf.math.l2_normalize(movie_embeddings), transpose_b=True, ) _, indices = tf.math.top_k(similarities, k=5) indices = indices.numpy().tolist() Display the top related movies in query_movies. for idx, title in enumerate(query_movies): print(title) print(\"\".rjust(len(title), \"-\")) similar_tokens = indices[idx] for token in similar_tokens: similar_movieId = vocabulary[token] similar_title = get_movie_title_by_id(similar_movieId) print(f\"- {similar_title}\") print() Matrix, The (1999) ------------------ - Matrix, The (1999) - Inception (2010) - Dark Knight, The (2008) - Back to the Future (1985) - Lord of the Rings: The Fellowship of the Ring, The (2001) Star Wars: Episode IV - A New Hope (1977) ----------------------------------------- - Star Wars: Episode V - The Empire Strikes Back (1980) - Star Wars: Episode IV - A New Hope (1977) - Back to the Future (1985) - Matrix, The (1999) - Star Wars: Episode VI - Return of the Jedi (1983) Lion King, The (1994) --------------------- - Lion King, The (1994) - Beauty and the Beast (1991) - Jurassic Park (1993) - Mrs. Doubtfire (1993) - Independence Day (a.k.a. ID4) (1996) Terminator 2: Judgment Day (1991) --------------------------------- - Terminator 2: Judgment Day (1991) - Star Wars: Episode VI - Return of the Jedi (1983) - Apollo 13 (1995) - Star Wars: Episode V - The Empire Strikes Back (1980) - Braveheart (1995) Godfather, The (1972) --------------------- - Godfather, The (1972) - Reservoir Dogs (1992) - Apocalypse Now (1979) - Fargo (1996) - American Beauty (1999) Visualize the embeddings using the Embedding Projector import io out_v = io.open(\"embeddings.tsv\", \"w\", encoding=\"utf-8\") out_m = io.open(\"metadata.tsv\", \"w\", encoding=\"utf-8\") for idx, movie_id in enumerate(vocabulary[1:]): movie_title = list(movies[movies.movieId == movie_id].title)[0] vector = movie_embeddings[idx] out_v.write(\"\t\".join([str(x) for x in vector]) + \"\n\") out_m.write(movie_title + \"\n\") out_v.close() out_m.close() Download the embeddings.tsv and metadata.tsv to analyze the obtained embeddings in the Embedding Projector. Implementation of an MPNN to predict blood-brain barrier permeability. Introduction In this tutorial, we will implement a type of graph neural network (GNN) known as _ message passing neural network_ (MPNN) to predict graph properties. Specifically, we will implement an MPNN to predict a molecular property known as blood-brain barrier permeability (BBBP). Motivation: as molecules are naturally represented as an undirected graph G = (V, E), where V is a set or vertices (nodes; atoms) and E a set of edges (bonds), GNNs (such as MPNN) are proving to be a useful method for predicting molecular properties. Until now, more traditional methods, such as random forests, support vector machines, etc., have been commonly used to predict molecular properties. In contrast to GNNs, these traditional approaches often operate on precomputed molecular features such as molecular weight, polarity, charge, number of carbon atoms, etc. 
Although these molecular features prove to be good predictors for various molecular properties, it is hypothesized that operating on these more \"raw\", \"low-level\", features could prove even better. References In recent years, a lot of effort has been put into developing neural networks for graph data, including molecular graphs. For a summary of graph neural networks, see e.g., A Comprehensive Survey on Graph Neural Networks and Graph Neural Networks: A Review of Methods and Applications; and for further reading on the specific graph neural network implemented in this tutorial see Neural Message Passing for Quantum Chemistry and DeepChem's MPNNModel. Setup Install RDKit and other dependencies (Text below taken from this tutorial). RDKit is a collection of cheminformatics and machine-learning software written in C++ and Python. In this tutorial, RDKit is used to conveniently and efficiently transform SMILES to molecule objects, and then from those obtain sets of atoms and bonds. SMILES expresses the structure of a given molecule in the form of an ASCII string. The SMILES string is a compact encoding which, for smaller molecules, is relatively human-readable. Encoding molecules as a string both alleviates and facilitates database and/or web searching of a given molecule. RDKit uses algorithms to accurately transform a given SMILES to a molecule object, which can then be used to compute a great number of molecular properties/features. Notice, RDKit is commonly installed via Conda. However, thanks to rdkit_platform_wheels, rdkit can now (for the sake of this tutorial) be installed easily via pip, as follows: pip -q install rdkit-pypi And for easy and efficient reading of csv files and visualization, the below needs to be installed: pip -q install pandas pip -q install Pillow pip -q install matplotlib pip -q install pydot sudo apt-get -qq install graphviz Import packages import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import numpy as np import pandas as pd import matplotlib.pyplot as plt import warnings from rdkit import Chem from rdkit import RDLogger from rdkit.Chem.Draw import IPythonConsole from rdkit.Chem.Draw import MolsToGridImage import logging tf.get_logger().setLevel(logging.ERROR) warnings.filterwarnings(\"ignore\") RDLogger.DisableLog(\"rdApp.*\") np.random.seed(42) tf.random.set_seed(42) Dataset Information about the dataset can be found in A Bayesian Approach to in Silico Blood-Brain Barrier Penetration Modeling and MoleculeNet: A Benchmark for Molecular Machine Learning. The dataset will be downloaded from MoleculeNet.ai. About The dataset contains 2,050 molecules. Each molecule come with a name, label and SMILES string. The blood-brain barrier (BBB) is a membrane separating the blood from the brain extracellular fluid, hence blocking out most drugs (molecules) from reaching the brain. Because of this, the BBBP has been important to study for the development of new drugs that aim to target the central nervous system. The labels for this data set are binary (1 or 0) and indicate the permeability of the molecules. csv_path = keras.utils.get_file( \"BBBP.csv\", \"https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/BBBP.csv\" ) df = pd.read_csv(csv_path, usecols=[1, 2, 3]) df.iloc[96:104] name p_np smiles 96 cefoxitin 1 CO[C@]1(NC(=O)Cc2sccc2)[C@H]3SCC(=C(N3C1=O)C(O... 97 Org34167 1 NC(CC=C)c1ccccc1c2noc3c2cccc3 98 9-OH Risperidone 1 OC1C(N2CCC1)=NC(C)=C(CCN3CCC(CC3)c4c5ccc(F)cc5... 
99 acetaminophen 1 CC(=O)Nc1ccc(O)cc1 100 acetylsalicylate 0 CC(=O)Oc1ccccc1C(O)=O 101 allopurinol 0 O=C1N=CN=C2NNC=C12 102 Alprostadil 0 CCCCC[C@H](O)/C=C/[C@H]1[C@H](O)CC(=O)[C@@H]1C... 103 aminophylline 0 CN1C(=O)N(C)c2nc[nH]c2C1=O.CN3C(=O)N(C)c4nc[nH... Define features To encode features for atoms and bonds (which we will need later), we'll define two classes: AtomFeaturizer and BondFeaturizer, respectively. To reduce the lines of code, i.e., to keep this tutorial short and concise, only about a handful of (atom and bond) features will be considered: [atom features] symbol (element), number of valence electrons, number of attached hydrogen atoms, orbital hybridization, [bond features] (covalent) bond type, and conjugation. class Featurizer: def __init__(self, allowable_sets): self.dim = 0 self.features_mapping = {} for k, s in allowable_sets.items(): s = sorted(list(s)) self.features_mapping[k] = dict(zip(s, range(self.dim, len(s) + self.dim))) self.dim += len(s) def encode(self, inputs): output = np.zeros((self.dim,)) for name_feature, feature_mapping in self.features_mapping.items(): feature = getattr(self, name_feature)(inputs) if feature not in feature_mapping: continue output[feature_mapping[feature]] = 1.0 return output class AtomFeaturizer(Featurizer): def __init__(self, allowable_sets): super().__init__(allowable_sets) def symbol(self, atom): return atom.GetSymbol() def n_valence(self, atom): return atom.GetTotalValence() def n_hydrogens(self, atom): return atom.GetTotalNumHs() def hybridization(self, atom): return atom.GetHybridization().name.lower() class BondFeaturizer(Featurizer): def __init__(self, allowable_sets): super().__init__(allowable_sets) self.dim += 1 def encode(self, bond): output = np.zeros((self.dim,)) if bond is None: output[-1] = 1.0 return output output = super().encode(bond) return output def bond_type(self, bond): return bond.GetBondType().name.lower() def conjugated(self, bond): return bond.GetIsConjugated() atom_featurizer = AtomFeaturizer( allowable_sets={ \"symbol\": {\"B\", \"Br\", \"C\", \"Ca\", \"Cl\", \"F\", \"H\", \"I\", \"N\", \"Na\", \"O\", \"P\", \"S\"}, \"n_valence\": {0, 1, 2, 3, 4, 5, 6}, \"n_hydrogens\": {0, 1, 2, 3, 4}, \"hybridization\": {\"s\", \"sp\", \"sp2\", \"sp3\"}, } ) bond_featurizer = BondFeaturizer( allowable_sets={ \"bond_type\": {\"single\", \"double\", \"triple\", \"aromatic\"}, \"conjugated\": {True, False}, } ) Generate graphs Before we can generate complete graphs from SMILES, we need to implement the following functions: molecule_from_smiles, which takes as input a SMILES and returns a molecule object. This is all handled by RDKit. graph_from_molecule, which takes as input a molecule object and returns a graph, represented as a three-tuple (atom_features, bond_features, pair_indices). For this we will make use of the classes defined previously. Finally, we can now implement the function graphs_from_smiles, which applies function (1) and subsequently (2) on all SMILES of the training, validation and test datasets. Notice: although scaffold splitting is recommended for this data set (see here), for simplicity, simple random splittings were performed. def molecule_from_smiles(smiles): # MolFromSmiles(m, sanitize=True) should be equivalent to # MolFromSmiles(m, sanitize=False) -> SanitizeMol(m) -> AssignStereochemistry(m, ...)
molecule = Chem.MolFromSmiles(smiles, sanitize=False) # If sanitization is unsuccessful, catch the error, and try again without # the sanitization step that caused the error flag = Chem.SanitizeMol(molecule, catchErrors=True) if flag != Chem.SanitizeFlags.SANITIZE_NONE: Chem.SanitizeMol(molecule, sanitizeOps=Chem.SanitizeFlags.SANITIZE_ALL ^ flag) Chem.AssignStereochemistry(molecule, cleanIt=True, force=True) return molecule def graph_from_molecule(molecule): # Initialize graph atom_features = [] bond_features = [] pair_indices = [] for atom in molecule.GetAtoms(): atom_features.append(atom_featurizer.encode(atom)) # Add self-loop. Notice, this also helps against some edge cases where the # last node has no edges. Alternatively, if no self-loops are used, for these # edge cases, zero-padding on the output of the edge network is needed. pair_indices.append([atom.GetIdx(), atom.GetIdx()]) bond_features.append(bond_featurizer.encode(None)) atom_neighbors = atom.GetNeighbors() for neighbor in atom_neighbors: bond = molecule.GetBondBetweenAtoms(atom.GetIdx(), neighbor.GetIdx()) pair_indices.append([atom.GetIdx(), neighbor.GetIdx()]) bond_features.append(bond_featurizer.encode(bond)) return np.array(atom_features), np.array(bond_features), np.array(pair_indices) def graphs_from_smiles(smiles_list): # Initialize graphs atom_features_list = [] bond_features_list = [] pair_indices_list = [] for smiles in smiles_list: molecule = molecule_from_smiles(smiles) atom_features, bond_features, pair_indices = graph_from_molecule(molecule) atom_features_list.append(atom_features) bond_features_list.append(bond_features) pair_indices_list.append(pair_indices) # Convert lists to ragged tensors for tf.data.Dataset later on return ( tf.ragged.constant(atom_features_list, dtype=tf.float32), tf.ragged.constant(bond_features_list, dtype=tf.float32), tf.ragged.constant(pair_indices_list, dtype=tf.int64), ) # Shuffle array of indices ranging from 0 to 2049 permuted_indices = np.random.permutation(np.arange(df.shape[0])) # Train set: 80 % of data train_index = permuted_indices[: int(df.shape[0] * 0.8)] x_train = graphs_from_smiles(df.iloc[train_index].smiles) y_train = df.iloc[train_index].p_np # Valid set: 19 % of data valid_index = permuted_indices[int(df.shape[0] * 0.8) : int(df.shape[0] * 0.99)] x_valid = graphs_from_smiles(df.iloc[valid_index].smiles) y_valid = df.iloc[valid_index].p_np # Test set: 1 % of data test_index = permuted_indices[int(df.shape[0] * 0.99) :] x_test = graphs_from_smiles(df.iloc[test_index].smiles) y_test = df.iloc[test_index].p_np Test the functions print(f\"Name:\t{df.name[100]}\nSMILES:\t{df.smiles[100]}\nBBBP:\t{df.p_np[100]}\") molecule = molecule_from_smiles(df.iloc[100].smiles) print(\"Molecule:\") molecule Name: acetylsalicylate SMILES: CC(=O)Oc1ccccc1C(O)=O BBBP: 0 Molecule: png graph = graph_from_molecule(molecule) print(\"Graph (including self-loops):\") print(\"\tatom features\t\", graph[0].shape) print(\"\tbond features\t\", graph[1].shape) print(\"\tpair indices\t\", graph[2].shape) Graph (including self-loops): atom features (13, 29) bond features (39, 7) pair indices (39, 2) Create a tf.data.Dataset In this tutorial, the MPNN implementation will take as input (per iteration) a single graph. Therefore, given a batch of (sub)graphs (molecules), we need to merge them into a single graph (we'll refer to this graph as global graph). This global graph is a disconnected graph where each subgraph is completely separated from the other subgraphs. 
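Before looking at prepare_batch below, the merging step is easiest to see on a toy example. The following sketch is purely illustrative (made-up pair indices and atom counts, not part of the tutorial's pipeline): the atom indices of the second molecule are shifted by the number of atoms that come before it, so the batch becomes one large graph with two disconnected components.

```python
import numpy as np

# Toy batch: molecule A has 3 atoms, molecule B has 2 atoms.
# Each row of pair_indices is a (source, target) edge within its own molecule.
pair_indices_a = np.array([[0, 0], [0, 1], [1, 0], [1, 2], [2, 1]])
pair_indices_b = np.array([[0, 0], [0, 1], [1, 0]])
num_atoms = [3, 2]

# Molecule B's atom indices are shifted by the 3 atoms of molecule A, so the
# merged batch is a single 5-node graph whose two parts share no edges.
offsets = np.cumsum([0] + num_atoms[:-1])  # -> array([0, 3])
merged_pair_indices = np.concatenate(
    [pair_indices_a + offsets[0], pair_indices_b + offsets[1]], axis=0
)
print(merged_pair_indices)  # rows mentioning atoms 3 and 4 belong to molecule B
```

This is exactly what the increment tensor in prepare_batch computes, just vectorized over a ragged batch.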
def prepare_batch(x_batch, y_batch): \"\"\"Merges (sub)graphs of batch into a single global (disconnected) graph \"\"\" atom_features, bond_features, pair_indices = x_batch # Obtain number of atoms and bonds for each graph (molecule) num_atoms = atom_features.row_lengths() num_bonds = bond_features.row_lengths() # Obtain partition indices. atom_partition_indices will be used to # gather (sub)graphs from global graph in model later on molecule_indices = tf.range(len(num_atoms)) atom_partition_indices = tf.repeat(molecule_indices, num_atoms) bond_partition_indices = tf.repeat(molecule_indices[:-1], num_bonds[1:]) # Merge (sub)graphs into a global (disconnected) graph. Adding 'increment' to # 'pair_indices' (and merging ragged tensors) actualizes the global graph increment = tf.cumsum(num_atoms[:-1]) increment = tf.pad( tf.gather(increment, bond_partition_indices), [(num_bonds[0], 0)] ) pair_indices = pair_indices.merge_dims(outer_axis=0, inner_axis=1).to_tensor() pair_indices = pair_indices + increment[:, tf.newaxis] atom_features = atom_features.merge_dims(outer_axis=0, inner_axis=1).to_tensor() bond_features = bond_features.merge_dims(outer_axis=0, inner_axis=1).to_tensor() return (atom_features, bond_features, pair_indices, atom_partition_indices), y_batch def MPNNDataset(X, y, batch_size=32, shuffle=False): dataset = tf.data.Dataset.from_tensor_slices((X, (y))) if shuffle: dataset = dataset.shuffle(1024) return dataset.batch(batch_size).map(prepare_batch, -1) Model The MPNN model can take on various shapes and forms. In this tutorial, we will implement an MPNN based on the original paper Neural Message Passing for Quantum Chemistry and DeepChem's MPNNModel. The MPNN of this tutorial consists of three stages: message passing, readout and classification. Message passing The message passing step itself consists of two parts: The edge network, which passes messages from 1-hop neighbors w^{t}_{i} of v^{t} to v^{t}, based on the edge features between them (e_{v^{t}w^{t}_{i}}, where t = 0), resulting in an updated node state v^{t+1}. The subscript _{i} denotes the i:th neighbor of v^{t} and the superscript ^{t} the t:th state of v or w. An important feature of the edge network (in contrast to e.g. the relational graph convolutional network) is that it allows for non-discrete edge features. However, in this tutorial, only discrete edge features will be used. The gated recurrent unit (GRU), which takes as input the most recent node state (e.g., v^{t+1}) and updates it based on previous node state(s) (e.g., v^{t}). In other words, the most recent node states serve as the input to the GRU, while the previous node state(s) are incorporated within the memory state of the GRU. Importantly, steps (1) and (2) are repeated for k steps, where at each step 1...k the radius (or number of hops) of aggregated information from the source node v increases by 1.
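In symbols, one message-passing step as implemented below can be written, in the same notation as above, as m_{v}^{t+1} = sum over all neighbors w in N(v) of A(e_{vw}) h_{w}^{t}, followed by h_{v}^{t+1} = GRU(m_{v}^{t+1}, h_{v}^{t}), where A(e_{vw}) is the (atom_dim x atom_dim) matrix produced by the edge network from the features of edge (v, w) and h_{v}^{t} is the state of node v at step t. The GRU receives the aggregated message as its input and the previous node state as its memory state, which matches the order of arguments in the GRUCell call below.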
class EdgeNetwork(layers.Layer): def __init__(self, **kwargs): super().__init__(**kwargs) def build(self, input_shape): self.atom_dim = input_shape[0][-1] self.bond_dim = input_shape[1][-1] self.kernel = self.add_weight( shape=(self.bond_dim, self.atom_dim * self.atom_dim), trainable=True, initializer=\"glorot_uniform\", ) self.bias = self.add_weight( shape=(self.atom_dim * self.atom_dim), trainable=True, initializer=\"zeros\", ) self.built = True def call(self, inputs): atom_features, bond_features, pair_indices = inputs # Apply linear transformation to bond features bond_features = tf.matmul(bond_features, self.kernel) + self.bias # Reshape for neighborhood aggregation later bond_features = tf.reshape(bond_features, (-1, self.atom_dim, self.atom_dim)) # Obtain atom features of neighbors atom_features_neighbors = tf.gather(atom_features, pair_indices[:, 1]) atom_features_neighbors = tf.expand_dims(atom_features_neighbors, axis=-1) # Apply neighborhood aggregation transformed_features = tf.matmul(bond_features, atom_features_neighbors) transformed_features = tf.squeeze(transformed_features, axis=-1) aggregated_features = tf.math.segment_sum( transformed_features, pair_indices[:, 0] ) return aggregated_features class MessagePassing(layers.Layer): def __init__(self, units, steps=4, **kwargs): super().__init__(**kwargs) self.units = units self.steps = steps def build(self, input_shape): self.atom_dim = input_shape[0][-1] self.message_step = EdgeNetwork() self.pad_length = max(0, self.units - self.atom_dim) self.update_step = layers.GRUCell(self.atom_dim + self.pad_length) self.built = True def call(self, inputs): atom_features, bond_features, pair_indices = inputs # Pad atom features if number of desired units exceeds atom_features dim atom_features_updated = tf.pad(atom_features, [(0, 0), (0, self.pad_length)]) # Perform a number of steps of message passing for i in range(self.steps): # Aggregate atom_features from neighbors atom_features_aggregated = self.message_step( [atom_features_updated, bond_features, pair_indices] ) # Update aggregated atom_features via a step of GRU atom_features_updated, _ = self.update_step( atom_features_aggregated, atom_features_updated ) return atom_features_updated Readout When the message passing procedure ends, the k-step-aggregated node states are to be partitioned into subgraphs (corresponding to each molecule in the batch) and subsequently reduced to graph-level embeddings. In the original paper, a set-to-set layer was used for this purpose. In this tutorial however, a transformer encoder will be used. Specifically: the k-step-aggregated node states will be partitioned into the subgraphs (corresponding to each molecule in the batch); each subgraph will then be padded to match the subgraph with the greatest number of nodes, followed by a tf.stack(...); the (stacked) padded tensor, encoding subgraphs (each subgraph containing sets of node states), is masked to make sure the paddings don't interfere with training; finally, the padded tensor is passed to the transformer followed by an average pooling.
class PartitionPadding(layers.Layer): def __init__(self, batch_size, **kwargs): super().__init__(**kwargs) self.batch_size = batch_size def call(self, inputs): atom_features, atom_partition_indices = inputs # Obtain subgraphs atom_features = tf.dynamic_partition( atom_features, atom_partition_indices, self.batch_size ) # Pad and stack subgraphs num_atoms = [tf.shape(f)[0] for f in atom_features] max_num_atoms = tf.reduce_max(num_atoms) atom_features_padded = tf.stack( [ tf.pad(f, [(0, max_num_atoms - n), (0, 0)]) for f, n in zip(atom_features, num_atoms) ], axis=0, ) # Remove empty subgraphs (usually for last batch) nonempty_examples = tf.where(tf.reduce_sum(atom_features_padded, (1, 2)) != 0) nonempty_examples = tf.squeeze(nonempty_examples, axis=-1) return tf.gather(atom_features_padded, nonempty_examples, axis=0) class TransformerEncoder(layers.Layer): def __init__(self, num_heads=8, embed_dim=64, dense_dim=512, **kwargs): super().__init__(**kwargs) self.attention = layers.MultiHeadAttention(num_heads, embed_dim) self.dense_proj = keras.Sequential( [layers.Dense(dense_dim, activation=\"relu\"), layers.Dense(embed_dim),] ) self.layernorm_1 = layers.LayerNormalization() self.layernorm_2 = layers.LayerNormalization() self.supports_masking = True def call(self, inputs, mask=None): attention_mask = mask[:, tf.newaxis, :] if mask is not None else None attention_output = self.attention(inputs, inputs, attention_mask=attention_mask) proj_input = self.layernorm_1(inputs + attention_output) return self.layernorm_2(proj_input + self.dense_proj(proj_input)) Message Passing Neural Network (MPNN) It is now time to complete the MPNN model. In addition to the message passing and readout, a two-layer classification network will be implemented to make predictions of BBBP. 
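Before wiring everything together, it can help to sanity-check the readout encoder defined above on random data. This is only an illustrative shape check (a batch of 2 padded subgraphs with 5 nodes and 64-dimensional states is assumed):

```python
import tensorflow as tf

# Random padded node states: (subgraphs in batch, max nodes per subgraph, state dim).
dummy_states = tf.random.normal((2, 5, 64))
encoder = TransformerEncoder(num_heads=8, embed_dim=64, dense_dim=512)

# Without a mask the encoder attends over all positions; the output keeps the
# input shape, and averaging over the node axis yields one embedding per subgraph.
encoded = encoder(dummy_states)
graph_embeddings = tf.reduce_mean(encoded, axis=1)
print(encoded.shape, graph_embeddings.shape)  # (2, 5, 64) (2, 64)
```

In the full model, the Masking layer and GlobalAveragePooling1D perform the mask-aware version of this pooling.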
def MPNNModel( atom_dim, bond_dim, batch_size=32, message_units=64, message_steps=4, num_attention_heads=8, dense_units=512, ): atom_features = layers.Input((atom_dim), dtype=\"float32\", name=\"atom_features\") bond_features = layers.Input((bond_dim), dtype=\"float32\", name=\"bond_features\") pair_indices = layers.Input((2), dtype=\"int32\", name=\"pair_indices\") atom_partition_indices = layers.Input( (), dtype=\"int32\", name=\"atom_partition_indices\" ) x = MessagePassing(message_units, message_steps)( [atom_features, bond_features, pair_indices] ) x = PartitionPadding(batch_size)([x, atom_partition_indices]) x = layers.Masking()(x) x = TransformerEncoder(num_attention_heads, message_units, dense_units)(x) x = layers.GlobalAveragePooling1D()(x) x = layers.Dense(dense_units, activation=\"relu\")(x) x = layers.Dense(1, activation=\"sigmoid\")(x) model = keras.Model( inputs=[atom_features, bond_features, pair_indices, atom_partition_indices], outputs=[x], ) return model mpnn = MPNNModel( atom_dim=x_train[0][0][0].shape[0], bond_dim=x_train[1][0][0].shape[0], ) mpnn.compile( loss=keras.losses.BinaryCrossentropy(), optimizer=keras.optimizers.Adam(learning_rate=5e-4), metrics=[keras.metrics.AUC(name=\"AUC\")], ) keras.utils.plot_model(mpnn, show_dtype=True, show_shapes=True) png Training train_dataset = MPNNDataset(x_train, y_train) valid_dataset = MPNNDataset(x_valid, y_valid) test_dataset = MPNNDataset(x_test, y_test) history = mpnn.fit( train_dataset, validation_data=valid_dataset, epochs=40, verbose=2, class_weight={0: 2.0, 1: 0.5}, ) plt.figure(figsize=(10, 6)) plt.plot(history.history[\"AUC\"], label=\"train AUC\") plt.plot(history.history[\"val_AUC\"], label=\"valid AUC\") plt.xlabel(\"Epochs\", fontsize=16) plt.ylabel(\"AUC\", fontsize=16) plt.legend(fontsize=16) Epoch 1/40 52/52 - 4s - loss: 0.5240 - AUC: 0.7202 - val_loss: 0.5523 - val_AUC: 0.8310 Epoch 2/40 52/52 - 1s - loss: 0.4704 - AUC: 0.7899 - val_loss: 0.5592 - val_AUC: 0.8381 Epoch 3/40 52/52 - 1s - loss: 0.4529 - AUC: 0.8088 - val_loss: 0.5911 - val_AUC: 0.8406 Epoch 4/40 52/52 - 1s - loss: 0.4385 - AUC: 0.8224 - val_loss: 0.5379 - val_AUC: 0.8435 Epoch 5/40 52/52 - 1s - loss: 0.4256 - AUC: 0.8348 - val_loss: 0.4765 - val_AUC: 0.8473 Epoch 6/40 52/52 - 1s - loss: 0.4143 - AUC: 0.8448 - val_loss: 0.4760 - val_AUC: 0.8518 Epoch 7/40 52/52 - 1s - loss: 0.3968 - AUC: 0.8600 - val_loss: 0.4917 - val_AUC: 0.8592 Epoch 8/40 52/52 - 1s - loss: 0.3823 - AUC: 0.8716 - val_loss: 0.5301 - val_AUC: 0.8607 Epoch 9/40 52/52 - 1s - loss: 0.3724 - AUC: 0.8785 - val_loss: 0.5795 - val_AUC: 0.8632 Epoch 10/40 52/52 - 1s - loss: 0.3610 - AUC: 0.8878 - val_loss: 0.6460 - val_AUC: 0.8655 Epoch 11/40 52/52 - 1s - loss: 0.3491 - AUC: 0.8956 - val_loss: 0.6604 - val_AUC: 0.8685 Epoch 12/40 52/52 - 1s - loss: 0.3311 - AUC: 0.9076 - val_loss: 0.6075 - val_AUC: 0.8745 Epoch 13/40 52/52 - 1s - loss: 0.3162 - AUC: 0.9165 - val_loss: 0.5659 - val_AUC: 0.8832 Epoch 14/40 52/52 - 1s - loss: 0.3214 - AUC: 0.9131 - val_loss: 0.6581 - val_AUC: 0.8886 Epoch 15/40 52/52 - 1s - loss: 0.3064 - AUC: 0.9213 - val_loss: 0.6957 - val_AUC: 0.8884 Epoch 16/40 52/52 - 1s - loss: 0.2999 - AUC: 0.9246 - val_loss: 0.7201 - val_AUC: 0.8868 Epoch 17/40 52/52 - 1s - loss: 0.2825 - AUC: 0.9338 - val_loss: 0.8034 - val_AUC: 0.8850 Epoch 18/40 52/52 - 1s - loss: 0.2813 - AUC: 0.9337 - val_loss: 0.8026 - val_AUC: 0.8812 Epoch 19/40 52/52 - 1s - loss: 0.2725 - AUC: 0.9376 - val_loss: 0.8710 - val_AUC: 0.8867 Epoch 20/40 52/52 - 1s - loss: 0.2698 - AUC: 0.9378 - val_loss: 0.8262 
- val_AUC: 0.8959 Epoch 21/40 52/52 - 1s - loss: 0.2729 - AUC: 0.9358 - val_loss: 0.7017 - val_AUC: 0.8970 Epoch 22/40 52/52 - 1s - loss: 0.2707 - AUC: 0.9376 - val_loss: 0.5759 - val_AUC: 0.8897 Epoch 23/40 52/52 - 1s - loss: 0.2562 - AUC: 0.9440 - val_loss: 0.4482 - val_AUC: 0.8945 Epoch 24/40 52/52 - 1s - loss: 0.2693 - AUC: 0.9387 - val_loss: 0.4220 - val_AUC: 0.8944 Epoch 25/40 52/52 - 1s - loss: 0.2753 - AUC: 0.9356 - val_loss: 0.5671 - val_AUC: 0.9081 Epoch 26/40 52/52 - 1s - loss: 0.2315 - AUC: 0.9538 - val_loss: 0.4307 - val_AUC: 0.9105 Epoch 27/40 52/52 - 1s - loss: 0.2269 - AUC: 0.9545 - val_loss: 0.4037 - val_AUC: 0.9084 Epoch 28/40 52/52 - 1s - loss: 0.2318 - AUC: 0.9528 - val_loss: 0.4394 - val_AUC: 0.9133 Epoch 29/40 52/52 - 1s - loss: 0.2162 - AUC: 0.9584 - val_loss: 0.4683 - val_AUC: 0.9199 Epoch 30/40 52/52 - 1s - loss: 0.2038 - AUC: 0.9622 - val_loss: 0.4301 - val_AUC: 0.9186 Epoch 31/40 52/52 - 1s - loss: 0.1924 - AUC: 0.9656 - val_loss: 0.3870 - val_AUC: 0.9253 Epoch 32/40 52/52 - 1s - loss: 0.2012 - AUC: 0.9632 - val_loss: 0.4105 - val_AUC: 0.9164 Epoch 33/40 52/52 - 1s - loss: 0.2030 - AUC: 0.9624 - val_loss: 0.3595 - val_AUC: 0.9175 Epoch 34/40 52/52 - 1s - loss: 0.2041 - AUC: 0.9625 - val_loss: 0.3983 - val_AUC: 0.9116 Epoch 35/40 52/52 - 1s - loss: 0.2017 - AUC: 0.9631 - val_loss: 0.3790 - val_AUC: 0.9220 Epoch 36/40 52/52 - 1s - loss: 0.1986 - AUC: 0.9640 - val_loss: 0.3593 - val_AUC: 0.9289 Epoch 37/40 52/52 - 1s - loss: 0.1892 - AUC: 0.9657 - val_loss: 0.3663 - val_AUC: 0.9235 Epoch 38/40 52/52 - 1s - loss: 0.1948 - AUC: 0.9632 - val_loss: 0.4329 - val_AUC: 0.9160 Epoch 39/40 52/52 - 1s - loss: 0.1734 - AUC: 0.9701 - val_loss: 0.3298 - val_AUC: 0.9263 Epoch 40/40 52/52 - 1s - loss: 0.1800 - AUC: 0.9690 - val_loss: 0.3345 - val_AUC: 0.9246 png Predicting molecules = [molecule_from_smiles(df.smiles.values[index]) for index in test_index] y_true = [df.p_np.values[index] for index in test_index] y_pred = tf.squeeze(mpnn.predict(test_dataset), axis=1) legends = [f\"y_true/y_pred = {y_true[i]}/{y_pred[i]:.2f}\" for i in range(len(y_true))] MolsToGridImage(molecules, molsPerRow=4, legends=legends) png Conclusions In this tutorial, we demonstrated a message passing neural network (MPNN) to predict blood-brain barrier permeability (BBBP) for a number of different molecules. We first had to construct graphs from SMILES, and then build a Keras model that could operate on these graphs. Implementing a graph neural network for predicting the topic of a paper given citations. Introduction Many datasets in various machine learning (ML) applications have structural relationships between their entities, which can be represented as graphs. Such applications include social and communication network analysis, traffic prediction, and fraud detection. Graph representation learning aims to build and train models for graph datasets to be used for a variety of ML tasks. This example demonstrates a simple implementation of a Graph Neural Network (GNN) model. The model is used for a node prediction task on the Cora dataset to predict the subject of a paper given its words and citation network. Note that we implement a Graph Convolution Layer from scratch to provide a better understanding of how it works. However, there are a number of specialized TensorFlow-based libraries that provide rich GNN APIs, such as Spektral, StellarGraph, and GraphNets.
Setup import os import pandas as pd import numpy as np import networkx as nx import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Prepare the Dataset The Cora dataset consists of 2,708 scientific papers classified into one of seven classes. The citation network consists of 5,429 links. Each paper has a binary word vector of size 1,433, indicating the presence of a corresponding word. Download the dataset The dataset has two tab-separated files: cora.cites and cora.content. The cora.cites includes the citation records with two columns: cited_paper_id (target) and citing_paper_id (source). The cora.content includes the paper content records with 1,435 columns: paper_id, subject, and 1,433 binary features. Let's download the dataset. zip_file = keras.utils.get_file( fname=\"cora.tgz\", origin=\"https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz\", extract=True, ) data_dir = os.path.join(os.path.dirname(zip_file), \"cora\") Process and visualize the dataset Then we load the citations data into a Pandas DataFrame. citations = pd.read_csv( os.path.join(data_dir, \"cora.cites\"), sep=\"\t\", header=None, names=[\"target\", \"source\"], ) print(\"Citations shape:\", citations.shape) Citations shape: (5429, 2) Now we display a sample of the citations DataFrame. The target column includes the paper ids cited by the paper ids in the source column. citations.sample(frac=1).head() target source 2581 28227 6169 1500 7297 7276 1194 6184 1105718 4221 139738 1108834 3707 79809 1153275 Now let's load the papers data into a Pandas DataFrame. column_names = [\"paper_id\"] + [f\"term_{idx}\" for idx in range(1433)] + [\"subject\"] papers = pd.read_csv( os.path.join(data_dir, \"cora.content\"), sep=\"\t\", header=None, names=column_names, ) print(\"Papers shape:\", papers.shape) Papers shape: (2708, 1435) Now we display a sample of the papers DataFrame. The DataFrame includes the paper_id and the subject columns, as well as 1,433 binary columns representing whether a term exists in the paper or not. print(papers.sample(5).T) 1 133 2425 paper_id 1061127 34355 1108389 term_0 0 0 0 term_1 0 0 0 term_2 0 0 0 term_3 0 0 0 ... ... ... ... term_1429 0 0 0 term_1430 0 0 0 term_1431 0 0 0 term_1432 0 0 0 subject Rule_Learning Neural_Networks Probabilistic_Methods 2103 1346 paper_id 1153942 80491 term_0 0 0 term_1 0 0 term_2 1 0 term_3 0 0 ... ... ... term_1429 0 0 term_1430 0 0 term_1431 0 0 term_1432 0 0 subject Genetic_Algorithms Neural_Networks [1435 rows x 5 columns] Let's display the count of the papers in each subject. print(papers.subject.value_counts()) Neural_Networks 818 Probabilistic_Methods 426 Genetic_Algorithms 418 Theory 351 Case_Based 298 Reinforcement_Learning 217 Rule_Learning 180 Name: subject, dtype: int64 We convert the paper ids and the subjects into zero-based indices. class_values = sorted(papers[\"subject\"].unique()) class_idx = {name: id for id, name in enumerate(class_values)} paper_idx = {name: idx for idx, name in enumerate(sorted(papers[\"paper_id\"].unique()))} papers[\"paper_id\"] = papers[\"paper_id\"].apply(lambda name: paper_idx[name]) citations[\"source\"] = citations[\"source\"].apply(lambda name: paper_idx[name]) citations[\"target\"] = citations[\"target\"].apply(lambda name: paper_idx[name]) papers[\"subject\"] = papers[\"subject\"].apply(lambda value: class_idx[value]) Now let's visualize the citation graph. Each node in the graph represents a paper, and the color of the node corresponds to its subject.
Note that we only show a sample of the papers in the dataset. plt.figure(figsize=(10, 10)) colors = papers[\"subject\"].tolist() cora_graph = nx.from_pandas_edgelist(citations.sample(n=1500)) subjects = list(papers[papers[\"paper_id\"].isin(list(cora_graph.nodes))][\"subject\"]) nx.draw_spring(cora_graph, node_size=15, node_color=subjects) png Split the dataset into stratified train and test sets train_data, test_data = [], [] for _, group_data in papers.groupby(\"subject\"): # Select around 50% of the dataset for training. random_selection = np.random.rand(len(group_data.index)) <= 0.5 train_data.append(group_data[random_selection]) test_data.append(group_data[~random_selection]) train_data = pd.concat(train_data).sample(frac=1) test_data = pd.concat(test_data).sample(frac=1) print(\"Train data shape:\", train_data.shape) print(\"Test data shape:\", test_data.shape) Train data shape: (1360, 1435) Test data shape: (1348, 1435) Implement Train and Evaluate Experiment hidden_units = [32, 32] learning_rate = 0.01 dropout_rate = 0.5 num_epochs = 300 batch_size = 256 This function compiles and trains an input model using the given training data. def run_experiment(model, x_train, y_train): # Compile the model. model.compile( optimizer=keras.optimizers.Adam(learning_rate), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy(name=\"acc\")], ) # Create an early stopping callback. early_stopping = keras.callbacks.EarlyStopping( monitor=\"val_acc\", patience=50, restore_best_weights=True ) # Fit the model. history = model.fit( x=x_train, y=y_train, epochs=num_epochs, batch_size=batch_size, validation_split=0.15, callbacks=[early_stopping], ) return history This function displays the loss and accuracy curves of the model during training. def display_learning_curves(history): fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5)) ax1.plot(history.history[\"loss\"]) ax1.plot(history.history[\"val_loss\"]) ax1.legend([\"train\", \"test\"], loc=\"upper right\") ax1.set_xlabel(\"Epochs\") ax1.set_ylabel(\"Loss\") ax2.plot(history.history[\"acc\"]) ax2.plot(history.history[\"val_acc\"]) ax2.legend([\"train\", \"test\"], loc=\"upper right\") ax2.set_xlabel(\"Epochs\") ax2.set_ylabel(\"Accuracy\") plt.show() Implement Feedforward Network (FFN) Module We will use this module in the baseline and the GNN models. def create_ffn(hidden_units, dropout_rate, name=None): fnn_layers = [] for units in hidden_units: fnn_layers.append(layers.BatchNormalization()) fnn_layers.append(layers.Dropout(dropout_rate)) fnn_layers.append(layers.Dense(units, activation=tf.nn.gelu)) return keras.Sequential(fnn_layers, name=name) Build a Baseline Neural Network Model Prepare the data for the baseline model feature_names = set(papers.columns) - {\"paper_id\", \"subject\"} num_features = len(feature_names) num_classes = len(class_idx) # Create train and test features as a numpy array. x_train = train_data[feature_names].to_numpy() x_test = test_data[feature_names].to_numpy() # Create train and test targets as a numpy array. y_train = train_data[\"subject\"] y_test = test_data[\"subject\"] Implement a baseline classifier We add five FFN blocks with skip connections, so that we generate a baseline model with roughly the same number of parameters as the GNN models to be built later. 
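Before assembling the baseline, a quick check of the FFN module defined above can be useful. The call below is purely illustrative (random inputs; the 1,433 features match the Cora term vectors):

```python
import tensorflow as tf

# Each block built by create_ffn maps its input to the last entry of
# hidden_units (32 here), so successive blocks can be joined with Add skips.
demo_block = create_ffn(hidden_units, dropout_rate)
demo_output = demo_block(tf.random.normal((4, 1433)))
print(demo_output.shape)  # (4, 32)
```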
def create_baseline_model(hidden_units, num_classes, dropout_rate=0.2): inputs = layers.Input(shape=(num_features,), name=\"input_features\") x = create_ffn(hidden_units, dropout_rate, name=f\"ffn_block1\")(inputs) for block_idx in range(4): # Create an FFN block. x1 = create_ffn(hidden_units, dropout_rate, name=f\"ffn_block{block_idx + 2}\")(x) # Add skip connection. x = layers.Add(name=f\"skip_connection{block_idx + 2}\")([x, x1]) # Compute logits. logits = layers.Dense(num_classes, name=\"logits\")(x) # Create the model. return keras.Model(inputs=inputs, outputs=logits, name=\"baseline\") baseline_model = create_baseline_model(hidden_units, num_classes, dropout_rate) baseline_model.summary() Model: \"baseline\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_features (InputLayer) [(None, 1433)] 0 __________________________________________________________________________________________________ ffn_block1 (Sequential) (None, 32) 52804 input_features[0][0] __________________________________________________________________________________________________ ffn_block2 (Sequential) (None, 32) 2368 ffn_block1[0][0] __________________________________________________________________________________________________ skip_connection2 (Add) (None, 32) 0 ffn_block1[0][0] ffn_block2[0][0] __________________________________________________________________________________________________ ffn_block3 (Sequential) (None, 32) 2368 skip_connection2[0][0] __________________________________________________________________________________________________ skip_connection3 (Add) (None, 32) 0 skip_connection2[0][0] ffn_block3[0][0] __________________________________________________________________________________________________ ffn_block4 (Sequential) (None, 32) 2368 skip_connection3[0][0] __________________________________________________________________________________________________ skip_connection4 (Add) (None, 32) 0 skip_connection3[0][0] ffn_block4[0][0] __________________________________________________________________________________________________ ffn_block5 (Sequential) (None, 32) 2368 skip_connection4[0][0] __________________________________________________________________________________________________ skip_connection5 (Add) (None, 32) 0 skip_connection4[0][0] ffn_block5[0][0] __________________________________________________________________________________________________ logits (Dense) (None, 7) 231 skip_connection5[0][0] ================================================================================================== Total params: 62,507 Trainable params: 59,065 Non-trainable params: 3,442 __________________________________________________________________________________________________ Train the baseline classifier history = run_experiment(baseline_model, x_train, y_train) Epoch 1/300 5/5 [==============================] - 3s 203ms/step - loss: 4.1695 - acc: 0.1660 - val_loss: 1.9008 - val_acc: 0.3186 Epoch 2/300 5/5 [==============================] - 0s 15ms/step - loss: 2.9269 - acc: 0.2630 - val_loss: 1.8906 - val_acc: 0.3235 Epoch 3/300 5/5 [==============================] - 0s 15ms/step - loss: 2.5669 - acc: 0.2424 - val_loss: 1.8713 - val_acc: 0.3186 Epoch 4/300 5/5 [==============================] - 0s 15ms/step - loss: 2.1377 - acc: 0.3147 - val_loss: 1.8687 - val_acc: 0.3529 Epoch 
5/300 5/5 [==============================] - 0s 15ms/step - loss: 2.0256 - acc: 0.3297 - val_loss: 1.8285 - val_acc: 0.3235 Epoch 6/300 5/5 [==============================] - 0s 15ms/step - loss: 1.8148 - acc: 0.3495 - val_loss: 1.8000 - val_acc: 0.3235 Epoch 7/300 5/5 [==============================] - 0s 15ms/step - loss: 1.7216 - acc: 0.3883 - val_loss: 1.7771 - val_acc: 0.3333 Epoch 8/300 5/5 [==============================] - 0s 15ms/step - loss: 1.6941 - acc: 0.3910 - val_loss: 1.7528 - val_acc: 0.3284 Epoch 9/300 5/5 [==============================] - 0s 15ms/step - loss: 1.5690 - acc: 0.4358 - val_loss: 1.7128 - val_acc: 0.3333 Epoch 10/300 5/5 [==============================] - 0s 15ms/step - loss: 1.5139 - acc: 0.4367 - val_loss: 1.6650 - val_acc: 0.3676 Epoch 11/300 5/5 [==============================] - 0s 15ms/step - loss: 1.4370 - acc: 0.4930 - val_loss: 1.6145 - val_acc: 0.3775 Epoch 12/300 5/5 [==============================] - 0s 15ms/step - loss: 1.3696 - acc: 0.5109 - val_loss: 1.5787 - val_acc: 0.3873 Epoch 13/300 5/5 [==============================] - 0s 15ms/step - loss: 1.3979 - acc: 0.5341 - val_loss: 1.5564 - val_acc: 0.3922 Epoch 14/300 5/5 [==============================] - 0s 15ms/step - loss: 1.2681 - acc: 0.5599 - val_loss: 1.5547 - val_acc: 0.3922 Epoch 15/300 5/5 [==============================] - 0s 16ms/step - loss: 1.1970 - acc: 0.5807 - val_loss: 1.5735 - val_acc: 0.3873 Epoch 16/300 5/5 [==============================] - 0s 15ms/step - loss: 1.1555 - acc: 0.6032 - val_loss: 1.5131 - val_acc: 0.4216 Epoch 17/300 5/5 [==============================] - 0s 15ms/step - loss: 1.1234 - acc: 0.6130 - val_loss: 1.4385 - val_acc: 0.4608 Epoch 18/300 5/5 [==============================] - 0s 14ms/step - loss: 1.0507 - acc: 0.6306 - val_loss: 1.3929 - val_acc: 0.4804 Epoch 19/300 5/5 [==============================] - 0s 15ms/step - loss: 1.0341 - acc: 0.6393 - val_loss: 1.3628 - val_acc: 0.4902 Epoch 20/300 5/5 [==============================] - 0s 35ms/step - loss: 0.9457 - acc: 0.6693 - val_loss: 1.3383 - val_acc: 0.4902 Epoch 21/300 5/5 [==============================] - 0s 17ms/step - loss: 0.9054 - acc: 0.6756 - val_loss: 1.3365 - val_acc: 0.4951 Epoch 22/300 5/5 [==============================] - 0s 15ms/step - loss: 0.8952 - acc: 0.6854 - val_loss: 1.3228 - val_acc: 0.5049 Epoch 23/300 5/5 [==============================] - 0s 15ms/step - loss: 0.8413 - acc: 0.7217 - val_loss: 1.2924 - val_acc: 0.5294 Epoch 24/300 5/5 [==============================] - 0s 15ms/step - loss: 0.8543 - acc: 0.6998 - val_loss: 1.2379 - val_acc: 0.5490 Epoch 25/300 5/5 [==============================] - 0s 16ms/step - loss: 0.7632 - acc: 0.7376 - val_loss: 1.1516 - val_acc: 0.5833 Epoch 26/300 5/5 [==============================] - 0s 15ms/step - loss: 0.7189 - acc: 0.7496 - val_loss: 1.1296 - val_acc: 0.5931 Epoch 27/300 5/5 [==============================] - 0s 15ms/step - loss: 0.7433 - acc: 0.7482 - val_loss: 1.0937 - val_acc: 0.6127 Epoch 28/300 5/5 [==============================] - 0s 15ms/step - loss: 0.7310 - acc: 0.7440 - val_loss: 1.0950 - val_acc: 0.5980 Epoch 29/300 5/5 [==============================] - 0s 16ms/step - loss: 0.7059 - acc: 0.7654 - val_loss: 1.1343 - val_acc: 0.5882 Epoch 30/300 5/5 [==============================] - 0s 21ms/step - loss: 0.6831 - acc: 0.7645 - val_loss: 1.1938 - val_acc: 0.5686 Epoch 31/300 5/5 [==============================] - 0s 23ms/step - loss: 0.6741 - acc: 0.7788 - val_loss: 1.1281 - val_acc: 0.5931 Epoch 32/300 5/5 
[==============================] - 0s 16ms/step - loss: 0.6344 - acc: 0.7753 - val_loss: 1.0870 - val_acc: 0.6029 Epoch 33/300 5/5 [==============================] - 0s 16ms/step - loss: 0.6052 - acc: 0.7876 - val_loss: 1.0947 - val_acc: 0.6127 Epoch 34/300 5/5 [==============================] - 0s 15ms/step - loss: 0.6313 - acc: 0.7908 - val_loss: 1.1186 - val_acc: 0.5882 Epoch 35/300 5/5 [==============================] - 0s 16ms/step - loss: 0.6163 - acc: 0.7955 - val_loss: 1.0899 - val_acc: 0.6176 Epoch 36/300 5/5 [==============================] - 0s 16ms/step - loss: 0.5388 - acc: 0.8203 - val_loss: 1.1222 - val_acc: 0.5882 Epoch 37/300 5/5 [==============================] - 0s 16ms/step - loss: 0.5487 - acc: 0.8080 - val_loss: 1.0205 - val_acc: 0.6127 Epoch 38/300 5/5 [==============================] - 0s 16ms/step - loss: 0.5885 - acc: 0.7903 - val_loss: 0.9268 - val_acc: 0.6569 Epoch 39/300 5/5 [==============================] - 0s 15ms/step - loss: 0.5541 - acc: 0.8025 - val_loss: 0.9367 - val_acc: 0.6471 Epoch 40/300 5/5 [==============================] - 0s 36ms/step - loss: 0.5594 - acc: 0.7935 - val_loss: 0.9688 - val_acc: 0.6275 Epoch 41/300 5/5 [==============================] - 0s 17ms/step - loss: 0.5255 - acc: 0.8169 - val_loss: 1.0076 - val_acc: 0.6324 Epoch 42/300 5/5 [==============================] - 0s 16ms/step - loss: 0.5284 - acc: 0.8180 - val_loss: 1.0106 - val_acc: 0.6373 Epoch 43/300 5/5 [==============================] - 0s 15ms/step - loss: 0.5141 - acc: 0.8188 - val_loss: 0.8842 - val_acc: 0.6912 Epoch 44/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4767 - acc: 0.8342 - val_loss: 0.8249 - val_acc: 0.7108 Epoch 45/300 5/5 [==============================] - 0s 15ms/step - loss: 0.5915 - acc: 0.8055 - val_loss: 0.8567 - val_acc: 0.6912 Epoch 46/300 5/5 [==============================] - 0s 15ms/step - loss: 0.5026 - acc: 0.8357 - val_loss: 0.9287 - val_acc: 0.6618 Epoch 47/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4859 - acc: 0.8304 - val_loss: 0.9044 - val_acc: 0.6667 Epoch 48/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4860 - acc: 0.8440 - val_loss: 0.8672 - val_acc: 0.6912 Epoch 49/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4723 - acc: 0.8358 - val_loss: 0.8717 - val_acc: 0.6863 Epoch 50/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4831 - acc: 0.8457 - val_loss: 0.8674 - val_acc: 0.6912 Epoch 51/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4873 - acc: 0.8353 - val_loss: 0.8587 - val_acc: 0.7010 Epoch 52/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4537 - acc: 0.8472 - val_loss: 0.8544 - val_acc: 0.7059 Epoch 53/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4684 - acc: 0.8425 - val_loss: 0.8423 - val_acc: 0.7206 Epoch 54/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4436 - acc: 0.8523 - val_loss: 0.8607 - val_acc: 0.6961 Epoch 55/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4589 - acc: 0.8335 - val_loss: 0.8462 - val_acc: 0.7059 Epoch 56/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4757 - acc: 0.8360 - val_loss: 0.8415 - val_acc: 0.7010 Epoch 57/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4270 - acc: 0.8593 - val_loss: 0.8094 - val_acc: 0.7255 Epoch 58/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4530 - acc: 0.8307 - val_loss: 0.8357 - val_acc: 0.7108 Epoch 59/300 5/5 
[==============================] - 0s 15ms/step - loss: 0.4370 - acc: 0.8453 - val_loss: 0.8804 - val_acc: 0.7108 Epoch 60/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4379 - acc: 0.8465 - val_loss: 0.8791 - val_acc: 0.7108 Epoch 61/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4254 - acc: 0.8615 - val_loss: 0.8355 - val_acc: 0.7059 Epoch 62/300 5/5 [==============================] - 0s 15ms/step - loss: 0.3929 - acc: 0.8696 - val_loss: 0.8355 - val_acc: 0.7304 Epoch 63/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4039 - acc: 0.8516 - val_loss: 0.8576 - val_acc: 0.7353 Epoch 64/300 5/5 [==============================] - 0s 35ms/step - loss: 0.4220 - acc: 0.8596 - val_loss: 0.8848 - val_acc: 0.7059 Epoch 65/300 5/5 [==============================] - 0s 17ms/step - loss: 0.4091 - acc: 0.8521 - val_loss: 0.8560 - val_acc: 0.7108 Epoch 66/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4658 - acc: 0.8470 - val_loss: 0.8518 - val_acc: 0.7206 Epoch 67/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4269 - acc: 0.8437 - val_loss: 0.7878 - val_acc: 0.7255 Epoch 68/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4368 - acc: 0.8438 - val_loss: 0.7859 - val_acc: 0.7255 Epoch 69/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4113 - acc: 0.8452 - val_loss: 0.8056 - val_acc: 0.7402 Epoch 70/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4304 - acc: 0.8469 - val_loss: 0.8093 - val_acc: 0.7451 Epoch 71/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4159 - acc: 0.8585 - val_loss: 0.8090 - val_acc: 0.7451 Epoch 72/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4218 - acc: 0.8610 - val_loss: 0.8028 - val_acc: 0.7402 Epoch 73/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3632 - acc: 0.8714 - val_loss: 0.8153 - val_acc: 0.7304 Epoch 74/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3745 - acc: 0.8722 - val_loss: 0.8299 - val_acc: 0.7402 Epoch 75/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3997 - acc: 0.8680 - val_loss: 0.8445 - val_acc: 0.7255 Epoch 76/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4143 - acc: 0.8620 - val_loss: 0.8344 - val_acc: 0.7206 Epoch 77/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4006 - acc: 0.8616 - val_loss: 0.8358 - val_acc: 0.7255 Epoch 78/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4266 - acc: 0.8532 - val_loss: 0.8266 - val_acc: 0.7206 Epoch 79/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4337 - acc: 0.8523 - val_loss: 0.8181 - val_acc: 0.7206 Epoch 80/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3857 - acc: 0.8624 - val_loss: 0.8143 - val_acc: 0.7206 Epoch 81/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4146 - acc: 0.8567 - val_loss: 0.8192 - val_acc: 0.7108 Epoch 82/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3638 - acc: 0.8794 - val_loss: 0.8248 - val_acc: 0.7206 Epoch 83/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4126 - acc: 0.8678 - val_loss: 0.8565 - val_acc: 0.7255 Epoch 84/300 5/5 [==============================] - 0s 36ms/step - loss: 0.3941 - acc: 0.8530 - val_loss: 0.8624 - val_acc: 0.7206 Epoch 85/300 5/5 [==============================] - 0s 17ms/step - loss: 0.3843 - acc: 0.8786 - val_loss: 0.8389 - val_acc: 0.7255 Epoch 86/300 5/5 
[==============================] - 0s 15ms/step - loss: 0.3651 - acc: 0.8747 - val_loss: 0.8314 - val_acc: 0.7206 Epoch 87/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3911 - acc: 0.8657 - val_loss: 0.8736 - val_acc: 0.7255 Epoch 88/300 5/5 [==============================] - 0s 15ms/step - loss: 0.3706 - acc: 0.8714 - val_loss: 0.9159 - val_acc: 0.7108 Epoch 89/300 5/5 [==============================] - 0s 15ms/step - loss: 0.4403 - acc: 0.8386 - val_loss: 0.9038 - val_acc: 0.7206 Epoch 90/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3865 - acc: 0.8668 - val_loss: 0.8733 - val_acc: 0.7206 Epoch 91/300 5/5 [==============================] - 0s 15ms/step - loss: 0.3757 - acc: 0.8643 - val_loss: 0.8704 - val_acc: 0.7157 Epoch 92/300 5/5 [==============================] - 0s 15ms/step - loss: 0.3828 - acc: 0.8669 - val_loss: 0.8786 - val_acc: 0.7157 Epoch 93/300 5/5 [==============================] - 0s 15ms/step - loss: 0.3651 - acc: 0.8787 - val_loss: 0.8977 - val_acc: 0.7206 Epoch 94/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3913 - acc: 0.8614 - val_loss: 0.9415 - val_acc: 0.7206 Epoch 95/300 5/5 [==============================] - 0s 15ms/step - loss: 0.3995 - acc: 0.8590 - val_loss: 0.9495 - val_acc: 0.7157 Epoch 96/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4228 - acc: 0.8508 - val_loss: 0.9490 - val_acc: 0.7059 Epoch 97/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3853 - acc: 0.8789 - val_loss: 0.9402 - val_acc: 0.7157 Epoch 98/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3711 - acc: 0.8812 - val_loss: 0.9283 - val_acc: 0.7206 Epoch 99/300 5/5 [==============================] - 0s 15ms/step - loss: 0.3949 - acc: 0.8578 - val_loss: 0.9591 - val_acc: 0.7108 Epoch 100/300 5/5 [==============================] - 0s 15ms/step - loss: 0.3563 - acc: 0.8780 - val_loss: 0.9744 - val_acc: 0.7206 Epoch 101/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3579 - acc: 0.8815 - val_loss: 0.9358 - val_acc: 0.7206 Epoch 102/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4069 - acc: 0.8698 - val_loss: 0.9245 - val_acc: 0.7157 Epoch 103/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3161 - acc: 0.8955 - val_loss: 0.9401 - val_acc: 0.7157 Epoch 104/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3346 - acc: 0.8910 - val_loss: 0.9517 - val_acc: 0.7157 Epoch 105/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4204 - acc: 0.8538 - val_loss: 0.9366 - val_acc: 0.7157 Epoch 106/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3492 - acc: 0.8821 - val_loss: 0.9424 - val_acc: 0.7353 Epoch 107/300 5/5 [==============================] - 0s 16ms/step - loss: 0.4002 - acc: 0.8604 - val_loss: 0.9842 - val_acc: 0.7157 Epoch 108/300 5/5 [==============================] - 0s 35ms/step - loss: 0.3701 - acc: 0.8736 - val_loss: 0.9999 - val_acc: 0.7010 Epoch 109/300 5/5 [==============================] - 0s 17ms/step - loss: 0.3391 - acc: 0.8866 - val_loss: 0.9768 - val_acc: 0.6961 Epoch 110/300 5/5 [==============================] - 0s 15ms/step - loss: 0.3857 - acc: 0.8739 - val_loss: 0.9953 - val_acc: 0.7255 Epoch 111/300 5/5 [==============================] - 0s 16ms/step - loss: 0.3822 - acc: 0.8731 - val_loss: 0.9817 - val_acc: 0.7255 Epoch 112/300 5/5 [==============================] - 0s 23ms/step - loss: 0.3211 - acc: 0.8887 - val_loss: 0.9781 - val_acc: 0.7108 Epoch 113/300 5/5 
[==============================] - 0s 20ms/step - loss: 0.3473 - acc: 0.8715 - val_loss: 0.9927 - val_acc: 0.6912 Epoch 114/300 5/5 [==============================] - 0s 20ms/step - loss: 0.4026 - acc: 0.8621 - val_loss: 1.0002 - val_acc: 0.6863 Epoch 115/300 5/5 [==============================] - 0s 20ms/step - loss: 0.3413 - acc: 0.8837 - val_loss: 1.0031 - val_acc: 0.6912 Epoch 116/300 5/5 [==============================] - 0s 20ms/step - loss: 0.3653 - acc: 0.8765 - val_loss: 1.0065 - val_acc: 0.7010 Epoch 117/300 5/5 [==============================] - 0s 21ms/step - loss: 0.3147 - acc: 0.8974 - val_loss: 1.0206 - val_acc: 0.7059 Epoch 118/300 5/5 [==============================] - 0s 21ms/step - loss: 0.3639 - acc: 0.8783 - val_loss: 1.0206 - val_acc: 0.7010 Epoch 119/300 5/5 [==============================] - 0s 19ms/step - loss: 0.3660 - acc: 0.8696 - val_loss: 1.0260 - val_acc: 0.6912 Epoch 120/300 5/5 [==============================] - 0s 18ms/step - loss: 0.3624 - acc: 0.8708 - val_loss: 1.0619 - val_acc: 0.6814 Let's plot the learning curves. display_learning_curves(history) png Now we evaluate the baseline model on the test data split. _, test_accuracy = baseline_model.evaluate(x=x_test, y=y_test, verbose=0) print(f\"Test accuracy: {round(test_accuracy * 100, 2)}%\") Test accuracy: 73.52% Examine the baseline model predictions Let's create new data instances by randomly generating binary word vectors with respect to the word presence probabilities. def generate_random_instances(num_instances): token_probability = x_train.mean(axis=0) instances = [] for _ in range(num_instances): probabilities = np.random.uniform(size=len(token_probability)) instance = (probabilities <= token_probability).astype(int) instances.append(instance) return np.array(instances) def display_class_probabilities(probabilities): for instance_idx, probs in enumerate(probabilities): print(f\"Instance {instance_idx + 1}:\") for class_idx, prob in enumerate(probs): print(f\"- {class_values[class_idx]}: {round(prob * 100, 2)}%\") Now we show the baseline model predictions given these randomly generated instances. 
new_instances = generate_random_instances(num_classes) logits = baseline_model.predict(new_instances) probabilities = keras.activations.softmax(tf.convert_to_tensor(logits)).numpy() display_class_probabilities(probabilities) Instance 1: - Case_Based: 13.02% - Genetic_Algorithms: 6.89% - Neural_Networks: 23.32% - Probabilistic_Methods: 47.89% - Reinforcement_Learning: 2.66% - Rule_Learning: 1.18% - Theory: 5.03% Instance 2: - Case_Based: 1.64% - Genetic_Algorithms: 59.74% - Neural_Networks: 27.13% - Probabilistic_Methods: 9.02% - Reinforcement_Learning: 1.05% - Rule_Learning: 0.12% - Theory: 1.31% Instance 3: - Case_Based: 1.35% - Genetic_Algorithms: 77.41% - Neural_Networks: 9.56% - Probabilistic_Methods: 7.89% - Reinforcement_Learning: 0.42% - Rule_Learning: 0.46% - Theory: 2.92% Instance 4: - Case_Based: 0.43% - Genetic_Algorithms: 3.87% - Neural_Networks: 92.88% - Probabilistic_Methods: 0.97% - Reinforcement_Learning: 0.56% - Rule_Learning: 0.09% - Theory: 1.2% Instance 5: - Case_Based: 0.11% - Genetic_Algorithms: 0.17% - Neural_Networks: 10.26% - Probabilistic_Methods: 0.5% - Reinforcement_Learning: 0.35% - Rule_Learning: 0.63% - Theory: 87.97% Instance 6: - Case_Based: 0.98% - Genetic_Algorithms: 23.37% - Neural_Networks: 70.76% - Probabilistic_Methods: 1.12% - Reinforcement_Learning: 2.23% - Rule_Learning: 0.21% - Theory: 1.33% Instance 7: - Case_Based: 0.64% - Genetic_Algorithms: 2.42% - Neural_Networks: 27.19% - Probabilistic_Methods: 14.07% - Reinforcement_Learning: 1.62% - Rule_Learning: 9.35% - Theory: 44.7% Build a Graph Neural Network Model Prepare the data for the graph model Preparing and loading the graph data into the model for training is the most challenging part in GNN models, and it is addressed in different ways by the specialized libraries. In this example, we show a simple approach for preparing and using graph data that is suitable if your dataset consists of a single graph that fits entirely in memory. The graph data is represented by the graph_info tuple, which consists of the following three elements: node_features: This is a [num_nodes, num_features] NumPy array that includes the node features. In this dataset, the nodes are the papers, and the node_features are the word-presence binary vectors of each paper. edges: This is a [2, num_edges] NumPy array representing a sparse adjacency matrix of the links between the nodes. In this example, the links are the citations between the papers. edge_weights (optional): This is a [num_edges] NumPy array that includes the edge weights, which quantify the relationships between nodes in the graph. In this example, there are no weights for the paper citations. # Create an edges array (sparse adjacency matrix) of shape [2, num_edges]. edges = citations[[\"source\", \"target\"]].to_numpy().T # Create an edge weights array of ones. edge_weights = tf.ones(shape=edges.shape[1]) # Create a node features array of shape [num_nodes, num_features]. node_features = tf.cast( papers.sort_values(\"paper_id\")[feature_names].to_numpy(), dtype=tf.dtypes.float32 ) # Create graph info tuple with node_features, edges, and edge_weights. graph_info = (node_features, edges, edge_weights) print(\"Edges shape:\", edges.shape) print(\"Nodes shape:\", node_features.shape) Edges shape: (2, 5429) Nodes shape: (2708, 1433) Implement a graph convolution layer We implement a graph convolution module as a Keras Layer.
Our GraphConvLayer performs the following steps: Prepare: The input node representations are processed using an FFN to produce a message. You can simplify the processing by only applying linear transformation to the representations. Aggregate: The messages of the neighbours of each node are aggregated with respect to the edge_weights using a permutation invariant pooling operation, such as sum, mean, and max, to prepare a single aggregated message for each node. See, for example, the tf.math.unsorted_segment_sum API used to aggregate neighbour messages. Update: The node_repesentations and aggregated_messages (both of shape [num_nodes, representation_dim]) are combined and processed to produce the new state of the node representations (node embeddings). If combination_type is gru, the node_repesentations and aggregated_messages are stacked to create a sequence, then processed by a GRU layer. Otherwise, the node_repesentations and aggregated_messages are added or concatenated, then processed using an FFN. The technique implemented uses ideas from Graph Convolutional Networks, GraphSage, Graph Isomorphism Network, Simple Graph Networks, and Gated Graph Sequence Neural Networks. Two other key techniques that are not covered are Graph Attention Networks and Message Passing Neural Networks. class GraphConvLayer(layers.Layer): def __init__( self, hidden_units, dropout_rate=0.2, aggregation_type=\"mean\", combination_type=\"concat\", normalize=False, *args, **kwargs, ): super(GraphConvLayer, self).__init__(*args, **kwargs) self.aggregation_type = aggregation_type self.combination_type = combination_type self.normalize = normalize self.ffn_prepare = create_ffn(hidden_units, dropout_rate) if self.combination_type == \"gated\": self.update_fn = layers.GRU( units=hidden_units, activation=\"tanh\", recurrent_activation=\"sigmoid\", dropout=dropout_rate, return_state=True, recurrent_dropout=dropout_rate, ) else: self.update_fn = create_ffn(hidden_units, dropout_rate) def prepare(self, node_repesentations, weights=None): # node_repesentations shape is [num_edges, embedding_dim]. messages = self.ffn_prepare(node_repesentations) if weights is not None: messages = messages * tf.expand_dims(weights, -1) return messages def aggregate(self, node_indices, neighbour_messages): # node_indices shape is [num_edges]. # neighbour_messages shape: [num_edges, representation_dim]. num_nodes = tf.math.reduce_max(node_indices) + 1 if self.aggregation_type == \"sum\": aggregated_message = tf.math.unsorted_segment_sum( neighbour_messages, node_indices, num_segments=num_nodes ) elif self.aggregation_type == \"mean\": aggregated_message = tf.math.unsorted_segment_mean( neighbour_messages, node_indices, num_segments=num_nodes ) elif self.aggregation_type == \"max\": aggregated_message = tf.math.unsorted_segment_max( neighbour_messages, node_indices, num_segments=num_nodes ) else: raise ValueError(f\"Invalid aggregation type: {self.aggregation_type}.\") return aggregated_message def update(self, node_repesentations, aggregated_messages): # node_repesentations shape is [num_nodes, representation_dim]. # aggregated_messages shape is [num_nodes, representation_dim]. if self.combination_type == \"gru\": # Create a sequence of two elements for the GRU layer. h = tf.stack([node_repesentations, aggregated_messages], axis=1) elif self.combination_type == \"concat\": # Concatenate the node_repesentations and aggregated_messages.
h = tf.concat([node_repesentations, aggregated_messages], axis=1) elif self.combination_type == \"add\": # Add node_repesentations and aggregated_messages. h = node_repesentations + aggregated_messages else: raise ValueError(f\"Invalid combination type: {self.combination_type}.\") # Apply the processing function. node_embeddings = self.update_fn(h) if self.combination_type == \"gru\": node_embeddings = tf.unstack(node_embeddings, axis=1)[-1] if self.normalize: node_embeddings = tf.nn.l2_normalize(node_embeddings, axis=-1) return node_embeddings def call(self, inputs): \"\"\"Process the inputs to produce the node_embeddings. inputs: a tuple of three elements: node_repesentations, edges, edge_weights. Returns: node_embeddings of shape [num_nodes, representation_dim]. \"\"\" node_repesentations, edges, edge_weights = inputs # Get node_indices (source) and neighbour_indices (target) from edges. node_indices, neighbour_indices = edges[0], edges[1] # neighbour_repesentations shape is [num_edges, representation_dim]. neighbour_repesentations = tf.gather(node_repesentations, neighbour_indices) # Prepare the messages of the neighbours. neighbour_messages = self.prepare(neighbour_repesentations, edge_weights) # Aggregate the neighbour messages. aggregated_messages = self.aggregate(node_indices, neighbour_messages) # Update the node embedding with the neighbour messages. return self.update(node_repesentations, aggregated_messages) Implement a graph neural network node classifier The GNN classification model follows the Design Space for Graph Neural Networks approach, as follows: Apply preprocessing using an FFN to the node features to generate initial node representations. Apply one or more graph convolutional layers, with skip connections, to the node representations to produce node embeddings. Apply post-processing using an FFN to the node embeddings to generate the final node embeddings. Feed the node embeddings into a Softmax layer to predict the node class. Each graph convolutional layer added captures information from a further level of neighbours. However, adding many graph convolutional layers can cause oversmoothing, where the model produces similar embeddings for all the nodes. Note that graph_info is passed to the constructor of the Keras model and used as a property of the Keras model object, rather than as input data for training or prediction. The model will accept a batch of node_indices, which are used to look up the node features and neighbours from the graph_info. class GNNNodeClassifier(tf.keras.Model): def __init__( self, graph_info, num_classes, hidden_units, aggregation_type=\"sum\", combination_type=\"concat\", dropout_rate=0.2, normalize=True, *args, **kwargs, ): super(GNNNodeClassifier, self).__init__(*args, **kwargs) # Unpack graph_info to three elements: node_features, edges, and edge_weight. node_features, edges, edge_weights = graph_info self.node_features = node_features self.edges = edges self.edge_weights = edge_weights # Set edge_weights to ones if not provided. if self.edge_weights is None: self.edge_weights = tf.ones(shape=edges.shape[1]) # Scale edge_weights to sum to 1. self.edge_weights = self.edge_weights / tf.math.reduce_sum(self.edge_weights) # Create a process layer. self.preprocess = create_ffn(hidden_units, dropout_rate, name=\"preprocess\") # Create the first GraphConv layer. self.conv1 = GraphConvLayer( hidden_units, dropout_rate, aggregation_type, combination_type, normalize, name=\"graph_conv1\", ) # Create the second GraphConv layer.
self.conv2 = GraphConvLayer( hidden_units, dropout_rate, aggregation_type, combination_type, normalize, name=\"graph_conv2\", ) # Create a postprocess layer. self.postprocess = create_ffn(hidden_units, dropout_rate, name=\"postprocess\") # Create a compute logits layer. self.compute_logits = layers.Dense(units=num_classes, name=\"logits\") def call(self, input_node_indices): # Preprocess the node_features to produce node representations. x = self.preprocess(self.node_features) # Apply the first graph conv layer. x1 = self.conv1((x, self.edges, self.edge_weights)) # Skip connection. x = x1 + x # Apply the second graph conv layer. x2 = self.conv2((x, self.edges, self.edge_weights)) # Skip connection. x = x2 + x # Postprocess node embedding. x = self.postprocess(x) # Fetch node embeddings for the input node_indices. node_embeddings = tf.gather(x, input_node_indices) # Compute logits return self.compute_logits(node_embeddings) Let's test instantiating and calling the GNN model. Notice that if you provide N node indices, the output will be a tensor of shape [N, num_classes], regardless of the size of the graph. gnn_model = GNNNodeClassifier( graph_info=graph_info, num_classes=num_classes, hidden_units=hidden_units, dropout_rate=dropout_rate, name=\"gnn_model\", ) print(\"GNN output shape:\", gnn_model([1, 10, 100])) gnn_model.summary() GNN output shape: tf.Tensor( [[ 0.00620723 0.06162593 0.0176599 0.00830251 -0.03019211 -0.00402163 0.00277454] [ 0.01705155 -0.0467547 0.01400987 -0.02146192 -0.11757397 0.10820404 -0.0375765 ] [-0.02516522 -0.05514468 -0.03842098 -0.0495692 -0.05128997 -0.02241635 -0.07738923]], shape=(3, 7), dtype=float32) Model: \"gnn_model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= preprocess (Sequential) (2708, 32) 52804 _________________________________________________________________ graph_conv1 (GraphConvLayer) multiple 5888 _________________________________________________________________ graph_conv2 (GraphConvLayer) multiple 5888 _________________________________________________________________ postprocess (Sequential) (2708, 32) 2368 _________________________________________________________________ logits (Dense) multiple 231 ================================================================= Total params: 67,179 Trainable params: 63,481 Non-trainable params: 3,698 _________________________________________________________________ Train the GNN model Note that we use the standard supervised cross-entropy loss to train the model. However, we can add another self-supervised loss term for the generated node embeddings that makes sure that neighbouring nodes in graph have similar representations, while faraway nodes have dissimilar representations. 
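Such a self-supervised term is not implemented in this example, but a minimal sketch of what it could look like is shown below. It assumes embeddings is the [num_nodes, representation_dim] tensor of node embeddings produced by the model and edges is the [2, num_edges] array used above; the helper name and the uniform negative sampling are illustrative choices, not part of the original code.

import tensorflow as tf

def neighbour_embedding_loss(embeddings, edges):
    # GraphSAGE-style unsupervised objective (sketch only): push connected
    # nodes towards similar embeddings and randomly paired nodes apart.
    node_indices = tf.cast(edges[0], tf.int32)
    neighbour_indices = tf.cast(edges[1], tf.int32)
    src = tf.gather(embeddings, node_indices)
    pos = tf.gather(embeddings, neighbour_indices)
    # Positive pairs: nodes connected by a citation edge.
    pos_logits = tf.reduce_sum(src * pos, axis=-1)
    # Negative pairs: the same source nodes paired with randomly drawn nodes.
    num_nodes = tf.shape(embeddings)[0]
    random_indices = tf.random.uniform(
        shape=tf.shape(node_indices), maxval=num_nodes, dtype=tf.int32
    )
    neg = tf.gather(embeddings, random_indices)
    neg_logits = tf.reduce_sum(src * neg, axis=-1)
    # Binary cross-entropy: positives towards 1, negatives towards 0.
    logits = tf.concat([pos_logits, neg_logits], axis=0)
    labels = tf.concat([tf.ones_like(pos_logits), tf.zeros_like(neg_logits)], axis=0)
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
    )

Here we keep the plain supervised objective and train with the cross-entropy loss only.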
x_train = train_data.paper_id.to_numpy() history = run_experiment(gnn_model, x_train, y_train) Epoch 1/300 5/5 [==============================] - 4s 188ms/step - loss: 2.2529 - acc: 0.1793 - val_loss: 1.8933 - val_acc: 0.2941 Epoch 2/300 5/5 [==============================] - 0s 83ms/step - loss: 1.9866 - acc: 0.2601 - val_loss: 1.8753 - val_acc: 0.3186 Epoch 3/300 5/5 [==============================] - 0s 77ms/step - loss: 1.8794 - acc: 0.2846 - val_loss: 1.8655 - val_acc: 0.3186 Epoch 4/300 5/5 [==============================] - 0s 74ms/step - loss: 1.8432 - acc: 0.3078 - val_loss: 1.8529 - val_acc: 0.3186 Epoch 5/300 5/5 [==============================] - 0s 69ms/step - loss: 1.8314 - acc: 0.3134 - val_loss: 1.8429 - val_acc: 0.3186 Epoch 6/300 5/5 [==============================] - 0s 68ms/step - loss: 1.8157 - acc: 0.3208 - val_loss: 1.8326 - val_acc: 0.3186 Epoch 7/300 5/5 [==============================] - 0s 94ms/step - loss: 1.8112 - acc: 0.3071 - val_loss: 1.8265 - val_acc: 0.3186 Epoch 8/300 5/5 [==============================] - 0s 67ms/step - loss: 1.8028 - acc: 0.3132 - val_loss: 1.8171 - val_acc: 0.3186 Epoch 9/300 5/5 [==============================] - 0s 68ms/step - loss: 1.8007 - acc: 0.3206 - val_loss: 1.7961 - val_acc: 0.3186 Epoch 10/300 5/5 [==============================] - 0s 68ms/step - loss: 1.7571 - acc: 0.3259 - val_loss: 1.7623 - val_acc: 0.3186 Epoch 11/300 5/5 [==============================] - 0s 68ms/step - loss: 1.7373 - acc: 0.3279 - val_loss: 1.7131 - val_acc: 0.3186 Epoch 12/300 5/5 [==============================] - 0s 76ms/step - loss: 1.7130 - acc: 0.3169 - val_loss: 1.6552 - val_acc: 0.3186 Epoch 13/300 5/5 [==============================] - 0s 70ms/step - loss: 1.6989 - acc: 0.3315 - val_loss: 1.6075 - val_acc: 0.3284 Epoch 14/300 5/5 [==============================] - 0s 79ms/step - loss: 1.6733 - acc: 0.3522 - val_loss: 1.6027 - val_acc: 0.3333 Epoch 15/300 5/5 [==============================] - 0s 75ms/step - loss: 1.6060 - acc: 0.3641 - val_loss: 1.6422 - val_acc: 0.3480 Epoch 16/300 5/5 [==============================] - 0s 68ms/step - loss: 1.5783 - acc: 0.3924 - val_loss: 1.6893 - val_acc: 0.3676 Epoch 17/300 5/5 [==============================] - 0s 70ms/step - loss: 1.5269 - acc: 0.4315 - val_loss: 1.7534 - val_acc: 0.3725 Epoch 18/300 5/5 [==============================] - 0s 77ms/step - loss: 1.4558 - acc: 0.4633 - val_loss: 1.7224 - val_acc: 0.4167 Epoch 19/300 5/5 [==============================] - 0s 75ms/step - loss: 1.4131 - acc: 0.4765 - val_loss: 1.6482 - val_acc: 0.4510 Epoch 20/300 5/5 [==============================] - 0s 70ms/step - loss: 1.3880 - acc: 0.4859 - val_loss: 1.4956 - val_acc: 0.4706 Epoch 21/300 5/5 [==============================] - 0s 73ms/step - loss: 1.3223 - acc: 0.5166 - val_loss: 1.5299 - val_acc: 0.4853 Epoch 22/300 5/5 [==============================] - 0s 75ms/step - loss: 1.3226 - acc: 0.5172 - val_loss: 1.6304 - val_acc: 0.4902 Epoch 23/300 5/5 [==============================] - 0s 75ms/step - loss: 1.2888 - acc: 0.5267 - val_loss: 1.6679 - val_acc: 0.5000 Epoch 24/300 5/5 [==============================] - 0s 69ms/step - loss: 1.2478 - acc: 0.5279 - val_loss: 1.6552 - val_acc: 0.4853 Epoch 25/300 5/5 [==============================] - 0s 70ms/step - loss: 1.1978 - acc: 0.5720 - val_loss: 1.6705 - val_acc: 0.4902 Epoch 26/300 5/5 [==============================] - 0s 70ms/step - loss: 1.1814 - acc: 0.5596 - val_loss: 1.6327 - val_acc: 0.5343 Epoch 27/300 5/5 [==============================] - 0s 
68ms/step - loss: 1.1085 - acc: 0.5979 - val_loss: 1.5184 - val_acc: 0.5245 Epoch 28/300 5/5 [==============================] - 0s 69ms/step - loss: 1.0695 - acc: 0.6078 - val_loss: 1.5212 - val_acc: 0.4853 Epoch 29/300 5/5 [==============================] - 0s 70ms/step - loss: 1.1063 - acc: 0.6002 - val_loss: 1.5988 - val_acc: 0.4706 Epoch 30/300 5/5 [==============================] - 0s 68ms/step - loss: 1.0194 - acc: 0.6326 - val_loss: 1.5636 - val_acc: 0.4951 Epoch 31/300 5/5 [==============================] - 0s 70ms/step - loss: 1.0320 - acc: 0.6268 - val_loss: 1.5191 - val_acc: 0.5196 Epoch 32/300 5/5 [==============================] - 0s 82ms/step - loss: 0.9749 - acc: 0.6433 - val_loss: 1.5922 - val_acc: 0.5098 Epoch 33/300 5/5 [==============================] - 0s 85ms/step - loss: 0.9095 - acc: 0.6717 - val_loss: 1.5879 - val_acc: 0.5000 Epoch 34/300 5/5 [==============================] - 0s 78ms/step - loss: 0.9324 - acc: 0.6903 - val_loss: 1.5717 - val_acc: 0.4951 Epoch 35/300 5/5 [==============================] - 0s 80ms/step - loss: 0.8908 - acc: 0.6953 - val_loss: 1.5010 - val_acc: 0.5098 Epoch 36/300 5/5 [==============================] - 0s 99ms/step - loss: 0.8858 - acc: 0.6977 - val_loss: 1.5939 - val_acc: 0.5147 Epoch 37/300 5/5 [==============================] - 0s 79ms/step - loss: 0.8376 - acc: 0.6991 - val_loss: 1.4000 - val_acc: 0.5833 Epoch 38/300 5/5 [==============================] - 0s 75ms/step - loss: 0.8657 - acc: 0.7080 - val_loss: 1.3288 - val_acc: 0.5931 Epoch 39/300 5/5 [==============================] - 0s 86ms/step - loss: 0.9160 - acc: 0.6819 - val_loss: 1.1358 - val_acc: 0.6275 Epoch 40/300 5/5 [==============================] - 0s 80ms/step - loss: 0.8676 - acc: 0.7109 - val_loss: 1.0618 - val_acc: 0.6765 Epoch 41/300 5/5 [==============================] - 0s 72ms/step - loss: 0.8065 - acc: 0.7246 - val_loss: 1.0785 - val_acc: 0.6765 Epoch 42/300 5/5 [==============================] - 0s 76ms/step - loss: 0.8478 - acc: 0.7145 - val_loss: 1.0502 - val_acc: 0.6569 Epoch 43/300 5/5 [==============================] - 0s 78ms/step - loss: 0.8125 - acc: 0.7068 - val_loss: 0.9888 - val_acc: 0.6520 Epoch 44/300 5/5 [==============================] - 0s 68ms/step - loss: 0.7791 - acc: 0.7425 - val_loss: 0.9820 - val_acc: 0.6618 Epoch 45/300 5/5 [==============================] - 0s 69ms/step - loss: 0.7492 - acc: 0.7368 - val_loss: 0.9297 - val_acc: 0.6961 Epoch 46/300 5/5 [==============================] - 0s 71ms/step - loss: 0.7521 - acc: 0.7668 - val_loss: 0.9757 - val_acc: 0.6961 Epoch 47/300 5/5 [==============================] - 0s 71ms/step - loss: 0.7090 - acc: 0.7587 - val_loss: 0.9676 - val_acc: 0.7059 Epoch 48/300 5/5 [==============================] - 0s 68ms/step - loss: 0.7008 - acc: 0.7430 - val_loss: 0.9457 - val_acc: 0.7010 Epoch 49/300 5/5 [==============================] - 0s 69ms/step - loss: 0.6919 - acc: 0.7584 - val_loss: 0.9998 - val_acc: 0.6569 Epoch 50/300 5/5 [==============================] - 0s 68ms/step - loss: 0.7583 - acc: 0.7628 - val_loss: 0.9707 - val_acc: 0.6667 Epoch 51/300 5/5 [==============================] - 0s 69ms/step - loss: 0.6575 - acc: 0.7697 - val_loss: 0.9260 - val_acc: 0.6814 Epoch 52/300 5/5 [==============================] - 0s 78ms/step - loss: 0.6751 - acc: 0.7774 - val_loss: 0.9173 - val_acc: 0.6765 Epoch 53/300 5/5 [==============================] - 0s 92ms/step - loss: 0.6964 - acc: 0.7561 - val_loss: 0.8985 - val_acc: 0.6961 Epoch 54/300 5/5 [==============================] - 0s 77ms/step - loss: 
0.6386 - acc: 0.7872 - val_loss: 0.9455 - val_acc: 0.6961 Epoch 55/300 5/5 [==============================] - 0s 77ms/step - loss: 0.6110 - acc: 0.8130 - val_loss: 0.9780 - val_acc: 0.6716 Epoch 56/300 5/5 [==============================] - 0s 76ms/step - loss: 0.6483 - acc: 0.7703 - val_loss: 0.9650 - val_acc: 0.6863 Epoch 57/300 5/5 [==============================] - 0s 78ms/step - loss: 0.6811 - acc: 0.7706 - val_loss: 0.9446 - val_acc: 0.6667 Epoch 58/300 5/5 [==============================] - 0s 76ms/step - loss: 0.6391 - acc: 0.7852 - val_loss: 0.9059 - val_acc: 0.7010 Epoch 59/300 5/5 [==============================] - 0s 76ms/step - loss: 0.6533 - acc: 0.7784 - val_loss: 0.8964 - val_acc: 0.7108 Epoch 60/300 5/5 [==============================] - 0s 101ms/step - loss: 0.6587 - acc: 0.7863 - val_loss: 0.8417 - val_acc: 0.7108 Epoch 61/300 5/5 [==============================] - 0s 84ms/step - loss: 0.5776 - acc: 0.8166 - val_loss: 0.8035 - val_acc: 0.7304 Epoch 62/300 5/5 [==============================] - 0s 80ms/step - loss: 0.6396 - acc: 0.7792 - val_loss: 0.8072 - val_acc: 0.7500 Epoch 63/300 5/5 [==============================] - 0s 67ms/step - loss: 0.6201 - acc: 0.7972 - val_loss: 0.7809 - val_acc: 0.7696 Epoch 64/300 5/5 [==============================] - 0s 68ms/step - loss: 0.6358 - acc: 0.7875 - val_loss: 0.7635 - val_acc: 0.7500 Epoch 65/300 5/5 [==============================] - 0s 70ms/step - loss: 0.5914 - acc: 0.8027 - val_loss: 0.8147 - val_acc: 0.7402 Epoch 66/300 5/5 [==============================] - 0s 69ms/step - loss: 0.5960 - acc: 0.7955 - val_loss: 0.9350 - val_acc: 0.7304 Epoch 67/300 5/5 [==============================] - 0s 68ms/step - loss: 0.5752 - acc: 0.8001 - val_loss: 0.9849 - val_acc: 0.7157 Epoch 68/300 5/5 [==============================] - 0s 68ms/step - loss: 0.5189 - acc: 0.8322 - val_loss: 1.0268 - val_acc: 0.7206 Epoch 69/300 5/5 [==============================] - 0s 68ms/step - loss: 0.5413 - acc: 0.8078 - val_loss: 0.9132 - val_acc: 0.7549 Epoch 70/300 5/5 [==============================] - 0s 75ms/step - loss: 0.5231 - acc: 0.8222 - val_loss: 0.8673 - val_acc: 0.7647 Epoch 71/300 5/5 [==============================] - 0s 68ms/step - loss: 0.5416 - acc: 0.8219 - val_loss: 0.8179 - val_acc: 0.7696 Epoch 72/300 5/5 [==============================] - 0s 68ms/step - loss: 0.5060 - acc: 0.8263 - val_loss: 0.7870 - val_acc: 0.7794 Epoch 73/300 5/5 [==============================] - 0s 68ms/step - loss: 0.5502 - acc: 0.8221 - val_loss: 0.7749 - val_acc: 0.7549 Epoch 74/300 5/5 [==============================] - 0s 68ms/step - loss: 0.5111 - acc: 0.8434 - val_loss: 0.7830 - val_acc: 0.7549 Epoch 75/300 5/5 [==============================] - 0s 69ms/step - loss: 0.5119 - acc: 0.8386 - val_loss: 0.8140 - val_acc: 0.7451 Epoch 76/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4922 - acc: 0.8433 - val_loss: 0.8149 - val_acc: 0.7353 Epoch 77/300 5/5 [==============================] - 0s 71ms/step - loss: 0.5217 - acc: 0.8188 - val_loss: 0.7784 - val_acc: 0.7598 Epoch 78/300 5/5 [==============================] - 0s 68ms/step - loss: 0.5027 - acc: 0.8410 - val_loss: 0.7660 - val_acc: 0.7696 Epoch 79/300 5/5 [==============================] - 0s 67ms/step - loss: 0.5307 - acc: 0.8265 - val_loss: 0.7217 - val_acc: 0.7696 Epoch 80/300 5/5 [==============================] - 0s 68ms/step - loss: 0.5164 - acc: 0.8239 - val_loss: 0.6974 - val_acc: 0.7647 Epoch 81/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4404 - acc: 
0.8526 - val_loss: 0.6891 - val_acc: 0.7745 Epoch 82/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4565 - acc: 0.8449 - val_loss: 0.6839 - val_acc: 0.7696 Epoch 83/300 5/5 [==============================] - 0s 67ms/step - loss: 0.4759 - acc: 0.8491 - val_loss: 0.7162 - val_acc: 0.7745 Epoch 84/300 5/5 [==============================] - 0s 70ms/step - loss: 0.5154 - acc: 0.8476 - val_loss: 0.7889 - val_acc: 0.7598 Epoch 85/300 5/5 [==============================] - 0s 68ms/step - loss: 0.4847 - acc: 0.8480 - val_loss: 0.7579 - val_acc: 0.7794 Epoch 86/300 5/5 [==============================] - 0s 68ms/step - loss: 0.4519 - acc: 0.8592 - val_loss: 0.7056 - val_acc: 0.7941 Epoch 87/300 5/5 [==============================] - 0s 67ms/step - loss: 0.5038 - acc: 0.8472 - val_loss: 0.6725 - val_acc: 0.7794 Epoch 88/300 5/5 [==============================] - 0s 92ms/step - loss: 0.4729 - acc: 0.8454 - val_loss: 0.7057 - val_acc: 0.7745 Epoch 89/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4811 - acc: 0.8562 - val_loss: 0.6784 - val_acc: 0.7990 Epoch 90/300 5/5 [==============================] - 0s 70ms/step - loss: 0.4102 - acc: 0.8779 - val_loss: 0.6383 - val_acc: 0.8039 Epoch 91/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4493 - acc: 0.8703 - val_loss: 0.6574 - val_acc: 0.7941 Epoch 92/300 5/5 [==============================] - 0s 68ms/step - loss: 0.4560 - acc: 0.8610 - val_loss: 0.6764 - val_acc: 0.7941 Epoch 93/300 5/5 [==============================] - 0s 68ms/step - loss: 0.4465 - acc: 0.8626 - val_loss: 0.6628 - val_acc: 0.7892 Epoch 94/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4773 - acc: 0.8446 - val_loss: 0.6573 - val_acc: 0.7941 Epoch 95/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4313 - acc: 0.8734 - val_loss: 0.6875 - val_acc: 0.7941 Epoch 96/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4668 - acc: 0.8598 - val_loss: 0.6712 - val_acc: 0.8039 Epoch 97/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4329 - acc: 0.8696 - val_loss: 0.6274 - val_acc: 0.8088 Epoch 98/300 5/5 [==============================] - 0s 71ms/step - loss: 0.4223 - acc: 0.8542 - val_loss: 0.6259 - val_acc: 0.7990 Epoch 99/300 5/5 [==============================] - 0s 68ms/step - loss: 0.4677 - acc: 0.8488 - val_loss: 0.6431 - val_acc: 0.8186 Epoch 100/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3933 - acc: 0.8753 - val_loss: 0.6559 - val_acc: 0.8186 Epoch 101/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3945 - acc: 0.8777 - val_loss: 0.6461 - val_acc: 0.8186 Epoch 102/300 5/5 [==============================] - 0s 70ms/step - loss: 0.4671 - acc: 0.8324 - val_loss: 0.6607 - val_acc: 0.7990 Epoch 103/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3890 - acc: 0.8762 - val_loss: 0.6792 - val_acc: 0.7941 Epoch 104/300 5/5 [==============================] - 0s 67ms/step - loss: 0.4336 - acc: 0.8646 - val_loss: 0.6854 - val_acc: 0.7990 Epoch 105/300 5/5 [==============================] - 0s 68ms/step - loss: 0.4304 - acc: 0.8651 - val_loss: 0.6949 - val_acc: 0.8039 Epoch 106/300 5/5 [==============================] - 0s 68ms/step - loss: 0.4043 - acc: 0.8723 - val_loss: 0.6941 - val_acc: 0.7892 Epoch 107/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4043 - acc: 0.8713 - val_loss: 0.6798 - val_acc: 0.8088 Epoch 108/300 5/5 [==============================] - 0s 70ms/step - loss: 0.4647 - acc: 0.8599 - 
val_loss: 0.6726 - val_acc: 0.8039 Epoch 109/300 5/5 [==============================] - 0s 73ms/step - loss: 0.3916 - acc: 0.8820 - val_loss: 0.6680 - val_acc: 0.8137 Epoch 110/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3990 - acc: 0.8875 - val_loss: 0.6580 - val_acc: 0.8137 Epoch 111/300 5/5 [==============================] - 0s 95ms/step - loss: 0.4240 - acc: 0.8786 - val_loss: 0.6487 - val_acc: 0.8137 Epoch 112/300 5/5 [==============================] - 0s 67ms/step - loss: 0.4050 - acc: 0.8633 - val_loss: 0.6471 - val_acc: 0.8186 Epoch 113/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4120 - acc: 0.8522 - val_loss: 0.6375 - val_acc: 0.8137 Epoch 114/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3802 - acc: 0.8793 - val_loss: 0.6454 - val_acc: 0.8137 Epoch 115/300 5/5 [==============================] - 0s 68ms/step - loss: 0.4073 - acc: 0.8730 - val_loss: 0.6504 - val_acc: 0.8088 Epoch 116/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3573 - acc: 0.8948 - val_loss: 0.6501 - val_acc: 0.7990 Epoch 117/300 5/5 [==============================] - 0s 68ms/step - loss: 0.4238 - acc: 0.8611 - val_loss: 0.7339 - val_acc: 0.7843 Epoch 118/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3565 - acc: 0.8832 - val_loss: 0.7533 - val_acc: 0.7941 Epoch 119/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3863 - acc: 0.8834 - val_loss: 0.7470 - val_acc: 0.8186 Epoch 120/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3935 - acc: 0.8768 - val_loss: 0.6778 - val_acc: 0.8333 Epoch 121/300 5/5 [==============================] - 0s 70ms/step - loss: 0.3745 - acc: 0.8862 - val_loss: 0.6741 - val_acc: 0.8137 Epoch 122/300 5/5 [==============================] - 0s 68ms/step - loss: 0.4152 - acc: 0.8647 - val_loss: 0.6594 - val_acc: 0.8235 Epoch 123/300 5/5 [==============================] - 0s 64ms/step - loss: 0.3987 - acc: 0.8813 - val_loss: 0.6478 - val_acc: 0.8235 Epoch 124/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4005 - acc: 0.8798 - val_loss: 0.6837 - val_acc: 0.8284 Epoch 125/300 5/5 [==============================] - 0s 68ms/step - loss: 0.4366 - acc: 0.8699 - val_loss: 0.6456 - val_acc: 0.8235 Epoch 126/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3544 - acc: 0.8852 - val_loss: 0.6967 - val_acc: 0.8088 Epoch 127/300 5/5 [==============================] - 0s 70ms/step - loss: 0.3835 - acc: 0.8676 - val_loss: 0.7279 - val_acc: 0.8088 Epoch 128/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3932 - acc: 0.8723 - val_loss: 0.7471 - val_acc: 0.8137 Epoch 129/300 5/5 [==============================] - 0s 66ms/step - loss: 0.3788 - acc: 0.8822 - val_loss: 0.7028 - val_acc: 0.8284 Epoch 130/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3546 - acc: 0.8876 - val_loss: 0.6424 - val_acc: 0.8382 Epoch 131/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4244 - acc: 0.8784 - val_loss: 0.6478 - val_acc: 0.8382 Epoch 132/300 5/5 [==============================] - 0s 66ms/step - loss: 0.4120 - acc: 0.8689 - val_loss: 0.6834 - val_acc: 0.8186 Epoch 133/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3585 - acc: 0.8872 - val_loss: 0.6802 - val_acc: 0.8186 Epoch 134/300 5/5 [==============================] - 0s 71ms/step - loss: 0.3782 - acc: 0.8788 - val_loss: 0.6936 - val_acc: 0.8235 Epoch 135/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3459 - acc: 
0.8776 - val_loss: 0.6776 - val_acc: 0.8431 Epoch 136/300 5/5 [==============================] - 0s 70ms/step - loss: 0.3176 - acc: 0.9108 - val_loss: 0.6881 - val_acc: 0.8382 Epoch 137/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3205 - acc: 0.9052 - val_loss: 0.6934 - val_acc: 0.8431 Epoch 138/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4079 - acc: 0.8782 - val_loss: 0.6830 - val_acc: 0.8431 Epoch 139/300 5/5 [==============================] - 0s 71ms/step - loss: 0.3465 - acc: 0.8973 - val_loss: 0.6876 - val_acc: 0.8431 Epoch 140/300 5/5 [==============================] - 0s 95ms/step - loss: 0.3935 - acc: 0.8766 - val_loss: 0.7166 - val_acc: 0.8382 Epoch 141/300 5/5 [==============================] - 0s 71ms/step - loss: 0.3905 - acc: 0.8868 - val_loss: 0.7320 - val_acc: 0.8284 Epoch 142/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3482 - acc: 0.8887 - val_loss: 0.7575 - val_acc: 0.8186 Epoch 143/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3567 - acc: 0.8820 - val_loss: 0.7537 - val_acc: 0.8235 Epoch 144/300 5/5 [==============================] - 0s 70ms/step - loss: 0.3427 - acc: 0.8753 - val_loss: 0.7225 - val_acc: 0.8284 Epoch 145/300 5/5 [==============================] - 0s 72ms/step - loss: 0.3894 - acc: 0.8750 - val_loss: 0.7228 - val_acc: 0.8333 Epoch 146/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3585 - acc: 0.8938 - val_loss: 0.6870 - val_acc: 0.8284 Epoch 147/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3450 - acc: 0.8830 - val_loss: 0.6666 - val_acc: 0.8284 Epoch 148/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3174 - acc: 0.8929 - val_loss: 0.6683 - val_acc: 0.8382 Epoch 149/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3357 - acc: 0.9041 - val_loss: 0.6676 - val_acc: 0.8480 Epoch 150/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3597 - acc: 0.8792 - val_loss: 0.6913 - val_acc: 0.8235 Epoch 151/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3043 - acc: 0.9093 - val_loss: 0.7146 - val_acc: 0.8039 Epoch 152/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3935 - acc: 0.8814 - val_loss: 0.6716 - val_acc: 0.8382 Epoch 153/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3200 - acc: 0.8898 - val_loss: 0.6832 - val_acc: 0.8578 Epoch 154/300 5/5 [==============================] - 0s 71ms/step - loss: 0.3738 - acc: 0.8809 - val_loss: 0.6622 - val_acc: 0.8529 Epoch 155/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3784 - acc: 0.8777 - val_loss: 0.6510 - val_acc: 0.8431 Epoch 156/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3565 - acc: 0.8962 - val_loss: 0.6600 - val_acc: 0.8333 Epoch 157/300 5/5 [==============================] - 0s 68ms/step - loss: 0.2935 - acc: 0.9137 - val_loss: 0.6732 - val_acc: 0.8333 Epoch 158/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3130 - acc: 0.9060 - val_loss: 0.7070 - val_acc: 0.8284 Epoch 159/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3386 - acc: 0.8937 - val_loss: 0.6865 - val_acc: 0.8480 Epoch 160/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3310 - acc: 0.9038 - val_loss: 0.7082 - val_acc: 0.8382 Epoch 161/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3232 - acc: 0.8993 - val_loss: 0.7184 - val_acc: 0.8431 Epoch 162/300 5/5 [==============================] - 0s 69ms/step - loss: 
0.3062 - acc: 0.9036 - val_loss: 0.7070 - val_acc: 0.8382 Epoch 163/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3374 - acc: 0.8962 - val_loss: 0.7187 - val_acc: 0.8284 Epoch 164/300 5/5 [==============================] - 0s 94ms/step - loss: 0.3249 - acc: 0.8977 - val_loss: 0.7197 - val_acc: 0.8382 Epoch 165/300 5/5 [==============================] - 0s 69ms/step - loss: 0.4041 - acc: 0.8764 - val_loss: 0.7195 - val_acc: 0.8431 Epoch 166/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3356 - acc: 0.9015 - val_loss: 0.7114 - val_acc: 0.8333 Epoch 167/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3006 - acc: 0.9017 - val_loss: 0.6988 - val_acc: 0.8235 Epoch 168/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3368 - acc: 0.8970 - val_loss: 0.6795 - val_acc: 0.8284 Epoch 169/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3049 - acc: 0.9124 - val_loss: 0.6590 - val_acc: 0.8333 Epoch 170/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3652 - acc: 0.8900 - val_loss: 0.6538 - val_acc: 0.8431 Epoch 171/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3153 - acc: 0.9094 - val_loss: 0.6342 - val_acc: 0.8480 Epoch 172/300 5/5 [==============================] - 0s 67ms/step - loss: 0.2881 - acc: 0.9038 - val_loss: 0.6242 - val_acc: 0.8382 Epoch 173/300 5/5 [==============================] - 0s 66ms/step - loss: 0.3764 - acc: 0.8824 - val_loss: 0.6220 - val_acc: 0.8480 Epoch 174/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3352 - acc: 0.8958 - val_loss: 0.6305 - val_acc: 0.8578 Epoch 175/300 5/5 [==============================] - 0s 70ms/step - loss: 0.3450 - acc: 0.9026 - val_loss: 0.6426 - val_acc: 0.8578 Epoch 176/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3471 - acc: 0.8941 - val_loss: 0.6653 - val_acc: 0.8333 Epoch 177/300 5/5 [==============================] - 0s 70ms/step - loss: 0.3373 - acc: 0.8970 - val_loss: 0.6941 - val_acc: 0.8137 Epoch 178/300 5/5 [==============================] - 0s 69ms/step - loss: 0.2986 - acc: 0.9092 - val_loss: 0.6841 - val_acc: 0.8137 Epoch 179/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3466 - acc: 0.9038 - val_loss: 0.6704 - val_acc: 0.8284 Epoch 180/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3661 - acc: 0.8998 - val_loss: 0.6995 - val_acc: 0.8235 Epoch 181/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3163 - acc: 0.8902 - val_loss: 0.6806 - val_acc: 0.8235 Epoch 182/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3278 - acc: 0.9025 - val_loss: 0.6815 - val_acc: 0.8284 Epoch 183/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3343 - acc: 0.8960 - val_loss: 0.6704 - val_acc: 0.8333 Epoch 184/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3172 - acc: 0.8906 - val_loss: 0.6434 - val_acc: 0.8333 Epoch 185/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3679 - acc: 0.8921 - val_loss: 0.6394 - val_acc: 0.8529 Epoch 186/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3030 - acc: 0.9079 - val_loss: 0.6677 - val_acc: 0.8480 Epoch 187/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3102 - acc: 0.8908 - val_loss: 0.6456 - val_acc: 0.8529 Epoch 188/300 5/5 [==============================] - 0s 68ms/step - loss: 0.2763 - acc: 0.9140 - val_loss: 0.6151 - val_acc: 0.8431 Epoch 189/300 5/5 [==============================] - 0s 
70ms/step - loss: 0.3298 - acc: 0.8964 - val_loss: 0.6119 - val_acc: 0.8676 Epoch 190/300 5/5 [==============================] - 0s 69ms/step - loss: 0.2928 - acc: 0.9094 - val_loss: 0.6141 - val_acc: 0.8480 Epoch 191/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3066 - acc: 0.9093 - val_loss: 0.6393 - val_acc: 0.8480 Epoch 192/300 5/5 [==============================] - 0s 94ms/step - loss: 0.2988 - acc: 0.9060 - val_loss: 0.6380 - val_acc: 0.8431 Epoch 193/300 5/5 [==============================] - 0s 70ms/step - loss: 0.3654 - acc: 0.8800 - val_loss: 0.6102 - val_acc: 0.8578 Epoch 194/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3482 - acc: 0.8981 - val_loss: 0.6396 - val_acc: 0.8480 Epoch 195/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3029 - acc: 0.9083 - val_loss: 0.6410 - val_acc: 0.8431 Epoch 196/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3276 - acc: 0.8931 - val_loss: 0.6209 - val_acc: 0.8529 Epoch 197/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3252 - acc: 0.8989 - val_loss: 0.6153 - val_acc: 0.8578 Epoch 198/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3542 - acc: 0.8917 - val_loss: 0.6079 - val_acc: 0.8627 Epoch 199/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3191 - acc: 0.9006 - val_loss: 0.6087 - val_acc: 0.8578 Epoch 200/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3077 - acc: 0.9008 - val_loss: 0.6209 - val_acc: 0.8529 Epoch 201/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3045 - acc: 0.9076 - val_loss: 0.6609 - val_acc: 0.8333 Epoch 202/300 5/5 [==============================] - 0s 71ms/step - loss: 0.3053 - acc: 0.9058 - val_loss: 0.7324 - val_acc: 0.8284 Epoch 203/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3107 - acc: 0.8985 - val_loss: 0.7755 - val_acc: 0.8235 Epoch 204/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3047 - acc: 0.8995 - val_loss: 0.7936 - val_acc: 0.7941 Epoch 205/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3131 - acc: 0.9098 - val_loss: 0.6453 - val_acc: 0.8529 Epoch 206/300 5/5 [==============================] - 0s 71ms/step - loss: 0.3795 - acc: 0.8849 - val_loss: 0.6213 - val_acc: 0.8529 Epoch 207/300 5/5 [==============================] - 0s 70ms/step - loss: 0.2903 - acc: 0.9114 - val_loss: 0.6354 - val_acc: 0.8578 Epoch 208/300 5/5 [==============================] - 0s 68ms/step - loss: 0.2599 - acc: 0.9164 - val_loss: 0.6390 - val_acc: 0.8676 Epoch 209/300 5/5 [==============================] - 0s 71ms/step - loss: 0.2954 - acc: 0.9041 - val_loss: 0.6376 - val_acc: 0.8775 Epoch 210/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3250 - acc: 0.9023 - val_loss: 0.6206 - val_acc: 0.8725 Epoch 211/300 5/5 [==============================] - 0s 69ms/step - loss: 0.2694 - acc: 0.9149 - val_loss: 0.6177 - val_acc: 0.8676 Epoch 212/300 5/5 [==============================] - 0s 71ms/step - loss: 0.2920 - acc: 0.9054 - val_loss: 0.6438 - val_acc: 0.8627 Epoch 213/300 5/5 [==============================] - 0s 68ms/step - loss: 0.2861 - acc: 0.9048 - val_loss: 0.7128 - val_acc: 0.8480 Epoch 214/300 5/5 [==============================] - 0s 65ms/step - loss: 0.2916 - acc: 0.9083 - val_loss: 0.7030 - val_acc: 0.8431 Epoch 215/300 5/5 [==============================] - 0s 91ms/step - loss: 0.3288 - acc: 0.8887 - val_loss: 0.6593 - val_acc: 0.8529 Epoch 216/300 5/5 
[==============================] - 0s 68ms/step - loss: 0.3802 - acc: 0.8875 - val_loss: 0.6165 - val_acc: 0.8578 Epoch 217/300 5/5 [==============================] - 0s 67ms/step - loss: 0.2905 - acc: 0.9175 - val_loss: 0.6141 - val_acc: 0.8725 Epoch 218/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3078 - acc: 0.9104 - val_loss: 0.6158 - val_acc: 0.8676 Epoch 219/300 5/5 [==============================] - 0s 66ms/step - loss: 0.2757 - acc: 0.9214 - val_loss: 0.6195 - val_acc: 0.8578 Epoch 220/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3159 - acc: 0.8958 - val_loss: 0.6375 - val_acc: 0.8578 Epoch 221/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3348 - acc: 0.8944 - val_loss: 0.6839 - val_acc: 0.8431 Epoch 222/300 5/5 [==============================] - 0s 70ms/step - loss: 0.3239 - acc: 0.8936 - val_loss: 0.6450 - val_acc: 0.8578 Epoch 223/300 5/5 [==============================] - 0s 73ms/step - loss: 0.2783 - acc: 0.9081 - val_loss: 0.6163 - val_acc: 0.8627 Epoch 224/300 5/5 [==============================] - 0s 68ms/step - loss: 0.2852 - acc: 0.9165 - val_loss: 0.6495 - val_acc: 0.8431 Epoch 225/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3073 - acc: 0.8902 - val_loss: 0.6622 - val_acc: 0.8529 Epoch 226/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3127 - acc: 0.9102 - val_loss: 0.6652 - val_acc: 0.8431 Epoch 227/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3248 - acc: 0.9067 - val_loss: 0.6475 - val_acc: 0.8529 Epoch 228/300 5/5 [==============================] - 0s 69ms/step - loss: 0.3155 - acc: 0.9089 - val_loss: 0.6263 - val_acc: 0.8382 Epoch 229/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3585 - acc: 0.8898 - val_loss: 0.6308 - val_acc: 0.8578 Epoch 230/300 5/5 [==============================] - 0s 68ms/step - loss: 0.2812 - acc: 0.9180 - val_loss: 0.6201 - val_acc: 0.8529 Epoch 231/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3070 - acc: 0.8984 - val_loss: 0.6170 - val_acc: 0.8431 Epoch 232/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3433 - acc: 0.8909 - val_loss: 0.6568 - val_acc: 0.8431 Epoch 233/300 5/5 [==============================] - 0s 67ms/step - loss: 0.2844 - acc: 0.9085 - val_loss: 0.6571 - val_acc: 0.8529 Epoch 234/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3122 - acc: 0.9044 - val_loss: 0.6516 - val_acc: 0.8480 Epoch 235/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3047 - acc: 0.9232 - val_loss: 0.6505 - val_acc: 0.8480 Epoch 236/300 5/5 [==============================] - 0s 67ms/step - loss: 0.2913 - acc: 0.9192 - val_loss: 0.6432 - val_acc: 0.8529 Epoch 237/300 5/5 [==============================] - 0s 67ms/step - loss: 0.2505 - acc: 0.9322 - val_loss: 0.6462 - val_acc: 0.8627 Epoch 238/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3033 - acc: 0.9085 - val_loss: 0.6378 - val_acc: 0.8627 Epoch 239/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3418 - acc: 0.8975 - val_loss: 0.6232 - val_acc: 0.8578 Epoch 240/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3167 - acc: 0.9051 - val_loss: 0.6284 - val_acc: 0.8627 Epoch 241/300 5/5 [==============================] - 0s 69ms/step - loss: 0.2637 - acc: 0.9145 - val_loss: 0.6427 - val_acc: 0.8627 Epoch 242/300 5/5 [==============================] - 0s 68ms/step - loss: 0.2678 - acc: 0.9227 - val_loss: 0.6492 - val_acc: 0.8578 Epoch 
243/300 5/5 [==============================] - 0s 67ms/step - loss: 0.2730 - acc: 0.9113 - val_loss: 0.6736 - val_acc: 0.8578 Epoch 244/300 5/5 [==============================] - 0s 93ms/step - loss: 0.3013 - acc: 0.9077 - val_loss: 0.7138 - val_acc: 0.8333 Epoch 245/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3151 - acc: 0.9096 - val_loss: 0.7278 - val_acc: 0.8382 Epoch 246/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3307 - acc: 0.9058 - val_loss: 0.6944 - val_acc: 0.8627 Epoch 247/300 5/5 [==============================] - 0s 68ms/step - loss: 0.2631 - acc: 0.9236 - val_loss: 0.6789 - val_acc: 0.8529 Epoch 248/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3215 - acc: 0.9027 - val_loss: 0.6790 - val_acc: 0.8529 Epoch 249/300 5/5 [==============================] - 0s 67ms/step - loss: 0.2968 - acc: 0.9038 - val_loss: 0.6864 - val_acc: 0.8480 Epoch 250/300 5/5 [==============================] - 0s 68ms/step - loss: 0.2998 - acc: 0.9078 - val_loss: 0.7079 - val_acc: 0.8480 Epoch 251/300 5/5 [==============================] - 0s 67ms/step - loss: 0.2375 - acc: 0.9197 - val_loss: 0.7252 - val_acc: 0.8529 Epoch 252/300 5/5 [==============================] - 0s 68ms/step - loss: 0.2955 - acc: 0.9178 - val_loss: 0.7298 - val_acc: 0.8284 Epoch 253/300 5/5 [==============================] - 0s 69ms/step - loss: 0.2946 - acc: 0.9039 - val_loss: 0.7172 - val_acc: 0.8284 Epoch 254/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3051 - acc: 0.9087 - val_loss: 0.6861 - val_acc: 0.8382 Epoch 255/300 5/5 [==============================] - 0s 67ms/step - loss: 0.3563 - acc: 0.8882 - val_loss: 0.6739 - val_acc: 0.8480 Epoch 256/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3144 - acc: 0.8969 - val_loss: 0.6970 - val_acc: 0.8382 Epoch 257/300 5/5 [==============================] - 0s 68ms/step - loss: 0.3210 - acc: 0.9152 - val_loss: 0.7106 - val_acc: 0.8333 Epoch 258/300 5/5 [==============================] - 0s 67ms/step - loss: 0.2523 - acc: 0.9214 - val_loss: 0.7111 - val_acc: 0.8431 Epoch 259/300 5/5 [==============================] - 0s 68ms/step - loss: 0.2552 - acc: 0.9236 - val_loss: 0.7258 - val_acc: 0.8382 Let's plot the learning curves display_learning_curves(history) png Now we evaluate the GNN model on the test data split. The results may vary depending on the training sample, however the GNN model always outperforms the baseline model in terms of the test accuracy. x_test = test_data.paper_id.to_numpy() _, test_accuracy = gnn_model.evaluate(x=x_test, y=y_test, verbose=0) print(f\"Test accuracy: {round(test_accuracy * 100, 2)}%\") Test accuracy: 80.19% Examine the GNN model predictions Let's add the new instances as nodes to the node_features, and generate links (citations) to existing nodes. # First we add the N new_instances as nodes to the graph # by appending the new_instance to node_features. num_nodes = node_features.shape[0] new_node_features = np.concatenate([node_features, new_instances]) # Second we add the M edges (citations) from each new node to a set # of existing nodes in a particular subject new_node_indices = [i + num_nodes for i in range(num_classes)] new_citations = [] for subject_idx, group in papers.groupby(\"subject\"): subject_papers = list(group.paper_id) # Select random x papers specific subject. selected_paper_indices1 = np.random.choice(subject_papers, 5) # Select random y papers from any subject (where y < x). 
selected_paper_indices2 = np.random.choice(list(papers.paper_id), 2) # Merge the selected paper indices. selected_paper_indices = np.concatenate( [selected_paper_indices1, selected_paper_indices2], axis=0 ) # Create edges between a citing paper idx and the selected cited papers. citing_paper_indx = new_node_indices[subject_idx] for cited_paper_idx in selected_paper_indices: new_citations.append([citing_paper_indx, cited_paper_idx]) new_citations = np.array(new_citations).T new_edges = np.concatenate([edges, new_citations], axis=1) Now let's update the node_features and the edges in the GNN model. print(\"Original node_features shape:\", gnn_model.node_features.shape) print(\"Original edges shape:\", gnn_model.edges.shape) gnn_model.node_features = new_node_features gnn_model.edges = new_edges gnn_model.edge_weights = tf.ones(shape=new_edges.shape[1]) print(\"New node_features shape:\", gnn_model.node_features.shape) print(\"New edges shape:\", gnn_model.edges.shape) logits = gnn_model.predict(tf.convert_to_tensor(new_node_indices)) probabilities = keras.activations.softmax(tf.convert_to_tensor(logits)).numpy() display_class_probabilities(probabilities) Original node_features shape: (2708, 1433) Original edges shape: (2, 5429) New node_features shape: (2715, 1433) New edges shape: (2, 5478) Instance 1: - Case_Based: 4.35% - Genetic_Algorithms: 4.19% - Neural_Networks: 1.49% - Probabilistic_Methods: 1.68% - Reinforcement_Learning: 21.34% - Rule_Learning: 52.82% - Theory: 14.14% Instance 2: - Case_Based: 0.01% - Genetic_Algorithms: 99.88% - Neural_Networks: 0.03% - Probabilistic_Methods: 0.0% - Reinforcement_Learning: 0.07% - Rule_Learning: 0.0% - Theory: 0.01% Instance 3: - Case_Based: 0.1% - Genetic_Algorithms: 59.18% - Neural_Networks: 39.17% - Probabilistic_Methods: 0.38% - Reinforcement_Learning: 0.55% - Rule_Learning: 0.08% - Theory: 0.54% Instance 4: - Case_Based: 0.14% - Genetic_Algorithms: 10.44% - Neural_Networks: 84.1% - Probabilistic_Methods: 3.61% - Reinforcement_Learning: 0.71% - Rule_Learning: 0.16% - Theory: 0.85% Instance 5: - Case_Based: 0.27% - Genetic_Algorithms: 0.15% - Neural_Networks: 0.48% - Probabilistic_Methods: 0.23% - Reinforcement_Learning: 0.79% - Rule_Learning: 0.45% - Theory: 97.63% Instance 6: - Case_Based: 3.12% - Genetic_Algorithms: 1.35% - Neural_Networks: 19.72% - Probabilistic_Methods: 0.48% - Reinforcement_Learning: 39.56% - Rule_Learning: 28.0% - Theory: 7.77% Instance 7: - Case_Based: 1.6% - Genetic_Algorithms: 34.76% - Neural_Networks: 4.45% - Probabilistic_Methods: 9.59% - Reinforcement_Learning: 2.97% - Rule_Learning: 4.05% - Theory: 42.6% Notice that the probabilities of the expected subjects (to which several citations are added) are higher compared to the baseline model. Train a 2-layer bidirectional LSTM on the IMDB movie review sentiment classification. 
Setup import numpy as np from tensorflow import keras from tensorflow.keras import layers max_features = 20000 # Only consider the top 20k words maxlen = 200 # Only consider the first 200 words of each movie review Build the model # Input for variable-length sequences of integers inputs = keras.Input(shape=(None,), dtype=\"int32\") # Embed each integer in a 128-dimensional vector x = layers.Embedding(max_features, 128)(inputs) # Add 2 bidirectional LSTMs x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x) x = layers.Bidirectional(layers.LSTM(64))(x) # Add a classifier outputs = layers.Dense(1, activation=\"sigmoid\")(x) model = keras.Model(inputs, outputs) model.summary() Model: \"model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, None)] 0 _________________________________________________________________ embedding (Embedding) (None, None, 128) 2560000 _________________________________________________________________ bidirectional (Bidirectional (None, None, 128) 98816 _________________________________________________________________ bidirectional_1 (Bidirection (None, 128) 98816 _________________________________________________________________ dense (Dense) (None, 1) 129 ================================================================= Total params: 2,757,761 Trainable params: 2,757,761 Non-trainable params: 0 _________________________________________________________________ Load the IMDB movie review sentiment data (x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data( num_words=max_features ) print(len(x_train), \"Training sequences\") print(len(x_val), \"Validation sequences\") x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen) x_val = keras.preprocessing.sequence.pad_sequences(x_val, maxlen=maxlen) 25000 Training sequences 25000 Validation sequences Train and evaluate the model model.compile(\"adam\", \"binary_crossentropy\", metrics=[\"accuracy\"]) model.fit(x_train, y_train, batch_size=32, epochs=2, validation_data=(x_val, y_val)) Epoch 1/2 782/782 [==============================] - 220s 281ms/step - loss: 0.4117 - accuracy: 0.8083 - val_loss: 0.6497 - val_accuracy: 0.6983 Epoch 2/2 726/782 [==========================>...] - ETA: 11s - loss: 0.3170 - accuracy: 0.8683 Character-level recurrent sequence-to-sequence model. Introduction This example demonstrates how to implement a basic character-level recurrent sequence-to-sequence model. We apply it to translating short English sentences into short French sentences, character-by-character. Note that it is fairly unusual to do character-level machine translation, as word-level models are more common in this domain. Summary of the algorithm We start with input sequences from a domain (e.g. English sentences) and corresponding target sequences from another domain (e.g. French sentences). An encoder LSTM turns input sequences to 2 state vectors (we keep the last LSTM state and discard the outputs). A decoder LSTM is trained to turn the target sequences into the same sequence but offset by one timestep in the future, a training process called \"teacher forcing\" in this context. It uses as initial state the state vectors from the encoder. Effectively, the decoder learns to generate targets[t+1...] given targets[...t], conditioned on the input sequence. 
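To make the one-timestep offset used by teacher forcing concrete, here is a tiny illustrative sketch; the sample target string is invented for this sketch, and the actual vectorization of the real data appears in the code below.

# The decoder input starts with the start-of-sequence character ("\t") and the
# target is the same sequence shifted one step ahead, ending with the
# end-of-sequence character ("\n").
target_text = "\tva !\n"            # hypothetical target sentence
decoder_input = target_text[:-1]    # "\tva !"
decoder_target = target_text[1:]    # "va !\n"
for t, (current_char, next_char) in enumerate(zip(decoder_input, decoder_target)):
    # At timestep t the decoder is fed the ground-truth character
    # current_char and is trained to predict next_char.
    print(t, repr(current_char), "->", repr(next_char))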
In inference mode, when we want to decode unknown input sequences, we: - Encode the input sequence into state vectors - Start with a target sequence of size 1 (just the start-of-sequence character) - Feed the state vectors and 1-char target sequence to the decoder to produce predictions for the next character - Sample the next character using these predictions (we simply use argmax). - Append the sampled character to the target sequence - Repeat until we generate the end-of-sequence character or we hit the character limit. Setup import numpy as np import tensorflow as tf from tensorflow import keras Download the data !!curl -O http://www.manythings.org/anki/fra-eng.zip !!unzip fra-eng.zip ['Archive: fra-eng.zip', ' inflating: _about.txt ', ' inflating: fra.txt '] Configuration batch_size = 64 # Batch size for training. epochs = 100 # Number of epochs to train for. latent_dim = 256 # Latent dimensionality of the encoding space. num_samples = 10000 # Number of samples to train on. # Path to the data txt file on disk. data_path = \"fra.txt\" Prepare the data # Vectorize the data. input_texts = [] target_texts = [] input_characters = set() target_characters = set() with open(data_path, \"r\", encoding=\"utf-8\") as f: lines = f.read().split(\"\n\") for line in lines[: min(num_samples, len(lines) - 1)]: input_text, target_text, _ = line.split(\"\t\") # We use \"tab\" as the \"start sequence\" character # for the targets, and \"\n\" as \"end sequence\" character. target_text = \"\t\" + target_text + \"\n\" input_texts.append(input_text) target_texts.append(target_text) for char in input_text: if char not in input_characters: input_characters.add(char) for char in target_text: if char not in target_characters: target_characters.add(char) input_characters = sorted(list(input_characters)) target_characters = sorted(list(target_characters)) num_encoder_tokens = len(input_characters) num_decoder_tokens = len(target_characters) max_encoder_seq_length = max([len(txt) for txt in input_texts]) max_decoder_seq_length = max([len(txt) for txt in target_texts]) print(\"Number of samples:\", len(input_texts)) print(\"Number of unique input tokens:\", num_encoder_tokens) print(\"Number of unique output tokens:\", num_decoder_tokens) print(\"Max sequence length for inputs:\", max_encoder_seq_length) print(\"Max sequence length for outputs:\", max_decoder_seq_length) input_token_index = dict([(char, i) for i, char in enumerate(input_characters)]) target_token_index = dict([(char, i) for i, char in enumerate(target_characters)]) encoder_input_data = np.zeros( (len(input_texts), max_encoder_seq_length, num_encoder_tokens), dtype=\"float32\" ) decoder_input_data = np.zeros( (len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype=\"float32\" ) decoder_target_data = np.zeros( (len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype=\"float32\" ) for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)): for t, char in enumerate(input_text): encoder_input_data[i, t, input_token_index[char]] = 1.0 encoder_input_data[i, t + 1 :, input_token_index[\" \"]] = 1.0 for t, char in enumerate(target_text): # decoder_target_data is ahead of decoder_input_data by one timestep decoder_input_data[i, t, target_token_index[char]] = 1.0 if t > 0: # decoder_target_data will be ahead by one timestep # and will not include the start character. 
decoder_target_data[i, t - 1, target_token_index[char]] = 1.0 decoder_input_data[i, t + 1 :, target_token_index[\" \"]] = 1.0 decoder_target_data[i, t:, target_token_index[\" \"]] = 1.0 Number of samples: 10000 Number of unique input tokens: 71 Number of unique output tokens: 93 Max sequence length for inputs: 16 Max sequence length for outputs: 59 Build the model # Define an input sequence and process it. encoder_inputs = keras.Input(shape=(None, num_encoder_tokens)) encoder = keras.layers.LSTM(latent_dim, return_state=True) encoder_outputs, state_h, state_c = encoder(encoder_inputs) # We discard `encoder_outputs` and only keep the states. encoder_states = [state_h, state_c] # Set up the decoder, using `encoder_states` as initial state. decoder_inputs = keras.Input(shape=(None, num_decoder_tokens)) # We set up our decoder to return full output sequences, # and to return internal states as well. We don't use the # return states in the training model, but we will use them in inference. decoder_lstm = keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True) decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states) decoder_dense = keras.layers.Dense(num_decoder_tokens, activation=\"softmax\") decoder_outputs = decoder_dense(decoder_outputs) # Define the model that will turn # `encoder_input_data` & `decoder_input_data` into `decoder_target_data` model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs) Train the model model.compile( optimizer=\"rmsprop\", loss=\"categorical_crossentropy\", metrics=[\"accuracy\"] ) model.fit( [encoder_input_data, decoder_input_data], decoder_target_data, batch_size=batch_size, epochs=epochs, validation_split=0.2, ) # Save model model.save(\"s2s\") Epoch 1/100 125/125 [==============================] - 2s 16ms/step - loss: 1.1806 - accuracy: 0.7246 - val_loss: 1.0825 - val_accuracy: 0.6995 Epoch 2/100 125/125 [==============================] - 1s 11ms/step - loss: 0.8599 - accuracy: 0.7671 - val_loss: 0.8524 - val_accuracy: 0.7646 Epoch 3/100 125/125 [==============================] - 1s 11ms/step - loss: 0.6867 - accuracy: 0.8069 - val_loss: 0.7129 - val_accuracy: 0.7928 Epoch 4/100 125/125 [==============================] - 1s 11ms/step - loss: 0.5982 - accuracy: 0.8262 - val_loss: 0.6547 - val_accuracy: 0.8111 Epoch 5/100 125/125 [==============================] - 1s 11ms/step - loss: 0.5490 - accuracy: 0.8398 - val_loss: 0.6407 - val_accuracy: 0.8114 Epoch 6/100 125/125 [==============================] - 1s 11ms/step - loss: 0.5140 - accuracy: 0.8489 - val_loss: 0.5834 - val_accuracy: 0.8288 Epoch 7/100 125/125 [==============================] - 1s 11ms/step - loss: 0.4854 - accuracy: 0.8569 - val_loss: 0.5577 - val_accuracy: 0.8357 Epoch 8/100 125/125 [==============================] - 1s 11ms/step - loss: 0.4613 - accuracy: 0.8632 - val_loss: 0.5384 - val_accuracy: 0.8407 Epoch 9/100 125/125 [==============================] - 1s 11ms/step - loss: 0.4405 - accuracy: 0.8691 - val_loss: 0.5255 - val_accuracy: 0.8435 Epoch 10/100 125/125 [==============================] - 1s 11ms/step - loss: 0.4219 - accuracy: 0.8743 - val_loss: 0.5049 - val_accuracy: 0.8497 Epoch 11/100 125/125 [==============================] - 1s 11ms/step - loss: 0.4042 - accuracy: 0.8791 - val_loss: 0.4986 - val_accuracy: 0.8522 Epoch 12/100 125/125 [==============================] - 1s 11ms/step - loss: 0.3888 - accuracy: 0.8836 - val_loss: 0.4854 - val_accuracy: 0.8552 Epoch 13/100 125/125 [==============================] - 1s 
11ms/step - loss: 0.3735 - accuracy: 0.8883 - val_loss: 0.4754 - val_accuracy: 0.8586 Epoch 14/100 125/125 [==============================] - 1s 11ms/step - loss: 0.3595 - accuracy: 0.8915 - val_loss: 0.4753 - val_accuracy: 0.8589 Epoch 15/100 125/125 [==============================] - 1s 11ms/step - loss: 0.3467 - accuracy: 0.8956 - val_loss: 0.4611 - val_accuracy: 0.8634 Epoch 16/100 125/125 [==============================] - 1s 11ms/step - loss: 0.3346 - accuracy: 0.8991 - val_loss: 0.4535 - val_accuracy: 0.8658 Epoch 17/100 125/125 [==============================] - 1s 11ms/step - loss: 0.3231 - accuracy: 0.9025 - val_loss: 0.4504 - val_accuracy: 0.8665 Epoch 18/100 125/125 [==============================] - 1s 11ms/step - loss: 0.3120 - accuracy: 0.9059 - val_loss: 0.4442 - val_accuracy: 0.8699 Epoch 19/100 125/125 [==============================] - 1s 10ms/step - loss: 0.3015 - accuracy: 0.9088 - val_loss: 0.4439 - val_accuracy: 0.8692 Epoch 20/100 125/125 [==============================] - 1s 11ms/step - loss: 0.2917 - accuracy: 0.9118 - val_loss: 0.4415 - val_accuracy: 0.8712 Epoch 21/100 125/125 [==============================] - 1s 10ms/step - loss: 0.2821 - accuracy: 0.9147 - val_loss: 0.4372 - val_accuracy: 0.8722 Epoch 22/100 125/125 [==============================] - 1s 11ms/step - loss: 0.2731 - accuracy: 0.9174 - val_loss: 0.4424 - val_accuracy: 0.8713 Epoch 23/100 125/125 [==============================] - 1s 11ms/step - loss: 0.2642 - accuracy: 0.9201 - val_loss: 0.4371 - val_accuracy: 0.8725 Epoch 24/100 125/125 [==============================] - 1s 11ms/step - loss: 0.2561 - accuracy: 0.9226 - val_loss: 0.4400 - val_accuracy: 0.8728 Epoch 25/100 125/125 [==============================] - 1s 11ms/step - loss: 0.2481 - accuracy: 0.9245 - val_loss: 0.4358 - val_accuracy: 0.8757 Epoch 26/100 125/125 [==============================] - 1s 11ms/step - loss: 0.2404 - accuracy: 0.9270 - val_loss: 0.4407 - val_accuracy: 0.8746 Epoch 27/100 125/125 [==============================] - 1s 11ms/step - loss: 0.2332 - accuracy: 0.9294 - val_loss: 0.4462 - val_accuracy: 0.8736 Epoch 28/100 125/125 [==============================] - 1s 11ms/step - loss: 0.2263 - accuracy: 0.9310 - val_loss: 0.4436 - val_accuracy: 0.8736 Epoch 29/100 125/125 [==============================] - 1s 11ms/step - loss: 0.2194 - accuracy: 0.9328 - val_loss: 0.4411 - val_accuracy: 0.8755 Epoch 30/100 125/125 [==============================] - 1s 11ms/step - loss: 0.2126 - accuracy: 0.9351 - val_loss: 0.4457 - val_accuracy: 0.8755 Epoch 31/100 125/125 [==============================] - 1s 11ms/step - loss: 0.2069 - accuracy: 0.9370 - val_loss: 0.4498 - val_accuracy: 0.8752 Epoch 32/100 125/125 [==============================] - 1s 11ms/step - loss: 0.2010 - accuracy: 0.9388 - val_loss: 0.4518 - val_accuracy: 0.8755 Epoch 33/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1953 - accuracy: 0.9404 - val_loss: 0.4545 - val_accuracy: 0.8758 Epoch 34/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1897 - accuracy: 0.9423 - val_loss: 0.4547 - val_accuracy: 0.8769 Epoch 35/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1846 - accuracy: 0.9435 - val_loss: 0.4582 - val_accuracy: 0.8763 Epoch 36/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1794 - accuracy: 0.9451 - val_loss: 0.4653 - val_accuracy: 0.8755 Epoch 37/100 125/125 [==============================] - 1s 10ms/step - loss: 0.1747 - accuracy: 0.9464 - val_loss: 0.4633 - 
val_accuracy: 0.8768 Epoch 38/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1700 - accuracy: 0.9479 - val_loss: 0.4665 - val_accuracy: 0.8772 Epoch 39/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1657 - accuracy: 0.9493 - val_loss: 0.4725 - val_accuracy: 0.8755 Epoch 40/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1612 - accuracy: 0.9504 - val_loss: 0.4799 - val_accuracy: 0.8752 Epoch 41/100 125/125 [==============================] - 1s 10ms/step - loss: 0.1576 - accuracy: 0.9516 - val_loss: 0.4777 - val_accuracy: 0.8760 Epoch 42/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1531 - accuracy: 0.9530 - val_loss: 0.4842 - val_accuracy: 0.8761 Epoch 43/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1495 - accuracy: 0.9542 - val_loss: 0.4879 - val_accuracy: 0.8761 Epoch 44/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1456 - accuracy: 0.9552 - val_loss: 0.4933 - val_accuracy: 0.8757 Epoch 45/100 125/125 [==============================] - 1s 10ms/step - loss: 0.1419 - accuracy: 0.9562 - val_loss: 0.4988 - val_accuracy: 0.8753 Epoch 46/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1385 - accuracy: 0.9574 - val_loss: 0.5012 - val_accuracy: 0.8758 Epoch 47/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1356 - accuracy: 0.9581 - val_loss: 0.5040 - val_accuracy: 0.8763 Epoch 48/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1325 - accuracy: 0.9591 - val_loss: 0.5114 - val_accuracy: 0.8761 Epoch 49/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1291 - accuracy: 0.9601 - val_loss: 0.5151 - val_accuracy: 0.8764 Epoch 50/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1263 - accuracy: 0.9607 - val_loss: 0.5214 - val_accuracy: 0.8761 Epoch 51/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1232 - accuracy: 0.9621 - val_loss: 0.5210 - val_accuracy: 0.8759 Epoch 52/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1205 - accuracy: 0.9626 - val_loss: 0.5232 - val_accuracy: 0.8761 Epoch 53/100 125/125 [==============================] - 1s 10ms/step - loss: 0.1177 - accuracy: 0.9633 - val_loss: 0.5329 - val_accuracy: 0.8754 Epoch 54/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1152 - accuracy: 0.9644 - val_loss: 0.5317 - val_accuracy: 0.8753 Epoch 55/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1132 - accuracy: 0.9648 - val_loss: 0.5418 - val_accuracy: 0.8748 Epoch 56/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1102 - accuracy: 0.9658 - val_loss: 0.5456 - val_accuracy: 0.8745 Epoch 57/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1083 - accuracy: 0.9663 - val_loss: 0.5438 - val_accuracy: 0.8753 Epoch 58/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1058 - accuracy: 0.9669 - val_loss: 0.5519 - val_accuracy: 0.8753 Epoch 59/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1035 - accuracy: 0.9675 - val_loss: 0.5543 - val_accuracy: 0.8753 Epoch 60/100 125/125 [==============================] - 1s 11ms/step - loss: 0.1017 - accuracy: 0.9679 - val_loss: 0.5619 - val_accuracy: 0.8756 Epoch 61/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0993 - accuracy: 0.9686 - val_loss: 0.5680 - val_accuracy: 0.8751 Epoch 62/100 125/125 [==============================] 
- 1s 11ms/step - loss: 0.0975 - accuracy: 0.9690 - val_loss: 0.5768 - val_accuracy: 0.8737 Epoch 63/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0954 - accuracy: 0.9697 - val_loss: 0.5800 - val_accuracy: 0.8733 Epoch 64/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0936 - accuracy: 0.9700 - val_loss: 0.5782 - val_accuracy: 0.8744 Epoch 65/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0918 - accuracy: 0.9709 - val_loss: 0.5832 - val_accuracy: 0.8743 Epoch 66/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0897 - accuracy: 0.9714 - val_loss: 0.5863 - val_accuracy: 0.8744 Epoch 67/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0880 - accuracy: 0.9718 - val_loss: 0.5912 - val_accuracy: 0.8742 Epoch 68/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0863 - accuracy: 0.9722 - val_loss: 0.5972 - val_accuracy: 0.8741 Epoch 69/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0850 - accuracy: 0.9727 - val_loss: 0.5969 - val_accuracy: 0.8743 Epoch 70/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0832 - accuracy: 0.9732 - val_loss: 0.6046 - val_accuracy: 0.8736 Epoch 71/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0815 - accuracy: 0.9738 - val_loss: 0.6037 - val_accuracy: 0.8746 Epoch 72/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0799 - accuracy: 0.9741 - val_loss: 0.6092 - val_accuracy: 0.8744 Epoch 73/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0785 - accuracy: 0.9746 - val_loss: 0.6118 - val_accuracy: 0.8750 Epoch 74/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0769 - accuracy: 0.9751 - val_loss: 0.6150 - val_accuracy: 0.8737 Epoch 75/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0753 - accuracy: 0.9754 - val_loss: 0.6196 - val_accuracy: 0.8736 Epoch 76/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0742 - accuracy: 0.9759 - val_loss: 0.6237 - val_accuracy: 0.8738 Epoch 77/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0731 - accuracy: 0.9760 - val_loss: 0.6310 - val_accuracy: 0.8731 Epoch 78/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0719 - accuracy: 0.9765 - val_loss: 0.6335 - val_accuracy: 0.8746 Epoch 79/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0702 - accuracy: 0.9770 - val_loss: 0.6366 - val_accuracy: 0.8744 Epoch 80/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0692 - accuracy: 0.9773 - val_loss: 0.6368 - val_accuracy: 0.8745 Epoch 81/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0678 - accuracy: 0.9777 - val_loss: 0.6472 - val_accuracy: 0.8735 Epoch 82/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0669 - accuracy: 0.9778 - val_loss: 0.6474 - val_accuracy: 0.8735 Epoch 83/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0653 - accuracy: 0.9783 - val_loss: 0.6466 - val_accuracy: 0.8745 Epoch 84/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0645 - accuracy: 0.9787 - val_loss: 0.6576 - val_accuracy: 0.8733 Epoch 85/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0633 - accuracy: 0.9790 - val_loss: 0.6539 - val_accuracy: 0.8742 Epoch 86/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0626 - accuracy: 0.9792 - val_loss: 0.6609 - 
val_accuracy: 0.8738 Epoch 87/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0614 - accuracy: 0.9794 - val_loss: 0.6641 - val_accuracy: 0.8739 Epoch 88/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0602 - accuracy: 0.9799 - val_loss: 0.6677 - val_accuracy: 0.8739 Epoch 89/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0594 - accuracy: 0.9801 - val_loss: 0.6659 - val_accuracy: 0.8731 Epoch 90/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0581 - accuracy: 0.9803 - val_loss: 0.6744 - val_accuracy: 0.8740 Epoch 91/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0575 - accuracy: 0.9806 - val_loss: 0.6722 - val_accuracy: 0.8737 Epoch 92/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0568 - accuracy: 0.9808 - val_loss: 0.6778 - val_accuracy: 0.8737 Epoch 93/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0557 - accuracy: 0.9814 - val_loss: 0.6837 - val_accuracy: 0.8733 Epoch 94/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0548 - accuracy: 0.9814 - val_loss: 0.6906 - val_accuracy: 0.8732 Epoch 95/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0543 - accuracy: 0.9816 - val_loss: 0.6913 - val_accuracy: 0.8733 Epoch 96/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0536 - accuracy: 0.9816 - val_loss: 0.6955 - val_accuracy: 0.8723 Epoch 97/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0531 - accuracy: 0.9817 - val_loss: 0.7001 - val_accuracy: 0.8724 Epoch 98/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0521 - accuracy: 0.9821 - val_loss: 0.7017 - val_accuracy: 0.8738 Epoch 99/100 125/125 [==============================] - 1s 10ms/step - loss: 0.0512 - accuracy: 0.9822 - val_loss: 0.7069 - val_accuracy: 0.8731 Epoch 100/100 125/125 [==============================] - 1s 11ms/step - loss: 0.0506 - accuracy: 0.9826 - val_loss: 0.7050 - val_accuracy: 0.8726 WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:105: Network.state_updates (from tensorflow.python.keras.engine.network) is deprecated and will be removed in a future version. Instructions for updating: This property should not be used in TensorFlow 2.0, as updates are applied automatically. INFO:tensorflow:Assets written to: s2s/assets Run inference (sampling) encode input and retrieve initial decoder state run one step of decoder with this initial state and a \"start of sequence\" token as target. Output will be the next target token. Repeat with the current target token and current states # Define sampling models # Restore the model and construct the encoder and decoder. 
model = keras.models.load_model(\"s2s\") encoder_inputs = model.input[0] # input_1 encoder_outputs, state_h_enc, state_c_enc = model.layers[2].output # lstm_1 encoder_states = [state_h_enc, state_c_enc] encoder_model = keras.Model(encoder_inputs, encoder_states) decoder_inputs = model.input[1] # input_2 decoder_state_input_h = keras.Input(shape=(latent_dim,)) decoder_state_input_c = keras.Input(shape=(latent_dim,)) decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c] decoder_lstm = model.layers[3] decoder_outputs, state_h_dec, state_c_dec = decoder_lstm( decoder_inputs, initial_state=decoder_states_inputs ) decoder_states = [state_h_dec, state_c_dec] decoder_dense = model.layers[4] decoder_outputs = decoder_dense(decoder_outputs) decoder_model = keras.Model( [decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states ) # Reverse-lookup token index to decode sequences back to # something readable. reverse_input_char_index = dict((i, char) for char, i in input_token_index.items()) reverse_target_char_index = dict((i, char) for char, i in target_token_index.items()) def decode_sequence(input_seq): # Encode the input as state vectors. states_value = encoder_model.predict(input_seq) # Generate empty target sequence of length 1. target_seq = np.zeros((1, 1, num_decoder_tokens)) # Populate the first character of target sequence with the start character. target_seq[0, 0, target_token_index[\"\t\"]] = 1.0 # Sampling loop for a batch of sequences # (to simplify, here we assume a batch of size 1). stop_condition = False decoded_sentence = \"\" while not stop_condition: output_tokens, h, c = decoder_model.predict([target_seq] + states_value) # Sample a token sampled_token_index = np.argmax(output_tokens[0, -1, :]) sampled_char = reverse_target_char_index[sampled_token_index] decoded_sentence += sampled_char # Exit condition: either hit max length # or find stop character. if sampled_char == \"\n\" or len(decoded_sentence) > max_decoder_seq_length: stop_condition = True # Update the target sequence (of length 1). target_seq = np.zeros((1, 1, num_decoder_tokens)) target_seq[0, 0, sampled_token_index] = 1.0 # Update states states_value = [h, c] return decoded_sentence You can now generate decoded sentences as such: for seq_index in range(20): # Take one sequence (part of the training set) # for trying out decoding. input_seq = encoder_input_data[seq_index : seq_index + 1] decoded_sentence = decode_sequence(input_seq) print(\"-\") print(\"Input sentence:\", input_texts[seq_index]) print(\"Decoded sentence:\", decoded_sentence) - Input sentence: Go. Decoded sentence: Va ! - Input sentence: Hi. Decoded sentence: Salut ! - Input sentence: Hi. Decoded sentence: Salut ! - Input sentence: Run! Decoded sentence: Cours ! - Input sentence: Run! Decoded sentence: Cours ! - Input sentence: Who? Decoded sentence: Qui ? - Input sentence: Wow! Decoded sentence: Ça alors ! - Input sentence: Fire! Decoded sentence: Au feu ! - Input sentence: Help! Decoded sentence: À l'aide ! - Input sentence: Jump. Decoded sentence: Saute. - Input sentence: Stop! Decoded sentence: Stop ! - Input sentence: Stop! Decoded sentence: Stop ! - Input sentence: Stop! Decoded sentence: Stop ! - Input sentence: Wait! Decoded sentence: Attendez ! - Input sentence: Wait! Decoded sentence: Attendez ! - Input sentence: Go on. Decoded sentence: Poursuis. - Input sentence: Go on. Decoded sentence: Poursuis. - Input sentence: Go on. Decoded sentence: Poursuis. - Input sentence: Hello! Decoded sentence: Salut ! 
- Input sentence: Hello! Decoded sentence: Salut ! Implement a Masked Language Model (MLM) with BERT and fine-tune it on the IMDB Reviews dataset. Introduction Masked Language Modeling is a fill-in-the-blank task, where a model uses the context words surrounding a mask token to try to predict what the masked word should be. For an input that contains one or more mask tokens, the model will generate the most likely substitution for each. Example: Input: \"I have watched this [MASK] and it was awesome.\" Output: \"I have watched this movie and it was awesome.\" Masked language modeling is a great way to train a language model in a self-supervised setting (without human-annotated labels). Such a model can then be fine-tuned to accomplish various supervised NLP tasks. This example teaches you how to build a BERT model from scratch, train it with the masked language modeling task, and then fine-tune this model on a sentiment classification task. We will use the Keras TextVectorization and MultiHeadAttention layers to create a BERT Transformer-Encoder network architecture. Note: This example should be run with tf-nightly. Setup Install tf-nightly via pip install tf-nightly. import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.layers import TextVectorization from dataclasses import dataclass import pandas as pd import numpy as np import glob import re from pprint import pprint Set-up Configuration @dataclass class Config: MAX_LEN = 256 BATCH_SIZE = 32 LR = 0.001 VOCAB_SIZE = 30000 EMBED_DIM = 128 NUM_HEAD = 8 # used in bert model FF_DIM = 128 # used in bert model NUM_LAYERS = 1 config = Config() Load the data We will first download the IMDB data and load into a Pandas dataframe. !curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -xf aclImdb_v1.tar.gz def get_text_list_from_files(files): text_list = [] for name in files: with open(name) as f: for line in f: text_list.append(line) return text_list def get_data_from_text_files(folder_name): pos_files = glob.glob(\"aclImdb/\" + folder_name + \"/pos/*.txt\") pos_texts = get_text_list_from_files(pos_files) neg_files = glob.glob(\"aclImdb/\" + folder_name + \"/neg/*.txt\") neg_texts = get_text_list_from_files(neg_files) df = pd.DataFrame( { \"review\": pos_texts + neg_texts, \"sentiment\": [0] * len(pos_texts) + [1] * len(neg_texts), } ) df = df.sample(len(df)).reset_index(drop=True) return df train_df = get_data_from_text_files(\"train\") test_df = get_data_from_text_files(\"test\") all_data = train_df.append(test_df) % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 80.2M 100 80.2M 0 0 45.3M 0 0:00:01 0:00:01 --:--:-- 45.3M Dataset preparation We will use the TextVectorization layer to vectorize the text into integer token ids. It transforms a batch of strings into either a sequence of token indices (one sample = 1D array of integer token indices, in order) or a dense representation (one sample = 1D array of float values encoding an unordered set of tokens). Below, we define 3 preprocessing functions. The get_vectorize_layer function builds the TextVectorization layer. The encode function encodes raw text into integer token ids. The get_masked_input_and_labels function will mask input token ids. It masks 15% of all input tokens in each sequence at random. def custom_standardization(input_data): lowercase = tf.strings.lower(input_data) stripped_html = tf.strings.regex_replace(lowercase, \"
\", \" \") return tf.strings.regex_replace( stripped_html, \"[%s]\" % re.escape(\"!#$%&'()*+,-./:;<=>?@\^_`{|}~\"), \"\" ) def get_vectorize_layer(texts, vocab_size, max_seq, special_tokens=[\"[MASK]\"]): \"\"\"Build Text vectorization layer Args: texts (list): List of string i.e input texts vocab_size (int): vocab size max_seq (int): Maximum sequence lenght. special_tokens (list, optional): List of special tokens. Defaults to ['[MASK]']. Returns: layers.Layer: Return TextVectorization Keras Layer \"\"\" vectorize_layer = TextVectorization( max_tokens=vocab_size, output_mode=\"int\", standardize=custom_standardization, output_sequence_length=max_seq, ) vectorize_layer.adapt(texts) # Insert mask token in vocabulary vocab = vectorize_layer.get_vocabulary() vocab = vocab[2 : vocab_size - len(special_tokens)] + [\"[mask]\"] vectorize_layer.set_vocabulary(vocab) return vectorize_layer vectorize_layer = get_vectorize_layer( all_data.review.values.tolist(), config.VOCAB_SIZE, config.MAX_LEN, special_tokens=[\"[mask]\"], ) # Get mask token id for masked language model mask_token_id = vectorize_layer([\"[mask]\"]).numpy()[0][0] def encode(texts): encoded_texts = vectorize_layer(texts) return encoded_texts.numpy() def get_masked_input_and_labels(encoded_texts): # 15% BERT masking inp_mask = np.random.rand(*encoded_texts.shape) < 0.15 # Do not mask special tokens inp_mask[encoded_texts <= 2] = False # Set targets to -1 by default, it means ignore labels = -1 * np.ones(encoded_texts.shape, dtype=int) # Set labels for masked tokens labels[inp_mask] = encoded_texts[inp_mask] # Prepare input encoded_texts_masked = np.copy(encoded_texts) # Set input to [MASK] which is the last token for the 90% of tokens # This means leaving 10% unchanged inp_mask_2mask = inp_mask & (np.random.rand(*encoded_texts.shape) < 0.90) encoded_texts_masked[ inp_mask_2mask ] = mask_token_id # mask token is the last in the dict # Set 10% to a random token inp_mask_2random = inp_mask_2mask & (np.random.rand(*encoded_texts.shape) < 1 / 9) encoded_texts_masked[inp_mask_2random] = np.random.randint( 3, mask_token_id, inp_mask_2random.sum() ) # Prepare sample_weights to pass to .fit() method sample_weights = np.ones(labels.shape) sample_weights[labels == -1] = 0 # y_labels would be same as encoded_texts i.e input tokens y_labels = np.copy(encoded_texts) return encoded_texts_masked, y_labels, sample_weights # We have 25000 examples for training x_train = encode(train_df.review.values) # encode reviews with vectorizer y_train = train_df.sentiment.values train_classifier_ds = ( tf.data.Dataset.from_tensor_slices((x_train, y_train)) .shuffle(1000) .batch(config.BATCH_SIZE) ) # We have 25000 examples for testing x_test = encode(test_df.review.values) y_test = test_df.sentiment.values test_classifier_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch( config.BATCH_SIZE ) # Build dataset for end to end model input (will be used at the end) test_raw_classifier_ds = tf.data.Dataset.from_tensor_slices( (test_df.review.values, y_test) ).batch(config.BATCH_SIZE) # Prepare data for masked language model x_all_review = encode(all_data.review.values) x_masked_train, y_masked_labels, sample_weights = get_masked_input_and_labels( x_all_review ) mlm_ds = tf.data.Dataset.from_tensor_slices( (x_masked_train, y_masked_labels, sample_weights) ) mlm_ds = mlm_ds.shuffle(1000).batch(config.BATCH_SIZE) Create BERT model (Pretraining Model) for masked language modeling We will create a BERT-like pretraining model architecture using the 
MultiHeadAttention layer. It will take token ids as inputs (including masked tokens) and it will predict the correct ids for the masked input tokens. def bert_module(query, key, value, i): # Multi headed self-attention attention_output = layers.MultiHeadAttention( num_heads=config.NUM_HEAD, key_dim=config.EMBED_DIM // config.NUM_HEAD, name=\"encoder_{}/multiheadattention\".format(i), )(query, key, value) attention_output = layers.Dropout(0.1, name=\"encoder_{}/att_dropout\".format(i))( attention_output ) attention_output = layers.LayerNormalization( epsilon=1e-6, name=\"encoder_{}/att_layernormalization\".format(i) )(query + attention_output) # Feed-forward layer ffn = keras.Sequential( [ layers.Dense(config.FF_DIM, activation=\"relu\"), layers.Dense(config.EMBED_DIM), ], name=\"encoder_{}/ffn\".format(i), ) ffn_output = ffn(attention_output) ffn_output = layers.Dropout(0.1, name=\"encoder_{}/ffn_dropout\".format(i))( ffn_output ) sequence_output = layers.LayerNormalization( epsilon=1e-6, name=\"encoder_{}/ffn_layernormalization\".format(i) )(attention_output + ffn_output) return sequence_output def get_pos_encoding_matrix(max_len, d_emb): pos_enc = np.array( [ [pos / np.power(10000, 2 * (j // 2) / d_emb) for j in range(d_emb)] if pos != 0 else np.zeros(d_emb) for pos in range(max_len) ] ) pos_enc[1:, 0::2] = np.sin(pos_enc[1:, 0::2]) # dim 2i pos_enc[1:, 1::2] = np.cos(pos_enc[1:, 1::2]) # dim 2i+1 return pos_enc loss_fn = keras.losses.SparseCategoricalCrossentropy( reduction=tf.keras.losses.Reduction.NONE ) loss_tracker = tf.keras.metrics.Mean(name=\"loss\") class MaskedLanguageModel(tf.keras.Model): def train_step(self, inputs): if len(inputs) == 3: features, labels, sample_weight = inputs else: features, labels = inputs sample_weight = None with tf.GradientTape() as tape: predictions = self(features, training=True) loss = loss_fn(labels, predictions, sample_weight=sample_weight) # Compute gradients trainable_vars = self.trainable_variables gradients = tape.gradient(loss, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Compute our own metrics loss_tracker.update_state(loss, sample_weight=sample_weight) # Return a dict mapping metric names to current value return {\"loss\": loss_tracker.result()} @property def metrics(self): # We list our `Metric` objects here so that `reset_states()` can be # called automatically at the start of each epoch # or at the start of `evaluate()`. # If you don't implement this property, you have to call # `reset_states()` yourself at the time of your choosing. 
return [loss_tracker] def create_masked_language_bert_model(): inputs = layers.Input((config.MAX_LEN,), dtype=tf.int64) word_embeddings = layers.Embedding( config.VOCAB_SIZE, config.EMBED_DIM, name=\"word_embedding\" )(inputs) position_embeddings = layers.Embedding( input_dim=config.MAX_LEN, output_dim=config.EMBED_DIM, weights=[get_pos_encoding_matrix(config.MAX_LEN, config.EMBED_DIM)], name=\"position_embedding\", )(tf.range(start=0, limit=config.MAX_LEN, delta=1)) embeddings = word_embeddings + position_embeddings encoder_output = embeddings for i in range(config.NUM_LAYERS): encoder_output = bert_module(encoder_output, encoder_output, encoder_output, i) mlm_output = layers.Dense(config.VOCAB_SIZE, name=\"mlm_cls\", activation=\"softmax\")( encoder_output ) mlm_model = MaskedLanguageModel(inputs, mlm_output, name=\"masked_bert_model\") optimizer = keras.optimizers.Adam(learning_rate=config.LR) mlm_model.compile(optimizer=optimizer) return mlm_model id2token = dict(enumerate(vectorize_layer.get_vocabulary())) token2id = {y: x for x, y in id2token.items()} class MaskedTextGenerator(keras.callbacks.Callback): def __init__(self, sample_tokens, top_k=5): self.sample_tokens = sample_tokens self.k = top_k def decode(self, tokens): return \" \".join([id2token[t] for t in tokens if t != 0]) def convert_ids_to_tokens(self, id): return id2token[id] def on_epoch_end(self, epoch, logs=None): prediction = self.model.predict(self.sample_tokens) masked_index = np.where(self.sample_tokens == mask_token_id) masked_index = masked_index[1] mask_prediction = prediction[0][masked_index] top_indices = mask_prediction[0].argsort()[-self.k :][::-1] values = mask_prediction[0][top_indices] for i in range(len(top_indices)): p = top_indices[i] v = values[i] tokens = np.copy(sample_tokens[0]) tokens[masked_index[0]] = p result = { \"input_text\": self.decode(sample_tokens[0].numpy()), \"prediction\": self.decode(tokens), \"probability\": v, \"predicted mask token\": self.convert_ids_to_tokens(p), } pprint(result) sample_tokens = vectorize_layer([\"I have watched this [mask] and it was awesome\"]) generator_callback = MaskedTextGenerator(sample_tokens.numpy()) bert_masked_model = create_masked_language_bert_model() bert_masked_model.summary() Model: \"masked_bert_model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 256)] 0 __________________________________________________________________________________________________ word_embedding (Embedding) (None, 256, 128) 3840000 input_1[0][0] __________________________________________________________________________________________________ tf.__operators__.add (TFOpLambd (None, 256, 128) 0 word_embedding[0][0] __________________________________________________________________________________________________ encoder_0/multiheadattention (M (None, 256, 128) 66048 tf.__operators__.add[0][0] tf.__operators__.add[0][0] tf.__operators__.add[0][0] __________________________________________________________________________________________________ encoder_0/att_dropout (Dropout) (None, 256, 128) 0 encoder_0/multiheadattention[0][0 __________________________________________________________________________________________________ tf.__operators__.add_1 (TFOpLam (None, 256, 128) 0 tf.__operators__.add[0][0] encoder_0/att_dropout[0][0] 
__________________________________________________________________________________________________ encoder_0/att_layernormalizatio (None, 256, 128) 256 tf.__operators__.add_1[0][0] __________________________________________________________________________________________________ encoder_0/ffn (Sequential) (None, 256, 128) 33024 encoder_0/att_layernormalization[ __________________________________________________________________________________________________ encoder_0/ffn_dropout (Dropout) (None, 256, 128) 0 encoder_0/ffn[0][0] __________________________________________________________________________________________________ tf.__operators__.add_2 (TFOpLam (None, 256, 128) 0 encoder_0/att_layernormalization[ encoder_0/ffn_dropout[0][0] __________________________________________________________________________________________________ encoder_0/ffn_layernormalizatio (None, 256, 128) 256 tf.__operators__.add_2[0][0] __________________________________________________________________________________________________ mlm_cls (Dense) (None, 256, 30000) 3870000 encoder_0/ffn_layernormalization[ ================================================================================================== Total params: 7,809,584 Trainable params: 7,809,584 Non-trainable params: 0 __________________________________________________________________________________________________ Train and Save bert_masked_model.fit(mlm_ds, epochs=5, callbacks=[generator_callback]) bert_masked_model.save(\"bert_mlm_imdb.h5\") Epoch 1/5 1563/1563 [==============================] - ETA: 0s - loss: 7.0111{'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'this', 'prediction': 'i have watched this this and it was awesome', 'probability': 0.086307295} {'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'i', 'prediction': 'i have watched this i and it was awesome', 'probability': 0.066265985} {'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'movie', 'prediction': 'i have watched this movie and it was awesome', 'probability': 0.044195656} {'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'a', 'prediction': 'i have watched this a and it was awesome', 'probability': 0.04020928} {'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'was', 'prediction': 'i have watched this was and it was awesome', 'probability': 0.027878676} 1563/1563 [==============================] - 661s 423ms/step - loss: 7.0111 Epoch 2/5 1563/1563 [==============================] - ETA: 0s - loss: 6.4498{'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'movie', 'prediction': 'i have watched this movie and it was awesome', 'probability': 0.44448906} {'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'film', 'prediction': 'i have watched this film and it was awesome', 'probability': 0.1507494} {'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'is', 'prediction': 'i have watched this is and it was awesome', 'probability': 0.06385628} {'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'one', 'prediction': 'i have watched this one and it was awesome', 'probability': 0.023549262} {'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'was', 'prediction': 'i have watched this was and it was awesome', 'probability': 
0.022277055} 1563/1563 [==============================] - 660s 422ms/step - loss: 6.4498 Epoch 3/5 1563/1563 [==============================] - ETA: 0s - loss: 5.8709{'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'movie', 'prediction': 'i have watched this movie and it was awesome', 'probability': 0.4759983} {'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'film', 'prediction': 'i have watched this film and it was awesome', 'probability': 0.18642229} {'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'one', 'prediction': 'i have watched this one and it was awesome', 'probability': 0.045611132} {'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'is', 'prediction': 'i have watched this is and it was awesome', 'probability': 0.028308254} {'input_text': 'i have watched this [mask] and it was awesome', 'predicted mask token': 'series', 'prediction': 'i have watched this series and it was awesome', 'probability': 0.027862877} 1563/1563 [==============================] - 661s 423ms/step - loss: 5.8709 Epoch 4/5 771/1563 [=============>................] - ETA: 5:35 - loss: 5.3782 Fine-tune a sentiment classification model We will fine-tune our self-supervised model on a downstream task of sentiment classification. To do this, let's create a classifier by adding a pooling layer and a Dense layer on top of the pretrained BERT features. # Load pretrained bert model mlm_model = keras.models.load_model( \"bert_mlm_imdb.h5\", custom_objects={\"MaskedLanguageModel\": MaskedLanguageModel} ) pretrained_bert_model = tf.keras.Model( mlm_model.input, mlm_model.get_layer(\"encoder_0/ffn_layernormalization\").output ) # Freeze it pretrained_bert_model.trainable = False def create_classifier_bert_model(): inputs = layers.Input((config.MAX_LEN,), dtype=tf.int64) sequence_output = pretrained_bert_model(inputs) pooled_output = layers.GlobalMaxPooling1D()(sequence_output) hidden_layer = layers.Dense(64, activation=\"relu\")(pooled_output) outputs = layers.Dense(1, activation=\"sigmoid\")(hidden_layer) classifer_model = keras.Model(inputs, outputs, name=\"classification\") optimizer = keras.optimizers.Adam() classifer_model.compile( optimizer=optimizer, loss=\"binary_crossentropy\", metrics=[\"accuracy\"] ) return classifer_model classifer_model = create_classifier_bert_model() classifer_model.summary() # Train the classifier with frozen BERT stage classifer_model.fit( train_classifier_ds, epochs=5, validation_data=test_classifier_ds, ) # Unfreeze the BERT model for fine-tuning pretrained_bert_model.trainable = True optimizer = keras.optimizers.Adam() classifer_model.compile( optimizer=optimizer, loss=\"binary_crossentropy\", metrics=[\"accuracy\"] ) classifer_model.fit( train_classifier_ds, epochs=5, validation_data=test_classifier_ds, ) Model: \"classification\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, 256)] 0 _________________________________________________________________ model (Functional) (None, 256, 128) 3939584 _________________________________________________________________ global_max_pooling1d (Global (None, 128) 0 _________________________________________________________________ dense_2 (Dense) (None, 64) 8256 _________________________________________________________________ dense_3 (Dense) (None, 1) 
65 ================================================================= Total params: 3,947,905 Trainable params: 8,321 Non-trainable params: 3,939,584 _________________________________________________________________ Epoch 1/5 782/782 [==============================] - 15s 19ms/step - loss: 0.8096 - accuracy: 0.5498 - val_loss: 0.6406 - val_accuracy: 0.6329 Epoch 2/5 782/782 [==============================] - 14s 18ms/step - loss: 0.6551 - accuracy: 0.6220 - val_loss: 0.6423 - val_accuracy: 0.6338 Epoch 3/5 782/782 [==============================] - 14s 18ms/step - loss: 0.6473 - accuracy: 0.6310 - val_loss: 0.6380 - val_accuracy: 0.6350 Epoch 4/5 782/782 [==============================] - 14s 18ms/step - loss: 0.6307 - accuracy: 0.6471 - val_loss: 0.6432 - val_accuracy: 0.6312 Epoch 5/5 782/782 [==============================] - 14s 18ms/step - loss: 0.6278 - accuracy: 0.6465 - val_loss: 0.6107 - val_accuracy: 0.6678 Epoch 1/5 782/782 [==============================] - 46s 59ms/step - loss: 0.5234 - accuracy: 0.7373 - val_loss: 0.3533 - val_accuracy: 0.8427 Epoch 2/5 782/782 [==============================] - 45s 57ms/step - loss: 0.2808 - accuracy: 0.8814 - val_loss: 0.3252 - val_accuracy: 0.8633 Epoch 3/5 782/782 [==============================] - 43s 55ms/step - loss: 0.1493 - accuracy: 0.9413 - val_loss: 0.4374 - val_accuracy: 0.8486 Epoch 4/5 782/782 [==============================] - 43s 55ms/step - loss: 0.0600 - accuracy: 0.9803 - val_loss: 0.6422 - val_accuracy: 0.8380 Epoch 5/5 782/782 [==============================] - 43s 55ms/step - loss: 0.0305 - accuracy: 0.9893 - val_loss: 0.6064 - val_accuracy: 0.8440 Create an end-to-end model and evaluate it When you want to deploy a model, it's best if it already includes its preprocessing pipeline, so that you don't have to reimplement the preprocessing logic in your production environment. Let's create an end-to-end model that incorporates the TextVectorization layer, and let's evaluate. Our model will accept raw strings as input. def get_end_to_end(model): inputs_string = keras.Input(shape=(1,), dtype=\"string\") indices = vectorize_layer(inputs_string) outputs = model(indices) end_to_end_model = keras.Model(inputs_string, outputs, name=\"end_to_end_model\") optimizer = keras.optimizers.Adam(learning_rate=config.LR) end_to_end_model.compile( optimizer=optimizer, loss=\"binary_crossentropy\", metrics=[\"accuracy\"] ) return end_to_end_model end_to_end_classification_model = get_end_to_end(classifer_model) end_to_end_classification_model.evaluate(test_raw_classifier_ds) 782/782 [==============================] - 8s 11ms/step - loss: 0.5967 - accuracy: 0.8446 [0.6064175963401794, 0.8439599871635437] Implementing a sequence-to-sequence Transformer and training it on a machine translation task. Introduction In this example, we'll build a sequence-to-sequence Transformer model, which we'll train on an English-to-Spanish machine translation task. You'll learn how to: Vectorize text using the Keras TextVectorization layer. Implement a TransformerEncoder layer, a TransformerDecoder layer, and a PositionalEmbedding layer. Prepare data for training a sequence-to-sequence model. Use the trained model to generate translations of never-seen-before input sentences (sequence-to-sequence inference). The code featured here is adapted from the book Deep Learning with Python, Second Edition (chapter 11: Deep learning for text). 
The present example is fairly barebones, so for detailed explanations of how each building block works, as well as the theory behind Transformers, I recommend reading the book. Setup import pathlib import random import string import re import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.layers import TextVectorization Downloading the data We'll be working with an English-to-Spanish translation dataset provided by Anki. Let's download it: text_file = keras.utils.get_file( fname=\"spa-eng.zip\", origin=\"http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip\", extract=True, ) text_file = pathlib.Path(text_file).parent / \"spa-eng\" / \"spa.txt\" Parsing the data Each line contains an English sentence and its corresponding Spanish sentence. The English sentence is the source sequence and Spanish one is the target sequence. We prepend the token \"[start]\" and we append the token \"[end]\" to the Spanish sentence. with open(text_file) as f: lines = f.read().split(\"\n\")[:-1] text_pairs = [] for line in lines: eng, spa = line.split(\"\t\") spa = \"[start] \" + spa + \" [end]\" text_pairs.append((eng, spa)) Here's what our sentence pairs look like: for _ in range(5): print(random.choice(text_pairs)) (\"You can dance, can't you?\", '[start] Puedes bailar, ¿verdad? [end]') ('I passed by her house yesterday.', '[start] Me pasé por su casa ayer. [end]') ('I like tulips.', '[start] Me gustan los tulipanes. [end]') ('He is fluent in French.', '[start] Habla un francés fluido. [end]') ('Tom asked me what I had been doing.', '[start] Tom me preguntó qué había estado haciendo. [end]') Now, let's split the sentence pairs into a training set, a validation set, and a test set. random.shuffle(text_pairs) num_val_samples = int(0.15 * len(text_pairs)) num_train_samples = len(text_pairs) - 2 * num_val_samples train_pairs = text_pairs[:num_train_samples] val_pairs = text_pairs[num_train_samples : num_train_samples + num_val_samples] test_pairs = text_pairs[num_train_samples + num_val_samples :] print(f\"{len(text_pairs)} total pairs\") print(f\"{len(train_pairs)} training pairs\") print(f\"{len(val_pairs)} validation pairs\") print(f\"{len(test_pairs)} test pairs\") 118964 total pairs 83276 training pairs 17844 validation pairs 17844 test pairs Vectorizing the text data We'll use two instances of the TextVectorization layer to vectorize the text data (one for English and one for Spanish), that is to say, to turn the original strings into integer sequences where each integer represents the index of a word in a vocabulary. The English layer will use the default string standardization (strip punctuation characters) and splitting scheme (split on whitespace), while the Spanish layer will use a custom standardization, where we add the character \"¿\" to the set of punctuation characters to be stripped. Note: in a production-grade machine translation model, I would not recommend stripping the punctuation characters in either language. Instead, I would recommend turning each punctuation character into its own token, which you could achieve by providing a custom split function to the TextVectorization layer. 
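To make that alternative concrete, here is a minimal, hypothetical sketch of such a setup. The names punct_preserving_standardization, whitespace_split, and punct_aware_vectorization are illustrative only (not part of this example): the standardization pads every punctuation character with spaces, and an explicit split callable is passed to TextVectorization, so each punctuation mark becomes its own token instead of being stripped.

import re
import string
import tensorflow as tf
from tensorflow.keras.layers import TextVectorization

punct = string.punctuation + "¿"

def punct_preserving_standardization(input_string):
    # Lowercase, then surround each punctuation character with spaces,
    # e.g. "¿puedes bailar?" -> " ¿ puedes bailar ? "
    lowercase = tf.strings.lower(input_string)
    return tf.strings.regex_replace(lowercase, "([%s])" % re.escape(punct), r" \1 ")

def whitespace_split(input_string):
    # Custom split callable: TextVectorization expects a RaggedTensor of tokens.
    return tf.strings.split(input_string)

punct_aware_vectorization = TextVectorization(
    max_tokens=15000,  # mirrors vocab_size used below
    output_mode="int",
    output_sequence_length=20,  # mirrors sequence_length used below
    standardize=punct_preserving_standardization,
    split=whitespace_split,
)

The example itself sticks with the simpler strip-punctuation approach, which is set up next.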
strip_chars = string.punctuation + \"¿\" strip_chars = strip_chars.replace(\"[\", \"\") strip_chars = strip_chars.replace(\"]\", \"\") vocab_size = 15000 sequence_length = 20 batch_size = 64 def custom_standardization(input_string): lowercase = tf.strings.lower(input_string) return tf.strings.regex_replace(lowercase, \"[%s]\" % re.escape(strip_chars), \"\") eng_vectorization = TextVectorization( max_tokens=vocab_size, output_mode=\"int\", output_sequence_length=sequence_length, ) spa_vectorization = TextVectorization( max_tokens=vocab_size, output_mode=\"int\", output_sequence_length=sequence_length + 1, standardize=custom_standardization, ) train_eng_texts = [pair[0] for pair in train_pairs] train_spa_texts = [pair[1] for pair in train_pairs] eng_vectorization.adapt(train_eng_texts) spa_vectorization.adapt(train_spa_texts) Next, we'll format our datasets. At each training step, the model will seek to predict target words N+1 (and beyond) using the source sentence and the target words 0 to N. As such, the training dataset will yield a tuple (inputs, targets), where: inputs is a dictionary with the keys encoder_inputs and decoder_inputs. encoder_inputs is the vectorized source sentence and decoder_inputs is the target sentence \"so far\", that is to say, the words 0 to N used to predict word N+1 (and beyond) in the target sentence. targets is the target sentence offset by one step: it provides the next words in the target sentence -- what the model will try to predict. def format_dataset(eng, spa): eng = eng_vectorization(eng) spa = spa_vectorization(spa) return ({\"encoder_inputs\": eng, \"decoder_inputs\": spa[:, :-1],}, spa[:, 1:]) def make_dataset(pairs): eng_texts, spa_texts = zip(*pairs) eng_texts = list(eng_texts) spa_texts = list(spa_texts) dataset = tf.data.Dataset.from_tensor_slices((eng_texts, spa_texts)) dataset = dataset.batch(batch_size) dataset = dataset.map(format_dataset) return dataset.shuffle(2048).prefetch(16).cache() train_ds = make_dataset(train_pairs) val_ds = make_dataset(val_pairs) Let's take a quick look at the sequence shapes (we have batches of 64 pairs, and all sequences are 20 steps long): for inputs, targets in train_ds.take(1): print(f'inputs[\"encoder_inputs\"].shape: {inputs[\"encoder_inputs\"].shape}') print(f'inputs[\"decoder_inputs\"].shape: {inputs[\"decoder_inputs\"].shape}') print(f\"targets.shape: {targets.shape}\") inputs[\"encoder_inputs\"].shape: (64, 20) inputs[\"decoder_inputs\"].shape: (64, 20) targets.shape: (64, 20) Building the model Our sequence-to-sequence Transformer consists of a TransformerEncoder and a TransformerDecoder chained together. To make the model aware of word order, we also use a PositionalEmbedding layer. The source sequence will be passed to the TransformerEncoder, which will produce a new representation of it. This new representation will then be passed to the TransformerDecoder, together with the target sequence so far (target words 0 to N). The TransformerDecoder will then seek to predict the next words in the target sequence (N+1 and beyond). A key detail that makes this possible is causal masking (see method get_causal_attention_mask() on the TransformerDecoder). The TransformerDecoder sees the entire sequences at once, and thus we must make sure that it only uses information from target tokens 0 to N when predicting token N+1 (otherwise, it could use information from the future, which would result in a model that cannot be used at inference time).
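As a quick standalone illustration (separate from the model code below), the same i >= j construction used inside get_causal_attention_mask() produces a lower-triangular pattern; here it is for a toy sequence of length 5:

import tensorflow as tf

seq_len = 5
i = tf.range(seq_len)[:, tf.newaxis]
j = tf.range(seq_len)
causal_mask = tf.cast(i >= j, dtype="int32")
print(causal_mask.numpy())
# [[1 0 0 0 0]
#  [1 1 0 0 0]
#  [1 1 1 0 0]
#  [1 1 1 1 0]
#  [1 1 1 1 1]]

Row t has ones only in columns 0 to t, so position t can attend to itself and to earlier target tokens, but never to later ones.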
class TransformerEncoder(layers.Layer): def __init__(self, embed_dim, dense_dim, num_heads, **kwargs): super(TransformerEncoder, self).__init__(**kwargs) self.embed_dim = embed_dim self.dense_dim = dense_dim self.num_heads = num_heads self.attention = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim ) self.dense_proj = keras.Sequential( [layers.Dense(dense_dim, activation=\"relu\"), layers.Dense(embed_dim),] ) self.layernorm_1 = layers.LayerNormalization() self.layernorm_2 = layers.LayerNormalization() self.supports_masking = True def call(self, inputs, mask=None): if mask is not None: padding_mask = tf.cast(mask[:, tf.newaxis, tf.newaxis, :], dtype=\"int32\") attention_output = self.attention( query=inputs, value=inputs, key=inputs, attention_mask=padding_mask ) proj_input = self.layernorm_1(inputs + attention_output) proj_output = self.dense_proj(proj_input) return self.layernorm_2(proj_input + proj_output) class PositionalEmbedding(layers.Layer): def __init__(self, sequence_length, vocab_size, embed_dim, **kwargs): super(PositionalEmbedding, self).__init__(**kwargs) self.token_embeddings = layers.Embedding( input_dim=vocab_size, output_dim=embed_dim ) self.position_embeddings = layers.Embedding( input_dim=sequence_length, output_dim=embed_dim ) self.sequence_length = sequence_length self.vocab_size = vocab_size self.embed_dim = embed_dim def call(self, inputs): length = tf.shape(inputs)[-1] positions = tf.range(start=0, limit=length, delta=1) embedded_tokens = self.token_embeddings(inputs) embedded_positions = self.position_embeddings(positions) return embedded_tokens + embedded_positions def compute_mask(self, inputs, mask=None): return tf.math.not_equal(inputs, 0) class TransformerDecoder(layers.Layer): def __init__(self, embed_dim, latent_dim, num_heads, **kwargs): super(TransformerDecoder, self).__init__(**kwargs) self.embed_dim = embed_dim self.latent_dim = latent_dim self.num_heads = num_heads self.attention_1 = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim ) self.attention_2 = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim ) self.dense_proj = keras.Sequential( [layers.Dense(latent_dim, activation=\"relu\"), layers.Dense(embed_dim),] ) self.layernorm_1 = layers.LayerNormalization() self.layernorm_2 = layers.LayerNormalization() self.layernorm_3 = layers.LayerNormalization() self.supports_masking = True def call(self, inputs, encoder_outputs, mask=None): causal_mask = self.get_causal_attention_mask(inputs) if mask is not None: padding_mask = tf.cast(mask[:, tf.newaxis, :], dtype=\"int32\") padding_mask = tf.minimum(padding_mask, causal_mask) attention_output_1 = self.attention_1( query=inputs, value=inputs, key=inputs, attention_mask=causal_mask ) out_1 = self.layernorm_1(inputs + attention_output_1) attention_output_2 = self.attention_2( query=out_1, value=encoder_outputs, key=encoder_outputs, attention_mask=padding_mask, ) out_2 = self.layernorm_2(out_1 + attention_output_2) proj_output = self.dense_proj(out_2) return self.layernorm_3(out_2 + proj_output) def get_causal_attention_mask(self, inputs): input_shape = tf.shape(inputs) batch_size, sequence_length = input_shape[0], input_shape[1] i = tf.range(sequence_length)[:, tf.newaxis] j = tf.range(sequence_length) mask = tf.cast(i >= j, dtype=\"int32\") mask = tf.reshape(mask, (1, input_shape[1], input_shape[1])) mult = tf.concat( [tf.expand_dims(batch_size, -1), tf.constant([1, 1], dtype=tf.int32)], axis=0, ) return tf.tile(mask, mult) Next, we assemble the end-to-end model. 
embed_dim = 256 latent_dim = 2048 num_heads = 8 encoder_inputs = keras.Input(shape=(None,), dtype=\"int64\", name=\"encoder_inputs\") x = PositionalEmbedding(sequence_length, vocab_size, embed_dim)(encoder_inputs) encoder_outputs = TransformerEncoder(embed_dim, latent_dim, num_heads)(x) encoder = keras.Model(encoder_inputs, encoder_outputs) decoder_inputs = keras.Input(shape=(None,), dtype=\"int64\", name=\"decoder_inputs\") encoded_seq_inputs = keras.Input(shape=(None, embed_dim), name=\"decoder_state_inputs\") x = PositionalEmbedding(sequence_length, vocab_size, embed_dim)(decoder_inputs) x = TransformerDecoder(embed_dim, latent_dim, num_heads)(x, encoded_seq_inputs) x = layers.Dropout(0.5)(x) decoder_outputs = layers.Dense(vocab_size, activation=\"softmax\")(x) decoder = keras.Model([decoder_inputs, encoded_seq_inputs], decoder_outputs) decoder_outputs = decoder([decoder_inputs, encoder_outputs]) transformer = keras.Model( [encoder_inputs, decoder_inputs], decoder_outputs, name=\"transformer\" ) Training our model We'll use accuracy as a quick way to monitor training progress on the validation data. Note that machine translation typically uses BLEU scores as well as other metrics, rather than accuracy. Here we only train for 1 epoch, but to get the model to actually converge you should train for at least 30 epochs. epochs = 1 # This should be at least 30 for convergence transformer.summary() transformer.compile( \"rmsprop\", loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"] ) transformer.fit(train_ds, epochs=epochs, validation_data=val_ds) Model: \"transformer\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== encoder_inputs (InputLayer) [(None, None)] 0 __________________________________________________________________________________________________ positional_embedding (Positiona (None, None, 256) 3845120 encoder_inputs[0][0] __________________________________________________________________________________________________ decoder_inputs (InputLayer) [(None, None)] 0 __________________________________________________________________________________________________ transformer_encoder (Transforme (None, None, 256) 3155456 positional_embedding[0][0] __________________________________________________________________________________________________ model_1 (Functional) (None, None, 15000) 12959640 decoder_inputs[0][0] transformer_encoder[0][0] ================================================================================================== Total params: 19,960,216 Trainable params: 19,960,216 Non-trainable params: 0 __________________________________________________________________________________________________ 1302/1302 [==============================] - 1297s 993ms/step - loss: 1.6495 - accuracy: 0.4284 - val_loss: 1.2843 - val_accuracy: 0.5211 Decoding test sentences Finally, let's demonstrate how to translate brand new English sentences. We simply feed into the model the vectorized English sentence as well as the target token \"[start]\", then we repeatedly generated the next token, until we hit the token \"[end]\". 
spa_vocab = spa_vectorization.get_vocabulary() spa_index_lookup = dict(zip(range(len(spa_vocab)), spa_vocab)) max_decoded_sentence_length = 20 def decode_sequence(input_sentence): tokenized_input_sentence = eng_vectorization([input_sentence]) decoded_sentence = \"[start]\" for i in range(max_decoded_sentence_length): tokenized_target_sentence = spa_vectorization([decoded_sentence])[:, :-1] predictions = transformer([tokenized_input_sentence, tokenized_target_sentence]) sampled_token_index = np.argmax(predictions[0, i, :]) sampled_token = spa_index_lookup[sampled_token_index] decoded_sentence += \" \" + sampled_token if sampled_token == \"[end]\": break return decoded_sentence test_eng_texts = [pair[0] for pair in test_pairs] for _ in range(30): input_sentence = random.choice(test_eng_texts) translated = decode_sequence(input_sentence) After 30 epochs, we get results such as: She handed him the money. [start] ella le pasó el dinero [end] Tom has never heard Mary sing. [start] tom nunca ha oído cantar a mary [end] Perhaps she will come tomorrow. [start] tal vez ella vendrá mañana [end] I love to write. [start] me encanta escribir [end] His French is improving little by little. [start] su francés va a [UNK] sólo un poco [end] My hotel told me to call you. [start] mi hotel me dijo que te [UNK] [end] Implementing a large-scale multi-label text classification model. Introduction In this example, we will build a multi-label text classifier to predict the subject areas of arXiv papers from their abstract bodies. This type of classifier can be useful for conference submission portals like OpenReview. Given a paper abstract, the portal could provide suggestions for which areas the paper would best belong to. The dataset was collected using the arXiv Python library that provides a wrapper around the original arXiv API. To learn more about the data collection process, please refer to this notebook. Additionally, you can also find the dataset on Kaggle. Imports from tensorflow.keras import layers from tensorflow import keras import tensorflow as tf from sklearn.model_selection import train_test_split from ast import literal_eval import matplotlib.pyplot as plt import pandas as pd import numpy as np Perform exploratory data analysis In this section, we first load the dataset into a pandas dataframe and then perform some basic exploratory data analysis (EDA). arxiv_data = pd.read_csv( \"https://github.com/soumik12345/multi-label-text-classification/releases/download/v0.2/arxiv_data.csv\" ) arxiv_data.head() titles summaries terms 0 Survey on Semantic Stereo Matching / Semantic ... Stereo matching is one of the widely used tech... ['cs.CV', 'cs.LG'] 1 FUTURE-AI: Guiding Principles and Consensus Re... The recent advancements in artificial intellig... ['cs.CV', 'cs.AI', 'cs.LG'] 2 Enforcing Mutual Consistency of Hard Regions f... In this paper, we proposed a novel mutual cons... ['cs.CV', 'cs.AI'] 3 Parameter Decoupling Strategy for Semi-supervi... Consistency training has proven to be an advan... ['cs.CV'] 4 Background-Foreground Segmentation for Interio... To ensure safety in automated driving, the cor... ['cs.CV', 'cs.LG'] Our text features are present in the summaries column and their corresponding labels are in terms. As you can notice, there are multiple categories associated with a particular entry. print(f\"There are {len(arxiv_data)} rows in the dataset.\") There are 51774 rows in the dataset. Real-world data is noisy. One of the most commonly observed source of noise is data duplication. 
Here we notice that our initial dataset has about 13k duplicate entries. total_duplicate_titles = sum(arxiv_data[\"titles\"].duplicated()) print(f\"There are {total_duplicate_titles} duplicate titles.\") There are 12802 duplicate titles. Before proceeding further, we drop these entries. arxiv_data = arxiv_data[~arxiv_data[\"titles\"].duplicated()] print(f\"There are {len(arxiv_data)} rows in the deduplicated dataset.\") # There are some terms with occurrence as low as 1. print(sum(arxiv_data[\"terms\"].value_counts() == 1)) # How many unique terms? print(arxiv_data[\"terms\"].nunique()) There are 38972 rows in the deduplicated dataset. 2321 3157 As observed above, out of 3,157 unique combinations of terms, 2,321 occur only once. To prepare our train, validation, and test sets with stratification, we need to drop these rare terms. # Filtering the rare terms. arxiv_data_filtered = arxiv_data.groupby(\"terms\").filter(lambda x: len(x) > 1) arxiv_data_filtered.shape (36651, 3) Convert the string labels to lists of strings The initial labels are represented as raw strings. Here we make them List[str] for a more compact representation. arxiv_data_filtered[\"terms\"] = arxiv_data_filtered[\"terms\"].apply( lambda x: literal_eval(x) ) arxiv_data_filtered[\"terms\"].values[:5] array([list(['cs.CV', 'cs.LG']), list(['cs.CV', 'cs.AI', 'cs.LG']), list(['cs.CV', 'cs.AI']), list(['cs.CV']), list(['cs.CV', 'cs.LG'])], dtype=object) Use stratified splits because of class imbalance The dataset has a class imbalance problem. So, to have a fair evaluation result, we need to ensure the datasets are sampled with stratification. To know more about different strategies to deal with the class imbalance problem, you can follow this tutorial. For an end-to-end demonstration of classification with imbalanced data, refer to Imbalanced classification: credit card fraud detection. test_split = 0.1 # Initial train and test split. train_df, test_df = train_test_split( arxiv_data_filtered, test_size=test_split, stratify=arxiv_data_filtered[\"terms\"].values, ) # Splitting the test set further into validation # and new test sets. val_df = test_df.sample(frac=0.5) test_df.drop(val_df.index, inplace=True) print(f\"Number of rows in training set: {len(train_df)}\") print(f\"Number of rows in validation set: {len(val_df)}\") print(f\"Number of rows in test set: {len(test_df)}\") Number of rows in training set: 32985 Number of rows in validation set: 1833 Number of rows in test set: 1833 Multi-label binarization Now we preprocess our labels using the StringLookup layer.
terms = tf.ragged.constant(train_df[\"terms\"].values) lookup = tf.keras.layers.StringLookup(output_mode=\"multi_hot\") lookup.adapt(terms) vocab = lookup.get_vocabulary() def invert_multi_hot(encoded_labels): \"\"\"Reverse a single multi-hot encoded label to a tuple of vocab terms.\"\"\" hot_indices = np.argwhere(encoded_labels == 1.0)[..., 0] return np.take(vocab, hot_indices) print(\"Vocabulary:\n\") print(vocab) Vocabulary: ['[UNK]', 'cs.CV', 'cs.LG', 'stat.ML', 'cs.AI', 'eess.IV', 'cs.RO', 'cs.CL', 'cs.NE', 'cs.CR', 'math.OC', 'eess.SP', 'cs.GR', 'cs.SI', 'cs.MM', 'cs.SY', 'cs.IR', 'cs.MA', 'eess.SY', 'cs.HC', 'math.IT', 'cs.IT', 'cs.DC', 'cs.CY', 'stat.AP', 'stat.TH', 'math.ST', 'stat.ME', 'eess.AS', 'cs.SD', 'q-bio.QM', 'q-bio.NC', 'cs.DS', 'cs.GT', 'cs.CG', 'cs.SE', 'cs.NI', 'I.2.6', 'stat.CO', 'math.NA', 'cs.NA', 'physics.chem-ph', 'cs.DB', 'q-bio.BM', 'cs.LO', 'cond-mat.dis-nn', '68T45', 'math.PR', 'cs.PL', 'physics.comp-ph', 'cs.CE', 'cs.AR', 'I.2.10', 'q-fin.ST', 'cond-mat.stat-mech', '68T05', 'math.DS', 'cs.CC', 'quant-ph', 'physics.data-an', 'I.4.6', 'physics.soc-ph', 'physics.ao-ph', 'econ.EM', 'cs.DM', 'q-bio.GN', 'physics.med-ph', 'cs.PF', 'astro-ph.IM', 'I.4.8', 'math.AT', 'I.4', 'q-fin.TR', 'cs.FL', 'I.5.4', 'I.2', '68U10', 'hep-ex', 'cond-mat.mtrl-sci', '68T10', 'physics.optics', 'physics.geo-ph', 'physics.flu-dyn', 'math.CO', 'math.AP', 'I.4; I.5', 'I.4.9', 'I.2.6; I.2.8', '68T01', '65D19', 'q-fin.CP', 'nlin.CD', 'cs.MS', 'I.2.6; I.5.1', 'I.2.10; I.4; I.5', 'I.2.0; I.2.6', '68T07', 'cs.SC', 'cs.ET', 'K.3.2', 'I.2; I.5', 'I.2.8', '68U01', '68T30', 'q-fin.GN', 'q-fin.EC', 'q-bio.MN', 'econ.GN', 'I.4.9; I.5.4', 'I.4.5', 'I.2; I.4; I.5', 'I.2.6; I.2.7', 'I.2.10; I.4.8', '68T99', '68Q32', '68', '62H30', 'q-fin.RM', 'q-fin.PM', 'q-bio.TO', 'q-bio.OT', 'physics.bio-ph', 'nlin.AO', 'math.LO', 'math.FA', 'hep-ph', 'cond-mat.soft', 'I.4.6; I.4.8', 'I.4.4', 'I.4.3', 'I.4.0', 'I.2; J.2', 'I.2; I.2.6; I.2.7', 'I.2.7', 'I.2.6; I.5.4', 'I.2.6; I.2.9', 'I.2.6; I.2.7; H.3.1; H.3.3', 'I.2.6; I.2.10', 'I.2.6, I.5.4', 'I.2.1; J.3', 'I.2.10; I.5.1; I.4.8', 'I.2.10; I.4.8; I.5.4', 'I.2.10; I.2.6', 'I.2.1', 'H.3.1; I.2.6; I.2.7', 'H.3.1; H.3.3; I.2.6; I.2.7', 'G.3', 'F.2.2; I.2.7', 'E.5; E.4; E.2; H.1.1; F.1.1; F.1.3', '68Txx', '62H99', '62H35', '14J60 (Primary) 14F05, 14J26 (Secondary)'] Here we are separating the individual unique classes available from the label pool and then using this information to represent a given label set with 0's and 1's. Below is an example. sample_label = train_df[\"terms\"].iloc[0] print(f\"Original label: {sample_label}\") label_binarized = lookup([sample_label]) print(f\"Label-binarized representation: {label_binarized}\") Original label: ['cs.LG', 'cs.CY'] Label-binarized representation: [[0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]] Data preprocessing and tf.data.Dataset objects We first get percentile estimates of the sequence lengths. The purpose will be clear in a moment. 
train_df[\"summaries\"].apply(lambda x: len(x.split(\" \"))).describe() count 32985.000000 mean 156.419706 std 41.528906 min 5.000000 25% 128.000000 50% 154.000000 75% 183.000000 max 462.000000 Name: summaries, dtype: float64 Notice that 50% of the abstracts have a length of 154 (you may get a different number based on the split). So, any number close to that value is a good enough approximate for the maximum sequence length. Now, we implement utilities to prepare our datasets that would go straight to the text classifier model. max_seqlen = 150 batch_size = 128 padding_token = \"\" auto = tf.data.AUTOTUNE def unify_text_length(text, label): # Split the given abstract and calculate its length. word_splits = tf.strings.split(text, sep=\" \") sequence_length = tf.shape(word_splits)[0] # Calculate the padding amount. padding_amount = max_seqlen - sequence_length # Check if we need to pad or truncate. if padding_amount > 0: unified_text = tf.pad([text], [[0, padding_amount]], constant_values=\"\") unified_text = tf.strings.reduce_join(unified_text, separator=\"\") else: unified_text = tf.strings.reduce_join(word_splits[:max_seqlen], separator=\" \") # The expansion is needed for subsequent vectorization. return tf.expand_dims(unified_text, -1), label def make_dataset(dataframe, is_train=True): labels = tf.ragged.constant(dataframe[\"terms\"].values) label_binarized = lookup(labels).numpy() dataset = tf.data.Dataset.from_tensor_slices( (dataframe[\"summaries\"].values, label_binarized) ) dataset = dataset.shuffle(batch_size * 10) if is_train else dataset dataset = dataset.map(unify_text_length, num_parallel_calls=auto).cache() return dataset.batch(batch_size) Now we can prepare the tf.data.Dataset objects. train_dataset = make_dataset(train_df, is_train=True) validation_dataset = make_dataset(val_df, is_train=False) test_dataset = make_dataset(test_df, is_train=False) Dataset preview text_batch, label_batch = next(iter(train_dataset)) for i, text in enumerate(text_batch[:5]): label = label_batch[i].numpy()[None, ...] print(f\"Abstract: {text[0]}\") print(f\"Label(s): {invert_multi_hot(label[0])}\") print(\" \") Abstract: b'For the integration of renewable energy sources, power grid operators need\nrealistic information about the effects of energy production and consumption to\nassess grid stability.\n Recently, research in scenario planning benefits from utilizing generative\nadversarial networks (GANs) as generative models for operational scenario\nplanning.\n In these scenarios, operators examine temporal as well as spatial influences\nof different energy sources on the grid.\n The analysis of how renewable energy resources affect the grid enables the\noperators to evaluate the stability and to identify potential weak points such\nas a limiting transformer.\n However, due to their novelty, there are limited studies on how well GANs\nmodel the underlying power distribution.\n This analysis is essential because, e.g., especially extreme situations with\nlow or high power generation are required to evaluate grid stability.\n We conduct a comparative study of the Wasserstein distance,\nbinary-cross-entropy loss, and a Gaussian copula as the baseline applied on two\nwind and two solar datasets' Label(s): ['cs.LG' 'eess.SP'] Abstract: b'We study the optimization problem for decomposing $d$ dimensional\nfourth-order Tensors with $k$ non-orthogonal components. We derive\n\\textit{deterministic} conditions under which such a problem does not have\nspurious local minima. 
In particular, we show that if $\\kappa =\n\\frac{\\lambda_{max}}{\\lambda_{min}} < \\frac{5}{4}$, and incoherence coefficient\nis of the order $O(\\frac{1}{\\sqrt{d}})$, then all the local minima are globally\noptimal. Using standard techniques, these conditions could be easily\ntransformed into conditions that would hold with high probability in high\ndimensions when the components are generated randomly. Finally, we prove that\nthe tensor power method with deflation and restarts could efficiently extract\nall the components within a tolerance level $O(\\kappa \\sqrt{k\\tau^3})$ that\nseems to be the noise floor of non-orthogonal tensor decomposition.' Label(s): ['cs.LG'] Abstract: b'Explainable Artificial Intelligence (XAI) is an emerging area of research in\nthe field of Artificial Intelligence (AI). XAI can explain how AI obtained a\nparticular solution (e.g., classification or object detection) and can also\nanswer other \"wh\" questions. This explainability is not possible in traditional\nAI. Explainability is essential for critical applications, such as defense,\nhealth care, law and order, and autonomous driving vehicles, etc, where the\nknow-how is required for trust and transparency. A number of XAI techniques so\nfar have been purposed for such applications. This paper provides an overview\nof these techniques from a multimedia (i.e., text, image, audio, and video)\npoint of view. The advantages and shortcomings of these techniques have been\ndiscussed, and pointers to some future directions have also been provided.' Label(s): ['cs.LG' 'cs.AI'] Abstract: b'Some of the most important tasks take place in environments which lack cheap\nand perfect simulators, thus hampering the application of model-free\nreinforcement learning (RL). While model-based RL aims to learn a dynamics\nmodel, in a more general case the learner does not know a priori what the\naction space is. Here we propose a formalism where the learner induces a world\nprogram by learning a dynamics model and the actions in graph-based\ncompositional environments by observing state-state transition examples. Then,\nthe learner can perform RL with the world program as the simulator for complex\nplanning tasks. We highlight a recent application, and propose a challenge for\nthe community to assess world program-based planning.' Label(s): ['cs.LG' 'stat.ML'] Abstract: b'Deep learning based image compression has recently witnessed exciting\nprogress and in some cases even managed to surpass transform coding based\napproaches that have been established and refined over many decades. However,\nstate-of-the-art solutions for deep image compression typically employ\nautoencoders which map the input to a lower dimensional latent space and thus\nirreversibly discard information already before quantization. Due to that, they\ninherently limit the range of quality levels that can be covered. In contrast,\ntraditional approaches in image compression allow for a larger range of quality\nlevels. Interestingly, they employ an invertible transformation before\nperforming the quantization step which explicitly discards information.\nInspired by this, we propose a deep image compression method that is able to go\nfrom low bit-rates to near lossless quality by leveraging normalizing flows to\nlearn a bijective mapping from the image space to a latent representation. 
In\naddition to this, we demonstrate further advantages unique to our solution,\nsuch as the ability to maintain constant quality results' Label(s): ['cs.CV'] Vectorization Before we feed the data to our model, we need to vectorize it (represent it in a numerical form). For that purpose, we will use the TextVectorization layer. It can operate as a part of your main model, so that the preprocessing logic does not have to be maintained outside the model. This greatly reduces the chances of training / serving skew during inference. We first compute the maximum number of words present in a single abstract; this will serve as the token budget (max_tokens) for the vectorization layer. train_df[\"total_words\"] = train_df[\"summaries\"].str.split().str.len() vocabulary_size = train_df[\"total_words\"].max() print(f\"Vocabulary size: {vocabulary_size}\") Vocabulary size: 498 We now create our vectorization layer and map() it to the tf.data.Dataset objects created earlier. text_vectorizer = layers.TextVectorization( max_tokens=vocabulary_size, ngrams=2, output_mode=\"tf_idf\" ) # `TextVectorization` layer needs to be adapted as per the vocabulary from our # training set. with tf.device(\"/CPU:0\"): text_vectorizer.adapt(train_dataset.map(lambda text, label: text)) train_dataset = train_dataset.map( lambda text, label: (text_vectorizer(text), label), num_parallel_calls=auto ).prefetch(auto) validation_dataset = validation_dataset.map( lambda text, label: (text_vectorizer(text), label), num_parallel_calls=auto ).prefetch(auto) test_dataset = test_dataset.map( lambda text, label: (text_vectorizer(text), label), num_parallel_calls=auto ).prefetch(auto) A batch of raw text will first go through the TextVectorization layer, which generates its numerical representation. Internally, the TextVectorization layer will first create bi-grams out of the sequences and then represent them using TF-IDF. The output representations will then be passed to the shallow model responsible for text classification. To learn more about other possible configurations of TextVectorization, please consult the official documentation. Note: Setting the max_tokens argument to a pre-calculated vocabulary size is not a requirement. Create a text classification model We will keep our model simple -- it will be a small stack of fully-connected layers with ReLU as the non-linearity. def make_model(): shallow_mlp_model = keras.Sequential( [ layers.Dense(512, activation=\"relu\"), layers.Dense(256, activation=\"relu\"), layers.Dense(lookup.vocabulary_size(), activation=\"sigmoid\"), ] # More on why \"sigmoid\" has been used here in a moment. ) return shallow_mlp_model Train the model We will train our model using the binary crossentropy loss. This is because the labels are not disjoint: a given abstract may belong to multiple categories. So, we divide the prediction task into a series of binary classification problems, one per label. This is also why we set the activation function of the classification layer in our model to sigmoid. Researchers have used other combinations of loss function and activation function as well. For example, in Exploring the Limits of Weakly Supervised Pretraining, Mahajan et al. used the softmax activation function and cross-entropy loss to train their models.
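To make the framing of one binary classification problem per label concrete, here is a minimal sketch (not part of the original pipeline) showing that binary crossentropy with sigmoid outputs simply scores each label independently and averages the results. The toy target and probabilities below are made up:

```python
import tensorflow as tf

# A made-up multi-hot target over 4 labels and the sigmoid outputs of a model.
y_true = tf.constant([[0.0, 1.0, 1.0, 0.0]])
y_pred = tf.constant([[0.1, 0.9, 0.6, 0.2]])

# Keras averages the per-label binary crossentropy terms.
bce = tf.keras.losses.BinaryCrossentropy()
print(float(bce(y_true, y_pred)))  # ~0.24

# Equivalent manual computation: one independent binary problem per label.
manual = -tf.reduce_mean(
    y_true * tf.math.log(y_pred) + (1.0 - y_true) * tf.math.log(1.0 - y_pred)
)
print(float(manual))  # ~0.24
```

With softmax and categorical cross-entropy, by contrast, the label scores would be forced to compete and sum to one, which is undesirable when an abstract can legitimately carry several labels.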
epochs = 20 shallow_mlp_model = make_model() shallow_mlp_model.compile( loss=\"binary_crossentropy\", optimizer=\"adam\", metrics=[\"categorical_accuracy\"] ) history = shallow_mlp_model.fit( train_dataset, validation_data=validation_dataset, epochs=epochs ) def plot_result(item): plt.plot(history.history[item], label=item) plt.plot(history.history[\"val_\" + item], label=\"val_\" + item) plt.xlabel(\"Epochs\") plt.ylabel(item) plt.title(\"Train and Validation {} Over Epochs\".format(item), fontsize=14) plt.legend() plt.grid() plt.show() plot_result(\"loss\") plot_result(\"categorical_accuracy\") Epoch 1/20 258/258 [==============================] - 3s 7ms/step - loss: 0.0607 - categorical_accuracy: 0.8037 - val_loss: 0.0226 - val_categorical_accuracy: 0.8767 Epoch 2/20 258/258 [==============================] - 1s 5ms/step - loss: 0.0225 - categorical_accuracy: 0.8726 - val_loss: 0.0213 - val_categorical_accuracy: 0.8871 Epoch 3/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0215 - categorical_accuracy: 0.8750 - val_loss: 0.0210 - val_categorical_accuracy: 0.8893 Epoch 4/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0207 - categorical_accuracy: 0.8794 - val_loss: 0.0209 - val_categorical_accuracy: 0.8860 Epoch 5/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0201 - categorical_accuracy: 0.8823 - val_loss: 0.0208 - val_categorical_accuracy: 0.8882 Epoch 6/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0196 - categorical_accuracy: 0.8857 - val_loss: 0.0203 - val_categorical_accuracy: 0.8925 Epoch 7/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0191 - categorical_accuracy: 0.8876 - val_loss: 0.0196 - val_categorical_accuracy: 0.8914 Epoch 8/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0187 - categorical_accuracy: 0.8900 - val_loss: 0.0195 - val_categorical_accuracy: 0.8729 Epoch 9/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0183 - categorical_accuracy: 0.8919 - val_loss: 0.0193 - val_categorical_accuracy: 0.8800 Epoch 10/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0179 - categorical_accuracy: 0.8932 - val_loss: 0.0190 - val_categorical_accuracy: 0.8958 Epoch 11/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0176 - categorical_accuracy: 0.8950 - val_loss: 0.0192 - val_categorical_accuracy: 0.8974 Epoch 12/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0172 - categorical_accuracy: 0.8967 - val_loss: 0.0191 - val_categorical_accuracy: 0.8936 Epoch 13/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0169 - categorical_accuracy: 0.8980 - val_loss: 0.0192 - val_categorical_accuracy: 0.8920 Epoch 14/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0166 - categorical_accuracy: 0.8993 - val_loss: 0.0194 - val_categorical_accuracy: 0.8811 Epoch 15/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0162 - categorical_accuracy: 0.9008 - val_loss: 0.0196 - val_categorical_accuracy: 0.8822 Epoch 16/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0159 - categorical_accuracy: 0.9032 - val_loss: 0.0196 - val_categorical_accuracy: 0.8794 Epoch 17/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0156 - categorical_accuracy: 0.9047 - val_loss: 0.0197 - val_categorical_accuracy: 0.8652 Epoch 18/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0153 - categorical_accuracy: 0.9061 - 
val_loss: 0.0198 - val_categorical_accuracy: 0.8718 Epoch 19/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0150 - categorical_accuracy: 0.9067 - val_loss: 0.0200 - val_categorical_accuracy: 0.8734 Epoch 20/20 258/258 [==============================] - 1s 6ms/step - loss: 0.0146 - categorical_accuracy: 0.9087 - val_loss: 0.0202 - val_categorical_accuracy: 0.8691 png png While training, we notice an initial sharp fall in the loss followed by a gradual decay. Evaluate the model _, categorical_acc = shallow_mlp_model.evaluate(test_dataset) print(f\"Categorical accuracy on the test set: {round(categorical_acc * 100, 2)}%.\") 15/15 [==============================] - 0s 13ms/step - loss: 0.0208 - categorical_accuracy: 0.8642 Categorical accuracy on the test set: 86.42%. The trained model gives us an evaluation accuracy of ~87%. Inference An important feature of the preprocessing layers provided by Keras is that they can be included inside a tf.keras.Model. We will export an inference model by including the text_vectorization layer on top of shallow_mlp_model. This will allow our inference model to directly operate on raw strings. Note that during training it is always preferable to use these preprocessing layers as a part of the data input pipeline rather than the model to avoid surfacing bottlenecks for the hardware accelerators. This also allows for asynchronous data processing. # Create a model for inference. model_for_inference = keras.Sequential([text_vectorizer, shallow_mlp_model]) # Create a small dataset just for demoing inference. inference_dataset = make_dataset(test_df.sample(100), is_train=False) text_batch, label_batch = next(iter(inference_dataset)) predicted_probabilities = model_for_inference.predict(text_batch) # Perform inference. for i, text in enumerate(text_batch[:5]): label = label_batch[i].numpy()[None, ...] print(f\"Abstract: {text[0]}\") print(f\"Label(s): {invert_multi_hot(label[0])}\") predicted_proba = [proba for proba in predicted_probabilities[i]] top_3_labels = [ x for _, x in sorted( zip(predicted_probabilities[i], lookup.get_vocabulary()), key=lambda pair: pair[0], reverse=True, ) ][:3] print(f\"Predicted Label(s): ({', '.join([label for label in top_3_labels])})\") print(\" \") Abstract: b'Learning interpretable and interpolatable latent representations has been an\nemerging research direction, allowi Training a multimodal model for predicting entailment. Introduction In this example, we will build and train a model for predicting multimodal entailment. We will be using the multimodal entailment dataset recently introduced by Google Research. What is multimodal entailment? On social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time: Does a given piece of information contradict the other? Does a given piece of information imply the other? In NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities. Requirements This example requires TensorFlow 2.5 or higher. In addition, TensorFlow Hub and TensorFlow Text are required for the BERT model (Devlin et al.). 
These libraries can be installed using the following command: !pip install -q tensorflow_text Imports from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt import pandas as pd import numpy as np import os import tensorflow as tf import tensorflow_hub as hub import tensorflow_text as text from tensorflow import keras Define a label map label_map = {\"Contradictory\": 0, \"Implies\": 1, \"NoEntailment\": 2} Collect the dataset The original dataset is available here. It comes with URLs of images which are hosted on Twitter's photo storage system called the Photo Blob Storage (PBS for short). We will be working with the downloaded images along with additional data that comes with the original dataset. Thanks to Nilabhra Roy Chowdhury who worked on preparing the image data. image_base_path = keras.utils.get_file( \"tweet_images\", \"https://github.com/sayakpaul/Multimodal-Entailment-Baseline/releases/download/v1.0.0/tweet_images.tar.gz\", untar=True, ) Read the dataset and apply basic preprocessing df = pd.read_csv( \"https://github.com/sayakpaul/Multimodal-Entailment-Baseline/raw/main/csvs/tweets.csv\" ) df.sample(10) id_1 text_1 image_1 id_2 text_2 image_2 label 291 1330800194863190016 #KLM1167 (B738): #AMS (Amsterdam) to #HEL (Van... http://pbs.twimg.com/media/EnfzuZAW4AE236p.png 1378695438480588802 #CKK205 (B77L): #PVG (Shanghai) to #AMS (Amste... http://pbs.twimg.com/media/EyIcMexXEAE6gia.png NoEntailment 37 1366581728312057856 Friends, interested all go to have a look!\n@j... http://pbs.twimg.com/media/EvcS1v4UcAEEXPO.jpg 1373810535066570759 Friends, interested all go to have a look!\n@f... http://pbs.twimg.com/media/ExDBZqwVIAQ4LWk.jpg Contradictory 315 1352551603258052608 #WINk Drops I have earned today🚀\n\nToday:1/22... http://pbs.twimg.com/media/EsTdcLLVcAIiFKT.jpg 1354636016234098688 #WINk Drops I have earned today☀\n\nToday:1/28... http://pbs.twimg.com/media/EsyhK-qU0AgfMAH.jpg NoEntailment 761 1379795999493853189 #buythedip Ready to FLY even HIGHER #pennysto... http://pbs.twimg.com/media/EyYFJCzWgAMfTrT.jpg 1380190250144792576 #buythedip Ready to FLY even HIGHER #pennysto... http://pbs.twimg.com/media/Eydrt0ZXAAMmbfv.jpg NoEntailment 146 1340185132293099523 I know sometimes I am weird to you.\n\nBecause... http://pbs.twimg.com/media/EplLRriWwAAJ2AE.jpg 1359755419883814913 I put my sword down and get on my knees to swe... http://pbs.twimg.com/media/Et7SWWeWYAICK-c.jpg NoEntailment 1351 1381256604926967813 Finally completed the skin rendering. Will sta... http://pbs.twimg.com/media/Eys1j7NVIAgF-YF.jpg 1381630932092784641 Hair rendering. Will finish the hair by tomorr... http://pbs.twimg.com/media/EyyKAoaUUAElm-e.jpg NoEntailment 368 1371883298805403649 📉 $LINK Number of Receiving Addresses (7d MA) ... http://pbs.twimg.com/media/EwnoltOWEAAS4mG.jpg 1373216720974979072 📉 $LINK Number of Receiving Addresses (7d MA) ... http://pbs.twimg.com/media/Ew6lVGYXEAE6Ugi.jpg NoEntailment 1112 1377679115159887873 April is National Distracted Driving Awareness... http://pbs.twimg.com/media/Ex5_u7UVIAARjQ2.jpg 1379075258448281608 April is Distracted Driving Awareness Month. ... http://pbs.twimg.com/media/EyN1YjpWUAMc5ak.jpg NoEntailment 264 1330727515741167619 ♥️Verse Of The Day♥️\n.\n#VerseOfTheDay #Quran... http://pbs.twimg.com/media/EnexnydXIAYuI11.jpg 1332623263495819264 ♥️Verse Of The Day♥️\n.\n#VerseOfTheDay #Quran... http://pbs.twimg.com/media/En5ty1VXUAATALP.jpg NoEntailment 865 1377784616275296261 No white picket fence can keep us in. #TBT 200... 
http://pbs.twimg.com/media/Ex7fzouWQAITAq8.jpg 1380175915804672012 Sometimes you just need to change your altitud... http://pbs.twimg.com/media/EydernQXIAk2g5v.jpg NoEntailment The columns we are interested in are the following: text_1 image_1 text_2 image_2 label The entailment task is formulated as follows: given the pair (text_1, image_1) and the pair (text_2, image_2), do they entail (or not entail, or contradict) each other? The images have already been downloaded: each image_1 is saved with id_1 as its filename and each image_2 with id_2 as its filename. In the next step, we will add two more columns to df: the filepaths of the image_1s and image_2s. images_one_paths = [] images_two_paths = [] for idx in range(len(df)): current_row = df.iloc[idx] id_1 = current_row[\"id_1\"] id_2 = current_row[\"id_2\"] extension_one = current_row[\"image_1\"].split(\".\")[-1] extension_two = current_row[\"image_2\"].split(\".\")[-1] image_one_path = os.path.join(image_base_path, str(id_1) + f\".{extension_one}\") image_two_path = os.path.join(image_base_path, str(id_2) + f\".{extension_two}\") images_one_paths.append(image_one_path) images_two_paths.append(image_two_path) df[\"image_1_path\"] = images_one_paths df[\"image_2_path\"] = images_two_paths # Create another column containing the integer ids of # the string labels. df[\"label_idx\"] = df[\"label\"].apply(lambda x: label_map[x]) Dataset visualization def visualize(idx): current_row = df.iloc[idx] image_1 = plt.imread(current_row[\"image_1_path\"]) image_2 = plt.imread(current_row[\"image_2_path\"]) text_1 = current_row[\"text_1\"] text_2 = current_row[\"text_2\"] label = current_row[\"label\"] plt.subplot(1, 2, 1) plt.imshow(image_1) plt.axis(\"off\") plt.title(\"Image One\") plt.subplot(1, 2, 2) plt.imshow(image_2) plt.axis(\"off\") plt.title(\"Image Two\") plt.show() print(f\"Text one: {text_1}\") print(f\"Text two: {text_2}\") print(f\"Label: {label}\") random_idx = np.random.choice(len(df)) visualize(random_idx) random_idx = np.random.choice(len(df)) visualize(random_idx) png Text one: Friends, interested all go to have a look! @ThePartyGoddess @OurLadyAngels @BJsWholesale @Richard_Jeni @FashionLavidaG @RapaRooski @DMVTHING @DeMarcoReports @LobidaFo @DeMarcoMorgan https://t.co/cStULl7y7G Text two: Friends, interested all go to have a look! @smittyses @CYosabel @crum_7 @CrumDarrell @ElymalikU @jenloarn @SoCodiePrevost @roblowry82 @Crummy_14 @CSchmelzenbach https://t.co/IZphLTNzgl Label: Contradictory png Text one: 👟 KICK OFF @ MARDEN SPORTS COMPLEX We're underway in the Round 6 opener! 📺: @Foxtel, @kayosports 📱: My Football Live app https://t.co/wHSpvQaoGC #WLeague #ADLvMVC #AUFC #MVFC https://t.co/3Smp8KXm8W Text two: 👟 KICK OFF @ MARSDEN SPORTS COMPLEX We're underway in sunny Adelaide! 📺: @Foxtel, @kayosports 📱: My Football Live app https://t.co/wHSpvQaoGC #ADLvCBR #WLeague #AUFC #UnitedAlways https://t.co/fG1PyLQXM4 Label: NoEntailment Train/test split The dataset suffers from a class imbalance problem. We can confirm that in the following cell. df[\"label\"].value_counts() NoEntailment 1182 Implies 109 Contradictory 109 Name: label, dtype: int64 To account for that we will go for a stratified split.
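As an aside (this is not part of the example's training setup), another common way to counteract such imbalance is to pass per-class weights to model.fit(). A minimal sketch, using the counts printed above and the label_map defined earlier:

```python
# Hypothetical inverse-frequency class weights derived from the counts above.
counts = {"NoEntailment": 1182, "Implies": 109, "Contradictory": 109}
total = sum(counts.values())

class_weight = {
    label_map[name]: total / (len(counts) * count) for name, count in counts.items()
}
print(class_weight)  # {2: ~0.39, 1: ~4.28, 0: ~4.28} -- rarer classes weigh more
# This dict could be passed as `class_weight=class_weight` to `model.fit()`.
```

For this example, however, we keep the loss unweighted and rely on the stratified split below.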
# 10% for test train_df, test_df = train_test_split( df, test_size=0.1, stratify=df[\"label\"].values, random_state=42 ) # 5% for validation train_df, val_df = train_test_split( train_df, test_size=0.05, stratify=train_df[\"label\"].values, random_state=42 ) print(f\"Total training examples: {len(train_df)}\") print(f\"Total validation examples: {len(val_df)}\") print(f\"Total test examples: {len(test_df)}\") Total training examples: 1197 Total validation examples: 63 Total test examples: 140 Data input pipeline TensorFlow Hub provides variety of BERT family of models. Each of those models comes with a corresponding preprocessing layer. You can learn more about these models and their preprocessing layers from this resource. To keep the runtime of this example relatively short, we will use a smaller variant of the original BERT model. # Define TF Hub paths to the BERT encoder and its preprocessor bert_model_path = ( \"https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1\" ) bert_preprocess_path = \"https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3\" Our text preprocessing code mostly comes from this tutorial. You are highly encouraged to check out the tutorial to learn more about the input preprocessing. def make_bert_preprocessing_model(sentence_features, seq_length=128): \"\"\"Returns Model mapping string features to BERT inputs. Args: sentence_features: A list with the names of string-valued features. seq_length: An integer that defines the sequence length of BERT inputs. Returns: A Keras Model that can be called on a list or dict of string Tensors (with the order or names, resp., given by sentence_features) and returns a dict of tensors for input to BERT. \"\"\" input_segments = [ tf.keras.layers.Input(shape=(), dtype=tf.string, name=ft) for ft in sentence_features ] # Tokenize the text to word pieces. bert_preprocess = hub.load(bert_preprocess_path) tokenizer = hub.KerasLayer(bert_preprocess.tokenize, name=\"tokenizer\") segments = [tokenizer(s) for s in input_segments] # Optional: Trim segments in a smart way to fit seq_length. # Simple cases (like this example) can skip this step and let # the next step apply a default truncation to approximately equal lengths. truncated_segments = segments # Pack inputs. The details (start/end token ids, dict of output tensors) # are model-dependent, so this gets loaded from the SavedModel. 
packer = hub.KerasLayer( bert_preprocess.bert_pack_inputs, arguments=dict(seq_length=seq_length), name=\"packer\", ) model_inputs = packer(truncated_segments) return keras.Model(input_segments, model_inputs) bert_preprocess_model = make_bert_preprocessing_model([\"text_1\", \"text_2\"]) keras.utils.plot_model(bert_preprocess_model, show_shapes=True, show_dtype=True) png Run the preprocessor on a sample input idx = np.random.choice(len(train_df)) row = train_df.iloc[idx] sample_text_1, sample_text_2 = row[\"text_1\"], row[\"text_2\"] print(f\"Text 1: {sample_text_1}\") print(f\"Text 2: {sample_text_2}\") test_text = [np.array([sample_text_1]), np.array([sample_text_2])] text_preprocessed = bert_preprocess_model(test_text) print(\"Keys : \", list(text_preprocessed.keys())) print(\"Shape Word Ids : \", text_preprocessed[\"input_word_ids\"].shape) print(\"Word Ids : \", text_preprocessed[\"input_word_ids\"][0, :16]) print(\"Shape Mask : \", text_preprocessed[\"input_mask\"].shape) print(\"Input Mask : \", text_preprocessed[\"input_mask\"][0, :16]) print(\"Shape Type Ids : \", text_preprocessed[\"input_type_ids\"].shape) print(\"Type Ids : \", text_preprocessed[\"input_type_ids\"][0, :16]) Text 1: Renewables met 97% of Scotland's electricity demand in 2020!!!! https://t.co/wi5c9UFAUF https://t.co/arcuBgh0BP Text 2: Renewables met 97% of Scotland's electricity demand in 2020 https://t.co/SrhyqPnIkU https://t.co/LORgvTM7Sn Keys : ['input_mask', 'input_word_ids', 'input_type_ids'] Shape Word Ids : (1, 128) Word Ids : tf.Tensor( [ 101 13918 2015 2777 5989 1003 1997 3885 1005 1055 6451 5157 1999 12609 999 999], shape=(16,), dtype=int32) Shape Mask : (1, 128) Input Mask : tf.Tensor([1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1], shape=(16,), dtype=int32) Shape Type Ids : (1, 128) Type Ids : tf.Tensor([0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0], shape=(16,), dtype=int32) We will now create tf.data.Dataset objects from the dataframes. Note that the text inputs will be preprocessed as a part of the data input pipeline. But the preprocessing modules can also be a part of their corresponding BERT models. This helps reduce the training/serving skew and lets our models operate with raw text inputs. Follow this tutorial to learn more about how to incorporate the preprocessing modules directly inside the models. 
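For reference, here is a minimal sketch of that alternative, where the preprocessing module is bundled into a small Keras model so it can consume raw strings directly. It reuses the bert_preprocess_path and bert_model_path handles defined above, handles a single text segment only, and is not used anywhere else in this example:

```python
def make_raw_text_encoder():
    # Raw strings go in; the TF Hub preprocessor and BERT encoder live inside the model.
    text_input = keras.Input(shape=(), dtype=tf.string, name="raw_text")
    preprocessor = hub.KerasLayer(bert_preprocess_path, name="preprocessing")
    encoder = hub.KerasLayer(bert_model_path, trainable=False, name="bert")
    outputs = encoder(preprocessor(text_input))
    return keras.Model(text_input, outputs["pooled_output"])

# raw_encoder = make_raw_text_encoder()
# raw_encoder(tf.constant(["a raw tweet goes straight into the model"]))
```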
def dataframe_to_dataset(dataframe): columns = [\"image_1_path\", \"image_2_path\", \"text_1\", \"text_2\", \"label_idx\"] dataframe = dataframe[columns].copy() labels = dataframe.pop(\"label_idx\") ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels)) ds = ds.shuffle(buffer_size=len(dataframe)) return ds Preprocessing utilities resize = (128, 128) bert_input_features = [\"input_word_ids\", \"input_type_ids\", \"input_mask\"] def preprocess_image(image_path): extension = tf.strings.split(image_path)[-1] image = tf.io.read_file(image_path) if extension == b\"jpg\": image = tf.image.decode_jpeg(image, 3) else: image = tf.image.decode_png(image, 3) image = tf.image.resize(image, resize) return image def preprocess_text(text_1, text_2): text_1 = tf.convert_to_tensor([text_1]) text_2 = tf.convert_to_tensor([text_2]) output = bert_preprocess_model([text_1, text_2]) output = {feature: tf.squeeze(output[feature]) for feature in bert_input_features} return output def preprocess_text_and_image(sample): image_1 = preprocess_image(sample[\"image_1_path\"]) image_2 = preprocess_image(sample[\"image_2_path\"]) text = preprocess_text(sample[\"text_1\"], sample[\"text_2\"]) return {\"image_1\": image_1, \"image_2\": image_2, \"text\": text} Create the final datasets batch_size = 32 auto = tf.data.AUTOTUNE def prepare_dataset(dataframe, training=True): ds = dataframe_to_dataset(dataframe) if training: ds = ds.shuffle(len(train_df)) ds = ds.map(lambda x, y: (preprocess_text_and_image(x), y)).cache() ds = ds.batch(batch_size).prefetch(auto) return ds train_ds = prepare_dataset(train_df) validation_ds = prepare_dataset(val_df, False) test_ds = prepare_dataset(test_df, False) Model building utilities Our final model will accept two images along with their text counterparts. While the images will be fed directly to the model, the text inputs will first be preprocessed and then passed into the model. Below is a visual illustration of this approach: The model consists of the following elements: A standalone encoder for the images. We will use a ResNet50V2 pre-trained on the ImageNet-1k dataset for this. A standalone encoder for the text. A pre-trained BERT will be used for this. After extracting the individual embeddings, they will be projected into an identical space. Finally, their projections will be concatenated and fed to the final classification layer. This is a multi-class classification problem involving the following classes: NoEntailment Implies Contradictory project_embeddings(), create_vision_encoder(), and create_text_encoder() utilities are adapted from this example. Projection utilities def project_embeddings( embeddings, num_projection_layers, projection_dims, dropout_rate ): projected_embeddings = keras.layers.Dense(units=projection_dims)(embeddings) for _ in range(num_projection_layers): x = tf.nn.gelu(projected_embeddings) x = keras.layers.Dense(projection_dims)(x) x = keras.layers.Dropout(dropout_rate)(x) x = keras.layers.Add()([projected_embeddings, x]) projected_embeddings = keras.layers.LayerNormalization()(x) return projected_embeddings Vision encoder utilities def create_vision_encoder( num_projection_layers, projection_dims, dropout_rate, trainable=False ): # Load the pre-trained ResNet50V2 model to be used as the base encoder. resnet_v2 = keras.applications.ResNet50V2( include_top=False, weights=\"imagenet\", pooling=\"avg\" ) # Set the trainability of the base encoder. for layer in resnet_v2.layers: layer.trainable = trainable # Receive the images as inputs.
image_1 = keras.Input(shape=(128, 128, 3), name=\"image_1\") image_2 = keras.Input(shape=(128, 128, 3), name=\"image_2\") # Preprocess the input image. preprocessed_1 = keras.applications.resnet_v2.preprocess_input(image_1) preprocessed_2 = keras.applications.resnet_v2.preprocess_input(image_2) # Generate the embeddings for the images using the resnet_v2 model # concatenate them. embeddings_1 = resnet_v2(preprocessed_1) embeddings_2 = resnet_v2(preprocessed_2) embeddings = keras.layers.Concatenate()([embeddings_1, embeddings_2]) # Project the embeddings produced by the model. outputs = project_embeddings( embeddings, num_projection_layers, projection_dims, dropout_rate ) # Create the vision encoder model. return keras.Model([image_1, image_2], outputs, name=\"vision_encoder\") Text encoder utilities def create_text_encoder( num_projection_layers, projection_dims, dropout_rate, trainable=False ): # Load the pre-trained BERT model to be used as the base encoder. bert = hub.KerasLayer(bert_model_path, name=\"bert\",) # Set the trainability of the base encoder. bert.trainable = trainable # Receive the text as inputs. bert_input_features = [\"input_type_ids\", \"input_mask\", \"input_word_ids\"] inputs = { feature: keras.Input(shape=(128,), dtype=tf.int32, name=feature) for feature in bert_input_features } # Generate embeddings for the preprocessed text using the BERT model. embeddings = bert(inputs)[\"pooled_output\"] # Project the embeddings produced by the model. outputs = project_embeddings( embeddings, num_projection_layers, projection_dims, dropout_rate ) # Create the text encoder model. return keras.Model(inputs, outputs, name=\"text_encoder\") Multimodal model utilities def create_multimodal_model( num_projection_layers=1, projection_dims=256, dropout_rate=0.1, vision_trainable=False, text_trainable=False, ): # Receive the images as inputs. image_1 = keras.Input(shape=(128, 128, 3), name=\"image_1\") image_2 = keras.Input(shape=(128, 128, 3), name=\"image_2\") # Receive the text as inputs. bert_input_features = [\"input_type_ids\", \"input_mask\", \"input_word_ids\"] text_inputs = { feature: keras.Input(shape=(128,), dtype=tf.int32, name=feature) for feature in bert_input_features } # Create the encoders. vision_encoder = create_vision_encoder( num_projection_layers, projection_dims, dropout_rate, vision_trainable ) text_encoder = create_text_encoder( num_projection_layers, projection_dims, dropout_rate, text_trainable ) # Fetch the embedding projections. vision_projections = vision_encoder([image_1, image_2]) text_projections = text_encoder(text_inputs) # Concatenate the projections and pass through the classification layer. concatenated = keras.layers.Concatenate()([vision_projections, text_projections]) outputs = keras.layers.Dense(3, activation=\"softmax\")(concatenated) return keras.Model([image_1, image_2, text_inputs], outputs) multimodal_model = create_multimodal_model() keras.utils.plot_model(multimodal_model, show_shapes=True) png You can inspect the structure of the individual encoders as well by setting the expand_nested argument of plot_model() to True. You are encouraged to play with the different hyperparameters involved in building this model and observe how the final performance is affected. 
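For example, the following call renders the nested vision and text encoders inside the same diagram:

```python
# Expand the nested encoder sub-models in the plotted architecture diagram.
keras.utils.plot_model(multimodal_model, show_shapes=True, expand_nested=True)
```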
Compile and train the model multimodal_model.compile( optimizer=\"adam\", loss=\"sparse_categorical_crossentropy\", metrics=\"accuracy\" ) history = multimodal_model.fit(train_ds, validation_data=validation_ds, epochs=10) Epoch 1/10 38/38 [==============================] - 49s 789ms/step - loss: 1.0014 - accuracy: 0.8229 - val_loss: 0.5514 - val_accuracy: 0.8571 Epoch 2/10 38/38 [==============================] - 3s 90ms/step - loss: 0.4019 - accuracy: 0.8814 - val_loss: 0.5866 - val_accuracy: 0.8571 Epoch 3/10 38/38 [==============================] - 3s 90ms/step - loss: 0.3557 - accuracy: 0.8897 - val_loss: 0.5929 - val_accuracy: 0.8571 Epoch 4/10 38/38 [==============================] - 3s 91ms/step - loss: 0.2877 - accuracy: 0.9006 - val_loss: 0.6272 - val_accuracy: 0.8571 Epoch 5/10 38/38 [==============================] - 3s 91ms/step - loss: 0.1796 - accuracy: 0.9398 - val_loss: 0.8545 - val_accuracy: 0.8254 Epoch 6/10 38/38 [==============================] - 3s 91ms/step - loss: 0.1292 - accuracy: 0.9566 - val_loss: 1.2276 - val_accuracy: 0.8413 Epoch 7/10 38/38 [==============================] - 3s 91ms/step - loss: 0.1015 - accuracy: 0.9666 - val_loss: 1.2914 - val_accuracy: 0.7778 Epoch 8/10 38/38 [==============================] - 3s 92ms/step - loss: 0.1253 - accuracy: 0.9524 - val_loss: 1.1944 - val_accuracy: 0.8413 Epoch 9/10 38/38 [==============================] - 3s 92ms/step - loss: 0.3064 - accuracy: 0.9131 - val_loss: 1.2162 - val_accuracy: 0.8095 Epoch 10/10 38/38 [==============================] - 3s 92ms/step - loss: 0.2212 - accuracy: 0.9248 - val_loss: 1.1080 - val_accuracy: 0.8413 Evaluate the model _, acc = multimodal_model.evaluate(test_ds) print(f\"Accuracy on the test set: {round(acc * 100, 2)}%.\") 5/5 [==============================] - 6s 1s/step - loss: 0.8390 - accuracy: 0.8429 Accuracy on the test set: 84.29%. Additional notes regarding training Incorporating regularization: The training logs suggest that the model is starting to overfit and may have benefitted from regularization. Dropout (Srivastava et al.) is a simple yet powerful regularization technique that we can use in our model. But how should we apply it here? We could always introduce Dropout (keras.layers.Dropout) in between different layers of the model. But here is another recipe. Our model expects inputs from two different data modalities. What if either of the modalities is not present during inference? To account for this, we can introduce Dropout to the individual projections just before they get concatenated: vision_projections = keras.layers.Dropout(rate)(vision_projections) text_projections = keras.layers.Dropout(rate)(text_projections) concatenated = keras.layers.Concatenate()([vision_projections, text_projections]) Attending to what matters: Do all parts of the images correspond equally to their textual counterparts? It's likely not the case. To make our model only focus on the most important bits of the images that relate well to their corresponding textual parts we can use \"cross-attention\": # Embeddings. vision_projections = vision_encoder([image_1, image_2]) text_projections = text_encoder(text_inputs) # Cross-attention (Luong-style). query_value_attention_seq = keras.layers.Attention(use_scale=True, dropout=0.2)( [vision_projections, text_projections] ) # Concatenate. 
concatenated = keras.layers.Concatenate()([vision_projections, text_projections]) contextual = keras.layers.Concatenate()([concatenated, query_value_attention_seq]) To see this in action, refer to this notebook. Handling class imbalance: The dataset suffers from class imbalance. Investigating the confusion matrix of the above model reveals that it performs poorly on the minority classes. If we had used a weighted loss, the training would have been better guided. You can check out this notebook that takes class-imbalance into account during model training. Using only text inputs: Also, what if we had only incorporated text inputs for the entailment task? Because of the nature of the text inputs encountered on social media platforms, text inputs alone would have hurt the final performance. Under a similar training setup, by only using text inputs we get to 67.14% top-1 accuracy on the same test set. Refer to this notebook for details. Finally, here is a comparison of the different approaches taken for the entailment task, reporting test accuracy under standard cross-entropy, loss-weighted cross-entropy, and focal loss, respectively: Multimodal: 77.86%, 67.86%, 86.43%. Only text: 67.14%, 11.43%, 37.86%. You can check out this repository to learn more about how the experiments were conducted to obtain these numbers. Final remarks The architecture we used in this example is too large for the number of data points available for training. It's going to benefit from more data. We used a smaller variant of the original BERT model. Chances are high that with a larger variant, this performance will be improved. TensorFlow Hub provides a number of different BERT models that you can experiment with. We kept the pre-trained models frozen. Fine-tuning them on the multimodal entailment task could have resulted in better performance. We built a simple baseline model for the multimodal entailment task. There are various approaches that have been proposed to tackle the entailment problem. This presentation deck from the Recognizing Multimodal Entailment tutorial provides a comprehensive overview. NER using Transformers and data from the CoNLL 2003 shared task. Introduction Named Entity Recognition (NER) is the process of identifying named entities in text. Examples of named entities are: \"Person\", \"Location\", \"Organization\", \"Dates\", etc. NER is essentially a token classification task where every token is classified into one or more predetermined categories. In this exercise, we will train a simple Transformer based model to perform NER. We will be using the data from the CoNLL 2003 shared task. For more information about the dataset, please visit the dataset website. However, since obtaining this data requires an additional step of getting a free license, we will be using HuggingFace's datasets library, which contains a processed version of this dataset. Install the open source datasets library from HuggingFace We also download the script used to evaluate NER models. !pip3 install datasets !wget https://raw.githubusercontent.com/sighsmile/conlleval/master/conlleval.py import os import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from datasets import load_dataset from collections import Counter from conlleval import evaluate We will be using the transformer implementation from this fantastic example.
Let's start by defining a TransformerBlock layer: class TransformerBlock(layers.Layer): def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1): super(TransformerBlock, self).__init__() self.att = keras.layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim ) self.ffn = keras.Sequential( [ keras.layers.Dense(ff_dim, activation=\"relu\"), keras.layers.Dense(embed_dim), ] ) self.layernorm1 = keras.layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = keras.layers.LayerNormalization(epsilon=1e-6) self.dropout1 = keras.layers.Dropout(rate) self.dropout2 = keras.layers.Dropout(rate) def call(self, inputs, training=False): attn_output = self.att(inputs, inputs) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(inputs + attn_output) ffn_output = self.ffn(out1) ffn_output = self.dropout2(ffn_output, training=training) return self.layernorm2(out1 + ffn_output) Next, let's define a TokenAndPositionEmbedding layer: class TokenAndPositionEmbedding(layers.Layer): def __init__(self, maxlen, vocab_size, embed_dim): super(TokenAndPositionEmbedding, self).__init__() self.token_emb = keras.layers.Embedding( input_dim=vocab_size, output_dim=embed_dim ) self.pos_emb = keras.layers.Embedding(input_dim=maxlen, output_dim=embed_dim) def call(self, inputs): maxlen = tf.shape(inputs)[-1] positions = tf.range(start=0, limit=maxlen, delta=1) position_embeddings = self.pos_emb(positions) token_embeddings = self.token_emb(inputs) return token_embeddings + position_embeddings Build the NER model class as a keras.Model subclass class NERModel(keras.Model): def __init__( self, num_tags, vocab_size, maxlen=128, embed_dim=32, num_heads=2, ff_dim=32 ): super(NERModel, self).__init__() self.embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim) self.transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim) self.dropout1 = layers.Dropout(0.1) self.ff = layers.Dense(ff_dim, activation=\"relu\") self.dropout2 = layers.Dropout(0.1) self.ff_final = layers.Dense(num_tags, activation=\"softmax\") def call(self, inputs, training=False): x = self.embedding_layer(inputs) x = self.transformer_block(x) x = self.dropout1(x, training=training) x = self.ff(x) x = self.dropout2(x, training=training) x = self.ff_final(x) return x Load the CoNLL 2003 dataset from the datasets library and process it conll_data = load_dataset(\"conll2003\") We will export this data to a tab-separated file format which will be easy to read as a tf.data.Dataset object. def export_to_file(export_file_path, data): with open(export_file_path, \"w\") as f: for record in data: ner_tags = record[\"ner_tags\"] tokens = record[\"tokens\"] f.write( str(len(tokens)) + \"\t\" + \"\t\".join(tokens) + \"\t\" + \"\t\".join(map(str, ner_tags)) + \"\n\" ) os.mkdir(\"data\") export_to_file(\"./data/conll_train.txt\", conll_data[\"train\"]) export_to_file(\"./data/conll_val.txt\", conll_data[\"validation\"]) Make the NER label lookup table NER labels are usually provided in IOB, IOB2 or IOBES formats. Checkout this link for more information: Wikipedia Note that we start our label numbering from 1 since 0 will be reserved for padding. We have a total of 10 labels: 9 from the NER dataset and one for padding. 
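As a quick illustration of the IOB2 scheme, here is a hand-tagged toy sentence (a classic CoNLL-style illustration, not taken from the dataset files):

```python
# "B-" marks the first token of an entity span, "I-" a continuation, "O" no entity.
tokens = ["United", "Nations", "official", "Ekeus", "heads", "for", "Baghdad", "."]
tags = ["B-ORG", "I-ORG", "O", "B-PER", "O", "O", "B-LOC", "O"]
for token, tag in zip(tokens, tags):
    print(f"{token:10s} {tag}")
```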
def make_tag_lookup_table(): iob_labels = [\"B\", \"I\"] ner_labels = [\"PER\", \"ORG\", \"LOC\", \"MISC\"] all_labels = [(label1, label2) for label2 in ner_labels for label1 in iob_labels] all_labels = [\"-\".join([a, b]) for a, b in all_labels] all_labels = [\"[PAD]\", \"O\"] + all_labels return dict(zip(range(len(all_labels)), all_labels)) mapping = make_tag_lookup_table() print(mapping) {0: '[PAD]', 1: 'O', 2: 'B-PER', 3: 'I-PER', 4: 'B-ORG', 5: 'I-ORG', 6: 'B-LOC', 7: 'I-LOC', 8: 'B-MISC', 9: 'I-MISC'} Get a list of all tokens in the training dataset. This will be used to create the vocabulary. all_tokens = sum(conll_data[\"train\"][\"tokens\"], []) all_tokens_array = np.array(list(map(str.lower, all_tokens))) counter = Counter(all_tokens_array) print(len(counter)) num_tags = len(mapping) vocab_size = 20000 # We only take (vocab_size - 2) most common words from the training data since # the `StringLookup` class uses 2 additional tokens - one denoting an unknown # token and another one denoting a masking token vocabulary = [token for token, count in counter.most_common(vocab_size - 2)] # The StringLookup class will convert tokens to token IDs lookup_layer = keras.layers.StringLookup( vocabulary=vocabulary ) 21009 Create 2 new Dataset objects from the training and validation data train_data = tf.data.TextLineDataset(\"./data/conll_train.txt\") val_data = tf.data.TextLineDataset(\"./data/conll_val.txt\") Print out one line to make sure it looks good. The first record in the line is the number of tokens. After that we will have all the tokens followed by all the NER tags. print(list(train_data.take(1).as_numpy_iterator())) [b'9\tEU\trejects\tGerman\tcall\tto\tboycott\tBritish\tlamb\t.\t3\t0\t7\t0\t0\t0\t7\t0\t0'] We will be using the following map function to transform the data in the dataset: def map_record_to_training_data(record): record = tf.strings.split(record, sep=\"\t\") length = tf.strings.to_number(record[0], out_type=tf.int32) tokens = record[1 : length + 1] tags = record[length + 1 :] tags = tf.strings.to_number(tags, out_type=tf.int64) tags += 1 return tokens, tags def lowercase_and_convert_to_ids(tokens): tokens = tf.strings.lower(tokens) return lookup_layer(tokens) # We use `padded_batch` here because each record in the dataset has a # different length. batch_size = 32 train_dataset = ( train_data.map(map_record_to_training_data) .map(lambda x, y: (lowercase_and_convert_to_ids(x), y)) .padded_batch(batch_size) ) val_dataset = ( val_data.map(map_record_to_training_data) .map(lambda x, y: (lowercase_and_convert_to_ids(x), y)) .padded_batch(batch_size) ) ner_model = NERModel(num_tags, vocab_size, embed_dim=32, num_heads=4, ff_dim=64) We will be using a custom loss function that will ignore the loss from padded tokens.
class CustomNonPaddingTokenLoss(keras.losses.Loss): def __init__(self, name=\"custom_ner_loss\"): super().__init__(name=name) def call(self, y_true, y_pred): loss_fn = keras.losses.SparseCategoricalCrossentropy( from_logits=True, reduction=keras.losses.Reduction.NONE ) loss = loss_fn(y_true, y_pred) mask = tf.cast((y_true > 0), dtype=tf.float32) loss = loss * mask return tf.reduce_sum(loss) / tf.reduce_sum(mask) loss = CustomNonPaddingTokenLoss() Compile and fit the model ner_model.compile(optimizer=\"adam\", loss=loss) ner_model.fit(train_dataset, epochs=10) def tokenize_and_convert_to_ids(text): tokens = text.split() return lowercase_and_convert_to_ids(tokens) # Sample inference using the trained model sample_input = tokenize_and_convert_to_ids( \"eu rejects german call to boycott british lamb\" ) sample_input = tf.reshape(sample_input, shape=[1, -1]) print(sample_input) output = ner_model.predict(sample_input) prediction = np.argmax(output, axis=-1)[0] prediction = [mapping[i] for i in prediction] # eu -> B-ORG, german -> B-MISC, british -> B-MISC print(prediction) Epoch 1/10 439/439 [==============================] - 13s 26ms/step - loss: 0.9300 Epoch 2/10 439/439 [==============================] - 11s 24ms/step - loss: 0.2997 Epoch 3/10 439/439 [==============================] - 11s 24ms/step - loss: 0.1544 Epoch 4/10 439/439 [==============================] - 11s 25ms/step - loss: 0.1129 Epoch 5/10 439/439 [==============================] - 11s 25ms/step - loss: 0.0875 Epoch 6/10 439/439 [==============================] - 11s 25ms/step - loss: 0.0696 Epoch 7/10 439/439 [==============================] - 11s 25ms/step - loss: 0.0597 Epoch 8/10 439/439 [==============================] - 11s 25ms/step - loss: 0.0509 Epoch 9/10 439/439 [==============================] - 11s 25ms/step - loss: 0.0461 Epoch 10/10 439/439 [==============================] - 11s 25ms/step - loss: 0.0408 tf.Tensor([[ 989 10951 205 629 7 3939 216 5774]], shape=(1, 8), dtype=int64) ['B-ORG', 'O', 'B-MISC', 'O', 'O', 'O', 'B-MISC', 'O'] Metrics calculation Here is a function to calculate the metrics. The function calculates F1 score for the overall NER dataset as well as individual scores for each NER tag. def calculate_metrics(dataset): all_true_tag_ids, all_predicted_tag_ids = [], [] for x, y in dataset: output = ner_model.predict(x) predictions = np.argmax(output, axis=-1) predictions = np.reshape(predictions, [-1]) true_tag_ids = np.reshape(y, [-1]) mask = (true_tag_ids > 0) & (predictions > 0) true_tag_ids = true_tag_ids[mask] predicted_tag_ids = predictions[mask] all_true_tag_ids.append(true_tag_ids) all_predicted_tag_ids.append(predicted_tag_ids) all_true_tag_ids = np.concatenate(all_true_tag_ids) all_predicted_tag_ids = np.concatenate(all_predicted_tag_ids) predicted_tags = [mapping[tag] for tag in all_predicted_tag_ids] real_tags = [mapping[tag] for tag in all_true_tag_ids] evaluate(real_tags, predicted_tags) calculate_metrics(val_dataset) processed 51362 tokens with 5942 phrases; found: 5504 phrases; correct: 3855. accuracy: 63.28%; (non-O) accuracy: 93.22%; precision: 70.04%; recall: 64.88%; FB1: 67.36 LOC: precision: 85.67%; recall: 78.12%; FB1: 81.72 1675 MISC: precision: 73.15%; recall: 65.29%; FB1: 69.00 823 ORG: precision: 56.05%; recall: 63.53%; FB1: 59.56 1520 PER: precision: 65.01%; recall: 52.44%; FB1: 58.05 1486 Conclusions In this exercise, we created a simple transformer based named entity recognition model. 
We trained it on the CoNLL 2003 shared task data and got an overall F1 score of around 70%. State-of-the-art NER models built by fine-tuning pretrained models such as BERT or ELECTRA can easily achieve a much higher F1 score (between 90% and 95%) on this dataset, owing to the knowledge of words acquired during pretraining and the use of subword tokenization. Implementation of a dual encoder model for retrieving images that match natural language queries. Introduction This example demonstrates how to build a dual encoder (also known as two-tower) neural network model to search for images using natural language. The model is inspired by the CLIP approach, introduced by Alec Radford et al. The idea is to train a vision encoder and a text encoder jointly to project the representation of images and their captions into the same embedding space, such that the caption embeddings are located near the embeddings of the images they describe. This example requires TensorFlow 2.4 or higher. In addition, TensorFlow Hub and TensorFlow Text are required for the BERT model, and TensorFlow Addons is required for the AdamW optimizer. These libraries can be installed using the following command: pip install -q -U tensorflow-hub tensorflow-text tensorflow-addons Setup import os import collections import json import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_hub as hub import tensorflow_text as text import tensorflow_addons as tfa import matplotlib.pyplot as plt import matplotlib.image as mpimg from tqdm import tqdm # Suppressing tf.hub warnings tf.get_logger().setLevel(\"ERROR\") Prepare the data We will use the MS-COCO dataset to train our dual encoder model. MS-COCO contains over 82,000 images, each of which has at least 5 different caption annotations. The dataset is usually used for image captioning tasks, but we can repurpose the image-caption pairs to train our dual encoder model for image search. Download and extract the data First, let's download the dataset, which consists of two compressed folders: one with images, and the other with associated image captions. Note that the compressed images folder is 13GB in size.
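Before preparing the data, it may help to make the training idea above concrete. The sketch below illustrates the CLIP-style objective a dual encoder typically optimizes: a symmetric cross-entropy over the in-batch image-caption similarity matrix. It is an illustration only, not necessarily the exact loss implemented for this model, and the temperature value is arbitrary:

```python
def clip_style_loss(image_embeddings, caption_embeddings, temperature=0.05):
    # Cosine similarities between every caption and every image in the batch.
    image_embeddings = tf.math.l2_normalize(image_embeddings, axis=1)
    caption_embeddings = tf.math.l2_normalize(caption_embeddings, axis=1)
    logits = tf.matmul(caption_embeddings, image_embeddings, transpose_b=True)
    logits = logits / temperature
    # Matching image-caption pairs sit on the diagonal of the similarity matrix.
    labels = tf.range(tf.shape(logits)[0])
    captions_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, logits, from_logits=True
    )
    images_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, tf.transpose(logits), from_logits=True
    )
    return tf.reduce_mean(captions_loss + images_loss) / 2.0
```

Intuitively, each caption has to pick out its own image among all the images in the batch, and vice versa.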
root_dir = \"datasets\" annotations_dir = os.path.join(root_dir, \"annotations\") images_dir = os.path.join(root_dir, \"train2014\") tfrecords_dir = os.path.join(root_dir, \"tfrecords\") annotation_file = os.path.join(annotations_dir, \"captions_train2014.json\") # Download caption annotation files if not os.path.exists(annotations_dir): annotation_zip = tf.keras.utils.get_file( \"captions.zip\", cache_dir=os.path.abspath(\".\"), origin=\"http://images.cocodataset.org/annotations/annotations_trainval2014.zip\", extract=True, ) os.remove(annotation_zip) # Download image files if not os.path.exists(images_dir): image_zip = tf.keras.utils.get_file( \"train2014.zip\", cache_dir=os.path.abspath(\".\"), origin=\"http://images.cocodataset.org/zips/train2014.zip\", extract=True, ) os.remove(image_zip) print(\"Dataset is downloaded and extracted successfully.\") with open(annotation_file, \"r\") as f: annotations = json.load(f)[\"annotations\"] image_path_to_caption = collections.defaultdict(list) for element in annotations: caption = f\"{element['caption'].lower().rstrip('.')}\" image_path = images_dir + \"/COCO_train2014_\" + \"%012d.jpg\" % (element[\"image_id\"]) image_path_to_caption[image_path].append(caption) image_paths = list(image_path_to_caption.keys()) print(f\"Number of images: {len(image_paths)}\") Downloading data from http://images.cocodataset.org/annotations/annotations_trainval2014.zip 252878848/252872794 [==============================] - 5s 0us/step Downloading data from http://images.cocodataset.org/zips/train2014.zip 13510574080/13510573713 [==============================] - 394s 0us/step Dataset is downloaded and extracted successfully. Number of images: 82783 Process and save the data to TFRecord files You can change the sample_size parameter to control many image-caption pairs will be used for training the dual encoder model. In this example we set train_size to 30,000 images, which is about 35% of the dataset. We use 2 captions for each image, thus producing 60,000 image-caption pairs. The size of the training set affects the quality of the produced encoders, but more examples would lead to longer training time. 
train_size = 30000 valid_size = 5000 captions_per_image = 2 images_per_file = 2000 train_image_paths = image_paths[:train_size] num_train_files = int(np.ceil(train_size / images_per_file)) train_files_prefix = os.path.join(tfrecords_dir, \"train\") valid_image_paths = image_paths[-valid_size:] num_valid_files = int(np.ceil(valid_size / images_per_file)) valid_files_prefix = os.path.join(tfrecords_dir, \"valid\") tf.io.gfile.makedirs(tfrecords_dir) def bytes_feature(value): return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value])) def create_example(image_path, caption): feature = { \"caption\": bytes_feature(caption.encode()), \"raw_image\": bytes_feature(tf.io.read_file(image_path).numpy()), } return tf.train.Example(features=tf.train.Features(feature=feature)) def write_tfrecords(file_name, image_paths): caption_list = [] image_path_list = [] for image_path in image_paths: captions = image_path_to_caption[image_path][:captions_per_image] caption_list.extend(captions) image_path_list.extend([image_path] * len(captions)) with tf.io.TFRecordWriter(file_name) as writer: for example_idx in range(len(image_path_list)): example = create_example( image_path_list[example_idx], caption_list[example_idx] ) writer.write(example.SerializeToString()) return example_idx + 1 def write_data(image_paths, num_files, files_prefix): example_counter = 0 for file_idx in tqdm(range(num_files)): file_name = files_prefix + \"-%02d.tfrecord\" % (file_idx) start_idx = images_per_file * file_idx end_idx = start_idx + images_per_file example_counter += write_tfrecords(file_name, image_paths[start_idx:end_idx]) return example_counter train_example_count = write_data(train_image_paths, num_train_files, train_files_prefix) print(f\"{train_example_count} training examples were written to tfrecord files.\") valid_example_count = write_data(valid_image_paths, num_valid_files, valid_files_prefix) print(f\"{valid_example_count} evaluation examples were written to tfrecord files.\") 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [03:19<00:00, 13.27s/it] 0%| | 0/3 [00:00\", \" \") return tf.strings.regex_replace( stripped_html, f\"[{re.escape(string.punctuation)}]\", \"\" ) vectorizer = layers.TextVectorization( 3000, standardize=custom_standardization, output_sequence_length=150 ) # Adapting the dataset vectorizer.adapt( train_dataset.map(lambda x, y: x, num_parallel_calls=tf.data.AUTOTUNE).batch(256) ) def vectorize_text(text, label): text = vectorizer(text) return text, label train_dataset = train_dataset.map( vectorize_text, num_parallel_calls=tf.data.AUTOTUNE ).prefetch(tf.data.AUTOTUNE) pool_negatives = pool_negatives.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE) pool_positives = pool_positives.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE) val_dataset = val_dataset.batch(256).map( vectorize_text, num_parallel_calls=tf.data.AUTOTUNE ) test_dataset = test_dataset.batch(256).map( vectorize_text, num_parallel_calls=tf.data.AUTOTUNE ) Creating Helper Functions # Helper function for merging new history objects with older ones def append_history(losses, val_losses, accuracy, val_accuracy, history): losses = losses + history.history[\"loss\"] val_losses = val_losses + history.history[\"val_loss\"] accuracy = accuracy + history.history[\"binary_accuracy\"] val_accuracy = val_accuracy + history.history[\"val_binary_accuracy\"] return losses, val_losses, accuracy, val_accuracy # Plotter function def 
plot_history(losses, val_losses, accuracies, val_accuracies): plt.plot(losses) plt.plot(val_losses) plt.legend([\"train_loss\", \"val_loss\"]) plt.xlabel(\"Epochs\") plt.ylabel(\"Loss\") plt.show() plt.plot(accuracies) plt.plot(val_accuracies) plt.legend([\"train_accuracy\", \"val_accuracy\"]) plt.xlabel(\"Epochs\") plt.ylabel(\"Accuracy\") plt.show() Creating the Model We create a small bidirectional LSTM model. When using Active Learning, you should make sure that the model architecture is capable of overfitting to the initial data. Overfitting gives a strong hint that the model will have enough capacity for future, unseen data. def create_model(): model = keras.models.Sequential( [ layers.Input(shape=(150,)), layers.Embedding(input_dim=3000, output_dim=128), layers.Bidirectional(layers.LSTM(32, return_sequences=True)), layers.GlobalMaxPool1D(), layers.Dense(20, activation=\"relu\"), layers.Dropout(0.5), layers.Dense(1, activation=\"sigmoid\"), ] ) model.summary() return model Training on the entire dataset To show the effectiveness of Active Learning, we will first train the model on the entire dataset containing 40,000 labeled samples. This model will be used for comparison later. def train_full_model(full_train_dataset, val_dataset, test_dataset): model = create_model() model.compile( loss=\"binary_crossentropy\", optimizer=\"rmsprop\", metrics=[ keras.metrics.BinaryAccuracy(), keras.metrics.FalseNegatives(), keras.metrics.FalsePositives(), ], ) # We will save the best model at every epoch and load the best one for evaluation on the test set history = model.fit( full_train_dataset.batch(256), epochs=20, validation_data=val_dataset, callbacks=[ keras.callbacks.EarlyStopping(patience=4, verbose=1), keras.callbacks.ModelCheckpoint( \"FullModelCheckpoint.h5\", verbose=1, save_best_only=True ), ], ) # Plot history plot_history( history.history[\"loss\"], history.history[\"val_loss\"], history.history[\"binary_accuracy\"], history.history[\"val_binary_accuracy\"], ) # Loading the best checkpoint model = keras.models.load_model(\"FullModelCheckpoint.h5\") print(\"-\" * 100) print( \"Test set evaluation: \", model.evaluate(test_dataset, verbose=0, return_dict=True), ) print(\"-\" * 100) return model # Sampling the full train dataset to train on full_train_dataset = ( train_dataset.concatenate(pool_positives) .concatenate(pool_negatives) .cache() .shuffle(20000) ) # Training the full model full_dataset_model = train_full_model(full_train_dataset, val_dataset, test_dataset) Model: \"sequential\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding (Embedding) (None, 150, 128) 384000 bidirectional (Bidirectiona (None, 150, 64) 41216 l) global_max_pooling1d (Globa (None, 64) 0 lMaxPooling1D) dense (Dense) (None, 20) 1300 dropout (Dropout) (None, 20) 0 dense_1 (Dense) (None, 1) 21 ================================================================= Total params: 426,537 Trainable params: 426,537 Non-trainable params: 0 _________________________________________________________________ Epoch 1/20 156/157 [============================>.] 
- ETA: 0s - loss: 0.5150 - binary_accuracy: 0.7615 - false_negatives: 3314.0000 - false_positives: 6210.0000 Epoch 00001: val_loss improved from inf to 0.47791, saving model to FullModelCheckpoint.h5 157/157 [==============================] - 25s 103ms/step - loss: 0.5148 - binary_accuracy: 0.7617 - false_negatives: 3316.0000 - false_positives: 6217.0000 - val_loss: 0.4779 - val_binary_accuracy: 0.7858 - val_false_negatives: 970.0000 - val_false_positives: 101.0000 Epoch 2/20 156/157 [============================>.] - ETA: 0s - loss: 0.3659 - binary_accuracy: 0.8500 - false_negatives: 2833.0000 - false_positives: 3158.0000 Epoch 00002: val_loss improved from 0.47791 to 0.35345, saving model to FullModelCheckpoint.h5 157/157 [==============================] - 9s 59ms/step - loss: 0.3656 - binary_accuracy: 0.8501 - false_negatives: 2836.0000 - false_positives: 3159.0000 - val_loss: 0.3535 - val_binary_accuracy: 0.8502 - val_false_negatives: 363.0000 - val_false_positives: 386.0000 Epoch 3/20 156/157 [============================>.] - ETA: 0s - loss: 0.3319 - binary_accuracy: 0.8653 - false_negatives: 2507.0000 - false_positives: 2873.0000 Epoch 00003: val_loss improved from 0.35345 to 0.33150, saving model to FullModelCheckpoint.h5 157/157 [==============================] - 9s 55ms/step - loss: 0.3319 - binary_accuracy: 0.8652 - false_negatives: 2512.0000 - false_positives: 2878.0000 - val_loss: 0.3315 - val_binary_accuracy: 0.8576 - val_false_negatives: 423.0000 - val_false_positives: 289.0000 Epoch 4/20 156/157 [============================>.] - ETA: 0s - loss: 0.3130 - binary_accuracy: 0.8764 - false_negatives: 2398.0000 - false_positives: 2538.0000 Epoch 00004: val_loss did not improve from 0.33150 157/157 [==============================] - 9s 55ms/step - loss: 0.3129 - binary_accuracy: 0.8763 - false_negatives: 2404.0000 - false_positives: 2542.0000 - val_loss: 0.3328 - val_binary_accuracy: 0.8586 - val_false_negatives: 263.0000 - val_false_positives: 444.0000 Epoch 5/20 156/157 [============================>.] - ETA: 0s - loss: 0.2918 - binary_accuracy: 0.8867 - false_negatives: 2141.0000 - false_positives: 2385.0000 Epoch 00005: val_loss did not improve from 0.33150 157/157 [==============================] - 9s 55ms/step - loss: 0.2917 - binary_accuracy: 0.8867 - false_negatives: 2143.0000 - false_positives: 2388.0000 - val_loss: 0.3762 - val_binary_accuracy: 0.8468 - val_false_negatives: 476.0000 - val_false_positives: 290.0000 Epoch 6/20 156/157 [============================>.] - ETA: 0s - loss: 0.2819 - binary_accuracy: 0.8901 - false_negatives: 2112.0000 - false_positives: 2277.0000 Epoch 00006: val_loss did not improve from 0.33150 157/157 [==============================] - 9s 55ms/step - loss: 0.2819 - binary_accuracy: 0.8902 - false_negatives: 2112.0000 - false_positives: 2282.0000 - val_loss: 0.4018 - val_binary_accuracy: 0.8312 - val_false_negatives: 694.0000 - val_false_positives: 150.0000 Epoch 7/20 156/157 [============================>.] 
- ETA: 0s - loss: 0.2650 - binary_accuracy: 0.8992 - false_negatives: 1902.0000 - false_positives: 2122.0000 Epoch 00007: val_loss improved from 0.33150 to 0.32843, saving model to FullModelCheckpoint.h5 157/157 [==============================] - 9s 55ms/step - loss: 0.2649 - binary_accuracy: 0.8992 - false_negatives: 1908.0000 - false_positives: 2123.0000 - val_loss: 0.3284 - val_binary_accuracy: 0.8578 - val_false_negatives: 274.0000 - val_false_positives: 437.0000 Epoch 8/20 157/157 [==============================] - ETA: 0s - loss: 0.2508 - binary_accuracy: 0.9051 - false_negatives: 1821.0000 - false_positives: 1974.0000 Epoch 00008: val_loss did not improve from 0.32843 157/157 [==============================] - 9s 55ms/step - loss: 0.2508 - binary_accuracy: 0.9051 - false_negatives: 1821.0000 - false_positives: 1974.0000 - val_loss: 0.4806 - val_binary_accuracy: 0.8194 - val_false_negatives: 788.0000 - val_false_positives: 115.0000 Epoch 9/20 156/157 [============================>.] - ETA: 0s - loss: 0.2377 - binary_accuracy: 0.9112 - false_negatives: 1771.0000 - false_positives: 1775.0000 Epoch 00009: val_loss did not improve from 0.32843 157/157 [==============================] - 9s 54ms/step - loss: 0.2378 - binary_accuracy: 0.9112 - false_negatives: 1775.0000 - false_positives: 1777.0000 - val_loss: 0.3378 - val_binary_accuracy: 0.8562 - val_false_negatives: 335.0000 - val_false_positives: 384.0000 Epoch 10/20 156/157 [============================>.] - ETA: 0s - loss: 0.2209 - binary_accuracy: 0.9195 - false_negatives: 1591.0000 - false_positives: 1623.0000 Epoch 00010: val_loss did not improve from 0.32843 157/157 [==============================] - 9s 55ms/step - loss: 0.2211 - binary_accuracy: 0.9195 - false_negatives: 1594.0000 - false_positives: 1627.0000 - val_loss: 0.3475 - val_binary_accuracy: 0.8556 - val_false_negatives: 425.0000 - val_false_positives: 297.0000 Epoch 11/20 156/157 [============================>.] - ETA: 0s - loss: 0.2060 - binary_accuracy: 0.9251 - false_negatives: 1512.0000 - false_positives: 1479.0000 Epoch 00011: val_loss did not improve from 0.32843 157/157 [==============================] - 9s 55ms/step - loss: 0.2061 - binary_accuracy: 0.9251 - false_negatives: 1517.0000 - false_positives: 1479.0000 - val_loss: 0.3823 - val_binary_accuracy: 0.8522 - val_false_negatives: 276.0000 - val_false_positives: 463.0000 Epoch 00011: early stopping png png ---------------------------------------------------------------------------------------------------- Test set evaluation: {'loss': 0.34183189272880554, 'binary_accuracy': 0.8579999804496765, 'false_negatives': 295.0, 'false_positives': 415.0} ---------------------------------------------------------------------------------------------------- Training via Active Learning The general process we follow when performing Active Learning is demonstrated below: Active Learning The pipeline can be summarized in five parts: Sample and annotate a small, balanced training dataset Train the model on this small subset Evaluate the model on a balanced testing set If the model satisfies the business criteria, deploy it in a real time setting If it doesn't pass the criteria, sample a few more samples according to the ratio of false positives and negatives, add them to the training set and repeat from step 2 till the model passes the tests or till all available data is exhausted. 
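Before looking at the full training loop, here is a minimal sketch of the ratio-based sampling rule it uses. The helper name ratio_sample_counts and the worked numbers are illustrative, not part of the original example; in the actual loop below, the ratios are computed inline after model.evaluate() and the samples are drawn with pool_positives.take() and pool_negatives.take().

# Illustrative sketch of the ratio-based sampling rule (not part of the original code).
def ratio_sample_counts(false_negatives, false_positives, sampling_size=5000):
    # Returns (positives_to_sample, negatives_to_sample), mirroring the logic in the loop below.
    if false_negatives != 0 and false_positives != 0:
        total = false_negatives + false_positives
        sample_ratio_ones = false_positives / total
        sample_ratio_zeros = false_negatives / total
    else:
        # If either count is zero, fall back to sampling both classes equally.
        sample_ratio_ones, sample_ratio_zeros = 0.5, 0.5
    return int(sample_ratio_ones * sampling_size), int(sample_ratio_zeros * sampling_size)

print(ratio_sample_counts(false_negatives=665, false_positives=234))  # -> (1301, 3698)

These are the same counts and ratios printed after the first Active Learning iteration in the logs further down (665 and 234 misclassifications, giving sample ratios of roughly 0.74 and 0.26).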
For the code below, we will perform ratio-based sampling (see the sketch above): the number of positive and negative samples drawn from the pools is proportional to the number of false positives and false negatives, respectively, that the current model makes on the test set. Active Learning techniques use callbacks extensively for progress tracking. We will be using model checkpointing and early stopping for this example. The patience parameter for EarlyStopping helps limit both overfitting and training time. We have set patience=4 for now, but since the model is robust, the patience level can be increased if desired. Note: we are not loading the checkpoint after the first training iteration. In my experience working on Active Learning techniques, this helps the model probe the newly formed loss landscape. Even if the model fails to improve in the second iteration, we will still gain insight into the likely false positive and false negative rates. This will help us sample a better set in the next iteration, where the model will have a greater chance to improve. def train_active_learning_models( train_dataset, pool_negatives, pool_positives, val_dataset, test_dataset, num_iterations=3, sampling_size=5000, ): # Creating lists for storing metrics losses, val_losses, accuracies, val_accuracies = [], [], [], [] model = create_model() # We will monitor the false positives and false negatives predicted by our model # These will decide the subsequent sampling ratio for every Active Learning loop model.compile( loss=\"binary_crossentropy\", optimizer=\"rmsprop\", metrics=[ keras.metrics.BinaryAccuracy(), keras.metrics.FalseNegatives(), keras.metrics.FalsePositives(), ], ) # Defining checkpoints. # The checkpoint callback is reused throughout the training since it only saves the best overall model. checkpoint = keras.callbacks.ModelCheckpoint( \"AL_Model.h5\", save_best_only=True, verbose=1 ) # Here, patience is set to 4. This can be set higher if desired.
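# With EarlyStopping's default monitor ('val_loss'), patience=4 stops a training round after four consecutive epochs without improvement in validation loss.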
early_stopping = keras.callbacks.EarlyStopping(patience=4, verbose=1) print(f\"Starting to train with {len(train_dataset)} samples\") # Initial fit with a small subset of the training set history = model.fit( train_dataset.cache().shuffle(20000).batch(256), epochs=20, validation_data=val_dataset, callbacks=[checkpoint, early_stopping], ) # Appending history losses, val_losses, accuracies, val_accuracies = append_history( losses, val_losses, accuracies, val_accuracies, history ) for iteration in range(num_iterations): # Getting predictions from previously trained model predictions = model.predict(test_dataset) # Generating labels from the output probabilities rounded = tf.where(tf.greater(predictions, 0.5), 1, 0) # Evaluating the number of zeros and ones incorrrectly classified _, _, false_negatives, false_positives = model.evaluate(test_dataset, verbose=0) print(\"-\" * 100) print( f\"Number of zeros incorrectly classified: {false_negatives}, Number of ones incorrectly classified: {false_positives}\" ) # This technique of Active Learning demonstrates ratio based sampling where # Number of ones/zeros to sample = Number of ones/zeros incorrectly classified / Total incorrectly classified if false_negatives != 0 and false_positives != 0: total = false_negatives + false_positives sample_ratio_ones, sample_ratio_zeros = ( false_positives / total, false_negatives / total, ) # In the case where all samples are correctly predicted, we can sample both classes equally else: sample_ratio_ones, sample_ratio_zeros = 0.5, 0.5 print( f\"Sample ratio for positives: {sample_ratio_ones}, Sample ratio for negatives:{sample_ratio_zeros}\" ) # Sample the required number of ones and zeros sampled_dataset = pool_negatives.take( int(sample_ratio_zeros * sampling_size) ).concatenate(pool_positives.take(int(sample_ratio_ones * sampling_size))) # Skip the sampled data points to avoid repetition of sample pool_negatives = pool_negatives.skip(int(sample_ratio_zeros * sampling_size)) pool_positives = pool_positives.skip(int(sample_ratio_ones * sampling_size)) # Concatenating the train_dataset with the sampled_dataset train_dataset = train_dataset.concatenate(sampled_dataset).prefetch( tf.data.AUTOTUNE ) print(f\"Starting training with {len(train_dataset)} samples\") print(\"-\" * 100) # We recompile the model to reset the optimizer states and retrain the model model.compile( loss=\"binary_crossentropy\", optimizer=\"rmsprop\", metrics=[ keras.metrics.BinaryAccuracy(), keras.metrics.FalseNegatives(), keras.metrics.FalsePositives(), ], ) history = model.fit( train_dataset.cache().shuffle(20000).batch(256), validation_data=val_dataset, epochs=20, callbacks=[ checkpoint, keras.callbacks.EarlyStopping(patience=4, verbose=1), ], ) # Appending the history losses, val_losses, accuracies, val_accuracies = append_history( losses, val_losses, accuracies, val_accuracies, history ) # Loading the best model from this training loop model = keras.models.load_model(\"AL_Model.h5\") # Plotting the overall history and evaluating the final model plot_history(losses, val_losses, accuracies, val_accuracies) print(\"-\" * 100) print( \"Test set evaluation: \", model.evaluate(test_dataset, verbose=0, return_dict=True), ) print(\"-\" * 100) return model active_learning_model = train_active_learning_models( train_dataset, pool_negatives, pool_positives, val_dataset, test_dataset ) Model: \"sequential_1\" _________________________________________________________________ Layer (type) Output Shape Param # 
================================================================= embedding_1 (Embedding) (None, 150, 128) 384000 bidirectional_1 (Bidirectio (None, 150, 64) 41216 nal) global_max_pooling1d_1 (Glo (None, 64) 0 balMaxPooling1D) dense_2 (Dense) (None, 20) 1300 dropout_1 (Dropout) (None, 20) 0 dense_3 (Dense) (None, 1) 21 ================================================================= Total params: 426,537 Trainable params: 426,537 Non-trainable params: 0 _________________________________________________________________ Starting to train with 15000 samples Epoch 1/20 59/59 [==============================] - ETA: 0s - loss: 0.6235 - binary_accuracy: 0.6679 - false_negatives_1: 3111.0000 - false_positives_1: 1870.0000 Epoch 00001: val_loss improved from inf to 0.43017, saving model to AL_Model.h5 59/59 [==============================] - 13s 87ms/step - loss: 0.6235 - binary_accuracy: 0.6679 - false_negatives_1: 3111.0000 - false_positives_1: 1870.0000 - val_loss: 0.4302 - val_binary_accuracy: 0.8286 - val_false_negatives_1: 513.0000 - val_false_positives_1: 344.0000 Epoch 2/20 58/59 [============================>.] - ETA: 0s - loss: 0.4381 - binary_accuracy: 0.8232 - false_negatives_1: 1412.0000 - false_positives_1: 1213.0000 Epoch 00002: val_loss improved from 0.43017 to 0.40090, saving model to AL_Model.h5 59/59 [==============================] - 4s 64ms/step - loss: 0.4373 - binary_accuracy: 0.8235 - false_negatives_1: 1423.0000 - false_positives_1: 1225.0000 - val_loss: 0.4009 - val_binary_accuracy: 0.8248 - val_false_negatives_1: 674.0000 - val_false_positives_1: 202.0000 Epoch 3/20 58/59 [============================>.] - ETA: 0s - loss: 0.3810 - binary_accuracy: 0.8544 - false_negatives_1: 1115.0000 - false_positives_1: 1047.0000 Epoch 00003: val_loss improved from 0.40090 to 0.36085, saving model to AL_Model.h5 59/59 [==============================] - 4s 61ms/step - loss: 0.3805 - binary_accuracy: 0.8545 - false_negatives_1: 1123.0000 - false_positives_1: 1060.0000 - val_loss: 0.3608 - val_binary_accuracy: 0.8408 - val_false_negatives_1: 231.0000 - val_false_positives_1: 565.0000 Epoch 4/20 58/59 [============================>.] - ETA: 0s - loss: 0.3436 - binary_accuracy: 0.8647 - false_negatives_1: 995.0000 - false_positives_1: 1014.0000 Epoch 00004: val_loss improved from 0.36085 to 0.35469, saving model to AL_Model.h5 59/59 [==============================] - 4s 61ms/step - loss: 0.3428 - binary_accuracy: 0.8654 - false_negatives_1: 999.0000 - false_positives_1: 1020.0000 - val_loss: 0.3547 - val_binary_accuracy: 0.8452 - val_false_negatives_1: 266.0000 - val_false_positives_1: 508.0000 Epoch 5/20 58/59 [============================>.] - ETA: 0s - loss: 0.3166 - binary_accuracy: 0.8834 - false_negatives_1: 835.0000 - false_positives_1: 897.0000 Epoch 00005: val_loss did not improve from 0.35469 59/59 [==============================] - 4s 60ms/step - loss: 0.3163 - binary_accuracy: 0.8835 - false_negatives_1: 839.0000 - false_positives_1: 908.0000 - val_loss: 0.3554 - val_binary_accuracy: 0.8508 - val_false_negatives_1: 382.0000 - val_false_positives_1: 364.0000 Epoch 6/20 58/59 [============================>.] 
- ETA: 0s - loss: 0.2935 - binary_accuracy: 0.8944 - false_negatives_1: 757.0000 - false_positives_1: 811.0000 Epoch 00006: val_loss did not improve from 0.35469 59/59 [==============================] - 4s 60ms/step - loss: 0.2938 - binary_accuracy: 0.8945 - false_negatives_1: 765.0000 - false_positives_1: 818.0000 - val_loss: 0.3718 - val_binary_accuracy: 0.8458 - val_false_negatives_1: 345.0000 - val_false_positives_1: 426.0000 Epoch 7/20 58/59 [============================>.] - ETA: 0s - loss: 0.2794 - binary_accuracy: 0.9003 - false_negatives_1: 732.0000 - false_positives_1: 748.0000 Epoch 00007: val_loss did not improve from 0.35469 59/59 [==============================] - 3s 59ms/step - loss: 0.2797 - binary_accuracy: 0.9001 - false_negatives_1: 749.0000 - false_positives_1: 749.0000 - val_loss: 0.3825 - val_binary_accuracy: 0.8406 - val_false_negatives_1: 228.0000 - val_false_positives_1: 569.0000 Epoch 8/20 58/59 [============================>.] - ETA: 0s - loss: 0.2526 - binary_accuracy: 0.9147 - false_negatives_1: 620.0000 - false_positives_1: 647.0000 Epoch 00008: val_loss did not improve from 0.35469 59/59 [==============================] - 4s 60ms/step - loss: 0.2561 - binary_accuracy: 0.9134 - false_negatives_1: 620.0000 - false_positives_1: 679.0000 - val_loss: 0.4109 - val_binary_accuracy: 0.8258 - val_false_negatives_1: 622.0000 - val_false_positives_1: 249.0000 Epoch 00008: early stopping ---------------------------------------------------------------------------------------------------- Number of zeros incorrectly classified: 665.0, Number of ones incorrectly classified: 234.0 Sample ratio for positives: 0.26028921023359286, Sample ratio for negatives:0.7397107897664071 Starting training with 19999 samples ---------------------------------------------------------------------------------------------------- Epoch 1/20 78/79 [============================>.] - ETA: 0s - loss: 0.2955 - binary_accuracy: 0.8902 - false_negatives_2: 1091.0000 - false_positives_2: 1101.0000 Epoch 00001: val_loss did not improve from 0.35469 79/79 [==============================] - 15s 83ms/step - loss: 0.2956 - binary_accuracy: 0.8901 - false_negatives_2: 1095.0000 - false_positives_2: 1102.0000 - val_loss: 0.4136 - val_binary_accuracy: 0.8238 - val_false_negatives_2: 156.0000 - val_false_positives_2: 725.0000 Epoch 2/20 78/79 [============================>.] - ETA: 0s - loss: 0.2657 - binary_accuracy: 0.9047 - false_negatives_2: 953.0000 - false_positives_2: 949.0000 Epoch 00002: val_loss did not improve from 0.35469 79/79 [==============================] - 5s 61ms/step - loss: 0.2659 - binary_accuracy: 0.9047 - false_negatives_2: 954.0000 - false_positives_2: 951.0000 - val_loss: 0.4079 - val_binary_accuracy: 0.8386 - val_false_negatives_2: 510.0000 - val_false_positives_2: 297.0000 Epoch 3/20 78/79 [============================>.] - ETA: 0s - loss: 0.2475 - binary_accuracy: 0.9126 - false_negatives_2: 892.0000 - false_positives_2: 854.0000 Epoch 00003: val_loss did not improve from 0.35469 79/79 [==============================] - 5s 58ms/step - loss: 0.2474 - binary_accuracy: 0.9126 - false_negatives_2: 893.0000 - false_positives_2: 855.0000 - val_loss: 0.4207 - val_binary_accuracy: 0.8364 - val_false_negatives_2: 228.0000 - val_false_positives_2: 590.0000 Epoch 4/20 78/79 [============================>.] 
- ETA: 0s - loss: 0.2319 - binary_accuracy: 0.9193 - false_negatives_2: 805.0000 - false_positives_2: 807.0000 Epoch 00004: val_loss did not improve from 0.35469 79/79 [==============================] - 5s 57ms/step - loss: 0.2319 - binary_accuracy: 0.9192 - false_negatives_2: 807.0000 - false_positives_2: 808.0000 - val_loss: 0.4080 - val_binary_accuracy: 0.8310 - val_false_negatives_2: 264.0000 - val_false_positives_2: 581.0000 Epoch 5/20 78/79 [============================>.] - ETA: 0s - loss: 0.2133 - binary_accuracy: 0.9260 - false_negatives_2: 728.0000 - false_positives_2: 750.0000 Epoch 00005: val_loss did not improve from 0.35469 79/79 [==============================] - 5s 57ms/step - loss: 0.2133 - binary_accuracy: 0.9259 - false_negatives_2: 729.0000 - false_positives_2: 752.0000 - val_loss: 0.4054 - val_binary_accuracy: 0.8394 - val_false_negatives_2: 371.0000 - val_false_positives_2: 432.0000 Epoch 6/20 78/79 [============================>.] - ETA: 0s - loss: 0.1982 - binary_accuracy: 0.9361 - false_negatives_2: 639.0000 - false_positives_2: 636.0000 Epoch 00006: val_loss did not improve from 0.35469 79/79 [==============================] - 5s 57ms/step - loss: 0.1980 - binary_accuracy: 0.9362 - false_negatives_2: 639.0000 - false_positives_2: 636.0000 - val_loss: 0.5185 - val_binary_accuracy: 0.8284 - val_false_negatives_2: 590.0000 - val_false_positives_2: 268.0000 Epoch 7/20 78/79 [============================>.] - ETA: 0s - loss: 0.1887 - binary_accuracy: 0.9409 - false_negatives_2: 606.0000 - false_positives_2: 575.0000 Epoch 00007: val_loss did not improve from 0.35469 79/79 [==============================] - 5s 57ms/step - loss: 0.1886 - binary_accuracy: 0.9408 - false_negatives_2: 606.0000 - false_positives_2: 577.0000 - val_loss: 0.6881 - val_binary_accuracy: 0.7886 - val_false_negatives_2: 893.0000 - val_false_positives_2: 164.0000 Epoch 8/20 78/79 [============================>.] - ETA: 0s - loss: 0.1778 - binary_accuracy: 0.9443 - false_negatives_2: 575.0000 - false_positives_2: 538.0000 Epoch 00008: val_loss did not improve from 0.35469 79/79 [==============================] - 5s 57ms/step - loss: 0.1776 - binary_accuracy: 0.9443 - false_negatives_2: 575.0000 - false_positives_2: 538.0000 - val_loss: 0.5921 - val_binary_accuracy: 0.8244 - val_false_negatives_2: 634.0000 - val_false_positives_2: 244.0000 Epoch 9/20 78/79 [============================>.] - ETA: 0s - loss: 0.1598 - binary_accuracy: 0.9505 - false_negatives_2: 507.0000 - false_positives_2: 481.0000 Epoch 00009: val_loss did not improve from 0.35469 79/79 [==============================] - 5s 57ms/step - loss: 0.1597 - binary_accuracy: 0.9506 - false_negatives_2: 507.0000 - false_positives_2: 481.0000 - val_loss: 0.5393 - val_binary_accuracy: 0.8214 - val_false_negatives_2: 542.0000 - val_false_positives_2: 351.0000 Epoch 00009: early stopping ---------------------------------------------------------------------------------------------------- Number of zeros incorrectly classified: 270.0, Number of ones incorrectly classified: 498.0 Sample ratio for positives: 0.6484375, Sample ratio for negatives:0.3515625 Starting training with 24998 samples ---------------------------------------------------------------------------------------------------- Epoch 1/20 97/98 [============================>.] 
- ETA: 0s - loss: 0.3554 - binary_accuracy: 0.8609 - false_negatives_3: 1714.0000 - false_positives_3: 1739.0000 Epoch 00001: val_loss improved from 0.35469 to 0.34182, saving model to AL_Model.h5 98/98 [==============================] - 17s 82ms/step - loss: 0.3548 - binary_accuracy: 0.8613 - false_negatives_3: 1720.0000 - false_positives_3: 1748.0000 - val_loss: 0.3418 - val_binary_accuracy: 0.8528 - val_false_negatives_3: 369.0000 - val_false_positives_3: 367.0000 Epoch 2/20 97/98 [============================>.] - ETA: 0s - loss: 0.3176 - binary_accuracy: 0.8785 - false_negatives_3: 1473.0000 - false_positives_3: 1544.0000 Epoch 00002: val_loss did not improve from 0.34182 98/98 [==============================] - 6s 56ms/step - loss: 0.3179 - binary_accuracy: 0.8784 - false_negatives_3: 1479.0000 - false_positives_3: 1560.0000 - val_loss: 0.4785 - val_binary_accuracy: 0.8102 - val_false_negatives_3: 793.0000 - val_false_positives_3: 156.0000 Epoch 3/20 97/98 [============================>.] - ETA: 0s - loss: 0.2986 - binary_accuracy: 0.8893 - false_negatives_3: 1353.0000 - false_positives_3: 1396.0000 Epoch 00003: val_loss did not improve from 0.34182 98/98 [==============================] - 5s 56ms/step - loss: 0.2985 - binary_accuracy: 0.8893 - false_negatives_3: 1366.0000 - false_positives_3: 1402.0000 - val_loss: 0.3473 - val_binary_accuracy: 0.8542 - val_false_negatives_3: 340.0000 - val_false_positives_3: 389.0000 Epoch 4/20 97/98 [============================>.] - ETA: 0s - loss: 0.2822 - binary_accuracy: 0.8970 - false_negatives_3: 1253.0000 - false_positives_3: 1305.0000 Epoch 00004: val_loss did not improve from 0.34182 98/98 [==============================] - 6s 56ms/step - loss: 0.2820 - binary_accuracy: 0.8971 - false_negatives_3: 1257.0000 - false_positives_3: 1316.0000 - val_loss: 0.3849 - val_binary_accuracy: 0.8386 - val_false_negatives_3: 537.0000 - val_false_positives_3: 270.0000 Epoch 5/20 97/98 [============================>.] - ETA: 0s - loss: 0.2666 - binary_accuracy: 0.9047 - false_negatives_3: 1130.0000 - false_positives_3: 1237.0000 Epoch 00005: val_loss did not improve from 0.34182 98/98 [==============================] - 6s 56ms/step - loss: 0.2666 - binary_accuracy: 0.9048 - false_negatives_3: 1142.0000 - false_positives_3: 1238.0000 - val_loss: 0.3731 - val_binary_accuracy: 0.8444 - val_false_negatives_3: 251.0000 - val_false_positives_3: 527.0000 Epoch 00005: early stopping ---------------------------------------------------------------------------------------------------- Number of zeros incorrectly classified: 392.0, Number of ones incorrectly classified: 356.0 Sample ratio for positives: 0.47593582887700536, Sample ratio for negatives:0.5240641711229946 Starting training with 29997 samples ---------------------------------------------------------------------------------------------------- Epoch 1/20 117/118 [============================>.] - ETA: 0s - loss: 0.3345 - binary_accuracy: 0.8720 - false_negatives_4: 1835.0000 - false_positives_4: 1998.0000 Epoch 00001: val_loss did not improve from 0.34182 118/118 [==============================] - 20s 96ms/step - loss: 0.3343 - binary_accuracy: 0.8722 - false_negatives_4: 1835.0000 - false_positives_4: 1999.0000 - val_loss: 0.3478 - val_binary_accuracy: 0.8488 - val_false_negatives_4: 250.0000 - val_false_positives_4: 506.0000 Epoch 2/20 117/118 [============================>.] 
- ETA: 0s - loss: 0.3061 - binary_accuracy: 0.8842 - false_negatives_4: 1667.0000 - false_positives_4: 1801.0000 Epoch 00002: val_loss improved from 0.34182 to 0.33779, saving model to AL_Model.h5 118/118 [==============================] - 7s 56ms/step - loss: 0.3059 - binary_accuracy: 0.8843 - false_negatives_4: 1670.0000 - false_positives_4: 1802.0000 - val_loss: 0.3378 - val_binary_accuracy: 0.8534 - val_false_negatives_4: 335.0000 - val_false_positives_4: 398.0000 Epoch 3/20 117/118 [============================>.] - ETA: 0s - loss: 0.2923 - binary_accuracy: 0.8921 - false_negatives_4: 1626.0000 - false_positives_4: 1607.0000 Epoch 00003: val_loss did not improve from 0.33779 118/118 [==============================] - 7s 56ms/step - loss: 0.2923 - binary_accuracy: 0.8921 - false_negatives_4: 1626.0000 - false_positives_4: 1611.0000 - val_loss: 0.3413 - val_binary_accuracy: 0.8486 - val_false_negatives_4: 269.0000 - val_false_positives_4: 488.0000 Epoch 4/20 117/118 [============================>.] - ETA: 0s - loss: 0.2746 - binary_accuracy: 0.8997 - false_negatives_4: 1459.0000 - false_positives_4: 1546.0000 Epoch 00004: val_loss did not improve from 0.33779 118/118 [==============================] - 7s 55ms/step - loss: 0.2746 - binary_accuracy: 0.8996 - false_negatives_4: 1465.0000 - false_positives_4: 1546.0000 - val_loss: 0.3810 - val_binary_accuracy: 0.8326 - val_false_negatives_4: 169.0000 - val_false_positives_4: 668.0000 Epoch 5/20 117/118 [============================>.] - ETA: 0s - loss: 0.2598 - binary_accuracy: 0.9066 - false_negatives_4: 1336.0000 - false_positives_4: 1462.0000 Epoch 00005: val_loss did not improve from 0.33779 118/118 [==============================] - 7s 56ms/step - loss: 0.2597 - binary_accuracy: 0.9066 - false_negatives_4: 1337.0000 - false_positives_4: 1465.0000 - val_loss: 0.4038 - val_binary_accuracy: 0.8332 - val_false_negatives_4: 643.0000 - val_false_positives_4: 191.0000 Epoch 6/20 117/118 [============================>.] - ETA: 0s - loss: 0.2461 - binary_accuracy: 0.9132 - false_negatives_4: 1263.0000 - false_positives_4: 1337.0000 Epoch 00006: val_loss did not improve from 0.33779 118/118 [==============================] - 7s 55ms/step - loss: 0.2462 - binary_accuracy: 0.9132 - false_negatives_4: 1263.0000 - false_positives_4: 1341.0000 - val_loss: 0.3546 - val_binary_accuracy: 0.8500 - val_false_negatives_4: 359.0000 - val_false_positives_4: 391.0000 Epoch 00006: early stopping png png ---------------------------------------------------------------------------------------------------- Test set evaluation: {'loss': 0.34248775243759155, 'binary_accuracy': 0.854200005531311, 'false_negatives_4': 348.0, 'false_positives_4': 381.0} ---------------------------------------------------------------------------------------------------- Conclusion Active Learning is a growing area of research. This example demonstrates the cost-efficiency benefits of using Active Learning, as it eliminates the need to annotate large amounts of data, saving resources. The following are some noteworthy observations from this example: We only require 30,000 samples to reach the same (if not better) scores as the model trained on the full datatset. This means that in a real life setting, we save the effort required for annotating 10,000 images! The number of false negatives and false positives are well balanced at the end of the training as compared to the skewed ratio obtained from the full training. 
This makes the model slightly more useful in real life scenarios where both the labels hold equal importance. For further reading about the types of sampling ratios, training techniques or available open source libraries/implementations, you can refer to the resources below: Active Learning Literature Survey (Burr Settles, 2010). modAL: A Modular Active Learning framework. Google's unofficial Active Learning playground. Natural Language Inference by fine-tuning BERT model on SNLI Corpus. Introduction Semantic Similarity is the task of determining how similar two sentences are, in terms of what they mean. This example demonstrates the use of SNLI (Stanford Natural Language Inference) Corpus to predict sentence semantic similarity with Transformers. We will fine-tune a BERT model that takes two sentences as inputs and that outputs a similarity score for these two sentences. References BERT SNLI Setup Note: install HuggingFace transformers via pip install transformers (version >= 2.11.0). import numpy as np import pandas as pd import tensorflow as tf import transformers Configuration max_length = 128 # Maximum length of input sentence to the model. batch_size = 32 epochs = 2 # Labels in our dataset. labels = [\"contradiction\", \"entailment\", \"neutral\"] Load the Data !curl -LO https://raw.githubusercontent.com/MohamadMerchant/SNLI/master/data.tar.gz !tar -xvzf data.tar.gz # There are more than 550k samples in total; we will use 100k for this example. train_df = pd.read_csv(\"SNLI_Corpus/snli_1.0_train.csv\", nrows=100000) valid_df = pd.read_csv(\"SNLI_Corpus/snli_1.0_dev.csv\") test_df = pd.read_csv(\"SNLI_Corpus/snli_1.0_test.csv\") # Shape of the data print(f\"Total train samples : {train_df.shape[0]}\") print(f\"Total validation samples: {valid_df.shape[0]}\") print(f\"Total test samples: {valid_df.shape[0]}\") % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 11.1M 100 11.1M 0 0 5231k 0 0:00:02 0:00:02 --:--:-- 5231k SNLI_Corpus/ SNLI_Corpus/snli_1.0_dev.csv SNLI_Corpus/snli_1.0_train.csv SNLI_Corpus/snli_1.0_test.csv Total train samples : 100000 Total validation samples: 10000 Total test samples: 10000 Dataset Overview: sentence1: The premise caption that was supplied to the author of the pair. sentence2: The hypothesis caption that was written by the author of the pair. similarity: This is the label chosen by the majority of annotators. Where no majority exists, the label \"-\" is used (we will skip such samples here). Here are the \"similarity\" label values in our dataset: Contradiction: The sentences share no similarity. Entailment: The sentences have similar meaning. Neutral: The sentences are neutral. Let's look at one sample from the dataset: print(f\"Sentence1: {train_df.loc[1, 'sentence1']}\") print(f\"Sentence2: {train_df.loc[1, 'sentence2']}\") print(f\"Similarity: {train_df.loc[1, 'similarity']}\") Sentence1: A person on a horse jumps over a broken down airplane. Sentence2: A person is at a diner, ordering an omelette. Similarity: contradiction Preprocessing # We have some NaN entries in our train data, we will simply drop them. print(\"Number of missing values\") print(train_df.isnull().sum()) train_df.dropna(axis=0, inplace=True) Number of missing values similarity 0 sentence1 0 sentence2 3 dtype: int64 Distribution of our training targets. 
print(\"Train Target Distribution\") print(train_df.similarity.value_counts()) Train Target Distribution entailment 33384 contradiction 33310 neutral 33193 - 110 Name: similarity, dtype: int64 Distribution of our validation targets. print(\"Validation Target Distribution\") print(valid_df.similarity.value_counts()) Validation Target Distribution entailment 3329 contradiction 3278 neutral 3235 - 158 Name: similarity, dtype: int64 The value \"-\" appears as part of our training and validation targets. We will skip these samples. train_df = ( train_df[train_df.similarity != \"-\"] .sample(frac=1.0, random_state=42) .reset_index(drop=True) ) valid_df = ( valid_df[valid_df.similarity != \"-\"] .sample(frac=1.0, random_state=42) .reset_index(drop=True) ) One-hot encode training, validation, and test labels. train_df[\"label\"] = train_df[\"similarity\"].apply( lambda x: 0 if x == \"contradiction\" else 1 if x == \"entailment\" else 2 ) y_train = tf.keras.utils.to_categorical(train_df.label, num_classes=3) valid_df[\"label\"] = valid_df[\"similarity\"].apply( lambda x: 0 if x == \"contradiction\" else 1 if x == \"entailment\" else 2 ) y_val = tf.keras.utils.to_categorical(valid_df.label, num_classes=3) test_df[\"label\"] = test_df[\"similarity\"].apply( lambda x: 0 if x == \"contradiction\" else 1 if x == \"entailment\" else 2 ) y_test = tf.keras.utils.to_categorical(test_df.label, num_classes=3) Create a custom data generator class BertSemanticDataGenerator(tf.keras.utils.Sequence): \"\"\"Generates batches of data. Args: sentence_pairs: Array of premise and hypothesis input sentences. labels: Array of labels. batch_size: Integer batch size. shuffle: boolean, whether to shuffle the data. include_targets: boolean, whether to incude the labels. Returns: Tuples `([input_ids, attention_mask, `token_type_ids], labels)` (or just `[input_ids, attention_mask, `token_type_ids]` if `include_targets=False`) \"\"\" def __init__( self, sentence_pairs, labels, batch_size=batch_size, shuffle=True, include_targets=True, ): self.sentence_pairs = sentence_pairs self.labels = labels self.shuffle = shuffle self.batch_size = batch_size self.include_targets = include_targets # Load our BERT Tokenizer to encode the text. # We will use base-base-uncased pretrained model. self.tokenizer = transformers.BertTokenizer.from_pretrained( \"bert-base-uncased\", do_lower_case=True ) self.indexes = np.arange(len(self.sentence_pairs)) self.on_epoch_end() def __len__(self): # Denotes the number of batches per epoch. return len(self.sentence_pairs) // self.batch_size def __getitem__(self, idx): # Retrieves the batch of index. indexes = self.indexes[idx * self.batch_size : (idx + 1) * self.batch_size] sentence_pairs = self.sentence_pairs[indexes] # With BERT tokenizer's batch_encode_plus batch of both the sentences are # encoded together and separated by [SEP] token. encoded = self.tokenizer.batch_encode_plus( sentence_pairs.tolist(), add_special_tokens=True, max_length=max_length, return_attention_mask=True, return_token_type_ids=True, pad_to_max_length=True, return_tensors=\"tf\", ) # Convert batch of encoded features to numpy array. input_ids = np.array(encoded[\"input_ids\"], dtype=\"int32\") attention_masks = np.array(encoded[\"attention_mask\"], dtype=\"int32\") token_type_ids = np.array(encoded[\"token_type_ids\"], dtype=\"int32\") # Set to true if data generator is used for training/validation. 
if self.include_targets: labels = np.array(self.labels[indexes], dtype=\"int32\") return [input_ids, attention_masks, token_type_ids], labels else: return [input_ids, attention_masks, token_type_ids] def on_epoch_end(self): # Shuffle indexes after each epoch if shuffle is set to True. if self.shuffle: np.random.RandomState(42).shuffle(self.indexes) Build the model # Create the model under a distribution strategy scope. strategy = tf.distribute.MirroredStrategy() with strategy.scope(): # Encoded token ids from BERT tokenizer. input_ids = tf.keras.layers.Input( shape=(max_length,), dtype=tf.int32, name=\"input_ids\" ) # Attention masks indicates to the model which tokens should be attended to. attention_masks = tf.keras.layers.Input( shape=(max_length,), dtype=tf.int32, name=\"attention_masks\" ) # Token type ids are binary masks identifying different sequences in the model. token_type_ids = tf.keras.layers.Input( shape=(max_length,), dtype=tf.int32, name=\"token_type_ids\" ) # Loading pretrained BERT model. bert_model = transformers.TFBertModel.from_pretrained(\"bert-base-uncased\") # Freeze the BERT model to reuse the pretrained features without modifying them. bert_model.trainable = False bert_output = bert_model( input_ids, attention_mask=attention_masks, token_type_ids=token_type_ids ) sequence_output = bert_output.last_hidden_state pooled_output = bert_output.pooler_output # Add trainable layers on top of frozen layers to adapt the pretrained features on the new data. bi_lstm = tf.keras.layers.Bidirectional( tf.keras.layers.LSTM(64, return_sequences=True) )(sequence_output) # Applying hybrid pooling approach to bi_lstm sequence output. avg_pool = tf.keras.layers.GlobalAveragePooling1D()(bi_lstm) max_pool = tf.keras.layers.GlobalMaxPooling1D()(bi_lstm) concat = tf.keras.layers.concatenate([avg_pool, max_pool]) dropout = tf.keras.layers.Dropout(0.3)(concat) output = tf.keras.layers.Dense(3, activation=\"softmax\")(dropout) model = tf.keras.models.Model( inputs=[input_ids, attention_masks, token_type_ids], outputs=output ) model.compile( optimizer=tf.keras.optimizers.Adam(), loss=\"categorical_crossentropy\", metrics=[\"acc\"], ) print(f\"Strategy: {strategy}\") model.summary() HBox(children=(FloatProgress(value=0.0, description='Downloading', max=433.0, style=ProgressStyle(description_… HBox(children=(FloatProgress(value=0.0, description='Downloading', max=536063208.0, style=ProgressStyle(descri… Strategy: Model: \"functional_1\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_ids (InputLayer) [(None, 128)] 0 __________________________________________________________________________________________________ attention_masks (InputLayer) [(None, 128)] 0 __________________________________________________________________________________________________ token_type_ids (InputLayer) [(None, 128)] 0 __________________________________________________________________________________________________ tf_bert_model (TFBertModel) ((None, 128, 768), ( 109482240 input_ids[0][0] attention_masks[0][0] token_type_ids[0][0] __________________________________________________________________________________________________ bidirectional (Bidirectional) (None, 128, 128) 426496 tf_bert_model[0][0] __________________________________________________________________________________________________ 
global_average_pooling1d (Globa (None, 128) 0 bidirectional[0][0] __________________________________________________________________________________________________ global_max_pooling1d (GlobalMax (None, 128) 0 bidirectional[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 256) 0 global_average_pooling1d[0][0] global_max_pooling1d[0][0] __________________________________________________________________________________________________ dropout_37 (Dropout) (None, 256) 0 concatenate[0][0] __________________________________________________________________________________________________ dense (Dense) (None, 3) 771 dropout_37[0][0] ================================================================================================== Total params: 109,909,507 Trainable params: 427,267 Non-trainable params: 109,482,240 __________________________________________________________________________________________________ Create train and validation data generators train_data = BertSemanticDataGenerator( train_df[[\"sentence1\", \"sentence2\"]].values.astype(\"str\"), y_train, batch_size=batch_size, shuffle=True, ) valid_data = BertSemanticDataGenerator( valid_df[[\"sentence1\", \"sentence2\"]].values.astype(\"str\"), y_val, batch_size=batch_size, shuffle=False, ) HBox(children=(FloatProgress(value=0.0, description='Downloading', max=231508.0, style=ProgressStyle(descripti… Train the Model Training is done only for the top layers to perform \"feature extraction\", which will allow the model to use the representations of the pretrained model. history = model.fit( train_data, validation_data=valid_data, epochs=epochs, use_multiprocessing=True, workers=-1, ) Epoch 1/2 3121/3121 [==============================] - 666s 213ms/step - loss: 0.6925 - acc: 0.7049 - val_loss: 0.5294 - val_acc: 0.7899 Epoch 2/2 3121/3121 [==============================] - 661s 212ms/step - loss: 0.5917 - acc: 0.7587 - val_loss: 0.4955 - val_acc: 0.8052 Fine-tuning This step must only be performed after the feature extraction model has been trained to convergence on the new data. This is an optional last step where bert_model is unfreezed and retrained with a very low learning rate. This can deliver meaningful improvement by incrementally adapting the pretrained features to the new data. # Unfreeze the bert_model. bert_model.trainable = True # Recompile the model to make the change effective. 
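# Keras only takes the new value of trainable into account after compile() is called, so recompiling here (with a much lower learning rate than before) is required before the next fit().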
model.compile( optimizer=tf.keras.optimizers.Adam(1e-5), loss=\"categorical_crossentropy\", metrics=[\"accuracy\"], ) model.summary() Model: \"functional_1\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_ids (InputLayer) [(None, 128)] 0 __________________________________________________________________________________________________ attention_masks (InputLayer) [(None, 128)] 0 __________________________________________________________________________________________________ token_type_ids (InputLayer) [(None, 128)] 0 __________________________________________________________________________________________________ tf_bert_model (TFBertModel) ((None, 128, 768), ( 109482240 input_ids[0][0] attention_masks[0][0] token_type_ids[0][0] __________________________________________________________________________________________________ bidirectional (Bidirectional) (None, 128, 128) 426496 tf_bert_model[0][0] __________________________________________________________________________________________________ global_average_pooling1d (Globa (None, 128) 0 bidirectional[0][0] __________________________________________________________________________________________________ global_max_pooling1d (GlobalMax (None, 128) 0 bidirectional[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 256) 0 global_average_pooling1d[0][0] global_max_pooling1d[0][0] __________________________________________________________________________________________________ dropout_37 (Dropout) (None, 256) 0 concatenate[0][0] __________________________________________________________________________________________________ dense (Dense) (None, 3) 771 dropout_37[0][0] ================================================================================================== Total params: 109,909,507 Trainable params: 109,909,507 Non-trainable params: 0 __________________________________________________________________________________________________ Train the entire model end-to-end history = model.fit( train_data, validation_data=valid_data, epochs=epochs, use_multiprocessing=True, workers=-1, ) Epoch 1/2 3121/3121 [==============================] - 1574s 504ms/step - loss: 0.4698 - accuracy: 0.8181 - val_loss: 0.3787 - val_accuracy: 0.8598 Epoch 2/2 3121/3121 [==============================] - 1569s 503ms/step - loss: 0.3516 - accuracy: 0.8702 - val_loss: 0.3416 - val_accuracy: 0.8757 Evaluate model on the test set test_data = BertSemanticDataGenerator( test_df[[\"sentence1\", \"sentence2\"]].values.astype(\"str\"), y_test, batch_size=batch_size, shuffle=False, ) model.evaluate(test_data, verbose=1) 312/312 [==============================] - 55s 177ms/step - loss: 0.3697 - accuracy: 0.8629 [0.3696725070476532, 0.8628805875778198] Inference on custom sentences def check_similarity(sentence1, sentence2): sentence_pairs = np.array([[str(sentence1), str(sentence2)]]) test_data = BertSemanticDataGenerator( sentence_pairs, labels=None, batch_size=1, shuffle=False, include_targets=False, ) proba = model.predict(test_data[0])[0] idx = np.argmax(proba) proba = f\"{proba[idx]: .2f}%\" pred = labels[idx] return pred, proba Check results on some example sentence pairs. 
sentence1 = \"Two women are observing something together.\" sentence2 = \"Two women are standing with their eyes closed.\" check_similarity(sentence1, sentence2) ('contradiction', ' 0.91%') Check results on some example sentence pairs. sentence1 = \"A smiling costumed woman is holding an umbrella\" sentence2 = \"A happy woman in a fairy costume holds an umbrella\" check_similarity(sentence1, sentence2) ('neutral', ' 0.88%') Check results on some example sentence pairs sentence1 = \"A soccer game with multiple males playing\" sentence2 = \"Some men are playing a sport\" check_similarity(sentence1, sentence2) ('entailment', ' 0.94%') A model that learns to add strings of numbers, e.g. '535+61'->'596'. Introduction In this example, we train a model to learn to add two numbers, provided as strings. Example: Input: \"535+61\" Output: \"596\" Input may optionally be reversed, which was shown to increase performance in many tasks in: Learning to Execute and [Sequence to Sequence Learning with Neural Networks]( http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf) Theoretically, sequence order inversion introduces shorter term dependencies between source and target for this problem. Results: For two digits (reversed): One layer LSTM (128 HN), 5k training examples = 99% train/test accuracy in 55 epochs Three digits (reversed): One layer LSTM (128 HN), 50k training examples = 99% train/test accuracy in 100 epochs Four digits (reversed): One layer LSTM (128 HN), 400k training examples = 99% train/test accuracy in 20 epochs Five digits (reversed): One layer LSTM (128 HN), 550k training examples = 99% train/test accuracy in 30 epochs Setup from tensorflow import keras from tensorflow.keras import layers import numpy as np # Parameters for the model and dataset. TRAINING_SIZE = 50000 DIGITS = 3 REVERSE = True # Maximum length of input is 'int + int' (e.g., '345+678'). Maximum length of # int is DIGITS. MAXLEN = DIGITS + 1 + DIGITS Generate the data class CharacterTable: \"\"\"Given a set of characters: + Encode them to a one-hot integer representation + Decode the one-hot or integer representation to their character output + Decode a vector of probabilities to their character output \"\"\" def __init__(self, chars): \"\"\"Initialize character table. # Arguments chars: Characters that can appear in the input. \"\"\" self.chars = sorted(set(chars)) self.char_indices = dict((c, i) for i, c in enumerate(self.chars)) self.indices_char = dict((i, c) for i, c in enumerate(self.chars)) def encode(self, C, num_rows): \"\"\"One-hot encode given string C. # Arguments C: string, to be encoded. num_rows: Number of rows in the returned one-hot encoding. This is used to keep the # of rows for each data the same. \"\"\" x = np.zeros((num_rows, len(self.chars))) for i, c in enumerate(C): x[i, self.char_indices[c]] = 1 return x def decode(self, x, calc_argmax=True): \"\"\"Decode the given vector or 2D array to their character output. # Arguments x: A vector or a 2D array of probabilities or one-hot representations; or a vector of character indices (used with `calc_argmax=False`). calc_argmax: Whether to find the character index with maximum probability, defaults to `True`. \"\"\" if calc_argmax: x = x.argmax(axis=-1) return \"\".join(self.indices_char[x] for x in x) # All the numbers, plus sign and space for padding. 
chars = \"0123456789+ \" ctable = CharacterTable(chars) questions = [] expected = [] seen = set() print(\"Generating data...\") while len(questions) < TRAINING_SIZE: f = lambda: int( \"\".join( np.random.choice(list(\"0123456789\")) for i in range(np.random.randint(1, DIGITS + 1)) ) ) a, b = f(), f() # Skip any addition questions we've already seen # Also skip any such that x+Y == Y+x (hence the sorting). key = tuple(sorted((a, b))) if key in seen: continue seen.add(key) # Pad the data with spaces such that it is always MAXLEN. q = \"{}+{}\".format(a, b) query = q + \" \" * (MAXLEN - len(q)) ans = str(a + b) # Answers can be of maximum size DIGITS + 1. ans += \" \" * (DIGITS + 1 - len(ans)) if REVERSE: # Reverse the query, e.g., '12+345 ' becomes ' 543+21'. (Note the # space used for padding.) query = query[::-1] questions.append(query) expected.append(ans) print(\"Total questions:\", len(questions)) Generating data... Total questions: 50000 Vectorize the data print(\"Vectorization...\") x = np.zeros((len(questions), MAXLEN, len(chars)), dtype=np.bool) y = np.zeros((len(questions), DIGITS + 1, len(chars)), dtype=np.bool) for i, sentence in enumerate(questions): x[i] = ctable.encode(sentence, MAXLEN) for i, sentence in enumerate(expected): y[i] = ctable.encode(sentence, DIGITS + 1) # Shuffle (x, y) in unison as the later parts of x will almost all be larger # digits. indices = np.arange(len(y)) np.random.shuffle(indices) x = x[indices] y = y[indices] # Explicitly set apart 10% for validation data that we never train over. split_at = len(x) - len(x) // 10 (x_train, x_val) = x[:split_at], x[split_at:] (y_train, y_val) = y[:split_at], y[split_at:] print(\"Training Data:\") print(x_train.shape) print(y_train.shape) print(\"Validation Data:\") print(x_val.shape) print(y_val.shape) Vectorization... Training Data: (45000, 7, 12) (45000, 4, 12) Validation Data: (5000, 7, 12) (5000, 4, 12) Build the model print(\"Build model...\") num_layers = 1 # Try to add more LSTM layers! model = keras.Sequential() # \"Encode\" the input sequence using a LSTM, producing an output of size 128. # Note: In a situation where your input sequences have a variable length, # use input_shape=(None, num_feature). model.add(layers.LSTM(128, input_shape=(MAXLEN, len(chars)))) # As the decoder RNN's input, repeatedly provide with the last output of # RNN for each time step. Repeat 'DIGITS + 1' times as that's the maximum # length of output, e.g., when DIGITS=3, max output is 999+999=1998. model.add(layers.RepeatVector(DIGITS + 1)) # The decoder RNN could be multiple layers stacked or a single layer. for _ in range(num_layers): # By setting return_sequences to True, return not only the last output but # all the outputs so far in the form of (num_samples, timesteps, # output_dim). This is necessary as TimeDistributed in the below expects # the first dimension to be the timesteps. model.add(layers.LSTM(128, return_sequences=True)) # Apply a dense layer to the every temporal slice of an input. For each of step # of the output sequence, decide which character should be chosen. model.add(layers.Dense(len(chars), activation=\"softmax\")) model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]) model.summary() Build model... 
Model: \"sequential\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= lstm (LSTM) (None, 128) 72192 _________________________________________________________________ repeat_vector (RepeatVector) (None, 4, 128) 0 _________________________________________________________________ lstm_1 (LSTM) (None, 4, 128) 131584 _________________________________________________________________ dense (Dense) (None, 4, 12) 1548 ================================================================= Total params: 205,324 Trainable params: 205,324 Non-trainable params: 0 _________________________________________________________________ Train the model epochs = 30 batch_size = 32 # Train the model each generation and show predictions against the validation # dataset. for epoch in range(1, epochs): print() print(\"Iteration\", epoch) model.fit( x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_val, y_val), ) # Select 10 samples from the validation set at random so we can visualize # errors. for i in range(10): ind = np.random.randint(0, len(x_val)) rowx, rowy = x_val[np.array([ind])], y_val[np.array([ind])] preds = np.argmax(model.predict(rowx), axis=-1) q = ctable.decode(rowx[0]) correct = ctable.decode(rowy[0]) guess = ctable.decode(preds[0], calc_argmax=False) print(\"Q\", q[::-1] if REVERSE else q, end=\" \") print(\"T\", correct, end=\" \") if correct == guess: print(\"- \" + guess) else: print(\"-\" + guess) Iteration 1 1407/1407 [==============================] - 8s 6ms/step - loss: 1.7622 - accuracy: 0.3571 - val_loss: 1.5618 - val_accuracy: 0.4175 Q 99+580 T 679 - 905 Q 800+652 T 1452 - 1311 Q 900+0 T 900 - 909 Q 26+12 T 38 - 22 Q 8+397 T 405 - 903 Q 14+478 T 492 - 441 Q 59+589 T 648 - 551 Q 653+77 T 730 - 601 Q 10+35 T 45 - 11 Q 51+185 T 236 - 211 Iteration 2 1407/1407 [==============================] - 8s 6ms/step - loss: 1.3387 - accuracy: 0.5005 - val_loss: 1.1605 - val_accuracy: 0.5726 Q 373+107 T 480 - 417 Q 910+771 T 1681 - 1610 Q 494+86 T 580 - 555 Q 829+503 T 1332 - 1283 Q 820+292 T 1112 - 1102 Q 276+741 T 1017 - 1000 Q 208+84 T 292 - 397 Q 28+349 T 377 - 377 Q 875+47 T 922 - 930 Q 654+81 T 735 - 720 Iteration 3 1407/1407 [==============================] - 8s 6ms/step - loss: 1.0369 - accuracy: 0.6144 - val_loss: 0.9534 - val_accuracy: 0.6291 Q 73+290 T 363 - 358 Q 284+928 T 1212 - 1202 Q 12+775 T 787 - 783 Q 652+651 T 1303 - 1302 Q 12+940 T 952 - 953 Q 10+89 T 99 - 10 Q 86+947 T 1033 - 1023 Q 866+10 T 876 - 873 Q 196+8 T 204 - 208 Q 3+763 T 766 - 763 Iteration 4 1407/1407 [==============================] - 8s 6ms/step - loss: 0.8553 - accuracy: 0.6862 - val_loss: 0.8083 - val_accuracy: 0.6976 Q 561+68 T 629 ☒ 625 Q 1+878 T 879 ☒ 875 Q 461+525 T 986 ☒ 988 Q 453+84 T 537 ☒ 535 Q 92+33 T 125 ☒ 121 Q 29+624 T 653 ☒ 651 Q 656+89 T 745 ☑ 745 Q 30+418 T 448 ☒ 455 Q 600+3 T 603 ☒ 605 Q 26+346 T 372 ☒ 375 Iteration 5 1407/1407 [==============================] - 8s 6ms/step - loss: 0.7516 - accuracy: 0.7269 - val_loss: 0.7113 - val_accuracy: 0.7427 Q 522+451 T 973 ☒ 978 Q 721+69 T 790 ☒ 784 Q 294+53 T 347 ☒ 344 Q 80+48 T 128 ☒ 121 Q 343+182 T 525 ☒ 524 Q 17+83 T 100 ☒ 90 Q 132+3 T 135 ☒ 134 Q 63+963 T 1026 ☒ 1028 Q 427+655 T 1082 ☒ 1084 Q 76+36 T 112 ☒ 114 Iteration 6 1407/1407 [==============================] - 8s 6ms/step - loss: 0.6492 - accuracy: 0.7639 - val_loss: 0.5625 - val_accuracy: 0.7868 Q 20+56 T 76 ☒ 77 Q 904+74 T 978 ☒ 979 Q 716+736 T 1452 ☒ 1451 Q 69+512 T 
581 ☑ 581 Q 82+501 T 583 ☒ 584 Q 297+442 T 739 ☒ 730 Q 759+30 T 789 ☑ 789 Q 160+451 T 611 ☒ 612 Q 765+30 T 795 ☒ 796 Q 658+37 T 695 ☒ 694 Iteration 7 1407/1407 [==============================] - 8s 6ms/step - loss: 0.4077 - accuracy: 0.8595 - val_loss: 0.3167 - val_accuracy: 0.9025 Q 558+81 T 639 ☑ 639 Q 795+73 T 868 ☑ 868 Q 98+93 T 191 ☑ 191 Q 7+454 T 461 ☑ 461 Q 64+764 T 828 ☑ 828 Q 91+14 T 105 ☒ 104 Q 554+53 T 607 ☑ 607 Q 7+454 T 461 ☑ 461 Q 411+46 T 457 ☑ 457 Q 991+55 T 1046 ☑ 1046 Iteration 8 1407/1407 [==============================] - 8s 6ms/step - loss: 0.2317 - accuracy: 0.9354 - val_loss: 0.2460 - val_accuracy: 0.9119 Q 136+57 T 193 ☑ 193 Q 896+60 T 956 ☒ 957 Q 453+846 T 1299 ☑ 1299 Q 86+601 T 687 ☑ 687 Q 272+230 T 502 ☒ 503 Q 675+886 T 1561 ☒ 1551 Q 121+634 T 755 ☒ 745 Q 17+853 T 870 ☑ 870 Q 9+40 T 49 ☒ 40 Q 290+80 T 370 ☒ 481 Iteration 9 1407/1407 [==============================] - 8s 6ms/step - loss: 0.1434 - accuracy: 0.9665 - val_loss: 0.1223 - val_accuracy: 0.9686 Q 1+532 T 533 ☑ 533 Q 298+20 T 318 ☑ 318 Q 750+28 T 778 ☑ 778 Q 44+576 T 620 ☑ 620 Q 988+481 T 1469 ☒ 1479 Q 234+829 T 1063 ☑ 1063 Q 855+19 T 874 ☑ 874 Q 741+56 T 797 ☑ 797 Q 7+643 T 650 ☑ 650 Q 14+598 T 612 ☒ 613 Iteration 10 1407/1407 [==============================] - 8s 6ms/step - loss: 0.1024 - accuracy: 0.9764 - val_loss: 0.0948 - val_accuracy: 0.9750 Q 380+26 T 406 ☑ 406 Q 813+679 T 1492 ☒ 1592 Q 3+763 T 766 ☑ 766 Q 677+83 T 760 ☑ 760 Q 474+13 T 487 ☑ 487 Q 861+4 T 865 ☑ 865 Q 83+24 T 107 ☑ 107 Q 67+177 T 244 ☑ 244 Q 841+31 T 872 ☑ 872 Q 740+121 T 861 ☒ 871 Iteration 11 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0780 - accuracy: 0.9812 - val_loss: 0.0537 - val_accuracy: 0.9893 Q 199+36 T 235 ☑ 235 Q 970+78 T 1048 ☑ 1048 Q 21+610 T 631 ☑ 631 Q 36+686 T 722 ☑ 722 Q 476+488 T 964 ☑ 964 Q 583+1 T 584 ☑ 584 Q 72+408 T 480 ☑ 480 Q 0+141 T 141 ☑ 141 Q 858+837 T 1695 ☒ 1795 Q 27+346 T 373 ☑ 373 Iteration 12 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0481 - accuracy: 0.9900 - val_loss: 0.0298 - val_accuracy: 0.9965 Q 23+44 T 67 ☑ 67 Q 905+251 T 1156 ☑ 1156 Q 298+46 T 344 ☑ 344 Q 320+31 T 351 ☑ 351 Q 854+730 T 1584 ☑ 1584 Q 765+30 T 795 ☑ 795 Q 60+179 T 239 ☑ 239 Q 792+76 T 868 ☑ 868 Q 79+114 T 193 ☑ 193 Q 354+23 T 377 ☑ 377 Iteration 13 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0547 - accuracy: 0.9857 - val_loss: 0.0956 - val_accuracy: 0.9682 Q 4+568 T 572 ☑ 572 Q 199+867 T 1066 ☑ 1066 Q 77+727 T 804 ☑ 804 Q 47+385 T 432 ☑ 432 Q 21+20 T 41 ☑ 41 Q 18+521 T 539 ☑ 539 Q 409+58 T 467 ☑ 467 Q 201+99 T 300 ☒ 200 Q 46+205 T 251 ☑ 251 Q 613+984 T 1597 ☑ 1597 Iteration 14 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0445 - accuracy: 0.9889 - val_loss: 0.0364 - val_accuracy: 0.9914 Q 50+770 T 820 ☑ 820 Q 338+329 T 667 ☑ 667 Q 535+529 T 1064 ☑ 1064 Q 50+907 T 957 ☑ 957 Q 266+30 T 296 ☑ 296 Q 65+91 T 156 ☑ 156 Q 43+8 T 51 ☑ 51 Q 714+3 T 717 ☑ 717 Q 415+38 T 453 ☑ 453 Q 432+252 T 684 ☑ 684 Iteration 15 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0324 - accuracy: 0.9920 - val_loss: 0.0196 - val_accuracy: 0.9965 Q 748+45 T 793 ☑ 793 Q 457+2 T 459 ☑ 459 Q 205+30 T 235 ☑ 235 Q 16+402 T 418 ☑ 418 Q 810+415 T 1225 ☑ 1225 Q 917+421 T 1338 ☑ 1338 Q 803+68 T 871 ☑ 871 Q 66+351 T 417 ☑ 417 Q 901+3 T 904 ☑ 904 Q 26+897 T 923 ☑ 923 Iteration 16 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0353 - accuracy: 0.9906 - val_loss: 0.0174 - val_accuracy: 0.9966 Q 295+57 T 352 ☑ 352 Q 4+683 T 687 ☑ 687 Q 608+892 
T 1500 ☒ 1400 Q 618+71 T 689 ☑ 689 Q 43+299 T 342 ☑ 342 Q 662+9 T 671 ☑ 671 Q 50+318 T 368 ☑ 368 Q 33+665 T 698 ☑ 698 Q 2+11 T 13 ☑ 13 Q 29+261 T 290 ☑ 290 Iteration 17 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0368 - accuracy: 0.9903 - val_loss: 0.0148 - val_accuracy: 0.9971 Q 4+568 T 572 ☑ 572 Q 121+316 T 437 ☑ 437 Q 78+662 T 740 ☑ 740 Q 883+47 T 930 ☑ 930 Q 696+78 T 774 ☑ 774 Q 23+921 T 944 ☑ 944 Q 768+813 T 1581 ☑ 1581 Q 1+586 T 587 ☑ 587 Q 276+92 T 368 ☑ 368 Q 623+9 T 632 ☑ 632 Iteration 18 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0317 - accuracy: 0.9917 - val_loss: 0.0119 - val_accuracy: 0.9985 Q 50+430 T 480 ☑ 480 Q 583+86 T 669 ☑ 669 Q 899+342 T 1241 ☑ 1241 Q 164+369 T 533 ☑ 533 Q 728+9 T 737 ☑ 737 Q 182+85 T 267 ☑ 267 Q 81+323 T 404 ☑ 404 Q 91+85 T 176 ☑ 176 Q 602+606 T 1208 ☑ 1208 Q 334+193 T 527 ☑ 527 Iteration 19 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0225 - accuracy: 0.9940 - val_loss: 0.0291 - val_accuracy: 0.9915 Q 416+636 T 1052 ☑ 1052 Q 224+330 T 554 ☑ 554 Q 347+8 T 355 ☑ 355 Q 918+890 T 1808 ☒ 1809 Q 12+852 T 864 ☑ 864 Q 535+93 T 628 ☑ 628 Q 476+98 T 574 ☑ 574 Q 89+682 T 771 ☑ 771 Q 731+99 T 830 ☑ 830 Q 222+45 T 267 ☑ 267 Iteration 20 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0325 - accuracy: 0.9914 - val_loss: 0.0118 - val_accuracy: 0.9980 Q 342+270 T 612 ☑ 612 Q 20+188 T 208 ☑ 208 Q 37+401 T 438 ☑ 438 Q 672+417 T 1089 ☑ 1089 Q 597+12 T 609 ☑ 609 Q 569+81 T 650 ☑ 650 Q 58+46 T 104 ☑ 104 Q 48+46 T 94 ☑ 94 Q 801+47 T 848 ☑ 848 Q 356+550 T 906 ☑ 906 Iteration 21 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0226 - accuracy: 0.9941 - val_loss: 0.0097 - val_accuracy: 0.9984 Q 77+188 T 265 ☑ 265 Q 449+35 T 484 ☑ 484 Q 76+287 T 363 ☑ 363 Q 204+231 T 435 ☑ 435 Q 880+1 T 881 ☑ 881 Q 571+79 T 650 ☑ 650 Q 6+126 T 132 ☑ 132 Q 567+6 T 573 ☑ 573 Q 284+928 T 1212 ☑ 1212 Q 889+9 T 898 ☑ 898 Iteration 22 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0224 - accuracy: 0.9937 - val_loss: 0.0100 - val_accuracy: 0.9980 Q 694+851 T 1545 ☑ 1545 Q 84+582 T 666 ☑ 666 Q 900+476 T 1376 ☑ 1376 Q 661+848 T 1509 ☑ 1509 Q 2+210 T 212 ☑ 212 Q 4+568 T 572 ☑ 572 Q 699+555 T 1254 ☑ 1254 Q 750+64 T 814 ☑ 814 Q 299+938 T 1237 ☑ 1237 Q 213+94 T 307 ☑ 307 Iteration 23 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0285 - accuracy: 0.9919 - val_loss: 0.0769 - val_accuracy: 0.9790 Q 70+650 T 720 ☑ 720 Q 914+8 T 922 ☑ 922 Q 925+53 T 978 ☑ 978 Q 19+49 T 68 ☒ 78 Q 12+940 T 952 ☑ 952 Q 85+879 T 964 ☑ 964 Q 652+461 T 1113 ☑ 1113 Q 223+59 T 282 ☑ 282 Q 361+55 T 416 ☑ 416 Q 940+1 T 941 ☑ 941 Iteration 24 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0101 - accuracy: 0.9979 - val_loss: 0.0218 - val_accuracy: 0.9937 Q 653+77 T 730 ☑ 730 Q 73+155 T 228 ☑ 228 Q 62+355 T 417 ☑ 417 Q 859+916 T 1775 ☑ 1775 Q 201+153 T 354 ☑ 354 Q 469+1 T 470 ☑ 470 Q 52+363 T 415 ☑ 415 Q 22+706 T 728 ☑ 728 Q 58+33 T 91 ☑ 91 Q 371+51 T 422 ☑ 422 Iteration 25 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0174 - accuracy: 0.9952 - val_loss: 0.1332 - val_accuracy: 0.9670 Q 213+94 T 307 ☑ 307 Q 390+7 T 397 ☑ 397 Q 14+498 T 512 ☑ 512 Q 14+312 T 326 ☑ 326 Q 56+653 T 709 ☑ 709 Q 37+28 T 65 ☑ 65 Q 113+70 T 183 ☑ 183 Q 326+398 T 724 ☑ 724 Q 137+8 T 145 ☑ 145 Q 50+19 T 69 ☑ 69 Iteration 26 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0231 - accuracy: 0.9937 - val_loss: 0.0088 - val_accuracy: 0.9983 Q 886+20 T 906 ☑ 906 Q 61+790 
T 851 ☑ 851 Q 610+63 T 673 ☑ 673 Q 27+20 T 47 ☑ 47 Q 130+32 T 162 ☑ 162 Q 555+25 T 580 ☑ 580 Q 95+43 T 138 ☑ 138 Q 5+427 T 432 ☑ 432 Q 395+651 T 1046 ☑ 1046 Q 188+19 T 207 ☑ 207 Iteration 27 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0218 - accuracy: 0.9940 - val_loss: 0.0070 - val_accuracy: 0.9987 Q 495+735 T 1230 ☑ 1230 Q 74+607 T 681 ☑ 681 Q 225+56 T 281 ☑ 281 Q 581+589 T 1170 ☑ 1170 Q 37+953 T 990 ☑ 990 Q 17+510 T 527 ☑ 527 Q 621+73 T 694 ☑ 694 Q 54+298 T 352 ☑ 352 Q 636+518 T 1154 ☑ 1154 Q 7+673 T 680 ☑ 680 Iteration 28 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0114 - accuracy: 0.9971 - val_loss: 0.0238 - val_accuracy: 0.9930 Q 67+12 T 79 ☑ 79 Q 109+464 T 573 ☑ 573 Q 4+52 T 56 ☑ 56 Q 907+746 T 1653 ☑ 1653 Q 153+864 T 1017 ☑ 1017 Q 666+77 T 743 ☑ 743 Q 65+777 T 842 ☑ 842 Q 52+60 T 112 ☑ 112 Q 941+692 T 1633 ☑ 1633 Q 931+666 T 1597 ☑ 1597 Iteration 29 1407/1407 [==============================] - 8s 6ms/step - loss: 0.0262 - accuracy: 0.9929 - val_loss: 0.0643 - val_accuracy: 0.9804 Q 128+86 T 214 ☑ 214 Q 20+494 T 514 ☑ 514 Q 34+896 T 930 ☑ 930 Q 372+15 T 387 ☑ 387 Q 466+63 T 529 ☑ 529 Q 327+9 T 336 ☑ 336 Q 458+85 T 543 ☑ 543 Q 134+431 T 565 ☑ 565 Q 807+289 T 1096 ☑ 1096 Q 100+60 T 160 ☑ 160 You'll get to 99+% validation accuracy after ~30 epochs. Text sentiment classification starting from raw text files. Introduction This example shows how to do text classification starting from raw text (as a set of text files on disk). We demonstrate the workflow on the IMDB sentiment classification dataset (unprocessed version). We use the TextVectorization layer for word splitting & indexing. Setup import tensorflow as tf import numpy as np Load the data: IMDB movie review sentiment classification Let's download the data and inspect its structure. !curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -xf aclImdb_v1.tar.gz % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 80.2M 100 80.2M 0 0 16.1M 0 0:00:04 0:00:04 --:--:-- 16.4M The aclImdb folder contains a train and test subfolder: !ls aclImdb !ls aclImdb/test !ls aclImdb/train README imdb.vocab imdbEr.txt test train labeledBow.feat neg pos urls_neg.txt urls_pos.txt labeledBow.feat pos unsupBow.feat urls_pos.txt neg unsup urls_neg.txt urls_unsup.txt The aclImdb/train/pos and aclImdb/train/neg folders contain text files, each of which represents one review (either positive or negative): !cat aclImdb/train/pos/6248_7.txt Being an Austrian myself this has been a straight knock in my face. Fortunately I don't live nowhere near the place where this movie takes place but unfortunately it portrays everything that the rest of Austria hates about Viennese people (or people close to that region). And it is very easy to read that this is exactly the directors intention: to let your head sink into your hands and say \"Oh my god, how can THAT be possible!\". No, not with me, the (in my opinion) totally exaggerated uncensored swinger club scene is not necessary, I watch porn, sure, but in this context I was rather disgusted than put in the right context.

This movie tells a story about how misled people who suffer from lack of education or bad company try to survive and live in a world of redundancy and boring horizons. A girl who is treated like a whore by her super-jealous boyfriend (and still keeps coming back), a female teacher who discovers her masochism by putting the life of her super-cruel \"lover\" on the line, an old couple who has an almost mathematical daily cycle (she is the \"official replacement\" of his ex wife), a couple that has just divorced and has the ex husband suffer under the acts of his former wife obviously having a relationship with her masseuse and finally a crazy hitchhiker who asks her drivers the most unusual questions and stretches their nerves by just being super-annoying.

After having seen it you feel almost nothing. You're not even shocked, sad, depressed or feel like doing anything... Maybe that's why I gave it 7 points, it made me react in a way I never reacted before. If that's good or bad is up to you! We are only interested in the pos and neg subfolders, so let's delete the rest: !rm -r aclImdb/train/unsup You can use the utility tf.keras.preprocessing.text_dataset_from_directory to generate a labeled tf.data.Dataset object from a set of text files on disk filed into class-specific folders. Let's use it to generate the training, validation, and test datasets. The validation and training datasets are generated from two subsets of the train directory, with 20% of samples going to the validation dataset and 80% going to the training dataset. Having a validation dataset in addition to the test dataset is useful for tuning hyperparameters, such as the model architecture, for which the test dataset should not be used. Before putting the model out into the real world however, it should be retrained using all available training data (without creating a validation dataset), so its performance is maximized. When using the validation_split & subset arguments, make sure to either specify a random seed, or to pass shuffle=False, so that the validation & training splits you get have no overlap. batch_size = 32 raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory( \"aclImdb/train\", batch_size=batch_size, validation_split=0.2, subset=\"training\", seed=1337, ) raw_val_ds = tf.keras.preprocessing.text_dataset_from_directory( \"aclImdb/train\", batch_size=batch_size, validation_split=0.2, subset=\"validation\", seed=1337, ) raw_test_ds = tf.keras.preprocessing.text_dataset_from_directory( \"aclImdb/test\", batch_size=batch_size ) print(f\"Number of batches in raw_train_ds: {raw_train_ds.cardinality()}\") print(f\"Number of batches in raw_val_ds: {raw_val_ds.cardinality()}\") print(f\"Number of batches in raw_test_ds: {raw_test_ds.cardinality()}\") Found 25000 files belonging to 2 classes. Using 20000 files for training. Found 25000 files belonging to 2 classes. Using 5000 files for validation. Found 25000 files belonging to 2 classes. Number of batches in raw_train_ds: 625 Number of batches in raw_val_ds: 157 Number of batches in raw_test_ds: 782 Let's preview a few samples: # It's important to take a look at your raw data to ensure your normalization # and tokenization will work as expected. We can do that by taking a few # examples from the training set and looking at them. # This is one of the places where eager execution shines: # we can just evaluate these tensors using .numpy() # instead of needing to evaluate them in a Session/Graph context. for text_batch, label_batch in raw_train_ds.take(1): for i in range(5): print(text_batch.numpy()[i]) print(label_batch.numpy()[i]) b'I\\'ve seen tons of science fiction from the 70s; some horrendously bad, and others thought provoking and truly frightening. Soylent Green fits into the latter category. Yes, at times it\'s a little campy, and yes, the furniture is good for a giggle or two, but some of the film seems awfully prescient. Here we have a film, 9 years before Blade Runner, that dares to imagine the future as somthing dark, scary, and nihilistic. Both Charlton Heston and Edward G. Robinson fare far better in this than The Ten Commandments, and Robinson\'s assisted-suicide scene is creepily prescient of Kevorkian and his ilk. 
Some of the attitudes are dated (can you imagine a filmmaker getting away with the \"women as furniture\" concept in our oh-so-politically-correct-90s?), but it\'s rare to find a film from the Me Decade that actually can make you think. This is one I\'d love to see on the big screen, because even in a widescreen presentation, I don\'t think the overall scope of this film would receive its due. Check it out.' 1 b'First than anything, I\\'m not going to praise I\\xc3\\xb1arritu\\'s short film, even I\\'m Mexican and proud of his success in mainstream Hollywood.

In another hand, I see most of the reviews focuses on their favorite (and not so) short films; but we are forgetting that there is a subtle bottom line that circles the whole compilation, and maybe it will not be so pleasant for American people. (Even if that was not the main purpose of the producers)

What i\'m talking about is that most of the short films does not show the suffering that WASP people went through because the terrorist attack on September 11th, but the suffering of the Other people.

Do you need proofs about what i\'m saying? Look, in the Bosnia short film, the message is: \"You cry because of the people who died in the Towers, but we (The Others = East Europeans) are crying long ago for the crimes committed against our women and nobody pay attention to us like the whole world has done to you\".

Even though the Burkina Fasso story is more in comedy, there is a the same thought: \"You are angry because Osama Bin Laden punched you in an evil way, but we (The Others = Africans) should be more angry, because our people is dying of hunger, poverty and AIDS long time ago, and nobody pay attention to us like the whole world has done to you\".

Look now at the Sean Penn short: The fall of the Twin Towers makes happy to a lonely (and alienated) man. So the message is that the Power and the Greed (symbolized by the Towers) must fall for letting the people see the sun rise and the flowers blossom? It is remarkable that this terrible bottom line has been proposed by an American. There is so much irony in this short film that it is close to be subversive.

Well, the Ken Loach (very know because his anti-capitalism ideology) is much more clearly and shameless in going straight to the point: \"You are angry because your country has been attacked by evil forces, but we (The Others = Latin Americans) suffered at a similar date something worst, and nobody remembers our grief as the whole world has done to you\".

It is like if the creative of this project wanted to say to Americans: \"You see now, America? You are not the only that have become victim of the world violence, you are not alone in your pain and by the way, we (the Others = the Non Americans) have been suffering a lot more than you from long time ago; so, we are in solidarity with you in your pain... and by the way, we are sorry because you have had some taste of your own medicine\" Only the Mexican and the French short films showed some compassion and sympathy for American people; the others are like a slap on the face for the American State, that is not equal to American People.' 1 b'Blood Castle (aka Scream of the Demon Lover, Altar of Blood, Ivanna--the best, but least exploitation cinema-sounding title, and so on) is a very traditional Gothic Romance film. That means that it has big, creepy castles, a headstrong young woman, a mysterious older man, hints of horror and the supernatural, and romance elements in the contemporary sense of that genre term. It also means that it is very deliberately paced, and that the film will work best for horror mavens who are big fans of understatement. If you love films like Robert Wise\'s The Haunting (1963), but you also have a taste for late 1960s/early 1970s Spanish and Italian horror, you may love Blood Castle, as well.

Baron Janos Dalmar (Carlos Quiney) lives in a large castle on the outskirts of a traditional, unspecified European village. The locals fear him because legend has it that whenever he beds a woman, she soon after ends up dead--the consensus is that he sets his ferocious dogs on them. This is quite a problem because the Baron has a very healthy appetite for women. At the beginning of the film, yet another woman has turned up dead and mutilated.

Meanwhile, Dr. Ivanna Rakowsky (Erna Sch\xc3\xbcrer) has appeared in the center of the village, asking to be taken to Baron Dalmar\'s castle. She\'s an out-of-towner who has been hired by the Baron for her expertise in chemistry. Of course, no one wants to go near the castle. Finally, Ivanna finds a shady individual (who becomes even shadier) to take her. Once there, an odd woman who lives in the castle, Olga (Cristiana Galloni), rejects Ivanna and says that she shouldn\'t be there since she\'s a woman. Baron Dalmar vacillates over whether she should stay. She ends up staying, but somewhat reluctantly. The Baron has hired her to try to reverse the effects of severe burns, which the Baron\'s brother, Igor, is suffering from.

Unfortunately, the Baron\'s brother appears to be just a lump of decomposing flesh in a vat of bizarre, blackish liquid. And furthermore, Ivanna is having bizarre, hallucinatory dreams. Just what is going on at the castle? Is the Baron responsible for the crimes? Is he insane?

I wanted to like Blood Castle more than I did. As I mentioned, the film is very deliberate in its pacing, and most of it is very understated. I can go either way on material like that. I don\'t care for The Haunting (yes, I\'m in a very small minority there), but I\'m a big fan of 1960s and 1970s European horror. One of my favorite directors is Mario Bava. I also love Dario Argento\'s work from that period. But occasionally, Blood Castle moved a bit too slow for me at times. There are large chunks that amount to scenes of not very exciting talking alternated with scenes of Ivanna slowly walking the corridors of the castle.

But the atmosphere of the film is decent. Director Jos\xc3\xa9 Luis Merino managed more than passable sets and locations, and they\'re shot fairly well by Emanuele Di Cola. However, Blood Castle feels relatively low budget, and this is a Roger Corman-produced film, after all (which usually means a low-budget, though often surprisingly high quality \"quickie\"). So while there is a hint of the lushness of Bava\'s colors and complex set decoration, everything is much more minimalist. Of course, it doesn\'t help that the Retromedia print I watched looks like a 30-year old photograph that\'s been left out in the sun too long. It appears \"washed out\", with compromised contrast.

Still, Merino and Di Cola occasionally set up fantastic visuals. For example, a scene of Ivanna walking in a darkened hallway that\'s shot from an exaggerated angle, and where an important plot element is revealed through shadows on a wall only. There are also a couple Ingmar Bergmanesque shots, where actors are exquisitely blocked to imply complex relationships, besides just being visually attractive and pulling your eye deep into the frame.

The performances are fairly good, and the women--especially Sch\xc3\xbcrer--are very attractive. Merino exploits this fact by incorporating a decent amount of nudity. Sch\xc3\xbcrer went on to do a number of films that were as much soft corn porn as they were other genres, with English titles such as Sex Life in a Woman\'s Prison (1974), Naked and Lustful (1974), Strip Nude for Your Killer (1975) and Erotic Exploits of a Sexy Seducer (1977). Blood Castle is much tamer, but in addition to the nudity, there are still mild scenes suggesting rape and bondage, and of course the scenes mixing sex and death.

The primary attraction here, though, is probably the story, which is much a slow-burning romance as anything else. The horror elements, the mystery elements, and a somewhat unexpected twist near the end are bonuses, but in the end, Blood Castle is a love story, about a couple overcoming various difficulties and antagonisms (often with physical threats or harms) to be together.' 1 b\"I was talked into watching this movie by a friend who blubbered on about what a cute story this was.

Yuck.

I want my two hours back, as I could have done SO many more productive things with my time...like, for instance, twiddling my thumbs. I see nothing redeeming about this film at all, save for the eye-candy aspect of it...

3/10 (and that's being generous)\" 0 b\"Michelle Rodriguez is the defining actress who could be the charging force for other actresses to look out for. She has the audacity to place herself in a rarely seen tough-girl role very early in her career (and pull it off), which is a feat that should be recognized. Although her later films pigeonhole her to that same role, this film was made for her ruggedness.

Her character is a romanticized student/fighter/lover, struggling to overcome her disenchanted existence in the projects, which is a little overdone in film...but not by a girl. That aspect of this film isn't very original, but the story goes in depth when the heated relationships that this girl has to deal with come to a boil and her primal rage takes over.

I haven't seen an actress take such an aggressive stance in movie-making yet, and I'm glad that she's getting that original twist out there in Hollywood. This film got a 7 from me because of the average story of ghetto youth, but it has such a great actress portraying a rarely-seen role in a minimal budget movie. Great work.\" 1 Prepare the data In particular, we remove
<br /> tags. from tensorflow.keras.layers import TextVectorization import string import re # Having looked at our data above, we see that the raw text contains HTML break # tags of the form '<br />
'. These tags will not be removed by the default # standardizer (which doesn't strip HTML). Because of this, we will need to # create a custom standardization function. def custom_standardization(input_data): lowercase = tf.strings.lower(input_data) stripped_html = tf.strings.regex_replace(lowercase, \"<br />
\", \" \") return tf.strings.regex_replace( stripped_html, f\"[{re.escape(string.punctuation)}]\", \"\" ) # Model constants. max_features = 20000 embedding_dim = 128 sequence_length = 500 # Now that we have our custom standardization, we can instantiate our text # vectorization layer. We are using this layer to normalize, split, and map # strings to integers, so we set our 'output_mode' to 'int'. # Note that we're using the default split function, # and the custom standardization defined above. # We also set an explicit maximum sequence length, since the CNNs later in our # model won't support ragged sequences. vectorize_layer = TextVectorization( standardize=custom_standardization, max_tokens=max_features, output_mode=\"int\", output_sequence_length=sequence_length, ) # Now that the vocab layer has been created, call `adapt` on a text-only # dataset to create the vocabulary. You don't have to batch, but for very large # datasets this means you're not keeping spare copies of the dataset in memory. # Let's make a text-only dataset (no labels): text_ds = raw_train_ds.map(lambda x, y: x) # Let's call `adapt`: vectorize_layer.adapt(text_ds) Two options to vectorize the data There are 2 ways we can use our text vectorization layer: Option 1: Make it part of the model, so as to obtain a model that processes raw strings, like this: text_input = tf.keras.Input(shape=(1,), dtype=tf.string, name='text') x = vectorize_layer(text_input) x = layers.Embedding(max_features + 1, embedding_dim)(x) ... Option 2: Apply it to the text dataset to obtain a dataset of word indices, then feed it into a model that expects integer sequences as inputs. An important difference between the two is that option 2 enables you to do asynchronous CPU processing and buffering of your data when training on GPU. So if you're training the model on GPU, you probably want to go with this option to get the best performance. This is what we will do below. If we were to export our model to production, we'd ship a model that accepts raw strings as input, like in the code snippet for option 1 above. This can be done after training. We do this in the last section. def vectorize_text(text, label): text = tf.expand_dims(text, -1) return vectorize_layer(text), label # Vectorize the data. train_ds = raw_train_ds.map(vectorize_text) val_ds = raw_val_ds.map(vectorize_text) test_ds = raw_test_ds.map(vectorize_text) # Do async prefetching / buffering of the data for best performance on GPU. train_ds = train_ds.cache().prefetch(buffer_size=10) val_ds = val_ds.cache().prefetch(buffer_size=10) test_ds = test_ds.cache().prefetch(buffer_size=10) Build a model We choose a simple 1D convnet starting with an Embedding layer. from tensorflow.keras import layers # A integer input for vocab indices. inputs = tf.keras.Input(shape=(None,), dtype=\"int64\") # Next, we add a layer to map those vocab indices into a space of dimensionality # 'embedding_dim'. 
x = layers.Embedding(max_features, embedding_dim)(inputs) x = layers.Dropout(0.5)(x) # Conv1D + global max pooling x = layers.Conv1D(128, 7, padding=\"valid\", activation=\"relu\", strides=3)(x) x = layers.Conv1D(128, 7, padding=\"valid\", activation=\"relu\", strides=3)(x) x = layers.GlobalMaxPooling1D()(x) # We add a vanilla hidden layer: x = layers.Dense(128, activation=\"relu\")(x) x = layers.Dropout(0.5)(x) # We project onto a single unit output layer, and squash it with a sigmoid: predictions = layers.Dense(1, activation=\"sigmoid\", name=\"predictions\")(x) model = tf.keras.Model(inputs, predictions) # Compile the model with binary crossentropy loss and an adam optimizer. model.compile(loss=\"binary_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]) Train the model epochs = 3 # Fit the model using the train and test datasets. model.fit(train_ds, validation_data=val_ds, epochs=epochs) Epoch 1/3 625/625 [==============================] - 46s 73ms/step - loss: 0.5005 - accuracy: 0.7156 - val_loss: 0.3103 - val_accuracy: 0.8696 Epoch 2/3 625/625 [==============================] - 51s 81ms/step - loss: 0.2262 - accuracy: 0.9115 - val_loss: 0.3255 - val_accuracy: 0.8754 Epoch 3/3 625/625 [==============================] - 50s 81ms/step - loss: 0.1142 - accuracy: 0.9574 - val_loss: 0.4157 - val_accuracy: 0.8698 Evaluate the model on the test set model.evaluate(test_ds) 782/782 [==============================] - 14s 18ms/step - loss: 0.4539 - accuracy: 0.8570 [0.45387956500053406, 0.8569999933242798] Make an end-to-end model If you want to obtain a model capable of processing raw strings, you can simply create a new model (using the weights we just trained): # A string input inputs = tf.keras.Input(shape=(1,), dtype=\"string\") # Turn strings into vocab indices indices = vectorize_layer(inputs) # Turn vocab indices into predictions outputs = model(indices) # Our end to end model end_to_end_model = tf.keras.Model(inputs, outputs) end_to_end_model.compile( loss=\"binary_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"] ) # Test it with `raw_test_ds`, which yields raw strings end_to_end_model.evaluate(raw_test_ds) 782/782 [==============================] - 20s 25ms/step - loss: 0.4539 - accuracy: 0.8570 [0.45387890934944153, 0.8569999933242798] Implement a Switch Transformer for text classification. Introduction This example demonstrates the implementation of the Switch Transformer model for text classification. The Switch Transformer replaces the feedforward network (FFN) layer in the standard Transformer with a Mixture of Expert (MoE) routing layer, where each expert operates independently on the tokens in the sequence. This allows increasing the model size without increasing the computation needed to process each example. Note that, for training the Switch Transformer efficiently, data and model parallelism need to be applied, so that expert modules can run simultaneously, each on its own accelerator. While the implementation described in the paper uses the TensorFlow Mesh framework for distributed training, this example presents a simple, non-distributed implementation of the Switch Transformer model for demonstration purposes. 
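To make the routing idea concrete before walking through the full implementation, here is a minimal, self-contained sketch of top-1 expert routing. It uses made-up toy dimensions, ignores expert capacity and the load-balancing loss, and dispatches tokens with a plain Python loop; the Router and Switch layers defined below handle those details and vectorize the dispatch with one-hot tensors and einsum.
import tensorflow as tf

# Toy sizes, for illustration only.
num_tokens, embed_dim, num_experts = 6, 4, 3

tokens = tf.random.normal([num_tokens, embed_dim])

# The router is a dense projection producing one logit per expert,
# turned into per-token routing probabilities with a softmax.
router = tf.keras.layers.Dense(num_experts)
router_probs = tf.nn.softmax(router(tokens), axis=-1)

# Top-1 routing: each token goes to its single most probable expert,
# and the expert output is scaled by that probability (the gate).
expert_gate, expert_index = tf.math.top_k(router_probs, k=1)

# Each expert is an independent feedforward network.
experts = [tf.keras.layers.Dense(embed_dim) for _ in range(num_experts)]

outputs = []
for t in range(num_tokens):
    e = int(expert_index[t, 0])
    outputs.append(expert_gate[t] * experts[e](tokens[t : t + 1]))
outputs = tf.concat(outputs, axis=0)
print(outputs.shape)  # (6, 4)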
Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Download and prepare dataset vocab_size = 20000 # Only consider the top 20k words num_tokens_per_example = 200 # Only consider the first 200 words of each movie review (x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data(num_words=vocab_size) print(len(x_train), \"Training sequences\") print(len(x_val), \"Validation sequences\") x_train = keras.preprocessing.sequence.pad_sequences( x_train, maxlen=num_tokens_per_example ) x_val = keras.preprocessing.sequence.pad_sequences(x_val, maxlen=num_tokens_per_example) 25000 Training sequences 25000 Validation sequences Define hyperparameters embed_dim = 32 # Embedding size for each token. num_heads = 2 # Number of attention heads ff_dim = 32 # Hidden layer size in feedforward network. num_experts = 10 # Number of experts used in the Switch Transformer. batch_size = 50 # Batch size. learning_rate = 0.001 # Learning rate. dropout_rate = 0.25 # Dropout rate. num_epochs = 3 # Number of epochs. num_tokens_per_batch = ( batch_size * num_tokens_per_example ) # Total number of tokens per batch. print(f\"Number of tokens per batch: {num_tokens_per_batch}\") Number of tokens per batch: 10000 Implement token & position embedding layer It consists of two seperate embedding layers, one for tokens, one for token index (positions). class TokenAndPositionEmbedding(layers.Layer): def __init__(self, maxlen, vocab_size, embed_dim): super(TokenAndPositionEmbedding, self).__init__() self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim) self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim) def call(self, x): maxlen = tf.shape(x)[-1] positions = tf.range(start=0, limit=maxlen, delta=1) positions = self.pos_emb(positions) x = self.token_emb(x) return x + positions Implement the feedforward network This is used as the Mixture of Experts in the Switch Transformer. def create_feedforward_network(ff_dim, name=None): return keras.Sequential( [layers.Dense(ff_dim, activation=\"relu\"), layers.Dense(ff_dim)], name=name ) Implement the load-balanced loss This is an auxiliary loss to encourage a balanced load across experts. def load_balanced_loss(router_probs, expert_mask): # router_probs [tokens_per_batch, num_experts] is the probability assigned for # each expert per token. expert_mask [tokens_per_batch, num_experts] contains # the expert with the highest router probability in one−hot format. num_experts = tf.shape(expert_mask)[-1] # Get the fraction of tokens routed to each expert. # density is a vector of length num experts that sums to 1. density = tf.reduce_mean(expert_mask, axis=0) # Get fraction of probability mass assigned to each expert from the router # across all tokens. density_proxy is a vector of length num experts that sums to 1. density_proxy = tf.reduce_mean(router_probs, axis=0) # Want both vectors to have uniform allocation (1/num experts) across all # num_expert elements. The two vectors will be pushed towards uniform allocation # when the dot product is minimized. 
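# As a quick sanity check of the formula below: with perfectly balanced routing,
# both density and density_proxy equal [1/num_experts, ..., 1/num_experts], so
# the mean of their elementwise product is 1/num_experts**2 and the loss comes
# out to 1. If every token were routed to a single expert, the loss would be
# roughly num_experts, which is what this penalty discourages.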
loss = tf.reduce_mean(density_proxy * density) * tf.cast( (num_experts ** 2), tf.dtypes.float32 ) return loss Implement the router as a layer class Router(layers.Layer): def __init__(self, num_experts, expert_capacity): self.num_experts = num_experts self.route = layers.Dense(units=num_experts) self.expert_capacity = expert_capacity super(Router, self).__init__() def call(self, inputs, training=False): # inputs shape: [tokens_per_batch, embed_dim] # router_logits shape: [tokens_per_batch, num_experts] router_logits = self.route(inputs) if training: # Add noise for exploration across experts. router_logits += tf.random.uniform( shape=router_logits.shape, minval=0.9, maxval=1.1 ) # Probabilities for each token of what expert it should be sent to. router_probs = keras.activations.softmax(router_logits, axis=-1) # Get the top−1 expert for each token. expert_gate is the top−1 probability # from the router for each token. expert_index is what expert each token # is going to be routed to. expert_gate, expert_index = tf.math.top_k(router_probs, k=1) # expert_mask shape: [tokens_per_batch, num_experts] expert_mask = tf.one_hot(expert_index, depth=self.num_experts) # Compute load balancing loss. aux_loss = load_balanced_loss(router_probs, expert_mask) self.add_loss(aux_loss) # Experts have a fixed capacity, ensure we do not exceed it. Construct # the batch indices, to each expert, with position in expert make sure that # not more that expert capacity examples can be routed to each expert. position_in_expert = tf.cast( tf.math.cumsum(expert_mask, axis=0) * expert_mask, tf.dtypes.int32 ) # Keep only tokens that fit within expert capacity. expert_mask *= tf.cast( tf.math.less( tf.cast(position_in_expert, tf.dtypes.int32), self.expert_capacity ), tf.dtypes.float32, ) expert_mask_flat = tf.reduce_sum(expert_mask, axis=-1) # Mask out the experts that have overflowed the expert capacity. expert_gate *= expert_mask_flat # Combine expert outputs and scaling with router probability. # combine_tensor shape: [tokens_per_batch, num_experts, expert_capacity] combined_tensor = tf.expand_dims( expert_gate * expert_mask_flat * tf.squeeze(tf.one_hot(expert_index, depth=self.num_experts), 1), -1, ) * tf.squeeze(tf.one_hot(position_in_expert, depth=self.expert_capacity), 1) # Create binary dispatch_tensor [tokens_per_batch, num_experts, expert_capacity] # that is 1 if the token gets routed to the corresponding expert. 
dispatch_tensor = tf.cast(combined_tensor, tf.dtypes.float32) return dispatch_tensor, combined_tensor Implement a Switch layer class Switch(layers.Layer): def __init__(self, num_experts, embed_dim, num_tokens_per_batch, capacity_factor=1): self.num_experts = num_experts self.embed_dim = embed_dim self.experts = [ create_feedforward_network(embed_dim) for _ in range(num_experts) ] self.expert_capacity = num_tokens_per_batch // self.num_experts self.router = Router(self.num_experts, self.expert_capacity) super(Switch, self).__init__() def call(self, inputs): batch_size = tf.shape(inputs)[0] num_tokens_per_example = tf.shape(inputs)[1] # inputs shape: [num_tokens_per_batch, embed_dim] inputs = tf.reshape(inputs, [num_tokens_per_batch, self.embed_dim]) # dispatch_tensor shape: [expert_capacity, num_experts, tokens_per_batch] # combine_tensor shape: [tokens_per_batch, num_experts, expert_capacity] dispatch_tensor, combine_tensor = self.router(inputs) # expert_inputs shape: [num_experts, expert_capacity, embed_dim] expert_inputs = tf.einsum(\"ab,acd->cdb\", inputs, dispatch_tensor) expert_inputs = tf.reshape( expert_inputs, [self.num_experts, self.expert_capacity, self.embed_dim] ) # Dispatch to experts expert_input_list = tf.unstack(expert_inputs, axis=0) expert_output_list = [ self.experts[idx](expert_input) for idx, expert_input in enumerate(expert_input_list) ] # expert_outputs shape: [expert_capacity, num_experts, embed_dim] expert_outputs = tf.stack(expert_output_list, axis=1) # expert_outputs_combined shape: [tokens_per_batch, embed_dim] expert_outputs_combined = tf.einsum( \"abc,xba->xc\", expert_outputs, combine_tensor ) # output shape: [batch_size, num_tokens_per_example, embed_dim] outputs = tf.reshape( expert_outputs_combined, [batch_size, num_tokens_per_example, self.embed_dim], ) return outputs Implement a Transformer block layer class TransformerBlock(layers.Layer): def __init__(self, embed_dim, num_heads, ffn, dropout_rate=0.1): super(TransformerBlock, self).__init__() self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim) # The ffn can be either a standard feedforward network or a switch # layer with a Mixture of Experts. self.ffn = ffn self.layernorm1 = layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = layers.LayerNormalization(epsilon=1e-6) self.dropout1 = layers.Dropout(dropout_rate) self.dropout2 = layers.Dropout(dropout_rate) def call(self, inputs, training): attn_output = self.att(inputs, inputs) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(inputs + attn_output) ffn_output = self.ffn(out1) ffn_output = self.dropout2(ffn_output, training=training) return self.layernorm2(out1 + ffn_output) Implement the classifier The TransformerBlock layer outputs one vector for each time step of our input sequence. Here, we take the mean across all time steps and use a feedforward network on top of it to classify text. 
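Taking "the mean across all time steps" is exactly what GlobalAveragePooling1D computes in the classifier below; here is a tiny check with a made-up tensor (not part of the model):
import tensorflow as tf

x = tf.random.normal([2, 5, 8])  # (batch, time steps, features), toy sizes
pooled = tf.keras.layers.GlobalAveragePooling1D()(x)
same = tf.reduce_mean(x, axis=1)  # mean over the time axis
print(bool(tf.reduce_all(tf.abs(pooled - same) < 1e-6)))  # True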
def create_classifier(): switch = Switch(num_experts, embed_dim, num_tokens_per_batch) transformer_block = TransformerBlock(ff_dim, num_heads, switch) inputs = layers.Input(shape=(num_tokens_per_example,)) embedding_layer = TokenAndPositionEmbedding( num_tokens_per_example, vocab_size, embed_dim ) x = embedding_layer(inputs) x = transformer_block(x) x = layers.GlobalAveragePooling1D()(x) x = layers.Dropout(dropout_rate)(x) x = layers.Dense(ff_dim, activation=\"relu\")(x) x = layers.Dropout(dropout_rate)(x) outputs = layers.Dense(2, activation=\"softmax\")(x) classifier = keras.Model(inputs=inputs, outputs=outputs) return classifier Train and evaluate the model def run_experiment(classifier): classifier.compile( optimizer=keras.optimizers.Adam(learning_rate), loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"], ) history = classifier.fit( x_train, y_train, batch_size=batch_size, epochs=num_epochs, validation_data=(x_val, y_val), ) return history classifier = create_classifier() run_experiment(classifier) Epoch 1/3 500/500 [==============================] - 575s 1s/step - loss: 1.5311 - accuracy: 0.7151 - val_loss: 1.2915 - val_accuracy: 0.8772 Epoch 2/3 500/500 [==============================] - 575s 1s/step - loss: 1.1971 - accuracy: 0.9262 - val_loss: 1.3073 - val_accuracy: 0.8708 Epoch 3/3 500/500 [==============================] - 624s 1s/step - loss: 1.1284 - accuracy: 0.9563 - val_loss: 1.3547 - val_accuracy: 0.8637 Conclusion Compared to the standard Transformer architecture, the Switch Transformer can have a much larger number of parameters, leading to increased model capacity, while maintaining a reasonable computational cost. Implement a Transformer block as a Keras layer and use it for text classification. Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Implement a Transformer block as a layer class TransformerBlock(layers.Layer): def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1): super(TransformerBlock, self).__init__() self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim) self.ffn = keras.Sequential( [layers.Dense(ff_dim, activation=\"relu\"), layers.Dense(embed_dim),] ) self.layernorm1 = layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = layers.LayerNormalization(epsilon=1e-6) self.dropout1 = layers.Dropout(rate) self.dropout2 = layers.Dropout(rate) def call(self, inputs, training): attn_output = self.att(inputs, inputs) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(inputs + attn_output) ffn_output = self.ffn(out1) ffn_output = self.dropout2(ffn_output, training=training) return self.layernorm2(out1 + ffn_output) Implement embedding layer Two seperate embedding layers, one for tokens, one for token index (positions). 
class TokenAndPositionEmbedding(layers.Layer): def __init__(self, maxlen, vocab_size, embed_dim): super(TokenAndPositionEmbedding, self).__init__() self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim) self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim) def call(self, x): maxlen = tf.shape(x)[-1] positions = tf.range(start=0, limit=maxlen, delta=1) positions = self.pos_emb(positions) x = self.token_emb(x) return x + positions Download and prepare dataset vocab_size = 20000 # Only consider the top 20k words maxlen = 200 # Only consider the first 200 words of each movie review (x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data(num_words=vocab_size) print(len(x_train), \"Training sequences\") print(len(x_val), \"Validation sequences\") x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen) x_val = keras.preprocessing.sequence.pad_sequences(x_val, maxlen=maxlen) Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz 17465344/17464789 [==============================] - 0s 0us/step :6: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/datasets/imdb.py:159: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray x_train, y_train = np.array(xs[:idx]), np.array(labels[:idx]) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/datasets/imdb.py:160: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray x_test, y_test = np.array(xs[idx:]), np.array(labels[idx:]) 25000 Training sequences 25000 Validation sequences Create classifier model using transformer layer Transformer layer outputs one vector for each time step of our input sequence. Here, we take the mean across all time steps and use a feed forward network on top of it to classify text. 
embed_dim = 32 # Embedding size for each token num_heads = 2 # Number of attention heads ff_dim = 32 # Hidden layer size in feed forward network inside transformer inputs = layers.Input(shape=(maxlen,)) embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim) x = embedding_layer(inputs) transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim) x = transformer_block(x) x = layers.GlobalAveragePooling1D()(x) x = layers.Dropout(0.1)(x) x = layers.Dense(20, activation=\"relu\")(x) x = layers.Dropout(0.1)(x) outputs = layers.Dense(2, activation=\"softmax\")(x) model = keras.Model(inputs=inputs, outputs=outputs) Train and Evaluate model.compile(optimizer=\"adam\", loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"]) history = model.fit( x_train, y_train, batch_size=32, epochs=2, validation_data=(x_val, y_val) ) Epoch 1/2 782/782 [==============================] - 15s 18ms/step - loss: 0.5112 - accuracy: 0.7070 - val_loss: 0.3598 - val_accuracy: 0.8444 Epoch 2/2 782/782 [==============================] - 13s 17ms/step - loss: 0.1942 - accuracy: 0.9297 - val_loss: 0.2977 - val_accuracy: 0.8745 Fine tune pretrained BERT from HuggingFace Transformers on SQuAD. Introduction This demonstration uses SQuAD (Stanford Question-Answering Dataset). In SQuAD, an input consists of a question, and a paragraph for context. The goal is to find the span of text in the paragraph that answers the question. We evaluate our performance on this data with the \"Exact Match\" metric, which measures the percentage of predictions that exactly match any one of the ground-truth answers. We fine-tune a BERT model to perform this task as follows: Feed the context and the question as inputs to BERT. Take two vectors S and T with dimensions equal to that of hidden states in BERT. Compute the probability of each token being the start and end of the answer span. The probability of a token being the start of the answer is given by a dot product between S and the representation of the token in the last layer of BERT, followed by a softmax over all tokens. The probability of a token being the end of the answer is computed similarly with the vector T. Fine-tune BERT and learn S and T along the way. References: BERT SQuAD Setup import os import re import json import string import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tokenizers import BertWordPieceTokenizer from transformers import BertTokenizer, TFBertModel, BertConfig max_len = 384 configuration = BertConfig() # default parameters and configuration for BERT Set-up BERT tokenizer # Save the slow pretrained tokenizer slow_tokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\") save_path = \"bert_base_uncased/\" if not os.path.exists(save_path): os.makedirs(save_path) slow_tokenizer.save_pretrained(save_path) # Load the fast tokenizer from saved file tokenizer = BertWordPieceTokenizer(\"bert_base_uncased/vocab.txt\", lowercase=True) Load the data train_data_url = \"https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\" train_path = keras.utils.get_file(\"train.json\", train_data_url) eval_data_url = \"https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json\" eval_path = keras.utils.get_file(\"eval.json\", eval_data_url) Preprocess the data Go through the JSON file and store every record as a SquadExample object. Go through each SquadExample and create x_train, y_train, x_eval, y_eval. 
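Before preprocessing, here is the start/end probability computation from the introduction in isolation. This is only a sketch with random tensors and a made-up hidden size; in the model built later, the dot products with S and T are realized as two Dense(1, use_bias=False) layers applied to BERT's token representations, followed by a softmax across token positions.
import tensorflow as tf

seq_len, hidden_size = 384, 8  # hidden_size is made up; BERT-base uses 768

# Stand-in for BERT's final-layer representation of each token.
token_reprs = tf.random.normal([seq_len, hidden_size])
# The start and end vectors S and T that are learned during fine-tuning.
S = tf.random.normal([hidden_size])
T = tf.random.normal([hidden_size])

# Dot product of every token with S (resp. T), then a softmax over tokens.
start_probs = tf.nn.softmax(tf.linalg.matvec(token_reprs, S))
end_probs = tf.nn.softmax(tf.linalg.matvec(token_reprs, T))
print(start_probs.shape, end_probs.shape)  # (384,) (384,)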
class SquadExample: def __init__(self, question, context, start_char_idx, answer_text, all_answers): self.question = question self.context = context self.start_char_idx = start_char_idx self.answer_text = answer_text self.all_answers = all_answers self.skip = False def preprocess(self): context = self.context question = self.question answer_text = self.answer_text start_char_idx = self.start_char_idx # Clean context, answer and question context = \" \".join(str(context).split()) question = \" \".join(str(question).split()) answer = \" \".join(str(answer_text).split()) # Find end character index of answer in context end_char_idx = start_char_idx + len(answer) if end_char_idx >= len(context): self.skip = True return # Mark the character indexes in context that are in answer is_char_in_ans = [0] * len(context) for idx in range(start_char_idx, end_char_idx): is_char_in_ans[idx] = 1 # Tokenize context tokenized_context = tokenizer.encode(context) # Find tokens that were created from answer characters ans_token_idx = [] for idx, (start, end) in enumerate(tokenized_context.offsets): if sum(is_char_in_ans[start:end]) > 0: ans_token_idx.append(idx) if len(ans_token_idx) == 0: self.skip = True return # Find start and end token index for tokens from answer start_token_idx = ans_token_idx[0] end_token_idx = ans_token_idx[-1] # Tokenize question tokenized_question = tokenizer.encode(question) # Create inputs input_ids = tokenized_context.ids + tokenized_question.ids[1:] token_type_ids = [0] * len(tokenized_context.ids) + [1] * len( tokenized_question.ids[1:] ) attention_mask = [1] * len(input_ids) # Pad and create attention masks. # Skip if truncation is needed padding_length = max_len - len(input_ids) if padding_length > 0: # pad input_ids = input_ids + ([0] * padding_length) attention_mask = attention_mask + ([0] * padding_length) token_type_ids = token_type_ids + ([0] * padding_length) elif padding_length < 0: # skip self.skip = True return self.input_ids = input_ids self.token_type_ids = token_type_ids self.attention_mask = attention_mask self.start_token_idx = start_token_idx self.end_token_idx = end_token_idx self.context_token_to_char = tokenized_context.offsets with open(train_path) as f: raw_train_data = json.load(f) with open(eval_path) as f: raw_eval_data = json.load(f) def create_squad_examples(raw_data): squad_examples = [] for item in raw_data[\"data\"]: for para in item[\"paragraphs\"]: context = para[\"context\"] for qa in para[\"qas\"]: question = qa[\"question\"] answer_text = qa[\"answers\"][0][\"text\"] all_answers = [_[\"text\"] for _ in qa[\"answers\"]] start_char_idx = qa[\"answers\"][0][\"answer_start\"] squad_eg = SquadExample( question, context, start_char_idx, answer_text, all_answers ) squad_eg.preprocess() squad_examples.append(squad_eg) return squad_examples def create_inputs_targets(squad_examples): dataset_dict = { \"input_ids\": [], \"token_type_ids\": [], \"attention_mask\": [], \"start_token_idx\": [], \"end_token_idx\": [], } for item in squad_examples: if item.skip == False: for key in dataset_dict: dataset_dict[key].append(getattr(item, key)) for key in dataset_dict: dataset_dict[key] = np.array(dataset_dict[key]) x = [ dataset_dict[\"input_ids\"], dataset_dict[\"token_type_ids\"], dataset_dict[\"attention_mask\"], ] y = [dataset_dict[\"start_token_idx\"], dataset_dict[\"end_token_idx\"]] return x, y train_squad_examples = create_squad_examples(raw_train_data) x_train, y_train = create_inputs_targets(train_squad_examples) print(f\"{len(train_squad_examples)} 
training points created.\") eval_squad_examples = create_squad_examples(raw_eval_data) x_eval, y_eval = create_inputs_targets(eval_squad_examples) print(f\"{len(eval_squad_examples)} evaluation points created.\") 87599 training points created. 10570 evaluation points created. Create the Question-Answering Model using BERT and Functional API def create_model(): ## BERT encoder encoder = TFBertModel.from_pretrained(\"bert-base-uncased\") ## QA Model input_ids = layers.Input(shape=(max_len,), dtype=tf.int32) token_type_ids = layers.Input(shape=(max_len,), dtype=tf.int32) attention_mask = layers.Input(shape=(max_len,), dtype=tf.int32) embedding = encoder( input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask )[0] start_logits = layers.Dense(1, name=\"start_logit\", use_bias=False)(embedding) start_logits = layers.Flatten()(start_logits) end_logits = layers.Dense(1, name=\"end_logit\", use_bias=False)(embedding) end_logits = layers.Flatten()(end_logits) start_probs = layers.Activation(keras.activations.softmax)(start_logits) end_probs = layers.Activation(keras.activations.softmax)(end_logits) model = keras.Model( inputs=[input_ids, token_type_ids, attention_mask], outputs=[start_probs, end_probs], ) loss = keras.losses.SparseCategoricalCrossentropy(from_logits=False) optimizer = keras.optimizers.Adam(lr=5e-5) model.compile(optimizer=optimizer, loss=[loss, loss]) return model This code should preferably be run on Google Colab TPU runtime. With Colab TPUs, each epoch will take 5-6 minutes. use_tpu = True if use_tpu: # Create distribution strategy tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() strategy = tf.distribute.TPUStrategy(tpu) # Create model with strategy.scope(): model = create_model() else: model = create_model() model.summary() INFO:absl:Entering into master device scope: /job:worker/replica:0/task:0/device:CPU:0 INFO:tensorflow:Initializing the TPU system: grpc://10.48.159.170:8470 INFO:tensorflow:Clearing out eager caches INFO:tensorflow:Finished initializing TPU system. 
INFO:tensorflow:Found TPU system: INFO:tensorflow:*** Num TPU Cores: 8 INFO:tensorflow:*** Num TPU Workers: 1 INFO:tensorflow:*** Num TPU Cores Per Worker: 8 Model: \"model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 384)] 0 __________________________________________________________________________________________________ input_3 (InputLayer) [(None, 384)] 0 __________________________________________________________________________________________________ input_2 (InputLayer) [(None, 384)] 0 __________________________________________________________________________________________________ tf_bert_model (TFBertModel) ((None, 384, 768), ( 109482240 input_1[0][0] __________________________________________________________________________________________________ start_logit (Dense) (None, 384, 1) 768 tf_bert_model[0][0] __________________________________________________________________________________________________ end_logit (Dense) (None, 384, 1) 768 tf_bert_model[0][0] __________________________________________________________________________________________________ flatten (Flatten) (None, 384) 0 start_logit[0][0] __________________________________________________________________________________________________ flatten_1 (Flatten) (None, 384) 0 end_logit[0][0] __________________________________________________________________________________________________ activation_7 (Activation) (None, 384) 0 flatten[0][0] __________________________________________________________________________________________________ activation_8 (Activation) (None, 384) 0 flatten_1[0][0] ================================================================================================== Total params: 109,483,776 Trainable params: 109,483,776 Non-trainable params: 0 __________________________________________________________________________________________________ Create evaluation Callback This callback will compute the exact match score using the validation data after every epoch. def normalize_text(text): text = text.lower() # Remove punctuations exclude = set(string.punctuation) text = \"\".join(ch for ch in text if ch not in exclude) # Remove articles regex = re.compile(r\"\b(a|an|the)\b\", re.UNICODE) text = re.sub(regex, \" \", text) # Remove extra white space text = \" \".join(text.split()) return text class ExactMatch(keras.callbacks.Callback): \"\"\" Each `SquadExample` object contains the character level offsets for each token in its input paragraph. We use them to get back the span of text corresponding to the tokens between our predicted start and end tokens. All the ground-truth answers are also present in each `SquadExample` object. We calculate the percentage of data points where the span of text obtained from model predictions matches one of the ground-truth answers. 
\"\"\" def __init__(self, x_eval, y_eval): self.x_eval = x_eval self.y_eval = y_eval def on_epoch_end(self, epoch, logs=None): pred_start, pred_end = self.model.predict(self.x_eval) count = 0 eval_examples_no_skip = [_ for _ in eval_squad_examples if _.skip == False] for idx, (start, end) in enumerate(zip(pred_start, pred_end)): squad_eg = eval_examples_no_skip[idx] offsets = squad_eg.context_token_to_char start = np.argmax(start) end = np.argmax(end) if start >= len(offsets): continue pred_char_start = offsets[start][0] if end < len(offsets): pred_char_end = offsets[end][1] pred_ans = squad_eg.context[pred_char_start:pred_char_end] else: pred_ans = squad_eg.context[pred_char_start:] normalized_pred_ans = normalize_text(pred_ans) normalized_true_ans = [normalize_text(_) for _ in squad_eg.all_answers] if normalized_pred_ans in normalized_true_ans: count += 1 acc = count / len(self.y_eval[0]) print(f\"\nepoch={epoch+1}, exact match score={acc:.2f}\") Train and Evaluate exact_match_callback = ExactMatch(x_eval, y_eval) model.fit( x_train, y_train, epochs=1, # For demonstration, 3 epochs are recommended verbose=2, batch_size=64, callbacks=[exact_match_callback], ) epoch=1, exact match score=0.78 1346/1346 - 350s - activation_7_loss: 1.3488 - loss: 2.5905 - activation_8_loss: 1.2417 FNet transformer for text generation in Keras. Introduction The original transformer implementation (Vaswani et al., 2017) was one of the major breakthroughs in Natural Language Processing, giving rise to important architectures such BERT and GPT. However, the drawback of these architectures is that the self-attention mechanism they use is computationally expensive. The FNet architecture proposes to replace this self-attention attention with a leaner mechanism: a Fourier transformation-based linear mixer for input tokens. The FNet model was able to achieve 92-97% of BERT's accuracy while training 80% faster on GPUs and almost 70% faster on TPUs. This type of design provides an efficient and small model size, leading to faster inference times. In this example, we will implement and train this architecture on the Cornell Movie Dialog corpus to show the applicability of this model to text generation. Imports import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import os import re # Defining hyperparameters VOCAB_SIZE = 8192 MAX_SAMPLES = 50000 BUFFER_SIZE = 20000 MAX_LENGTH = 40 EMBED_DIM = 256 LATENT_DIM = 512 NUM_HEADS = 8 BATCH_SIZE = 64 Loading data We will be using the Cornell Dialog Corpus. We will parse the movie conversations into questions and answers sets. 
path_to_zip = keras.utils.get_file( \"cornell_movie_dialogs.zip\", origin=\"http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip\", extract=True, ) path_to_dataset = os.path.join( os.path.dirname(path_to_zip), \"cornell movie-dialogs corpus\" ) path_to_movie_lines = os.path.join(path_to_dataset, \"movie_lines.txt\") path_to_movie_conversations = os.path.join(path_to_dataset, \"movie_conversations.txt\") def load_conversations(): # Helper function for loading the conversation splits id2line = {} with open(path_to_movie_lines, errors=\"ignore\") as file: lines = file.readlines() for line in lines: parts = line.replace(\"\n\", \"\").split(\" +++$+++ \") id2line[parts[0]] = parts[4] inputs, outputs = [], [] with open(path_to_movie_conversations, \"r\") as file: lines = file.readlines() for line in lines: parts = line.replace(\"\n\", \"\").split(\" +++$+++ \") # get conversation in a list of line ID conversation = [line[1:-1] for line in parts[3][1:-1].split(\", \")] for i in range(len(conversation) - 1): inputs.append(id2line[conversation[i]]) outputs.append(id2line[conversation[i + 1]]) if len(inputs) >= MAX_SAMPLES: return inputs, outputs return inputs, outputs questions, answers = load_conversations() # Splitting training and validation sets train_dataset = tf.data.Dataset.from_tensor_slices((questions[:40000], answers[:40000])) val_dataset = tf.data.Dataset.from_tensor_slices((questions[40000:], answers[40000:])) Downloading data from http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip 9920512/9916637 [==============================] - 0s 0us/step 9928704/9916637 [==============================] - 0s 0us/step Preprocessing and Tokenization def preprocess_text(sentence): sentence = tf.strings.lower(sentence) # Adding a space between the punctuation and the last word to allow better tokenization sentence = tf.strings.regex_replace(sentence, r\"([?.!,])\", r\" \\1 \") # Replacing multiple continuous spaces with a single space sentence = tf.strings.regex_replace(sentence, r\"\\s\\s+\", \" \") # Replacing non english words with spaces sentence = tf.strings.regex_replace(sentence, r\"[^a-z?.!,]+\", \" \") sentence = tf.strings.strip(sentence) sentence = tf.strings.join([\"[start]\", sentence, \"[end]\"], separator=\" \") return sentence vectorizer = layers.TextVectorization( VOCAB_SIZE, standardize=preprocess_text, output_mode=\"int\", output_sequence_length=MAX_LENGTH, ) # We will adapt the vectorizer to both the questions and answers # This dataset is batched to parallelize and speed up the process vectorizer.adapt(tf.data.Dataset.from_tensor_slices((questions + answers)).batch(128)) Tokenizing and padding sentences using TextVectorization def vectorize_text(inputs, outputs): inputs, outputs = vectorizer(inputs), vectorizer(outputs) # One extra padding token to the right to match the output shape outputs = tf.pad(outputs, [[0, 1]]) return ( {\"encoder_inputs\": inputs, \"decoder_inputs\": outputs[:-1]}, {\"outputs\": outputs[1:]}, ) train_dataset = train_dataset.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE) val_dataset = val_dataset.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE) train_dataset = ( train_dataset.cache() .shuffle(BUFFER_SIZE) .batch(BATCH_SIZE) .prefetch(tf.data.AUTOTUNE) ) val_dataset = val_dataset.cache().batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE) Creating the FNet Encoder The FNet paper proposes a replacement for the standard attention mechanism used by the Transformer architecture (Vaswani et al., 2017). 
Architecture The outputs of the FFT layer are complex numbers. To avoid dealing with complex values in the subsequent layers, only the real part of the output is extracted. The dense layers that follow the Fourier transformation act as convolutions applied on the frequency domain. class FNetEncoder(layers.Layer): def __init__(self, embed_dim, dense_dim, **kwargs): super(FNetEncoder, self).__init__(**kwargs) self.embed_dim = embed_dim self.dense_dim = dense_dim self.dense_proj = keras.Sequential( [ layers.Dense(dense_dim, activation=\"relu\"), layers.Dense(embed_dim), ] ) self.layernorm_1 = layers.LayerNormalization() self.layernorm_2 = layers.LayerNormalization() def call(self, inputs): # Casting the inputs to complex64 inp_complex = tf.cast(inputs, tf.complex64) # Projecting the inputs to the frequency domain using FFT2D and # extracting the real part of the output fft = tf.math.real(tf.signal.fft2d(inp_complex)) proj_input = self.layernorm_1(inputs + fft) proj_output = self.dense_proj(proj_input) return self.layernorm_2(proj_input + proj_output) Creating the Decoder The decoder architecture remains the same as the one proposed in the original transformer architecture (Vaswani et al., 2017), consisting of an embedding, positional encoding, two masked multi-head attention layers and finally the dense output layers. The architecture that follows is taken from Deep Learning with Python, second edition, chapter 11. class PositionalEmbedding(layers.Layer): def __init__(self, sequence_length, vocab_size, embed_dim, **kwargs): super(PositionalEmbedding, self).__init__(**kwargs) self.token_embeddings = layers.Embedding( input_dim=vocab_size, output_dim=embed_dim ) self.position_embeddings = layers.Embedding( input_dim=sequence_length, output_dim=embed_dim ) self.sequence_length = sequence_length self.vocab_size = vocab_size self.embed_dim = embed_dim def call(self, inputs): length = tf.shape(inputs)[-1] positions = tf.range(start=0, limit=length, delta=1) embedded_tokens = self.token_embeddings(inputs) embedded_positions = self.position_embeddings(positions) return embedded_tokens + embedded_positions def compute_mask(self, inputs, mask=None): return tf.math.not_equal(inputs, 0) class FNetDecoder(layers.Layer): def __init__(self, embed_dim, latent_dim, num_heads, **kwargs): super(FNetDecoder, self).__init__(**kwargs) self.embed_dim = embed_dim self.latent_dim = latent_dim self.num_heads = num_heads self.attention_1 = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim ) self.attention_2 = layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim ) self.dense_proj = keras.Sequential( [ layers.Dense(latent_dim, activation=\"relu\"), layers.Dense(embed_dim), ] ) self.layernorm_1 = layers.LayerNormalization() self.layernorm_2 = layers.LayerNormalization() self.layernorm_3 = layers.LayerNormalization() self.supports_masking = True def call(self, inputs, encoder_outputs, mask=None): causal_mask = self.get_causal_attention_mask(inputs) if mask is not None: padding_mask = tf.cast(mask[:, tf.newaxis, :], dtype=\"int32\") padding_mask = tf.minimum(padding_mask, causal_mask) attention_output_1 = self.attention_1( query=inputs, value=inputs, key=inputs, attention_mask=causal_mask ) out_1 = self.layernorm_1(inputs + attention_output_1) attention_output_2 = self.attention_2( query=out_1, value=encoder_outputs, key=encoder_outputs, attention_mask=padding_mask, ) out_2 = self.layernorm_2(out_1 + attention_output_2) proj_output = self.dense_proj(out_2) return self.layernorm_3(out_2 + proj_output) def
get_causal_attention_mask(self, inputs): input_shape = tf.shape(inputs) batch_size, sequence_length = input_shape[0], input_shape[1] i = tf.range(sequence_length)[:, tf.newaxis] j = tf.range(sequence_length) mask = tf.cast(i >= j, dtype=\"int32\") mask = tf.reshape(mask, (1, input_shape[1], input_shape[1])) mult = tf.concat( [tf.expand_dims(batch_size, -1), tf.constant([1, 1], dtype=tf.int32)], axis=0, ) return tf.tile(mask, mult) def create_model(): encoder_inputs = keras.Input(shape=(None,), dtype=\"int32\", name=\"encoder_inputs\") x = PositionalEmbedding(MAX_LENGTH, VOCAB_SIZE, EMBED_DIM)(encoder_inputs) encoder_outputs = FNetEncoder(EMBED_DIM, LATENT_DIM)(x) encoder = keras.Model(encoder_inputs, encoder_outputs) decoder_inputs = keras.Input(shape=(None,), dtype=\"int32\", name=\"decoder_inputs\") encoded_seq_inputs = keras.Input( shape=(None, EMBED_DIM), name=\"decoder_state_inputs\" ) x = PositionalEmbedding(MAX_LENGTH, VOCAB_SIZE, EMBED_DIM)(decoder_inputs) x = FNetDecoder(EMBED_DIM, LATENT_DIM, NUM_HEADS)(x, encoded_seq_inputs) x = layers.Dropout(0.5)(x) decoder_outputs = layers.Dense(VOCAB_SIZE, activation=\"softmax\")(x) decoder = keras.Model( [decoder_inputs, encoded_seq_inputs], decoder_outputs, name=\"outputs\" ) decoder_outputs = decoder([decoder_inputs, encoder_outputs]) fnet = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs, name=\"fnet\") return fnet Creating and Training the model fnet = create_model() fnet.compile(\"adam\", loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"]) Here, the epochs parameter is set to a single epoch, but in practice the model will take around 20-30 epochs of training to start outputting comprehensible sentences. Although accuracy is not a good measure for this task, we will use it just to get a hint of the improvement of the network. fnet.fit(train_dataset, epochs=1, validation_data=val_dataset) 625/625 [==============================] - 96s 133ms/step - loss: 1.3036 - accuracy: 0.4354 - val_loss: 0.7964 - val_accuracy: 0.6374 Performing inference VOCAB = vectorizer.get_vocabulary() def decode_sentence(input_sentence): # Mapping the input sentence to tokens and adding start and end tokens tokenized_input_sentence = vectorizer( tf.constant(\"[start] \" + preprocess_text(input_sentence) + \" [end]\") ) # Initializing the initial sentence consisting of only the start token. tokenized_target_sentence = tf.expand_dims(VOCAB.index(\"[start]\"), 0) decoded_sentence = \"\" for i in range(MAX_LENGTH): # Get the predictions predictions = fnet.predict( { \"encoder_inputs\": tf.expand_dims(tokenized_input_sentence, 0), \"decoder_inputs\": tf.expand_dims( tf.pad( tokenized_target_sentence, [[0, MAX_LENGTH - tf.shape(tokenized_target_sentence)[0]]], ), 0, ), } ) # Calculating the token with maximum probability and getting the corresponding word sampled_token_index = tf.argmax(predictions[0, i, :]) sampled_token = VOCAB[sampled_token_index.numpy()] # If sampled token is the end token then stop generating and return the sentence if tf.equal(sampled_token_index, VOCAB.index(\"[end]\")): break decoded_sentence += sampled_token + \" \" tokenized_target_sentence = tf.concat( [tokenized_target_sentence, [sampled_token_index]], 0 ) return decoded_sentence decode_sentence(\"Where have you been all this time?\") 'i m sorry .' Conclusion This example shows how to train and perform inference using the FNet model. 
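As a closing aside on the decoder used above, the following small sketch (not part of the original example) shows the lower-triangular mask that get_causal_attention_mask builds, here for a toy sequence length of 4:

import tensorflow as tf

# Same i >= j comparison used in FNetDecoder.get_causal_attention_mask.
sequence_length = 4
i = tf.range(sequence_length)[:, tf.newaxis]
j = tf.range(sequence_length)
print(tf.cast(i >= j, tf.int32).numpy())
# [[1 0 0 0]
#  [1 1 0 0]
#  [1 1 1 0]
#  [1 1 1 1]]

Each position may only attend to itself and to earlier positions, which is what allows the decoder to be trained with teacher forcing.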
For getting insight into the architecture or for further reading, you can refer to: FNet: Mixing Tokens with Fourier Transforms (Lee-Thorp et al., 2021) Attention Is All You Need (Vaswani et al., 2017) Thanks to François Chollet for his Keras example on English-to-Spanish translation with a sequence-to-sequence Transformer from which the decoder implementation was extracted. Text classification on the Newsgroup20 dataset using pre-trained GloVe word embeddings. Setup import numpy as np import tensorflow as tf from tensorflow import keras Introduction In this example, we show how to train a text classification model that uses pre-trained word embeddings. We'll work with the Newsgroup20 dataset, a set of 20,000 message board messages belonging to 20 different topic categories. For the pre-trained word embeddings, we'll use GloVe embeddings. Download the Newsgroup20 data data_path = keras.utils.get_file( \"news20.tar.gz\", \"http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20.tar.gz\", untar=True, ) Let's take a look at the data import os import pathlib data_dir = pathlib.Path(data_path).parent / \"20_newsgroup\" dirnames = os.listdir(data_dir) print(\"Number of directories:\", len(dirnames)) print(\"Directory names:\", dirnames) fnames = os.listdir(data_dir / \"comp.graphics\") print(\"Number of files in comp.graphics:\", len(fnames)) print(\"Some example filenames:\", fnames[:5]) Number of directories: 20 Directory names: ['talk.politics.mideast', 'rec.autos', 'comp.sys.mac.hardware', 'alt.atheism', 'rec.sport.baseball', 'comp.os.ms-windows.misc', 'rec.sport.hockey', 'sci.crypt', 'sci.med', 'talk.politics.misc', 'rec.motorcycles', 'comp.windows.x', 'comp.graphics', 'comp.sys.ibm.pc.hardware', 'sci.electronics', 'talk.politics.guns', 'sci.space', 'soc.religion.christian', 'misc.forsale', 'talk.religion.misc'] Number of files in comp.graphics: 1000 Some example filenames: ['38254', '38402', '38630', '38865', '38891'] Here's a example of what one file contains: print(open(data_dir / \"comp.graphics\" / \"38987\").read()) Newsgroups: comp.graphics Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!noc.near.net!howland.reston.ans.net!agate!dog.ee.lbl.gov!network.ucsd.edu!usc!rpi!nason110.its.rpi.edu!mabusj From: mabusj@nason110.its.rpi.edu (Jasen M. Mabus) Subject: Looking for Brain in CAD Message-ID: Nntp-Posting-Host: nason110.its.rpi.edu Reply-To: mabusj@rpi.edu Organization: Rensselaer Polytechnic Institute, Troy, NY. Date: Thu, 29 Apr 1993 23:27:20 GMT Lines: 7 Jasen Mabus RPI student I am looking for a hman brain in any CAD (.dxf,.cad,.iges,.cgm,etc.) or picture (.gif,.jpg,.ras,etc.) format for an animation demonstration. If any has or knows of a location please reply by e-mail to mabusj@rpi.edu. Thank you in advance, Jasen Mabus As you can see, there are header lines that are leaking the file's category, either explicitly (the first line is literally the category name), or implicitly, e.g. via the Organization filed. 
Let's get rid of the headers: samples = [] labels = [] class_names = [] class_index = 0 for dirname in sorted(os.listdir(data_dir)): class_names.append(dirname) dirpath = data_dir / dirname fnames = os.listdir(dirpath) print(\"Processing %s, %d files found\" % (dirname, len(fnames))) for fname in fnames: fpath = dirpath / fname f = open(fpath, encoding=\"latin-1\") content = f.read() lines = content.split(\"\n\") lines = lines[10:] content = \"\n\".join(lines) samples.append(content) labels.append(class_index) class_index += 1 print(\"Classes:\", class_names) print(\"Number of samples:\", len(samples)) Processing alt.atheism, 1000 files found Processing comp.graphics, 1000 files found Processing comp.os.ms-windows.misc, 1000 files found Processing comp.sys.ibm.pc.hardware, 1000 files found Processing comp.sys.mac.hardware, 1000 files found Processing comp.windows.x, 1000 files found Processing misc.forsale, 1000 files found Processing rec.autos, 1000 files found Processing rec.motorcycles, 1000 files found Processing rec.sport.baseball, 1000 files found Processing rec.sport.hockey, 1000 files found Processing sci.crypt, 1000 files found Processing sci.electronics, 1000 files found Processing sci.med, 1000 files found Processing sci.space, 1000 files found Processing soc.religion.christian, 997 files found Processing talk.politics.guns, 1000 files found Processing talk.politics.mideast, 1000 files found Processing talk.politics.misc, 1000 files found Processing talk.religion.misc, 1000 files found Classes: ['alt.atheism', 'comp.graphics', 'comp.os.ms-windows.misc', 'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware', 'comp.windows.x', 'misc.forsale', 'rec.autos', 'rec.motorcycles', 'rec.sport.baseball', 'rec.sport.hockey', 'sci.crypt', 'sci.electronics', 'sci.med', 'sci.space', 'soc.religion.christian', 'talk.politics.guns', 'talk.politics.mideast', 'talk.politics.misc', 'talk.religion.misc'] Number of samples: 19997 There's actually one category that doesn't have the expected number of files, but the difference is small enough that the problem remains a balanced classification problem. Shuffle and split the data into training & validation sets # Shuffle the data seed = 1337 rng = np.random.RandomState(seed) rng.shuffle(samples) rng = np.random.RandomState(seed) rng.shuffle(labels) # Extract a training & validation split validation_split = 0.2 num_validation_samples = int(validation_split * len(samples)) train_samples = samples[:-num_validation_samples] val_samples = samples[-num_validation_samples:] train_labels = labels[:-num_validation_samples] val_labels = labels[-num_validation_samples:] Create a vocabulary index Let's use the TextVectorization to index the vocabulary found in the dataset. Later, we'll use the same layer instance to vectorize the samples. Our layer will only consider the top 20,000 words, and will truncate or pad sequences to be actually 200 tokens long. from tensorflow.keras.layers import TextVectorization vectorizer = TextVectorization(max_tokens=20000, output_sequence_length=200) text_ds = tf.data.Dataset.from_tensor_slices(train_samples).batch(128) vectorizer.adapt(text_ds) You can retrieve the computed vocabulary used via vectorizer.get_vocabulary(). Let's print the top 5 words: vectorizer.get_vocabulary()[:5] ['', '[UNK]', 'the', 'to', 'of'] Let's vectorize a test sentence: output = vectorizer([[\"the cat sat on the mat\"]]) output.numpy()[0, :6] array([ 2, 3697, 1686, 15, 2, 5943]) As you can see, \"the\" gets represented as \"2\". 
Why not 0, given that \"the\" was the first word in the vocabulary? That's because index 0 is reserved for padding and index 1 is reserved for \"out of vocabulary\" tokens. Here's a dict mapping words to their indices: voc = vectorizer.get_vocabulary() word_index = dict(zip(voc, range(len(voc)))) As you can see, we obtain the same encoding as above for our test sentence: test = [\"the\", \"cat\", \"sat\", \"on\", \"the\", \"mat\"] [word_index[w] for w in test] [2, 3697, 1686, 15, 2, 5943] Load pre-trained word embeddings Let's download pre-trained GloVe embeddings (a 822M zip file). You'll need to run the following commands: !wget http://nlp.stanford.edu/data/glove.6B.zip !unzip -q glove.6B.zip The archive contains text-encoded vectors of various sizes: 50-dimensional, 100-dimensional, 200-dimensional, 300-dimensional. We'll use the 100D ones. Let's make a dict mapping words (strings) to their NumPy vector representation: path_to_glove_file = os.path.join( os.path.expanduser(\"~\"), \".keras/datasets/glove.6B.100d.txt\" ) embeddings_index = {} with open(path_to_glove_file) as f: for line in f: word, coefs = line.split(maxsplit=1) coefs = np.fromstring(coefs, \"f\", sep=\" \") embeddings_index[word] = coefs print(\"Found %s word vectors.\" % len(embeddings_index)) Found 400000 word vectors. Now, let's prepare a corresponding embedding matrix that we can use in a Keras Embedding layer. It's a simple NumPy matrix where entry at index i is the pre-trained vector for the word of index i in our vectorizer's vocabulary. num_tokens = len(voc) + 2 embedding_dim = 100 hits = 0 misses = 0 # Prepare embedding matrix embedding_matrix = np.zeros((num_tokens, embedding_dim)) for word, i in word_index.items(): embedding_vector = embeddings_index.get(word) if embedding_vector is not None: # Words not found in embedding index will be all-zeros. # This includes the representation for \"padding\" and \"OOV\" embedding_matrix[i] = embedding_vector hits += 1 else: misses += 1 print(\"Converted %d words (%d misses)\" % (hits, misses)) Converted 17999 words (2001 misses) Next, we load the pre-trained word embeddings matrix into an Embedding layer. Note that we set trainable=False so as to keep the embeddings fixed (we don't want to update them during training). from tensorflow.keras.layers import Embedding embedding_layer = Embedding( num_tokens, embedding_dim, embeddings_initializer=keras.initializers.Constant(embedding_matrix), trainable=False, ) Build the model A simple 1D convnet with global max pooling and a classifier at the end. 
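(Before building the convnet, here is a quick optional sanity check, reusing the word_index, embedding_matrix, and embeddings_index objects defined above, that an in-vocabulary word received its GloVe vector.)

# \"the\" is a common in-vocabulary word, so its row in embedding_matrix
# should be an exact copy of its pre-trained GloVe vector.
idx = word_index[\"the\"]  # 2, as shown earlier
print(np.allclose(embedding_matrix[idx], embeddings_index[\"the\"]))  # True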
from tensorflow.keras import layers int_sequences_input = keras.Input(shape=(None,), dtype=\"int64\") embedded_sequences = embedding_layer(int_sequences_input) x = layers.Conv1D(128, 5, activation=\"relu\")(embedded_sequences) x = layers.MaxPooling1D(5)(x) x = layers.Conv1D(128, 5, activation=\"relu\")(x) x = layers.MaxPooling1D(5)(x) x = layers.Conv1D(128, 5, activation=\"relu\")(x) x = layers.GlobalMaxPooling1D()(x) x = layers.Dense(128, activation=\"relu\")(x) x = layers.Dropout(0.5)(x) preds = layers.Dense(len(class_names), activation=\"softmax\")(x) model = keras.Model(int_sequences_input, preds) model.summary() Model: \"model\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, None)] 0 _________________________________________________________________ embedding (Embedding) (None, None, 100) 2000200 _________________________________________________________________ conv1d (Conv1D) (None, None, 128) 64128 _________________________________________________________________ max_pooling1d (MaxPooling1D) (None, None, 128) 0 _________________________________________________________________ conv1d_1 (Conv1D) (None, None, 128) 82048 _________________________________________________________________ max_pooling1d_1 (MaxPooling1 (None, None, 128) 0 _________________________________________________________________ conv1d_2 (Conv1D) (None, None, 128) 82048 _________________________________________________________________ global_max_pooling1d (Global (None, 128) 0 _________________________________________________________________ dense (Dense) (None, 128) 16512 _________________________________________________________________ dropout (Dropout) (None, 128) 0 _________________________________________________________________ dense_1 (Dense) (None, 20) 2580 ================================================================= Total params: 2,247,516 Trainable params: 247,316 Non-trainable params: 2,000,200 _________________________________________________________________ Train the model First, convert our list-of-strings data to NumPy arrays of integer indices. The arrays are right-padded. x_train = vectorizer(np.array([[s] for s in train_samples])).numpy() x_val = vectorizer(np.array([[s] for s in val_samples])).numpy() y_train = np.array(train_labels) y_val = np.array(val_labels) We use categorical crossentropy as our loss since we're doing softmax classification. Moreover, we use sparse_categorical_crossentropy since our labels are integers. 
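As a tiny standalone illustration (with made-up probabilities, not part of the original example) of why the sparse variant fits integer labels: the two losses agree once the integer label is one-hot encoded.

import tensorflow as tf

# Integer label 2 out of 3 classes, and some predicted class probabilities.
y_true_int = tf.constant([2])
y_true_onehot = tf.one_hot(y_true_int, depth=3)
y_pred = tf.constant([[0.1, 0.2, 0.7]])
print(tf.keras.losses.sparse_categorical_crossentropy(y_true_int, y_pred).numpy())
print(tf.keras.losses.categorical_crossentropy(y_true_onehot, y_pred).numpy())
# Both print the same value, -log(0.7) ≈ 0.357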
model.compile( loss=\"sparse_categorical_crossentropy\", optimizer=\"rmsprop\", metrics=[\"acc\"] ) model.fit(x_train, y_train, batch_size=128, epochs=20, validation_data=(x_val, y_val)) Epoch 1/20 125/125 [==============================] - 8s 57ms/step - loss: 2.8766 - acc: 0.0945 - val_loss: 2.0770 - val_acc: 0.2956 Epoch 2/20 125/125 [==============================] - 7s 58ms/step - loss: 2.0792 - acc: 0.2887 - val_loss: 1.6626 - val_acc: 0.4076 Epoch 3/20 125/125 [==============================] - 7s 60ms/step - loss: 1.5632 - acc: 0.4527 - val_loss: 1.3000 - val_acc: 0.5609 Epoch 4/20 125/125 [==============================] - 8s 60ms/step - loss: 1.2945 - acc: 0.5612 - val_loss: 1.2282 - val_acc: 0.5944 Epoch 5/20 125/125 [==============================] - 8s 61ms/step - loss: 1.1137 - acc: 0.6209 - val_loss: 1.0695 - val_acc: 0.6409 Epoch 6/20 125/125 [==============================] - 8s 61ms/step - loss: 0.9556 - acc: 0.6718 - val_loss: 1.1743 - val_acc: 0.6124 Epoch 7/20 125/125 [==============================] - 8s 61ms/step - loss: 0.8235 - acc: 0.7172 - val_loss: 1.0126 - val_acc: 0.6602 Epoch 8/20 125/125 [==============================] - 8s 65ms/step - loss: 0.7268 - acc: 0.7475 - val_loss: 1.0608 - val_acc: 0.6632 Epoch 9/20 125/125 [==============================] - 8s 63ms/step - loss: 0.6441 - acc: 0.7759 - val_loss: 1.0606 - val_acc: 0.6664 Epoch 10/20 125/125 [==============================] - 8s 63ms/step - loss: 0.5409 - acc: 0.8120 - val_loss: 1.0380 - val_acc: 0.6884 Epoch 11/20 125/125 [==============================] - 8s 65ms/step - loss: 0.4846 - acc: 0.8273 - val_loss: 1.1073 - val_acc: 0.6729 Epoch 12/20 125/125 [==============================] - 8s 62ms/step - loss: 0.4173 - acc: 0.8553 - val_loss: 1.1256 - val_acc: 0.6864 Epoch 13/20 125/125 [==============================] - 8s 63ms/step - loss: 0.3419 - acc: 0.8808 - val_loss: 1.1576 - val_acc: 0.6979 Epoch 14/20 125/125 [==============================] - 8s 68ms/step - loss: 0.2869 - acc: 0.9053 - val_loss: 1.1381 - val_acc: 0.6974 Epoch 15/20 125/125 [==============================] - 8s 67ms/step - loss: 0.2617 - acc: 0.9118 - val_loss: 1.3850 - val_acc: 0.6747 Epoch 16/20 125/125 [==============================] - 8s 67ms/step - loss: 0.2543 - acc: 0.9152 - val_loss: 1.3119 - val_acc: 0.6972 Epoch 17/20 125/125 [==============================] - 8s 66ms/step - loss: 0.2109 - acc: 0.9267 - val_loss: 1.3145 - val_acc: 0.6954 Epoch 18/20 125/125 [==============================] - 8s 64ms/step - loss: 0.1939 - acc: 0.9364 - val_loss: 1.4054 - val_acc: 0.7009 Epoch 19/20 125/125 [==============================] - 8s 67ms/step - loss: 0.1873 - acc: 0.9379 - val_loss: 1.7441 - val_acc: 0.6667 Epoch 20/20 125/125 [==============================] - 9s 70ms/step - loss: 0.1762 - acc: 0.9420 - val_loss: 1.5269 - val_acc: 0.6927 Export an end-to-end model Now, we may want to export a Model object that takes as input a string of arbitrary length, rather than a sequence of indices. It would make the model much more portable, since you wouldn't have to worry about the input preprocessing pipeline. 
Our vectorizer is actually a Keras layer, so it's simple: string_input = keras.Input(shape=(1,), dtype=\"string\") x = vectorizer(string_input) preds = model(x) end_to_end_model = keras.Model(string_input, preds) probabilities = end_to_end_model.predict( [[\"this message is about computer graphics and 3D modeling\"]] ) class_names[np.argmax(probabilities[0])] 'comp.graphics' Demonstration of how to train a Keras model that approximates a SVM. Introduction This example demonstrates how to train a Keras model that approximates a Support Vector Machine (SVM). The key idea is to stack a RandomFourierFeatures layer with a linear layer. The RandomFourierFeatures layer can be used to \"kernelize\" linear models by applying a non-linear transformation to the input features and then training a linear model on top of the transformed features. Depending on the loss function of the linear model, the composition of this layer and the linear model results to models that are equivalent (up to approximation) to kernel SVMs (for hinge loss), kernel logistic regression (for logistic loss), kernel linear regression (for MSE loss), etc. In our case, we approximate SVM using a hinge loss. Setup from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.layers.experimental import RandomFourierFeatures Build the model model = keras.Sequential( [ keras.Input(shape=(784,)), RandomFourierFeatures( output_dim=4096, scale=10.0, kernel_initializer=\"gaussian\" ), layers.Dense(units=10), ] ) model.compile( optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss=keras.losses.hinge, metrics=[keras.metrics.CategoricalAccuracy(name=\"acc\")], ) Prepare the data # Load MNIST (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() # Preprocess the data by flattening & scaling it x_train = x_train.reshape(-1, 784).astype(\"float32\") / 255 x_test = x_test.reshape(-1, 784).astype(\"float32\") / 255 # Categorical (one hot) encoding of the labels y_train = keras.utils.to_categorical(y_train) y_test = keras.utils.to_categorical(y_test) Train the model model.fit(x_train, y_train, epochs=20, batch_size=128, validation_split=0.2) Epoch 1/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0829 - acc: 0.8681 - val_loss: 0.0432 - val_acc: 0.9358 Epoch 2/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0423 - acc: 0.9364 - val_loss: 0.0364 - val_acc: 0.9471 Epoch 3/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0340 - acc: 0.9502 - val_loss: 0.0360 - val_acc: 0.9488 Epoch 4/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0292 - acc: 0.9572 - val_loss: 0.0286 - val_acc: 0.9582 Epoch 5/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0251 - acc: 0.9637 - val_loss: 0.0261 - val_acc: 0.9643 Epoch 6/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0231 - acc: 0.9684 - val_loss: 0.0259 - val_acc: 0.9639 Epoch 7/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0215 - acc: 0.9710 - val_loss: 0.0247 - val_acc: 0.9662 Epoch 8/20 375/375 [==============================] - 2s 7ms/step - loss: 0.0197 - acc: 0.9740 - val_loss: 0.0263 - val_acc: 0.9629 Epoch 9/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0190 - acc: 0.9749 - val_loss: 0.0222 - val_acc: 0.9704 Epoch 10/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0177 - acc: 0.9767 - val_loss: 0.0224 - val_acc: 0.9689 Epoch 11/20 375/375 [==============================] - 2s 
6ms/step - loss: 0.0168 - acc: 0.9781 - val_loss: 0.0231 - val_acc: 0.9661 Epoch 12/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0158 - acc: 0.9804 - val_loss: 0.0232 - val_acc: 0.9688 Epoch 13/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0153 - acc: 0.9814 - val_loss: 0.0227 - val_acc: 0.9682 Epoch 14/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0140 - acc: 0.9829 - val_loss: 0.0228 - val_acc: 0.9678 Epoch 15/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0143 - acc: 0.9820 - val_loss: 0.0230 - val_acc: 0.9676 Epoch 16/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0134 - acc: 0.9840 - val_loss: 0.0246 - val_acc: 0.9675 Epoch 17/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0127 - acc: 0.9853 - val_loss: 0.0224 - val_acc: 0.9697 Epoch 18/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0124 - acc: 0.9855 - val_loss: 0.0248 - val_acc: 0.9659 Epoch 19/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0117 - acc: 0.9867 - val_loss: 0.0207 - val_acc: 0.9722 Epoch 20/20 375/375 [==============================] - 2s 6ms/step - loss: 0.0113 - acc: 0.9870 - val_loss: 0.0205 - val_acc: 0.9724 I can't say that it works well or that it is indeed a good idea, but you can probably get decent results by tuning your hyperparameters. You can use this setup to add a \"SVM layer\" on top of a deep learning model, and train the whole model end-to-end. Converting data to the TFRecord format. Introduction The TFRecord format is a simple format for storing a sequence of binary records. Converting your data into TFRecord has many advantages, such as: More efficient storage: the TFRecord data can take up less space than the original data; it can also be partitioned into multiple files. Fast I/O: the TFRecord format can be read with parallel I/O operations, which is useful for TPUs or multiple hosts. Self-contained files: the TFRecord data can be read from a single source—for example, the COCO2017 dataset originally stores data in two folders (\"images\" and \"annotations\"). An important use case of the TFRecord data format is training on TPUs. First, TPUs are fast enough to benefit from optimized I/O operations. In addition, TPUs require data to be stored remotely (e.g. on Google Cloud Storage) and using the TFRecord format makes it easier to load the data without batch-downloading. Performance using the TFRecord format can be further improved if you also use it with the tf.data API. In this example you will learn how to convert data of different types (image, text, and numeric) into TFRecord. Reference TFRecord and tf.train.Example Dependencies import os import json import pprint import tensorflow as tf import matplotlib.pyplot as plt Download the COCO2017 dataset We will be using the COCO2017 dataset, because it has many different types of features, including images, floating point data, and lists. It will serve as a good example of how to encode different features into the TFRecord format. This dataset has two sets of fields: images and annotation meta-data. 
The images are a collection of JPG files and the meta-data are stored in a JSON file which, according to the official site, contains the following properties: id: int, image_id: int, category_id: int, segmentation: RLE or [polygon], object segmentation mask bbox: [x,y,width,height], object bounding box coordinates area: float, area of the bounding box iscrowd: 0 or 1, is single object or a collection root_dir = \"datasets\" tfrecords_dir = \"tfrecords\" images_dir = os.path.join(root_dir, \"val2017\") annotations_dir = os.path.join(root_dir, \"annotations\") annotation_file = os.path.join(annotations_dir, \"instances_val2017.json\") images_url = \"http://images.cocodataset.org/zips/val2017.zip\" annotations_url = ( \"http://images.cocodataset.org/annotations/annotations_trainval2017.zip\" ) # Download image files if not os.path.exists(images_dir): image_zip = tf.keras.utils.get_file( \"images.zip\", cache_dir=os.path.abspath(\".\"), origin=images_url, extract=True, ) os.remove(image_zip) # Download caption annotation files if not os.path.exists(annotations_dir): annotation_zip = tf.keras.utils.get_file( \"captions.zip\", cache_dir=os.path.abspath(\".\"), origin=annotations_url, extract=True, ) os.remove(annotation_zip) print(\"The COCO dataset has been downloaded and extracted successfully.\") with open(annotation_file, \"r\") as f: annotations = json.load(f)[\"annotations\"] print(f\"Number of images: {len(annotations)}\") Downloading data from http://images.cocodataset.org/zips/val2017.zip 815587328/815585330 [==============================] - 990s 1us/step Downloading data from http://images.cocodataset.org/annotations/annotations_trainval2017.zip 172441600/252907541 [===================>..........] - ETA: 1:35 Contents of the COCO2017 dataset pprint.pprint(annotations[60]) {'area': 367.89710000000014, 'bbox': [265.67, 222.31, 26.48, 14.71], 'category_id': 72, 'id': 34096, 'image_id': 525083, 'iscrowd': 0, 'segmentation': [[267.51, 222.31, 292.15, 222.31, 291.05, 237.02, 265.67, 237.02]]} Parameters num_samples is the number of data samples on each TFRecord file. num_tfrecords is total number of TFRecords that we will create. 
num_samples = 4096 num_tfrecords = len(annotations) // num_samples if len(annotations) % num_samples: num_tfrecords += 1 # add one record if there are any remaining samples if not os.path.exists(tfrecords_dir): os.makedirs(tfrecords_dir) # creating TFRecords output folder Define TFRecords helper functions def image_feature(value): \"\"\"Returns a bytes_list from a string / byte.\"\"\" return tf.train.Feature( bytes_list=tf.train.BytesList(value=[tf.io.encode_jpeg(value).numpy()]) ) def bytes_feature(value): \"\"\"Returns a bytes_list from a string / byte.\"\"\" return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value.encode()])) def float_feature(value): \"\"\"Returns a float_list from a float / double.\"\"\" return tf.train.Feature(float_list=tf.train.FloatList(value=[value])) def int64_feature(value): \"\"\"Returns an int64_list from a bool / enum / int / uint.\"\"\" return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) def float_feature_list(value): \"\"\"Returns a list of float_list from a float / double.\"\"\" return tf.train.Feature(float_list=tf.train.FloatList(value=value)) def create_example(image, path, example): feature = { \"image\": image_feature(image), \"path\": bytes_feature(path), \"area\": float_feature(example[\"area\"]), \"bbox\": float_feature_list(example[\"bbox\"]), \"category_id\": int64_feature(example[\"category_id\"]), \"id\": int64_feature(example[\"id\"]), \"image_id\": int64_feature(example[\"image_id\"]), } return tf.train.Example(features=tf.train.Features(feature=feature)) def parse_tfrecord_fn(example): feature_description = { \"image\": tf.io.FixedLenFeature([], tf.string), \"path\": tf.io.FixedLenFeature([], tf.string), \"area\": tf.io.FixedLenFeature([], tf.float32), \"bbox\": tf.io.VarLenFeature(tf.float32), \"category_id\": tf.io.FixedLenFeature([], tf.int64), \"id\": tf.io.FixedLenFeature([], tf.int64), \"image_id\": tf.io.FixedLenFeature([], tf.int64), } example = tf.io.parse_single_example(example, feature_description) example[\"image\"] = tf.io.decode_jpeg(example[\"image\"], channels=3) example[\"bbox\"] = tf.sparse.to_dense(example[\"bbox\"]) return example Generate data in the TFRecord format Let's generate the COCO2017 data in the TFRecord format. The format will be file_{number}.tfrec (this is optional, but including the number sequences in the file names can make counting easier). 
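Before writing out the full dataset, here is a minimal round-trip sketch (with made-up values, reusing the float_feature and int64_feature helpers defined above) showing that a serialized tf.train.Example can be parsed back with tf.io.parse_single_example:

import tensorflow as tf

# Build a tiny Example from made-up values using the helpers defined above.
example = tf.train.Example(
    features=tf.train.Features(
        feature={
            \"area\": float_feature(702.1),
            \"category_id\": int64_feature(18),
        }
    )
)
serialized = example.SerializeToString()

# Parse it back with a matching feature description.
parsed = tf.io.parse_single_example(
    serialized,
    {
        \"area\": tf.io.FixedLenFeature([], tf.float32),
        \"category_id\": tf.io.FixedLenFeature([], tf.int64),
    },
)
print(parsed[\"area\"].numpy(), parsed[\"category_id\"].numpy())  # ~702.1 18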
for tfrec_num in range(num_tfrecords): samples = annotations[(tfrec_num * num_samples) : ((tfrec_num + 1) * num_samples)] with tf.io.TFRecordWriter( tfrecords_dir + \"/file_%.2i-%i.tfrec\" % (tfrec_num, len(samples)) ) as writer: for sample in samples: image_path = f\"{images_dir}/{sample['image_id']:012d}.jpg\" image = tf.io.decode_jpeg(tf.io.read_file(image_path)) example = create_example(image, image_path, sample) writer.write(example.SerializeToString()) Explore one sample from the generated TFRecord raw_dataset = tf.data.TFRecordDataset(f\"{tfrecords_dir}/file_00-{num_samples}.tfrec\") parsed_dataset = raw_dataset.map(parse_tfrecord_fn) for features in parsed_dataset.take(1): for key in features.keys(): if key != \"image\": print(f\"{key}: {features[key]}\") print(f\"Image shape: {features['image'].shape}\") plt.figure(figsize=(7, 7)) plt.imshow(features[\"image\"].numpy()) plt.show() bbox: [473.07 395.93 38.65 28.67] area: 702.1057739257812 category_id: 18 id: 1768 image_id: 289343 path: b'datasets/val2017/000000289343.jpg' Image shape: (640, 529, 3) png Train a simple model using the generated TFRecords Another advantage of TFRecord is that you are able to add many features to it and later use only a few of them, in this case, we are going to use only image and category_id. Define dataset helper functions def prepare_sample(features): image = tf.image.resize(features[\"image\"], size=(224, 224)) return image, features[\"category_id\"] def get_dataset(filenames, batch_size): dataset = ( tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTOTUNE) .map(parse_tfrecord_fn, num_parallel_calls=AUTOTUNE) .map(prepare_sample, num_parallel_calls=AUTOTUNE) .shuffle(batch_size * 10) .batch(batch_size) .prefetch(AUTOTUNE) ) return dataset train_filenames = tf.io.gfile.glob(f\"{tfrecords_dir}/*.tfrec\") batch_size = 32 epochs = 1 steps_per_epoch = 50 AUTOTUNE = tf.data.AUTOTUNE input_tensor = tf.keras.layers.Input(shape=(224, 224, 3), name=\"image\") model = tf.keras.applications.EfficientNetB0( input_tensor=input_tensor, weights=None, classes=91 ) model.compile( optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()], ) model.fit( x=get_dataset(train_filenames, batch_size), epochs=epochs, steps_per_epoch=steps_per_epoch, verbose=1, ) 50/50 [==============================] - 258s 5s/step - loss: 3.9857 - sparse_categorical_accuracy: 0.2375 Conclusion This example demonstrates that instead of reading images and annotations from different sources you can have your data coming from a single source thanks to TFRecord. This process can make storing and reading data simpler and more efficient. For more information, you can go to the TFRecord and tf.train.Example tutorial. The example shows how to implement custom convolution layers using the Conv.convolution_op() API. Introduction You may sometimes need to implement custom versions of convolution layers like Conv1D and Conv2D. Keras enables you do this without implementing the entire layer from scratch: you can reuse most of the base convolution layer and just customize the convolution op itself via the convolution_op() method. This method was introduced in Keras 2.7. So before using the convolution_op() API, ensure that you are running Keras version 2.7.0 or greater. import tensorflow.keras as keras print(keras.__version__) 2.7.0 A Simple StandardizedConv2D implementation There are two ways to use the Conv.convolution_op() API. 
The first way is to override the convolution_op() method on a convolution layer subclass. Using this approach, we can quickly implement a StandardizedConv2D as shown below. import tensorflow as tf import tensorflow.keras as keras import keras.layers as layers import numpy as np class StandardizedConv2DWithOverride(layers.Conv2D): def convolution_op(self, inputs, kernel): mean, var = tf.nn.moments(kernel, axes=[0, 1, 2], keepdims=True) return tf.nn.conv2d( inputs, (kernel - mean) / tf.sqrt(var + 1e-10), padding=\"VALID\", strides=list(self.strides), name=self.__class__.__name__, ) The other way to use the Conv.convolution_op() API is to directly call the convolution_op() method from the call() method of a convolution layer subclass. A comparable class implemented using this approach is shown below. class StandardizedConv2DWithCall(layers.Conv2D): def call(self, inputs): mean, var = tf.nn.moments(self.kernel, axes=[0, 1, 2], keepdims=True) result = self.convolution_op( inputs, (self.kernel - mean) / tf.sqrt(var + 1e-10) ) if self.use_bias: result = result + self.bias return result Example Usage Both of these layers work as drop-in replacements for Conv2D. The following demonstration performs classification on the MNIST dataset. # Model / data parameters num_classes = 10 input_shape = (28, 28, 1) # the data, split between train and test sets (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() # Scale images to the [0, 1] range x_train = x_train.astype(\"float32\") / 255 x_test = x_test.astype(\"float32\") / 255 # Make sure images have shape (28, 28, 1) x_train = np.expand_dims(x_train, -1) x_test = np.expand_dims(x_test, -1) print(\"x_train shape:\", x_train.shape) print(x_train.shape[0], \"train samples\") print(x_test.shape[0], \"test samples\") # convert class vectors to binary class matrices y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) model = keras.Sequential( [ keras.layers.InputLayer(input_shape=input_shape), StandardizedConv2DWithCall(32, kernel_size=(3, 3), activation=\"relu\"), layers.MaxPooling2D(pool_size=(2, 2)), StandardizedConv2DWithOverride(64, kernel_size=(3, 3), activation=\"relu\"), layers.MaxPooling2D(pool_size=(2, 2)), layers.Flatten(), layers.Dropout(0.5), layers.Dense(num_classes, activation=\"softmax\"), ] ) model.summary() batch_size = 128 epochs = 5 model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]) model.fit(x_train, y_train, batch_size=batch_size, epochs=5, validation_split=0.1) x_train shape: (60000, 28, 28, 1) 60000 train samples 10000 test samples Model: \"sequential\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= standardized_conv2d_with_ca (None, 26, 26, 32) 320 ll (StandardizedConv2DWithC all) max_pooling2d (MaxPooling2D (None, 13, 13, 32) 0 ) standardized_conv2d_with_ov (None, 11, 11, 64) 18496 erride (StandardizedConv2DW ithOverride) max_pooling2d_1 (MaxPooling (None, 5, 5, 64) 0 2D) flatten (Flatten) (None, 1600) 0 dropout (Dropout) (None, 1600) 0 dense (Dense) (None, 10) 16010 ================================================================= Total params: 34,826 Trainable params: 34,826 Non-trainable params: 0 _________________________________________________________________ Epoch 1/5 422/422 [==============================] - 7s 15ms/step - loss: 1.8435 - accuracy: 0.8415 - val_loss: 0.1177 - val_accuracy: 
0.9660 Epoch 2/5 422/422 [==============================] - 6s 14ms/step - loss: 0.2460 - accuracy: 0.9338 - val_loss: 0.0727 - val_accuracy: 0.9772 Epoch 3/5 422/422 [==============================] - 6s 14ms/step - loss: 0.1600 - accuracy: 0.9541 - val_loss: 0.0537 - val_accuracy: 0.9862 Epoch 4/5 422/422 [==============================] - 6s 14ms/step - loss: 0.1264 - accuracy: 0.9633 - val_loss: 0.0509 - val_accuracy: 0.9845 Epoch 5/5 422/422 [==============================] - 6s 14ms/step - loss: 0.1090 - accuracy: 0.9679 - val_loss: 0.0457 - val_accuracy: 0.9872 Conclusion The Conv.convolution_op() API provides an easy and readable way to implement custom convolution layers. A StandardizedConvolution implementation using the API is quite terse, consisting of only four lines of code. Demonstration of the 'endpoint layer' pattern (layer that handles loss management). Setup import tensorflow as tf from tensorflow import keras import numpy as np Usage of endpoint layers in the Functional API An \"endpoint layer\" has access to the model's targets, and creates arbitrary losses and metrics using add_loss and add_metric. This enables you to define losses and metrics that don't match the usual signature fn(y_true, y_pred, sample_weight=None). Note that you could have separate metrics for training and eval with this pattern. class LogisticEndpoint(keras.layers.Layer): def __init__(self, name=None): super(LogisticEndpoint, self).__init__(name=name) self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True) self.accuracy_fn = keras.metrics.BinaryAccuracy(name=\"accuracy\") def call(self, logits, targets=None, sample_weight=None): if targets is not None: # Compute the training-time loss value and add it # to the layer using `self.add_loss()`. loss = self.loss_fn(targets, logits, sample_weight) self.add_loss(loss) # Log the accuracy as a metric (we could log arbitrary metrics, # including different metrics for training and inference. self.add_metric(self.accuracy_fn(targets, logits, sample_weight)) # Return the inference-time prediction tensor (for `.predict()`). return tf.nn.softmax(logits) inputs = keras.Input((764,), name=\"inputs\") logits = keras.layers.Dense(1)(inputs) targets = keras.Input((1,), name=\"targets\") sample_weight = keras.Input((1,), name=\"sample_weight\") preds = LogisticEndpoint()(logits, targets, sample_weight) model = keras.Model([inputs, targets, sample_weight], preds) data = { \"inputs\": np.random.random((1000, 764)), \"targets\": np.random.random((1000, 1)), \"sample_weight\": np.random.random((1000, 1)), } model.compile(keras.optimizers.Adam(1e-3)) model.fit(data, epochs=2) Epoch 1/2 32/32 [==============================] - 0s 898us/step - loss: 0.3674 - accuracy: 0.0000e+00 Epoch 2/2 32/32 [==============================] - 0s 847us/step - loss: 0.3563 - accuracy: 0.0000e+00 Exporting an inference-only model Simply don't include targets in the model. The weights stay the same. 
inputs = keras.Input((764,), name=\"inputs\") logits = keras.layers.Dense(1)(inputs) preds = LogisticEndpoint()(logits, targets=None, sample_weight=None) inference_model = keras.Model(inputs, preds) inference_model.set_weights(model.get_weights()) preds = inference_model.predict(np.random.random((1000, 764))) Usage of loss endpoint layers in subclassed models class LogReg(keras.Model): def __init__(self): super(LogReg, self).__init__() self.dense = keras.layers.Dense(1) self.logistic_endpoint = LogisticEndpoint() def call(self, inputs): # Note that all inputs should be in the first argument # since we want to be able to call `model.fit(inputs)`. logits = self.dense(inputs[\"inputs\"]) preds = self.logistic_endpoint( logits=logits, targets=inputs[\"targets\"], sample_weight=inputs[\"sample_weight\"], ) return preds model = LogReg() data = { \"inputs\": np.random.random((1000, 764)), \"targets\": np.random.random((1000, 1)), \"sample_weight\": np.random.random((1000, 1)), } model.compile(keras.optimizers.Adam(1e-3)) model.fit(data, epochs=2) Epoch 1/2 32/32 [==============================] - 0s 833us/step - loss: 0.3499 - accuracy: 0.0000e+00 Epoch 2/2 32/32 [==============================] - 0s 643us/step - loss: 0.3443 - accuracy: 0.0000e+00 Modeling the relationship between training set size and model accuracy. Introduction In many real-world scenarios, the amount image data available to train a deep learning model is limited. This is especially true in the medical imaging domain, where dataset creation is costly. One of the first questions that usually comes up when approaching a new problem is: \"how many images will we need to train a good enough machine learning model?\" In most cases, a small set of samples is available, and we can use it to model the relationship between training data size and model performance. Such a model can be used to estimate the optimal number of images needed to arrive at a sample size that would achieve the required model performance. A systematic review of Sample-Size Determination Methodologies by Balki et al. provides examples of several sample-size determination methods. In this example, a balanced subsampling scheme is used to determine the optimal sample size for our model. This is done by selecting a random subsample consisting of Y number of images and training the model using the subsample. The model is then evaluated on an independent test set. This process is repeated N times for each subsample with replacement to allow for the construction of a mean and confidence interval for the observed performance. Setup import matplotlib.pyplot as plt import numpy as np import tensorflow as tf from tensorflow import keras import tensorflow_datasets as tfds from tensorflow.keras import layers # Define seed and fixed variables seed = 42 tf.random.set_seed(seed) np.random.seed(seed) AUTO = tf.data.AUTOTUNE Load TensorFlow dataset and convert to NumPy arrays We'll be using the TF Flowers dataset. 
# Specify dataset parameters dataset_name = \"tf_flowers\" batch_size = 64 image_size = (224, 224) # Load data from tfds and split 10% off for a test set (train_data, test_data), ds_info = tfds.load( dataset_name, split=[\"train[:90%]\", \"train[90%:]\"], shuffle_files=True, as_supervised=True, with_info=True, ) # Extract number of classes and list of class names num_classes = ds_info.features[\"label\"].num_classes class_names = ds_info.features[\"label\"].names print(f\"Number of classes: {num_classes}\") print(f\"Class names: {class_names}\") # Convert datasets to NumPy arrays def dataset_to_array(dataset, image_size, num_classes): images, labels = [], [] for img, lab in dataset.as_numpy_iterator(): images.append(tf.image.resize(img, image_size).numpy()) labels.append(tf.one_hot(lab, num_classes)) return np.array(images), np.array(labels) img_train, label_train = dataset_to_array(train_data, image_size, num_classes) img_test, label_test = dataset_to_array(test_data, image_size, num_classes) num_train_samples = len(img_train) print(f\"Number of training samples: {num_train_samples}\") Number of classes: 5 Class names: ['dandelion', 'daisy', 'tulips', 'sunflowers', 'roses'] Number of training samples: 3303 Plot a few examples from the test set plt.figure(figsize=(16, 12)) for n in range(30): ax = plt.subplot(5, 6, n + 1) plt.imshow(img_test[n].astype(\"uint8\")) plt.title(np.array(class_names)[label_test[n] == True][0]) plt.axis(\"off\") png Augmentation Define image augmentation using keras preprocessing layers and apply them to the training set. # Define image augmentation model image_augmentation = keras.Sequential( [ layers.RandomFlip(mode=\"horizontal\"), layers.RandomRotation(factor=0.1), layers.RandomZoom(height_factor=(-0.1, -0)), layers.RandomContrast(factor=0.1), ], ) # Apply the augmentations to the training images and plot a few examples img_train = image_augmentation(img_train).numpy() plt.figure(figsize=(16, 12)) for n in range(30): ax = plt.subplot(5, 6, n + 1) plt.imshow(img_train[n].astype(\"uint8\")) plt.title(np.array(class_names)[label_train[n] == True][0]) plt.axis(\"off\") png Define model building & training functions We create a few convenience functions to build a transfer-learning model, compile and train it and unfreeze layers for fine-tuning. def build_model(num_classes, img_size=image_size[0], top_dropout=0.3): \"\"\"Creates a classifier based on pre-trained MobileNetV2. Arguments: num_classes: Int, number of classese to use in the softmax layer. img_size: Int, square size of input images (defaults is 224). top_dropout: Int, value for dropout layer (defaults is 0.3). Returns: Uncompiled Keras model. 
\"\"\" # Create input and pre-processing layers for MobileNetV2 inputs = layers.Input(shape=(img_size, img_size, 3)) x = layers.Rescaling(scale=1.0 / 127.5, offset=-1)(inputs) model = keras.applications.MobileNetV2( include_top=False, weights=\"imagenet\", input_tensor=x ) # Freeze the pretrained weights model.trainable = False # Rebuild top x = layers.GlobalAveragePooling2D(name=\"avg_pool\")(model.output) x = layers.Dropout(top_dropout)(x) outputs = layers.Dense(num_classes, activation=\"softmax\")(x) model = keras.Model(inputs, outputs) print(\"Trainable weights:\", len(model.trainable_weights)) print(\"Non_trainable weights:\", len(model.non_trainable_weights)) return model def compile_and_train( model, training_data, training_labels, metrics=[keras.metrics.AUC(name=\"auc\"), \"acc\"], optimizer=keras.optimizers.Adam(), patience=5, epochs=5, ): \"\"\"Compiles and trains the model. Arguments: model: Uncompiled Keras model. training_data: NumPy Array, trainig data. training_labels: NumPy Array, trainig labels. metrics: Keras/TF metrics, requires at least 'auc' metric (default is `[keras.metrics.AUC(name='auc'), 'acc']`). optimizer: Keras/TF optimizer (defaults is `keras.optimizers.Adam()). patience: Int, epochsfor EarlyStopping patience (defaults is 5). epochs: Int, number of epochs to train (default is 5). Returns: Training history for trained Keras model. \"\"\" stopper = keras.callbacks.EarlyStopping( monitor=\"val_auc\", mode=\"max\", min_delta=0, patience=patience, verbose=1, restore_best_weights=True, ) model.compile(loss=\"categorical_crossentropy\", optimizer=optimizer, metrics=metrics) history = model.fit( x=training_data, y=training_labels, batch_size=batch_size, epochs=epochs, validation_split=0.1, callbacks=[stopper], ) return history def unfreeze(model, block_name, verbose=0): \"\"\"Unfreezes Keras model layers. Arguments: model: Keras model. block_name: Str, layer name for example block_name = 'block4'. Checks if supplied string is in the layer name. verbose: Int, 0 means silent, 1 prints out layers trainability status. Returns: Keras model with all layers after (and including) the specified block_name to trainable, excluding BatchNormalization layers. \"\"\" # Unfreeze from block_name onwards set_trainable = False for layer in model.layers: if block_name in layer.name: set_trainable = True if set_trainable and not isinstance(layer, layers.BatchNormalization): layer.trainable = True if verbose == 1: print(layer.name, \"trainable\") else: if verbose == 1: print(layer.name, \"NOT trainable\") print(\"Trainable weights:\", len(model.trainable_weights)) print(\"Non-trainable weights:\", len(model.non_trainable_weights)) return model Define iterative training function To train a model over several subsample sets we need to create an iterative training function. def train_model(training_data, training_labels): \"\"\"Trains the model as follows: - Trains only the top layers for 10 epochs. - Unfreezes deeper layers. - Train for 20 more epochs. Arguments: training_data: NumPy Array, trainig data. training_labels: NumPy Array, trainig labels. Returns: Model accuracy. 
\"\"\" model = build_model(num_classes) # Compile and train top layers history = compile_and_train( model, training_data, training_labels, metrics=[keras.metrics.AUC(name=\"auc\"), \"acc\"], optimizer=keras.optimizers.Adam(), patience=3, epochs=10, ) # Unfreeze model from block 10 onwards model = unfreeze(model, \"block_10\") # Compile and train for 20 epochs with a lower learning rate fine_tune_epochs = 20 total_epochs = history.epoch[-1] + fine_tune_epochs history_fine = compile_and_train( model, training_data, training_labels, metrics=[keras.metrics.AUC(name=\"auc\"), \"acc\"], optimizer=keras.optimizers.Adam(learning_rate=1e-4), patience=5, epochs=total_epochs, ) # Calculate model accuracy on the test set _, _, acc = model.evaluate(img_test, label_test) return np.round(acc, 4) Train models iteratively Now that we have model building functions and supporting iterative functions we can train the model over several subsample splits. We select the subsample splits as 5%, 10%, 25% and 50% of the downloaded dataset. We pretend that only 50% of the actual data is available at present. We train the model 5 times from scratch at each split and record the accuracy values. Note that this trains 20 models and will take some time. Make sure you have a GPU runtime active. To keep this example lightweight, sample data from a previous training run is provided. def train_iteratively(sample_splits=[0.05, 0.1, 0.25, 0.5], iter_per_split=5): \"\"\"Trains a model iteratively over several sample splits. Arguments: sample_splits: List/NumPy array, contains fractions of the trainins set to train over. iter_per_split: Int, number of times to train a model per sample split. Returns: Training accuracy for all splits and iterations and the number of samples used for training at each split. \"\"\" # Train all the sample models and calculate accuracy train_acc = [] sample_sizes = [] for fraction in sample_splits: print(f\"Fraction split: {fraction}\") # Repeat training 3 times for each sample size sample_accuracy = [] num_samples = int(num_train_samples * fraction) for i in range(iter_per_split): print(f\"Run {i+1} out of {iter_per_split}:\") # Create fractional subsets rand_idx = np.random.randint(num_train_samples, size=num_samples) train_img_subset = img_train[rand_idx, :] train_label_subset = label_train[rand_idx, :] # Train model and calculate accuracy accuracy = train_model(train_img_subset, train_label_subset) print(f\"Accuracy: {accuracy}\") sample_accuracy.append(accuracy) train_acc.append(sample_accuracy) sample_sizes.append(num_samples) return train_acc, sample_sizes # Running the above function produces the following outputs train_acc = [ [0.8202, 0.7466, 0.8011, 0.8447, 0.8229], [0.861, 0.8774, 0.8501, 0.8937, 0.891], [0.891, 0.9237, 0.8856, 0.9101, 0.891], [0.8937, 0.9373, 0.9128, 0.8719, 0.9128], ] sample_sizes = [165, 330, 825, 1651] Learning curve We now plot the learning curve by fitting an exponential curve through the mean accuracy points. We use TF to fit an exponential function through the data. We then extrapolate the learning curve to the predict the accuracy of a model trained on the whole training set. def fit_and_predict(train_acc, sample_sizes, pred_sample_size): \"\"\"Fits a learning curve to model training accuracy results. Arguments: train_acc: List/Numpy Array, training accuracy for all model training splits and iterations. sample_sizes: List/Numpy array, number of samples used for training at each split. 
pred_sample_size: Int, sample size to predict model accuracy based on fitted learning curve. \"\"\" x = sample_sizes mean_acc = [np.mean(i) for i in train_acc] error = [np.std(i) for i in train_acc] # Define mean squared error cost and exponential curve fit functions mse = keras.losses.MeanSquaredError() def exp_func(x, a, b): return a * x ** b # Define variables, learning rate and number of epochs for fitting with TF a = tf.Variable(0.0) b = tf.Variable(0.0) learning_rate = 0.01 training_epochs = 5000 # Fit the exponential function to the data for epoch in range(training_epochs): with tf.GradientTape() as tape: y_pred = exp_func(x, a, b) cost_function = mse(y_pred, mean_acc) # Get gradients and compute adjusted weights gradients = tape.gradient(cost_function, [a, b]) a.assign_sub(gradients[0] * learning_rate) b.assign_sub(gradients[1] * learning_rate) print(f\"Curve fit weights: a = {a.numpy()} and b = {b.numpy()}.\") # We can now estimate the accuracy for pred_sample_size max_acc = exp_func(pred_sample_size, a, b).numpy() # Print predicted x value and append to plot values print(f\"A model accuracy of {max_acc} is predicted for {pred_sample_size} samples.\") x_cont = np.linspace(x[0], pred_sample_size, 100) # Build the plot fig, ax = plt.subplots(figsize=(12, 6)) ax.errorbar(x, mean_acc, yerr=error, fmt=\"o\", label=\"Mean acc & std dev.\") ax.plot(x_cont, exp_func(x_cont, a, b), \"r-\", label=\"Fitted exponential curve.\") ax.set_ylabel(\"Model clasification accuracy.\", fontsize=12) ax.set_xlabel(\"Training sample size.\", fontsize=12) ax.set_xticks(np.append(x, pred_sample_size)) ax.set_yticks(np.append(mean_acc, max_acc)) ax.set_xticklabels(list(np.append(x, pred_sample_size)), rotation=90, fontsize=10) ax.yaxis.set_tick_params(labelsize=10) ax.set_title(\"Learning curve: model accuracy vs sample size.\", fontsize=14) ax.legend(loc=(0.75, 0.75), fontsize=10) ax.xaxis.grid(True) ax.yaxis.grid(True) plt.tight_layout() plt.show() # The mean absolute error (MAE) is calculated for curve fit to see how well # it fits the data. The lower the error the better the fit. mae = keras.losses.MeanAbsoluteError() print(f\"The mae for the curve fit is {mae(mean_acc, exp_func(x, a, b)).numpy()}.\") # We use the whole training set to predict the model accuracy fit_and_predict(train_acc, sample_sizes, pred_sample_size=num_train_samples) Curve fit weights: a = 0.6445642113685608 and b = 0.0480974055826664. A model accuracy of 0.9517360925674438 is predicted for 3303 samples. png The mae for the curve fit is 0.016098812222480774. From the extrapolated curve we can see that 3303 images will yield an estimated accuracy of about 95%. Now, let's use all the data (3303 images) and train the model to see if our prediction was accurate! 
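(Before retraining, a quick arithmetic check: plugging the curve-fit weights printed above back into a * x ** b reproduces the predicted accuracy.)

# Curve fit weights as printed above.
a, b = 0.6445642113685608, 0.0480974055826664
print(a * 3303 ** b)  # ≈ 0.9517, the accuracy predicted for 3303 samples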
# Now train the model with full dataset to get the actual accuracy accuracy = train_model(img_train, label_train) print(f\"A model accuracy of {accuracy} is reached on {num_train_samples} images!\") Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/mobilenet_v2/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_224_no_top.h5 9412608/9406464 [==============================] - 0s 0us/step Trainable weights: 2 Non_trainable weights: 260 Epoch 1/10 47/47 [==============================] - 34s 88ms/step - loss: 1.0756 - auc: 0.8513 - acc: 0.5821 - val_loss: 0.4947 - val_auc: 0.9761 - val_acc: 0.8429 Epoch 2/10 47/47 [==============================] - 3s 67ms/step - loss: 0.5470 - auc: 0.9629 - acc: 0.8022 - val_loss: 0.3746 - val_auc: 0.9854 - val_acc: 0.8882 Epoch 3/10 47/47 [==============================] - 3s 66ms/step - loss: 0.4495 - auc: 0.9744 - acc: 0.8445 - val_loss: 0.3474 - val_auc: 0.9861 - val_acc: 0.8882 Epoch 4/10 47/47 [==============================] - 3s 66ms/step - loss: 0.3914 - auc: 0.9802 - acc: 0.8647 - val_loss: 0.3171 - val_auc: 0.9882 - val_acc: 0.8912 Epoch 5/10 47/47 [==============================] - 3s 73ms/step - loss: 0.3631 - auc: 0.9832 - acc: 0.8681 - val_loss: 0.2983 - val_auc: 0.9895 - val_acc: 0.9003 Epoch 6/10 47/47 [==============================] - 3s 67ms/step - loss: 0.3242 - auc: 0.9867 - acc: 0.8856 - val_loss: 0.2915 - val_auc: 0.9898 - val_acc: 0.9003 Epoch 7/10 47/47 [==============================] - 3s 73ms/step - loss: 0.3016 - auc: 0.9883 - acc: 0.8930 - val_loss: 0.2912 - val_auc: 0.9895 - val_acc: 0.9033 Epoch 8/10 47/47 [==============================] - 3s 66ms/step - loss: 0.2765 - auc: 0.9906 - acc: 0.9017 - val_loss: 0.2824 - val_auc: 0.9900 - val_acc: 0.9033 Epoch 9/10 47/47 [==============================] - 3s 66ms/step - loss: 0.2721 - auc: 0.9907 - acc: 0.9028 - val_loss: 0.2804 - val_auc: 0.9899 - val_acc: 0.9033 Epoch 10/10 47/47 [==============================] - 3s 65ms/step - loss: 0.2564 - auc: 0.9914 - acc: 0.9098 - val_loss: 0.2913 - val_auc: 0.9891 - val_acc: 0.8973 Trainable weights: 24 Non-trainable weights: 238 Epoch 1/29 47/47 [==============================] - 9s 112ms/step - loss: 0.3316 - auc: 0.9850 - acc: 0.8789 - val_loss: 0.2392 - val_auc: 0.9915 - val_acc: 0.9033 Epoch 2/29 47/47 [==============================] - 4s 93ms/step - loss: 0.1497 - auc: 0.9966 - acc: 0.9478 - val_loss: 0.2797 - val_auc: 0.9906 - val_acc: 0.8731 Epoch 3/29 47/47 [==============================] - 4s 94ms/step - loss: 0.0981 - auc: 0.9982 - acc: 0.9640 - val_loss: 0.1795 - val_auc: 0.9960 - val_acc: 0.9366 Epoch 4/29 47/47 [==============================] - 4s 94ms/step - loss: 0.0652 - auc: 0.9990 - acc: 0.9788 - val_loss: 0.2161 - val_auc: 0.9924 - val_acc: 0.9275 Epoch 5/29 47/47 [==============================] - 4s 94ms/step - loss: 0.0327 - auc: 0.9999 - acc: 0.9896 - val_loss: 0.2161 - val_auc: 0.9919 - val_acc: 0.9517 Epoch 6/29 47/47 [==============================] - 4s 94ms/step - loss: 0.0269 - auc: 0.9999 - acc: 0.9923 - val_loss: 0.2485 - val_auc: 0.9894 - val_acc: 0.9335 Epoch 7/29 47/47 [==============================] - 4s 94ms/step - loss: 0.0174 - auc: 0.9998 - acc: 0.9956 - val_loss: 0.2692 - val_auc: 0.9871 - val_acc: 0.9215 Epoch 8/29 47/47 [==============================] - 4s 94ms/step - loss: 0.0253 - auc: 0.9999 - acc: 0.9913 - val_loss: 0.2645 - val_auc: 0.9864 - val_acc: 0.9275 Restoring model weights from the end of the best epoch. 
Epoch 00008: early stopping 12/12 [==============================] - 1s 39ms/step - loss: 0.1378 - auc: 0.9975 - acc: 0.9537 A model accuracy of 0.9537 is reached on 3303 images! Conclusion We see that a model accuracy of about 94-96%* is reached using 3303 images. This is quite close to our estimate! Even though we used only 50% of the dataset (1651 images) we were able to model the training behaviour of our model and predict the model accuracy for a given amount of images. This same methodology can be used to predict the amount of images needed to reach a desired accuracy. This is very useful when a smaller set of data is available, and it has been shown that convergence on a deep learning model is possible, but more images are needed. The image count prediction can be used to plan and budget for further image collection initiatives. This example shows how to use Keras callbacks to evaluate and export non-TensorFlow based metrics. Introduction Keras callbacks allow for the execution of arbitrary code at various stages of the Keras training process. While Keras offers first-class support for metric evaluation, Keras metrics may only rely on TensorFlow code internally. While there are TensorFlow implementations of many metrics online, some metrics are implemented using NumPy or another Python-based numerical computation library. By performing metric evaluation inside of a Keras callback, we can leverage any existing metric, and ultimately export the result to TensorBoard. Jaccard score metric This example makes use of a sklearn metric, sklearn.metrics.jaccard_score(), and writes the result to TensorBoard using the tf.summary API. This template can be modified slightly to make it work with any existing sklearn metric. import tensorflow as tf import tensorflow.keras as keras import tensorflow.keras.layers as layers from sklearn.metrics import jaccard_score import numpy as np import os class JaccardScoreCallback(keras.callbacks.Callback): \"\"\"Computes the Jaccard score and logs the results to TensorBoard.\"\"\" def __init__(self, model, x_test, y_test, log_dir): self.model = model self.x_test = x_test self.y_test = y_test self.keras_metric = tf.keras.metrics.Mean(\"jaccard_score\") self.epoch = 0 self.summary_writer = tf.summary.create_file_writer( os.path.join(log_dir, model.name) ) def on_epoch_end(self, batch, logs=None): self.epoch += 1 self.keras_metric.reset_state() predictions = self.model.predict(self.x_test) jaccard_value = jaccard_score( np.argmax(predictions, axis=-1), self.y_test, average=None ) self.keras_metric.update_state(jaccard_value) self._write_metric( self.keras_metric.name, self.keras_metric.result().numpy().astype(float) ) def _write_metric(self, name, value): with self.summary_writer.as_default(): tf.summary.scalar( name, value, step=self.epoch, ) self.summary_writer.flush() Sample usage Let's test our JaccardScoreCallback class with a Keras model. # Model / data parameters num_classes = 10 input_shape = (28, 28, 1) # The data, split between train and test sets (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() # Scale images to the [0, 1] range x_train = x_train.astype(\"float32\") / 255 x_test = x_test.astype(\"float32\") / 255 # Make sure images have shape (28, 28, 1) x_train = np.expand_dims(x_train, -1) x_test = np.expand_dims(x_test, -1) print(\"x_train shape:\", x_train.shape) print(x_train.shape[0], \"train samples\") print(x_test.shape[0], \"test samples\") # Convert class vectors to binary class matrices. 
y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) model = keras.Sequential( [ keras.Input(shape=input_shape), layers.Conv2D(32, kernel_size=(3, 3), activation=\"relu\"), layers.MaxPooling2D(pool_size=(2, 2)), layers.Conv2D(64, kernel_size=(3, 3), activation=\"relu\"), layers.MaxPooling2D(pool_size=(2, 2)), layers.Flatten(), layers.Dropout(0.5), layers.Dense(num_classes, activation=\"softmax\"), ] ) model.summary() batch_size = 128 epochs = 15 model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]) callbacks = [JaccardScoreCallback(model, x_test, np.argmax(y_test, axis=-1), \"logs\")] model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1, callbacks=callbacks, ) x_train shape: (60000, 28, 28, 1) 60000 train samples 10000 test samples Model: \"sequential\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 13, 13, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 11, 11, 64) 18496 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64) 0 _________________________________________________________________ flatten (Flatten) (None, 1600) 0 _________________________________________________________________ dropout (Dropout) (None, 1600) 0 _________________________________________________________________ dense (Dense) (None, 10) 16010 ================================================================= Total params: 34,826 Trainable params: 34,826 Non-trainable params: 0 _________________________________________________________________ Epoch 1/15 422/422 [==============================] - 6s 14ms/step - loss: 0.3661 - accuracy: 0.8895 - val_loss: 0.0823 - val_accuracy: 0.9765 Epoch 2/15 422/422 [==============================] - 6s 14ms/step - loss: 0.1119 - accuracy: 0.9653 - val_loss: 0.0620 - val_accuracy: 0.9823 Epoch 3/15 422/422 [==============================] - 6s 14ms/step - loss: 0.0841 - accuracy: 0.9742 - val_loss: 0.0488 - val_accuracy: 0.9873 Epoch 4/15 422/422 [==============================] - 6s 14ms/step - loss: 0.0696 - accuracy: 0.9787 - val_loss: 0.0404 - val_accuracy: 0.9888 Epoch 5/15 422/422 [==============================] - 6s 14ms/step - loss: 0.0615 - accuracy: 0.9813 - val_loss: 0.0406 - val_accuracy: 0.9897 Epoch 6/15 422/422 [==============================] - 6s 13ms/step - loss: 0.0565 - accuracy: 0.9826 - val_loss: 0.0373 - val_accuracy: 0.9900 Epoch 7/15 422/422 [==============================] - 6s 14ms/step - loss: 0.0520 - accuracy: 0.9833 - val_loss: 0.0369 - val_accuracy: 0.9898 Epoch 8/15 422/422 [==============================] - 6s 14ms/step - loss: 0.0488 - accuracy: 0.9851 - val_loss: 0.0353 - val_accuracy: 0.9905 Epoch 9/15 422/422 [==============================] - 6s 14ms/step - loss: 0.0440 - accuracy: 0.9861 - val_loss: 0.0347 - val_accuracy: 0.9893 Epoch 10/15 422/422 [==============================] - 6s 14ms/step - loss: 0.0424 - accuracy: 0.9871 - val_loss: 0.0294 - val_accuracy: 0.9907 Epoch 11/15 422/422 [==============================] - 6s 14ms/step - loss: 0.0402 - accuracy: 0.9874 - val_loss: 0.0340 - val_accuracy: 0.9903 Epoch 12/15 422/422 
[==============================] - 6s 13ms/step - loss: 0.0382 - accuracy: 0.9878 - val_loss: 0.0290 - val_accuracy: 0.9917 Epoch 13/15 422/422 [==============================] - 6s 14ms/step - loss: 0.0358 - accuracy: 0.9886 - val_loss: 0.0286 - val_accuracy: 0.9923 Epoch 14/15 422/422 [==============================] - 6s 13ms/step - loss: 0.0349 - accuracy: 0.9885 - val_loss: 0.0282 - val_accuracy: 0.9918 Epoch 15/15 422/422 [==============================] - 6s 14ms/step - loss: 0.0323 - accuracy: 0.9899 - val_loss: 0.0283 - val_accuracy: 0.9922 If you now launch a TensorBoard instance using tensorboard --logdir=logs, you will see the jaccard_score metric alongside any other exported metrics! TensorBoard Jaccard Score Conclusion Many ML practitioners and researchers rely on metrics that may not yet have a TensorFlow implementation. Keras users can still leverage the wide variety of existing metric implementations in other frameworks by using a Keras callback. These metrics can be exported, viewed and analyzed in TensorBoard like any other metric. Loading TFRecords for computer vision models. Introduction + Set Up TFRecords store a sequence of binary records, read linearly. They are a useful format for storing data because they can be read efficiently. Learn more about TFRecords here. We'll explore how we can easily load in TFRecords for our melanoma classifier. import tensorflow as tf from functools import partial import matplotlib.pyplot as plt try: tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() print(\"Device:\", tpu.master()) strategy = tf.distribute.TPUStrategy(tpu) except: strategy = tf.distribute.get_strategy() print(\"Number of replicas:\", strategy.num_replicas_in_sync) Number of replicas: 8 We want a bigger batch size as our data is not balanced. AUTOTUNE = tf.data.AUTOTUNE GCS_PATH = \"gs://kds-b38ce1b823c3ae623f5691483dbaa0f0363f04b0d6a90b63cf69946e\" BATCH_SIZE = 64 IMAGE_SIZE = [1024, 1024] Load the data FILENAMES = tf.io.gfile.glob(GCS_PATH + \"/tfrecords/train*.tfrec\") split_ind = int(0.9 * len(FILENAMES)) TRAINING_FILENAMES, VALID_FILENAMES = FILENAMES[:split_ind], FILENAMES[split_ind:] TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + \"/tfrecords/test*.tfrec\") print(\"Train TFRecord Files:\", len(TRAINING_FILENAMES)) print(\"Validation TFRecord Files:\", len(VALID_FILENAMES)) print(\"Test TFRecord Files:\", len(TEST_FILENAMES)) Train TFRecord Files: 14 Validation TFRecord Files: 2 Test TFRecord Files: 16 Decoding the data The images have to be converted to tensors so that they will be valid inputs to our model. As images use the RGB color space, we specify 3 channels. We also reshape our data so that all of the images will be the same shape. def decode_image(image): image = tf.image.decode_jpeg(image, channels=3) image = tf.cast(image, tf.float32) image = tf.reshape(image, [*IMAGE_SIZE, 3]) return image As we load in our data, we need both our X and our Y. The X is our image; the model will find features and patterns in our image dataset. We want to predict Y, the probability that the lesion in the image is malignant. We will go through our TFRecords and parse out the image and the target values.
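To make the parsing schema below easier to follow, here is a rough sketch of how one (image, target) pair could have been serialized when these TFRecords were created. The writer side happened upstream of this example, so treat the helper below as an assumption that simply mirrors the \"image\" bytes feature and integer \"target\" feature we are about to parse (it reuses the tf import from above).

def serialize_example(jpeg_bytes, target):
    # Wrap the raw JPEG bytes and the integer label in a tf.train.Example whose
    # feature names match the parsing schema used below.
    feature = {
        \"image\": tf.train.Feature(bytes_list=tf.train.BytesList(value=[jpeg_bytes])),
        \"target\": tf.train.Feature(int64_list=tf.train.Int64List(value=[target])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()

# Hypothetical usage: write a single all-black example to a local TFRecord file.
with tf.io.TFRecordWriter(\"sample.tfrec\") as writer:
    jpeg_bytes = tf.io.encode_jpeg(tf.zeros([1024, 1024, 3], dtype=tf.uint8)).numpy()
    writer.write(serialize_example(jpeg_bytes, target=0))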
def read_tfrecord(example, labeled): tfrecord_format = ( { \"image\": tf.io.FixedLenFeature([], tf.string), \"target\": tf.io.FixedLenFeature([], tf.int64), } if labeled else {\"image\": tf.io.FixedLenFeature([], tf.string),} ) example = tf.io.parse_single_example(example, tfrecord_format) image = decode_image(example[\"image\"]) if labeled: label = tf.cast(example[\"target\"], tf.int32) return image, label return image Define loading methods Our dataset is not ordered in any meaningful way, so the order can be ignored when loading our dataset. By ignoring the order and reading files as soon as they come in, it will take a shorter time to load the data. def load_dataset(filenames, labeled=True): ignore_order = tf.data.Options() ignore_order.experimental_deterministic = False # disable order, increase speed dataset = tf.data.TFRecordDataset( filenames ) # automatically interleaves reads from multiple files dataset = dataset.with_options( ignore_order ) # uses data as soon as it streams in, rather than in its original order dataset = dataset.map( partial(read_tfrecord, labeled=labeled), num_parallel_calls=AUTOTUNE ) # returns a dataset of (image, label) pairs if labeled=True or just images if labeled=False return dataset We define the following function to get our different datasets. def get_dataset(filenames, labeled=True): dataset = load_dataset(filenames, labeled=labeled) dataset = dataset.shuffle(2048) dataset = dataset.prefetch(buffer_size=AUTOTUNE) dataset = dataset.batch(BATCH_SIZE) return dataset Visualize input images train_dataset = get_dataset(TRAINING_FILENAMES) valid_dataset = get_dataset(VALID_FILENAMES) test_dataset = get_dataset(TEST_FILENAMES, labeled=False) image_batch, label_batch = next(iter(train_dataset)) def show_batch(image_batch, label_batch): plt.figure(figsize=(10, 10)) for n in range(25): ax = plt.subplot(5, 5, n + 1) plt.imshow(image_batch[n] / 255.0) if label_batch[n]: plt.title(\"MALIGNANT\") else: plt.title(\"BENIGN\") plt.axis(\"off\") show_batch(image_batch.numpy(), label_batch.numpy()) png Building our model Define callbacks The following learning rate schedule allows the model to lower the learning rate as training progresses. We can use callbacks to stop training when there are no improvements in the model. At the end of the training process, the model will restore the weights of its best iteration. initial_learning_rate = 0.01 lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay( initial_learning_rate, decay_steps=20, decay_rate=0.96, staircase=True ) checkpoint_cb = tf.keras.callbacks.ModelCheckpoint( \"melanoma_model.h5\", save_best_only=True ) early_stopping_cb = tf.keras.callbacks.EarlyStopping( patience=10, restore_best_weights=True ) Build our base model Transfer learning is a great way to reap the benefits of a well-trained model without having to train the model ourselves. For this notebook, we want to import the Xception model. A more in-depth analysis of transfer learning can be found here. We do not want our metric to be accuracy because our data is imbalanced. For our example, we will be looking at the area under the ROC curve (AUC).
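As a quick illustration of why accuracy can be misleading here, consider a toy batch where only one sample out of ten is malignant: a model that predicts a low probability for everything still scores 90% accuracy, while AUC shows that it has no ability to rank the malignant case higher. This is a sketch with made-up numbers, separate from the melanoma pipeline itself.

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]  # one malignant sample out of ten
y_pred = [0.1] * 10  # the model predicts \"probably benign\" for every image

accuracy = tf.keras.metrics.BinaryAccuracy()
accuracy.update_state(y_true, y_pred)
auc = tf.keras.metrics.AUC()
auc.update_state(y_true, y_pred)
print(\"accuracy:\", accuracy.result().numpy())  # 0.9 despite missing the malignant case
print(\"auc:\", auc.result().numpy())  # ~0.5, i.e. no better than chance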
def make_model(): base_model = tf.keras.applications.Xception( input_shape=(*IMAGE_SIZE, 3), include_top=False, weights=\"imagenet\" ) base_model.trainable = False inputs = tf.keras.layers.Input([*IMAGE_SIZE, 3]) x = tf.keras.applications.xception.preprocess_input(inputs) x = base_model(x) x = tf.keras.layers.GlobalAveragePooling2D()(x) x = tf.keras.layers.Dense(8, activation=\"relu\")(x) x = tf.keras.layers.Dropout(0.7)(x) outputs = tf.keras.layers.Dense(1, activation=\"sigmoid\")(x) model = tf.keras.Model(inputs=inputs, outputs=outputs) model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule), loss=\"binary_crossentropy\", metrics=tf.keras.metrics.AUC(name=\"auc\"), ) return model Train the model with strategy.scope(): model = make_model() history = model.fit( train_dataset, epochs=2, validation_data=valid_dataset, callbacks=[checkpoint_cb, early_stopping_cb], ) Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/xception/xception_weights_tf_dim_ordering_tf_kernels_notop.h5 83689472/83683744 [==============================] - 3s 0us/step Epoch 1/2 454/454 [==============================] - 525s 1s/step - loss: 0.1895 - auc: 0.5841 - val_loss: 0.0825 - val_auc: 0.8109 Epoch 2/2 454/454 [==============================] - 118s 260ms/step - loss: 0.1063 - auc: 0.5994 - val_loss: 0.0861 - val_auc: 0.8336 Predict results We'll use our model to predict results for our test dataset images. Values closer to 0 are more likely to be benign and values closer to 1 are more likely to be malignant. def show_batch_predictions(image_batch): plt.figure(figsize=(10, 10)) for n in range(25): ax = plt.subplot(5, 5, n + 1) plt.imshow(image_batch[n] / 255.0) img_array = tf.expand_dims(image_batch[n], axis=0) plt.title(model.predict(img_array)[0]) plt.axis(\"off\") image_batch = next(iter(test_dataset)) show_batch_predictions(image_batch) png Four simple tips to help you debug your Keras code. Introduction It's generally possible to do almost anything in Keras without writing code per se: whether you're implementing a new type of GAN or the latest convnet architecture for image segmentation, you can usually stick to calling built-in methods. Because all built-in methods do extensive input validation checks, you will have little to no debugging to do. A Functional API model made entirely of built-in layers will work on first try -- if you can compile it, it will run. However, sometimes, you will need to dive deeper and write your own code. Here are some common examples: Creating a new Layer subclass. Creating a custom Metric subclass. Implementing a custom train_step on a Model. This document provides a few simple tips to help you navigate debugging in these situations. Tip 1: test each part before you test the whole If you've created any object that has a chance of not working as expected, don't just drop it in your end-to-end process and watch sparks fly. Rather, test your custom object in isolation first. This may seem obvious -- but you'd be surprised how often people don't start with this. If you write a custom layer, don't call fit() on your entire model just yet. Call your layer on some test data first. If you write a custom metric, start by printing its output for some reference inputs. Here's a simple example. 
Let's write a custom layer with a bug in it: import tensorflow as tf from tensorflow.keras import layers class MyAntirectifier(layers.Layer): def build(self, input_shape): output_dim = input_shape[-1] self.kernel = self.add_weight( shape=(output_dim * 2, output_dim), initializer=\"he_normal\", name=\"kernel\", trainable=True, ) def call(self, inputs): # Take the positive part of the input pos = tf.nn.relu(inputs) # Take the negative part of the input neg = tf.nn.relu(-inputs) # Concatenate the positive and negative parts concatenated = tf.concat([pos, neg], axis=0) # Project the concatenation down to the same dimensionality as the input return tf.matmul(concatenated, self.kernel) Now, rather than using it in an end-to-end model directly, let's try to call the layer on some test data: x = tf.random.normal(shape=(2, 5)) y = MyAntirectifier()(x) We get the following error: ... 1 x = tf.random.normal(shape=(2, 5)) ----> 2 y = MyAntirectifier()(x) ... 17 neg = tf.nn.relu(-inputs) 18 concatenated = tf.concat([pos, neg], axis=0) ---> 19 return tf.matmul(concatenated, self.kernel) ... InvalidArgumentError: Matrix size-incompatible: In[0]: [4,5], In[1]: [10,5] [Op:MatMul] Looks like our input tensor in the matmul op may have an incorrect shape. Let's add a print statement to check the actual shapes: class MyAntirectifier(layers.Layer): def build(self, input_shape): output_dim = input_shape[-1] self.kernel = self.add_weight( shape=(output_dim * 2, output_dim), initializer=\"he_normal\", name=\"kernel\", trainable=True, ) def call(self, inputs): pos = tf.nn.relu(inputs) neg = tf.nn.relu(-inputs) print(\"pos.shape:\", pos.shape) print(\"neg.shape:\", neg.shape) concatenated = tf.concat([pos, neg], axis=0) print(\"concatenated.shape:\", concatenated.shape) print(\"kernel.shape:\", self.kernel.shape) return tf.matmul(concatenated, self.kernel) We get the following: pos.shape: (2, 5) neg.shape: (2, 5) concatenated.shape: (4, 5) kernel.shape: (10, 5) Turns out we had the wrong axis for the concat op! We should be concatenating neg and pos along the feature axis 1, not the batch axis 0. Here's the correct version: class MyAntirectifier(layers.Layer): def build(self, input_shape): output_dim = input_shape[-1] self.kernel = self.add_weight( shape=(output_dim * 2, output_dim), initializer=\"he_normal\", name=\"kernel\", trainable=True, ) def call(self, inputs): pos = tf.nn.relu(inputs) neg = tf.nn.relu(-inputs) print(\"pos.shape:\", pos.shape) print(\"neg.shape:\", neg.shape) concatenated = tf.concat([pos, neg], axis=1) print(\"concatenated.shape:\", concatenated.shape) print(\"kernel.shape:\", self.kernel.shape) return tf.matmul(concatenated, self.kernel) Now our code works fine: x = tf.random.normal(shape=(2, 5)) y = MyAntirectifier()(x) pos.shape: (2, 5) neg.shape: (2, 5) concatenated.shape: (2, 10) kernel.shape: (10, 5) Tip 2: use model.summary() and plot_model() to check layer output shapes If you're working with complex network topologies, you're going to need a way to visualize how your layers are connected and how they transform the data that passes through them. Here's an example.
Consider this model with three inputs and two outputs (lifted from the Functional API guide): from tensorflow import keras num_tags = 12 # Number of unique issue tags num_words = 10000 # Size of vocabulary obtained when preprocessing text data num_departments = 4 # Number of departments for predictions title_input = keras.Input( shape=(None,), name=\"title\" ) # Variable-length sequence of ints body_input = keras.Input(shape=(None,), name=\"body\") # Variable-length sequence of ints tags_input = keras.Input( shape=(num_tags,), name=\"tags\" ) # Binary vectors of size `num_tags` # Embed each word in the title into a 64-dimensional vector title_features = layers.Embedding(num_words, 64)(title_input) # Embed each word in the text into a 64-dimensional vector body_features = layers.Embedding(num_words, 64)(body_input) # Reduce sequence of embedded words in the title into a single 128-dimensional vector title_features = layers.LSTM(128)(title_features) # Reduce sequence of embedded words in the body into a single 32-dimensional vector body_features = layers.LSTM(32)(body_features) # Merge all available features into a single large vector via concatenation x = layers.concatenate([title_features, body_features, tags_input]) # Stick a logistic regression for priority prediction on top of the features priority_pred = layers.Dense(1, name=\"priority\")(x) # Stick a department classifier on top of the features department_pred = layers.Dense(num_departments, name=\"department\")(x) # Instantiate an end-to-end model predicting both priority and department model = keras.Model( inputs=[title_input, body_input, tags_input], outputs=[priority_pred, department_pred], ) Calling summary() can help you check the output shape of each layer: model.summary() Model: \"functional_1\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== title (InputLayer) [(None, None)] 0 __________________________________________________________________________________________________ body (InputLayer) [(None, None)] 0 __________________________________________________________________________________________________ embedding (Embedding) (None, None, 64) 640000 title[0][0] __________________________________________________________________________________________________ embedding_1 (Embedding) (None, None, 64) 640000 body[0][0] __________________________________________________________________________________________________ lstm (LSTM) (None, 128) 98816 embedding[0][0] __________________________________________________________________________________________________ lstm_1 (LSTM) (None, 32) 12416 embedding_1[0][0] __________________________________________________________________________________________________ tags (InputLayer) [(None, 12)] 0 __________________________________________________________________________________________________ concatenate (Concatenate) (None, 172) 0 lstm[0][0] lstm_1[0][0] tags[0][0] __________________________________________________________________________________________________ priority (Dense) (None, 1) 173 concatenate[0][0] __________________________________________________________________________________________________ department (Dense) (None, 4) 692 concatenate[0][0] ================================================================================================== Total params: 1,392,097 Trainable 
params: 1,392,097 Non-trainable params: 0 __________________________________________________________________________________________________ You can also visualize the entire network topology alongside output shapes using plot_model: keras.utils.plot_model(model, show_shapes=True) png With this plot, any connectivity-level error becomes immediately obvious. Tip 3: to debug what happens during fit(), use run_eagerly=True The fit() method is fast: it runs a well-optimized, fully-compiled computation graph. That's great for performance, but it also means that the code you're executing isn't the Python code you've written. This can be problematic when debugging. As you may recall, Python is slow -- so we use it as a staging language, not as an execution language. Thankfully, there's an easy way to run your code in \"debug mode\", fully eagerly: pass run_eagerly=True to compile(). Your call to fit() will now get executed line by line, without any optimization. It's slower, but it makes it possible to print the value of intermediate tensors, or to use a Python debugger. Great for debugging. Here's a basic example: let's write a really simple model with a custom train_step. Our model just implements gradient descent, but instead of first-order gradients, it uses a combination of first-order and second-order gradients. Pretty trivial so far. Can you spot what we're doing wrong? class MyModel(keras.Model): def train_step(self, data): inputs, targets = data trainable_vars = self.trainable_variables with tf.GradientTape() as tape2: with tf.GradientTape() as tape1: preds = self(inputs, training=True) # Forward pass # Compute the loss value # (the loss function is configured in `compile()`) loss = self.compiled_loss(targets, preds) # Compute first-order gradients dl_dw = tape1.gradient(loss, trainable_vars) # Compute second-order gradients d2l_dw2 = tape2.gradient(dl_dw, trainable_vars) # Combine first-order and second-order gradients grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)] # Update weights self.optimizer.apply_gradients(zip(grads, trainable_vars)) # Update metrics (includes the metric that tracks the loss) self.compiled_metrics.update_state(targets, preds) # Return a dict mapping metric names to current value return {m.name: m.result() for m in self.metrics} Let's train a one-layer model on MNIST with this custom training loop. We pick, somewhat at random, a batch size of 1024 and a learning rate of 0.1. The general idea being to use larger batches and a larger learning rate than usual, since our \"improved\" gradients should lead us to quicker convergence. 
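Before launching that run, and in the spirit of Tip 1, you can convince yourself that the nested GradientTape pattern really produces second-order gradients by checking it on a function whose derivatives you know. A small standalone sanity check, unrelated to the MNIST model itself:

x = tf.Variable(3.0)
with tf.GradientTape() as tape2:
    with tf.GradientTape() as tape1:
        y = x ** 3  # y = x^3
    dy_dx = tape1.gradient(y, x)  # 3 * x^2 -> 27.0
d2y_dx2 = tape2.gradient(dy_dx, x)  # 6 * x -> 18.0
print(dy_dx.numpy(), d2y_dx2.numpy())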
import numpy as np # Construct an instance of MyModel def get_model(): inputs = keras.Input(shape=(784,)) intermediate = layers.Dense(256, activation=\"relu\")(inputs) outputs = layers.Dense(10, activation=\"softmax\")(intermediate) model = MyModel(inputs, outputs) return model # Prepare data (x_train, y_train), _ = keras.datasets.mnist.load_data() x_train = np.reshape(x_train, (-1, 784)) / 255 model = get_model() model.compile( optimizer=keras.optimizers.SGD(learning_rate=1e-2), loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"], ) model.fit(x_train, y_train, epochs=3, batch_size=1024, validation_split=0.1) Epoch 1/3 53/53 [==============================] - 1s 15ms/step - loss: 2.2960 - accuracy: 0.1580 - val_loss: 2.3071 - val_accuracy: 0.0963 Epoch 2/3 53/53 [==============================] - 1s 13ms/step - loss: 2.3246 - accuracy: 0.0995 - val_loss: 2.3454 - val_accuracy: 0.0960 Epoch 3/3 53/53 [==============================] - 1s 12ms/step - loss: 2.3578 - accuracy: 0.0995 - val_loss: 2.3767 - val_accuracy: 0.0960 Oh no, it doesn't converge! Something is not working as planned. Time for some step-by-step printing of what's going on with our gradients. We add various print statements in the train_step method, and we make sure to pass run_eagerly=True to compile() to run our code step-by-step, eagerly. class MyModel(keras.Model): def train_step(self, data): print() print(\"----Start of step: %d\" % (self.step_counter,)) self.step_counter += 1 inputs, targets = data trainable_vars = self.trainable_variables with tf.GradientTape() as tape2: with tf.GradientTape() as tape1: preds = self(inputs, training=True) # Forward pass # Compute the loss value # (the loss function is configured in `compile()`) loss = self.compiled_loss(targets, preds) # Compute first-order gradients dl_dw = tape1.gradient(loss, trainable_vars) # Compute second-order gradients d2l_dw2 = tape2.gradient(dl_dw, trainable_vars) print(\"Max of dl_dw[0]: %.4f\" % tf.reduce_max(dl_dw[0])) print(\"Min of dl_dw[0]: %.4f\" % tf.reduce_min(dl_dw[0])) print(\"Mean of dl_dw[0]: %.4f\" % tf.reduce_mean(dl_dw[0])) print(\"-\") print(\"Max of d2l_dw2[0]: %.4f\" % tf.reduce_max(d2l_dw2[0])) print(\"Min of d2l_dw2[0]: %.4f\" % tf.reduce_min(d2l_dw2[0])) print(\"Mean of d2l_dw2[0]: %.4f\" % tf.reduce_mean(d2l_dw2[0])) # Combine first-order and second-order gradients grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)] # Update weights self.optimizer.apply_gradients(zip(grads, trainable_vars)) # Update metrics (includes the metric that tracks the loss) self.compiled_metrics.update_state(targets, preds) # Return a dict mapping metric names to current value return {m.name: m.result() for m in self.metrics} model = get_model() model.compile( optimizer=keras.optimizers.SGD(learning_rate=1e-2), loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"], run_eagerly=True, ) model.step_counter = 0 # We pass epochs=1 and steps_per_epoch=10 to only run 10 steps of training. 
model.fit(x_train, y_train, epochs=1, batch_size=1024, verbose=0, steps_per_epoch=10) ----Start of step: 0 Max of dl_dw[0]: 0.0236 Min of dl_dw[0]: -0.0198 Mean of dl_dw[0]: 0.0001 - Max of d2l_dw2[0]: 2.6148 Min of d2l_dw2[0]: -1.8798 Mean of d2l_dw2[0]: 0.0401 ----Start of step: 1 Max of dl_dw[0]: 0.0611 Min of dl_dw[0]: -0.0233 Mean of dl_dw[0]: 0.0009 - Max of d2l_dw2[0]: 8.3185 Min of d2l_dw2[0]: -4.0696 Mean of d2l_dw2[0]: 0.1708 ----Start of step: 2 Max of dl_dw[0]: 0.0528 Min of dl_dw[0]: -0.0200 Mean of dl_dw[0]: 0.0010 - Max of d2l_dw2[0]: 3.4744 Min of d2l_dw2[0]: -3.1926 Mean of d2l_dw2[0]: 0.0559 ----Start of step: 3 Max of dl_dw[0]: 0.0983 Min of dl_dw[0]: -0.0174 Mean of dl_dw[0]: 0.0014 - Max of d2l_dw2[0]: 2.2682 Min of d2l_dw2[0]: -0.7935 Mean of d2l_dw2[0]: 0.0253 ----Start of step: 4 Max of dl_dw[0]: 0.0732 Min of dl_dw[0]: -0.0125 Mean of dl_dw[0]: 0.0009 - Max of d2l_dw2[0]: 5.1099 Min of d2l_dw2[0]: -2.4236 Mean of d2l_dw2[0]: 0.0860 ----Start of step: 5 Max of dl_dw[0]: 0.1309 Min of dl_dw[0]: -0.0103 Mean of dl_dw[0]: 0.0007 - Max of d2l_dw2[0]: 5.1275 Min of d2l_dw2[0]: -0.6684 Mean of d2l_dw2[0]: 0.0349 ----Start of step: 6 Max of dl_dw[0]: 0.0484 Min of dl_dw[0]: -0.0128 Mean of dl_dw[0]: 0.0001 - Max of d2l_dw2[0]: 5.3465 Min of d2l_dw2[0]: -0.2145 Mean of d2l_dw2[0]: 0.0618 ----Start of step: 7 Max of dl_dw[0]: 0.0049 Min of dl_dw[0]: -0.0093 Mean of dl_dw[0]: -0.0001 - Max of d2l_dw2[0]: 0.2465 Min of d2l_dw2[0]: -0.0313 Mean of d2l_dw2[0]: 0.0075 ----Start of step: 8 Max of dl_dw[0]: 0.0050 Min of dl_dw[0]: -0.0120 Mean of dl_dw[0]: -0.0001 - Max of d2l_dw2[0]: 0.1978 Min of d2l_dw2[0]: -0.0291 Mean of d2l_dw2[0]: 0.0063 ----Start of step: 9 Max of dl_dw[0]: 0.0050 Min of dl_dw[0]: -0.0125 Mean of dl_dw[0]: -0.0001 - Max of d2l_dw2[0]: 0.1594 Min of d2l_dw2[0]: -0.0238 Mean of d2l_dw2[0]: 0.0055 What did we learn? The first order and second order gradients can have values that differ by orders of magnitudes. Sometimes, they may not even have the same sign. Their values can vary greatly at each step. This leads us to an obvious idea: let's normalize the gradients before combining them. 
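One way to do that is tf.math.l2_normalize, which rescales a tensor to unit L2 norm, so the first-order and second-order gradients end up on a comparable scale before they are averaged; the updated train_step below uses exactly this. First, a tiny standalone check with toy numbers (not actual gradients):

small = tf.constant([0.001, -0.002, 0.003])
large = tf.constant([5.0, -8.0, 2.0])
# Both tensors come out with (approximately) unit norm, so a weighted sum of the
# two is no longer dominated by whichever tensor happens to be larger.
print(tf.norm(tf.math.l2_normalize(small)).numpy())  # ~1.0
print(tf.norm(tf.math.l2_normalize(large)).numpy())  # ~1.0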
class MyModel(keras.Model): def train_step(self, data): inputs, targets = data trainable_vars = self.trainable_variables with tf.GradientTape() as tape2: with tf.GradientTape() as tape1: preds = self(inputs, training=True) # Forward pass # Compute the loss value # (the loss function is configured in `compile()`) loss = self.compiled_loss(targets, preds) # Compute first-order gradients dl_dw = tape1.gradient(loss, trainable_vars) # Compute second-order gradients d2l_dw2 = tape2.gradient(dl_dw, trainable_vars) dl_dw = [tf.math.l2_normalize(w) for w in dl_dw] d2l_dw2 = [tf.math.l2_normalize(w) for w in d2l_dw2] # Combine first-order and second-order gradients grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)] # Update weights self.optimizer.apply_gradients(zip(grads, trainable_vars)) # Update metrics (includes the metric that tracks the loss) self.compiled_metrics.update_state(targets, preds) # Return a dict mapping metric names to current value return {m.name: m.result() for m in self.metrics} model = get_model() model.compile( optimizer=keras.optimizers.SGD(learning_rate=1e-2), loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"], ) model.fit(x_train, y_train, epochs=5, batch_size=1024, validation_split=0.1) Epoch 1/5 53/53 [==============================] - 1s 15ms/step - loss: 2.1680 - accuracy: 0.2796 - val_loss: 2.0063 - val_accuracy: 0.4688 Epoch 2/5 53/53 [==============================] - 1s 13ms/step - loss: 1.9071 - accuracy: 0.5292 - val_loss: 1.7729 - val_accuracy: 0.6312 Epoch 3/5 53/53 [==============================] - 1s 13ms/step - loss: 1.7098 - accuracy: 0.6197 - val_loss: 1.5966 - val_accuracy: 0.6785 Epoch 4/5 53/53 [==============================] - 1s 13ms/step - loss: 1.5686 - accuracy: 0.6434 - val_loss: 1.4748 - val_accuracy: 0.6875 Epoch 5/5 53/53 [==============================] - 1s 14ms/step - loss: 1.4729 - accuracy: 0.6448 - val_loss: 1.3908 - val_accuracy: 0.6862 Now, training converges! It doesn't work well at all, but at least the model learns something. After spending a few minutes tuning parameters, we get to the following configuration that works somewhat well (achieves 97% validation accuracy and seems reasonably robust to overfitting): Use 0.2 * w1 + 0.8 * w2 for combining gradients. Use a learning rate that decays linearly over time. I'm not going to say that the idea works -- this isn't at all how you're supposed to do second-order optimization (pointers: see the Newton & Gauss-Newton methods, quasi-Newton methods, and BFGS). But hopefully this demonstration gave you an idea of how you can debug your way out of uncomfortable training situations. Remember: use run_eagerly=True for debugging what happens in fit(). And when your code is finally working as expected, make sure to remove this flag in order to get the best runtime performance! 
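A related switch is the global eager toggle, which can help when the code you want to step through lives in your own tf.function-decorated helpers rather than inside compile()/fit(). The snippet below is only a sketch of how you might use it; like run_eagerly=True, it should be switched off again once debugging is done:

tf.config.run_functions_eagerly(True)  # every tf.function now runs eagerly
# ... call the code you want to print from or step through with a debugger ...
tf.config.run_functions_eagerly(False)  # restore compiled graph execution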
Here's our final training run: class MyModel(keras.Model): def train_step(self, data): inputs, targets = data trainable_vars = self.trainable_variables with tf.GradientTape() as tape2: with tf.GradientTape() as tape1: preds = self(inputs, training=True) # Forward pass # Compute the loss value # (the loss function is configured in `compile()`) loss = self.compiled_loss(targets, preds) # Compute first-order gradients dl_dw = tape1.gradient(loss, trainable_vars) # Compute second-order gradients d2l_dw2 = tape2.gradient(dl_dw, trainable_vars) dl_dw = [tf.math.l2_normalize(w) for w in dl_dw] d2l_dw2 = [tf.math.l2_normalize(w) for w in d2l_dw2] # Combine first-order and second-order gradients grads = [0.2 * w1 + 0.8 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)] # Update weights self.optimizer.apply_gradients(zip(grads, trainable_vars)) # Update metrics (includes the metric that tracks the loss) self.compiled_metrics.update_state(targets, preds) # Return a dict mapping metric names to current value return {m.name: m.result() for m in self.metrics} model = get_model() lr = learning_rate = keras.optimizers.schedules.InverseTimeDecay( initial_learning_rate=0.1, decay_steps=25, decay_rate=0.1 ) model.compile( optimizer=keras.optimizers.SGD(lr), loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"], ) model.fit(x_train, y_train, epochs=50, batch_size=2048, validation_split=0.1) Epoch 1/50 27/27 [==============================] - 1s 31ms/step - loss: 1.3838 - accuracy: 0.6598 - val_loss: 0.6603 - val_accuracy: 0.8688 Epoch 2/50 27/27 [==============================] - 1s 29ms/step - loss: 0.5872 - accuracy: 0.8547 - val_loss: 0.4188 - val_accuracy: 0.8977 Epoch 3/50 27/27 [==============================] - 1s 31ms/step - loss: 0.4481 - accuracy: 0.8782 - val_loss: 0.3434 - val_accuracy: 0.9113 Epoch 4/50 27/27 [==============================] - 1s 32ms/step - loss: 0.3857 - accuracy: 0.8933 - val_loss: 0.3149 - val_accuracy: 0.9115 Epoch 5/50 27/27 [==============================] - 1s 30ms/step - loss: 0.3482 - accuracy: 0.9020 - val_loss: 0.2752 - val_accuracy: 0.9248 Epoch 6/50 27/27 [==============================] - 1s 34ms/step - loss: 0.3219 - accuracy: 0.9091 - val_loss: 0.2549 - val_accuracy: 0.9287 Epoch 7/50 27/27 [==============================] - 1s 30ms/step - loss: 0.3023 - accuracy: 0.9147 - val_loss: 0.2480 - val_accuracy: 0.9305 Epoch 8/50 27/27 [==============================] - 1s 33ms/step - loss: 0.2866 - accuracy: 0.9188 - val_loss: 0.2327 - val_accuracy: 0.9362 Epoch 9/50 27/27 [==============================] - 1s 39ms/step - loss: 0.2733 - accuracy: 0.9228 - val_loss: 0.2226 - val_accuracy: 0.9383 Epoch 10/50 27/27 [==============================] - 1s 33ms/step - loss: 0.2613 - accuracy: 0.9267 - val_loss: 0.2147 - val_accuracy: 0.9420 Epoch 11/50 27/27 [==============================] - 1s 34ms/step - loss: 0.2509 - accuracy: 0.9294 - val_loss: 0.2049 - val_accuracy: 0.9447 Epoch 12/50 27/27 [==============================] - 1s 32ms/step - loss: 0.2417 - accuracy: 0.9324 - val_loss: 0.1978 - val_accuracy: 0.9455 Epoch 13/50 27/27 [==============================] - 1s 32ms/step - loss: 0.2330 - accuracy: 0.9345 - val_loss: 0.1906 - val_accuracy: 0.9488 Epoch 14/50 27/27 [==============================] - 1s 34ms/step - loss: 0.2252 - accuracy: 0.9372 - val_loss: 0.1853 - val_accuracy: 0.9508 Epoch 15/50 27/27 [==============================] - 1s 34ms/step - loss: 0.2184 - accuracy: 0.9392 - val_loss: 0.1805 - val_accuracy: 0.9523 Epoch 16/50 27/27 
[==============================] - 1s 38ms/step - loss: 0.2113 - accuracy: 0.9413 - val_loss: 0.1760 - val_accuracy: 0.9518 Epoch 17/50 27/27 [==============================] - 1s 38ms/step - loss: 0.2055 - accuracy: 0.9427 - val_loss: 0.1709 - val_accuracy: 0.9552 Epoch 18/50 27/27 [==============================] - 1s 42ms/step - loss: 0.1998 - accuracy: 0.9441 - val_loss: 0.1669 - val_accuracy: 0.9567 Epoch 19/50 27/27 [==============================] - 1s 40ms/step - loss: 0.1944 - accuracy: 0.9458 - val_loss: 0.1625 - val_accuracy: 0.9577 Epoch 20/50 27/27 [==============================] - 1s 33ms/step - loss: 0.1891 - accuracy: 0.9471 - val_loss: 0.1580 - val_accuracy: 0.9585 Epoch 21/50 27/27 [==============================] - 1s 40ms/step - loss: 0.1846 - accuracy: 0.9484 - val_loss: 0.1564 - val_accuracy: 0.9603 Epoch 22/50 27/27 [==============================] - 1s 41ms/step - loss: 0.1804 - accuracy: 0.9498 - val_loss: 0.1518 - val_accuracy: 0.9622 Epoch 23/50 27/27 [==============================] - 1s 38ms/step - loss: 0.1762 - accuracy: 0.9507 - val_loss: 0.1485 - val_accuracy: 0.9628 Epoch 24/50 27/27 [==============================] - 1s 41ms/step - loss: 0.1722 - accuracy: 0.9521 - val_loss: 0.1461 - val_accuracy: 0.9623 Epoch 25/50 27/27 [==============================] - 1s 40ms/step - loss: 0.1686 - accuracy: 0.9534 - val_loss: 0.1434 - val_accuracy: 0.9633 Epoch 26/50 27/27 [==============================] - 1s 35ms/step - loss: 0.1652 - accuracy: 0.9542 - val_loss: 0.1419 - val_accuracy: 0.9637 Epoch 27/50 27/27 [==============================] - 1s 34ms/step - loss: 0.1618 - accuracy: 0.9550 - val_loss: 0.1397 - val_accuracy: 0.9633 Epoch 28/50 27/27 [==============================] - 1s 35ms/step - loss: 0.1589 - accuracy: 0.9556 - val_loss: 0.1371 - val_accuracy: 0.9647 Epoch 29/50 27/27 [==============================] - 1s 37ms/step - loss: 0.1561 - accuracy: 0.9566 - val_loss: 0.1350 - val_accuracy: 0.9650 Epoch 30/50 27/27 [==============================] - 1s 41ms/step - loss: 0.1534 - accuracy: 0.9574 - val_loss: 0.1331 - val_accuracy: 0.9655 Epoch 31/50 27/27 [==============================] - 1s 39ms/step - loss: 0.1508 - accuracy: 0.9583 - val_loss: 0.1319 - val_accuracy: 0.9660 Epoch 32/50 27/27 [==============================] - 1s 40ms/step - loss: 0.1484 - accuracy: 0.9589 - val_loss: 0.1314 - val_accuracy: 0.9667 Epoch 33/50 27/27 [==============================] - 1s 39ms/step - loss: 0.1463 - accuracy: 0.9597 - val_loss: 0.1290 - val_accuracy: 0.9668 Epoch 34/50 27/27 [==============================] - 1s 40ms/step - loss: 0.1439 - accuracy: 0.9600 - val_loss: 0.1268 - val_accuracy: 0.9675 Epoch 35/50 27/27 [==============================] - 1s 40ms/step - loss: 0.1418 - accuracy: 0.9608 - val_loss: 0.1256 - val_accuracy: 0.9677 Epoch 36/50 27/27 [==============================] - 1s 38ms/step - loss: 0.1397 - accuracy: 0.9614 - val_loss: 0.1245 - val_accuracy: 0.9685 Epoch 37/50 27/27 [==============================] - 1s 35ms/step - loss: 0.1378 - accuracy: 0.9625 - val_loss: 0.1223 - val_accuracy: 0.9683 Epoch 38/50 27/27 [==============================] - 1s 38ms/step - loss: 0.1362 - accuracy: 0.9620 - val_loss: 0.1216 - val_accuracy: 0.9695 Epoch 39/50 27/27 [==============================] - 1s 38ms/step - loss: 0.1344 - accuracy: 0.9628 - val_loss: 0.1207 - val_accuracy: 0.9685 Epoch 40/50 27/27 [==============================] - 1s 37ms/step - loss: 0.1327 - accuracy: 0.9634 - val_loss: 0.1192 - val_accuracy: 0.9692 Epoch 41/50 27/27 
[==============================] - 1s 41ms/step - loss: 0.1309 - accuracy: 0.9635 - val_loss: 0.1179 - val_accuracy: 0.9695 Epoch 42/50 27/27 [==============================] - 1s 39ms/step - loss: 0.1294 - accuracy: 0.9641 - val_loss: 0.1173 - val_accuracy: 0.9695 Epoch 43/50 27/27 [==============================] - 1s 41ms/step - loss: 0.1281 - accuracy: 0.9646 - val_loss: 0.1160 - val_accuracy: 0.9705 Epoch 44/50 27/27 [==============================] - 1s 42ms/step - loss: 0.1265 - accuracy: 0.9650 - val_loss: 0.1158 - val_accuracy: 0.9700 Epoch 45/50 27/27 [==============================] - 1s 40ms/step - loss: 0.1251 - accuracy: 0.9654 - val_loss: 0.1149 - val_accuracy: 0.9695 Epoch 46/50 27/27 [==============================] - 1s 39ms/step - loss: 0.1237 - accuracy: 0.9658 - val_loss: 0.1140 - val_accuracy: 0.9700 Epoch 47/50 27/27 [==============================] - 1s 40ms/step - loss: 0.1224 - accuracy: 0.9664 - val_loss: 0.1128 - val_accuracy: 0.9707 Epoch 48/50 27/27 [==============================] - 1s 38ms/step - loss: 0.1211 - accuracy: 0.9664 - val_loss: 0.1122 - val_accuracy: 0.9710 Epoch 49/50 27/27 [==============================] - 1s 39ms/step - loss: 0.1198 - accuracy: 0.9670 - val_loss: 0.1114 - val_accuracy: 0.9713 Epoch 50/50 27/27 [==============================] - 1s 45ms/step - loss: 0.1186 - accuracy: 0.9677 - val_loss: 0.1106 - val_accuracy: 0.9703 Tip 4: if your code is slow, run the TensorFlow profiler One last tip -- if your code seems slower than it should be, you're going to want to plot how much time is spent on each computation step. Look for any bottleneck that might be causing less than 100% device utilization. To learn more about TensorFlow profiling, see this extensive guide. You can quickly profile a Keras model via the TensorBoard callback: # Profile from batches 10 to 15 tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, profile_batch=(10, 15)) # Train the model and use the TensorBoard Keras callback to collect # performance profiling data model.fit(dataset, epochs=1, callbacks=[tb_callback]) Then navigate to the TensorBoard app and check the \"profile\" tab. Training better student models via knowledge distillation with function matching. Introduction Knowledge distillation (Hinton et al.) is a technique that enables us to compress larger models into smaller ones. This allows us to reap the benefits of high performing larger models, while reducing storage and memory costs and achieving higher inference speed: Smaller models -> smaller memory footprint Reduced complexity -> fewer floating-point operations (FLOPs) In Knowledge distillation: A good teacher is patient and consistent, Beyer et al. investigate various existing setups for performing knowledge distillation and show that all of them lead to sub-optimal performance. Due to this, practitioners often settle for other alternatives (quantization, pruning, weight clustering, etc.) when developing production systems that are resource-constrained. Beyer et al. investigate how we can improve the student models that come out of the knowledge distillation process and always match the performance of their teacher models. In this example, we will study the recipes introduced by them, using the Flowers102 dataset. As a reference, with these recipes, the authors were able to produce a ResNet50 model that achieves 82.8% accuracy on the ImageNet-1k dataset. In case you need a refresher on knowledge distillation and want to study how it is implemented in Keras, you can refer to this example. 
You can also follow this example that shows an extension of knowledge distillation applied to consistency training. To follow this example, you will need TensorFlow 2.5 or higher as well as TensorFlow Addons, which can be installed using the command below: !pip install -q tensorflow-addons Imports from tensorflow import keras import tensorflow_addons as tfa import tensorflow as tf import matplotlib.pyplot as plt import numpy as np import tensorflow_datasets as tfds tfds.disable_progress_bar() Hyperparameters and constants AUTO = tf.data.AUTOTUNE # Used to dynamically adjust parallelism. BATCH_SIZE = 64 # Comes from Table 4 and \"Training setup\" section. TEMPERATURE = 10 # Used to soften the logits before they go to softmax. INIT_LR = 0.003 # Initial learning rate that will be decayed over the training period. WEIGHT_DECAY = 0.001 # Used for regularization. CLIP_THRESHOLD = 1.0 # Used for clipping the gradients by L2-norm. # We will first resize the training images to a bigger size and then we will take # random crops of a lower size. BIGGER = 160 RESIZE = 128 Load the Flowers102 dataset train_ds, validation_ds, test_ds = tfds.load( \"oxford_flowers102\", split=[\"train\", \"validation\", \"test\"], as_supervised=True ) print(f\"Number of training examples: {train_ds.cardinality()}.\") print( f\"Number of validation examples: {validation_ds.cardinality()}.\" ) print(f\"Number of test examples: {test_ds.cardinality()}.\") Number of training examples: 1020. Number of validation examples: 1020. Number of test examples: 6149. Teacher model As is common with any distillation technique, it's important to first train a well-performing teacher model, which is usually larger than the subsequent student model. The authors distill a BiT ResNet152x2 model (teacher) into a BiT ResNet50 model (student). BiT stands for Big Transfer and was introduced in Big Transfer (BiT): General Visual Representation Learning. BiT variants of ResNets use Group Normalization (Wu et al.) and Weight Standardization (Qiao et al.) in place of Batch Normalization (Ioffe et al.). In order to limit the time it takes to run this example, we will be using a BiT ResNet101x3 already trained on the Flowers102 dataset. You can refer to this notebook to learn more about the training process. This model reaches 98.18% accuracy on the test set of Flowers102. The model weights are hosted on Kaggle as a dataset. To download the weights, follow these steps: Create an account on Kaggle here. Go to the \"Account\" tab of your user profile. Select \"Create API Token\". This will trigger the download of kaggle.json, a file containing your API credentials. From that JSON file, copy your Kaggle username and API key. Now run the following: import os os.environ[\"KAGGLE_USERNAME\"] = \"\" # TODO: enter your Kaggle user name here os.environ[\"KAGGLE_KEY\"] = \"\" # TODO: enter your Kaggle key here Once the environment variables are set, run: $ kaggle datasets download -d spsayakpaul/bitresnet101x3flowers102 $ unzip -qq bitresnet101x3flowers102.zip This should generate a folder named T-r101x3-128 which is essentially a teacher SavedModel. import os os.environ[\"KAGGLE_USERNAME\"] = \"\" # TODO: enter your Kaggle user name here os.environ[\"KAGGLE_KEY\"] = \"\" # TODO: enter your Kaggle API key here !kaggle datasets download -d spsayakpaul/bitresnet101x3flowers102 !unzip -qq bitresnet101x3flowers102.zip # Since the teacher model is not going to be trained further we make # it non-trainable.
teacher_model = keras.models.load_model( \"/home/jupyter/keras-io/examples/keras_recipes/T-r101x3-128\" ) teacher_model.trainable = False teacher_model.summary() Model: \"my_bi_t_model_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_1 (Dense) multiple 626790 _________________________________________________________________ keras_layer_1 (KerasLayer) multiple 381789888 ================================================================= Total params: 382,416,678 Trainable params: 0 Non-trainable params: 382,416,678 _________________________________________________________________ The \"function matching\" recipe To train a high-quality student model, the authors propose the following changes to the student training workflow: Use an aggressive variant of MixUp (Zhang et al.). This is done by sampling the alpha parameter from a uniform distribution instead of a beta distribution. MixUp is used here in order to help the student model capture the function underlying the teacher model. MixUp linearly interpolates between different samples across the data manifold. So the rationale here is that if the student is trained to fit these interpolations, it should be able to match the teacher model better. To incorporate more invariance, MixUp is coupled with \"Inception-style\" cropping (Szegedy et al.). This is where the \"function matching\" term comes from in the original paper. Unlike other works (Noisy Student Training for example), both the teacher and student models receive the same copy of an image, which is mixed up and randomly cropped. By providing the same inputs to both the models, the authors make the teacher consistent with the student. With MixUp, we are essentially introducing a strong form of regularization when training the student. As such, it should be trained for a relatively long period of time (1000 epochs at least). Since the student is trained with strong regularization, the risk of overfitting due to a longer training schedule is also mitigated. In summary, one needs to be consistent and patient while training the student model. Data input pipeline def mixup(images, labels): alpha = tf.random.uniform([], 0, 1) mixedup_images = alpha * images + (1 - alpha) * tf.reverse(images, axis=[0]) # The labels do not matter here since they are NOT used during # training. return mixedup_images, labels def preprocess_image(image, label, train=True): image = tf.cast(image, tf.float32) / 255.0 if train: image = tf.image.resize(image, (BIGGER, BIGGER)) image = tf.image.random_crop(image, (RESIZE, RESIZE, 3)) image = tf.image.random_flip_left_right(image) else: # Central fraction amount is from here: # https://git.io/J8Kda. image = tf.image.central_crop(image, central_fraction=0.875) image = tf.image.resize(image, (RESIZE, RESIZE)) return image, label def prepare_dataset(dataset, train=True, batch_size=BATCH_SIZE): if train: dataset = dataset.map(preprocess_image, num_parallel_calls=AUTO) dataset = dataset.shuffle(BATCH_SIZE * 10) else: dataset = dataset.map( lambda x, y: (preprocess_image(x, y, train)), num_parallel_calls=AUTO ) dataset = dataset.batch(batch_size) if train: dataset = dataset.map(mixup, num_parallel_calls=AUTO) dataset = dataset.prefetch(AUTO) return dataset Note that for brevity, we used mild crops for the training set but in practice \"Inception-style\" preprocessing should be applied. You can refer to this script for a closer implementation.
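For reference, here is a rough sketch of what such an \"Inception-style\" crop could look like, built on tf.image.sample_distorted_bounding_box. The area and aspect-ratio ranges below are common defaults rather than values taken from the paper, so treat them as assumptions:

def inception_style_crop(image, target_size=RESIZE):
    # Sample a crop window covering roughly 8%-100% of the image area with a
    # moderately distorted aspect ratio, then resize to the training resolution.
    bbox = tf.constant([0.0, 0.0, 1.0, 1.0], dtype=tf.float32, shape=[1, 1, 4])
    begin, size, _ = tf.image.sample_distorted_bounding_box(
        tf.shape(image),
        bounding_boxes=bbox,
        min_object_covered=0.1,
        aspect_ratio_range=(3 / 4, 4 / 3),
        area_range=(0.08, 1.0),
    )
    crop = tf.slice(image, begin, size)
    return tf.image.resize(crop, (target_size, target_size))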
Also, the ground-truth labels are not used for training the student. train_ds = prepare_dataset(train_ds, True) validation_ds = prepare_dataset(validation_ds, False) test_ds = prepare_dataset(test_ds, False) Visualization sample_images, _ = next(iter(train_ds)) plt.figure(figsize=(10, 10)) for n in range(25): ax = plt.subplot(5, 5, n + 1) plt.imshow(sample_images[n].numpy()) plt.axis(\"off\") plt.show() png Student model For the purpose of this example, we will use the standard ResNet50V2 (He et al.). def get_resnetv2(): resnet_v2 = keras.applications.ResNet50V2( weights=None, input_shape=(RESIZE, RESIZE, 3), classes=102, classifier_activation=\"linear\", ) return resnet_v2 get_resnetv2().count_params() 23773798 Compared to the teacher model, this model has 358 Million fewer parameters. Distillation utility We will reuse some code from this example on knowledge distillation. class Distiller(tf.keras.Model): def __init__(self, student, teacher): super(Distiller, self).__init__() self.student = student self.teacher = teacher self.loss_tracker = keras.metrics.Mean(name=\"distillation_loss\") @property def metrics(self): metrics = super().metrics metrics.append(self.loss_tracker) return metrics def compile( self, optimizer, metrics, distillation_loss_fn, temperature=TEMPERATURE, ): super(Distiller, self).compile(optimizer=optimizer, metrics=metrics) self.distillation_loss_fn = distillation_loss_fn self.temperature = temperature def train_step(self, data): # Unpack data x, _ = data # Forward pass of teacher teacher_predictions = self.teacher(x, training=False) with tf.GradientTape() as tape: # Forward pass of student student_predictions = self.student(x, training=True) # Compute loss distillation_loss = self.distillation_loss_fn( tf.nn.softmax(teacher_predictions / self.temperature, axis=1), tf.nn.softmax(student_predictions / self.temperature, axis=1), ) # Compute gradients trainable_vars = self.student.trainable_variables gradients = tape.gradient(distillation_loss, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Report progress self.loss_tracker.update_state(distillation_loss) return {\"distillation_loss\": self.loss_tracker.result()} def test_step(self, data): # Unpack data x, y = data # Forward passes teacher_predictions = self.teacher(x, training=False) student_predictions = self.student(x, training=False) # Calculate the loss distillation_loss = self.distillation_loss_fn( tf.nn.softmax(teacher_predictions / self.temperature, axis=1), tf.nn.softmax(student_predictions / self.temperature, axis=1), ) # Report progress self.loss_tracker.update_state(distillation_loss) self.compiled_metrics.update_state(y, student_predictions) results = {m.name: m.result() for m in self.metrics} return results Learning rate schedule A warmup cosine learning rate schedule is used in the paper. This schedule is also typical for many pre-training methods especially for computer vision. # Some code is taken from: # https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2. 
class WarmUpCosine(keras.optimizers.schedules.LearningRateSchedule): def __init__( self, learning_rate_base, total_steps, warmup_learning_rate, warmup_steps ): super(WarmUpCosine, self).__init__() self.learning_rate_base = learning_rate_base self.total_steps = total_steps self.warmup_learning_rate = warmup_learning_rate self.warmup_steps = warmup_steps self.pi = tf.constant(np.pi) def __call__(self, step): if self.total_steps < self.warmup_steps: raise ValueError(\"Total_steps must be larger or equal to warmup_steps.\") cos_annealed_lr = tf.cos( self.pi * (tf.cast(step, tf.float32) - self.warmup_steps) / float(self.total_steps - self.warmup_steps) ) learning_rate = 0.5 * self.learning_rate_base * (1 + cos_annealed_lr) if self.warmup_steps > 0: if self.learning_rate_base < self.warmup_learning_rate: raise ValueError( \"Learning_rate_base must be larger or equal to \" \"warmup_learning_rate.\" ) slope = ( self.learning_rate_base - self.warmup_learning_rate ) / self.warmup_steps warmup_rate = slope * tf.cast(step, tf.float32) + self.warmup_learning_rate learning_rate = tf.where( step < self.warmup_steps, warmup_rate, learning_rate ) return tf.where( step > self.total_steps, 0.0, learning_rate, name=\"learning_rate\" ) We can now plot a a graph of learning rates generated using this schedule. ARTIFICIAL_EPOCHS = 1000 ARTIFICIAL_BATCH_SIZE = 512 DATASET_NUM_TRAIN_EXAMPLES = 1020 TOTAL_STEPS = int( DATASET_NUM_TRAIN_EXAMPLES / ARTIFICIAL_BATCH_SIZE * ARTIFICIAL_EPOCHS ) scheduled_lrs = WarmUpCosine( learning_rate_base=INIT_LR, total_steps=TOTAL_STEPS, warmup_learning_rate=0.0, warmup_steps=1500, ) lrs = [scheduled_lrs(step) for step in range(TOTAL_STEPS)] plt.plot(lrs) plt.xlabel(\"Step\", fontsize=14) plt.ylabel(\"LR\", fontsize=14) plt.show() png The original paper uses at least 1000 epochs and a batch size of 512 to perform \"function matching\". The objective of this example is to present a workflow to implement the recipe and not to demonstrate the results when they are applied at full scale. However, these recipes will transfer to the original settings from the paper. Please refer to this repository if you are interested in finding out more. Training optimizer = tfa.optimizers.AdamW( weight_decay=WEIGHT_DECAY, learning_rate=scheduled_lrs, clipnorm=CLIP_THRESHOLD ) student_model = get_resnetv2() distiller = Distiller(student=student_model, teacher=teacher_model) distiller.compile( optimizer, metrics=[keras.metrics.SparseCategoricalAccuracy()], distillation_loss_fn=keras.losses.KLDivergence(), temperature=TEMPERATURE, ) history = distiller.fit( train_ds, steps_per_epoch=int(np.ceil(DATASET_NUM_TRAIN_EXAMPLES / BATCH_SIZE)), validation_data=validation_ds, epochs=30, # This should be at least 1000. 
) student = distiller.student student_model.compile(metrics=[\"accuracy\"]) _, top1_accuracy = student.evaluate(test_ds) print(f\"Top-1 accuracy on the test set: {round(top1_accuracy * 100, 2)}%\") Epoch 1/30 16/16 [==============================] - 74s 3s/step - distillation_loss: 0.0070 - val_sparse_categorical_accuracy: 0.0039 - val_distillation_loss: 0.0061 Epoch 2/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0059 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0061 Epoch 3/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0049 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0060 Epoch 4/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0048 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0060 Epoch 5/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0043 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0060 Epoch 6/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0041 - val_sparse_categorical_accuracy: 0.0108 - val_distillation_loss: 0.0060 Epoch 7/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0038 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0061 Epoch 8/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0040 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0062 Epoch 9/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0039 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0063 Epoch 10/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0035 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0064 Epoch 11/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0041 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0064 Epoch 12/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0039 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0067 Epoch 13/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0039 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0067 Epoch 14/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0036 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0066 Epoch 15/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0037 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0065 Epoch 16/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0038 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0068 Epoch 17/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0039 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0066 Epoch 18/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0038 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0064 Epoch 19/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0035 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0071 Epoch 20/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0038 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0066 Epoch 21/30 16/16 
[==============================] - 37s 2s/step - distillation_loss: 0.0038 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0068 Epoch 22/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0034 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0073 Epoch 23/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0035 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0078 Epoch 24/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0037 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0087 Epoch 25/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0031 - val_sparse_categorical_accuracy: 0.0108 - val_distillation_loss: 0.0078 Epoch 26/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0033 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0072 Epoch 27/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0036 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0071 Epoch 28/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0036 - val_sparse_categorical_accuracy: 0.0275 - val_distillation_loss: 0.0078 Epoch 29/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0032 - val_sparse_categorical_accuracy: 0.0196 - val_distillation_loss: 0.0068 Epoch 30/30 16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0034 - val_sparse_categorical_accuracy: 0.0147 - val_distillation_loss: 0.0071 97/97 [==============================] - 7s 64ms/step - loss: 0.0000e+00 - accuracy: 0.0107 Top-1 accuracy on the test set: 1.07% Results With just 30 epochs of training, the results are nowhere near expected. This is where the benefits of patience aka a longer training schedule will come into play. Let's investigate what the model trained for 1000 epochs can do. # Download the pre-trained weights. 
!wget https://git.io/JBO3Y -O S-r50x1-128-1000.tar.gz !tar xf S-r50x1-128-1000.tar.gz pretrained_student = keras.models.load_model(\"S-r50x1-128-1000\") pretrained_student.summary() Model: \"resnet\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= root_block (Sequential) (None, 32, 32, 64) 9408 _________________________________________________________________ block1 (Sequential) (None, 32, 32, 256) 214912 _________________________________________________________________ block2 (Sequential) (None, 16, 16, 512) 1218048 _________________________________________________________________ block3 (Sequential) (None, 8, 8, 1024) 7095296 _________________________________________________________________ block4 (Sequential) (None, 4, 4, 2048) 14958592 _________________________________________________________________ group_norm (GroupNormalizati multiple 4096 _________________________________________________________________ re_lu_97 (ReLU) multiple 0 _________________________________________________________________ global_average_pooling2d_1 ( multiple 0 _________________________________________________________________ head/dense (Dense) multiple 208998 ================================================================= Total params: 23,709,350 Trainable params: 23,709,350 Non-trainable params: 0 _________________________________________________________________ This model exactly follows what the authors have used in their student models. This is why the model summary is a bit different. _, top1_accuracy = pretrained_student.evaluate(test_ds) print(f\"Top-1 accuracy on the test set: {round(top1_accuracy * 100, 2)}%\") 97/97 [==============================] - 14s 131ms/step - loss: 0.0000e+00 - accuracy: 0.8102 Top-1 accuracy on the test set: 81.02% With 100000 epochs of training, this same model leads to a top-1 accuracy of 95.54%. There are a number of important ablations studies presented in the paper that show the effectiveness of these recipes compared to the prior art. So if you are skeptical about these recipes, definitely consult the paper. Note on training for longer With TPU-based hardware infrastructure, we can train the model for 1000 epochs faster. This does not even require adding a lot of changes to this codebase. You are encouraged to check this repository as it presents TPU-compatible training workflows for these recipes and can be run on Kaggle Kernel leveraging their free TPU v3-8 hardware. Using compositional and mixed-dimension embeddings for memory-efficient recommendation models. Introduction This example demonstrates two techniques for building memory-efficient recommendation models by reducing the size of the embedding tables, without sacrificing model effectiveness: Quotient-remainder trick, by Hao-Jun Michael Shi et al., which reduces the number of embedding vectors to store, yet produces unique embedding vector for each item without explicit definition. Mixed Dimension embeddings, by Antonio Ginart et al., which stores embedding vectors with mixed dimensions, where less popular items have reduced dimension embeddings. We use the 1M version of the Movielens dataset. The dataset includes around 1 million ratings from 6,000 users on 4,000 movies. 
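Before diving into the code, here is a back-of-the-envelope comparison of parameter counts for the two techniques. The numbers below are purely illustrative and are not the exact configuration used later in this example:

vocab_size = 4000  # hypothetical number of items
base_dim = 64      # full embedding dimension

# Full embedding table: one base_dim vector per item.
full_table = vocab_size * base_dim  # 256,000 weights

# Quotient-remainder trick: two tables with num_buckets rows each.
num_buckets = 64
qr_tables = 2 * num_buckets * base_dim  # 8,192 weights

# Mixed dimensions: e.g. 400 popular items at dim 64, the rest at dim 16,
# plus a 16 x 64 projection matrix for the low-dimension block.
md_tables = 400 * base_dim + (vocab_size - 400) * 16 + 16 * base_dim  # 84,224 weights

print(full_table, qr_tables, md_tables)

The actual parameter counts for this example appear in the model summaries further below.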
Setup import os import math from zipfile import ZipFile from urllib.request import urlretrieve import numpy as np import pandas as pd import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.layers import StringLookup import matplotlib.pyplot as plt Prepare the data Download and process data urlretrieve(\"http://files.grouplens.org/datasets/movielens/ml-1m.zip\", \"movielens.zip\") ZipFile(\"movielens.zip\", \"r\").extractall() ratings_data = pd.read_csv( \"ml-1m/ratings.dat\", sep=\"::\", names=[\"user_id\", \"movie_id\", \"rating\", \"unix_timestamp\"], ) ratings_data[\"movie_id\"] = ratings_data[\"movie_id\"].apply(lambda x: f\"movie_{x}\") ratings_data[\"user_id\"] = ratings_data[\"user_id\"].apply(lambda x: f\"user_{x}\") ratings_data[\"rating\"] = ratings_data[\"rating\"].apply(lambda x: float(x)) del ratings_data[\"unix_timestamp\"] print(f\"Number of users: {len(ratings_data.user_id.unique())}\") print(f\"Number of movies: {len(ratings_data.movie_id.unique())}\") print(f\"Number of ratings: {len(ratings_data.index)}\") Number of users: 6040 Number of movies: 3706 Number of ratings: 1000209 Create train and eval data splits random_selection = np.random.rand(len(ratings_data.index)) <= 0.85 train_data = ratings_data[random_selection] eval_data = ratings_data[~random_selection] train_data.to_csv(\"train_data.csv\", index=False, sep=\"|\", header=False) eval_data.to_csv(\"eval_data.csv\", index=False, sep=\"|\", header=False) print(f\"Train data split: {len(train_data.index)}\") print(f\"Eval data split: {len(eval_data.index)}\") print(\"Train and eval data files are saved.\") Train data split: 850361 Eval data split: 149848 Train and eval data files are saved. Define dataset metadata and hyperparameters csv_header = list(ratings_data.columns) user_vocabulary = list(ratings_data.user_id.unique()) movie_vocabulary = list(ratings_data.movie_id.unique()) target_feature_name = \"rating\" learning_rate = 0.001 batch_size = 128 num_epochs = 3 base_embedding_dim = 64 Train and evaluate the model def get_dataset_from_csv(csv_file_path, batch_size=128, shuffle=True): return tf.data.experimental.make_csv_dataset( csv_file_path, batch_size=batch_size, column_names=csv_header, label_name=target_feature_name, num_epochs=1, header=False, field_delim=\"|\", shuffle=shuffle, ) def run_experiment(model): # Compile the model. model.compile( optimizer=keras.optimizers.Adam(learning_rate), loss=tf.keras.losses.MeanSquaredError(), metrics=[keras.metrics.MeanAbsoluteError(name=\"mae\")], ) # Read the training data. train_dataset = get_dataset_from_csv(\"train_data.csv\", batch_size) # Read the test data. eval_dataset = get_dataset_from_csv(\"eval_data.csv\", batch_size, shuffle=False) # Fit the model with the training data. history = model.fit(train_dataset, epochs=num_epochs, validation_data=eval_dataset,) return history Experiment 1: baseline collaborative filtering model Implement embedding encoder def embedding_encoder(vocabulary, embedding_dim, num_oov_indices=0, name=None): return keras.Sequential( [ StringLookup( vocabulary=vocabulary, mask_token=None, num_oov_indices=num_oov_indices ), layers.Embedding( input_dim=len(vocabulary) + num_oov_indices, output_dim=embedding_dim ), ], name=f\"{name}_embedding\" if name else None, ) Implement the baseline model def create_baseline_model(): # Receive the user as an input. user_input = layers.Input(name=\"user_id\", shape=(), dtype=tf.string) # Get user embedding. 
user_embedding = embedding_encoder( vocabulary=user_vocabulary, embedding_dim=base_embedding_dim, name=\"user\" )(user_input) # Receive the movie as an input. movie_input = layers.Input(name=\"movie_id\", shape=(), dtype=tf.string) # Get embedding. movie_embedding = embedding_encoder( vocabulary=movie_vocabulary, embedding_dim=base_embedding_dim, name=\"movie\" )(movie_input) # Compute dot product similarity between user and movie embeddings. logits = layers.Dot(axes=1, name=\"dot_similarity\")( [user_embedding, movie_embedding] ) # Convert to rating scale. prediction = keras.activations.sigmoid(logits) * 5 # Create the model. model = keras.Model( inputs=[user_input, movie_input], outputs=prediction, name=\"baseline_model\" ) return model baseline_model = create_baseline_model() baseline_model.summary() Model: \"baseline_model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== user_id (InputLayer) [(None,)] 0 __________________________________________________________________________________________________ movie_id (InputLayer) [(None,)] 0 __________________________________________________________________________________________________ user_embedding (Sequential) (None, 64) 386560 user_id[0][0] __________________________________________________________________________________________________ movie_embedding (Sequential) (None, 64) 237184 movie_id[0][0] __________________________________________________________________________________________________ dot_similarity (Dot) (None, 1) 0 user_embedding[0][0] movie_embedding[0][0] __________________________________________________________________________________________________ tf.math.sigmoid (TFOpLambda) (None, 1) 0 dot_similarity[0][0] __________________________________________________________________________________________________ tf.math.multiply (TFOpLambda) (None, 1) 0 tf.math.sigmoid[0][0] ================================================================================================== Total params: 623,744 Trainable params: 623,744 Non-trainable params: 0 __________________________________________________________________________________________________ Notice that the number of trainable parameters is 623,744 history = run_experiment(baseline_model) plt.plot(history.history[\"loss\"]) plt.plot(history.history[\"val_loss\"]) plt.title(\"model loss\") plt.ylabel(\"loss\") plt.xlabel(\"epoch\") plt.legend([\"train\", \"eval\"], loc=\"upper left\") plt.show() Epoch 1/3 6644/6644 [==============================] - 46s 7ms/step - loss: 1.4399 - mae: 0.9818 - val_loss: 0.9348 - val_mae: 0.7569 Epoch 2/3 6644/6644 [==============================] - 53s 8ms/step - loss: 0.8422 - mae: 0.7246 - val_loss: 0.7991 - val_mae: 0.7076 Epoch 3/3 6644/6644 [==============================] - 58s 9ms/step - loss: 0.7461 - mae: 0.6819 - val_loss: 0.7564 - val_mae: 0.6869 png Experiment 2: memory-efficient model Implement Quotient-Remainder embedding as a layer The Quotient-Remainder technique works as follows. For a set of vocabulary and embedding size embedding_dim, instead of creating a vocabulary_size X embedding_dim embedding table, we create two num_buckets X embedding_dim embedding tables, where num_buckets is much smaller than vocabulary_size. 
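Before listing the concrete lookup steps, here is a quick sanity check (a minimal sketch with illustrative numbers) showing why each item still gets its own embedding: every index decomposes uniquely as index = quotient * num_buckets + remainder.

num_buckets = 64  # needs num_buckets ** 2 >= vocab_size so the quotient fits in a num_buckets-row table
vocab_size = 4000

pairs = {(i // num_buckets, i % num_buckets) for i in range(vocab_size)}
print(len(pairs) == vocab_size)  # True: the (quotient, remainder) pairs never collide
print(max(i // num_buckets for i in range(vocab_size)) < num_buckets)  # True: quotient index fits in the first table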
An embedding for a given item index is generated via the following steps:

1. Compute the quotient_index as index // num_buckets.
2. Compute the remainder_index as index % num_buckets.
3. Lookup quotient_embedding from the first embedding table using quotient_index.
4. Lookup remainder_embedding from the second embedding table using remainder_index.
5. Return quotient_embedding * remainder_embedding.

This technique not only reduces the number of embedding vectors that need to be stored and trained, but also generates a unique embedding vector of size embedding_dim for each item. Note that quotient_embedding and remainder_embedding can also be combined using other operations, like Add and Concatenate.

class QREmbedding(keras.layers.Layer):
    def __init__(self, vocabulary, embedding_dim, num_buckets, name=None):
        super(QREmbedding, self).__init__(name=name)
        self.num_buckets = num_buckets

        self.index_lookup = StringLookup(
            vocabulary=vocabulary, mask_token=None, num_oov_indices=0
        )
        self.q_embeddings = layers.Embedding(num_buckets, embedding_dim)
        self.r_embeddings = layers.Embedding(num_buckets, embedding_dim)

    def call(self, inputs):
        # Get the item index.
        embedding_index = self.index_lookup(inputs)
        # Get the quotient index.
        quotient_index = tf.math.floordiv(embedding_index, self.num_buckets)
        # Get the remainder index.
        remainder_index = tf.math.floormod(embedding_index, self.num_buckets)
        # Lookup the quotient_embedding using the quotient_index.
        quotient_embedding = self.q_embeddings(quotient_index)
        # Lookup the remainder_embedding using the remainder_index.
        remainder_embedding = self.r_embeddings(remainder_index)
        # Use multiplication as a combiner operation.
        return quotient_embedding * remainder_embedding

Implement Mixed Dimension embedding as a layer

In the mixed dimension embedding technique, we train full-dimension embedding vectors for the frequently queried items, and reduced-dimension embedding vectors for less frequent items, plus a projection weights matrix that brings the low-dimension embeddings up to the full dimension.

More precisely, we define blocks of items of similar frequencies. For each block, a block_vocab_size X block_embedding_dim embedding table and a block_embedding_dim X full_embedding_dim projection weights matrix are created. Note that, if block_embedding_dim equals full_embedding_dim, the projection weights matrix becomes an identity matrix.

Embeddings for a given batch of item indices are generated via the following steps:

1. For each block, lookup the block_embedding_dim embedding vectors using the indices, and project them to the full_embedding_dim. If an item index does not belong to a given block, an out-of-vocabulary embedding is returned. Each block will return a batch_size X full_embedding_dim tensor.
2. A mask is applied to the embeddings returned from each block in order to convert the out-of-vocabulary embeddings to vectors of zeros. That is, for each item in the batch, a single non-zero embedding vector is returned from all the block embeddings.
3. Embeddings retrieved from the blocks are combined using sum to produce the final batch_size X full_embedding_dim tensor.

class MDEmbedding(keras.layers.Layer):
    def __init__(
        self, blocks_vocabulary, blocks_embedding_dims, base_embedding_dim, name=None
    ):
        super(MDEmbedding, self).__init__(name=name)
        self.num_blocks = len(blocks_vocabulary)

        # Create vocab to block lookup.
keys = [] values = [] for block_idx, block_vocab in enumerate(blocks_vocabulary): keys.extend(block_vocab) values.extend([block_idx] * len(block_vocab)) self.vocab_to_block = tf.lookup.StaticHashTable( tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1 ) self.block_embedding_encoders = [] self.block_embedding_projectors = [] # Create block embedding encoders and projectors. for idx in range(self.num_blocks): vocabulary = blocks_vocabulary[idx] embedding_dim = blocks_embedding_dims[idx] block_embedding_encoder = embedding_encoder( vocabulary, embedding_dim, num_oov_indices=1 ) self.block_embedding_encoders.append(block_embedding_encoder) if embedding_dim == base_embedding_dim: self.block_embedding_projectors.append(layers.Lambda(lambda x: x)) else: self.block_embedding_projectors.append( layers.Dense(units=base_embedding_dim) ) def call(self, inputs): # Get block index for each input item. block_indicies = self.vocab_to_block.lookup(inputs) # Initialize output embeddings to zeros. embeddings = tf.zeros(shape=(tf.shape(inputs)[0], base_embedding_dim)) # Generate embeddings from blocks. for idx in range(self.num_blocks): # Lookup embeddings from the current block. block_embeddings = self.block_embedding_encoders[idx](inputs) # Project embeddings to base_embedding_dim. block_embeddings = self.block_embedding_projectors[idx](block_embeddings) # Create a mask to filter out embeddings of items that do not belong to the current block. mask = tf.expand_dims(tf.cast(block_indicies == idx, tf.dtypes.float32), 1) # Set the embeddings for the items not belonging to the current block to zeros. block_embeddings = block_embeddings * mask # Add the block embeddings to the final embeddings. embeddings += block_embeddings return embeddings Implement the memory-efficient model In this experiment, we are going to use the Quotient-Remainder technique to reduce the size of the user embeddings, and the Mixed Dimension technique to reduce the size of the movie embeddings. While in the paper, an alpha-power rule is used to determined the dimensions of the embedding of each block, we simply set the number of blocks and the dimensions of embeddings of each block based on the histogram visualization of movies popularity. movie_frequencies = ratings_data[\"movie_id\"].value_counts() movie_frequencies.hist(bins=10) png You can see that we can group the movies into three blocks, and assign them 64, 32, and 16 embedding dimensions, respectively. Feel free to experiment with different number of blocks and dimensions. sorted_movie_vocabulary = list(movie_frequencies.keys()) movie_blocks_vocabulary = [ sorted_movie_vocabulary[:400], # high popularity movies block sorted_movie_vocabulary[400:1700], # normal popularity movies block sorted_movie_vocabulary[1700:], # low popularity movies block ] movie_blocks_embedding_dims = [64, 32, 16] user_embedding_num_buckets = len(user_vocabulary) // 50 def create_memory_efficient_model(): # Take the user as an input. user_input = layers.Input(name=\"user_id\", shape=(), dtype=tf.string) # Get user embedding. user_embedding = QREmbedding( vocabulary=user_vocabulary, embedding_dim=base_embedding_dim, num_buckets=user_embedding_num_buckets, name=\"user_embedding\", )(user_input) # Take the movie as an input. movie_input = layers.Input(name=\"movie_id\", shape=(), dtype=tf.string) # Get embedding. 
movie_embedding = MDEmbedding( blocks_vocabulary=movie_blocks_vocabulary, blocks_embedding_dims=movie_blocks_embedding_dims, base_embedding_dim=base_embedding_dim, name=\"movie_embedding\", )(movie_input) # Compute dot product similarity between user and movie embeddings. logits = layers.Dot(axes=1, name=\"dot_similarity\")( [user_embedding, movie_embedding] ) # Convert to rating scale. prediction = keras.activations.sigmoid(logits) * 5 # Create the model. model = keras.Model( inputs=[user_input, movie_input], outputs=prediction, name=\"baseline_model\" ) return model memory_efficient_model = create_memory_efficient_model() memory_efficient_model.summary() Model: \"baseline_model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== user_id (InputLayer) [(None,)] 0 __________________________________________________________________________________________________ movie_id (InputLayer) [(None,)] 0 __________________________________________________________________________________________________ user_embedding (QREmbedding) (None, 64) 15360 user_id[0][0] __________________________________________________________________________________________________ movie_embedding (MDEmbedding) (None, 64) 102608 movie_id[0][0] __________________________________________________________________________________________________ dot_similarity (Dot) (None, 1) 0 user_embedding[0][0] movie_embedding[0][0] __________________________________________________________________________________________________ tf.math.sigmoid_1 (TFOpLambda) (None, 1) 0 dot_similarity[0][0] __________________________________________________________________________________________________ tf.math.multiply_1 (TFOpLambda) (None, 1) 0 tf.math.sigmoid_1[0][0] ================================================================================================== Total params: 117,968 Trainable params: 117,968 Non-trainable params: 0 __________________________________________________________________________________________________ Notice that the number of trainable parameters is 117,968, which is more than 5x less than the number of parameters in the baseline model. history = run_experiment(memory_efficient_model) plt.plot(history.history[\"loss\"]) plt.plot(history.history[\"val_loss\"]) plt.title(\"model loss\") plt.ylabel(\"loss\") plt.xlabel(\"epoch\") plt.legend([\"train\", \"eval\"], loc=\"upper left\") plt.show() Epoch 1/3 6644/6644 [==============================] - 10s 1ms/step - loss: 1.2632 - mae: 0.9078 - val_loss: 1.0593 - val_mae: 0.8045 Epoch 2/3 6644/6644 [==============================] - 9s 1ms/step - loss: 0.8933 - mae: 0.7512 - val_loss: 0.8932 - val_mae: 0.7519 Epoch 3/3 6644/6644 [==============================] - 9s 1ms/step - loss: 0.8412 - mae: 0.7279 - val_loss: 0.8612 - val_mae: 0.7357 png Building Probabilistic Bayesian neural network models with TensorFlow Probability. Introduction Taking a probabilistic approach to deep learning allows to account for uncertainty, so that models can assign less levels of confidence to incorrect predictions. Sources of uncertainty can be found in the data, due to measurement error or noise in the labels, or the model, due to insufficient data availability for the model to learn effectively. 
This example demonstrates how to build basic probabilistic Bayesian neural networks to account for these two types of uncertainty. We use the TensorFlow Probability library, which is compatible with the Keras API.

This example requires TensorFlow 2.3 or higher. You can install TensorFlow Probability using the following command:

pip install tensorflow-probability

The dataset

We use the Wine Quality dataset, which is available in TensorFlow Datasets. We use the red wine subset, which contains 4,898 examples. The dataset has 11 numerical physicochemical features of the wine, and the task is to predict the wine quality, which is a score between 0 and 10. In this example, we treat this as a regression task.

You can install TensorFlow Datasets using the following command:

pip install tensorflow-datasets

Setup

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_datasets as tfds
import tensorflow_probability as tfp

Create training and evaluation datasets

Here, we load the wine_quality dataset using tfds.load(), and we convert the target feature to float. Then, we shuffle the dataset and split it into training and test sets. We take the first train_size examples as the train split, and the rest as the test split.

def get_train_and_test_splits(train_size, batch_size=1):
    # We prefetch with a buffer the same size as the dataset because the dataset
    # is very small and fits into memory.
    dataset = (
        tfds.load(name="wine_quality", as_supervised=True, split="train")
        .map(lambda x, y: (x, tf.cast(y, tf.float32)))
        .prefetch(buffer_size=dataset_size)
        .cache()
    )
    # We shuffle with a buffer the same size as the dataset.
    train_dataset = (
        dataset.take(train_size).shuffle(buffer_size=train_size).batch(batch_size)
    )
    test_dataset = dataset.skip(train_size).batch(batch_size)

    return train_dataset, test_dataset

Compile, train, and evaluate the model

hidden_units = [8, 8]
learning_rate = 0.001


def run_experiment(model, loss, train_dataset, test_dataset):
    model.compile(
        optimizer=keras.optimizers.RMSprop(learning_rate=learning_rate),
        loss=loss,
        metrics=[keras.metrics.RootMeanSquaredError()],
    )

    print("Start training the model...")
    model.fit(train_dataset, epochs=num_epochs, validation_data=test_dataset)
    print("Model training finished.")
    _, rmse = model.evaluate(train_dataset, verbose=0)
    print(f"Train RMSE: {round(rmse, 3)}")

    print("Evaluating model performance...")
    _, rmse = model.evaluate(test_dataset, verbose=0)
    print(f"Test RMSE: {round(rmse, 3)}")

Create model inputs

FEATURE_NAMES = [
    "fixed acidity",
    "volatile acidity",
    "citric acid",
    "residual sugar",
    "chlorides",
    "free sulfur dioxide",
    "total sulfur dioxide",
    "density",
    "pH",
    "sulphates",
    "alcohol",
]


def create_model_inputs():
    inputs = {}
    for feature_name in FEATURE_NAMES:
        inputs[feature_name] = layers.Input(
            name=feature_name, shape=(1,), dtype=tf.float32
        )
    return inputs

Experiment 1: standard neural network

We create a standard deterministic neural network model as a baseline.

def create_baseline_model():
    inputs = create_model_inputs()
    input_values = [value for _, value in sorted(inputs.items())]
    features = keras.layers.concatenate(input_values)
    features = layers.BatchNormalization()(features)

    # Create hidden layers with deterministic weights using the Dense layer.
    for units in hidden_units:
        features = layers.Dense(units, activation="sigmoid")(features)
    # The output is deterministic: a single point estimate.
outputs = layers.Dense(units=1)(features) model = keras.Model(inputs=inputs, outputs=outputs) return model Let's split the wine dataset into training and test sets, with 85% and 15% of the examples, respectively. dataset_size = 4898 batch_size = 256 train_size = int(dataset_size * 0.85) train_dataset, test_dataset = get_train_and_test_splits(train_size, batch_size) Now let's train the baseline model. We use the MeanSquaredError as the loss function. num_epochs = 100 mse_loss = keras.losses.MeanSquaredError() baseline_model = create_baseline_model() run_experiment(baseline_model, mse_loss, train_dataset, test_dataset) Start training the model... Epoch 1/100 17/17 [==============================] - 1s 53ms/step - loss: 37.5710 - root_mean_squared_error: 6.1294 - val_loss: 35.6750 - val_root_mean_squared_error: 5.9729 Epoch 2/100 17/17 [==============================] - 0s 7ms/step - loss: 35.5154 - root_mean_squared_error: 5.9594 - val_loss: 34.2430 - val_root_mean_squared_error: 5.8518 Epoch 3/100 17/17 [==============================] - 0s 7ms/step - loss: 33.9975 - root_mean_squared_error: 5.8307 - val_loss: 32.8003 - val_root_mean_squared_error: 5.7272 Epoch 4/100 17/17 [==============================] - 0s 12ms/step - loss: 32.5928 - root_mean_squared_error: 5.7090 - val_loss: 31.3385 - val_root_mean_squared_error: 5.5981 Epoch 5/100 17/17 [==============================] - 0s 7ms/step - loss: 30.8914 - root_mean_squared_error: 5.5580 - val_loss: 29.8659 - val_root_mean_squared_error: 5.4650 ... Epoch 95/100 17/17 [==============================] - 0s 6ms/step - loss: 0.6927 - root_mean_squared_error: 0.8322 - val_loss: 0.6901 - val_root_mean_squared_error: 0.8307 Epoch 96/100 17/17 [==============================] - 0s 6ms/step - loss: 0.6929 - root_mean_squared_error: 0.8323 - val_loss: 0.6866 - val_root_mean_squared_error: 0.8286 Epoch 97/100 17/17 [==============================] - 0s 6ms/step - loss: 0.6582 - root_mean_squared_error: 0.8112 - val_loss: 0.6797 - val_root_mean_squared_error: 0.8244 Epoch 98/100 17/17 [==============================] - 0s 6ms/step - loss: 0.6733 - root_mean_squared_error: 0.8205 - val_loss: 0.6740 - val_root_mean_squared_error: 0.8210 Epoch 99/100 17/17 [==============================] - 0s 7ms/step - loss: 0.6623 - root_mean_squared_error: 0.8138 - val_loss: 0.6713 - val_root_mean_squared_error: 0.8193 Epoch 100/100 17/17 [==============================] - 0s 6ms/step - loss: 0.6522 - root_mean_squared_error: 0.8075 - val_loss: 0.6666 - val_root_mean_squared_error: 0.8165 Model training finished. Train RMSE: 0.809 Evaluating model performance... Test RMSE: 0.816 We take a sample from the test set use the model to obtain predictions for them. Note that since the baseline model is deterministic, we get a single a point estimate prediction for each test example, with no information about the uncertainty of the model nor the prediction. 
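As a quick illustration (a minimal sketch, assuming the baseline_model and test_dataset defined above), calling the deterministic model twice on the same batch returns identical predictions, so repeated calls carry no uncertainty information:

batch_features, _ = next(iter(test_dataset))
pred_1 = baseline_model(batch_features).numpy()
pred_2 = baseline_model(batch_features).numpy()
# The deterministic baseline always produces the same point estimates.
print(np.allclose(pred_1, pred_2))  # True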
sample = 10
examples, targets = list(test_dataset.unbatch().shuffle(batch_size * 10).batch(sample))[
    0
]

predicted = baseline_model(examples).numpy()
for idx in range(sample):
    print(f"Predicted: {round(float(predicted[idx][0]), 1)} - Actual: {targets[idx]}")

Predicted: 6.0 - Actual: 6.0
Predicted: 6.2 - Actual: 6.0
Predicted: 5.8 - Actual: 7.0
Predicted: 6.0 - Actual: 5.0
Predicted: 5.7 - Actual: 5.0
Predicted: 6.2 - Actual: 7.0
Predicted: 5.6 - Actual: 5.0
Predicted: 6.2 - Actual: 6.0
Predicted: 6.2 - Actual: 6.0
Predicted: 6.2 - Actual: 7.0

Experiment 2: Bayesian neural network (BNN)

The objective of the Bayesian approach to modeling neural networks is to capture the epistemic uncertainty, which is uncertainty about the model fitness due to limited training data. The idea is that, instead of learning specific weight (and bias) values in the neural network, the Bayesian approach learns weight distributions - from which we can sample to produce an output for a given input - in order to encode weight uncertainty.

Thus, we need to define the prior and posterior distributions of these weights, and the training process learns the parameters of these distributions.

# Define the prior weight distribution as Normal of mean=0 and stddev=1.
# Note that, in this example, the prior distribution is not trainable,
# as we fix its parameters.
def prior(kernel_size, bias_size, dtype=None):
    n = kernel_size + bias_size
    prior_model = keras.Sequential(
        [
            tfp.layers.DistributionLambda(
                lambda t: tfp.distributions.MultivariateNormalDiag(
                    loc=tf.zeros(n), scale_diag=tf.ones(n)
                )
            )
        ]
    )
    return prior_model


# Define variational posterior weight distribution as multivariate Gaussian.
# Note that the learnable parameters for this distribution are the means,
# variances, and covariances.
def posterior(kernel_size, bias_size, dtype=None):
    n = kernel_size + bias_size
    posterior_model = keras.Sequential(
        [
            tfp.layers.VariableLayer(
                tfp.layers.MultivariateNormalTriL.params_size(n), dtype=dtype
            ),
            tfp.layers.MultivariateNormalTriL(n),
        ]
    )
    return posterior_model

We use the tfp.layers.DenseVariational layer instead of the standard keras.layers.Dense layer in the neural network model.

def create_bnn_model(train_size):
    inputs = create_model_inputs()
    features = keras.layers.concatenate(list(inputs.values()))
    features = layers.BatchNormalization()(features)

    # Create hidden layers with weight uncertainty using the DenseVariational layer.
    for units in hidden_units:
        features = tfp.layers.DenseVariational(
            units=units,
            make_prior_fn=prior,
            make_posterior_fn=posterior,
            kl_weight=1 / train_size,
            activation="sigmoid",
        )(features)

    # The output is deterministic: a single point estimate.
    outputs = layers.Dense(units=1)(features)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model

The epistemic uncertainty can be reduced as we increase the size of the training data. That is, the more data the BNN model sees, the more certain it becomes about its estimates for the weights (distribution parameters). Let's test this behaviour by training the BNN model on a small subset of the training set, and then on the full training set, to compare the output variances.

Train BNN with a small training subset.

num_epochs = 500
train_sample_size = int(train_size * 0.3)
small_train_dataset = train_dataset.unbatch().take(train_sample_size).batch(batch_size)

bnn_model_small = create_bnn_model(train_sample_size)
run_experiment(bnn_model_small, mse_loss, small_train_dataset, test_dataset)

Start training the model...
Epoch 1/500 5/5 [==============================] - 2s 123ms/step - loss: 34.5497 - root_mean_squared_error: 5.8764 - val_loss: 37.1164 - val_root_mean_squared_error: 6.0910 Epoch 2/500 5/5 [==============================] - 0s 28ms/step - loss: 36.0738 - root_mean_squared_error: 6.0007 - val_loss: 31.7373 - val_root_mean_squared_error: 5.6322 Epoch 3/500 5/5 [==============================] - 0s 29ms/step - loss: 33.3177 - root_mean_squared_error: 5.7700 - val_loss: 36.2135 - val_root_mean_squared_error: 6.0164 Epoch 4/500 5/5 [==============================] - 0s 30ms/step - loss: 35.1247 - root_mean_squared_error: 5.9232 - val_loss: 35.6158 - val_root_mean_squared_error: 5.9663 Epoch 5/500 5/5 [==============================] - 0s 23ms/step - loss: 34.7653 - root_mean_squared_error: 5.8936 - val_loss: 34.3038 - val_root_mean_squared_error: 5.8556 ... Epoch 495/500 5/5 [==============================] - 0s 24ms/step - loss: 0.6978 - root_mean_squared_error: 0.8162 - val_loss: 0.6258 - val_root_mean_squared_error: 0.7723 Epoch 496/500 5/5 [==============================] - 0s 22ms/step - loss: 0.6448 - root_mean_squared_error: 0.7858 - val_loss: 0.6372 - val_root_mean_squared_error: 0.7808 Epoch 497/500 5/5 [==============================] - 0s 23ms/step - loss: 0.6871 - root_mean_squared_error: 0.8121 - val_loss: 0.6437 - val_root_mean_squared_error: 0.7825 Epoch 498/500 5/5 [==============================] - 0s 23ms/step - loss: 0.6213 - root_mean_squared_error: 0.7690 - val_loss: 0.6581 - val_root_mean_squared_error: 0.7922 Epoch 499/500 5/5 [==============================] - 0s 22ms/step - loss: 0.6604 - root_mean_squared_error: 0.7913 - val_loss: 0.6522 - val_root_mean_squared_error: 0.7908 Epoch 500/500 5/5 [==============================] - 0s 22ms/step - loss: 0.6190 - root_mean_squared_error: 0.7678 - val_loss: 0.6734 - val_root_mean_squared_error: 0.8037 Model training finished. Train RMSE: 0.805 Evaluating model performance... Test RMSE: 0.801 Since we have trained a BNN model, the model produces a different output each time we call it with the same input, since each time a new set of weights are sampled from the distributions to construct the network and produce an output. The less certain the mode weights are, the more variability (wider range) we will see in the outputs of the same inputs. 
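For example (a minimal sketch, assuming bnn_model_small and the examples batch defined above), two forward passes over the same inputs generally disagree, because a fresh set of weights is sampled from the learned posterior on every call:

out_1 = bnn_model_small(examples).numpy()
out_2 = bnn_model_small(examples).numpy()
# A non-zero difference indicates that different weights were sampled per call.
print(np.abs(out_1 - out_2).max() > 0.0)  # almost surely True

The compute_predictions helper below makes this systematic by repeating the forward pass many times and summarizing the spread.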
def compute_predictions(model, iterations=100): predicted = [] for _ in range(iterations): predicted.append(model(examples).numpy()) predicted = np.concatenate(predicted, axis=1) prediction_mean = np.mean(predicted, axis=1).tolist() prediction_min = np.min(predicted, axis=1).tolist() prediction_max = np.max(predicted, axis=1).tolist() prediction_range = (np.max(predicted, axis=1) - np.min(predicted, axis=1)).tolist() for idx in range(sample): print( f\"Predictions mean: {round(prediction_mean[idx], 2)}, \" f\"min: {round(prediction_min[idx], 2)}, \" f\"max: {round(prediction_max[idx], 2)}, \" f\"range: {round(prediction_range[idx], 2)} - \" f\"Actual: {targets[idx]}\" ) compute_predictions(bnn_model_small) Predictions mean: 5.63, min: 4.92, max: 6.15, range: 1.23 - Actual: 6.0 Predictions mean: 6.35, min: 6.01, max: 6.54, range: 0.53 - Actual: 6.0 Predictions mean: 5.65, min: 4.84, max: 6.25, range: 1.41 - Actual: 7.0 Predictions mean: 5.74, min: 5.21, max: 6.25, range: 1.04 - Actual: 5.0 Predictions mean: 5.99, min: 5.26, max: 6.29, range: 1.03 - Actual: 5.0 Predictions mean: 6.26, min: 6.01, max: 6.47, range: 0.46 - Actual: 7.0 Predictions mean: 5.28, min: 4.73, max: 5.86, range: 1.12 - Actual: 5.0 Predictions mean: 6.34, min: 6.06, max: 6.53, range: 0.47 - Actual: 6.0 Predictions mean: 6.23, min: 5.91, max: 6.44, range: 0.53 - Actual: 6.0 Predictions mean: 6.33, min: 6.05, max: 6.54, range: 0.48 - Actual: 7.0 Train BNN with the whole training set. num_epochs = 500 bnn_model_full = create_bnn_model(train_size) run_experiment(bnn_model_full, mse_loss, train_dataset, test_dataset) compute_predictions(bnn_model_full) Start training the model... Epoch 1/500 17/17 [==============================] - 2s 32ms/step - loss: 25.4811 - root_mean_squared_error: 5.0465 - val_loss: 23.8428 - val_root_mean_squared_error: 4.8824 Epoch 2/500 17/17 [==============================] - 0s 7ms/step - loss: 23.0849 - root_mean_squared_error: 4.8040 - val_loss: 24.1269 - val_root_mean_squared_error: 4.9115 Epoch 3/500 17/17 [==============================] - 0s 7ms/step - loss: 22.5191 - root_mean_squared_error: 4.7449 - val_loss: 23.3312 - val_root_mean_squared_error: 4.8297 Epoch 4/500 17/17 [==============================] - 0s 7ms/step - loss: 22.9571 - root_mean_squared_error: 4.7896 - val_loss: 24.4072 - val_root_mean_squared_error: 4.9399 Epoch 5/500 17/17 [==============================] - 0s 6ms/step - loss: 21.4049 - root_mean_squared_error: 4.6245 - val_loss: 21.1895 - val_root_mean_squared_error: 4.6027 ... 
Epoch 495/500 17/17 [==============================] - 0s 7ms/step - loss: 0.5799 - root_mean_squared_error: 0.7511 - val_loss: 0.5902 - val_root_mean_squared_error: 0.7572 Epoch 496/500 17/17 [==============================] - 0s 6ms/step - loss: 0.5926 - root_mean_squared_error: 0.7603 - val_loss: 0.5961 - val_root_mean_squared_error: 0.7616 Epoch 497/500 17/17 [==============================] - 0s 7ms/step - loss: 0.5928 - root_mean_squared_error: 0.7595 - val_loss: 0.5916 - val_root_mean_squared_error: 0.7595 Epoch 498/500 17/17 [==============================] - 0s 7ms/step - loss: 0.6115 - root_mean_squared_error: 0.7715 - val_loss: 0.5869 - val_root_mean_squared_error: 0.7558 Epoch 499/500 17/17 [==============================] - 0s 7ms/step - loss: 0.6044 - root_mean_squared_error: 0.7673 - val_loss: 0.6007 - val_root_mean_squared_error: 0.7645 Epoch 500/500 17/17 [==============================] - 0s 7ms/step - loss: 0.5853 - root_mean_squared_error: 0.7550 - val_loss: 0.5999 - val_root_mean_squared_error: 0.7651 Model training finished. Train RMSE: 0.762 Evaluating model performance... Test RMSE: 0.759 Predictions mean: 5.41, min: 5.06, max: 5.9, range: 0.84 - Actual: 6.0 Predictions mean: 6.5, min: 6.16, max: 6.61, range: 0.44 - Actual: 6.0 Predictions mean: 5.59, min: 4.96, max: 6.0, range: 1.04 - Actual: 7.0 Predictions mean: 5.67, min: 5.25, max: 6.01, range: 0.76 - Actual: 5.0 Predictions mean: 6.02, min: 5.68, max: 6.39, range: 0.71 - Actual: 5.0 Predictions mean: 6.35, min: 6.11, max: 6.52, range: 0.41 - Actual: 7.0 Predictions mean: 5.21, min: 4.85, max: 5.68, range: 0.83 - Actual: 5.0 Predictions mean: 6.53, min: 6.35, max: 6.64, range: 0.28 - Actual: 6.0 Predictions mean: 6.3, min: 6.05, max: 6.47, range: 0.42 - Actual: 6.0 Predictions mean: 6.44, min: 6.19, max: 6.59, range: 0.4 - Actual: 7.0 Notice that the model trained with the full training dataset shows smaller range (uncertainty) in the prediction values for the same inputs, compared to the model trained with a subset of the training dataset. Experiment 3: probabilistic Bayesian neural network So far, the output of the standard and the Bayesian NN models that we built is deterministic, that is, produces a point estimate as a prediction for a given example. We can create a probabilistic NN by letting the model output a distribution. In this case, the model captures the aleatoric uncertainty as well, which is due to irreducible noise in the data, or to the stochastic nature of the process generating the data. In this example, we model the output as a IndependentNormal distribution, with learnable mean and variance parameters. If the task was classification, we would have used IndependentBernoulli with binary classes, and OneHotCategorical with multiple classes, to model distribution of the model output. def create_probablistic_bnn_model(train_size): inputs = create_model_inputs() features = keras.layers.concatenate(list(inputs.values())) features = layers.BatchNormalization()(features) # Create hidden layers with weight uncertainty using the DenseVariational layer. for units in hidden_units: features = tfp.layers.DenseVariational( units=units, make_prior_fn=prior, make_posterior_fn=posterior, kl_weight=1 / train_size, activation=\"sigmoid\", )(features) # Create a probabilisticå output (Normal distribution), and use the `Dense` layer # to produce the parameters of the distribution. # We set units=2 to learn both the mean and the variance of the Normal distribution. 
distribution_params = layers.Dense(units=2)(features) outputs = tfp.layers.IndependentNormal(1)(distribution_params) model = keras.Model(inputs=inputs, outputs=outputs) return model Since the output of the model is a distribution, rather than a point estimate, we use the negative loglikelihood as our loss function to compute how likely to see the true data (targets) from the estimated distribution produced by the model. def negative_loglikelihood(targets, estimated_distribution): return -estimated_distribution.log_prob(targets) num_epochs = 1000 prob_bnn_model = create_probablistic_bnn_model(train_size) run_experiment(prob_bnn_model, negative_loglikelihood, train_dataset, test_dataset) Start training the model... Epoch 1/1000 17/17 [==============================] - 2s 36ms/step - loss: 11.2378 - root_mean_squared_error: 6.6758 - val_loss: 8.5554 - val_root_mean_squared_error: 6.6240 Epoch 2/1000 17/17 [==============================] - 0s 7ms/step - loss: 11.8285 - root_mean_squared_error: 6.5718 - val_loss: 8.2138 - val_root_mean_squared_error: 6.5256 Epoch 3/1000 17/17 [==============================] - 0s 7ms/step - loss: 8.8566 - root_mean_squared_error: 6.5369 - val_loss: 5.8749 - val_root_mean_squared_error: 6.3394 Epoch 4/1000 17/17 [==============================] - 0s 7ms/step - loss: 7.8191 - root_mean_squared_error: 6.3981 - val_loss: 7.6224 - val_root_mean_squared_error: 6.4473 Epoch 5/1000 17/17 [==============================] - 0s 7ms/step - loss: 6.2598 - root_mean_squared_error: 6.4613 - val_loss: 5.9415 - val_root_mean_squared_error: 6.3466 ... Epoch 995/1000 17/17 [==============================] - 0s 7ms/step - loss: 1.1323 - root_mean_squared_error: 1.0431 - val_loss: 1.1553 - val_root_mean_squared_error: 1.1060 Epoch 996/1000 17/17 [==============================] - 0s 7ms/step - loss: 1.1613 - root_mean_squared_error: 1.0686 - val_loss: 1.1554 - val_root_mean_squared_error: 1.0370 Epoch 997/1000 17/17 [==============================] - 0s 7ms/step - loss: 1.1351 - root_mean_squared_error: 1.0628 - val_loss: 1.1472 - val_root_mean_squared_error: 1.0813 Epoch 998/1000 17/17 [==============================] - 0s 7ms/step - loss: 1.1324 - root_mean_squared_error: 1.0858 - val_loss: 1.1527 - val_root_mean_squared_error: 1.0578 Epoch 999/1000 17/17 [==============================] - 0s 7ms/step - loss: 1.1591 - root_mean_squared_error: 1.0801 - val_loss: 1.1483 - val_root_mean_squared_error: 1.0442 Epoch 1000/1000 17/17 [==============================] - 0s 7ms/step - loss: 1.1402 - root_mean_squared_error: 1.0554 - val_loss: 1.1495 - val_root_mean_squared_error: 1.0389 Model training finished. Train RMSE: 1.068 Evaluating model performance... Test RMSE: 1.068 Now let's produce an output from the model given the test examples. The output is now a distribution, and we can use its mean and variance to compute the confidence intervals (CI) of the prediction. 
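Because the model now returns a tfp distribution object rather than a plain tensor, we can also draw samples from the predictive distribution directly. Here is a minimal sketch (assuming prob_bnn_model and examples from above), before computing the confidence intervals below:

pred_dist = prob_bnn_model(examples)
# Draw 5 plausible quality scores per test example from the predictive distribution.
sampled_ratings = pred_dist.sample(5).numpy()
print(sampled_ratings.shape)  # (5, number_of_examples, 1)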
prediction_distribution = prob_bnn_model(examples) prediction_mean = prediction_distribution.mean().numpy().tolist() prediction_stdv = prediction_distribution.stddev().numpy() # The 95% CI is computed as mean ± (1.96 * stdv) upper = (prediction_mean + (1.96 * prediction_stdv)).tolist() lower = (prediction_mean - (1.96 * prediction_stdv)).tolist() prediction_stdv = prediction_stdv.tolist() for idx in range(sample): print( f\"Prediction mean: {round(prediction_mean[idx][0], 2)}, \" f\"stddev: {round(prediction_stdv[idx][0], 2)}, \" f\"95% CI: [{round(upper[idx][0], 2)} - {round(lower[idx][0], 2)}]\" f\" - Actual: {targets[idx]}\" ) Prediction mean: 5.29, stddev: 0.66, 95% CI: [6.58 - 4.0] - Actual: 6.0 Prediction mean: 6.49, stddev: 0.81, 95% CI: [8.08 - 4.89] - Actual: 6.0 Prediction mean: 5.85, stddev: 0.7, 95% CI: [7.22 - 4.48] - Actual: 7.0 Prediction mean: 5.59, stddev: 0.69, 95% CI: [6.95 - 4.24] - Actual: 5.0 Prediction mean: 6.37, stddev: 0.87, 95% CI: [8.07 - 4.67] - Actual: 5.0 Prediction mean: 6.34, stddev: 0.78, 95% CI: [7.87 - 4.81] - Actual: 7.0 Prediction mean: 5.14, stddev: 0.65, 95% CI: [6.4 - 3.87] - Actual: 5.0 Prediction mean: 6.49, stddev: 0.81, 95% CI: [8.09 - 4.89] - Actual: 6.0 Prediction mean: 6.25, stddev: 0.77, 95% CI: [7.76 - 4.74] - Actual: 6.0 Prediction mean: 6.39, stddev: 0.78, 95% CI: [7.92 - 4.85] - Actual: 7.0 Demonstration of custom layer creation. Introduction This example shows how to create custom layers, using the Antirectifier layer (originally proposed as a Keras example script in January 2016), an alternative to ReLU. Instead of zeroing-out the negative part of the input, it splits the negative and positive parts and returns the concatenation of the absolute value of both. This avoids loss of information, at the cost of an increase in dimensionality. To fix the dimensionality increase, we linearly combine the features back to a space of the original size. Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers The Antirectifier layer class Antirectifier(layers.Layer): def __init__(self, initializer=\"he_normal\", **kwargs): super(Antirectifier, self).__init__(**kwargs) self.initializer = keras.initializers.get(initializer) def build(self, input_shape): output_dim = input_shape[-1] self.kernel = self.add_weight( shape=(output_dim * 2, output_dim), initializer=self.initializer, name=\"kernel\", trainable=True, ) def call(self, inputs): inputs -= tf.reduce_mean(inputs, axis=-1, keepdims=True) pos = tf.nn.relu(inputs) neg = tf.nn.relu(-inputs) concatenated = tf.concat([pos, neg], axis=-1) mixed = tf.matmul(concatenated, self.kernel) return mixed def get_config(self): # Implement get_config to enable serialization. This is optional. 
base_config = super(Antirectifier, self).get_config() config = {\"initializer\": keras.initializers.serialize(self.initializer)} return dict(list(base_config.items()) + list(config.items())) Let's test-drive it on MNIST # Training parameters batch_size = 128 num_classes = 10 epochs = 20 # The data, split between train and test sets (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() x_train = x_train.reshape(-1, 784) x_test = x_test.reshape(-1, 784) x_train = x_train.astype(\"float32\") x_test = x_test.astype(\"float32\") x_train /= 255 x_test /= 255 print(x_train.shape[0], \"train samples\") print(x_test.shape[0], \"test samples\") # Build the model model = keras.Sequential( [ keras.Input(shape=(784,)), layers.Dense(256), Antirectifier(), layers.Dense(256), Antirectifier(), layers.Dropout(0.5), layers.Dense(10), ] ) # Compile the model model.compile( loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer=keras.optimizers.RMSprop(), metrics=[keras.metrics.SparseCategoricalAccuracy()], ) # Train the model model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.15) # Test the model model.evaluate(x_test, y_test) 60000 train samples 10000 test samples Epoch 1/20 399/399 [==============================] - 2s 5ms/step - loss: 0.3827 - sparse_categorical_accuracy: 0.8882 - val_loss: 0.1407 - val_sparse_categorical_accuracy: 0.9587 Epoch 2/20 399/399 [==============================] - 2s 5ms/step - loss: 0.1771 - sparse_categorical_accuracy: 0.9513 - val_loss: 0.1337 - val_sparse_categorical_accuracy: 0.9674 Epoch 3/20 399/399 [==============================] - 2s 5ms/step - loss: 0.1400 - sparse_categorical_accuracy: 0.9620 - val_loss: 0.1225 - val_sparse_categorical_accuracy: 0.9709 Epoch 4/20 399/399 [==============================] - 2s 5ms/step - loss: 0.1099 - sparse_categorical_accuracy: 0.9707 - val_loss: 0.1465 - val_sparse_categorical_accuracy: 0.9636 Epoch 5/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0996 - sparse_categorical_accuracy: 0.9739 - val_loss: 0.1703 - val_sparse_categorical_accuracy: 0.9626 Epoch 6/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0860 - sparse_categorical_accuracy: 0.9774 - val_loss: 0.1354 - val_sparse_categorical_accuracy: 0.9712 Epoch 7/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0833 - sparse_categorical_accuracy: 0.9791 - val_loss: 0.2018 - val_sparse_categorical_accuracy: 0.9574 Epoch 8/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0712 - sparse_categorical_accuracy: 0.9814 - val_loss: 0.1527 - val_sparse_categorical_accuracy: 0.9723 Epoch 9/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0710 - sparse_categorical_accuracy: 0.9827 - val_loss: 0.1613 - val_sparse_categorical_accuracy: 0.9694 Epoch 10/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0633 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.1463 - val_sparse_categorical_accuracy: 0.9758 Epoch 11/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0604 - sparse_categorical_accuracy: 0.9856 - val_loss: 0.1390 - val_sparse_categorical_accuracy: 0.9769 Epoch 12/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0561 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.1761 - val_sparse_categorical_accuracy: 0.9740 Epoch 13/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0589 - sparse_categorical_accuracy: 0.9873 - val_loss: 0.1598 - 
val_sparse_categorical_accuracy: 0.9769 Epoch 14/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0527 - sparse_categorical_accuracy: 0.9879 - val_loss: 0.1565 - val_sparse_categorical_accuracy: 0.9802 Epoch 15/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0563 - sparse_categorical_accuracy: 0.9878 - val_loss: 0.1970 - val_sparse_categorical_accuracy: 0.9758 Epoch 16/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0525 - sparse_categorical_accuracy: 0.9888 - val_loss: 0.1937 - val_sparse_categorical_accuracy: 0.9757 Epoch 17/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0522 - sparse_categorical_accuracy: 0.9898 - val_loss: 0.1777 - val_sparse_categorical_accuracy: 0.9797 Epoch 18/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0568 - sparse_categorical_accuracy: 0.9894 - val_loss: 0.1831 - val_sparse_categorical_accuracy: 0.9791 Epoch 19/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0526 - sparse_categorical_accuracy: 0.9900 - val_loss: 0.1812 - val_sparse_categorical_accuracy: 0.9782 Epoch 20/20 399/399 [==============================] - 2s 5ms/step - loss: 0.0503 - sparse_categorical_accuracy: 0.9902 - val_loss: 0.2098 - val_sparse_categorical_accuracy: 0.9776 313/313 [==============================] - 0s 731us/step - loss: 0.2002 - sparse_categorical_accuracy: 0.9776 [0.20024622976779938, 0.9775999784469604] Overview of how to use the TensorFlow NumPy API to write Keras models. Introduction NumPy is a hugely successful Python linear algebra library. TensorFlow recently launched tf_numpy, a TensorFlow implementation of a large subset of the NumPy API. Thanks to tf_numpy, you can write Keras layers or models in the NumPy style! The TensorFlow NumPy API has full integration with the TensorFlow ecosystem. Features such as automatic differentiation, TensorBoard, Keras model callbacks, TPU distribution and model exporting are all supported. Let's run through a few examples. Setup TensorFlow NumPy requires TensorFlow 2.5 or later. import tensorflow as tf import tensorflow.experimental.numpy as tnp import keras import keras.layers as layers import numpy as np Optionally, you can call tnp.experimental_enable_numpy_behavior() to enable type promotion in TensorFlow. This allows TNP to more closely follow the NumPy standard. tnp.experimental_enable_numpy_behavior() To test our models we will use the Boston housing prices regression dataset. (x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data( path=\"boston_housing.npz\", test_split=0.2, seed=113 ) def evaluate_model(model: keras.Model): [loss, percent_error] = model.evaluate(x_test, y_test, verbose=0) print(\"Mean absolute percent error before training: \", percent_error) model.fit(x_train, y_train, epochs=200, verbose=0) [loss, percent_error] = model.evaluate(x_test, y_test, verbose=0) print(\"Mean absolute percent error after training:\", percent_error) Subclassing keras.Model with TNP The most flexible way to make use of the Keras API is to subclass the [keras.Model](/api/models/model#model-class) class. Subclassing the Model class gives you the ability to fully customize what occurs in the training loop. This makes subclassing Model a popular option for researchers. In this example, we will implement a Model subclass that performs regression over the boston housing dataset using the TNP API. 
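Before building the model, here is a tiny warm-up (an illustrative sketch, not part of the original example) showing that tnp operations interoperate with regular tf.Tensors and that gradients flow through them via tf.GradientTape:

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
with tf.GradientTape() as tape:
    tape.watch(x)
    # Mix a tf.Tensor with tnp ops: matmul against an all-ones matrix, then sum.
    y = tnp.sum(tnp.matmul(x, tnp.ones((2, 2), dtype=np.float32)))
grad = tape.gradient(y, x)
print(grad.numpy())  # every entry is 2.0 (the column count of the ones matrix)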
Note that differentiation and gradient descent are handled automatically when using the TNP API alongside Keras. First, let's define a simple TNPForwardFeedRegressionNetwork class. class TNPForwardFeedRegressionNetwork(keras.Model): def __init__(self, blocks=None, **kwargs): super(TNPForwardFeedRegressionNetwork, self).__init__(**kwargs) if not isinstance(blocks, list): raise ValueError(f\"blocks must be a list, got blocks={blocks}\") self.blocks = blocks self.block_weights = None self.biases = None def build(self, input_shape): current_shape = input_shape[1] self.block_weights = [] self.biases = [] for i, block in enumerate(self.blocks): self.block_weights.append( self.add_weight( shape=(current_shape, block), trainable=True, name=f\"block-{i}\" ) ) self.biases.append( self.add_weight(shape=(block,), trainable=True, name=f\"bias-{i}\") ) current_shape = block self.linear_layer = self.add_weight( shape=(current_shape, 1), name=\"linear_projector\", trainable=True ) def call(self, inputs): activations = inputs for w, b in zip(self.block_weights, self.biases): activations = tnp.matmul(activations, w) + b # ReLU activation function activations = tnp.maximum(activations, 0.0) return tnp.matmul(activations, self.linear_layer) Just like with any other Keras model, we can utilize any supported optimizer, loss, metrics or callbacks that we want. Let's see how the model performs! model = TNPForwardFeedRegressionNetwork(blocks=[3, 3]) model.compile( optimizer=\"adam\", loss=\"mean_squared_error\", metrics=[keras.metrics.MeanAbsolutePercentageError()], ) evaluate_model(model) Mean absolute percent error before training: 422.45343017578125 Mean absolute percent error after training: 97.24715423583984 Great! Our model seems to be effectively learning to solve the problem at hand. We can also write our own custom loss function using TNP. def tnp_mse(y_true, y_pred): return tnp.mean(tnp.square(y_true - y_pred), axis=0) keras.backend.clear_session() model = TNPForwardFeedRegressionNetwork(blocks=[3, 3]) model.compile( optimizer=\"adam\", loss=tnp_mse, metrics=[keras.metrics.MeanAbsolutePercentageError()], ) evaluate_model(model) Mean absolute percent error before training: 79.84039306640625 Mean absolute percent error after training: 28.658035278320312 Implementing a Keras Layer Based Model with TNP If desired, TNP can also be used in a layer-oriented Keras code structure. Let's implement the same model, but using a layered approach!
def tnp_relu(x): return tnp.maximum(x, 0) class TNPDense(keras.layers.Layer): def __init__(self, units, activation=None): super().__init__() self.units = units self.activation = activation def build(self, input_shape): self.w = self.add_weight( name=\"weights\", shape=(input_shape[1], self.units), initializer=\"random_normal\", trainable=True, ) self.bias = self.add_weight( name=\"bias\", shape=(self.units,), initializer=\"random_normal\", trainable=True, ) def call(self, inputs): outputs = tnp.matmul(inputs, self.w) + self.bias if self.activation: return self.activation(outputs) return outputs def create_layered_tnp_model(): return keras.Sequential( [ TNPDense(3, activation=tnp_relu), TNPDense(3, activation=tnp_relu), TNPDense(1), ] ) model = create_layered_tnp_model() model.compile( optimizer=\"adam\", loss=\"mean_squared_error\", metrics=[keras.metrics.MeanAbsolutePercentageError()], ) model.build((None, 13,)) model.summary() evaluate_model(model) Model: \"sequential\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= tnp_dense (TNPDense) (None, 3) 42 _________________________________________________________________ tnp_dense_1 (TNPDense) (None, 3) 12 _________________________________________________________________ tnp_dense_2 (TNPDense) (None, 1) 4 ================================================================= Total params: 58 Trainable params: 58 Non-trainable params: 0 _________________________________________________________________ Mean absolute percent error before training: 101.17143249511719 Mean absolute percent error after training: 23.479856491088867 You can also seamlessly switch between TNP layers and native Keras layers! def create_mixed_model(): return keras.Sequential( [ TNPDense(3, activation=tnp_relu), # The model will have no issue using a normal Dense layer layers.Dense(3, activation=\"relu\"), # ... or switching back to tnp layers! TNPDense(1), ] ) model = create_mixed_model() model.compile( optimizer=\"adam\", loss=\"mean_squared_error\", metrics=[keras.metrics.MeanAbsolutePercentageError()], ) model.build((None, 13,)) model.summary() evaluate_model(model) Model: \"sequential_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= tnp_dense_3 (TNPDense) (None, 3) 42 _________________________________________________________________ dense (Dense) (None, 3) 12 _________________________________________________________________ tnp_dense_4 (TNPDense) (None, 1) 4 ================================================================= Total params: 58 Trainable params: 58 Non-trainable params: 0 _________________________________________________________________ Mean absolute percent error before training: 104.59967041015625 Mean absolute percent error after training: 27.712949752807617 The Keras API offers a wide variety of layers. The ability to use them alongside NumPy code can be a huge time saver in projects. Distribution Strategy TensorFlow NumPy and Keras integrate with TensorFlow Distribution Strategies. This makes it simple to perform distributed training across multiple GPUs, or even an entire TPU Pod. gpus = tf.config.list_logical_devices(\"GPU\") if gpus: strategy = tf.distribute.MirroredStrategy(gpus) else: # We can fallback to a no-op CPU strategy. 
strategy = tf.distribute.get_strategy() print(\"Running with strategy:\", str(strategy.__class__.__name__)) with strategy.scope(): model = create_layered_tnp_model() model.compile( optimizer=\"adam\", loss=\"mean_squared_error\", metrics=[keras.metrics.MeanAbsolutePercentageError()], ) model.build((None, 13,)) model.summary() evaluate_model(model) Running with strategy: _DefaultDistributionStrategy Model: \"sequential_2\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= tnp_dense_5 (TNPDense) (None, 3) 42 _________________________________________________________________ tnp_dense_6 (TNPDense) (None, 3) 12 _________________________________________________________________ tnp_dense_7 (TNPDense) (None, 1) 4 ================================================================= Total params: 58 Trainable params: 58 Non-trainable params: 0 _________________________________________________________________ Mean absolute percent error before training: 100.5331039428711 Mean absolute percent error after training: 20.71842384338379 TensorBoard Integration One of the many benefits of using the Keras API is the ability to monitor training through TensorBoard. Using the TensorFlow NumPy API alongside Keras allows you to easily leverage TensorBoard. keras.backend.clear_session() To load the TensorBoard extension from a Jupyter notebook, you can run the following magic: %load_ext tensorboard models = [ (TNPForwardFeedRegressionNetwork(blocks=[3, 3]), \"TNPForwardFeedRegressionNetwork\"), (create_layered_tnp_model(), \"layered_tnp_model\"), (create_mixed_model(), \"mixed_model\"), ] for model, model_name in models: model.compile( optimizer=\"adam\", loss=\"mean_squared_error\", metrics=[keras.metrics.MeanAbsolutePercentageError()], ) model.fit( x_train, y_train, epochs=200, verbose=0, callbacks=[keras.callbacks.TensorBoard(log_dir=f\"logs/{model_name}\")], ) To open TensorBoard from a Jupyter notebook, you can use the %tensorboard magic: %tensorboard --logdir logs TensorBoard lets you monitor metrics and examine the training curves. Tensorboard training graph TensorBoard also allows you to explore the computation graph used in your models. Tensorboard graph exploration The ability to introspect into your models can be valuable during debugging. Conclusion Porting existing NumPy code to Keras models using the tensorflow_numpy API is easy! By integrating with Keras you gain the ability to use existing Keras callbacks, metrics and optimizers, easily distribute your training and use TensorBoard. Migrating a more complex model, such as a ResNet, to the TensorFlow NumPy API would be a great follow-up learning exercise. Several open source NumPy ResNet implementations are available online. Implement the Actor Critic method in the CartPole environment. Introduction This script shows an implementation of the Actor Critic method on the CartPole-V0 environment. Actor Critic Method As an agent takes actions and moves through an environment, it learns to map the observed state of the environment to two possible outputs: Recommended action: A probability value for each action in the action space. The part of the agent responsible for this output is called the actor. Estimated rewards in the future: Sum of all rewards it expects to receive in the future. The part of the agent responsible for this output is the critic. The actor and the critic learn to perform their tasks, such that the recommended actions from the actor maximize the rewards.
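As a distilled sketch of that idea (illustrative only; the full example below uses a Huber loss for the critic rather than a plain squared error), for a single timestep with the log-probability log_prob of the chosen action, the critic's estimate value, and the observed return ret:
import tensorflow as tf

def actor_critic_losses(log_prob, value, ret):
    # How much better the observed return was than the critic expected.
    advantage = ret - value
    # Raise the probability of actions that beat the critic's estimate.
    actor_loss = -log_prob * advantage
    # Move the critic's estimate toward the observed return.
    critic_loss = tf.square(ret - value)
    return actor_loss, critic_loss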
CartPole-V0 A pole is attached to a cart placed on a frictionless track. The agent has to apply force to move the cart. It is rewarded for every time step the pole remains upright. The agent, therefore, must learn to keep the pole from falling over. References CartPole Actor Critic Method Setup import gym import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers # Configuration parameters for the whole setup seed = 42 gamma = 0.99 # Discount factor for past rewards max_steps_per_episode = 10000 env = gym.make(\"CartPole-v0\") # Create the environment env.seed(seed) eps = np.finfo(np.float32).eps.item() # Smallest number such that 1.0 + eps != 1.0 Implement Actor Critic network This network learns two functions: Actor: This takes as input the state of our environment and returns a probability value for each action in its action space. Critic: This takes as input the state of our environment and returns an estimate of total rewards in the future. In our implementation, they share the initial layer. num_inputs = 4 num_actions = 2 num_hidden = 128 inputs = layers.Input(shape=(num_inputs,)) common = layers.Dense(num_hidden, activation=\"relu\")(inputs) action = layers.Dense(num_actions, activation=\"softmax\")(common) critic = layers.Dense(1)(common) model = keras.Model(inputs=inputs, outputs=[action, critic]) Train optimizer = keras.optimizers.Adam(learning_rate=0.01) huber_loss = keras.losses.Huber() action_probs_history = [] critic_value_history = [] rewards_history = [] running_reward = 0 episode_count = 0 while True: # Run until solved state = env.reset() episode_reward = 0 with tf.GradientTape() as tape: for timestep in range(1, max_steps_per_episode): # env.render(); Adding this line would show the attempts # of the agent in a pop up window. state = tf.convert_to_tensor(state) state = tf.expand_dims(state, 0) # Predict action probabilities and estimated future rewards # from environment state action_probs, critic_value = model(state) critic_value_history.append(critic_value[0, 0]) # Sample action from action probability distribution action = np.random.choice(num_actions, p=np.squeeze(action_probs)) action_probs_history.append(tf.math.log(action_probs[0, action])) # Apply the sampled action in our environment state, reward, done, _ = env.step(action) rewards_history.append(reward) episode_reward += reward if done: break # Update running reward to check condition for solving running_reward = 0.05 * episode_reward + (1 - 0.05) * running_reward # Calculate expected value from rewards # - At each timestep what was the total reward received after that timestep # - Rewards in the past are discounted by multiplying them with gamma # - These are the labels for our critic returns = [] discounted_sum = 0 for r in rewards_history[::-1]: discounted_sum = r + gamma * discounted_sum returns.insert(0, discounted_sum) # Normalize returns = np.array(returns) returns = (returns - np.mean(returns)) / (np.std(returns) + eps) returns = returns.tolist() # Calculating loss values to update our network history = zip(action_probs_history, critic_value_history, returns) actor_losses = [] critic_losses = [] for log_prob, value, ret in history: # At this point in history, the critic estimated that we would get a # total reward = `value` in the future. We took an action with log probability # of `log_prob` and ended up recieving a total reward = `ret`. 
# The actor must be updated so that it predicts an action that leads to # high rewards (compared to critic's estimate) with high probability. diff = ret - value actor_losses.append(-log_prob * diff) # actor loss # The critic must be updated so that it predicts a better estimate of # the future rewards. critic_losses.append( huber_loss(tf.expand_dims(value, 0), tf.expand_dims(ret, 0)) ) # Backpropagation loss_value = sum(actor_losses) + sum(critic_losses) grads = tape.gradient(loss_value, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) # Clear the loss and reward history action_probs_history.clear() critic_value_history.clear() rewards_history.clear() # Log details episode_count += 1 if episode_count % 10 == 0: template = \"running reward: {:.2f} at episode {}\" print(template.format(running_reward, episode_count)) if running_reward > 195: # Condition to consider the task solved print(\"Solved at episode {}!\".format(episode_count)) break running reward: 8.82 at episode 10 running reward: 23.04 at episode 20 running reward: 28.41 at episode 30 running reward: 53.59 at episode 40 running reward: 53.71 at episode 50 running reward: 77.35 at episode 60 running reward: 74.76 at episode 70 running reward: 57.89 at episode 80 running reward: 46.59 at episode 90 running reward: 43.48 at episode 100 running reward: 63.77 at episode 110 running reward: 111.13 at episode 120 running reward: 142.77 at episode 130 running reward: 127.96 at episode 140 running reward: 113.92 at episode 150 running reward: 128.57 at episode 160 running reward: 139.95 at episode 170 running reward: 154.95 at episode 180 running reward: 171.45 at episode 190 running reward: 171.33 at episode 200 running reward: 177.74 at episode 210 running reward: 184.76 at episode 220 running reward: 190.88 at episode 230 running reward: 154.78 at episode 240 running reward: 114.38 at episode 250 running reward: 107.51 at episode 260 running reward: 128.99 at episode 270 running reward: 157.48 at episode 280 running reward: 174.54 at episode 290 running reward: 184.76 at episode 300 running reward: 190.87 at episode 310 running reward: 194.54 at episode 320 Solved at episode 322! Visualizations In early stages of training: Imgur In later stages of training: Imgur Implementing the DDPG algorithm on the Inverted Pendulum problem. Introduction Deep Deterministic Policy Gradient (DDPG) is a model-free off-policy algorithm for learning continuous actions. It combines ideas from DPG (Deterministic Policy Gradient) and DQN (Deep Q-Network). It uses Experience Replay and slow-learning target networks from DQN, and it is based on DPG, which can operate over continuous action spaces. This tutorial closely follows this paper - Continuous control with deep reinforcement learning Problem We are trying to solve the classic Inverted Pendulum control problem. In this setting, we can take only two actions: swing left or swing right. What makes this problem challenging for Q-Learning algorithms is that actions are continuous instead of being discrete. That is, instead of using two discrete actions like -1 or +1, we have to select from infinitely many actions ranging from -2 to +2. Quick theory Just like the Actor-Critic method, we have two networks: Actor - It proposes an action given a state. Critic - It predicts if the action is good (positive value) or bad (negative value) given a state and an action. DDPG uses two more techniques not present in the original DQN: First, it uses two Target networks. Why?
Because they add stability to training. In short, we are learning from estimated targets and Target networks are updated slowly, hence keeping our estimated targets stable. Conceptually, this is like saying, \"I have an idea of how to play this well, I'm going to try it out for a bit until I find something better\", as opposed to saying \"I'm going to re-learn how to play this entire game after every move\". See this StackOverflow answer. Second, it uses Experience Replay. We store a list of tuples (state, action, reward, next_state), and instead of learning only from recent experience, we learn from sampling all of our experience accumulated so far. Now, let's see how it is implemented. import gym import tensorflow as tf from tensorflow.keras import layers import numpy as np import matplotlib.pyplot as plt We use OpenAI Gym to create the environment. We will use the upper_bound parameter to scale our actions later. problem = \"Pendulum-v0\" env = gym.make(problem) num_states = env.observation_space.shape[0] print(\"Size of State Space -> {}\".format(num_states)) num_actions = env.action_space.shape[0] print(\"Size of Action Space -> {}\".format(num_actions)) upper_bound = env.action_space.high[0] lower_bound = env.action_space.low[0] print(\"Max Value of Action -> {}\".format(upper_bound)) print(\"Min Value of Action -> {}\".format(lower_bound)) Size of State Space -> 3 Size of Action Space -> 1 Max Value of Action -> 2.0 Min Value of Action -> -2.0 To implement better exploration by the Actor network, we use noisy perturbations, specifically an Ornstein-Uhlenbeck process for generating noise, as described in the paper. It samples noise from a correlated normal distribution. class OUActionNoise: def __init__(self, mean, std_deviation, theta=0.15, dt=1e-2, x_initial=None): self.theta = theta self.mean = mean self.std_dev = std_deviation self.dt = dt self.x_initial = x_initial self.reset() def __call__(self): # Formula taken from https://www.wikipedia.org/wiki/Ornstein-Uhlenbeck_process. x = ( self.x_prev + self.theta * (self.mean - self.x_prev) * self.dt + self.std_dev * np.sqrt(self.dt) * np.random.normal(size=self.mean.shape) ) # Store x into x_prev # Makes next noise dependent on current one self.x_prev = x return x def reset(self): if self.x_initial is not None: self.x_prev = self.x_initial else: self.x_prev = np.zeros_like(self.mean) The Buffer class implements Experience Replay. Algorithm Critic loss - the mean squared error of y - Q(s, a), where y is the expected return as seen by the Target network, and Q(s, a) is the action value predicted by the Critic network. y is a moving target that the critic model tries to achieve; we make this target stable by updating the Target model slowly. Actor loss - This is computed using the mean of the value given by the Critic network for the actions taken by the Actor network. We seek to maximize this quantity. Hence we update the Actor network so that it produces actions that get the maximum predicted value as seen by the Critic, for a given state. class Buffer: def __init__(self, buffer_capacity=100000, batch_size=64): # Number of \"experiences\" to store at max self.buffer_capacity = buffer_capacity # Num of tuples to train on. self.batch_size = batch_size # It tells us the number of times record() was called.
self.buffer_counter = 0 # Instead of list of tuples as the exp.replay concept go # We use different np.arrays for each tuple element self.state_buffer = np.zeros((self.buffer_capacity, num_states)) self.action_buffer = np.zeros((self.buffer_capacity, num_actions)) self.reward_buffer = np.zeros((self.buffer_capacity, 1)) self.next_state_buffer = np.zeros((self.buffer_capacity, num_states)) # Takes (s,a,r,s') obervation tuple as input def record(self, obs_tuple): # Set index to zero if buffer_capacity is exceeded, # replacing old records index = self.buffer_counter % self.buffer_capacity self.state_buffer[index] = obs_tuple[0] self.action_buffer[index] = obs_tuple[1] self.reward_buffer[index] = obs_tuple[2] self.next_state_buffer[index] = obs_tuple[3] self.buffer_counter += 1 # Eager execution is turned on by default in TensorFlow 2. Decorating with tf.function allows # TensorFlow to build a static graph out of the logic and computations in our function. # This provides a large speed up for blocks of code that contain many small TensorFlow operations such as this one. @tf.function def update( self, state_batch, action_batch, reward_batch, next_state_batch, ): # Training and updating Actor & Critic networks. # See Pseudo Code. with tf.GradientTape() as tape: target_actions = target_actor(next_state_batch, training=True) y = reward_batch + gamma * target_critic( [next_state_batch, target_actions], training=True ) critic_value = critic_model([state_batch, action_batch], training=True) critic_loss = tf.math.reduce_mean(tf.math.square(y - critic_value)) critic_grad = tape.gradient(critic_loss, critic_model.trainable_variables) critic_optimizer.apply_gradients( zip(critic_grad, critic_model.trainable_variables) ) with tf.GradientTape() as tape: actions = actor_model(state_batch, training=True) critic_value = critic_model([state_batch, actions], training=True) # Used `-value` as we want to maximize the value given # by the critic for our actions actor_loss = -tf.math.reduce_mean(critic_value) actor_grad = tape.gradient(actor_loss, actor_model.trainable_variables) actor_optimizer.apply_gradients( zip(actor_grad, actor_model.trainable_variables) ) # We compute the loss and update parameters def learn(self): # Get sampling range record_range = min(self.buffer_counter, self.buffer_capacity) # Randomly sample indices batch_indices = np.random.choice(record_range, self.batch_size) # Convert to tensors state_batch = tf.convert_to_tensor(self.state_buffer[batch_indices]) action_batch = tf.convert_to_tensor(self.action_buffer[batch_indices]) reward_batch = tf.convert_to_tensor(self.reward_buffer[batch_indices]) reward_batch = tf.cast(reward_batch, dtype=tf.float32) next_state_batch = tf.convert_to_tensor(self.next_state_buffer[batch_indices]) self.update(state_batch, action_batch, reward_batch, next_state_batch) # This update target parameters slowly # Based on rate `tau`, which is much less than one. @tf.function def update_target(target_weights, weights, tau): for (a, b) in zip(target_weights, weights): a.assign(b * tau + a * (1 - tau)) Here we define the Actor and Critic networks. These are basic Dense models with ReLU activation. Note: We need the initialization for last layer of the Actor to be between -0.003 and 0.003 as this prevents us from getting 1 or -1 output values in the initial stages, which would squash our gradients to zero, as we use the tanh activation. 
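As a quick numeric aside (not part of the original example), the saturation argument can be checked directly: the derivative of tanh(x) is 1 - tanh(x)^2, which is close to 1 for small inputs but essentially vanishes once the pre-activation grows.
import numpy as np

for x in [0.003, 1.0, 5.0]:
    # Gradient of tanh at x: roughly 1.0, 0.42 and 1.8e-4 respectively.
    print(x, 1.0 - np.tanh(x) ** 2)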
def get_actor(): # Initialize weights between -3e-3 and 3e-3 last_init = tf.random_uniform_initializer(minval=-0.003, maxval=0.003) inputs = layers.Input(shape=(num_states,)) out = layers.Dense(256, activation=\"relu\")(inputs) out = layers.Dense(256, activation=\"relu\")(out) outputs = layers.Dense(1, activation=\"tanh\", kernel_initializer=last_init)(out) # Our upper bound is 2.0 for Pendulum. outputs = outputs * upper_bound model = tf.keras.Model(inputs, outputs) return model def get_critic(): # State as input state_input = layers.Input(shape=(num_states,)) state_out = layers.Dense(16, activation=\"relu\")(state_input) state_out = layers.Dense(32, activation=\"relu\")(state_out) # Action as input action_input = layers.Input(shape=(num_actions,)) action_out = layers.Dense(32, activation=\"relu\")(action_input) # Both are passed through separate layers before concatenating concat = layers.Concatenate()([state_out, action_out]) out = layers.Dense(256, activation=\"relu\")(concat) out = layers.Dense(256, activation=\"relu\")(out) outputs = layers.Dense(1)(out) # Outputs a single value for the given state-action pair model = tf.keras.Model([state_input, action_input], outputs) return model policy() returns an action sampled from our Actor network plus some noise for exploration. def policy(state, noise_object): sampled_actions = tf.squeeze(actor_model(state)) noise = noise_object() # Adding noise to action sampled_actions = sampled_actions.numpy() + noise # We make sure action is within bounds legal_action = np.clip(sampled_actions, lower_bound, upper_bound) return [np.squeeze(legal_action)] Training hyperparameters std_dev = 0.2 ou_noise = OUActionNoise(mean=np.zeros(1), std_deviation=float(std_dev) * np.ones(1)) actor_model = get_actor() critic_model = get_critic() target_actor = get_actor() target_critic = get_critic() # Making the weights equal initially target_actor.set_weights(actor_model.get_weights()) target_critic.set_weights(critic_model.get_weights()) # Learning rate for actor-critic models critic_lr = 0.002 actor_lr = 0.001 critic_optimizer = tf.keras.optimizers.Adam(critic_lr) actor_optimizer = tf.keras.optimizers.Adam(actor_lr) total_episodes = 100 # Discount factor for future rewards gamma = 0.99 # Used to update target networks tau = 0.005 buffer = Buffer(50000, 64) Now we implement our main training loop, and iterate over episodes. We sample actions using policy() and train with learn() at each time step, along with updating the Target networks at a rate tau. # To store reward history of each episode ep_reward_list = [] # To store average reward history of last few episodes avg_reward_list = [] # Takes about 4 min to train for ep in range(total_episodes): prev_state = env.reset() episodic_reward = 0 while True: # Uncomment this to see the Actor in action # But not in a Python notebook. # env.render() tf_prev_state = tf.expand_dims(tf.convert_to_tensor(prev_state), 0) action = policy(tf_prev_state, ou_noise) # Receive state and reward from environment.
state, reward, done, info = env.step(action) buffer.record((prev_state, action, reward, state)) episodic_reward += reward buffer.learn() update_target(target_actor.variables, actor_model.variables, tau) update_target(target_critic.variables, critic_model.variables, tau) # End this episode when `done` is True if done: break prev_state = state ep_reward_list.append(episodic_reward) # Mean of last 40 episodes avg_reward = np.mean(ep_reward_list[-40:]) print(\"Episode * {} * Avg Reward is ==> {}\".format(ep, avg_reward)) avg_reward_list.append(avg_reward) # Plotting graph # Episodes versus Avg. Rewards plt.plot(avg_reward_list) plt.xlabel(\"Episode\") plt.ylabel(\"Avg. Epsiodic Reward\") plt.show() Episode * 0 * Avg Reward is ==> -1269.3278950595395 Episode * 1 * Avg Reward is ==> -1528.3008939716287 Episode * 2 * Avg Reward is ==> -1511.1737868279706 Episode * 3 * Avg Reward is ==> -1512.8568141261057 Episode * 4 * Avg Reward is ==> -1386.054573343386 Episode * 5 * Avg Reward is ==> -1411.4818856846339 Episode * 6 * Avg Reward is ==> -1431.6790621961388 Episode * 7 * Avg Reward is ==> -1427.9515009474867 Episode * 8 * Avg Reward is ==> -1392.9313930075857 Episode * 9 * Avg Reward is ==> -1346.6839043846012 Episode * 10 * Avg Reward is ==> -1325.5818224096574 Episode * 11 * Avg Reward is ==> -1271.778361283553 Episode * 12 * Avg Reward is ==> -1194.0784354001732 Episode * 13 * Avg Reward is ==> -1137.1096928093427 Episode * 14 * Avg Reward is ==> -1087.2426176918214 Episode * 15 * Avg Reward is ==> -1043.5265287176114 Episode * 16 * Avg Reward is ==> -990.0857409180443 Episode * 17 * Avg Reward is ==> -949.0661362879348 Episode * 18 * Avg Reward is ==> -906.1744575963231 Episode * 19 * Avg Reward is ==> -914.0098344966382 Episode * 20 * Avg Reward is ==> -886.8905055354011 Episode * 21 * Avg Reward is ==> -859.3416389004793 Episode * 22 * Avg Reward is ==> -827.5405203616622 Episode * 23 * Avg Reward is ==> -798.3875178404127 Episode * 24 * Avg Reward is ==> -771.289491103158 Episode * 25 * Avg Reward is ==> -741.6622445749622 Episode * 26 * Avg Reward is ==> -727.7080867854874 Episode * 27 * Avg Reward is ==> -710.485046117201 Episode * 28 * Avg Reward is ==> -690.3850022530833 Episode * 29 * Avg Reward is ==> -671.3205042911178 Episode * 30 * Avg Reward is ==> -653.4475135842247 Episode * 31 * Avg Reward is ==> -637.0057392119055 Episode * 32 * Avg Reward is ==> -629.2474166794424 Episode * 33 * Avg Reward is ==> -614.4655398230501 Episode * 34 * Avg Reward is ==> -603.3854873345723 Episode * 35 * Avg Reward is ==> -589.86534490467 Episode * 36 * Avg Reward is ==> -577.1806480684269 Episode * 37 * Avg Reward is ==> -565.1365286280546 Episode * 38 * Avg Reward is ==> -550.6647028563134 Episode * 39 * Avg Reward is ==> -540.0095147571197 Episode * 40 * Avg Reward is ==> -517.3861294233157 Episode * 41 * Avg Reward is ==> -478.705352005952 Episode * 42 * Avg Reward is ==> -444.8350788756713 Episode * 43 * Avg Reward is ==> -409.85293165991334 Episode * 44 * Avg Reward is ==> -390.83984710631546 Episode * 45 * Avg Reward is ==> -360.88156865913675 Episode * 46 * Avg Reward is ==> -325.26685315168595 Episode * 47 * Avg Reward is ==> -290.2315644399411 Episode * 48 * Avg Reward is ==> -268.0351126010609 Episode * 49 * Avg Reward is ==> -247.8952699063706 Episode * 50 * Avg Reward is ==> -222.99123461788048 Episode * 51 * Avg Reward is ==> -209.0830401020491 Episode * 52 * Avg Reward is ==> -205.65143423678765 Episode * 53 * Avg Reward is ==> -201.8910585767988 Episode * 54 * Avg Reward is ==> 
-192.18560466037357 Episode * 55 * Avg Reward is ==> -189.43475813660137 Episode * 56 * Avg Reward is ==> -191.92700535454787 Episode * 57 * Avg Reward is ==> -188.5196218645745 Episode * 58 * Avg Reward is ==> -188.17872234729674 Episode * 59 * Avg Reward is ==> -167.33043921566485 Episode * 60 * Avg Reward is ==> -165.01361185173954 Episode * 61 * Avg Reward is ==> -164.5316658073024 Episode * 62 * Avg Reward is ==> -164.4025677076815 Episode * 63 * Avg Reward is ==> -167.27842005634784 Episode * 64 * Avg Reward is ==> -167.12049955654845 Episode * 65 * Avg Reward is ==> -170.02761731078783 Episode * 66 * Avg Reward is ==> -167.56039601863873 Episode * 67 * Avg Reward is ==> -164.60482495249738 Episode * 68 * Avg Reward is ==> -167.45278232469394 Episode * 69 * Avg Reward is ==> -167.42407364484592 Episode * 70 * Avg Reward is ==> -167.57794933965346 Episode * 71 * Avg Reward is ==> -170.6408611483338 Episode * 72 * Avg Reward is ==> -163.96954092530822 Episode * 73 * Avg Reward is ==> -160.82007525469245 Episode * 74 * Avg Reward is ==> -158.38239222565778 Episode * 75 * Avg Reward is ==> -158.3554729720654 Episode * 76 * Avg Reward is ==> -158.51036948298994 Episode * 77 * Avg Reward is ==> -158.68906473090686 Episode * 78 * Avg Reward is ==> -164.60260866654318 Episode * 79 * Avg Reward is ==> -161.5493472156026 Episode * 80 * Avg Reward is ==> -152.48077012719403 Episode * 81 * Avg Reward is ==> -149.52532010375975 Episode * 82 * Avg Reward is ==> -149.61942419730423 Episode * 83 * Avg Reward is ==> -149.82443455067468 Episode * 84 * Avg Reward is ==> -149.80009937226978 Episode * 85 * Avg Reward is ==> -144.51659331262107 Episode * 86 * Avg Reward is ==> -150.7545561142967 Episode * 87 * Avg Reward is ==> -153.84772667131307 Episode * 88 * Avg Reward is ==> -151.35200443047225 Episode * 89 * Avg Reward is ==> -148.30392250041828 Episode * 90 * Avg Reward is ==> -151.33886235855053 Episode * 91 * Avg Reward is ==> -151.153096135589 Episode * 92 * Avg Reward is ==> -151.19626034791332 Episode * 93 * Avg Reward is ==> -151.15870791946685 Episode * 94 * Avg Reward is ==> -154.2673372216281 Episode * 95 * Avg Reward is ==> -150.40737651480134 Episode * 96 * Avg Reward is ==> -147.7969116731913 Episode * 97 * Avg Reward is ==> -147.88640802454557 Episode * 98 * Avg Reward is ==> -144.88997165191319 Episode * 99 * Avg Reward is ==> -142.22158276699662 png If training proceeds correctly, the average episodic reward will increase with time. Feel free to try different learning rates, tau values, and architectures for the Actor and Critic networks. The Inverted Pendulum problem has low complexity, but DDPG work great on many other problems. Another great environment to try this on is LunarLandingContinuous-v2, but it will take more episodes to obtain good results. # Save the weights actor_model.save_weights(\"pendulum_actor.h5\") critic_model.save_weights(\"pendulum_critic.h5\") target_actor.save_weights(\"pendulum_target_actor.h5\") target_critic.save_weights(\"pendulum_target_critic.h5\") Before Training: before_img After 100 episodes: after_img Play Atari Breakout with a Deep Q-Network. Introduction This script shows an implementation of Deep Q-Learning on the BreakoutNoFrameskip-v4 environment. Deep Q-Learning As an agent takes actions and moves through an environment, it learns to map the observed state of the environment to an action. An agent will choose an action in a given state based on a \"Q-value\", which is a weighted reward based on the expected highest long-term reward. 
A Q-Learning Agent learns to perform its task such that the recommended action maximizes the potential future rewards. This method is considered an \"Off-Policy\" method, meaning its Q values are updated assuming that the best action was chosen, even if the best action was not chosen. Atari Breakout In this environment, a board moves along the bottom of the screen returning a ball that will destroy blocks at the top of the screen. The aim of the game is to remove all blocks and breakout of the level. The agent must learn to control the board by moving left and right, returning the ball and removing all the blocks without the ball passing the board. Note The Deepmind paper trained for \"a total of 50 million frames (that is, around 38 days of game experience in total)\". However, this script will give good results at around 10 million frames, which can be processed in less than 24 hours on a modern machine. References Q-Learning Deep Q-Learning Setup from baselines.common.atari_wrappers import make_atari, wrap_deepmind import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers # Configuration parameters for the whole setup seed = 42 gamma = 0.99 # Discount factor for past rewards epsilon = 1.0 # Epsilon greedy parameter epsilon_min = 0.1 # Minimum epsilon greedy parameter epsilon_max = 1.0 # Maximum epsilon greedy parameter epsilon_interval = ( epsilon_max - epsilon_min ) # Rate at which to reduce chance of random action being taken batch_size = 32 # Size of batch taken from replay buffer max_steps_per_episode = 10000 # Use the Baseline Atari environment because of Deepmind helper functions env = make_atari(\"BreakoutNoFrameskip-v4\") # Warp the frames, convert to grayscale, stack four frames and scale to a smaller ratio env = wrap_deepmind(env, frame_stack=True, scale=True) env.seed(seed) Implement the Deep Q-Network This network learns an approximation of the Q-table, which is a mapping between the states and actions that an agent will take. For every state we'll have four actions that can be taken. The environment provides the state, and the action is chosen by selecting the largest of the four Q-values predicted in the output layer. num_actions = 4 def create_q_model(): # Network defined by the Deepmind paper inputs = layers.Input(shape=(84, 84, 4,)) # Convolutions on the frames on the screen layer1 = layers.Conv2D(32, 8, strides=4, activation=\"relu\")(inputs) layer2 = layers.Conv2D(64, 4, strides=2, activation=\"relu\")(layer1) layer3 = layers.Conv2D(64, 3, strides=1, activation=\"relu\")(layer2) layer4 = layers.Flatten()(layer3) layer5 = layers.Dense(512, activation=\"relu\")(layer4) action = layers.Dense(num_actions, activation=\"linear\")(layer5) return keras.Model(inputs=inputs, outputs=action) # The first model makes the predictions for Q-values which are used to # take an action. model = create_q_model() # Build a target model for the prediction of future rewards. # The weights of a target model get updated every 10000 steps, thus when the # loss between the Q-values is calculated the target Q-value is stable.
model_target = create_q_model() Train # In the Deepmind paper they use RMSProp however then Adam optimizer # improves training time optimizer = keras.optimizers.Adam(learning_rate=0.00025, clipnorm=1.0) # Experience replay buffers action_history = [] state_history = [] state_next_history = [] rewards_history = [] done_history = [] episode_reward_history = [] running_reward = 0 episode_count = 0 frame_count = 0 # Number of frames to take random action and observe output epsilon_random_frames = 50000 # Number of frames for exploration epsilon_greedy_frames = 1000000.0 # Maximum replay length # Note: The Deepmind paper suggests 1000000 however this causes memory issues max_memory_length = 100000 # Train the model after 4 actions update_after_actions = 4 # How often to update the target network update_target_network = 10000 # Using huber loss for stability loss_function = keras.losses.Huber() while True: # Run until solved state = np.array(env.reset()) episode_reward = 0 for timestep in range(1, max_steps_per_episode): # env.render(); Adding this line would show the attempts # of the agent in a pop up window. frame_count += 1 # Use epsilon-greedy for exploration if frame_count < epsilon_random_frames or epsilon > np.random.rand(1)[0]: # Take random action action = np.random.choice(num_actions) else: # Predict action Q-values # From environment state state_tensor = tf.convert_to_tensor(state) state_tensor = tf.expand_dims(state_tensor, 0) action_probs = model(state_tensor, training=False) # Take best action action = tf.argmax(action_probs[0]).numpy() # Decay probability of taking random action epsilon -= epsilon_interval / epsilon_greedy_frames epsilon = max(epsilon, epsilon_min) # Apply the sampled action in our environment state_next, reward, done, _ = env.step(action) state_next = np.array(state_next) episode_reward += reward # Save actions and states in replay buffer action_history.append(action) state_history.append(state) state_next_history.append(state_next) done_history.append(done) rewards_history.append(reward) state = state_next # Update every fourth frame and once batch size is over 32 if frame_count % update_after_actions == 0 and len(done_history) > batch_size: # Get indices of samples for replay buffers indices = np.random.choice(range(len(done_history)), size=batch_size) # Using list comprehension to sample from replay buffer state_sample = np.array([state_history[i] for i in indices]) state_next_sample = np.array([state_next_history[i] for i in indices]) rewards_sample = [rewards_history[i] for i in indices] action_sample = [action_history[i] for i in indices] done_sample = tf.convert_to_tensor( [float(done_history[i]) for i in indices] ) # Build the updated Q-values for the sampled future states # Use the target model for stability future_rewards = model_target.predict(state_next_sample) # Q value = reward + discount factor * expected future reward updated_q_values = rewards_sample + gamma * tf.reduce_max( future_rewards, axis=1 ) # If final frame set the last value to -1 updated_q_values = updated_q_values * (1 - done_sample) - done_sample # Create a mask so we only calculate loss on the updated Q-values masks = tf.one_hot(action_sample, num_actions) with tf.GradientTape() as tape: # Train the model on the states and updated Q-values q_values = model(state_sample) # Apply the masks to the Q-values to get the Q-value for action taken q_action = tf.reduce_sum(tf.multiply(q_values, masks), axis=1) # Calculate loss between new Q-value and old Q-value loss = 
loss_function(updated_q_values, q_action) # Backpropagation grads = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) if frame_count % update_target_network == 0: # update the the target network with new weights model_target.set_weights(model.get_weights()) # Log details template = \"running reward: {:.2f} at episode {}, frame count {}\" print(template.format(running_reward, episode_count, frame_count)) # Limit the state and reward history if len(rewards_history) > max_memory_length: del rewards_history[:1] del state_history[:1] del state_next_history[:1] del action_history[:1] del done_history[:1] if done: break # Update running reward to check condition for solving episode_reward_history.append(episode_reward) if len(episode_reward_history) > 100: del episode_reward_history[:1] running_reward = np.mean(episode_reward_history) episode_count += 1 if running_reward > 40: # Condition to consider the task solved print(\"Solved at episode {}!\".format(episode_count)) break Visualizations Before any training: Imgur In early stages of training: Imgur In later stages of training: Imgur Implementation of a Proximal Policy Optimization agent for the CartPole-v0 environment. Introduction This code example solves the CartPole-v0 environment using a Proximal Policy Optimization (PPO) agent. CartPole-v0 A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center. After 200 steps the episode ends. Thus, the highest return we can get is equal to 200. CartPole-v0 Proximal Policy Optimization PPO is a policy gradient method and can be used for environments with either discrete or continuous action spaces. It trains a stochastic policy in an on-policy way. Also, it utilizes the actor critic method. The actor maps the observation to an action and the critic gives an expectation of the rewards of the agent for the observation given. Firstly, it collects a set of trajectories for each epoch by sampling from the latest version of the stochastic policy. Then, the rewards-to-go and the advantage estimates are computed in order to update the policy and fit the value function. The policy is updated via a stochastic gradient ascent optimizer, while the value function is fitted via some gradient descent algorithm. This procedure is applied for many epochs until the environment is solved. Algorithm PPO Original Paper OpenAI Spinning Up docs - PPO Note This code example uses Keras and Tensorflow v2. It is based on the PPO Original Paper, the OpenAI's Spinning Up docs for PPO, and the OpenAI's Spinning Up implementation of PPO using Tensorflow v1. 
OpenAI Spinning Up Github - PPO Libraries For this example the following libraries are used: numpy for n-dimensional arrays tensorflow and keras for building the deep RL PPO agent gym for getting everything we need about the environment scipy.signal for calculating the discounted cumulative sums of vectors import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import gym import scipy.signal import time Functions and class def discounted_cumulative_sums(x, discount): # Discounted cumulative sums of vectors for computing rewards-to-go and advantage estimates return scipy.signal.lfilter([1], [1, float(-discount)], x[::-1], axis=0)[::-1] class Buffer: # Buffer for storing trajectories def __init__(self, observation_dimensions, size, gamma=0.99, lam=0.95): # Buffer initialization self.observation_buffer = np.zeros( (size, observation_dimensions), dtype=np.float32 ) self.action_buffer = np.zeros(size, dtype=np.int32) self.advantage_buffer = np.zeros(size, dtype=np.float32) self.reward_buffer = np.zeros(size, dtype=np.float32) self.return_buffer = np.zeros(size, dtype=np.float32) self.value_buffer = np.zeros(size, dtype=np.float32) self.logprobability_buffer = np.zeros(size, dtype=np.float32) self.gamma, self.lam = gamma, lam self.pointer, self.trajectory_start_index = 0, 0 def store(self, observation, action, reward, value, logprobability): # Append one step of agent-environment interaction self.observation_buffer[self.pointer] = observation self.action_buffer[self.pointer] = action self.reward_buffer[self.pointer] = reward self.value_buffer[self.pointer] = value self.logprobability_buffer[self.pointer] = logprobability self.pointer += 1 def finish_trajectory(self, last_value=0): # Finish the trajectory by computing advantage estimates and rewards-to-go path_slice = slice(self.trajectory_start_index, self.pointer) rewards = np.append(self.reward_buffer[path_slice], last_value) values = np.append(self.value_buffer[path_slice], last_value) deltas = rewards[:-1] + self.gamma * values[1:] - values[:-1] self.advantage_buffer[path_slice] = discounted_cumulative_sums( deltas, self.gamma * self.lam ) self.return_buffer[path_slice] = discounted_cumulative_sums( rewards, self.gamma )[:-1] self.trajectory_start_index = self.pointer def get(self): # Get all data of the buffer and normalize the advantages self.pointer, self.trajectory_start_index = 0, 0 advantage_mean, advantage_std = ( np.mean(self.advantage_buffer), np.std(self.advantage_buffer), ) self.advantage_buffer = (self.advantage_buffer - advantage_mean) / advantage_std return ( self.observation_buffer, self.action_buffer, self.advantage_buffer, self.return_buffer, self.logprobability_buffer, ) def mlp(x, sizes, activation=tf.tanh, output_activation=None): # Build a feedforward neural network for size in sizes[:-1]: x = layers.Dense(units=size, activation=activation)(x) return layers.Dense(units=sizes[-1], activation=output_activation)(x) def logprobabilities(logits, a): # Compute the log-probabilities of taking actions a by using the logits (i.e. 
the output of the actor) logprobabilities_all = tf.nn.log_softmax(logits) logprobability = tf.reduce_sum( tf.one_hot(a, num_actions) * logprobabilities_all, axis=1 ) return logprobability # Sample action from actor @tf.function def sample_action(observation): logits = actor(observation) action = tf.squeeze(tf.random.categorical(logits, 1), axis=1) return logits, action # Train the policy by maxizing the PPO-Clip objective @tf.function def train_policy( observation_buffer, action_buffer, logprobability_buffer, advantage_buffer ): with tf.GradientTape() as tape: # Record operations for automatic differentiation. ratio = tf.exp( logprobabilities(actor(observation_buffer), action_buffer) - logprobability_buffer ) min_advantage = tf.where( advantage_buffer > 0, (1 + clip_ratio) * advantage_buffer, (1 - clip_ratio) * advantage_buffer, ) policy_loss = -tf.reduce_mean( tf.minimum(ratio * advantage_buffer, min_advantage) ) policy_grads = tape.gradient(policy_loss, actor.trainable_variables) policy_optimizer.apply_gradients(zip(policy_grads, actor.trainable_variables)) kl = tf.reduce_mean( logprobability_buffer - logprobabilities(actor(observation_buffer), action_buffer) ) kl = tf.reduce_sum(kl) return kl # Train the value function by regression on mean-squared error @tf.function def train_value_function(observation_buffer, return_buffer): with tf.GradientTape() as tape: # Record operations for automatic differentiation. value_loss = tf.reduce_mean((return_buffer - critic(observation_buffer)) ** 2) value_grads = tape.gradient(value_loss, critic.trainable_variables) value_optimizer.apply_gradients(zip(value_grads, critic.trainable_variables)) Hyperparameters # Hyperparameters of the PPO algorithm steps_per_epoch = 4000 epochs = 30 gamma = 0.99 clip_ratio = 0.2 policy_learning_rate = 3e-4 value_function_learning_rate = 1e-3 train_policy_iterations = 80 train_value_iterations = 80 lam = 0.97 target_kl = 0.01 hidden_sizes = (64, 64) # True if you want to render the environment render = False Initializations # Initialize the environment and get the dimensionality of the # observation space and the number of possible actions env = gym.make(\"CartPole-v0\") observation_dimensions = env.observation_space.shape[0] num_actions = env.action_space.n # Initialize the buffer buffer = Buffer(observation_dimensions, steps_per_epoch) # Initialize the actor and the critic as keras models observation_input = keras.Input(shape=(observation_dimensions,), dtype=tf.float32) logits = mlp(observation_input, list(hidden_sizes) + [num_actions], tf.tanh, None) actor = keras.Model(inputs=observation_input, outputs=logits) value = tf.squeeze( mlp(observation_input, list(hidden_sizes) + [1], tf.tanh, None), axis=1 ) critic = keras.Model(inputs=observation_input, outputs=value) # Initialize the policy and the value function optimizers policy_optimizer = keras.optimizers.Adam(learning_rate=policy_learning_rate) value_optimizer = keras.optimizers.Adam(learning_rate=value_function_learning_rate) # Initialize the observation, episode return and episode length observation, episode_return, episode_length = env.reset(), 0, 0 Train # Iterate over the number of epochs for epoch in range(epochs): # Initialize the sum of the returns, lengths and number of episodes for each epoch sum_return = 0 sum_length = 0 num_episodes = 0 # Iterate over the steps of each epoch for t in range(steps_per_epoch): if render: env.render() # Get the logits, action, and take one step in the environment observation = observation.reshape(1, -1) logits, action = 
sample_action(observation) observation_new, reward, done, _ = env.step(action[0].numpy()) episode_return += reward episode_length += 1 # Get the value and log-probability of the action value_t = critic(observation) logprobability_t = logprobabilities(logits, action) # Store obs, act, rew, v_t, logp_pi_t buffer.store(observation, action, reward, value_t, logprobability_t) # Update the observation observation = observation_new # Finish trajectory if reached to a terminal state terminal = done if terminal or (t == steps_per_epoch - 1): last_value = 0 if done else critic(observation.reshape(1, -1)) buffer.finish_trajectory(last_value) sum_return += episode_return sum_length += episode_length num_episodes += 1 observation, episode_return, episode_length = env.reset(), 0, 0 # Get values from the buffer ( observation_buffer, action_buffer, advantage_buffer, return_buffer, logprobability_buffer, ) = buffer.get() # Update the policy and implement early stopping using KL divergence for _ in range(train_policy_iterations): kl = train_policy( observation_buffer, action_buffer, logprobability_buffer, advantage_buffer ) if kl > 1.5 * target_kl: # Early Stopping break # Update the value function for _ in range(train_value_iterations): train_value_function(observation_buffer, return_buffer) # Print mean return and length for each epoch print( f\" Epoch: {epoch + 1}. Mean Return: {sum_return / num_episodes}. Mean Length: {sum_length / num_episodes}\" ) Epoch: 1. Mean Return: 18.01801801801802. Mean Length: 18.01801801801802 Epoch: 2. Mean Return: 21.978021978021978. Mean Length: 21.978021978021978 Epoch: 3. Mean Return: 27.397260273972602. Mean Length: 27.397260273972602 Epoch: 4. Mean Return: 36.69724770642202. Mean Length: 36.69724770642202 Epoch: 5. Mean Return: 48.19277108433735. Mean Length: 48.19277108433735 Epoch: 6. Mean Return: 66.66666666666667. Mean Length: 66.66666666666667 Epoch: 7. Mean Return: 133.33333333333334. Mean Length: 133.33333333333334 Epoch: 8. Mean Return: 166.66666666666666. Mean Length: 166.66666666666666 Epoch: 9. Mean Return: 181.8181818181818. Mean Length: 181.8181818181818 Epoch: 10. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 11. Mean Return: 200.0. Mean Length: 200.0 Epoch: 12. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 13. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 14. Mean Return: 181.8181818181818. Mean Length: 181.8181818181818 Epoch: 15. Mean Return: 181.8181818181818. Mean Length: 181.8181818181818 Epoch: 16. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 17. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 18. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 19. Mean Return: 200.0. Mean Length: 200.0 Epoch: 20. Mean Return: 200.0. Mean Length: 200.0 Epoch: 21. Mean Return: 200.0. Mean Length: 200.0 Epoch: 22. Mean Return: 200.0. Mean Length: 200.0 Epoch: 23. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 24. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048 Epoch: 25. Mean Return: 200.0. Mean Length: 200.0 Epoch: 26. Mean Return: 200.0. Mean Length: 200.0 Epoch: 27. Mean Return: 200.0. Mean Length: 200.0 Epoch: 28. Mean Return: 200.0. Mean Length: 200.0 Epoch: 29. Mean Return: 200.0. Mean Length: 200.0 Epoch: 30. Mean Return: 200.0. 
Mean Length: 200.0 Visualizations Before training: Imgur After 8 epochs of training: Imgur After 20 epochs of training: Imgur Rating prediction using the Behavior Sequence Transformer (BST) model on the Movielens dataset. Introduction This example demonstrates the Behavior Sequence Transformer (BST) model, by Qiwei Chen et al., using the Movielens dataset. The BST model leverages the sequential behaviour of the users in watching and rating movies, as well as user profile and movie features, to predict the user's rating of a target movie. More precisely, the BST model aims to predict the rating of a target movie by accepting the following inputs: A fixed-length sequence of movie_ids watched by a user. A fixed-length sequence of the ratings for the movies watched by a user. A set of user features, including user_id, sex, occupation, and age_group. A set of genres for each movie in the input sequence and the target movie. A target_movie_id for which to predict the rating. This example modifies the original BST model in the following ways: We incorporate the movie features (genres) into the processing of the embedding of each movie of the input sequence and the target movie, rather than treating them as \"other features\" outside the transformer layer. We utilize the ratings of movies in the input sequence, along with their positions in the sequence, to update them before feeding them into the self-attention layer. Note that this example should be run with TensorFlow 2.4 or higher. The dataset We use the 1M version of the Movielens dataset. The dataset includes around 1 million ratings from 6000 users on 4000 movies, along with some user features and movie genres. In addition, the timestamp of each user-movie rating is provided, which allows creating sequences of movie ratings for each user, as expected by the BST model. Setup import os import math from zipfile import ZipFile from urllib.request import urlretrieve import numpy as np import pandas as pd import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.layers import StringLookup Prepare the data Download and prepare the DataFrames First, let's download the movielens data. The downloaded folder will contain three data files: users.dat, movies.dat, and ratings.dat. urlretrieve(\"http://files.grouplens.org/datasets/movielens/ml-1m.zip\", \"movielens.zip\") ZipFile(\"movielens.zip\", \"r\").extractall() Then, we load the data into pandas DataFrames with their proper column names. users = pd.read_csv( \"ml-1m/users.dat\", sep=\"::\", names=[\"user_id\", \"sex\", \"age_group\", \"occupation\", \"zip_code\"], ) ratings = pd.read_csv( \"ml-1m/ratings.dat\", sep=\"::\", names=[\"user_id\", \"movie_id\", \"rating\", \"unix_timestamp\"], ) movies = pd.read_csv( \"ml-1m/movies.dat\", sep=\"::\", names=[\"movie_id\", \"title\", \"genres\"] ) Here, we do some simple data processing to fix the data types of the columns. users[\"user_id\"] = users[\"user_id\"].apply(lambda x: f\"user_{x}\") users[\"age_group\"] = users[\"age_group\"].apply(lambda x: f\"group_{x}\") users[\"occupation\"] = users[\"occupation\"].apply(lambda x: f\"occupation_{x}\") movies[\"movie_id\"] = movies[\"movie_id\"].apply(lambda x: f\"movie_{x}\") ratings[\"movie_id\"] = ratings[\"movie_id\"].apply(lambda x: f\"movie_{x}\") ratings[\"user_id\"] = ratings[\"user_id\"].apply(lambda x: f\"user_{x}\") ratings[\"rating\"] = ratings[\"rating\"].apply(lambda x: float(x)) Each movie has multiple genres.
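For reference, the genres column stores a single pipe-separated string per movie (the value below is illustrative), and the loop that follows simply checks membership in that string:
# Illustrative value only, showing the pipe-separated genre format.
value = "Animation|Children's|Comedy"
print(int("Comedy" in value.split("|")))  # 1
print(int("Horror" in value.split("|")))  # 0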
We split them into separate columns in the movies DataFrame. genres = [ \"Action\", \"Adventure\", \"Animation\", \"Children's\", \"Comedy\", \"Crime\", \"Documentary\", \"Drama\", \"Fantasy\", \"Film-Noir\", \"Horror\", \"Musical\", \"Mystery\", \"Romance\", \"Sci-Fi\", \"Thriller\", \"War\", \"Western\", ] for genre in genres: movies[genre] = movies[\"genres\"].apply( lambda values: int(genre in values.split(\"|\")) ) Transform the movie ratings data into sequences First, let's sort the the ratings data using the unix_timestamp, and then group the movie_id values and the rating values by user_id. The output DataFrame will have a record for each user_id, with two ordered lists (sorted by rating datetime): the movies they have rated, and their ratings of these movies. ratings_group = ratings.sort_values(by=[\"unix_timestamp\"]).groupby(\"user_id\") ratings_data = pd.DataFrame( data={ \"user_id\": list(ratings_group.groups.keys()), \"movie_ids\": list(ratings_group.movie_id.apply(list)), \"ratings\": list(ratings_group.rating.apply(list)), \"timestamps\": list(ratings_group.unix_timestamp.apply(list)), } ) Now, let's split the movie_ids list into a set of sequences of a fixed length. We do the same for the ratings. Set the sequence_length variable to change the length of the input sequence to the model. You can also change the step_size to control the number of sequences to generate for each user. sequence_length = 4 step_size = 2 def create_sequences(values, window_size, step_size): sequences = [] start_index = 0 while True: end_index = start_index + window_size seq = values[start_index:end_index] if len(seq) < window_size: seq = values[-window_size:] if len(seq) == window_size: sequences.append(seq) break sequences.append(seq) start_index += step_size return sequences ratings_data.movie_ids = ratings_data.movie_ids.apply( lambda ids: create_sequences(ids, sequence_length, step_size) ) ratings_data.ratings = ratings_data.ratings.apply( lambda ids: create_sequences(ids, sequence_length, step_size) ) del ratings_data[\"timestamps\"] After that, we process the output to have each sequence in a separate records in the DataFrame. In addition, we join the user features with the ratings data. ratings_data_movies = ratings_data[[\"user_id\", \"movie_ids\"]].explode( \"movie_ids\", ignore_index=True ) ratings_data_rating = ratings_data[[\"ratings\"]].explode(\"ratings\", ignore_index=True) ratings_data_transformed = pd.concat([ratings_data_movies, ratings_data_rating], axis=1) ratings_data_transformed = ratings_data_transformed.join( users.set_index(\"user_id\"), on=\"user_id\" ) ratings_data_transformed.movie_ids = ratings_data_transformed.movie_ids.apply( lambda x: \",\".join(x) ) ratings_data_transformed.ratings = ratings_data_transformed.ratings.apply( lambda x: \",\".join([str(v) for v in x]) ) del ratings_data_transformed[\"zip_code\"] ratings_data_transformed.rename( columns={\"movie_ids\": \"sequence_movie_ids\", \"ratings\": \"sequence_ratings\"}, inplace=True, ) With sequence_length of 4 and step_size of 2, we end up with 498,623 sequences. Finally, we split the data into training and testing splits, with 85% and 15% of the instances, respectively, and store them to CSV files. 
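As a quick sanity check of the windowing above (toy values, not from the dataset), create_sequences slides a window of length window_size forward by step_size and pads the last window from the end of the list, which is why overlapping windows yield far more sequences than users:
print(create_sequences([1, 2, 3, 4, 5, 6, 7], window_size=4, step_size=2))
# [[1, 2, 3, 4], [3, 4, 5, 6], [4, 5, 6, 7]]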
random_selection = np.random.rand(len(ratings_data_transformed.index)) <= 0.85 train_data = ratings_data_transformed[random_selection] test_data = ratings_data_transformed[~random_selection] train_data.to_csv(\"train_data.csv\", index=False, sep=\"|\", header=False) test_data.to_csv(\"test_data.csv\", index=False, sep=\"|\", header=False) Define metadata CSV_HEADER = list(ratings_data_transformed.columns) CATEGORICAL_FEATURES_WITH_VOCABULARY = { \"user_id\": list(users.user_id.unique()), \"movie_id\": list(movies.movie_id.unique()), \"sex\": list(users.sex.unique()), \"age_group\": list(users.age_group.unique()), \"occupation\": list(users.occupation.unique()), } USER_FEATURES = [\"sex\", \"age_group\", \"occupation\"] MOVIE_FEATURES = [\"genres\"] Create tf.data.Dataset for training and evaluation def get_dataset_from_csv(csv_file_path, shuffle=False, batch_size=128): def process(features): movie_ids_string = features[\"sequence_movie_ids\"] sequence_movie_ids = tf.strings.split(movie_ids_string, \",\").to_tensor() # The last movie id in the sequence is the target movie. features[\"target_movie_id\"] = sequence_movie_ids[:, -1] features[\"sequence_movie_ids\"] = sequence_movie_ids[:, :-1] ratings_string = features[\"sequence_ratings\"] sequence_ratings = tf.strings.to_number( tf.strings.split(ratings_string, \",\"), tf.dtypes.float32 ).to_tensor() # The last rating in the sequence is the target for the model to predict. target = sequence_ratings[:, -1] features[\"sequence_ratings\"] = sequence_ratings[:, :-1] return features, target dataset = tf.data.experimental.make_csv_dataset( csv_file_path, batch_size=batch_size, column_names=CSV_HEADER, num_epochs=1, header=False, field_delim=\"|\", shuffle=shuffle, ).map(process) return dataset Create model inputs def create_model_inputs(): return { \"user_id\": layers.Input(name=\"user_id\", shape=(1,), dtype=tf.string), \"sequence_movie_ids\": layers.Input( name=\"sequence_movie_ids\", shape=(sequence_length - 1,), dtype=tf.string ), \"target_movie_id\": layers.Input( name=\"target_movie_id\", shape=(1,), dtype=tf.string ), \"sequence_ratings\": layers.Input( name=\"sequence_ratings\", shape=(sequence_length - 1,), dtype=tf.float32 ), \"sex\": layers.Input(name=\"sex\", shape=(1,), dtype=tf.string), \"age_group\": layers.Input(name=\"age_group\", shape=(1,), dtype=tf.string), \"occupation\": layers.Input(name=\"occupation\", shape=(1,), dtype=tf.string), } Encode input features The encode_input_features method works as follows: Each categorical user feature is encoded using layers.Embedding, with embedding dimension equals to the square root of the vocabulary size of the feature. The embeddings of these features are concatenated to form a single input tensor. Each movie in the movie sequence and the target movie is encoded layers.Embedding, where the dimension size is the square root of the number of movies. A multi-hot genres vector for each movie is concatenated with its embedding vector, and processed using a non-linear layers.Dense to output a vector of the same movie embedding dimensions. A positional embedding is added to each movie embedding in the sequence, and then multiplied by its rating from the ratings sequence. The target movie embedding is concatenated to the sequence movie embeddings, producing a tensor with the shape of [batch size, sequence length, embedding size], as expected by the attention layer for the transformer architecture. 
The method returns a tuple of two elements: encoded_transformer_features and encoded_other_features. def encode_input_features( inputs, include_user_id=True, include_user_features=True, include_movie_features=True, ): encoded_transformer_features = [] encoded_other_features = [] other_feature_names = [] if include_user_id: other_feature_names.append(\"user_id\") if include_user_features: other_feature_names.extend(USER_FEATURES) ## Encode user features for feature_name in other_feature_names: # Convert the string input values into integer indices. vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name] idx = StringLookup(vocabulary=vocabulary, mask_token=None, num_oov_indices=0)( inputs[feature_name] ) # Compute embedding dimensions embedding_dims = int(math.sqrt(len(vocabulary))) # Create an embedding layer with the specified dimensions. embedding_encoder = layers.Embedding( input_dim=len(vocabulary), output_dim=embedding_dims, name=f\"{feature_name}_embedding\", ) # Convert the index values to embedding representations. encoded_other_features.append(embedding_encoder(idx)) ## Create a single embedding vector for the user features if len(encoded_other_features) > 1: encoded_other_features = layers.concatenate(encoded_other_features) elif len(encoded_other_features) == 1: encoded_other_features = encoded_other_features[0] else: encoded_other_features = None ## Create a movie embedding encoder movie_vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[\"movie_id\"] movie_embedding_dims = int(math.sqrt(len(movie_vocabulary))) # Create a lookup to convert string values to integer indices. movie_index_lookup = StringLookup( vocabulary=movie_vocabulary, mask_token=None, num_oov_indices=0, name=\"movie_index_lookup\", ) # Create an embedding layer with the specified dimensions. movie_embedding_encoder = layers.Embedding( input_dim=len(movie_vocabulary), output_dim=movie_embedding_dims, name=f\"movie_embedding\", ) # Create a vector lookup for movie genres. genre_vectors = movies[genres].to_numpy() movie_genres_lookup = layers.Embedding( input_dim=genre_vectors.shape[0], output_dim=genre_vectors.shape[1], embeddings_initializer=tf.keras.initializers.Constant(genre_vectors), trainable=False, name=\"genres_vector\", ) # Create a processing layer for genres. movie_embedding_processor = layers.Dense( units=movie_embedding_dims, activation=\"relu\", name=\"process_movie_embedding_with_genres\", ) ## Define a function to encode a given movie id. def encode_movie(movie_id): # Convert the string input values into integer indices. movie_idx = movie_index_lookup(movie_id) movie_embedding = movie_embedding_encoder(movie_idx) encoded_movie = movie_embedding if include_movie_features: movie_genres_vector = movie_genres_lookup(movie_idx) encoded_movie = movie_embedding_processor( layers.concatenate([movie_embedding, movie_genres_vector]) ) return encoded_movie ## Encoding target_movie_id target_movie_id = inputs[\"target_movie_id\"] encoded_target_movie = encode_movie(target_movie_id) ## Encoding sequence movie_ids. sequence_movies_ids = inputs[\"sequence_movie_ids\"] encoded_sequence_movies = encode_movie(sequence_movies_ids) # Create positional embedding. position_embedding_encoder = layers.Embedding( input_dim=sequence_length, output_dim=movie_embedding_dims, name=\"position_embedding\", ) positions = tf.range(start=0, limit=sequence_length - 1, delta=1) encodded_positions = position_embedding_encoder(positions) # Retrieve sequence ratings to incorporate them into the encoding of the movie. 
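# Shape check (assuming the default sequence_length = 4, i.e. three past movies per sequence):
# encoded_sequence_movies has shape [batch_size, 3, movie_embedding_dims], while
# encodded_positions has shape [3, movie_embedding_dims] and broadcasts over the batch when added.
# The ratings are expanded below to [batch_size, 3, 1] so they broadcast over the embedding
# axis when the Multiply layer is applied.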
sequence_ratings = tf.expand_dims(inputs[\"sequence_ratings\"], -1) # Add the positional encoding to the movie encodings and multiply them by rating. encoded_sequence_movies_with_poistion_and_rating = layers.Multiply()( [(encoded_sequence_movies + encodded_positions), sequence_ratings] ) # Construct the transformer inputs. for encoded_movie in tf.unstack( encoded_sequence_movies_with_poistion_and_rating, axis=1 ): encoded_transformer_features.append(tf.expand_dims(encoded_movie, 1)) encoded_transformer_features.append(encoded_target_movie) encoded_transformer_features = layers.concatenate( encoded_transformer_features, axis=1 ) return encoded_transformer_features, encoded_other_features Create a BST model include_user_id = False include_user_features = False include_movie_features = False hidden_units = [256, 128] dropout_rate = 0.1 num_heads = 3 def create_model(): inputs = create_model_inputs() transformer_features, other_features = encode_input_features( inputs, include_user_id, include_user_features, include_movie_features ) # Create a multi-headed attention layer. attention_output = layers.MultiHeadAttention( num_heads=num_heads, key_dim=transformer_features.shape[2], dropout=dropout_rate )(transformer_features, transformer_features) # Transformer block. attention_output = layers.Dropout(dropout_rate)(attention_output) x1 = layers.Add()([transformer_features, attention_output]) x1 = layers.LayerNormalization()(x1) x2 = layers.LeakyReLU()(x1) x2 = layers.Dense(units=x2.shape[-1])(x2) x2 = layers.Dropout(dropout_rate)(x2) transformer_features = layers.Add()([x1, x2]) transformer_features = layers.LayerNormalization()(transformer_features) features = layers.Flatten()(transformer_features) # Included the other features. if other_features is not None: features = layers.concatenate( [features, layers.Reshape([other_features.shape[-1]])(other_features)] ) # Fully-connected layers. for num_units in hidden_units: features = layers.Dense(num_units)(features) features = layers.BatchNormalization()(features) features = layers.LeakyReLU()(features) features = layers.Dropout(dropout_rate)(features) outputs = layers.Dense(units=1)(features) model = keras.Model(inputs=inputs, outputs=outputs) return model model = create_model() Run training and evaluation experiment # Compile the model. model.compile( optimizer=keras.optimizers.Adagrad(learning_rate=0.01), loss=keras.losses.MeanSquaredError(), metrics=[keras.metrics.MeanAbsoluteError()], ) # Read the training data. train_dataset = get_dataset_from_csv(\"train_data.csv\", shuffle=True, batch_size=265) # Fit the model with the training data. model.fit(train_dataset, epochs=5) # Read the test data. test_dataset = get_dataset_from_csv(\"test_data.csv\", batch_size=265) # Evaluate the model on the test data. _, rmse = model.evaluate(test_dataset, verbose=0) print(f\"Test MAE: {round(rmse, 3)}\") Epoch 1/5 1598/1598 [==============================] - 46s 27ms/step - loss: 1.6617 - mean_absolute_error: 0.9981 Epoch 2/5 1598/1598 [==============================] - 43s 27ms/step - loss: 1.0282 - mean_absolute_error: 0.8101 Epoch 3/5 1598/1598 [==============================] - 43s 27ms/step - loss: 0.9609 - mean_absolute_error: 0.7812 Epoch 4/5 1598/1598 [==============================] - 43s 27ms/step - loss: 0.9272 - mean_absolute_error: 0.7675 Epoch 5/5 1598/1598 [==============================] - 43s 27ms/step - loss: 0.9062 - mean_absolute_error: 0.7588 Test MAE: 0.761 You should achieve a Mean Absolute Error (MAE) at or around 0.7 on the test data. 
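To inspect individual predictions rather than only the aggregate error, you can run the trained model on a batch from the test dataset. Below is a minimal sketch, assuming the model and test_dataset objects created above; the exact values will differ from run to run:
# Take one batch of test examples and compare predicted with actual ratings.
for features, targets in test_dataset.take(1):
    predictions = model.predict(features)
    for predicted, actual in list(zip(predictions[:, 0], targets.numpy()))[:5]:
        print(f"Predicted rating: {predicted:.2f} - actual rating: {actual}")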
Conclusion The BST model uses the Transformer layer in its architecture to capture the sequential signals underlying users’ behavior sequences for recommendation. You can try training this model with different configurations, for example, by increasing the input sequence length and training the model for a larger number of epochs. In addition, you can try including other features like movie release year and customer zipcode, and including cross features like sex X genre. Using Gated Residual and Variable Selection Networks for income level prediction. Introduction This example demonstrates the use of Gated Residual Networks (GRN) and Variable Selection Networks (VSN), proposed by Bryan Lim et al. in Temporal Fusion Transformers (TFT) for Interpretable Multi-horizon Time Series Forecasting, for structured data classification. GRNs give the model the flexibility to apply non-linear processing only where needed. VSNs allow the model to softly remove any unnecessary noisy inputs which could negatively impact performance. Together, these techniques help improve the learning capacity of deep neural network models. Note that this example implements only the GRN and VSN components described in the paper, rather than the whole TFT model, as GRN and VSN can be useful on their own for structured data learning tasks. To run the code you need to use TensorFlow 2.3 or higher. The dataset This example uses the United States Census Income Dataset provided by the UC Irvine Machine Learning Repository. The task is binary classification to determine whether a person makes over USD 50,000 a year. The dataset includes ~300K instances with 41 input features: 7 numerical features and 34 categorical features. Setup import math import numpy as np import pandas as pd import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Prepare the data First, we load the data from the UCI Machine Learning Repository into a Pandas DataFrame. # Column names.
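# The column order below matches the raw census-income data files. instance_weight is used
# later as a per-example training weight rather than as an input feature, and income_level
# is the classification target.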
CSV_HEADER = [ \"age\", \"class_of_worker\", \"detailed_industry_recode\", \"detailed_occupation_recode\", \"education\", \"wage_per_hour\", \"enroll_in_edu_inst_last_wk\", \"marital_stat\", \"major_industry_code\", \"major_occupation_code\", \"race\", \"hispanic_origin\", \"sex\", \"member_of_a_labor_union\", \"reason_for_unemployment\", \"full_or_part_time_employment_stat\", \"capital_gains\", \"capital_losses\", \"dividends_from_stocks\", \"tax_filer_stat\", \"region_of_previous_residence\", \"state_of_previous_residence\", \"detailed_household_and_family_stat\", \"detailed_household_summary_in_household\", \"instance_weight\", \"migration_code-change_in_msa\", \"migration_code-change_in_reg\", \"migration_code-move_within_reg\", \"live_in_this_house_1_year_ago\", \"migration_prev_res_in_sunbelt\", \"num_persons_worked_for_employer\", \"family_members_under_18\", \"country_of_birth_father\", \"country_of_birth_mother\", \"country_of_birth_self\", \"citizenship\", \"own_business_or_self_employed\", \"fill_inc_questionnaire_for_veteran's_admin\", \"veterans_benefits\", \"weeks_worked_in_year\", \"year\", \"income_level\", ] data_url = \"https://archive.ics.uci.edu/ml/machine-learning-databases/census-income-mld/census-income.data.gz\" data = pd.read_csv(data_url, header=None, names=CSV_HEADER) test_data_url = \"https://archive.ics.uci.edu/ml/machine-learning-databases/census-income-mld/census-income.test.gz\" test_data = pd.read_csv(test_data_url, header=None, names=CSV_HEADER) print(f\"Data shape: {data.shape}\") print(f\"Test data shape: {test_data.shape}\") Data shape: (199523, 42) Test data shape: (99762, 42) We convert the target column from string to integer. data[\"income_level\"] = data[\"income_level\"].apply( lambda x: 0 if x == \" - 50000.\" else 1 ) test_data[\"income_level\"] = test_data[\"income_level\"].apply( lambda x: 0 if x == \" - 50000.\" else 1 ) Then, We split the dataset into train and validation sets. random_selection = np.random.rand(len(data.index)) <= 0.85 train_data = data[random_selection] valid_data = data[~random_selection] Finally we store the train and test data splits locally to CSV files. train_data_file = \"train_data.csv\" valid_data_file = \"valid_data.csv\" test_data_file = \"test_data.csv\" train_data.to_csv(train_data_file, index=False, header=False) valid_data.to_csv(valid_data_file, index=False, header=False) test_data.to_csv(test_data_file, index=False, header=False) Define dataset metadata Here, we define the metadata of the dataset that will be useful for reading and parsing the data into input features, and encoding the input features with respect to their types. # Target feature name. TARGET_FEATURE_NAME = \"income_level\" # Weight column name. WEIGHT_COLUMN_NAME = \"instance_weight\" # Numeric feature names. NUMERIC_FEATURE_NAMES = [ \"age\", \"wage_per_hour\", \"capital_gains\", \"capital_losses\", \"dividends_from_stocks\", \"num_persons_worked_for_employer\", \"weeks_worked_in_year\", ] # Categorical features and their vocabulary lists. # Note that we add 'v=' as a prefix to all categorical feature values to make # sure that they are treated as strings. CATEGORICAL_FEATURES_WITH_VOCABULARY = { feature_name: sorted([str(value) for value in list(data[feature_name].unique())]) for feature_name in CSV_HEADER if feature_name not in list(NUMERIC_FEATURE_NAMES + [WEIGHT_COLUMN_NAME, TARGET_FEATURE_NAME]) } # All features names. FEATURE_NAMES = NUMERIC_FEATURE_NAMES + list( CATEGORICAL_FEATURES_WITH_VOCABULARY.keys() ) # Feature default values. 
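# Numeric features, the target, and the instance weight default to 0.0; all remaining
# (categorical) columns default to the string "NA". make_csv_dataset also uses these
# defaults to determine each column's dtype.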
COLUMN_DEFAULTS = [ [0.0] if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME, WEIGHT_COLUMN_NAME] else [\"NA\"] for feature_name in CSV_HEADER ] Create a tf.data.Dataset for training and evaluation We create an input function to read and parse the file, and convert features and labels into a [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for training and evaluation. from tensorflow.keras.layers import StringLookup def process(features, target): for feature_name in features: if feature_name in CATEGORICAL_FEATURES_WITH_VOCABULARY: # Cast categorical feature values to string. features[feature_name] = tf.cast(features[feature_name], tf.dtypes.string) # Get the instance weight. weight = features.pop(WEIGHT_COLUMN_NAME) return features, target, weight def get_dataset_from_csv(csv_file_path, shuffle=False, batch_size=128): dataset = tf.data.experimental.make_csv_dataset( csv_file_path, batch_size=batch_size, column_names=CSV_HEADER, column_defaults=COLUMN_DEFAULTS, label_name=TARGET_FEATURE_NAME, num_epochs=1, header=False, shuffle=shuffle, ).map(process) return dataset Create model inputs def create_model_inputs(): inputs = {} for feature_name in FEATURE_NAMES: if feature_name in NUMERIC_FEATURE_NAMES: inputs[feature_name] = layers.Input( name=feature_name, shape=(), dtype=tf.float32 ) else: inputs[feature_name] = layers.Input( name=feature_name, shape=(), dtype=tf.string ) return inputs Encode input features For categorical features, we encode them using layers.Embedding using the encoding_size as the embedding dimensions. For the numerical features, we apply linear transformation using layers.Dense to project each feature into encoding_size-dimensional vector. Thus, all the encoded features will have the same dimensionality. def encode_inputs(inputs, encoding_size): encoded_features = [] for feature_name in inputs: if feature_name in CATEGORICAL_FEATURES_WITH_VOCABULARY: vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name] # Create a lookup to convert a string values to an integer indices. # Since we are not using a mask token nor expecting any out of vocabulary # (oov) token, we set mask_token to None and num_oov_indices to 0. index = StringLookup( vocabulary=vocabulary, mask_token=None, num_oov_indices=0 ) # Convert the string input values into integer indices. value_index = index(inputs[feature_name]) # Create an embedding layer with the specified dimensions embedding_ecoder = layers.Embedding( input_dim=len(vocabulary), output_dim=encoding_size ) # Convert the index values to embedding representations. encoded_feature = embedding_ecoder(value_index) else: # Project the numeric feature to encoding_size using linear transformation. encoded_feature = tf.expand_dims(inputs[feature_name], -1) encoded_feature = layers.Dense(units=encoding_size)(encoded_feature) encoded_features.append(encoded_feature) return encoded_features Implement the Gated Linear Unit Gated Linear Units (GLUs) provide the flexibility to suppress input that are not relevant for a given task. class GatedLinearUnit(layers.Layer): def __init__(self, units): super(GatedLinearUnit, self).__init__() self.linear = layers.Dense(units) self.sigmoid = layers.Dense(units, activation=\"sigmoid\") def call(self, inputs): return self.linear(inputs) * self.sigmoid(inputs) Implement the Gated Residual Network The Gated Residual Network (GRN) works as follows: Applies the nonlinear ELU transformation to the inputs. Applies linear transformation followed by dropout. 
Applies GLU and adds the original inputs to the output of the GLU to perform skip (residual) connection. Applies layer normalization and produces the output. class GatedResidualNetwork(layers.Layer): def __init__(self, units, dropout_rate): super(GatedResidualNetwork, self).__init__() self.units = units self.elu_dense = layers.Dense(units, activation=\"elu\") self.linear_dense = layers.Dense(units) self.dropout = layers.Dropout(dropout_rate) self.gated_linear_unit = GatedLinearUnit(units) self.layer_norm = layers.LayerNormalization() self.project = layers.Dense(units) def call(self, inputs): x = self.elu_dense(inputs) x = self.linear_dense(x) x = self.dropout(x) if inputs.shape[-1] != self.units: inputs = self.project(inputs) x = inputs + self.gated_linear_unit(x) x = self.layer_norm(x) return x Implement the Variable Selection Network The Variable Selection Network (VSN) works as follows: Applies a GRN to each feature individually. Applies a GRN on the concatenation of all the features, followed by a softmax to produce feature weights. Produces a weighted sum of the output of the individual GRN. Note that the output of the VSN is [batch_size, encoding_size], regardless of the number of the input features. class VariableSelection(layers.Layer): def __init__(self, num_features, units, dropout_rate): super(VariableSelection, self).__init__() self.grns = list() # Create a GRN for each feature independently for idx in range(num_features): grn = GatedResidualNetwork(units, dropout_rate) self.grns.append(grn) # Create a GRN for the concatenation of all the features self.grn_concat = GatedResidualNetwork(units, dropout_rate) self.softmax = layers.Dense(units=num_features, activation=\"softmax\") def call(self, inputs): v = layers.concatenate(inputs) v = self.grn_concat(v) v = tf.expand_dims(self.softmax(v), axis=-1) x = [] for idx, input in enumerate(inputs): x.append(self.grns[idx](input)) x = tf.stack(x, axis=1) outputs = tf.squeeze(tf.matmul(v, x, transpose_a=True), axis=1) return outputs Create Gated Residual and Variable Selection Networks model def create_model(encoding_size): inputs = create_model_inputs() feature_list = encode_inputs(inputs, encoding_size) num_features = len(feature_list) features = VariableSelection(num_features, encoding_size, dropout_rate)( feature_list ) outputs = layers.Dense(units=1, activation=\"sigmoid\")(features) model = keras.Model(inputs=inputs, outputs=outputs) return model Compile, train, and evaluate the model learning_rate = 0.001 dropout_rate = 0.15 batch_size = 265 num_epochs = 20 encoding_size = 16 model = create_model(encoding_size) model.compile( optimizer=keras.optimizers.Adam(learning_rate=learning_rate), loss=keras.losses.BinaryCrossentropy(), metrics=[keras.metrics.BinaryAccuracy(name=\"accuracy\")], ) # Create an early stopping callback. early_stopping = tf.keras.callbacks.EarlyStopping( monitor=\"val_loss\", patience=5, restore_best_weights=True ) print(\"Start training the model...\") train_dataset = get_dataset_from_csv( train_data_file, shuffle=True, batch_size=batch_size ) valid_dataset = get_dataset_from_csv(valid_data_file, batch_size=batch_size) model.fit( train_dataset, epochs=num_epochs, validation_data=valid_dataset, callbacks=[early_stopping], ) print(\"Model training finished.\") print(\"Evaluating model performance...\") test_dataset = get_dataset_from_csv(test_data_file, batch_size=batch_size) _, accuracy = model.evaluate(test_dataset) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") Start training the model... 
Epoch 1/20 641/641 [==============================] - 26s 22ms/step - loss: 317.7028 - accuracy: 0.9353 - val_loss: 230.1805 - val_accuracy: 0.9497 Epoch 2/20 641/641 [==============================] - 13s 19ms/step - loss: 231.4161 - accuracy: 0.9506 - val_loss: 224.7825 - val_accuracy: 0.9498 Epoch 3/20 641/641 [==============================] - 12s 19ms/step - loss: 226.8173 - accuracy: 0.9503 - val_loss: 223.0818 - val_accuracy: 0.9508 Epoch 4/20 641/641 [==============================] - 13s 19ms/step - loss: 224.1516 - accuracy: 0.9507 - val_loss: 221.8637 - val_accuracy: 0.9509 Epoch 5/20 641/641 [==============================] - 13s 19ms/step - loss: 223.9696 - accuracy: 0.9507 - val_loss: 217.8728 - val_accuracy: 0.9513 Epoch 6/20 641/641 [==============================] - 13s 19ms/step - loss: 220.7267 - accuracy: 0.9508 - val_loss: 220.2448 - val_accuracy: 0.9516 Epoch 7/20 641/641 [==============================] - 13s 19ms/step - loss: 219.7464 - accuracy: 0.9514 - val_loss: 216.4628 - val_accuracy: 0.9516 Epoch 8/20 641/641 [==============================] - 13s 19ms/step - loss: 218.7294 - accuracy: 0.9517 - val_loss: 215.2192 - val_accuracy: 0.9519 Epoch 9/20 641/641 [==============================] - 12s 19ms/step - loss: 218.3938 - accuracy: 0.9516 - val_loss: 217.1790 - val_accuracy: 0.9514 Epoch 10/20 641/641 [==============================] - 13s 19ms/step - loss: 217.2871 - accuracy: 0.9522 - val_loss: 213.4623 - val_accuracy: 0.9523 Epoch 11/20 641/641 [==============================] - 13s 19ms/step - loss: 215.0476 - accuracy: 0.9522 - val_loss: 211.6762 - val_accuracy: 0.9523 Epoch 12/20 641/641 [==============================] - 13s 19ms/step - loss: 213.2402 - accuracy: 0.9527 - val_loss: 212.2001 - val_accuracy: 0.9525 Epoch 13/20 641/641 [==============================] - 13s 20ms/step - loss: 212.8123 - accuracy: 0.9530 - val_loss: 207.9878 - val_accuracy: 0.9538 Epoch 14/20 641/641 [==============================] - 13s 19ms/step - loss: 208.4605 - accuracy: 0.9541 - val_loss: 208.0063 - val_accuracy: 0.9543 Epoch 15/20 641/641 [==============================] - 13s 19ms/step - loss: 211.9185 - accuracy: 0.9533 - val_loss: 208.2112 - val_accuracy: 0.9540 Epoch 16/20 641/641 [==============================] - 13s 19ms/step - loss: 207.7694 - accuracy: 0.9544 - val_loss: 207.3279 - val_accuracy: 0.9547 Epoch 17/20 641/641 [==============================] - 13s 19ms/step - loss: 208.6964 - accuracy: 0.9540 - val_loss: 204.3082 - val_accuracy: 0.9553 Epoch 18/20 641/641 [==============================] - 13s 19ms/step - loss: 207.2199 - accuracy: 0.9547 - val_loss: 206.4799 - val_accuracy: 0.9549 Epoch 19/20 641/641 [==============================] - 13s 19ms/step - loss: 206.7960 - accuracy: 0.9548 - val_loss: 206.0898 - val_accuracy: 0.9555 Epoch 20/20 641/641 [==============================] - 13s 20ms/step - loss: 206.2721 - accuracy: 0.9547 - val_loss: 206.6541 - val_accuracy: 0.9549 Model training finished. Evaluating model performance... 377/377 [==============================] - 5s 11ms/step - loss: 206.3511 - accuracy: 0.9541 Test accuracy: 95.41% You should achieve more than 95% accuracy on the test set. To increase the learning capacity of the model, you can try increasing the encoding_size value, or stacking multiple GRN layers on top of the VSN layer. This may require to also increase the dropout_rate value to avoid overfitting. How to train differentiable decision trees for end-to-end learning in deep neural networks. 
Introduction This example provides an implementation of the Deep Neural Decision Forest model introduced by P. Kontschieder et al. for structured data classification. It demonstrates how to build a stochastic and differentiable decision tree model, train it end-to-end, and unify decision trees with deep representation learning. The dataset This example uses the United States Census Income Dataset provided by the UC Irvine Machine Learning Repository. The task is binary classification to predict whether a person is likely to be making over USD 50,000 a year. The dataset includes 48,842 instances with 14 input features (such as age, work class, education, occupation, and so on): 5 numerical features and 9 categorical features. Setup import tensorflow as tf import numpy as np import pandas as pd from tensorflow import keras from tensorflow.keras import layers import math Prepare the data CSV_HEADER = [ \"age\", \"workclass\", \"fnlwgt\", \"education\", \"education_num\", \"marital_status\", \"occupation\", \"relationship\", \"race\", \"gender\", \"capital_gain\", \"capital_loss\", \"hours_per_week\", \"native_country\", \"income_bracket\", ] train_data_url = ( \"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data\" ) train_data = pd.read_csv(train_data_url, header=None, names=CSV_HEADER) test_data_url = ( \"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test\" ) test_data = pd.read_csv(test_data_url, header=None, names=CSV_HEADER) print(f\"Train dataset shape: {train_data.shape}\") print(f\"Test dataset shape: {test_data.shape}\") Train dataset shape: (32561, 15) Test dataset shape: (16282, 15) Remove the first record (because it is not a valid data example) and a trailing 'dot' in the class labels. test_data = test_data[1:] test_data.income_bracket = test_data.income_bracket.apply( lambda value: value.replace(\".\", \"\") ) We store the training and test data splits locally as CSV files. train_data_file = \"train_data.csv\" test_data_file = \"test_data.csv\" train_data.to_csv(train_data_file, index=False, header=False) test_data.to_csv(test_data_file, index=False, header=False) Define dataset metadata Here, we define the metadata of the dataset that will be useful for reading and parsing and encoding input features. # A list of the numerical feature names. NUMERIC_FEATURE_NAMES = [ \"age\", \"education_num\", \"capital_gain\", \"capital_loss\", \"hours_per_week\", ] # A dictionary of the categorical features and their vocabulary. CATEGORICAL_FEATURES_WITH_VOCABULARY = { \"workclass\": sorted(list(train_data[\"workclass\"].unique())), \"education\": sorted(list(train_data[\"education\"].unique())), \"marital_status\": sorted(list(train_data[\"marital_status\"].unique())), \"occupation\": sorted(list(train_data[\"occupation\"].unique())), \"relationship\": sorted(list(train_data[\"relationship\"].unique())), \"race\": sorted(list(train_data[\"race\"].unique())), \"gender\": sorted(list(train_data[\"gender\"].unique())), \"native_country\": sorted(list(train_data[\"native_country\"].unique())), } # A list of the columns to ignore from the dataset. IGNORE_COLUMN_NAMES = [\"fnlwgt\"] # A list of the categorical feature names. CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys()) # A list of all the input features. FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES # A list of column default values for each feature. 
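# The type of each default below tells make_csv_dataset how to parse the column: numeric
# features (and the ignored fnlwgt column) default to 0.0, everything else to the string "NA".
# Note that the TARGET_LABELS defined further down keep their leading space (" <=50K", " >50K"),
# matching how the values appear in the raw CSV files.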
COLUMN_DEFAULTS = [ [0.0] if feature_name in NUMERIC_FEATURE_NAMES + IGNORE_COLUMN_NAMES else [\"NA\"] for feature_name in CSV_HEADER ] # The name of the target feature. TARGET_FEATURE_NAME = \"income_bracket\" # A list of the labels of the target features. TARGET_LABELS = [\" <=50K\", \" >50K\"] Create tf.data.Dataset objects for training and validation We create an input function to read and parse the file, and convert features and labels into a [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for training and validation. We also preprocess the input by mapping the target label to an index. from tensorflow.keras.layers import StringLookup target_label_lookup = StringLookup( vocabulary=TARGET_LABELS, mask_token=None, num_oov_indices=0 ) def get_dataset_from_csv(csv_file_path, shuffle=False, batch_size=128): dataset = tf.data.experimental.make_csv_dataset( csv_file_path, batch_size=batch_size, column_names=CSV_HEADER, column_defaults=COLUMN_DEFAULTS, label_name=TARGET_FEATURE_NAME, num_epochs=1, header=False, na_value=\"?\", shuffle=shuffle, ).map(lambda features, target: (features, target_label_lookup(target))) return dataset.cache() Create model inputs def create_model_inputs(): inputs = {} for feature_name in FEATURE_NAMES: if feature_name in NUMERIC_FEATURE_NAMES: inputs[feature_name] = layers.Input( name=feature_name, shape=(), dtype=tf.float32 ) else: inputs[feature_name] = layers.Input( name=feature_name, shape=(), dtype=tf.string ) return inputs Encode input features def encode_inputs(inputs): encoded_features = [] for feature_name in inputs: if feature_name in CATEGORICAL_FEATURE_NAMES: vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name] # Create a lookup to convert a string values to an integer indices. # Since we are not using a mask token, nor expecting any out of vocabulary # (oov) token, we set mask_token to None and num_oov_indices to 0. lookup = StringLookup( vocabulary=vocabulary, mask_token=None, num_oov_indices=0 ) # Convert the string input values into integer indices. value_index = lookup(inputs[feature_name]) embedding_dims = int(math.sqrt(lookup.vocabulary_size())) # Create an embedding layer with the specified dimensions. embedding = layers.Embedding( input_dim=lookup.vocabulary_size(), output_dim=embedding_dims ) # Convert the index values to embedding representations. encoded_feature = embedding(value_index) else: # Use the numerical features as-is. encoded_feature = inputs[feature_name] if inputs[feature_name].shape[-1] is None: encoded_feature = tf.expand_dims(encoded_feature, -1) encoded_features.append(encoded_feature) encoded_features = layers.concatenate(encoded_features) return encoded_features Deep Neural Decision Tree A neural decision tree model has two sets of weights to learn. The first set is pi, which represents the probability distribution of the classes in the tree leaves. The second set is the weights of the routing layer decision_fn, which represents the probability of going to each leave. The forward pass of the model works as follows: The model expects input features as a single vector encoding all the features of an instance in the batch. This vector can be generated from a Convolution Neural Network (CNN) applied to images or dense transformations applied to structured data features. The model first applies a used_features_mask to randomly select a subset of input features to use. 
Then, the model computes the probabilities (mu) for the input instances to reach the tree leaves by iteratively performing a stochastic routing throughout the tree levels. Finally, the probabilities of reaching the leaves are combined by the class probabilities at the leaves to produce the final outputs. class NeuralDecisionTree(keras.Model): def __init__(self, depth, num_features, used_features_rate, num_classes): super(NeuralDecisionTree, self).__init__() self.depth = depth self.num_leaves = 2 ** depth self.num_classes = num_classes # Create a mask for the randomly selected features. num_used_features = int(num_features * used_features_rate) one_hot = np.eye(num_features) sampled_feature_indicies = np.random.choice( np.arange(num_features), num_used_features, replace=False ) self.used_features_mask = one_hot[sampled_feature_indicies] # Initialize the weights of the classes in leaves. self.pi = tf.Variable( initial_value=tf.random_normal_initializer()( shape=[self.num_leaves, self.num_classes] ), dtype=\"float32\", trainable=True, ) # Initialize the stochastic routing layer. self.decision_fn = layers.Dense( units=self.num_leaves, activation=\"sigmoid\", name=\"decision\" ) def call(self, features): batch_size = tf.shape(features)[0] # Apply the feature mask to the input features. features = tf.matmul( features, self.used_features_mask, transpose_b=True ) # [batch_size, num_used_features] # Compute the routing probabilities. decisions = tf.expand_dims( self.decision_fn(features), axis=2 ) # [batch_size, num_leaves, 1] # Concatenate the routing probabilities with their complements. decisions = layers.concatenate( [decisions, 1 - decisions], axis=2 ) # [batch_size, num_leaves, 2] mu = tf.ones([batch_size, 1, 1]) begin_idx = 1 end_idx = 2 # Traverse the tree in breadth-first order. for level in range(self.depth): mu = tf.reshape(mu, [batch_size, -1, 1]) # [batch_size, 2 ** level, 1] mu = tf.tile(mu, (1, 1, 2)) # [batch_size, 2 ** level, 2] level_decisions = decisions[ :, begin_idx:end_idx, : ] # [batch_size, 2 ** level, 2] mu = mu * level_decisions # [batch_size, 2**level, 2] begin_idx = end_idx end_idx = begin_idx + 2 ** (level + 1) mu = tf.reshape(mu, [batch_size, self.num_leaves]) # [batch_size, num_leaves] probabilities = keras.activations.softmax(self.pi) # [num_leaves, num_classes] outputs = tf.matmul(mu, probabilities) # [batch_size, num_classes] return outputs Deep Neural Decision Forest The neural decision forest model consists of a set of neural decision trees that are trained simultaneously. The output of the forest model is the average outputs of its trees. class NeuralDecisionForest(keras.Model): def __init__(self, num_trees, depth, num_features, used_features_rate, num_classes): super(NeuralDecisionForest, self).__init__() self.ensemble = [] # Initialize the ensemble by adding NeuralDecisionTree instances. # Each tree will have its own randomly selected input features to use. for _ in range(num_trees): self.ensemble.append( NeuralDecisionTree(depth, num_features, used_features_rate, num_classes) ) def call(self, inputs): # Initialize the outputs: a [batch_size, num_classes] matrix of zeros. batch_size = tf.shape(inputs)[0] outputs = tf.zeros([batch_size, num_classes]) # Aggregate the outputs of trees in the ensemble. for tree in self.ensemble: outputs += tree(inputs) # Divide the outputs by the ensemble size to get the average. outputs /= len(self.ensemble) return outputs Finally, let's set up the code that will train and evaluate the model. 
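Before that, since the stochastic routing is the subtlest part of the model, a quick optional sanity check (a sketch of ours, not part of the original example) can confirm that a tiny tree produces a valid probability distribution over the classes:
# Build a toy tree: depth 2 gives 4 leaves; use all 6 of 6 random input features.
toy_tree = NeuralDecisionTree(depth=2, num_features=6, used_features_rate=1.0, num_classes=3)
toy_outputs = toy_tree(tf.random.normal([5, 6]))
print(toy_outputs.shape)  # (5, 3): one class distribution per instance.
print(np.allclose(tf.reduce_sum(toy_outputs, axis=1).numpy(), 1.0))  # True: each row sums to 1.
With that confirmed, the training and evaluation setup follows.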
learning_rate = 0.01 batch_size = 265 num_epochs = 10 hidden_units = [64, 64] def run_experiment(model): model.compile( optimizer=keras.optimizers.Adam(learning_rate=learning_rate), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=[keras.metrics.SparseCategoricalAccuracy()], ) print(\"Start training the model...\") train_dataset = get_dataset_from_csv( train_data_file, shuffle=True, batch_size=batch_size ) model.fit(train_dataset, epochs=num_epochs) print(\"Model training finished\") print(\"Evaluating the model on the test data...\") test_dataset = get_dataset_from_csv(test_data_file, batch_size=batch_size) _, accuracy = model.evaluate(test_dataset) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") Experiment 1: train a decision tree model In this experiment, we train a single neural decision tree model where we use all input features. num_trees = 10 depth = 10 used_features_rate = 1.0 num_classes = len(TARGET_LABELS) def create_tree_model(): inputs = create_model_inputs() features = encode_inputs(inputs) features = layers.BatchNormalization()(features) num_features = features.shape[1] tree = NeuralDecisionTree(depth, num_features, used_features_rate, num_classes) outputs = tree(features) model = keras.Model(inputs=inputs, outputs=outputs) return model tree_model = create_tree_model() run_experiment(tree_model) 123/123 [==============================] - 3s 9ms/step - loss: 0.5326 - sparse_categorical_accuracy: 0.7838 Epoch 2/10 123/123 [==============================] - 1s 9ms/step - loss: 0.3406 - sparse_categorical_accuracy: 0.8469 Epoch 3/10 123/123 [==============================] - 1s 9ms/step - loss: 0.3254 - sparse_categorical_accuracy: 0.8499 Epoch 4/10 123/123 [==============================] - 1s 9ms/step - loss: 0.3188 - sparse_categorical_accuracy: 0.8539 Epoch 5/10 123/123 [==============================] - 1s 9ms/step - loss: 0.3137 - sparse_categorical_accuracy: 0.8573 Epoch 6/10 123/123 [==============================] - 1s 9ms/step - loss: 0.3091 - sparse_categorical_accuracy: 0.8581 Epoch 7/10 123/123 [==============================] - 1s 9ms/step - loss: 0.3039 - sparse_categorical_accuracy: 0.8596 Epoch 8/10 123/123 [==============================] - 1s 9ms/step - loss: 0.2991 - sparse_categorical_accuracy: 0.8633 Epoch 9/10 123/123 [==============================] - 1s 9ms/step - loss: 0.2935 - sparse_categorical_accuracy: 0.8667 Epoch 10/10 123/123 [==============================] - 1s 9ms/step - loss: 0.2877 - sparse_categorical_accuracy: 0.8708 Model training finished Evaluating the model on the test data... 62/62 [==============================] - 1s 5ms/step - loss: 0.3314 - sparse_categorical_accuracy: 0.8471 Test accuracy: 84.71% Experiment 2: train a forest model In this experiment, we train a neural decision forest with num_trees trees where each tree uses randomly selected 50% of the input features. You can control the number of features to be used in each tree by setting the used_features_rate variable. In addition, we set the depth to 5 instead of 10 compared to the previous experiment. 
num_trees = 25 depth = 5 used_features_rate = 0.5 def create_forest_model(): inputs = create_model_inputs() features = encode_inputs(inputs) features = layers.BatchNormalization()(features) num_features = features.shape[1] forest_model = NeuralDecisionForest( num_trees, depth, num_features, used_features_rate, num_classes ) outputs = forest_model(features) model = keras.Model(inputs=inputs, outputs=outputs) return model forest_model = create_forest_model() run_experiment(forest_model) Start training the model... Epoch 1/10 123/123 [==============================] - 9s 7ms/step - loss: 0.5523 - sparse_categorical_accuracy: 0.7872 Epoch 2/10 123/123 [==============================] - 1s 6ms/step - loss: 0.3435 - sparse_categorical_accuracy: 0.8465 Epoch 3/10 123/123 [==============================] - 1s 6ms/step - loss: 0.3260 - sparse_categorical_accuracy: 0.8514 Epoch 4/10 123/123 [==============================] - 1s 6ms/step - loss: 0.3197 - sparse_categorical_accuracy: 0.8533 Epoch 5/10 123/123 [==============================] - 1s 6ms/step - loss: 0.3160 - sparse_categorical_accuracy: 0.8535 Epoch 6/10 123/123 [==============================] - 1s 6ms/step - loss: 0.3133 - sparse_categorical_accuracy: 0.8545 Epoch 7/10 123/123 [==============================] - 1s 6ms/step - loss: 0.3110 - sparse_categorical_accuracy: 0.8556 Epoch 8/10 123/123 [==============================] - 1s 6ms/step - loss: 0.3088 - sparse_categorical_accuracy: 0.8559 Epoch 9/10 123/123 [==============================] - 1s 6ms/step - loss: 0.3066 - sparse_categorical_accuracy: 0.8573 Epoch 10/10 123/123 [==============================] - 1s 6ms/step - loss: 0.3048 - sparse_categorical_accuracy: 0.8573 Model training finished Evaluating the model on the test data... 62/62 [==============================] - 2s 5ms/step - loss: 0.3140 - sparse_categorical_accuracy: 0.8533 Test accuracy: 85.33% Recommending movies using a model trained on Movielens dataset. Introduction This example demonstrates Collaborative filtering using the Movielens dataset to recommend movies to users. The MovieLens ratings dataset lists the ratings given by a set of users to a set of movies. Our goal is to be able to predict ratings for movies a user has not yet watched. The movies with the highest predicted ratings can then be recommended to the user. The steps in the model are as follows: Map user ID to a \"user vector\" via an embedding matrix Map movie ID to a \"movie vector\" via an embedding matrix Compute the dot product between the user vector and movie vector, to obtain the a match score between the user and the movie (predicted rating). Train the embeddings via gradient descent using all known user-movie pairs. 
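To make the third step concrete, the match score for one user-movie pair is a dot product plus the per-user and per-movie biases, passed through a sigmoid; the model defined below applies the same computation to whole batches. A minimal sketch with made-up 4-dimensional embeddings (illustrative values only):
import numpy as np

# Hypothetical embeddings and biases for a single user and movie.
user_vector = np.array([0.1, -0.3, 0.2, 0.05])
movie_vector = np.array([0.4, 0.1, -0.2, 0.3])
user_bias, movie_bias = 0.02, -0.01

logit = np.dot(user_vector, movie_vector) + user_bias + movie_bias
match_score = 1.0 / (1.0 + np.exp(-logit))  # sigmoid keeps the predicted (normalized) rating in [0, 1]
print(round(float(match_score), 3))  # 0.499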
References: Collaborative Filtering Neural Collaborative Filtering import pandas as pd import numpy as np from zipfile import ZipFile import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from pathlib import Path import matplotlib.pyplot as plt First, load the data and apply preprocessing # Download the actual data from http://files.grouplens.org/datasets/movielens/ml-latest-small.zip\" # Use the ratings.csv file movielens_data_file_url = ( \"http://files.grouplens.org/datasets/movielens/ml-latest-small.zip\" ) movielens_zipped_file = keras.utils.get_file( \"ml-latest-small.zip\", movielens_data_file_url, extract=False ) keras_datasets_path = Path(movielens_zipped_file).parents[0] movielens_dir = keras_datasets_path / \"ml-latest-small\" # Only extract the data the first time the script is run. if not movielens_dir.exists(): with ZipFile(movielens_zipped_file, \"r\") as zip: # Extract files print(\"Extracting all the files now...\") zip.extractall(path=keras_datasets_path) print(\"Done!\") ratings_file = movielens_dir / \"ratings.csv\" df = pd.read_csv(ratings_file) First, need to perform some preprocessing to encode users and movies as integer indices. user_ids = df[\"userId\"].unique().tolist() user2user_encoded = {x: i for i, x in enumerate(user_ids)} userencoded2user = {i: x for i, x in enumerate(user_ids)} movie_ids = df[\"movieId\"].unique().tolist() movie2movie_encoded = {x: i for i, x in enumerate(movie_ids)} movie_encoded2movie = {i: x for i, x in enumerate(movie_ids)} df[\"user\"] = df[\"userId\"].map(user2user_encoded) df[\"movie\"] = df[\"movieId\"].map(movie2movie_encoded) num_users = len(user2user_encoded) num_movies = len(movie_encoded2movie) df[\"rating\"] = df[\"rating\"].values.astype(np.float32) # min and max ratings will be used to normalize the ratings later min_rating = min(df[\"rating\"]) max_rating = max(df[\"rating\"]) print( \"Number of users: {}, Number of Movies: {}, Min rating: {}, Max rating: {}\".format( num_users, num_movies, min_rating, max_rating ) ) Number of users: 610, Number of Movies: 9724, Min rating: 0.5, Max rating: 5.0 Prepare training and validation data df = df.sample(frac=1, random_state=42) x = df[[\"user\", \"movie\"]].values # Normalize the targets between 0 and 1. Makes it easy to train. y = df[\"rating\"].apply(lambda x: (x - min_rating) / (max_rating - min_rating)).values # Assuming training on 90% of the data and validating on 10%. train_indices = int(0.9 * df.shape[0]) x_train, x_val, y_train, y_val = ( x[:train_indices], x[train_indices:], y[:train_indices], y[train_indices:], ) Create the model We embed both users and movies in to 50-dimensional vectors. The model computes a match score between user and movie embeddings via a dot product, and adds a per-movie and per-user bias. The match score is scaled to the [0, 1] interval via a sigmoid (since our ratings are normalized to this range). 
EMBEDDING_SIZE = 50 class RecommenderNet(keras.Model): def __init__(self, num_users, num_movies, embedding_size, **kwargs): super(RecommenderNet, self).__init__(**kwargs) self.num_users = num_users self.num_movies = num_movies self.embedding_size = embedding_size self.user_embedding = layers.Embedding( num_users, embedding_size, embeddings_initializer=\"he_normal\", embeddings_regularizer=keras.regularizers.l2(1e-6), ) self.user_bias = layers.Embedding(num_users, 1) self.movie_embedding = layers.Embedding( num_movies, embedding_size, embeddings_initializer=\"he_normal\", embeddings_regularizer=keras.regularizers.l2(1e-6), ) self.movie_bias = layers.Embedding(num_movies, 1) def call(self, inputs): user_vector = self.user_embedding(inputs[:, 0]) user_bias = self.user_bias(inputs[:, 0]) movie_vector = self.movie_embedding(inputs[:, 1]) movie_bias = self.movie_bias(inputs[:, 1]) dot_user_movie = tf.tensordot(user_vector, movie_vector, 2) # Add all the components (including bias) x = dot_user_movie + user_bias + movie_bias # The sigmoid activation forces the rating to between 0 and 1 return tf.nn.sigmoid(x) model = RecommenderNet(num_users, num_movies, EMBEDDING_SIZE) model.compile( loss=tf.keras.losses.BinaryCrossentropy(), optimizer=keras.optimizers.Adam(lr=0.001) ) Train the model based on the data split history = model.fit( x=x_train, y=y_train, batch_size=64, epochs=5, verbose=1, validation_data=(x_val, y_val), ) Epoch 1/5 1418/1418 [==============================] - 6s 4ms/step - loss: 0.6368 - val_loss: 0.6206 Epoch 2/5 1418/1418 [==============================] - 7s 5ms/step - loss: 0.6131 - val_loss: 0.6176 Epoch 3/5 1418/1418 [==============================] - 6s 4ms/step - loss: 0.6083 - val_loss: 0.6146 Epoch 4/5 1418/1418 [==============================] - 6s 4ms/step - loss: 0.6072 - val_loss: 0.6131 Epoch 5/5 1418/1418 [==============================] - 6s 4ms/step - loss: 0.6075 - val_loss: 0.6150 Plot training and validation loss plt.plot(history.history[\"loss\"]) plt.plot(history.history[\"val_loss\"]) plt.title(\"model loss\") plt.ylabel(\"loss\") plt.xlabel(\"epoch\") plt.legend([\"train\", \"test\"], loc=\"upper left\") plt.show() png Show top 10 movie recommendations to a user movie_df = pd.read_csv(movielens_dir / \"movies.csv\") # Let us get a user and see the top recommendations. 
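# The code below: (1) samples a random user, (2) collects the movies that user has not rated,
# (3) scores every unwatched movie with the trained model, and (4) prints the user's own
# top-rated movies followed by the 10 highest-scoring recommendations.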
user_id = df.userId.sample(1).iloc[0] movies_watched_by_user = df[df.userId == user_id] movies_not_watched = movie_df[ ~movie_df[\"movieId\"].isin(movies_watched_by_user.movieId.values) ][\"movieId\"] movies_not_watched = list( set(movies_not_watched).intersection(set(movie2movie_encoded.keys())) ) movies_not_watched = [[movie2movie_encoded.get(x)] for x in movies_not_watched] user_encoder = user2user_encoded.get(user_id) user_movie_array = np.hstack( ([[user_encoder]] * len(movies_not_watched), movies_not_watched) ) ratings = model.predict(user_movie_array).flatten() top_ratings_indices = ratings.argsort()[-10:][::-1] recommended_movie_ids = [ movie_encoded2movie.get(movies_not_watched[x][0]) for x in top_ratings_indices ] print(\"Showing recommendations for user: {}\".format(user_id)) print(\"====\" * 9) print(\"Movies with high ratings from user\") print(\"----\" * 8) top_movies_user = ( movies_watched_by_user.sort_values(by=\"rating\", ascending=False) .head(5) .movieId.values ) movie_df_rows = movie_df[movie_df[\"movieId\"].isin(top_movies_user)] for row in movie_df_rows.itertuples(): print(row.title, \":\", row.genres) print(\"----\" * 8) print(\"Top 10 movie recommendations\") print(\"----\" * 8) recommended_movies = movie_df[movie_df[\"movieId\"].isin(recommended_movie_ids)] for row in recommended_movies.itertuples(): print(row.title, \":\", row.genres) Showing recommendations for user: 474 ==================================== Movies with high ratings from user -------------------------------- Fugitive, The (1993) : Thriller Remains of the Day, The (1993) : Drama|Romance West Side Story (1961) : Drama|Musical|Romance X2: X-Men United (2003) : Action|Adventure|Sci-Fi|Thriller Spider-Man 2 (2004) : Action|Adventure|Sci-Fi|IMAX -------------------------------- Top 10 movie recommendations -------------------------------- Dazed and Confused (1993) : Comedy Ghost in the Shell (Kôkaku kidôtai) (1995) : Animation|Sci-Fi Drugstore Cowboy (1989) : Crime|Drama Road Warrior, The (Mad Max 2) (1981) : Action|Adventure|Sci-Fi|Thriller Dark Knight, The (2008) : Action|Crime|Drama|IMAX Inglourious Basterds (2009) : Action|Drama|War Up (2009) : Adventure|Animation|Children|Drama Dark Knight Rises, The (2012) : Action|Adventure|Crime|IMAX Star Wars: Episode VII - The Force Awakens (2015) : Action|Adventure|Fantasy|Sci-Fi|IMAX Thor: Ragnarok (2017) : Action|Adventure|Sci-Fi Demonstration of how to handle highly imbalanced classification problems. Introduction This example looks at the Kaggle Credit Card Fraud Detection dataset to demonstrate how to train a classification model on data with highly imbalanced classes. 
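To see why imbalance needs special handling, consider the trivial baseline that always predicts "not fraud". The quick check below is a sketch for illustration; the class counts are the dataset's overall statistics, consistent with the numbers printed later in this example. Such a baseline already reaches about 99.8% accuracy while catching zero frauds, which is why the example tracks false negatives, false positives, precision, and recall rather than plain accuracy.
# Accuracy of a model that always predicts the majority (legitimate) class.
total_transactions = 284_807  # total rows in the Kaggle credit card fraud CSV
fraud_transactions = 492      # positive (fraud) rows in the full dataset
baseline_accuracy = (total_transactions - fraud_transactions) / total_transactions
print(f"Always-predict-legitimate accuracy: {baseline_accuracy:.4%}")  # ~99.83%, with 0% recall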
First, vectorize the CSV data import csv import numpy as np # Get the real data from https://www.kaggle.com/mlg-ulb/creditcardfraud/ fname = \"/Users/fchollet/Downloads/creditcard.csv\" all_features = [] all_targets = [] with open(fname) as f: for i, line in enumerate(f): if i == 0: print(\"HEADER:\", line.strip()) continue # Skip header fields = line.strip().split(\",\") all_features.append([float(v.replace('\"', \"\")) for v in fields[:-1]]) all_targets.append([int(fields[-1].replace('\"', \"\"))]) if i == 1: print(\"EXAMPLE FEATURES:\", all_features[-1]) features = np.array(all_features, dtype=\"float32\") targets = np.array(all_targets, dtype=\"uint8\") print(\"features.shape:\", features.shape) print(\"targets.shape:\", targets.shape) HEADER: \"Time\",\"V1\",\"V2\",\"V3\",\"V4\",\"V5\",\"V6\",\"V7\",\"V8\",\"V9\",\"V10\",\"V11\",\"V12\",\"V13\",\"V14\",\"V15\",\"V16\",\"V17\",\"V18\",\"V19\",\"V20\",\"V21\",\"V22\",\"V23\",\"V24\",\"V25\",\"V26\",\"V27\",\"V28\",\"Amount\",\"Class\" EXAMPLE FEATURES: [0.0, -1.3598071336738, -0.0727811733098497, 2.53634673796914, 1.37815522427443, -0.338320769942518, 0.462387777762292, 0.239598554061257, 0.0986979012610507, 0.363786969611213, 0.0907941719789316, -0.551599533260813, -0.617800855762348, -0.991389847235408, -0.311169353699879, 1.46817697209427, -0.470400525259478, 0.207971241929242, 0.0257905801985591, 0.403992960255733, 0.251412098239705, -0.018306777944153, 0.277837575558899, -0.110473910188767, 0.0669280749146731, 0.128539358273528, -0.189114843888824, 0.133558376740387, -0.0210530534538215, 149.62] features.shape: (284807, 30) targets.shape: (284807, 1) Prepare a validation set num_val_samples = int(len(features) * 0.2) train_features = features[:-num_val_samples] train_targets = targets[:-num_val_samples] val_features = features[-num_val_samples:] val_targets = targets[-num_val_samples:] print(\"Number of training samples:\", len(train_features)) print(\"Number of validation samples:\", len(val_features)) Number of training samples: 227846 Number of validation samples: 56961 Analyze class imbalance in the targets counts = np.bincount(train_targets[:, 0]) print( \"Number of positive samples in training data: {} ({:.2f}% of total)\".format( counts[1], 100 * float(counts[1]) / len(train_targets) ) ) weight_for_0 = 1.0 / counts[0] weight_for_1 = 1.0 / counts[1] Number of positive samples in training data: 417 (0.18% of total) Normalize the data using training set statistics mean = np.mean(train_features, axis=0) train_features -= mean val_features -= mean std = np.std(train_features, axis=0) train_features /= std val_features /= std Build a binary classification model from tensorflow import keras model = keras.Sequential( [ keras.layers.Dense( 256, activation=\"relu\", input_shape=(train_features.shape[-1],) ), keras.layers.Dense(256, activation=\"relu\"), keras.layers.Dropout(0.3), keras.layers.Dense(256, activation=\"relu\"), keras.layers.Dropout(0.3), keras.layers.Dense(1, activation=\"sigmoid\"), ] ) model.summary() Model: \"sequential\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 256) 7936 _________________________________________________________________ dense_1 (Dense) (None, 256) 65792 _________________________________________________________________ dropout (Dropout) (None, 256) 0 _________________________________________________________________ dense_2 (Dense) (None, 256) 65792 
_________________________________________________________________ dropout_1 (Dropout) (None, 256) 0 _________________________________________________________________ dense_3 (Dense) (None, 1) 257 ================================================================= Total params: 139,777 Trainable params: 139,777 Non-trainable params: 0 _________________________________________________________________ Train the model with class_weight argument metrics = [ keras.metrics.FalseNegatives(name=\"fn\"), keras.metrics.FalsePositives(name=\"fp\"), keras.metrics.TrueNegatives(name=\"tn\"), keras.metrics.TruePositives(name=\"tp\"), keras.metrics.Precision(name=\"precision\"), keras.metrics.Recall(name=\"recall\"), ] model.compile( optimizer=keras.optimizers.Adam(1e-2), loss=\"binary_crossentropy\", metrics=metrics ) callbacks = [keras.callbacks.ModelCheckpoint(\"fraud_model_at_epoch_{epoch}.h5\")] class_weight = {0: weight_for_0, 1: weight_for_1} model.fit( train_features, train_targets, batch_size=2048, epochs=30, verbose=2, callbacks=callbacks, validation_data=(val_features, val_targets), class_weight=class_weight, ) Epoch 1/30 112/112 - 2s - loss: 2.4210e-06 - fn: 51.0000 - fp: 29417.0000 - tn: 198012.0000 - tp: 366.0000 - precision: 0.0123 - recall: 0.8777 - val_loss: 0.0759 - val_fn: 9.0000 - val_fp: 611.0000 - val_tn: 56275.0000 - val_tp: 66.0000 - val_precision: 0.0975 - val_recall: 0.8800 Epoch 2/30 112/112 - 2s - loss: 1.4337e-06 - fn: 35.0000 - fp: 7058.0000 - tn: 220371.0000 - tp: 382.0000 - precision: 0.0513 - recall: 0.9161 - val_loss: 0.1632 - val_fn: 6.0000 - val_fp: 2343.0000 - val_tn: 54543.0000 - val_tp: 69.0000 - val_precision: 0.0286 - val_recall: 0.9200 Epoch 3/30 112/112 - 2s - loss: 1.2100e-06 - fn: 27.0000 - fp: 7382.0000 - tn: 220047.0000 - tp: 390.0000 - precision: 0.0502 - recall: 0.9353 - val_loss: 0.1882 - val_fn: 5.0000 - val_fp: 3690.0000 - val_tn: 53196.0000 - val_tp: 70.0000 - val_precision: 0.0186 - val_recall: 0.9333 Epoch 4/30 112/112 - 2s - loss: 1.0770e-06 - fn: 24.0000 - fp: 7306.0000 - tn: 220123.0000 - tp: 393.0000 - precision: 0.0510 - recall: 0.9424 - val_loss: 0.0444 - val_fn: 9.0000 - val_fp: 674.0000 - val_tn: 56212.0000 - val_tp: 66.0000 - val_precision: 0.0892 - val_recall: 0.8800 Epoch 5/30 112/112 - 2s - loss: 9.3284e-07 - fn: 18.0000 - fp: 5607.0000 - tn: 221822.0000 - tp: 399.0000 - precision: 0.0664 - recall: 0.9568 - val_loss: 0.0455 - val_fn: 8.0000 - val_fp: 604.0000 - val_tn: 56282.0000 - val_tp: 67.0000 - val_precision: 0.0999 - val_recall: 0.8933 Epoch 6/30 112/112 - 2s - loss: 8.9186e-07 - fn: 21.0000 - fp: 6917.0000 - tn: 220512.0000 - tp: 396.0000 - precision: 0.0542 - recall: 0.9496 - val_loss: 0.0385 - val_fn: 9.0000 - val_fp: 462.0000 - val_tn: 56424.0000 - val_tp: 66.0000 - val_precision: 0.1250 - val_recall: 0.8800 Epoch 7/30 112/112 - 2s - loss: 6.4562e-07 - fn: 13.0000 - fp: 5878.0000 - tn: 221551.0000 - tp: 404.0000 - precision: 0.0643 - recall: 0.9688 - val_loss: 0.0205 - val_fn: 9.0000 - val_fp: 372.0000 - val_tn: 56514.0000 - val_tp: 66.0000 - val_precision: 0.1507 - val_recall: 0.8800 Epoch 8/30 112/112 - 2s - loss: 7.3378e-07 - fn: 15.0000 - fp: 6825.0000 - tn: 220604.0000 - tp: 402.0000 - precision: 0.0556 - recall: 0.9640 - val_loss: 0.0188 - val_fn: 10.0000 - val_fp: 246.0000 - val_tn: 56640.0000 - val_tp: 65.0000 - val_precision: 0.2090 - val_recall: 0.8667 Epoch 9/30 112/112 - 2s - loss: 5.1385e-07 - fn: 9.0000 - fp: 5265.0000 - tn: 222164.0000 - tp: 408.0000 - precision: 0.0719 - recall: 0.9784 - val_loss: 0.0244 - val_fn: 
11.0000 - val_fp: 495.0000 - val_tn: 56391.0000 - val_tp: 64.0000 - val_precision: 0.1145 - val_recall: 0.8533 Epoch 10/30 112/112 - 2s - loss: 8.6498e-07 - fn: 13.0000 - fp: 8506.0000 - tn: 218923.0000 - tp: 404.0000 - precision: 0.0453 - recall: 0.9688 - val_loss: 0.0177 - val_fn: 11.0000 - val_fp: 367.0000 - val_tn: 56519.0000 - val_tp: 64.0000 - val_precision: 0.1485 - val_recall: 0.8533 Epoch 11/30 112/112 - 2s - loss: 6.0585e-07 - fn: 12.0000 - fp: 6676.0000 - tn: 220753.0000 - tp: 405.0000 - precision: 0.0572 - recall: 0.9712 - val_loss: 0.0356 - val_fn: 9.0000 - val_fp: 751.0000 - val_tn: 56135.0000 - val_tp: 66.0000 - val_precision: 0.0808 - val_recall: 0.8800 Epoch 12/30 112/112 - 2s - loss: 6.0788e-07 - fn: 9.0000 - fp: 6219.0000 - tn: 221210.0000 - tp: 408.0000 - precision: 0.0616 - recall: 0.9784 - val_loss: 0.0249 - val_fn: 10.0000 - val_fp: 487.0000 - val_tn: 56399.0000 - val_tp: 65.0000 - val_precision: 0.1178 - val_recall: 0.8667 Epoch 13/30 112/112 - 3s - loss: 8.3899e-07 - fn: 12.0000 - fp: 6612.0000 - tn: 220817.0000 - tp: 405.0000 - precision: 0.0577 - recall: 0.9712 - val_loss: 0.0905 - val_fn: 5.0000 - val_fp: 2159.0000 - val_tn: 54727.0000 - val_tp: 70.0000 - val_precision: 0.0314 - val_recall: 0.9333 Epoch 14/30 112/112 - 3s - loss: 6.0584e-07 - fn: 8.0000 - fp: 6823.0000 - tn: 220606.0000 - tp: 409.0000 - precision: 0.0566 - recall: 0.9808 - val_loss: 0.0205 - val_fn: 10.0000 - val_fp: 446.0000 - val_tn: 56440.0000 - val_tp: 65.0000 - val_precision: 0.1272 - val_recall: 0.8667 Epoch 15/30 112/112 - 2s - loss: 3.9569e-07 - fn: 6.0000 - fp: 3820.0000 - tn: 223609.0000 - tp: 411.0000 - precision: 0.0971 - recall: 0.9856 - val_loss: 0.0212 - val_fn: 10.0000 - val_fp: 413.0000 - val_tn: 56473.0000 - val_tp: 65.0000 - val_precision: 0.1360 - val_recall: 0.8667 Epoch 16/30 112/112 - 2s - loss: 5.4548e-07 - fn: 5.0000 - fp: 3910.0000 - tn: 223519.0000 - tp: 412.0000 - precision: 0.0953 - recall: 0.9880 - val_loss: 0.0906 - val_fn: 8.0000 - val_fp: 1905.0000 - val_tn: 54981.0000 - val_tp: 67.0000 - val_precision: 0.0340 - val_recall: 0.8933 Epoch 17/30 112/112 - 3s - loss: 6.2734e-07 - fn: 8.0000 - fp: 6005.0000 - tn: 221424.0000 - tp: 409.0000 - precision: 0.0638 - recall: 0.9808 - val_loss: 0.0161 - val_fn: 10.0000 - val_fp: 340.0000 - val_tn: 56546.0000 - val_tp: 65.0000 - val_precision: 0.1605 - val_recall: 0.8667 Epoch 18/30 112/112 - 3s - loss: 4.9752e-07 - fn: 5.0000 - fp: 4302.0000 - tn: 223127.0000 - tp: 412.0000 - precision: 0.0874 - recall: 0.9880 - val_loss: 0.0186 - val_fn: 10.0000 - val_fp: 408.0000 - val_tn: 56478.0000 - val_tp: 65.0000 - val_precision: 0.1374 - val_recall: 0.8667 Epoch 19/30 112/112 - 3s - loss: 6.7296e-07 - fn: 5.0000 - fp: 5986.0000 - tn: 221443.0000 - tp: 412.0000 - precision: 0.0644 - recall: 0.9880 - val_loss: 0.0165 - val_fn: 10.0000 - val_fp: 276.0000 - val_tn: 56610.0000 - val_tp: 65.0000 - val_precision: 0.1906 - val_recall: 0.8667 Epoch 20/30 112/112 - 3s - loss: 5.0178e-07 - fn: 7.0000 - fp: 5161.0000 - tn: 222268.0000 - tp: 410.0000 - precision: 0.0736 - recall: 0.9832 - val_loss: 0.2156 - val_fn: 7.0000 - val_fp: 1041.0000 - val_tn: 55845.0000 - val_tp: 68.0000 - val_precision: 0.0613 - val_recall: 0.9067 Epoch 21/30 112/112 - 3s - loss: 7.1907e-07 - fn: 7.0000 - fp: 5825.0000 - tn: 221604.0000 - tp: 410.0000 - precision: 0.0658 - recall: 0.9832 - val_loss: 0.0283 - val_fn: 8.0000 - val_fp: 511.0000 - val_tn: 56375.0000 - val_tp: 67.0000 - val_precision: 0.1159 - val_recall: 0.8933 Epoch 22/30 112/112 - 3s - loss: 3.6405e-07 - 
fn: 6.0000 - fp: 4149.0000 - tn: 223280.0000 - tp: 411.0000 - precision: 0.0901 - recall: 0.9856 - val_loss: 0.0269 - val_fn: 8.0000 - val_fp: 554.0000 - val_tn: 56332.0000 - val_tp: 67.0000 - val_precision: 0.1079 - val_recall: 0.8933 Epoch 23/30 112/112 - 3s - loss: 2.8464e-07 - fn: 1.0000 - fp: 4131.0000 - tn: 223298.0000 - tp: 416.0000 - precision: 0.0915 - recall: 0.9976 - val_loss: 0.0097 - val_fn: 10.0000 - val_fp: 191.0000 - val_tn: 56695.0000 - val_tp: 65.0000 - val_precision: 0.2539 - val_recall: 0.8667 Epoch 24/30 112/112 - 3s - loss: 3.2445e-07 - fn: 3.0000 - fp: 4040.0000 - tn: 223389.0000 - tp: 414.0000 - precision: 0.0930 - recall: 0.9928 - val_loss: 0.0129 - val_fn: 9.0000 - val_fp: 278.0000 - val_tn: 56608.0000 - val_tp: 66.0000 - val_precision: 0.1919 - val_recall: 0.8800 Epoch 25/30 112/112 - 3s - loss: 5.4032e-07 - fn: 4.0000 - fp: 4834.0000 - tn: 222595.0000 - tp: 413.0000 - precision: 0.0787 - recall: 0.9904 - val_loss: 0.1334 - val_fn: 7.0000 - val_fp: 885.0000 - val_tn: 56001.0000 - val_tp: 68.0000 - val_precision: 0.0714 - val_recall: 0.9067 Epoch 26/30 112/112 - 3s - loss: 1.2099e-06 - fn: 9.0000 - fp: 5767.0000 - tn: 221662.0000 - tp: 408.0000 - precision: 0.0661 - recall: 0.9784 - val_loss: 0.0426 - val_fn: 11.0000 - val_fp: 211.0000 - val_tn: 56675.0000 - val_tp: 64.0000 - val_precision: 0.2327 - val_recall: 0.8533 Epoch 27/30 112/112 - 2s - loss: 5.0924e-07 - fn: 7.0000 - fp: 4185.0000 - tn: 223244.0000 - tp: 410.0000 - precision: 0.0892 - recall: 0.9832 - val_loss: 0.0345 - val_fn: 6.0000 - val_fp: 710.0000 - val_tn: 56176.0000 - val_tp: 69.0000 - val_precision: 0.0886 - val_recall: 0.9200 Epoch 28/30 112/112 - 3s - loss: 4.9177e-07 - fn: 7.0000 - fp: 3871.0000 - tn: 223558.0000 - tp: 410.0000 - precision: 0.0958 - recall: 0.9832 - val_loss: 0.0631 - val_fn: 7.0000 - val_fp: 912.0000 - val_tn: 55974.0000 - val_tp: 68.0000 - val_precision: 0.0694 - val_recall: 0.9067 Epoch 29/30 112/112 - 3s - loss: 1.8390e-06 - fn: 9.0000 - fp: 7199.0000 - tn: 220230.0000 - tp: 408.0000 - precision: 0.0536 - recall: 0.9784 - val_loss: 0.0661 - val_fn: 10.0000 - val_fp: 292.0000 - val_tn: 56594.0000 - val_tp: 65.0000 - val_precision: 0.1821 - val_recall: 0.8667 Epoch 30/30 112/112 - 3s - loss: 3.5976e-06 - fn: 14.0000 - fp: 5541.0000 - tn: 221888.0000 - tp: 403.0000 - precision: 0.0678 - recall: 0.9664 - val_loss: 0.1205 - val_fn: 10.0000 - val_fp: 206.0000 - val_tn: 56680.0000 - val_tp: 65.0000 - val_precision: 0.2399 - val_recall: 0.8667 Conclusions At the end of training, out of 56,961 validation transactions, we are: Correctly identifying 66 of them as fraudulent Missing 9 fraudulent transactions At the cost of incorrectly flagging 441 legitimate transactions In the real world, one would put an even higher weight on class 1, so as to reflect that False Negatives are more costly than False Positives. Next time your credit card gets declined in an online purchase -- this is why. Binary classification of structured data including numerical and categorical features. Introduction This example demonstrates how to do structured data classification, starting from a raw CSV file. Our data includes both numerical and categorical features. We will use Keras preprocessing layers to normalize the numerical features and vectorize the categorical ones. Note that this example should be run with TensorFlow 2.5 or higher. The dataset Our dataset is provided by the Cleveland Clinic Foundation for Heart Disease. It's a CSV file with 303 rows. 
Each row contains information about a patient (a sample), and each column describes an attribute of the patient (a feature). We use the features to predict whether a patient has a heart disease (binary classification). Here's the description of each feature: Column Description Feature Type Age Age in years Numerical Sex (1 = male; 0 = female) Categorical CP Chest pain type (0, 1, 2, 3, 4) Categorical Trestbpd Resting blood pressure (in mm Hg on admission) Numerical Chol Serum cholesterol in mg/dl Numerical FBS fasting blood sugar in 120 mg/dl (1 = true; 0 = false) Categorical RestECG Resting electrocardiogram results (0, 1, 2) Categorical Thalach Maximum heart rate achieved Numerical Exang Exercise induced angina (1 = yes; 0 = no) Categorical Oldpeak ST depression induced by exercise relative to rest Numerical Slope Slope of the peak exercise ST segment Numerical CA Number of major vessels (0-3) colored by fluoroscopy Both numerical & categorical Thal 3 = normal; 6 = fixed defect; 7 = reversible defect Categorical Target Diagnosis of heart disease (1 = true; 0 = false) Target Setup import tensorflow as tf import numpy as np import pandas as pd from tensorflow import keras from tensorflow.keras import layers Preparing the data Let's download the data and load it into a Pandas dataframe: file_url = \"http://storage.googleapis.com/download.tensorflow.org/data/heart.csv\" dataframe = pd.read_csv(file_url) The dataset includes 303 samples with 14 columns per sample (13 features, plus the target label): dataframe.shape (303, 14) Here's a preview of a few samples: dataframe.head() age sex cp trestbps chol fbs restecg thalach exang oldpeak slope ca thal target 0 63 1 1 145 233 1 2 150 0 2.3 3 0 fixed 0 1 67 1 4 160 286 0 2 108 1 1.5 2 3 normal 1 2 67 1 4 120 229 0 2 129 1 2.6 2 2 reversible 0 3 37 1 3 130 250 0 0 187 0 3.5 3 0 normal 0 4 41 0 2 130 204 0 2 172 0 1.4 1 0 normal 0 The last column, \"target\", indicates whether the patient has a heart disease (1) or not (0). Let's split the data into a training and validation set: val_dataframe = dataframe.sample(frac=0.2, random_state=1337) train_dataframe = dataframe.drop(val_dataframe.index) print( \"Using %d samples for training and %d for validation\" % (len(train_dataframe), len(val_dataframe)) ) Using 242 samples for training and 61 for validation Let's generate tf.data.Dataset objects for each dataframe: def dataframe_to_dataset(dataframe): dataframe = dataframe.copy() labels = dataframe.pop(\"target\") ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels)) ds = ds.shuffle(buffer_size=len(dataframe)) return ds train_ds = dataframe_to_dataset(train_dataframe) val_ds = dataframe_to_dataset(val_dataframe) Each Dataset yields a tuple (input, target) where input is a dictionary of features and target is the value 0 or 1: for x, y in train_ds.take(1): print(\"Input:\", x) print(\"Target:\", y) Input: {'age': , 'sex': , 'cp': , 'trestbps': , 'chol': , 'fbs': , 'restecg': , 'thalach': , 'exang': , 'oldpeak': , 'slope': , 'ca': , 'thal': } Target: tf.Tensor(1, shape=(), dtype=int64) Let's batch the datasets: train_ds = train_ds.batch(32) val_ds = val_ds.batch(32) Feature preprocessing with Keras layers The following features are categorical features encoded as integers: sex cp fbs restecg exang ca We will encode these features using one-hot encoding. We have two options here: Use CategoryEncoding(), which requires knowing the range of input values and will error on input outside the range. 
Use IntegerLookup() which will build a lookup table for inputs and reserve an output index for unkown input values. For this example, we want a simple solution that will handle out of range inputs at inference, so we will use IntegerLookup(). We also have a categorical feature encoded as a string: thal. We will create an index of all possible features and encode output using the StringLookup() layer. Finally, the following feature are continuous numerical features: age trestbps chol thalach oldpeak slope For each of these features, we will use a Normalization() layer to make sure the mean of each feature is 0 and its standard deviation is 1. Below, we define 3 utility functions to do the operations: encode_numerical_feature to apply featurewise normalization to numerical features. encode_string_categorical_feature to first turn string inputs into integer indices, then one-hot encode these integer indices. encode_integer_categorical_feature to one-hot encode integer categorical features. from tensorflow.keras.layers import IntegerLookup from tensorflow.keras.layers import Normalization from tensorflow.keras.layers import StringLookup def encode_numerical_feature(feature, name, dataset): # Create a Normalization layer for our feature normalizer = Normalization() # Prepare a Dataset that only yields our feature feature_ds = dataset.map(lambda x, y: x[name]) feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1)) # Learn the statistics of the data normalizer.adapt(feature_ds) # Normalize the input feature encoded_feature = normalizer(feature) return encoded_feature def encode_categorical_feature(feature, name, dataset, is_string): lookup_class = StringLookup if is_string else IntegerLookup # Create a lookup layer which will turn strings into integer indices lookup = lookup_class(output_mode=\"binary\") # Prepare a Dataset that only yields our feature feature_ds = dataset.map(lambda x, y: x[name]) feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1)) # Learn the set of possible string values and assign them a fixed integer index lookup.adapt(feature_ds) # Turn the string input into integer indices encoded_feature = lookup(feature) return encoded_feature Build a model With this done, we can create our end-to-end model: # Categorical features encoded as integers sex = keras.Input(shape=(1,), name=\"sex\", dtype=\"int64\") cp = keras.Input(shape=(1,), name=\"cp\", dtype=\"int64\") fbs = keras.Input(shape=(1,), name=\"fbs\", dtype=\"int64\") restecg = keras.Input(shape=(1,), name=\"restecg\", dtype=\"int64\") exang = keras.Input(shape=(1,), name=\"exang\", dtype=\"int64\") ca = keras.Input(shape=(1,), name=\"ca\", dtype=\"int64\") # Categorical feature encoded as string thal = keras.Input(shape=(1,), name=\"thal\", dtype=\"string\") # Numerical features age = keras.Input(shape=(1,), name=\"age\") trestbps = keras.Input(shape=(1,), name=\"trestbps\") chol = keras.Input(shape=(1,), name=\"chol\") thalach = keras.Input(shape=(1,), name=\"thalach\") oldpeak = keras.Input(shape=(1,), name=\"oldpeak\") slope = keras.Input(shape=(1,), name=\"slope\") all_inputs = [ sex, cp, fbs, restecg, exang, ca, thal, age, trestbps, chol, thalach, oldpeak, slope, ] # Integer categorical features sex_encoded = encode_categorical_feature(sex, \"sex\", train_ds, False) cp_encoded = encode_categorical_feature(cp, \"cp\", train_ds, False) fbs_encoded = encode_categorical_feature(fbs, \"fbs\", train_ds, False) restecg_encoded = encode_categorical_feature(restecg, \"restecg\", train_ds, False) exang_encoded = 
encode_categorical_feature(exang, \"exang\", train_ds, False) ca_encoded = encode_categorical_feature(ca, \"ca\", train_ds, False) # String categorical features thal_encoded = encode_categorical_feature(thal, \"thal\", train_ds, True) # Numerical features age_encoded = encode_numerical_feature(age, \"age\", train_ds) trestbps_encoded = encode_numerical_feature(trestbps, \"trestbps\", train_ds) chol_encoded = encode_numerical_feature(chol, \"chol\", train_ds) thalach_encoded = encode_numerical_feature(thalach, \"thalach\", train_ds) oldpeak_encoded = encode_numerical_feature(oldpeak, \"oldpeak\", train_ds) slope_encoded = encode_numerical_feature(slope, \"slope\", train_ds) all_features = layers.concatenate( [ sex_encoded, cp_encoded, fbs_encoded, restecg_encoded, exang_encoded, slope_encoded, ca_encoded, thal_encoded, age_encoded, trestbps_encoded, chol_encoded, thalach_encoded, oldpeak_encoded, ] ) x = layers.Dense(32, activation=\"relu\")(all_features) x = layers.Dropout(0.5)(x) output = layers.Dense(1, activation=\"sigmoid\")(x) model = keras.Model(all_inputs, output) model.compile(\"adam\", \"binary_crossentropy\", metrics=[\"accuracy\"]) Let's visualize our connectivity graph: # `rankdir='LR'` is to make the graph horizontal. keras.utils.plot_model(model, show_shapes=True, rankdir=\"LR\") ('You must install pydot (`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) ', 'for plot_model/model_to_dot to work.') Train the model model.fit(train_ds, epochs=50, validation_data=val_ds) Epoch 1/50 8/8 [==============================] - 1s 35ms/step - loss: 0.7554 - accuracy: 0.5058 - val_loss: 0.6907 - val_accuracy: 0.6393 Epoch 2/50 8/8 [==============================] - 0s 4ms/step - loss: 0.7024 - accuracy: 0.5917 - val_loss: 0.6564 - val_accuracy: 0.7049 Epoch 3/50 8/8 [==============================] - 0s 5ms/step - loss: 0.6661 - accuracy: 0.6249 - val_loss: 0.6252 - val_accuracy: 0.7213 Epoch 4/50 8/8 [==============================] - 0s 4ms/step - loss: 0.6287 - accuracy: 0.7024 - val_loss: 0.5978 - val_accuracy: 0.7377 Epoch 5/50 8/8 [==============================] - 0s 4ms/step - loss: 0.6490 - accuracy: 0.6668 - val_loss: 0.5745 - val_accuracy: 0.7213 Epoch 6/50 8/8 [==============================] - 0s 4ms/step - loss: 0.5906 - accuracy: 0.7570 - val_loss: 0.5550 - val_accuracy: 0.7541 Epoch 7/50 8/8 [==============================] - 0s 4ms/step - loss: 0.5659 - accuracy: 0.7353 - val_loss: 0.5376 - val_accuracy: 0.7869 Epoch 8/50 8/8 [==============================] - 0s 4ms/step - loss: 0.5463 - accuracy: 0.7190 - val_loss: 0.5219 - val_accuracy: 0.7869 Epoch 9/50 8/8 [==============================] - 0s 3ms/step - loss: 0.5498 - accuracy: 0.7106 - val_loss: 0.5082 - val_accuracy: 0.7869 Epoch 10/50 8/8 [==============================] - 0s 4ms/step - loss: 0.5344 - accuracy: 0.7141 - val_loss: 0.4965 - val_accuracy: 0.8033 Epoch 11/50 8/8 [==============================] - 0s 4ms/step - loss: 0.5369 - accuracy: 0.6961 - val_loss: 0.4857 - val_accuracy: 0.8033 Epoch 12/50 8/8 [==============================] - 0s 5ms/step - loss: 0.4920 - accuracy: 0.7948 - val_loss: 0.4757 - val_accuracy: 0.8197 Epoch 13/50 8/8 [==============================] - 0s 4ms/step - loss: 0.4802 - accuracy: 0.7915 - val_loss: 0.4674 - val_accuracy: 0.8197 Epoch 14/50 8/8 [==============================] - 0s 3ms/step - loss: 0.4936 - accuracy: 0.7382 - val_loss: 0.4599 - val_accuracy: 0.8197 Epoch 15/50 8/8 [==============================] - 0s 
4ms/step - loss: 0.4956 - accuracy: 0.7907 - val_loss: 0.4538 - val_accuracy: 0.8033 Epoch 16/50 8/8 [==============================] - 0s 5ms/step - loss: 0.4455 - accuracy: 0.7839 - val_loss: 0.4484 - val_accuracy: 0.8033 Epoch 17/50 8/8 [==============================] - 0s 3ms/step - loss: 0.4192 - accuracy: 0.8480 - val_loss: 0.4432 - val_accuracy: 0.8197 Epoch 18/50 8/8 [==============================] - 0s 3ms/step - loss: 0.4265 - accuracy: 0.7966 - val_loss: 0.4393 - val_accuracy: 0.8197 Epoch 19/50 8/8 [==============================] - 0s 3ms/step - loss: 0.4694 - accuracy: 0.8085 - val_loss: 0.4366 - val_accuracy: 0.8197 Epoch 20/50 8/8 [==============================] - 0s 4ms/step - loss: 0.4566 - accuracy: 0.8133 - val_loss: 0.4336 - val_accuracy: 0.8197 Epoch 21/50 8/8 [==============================] - 0s 4ms/step - loss: 0.4060 - accuracy: 0.8351 - val_loss: 0.4314 - val_accuracy: 0.8197 Epoch 22/50 8/8 [==============================] - 0s 4ms/step - loss: 0.4059 - accuracy: 0.8435 - val_loss: 0.4290 - val_accuracy: 0.8197 Epoch 23/50 8/8 [==============================] - 0s 5ms/step - loss: 0.3863 - accuracy: 0.8342 - val_loss: 0.4272 - val_accuracy: 0.8197 Epoch 24/50 8/8 [==============================] - 0s 5ms/step - loss: 0.4222 - accuracy: 0.7998 - val_loss: 0.4260 - val_accuracy: 0.8197 Epoch 25/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3662 - accuracy: 0.8245 - val_loss: 0.4247 - val_accuracy: 0.8033 Epoch 26/50 8/8 [==============================] - 0s 5ms/step - loss: 0.4014 - accuracy: 0.8217 - val_loss: 0.4232 - val_accuracy: 0.8033 Epoch 27/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3935 - accuracy: 0.8375 - val_loss: 0.4219 - val_accuracy: 0.8033 Epoch 28/50 8/8 [==============================] - 0s 4ms/step - loss: 0.4319 - accuracy: 0.8026 - val_loss: 0.4206 - val_accuracy: 0.8197 Epoch 29/50 8/8 [==============================] - 0s 5ms/step - loss: 0.3893 - accuracy: 0.8074 - val_loss: 0.4202 - val_accuracy: 0.8197 Epoch 30/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3437 - accuracy: 0.8605 - val_loss: 0.4200 - val_accuracy: 0.8197 Epoch 31/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3859 - accuracy: 0.8133 - val_loss: 0.4198 - val_accuracy: 0.8197 Epoch 32/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3716 - accuracy: 0.8443 - val_loss: 0.4195 - val_accuracy: 0.8197 Epoch 33/50 8/8 [==============================] - 0s 5ms/step - loss: 0.3691 - accuracy: 0.8217 - val_loss: 0.4198 - val_accuracy: 0.8197 Epoch 34/50 8/8 [==============================] - 0s 5ms/step - loss: 0.3579 - accuracy: 0.8388 - val_loss: 0.4195 - val_accuracy: 0.8197 Epoch 35/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3164 - accuracy: 0.8620 - val_loss: 0.4199 - val_accuracy: 0.8197 Epoch 36/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3276 - accuracy: 0.8433 - val_loss: 0.4210 - val_accuracy: 0.8197 Epoch 37/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3781 - accuracy: 0.8469 - val_loss: 0.4214 - val_accuracy: 0.8197 Epoch 38/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3522 - accuracy: 0.8482 - val_loss: 0.4214 - val_accuracy: 0.8197 Epoch 39/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3988 - accuracy: 0.7981 - val_loss: 0.4216 - val_accuracy: 0.8197 Epoch 40/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3340 - accuracy: 0.8782 - val_loss: 0.4229 - val_accuracy: 
0.8197 Epoch 41/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3404 - accuracy: 0.8318 - val_loss: 0.4227 - val_accuracy: 0.8197 Epoch 42/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3005 - accuracy: 0.8533 - val_loss: 0.4225 - val_accuracy: 0.8197 Epoch 43/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3364 - accuracy: 0.8675 - val_loss: 0.4223 - val_accuracy: 0.8197 Epoch 44/50 8/8 [==============================] - 0s 4ms/step - loss: 0.2801 - accuracy: 0.8792 - val_loss: 0.4229 - val_accuracy: 0.8197 Epoch 45/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3463 - accuracy: 0.8487 - val_loss: 0.4237 - val_accuracy: 0.8197 Epoch 46/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3047 - accuracy: 0.8694 - val_loss: 0.4238 - val_accuracy: 0.8197 Epoch 47/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3157 - accuracy: 0.8621 - val_loss: 0.4249 - val_accuracy: 0.8197 Epoch 48/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3048 - accuracy: 0.8557 - val_loss: 0.4251 - val_accuracy: 0.8197 Epoch 49/50 8/8 [==============================] - 0s 4ms/step - loss: 0.3722 - accuracy: 0.8316 - val_loss: 0.4254 - val_accuracy: 0.8197 Epoch 50/50 8/8 [==============================] - 0s 5ms/step - loss: 0.3302 - accuracy: 0.8688 - val_loss: 0.4254 - val_accuracy: 0.8197 We quickly get to 80% validation accuracy. Inference on new data To get a prediction for a new sample, you can simply call model.predict(). There are just two things you need to do: wrap scalars into a list so as to have a batch dimension (models only process batches of data, not single samples) Call convert_to_tensor on each feature sample = { \"age\": 60, \"sex\": 1, \"cp\": 1, \"trestbps\": 145, \"chol\": 233, \"fbs\": 1, \"restecg\": 2, \"thalach\": 150, \"exang\": 0, \"oldpeak\": 2.3, \"slope\": 3, \"ca\": 0, \"thal\": \"fixed\", } input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()} predictions = model.predict(input_dict) print( \"This particular patient had a %.1f percent probability \" \"of having a heart disease, as evaluated by our model.\" % (100 * predictions[0][0],) ) This particular patient had a 18.8 percent probability of having a heart disease, as evaluated by our model. Using Wide & Deep and Deep & Cross networks for structured data classification. Introduction This example demonstrates how to do structured data classification using the two modeling techniques: Wide & Deep models Deep & Cross models Note that this example should be run with TensorFlow 2.5 or higher. The dataset This example uses the Covertype dataset from the UCI Machine Learning Repository. The task is to predict forest cover type from cartographic variables. The dataset includes 506,011 instances with 12 input features: 10 numerical features and 2 categorical features. Each instance is categorized into 1 of 7 classes. Setup import math import numpy as np import pandas as pd import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Prepare the data First, let's load the dataset from the UCI Machine Learning Repository into a Pandas DataFrame: data_url = ( \"https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz\" ) raw_data = pd.read_csv(data_url, header=None) print(f\"Dataset shape: {raw_data.shape}\") raw_data.head() Dataset shape: (581012, 55) 0 1 2 3 4 5 6 7 8 9 ... 45 46 47 48 49 50 51 52 53 54 0 2596 51 3 258 0 510 221 232 148 6279 ... 
0 0 0 0 0 0 0 0 0 5 1 2590 56 2 212 -6 390 220 235 151 6225 ... 0 0 0 0 0 0 0 0 0 5 2 2804 139 9 268 65 3180 234 238 135 6121 ... 0 0 0 0 0 0 0 0 0 2 3 2785 155 18 242 118 3090 238 238 122 6211 ... 0 0 0 0 0 0 0 0 0 2 4 2595 45 2 153 -1 391 220 234 150 6172 ... 0 0 0 0 0 0 0 0 0 5 5 rows × 55 columns The two categorical features in the dataset are binary-encoded. We will convert this dataset representation to the typical representation, where each categorical feature is represented as a single integer value. soil_type_values = [f\"soil_type_{idx+1}\" for idx in range(40)] wilderness_area_values = [f\"area_type_{idx+1}\" for idx in range(4)] soil_type = raw_data.loc[:, 14:53].apply( lambda x: soil_type_values[0::1][x.to_numpy().nonzero()[0][0]], axis=1 ) wilderness_area = raw_data.loc[:, 10:13].apply( lambda x: wilderness_area_values[0::1][x.to_numpy().nonzero()[0][0]], axis=1 ) CSV_HEADER = [ \"Elevation\", \"Aspect\", \"Slope\", \"Horizontal_Distance_To_Hydrology\", \"Vertical_Distance_To_Hydrology\", \"Horizontal_Distance_To_Roadways\", \"Hillshade_9am\", \"Hillshade_Noon\", \"Hillshade_3pm\", \"Horizontal_Distance_To_Fire_Points\", \"Wilderness_Area\", \"Soil_Type\", \"Cover_Type\", ] data = pd.concat( [raw_data.loc[:, 0:9], wilderness_area, soil_type, raw_data.loc[:, 54]], axis=1, ignore_index=True, ) data.columns = CSV_HEADER # Convert the target label indices into a range from 0 to 6 (there are 7 labels in total). data[\"Cover_Type\"] = data[\"Cover_Type\"] - 1 print(f\"Dataset shape: {data.shape}\") data.head().T Dataset shape: (581012, 13) 0 1 2 3 4 Elevation 2596 2590 2804 2785 2595 Aspect 51 56 139 155 45 Slope 3 2 9 18 2 Horizontal_Distance_To_Hydrology 258 212 268 242 153 Vertical_Distance_To_Hydrology 0 -6 65 118 -1 Horizontal_Distance_To_Roadways 510 390 3180 3090 391 Hillshade_9am 221 220 234 238 220 Hillshade_Noon 232 235 238 238 234 Hillshade_3pm 148 151 135 122 150 Horizontal_Distance_To_Fire_Points 6279 6225 6121 6211 6172 Wilderness_Area area_type_1 area_type_1 area_type_1 area_type_1 area_type_1 Soil_Type soil_type_29 soil_type_29 soil_type_12 soil_type_30 soil_type_29 Cover_Type 4 4 1 1 4 The shape of the DataFrame shows there are 13 columns per sample (12 for the features and 1 for the target label). Let's split the data into training (85%) and test (15%) sets. train_splits = [] test_splits = [] for _, group_data in data.groupby(\"Cover_Type\"): random_selection = np.random.rand(len(group_data.index)) <= 0.85 train_splits.append(group_data[random_selection]) test_splits.append(group_data[~random_selection]) train_data = pd.concat(train_splits).sample(frac=1).reset_index(drop=True) test_data = pd.concat(test_splits).sample(frac=1).reset_index(drop=True) print(f\"Train split size: {len(train_data.index)}\") print(f\"Test split size: {len(test_data.index)}\") Train split size: 493323 Test split size: 87689 Next, store the training and test data in separate CSV files. train_data_file = \"train_data.csv\" test_data_file = \"test_data.csv\" train_data.to_csv(train_data_file, index=False) test_data.to_csv(test_data_file, index=False) Define dataset metadata Here, we define the metadata of the dataset that will be useful for reading and parsing the data into input features, and encoding the input features with respect to their types. 
TARGET_FEATURE_NAME = \"Cover_Type\" TARGET_FEATURE_LABELS = [\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\"] NUMERIC_FEATURE_NAMES = [ \"Aspect\", \"Elevation\", \"Hillshade_3pm\", \"Hillshade_9am\", \"Hillshade_Noon\", \"Horizontal_Distance_To_Fire_Points\", \"Horizontal_Distance_To_Hydrology\", \"Horizontal_Distance_To_Roadways\", \"Slope\", \"Vertical_Distance_To_Hydrology\", ] CATEGORICAL_FEATURES_WITH_VOCABULARY = { \"Soil_Type\": list(data[\"Soil_Type\"].unique()), \"Wilderness_Area\": list(data[\"Wilderness_Area\"].unique()), } CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys()) FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES COLUMN_DEFAULTS = [ [0] if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME] else [\"NA\"] for feature_name in CSV_HEADER ] NUM_CLASSES = len(TARGET_FEATURE_LABELS) Experiment setup Next, let's define an input function that reads and parses the file, then converts features and labels into atf.data.Dataset for training or evaluation. def get_dataset_from_csv(csv_file_path, batch_size, shuffle=False): dataset = tf.data.experimental.make_csv_dataset( csv_file_path, batch_size=batch_size, column_names=CSV_HEADER, column_defaults=COLUMN_DEFAULTS, label_name=TARGET_FEATURE_NAME, num_epochs=1, header=True, shuffle=shuffle, ) return dataset.cache() Here we configure the parameters and implement the procedure for running a training and evaluation experiment given a model. learning_rate = 0.001 dropout_rate = 0.1 batch_size = 265 num_epochs = 50 hidden_units = [32, 32] def run_experiment(model): model.compile( optimizer=keras.optimizers.Adam(learning_rate=learning_rate), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=[keras.metrics.SparseCategoricalAccuracy()], ) train_dataset = get_dataset_from_csv(train_data_file, batch_size, shuffle=True) test_dataset = get_dataset_from_csv(test_data_file, batch_size) print(\"Start training the model...\") history = model.fit(train_dataset, epochs=num_epochs) print(\"Model training finished\") _, accuracy = model.evaluate(test_dataset, verbose=0) print(f\"Test accuracy: {round(accuracy * 100, 2)}%\") Create model inputs Now, define the inputs for the models as a dictionary, where the key is the feature name, and the value is a keras.layers.Input tensor with the corresponding feature shape and data type. def create_model_inputs(): inputs = {} for feature_name in FEATURE_NAMES: if feature_name in NUMERIC_FEATURE_NAMES: inputs[feature_name] = layers.Input( name=feature_name, shape=(), dtype=tf.float32 ) else: inputs[feature_name] = layers.Input( name=feature_name, shape=(), dtype=tf.string ) return inputs Encode features We create two representations of our input features: sparse and dense: 1. In the sparse representation, the categorical features are encoded with one-hot encoding using the CategoryEncoding layer. This representation can be useful for the model to memorize particular feature values to make certain predictions. 2. In the dense representation, the categorical features are encoded with low-dimensional embeddings using the Embedding layer. This representation helps the model to generalize well to unseen feature combinations. from tensorflow.keras.layers import StringLookup def encode_inputs(inputs, use_embedding=False): encoded_features = [] for feature_name in inputs: if feature_name in CATEGORICAL_FEATURE_NAMES: vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name] # Create a lookup to convert string values to an integer indices. 
# Since we are not using a mask token nor expecting any out of vocabulary # (oov) token, we set mask_token to None and num_oov_indices to 0. lookup = StringLookup( vocabulary=vocabulary, mask_token=None, num_oov_indices=0, output_mode=\"int\" if use_embedding else \"binary\", ) if use_embedding: # Convert the string input values into integer indices. encoded_feature = lookup(inputs[feature_name]) embedding_dims = int(math.sqrt(len(vocabulary))) # Create an embedding layer with the specified dimensions. embedding = layers.Embedding( input_dim=len(vocabulary), output_dim=embedding_dims ) # Convert the index values to embedding representations. encoded_feature = embedding(encoded_feature) else: # Convert the string input values into a one hot encoding. encoded_feature = lookup(tf.expand_dims(inputs[feature_name], -1)) else: # Use the numerical features as-is. encoded_feature = tf.expand_dims(inputs[feature_name], -1) encoded_features.append(encoded_feature) all_features = layers.concatenate(encoded_features) return all_features Experiment 1: a baseline model In the first experiment, let's create a multi-layer feed-forward network, where the categorical features are one-hot encoded. def create_baseline_model(): inputs = create_model_inputs() features = encode_inputs(inputs) for units in hidden_units: features = layers.Dense(units)(features) features = layers.BatchNormalization()(features) features = layers.ReLU()(features) features = layers.Dropout(dropout_rate)(features) outputs = layers.Dense(units=NUM_CLASSES, activation=\"softmax\")(features) model = keras.Model(inputs=inputs, outputs=outputs) return model baseline_model = create_baseline_model() keras.utils.plot_model(baseline_model, show_shapes=True, rankdir=\"LR\") ('You must install pydot (`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) ', 'for plot_model/model_to_dot to work.') Let's run it: run_experiment(baseline_model) Start training the model... 
Epoch 1/50 1862/1862 [==============================] - 10s 5ms/step - loss: 0.9208 - sparse_categorical_accuracy: 0.6334 Epoch 2/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.6758 - sparse_categorical_accuracy: 0.7081 Epoch 3/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.6409 - sparse_categorical_accuracy: 0.7225 Epoch 4/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.6209 - sparse_categorical_accuracy: 0.7316 Epoch 5/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.6074 - sparse_categorical_accuracy: 0.7371 Epoch 6/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5975 - sparse_categorical_accuracy: 0.7419 Epoch 7/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5889 - sparse_categorical_accuracy: 0.7458 Epoch 8/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5846 - sparse_categorical_accuracy: 0.7474 Epoch 9/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5810 - sparse_categorical_accuracy: 0.7502 Epoch 10/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5789 - sparse_categorical_accuracy: 0.7502 Epoch 11/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5746 - sparse_categorical_accuracy: 0.7528 Epoch 12/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5718 - sparse_categorical_accuracy: 0.7540 Epoch 13/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5689 - sparse_categorical_accuracy: 0.7551 Epoch 14/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5671 - sparse_categorical_accuracy: 0.7558 Epoch 15/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5650 - sparse_categorical_accuracy: 0.7568 Epoch 16/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5623 - sparse_categorical_accuracy: 0.7577 Epoch 17/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5616 - sparse_categorical_accuracy: 0.7591 Epoch 18/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5583 - sparse_categorical_accuracy: 0.7590 Epoch 19/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5577 - sparse_categorical_accuracy: 0.7593 Epoch 20/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5549 - sparse_categorical_accuracy: 0.7608 Epoch 21/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5564 - sparse_categorical_accuracy: 0.7599 Epoch 22/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5554 - sparse_categorical_accuracy: 0.7606 Epoch 23/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5537 - sparse_categorical_accuracy: 0.7617 Epoch 24/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5518 - sparse_categorical_accuracy: 0.7624 Epoch 25/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5508 - sparse_categorical_accuracy: 0.7618 Epoch 26/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5498 - sparse_categorical_accuracy: 0.7621 Epoch 27/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5497 - sparse_categorical_accuracy: 0.7623 Epoch 28/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5482 - sparse_categorical_accuracy: 0.7645 Epoch 29/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5467 - sparse_categorical_accuracy: 0.7637 Epoch 30/50 1862/1862 
[==============================] - 5s 3ms/step - loss: 0.5469 - sparse_categorical_accuracy: 0.7638 Epoch 31/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5457 - sparse_categorical_accuracy: 0.7641 Epoch 32/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5448 - sparse_categorical_accuracy: 0.7647 Epoch 33/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5440 - sparse_categorical_accuracy: 0.7644 Epoch 34/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5448 - sparse_categorical_accuracy: 0.7653 Epoch 35/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5424 - sparse_categorical_accuracy: 0.7652 Epoch 36/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5416 - sparse_categorical_accuracy: 0.7666 Epoch 37/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5411 - sparse_categorical_accuracy: 0.7663 Epoch 38/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5399 - sparse_categorical_accuracy: 0.7673 Epoch 39/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5410 - sparse_categorical_accuracy: 0.7664 Epoch 40/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5402 - sparse_categorical_accuracy: 0.7668 Epoch 41/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5395 - sparse_categorical_accuracy: 0.7670 Epoch 42/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5382 - sparse_categorical_accuracy: 0.7679 Epoch 43/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5369 - sparse_categorical_accuracy: 0.7680 Epoch 44/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5370 - sparse_categorical_accuracy: 0.7686 Epoch 45/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5358 - sparse_categorical_accuracy: 0.7680 Epoch 46/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5358 - sparse_categorical_accuracy: 0.7698 Epoch 47/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5363 - sparse_categorical_accuracy: 0.7697 Epoch 48/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5349 - sparse_categorical_accuracy: 0.7691 Epoch 49/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5357 - sparse_categorical_accuracy: 0.7691 Epoch 50/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5338 - sparse_categorical_accuracy: 0.7697 Model training finished Test accuracy: 75.72% The baseline model achieves ~76% test accuracy. Experiment 2: Wide & Deep model In the second experiment, we create a Wide & Deep model. The wide part of the model is a linear model, while the deep part is a multi-layer feed-forward network. We use the sparse representation of the input features in the wide part of the model and the dense representation of the input features in the deep part. Note that every input feature contributes to both parts of the model, with a different representation in each.
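To make the contrast concrete, here is a minimal sketch (not part of the original example) using a toy vocabulary and a toy embedding size: the wide branch receives a multi-hot vector produced by StringLookup with output_mode="binary", while the deep branch receives a trainable embedding of the integer index.
# Illustrative sketch only: sparse (multi-hot) vs. dense (embedding) encoding of
# a single categorical value. The vocabulary and output_dim are toy values.
toy_vocab = ["area_type_1", "area_type_2", "area_type_3", "area_type_4"]
toy_sample = tf.constant([["area_type_3"]])
sparse_encoder = layers.StringLookup(
    vocabulary=toy_vocab, mask_token=None, num_oov_indices=0, output_mode="binary"
)
print(sparse_encoder(toy_sample))  # shape (1, 4): one slot per vocabulary entry
index_encoder = layers.StringLookup(
    vocabulary=toy_vocab, mask_token=None, num_oov_indices=0, output_mode="int"
)
toy_embedding = layers.Embedding(input_dim=len(toy_vocab), output_dim=2)
print(toy_embedding(index_encoder(toy_sample)))  # shape (1, 1, 2): dense trainable vector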
def create_wide_and_deep_model(): inputs = create_model_inputs() wide = encode_inputs(inputs) wide = layers.BatchNormalization()(wide) deep = encode_inputs(inputs, use_embedding=True) for units in hidden_units: deep = layers.Dense(units)(deep) deep = layers.BatchNormalization()(deep) deep = layers.ReLU()(deep) deep = layers.Dropout(dropout_rate)(deep) merged = layers.concatenate([wide, deep]) outputs = layers.Dense(units=NUM_CLASSES, activation=\"softmax\")(merged) model = keras.Model(inputs=inputs, outputs=outputs) return model wide_and_deep_model = create_wide_and_deep_model() keras.utils.plot_model(wide_and_deep_model, show_shapes=True, rankdir=\"LR\") ('You must install pydot (`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) ', 'for plot_model/model_to_dot to work.') Let's run it: run_experiment(wide_and_deep_model) Start training the model... Epoch 1/50 1862/1862 [==============================] - 11s 5ms/step - loss: 0.8994 - sparse_categorical_accuracy: 0.6469 Epoch 2/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.6112 - sparse_categorical_accuracy: 0.7350 Epoch 3/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5936 - sparse_categorical_accuracy: 0.7426 Epoch 4/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5814 - sparse_categorical_accuracy: 0.7468 Epoch 5/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5716 - sparse_categorical_accuracy: 0.7517 Epoch 6/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5652 - sparse_categorical_accuracy: 0.7553 Epoch 7/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5595 - sparse_categorical_accuracy: 0.7581 Epoch 8/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5542 - sparse_categorical_accuracy: 0.7600 Epoch 9/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5498 - sparse_categorical_accuracy: 0.7631 Epoch 10/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5459 - sparse_categorical_accuracy: 0.7647 Epoch 11/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5427 - sparse_categorical_accuracy: 0.7655 Epoch 12/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5398 - sparse_categorical_accuracy: 0.7675 Epoch 13/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5360 - sparse_categorical_accuracy: 0.7695 Epoch 14/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5335 - sparse_categorical_accuracy: 0.7697 Epoch 15/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5310 - sparse_categorical_accuracy: 0.7709 Epoch 16/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5289 - sparse_categorical_accuracy: 0.7725 Epoch 17/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5263 - sparse_categorical_accuracy: 0.7739 Epoch 18/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5255 - sparse_categorical_accuracy: 0.7745 Epoch 19/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5235 - sparse_categorical_accuracy: 0.7750 Epoch 20/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5224 - sparse_categorical_accuracy: 0.7757 Epoch 21/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5216 - sparse_categorical_accuracy: 0.7770 Epoch 22/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5205 - 
sparse_categorical_accuracy: 0.7771 Epoch 23/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5191 - sparse_categorical_accuracy: 0.7769 Epoch 24/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5189 - sparse_categorical_accuracy: 0.7779 Epoch 25/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5166 - sparse_categorical_accuracy: 0.7793 Epoch 26/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5160 - sparse_categorical_accuracy: 0.7794 Epoch 27/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5146 - sparse_categorical_accuracy: 0.7791 Epoch 28/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5136 - sparse_categorical_accuracy: 0.7810 Epoch 29/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5125 - sparse_categorical_accuracy: 0.7809 Epoch 30/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5124 - sparse_categorical_accuracy: 0.7806 Epoch 31/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5112 - sparse_categorical_accuracy: 0.7808 Epoch 32/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5098 - sparse_categorical_accuracy: 0.7822 Epoch 33/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5097 - sparse_categorical_accuracy: 0.7808 Epoch 34/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5094 - sparse_categorical_accuracy: 0.7819 Epoch 35/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5084 - sparse_categorical_accuracy: 0.7823 Epoch 36/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5077 - sparse_categorical_accuracy: 0.7826 Epoch 37/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5067 - sparse_categorical_accuracy: 0.7830 Epoch 38/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5063 - sparse_categorical_accuracy: 0.7834 Epoch 39/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5058 - sparse_categorical_accuracy: 0.7841 Epoch 40/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5047 - sparse_categorical_accuracy: 0.7840 Epoch 41/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5041 - sparse_categorical_accuracy: 0.7848 Epoch 42/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5024 - sparse_categorical_accuracy: 0.7857 Epoch 43/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5020 - sparse_categorical_accuracy: 0.7857 Epoch 44/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5009 - sparse_categorical_accuracy: 0.7865 Epoch 45/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4998 - sparse_categorical_accuracy: 0.7868 Epoch 46/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5000 - sparse_categorical_accuracy: 0.7864 Epoch 47/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4985 - sparse_categorical_accuracy: 0.7876 Epoch 48/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4985 - sparse_categorical_accuracy: 0.7877 Epoch 49/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4979 - sparse_categorical_accuracy: 0.7876 Epoch 50/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4973 - sparse_categorical_accuracy: 0.7881 Model training finished Test accuracy: 80.69% The wide and deep model achieves ~81% test accuracy.
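As with the heart-disease example earlier, the trained model takes a dictionary of named feature tensors, so single-sample inference looks like the following minimal sketch (the feature values are copied from the first row of the dataframe preview above and are purely illustrative):
# Inference sketch for the trained wide & deep model. Values are illustrative,
# taken from the first row of the dataframe preview above.
sample = {
    "Elevation": 2596.0,
    "Aspect": 51.0,
    "Slope": 3.0,
    "Horizontal_Distance_To_Hydrology": 258.0,
    "Vertical_Distance_To_Hydrology": 0.0,
    "Horizontal_Distance_To_Roadways": 510.0,
    "Hillshade_9am": 221.0,
    "Hillshade_Noon": 232.0,
    "Hillshade_3pm": 148.0,
    "Horizontal_Distance_To_Fire_Points": 6279.0,
    "Wilderness_Area": "area_type_1",
    "Soil_Type": "soil_type_29",
}
# Wrap each scalar in a list to create a batch of size 1.
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
probabilities = wide_and_deep_model.predict(input_dict)
print("Predicted cover type:", np.argmax(probabilities[0]))
The output is a 7-way softmax over the zero-based Cover_Type labels defined above.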
Experiment 3: Deep & Cross model In the third experiment, we create a Deep & Cross model. The deep part of this model is the same as the deep part created in the previous experiment. The key idea of the cross part is to apply explicit feature crossing in an efficient way, where the degree of cross features grows with layer depth. def create_deep_and_cross_model(): inputs = create_model_inputs() x0 = encode_inputs(inputs, use_embedding=True) cross = x0 for _ in hidden_units: units = cross.shape[-1] x = layers.Dense(units)(cross) cross = x0 * x + cross cross = layers.BatchNormalization()(cross) deep = x0 for units in hidden_units: deep = layers.Dense(units)(deep) deep = layers.BatchNormalization()(deep) deep = layers.ReLU()(deep) deep = layers.Dropout(dropout_rate)(deep) merged = layers.concatenate([cross, deep]) outputs = layers.Dense(units=NUM_CLASSES, activation=\"softmax\")(merged) model = keras.Model(inputs=inputs, outputs=outputs) return model deep_and_cross_model = create_deep_and_cross_model() keras.utils.plot_model(deep_and_cross_model, show_shapes=True, rankdir=\"LR\") ('You must install pydot (`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) ', 'for plot_model/model_to_dot to work.') Let's run it: run_experiment(deep_and_cross_model) Start training the model... Epoch 1/50 1862/1862 [==============================] - 11s 5ms/step - loss: 0.8585 - sparse_categorical_accuracy: 0.6547 Epoch 2/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5968 - sparse_categorical_accuracy: 0.7424 Epoch 3/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5729 - sparse_categorical_accuracy: 0.7520 Epoch 4/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5610 - sparse_categorical_accuracy: 0.7583 Epoch 5/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5511 - sparse_categorical_accuracy: 0.7623 Epoch 6/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5460 - sparse_categorical_accuracy: 0.7651 Epoch 7/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5408 - sparse_categorical_accuracy: 0.7671 Epoch 8/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5374 - sparse_categorical_accuracy: 0.7695 Epoch 9/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5344 - sparse_categorical_accuracy: 0.7704 Epoch 10/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5310 - sparse_categorical_accuracy: 0.7715 Epoch 11/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5286 - sparse_categorical_accuracy: 0.7725 Epoch 12/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5254 - sparse_categorical_accuracy: 0.7737 Epoch 13/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5249 - sparse_categorical_accuracy: 0.7737 Epoch 14/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5223 - sparse_categorical_accuracy: 0.7752 Epoch 15/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5206 - sparse_categorical_accuracy: 0.7759 Epoch 16/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5187 - sparse_categorical_accuracy: 0.7765 Epoch 17/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5179 - sparse_categorical_accuracy: 0.7772 Epoch 18/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5152 - sparse_categorical_accuracy: 0.7788 Epoch 19/50 1862/1862 
[==============================] - 5s 3ms/step - loss: 0.5145 - sparse_categorical_accuracy: 0.7785 Epoch 20/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5128 - sparse_categorical_accuracy: 0.7800 Epoch 21/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5117 - sparse_categorical_accuracy: 0.7803 Epoch 22/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5105 - sparse_categorical_accuracy: 0.7809 Epoch 23/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5089 - sparse_categorical_accuracy: 0.7813 Epoch 24/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5074 - sparse_categorical_accuracy: 0.7823 Epoch 25/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5061 - sparse_categorical_accuracy: 0.7821 Epoch 26/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5048 - sparse_categorical_accuracy: 0.7832 Epoch 27/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5037 - sparse_categorical_accuracy: 0.7837 Epoch 28/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5017 - sparse_categorical_accuracy: 0.7846 Epoch 29/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.5010 - sparse_categorical_accuracy: 0.7851 Epoch 30/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4991 - sparse_categorical_accuracy: 0.7861 Epoch 31/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4989 - sparse_categorical_accuracy: 0.7849 Epoch 32/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4979 - sparse_categorical_accuracy: 0.7865 Epoch 33/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4961 - sparse_categorical_accuracy: 0.7867 Epoch 34/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4955 - sparse_categorical_accuracy: 0.7871 Epoch 35/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4946 - sparse_categorical_accuracy: 0.7871 Epoch 36/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4946 - sparse_categorical_accuracy: 0.7873 Epoch 37/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4925 - sparse_categorical_accuracy: 0.7877 Epoch 38/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4920 - sparse_categorical_accuracy: 0.7884 Epoch 39/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4910 - sparse_categorical_accuracy: 0.7887 Epoch 40/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4909 - sparse_categorical_accuracy: 0.7883 Epoch 41/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4906 - sparse_categorical_accuracy: 0.7890 Epoch 42/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4883 - sparse_categorical_accuracy: 0.7892 Epoch 43/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4883 - sparse_categorical_accuracy: 0.7896 Epoch 44/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4875 - sparse_categorical_accuracy: 0.7908 Epoch 45/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4866 - sparse_categorical_accuracy: 0.7900 Epoch 46/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4864 - sparse_categorical_accuracy: 0.7902 Epoch 47/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4862 - sparse_categorical_accuracy: 0.7909 Epoch 48/50 1862/1862 
[==============================] - 5s 3ms/step - loss: 0.4849 - sparse_categorical_accuracy: 0.7908 Epoch 49/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4843 - sparse_categorical_accuracy: 0.7910 Epoch 50/50 1862/1862 [==============================] - 5s 3ms/step - loss: 0.4841 - sparse_categorical_accuracy: 0.7921 Model training finished Test accuracy: 80.61% The deep and cross model achieves ~81% test accuracy. Conclusion You can use Keras Preprocessing Layers to easily handle categorical features with different encoding mechanisms, including one-hot encoding and feature embedding. In addition, different model architectures — like wide, deep, and cross networks — have different advantages, with respect to different dataset properties. You can explore using them independently or combining them to achieve the best result for your dataset. Detect anomalies in a timeseries using an Autoencoder. Introduction This script demonstrates how you can use a reconstruction convolutional autoencoder model to detect anomalies in timeseries data. Setup import numpy as np import pandas as pd from tensorflow import keras from tensorflow.keras import layers from matplotlib import pyplot as plt Load the data We will use the Numenta Anomaly Benchmark(NAB) dataset. It provides artifical timeseries data containing labeled anomalous periods of behavior. Data are ordered, timestamped, single-valued metrics. We will use the art_daily_small_noise.csv file for training and the art_daily_jumpsup.csv file for testing. The simplicity of this dataset allows us to demonstrate anomaly detection effectively. master_url_root = \"https://raw.githubusercontent.com/numenta/NAB/master/data/\" df_small_noise_url_suffix = \"artificialNoAnomaly/art_daily_small_noise.csv\" df_small_noise_url = master_url_root + df_small_noise_url_suffix df_small_noise = pd.read_csv( df_small_noise_url, parse_dates=True, index_col=\"timestamp\" ) df_daily_jumpsup_url_suffix = \"artificialWithAnomaly/art_daily_jumpsup.csv\" df_daily_jumpsup_url = master_url_root + df_daily_jumpsup_url_suffix df_daily_jumpsup = pd.read_csv( df_daily_jumpsup_url, parse_dates=True, index_col=\"timestamp\" ) Quick look at the data print(df_small_noise.head()) print(df_daily_jumpsup.head()) value timestamp 2014-04-01 00:00:00 18.324919 2014-04-01 00:05:00 21.970327 2014-04-01 00:10:00 18.624806 2014-04-01 00:15:00 21.953684 2014-04-01 00:20:00 21.909120 value timestamp 2014-04-01 00:00:00 19.761252 2014-04-01 00:05:00 20.500833 2014-04-01 00:10:00 19.961641 2014-04-01 00:15:00 21.490266 2014-04-01 00:20:00 20.187739 Visualize the data Timeseries data without anomalies We will use the following data for training. fig, ax = plt.subplots() df_small_noise.plot(legend=False, ax=ax) plt.show() png Timeseries data with anomalies We will use the following data for testing and see if the sudden jump up in the data is detected as an anomaly. fig, ax = plt.subplots() df_daily_jumpsup.plot(legend=False, ax=ax) plt.show() png Prepare training data Get data values from the training timeseries data file and normalize the value data. We have a value for every 5 mins for 14 days. 24 * 60 / 5 = 288 timesteps per day 288 * 14 = 4032 data points in total # Normalize and save the mean and std we get, # for normalizing test data. 
training_mean = df_small_noise.mean() training_std = df_small_noise.std() df_training_value = (df_small_noise - training_mean) / training_std print(\"Number of training samples:\", len(df_training_value)) Number of training samples: 4032 Create sequences Create sequences combining TIME_STEPS contiguous data values from the training data. TIME_STEPS = 288 # Generated training sequences for use in the model. def create_sequences(values, time_steps=TIME_STEPS): output = [] for i in range(len(values) - time_steps + 1): output.append(values[i : (i + time_steps)]) return np.stack(output) x_train = create_sequences(df_training_value.values) print(\"Training input shape: \", x_train.shape) Training input shape: (3745, 288, 1) Build a model We will build a convolutional reconstruction autoencoder model. The model will take input of shape (batch_size, sequence_length, num_features) and return output of the same shape. In this case, sequence_length is 288 and num_features is 1. model = keras.Sequential( [ layers.Input(shape=(x_train.shape[1], x_train.shape[2])), layers.Conv1D( filters=32, kernel_size=7, padding=\"same\", strides=2, activation=\"relu\" ), layers.Dropout(rate=0.2), layers.Conv1D( filters=16, kernel_size=7, padding=\"same\", strides=2, activation=\"relu\" ), layers.Conv1DTranspose( filters=16, kernel_size=7, padding=\"same\", strides=2, activation=\"relu\" ), layers.Dropout(rate=0.2), layers.Conv1DTranspose( filters=32, kernel_size=7, padding=\"same\", strides=2, activation=\"relu\" ), layers.Conv1DTranspose(filters=1, kernel_size=7, padding=\"same\"), ] ) model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss=\"mse\") model.summary() WARNING:tensorflow:Please add `keras.layers.InputLayer` instead of `keras.Input` to Sequential model. `keras.Input` is intended to be used by Functional model. Model: \"sequential\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv1d (Conv1D) (None, 144, 32) 256 _________________________________________________________________ dropout (Dropout) (None, 144, 32) 0 _________________________________________________________________ conv1d_1 (Conv1D) (None, 72, 16) 3600 _________________________________________________________________ conv1d_transpose (Conv1DTran (None, 144, 16) 1808 _________________________________________________________________ dropout_1 (Dropout) (None, 144, 16) 0 _________________________________________________________________ conv1d_transpose_1 (Conv1DTr (None, 288, 32) 3616 _________________________________________________________________ conv1d_transpose_2 (Conv1DTr (None, 288, 1) 225 ================================================================= Total params: 9,505 Trainable params: 9,505 Non-trainable params: 0 _________________________________________________________________ Train the model Please note that we are using x_train as both the input and the target since this is a reconstruction model.
history = model.fit( x_train, x_train, epochs=50, batch_size=128, validation_split=0.1, callbacks=[ keras.callbacks.EarlyStopping(monitor=\"val_loss\", patience=5, mode=\"min\") ], ) Epoch 1/50 27/27 [==============================] - 2s 35ms/step - loss: 0.5868 - val_loss: 0.1225 Epoch 2/50 27/27 [==============================] - 1s 29ms/step - loss: 0.0882 - val_loss: 0.0404 Epoch 3/50 27/27 [==============================] - 1s 30ms/step - loss: 0.0594 - val_loss: 0.0359 Epoch 4/50 27/27 [==============================] - 1s 29ms/step - loss: 0.0486 - val_loss: 0.0287 Epoch 5/50 27/27 [==============================] - 1s 30ms/step - loss: 0.0398 - val_loss: 0.0231 Epoch 6/50 27/27 [==============================] - 1s 31ms/step - loss: 0.0337 - val_loss: 0.0208 Epoch 7/50 27/27 [==============================] - 1s 31ms/step - loss: 0.0299 - val_loss: 0.0182 Epoch 8/50 27/27 [==============================] - 1s 31ms/step - loss: 0.0271 - val_loss: 0.0187 Epoch 9/50 27/27 [==============================] - 1s 32ms/step - loss: 0.0251 - val_loss: 0.0190 Epoch 10/50 27/27 [==============================] - 1s 31ms/step - loss: 0.0235 - val_loss: 0.0179 Epoch 11/50 27/27 [==============================] - 1s 32ms/step - loss: 0.0224 - val_loss: 0.0189 Epoch 12/50 27/27 [==============================] - 1s 33ms/step - loss: 0.0214 - val_loss: 0.0199 Epoch 13/50 27/27 [==============================] - 1s 33ms/step - loss: 0.0206 - val_loss: 0.0194 Epoch 14/50 27/27 [==============================] - 1s 32ms/step - loss: 0.0199 - val_loss: 0.0208 Epoch 15/50 27/27 [==============================] - 1s 35ms/step - loss: 0.0192 - val_loss: 0.0204 Let's plot training and validation loss to see how the training went. plt.plot(history.history[\"loss\"], label=\"Training Loss\") plt.plot(history.history[\"val_loss\"], label=\"Validation Loss\") plt.legend() plt.show() png Detecting anomalies We will detect anomalies by determining how well our model can reconstruct the input data. Find MAE loss on training samples. Find max MAE loss value. This is the worst our model has performed trying to reconstruct a sample. We will make this the threshold for anomaly detection. If the reconstruction loss for a sample is greater than this threshold value then we can infer that the model is seeing a pattern that it isn't familiar with. We will label this sample as an anomaly. # Get train MAE loss. x_train_pred = model.predict(x_train) train_mae_loss = np.mean(np.abs(x_train_pred - x_train), axis=1) plt.hist(train_mae_loss, bins=50) plt.xlabel(\"Train MAE loss\") plt.ylabel(\"No of samples\") plt.show() # Get reconstruction loss threshold. threshold = np.max(train_mae_loss) print(\"Reconstruction error threshold: \", threshold) png Reconstruction error threshold: 0.1195600905852785 Compare reconstruction Just for fun, let's see how our model has reconstructed the first sample. These are the 288 timesteps from day 1 of our training dataset. # Checking how the first sequence is learnt plt.plot(x_train[0]) plt.plot(x_train_pred[0]) plt.show() png Prepare test data df_test_value = (df_daily_jumpsup - training_mean) / training_std fig, ax = plt.subplots() df_test_value.plot(legend=False, ax=ax) plt.show() # Create sequences from test values. x_test = create_sequences(df_test_value.values) print(\"Test input shape: \", x_test.shape) # Get test MAE loss.
x_test_pred = model.predict(x_test) test_mae_loss = np.mean(np.abs(x_test_pred - x_test), axis=1) test_mae_loss = test_mae_loss.reshape((-1)) plt.hist(test_mae_loss, bins=50) plt.xlabel(\"test MAE loss\") plt.ylabel(\"No of samples\") plt.show() # Detect all the samples which are anomalies. anomalies = test_mae_loss > threshold print(\"Number of anomaly samples: \", np.sum(anomalies)) print(\"Indices of anomaly samples: \", np.where(anomalies)) png Test input shape: (3745, 288, 1) png Number of anomaly samples: 399 Indices of anomaly samples: (array([ 789, 1653, 1654, 1941, 2697, 2702, 2703, 2704, 2705, 2706, 2707, 2708, 2709, 2710, 2711, 2712, 2713, 2714, 2715, 2716, 2717, 2718, 2719, 2720, 2721, 2722, 2723, 2724, 2725, 2726, 2727, 2728, 2729, 2730, 2731, 2732, 2733, 2734, 2735, 2736, 2737, 2738, 2739, 2740, 2741, 2742, 2743, 2744, 2745, 2746, 2747, 2748, 2749, 2750, 2751, 2752, 2753, 2754, 2755, 2756, 2757, 2758, 2759, 2760, 2761, 2762, 2763, 2764, 2765, 2766, 2767, 2768, 2769, 2770, 2771, 2772, 2773, 2774, 2775, 2776, 2777, 2778, 2779, 2780, 2781, 2782, 2783, 2784, 2785, 2786, 2787, 2788, 2789, 2790, 2791, 2792, 2793, 2794, 2795, 2796, 2797, 2798, 2799, 2800, 2801, 2802, 2803, 2804, 2805, 2806, 2807, 2808, 2809, 2810, 2811, 2812, 2813, 2814, 2815, 2816, 2817, 2818, 2819, 2820, 2821, 2822, 2823, 2824, 2825, 2826, 2827, 2828, 2829, 2830, 2831, 2832, 2833, 2834, 2835, 2836, 2837, 2838, 2839, 2840, 2841, 2842, 2843, 2844, 2845, 2846, 2847, 2848, 2849, 2850, 2851, 2852, 2853, 2854, 2855, 2856, 2857, 2858, 2859, 2860, 2861, 2862, 2863, 2864, 2865, 2866, 2867, 2868, 2869, 2870, 2871, 2872, 2873, 2874, 2875, 2876, 2877, 2878, 2879, 2880, 2881, 2882, 2883, 2884, 2885, 2886, 2887, 2888, 2889, 2890, 2891, 2892, 2893, 2894, 2895, 2896, 2897, 2898, 2899, 2900, 2901, 2902, 2903, 2904, 2905, 2906, 2907, 2908, 2909, 2910, 2911, 2912, 2913, 2914, 2915, 2916, 2917, 2918, 2919, 2920, 2921, 2922, 2923, 2924, 2925, 2926, 2927, 2928, 2929, 2930, 2931, 2932, 2933, 2934, 2935, 2936, 2937, 2938, 2939, 2940, 2941, 2942, 2943, 2944, 2945, 2946, 2947, 2948, 2949, 2950, 2951, 2952, 2953, 2954, 2955, 2956, 2957, 2958, 2959, 2960, 2961, 2962, 2963, 2964, 2965, 2966, 2967, 2968, 2969, 2970, 2971, 2972, 2973, 2974, 2975, 2976, 2977, 2978, 2979, 2980, 2981, 2982, 2983, 2984, 2985, 2986, 2987, 2988, 2989, 2990, 2991, 2992, 2993, 2994, 2995, 2996, 2997, 2998, 2999, 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015, 3016, 3017, 3018, 3019, 3020, 3021, 3022, 3023, 3024, 3025, 3026, 3027, 3028, 3029, 3030, 3031, 3032, 3033, 3034, 3035, 3036, 3037, 3038, 3039, 3040, 3041, 3042, 3043, 3044, 3045, 3046, 3047, 3048, 3049, 3050, 3051, 3052, 3053, 3054, 3055, 3056, 3057, 3058, 3059, 3060, 3061, 3062, 3063, 3064, 3065, 3066, 3067, 3068, 3069, 3070, 3071, 3072, 3073, 3074, 3075, 3076, 3077, 3078, 3079, 3080, 3081, 3082, 3083, 3084, 3085, 3086, 3087, 3088, 3089, 3090, 3091, 3092, 3093, 3094, 3095]),) Plot anomalies We now know the samples of the data which are anomalies. With this, we will find the corresponding timestamps from the original test data. We will be using the following method to do that: Let's say time_steps = 3 and we have 10 training values. Our x_train will look like this: 0, 1, 2 1, 2, 3 2, 3, 4 3, 4, 5 4, 5, 6 5, 6, 7 6, 7, 8 7, 8, 9 All except the initial and the final time_steps-1 data values, will appear in time_steps number of samples. So, if we know that the samples [(3, 4, 5), (4, 5, 6), (5, 6, 7)] are anomalies, we can say that the data point 5 is an anomaly. 
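To make this mapping concrete before applying it to the real data, here is a minimal sketch of the same rule on the toy setup above (time_steps = 3, 10 data points); the toy_anomalies flags are invented purely for illustration and are not taken from the example's output.

import numpy as np

time_steps = 3
n_points = 10
# Hypothetical per-window anomaly flags for the 8 windows (0, 1, 2) ... (7, 8, 9);
# only the windows starting at indices 3, 4 and 5 are marked anomalous here.
toy_anomalies = np.array([False, False, False, True, True, True, False, False])

toy_anomalous_points = []
for data_idx in range(time_steps - 1, n_points - time_steps + 1):
    # Data point data_idx is covered by the windows starting at
    # data_idx - time_steps + 1, ..., data_idx; flag it only if all of them are anomalous.
    if np.all(toy_anomalies[data_idx - time_steps + 1 : data_idx + 1]):
        toy_anomalous_points.append(data_idx)

print(toy_anomalous_points)  # [5]: every window covering data point 5 is anomalous

The code below applies the same rule to the real anomalies array, with TIME_STEPS = 288.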
# data i is an anomaly if samples [(i - timesteps + 1) to (i)] are anomalies anomalous_data_indices = [] for data_idx in range(TIME_STEPS - 1, len(df_test_value) - TIME_STEPS + 1): if np.all(anomalies[data_idx - TIME_STEPS + 1 : data_idx + 1]): anomalous_data_indices.append(data_idx) Let's overlay the anomalies on the original test data plot. df_subset = df_daily_jumpsup.iloc[anomalous_data_indices] fig, ax = plt.subplots() df_daily_jumpsup.plot(legend=False, ax=ax) df_subset.plot(legend=False, ax=ax, color=\"r\") plt.show() png Training a timeseries classifier from scratch on the FordA dataset from the UCR/UEA archive. Introduction This example shows how to do timeseries classification from scratch, starting from raw CSV timeseries files on disk. We demonstrate the workflow on the FordA dataset from the UCR/UEA archive. Setup from tensorflow import keras import numpy as np import matplotlib.pyplot as plt Load the data: the FordA dataset Dataset description The dataset we are using here is called FordA. The data comes from the UCR archive. The dataset contains 3601 training instances and another 1320 testing instances. Each timeseries corresponds to a measurement of engine noise captured by a motor sensor. For this task, the goal is to automatically detect the presence of a specific issue with the engine. The problem is a balanced binary classification task. The full description of this dataset can be found here. Read the TSV data We will use the FordA_TRAIN file for training and the FordA_TEST file for testing. The simplicity of this dataset allows us to demonstrate effectively how to use ConvNets for timeseries classification. In this file, the first column corresponds to the label. def readucr(filename): data = np.loadtxt(filename, delimiter=\"\t\") y = data[:, 0] x = data[:, 1:] return x, y.astype(int) root_url = \"https://raw.githubusercontent.com/hfawaz/cd-diagram/master/FordA/\" x_train, y_train = readucr(root_url + \"FordA_TRAIN.tsv\") x_test, y_test = readucr(root_url + \"FordA_TEST.tsv\") Visualize the data Here we visualize one timeseries example for each class in the dataset. classes = np.unique(np.concatenate((y_train, y_test), axis=0)) plt.figure() for c in classes: c_x_train = x_train[y_train == c] plt.plot(c_x_train[0], label=\"class \" + str(c)) plt.legend(loc=\"best\") plt.show() plt.close() png Standardize the data Our timeseries all have the same length (500). However, their values are usually in various ranges. This is not ideal for a neural network; in general we should seek to make the input values normalized. For this specific dataset, the data is already z-normalized: each timeseries sample has a mean equal to zero and a standard deviation equal to one. This type of normalization is very common for timeseries classification problems, see Bagnall et al. (2016). Note that the timeseries data used here are univariate, meaning we only have one channel per timeseries example. We will therefore transform the timeseries into a multivariate one with one channel using a simple reshaping via numpy. This will allow us to construct a model that is easily applicable to multivariate time series. x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1)) x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1)) Finally, in order to use sparse_categorical_crossentropy, we will have to count the number of classes beforehand. num_classes = len(np.unique(y_train)) Now we shuffle the training set because we will be using the validation_split option later when training; Keras takes the validation split from the last samples before shuffling, so shuffling here keeps that split representative.
idx = np.random.permutation(len(x_train)) x_train = x_train[idx] y_train = y_train[idx] Standardize the labels to positive integers. The expected labels will then be 0 and 1. y_train[y_train == -1] = 0 y_test[y_test == -1] = 0 Build a model We build a Fully Convolutional Neural Network originally proposed in this paper. The implementation is based on the TF 2 version provided here. The following hyperparameters (kernel_size, filters, the usage of BatchNorm) were found via random search using KerasTuner. def make_model(input_shape): input_layer = keras.layers.Input(input_shape) conv1 = keras.layers.Conv1D(filters=64, kernel_size=3, padding=\"same\")(input_layer) conv1 = keras.layers.BatchNormalization()(conv1) conv1 = keras.layers.ReLU()(conv1) conv2 = keras.layers.Conv1D(filters=64, kernel_size=3, padding=\"same\")(conv1) conv2 = keras.layers.BatchNormalization()(conv2) conv2 = keras.layers.ReLU()(conv2) conv3 = keras.layers.Conv1D(filters=64, kernel_size=3, padding=\"same\")(conv2) conv3 = keras.layers.BatchNormalization()(conv3) conv3 = keras.layers.ReLU()(conv3) gap = keras.layers.GlobalAveragePooling1D()(conv3) output_layer = keras.layers.Dense(num_classes, activation=\"softmax\")(gap) return keras.models.Model(inputs=input_layer, outputs=output_layer) model = make_model(input_shape=x_train.shape[1:]) keras.utils.plot_model(model, show_shapes=True) ('Failed to import pydot. You must `pip install pydot` and install graphviz (https://graphviz.gitlab.io/download/), ', 'for `pydotprint` to work.') Train the model epochs = 500 batch_size = 32 callbacks = [ keras.callbacks.ModelCheckpoint( \"best_model.h5\", save_best_only=True, monitor=\"val_loss\" ), keras.callbacks.ReduceLROnPlateau( monitor=\"val_loss\", factor=0.5, patience=20, min_lr=0.0001 ), keras.callbacks.EarlyStopping(monitor=\"val_loss\", patience=50, verbose=1), ] model.compile( optimizer=\"adam\", loss=\"sparse_categorical_crossentropy\", metrics=[\"sparse_categorical_accuracy\"], ) history = model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs, callbacks=callbacks, validation_split=0.2, verbose=1, ) Epoch 1/500 90/90 [==============================] - 1s 8ms/step - loss: 0.5531 - sparse_categorical_accuracy: 0.7017 - val_loss: 0.7335 - val_sparse_categorical_accuracy: 0.4882 Epoch 2/500 90/90 [==============================] - 1s 6ms/step - loss: 0.4520 - sparse_categorical_accuracy: 0.7729 - val_loss: 0.7446 - val_sparse_categorical_accuracy: 0.4882 Epoch 3/500 90/90 [==============================] - 1s 6ms/step - loss: 0.4404 - sparse_categorical_accuracy: 0.7733 - val_loss: 0.7706 - val_sparse_categorical_accuracy: 0.4882 Epoch 4/500 90/90 [==============================] - 1s 6ms/step - loss: 0.4234 - sparse_categorical_accuracy: 0.7899 - val_loss: 0.9741 - val_sparse_categorical_accuracy: 0.4882 Epoch 5/500 90/90 [==============================] - 1s 6ms/step - loss: 0.4180 - sparse_categorical_accuracy: 0.7972 - val_loss: 0.6679 - val_sparse_categorical_accuracy: 0.5936 Epoch 6/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3988 - sparse_categorical_accuracy: 0.8066 - val_loss: 0.5399 - val_sparse_categorical_accuracy: 0.6990 Epoch 7/500 90/90 [==============================] - 1s 6ms/step - loss: 0.4012 - sparse_categorical_accuracy: 0.8024 - val_loss: 0.4051 - val_sparse_categorical_accuracy: 0.8225 Epoch 8/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3903 - sparse_categorical_accuracy: 0.8080 - val_loss: 0.9671 - val_sparse_categorical_accuracy: 0.5340 Epoch 
9/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3948 - sparse_categorical_accuracy: 0.7986 - val_loss: 0.5778 - val_sparse_categorical_accuracy: 0.6436 Epoch 10/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3731 - sparse_categorical_accuracy: 0.8260 - val_loss: 0.4307 - val_sparse_categorical_accuracy: 0.7698 Epoch 11/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3645 - sparse_categorical_accuracy: 0.8260 - val_loss: 0.4010 - val_sparse_categorical_accuracy: 0.7698 Epoch 12/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3666 - sparse_categorical_accuracy: 0.8247 - val_loss: 0.3574 - val_sparse_categorical_accuracy: 0.8350 Epoch 13/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3618 - sparse_categorical_accuracy: 0.8271 - val_loss: 0.3942 - val_sparse_categorical_accuracy: 0.8044 Epoch 14/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3619 - sparse_categorical_accuracy: 0.8257 - val_loss: 0.4104 - val_sparse_categorical_accuracy: 0.7906 Epoch 15/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3353 - sparse_categorical_accuracy: 0.8521 - val_loss: 0.3819 - val_sparse_categorical_accuracy: 0.7684 Epoch 16/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3287 - sparse_categorical_accuracy: 0.8514 - val_loss: 0.3776 - val_sparse_categorical_accuracy: 0.8252 Epoch 17/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3299 - sparse_categorical_accuracy: 0.8545 - val_loss: 0.3555 - val_sparse_categorical_accuracy: 0.8350 Epoch 18/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3206 - sparse_categorical_accuracy: 0.8601 - val_loss: 0.4051 - val_sparse_categorical_accuracy: 0.7906 Epoch 19/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3125 - sparse_categorical_accuracy: 0.8608 - val_loss: 0.3792 - val_sparse_categorical_accuracy: 0.8114 Epoch 20/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3052 - sparse_categorical_accuracy: 0.8750 - val_loss: 0.3448 - val_sparse_categorical_accuracy: 0.8377 Epoch 21/500 90/90 [==============================] - 1s 6ms/step - loss: 0.3023 - sparse_categorical_accuracy: 0.8736 - val_loss: 0.3325 - val_sparse_categorical_accuracy: 0.8363 Epoch 22/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2955 - sparse_categorical_accuracy: 0.8736 - val_loss: 0.3447 - val_sparse_categorical_accuracy: 0.8225 Epoch 23/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2934 - sparse_categorical_accuracy: 0.8788 - val_loss: 0.2943 - val_sparse_categorical_accuracy: 0.8779 Epoch 24/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2972 - sparse_categorical_accuracy: 0.8715 - val_loss: 0.4946 - val_sparse_categorical_accuracy: 0.7462 Epoch 25/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2800 - sparse_categorical_accuracy: 0.8865 - val_loss: 0.2860 - val_sparse_categorical_accuracy: 0.8821 Epoch 26/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2752 - sparse_categorical_accuracy: 0.8847 - val_loss: 0.2924 - val_sparse_categorical_accuracy: 0.8655 Epoch 27/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2769 - sparse_categorical_accuracy: 0.8847 - val_loss: 0.6254 - val_sparse_categorical_accuracy: 0.6879 Epoch 28/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2821 - sparse_categorical_accuracy: 0.8799 - val_loss: 0.2764 - 
val_sparse_categorical_accuracy: 0.8821 Epoch 29/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2713 - sparse_categorical_accuracy: 0.8892 - val_loss: 0.7015 - val_sparse_categorical_accuracy: 0.6422 Epoch 30/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2633 - sparse_categorical_accuracy: 0.8885 - val_loss: 0.8508 - val_sparse_categorical_accuracy: 0.7254 Epoch 31/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2673 - sparse_categorical_accuracy: 0.8896 - val_loss: 0.4354 - val_sparse_categorical_accuracy: 0.7725 Epoch 32/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2518 - sparse_categorical_accuracy: 0.8997 - val_loss: 0.9172 - val_sparse_categorical_accuracy: 0.6394 Epoch 33/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2484 - sparse_categorical_accuracy: 0.9024 - val_loss: 0.5055 - val_sparse_categorical_accuracy: 0.7531 Epoch 34/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2352 - sparse_categorical_accuracy: 0.9059 - val_loss: 0.6289 - val_sparse_categorical_accuracy: 0.7115 Epoch 35/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2389 - sparse_categorical_accuracy: 0.9104 - val_loss: 0.2776 - val_sparse_categorical_accuracy: 0.8946 Epoch 36/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2218 - sparse_categorical_accuracy: 0.9122 - val_loss: 1.3105 - val_sparse_categorical_accuracy: 0.6408 Epoch 37/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2237 - sparse_categorical_accuracy: 0.9125 - val_loss: 0.4860 - val_sparse_categorical_accuracy: 0.7628 Epoch 38/500 90/90 [==============================] - 1s 6ms/step - loss: 0.2008 - sparse_categorical_accuracy: 0.9281 - val_loss: 0.5553 - val_sparse_categorical_accuracy: 0.7226 Epoch 39/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1999 - sparse_categorical_accuracy: 0.9233 - val_loss: 0.4511 - val_sparse_categorical_accuracy: 0.8058 Epoch 40/500 90/90 [==============================] - 0s 6ms/step - loss: 0.1857 - sparse_categorical_accuracy: 0.9330 - val_loss: 0.2912 - val_sparse_categorical_accuracy: 0.8516 Epoch 41/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1736 - sparse_categorical_accuracy: 0.9399 - val_loss: 0.9930 - val_sparse_categorical_accuracy: 0.5506 Epoch 42/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1649 - sparse_categorical_accuracy: 0.9396 - val_loss: 0.5852 - val_sparse_categorical_accuracy: 0.7198 Epoch 43/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1501 - sparse_categorical_accuracy: 0.9538 - val_loss: 0.1911 - val_sparse_categorical_accuracy: 0.9168 Epoch 44/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1512 - sparse_categorical_accuracy: 0.9455 - val_loss: 0.8169 - val_sparse_categorical_accuracy: 0.6130 Epoch 45/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1358 - sparse_categorical_accuracy: 0.9552 - val_loss: 0.4748 - val_sparse_categorical_accuracy: 0.7795 Epoch 46/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1401 - sparse_categorical_accuracy: 0.9535 - val_loss: 1.7678 - val_sparse_categorical_accuracy: 0.5881 Epoch 47/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1444 - sparse_categorical_accuracy: 0.9545 - val_loss: 1.7005 - val_sparse_categorical_accuracy: 0.5950 Epoch 48/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1320 - 
sparse_categorical_accuracy: 0.9542 - val_loss: 0.1550 - val_sparse_categorical_accuracy: 0.9431 Epoch 49/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1333 - sparse_categorical_accuracy: 0.9576 - val_loss: 0.1665 - val_sparse_categorical_accuracy: 0.9362 Epoch 50/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1367 - sparse_categorical_accuracy: 0.9549 - val_loss: 0.4227 - val_sparse_categorical_accuracy: 0.8308 Epoch 51/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1391 - sparse_categorical_accuracy: 0.9503 - val_loss: 0.1729 - val_sparse_categorical_accuracy: 0.9390 Epoch 52/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1237 - sparse_categorical_accuracy: 0.9573 - val_loss: 0.1338 - val_sparse_categorical_accuracy: 0.9487 Epoch 53/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1397 - sparse_categorical_accuracy: 0.9531 - val_loss: 0.1667 - val_sparse_categorical_accuracy: 0.9487 Epoch 54/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1205 - sparse_categorical_accuracy: 0.9601 - val_loss: 0.2904 - val_sparse_categorical_accuracy: 0.8821 Epoch 55/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1302 - sparse_categorical_accuracy: 0.9538 - val_loss: 0.9437 - val_sparse_categorical_accuracy: 0.7060 Epoch 56/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1241 - sparse_categorical_accuracy: 0.9580 - val_loss: 0.1346 - val_sparse_categorical_accuracy: 0.9501 Epoch 57/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1158 - sparse_categorical_accuracy: 0.9646 - val_loss: 0.9489 - val_sparse_categorical_accuracy: 0.6907 Epoch 58/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1175 - sparse_categorical_accuracy: 0.9573 - val_loss: 0.6089 - val_sparse_categorical_accuracy: 0.7212 Epoch 59/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1160 - sparse_categorical_accuracy: 0.9611 - val_loss: 0.1294 - val_sparse_categorical_accuracy: 0.9487 Epoch 60/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1096 - sparse_categorical_accuracy: 0.9642 - val_loss: 0.1527 - val_sparse_categorical_accuracy: 0.9417 Epoch 61/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1163 - sparse_categorical_accuracy: 0.9611 - val_loss: 0.5554 - val_sparse_categorical_accuracy: 0.7684 Epoch 62/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1090 - sparse_categorical_accuracy: 0.9656 - val_loss: 0.2433 - val_sparse_categorical_accuracy: 0.8904 Epoch 63/500 90/90 [==============================] - 0s 6ms/step - loss: 0.1105 - sparse_categorical_accuracy: 0.9656 - val_loss: 0.3426 - val_sparse_categorical_accuracy: 0.8571 Epoch 64/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1058 - sparse_categorical_accuracy: 0.9667 - val_loss: 2.1389 - val_sparse_categorical_accuracy: 0.5520 Epoch 65/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1037 - sparse_categorical_accuracy: 0.9674 - val_loss: 0.3875 - val_sparse_categorical_accuracy: 0.8738 Epoch 66/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1135 - sparse_categorical_accuracy: 0.9622 - val_loss: 0.1783 - val_sparse_categorical_accuracy: 0.9459 Epoch 67/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1006 - sparse_categorical_accuracy: 0.9681 - val_loss: 0.1462 - val_sparse_categorical_accuracy: 0.9515 Epoch 68/500 90/90 
[==============================] - 1s 6ms/step - loss: 0.0994 - sparse_categorical_accuracy: 0.9684 - val_loss: 0.1140 - val_sparse_categorical_accuracy: 0.9584 Epoch 69/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1095 - sparse_categorical_accuracy: 0.9635 - val_loss: 1.6500 - val_sparse_categorical_accuracy: 0.5589 Epoch 70/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1118 - sparse_categorical_accuracy: 0.9628 - val_loss: 1.3355 - val_sparse_categorical_accuracy: 0.6768 Epoch 71/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1155 - sparse_categorical_accuracy: 0.9608 - val_loss: 0.3167 - val_sparse_categorical_accuracy: 0.8793 Epoch 72/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1041 - sparse_categorical_accuracy: 0.9677 - val_loss: 0.1329 - val_sparse_categorical_accuracy: 0.9417 Epoch 73/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1001 - sparse_categorical_accuracy: 0.9677 - val_loss: 0.1385 - val_sparse_categorical_accuracy: 0.9417 Epoch 74/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0997 - sparse_categorical_accuracy: 0.9642 - val_loss: 0.1369 - val_sparse_categorical_accuracy: 0.9473 Epoch 75/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1051 - sparse_categorical_accuracy: 0.9667 - val_loss: 0.5135 - val_sparse_categorical_accuracy: 0.7781 Epoch 76/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0945 - sparse_categorical_accuracy: 0.9688 - val_loss: 0.1440 - val_sparse_categorical_accuracy: 0.9556 Epoch 77/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1081 - sparse_categorical_accuracy: 0.9618 - val_loss: 0.2210 - val_sparse_categorical_accuracy: 0.9196 Epoch 78/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1109 - sparse_categorical_accuracy: 0.9618 - val_loss: 0.2181 - val_sparse_categorical_accuracy: 0.9196 Epoch 79/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1047 - sparse_categorical_accuracy: 0.9608 - val_loss: 0.2074 - val_sparse_categorical_accuracy: 0.9237 Epoch 80/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1035 - sparse_categorical_accuracy: 0.9663 - val_loss: 0.3792 - val_sparse_categorical_accuracy: 0.8571 Epoch 81/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1040 - sparse_categorical_accuracy: 0.9674 - val_loss: 0.7353 - val_sparse_categorical_accuracy: 0.7420 Epoch 82/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1106 - sparse_categorical_accuracy: 0.9649 - val_loss: 0.2948 - val_sparse_categorical_accuracy: 0.9140 Epoch 83/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1066 - sparse_categorical_accuracy: 0.9656 - val_loss: 0.1338 - val_sparse_categorical_accuracy: 0.9570 Epoch 84/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0988 - sparse_categorical_accuracy: 0.9691 - val_loss: 0.1095 - val_sparse_categorical_accuracy: 0.9570 Epoch 85/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1065 - sparse_categorical_accuracy: 0.9622 - val_loss: 0.1717 - val_sparse_categorical_accuracy: 0.9417 Epoch 86/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1087 - sparse_categorical_accuracy: 0.9660 - val_loss: 0.1206 - val_sparse_categorical_accuracy: 0.9570 Epoch 87/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0991 - sparse_categorical_accuracy: 0.9656 - val_loss: 0.4285 - 
val_sparse_categorical_accuracy: 0.8474 Epoch 88/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0984 - sparse_categorical_accuracy: 0.9667 - val_loss: 0.1589 - val_sparse_categorical_accuracy: 0.9334 Epoch 89/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1023 - sparse_categorical_accuracy: 0.9701 - val_loss: 1.5442 - val_sparse_categorical_accuracy: 0.6782 Epoch 90/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0995 - sparse_categorical_accuracy: 0.9663 - val_loss: 0.1211 - val_sparse_categorical_accuracy: 0.9528 Epoch 91/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0908 - sparse_categorical_accuracy: 0.9705 - val_loss: 0.0987 - val_sparse_categorical_accuracy: 0.9556 Epoch 92/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0919 - sparse_categorical_accuracy: 0.9677 - val_loss: 0.2109 - val_sparse_categorical_accuracy: 0.9140 Epoch 93/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0890 - sparse_categorical_accuracy: 0.9715 - val_loss: 0.1509 - val_sparse_categorical_accuracy: 0.9431 Epoch 94/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0958 - sparse_categorical_accuracy: 0.9694 - val_loss: 0.1761 - val_sparse_categorical_accuracy: 0.9417 Epoch 95/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1000 - sparse_categorical_accuracy: 0.9663 - val_loss: 0.1466 - val_sparse_categorical_accuracy: 0.9293 Epoch 96/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0913 - sparse_categorical_accuracy: 0.9698 - val_loss: 0.6963 - val_sparse_categorical_accuracy: 0.7725 Epoch 97/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0954 - sparse_categorical_accuracy: 0.9667 - val_loss: 0.3042 - val_sparse_categorical_accuracy: 0.8738 Epoch 98/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0866 - sparse_categorical_accuracy: 0.9722 - val_loss: 0.1115 - val_sparse_categorical_accuracy: 0.9584 Epoch 99/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1017 - sparse_categorical_accuracy: 0.9615 - val_loss: 0.1195 - val_sparse_categorical_accuracy: 0.9584 Epoch 100/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1012 - sparse_categorical_accuracy: 0.9677 - val_loss: 0.1975 - val_sparse_categorical_accuracy: 0.9196 Epoch 101/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1058 - sparse_categorical_accuracy: 0.9622 - val_loss: 0.1960 - val_sparse_categorical_accuracy: 0.9487 Epoch 102/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0914 - sparse_categorical_accuracy: 0.9705 - val_loss: 0.1086 - val_sparse_categorical_accuracy: 0.9598 Epoch 103/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0907 - sparse_categorical_accuracy: 0.9701 - val_loss: 0.1117 - val_sparse_categorical_accuracy: 0.9584 Epoch 104/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0959 - sparse_categorical_accuracy: 0.9674 - val_loss: 3.9192 - val_sparse_categorical_accuracy: 0.4993 Epoch 105/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0991 - sparse_categorical_accuracy: 0.9632 - val_loss: 0.1232 - val_sparse_categorical_accuracy: 0.9473 Epoch 106/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0953 - sparse_categorical_accuracy: 0.9653 - val_loss: 0.1328 - val_sparse_categorical_accuracy: 0.9584 Epoch 107/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0835 - 
sparse_categorical_accuracy: 0.9750 - val_loss: 0.1480 - val_sparse_categorical_accuracy: 0.9542 Epoch 108/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0865 - sparse_categorical_accuracy: 0.9701 - val_loss: 0.1095 - val_sparse_categorical_accuracy: 0.9598 Epoch 109/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0940 - sparse_categorical_accuracy: 0.9681 - val_loss: 3.4316 - val_sparse_categorical_accuracy: 0.6422 Epoch 110/500 90/90 [==============================] - 1s 6ms/step - loss: 0.1015 - sparse_categorical_accuracy: 0.9632 - val_loss: 4.1126 - val_sparse_categorical_accuracy: 0.4965 Epoch 111/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0882 - sparse_categorical_accuracy: 0.9698 - val_loss: 0.1968 - val_sparse_categorical_accuracy: 0.9390 Epoch 112/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0778 - sparse_categorical_accuracy: 0.9764 - val_loss: 0.1051 - val_sparse_categorical_accuracy: 0.9584 Epoch 113/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0784 - sparse_categorical_accuracy: 0.9743 - val_loss: 0.1120 - val_sparse_categorical_accuracy: 0.9612 Epoch 114/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0765 - sparse_categorical_accuracy: 0.9792 - val_loss: 0.1347 - val_sparse_categorical_accuracy: 0.9556 Epoch 115/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0771 - sparse_categorical_accuracy: 0.9736 - val_loss: 0.1268 - val_sparse_categorical_accuracy: 0.9556 Epoch 116/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0787 - sparse_categorical_accuracy: 0.9743 - val_loss: 0.1014 - val_sparse_categorical_accuracy: 0.9626 Epoch 117/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0802 - sparse_categorical_accuracy: 0.9726 - val_loss: 0.0995 - val_sparse_categorical_accuracy: 0.9695 Epoch 118/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0770 - sparse_categorical_accuracy: 0.9774 - val_loss: 0.1022 - val_sparse_categorical_accuracy: 0.9598 Epoch 119/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0758 - sparse_categorical_accuracy: 0.9764 - val_loss: 0.2318 - val_sparse_categorical_accuracy: 0.9098 Epoch 120/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0751 - sparse_categorical_accuracy: 0.9750 - val_loss: 0.3361 - val_sparse_categorical_accuracy: 0.8793 Epoch 121/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0708 - sparse_categorical_accuracy: 0.9792 - val_loss: 0.1739 - val_sparse_categorical_accuracy: 0.9362 Epoch 122/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0764 - sparse_categorical_accuracy: 0.9753 - val_loss: 0.1351 - val_sparse_categorical_accuracy: 0.9556 Epoch 123/500 90/90 [==============================] - 0s 6ms/step - loss: 0.0724 - sparse_categorical_accuracy: 0.9750 - val_loss: 0.1064 - val_sparse_categorical_accuracy: 0.9556 Epoch 124/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0788 - sparse_categorical_accuracy: 0.9736 - val_loss: 0.1159 - val_sparse_categorical_accuracy: 0.9598 Epoch 125/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0806 - sparse_categorical_accuracy: 0.9719 - val_loss: 0.1268 - val_sparse_categorical_accuracy: 0.9612 Epoch 126/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0755 - sparse_categorical_accuracy: 0.9753 - val_loss: 0.1175 - val_sparse_categorical_accuracy: 0.9528 Epoch 127/500 
90/90 [==============================] - 1s 6ms/step - loss: 0.0741 - sparse_categorical_accuracy: 0.9757 - val_loss: 0.1049 - val_sparse_categorical_accuracy: 0.9612 Epoch 128/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0720 - sparse_categorical_accuracy: 0.9767 - val_loss: 0.1756 - val_sparse_categorical_accuracy: 0.9376 Epoch 129/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0734 - sparse_categorical_accuracy: 0.9757 - val_loss: 0.1165 - val_sparse_categorical_accuracy: 0.9639 Epoch 130/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0743 - sparse_categorical_accuracy: 0.9778 - val_loss: 0.1398 - val_sparse_categorical_accuracy: 0.9417 Epoch 131/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0764 - sparse_categorical_accuracy: 0.9726 - val_loss: 0.1193 - val_sparse_categorical_accuracy: 0.9459 Epoch 132/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0741 - sparse_categorical_accuracy: 0.9747 - val_loss: 0.1661 - val_sparse_categorical_accuracy: 0.9473 Epoch 133/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0677 - sparse_categorical_accuracy: 0.9792 - val_loss: 0.1016 - val_sparse_categorical_accuracy: 0.9612 Epoch 134/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0673 - sparse_categorical_accuracy: 0.9778 - val_loss: 0.1049 - val_sparse_categorical_accuracy: 0.9584 Epoch 135/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0681 - sparse_categorical_accuracy: 0.9802 - val_loss: 0.1109 - val_sparse_categorical_accuracy: 0.9515 Epoch 136/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0673 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.1198 - val_sparse_categorical_accuracy: 0.9542 Epoch 137/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0679 - sparse_categorical_accuracy: 0.9767 - val_loss: 0.1130 - val_sparse_categorical_accuracy: 0.9528 Epoch 138/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0717 - sparse_categorical_accuracy: 0.9774 - val_loss: 0.1009 - val_sparse_categorical_accuracy: 0.9612 Epoch 139/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0657 - sparse_categorical_accuracy: 0.9771 - val_loss: 0.1046 - val_sparse_categorical_accuracy: 0.9528 Epoch 140/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0711 - sparse_categorical_accuracy: 0.9767 - val_loss: 0.0977 - val_sparse_categorical_accuracy: 0.9639 Epoch 141/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0719 - sparse_categorical_accuracy: 0.9774 - val_loss: 0.1071 - val_sparse_categorical_accuracy: 0.9612 Epoch 142/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0663 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.1027 - val_sparse_categorical_accuracy: 0.9612 Epoch 143/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0699 - sparse_categorical_accuracy: 0.9781 - val_loss: 0.1131 - val_sparse_categorical_accuracy: 0.9626 Epoch 144/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0670 - sparse_categorical_accuracy: 0.9771 - val_loss: 0.1025 - val_sparse_categorical_accuracy: 0.9626 Epoch 145/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0653 - sparse_categorical_accuracy: 0.9785 - val_loss: 0.0935 - val_sparse_categorical_accuracy: 0.9653 Epoch 146/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0616 - sparse_categorical_accuracy: 0.9812 - 
val_loss: 0.1075 - val_sparse_categorical_accuracy: 0.9556 Epoch 147/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0643 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.0960 - val_sparse_categorical_accuracy: 0.9584 Epoch 148/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0681 - sparse_categorical_accuracy: 0.9792 - val_loss: 0.0944 - val_sparse_categorical_accuracy: 0.9639 Epoch 149/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0661 - sparse_categorical_accuracy: 0.9792 - val_loss: 0.1311 - val_sparse_categorical_accuracy: 0.9501 Epoch 150/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0693 - sparse_categorical_accuracy: 0.9781 - val_loss: 0.1715 - val_sparse_categorical_accuracy: 0.9390 Epoch 151/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0658 - sparse_categorical_accuracy: 0.9802 - val_loss: 0.1010 - val_sparse_categorical_accuracy: 0.9612 Epoch 152/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0652 - sparse_categorical_accuracy: 0.9778 - val_loss: 0.0949 - val_sparse_categorical_accuracy: 0.9639 Epoch 153/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0640 - sparse_categorical_accuracy: 0.9812 - val_loss: 0.0996 - val_sparse_categorical_accuracy: 0.9598 Epoch 154/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0659 - sparse_categorical_accuracy: 0.9785 - val_loss: 0.0980 - val_sparse_categorical_accuracy: 0.9612 Epoch 155/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0666 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.1490 - val_sparse_categorical_accuracy: 0.9501 Epoch 156/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0659 - sparse_categorical_accuracy: 0.9774 - val_loss: 0.1010 - val_sparse_categorical_accuracy: 0.9570 Epoch 157/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0650 - sparse_categorical_accuracy: 0.9792 - val_loss: 0.1040 - val_sparse_categorical_accuracy: 0.9570 Epoch 158/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0626 - sparse_categorical_accuracy: 0.9816 - val_loss: 0.0965 - val_sparse_categorical_accuracy: 0.9612 Epoch 159/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0645 - sparse_categorical_accuracy: 0.9823 - val_loss: 0.1010 - val_sparse_categorical_accuracy: 0.9570 Epoch 160/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0691 - sparse_categorical_accuracy: 0.9774 - val_loss: 0.0987 - val_sparse_categorical_accuracy: 0.9626 Epoch 161/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0615 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.0936 - val_sparse_categorical_accuracy: 0.9612 Epoch 162/500 90/90 [==============================] - 0s 6ms/step - loss: 0.0625 - sparse_categorical_accuracy: 0.9792 - val_loss: 0.1129 - val_sparse_categorical_accuracy: 0.9626 Epoch 163/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0601 - sparse_categorical_accuracy: 0.9802 - val_loss: 0.0989 - val_sparse_categorical_accuracy: 0.9584 Epoch 164/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0624 - sparse_categorical_accuracy: 0.9812 - val_loss: 0.1512 - val_sparse_categorical_accuracy: 0.9515 Epoch 165/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0641 - sparse_categorical_accuracy: 0.9778 - val_loss: 0.0986 - val_sparse_categorical_accuracy: 0.9584 Epoch 166/500 90/90 [==============================] - 
1s 6ms/step - loss: 0.0558 - sparse_categorical_accuracy: 0.9823 - val_loss: 0.0979 - val_sparse_categorical_accuracy: 0.9598 Epoch 167/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0607 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.1085 - val_sparse_categorical_accuracy: 0.9626 Epoch 168/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0585 - sparse_categorical_accuracy: 0.9812 - val_loss: 0.0976 - val_sparse_categorical_accuracy: 0.9639 Epoch 169/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0599 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.1078 - val_sparse_categorical_accuracy: 0.9626 Epoch 170/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0608 - sparse_categorical_accuracy: 0.9833 - val_loss: 0.0951 - val_sparse_categorical_accuracy: 0.9626 Epoch 171/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0612 - sparse_categorical_accuracy: 0.9812 - val_loss: 0.1004 - val_sparse_categorical_accuracy: 0.9612 Epoch 172/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0622 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.0949 - val_sparse_categorical_accuracy: 0.9653 Epoch 173/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0622 - sparse_categorical_accuracy: 0.9823 - val_loss: 0.0923 - val_sparse_categorical_accuracy: 0.9639 Epoch 174/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0600 - sparse_categorical_accuracy: 0.9802 - val_loss: 0.1019 - val_sparse_categorical_accuracy: 0.9639 Epoch 175/500 90/90 [==============================] - 0s 6ms/step - loss: 0.0591 - sparse_categorical_accuracy: 0.9816 - val_loss: 0.1238 - val_sparse_categorical_accuracy: 0.9626 Epoch 176/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0588 - sparse_categorical_accuracy: 0.9823 - val_loss: 0.0917 - val_sparse_categorical_accuracy: 0.9639 Epoch 177/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0598 - sparse_categorical_accuracy: 0.9819 - val_loss: 0.1138 - val_sparse_categorical_accuracy: 0.9626 Epoch 178/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0566 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.0938 - val_sparse_categorical_accuracy: 0.9639 Epoch 179/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0634 - sparse_categorical_accuracy: 0.9809 - val_loss: 0.0966 - val_sparse_categorical_accuracy: 0.9639 Epoch 180/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0579 - sparse_categorical_accuracy: 0.9830 - val_loss: 0.1033 - val_sparse_categorical_accuracy: 0.9653 Epoch 181/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0601 - sparse_categorical_accuracy: 0.9819 - val_loss: 0.0937 - val_sparse_categorical_accuracy: 0.9626 Epoch 182/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0545 - sparse_categorical_accuracy: 0.9847 - val_loss: 0.0979 - val_sparse_categorical_accuracy: 0.9626 Epoch 183/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0569 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0987 - val_sparse_categorical_accuracy: 0.9626 Epoch 184/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0569 - sparse_categorical_accuracy: 0.9854 - val_loss: 0.0907 - val_sparse_categorical_accuracy: 0.9626 Epoch 185/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0579 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0918 - 
val_sparse_categorical_accuracy: 0.9626 Epoch 186/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0571 - sparse_categorical_accuracy: 0.9819 - val_loss: 0.0933 - val_sparse_categorical_accuracy: 0.9626 Epoch 187/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0577 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.0933 - val_sparse_categorical_accuracy: 0.9626 Epoch 188/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0634 - sparse_categorical_accuracy: 0.9809 - val_loss: 0.1014 - val_sparse_categorical_accuracy: 0.9667 Epoch 189/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0582 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.0906 - val_sparse_categorical_accuracy: 0.9639 Epoch 190/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0571 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.0931 - val_sparse_categorical_accuracy: 0.9626 Epoch 191/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0602 - sparse_categorical_accuracy: 0.9812 - val_loss: 0.0903 - val_sparse_categorical_accuracy: 0.9626 Epoch 192/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0581 - sparse_categorical_accuracy: 0.9809 - val_loss: 0.0915 - val_sparse_categorical_accuracy: 0.9639 Epoch 193/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0574 - sparse_categorical_accuracy: 0.9819 - val_loss: 0.0914 - val_sparse_categorical_accuracy: 0.9639 Epoch 194/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0530 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0941 - val_sparse_categorical_accuracy: 0.9626 Epoch 195/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0557 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0925 - val_sparse_categorical_accuracy: 0.9653 Epoch 196/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0576 - sparse_categorical_accuracy: 0.9819 - val_loss: 0.1018 - val_sparse_categorical_accuracy: 0.9639 Epoch 197/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0562 - sparse_categorical_accuracy: 0.9823 - val_loss: 0.1003 - val_sparse_categorical_accuracy: 0.9626 Epoch 198/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0582 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.0917 - val_sparse_categorical_accuracy: 0.9612 Epoch 199/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0602 - sparse_categorical_accuracy: 0.9809 - val_loss: 0.1001 - val_sparse_categorical_accuracy: 0.9667 Epoch 200/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0580 - sparse_categorical_accuracy: 0.9816 - val_loss: 0.0927 - val_sparse_categorical_accuracy: 0.9584 Epoch 201/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0573 - sparse_categorical_accuracy: 0.9833 - val_loss: 0.1226 - val_sparse_categorical_accuracy: 0.9612 Epoch 202/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0581 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0941 - val_sparse_categorical_accuracy: 0.9612 Epoch 203/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0602 - sparse_categorical_accuracy: 0.9819 - val_loss: 0.0933 - val_sparse_categorical_accuracy: 0.9639 Epoch 204/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0539 - sparse_categorical_accuracy: 0.9854 - val_loss: 0.0956 - val_sparse_categorical_accuracy: 0.9626 Epoch 205/500 90/90 [==============================] - 1s 6ms/step - loss: 
0.0561 - sparse_categorical_accuracy: 0.9819 - val_loss: 0.0947 - val_sparse_categorical_accuracy: 0.9639 Epoch 206/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0604 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.1132 - val_sparse_categorical_accuracy: 0.9639 Epoch 207/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0564 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0930 - val_sparse_categorical_accuracy: 0.9653 Epoch 208/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0615 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.0941 - val_sparse_categorical_accuracy: 0.9626 Epoch 209/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0555 - sparse_categorical_accuracy: 0.9830 - val_loss: 0.0900 - val_sparse_categorical_accuracy: 0.9626 Epoch 210/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0589 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0936 - val_sparse_categorical_accuracy: 0.9612 Epoch 211/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0615 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.0947 - val_sparse_categorical_accuracy: 0.9626 Epoch 212/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0599 - sparse_categorical_accuracy: 0.9799 - val_loss: 0.0943 - val_sparse_categorical_accuracy: 0.9612 Epoch 213/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0599 - sparse_categorical_accuracy: 0.9819 - val_loss: 0.0908 - val_sparse_categorical_accuracy: 0.9653 Epoch 214/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0548 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.1143 - val_sparse_categorical_accuracy: 0.9639 Epoch 215/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0526 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.0965 - val_sparse_categorical_accuracy: 0.9626 Epoch 216/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0588 - sparse_categorical_accuracy: 0.9830 - val_loss: 0.0958 - val_sparse_categorical_accuracy: 0.9639 Epoch 217/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0549 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.0942 - val_sparse_categorical_accuracy: 0.9612 Epoch 218/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0513 - sparse_categorical_accuracy: 0.9833 - val_loss: 0.1027 - val_sparse_categorical_accuracy: 0.9612 Epoch 219/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0555 - sparse_categorical_accuracy: 0.9816 - val_loss: 0.1217 - val_sparse_categorical_accuracy: 0.9598 Epoch 220/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0572 - sparse_categorical_accuracy: 0.9816 - val_loss: 0.0933 - val_sparse_categorical_accuracy: 0.9653 Epoch 221/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0545 - sparse_categorical_accuracy: 0.9823 - val_loss: 0.0959 - val_sparse_categorical_accuracy: 0.9653 Epoch 222/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0545 - sparse_categorical_accuracy: 0.9833 - val_loss: 0.1163 - val_sparse_categorical_accuracy: 0.9639 Epoch 223/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0556 - sparse_categorical_accuracy: 0.9830 - val_loss: 0.0955 - val_sparse_categorical_accuracy: 0.9626 Epoch 224/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0566 - sparse_categorical_accuracy: 0.9816 - val_loss: 0.0931 - val_sparse_categorical_accuracy: 0.9598 Epoch 
225/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0543 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0915 - val_sparse_categorical_accuracy: 0.9667 Epoch 226/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0566 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.0931 - val_sparse_categorical_accuracy: 0.9626 Epoch 227/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0528 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0984 - val_sparse_categorical_accuracy: 0.9639 Epoch 228/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0576 - sparse_categorical_accuracy: 0.9816 - val_loss: 0.1019 - val_sparse_categorical_accuracy: 0.9639 Epoch 229/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0572 - sparse_categorical_accuracy: 0.9823 - val_loss: 0.0908 - val_sparse_categorical_accuracy: 0.9639 Epoch 230/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0543 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.0923 - val_sparse_categorical_accuracy: 0.9639 Epoch 231/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0566 - sparse_categorical_accuracy: 0.9816 - val_loss: 0.0960 - val_sparse_categorical_accuracy: 0.9626 Epoch 232/500 90/90 [==============================] - 0s 6ms/step - loss: 0.0539 - sparse_categorical_accuracy: 0.9823 - val_loss: 0.0954 - val_sparse_categorical_accuracy: 0.9653 Epoch 233/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0536 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0965 - val_sparse_categorical_accuracy: 0.9626 Epoch 234/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0512 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0945 - val_sparse_categorical_accuracy: 0.9639 Epoch 235/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0528 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0925 - val_sparse_categorical_accuracy: 0.9639 Epoch 236/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0497 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0974 - val_sparse_categorical_accuracy: 0.9626 Epoch 237/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0529 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0957 - val_sparse_categorical_accuracy: 0.9612 Epoch 238/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0552 - sparse_categorical_accuracy: 0.9819 - val_loss: 0.0961 - val_sparse_categorical_accuracy: 0.9626 Epoch 239/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0573 - sparse_categorical_accuracy: 0.9830 - val_loss: 0.0943 - val_sparse_categorical_accuracy: 0.9598 Epoch 240/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0558 - sparse_categorical_accuracy: 0.9812 - val_loss: 0.0935 - val_sparse_categorical_accuracy: 0.9639 Epoch 241/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0526 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.0958 - val_sparse_categorical_accuracy: 0.9626 Epoch 242/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0488 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0976 - val_sparse_categorical_accuracy: 0.9626 Epoch 243/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0499 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0935 - val_sparse_categorical_accuracy: 0.9626 Epoch 244/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0505 - sparse_categorical_accuracy: 0.9861 - 
val_loss: 0.0945 - val_sparse_categorical_accuracy: 0.9639 Epoch 245/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0483 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0952 - val_sparse_categorical_accuracy: 0.9584 Epoch 246/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0524 - sparse_categorical_accuracy: 0.9847 - val_loss: 0.0958 - val_sparse_categorical_accuracy: 0.9653 Epoch 247/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0507 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0934 - val_sparse_categorical_accuracy: 0.9653 Epoch 248/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0553 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0946 - val_sparse_categorical_accuracy: 0.9598 Epoch 249/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0577 - sparse_categorical_accuracy: 0.9809 - val_loss: 0.0979 - val_sparse_categorical_accuracy: 0.9612 Epoch 250/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0535 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.0979 - val_sparse_categorical_accuracy: 0.9626 Epoch 251/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0509 - sparse_categorical_accuracy: 0.9847 - val_loss: 0.0937 - val_sparse_categorical_accuracy: 0.9626 Epoch 252/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0571 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.0937 - val_sparse_categorical_accuracy: 0.9612 Epoch 253/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0525 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.1017 - val_sparse_categorical_accuracy: 0.9639 Epoch 254/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0551 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0930 - val_sparse_categorical_accuracy: 0.9639 Epoch 255/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0557 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.0896 - val_sparse_categorical_accuracy: 0.9653 Epoch 256/500 90/90 [==============================] - 0s 6ms/step - loss: 0.0494 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0908 - val_sparse_categorical_accuracy: 0.9612 Epoch 257/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0492 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0953 - val_sparse_categorical_accuracy: 0.9612 Epoch 258/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0525 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0923 - val_sparse_categorical_accuracy: 0.9626 Epoch 259/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0514 - sparse_categorical_accuracy: 0.9854 - val_loss: 0.0937 - val_sparse_categorical_accuracy: 0.9612 Epoch 260/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0511 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0934 - val_sparse_categorical_accuracy: 0.9612 Epoch 261/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0510 - sparse_categorical_accuracy: 0.9854 - val_loss: 0.0914 - val_sparse_categorical_accuracy: 0.9639 Epoch 262/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0498 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0957 - val_sparse_categorical_accuracy: 0.9653 Epoch 263/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0543 - sparse_categorical_accuracy: 0.9819 - val_loss: 0.0956 - val_sparse_categorical_accuracy: 0.9653 Epoch 264/500 90/90 [==============================] - 
1s 6ms/step - loss: 0.0564 - sparse_categorical_accuracy: 0.9812 - val_loss: 0.0917 - val_sparse_categorical_accuracy: 0.9598 Epoch 265/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0529 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0928 - val_sparse_categorical_accuracy: 0.9626 Epoch 266/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0564 - sparse_categorical_accuracy: 0.9816 - val_loss: 0.0978 - val_sparse_categorical_accuracy: 0.9639 Epoch 267/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0497 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0917 - val_sparse_categorical_accuracy: 0.9639 Epoch 268/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0538 - sparse_categorical_accuracy: 0.9795 - val_loss: 0.0913 - val_sparse_categorical_accuracy: 0.9626 Epoch 269/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0497 - sparse_categorical_accuracy: 0.9875 - val_loss: 0.0928 - val_sparse_categorical_accuracy: 0.9598 Epoch 270/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0553 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0950 - val_sparse_categorical_accuracy: 0.9626 Epoch 271/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0501 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0923 - val_sparse_categorical_accuracy: 0.9639 Epoch 272/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0575 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.0903 - val_sparse_categorical_accuracy: 0.9639 Epoch 273/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0490 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.1155 - val_sparse_categorical_accuracy: 0.9626 Epoch 274/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0553 - sparse_categorical_accuracy: 0.9809 - val_loss: 0.0923 - val_sparse_categorical_accuracy: 0.9653 Epoch 275/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0513 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.0915 - val_sparse_categorical_accuracy: 0.9598 Epoch 276/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0494 - sparse_categorical_accuracy: 0.9872 - val_loss: 0.0918 - val_sparse_categorical_accuracy: 0.9639 Epoch 277/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0606 - sparse_categorical_accuracy: 0.9819 - val_loss: 0.1049 - val_sparse_categorical_accuracy: 0.9639 Epoch 278/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0488 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0936 - val_sparse_categorical_accuracy: 0.9598 Epoch 279/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0535 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0934 - val_sparse_categorical_accuracy: 0.9639 Epoch 280/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0493 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0997 - val_sparse_categorical_accuracy: 0.9626 Epoch 281/500 90/90 [==============================] - 0s 5ms/step - loss: 0.0485 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0943 - val_sparse_categorical_accuracy: 0.9626 Epoch 282/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0493 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0906 - val_sparse_categorical_accuracy: 0.9626 Epoch 283/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0491 - sparse_categorical_accuracy: 0.9847 - val_loss: 0.0919 - 
val_sparse_categorical_accuracy: 0.9653 Epoch 284/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0482 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0895 - val_sparse_categorical_accuracy: 0.9639 Epoch 285/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0505 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0926 - val_sparse_categorical_accuracy: 0.9612 Epoch 286/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0466 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0950 - val_sparse_categorical_accuracy: 0.9639 Epoch 287/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0576 - sparse_categorical_accuracy: 0.9823 - val_loss: 0.0935 - val_sparse_categorical_accuracy: 0.9639 Epoch 288/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0527 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.0943 - val_sparse_categorical_accuracy: 0.9639 Epoch 289/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0492 - sparse_categorical_accuracy: 0.9878 - val_loss: 0.0961 - val_sparse_categorical_accuracy: 0.9667 Epoch 290/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0466 - sparse_categorical_accuracy: 0.9882 - val_loss: 0.0947 - val_sparse_categorical_accuracy: 0.9612 Epoch 291/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0498 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0936 - val_sparse_categorical_accuracy: 0.9653 Epoch 292/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0489 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0922 - val_sparse_categorical_accuracy: 0.9653 Epoch 293/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0499 - sparse_categorical_accuracy: 0.9878 - val_loss: 0.0907 - val_sparse_categorical_accuracy: 0.9612 Epoch 294/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0511 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.0892 - val_sparse_categorical_accuracy: 0.9639 Epoch 295/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0502 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0946 - val_sparse_categorical_accuracy: 0.9639 Epoch 296/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0504 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0902 - val_sparse_categorical_accuracy: 0.9639 Epoch 297/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0532 - sparse_categorical_accuracy: 0.9826 - val_loss: 0.0908 - val_sparse_categorical_accuracy: 0.9639 Epoch 298/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0526 - sparse_categorical_accuracy: 0.9823 - val_loss: 0.0950 - val_sparse_categorical_accuracy: 0.9584 Epoch 299/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0478 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.1001 - val_sparse_categorical_accuracy: 0.9612 Epoch 300/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0543 - sparse_categorical_accuracy: 0.9833 - val_loss: 0.0929 - val_sparse_categorical_accuracy: 0.9639 Epoch 301/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0507 - sparse_categorical_accuracy: 0.9847 - val_loss: 0.0935 - val_sparse_categorical_accuracy: 0.9653 Epoch 302/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0512 - sparse_categorical_accuracy: 0.9833 - val_loss: 0.0897 - val_sparse_categorical_accuracy: 0.9612 Epoch 303/500 90/90 [==============================] - 0s 5ms/step - loss: 
0.0480 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.1003 - val_sparse_categorical_accuracy: 0.9612 Epoch 304/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0538 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0997 - val_sparse_categorical_accuracy: 0.9612 Epoch 305/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0528 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.1028 - val_sparse_categorical_accuracy: 0.9626 Epoch 306/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0507 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0949 - val_sparse_categorical_accuracy: 0.9612 Epoch 307/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0534 - sparse_categorical_accuracy: 0.9812 - val_loss: 0.0902 - val_sparse_categorical_accuracy: 0.9639 Epoch 308/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0497 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0929 - val_sparse_categorical_accuracy: 0.9681 Epoch 309/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0510 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0904 - val_sparse_categorical_accuracy: 0.9626 Epoch 310/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0518 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0967 - val_sparse_categorical_accuracy: 0.9598 Epoch 311/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0521 - sparse_categorical_accuracy: 0.9847 - val_loss: 0.0945 - val_sparse_categorical_accuracy: 0.9626 Epoch 312/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0586 - sparse_categorical_accuracy: 0.9806 - val_loss: 0.0957 - val_sparse_categorical_accuracy: 0.9626 Epoch 313/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0470 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0984 - val_sparse_categorical_accuracy: 0.9598 Epoch 314/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0533 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0908 - val_sparse_categorical_accuracy: 0.9598 Epoch 315/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0502 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0908 - val_sparse_categorical_accuracy: 0.9639 Epoch 316/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0463 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0912 - val_sparse_categorical_accuracy: 0.9639 Epoch 317/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0515 - sparse_categorical_accuracy: 0.9830 - val_loss: 0.1047 - val_sparse_categorical_accuracy: 0.9626 Epoch 318/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0522 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0916 - val_sparse_categorical_accuracy: 0.9639 Epoch 319/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0494 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0919 - val_sparse_categorical_accuracy: 0.9639 Epoch 320/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0446 - sparse_categorical_accuracy: 0.9906 - val_loss: 0.0901 - val_sparse_categorical_accuracy: 0.9626 Epoch 321/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0527 - sparse_categorical_accuracy: 0.9847 - val_loss: 0.0910 - val_sparse_categorical_accuracy: 0.9598 Epoch 322/500 90/90 [==============================] - 0s 6ms/step - loss: 0.0476 - sparse_categorical_accuracy: 0.9872 - val_loss: 0.1029 - val_sparse_categorical_accuracy: 0.9598 Epoch 
323/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0505 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0939 - val_sparse_categorical_accuracy: 0.9626 Epoch 324/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0505 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.0900 - val_sparse_categorical_accuracy: 0.9612 Epoch 325/500 90/90 [==============================] - 0s 6ms/step - loss: 0.0516 - sparse_categorical_accuracy: 0.9854 - val_loss: 0.1024 - val_sparse_categorical_accuracy: 0.9626 Epoch 326/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0512 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0946 - val_sparse_categorical_accuracy: 0.9598 Epoch 327/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0509 - sparse_categorical_accuracy: 0.9872 - val_loss: 0.0988 - val_sparse_categorical_accuracy: 0.9626 Epoch 328/500 90/90 [==============================] - 0s 5ms/step - loss: 0.0427 - sparse_categorical_accuracy: 0.9889 - val_loss: 0.0913 - val_sparse_categorical_accuracy: 0.9639 Epoch 329/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0515 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0962 - val_sparse_categorical_accuracy: 0.9612 Epoch 330/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0477 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0917 - val_sparse_categorical_accuracy: 0.9598 Epoch 331/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0485 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0911 - val_sparse_categorical_accuracy: 0.9626 Epoch 332/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0479 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0999 - val_sparse_categorical_accuracy: 0.9612 Epoch 333/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0465 - sparse_categorical_accuracy: 0.9872 - val_loss: 0.0877 - val_sparse_categorical_accuracy: 0.9639 Epoch 334/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0500 - sparse_categorical_accuracy: 0.9833 - val_loss: 0.1073 - val_sparse_categorical_accuracy: 0.9626 Epoch 335/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0506 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0913 - val_sparse_categorical_accuracy: 0.9612 Epoch 336/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0473 - sparse_categorical_accuracy: 0.9872 - val_loss: 0.1075 - val_sparse_categorical_accuracy: 0.9639 Epoch 337/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0494 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0953 - val_sparse_categorical_accuracy: 0.9626 Epoch 338/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0510 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0904 - val_sparse_categorical_accuracy: 0.9639 Epoch 339/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0521 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0913 - val_sparse_categorical_accuracy: 0.9584 Epoch 340/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0512 - sparse_categorical_accuracy: 0.9833 - val_loss: 0.0908 - val_sparse_categorical_accuracy: 0.9626 Epoch 341/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0468 - sparse_categorical_accuracy: 0.9847 - val_loss: 0.0990 - val_sparse_categorical_accuracy: 0.9626 Epoch 342/500 90/90 [==============================] - 0s 5ms/step - loss: 0.0494 - sparse_categorical_accuracy: 0.9875 - 
val_loss: 0.0950 - val_sparse_categorical_accuracy: 0.9653 Epoch 343/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0518 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0937 - val_sparse_categorical_accuracy: 0.9598 Epoch 344/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0488 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0958 - val_sparse_categorical_accuracy: 0.9639 Epoch 345/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0523 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.1467 - val_sparse_categorical_accuracy: 0.9515 Epoch 346/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0482 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0917 - val_sparse_categorical_accuracy: 0.9667 Epoch 347/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0492 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.1134 - val_sparse_categorical_accuracy: 0.9626 Epoch 348/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0455 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0976 - val_sparse_categorical_accuracy: 0.9612 Epoch 349/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0462 - sparse_categorical_accuracy: 0.9896 - val_loss: 0.0898 - val_sparse_categorical_accuracy: 0.9667 Epoch 350/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0497 - sparse_categorical_accuracy: 0.9847 - val_loss: 0.0912 - val_sparse_categorical_accuracy: 0.9639 Epoch 351/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0462 - sparse_categorical_accuracy: 0.9889 - val_loss: 0.0932 - val_sparse_categorical_accuracy: 0.9626 Epoch 352/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0515 - sparse_categorical_accuracy: 0.9823 - val_loss: 0.0913 - val_sparse_categorical_accuracy: 0.9653 Epoch 353/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0455 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0945 - val_sparse_categorical_accuracy: 0.9612 Epoch 354/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0452 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0921 - val_sparse_categorical_accuracy: 0.9598 Epoch 355/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0430 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0903 - val_sparse_categorical_accuracy: 0.9626 Epoch 356/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0471 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.1045 - val_sparse_categorical_accuracy: 0.9626 Epoch 357/500 90/90 [==============================] - 0s 5ms/step - loss: 0.0508 - sparse_categorical_accuracy: 0.9847 - val_loss: 0.0949 - val_sparse_categorical_accuracy: 0.9653 Epoch 358/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0468 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0931 - val_sparse_categorical_accuracy: 0.9639 Epoch 359/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0466 - sparse_categorical_accuracy: 0.9851 - val_loss: 0.0913 - val_sparse_categorical_accuracy: 0.9612 Epoch 360/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0440 - sparse_categorical_accuracy: 0.9899 - val_loss: 0.0988 - val_sparse_categorical_accuracy: 0.9626 Epoch 361/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0448 - sparse_categorical_accuracy: 0.9875 - val_loss: 0.0975 - val_sparse_categorical_accuracy: 0.9667 Epoch 362/500 90/90 [==============================] - 
1s 6ms/step - loss: 0.0477 - sparse_categorical_accuracy: 0.9875 - val_loss: 0.0914 - val_sparse_categorical_accuracy: 0.9639 Epoch 363/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0493 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0906 - val_sparse_categorical_accuracy: 0.9626 Epoch 364/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0488 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0931 - val_sparse_categorical_accuracy: 0.9626 Epoch 365/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0491 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.0960 - val_sparse_categorical_accuracy: 0.9626 Epoch 366/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0477 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0891 - val_sparse_categorical_accuracy: 0.9612 Epoch 367/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0470 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.1026 - val_sparse_categorical_accuracy: 0.9626 Epoch 368/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0463 - sparse_categorical_accuracy: 0.9885 - val_loss: 0.0909 - val_sparse_categorical_accuracy: 0.9626 Epoch 369/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0459 - sparse_categorical_accuracy: 0.9865 - val_loss: 0.0909 - val_sparse_categorical_accuracy: 0.9639 Epoch 370/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0511 - sparse_categorical_accuracy: 0.9868 - val_loss: 0.1036 - val_sparse_categorical_accuracy: 0.9626 Epoch 371/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0479 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.0922 - val_sparse_categorical_accuracy: 0.9626 Epoch 372/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0516 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.0932 - val_sparse_categorical_accuracy: 0.9653 Epoch 373/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0451 - sparse_categorical_accuracy: 0.9858 - val_loss: 0.0928 - val_sparse_categorical_accuracy: 0.9639 Epoch 374/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0461 - sparse_categorical_accuracy: 0.9854 - val_loss: 0.0911 - val_sparse_categorical_accuracy: 0.9612 Epoch 375/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0494 - sparse_categorical_accuracy: 0.9833 - val_loss: 0.0895 - val_sparse_categorical_accuracy: 0.9639 Epoch 376/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0466 - sparse_categorical_accuracy: 0.9830 - val_loss: 0.0902 - val_sparse_categorical_accuracy: 0.9639 Epoch 377/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0465 - sparse_categorical_accuracy: 0.9844 - val_loss: 0.0908 - val_sparse_categorical_accuracy: 0.9681 Epoch 378/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0430 - sparse_categorical_accuracy: 0.9882 - val_loss: 0.0906 - val_sparse_categorical_accuracy: 0.9626 Epoch 379/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0524 - sparse_categorical_accuracy: 0.9837 - val_loss: 0.0910 - val_sparse_categorical_accuracy: 0.9598 Epoch 380/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0467 - sparse_categorical_accuracy: 0.9872 - val_loss: 0.0947 - val_sparse_categorical_accuracy: 0.9639 Epoch 381/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0464 - sparse_categorical_accuracy: 0.9885 - val_loss: 0.0922 - 
val_sparse_categorical_accuracy: 0.9653 Epoch 382/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0449 - sparse_categorical_accuracy: 0.9885 - val_loss: 0.0918 - val_sparse_categorical_accuracy: 0.9639 Epoch 383/500 90/90 [==============================] - 1s 6ms/step - loss: 0.0438 - sparse_categorical_accuracy: 0.9889 - val_loss: 0.0905 - val_sparse_categorical_accuracy: 0.9612 Epoch 00383: early stopping

Evaluate model on test data

model = keras.models.load_model(\"best_model.h5\") test_loss, test_acc = model.evaluate(x_test, y_test) print(\"Test accuracy\", test_acc) print(\"Test loss\", test_loss)

42/42 [==============================] - 0s 2ms/step - loss: 0.0936 - sparse_categorical_accuracy: 0.9682 Test accuracy 0.9681817889213562 Test loss 0.0935916006565094

Plot the model's training and validation accuracy

metric = \"sparse_categorical_accuracy\" plt.figure() plt.plot(history.history[metric]) plt.plot(history.history[\"val_\" + metric]) plt.title(\"model \" + metric) plt.ylabel(metric, fontsize=\"large\") plt.xlabel(\"epoch\", fontsize=\"large\") plt.legend([\"train\", \"val\"], loc=\"best\") plt.show() plt.close()

png

The training accuracy reaches almost 0.95 after 100 epochs. The validation accuracy shows, however, that the network still benefits from further training: after about 200 epochs, both the training and the validation accuracy reach almost 0.97. Beyond roughly the 200th epoch, if we keep training, the validation accuracy starts to decrease while the training accuracy keeps increasing: the model starts overfitting.

This notebook demonstrates how to do timeseries classification using a Transformer model.

Introduction

This is the Transformer architecture from Attention Is All You Need, applied to timeseries instead of natural language. This example requires TensorFlow 2.4 or higher.

Load the dataset

We are going to use the same dataset and preprocessing as the TimeSeries Classification from Scratch example.

import numpy as np def readucr(filename): data = np.loadtxt(filename, delimiter=\"\t\") y = data[:, 0] x = data[:, 1:] return x, y.astype(int) root_url = \"https://raw.githubusercontent.com/hfawaz/cd-diagram/master/FordA/\" x_train, y_train = readucr(root_url + \"FordA_TRAIN.tsv\") x_test, y_test = readucr(root_url + \"FordA_TEST.tsv\") x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1)) x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1)) n_classes = len(np.unique(y_train)) idx = np.random.permutation(len(x_train)) x_train = x_train[idx] y_train = y_train[idx] y_train[y_train == -1] = 0 y_test[y_test == -1] = 0

Build the model

Our model processes a tensor of shape (batch size, sequence length, features), where sequence length is the number of time steps and features is the number of input timeseries (channels). You can replace your classification RNN layers with this one: the inputs are fully compatible!

from tensorflow import keras from tensorflow.keras import layers

We include residual connections, layer normalization, and dropout. The resulting layer can be stacked multiple times. The projection layers are implemented through keras.layers.Conv1D.
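To make the compatibility claim concrete, here is a minimal, illustrative sketch (not part of the original example) of an RNN classifier that consumes the same (batch size, sequence length, features) tensors; the layer sizes and the helper name build_rnn_baseline are assumptions chosen purely for illustration.

# Hypothetical RNN baseline for comparison only; layer sizes are arbitrary.
from tensorflow import keras
from tensorflow.keras import layers

def build_rnn_baseline(input_shape, n_classes):
    inputs = keras.Input(shape=input_shape)             # e.g. (500, 1) for FordA
    x = layers.LSTM(64, return_sequences=True)(inputs)  # keeps the time axis
    x = layers.LSTM(64)(x)                              # collapses the time axis
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)

# rnn_baseline = build_rnn_baseline(x_train.shape[1:], n_classes)

The transformer_encoder block defined next consumes and returns tensors of exactly this shape, which is what makes the two approaches interchangeable.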
def transformer_encoder(inputs, head_size, num_heads, ff_dim, dropout=0): # Attention and Normalization x = layers.MultiHeadAttention( key_dim=head_size, num_heads=num_heads, dropout=dropout )(inputs, inputs) x = layers.Dropout(dropout)(x) x = layers.LayerNormalization(epsilon=1e-6)(x) res = x + inputs # Feed Forward Part x = layers.Conv1D(filters=ff_dim, kernel_size=1, activation=\"relu\")(res) x = layers.Dropout(dropout)(x) x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x) x = layers.LayerNormalization(epsilon=1e-6)(x) return x + res The main part of our model is now complete. We can stack multiple of those transformer_encoder blocks and we can also proceed to add the final Multi-Layer Perceptron classification head. Apart from a stack of Dense layers, we need to reduce the output tensor of the TransformerEncoder part of our model down to a vector of features for each data point in the current batch. A common way to achieve this is to use a pooling layer. For this example, a GlobalAveragePooling1D layer is sufficient. def build_model( input_shape, head_size, num_heads, ff_dim, num_transformer_blocks, mlp_units, dropout=0, mlp_dropout=0, ): inputs = keras.Input(shape=input_shape) x = inputs for _ in range(num_transformer_blocks): x = transformer_encoder(x, head_size, num_heads, ff_dim, dropout) x = layers.GlobalAveragePooling1D(data_format=\"channels_first\")(x) for dim in mlp_units: x = layers.Dense(dim, activation=\"relu\")(x) x = layers.Dropout(mlp_dropout)(x) outputs = layers.Dense(n_classes, activation=\"softmax\")(x) return keras.Model(inputs, outputs) Train and evaluate input_shape = x_train.shape[1:] model = build_model( input_shape, head_size=256, num_heads=4, ff_dim=4, num_transformer_blocks=4, mlp_units=[128], mlp_dropout=0.4, dropout=0.25, ) model.compile( loss=\"sparse_categorical_crossentropy\", optimizer=keras.optimizers.Adam(learning_rate=1e-4), metrics=[\"sparse_categorical_accuracy\"], ) model.summary() callbacks = [keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)] model.fit( x_train, y_train, validation_split=0.2, epochs=200, batch_size=64, callbacks=callbacks, ) model.evaluate(x_test, y_test, verbose=1) Model: \"model\" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 500, 1)] 0 __________________________________________________________________________________________________ layer_normalization (LayerNorma (None, 500, 1) 2 input_1[0][0] __________________________________________________________________________________________________ multi_head_attention (MultiHead (None, 500, 1) 7169 layer_normalization[0][0] layer_normalization[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 500, 1) 0 multi_head_attention[0][0] __________________________________________________________________________________________________ tf.__operators__.add (TFOpLambd (None, 500, 1) 0 dropout[0][0] input_1[0][0] __________________________________________________________________________________________________ layer_normalization_1 (LayerNor (None, 500, 1) 2 tf.__operators__.add[0][0] __________________________________________________________________________________________________ conv1d (Conv1D) (None, 500, 4) 8 layer_normalization_1[0][0] 
__________________________________________________________________________________________________ dropout_1 (Dropout) (None, 500, 4) 0 conv1d[0][0] __________________________________________________________________________________________________ conv1d_1 (Conv1D) (None, 500, 1) 5 dropout_1[0][0] __________________________________________________________________________________________________ tf.__operators__.add_1 (TFOpLam (None, 500, 1) 0 conv1d_1[0][0] tf.__operators__.add[0][0] __________________________________________________________________________________________________ layer_normalization_2 (LayerNor (None, 500, 1) 2 tf.__operators__.add_1[0][0] __________________________________________________________________________________________________ multi_head_attention_1 (MultiHe (None, 500, 1) 7169 layer_normalization_2[0][0] layer_normalization_2[0][0] __________________________________________________________________________________________________ dropout_2 (Dropout) (None, 500, 1) 0 multi_head_attention_1[0][0] __________________________________________________________________________________________________ tf.__operators__.add_2 (TFOpLam (None, 500, 1) 0 dropout_2[0][0] tf.__operators__.add_1[0][0] __________________________________________________________________________________________________ layer_normalization_3 (LayerNor (None, 500, 1) 2 tf.__operators__.add_2[0][0] __________________________________________________________________________________________________ conv1d_2 (Conv1D) (None, 500, 4) 8 layer_normalization_3[0][0] __________________________________________________________________________________________________ dropout_3 (Dropout) (None, 500, 4) 0 conv1d_2[0][0] __________________________________________________________________________________________________ conv1d_3 (Conv1D) (None, 500, 1) 5 dropout_3[0][0] __________________________________________________________________________________________________ tf.__operators__.add_3 (TFOpLam (None, 500, 1) 0 conv1d_3[0][0] tf.__operators__.add_2[0][0] __________________________________________________________________________________________________ layer_normalization_4 (LayerNor (None, 500, 1) 2 tf.__operators__.add_3[0][0] __________________________________________________________________________________________________ multi_head_attention_2 (MultiHe (None, 500, 1) 7169 layer_normalization_4[0][0] layer_normalization_4[0][0] __________________________________________________________________________________________________ dropout_4 (Dropout) (None, 500, 1) 0 multi_head_attention_2[0][0] __________________________________________________________________________________________________ tf.__operators__.add_4 (TFOpLam (None, 500, 1) 0 dropout_4[0][0] tf.__operators__.add_3[0][0] __________________________________________________________________________________________________ layer_normalization_5 (LayerNor (None, 500, 1) 2 tf.__operators__.add_4[0][0] __________________________________________________________________________________________________ conv1d_4 (Conv1D) (None, 500, 4) 8 layer_normalization_5[0][0] __________________________________________________________________________________________________ dropout_5 (Dropout) (None, 500, 4) 0 conv1d_4[0][0] __________________________________________________________________________________________________ conv1d_5 (Conv1D) (None, 500, 1) 5 dropout_5[0][0] __________________________________________________________________________________________________ 
tf.__operators__.add_5 (TFOpLam (None, 500, 1) 0 conv1d_5[0][0] tf.__operators__.add_4[0][0] __________________________________________________________________________________________________ layer_normalization_6 (LayerNor (None, 500, 1) 2 tf.__operators__.add_5[0][0] __________________________________________________________________________________________________ multi_head_attention_3 (MultiHe (None, 500, 1) 7169 layer_normalization_6[0][0] layer_normalization_6[0][0] __________________________________________________________________________________________________ dropout_6 (Dropout) (None, 500, 1) 0 multi_head_attention_3[0][0] __________________________________________________________________________________________________ tf.__operators__.add_6 (TFOpLam (None, 500, 1) 0 dropout_6[0][0] tf.__operators__.add_5[0][0] __________________________________________________________________________________________________ layer_normalization_7 (LayerNor (None, 500, 1) 2 tf.__operators__.add_6[0][0] __________________________________________________________________________________________________ conv1d_6 (Conv1D) (None, 500, 4) 8 layer_normalization_7[0][0] __________________________________________________________________________________________________ dropout_7 (Dropout) (None, 500, 4) 0 conv1d_6[0][0] __________________________________________________________________________________________________ conv1d_7 (Conv1D) (None, 500, 1) 5 dropout_7[0][0] __________________________________________________________________________________________________ tf.__operators__.add_7 (TFOpLam (None, 500, 1) 0 conv1d_7[0][0] tf.__operators__.add_6[0][0] __________________________________________________________________________________________________ global_average_pooling1d (Globa (None, 500) 0 tf.__operators__.add_7[0][0] __________________________________________________________________________________________________ dense (Dense) (None, 128) 64128 global_average_pooling1d[0][0] __________________________________________________________________________________________________ dropout_8 (Dropout) (None, 128) 0 dense[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 2) 258 dropout_8[0][0] ================================================================================================== Total params: 93,130 Trainable params: 93,130 Non-trainable params: 0 __________________________________________________________________________________________________ Epoch 1/200 45/45 [==============================] - 26s 499ms/step - loss: 1.0233 - sparse_categorical_accuracy: 0.5174 - val_loss: 0.7853 - val_sparse_categorical_accuracy: 0.5368 Epoch 2/200 45/45 [==============================] - 22s 499ms/step - loss: 0.9108 - sparse_categorical_accuracy: 0.5507 - val_loss: 0.7169 - val_sparse_categorical_accuracy: 0.5659 Epoch 3/200 45/45 [==============================] - 23s 509ms/step - loss: 0.8177 - sparse_categorical_accuracy: 0.5851 - val_loss: 0.6851 - val_sparse_categorical_accuracy: 0.5839 Epoch 4/200 45/45 [==============================] - 24s 532ms/step - loss: 0.7494 - sparse_categorical_accuracy: 0.6160 - val_loss: 0.6554 - val_sparse_categorical_accuracy: 0.6214 Epoch 5/200 45/45 [==============================] - 23s 520ms/step - loss: 0.7287 - sparse_categorical_accuracy: 0.6319 - val_loss: 0.6333 - val_sparse_categorical_accuracy: 0.6463 Epoch 6/200 45/45 [==============================] - 23s 509ms/step - loss: 0.7108 
- sparse_categorical_accuracy: 0.6424 - val_loss: 0.6185 - val_sparse_categorical_accuracy: 0.6546 Epoch 7/200 45/45 [==============================] - 23s 512ms/step - loss: 0.6624 - sparse_categorical_accuracy: 0.6667 - val_loss: 0.6023 - val_sparse_categorical_accuracy: 0.6657 Epoch 8/200 45/45 [==============================] - 23s 518ms/step - loss: 0.6392 - sparse_categorical_accuracy: 0.6774 - val_loss: 0.5935 - val_sparse_categorical_accuracy: 0.6796 Epoch 9/200 45/45 [==============================] - 23s 513ms/step - loss: 0.5978 - sparse_categorical_accuracy: 0.6955 - val_loss: 0.5778 - val_sparse_categorical_accuracy: 0.6907 Epoch 10/200 45/45 [==============================] - 23s 511ms/step - loss: 0.5909 - sparse_categorical_accuracy: 0.6948 - val_loss: 0.5687 - val_sparse_categorical_accuracy: 0.6935 Epoch 11/200 45/45 [==============================] - 23s 513ms/step - loss: 0.5785 - sparse_categorical_accuracy: 0.7021 - val_loss: 0.5628 - val_sparse_categorical_accuracy: 0.6990 Epoch 12/200 45/45 [==============================] - 23s 514ms/step - loss: 0.5547 - sparse_categorical_accuracy: 0.7247 - val_loss: 0.5545 - val_sparse_categorical_accuracy: 0.7101 Epoch 13/200 45/45 [==============================] - 24s 535ms/step - loss: 0.5705 - sparse_categorical_accuracy: 0.7240 - val_loss: 0.5461 - val_sparse_categorical_accuracy: 0.7240 Epoch 14/200 45/45 [==============================] - 23s 517ms/step - loss: 0.5538 - sparse_categorical_accuracy: 0.7250 - val_loss: 0.5403 - val_sparse_categorical_accuracy: 0.7212 Epoch 15/200 45/45 [==============================] - 23s 515ms/step - loss: 0.5144 - sparse_categorical_accuracy: 0.7500 - val_loss: 0.5318 - val_sparse_categorical_accuracy: 0.7295 Epoch 16/200 45/45 [==============================] - 23s 512ms/step - loss: 0.5200 - sparse_categorical_accuracy: 0.7521 - val_loss: 0.5286 - val_sparse_categorical_accuracy: 0.7379 Epoch 17/200 45/45 [==============================] - 23s 515ms/step - loss: 0.4910 - sparse_categorical_accuracy: 0.7590 - val_loss: 0.5229 - val_sparse_categorical_accuracy: 0.7393 Epoch 18/200 45/45 [==============================] - 23s 514ms/step - loss: 0.5013 - sparse_categorical_accuracy: 0.7427 - val_loss: 0.5157 - val_sparse_categorical_accuracy: 0.7462 Epoch 19/200 45/45 [==============================] - 23s 511ms/step - loss: 0.4883 - sparse_categorical_accuracy: 0.7712 - val_loss: 0.5123 - val_sparse_categorical_accuracy: 0.7490 Epoch 20/200 45/45 [==============================] - 23s 514ms/step - loss: 0.4935 - sparse_categorical_accuracy: 0.7667 - val_loss: 0.5032 - val_sparse_categorical_accuracy: 0.7545 Epoch 21/200 45/45 [==============================] - 23s 514ms/step - loss: 0.4551 - sparse_categorical_accuracy: 0.7799 - val_loss: 0.4978 - val_sparse_categorical_accuracy: 0.7573 Epoch 22/200 45/45 [==============================] - 23s 516ms/step - loss: 0.4477 - sparse_categorical_accuracy: 0.7948 - val_loss: 0.4941 - val_sparse_categorical_accuracy: 0.7531 Epoch 23/200 45/45 [==============================] - 23s 518ms/step - loss: 0.4549 - sparse_categorical_accuracy: 0.7858 - val_loss: 0.4893 - val_sparse_categorical_accuracy: 0.7656 Epoch 24/200 45/45 [==============================] - 23s 516ms/step - loss: 0.4426 - sparse_categorical_accuracy: 0.7948 - val_loss: 0.4842 - val_sparse_categorical_accuracy: 0.7712 Epoch 25/200 45/45 [==============================] - 23s 520ms/step - loss: 0.4360 - sparse_categorical_accuracy: 0.8035 - val_loss: 0.4798 - 
val_sparse_categorical_accuracy: 0.7809 Epoch 26/200 45/45 [==============================] - 23s 515ms/step - loss: 0.4316 - sparse_categorical_accuracy: 0.8035 - val_loss: 0.4715 - val_sparse_categorical_accuracy: 0.7809 Epoch 27/200 45/45 [==============================] - 23s 518ms/step - loss: 0.4084 - sparse_categorical_accuracy: 0.8146 - val_loss: 0.4676 - val_sparse_categorical_accuracy: 0.7878 Epoch 28/200 45/45 [==============================] - 23s 515ms/step - loss: 0.3998 - sparse_categorical_accuracy: 0.8240 - val_loss: 0.4667 - val_sparse_categorical_accuracy: 0.7933 Epoch 29/200 45/45 [==============================] - 23s 514ms/step - loss: 0.3993 - sparse_categorical_accuracy: 0.8198 - val_loss: 0.4603 - val_sparse_categorical_accuracy: 0.7892 Epoch 30/200 45/45 [==============================] - 23s 515ms/step - loss: 0.4031 - sparse_categorical_accuracy: 0.8243 - val_loss: 0.4562 - val_sparse_categorical_accuracy: 0.7920 Epoch 31/200 45/45 [==============================] - 23s 511ms/step - loss: 0.3891 - sparse_categorical_accuracy: 0.8184 - val_loss: 0.4528 - val_sparse_categorical_accuracy: 0.7920 Epoch 32/200 45/45 [==============================] - 23s 516ms/step - loss: 0.3922 - sparse_categorical_accuracy: 0.8292 - val_loss: 0.4485 - val_sparse_categorical_accuracy: 0.7892 Epoch 33/200 45/45 [==============================] - 23s 516ms/step - loss: 0.3802 - sparse_categorical_accuracy: 0.8309 - val_loss: 0.4463 - val_sparse_categorical_accuracy: 0.8003 Epoch 34/200 45/45 [==============================] - 23s 514ms/step - loss: 0.3711 - sparse_categorical_accuracy: 0.8372 - val_loss: 0.4427 - val_sparse_categorical_accuracy: 0.7975 Epoch 35/200 45/45 [==============================] - 23s 512ms/step - loss: 0.3744 - sparse_categorical_accuracy: 0.8378 - val_loss: 0.4366 - val_sparse_categorical_accuracy: 0.8072 Epoch 36/200 45/45 [==============================] - 23s 511ms/step - loss: 0.3653 - sparse_categorical_accuracy: 0.8372 - val_loss: 0.4338 - val_sparse_categorical_accuracy: 0.8072 Epoch 37/200 45/45 [==============================] - 23s 512ms/step - loss: 0.3681 - sparse_categorical_accuracy: 0.8382 - val_loss: 0.4337 - val_sparse_categorical_accuracy: 0.8058 Epoch 38/200 45/45 [==============================] - 23s 512ms/step - loss: 0.3634 - sparse_categorical_accuracy: 0.8514 - val_loss: 0.4264 - val_sparse_categorical_accuracy: 0.8128 Epoch 39/200 45/45 [==============================] - 23s 512ms/step - loss: 0.3498 - sparse_categorical_accuracy: 0.8535 - val_loss: 0.4211 - val_sparse_categorical_accuracy: 0.8225 Epoch 40/200 45/45 [==============================] - 23s 514ms/step - loss: 0.3358 - sparse_categorical_accuracy: 0.8663 - val_loss: 0.4161 - val_sparse_categorical_accuracy: 0.8197 Epoch 41/200 45/45 [==============================] - 23s 512ms/step - loss: 0.3448 - sparse_categorical_accuracy: 0.8573 - val_loss: 0.4161 - val_sparse_categorical_accuracy: 0.8169 Epoch 42/200 45/45 [==============================] - 23s 512ms/step - loss: 0.3439 - sparse_categorical_accuracy: 0.8552 - val_loss: 0.4119 - val_sparse_categorical_accuracy: 0.8211 Epoch 43/200 45/45 [==============================] - 23s 510ms/step - loss: 0.3335 - sparse_categorical_accuracy: 0.8660 - val_loss: 0.4101 - val_sparse_categorical_accuracy: 0.8266 Epoch 44/200 45/45 [==============================] - 23s 510ms/step - loss: 0.3235 - sparse_categorical_accuracy: 0.8660 - val_loss: 0.4067 - val_sparse_categorical_accuracy: 0.8294 Epoch 45/200 45/45 
[==============================] - 23s 510ms/step - loss: 0.3273 - sparse_categorical_accuracy: 0.8656 - val_loss: 0.4033 - val_sparse_categorical_accuracy: 0.8350 Epoch 46/200 45/45 [==============================] - 23s 513ms/step - loss: 0.3277 - sparse_categorical_accuracy: 0.8608 - val_loss: 0.3994 - val_sparse_categorical_accuracy: 0.8336 Epoch 47/200 45/45 [==============================] - 23s 519ms/step - loss: 0.3136 - sparse_categorical_accuracy: 0.8708 - val_loss: 0.3945 - val_sparse_categorical_accuracy: 0.8363 Epoch 48/200 45/45 [==============================] - 23s 518ms/step - loss: 0.3122 - sparse_categorical_accuracy: 0.8764 - val_loss: 0.3925 - val_sparse_categorical_accuracy: 0.8350 Epoch 49/200 45/45 [==============================] - 23s 519ms/step - loss: 0.3035 - sparse_categorical_accuracy: 0.8826 - val_loss: 0.3906 - val_sparse_categorical_accuracy: 0.8308 Epoch 50/200 45/45 [==============================] - 23s 512ms/step - loss: 0.2994 - sparse_categorical_accuracy: 0.8823 - val_loss: 0.3888 - val_sparse_categorical_accuracy: 0.8377 Epoch 51/200 45/45 [==============================] - 23s 514ms/step - loss: 0.3023 - sparse_categorical_accuracy: 0.8781 - val_loss: 0.3862 - val_sparse_categorical_accuracy: 0.8391 Epoch 52/200 45/45 [==============================] - 23s 515ms/step - loss: 0.3012 - sparse_categorical_accuracy: 0.8833 - val_loss: 0.3854 - val_sparse_categorical_accuracy: 0.8350 Epoch 53/200 45/45 [==============================] - 23s 513ms/step - loss: 0.2890 - sparse_categorical_accuracy: 0.8837 - val_loss: 0.3837 - val_sparse_categorical_accuracy: 0.8363 Epoch 54/200 45/45 [==============================] - 23s 513ms/step - loss: 0.2931 - sparse_categorical_accuracy: 0.8858 - val_loss: 0.3809 - val_sparse_categorical_accuracy: 0.8433 Epoch 55/200 45/45 [==============================] - 23s 515ms/step - loss: 0.2867 - sparse_categorical_accuracy: 0.8885 - val_loss: 0.3784 - val_sparse_categorical_accuracy: 0.8447 Epoch 56/200 45/45 [==============================] - 23s 511ms/step - loss: 0.2731 - sparse_categorical_accuracy: 0.8986 - val_loss: 0.3756 - val_sparse_categorical_accuracy: 0.8488 Epoch 57/200 45/45 [==============================] - 23s 515ms/step - loss: 0.2754 - sparse_categorical_accuracy: 0.8955 - val_loss: 0.3759 - val_sparse_categorical_accuracy: 0.8474 Epoch 58/200 45/45 [==============================] - 23s 511ms/step - loss: 0.2775 - sparse_categorical_accuracy: 0.8976 - val_loss: 0.3704 - val_sparse_categorical_accuracy: 0.8474 Epoch 59/200 45/45 [==============================] - 23s 513ms/step - loss: 0.2770 - sparse_categorical_accuracy: 0.9000 - val_loss: 0.3698 - val_sparse_categorical_accuracy: 0.8558 Epoch 60/200 45/45 [==============================] - 23s 516ms/step - loss: 0.2688 - sparse_categorical_accuracy: 0.8965 - val_loss: 0.3697 - val_sparse_categorical_accuracy: 0.8502 Epoch 61/200 45/45 [==============================] - 23s 518ms/step - loss: 0.2716 - sparse_categorical_accuracy: 0.8972 - val_loss: 0.3710 - val_sparse_categorical_accuracy: 0.8405 Epoch 62/200 45/45 [==============================] - 23s 515ms/step - loss: 0.2635 - sparse_categorical_accuracy: 0.9087 - val_loss: 0.3656 - val_sparse_categorical_accuracy: 0.8488 Epoch 63/200 45/45 [==============================] - 23s 520ms/step - loss: 0.2596 - sparse_categorical_accuracy: 0.8979 - val_loss: 0.3654 - val_sparse_categorical_accuracy: 0.8488 Epoch 64/200 45/45 [==============================] - 23s 518ms/step - loss: 0.2586 - 
sparse_categorical_accuracy: 0.9062 - val_loss: 0.3634 - val_sparse_categorical_accuracy: 0.8530 Epoch 65/200 45/45 [==============================] - 23s 516ms/step - loss: 0.2491 - sparse_categorical_accuracy: 0.9139 - val_loss: 0.3591 - val_sparse_categorical_accuracy: 0.8530 Epoch 66/200 45/45 [==============================] - 23s 519ms/step - loss: 0.2600 - sparse_categorical_accuracy: 0.9017 - val_loss: 0.3621 - val_sparse_categorical_accuracy: 0.8516 Epoch 67/200 45/45 [==============================] - 23s 518ms/step - loss: 0.2465 - sparse_categorical_accuracy: 0.9156 - val_loss: 0.3608 - val_sparse_categorical_accuracy: 0.8488 Epoch 68/200 45/45 [==============================] - 23s 518ms/step - loss: 0.2502 - sparse_categorical_accuracy: 0.9101 - val_loss: 0.3557 - val_sparse_categorical_accuracy: 0.8627 Epoch 69/200 45/45 [==============================] - 23s 517ms/step - loss: 0.2418 - sparse_categorical_accuracy: 0.9104 - val_loss: 0.3561 - val_sparse_categorical_accuracy: 0.8502 Epoch 70/200 45/45 [==============================] - 23s 516ms/step - loss: 0.2463 - sparse_categorical_accuracy: 0.9049 - val_loss: 0.3554 - val_sparse_categorical_accuracy: 0.8613 Epoch 71/200 45/45 [==============================] - 23s 520ms/step - loss: 0.2372 - sparse_categorical_accuracy: 0.9177 - val_loss: 0.3548 - val_sparse_categorical_accuracy: 0.8627 Epoch 72/200 45/45 [==============================] - 23s 515ms/step - loss: 0.2365 - sparse_categorical_accuracy: 0.9118 - val_loss: 0.3528 - val_sparse_categorical_accuracy: 0.8655 Epoch 73/200 45/45 [==============================] - 23s 518ms/step - loss: 0.2420 - sparse_categorical_accuracy: 0.9083 - val_loss: 0.3510 - val_sparse_categorical_accuracy: 0.8655 Epoch 74/200 45/45 [==============================] - 23s 518ms/step - loss: 0.2342 - sparse_categorical_accuracy: 0.9205 - val_loss: 0.3478 - val_sparse_categorical_accuracy: 0.8669 Epoch 75/200 45/45 [==============================] - 23s 515ms/step - loss: 0.2337 - sparse_categorical_accuracy: 0.9062 - val_loss: 0.3484 - val_sparse_categorical_accuracy: 0.8655 Epoch 76/200 45/45 [==============================] - 23s 516ms/step - loss: 0.2298 - sparse_categorical_accuracy: 0.9153 - val_loss: 0.3478 - val_sparse_categorical_accuracy: 0.8585 Epoch 77/200 45/45 [==============================] - 23s 516ms/step - loss: 0.2218 - sparse_categorical_accuracy: 0.9243 - val_loss: 0.3467 - val_sparse_categorical_accuracy: 0.8613 Epoch 78/200 45/45 [==============================] - 23s 518ms/step - loss: 0.2352 - sparse_categorical_accuracy: 0.9083 - val_loss: 0.3431 - val_sparse_categorical_accuracy: 0.8641 Epoch 79/200 45/45 [==============================] - 23s 515ms/step - loss: 0.2218 - sparse_categorical_accuracy: 0.9194 - val_loss: 0.3448 - val_sparse_categorical_accuracy: 0.8613 Epoch 80/200 45/45 [==============================] - 23s 515ms/step - loss: 0.2246 - sparse_categorical_accuracy: 0.9198 - val_loss: 0.3417 - val_sparse_categorical_accuracy: 0.8682 Epoch 81/200 45/45 [==============================] - 23s 518ms/step - loss: 0.2168 - sparse_categorical_accuracy: 0.9201 - val_loss: 0.3397 - val_sparse_categorical_accuracy: 0.8641 Epoch 82/200 45/45 [==============================] - 23s 517ms/step - loss: 0.2254 - sparse_categorical_accuracy: 0.9153 - val_loss: 0.3373 - val_sparse_categorical_accuracy: 0.8682 Epoch 83/200 45/45 [==============================] - 23s 518ms/step - loss: 0.2230 - sparse_categorical_accuracy: 0.9194 - val_loss: 0.3391 - 
val_sparse_categorical_accuracy: 0.8655 Epoch 84/200 45/45 [==============================] - 23s 518ms/step - loss: 0.2124 - sparse_categorical_accuracy: 0.9240 - val_loss: 0.3370 - val_sparse_categorical_accuracy: 0.8682 Epoch 85/200 45/45 [==============================] - 23s 515ms/step - loss: 0.2123 - sparse_categorical_accuracy: 0.9278 - val_loss: 0.3394 - val_sparse_categorical_accuracy: 0.8571 Epoch 86/200 45/45 [==============================] - 23s 520ms/step - loss: 0.2119 - sparse_categorical_accuracy: 0.9260 - val_loss: 0.3355 - val_sparse_categorical_accuracy: 0.8627 Epoch 87/200 45/45 [==============================] - 23s 517ms/step - loss: 0.2052 - sparse_categorical_accuracy: 0.9247 - val_loss: 0.3353 - val_sparse_categorical_accuracy: 0.8738 Epoch 88/200 45/45 [==============================] - 23s 518ms/step - loss: 0.2089 - sparse_categorical_accuracy: 0.9299 - val_loss: 0.3342 - val_sparse_categorical_accuracy: 0.8779 Epoch 89/200 45/45 [==============================] - 23s 519ms/step - loss: 0.2027 - sparse_categorical_accuracy: 0.9250 - val_loss: 0.3353 - val_sparse_categorical_accuracy: 0.8793 Epoch 90/200 45/45 [==============================] - 23s 517ms/step - loss: 0.2110 - sparse_categorical_accuracy: 0.9264 - val_loss: 0.3320 - val_sparse_categorical_accuracy: 0.8752 Epoch 91/200 45/45 [==============================] - 23s 516ms/step - loss: 0.1965 - sparse_categorical_accuracy: 0.9292 - val_loss: 0.3339 - val_sparse_categorical_accuracy: 0.8710 Epoch 92/200 45/45 [==============================] - 23s 520ms/step - loss: 0.2030 - sparse_categorical_accuracy: 0.9253 - val_loss: 0.3296 - val_sparse_categorical_accuracy: 0.8752 Epoch 93/200 45/45 [==============================] - 23s 519ms/step - loss: 0.1969 - sparse_categorical_accuracy: 0.9347 - val_loss: 0.3298 - val_sparse_categorical_accuracy: 0.8807 Epoch 94/200 45/45 [==============================] - 23s 518ms/step - loss: 0.1939 - sparse_categorical_accuracy: 0.9295 - val_loss: 0.3300 - val_sparse_categorical_accuracy: 0.8779 Epoch 95/200 45/45 [==============================] - 23s 517ms/step - loss: 0.1930 - sparse_categorical_accuracy: 0.9330 - val_loss: 0.3305 - val_sparse_categorical_accuracy: 0.8766 Epoch 96/200 45/45 [==============================] - 23s 518ms/step - loss: 0.1946 - sparse_categorical_accuracy: 0.9288 - val_loss: 0.3288 - val_sparse_categorical_accuracy: 0.8669 Epoch 97/200 45/45 [==============================] - 23s 518ms/step - loss: 0.1951 - sparse_categorical_accuracy: 0.9264 - val_loss: 0.3281 - val_sparse_categorical_accuracy: 0.8682 Epoch 98/200 45/45 [==============================] - 23s 516ms/step - loss: 0.1899 - sparse_categorical_accuracy: 0.9354 - val_loss: 0.3307 - val_sparse_categorical_accuracy: 0.8696 Epoch 99/200 45/45 [==============================] - 23s 519ms/step - loss: 0.1901 - sparse_categorical_accuracy: 0.9250 - val_loss: 0.3307 - val_sparse_categorical_accuracy: 0.8710 Epoch 100/200 45/45 [==============================] - 23s 516ms/step - loss: 0.1902 - sparse_categorical_accuracy: 0.9319 - val_loss: 0.3259 - val_sparse_categorical_accuracy: 0.8696 Epoch 101/200 45/45 [==============================] - 23s 518ms/step - loss: 0.1868 - sparse_categorical_accuracy: 0.9358 - val_loss: 0.3262 - val_sparse_categorical_accuracy: 0.8724 Epoch 102/200 45/45 [==============================] - 23s 518ms/step - loss: 0.1779 - sparse_categorical_accuracy: 0.9431 - val_loss: 0.3250 - val_sparse_categorical_accuracy: 0.8710 Epoch 103/200 45/45 
[==============================] - 23s 520ms/step - loss: 0.1870 - sparse_categorical_accuracy: 0.9351 - val_loss: 0.3260 - val_sparse_categorical_accuracy: 0.8724 Epoch 104/200 45/45 [==============================] - 23s 521ms/step - loss: 0.1826 - sparse_categorical_accuracy: 0.9344 - val_loss: 0.3232 - val_sparse_categorical_accuracy: 0.8766 Epoch 105/200 45/45 [==============================] - 23s 519ms/step - loss: 0.1731 - sparse_categorical_accuracy: 0.9399 - val_loss: 0.3245 - val_sparse_categorical_accuracy: 0.8724 Epoch 106/200 45/45 [==============================] - 23s 518ms/step - loss: 0.1766 - sparse_categorical_accuracy: 0.9361 - val_loss: 0.3254 - val_sparse_categorical_accuracy: 0.8682 Epoch 107/200

Conclusions

In about 110-120 epochs (25s each on Colab), the model reaches a training accuracy of ~0.95, a validation accuracy of ~0.84 and a test accuracy of ~0.85, without hyperparameter tuning. And that is for a model with fewer than 100k parameters. Of course, parameter count and accuracy could be improved by a hyperparameter search and a more sophisticated learning rate schedule, or a different optimizer.

This notebook demonstrates how to do timeseries forecasting using an LSTM model.

Setup

This example requires TensorFlow 2.3 or higher.

import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras

Climate Data Time-Series

We will be using the Jena Climate dataset recorded by the Max Planck Institute for Biogeochemistry. The dataset consists of 14 features such as temperature, pressure and humidity, recorded once every 10 minutes.

Location: Weather Station, Max Planck Institute for Biogeochemistry in Jena, Germany

Time-frame Considered: Jan 10, 2009 - December 31, 2016

The table below shows the column names, their value formats, and their description.

Index Features Format Description
1 Date Time 01.01.2009 00:10:00 Date-time reference
2 p (mbar) 996.52 Atmospheric pressure (the pascal is the SI derived unit of pressure; meteorological reports typically state atmospheric pressure in millibars)
3 T (degC) -8.02 Temperature in Celsius
4 Tpot (K) 265.4 Temperature in Kelvin
5 Tdew (degC) -8.9 Temperature in Celsius relative to humidity. Dew Point is a measure of the absolute amount of water in the air; it is the temperature at which the air cannot hold all its moisture and water condenses.
6 rh (%) 93.3 Relative Humidity is a measure of how saturated the air is with water vapor; the %RH determines the amount of water contained within collection objects.
7 VPmax (mbar) 3.33 Saturation vapor pressure
8 VPact (mbar) 3.11 Vapor pressure
9 VPdef (mbar) 0.22 Vapor pressure deficit
10 sh (g/kg) 1.94 Specific humidity
11 H2OC (mmol/mol) 3.12 Water vapor concentration
12 rho (g/m**3) 1307.75 Air density (labelled Airtight in the code below)
13 wv (m/s) 1.03 Wind speed
14 max. wv (m/s) 1.75 Maximum wind speed
15 wd (deg) 152.3 Wind direction in degrees

from zipfile import ZipFile import os uri = \"https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip\" zip_path = keras.utils.get_file(origin=uri, fname=\"jena_climate_2009_2016.csv.zip\") zip_file = ZipFile(zip_path) zip_file.extractall() csv_path = \"jena_climate_2009_2016.csv\" df = pd.read_csv(csv_path)

Raw Data Visualization

To give us a sense of the data we are working with, each feature has been plotted below. This shows the distinct pattern of each feature over the time period from 2009 to 2016.
Raw Data Visualization To give us a sense of the data we are working with, each feature has been plotted below. This shows the distinct pattern of each feature over the time period from 2009 to 2016. It also shows where anomalies are present, which will be addressed during normalization.

titles = [
    "Pressure", "Temperature", "Temperature in Kelvin", "Temperature (dew point)",
    "Relative Humidity", "Saturation vapor pressure", "Vapor pressure",
    "Vapor pressure deficit", "Specific humidity", "Water vapor concentration",
    "Airtight", "Wind speed", "Maximum wind speed", "Wind direction in degrees",
]

feature_keys = [
    "p (mbar)", "T (degC)", "Tpot (K)", "Tdew (degC)", "rh (%)", "VPmax (mbar)",
    "VPact (mbar)", "VPdef (mbar)", "sh (g/kg)", "H2OC (mmol/mol)", "rho (g/m**3)",
    "wv (m/s)", "max. wv (m/s)", "wd (deg)",
]

colors = [
    "blue", "orange", "green", "red", "purple",
    "brown", "pink", "gray", "olive", "cyan",
]

date_time_key = "Date Time"


def show_raw_visualization(data):
    time_data = data[date_time_key]
    fig, axes = plt.subplots(
        nrows=7, ncols=2, figsize=(15, 20), dpi=80, facecolor="w", edgecolor="k"
    )
    for i in range(len(feature_keys)):
        key = feature_keys[i]
        c = colors[i % (len(colors))]
        t_data = data[key]
        t_data.index = time_data
        t_data.head()
        ax = t_data.plot(
            ax=axes[i // 2, i % 2],
            color=c,
            title="{} - {}".format(titles[i], key),
            rot=25,
        )
        ax.legend([titles[i]])
    plt.tight_layout()


show_raw_visualization(df)

png

This heat map shows the correlation between different features.

def show_heatmap(data):
    plt.matshow(data.corr())
    plt.xticks(range(data.shape[1]), data.columns, fontsize=14, rotation=90)
    plt.gca().xaxis.tick_bottom()
    plt.yticks(range(data.shape[1]), data.columns, fontsize=14)
    cb = plt.colorbar()
    cb.ax.tick_params(labelsize=14)
    plt.title("Feature Correlation Heatmap", fontsize=14)
    plt.show()


show_heatmap(df)

png

Data Preprocessing Here we are picking ~300,000 data points for training. Observations are recorded every 10 minutes, i.e. 6 times per hour. We will resample to one point per hour, since no drastic change is expected within 60 minutes. We do this via the sampling_rate argument of the timeseries_dataset_from_array utility. We are tracking data from the past 720 timestamps (720/6 = 120 hours). This data will be used to predict the temperature 72 timestamps (72/6 = 12 hours) into the future. Since every feature has values in a different range, we normalize each feature before training the neural network: we subtract the mean and divide by the standard deviation of each feature, with the statistics computed on the training split only, so that each feature has roughly zero mean and unit variance. 71.5% of the data will be used to train the model, i.e. 300,693 rows. split_fraction can be changed to alter this percentage. The model is shown data from the first 5 days, i.e. 720 observations that are sub-sampled to one per hour (120 points per window). The temperature 72 observations later (12 hours * 6 observations per hour) will be used as the label.

split_fraction = 0.715
train_split = int(split_fraction * int(df.shape[0]))
step = 6

past = 720
future = 72
learning_rate = 0.001
batch_size = 256
epochs = 10


def normalize(data, train_split):
    data_mean = data[:train_split].mean(axis=0)
    data_std = data[:train_split].std(axis=0)
    return (data - data_mean) / data_std

We can see from the correlation heatmap that a few parameters, such as Relative Humidity and Specific Humidity, are redundant. Hence we will use a selection of the features, not all of them.
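Before selecting the feature subset, here is a tiny, self-contained illustration (with a made-up array, not part of the original example) of what the normalize function above does: it is a z-score transform based on training-split statistics, not a min-max scaling to [0, 1].

import numpy as np  # only needed for this small illustration

toy = np.array([[0.0], [10.0], [20.0], [30.0]])
# Statistics come from the first two rows only (the "training" slice):
# mean = 5.0, std = 5.0, so the series becomes [-1, 1, 3, 5] -- clearly
# not confined to [0, 1].
print(normalize(toy, train_split=2).ravel())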
print( \"The selected parameters are:\", \", \".join([titles[i] for i in [0, 1, 5, 7, 8, 10, 11]]), ) selected_features = [feature_keys[i] for i in [0, 1, 5, 7, 8, 10, 11]] features = df[selected_features] features.index = df[date_time_key] features.head() features = normalize(features.values, train_split) features = pd.DataFrame(features) features.head() train_data = features.loc[0 : train_split - 1] val_data = features.loc[train_split:] The selected parameters are: Pressure, Temperature, Saturation vapor pressure, Vapor pressure deficit, Specific humidity, Airtight, Wind speed Training dataset The training dataset labels starts from the 792nd observation (720 + 72). start = past + future end = start + train_split x_train = train_data[[i for i in range(7)]].values y_train = features.iloc[start:end][[1]] sequence_length = int(past / step) The timeseries_dataset_from_array function takes in a sequence of data-points gathered at equal intervals, along with time series parameters such as length of the sequences/windows, spacing between two sequence/windows, etc., to produce batches of sub-timeseries inputs and targets sampled from the main timeseries. dataset_train = keras.preprocessing.timeseries_dataset_from_array( x_train, y_train, sequence_length=sequence_length, sampling_rate=step, batch_size=batch_size, ) Validation dataset The validation dataset must not contain the last 792 rows as we won't have label data for those records, hence 792 must be subtracted from the end of the data. The validation label dataset must start from 792 after train_split, hence we must add past + future (792) to label_start. x_end = len(val_data) - past - future label_start = train_split + past + future x_val = val_data.iloc[:x_end][[i for i in range(7)]].values y_val = features.iloc[label_start:][[1]] dataset_val = keras.preprocessing.timeseries_dataset_from_array( x_val, y_val, sequence_length=sequence_length, sampling_rate=step, batch_size=batch_size, ) for batch in dataset_train.take(1): inputs, targets = batch print(\"Input shape:\", inputs.numpy().shape) print(\"Target shape:\", targets.numpy().shape) Input shape: (256, 120, 7) Target shape: (256, 1) Training inputs = keras.layers.Input(shape=(inputs.shape[1], inputs.shape[2])) lstm_out = keras.layers.LSTM(32)(inputs) outputs = keras.layers.Dense(1)(lstm_out) model = keras.Model(inputs=inputs, outputs=outputs) model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate), loss=\"mse\") model.summary() Model: \"functional_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 120, 7)] 0 _________________________________________________________________ lstm (LSTM) (None, 32) 5120 _________________________________________________________________ dense (Dense) (None, 1) 33 ================================================================= Total params: 5,153 Trainable params: 5,153 Non-trainable params: 0 _________________________________________________________________ We'll use the ModelCheckpoint callback to regularly save checkpoints, and the EarlyStopping callback to interrupt training when the validation loss is not longer improving. 
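Before wiring up the callbacks below, it is worth double-checking the parameter count reported in the summary above. The small calculation that follows is just a sanity check and is not part of the original example.

# An LSTM layer with n units and d input features has 4 * n * (d + n + 1)
# weights: four gates, each with an input kernel, a recurrent kernel and a bias.
units, n_features = 32, 7
lstm_params = 4 * units * (n_features + units + 1)  # 4 * 32 * 40 = 5120
dense_params = units + 1                            # kernel (32) + bias (1) = 33
print(lstm_params + dense_params)                   # 5153, matching model.summary()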
path_checkpoint = \"model_checkpoint.h5\" es_callback = keras.callbacks.EarlyStopping(monitor=\"val_loss\", min_delta=0, patience=5) modelckpt_callback = keras.callbacks.ModelCheckpoint( monitor=\"val_loss\", filepath=path_checkpoint, verbose=1, save_weights_only=True, save_best_only=True, ) history = model.fit( dataset_train, epochs=epochs, validation_data=dataset_val, callbacks=[es_callback, modelckpt_callback], ) Epoch 1/10 1172/1172 [==============================] - ETA: 0s - loss: 0.2059 Epoch 00001: val_loss improved from inf to 0.16357, saving model to model_checkpoint.h5 1172/1172 [==============================] - 101s 86ms/step - loss: 0.2059 - val_loss: 0.1636 Epoch 2/10 1172/1172 [==============================] - ETA: 0s - loss: 0.1271 Epoch 00002: val_loss improved from 0.16357 to 0.13362, saving model to model_checkpoint.h5 1172/1172 [==============================] - 107s 92ms/step - loss: 0.1271 - val_loss: 0.1336 Epoch 3/10 1172/1172 [==============================] - ETA: 0s - loss: 0.1089 Epoch 00005: val_loss did not improve from 0.13362 1172/1172 [==============================] - 110s 94ms/step - loss: 0.1089 - val_loss: 0.1481 Epoch 6/10 271/1172 [=====>........................] - ETA: 1:12 - loss: 0.1117 We can visualize the loss with the function below. After one point, the loss stops decreasing. def visualize_loss(history, title): loss = history.history[\"loss\"] val_loss = history.history[\"val_loss\"] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, \"b\", label=\"Training loss\") plt.plot(epochs, val_loss, \"r\", label=\"Validation loss\") plt.title(title) plt.xlabel(\"Epochs\") plt.ylabel(\"Loss\") plt.legend() plt.show() visualize_loss(history, \"Training and Validation Loss\") png Prediction The trained model above is now able to make predictions for 5 sets of values from validation set. def show_plot(plot_data, delta, title): labels = [\"History\", \"True Future\", \"Model Prediction\"] marker = [\".-\", \"rx\", \"go\"] time_steps = list(range(-(plot_data[0].shape[0]), 0)) if delta: future = delta else: future = 0 plt.title(title) for i, val in enumerate(plot_data): if i: plt.plot(future, plot_data[i], marker[i], markersize=10, label=labels[i]) else: plt.plot(time_steps, plot_data[i].flatten(), marker[i], label=labels[i]) plt.legend() plt.xlim([time_steps[0], (future + 5) * 2]) plt.xlabel(\"Time-Step\") plt.show() return for x, y in dataset_val.take(5): show_plot( [x[0][:, 1].numpy(), y[0].numpy(), model.predict(x)[0]], 12, \"Single Step Prediction\", ) png png png png png Fashion MNIST dataset, an alternative to MNIST load_data function tf.keras.datasets.fashion_mnist.load_data() Loads the Fashion-MNIST dataset. This is a dataset of 60,000 28x28 grayscale images of 10 fashion categories, along with a test set of 10,000 images. This dataset can be used as a drop-in replacement for MNIST. The classes are: Label Description 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot Returns Tuple of NumPy arrays: (x_train, y_train), (x_test, y_test). x_train: uint8 NumPy array of grayscale image data with shapes (60000, 28, 28), containing the training data. y_train: uint8 NumPy array of labels (integers in range 0-9) with shape (60000,) for the training data. x_test: uint8 NumPy array of grayscale image data with shapes (10000, 28, 28), containing the test data. y_test: uint8 NumPy array of labels (integers in range 0-9) with shape (10000,) for the test data. 
Example (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data() assert x_train.shape == (60000, 28, 28) assert x_test.shape == (10000, 28, 28) assert y_train.shape == (60000,) assert y_test.shape == (10000,) License: The copyright for Fashion-MNIST is held by Zalando SE. Fashion-MNIST is licensed under the MIT license. MNIST digits classification dataset load_data function tf.keras.datasets.mnist.load_data(path="mnist.npz") Loads the MNIST dataset. This is a dataset of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images. More info can be found at the MNIST homepage. Arguments path: path where to cache the dataset locally (relative to ~/.keras/datasets). Returns Tuple of NumPy arrays: (x_train, y_train), (x_test, y_test). x_train: uint8 NumPy array of grayscale image data with shapes (60000, 28, 28), containing the training data. Pixel values range from 0 to 255. y_train: uint8 NumPy array of digit labels (integers in range 0-9) with shape (60000,) for the training data. x_test: uint8 NumPy array of grayscale image data with shapes (10000, 28, 28), containing the test data. Pixel values range from 0 to 255. y_test: uint8 NumPy array of digit labels (integers in range 0-9) with shape (10000,) for the test data. Example (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() assert x_train.shape == (60000, 28, 28) assert x_test.shape == (10000, 28, 28) assert y_train.shape == (60000,) assert y_test.shape == (10000,) License: Yann LeCun and Corinna Cortes hold the copyright of MNIST dataset, which is a derivative work from original NIST datasets. MNIST dataset is made available under the terms of the Creative Commons Attribution-Share Alike 3.0 license. Reuters newswire classification dataset load_data function tf.keras.datasets.reuters.load_data( path="reuters.npz", num_words=None, skip_top=0, maxlen=None, test_split=0.2, seed=113, start_char=1, oov_char=2, index_from=3, **kwargs ) Loads the Reuters newswire classification dataset. This is a dataset of 11,228 newswires from Reuters, labeled over 46 topics. This was originally generated by parsing and preprocessing the classic Reuters-21578 dataset, but the preprocessing code is no longer packaged with Keras. See this github discussion for more info. Each newswire is encoded as a list of word indexes (integers). For convenience, words are indexed by overall frequency in the dataset, so that for instance the integer "3" encodes the 3rd most frequent word in the data. This allows for quick filtering operations such as: "only consider the top 10,000 most common words, but eliminate the top 20 most common words". As a convention, "0" does not stand for a specific word, but instead is used to encode any unknown word. Arguments path: where to cache the data (relative to ~/.keras/dataset). num_words: integer or None. Words are ranked by how often they occur (in the training set) and only the num_words most frequent words are kept. Any less frequent word will appear as oov_char value in the sequence data. If None, all words are kept. Defaults to None, so all words are kept. skip_top: skip the top N most frequently occurring words (which may not be informative). These words will appear as oov_char value in the dataset. Defaults to 0, so no words are skipped. maxlen: int or None. Maximum sequence length. Any longer sequence will be truncated. Defaults to None, which means no truncation. test_split: Float between 0 and 1. Fraction of the dataset to be used as test data. 
Defaults to 0.2, meaning 20% of the dataset is used as test data. seed: int. Seed for reproducible data shuffling. start_char: int. The start of a sequence will be marked with this character. Defaults to 1 because 0 is usually the padding character. oov_char: int. The out-of-vocabulary character. Words that were cut out because of the num_words or skip_top limits will be replaced with this character. index_from: int. Index actual words with this index and higher. **kwargs: Used for backwards compatibility. Returns Tuple of Numpy arrays: (x_train, y_train), (x_test, y_test). x_train, x_test: lists of sequences, which are lists of indexes (integers). If the num_words argument was specified, the maximum possible index value is num_words - 1. If the maxlen argument was specified, the largest possible sequence length is maxlen. y_train, y_test: lists of integer topic labels (integers in range 0-45). Note: The 'out of vocabulary' character is only used for words that were present in the training set but are not included because they're not making the num_words cut here. Words that were not seen in the training set but are in the test set have simply been skipped.

get_word_index function tf.keras.datasets.reuters.get_word_index(path="reuters_word_index.json") Retrieves a dict mapping words to their index in the Reuters dataset. Arguments path: where to cache the data (relative to ~/.keras/dataset). Returns The word index dictionary. Keys are word strings, values are their index.

Boston Housing price regression dataset load_data function tf.keras.datasets.boston_housing.load_data( path="boston_housing.npz", test_split=0.2, seed=113 ) Loads the Boston Housing dataset. This is a dataset taken from the StatLib library which is maintained at Carnegie Mellon University. Samples contain 13 attributes of houses at different locations around the Boston suburbs in the late 1970s. Targets are the median values of the houses at a location (in k$). The attributes themselves are defined in the StatLib website. Arguments path: path where to cache the dataset locally (relative to ~/.keras/datasets). test_split: fraction of the data to reserve as test set. seed: Random seed for shuffling the data before computing the test split. Returns Tuple of Numpy arrays: (x_train, y_train), (x_test, y_test). x_train, x_test: numpy arrays with shape (num_samples, 13) containing either the training samples (for x_train) or the test samples (for x_test). y_train, y_test: numpy arrays of shape (num_samples,) containing the target scalars. The targets are float scalars typically between 10 and 50 that represent the home prices in k$.

CIFAR10 small images classification dataset load_data function tf.keras.datasets.cifar10.load_data() Loads the CIFAR10 dataset. This is a dataset of 50,000 32x32 color training images and 10,000 test images, labeled over 10 categories. See more info at the CIFAR homepage. The classes are: Label Description 0 airplane 1 automobile 2 bird 3 cat 4 deer 5 dog 6 frog 7 horse 8 ship 9 truck Returns Tuple of NumPy arrays: (x_train, y_train), (x_test, y_test). x_train: uint8 NumPy array of RGB image data with shape (50000, 32, 32, 3), containing the training data. Pixel values range from 0 to 255. y_train: uint8 NumPy array of labels (integers in range 0-9) with shape (50000, 1) for the training data. x_test: uint8 NumPy array of RGB image data with shape (10000, 32, 32, 3), containing the test data. Pixel values range from 0 to 255.
y_test: uint8 NumPy array of labels (integers in range 0-9) with shape (10000, 1) for the test data. Example (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() assert x_train.shape == (50000, 32, 32, 3) assert x_test.shape == (10000, 32, 32, 3) assert y_train.shape == (50000, 1) assert y_test.shape == (10000, 1)IMDB movie review sentiment classification dataset load_data function tf.keras.datasets.imdb.load_data( path="imdb.npz", num_words=None, skip_top=0, maxlen=None, seed=113, start_char=1, oov_char=2, index_from=3, **kwargs ) Loads the IMDB dataset. This is a dataset of 25,000 movies reviews from IMDB, labeled by sentiment (positive/negative). Reviews have been preprocessed, and each review is encoded as a list of word indexes (integers). For convenience, words are indexed by overall frequency in the dataset, so that for instance the integer "3" encodes the 3rd most frequent word in the data. This allows for quick filtering operations such as: "only consider the top 10,000 most common words, but eliminate the top 20 most common words". As a convention, "0" does not stand for a specific word, but instead is used to encode any unknown word. Arguments path: where to cache the data (relative to ~/.keras/dataset). num_words: integer or None. Words are ranked by how often they occur (in the training set) and only the num_words most frequent words are kept. Any less frequent word will appear as oov_char value in the sequence data. If None, all words are kept. Defaults to None, so all words are kept. skip_top: skip the top N most frequently occurring words (which may not be informative). These words will appear as oov_char value in the dataset. Defaults to 0, so no words are skipped. maxlen: int or None. Maximum sequence length. Any longer sequence will be truncated. Defaults to None, which means no truncation. seed: int. Seed for reproducible data shuffling. start_char: int. The start of a sequence will be marked with this character. Defaults to 1 because 0 is usually the padding character. oov_char: int. The out-of-vocabulary character. Words that were cut out because of the num_words or skip_top limits will be replaced with this character. index_from: int. Index actual words with this index and higher. **kwargs: Used for backwards compatibility. Returns Tuple of Numpy arrays: (x_train, y_train), (x_test, y_test). x_train, x_test: lists of sequences, which are lists of indexes (integers). If the num_words argument was specific, the maximum possible index value is num_words - 1. If the maxlen argument was specified, the largest possible sequence length is maxlen. y_train, y_test: lists of integer labels (1 or 0). Raises ValueError: in case maxlen is so low that no input sequence could be kept. Note that the 'out of vocabulary' character is only used for words that were present in the training set but are not included because they're not making the num_words cut here. Words that were not seen in the training set but are in the test set have simply been skipped. get_word_index function tf.keras.datasets.imdb.get_word_index(path="imdb_word_index.json") Retrieves a dict mapping words to their index in the IMDB dataset. Arguments path: where to cache the data (relative to ~/.keras/dataset). Returns The word index dictionary. Keys are word strings, values are their index. Example # Retrieve the training sequences. 
(x_train, _), _ = keras.datasets.imdb.load_data() # Retrieve the word index file mapping words to indices word_index = keras.datasets.imdb.get_word_index() # Reverse the word index to obtain a dict mapping indices to words inverted_word_index = dict((i, word) for (word, i) in word_index.items()) # Decode the first sequence in the dataset decoded_sequence = " ".join(inverted_word_index[i] for i in x_train[0])CIFAR100 small images classification dataset load_data function tf.keras.datasets.cifar100.load_data(label_mode="fine") Loads the CIFAR100 dataset. This is a dataset of 50,000 32x32 color training images and 10,000 test images, labeled over 100 fine-grained classes that are grouped into 20 coarse-grained classes. See more info at the CIFAR homepage. Arguments label_mode: one of "fine", "coarse". If it is "fine" the category labels are the fine-grained labels, if it is "coarse" the output labels are the coarse-grained superclasses. Returns Tuple of NumPy arrays: (x_train, y_train), (x_test, y_test). x_train: uint8 NumPy array of grayscale image data with shapes (50000, 32, 32, 3), containing the training data. Pixel values range from 0 to 255. y_train: uint8 NumPy array of labels (integers in range 0-99) with shape (50000, 1) for the training data. x_test: uint8 NumPy array of grayscale image data with shapes (10000, 32, 32, 3), containing the test data. Pixel values range from 0 to 255. y_test: uint8 NumPy array of labels (integers in range 0-99) with shape (10000, 1) for the test data. Example (x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() assert x_train.shape == (50000, 32, 32, 3) assert x_test.shape == (10000, 32, 32, 3) assert y_train.shape == (50000, 1) assert y_test.shape == (10000, 1) ResNet and ResNetV2 ResNet50 function tf.keras.applications.ResNet50( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, **kwargs ) Instantiates the ResNet50 architecture. Reference Deep Residual Learning for Image Recognition (CVPR 2015) For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For ResNet, call tf.keras.applications.resnet.preprocess_input on your inputs before passing them to the model. resnet.preprocess_input will convert the input images from RGB to BGR, then will zero-center each color channel with respect to the ImageNet dataset, without scaling. Arguments include_top: whether to include the fully-connected layer at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. 
avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A Keras model instance. ResNet101 function tf.keras.applications.ResNet101( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, **kwargs ) Instantiates the ResNet101 architecture. Reference Deep Residual Learning for Image Recognition (CVPR 2015) For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For ResNet, call tf.keras.applications.resnet.preprocess_input on your inputs before passing them to the model. resnet.preprocess_input will convert the input images from RGB to BGR, then will zero-center each color channel with respect to the ImageNet dataset, without scaling. Arguments include_top: whether to include the fully-connected layer at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A Keras model instance. ResNet152 function tf.keras.applications.ResNet152( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, **kwargs ) Instantiates the ResNet152 architecture. Reference Deep Residual Learning for Image Recognition (CVPR 2015) For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. 
For ResNet, call tf.keras.applications.resnet.preprocess_input on your inputs before passing them to the model. resnet.preprocess_input will convert the input images from RGB to BGR, then will zero-center each color channel with respect to the ImageNet dataset, without scaling. Arguments include_top: whether to include the fully-connected layer at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A Keras model instance. ResNet50V2 function tf.keras.applications.ResNet50V2( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", ) Instantiates the ResNet50V2 architecture. Reference [Identity Mappings in Deep Residual Networks] (https://arxiv.org/abs/1603.05027) (CVPR 2016) For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For ResNetV2, call tf.keras.applications.resnet_v2.preprocess_input on your inputs before passing them to the model. resnet_v2.preprocess_input will scale input pixels between -1 and 1. Arguments include_top: whether to include the fully-connected layer at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. 
avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. ResNet101V2 function tf.keras.applications.ResNet101V2( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", ) Instantiates the ResNet101V2 architecture. Reference [Identity Mappings in Deep Residual Networks] (https://arxiv.org/abs/1603.05027) (CVPR 2016) For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For ResNetV2, call tf.keras.applications.resnet_v2.preprocess_input on your inputs before passing them to the model. resnet_v2.preprocess_input will scale input pixels between -1 and 1. Arguments include_top: whether to include the fully-connected layer at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. ResNet152V2 function tf.keras.applications.ResNet152V2( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", ) Instantiates the ResNet152V2 architecture. Reference [Identity Mappings in Deep Residual Networks] (https://arxiv.org/abs/1603.05027) (CVPR 2016) For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. 
Note: each Keras Application expects a specific kind of input preprocessing. For ResNetV2, call tf.keras.applications.resnet_v2.preprocess_input on your inputs before passing them to the model. resnet_v2.preprocess_input will scale input pixels between -1 and 1. Arguments include_top: whether to include the fully-connected layer at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. EfficientNet B0 to B7 EfficientNetB0 function tf.keras.applications.EfficientNetB0( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", **kwargs ) Instantiates the EfficientNetB0 architecture. Reference EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (ICML 2019) This function returns a Keras image classification model, optionally loaded with weights pre-trained on ImageNet. For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For EfficientNet, input preprocessing is included as part of the model (as a Rescaling layer), and thus tf.keras.applications.efficientnet.preprocess_input is actually a pass-through function. EfficientNet models expect their inputs to be float tensors of pixels with values in the [0-255] range. Arguments include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True. weights: One of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. Defaults to 'imagenet'. input_tensor: Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 inputs channels. pooling: Optional pooling mode for feature extraction when include_top is False. Defaults to None. - None means that the output of the model will be the 4D tensor output of the last convolutional layer. 
- avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. - max means that global max pooling will be applied. classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Defaults to 1000 (number of ImageNet classes). classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. Defaults to 'softmax'. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. EfficientNetB1 function tf.keras.applications.EfficientNetB1( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", **kwargs ) Instantiates the EfficientNetB1 architecture. Reference EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (ICML 2019) This function returns a Keras image classification model, optionally loaded with weights pre-trained on ImageNet. For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For EfficientNet, input preprocessing is included as part of the model (as a Rescaling layer), and thus tf.keras.applications.efficientnet.preprocess_input is actually a pass-through function. EfficientNet models expect their inputs to be float tensors of pixels with values in the [0-255] range. Arguments include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True. weights: One of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. Defaults to 'imagenet'. input_tensor: Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 inputs channels. pooling: Optional pooling mode for feature extraction when include_top is False. Defaults to None. - None means that the output of the model will be the 4D tensor output of the last convolutional layer. - avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. - max means that global max pooling will be applied. classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Defaults to 1000 (number of ImageNet classes). classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. Defaults to 'softmax'. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. EfficientNetB2 function tf.keras.applications.EfficientNetB2( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", **kwargs ) Instantiates the EfficientNetB2 architecture. 
Reference EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (ICML 2019) This function returns a Keras image classification model, optionally loaded with weights pre-trained on ImageNet. For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For EfficientNet, input preprocessing is included as part of the model (as a Rescaling layer), and thus tf.keras.applications.efficientnet.preprocess_input is actually a pass-through function. EfficientNet models expect their inputs to be float tensors of pixels with values in the [0-255] range. Arguments include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True. weights: One of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. Defaults to 'imagenet'. input_tensor: Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 inputs channels. pooling: Optional pooling mode for feature extraction when include_top is False. Defaults to None. - None means that the output of the model will be the 4D tensor output of the last convolutional layer. - avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. - max means that global max pooling will be applied. classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Defaults to 1000 (number of ImageNet classes). classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. Defaults to 'softmax'. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. EfficientNetB3 function tf.keras.applications.EfficientNetB3( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", **kwargs ) Instantiates the EfficientNetB3 architecture. Reference EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (ICML 2019) This function returns a Keras image classification model, optionally loaded with weights pre-trained on ImageNet. For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For EfficientNet, input preprocessing is included as part of the model (as a Rescaling layer), and thus tf.keras.applications.efficientnet.preprocess_input is actually a pass-through function. EfficientNet models expect their inputs to be float tensors of pixels with values in the [0-255] range. Arguments include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True. weights: One of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. Defaults to 'imagenet'. input_tensor: Optional Keras tensor (i.e. 
output of layers.Input()) to use as image input for the model. input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 inputs channels. pooling: Optional pooling mode for feature extraction when include_top is False. Defaults to None. - None means that the output of the model will be the 4D tensor output of the last convolutional layer. - avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. - max means that global max pooling will be applied. classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Defaults to 1000 (number of ImageNet classes). classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. Defaults to 'softmax'. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. EfficientNetB4 function tf.keras.applications.EfficientNetB4( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", **kwargs ) Instantiates the EfficientNetB4 architecture. Reference EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (ICML 2019) This function returns a Keras image classification model, optionally loaded with weights pre-trained on ImageNet. For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For EfficientNet, input preprocessing is included as part of the model (as a Rescaling layer), and thus tf.keras.applications.efficientnet.preprocess_input is actually a pass-through function. EfficientNet models expect their inputs to be float tensors of pixels with values in the [0-255] range. Arguments include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True. weights: One of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. Defaults to 'imagenet'. input_tensor: Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 inputs channels. pooling: Optional pooling mode for feature extraction when include_top is False. Defaults to None. - None means that the output of the model will be the 4D tensor output of the last convolutional layer. - avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. - max means that global max pooling will be applied. classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Defaults to 1000 (number of ImageNet classes). classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. Defaults to 'softmax'. When loading pretrained weights, classifier_activation can only be None or "softmax". 
Returns A keras.Model instance. EfficientNetB5 function tf.keras.applications.EfficientNetB5( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", **kwargs ) Instantiates the EfficientNetB5 architecture. Reference EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (ICML 2019) This function returns a Keras image classification model, optionally loaded with weights pre-trained on ImageNet. For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For EfficientNet, input preprocessing is included as part of the model (as a Rescaling layer), and thus tf.keras.applications.efficientnet.preprocess_input is actually a pass-through function. EfficientNet models expect their inputs to be float tensors of pixels with values in the [0-255] range. Arguments include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True. weights: One of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. Defaults to 'imagenet'. input_tensor: Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 inputs channels. pooling: Optional pooling mode for feature extraction when include_top is False. Defaults to None. - None means that the output of the model will be the 4D tensor output of the last convolutional layer. - avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. - max means that global max pooling will be applied. classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Defaults to 1000 (number of ImageNet classes). classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. Defaults to 'softmax'. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. EfficientNetB6 function tf.keras.applications.EfficientNetB6( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", **kwargs ) Instantiates the EfficientNetB6 architecture. Reference EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (ICML 2019) This function returns a Keras image classification model, optionally loaded with weights pre-trained on ImageNet. For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For EfficientNet, input preprocessing is included as part of the model (as a Rescaling layer), and thus tf.keras.applications.efficientnet.preprocess_input is actually a pass-through function. EfficientNet models expect their inputs to be float tensors of pixels with values in the [0-255] range. 
Arguments include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True. weights: One of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. Defaults to 'imagenet'. input_tensor: Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 inputs channels. pooling: Optional pooling mode for feature extraction when include_top is False. Defaults to None. - None means that the output of the model will be the 4D tensor output of the last convolutional layer. - avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. - max means that global max pooling will be applied. classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Defaults to 1000 (number of ImageNet classes). classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. Defaults to 'softmax'. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. EfficientNetB7 function tf.keras.applications.EfficientNetB7( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", **kwargs ) Instantiates the EfficientNetB7 architecture. Reference EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (ICML 2019) This function returns a Keras image classification model, optionally loaded with weights pre-trained on ImageNet. For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For EfficientNet, input preprocessing is included as part of the model (as a Rescaling layer), and thus tf.keras.applications.efficientnet.preprocess_input is actually a pass-through function. EfficientNet models expect their inputs to be float tensors of pixels with values in the [0-255] range. Arguments include_top: Whether to include the fully-connected layer at the top of the network. Defaults to True. weights: One of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. Defaults to 'imagenet'. input_tensor: Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 inputs channels. pooling: Optional pooling mode for feature extraction when include_top is False. Defaults to None. - None means that the output of the model will be the 4D tensor output of the last convolutional layer. - avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. - max means that global max pooling will be applied. classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Defaults to 1000 (number of ImageNet classes). 
classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. Defaults to 'softmax'. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. InceptionResNetV2 InceptionResNetV2 function tf.keras.applications.InceptionResNetV2( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", **kwargs ) Instantiates the Inception-ResNet v2 architecture. Reference Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning (AAAI 2017) This function returns a Keras image classification model, optionally loaded with weights pre-trained on ImageNet. For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For InceptionResNetV2, call tf.keras.applications.inception_resnet_v2.preprocess_input on your inputs before passing them to the model. inception_resnet_v2.preprocess_input will scale input pixels between -1 and 1. Arguments include_top: whether to include the fully-connected layer at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (299, 299, 3) (with 'channels_last' data format) or (3, 299, 299) (with 'channels_first' data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 75. E.g. (150, 150, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. 'avg' means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. 'max' means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". **kwargs: For backwards compatibility only. Returns A keras.Model instance. Xception Xception function tf.keras.applications.Xception( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", ) Instantiates the Xception architecture. Reference Xception: Deep Learning with Depthwise Separable Convolutions (CVPR 2017) For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. The default input image size for this model is 299x299. Note: each Keras Application expects a specific kind of input preprocessing. 
For Xception, call tf.keras.applications.xception.preprocess_input on your inputs before passing them to the model. xception.preprocess_input will scale input pixels between -1 and 1. Arguments include_top: whether to include the fully-connected layer at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (299, 299, 3). It should have exactly 3 inputs channels, and width and height should be no smaller than 71. E.g. (150, 150, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance.DenseNet DenseNet121 function tf.keras.applications.DenseNet121( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, ) Instantiates the Densenet121 architecture. Reference Densely Connected Convolutional Networks (CVPR 2017) Optionally loads weights pre-trained on ImageNet. Note that the data format convention used by the model is the one specified in your Keras config at ~/.keras/keras.json. Note: each Keras Application expects a specific kind of input preprocessing. For DenseNet, call tf.keras.applications.densenet.preprocess_input on your inputs before passing them to the model. Arguments include_top: whether to include the fully-connected layer at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Returns A Keras model instance. 
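As a hedged illustration of the workflow described above (the random image batch and the variable names are invented for this sketch; only the tf.keras.applications calls come from the API itself), DenseNet121 can be used as a frozen feature extractor by combining include_top=False with pooling="avg":

import numpy as np
import tensorflow as tf

# Made-up batch of 4 RGB images, 224x224, with pixel values in [0, 255].
images = np.random.uniform(0, 255, size=(4, 224, 224, 3)).astype("float32")

# DenseNet ships with its own preprocessing function (see the note above).
x = tf.keras.applications.densenet.preprocess_input(images)

# include_top=False plus pooling="avg" turns the network into a 2D feature extractor.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", pooling="avg"
)
base.trainable = False  # freeze the convolutional base for feature extraction

features = base(x, training=False)
print(features.shape)  # (4, 1024): one pooled feature vector per image

Freezing the base model in this way is the usual starting point for transfer learning; see the guide to transfer learning & fine-tuning referenced earlier.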
DenseNet169 function tf.keras.applications.DenseNet169( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, ) Instantiates the Densenet169 architecture. Reference Densely Connected Convolutional Networks (CVPR 2017) Optionally loads weights pre-trained on ImageNet. Note that the data format convention used by the model is the one specified in your Keras config at ~/.keras/keras.json. Note: each Keras Application expects a specific kind of input preprocessing. For DenseNet, call tf.keras.applications.densenet.preprocess_input on your inputs before passing them to the model. Arguments include_top: whether to include the fully-connected layer at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Returns A Keras model instance. DenseNet201 function tf.keras.applications.DenseNet201( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, ) Instantiates the Densenet201 architecture. Reference Densely Connected Convolutional Networks (CVPR 2017) Optionally loads weights pre-trained on ImageNet. Note that the data format convention used by the model is the one specified in your Keras config at ~/.keras/keras.json. Note: each Keras Application expects a specific kind of input preprocessing. For DenseNet, call tf.keras.applications.densenet.preprocess_input on your inputs before passing them to the model. Arguments include_top: whether to include the fully-connected layer at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. 
max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Returns A Keras model instance. VGG16 and VGG19 VGG16 function tf.keras.applications.VGG16( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", ) Instantiates the VGG16 model. Reference Very Deep Convolutional Networks for Large-Scale Image Recognition (ICLR 2015) For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. The default input size for this model is 224x224. Note: each Keras Application expects a specific kind of input preprocessing. For VGG16, call tf.keras.applications.vgg16.preprocess_input on your inputs before passing them to the model. vgg16.preprocess_input will convert the input images from RGB to BGR, then will zero-center each color channel with respect to the ImageNet dataset, without scaling. Arguments include_top: whether to include the 3 fully-connected layers at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with channels_last data format) or (3, 224, 224) (with channels_first data format). It should have exactly 3 input channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. - None means that the output of the model will be the 4D tensor output of the last convolutional block. - avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. - max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. VGG19 function tf.keras.applications.VGG19( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", ) Instantiates the VGG19 architecture. Reference Very Deep Convolutional Networks for Large-Scale Image Recognition (ICLR 2015) For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. The default input size for this model is 224x224. Note: each Keras Application expects a specific kind of input preprocessing. For VGG19, call tf.keras.applications.vgg19.preprocess_input on your inputs before passing them to the model. vgg19.preprocess_input will convert the input images from RGB to BGR, then will zero-center each color channel with respect to the ImageNet dataset, without scaling. 
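To make that preprocessing contract concrete, here is a minimal sketch (the random pixel batch is made up for the example) that applies vgg19.preprocess_input before inference and decodes the ImageNet predictions:

import numpy as np
import tensorflow as tf

# Made-up batch of 2 RGB images, 224x224, float pixels in the [0, 255] range.
images = np.random.uniform(0, 255, size=(2, 224, 224, 3)).astype("float32")

# Converts RGB to BGR and subtracts the per-channel ImageNet means, without scaling.
x = tf.keras.applications.vgg19.preprocess_input(images)

model = tf.keras.applications.VGG19(weights="imagenet")
preds = model.predict(x)  # shape (2, 1000): one softmax distribution per image

# Map the 1000-way output back to human-readable (class, description, score) tuples.
print(tf.keras.applications.vgg19.decode_predictions(preds, top=3))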
Arguments include_top: whether to include the 3 fully-connected layers at the top of the network. weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with channels_last data format) or (3, 224, 224) (with channels_first data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. pooling: Optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. InceptionV3 InceptionV3 function tf.keras.applications.InceptionV3( include_top=True, weights="imagenet", input_tensor=None, input_shape=None, pooling=None, classes=1000, classifier_activation="softmax", ) Instantiates the Inception v3 architecture. Reference Rethinking the Inception Architecture for Computer Vision (CVPR 2016) This function returns a Keras image classification model, optionally loaded with weights pre-trained on ImageNet. For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For InceptionV3, call tf.keras.applications.inception_v3.preprocess_input on your inputs before passing them to the model. inception_v3.preprocess_input will scale input pixels between -1 and 1. Arguments include_top: Boolean, whether to include the fully-connected layer at the top, as the last layer of the network. Default to True. weights: One of None (random initialization), imagenet (pre-training on ImageNet), or the path to the weights file to be loaded. Default to imagenet. input_tensor: Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_tensor is useful for sharing inputs between multiple different networks. Default to None. input_shape: Optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (299, 299, 3) (with channels_last data format) or (3, 299, 299) (with channels_first data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 75. E.g. (150, 150, 3) would be one valid value. input_shape will be ignored if the input_tensor is provided. pooling: Optional pooling mode for feature extraction when include_top is False. None (default) means that the output of the model will be the 4D tensor output of the last convolutional block. 
avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Default to 1000. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". Returns A keras.Model instance. MobileNet and MobileNetV2 MobileNet function tf.keras.applications.MobileNet( input_shape=None, alpha=1.0, depth_multiplier=1, dropout=0.001, include_top=True, weights="imagenet", input_tensor=None, pooling=None, classes=1000, classifier_activation="softmax", **kwargs ) Instantiates the MobileNet architecture. Reference MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications This function returns a Keras image classification model, optionally loaded with weights pre-trained on ImageNet. For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For MobileNet, call tf.keras.applications.mobilenet.preprocess_input on your inputs before passing them to the model. mobilenet.preprocess_input will scale input pixels between -1 and 1. Arguments input_shape: Optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with channels_last data format) or (3, 224, 224) (with channels_first data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. Default to None. input_shape will be ignored if the input_tensor is provided. alpha: Controls the width of the network. This is known as the width multiplier in the MobileNet paper. - If alpha < 1.0, proportionally decreases the number of filters in each layer. - If alpha > 1.0, proportionally increases the number of filters in each layer. - If alpha = 1, default number of filters from the paper are used at each layer. Default to 1.0. depth_multiplier: Depth multiplier for depthwise convolution. This is called the resolution multiplier in the MobileNet paper. Default to 1.0. dropout: Dropout rate. Default to 0.001. include_top: Boolean, whether to include the fully-connected layer at the top of the network. Default to True. weights: One of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. Default to imagenet. input_tensor: Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_tensor is useful for sharing inputs between multiple different networks. Default to None. pooling: Optional pooling mode for feature extraction when include_top is False. None (default) means that the output of the model will be the 4D tensor output of the last convolutional block. avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. 
classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Defaults to 1000. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". **kwargs: For backwards compatibility only. Returns A keras.Model instance. MobileNetV2 function tf.keras.applications.MobileNetV2( input_shape=None, alpha=1.0, include_top=True, weights="imagenet", input_tensor=None, pooling=None, classes=1000, classifier_activation="softmax", **kwargs ) Instantiates the MobileNetV2 architecture. MobileNetV2 is very similar to the original MobileNet, except that it uses inverted residual blocks with bottlenecking features. It has a drastically lower parameter count than the original MobileNet. MobileNets support any input size greater than 32 x 32, with larger image sizes offering better performance. Reference MobileNetV2: Inverted Residuals and Linear Bottlenecks (CVPR 2018) This function returns a Keras image classification model, optionally loaded with weights pre-trained on ImageNet. For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. Note: each Keras Application expects a specific kind of input preprocessing. For MobileNetV2, call tf.keras.applications.mobilenet_v2.preprocess_input on your inputs before passing them to the model. mobilenet_v2.preprocess_input will scale input pixels between -1 and 1. Arguments input_shape: Optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 inputs channels (224, 224, 3). You can also omit this option if you would like to infer input_shape from an input_tensor. If you choose to include both input_tensor and input_shape then input_shape will be used if they match, if the shapes do not match then we will throw an error. E.g. (160, 160, 3) would be one valid value. alpha: Float between 0 and 1. controls the width of the network. This is known as the width multiplier in the MobileNetV2 paper, but the name is kept for consistency with applications.MobileNetV1 model in Keras. If alpha < 1.0, proportionally decreases the number of filters in each layer. If alpha > 1.0, proportionally increases the number of filters in each layer. If alpha = 1, default number of filters from the paper are used at each layer. include_top: Boolean, whether to include the fully-connected layer at the top of the network. Defaults to True. weights: String, one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. input_tensor: Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. pooling: String, optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. max means that global max pooling will be applied. 
classes: Integer, optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. classifier_activation: A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax". **kwargs: For backwards compatibility only. Returns A keras.Model instance. NasNetLarge and NasNetMobile NASNetLarge function tf.keras.applications.NASNetLarge( input_shape=None, include_top=True, weights="imagenet", input_tensor=None, pooling=None, classes=1000, ) Instantiates a NASNet model in ImageNet mode. Reference Learning Transferable Architectures for Scalable Image Recognition (CVPR 2018) Optionally loads weights pre-trained on ImageNet. Note that the data format convention used by the model is the one specified in your Keras config at ~/.keras/keras.json. Note: each Keras Application expects a specific kind of input preprocessing. For NASNet, call tf.keras.applications.nasnet.preprocess_input on your inputs before passing them to the model. Arguments input_shape: Optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (331, 331, 3) for NASNetLarge. It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (224, 224, 3) would be one valid value. include_top: Whether to include the fully-connected layer at the top of the network. weights: None (random initialization) or imagenet (ImageNet weights) For loading imagenet weights, input_shape should be (331, 331, 3) input_tensor: Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. pooling: Optional pooling mode for feature extraction when include_top is False. - None means that the output of the model will be the 4D tensor output of the last convolutional layer. - avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. - max means that global max pooling will be applied. classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Returns A Keras model instance. Raises ValueError: in case of invalid argument for weights, or invalid input shape. RuntimeError: If attempting to run this model with a backend that does not support separable convolutions. NASNetMobile function tf.keras.applications.NASNetMobile( input_shape=None, include_top=True, weights="imagenet", input_tensor=None, pooling=None, classes=1000, ) Instantiates a Mobile NASNet model in ImageNet mode. Reference Learning Transferable Architectures for Scalable Image Recognition (CVPR 2018) Optionally loads weights pre-trained on ImageNet. Note that the data format convention used by the model is the one specified in your Keras config at ~/.keras/keras.json. Note: each Keras Application expects a specific kind of input preprocessing. For NASNet, call tf.keras.applications.nasnet.preprocess_input on your inputs before passing them to the model. Arguments input_shape: Optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) for NASNetMobile It should have exactly 3 inputs channels, and width and height should be no smaller than 32. E.g. (224, 224, 3) would be one valid value. 
include_top: Whether to include the fully-connected layer at the top of the network. weights: None (random initialization) or imagenet (ImageNet weights) For loading imagenet weights, input_shape should be (224, 224, 3) input_tensor: Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. pooling: Optional pooling mode for feature extraction when include_top is False. - None means that the output of the model will be the 4D tensor output of the last convolutional layer. - avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor. - max means that global max pooling will be applied. classes: Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Returns A Keras model instance. Raises ValueError: In case of invalid argument for weights, or invalid input shape. RuntimeError: If attempting to run this model with a backend that does not support separable convolutions.The base Layer class Layer class tf.keras.layers.Layer( trainable=True, name=None, dtype=None, dynamic=False, **kwargs ) This is the class from which all layers inherit. A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves computation, defined in the call() method, and a state (weight variables), defined either in the constructor __init__() or in the build() method. Users will just instantiate a layer and then treat it as a callable. Arguments trainable: Boolean, whether the layer's variables should be trainable. name: String name of the layer. dtype: The dtype of the layer's computations and weights. Can also be a tf.keras.mixed_precision.Policy, which allows the computation and weight dtype to differ. Default of None means to use tf.keras.mixed_precision.global_policy(), which is a float32 policy unless set to different value. dynamic: Set this to True if your layer should only be run eagerly, and should not be used to generate a static computation graph. This would be the case for a Tree-RNN or a recursive network, for example, or generally for any layer that manipulates tensors using Python control flow. If False, we assume that the layer can safely be used to generate a static computation graph. Attributes name: The name of the layer (string). dtype: The dtype of the layer's weights. variable_dtype: Alias of dtype. compute_dtype: The dtype of the layer's computations. Layers automatically cast inputs to this dtype which causes the computations and output to also be in this dtype. When mixed precision is used with a tf.keras.mixed_precision.Policy, this will be different than variable_dtype. dtype_policy: The layer's dtype policy. See the tf.keras.mixed_precision.Policy documentation for details. trainable_weights: List of variables to be included in backprop. non_trainable_weights: List of variables that should not be included in backprop. weights: The concatenation of the lists trainable_weights and non_trainable_weights (in this order). trainable: Whether the layer should be trained (boolean), i.e. whether its potentially-trainable weights should be returned as part of layer.trainable_weights. input_spec: Optional (list of) InputSpec object(s) specifying the constraints on inputs that can be accepted by the layer. 
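To make these attributes concrete, here is a small sketch (using a plain Dense layer, not anything specific to this page) that builds a layer by calling it once and then inspects a few of the attributes listed above:

import tensorflow as tf

layer = tf.keras.layers.Dense(3)
_ = layer(tf.ones((1, 4)))  # the first call builds the layer and creates its weights

print(layer.name)                        # e.g. "dense"
print(layer.dtype)                       # "float32" under the default dtype policy
print(len(layer.trainable_weights))      # 2: the kernel and the bias
print(len(layer.non_trainable_weights))  # 0 for a plain Dense layer
print(layer.trainable)                   # True by default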
We recommend that descendants of Layer implement the following methods: __init__(): Defines custom layer attributes, and creates layer state variables that do not depend on input shapes, using add_weight(). build(self, input_shape): This method can be used to create weights that depend on the shape(s) of the input(s), using add_weight(). __call__() will automatically build the layer (if it has not been built yet) by calling build(). call(self, inputs, *args, **kwargs): Called in __call__ after making sure build() has been called. call() performs the logic of applying the layer to the input tensors (which should be passed in as argument). Two reserved keyword arguments you can optionally use in call() are: - training (boolean, whether the call is in inference mode or training mode). See more details in the layer/model subclassing guide - mask (boolean tensor encoding masked timesteps in the input, used in RNN layers). See more details in the layer/model subclassing guide A typical signature for this method is call(self, inputs), and user could optionally add training and mask if the layer need them. *args and **kwargs is only useful for future extension when more input parameters are planned to be added. get_config(self): Returns a dictionary containing the configuration used to initialize this layer. If the keys differ from the arguments in __init__, then override from_config(self) as well. This method is used when saving the layer or a model that contains this layer. Examples Here's a basic example: a layer with two variables, w and b, that returns y = w . x + b. It shows how to implement build() and call(). Variables set as attributes of a layer are tracked as weights of the layers (in layer.weights). class SimpleDense(Layer): def __init__(self, units=32): super(SimpleDense, self).__init__() self.units = units def build(self, input_shape): # Create the state of the layer (weights) w_init = tf.random_normal_initializer() self.w = tf.Variable( initial_value=w_init(shape=(input_shape[-1], self.units), dtype='float32'), trainable=True) b_init = tf.zeros_initializer() self.b = tf.Variable( initial_value=b_init(shape=(self.units,), dtype='float32'), trainable=True) def call(self, inputs): # Defines the computation from inputs to outputs return tf.matmul(inputs, self.w) + self.b # Instantiates the layer. linear_layer = SimpleDense(4) # This will also call `build(input_shape)` and create the weights. y = linear_layer(tf.ones((2, 2))) assert len(linear_layer.weights) == 2 # These weights are trainable, so they're listed in `trainable_weights`: assert len(linear_layer.trainable_weights) == 2 Note that the method add_weight() offers a shortcut to create weights: class SimpleDense(Layer): def __init__(self, units=32): super(SimpleDense, self).__init__() self.units = units def build(self, input_shape): self.w = self.add_weight(shape=(input_shape[-1], self.units), initializer='random_normal', trainable=True) self.b = self.add_weight(shape=(self.units,), initializer='random_normal', trainable=True) def call(self, inputs): return tf.matmul(inputs, self.w) + self.b Besides trainable weights, updated via backpropagation during training, layers can also have non-trainable weights. These weights are meant to be updated manually during call(). Here's a example layer that computes the running sum of its inputs: class ComputeSum(Layer): def __init__(self, input_dim): super(ComputeSum, self).__init__() # Create a non-trainable weight. 
self.total = tf.Variable(initial_value=tf.zeros((input_dim,)), trainable=False) def call(self, inputs): self.total.assign_add(tf.reduce_sum(inputs, axis=0)) return self.total my_sum = ComputeSum(2) x = tf.ones((2, 2)) y = my_sum(x) print(y.numpy()) # [2. 2.] y = my_sum(x) print(y.numpy()) # [4. 4.] assert my_sum.weights == [my_sum.total] assert my_sum.non_trainable_weights == [my_sum.total] assert my_sum.trainable_weights == [] For more information about creating layers, see the guide Making new Layers and Models via subclassing weights property tf.keras.layers.Layer.weights Returns the list of all layer variables/weights. Returns A list of variables. trainable_weights property tf.keras.layers.Layer.trainable_weights List of all trainable weights tracked by this layer. Trainable weights are updated via gradient descent during training. Returns A list of trainable variables. non_trainable_weights property tf.keras.layers.Layer.non_trainable_weights List of all non-trainable weights tracked by this layer. Non-trainable weights are not updated during training. They are expected to be updated manually in call(). Returns A list of non-trainable variables. trainable property tf.keras.layers.Layer.trainable get_weights method Layer.get_weights() Returns the current weights of the layer, as NumPy arrays. The weights of a layer represent the state of the layer. This function returns both trainable and non-trainable weight values associated with this layer as a list of NumPy arrays, which can in turn be used to load state into similarly parameterized layers. For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer: >>> layer_a = tf.keras.layers.Dense(1, ... kernel_initializer=tf.constant_initializer(1.)) >>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]])) >>> layer_a.get_weights() [array([[1.], [1.], [1.]], dtype=float32), array([0.], dtype=float32)] >>> layer_b = tf.keras.layers.Dense(1, ... kernel_initializer=tf.constant_initializer(2.)) >>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]])) >>> layer_b.get_weights() [array([[2.], [2.], [2.]], dtype=float32), array([0.], dtype=float32)] >>> layer_b.set_weights(layer_a.get_weights()) >>> layer_b.get_weights() [array([[1.], [1.], [1.]], dtype=float32), array([0.], dtype=float32)] Returns Weights values as a list of NumPy arrays. set_weights method Layer.set_weights(weights) Sets the weights of the layer, from NumPy arrays. The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer's weights must be instantiated before calling this function, by calling the layer. For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer: >>> layer_a = tf.keras.layers.Dense(1, ... kernel_initializer=tf.constant_initializer(1.)) >>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]])) >>> layer_a.get_weights() [array([[1.], [1.], [1.]], dtype=float32), array([0.], dtype=float32)] >>> layer_b = tf.keras.layers.Dense(1, ... 
kernel_initializer=tf.constant_initializer(2.)) >>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]])) >>> layer_b.get_weights() [array([[2.], [2.], [2.]], dtype=float32), array([0.], dtype=float32)] >>> layer_b.set_weights(layer_a.get_weights()) >>> layer_b.get_weights() [array([[1.], [1.], [1.]], dtype=float32), array([0.], dtype=float32)] Arguments weights: a list of NumPy arrays. The number of arrays and their shape must match number of the dimensions of the weights of the layer (i.e. it should match the output of get_weights). Raises ValueError: If the provided weights list does not match the layer's specifications. get_config method Model.get_config() Returns the config of the layer. A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration. The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above). Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it. Returns Python dictionary. add_loss method Layer.add_loss(losses, **kwargs) Add loss tensor(s), potentially dependent on layer inputs. Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies. This method can be used inside a subclassed layer or model's call function, in which case losses should be a Tensor or list of Tensors. Example class MyLayer(tf.keras.layers.Layer): def call(self, inputs): self.add_loss(tf.abs(tf.reduce_mean(inputs))) return inputs This method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's Inputs. These losses become part of the model's topology and are tracked in get_config. Example inputs = tf.keras.Input(shape=(10,)) x = tf.keras.layers.Dense(10)(inputs) outputs = tf.keras.layers.Dense(1)(x) model = tf.keras.Model(inputs, outputs) # Activity regularization. model.add_loss(tf.abs(tf.reduce_mean(x))) If this is not the case for your loss (if, for example, your loss references a Variable of one of the model's layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model's topology since they can't be serialized. Example inputs = tf.keras.Input(shape=(10,)) d = tf.keras.layers.Dense(10) x = d(inputs) outputs = tf.keras.layers.Dense(1)(x) model = tf.keras.Model(inputs, outputs) # Weight regularization. model.add_loss(lambda: tf.reduce_mean(d.kernel)) Arguments losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor. **kwargs: Additional keyword arguments for backward compatibility. Accepted values: inputs - Deprecated, will be automatically inferred. add_metric method Layer.add_metric(value, name=None, **kwargs) Adds metric tensor to the layer. This method can be used inside the call() method of a subclassed layer or model. 
class MyMetricLayer(tf.keras.layers.Layer): def __init__(self): super(MyMetricLayer, self).__init__(name='my_metric_layer') self.mean = tf.keras.metrics.Mean(name='metric_1') def call(self, inputs): self.add_metric(self.mean(inputs)) self.add_metric(tf.reduce_sum(inputs), name='metric_2') return inputs This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model's Inputs. These metrics become part of the model's topology and are tracked when you save the model via save(). inputs = tf.keras.Input(shape=(10,)) x = tf.keras.layers.Dense(10)(inputs) outputs = tf.keras.layers.Dense(1)(x) model = tf.keras.Model(inputs, outputs) model.add_metric(math_ops.reduce_sum(x), name='metric_1') Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model's inputs. inputs = tf.keras.Input(shape=(10,)) x = tf.keras.layers.Dense(10)(inputs) outputs = tf.keras.layers.Dense(1)(x) model = tf.keras.Model(inputs, outputs) model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1') Arguments value: Metric tensor. name: String metric name. **kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean. losses property tf.keras.layers.Layer.losses List of losses added using the add_loss() API. Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables. Examples >>> class MyLayer(tf.keras.layers.Layer): ... def call(self, inputs): ... self.add_loss(tf.abs(tf.reduce_mean(inputs))) ... return inputs >>> l = MyLayer() >>> l(np.ones((10, 1))) >>> l.losses [1.0] >>> inputs = tf.keras.Input(shape=(10,)) >>> x = tf.keras.layers.Dense(10)(inputs) >>> outputs = tf.keras.layers.Dense(1)(x) >>> model = tf.keras.Model(inputs, outputs) >>> # Activity regularization. >>> len(model.losses) 0 >>> model.add_loss(tf.abs(tf.reduce_mean(x))) >>> len(model.losses) 1 >>> inputs = tf.keras.Input(shape=(10,)) >>> d = tf.keras.layers.Dense(10, kernel_initializer='ones') >>> x = d(inputs) >>> outputs = tf.keras.layers.Dense(1)(x) >>> model = tf.keras.Model(inputs, outputs) >>> # Weight regularization. >>> model.add_loss(lambda: tf.reduce_mean(d.kernel)) >>> model.losses [] Returns A list of tensors. metrics property tf.keras.layers.Layer.metrics List of metrics added using the add_metric() API. Example >>> input = tf.keras.layers.Input(shape=(3,)) >>> d = tf.keras.layers.Dense(2) >>> output = d(input) >>> d.add_metric(tf.reduce_max(output), name='max') >>> d.add_metric(tf.reduce_min(output), name='min') >>> [m.name for m in d.metrics] ['max', 'min'] Returns A list of Metric objects. dynamic property tf.keras.layers.Layer.dynamic Whether the layer is dynamic (eager-only); set in the constructor. 
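Since the dynamic flag described in the constructor arguments has no example of its own, here is a hedged sketch (the layer name and its logic are invented for illustration) of a layer that must be created with dynamic=True because its call() uses Python control flow on tensor values:

import tensorflow as tf

class PositiveMeanGate(tf.keras.layers.Layer):
    """Illustrative layer: passes the input through only when its mean is positive."""

    def __init__(self):
        # A Python `if` on a tensor value cannot be traced into a static graph,
        # so this layer declares itself dynamic (eager-only).
        super().__init__(dynamic=True)

    def call(self, inputs):
        if tf.reduce_mean(inputs) > 0:
            return inputs
        return tf.zeros_like(inputs)

layer = PositiveMeanGate()
print(layer.dynamic)                      # True
print(layer(tf.constant([[1.0, -0.5]])))  # mean is positive, so the input passes through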
Layer activation functions Usage of activations Activations can either be used through an Activation layer, or through the activation argument supported by all forward layers: model.add(layers.Dense(64, activation=activations.relu)) This is equivalent to: from tensorflow.keras import layers from tensorflow.keras import activations model.add(layers.Dense(64)) model.add(layers.Activation(activations.relu)) All built-in activations may also be passed via their string identifier: model.add(layers.Dense(64, activation='relu')) Available activations relu function tf.keras.activations.relu(x, alpha=0.0, max_value=None, threshold=0) Applies the rectified linear unit activation function. With default values, this returns the standard ReLU activation: max(x, 0), the element-wise maximum of 0 and the input tensor. Modifying default parameters allows you to use non-zero thresholds, change the max value of the activation, and to use a non-zero multiple of the input for values below the threshold. For example: >>> foo = tf.constant([-10, -5, 0.0, 5, 10], dtype = tf.float32) >>> tf.keras.activations.relu(foo).numpy() array([ 0., 0., 0., 5., 10.], dtype=float32) >>> tf.keras.activations.relu(foo, alpha=0.5).numpy() array([-5. , -2.5, 0. , 5. , 10. ], dtype=float32) >>> tf.keras.activations.relu(foo, max_value=5).numpy() array([0., 0., 0., 5., 5.], dtype=float32) >>> tf.keras.activations.relu(foo, threshold=5).numpy() array([-0., -0., 0., 0., 10.], dtype=float32) Arguments x: Input tensor or variable. alpha: A float that governs the slope for values lower than the threshold. max_value: A float that sets the saturation threshold (the largest value the function will return). threshold: A float giving the threshold value of the activation function below which values will be damped or set to zero. Returns A Tensor representing the input tensor, transformed by the relu activation function. Tensor will be of the same shape and dtype of input x. sigmoid function tf.keras.activations.sigmoid(x) Sigmoid activation function, sigmoid(x) = 1 / (1 + exp(-x)). Applies the sigmoid activation function. For small values (<-5), sigmoid returns a value close to zero, and for large values (>5) the result of the function gets close to 1. Sigmoid is equivalent to a 2-element Softmax, where the second element is assumed to be zero. The sigmoid function always returns a value between 0 and 1. For example: >>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32) >>> b = tf.keras.activations.sigmoid(a) >>> b.numpy() array([2.0611537e-09, 2.6894143e-01, 5.0000000e-01, 7.3105860e-01, 1.0000000e+00], dtype=float32) Arguments x: Input tensor. Returns Tensor with the sigmoid activation: 1 / (1 + exp(-x)). softmax function tf.keras.activations.softmax(x, axis=-1) Softmax converts a vector of values to a probability distribution. The elements of the output vector are in range (0, 1) and sum to 1. Each vector is handled independently. The axis argument sets which axis of the input the function is applied along. Softmax is often used as the activation for the last layer of a classification network because the result could be interpreted as a probability distribution. The softmax of each vector x is computed as exp(x) / tf.reduce_sum(exp(x)). The input values in are the log-odds of the resulting probability. Arguments x : Input tensor. axis: Integer, axis along which the softmax normalization is applied. Returns Tensor, output of softmax transformation (all values are non-negative and sum to 1). 
Examples Example 1: standalone usage >>> inputs = tf.random.normal(shape=(32, 10)) >>> outputs = tf.keras.activations.softmax(inputs) >>> tf.reduce_sum(outputs[0, :]) # Each sample in the batch now sums to 1 Example 2: usage in a Dense layer >>> layer = tf.keras.layers.Dense(32, activation=tf.keras.activations.softmax) softplus function tf.keras.activations.softplus(x) Softplus activation function, softplus(x) = log(exp(x) + 1). Example Usage: >>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32) >>> b = tf.keras.activations.softplus(a) >>> b.numpy() array([2.0611537e-09, 3.1326166e-01, 6.9314718e-01, 1.3132616e+00, 2.0000000e+01], dtype=float32) Arguments x: Input tensor. Returns The softplus activation: log(exp(x) + 1). softsign function tf.keras.activations.softsign(x) Softsign activation function, softsign(x) = x / (abs(x) + 1). Example Usage: >>> a = tf.constant([-1.0, 0.0, 1.0], dtype = tf.float32) >>> b = tf.keras.activations.softsign(a) >>> b.numpy() array([-0.5, 0. , 0.5], dtype=float32) Arguments x: Input tensor. Returns The softsign activation: x / (abs(x) + 1). tanh function tf.keras.activations.tanh(x) Hyperbolic tangent activation function. For example: >>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32) >>> b = tf.keras.activations.tanh(a) >>> b.numpy() array([-0.9950547, -0.7615942, 0., 0.7615942, 0.9950547], dtype=float32) Arguments x: Input tensor. Returns Tensor of same shape and dtype of input x, with tanh activation: tanh(x) = sinh(x)/cosh(x) = ((exp(x) - exp(-x))/(exp(x) + exp(-x))). selu function tf.keras.activations.selu(x) Scaled Exponential Linear Unit (SELU). The Scaled Exponential Linear Unit (SELU) activation function is defined as: if x > 0: return scale * x if x < 0: return scale * alpha * (exp(x) - 1) where alpha and scale are pre-defined constants (alpha=1.67326324 and scale=1.05070098). Basically, the SELU activation function multiplies scale (> 1) with the output of the tf.keras.activations.elu function to ensure a slope larger than one for positive inputs. The values of alpha and scale are chosen so that the mean and variance of the inputs are preserved between two consecutive layers as long as the weights are initialized correctly (see tf.keras.initializers.LecunNormal initializer) and the number of input units is "large enough" (see reference paper for more information). Example Usage: >>> num_classes = 10 # 10-class problem >>> model = tf.keras.Sequential() >>> model.add(tf.keras.layers.Dense(64, kernel_initializer='lecun_normal', ... activation='selu')) >>> model.add(tf.keras.layers.Dense(32, kernel_initializer='lecun_normal', ... activation='selu')) >>> model.add(tf.keras.layers.Dense(16, kernel_initializer='lecun_normal', ... activation='selu')) >>> model.add(tf.keras.layers.Dense(num_classes, activation='softmax')) Arguments x: A tensor or variable to compute the activation function for. Returns The scaled exponential unit activation: scale * elu(x, alpha). Notes: - To be used together with the tf.keras.initializers.LecunNormal initializer. - To be used together with the dropout variant tf.keras.layers.AlphaDropout (not regular dropout). References: - Klambauer et al., 2017 elu function tf.keras.activations.elu(x, alpha=1.0) Exponential Linear Unit. The exponential linear unit (ELU) with alpha > 0 is: x if x > 0 and alpha * (exp(x) - 1) if x < 0 The ELU hyperparameter alpha controls the value to which an ELU saturates for negative net inputs. ELUs diminish the vanishing gradient effect.
ELUs have negative values which pushes the mean of the activations closer to zero. Mean activations that are closer to zero enable faster learning as they bring the gradient closer to the natural gradient. ELUs saturate to a negative value when the argument gets smaller. Saturation means a small derivative which decreases the variation and the information that is propagated to the next layer. Example Usage: >>> import tensorflow as tf >>> model = tf.keras.Sequential() >>> model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='elu', ... input_shape=(28, 28, 1))) >>> model.add(tf.keras.layers.MaxPooling2D((2, 2))) >>> model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu')) >>> model.add(tf.keras.layers.MaxPooling2D((2, 2))) >>> model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu')) Arguments x: Input tensor. alpha: A scalar, slope of negative section. alpha controls the value to which an ELU saturates for negative net inputs. Returns The exponential linear unit (ELU) activation function: x if x > 0 and alpha * (exp(x) - 1) if x < 0. Reference: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) (Clevert et al, 2016) exponential function tf.keras.activations.exponential(x) Exponential activation function. For example: >>> a = tf.constant([-3.0,-1.0, 0.0,1.0,3.0], dtype = tf.float32) >>> b = tf.keras.activations.exponential(a) >>> b.numpy() array([0.04978707, 0.36787945, 1., 2.7182817 , 20.085537], dtype=float32) Arguments x: Input tensor. Returns Tensor with exponential activation: exp(x). Creating custom activations You can also use a TensorFlow callable as an activation (in this case it should take a tensor and return a tensor of the same shape and dtype): model.add(layers.Dense(64, activation=tf.nn.tanh)) About "advanced activation" layers Activations that are more complex than a simple TensorFlow function (eg. learnable activations, which maintain a state) are available as Advanced Activation layers, and can be found in the module tf.keras.layers.advanced_activations. These include PReLU and LeakyReLU. If you need a custom activation that requires a state, you should implement it as a custom layer. Note that you should not pass activation layers instances as the activation argument of a layer. They're meant to be used just like regular layers, e.g.: x = layers.Dense(10)(x) x = layers.LeakyReLU()(x) Layer weight initializers Usage of initializers Initializers define the way to set the initial random weights of Keras layers. The keyword arguments used for passing initializers to layers depends on the layer. Usually, it is simply kernel_initializer and bias_initializer: from tensorflow.keras import layers from tensorflow.keras import initializers layer = layers.Dense( units=64, kernel_initializer=initializers.RandomNormal(stddev=0.01), bias_initializer=initializers.Zeros() ) All built-in initializers can also be passed via their string identifier: layer = layers.Dense( units=64, kernel_initializer='random_normal', bias_initializer='zeros' ) Available initializers The following built-in initializers are available as part of the tf.keras.initializers module: RandomNormal class tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.05, seed=None) Initializer that generates tensors with a normal distribution. Also available via the shortcut function tf.keras.initializers.random_normal. Examples >>> # Standalone usage: >>> initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.) 
>>> values = initializer(shape=(2, 2)) >>> # Usage in a Keras layer: >>> initializer = tf.keras.initializers.RandomNormal(mean=0., stddev=1.) >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments mean: a python scalar or a scalar tensor. Mean of the random values to generate. stddev: a python scalar or a scalar tensor. Standard deviation of the random values to generate. seed: A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. RandomUniform class tf.keras.initializers.RandomUniform(minval=-0.05, maxval=0.05, seed=None) Initializer that generates tensors with a uniform distribution. Also available via the shortcut function tf.keras.initializers.random_uniform. Examples >>> # Standalone usage: >>> initializer = tf.keras.initializers.RandomUniform(minval=0., maxval=1.) >>> values = initializer(shape=(2, 2)) >>> # Usage in a Keras layer: >>> initializer = tf.keras.initializers.RandomUniform(minval=0., maxval=1.) >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments minval: A python scalar or a scalar tensor. Lower bound of the range of random values to generate (inclusive). maxval: A python scalar or a scalar tensor. Upper bound of the range of random values to generate (exclusive). seed: A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. TruncatedNormal class tf.keras.initializers.TruncatedNormal(mean=0.0, stddev=0.05, seed=None) Initializer that generates a truncated normal distribution. Also available via the shortcut function tf.keras.initializers.truncated_normal. The values generated are similar to values from a tf.keras.initializers.RandomNormal initializer except that values more than two standard deviations from the mean are discarded and re-drawn. Examples >>> # Standalone usage: >>> initializer = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.) >>> values = initializer(shape=(2, 2)) >>> # Usage in a Keras layer: >>> initializer = tf.keras.initializers.TruncatedNormal(mean=0., stddev=1.) >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments mean: a python scalar or a scalar tensor. Mean of the random values to generate. stddev: a python scalar or a scalar tensor. Standard deviation of the random values to generate before truncation. seed: A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. Zeros class tf.keras.initializers.Zeros() Initializer that generates tensors initialized to 0. Also available via the shortcut function tf.keras.initializers.zeros. Examples >>> # Standalone usage: >>> initializer = tf.keras.initializers.Zeros() >>> values = initializer(shape=(2, 2)) >>> # Usage in a Keras layer: >>> initializer = tf.keras.initializers.Zeros() >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Ones class tf.keras.initializers.Ones() Initializer that generates tensors initialized to 1. Also available via the shortcut function tf.keras.initializers.ones. Examples >>> # Standalone usage: >>> initializer = tf.keras.initializers.Ones() >>> values = initializer(shape=(2, 2)) >>> # Usage in a Keras layer: >>> initializer = tf.keras.initializers.Ones() >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) GlorotNormal class tf.keras.initializers.GlorotNormal(seed=None) The Glorot normal initializer, also called Xavier normal initializer. 
Also available via the shortcut function tf.keras.initializers.glorot_normal. Draws samples from a truncated normal distribution centered on 0 with stddev = sqrt(2 / (fan_in + fan_out)) where fan_in is the number of input units in the weight tensor and fan_out is the number of output units in the weight tensor. Examples >>> # Standalone usage: >>> initializer = tf.keras.initializers.GlorotNormal() >>> values = initializer(shape=(2, 2)) >>> # Usage in a Keras layer: >>> initializer = tf.keras.initializers.GlorotNormal() >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments seed: A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. References: Glorot et al., 2010 (pdf) GlorotUniform class tf.keras.initializers.GlorotUniform(seed=None) The Glorot uniform initializer, also called Xavier uniform initializer. Also available via the shortcut function tf.keras.initializers.glorot_uniform. Draws samples from a uniform distribution within [-limit, limit], where limit = sqrt(6 / (fan_in + fan_out)) (fan_in is the number of input units in the weight tensor and fan_out is the number of output units). Examples >>> # Standalone usage: >>> initializer = tf.keras.initializers.GlorotUniform() >>> values = initializer(shape=(2, 2)) >>> # Usage in a Keras layer: >>> initializer = tf.keras.initializers.GlorotUniform() >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments seed: A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. References: Glorot et al., 2010 (pdf) Identity class tf.keras.initializers.Identity(gain=1.0) Initializer that generates the identity matrix. Also available via the shortcut function tf.keras.initializers.identity. Only usable for generating 2D matrices. Examples >>> # Standalone usage: >>> initializer = tf.keras.initializers.Identity() >>> values = initializer(shape=(2, 2)) >>> # Usage in a Keras layer: >>> initializer = tf.keras.initializers.Identity() >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments gain: Multiplicative factor to apply to the identity matrix. Orthogonal class tf.keras.initializers.Orthogonal(gain=1.0, seed=None) Initializer that generates an orthogonal matrix. Also available via the shortcut function tf.keras.initializers.orthogonal. If the shape of the tensor to initialize is two-dimensional, it is initialized with an orthogonal matrix obtained from the QR decomposition of a matrix of random numbers drawn from a normal distribution. If the matrix has fewer rows than columns then the output will have orthogonal rows. Otherwise, the output will have orthogonal columns. If the shape of the tensor to initialize is more than two-dimensional, a matrix of shape (shape[0] * ... * shape[n - 2], shape[n - 1]) is initialized, where n is the length of the shape vector. The matrix is subsequently reshaped to give a tensor of the desired shape. Examples >>> # Standalone usage: >>> initializer = tf.keras.initializers.Orthogonal() >>> values = initializer(shape=(2, 2)) >>> # Usage in a Keras layer: >>> initializer = tf.keras.initializers.Orthogonal() >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments gain: multiplicative factor to apply to the orthogonal matrix seed: A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. 
References: Saxe et al., 2014 (pdf) Constant class tf.keras.initializers.Constant(value=0) Initializer that generates tensors with constant values. Also available via the shortcut function tf.keras.initializers.constant. Only scalar values are allowed. The constant value provided must be convertible to the dtype requested when calling the initializer. Examples >>> # Standalone usage: >>> initializer = tf.keras.initializers.Constant(3.) >>> values = initializer(shape=(2, 2)) >>> # Usage in a Keras layer: >>> initializer = tf.keras.initializers.Constant(3.) >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments value: A Python scalar. VarianceScaling class tf.keras.initializers.VarianceScaling( scale=1.0, mode="fan_in", distribution="truncated_normal", seed=None ) Initializer capable of adapting its scale to the shape of weights tensors. Also available via the shortcut function tf.keras.initializers.variance_scaling. With distribution="truncated_normal" or "untruncated_normal", samples are drawn from a truncated/untruncated normal distribution with a mean of zero and a standard deviation (after truncation, if used) stddev = sqrt(scale / n), where n is: number of input units in the weight tensor, if mode="fan_in" number of output units, if mode="fan_out" average of the numbers of input and output units, if mode="fan_avg" With distribution="uniform", samples are drawn from a uniform distribution within [-limit, limit], where limit = sqrt(3 * scale / n). Examples >>> # Standalone usage: >>> initializer = tf.keras.initializers.VarianceScaling( ... scale=0.1, mode='fan_in', distribution='uniform') >>> values = initializer(shape=(2, 2)) >>> # Usage in a Keras layer: >>> initializer = tf.keras.initializers.VarianceScaling( ... scale=0.1, mode='fan_in', distribution='uniform') >>> layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Arguments scale: Scaling factor (positive float). mode: One of "fan_in", "fan_out", "fan_avg". distribution: Random distribution to use. One of "truncated_normal", "untruncated_normal" and "uniform". seed: A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. Creating custom initializers Simple callables You can pass a custom callable as initializer. It must take the arguments shape (shape of the variable to initialize) and dtype (dtype of generated values): def my_init(shape, dtype=None): return tf.random.normal(shape, dtype=dtype) layer = Dense(64, kernel_initializer=my_init) Initializer subclasses If you need to configure your initializer via various arguments (e.g. stddev argument in RandomNormal), you should implement it as a subclass of tf.keras.initializers.Initializer. Initializers should implement a __call__ method with the following signature: def __call__(self, shape, dtype=None): # returns a tensor of shape `shape` and dtype `dtype` # containing values drawn from a distribution of your choice. Optionally, you can also implement the method get_config and the class method from_config in order to support serialization -- just like with any Keras object. Here's a simple example: a random normal initializer.
import tensorflow as tf class ExampleRandomNormal(tf.keras.initializers.Initializer): def __init__(self, mean, stddev): self.mean = mean self.stddev = stddev def __call__(self, shape, dtype=None): return tf.random.normal( shape, mean=self.mean, stddev=self.stddev, dtype=dtype) def get_config(self): # To support serialization return {'mean': self.mean, 'stddev': self.stddev} Note that we don't have to implement from_config in the example above since the constructor arguments of the class and the keys in the config returned by get_config are the same. In this case, the default from_config works fine. Layer weight regularizers Regularizers allow you to apply penalties on layer parameters or layer activity during optimization. These penalties are summed into the loss function that the network optimizes. Regularization penalties are applied on a per-layer basis. The exact API will depend on the layer, but many layers (e.g. Dense, Conv1D, Conv2D and Conv3D) have a unified API. These layers expose 3 keyword arguments: kernel_regularizer: Regularizer to apply a penalty on the layer's kernel bias_regularizer: Regularizer to apply a penalty on the layer's bias activity_regularizer: Regularizer to apply a penalty on the layer's output from tensorflow.keras import layers from tensorflow.keras import regularizers layer = layers.Dense( units=64, kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4), bias_regularizer=regularizers.l2(1e-4), activity_regularizer=regularizers.l2(1e-5) ) The value returned by the activity_regularizer object gets divided by the input batch size so that the relative weighting between the weight regularizers and the activity regularizers does not change with the batch size. You can access a layer's regularization penalties by calling layer.losses after calling the layer on inputs: layer = tf.keras.layers.Dense(5, kernel_initializer='ones', kernel_regularizer=tf.keras.regularizers.l1(0.01), activity_regularizer=tf.keras.regularizers.l2(0.01)) tensor = tf.ones(shape=(5, 5)) * 2.0 out = layer(tensor) # The kernel regularization term is 0.25 # The activity regularization term (after dividing by the batch size) is 5 print(tf.math.reduce_sum(layer.losses)) # 5.25 (= 5 + 0.25) Available regularizers The following built-in regularizers are available as part of the tf.keras.regularizers module: L1 class tf.keras.regularizers.l1(l1=0.01, **kwargs) A regularizer that applies an L1 regularization penalty. The L1 regularization penalty is computed as: loss = l1 * reduce_sum(abs(x)) L1 may be passed to a layer as a string identifier: >>> dense = tf.keras.layers.Dense(3, kernel_regularizer='l1') In this case, the default value used is l1=0.01. Attributes l1: Float; L1 regularization factor. L2 class tf.keras.regularizers.l2(l2=0.01, **kwargs) A regularizer that applies an L2 regularization penalty. The L2 regularization penalty is computed as: loss = l2 * reduce_sum(square(x)) L2 may be passed to a layer as a string identifier: >>> dense = tf.keras.layers.Dense(3, kernel_regularizer='l2') In this case, the default value used is l2=0.01. Attributes l2: Float; L2 regularization factor. l1_l2 function tf.keras.regularizers.l1_l2(l1=0.01, l2=0.01) Create a regularizer that applies both L1 and L2 penalties. The L1 regularization penalty is computed as: loss = l1 * reduce_sum(abs(x)) The L2 regularization penalty is computed as: loss = l2 * reduce_sum(square(x)) Arguments l1: Float; L1 regularization factor. l2: Float; L2 regularization factor.
Returns An L1L2 Regularizer with the given regularization factors. Creating custom regularizers Simple callables A weight regularizer can be any callable that takes as input a weight tensor (e.g. the kernel of a Conv2D layer), and returns a scalar loss. Like this: def my_regularizer(x): return 1e-3 * tf.reduce_sum(tf.square(x)) Regularizer subclasses If you need to configure your regularizer via various arguments (e.g. l1 and l2 arguments in l1_l2), you should implement it as a subclass of tf.keras.regularizers.Regularizer. Here's a simple example: class MyRegularizer(regularizers.Regularizer): def __init__(self, strength): self.strength = strength def __call__(self, x): return self.strength * tf.reduce_sum(tf.square(x)) Optionally, you can also implement the method get_config and the class method from_config in order to support serialization -- just like with any Keras object. Example: class MyRegularizer(regularizers.Regularizer): def __init__(self, strength): self.strength = strength def __call__(self, x): return self.strength * tf.reduce_sum(tf.square(x)) def get_config(self): return {'strength': self.strength} Layer weight constraints Usage of constraints Classes from the tf.keras.constraints module allow setting constraints (eg. non-negativity) on model parameters during training. They are per-variable projection functions applied to the target variable after each gradient update (when using fit()). The exact API will depend on the layer, but the layers Dense, Conv1D, Conv2D and Conv3D have a unified API. These layers expose two keyword arguments: kernel_constraint for the main weights matrix bias_constraint for the bias. from tensorflow.keras.constraints import max_norm model.add(Dense(64, kernel_constraint=max_norm(2.))) Available weight constraints MaxNorm class tf.keras.constraints.MaxNorm(max_value=2, axis=0) MaxNorm weight constraint. Constrains the weights incident to each hidden unit to have a norm less than or equal to a desired value. Also available via the shortcut function tf.keras.constraints.max_norm. Arguments max_value: the maximum norm value for the incoming weights. axis: integer, axis along which to calculate weight norms. For instance, in a Dense layer the weight matrix has shape (input_dim, output_dim), set axis to 0 to constrain each weight vector of length (input_dim,). In a Conv2D layer with data_format="channels_last", the weight tensor has shape (rows, cols, input_depth, output_depth), set axis to [0, 1, 2] to constrain the weights of each filter tensor of size (rows, cols, input_depth). MinMaxNorm class tf.keras.constraints.MinMaxNorm( min_value=0.0, max_value=1.0, rate=1.0, axis=0 ) MinMaxNorm weight constraint. Constrains the weights incident to each hidden unit to have the norm between a lower bound and an upper bound. Also available via the shortcut function tf.keras.constraints.min_max_norm. Arguments min_value: the minimum norm for the incoming weights. max_value: the maximum norm for the incoming weights. rate: rate for enforcing the constraint: weights will be rescaled to yield (1 - rate) * norm + rate * norm.clip(min_value, max_value). Effectively, this means that rate=1.0 stands for strict enforcement of the constraint, while rate<1.0 means that weights will be rescaled at each step to slowly move towards a value inside the desired interval. axis: integer, axis along which to calculate weight norms. 
For instance, in a Dense layer the weight matrix has shape (input_dim, output_dim), set axis to 0 to constrain each weight vector of length (input_dim,). In a Conv2D layer with data_format="channels_last", the weight tensor has shape (rows, cols, input_depth, output_depth), set axis to [0, 1, 2] to constrain the weights of each filter tensor of size (rows, cols, input_depth). NonNeg class tf.keras.constraints.NonNeg() Constrains the weights to be non-negative. Also available via the shortcut function tf.keras.constraints.non_neg. UnitNorm class tf.keras.constraints.UnitNorm(axis=0) Constrains the weights incident to each hidden unit to have unit norm. Also available via the shortcut function tf.keras.constraints.unit_norm. Arguments axis: integer, axis along which to calculate weight norms. For instance, in a Dense layer the weight matrix has shape (input_dim, output_dim), set axis to 0 to constrain each weight vector of length (input_dim,). In a Conv2D layer with data_format="channels_last", the weight tensor has shape (rows, cols, input_depth, output_depth), set axis to [0, 1, 2] to constrain the weights of each filter tensor of size (rows, cols, input_depth). RadialConstraint class tf.keras.constraints.RadialConstraint() Constrains Conv2D kernel weights to be the same for each radius. Also available via the shortcut function tf.keras.constraints.radial_constraint. For example, the desired output for the following 4-by-4 kernel: kernel = [[v_00, v_01, v_02, v_03], [v_10, v_11, v_12, v_13], [v_20, v_21, v_22, v_23], [v_30, v_31, v_32, v_33]] is this: kernel = [[v_11, v_11, v_11, v_11], [v_11, v_33, v_33, v_11], [v_11, v_33, v_33, v_11], [v_11, v_11, v_11, v_11]] This constraint can be applied to any Conv2D layer version, including Conv2DTranspose and SeparableConv2D, and with either "channels_last" or "channels_first" data format. The method assumes the weight tensor is of shape (rows, cols, input_depth, output_depth). Creating custom weight constraints A weight constraint can be any callable that takes a tensor and returns a tensor with the same shape and dtype. You would typically implement your constraints as subclasses of tf.keras.constraints.Constraint. Here's a simple example: a constraint that forces weight tensors to be centered around a specific value on average. class CenterAround(tf.keras.constraints.Constraint): """Constrains weight tensors to be centered around `ref_value`.""" def __init__(self, ref_value): self.ref_value = ref_value def __call__(self, w): mean = tf.reduce_mean(w) return w - mean + self.ref_value def get_config(self): return {'ref_value': self.ref_value} Optionally, you can also implement the method get_config and the class method from_config in order to support serialization -- just like with any Keras object. Note that we don't have to implement from_config in the example above since the constructor arguments of the class and the keys in the config returned by get_config are the same. In this case, the default from_config works fine. LeakyReLU layer LeakyReLU class tf.keras.layers.LeakyReLU(alpha=0.3, **kwargs) Leaky version of a Rectified Linear Unit. It allows a small gradient when the unit is not active: f(x) = alpha * x if x < 0 f(x) = x if x >= 0 Usage: >>> layer = tf.keras.layers.LeakyReLU() >>> output = layer([-3.0, -1.0, 0.0, 2.0]) >>> list(output.numpy()) [-0.9, -0.3, 0.0, 2.0] >>> layer = tf.keras.layers.LeakyReLU(alpha=0.1) >>> output = layer([-3.0, -1.0, 0.0, 2.0]) >>> list(output.numpy()) [-0.3, -0.1, 0.0, 2.0] Input shape Arbitrary.
Use the keyword argument input_shape (tuple of integers, does not include the batch axis) when using this layer as the first layer in a model. Output shape Same shape as the input. Arguments alpha: Float >= 0. Negative slope coefficient. Default to 0.3.ReLU layer ReLU class tf.keras.layers.ReLU(max_value=None, negative_slope=0, threshold=0, **kwargs) Rectified Linear Unit activation function. With default values, it returns element-wise max(x, 0). Otherwise, it follows: f(x) = max_value if x >= max_value f(x) = x if threshold <= x < max_value f(x) = negative_slope * (x - threshold) otherwise Usage: >>> layer = tf.keras.layers.ReLU() >>> output = layer([-3.0, -1.0, 0.0, 2.0]) >>> list(output.numpy()) [0.0, 0.0, 0.0, 2.0] >>> layer = tf.keras.layers.ReLU(max_value=1.0) >>> output = layer([-3.0, -1.0, 0.0, 2.0]) >>> list(output.numpy()) [0.0, 0.0, 0.0, 1.0] >>> layer = tf.keras.layers.ReLU(negative_slope=1.0) >>> output = layer([-3.0, -1.0, 0.0, 2.0]) >>> list(output.numpy()) [-3.0, -1.0, 0.0, 2.0] >>> layer = tf.keras.layers.ReLU(threshold=1.5) >>> output = layer([-3.0, -1.0, 1.0, 2.0]) >>> list(output.numpy()) [0.0, 0.0, 0.0, 2.0] Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the batch axis) when using this layer as the first layer in a model. Output shape Same shape as the input. Arguments max_value: Float >= 0. Maximum activation value. Default to None, which means unlimited. negative_slope: Float >= 0. Negative slope coefficient. Default to 0. threshold: Float. Threshold value for thresholded activation. Default to 0. Softmax layer Softmax class tf.keras.layers.Softmax(axis=-1, **kwargs) Softmax activation function. Example without mask: >>> inp = np.asarray([1., 2., 1.]) >>> layer = tf.keras.layers.Softmax() >>> layer(inp).numpy() array([0.21194157, 0.5761169 , 0.21194157], dtype=float32) >>> mask = np.asarray([True, False, True], dtype=bool) >>> layer(inp, mask).numpy() array([0.5, 0. , 0.5], dtype=float32) Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape Same shape as the input. Arguments axis: Integer, or list of Integers, axis along which the softmax normalization is applied. Call arguments inputs: The inputs, or logits to the softmax layer. mask: A boolean mask of the same shape as inputs. Defaults to None. The mask specifies 1 to keep and 0 to mask. Returns softmaxed output with the same shape as inputs. PReLU layer PReLU class tf.keras.layers.PReLU( alpha_initializer="zeros", alpha_regularizer=None, alpha_constraint=None, shared_axes=None, **kwargs ) Parametric Rectified Linear Unit. It follows: f(x) = alpha * x for x < 0 f(x) = x for x >= 0 where alpha is a learned array with the same shape as x. Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape Same shape as the input. Arguments alpha_initializer: Initializer function for the weights. alpha_regularizer: Regularizer for the weights. alpha_constraint: Constraint for the weights. shared_axes: The axes along which to share learnable parameters for the activation function. For example, if the incoming feature maps are from a 2D convolution with output shape (batch, height, width, channels), and you wish to share parameters across space so that each filter only has one set of parameters, set shared_axes=[1, 2]. 
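To make the shared_axes behaviour above concrete, here is a minimal sketch; the input shape and channel count are illustrative assumptions, not part of the API reference:

import tensorflow as tf

# Feature maps from a 2D convolution: (batch, height, width, channels).
inputs = tf.keras.Input(shape=(32, 32, 16))
# Sharing alpha across the spatial axes (1 and 2) leaves one learnable
# alpha per channel instead of one per (height, width, channel) position.
outputs = tf.keras.layers.PReLU(shared_axes=[1, 2])(inputs)
model = tf.keras.Model(inputs, outputs)
print(model.layers[-1].count_params())  # 16 alphas, one per channel

Without shared_axes, the same layer would instead hold 32 * 32 * 16 alpha parameters.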
ThresholdedReLU layer ThresholdedReLU class tf.keras.layers.ThresholdedReLU(theta=1.0, **kwargs) Thresholded Rectified Linear Unit. It follows: f(x) = x for x > theta f(x) = 0 otherwise Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape Same shape as the input. Arguments theta: Float >= 0. Threshold location of activation. ELU layer ELU class tf.keras.layers.ELU(alpha=1.0, **kwargs) Exponential Linear Unit. It follows: f(x) = alpha * (exp(x) - 1.) for x < 0 f(x) = x for x >= 0 Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape Same shape as the input. Arguments alpha: Scale for the negative factor.
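Since neither ThresholdedReLU nor ELU comes with a usage snippet here, the following small sketch (with arbitrary sample values) shows what each one returns:

import tensorflow as tf

x = tf.constant([-3.0, -1.0, 0.5, 2.0])
# ThresholdedReLU keeps x only where x > theta, otherwise outputs 0.
print(tf.keras.layers.ThresholdedReLU(theta=1.0)(x).numpy())  # [0. 0. 0. 2.]
# ELU returns alpha * (exp(x) - 1) for x < 0 and x otherwise.
print(tf.keras.layers.ELU(alpha=1.0)(x).numpy())  # approx. [-0.95 -0.63 0.5 2.]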
LocallyConnected2D layer LocallyConnected2D class tf.keras.layers.LocallyConnected2D( filters, kernel_size, strides=(1, 1), padding="valid", data_format=None, activation=None, use_bias=True, kernel_initializer="glorot_uniform", bias_initializer="zeros", kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, implementation=1, **kwargs ) Locally-connected layer for 2D inputs. The LocallyConnected2D layer works similarly to the Conv2D layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input. Note: layer attributes cannot be modified after the layer has been called once (except the trainable attribute).
Examples # apply a 3x3 unshared weights convolution with 64 output filters on a 32x32 image # with `data_format="channels_last"`: model = Sequential() model.add(LocallyConnected2D(64, (3, 3), input_shape=(32, 32, 3))) # now model.output_shape == (None, 30, 30, 64) # notice that this layer will consume (30*30)*(3*3*3*64) + (30*30)*64 parameters # add a 3x3 unshared weights convolution on top, with 32 output filters: model.add(LocallyConnected2D(32, (3, 3))) # now model.output_shape == (None, 28, 28, 32) Arguments filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). kernel_size: An integer or tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions. strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions. padding: Currently only support "valid" (case-insensitive). "same" will be supported in future. "valid" means no padding. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". activation: Activation function to use. If you don't specify anything, no activation is applied (ie. "linear" activation: a(x) = x). use_bias: Boolean, whether the layer uses a bias vector. kernel_initializer: Initializer for the kernel weights matrix. bias_initializer: Initializer for the bias vector. kernel_regularizer: Regularizer function applied to the kernel weights matrix. bias_regularizer: Regularizer function applied to the bias vector. activity_regularizer: Regularizer function applied to the output of the layer (its "activation"). kernel_constraint: Constraint function applied to the kernel matrix. bias_constraint: Constraint function applied to the bias vector. implementation: implementation mode, either 1, 2, or 3. 1 loops over input spatial locations to perform the forward pass. It is memory-efficient but performs a lot of (small) ops. 2 stores layer weights in a dense but sparsely-populated 2D matrix and implements the forward pass as a single matrix-multiply. It uses a lot of RAM but performs few (large) ops. 3 stores layer weights in a sparse tensor and implements the forward pass as a single sparse matrix-multiply. How to choose: 1: large, dense models, 2: small models, 3: large, sparse models, where "large" stands for large input/output activations (i.e. many filters, input_filters, large np.prod(input_size), np.prod(output_size)), and "sparse" stands for few connections between inputs and outputs, i.e. small ratio filters * input_filters * np.prod(kernel_size) / (np.prod(input_size) * np.prod(strides)), where inputs to and outputs of the layer are assumed to have shapes input_size + (input_filters,), output_size + (filters,) respectively. It is recommended to benchmark each in the setting of interest to pick the most efficient one (in terms of speed and memory usage). Correct choice of implementation can lead to dramatic speed improvements (e.g. 50X), potentially at the expense of RAM. 
Also, only padding="valid" is supported by implementation=1. Input shape 4D tensor with shape: (samples, channels, rows, cols) if data_format='channels_first' or 4D tensor with shape: (samples, rows, cols, channels) if data_format='channels_last'. Output shape 4D tensor with shape: (samples, filters, new_rows, new_cols) if data_format='channels_first' or 4D tensor with shape: (samples, new_rows, new_cols, filters) if data_format='channels_last'. rows and cols values might have changed due to padding.LocallyConnected1D layer LocallyConnected1D class tf.keras.layers.LocallyConnected1D( filters, kernel_size, strides=1, padding="valid", data_format=None, activation=None, use_bias=True, kernel_initializer="glorot_uniform", bias_initializer="zeros", kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, implementation=1, **kwargs ) Locally-connected layer for 1D inputs. The LocallyConnected1D layer works similarly to the Conv1D layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input. Note: layer attributes cannot be modified after the layer has been called once (except the trainable attribute). Example # apply a unshared weight convolution 1d of length 3 to a sequence with # 10 timesteps, with 64 output filters model = Sequential() model.add(LocallyConnected1D(64, 3, input_shape=(10, 32))) # now model.output_shape == (None, 8, 64) # add a new conv1d on top model.add(LocallyConnected1D(32, 3)) # now model.output_shape == (None, 6, 32) Arguments filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). kernel_size: An integer or tuple/list of a single integer, specifying the length of the 1D convolution window. strides: An integer or tuple/list of a single integer, specifying the stride length of the convolution. padding: Currently only supports "valid" (case-insensitive). "same" may be supported in the future. "valid" means no padding. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". activation: Activation function to use. If you don't specify anything, no activation is applied (ie. "linear" activation: a(x) = x). use_bias: Boolean, whether the layer uses a bias vector. kernel_initializer: Initializer for the kernel weights matrix. bias_initializer: Initializer for the bias vector. kernel_regularizer: Regularizer function applied to the kernel weights matrix. bias_regularizer: Regularizer function applied to the bias vector. activity_regularizer: Regularizer function applied to the output of the layer (its "activation").. kernel_constraint: Constraint function applied to the kernel matrix. bias_constraint: Constraint function applied to the bias vector. implementation: implementation mode, either 1, 2, or 3. 1 loops over input spatial locations to perform the forward pass. It is memory-efficient but performs a lot of (small) ops. 2 stores layer weights in a dense but sparsely-populated 2D matrix and implements the forward pass as a single matrix-multiply. It uses a lot of RAM but performs few (large) ops. 
3 stores layer weights in a sparse tensor and implements the forward pass as a single sparse matrix-multiply. How to choose: 1: large, dense models, 2: small models, 3: large, sparse models, where "large" stands for large input/output activations (i.e. many filters, input_filters, large input_size, output_size), and "sparse" stands for few connections between inputs and outputs, i.e. small ratio filters * input_filters * kernel_size / (input_size * strides), where inputs to and outputs of the layer are assumed to have shapes (input_size, input_filters), (output_size, filters) respectively. It is recommended to benchmark each in the setting of interest to pick the most efficient one (in terms of speed and memory usage). Correct choice of implementation can lead to dramatic speed improvements (e.g. 50X), potentially at the expense of RAM. Also, only padding="valid" is supported by implementation=1. Input shape 3D tensor with shape: (batch_size, steps, input_dim) Output shape 3D tensor with shape: (batch_size, new_steps, filters) steps value might have changed due to padding or strides. Dot layer Dot class tf.keras.layers.Dot(axes, normalize=False, **kwargs) Layer that computes a dot product between samples in two tensors. E.g. if applied to a list of two tensors a and b of shape (batch_size, n), the output will be a tensor of shape (batch_size, 1) where each entry i will be the dot product between a[i] and b[i]. >>> x = np.arange(10).reshape(1, 5, 2) >>> print(x) [[[0 1] [2 3] [4 5] [6 7] [8 9]]] >>> y = np.arange(10, 20).reshape(1, 2, 5) >>> print(y) [[[10 11 12 13 14] [15 16 17 18 19]]] >>> tf.keras.layers.Dot(axes=(1, 2))([x, y]) >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2)) >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2)) >>> dotted = tf.keras.layers.Dot(axes=1)([x1, x2]) >>> dotted.shape TensorShape([5, 1])Minimum layer Minimum class tf.keras.layers.Minimum(**kwargs) Layer that computes the minimum (element-wise) a list of inputs. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape). >>> tf.keras.layers.Minimum()([np.arange(5).reshape(5, 1), ... np.arange(5, 10).reshape(5, 1)]) >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2)) >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2)) >>> minned = tf.keras.layers.Minimum()([x1, x2]) >>> minned.shape TensorShape([5, 8]) Maximum layer Maximum class tf.keras.layers.Maximum(**kwargs) Layer that computes the maximum (element-wise) a list of inputs. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape). >>> tf.keras.layers.Maximum()([np.arange(5).reshape(5, 1), ... np.arange(5, 10).reshape(5, 1)]) >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2)) >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2)) >>> maxed = tf.keras.layers.Maximum()([x1, x2]) >>> maxed.shape TensorShape([5, 8])Subtract layer Subtract class tf.keras.layers.Subtract(**kwargs) Layer that subtracts two inputs. It takes as input a list of tensors of size 2, both of the same shape, and returns a single tensor, (inputs[0] - inputs[1]), also of the same shape. 
Examples import keras input1 = keras.layers.Input(shape=(16,)) x1 = keras.layers.Dense(8, activation='relu')(input1) input2 = keras.layers.Input(shape=(32,)) x2 = keras.layers.Dense(8, activation='relu')(input2) # Equivalent to subtracted = keras.layers.subtract([x1, x2]) subtracted = keras.layers.Subtract()([x1, x2]) out = keras.layers.Dense(4)(subtracted) model = keras.models.Model(inputs=[input1, input2], outputs=out)Concatenate layer Concatenate class tf.keras.layers.Concatenate(axis=-1, **kwargs) Layer that concatenates a list of inputs. It takes as input a list of tensors, all of the same shape except for the concatenation axis, and returns a single tensor that is the concatenation of all inputs. >>> x = np.arange(20).reshape(2, 2, 5) >>> print(x) [[[ 0 1 2 3 4] [ 5 6 7 8 9]] [[10 11 12 13 14] [15 16 17 18 19]]] >>> y = np.arange(20, 30).reshape(2, 1, 5) >>> print(y) [[[20 21 22 23 24]] [[25 26 27 28 29]]] >>> tf.keras.layers.Concatenate(axis=1)([x, y]) >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2)) >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2)) >>> concatted = tf.keras.layers.Concatenate()([x1, x2]) >>> concatted.shape TensorShape([5, 16]) Multiply layer Multiply class tf.keras.layers.Multiply(**kwargs) Layer that multiplies (element-wise) a list of inputs. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape). >>> tf.keras.layers.Multiply()([np.arange(5).reshape(5, 1), ... np.arange(5, 10).reshape(5, 1)]) >>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2)) >>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2)) >>> multiplied = tf.keras.layers.Multiply()([x1, x2]) >>> multiplied.shape TensorShape([5, 8])Average layer Average class tf.keras.layers.Average(**kwargs) Layer that averages a list of inputs element-wise. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape). Example >>> x1 = np.ones((2, 2)) >>> x2 = np.zeros((2, 2)) >>> y = tf.keras.layers.Average()([x1, x2]) >>> y.numpy().tolist() [[0.5, 0.5], [0.5, 0.5]] Usage in a functional model: >>> input1 = tf.keras.layers.Input(shape=(16,)) >>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1) >>> input2 = tf.keras.layers.Input(shape=(32,)) >>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2) >>> avg = tf.keras.layers.Average()([x1, x2]) >>> out = tf.keras.layers.Dense(4)(avg) >>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out) Raises ValueError: If there is a shape mismatch between the inputs and the shapes cannot be broadcasted to match.Add layer Add class tf.keras.layers.Add(**kwargs) Layer that adds a list of inputs. It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape). 
Examples >>> input_shape = (2, 3, 4) >>> x1 = tf.random.normal(input_shape) >>> x2 = tf.random.normal(input_shape) >>> y = tf.keras.layers.Add()([x1, x2]) >>> print(y.shape) (2, 3, 4) Used in a functional model: >>> input1 = tf.keras.layers.Input(shape=(16,)) >>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1) >>> input2 = tf.keras.layers.Input(shape=(32,)) >>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2) >>> # equivalent to `added = tf.keras.layers.add([x1, x2])` >>> added = tf.keras.layers.Add()([x1, x2]) >>> out = tf.keras.layers.Dense(4)(added) >>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out)
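Beyond the toy snippets above, a typical place the Add layer shows up is a residual (skip) connection; the sketch below is only an illustration, and its input shape and filter counts are arbitrary assumptions:

import tensorflow as tf

inputs = tf.keras.Input(shape=(32, 32, 64))
x = tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu')(inputs)
x = tf.keras.layers.Conv2D(64, 3, padding='same')(x)
# Both branches have shape (batch, 32, 32, 64), so Add can merge them.
outputs = tf.keras.layers.ReLU()(tf.keras.layers.Add()([inputs, x]))
model = tf.keras.Model(inputs, outputs)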
Permute layer Permute class tf.keras.layers.Permute(dims, **kwargs) Permutes the dimensions of the input according to a given pattern. Useful e.g. connecting RNNs and convnets. Example model = Sequential() model.add(Permute((2, 1), input_shape=(10, 64))) # now: model.output_shape == (None, 64, 10) # note: `None` is the batch dimension Arguments dims: Tuple of integers. Permutation pattern does not include the samples dimension. Indexing starts at 1. For instance, (2, 1) permutes the first and second dimensions of the input. Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.
Output shape Same as the input shape, but with the dimensions re-ordered according to the specified pattern.Cropping2D layer Cropping2D class tf.keras.layers.Cropping2D( cropping=((0, 0), (0, 0)), data_format=None, **kwargs ) Cropping layer for 2D input (e.g. picture). It crops along spatial dimensions, i.e. height and width. Examples >>> input_shape = (2, 28, 28, 3) >>> x = np.arange(np.prod(input_shape)).reshape(input_shape) >>> y = tf.keras.layers.Cropping2D(cropping=((2, 2), (4, 4)))(x) >>> print(y.shape) (2, 24, 20, 3) Arguments cropping: Int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints. If int: the same symmetric cropping is applied to height and width. If tuple of 2 ints: interpreted as two different symmetric cropping values for height and width: (symmetric_height_crop, symmetric_width_crop). If tuple of 2 tuples of 2 ints: interpreted as ((top_crop, bottom_crop), (left_crop, right_crop)) data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape 4D tensor with shape: - If data_format is "channels_last": (batch_size, rows, cols, channels) - If data_format is "channels_first": (batch_size, channels, rows, cols) Output shape 4D tensor with shape: - If data_format is "channels_last": (batch_size, cropped_rows, cropped_cols, channels) - If data_format is "channels_first": (batch_size, channels, cropped_rows, cropped_cols) ZeroPadding2D layer ZeroPadding2D class tf.keras.layers.ZeroPadding2D(padding=(1, 1), data_format=None, **kwargs) Zero-padding layer for 2D input (e.g. picture). This layer can add rows and columns of zeros at the top, bottom, left and right side of an image tensor. Examples >>> input_shape = (1, 1, 2, 2) >>> x = np.arange(np.prod(input_shape)).reshape(input_shape) >>> print(x) [[[[0 1] [2 3]]]] >>> y = tf.keras.layers.ZeroPadding2D(padding=1)(x) >>> print(y) tf.Tensor( [[[[0 0] [0 0] [0 0] [0 0]] [[0 0] [0 1] [2 3] [0 0]] [[0 0] [0 0] [0 0] [0 0]]]], shape=(1, 3, 4, 2), dtype=int64) Arguments padding: Int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints. If int: the same symmetric padding is applied to height and width. If tuple of 2 ints: interpreted as two different symmetric padding values for height and width: (symmetric_height_pad, symmetric_width_pad). If tuple of 2 tuples of 2 ints: interpreted as ((top_pad, bottom_pad), (left_pad, right_pad)) data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". 
Input shape 4D tensor with shape: - If data_format is "channels_last": (batch_size, rows, cols, channels) - If data_format is "channels_first": (batch_size, channels, rows, cols) Output shape 4D tensor with shape: - If data_format is "channels_last": (batch_size, padded_rows, padded_cols, channels) - If data_format is "channels_first": (batch_size, channels, padded_rows, padded_cols)UpSampling2D layer UpSampling2D class tf.keras.layers.UpSampling2D( size=(2, 2), data_format=None, interpolation="nearest", **kwargs ) Upsampling layer for 2D inputs. Repeats the rows and columns of the data by size[0] and size[1] respectively. Examples >>> input_shape = (2, 2, 1, 3) >>> x = np.arange(np.prod(input_shape)).reshape(input_shape) >>> print(x) [[[[ 0 1 2]] [[ 3 4 5]]] [[[ 6 7 8]] [[ 9 10 11]]]] >>> y = tf.keras.layers.UpSampling2D(size=(1, 2))(x) >>> print(y) tf.Tensor( [[[[ 0 1 2] [ 0 1 2]] [[ 3 4 5] [ 3 4 5]]] [[[ 6 7 8] [ 6 7 8]] [[ 9 10 11] [ 9 10 11]]]], shape=(2, 2, 2, 3), dtype=int64) Arguments size: Int, or tuple of 2 integers. The upsampling factors for rows and columns. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". interpolation: A string, one of nearest or bilinear. Input shape 4D tensor with shape: - If data_format is "channels_last": (batch_size, rows, cols, channels) - If data_format is "channels_first": (batch_size, channels, rows, cols) Output shape 4D tensor with shape: - If data_format is "channels_last": (batch_size, upsampled_rows, upsampled_cols, channels) - If data_format is "channels_first": (batch_size, channels, upsampled_rows, upsampled_cols) Cropping3D layer Cropping3D class tf.keras.layers.Cropping3D( cropping=((1, 1), (1, 1), (1, 1)), data_format=None, **kwargs ) Cropping layer for 3D data (e.g. spatial or spatio-temporal). # Examples >>> input_shape = (2, 28, 28, 10, 3) >>> x = np.arange(np.prod(input_shape)).reshape(input_shape) >>> y = tf.keras.layers.Cropping3D(cropping=(2, 4, 2))(x) >>> print(y.shape) (2, 24, 20, 6, 3) Arguments cropping: Int, or tuple of 3 ints, or tuple of 3 tuples of 2 ints. If int: the same symmetric cropping is applied to depth, height, and width. If tuple of 3 ints: interpreted as two different symmetric cropping values for depth, height, and width: (symmetric_dim1_crop, symmetric_dim2_crop, symmetric_dim3_crop). If tuple of 3 tuples of 2 ints: interpreted as ((left_dim1_crop, right_dim1_crop), (left_dim2_crop, right_dim2_crop), (left_dim3_crop, right_dim3_crop)) data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". 
Input shape 5D tensor with shape: - If data_format is "channels_last": (batch_size, first_axis_to_crop, second_axis_to_crop, third_axis_to_crop, depth) - If data_format is "channels_first": (batch_size, depth, first_axis_to_crop, second_axis_to_crop, third_axis_to_crop) Output shape 5D tensor with shape: - If data_format is "channels_last": (batch_size, first_cropped_axis, second_cropped_axis, third_cropped_axis, depth) - If data_format is "channels_first": (batch_size, depth, first_cropped_axis, second_cropped_axis, third_cropped_axis)ZeroPadding3D layer ZeroPadding3D class tf.keras.layers.ZeroPadding3D(padding=(1, 1, 1), data_format=None, **kwargs) Zero-padding layer for 3D data (spatial or spatio-temporal). Examples >>> input_shape = (1, 1, 2, 2, 3) >>> x = np.arange(np.prod(input_shape)).reshape(input_shape) >>> y = tf.keras.layers.ZeroPadding3D(padding=2)(x) >>> print(y.shape) (1, 5, 6, 6, 3) Arguments padding: Int, or tuple of 3 ints, or tuple of 3 tuples of 2 ints. If int: the same symmetric padding is applied to height and width. If tuple of 3 ints: interpreted as two different symmetric padding values for height and width: (symmetric_dim1_pad, symmetric_dim2_pad, symmetric_dim3_pad). If tuple of 3 tuples of 2 ints: interpreted as ((left_dim1_pad, right_dim1_pad), (left_dim2_pad, right_dim2_pad), (left_dim3_pad, right_dim3_pad)) data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape 5D tensor with shape: - If data_format is "channels_last": (batch_size, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad, depth) - If data_format is "channels_first": (batch_size, depth, first_axis_to_pad, second_axis_to_pad, third_axis_to_pad) Output shape 5D tensor with shape: - If data_format is "channels_last": (batch_size, first_padded_axis, second_padded_axis, third_axis_to_pad, depth) - If data_format is "channels_first": (batch_size, depth, first_padded_axis, second_padded_axis, third_axis_to_pad)UpSampling3D layer UpSampling3D class tf.keras.layers.UpSampling3D(size=(2, 2, 2), data_format=None, **kwargs) Upsampling layer for 3D inputs. Repeats the 1st, 2nd and 3rd dimensions of the data by size[0], size[1] and size[2] respectively. Examples >>> input_shape = (2, 1, 2, 1, 3) >>> x = tf.constant(1, shape=input_shape) >>> y = tf.keras.layers.UpSampling3D(size=2)(x) >>> print(y.shape) (2, 2, 4, 2, 3) Arguments size: Int, or tuple of 3 integers. The upsampling factors for dim1, dim2 and dim3. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". 
Input shape 5D tensor with shape: - If data_format is "channels_last": (batch_size, dim1, dim2, dim3, channels) - If data_format is "channels_first": (batch_size, channels, dim1, dim2, dim3) Output shape 5D tensor with shape: - If data_format is "channels_last": (batch_size, upsampled_dim1, upsampled_dim2, upsampled_dim3, channels) - If data_format is "channels_first": (batch_size, channels, upsampled_dim1, upsampled_dim2, upsampled_dim3)Reshape layer Reshape class tf.keras.layers.Reshape(target_shape, **kwargs) Layer that reshapes inputs into the given shape. Input shape Arbitrary, although all dimensions in the input shape must be known/fixed. Use the keyword argument input_shape (tuple of integers, does not include the samples/batch size axis) when using this layer as the first layer in a model. Output shape (batch_size,) + target_shape Example >>> # as first layer in a Sequential model >>> model = tf.keras.Sequential() >>> model.add(tf.keras.layers.Reshape((3, 4), input_shape=(12,))) >>> # model.output_shape == (None, 3, 4), `None` is the batch size. >>> model.output_shape (None, 3, 4) >>> # as intermediate layer in a Sequential model >>> model.add(tf.keras.layers.Reshape((6, 2))) >>> model.output_shape (None, 6, 2) >>> # also supports shape inference using `-1` as dimension >>> model.add(tf.keras.layers.Reshape((-1, 2, 2))) >>> model.output_shape (None, 3, 2, 2)RepeatVector layer RepeatVector class tf.keras.layers.RepeatVector(n, **kwargs) Repeats the input n times. Example model = Sequential() model.add(Dense(32, input_dim=32)) # now: model.output_shape == (None, 32) # note: `None` is the batch dimension model.add(RepeatVector(3)) # now: model.output_shape == (None, 3, 32) Arguments n: Integer, repetition factor. Input shape 2D tensor of shape (num_samples, features). Output shape 3D tensor of shape (num_samples, n, features).Cropping1D layer Cropping1D class tf.keras.layers.Cropping1D(cropping=(1, 1), **kwargs) Cropping layer for 1D input (e.g. temporal sequence). It crops along the time dimension (axis 1). Examples >>> input_shape = (2, 3, 2) >>> x = np.arange(np.prod(input_shape)).reshape(input_shape) >>> print(x) [[[ 0 1] [ 2 3] [ 4 5]] [[ 6 7] [ 8 9] [10 11]]] >>> y = tf.keras.layers.Cropping1D(cropping=1)(x) >>> print(y) tf.Tensor( [[[2 3]] [[8 9]]], shape=(2, 1, 2), dtype=int64) Arguments cropping: Int or tuple of int (length 2) How many units should be trimmed off at the beginning and end of the cropping dimension (axis 1). If a single int is provided, the same value will be used for both. Input shape 3D tensor with shape (batch_size, axis_to_crop, features) Output shape 3D tensor with shape (batch_size, cropped_axis, features) Flatten layer Flatten class tf.keras.layers.Flatten(data_format=None, **kwargs) Flattens the input. Does not affect the batch size. Note: If inputs are shaped (batch,) without a feature axis, then flattening adds an extra channel dimension and output shape is (batch, 1). Arguments data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, ..., channels) while channels_first corresponds to inputs with shape (batch, channels, ...). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". 
Example >>> model = tf.keras.Sequential() >>> model.add(tf.keras.layers.Conv2D(64, 3, 3, input_shape=(3, 32, 32))) >>> model.output_shape (None, 1, 10, 64) >>> model.add(Flatten()) >>> model.output_shape (None, 640)UpSampling1D layer UpSampling1D class tf.keras.layers.UpSampling1D(size=2, **kwargs) Upsampling layer for 1D inputs. Repeats each temporal step size times along the time axis. Examples >>> input_shape = (2, 2, 3) >>> x = np.arange(np.prod(input_shape)).reshape(input_shape) >>> print(x) [[[ 0 1 2] [ 3 4 5]] [[ 6 7 8] [ 9 10 11]]] >>> y = tf.keras.layers.UpSampling1D(size=2)(x) >>> print(y) tf.Tensor( [[[ 0 1 2] [ 0 1 2] [ 3 4 5] [ 3 4 5]] [[ 6 7 8] [ 6 7 8] [ 9 10 11] [ 9 10 11]]], shape=(2, 4, 3), dtype=int64) Arguments size: Integer. Upsampling factor. Input shape 3D tensor with shape: (batch_size, steps, features). Output shape 3D tensor with shape: (batch_size, upsampled_steps, features).ZeroPadding1D layer ZeroPadding1D class tf.keras.layers.ZeroPadding1D(padding=1, **kwargs) Zero-padding layer for 1D input (e.g. temporal sequence). Examples >>> input_shape = (2, 2, 3) >>> x = np.arange(np.prod(input_shape)).reshape(input_shape) >>> print(x) [[[ 0 1 2] [ 3 4 5]] [[ 6 7 8] [ 9 10 11]]] >>> y = tf.keras.layers.ZeroPadding1D(padding=2)(x) >>> print(y) tf.Tensor( [[[ 0 0 0] [ 0 0 0] [ 0 1 2] [ 3 4 5] [ 0 0 0] [ 0 0 0]] [[ 0 0 0] [ 0 0 0] [ 6 7 8] [ 9 10 11] [ 0 0 0] [ 0 0 0]]], shape=(2, 6, 3), dtype=int64) Arguments padding: Int, or tuple of int (length 2), or dictionary. - If int: How many zeros to add at the beginning and end of the padding dimension (axis 1). - If tuple of int (length 2): How many zeros to add at the beginning and the end of the padding dimension ((left_pad, right_pad)). Input shape 3D tensor with shape (batch_size, axis_to_pad, features) Output shape 3D tensor with shape (batch_size, padded_axis, features) AdditiveAttention layer AdditiveAttention class tf.keras.layers.AdditiveAttention(use_scale=True, **kwargs) Additive attention layer, a.k.a. Bahdanau-style attention. Inputs are query tensor of shape [batch_size, Tq, dim], value tensor of shape [batch_size, Tv, dim] and key tensor of shape [batch_size, Tv, dim]. The calculation follows the steps: Reshape query and value into shapes [batch_size, Tq, 1, dim] and [batch_size, 1, Tv, dim] respectively. Calculate scores with shape [batch_size, Tq, Tv] as a non-linear sum: scores = tf.reduce_sum(tf.tanh(query + value), axis=-1) Use scores to calculate a distribution with shape [batch_size, Tq, Tv]: distribution = tf.nn.softmax(scores). Use distribution to create a linear combination of value with shape [batch_size, Tq, dim]: return tf.matmul(distribution, value). Arguments use_scale: If True, will create a variable to scale the attention scores. causal: Boolean. Set to True for decoder self-attention. Adds a mask such that position i cannot attend to positions j > i. This prevents the flow of information from the future towards the past. dropout: Float between 0 and 1. Fraction of the units to drop for the attention scores. Call # Arguments inputs: List of the following tensors: * query: Query Tensor of shape [batch_size, Tq, dim]. * value: Value Tensor of shape [batch_size, Tv, dim]. * key: Optional key Tensor of shape [batch_size, Tv, dim]. If not given, will use value for both key and value, which is the most common case. mask: List of the following tensors: * query_mask: A boolean mask Tensor of shape [batch_size, Tq]. If given, the output will be zero at the positions where mask==False. 
* value_mask: A boolean mask Tensor of shape [batch_size, Tv]. If given, will apply the mask such that values at positions where mask==False do not contribute to the result. training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (no dropout). return_attention_scores: bool, if True, returns the attention scores (after masking and softmax) as an additional output argument. Output: Attention outputs of shape [batch_size, Tq, dim]. [Optional] Attention scores after masking and softmax with shape [batch_size, Tq, Tv]. The meaning of query, value and key depends on the application. In the case of text similarity, for example, query is the sequence embeddings of the first piece of text and value is the sequence embeddings of the second piece of text. key is usually the same tensor as value. Here is a code example for using AdditiveAttention in a CNN+Attention network: # Variable-length int sequences. query_input = tf.keras.Input(shape=(None,), dtype='int32') value_input = tf.keras.Input(shape=(None,), dtype='int32') # Embedding lookup. token_embedding = tf.keras.layers.Embedding(max_tokens, dimension) # Query embeddings of shape [batch_size, Tq, dimension]. query_embeddings = token_embedding(query_input) # Value embeddings of shape [batch_size, Tv, dimension]. value_embeddings = token_embedding(value_input) # CNN layer. cnn_layer = tf.keras.layers.Conv1D( filters=100, kernel_size=4, # Use 'same' padding so outputs have the same shape as inputs. padding='same') # Query encoding of shape [batch_size, Tq, filters]. query_seq_encoding = cnn_layer(query_embeddings) # Value encoding of shape [batch_size, Tv, filters]. value_seq_encoding = cnn_layer(value_embeddings) # Query-value attention of shape [batch_size, Tq, filters]. query_value_attention_seq = tf.keras.layers.AdditiveAttention()( [query_seq_encoding, value_seq_encoding]) # Reduce over the sequence axis to produce encodings of shape # [batch_size, filters]. query_encoding = tf.keras.layers.GlobalAveragePooling1D()( query_seq_encoding) query_value_attention = tf.keras.layers.GlobalAveragePooling1D()( query_value_attention_seq) # Concatenate query and document encodings to produce a DNN input layer. input_layer = tf.keras.layers.Concatenate()( [query_encoding, query_value_attention]) # Add DNN layers, and create Model. # ... MultiHeadAttention layer MultiHeadAttention class tf.keras.layers.MultiHeadAttention( num_heads, key_dim, value_dim=None, dropout=0.0, use_bias=True, output_shape=None, attention_axes=None, kernel_initializer="glorot_uniform", bias_initializer="zeros", kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs ) MultiHeadAttention layer. This is an implementation of multi-headed attention based on "Attention is all you Need". If query, key, value are the same, then this is self-attention. Each timestep in query attends to the corresponding sequence in key, and returns a fixed-width vector. This layer first projects query, key and value. These are (effectively) a list of tensors of length num_attention_heads, where the corresponding shapes are [batch_size, <query dimensions>, key_dim], [batch_size, <key/value dimensions>, key_dim], [batch_size, <key/value dimensions>, value_dim]. Then, the query and key tensors are dot-producted and scaled. These are softmaxed to obtain attention probabilities. The value tensors are then interpolated by these probabilities, then concatenated back to a single tensor.
Finally, the result tensor with the last dimension as value_dim can take a linear projection and be returned. Examples Performs 1D cross-attention over two sequence inputs with an attention mask. Returns the additional attention weights over heads. >>> layer = MultiHeadAttention(num_heads=2, key_dim=2) >>> target = tf.keras.Input(shape=[8, 16]) >>> source = tf.keras.Input(shape=[4, 16]) >>> output_tensor, weights = layer(target, source, ... return_attention_scores=True) >>> print(output_tensor.shape) (None, 8, 16) >>> print(weights.shape) (None, 2, 8, 4) Performs 2D self-attention over a 5D input tensor on axes 2 and 3. >>> layer = MultiHeadAttention(num_heads=2, key_dim=2, attention_axes=(2, 3)) >>> input_tensor = tf.keras.Input(shape=[5, 3, 4, 16]) >>> output_tensor = layer(input_tensor, input_tensor) >>> print(output_tensor.shape) (None, 5, 3, 4, 16) Arguments num_heads: Number of attention heads. key_dim: Size of each attention head for query and key. value_dim: Size of each attention head for value. dropout: Dropout probability. use_bias: Boolean, whether the dense layers use bias vectors/matrices. output_shape: The expected shape of an output tensor, besides the batch and sequence dims. If not specified, projects back to the key feature dim. attention_axes: axes over which the attention is applied. None means attention over all axes, but batch, heads, and features. kernel_initializer: Initializer for dense layer kernels. bias_initializer: Initializer for dense layer biases. kernel_regularizer: Regularizer for dense layer kernels. bias_regularizer: Regularizer for dense layer biases. activity_regularizer: Regularizer for dense layer activity. kernel_constraint: Constraint for dense layer kernels. bias_constraint: Constraint for dense layer biases. Call arguments query: Query Tensor of shape [B, T, dim]. value: Value Tensor of shape [B, S, dim]. key: Optional key Tensor of shape [B, S, dim]. If not given, will use value for both key and value, which is the most common case. attention_mask: a boolean mask of shape [B, T, S], that prevents attention to certain positions. The boolean mask specifies which query elements can attend to which key elements, 1 indicates attention and 0 indicates no attention. Broadcasting can happen for the missing batch dimensions and the head dimension. return_attention_scores: A boolean to indicate whether the output should be (attention_output, attention_scores) if True, or attention_output if False. Defaults to False. training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (no dropout). Defaults to either using the training mode of the parent layer/model, or False (inference) if there is no parent layer. Returns attention_output: The result of the computation, of shape [B, T, E], where T is for target sequence shapes and E is the query input last dimension if output_shape is None. Otherwise, the multi-head outputs are projected to the shape specified by output_shape. attention_scores: [Optional] multi-head attention coefficients over attention axes. Attention layer Attention class tf.keras.layers.Attention(use_scale=False, **kwargs) Dot-product attention layer, a.k.a. Luong-style attention. Inputs are query tensor of shape [batch_size, Tq, dim], value tensor of shape [batch_size, Tv, dim] and key tensor of shape [batch_size, Tv, dim]. The calculation follows the steps: Calculate scores with shape [batch_size, Tq, Tv] as a query-key dot product: scores = tf.matmul(query, key, transpose_b=True).
Use scores to calculate a distribution with shape [batch_size, Tq, Tv]: distribution = tf.nn.softmax(scores). Use distribution to create a linear combination of value with shape [batch_size, Tq, dim]: return tf.matmul(distribution, value). Arguments use_scale: If True, will create a scalar variable to scale the attention scores. causal: Boolean. Set to True for decoder self-attention. Adds a mask such that position i cannot attend to positions j > i. This prevents the flow of information from the future towards the past. dropout: Float between 0 and 1. Fraction of the units to drop for the attention scores. Call # Arguments inputs: List of the following tensors: * query: Query Tensor of shape [batch_size, Tq, dim]. * value: Value Tensor of shape [batch_size, Tv, dim]. * key: Optional key Tensor of shape [batch_size, Tv, dim]. If not given, will use value for both key and value, which is the most common case. mask: List of the following tensors: * query_mask: A boolean mask Tensor of shape [batch_size, Tq]. If given, the output will be zero at the positions where mask==False. * value_mask: A boolean mask Tensor of shape [batch_size, Tv]. If given, will apply the mask such that values at positions where mask==False do not contribute to the result. return_attention_scores: bool, if True, returns the attention scores (after masking and softmax) as an additional output argument. training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (no dropout). Output: Attention outputs of shape [batch_size, Tq, dim]. [Optional] Attention scores after masking and softmax with shape [batch_size, Tq, Tv]. The meaning of query, value and key depends on the application. In the case of text similarity, for example, query is the sequence embeddings of the first piece of text and value is the sequence embeddings of the second piece of text. key is usually the same tensor as value. Here is a code example for using Attention in a CNN+Attention network: # Variable-length int sequences. query_input = tf.keras.Input(shape=(None,), dtype='int32') value_input = tf.keras.Input(shape=(None,), dtype='int32') # Embedding lookup. token_embedding = tf.keras.layers.Embedding(input_dim=1000, output_dim=64) # Query embeddings of shape [batch_size, Tq, dimension]. query_embeddings = token_embedding(query_input) # Value embeddings of shape [batch_size, Tv, dimension]. value_embeddings = token_embedding(value_input) # CNN layer. cnn_layer = tf.keras.layers.Conv1D( filters=100, kernel_size=4, # Use 'same' padding so outputs have the same shape as inputs. padding='same') # Query encoding of shape [batch_size, Tq, filters]. query_seq_encoding = cnn_layer(query_embeddings) # Value encoding of shape [batch_size, Tv, filters]. value_seq_encoding = cnn_layer(value_embeddings) # Query-value attention of shape [batch_size, Tq, filters]. query_value_attention_seq = tf.keras.layers.Attention()( [query_seq_encoding, value_seq_encoding]) # Reduce over the sequence axis to produce encodings of shape # [batch_size, filters]. query_encoding = tf.keras.layers.GlobalAveragePooling1D()( query_seq_encoding) query_value_attention = tf.keras.layers.GlobalAveragePooling1D()( query_value_attention_seq) # Concatenate query and document encodings to produce a DNN input layer. input_layer = tf.keras.layers.Concatenate()( [query_encoding, query_value_attention]) # Add DNN layers, and create Model. # ...
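The mask and return_attention_scores call arguments are not exercised by the example above, so here is a minimal, hedged sketch (not from the original reference; sequence lengths and dimensions are arbitrary assumptions) of requesting the attention scores:

# Query of shape [batch_size, Tq, dim] and value of shape [batch_size, Tv, dim].
query = tf.keras.Input(shape=(8, 16))
value = tf.keras.Input(shape=(4, 16))
attention_output, attention_scores = tf.keras.layers.Attention()(
    [query, value], return_attention_scores=True)
# attention_output has shape [batch_size, Tq, dim] = (None, 8, 16);
# attention_scores has shape [batch_size, Tq, Tv] = (None, 8, 4).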
Dropout layer Dropout class tf.keras.layers.Dropout(rate, noise_shape=None, seed=None, **kwargs) Applies Dropout to the input. The Dropout layer randomly sets input units to 0 with a frequency of rate at each step during training time, which helps prevent overfitting. Inputs not set to 0 are scaled up by 1/(1 - rate) such that the sum over all inputs is unchanged. Note that the Dropout layer only applies when training is set to True such that no values are dropped during inference. When using model.fit, training will be appropriately set to True automatically, and in other contexts, you can set the kwarg explicitly to True when calling the layer. (This is in contrast to setting trainable=False for a Dropout layer. trainable does not affect the layer's behavior, as Dropout does not have any variables/weights that can be frozen during training.) >>> tf.random.set_seed(0) >>> layer = tf.keras.layers.Dropout(.2, input_shape=(2,)) >>> data = np.arange(10).reshape(5, 2).astype(np.float32) >>> print(data) [[0. 1.] [2. 3.] [4. 5.] [6. 7.] [8. 9.]] >>> outputs = layer(data, training=True) >>> print(outputs) tf.Tensor( [[ 0. 1.25] [ 2.5 3.75] [ 5. 6.25] [ 7.5 8.75] [10. 0. ]], shape=(5, 2), dtype=float32) Arguments rate: Float between 0 and 1. Fraction of the input units to drop. noise_shape: 1D integer tensor representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features) and you want the dropout mask to be the same for all timesteps, you can use noise_shape=(batch_size, 1, features). seed: A Python integer to use as random seed. Call arguments inputs: Input tensor (of any rank). training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing).GaussianDropout layer GaussianDropout class tf.keras.layers.GaussianDropout(rate, **kwargs) Apply multiplicative 1-centered Gaussian noise. As it is a regularization layer, it is only active at training time. Arguments rate: Float, drop probability (as with Dropout). The multiplicative noise will have standard deviation sqrt(rate / (1 - rate)). Call arguments inputs: Input tensor (of any rank). training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing). Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape Same shape as input.SpatialDropout3D layer SpatialDropout3D class tf.keras.layers.SpatialDropout3D(rate, data_format=None, **kwargs) Spatial 3D version of Dropout. This version performs the same function as Dropout, however, it drops entire 3D feature maps instead of individual elements. If adjacent voxels within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout3D will help promote independence between feature maps and should be used instead. Arguments rate: Float between 0 and 1. Fraction of the input units to drop. data_format: 'channels_first' or 'channels_last'. In 'channels_first' mode, the channels dimension (the depth) is at index 1, in 'channels_last' mode is it at index 4. It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. 
If you never set it, then it will be "channels_last". Call arguments inputs: A 5D tensor. training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing). Input shape 5D tensor with shape: (samples, channels, dim1, dim2, dim3) if data_format='channels_first' or 5D tensor with shape: (samples, dim1, dim2, dim3, channels) if data_format='channels_last'. Output shape Same as input. References: - Efficient Object Localization Using Convolutional NetworksGaussianNoise layer GaussianNoise class tf.keras.layers.GaussianNoise(stddev, **kwargs) Apply additive zero-centered Gaussian noise. This is useful to mitigate overfitting (you could see it as a form of random data augmentation). Gaussian Noise (GS) is a natural choice as corruption process for real valued inputs. As it is a regularization layer, it is only active at training time. Arguments stddev: Float, standard deviation of the noise distribution. Call arguments inputs: Input tensor (of any rank). training: Python boolean indicating whether the layer should behave in training mode (adding noise) or in inference mode (doing nothing). Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape Same shape as input. SpatialDropout2D layer SpatialDropout2D class tf.keras.layers.SpatialDropout2D(rate, data_format=None, **kwargs) Spatial 2D version of Dropout. This version performs the same function as Dropout, however, it drops entire 2D feature maps instead of individual elements. If adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout2D will help promote independence between feature maps and should be used instead. Arguments rate: Float between 0 and 1. Fraction of the input units to drop. data_format: 'channels_first' or 'channels_last'. In 'channels_first' mode, the channels dimension (the depth) is at index 1, in 'channels_last' mode is it at index 3. It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Call arguments inputs: A 4D tensor. training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing). Input shape 4D tensor with shape: (samples, channels, rows, cols) if data_format='channels_first' or 4D tensor with shape: (samples, rows, cols, channels) if data_format='channels_last'. Output shape Same as input. References: - Efficient Object Localization Using Convolutional NetworksActivityRegularization layer ActivityRegularization class tf.keras.layers.ActivityRegularization(l1=0.0, l2=0.0, **kwargs) Layer that applies an update to the cost function based input activity. Arguments l1: L1 regularization factor (positive float). l2: L2 regularization factor (positive float). Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape Same shape as input. SpatialDropout1D layer SpatialDropout1D class tf.keras.layers.SpatialDropout1D(rate, **kwargs) Spatial 1D version of Dropout. 
This version performs the same function as Dropout, however, it drops entire 1D feature maps instead of individual elements. If adjacent frames within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout1D will help promote independence between feature maps and should be used instead. Arguments rate: Float between 0 and 1. Fraction of the input units to drop. Call arguments inputs: A 3D tensor. training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing). Input shape 3D tensor with shape: (samples, timesteps, channels) Output shape Same as input. References: - Efficient Object Localization Using Convolutional NetworksAlphaDropout layer AlphaDropout class tf.keras.layers.AlphaDropout(rate, noise_shape=None, seed=None, **kwargs) Applies Alpha Dropout to the input. Alpha Dropout is a Dropout that keeps mean and variance of inputs to their original values, in order to ensure the self-normalizing property even after this dropout. Alpha Dropout fits well to Scaled Exponential Linear Units by randomly setting activations to the negative saturation value. Arguments rate: float, drop probability (as with Dropout). The multiplicative noise will have standard deviation sqrt(rate / (1 - rate)). seed: A Python integer to use as random seed. Call arguments inputs: Input tensor (of any rank). training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (doing nothing). Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape Same shape as input. LayerNormalization layer LayerNormalization class tf.keras.layers.LayerNormalization( axis=-1, epsilon=0.001, center=True, scale=True, beta_initializer="zeros", gamma_initializer="ones", beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None, **kwargs ) Layer normalization layer (Ba et al., 2016). Normalize the activations of the previous layer for each given example in a batch independently, rather than across a batch like Batch Normalization. i.e. applies a transformation that maintains the mean activation within each example close to 0 and the activation standard deviation close to 1. Given a tensor inputs, moments are calculated and normalization is performed across the axes specified in axis. Example >>> data = tf.constant(np.arange(10).reshape(5, 2) * 10, dtype=tf.float32) >>> print(data) tf.Tensor( [[ 0. 10.] [20. 30.] [40. 50.] [60. 70.] [80. 90.]], shape=(5, 2), dtype=float32) >>> layer = tf.keras.layers.LayerNormalization(axis=1) >>> output = layer(data) >>> print(output) tf.Tensor( [[-1. 1.] [-1. 1.] [-1. 1.] [-1. 1.] [-1. 1.]], shape=(5, 2), dtype=float32) Notice that with Layer Normalization the normalization happens across the axes within each example, rather than across different examples in the batch. If scale or center are enabled, the layer will scale the normalized outputs by broadcasting them with a trainable variable gamma, and center the outputs by broadcasting with a trainable variable beta. gamma will default to a ones tensor and beta will default to a zeros tensor, so that centering and scaling are no-ops before training has begun. 
So, with scaling and centering enabled the normalization equations are as follows: Let the intermediate activations for a mini-batch to be the inputs. For each sample x_i in inputs with k features, we compute the mean and variance of the sample: mean_i = sum(x_i[j] for j in range(k)) / k var_i = sum((x_i[j] - mean_i) ** 2 for j in range(k)) / k and then compute a normalized x_i_normalized, including a small factor epsilon for numerical stability. x_i_normalized = (x_i - mean_i) / sqrt(var_i + epsilon) And finally x_i_normalized is linearly transformed by gamma and beta, which are learned parameters: output_i = x_i_normalized * gamma + beta gamma and beta will span the axes of inputs specified in axis, and this part of the inputs' shape must be fully defined. For example: >>> layer = tf.keras.layers.LayerNormalization(axis=[1, 2, 3]) >>> layer.build([5, 20, 30, 40]) >>> print(layer.beta.shape) (20, 30, 40) >>> print(layer.gamma.shape) (20, 30, 40) Note that other implementations of layer normalization may choose to define gamma and beta over a separate set of axes from the axes being normalized across. For example, Group Normalization (Wu et al. 2018) with group size of 1 corresponds to a Layer Normalization that normalizes across height, width, and channel and has gamma and beta span only the channel dimension. So, this Layer Normalization implementation will not match a Group Normalization layer with group size set to 1. Arguments axis: Integer or List/Tuple. The axis or axes to normalize across. Typically this is the features axis/axes. The left-out axes are typically the batch axis/axes. This argument defaults to -1, the last dimension in the input. epsilon: Small float added to variance to avoid dividing by zero. Defaults to 1e-3 center: If True, add offset of beta to normalized tensor. If False, beta is ignored. Defaults to True. scale: If True, multiply by gamma. If False, gamma is not used. Defaults to True. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer. beta_initializer: Initializer for the beta weight. Defaults to zeros. gamma_initializer: Initializer for the gamma weight. Defaults to ones. beta_regularizer: Optional regularizer for the beta weight. None by default. gamma_regularizer: Optional regularizer for the gamma weight. None by default. beta_constraint: Optional constraint for the beta weight. None by default. gamma_constraint: Optional constraint for the gamma weight. None by default. Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape Same shape as input. Reference Lei Ba et al., 2016. BatchNormalization layer BatchNormalization class tf.keras.layers.BatchNormalization( axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer="zeros", gamma_initializer="ones", moving_mean_initializer="zeros", moving_variance_initializer="ones", beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None, **kwargs ) Layer that normalizes its inputs. Batch normalization applies a transformation that maintains the mean output close to 0 and the output standard deviation close to 1. Importantly, batch normalization works differently during training and during inference. During training (i.e. 
when using fit() or when calling the layer/model with the argument training=True), the layer normalizes its output using the mean and standard deviation of the current batch of inputs. That is to say, for each channel being normalized, the layer returns gamma * (batch - mean(batch)) / sqrt(var(batch) + epsilon) + beta, where: epsilon is a small constant (configurable as part of the constructor arguments) gamma is a learned scaling factor (initialized as 1), which can be disabled by passing scale=False to the constructor. beta is a learned offset factor (initialized as 0), which can be disabled by passing center=False to the constructor. During inference (i.e. when using evaluate() or predict() or when calling the layer/model with the argument training=False, which is the default), the layer normalizes its output using a moving average of the mean and standard deviation of the batches it has seen during training. That is to say, it returns gamma * (batch - self.moving_mean) / sqrt(self.moving_var + epsilon) + beta. self.moving_mean and self.moving_var are non-trainable variables that are updated each time the layer is called in training mode, as such: moving_mean = moving_mean * momentum + mean(batch) * (1 - momentum) moving_var = moving_var * momentum + var(batch) * (1 - momentum) As such, the layer will only normalize its inputs during inference after having been trained on data that has similar statistics as the inference data. Arguments axis: Integer, the axis that should be normalized (typically the features axis). For instance, after a Conv2D layer with data_format="channels_first", set axis=1 in BatchNormalization. momentum: Momentum for the moving average. epsilon: Small float added to variance to avoid dividing by zero. center: If True, add offset of beta to normalized tensor. If False, beta is ignored. scale: If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer. beta_initializer: Initializer for the beta weight. gamma_initializer: Initializer for the gamma weight. moving_mean_initializer: Initializer for the moving mean. moving_variance_initializer: Initializer for the moving variance. beta_regularizer: Optional regularizer for the beta weight. gamma_regularizer: Optional regularizer for the gamma weight. beta_constraint: Optional constraint for the beta weight. gamma_constraint: Optional constraint for the gamma weight. Call arguments inputs: Input tensor (of any rank). training: Python boolean indicating whether the layer should behave in training mode or in inference mode. training=True: The layer will normalize its inputs using the mean and variance of the current batch of inputs. training=False: The layer will normalize its inputs using the mean and variance of its moving statistics, learned during training. Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape Same shape as input. Reference Ioffe and Szegedy, 2015. About setting layer.trainable = False on a BatchNormalization layer: The meaning of setting layer.trainable = False is to freeze the layer, i.e. its internal state will not change during training: its trainable weights will not be updated during fit() or train_on_batch(), and its state updates will not be run (see the short sketch below).
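As a minimal, hedged sketch (not part of the original reference; the surrounding model is an arbitrary assumption), this is what freezing a BatchNormalization layer looks like in practice:

inputs = tf.keras.Input(shape=(32,))
x = tf.keras.layers.Dense(16, activation="relu")(inputs)
bn = tf.keras.layers.BatchNormalization()
x = bn(x)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)

# Freeze the layer: gamma/beta stop updating, and (for BatchNormalization
# specifically) the layer now runs in inference mode, using its moving
# mean and moving variance, as explained in the paragraphs that follow.
bn.trainable = False
model.compile(optimizer="adam", loss="mse")  # recompile so the change takes effect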
Usually, this does not necessarily mean that the layer is run in inference mode (which is normally controlled by the training argument that can be passed when calling a layer). "Frozen state" and "inference mode" are two separate concepts. However, in the case of the BatchNormalization layer, setting trainable = False on the layer means that the layer will be subsequently run in inference mode (meaning that it will use the moving mean and the moving variance to normalize the current batch, rather than using the mean and variance of the current batch). This behavior was introduced in TensorFlow 2.0, in order to enable layer.trainable = False to produce the most commonly expected behavior in the convnet fine-tuning use case. Note that: - Setting trainable on a model containing other layers will recursively set the trainable value of all inner layers. - If the value of the trainable attribute is changed after calling compile() on a model, the new value doesn't take effect for this model until compile() is called again. RandomHeight layer RandomHeight class tf.keras.layers.experimental.preprocessing.RandomHeight( factor, interpolation="bilinear", seed=None, **kwargs ) Randomly vary the height of a batch of images during training. Adjusts the height of a batch of images by a random factor. The input should be a 4-D tensor in the "channels_last" image data format. By default, this layer is inactive during inference. Arguments factor: A positive float (fraction of original height), or a tuple of size 2 representing lower and upper bound for resizing vertically. When represented as a single float, this value is used for both the upper and lower bound. For instance, factor=(0.2, 0.3) results in an output with height changed by a random amount in the range [20%, 30%]. factor=(-0.2, 0.3) results in an output with height changed by a random amount in the range [-20%, +30%]. factor=0.2 results in an output with height changed by a random amount in the range [-20%, +20%]. interpolation: String, the interpolation method. Defaults to bilinear. Supports bilinear, nearest, bicubic, area, lanczos3, lanczos5, gaussian, mitchellcubic. seed: Integer. Used to create a random seed. Input shape 4D tensor with shape: (samples, height, width, channels) (data_format='channels_last'). Output shape 4D tensor with shape: (samples, random_height, width, channels). RandomRotation layer RandomRotation class tf.keras.layers.experimental.preprocessing.RandomRotation( factor, fill_mode="reflect", interpolation="bilinear", seed=None, fill_value=0.0, **kwargs ) Randomly rotate each image. By default, random rotations are only applied during training. At inference time, the layer does nothing. If you need to apply random rotations at inference time, set training to True when calling the layer. Input shape 4D tensor with shape: (samples, height, width, channels), data_format='channels_last'. Output shape 4D tensor with shape: (samples, height, width, channels), data_format='channels_last'. Attributes factor: a float represented as fraction of 2pi, or a tuple of size 2 representing lower and upper bound for rotating clockwise and counter-clockwise. A positive value means rotating counter-clockwise, while a negative value means rotating clockwise. When represented as a single float, this value is used for both the upper and lower bound. For instance, factor=(-0.2, 0.3) results in an output rotation by a random amount in the range [-20% * 2pi, 30% * 2pi].
factor=0.2 results in an output rotating by a random amount in the range [-20% * 2pi, 20% * 2pi]. fill_mode: Points outside the boundaries of the input are filled according to the given mode (one of {'constant', 'reflect', 'wrap', 'nearest'}). reflect: (d c b a | a b c d | d c b a) The input is extended by reflecting about the edge of the last pixel. constant: (k k k k | a b c d | k k k k) The input is extended by filling all values beyond the edge with the same constant value k = 0. wrap: (a b c d | a b c d | a b c d) The input is extended by wrapping around to the opposite edge. nearest: (a a a a | a b c d | d d d d) The input is extended by the nearest pixel. interpolation: Interpolation mode. Supported values: "nearest", "bilinear". seed: Integer. Used to create a random seed. fill_value: a float representing the value to be filled outside the boundaries when fill_mode is "constant". Raise: ValueError: if either bound is not between [0, 1], or upper bound is less than lower bound. RandomWidth layer RandomWidth class tf.keras.layers.experimental.preprocessing.RandomWidth( factor, interpolation="bilinear", seed=None, **kwargs ) Randomly vary the width of a batch of images during training. Adjusts the width of a batch of images by a random factor. The input should be a 4-D tensor in the "channels_last" image data format. By default, this layer is inactive during inference. Arguments factor: A positive float (fraction of original width), or a tuple of size 2 representing lower and upper bound for resizing horizontally. When represented as a single float, this value is used for both the upper and lower bound. For instance, factor=(0.2, 0.3) results in an output with width changed by a random amount in the range [20%, 30%]. factor=(-0.2, 0.3) results in an output with width changed by a random amount in the range [-20%, +30%]. factor=0.2 results in an output with width changed by a random amount in the range [-20%, +20%]. interpolation: String, the interpolation method. Defaults to bilinear. Supports bilinear, nearest, bicubic, area, lanczos3, lanczos5, gaussian, mitchellcubic. seed: Integer. Used to create a random seed. Input shape 4D tensor with shape: (samples, height, width, channels) (data_format='channels_last'). Output shape 4D tensor with shape: (samples, height, random_width, channels). CenterCrop layer CenterCrop class tf.keras.layers.experimental.preprocessing.CenterCrop( height, width, **kwargs ) Crop the central portion of the images to target height and width. Input shape 4D tensor with shape: (samples, height, width, channels), data_format='channels_last'. Output shape 4D tensor with shape: (samples, target_height, target_width, channels). If the input height/width is even and the target height/width is odd (or inversely), the input image is left-padded by 1 pixel. Arguments height: Integer, the height of the output shape. width: Integer, the width of the output shape. RandomCrop layer RandomCrop class tf.keras.layers.experimental.preprocessing.RandomCrop( height, width, seed=None, **kwargs ) Randomly crop the images to target height and width. This layer will crop all the images in the same batch to the same cropping location. By default, random cropping is only applied during training. At inference time, the images will be first rescaled to preserve the shorter side, and center cropped. If you need to apply random cropping at inference time, set training to True when calling the layer.
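A minimal sketch (not from the original reference; the image batch and target size are arbitrary assumptions) of the training-time cropping just described:

>>> images = tf.random.uniform((8, 224, 224, 3))
>>> layer = tf.keras.layers.experimental.preprocessing.RandomCrop(180, 180)
>>> layer(images, training=True).shape
TensorShape([8, 180, 180, 3])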
Input shape 4D tensor with shape: (samples, height, width, channels), data_format='channels_last'. Output shape 4D tensor with shape: (samples, target_height, target_width, channels). Arguments height: Integer, the height of the output shape. width: Integer, the width of the output shape. seed: Integer. Used to create a random seed. RandomFlip layer RandomFlip class tf.keras.layers.experimental.preprocessing.RandomFlip( mode="horizontal_and_vertical", seed=None, **kwargs ) Randomly flip each image horizontally and vertically. This layer will flip the images based on the mode attribute. During inference time, the output will be identical to input. Call the layer with training=True to flip the input. Input shape 4D tensor with shape: (samples, height, width, channels), data_format='channels_last'. Output shape 4D tensor with shape: (samples, height, width, channels), data_format='channels_last'. Attributes mode: String indicating which flip mode to use. Can be "horizontal", "vertical", or "horizontal_and_vertical". Defaults to "horizontal_and_vertical". "horizontal" is a left-right flip and "vertical" is a top-bottom flip. seed: Integer. Used to create a random seed.RandomZoom layer RandomZoom class tf.keras.layers.experimental.preprocessing.RandomZoom( height_factor, width_factor=None, fill_mode="reflect", interpolation="bilinear", seed=None, fill_value=0.0, **kwargs ) Randomly zoom each image during training. Arguments height_factor: a float represented as fraction of value, or a tuple of size 2 representing lower and upper bound for zooming vertically. When represented as a single float, this value is used for both the upper and lower bound. A positive value means zooming out, while a negative value means zooming in. For instance, height_factor=(0.2, 0.3) result in an output zoomed out by a random amount in the range [+20%, +30%]. height_factor=(-0.3, -0.2) result in an output zoomed in by a random amount in the range [+20%, +30%]. width_factor: a float represented as fraction of value, or a tuple of size 2 representing lower and upper bound for zooming horizontally. When represented as a single float, this value is used for both the upper and lower bound. For instance, width_factor=(0.2, 0.3) result in an output zooming out between 20% to 30%. width_factor=(-0.3, -0.2) result in an output zooming in between 20% to 30%. Defaults to None, i.e., zooming vertical and horizontal directions by preserving the aspect ratio. fill_mode: Points outside the boundaries of the input are filled according to the given mode (one of {'constant', 'reflect', 'wrap', 'nearest'}). reflect: (d c b a | a b c d | d c b a) The input is extended by reflecting about the edge of the last pixel. constant: (k k k k | a b c d | k k k k) The input is extended by filling all values beyond the edge with the same constant value k = 0. wrap: (a b c d | a b c d | a b c d) The input is extended by wrapping around to the opposite edge. nearest: (a a a a | a b c d | d d d d) The input is extended by the nearest pixel. interpolation: Interpolation mode. Supported values: "nearest", "bilinear". seed: Integer. Used to create a random seed. fill_value: a float represents the value to be filled outside the boundaries when fill_mode is "constant". 
Example >>> input_img = np.random.random((32, 224, 224, 3)) >>> layer = tf.keras.layers.experimental.preprocessing.RandomZoom(.5, .2) >>> out_img = layer(input_img) >>> out_img.shape TensorShape([32, 224, 224, 3]) Input shape 4D tensor with shape: (samples, height, width, channels), data_format='channels_last'. Output shape 4D tensor with shape: (samples, height, width, channels), data_format='channels_last'. Raise: ValueError: if lower bound is not between [0, 1], or upper bound is negative. Resizing layer Resizing class tf.keras.layers.experimental.preprocessing.Resizing( height, width, interpolation="bilinear", **kwargs ) Image resizing layer. Resize the batched image input to target height and width. The input should be a 4-D tensor in the format of NHWC. Arguments height: Integer, the height of the output shape. width: Integer, the width of the output shape. interpolation: String, the interpolation method. Defaults to bilinear. Supports bilinear, nearest, bicubic, area, lanczos3, lanczos5, gaussian, mitchellcubic. RandomTranslation layer RandomTranslation class tf.keras.layers.experimental.preprocessing.RandomTranslation( height_factor, width_factor, fill_mode="reflect", interpolation="bilinear", seed=None, fill_value=0.0, **kwargs ) Randomly translate each image during training. Arguments height_factor: a float represented as fraction of value, or a tuple of size 2 representing lower and upper bound for shifting vertically. A negative value means shifting image up, while a positive value means shifting image down. When represented as a single positive float, this value is used for both the upper and lower bound. For instance, height_factor=(-0.2, 0.3) results in an output shifted by a random amount in the range [-20%, +30%]. height_factor=0.2 results in an output shifted vertically by a random amount in the range [-20%, +20%]. width_factor: a float represented as fraction of value, or a tuple of size 2 representing lower and upper bound for shifting horizontally. A negative value means shifting image left, while a positive value means shifting image right. When represented as a single positive float, this value is used for both the upper and lower bound. For instance, width_factor=(-0.2, 0.3) results in an output shifted left by up to 20%, or shifted right by up to 30%. width_factor=0.2 results in an output shifted left or right by a random amount in the range [-20%, +20%]. fill_mode: Points outside the boundaries of the input are filled according to the given mode (one of {'constant', 'reflect', 'wrap', 'nearest'}). reflect: (d c b a | a b c d | d c b a) The input is extended by reflecting about the edge of the last pixel. constant: (k k k k | a b c d | k k k k) The input is extended by filling all values beyond the edge with the same constant value k = 0. wrap: (a b c d | a b c d | a b c d) The input is extended by wrapping around to the opposite edge. nearest: (a a a a | a b c d | d d d d) The input is extended by the nearest pixel. interpolation: Interpolation mode. Supported values: "nearest", "bilinear". seed: Integer. Used to create a random seed. fill_value: a float representing the value to be filled outside the boundaries when fill_mode is "constant". Input shape 4D tensor with shape: (samples, height, width, channels), data_format='channels_last'. Output shape 4D tensor with shape: (samples, height, width, channels), data_format='channels_last'.
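These preprocessing layers are typically chained together; here is a minimal, hedged sketch (not part of the original reference; the factors and image shape are arbitrary assumptions) composing several of the layers described above:

images = tf.random.uniform((8, 64, 64, 3))
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.RandomFlip("horizontal"),
    tf.keras.layers.experimental.preprocessing.RandomRotation(0.1),
    tf.keras.layers.experimental.preprocessing.RandomTranslation(0.1, 0.1),
    tf.keras.layers.experimental.preprocessing.RandomZoom(0.2),
])
# Augmentation is only active in training mode; at inference time the
# images pass through unchanged.
augmented = data_augmentation(images, training=True)  # shape (8, 64, 64, 3)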
Raise: ValueError: if either bound is not between [0, 1], or upper bound is less than lower bound.Rescaling layer Rescaling class tf.keras.layers.experimental.preprocessing.Rescaling( scale, offset=0.0, **kwargs ) Multiply inputs by scale and adds offset. For instance: To rescale an input in the [0, 255] range to be in the [0, 1] range, you would pass scale=1./255. To rescale an input in the [0, 255] range to be in the [-1, 1] range, you would pass scale=1./127.5, offset=-1. The rescaling is applied both during training and inference. Input shape Arbitrary. Output shape Same as input. Arguments scale: Float, the scale to apply to the inputs. offset: Float, the offset to apply to the inputs. Discretization layer Discretization class tf.keras.layers.experimental.preprocessing.Discretization( bin_boundaries=None, num_bins=None, epsilon=0.01, **kwargs ) Buckets data into discrete ranges. This layer will place each element of its input data into one of several contiguous ranges and output an integer index indicating which range each element was placed in. Input shape Any tf.Tensor or tf.RaggedTensor of dimension 2 or higher. Output shape Same as input shape. Attributes bin_boundaries: A list of bin boundaries. The leftmost and rightmost bins will always extend to -inf and inf, so bin_boundaries=[0., 1., 2.] generates bins (-inf, 0.), [0., 1.), [1., 2.), and [2., +inf). If this option is set, adapt should not be called. num_bins: The integer number of bins to compute. If this option is set, adapt should be called to learn the bin boundaries. epsilon: Error tolerance, typically a small fraction close to zero (e.g. 0.01). Higher values of epsilon increase the quantile approximation, and hence result in more unequal buckets, but could improve performance and resource consumption. Examples Bucketize float values based on provided buckets. >>> input = np.array([[-1.5, 1.0, 3.4, .5], [0.0, 3.0, 1.3, 0.0]]) >>> layer = tf.keras.layers.experimental.preprocessing.Discretization( ... bin_boundaries=[0., 1., 2.]) >>> layer(input) Bucketize float values based on a number of buckets to compute. >>> input = np.array([[-1.5, 1.0, 3.4, .5], [0.0, 3.0, 1.3, 0.0]]) >>> layer = tf.keras.layers.experimental.preprocessing.Discretization( ... num_bins=4, epsilon=0.01) >>> layer.adapt(input) >>> layer(input) CategoryEncoding layer CategoryEncoding class tf.keras.layers.experimental.preprocessing.CategoryEncoding( num_tokens=None, output_mode="binary", sparse=False, **kwargs ) Category encoding layer. This layer provides options for condensing data into a categorical encoding when the total number of tokens are known in advance. It accepts integer values as inputs and outputs a dense representation (one sample = 1-index tensor of float values representing data about the sample's tokens) of those inputs. For integer inputs where the total number of tokens is not known, see tf.keras.layers.experimental.preprocessing.IntegerLookup. Examples Multi-hot encoding data >>> layer = tf.keras.layers.experimental.preprocessing.CategoryEncoding( ... num_tokens=4, output_mode="binary") >>> layer([[0, 1], [0, 0], [1, 2], [3, 1]]) Using weighted inputs in count mode >>> layer = tf.keras.layers.experimental.preprocessing.CategoryEncoding( ... num_tokens=4, output_mode="count") >>> count_weights = np.array([[.1, .2], [.1, .1], [.2, .3], [.4, .2]]) >>> layer([[0, 1], [0, 0], [1, 2], [3, 1]], count_weights=count_weights) Arguments num_tokens: The total number of tokens the layer should support. 
All inputs to the layer must be integers in the range 0 <= value < num_tokens or an error will be thrown. output_mode: Specification for the output of the layer. Defaults to "binary". Values can be "binary" or "count", configuring the layer as follows: "binary": Outputs a single int array per batch, of num_tokens size, containing 1s in all elements where the token mapped to that index exists at least once in the batch item. "count": As "binary", but the int array contains a count of the number of times the token at that index appeared in the batch item. sparse: Boolean. If true, returns a SparseTensor instead of a dense Tensor. Defaults to False. Call arguments inputs: A 2D tensor (samples, timesteps). count_weights: A 2D tensor in the same shape as inputs indicating the weight for each sample value when summing up in count mode. Not used in binary mode. CategoryCrossing layer CategoryCrossing class tf.keras.layers.experimental.preprocessing.CategoryCrossing( depth=None, name=None, separator="_X_", **kwargs ) Category crossing layer. This layer concatenates multiple categorical inputs into a single categorical output (similar to Cartesian product). The output dtype is string. Usage: >>> inp_1 = ['a', 'b', 'c'] >>> inp_2 = ['d', 'e', 'f'] >>> layer = tf.keras.layers.experimental.preprocessing.CategoryCrossing() >>> layer([inp_1, inp_2]) >>> inp_1 = ['a', 'b', 'c'] >>> inp_2 = ['d', 'e', 'f'] >>> layer = tf.keras.layers.experimental.preprocessing.CategoryCrossing( ... separator='-') >>> layer([inp_1, inp_2]) Arguments depth: depth of input crossing. By default None, all inputs are crossed into one output. It can also be an int or tuple/list of ints. Passing an integer will create combinations of crossed outputs with depth up to that integer, i.e., [1, 2, ..., depth), and passing a tuple of integers will create crossed outputs with depth for the specified values in the tuple, i.e., depth=(N1, N2) will create all possible crossed outputs with depth equal to N1 or N2. Passing None means a single crossed output with all inputs. For example, with inputs a, b and c, depth=2 means the output will be [a; b; c; cross(a, b); cross(b, c); cross(c, a)]. separator: A string added between each input being joined. Defaults to '_X_'. name: Name to give to the layer. **kwargs: Keyword arguments to construct a layer. Input shape a list of string or int tensors or sparse tensors of shape [batch_size, d1, ..., dm] Output shape a single string or int tensor or sparse tensor of shape [batch_size, d1, ..., dm] Returns If any input is RaggedTensor, the output is RaggedTensor. Else, if any input is SparseTensor, the output is SparseTensor. Otherwise, the output is Tensor.
Example (depth=None) If the layer receives three inputs: a=[[1], [4]], b=[[2], [5]], c=[[3], [6]] the output will be a string tensor: [[b'1_X_2_X_3'], [b'4_X_5_X_6']] Example (depth is an integer) With the same input above, and if depth=2, the output will be a list of 6 string tensors: [[b'1'], [b'4']] [[b'2'], [b'5']] [[b'3'], [b'6']] [[b'1_X_2'], [b'4_X_5']], [[b'2_X_3'], [b'5_X_6']], [[b'3_X_1'], [b'6_X_4']] Example (depth is a tuple/list of integers) With the same input above, and if depth=(2, 3) the output will be a list of 4 string tensors: [[b'1_X_2'], [b'4_X_5']], [[b'2_X_3'], [b'5_X_6']], [[b'3_X_1'], [b'6_X_4']], [[b'1_X_2_X_3'], [b'4_X_5_X_6']] StringLookup layer StringLookup class tf.keras.layers.experimental.preprocessing.StringLookup( max_tokens=None, num_oov_indices=1, mask_token="", oov_token="[UNK]", vocabulary=None, encoding=None, invert=False, output_mode="int", sparse=False, pad_to_max_tokens=False, **kwargs ) Maps strings from a vocabulary to integer indices. This layer translates a set of arbitrary strings into an integer output via a table-based vocabulary lookup. The vocabulary for the layer can be supplied on construction or learned via adapt(). During adapt(), the layer will analyze a data set, determine the frequency of individual strings tokens, and create a vocabulary from them. If the vocabulary is capped in size, the most frequent tokens will be used to create the vocabulary and all others will be treated as out-of-vocabulary (OOV). There are two possible output modes for the layer. When output_mode is "int", input strings are converted to their index in the vocabulary (an integer). When output_mode is "binary", "count", or "tf-idf", input strings are encoded into an array where each dimension corresponds to an element in the vocabulary. The vocabulary can optionally contain a mask token as well as an OOV token (which can optionally occupy multiple indices in the vocabulary, as set by num_oov_indices). The position of these tokens in the vocabulary is fixed. When output_mode is "int", the vocabulary will begin with the mask token at index 0, followed by OOV indices, followed by the rest of the vocabulary. When output_mode is "binary", "count", or "tf-idf" the vocabulary will begin with OOV indices and instances of the mask token will be dropped. Arguments max_tokens: The maximum size of the vocabulary for this layer. If None, there is no cap on the size of the vocabulary. Note that this size includes the OOV and mask tokens. Default to None. num_oov_indices: The number of out-of-vocabulary tokens to use. If this value is more than 1, OOV inputs are hashed to determine their OOV value. If this value is 0, OOV inputs will map to -1 when output_mode is "int" and are dropped otherwise. Defaults to 1. mask_token: A token that represents masked inputs. When output_mode is "int", the token is included in vocabulary and mapped to index 0. In other output modes, the token will not appear in the vocabulary and instances of the mask token in the input will be dropped. If set to None, no mask term will be added. Defaults to "". oov_token: Only used when invert is True. The token to return for OOV indices. Defaults to "[UNK]". vocabulary: An optional list of tokens, or a path to a text file containing a vocabulary to load into this layer. The file should contain one token per line. If the list or file contains the same token multiple times, an error will be thrown. invert: Only valid when output_mode is "int". 
If True, this layer will map indices to vocabulary items instead of mapping vocabulary items to indices. Default to False. output_mode: Specification for the output of the layer. Defaults to "int". Values can be "int", "binary", "count", or "tf-idf" configuring the layer as follows: "int": Return the raw integer indices of the input tokens. "binary": Outputs a single int array per sample, of either vocab_size or max_tokens size, containing 1s in all elements where the token mapped to that index exists at least once in the sample. "count": Like "binary", but the int array contains a count of the number of times the token at that index appeared in the sample. "tf-idf": As "binary", but the TF-IDF algorithm is applied to find the value in each token slot. pad_to_max_tokens: Only applicable when output_mode is "binary", "count", or "tf-idf". If True, the output will have its feature axis padded to max_tokens even if the number of unique tokens in the vocabulary is less than max_tokens, resulting in a tensor of shape [batch_size, max_tokens] regardless of vocabulary size. Defaults to False. sparse: Boolean. Only applicable when output_mode is "binary", "count", or "tf-idf". If True, returns a SparseTensor instead of a dense Tensor. Defaults to False. Examples Creating a lookup layer with a known vocabulary This example creates a lookup layer with a pre-existing vocabulary. >>> vocab = ["a", "b", "c", "d"] >>> data = tf.constant([["a", "c", "d"], ["d", "z", "b"]]) >>> layer = StringLookup(vocabulary=vocab) >>> layer(data) Creating a lookup layer with an adapted vocabulary This example creates a lookup layer and generates the vocabulary by analyzing the dataset. >>> data = tf.constant([["a", "c", "d"], ["d", "z", "b"]]) >>> layer = StringLookup() >>> layer.adapt(data) >>> layer.get_vocabulary() ['', '[UNK]', 'd', 'z', 'c', 'b', 'a'] Note how the mask token '' and the OOV token [UNK] have been added to the vocabulary. The remaining tokens are sorted by frequency ('d', which has 2 occurrences, is first) then by inverse sort order. >>> data = tf.constant([["a", "c", "d"], ["d", "z", "b"]]) >>> layer = StringLookup() >>> layer.adapt(data) >>> layer(data) Lookups with multiple OOV indices This example demonstrates how to use a lookup layer with multiple OOV indices. When a layer is created with more than one OOV index, any OOV values are hashed into the number of OOV buckets, distributing OOV values in a deterministic fashion across the set. >>> vocab = ["a", "b", "c", "d"] >>> data = tf.constant([["a", "c", "d"], ["m", "z", "b"]]) >>> layer = StringLookup(vocabulary=vocab, num_oov_indices=2) >>> layer(data) Note that the output for OOV value 'm' is 1, while the output for OOV value 'z' is 2. The in-vocab terms have their output index increased by 1 from earlier examples (a maps to 3, etc) in order to make space for the extra OOV value. Multi-hot output Configure the layer with output_mode='binary'. Note that the first num_oov_indices dimensions in the binary encoding represent OOV values. >>> vocab = ["a", "b", "c", "d"] >>> data = tf.constant([["a", "c", "d", "d"], ["d", "z", "b", "z"]]) >>> layer = StringLookup(vocabulary=vocab, output_mode='binary') >>> layer(data) Token count output Configure the layer with output_mode='count'. As with binary output, the first num_oov_indices dimensions in the output represent OOV values. 
>>> vocab = ["a", "b", "c", "d"] >>> data = tf.constant([["a", "c", "d", "d"], ["d", "z", "b", "z"]]) >>> layer = StringLookup(vocabulary=vocab, output_mode='count') >>> layer(data) TF-IDF output Configure the layer with output_mode='tf-idf'. As with binary output, the first num_oov_indices dimensions in the output represent OOV values. Each token bin will output token_count * idf_weight, where the idf weights are the inverse document frequency weights per token. These should be provided along with the vocabulary. Note that the idf_weight for OOV values will default to the average of all idf weights passed in. >>> vocab = ["a", "b", "c", "d"] >>> idf_weights = [0.25, 0.75, 0.6, 0.4] >>> data = tf.constant([["a", "c", "d", "d"], ["d", "z", "b", "z"]]) >>> layer = StringLookup(output_mode='tf-idf') >>> layer.set_vocabulary(vocab, idf_weights=idf_weights) >>> layer(data) To specify the idf weights for oov values, you will need to pass the entire vocabularly including the leading oov token. >>> vocab = ["[UNK]", "a", "b", "c", "d"] >>> idf_weights = [0.9, 0.25, 0.75, 0.6, 0.4] >>> data = tf.constant([["a", "c", "d", "d"], ["d", "z", "b", "z"]]) >>> layer = StringLookup(output_mode='tf-idf') >>> layer.set_vocabulary(vocab, idf_weights=idf_weights) >>> layer(data) When adapting the layer in tf-idf mode, each input sample will be considered a document, and idf weight per token will be calculated as log(1 + num_documents / (1 + token_document_count)). Inverse lookup This example demonstrates how to map indices to strings using this layer. (You can also use adapt() with inverse=True, but for simplicity we'll pass the vocab in this example.) >>> vocab = ["a", "b", "c", "d"] >>> data = tf.constant([[2, 4, 5], [5, 1, 3]]) >>> layer = StringLookup(vocabulary=vocab, invert=True) >>> layer(data) Note that the first two indices correspond to the mask and oov token by default. This behavior can be disabled by setting mask_token=None and num_oov_indices=0. Forward and inverse lookup pairs This example demonstrates how to use the vocabulary of a standard lookup layer to create an inverse lookup layer. >>> vocab = ["a", "b", "c", "d"] >>> data = tf.constant([["a", "c", "d"], ["d", "z", "b"]]) >>> layer = StringLookup(vocabulary=vocab) >>> i_layer = StringLookup(vocabulary=vocab, invert=True) >>> int_data = layer(data) >>> i_layer(int_data) In this example, the input value 'z' resulted in an output of '[UNK]', since 1000 was not in the vocabulary - it got represented as an OOV, and all OOV values are returned as '[OOV}' in the inverse layer. Also, note that for the inverse to work, you must have already set the forward layer vocabulary either directly or via fit() before calling get_vocabulary(). IntegerLookup layer IntegerLookup class tf.keras.layers.experimental.preprocessing.IntegerLookup( max_tokens=None, num_oov_indices=1, mask_token=0, oov_token=-1, vocabulary=None, invert=False, output_mode="int", sparse=False, pad_to_max_tokens=False, **kwargs ) Reindex integer inputs to be in a contiguous range, via a dict lookup. This layer maps a set of arbitrary integer input tokens into indexed integer output via a table-based vocabulary lookup. The layer's output indices will be contiguously arranged up to the maximum vocab size, even if the input tokens are non-continguous or unbounded. The layer supports multiple options for encoding the output via output_mode, and has optional support for out-of-vocabulary (OOV) tokens and masking. 
The vocabulary for the layer can be supplied on construction or learned via adapt(). During adapt(), the layer will analyze a data set, determine the frequency of individual integer tokens, and create a vocabulary from them. If the vocabulary is capped in size, the most frequent tokens will be used to create the vocabulary and all others will be treated as OOV. There are two possible output modes for the layer. When output_mode is "int", input integers are converted to their index in the vocabulary (an integer). When output_mode is "binary", "count", or "tf-idf", input integers are encoded into an array where each dimension corresponds to an element in the vocabulary. The vocabulary can optionally contain a mask token as well as an OOV token (which can optionally occupy multiple indices in the vocabulary, as set by num_oov_indices). The position of these tokens in the vocabulary is fixed. When output_mode is "int", the vocabulary will begin with the mask token at index 0, followed by OOV indices, followed by the rest of the vocabulary. When output_mode is "binary", "count", or "tf-idf" the vocabulary will begin with OOV indices and instances of the mask token will be dropped. Arguments max_tokens: The maximum size of the vocabulary for this layer. If None, there is no cap on the size of the vocabulary. Note that this size includes the OOV and mask tokens. Default to None. num_oov_indices: The number of out-of-vocabulary tokens to use. If this value is more than 1, OOV inputs are modulated to determine their OOV value. If this value is 0, OOV inputs will map to -1 when output_mode is "int" and are dropped otherwise. Defaults to 1. mask_token: An integer token that represents masked inputs. When output_mode is "int", the token is included in vocabulary and mapped to index 0. In other output modes, the token will not appear in the vocabulary and instances of the mask token in the input will be dropped. If set to None, no mask term will be added. Defaults to 0. oov_token: Only used when invert is True. The token to return for OOV indices. Defaults to -1. vocabulary: An optional list of integer tokens, or a path to a text file containing a vocabulary to load into this layer. The file should contain one integer token per line. If the list or file contains the same token multiple times, an error will be thrown. invert: Only valid when output_mode is "int". If True, this layer will map indices to vocabulary items instead of mapping vocabulary items to indices. Default to False. output_mode: Specification for the output of the layer. Defaults to "int". Values can be "int", "binary", "count", or "tf-idf" configuring the layer as follows: "int": Return the vocabulary indices of the input tokens. "binary": Outputs a single int array per sample, of either vocabulary size or max_tokens size, containing 1s in all elements where the token mapped to that index exists at least once in the sample. "count": Like "binary", but the int array contains a count of the number of times the token at that index appeared in the sample. "tf-idf": As "binary", but the TF-IDF algorithm is applied to find the value in each token slot. pad_to_max_tokens: Only applicable when output_mode is "binary", "count", or "tf-idf". If True, the output will have its feature axis padded to max_tokens even if the number of unique tokens in the vocabulary is less than max_tokens, resulting in a tensor of shape [batch_size, max_tokens] regardless of vocabulary size. Defaults to False. sparse: Boolean. 
Only applicable when output_mode is "binary", "count", or "tf-idf". If True, returns a SparseTensor instead of a dense Tensor. Defaults to False. Examples Creating a lookup layer with a known vocabulary This example creates a lookup layer with a pre-existing vocabulary. >>> vocab = [12, 36, 1138, 42] >>> data = tf.constant([[12, 1138, 42], [42, 1000, 36]]) # Note OOV tokens >>> layer = IntegerLookup(vocabulary=vocab) >>> layer(data) Creating a lookup layer with an adapted vocabulary This example creates a lookup layer and generates the vocabulary by analyzing the dataset. >>> data = tf.constant([[12, 1138, 42], [42, 1000, 36]]) >>> layer = IntegerLookup() >>> layer.adapt(data) >>> layer.get_vocabulary() [0, -1, 42, 1138, 1000, 36, 12] Note how the mask token 0 and the OOV token -1 have been added to the vocabulary. The remaining tokens are sorted by frequency (1138, which has 2 occurrences, is first) then by inverse sort order. >>> data = tf.constant([[12, 1138, 42], [42, 1000, 36]]) >>> layer = IntegerLookup() >>> layer.adapt(data) >>> layer(data) Lookups with multiple OOV indices This example demonstrates how to use a lookup layer with multiple OOV indices. When a layer is created with more than one OOV index, any OOV tokens are hashed into the number of OOV buckets, distributing OOV tokens in a deterministic fashion across the set. >>> vocab = [12, 36, 1138, 42] >>> data = tf.constant([[12, 1138, 42], [37, 1000, 36]]) >>> layer = IntegerLookup(vocabulary=vocab, num_oov_indices=2) >>> layer(data) Note that the output for OOV token 37 is 2, while the output for OOV token 1000 is 1. The in-vocab terms have their output index increased by 1 from earlier examples (12 maps to 3, etc) in order to make space for the extra OOV token. Multi-hot output Configure the layer with output_mode='binary'. Note that the first num_oov_indices dimensions in the binary encoding represent OOV tokens >>> vocab = [12, 36, 1138, 42] >>> data = tf.constant([[12, 1138, 42, 42], [42, 7, 36, 7]]) # Note OOV tokens >>> layer = IntegerLookup(vocabulary=vocab, output_mode='binary') >>> layer(data) Token count output Configure the layer with output_mode='count'. As with binary output, the first num_oov_indices dimensions in the output represent OOV tokens. >>> vocab = [12, 36, 1138, 42] >>> data = tf.constant([[12, 1138, 42, 42], [42, 7, 36, 7]]) # Note OOV tokens >>> layer = IntegerLookup(vocabulary=vocab, output_mode='count') >>> layer(data) TF-IDF output Configure the layer with output_mode='tf-idf'. As with binary output, the first num_oov_indices dimensions in the output represent OOV tokens. Each token bin will output token_count * idf_weight, where the idf weights are the inverse document frequency weights per token. These should be provided along with the vocabulary. Note that the idf_weight for OOV tokens will default to the average of all idf weights passed in. >>> vocab = [12, 36, 1138, 42] >>> idf_weights = [0.25, 0.75, 0.6, 0.4] >>> data = tf.constant([[12, 1138, 42, 42], [42, 7, 36, 7]]) # Note OOV tokens >>> layer = IntegerLookup(output_mode='tf-idf') >>> layer.set_vocabulary(vocab, idf_weights=idf_weights) >>> layer(data) To specify the idf weights for oov tokens, you will need to pass the entire vocabularly including the leading oov token. 
>>> vocab = [-1, 12, 36, 1138, 42] >>> idf_weights = [0.9, 0.25, 0.75, 0.6, 0.4] >>> data = tf.constant([[12, 1138, 42, 42], [42, 7, 36, 7]]) # Note OOV tokens >>> layer = IntegerLookup(output_mode='tf-idf') >>> layer.set_vocabulary(vocab, idf_weights=idf_weights) >>> layer(data) When adapting the layer in tf-idf mode, each input sample will be considered a document, and the idf weight per token will be calculated as log(1 + num_documents / (1 + token_document_count)). Inverse lookup This example demonstrates how to map indices to tokens using this layer. (You can also use adapt() with invert=True, but for simplicity we'll pass the vocab in this example.) >>> vocab = [12, 36, 1138, 42] >>> data = tf.constant([[2, 4, 5], [5, 1, 3]]) >>> layer = IntegerLookup(vocabulary=vocab, invert=True) >>> layer(data) Note that the first two indices correspond to the mask and OOV token by default. This behavior can be disabled by setting mask_token=None and num_oov_indices=0. Forward and inverse lookup pairs This example demonstrates how to use the vocabulary of a standard lookup layer to create an inverse lookup layer. >>> vocab = [12, 36, 1138, 42] >>> data = tf.constant([[12, 1138, 42], [42, 1000, 36]]) >>> layer = IntegerLookup(vocabulary=vocab) >>> i_layer = IntegerLookup(vocabulary=layer.get_vocabulary(), invert=True) >>> int_data = layer(data) >>> i_layer(int_data) In this example, the input token 1000 resulted in an output of -1, since 1000 was not in the vocabulary - it got represented as an OOV, and all OOV tokens are returned as -1 in the inverse layer. Also, note that for the inverse to work, you must have already set the forward layer vocabulary either directly or via adapt() before calling get_vocabulary(). Hashing layer Hashing class tf.keras.layers.experimental.preprocessing.Hashing( num_bins, mask_value=None, salt=None, **kwargs ) Implements categorical feature hashing, also known as the "hashing trick". This layer transforms single or multiple categorical inputs to hashed output. It converts a sequence of int or string to a sequence of int. The stable hash function uses tensorflow::ops::Fingerprint to produce universal output that is consistent across platforms. This layer uses FarmHash64 by default, which provides a consistent hashed output across different platforms and is stable across invocations, regardless of device and context, by mixing the input bits thoroughly. If you want to obfuscate the hashed output, you can also pass a random salt argument in the constructor. In that case, the layer will use the SipHash64 hash function, with the salt value serving as additional input to the hash function. Example (FarmHash64): >>> layer = tf.keras.layers.experimental.preprocessing.Hashing(num_bins=3) >>> inp = [['A'], ['B'], ['C'], ['D'], ['E']] >>> layer(inp) Example (FarmHash64) with a mask value: >>> layer = tf.keras.layers.experimental.preprocessing.Hashing(num_bins=3, ... mask_value='') >>> inp = [['A'], ['B'], [''], ['C'], ['D']] >>> layer(inp) Example (SipHash64): >>> layer = tf.keras.layers.experimental.preprocessing.Hashing(num_bins=3, ... salt=[133, 137]) >>> inp = [['A'], ['B'], ['C'], ['D'], ['E']] >>> layer(inp) Example (SipHash64 with a single integer, same as salt=[133, 133]): >>> layer = tf.keras.layers.experimental.preprocessing.Hashing(num_bins=3, ... salt=133) >>> inp = [['A'], ['B'], ['C'], ['D'], ['E']] >>> layer(inp) Reference: SipHash with salt Arguments num_bins: Number of hash bins.
Note that this includes the mask_value bin, so the effective number of bins is (num_bins - 1) if mask_value is set. mask_value: A value that represents masked inputs, which are mapped to index 0. Defaults to None, meaning no mask term will be added and the hashing will start at index 0. salt: A single unsigned integer or None. If passed, the hash function used will be SipHash64, with these values used as an additional input (known as a "salt" in cryptography). These should be non-zero. Defaults to None (in that case, the FarmHash64 hash function is used). It also supports tuple/list of 2 unsigned integer numbers, see reference paper for details. **kwargs: Keyword arguments to construct a layer. Input shape A single or list of string, int32 or int64 Tensor, SparseTensor or RaggedTensor of shape [batch_size, ...,] Output shape An int64 Tensor, SparseTensor or RaggedTensor of shape [batch_size, ...]. If any input is RaggedTensor then output is RaggedTensor, otherwise if any input is SparseTensor then output is SparseTensor, otherwise the output is Tensor. TextVectorization layer TextVectorization class tf.keras.layers.experimental.preprocessing.TextVectorization( max_tokens=None, standardize="lower_and_strip_punctuation", split="whitespace", ngrams=None, output_mode="int", output_sequence_length=None, pad_to_max_tokens=False, vocabulary=None, **kwargs ) Text vectorization layer. This layer has basic options for managing text in a Keras model. It transforms a batch of strings (one sample = one string) into either a list of token indices (one sample = 1D tensor of integer token indices) or a dense representation (one sample = 1D tensor of float values representing data about the sample's tokens). If desired, the user can call this layer's adapt() method on a dataset. When this layer is adapted, it will analyze the dataset, determine the frequency of individual string values, and create a 'vocabulary' from them. This vocabulary can have unlimited size or be capped, depending on the configuration options for this layer; if there are more unique values in the input than the maximum vocabulary size, the most frequent terms will be used to create the vocabulary. The processing of each sample contains the following steps: 1. standardize each sample (usually lowercasing + punctuation stripping) 2. split each sample into substrings (usually words) 3. recombine substrings into tokens (usually ngrams) 4. index tokens (associate a unique int value with each token) 5. transform each sample using this index, either into a vector of ints or a dense float vector. Some notes on passing Callables to customize splitting and normalization for this layer: 1. Any callable can be passed to this Layer, but if you want to serialize this object you should only pass functions that are registered Keras serializables (see tf.keras.utils.register_keras_serializable for more details). 2. When using a custom callable for standardize, the data received by the callable will be exactly as passed to this layer. The callable should return a tensor of the same shape as the input. 3. When using a custom callable for split, the data received by the callable will have the 1st dimension squeezed out - instead of [["string to split"], ["another string to split"]], the Callable will see ["string to split", "another string to split"]. The callable should return a Tensor with the first dimension containing the split tokens - in this example, we should see something like [["string", "to", "split"], ["another", "string", "to", "split"]]. 
This makes the callable site natively compatible with tf.strings.split(). Arguments max_tokens: The maximum size of the vocabulary for this layer. If None, there is no cap on the size of the vocabulary. Note that this vocabulary contains 1 OOV token, so the effective number of tokens is (max_tokens - 1 - (1 if output == "int" else 0)). standardize: Optional specification for standardization to apply to the input text. Values can be None (no standardization), 'lower_and_strip_punctuation' (lowercase and remove punctuation) or a Callable. Default is 'lower_and_strip_punctuation'. split: Optional specification for splitting the input text. Values can be None (no splitting), 'whitespace' (split on ASCII whitespace), or a Callable. The default is 'whitespace'. ngrams: Optional specification for ngrams to create from the possibly-split input text. Values can be None, an integer or tuple of integers; passing an integer will create ngrams up to that integer, and passing a tuple of integers will create ngrams for the specified values in the tuple. Passing None means that no ngrams will be created. output_mode: Optional specification for the output of the layer. Values can be "int", "binary", "count" or "tf-idf", configuring the layer as follows: "int": Outputs integer indices, one integer index per split string token. When output == "int", 0 is reserved for masked locations; this reduces the vocab size to max_tokens-2 instead of max_tokens-1 "binary": Outputs a single int array per batch, of either vocab_size or max_tokens size, containing 1s in all elements where the token mapped to that index exists at least once in the batch item. "count": As "binary", but the int array contains a count of the number of times the token at that index appeared in the batch item. "tf-idf": As "binary", but the TF-IDF algorithm is applied to find the value in each token slot. output_sequence_length: Only valid in INT mode. If set, the output will have its time dimension padded or truncated to exactly output_sequence_length values, resulting in a tensor of shape [batch_size, output_sequence_length] regardless of how many tokens resulted from the splitting step. Defaults to None. pad_to_max_tokens: Only valid in "binary", "count", and "tf-idf" modes. If True, the output will have its feature axis padded to max_tokens even if the number of unique tokens in the vocabulary is less than max_tokens, resulting in a tensor of shape [batch_size, max_tokens] regardless of vocabulary size. Defaults to False. vocabulary: An optional list of vocabulary terms, or a path to a text file containing a vocabulary to load into this layer. The file should contain one token per line. If the list or file contains the same token multiple times, an error will be thrown. Example This example instantiates a TextVectorization layer that lowercases text, splits on whitespace, strips punctuation, and outputs integer vocab indices. >>> text_dataset = tf.data.Dataset.from_tensor_slices(["foo", "bar", "baz"]) >>> max_features = 5000 # Maximum vocab size. >>> max_len = 4 # Sequence length to pad the outputs to. >>> embedding_dims = 2 >>> >>> # Create the layer. >>> vectorize_layer = TextVectorization( ... max_tokens=max_features, ... output_mode='int', ... output_sequence_length=max_len) >>> >>> # Now that the vocab layer has been created, call `adapt` on the text-only >>> # dataset to create the vocabulary. You don't have to batch, but for large >>> # datasets this means we're not keeping spare copies of the dataset. 
>>> vectorize_layer.adapt(text_dataset.batch(64)) >>> >>> # Create the model that uses the vectorize text layer >>> model = tf.keras.models.Sequential() >>> >>> # Start by creating an explicit input layer. It needs to have a shape of >>> # (1,) (because we need to guarantee that there is exactly one string >>> # input per batch), and the dtype needs to be 'string'. >>> model.add(tf.keras.Input(shape=(1,), dtype=tf.string)) >>> >>> # The first layer in our model is the vectorization layer. After this >>> # layer, we have a tensor of shape (batch_size, max_len) containing vocab >>> # indices. >>> model.add(vectorize_layer) >>> >>> # Now, the model can map strings to integers, and you can add an embedding >>> # layer to map these integers to learned embeddings. >>> input_data = [["foo qux bar"], ["qux baz"]] >>> model.predict(input_data) array([[2, 1, 4, 0], [1, 3, 0, 0]]) Example This example instantiates a TextVectorization layer by passing a list of vocabulary terms to the layer's init method. vocab_data = ["earth", "wind", "and", "fire"] input_array = np.array([["earth", "wind", "and", "fire"], ["fire", "and", "earth", "michigan"]]) expected_output = [[2, 3, 4, 5], [5, 4, 2, 1]] # 'michigan' is OOV and maps to index 1 input_data = tf.keras.Input(shape=(None,), dtype=tf.string) layer = TextVectorization( max_tokens=None, standardize=None, split=None, output_mode="int", vocabulary=vocab_data) int_data = layer(input_data) model = tf.keras.Model(inputs=input_data, outputs=int_data) output_dataset = model.predict(input_array) >>> vocab_data = ["earth", "wind", "and", "fire"] >>> max_len = 4 # Sequence length to pad the outputs to. >>> >>> # Create the layer, passing the vocab directly. You can also pass the >>> # vocabulary arg a path to a file containing one vocabulary word per >>> # line. >>> vectorize_layer = TextVectorization( ... max_tokens=max_features, ... output_mode='int', ... output_sequence_length=max_len, ... vocabulary=vocab_data) >>> >>> # Because we've passed the vocabulary directly, we don't need to adapt >>> # the layer - the vocabulary is already set. The vocabulary contains the >>> # padding token ('') and OOV token ('[UNK]') as well as the passed tokens. >>> vectorize_layer.get_vocabulary() ['', '[UNK]', 'earth', 'wind', 'and', 'fire'] Normalization layer Normalization class tf.keras.layers.experimental.preprocessing.Normalization( axis=-1, mean=None, variance=None, **kwargs ) Feature-wise normalization of the data. This layer will coerce its inputs into a distribution centered around 0 with standard deviation 1. It accomplishes this by precomputing the mean and variance of the data, and calling (input - mean) / sqrt(var) at runtime. What happens in adapt: Compute the mean and variance of the data and store them as the layer's weights. adapt should be called before fit, evaluate, or predict. Arguments axis: Integer or tuple of integers, the axis or axes that should be "kept". These axes are not summed over when calculating the normalization statistics. By default the last axis, the features axis, is kept and any space or time axes are summed. Each element in the axes that are kept is normalized independently. If axis is set to None, the layer will perform scalar normalization (dividing the input by a single scalar value). The batch axis, 0, is always summed over (axis=0 is not allowed). mean: The mean value(s) to use during normalization. The passed value(s) will be broadcast to the shape of the kept axes above; if the value(s) cannot be broadcast, an error will be raised when this layer's build() method is called.
variance: The variance value(s) to use during normalization. The passed value(s) will be broadcast to the shape of the kept axes above; if the value(s)cannot be broadcast, an error will be raised when this layer's build() method is called. Examples Calculate the mean and variance by analyzing the dataset in adapt. >>> adapt_data = np.array([[1.], [2.], [3.], [4.], [5.]], dtype=np.float32) >>> input_data = np.array([[1.], [2.], [3.]], np.float32) >>> layer = Normalization() >>> layer.adapt(adapt_data) >>> layer(input_data) Pass the mean and variance directly. >>> input_data = np.array([[1.], [2.], [3.]], np.float32) >>> layer = Normalization(mean=3., variance=2.) >>> layer(input_data) GRU layer GRU class tf.keras.layers.GRU( units, activation="tanh", recurrent_activation="sigmoid", use_bias=True, kernel_initializer="glorot_uniform", recurrent_initializer="orthogonal", bias_initializer="zeros", kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, dropout=0.0, recurrent_dropout=0.0, return_sequences=False, return_state=False, go_backwards=False, stateful=False, unroll=False, time_major=False, reset_after=True, **kwargs ) Gated Recurrent Unit - Cho et al. 2014. See the Keras RNN API guide for details about the usage of RNN API. Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize the performance. If a GPU is available and all the arguments to the layer meet the requirement of the CuDNN kernel (see below for details), the layer will use a fast cuDNN implementation. The requirements to use the cuDNN implementation are: activation == tanh recurrent_activation == sigmoid recurrent_dropout == 0 unroll is False use_bias is True reset_after is True Inputs, if use masking, are strictly right-padded. Eager execution is enabled in the outermost context. There are two variants of the GRU implementation. The default one is based on v3 and has reset gate applied to hidden state before matrix multiplication. The other one is based on original and has the order reversed. The second variant is compatible with CuDNNGRU (GPU-only) and allows inference on CPU. Thus it has separate biases for kernel and recurrent_kernel. To use this variant, set 'reset_after'=True and recurrent_activation='sigmoid'. For example: >>> inputs = tf.random.normal([32, 10, 8]) >>> gru = tf.keras.layers.GRU(4) >>> output = gru(inputs) >>> print(output.shape) (32, 4) >>> gru = tf.keras.layers.GRU(4, return_sequences=True, return_state=True) >>> whole_sequence_output, final_state = gru(inputs) >>> print(whole_sequence_output.shape) (32, 10, 4) >>> print(final_state.shape) (32, 4) Arguments units: Positive integer, dimensionality of the output space. activation: Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. "linear" activation: a(x) = x). recurrent_activation: Activation function to use for the recurrent step. Default: sigmoid (sigmoid). If you pass None, no activation is applied (ie. "linear" activation: a(x) = x). use_bias: Boolean, (default True), whether the layer uses a bias vector. kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: glorot_uniform. recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. 
Default: orthogonal. bias_initializer: Initializer for the bias vector. Default: zeros. kernel_regularizer: Regularizer function applied to the kernel weights matrix. Default: None. recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix. Default: None. bias_regularizer: Regularizer function applied to the bias vector. Default: None. activity_regularizer: Regularizer function applied to the output of the layer (its "activation"). Default: None. kernel_constraint: Constraint function applied to the kernel weights matrix. Default: None. recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix. Default: None. bias_constraint: Constraint function applied to the bias vector. Default: None. dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0. recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0. return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: False. return_state: Boolean. Whether to return the last state in addition to the output. Default: False. go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence. stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences. time_major: The shape format of the inputs and outputs tensors. If True, the inputs and outputs will be in shape [timesteps, batch, feature], whereas in the False case, it will be [batch, timesteps, feature]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. reset_after: GRU convention (whether to apply reset gate after or before matrix multiplication). False = "before", True = "after" (default and CuDNN compatible). Call arguments inputs: A 3D tensor, with shape [batch, timesteps, feature]. mask: Binary tensor of shape [samples, timesteps] indicating whether a given timestep should be masked (optional, defaults to None). An individual True entry indicates that the corresponding timestep should be utilized, while a False entry indicates that the corresponding timestep should be ignored. training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used (optional, defaults to None). 
initial_state: List of initial state tensors to be passed to the first call of the cell (optional, defaults to None which causes creation of zero-filled initial state tensors).ConvLSTM2D layer ConvLSTM2D class tf.keras.layers.ConvLSTM2D( filters, kernel_size, strides=(1, 1), padding="valid", data_format=None, dilation_rate=(1, 1), activation="tanh", recurrent_activation="hard_sigmoid", use_bias=True, kernel_initializer="glorot_uniform", recurrent_initializer="orthogonal", bias_initializer="zeros", unit_forget_bias=True, kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, return_sequences=False, return_state=False, go_backwards=False, stateful=False, dropout=0.0, recurrent_dropout=0.0, **kwargs ) 2D Convolutional LSTM layer. A convolutional LSTM is similar to an LSTM, but the input transformations and recurrent transformations are both convolutional. This layer is typically used to process timeseries of images (i.e. video-like data). It is known to perform well for weather data forecasting, using inputs that are timeseries of 2D grids of sensor values. It isn't usually applied to regular video data, due to its high computational cost. Arguments filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). kernel_size: An integer or tuple/list of n integers, specifying the dimensions of the convolution window. strides: An integer or tuple/list of n integers, specifying the strides of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding: One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, time, ..., channels) while channels_first corresponds to inputs with shape (batch, time, channels, ...). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". dilation_rate: An integer or tuple/list of n integers, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1. activation: Activation function to use. By default hyperbolic tangent activation function is applied (tanh(x)). recurrent_activation: Activation function to use for the recurrent step. use_bias: Boolean, whether the layer uses a bias vector. kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs. recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. bias_initializer: Initializer for the bias vector. unit_forget_bias: Boolean. If True, add 1 to the bias of the forget gate at initialization. Use in combination with bias_initializer="zeros". This is recommended in Jozefowicz et al., 2015 kernel_regularizer: Regularizer function applied to the kernel weights matrix. recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix. bias_regularizer: Regularizer function applied to the bias vector. 
activity_regularizer: Regularizer function applied to the output of the layer (its "activation"). kernel_constraint: Constraint function applied to the kernel weights matrix. recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix. bias_constraint: Constraint function applied to the bias vector. return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence. (default False) return_state: Boolean. Whether to return the last state in addition to the output. (default False) go_backwards: Boolean (default False). If True, process the input sequence backwards. stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Call arguments inputs: A 5D float tensor (see input shape description below). mask: Binary tensor of shape (samples, timesteps) indicating whether a given timestep should be masked. training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout are set. initial_state: List of initial state tensors to be passed to the first call of the cell. Input shape If data_format='channels_first': 5D tensor with shape: (samples, time, channels, rows, cols) If data_format='channels_last': 5D tensor with shape: (samples, time, rows, cols, channels) Output shape If return_state: a list of tensors. The first tensor is the output. The remaining tensors are the last states, each a 4D tensor with shape: (samples, filters, new_rows, new_cols) if data_format='channels_first' or a 4D tensor with shape: (samples, new_rows, new_cols, filters) if data_format='channels_last'. rows and cols values might have changed due to padding. If return_sequences: 5D tensor with shape: (samples, timesteps, filters, new_rows, new_cols) if data_format='channels_first' or 5D tensor with shape: (samples, timesteps, new_rows, new_cols, filters) if data_format='channels_last'. Else, 4D tensor with shape: (samples, filters, new_rows, new_cols) if data_format='channels_first' or 4D tensor with shape: (samples, new_rows, new_cols, filters) if data_format='channels_last'. Raises ValueError: in case of invalid constructor arguments. References: - Shi et al., 2015 (the current implementation does not include the feedback loop on the cell's output). Example steps = 10 height = 32 width = 32 input_channels = 3 output_channels = 6 inputs = tf.keras.Input(shape=(steps, height, width, input_channels)) layer = tf.keras.layers.ConvLSTM2D(filters=output_channels, kernel_size=3) outputs = layer(inputs) TimeDistributed layer TimeDistributed class tf.keras.layers.TimeDistributed(layer, **kwargs) This wrapper allows you to apply a layer to every temporal slice of an input. Every input should be at least 3D, and the dimension of index one of the first input will be considered to be the temporal dimension. Consider a batch of 32 video samples, where each sample is a 128x128 RGB image with channels_last data format, across 10 timesteps. The batch input shape is (32, 10, 128, 128, 3).
You can then use TimeDistributed to apply the same Conv2D layer to each of the 10 timesteps, independently: >>> inputs = tf.keras.Input(shape=(10, 128, 128, 3)) >>> conv_2d_layer = tf.keras.layers.Conv2D(64, (3, 3)) >>> outputs = tf.keras.layers.TimeDistributed(conv_2d_layer)(inputs) >>> outputs.shape TensorShape([None, 10, 126, 126, 64]) Because TimeDistributed applies the same instance of Conv2D to each of the timesteps, the same set of weights is used at each timestep. Arguments layer: a tf.keras.layers.Layer instance. Call arguments inputs: Input tensor of shape (batch, time, ...) or nested tensors, each of which has shape (batch, time, ...). training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the wrapped layer (only if the layer supports this argument). mask: Binary tensor of shape (samples, timesteps) indicating whether a given timestep should be masked. This argument is passed to the wrapped layer (only if the layer supports this argument). Raises ValueError: If not initialized with a tf.keras.layers.Layer instance. Base RNN layer RNN class tf.keras.layers.RNN( cell, return_sequences=False, return_state=False, go_backwards=False, stateful=False, unroll=False, time_major=False, **kwargs ) Base class for recurrent layers. See the Keras RNN API guide for details about the usage of the RNN API. Arguments cell: A RNN cell instance or a list of RNN cell instances. A RNN cell is a class that has: A call(input_at_t, states_at_t) method, returning (output_at_t, states_at_t_plus_1). The call method of the cell can also take the optional argument constants, see section "Note on passing external constants" below. A state_size attribute. This can be a single integer (single state) in which case it is the size of the recurrent state. This can also be a list/tuple of integers (one size per state). The state_size can also be TensorShape or tuple/list of TensorShape, to represent high dimension state. An output_size attribute. This can be a single integer or a TensorShape, which represents the shape of the output. For backward compatibility, if this attribute is not available for the cell, the value will be inferred from the first element of the state_size. A get_initial_state(inputs=None, batch_size=None, dtype=None) method that creates a tensor meant to be fed to call() as the initial state, if the user didn't specify any initial state via other means. The returned initial state should have a shape of [batch_size, cell.state_size]. The cell might choose to create a tensor full of zeros, or full of other values based on the cell's implementation. inputs is the input tensor to the RNN layer, which should contain the batch size as its shape[0], and also dtype. Note that the shape[0] might be None during graph construction. Either the inputs or the pair of batch_size and dtype are provided. batch_size is a scalar tensor that represents the batch size of the inputs. dtype is a tf.DType that represents the dtype of the inputs. For backward compatibility, if this method is not implemented by the cell, the RNN layer will create a zero-filled tensor with the size of [batch_size, cell.state_size]. In the case that cell is a list of RNN cell instances, the cells will be stacked on top of each other in the RNN, resulting in an efficient stacked RNN. return_sequences: Boolean (default False). Whether to return the last output in the output sequence, or the full sequence. return_state: Boolean (default False).
Whether to return the last state in addition to the output. go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence. stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences. time_major: The shape format of the inputs and outputs tensors. If True, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. zero_output_for_mask: Boolean (default False). Whether the output should use zeros for the masked timesteps. Note that this field is only used when return_sequences is True and mask is provided. It can useful if you want to reuse the raw output sequence of the RNN without interference from the masked timesteps, eg, merging bidirectional RNNs. Call arguments inputs: Input tensor. mask: Binary tensor of shape [batch_size, timesteps] indicating whether a given timestep should be masked. An individual True entry indicates that the corresponding timestep should be utilized, while a False entry indicates that the corresponding timestep should be ignored. training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is for use with cells that use dropout. initial_state: List of initial state tensors to be passed to the first call of the cell. constants: List of constant tensors to be passed to the cell at each timestep. Input shape N-D tensor with shape [batch_size, timesteps, ...] or [timesteps, batch_size, ...] when time_major is True. Output shape If return_state: a list of tensors. The first tensor is the output. The remaining tensors are the last states, each with shape [batch_size, state_size], where state_size could be a high dimension tensor shape. If return_sequences: N-D tensor with shape [batch_size, timesteps, output_size], where output_size could be a high dimension tensor shape, or [timesteps, batch_size, output_size] when time_major is True. Else, N-D tensor with shape [batch_size, output_size], where output_size could be a high dimension tensor shape. Masking: This layer supports masking for input data with a variable number of timesteps. To introduce masks to your data, use an [tf.keras.layers.Embedding] layer with the mask_zero parameter set to True. Note on using statefulness in RNNs: You can set RNN layers to be 'stateful', which means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. This assumes a one-to-one mapping between samples in different successive batches. To enable statefulness: - Specify stateful=True in the layer constructor. - Specify a fixed batch size for your model, by passing If sequential model: batch_input_shape=(...) to the first layer in your model. Else for functional model with 1 or more Input layers: batch_shape=(...) to all the first layers in your model. 
This is the expected shape of your inputs including the batch size. It should be a tuple of integers, e.g. (32, 10, 100). - Specify shuffle=False when calling fit(). To reset the states of your model, call .reset_states() on either a specific layer, or on your entire model. Note on specifying the initial state of RNNs: You can specify the initial state of RNN layers symbolically by calling them with the keyword argument initial_state. The value of initial_state should be a tensor or list of tensors representing the initial state of the RNN layer. You can specify the initial state of RNN layers numerically by calling reset_states with the keyword argument states. The value of states should be a numpy array or list of numpy arrays representing the initial state of the RNN layer. Note on passing external constants to RNNs: You can pass "external" constants to the cell using the constants keyword argument of RNN.__call__ (as well as RNN.call) method. This requires that the cell.call method accepts the same keyword argument constants. Such constants can be used to condition the cell transformation on additional static inputs (not changing over time), a.k.a. an attention mechanism. Examples # First, let's define a RNN Cell, as a layer subclass. class MinimalRNNCell(keras.layers.Layer): def __init__(self, units, **kwargs): self.units = units self.state_size = units super(MinimalRNNCell, self).__init__(**kwargs) def build(self, input_shape): self.kernel = self.add_weight(shape=(input_shape[-1], self.units), initializer='uniform', name='kernel') self.recurrent_kernel = self.add_weight( shape=(self.units, self.units), initializer='uniform', name='recurrent_kernel') self.built = True def call(self, inputs, states): prev_output = states[0] h = backend.dot(inputs, self.kernel) output = h + backend.dot(prev_output, self.recurrent_kernel) return output, [output] # Let's use this cell in a RNN layer: cell = MinimalRNNCell(32) x = keras.Input((None, 5)) layer = RNN(cell) y = layer(x) # Here's how to use the cell to build a stacked RNN: cells = [MinimalRNNCell(32), MinimalRNNCell(64)] x = keras.Input((None, 5)) layer = RNN(cells) y = layer(x) SimpleRNN layer SimpleRNN class tf.keras.layers.SimpleRNN( units, activation="tanh", use_bias=True, kernel_initializer="glorot_uniform", recurrent_initializer="orthogonal", bias_initializer="zeros", kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, dropout=0.0, recurrent_dropout=0.0, return_sequences=False, return_state=False, go_backwards=False, stateful=False, unroll=False, **kwargs ) Fully-connected RNN where the output is to be fed back to input. See the Keras RNN API guide for details about the usage of RNN API. Arguments units: Positive integer, dimensionality of the output space. activation: Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. "linear" activation: a(x) = x). use_bias: Boolean, (default True), whether the layer uses a bias vector. kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: glorot_uniform. recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: orthogonal. bias_initializer: Initializer for the bias vector. Default: zeros. kernel_regularizer: Regularizer function applied to the kernel weights matrix. 
Default: None. recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix. Default: None. bias_regularizer: Regularizer function applied to the bias vector. Default: None. activity_regularizer: Regularizer function applied to the output of the layer (its "activation"). Default: None. kernel_constraint: Constraint function applied to the kernel weights matrix. Default: None. recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix. Default: None. bias_constraint: Constraint function applied to the bias vector. Default: None. dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0. recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0. return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: False. return_state: Boolean. Whether to return the last state in addition to the output. Default: False go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence. stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences. Call arguments inputs: A 3D tensor, with shape [batch, timesteps, feature]. mask: Binary tensor of shape [batch, timesteps] indicating whether a given timestep should be masked. An individual True entry indicates that the corresponding timestep should be utilized, while a False entry indicates that the corresponding timestep should be ignored. training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used. initial_state: List of initial state tensors to be passed to the first call of the cell. Examples inputs = np.random.random([32, 10, 8]).astype(np.float32) simple_rnn = tf.keras.layers.SimpleRNN(4) output = simple_rnn(inputs) # The output has shape `[32, 4]`. simple_rnn = tf.keras.layers.SimpleRNN( 4, return_sequences=True, return_state=True) # whole_sequence_output has shape `[32, 10, 4]`. # final_state has shape `[32, 4]`. whole_sequence_output, final_state = simple_rnn(inputs) LSTM layer LSTM class tf.keras.layers.LSTM( units, activation="tanh", recurrent_activation="sigmoid", use_bias=True, kernel_initializer="glorot_uniform", recurrent_initializer="orthogonal", bias_initializer="zeros", unit_forget_bias=True, kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, dropout=0.0, recurrent_dropout=0.0, return_sequences=False, return_state=False, go_backwards=False, stateful=False, time_major=False, unroll=False, **kwargs ) Long Short-Term Memory layer - Hochreiter 1997. See the Keras RNN API guide for details about the usage of RNN API. Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize the performance. 
If a GPU is available and all the arguments to the layer meet the requirement of the CuDNN kernel (see below for details), the layer will use a fast cuDNN implementation. The requirements to use the cuDNN implementation are: activation == tanh recurrent_activation == sigmoid recurrent_dropout == 0 unroll is False use_bias is True Inputs, if use masking, are strictly right-padded. Eager execution is enabled in the outermost context. For example: >>> inputs = tf.random.normal([32, 10, 8]) >>> lstm = tf.keras.layers.LSTM(4) >>> output = lstm(inputs) >>> print(output.shape) (32, 4) >>> lstm = tf.keras.layers.LSTM(4, return_sequences=True, return_state=True) >>> whole_seq_output, final_memory_state, final_carry_state = lstm(inputs) >>> print(whole_seq_output.shape) (32, 10, 4) >>> print(final_memory_state.shape) (32, 4) >>> print(final_carry_state.shape) (32, 4) Arguments units: Positive integer, dimensionality of the output space. activation: Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. "linear" activation: a(x) = x). recurrent_activation: Activation function to use for the recurrent step. Default: sigmoid (sigmoid). If you pass None, no activation is applied (ie. "linear" activation: a(x) = x). use_bias: Boolean (default True), whether the layer uses a bias vector. kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: glorot_uniform. recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: orthogonal. bias_initializer: Initializer for the bias vector. Default: zeros. unit_forget_bias: Boolean (default True). If True, add 1 to the bias of the forget gate at initialization. Setting it to true will also force bias_initializer="zeros". This is recommended in Jozefowicz et al.. kernel_regularizer: Regularizer function applied to the kernel weights matrix. Default: None. recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix. Default: None. bias_regularizer: Regularizer function applied to the bias vector. Default: None. activity_regularizer: Regularizer function applied to the output of the layer (its "activation"). Default: None. kernel_constraint: Constraint function applied to the kernel weights matrix. Default: None. recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix. Default: None. bias_constraint: Constraint function applied to the bias vector. Default: None. dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0. recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0. return_sequences: Boolean. Whether to return the last output. in the output sequence, or the full sequence. Default: False. return_state: Boolean. Whether to return the last state in addition to the output. Default: False. go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence. stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. time_major: The shape format of the inputs and outputs tensors. 
If True, the inputs and outputs will be in shape [timesteps, batch, feature], whereas in the False case, it will be [batch, timesteps, feature]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences. Call arguments inputs: A 3D tensor with shape [batch, timesteps, feature]. mask: Binary tensor of shape [batch, timesteps] indicating whether a given timestep should be masked (optional, defaults to None). An individual True entry indicates that the corresponding timestep should be utilized, while a False entry indicates that the corresponding timestep should be ignored. training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used (optional, defaults to None). initial_state: List of initial state tensors to be passed to the first call of the cell (optional, defaults to None which causes creation of zero-filled initial state tensors).Bidirectional layer Bidirectional class tf.keras.layers.Bidirectional( layer, merge_mode="concat", weights=None, backward_layer=None, **kwargs ) Bidirectional wrapper for RNNs. Arguments layer: keras.layers.RNN instance, such as keras.layers.LSTM or keras.layers.GRU. It could also be a keras.layers.Layer instance that meets the following criteria: Be a sequence-processing layer (accepts 3D+ inputs). Have a go_backwards, return_sequences and return_state attribute (with the same semantics as for the RNN class). Have an input_spec attribute. Implement serialization via get_config() and from_config(). Note that the recommended way to create new RNN layers is to write a custom RNN cell and use it with keras.layers.RNN, instead of subclassing keras.layers.Layer directly. merge_mode: Mode by which outputs of the forward and backward RNNs will be combined. One of {'sum', 'mul', 'concat', 'ave', None}. If None, the outputs will not be combined, they will be returned as a list. Default value is 'concat'. backward_layer: Optional keras.layers.RNN, or keras.layers.Layer instance to be used to handle backwards input processing. If backward_layer is not provided, the layer instance passed as the layer argument will be used to generate the backward layer automatically. Note that the provided backward_layer layer should have properties matching those of the layer argument, in particular it should have the same values for stateful, return_states, return_sequences, etc. In addition, backward_layer and layer should have different go_backwards argument values. A ValueError will be raised if these requirements are not met. Call arguments The call arguments for this layer are the same as those of the wrapped RNN layer. Beware that when passing the initial_state argument during the call of this layer, the first half in the list of elements in the initial_state list will be passed to the forward RNN call and the last half in the list of elements will be passed to the backward RNN call. Raises ValueError: If layer or backward_layer is not a Layer instance. In case of invalid merge_mode argument. 
If backward_layer has mismatched properties compared to layer. Examples model = Sequential() model.add(Bidirectional(LSTM(10, return_sequences=True), input_shape=(5, 10))) model.add(Bidirectional(LSTM(10))) model.add(Dense(5)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='rmsprop') # With custom backward layer model = Sequential() forward_layer = LSTM(10, return_sequences=True) backward_layer = LSTM(10, activation='relu', return_sequences=True, go_backwards=True) model.add(Bidirectional(forward_layer, backward_layer=backward_layer, input_shape=(5, 10))) model.add(Dense(5)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='rmsprop') AveragePooling1D layer AveragePooling1D class tf.keras.layers.AveragePooling1D( pool_size=2, strides=None, padding="valid", data_format="channels_last", **kwargs ) Average pooling for temporal data. Downsamples the input representation by taking the average value over the window defined by pool_size. The window is shifted by strides. The resulting output when using "valid" padding option has a shape of: output_shape = (input_shape - pool_size + 1) / strides) The resulting output shape when using the "same" padding option is: output_shape = input_shape / strides For example, for strides=1 and padding="valid": >>> x = tf.constant([1., 2., 3., 4., 5.]) >>> x = tf.reshape(x, [1, 5, 1]) >>> x >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2, ... strides=1, padding='valid') >>> avg_pool_1d(x) For example, for strides=2 and padding="valid": >>> x = tf.constant([1., 2., 3., 4., 5.]) >>> x = tf.reshape(x, [1, 5, 1]) >>> x >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2, ... strides=2, padding='valid') >>> avg_pool_1d(x) For example, for strides=1 and padding="same": >>> x = tf.constant([1., 2., 3., 4., 5.]) >>> x = tf.reshape(x, [1, 5, 1]) >>> x >>> avg_pool_1d = tf.keras.layers.AveragePooling1D(pool_size=2, ... strides=1, padding='same') >>> avg_pool_1d(x) Arguments pool_size: Integer, size of the average pooling windows. strides: Integer, or None. Factor by which to downscale. E.g. 2 will halve the input. If None, it will default to pool_size. padding: One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, steps, features) while channels_first corresponds to inputs with shape (batch, features, steps). Input shape If data_format='channels_last': 3D tensor with shape (batch_size, steps, features). If data_format='channels_first': 3D tensor with shape (batch_size, features, steps). Output shape If data_format='channels_last': 3D tensor with shape (batch_size, downsampled_steps, features). If data_format='channels_first': 3D tensor with shape (batch_size, features, downsampled_steps). GlobalMaxPooling1D layer GlobalMaxPooling1D class tf.keras.layers.GlobalMaxPooling1D(data_format="channels_last", **kwargs) Global max pooling operation for 1D temporal data. Downsamples the input representation by taking the maximum value over the time dimension. 
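The GlobalMaxPooling1D example continues just below. First, since the LSTM and Bidirectional sections above describe exactly the pieces a DeepSpeech2-style acoustic model stacks on top of its convolutional front end, here is a minimal, hedged sketch of such a recurrent stack; the feature dimension, unit count, and depth are illustrative assumptions, not the exact configuration used elsewhere in this guide.

import tensorflow as tf
from tensorflow.keras import layers

# Illustrative sizes only (assumptions, not this guide's exact settings).
num_features = 193   # assumed spectrogram feature dimension
rnn_units = 128      # assumed units per direction
num_rnn_layers = 2   # assumed depth of the recurrent stack

inputs = tf.keras.Input(shape=(None, num_features), name="spectrogram")
x = inputs
for i in range(num_rnn_layers):
    # return_sequences=True keeps one output per timestep, which CTC needs.
    # With the default tanh/sigmoid activations and recurrent_dropout=0,
    # this LSTM is eligible for the fast cuDNN kernel described above.
    x = layers.Bidirectional(
        layers.LSTM(rnn_units, return_sequences=True),
        merge_mode="concat",
        name=f"bidirectional_lstm_{i + 1}",
    )(x)
model = tf.keras.Model(inputs, x, name="bidirectional_lstm_stack")
model.summary()

With merge_mode="concat", each wrapped layer doubles the feature dimension to 2 * rnn_units; a Dense projection over the character vocabulary would normally follow in a complete CTC model.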
For example: >>> x = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]]) >>> x = tf.reshape(x, [3, 3, 1]) >>> x >>> max_pool_1d = tf.keras.layers.GlobalMaxPooling1D() >>> max_pool_1d(x) Arguments data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, steps, features) while channels_first corresponds to inputs with shape (batch, features, steps). Input shape If data_format='channels_last': 3D tensor with shape: (batch_size, steps, features) If data_format='channels_first': 3D tensor with shape: (batch_size, features, steps) Output shape 2D tensor with shape (batch_size, features). GlobalAveragePooling1D layer GlobalAveragePooling1D class tf.keras.layers.GlobalAveragePooling1D(data_format="channels_last", **kwargs) Global average pooling operation for temporal data. Examples >>> input_shape = (2, 3, 4) >>> x = tf.random.normal(input_shape) >>> y = tf.keras.layers.GlobalAveragePooling1D()(x) >>> print(y.shape) (2, 4) Arguments data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, steps, features) while channels_first corresponds to inputs with shape (batch, features, steps). Call arguments inputs: A 3D tensor. mask: Binary tensor of shape (batch_size, steps) indicating whether a given step should be masked (excluded from the average). Input shape If data_format='channels_last': 3D tensor with shape: (batch_size, steps, features) If data_format='channels_first': 3D tensor with shape: (batch_size, features, steps) Output shape 2D tensor with shape (batch_size, features).MaxPooling1D layer MaxPooling1D class tf.keras.layers.MaxPooling1D( pool_size=2, strides=None, padding="valid", data_format="channels_last", **kwargs ) Max pooling operation for 1D temporal data. Downsamples the input representation by taking the maximum value over a spatial window of size pool_size. The window is shifted by strides. The resulting output, when using the "valid" padding option, has a shape of: output_shape = (input_shape - pool_size + 1) / strides) The resulting output shape when using the "same" padding option is: output_shape = input_shape / strides For example, for strides=1 and padding="valid": >>> x = tf.constant([1., 2., 3., 4., 5.]) >>> x = tf.reshape(x, [1, 5, 1]) >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2, ... strides=1, padding='valid') >>> max_pool_1d(x) For example, for strides=2 and padding="valid": >>> x = tf.constant([1., 2., 3., 4., 5.]) >>> x = tf.reshape(x, [1, 5, 1]) >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2, ... strides=2, padding='valid') >>> max_pool_1d(x) For example, for strides=1 and padding="same": >>> x = tf.constant([1., 2., 3., 4., 5.]) >>> x = tf.reshape(x, [1, 5, 1]) >>> max_pool_1d = tf.keras.layers.MaxPooling1D(pool_size=2, ... strides=1, padding='same') >>> max_pool_1d(x) Arguments pool_size: Integer, size of the max pooling window. strides: Integer, or None. Specifies how much the pooling window moves for each pooling step. If None, it will default to pool_size. padding: One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. 
channels_last corresponds to inputs with shape (batch, steps, features) while channels_first corresponds to inputs with shape (batch, features, steps). Input shape If data_format='channels_last': 3D tensor with shape (batch_size, steps, features). If data_format='channels_first': 3D tensor with shape (batch_size, features, steps). Output shape If data_format='channels_last': 3D tensor with shape (batch_size, downsampled_steps, features). If data_format='channels_first': 3D tensor with shape (batch_size, features, downsampled_steps).AveragePooling3D layer AveragePooling3D class tf.keras.layers.AveragePooling3D( pool_size=(2, 2, 2), strides=None, padding="valid", data_format=None, **kwargs ) Average pooling operation for 3D data (spatial or spatio-temporal). Downsamples the input along its spatial dimensions (depth, height, and width) by taking the average value over an input window (of size defined by pool_size) for each channel of the input. The window is shifted by strides along each dimension. Arguments pool_size: tuple of 3 integers, factors by which to downscale (dim1, dim2, dim3). (2, 2, 2) will halve the size of the 3D input in each dimension. strides: tuple of 3 integers, or None. Strides values. padding: One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels) If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3) Output shape If data_format='channels_last': 5D tensor with shape: (batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels) If data_format='channels_first': 5D tensor with shape: (batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3) Example depth = 30 height = 30 width = 30 input_channels = 3 inputs = tf.keras.Input(shape=(depth, height, width, input_channels)) layer = tf.keras.layers.AveragePooling3D(pool_size=3) outputs = layer(inputs) # Shape: (batch_size, 10, 10, 10, 3)MaxPooling2D layer MaxPooling2D class tf.keras.layers.MaxPooling2D( pool_size=(2, 2), strides=None, padding="valid", data_format=None, **kwargs ) Max pooling operation for 2D spatial data. Downsamples the input along its spatial dimensions (height and width) by taking the maximum value over an input window (of size defined by pool_size) for each channel of the input. The window is shifted by strides along each dimension. The resulting output, when using the "valid" padding option, has a spatial shape (number of rows or columns) of: output_shape = math.floor((input_shape - pool_size) / strides) + 1 (when input_shape >= pool_size) The resulting output shape when using the "same" padding option is: output_shape = math.floor((input_shape - 1) / strides) + 1 For example, for strides=(1, 1) and padding="valid": >>> x = tf.constant([[1., 2., 3.], ... [4., 5., 6.], ... 
[7., 8., 9.]]) >>> x = tf.reshape(x, [1, 3, 3, 1]) >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), ... strides=(1, 1), padding='valid') >>> max_pool_2d(x) For example, for strides=(2, 2) and padding="valid": >>> x = tf.constant([[1., 2., 3., 4.], ... [5., 6., 7., 8.], ... [9., 10., 11., 12.]]) >>> x = tf.reshape(x, [1, 3, 4, 1]) >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), ... strides=(2, 2), padding='valid') >>> max_pool_2d(x) Usage # Example >>> input_image = tf.constant([[[[1.], [1.], [2.], [4.]], ... [[2.], [2.], [3.], [2.]], ... [[4.], [1.], [1.], [1.]], ... [[2.], [2.], [1.], [4.]]]]) >>> output = tf.constant([[[[1], [0]], ... [[0], [1]]]]) >>> model = tf.keras.models.Sequential() >>> model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), ... input_shape=(4, 4, 1))) >>> model.compile('adam', 'mean_squared_error') >>> model.predict(input_image, steps=1) array([[[[2.], [4.]], [[4.], [4.]]]], dtype=float32) For example, for stride=(1, 1) and padding="same": >>> x = tf.constant([[1., 2., 3.], ... [4., 5., 6.], ... [7., 8., 9.]]) >>> x = tf.reshape(x, [1, 3, 3, 1]) >>> max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), ... strides=(1, 1), padding='same') >>> max_pool_2d(x) Arguments pool_size: integer or tuple of 2 integers, window size over which to take the maximum. (2, 2) will take the max value over a 2x2 pooling window. If only one integer is specified, the same window length will be used for both dimensions. strides: Integer, tuple of 2 integers, or None. Strides values. Specifies how far the pooling window moves for each pooling step. If None, it will default to pool_size. padding: One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape If data_format='channels_last': 4D tensor with shape (batch_size, rows, cols, channels). If data_format='channels_first': 4D tensor with shape (batch_size, channels, rows, cols). Output shape If data_format='channels_last': 4D tensor with shape (batch_size, pooled_rows, pooled_cols, channels). If data_format='channels_first': 4D tensor with shape (batch_size, channels, pooled_rows, pooled_cols). Returns A tensor of rank 4 representing the maximum pooled values. See above for output shape.GlobalMaxPooling3D layer GlobalMaxPooling3D class tf.keras.layers.GlobalMaxPooling3D(data_format=None, **kwargs) Global Max pooling operation for 3D data. Arguments data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". 
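The Input shape and Output shape details for GlobalMaxPooling3D continue below; as a quick illustration, a hedged shape check (the input size here is an arbitrary assumption):

import tensorflow as tf

# Arbitrary example: a batch of 2 volumes of size 4x4x4 with 8 channels.
x = tf.random.normal((2, 4, 4, 4, 8))
y = tf.keras.layers.GlobalMaxPooling3D()(x)
print(y.shape)  # (2, 8): the three spatial dimensions collapse to one value per channel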
Input shape If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels) If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3) Output shape 2D tensor with shape (batch_size, channels). GlobalAveragePooling3D layer GlobalAveragePooling3D class tf.keras.layers.GlobalAveragePooling3D(data_format=None, **kwargs) Global Average pooling operation for 3D data. Arguments data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels) If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3) Output shape 2D tensor with shape (batch_size, channels). AveragePooling2D layer AveragePooling2D class tf.keras.layers.AveragePooling2D( pool_size=(2, 2), strides=None, padding="valid", data_format=None, **kwargs ) Average pooling operation for spatial data. Downsamples the input along its spatial dimensions (height and width) by taking the average value over an input window (of size defined by pool_size) for each channel of the input. The window is shifted by strides along each dimension. The resulting output when using "valid" padding option has a shape (number of rows or columns) of: output_shape = math.floor((input_shape - pool_size) / strides) + 1 (when input_shape >= pool_size) The resulting output shape when using the "same" padding option is: output_shape = math.floor((input_shape - 1) / strides) + 1 For example, for strides=(1, 1) and padding="valid": >>> x = tf.constant([[1., 2., 3.], ... [4., 5., 6.], ... [7., 8., 9.]]) >>> x = tf.reshape(x, [1, 3, 3, 1]) >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2), ... strides=(1, 1), padding='valid') >>> avg_pool_2d(x) For example, for stride=(2, 2) and padding="valid": >>> x = tf.constant([[1., 2., 3., 4.], ... [5., 6., 7., 8.], ... [9., 10., 11., 12.]]) >>> x = tf.reshape(x, [1, 3, 4, 1]) >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2), ... strides=(2, 2), padding='valid') >>> avg_pool_2d(x) For example, for strides=(1, 1) and padding="same": >>> x = tf.constant([[1., 2., 3.], ... [4., 5., 6.], ... [7., 8., 9.]]) >>> x = tf.reshape(x, [1, 3, 3, 1]) >>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2), ... strides=(1, 1), padding='same') >>> avg_pool_2d(x) Arguments pool_size: integer or tuple of 2 integers, factors by which to downscale (vertical, horizontal). (2, 2) will halve the input in both spatial dimension. If only one integer is specified, the same window length will be used for both dimensions. strides: Integer, tuple of 2 integers, or None. Strides values. If None, it will default to pool_size. padding: One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. 
data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape If data_format='channels_last': 4D tensor with shape (batch_size, rows, cols, channels). If data_format='channels_first': 4D tensor with shape (batch_size, channels, rows, cols). Output shape If data_format='channels_last': 4D tensor with shape (batch_size, pooled_rows, pooled_cols, channels). If data_format='channels_first': 4D tensor with shape (batch_size, channels, pooled_rows, pooled_cols). MaxPooling3D layer MaxPooling3D class tf.keras.layers.MaxPooling3D( pool_size=(2, 2, 2), strides=None, padding="valid", data_format=None, **kwargs ) Max pooling operation for 3D data (spatial or spatio-temporal). Downsamples the input along its spatial dimensions (depth, height, and width) by taking the maximum value over an input window (of size defined by pool_size) for each channel of the input. The window is shifted by strides along each dimension. Arguments pool_size: Tuple of 3 integers, factors by which to downscale (dim1, dim2, dim3). (2, 2, 2) will halve the size of the 3D input in each dimension. strides: tuple of 3 integers, or None. Strides values. padding: One of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape (batch, channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape If data_format='channels_last': 5D tensor with shape: (batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels) If data_format='channels_first': 5D tensor with shape: (batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3) Output shape If data_format='channels_last': 5D tensor with shape: (batch_size, pooled_dim1, pooled_dim2, pooled_dim3, channels) If data_format='channels_first': 5D tensor with shape: (batch_size, channels, pooled_dim1, pooled_dim2, pooled_dim3) Example depth = 30 height = 30 width = 30 input_channels = 3 inputs = tf.keras.Input(shape=(depth, height, width, input_channels)) layer = tf.keras.layers.MaxPooling3D(pool_size=3) outputs = layer(inputs) # Shape: (batch_size, 10, 10, 10, 3)GlobalAveragePooling2D layer GlobalAveragePooling2D class tf.keras.layers.GlobalAveragePooling2D(data_format=None, **kwargs) Global average pooling operation for spatial data. Examples >>> input_shape = (2, 4, 5, 3) >>> x = tf.random.normal(input_shape) >>> y = tf.keras.layers.GlobalAveragePooling2D()(x) >>> print(y.shape) (2, 3) Arguments data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. 
channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape If data_format='channels_last': 4D tensor with shape (batch_size, rows, cols, channels). If data_format='channels_first': 4D tensor with shape (batch_size, channels, rows, cols). Output shape 2D tensor with shape (batch_size, channels). GlobalMaxPooling2D layer GlobalMaxPooling2D class tf.keras.layers.GlobalMaxPooling2D(data_format=None, **kwargs) Global max pooling operation for spatial data. Examples >>> input_shape = (2, 4, 5, 3) >>> x = tf.random.normal(input_shape) >>> y = tf.keras.layers.GlobalMaxPool2D()(x) >>> print(y.shape) (2, 3) Arguments data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape If data_format='channels_last': 4D tensor with shape (batch_size, rows, cols, channels). If data_format='channels_first': 4D tensor with shape (batch_size, channels, rows, cols). Output shape 2D tensor with shape (batch_size, channels).
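Before moving on to SeparableConv2D below, here is a short, hedged shape-check sketch for the 2D pooling layers documented above; the tensor sizes are arbitrary illustrative choices, and the printed shapes follow the output_shape formulas quoted in those sections.

import tensorflow as tf

# Arbitrary example batch: 2 samples of an 8x6 feature map with 3 channels.
x = tf.random.normal((2, 8, 6, 3))

max_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(x)                      # "valid" padding
avg_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2), padding="same")(x)  # "same" padding
gap_2d = tf.keras.layers.GlobalAveragePooling2D()(x)                            # collapses rows/cols

print(max_2d.shape)  # (2, 4, 3, 3): floor((8 - 2) / 2) + 1 = 4 and floor((6 - 2) / 2) + 1 = 3
print(avg_2d.shape)  # (2, 4, 3, 3): floor((8 - 1) / 2) + 1 = 4 and floor((6 - 1) / 2) + 1 = 3
print(gap_2d.shape)  # (2, 3): one value per channel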
SeparableConv2D layer SeparableConv2D class tf.keras.layers.SeparableConv2D( filters, kernel_size, strides=(1, 1), padding="valid", data_format=None, dilation_rate=(1, 1), depth_multiplier=1, activation=None, use_bias=True, depthwise_initializer="glorot_uniform", pointwise_initializer="glorot_uniform", bias_initializer="zeros", depthwise_regularizer=None, pointwise_regularizer=None, bias_regularizer=None, activity_regularizer=None, depthwise_constraint=None, pointwise_constraint=None, bias_constraint=None, **kwargs ) Depthwise separable 2D convolution. Separable convolutions consist of first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution which mixes the resulting output channels. The depth_multiplier argument controls how many output channels are generated per input channel in the depthwise step. Intuitively, separable convolutions can be understood as a way to factorize a convolution kernel into two smaller kernels, or as an extreme version of an Inception block. Arguments filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). kernel_size: An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions. strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding: one of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding with zeros evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". dilation_rate: An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1. depth_multiplier: The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to filters_in * depth_multiplier. activation: Activation function to use. If you don't specify anything, no activation is applied ( see keras.activations). use_bias: Boolean, whether the layer uses a bias vector. depthwise_initializer: An initializer for the depthwise convolution kernel ( see keras.initializers). If None, then the default initializer ( 'glorot_uniform') will be used. pointwise_initializer: An initializer for the pointwise convolution kernel ( see keras.initializers). If None, then the default initializer ('glorot_uniform') will be used. bias_initializer: An initializer for the bias vector. If None, the default initializer ('zeros') will be used (see keras.initializers). depthwise_regularizer: Regularizer function applied to the depthwise kernel matrix (see keras.regularizers). 
pointwise_regularizer: Regularizer function applied to the pointwise kernel matrix (see keras.regularizers). bias_regularizer: Regularizer function applied to the bias vector ( see keras.regularizers). activity_regularizer: Regularizer function applied to the output of the layer (its "activation") ( see keras.regularizers). depthwise_constraint: Constraint function applied to the depthwise kernel matrix ( see keras.constraints). pointwise_constraint: Constraint function applied to the pointwise kernel matrix ( see keras.constraints). bias_constraint: Constraint function applied to the bias vector ( see keras.constraints). Input shape 4D tensor with shape: (batch_size, channels, rows, cols) if data_format='channels_first' or 4D tensor with shape: (batch_size, rows, cols, channels) if data_format='channels_last'. Output shape 4D tensor with shape: (batch_size, filters, new_rows, new_cols) if data_format='channels_first' or 4D tensor with shape: (batch_size, new_rows, new_cols, filters) if data_format='channels_last'. rows and cols values might have changed due to padding. Returns A tensor of rank 4 representing activation(separableconv2d(inputs, kernel) + bias). Raises ValueError: if padding is "causal". ValueError: when both strides > 1 and dilation_rate > 1. Conv1D layer Conv1D class tf.keras.layers.Conv1D( filters, kernel_size, strides=1, padding="valid", data_format="channels_last", dilation_rate=1, groups=1, activation=None, use_bias=True, kernel_initializer="glorot_uniform", bias_initializer="zeros", kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs ) 1D convolution layer (e.g. temporal convolution). This layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well. When using this layer as the first layer in a model, provide an input_shape argument (tuple of integers or None, e.g. (10, 128) for sequences of 10 vectors of 128-dimensional vectors, or (None, 128) for variable-length sequences of 128-dimensional vectors. Examples >>> # The inputs are 128-length vectors with 10 timesteps, and the batch size >>> # is 4. >>> input_shape = (4, 10, 128) >>> x = tf.random.normal(input_shape) >>> y = tf.keras.layers.Conv1D( ... 32, 3, activation='relu',input_shape=input_shape[1:])(x) >>> print(y.shape) (4, 8, 32) >>> # With extended batch shape [4, 7] (e.g. weather data where batch >>> # dimensions correspond to spatial location and the third dimension >>> # corresponds to time.) >>> input_shape = (4, 7, 10, 128) >>> x = tf.random.normal(input_shape) >>> y = tf.keras.layers.Conv1D( ... 32, 3, activation='relu', input_shape=input_shape[2:])(x) >>> print(y.shape) (4, 7, 8, 32) Arguments filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). kernel_size: An integer or tuple/list of a single integer, specifying the length of the 1D convolution window. strides: An integer or tuple/list of a single integer, specifying the stride length of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding: One of "valid", "same" or "causal" (case-insensitive). "valid" means no padding. 
"same" results in padding with zeros evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. "causal" results in causal (dilated) convolutions, e.g. output[t] does not depend on input[t+1:]. Useful when modeling temporal data where the model should not violate the temporal order. See WaveNet: A Generative Model for Raw Audio, section 2.1. data_format: A string, one of channels_last (default) or channels_first. dilation_rate: an integer or tuple/list of a single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1. groups: A positive integer specifying the number of groups in which the input is split along the channel axis. Each group is convolved separately with filters / groups filters. The output is the concatenation of all the groups results along the channel axis. Input channels and filters must both be divisible by groups. activation: Activation function to use. If you don't specify anything, no activation is applied ( see keras.activations). use_bias: Boolean, whether the layer uses a bias vector. kernel_initializer: Initializer for the kernel weights matrix ( see keras.initializers). Defaults to 'glorot_uniform'. bias_initializer: Initializer for the bias vector ( see keras.initializers). Defaults to 'zeros'. kernel_regularizer: Regularizer function applied to the kernel weights matrix (see keras.regularizers). bias_regularizer: Regularizer function applied to the bias vector ( see keras.regularizers). activity_regularizer: Regularizer function applied to the output of the layer (its "activation") ( see keras.regularizers). kernel_constraint: Constraint function applied to the kernel matrix ( see keras.constraints). bias_constraint: Constraint function applied to the bias vector ( see keras.constraints). Input shape 3+D tensor with shape: batch_shape + (steps, input_dim) Output shape 3+D tensor with shape: batch_shape + (new_steps, filters) steps value might have changed due to padding or strides. Returns A tensor of rank 3 representing activation(conv1d(inputs, kernel) + bias). Raises ValueError: when both strides > 1 and dilation_rate > 1.Conv2DTranspose layer Conv2DTranspose class tf.keras.layers.Conv2DTranspose( filters, kernel_size, strides=(1, 1), padding="valid", output_padding=None, data_format=None, dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer="glorot_uniform", bias_initializer="zeros", kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs ) Transposed convolution layer (sometimes called Deconvolution). The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 128, 3) for 128x128 RGB pictures in data_format="channels_last". Arguments filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). 
kernel_size: An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions. strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding: one of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding with zeros evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. output_padding: An integer or tuple/list of 2 integers, specifying the amount of padding along the height and width of the output tensor. Can be a single integer to specify the same value for all spatial dimensions. The amount of output padding along a given dimension must be lower than the stride along that same dimension. If set to None (default), the output shape is inferred. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". dilation_rate: an integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1. activation: Activation function to use. If you don't specify anything, no activation is applied ( see keras.activations). use_bias: Boolean, whether the layer uses a bias vector. kernel_initializer: Initializer for the kernel weights matrix ( see keras.initializers). Defaults to 'glorot_uniform'. bias_initializer: Initializer for the bias vector ( see keras.initializers). Defaults to 'zeros'. kernel_regularizer: Regularizer function applied to the kernel weights matrix (see keras.regularizers). bias_regularizer: Regularizer function applied to the bias vector ( see keras.regularizers). activity_regularizer: Regularizer function applied to the output of the layer (its "activation") (see keras.regularizers). kernel_constraint: Constraint function applied to the kernel matrix ( see keras.constraints). bias_constraint: Constraint function applied to the bias vector ( see keras.constraints). Input shape 4D tensor with shape: (batch_size, channels, rows, cols) if data_format='channels_first' or 4D tensor with shape: (batch_size, rows, cols, channels) if data_format='channels_last'. Output shape 4D tensor with shape: (batch_size, filters, new_rows, new_cols) if data_format='channels_first' or 4D tensor with shape: (batch_size, new_rows, new_cols, filters) if data_format='channels_last'. rows and cols values might have changed due to padding. If output_padding is specified: new_rows = ((rows - 1) * strides[0] + kernel_size[0] - 2 * padding[0] + output_padding[0]) new_cols = ((cols - 1) * strides[1] + kernel_size[1] - 2 * padding[1] + output_padding[1]) Returns A tensor of rank 4 representing activation(conv2dtranspose(inputs, kernel) + bias). 
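The Raises list and references for Conv2DTranspose follow below. As a quick, hedged illustration of the upsampling behaviour described above, a stride-2 transposed convolution with padding="same" multiplies each spatial dimension by the stride; the input size and filter count here are arbitrary assumptions.

import tensorflow as tf

# Arbitrary example: one 8x8 feature map with 16 channels.
x = tf.random.normal((1, 8, 8, 16))
up = tf.keras.layers.Conv2DTranspose(filters=32, kernel_size=3, strides=2, padding="same")(x)
print(up.shape)  # (1, 16, 16, 32): spatial size doubled, channel count set by filters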
Raises ValueError: if padding is "causal". ValueError: when both strides > 1 and dilation_rate > 1. References: - A guide to convolution arithmetic for deep learning - Deconvolutional Networks DepthwiseConv2D layer DepthwiseConv2D class tf.keras.layers.DepthwiseConv2D( kernel_size, strides=(1, 1), padding="valid", depth_multiplier=1, data_format=None, dilation_rate=(1, 1), activation=None, use_bias=True, depthwise_initializer="glorot_uniform", bias_initializer="zeros", depthwise_regularizer=None, bias_regularizer=None, activity_regularizer=None, depthwise_constraint=None, bias_constraint=None, **kwargs ) Depthwise 2D convolution. Depthwise convolution is a type of convolution in which a single convolutional filter is applied to each input channel (i.e. in a depthwise way). You can understand depthwise convolution as being the first step in a depthwise separable convolution. It is implemented via the following steps: Split the input into individual channels. Convolve each input with the layer's kernel (called a depthwise kernel). Stack the convolved outputs together (along the channels axis). Unlike a regular 2D convolution, depthwise convolution does not mix information across different input channels. The depth_multiplier argument controls how many output channels are generated per input channel in the depthwise step. Arguments kernel_size: An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions. strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding: one of 'valid' or 'same' (case-insensitive). "valid" means no padding. "same" results in padding with zeros evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. depth_multiplier: The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to filters_in * depth_multiplier. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be 'channels_last'. dilation_rate: An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1. activation: Activation function to use. If you don't specify anything, no activation is applied (see keras.activations). use_bias: Boolean, whether the layer uses a bias vector. depthwise_initializer: Initializer for the depthwise kernel matrix (see keras.initializers). If None, the default initializer ('glorot_uniform') will be used. bias_initializer: Initializer for the bias vector (see keras.initializers). If None, the default initializer ('zeros') will be used. depthwise_regularizer: Regularizer function applied to the depthwise kernel matrix (see keras.regularizers). 
bias_regularizer: Regularizer function applied to the bias vector ( see keras.regularizers). activity_regularizer: Regularizer function applied to the output of the layer (its 'activation') ( see keras.regularizers). depthwise_constraint: Constraint function applied to the depthwise kernel matrix ( see keras.constraints). bias_constraint: Constraint function applied to the bias vector ( see keras.constraints). Input shape 4D tensor with shape: [batch_size, channels, rows, cols] if data_format='channels_first' or 4D tensor with shape: [batch_size, rows, cols, channels] if data_format='channels_last'. Output shape 4D tensor with shape: [batch_size, channels * depth_multiplier, new_rows, new_cols] if data_format='channels_first' or 4D tensor with shape: [batch_size, new_rows, new_cols, channels * depth_multiplier] if data_format='channels_last'. rows and cols values might have changed due to padding. Returns A tensor of rank 4 representing activation(depthwiseconv2d(inputs, kernel) + bias). Raises ValueError: if padding is "causal". ValueError: when both strides > 1 and dilation_rate > 1. Conv3D layer Conv3D class tf.keras.layers.Conv3D( filters, kernel_size, strides=(1, 1, 1), padding="valid", data_format=None, dilation_rate=(1, 1, 1), groups=1, activation=None, use_bias=True, kernel_initializer="glorot_uniform", bias_initializer="zeros", kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs ) 3D convolution layer (e.g. spatial convolution over volumes). This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 128, 128, 1) for 128x128x128 volumes with a single channel, in data_format="channels_last". Examples >>> # The inputs are 28x28x28 volumes with a single channel, and the >>> # batch size is 4 >>> input_shape =(4, 28, 28, 28, 1) >>> x = tf.random.normal(input_shape) >>> y = tf.keras.layers.Conv3D( ... 2, 3, activation='relu', input_shape=input_shape[1:])(x) >>> print(y.shape) (4, 26, 26, 26, 2) >>> # With extended batch shape [4, 7], e.g. a batch of 4 videos of 3D frames, >>> # with 7 frames per video. >>> input_shape = (4, 7, 28, 28, 28, 1) >>> x = tf.random.normal(input_shape) >>> y = tf.keras.layers.Conv3D( ... 2, 3, activation='relu', input_shape=input_shape[2:])(x) >>> print(y.shape) (4, 7, 26, 26, 26, 2) Arguments filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). kernel_size: An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions. strides: An integer or tuple/list of 3 integers, specifying the strides of the convolution along each spatial dimension. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding: one of "valid" or "same" (case-insensitive). "valid" means no padding. 
"same" results in padding with zeros evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape batch_shape + (spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape batch_shape + (channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". dilation_rate: an integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1. groups: A positive integer specifying the number of groups in which the input is split along the channel axis. Each group is convolved separately with filters / groups filters. The output is the concatenation of all the groups results along the channel axis. Input channels and filters must both be divisible by groups. activation: Activation function to use. If you don't specify anything, no activation is applied (see keras.activations). use_bias: Boolean, whether the layer uses a bias vector. kernel_initializer: Initializer for the kernel weights matrix (see keras.initializers). Defaults to 'glorot_uniform'. bias_initializer: Initializer for the bias vector (see keras.initializers). Defaults to 'zeros'. kernel_regularizer: Regularizer function applied to the kernel weights matrix (see keras.regularizers). bias_regularizer: Regularizer function applied to the bias vector (see keras.regularizers). activity_regularizer: Regularizer function applied to the output of the layer (its "activation") (see keras.regularizers). kernel_constraint: Constraint function applied to the kernel matrix (see keras.constraints). bias_constraint: Constraint function applied to the bias vector (see keras.constraints). Input shape 5+D tensor with shape: batch_shape + (channels, conv_dim1, conv_dim2, conv_dim3) if data_format='channels_first' or 5+D tensor with shape: batch_shape + (conv_dim1, conv_dim2, conv_dim3, channels) if data_format='channels_last'. Output shape 5+D tensor with shape: batch_shape + (filters, new_conv_dim1, new_conv_dim2, new_conv_dim3) if data_format='channels_first' or 5+D tensor with shape: batch_shape + (new_conv_dim1, new_conv_dim2, new_conv_dim3, filters) if data_format='channels_last'. new_conv_dim1, new_conv_dim2 and new_conv_dim3 values might have changed due to padding. Returns A tensor of rank 5+ representing activation(conv3d(inputs, kernel) + bias). Raises ValueError: if padding is "causal". ValueError: when both strides > 1 and dilation_rate > 1. Conv3DTranspose layer Conv3DTranspose class tf.keras.layers.Conv3DTranspose( filters, kernel_size, strides=(1, 1, 1), padding="valid", output_padding=None, data_format=None, dilation_rate=(1, 1, 1), activation=None, use_bias=True, kernel_initializer="glorot_uniform", bias_initializer="zeros", kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs ) Transposed convolution layer (sometimes called Deconvolution). 
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 128, 128, 3) for a 128x128x128 volume with 3 channels if data_format="channels_last". Arguments filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). kernel_size: An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions. strides: An integer or tuple/list of 3 integers, specifying the strides of the convolution along the depth, height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding: one of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding with zeros evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. output_padding: An integer or tuple/list of 3 integers, specifying the amount of padding along the depth, height, and width. Can be a single integer to specify the same value for all spatial dimensions. The amount of output padding along a given dimension must be lower than the stride along that same dimension. If set to None (default), the output shape is inferred. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, depth, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". dilation_rate: an integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1. activation: Activation function to use. If you don't specify anything, no activation is applied ( see keras.activations). use_bias: Boolean, whether the layer uses a bias vector. kernel_initializer: Initializer for the kernel weights matrix ( see keras.initializers). Defaults to 'glorot_uniform'. bias_initializer: Initializer for the bias vector ( see keras.initializers). Defaults to 'zeros'. kernel_regularizer: Regularizer function applied to the kernel weights matrix ( see keras.regularizers). bias_regularizer: Regularizer function applied to the bias vector ( see keras.regularizers). activity_regularizer: Regularizer function applied to the output of the layer (its "activation") ( see keras.regularizers). kernel_constraint: Constraint function applied to the kernel matrix ( see keras.constraints). bias_constraint: Constraint function applied to the bias vector ( see keras.constraints). 
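As a hedged sketch of these arguments (the shapes are illustrative only, not taken from the reference text), a stride of 2 with "same" padding upsamples each spatial dimension of a volume:
```python
import tensorflow as tf

# Illustrative sketch: a batch of 4 single-channel 8x8x8 volumes, upsampled
# 2x along depth, rows and cols by a transposed 3D convolution.
x = tf.random.normal((4, 8, 8, 8, 1))
y = tf.keras.layers.Conv3DTranspose(
    filters=2, kernel_size=3, strides=2, padding="same")(x)
print(y.shape)  # (4, 16, 16, 16, 2)
```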
Input shape 5D tensor with shape: (batch_size, channels, depth, rows, cols) if data_format='channels_first' or 5D tensor with shape: (batch_size, depth, rows, cols, channels) if data_format='channels_last'. Output shape 5D tensor with shape: (batch_size, filters, new_depth, new_rows, new_cols) if data_format='channels_first' or 5D tensor with shape: (batch_size, new_depth, new_rows, new_cols, filters) if data_format='channels_last'. depth and rows and cols values might have changed due to padding. If output_padding is specified:: new_depth = ((depth - 1) * strides[0] + kernel_size[0] - 2 * padding[0] + output_padding[0]) new_rows = ((rows - 1) * strides[1] + kernel_size[1] - 2 * padding[1] + output_padding[1]) new_cols = ((cols - 1) * strides[2] + kernel_size[2] - 2 * padding[2] + output_padding[2]) Returns A tensor of rank 5 representing activation(conv3dtranspose(inputs, kernel) + bias). Raises ValueError: if padding is "causal". ValueError: when both strides > 1 and dilation_rate > 1. References: - A guide to convolution arithmetic for deep learning - Deconvolutional Networks SeparableConv1D layer SeparableConv1D class tf.keras.layers.SeparableConv1D( filters, kernel_size, strides=1, padding="valid", data_format=None, dilation_rate=1, depth_multiplier=1, activation=None, use_bias=True, depthwise_initializer="glorot_uniform", pointwise_initializer="glorot_uniform", bias_initializer="zeros", depthwise_regularizer=None, pointwise_regularizer=None, bias_regularizer=None, activity_regularizer=None, depthwise_constraint=None, pointwise_constraint=None, bias_constraint=None, **kwargs ) Depthwise separable 1D convolution. This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. If use_bias is True and a bias initializer is provided, it adds a bias vector to the output. It then optionally applies an activation function to produce the final output. Arguments filters: Integer, the dimensionality of the output space (i.e. the number of filters in the convolution). kernel_size: A single integer specifying the spatial dimensions of the filters. strides: A single integer specifying the strides of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding: One of "valid", "same", or "causal" (case-insensitive). "valid" means no padding. "same" results in padding with zeros evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. "causal" results in causal (dilated) convolutions, e.g. output[t] does not depend on input[t+1:]. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, length, channels) while channels_first corresponds to inputs with shape (batch_size, channels, length). dilation_rate: A single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1. depth_multiplier: The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier. activation: Activation function to use. If you don't specify anything, no activation is applied ( see keras.activations). use_bias: Boolean, whether the layer uses a bias. 
depthwise_initializer: An initializer for the depthwise convolution kernel (see keras.initializers). If None, then the default initializer ('glorot_uniform') will be used. pointwise_initializer: An initializer for the pointwise convolution kernel (see keras.initializers). If None, then the default initializer ('glorot_uniform') will be used. bias_initializer: An initializer for the bias vector. If None, the default initializer ('zeros') will be used (see keras.initializers). depthwise_regularizer: Optional regularizer for the depthwise convolution kernel (see keras.regularizers). pointwise_regularizer: Optional regularizer for the pointwise convolution kernel (see keras.regularizers). bias_regularizer: Optional regularizer for the bias vector (see keras.regularizers). activity_regularizer: Optional regularizer function for the output (see keras.regularizers). depthwise_constraint: Optional projection function to be applied to the depthwise kernel after being updated by an Optimizer (e.g. used for norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training (see keras.constraints). pointwise_constraint: Optional projection function to be applied to the pointwise kernel after being updated by an Optimizer (see keras.constraints). bias_constraint: Optional projection function to be applied to the bias after being updated by an Optimizer (see keras.constraints). trainable: Boolean, if True the weights of this layer will be marked as trainable (and listed in layer.trainable_weights). Input shape 3D tensor with shape: (batch_size, channels, steps) if data_format='channels_first' or 3D tensor with shape: (batch_size, steps, channels) if data_format='channels_last'. Output shape 3D tensor with shape: (batch_size, filters, new_steps) if data_format='channels_first' or 3D tensor with shape: (batch_size, new_steps, filters) if data_format='channels_last'. new_steps value might have changed due to padding or strides. Returns A tensor of rank 3 representing activation(separableconv1d(inputs, kernel) + bias). Raises ValueError: when both strides > 1 and dilation_rate > 1. Conv2D layer Conv2D class tf.keras.layers.Conv2D( filters, kernel_size, strides=(1, 1), padding="valid", data_format=None, dilation_rate=(1, 1), groups=1, activation=None, use_bias=True, kernel_initializer="glorot_uniform", bias_initializer="zeros", kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs ) 2D convolution layer (e.g. spatial convolution over images). This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 128, 3) for 128x128 RGB pictures in data_format="channels_last". You can use None when a dimension has variable size. Examples >>> # The inputs are 28x28 RGB images with `channels_last` and the batch >>> # size is 4. >>> input_shape = (4, 28, 28, 3) >>> x = tf.random.normal(input_shape) >>> y = tf.keras.layers.Conv2D( ... 
2, 3, activation='relu', input_shape=input_shape[1:])(x) >>> print(y.shape) (4, 26, 26, 2) >>> # With `dilation_rate` as 2. >>> input_shape = (4, 28, 28, 3) >>> x = tf.random.normal(input_shape) >>> y = tf.keras.layers.Conv2D( ... 2, 3, activation='relu', dilation_rate=2, input_shape=input_shape[1:])(x) >>> print(y.shape) (4, 24, 24, 2) >>> # With `padding` as "same". >>> input_shape = (4, 28, 28, 3) >>> x = tf.random.normal(input_shape) >>> y = tf.keras.layers.Conv2D( ... 2, 3, activation='relu', padding="same", input_shape=input_shape[1:])(x) >>> print(y.shape) (4, 28, 28, 2) >>> # With extended batch shape [4, 7]: >>> input_shape = (4, 7, 28, 28, 3) >>> x = tf.random.normal(input_shape) >>> y = tf.keras.layers.Conv2D( ... 2, 3, activation='relu', input_shape=input_shape[2:])(x) >>> print(y.shape) (4, 7, 26, 26, 2) Arguments filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). kernel_size: An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions. strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding: one of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding with zeros evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be channels_last. dilation_rate: an integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1. groups: A positive integer specifying the number of groups in which the input is split along the channel axis. Each group is convolved separately with filters / groups filters. The output is the concatenation of all the groups results along the channel axis. Input channels and filters must both be divisible by groups. activation: Activation function to use. If you don't specify anything, no activation is applied (see keras.activations). use_bias: Boolean, whether the layer uses a bias vector. kernel_initializer: Initializer for the kernel weights matrix (see keras.initializers). Defaults to 'glorot_uniform'. bias_initializer: Initializer for the bias vector (see keras.initializers). Defaults to 'zeros'. kernel_regularizer: Regularizer function applied to the kernel weights matrix (see keras.regularizers). bias_regularizer: Regularizer function applied to the bias vector (see keras.regularizers). activity_regularizer: Regularizer function applied to the output of the layer (its "activation") (see keras.regularizers). kernel_constraint: Constraint function applied to the kernel matrix (see keras.constraints). 
bias_constraint: Constraint function applied to the bias vector (see keras.constraints). Input shape 4+D tensor with shape: batch_shape + (channels, rows, cols) if data_format='channels_first' or 4+D tensor with shape: batch_shape + (rows, cols, channels) if data_format='channels_last'. Output shape 4+D tensor with shape: batch_shape + (filters, new_rows, new_cols) if data_format='channels_first' or 4+D tensor with shape: batch_shape + (new_rows, new_cols, filters) if data_format='channels_last'. rows and cols values might have changed due to padding. Returns A tensor of rank 4+ representing activation(conv2d(inputs, kernel) + bias). Raises ValueError: if padding is "causal". ValueError: when both strides > 1 and dilation_rate > 1. Embedding layer Embedding class tf.keras.layers.Embedding( input_dim, output_dim, embeddings_initializer="uniform", embeddings_regularizer=None, activity_regularizer=None, embeddings_constraint=None, mask_zero=False, input_length=None, **kwargs ) Turns positive integers (indexes) into dense vectors of fixed size. e.g. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]] This layer can only be used as the first layer in a model. Example >>> model = tf.keras.Sequential() >>> model.add(tf.keras.layers.Embedding(1000, 64, input_length=10)) >>> # The model will take as input an integer matrix of size (batch, >>> # input_length), and the largest integer (i.e. word index) in the input >>> # should be no larger than 999 (vocabulary size). >>> # Now model.output_shape is (None, 10, 64), where `None` is the batch >>> # dimension. >>> input_array = np.random.randint(1000, size=(32, 10)) >>> model.compile('rmsprop', 'mse') >>> output_array = model.predict(input_array) >>> print(output_array.shape) (32, 10, 64) Arguments input_dim: Integer. Size of the vocabulary, i.e. maximum integer index + 1. output_dim: Integer. Dimension of the dense embedding. embeddings_initializer: Initializer for the embeddings matrix (see keras.initializers). embeddings_regularizer: Regularizer function applied to the embeddings matrix (see keras.regularizers). embeddings_constraint: Constraint function applied to the embeddings matrix (see keras.constraints). mask_zero: Boolean, whether or not the input value 0 is a special "padding" value that should be masked out. This is useful when using recurrent layers which may take variable length input. If this is True, then all subsequent layers in the model need to support masking or an exception will be raised. If mask_zero is set to True, as a consequence, index 0 cannot be used in the vocabulary (input_dim should equal size of vocabulary + 1). input_length: Length of input sequences, when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed). Input shape 2D tensor with shape: (batch_size, input_length). Output shape 3D tensor with shape: (batch_size, input_length, output_dim). Activation layer Activation class tf.keras.layers.Activation(activation, **kwargs) Applies an activation function to an output. Arguments activation: Activation function, such as tf.nn.relu, or string name of built-in activation function, such as "relu". Usage: >>> layer = tf.keras.layers.Activation('relu') >>> output = layer([-3.0, -1.0, 0.0, 2.0]) >>> list(output.numpy()) [0.0, 0.0, 0.0, 2.0] >>> layer = tf.keras.layers.Activation(tf.nn.relu) >>> output = layer([-3.0, -1.0, 0.0, 2.0]) >>> list(output.numpy()) [0.0, 0.0, 0.0, 2.0] Input shape Arbitrary. 
Use the keyword argument input_shape (tuple of integers, does not include the batch axis) when using this layer as the first layer in a model. Output shape Same shape as input. Input object Input function tf.keras.Input( shape=None, batch_size=None, name=None, dtype=None, sparse=None, tensor=None, ragged=None, type_spec=None, **kwargs ) Input() is used to instantiate a Keras tensor. A Keras tensor is a symbolic tensor-like object, which we augment with certain attributes that allow us to build a Keras model just by knowing the inputs and outputs of the model. For instance, if a, b and c are Keras tensors, it becomes possible to do: model = Model(inputs=[a, b], outputs=c) Arguments shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known. batch_size: optional static batch size (integer). name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided. dtype: The data type expected by the input, as a string (float32, float64, int32...) sparse: A boolean specifying whether the placeholder to be created is sparse. Only one of 'ragged' and 'sparse' can be True. Note that, if sparse is False, sparse tensors can still be passed into the input - they will be densified with a default value of 0. tensor: Optional existing tensor to wrap into the Input layer. If set, the layer will use the tf.TypeSpec of this tensor rather than creating a new placeholder tensor. ragged: A boolean specifying whether the placeholder to be created is ragged. Only one of 'ragged' and 'sparse' can be True. In this case, values of 'None' in the 'shape' argument represent ragged dimensions. For more information about RaggedTensors, see this guide. type_spec: A tf.TypeSpec object to create the input placeholder from. When provided, all other args except name must be None. **kwargs: deprecated arguments support. Supports batch_shape and batch_input_shape. Returns A tensor. Example # this is a logistic regression in Keras x = Input(shape=(32,)) y = Dense(16, activation='softmax')(x) model = Model(x, y) Note that even if eager execution is enabled, Input produces a symbolic tensor-like object (i.e. a placeholder). This symbolic tensor-like object can be used with lower-level TensorFlow ops that take tensors as inputs, as such: x = Input(shape=(32,)) y = tf.square(x) # This op will be treated like a layer model = Model(x, y) (This behavior does not work for higher-order TensorFlow APIs such as control flow and being directly watched by a tf.GradientTape). However, the resulting model will not track any variables that were used as inputs to TensorFlow ops. All variable usages must happen within Keras layers to make sure they will be tracked by the model's weights. The Keras Input can also create a placeholder from an arbitrary tf.TypeSpec, e.g: x = Input(type_spec=tf.RaggedTensorSpec(shape=[None, None], dtype=tf.float32, ragged_rank=1)) y = x.values model = Model(x, y) When passing an arbitrary tf.TypeSpec, it must represent the signature of an entire batch instead of just one example. Raises ValueError: If both sparse and ragged are provided. ValueError: If both shape and (batch_input_shape or batch_shape) are provided. ValueError: If shape, tensor and type_spec are None. 
ValueError: If arguments besides type_spec are non-None while type_spec is passed. ValueError: if any unrecognized parameters are provided. Lambda layer Lambda class tf.keras.layers.Lambda( function, output_shape=None, mask=None, arguments=None, **kwargs ) Wraps arbitrary expressions as a Layer object. The Lambda layer exists so that arbitrary expressions can be used as a Layer when constructing Sequential and Functional API models. Lambda layers are best suited for simple operations or quick experimentation. For more advanced use cases, follow this guide for subclassing tf.keras.layers.Layer. WARNING: tf.keras.layers.Lambda layers have (de)serialization limitations! The main reason to subclass tf.keras.layers.Layer instead of using a Lambda layer is saving and inspecting a Model. Lambda layers are saved by serializing the Python bytecode, which is fundamentally non-portable. They should only be loaded in the same environment where they were saved. Subclassed layers can be saved in a more portable way by overriding their get_config method. Models that rely on subclassed Layers are also often easier to visualize and reason about. Examples # add a x -> x^2 layer model.add(Lambda(lambda x: x ** 2)) # add a layer that returns the concatenation # of the positive part of the input and # the opposite of the negative part def antirectifier(x): x -= K.mean(x, axis=1, keepdims=True) x = K.l2_normalize(x, axis=1) pos = K.relu(x) neg = K.relu(-x) return K.concatenate([pos, neg], axis=1) model.add(Lambda(antirectifier)) Variables: While it is possible to use Variables with Lambda layers, this practice is discouraged as it can easily lead to bugs. For instance, consider the following layer: python scale = tf.Variable(1.) scale_layer = tf.keras.layers.Lambda(lambda x: x * scale) Because scale_layer does not directly track the scale variable, it will not appear in scale_layer.trainable_weights and will therefore not be trained if scale_layer is used in a Model. A better pattern is to write a subclassed Layer: ```python class ScaleLayer(tf.keras.layers.Layer): def init(self): super(ScaleLayer, self).init() self.scale = tf.Variable(1.) def call(self, inputs): return inputs * self.scale ``` In general, Lambda layers can be convenient for simple stateless computation, but anything more complex should use a subclass Layer instead. Arguments function: The function to be evaluated. Takes input tensor as first argument. output_shape: Expected output shape from function. This argument can be inferred if not explicitly provided. Can be a tuple or function. If a tuple, it only specifies the first dimension onward; sample dimension is assumed either the same as the input: output_shape = (input_shape[0], ) + output_shape or, the input is None and the sample dimension is also None: output_shape = (None, ) + output_shape If a function, it specifies the entire shape as a function of the input shape: output_shape = f(input_shape) mask: Either None (indicating no masking) or a callable with the same signature as the compute_mask layer method, or a tensor that will be returned as output mask regardless of what the input is. arguments: Optional dictionary of keyword arguments to be passed to the function. Input shape Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. 
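To make the output_shape and arguments options concrete, here is a small illustrative sketch; the scaling function, shapes, and the factor keyword are assumptions for demonstration, not part of the original reference:
```python
import tensorflow as tf

# Illustrative sketch: extra keyword arguments reach the wrapped function via
# `arguments`, and the per-sample output shape is declared explicitly.
scale_layer = tf.keras.layers.Lambda(
    lambda x, factor: x * factor,  # stateless computation only
    output_shape=(4,),             # per-sample shape; batch dimension excluded
    arguments={"factor": 2.0},
)
print(scale_layer(tf.ones((3, 4))).shape)  # (3, 4)
```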
Output shape Specified by output_shape argument Dense layer Dense class tf.keras.layers.Dense( units, activation=None, use_bias=True, kernel_initializer="glorot_uniform", bias_initializer="zeros", kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs ) Just your regular densely-connected NN layer. Dense implements the operation: output = activation(dot(input, kernel) + bias) where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True). These are all attributes of Dense. Note: If the input to the layer has a rank greater than 2, then Dense computes the dot product between the inputs and the kernel along the last axis of the inputs and axis 1 of the kernel (using tf.tensordot). For example, if input has dimensions (batch_size, d0, d1), then we create a kernel with shape (d1, units), and the kernel operates along axis 2 of the input, on every sub-tensor of shape (1, 1, d1) (there are batch_size * d0 such sub-tensors). The output in this case will have shape (batch_size, d0, units). Besides, layer attributes cannot be modified after the layer has been called once (except the trainable attribute). When a popular kwarg input_shape is passed, then keras will create an input layer to insert before the current layer. This can be treated equivalent to explicitly defining an InputLayer. Example >>> # Create a `Sequential` model and add a Dense layer as the first layer. >>> model = tf.keras.models.Sequential() >>> model.add(tf.keras.Input(shape=(16,))) >>> model.add(tf.keras.layers.Dense(32, activation='relu')) >>> # Now the model will take as input arrays of shape (None, 16) >>> # and output arrays of shape (None, 32). >>> # Note that after the first layer, you don't need to specify >>> # the size of the input anymore: >>> model.add(tf.keras.layers.Dense(32)) >>> model.output_shape (None, 32) Arguments units: Positive integer, dimensionality of the output space. activation: Activation function to use. If you don't specify anything, no activation is applied (ie. "linear" activation: a(x) = x). use_bias: Boolean, whether the layer uses a bias vector. kernel_initializer: Initializer for the kernel weights matrix. bias_initializer: Initializer for the bias vector. kernel_regularizer: Regularizer function applied to the kernel weights matrix. bias_regularizer: Regularizer function applied to the bias vector. activity_regularizer: Regularizer function applied to the output of the layer (its "activation"). kernel_constraint: Constraint function applied to the kernel weights matrix. bias_constraint: Constraint function applied to the bias vector. Input shape N-D tensor with shape: (batch_size, ..., input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim). Output shape N-D tensor with shape: (batch_size, ..., units). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, units). Masking layer Masking class tf.keras.layers.Masking(mask_value=0.0, **kwargs) Masks a sequence by using a mask value to skip timesteps. For each timestep in the input tensor (dimension #1 in the tensor), if all values in the input tensor at that timestep are equal to mask_value, then the timestep will be masked (skipped) in all downstream layers (as long as they support masking). 
If any downstream layer does not support masking yet receives such an input mask, an exception will be raised. Example Consider a Numpy data array x of shape (samples, timesteps, features), to be fed to an LSTM layer. You want to mask timesteps #3 and #5 because you lack data for these timesteps. You can: Set x[:, 3, :] = 0. and x[:, 5, :] = 0. Insert a Masking layer with mask_value=0. before the LSTM layer: samples, timesteps, features = 32, 10, 8 inputs = np.random.random([samples, timesteps, features]).astype(np.float32) inputs[:, 3, :] = 0. inputs[:, 5, :] = 0. model = tf.keras.models.Sequential() model.add(tf.keras.layers.Masking(mask_value=0., input_shape=(timesteps, features))) model.add(tf.keras.layers.LSTM(32)) output = model(inputs) # Time steps 3 and 5 will be skipped in the LSTM calculation. See the masking and padding guide for more details. Image data preprocessing image_dataset_from_directory function tf.keras.preprocessing.image_dataset_from_directory( directory, labels="inferred", label_mode="int", class_names=None, color_mode="rgb", batch_size=32, image_size=(256, 256), shuffle=True, seed=None, validation_split=None, subset=None, interpolation="bilinear", follow_links=False, smart_resize=False, ) Generates a tf.data.Dataset from image files in a directory. If your directory structure is: main_directory/ ...class_a/ ......a_image_1.jpg ......a_image_2.jpg ...class_b/ ......b_image_1.jpg ......b_image_2.jpg Then calling image_dataset_from_directory(main_directory, labels='inferred') will return a tf.data.Dataset that yields batches of images from the subdirectories class_a and class_b, together with labels 0 and 1 (0 corresponding to class_a and 1 corresponding to class_b). Supported image formats: jpeg, png, bmp, gif. Animated gifs are truncated to the first frame. Arguments directory: Directory where the data is located. If labels is "inferred", it should contain subdirectories, each containing images for a class. Otherwise, the directory structure is ignored. labels: Either "inferred" (labels are generated from the directory structure), None (no labels), or a list/tuple of integer labels of the same size as the number of image files found in the directory. Labels should be sorted according to the alphanumeric order of the image file paths (obtained via os.walk(directory) in Python). label_mode: - 'int': means that the labels are encoded as integers (e.g. for sparse_categorical_crossentropy loss). - 'categorical' means that the labels are encoded as a categorical vector (e.g. for categorical_crossentropy loss). - 'binary' means that the labels (there can be only 2) are encoded as float32 scalars with values 0 or 1 (e.g. for binary_crossentropy). - None (no labels). class_names: Only valid if "labels" is "inferred". This is the explicit list of class names (must match names of subdirectories). Used to control the order of the classes (otherwise alphanumerical order is used). color_mode: One of "grayscale", "rgb", "rgba". Default: "rgb". Whether the images will be converted to have 1, 3, or 4 channels. batch_size: Size of the batches of data. Default: 32. image_size: Size to resize images to after they are read from disk. Defaults to (256, 256). Since the pipeline processes batches of images that must all have the same size, this must be provided. shuffle: Whether to shuffle the data. Default: True. If set to False, sorts the data in alphanumeric order. seed: Optional random seed for shuffling and transformations. 
validation_split: Optional float between 0 and 1, fraction of data to reserve for validation. subset: One of "training" or "validation". Only used if validation_split is set. interpolation: String, the interpolation method used when resizing images. Defaults to bilinear. Supports bilinear, nearest, bicubic, area, lanczos3, lanczos5, gaussian, mitchellcubic. follow_links: Whether to visit subdirectories pointed to by symlinks. Defaults to False. smart_resize: If True, the resizing function used will be tf.keras.preprocessing.image.smart_resize, which preserves the aspect ratio of the original image by using a mixture of resizing and cropping. If False (default), the resizing function is tf.image.resize, which does not preserve aspect ratio. Returns A tf.data.Dataset object. - If label_mode is None, it yields float32 tensors of shape (batch_size, image_size[0], image_size[1], num_channels), encoding images (see below for rules regarding num_channels). - Otherwise, it yields a tuple (images, labels), where images has shape (batch_size, image_size[0], image_size[1], num_channels), and labels follows the format described below. Rules regarding labels format: - if label_mode is int, the labels are an int32 tensor of shape (batch_size,). - if label_mode is binary, the labels are a float32 tensor of 1s and 0s of shape (batch_size, 1). - if label_mode is categorical, the labels are a float32 tensor of shape (batch_size, num_classes), representing a one-hot encoding of the class index. Rules regarding number of channels in the yielded images: - if color_mode is grayscale, there's 1 channel in the image tensors. - if color_mode is rgb, there are 3 channels in the image tensors. - if color_mode is rgba, there are 4 channels in the image tensors. load_img function tf.keras.preprocessing.image.load_img( path, grayscale=False, color_mode="rgb", target_size=None, interpolation="nearest" ) Loads an image into PIL format. Usage: image = tf.keras.preprocessing.image.load_img(image_path) input_arr = tf.keras.preprocessing.image.img_to_array(image) input_arr = np.array([input_arr]) # Convert single image to a batch. predictions = model.predict(input_arr) Arguments path: Path to image file. grayscale: DEPRECATED; use color_mode="grayscale" instead. color_mode: One of "grayscale", "rgb", "rgba". Default: "rgb". The desired image format. target_size: Either None (default to original size) or tuple of ints (img_height, img_width). interpolation: Interpolation method used to resample the image if the target size is different from that of the loaded image. Supported methods are "nearest", "bilinear", and "bicubic". If PIL version 1.1.3 or newer is installed, "lanczos" is also supported. If PIL version 3.4.0 or newer is installed, "box" and "hamming" are also supported. By default, "nearest" is used. Returns A PIL Image instance. Raises ImportError: if PIL is not available. ValueError: if interpolation method is not supported. img_to_array function tf.keras.preprocessing.image.img_to_array(img, data_format=None, dtype=None) Converts a PIL Image instance to a Numpy array. Usage: from PIL import Image img_data = np.random.random(size=(100, 100, 3)) img = tf.keras.preprocessing.image.array_to_img(img_data) array = tf.keras.preprocessing.image.img_to_array(img) Arguments img: Input PIL Image instance. data_format: Image data format, can be either "channels_first" or "channels_last". Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last"). 
dtype: Dtype to use. Default to None, in which case the global setting tf.keras.backend.floatx() is used (unless you changed it, it defaults to "float32") Returns A 3D Numpy array. Raises ValueError: if invalid img or data_format is passed. ImageDataGenerator class tf.keras.preprocessing.image.ImageDataGenerator( featurewise_center=False, samplewise_center=False, featurewise_std_normalization=False, samplewise_std_normalization=False, zca_whitening=False, zca_epsilon=1e-06, rotation_range=0, width_shift_range=0.0, height_shift_range=0.0, brightness_range=None, shear_range=0.0, zoom_range=0.0, channel_shift_range=0.0, fill_mode="nearest", cval=0.0, horizontal_flip=False, vertical_flip=False, rescale=None, preprocessing_function=None, data_format=None, validation_split=0.0, dtype=None, ) Generate batches of tensor image data with real-time data augmentation. The data will be looped over (in batches). Arguments featurewise_center: Boolean. Set input mean to 0 over the dataset, feature-wise. samplewise_center: Boolean. Set each sample mean to 0. featurewise_std_normalization: Boolean. Divide inputs by std of the dataset, feature-wise. samplewise_std_normalization: Boolean. Divide each input by its std. zca_epsilon: epsilon for ZCA whitening. Default is 1e-6. zca_whitening: Boolean. Apply ZCA whitening. rotation_range: Int. Degree range for random rotations. width_shift_range: Float, 1-D array-like or int - float: fraction of total width, if < 1, or pixels if >= 1. - 1-D array-like: random elements from the array. - int: integer number of pixels from interval (-width_shift_range, +width_shift_range) - With width_shift_range=2 possible values are integers [-1, 0, +1], same as with width_shift_range=[-1, 0, +1], while with width_shift_range=1.0 possible values are floats in the interval [-1.0, +1.0). height_shift_range: Float, 1-D array-like or int - float: fraction of total height, if < 1, or pixels if >= 1. - 1-D array-like: random elements from the array. - int: integer number of pixels from interval (-height_shift_range, +height_shift_range) - With height_shift_range=2 possible values are integers [-1, 0, +1], same as with height_shift_range=[-1, 0, +1], while with height_shift_range=1.0 possible values are floats in the interval [-1.0, +1.0). brightness_range: Tuple or list of two floats. Range for picking a brightness shift value from. shear_range: Float. Shear Intensity (Shear angle in counter-clockwise direction in degrees) zoom_range: Float or [lower, upper]. Range for random zoom. If a float, [lower, upper] = [1-zoom_range, 1+zoom_range]. channel_shift_range: Float. Range for random channel shifts. fill_mode: One of {"constant", "nearest", "reflect" or "wrap"}. Default is 'nearest'. Points outside the boundaries of the input are filled according to the given mode: - 'constant': kkkkkkkk|abcd|kkkkkkkk (cval=k) - 'nearest': aaaaaaaa|abcd|dddddddd - 'reflect': abcddcba|abcd|dcbaabcd - 'wrap': abcdabcd|abcd|abcdabcd cval: Float or Int. Value used for points outside the boundaries when fill_mode = "constant". horizontal_flip: Boolean. Randomly flip inputs horizontally. vertical_flip: Boolean. Randomly flip inputs vertically. rescale: rescaling factor. Defaults to None. If None or 0, no rescaling is applied, otherwise we multiply the data by the value provided (after applying all other transformations). preprocessing_function: function that will be applied on each input. The function will run after the image is resized and augmented. 
The function should take one argument: one image (Numpy tensor with rank 3), and should output a Numpy tensor with the same shape. data_format: Image data format, either "channels_first" or "channels_last". "channels_last" mode means that the images should have shape (samples, height, width, channels), "channels_first" mode means that the images should have shape (samples, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". validation_split: Float. Fraction of images reserved for validation (strictly between 0 and 1). dtype: Dtype to use for the generated arrays. Raises ValueError: If the value of the argument, data_format is other than "channels_last" or "channels_first". ValueError: If the value of the argument, validation_split > 1 or validation_split < 0. Examples Example of using .flow(x, y): (x_train, y_train), (x_test, y_test) = cifar10.load_data() y_train = utils.to_categorical(y_train, num_classes) y_test = utils.to_categorical(y_test, num_classes) datagen = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True, validation_split=0.2) # compute quantities required for featurewise normalization # (std, mean, and principal components if ZCA whitening is applied) datagen.fit(x_train) # fits the model on batches with real-time data augmentation: model.fit(datagen.flow(x_train, y_train, batch_size=32, subset='training'), validation_data=datagen.flow(x_train, y_train, batch_size=8, subset='validation'), steps_per_epoch=len(x_train) / 32, epochs=epochs) # here's a more "manual" example for e in range(epochs): print('Epoch', e) batches = 0 for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=32): model.fit(x_batch, y_batch) batches += 1 if batches >= len(x_train) / 32: # we need to break the loop by hand because # the generator loops indefinitely break Example of using .flow_from_directory(directory): train_datagen = ImageDataGenerator( rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( 'data/train', target_size=(150, 150), batch_size=32, class_mode='binary') validation_generator = test_datagen.flow_from_directory( 'data/validation', target_size=(150, 150), batch_size=32, class_mode='binary') model.fit( train_generator, steps_per_epoch=2000, epochs=50, validation_data=validation_generator, validation_steps=800) Example of transforming images and masks together. 
# we create two instances with the same arguments data_gen_args = dict(featurewise_center=True, featurewise_std_normalization=True, rotation_range=90, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.2) image_datagen = ImageDataGenerator(**data_gen_args) mask_datagen = ImageDataGenerator(**data_gen_args) # Provide the same seed and keyword arguments to the fit and flow methods seed = 1 image_datagen.fit(images, augment=True, seed=seed) mask_datagen.fit(masks, augment=True, seed=seed) image_generator = image_datagen.flow_from_directory( 'data/images', class_mode=None, seed=seed) mask_generator = mask_datagen.flow_from_directory( 'data/masks', class_mode=None, seed=seed) # combine generators into one which yields image and masks train_generator = zip(image_generator, mask_generator) model.fit( train_generator, steps_per_epoch=2000, epochs=50) flow method ImageDataGenerator.flow( x, y=None, batch_size=32, shuffle=True, sample_weight=None, seed=None, save_to_dir=None, save_prefix="", save_format="png", subset=None, ) Takes data & label arrays, generates batches of augmented data. Arguments x: Input data. Numpy array of rank 4 or a tuple. If tuple, the first element should contain the images and the second element another numpy array or a list of numpy arrays that gets passed to the output without any modifications. Can be used to feed the model miscellaneous data along with the images. In case of grayscale data, the channels axis of the image array should have value 1, in case of RGB data, it should have value 3, and in case of RGBA data, it should have value 4. y: Labels. batch_size: Int (default: 32). shuffle: Boolean (default: True). sample_weight: Sample weights. seed: Int (default: None). save_to_dir: None or str (default: None). This allows you to optionally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing). save_prefix: Str (default: ''). Prefix to use for filenames of saved pictures (only relevant if save_to_dir is set). save_format: one of "png", "jpeg", "bmp", "pdf", "ppm", "gif", "tif", "jpg" (only relevant if save_to_dir is set). Default: "png". subset: Subset of data ("training" or "validation") if validation_split is set in ImageDataGenerator. Returns An Iterator yielding tuples of (x, y) where x is a numpy array of image data (in the case of a single image input) or a list of numpy arrays (in the case with additional inputs) and y is a numpy array of corresponding labels. If 'sample_weight' is not None, the yielded tuples are of the form (x, y, sample_weight). If y is None, only the numpy array x is returned. Raises ValueError: If the Value of the argument, subset is other than "training" or "validation". flow_from_dataframe method ImageDataGenerator.flow_from_dataframe( dataframe, directory=None, x_col="filename", y_col="class", weight_col=None, target_size=(256, 256), color_mode="rgb", classes=None, class_mode="categorical", batch_size=32, shuffle=True, seed=None, save_to_dir=None, save_prefix="", save_format="png", subset=None, interpolation="nearest", validate_filenames=True, **kwargs ) Takes the dataframe and the path to a directory + generates batches. The generated batches contain augmented/normalized data. A simple tutorial can be found here. Arguments dataframe: Pandas dataframe containing the filepaths relative to directory (or absolute paths if directory is None) of the images in a string column. 
It should include other column/s depending on the class_mode: - if class_mode is "categorical" (default value) it must include the y_col column with the class/es of each image. Values in the column can be a string if a single class, or a list/tuple if multiple classes. - if class_mode is "binary" or "sparse" it must include the given y_col column with class values as strings. - if class_mode is "raw" or "multi_output" it should contain the columns specified in y_col. - if class_mode is "input" or None no extra column is needed. directory: string, path to the directory to read images from. If None, data in x_col column should be absolute paths. x_col: string, column in dataframe that contains the filenames (or absolute paths if directory is None). y_col: string or list, column/s in dataframe that has the target data. weight_col: string, column in dataframe that contains the sample weights. Default: None. target_size: tuple of integers (height, width), default: (256, 256). The dimensions to which all images found will be resized. color_mode: one of "grayscale", "rgb", "rgba". Default: "rgb". Whether the images will be converted to have 1, 3, or 4 color channels. classes: optional list of classes (e.g. ['dogs', 'cats']). Default is None. If not provided, the list of classes will be automatically inferred from the y_col (and the order of the classes, which will map to the label indices, will be alphanumeric). The dictionary containing the mapping from class names to class indices can be obtained via the attribute class_indices. class_mode: one of "binary", "categorical", "input", "multi_output", "raw", "sparse" or None. Default: "categorical". Mode for yielding the targets: - "binary": 1D numpy array of binary labels, - "categorical": 2D numpy array of one-hot encoded labels. Supports multi-label output. - "input": images identical to input images (mainly used to work with autoencoders), - "multi_output": list with the values of the different columns, - "raw": numpy array of values in y_col column(s), - "sparse": 1D numpy array of integer labels, - None, no targets are returned (the generator will only yield batches of image data, which is useful to use in model.predict()). batch_size: size of the batches of data (default: 32). shuffle: whether to shuffle the data (default: True). seed: optional random seed for shuffling and transformations. save_to_dir: None or str (default: None). This allows you to optionally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing). save_prefix: str. Prefix to use for filenames of saved pictures (only relevant if save_to_dir is set). save_format: one of "png", "jpeg", "bmp", "pdf", "ppm", "gif", "tif", "jpg" (only relevant if save_to_dir is set). Default: "png". subset: Subset of data ("training" or "validation") if validation_split is set in ImageDataGenerator. interpolation: Interpolation method used to resample the image if the target size is different from that of the loaded image. Supported methods are "nearest", "bilinear", and "bicubic". If PIL version 1.1.3 or newer is installed, "lanczos" is also supported. If PIL version 3.4.0 or newer is installed, "box" and "hamming" are also supported. By default, "nearest" is used. validate_filenames: Boolean, whether to validate image filenames in x_col. If True, invalid images will be ignored. Disabling this option can lead to speed-up in the execution of this function. Defaults to True. **kwargs: legacy arguments for raising deprecation warnings. 
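A minimal usage sketch may help here; the dataframe contents and the data/images directory below are hypothetical placeholders, not files referenced by this guide:
```python
import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative only: a filename column plus a string class column.
df = pd.DataFrame({
    "filename": ["cat_001.jpg", "dog_001.jpg"],  # hypothetical image files
    "class": ["cat", "dog"],
})
datagen = ImageDataGenerator(rescale=1. / 255)
generator = datagen.flow_from_dataframe(
    df,
    directory="data/images",       # folder the filenames are relative to
    x_col="filename",
    y_col="class",
    target_size=(150, 150),
    class_mode="categorical",
    batch_size=32,
)
```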
Returns A DataFrameIterator yielding tuples of (x, y) where x is a numpy array containing a batch of images with shape (batch_size, *target_size, channels) and y is a numpy array of corresponding labels. flow_from_directory method ImageDataGenerator.flow_from_directory( directory, target_size=(256, 256), color_mode="rgb", classes=None, class_mode="categorical", batch_size=32, shuffle=True, seed=None, save_to_dir=None, save_prefix="", save_format="png", follow_links=False, subset=None, interpolation="nearest", ) Takes the path to a directory & generates batches of augmented data. Arguments directory: string, path to the target directory. It should contain one subdirectory per class. Any PNG, JPG, BMP, PPM or TIF images inside each of the subdirectories directory tree will be included in the generator. See this script for more details. target_size: Tuple of integers (height, width), defaults to (256, 256). The dimensions to which all images found will be resized. color_mode: One of "grayscale", "rgb", "rgba". Default: "rgb". Whether the images will be converted to have 1, 3, or 4 channels. classes: Optional list of class subdirectories (e.g. ['dogs', 'cats']). Default: None. If not provided, the list of classes will be automatically inferred from the subdirectory names/structure under directory, where each subdirectory will be treated as a different class (and the order of the classes, which will map to the label indices, will be alphanumeric). The dictionary containing the mapping from class names to class indices can be obtained via the attribute class_indices. class_mode: One of "categorical", "binary", "sparse", "input", or None. Default: "categorical". Determines the type of label arrays that are returned: - "categorical" will be 2D one-hot encoded labels, - "binary" will be 1D binary labels, - "sparse" will be 1D integer labels, - "input" will be images identical to input images (mainly used to work with autoencoders). - If None, no labels are returned (the generator will only yield batches of image data, which is useful to use with model.predict()). Please note that in case of class_mode None, the data still needs to reside in a subdirectory of directory for it to work correctly. batch_size: Size of the batches of data (default: 32). shuffle: Whether to shuffle the data (default: True) If set to False, sorts the data in alphanumeric order. seed: Optional random seed for shuffling and transformations. save_to_dir: None or str (default: None). This allows you to optionally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing). save_prefix: Str. Prefix to use for filenames of saved pictures (only relevant if save_to_dir is set). save_format: one of "png", "jpeg", "bmp", "pdf", "ppm", "gif", "tif", "jpg" (only relevant if save_to_dir is set). Default: "png". follow_links: Whether to follow symlinks inside class subdirectories (default: False). subset: Subset of data ("training" or "validation") if validation_split is set in ImageDataGenerator. interpolation: Interpolation method used to resample the image if the target size is different from that of the loaded image. Supported methods are "nearest", "bilinear", and "bicubic". If PIL version 1.1.3 or newer is installed, "lanczos" is also supported. If PIL version 3.4.0 or newer is installed, "box" and "hamming" are also supported. By default, "nearest" is used. 
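For the common classification case (as opposed to the class_mode=None segmentation setup shown earlier), a minimal flow_from_directory sketch might look like the following; the data/train directory layout is an assumption for the example.

```python
# Hedged sketch: assumes data/train/ contains one subdirectory per class,
# e.g. data/train/cats/ and data/train/dogs/, each holding image files.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255, horizontal_flip=True)

train_gen = datagen.flow_from_directory(
    "data/train",
    target_size=(150, 150),     # every image is resized to 150x150
    color_mode="rgb",
    class_mode="categorical",   # 2D one-hot encoded labels
    batch_size=32,
    shuffle=True,
    seed=1,
)

print(train_gen.class_indices)  # e.g. {'cats': 0, 'dogs': 1}
# model.fit(train_gen, epochs=10)  # the iterator can be passed directly to fit()
```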
Returns A DirectoryIterator yielding tuples of (x, y) where x is a numpy array containing a batch of images with shape (batch_size, *target_size, channels) and y is a numpy array of corresponding labels. Text data preprocessing text_dataset_from_directory function tf.keras.preprocessing.text_dataset_from_directory( directory, labels="inferred", label_mode="int", class_names=None, batch_size=32, max_length=None, shuffle=True, seed=None, validation_split=None, subset=None, follow_links=False, ) Generates a tf.data.Dataset from text files in a directory. If your directory structure is: main_directory/ ...class_a/ ......a_text_1.txt ......a_text_2.txt ...class_b/ ......b_text_1.txt ......b_text_2.txt Then calling text_dataset_from_directory(main_directory, labels='inferred') will return a tf.data.Dataset that yields batches of texts from the subdirectories class_a and class_b, together with labels 0 and 1 (0 corresponding to class_a and 1 corresponding to class_b). Only .txt files are supported at this time. Arguments directory: Directory where the data is located. If labels is "inferred", it should contain subdirectories, each containing text files for a class. Otherwise, the directory structure is ignored. labels: Either "inferred" (labels are generated from the directory structure), None (no labels), or a list/tuple of integer labels of the same size as the number of text files found in the directory. Labels should be sorted according to the alphanumeric order of the text file paths (obtained via os.walk(directory) in Python). label_mode: - 'int' means that the labels are encoded as integers (e.g. for sparse_categorical_crossentropy loss). - 'categorical' means that the labels are encoded as a categorical vector (e.g. for categorical_crossentropy loss). - 'binary' means that the labels (there can be only 2) are encoded as float32 scalars with values 0 or 1 (e.g. for binary_crossentropy). - None (no labels). class_names: Only valid if "labels" is "inferred". This is the explicit list of class names (must match names of subdirectories). Used to control the order of the classes (otherwise alphanumerical order is used). batch_size: Size of the batches of data. Default: 32. max_length: Maximum size of a text string. Texts longer than this will be truncated to max_length. shuffle: Whether to shuffle the data. Default: True. If set to False, sorts the data in alphanumeric order. seed: Optional random seed for shuffling and transformations. validation_split: Optional float between 0 and 1, fraction of data to reserve for validation. subset: One of "training" or "validation". Only used if validation_split is set. follow_links: Whether to visit subdirectories pointed to by symlinks. Defaults to False. Returns A tf.data.Dataset object. - If label_mode is None, it yields string tensors of shape (batch_size,), containing the contents of a batch of text files. - Otherwise, it yields a tuple (texts, labels), where texts has shape (batch_size,) and labels follows the format described below. Rules regarding labels format: - if label_mode is int, the labels are an int32 tensor of shape (batch_size,). - if label_mode is binary, the labels are a float32 tensor of 1s and 0s of shape (batch_size, 1). - if label_mode is categorical, the labels are a float32 tensor of shape (batch_size, num_classes), representing a one-hot encoding of the class index.
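A minimal sketch of the directory layout described above, assuming a hypothetical texts/ folder with one subdirectory per class:

```python
# Hedged sketch: assumes texts/ contains class_a/ and class_b/ subdirectories
# with .txt files, mirroring the structure shown above.
import tensorflow as tf

train_ds = tf.keras.preprocessing.text_dataset_from_directory(
    "texts",
    labels="inferred",    # labels come from the subdirectory names
    label_mode="int",     # integer labels, e.g. for sparse_categorical_crossentropy
    batch_size=32,
    validation_split=0.2,
    subset="training",
    seed=42,              # use the same seed when building the "validation" subset
)

for texts, labels in train_ds.take(1):
    # texts: (batch_size,) string tensor, labels: (batch_size,) int32 tensor
    print(texts.shape, labels.shape)
```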
Tokenizer class tf.keras.preprocessing.text.Tokenizer( num_words=None, filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n', lower=True, split=" ", char_level=False, oov_token=None, document_count=0, **kwargs ) Text tokenization utility class. This class allows you to vectorize a text corpus, by turning each text into either a sequence of integers (each integer being the index of a token in a dictionary) or into a vector where the coefficient for each token could be binary, based on word count, based on tf-idf... Arguments num_words: the maximum number of words to keep, based on word frequency. Only the most common num_words-1 words will be kept. filters: a string where each element is a character that will be filtered from the texts. The default is all punctuation, plus tabs and line breaks, minus the ' character. lower: boolean. Whether to convert the texts to lowercase. split: str. Separator for word splitting. char_level: if True, every character will be treated as a token. oov_token: if given, it will be added to word_index and used to replace out-of-vocabulary words during texts_to_sequences calls. By default, all punctuation is removed, turning the texts into space-separated sequences of words (words may include the ' character). These sequences are then split into lists of tokens. They will then be indexed or vectorized. 0 is a reserved index that won't be assigned to any word. Timeseries data preprocessing timeseries_dataset_from_array function tf.keras.preprocessing.timeseries_dataset_from_array( data, targets, sequence_length, sequence_stride=1, sampling_rate=1, batch_size=128, shuffle=False, seed=None, start_index=None, end_index=None, ) Creates a dataset of sliding windows over a timeseries provided as array. This function takes in a sequence of data-points gathered at equal intervals, along with time series parameters such as length of the sequences/windows, spacing between two sequences/windows, etc., to produce batches of timeseries inputs and targets. Arguments data: Numpy array or eager tensor containing consecutive data points (timesteps). Axis 0 is expected to be the time dimension. targets: Targets corresponding to timesteps in data. targets[i] should be the target corresponding to the window that starts at index i (see example 2 below). Pass None if you don't have target data (in this case the dataset will only yield the input data). sequence_length: Length of the output sequences (in number of timesteps). sequence_stride: Period between successive output sequences. For stride s, output samples would start at index data[i], data[i + s], data[i + 2 * s], etc. sampling_rate: Period between successive individual timesteps within sequences. For rate r, timesteps data[i], data[i + r], ... data[i + sequence_length] are used to create a sample sequence. batch_size: Number of timeseries samples in each batch (except maybe the last one). shuffle: Whether to shuffle output samples, or instead draw them in chronological order. seed: Optional int; random seed for shuffling. start_index: Optional int; data points earlier (exclusive) than start_index will not be used in the output sequences. This is useful to reserve part of the data for test or validation. end_index: Optional int; data points later (exclusive) than end_index will not be used in the output sequences. This is useful to reserve part of the data for test or validation. Returns A tf.data.Dataset instance. If targets was passed, the dataset yields tuple (batch_of_sequences, batch_of_targets).
If not, the dataset yields only batch_of_sequences. Example 1: Consider indices [0, 1, ... 99]. With sequence_length=10, sampling_rate=2, sequence_stride=3, shuffle=False, the dataset will yield batches of sequences composed of the following indices: First sequence: [0 2 4 6 8 10 12 14 16 18] Second sequence: [3 5 7 9 11 13 15 17 19 21] Third sequence: [6 8 10 12 14 16 18 20 22 24] ... Last sequence: [78 80 82 84 86 88 90 92 94 96] In this case the last 3 data points are discarded since no full sequence can be generated to include them (the next sequence would have started at index 81, and thus its last step would have gone over 99). Example 2: temporal regression. Consider an array data of scalar values, of shape (steps,). To generate a dataset that uses the past 10 timesteps to predict the next timestep, you would use: python input_data = data[:-10] targets = data[10:] dataset = tf.keras.preprocessing.timeseries_dataset_from_array( input_data, targets, sequence_length=10) for batch in dataset: inputs, targets = batch assert np.array_equal(inputs[0], data[:10]) # First sequence: steps [0-9] assert np.array_equal(targets[0], data[10]) # Corresponding target: step 10 break Example 3: temporal regression for many-to-many architectures. Consider two arrays of scalar values X and Y, both of shape (100,). The resulting dataset should consist samples with 20 timestamps each. The samples should not overlap. To generate a dataset that uses the current timestamp to predict the corresponding target timestep, you would use: ```python X = np.arange(100) Y = X*2 sample_length = 20 input_dataset = tf.keras.preprocessing.timeseries_dataset_from_array( X, None, sequence_length=sample_length, sequence_stride=sample_length) target_dataset = tf.keras.preprocessing.timeseries_dataset_from_array( Y, None, sequence_length=sample_length, sequence_stride=sample_length) for batch in zip(input_dataset, target_dataset): inputs, targets = batch assert np.array_equal(inputs[0], X[:sample_length]) # second sample equals output timestamps 20-40 assert np.array_equal(targets[1], Y[sample_length:2*sample_length]) break ``` pad_sequences function tf.keras.preprocessing.sequence.pad_sequences( sequences, maxlen=None, dtype="int32", padding="pre", truncating="pre", value=0.0 ) Pads sequences to the same length. This function transforms a list (of length num_samples) of sequences (lists of integers) into a 2D Numpy array of shape (num_samples, num_timesteps). num_timesteps is either the maxlen argument if provided, or the length of the longest sequence in the list. Sequences that are shorter than num_timesteps are padded with value until they are num_timesteps long. Sequences longer than num_timesteps are truncated so that they fit the desired length. The position where padding or truncation happens is determined by the arguments padding and truncating, respectively. Pre-padding or removing values from the beginning of the sequence is the default. 
>>> sequence = [[1], [2, 3], [4, 5, 6]] >>> tf.keras.preprocessing.sequence.pad_sequences(sequence) array([[0, 0, 1], [0, 2, 3], [4, 5, 6]], dtype=int32) >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, value=-1) array([[-1, -1, 1], [-1, 2, 3], [ 4, 5, 6]], dtype=int32) >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, padding='post') array([[1, 0, 0], [2, 3, 0], [4, 5, 6]], dtype=int32) >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, maxlen=2) array([[0, 1], [2, 3], [5, 6]], dtype=int32) Arguments sequences: List of sequences (each sequence is a list of integers). maxlen: Optional Int, maximum length of all sequences. If not provided, sequences will be padded to the length of the longest individual sequence. dtype: (Optional, defaults to int32). Type of the output sequences. To pad sequences with variable length strings, you can use object. padding: String, 'pre' or 'post' (optional, defaults to 'pre'): pad either before or after each sequence. truncating: String, 'pre' or 'post' (optional, defaults to 'pre'): remove values from sequences larger than maxlen, either at the beginning or at the end of the sequences. value: Float or String, padding value. (Optional, defaults to 0.) Returns Numpy array with shape (len(sequences), maxlen) Raises ValueError: In case of invalid values for truncating or padding, or in case of invalid shape for a sequences entry. TimeseriesGenerator class tf.keras.preprocessing.sequence.TimeseriesGenerator( data, targets, length, sampling_rate=1, stride=1, start_index=0, end_index=None, shuffle=False, reverse=False, batch_size=128, ) Utility class for generating batches of temporal data. This class takes in a sequence of data-points gathered at equal intervals, along with time series parameters such as stride, length of history, etc., to produce batches for training/validation. Arguments data: Indexable generator (such as list or Numpy array) containing consecutive data points (timesteps). The data should be at least 2D, and axis 0 is expected to be the time dimension. targets: Targets corresponding to timesteps in data. It should have the same length as data. length: Length of the output sequences (in number of timesteps). sampling_rate: Period between successive individual timesteps within sequences. For rate r, timesteps data[i], data[i-r], ... data[i - length] are used to create a sample sequence. stride: Period between successive output sequences. For stride s, consecutive output samples would be centered around data[i], data[i+s], data[i+2*s], etc. start_index: Data points earlier than start_index will not be used in the output sequences. This is useful to reserve part of the data for test or validation. end_index: Data points later than end_index will not be used in the output sequences. This is useful to reserve part of the data for test or validation. shuffle: Whether to shuffle output samples, or instead draw them in chronological order. reverse: Boolean: if True, timesteps in each output sample will be in reverse chronological order. batch_size: Number of timeseries samples in each batch (except maybe the last one). Returns A Sequence instance.
Example from keras.preprocessing.sequence import TimeseriesGenerator import numpy as np data = np.array([[i] for i in range(50)]) targets = np.array([[i] for i in range(50)]) data_gen = TimeseriesGenerator(data, targets, length=10, sampling_rate=2, batch_size=2) assert len(data_gen) == 20 batch_0 = data_gen[0] x, y = batch_0 assert np.array_equal(x, np.array([[[0], [2], [4], [6], [8]], [[1], [3], [5], [7], [9]]])) assert np.array_equal(y, np.array([[10], [11]])) Model training APIs compile method Model.compile( optimizer="rmsprop", loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, **kwargs ) Configures the model for training. Arguments optimizer: String (name of optimizer) or optimizer instance. See tf.keras.optimizers. loss: String (name of objective function), objective function or tf.keras.losses.Loss instance. See tf.keras.losses. An objective function is any callable with the signature loss = fn(y_true, y_pred), where y_true = ground truth values with shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]. y_pred = predicted values with shape = [batch_size, d0, .. dN]. It returns a weighted loss float tensor. If a custom Loss instance is used and reduction is set to NONE, return value has the shape [batch_size, d0, .. dN-1] i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses. metrics: List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list (len = len(outputs)) of lists of metrics such as metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']]. When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape. We do a similar conversion for the strings 'crossentropy' and 'ce' as well. loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients. weighted_metrics: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing. run_eagerly: Bool. Defaults to False. If True, this Model's logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function.
run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. steps_per_execution: Int. Defaults to 1. The number of batches to run during each tf.function call. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). **kwargs: Arguments supported for backwards compatibility only. Raises ValueError: In case of invalid arguments for optimizer, loss or metrics. fit method Model.fit( x=None, y=None, batch_size=None, epochs=1, verbose="auto", callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False, ) Trains the model for a fixed number of epochs (iterations on a dataset). Arguments x: Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights). A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights). A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information. A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x. y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x). batch_size: Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches). epochs: Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided. Note that in conjunction with initial_epoch, epochs is to be understood as "final epoch". The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached. verbose: 'auto', 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. 'auto' defaults to 1 for most cases, but 2 when used with ParameterServerStrategy. 
Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment). callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value. validation_split: Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy. validation_data: Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be: - A tuple (x_val, y_val) of Numpy arrays or tensors. - A tuple (x_val, y_val, val_sample_weights) of NumPy arrays. - A tf.data.Dataset. - A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights). validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy. shuffle: Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when x is a generator or an object of tf.data.Dataset. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None. class_weight: Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class. sample_weight: Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. initial_epoch: Integer. Epoch at which to start training (useful for resuming a previous training run). steps_per_epoch: Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. 
When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and 'steps_per_epoch' is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. This argument is not supported with array inputs. steps_per_epoch=None is not supported when using tf.distribute.experimental.ParameterServerStrategy. validation_steps: Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If 'validation_steps' is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If 'validation_steps' is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time. validation_batch_size: Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches). validation_freq: Only relevant if validation data is provided. Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs. max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10. workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes. Unpacking behavior for iterator-like inputs: A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as 'x'. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict. A notable unsupported data type is the namedtuple. 
The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form: namedtuple("example_tuple", ["y", "x"]) it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form: namedtuple("other_tuple", ["x", "y", "z"]) where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.) Returns A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable). Raises RuntimeError: 1. If the model was never compiled or, 2. If model.fit is wrapped in tf.function. ValueError: In case of mismatch between the provided input data and what the model expects or when the input data is empty. evaluate method Model.evaluate( x=None, y=None, batch_size=None, verbose=1, sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs ) Returns the loss value & metrics values for the model in test mode. Computation is done in batches (see the batch_size arg.) Arguments x: Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights). A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights). A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit. y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset). batch_size: Integer or None. Number of samples per batch of computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches). verbose: 0 or 1. Verbosity mode. 0 = silent, 1 = progress bar. sample_weight: Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x. steps: Integer or None. Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, 'evaluate' will run until the dataset is exhausted. 
This argument is not supported with array inputs. callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during evaluation. See callbacks. max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10. workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes. return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list. **kwargs: Unused at this time. See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Model.evaluate is not yet supported with tf.distribute.experimental.ParameterServerStrategy. Returns Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs. Raises RuntimeError: If model.evaluate is wrapped in tf.function. ValueError: in case of invalid arguments. predict method Model.predict( x, batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, ) Generates output predictions for the input samples. Computation is done in batches. This method is designed for performance in large scale inputs. For small amount of inputs that fit in one batch, directly using __call__ is recommended for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behaves differently during inference. Also, note the fact that test loss is not affected by regularization layers like noise and dropout. Arguments x: Input samples. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A tf.data dataset. A generator or keras.utils.Sequence instance. A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit. batch_size: Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches). verbose: Verbosity mode, 0 or 1. steps: Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict will run until the input dataset is exhausted. callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See callbacks. max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10. workers: Integer. 
Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can't be passed easily to children processes. See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods. Model.predict is not yet supported with tf.distribute.experimental.ParameterServerStrategy. Returns Numpy array(s) of predictions. Raises RuntimeError: If model.predict is wrapped in tf.function. ValueError: In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size. train_on_batch method Model.train_on_batch( x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False, ) Runs a single gradient update on a single batch of data. Arguments x: Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). sample_weight: Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. class_weight: Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model's loss for the samples from this class during training. This can be useful to tell the model to "pay more attention" to samples from an under-represented class. reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches. return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list. Returns Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs. Raises RuntimeError: If model.train_on_batch is wrapped in tf.function. ValueError: In case of invalid user-provided arguments. test_on_batch method Model.test_on_batch( x, y=None, sample_weight=None, reset_metrics=True, return_dict=False ) Test the model on a single batch of samples. Arguments x: Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). A dict mapping input names to the corresponding array/tensors, if the model has named inputs. y: Target data. 
Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). sample_weight: Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches. return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list. Returns Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs. Raises RuntimeError: If model.test_on_batch is wrapped in tf.function. ValueError: In case of invalid user-provided arguments. predict_on_batch method Model.predict_on_batch(x) Returns predictions for a single batch of samples. Arguments x: Input data. It could be: A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs). A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs). Returns Numpy array(s) of predictions. Raises RuntimeError: If model.predict_on_batch is wrapped in tf.function. ValueError: In case of mismatch between given number of inputs and expectations of the model. run_eagerly property tf.keras.Model.run_eagerly Settable attribute indicating whether the model should run eagerly. Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls. By default, we will attempt to compile your model to a static graph to deliver the best execution performance. Returns Boolean, whether the model should run eagerly. Model saving & serialization APIs save method Model.save( filepath, overwrite=True, include_optimizer=True, save_format=None, signatures=None, options=None, save_traces=True, ) Saves the model to Tensorflow SavedModel or a single HDF5 file. Please see tf.keras.models.save_model or the Serialization and Saving guide for details. Arguments filepath: String, PathLike, path to SavedModel or H5 file to save the model. overwrite: Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt. include_optimizer: If True, save optimizer's state together. save_format: Either 'tf' or 'h5', indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X. signatures: Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the signatures argument in tf.saved_model.save for details. options: (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel. save_traces: (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method. 
Example from keras.models import load_model model.save('my_model.h5') # creates a HDF5 file 'my_model.h5' del model # deletes the existing model # returns a compiled model # identical to the previous one model = load_model('my_model.h5') save_model function tf.keras.models.save_model( model, filepath, overwrite=True, include_optimizer=True, save_format=None, signatures=None, options=None, save_traces=True, ) Saves a model as a TensorFlow SavedModel or HDF5 file. See the Serialization and Saving guide for details. Usage: >>> model = tf.keras.Sequential([ ... tf.keras.layers.Dense(5, input_shape=(3,)), ... tf.keras.layers.Softmax()]) >>> model.save('/tmp/model') >>> loaded_model = tf.keras.models.load_model('/tmp/model') >>> x = tf.random.uniform((10, 3)) >>> assert np.allclose(model.predict(x), loaded_model.predict(x)) The SavedModel and HDF5 file contains: the model's configuration (topology) the model's weights the model's optimizer's state (if any) Thus models can be reinstantiated in the exact same state, without any of the code used for model definition or training. Note that the model weights may have different scoped names after being loaded. Scoped names include the model/layer names, such as "dense_1/kernel:0". It is recommended that you use the layer properties to access specific variables, e.g. model.get_layer("dense_1").kernel. SavedModel serialization format Keras SavedModel uses tf.saved_model.save to save the model and all trackable objects attached to the model (e.g. layers and variables). The model config, weights, and optimizer are saved in the SavedModel. Additionally, for every Keras layer attached to the model, the SavedModel stores: * the config and metadata -- e.g. name, dtype, trainable status * traced call and loss functions, which are stored as TensorFlow subgraphs. The traced functions allow the SavedModel format to save and load custom layers without the original class definition. You can choose to not save the traced functions by disabling the save_traces option. This will decrease the time it takes to save the model and the amount of disk space occupied by the output SavedModel. If you enable this option, then you must provide all custom class definitions when loading the model. See the custom_objects argument in tf.keras.models.load_model. Arguments model: Keras model instance to be saved. filepath: One of the following: String or pathlib.Path object, path where to save the model h5py.File object where to save the model overwrite: Whether we should overwrite any existing model at the target location, or instead ask the user with a manual prompt. include_optimizer: If True, save optimizer's state together. save_format: Either 'tf' or 'h5', indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X. signatures: Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the signatures argument in tf.saved_model.save for details. options: (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel. save_traces: (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method. 
Raises ImportError: If save format is hdf5, and h5py is not available. load_model function tf.keras.models.load_model( filepath, custom_objects=None, compile=True, options=None ) Loads a model saved via model.save(). Usage: >>> model = tf.keras.Sequential([ ... tf.keras.layers.Dense(5, input_shape=(3,)), ... tf.keras.layers.Softmax()]) >>> model.save('/tmp/model') >>> loaded_model = tf.keras.models.load_model('/tmp/model') >>> x = tf.random.uniform((10, 3)) >>> assert np.allclose(model.predict(x), loaded_model.predict(x)) Note that the model weights may have different scoped names after being loaded. Scoped names include the model/layer names, such as "dense_1/kernel:0". It is recommended that you use the layer properties to access specific variables, e.g. model.get_layer("dense_1").kernel. Arguments filepath: One of the following: - String or pathlib.Path object, path to the saved model - h5py.File object from which to load the model custom_objects: Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization. compile: Boolean, whether to compile the model after loading. options: Optional tf.saved_model.LoadOptions object that specifies options for loading from SavedModel. Returns A Keras model instance. If the original model was compiled, and saved with the optimizer, then the returned model will be compiled. Otherwise, the model will be left uncompiled. In the case that an uncompiled model is returned, a warning is displayed if the compile argument is set to True. Raises ImportError: if loading from an hdf5 file and h5py is not available. IOError: In case of an invalid savefile. get_weights method Model.get_weights() Retrieves the weights of the model. Returns A flat list of Numpy arrays. set_weights method Model.set_weights(weights) Sets the weights of the layer, from NumPy arrays. The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer's weights must be instantiated before calling this function, by calling the layer. For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer: >>> layer_a = tf.keras.layers.Dense(1, ... kernel_initializer=tf.constant_initializer(1.)) >>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]])) >>> layer_a.get_weights() [array([[1.], [1.], [1.]], dtype=float32), array([0.], dtype=float32)] >>> layer_b = tf.keras.layers.Dense(1, ... kernel_initializer=tf.constant_initializer(2.)) >>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]])) >>> layer_b.get_weights() [array([[2.], [2.], [2.]], dtype=float32), array([0.], dtype=float32)] >>> layer_b.set_weights(layer_a.get_weights()) >>> layer_b.get_weights() [array([[1.], [1.], [1.]], dtype=float32), array([0.], dtype=float32)] Arguments weights: a list of NumPy arrays. The number of arrays and their shape must match number of the dimensions of the weights of the layer (i.e. it should match the output of get_weights). Raises ValueError: If the provided weights list does not match the layer's specifications. save_weights method Model.save_weights(filepath, overwrite=True, save_format=None, options=None) Saves all layer weights. Either saves in HDF5 or in TensorFlow format based on the save_format argument. 
When saving in HDF5 format, the weight file has: - layer_names (attribute), a list of strings (ordered names of model layers). - For every layer, a group named layer.name - For every such layer group, a group attribute weight_names, a list of strings (ordered names of weights tensor of the layer). - For every weight in the layer, a dataset storing the weight value, named after the weight tensor. When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details. While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints. The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model's variables. See the guide to training checkpoints for details on the TensorFlow format. Arguments filepath: String or PathLike, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the '.h5' suffix causes weights to be saved in HDF5 format. overwrite: Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt. save_format: Either 'tf' or 'h5'. A filepath ending in '.h5' or '.keras' will default to HDF5 if save_format is None. Otherwise None defaults to 'tf'. options: Optional tf.train.CheckpointOptions object that specifies options for saving weights. Raises ImportError: If h5py is not available when attempting to save in HDF5 format. ValueError: For invalid/unknown format arguments. load_weights method Model.load_weights(filepath, by_name=False, skip_mismatch=False, options=None) Loads all layer weights, either from a TensorFlow or an HDF5 weight file. If by_name is False weights are loaded based on the network's topology. This means the architecture should be the same as when the weights were saved. Note that layers that don't have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don't have weights. If by_name is True, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed. Only topological loading (by_name=False) is supported when loading weights from the TensorFlow format. 
Note that topological loading differs slightly between TensorFlow and HDF5 formats for user-defined classes inheriting from tf.keras.Model: HDF5 loads based on a flattened list of weights, while the TensorFlow format loads based on the object-local names of attributes to which layers are assigned in the Model's constructor. Arguments filepath: String, path to the weights file to load. For weight files in TensorFlow format, this is the file prefix (the same as was passed to save_weights). This can also be a path to a SavedModel saved from model.save. by_name: Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in TensorFlow format. skip_mismatch: Boolean, whether to skip loading of layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weight (only valid when by_name=True). options: Optional tf.train.CheckpointOptions object that specifies options for loading weights. Returns When loading a weight file in TensorFlow format, returns the same status object as tf.train.Checkpoint.restore. When graph building, restore ops are run automatically as soon as the network is built (on first call for user-defined classes inheriting from Model, immediately if it is already built). When loading weights in HDF5 format, returns None. Raises ImportError: If h5py is not available and the weight file is in HDF5 format. ValueError: If skip_mismatch is set to True when by_name is False. get_config method Model.get_config() Returns the config of the layer. A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration. The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above). Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it. Returns Python dictionary. from_config method Model.from_config(config, custom_objects=None) Creates a layer from its config. This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights). Arguments config: A Python dictionary, typically the output of get_config. Returns A layer instance. model_from_config function tf.keras.models.model_from_config(config, custom_objects=None) Instantiates a Keras model from its config. Usage: # for a Functional API model tf.keras.Model().from_config(model.get_config()) # for a Sequential model tf.keras.Sequential().from_config(model.get_config()) Arguments config: Configuration dictionary. custom_objects: Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization. Returns A Keras model instance (uncompiled). Raises TypeError: if config is not a dictionary. to_json method Model.to_json(**kwargs) Returns a JSON string containing the network configuration. To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}). Arguments **kwargs: Additional keyword arguments to be passed to json.dumps(). Returns A JSON string. 
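Tying together the save_weights and load_weights methods described above, a minimal round-trip sketch might look as follows; the checkpoint path and the small example architecture are assumptions made for the illustration.

```python
# Hedged sketch: saves weights to a TensorFlow-format checkpoint prefix and
# restores them into a freshly built model with the same architecture.
import tensorflow as tf

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(16,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

model = build_model()
model.save_weights("/tmp/ckpt/my_checkpoint")    # TF format: several files share this prefix

restored = build_model()                          # same topology, freshly initialized weights
restored.load_weights("/tmp/ckpt/my_checkpoint")  # topological loading (by_name=False)

# Saving with a '.h5' suffix switches to the HDF5 format instead:
# model.save_weights("/tmp/weights.h5")
```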
model_from_json function tf.keras.models.model_from_json(json_string, custom_objects=None) Parses a JSON model configuration string and returns a model instance. Usage: >>> model = tf.keras.Sequential([ ... tf.keras.layers.Dense(5, input_shape=(3,)), ... tf.keras.layers.Softmax()]) >>> config = model.to_json() >>> loaded_model = tf.keras.models.model_from_json(config) Arguments json_string: JSON string encoding a model configuration. custom_objects: Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization. Returns A Keras model instance (uncompiled). clone_model function tf.keras.models.clone_model(model, input_tensors=None, clone_function=None) Clone a Functional or Sequential Model instance. Model cloning is similar to calling a model on new inputs, except that it creates new layers (and thus new weights) instead of sharing the weights of the existing layers. Note that clone_model will not preserve the uniqueness of shared objects within the model (e.g. a single variable attached to two distinct layers will be restored as two separate variables). Arguments model: Instance of Model (could be a Functional model or a Sequential model). input_tensors: optional list of input tensors or InputLayer objects to build the model upon. If not provided, new Input objects will be created. clone_function: Callable to be used to clone each layer in the target model (except InputLayer instances). It takes as argument the layer instance to be cloned, and returns the corresponding layer instance to be used in the model copy. If unspecified, this callable defaults to the following serialization/deserialization function: lambda layer: layer.__class__.from_config(layer.get_config()). By passing a custom callable, you can customize your copy of the model, e.g. by wrapping certain layers of interest (you might want to replace all LSTM instances with equivalent Bidirectional(LSTM(...)) instances, for example). Returns An instance of Model reproducing the behavior of the original model, on top of new inputs tensors, using newly instantiated weights. The cloned model may behave differently from the original model if a custom clone_function modifies the layer. Example # Create a test Sequential model. model = keras.Sequential([ keras.Input(shape=(728,)), keras.layers.Dense(32, activation='relu'), keras.layers.Dense(1, activation='sigmoid'), ]) # Create a copy of the test model (with freshly initialized weights). new_model = clone_model(model) Note that subclassed models cannot be cloned, since their internal layer structure is not known. To achieve equivalent functionality as clone_model in the case of a subclassed model, simply make sure that the model class implements get_config() (and optionally from_config()), and call: new_model = model.__class__.from_config(model.get_config())The Model class Model class tf.keras.Model() Model groups layers into an object with training and inference features. Arguments inputs: The input(s) of the model: a keras.Input object or list of keras.Input objects. outputs: The output(s) of the model. See Functional API example below. name: String, the name of the model. 
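As an illustration of the clone_function hook, the sketch below clones a toy model while wrapping each LSTM layer in a Bidirectional wrapper, along the lines suggested above (the model itself is an assumption):

import tensorflow as tf
from tensorflow import keras

# Toy sequence model containing an LSTM layer (assumed example).
model = keras.Sequential([
    keras.Input(shape=(10, 8)),
    keras.layers.LSTM(16),
    keras.layers.Dense(1, activation="sigmoid"),
])

def clone_fn(layer):
    # Replace LSTM layers with Bidirectional(LSTM(...)); copy everything else from config.
    if isinstance(layer, keras.layers.LSTM):
        return keras.layers.Bidirectional(keras.layers.LSTM.from_config(layer.get_config()))
    return layer.__class__.from_config(layer.get_config())

new_model = keras.models.clone_model(model, clone_function=clone_fn)
new_model.summary()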
There are two ways to instantiate a Model: 1 - With the "Functional API", where you start from Input, you chain layer calls to specify the model's forward pass, and finally you create your model from inputs and outputs: import tensorflow as tf inputs = tf.keras.Input(shape=(3,)) x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs) outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x) model = tf.keras.Model(inputs=inputs, outputs=outputs) 2 - By subclassing the Model class: in that case, you should define your layers in __init__ and you should implement the model's forward pass in call. import tensorflow as tf class MyModel(tf.keras.Model): def __init__(self): super(MyModel, self).__init__() self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu) self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax) def call(self, inputs): x = self.dense1(inputs) return self.dense2(x) model = MyModel() If you subclass Model, you can optionally have a training argument (boolean) in call, which you can use to specify a different behavior in training and inference: import tensorflow as tf class MyModel(tf.keras.Model): def __init__(self): super(MyModel, self).__init__() self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu) self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax) self.dropout = tf.keras.layers.Dropout(0.5) def call(self, inputs, training=False): x = self.dense1(inputs) if training: x = self.dropout(x, training=training) return self.dense2(x) model = MyModel() Once the model is created, you can config the model with losses and metrics with model.compile(), train the model with model.fit(), or use the model to do prediction with model.predict(). summary method Model.summary(line_length=None, positions=None, print_fn=None) Prints a string summary of the network. Arguments line_length: Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes). positions: Relative or absolute positions of log elements in each line. If not provided, defaults to [.33, .55, .67, 1.]. print_fn: Print function to use. Defaults to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary. Raises ValueError: if summary() is called before the model is built. get_layer method Model.get_layer(name=None, index=None) Retrieves a layer based on either its name (unique) or index. If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up). Arguments name: String, name of layer. index: Integer, index of layer. Returns A layer instance. Raises ValueError: In case of invalid layer name or index.The Sequential class Sequential class tf.keras.Sequential(layers=None, name=None) Sequential groups a linear stack of layers into a tf.keras.Model. Sequential provides training and inference features on this model. Examples >>> # Optionally, the first layer can receive an `input_shape` argument: >>> model = tf.keras.Sequential() >>> model.add(tf.keras.layers.Dense(8, input_shape=(16,))) >>> # Afterwards, we do automatic shape inference: >>> model.add(tf.keras.layers.Dense(4)) >>> # This is identical to the following: >>> model = tf.keras.Sequential() >>> model.add(tf.keras.Input(shape=(16,))) >>> model.add(tf.keras.layers.Dense(8)) >>> # Note that you can also omit the `input_shape` argument. 
>>> # In that case the model doesn't have any weights until the first call >>> # to a training/evaluation method (since it isn't yet built): >>> model = tf.keras.Sequential() >>> model.add(tf.keras.layers.Dense(8)) >>> model.add(tf.keras.layers.Dense(4)) >>> # model.weights not created yet >>> # Whereas if you specify the input shape, the model gets built >>> # continuously as you are adding layers: >>> model = tf.keras.Sequential() >>> model.add(tf.keras.layers.Dense(8, input_shape=(16,))) >>> model.add(tf.keras.layers.Dense(4)) >>> len(model.weights) 4 >>> # When using the delayed-build pattern (no input shape specified), you can >>> # choose to manually build your model by calling >>> # `build(batch_input_shape)`: >>> model = tf.keras.Sequential() >>> model.add(tf.keras.layers.Dense(8)) >>> model.add(tf.keras.layers.Dense(4)) >>> model.build((None, 16)) >>> len(model.weights) 4 # Note that when using the delayed-build pattern (no input shape specified), # the model gets built the first time you call `fit`, `eval`, or `predict`, # or the first time you call the model on some input data. model = tf.keras.Sequential() model.add(tf.keras.layers.Dense(8)) model.add(tf.keras.layers.Dense(1)) model.compile(optimizer='sgd', loss='mse') # This builds the model for the first time: model.fit(x, y, batch_size=32, epochs=10) add method Sequential.add(layer) Adds a layer instance on top of the layer stack. Arguments layer: layer instance. Raises TypeError: If layer is not a layer instance. ValueError: In case the layer argument does not know its input shape. ValueError: In case the layer argument has multiple output tensors, or is already connected somewhere else (forbidden in Sequential models). pop method Sequential.pop() Removes the last layer in the model. Raises TypeError: if there are no layers in the model. SGD SGD class tf.keras.optimizers.SGD( learning_rate=0.01, momentum=0.0, nesterov=False, name="SGD", **kwargs ) Gradient descent (with momentum) optimizer. Update rule for parameter w with gradient g when momentum is 0: w = w - learning_rate * g Update rule when momentum is larger than 0: velocity = momentum * velocity - learning_rate * g w = w + velocity When nesterov=True, this rule becomes: velocity = momentum * velocity - learning_rate * g w = w + momentum * velocity - learning_rate * g Arguments learning_rate: A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule, or a callable that takes no arguments and returns the actual value to use. The learning rate. Defaults to 0.01. momentum: float hyperparameter >= 0 that accelerates gradient descent in the relevant direction and dampens oscillations. Defaults to 0, i.e., vanilla gradient descent. nesterov: boolean. Whether to apply Nesterov momentum. Defaults to False. name: Optional name prefix for the operations created when applying gradients. Defaults to "SGD". **kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. 
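As a rough cross-check of the momentum update rule above, here is a tiny NumPy-free sketch (purely illustrative, not the optimizer's implementation; the quadratic toy loss is an assumption):

learning_rate, momentum = 0.1, 0.9
w, velocity = 1.0, 0.0

for step in range(3):
    g = w  # gradient of the toy loss w**2 / 2 with respect to w
    velocity = momentum * velocity - learning_rate * g
    w = w + velocity
    print(step, w)

# The first step moves w by -learning_rate * g = -0.1; later steps are larger
# because the accumulated velocity term kicks in.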
Usage: >>> opt = tf.keras.optimizers.SGD(learning_rate=0.1) >>> var = tf.Variable(1.0) >>> loss = lambda: (var ** 2)/2.0 # d(loss)/d(var1) = var1 >>> step_count = opt.minimize(loss, [var]).numpy() >>> # Step is `- learning_rate * grad` >>> var.numpy() 0.9 >>> opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9) >>> var = tf.Variable(1.0) >>> val0 = var.value() >>> loss = lambda: (var ** 2)/2.0 # d(loss)/d(var1) = var1 >>> # First step is `- learning_rate * grad` >>> step_count = opt.minimize(loss, [var]).numpy() >>> val1 = var.value() >>> (val0 - val1).numpy() 0.1 >>> # On later steps, step-size increases because of momentum >>> step_count = opt.minimize(loss, [var]).numpy() >>> val2 = var.value() >>> (val1 - val2).numpy() 0.18 Reference For nesterov=True, See Sutskever et al., 2013.Adadelta Adadelta class tf.keras.optimizers.Adadelta( learning_rate=0.001, rho=0.95, epsilon=1e-07, name="Adadelta", **kwargs ) Optimizer that implements the Adadelta algorithm. Adadelta optimization is a stochastic gradient descent method that is based on adaptive learning rate per dimension to address two drawbacks: The continual decay of learning rates throughout training. The need for a manually selected global learning rate. Adadelta is a more robust extension of Adagrad that adapts learning rates based on a moving window of gradient updates, instead of accumulating all past gradients. This way, Adadelta continues learning even when many updates have been done. Compared to Adagrad, in the original version of Adadelta you don't have to set an initial learning rate. In this version, the initial learning rate can be set, as in most other Keras optimizers. Arguments learning_rate: Initial value for the learning rate: either a floating point value, or a tf.keras.optimizers.schedules.LearningRateSchedule instance. Defaults to 0.001. Note that Adadelta tends to benefit from higher initial learning rate values compared to other optimizers. To match the exact form in the original paper, use 1.0. rho: A Tensor or a floating point value. The decay rate. epsilon: Small floating point value used to maintain numerical stability. name: Optional name prefix for the operations created when applying gradients. Defaults to "Adadelta". **kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm and represents the maximum norm of each parameter; "clipvalue" (float) clips gradient by value and represents the maximum absolute value of each parameter. Reference Zeiler, 2012Adam Adam class tf.keras.optimizers.Adam( learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False, name="Adam", **kwargs ) Optimizer that implements the Adam algorithm. Adam optimization is a stochastic gradient descent method that is based on adaptive estimation of first-order and second-order moments. According to Kingma et al., 2014, the method is "computationally efficient, has little memory requirement, invariant to diagonal rescaling of gradients, and is well suited for problems that are large in terms of data/parameters". Arguments learning_rate: A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule, or a callable that takes no arguments and returns the actual value to use, The learning rate. Defaults to 0.001. beta_1: A float value or a constant float tensor, or a callable that takes no arguments and returns the actual value to use. The exponential decay rate for the 1st moment estimates. Defaults to 0.9. 
beta_2: A float value or a constant float tensor, or a callable that takes no arguments and returns the actual value to use, The exponential decay rate for the 2nd moment estimates. Defaults to 0.999. epsilon: A small constant for numerical stability. This epsilon is "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), not the epsilon in Algorithm 1 of the paper. Defaults to 1e-7. amsgrad: Boolean. Whether to apply AMSGrad variant of this algorithm from the paper "On the Convergence of Adam and beyond". Defaults to False. name: Optional name for the operations created when applying gradients. Defaults to "Adam". **kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Usage: >>> opt = tf.keras.optimizers.Adam(learning_rate=0.1) >>> var1 = tf.Variable(10.0) >>> loss = lambda: (var1 ** 2)/2.0 # d(loss)/d(var1) == var1 >>> step_count = opt.minimize(loss, [var1]).numpy() >>> # The first step is `-learning_rate*sign(grad)` >>> var1.numpy() 9.9 Reference Kingma et al., 2014 Reddi et al., 2018 for amsgrad. Notes: The default value of 1e-7 for epsilon might not be a good default in general. For example, when training an Inception network on ImageNet a current good choice is 1.0 or 0.1. Note that since Adam uses the formulation just before Section 2.1 of the Kingma and Ba paper rather than the formulation in Algorithm 1, the "epsilon" referred to here is "epsilon hat" in the paper. The sparse implementation of this algorithm (used when the gradient is an IndexedSlices object, typically because of tf.gather or an embedding lookup in the forward pass) does apply momentum to variable slices even if they were not used in the forward pass (meaning they have a gradient equal to zero). Momentum decay (beta1) is also applied to the entire momentum accumulator. This means that the sparse behavior is equivalent to the dense behavior (in contrast to some momentum implementations which ignore momentum unless a variable slice was actually used). RMSprop RMSprop class tf.keras.optimizers.RMSprop( learning_rate=0.001, rho=0.9, momentum=0.0, epsilon=1e-07, centered=False, name="RMSprop", **kwargs ) Optimizer that implements the RMSprop algorithm. The gist of RMSprop is to: Maintain a moving (discounted) average of the square of gradients Divide the gradient by the root of this average This implementation of RMSprop uses plain momentum, not Nesterov momentum. The centered version additionally maintains a moving average of the gradients, and uses that average to estimate the variance. Arguments learning_rate: A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule, or a callable that takes no arguments and returns the actual value to use. The learning rate. Defaults to 0.001. rho: Discounting factor for the history/coming gradient. Defaults to 0.9. momentum: A scalar or a scalar Tensor. Defaults to 0.0. epsilon: A small constant for numerical stability. This epsilon is "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), not the epsilon in Algorithm 1 of the paper. Defaults to 1e-7. centered: Boolean. If True, gradients are normalized by the estimated variance of the gradient; if False, by the uncentered second moment. Setting this to True may help with training, but is slightly more expensive in terms of computation and memory. Defaults to False. 
name: Optional name prefix for the operations created when applying gradients. Defaults to "RMSprop". **kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Note that in the dense implementation of this algorithm, variables and their corresponding accumulators (momentum, gradient moving average, square gradient moving average) will be updated even if the gradient is zero (i.e. accumulators will decay, momentum will be applied). The sparse implementation (used when the gradient is an IndexedSlices object, typically because of tf.gather or an embedding lookup in the forward pass) will not update variable slices or their accumulators unless those slices were used in the forward pass (nor is there an "eventual" correction to account for these omitted updates). This leads to more efficient updates for large embedding lookup tables (where most of the slices are not accessed in a particular graph execution), but differs from the published algorithm. Usage: >>> opt = tf.keras.optimizers.RMSprop(learning_rate=0.1) >>> var1 = tf.Variable(10.0) >>> loss = lambda: (var1 ** 2) / 2.0 # d(loss) / d(var1) = var1 >>> step_count = opt.minimize(loss, [var1]).numpy() >>> var1.numpy() 9.683772 Reference Hinton, 2012 Adamax Adamax class tf.keras.optimizers.Adamax( learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, name="Adamax", **kwargs ) Optimizer that implements the Adamax algorithm. It is a variant of Adam based on the infinity norm. Default parameters follow those provided in the paper. Adamax is sometimes superior to Adam, especially in models with embeddings. Initialization: m = 0 # Initialize initial 1st moment vector v = 0 # Initialize the exponentially weighted infinity norm t = 0 # Initialize timestep The update rule for parameter w with gradient g is described at the end of section 7.1 of the paper: t += 1 m = beta_1 * m + (1 - beta_1) * g v = max(beta_2 * v, abs(g)) current_lr = learning_rate / (1 - beta_1 ** t) w = w - current_lr * m / (v + epsilon) Similarly to Adam, the epsilon is added for numerical stability (especially to get rid of division by zero when v_t == 0). In contrast to Adam, the sparse implementation of this algorithm (used when the gradient is an IndexedSlices object, typically because of tf.gather or an embedding lookup in the forward pass) only updates variable slices and corresponding m_t, v_t terms when that part of the variable was used in the forward pass. This means that the sparse behavior is in contrast to the dense behavior (similar to some momentum implementations which ignore momentum unless a variable slice was actually used). Arguments learning_rate: A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate. beta_1: A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates. beta_2: A float value or a constant float tensor. The exponential decay rate for the exponentially weighted infinity norm. epsilon: A small constant for numerical stability. name: Optional name for the operations created when applying gradients. Defaults to "Adamax". **kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value.
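A small sketch of the Adamax update rule quoted above (illustrative only; the scalar parameter and quadratic toy loss are assumptions):

learning_rate, beta_1, beta_2, epsilon = 0.001, 0.9, 0.999, 1e-7
w, m, v, t = 1.0, 0.0, 0.0, 0

for _ in range(5):
    t += 1
    g = w  # gradient of the toy loss w**2 / 2
    m = beta_1 * m + (1 - beta_1) * g
    v = max(beta_2 * v, abs(g))              # exponentially weighted infinity norm
    current_lr = learning_rate / (1 - beta_1 ** t)
    w = w - current_lr * m / (v + epsilon)

print(w)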
Reference Kingma et al., 2014 Ftrl Ftrl class tf.keras.optimizers.Ftrl( learning_rate=0.001, learning_rate_power=-0.5, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, name="Ftrl", l2_shrinkage_regularization_strength=0.0, beta=0.0, **kwargs ) Optimizer that implements the FTRL algorithm. "Follow The Regularized Leader" (FTRL) is an optimization algorithm developed at Google for click-through rate prediction in the early 2010s. It is most suitable for shallow models with large and sparse feature spaces. The algorithm is described in this paper. The Keras version has support for both online L2 regularization (the L2 regularization described in the paper above) and shrinkage-type L2 regularization (which is the addition of an L2 penalty to the loss function). Initialization: n = 0 sigma = 0 z = 0 Update rule for one variable w: prev_n = n n = n + g ** 2 sigma = (sqrt(n) - sqrt(prev_n)) / lr z = z + g - sigma * w if abs(z) < lambda_1: w = 0 else: w = (sgn(z) * lambda_1 - z) / ((beta + sqrt(n)) / alpha + lambda_2) Notation: lr is the learning rate g is the gradient for the variable lambda_1 is the L1 regularization strength lambda_2 is the L2 regularization strength Check the documentation for the l2_shrinkage_regularization_strength parameter for more details when shrinkage is enabled, in which case gradient is replaced with a gradient with shrinkage. Arguments learning_rate: A Tensor, floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate. learning_rate_power: A float value, must be less or equal to zero. Controls how the learning rate decreases during training. Use zero for a fixed learning rate. initial_accumulator_value: The starting value for accumulators. Only zero or positive values are allowed. l1_regularization_strength: A float value, must be greater than or equal to zero. Defaults to 0.0. l2_regularization_strength: A float value, must be greater than or equal to zero. Defaults to 0.0. name: Optional name prefix for the operations created when applying gradients. Defaults to "Ftrl". l2_shrinkage_regularization_strength: A float value, must be greater than or equal to zero. This differs from L2 above in that the L2 above is a stabilization penalty, whereas this L2 shrinkage is a magnitude penalty. When input is sparse shrinkage will only happen on the active weights. beta: A float value, representing the beta value from the paper. Defaults to 0.0. **kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Reference Original paper Nadam Nadam class tf.keras.optimizers.Nadam( learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, name="Nadam", **kwargs ) Optimizer that implements the NAdam algorithm. Much like Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum. Arguments learning_rate: A Tensor or a floating point value. The learning rate. beta_1: A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates. beta_2: A float value or a constant float tensor. The exponential decay rate for the exponentially weighted infinity norm. epsilon: A small constant for numerical stability. name: Optional name for the operations created when applying gradients. Defaults to "Nadam". **kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". 
"clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Usage # Example opt = tf.keras.optimizers.Nadam(learning_rate=0.2) var1 = tf.Variable(10.0) loss = lambda: (var1 ** 2) / 2.0 step_count = opt.minimize(loss, [var1]).numpy() "{:.1f}".format(var1.numpy()) 9.8 Reference Dozat, 2015. Adagrad Adagrad class tf.keras.optimizers.Adagrad( learning_rate=0.001, initial_accumulator_value=0.1, epsilon=1e-07, name="Adagrad", **kwargs ) Optimizer that implements the Adagrad algorithm. Adagrad is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training. The more updates a parameter receives, the smaller the updates. Arguments learning_rate: Initial value for the learning rate: either a floating point value, or a tf.keras.optimizers.schedules.LearningRateSchedule instance. Defaults to 0.001. Note that Adagrad tends to benefit from higher initial learning rate values compared to other optimizers. To match the exact form in the original paper, use 1.0. initial_accumulator_value: Floating point value. Starting value for the accumulators (per-parameter momentum values). Must be non-negative. epsilon: Small floating point value used to maintain numerical stability. name: Optional name prefix for the operations created when applying gradients. Defaults to "Adagrad". **kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm and represents the maximum L2 norm of each weight variable; "clipvalue" (float) clips gradient by value and represents the maximum absolute value of each weight variable. Reference Duchi et al., 2011.TerminateOnNaN TerminateOnNaN class tf.keras.callbacks.TerminateOnNaN() Callback that terminates training when a NaN loss is encountered. ProgbarLogger ProgbarLogger class tf.keras.callbacks.ProgbarLogger(count_mode="samples", stateful_metrics=None) Callback that prints metrics to stdout. Arguments count_mode: One of "steps" or "samples". Whether the progress bar should count samples seen or steps (batches) seen. stateful_metrics: Iterable of string names of metrics that should not be averaged over an epoch. Metrics in this list will be logged as-is. All others will be averaged over time (e.g. loss, etc). If not provided, defaults to the Model's metrics. Raises ValueError: In case of invalid count_mode. ModelCheckpoint ModelCheckpoint class tf.keras.callbacks.ModelCheckpoint( filepath, monitor="val_loss", verbose=0, save_best_only=False, save_weights_only=False, mode="auto", save_freq="epoch", options=None, **kwargs ) Callback to save the Keras model or model weights at some frequency. ModelCheckpoint callback is used in conjunction with training using model.fit() to save a model or weights (in a checkpoint file) at some interval, so the model or weights can be loaded later to continue the training from the state saved. A few options this callback provides include: Whether to only keep the model that has achieved the "best performance" so far, or whether to save the model at the end of every epoch regardless of performance. Definition of 'best'; which quantity to monitor and whether it should be maximized or minimized. The frequency it should save at. Currently, the callback supports saving at the end of every epoch, or after a fixed number of training batches. Whether only weights are saved, or the whole model is saved. 
Note: If you get a warning like WARNING:tensorflow:Can save best model only with <monitored metric> available, skipping, see the description of the monitor argument for details on how to get this right. Example model.compile(loss=..., optimizer=..., metrics=['accuracy']) EPOCHS = 10 checkpoint_filepath = '/tmp/checkpoint' model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_filepath, save_weights_only=True, monitor='val_accuracy', mode='max', save_best_only=True) # Model weights are saved at the end of every epoch, if it's the best seen # so far. model.fit(epochs=EPOCHS, callbacks=[model_checkpoint_callback]) # The model weights (that are considered the best) are loaded into the model. model.load_weights(checkpoint_filepath) Arguments filepath: string or PathLike, path to save the model file. e.g. filepath = os.path.join(working_dir, 'ckpt', file_name). filepath can contain named formatting options, which will be filled with the value of epoch and keys in logs (passed in on_epoch_end). For example: if filepath is weights.{epoch:02d}-{val_loss:.2f}.hdf5, then the model checkpoints will be saved with the epoch number and the validation loss in the filename. The directory of the filepath should not be reused by any other callbacks to avoid conflicts. monitor: The metric name to monitor. Typically the metrics are set by the Model.compile method. Note: Prefix the name with "val_" to monitor validation metrics. Use "loss" or "val_loss" to monitor the model's total loss. If you specify metrics as strings, like "accuracy", pass the same string (with or without the "val_" prefix). If you pass metrics.Metric objects, monitor should be set to metric.name. If you're not sure about the metric names you can check the contents of the history.history dictionary returned by history = model.fit(). Multi-output models set additional prefixes on the metric names. verbose: verbosity mode, 0 or 1. save_best_only: if save_best_only=True, it only saves when the model is considered the "best" and the latest best model according to the quantity monitored will not be overwritten. If filepath doesn't contain formatting options like {epoch} then filepath will be overwritten by each new better model. mode: one of {'auto', 'min', 'max'}. If save_best_only=True, the decision to overwrite the current save file is made based on either the maximization or the minimization of the monitored quantity. For val_acc, this should be max, for val_loss this should be min, etc. In auto mode, the mode is set to max if the quantities monitored are 'acc' or start with 'fmeasure' and are set to min for the rest of the quantities. save_weights_only: if True, then only the model's weights will be saved (model.save_weights(filepath)), else the full model is saved (model.save(filepath)). save_freq: 'epoch' or integer. When using 'epoch', the callback saves the model after each epoch. When using integer, the callback saves the model at the end of this many batches. If the Model is compiled with steps_per_execution=N, then the saving criteria will be checked every Nth batch. Note that if the saving isn't aligned to epochs, the monitored metric may potentially be less reliable (it could reflect as little as 1 batch, since the metrics get reset every epoch). Defaults to 'epoch'. options: Optional tf.train.CheckpointOptions object if save_weights_only is true or optional tf.saved_model.SaveOptions object if save_weights_only is false. **kwargs: Additional arguments for backwards compatibility.
Possible key is period.LearningRateScheduler LearningRateScheduler class tf.keras.callbacks.LearningRateScheduler(schedule, verbose=0) Learning rate scheduler. At the beginning of every epoch, this callback gets the updated learning rate value from schedule function provided at __init__, with the current epoch and current learning rate, and applies the updated learning rate on the optimizer. Arguments schedule: a function that takes an epoch index (integer, indexed from 0) and current learning rate (float) as inputs and returns a new learning rate as output (float). verbose: int. 0: quiet, 1: update messages. Example >>> # This function keeps the initial learning rate for the first ten epochs >>> # and decreases it exponentially after that. >>> def scheduler(epoch, lr): ... if epoch < 10: ... return lr ... else: ... return lr * tf.math.exp(-0.1) >>> >>> model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) >>> model.compile(tf.keras.optimizers.SGD(), loss='mse') >>> round(model.optimizer.lr.numpy(), 5) 0.01 >>> callback = tf.keras.callbacks.LearningRateScheduler(scheduler) >>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5), ... epochs=15, callbacks=[callback], verbose=0) >>> round(model.optimizer.lr.numpy(), 5) 0.00607ReduceLROnPlateau ReduceLROnPlateau class tf.keras.callbacks.ReduceLROnPlateau( monitor="val_loss", factor=0.1, patience=10, verbose=0, mode="auto", min_delta=0.0001, cooldown=0, min_lr=0, **kwargs ) Reduce learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced. Example reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.001) model.fit(X_train, Y_train, callbacks=[reduce_lr]) Arguments monitor: quantity to be monitored. factor: factor by which the learning rate will be reduced. new_lr = lr * factor. patience: number of epochs with no improvement after which learning rate will be reduced. verbose: int. 0: quiet, 1: update messages. mode: one of {'auto', 'min', 'max'}. In 'min' mode, the learning rate will be reduced when the quantity monitored has stopped decreasing; in 'max' mode it will be reduced when the quantity monitored has stopped increasing; in 'auto' mode, the direction is automatically inferred from the name of the monitored quantity. min_delta: threshold for measuring the new optimum, to only focus on significant changes. cooldown: number of epochs to wait before resuming normal operation after lr has been reduced. min_lr: lower bound on the learning rate. CSVLogger CSVLogger class tf.keras.callbacks.CSVLogger(filename, separator=",", append=False) Callback that streams epoch results to a CSV file. Supports all values that can be represented as a string, including 1D iterables such as np.ndarray. Example csv_logger = CSVLogger('training.log') model.fit(X_train, Y_train, callbacks=[csv_logger]) Arguments filename: Filename of the CSV file, e.g. 'run/log.csv'. separator: String used to separate elements in the CSV file. append: Boolean. True: append if file exists (useful for continuing training). False: overwrite existing file. LambdaCallback LambdaCallback class tf.keras.callbacks.LambdaCallback( on_epoch_begin=None, on_epoch_end=None, on_batch_begin=None, on_batch_end=None, on_train_begin=None, on_train_end=None, **kwargs ) Callback for creating simple, custom callbacks on-the-fly. 
This callback is constructed with anonymous functions that will be called at the appropriate time (during Model.{fit | evaluate | predict}). Note that the callbacks expects positional arguments, as: on_epoch_begin and on_epoch_end expect two positional arguments: epoch, logs on_batch_begin and on_batch_end expect two positional arguments: batch, logs on_train_begin and on_train_end expect one positional argument: logs Arguments on_epoch_begin: called at the beginning of every epoch. on_epoch_end: called at the end of every epoch. on_batch_begin: called at the beginning of every batch. on_batch_end: called at the end of every batch. on_train_begin: called at the beginning of model training. on_train_end: called at the end of model training. Example # Print the batch number at the beginning of every batch. batch_print_callback = LambdaCallback( on_batch_begin=lambda batch,logs: print(batch)) # Stream the epoch loss to a file in JSON format. The file content # is not well-formed JSON but rather has a JSON object per line. import json json_log = open('loss_log.json', mode='wt', buffering=1) json_logging_callback = LambdaCallback( on_epoch_end=lambda epoch, logs: json_log.write( json.dumps({'epoch': epoch, 'loss': logs['loss']}) + '\n'), on_train_end=lambda logs: json_log.close() ) # Terminate some processes after having finished model training. processes = ... cleanup_callback = LambdaCallback( on_train_end=lambda logs: [ p.terminate() for p in processes if p.is_alive()]) model.fit(..., callbacks=[batch_print_callback, json_logging_callback, cleanup_callback]) TensorBoard TensorBoard class tf.keras.callbacks.TensorBoard( log_dir="logs", histogram_freq=0, write_graph=True, write_images=False, write_steps_per_second=False, update_freq="epoch", profile_batch=2, embeddings_freq=0, embeddings_metadata=None, **kwargs ) Enable visualizations for TensorBoard. TensorBoard is a visualization tool provided with TensorFlow. This callback logs events for TensorBoard, including: Metrics summary plots Training graph visualization Activation histograms Sampled profiling When used in Model.evaluate, in addition to epoch summaries, there will be a summary that records evaluation metrics vs Model.optimizer.iterations written. The metric names will be prepended with evaluation, with Model.optimizer.iterations being the step in the visualized TensorBoard. If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line: tensorboard --logdir=path_to_your_logs You can find more information about TensorBoard here. Arguments log_dir: the path of the directory where to save the log files to be parsed by TensorBoard. e.g. log_dir = os.path.join(working_dir, 'logs') This directory should not be reused by any other callbacks. histogram_freq: frequency (in epochs) at which to compute activation and weight histograms for the layers of the model. If set to 0, histograms won't be computed. Validation data (or split) must be specified for histogram visualizations. write_graph: whether to visualize the graph in TensorBoard. The log file can become quite large when write_graph is set to True. write_images: whether to write model weights to visualize as image in TensorBoard. write_steps_per_second: whether to log the training steps per second into Tensorboard. This supports both epoch and batch frequency logging. update_freq: 'batch' or 'epoch' or integer. When using 'batch', writes the losses and metrics to TensorBoard after each batch. The same applies for 'epoch'. 
If using an integer, let's say 1000, the callback will write the metrics and losses to TensorBoard every 1000 batches. Note that writing too frequently to TensorBoard can slow down your training. profile_batch: Profile the batch(es) to sample compute characteristics. profile_batch must be a non-negative integer or a tuple of integers. A pair of positive integers signify a range of batches to profile. By default, it will profile the second batch. Set profile_batch=0 to disable profiling. embeddings_freq: frequency (in epochs) at which embedding layers will be visualized. If set to 0, embeddings won't be visualized. embeddings_metadata: a dictionary which maps layer name to a file name in which metadata for this embedding layer is saved. See the details about metadata files format. In case if the same metadata file is used for all embedding layers, string can be passed. Examples Basic usage: tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs") model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback]) # Then run the tensorboard command to view the visualizations. Custom batch-level summaries in a subclassed Model: class MyModel(tf.keras.Model): def build(self, _): self.dense = tf.keras.layers.Dense(10) def call(self, x): outputs = self.dense(x) tf.summary.histogram('outputs', outputs) return outputs model = MyModel() model.compile('sgd', 'mse') # Make sure to set `update_freq=N` to log a batch-level summary every N batches. # In addition to any `tf.summary` contained in `Model.call`, metrics added in # `Model.compile` will be logged every N batches. tb_callback = tf.keras.callbacks.TensorBoard('./logs', update_freq=1) model.fit(x_train, y_train, callbacks=[tb_callback]) Custom batch-level summaries in a Functional API Model: def my_summary(x): tf.summary.histogram('x', x) return x inputs = tf.keras.Input(10) x = tf.keras.layers.Dense(10)(inputs) outputs = tf.keras.layers.Lambda(my_summary)(x) model = tf.keras.Model(inputs, outputs) model.compile('sgd', 'mse') # Make sure to set `update_freq=N` to log a batch-level summary every N batches. # In addition to any `tf.summary` contained in `Model.call`, metrics added in # `Model.compile` will be logged every N batches. tb_callback = tf.keras.callbacks.TensorBoard('./logs', update_freq=1) model.fit(x_train, y_train, callbacks=[tb_callback]) Profiling: # Profile a single batch, e.g. the 5th batch. tensorboard_callback = tf.keras.callbacks.TensorBoard( log_dir='./logs', profile_batch=5) model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback]) # Profile a range of batches, e.g. from 10 to 20. tensorboard_callback = tf.keras.callbacks.TensorBoard( log_dir='./logs', profile_batch=(10,20)) model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])EarlyStopping EarlyStopping class tf.keras.callbacks.EarlyStopping( monitor="val_loss", min_delta=0, patience=0, verbose=0, mode="auto", baseline=None, restore_best_weights=False, ) Stop training when a monitored metric has stopped improving. Assuming the goal of a training is to minimize the loss. With this, the metric to be monitored would be 'loss', and mode would be 'min'. A model.fit() training loop will check at end of every epoch whether the loss is no longer decreasing, considering the min_delta and patience if applicable. Once it's found no longer decreasing, model.stop_training is marked True and the training terminates. The quantity to be monitored needs to be available in logs dict. 
To make it so, pass the loss or metrics at model.compile(). Arguments monitor: Quantity to be monitored. min_delta: Minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta, will count as no improvement. patience: Number of epochs with no improvement after which training will be stopped. verbose: verbosity mode. mode: One of {"auto", "min", "max"}. In min mode, training will stop when the quantity monitored has stopped decreasing; in "max" mode it will stop when the quantity monitored has stopped increasing; in "auto" mode, the direction is automatically inferred from the name of the monitored quantity. baseline: Baseline value for the monitored quantity. Training will stop if the model doesn't show improvement over the baseline. restore_best_weights: Whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used. An epoch will be restored regardless of the performance relative to the baseline. If no epoch improves on baseline, training will run for patience epochs and restore weights from the best epoch in that set. Example >>> callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3) >>> # This callback will stop the training when there is no improvement in >>> # the loss for three consecutive epochs. >>> model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) >>> model.compile(tf.keras.optimizers.SGD(), loss='mse') >>> history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5), ... epochs=10, batch_size=1, callbacks=[callback], ... verbose=0) >>> len(history.history['loss']) # Only 4 epochs are run. 4 RemoteMonitor RemoteMonitor class tf.keras.callbacks.RemoteMonitor( root="http://localhost:9000", path="/publish/epoch/end/", field="data", headers=None, send_as_json=False, ) Callback used to stream events to a server. Requires the requests library. Events are sent to root + '/publish/epoch/end/' by default. Calls are HTTP POST, with a data argument which is a JSON-encoded dictionary of event data. If send_as_json=True, the content type of the request will be "application/json". Otherwise the serialized JSON will be sent within a form. Arguments root: String; root url of the target server. path: String; path relative to root to which the events will be sent. field: String; JSON field under which the data will be stored. The field is used only if the payload is sent within a form (i.e. send_as_json is set to False). headers: Dictionary; optional custom HTTP headers. send_as_json: Boolean; whether the request should be sent as "application/json". Base Callback class Callback class tf.keras.callbacks.Callback() Abstract base class used to build new callbacks. Callbacks can be passed to keras methods such as fit, evaluate, and predict in order to hook into the various stages of the model training and inference lifecycle. To create a custom callback, subclass keras.callbacks.Callback and override the method associated with the stage of interest. See https://www.tensorflow.org/guide/keras/custom_callback for more information. Example >>> training_finished = False >>> class MyCallback(tf.keras.callbacks.Callback): ... def on_train_end(self, logs=None): ... global training_finished ... training_finished = True >>> model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))]) >>> model.compile(loss='mean_squared_error') >>> model.fit(tf.constant([[1.0]]), tf.constant([[1.0]]), ... 
callbacks=[MyCallback()]) >>> assert training_finished == True Attributes params: Dict. Training parameters (eg. verbosity, batch size, number of epochs...). model: Instance of keras.models.Model. Reference of the model being trained. The logs dictionary that callback methods take as argument will contain keys for quantities relevant to the current batch or epoch (see method-specific docstrings).Regression losses MeanSquaredError class tf.keras.losses.MeanSquaredError(reduction="auto", name="mean_squared_error") Computes the mean of squares of errors between labels and predictions. loss = square(y_true - y_pred) Standalone usage: >>> y_true = [[0., 1.], [0., 0.]] >>> y_pred = [[1., 1.], [1., 0.]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> mse = tf.keras.losses.MeanSquaredError() >>> mse(y_true, y_pred).numpy() 0.5 >>> # Calling with 'sample_weight'. >>> mse(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() 0.25 >>> # Using 'sum' reduction type. >>> mse = tf.keras.losses.MeanSquaredError( ... reduction=tf.keras.losses.Reduction.SUM) >>> mse(y_true, y_pred).numpy() 1.0 >>> # Using 'none' reduction type. >>> mse = tf.keras.losses.MeanSquaredError( ... reduction=tf.keras.losses.Reduction.NONE) >>> mse(y_true, y_pred).numpy() array([0.5, 0.5], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredError()) MeanAbsoluteError class tf.keras.losses.MeanAbsoluteError( reduction="auto", name="mean_absolute_error" ) Computes the mean of absolute difference between labels and predictions. loss = abs(y_true - y_pred) Standalone usage: >>> y_true = [[0., 1.], [0., 0.]] >>> y_pred = [[1., 1.], [1., 0.]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> mae = tf.keras.losses.MeanAbsoluteError() >>> mae(y_true, y_pred).numpy() 0.5 >>> # Calling with 'sample_weight'. >>> mae(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() 0.25 >>> # Using 'sum' reduction type. >>> mae = tf.keras.losses.MeanAbsoluteError( ... reduction=tf.keras.losses.Reduction.SUM) >>> mae(y_true, y_pred).numpy() 1.0 >>> # Using 'none' reduction type. >>> mae = tf.keras.losses.MeanAbsoluteError( ... reduction=tf.keras.losses.Reduction.NONE) >>> mae(y_true, y_pred).numpy() array([0.5, 0.5], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.MeanAbsoluteError()) MeanAbsolutePercentageError class tf.keras.losses.MeanAbsolutePercentageError( reduction="auto", name="mean_absolute_percentage_error" ) Computes the mean absolute percentage error between y_true and y_pred. loss = 100 * abs(y_true - y_pred) / y_true Standalone usage: >>> y_true = [[2., 1.], [2., 3.]] >>> y_pred = [[1., 1.], [1., 0.]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> mape = tf.keras.losses.MeanAbsolutePercentageError() >>> mape(y_true, y_pred).numpy() 50. >>> # Calling with 'sample_weight'. >>> mape(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() 20. >>> # Using 'sum' reduction type. >>> mape = tf.keras.losses.MeanAbsolutePercentageError( ... reduction=tf.keras.losses.Reduction.SUM) >>> mape(y_true, y_pred).numpy() 100. >>> # Using 'none' reduction type. >>> mape = tf.keras.losses.MeanAbsolutePercentageError( ... 
reduction=tf.keras.losses.Reduction.NONE) >>> mape(y_true, y_pred).numpy() array([25., 75.], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.MeanAbsolutePercentageError()) MeanSquaredLogarithmicError class tf.keras.losses.MeanSquaredLogarithmicError( reduction="auto", name="mean_squared_logarithmic_error" ) Computes the mean squared logarithmic error between y_true and y_pred. loss = square(log(y_true + 1.) - log(y_pred + 1.)) Standalone usage: >>> y_true = [[0., 1.], [0., 0.]] >>> y_pred = [[1., 1.], [1., 0.]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> msle = tf.keras.losses.MeanSquaredLogarithmicError() >>> msle(y_true, y_pred).numpy() 0.240 >>> # Calling with 'sample_weight'. >>> msle(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy() 0.120 >>> # Using 'sum' reduction type. >>> msle = tf.keras.losses.MeanSquaredLogarithmicError( ... reduction=tf.keras.losses.Reduction.SUM) >>> msle(y_true, y_pred).numpy() 0.480 >>> # Using 'none' reduction type. >>> msle = tf.keras.losses.MeanSquaredLogarithmicError( ... reduction=tf.keras.losses.Reduction.NONE) >>> msle(y_true, y_pred).numpy() array([0.240, 0.240], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredLogarithmicError()) CosineSimilarity class tf.keras.losses.CosineSimilarity( axis=-1, reduction="auto", name="cosine_similarity" ) Computes the cosine similarity between labels and predictions. Note that it is a number between -1 and 1. When it is a negative number between -1 and 0, 0 indicates orthogonality and values closer to -1 indicate greater similarity. Values closer to 1 indicate greater dissimilarity. This makes it usable as a loss function in a setting where you try to maximize the proximity between predictions and targets. If either y_true or y_pred is a zero vector, cosine similarity will be 0 regardless of the proximity between predictions and targets. loss = -sum(l2_norm(y_true) * l2_norm(y_pred)) Standalone usage: >>> y_true = [[0., 1.], [1., 1.]] >>> y_pred = [[1., 0.], [1., 1.]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1) >>> # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]] >>> # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]] >>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]] >>> # loss = -mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1)) >>> # = -((0. + 0.) + (0.5 + 0.5)) / 2 >>> cosine_loss(y_true, y_pred).numpy() -0.5 >>> # Calling with 'sample_weight'. >>> cosine_loss(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() -0.0999 >>> # Using 'sum' reduction type. >>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1, ... reduction=tf.keras.losses.Reduction.SUM) >>> cosine_loss(y_true, y_pred).numpy() -0.999 >>> # Using 'none' reduction type. >>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1, ... reduction=tf.keras.losses.Reduction.NONE) >>> cosine_loss(y_true, y_pred).numpy() array([-0., -0.999], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.CosineSimilarity(axis=1)) Arguments axis: (Optional) Defaults to -1. The dimension along which the cosine similarity is computed. reduction: (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE.
When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training [tutorial] (https://www.tensorflow.org/tutorials/distribute/custom_training) for more details. name: Optional name for the op. mean_squared_error function tf.keras.losses.mean_squared_error(y_true, y_pred) Computes the mean squared error between labels and predictions. After computing the squared distance between the inputs, the mean value over the last dimension is returned. loss = mean(square(y_true - y_pred), axis=-1) Standalone usage: >>> y_true = np.random.randint(0, 2, size=(2, 3)) >>> y_pred = np.random.random(size=(2, 3)) >>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred) >>> assert loss.shape == (2,) >>> assert np.array_equal( ... loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1)) Arguments y_true: Ground truth values. shape = [batch_size, d0, .. dN]. y_pred: The predicted values. shape = [batch_size, d0, .. dN]. Returns Mean squared error values. shape = [batch_size, d0, .. dN-1]. mean_absolute_error function tf.keras.losses.mean_absolute_error(y_true, y_pred) Computes the mean absolute error between labels and predictions. loss = mean(abs(y_true - y_pred), axis=-1) Standalone usage: >>> y_true = np.random.randint(0, 2, size=(2, 3)) >>> y_pred = np.random.random(size=(2, 3)) >>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred) >>> assert loss.shape == (2,) >>> assert np.array_equal( ... loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1)) Arguments y_true: Ground truth values. shape = [batch_size, d0, .. dN]. y_pred: The predicted values. shape = [batch_size, d0, .. dN]. Returns Mean absolute error values. shape = [batch_size, d0, .. dN-1]. mean_absolute_percentage_error function tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred) Computes the mean absolute percentage error between y_true and y_pred. loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1) Standalone usage: >>> y_true = np.random.random(size=(2, 3)) >>> y_true = np.maximum(y_true, 1e-7) # Prevent division by zero >>> y_pred = np.random.random(size=(2, 3)) >>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred) >>> assert loss.shape == (2,) >>> assert np.array_equal( ... loss.numpy(), ... 100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1)) Arguments y_true: Ground truth values. shape = [batch_size, d0, .. dN]. y_pred: The predicted values. shape = [batch_size, d0, .. dN]. Returns Mean absolute percentage error values. shape = [batch_size, d0, .. dN-1]. mean_squared_logarithmic_error function tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred) Computes the mean squared logarithmic error between y_true and y_pred. loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1) Standalone usage: >>> y_true = np.random.randint(0, 2, size=(2, 3)) >>> y_pred = np.random.random(size=(2, 3)) >>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred) >>> assert loss.shape == (2,) >>> y_true = np.maximum(y_true, 1e-7) >>> y_pred = np.maximum(y_pred, 1e-7) >>> assert np.allclose( ... loss.numpy(), ... np.mean( ... np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1)) Arguments y_true: Ground truth values. shape = [batch_size, d0, .. dN]. y_pred: The predicted values. shape = [batch_size, d0, .. dN]. Returns Mean squared logarithmic error values. shape = [batch_size, d0, .. dN-1]. 
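The reduction argument shared by the loss classes above behaves the same way across all of them; here is a short sketch reusing the MeanSquaredError example values (an illustration only, not new API):

import tensorflow as tf

y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]

# 'none': one loss value per sample.
per_sample = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)(y_true, y_pred)      # [0.5, 0.5]

# 'sum': add the per-sample values.
summed = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.SUM)(y_true, y_pred)       # 1.0

# 'auto' / 'sum_over_batch_size': divide the sum by the batch size.
averaged = tf.keras.losses.MeanSquaredError()(y_true, y_pred)      # 0.5

print(per_sample.numpy(), summed.numpy(), averaged.numpy())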
cosine_similarity function tf.keras.losses.cosine_similarity(y_true, y_pred, axis=-1) Computes the cosine similarity between labels and predictions. Note that it is a number between -1 and 1. When it is a negative number between -1 and 0, 0 indicates orthogonality and values closer to -1 indicate greater similarity. The values closer to 1 indicate greater dissimilarity. This makes it usable as a loss function in a setting where you try to maximize the proximity between predictions and targets. If either y_true or y_pred is a zero vector, cosine similarity will be 0 regardless of the proximity between predictions and targets. loss = -sum(l2_norm(y_true) * l2_norm(y_pred)) Standalone usage: >>> y_true = [[0., 1.], [1., 1.], [1., 1.]] >>> y_pred = [[1., 0.], [1., 1.], [-1., -1.]] >>> loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1) >>> loss.numpy() array([-0., -0.999, 0.999], dtype=float32) Arguments y_true: Tensor of true targets. y_pred: Tensor of predicted targets. axis: Axis along which to determine similarity. Returns Cosine similarity tensor. Huber class tf.keras.losses.Huber(delta=1.0, reduction="auto", name="huber_loss") Computes the Huber loss between y_true and y_pred. For each value x in error = y_true - y_pred: loss = 0.5 * x^2 if |x| <= d loss = 0.5 * d^2 + d * (|x| - d) if |x| > d where d is delta. See: https://en.wikipedia.org/wiki/Huber_loss Standalone usage: >>> y_true = [[0, 1], [0, 0]] >>> y_pred = [[0.6, 0.4], [0.4, 0.6]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> h = tf.keras.losses.Huber() >>> h(y_true, y_pred).numpy() 0.155 >>> # Calling with 'sample_weight'. >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy() 0.09 >>> # Using 'sum' reduction type. >>> h = tf.keras.losses.Huber( ... reduction=tf.keras.losses.Reduction.SUM) >>> h(y_true, y_pred).numpy() 0.31 >>> # Using 'none' reduction type. >>> h = tf.keras.losses.Huber( ... reduction=tf.keras.losses.Reduction.NONE) >>> h(y_true, y_pred).numpy() array([0.18, 0.13], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.Huber()) huber function tf.keras.losses.huber(y_true, y_pred, delta=1.0) Computes Huber loss value. For each value x in error = y_true - y_pred: loss = 0.5 * x^2 if |x| <= d loss = d * |x| - 0.5 * d^2 if |x| > d where d is delta. See: https://en.wikipedia.org/wiki/Huber_loss Arguments y_true: tensor of true targets. y_pred: tensor of predicted targets. delta: A float, the point where the Huber loss function changes from a quadratic to linear. Returns Tensor with one scalar loss entry per sample. LogCosh class tf.keras.losses.LogCosh(reduction="auto", name="log_cosh") Computes the logarithm of the hyperbolic cosine of the prediction error. logcosh = log((exp(x) + exp(-x))/2), where x is the error y_pred - y_true. Standalone usage: >>> y_true = [[0., 1.], [0., 0.]] >>> y_pred = [[1., 1.], [0., 0.]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> l = tf.keras.losses.LogCosh() >>> l(y_true, y_pred).numpy() 0.108 >>> # Calling with 'sample_weight'. >>> l(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() 0.087 >>> # Using 'sum' reduction type. >>> l = tf.keras.losses.LogCosh( ... reduction=tf.keras.losses.Reduction.SUM) >>> l(y_true, y_pred).numpy() 0.217 >>> # Using 'none' reduction type. >>> l = tf.keras.losses.LogCosh( ... 
reduction=tf.keras.losses.Reduction.NONE) >>> l(y_true, y_pred).numpy() array([0.217, 0.], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.LogCosh()) log_cosh function tf.keras.losses.log_cosh(y_true, y_pred) Logarithm of the hyperbolic cosine of the prediction error. log(cosh(x)) is approximately equal to (x ** 2) / 2 for small x and to abs(x) - log(2) for large x. This means that 'logcosh' works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction. Standalone usage: >>> y_true = np.random.random(size=(2, 3)) >>> y_pred = np.random.random(size=(2, 3)) >>> loss = tf.keras.losses.logcosh(y_true, y_pred) >>> assert loss.shape == (2,) >>> x = y_pred - y_true >>> assert np.allclose( ... loss.numpy(), ... np.mean(x + np.log(np.exp(-2. * x) + 1.) - math_ops.log(2.), axis=-1), ... atol=1e-5) Arguments y_true: Ground truth values. shape = [batch_size, d0, .. dN]. y_pred: The predicted values. shape = [batch_size, d0, .. dN]. Returns Logcosh error values. shape = [batch_size, d0, .. dN-1]. Hinge losses for "maximum-margin" classification Hinge class tf.keras.losses.Hinge(reduction="auto", name="hinge") Computes the hinge loss between y_true and y_pred. loss = maximum(1 - y_true * y_pred, 0) y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided we will convert them to -1 or 1. Standalone usage: >>> y_true = [[0., 1.], [0., 0.]] >>> y_pred = [[0.6, 0.4], [0.4, 0.6]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> h = tf.keras.losses.Hinge() >>> h(y_true, y_pred).numpy() 1.3 >>> # Calling with 'sample_weight'. >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy() 0.55 >>> # Using 'sum' reduction type. >>> h = tf.keras.losses.Hinge( ... reduction=tf.keras.losses.Reduction.SUM) >>> h(y_true, y_pred).numpy() 2.6 >>> # Using 'none' reduction type. >>> h = tf.keras.losses.Hinge( ... reduction=tf.keras.losses.Reduction.NONE) >>> h(y_true, y_pred).numpy() array([1.1, 1.5], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.Hinge()) SquaredHinge class tf.keras.losses.SquaredHinge(reduction="auto", name="squared_hinge") Computes the squared hinge loss between y_true and y_pred. loss = square(maximum(1 - y_true * y_pred, 0)) y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided we will convert them to -1 or 1. Standalone usage: >>> y_true = [[0., 1.], [0., 0.]] >>> y_pred = [[0.6, 0.4], [0.4, 0.6]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> h = tf.keras.losses.SquaredHinge() >>> h(y_true, y_pred).numpy() 1.86 >>> # Calling with 'sample_weight'. >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy() 0.73 >>> # Using 'sum' reduction type. >>> h = tf.keras.losses.SquaredHinge( ... reduction=tf.keras.losses.Reduction.SUM) >>> h(y_true, y_pred).numpy() 3.72 >>> # Using 'none' reduction type. >>> h = tf.keras.losses.SquaredHinge( ... reduction=tf.keras.losses.Reduction.NONE) >>> h(y_true, y_pred).numpy() array([1.46, 2.26], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.SquaredHinge()) CategoricalHinge class tf.keras.losses.CategoricalHinge(reduction="auto", name="categorical_hinge") Computes the categorical hinge loss between y_true and y_pred. 
loss = maximum(neg - pos + 1, 0) where neg=maximum((1-y_true)*y_pred) and pos=sum(y_true*y_pred) Standalone usage: >>> y_true = [[0, 1], [0, 0]] >>> y_pred = [[0.6, 0.4], [0.4, 0.6]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> h = tf.keras.losses.CategoricalHinge() >>> h(y_true, y_pred).numpy() 1.4 >>> # Calling with 'sample_weight'. >>> h(y_true, y_pred, sample_weight=[1, 0]).numpy() 0.6 >>> # Using 'sum' reduction type. >>> h = tf.keras.losses.CategoricalHinge( ... reduction=tf.keras.losses.Reduction.SUM) >>> h(y_true, y_pred).numpy() 2.8 >>> # Using 'none' reduction type. >>> h = tf.keras.losses.CategoricalHinge( ... reduction=tf.keras.losses.Reduction.NONE) >>> h(y_true, y_pred).numpy() array([1.2, 1.6], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.CategoricalHinge()) hinge function tf.keras.losses.hinge(y_true, y_pred) Computes the hinge loss between y_true and y_pred. loss = mean(maximum(1 - y_true * y_pred, 0), axis=-1) Standalone usage: >>> y_true = np.random.choice([-1, 1], size=(2, 3)) >>> y_pred = np.random.random(size=(2, 3)) >>> loss = tf.keras.losses.hinge(y_true, y_pred) >>> assert loss.shape == (2,) >>> assert np.array_equal( ... loss.numpy(), ... np.mean(np.maximum(1. - y_true * y_pred, 0.), axis=-1)) Arguments y_true: The ground truth values. y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided they will be converted to -1 or 1. shape = [batch_size, d0, .. dN]. y_pred: The predicted values. shape = [batch_size, d0, .. dN]. Returns Hinge loss values. shape = [batch_size, d0, .. dN-1]. squared_hinge function tf.keras.losses.squared_hinge(y_true, y_pred) Computes the squared hinge loss between y_true and y_pred. loss = mean(square(maximum(1 - y_true * y_pred, 0)), axis=-1) Standalone usage: >>> y_true = np.random.choice([-1, 1], size=(2, 3)) >>> y_pred = np.random.random(size=(2, 3)) >>> loss = tf.keras.losses.squared_hinge(y_true, y_pred) >>> assert loss.shape == (2,) >>> assert np.array_equal( ... loss.numpy(), ... np.mean(np.square(np.maximum(1. - y_true * y_pred, 0.)), axis=-1)) Arguments y_true: The ground truth values. y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided we will convert them to -1 or 1. shape = [batch_size, d0, .. dN]. y_pred: The predicted values. shape = [batch_size, d0, .. dN]. Returns Squared hinge loss values. shape = [batch_size, d0, .. dN-1]. categorical_hinge function tf.keras.losses.categorical_hinge(y_true, y_pred) Computes the categorical hinge loss between y_true and y_pred. loss = maximum(neg - pos + 1, 0) where neg=maximum((1-y_true)*y_pred) and pos=sum(y_true*y_pred) Standalone usage: >>> y_true = np.random.randint(0, 3, size=(2,)) >>> y_true = tf.keras.utils.to_categorical(y_true, num_classes=3) >>> y_pred = np.random.random(size=(2, 3)) >>> loss = tf.keras.losses.categorical_hinge(y_true, y_pred) >>> assert loss.shape == (2,) >>> pos = np.sum(y_true * y_pred, axis=-1) >>> neg = np.amax((1. - y_true) * y_pred, axis=-1) >>> assert np.array_equal(loss.numpy(), np.maximum(0., neg - pos + 1.)) Arguments y_true: The ground truth values. y_true values are expected to be either {-1, +1} or {0, 1} (i.e. a one-hot-encoded tensor). y_pred: The predicted values. Returns Categorical hinge loss values. 
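Because the hinge losses accept either {-1, +1} or {0, 1} labels, it can be reassuring to check the internal label conversion by hand. A minimal sketch, reusing the toy values from the Hinge example above:

import numpy as np
import tensorflow as tf

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])

# Binary 0/1 labels are mapped to -1/+1 before the hinge formula is applied.
y_signed = 2. * y_true - 1.
manual = np.mean(np.maximum(1. - y_signed * y_pred, 0.), axis=-1)
keras_loss = tf.keras.losses.hinge(y_true, y_pred).numpy()

print(manual)      # [1.1 1.5]
print(keras_loss)  # matches the per-sample values above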
Probabilistic losses BinaryCrossentropy class tf.keras.losses.BinaryCrossentropy( from_logits=False, label_smoothing=0, reduction="auto", name="binary_crossentropy" ) Computes the cross-entropy loss between true labels and predicted labels. Use this cross-entropy loss for binary (0 or 1) classification applications. The loss function requires the following inputs: y_true (true label): This is either 0 or 1. y_pred (predicted value): This is the model's prediction, i.e, a single floating-point value which either represents a logit, (i.e, value in [-inf, inf] when from_logits=True) or a probability (i.e, value in [0., 1.] when from_logits=False). Recommended Usage: (set from_logits=True) With tf.keras API: model.compile( loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), .... ) As a standalone function: >>> # Example 1: (batch_size = 1, number of samples = 4) >>> y_true = [0, 1, 0, 0] >>> y_pred = [-18.6, 0.51, 2.94, -12.8] >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True) >>> bce(y_true, y_pred).numpy() 0.865 >>> # Example 2: (batch_size = 2, number of samples = 4) >>> y_true = [[0, 1], [0, 0]] >>> y_pred = [[-18.6, 0.51], [2.94, -12.8]] >>> # Using default 'auto'/'sum_over_batch_size' reduction type. >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True) >>> bce(y_true, y_pred).numpy() 0.865 >>> # Using 'sample_weight' attribute >>> bce(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() 0.243 >>> # Using 'sum' reduction` type. >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True, ... reduction=tf.keras.losses.Reduction.SUM) >>> bce(y_true, y_pred).numpy() 1.730 >>> # Using 'none' reduction type. >>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True, ... reduction=tf.keras.losses.Reduction.NONE) >>> bce(y_true, y_pred).numpy() array([0.235, 1.496], dtype=float32) Default Usage: (set from_logits=False) >>> # Make the following updates to the above "Recommended Usage" section >>> # 1. Set `from_logits=False` >>> tf.keras.losses.BinaryCrossentropy() # OR ...('from_logits=False') >>> # 2. Update `y_pred` to use probabilities instead of logits >>> y_pred = [0.6, 0.3, 0.2, 0.8] # OR [[0.6, 0.3], [0.2, 0.8]] CategoricalCrossentropy class tf.keras.losses.CategoricalCrossentropy( from_logits=False, label_smoothing=0, reduction="auto", name="categorical_crossentropy", ) Computes the crossentropy loss between the labels and predictions. Use this crossentropy loss function when there are two or more label classes. We expect labels to be provided in a one_hot representation. If you want to provide labels as integers, please use SparseCategoricalCrossentropy loss. There should be # classes floating point values per feature. In the snippet below, there is # classes floating pointing values per example. The shape of both y_pred and y_true are [batch_size, num_classes]. Standalone usage: >>> y_true = [[0, 1, 0], [0, 0, 1]] >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> cce = tf.keras.losses.CategoricalCrossentropy() >>> cce(y_true, y_pred).numpy() 1.177 >>> # Calling with 'sample_weight'. >>> cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy() 0.814 >>> # Using 'sum' reduction type. >>> cce = tf.keras.losses.CategoricalCrossentropy( ... reduction=tf.keras.losses.Reduction.SUM) >>> cce(y_true, y_pred).numpy() 2.354 >>> # Using 'none' reduction type. >>> cce = tf.keras.losses.CategoricalCrossentropy( ... 
reduction=tf.keras.losses.Reduction.NONE) >>> cce(y_true, y_pred).numpy() array([0.0513, 2.303], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.CategoricalCrossentropy()) SparseCategoricalCrossentropy class tf.keras.losses.SparseCategoricalCrossentropy( from_logits=False, reduction="auto", name="sparse_categorical_crossentropy" ) Computes the crossentropy loss between the labels and predictions. Use this crossentropy loss function when there are two or more label classes. We expect labels to be provided as integers. If you want to provide labels using one-hot representation, please use CategoricalCrossentropy loss. There should be # classes floating point values per feature for y_pred and a single floating point value per feature for y_true. In the snippet below, there is a single floating point value per example for y_true and # classes floating pointing values per example for y_pred. The shape of y_true is [batch_size] and the shape of y_pred is [batch_size, num_classes]. Standalone usage: >>> y_true = [1, 2] >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> scce = tf.keras.losses.SparseCategoricalCrossentropy() >>> scce(y_true, y_pred).numpy() 1.177 >>> # Calling with 'sample_weight'. >>> scce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy() 0.814 >>> # Using 'sum' reduction type. >>> scce = tf.keras.losses.SparseCategoricalCrossentropy( ... reduction=tf.keras.losses.Reduction.SUM) >>> scce(y_true, y_pred).numpy() 2.354 >>> # Using 'none' reduction type. >>> scce = tf.keras.losses.SparseCategoricalCrossentropy( ... reduction=tf.keras.losses.Reduction.NONE) >>> scce(y_true, y_pred).numpy() array([0.0513, 2.303], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.SparseCategoricalCrossentropy()) Poisson class tf.keras.losses.Poisson(reduction="auto", name="poisson") Computes the Poisson loss between y_true and y_pred. loss = y_pred - y_true * log(y_pred) Standalone usage: >>> y_true = [[0., 1.], [0., 0.]] >>> y_pred = [[1., 1.], [0., 0.]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> p = tf.keras.losses.Poisson() >>> p(y_true, y_pred).numpy() 0.5 >>> # Calling with 'sample_weight'. >>> p(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() 0.4 >>> # Using 'sum' reduction type. >>> p = tf.keras.losses.Poisson( ... reduction=tf.keras.losses.Reduction.SUM) >>> p(y_true, y_pred).numpy() 0.999 >>> # Using 'none' reduction type. >>> p = tf.keras.losses.Poisson( ... reduction=tf.keras.losses.Reduction.NONE) >>> p(y_true, y_pred).numpy() array([0.999, 0.], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.Poisson()) binary_crossentropy function tf.keras.losses.binary_crossentropy( y_true, y_pred, from_logits=False, label_smoothing=0 ) Computes the binary crossentropy loss. Standalone usage: >>> y_true = [[0, 1], [0, 0]] >>> y_pred = [[0.6, 0.4], [0.4, 0.6]] >>> loss = tf.keras.losses.binary_crossentropy(y_true, y_pred) >>> assert loss.shape == (2,) >>> loss.numpy() array([0.916 , 0.714], dtype=float32) Arguments y_true: Ground truth values. shape = [batch_size, d0, .. dN]. y_pred: The predicted values. shape = [batch_size, d0, .. dN]. from_logits: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. label_smoothing: Float in [0, 1]. 
If > 0 then smooth the labels by squeezing them towards 0.5 That is, using 1. - 0.5 * label_smoothing for the target class and 0.5 * label_smoothing for the non-target class. Returns Binary crossentropy loss value. shape = [batch_size, d0, .. dN-1]. categorical_crossentropy function tf.keras.losses.categorical_crossentropy( y_true, y_pred, from_logits=False, label_smoothing=0 ) Computes the categorical crossentropy loss. Standalone usage: >>> y_true = [[0, 1, 0], [0, 0, 1]] >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]] >>> loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred) >>> assert loss.shape == (2,) >>> loss.numpy() array([0.0513, 2.303], dtype=float32) Arguments y_true: Tensor of one-hot true targets. y_pred: Tensor of predicted targets. from_logits: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. label_smoothing: Float in [0, 1]. If > 0 then smooth the labels. For example, if 0.1, use 0.1 / num_classes for non-target labels and 0.9 + 0.1 / num_classes for target labels. Returns Categorical crossentropy loss value. sparse_categorical_crossentropy function tf.keras.losses.sparse_categorical_crossentropy( y_true, y_pred, from_logits=False, axis=-1 ) Computes the sparse categorical crossentropy loss. Standalone usage: >>> y_true = [1, 2] >>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]] >>> loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred) >>> assert loss.shape == (2,) >>> loss.numpy() array([0.0513, 2.303], dtype=float32) Arguments y_true: Ground truth values. y_pred: The predicted values. from_logits: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. axis: (Optional) Defaults to -1. The dimension along which the entropy is computed. Returns Sparse categorical crossentropy loss value. poisson function tf.keras.losses.poisson(y_true, y_pred) Computes the Poisson loss between y_true and y_pred. The Poisson loss is the mean of the elements of the Tensor y_pred - y_true * log(y_pred). Standalone usage: >>> y_true = np.random.randint(0, 2, size=(2, 3)) >>> y_pred = np.random.random(size=(2, 3)) >>> loss = tf.keras.losses.poisson(y_true, y_pred) >>> assert loss.shape == (2,) >>> y_pred = y_pred + 1e-7 >>> assert np.allclose( ... loss.numpy(), np.mean(y_pred - y_true * np.log(y_pred), axis=-1), ... atol=1e-5) Arguments y_true: Ground truth values. shape = [batch_size, d0, .. dN]. y_pred: The predicted values. shape = [batch_size, d0, .. dN]. Returns Poisson loss value. shape = [batch_size, d0, .. dN-1]. Raises InvalidArgumentError: If y_true and y_pred have incompatible shapes. KLDivergence class tf.keras.losses.KLDivergence(reduction="auto", name="kl_divergence") Computes Kullback-Leibler divergence loss between y_true and y_pred. loss = y_true * log(y_true / y_pred) See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence Standalone usage: >>> y_true = [[0, 1], [0, 0]] >>> y_pred = [[0.6, 0.4], [0.4, 0.6]] >>> # Using 'auto'/'sum_over_batch_size' reduction type. >>> kl = tf.keras.losses.KLDivergence() >>> kl(y_true, y_pred).numpy() 0.458 >>> # Calling with 'sample_weight'. >>> kl(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() 0.366 >>> # Using 'sum' reduction type. >>> kl = tf.keras.losses.KLDivergence( ... reduction=tf.keras.losses.Reduction.SUM) >>> kl(y_true, y_pred).numpy() 0.916 >>> # Using 'none' reduction type. >>> kl = tf.keras.losses.KLDivergence( ... 
reduction=tf.keras.losses.Reduction.NONE) >>> kl(y_true, y_pred).numpy() array([0.916, -3.08e-06], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.KLDivergence()) kl_divergence function tf.keras.losses.kl_divergence(y_true, y_pred) Computes Kullback-Leibler divergence loss between y_true and y_pred. loss = y_true * log(y_true / y_pred) See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence Standalone usage: >>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64) >>> y_pred = np.random.random(size=(2, 3)) >>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred) >>> assert loss.shape == (2,) >>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1) >>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1) >>> assert np.array_equal( ... loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1)) Arguments y_true: Tensor of true targets. y_pred: Tensor of predicted targets. Returns A Tensor with loss. Raises TypeError: If y_true cannot be cast to the y_pred.dtype. Backend utilities clear_session function tf.keras.backend.clear_session() Resets all state generated by Keras. Keras manages a global state, which it uses to implement the Functional model-building API and to uniquify autogenerated layer names. If you are creating many models in a loop, this global state will consume an increasing amount of memory over time, and you may want to clear it. Calling clear_session() releases the global state: this helps avoid clutter from old models and layers, especially when memory is limited. Example 1: calling clear_session() when creating models in a loop for _ in range(100): # Without `clear_session()`, each iteration of this loop will # slightly increase the size of the global state managed by Keras model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)]) for _ in range(100): # With `clear_session()` called at the beginning, # Keras starts with a blank state at each iteration # and memory consumption is constant over time. tf.keras.backend.clear_session() model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)]) Example 2: resetting the layer name generation counter >>> import tensorflow as tf >>> layers = [tf.keras.layers.Dense(10) for _ in range(10)] >>> new_layer = tf.keras.layers.Dense(10) >>> print(new_layer.name) dense_10 >>> tf.keras.backend.set_learning_phase(1) >>> print(tf.keras.backend.learning_phase()) 1 >>> tf.keras.backend.clear_session() >>> new_layer = tf.keras.layers.Dense(10) >>> print(new_layer.name) dense floatx function tf.keras.backend.floatx() Returns the default float type, as a string. E.g. 'float16', 'float32', 'float64'. Returns String, the current default float type. Example >>> tf.keras.backend.floatx() 'float32' set_floatx function tf.keras.backend.set_floatx(value) Sets the default float type. Note: It is not recommended to set this to float16 for training, as this will likely cause numeric stability issues. Instead, mixed precision, which is using a mix of float16 and float32, can be used by calling tf.keras.mixed_precision.experimental.set_policy('mixed_float16'). See the mixed precision guide for details. Arguments value: String; 'float16', 'float32', or 'float64'. Example >>> tf.keras.backend.floatx() 'float32' >>> tf.keras.backend.set_floatx('float64') >>> tf.keras.backend.floatx() 'float64' >>> tf.keras.backend.set_floatx('float32') Raises ValueError: In case of invalid value. 
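To make the effect of floatx / set_floatx concrete, here is a small sketch showing that weights created after changing the default pick up the new float type (the Dense layer here is just an illustration):

import tensorflow as tf

print(tf.keras.backend.floatx())        # 'float32' by default

tf.keras.backend.set_floatx("float64")
layer = tf.keras.layers.Dense(3)
layer.build(input_shape=(None, 4))      # weights are created with the current default dtype
print(layer.kernel.dtype)               # float64

tf.keras.backend.set_floatx("float32")  # restore the default so later code is unaffected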
image_data_format function tf.keras.backend.image_data_format() Returns the default image data format convention. Returns A string, either 'channels_first' or 'channels_last' Example >>> tf.keras.backend.image_data_format() 'channels_last' set_image_data_format function tf.keras.backend.set_image_data_format(data_format) Sets the value of the image data format convention. Arguments data_format: string. 'channels_first' or 'channels_last'. Example >>> tf.keras.backend.image_data_format() 'channels_last' >>> tf.keras.backend.set_image_data_format('channels_first') >>> tf.keras.backend.image_data_format() 'channels_first' >>> tf.keras.backend.set_image_data_format('channels_last') Raises ValueError: In case of invalid data_format value. epsilon function tf.keras.backend.epsilon() Returns the value of the fuzz factor used in numeric expressions. Returns A float. Example >>> tf.keras.backend.epsilon() 1e-07 set_epsilon function tf.keras.backend.set_epsilon(value) Sets the value of the fuzz factor used in numeric expressions. Arguments value: float. New value of epsilon. Example >>> tf.keras.backend.epsilon() 1e-07 >>> tf.keras.backend.set_epsilon(1e-5) >>> tf.keras.backend.epsilon() 1e-05 >>> tf.keras.backend.set_epsilon(1e-7) is_keras_tensor function tf.keras.backend.is_keras_tensor(x) Returns whether x is a Keras tensor. A "Keras tensor" is a tensor that was returned by a Keras layer, (Layer class) or by Input. Arguments x: A candidate tensor. Returns A boolean: Whether the argument is a Keras tensor. Raises ValueError: In case x is not a symbolic tensor. Examples >>> np_var = np.array([1, 2]) >>> # A numpy array is not a symbolic tensor. >>> tf.keras.backend.is_keras_tensor(np_var) Traceback (most recent call last): ... ValueError: Unexpectedly found an instance of type ``. Expected a symbolic tensor instance. >>> keras_var = tf.keras.backend.variable(np_var) >>> # A variable created with the keras backend is not a Keras tensor. >>> tf.keras.backend.is_keras_tensor(keras_var) False >>> keras_placeholder = tf.keras.backend.placeholder(shape=(2, 4, 5)) >>> # A placeholder is a Keras tensor. >>> tf.keras.backend.is_keras_tensor(keras_placeholder) True >>> keras_input = tf.keras.layers.Input([10]) >>> # An Input is a Keras tensor. >>> tf.keras.backend.is_keras_tensor(keras_input) True >>> keras_layer_output = tf.keras.layers.Dense(10)(keras_input) >>> # Any Keras layer output is a Keras tensor. >>> tf.keras.backend.is_keras_tensor(keras_layer_output) True get_uid function tf.keras.backend.get_uid(prefix="") Associates a string prefix with an integer counter in a TensorFlow graph. Arguments prefix: String prefix to index. Returns Unique integer ID. Example >>> get_uid('dense') 1 >>> get_uid('dense') 2 rnn function tf.keras.backend.rnn( step_function, inputs, initial_states, go_backwards=False, mask=None, constants=None, unroll=False, input_length=None, time_major=False, zero_output_for_mask=False, ) Iterates over the time dimension of a tensor. Arguments step_function: RNN step function. Args; input; Tensor with shape (samples, ...) (no time dimension), representing input for the batch of samples at a certain time step. states; List of tensors. Returns; output; Tensor with shape (samples, output_dim) (no time dimension). new_states; List of tensors, same length and shapes as 'states'. The first state in the list must be the output tensor at the previous timestep. inputs: Tensor of temporal data of shape (samples, time, ...) 
(at least 3D), or nested tensors, and each of which has shape (samples, time, ...). initial_states: Tensor with shape (samples, state_size) (no time dimension), containing the initial values for the states used in the step function. In the case that state_size is in a nested shape, the shape of initial_states will also follow the nested structure. go_backwards: Boolean. If True, do the iteration over the time dimension in reverse order and return the reversed sequence. mask: Binary tensor with shape (samples, time, 1), with a zero for every element that is masked. constants: List of constant values passed at each step. unroll: Whether to unroll the RNN or to use a symbolic while_loop. input_length: An integer or a 1-D Tensor, depending on whether the time dimension is fixed-length or not. In case of variable length input, it is used for masking in case there's no mask specified. time_major: Boolean. If true, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. zero_output_for_mask: Boolean. If True, the output for masked timestep will be zeros, whereas in the False case, output from previous timestep is returned. Returns A tuple, (last_output, outputs, new_states). last_output: the latest output of the rnn, of shape (samples, ...) outputs: tensor with shape (samples, time, ...) where each entry outputs[s, t] is the output of the step function at time t for sample s. new_states: list of tensors, latest states returned by the step function, of shape (samples, ...). Raises ValueError: if input dimension is less than 3. ValueError: if unroll is True but input timestep is not a fixed number. ValueError: if mask is provided (not None) but states is not provided (len(states) == 0). Model plotting utilities plot_model function tf.keras.utils.plot_model( model, to_file="model.png", show_shapes=False, show_dtype=False, show_layer_names=True, rankdir="TB", expand_nested=False, dpi=96, ) Converts a Keras model to dot format and save to a file. Example input = tf.keras.Input(shape=(100,), dtype='int32', name='input') x = tf.keras.layers.Embedding( output_dim=512, input_dim=10000, input_length=100)(input) x = tf.keras.layers.LSTM(32)(x) x = tf.keras.layers.Dense(64, activation='relu')(x) x = tf.keras.layers.Dense(64, activation='relu')(x) x = tf.keras.layers.Dense(64, activation='relu')(x) output = tf.keras.layers.Dense(1, activation='sigmoid', name='output')(x) model = tf.keras.Model(inputs=[input], outputs=[output]) dot_img_file = '/tmp/model_1.png' tf.keras.utils.plot_model(model, to_file=dot_img_file, show_shapes=True) Arguments model: A Keras model instance to_file: File name of the plot image. show_shapes: whether to display shape information. show_dtype: whether to display layer dtypes. show_layer_names: whether to display layer names. rankdir: rankdir argument passed to PyDot, a string specifying the format of the plot: 'TB' creates a vertical plot; 'LR' creates a horizontal plot. expand_nested: Whether to expand nested models into clusters. dpi: Dots per inch. Returns A Jupyter notebook Image object if Jupyter is installed. This enables in-line display of the model plots in notebooks. 
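The rnn backend utility documented above has no standalone example, so here is a minimal sketch of the step_function contract and the (last_output, outputs, new_states) return value. The running-sum step function is hypothetical, chosen only because its output is easy to verify:

import tensorflow as tf

# Step function: receives the input at one timestep plus the previous states,
# and returns (output, new_states). Here the single state is the running sum.
def step(inputs, states):
    total = states[0] + inputs
    return total, [total]

x = tf.constant([[[1.0], [2.0], [3.0]]])   # shape (samples=1, time=3, features=1)
initial_states = [tf.zeros((1, 1))]

last_output, outputs, new_states = tf.keras.backend.rnn(step, x, initial_states)
print(outputs.numpy())       # [[[1.] [3.] [6.]]] -- cumulative sums over time
print(last_output.numpy())   # [[6.]]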
model_to_dot function tf.keras.utils.model_to_dot( model, show_shapes=False, show_dtype=False, show_layer_names=True, rankdir="TB", expand_nested=False, dpi=96, subgraph=False, ) Convert a Keras model to dot format. Arguments model: A Keras model instance. show_shapes: whether to display shape information. show_dtype: whether to display layer dtypes. show_layer_names: whether to display layer names. rankdir: rankdir argument passed to PyDot, a string specifying the format of the plot: 'TB' creates a vertical plot; 'LR' creates a horizontal plot. expand_nested: whether to expand nested models into clusters. dpi: Dots per inch. subgraph: whether to return a pydot.Cluster instance. Returns A pydot.Dot instance representing the Keras model or a pydot.Cluster instance representing nested model if subgraph=True. Raises ImportError: if graphviz or pydot are not available.Serialization utilities CustomObjectScope class tf.keras.utils.custom_object_scope(*args) Exposes custom classes/functions to Keras deserialization internals. Under a scope with custom_object_scope(objects_dict), Keras methods such as tf.keras.models.load_model or tf.keras.models.model_from_config will be able to deserialize any custom object referenced by a saved config (e.g. a custom layer or metric). Example Consider a custom regularizer my_regularizer: layer = Dense(3, kernel_regularizer=my_regularizer) config = layer.get_config() # Config contains a reference to `my_regularizer` ... # Later: with custom_object_scope({'my_regularizer': my_regularizer}): layer = Dense.from_config(config) Arguments *args: Dictionary or dictionaries of {name: object} pairs. get_custom_objects function tf.keras.utils.get_custom_objects() Retrieves a live reference to the global dictionary of custom objects. Updating and clearing custom objects using custom_object_scope is preferred, but get_custom_objects can be used to directly access the current collection of custom objects. Example get_custom_objects().clear() get_custom_objects()['MyObject'] = MyObject Returns Global dictionary of names to classes (_GLOBAL_CUSTOM_OBJECTS). register_keras_serializable function tf.keras.utils.register_keras_serializable(package="Custom", name=None) Registers an object with the Keras serialization framework. This decorator injects the decorated class or function into the Keras custom object dictionary, so that it can be serialized and deserialized without needing an entry in the user-provided custom object dict. It also injects a function that Keras will call to get the object's serializable string key. Note that to be serialized and deserialized, classes must implement the get_config() method. Functions do not have this requirement. The object will be registered under the key 'package>name' where name, defaults to the object name if not passed. Arguments package: The package that this class belongs to. name: The name to serialize this class under in this package. If None, the class' name will be used. Returns A decorator that registers the decorated class with the passed names. serialize_keras_object function tf.keras.utils.serialize_keras_object(instance) Serialize a Keras object into a JSON-compatible representation. Calls to serialize_keras_object while underneath the SharedObjectSavingScope context manager will cause any objects re-used across multiple layers to be saved with a special shared object ID. This allows the network to be re-created properly during deserialization. Arguments instance: The object to serialize. 
Returns A dict-like, JSON-compatible representation of the object's config. deserialize_keras_object function tf.keras.utils.deserialize_keras_object( identifier, module_objects=None, custom_objects=None, printable_module_name="object" ) Turns the serialized form of a Keras object back into an actual object. This function is for mid-level library implementers rather than end users. Importantly, this utility requires you to provide the dict of module_objects to use for looking up the object config; this is not populated by default. If you need a deserialization utility that has preexisting knowledge of built-in Keras objects, use e.g. keras.layers.deserialize(config), keras.metrics.deserialize(config), etc. Calling deserialize_keras_object while underneath the SharedObjectLoadingScope context manager will cause any already-seen shared objects to be returned as-is rather than creating a new object. Arguments identifier: the serialized form of the object. module_objects: A dictionary of built-in objects to look the name up in. Generally, module_objects is provided by midlevel library implementers. custom_objects: A dictionary of custom objects to look the name up in. Generally, custom_objects is provided by the end user. printable_module_name: A human-readable string representing the type of the object. Printed in case of exception. Returns The deserialized object. Example A mid-level library implementer might want to implement a utility for retrieving an object from its config, as such: def deserialize(config, custom_objects=None): return deserialize_keras_object( identifier, module_objects=globals(), custom_objects=custom_objects, name="MyObjectType", ) This is how e.g. keras.layers.deserialize() is implemented.Python & NumPy utilities to_categorical function tf.keras.utils.to_categorical(y, num_classes=None, dtype="float32") Converts a class vector (integers) to binary class matrix. E.g. for use with categorical_crossentropy. Arguments y: class vector to be converted into a matrix (integers from 0 to num_classes). num_classes: total number of classes. If None, this would be inferred as the (largest number in y) + 1. dtype: The data type expected by the input. Default: 'float32'. Returns A binary matrix representation of the input. The classes axis is placed last. Example >>> a = tf.keras.utils.to_categorical([0, 1, 2, 3], num_classes=4) >>> a = tf.constant(a, shape=[4, 4]) >>> print(a) tf.Tensor( [[1. 0. 0. 0.] [0. 1. 0. 0.] [0. 0. 1. 0.] [0. 0. 0. 1.]], shape=(4, 4), dtype=float32) >>> b = tf.constant([.9, .04, .03, .03, ... .3, .45, .15, .13, ... .04, .01, .94, .05, ... .12, .21, .5, .17], ... shape=[4, 4]) >>> loss = tf.keras.backend.categorical_crossentropy(a, b) >>> print(np.around(loss, 5)) [0.10536 0.82807 0.1011 1.77196] >>> loss = tf.keras.backend.categorical_crossentropy(a, a) >>> print(np.around(loss, 5)) [0. 0. 0. 0.] Raises Value Error: If input contains string value normalize function tf.keras.utils.normalize(x, axis=-1, order=2) Normalizes a Numpy array. Arguments x: Numpy array to normalize. axis: axis along which to normalize. order: Normalization order (e.g. order=2 for L2 norm). Returns A normalized copy of the array. get_file function tf.keras.utils.get_file( fname, origin, untar=False, md5_hash=None, file_hash=None, cache_subdir="datasets", hash_algorithm="auto", extract=False, archive_format="auto", cache_dir=None, ) Downloads a file from a URL if it not already in the cache. 
By default the file at the url origin is downloaded to the cache_dir ~/.keras, placed in the cache_subdir datasets, and given the filename fname. The final location of a file example.txt would therefore be ~/.keras/datasets/example.txt. Files in tar, tar.gz, tar.bz, and zip formats can also be extracted. Passing a hash will verify the file after download. The command line programs shasum and sha256sum can compute the hash. Example path_to_downloaded_file = tf.keras.utils.get_file( "flower_photos", "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz", untar=True) Arguments fname: Name of the file. If an absolute path /path/to/file.txt is specified the file will be saved at that location. origin: Original URL of the file. untar: Deprecated in favor of extract argument. boolean, whether the file should be decompressed md5_hash: Deprecated in favor of file_hash argument. md5 hash of the file for verification file_hash: The expected hash string of the file after download. The sha256 and md5 hash algorithms are both supported. cache_subdir: Subdirectory under the Keras cache dir where the file is saved. If an absolute path /path/to/folder is specified the file will be saved at that location. hash_algorithm: Select the hash algorithm to verify the file. options are 'md5', 'sha256', and 'auto'. The default 'auto' detects the hash algorithm in use. extract: True tries extracting the file as an Archive, like tar or zip. archive_format: Archive format to try for extracting the file. Options are 'auto', 'tar', 'zip', and None. 'tar' includes tar, tar.gz, and tar.bz files. The default 'auto' corresponds to ['tar', 'zip']. None or an empty list will return no matches found. cache_dir: Location to store cached files, when None it defaults to the default directory ~/.keras/. Returns Path to the downloaded file Progbar class tf.keras.utils.Progbar( target, width=30, verbose=1, interval=0.05, stateful_metrics=None, unit_name="step" ) Displays a progress bar. Arguments target: Total number of steps expected, None if unknown. width: Progress bar width on screen. verbose: Verbosity mode, 0 (silent), 1 (verbose), 2 (semi-verbose) stateful_metrics: Iterable of string names of metrics that should not be averaged over time. Metrics in this list will be displayed as-is. All others will be averaged by the progbar before display. interval: Minimum visual progress update interval (in seconds). unit_name: Display name for step counts (usually "step" or "sample"). Sequence class tf.keras.utils.Sequence() Base object for fitting to a sequence of data, such as a dataset. Every Sequence must implement the __getitem__ and the __len__ methods. If you want to modify your dataset between epochs you may implement on_epoch_end. The method __getitem__ should return a complete batch. Notes: Sequence are a safer way to do multiprocessing. This structure guarantees that the network will only train once on each sample per epoch which is not the case with generators. Examples from skimage.io import imread from skimage.transform import resize import numpy as np import math # Here, `x_set` is list of path to the images # and `y_set` are the associated classes. 
class CIFAR10Sequence(Sequence): def __init__(self, x_set, y_set, batch_size): self.x, self.y = x_set, y_set self.batch_size = batch_size def __len__(self): return math.ceil(len(self.x) / self.batch_size) def __getitem__(self, idx): batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size] batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size] return np.array([ resize(imread(file_name), (200, 200)) for file_name in batch_x]), np.array(batch_y) Multi-GPU and distributed training Author: fchollet Date created: 2020/04/28 Last modified: 2020/04/29 Description: Guide to multi-GPU & distributed training for Keras models. View in Colab • GitHub source Introduction There are generally two ways to distribute computation across multiple devices: Data parallelism, where a single model gets replicated on multiple devices or multiple machines. Each of them processes different batches of data, then they merge their results. There exist many variants of this setup, that differ in how the different model replicas merge results, in whether they stay in sync at every batch or whether they are more loosely coupled, etc. Model parallelism, where different parts of a single model run on different devices, processing a single batch of data together. This works best with models that have a naturally-parallel architecture, such as models that feature multiple branches. This guide focuses on data parallelism, in particular synchronous data parallelism, where the different replicas of the model stay in sync after each batch they process. Synchronicity keeps the model convergence behavior identical to what you would see for single-device training. Specifically, this guide teaches you how to use the tf.distribute API to train Keras models on multiple GPUs, with minimal changes to your code, in the following two setups: On multiple GPUs (typically 2 to 8) installed on a single machine (single host, multi-device training). This is the most common setup for researchers and small-scale industry workflows. On a cluster of many machines, each hosting one or multiple GPUs (multi-worker distributed training). This is a good setup for large-scale industry workflows, e.g. training high-resolution image classification models on tens of millions of images using 20-100 GPUs. Setup import tensorflow as tf from tensorflow import keras Single-host, multi-device synchronous training In this setup, you have one machine with several GPUs on it (typically 2 to 8). Each device will run a copy of your model (called a replica). For simplicity, in what follows, we'll assume we're dealing with 8 GPUs, at no loss of generality. How it works At each step of training: The current batch of data (called global batch) is split into 8 different sub-batches (called local batches). For instance, if the global batch has 512 samples, each of the 8 local batches will have 64 samples. Each of the 8 replicas independently processes a local batch: they run a forward pass, then a backward pass, outputting the gradient of the weights with respect to the loss of the model on the local batch. The weight updates originating from local gradients are efficiently merged across the 8 replicas. Because this is done at the end of every step, the replicas always stay in sync. In practice, the process of synchronously updating the weights of the model replicas is handled at the level of each individual weight variable. This is done through a mirrored variable object. 
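The global/local batch arithmetic described above can be made concrete with a short sketch (the numbers are illustrative, not prescriptive): the dataset is batched with the global batch size, which is the per-replica batch size multiplied by the number of replicas in sync.

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()   # one replica per visible GPU by default

# If each replica can comfortably process 64 samples, batch the dataset with
# 64 * number of replicas (e.g. 512 when 8 GPUs are in sync).
per_replica_batch_size = 64
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync
print("Replicas:", strategy.num_replicas_in_sync, "-> global batch size:", global_batch_size)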
How to use it To do single-host, multi-device synchronous training with a Keras model, you would use the tf.distribute.MirroredStrategy API. Here's how it works: Instantiate a MirroredStrategy, optionally configuring which specific devices you want to use (by default the strategy will use all GPUs available). Use the strategy object to open a scope, and within this scope, create all the Keras objects you need that contain variables. Typically, that means creating & compiling the model inside the distribution scope. Train the model via fit() as usual. Importantly, we recommend that you use tf.data.Dataset objects to load data in a multi-device or distributed workflow. Schematically, it looks like this: # Create a MirroredStrategy. strategy = tf.distribute.MirroredStrategy() print('Number of devices: {}'.format(strategy.num_replicas_in_sync)) # Open a strategy scope. with strategy.scope(): # Everything that creates variables should be under the strategy scope. # In general this is only model construction & `compile()`. model = Model(...) model.compile(...) # Train the model on all available devices. model.fit(train_dataset, validation_data=val_dataset, ...) # Test the model on all available devices. model.evaluate(test_dataset) Here's a simple end-to-end runnable example: def get_compiled_model(): # Make a simple 2-layer densely-connected neural network. inputs = keras.Input(shape=(784,)) x = keras.layers.Dense(256, activation="relu")(inputs) x = keras.layers.Dense(256, activation="relu")(x) outputs = keras.layers.Dense(10)(x) model = keras.Model(inputs, outputs) model.compile( optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy()], ) return model def get_dataset(): batch_size = 32 num_val_samples = 10000 # Return the MNIST dataset in the form of a `tf.data.Dataset`. (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() # Preprocess the data (these are Numpy arrays) x_train = x_train.reshape(-1, 784).astype("float32") / 255 x_test = x_test.reshape(-1, 784).astype("float32") / 255 y_train = y_train.astype("float32") y_test = y_test.astype("float32") # Reserve num_val_samples samples for validation x_val = x_train[-num_val_samples:] y_val = y_train[-num_val_samples:] x_train = x_train[:-num_val_samples] y_train = y_train[:-num_val_samples] return ( tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size), tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(batch_size), tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size), ) # Create a MirroredStrategy. strategy = tf.distribute.MirroredStrategy() print("Number of devices: {}".format(strategy.num_replicas_in_sync)) # Open a strategy scope. with strategy.scope(): # Everything that creates variables should be under the strategy scope. # In general this is only model construction & `compile()`. model = get_compiled_model() # Train the model on all available devices. train_dataset, val_dataset, test_dataset = get_dataset() model.fit(train_dataset, epochs=2, validation_data=val_dataset) # Test the model on all available devices. model.evaluate(test_dataset) WARNING: Logging before flag parsing goes to stderr. W0829 16:54:57.025418 4592479680 cross_device_ops.py:1115] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce. 
Number of devices: 1 Epoch 1/2 1563/1563 [==============================] - 3s 2ms/step - loss: 0.3767 - sparse_categorical_accuracy: 0.8889 - val_loss: 0.1257 - val_sparse_categorical_accuracy: 0.9623 Epoch 2/2 1563/1563 [==============================] - 2s 2ms/step - loss: 0.1053 - sparse_categorical_accuracy: 0.9678 - val_loss: 0.0944 - val_sparse_categorical_accuracy: 0.9710 313/313 [==============================] - 0s 779us/step - loss: 0.0900 - sparse_categorical_accuracy: 0.9723 [0.08995261788368225, 0.9722999930381775] Using callbacks to ensure fault tolerance When using distributed training, you should always make sure you have a strategy to recover from failure (fault tolerance). The simplest way to handle this is to pass ModelCheckpoint callback to fit(), to save your model at regular intervals (e.g. every 100 batches or every epoch). You can then restart training from your saved model. Here's a simple example: import os from tensorflow import keras # Prepare a directory to store all the checkpoints. checkpoint_dir = "./ckpt" if not os.path.exists(checkpoint_dir): os.makedirs(checkpoint_dir) def make_or_restore_model(): # Either restore the latest model, or create a fresh one # if there is no checkpoint available. checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)] if checkpoints: latest_checkpoint = max(checkpoints, key=os.path.getctime) print("Restoring from", latest_checkpoint) return keras.models.load_model(latest_checkpoint) print("Creating a new model") return get_compiled_model() def run_training(epochs=1): # Create a MirroredStrategy. strategy = tf.distribute.MirroredStrategy() # Open a strategy scope and create/restore the model with strategy.scope(): model = make_or_restore_model() callbacks = [ # This callback saves a SavedModel every epoch # We include the current epoch in the folder name. keras.callbacks.ModelCheckpoint( filepath=checkpoint_dir + "/ckpt-{epoch}", save_freq="epoch" ) ] model.fit( train_dataset, epochs=epochs, callbacks=callbacks, validation_data=val_dataset, verbose=2, ) # Running the first time creates the model run_training(epochs=1) # Calling the same function again will resume from where we left off run_training(epochs=1) W0829 16:55:03.609519 4592479680 cross_device_ops.py:1115] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce. Creating a new model W0829 16:55:03.708506 4592479680 callbacks.py:1270] Automatic model reloading for interrupted job was removed from the `ModelCheckpoint` callback in multi-worker mode, please use the `keras.callbacks.experimental.BackupAndRestore` callback instead. See this tutorial for details: https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#backupandrestore_callback. 1563/1563 - 4s - loss: 0.2242 - sparse_categorical_accuracy: 0.9321 - val_loss: 0.1243 - val_sparse_categorical_accuracy: 0.9647 W0829 16:55:07.981292 4592479680 cross_device_ops.py:1115] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce. Restoring from ./ckpt/ckpt-1 W0829 16:55:08.245935 4592479680 callbacks.py:1270] Automatic model reloading for interrupted job was removed from the `ModelCheckpoint` callback in multi-worker mode, please use the `keras.callbacks.experimental.BackupAndRestore` callback instead. See this tutorial for details: https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#backupandrestore_callback. 
1563/1563 - 4s - loss: 0.0948 - sparse_categorical_accuracy: 0.9709 - val_loss: 0.1006 - val_sparse_categorical_accuracy: 0.9699 tf.data performance tips When doing distributed training, the efficiency with which you load data can often become critical. Here are a few tips to make sure your tf.data pipelines run as fast as possible. Note about dataset batching When creating your dataset, make sure it is batched with the global batch size. For instance, if each of your 8 GPUs is capable of running a batch of 64 samples, you can use a global batch size of 512. Calling dataset.cache() If you call .cache() on a dataset, its data will be cached after running through the first iteration over the data. Every subsequent iteration will use the cached data. The cache can be in memory (default) or to a local file you specify. This can improve performance when: Your data is not expected to change from iteration to iteration You are reading data from a remote distributed filesystem You are reading data from local disk, but your data would fit in memory and your workflow is significantly IO-bound (e.g. reading & decoding image files). Calling dataset.prefetch(buffer_size) You should almost always call .prefetch(buffer_size) after creating a dataset. It means your data pipeline will run asynchronously from your model, with new samples being preprocessed and stored in a buffer while the current batch samples are used to train the model. The next batch will be prefetched in GPU memory by the time the current batch is over. Multi-worker distributed synchronous training How it works In this setup, you have multiple machines (called workers), each with one or several GPUs on them. Much like what happens for single-host training, each available GPU will run one model replica, and the value of the variables of each replica is kept in sync after each batch. Importantly, the current implementation assumes that all workers have the same number of GPUs (homogeneous cluster). How to use it Set up a cluster (we provide pointers below). Set up an appropriate TF_CONFIG environment variable on each worker. This tells the worker what its role is and how to communicate with its peers. On each worker, run your model construction & compilation code within the scope of a MultiWorkerMirroredStrategy object, similarly to what we did for single-host training. Run evaluation code on a designated evaluator machine. Setting up a cluster First, set up a cluster (collective of machines). Each machine individually should be set up so as to be able to run your model (typically, each machine will run the same Docker image) and to be able to access your data source (e.g. GCS). Cluster management is beyond the scope of this guide. Here is a document to help you get started. You can also take a look at Kubeflow. Setting up the TF_CONFIG environment variable While the code running on each worker is almost the same as the code used in the single-host workflow (except with a different tf.distribute strategy object), one significant difference between the single-host workflow and the multi-worker workflow is that you need to set a TF_CONFIG environment variable on each machine running in your cluster. The TF_CONFIG environment variable is a JSON string that specifies: The cluster configuration, namely the list of addresses & ports of the machines that make up the cluster. The worker's "task", which is the role that this specific machine has to play within the cluster.
One example of TF_CONFIG is: os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0} }) In the multi-worker synchronous training setup, valid roles (task types) for the machines are "worker" and "evaluator". For example, if you have 8 machines with 4 GPUs each, you could have 7 workers and one evaluator. The workers train the model, each one processing sub-batches of a global batch. One of the workers (worker 0) will serve as "chief", a particular kind of worker that is responsible for saving logs and checkpoints for later reuse (typically to a Cloud storage location). The evaluator runs a continuous loop that loads the latest checkpoint saved by the chief worker, runs evaluation on it (asynchronously from the other workers) and writes evaluation logs (e.g. TensorBoard logs). Running code on each worker You would run training code on each worker (including the chief) and evaluation code on the evaluator. The training code is basically the same as what you would use in the single-host setup, except using MultiWorkerMirroredStrategy instead of MirroredStrategy. Each worker would run the same code (minus the difference explained in the note below), including the same callbacks. Note: Callbacks that save model checkpoints or logs should save to a different directory for each worker. It is standard practice that all workers should save to local disk (which is typically temporary), except worker 0, which would save TensorBoard logs checkpoints to a Cloud storage location for later access & reuse. The evaluator would simply use MirroredStrategy (since it runs on a single machine and does not need to communicate with other machines) and call model.evaluate(). It would be loading the latest checkpoint saved by the chief worker to a Cloud storage location, and would save evaluation logs to the same location as the chief logs. Example: code running in a multi-worker setup On the chief (worker 0): # Set TF_CONFIG os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0} }) # Open a strategy scope and create/restore the model. strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() with strategy.scope(): model = make_or_restore_model() callbacks = [ # This callback saves a SavedModel every 100 batches keras.callbacks.ModelCheckpoint(filepath='path/to/cloud/location/ckpt', save_freq=100), keras.callbacks.TensorBoard('path/to/cloud/location/tb/') ] model.fit(train_dataset, callbacks=callbacks, ...) On other workers: # Set TF_CONFIG worker_index = 1 # For instance os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': worker_index} }) # Open a strategy scope and create/restore the model. # You can restore from the checkpoint saved by the chief. strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() with strategy.scope(): model = make_or_restore_model() callbacks = [ keras.callbacks.ModelCheckpoint(filepath='local/path/ckpt', save_freq=100), keras.callbacks.TensorBoard('local/path/tb/') ] model.fit(train_dataset, callbacks=callbacks, ...) On the evaluator: strategy = tf.distribute.MirroredStrategy() with strategy.scope(): model = make_or_restore_model() # Restore from the checkpoint saved by the chief. 
results = model.evaluate(val_dataset) # Then, log the results on a shared location, write TensorBoard logs, etc Further reading TensorFlow distributed training guide Tutorial on multi-worker training with Keras MirroredStrategy docs MultiWorkerMirroredStrategy docs Distributed training in tf.keras with Weights & BiasesTraining Keras models with TensorFlow Cloud Author: Jonah Kohn Date created: 2020/08/11 Last modified: 2020/08/11 Description: In-depth usage guide for TensorFlow Cloud. View in Colab • GitHub source Introduction TensorFlow Cloud is a Python package that provides APIs for a seamless transition from local debugging to distributed training in Google Cloud. It simplifies the process of training TensorFlow models on the cloud into a single, simple function call, requiring minimal setup and no changes to your model. TensorFlow Cloud handles cloud-specific tasks such as creating VM instances and distribution strategies for your models automatically. This guide will demonstrate how to interface with Google Cloud through TensorFlow Cloud, and the wide range of functionality provided within TensorFlow Cloud. We'll start with the simplest use-case. Setup We'll get started by installing TensorFlow Cloud, and importing the packages we will need in this guide. !pip install -q tensorflow_cloud import tensorflow as tf import tensorflow_cloud as tfc from tensorflow import keras from tensorflow.keras import layers API overview: a first end-to-end example Let's begin with a Keras model training script, such as the following CNN: (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() model = keras.Sequential( [ keras.Input(shape=(28, 28)), # Use a Rescaling layer to make sure input values are in the [0, 1] range. layers.experimental.preprocessing.Rescaling(1.0 / 255), # The original images have shape (28, 28), so we reshape them to (28, 28, 1) layers.Reshape(target_shape=(28, 28, 1)), # Follow-up with a classic small convnet layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(2), layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(2), layers.Conv2D(32, 3, activation="relu"), layers.Flatten(), layers.Dense(128, activation="relu"), layers.Dense(10), ] ) model.compile( optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=keras.metrics.SparseCategoricalAccuracy(), ) model.fit(x_train, y_train, epochs=20, batch_size=128, validation_split=0.1) To train this model on Google Cloud we just need to add a call to run() at the beginning of the script, before the imports: tfc.run() You don’t need to worry about cloud-specific tasks such as creating VM instances and distribution strategies when using TensorFlow Cloud. The API includes intelligent defaults for all the parameters -- everything is configurable, but many models can rely on these defaults. Upon calling run(), TensorFlow Cloud will: Make your Python script or notebook distribution-ready. Convert it into a Docker image with required dependencies. Run the training job on a GCP GPU-powered VM. Stream relevant logs and job information. The default VM configuration is 1 chief and 0 workers with 8 CPU cores and 1 Tesla T4 GPU. Google Cloud configuration In order to facilitate the proper pathways for Cloud training, you will need to do some first-time setup. 
If you're a new Google Cloud user, there are a few preliminary steps you will need to take: Create a GCP Project; Enable AI Platform Services; Create a Service Account; Download an authorization key; Create a Cloud Storage bucket. Detailed first-time setup instructions can be found in the TensorFlow Cloud README, and an additional setup example is shown on the TensorFlow Blog. Common workflows and Cloud storage In most cases, you'll want to retrieve your model after training on Google Cloud. For this, it's crucial to redirect saving and loading to Cloud Storage while training remotely. We can direct TensorFlow Cloud to our Cloud Storage bucket for a variety of tasks. The storage bucket can be used to save and load large training datasets, store callback logs or model weights, and save trained model files. To begin, let's configure fit() to save the model to a Cloud Storage, and set up TensorBoard monitoring to track training progress. def create_model(): model = keras.Sequential( [ keras.Input(shape=(28, 28)), layers.experimental.preprocessing.Rescaling(1.0 / 255), layers.Reshape(target_shape=(28, 28, 1)), layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(2), layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(2), layers.Conv2D(32, 3, activation="relu"), layers.Flatten(), layers.Dense(128, activation="relu"), layers.Dense(10), ] ) model.compile( optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=keras.metrics.SparseCategoricalAccuracy(), ) return model Let's save the TensorBoard logs and model checkpoints generated during training in our cloud storage bucket. import datetime import os # Note: Please change the gcp_bucket to your bucket name. gcp_bucket = "keras-examples" checkpoint_path = os.path.join("gs://", gcp_bucket, "mnist_example", "save_at_{epoch}") tensorboard_path = os.path.join( # Timestamp included to enable timeseries graphs "gs://", gcp_bucket, "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S") ) callbacks = [ # TensorBoard will store logs for each epoch and graph performance for us. keras.callbacks.TensorBoard(log_dir=tensorboard_path, histogram_freq=1), # ModelCheckpoint will save models after each epoch for retrieval later. keras.callbacks.ModelCheckpoint(checkpoint_path), # EarlyStopping will terminate training when val_loss ceases to improve. keras.callbacks.EarlyStopping(monitor="val_loss", patience=3), ] model = create_model() Here, we will load our data from Keras directly. In general, it's best practice to store your dataset in your Cloud Storage bucket, however TensorFlow Cloud can also accomodate datasets stored locally. That's covered in the Multi-file section of this guide. (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() The TensorFlow Cloud API provides the remote() function to determine whether code is being executed locally or on the cloud. This allows for the separate designation of fit() parameters for local and remote execution, and provides means for easy debugging without overloading your local machine. 
if tfc.remote(): epochs = 100 callbacks = callbacks batch_size = 128 else: epochs = 5 batch_size = 64 callbacks = None model.fit(x_train, y_train, epochs=epochs, callbacks=callbacks, batch_size=batch_size) Epoch 1/5 938/938 [==============================] - 6s 7ms/step - loss: 0.2021 - sparse_categorical_accuracy: 0.9383 Epoch 2/5 938/938 [==============================] - 6s 7ms/step - loss: 0.0533 - sparse_categorical_accuracy: 0.9836 Epoch 3/5 938/938 [==============================] - 6s 7ms/step - loss: 0.0385 - sparse_categorical_accuracy: 0.9883 Epoch 4/5 938/938 [==============================] - 6s 7ms/step - loss: 0.0330 - sparse_categorical_accuracy: 0.9895 Epoch 5/5 938/938 [==============================] - 6s 7ms/step - loss: 0.0255 - sparse_categorical_accuracy: 0.9916 Let's save the model in GCS after the training is complete. save_path = os.path.join("gs://", gcp_bucket, "mnist_example") if tfc.remote(): model.save(save_path) We can also use this storage bucket for Docker image building, instead of your local Docker instance. For this, just add your bucket to the docker_image_bucket_name parameter. # docs_infra: no_execute tfc.run(docker_image_bucket_name=gcp_bucket) After training the model, we can load the saved model and view our TensorBoard logs to monitor performance. # docs_infra: no_execute model = keras.models.load_model(save_path) !#docs_infra: no_execute !tensorboard dev upload --logdir "gs://keras-examples-jonah/logs/fit" --name "Guide MNIST" Large-scale projects In many cases, your project containing a Keras model may encompass more than one Python script, or may involve external data or specific dependencies. TensorFlow Cloud is entirely flexible for large-scale deployment, and provides a number of intelligent functionalities to aid your projects. Entry points: support for Python scripts and Jupyter notebooks Your call to the run() API won't always be contained inside the same Python script as your model training code. For this purpose, we provide an entry_point parameter. The entry_point parameter can be used to specify the Python script or notebook in which your model training code lives. When calling run() from the same script as your model, use the entry_point default of None. pip dependencies If your project calls on additional pip dependencies, it's possible to specify the additional required libraries by including a requirements.txt file. In this file, simply put a list of all the required dependencies and TensorFlow Cloud will handle integrating these into your cloud build. Python notebooks TensorFlow Cloud is also runnable from Python notebooks. Additionally, your specified entry_point can be a notebook if needed. There are two key differences to keep in mind between TensorFlow Cloud on notebooks compared to scripts: When calling run() from within a notebook, a Cloud Storage bucket must be specified for building and storing your Docker image. GCloud authentication happens entirely through your authentication key, without project specification. An example workflow using TensorFlow Cloud from a notebook is provided in the "Putting it all together" section of this guide. Multi-file projects If your model depends on additional files, you only need to ensure that these files live in the same directory (or subdirectory) of the specified entry point. Every file that is stored in the same directory as the specified entry_point will be included in the Docker image, as well as any files stored in subdirectories adjacent to the entry_point. 
This is also true for dependencies you may need that can't be acquired through pip. For an example of a custom entry-point and multi-file project with additional pip dependencies, take a look at this multi-file example on the TensorFlow Cloud Repository. For brevity, we'll just include the example's run() call: tfc.run( docker_image_bucket_name=gcp_bucket, entry_point="train_model.py", requirements="requirements.txt" ) Machine configuration and distributed training Model training may require a wide range of different resources, depending on the size of the model or the dataset. When accounting for configurations with multiple GPUs, it becomes critical to choose a fitting distribution strategy. Here, we outline a few possible configurations: Multi-worker distribution Here, we can use COMMON_MACHINE_CONFIGS to designate a CPU-only chief and two workers, each equipped with four T4 GPUs. tfc.run( docker_image_bucket_name=gcp_bucket, chief_config=tfc.COMMON_MACHINE_CONFIGS['CPU'], worker_count=2, worker_config=tfc.COMMON_MACHINE_CONFIGS['T4_4X'] ) By default, TensorFlow Cloud chooses the best distribution strategy for your machine configuration with a simple formula using the chief_config, worker_config and worker_count parameters provided. If the number of GPUs specified is greater than zero, tf.distribute.MirroredStrategy will be chosen. If the number of workers is greater than zero, tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.experimental.TPUStrategy will be chosen based on the accelerator type. Otherwise, tf.distribute.OneDeviceStrategy will be chosen. TPU distribution Let's train the same model on TPU, as shown: tfc.run( docker_image_bucket_name=gcp_bucket, chief_config=tfc.COMMON_MACHINE_CONFIGS["CPU"], worker_count=1, worker_config=tfc.COMMON_MACHINE_CONFIGS["TPU"] ) Custom distribution strategy To specify a custom distribution strategy, format your code normally as you would according to the distributed training guide and set distribution_strategy to None. Below, we'll specify our own distribution strategy for the same MNIST model. (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() mirrored_strategy = tf.distribute.MirroredStrategy() with mirrored_strategy.scope(): model = create_model() if tfc.remote(): epochs = 100 batch_size = 128 else: epochs = 10 batch_size = 64 callbacks = None model.fit( x_train, y_train, epochs=epochs, callbacks=callbacks, batch_size=batch_size ) tfc.run( docker_image_bucket_name=gcp_bucket, chief_config=tfc.COMMON_MACHINE_CONFIGS['CPU'], worker_count=2, worker_config=tfc.COMMON_MACHINE_CONFIGS['T4_4X'], distribution_strategy=None ) Custom Docker images By default, TensorFlow Cloud uses a Docker base image supplied by Google and corresponding to your current TensorFlow version. However, you can also specify a custom Docker image to fit your build requirements, if necessary. For this example, we will specify the Docker image from an older version of TensorFlow: tfc.run( docker_image_bucket_name=gcp_bucket, base_docker_image="tensorflow/tensorflow:2.1.0-gpu" ) Additional metrics You may find it useful to tag your Cloud jobs with specific labels, or to stream your model's logs during Cloud training. It's good practice to maintain proper labeling on all Cloud jobs, for record-keeping. For this purpose, run() accepts a dictionary of up to 64 key-value label pairs, which are visible from the Cloud build logs.
Logs such as epoch performance and model saving internals can be accessed using the link provided when executing tfc.run, or printed to your local terminal using the stream_logs flag. job_labels = {"job": "mnist-example", "team": "keras-io", "user": "jonah"} tfc.run( docker_image_bucket_name=gcp_bucket, job_labels=job_labels, stream_logs=True ) Putting it all together For an in-depth Colab which uses many of the features described in this guide, follow this example to train a state-of-the-art model to recognize dog breeds from photos using feature extraction. Training & evaluation with the built-in methods Author: fchollet Date created: 2019/03/01 Last modified: 2020/04/13 Description: Complete guide to training & evaluation with fit() and evaluate(). View in Colab • GitHub source Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Introduction This guide covers training, evaluation, and prediction (inference) of models when using built-in APIs for training & validation (such as Model.fit(), Model.evaluate() and Model.predict()). If you are interested in leveraging fit() while specifying your own training step function, see the Customizing what happens in fit() guide. If you are interested in writing your own training & evaluation loops from scratch, see the guide "Writing a training loop from scratch". In general, whether you are using built-in loops or writing your own, model training & evaluation works in exactly the same way across every kind of Keras model -- Sequential models, models built with the Functional API, and models written from scratch via model subclassing. This guide doesn't cover distributed training, which is covered in our guide to multi-GPU & distributed training. API overview: a first end-to-end example When passing data to the built-in training loops of a model, you should either use NumPy arrays (if your data is small and fits in memory) or tf.data Dataset objects. In the next few paragraphs, we'll use the MNIST dataset as NumPy arrays, in order to demonstrate how to use optimizers, losses, and metrics. Let's consider the following model (here, we build it with the Functional API, but it could be a Sequential model or a subclassed model as well): inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu", name="dense_1")(inputs) x = layers.Dense(64, activation="relu", name="dense_2")(x) outputs = layers.Dense(10, activation="softmax", name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) Here's what the typical end-to-end workflow looks like, consisting of: Training Validation on a holdout set generated from the original training data Evaluation on the test data We'll use MNIST data for this example.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() # Preprocess the data (these are NumPy arrays) x_train = x_train.reshape(60000, 784).astype("float32") / 255 x_test = x_test.reshape(10000, 784).astype("float32") / 255 y_train = y_train.astype("float32") y_test = y_test.astype("float32") # Reserve 10,000 samples for validation x_val = x_train[-10000:] y_val = y_train[-10000:] x_train = x_train[:-10000] y_train = y_train[:-10000] We specify the training configuration (optimizer, loss, metrics): model.compile( optimizer=keras.optimizers.RMSprop(), # Optimizer # Loss function to minimize loss=keras.losses.SparseCategoricalCrossentropy(), # List of metrics to monitor metrics=[keras.metrics.SparseCategoricalAccuracy()], ) We call fit(), which will train the model by slicing the data into "batches" of size batch_size, and repeatedly iterating over the entire dataset for a given number of epochs. print("Fit model on training data") history = model.fit( x_train, y_train, batch_size=64, epochs=2, # We pass some validation for # monitoring validation loss and metrics # at the end of each epoch validation_data=(x_val, y_val), ) Fit model on training data Epoch 1/2 782/782 [==============================] - 2s 2ms/step - loss: 0.5776 - sparse_categorical_accuracy: 0.8435 - val_loss: 0.1810 - val_sparse_categorical_accuracy: 0.9475 Epoch 2/2 782/782 [==============================] - 1s 978us/step - loss: 0.1679 - sparse_categorical_accuracy: 0.9511 - val_loss: 0.1637 - val_sparse_categorical_accuracy: 0.9529 The returned history object holds a record of the loss values and metric values during training: history.history {'loss': [0.3402276635169983, 0.15610544383525848], 'sparse_categorical_accuracy': [0.9048200249671936, 0.9537400007247925], 'val_loss': [0.1809607595205307, 0.16366209089756012], 'val_sparse_categorical_accuracy': [0.9474999904632568, 0.9528999924659729]} We evaluate the model on the test data via evaluate(): # Evaluate the model on the test data using `evaluate` print("Evaluate on test data") results = model.evaluate(x_test, y_test, batch_size=128) print("test loss, test acc:", results) # Generate predictions (probabilities -- the output of the last layer) # on new data using `predict` print("Generate predictions for 3 samples") predictions = model.predict(x_test[:3]) print("predictions shape:", predictions.shape) Evaluate on test data 79/79 [==============================] - 0s 846us/step - loss: 0.1587 - sparse_categorical_accuracy: 0.9513 test loss, test acc: [0.15874555706977844, 0.9513000249862671] Generate predictions for 3 samples predictions shape: (3, 10) Now, let's review each piece of this workflow in detail. The compile() method: specifying a loss, metrics, and an optimizer To train a model with fit(), you need to specify a loss function, an optimizer, and optionally, some metrics to monitor. You pass these to the model as arguments to the compile() method: model.compile( optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=[keras.metrics.SparseCategoricalAccuracy()], ) The metrics argument should be a list -- your model can have any number of metrics. If your model has multiple outputs, you can specify different losses and metrics for each output, and you can modulate the contribution of each output to the total loss of the model. You will find more details about this in the Passing data to multi-input, multi-output models section. 
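Because metrics is a list, you can also track several metrics on a single output at once. A minimal sketch (the added top-5 accuracy metric is purely illustrative and not used elsewhere in this guide):
model.compile(
    optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
    loss=keras.losses.SparseCategoricalCrossentropy(),
    metrics=[
        keras.metrics.SparseCategoricalAccuracy(),
        # Also monitor how often the true class appears in the top 5 predictions.
        keras.metrics.SparseTopKCategoricalAccuracy(k=5),
    ],
)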
Note that if you're satisfied with the default settings, in many cases the optimizer, loss, and metrics can be specified via string identifiers as a shortcut: model.compile( optimizer="rmsprop", loss="sparse_categorical_crossentropy", metrics=["sparse_categorical_accuracy"], ) For later reuse, let's put our model definition and compile step in functions; we will call them several times across different examples in this guide. def get_uncompiled_model(): inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu", name="dense_1")(inputs) x = layers.Dense(64, activation="relu", name="dense_2")(x) outputs = layers.Dense(10, activation="softmax", name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) return model def get_compiled_model(): model = get_uncompiled_model() model.compile( optimizer="rmsprop", loss="sparse_categorical_crossentropy", metrics=["sparse_categorical_accuracy"], ) return model Many built-in optimizers, losses, and metrics are available In general, you won't have to create your own losses, metrics, or optimizers from scratch, because what you need is likely to be already part of the Keras API: Optimizers: SGD() (with or without momentum) RMSprop() Adam() etc. Losses: MeanSquaredError() KLDivergence() CosineSimilarity() etc. Metrics: AUC() Precision() Recall() etc. Custom losses If you need to create a custom loss, Keras provides two ways to do so. The first method involves creating a function that accepts inputs y_true and y_pred. The following example shows a loss function that computes the mean squared error between the real data and the predictions: def custom_mean_squared_error(y_true, y_pred): return tf.math.reduce_mean(tf.square(y_true - y_pred)) model = get_uncompiled_model() model.compile(optimizer=keras.optimizers.Adam(), loss=custom_mean_squared_error) # We need to one-hot encode the labels to use MSE y_train_one_hot = tf.one_hot(y_train, depth=10) model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1) 782/782 [==============================] - 1s 756us/step - loss: 0.0279 If you need a loss function that takes in parameters beside y_true and y_pred, you can subclass the tf.keras.losses.Loss class and implement the following two methods: __init__(self): accept parameters to pass during the call of your loss function call(self, y_true, y_pred): use the targets (y_true) and the model predictions (y_pred) to compute the model's loss Let's say you want to use mean squared error, but with an added term that will de-incentivize prediction values far from 0.5 (we assume that the categorical targets are one-hot encoded and take values between 0 and 1). This creates an incentive for the model not to be too confident, which may help reduce overfitting (we won't know if it works until we try!). 
Here's how you would do it: class CustomMSE(keras.losses.Loss): def __init__(self, regularization_factor=0.1, name="custom_mse"): super().__init__(name=name) self.regularization_factor = regularization_factor def call(self, y_true, y_pred): mse = tf.math.reduce_mean(tf.square(y_true - y_pred)) reg = tf.math.reduce_mean(tf.square(0.5 - y_pred)) return mse + reg * self.regularization_factor model = get_uncompiled_model() model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE()) y_train_one_hot = tf.one_hot(y_train, depth=10) model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1) 782/782 [==============================] - 1s 787us/step - loss: 0.0484 Custom metrics If you need a metric that isn't part of the API, you can easily create custom metrics by subclassing the tf.keras.metrics.Metric class. You will need to implement 4 methods: __init__(self), in which you will create state variables for your metric. update_state(self, y_true, y_pred, sample_weight=None), which uses the targets y_true and the model predictions y_pred to update the state variables. result(self), which uses the state variables to compute the final results. reset_states(self), which reinitializes the state of the metric. State update and results computation are kept separate (in update_state() and result(), respectively) because in some cases, the results computation might be very expensive and would only be done periodically. Here's a simple example showing how to implement a CategoricalTruePositives metric that counts how many samples were correctly classified as belonging to a given class: class CategoricalTruePositives(keras.metrics.Metric): def __init__(self, name="categorical_true_positives", **kwargs): super(CategoricalTruePositives, self).__init__(name=name, **kwargs) self.true_positives = self.add_weight(name="ctp", initializer="zeros") def update_state(self, y_true, y_pred, sample_weight=None): y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1)) values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32") values = tf.cast(values, "float32") if sample_weight is not None: sample_weight = tf.cast(sample_weight, "float32") values = tf.multiply(values, sample_weight) self.true_positives.assign_add(tf.reduce_sum(values)) def result(self): return self.true_positives def reset_states(self): # The state of the metric will be reset at the start of each epoch. self.true_positives.assign(0.0) model = get_uncompiled_model() model.compile( optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=[CategoricalTruePositives()], ) model.fit(x_train, y_train, batch_size=64, epochs=3) Epoch 1/3 782/782 [==============================] - 1s 871us/step - loss: 0.5631 - categorical_true_positives: 22107.3525 Epoch 2/3 782/782 [==============================] - 1s 826us/step - loss: 0.1679 - categorical_true_positives: 23860.3078 Epoch 3/3 782/782 [==============================] - 1s 823us/step - loss: 0.1102 - categorical_true_positives: 24231.2771 Handling losses and metrics that don't fit the standard signature The overwhelming majority of losses and metrics can be computed from y_true and y_pred, where y_pred is an output of your model -- but not all of them. For instance, a regularization loss may only require the activation of a layer (there are no targets in this case), and this activation may not be a model output. In such cases, you can call self.add_loss(loss_value) from inside the call method of a custom layer. 
Losses added in this way get added to the "main" loss during training (the one passed to compile()). Here's a simple example that adds activity regularization (note that activity regularization is built-in in all Keras layers -- this layer is just for the sake of providing a concrete example): class ActivityRegularizationLayer(layers.Layer): def call(self, inputs): self.add_loss(tf.reduce_sum(inputs) * 0.1) return inputs # Pass-through layer. inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu", name="dense_1")(inputs) # Insert activity regularization as a layer x = ActivityRegularizationLayer()(x) x = layers.Dense(64, activation="relu", name="dense_2")(x) outputs = layers.Dense(10, name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) model.compile( optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), ) # The displayed loss will be much higher than before # due to the regularization component. model.fit(x_train, y_train, batch_size=64, epochs=1) 782/782 [==============================] - 1s 828us/step - loss: 3.5361 You can do the same for logging metric values, using add_metric(): class MetricLoggingLayer(layers.Layer): def call(self, inputs): # The `aggregation` argument defines # how to aggregate the per-batch values # over each epoch: # in this case we simply average them. self.add_metric( keras.backend.std(inputs), name="std_of_activation", aggregation="mean" ) return inputs # Pass-through layer. inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu", name="dense_1")(inputs) # Insert std logging as a layer. x = MetricLoggingLayer()(x) x = layers.Dense(64, activation="relu", name="dense_2")(x) outputs = layers.Dense(10, name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) model.compile( optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), ) model.fit(x_train, y_train, batch_size=64, epochs=1) 782/782 [==============================] - 1s 859us/step - loss: 0.5469 - std_of_activation: 0.9414 In the Functional API, you can also call model.add_loss(loss_tensor), or model.add_metric(metric_tensor, name, aggregation). Here's a simple example: inputs = keras.Input(shape=(784,), name="digits") x1 = layers.Dense(64, activation="relu", name="dense_1")(inputs) x2 = layers.Dense(64, activation="relu", name="dense_2")(x1) outputs = layers.Dense(10, name="predictions")(x2) model = keras.Model(inputs=inputs, outputs=outputs) model.add_loss(tf.reduce_sum(x1) * 0.1) model.add_metric(keras.backend.std(x1), name="std_of_activation", aggregation="mean") model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), ) model.fit(x_train, y_train, batch_size=64, epochs=1) 782/782 [==============================] - 1s 875us/step - loss: 3.4905 - std_of_activation: 0.0019 Note that when you pass losses via add_loss(), it becomes possible to call compile() without a loss function, since the model already has a loss to minimize. Consider the following LogisticEndpoint layer: it takes as inputs targets & logits, and it tracks a crossentropy loss via add_loss(). It also tracks classification accuracy via add_metric(). 
class LogisticEndpoint(keras.layers.Layer): def __init__(self, name=None): super(LogisticEndpoint, self).__init__(name=name) self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True) self.accuracy_fn = keras.metrics.BinaryAccuracy() def call(self, targets, logits, sample_weights=None): # Compute the training-time loss value and add it # to the layer using `self.add_loss()`. loss = self.loss_fn(targets, logits, sample_weights) self.add_loss(loss) # Log accuracy as a metric and add it # to the layer using `self.add_metric()`. acc = self.accuracy_fn(targets, logits, sample_weights) self.add_metric(acc, name="accuracy") # Return the inference-time prediction tensor (for `.predict()`). return tf.nn.softmax(logits) You can use it in a model with two inputs (input data & targets), compiled without a loss argument, like this: import numpy as np inputs = keras.Input(shape=(3,), name="inputs") targets = keras.Input(shape=(10,), name="targets") logits = keras.layers.Dense(10)(inputs) predictions = LogisticEndpoint(name="predictions")(logits, targets) model = keras.Model(inputs=[inputs, targets], outputs=predictions) model.compile(optimizer="adam") # No loss argument! data = { "inputs": np.random.random((3, 3)), "targets": np.random.random((3, 10)), } model.fit(data) 1/1 [==============================] - 0s 241ms/step - loss: 0.9990 - binary_accuracy: 0.0000e+00 For more information about training multi-input models, see the section Passing data to multi-input, multi-output models. Automatically setting apart a validation holdout set In the first end-to-end example you saw, we used the validation_data argument to pass a tuple of NumPy arrays (x_val, y_val) to the model for evaluating a validation loss and validation metrics at the end of each epoch. Here's another option: the argument validation_split allows you to automatically reserve part of your training data for validation. The argument value represents the fraction of the data to be reserved for validation, so it should be set to a number higher than 0 and lower than 1. For instance, validation_split=0.2 means "use 20% of the data for validation", and validation_split=0.6 means "use 60% of the data for validation". The way the validation is computed is by taking the last x% samples of the arrays received by the fit() call, before any shuffling. Note that you can only use validation_split when training with NumPy data. model = get_compiled_model() model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1) 625/625 [==============================] - 1s 1ms/step - loss: 0.6263 - sparse_categorical_accuracy: 0.8288 - val_loss: 0.2548 - val_sparse_categorical_accuracy: 0.9206 Training & evaluation from tf.data Datasets In the past few paragraphs, you've seen how to handle losses, metrics, and optimizers, and you've seen how to use the validation_data and validation_split arguments in fit(), when your data is passed as NumPy arrays. Let's now take a look at the case where your data comes in the form of a tf.data.Dataset object. The tf.data API is a set of utilities in TensorFlow 2.0 for loading and preprocessing data in a way that's fast and scalable. For a complete guide about creating Datasets, see the tf.data documentation. You can pass a Dataset instance directly to the methods fit(), evaluate(), and predict(): model = get_compiled_model() # First, let's create a training Dataset instance. # For the sake of our example, we'll use the same MNIST data as before. 
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) # Shuffle and slice the dataset. train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) # Now we get a test dataset. test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)) test_dataset = test_dataset.batch(64) # Since the dataset already takes care of batching, # we don't pass a `batch_size` argument. model.fit(train_dataset, epochs=3) # You can also evaluate or predict on a dataset. print("Evaluate") result = model.evaluate(test_dataset) dict(zip(model.metrics_names, result)) Epoch 1/3 782/782 [==============================] - 1s 1ms/step - loss: 0.5631 - sparse_categorical_accuracy: 0.8472 Epoch 2/3 782/782 [==============================] - 1s 1ms/step - loss: 0.1676 - sparse_categorical_accuracy: 0.9497 Epoch 3/3 782/782 [==============================] - 1s 1ms/step - loss: 0.1211 - sparse_categorical_accuracy: 0.9638 Evaluate 157/157 [==============================] - 0s 790us/step - loss: 0.1231 - sparse_categorical_accuracy: 0.9625 {'loss': 0.1230572983622551, 'sparse_categorical_accuracy': 0.9624999761581421} Note that the Dataset is reset at the end of each epoch, so it can be reused of the next epoch. If you want to run training only on a specific number of batches from this Dataset, you can pass the steps_per_epoch argument, which specifies how many training steps the model should run using this Dataset before moving on to the next epoch. If you do this, the dataset is not reset at the end of each epoch, instead we just keep drawing the next batches. The dataset will eventually run out of data (unless it is an infinitely-looping dataset). model = get_compiled_model() # Prepare the training dataset train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) # Only use the 100 batches per epoch (that's 64 * 100 samples) model.fit(train_dataset, epochs=3, steps_per_epoch=100) Epoch 1/3 100/100 [==============================] - 0s 1ms/step - loss: 1.2444 - sparse_categorical_accuracy: 0.6461 Epoch 2/3 100/100 [==============================] - 0s 1ms/step - loss: 0.3783 - sparse_categorical_accuracy: 0.8929 Epoch 3/3 100/100 [==============================] - 0s 1ms/step - loss: 0.3543 - sparse_categorical_accuracy: 0.8988 Using a validation dataset You can pass a Dataset instance as the validation_data argument in fit(): model = get_compiled_model() # Prepare the training dataset train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) # Prepare the validation dataset val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)) val_dataset = val_dataset.batch(64) model.fit(train_dataset, epochs=1, validation_data=val_dataset) 782/782 [==============================] - 1s 1ms/step - loss: 0.5465 - sparse_categorical_accuracy: 0.8459 - val_loss: 0.1943 - val_sparse_categorical_accuracy: 0.9435 At the end of each epoch, the model will iterate over the validation dataset and compute the validation loss and validation metrics. 
If you want to run validation only on a specific number of batches from this dataset, you can pass the validation_steps argument, which specifies how many validation steps the model should run with the validation dataset before interrupting validation and moving on to the next epoch: model = get_compiled_model() # Prepare the training dataset train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) # Prepare the validation dataset val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)) val_dataset = val_dataset.batch(64) model.fit( train_dataset, epochs=1, # Only run validation using the first 10 batches of the dataset # using the `validation_steps` argument validation_data=val_dataset, validation_steps=10, ) 782/782 [==============================] - 1s 1ms/step - loss: 0.5561 - sparse_categorical_accuracy: 0.8464 - val_loss: 0.2889 - val_sparse_categorical_accuracy: 0.9156 Note that the validation dataset will be reset after each use (so that you will always be evaluating on the same samples from epoch to epoch). The argument validation_split (generating a holdout set from the training data) is not supported when training from Dataset objects, since this feature requires the ability to index the samples of the datasets, which is not possible in general with the Dataset API. Other input formats supported Besides NumPy arrays, eager tensors, and TensorFlow Datasets, it's possible to train a Keras model using Pandas dataframes, or from Python generators that yield batches of data & labels. In particular, the keras.utils.Sequence class offers a simple interface to build Python data generators that are multiprocessing-aware and can be shuffled. In general, we recommend that you use: NumPy input data if your data is small and fits in memory Dataset objects if you have large datasets and you need to do distributed training Sequence objects if you have large datasets and you need to do a lot of custom Python-side processing that cannot be done in TensorFlow (e.g. if you rely on external libraries for data loading or preprocessing). Using a keras.utils.Sequence object as input keras.utils.Sequence is a utility that you can subclass to obtain a Python generator with two important properties: It works well with multiprocessing. It can be shuffled (e.g. when passing shuffle=True in fit()). A Sequence must implement two methods: __getitem__ __len__ The method __getitem__ should return a complete batch. If you want to modify your dataset between epochs, you may implement on_epoch_end. Here's a quick example: from skimage.io import imread from skimage.transform import resize import numpy as np # Here, `filenames` is list of path to the images # and `labels` are the associated labels. 
class CIFAR10Sequence(keras.utils.Sequence): def __init__(self, filenames, labels, batch_size): self.filenames, self.labels = filenames, labels self.batch_size = batch_size def __len__(self): return int(np.ceil(len(self.filenames) / float(self.batch_size))) def __getitem__(self, idx): batch_x = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size] batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size] return np.array([ resize(imread(filename), (200, 200)) for filename in batch_x]), np.array(batch_y) sequence = CIFAR10Sequence(filenames, labels, batch_size) model.fit(sequence, epochs=10) Using sample weighting and class weighting With the default settings, the weight of a sample is decided by its frequency in the dataset. There are two methods to weight the data, independent of sample frequency: Class weights Sample weights Class weights This is set by passing a dictionary to the class_weight argument to Model.fit(). This dictionary maps class indices to the weight that should be used for samples belonging to this class. This can be used to balance classes without resampling, or to train a model that gives more importance to a particular class. For instance, if class "0" is half as represented as class "1" in your data, you could use Model.fit(..., class_weight={0: 1., 1: 0.5}). Here's a NumPy example where we use class weights or sample weights to give more importance to the correct classification of class #5 (which is the digit "5" in the MNIST dataset). import numpy as np class_weight = { 0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0, # Set weight "2" for class "5", # making this class 2x more important 5: 2.0, 6: 1.0, 7: 1.0, 8: 1.0, 9: 1.0, } print("Fit with class weight") model = get_compiled_model() model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1) Fit with class weight 782/782 [==============================] - 1s 933us/step - loss: 0.6334 - sparse_categorical_accuracy: 0.8297 Sample weights For fine-grained control, or if you are not building a classifier, you can use "sample weights". When training from NumPy data: Pass the sample_weight argument to Model.fit(). When training from tf.data or any other sort of iterator: Yield (input_batch, label_batch, sample_weight_batch) tuples. A "sample weights" array is an array of numbers that specify how much weight each sample in a batch should have in computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes). When the weights used are ones and zeros, the array can be used as a mask for the loss function (entirely discarding the contribution of certain samples to the total loss). sample_weight = np.ones(shape=(len(y_train),)) sample_weight[y_train == 5] = 2.0 print("Fit with sample weight") model = get_compiled_model() model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1) Fit with sample weight 782/782 [==============================] - 1s 899us/step - loss: 0.6337 - sparse_categorical_accuracy: 0.8355 Here's a matching Dataset example: sample_weight = np.ones(shape=(len(y_train),)) sample_weight[y_train == 5] = 2.0 # Create a Dataset that includes sample weights # (3rd element in the return tuple). train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train, sample_weight)) # Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) model = get_compiled_model() model.fit(train_dataset, epochs=1) 782/782 [==============================] - 1s 1ms/step - loss: 0.6539 - sparse_categorical_accuracy: 0.8364 Passing data to multi-input, multi-output models In the previous examples, we were considering a model with a single input (a tensor of shape (764,)) and a single output (a prediction tensor of shape (10,)). But what about models that have multiple inputs or outputs? Consider the following model, which has an image input of shape (32, 32, 3) (that's (height, width, channels)) and a time series input of shape (None, 10) (that's (timesteps, features)). Our model will have two outputs computed from the combination of these inputs: a "score" (of shape (1,)) and a probability distribution over five classes (of shape (5,)). image_input = keras.Input(shape=(32, 32, 3), name="img_input") timeseries_input = keras.Input(shape=(None, 10), name="ts_input") x1 = layers.Conv2D(3, 3)(image_input) x1 = layers.GlobalMaxPooling2D()(x1) x2 = layers.Conv1D(3, 3)(timeseries_input) x2 = layers.GlobalMaxPooling1D()(x2) x = layers.concatenate([x1, x2]) score_output = layers.Dense(1, name="score_output")(x) class_output = layers.Dense(5, name="class_output")(x) model = keras.Model( inputs=[image_input, timeseries_input], outputs=[score_output, class_output] ) Let's plot this model, so you can clearly see what we're doing here (note that the shapes shown in the plot are batch shapes, rather than per-sample shapes). keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True) png At compilation time, we can specify different losses to different outputs, by passing the loss functions as a list: model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()], ) If we only passed a single loss function to the model, the same loss function would be applied to every output (which is not appropriate here). Likewise for metrics: model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()], metrics=[ [ keras.metrics.MeanAbsolutePercentageError(), keras.metrics.MeanAbsoluteError(), ], [keras.metrics.CategoricalAccuracy()], ], ) Since we gave names to our output layers, we could also specify per-output losses and metrics via a dict: model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss={ "score_output": keras.losses.MeanSquaredError(), "class_output": keras.losses.CategoricalCrossentropy(), }, metrics={ "score_output": [ keras.metrics.MeanAbsolutePercentageError(), keras.metrics.MeanAbsoluteError(), ], "class_output": [keras.metrics.CategoricalAccuracy()], }, ) We recommend the use of explicit names and dicts if you have more than 2 outputs. 
It's possible to give different weights to different output-specific losses (for instance, one might wish to privilege the "score" loss in our example, by giving to 2x the importance of the class loss), using the loss_weights argument: model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss={ "score_output": keras.losses.MeanSquaredError(), "class_output": keras.losses.CategoricalCrossentropy(), }, metrics={ "score_output": [ keras.metrics.MeanAbsolutePercentageError(), keras.metrics.MeanAbsoluteError(), ], "class_output": [keras.metrics.CategoricalAccuracy()], }, loss_weights={"score_output": 2.0, "class_output": 1.0}, ) You could also choose not to compute a loss for certain outputs, if these outputs are meant for prediction but not for training: # List loss version model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss=[None, keras.losses.CategoricalCrossentropy()], ) # Or dict loss version model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss={"class_output": keras.losses.CategoricalCrossentropy()}, ) Passing data to a multi-input or multi-output model in fit() works in a similar way as specifying a loss function in compile: you can pass lists of NumPy arrays (with 1:1 mapping to the outputs that received a loss function) or dicts mapping output names to NumPy arrays. model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()], ) # Generate dummy NumPy data img_data = np.random.random_sample(size=(100, 32, 32, 3)) ts_data = np.random.random_sample(size=(100, 20, 10)) score_targets = np.random.random_sample(size=(100, 1)) class_targets = np.random.random_sample(size=(100, 5)) # Fit on lists model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1) # Alternatively, fit on dicts model.fit( {"img_input": img_data, "ts_input": ts_data}, {"score_output": score_targets, "class_output": class_targets}, batch_size=32, epochs=1, ) 4/4 [==============================] - 1s 5ms/step - loss: 13.0462 - score_output_loss: 2.7483 - class_output_loss: 10.2979 4/4 [==============================] - 0s 4ms/step - loss: 11.9004 - score_output_loss: 1.7583 - class_output_loss: 10.1420 Here's the Dataset use case: similarly as what we did for NumPy arrays, the Dataset should return a tuple of dicts. train_dataset = tf.data.Dataset.from_tensor_slices( ( {"img_input": img_data, "ts_input": ts_data}, {"score_output": score_targets, "class_output": class_targets}, ) ) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) model.fit(train_dataset, epochs=1) 2/2 [==============================] - 0s 6ms/step - loss: 11.5102 - score_output_loss: 1.3747 - class_output_loss: 10.1355 Using callbacks Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, etc.). They can be used to implement certain behaviors, such as: Doing validation at different points during training (beyond the built-in per-epoch validation) Checkpointing the model at regular intervals or when it exceeds a certain accuracy threshold Changing the learning rate of the model when training seems to be plateauing Doing fine-tuning of the top layers when training seems to be plateauing Sending email or instant message notifications when training ends or where a certain performance threshold is exceeded Etc. 
Callbacks can be passed as a list to your call to fit(): model = get_compiled_model() callbacks = [ keras.callbacks.EarlyStopping( # Stop training when `val_loss` is no longer improving monitor="val_loss", # "no longer improving" being defined as "no better than 1e-2 less" min_delta=1e-2, # "no longer improving" being further defined as "for at least 2 epochs" patience=2, verbose=1, ) ] model.fit( x_train, y_train, epochs=20, batch_size=64, callbacks=callbacks, validation_split=0.2, ) Epoch 1/20 625/625 [==============================] - 1s 1ms/step - loss: 0.6032 - sparse_categorical_accuracy: 0.8355 - val_loss: 0.2303 - val_sparse_categorical_accuracy: 0.9306 Epoch 2/20 625/625 [==============================] - 1s 1ms/step - loss: 0.1855 - sparse_categorical_accuracy: 0.9458 - val_loss: 0.1775 - val_sparse_categorical_accuracy: 0.9471 Epoch 3/20 625/625 [==============================] - 1s 1ms/step - loss: 0.1280 - sparse_categorical_accuracy: 0.9597 - val_loss: 0.1585 - val_sparse_categorical_accuracy: 0.9531 Epoch 4/20 625/625 [==============================] - 1s 1ms/step - loss: 0.0986 - sparse_categorical_accuracy: 0.9704 - val_loss: 0.1418 - val_sparse_categorical_accuracy: 0.9593 Epoch 5/20 625/625 [==============================] - 1s 1ms/step - loss: 0.0774 - sparse_categorical_accuracy: 0.9761 - val_loss: 0.1319 - val_sparse_categorical_accuracy: 0.9628 Epoch 6/20 625/625 [==============================] - 1s 1ms/step - loss: 0.0649 - sparse_categorical_accuracy: 0.9798 - val_loss: 0.1465 - val_sparse_categorical_accuracy: 0.9580 Epoch 00006: early stopping Many built-in callbacks are available There are many built-in callbacks already available in Keras, such as: ModelCheckpoint: Periodically save the model. EarlyStopping: Stop training when training is no longer improving the validation metrics. TensorBoard: periodically write model logs that can be visualized in TensorBoard (more details in the section "Visualization"). CSVLogger: streams loss and metrics data to a CSV file. etc. See the callbacks documentation for the complete list. Writing your own callback You can create a custom callback by extending the base class keras.callbacks.Callback. A callback has access to its associated model through the class property self.model. Make sure to read the complete guide to writing custom callbacks. Here's a simple example saving a list of per-batch loss values during training: class LossHistory(keras.callbacks.Callback): def on_train_begin(self, logs): self.per_batch_losses = [] def on_batch_end(self, batch, logs): self.per_batch_losses.append(logs.get("loss")) Checkpointing models When you're training model on relatively large datasets, it's crucial to save checkpoints of your model at frequent intervals. The easiest way to achieve this is with the ModelCheckpoint callback: model = get_compiled_model() callbacks = [ keras.callbacks.ModelCheckpoint( # Path where to save the model # The two parameters below mean that we will overwrite # the current checkpoint if and only if # the `val_loss` score has improved. # The saved model name will include the current epoch. filepath="mymodel_{epoch}", save_best_only=True, # Only save a model if `val_loss` has improved. 
monitor="val_loss", verbose=1, ) ] model.fit( x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2 ) Epoch 1/2 625/625 [==============================] - 1s 1ms/step - loss: 0.6380 - sparse_categorical_accuracy: 0.8226 - val_loss: 0.2283 - val_sparse_categorical_accuracy: 0.9317 Epoch 00001: val_loss improved from inf to 0.22825, saving model to mymodel_1 INFO:tensorflow:Assets written to: mymodel_1/assets Epoch 2/2 625/625 [==============================] - 1s 1ms/step - loss: 0.1787 - sparse_categorical_accuracy: 0.9466 - val_loss: 0.1877 - val_sparse_categorical_accuracy: 0.9440 Epoch 00002: val_loss improved from 0.22825 to 0.18768, saving model to mymodel_2 INFO:tensorflow:Assets written to: mymodel_2/assets The ModelCheckpoint callback can be used to implement fault-tolerance: the ability to restart training from the last saved state of the model in case training gets randomly interrupted. Here's a basic example: import os # Prepare a directory to store all the checkpoints. checkpoint_dir = "./ckpt" if not os.path.exists(checkpoint_dir): os.makedirs(checkpoint_dir) def make_or_restore_model(): # Either restore the latest model, or create a fresh one # if there is no checkpoint available. checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)] if checkpoints: latest_checkpoint = max(checkpoints, key=os.path.getctime) print("Restoring from", latest_checkpoint) return keras.models.load_model(latest_checkpoint) print("Creating a new model") return get_compiled_model() model = make_or_restore_model() callbacks = [ # This callback saves a SavedModel every 100 batches. # We include the training loss in the saved model name. keras.callbacks.ModelCheckpoint( filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100 ) ] model.fit(x_train, y_train, epochs=1, callbacks=callbacks) Creating a new model 98/1563 [>.............................] - ETA: 1s - loss: 1.3456 - sparse_categorical_accuracy: 0.6230INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.90/assets 155/1563 [=>............................] - ETA: 5s - loss: 1.1479 - sparse_categorical_accuracy: 0.6822INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.66/assets 248/1563 [===>..........................] - ETA: 5s - loss: 0.9643 - sparse_categorical_accuracy: 0.7340INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.57/assets 353/1563 [=====>........................] - ETA: 5s - loss: 0.8454 - sparse_categorical_accuracy: 0.7668INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.50/assets 449/1563 [=======>......................] - ETA: 5s - loss: 0.7714 - sparse_categorical_accuracy: 0.7870INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.47/assets 598/1563 [==========>...................] - ETA: 4s - loss: 0.6930 - sparse_categorical_accuracy: 0.8082INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.43/assets 658/1563 [===========>..................] - ETA: 4s - loss: 0.6685 - sparse_categorical_accuracy: 0.8148INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.41/assets 757/1563 [=============>................] - ETA: 4s - loss: 0.6340 - sparse_categorical_accuracy: 0.8241INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.39/assets 856/1563 [===============>..............] - ETA: 3s - loss: 0.6051 - sparse_categorical_accuracy: 0.8319INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.37/assets 956/1563 [=================>............] 
- ETA: 3s - loss: 0.5801 - sparse_categorical_accuracy: 0.8387INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.35/assets 1057/1563 [===================>..........] - ETA: 2s - loss: 0.5583 - sparse_categorical_accuracy: 0.8446INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.34/assets 1153/1563 [=====================>........] - ETA: 2s - loss: 0.5399 - sparse_categorical_accuracy: 0.8495INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.33/assets 1255/1563 [=======================>......] - ETA: 1s - loss: 0.5225 - sparse_categorical_accuracy: 0.8542INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.32/assets 1355/1563 [=========================>....] - ETA: 1s - loss: 0.5073 - sparse_categorical_accuracy: 0.8583INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.31/assets 1457/1563 [==========================>...] - ETA: 0s - loss: 0.4933 - sparse_categorical_accuracy: 0.8621INFO:tensorflow:Assets written to: ./ckpt/ckpt-loss=0.30/assets 1563/1563 [==============================] - 8s 5ms/step - loss: 0.4800 - sparse_categorical_accuracy: 0.8657 You can also write your own callback for saving and restoring models. For a complete guide on serialization and saving, see the guide to saving and serializing Models. Using learning rate schedules A common pattern when training deep learning models is to gradually reduce the learning rate as training progresses. This is generally known as "learning rate decay". The learning rate decay schedule could be static (fixed in advance, as a function of the current epoch or the current batch index), or dynamic (responding to the current behavior of the model, in particular the validation loss). Passing a schedule to an optimizer You can easily use a static learning rate decay schedule by passing a schedule object as the learning_rate argument in your optimizer: initial_learning_rate = 0.1 lr_schedule = keras.optimizers.schedules.ExponentialDecay( initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True ) optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule) Several built-in schedules are available: ExponentialDecay, PiecewiseConstantDecay, PolynomialDecay, and InverseTimeDecay. Using callbacks to implement a dynamic learning rate schedule A dynamic learning rate schedule (for instance, decreasing the learning rate when the validation loss is no longer improving) cannot be achieved with these schedule objects, since the optimizer does not have access to validation metrics. However, callbacks do have access to all metrics, including validation metrics! You can thus achieve this pattern by using a callback that modifies the current learning rate on the optimizer. In fact, this is even built-in as the ReduceLROnPlateau callback.
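As a concrete sketch of that last point, ReduceLROnPlateau can simply be added to the callbacks list passed to fit(); the factor and patience values below are illustrative choices, not recommendations from this guide:
model = get_compiled_model()

callbacks = [
    # Halve the learning rate whenever `val_loss` has not improved for 2 epochs.
    keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss", factor=0.5, patience=2, min_lr=1e-5, verbose=1
    )
]

model.fit(
    x_train,
    y_train,
    epochs=20,
    batch_size=64,
    callbacks=callbacks,
    validation_split=0.2,
)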
Visualizing loss and metrics during training The best way to keep an eye on your model during training is to use TensorBoard -- a browser-based application that you can run locally that provides you with: Live plots of the loss and metrics for training and evaluation (optionally) Visualizations of the histograms of your layer activations (optionally) 3D visualizations of the embedding spaces learned by your Embedding layers If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line: tensorboard --logdir=/full_path_to_your_logs Using the TensorBoard callback The easiest way to use TensorBoard with a Keras model and the fit() method is the TensorBoard callback.Customizing what happens in fit() Author: fchollet Date created: 2020/04/15 Last modified: 2020/04/15 Description: Complete guide to overriding the training step of the Model class. View in Colab • GitHub source Introduction When you're doing supervised learning, you can use fit() and everything works smoothly. When you need to write your own training loop from scratch, you can use the GradientTape and take control of every little detail. But what if you need a custom training algorithm, but you still want to benefit from the convenient features of fit(), such as callbacks, built-in distribution support, or step fusing? A core principle of Keras is progressive disclosure of complexity. You should always be able to get into lower-level workflows in a gradual way. You shouldn't fall off a cliff if the high-level functionality doesn't exactly match your use case. You should be able to gain more control over the small details while retaining a commensurate amount of high-level convenience. When you need to customize what fit() does, you should override the training step function of the Model class. This is the function that is called by fit() for every batch of data. You will then be able to call fit() as usual -- and it will be running your own learning algorithm. Note that this pattern does not prevent you from building models with the Functional API. You can do this whether you're building Sequential models, Functional API models, or subclassed models. Let's see how that works. Setup Requires TensorFlow 2.2 or later. import tensorflow as tf from tensorflow import keras A first simple example Let's start from a simple example: We create a new class that subclasses keras.Model. We just override the method train_step(self, data). We return a dictionary mapping metric names (including the loss) to their current value. The input argument data is what gets passed to fit as training data: If you pass Numpy arrays, by calling fit(x, y, ...), then data will be the tuple (x, y) If you pass a tf.data.Dataset, by calling fit(dataset, ...), then data will be what gets yielded by dataset at each batch. In the body of the train_step method, we implement a regular training update, similar to what you are already familiar with. Importantly, we compute the loss via self.compiled_loss, which wraps the loss(es) function(s) that were passed to compile(). Similarly, we call self.compiled_metrics.update_state(y, y_pred) to update the state of the metrics that were passed in compile(), and we query results from self.metrics at the end to retrieve their current value. class CustomModel(keras.Model): def train_step(self, data): # Unpack the data. Its structure depends on your model and # on what you pass to `fit()`. 
x, y = data with tf.GradientTape() as tape: y_pred = self(x, training=True) # Forward pass # Compute the loss value # (the loss function is configured in `compile()`) loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses) # Compute gradients trainable_vars = self.trainable_variables gradients = tape.gradient(loss, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Update metrics (includes the metric that tracks the loss) self.compiled_metrics.update_state(y, y_pred) # Return a dict mapping metric names to current value return {m.name: m.result() for m in self.metrics} Let's try this out: import numpy as np # Construct and compile an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) model.compile(optimizer="adam", loss="mse", metrics=["mae"]) # Just use `fit` as usual x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) model.fit(x, y, epochs=3) Epoch 1/3 32/32 [==============================] - 0s 721us/step - loss: 0.5791 - mae: 0.6232 Epoch 2/3 32/32 [==============================] - 0s 601us/step - loss: 0.2739 - mae: 0.4296 Epoch 3/3 32/32 [==============================] - 0s 576us/step - loss: 0.2547 - mae: 0.4078 Going lower-level Naturally, you could just skip passing a loss function in compile(), and instead do everything manually in train_step. Likewise for metrics. Here's a lower-level example that only uses compile() to configure the optimizer: We start by creating Metric instances to track our loss and a MAE score. We implement a custom train_step() that updates the state of these metrics (by calling update_state() on them), then query them (via result()) to return their current average value, to be displayed by the progress bar and to be passed to any callback. Note that we would need to call reset_states() on our metrics between each epoch! Otherwise calling result() would return an average since the start of training, whereas we usually work with per-epoch averages. Thankfully, the framework can do that for us: just list any metric you want to reset in the metrics property of the model. The model will call reset_states() on any object listed here at the beginning of each fit() epoch or at the beginning of a call to evaluate(). loss_tracker = keras.metrics.Mean(name="loss") mae_metric = keras.metrics.MeanAbsoluteError(name="mae") class CustomModel(keras.Model): def train_step(self, data): x, y = data with tf.GradientTape() as tape: y_pred = self(x, training=True) # Forward pass # Compute our own loss loss = keras.losses.mean_squared_error(y, y_pred) # Compute gradients trainable_vars = self.trainable_variables gradients = tape.gradient(loss, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Compute our own metrics loss_tracker.update_state(loss) mae_metric.update_state(y, y_pred) return {"loss": loss_tracker.result(), "mae": mae_metric.result()} @property def metrics(self): # We list our `Metric` objects here so that `reset_states()` can be # called automatically at the start of each epoch # or at the start of `evaluate()`. # If you don't implement this property, you have to call # `reset_states()` yourself at the time of your choosing. return [loss_tracker, mae_metric] # Construct an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) # We don't pass a loss or metrics here.
model.compile(optimizer="adam") # Just use `fit` as usual -- you can use callbacks, etc. x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) model.fit(x, y, epochs=5) Epoch 1/5 32/32 [==============================] - 0s 645us/step - loss: 0.2661 - mae: 0.4126 Epoch 2/5 32/32 [==============================] - 0s 515us/step - loss: 0.2401 - mae: 0.3932 Epoch 3/5 32/32 [==============================] - 0s 605us/step - loss: 0.2283 - mae: 0.3833 Epoch 4/5 32/32 [==============================] - 0s 508us/step - loss: 0.2176 - mae: 0.3742 Epoch 5/5 32/32 [==============================] - 0s 448us/step - loss: 0.2070 - mae: 0.3654 Supporting sample_weight & class_weight You may have noticed that our first basic example didn't make any mention of sample weighting. If you want to support the fit() arguments sample_weight and class_weight, you'd simply do the following: Unpack sample_weight from the data argument Pass it to compiled_loss & compiled_metrics (of course, you could also just apply it manually if you don't rely on compile() for losses & metrics) That's it. That's the list. class CustomModel(keras.Model): def train_step(self, data): # Unpack the data. Its structure depends on your model and # on what you pass to `fit()`. if len(data) == 3: x, y, sample_weight = data else: sample_weight = None x, y = data with tf.GradientTape() as tape: y_pred = self(x, training=True) # Forward pass # Compute the loss value. # The loss function is configured in `compile()`. loss = self.compiled_loss( y, y_pred, sample_weight=sample_weight, regularization_losses=self.losses, ) # Compute gradients trainable_vars = self.trainable_variables gradients = tape.gradient(loss, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Update the metrics. # Metrics are configured in `compile()`. self.compiled_metrics.update_state(y, y_pred, sample_weight=sample_weight) # Return a dict mapping metric names to current value. # Note that it will include the loss (tracked in self.metrics). return {m.name: m.result() for m in self.metrics} # Construct and compile an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) model.compile(optimizer="adam", loss="mse", metrics=["mae"]) # You can now use sample_weight argument x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) sw = np.random.random((1000, 1)) model.fit(x, y, sample_weight=sw, epochs=3) Epoch 1/3 32/32 [==============================] - 0s 709us/step - loss: 0.6128 - mae: 1.0027 Epoch 2/3 32/32 [==============================] - 0s 681us/step - loss: 0.2476 - mae: 0.6092 Epoch 3/3 32/32 [==============================] - 0s 669us/step - loss: 0.1248 - mae: 0.4186 Providing your own evaluation step What if you want to do the same for calls to model.evaluate()? Then you would override test_step in exactly the same way. Here's what it looks like: class CustomModel(keras.Model): def test_step(self, data): # Unpack the data x, y = data # Compute predictions y_pred = self(x, training=False) # Updates the metrics tracking the loss self.compiled_loss(y, y_pred, regularization_losses=self.losses) # Update the metrics. self.compiled_metrics.update_state(y, y_pred) # Return a dict mapping metric names to current value. # Note that it will include the loss (tracked in self.metrics). 
return {m.name: m.result() for m in self.metrics} # Construct an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) model.compile(loss="mse", metrics=["mae"]) # Evaluate with our custom test_step x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) model.evaluate(x, y) 32/32 [==============================] - 0s 578us/step - loss: 0.7436 - mae: 0.7455 [0.744135320186615, 0.7466798424720764] Wrapping up: an end-to-end GAN example Let's walk through an end-to-end example that leverages everything you just learned. Let's consider: A generator network meant to generate 28x28x1 images. A discriminator network meant to classify 28x28x1 images into two classes ("fake" and "real"). One optimizer for each. A loss function to train the discriminator. from tensorflow.keras import layers # Create the discriminator discriminator = keras.Sequential( [ keras.Input(shape=(28, 28, 1)), layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.GlobalMaxPooling2D(), layers.Dense(1), ], name="discriminator", ) # Create the generator latent_dim = 128 generator = keras.Sequential( [ keras.Input(shape=(latent_dim,)), # We want to generate 128 coefficients to reshape into a 7x7x128 map layers.Dense(7 * 7 * 128), layers.LeakyReLU(alpha=0.2), layers.Reshape((7, 7, 128)), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"), ], name="generator", ) Here's a feature-complete GAN class, overriding compile() to use its own signature, and implementing the entire GAN algorithm in 17 lines in train_step: class GAN(keras.Model): def __init__(self, discriminator, generator, latent_dim): super(GAN, self).__init__() self.discriminator = discriminator self.generator = generator self.latent_dim = latent_dim def compile(self, d_optimizer, g_optimizer, loss_fn): super(GAN, self).compile() self.d_optimizer = d_optimizer self.g_optimizer = g_optimizer self.loss_fn = loss_fn def train_step(self, real_images): if isinstance(real_images, tuple): real_images = real_images[0] # Sample random points in the latent space batch_size = tf.shape(real_images)[0] random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim)) # Decode them to fake images generated_images = self.generator(random_latent_vectors) # Combine them with real images combined_images = tf.concat([generated_images, real_images], axis=0) # Assemble labels discriminating real from fake images labels = tf.concat( [tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0 ) # Add random noise to the labels - important trick! 
labels += 0.05 * tf.random.uniform(tf.shape(labels)) # Train the discriminator with tf.GradientTape() as tape: predictions = self.discriminator(combined_images) d_loss = self.loss_fn(labels, predictions) grads = tape.gradient(d_loss, self.discriminator.trainable_weights) self.d_optimizer.apply_gradients( zip(grads, self.discriminator.trainable_weights) ) # Sample random points in the latent space random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim)) # Assemble labels that say "all real images" misleading_labels = tf.zeros((batch_size, 1)) # Train the generator (note that we should *not* update the weights # of the discriminator)! with tf.GradientTape() as tape: predictions = self.discriminator(self.generator(random_latent_vectors)) g_loss = self.loss_fn(misleading_labels, predictions) grads = tape.gradient(g_loss, self.generator.trainable_weights) self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights)) return {"d_loss": d_loss, "g_loss": g_loss} Let's test-drive it: # Prepare the dataset. We use both the training & test MNIST digits. batch_size = 64 (x_train, _), (x_test, _) = keras.datasets.mnist.load_data() all_digits = np.concatenate([x_train, x_test]) all_digits = all_digits.astype("float32") / 255.0 all_digits = np.reshape(all_digits, (-1, 28, 28, 1)) dataset = tf.data.Dataset.from_tensor_slices(all_digits) dataset = dataset.shuffle(buffer_size=1024).batch(batch_size) gan = GAN(discriminator=discriminator, generator=generator, latent_dim=latent_dim) gan.compile( d_optimizer=keras.optimizers.Adam(learning_rate=0.0003), g_optimizer=keras.optimizers.Adam(learning_rate=0.0003), loss_fn=keras.losses.BinaryCrossentropy(from_logits=True), ) # To limit the execution time, we only train on 100 batches. You can train on # the entire dataset. You will need about 20 epochs to get nice results. gan.fit(dataset.take(100), epochs=1) 100/100 [==============================] - 60s 591ms/step - d_loss: 0.4534 - g_loss: 0.9839 The ideas behind deep learning are simple, so why should their implementation be painful?Understanding masking & padding Authors: Scott Zhu, Francois Chollet Date created: 2019/07/16 Last modified: 2020/04/14 Description: Complete guide to using mask-aware sequence layers in Keras. View in Colab • GitHub source Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Introduction Masking is a way to tell sequence-processing layers that certain timesteps in an input are missing, and thus should be skipped when processing the data. Padding is a special form of masking where the masked steps are at the start or the end of a sequence. Padding comes from the need to encode sequence data into contiguous batches: in order to make all sequences in a batch fit a given standard length, it is necessary to pad or truncate some sequences. Let's take a close look. Padding sequence data When processing sequence data, it is very common for individual samples to have different lengths. Consider the following example (text tokenized as words): [ ["Hello", "world", "!"], ["How", "are", "you", "doing", "today"], ["The", "weather", "will", "be", "nice", "tomorrow"], ] After vocabulary lookup, the data might be vectorized as integers, e.g.: [ [71, 1331, 4231] [73, 8, 3215, 55, 927], [83, 91, 1, 645, 1253, 927], ] The data is a nested list where individual samples have length 3, 5, and 6, respectively. Since the input data for a deep learning model must be a single tensor (of shape e.g. 
(batch_size, 6, vocab_size) in this case), samples that are shorter than the longest item need to be padded with some placeholder value (alternatively, one might also truncate long samples before padding short samples). Keras provides a utility function to truncate and pad Python lists to a common length: tf.keras.preprocessing.sequence.pad_sequences. raw_inputs = [ [711, 632, 71], [73, 8, 3215, 55, 927], [83, 91, 1, 645, 1253, 927], ] # By default, this will pad using 0s; it is configurable via the # "value" parameter. # Note that you could "pre" padding (at the beginning) or # "post" padding (at the end). # We recommend using "post" padding when working with RNN layers # (in order to be able to use the # CuDNN implementation of the layers). padded_inputs = tf.keras.preprocessing.sequence.pad_sequences( raw_inputs, padding="post" ) print(padded_inputs) [[ 711 632 71 0 0 0] [ 73 8 3215 55 927 0] [ 83 91 1 645 1253 927]] Masking Now that all samples have a uniform length, the model must be informed that some part of the data is actually padding and should be ignored. That mechanism is masking. There are three ways to introduce input masks in Keras models: Add a keras.layers.Masking layer. Configure a keras.layers.Embedding layer with mask_zero=True. Pass a mask argument manually when calling layers that support this argument (e.g. RNN layers). Mask-generating layers: Embedding and Masking Under the hood, these layers will create a mask tensor (2D tensor with shape (batch, sequence_length)), and attach it to the tensor output returned by the Masking or Embedding layer. embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True) masked_output = embedding(padded_inputs) print(masked_output._keras_mask) masking_layer = layers.Masking() # Simulate the embedding lookup by expanding the 2D input to 3D, # with embedding dimension of 10. unmasked_embedding = tf.cast( tf.tile(tf.expand_dims(padded_inputs, axis=-1), [1, 1, 10]), tf.float32 ) masked_embedding = masking_layer(unmasked_embedding) print(masked_embedding._keras_mask) tf.Tensor( [[ True True True False False False] [ True True True True True False] [ True True True True True True]], shape=(3, 6), dtype=bool) tf.Tensor( [[ True True True False False False] [ True True True True True False] [ True True True True True True]], shape=(3, 6), dtype=bool) As you can see from the printed result, the mask is a 2D boolean tensor with shape (batch_size, sequence_length), where each individual False entry indicates that the corresponding timestep should be ignored during processing. Mask propagation in the Functional API and Sequential API When using the Functional API or the Sequential API, a mask generated by an Embedding or Masking layer will be propagated through the network for any layer that is capable of using them (for example, RNN layers). Keras will automatically fetch the mask corresponding to an input and pass it to any layer that knows how to use it. 
For instance, in the following Sequential model, the LSTM layer will automatically receive a mask, which means it will ignore padded values: model = keras.Sequential( [layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True), layers.LSTM(32),] ) This is also the case for the following Functional API model: inputs = keras.Input(shape=(None,), dtype="int32") x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs) outputs = layers.LSTM(32)(x) model = keras.Model(inputs, outputs) Passing mask tensors directly to layers Layers that can handle masks (such as the LSTM layer) have a mask argument in their __call__ method. Meanwhile, layers that produce a mask (e.g. Embedding) expose a compute_mask(input, previous_mask) method which you can call. Thus, you can pass the output of the compute_mask() method of a mask-producing layer to the __call__ method of a mask-consuming layer, like this: class MyLayer(layers.Layer): def __init__(self, **kwargs): super(MyLayer, self).__init__(**kwargs) self.embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True) self.lstm = layers.LSTM(32) def call(self, inputs): x = self.embedding(inputs) # Note that you could also prepare a `mask` tensor manually. # It only needs to be a boolean tensor # with the right shape, i.e. (batch_size, timesteps). mask = self.embedding.compute_mask(inputs) output = self.lstm(x, mask=mask) # The layer will ignore the masked values return output layer = MyLayer() x = np.random.random((32, 10)) * 100 x = x.astype("int32") layer(x) Supporting masking in your custom layers Sometimes, you may need to write layers that generate a mask (like Embedding), or layers that need to modify the current mask. For instance, any layer that produces a tensor with a different time dimension than its input, such as a Concatenate layer that concatenates on the time dimension, will need to modify the current mask so that downstream layers will be able to properly take masked timesteps into account. To do this, your layer should implement the layer.compute_mask() method, which produces a new mask given the input and the current mask. Here is an example of a TemporalSplit layer that needs to modify the current mask. class TemporalSplit(keras.layers.Layer): """Split the input tensor into 2 tensors along the time dimension.""" def call(self, inputs): # Expect the input to be 3D and mask to be 2D, split the input tensor into 2 # subtensors along the time axis (axis 1). return tf.split(inputs, 2, axis=1) def compute_mask(self, inputs, mask=None): # Also split the mask into 2 if it presents. 
if mask is None: return None return tf.split(mask, 2, axis=1) first_half, second_half = TemporalSplit()(masked_embedding) print(first_half._keras_mask) print(second_half._keras_mask) tf.Tensor( [[ True True True] [ True True True] [ True True True]], shape=(3, 3), dtype=bool) tf.Tensor( [[False False False] [ True True False] [ True True True]], shape=(3, 3), dtype=bool) Here is another example of a CustomEmbedding layer that is capable of generating a mask from input values: class CustomEmbedding(keras.layers.Layer): def __init__(self, input_dim, output_dim, mask_zero=False, **kwargs): super(CustomEmbedding, self).__init__(**kwargs) self.input_dim = input_dim self.output_dim = output_dim self.mask_zero = mask_zero def build(self, input_shape): self.embeddings = self.add_weight( shape=(self.input_dim, self.output_dim), initializer="random_normal", dtype="float32", ) def call(self, inputs): return tf.nn.embedding_lookup(self.embeddings, inputs) def compute_mask(self, inputs, mask=None): if not self.mask_zero: return None return tf.not_equal(inputs, 0) layer = CustomEmbedding(10, 32, mask_zero=True) x = np.random.random((3, 10)) * 9 x = x.astype("int32") y = layer(x) mask = layer.compute_mask(x) print(mask) tf.Tensor( [[ True True True True True True True True True True] [ True True True True True True True True True True] [ True True True True True True True True True True]], shape=(3, 10), dtype=bool) Opting-in to mask propagation on compatible layers Most layers don't modify the time dimension, so don't need to modify the current mask. However, they may still want to be able to propagate the current mask, unchanged, to the next layer. This is an opt-in behavior. By default, a custom layer will destroy the current mask (since the framework has no way to tell whether propagating the mask is safe to do). If you have a custom layer that does not modify the time dimension, and if you want it to be able to propagate the current input mask, you should set self.supports_masking = True in the layer constructor. In this case, the default behavior of compute_mask() is to just pass the current mask through. Here's an example of a layer that is whitelisted for mask propagation: class MyActivation(keras.layers.Layer): def __init__(self, **kwargs): super(MyActivation, self).__init__(**kwargs) # Signal that the layer is safe for mask propagation self.supports_masking = True def call(self, inputs): return tf.nn.relu(inputs) You can now use this custom layer in-between a mask-generating layer (like Embedding) and a mask-consuming layer (like LSTM), and it will pass the mask along so that it reaches the mask-consuming layer. inputs = keras.Input(shape=(None,), dtype="int32") x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs) x = MyActivation()(x) # Will pass the mask along print("Mask found:", x._keras_mask) outputs = layers.LSTM(32)(x) # Will receive the mask model = keras.Model(inputs, outputs) Mask found: Tensor("embedding_4/NotEqual:0", shape=(None, None), dtype=bool) Writing layers that need mask information Some layers are mask consumers: they accept a mask argument in call and use it to determine whether to skip certain time steps. To write such a layer, you can simply add a mask=None argument in your call signature. The mask associated with the inputs will be passed to your layer whenever it is available. Here's a simple example below: a layer that computes a softmax over the time dimension (axis 1) of an input sequence, while discarding masked timesteps. 
class TemporalSoftmax(keras.layers.Layer): def call(self, inputs, mask=None): broadcast_float_mask = tf.expand_dims(tf.cast(mask, "float32"), -1) inputs_exp = tf.exp(inputs) * broadcast_float_mask inputs_sum = tf.reduce_sum(inputs_exp, axis=1, keepdims=True) return inputs_exp / inputs_sum inputs = keras.Input(shape=(None,), dtype="int32") x = layers.Embedding(input_dim=10, output_dim=32, mask_zero=True)(inputs) x = layers.Dense(1)(x) outputs = TemporalSoftmax()(x) model = keras.Model(inputs, outputs) y = model(np.random.randint(0, 10, size=(32, 100))) Summary That is all you need to know about padding & masking in Keras. To recap: "Masking" is how layers are able to know when to skip / ignore certain timesteps in sequence inputs. Some layers are mask-generators: Embedding can generate a mask from input values (if mask_zero=True), and so can the Masking layer. Some layers are mask-consumers: they expose a mask argument in their __call__ method. This is the case for RNN layers. In the Functional API and Sequential API, mask information is propagated automatically. When using layers in a standalone way, you can pass the mask arguments to layers manually. You can easily write layers that modify the current mask, that generate a new mask, or that consume the mask associated with the inputs. The Sequential model Author: fchollet Date created: 2020/04/12 Last modified: 2020/04/12 Description: Complete guide to the Sequential model. View in Colab • GitHub source Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers When to use a Sequential model A Sequential model is appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor. Schematically, the following Sequential model: # Define Sequential model with 3 layers model = keras.Sequential( [ layers.Dense(2, activation="relu", name="layer1"), layers.Dense(3, activation="relu", name="layer2"), layers.Dense(4, name="layer3"), ] ) # Call model on a test input x = tf.ones((3, 3)) y = model(x) is equivalent to this function: # Create 3 layers layer1 = layers.Dense(2, activation="relu", name="layer1") layer2 = layers.Dense(3, activation="relu", name="layer2") layer3 = layers.Dense(4, name="layer3") # Call layers on a test input x = tf.ones((3, 3)) y = layer3(layer2(layer1(x))) A Sequential model is not appropriate when: Your model has multiple inputs or multiple outputs Any of your layers has multiple inputs or multiple outputs You need to do layer sharing You want non-linear topology (e.g. a residual connection, a multi-branch model) Creating a Sequential model You can create a Sequential model by passing a list of layers to the Sequential constructor: model = keras.Sequential( [ layers.Dense(2, activation="relu"), layers.Dense(3, activation="relu"), layers.Dense(4), ] ) Its layers are accessible via the layers attribute: model.layers You can also create a Sequential model incrementally via the add() method: model = keras.Sequential() model.add(layers.Dense(2, activation="relu")) model.add(layers.Dense(3, activation="relu")) model.add(layers.Dense(4)) Note that there's also a corresponding pop() method to remove layers: a Sequential model behaves very much like a list of layers. model.pop() print(len(model.layers)) # 2 2 Also note that the Sequential constructor accepts a name argument, just like any layer or model in Keras. This is useful to annotate TensorBoard graphs with semantically meaningful names.
model = keras.Sequential(name="my_sequential") model.add(layers.Dense(2, activation="relu", name="layer1")) model.add(layers.Dense(3, activation="relu", name="layer2")) model.add(layers.Dense(4, name="layer3")) Specifying the input shape in advance Generally, all layers in Keras need to know the shape of their inputs in order to be able to create their weights. So when you create a layer like this, initially, it has no weights: layer = layers.Dense(3) layer.weights # Empty [] It creates its weights the first time it is called on an input, since the shape of the weights depends on the shape of the inputs: # Call layer on a test input x = tf.ones((1, 4)) y = layer(x) layer.weights # Now it has weights, of shape (4, 3) and (3,) [, ] Naturally, this also applies to Sequential models. When you instantiate a Sequential model without an input shape, it isn't "built": it has no weights (and calling model.weights results in an error stating just this). The weights are created when the model first sees some input data: model = keras.Sequential( [ layers.Dense(2, activation="relu"), layers.Dense(3, activation="relu"), layers.Dense(4), ] ) # No weights at this stage! # At this point, you can't do this: # model.weights # You also can't do this: # model.summary() # Call the model on a test input x = tf.ones((1, 4)) y = model(x) print("Number of weights after calling the model:", len(model.weights)) # 6 Number of weights after calling the model: 6 Once a model is "built", you can call its summary() method to display its contents: model.summary() Model: "sequential_3" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_7 (Dense) (1, 2) 10 _________________________________________________________________ dense_8 (Dense) (1, 3) 9 _________________________________________________________________ dense_9 (Dense) (1, 4) 16 ================================================================= Total params: 35 Trainable params: 35 Non-trainable params: 0 _________________________________________________________________ However, it can be very useful when building a Sequential model incrementally to be able to display the summary of the model so far, including the current output shape. 
In this case, you should start your model by passing an Input object to your model, so that it knows its input shape from the start: model = keras.Sequential() model.add(keras.Input(shape=(4,))) model.add(layers.Dense(2, activation="relu")) model.summary() Model: "sequential_4" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_10 (Dense) (None, 2) 10 ================================================================= Total params: 10 Trainable params: 10 Non-trainable params: 0 _________________________________________________________________ Note that the Input object is not displayed as part of model.layers, since it isn't a layer: model.layers [] A simple alternative is to just pass an input_shape argument to your first layer: model = keras.Sequential() model.add(layers.Dense(2, activation="relu", input_shape=(4,))) model.summary() Model: "sequential_5" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_11 (Dense) (None, 2) 10 ================================================================= Total params: 10 Trainable params: 10 Non-trainable params: 0 _________________________________________________________________ Models built with a predefined input shape like this always have weights (even before seeing any data) and always have a defined output shape. In general, it's a recommended best practice to always specify the input shape of a Sequential model in advance if you know what it is. A common debugging workflow: add() + summary() When building a new Sequential architecture, it's useful to incrementally stack layers with add() and frequently print model summaries. For instance, this enables you to monitor how a stack of Conv2D and MaxPooling2D layers is downsampling image feature maps: model = keras.Sequential() model.add(keras.Input(shape=(250, 250, 3))) # 250x250 RGB images model.add(layers.Conv2D(32, 5, strides=2, activation="relu")) model.add(layers.Conv2D(32, 3, activation="relu")) model.add(layers.MaxPooling2D(3)) # Can you guess what the current output shape is at this point? Probably not. # Let's just print it: model.summary() # The answer was: (40, 40, 32), so we can keep downsampling... model.add(layers.Conv2D(32, 3, activation="relu")) model.add(layers.Conv2D(32, 3, activation="relu")) model.add(layers.MaxPooling2D(3)) model.add(layers.Conv2D(32, 3, activation="relu")) model.add(layers.Conv2D(32, 3, activation="relu")) model.add(layers.MaxPooling2D(2)) # And now? model.summary() # Now that we have 4x4 feature maps, time to apply global max pooling. model.add(layers.GlobalMaxPooling2D()) # Finally, we add a classification layer. 
model.add(layers.Dense(10)) Model: "sequential_6" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 123, 123, 32) 2432 _________________________________________________________________ conv2d_1 (Conv2D) (None, 121, 121, 32) 9248 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 40, 40, 32) 0 ================================================================= Total params: 11,680 Trainable params: 11,680 Non-trainable params: 0 _________________________________________________________________ Model: "sequential_6" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 123, 123, 32) 2432 _________________________________________________________________ conv2d_1 (Conv2D) (None, 121, 121, 32) 9248 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 40, 40, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 38, 38, 32) 9248 _________________________________________________________________ conv2d_3 (Conv2D) (None, 36, 36, 32) 9248 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 12, 12, 32) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 10, 10, 32) 9248 _________________________________________________________________ conv2d_5 (Conv2D) (None, 8, 8, 32) 9248 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 4, 4, 32) 0 ================================================================= Total params: 48,672 Trainable params: 48,672 Non-trainable params: 0 _________________________________________________________________ Very practical, right? What to do once you have a model Once your model architecture is ready, you will want to: Train your model, evaluate it, and run inference. See our guide to training & evaluation with the built-in loops Save your model to disk and restore it. See our guide to serialization & saving. Speed up model training by leveraging multiple GPUs. See our guide to multi-GPU and distributed training. Feature extraction with a Sequential model Once a Sequential model has been built, it behaves like a Functional API model. This means that every layer has an input and output attribute. These attributes can be used to do neat things, like quickly creating a model that extracts the outputs of all intermediate layers in a Sequential model: initial_model = keras.Sequential( [ keras.Input(shape=(250, 250, 3)), layers.Conv2D(32, 5, strides=2, activation="relu"), layers.Conv2D(32, 3, activation="relu"), layers.Conv2D(32, 3, activation="relu"), ] ) feature_extractor = keras.Model( inputs=initial_model.inputs, outputs=[layer.output for layer in initial_model.layers], ) # Call feature extractor on test input. 
x = tf.ones((1, 250, 250, 3)) features = feature_extractor(x) Here's a similar example that only extracts features from one layer: initial_model = keras.Sequential( [ keras.Input(shape=(250, 250, 3)), layers.Conv2D(32, 5, strides=2, activation="relu"), layers.Conv2D(32, 3, activation="relu", name="my_intermediate_layer"), layers.Conv2D(32, 3, activation="relu"), ] ) feature_extractor = keras.Model( inputs=initial_model.inputs, outputs=initial_model.get_layer(name="my_intermediate_layer").output, ) # Call feature extractor on test input. x = tf.ones((1, 250, 250, 3)) features = feature_extractor(x) Transfer learning with a Sequential model Transfer learning consists of freezing the bottom layers in a model and only training the top layers. If you aren't familiar with it, make sure to read our guide to transfer learning. Here are two common transfer learning blueprints involving Sequential models. First, let's say that you have a Sequential model, and you want to freeze all layers except the last one. In this case, you would simply iterate over model.layers and set layer.trainable = False on each layer, except the last one. Like this: model = keras.Sequential([ keras.Input(shape=(784,)), layers.Dense(32, activation='relu'), layers.Dense(32, activation='relu'), layers.Dense(32, activation='relu'), layers.Dense(10), ]) # Presumably you would want to first load pre-trained weights. model.load_weights(...) # Freeze all layers except the last one. for layer in model.layers[:-1]: layer.trainable = False # Recompile and train (this will only update the weights of the last layer). model.compile(...) model.fit(...) Another common blueprint is to use a Sequential model to stack a pre-trained model and some freshly initialized classification layers. Like this: # Load a convolutional base with pre-trained weights base_model = keras.applications.Xception( weights='imagenet', include_top=False, pooling='avg') # Freeze the base model base_model.trainable = False # Use a Sequential model to add a trainable classifier on top model = keras.Sequential([ base_model, layers.Dense(1000), ]) # Compile & train model.compile(...) model.fit(...) If you do transfer learning, you will probably find yourself frequently using these two patterns. That's about all you need to know about Sequential models! To find out more about building models in Keras, see: Guide to the Functional API Guide to making new Layers & Models via subclassing The Functional API Author: fchollet Date created: 2019/03/01 Last modified: 2020/04/12 Description: Complete guide to the functional API. View in Colab • GitHub source Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Introduction The Keras functional API is a way to create models that are more flexible than the tf.keras.Sequential API. The functional API can handle models with non-linear topology, shared layers, and even multiple inputs or outputs. The main idea is that a deep learning model is usually a directed acyclic graph (DAG) of layers. So the functional API is a way to build graphs of layers. Consider the following model: (input: 784-dimensional vectors) [Dense (64 units, relu activation)] [Dense (64 units, relu activation)] [Dense (10 units, softmax activation)] (output: logits of a probability distribution over 10 classes) This is a basic graph with three layers.
To build this model using the functional API, start by creating an input node: inputs = keras.Input(shape=(784,)) The shape of the data is set as a 784-dimensional vector. The batch size is always omitted since only the shape of each sample is specified. If, for example, you have an image input with a shape of (32, 32, 3), you would use: # Just for demonstration purposes. img_inputs = keras.Input(shape=(32, 32, 3)) The inputs that is returned contains information about the shape and dtype of the input data that you feed to your model. Here's the shape: inputs.shape TensorShape([None, 784]) Here's the dtype: inputs.dtype tf.float32 You create a new node in the graph of layers by calling a layer on this inputs object: dense = layers.Dense(64, activation="relu") x = dense(inputs) The "layer call" action is like drawing an arrow from "inputs" to this layer you created. You're "passing" the inputs to the dense layer, and you get x as the output. Let's add a few more layers to the graph of layers: x = layers.Dense(64, activation="relu")(x) outputs = layers.Dense(10)(x) At this point, you can create a Model by specifying its inputs and outputs in the graph of layers: model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model") Let's check out what the model summary looks like: model.summary() Model: "mnist_model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 784)] 0 _________________________________________________________________ dense (Dense) (None, 64) 50240 _________________________________________________________________ dense_1 (Dense) (None, 64) 4160 _________________________________________________________________ dense_2 (Dense) (None, 10) 650 ================================================================= Total params: 55,050 Trainable params: 55,050 Non-trainable params: 0 _________________________________________________________________ You can also plot the model as a graph: keras.utils.plot_model(model, "my_first_model.png") png And, optionally, display the input and output shapes of each layer in the plotted graph: keras.utils.plot_model(model, "my_first_model_with_shape_info.png", show_shapes=True) png This figure and the code are almost identical. In the code version, the connection arrows are replaced by the call operation. A "graph of layers" is an intuitive mental image for a deep learning model, and the functional API is a way to create models that closely mirrors this. Training, evaluation, and inference Training, evaluation, and inference work exactly in the same way for models built using the functional API as for Sequential models. The Model class offers a built-in training loop (the fit() method) and a built-in evaluation loop (the evaluate() method). Note that you can easily customize these loops to implement training routines beyond supervised learning (e.g. GANs). 
Here, load the MNIST image data, reshape it into vectors, fit the model on the data (while monitoring performance on a validation split), then evaluate the model on the test data: (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() x_train = x_train.reshape(60000, 784).astype("float32") / 255 x_test = x_test.reshape(10000, 784).astype("float32") / 255 model.compile( loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer=keras.optimizers.RMSprop(), metrics=["accuracy"], ) history = model.fit(x_train, y_train, batch_size=64, epochs=2, validation_split=0.2) test_scores = model.evaluate(x_test, y_test, verbose=2) print("Test loss:", test_scores[0]) print("Test accuracy:", test_scores[1]) Epoch 1/2 750/750 [==============================] - 2s 2ms/step - loss: 0.5648 - accuracy: 0.8473 - val_loss: 0.1793 - val_accuracy: 0.9474 Epoch 2/2 750/750 [==============================] - 1s 1ms/step - loss: 0.1686 - accuracy: 0.9506 - val_loss: 0.1398 - val_accuracy: 0.9576 313/313 - 0s - loss: 0.1401 - accuracy: 0.9580 Test loss: 0.14005452394485474 Test accuracy: 0.9580000042915344 For further reading, see the training and evaluation guide. Save and serialize Saving the model and serialization work the same way for models built using the functional API as they do for Sequential models. The standard way to save a functional model is to call model.save() to save the entire model as a single file. You can later recreate the same model from this file, even if the code that built the model is no longer available. This saved file includes the: - model architecture - model weight values (that were learned during training) - model training config, if any (as passed to compile) - optimizer and its state, if any (to restart training where you left off) model.save("path_to_my_model") del model # Recreate the exact same model purely from the file: model = keras.models.load_model("path_to_my_model") INFO:tensorflow:Assets written to: path_to_my_model/assets For details, read the model serialization & saving guide. Use the same graph of layers to define multiple models In the functional API, models are created by specifying their inputs and outputs in a graph of layers. That means that a single graph of layers can be used to generate multiple models. In the example below, you use the same stack of layers to instantiate two models: an encoder model that turns image inputs into 16-dimensional vectors, and an end-to-end autoencoder model for training. 
encoder_input = keras.Input(shape=(28, 28, 1), name="img") x = layers.Conv2D(16, 3, activation="relu")(encoder_input) x = layers.Conv2D(32, 3, activation="relu")(x) x = layers.MaxPooling2D(3)(x) x = layers.Conv2D(32, 3, activation="relu")(x) x = layers.Conv2D(16, 3, activation="relu")(x) encoder_output = layers.GlobalMaxPooling2D()(x) encoder = keras.Model(encoder_input, encoder_output, name="encoder") encoder.summary() x = layers.Reshape((4, 4, 1))(encoder_output) x = layers.Conv2DTranspose(16, 3, activation="relu")(x) x = layers.Conv2DTranspose(32, 3, activation="relu")(x) x = layers.UpSampling2D(3)(x) x = layers.Conv2DTranspose(16, 3, activation="relu")(x) decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x) autoencoder = keras.Model(encoder_input, decoder_output, name="autoencoder") autoencoder.summary() Model: "encoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= img (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 26, 26, 16) 160 _________________________________________________________________ conv2d_1 (Conv2D) (None, 24, 24, 32) 4640 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 8, 8, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 6, 6, 32) 9248 _________________________________________________________________ conv2d_3 (Conv2D) (None, 4, 4, 16) 4624 _________________________________________________________________ global_max_pooling2d (Global (None, 16) 0 ================================================================= Total params: 18,672 Trainable params: 18,672 Non-trainable params: 0 _________________________________________________________________ Model: "autoencoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= img (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 26, 26, 16) 160 _________________________________________________________________ conv2d_1 (Conv2D) (None, 24, 24, 32) 4640 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 8, 8, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 6, 6, 32) 9248 _________________________________________________________________ conv2d_3 (Conv2D) (None, 4, 4, 16) 4624 _________________________________________________________________ global_max_pooling2d (Global (None, 16) 0 _________________________________________________________________ reshape (Reshape) (None, 4, 4, 1) 0 _________________________________________________________________ conv2d_transpose (Conv2DTran (None, 6, 6, 16) 160 _________________________________________________________________ conv2d_transpose_1 (Conv2DTr (None, 8, 8, 32) 4640 _________________________________________________________________ up_sampling2d (UpSampling2D) (None, 24, 24, 32) 0 _________________________________________________________________ conv2d_transpose_2 (Conv2DTr (None, 26, 26, 16) 4624 _________________________________________________________________ conv2d_transpose_3 (Conv2DTr (None, 28, 28, 1) 145 ================================================================= Total params: 28,241 
Trainable params: 28,241 Non-trainable params: 0 _________________________________________________________________ Here, the decoding architecture is strictly symmetrical to the encoding architecture, so the output shape is the same as the input shape (28, 28, 1). The reverse of a Conv2D layer is a Conv2DTranspose layer, and the reverse of a MaxPooling2D layer is an UpSampling2D layer. All models are callable, just like layers You can treat any model as if it were a layer by invoking it on an Input or on the output of another layer. By calling a model you aren't just reusing the architecture of the model, you're also reusing its weights. To see this in action, here's a different take on the autoencoder example that creates an encoder model, a decoder model, and chains them in two calls to obtain the autoencoder model: encoder_input = keras.Input(shape=(28, 28, 1), name="original_img") x = layers.Conv2D(16, 3, activation="relu")(encoder_input) x = layers.Conv2D(32, 3, activation="relu")(x) x = layers.MaxPooling2D(3)(x) x = layers.Conv2D(32, 3, activation="relu")(x) x = layers.Conv2D(16, 3, activation="relu")(x) encoder_output = layers.GlobalMaxPooling2D()(x) encoder = keras.Model(encoder_input, encoder_output, name="encoder") encoder.summary() decoder_input = keras.Input(shape=(16,), name="encoded_img") x = layers.Reshape((4, 4, 1))(decoder_input) x = layers.Conv2DTranspose(16, 3, activation="relu")(x) x = layers.Conv2DTranspose(32, 3, activation="relu")(x) x = layers.UpSampling2D(3)(x) x = layers.Conv2DTranspose(16, 3, activation="relu")(x) decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x) decoder = keras.Model(decoder_input, decoder_output, name="decoder") decoder.summary() autoencoder_input = keras.Input(shape=(28, 28, 1), name="img") encoded_img = encoder(autoencoder_input) decoded_img = decoder(encoded_img) autoencoder = keras.Model(autoencoder_input, decoded_img, name="autoencoder") autoencoder.summary() Model: "encoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= original_img (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 26, 26, 16) 160 _________________________________________________________________ conv2d_5 (Conv2D) (None, 24, 24, 32) 4640 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 8, 8, 32) 0 _________________________________________________________________ conv2d_6 (Conv2D) (None, 6, 6, 32) 9248 _________________________________________________________________ conv2d_7 (Conv2D) (None, 4, 4, 16) 4624 _________________________________________________________________ global_max_pooling2d_1 (Glob (None, 16) 0 ================================================================= Total params: 18,672 Trainable params: 18,672 Non-trainable params: 0 _________________________________________________________________ Model: "decoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= encoded_img (InputLayer) [(None, 16)] 0 _________________________________________________________________ reshape_1 (Reshape) (None, 4, 4, 1) 0 _________________________________________________________________ conv2d_transpose_4 (Conv2DTr (None, 6, 6, 16) 160 _________________________________________________________________ 
conv2d_transpose_5 (Conv2DTr (None, 8, 8, 32) 4640 _________________________________________________________________ up_sampling2d_1 (UpSampling2 (None, 24, 24, 32) 0 _________________________________________________________________ conv2d_transpose_6 (Conv2DTr (None, 26, 26, 16) 4624 _________________________________________________________________ conv2d_transpose_7 (Conv2DTr (None, 28, 28, 1) 145 ================================================================= Total params: 9,569 Trainable params: 9,569 Non-trainable params: 0 _________________________________________________________________ Model: "autoencoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= img (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ encoder (Functional) (None, 16) 18672 _________________________________________________________________ decoder (Functional) (None, 28, 28, 1) 9569 ================================================================= Total params: 28,241 Trainable params: 28,241 Non-trainable params: 0 _________________________________________________________________ As you can see, the model can be nested: a model can contain sub-models (since a model is just like a layer). A common use case for model nesting is ensembling. For example, here's how to ensemble a set of models into a single model that averages their predictions: def get_model(): inputs = keras.Input(shape=(128,)) outputs = layers.Dense(1)(inputs) return keras.Model(inputs, outputs) model1 = get_model() model2 = get_model() model3 = get_model() inputs = keras.Input(shape=(128,)) y1 = model1(inputs) y2 = model2(inputs) y3 = model3(inputs) outputs = layers.average([y1, y2, y3]) ensemble_model = keras.Model(inputs=inputs, outputs=outputs) Manipulate complex graph topologies Models with multiple inputs and outputs The functional API makes it easy to manipulate multiple inputs and outputs. This cannot be handled with the Sequential API. For example, if you're building a system for ranking customer issue tickets by priority and routing them to the correct department, then the model will have three inputs: the title of the ticket (text input), the text body of the ticket (text input), and any tags added by the user (categorical input) This model will have two outputs: the priority score between 0 and 1 (scalar sigmoid output), and the department that should handle the ticket (softmax output over the set of departments). 
You can build this model in a few lines with the functional API: num_tags = 12 # Number of unique issue tags num_words = 10000 # Size of vocabulary obtained when preprocessing text data num_departments = 4 # Number of departments for predictions title_input = keras.Input( shape=(None,), name="title" ) # Variable-length sequence of ints body_input = keras.Input(shape=(None,), name="body") # Variable-length sequence of ints tags_input = keras.Input( shape=(num_tags,), name="tags" ) # Binary vectors of size `num_tags` # Embed each word in the title into a 64-dimensional vector title_features = layers.Embedding(num_words, 64)(title_input) # Embed each word in the text into a 64-dimensional vector body_features = layers.Embedding(num_words, 64)(body_input) # Reduce sequence of embedded words in the title into a single 128-dimensional vector title_features = layers.LSTM(128)(title_features) # Reduce sequence of embedded words in the body into a single 32-dimensional vector body_features = layers.LSTM(32)(body_features) # Merge all available features into a single large vector via concatenation x = layers.concatenate([title_features, body_features, tags_input]) # Stick a logistic regression for priority prediction on top of the features priority_pred = layers.Dense(1, name="priority")(x) # Stick a department classifier on top of the features department_pred = layers.Dense(num_departments, name="department")(x) # Instantiate an end-to-end model predicting both priority and department model = keras.Model( inputs=[title_input, body_input, tags_input], outputs=[priority_pred, department_pred], ) Now plot the model: keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True) png When compiling this model, you can assign different losses to each output. You can even assign different weights to each loss -- to modulate their contribution to the total training loss. 
model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss=[ keras.losses.BinaryCrossentropy(from_logits=True), keras.losses.CategoricalCrossentropy(from_logits=True), ], loss_weights=[1.0, 0.2], ) Since the output layers have different names, you could also specify the loss like this: model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss={ "priority": keras.losses.BinaryCrossentropy(from_logits=True), "department": keras.losses.CategoricalCrossentropy(from_logits=True), }, loss_weights=[1.0, 0.2], ) Train the model by passing lists of NumPy arrays of inputs and targets: # Dummy input data title_data = np.random.randint(num_words, size=(1280, 10)) body_data = np.random.randint(num_words, size=(1280, 100)) tags_data = np.random.randint(2, size=(1280, num_tags)).astype("float32") # Dummy target data priority_targets = np.random.random(size=(1280, 1)) dept_targets = np.random.randint(2, size=(1280, num_departments)) model.fit( {"title": title_data, "body": body_data, "tags": tags_data}, {"priority": priority_targets, "department": dept_targets}, epochs=2, batch_size=32, ) Epoch 1/2 40/40 [==============================] - 3s 21ms/step - loss: 1.2713 - priority_loss: 0.7000 - department_loss: 2.8567 Epoch 2/2 40/40 [==============================] - 1s 22ms/step - loss: 1.2947 - priority_loss: 0.6990 - department_loss: 2.9786 When calling fit with a Dataset object, it should yield either a tuple of lists like ([title_data, body_data, tags_data], [priority_targets, dept_targets]) or a tuple of dictionaries like ({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets}). For more detailed explanation, refer to the training and evaluation guide. A toy ResNet model In addition to models with multiple inputs and outputs, the functional API makes it easy to manipulate non-linear connectivity topologies -- these are models with layers that are not connected sequentially, which the Sequential API cannot handle. A common use case for this is residual connections. 
Let's build a toy ResNet model for CIFAR10 to demonstrate this: inputs = keras.Input(shape=(32, 32, 3), name="img") x = layers.Conv2D(32, 3, activation="relu")(inputs) x = layers.Conv2D(64, 3, activation="relu")(x) block_1_output = layers.MaxPooling2D(3)(x) x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_1_output) x = layers.Conv2D(64, 3, activation="relu", padding="same")(x) block_2_output = layers.add([x, block_1_output]) x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_2_output) x = layers.Conv2D(64, 3, activation="relu", padding="same")(x) block_3_output = layers.add([x, block_2_output]) x = layers.Conv2D(64, 3, activation="relu")(block_3_output) x = layers.GlobalAveragePooling2D()(x) x = layers.Dense(256, activation="relu")(x) x = layers.Dropout(0.5)(x) outputs = layers.Dense(10)(x) model = keras.Model(inputs, outputs, name="toy_resnet") model.summary() Model: "toy_resnet" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== img (InputLayer) [(None, 32, 32, 3)] 0 __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 30, 30, 32) 896 img[0][0] __________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 28, 28, 64) 18496 conv2d_8[0][0] __________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 9, 9, 64) 0 conv2d_9[0][0] __________________________________________________________________________________________________ conv2d_10 (Conv2D) (None, 9, 9, 64) 36928 max_pooling2d_2[0][0] __________________________________________________________________________________________________ conv2d_11 (Conv2D) (None, 9, 9, 64) 36928 conv2d_10[0][0] __________________________________________________________________________________________________ add (Add) (None, 9, 9, 64) 0 conv2d_11[0][0] max_pooling2d_2[0][0] __________________________________________________________________________________________________ conv2d_12 (Conv2D) (None, 9, 9, 64) 36928 add[0][0] __________________________________________________________________________________________________ conv2d_13 (Conv2D) (None, 9, 9, 64) 36928 conv2d_12[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, 9, 9, 64) 0 conv2d_13[0][0] add[0][0] __________________________________________________________________________________________________ conv2d_14 (Conv2D) (None, 7, 7, 64) 36928 add_1[0][0] __________________________________________________________________________________________________ global_average_pooling2d (Globa (None, 64) 0 conv2d_14[0][0] __________________________________________________________________________________________________ dense_6 (Dense) (None, 256) 16640 global_average_pooling2d[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 256) 0 dense_6[0][0] __________________________________________________________________________________________________ dense_7 (Dense) (None, 10) 2570 dropout[0][0] ================================================================================================== Total params: 223,242 Trainable params: 223,242 
Non-trainable params: 0 __________________________________________________________________________________________________ Plot the model: keras.utils.plot_model(model, "mini_resnet.png", show_shapes=True) png Now train the model: (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() x_train = x_train.astype("float32") / 255.0 x_test = x_test.astype("float32") / 255.0 y_train = keras.utils.to_categorical(y_train, 10) y_test = keras.utils.to_categorical(y_test, 10) model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss=keras.losses.CategoricalCrossentropy(from_logits=True), metrics=["acc"], ) # We restrict the data to the first 1000 samples so as to limit execution time # on Colab. Try to train on the entire dataset until convergence! model.fit(x_train[:1000], y_train[:1000], batch_size=64, epochs=1, validation_split=0.2) 13/13 [==============================] - 2s 103ms/step - loss: 2.3218 - acc: 0.1291 - val_loss: 2.3014 - val_acc: 0.1150 Shared layers Another good use for the functional API are models that use shared layers. Shared layers are layer instances that are reused multiple times in the same model -- they learn features that correspond to multiple paths in the graph-of-layers. Shared layers are often used to encode inputs from similar spaces (say, two different pieces of text that feature similar vocabulary). They enable sharing of information across these different inputs, and they make it possible to train such a model on less data. If a given word is seen in one of the inputs, that will benefit the processing of all inputs that pass through the shared layer. To share a layer in the functional API, call the same layer instance multiple times. For instance, here's an Embedding layer shared across two different text inputs: # Embedding for 1000 unique words mapped to 128-dimensional vectors shared_embedding = layers.Embedding(1000, 128) # Variable-length sequence of integers text_input_a = keras.Input(shape=(None,), dtype="int32") # Variable-length sequence of integers text_input_b = keras.Input(shape=(None,), dtype="int32") # Reuse the same layer to encode both inputs encoded_input_a = shared_embedding(text_input_a) encoded_input_b = shared_embedding(text_input_b) Extract and reuse nodes in the graph of layers Because the graph of layers you are manipulating is a static data structure, it can be accessed and inspected. And this is how you are able to plot functional models as images. This also means that you can access the activations of intermediate layers ("nodes" in the graph) and reuse them elsewhere -- which is very useful for something like feature extraction. Let's look at an example. This is a VGG19 model with weights pretrained on ImageNet: vgg19 = tf.keras.applications.VGG19() And these are the intermediate activations of the model, obtained by querying the graph data structure: features_list = [layer.output for layer in vgg19.layers] Use these features to create a new feature-extraction model that returns the values of the intermediate layer activations: feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list) img = np.random.random((1, 224, 224, 3)).astype("float32") extracted_features = feat_extraction_model(img) This comes in handy for tasks like neural style transfer, among other things. 
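For instance, here's a hedged sketch that builds a feature extractor from a single intermediate layer looked up by name, reusing the vgg19 model instantiated above (standard VGG19 layer names such as "block4_pool" are assumed; pick whichever node you want to reuse):

# A minimal sketch: extract the activations of one named intermediate layer.
block4_pool_output = vgg19.get_layer("block4_pool").output
partial_extractor = keras.Model(inputs=vgg19.input, outputs=block4_pool_output)

img = np.random.random((1, 224, 224, 3)).astype("float32")
block4_pool_features = partial_extractor(img)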
Extend the API using custom layers tf.keras includes a wide range of built-in layers, for example: Convolutional layers: Conv1D, Conv2D, Conv3D, Conv2DTranspose Pooling layers: MaxPooling1D, MaxPooling2D, MaxPooling3D, AveragePooling1D RNN layers: GRU, LSTM, ConvLSTM2D BatchNormalization, Dropout, Embedding, etc. But if you don't find what you need, it's easy to extend the API by creating your own layers. All layers subclass the Layer class and implement: call method, that specifies the computation done by the layer. build method, that creates the weights of the layer (this is just a style convention since you can create weights in __init__, as well). To learn more about creating layers from scratch, read custom layers and models guide. The following is a basic implementation of tf.keras.layers.Dense: class CustomDense(layers.Layer): def __init__(self, units=32): super(CustomDense, self).__init__() self.units = units def build(self, input_shape): self.w = self.add_weight( shape=(input_shape[-1], self.units), initializer="random_normal", trainable=True, ) self.b = self.add_weight( shape=(self.units,), initializer="random_normal", trainable=True ) def call(self, inputs): return tf.matmul(inputs, self.w) + self.b inputs = keras.Input((4,)) outputs = CustomDense(10)(inputs) model = keras.Model(inputs, outputs) For serialization support in your custom layer, define a get_config method that returns the constructor arguments of the layer instance: class CustomDense(layers.Layer): def __init__(self, units=32): super(CustomDense, self).__init__() self.units = units def build(self, input_shape): self.w = self.add_weight( shape=(input_shape[-1], self.units), initializer="random_normal", trainable=True, ) self.b = self.add_weight( shape=(self.units,), initializer="random_normal", trainable=True ) def call(self, inputs): return tf.matmul(inputs, self.w) + self.b def get_config(self): return {"units": self.units} inputs = keras.Input((4,)) outputs = CustomDense(10)(inputs) model = keras.Model(inputs, outputs) config = model.get_config() new_model = keras.Model.from_config(config, custom_objects={"CustomDense": CustomDense}) Optionally, implement the class method from_config(cls, config) which is used when recreating a layer instance given its config dictionary. The default implementation of from_config is: def from_config(cls, config): return cls(**config) When to use the functional API Should you use the Keras functional API to create a new model, or just subclass the Model class directly? In general, the functional API is higher-level, easier and safer, and has a number of features that subclassed models do not support. However, model subclassing provides greater flexibility when building models that are not easily expressible as directed acyclic graphs of layers. For example, you could not implement a Tree-RNN with the functional API and would have to subclass Model directly. For an in-depth look at the differences between the functional API and model subclassing, read What are Symbolic and Imperative APIs in TensorFlow 2.0?. Functional API strengths: The following properties are also true for Sequential models (which are also data structures), but are not true for subclassed models (which are Python bytecode, not data structures). Less verbose There is no super(MyClass, self).__init__(...), no def call(self, ...):, etc. 
Compare: inputs = keras.Input(shape=(32,)) x = layers.Dense(64, activation='relu')(inputs) outputs = layers.Dense(10)(x) mlp = keras.Model(inputs, outputs) With the subclassed version: class MLP(keras.Model): def __init__(self, **kwargs): super(MLP, self).__init__(**kwargs) self.dense_1 = layers.Dense(64, activation='relu') self.dense_2 = layers.Dense(10) def call(self, inputs): x = self.dense_1(inputs) return self.dense_2(x) # Instantiate the model. mlp = MLP() # Necessary to create the model's state. # The model doesn't have a state until it's called at least once. _ = mlp(tf.zeros((1, 32))) Model validation while defining its connectivity graph In the functional API, the input specification (shape and dtype) is created in advance (using Input). Every time you call a layer, the layer checks that the specification passed to it matches its assumptions, and it will raise a helpful error message if not. This guarantees that any model you can build with the functional API will run. All debugging -- other than convergence-related debugging -- happens statically during the model construction and not at execution time. This is similar to type checking in a compiler. A functional model is plottable and inspectable You can plot the model as a graph, and you can easily access intermediate nodes in this graph. For example, to extract and reuse the activations of intermediate layers (as seen in a previous example): features_list = [layer.output for layer in vgg19.layers] feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list) A functional model can be serialized or cloned Because a functional model is a data structure rather than a piece of code, it is safely serializable and can be saved as a single file that allows you to recreate the exact same model without having access to any of the original code. See the serialization & saving guide. To serialize a subclassed model, it is necessary for the implementer to specify a get_config() and from_config() method at the model level. Functional API weakness: It does not support dynamic architectures The functional API treats models as DAGs of layers. This is true for most deep learning architectures, but not all -- for example, recursive networks or Tree RNNs do not follow this assumption and cannot be implemented in the functional API. Mix-and-match API styles Choosing between the functional API or Model subclassing isn't a binary decision that restricts you into one category of models. All models in the tf.keras API can interact with each other, whether they're Sequential models, functional models, or subclassed models that are written from scratch. 
You can always use a functional model or Sequential model as part of a subclassed model or layer: units = 32 timesteps = 10 input_dim = 5 # Define a Functional model inputs = keras.Input((None, units)) x = layers.GlobalAveragePooling1D()(inputs) outputs = layers.Dense(1)(x) model = keras.Model(inputs, outputs) class CustomRNN(layers.Layer): def __init__(self): super(CustomRNN, self).__init__() self.units = units self.projection_1 = layers.Dense(units=units, activation="tanh") self.projection_2 = layers.Dense(units=units, activation="tanh") # Our previously-defined Functional model self.classifier = model def call(self, inputs): outputs = [] state = tf.zeros(shape=(inputs.shape[0], self.units)) for t in range(inputs.shape[1]): x = inputs[:, t, :] h = self.projection_1(x) y = h + self.projection_2(state) state = y outputs.append(y) features = tf.stack(outputs, axis=1) print(features.shape) return self.classifier(features) rnn_model = CustomRNN() _ = rnn_model(tf.zeros((1, timesteps, input_dim))) (1, 10, 32) You can use any subclassed layer or model in the functional API as long as it implements a call method that follows one of the following patterns: call(self, inputs, **kwargs) -- Where inputs is a tensor or a nested structure of tensors (e.g. a list of tensors), and where **kwargs are non-tensor arguments (non-inputs). call(self, inputs, training=None, **kwargs) -- Where training is a boolean indicating whether the layer should behave in training mode and inference mode. call(self, inputs, mask=None, **kwargs) -- Where mask is a boolean mask tensor (useful for RNNs, for instance). call(self, inputs, training=None, mask=None, **kwargs) -- Of course, you can have both masking and training-specific behavior at the same time. Additionally, if you implement the get_config method on your custom Layer or model, the functional models you create will still be serializable and cloneable. Here's a quick example of a custom RNN, written from scratch, being used in a functional model: units = 32 timesteps = 10 input_dim = 5 batch_size = 16 class CustomRNN(layers.Layer): def __init__(self): super(CustomRNN, self).__init__() self.units = units self.projection_1 = layers.Dense(units=units, activation="tanh") self.projection_2 = layers.Dense(units=units, activation="tanh") self.classifier = layers.Dense(1) def call(self, inputs): outputs = [] state = tf.zeros(shape=(inputs.shape[0], self.units)) for t in range(inputs.shape[1]): x = inputs[:, t, :] h = self.projection_1(x) y = h + self.projection_2(state) state = y outputs.append(y) features = tf.stack(outputs, axis=1) return self.classifier(features) # Note that you specify a static batch size for the inputs with the `batch_shape` # arg, because the inner computation of `CustomRNN` requires a static batch size # (when you create the `state` zeros tensor). inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim)) x = layers.Conv1D(32, 3)(inputs) outputs = CustomRNN()(x) model = keras.Model(inputs, outputs) rnn_model = CustomRNN() _ = rnn_model(tf.zeros((1, 10, 5)))Working with preprocessing layers Authors: Francois Chollet, Mark Omernick Date created: 2020/07/25 Last modified: 2021/04/23 Description: Overview of how to leverage preprocessing layers to create end-to-end models. View in Colab • GitHub source Keras preprocessing layers The Keras preprocessing layers API allows developers to build Keras-native input processing pipelines. 
These input processing pipelines can be used as independent preprocessing code in non-Keras workflows, combined directly with Keras models, and exported as part of a Keras SavedModel. With Keras preprocessing layers, you can build and export models that are truly end-to-end: models that accept raw images or raw structured data as input; models that handle feature normalization or feature value indexing on their own. Available preprocessing layers Core preprocessing layers TextVectorization layer: turns raw strings into an encoded representation that can be read by an Embedding layer or Dense layer. Normalization layer: performs feature-wise normalization of input features. Structured data preprocessing layers These layers are for structured data encoding and feature engineering. CategoryEncoding layer: turns integer categorical features into one-hot, multi-hot, or count dense representations. Hashing layer: performs categorical feature hashing, also known as the "hashing trick". Discretization layer: turns continuous numerical features into integer categorical features. StringLookup layer: turns string categorical values into an encoded representation that can be read by an Embedding layer or Dense layer. IntegerLookup layer: turns integer categorical values into an encoded representation that can be read by an Embedding layer or Dense layer. CategoryCrossing layer: combines categorical features into co-occurrence features. E.g. if you have feature values "a" and "b", it can provide the combination feature "a and b are present at the same time". Image preprocessing layers These layers are for standardizing the inputs of an image model. Resizing layer: resizes a batch of images to a target size. Rescaling layer: rescales and offsets the values of a batch of images (e.g. going from inputs in the [0, 255] range to inputs in the [0, 1] range). CenterCrop layer: returns a center crop of a batch of images. Image data augmentation layers These layers apply random augmentation transforms to a batch of images. They are only active during training. RandomCrop layer RandomFlip layer RandomTranslation layer RandomRotation layer RandomZoom layer RandomHeight layer RandomWidth layer The adapt() method Some preprocessing layers have an internal state that must be computed based on a sample of the training data. The list of stateful preprocessing layers is: TextVectorization: holds a mapping between string tokens and integer indices. StringLookup and IntegerLookup: hold a mapping between input values and integer indices. Normalization: holds the mean and standard deviation of the features. Discretization: holds information about value bucket boundaries. Crucially, these layers are non-trainable. Their state is not set during training; it must be set before training, a step called "adaptation". You set the state of a preprocessing layer by exposing it to training data, via the adapt() method: import numpy as np import tensorflow as tf from tensorflow.keras.layers.experimental import preprocessing data = np.array([[0.1, 0.2, 0.3], [0.8, 0.9, 1.0], [1.5, 1.6, 1.7],]) layer = preprocessing.Normalization() layer.adapt(data) normalized_data = layer(data) print("Features mean: %.2f" % (normalized_data.numpy().mean())) print("Features std: %.2f" % (normalized_data.numpy().std())) The adapt() method takes either a Numpy array or a tf.data.Dataset object.
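For example, here's a minimal sketch of adapting a Normalization layer to a tf.data.Dataset of feature batches (the toy data is the same array as above):

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

data = np.array([[0.1, 0.2, 0.3], [0.8, 0.9, 1.0], [1.5, 1.6, 1.7]], dtype="float32")
dataset = tf.data.Dataset.from_tensor_slices(data).batch(2)

layer = preprocessing.Normalization()
layer.adapt(dataset)  # the state is computed by iterating over the dataset
print(layer(data))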
In the case of StringLookup and TextVectorization, you can also pass a list of strings: data = [ "ξεῖν᾽, ἦ τοι μὲν ὄνειροι ἀμήχανοι ἀκριτόμυθοι", "γίγνοντ᾽, οὐδέ τι πάντα τελείεται ἀνθρώποισι.", "δοιαὶ γάρ τε πύλαι ἀμενηνῶν εἰσὶν ὀνείρων:", "αἱ μὲν γὰρ κεράεσσι τετεύχαται, αἱ δ᾽ ἐλέφαντι:", "τῶν οἳ μέν κ᾽ ἔλθωσι διὰ πριστοῦ ἐλέφαντος,", "οἵ ῥ᾽ ἐλεφαίρονται, ἔπε᾽ ἀκράαντα φέροντες:", "οἱ δὲ διὰ ξεστῶν κεράων ἔλθωσι θύραζε,", "οἵ ῥ᾽ ἔτυμα κραίνουσι, βροτῶν ὅτε κέν τις ἴδηται.", ] layer = preprocessing.TextVectorization() layer.adapt(data) vectorized_text = layer(data) print(vectorized_text) tf.Tensor( [[37 12 25 5 9 20 21 0 0] [51 34 27 33 29 18 0 0 0] [49 52 30 31 19 46 10 0 0] [ 7 5 50 43 28 7 47 17 0] [24 35 39 40 3 6 32 16 0] [ 4 2 15 14 22 23 0 0 0] [36 48 6 38 42 3 45 0 0] [ 4 2 13 41 53 8 44 26 11]], shape=(8, 9), dtype=int64) In addition, adaptable layers always expose an option to directly set state via constructor arguments or weight assignment. If the intended state values are known at layer construction time, or are calculated outside of the adapt() call, they can be set without relying on the layer's internal computation. For instance, if external vocabulary files for the TextVectorization, StringLookup, or IntegerLookup layers already exist, those can be loaded directly into the lookup tables by passing a path to the vocabulary file in the layer's constructor arguments. Here's an example where we instantiate a StringLookup layer with precomputed vocabulary: vocab = ["a", "b", "c", "d"] data = tf.constant([["a", "c", "d"], ["d", "z", "b"]]) layer = preprocessing.StringLookup(vocabulary=vocab) vectorized_data = layer(data) print(vectorized_data) tf.Tensor( [[2 4 5] [5 1 3]], shape=(2, 3), dtype=int64) Preprocessing data before the model or inside the model There are two ways you could be using preprocessing layers: Option 1: Make them part of the model, like this: inputs = keras.Input(shape=input_shape) x = preprocessing_layer(inputs) outputs = rest_of_the_model(x) model = keras.Model(inputs, outputs) With this option, preprocessing will happen on device, synchronously with the rest of the model execution, meaning that it will benefit from GPU acceleration. If you're training on GPU, this is the best option for the Normalization layer, and for all image preprocessing and data augmentation layers. Option 2: apply it to your tf.data.Dataset, so as to obtain a dataset that yields batches of preprocessed data, like this: dataset = dataset.map( lambda x, y: (preprocessing_layer(x), y)) With this option, your preprocessing will happen on CPU, asynchronously, and will be buffered before going into the model. This is the best option for TextVectorization, and all structured data preprocessing layers. It can also be a good option if you're training on CPU and you use image preprocessing layers. Benefits of doing preprocessing inside the model at inference time Even if you go with option 2, you may later want to export an inference-only end-to-end model that will include the preprocessing layers. The key benefit to doing this is that it makes your model portable and it helps reduce the training/serving skew. When all data preprocessing is part of the model, other people can load and use your model without having to be aware of how each feature is expected to be encoded & normalized. Your inference model will be able to process raw images or raw structured data, and will not require users of the model to be aware of the details of e.g. 
the tokenization scheme used for text, the indexing scheme used for categorical features, whether image pixel values are normalized to [-1, +1] or to [0, 1], etc. This is especially powerful if you're exporting your model to another runtime, such as TensorFlow.js: you won't have to reimplement your preprocessing pipeline in JavaScript. If you initially put your preprocessing layers in your tf.data pipeline, you can export an inference model that packages the preprocessing. Simply instantiate a new model that chains your preprocessing layers and your training model: inputs = keras.Input(shape=input_shape) x = preprocessing_layer(inputs) outputs = training_model(x) inference_model = keras.Model(inputs, outputs) Quick recipes Image data augmentation (on-device) Note that image data augmentation layers are only active during training (similarly to the Dropout layer). from tensorflow import keras from tensorflow.keras import layers # Create a data augmentation stage with horizontal flipping, rotations, zooms data_augmentation = keras.Sequential( [ preprocessing.RandomFlip("horizontal"), preprocessing.RandomRotation(0.1), preprocessing.RandomZoom(0.1), ] ) # Create a model that includes the augmentation stage input_shape = (32, 32, 3) classes = 10 inputs = keras.Input(shape=input_shape) # Augment images x = data_augmentation(inputs) # Rescale image values to [0, 1] x = preprocessing.Rescaling(1.0 / 255)(x) # Add the rest of the model outputs = keras.applications.ResNet50( weights=None, input_shape=input_shape, classes=classes )(x) model = keras.Model(inputs, outputs) You can see a similar setup in action in the example image classification from scratch. Normalizing numerical features # Load some data (x_train, y_train), _ = keras.datasets.cifar10.load_data() x_train = x_train.reshape((len(x_train), -1)) input_shape = x_train.shape[1:] classes = 10 # Create a Normalization layer and set its internal state using the training data normalizer = preprocessing.Normalization() normalizer.adapt(x_train) # Create a model that include the normalization layer inputs = keras.Input(shape=input_shape) x = normalizer(inputs) outputs = layers.Dense(classes, activation="softmax")(x) model = keras.Model(inputs, outputs) # Train the model model.compile(optimizer="adam", loss="sparse_categorical_crossentropy") model.fit(x_train, y_train) 1563/1563 [==============================] - 3s 2ms/step - loss: 2.1828 Encoding string categorical features via one-hot encoding # Define some toy data data = tf.constant([["a"], ["b"], ["c"], ["b"], ["c"], ["a"]]) # Use StringLookup to build an index of the feature values and encode output. lookup = preprocessing.StringLookup(output_mode="binary") lookup.adapt(data) # Convert new test data (which includes unknown feature values) test_data = tf.constant([["a"], ["b"], ["c"], ["d"], ["e"], [""]]) encoded_data = lookup(test_data) print(encoded_data) tf.Tensor( [[0. 0. 0. 1.] [0. 0. 1. 0.] [0. 1. 0. 0.] [1. 0. 0. 0.] [1. 0. 0. 0.] [0. 0. 0. 0.]], shape=(6, 4), dtype=float32) Note that index 0 is reserved for missing values (which you should specify as the empty string ""), and index 1 is reserved for out-of-vocabulary values (values that were not seen during adapt()). You can configure this by using the mask_token and oov_token constructor arguments of StringLookup. You can see the StringLookup in action in the Structured data classification from scratch example. 
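For reference, here's a hedged sketch of spelling out those constructor arguments explicitly instead of relying on the defaults (the token values shown simply match the behavior described above for this version of the API):

import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

data = tf.constant([["a"], ["b"], ["c"], ["b"], ["c"], ["a"]])

# Make the reserved tokens explicit rather than relying on defaults.
lookup = preprocessing.StringLookup(
    mask_token="",        # index 0: reserved for missing values
    oov_token="[UNK]",    # index 1: reserved for out-of-vocabulary values
    output_mode="binary",
)
lookup.adapt(data)
print(lookup(tf.constant([["a"], ["z"], [""]])))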
Encoding integer categorical features via one-hot encoding # Define some toy data data = tf.constant([[10], [20], [20], [10], [30], [0]]) # Use IntegerLookup to build an index of the feature values and encode output. lookup = preprocessing.IntegerLookup(output_mode="binary") lookup.adapt(data) # Convert new test data (which includes unknown feature values) test_data = tf.constant([[10], [10], [20], [50], [60], [0]]) encoded_data = lookup(test_data) print(encoded_data) tf.Tensor( [[0. 0. 1. 0.] [0. 0. 1. 0.] [0. 1. 0. 0.] [1. 0. 0. 0.] [1. 0. 0. 0.] [0. 0. 0. 0.]], shape=(6, 4), dtype=float32) Note that index 0 is reserved for missing values (which you should specify as the value 0), and index 1 is reserved for out-of-vocabulary values (values that were not seen during adapt()). You can configure this by using the mask_token and oov_token constructor arguments of IntegerLookup. You can see the IntegerLookup in action in the example structured data classification from scratch. Applying the hashing trick to an integer categorical feature If you have a categorical feature that can take many different values (on the order of 10e3 or higher), where each value only appears a few times in the data, it becomes impractical and ineffective to index and one-hot encode the feature values. Instead, it can be a good idea to apply the "hashing trick": hash the values to a vector of fixed size. This keeps the size of the feature space manageable, and removes the need for explicit indexing. # Sample data: 10,000 random integers with values between 0 and 100,000 data = np.random.randint(0, 100000, size=(10000, 1)) # Use the Hashing layer to hash the values to the range [0, 64] hasher = preprocessing.Hashing(num_bins=64, salt=1337) # Use the CategoryEncoding layer to one-hot encode the hashed values encoder = preprocessing.CategoryEncoding(num_tokens=64, output_mode="binary") encoded_data = encoder(hasher(data)) print(encoded_data.shape) (10000, 64) Encoding text as a sequence of token indices This is how you should preprocess text to be passed to an Embedding layer. # Define some text data to adapt the layer data = tf.constant( [ "The Brain is wider than the Sky", "For put them side by side", "The one the other will contain", "With ease and You beside", ] ) # Instantiate TextVectorization with "int" output_mode text_vectorizer = preprocessing.TextVectorization(output_mode="int") # Index the vocabulary via `adapt()` text_vectorizer.adapt(data) # You can retrieve the vocabulary we indexed via get_vocabulary() vocab = text_vectorizer.get_vocabulary() print("Vocabulary:", vocab) # Create an Embedding + LSTM model inputs = keras.Input(shape=(1,), dtype="string") x = text_vectorizer(inputs) x = layers.Embedding(input_dim=len(vocab), output_dim=64)(x) outputs = layers.LSTM(1)(x) model = keras.Model(inputs, outputs) # Call the model on test data (which includes unknown tokens) test_data = tf.constant(["The Brain is deeper than the sea"]) test_output = model(test_data) Vocabulary: ['', '[UNK]', 'the', 'side', 'you', 'with', 'will', 'wider', 'them', 'than', 'sky', 'put', 'other', 'one', 'is', 'for', 'ease', 'contain', 'by', 'brain', 'beside', 'and'] You can see the TextVectorization layer in action, combined with an Embedding mode, in the example text classification from scratch. Note that when training such a model, for best performance, you should use the TextVectorization layer as part of the input pipeline (which is what we do in the text classification example above). 
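For example, here's a minimal sketch of what that input-pipeline placement could look like with tf.data, reusing the adapted text_vectorizer from above (the raw dataset here is illustrative):

import tensorflow as tf

# Illustrative raw (text, label) dataset; in practice this would be your corpus.
raw_train_ds = tf.data.Dataset.from_tensor_slices(
    (["The Brain is wider than the Sky", "For put them side by side"], [0, 1])
).batch(2)

# Apply the adapted TextVectorization layer inside the pipeline (on CPU,
# asynchronously), then prefetch so preprocessing overlaps with training.
train_ds = raw_train_ds.map(
    lambda text, label: (text_vectorizer(text), label),
    num_parallel_calls=tf.data.AUTOTUNE,
).prefetch(tf.data.AUTOTUNE)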
Encoding text as a dense matrix of ngrams with multi-hot encoding This is how you should preprocess text to be passed to a Dense layer. # Define some text data to adapt the layer data = tf.constant( [ "The Brain is wider than the Sky", "For put them side by side", "The one the other will contain", "With ease and You beside", ] ) # Instantiate TextVectorization with "binary" output_mode (multi-hot) # and ngrams=2 (index all bigrams) text_vectorizer = preprocessing.TextVectorization(output_mode="binary", ngrams=2) # Index the bigrams via `adapt()` text_vectorizer.adapt(data) print( "Encoded text:\n", text_vectorizer(["The Brain is deeper than the sea"]).numpy(), "\n", ) # Create a Dense model inputs = keras.Input(shape=(1,), dtype="string") x = text_vectorizer(inputs) outputs = layers.Dense(1)(x) model = keras.Model(inputs, outputs) # Call the model on test data (which includes unknown tokens) test_data = tf.constant(["The Brain is deeper than the sea"]) test_output = model(test_data) print("Model output:", test_output) Encoded text: [[1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0.]] Model output: tf.Tensor([[0.53373265]], shape=(1, 1), dtype=float32) Encoding text as a dense matrix of ngrams with TF-IDF weighting This is an alternative way of preprocessing text before passing it to a Dense layer. # Define some text data to adapt the layer data = tf.constant( [ "The Brain is wider than the Sky", "For put them side by side", "The one the other will contain", "With ease and You beside", ] ) # Instantiate TextVectorization with "tf-idf" output_mode # (multi-hot with TF-IDF weighting) and ngrams=2 (index all bigrams) text_vectorizer = preprocessing.TextVectorization(output_mode="tf-idf", ngrams=2) # Index the bigrams and learn the TF-IDF weights via `adapt()` text_vectorizer.adapt(data) print( "Encoded text:\n", text_vectorizer(["The Brain is deeper than the sea"]).numpy(), "\n", ) # Create a Dense model inputs = keras.Input(shape=(1,), dtype="string") x = text_vectorizer(inputs) outputs = layers.Dense(1)(x) model = keras.Model(inputs, outputs) # Call the model on test data (which includes unknown tokens) test_data = tf.constant(["The Brain is deeper than the sea"]) test_output = model(test_data) print("Model output:", test_output) Encoded text: [[5.461647 1.6945957 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.0986123 1.0986123 1.0986123 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.0986123 0. 0. 0. 0. 0. 0. 0. 1.0986123 1.0986123 0. 0. 0. ]] Model output: tf.Tensor([[-0.49451536]], shape=(1, 1), dtype=float32)Writing your own callbacks Authors: Rick Chao, Francois Chollet Date created: 2019/03/20 Last modified: 2020/04/15 Description: Complete guide to writing new Keras callbacks. View in Colab • GitHub source Introduction A callback is a powerful tool to customize the behavior of a Keras model during training, evaluation, or inference. Examples include tf.keras.callbacks.TensorBoard to visualize training progress and results with TensorBoard, or tf.keras.callbacks.ModelCheckpoint to periodically save your model during training. In this guide, you will learn what a Keras callback is, what it can do, and how you can build your own. We provide a few demos of simple callback applications to get you started. Setup import tensorflow as tf from tensorflow import keras Keras callbacks overview All callbacks subclass the keras.callbacks.Callback class, and override a set of methods called at various stages of training, testing, and predicting. 
Callbacks are useful to get a view on internal states and statistics of the model during training. You can pass a list of callbacks (as the keyword argument callbacks) to the following model methods: keras.Model.fit() keras.Model.evaluate() keras.Model.predict() An overview of callback methods Global methods on_(train|test|predict)_begin(self, logs=None) Called at the beginning of fit/evaluate/predict. on_(train|test|predict)_end(self, logs=None) Called at the end of fit/evaluate/predict. Batch-level methods for training/testing/predicting on_(train|test|predict)_batch_begin(self, batch, logs=None) Called right before processing a batch during training/testing/predicting. on_(train|test|predict)_batch_end(self, batch, logs=None) Called at the end of training/testing/predicting a batch. Within this method, logs is a dict containing the metrics results. Epoch-level methods (training only) on_epoch_begin(self, epoch, logs=None) Called at the beginning of an epoch during training. on_epoch_end(self, epoch, logs=None) Called at the end of an epoch during training. A basic example Let's take a look at a concrete example. To get started, let's import tensorflow and define a simple Sequential Keras model: # Define the Keras model to add callbacks to def get_model(): model = keras.Sequential() model.add(keras.layers.Dense(1, input_dim=784)) model.compile( optimizer=keras.optimizers.RMSprop(learning_rate=0.1), loss="mean_squared_error", metrics=["mean_absolute_error"], ) return model Then, load the MNIST data for training and testing from Keras datasets API: # Load example MNIST data and pre-process it (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() x_train = x_train.reshape(-1, 784).astype("float32") / 255.0 x_test = x_test.reshape(-1, 784).astype("float32") / 255.0 # Limit the data to 1000 samples x_train = x_train[:1000] y_train = y_train[:1000] x_test = x_test[:1000] y_test = y_test[:1000] Now, define a simple custom callback that logs: When fit/evaluate/predict starts & ends When each epoch starts & ends When each training batch starts & ends When each evaluation (test) batch starts & ends When each inference (prediction) batch starts & ends class CustomCallback(keras.callbacks.Callback): def on_train_begin(self, logs=None): keys = list(logs.keys()) print("Starting training; got log keys: {}".format(keys)) def on_train_end(self, logs=None): keys = list(logs.keys()) print("Stop training; got log keys: {}".format(keys)) def on_epoch_begin(self, epoch, logs=None): keys = list(logs.keys()) print("Start epoch {} of training; got log keys: {}".format(epoch, keys)) def on_epoch_end(self, epoch, logs=None): keys = list(logs.keys()) print("End epoch {} of training; got log keys: {}".format(epoch, keys)) def on_test_begin(self, logs=None): keys = list(logs.keys()) print("Start testing; got log keys: {}".format(keys)) def on_test_end(self, logs=None): keys = list(logs.keys()) print("Stop testing; got log keys: {}".format(keys)) def on_predict_begin(self, logs=None): keys = list(logs.keys()) print("Start predicting; got log keys: {}".format(keys)) def on_predict_end(self, logs=None): keys = list(logs.keys()) print("Stop predicting; got log keys: {}".format(keys)) def on_train_batch_begin(self, batch, logs=None): keys = list(logs.keys()) print("...Training: start of batch {}; got log keys: {}".format(batch, keys)) def on_train_batch_end(self, batch, logs=None): keys = list(logs.keys()) print("...Training: end of batch {}; got log keys: {}".format(batch, keys)) def 
on_test_batch_begin(self, batch, logs=None): keys = list(logs.keys()) print("...Evaluating: start of batch {}; got log keys: {}".format(batch, keys)) def on_test_batch_end(self, batch, logs=None): keys = list(logs.keys()) print("...Evaluating: end of batch {}; got log keys: {}".format(batch, keys)) def on_predict_batch_begin(self, batch, logs=None): keys = list(logs.keys()) print("...Predicting: start of batch {}; got log keys: {}".format(batch, keys)) def on_predict_batch_end(self, batch, logs=None): keys = list(logs.keys()) print("...Predicting: end of batch {}; got log keys: {}".format(batch, keys)) Let's try it out: model = get_model() model.fit( x_train, y_train, batch_size=128, epochs=1, verbose=0, validation_split=0.5, callbacks=[CustomCallback()], ) res = model.evaluate( x_test, y_test, batch_size=128, verbose=0, callbacks=[CustomCallback()] ) res = model.predict(x_test, batch_size=128, callbacks=[CustomCallback()]) Starting training; got log keys: [] Start epoch 0 of training; got log keys: [] ...Training: start of batch 0; got log keys: [] ...Training: end of batch 0; got log keys: ['loss', 'mean_absolute_error'] ...Training: start of batch 1; got log keys: [] ...Training: end of batch 1; got log keys: ['loss', 'mean_absolute_error'] ...Training: start of batch 2; got log keys: [] ...Training: end of batch 2; got log keys: ['loss', 'mean_absolute_error'] ...Training: start of batch 3; got log keys: [] ...Training: end of batch 3; got log keys: ['loss', 'mean_absolute_error'] Start testing; got log keys: [] ...Evaluating: start of batch 0; got log keys: [] ...Evaluating: end of batch 0; got log keys: ['loss', 'mean_absolute_error'] ...Evaluating: start of batch 1; got log keys: [] ...Evaluating: end of batch 1; got log keys: ['loss', 'mean_absolute_error'] ...Evaluating: start of batch 2; got log keys: [] ...Evaluating: end of batch 2; got log keys: ['loss', 'mean_absolute_error'] ...Evaluating: start of batch 3; got log keys: [] ...Evaluating: end of batch 3; got log keys: ['loss', 'mean_absolute_error'] Stop testing; got log keys: ['loss', 'mean_absolute_error'] End epoch 0 of training; got log keys: ['loss', 'mean_absolute_error', 'val_loss', 'val_mean_absolute_error'] Stop training; got log keys: ['loss', 'mean_absolute_error', 'val_loss', 'val_mean_absolute_error'] Start testing; got log keys: [] ...Evaluating: start of batch 0; got log keys: [] ...Evaluating: end of batch 0; got log keys: ['loss', 'mean_absolute_error'] ...Evaluating: start of batch 1; got log keys: [] ...Evaluating: end of batch 1; got log keys: ['loss', 'mean_absolute_error'] ...Evaluating: start of batch 2; got log keys: [] ...Evaluating: end of batch 2; got log keys: ['loss', 'mean_absolute_error'] ...Evaluating: start of batch 3; got log keys: [] ...Evaluating: end of batch 3; got log keys: ['loss', 'mean_absolute_error'] ...Evaluating: start of batch 4; got log keys: [] ...Evaluating: end of batch 4; got log keys: ['loss', 'mean_absolute_error'] ...Evaluating: start of batch 5; got log keys: [] ...Evaluating: end of batch 5; got log keys: ['loss', 'mean_absolute_error'] ...Evaluating: start of batch 6; got log keys: [] ...Evaluating: end of batch 6; got log keys: ['loss', 'mean_absolute_error'] ...Evaluating: start of batch 7; got log keys: [] ...Evaluating: end of batch 7; got log keys: ['loss', 'mean_absolute_error'] Stop testing; got log keys: ['loss', 'mean_absolute_error'] Start predicting; got log keys: [] ...Predicting: start of batch 0; got log keys: [] ...Predicting: end of batch 0; got log 
keys: ['outputs'] ...Predicting: start of batch 1; got log keys: [] ...Predicting: end of batch 1; got log keys: ['outputs'] ...Predicting: start of batch 2; got log keys: [] ...Predicting: end of batch 2; got log keys: ['outputs'] ...Predicting: start of batch 3; got log keys: [] ...Predicting: end of batch 3; got log keys: ['outputs'] ...Predicting: start of batch 4; got log keys: [] ...Predicting: end of batch 4; got log keys: ['outputs'] ...Predicting: start of batch 5; got log keys: [] ...Predicting: end of batch 5; got log keys: ['outputs'] ...Predicting: start of batch 6; got log keys: [] ...Predicting: end of batch 6; got log keys: ['outputs'] ...Predicting: start of batch 7; got log keys: [] ...Predicting: end of batch 7; got log keys: ['outputs'] Stop predicting; got log keys: [] Usage of logs dict The logs dict contains the loss value, and all the metrics at the end of a batch or epoch. Examples include the loss and mean absolute error. class LossAndErrorPrintingCallback(keras.callbacks.Callback): def on_train_batch_end(self, batch, logs=None): print("For batch {}, loss is {:7.2f}.".format(batch, logs["loss"])) def on_test_batch_end(self, batch, logs=None): print("For batch {}, loss is {:7.2f}.".format(batch, logs["loss"])) def on_epoch_end(self, epoch, logs=None): print( "The average loss for epoch {} is {:7.2f} " "and mean absolute error is {:7.2f}.".format( epoch, logs["loss"], logs["mean_absolute_error"] ) ) model = get_model() model.fit( x_train, y_train, batch_size=128, epochs=2, verbose=0, callbacks=[LossAndErrorPrintingCallback()], ) res = model.evaluate( x_test, y_test, batch_size=128, verbose=0, callbacks=[LossAndErrorPrintingCallback()], ) For batch 0, loss is 32.45. For batch 1, loss is 393.79. For batch 2, loss is 272.00. For batch 3, loss is 206.95. For batch 4, loss is 167.29. For batch 5, loss is 140.41. For batch 6, loss is 121.19. For batch 7, loss is 109.21. The average loss for epoch 0 is 109.21 and mean absolute error is 5.83. For batch 0, loss is 5.94. For batch 1, loss is 5.73. For batch 2, loss is 5.50. For batch 3, loss is 5.38. For batch 4, loss is 5.16. For batch 5, loss is 5.19. For batch 6, loss is 5.64. For batch 7, loss is 7.05. The average loss for epoch 1 is 7.05 and mean absolute error is 2.14. For batch 0, loss is 40.89. For batch 1, loss is 42.12. For batch 2, loss is 41.42. For batch 3, loss is 42.10. For batch 4, loss is 42.05. For batch 5, loss is 42.91. For batch 6, loss is 43.05. For batch 7, loss is 42.94. Usage of self.model attribute In addition to receiving log information when one of their methods is called, callbacks have access to the model associated with the current round of training/evaluation/inference: self.model. Here are a few of the things you can do with self.model in a callback: Set self.model.stop_training = True to immediately interrupt training. Mutate hyperparameters of the optimizer (available as self.model.optimizer), such as self.model.optimizer.learning_rate. Save the model at periodic intervals. Record the output of model.predict() on a few test samples at the end of each epoch, to use as a sanity check during training. Extract visualizations of intermediate features at the end of each epoch, to monitor what the model is learning over time. etc. Let's see this in action in a couple of examples.
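Before those fuller examples, here's a minimal sketch of the "save the model at periodic intervals" idea via self.model (the interval and file path are illustrative; tf.keras.callbacks.ModelCheckpoint is the built-in way to do this):

class PeriodicSaver(keras.callbacks.Callback):
    """Minimal sketch: save the model through `self.model` every `every` epochs."""

    def __init__(self, every=5, path_template="model_at_epoch_{epoch}.h5"):
        super(PeriodicSaver, self).__init__()
        self.every = every
        self.path_template = path_template

    def on_epoch_end(self, epoch, logs=None):
        if (epoch + 1) % self.every == 0:
            self.model.save(self.path_template.format(epoch=epoch + 1))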
Examples of Keras callback applications Early stopping at minimum loss This first example shows the creation of a Callback that stops training when the minimum of loss has been reached, by setting the attribute self.model.stop_training (boolean). Optionally, you can provide an argument patience to specify how many epochs we should wait before stopping after having reached a local minimum. tf.keras.callbacks.EarlyStopping provides a more complete and general implementation. import numpy as np class EarlyStoppingAtMinLoss(keras.callbacks.Callback): """Stop training when the loss is at its min, i.e. the loss stops decreasing. Arguments: patience: Number of epochs to wait after min has been hit. After this number of no improvement, training stops. """ def __init__(self, patience=0): super(EarlyStoppingAtMinLoss, self).__init__() self.patience = patience # best_weights to store the weights at which the minimum loss occurs. self.best_weights = None def on_train_begin(self, logs=None): # The number of epoch it has waited when loss is no longer minimum. self.wait = 0 # The epoch the training stops at. self.stopped_epoch = 0 # Initialize the best as infinity. self.best = np.Inf def on_epoch_end(self, epoch, logs=None): current = logs.get("loss") if np.less(current, self.best): self.best = current self.wait = 0 # Record the best weights if current results is better (less). self.best_weights = self.model.get_weights() else: self.wait += 1 if self.wait >= self.patience: self.stopped_epoch = epoch self.model.stop_training = True print("Restoring model weights from the end of the best epoch.") self.model.set_weights(self.best_weights) def on_train_end(self, logs=None): if self.stopped_epoch > 0: print("Epoch %05d: early stopping" % (self.stopped_epoch + 1)) model = get_model() model.fit( x_train, y_train, batch_size=64, steps_per_epoch=5, epochs=30, verbose=0, callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()], ) For batch 0, loss is 34.49. For batch 1, loss is 438.63. For batch 2, loss is 301.08. For batch 3, loss is 228.22. For batch 4, loss is 183.83. The average loss for epoch 0 is 183.83 and mean absolute error is 8.24. For batch 0, loss is 9.19. For batch 1, loss is 7.99. For batch 2, loss is 7.32. For batch 3, loss is 6.83. For batch 4, loss is 6.31. The average loss for epoch 1 is 6.31 and mean absolute error is 2.07. For batch 0, loss is 5.26. For batch 1, loss is 4.62. For batch 2, loss is 4.51. For batch 3, loss is 4.56. For batch 4, loss is 4.52. The average loss for epoch 2 is 4.52 and mean absolute error is 1.72. For batch 0, loss is 4.36. For batch 1, loss is 6.15. For batch 2, loss is 10.84. For batch 3, loss is 17.60. For batch 4, loss is 26.95. The average loss for epoch 3 is 26.95 and mean absolute error is 4.29. Restoring model weights from the end of the best epoch. Epoch 00004: early stopping Learning rate scheduling In this example, we show how a custom Callback can be used to dynamically change the learning rate of the optimizer during the course of training. See callbacks.LearningRateScheduler for a more general implementations. class CustomLearningRateScheduler(keras.callbacks.Callback): """Learning rate scheduler which sets the learning rate according to schedule. Arguments: schedule: a function that takes an epoch index (integer, indexed from 0) and current learning rate as inputs and returns a new learning rate as output (float). 
""" def __init__(self, schedule): super(CustomLearningRateScheduler, self).__init__() self.schedule = schedule def on_epoch_begin(self, epoch, logs=None): if not hasattr(self.model.optimizer, "lr"): raise ValueError('Optimizer must have a "lr" attribute.') # Get the current learning rate from model's optimizer. lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate)) # Call schedule function to get the scheduled learning rate. scheduled_lr = self.schedule(epoch, lr) # Set the value back to the optimizer before this epoch starts tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr) print("\nEpoch %05d: Learning rate is %6.4f." % (epoch, scheduled_lr)) LR_SCHEDULE = [ # (epoch to start, learning rate) tuples (3, 0.05), (6, 0.01), (9, 0.005), (12, 0.001), ] def lr_schedule(epoch, lr): """Helper function to retrieve the scheduled learning rate based on epoch.""" if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]: return lr for i in range(len(LR_SCHEDULE)): if epoch == LR_SCHEDULE[i][0]: return LR_SCHEDULE[i][1] return lr model = get_model() model.fit( x_train, y_train, batch_size=64, steps_per_epoch=5, epochs=15, verbose=0, callbacks=[ LossAndErrorPrintingCallback(), CustomLearningRateScheduler(lr_schedule), ], ) Epoch 00000: Learning rate is 0.1000. For batch 0, loss is 32.53. For batch 1, loss is 430.35. For batch 2, loss is 294.47. For batch 3, loss is 223.69. For batch 4, loss is 180.61. The average loss for epoch 0 is 180.61 and mean absolute error is 8.20. Epoch 00001: Learning rate is 0.1000. For batch 0, loss is 6.72. For batch 1, loss is 5.57. For batch 2, loss is 5.33. For batch 3, loss is 5.35. For batch 4, loss is 5.53. The average loss for epoch 1 is 5.53 and mean absolute error is 1.92. Epoch 00002: Learning rate is 0.1000. For batch 0, loss is 5.22. For batch 1, loss is 5.19. For batch 2, loss is 5.51. For batch 3, loss is 5.80. For batch 4, loss is 5.69. The average loss for epoch 2 is 5.69 and mean absolute error is 1.99. Epoch 00003: Learning rate is 0.0500. For batch 0, loss is 6.21. For batch 1, loss is 4.85. For batch 2, loss is 4.90. For batch 3, loss is 4.66. For batch 4, loss is 4.54. The average loss for epoch 3 is 4.54 and mean absolute error is 1.69. Epoch 00004: Learning rate is 0.0500. For batch 0, loss is 3.62. For batch 1, loss is 3.58. For batch 2, loss is 3.92. For batch 3, loss is 3.73. For batch 4, loss is 3.65. The average loss for epoch 4 is 3.65 and mean absolute error is 1.57. Epoch 00005: Learning rate is 0.0500. For batch 0, loss is 4.42. For batch 1, loss is 4.95. For batch 2, loss is 5.83. For batch 3, loss is 6.36. For batch 4, loss is 6.62. The average loss for epoch 5 is 6.62 and mean absolute error is 2.09. Epoch 00006: Learning rate is 0.0100. For batch 0, loss is 8.74. For batch 1, loss is 7.34. For batch 2, loss is 5.55. For batch 3, loss is 4.98. For batch 4, loss is 4.48. The average loss for epoch 6 is 4.48 and mean absolute error is 1.65. Epoch 00007: Learning rate is 0.0100. For batch 0, loss is 4.30. For batch 1, loss is 4.01. For batch 2, loss is 3.97. For batch 3, loss is 3.68. For batch 4, loss is 3.76. The average loss for epoch 7 is 3.76 and mean absolute error is 1.51. Epoch 00008: Learning rate is 0.0100. For batch 0, loss is 3.41. For batch 1, loss is 3.74. For batch 2, loss is 3.51. For batch 3, loss is 3.52. For batch 4, loss is 3.47. The average loss for epoch 8 is 3.47 and mean absolute error is 1.47. Epoch 00009: Learning rate is 0.0050. For batch 0, loss is 3.39. 
For batch 1, loss is 3.04. For batch 2, loss is 3.10. For batch 3, loss is 3.22. For batch 4, loss is 3.14. The average loss for epoch 9 is 3.14 and mean absolute error is 1.38. Epoch 00010: Learning rate is 0.0050. For batch 0, loss is 2.77. For batch 1, loss is 2.89. For batch 2, loss is 2.94. For batch 3, loss is 2.85. For batch 4, loss is 2.78. The average loss for epoch 10 is 2.78 and mean absolute error is 1.30. Epoch 00011: Learning rate is 0.0050. For batch 0, loss is 3.69. For batch 1, loss is 3.33. For batch 2, loss is 3.22. For batch 3, loss is 3.57. For batch 4, loss is 3.79. The average loss for epoch 11 is 3.79 and mean absolute error is 1.51. Epoch 00012: Learning rate is 0.0010. For batch 0, loss is 3.61. For batch 1, loss is 3.21. For batch 2, loss is 3.07. For batch 3, loss is 3.34. For batch 4, loss is 3.23. The average loss for epoch 12 is 3.23 and mean absolute error is 1.42. Epoch 00013: Learning rate is 0.0010. For batch 0, loss is 2.03. For batch 1, loss is 3.25. For batch 2, loss is 3.23. For batch 3, loss is 3.36. For batch 4, loss is 3.44. The average loss for epoch 13 is 3.44 and mean absolute error is 1.46. Epoch 00014: Learning rate is 0.0010. For batch 0, loss is 3.28. For batch 1, loss is 3.14. For batch 2, loss is 2.89. For batch 3, loss is 2.94. For batch 4, loss is 3.02. The average loss for epoch 14 is 3.02 and mean absolute error is 1.38. Built-in Keras callbacks Be sure to check out the existing Keras callbacks by reading the API docs. Applications include logging to CSV, saving the model, visualizing metrics in TensorBoard, and a lot more! Transfer learning & fine-tuning Author: fchollet Date created: 2020/04/15 Last modified: 2020/05/12 Description: Complete guide to transfer learning & fine-tuning in Keras. View in Colab • GitHub source Setup import numpy as np import tensorflow as tf from tensorflow import keras Introduction Transfer learning consists of taking features learned on one problem, and leveraging them on a new, similar problem. For instance, features from a model that has learned to identify raccoons may be useful to kick-start a model meant to identify tanukis. Transfer learning is usually done for tasks where your dataset has too little data to train a full-scale model from scratch. The most common incarnation of transfer learning in the context of deep learning is the following workflow: Take layers from a previously trained model. Freeze them, so as to avoid destroying any of the information they contain during future training rounds. Add some new, trainable layers on top of the frozen layers. They will learn to turn the old features into predictions on a new dataset. Train the new layers on your dataset. A last, optional step is fine-tuning, which consists of unfreezing the entire model you obtained above (or part of it), and re-training it on the new data with a very low learning rate. This can potentially achieve meaningful improvements, by incrementally adapting the pretrained features to the new data. First, we will go over the Keras trainable API in detail, which underlies most transfer learning & fine-tuning workflows. Then, we'll demonstrate the typical workflow by taking a model pretrained on the ImageNet dataset, and retraining it on the Kaggle "cats vs dogs" classification dataset. This is adapted from Deep Learning with Python and the 2016 blog post "building powerful image classification models using very little data".
Freezing layers: understanding the trainable attribute Layers & models have three weight attributes: weights is the list of all weights variables of the layer. trainable_weights is the list of those that are meant to be updated (via gradient descent) to minimize the loss during training. non_trainable_weights is the list of those that aren't meant to be trained. Typically they are updated by the model during the forward pass. Example: the Dense layer has 2 trainable weights (kernel & bias) layer = keras.layers.Dense(3) layer.build((None, 4)) # Create the weights print("weights:", len(layer.weights)) print("trainable_weights:", len(layer.trainable_weights)) print("non_trainable_weights:", len(layer.non_trainable_weights)) weights: 2 trainable_weights: 2 non_trainable_weights: 0 In general, all weights are trainable weights. The only built-in layer that has non-trainable weights is the BatchNormalization layer. It uses non-trainable weights to keep track of the mean and variance of its inputs during training. To learn how to use non-trainable weights in your own custom layers, see the guide to writing new layers from scratch. Example: the BatchNormalization layer has 2 trainable weights and 2 non-trainable weights layer = keras.layers.BatchNormalization() layer.build((None, 4)) # Create the weights print("weights:", len(layer.weights)) print("trainable_weights:", len(layer.trainable_weights)) print("non_trainable_weights:", len(layer.non_trainable_weights)) weights: 4 trainable_weights: 2 non_trainable_weights: 2 Layers & models also feature a boolean attribute trainable. Its value can be changed. Setting layer.trainable to False moves all the layer's weights from trainable to non-trainable. This is called "freezing" the layer: the state of a frozen layer won't be updated during training (either when training with fit() or when training with any custom loop that relies on trainable_weights to apply gradient updates). Example: setting trainable to False layer = keras.layers.Dense(3) layer.build((None, 4)) # Create the weights layer.trainable = False # Freeze the layer print("weights:", len(layer.weights)) print("trainable_weights:", len(layer.trainable_weights)) print("non_trainable_weights:", len(layer.non_trainable_weights)) weights: 2 trainable_weights: 0 non_trainable_weights: 2 When a trainable weight becomes non-trainable, its value is no longer updated during training. # Make a model with 2 layers layer1 = keras.layers.Dense(3, activation="relu") layer2 = keras.layers.Dense(3, activation="sigmoid") model = keras.Sequential([keras.Input(shape=(3,)), layer1, layer2]) # Freeze the first layer layer1.trainable = False # Keep a copy of the weights of layer1 for later reference initial_layer1_weights_values = layer1.get_weights() # Train the model model.compile(optimizer="adam", loss="mse") model.fit(np.random.random((2, 3)), np.random.random((2, 3))) # Check that the weights of layer1 have not changed during training final_layer1_weights_values = layer1.get_weights() np.testing.assert_allclose( initial_layer1_weights_values[0], final_layer1_weights_values[0] ) np.testing.assert_allclose( initial_layer1_weights_values[1], final_layer1_weights_values[1] ) 1/1 [==============================] - 0s 1ms/step - loss: 0.0846 Do not confuse the layer.trainable attribute with the argument training in layer.__call__() (which controls whether the layer should run its forward pass in inference mode or training mode). For more information, see the Keras FAQ. 
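As a quick illustration of that distinction, here's a minimal sketch using a Dropout layer, which has no weights at all (so trainable is irrelevant to it) but changes its forward pass depending on the training argument:

import tensorflow as tf
from tensorflow import keras

dropout = keras.layers.Dropout(0.5)
x = tf.ones((1, 10))

# `training=True`: roughly half the values are zeroed out (and the rest rescaled).
print(dropout(x, training=True))
# `training=False`: the input passes through unchanged.
print(dropout(x, training=False))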
Recursive setting of the trainable attribute If you set trainable = False on a model or on any layer that has sublayers, all children layers become non-trainable as well. Example: inner_model = keras.Sequential( [ keras.Input(shape=(3,)), keras.layers.Dense(3, activation="relu"), keras.layers.Dense(3, activation="relu"), ] ) model = keras.Sequential( [keras.Input(shape=(3,)), inner_model, keras.layers.Dense(3, activation="sigmoid"),] ) model.trainable = False # Freeze the outer model assert inner_model.trainable == False # All layers in `model` are now frozen assert inner_model.layers[0].trainable == False # `trainable` is propagated recursively The typical transfer-learning workflow This leads us to how a typical transfer learning workflow can be implemented in Keras: Instantiate a base model and load pre-trained weights into it. Freeze all layers in the base model by setting trainable = False. Create a new model on top of the output of one (or several) layers from the base model. Train your new model on your new dataset. Note that an alternative, more lightweight workflow could also be: Instantiate a base model and load pre-trained weights into it. Run your new dataset through it and record the output of one (or several) layers from the base model. This is called feature extraction. Use that output as input data for a new, smaller model. A key advantage of that second workflow is that you only run the base model once on your data, rather than once per epoch of training. So it's a lot faster & cheaper. An issue with that second workflow, though, is that it doesn't allow you to dynamically modify the input data of your new model during training, which is required when doing data augmentation, for instance. Transfer learning is typically used for tasks when your new dataset has too little data to train a full-scale model from scratch, and in such scenarios data augmentation is very important. So in what follows, we will focus on the first workflow. Here's what the first workflow looks like in Keras: First, instantiate a base model with pre-trained weights. base_model = keras.applications.Xception( weights='imagenet', # Load weights pre-trained on ImageNet. input_shape=(150, 150, 3), include_top=False) # Do not include the ImageNet classifier at the top. Then, freeze the base model. base_model.trainable = False Create a new model on top. inputs = keras.Input(shape=(150, 150, 3)) # We make sure that the base_model is running in inference mode here, # by passing `training=False`. This is important for fine-tuning, as you will # learn in a few paragraphs. x = base_model(inputs, training=False) # Convert features of shape `base_model.output_shape[1:]` to vectors x = keras.layers.GlobalAveragePooling2D()(x) # A Dense classifier with a single unit (binary classification) outputs = keras.layers.Dense(1)(x) model = keras.Model(inputs, outputs) Train the model on new data. model.compile(optimizer=keras.optimizers.Adam(), loss=keras.losses.BinaryCrossentropy(from_logits=True), metrics=[keras.metrics.BinaryAccuracy()]) model.fit(new_dataset, epochs=20, callbacks=..., validation_data=...) Fine-tuning Once your model has converged on the new data, you can try to unfreeze all or part of the base model and retrain the whole model end-to-end with a very low learning rate. This is an optional last step that can potentially give you incremental improvements. It could also potentially lead to quick overfitting -- keep that in mind. 
It is critical to only do this step after the model with frozen layers has been trained to convergence. If you mix randomly-initialized trainable layers with trainable layers that hold pre-trained features, the randomly-initialized layers will cause very large gradient updates during training, which will destroy your pre-trained features. It's also critical to use a very low learning rate at this stage, because you are training a much larger model than in the first round of training, on a dataset that is typically very small. As a result, you are at risk of overfitting very quickly if you apply large weight updates. Here, you only want to readapt the pretrained weights in an incremental way. This is how to implement fine-tuning of the whole base model: # Unfreeze the base model base_model.trainable = True # It's important to recompile your model after you make any changes # to the `trainable` attribute of any inner layer, so that your changes # are taken into account model.compile(optimizer=keras.optimizers.Adam(1e-5), # Very low learning rate loss=keras.losses.BinaryCrossentropy(from_logits=True), metrics=[keras.metrics.BinaryAccuracy()]) # Train end-to-end. Be careful to stop before you overfit! model.fit(new_dataset, epochs=10, callbacks=..., validation_data=...) Important note about compile() and trainable Calling compile() on a model is meant to "freeze" the behavior of that model. This implies that the trainable attribute values at the time the model is compiled should be preserved throughout the lifetime of that model, until compile is called again. Hence, if you change any trainable value, make sure to call compile() again on your model for your changes to be taken into account. Important notes about BatchNormalization layer Many image models contain BatchNormalization layers. That layer is a special case on every imaginable count. Here are a few things to keep in mind. BatchNormalization contains 2 non-trainable weights that get updated during training. These are the variables tracking the mean and variance of the inputs. When you set bn_layer.trainable = False, the BatchNormalization layer will run in inference mode, and will not update its mean & variance statistics. This is not the case for other layers in general, as weight trainability & inference/training modes are two orthogonal concepts. But the two are tied in the case of the BatchNormalization layer. When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training=False when calling the base model. Otherwise the updates applied to the non-trainable weights will suddenly destroy what the model has learned. You'll see this pattern in action in the end-to-end example at the end of this guide. Transfer learning & fine-tuning with a custom training loop If instead of fit(), you are using your own low-level training loop, the workflow stays essentially the same. You should be careful to only take into account the list model.trainable_weights when applying gradient updates: # Create base model base_model = keras.applications.Xception( weights='imagenet', input_shape=(150, 150, 3), include_top=False) # Freeze base model base_model.trainable = False # Create new model on top. 
inputs = keras.Input(shape=(150, 150, 3)) x = base_model(inputs, training=False) x = keras.layers.GlobalAveragePooling2D()(x) outputs = keras.layers.Dense(1)(x) model = keras.Model(inputs, outputs) loss_fn = keras.losses.BinaryCrossentropy(from_logits=True) optimizer = keras.optimizers.Adam() # Iterate over the batches of a dataset. for inputs, targets in new_dataset: # Open a GradientTape. with tf.GradientTape() as tape: # Forward pass. predictions = model(inputs) # Compute the loss value for this batch. loss_value = loss_fn(targets, predictions) # Get gradients of loss wrt the *trainable* weights. gradients = tape.gradient(loss_value, model.trainable_weights) # Update the weights of the model. optimizer.apply_gradients(zip(gradients, model.trainable_weights)) Likewise for fine-tuning. An end-to-end example: fine-tuning an image classification model on a cats vs. dogs dataset To solidify these concepts, let's walk you through a concrete end-to-end transfer learning & fine-tuning example. We will load the Xception model, pre-trained on ImageNet, and use it on the Kaggle "cats vs. dogs" classification dataset. Getting the data First, let's fetch the cats vs. dogs dataset using TFDS. If you have your own dataset, you'll probably want to use the utility tf.keras.preprocessing.image_dataset_from_directory to generate similar labeled dataset objects from a set of images on disk filed into class-specific folders. Transfer learning is most useful when working with very small datasets. To keep our dataset small, we will use 40% of the original training data (25,000 images) for training, 10% for validation, and 10% for testing. import tensorflow_datasets as tfds tfds.disable_progress_bar() train_ds, validation_ds, test_ds = tfds.load( "cats_vs_dogs", # Reserve 10% for validation and 10% for test split=["train[:40%]", "train[40%:50%]", "train[50%:60%]"], as_supervised=True, # Include labels ) print("Number of training samples: %d" % tf.data.experimental.cardinality(train_ds)) print( "Number of validation samples: %d" % tf.data.experimental.cardinality(validation_ds) ) print("Number of test samples: %d" % tf.data.experimental.cardinality(test_ds)) Number of training samples: 9305 Number of validation samples: 2326 Number of test samples: 2326 These are the first 9 images in the training dataset -- as you can see, they're all different sizes. import matplotlib.pyplot as plt plt.figure(figsize=(10, 10)) for i, (image, label) in enumerate(train_ds.take(9)): ax = plt.subplot(3, 3, i + 1) plt.imshow(image) plt.title(int(label)) plt.axis("off") png We can also see that label 1 is "dog" and label 0 is "cat". Standardizing the data Our raw images have a variety of sizes. In addition, each pixel consists of 3 integer values between 0 and 255 (RGB level values). This isn't a great fit for feeding a neural network. We need to do 2 things: Standardize to a fixed image size. We pick 150x150. Normalize pixel values between -1 and 1. We'll do this using a Normalization layer as part of the model itself. In general, it's a good practice to develop models that take raw data as input, as opposed to models that take already-preprocessed data. The reason being that, if your model expects preprocessed data, any time you export your model to use it elsewhere (in a web browser, in a mobile app), you'll need to reimplement the exact same preprocessing pipeline. This gets very tricky very quickly. So we should do the least possible amount of preprocessing before hitting the model. 
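As an aside to the data-loading note above: if your images lived on disk rather than in TFDS, a sketch along these lines (with a hypothetical directory name) would produce comparable labeled dataset objects using the image_dataset_from_directory utility mentioned earlier:

# Hypothetical sketch, not part of the original guide. Assumes a layout like
#   my_images/cats/... and my_images/dogs/... (the path is made up).
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "my_images",
    validation_split=0.2,
    subset="training",
    seed=1337,
    image_size=(150, 150),
    batch_size=32,
)
validation_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "my_images",
    validation_split=0.2,
    subset="validation",
    seed=1337,
    image_size=(150, 150),
    batch_size=32,
)

Note that image_dataset_from_directory already resizes and batches for you via its image_size and batch_size arguments; with the TFDS dataset used in this guide, we instead resize and batch explicitly in the pipeline, as shown next.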
Here, we'll do image resizing in the data pipeline (because a deep neural network can only process contiguous batches of data), and we'll do the input value scaling as part of the model, when we create it. Let's resize images to 150x150: size = (150, 150) train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y)) validation_ds = validation_ds.map(lambda x, y: (tf.image.resize(x, size), y)) test_ds = test_ds.map(lambda x, y: (tf.image.resize(x, size), y)) Besides, let's batch the data and use caching & prefetching to optimize loading speed. batch_size = 32 train_ds = train_ds.cache().batch(batch_size).prefetch(buffer_size=10) validation_ds = validation_ds.cache().batch(batch_size).prefetch(buffer_size=10) test_ds = test_ds.cache().batch(batch_size).prefetch(buffer_size=10) Using random data augmentation When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random yet realistic transformations to the training images, such as random horizontal flipping or small random rotations. This helps expose the model to different aspects of the training data while slowing down overfitting. from tensorflow import keras from tensorflow.keras import layers data_augmentation = keras.Sequential( [ layers.experimental.preprocessing.RandomFlip("horizontal"), layers.experimental.preprocessing.RandomRotation(0.1), ] ) Let's visualize what the first image of the first batch looks like after various random transformations: import numpy as np for images, labels in train_ds.take(1): plt.figure(figsize=(10, 10)) first_image = images[0] for i in range(9): ax = plt.subplot(3, 3, i + 1) augmented_image = data_augmentation( tf.expand_dims(first_image, 0), training=True ) plt.imshow(augmented_image[0].numpy().astype("int32")) plt.title(int(labels[i])) plt.axis("off") png Build a model Now let's build a model that follows the blueprint we've explained earlier. Note that: We add a Normalization layer to scale input values (initially in the [0, 255] range) to the [-1, 1] range. We add a Dropout layer before the classification layer, for regularization. We make sure to pass training=False when calling the base model, so that it runs in inference mode, so that batchnorm statistics don't get updated even after we unfreeze the base model for fine-tuning. base_model = keras.applications.Xception( weights="imagenet", # Load weights pre-trained on ImageNet. input_shape=(150, 150, 3), include_top=False, ) # Do not include the ImageNet classifier at the top. # Freeze the base_model base_model.trainable = False # Create new model on top inputs = keras.Input(shape=(150, 150, 3)) x = data_augmentation(inputs) # Apply random data augmentation # Pre-trained Xception weights require that input be normalized # from (0, 255) to a range (-1., +1.), the normalization layer # does the following, outputs = (inputs - mean) / sqrt(var) norm_layer = keras.layers.experimental.preprocessing.Normalization() mean = np.array([127.5] * 3) var = mean ** 2 # Scale inputs to [-1, +1] x = norm_layer(x) norm_layer.set_weights([mean, var]) # The base model contains batchnorm layers. We want to keep them in inference mode # when we unfreeze the base model for fine-tuning, so we make sure that the # base_model is running in inference mode here. 
x = base_model(x, training=False) x = keras.layers.GlobalAveragePooling2D()(x) x = keras.layers.Dropout(0.2)(x) # Regularize with dropout outputs = keras.layers.Dense(1)(x) model = keras.Model(inputs, outputs) model.summary() Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_5 (InputLayer) [(None, 150, 150, 3)] 0 _________________________________________________________________ sequential_3 (Sequential) (None, 150, 150, 3) 0 _________________________________________________________________ normalization (Normalization (None, 150, 150, 3) 7 _________________________________________________________________ xception (Model) (None, 5, 5, 2048) 20861480 _________________________________________________________________ global_average_pooling2d (Gl (None, 2048) 0 _________________________________________________________________ dropout (Dropout) (None, 2048) 0 _________________________________________________________________ dense_7 (Dense) (None, 1) 2049 ================================================================= Total params: 20,863,536 Trainable params: 2,049 Non-trainable params: 20,861,487 _________________________________________________________________ Train the top layer model.compile( optimizer=keras.optimizers.Adam(), loss=keras.losses.BinaryCrossentropy(from_logits=True), metrics=[keras.metrics.BinaryAccuracy()], ) epochs = 20 model.fit(train_ds, epochs=epochs, validation_data=validation_ds) Epoch 1/20 291/291 [==============================] - 24s 83ms/step - loss: 0.1639 - binary_accuracy: 0.9276 - val_loss: 0.0883 - val_binary_accuracy: 0.9652 Epoch 2/20 291/291 [==============================] - 22s 76ms/step - loss: 0.1202 - binary_accuracy: 0.9491 - val_loss: 0.0855 - val_binary_accuracy: 0.9686 Epoch 3/20 291/291 [==============================] - 23s 80ms/step - loss: 0.1076 - binary_accuracy: 0.9546 - val_loss: 0.0802 - val_binary_accuracy: 0.9682 Epoch 4/20 291/291 [==============================] - 23s 80ms/step - loss: 0.1127 - binary_accuracy: 0.9539 - val_loss: 0.0798 - val_binary_accuracy: 0.9682 Epoch 5/20 291/291 [==============================] - 23s 78ms/step - loss: 0.1072 - binary_accuracy: 0.9558 - val_loss: 0.0807 - val_binary_accuracy: 0.9695 Epoch 6/20 291/291 [==============================] - 23s 79ms/step - loss: 0.1073 - binary_accuracy: 0.9565 - val_loss: 0.0746 - val_binary_accuracy: 0.9733 Epoch 7/20 291/291 [==============================] - 23s 79ms/step - loss: 0.1037 - binary_accuracy: 0.9562 - val_loss: 0.0738 - val_binary_accuracy: 0.9712 Epoch 8/20 291/291 [==============================] - 23s 79ms/step - loss: 0.1061 - binary_accuracy: 0.9580 - val_loss: 0.0764 - val_binary_accuracy: 0.9738 Epoch 9/20 291/291 [==============================] - 23s 78ms/step - loss: 0.0959 - binary_accuracy: 0.9612 - val_loss: 0.0823 - val_binary_accuracy: 0.9673 Epoch 10/20 291/291 [==============================] - 23s 79ms/step - loss: 0.0956 - binary_accuracy: 0.9600 - val_loss: 0.0736 - val_binary_accuracy: 0.9725 Epoch 11/20 291/291 [==============================] - 23s 79ms/step - loss: 0.0944 - binary_accuracy: 0.9603 - val_loss: 0.0781 - val_binary_accuracy: 0.9716 Epoch 12/20 291/291 [==============================] - 23s 79ms/step - loss: 0.0960 - binary_accuracy: 0.9615 - val_loss: 0.0720 - val_binary_accuracy: 0.9725 Epoch 13/20 291/291 [==============================] - 23s 79ms/step - loss: 0.0987 - 
binary_accuracy: 0.9614 - val_loss: 0.0791 - val_binary_accuracy: 0.9708 Epoch 14/20 291/291 [==============================] - 23s 79ms/step - loss: 0.0930 - binary_accuracy: 0.9636 - val_loss: 0.0780 - val_binary_accuracy: 0.9690 Epoch 15/20 291/291 [==============================] - 23s 78ms/step - loss: 0.0954 - binary_accuracy: 0.9624 - val_loss: 0.0772 - val_binary_accuracy: 0.9678 Epoch 16/20 291/291 [==============================] - 23s 78ms/step - loss: 0.0963 - binary_accuracy: 0.9598 - val_loss: 0.0781 - val_binary_accuracy: 0.9695 Epoch 17/20 291/291 [==============================] - 23s 78ms/step - loss: 0.1006 - binary_accuracy: 0.9585 - val_loss: 0.0832 - val_binary_accuracy: 0.9699 Epoch 18/20 291/291 [==============================] - 23s 78ms/step - loss: 0.0942 - binary_accuracy: 0.9615 - val_loss: 0.0761 - val_binary_accuracy: 0.9703 Epoch 19/20 291/291 [==============================] - 23s 79ms/step - loss: 0.0950 - binary_accuracy: 0.9613 - val_loss: 0.0817 - val_binary_accuracy: 0.9690 Epoch 20/20 291/291 [==============================] - 23s 79ms/step - loss: 0.0906 - binary_accuracy: 0.9624 - val_loss: 0.0755 - val_binary_accuracy: 0.9712 Do a round of fine-tuning of the entire model Finally, let's unfreeze the base model and train the entire model end-to-end with a low learning rate. Importantly, although the base model becomes trainable, it is still running in inference mode since we passed training=False when calling it when we built the model. This means that the batch normalization layers inside won't update their batch statistics. If they did, they would wreak havoc on the representations learned by the model so far. # Unfreeze the base_model. Note that it keeps running in inference mode # since we passed `training=False` when calling it. This means that # the batchnorm layers will not update their batch statistics. # This prevents the batchnorm layers from undoing all the training # we've done so far. 
base_model.trainable = True model.summary() model.compile( optimizer=keras.optimizers.Adam(1e-5), # Low learning rate loss=keras.losses.BinaryCrossentropy(from_logits=True), metrics=[keras.metrics.BinaryAccuracy()], ) epochs = 10 model.fit(train_ds, epochs=epochs, validation_data=validation_ds) Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_5 (InputLayer) [(None, 150, 150, 3)] 0 _________________________________________________________________ sequential_3 (Sequential) (None, 150, 150, 3) 0 _________________________________________________________________ normalization (Normalization (None, 150, 150, 3) 7 _________________________________________________________________ xception (Model) (None, 5, 5, 2048) 20861480 _________________________________________________________________ global_average_pooling2d (Gl (None, 2048) 0 _________________________________________________________________ dropout (Dropout) (None, 2048) 0 _________________________________________________________________ dense_7 (Dense) (None, 1) 2049 ================================================================= Total params: 20,863,536 Trainable params: 20,809,001 Non-trainable params: 54,535 _________________________________________________________________ Epoch 1/10 291/291 [==============================] - 92s 318ms/step - loss: 0.0766 - binary_accuracy: 0.9710 - val_loss: 0.0571 - val_binary_accuracy: 0.9772 Epoch 2/10 291/291 [==============================] - 90s 308ms/step - loss: 0.0534 - binary_accuracy: 0.9800 - val_loss: 0.0471 - val_binary_accuracy: 0.9807 Epoch 3/10 291/291 [==============================] - 90s 308ms/step - loss: 0.0491 - binary_accuracy: 0.9799 - val_loss: 0.0411 - val_binary_accuracy: 0.9815 Epoch 4/10 291/291 [==============================] - 90s 308ms/step - loss: 0.0349 - binary_accuracy: 0.9868 - val_loss: 0.0438 - val_binary_accuracy: 0.9832 Epoch 5/10 291/291 [==============================] - 89s 307ms/step - loss: 0.0302 - binary_accuracy: 0.9881 - val_loss: 0.0440 - val_binary_accuracy: 0.9837 Epoch 6/10 291/291 [==============================] - 90s 308ms/step - loss: 0.0290 - binary_accuracy: 0.9890 - val_loss: 0.0445 - val_binary_accuracy: 0.9832 Epoch 7/10 291/291 [==============================] - 90s 310ms/step - loss: 0.0209 - binary_accuracy: 0.9920 - val_loss: 0.0527 - val_binary_accuracy: 0.9811 Epoch 8/10 291/291 [==============================] - 91s 311ms/step - loss: 0.0162 - binary_accuracy: 0.9940 - val_loss: 0.0510 - val_binary_accuracy: 0.9828 Epoch 9/10 291/291 [==============================] - 91s 311ms/step - loss: 0.0199 - binary_accuracy: 0.9933 - val_loss: 0.0470 - val_binary_accuracy: 0.9867 Epoch 10/10 291/291 [==============================] - 90s 308ms/step - loss: 0.0128 - binary_accuracy: 0.9953 - val_loss: 0.0471 - val_binary_accuracy: 0.9845 After 10 epochs, fine-tuning gains us a nice improvement here. Making new layers and models via subclassing Author: fchollet Date created: 2019/03/01 Last modified: 2020/04/13 Description: Complete guide to writing Layer and Model objects from scratch. View in Colab • GitHub source Setup import tensorflow as tf from tensorflow import keras The Layer class: the combination of state (weights) and some computation One of the central abstractions in Keras is the Layer class. 
A layer encapsulates both a state (the layer's "weights") and a transformation from inputs to outputs (a "call", the layer's forward pass). Here's a densely-connected layer. It has a state: the variables w and b. class Linear(keras.layers.Layer): def __init__(self, units=32, input_dim=32): super(Linear, self).__init__() w_init = tf.random_normal_initializer() self.w = tf.Variable( initial_value=w_init(shape=(input_dim, units), dtype="float32"), trainable=True, ) b_init = tf.zeros_initializer() self.b = tf.Variable( initial_value=b_init(shape=(units,), dtype="float32"), trainable=True ) def call(self, inputs): return tf.matmul(inputs, self.w) + self.b You would use a layer by calling it on some tensor input(s), much like a Python function. x = tf.ones((2, 2)) linear_layer = Linear(4, 2) y = linear_layer(x) print(y) tf.Tensor( [[ 0.01013444 -0.01070027 -0.01888977 0.05208318] [ 0.01013444 -0.01070027 -0.01888977 0.05208318]], shape=(2, 4), dtype=float32) Note that the weights w and b are automatically tracked by the layer upon being set as layer attributes: assert linear_layer.weights == [linear_layer.w, linear_layer.b] Note you also have access to a quicker shortcut for adding weight to a layer: the add_weight() method: class Linear(keras.layers.Layer): def __init__(self, units=32, input_dim=32): super(Linear, self).__init__() self.w = self.add_weight( shape=(input_dim, units), initializer="random_normal", trainable=True ) self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True) def call(self, inputs): return tf.matmul(inputs, self.w) + self.b x = tf.ones((2, 2)) linear_layer = Linear(4, 2) y = linear_layer(x) print(y) tf.Tensor( [[-0.01331179 -0.00605625 -0.01042787 0.17160884] [-0.01331179 -0.00605625 -0.01042787 0.17160884]], shape=(2, 4), dtype=float32) Layers can have non-trainable weights Besides trainable weights, you can add non-trainable weights to a layer as well. Such weights are meant not to be taken into account during backpropagation, when you are training the layer. Here's how to add and use a non-trainable weight: class ComputeSum(keras.layers.Layer): def __init__(self, input_dim): super(ComputeSum, self).__init__() self.total = tf.Variable(initial_value=tf.zeros((input_dim,)), trainable=False) def call(self, inputs): self.total.assign_add(tf.reduce_sum(inputs, axis=0)) return self.total x = tf.ones((2, 2)) my_sum = ComputeSum(2) y = my_sum(x) print(y.numpy()) y = my_sum(x) print(y.numpy()) [2. 2.] [4. 4.] 
It's part of layer.weights, but it gets categorized as a non-trainable weight: print("weights:", len(my_sum.weights)) print("non-trainable weights:", len(my_sum.non_trainable_weights)) # It's not included in the trainable weights: print("trainable_weights:", my_sum.trainable_weights) weights: 1 non-trainable weights: 1 trainable_weights: [] Best practice: deferring weight creation until the shape of the inputs is known Our Linear layer above took an input_dim argument that was used to compute the shape of the weights w and b in __init__(): class Linear(keras.layers.Layer): def __init__(self, units=32, input_dim=32): super(Linear, self).__init__() self.w = self.add_weight( shape=(input_dim, units), initializer="random_normal", trainable=True ) self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True) def call(self, inputs): return tf.matmul(inputs, self.w) + self.b In many cases, you may not know in advance the size of your inputs, and you would like to lazily create weights when that value becomes known, some time after instantiating the layer. In the Keras API, we recommend creating layer weights in the build(self, input_shape) method of your layer. Like this: class Linear(keras.layers.Layer): def __init__(self, units=32): super(Linear, self).__init__() self.units = units def build(self, input_shape): self.w = self.add_weight( shape=(input_shape[-1], self.units), initializer="random_normal", trainable=True, ) self.b = self.add_weight( shape=(self.units,), initializer="random_normal", trainable=True ) def call(self, inputs): return tf.matmul(inputs, self.w) + self.b The __call__() method of your layer will automatically run build the first time it is called. You now have a layer that's lazy and thus easier to use: # At instantiation, we don't know on what inputs this is going to get called linear_layer = Linear(32) # The layer's weights are created dynamically the first time the layer is called y = linear_layer(x) Layers are recursively composable If you assign a Layer instance as attribute of another Layer, the outer layer will start tracking the weights of the inner layer. We recommend creating such sublayers in the __init__() method (since the sublayers will typically have a build method, they will be built when the outer layer gets built). # Let's assume we are reusing the Linear class # with a `build` method that we defined above. class MLPBlock(keras.layers.Layer): def __init__(self): super(MLPBlock, self).__init__() self.linear_1 = Linear(32) self.linear_2 = Linear(32) self.linear_3 = Linear(1) def call(self, inputs): x = self.linear_1(inputs) x = tf.nn.relu(x) x = self.linear_2(x) x = tf.nn.relu(x) return self.linear_3(x) mlp = MLPBlock() y = mlp(tf.ones(shape=(3, 64))) # The first call to the `mlp` will create the weights print("weights:", len(mlp.weights)) print("trainable weights:", len(mlp.trainable_weights)) weights: 6 trainable weights: 6 The add_loss() method When writing the call() method of a layer, you can create loss tensors that you will want to use later, when writing your training loop. This is doable by calling self.add_loss(value): # A layer that creates an activity regularization loss class ActivityRegularizationLayer(keras.layers.Layer): def __init__(self, rate=1e-2): super(ActivityRegularizationLayer, self).__init__() self.rate = rate def call(self, inputs): self.add_loss(self.rate * tf.reduce_sum(inputs)) return inputs These losses (including those created by any inner layer) can be retrieved via layer.losses. 
This property is reset at the start of every __call__() to the top-level layer, so that layer.losses always contains the loss values created during the last forward pass. class OuterLayer(keras.layers.Layer): def __init__(self): super(OuterLayer, self).__init__() self.activity_reg = ActivityRegularizationLayer(1e-2) def call(self, inputs): return self.activity_reg(inputs) layer = OuterLayer() assert len(layer.losses) == 0 # No losses yet since the layer has never been called _ = layer(tf.zeros(1, 1)) assert len(layer.losses) == 1 # We created one loss value # `layer.losses` gets reset at the start of each __call__ _ = layer(tf.zeros(1, 1)) assert len(layer.losses) == 1 # This is the loss created during the call above In addition, the loss property also contains regularization losses created for the weights of any inner layer: class OuterLayerWithKernelRegularizer(keras.layers.Layer): def __init__(self): super(OuterLayerWithKernelRegularizer, self).__init__() self.dense = keras.layers.Dense( 32, kernel_regularizer=tf.keras.regularizers.l2(1e-3) ) def call(self, inputs): return self.dense(inputs) layer = OuterLayerWithKernelRegularizer() _ = layer(tf.zeros((1, 1))) # This is `1e-3 * sum(layer.dense.kernel ** 2)`, # created by the `kernel_regularizer` above. print(layer.losses) [] These losses are meant to be taken into account when writing training loops, like this: # Instantiate an optimizer. optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3) loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True) # Iterate over the batches of a dataset. for x_batch_train, y_batch_train in train_dataset: with tf.GradientTape() as tape: logits = layer(x_batch_train) # Logits for this minibatch # Loss value for this minibatch loss_value = loss_fn(y_batch_train, logits) # Add extra losses created during this forward pass: loss_value += sum(model.losses) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) For a detailed guide about writing training loops, see the guide to writing a training loop from scratch. These losses also work seamlessly with fit() (they get automatically summed and added to the main loss, if any): import numpy as np inputs = keras.Input(shape=(3,)) outputs = ActivityRegularizationLayer()(inputs) model = keras.Model(inputs, outputs) # If there is a loss passed in `compile`, the regularization # losses get added to it model.compile(optimizer="adam", loss="mse") model.fit(np.random.random((2, 3)), np.random.random((2, 3))) # It's also possible not to pass any loss in `compile`, # since the model already has a loss to minimize, via the `add_loss` # call during the forward pass! model.compile(optimizer="adam") model.fit(np.random.random((2, 3)), np.random.random((2, 3))) 1/1 [==============================] - 0s 1ms/step - loss: 0.1555 1/1 [==============================] - 0s 927us/step - loss: 0.0336 The add_metric() method Similarly to add_loss(), layers also have an add_metric() method for tracking the moving average of a quantity during training. Consider the following layer: a "logistic endpoint" layer. It takes as inputs predictions & targets, it computes a loss which it tracks via add_loss(), and it computes an accuracy scalar, which it tracks via add_metric(). 
class LogisticEndpoint(keras.layers.Layer): def __init__(self, name=None): super(LogisticEndpoint, self).__init__(name=name) self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True) self.accuracy_fn = keras.metrics.BinaryAccuracy() def call(self, targets, logits, sample_weights=None): # Compute the training-time loss value and add it # to the layer using `self.add_loss()`. loss = self.loss_fn(targets, logits, sample_weights) self.add_loss(loss) # Log accuracy as a metric and add it # to the layer using `self.add_metric()`. acc = self.accuracy_fn(targets, logits, sample_weights) self.add_metric(acc, name="accuracy") # Return the inference-time prediction tensor (for `.predict()`). return tf.nn.softmax(logits) Metrics tracked in this way are accessible via layer.metrics: layer = LogisticEndpoint() targets = tf.ones((2, 2)) logits = tf.ones((2, 2)) y = layer(targets, logits) print("layer.metrics:", layer.metrics) print("current accuracy value:", float(layer.metrics[0].result())) layer.metrics: [] current accuracy value: 1.0 Just like for add_loss(), these metrics are tracked by fit(): inputs = keras.Input(shape=(3,), name="inputs") targets = keras.Input(shape=(10,), name="targets") logits = keras.layers.Dense(10)(inputs) predictions = LogisticEndpoint(name="predictions")(logits, targets) model = keras.Model(inputs=[inputs, targets], outputs=predictions) model.compile(optimizer="adam") data = { "inputs": np.random.random((3, 3)), "targets": np.random.random((3, 10)), } model.fit(data) 1/1 [==============================] - 0s 999us/step - loss: 1.0366 - binary_accuracy: 0.0000e+00 You can optionally enable serialization on your layers If you need your custom layers to be serializable as part of a Functional model, you can optionally implement a get_config() method: class Linear(keras.layers.Layer): def __init__(self, units=32): super(Linear, self).__init__() self.units = units def build(self, input_shape): self.w = self.add_weight( shape=(input_shape[-1], self.units), initializer="random_normal", trainable=True, ) self.b = self.add_weight( shape=(self.units,), initializer="random_normal", trainable=True ) def call(self, inputs): return tf.matmul(inputs, self.w) + self.b def get_config(self): return {"units": self.units} # Now you can recreate the layer from its config: layer = Linear(64) config = layer.get_config() print(config) new_layer = Linear.from_config(config) {'units': 64} Note that the __init__() method of the base Layer class takes some keyword arguments, in particular a name and a dtype. It's good practice to pass these arguments to the parent class in __init__() and to include them in the layer config: class Linear(keras.layers.Layer): def __init__(self, units=32, **kwargs): super(Linear, self).__init__(**kwargs) self.units = units def build(self, input_shape): self.w = self.add_weight( shape=(input_shape[-1], self.units), initializer="random_normal", trainable=True, ) self.b = self.add_weight( shape=(self.units,), initializer="random_normal", trainable=True ) def call(self, inputs): return tf.matmul(inputs, self.w) + self.b def get_config(self): config = super(Linear, self).get_config() config.update({"units": self.units}) return config layer = Linear(64) config = layer.get_config() print(config) new_layer = Linear.from_config(config) {'name': 'linear_8', 'trainable': True, 'dtype': 'float32', 'units': 64} If you need more flexibility when deserializing the layer from its config, you can also override the from_config() class method. 
This is the base implementation of from_config(): def from_config(cls, config): return cls(**config) To learn more about serialization and saving, see the complete guide to saving and serializing models. Privileged training argument in the call() method Some layers, in particular the BatchNormalization layer and the Dropout layer, have different behaviors during training and inference. For such layers, it is standard practice to expose a training (boolean) argument in the call() method. By exposing this argument in call(), you enable the built-in training and evaluation loops (e.g. fit()) to correctly use the layer in training and inference. class CustomDropout(keras.layers.Layer): def __init__(self, rate, **kwargs): super(CustomDropout, self).__init__(**kwargs) self.rate = rate def call(self, inputs, training=None): if training: return tf.nn.dropout(inputs, rate=self.rate) return inputs Privileged mask argument in the call() method The other privileged argument supported by call() is the mask argument. You will find it in all Keras RNN layers. A mask is a boolean tensor (one boolean value per timestep in the input) used to skip certain input timesteps when processing timeseries data. Keras will automatically pass the correct mask argument to __call__() for layers that support it, when a mask is generated by a prior layer. Mask-generating layers are the Embedding layer configured with mask_zero=True, and the Masking layer. To learn more about masking and how to write masking-enabled layers, please check out the guide "understanding padding and masking". The Model class In general, you will use the Layer class to define inner computation blocks, and will use the Model class to define the outer model -- the object you will train. For instance, in a ResNet50 model, you would have several ResNet blocks subclassing Layer, and a single Model encompassing the entire ResNet50 network. The Model class has the same API as Layer, with the following differences: It exposes built-in training, evaluation, and prediction loops (model.fit(), model.evaluate(), model.predict()). It exposes the list of its inner layers, via the model.layers property. It exposes saving and serialization APIs (save(), save_weights()...) Effectively, the Layer class corresponds to what we refer to in the literature as a "layer" (as in "convolution layer" or "recurrent layer") or as a "block" (as in "ResNet block" or "Inception block"). Meanwhile, the Model class corresponds to what is referred to in the literature as a "model" (as in "deep learning model") or as a "network" (as in "deep neural network"). So if you're wondering, "should I use the Layer class or the Model class?", ask yourself: will I need to call fit() on it? Will I need to call save() on it? If so, go with Model. If not (either because your class is just a block in a bigger system, or because you are writing training & saving code yourself), use Layer. For instance, we could take our mini-resnet example above, and use it to build a Model that we could train with fit(), and that we could save with save_weights(): class ResNet(tf.keras.Model): def __init__(self, num_classes=1000): super(ResNet, self).__init__() self.block_1 = ResNetBlock() self.block_2 = ResNetBlock() self.global_pool = layers.GlobalAveragePooling2D() self.classifier = Dense(num_classes) def call(self, inputs): x = self.block_1(inputs) x = self.block_2(x) x = self.global_pool(x) return self.classifier(x) resnet = ResNet() dataset = ... 
resnet.fit(dataset, epochs=10) resnet.save(filepath) Putting it all together: an end-to-end example Here's what you've learned so far: A Layer encapsulate a state (created in __init__() or build()) and some computation (defined in call()). Layers can be recursively nested to create new, bigger computation blocks. Layers can create and track losses (typically regularization losses) as well as metrics, via add_loss() and add_metric() The outer container, the thing you want to train, is a Model. A Model is just like a Layer, but with added training and serialization utilities. Let's put all of these things together into an end-to-end example: we're going to implement a Variational AutoEncoder (VAE). We'll train it on MNIST digits. Our VAE will be a subclass of Model, built as a nested composition of layers that subclass Layer. It will feature a regularization loss (KL divergence). from tensorflow.keras import layers class Sampling(layers.Layer): """Uses (z_mean, z_log_var) to sample z, the vector encoding a digit.""" def call(self, inputs): z_mean, z_log_var = inputs batch = tf.shape(z_mean)[0] dim = tf.shape(z_mean)[1] epsilon = tf.keras.backend.random_normal(shape=(batch, dim)) return z_mean + tf.exp(0.5 * z_log_var) * epsilon class Encoder(layers.Layer): """Maps MNIST digits to a triplet (z_mean, z_log_var, z).""" def __init__(self, latent_dim=32, intermediate_dim=64, name="encoder", **kwargs): super(Encoder, self).__init__(name=name, **kwargs) self.dense_proj = layers.Dense(intermediate_dim, activation="relu") self.dense_mean = layers.Dense(latent_dim) self.dense_log_var = layers.Dense(latent_dim) self.sampling = Sampling() def call(self, inputs): x = self.dense_proj(inputs) z_mean = self.dense_mean(x) z_log_var = self.dense_log_var(x) z = self.sampling((z_mean, z_log_var)) return z_mean, z_log_var, z class Decoder(layers.Layer): """Converts z, the encoded digit vector, back into a readable digit.""" def __init__(self, original_dim, intermediate_dim=64, name="decoder", **kwargs): super(Decoder, self).__init__(name=name, **kwargs) self.dense_proj = layers.Dense(intermediate_dim, activation="relu") self.dense_output = layers.Dense(original_dim, activation="sigmoid") def call(self, inputs): x = self.dense_proj(inputs) return self.dense_output(x) class VariationalAutoEncoder(keras.Model): """Combines the encoder and decoder into an end-to-end model for training.""" def __init__( self, original_dim, intermediate_dim=64, latent_dim=32, name="autoencoder", **kwargs ): super(VariationalAutoEncoder, self).__init__(name=name, **kwargs) self.original_dim = original_dim self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim) self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim) def call(self, inputs): z_mean, z_log_var, z = self.encoder(inputs) reconstructed = self.decoder(z) # Add KL divergence regularization loss. 
kl_loss = -0.5 * tf.reduce_mean( z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1 ) self.add_loss(kl_loss) return reconstructed Let's write a simple training loop on MNIST: original_dim = 784 vae = VariationalAutoEncoder(original_dim, 64, 32) optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3) mse_loss_fn = tf.keras.losses.MeanSquaredError() loss_metric = tf.keras.metrics.Mean() (x_train, _), _ = tf.keras.datasets.mnist.load_data() x_train = x_train.reshape(60000, 784).astype("float32") / 255 train_dataset = tf.data.Dataset.from_tensor_slices(x_train) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) epochs = 2 # Iterate over epochs. for epoch in range(epochs): print("Start of epoch %d" % (epoch,)) # Iterate over the batches of the dataset. for step, x_batch_train in enumerate(train_dataset): with tf.GradientTape() as tape: reconstructed = vae(x_batch_train) # Compute reconstruction loss loss = mse_loss_fn(x_batch_train, reconstructed) loss += sum(vae.losses) # Add KLD regularization loss grads = tape.gradient(loss, vae.trainable_weights) optimizer.apply_gradients(zip(grads, vae.trainable_weights)) loss_metric(loss) if step % 100 == 0: print("step %d: mean loss = %.4f" % (step, loss_metric.result())) Start of epoch 0 step 0: mean loss = 0.3577 step 100: mean loss = 0.1258 step 200: mean loss = 0.0994 step 300: mean loss = 0.0893 step 400: mean loss = 0.0843 step 500: mean loss = 0.0809 step 600: mean loss = 0.0788 step 700: mean loss = 0.0772 step 800: mean loss = 0.0760 step 900: mean loss = 0.0750 Start of epoch 1 step 0: mean loss = 0.0747 step 100: mean loss = 0.0740 step 200: mean loss = 0.0735 step 300: mean loss = 0.0730 step 400: mean loss = 0.0727 step 500: mean loss = 0.0723 step 600: mean loss = 0.0720 step 700: mean loss = 0.0717 step 800: mean loss = 0.0715 step 900: mean loss = 0.0712 Note that since the VAE is subclassing Model, it features built-in training loops. So you could also have trained it like this: vae = VariationalAutoEncoder(784, 64, 32) optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3) vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError()) vae.fit(x_train, x_train, epochs=2, batch_size=64) Epoch 1/2 938/938 [==============================] - 1s 1ms/step - loss: 0.0745 Epoch 2/2 938/938 [==============================] - 1s 1ms/step - loss: 0.0676 Beyond object-oriented development: the Functional API Was this example too much object-oriented development for you? You can also build models using the Functional API. Importantly, choosing one style or another does not prevent you from leveraging components written in the other style: you can always mix-and-match. For instance, the Functional API example below reuses the same Sampling layer we defined in the example above: original_dim = 784 intermediate_dim = 64 latent_dim = 32 # Define encoder model. original_inputs = tf.keras.Input(shape=(original_dim,), name="encoder_input") x = layers.Dense(intermediate_dim, activation="relu")(original_inputs) z_mean = layers.Dense(latent_dim, name="z_mean")(x) z_log_var = layers.Dense(latent_dim, name="z_log_var")(x) z = Sampling()((z_mean, z_log_var)) encoder = tf.keras.Model(inputs=original_inputs, outputs=z, name="encoder") # Define decoder model. 
latent_inputs = tf.keras.Input(shape=(latent_dim,), name="z_sampling") x = layers.Dense(intermediate_dim, activation="relu")(latent_inputs) outputs = layers.Dense(original_dim, activation="sigmoid")(x) decoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name="decoder") # Define VAE model. outputs = decoder(z) vae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name="vae") # Add KL divergence regularization loss. kl_loss = -0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1) vae.add_loss(kl_loss) # Train. optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3) vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError()) vae.fit(x_train, x_train, epochs=3, batch_size=64) Epoch 1/3 938/938 [==============================] - 1s 1ms/step - loss: 0.0747 Epoch 2/3 938/938 [==============================] - 1s 1ms/step - loss: 0.0676 Epoch 3/3 938/938 [==============================] - 1s 1ms/step - loss: 0.0676 For more information, make sure to read the Functional API guide. Working with RNNs Authors: Scott Zhu, Francois Chollet Date created: 2019/07/08 Last modified: 2020/04/14 Description: Complete guide to using & customizing RNN layers. View in Colab • GitHub source Introduction Recurrent neural networks (RNN) are a class of neural networks that is powerful for modeling sequence data such as time series or natural language. Schematically, a RNN layer uses a for loop to iterate over the timesteps of a sequence, while maintaining an internal state that encodes information about the timesteps it has seen so far. The Keras RNN API is designed with a focus on: Ease of use: the built-in keras.layers.RNN, keras.layers.LSTM, keras.layers.GRU layers enable you to quickly build recurrent models without having to make difficult configuration choices. Ease of customization: You can also define your own RNN cell layer (the inner part of the for loop) with custom behavior, and use it with the generic keras.layers.RNN layer (the for loop itself). This allows you to quickly prototype different research ideas in a flexible way with minimal code. Setup import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers Built-in RNN layers: a simple example There are three built-in RNN layers in Keras: keras.layers.SimpleRNN, a fully-connected RNN where the output from the previous timestep is fed to the next timestep. keras.layers.GRU, first proposed in Cho et al., 2014. keras.layers.LSTM, first proposed in Hochreiter & Schmidhuber, 1997. In early 2015, Keras had the first reusable open-source Python implementations of LSTM and GRU. Here is a simple example of a Sequential model that processes sequences of integers, embeds each integer into a 64-dimensional vector, then processes the sequence of vectors using a LSTM layer. model = keras.Sequential() # Add an Embedding layer expecting input vocab of size 1000, and # output embedding dimension of size 64. model.add(layers.Embedding(input_dim=1000, output_dim=64)) # Add a LSTM layer with 128 internal units. model.add(layers.LSTM(128)) # Add a Dense layer with 10 units. 
model.add(layers.Dense(10)) model.summary() Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding (Embedding) (None, None, 64) 64000 _________________________________________________________________ lstm (LSTM) (None, 128) 98816 _________________________________________________________________ dense (Dense) (None, 10) 1290 ================================================================= Total params: 164,106 Trainable params: 164,106 Non-trainable params: 0 _________________________________________________________________ Built-in RNNs support a number of useful features: Recurrent dropout, via the dropout and recurrent_dropout arguments Ability to process an input sequence in reverse, via the go_backwards argument Loop unrolling (which can lead to a large speedup when processing short sequences on CPU), via the unroll argument ...and more. For more information, see the RNN API documentation. Outputs and states By default, the output of a RNN layer contains a single vector per sample. This vector is the RNN cell output corresponding to the last timestep, containing information about the entire input sequence. The shape of this output is (batch_size, units) where units corresponds to the units argument passed to the layer's constructor. A RNN layer can also return the entire sequence of outputs for each sample (one vector per timestep per sample), if you set return_sequences=True. The shape of this output is (batch_size, timesteps, units). model = keras.Sequential() model.add(layers.Embedding(input_dim=1000, output_dim=64)) # The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256) model.add(layers.GRU(256, return_sequences=True)) # The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128) model.add(layers.SimpleRNN(128)) model.add(layers.Dense(10)) model.summary() Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding_1 (Embedding) (None, None, 64) 64000 _________________________________________________________________ gru (GRU) (None, None, 256) 247296 _________________________________________________________________ simple_rnn (SimpleRNN) (None, 128) 49280 _________________________________________________________________ dense_1 (Dense) (None, 10) 1290 ================================================================= Total params: 361,866 Trainable params: 361,866 Non-trainable params: 0 _________________________________________________________________ In addition, a RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or to initialize another RNN. This setting is commonly used in the encoder-decoder sequence-to-sequence model, where the encoder final state is used as the initial state of the decoder. To configure a RNN layer to return its internal state, set the return_state parameter to True when creating the layer. Note that LSTM has 2 state tensors, but GRU only has one. To configure the initial state of the layer, just call the layer with additional keyword argument initial_state. Note that the shape of the state needs to match the unit size of the layer, like in the example below. 
encoder_vocab = 1000 decoder_vocab = 2000 encoder_input = layers.Input(shape=(None,)) encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)( encoder_input ) # Return states in addition to output output, state_h, state_c = layers.LSTM(64, return_state=True, name="encoder")( encoder_embedded ) encoder_state = [state_h, state_c] decoder_input = layers.Input(shape=(None,)) decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)( decoder_input ) # Pass the 2 states to a new LSTM layer, as initial state decoder_output = layers.LSTM(64, name="decoder")( decoder_embedded, initial_state=encoder_state ) output = layers.Dense(10)(decoder_output) model = keras.Model([encoder_input, decoder_input], output) model.summary() Model: "model" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, None)] 0 __________________________________________________________________________________________________ input_2 (InputLayer) [(None, None)] 0 __________________________________________________________________________________________________ embedding_2 (Embedding) (None, None, 64) 64000 input_1[0][0] __________________________________________________________________________________________________ embedding_3 (Embedding) (None, None, 64) 128000 input_2[0][0] __________________________________________________________________________________________________ encoder (LSTM) [(None, 64), (None, 33024 embedding_2[0][0] __________________________________________________________________________________________________ decoder (LSTM) (None, 64) 33024 embedding_3[0][0] encoder[0][1] encoder[0][2] __________________________________________________________________________________________________ dense_2 (Dense) (None, 10) 650 decoder[0][0] ================================================================================================== Total params: 258,698 Trainable params: 258,698 Non-trainable params: 0 __________________________________________________________________________________________________ RNN layers and RNN cells In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which processes whole batches of input sequences, the RNN cell only processes a single timestep. The cell is the inside of the for loop of a RNN layer. Wrapping a cell inside a keras.layers.RNN layer gives you a layer capable of processing batches of sequences, e.g. RNN(LSTMCell(10)). Mathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in a RNN layer. However using the built-in GRU and LSTM layers enable the use of CuDNN and you may see better performance. There are three built-in RNN cells, each of them corresponding to the matching RNN layer. keras.layers.SimpleRNNCell corresponds to the SimpleRNN layer. keras.layers.GRUCell corresponds to the GRU layer. keras.layers.LSTMCell corresponds to the LSTM layer. The cell abstraction, together with the generic keras.layers.RNN class, make it very easy to implement custom RNN architectures for your research. Cross-batch statefulness When processing very long sequences (possibly infinite), you may want to use the pattern of cross-batch statefulness. 
Normally, the internal state of a RNN layer is reset every time it sees a new batch (i.e. every sample seen by the layer is assumed to be independent of the past). The layer will only maintain a state while processing a given sample. If you have very long sequences though, it is useful to break them into shorter sequences, and to feed these shorter sequences sequentially into a RNN layer without resetting the layer's state. That way, the layer can retain information about the entirety of the sequence, even though it's only seeing one sub-sequence at a time. You can do this by setting stateful=True in the constructor. If you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g. s1 = [t0, t1, ... t100] s2 = [t101, ... t201] ... s16 = [t1501, ... t1547] Then you would process it via: lstm_layer = layers.LSTM(64, stateful=True) for s in sub_sequences: output = lstm_layer(s) When you want to clear the state, you can use layer.reset_states(). Note: In this setup, sample i in a given batch is assumed to be the continuation of sample i in the previous batch. This means that all batches should contain the same number of samples (batch size). E.g. if a batch contains [sequence_A_from_t0_to_t100, sequence_B_from_t0_to_t100], the next batch should contain [sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200]. Here is a complete example: paragraph1 = np.random.random((20, 10, 50)).astype(np.float32) paragraph2 = np.random.random((20, 10, 50)).astype(np.float32) paragraph3 = np.random.random((20, 10, 50)).astype(np.float32) lstm_layer = layers.LSTM(64, stateful=True) output = lstm_layer(paragraph1) output = lstm_layer(paragraph2) output = lstm_layer(paragraph3) # reset_states() will reset the cached state to the original initial_state. # If no initial_state was provided, zero-states will be used by default. lstm_layer.reset_states() RNN State Reuse The recorded states of the RNN layer are not included in the layer.weights(). If you would like to reuse the state from a RNN layer, you can retrieve the states value by layer.states and use it as the initial state for a new layer via the Keras functional API like new_layer(inputs, initial_state=layer.states), or model subclassing. Please also note that sequential model might not be used in this case since it only supports layers with single input and output, the extra input of initial state makes it impossible to use here. paragraph1 = np.random.random((20, 10, 50)).astype(np.float32) paragraph2 = np.random.random((20, 10, 50)).astype(np.float32) paragraph3 = np.random.random((20, 10, 50)).astype(np.float32) lstm_layer = layers.LSTM(64, stateful=True) output = lstm_layer(paragraph1) output = lstm_layer(paragraph2) existing_state = lstm_layer.states new_lstm_layer = layers.LSTM(64) new_output = new_lstm_layer(paragraph3, initial_state=existing_state) Bidirectional RNNs For sequences other than time series (e.g. text), it is often the case that a RNN model can perform better if it not only processes sequence from start to end, but also backwards. For example, to predict the next word in a sentence, it is often useful to have the context around the word, not only just the words that come before it. Keras provides an easy API for you to build such bidirectional RNNs: the keras.layers.Bidirectional wrapper. 
model = keras.Sequential() model.add( layers.Bidirectional(layers.LSTM(64, return_sequences=True), input_shape=(5, 10)) ) model.add(layers.Bidirectional(layers.LSTM(32))) model.add(layers.Dense(10)) model.summary() Model: "sequential_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= bidirectional (Bidirectional (None, 5, 128) 38400 _________________________________________________________________ bidirectional_1 (Bidirection (None, 64) 41216 _________________________________________________________________ dense_3 (Dense) (None, 10) 650 ================================================================= Total params: 80,266 Trainable params: 80,266 Non-trainable params: 0 _________________________________________________________________ Under the hood, Bidirectional will copy the RNN layer passed in, and flip the go_backwards field of the newly copied layer, so that it will process the inputs in reverse order. The output of the Bidirectional RNN will be, by default, the concatenation of the forward layer output and the backward layer output. If you need a different merging behavior, e.g. summation, change the merge_mode parameter in the Bidirectional wrapper constructor. For more details about Bidirectional, please check the API docs. Performance optimization and CuDNN kernels In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the prior keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your model without worrying about the hardware it will run on. Since the CuDNN kernel is built with certain assumptions, this means the layer will not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or GRU layers. E.g.: Changing the activation function from tanh to something else. Changing the recurrent_activation function from sigmoid to something else. Using recurrent_dropout > 0. Setting unroll to True, which forces LSTM/GRU to decompose the inner tf.while_loop into an unrolled for loop. Setting use_bias to False. Using masking when the input data is not strictly right padded (if the mask corresponds to strictly right padded data, CuDNN can still be used. This is the most common case). For the detailed list of constraints, please see the documentation for the LSTM and GRU layers. Using CuDNN kernels when available Let's build a simple LSTM model to demonstrate the performance difference. We'll use as input sequences the sequence of rows of MNIST digits (treating each row of pixels as a timestep), and we'll predict the digit's label. batch_size = 64 # Each MNIST image batch is a tensor of shape (batch_size, 28, 28). # Each input sequence will be of size (28, 28) (height is treated like time). input_dim = 28 units = 64 output_size = 10 # labels are from 0 to 9 # Build the RNN model def build_model(allow_cudnn_kernel=True): # CuDNN is only available at the layer level, and not at the cell level. # This means `LSTM(units)` will use the CuDNN kernel, # while RNN(LSTMCell(units)) will run on the non-CuDNN kernel. if allow_cudnn_kernel: # The LSTM layer with default options uses CuDNN. lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim)) else: # Wrapping an LSTMCell in a RNN layer will not use CuDNN.
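# Mathematically this is equivalent to LSTM(units), but it runs on the generic
# kernel, so it will be noticeably slower on a GPU.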
lstm_layer = keras.layers.RNN( keras.layers.LSTMCell(units), input_shape=(None, input_dim) ) model = keras.models.Sequential( [ lstm_layer, keras.layers.BatchNormalization(), keras.layers.Dense(output_size), ] ) return model Let's load the MNIST dataset: mnist = keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 sample, sample_label = x_train[0], y_train[0] Let's create a model instance and train it. We choose sparse_categorical_crossentropy as the loss function for the model. The output of the model has shape of [batch_size, 10]. The target for the model is an integer vector, each of the integer is in the range of 0 to 9. model = build_model(allow_cudnn_kernel=True) model.compile( loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer="sgd", metrics=["accuracy"], ) model.fit( x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1 ) 938/938 [==============================] - 12s 11ms/step - loss: 1.3152 - accuracy: 0.5698 - val_loss: 0.5888 - val_accuracy: 0.8086 Now, let's compare to a model that does not use the CuDNN kernel: noncudnn_model = build_model(allow_cudnn_kernel=False) noncudnn_model.set_weights(model.get_weights()) noncudnn_model.compile( loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer="sgd", metrics=["accuracy"], ) noncudnn_model.fit( x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1 ) 938/938 [==============================] - 14s 14ms/step - loss: 0.4382 - accuracy: 0.8669 - val_loss: 0.3223 - val_accuracy: 0.8955 When running on a machine with a NVIDIA GPU and CuDNN installed, the model built with CuDNN is much faster to train compared to the model that uses the regular TensorFlow kernel. The same CuDNN-enabled model can also be used to run inference in a CPU-only environment. The tf.device annotation below is just forcing the device placement. The model will run on CPU by default if no GPU is available. You simply don't have to worry about the hardware you're running on anymore. Isn't that pretty cool? import matplotlib.pyplot as plt with tf.device("CPU:0"): cpu_model = build_model(allow_cudnn_kernel=True) cpu_model.set_weights(model.get_weights()) result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1) print( "Predicted result is: %s, target result is: %s" % (result.numpy(), sample_label) ) plt.imshow(sample, cmap=plt.get_cmap("gray")) Predicted result is: [3], target result is: 5 png RNNs with list/dict inputs, or nested inputs Nested structures allow implementers to include more information within a single timestep. For example, a video frame could have audio and video input at the same time. The data shape in this case could be: [batch, timestep, {"video": [height, width, channel], "audio": [frequency]}] In another example, handwriting data could have both coordinates x and y for the current position of the pen, as well as pressure information. So the data representation could be: [batch, timestep, {"location": [x, y], "pressure": [force]}] The following code provides an example of how to build a custom RNN cell that accepts such structured inputs. Define a custom cell that supports nested input/output See Making new Layers & Models via subclassing for details on writing your own layers. 
class NestedCell(keras.layers.Layer): def __init__(self, unit_1, unit_2, unit_3, **kwargs): self.unit_1 = unit_1 self.unit_2 = unit_2 self.unit_3 = unit_3 self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])] self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])] super(NestedCell, self).__init__(**kwargs) def build(self, input_shapes): # expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)] i1 = input_shapes[0][1] i2 = input_shapes[1][1] i3 = input_shapes[1][2] self.kernel_1 = self.add_weight( shape=(i1, self.unit_1), initializer="uniform", name="kernel_1" ) self.kernel_2_3 = self.add_weight( shape=(i2, i3, self.unit_2, self.unit_3), initializer="uniform", name="kernel_2_3", ) def call(self, inputs, states): # inputs should be in [(batch, input_1), (batch, input_2, input_3)] # state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)] input_1, input_2 = tf.nest.flatten(inputs) s1, s2 = states output_1 = tf.matmul(input_1, self.kernel_1) output_2_3 = tf.einsum("bij,ijkl->bkl", input_2, self.kernel_2_3) state_1 = s1 + output_1 state_2_3 = s2 + output_2_3 output = (output_1, output_2_3) new_states = (state_1, state_2_3) return output, new_states def get_config(self): return {"unit_1": self.unit_1, "unit_2": self.unit_2, "unit_3": self.unit_3} Build a RNN model with nested input/output Let's build a Keras model that uses a keras.layers.RNN layer and the custom cell we just defined. unit_1 = 10 unit_2 = 20 unit_3 = 30 i1 = 32 i2 = 64 i3 = 32 batch_size = 64 num_batches = 10 timestep = 50 cell = NestedCell(unit_1, unit_2, unit_3) rnn = keras.layers.RNN(cell) input_1 = keras.Input((None, i1)) input_2 = keras.Input((None, i2, i3)) outputs = rnn((input_1, input_2)) model = keras.models.Model([input_1, input_2], outputs) model.compile(optimizer="adam", loss="mse", metrics=["accuracy"]) Train the model with randomly generated data Since there isn't a good candidate dataset for this model, we use random NumPy data for demonstration. input_1_data = np.random.random((batch_size * num_batches, timestep, i1)) input_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3)) target_1_data = np.random.random((batch_size * num_batches, unit_1)) target_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3)) input_data = [input_1_data, input_2_data] target_data = [target_1_data, target_2_data] model.fit(input_data, target_data, batch_size=batch_size) 10/10 [==============================] - 4s 263ms/step - loss: 0.9004 - rnn_1_loss: 0.3103 - rnn_1_1_loss: 0.5902 - rnn_1_accuracy: 0.1403 - rnn_1_1_accuracy: 0.0335 With the Keras keras.layers.RNN layer, you are only expected to define the math logic for an individual step within the sequence, and the keras.layers.RNN layer will handle the sequence iteration for you. It's an incredibly powerful way to quickly prototype new kinds of RNNs (e.g. an LSTM variant). For more details, please visit the API docs. Writing a training loop from scratch Author: fchollet Date created: 2019/03/01 Last modified: 2020/04/15 Description: Complete guide to writing low-level training & evaluation loops. View in Colab • GitHub source Setup import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import numpy as np Introduction Keras provides default training and evaluation loops, fit() and evaluate(). Their usage is covered in the guide Training & evaluation with the built-in methods.
If you want to customize the learning algorithm of your model while still leveraging the convenience of fit() (for instance, to train a GAN using fit()), you can subclass the Model class and implement your own train_step() method, which is called repeatedly during fit(). This is covered in the guide Customizing what happens in fit(). Now, if you want very low-level control over training & evaluation, you should write your own training & evaluation loops from scratch. This is what this guide is about. Using the GradientTape: a first end-to-end example Calling a model inside a GradientTape scope enables you to retrieve the gradients of the trainable weights of the layer with respect to a loss value. Using an optimizer instance, you can use these gradients to update these variables (which you can retrieve using model.trainable_weights). Let's consider a simple MNIST model: inputs = keras.Input(shape=(784,), name="digits") x1 = layers.Dense(64, activation="relu")(inputs) x2 = layers.Dense(64, activation="relu")(x1) outputs = layers.Dense(10, name="predictions")(x2) model = keras.Model(inputs=inputs, outputs=outputs) Let's train it using mini-batch gradient with a custom training loop. First, we're going to need an optimizer, a loss function, and a dataset: # Instantiate an optimizer. optimizer = keras.optimizers.SGD(learning_rate=1e-3) # Instantiate a loss function. loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True) # Prepare the training dataset. batch_size = 64 (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() x_train = np.reshape(x_train, (-1, 784)) x_test = np.reshape(x_test, (-1, 784)) train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size) Here's our training loop: We open a for loop that iterates over epochs For each epoch, we open a for loop that iterates over the dataset, in batches For each batch, we open a GradientTape() scope Inside this scope, we call the model (forward pass) and compute the loss Outside the scope, we retrieve the gradients of the weights of the model with regard to the loss Finally, we use the optimizer to update the weights of the model based on the gradients epochs = 2 for epoch in range(epochs): print("\nStart of epoch %d" % (epoch,)) # Iterate over the batches of the dataset. for step, (x_batch_train, y_batch_train) in enumerate(train_dataset): # Open a GradientTape to record the operations run # during the forward pass, which enables auto-differentiation. with tf.GradientTape() as tape: # Run the forward pass of the layer. # The operations that the layer applies # to its inputs are going to be recorded # on the GradientTape. logits = model(x_batch_train, training=True) # Logits for this minibatch # Compute the loss value for this minibatch. loss_value = loss_fn(y_batch_train, logits) # Use the gradient tape to automatically retrieve # the gradients of the trainable variables with respect to the loss. grads = tape.gradient(loss_value, model.trainable_weights) # Run one step of gradient descent by updating # the value of the variables to minimize the loss. optimizer.apply_gradients(zip(grads, model.trainable_weights)) # Log every 200 batches. 
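# Note: 64 below is the batch_size defined above, so (step + 1) * 64 is the
# number of samples processed so far in this epoch.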
if step % 200 == 0: print( "Training loss (for one batch) at step %d: %.4f" % (step, float(loss_value)) ) print("Seen so far: %s samples" % ((step + 1) * 64)) Start of epoch 0 Training loss (for one batch) at step 0: 76.3562 Seen so far: 64 samples Training loss (for one batch) at step 200: 1.3921 Seen so far: 12864 samples Training loss (for one batch) at step 400: 1.0018 Seen so far: 25664 samples Training loss (for one batch) at step 600: 0.8904 Seen so far: 38464 samples Training loss (for one batch) at step 800: 0.8393 Seen so far: 51264 samples Start of epoch 1 Training loss (for one batch) at step 0: 0.8572 Seen so far: 64 samples Training loss (for one batch) at step 200: 0.7616 Seen so far: 12864 samples Training loss (for one batch) at step 400: 0.8453 Seen so far: 25664 samples Training loss (for one batch) at step 600: 0.4959 Seen so far: 38464 samples Training loss (for one batch) at step 800: 0.9363 Seen so far: 51264 samples Low-level handling of metrics Let's add metrics monitoring to this basic loop. You can readily reuse the built-in metrics (or custom ones you wrote) in such training loops written from scratch. Here's the flow: Instantiate the metric at the start of the loop Call metric.update_state() after each batch Call metric.result() when you need to display the current value of the metric Call metric.reset_states() when you need to clear the state of the metric (typically at the end of an epoch) Let's use this knowledge to compute SparseCategoricalAccuracy on validation data at the end of each epoch: # Get model inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu", name="dense_1")(inputs) x = layers.Dense(64, activation="relu", name="dense_2")(x) outputs = layers.Dense(10, name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) # Instantiate an optimizer to train the model. optimizer = keras.optimizers.SGD(learning_rate=1e-3) # Instantiate a loss function. loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True) # Prepare the metrics. train_acc_metric = keras.metrics.SparseCategoricalAccuracy() val_acc_metric = keras.metrics.SparseCategoricalAccuracy() # Prepare the training dataset. batch_size = 64 train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size) # Prepare the validation dataset. # Reserve 10,000 samples for validation. x_val = x_train[-10000:] y_val = y_train[-10000:] x_train = x_train[:-10000] y_train = y_train[:-10000] val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)) val_dataset = val_dataset.batch(64) Here's our training & evaluation loop: import time epochs = 2 for epoch in range(epochs): print("\nStart of epoch %d" % (epoch,)) start_time = time.time() # Iterate over the batches of the dataset. for step, (x_batch_train, y_batch_train) in enumerate(train_dataset): with tf.GradientTape() as tape: logits = model(x_batch_train, training=True) loss_value = loss_fn(y_batch_train, logits) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) # Update training metric. train_acc_metric.update_state(y_batch_train, logits) # Log every 200 batches. if step % 200 == 0: print( "Training loss (for one batch) at step %d: %.4f" % (step, float(loss_value)) ) print("Seen so far: %d samples" % ((step + 1) * 64)) # Display metrics at the end of each epoch. 
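# result() returns the value accumulated by update_state() since the last
# reset_states() call.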
train_acc = train_acc_metric.result() print("Training acc over epoch: %.4f" % (float(train_acc),)) # Reset training metrics at the end of each epoch train_acc_metric.reset_states() # Run a validation loop at the end of each epoch. for x_batch_val, y_batch_val in val_dataset: val_logits = model(x_batch_val, training=False) # Update val metrics val_acc_metric.update_state(y_batch_val, val_logits) val_acc = val_acc_metric.result() val_acc_metric.reset_states() print("Validation acc: %.4f" % (float(val_acc),)) print("Time taken: %.2fs" % (time.time() - start_time)) Start of epoch 0 Training loss (for one batch) at step 0: 134.3001 Seen so far: 64 samples Training loss (for one batch) at step 200: 1.3430 Seen so far: 12864 samples Training loss (for one batch) at step 400: 1.3557 Seen so far: 25664 samples Training loss (for one batch) at step 600: 0.8682 Seen so far: 38464 samples Training loss (for one batch) at step 800: 0.5862 Seen so far: 51264 samples Training acc over epoch: 0.7176 Validation acc: 0.8403 Time taken: 4.65s Start of epoch 1 Training loss (for one batch) at step 0: 0.4264 Seen so far: 64 samples Training loss (for one batch) at step 200: 0.4168 Seen so far: 12864 samples Training loss (for one batch) at step 400: 0.6106 Seen so far: 25664 samples Training loss (for one batch) at step 600: 0.4762 Seen so far: 38464 samples Training loss (for one batch) at step 800: 0.4031 Seen so far: 51264 samples Training acc over epoch: 0.8429 Validation acc: 0.8774 Time taken: 5.07s Speeding-up your training step with tf.function The default runtime in TensorFlow 2.0 is eager execution. As such, our training loop above executes eagerly. This is great for debugging, but graph compilation has a definite performance advantage. Describing your computation as a static graph enables the framework to apply global performance optimizations. This is impossible when the framework is constrained to greedly execute one operation after another, with no knowledge of what comes next. You can compile into a static graph any function that takes tensors as input. Just add a @tf.function decorator on it, like this: @tf.function def train_step(x, y): with tf.GradientTape() as tape: logits = model(x, training=True) loss_value = loss_fn(y, logits) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) train_acc_metric.update_state(y, logits) return loss_value Let's do the same with the evaluation step: @tf.function def test_step(x, y): val_logits = model(x, training=False) val_acc_metric.update_state(y, val_logits) Now, let's re-run our training loop with this compiled training step: import time epochs = 2 for epoch in range(epochs): print("\nStart of epoch %d" % (epoch,)) start_time = time.time() # Iterate over the batches of the dataset. for step, (x_batch_train, y_batch_train) in enumerate(train_dataset): loss_value = train_step(x_batch_train, y_batch_train) # Log every 200 batches. if step % 200 == 0: print( "Training loss (for one batch) at step %d: %.4f" % (step, float(loss_value)) ) print("Seen so far: %d samples" % ((step + 1) * 64)) # Display metrics at the end of each epoch. train_acc = train_acc_metric.result() print("Training acc over epoch: %.4f" % (float(train_acc),)) # Reset training metrics at the end of each epoch train_acc_metric.reset_states() # Run a validation loop at the end of each epoch. 
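# test_step is the @tf.function-compiled evaluation step defined above; it only
# runs the forward pass and updates val_acc_metric.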
for x_batch_val, y_batch_val in val_dataset: test_step(x_batch_val, y_batch_val) val_acc = val_acc_metric.result() val_acc_metric.reset_states() print("Validation acc: %.4f" % (float(val_acc),)) print("Time taken: %.2fs" % (time.time() - start_time)) Start of epoch 0 Training loss (for one batch) at step 0: 0.6483 Seen so far: 64 samples Training loss (for one batch) at step 200: 0.5966 Seen so far: 12864 samples Training loss (for one batch) at step 400: 0.5951 Seen so far: 25664 samples Training loss (for one batch) at step 600: 1.3830 Seen so far: 38464 samples Training loss (for one batch) at step 800: 0.2758 Seen so far: 51264 samples Training acc over epoch: 0.8756 Validation acc: 0.8955 Time taken: 1.18s Start of epoch 1 Training loss (for one batch) at step 0: 0.4447 Seen so far: 64 samples Training loss (for one batch) at step 200: 0.3794 Seen so far: 12864 samples Training loss (for one batch) at step 400: 0.4636 Seen so far: 25664 samples Training loss (for one batch) at step 600: 0.3694 Seen so far: 38464 samples Training loss (for one batch) at step 800: 0.2763 Seen so far: 51264 samples Training acc over epoch: 0.8926 Validation acc: 0.9078 Time taken: 0.71s Much faster, isn't it? Low-level handling of losses tracked by the model Layers & models recursively track any losses created during the forward pass by layers that call self.add_loss(value). The resulting list of scalar loss values are available via the property model.losses at the end of the forward pass. If you want to be using these loss components, you should sum them and add them to the main loss in your training step. Consider this layer, that creates an activity regularization loss: class ActivityRegularizationLayer(layers.Layer): def call(self, inputs): self.add_loss(1e-2 * tf.reduce_sum(inputs)) return inputs Let's build a really simple model that uses it: inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu")(inputs) # Insert activity regularization as a layer x = ActivityRegularizationLayer()(x) x = layers.Dense(64, activation="relu")(x) outputs = layers.Dense(10, name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) Here's what our training step should look like now: @tf.function def train_step(x, y): with tf.GradientTape() as tape: logits = model(x, training=True) loss_value = loss_fn(y, logits) # Add any extra losses created during the forward pass. loss_value += sum(model.losses) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) train_acc_metric.update_state(y, logits) return loss_value Summary Now you know everything there is to know about using built-in training loops and writing your own from scratch. To conclude, here's a simple end-to-end example that ties together everything you've learned in this guide: a DCGAN trained on MNIST digits. End-to-end example: a GAN training loop from scratch You may be familiar with Generative Adversarial Networks (GANs). GANs can generate new images that look almost real, by learning the latent distribution of a training dataset of images (the "latent space" of the images). A GAN is made of two parts: a "generator" model that maps points in the latent space to points in image space, a "discriminator" model, a classifier that can tell the difference between real images (from the training dataset) and fake images (the output of the generator network). A GAN training loop looks like this: 1) Train the discriminator. 
- Sample a batch of random points in the latent space. - Turn the points into fake images via the "generator" model. - Get a batch of real images and combine them with the generated images. - Train the "discriminator" model to classify generated vs. real images. 2) Train the generator. - Sample random points in the latent space. - Turn the points into fake images via the "generator" network. - Get a batch of real images and combine them with the generated images. - Train the "generator" model to "fool" the discriminator and classify the fake images as real. For a much more detailed overview of how GANs works, see Deep Learning with Python. Let's implement this training loop. First, create the discriminator meant to classify fake vs real digits: discriminator = keras.Sequential( [ keras.Input(shape=(28, 28, 1)), layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.GlobalMaxPooling2D(), layers.Dense(1), ], name="discriminator", ) discriminator.summary() Model: "discriminator" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 14, 14, 64) 640 _________________________________________________________________ leaky_re_lu (LeakyReLU) (None, 14, 14, 64) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 7, 7, 128) 73856 _________________________________________________________________ leaky_re_lu_1 (LeakyReLU) (None, 7, 7, 128) 0 _________________________________________________________________ global_max_pooling2d (Global (None, 128) 0 _________________________________________________________________ dense_4 (Dense) (None, 1) 129 ================================================================= Total params: 74,625 Trainable params: 74,625 Non-trainable params: 0 _________________________________________________________________ Then let's create a generator network, that turns latent vectors into outputs of shape (28, 28, 1) (representing MNIST digits): latent_dim = 128 generator = keras.Sequential( [ keras.Input(shape=(latent_dim,)), # We want to generate 128 coefficients to reshape into a 7x7x128 map layers.Dense(7 * 7 * 128), layers.LeakyReLU(alpha=0.2), layers.Reshape((7, 7, 128)), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"), ], name="generator", ) Here's the key bit: the training loop. As you can see it is quite straightforward. The training step function only takes 17 lines. # Instantiate one optimizer for the discriminator and another for the generator. d_optimizer = keras.optimizers.Adam(learning_rate=0.0003) g_optimizer = keras.optimizers.Adam(learning_rate=0.0004) # Instantiate a loss function. 
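# from_logits=True because the discriminator ends in a plain Dense(1) layer
# with no sigmoid activation.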
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True) @tf.function def train_step(real_images): # Sample random points in the latent space random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim)) # Decode them to fake images generated_images = generator(random_latent_vectors) # Combine them with real images combined_images = tf.concat([generated_images, real_images], axis=0) # Assemble labels discriminating real from fake images labels = tf.concat( [tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0 ) # Add random noise to the labels - important trick! labels += 0.05 * tf.random.uniform(labels.shape) # Train the discriminator with tf.GradientTape() as tape: predictions = discriminator(combined_images) d_loss = loss_fn(labels, predictions) grads = tape.gradient(d_loss, discriminator.trainable_weights) d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights)) # Sample random points in the latent space random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim)) # Assemble labels that say "all real images" misleading_labels = tf.zeros((batch_size, 1)) # Train the generator (note that we should *not* update the weights # of the discriminator)! with tf.GradientTape() as tape: predictions = discriminator(generator(random_latent_vectors)) g_loss = loss_fn(misleading_labels, predictions) grads = tape.gradient(g_loss, generator.trainable_weights) g_optimizer.apply_gradients(zip(grads, generator.trainable_weights)) return d_loss, g_loss, generated_images Let's train our GAN, by repeatedly calling train_step on batches of images. Since our discriminator and generator are convnets, you're going to want to run this code on a GPU. import os # Prepare the dataset. We use both the training & test MNIST digits. batch_size = 64 (x_train, _), (x_test, _) = keras.datasets.mnist.load_data() all_digits = np.concatenate([x_train, x_test]) all_digits = all_digits.astype("float32") / 255.0 all_digits = np.reshape(all_digits, (-1, 28, 28, 1)) dataset = tf.data.Dataset.from_tensor_slices(all_digits) dataset = dataset.shuffle(buffer_size=1024).batch(batch_size) epochs = 1 # In practice you need at least 20 epochs to generate nice digits. save_dir = "./" for epoch in range(epochs): print("\nStart epoch", epoch) for step, real_images in enumerate(dataset): # Train the discriminator & generator on one batch of real images. d_loss, g_loss, generated_images = train_step(real_images) # Logging. if step % 200 == 0: # Print metrics print("discriminator loss at step %d: %.2f" % (step, d_loss)) print("adversarial loss at step %d: %.2f" % (step, g_loss)) # Save one generated image img = tf.keras.preprocessing.image.array_to_img( generated_images[0] * 255.0, scale=False ) img.save(os.path.join(save_dir, "generated_img" + str(step) + ".png")) # To limit execution time we stop after 10 steps. # Remove the lines below to actually train the model! if step > 10: break Start epoch 0 discriminator loss at step 0: 0.70 adversarial loss at step 0: 0.68 That's it! You'll get nice-looking fake MNIST digits after just ~30s of training on the Colab GPU.Serialization and saving Authors: Kathy Wu, Francois Chollet Date created: 2020/04/28 Last modified: 2020/04/28 Description: Complete guide to saving & serializing models. View in Colab • GitHub source Introduction A Keras model consists of multiple components: The architecture, or configuration, which specifies what layers the model contain, and how they're connected. A set of weights values (the "state of the model"). 
An optimizer (defined by compiling the model). A set of losses and metrics (defined by compiling the model or calling add_loss() or add_metric()). The Keras API makes it possible to save all of these pieces to disk at once, or to only selectively save some of them: Saving everything into a single archive in the TensorFlow SavedModel format (or in the older Keras H5 format). This is the standard practice. Saving the architecture / configuration only, typically as a JSON file. Saving the weights values only. This is generally used when training the model. Let's take a look at each of these options. When would you use one or the other, and how do they work? How to save and load a model If you only have 10 seconds to read this guide, here's what you need to know. Saving a Keras model: model = ... # Get model (Sequential, Functional Model, or Model subclass) model.save('path/to/location') Loading the model back: from tensorflow import keras model = keras.models.load_model('path/to/location') Now, let's look at the details. Setup import numpy as np import tensorflow as tf from tensorflow import keras Whole-model saving & loading You can save an entire model to a single artifact. It will include: The model's architecture/config The model's weight values (which were learned during training) The model's compilation information (if compile() was called) The optimizer and its state, if any (this enables you to restart training where you left) APIs model.save() or tf.keras.models.save_model() tf.keras.models.load_model() There are two formats you can use to save an entire model to disk: the TensorFlow SavedModel format, and the older Keras H5 format. The recommended format is SavedModel. It is the default when you use model.save(). You can switch to the H5 format by: Passing save_format='h5' to save(). Passing a filename that ends in .h5 or .keras to save(). SavedModel format SavedModel is the more comprehensive save format that saves the model architecture, weights, and the traced Tensorflow subgraphs of the call functions. This enables Keras to restore both built-in layers as well as custom objects. Example: def get_model(): # Create a simple model. inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = keras.Model(inputs, outputs) model.compile(optimizer="adam", loss="mean_squared_error") return model model = get_model() # Train the model. test_input = np.random.random((128, 32)) test_target = np.random.random((128, 1)) model.fit(test_input, test_target) # Calling `save('my_model')` creates a SavedModel folder `my_model`. model.save("my_model") # It can be used to reconstruct the model identically. reconstructed_model = keras.models.load_model("my_model") # Let's check: np.testing.assert_allclose( model.predict(test_input), reconstructed_model.predict(test_input) ) # The reconstructed model is already compiled and has retained the optimizer # state, so training can resume: reconstructed_model.fit(test_input, test_target) 4/4 [==============================] - 0s 833us/step - loss: 0.2464 What the SavedModel contains Calling model.save('my_model') creates a folder named my_model, containing the following: !ls my_model assets saved_model.pb variables The model architecture, and training configuration (including the optimizer, losses, and metrics) are stored in saved_model.pb. The weights are saved in the variables/ directory. For detailed information on the SavedModel format, see the SavedModel guide (The SavedModel format on disk). 
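As a quick sanity check that the training configuration really does survive the round trip, you can compare the optimizer configuration of the original and reconstructed models. This is a minimal sketch, reusing the model and reconstructed_model objects from the SavedModel example above:
# Both objects come from the SavedModel example above.
print("Original optimizer config: ", model.optimizer.get_config())
print("Restored optimizer config: ", reconstructed_model.optimizer.get_config())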
How SavedModel handles custom objects When saving the model and its layers, the SavedModel format stores the class name, call function, losses, and weights (and the config, if implemented). The call function defines the computation graph of the model/layer. In the absence of the model/layer config, the call function is used to create a model that behaves like the original model, which can be trained, evaluated, and used for inference. Nevertheless, it is always a good practice to define the get_config and from_config methods when writing a custom model or layer class. This allows you to easily update the computation later if needed. See the section about Custom objects for more information. Example: class CustomModel(keras.Model): def __init__(self, hidden_units): super(CustomModel, self).__init__() self.hidden_units = hidden_units self.dense_layers = [keras.layers.Dense(u) for u in hidden_units] def call(self, inputs): x = inputs for layer in self.dense_layers: x = layer(x) return x def get_config(self): return {"hidden_units": self.hidden_units} @classmethod def from_config(cls, config): return cls(**config) model = CustomModel([16, 16, 10]) # Build the model by calling it input_arr = tf.random.uniform((1, 5)) outputs = model(input_arr) model.save("my_model") # Option 1: Load with the custom_objects argument. loaded_1 = keras.models.load_model( "my_model", custom_objects={"CustomModel": CustomModel} ) # Option 2: Load without the CustomModel class. # Delete the custom-defined model class to ensure that the loader does not have # access to it. del CustomModel loaded_2 = keras.models.load_model("my_model") np.testing.assert_allclose(loaded_1(input_arr), outputs) np.testing.assert_allclose(loaded_2(input_arr), outputs) print("Original model:", model) print("Model Loaded with custom objects:", loaded_1) print("Model loaded without the custom object class:", loaded_2) INFO:tensorflow:Assets written to: my_model/assets WARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually. WARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually. Original model: <__main__.CustomModel object at 0x151ad0990> Model Loaded with custom objects: <__main__.CustomModel object at 0x151b03850> Model loaded without the custom object class: The first loaded model is loaded using the config and CustomModel class. The second model is loaded by dynamically creating the model class that acts like the original model. Configuring the SavedModel New in TensorFlow 2.4 The argument save_traces has been added to model.save, which allows you to toggle SavedModel function tracing. Functions are saved to allow Keras to re-load custom objects without the original class definitions, so when save_traces=False, all custom objects must have defined get_config/from_config methods. When loading, the custom objects must be passed to the custom_objects argument. save_traces=False reduces the disk space used by the SavedModel and the saving time. Keras H5 format Keras also supports saving a single HDF5 file containing the model's architecture, weights values, and compile() information. It is a light-weight alternative to SavedModel. Example: model = get_model() # Train the model. test_input = np.random.random((128, 32)) test_target = np.random.random((128, 1)) model.fit(test_input, test_target) # Calling `save('my_h5_model.h5')` creates an h5 file `my_h5_model.h5`.
model.save("my_h5_model.h5") # It can be used to reconstruct the model identically. reconstructed_model = keras.models.load_model("my_h5_model.h5") # Let's check: np.testing.assert_allclose( model.predict(test_input), reconstructed_model.predict(test_input) ) # The reconstructed model is already compiled and has retained the optimizer # state, so training can resume: reconstructed_model.fit(test_input, test_target) 4/4 [==============================] - 0s 967us/step - loss: 0.8106 4/4 [==============================] - 0s 1ms/step - loss: 0.7184 Limitations Compared to the SavedModel format, there are two things that don't get included in the H5 file: External losses & metrics added via model.add_loss() & model.add_metric() are not saved (unlike SavedModel). If you have such losses & metrics on your model and you want to resume training, you need to add these losses back yourself after loading the model. Note that this does not apply to losses/metrics created inside layers via self.add_loss() & self.add_metric(). As long as the layer gets loaded, these losses & metrics are kept, since they are part of the call method of the layer. The computation graph of custom objects such as custom layers is not included in the saved file. At loading time, Keras will need access to the Python classes/functions of these objects in order to reconstruct the model. See Custom objects. Saving the architecture The model's configuration (or architecture) specifies what layers the model contains, and how these layers are connected*. If you have the configuration of a model, then the model can be created with a freshly initialized state for the weights and no compilation information. *Note this only applies to models defined using the functional or Sequential apis not subclassed models. Configuration of a Sequential model or Functional API model These types of models are explicit graphs of layers: their configuration is always available in a structured form. APIs get_config() and from_config() tf.keras.models.model_to_json() and tf.keras.models.model_from_json() get_config() and from_config() Calling config = model.get_config() will return a Python dict containing the configuration of the model. The same model can then be reconstructed via Sequential.from_config(config) (for a Sequential model) or Model.from_config(config) (for a Functional API model). The same workflow also works for any serializable layer. Layer example: layer = keras.layers.Dense(3, activation="relu") layer_config = layer.get_config() new_layer = keras.layers.Dense.from_config(layer_config) Sequential model example: model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)]) config = model.get_config() new_model = keras.Sequential.from_config(config) Functional model example: inputs = keras.Input((32,)) outputs = keras.layers.Dense(1)(inputs) model = keras.Model(inputs, outputs) config = model.get_config() new_model = keras.Model.from_config(config) to_json() and tf.keras.models.model_from_json() This is similar to get_config / from_config, except it turns the model into a JSON string, which can then be loaded without the original model class. It is also specific to models, it isn't meant for layers. Example: model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)]) json_config = model.to_json() new_model = keras.models.model_from_json(json_config) Custom objects Models and layers The architecture of subclassed models and layers are defined in the methods __init__ and call. 
They are considered Python bytecode, which cannot be serialized into a JSON-compatible config -- you could try serializing the bytecode (e.g. via pickle), but it's completely unsafe and means your model cannot be loaded on a different system. In order to save/load a model with custom-defined layers, or a subclassed model, you should override the get_config and optionally from_config methods. Additionally, you should register the custom object so that Keras is aware of it. Custom functions Custom-defined functions (e.g. activation, loss, or initialization) do not need a get_config method. The function name is sufficient for loading as long as it is registered as a custom object. Loading the TensorFlow graph only It's possible to load the TensorFlow graph generated by Keras. If you do so, you won't need to provide any custom_objects. You can do so like this: model.save("my_model") tensorflow_graph = tf.saved_model.load("my_model") x = np.random.uniform(size=(4, 32)).astype(np.float32) predicted = tensorflow_graph(x).numpy() INFO:tensorflow:Assets written to: my_model/assets Note that this method has several drawbacks: * For traceability reasons, you should always have access to the custom objects that were used. You wouldn't want to put in production a model that you cannot re-create. * The object returned by tf.saved_model.load isn't a Keras model. So it's not as easy to use. For example, you won't have access to .predict() or .fit(). Even though its use is discouraged, it can help you if you're in a tight spot, for example, if you lost the code of your custom objects or have issues loading the model with tf.keras.models.load_model(). You can find out more on the page about tf.saved_model.load Defining the config methods Specifications: get_config should return a JSON-serializable dictionary in order to be compatible with the Keras architecture- and model-saving APIs. from_config(config) (classmethod) should return a new layer or model object that is created from the config. The default implementation returns cls(**config). Example: class CustomLayer(keras.layers.Layer): def __init__(self, a): super(CustomLayer, self).__init__() self.var = tf.Variable(a, name="var_a") def call(self, inputs, training=False): if training: return inputs * self.var else: return inputs def get_config(self): return {"a": self.var.numpy()} # There's actually no need to define `from_config` here, since returning # `cls(**config)` is the default behavior. @classmethod def from_config(cls, config): return cls(**config) layer = CustomLayer(5) layer.var.assign(2) serialized_layer = keras.layers.serialize(layer) new_layer = keras.layers.deserialize( serialized_layer, custom_objects={"CustomLayer": CustomLayer} ) Registering the custom object Keras keeps a note of which class generated the config. From the example above, tf.keras.layers.serialize generates a serialized form of the custom layer: {'class_name': 'CustomLayer', 'config': {'a': 2}} Keras keeps a master list of all built-in layer, model, optimizer, and metric classes, which is used to find the correct class to call from_config. If the class can't be found, then an error is raised (ValueError: Unknown layer). There are a few ways to register custom classes to this list: Setting the custom_objects argument in the loading function.
(see the example in section above "Defining the config methods") tf.keras.utils.custom_object_scope or tf.keras.utils.CustomObjectScope tf.keras.utils.register_keras_serializable Custom layer and function example class CustomLayer(keras.layers.Layer): def __init__(self, units=32, **kwargs): super(CustomLayer, self).__init__(**kwargs) self.units = units def build(self, input_shape): self.w = self.add_weight( shape=(input_shape[-1], self.units), initializer="random_normal", trainable=True, ) self.b = self.add_weight( shape=(self.units,), initializer="random_normal", trainable=True ) def call(self, inputs): return tf.matmul(inputs, self.w) + self.b def get_config(self): config = super(CustomLayer, self).get_config() config.update({"units": self.units}) return config def custom_activation(x): return tf.nn.tanh(x) ** 2 # Make a model with the CustomLayer and custom_activation inputs = keras.Input((32,)) x = CustomLayer(32)(inputs) outputs = keras.layers.Activation(custom_activation)(x) model = keras.Model(inputs, outputs) # Retrieve the config config = model.get_config() # At loading time, register the custom objects with a `custom_object_scope`: custom_objects = {"CustomLayer": CustomLayer, "custom_activation": custom_activation} with keras.utils.custom_object_scope(custom_objects): new_model = keras.Model.from_config(config) In-memory model cloning You can also do in-memory cloning of a model via tf.keras.models.clone_model(). This is equivalent to getting the config then recreating the model from its config (so it does not preserve compilation information or layer weights values). Example: with keras.utils.custom_object_scope(custom_objects): new_model = keras.models.clone_model(model) Saving & loading only the model's weights values You can choose to only save & load a model's weights. This can be useful if: You only need the model for inference: in this case you won't need to restart training, so you don't need the compilation information or optimizer state. You are doing transfer learning: in this case you will be training a new model reusing the state of a prior model, so you don't need the compilation information of the prior model. APIs for in-memory weight transfer Weights can be copied between different objects by using get_weights and set_weights: tf.keras.layers.Layer.get_weights(): Returns a list of numpy arrays. tf.keras.layers.Layer.set_weights(): Sets the model weights to the values in the weights argument. Examples below. 
Transfering weights from one layer to another, in memory def create_layer(): layer = keras.layers.Dense(64, activation="relu", name="dense_2") layer.build((None, 784)) return layer layer_1 = create_layer() layer_2 = create_layer() # Copy weights from layer 1 to layer 2 layer_2.set_weights(layer_1.get_weights()) Transfering weights from one model to another model with a compatible architecture, in memory # Create a simple functional model inputs = keras.Input(shape=(784,), name="digits") x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs) x = keras.layers.Dense(64, activation="relu", name="dense_2")(x) outputs = keras.layers.Dense(10, name="predictions")(x) functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp") # Define a subclassed model with the same architecture class SubclassedModel(keras.Model): def __init__(self, output_dim, name=None): super(SubclassedModel, self).__init__(name=name) self.output_dim = output_dim self.dense_1 = keras.layers.Dense(64, activation="relu", name="dense_1") self.dense_2 = keras.layers.Dense(64, activation="relu", name="dense_2") self.dense_3 = keras.layers.Dense(output_dim, name="predictions") def call(self, inputs): x = self.dense_1(inputs) x = self.dense_2(x) x = self.dense_3(x) return x def get_config(self): return {"output_dim": self.output_dim, "name": self.name} subclassed_model = SubclassedModel(10) # Call the subclassed model once to create the weights. subclassed_model(tf.ones((1, 784))) # Copy weights from functional_model to subclassed_model. subclassed_model.set_weights(functional_model.get_weights()) assert len(functional_model.weights) == len(subclassed_model.weights) for a, b in zip(functional_model.weights, subclassed_model.weights): np.testing.assert_allclose(a.numpy(), b.numpy()) The case of stateless layers Because stateless layers do not change the order or number of weights, models can have compatible architectures even if there are extra/missing stateless layers. inputs = keras.Input(shape=(784,), name="digits") x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs) x = keras.layers.Dense(64, activation="relu", name="dense_2")(x) outputs = keras.layers.Dense(10, name="predictions")(x) functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp") inputs = keras.Input(shape=(784,), name="digits") x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs) x = keras.layers.Dense(64, activation="relu", name="dense_2")(x) # Add a dropout layer, which does not contain any weights. x = keras.layers.Dropout(0.5)(x) outputs = keras.layers.Dense(10, name="predictions")(x) functional_model_with_dropout = keras.Model( inputs=inputs, outputs=outputs, name="3_layer_mlp" ) functional_model_with_dropout.set_weights(functional_model.get_weights()) APIs for saving weights to disk & loading them back Weights can be saved to disk by calling model.save_weights in the following formats: TensorFlow Checkpoint HDF5 The default format for model.save_weights is TensorFlow checkpoint. There are two ways to specify the save format: save_format argument: Set the value to save_format="tf" or save_format="h5". path argument: If the path ends with .h5 or .hdf5, then the HDF5 format is used. Other suffixes will result in a TensorFlow checkpoint unless save_format is set. There is also an option of retrieving weights as in-memory numpy arrays. Each API has its pros and cons which are detailed below. 
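For instance, the following sketch (assuming an already-built model named model; the file names are just placeholders) shows how the format can be selected either by the path suffix or by the save_format argument:
model.save_weights("my_checkpoint")  # TensorFlow Checkpoint (the default)
model.save_weights("my_checkpoint", save_format="tf")  # TensorFlow Checkpoint, selected explicitly
model.save_weights("my_weights.h5")  # HDF5, selected by the .h5 suffix
model.save_weights("my_weights", save_format="h5")  # HDF5, selected explicitly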
TF Checkpoint format Example: # Runnable example sequential_model = keras.Sequential( [ keras.Input(shape=(784,), name="digits"), keras.layers.Dense(64, activation="relu", name="dense_1"), keras.layers.Dense(64, activation="relu", name="dense_2"), keras.layers.Dense(10, name="predictions"), ] ) sequential_model.save_weights("ckpt") load_status = sequential_model.load_weights("ckpt") # `assert_consumed` can be used as validation that all variable values have been # restored from the checkpoint. See `tf.train.Checkpoint.restore` for other # methods in the Status object. load_status.assert_consumed() Format details The TensorFlow Checkpoint format saves and restores the weights using object attribute names. For instance, consider the tf.keras.layers.Dense layer. The layer contains two weights: dense.kernel and dense.bias. When the layer is saved to the tf format, the resulting checkpoint contains the keys "kernel" and "bias" and their corresponding weight values. For more information see "Loading mechanics" in the TF Checkpoint guide. Note that attribute/graph edge is named after the name used in parent object, not the name of the variable. Consider the CustomLayer in the example below. The variable CustomLayer.var is saved with "var" as part of key, not "var_a". class CustomLayer(keras.layers.Layer): def __init__(self, a): self.var = tf.Variable(a, name="var_a") layer = CustomLayer(5) layer_ckpt = tf.train.Checkpoint(layer=layer).save("custom_layer") ckpt_reader = tf.train.load_checkpoint(layer_ckpt) ckpt_reader.get_variable_to_dtype_map() {'save_counter/.ATTRIBUTES/VARIABLE_VALUE': tf.int64, 'layer/var/.ATTRIBUTES/VARIABLE_VALUE': tf.int32, '_CHECKPOINTABLE_OBJECT_GRAPH': tf.string} Transfer learning example Essentially, as long as two models have the same architecture, they are able to share the same checkpoint. Example: inputs = keras.Input(shape=(784,), name="digits") x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs) x = keras.layers.Dense(64, activation="relu", name="dense_2")(x) outputs = keras.layers.Dense(10, name="predictions")(x) functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp") # Extract a portion of the functional model defined in the Setup section. # The following lines produce a new model that excludes the final output # layer of the functional model. pretrained = keras.Model( functional_model.inputs, functional_model.layers[-1].input, name="pretrained_model" ) # Randomly assign "trained" weights. for w in pretrained.weights: w.assign(tf.random.normal(w.shape)) pretrained.save_weights("pretrained_ckpt") pretrained.summary() # Assume this is a separate program where only 'pretrained_ckpt' exists. # Create a new functional model with a different output dimension. inputs = keras.Input(shape=(784,), name="digits") x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs) x = keras.layers.Dense(64, activation="relu", name="dense_2")(x) outputs = keras.layers.Dense(5, name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs, name="new_model") # Load the weights from pretrained_ckpt into model. model.load_weights("pretrained_ckpt") # Check that all of the pretrained weights have been loaded. for a, b in zip(pretrained.weights, model.weights): np.testing.assert_allclose(a.numpy(), b.numpy()) print("\n", "-" * 50) model.summary() # Example 2: Sequential model # Recreate the pretrained model, and load the saved weights. 
inputs = keras.Input(shape=(784,), name="digits") x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs) x = keras.layers.Dense(64, activation="relu", name="dense_2")(x) pretrained_model = keras.Model(inputs=inputs, outputs=x, name="pretrained") # Sequential example: model = keras.Sequential([pretrained_model, keras.layers.Dense(5, name="predictions")]) model.summary() pretrained_model.load_weights("pretrained_ckpt") # Warning! Calling `model.load_weights('pretrained_ckpt')` won't throw an error, # but will *not* work as expected. If you inspect the weights, you'll see that # none of the weights will have loaded. `pretrained_model.load_weights()` is the # correct method to call. Model: "pretrained_model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= digits (InputLayer) [(None, 784)] 0 _________________________________________________________________ dense_1 (Dense) (None, 64) 50240 _________________________________________________________________ dense_2 (Dense) (None, 64) 4160 ================================================================= Total params: 54,400 Trainable params: 54,400 Non-trainable params: 0 _________________________________________________________________ -------------------------------------------------- Model: "new_model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= digits (InputLayer) [(None, 784)] 0 _________________________________________________________________ dense_1 (Dense) (None, 64) 50240 _________________________________________________________________ dense_2 (Dense) (None, 64) 4160 _________________________________________________________________ predictions (Dense) (None, 5) 325 ================================================================= Total params: 54,725 Trainable params: 54,725 Non-trainable params: 0 _________________________________________________________________ Model: "sequential_3" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= pretrained (Functional) (None, 64) 54400 _________________________________________________________________ predictions (Dense) (None, 5) 325 ================================================================= Total params: 54,725 Trainable params: 54,725 Non-trainable params: 0 _________________________________________________________________ It is generally recommended to stick to the same API for building models. If you switch between Sequential and Functional, or Functional and subclassed, etc., then always rebuild the pre-trained model and load the pre-trained weights to that model. The next question is, how can weights be saved and loaded to different models if the model architectures are quite different? The solution is to use tf.train.Checkpoint to save and restore the exact layers/variables. Example: # Create a subclassed model that essentially uses functional_model's first # and last layers. # First, save the weights of functional_model's first and last dense layers. first_dense = functional_model.layers[1] last_dense = functional_model.layers[-1] ckpt_path = tf.train.Checkpoint( dense=first_dense, kernel=last_dense.kernel, bias=last_dense.bias ).save("ckpt") # Define the subclassed model. 
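# ContrivedModel below reuses the functional model's first Dense layer as a whole,
# plus a kernel and bias matching the last Dense layer, mirroring the Checkpoint
# structure saved above.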
class ContrivedModel(keras.Model):
    def __init__(self):
        super(ContrivedModel, self).__init__()
        self.first_dense = keras.layers.Dense(64)
        self.kernel = self.add_weight("kernel", shape=(64, 10))
        self.bias = self.add_weight("bias", shape=(10,))

    def call(self, inputs):
        x = self.first_dense(inputs)
        return tf.matmul(x, self.kernel) + self.bias


model = ContrivedModel()
# Call model on inputs to create the variables of the dense layer.
_ = model(tf.ones((1, 784)))

# Create a Checkpoint with the same structure as before, and load the weights.
tf.train.Checkpoint(
    dense=model.first_dense, kernel=model.kernel, bias=model.bias
).restore(ckpt_path).assert_consumed()

HDF5 format

The HDF5 format contains weights grouped by layer names. The weights are lists ordered by concatenating the list of trainable weights to the list of non-trainable weights (the same ordering as layer.weights). Thus, a model can use an HDF5 checkpoint if it has the same layers and trainable statuses as saved in the checkpoint.

Example:

# Runnable example
sequential_model = keras.Sequential(
    [
        keras.Input(shape=(784,), name="digits"),
        keras.layers.Dense(64, activation="relu", name="dense_1"),
        keras.layers.Dense(64, activation="relu", name="dense_2"),
        keras.layers.Dense(10, name="predictions"),
    ]
)
sequential_model.save_weights("weights.h5")
sequential_model.load_weights("weights.h5")

Note that changing layer.trainable may result in a different layer.weights ordering when the model contains nested layers.

class NestedDenseLayer(keras.layers.Layer):
    def __init__(self, units, name=None):
        super(NestedDenseLayer, self).__init__(name=name)
        self.dense_1 = keras.layers.Dense(units, name="dense_1")
        self.dense_2 = keras.layers.Dense(units, name="dense_2")

    def call(self, inputs):
        return self.dense_2(self.dense_1(inputs))


nested_model = keras.Sequential([keras.Input((784,)), NestedDenseLayer(10, "nested")])
variable_names = [v.name for v in nested_model.weights]
print("variables: {}".format(variable_names))

print("\nChanging trainable status of one of the nested layers...")
nested_model.get_layer("nested").dense_1.trainable = False
variable_names_2 = [v.name for v in nested_model.weights]
print("\nvariables: {}".format(variable_names_2))
print("variable ordering changed:", variable_names != variable_names_2)

variables: ['nested/dense_1/kernel:0', 'nested/dense_1/bias:0', 'nested/dense_2/kernel:0', 'nested/dense_2/bias:0']

Changing trainable status of one of the nested layers...

variables: ['nested/dense_2/kernel:0', 'nested/dense_2/bias:0', 'nested/dense_1/kernel:0', 'nested/dense_1/bias:0']
variable ordering changed: True

Transfer learning example

When loading pretrained weights from HDF5, it is recommended to load the weights into the original checkpointed model, and then extract the desired weights/layers into a new model.
Example:

def create_functional_model():
    inputs = keras.Input(shape=(784,), name="digits")
    x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
    x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
    outputs = keras.layers.Dense(10, name="predictions")(x)
    return keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")


functional_model = create_functional_model()
functional_model.save_weights("pretrained_weights.h5")

# In a separate program:
pretrained_model = create_functional_model()
pretrained_model.load_weights("pretrained_weights.h5")

# Create a new model by extracting layers from the original model:
extracted_layers = pretrained_model.layers[:-1]
extracted_layers.append(keras.layers.Dense(5, name="dense_3"))
model = keras.Sequential(extracted_layers)
model.summary()

Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 64)                50240
_________________________________________________________________
dense_2 (Dense)              (None, 64)                4160
_________________________________________________________________
dense_3 (Dense)              (None, 5)                 325
=================================================================
Total params: 54,725
Trainable params: 54,725
Non-trainable params: 0
_________________________________________________________________
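Because the HDF5 format groups weights by layer name, Keras can also load only the matching layers by name with load_weights(..., by_name=True). The following is a minimal sketch of that alternative, assuming the reused layers keep the same names as in pretrained_weights.h5; the names "by_name_model" and "new_predictions" are only illustrative:

# Rebuild the layers we want to reuse with the same names as in the
# HDF5 file, plus a fresh output layer with a new name.
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(5, name="new_predictions")(x)
by_name_model = keras.Model(inputs=inputs, outputs=outputs, name="by_name_model")

# `by_name=True` matches saved weights to layers by layer name;
# `skip_mismatch=True` skips layers whose saved weights have incompatible
# shapes instead of raising an error.
by_name_model.load_weights("pretrained_weights.h5", by_name=True, skip_mismatch=True)

Layers whose names do not appear in the file (here, the new output layer) simply keep their freshly initialized weights.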