Epoch 49/50
27/27 [==============================] - 1s 39ms/step - loss: 0.1198 - accuracy: 0.9670 - val_loss: 0.1114 - val_accuracy: 0.9713
Epoch 50/50
27/27 [==============================] - 1s 45ms/step - loss: 0.1186 - accuracy: 0.9677 - val_loss: 0.1106 - val_accuracy: 0.9703
<tensorflow.python.keras.callbacks.History at 0x1b79ec350>
Tip 4: if your code is slow, run the TensorFlow profiler
One last tip: if your code seems slower than it should be, plot how much time is spent on each computation step and look for any bottleneck that keeps device utilization below 100%.
To learn more about TensorFlow profiling, see this extensive guide.
You can quickly profile a Keras model via the TensorBoard callback:
# Profile from batches 10 to 15
tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, profile_batch=(10, 15))

# Train the model and use the TensorBoard Keras callback to collect
# performance profiling data
model.fit(dataset, epochs=1, callbacks=[tb_callback])
Then navigate to the TensorBoard app and check the "profile" tab.
Training better student models via knowledge distillation with function matching.
Introduction
Knowledge distillation (Hinton et al.) is a technique that enables us to compress larger models into smaller ones. This allows us to reap the benefits of high-performing larger models while reducing storage and memory costs and achieving higher inference speed:
Smaller models -> smaller memory footprint
Reduced complexity -> fewer floating-point operations (FLOPs)
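At its core, the Hinton et al. objective compares temperature-softened teacher and student distributions. The following is an illustrative NumPy sketch of that loss, not the exact implementation used later in this example:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Divide logits by the temperature before normalizing; higher
    # temperatures produce softer (more uniform) distributions.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=10.0):
    # KL divergence between softened teacher and student distributions,
    # averaged over the batch (Hinton et al.).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()

teacher = np.array([[5.0, 1.0, -2.0]])
student = np.array([[4.0, 2.0, -1.0]])
print(distillation_loss(teacher, student))
```

The loss is zero when the student's softened distribution exactly matches the teacher's, and positive otherwise, which is what drives the student toward the teacher's function.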
In Knowledge distillation: A good teacher is patient and consistent, Beyer et al. investigate various existing setups for performing knowledge distillation and show that all of them lead to sub-optimal performance. Due to this, practitioners often settle for other alternatives (quantization, pruning, weight clustering,...
Beyer et al. investigate how we can improve the student models that come out of the knowledge distillation process and always match the performance of their teacher models. In this example, we will study the recipes introduced by them, using the Flowers102 dataset. As a reference, with these recipes, the authors were a...
In case you need a refresher on knowledge distillation and want to study how it is implemented in Keras, you can refer to this example. You can also follow this example that shows an extension of knowledge distillation applied to consistency training.
To follow this example, you will need TensorFlow 2.5 or higher as well as TensorFlow Addons, which can be installed using the command below:
!pip install -q tensorflow-addons
Imports
from tensorflow import keras
import tensorflow_addons as tfa
import tensorflow as tf

import matplotlib.pyplot as plt
import numpy as np

import tensorflow_datasets as tfds

tfds.disable_progress_bar()
Hyperparameters and constants
AUTO = tf.data.AUTOTUNE  # Used to dynamically adjust parallelism.
BATCH_SIZE = 64

# Comes from Table 4 and the "Training setup" section.
TEMPERATURE = 10  # Used to soften the logits before they go to softmax.
INIT_LR = 0.003  # Initial learning rate that will be decayed over the training period.
WEIGHT_DECAY = 0.001  # Used for regularization.
CLIP_THRESHOLD = 1.0  # Used for clipping the gradients by L2-norm.

# We will first resize the training images to a bigger size and then we will take
# random crops of a lower size.
BIGGER = 160
RESIZE = 128
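The BIGGER/RESIZE pair can be turned into the resize-then-random-crop augmentation described in the comment above. Here is an illustrative sketch (the function name is ours, not from the paper's code), with a random horizontal flip added as is common in this kind of pipeline:

```python
import tensorflow as tf

BIGGER = 160
RESIZE = 128

def random_resized_crop(image):
    # First resize to the larger spatial size, then take a random crop
    # of the smaller size and randomly flip it left-right.
    image = tf.image.resize(image, (BIGGER, BIGGER))
    image = tf.image.random_crop(image, (RESIZE, RESIZE, 3))
    image = tf.image.random_flip_left_right(image)
    return image

image = tf.random.uniform((200, 200, 3))
print(random_resized_crop(image).shape)  # (128, 128, 3)
```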
Load the Flowers102 dataset
train_ds, validation_ds, test_ds = tfds.load(
    "oxford_flowers102", split=["train", "validation", "test"], as_supervised=True
)

print(f"Number of training examples: {train_ds.cardinality()}.")
print(f"Number of validation examples: {validation_ds.cardinality()}.")
print(f"Number of test examples: {test_ds.cardinality()}.")
Number of training examples: 1020.
Number of validation examples: 1020.
Number of test examples: 6149.
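Downstream, these splits are typically turned into shuffled, batched, and prefetched tf.data pipelines using the AUTO and BATCH_SIZE constants defined earlier. A minimal sketch with a synthetic stand-in for train_ds (so it runs without downloading the dataset):

```python
import tensorflow as tf

AUTO = tf.data.AUTOTUNE
BATCH_SIZE = 64

# Stand-in for train_ds: 100 synthetic 8x8 RGB images with Flowers102-style labels.
images = tf.random.uniform((100, 8, 8, 3))
labels = tf.random.uniform((100,), maxval=102, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((images, labels))

# Shuffle, batch, and overlap preprocessing with training via prefetch.
pipeline = dataset.shuffle(100).batch(BATCH_SIZE).prefetch(AUTO)
first_images, first_labels = next(iter(pipeline))
print(first_images.shape)  # (64, 8, 8, 3)
```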
Teacher model
As is common with any distillation technique, it's important to first train a well-performing teacher model, which is usually larger than the subsequent student model. The authors distill a BiT ResNet152x2 model (teacher) into a BiT ResNet50 model (student).
BiT stands for Big Transfer and was introduced in Big Transfer (BiT): General Visual Representation Learning. BiT variants of ResNets use Group Normalization (Wu et al.) and Weight Standardization (Qiao et al.) in place of Batch Normalization (Ioffe et al.). In order to limit the time it takes to run this example, we w...
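For intuition, Group Normalization normalizes each example over groups of channels (plus the spatial dimensions) rather than over the batch, so its statistics are batch-size independent. A minimal NumPy sketch of the idea, not the BiT implementation:

```python
import numpy as np

def group_norm(x, groups=2, eps=1e-5):
    # x has shape (N, H, W, C); split the C channels into `groups` groups
    # and normalize each group over its channels and spatial dimensions.
    n, h, w, c = x.shape
    g = x.reshape(n, h, w, groups, c // groups)
    mean = g.mean(axis=(1, 2, 4), keepdims=True)
    var = g.var(axis=(1, 2, 4), keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    return g.reshape(n, h, w, c)

x = np.random.rand(2, 4, 4, 8)
y = group_norm(x, groups=2)
print(y.shape)  # (2, 4, 4, 8)
```

Because each example is normalized independently, the result does not change with the batch size, which is part of why BiT pairs Group Normalization with Weight Standardization instead of Batch Normalization.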
The model weights are hosted on Kaggle as a dataset. To download the weights, follow these steps:
Create an account on Kaggle here.
Go to the "Account" tab of your user profile.
Select "Create API Token". This will trigger the download of kaggle.json, a file containing your API credentials.
From that JSON file, copy your Kaggle username and API key.
Now run the following:
import os

os.environ["KAGGLE_USERNAME"] = ""  # TODO: enter your Kaggle user name here
os.environ["KAGGLE_KEY"] = ""  # TODO: enter your Kaggle key here
Once the environment variables are set, run:
$ kaggle datasets download -d spsayakpaul/bitresnet101x3flowers102
$ unzip -qq bitresnet101x3flowers102.zip
This should generate a folder named T-r101x3-128, which is essentially a teacher SavedModel.
import os