os.environ["KAGGLE_USERNAME"] = ""  # TODO: enter your Kaggle user name here
os.environ["KAGGLE_KEY"] = ""  # TODO: enter your Kaggle API key here

!kaggle datasets download -d spsayakpaul/bitresnet101x3flowers102
!unzip -qq bitresnet101x3flowers102.zip
# Since the teacher model is not going to be trained further we make
# it non-trainable.
teacher_model = keras.models.load_model(
    "/home/jupyter/keras-io/examples/keras_recipes/T-r101x3-128"
)
teacher_model.trainable = False
teacher_model.summary()
Model: "my_bi_t_model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              multiple                  626790
_________________________________________________________________
keras_layer_1 (KerasLayer)   multiple                  381789888
=================================================================
Total params: 382,416,678
Trainable params: 0
Non-trainable params: 382,416,678
_________________________________________________________________
The "function matching" recipe

To train a high-quality student model, the authors propose the following changes to the student training workflow:

- Use an aggressive variant of MixUp (Zhang et al.). This is done by sampling the alpha parameter from a uniform distribution instead of a beta distribution. MixUp is used here to help the student model capture the function underlying the teacher model. MixUp linearly interpolates between different samples across the batch.
- Unlike other works (Noisy Student Training, for example), both the teacher and student models receive the same copy of an image, which is mixed up and randomly cropped. By providing the same inputs to both models, the authors make the teacher consistent with the student.
- With MixUp, we are essentially introducing a strong form of regularization when training the student. As such, the student should be trained for a relatively long period of time (at least 1,000 epochs). Since the student is trained with strong regularization, the risk of overfitting due to a longer training schedule is also mitigated.

In summary, one needs to be consistent and patient while training the student model.
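The aggressive MixUp variant can be sketched outside of tf.data as well. The snippet below is a minimal NumPy illustration of the key difference: alpha is drawn from U(0, 1) rather than a beta distribution, so extreme mixing ratios are as likely as mild ones. The function and variable names here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)


def mixup_batch(images):
    # Aggressive variant: alpha ~ U(0, 1) instead of Beta(a, a), so
    # heavily mixed images are sampled as often as barely mixed ones.
    alpha = rng.uniform(0.0, 1.0)
    # Mix each image with the batch reversed along the first axis,
    # mirroring tf.reverse(images, axis=[0]) in the tf.data pipeline.
    return alpha * images + (1.0 - alpha) * images[::-1]


batch = rng.uniform(size=(4, 8, 8, 3)).astype("float32")
mixed = mixup_batch(batch)
```

Because the output is a convex combination of two valid images, pixel values stay within the original [0, 1] range.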
Data input pipeline

def mixup(images, labels):
    alpha = tf.random.uniform([], 0, 1)
    mixedup_images = alpha * images + (1 - alpha) * tf.reverse(images, axis=[0])
    # The labels do not matter here since they are NOT used during
    # training.
    return mixedup_images, labels
def preprocess_image(image, label, train=True):
    image = tf.cast(image, tf.float32) / 255.0

    if train:
        image = tf.image.resize(image, (BIGGER, BIGGER))
        image = tf.image.random_crop(image, (RESIZE, RESIZE, 3))
        image = tf.image.random_flip_left_right(image)
    else:
        # Central fraction amount is from here:
        # https://git.io/J8Kda.
        image = tf.image.central_crop(image, central_fraction=0.875)
        image = tf.image.resize(image, (RESIZE, RESIZE))

    return image, label
def prepare_dataset(dataset, train=True, batch_size=BATCH_SIZE):
    if train:
        dataset = dataset.map(preprocess_image, num_parallel_calls=AUTO)
        dataset = dataset.shuffle(BATCH_SIZE * 10)
    else:
        dataset = dataset.map(
            lambda x, y: (preprocess_image(x, y, train)), num_parallel_calls=AUTO
        )

    dataset = dataset.batch(batch_size)
    if train:
        dataset = dataset.map(mixup, num_parallel_calls=AUTO)

    dataset = dataset.prefetch(AUTO)
    return dataset
Note that for brevity, we used mild crops for the training set, but in practice "Inception-style" preprocessing should be applied. You can refer to this script for a closer implementation. Also, the ground-truth labels are not used for training the student.
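To make the "Inception-style" remark concrete, here is a hedged NumPy sketch of the usual sampling logic: draw a target area fraction and aspect ratio, then solve for a crop window. The parameter names and default ranges are assumptions of this sketch, not taken from the script referenced above.

```python
import numpy as np

rng = np.random.default_rng(0)


def inception_crop_params(height, width,
                          area_range=(0.08, 1.0),
                          aspect_ratio_range=(3 / 4, 4 / 3),
                          max_attempts=10):
    """Sample a crop window "Inception-style": pick a random target
    area and aspect ratio, then derive the crop height and width."""
    for _ in range(max_attempts):
        target_area = rng.uniform(*area_range) * height * width
        log_ratio = np.log(aspect_ratio_range)
        aspect = np.exp(rng.uniform(log_ratio[0], log_ratio[1]))
        crop_w = int(round(np.sqrt(target_area * aspect)))
        crop_h = int(round(np.sqrt(target_area / aspect)))
        if crop_w <= width and crop_h <= height:
            top = int(rng.integers(0, height - crop_h + 1))
            left = int(rng.integers(0, width - crop_w + 1))
            return top, left, crop_h, crop_w
    # Fall back to a central square crop if no valid window was found.
    crop_h = crop_w = min(height, width)
    return (height - crop_h) // 2, (width - crop_w) // 2, crop_h, crop_w
```

In a real tf.data pipeline the equivalent sampling is usually done with tf.image.sample_distorted_bounding_box rather than by hand.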
train_ds = prepare_dataset(train_ds, True)
validation_ds = prepare_dataset(validation_ds, False)
test_ds = prepare_dataset(test_ds, False)
Visualization

sample_images, _ = next(iter(train_ds))
plt.figure(figsize=(10, 10))
for n in range(25):
    ax = plt.subplot(5, 5, n + 1)
    plt.imshow(sample_images[n].numpy())
    plt.axis("off")
plt.show()
(Figure: a 5x5 grid of mixed-up training samples.)
Student model

For the purpose of this example, we will use the standard ResNet50V2 (He et al.).

def get_resnetv2():
    resnet_v2 = keras.applications.ResNet50V2(
        weights=None,
        input_shape=(RESIZE, RESIZE, 3),
        classes=102,
        classifier_activation="linear",
    )
    return resnet_v2
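Before wiring the student into a full trainer, it may help to see the objective in isolation. Below is a minimal NumPy sketch of a function-matching style loss: the KL divergence between the teacher's and the student's predicted distributions on the same mixed-up image, with no ground-truth labels involved. The function names and the absence of a temperature term are assumptions of this sketch, not the example's exact training loop.

```python
import numpy as np


def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


def function_matching_loss(teacher_logits, student_logits):
    # KL(teacher || student) computed on the *same* mixed-up, cropped
    # image, averaged over the batch; labels are never used.
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    kl = np.sum(p * (np.log(p + 1e-9) - np.log(q + 1e-9)), axis=-1)
    return float(np.mean(kl))
```

When the student's logits exactly match the teacher's, the loss is zero; any mismatch yields a positive penalty, which is what drives the student toward the teacher's function.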