```python
# Bigger resolution, only for fine-tuning.
finetune_sample_images, _ = next(iter(finetune_train_dataset))
visualize_dataset(finetune_sample_images)

# Bigger resolution, with the same augmentation transforms as
# the smaller resolution dataset.
vanilla_sample_images, _ = next(iter(vanilla_train_dataset))
visualize_dataset(vanilla_sample_images)
```
```
2021-10-11 02:05:26.638594: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline si...
```

![png]

```
Batch shape: (128, 128, 128, 3).
2021-10-11 02:05:28.509752: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline si...
```

![png]

```
Batch shape: (128, 224, 224, 3).
2021-10-11 02:05:30.108623: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline si...
```

![png]

```
Batch shape: (128, 224, 224, 3).
```
## Model training utilities
We train multiple variants of ResNet50V2 (He et al.):

1. On the smaller resolution dataset (128x128), trained from scratch.
2. We then fine-tune the model from step 1 on the larger resolution (224x224) dataset.
3. Finally, we train another ResNet50V2 from scratch on the larger resolution dataset.

As a reminder, the two larger resolution datasets differ in their augmentation transforms.
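For intuition on that difference: the "vanilla" larger resolution pipeline reuses the smaller dataset's random crop and horizontal flip, while the fine-tuning pipeline only resizes. Below is a minimal NumPy sketch of a random crop plus horizontal flip; the function name and shapes are illustrative, not part of the original pipeline:

```python
import numpy as np


def random_crop_and_flip(image, crop_size, rng):
    # Pick a random top-left corner and take a square spatial crop.
    h, w, _ = image.shape
    top = rng.integers(0, h - crop_size + 1)
    left = rng.integers(0, w - crop_size + 1)
    crop = image[top : top + crop_size, left : left + crop_size]
    # Flip horizontally with probability 0.5.
    if rng.random() < 0.5:
        crop = crop[:, ::-1]
    return crop


rng = np.random.default_rng(0)
img = np.zeros((256, 256, 3), dtype=np.float32)
out = random_crop_and_flip(img, 224, rng)
assert out.shape == (224, 224, 3)
```

Because the crop location and flip are re-sampled per image, the model sees a different view of each sample every epoch.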
```python
def get_training_model(num_classes=5):
    inputs = layers.Input((None, None, 3))
    resnet_base = keras.applications.ResNet50V2(
        include_top=False, weights=None, pooling="avg"
    )
    resnet_base.trainable = True

    x = layers.Rescaling(scale=1.0 / 127.5, offset=-1)(inputs)
    x = resnet_base(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)
```
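The same model can train at both resolutions because the input spatial dimensions are left as `None` and `pooling="avg"` collapses whatever spatial grid the backbone produces into a fixed-length vector. Here is a NumPy sketch of that global average pooling step; the feature-map shapes are illustrative (ResNet50V2 downsamples by 32x, so 128x128 and 224x224 inputs yield 4x4 and 7x7 grids):

```python
import numpy as np


def global_avg_pool(feature_map):
    # Average over the two spatial axes -> fixed-length feature vector.
    return feature_map.mean(axis=(0, 1))


# Different spatial grids, same output dimensionality.
small = np.random.rand(4, 4, 2048)  # stand-in for a 128x128 input's features
large = np.random.rand(7, 7, 2048)  # stand-in for a 224x224 input's features
assert global_avg_pool(small).shape == global_avg_pool(large).shape == (2048,)
```

This is what lets us fine-tune the 128x128 model on 224x224 images without any architectural change: only the pooled vector reaches the `Dense` head, and its size never varies.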
```python
def train_and_evaluate(
    model, train_ds, val_ds, epochs, learning_rate=1e-3, use_early_stopping=False
):
    optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
    model.compile(
        optimizer=optimizer,
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )

    if use_early_stopping:
        es_callback = keras.callbacks.EarlyStopping(patience=5)
        callbacks = [es_callback]
    else:
        callbacks = None

    model.fit(
        train_ds, validation_data=val_ds, epochs=epochs, callbacks=callbacks,
    )

    _, accuracy = model.evaluate(val_ds)
    print(f"Top-1 accuracy on the validation set: {accuracy*100:.2f}%.")
    return model
```
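When `use_early_stopping` is enabled, `keras.callbacks.EarlyStopping(patience=5)` halts training once the monitored quantity (validation loss by default) has not improved for 5 consecutive epochs. A simplified pure-Python sketch of that stopping rule (Keras also supports `min_delta`, a baseline, and restoring the best weights):

```python
def should_stop(val_losses, patience=5):
    # Stop when the best (lowest) validation loss was observed more
    # than `patience` epochs ago.
    if len(val_losses) <= patience:
        return False
    best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__)
    return len(val_losses) - 1 - best_epoch >= patience


assert not should_stop([1.0, 0.9, 0.8])  # still improving
assert should_stop([0.5, 0.6, 0.7, 0.8, 0.9, 1.0])  # 5 epochs with no improvement
```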
## Experiment 1: Train on 128x128 and then fine-tune on 224x224
```python
epochs = 30

smaller_res_model = get_training_model()
smaller_res_model = train_and_evaluate(
    smaller_res_model, initial_train_dataset, initial_val_dataset, epochs
)
```
```
Epoch 1/30
26/26 [==============================] - 14s 226ms/step - loss: 1.6476 - accuracy: 0.4345 - val_loss: 9.8213 - val_accuracy: 0.2044
Epoch 2/30
26/26 [==============================] - 3s 123ms/step - loss: 1.1561 - accuracy: 0.5495 - val_loss: 6.5521 - val_accuracy: 0.2071
Epoch 3/30
26/26 [==============================] - 3s 123ms/step - loss: 1.0989 - accuracy: 0.5722 - val_loss: 2.6216 - val_accuracy: 0.1935
Epoch 4/30
26/26 [==============================] - 3s 122ms/step - loss: 1.0373 - accuracy: 0.5895 - val_loss: 1.9918 - val_accuracy: 0.2125
Epoch 5/30
26/26 [==============================] - 3s 122ms/step - loss: 0.9960 - accuracy: 0.6119 - val_loss: 2.8505 - val_accuracy: 0.2262
Epoch 6/30
26/26 [==============================] - 3s 122ms/step - loss: 0.9458 - accuracy: 0.6331 - val_loss: 1.8974 - val_accuracy: 0.2834
Epoch 7/30
26/26 [==============================] - 3s 122ms/step - loss: 0.8949 - accuracy: 0.6606 - val_loss: 2.1164 - val_accuracy: 0.2834
Epoch 8/30
26/26 [==============================] - 3s 122ms/step - loss: 0.8581 - accuracy: 0.6709 - val_loss: 1.8858 - val_accuracy: 0.3815
Epoch 9/30
26/26 [==============================] - 3s 123ms/step - loss: 0.8436 - accuracy: 0.6776 - val_loss: 1.5671 - val_accuracy: 0.4687
Epoch 10/30
26/26 [==============================] - 3s 123ms/step - loss: 0.8632 - accuracy: 0.6685 - val_loss: 1.5005 - val_accuracy: 0.5504
Epoch 11/30
26/26 [==============================] - 3s 123ms/step - loss: 0.8316 - accuracy: 0.6918 - val_loss: 1.1421 - val_accuracy: 0.6594
Epoch 12/30
26/26 [==============================] - 3s 123ms/step - loss: 0.7981 - accuracy: 0.6951 - val_loss: 1.2036 - val_accuracy: 0.6403
Epoch 13/30
26/26 [==============================] - 3s 122ms/step - loss: 0.8275 - accuracy: 0.6806 - val_loss: 2.2632 - val_accuracy: 0.5177
```