Epoch 14/30
26/26 [==============================] - 3s 122ms/step - loss: 0.8156 - accuracy: 0.6994 - val_loss: 1.1023 - val_accuracy: 0.6649
Epoch 15/30
26/26 [==============================] - 3s 122ms/step - loss: 0.7572 - accuracy: 0.7091 - val_loss: 1.6248 - val_accuracy: 0.6049
Epoch 16/30
26/26 [==============================] - 3s 123ms/step - loss: 0.7757 - accuracy: 0.7024 - val_loss: 2.0600 - val_accuracy: 0.6294
Epoch 17/30
26/26 [==============================] - 3s 122ms/step - loss: 0.7600 - accuracy: 0.7087 - val_loss: 1.5731 - val_accuracy: 0.6131
Epoch 18/30
26/26 [==============================] - 3s 122ms/step - loss: 0.7385 - accuracy: 0.7215 - val_loss: 1.8312 - val_accuracy: 0.5749
Epoch 19/30
26/26 [==============================] - 3s 122ms/step - loss: 0.7493 - accuracy: 0.7224 - val_loss: 3.0382 - val_accuracy: 0.4986
Epoch 20/30
26/26 [==============================] - 3s 122ms/step - loss: 0.7746 - accuracy: 0.7048 - val_loss: 7.8191 - val_accuracy: 0.5123
Epoch 21/30
26/26 [==============================] - 3s 123ms/step - loss: 0.7367 - accuracy: 0.7405 - val_loss: 1.9607 - val_accuracy: 0.6676
Epoch 22/30
26/26 [==============================] - 3s 122ms/step - loss: 0.6970 - accuracy: 0.7357 - val_loss: 3.1944 - val_accuracy: 0.4496
Epoch 23/30
26/26 [==============================] - 3s 122ms/step - loss: 0.7299 - accuracy: 0.7212 - val_loss: 1.4012 - val_accuracy: 0.6567
Epoch 24/30
26/26 [==============================] - 3s 122ms/step - loss: 0.6965 - accuracy: 0.7315 - val_loss: 1.9781 - val_accuracy: 0.6403
Epoch 25/30
26/26 [==============================] - 3s 124ms/step - loss: 0.6811 - accuracy: 0.7408 - val_loss: 0.9287 - val_accuracy: 0.6839
Epoch 26/30
26/26 [==============================] - 3s 123ms/step - loss: 0.6732 - accuracy: 0.7487 - val_loss: 2.9406 - val_accuracy: 0.5504
Epoch 27/30
26/26 [==============================] - 3s 122ms/step - loss: 0.6571 - accuracy: 0.7560 - val_loss: 1.6268 - val_accuracy: 0.5804
Epoch 28/30
26/26 [==============================] - 3s 122ms/step - loss: 0.6662 - accuracy: 0.7548 - val_loss: 0.9067 - val_accuracy: 0.7357
Epoch 29/30
26/26 [==============================] - 3s 122ms/step - loss: 0.6443 - accuracy: 0.7520 - val_loss: 0.7760 - val_accuracy: 0.7520
Epoch 30/30
26/26 [==============================] - 3s 122ms/step - loss: 0.6617 - accuracy: 0.7539 - val_loss: 0.6026 - val_accuracy: 0.7766
3/3 [==============================] - 0s 37ms/step - loss: 0.6026 - accuracy: 0.7766
Top-1 accuracy on the validation set: 77.66%.
Freeze all the layers except for the final Batch Normalization layer
For fine-tuning, we train only two layers:
The final Batch Normalization (Ioffe et al.) layer.
The classification layer.
We unfreeze the final Batch Normalization layer to compensate for the change in activation statistics before the global average pooling layer. As shown in the paper, unfreezing this single Batch Normalization layer is enough.
For a comprehensive guide on fine-tuning models in Keras, refer to this tutorial.
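The intuition about shifting activation statistics can be illustrated with a deliberately simplified NumPy simulation (the feature-map sizes and distributions below are illustrative assumptions, not the model's actual shapes): averaging a larger pre-GAP spatial grid changes the spread of the pooled activations, so statistics frozen at the smaller resolution no longer match.

```python
import numpy as np

rng = np.random.default_rng(42)

def gap_stats(grid_size, batch=256, channels=8):
    # Simulate nonnegative (ReLU-like) pre-GAP feature maps, then apply
    # global average pooling over the spatial grid.
    fmaps = np.maximum(
        rng.normal(size=(batch, grid_size, grid_size, channels)), 0
    )
    pooled = fmaps.mean(axis=(1, 2))  # global average pooling
    return pooled.mean(), pooled.std()

mean_small, std_small = gap_stats(4)  # e.g. pre-GAP grid at the smaller resolution
mean_large, std_large = gap_stats(7)  # e.g. pre-GAP grid at the larger resolution

# The pooled means stay close, but the spread shrinks as the grid grows,
# so normalization statistics computed at one resolution drift at another.
print(std_small, std_large)
```

This is why recalibrating only the final Batch Normalization layer (rather than retraining the whole backbone) can be sufficient after a resolution change.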
for layer in smaller_res_model.layers[2].layers:
    layer.trainable = False

smaller_res_model.layers[2].get_layer("post_bn").trainable = True

epochs = 10

# Use a lower learning rate during fine-tuning.
bigger_res_model = train_and_evaluate(
    smaller_res_model,
    finetune_train_dataset,
    finetune_val_dataset,
    epochs,
    learning_rate=1e-4,
)
Epoch 1/10
26/26 [==============================] - 9s 201ms/step - loss: 0.7912 - accuracy: 0.7856 - val_loss: 0.6808 - val_accuracy: 0.7575
Epoch 2/10
26/26 [==============================] - 3s 115ms/step - loss: 0.7732 - accuracy: 0.7938 - val_loss: 0.7028 - val_accuracy: 0.7684
Epoch 3/10
26/26 [==============================] - 3s 115ms/step - loss: 0.7658 - accuracy: 0.7923 - val_loss: 0.7136 - val_accuracy: 0.7629
Epoch 4/10
26/26 [==============================] - 3s 115ms/step - loss: 0.7536 - accuracy: 0.7872 - val_loss: 0.7161 - val_accuracy: 0.7684
Epoch 5/10
26/26 [==============================] - 3s 115ms/step - loss: 0.7346 - accuracy: 0.7947 - val_loss: 0.7154 - val_accuracy: 0.7711
Epoch 6/10
26/26 [==============================] - 3s 115ms/step - loss: 0.7183 - accuracy: 0.7990 - val_loss: 0.7139 - val_accuracy: 0.7684
Epoch 7/10
26/26 [==============================] - 3s 116ms/step - loss: 0.7059 - accuracy: 0.7962 - val_loss: 0.7071 - val_accuracy: 0.7738
Epoch 8/10
26/26 [==============================] - 3s 115ms/step - loss: 0.6959 - accuracy: 0.7923 - val_loss: 0.7002 - val_accuracy: 0.7738
Epoch 9/10
26/26 [==============================] - 3s 116ms/step - loss: 0.6871 - accuracy: 0.8011 - val_loss: 0.6967 - val_accuracy: 0.7711
Epoch 10/10
26/26 [==============================] - 3s 116ms/step - loss: 0.6761 - accuracy: 0.8044 - val_loss: 0.6887 - val_accuracy: 0.7738
3/3 [==============================] - 0s 95ms/step - loss: 0.6887 - accuracy: 0.7738
Top-1 accuracy on the validation set: 77.38%.
Experiment 2: Train a model on 224x224 resolution from scratch
Now, we train another model from scratch on the larger-resolution dataset. Recall that the augmentation transforms used in this dataset are different from before.
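The actual augmentation transforms are defined earlier in the tutorial; as a framework-free sketch of what a typical larger-resolution pipeline involves (the 224-pixel crop size and the specific transforms here are assumptions for illustration), it combines a random crop to the target resolution with a random horizontal flip:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_larger_res(image, crop_size=224):
    # Random spatial crop to the target resolution, followed by a
    # random horizontal flip -- a common pipeline for 224x224 training.
    h, w, _ = image.shape
    top = rng.integers(0, h - crop_size + 1)
    left = rng.integers(0, w - crop_size + 1)
    crop = image[top:top + crop_size, left:left + crop_size]
    if rng.random() < 0.5:
        crop = crop[:, ::-1]  # flip along the width axis
    return crop

image = rng.random((256, 256, 3))
out = augment_larger_res(image)
print(out.shape)  # (224, 224, 3)
```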
epochs = 30
vanilla_bigger_res_model = get_training_model()
vanilla_bigger_res_model = train_and_evaluate(
    vanilla_bigger_res_model, vanilla_train_dataset, vanilla_val_dataset, epochs
)
Epoch 1/30
26/26 [==============================] - 15s 389ms/step - loss: 1.5339 - accuracy: 0.4569 - val_loss: 177.5233 - val_accuracy: 0.1907
Epoch 2/30
26/26 [==============================] - 8s 314ms/step - loss: 1.1472 - accuracy: 0.5483 - val_loss: 17.5804 - val_accuracy: 0.1907
Epoch 3/30
26/26 [==============================] - 8s 315ms/step - loss: 1.0708 - accuracy: 0.5792 - val_loss: 2.2719 - val_accuracy: 0.2480
Epoch 4/30
26/26 [==============================] - 8s 315ms/step - loss: 1.0225 - accuracy: 0.6170 - val_loss: 2.1274 - val_accuracy: 0.2398
Epoch 5/30