26/26 [==============================] - 8s 316ms/step - loss: 1.0001 - accuracy: 0.6206 - val_loss: 2.0375 - val_accuracy: 0.2834
Epoch 6/30
26/26 [==============================] - 8s 315ms/step - loss: 0.9602 - accuracy: 0.6355 - val_loss: 1.4412 - val_accuracy: 0.3978
Epoch 7/30
26/26 [==============================] - 8s 316ms/step - loss: 0.9418 - accuracy: 0.6461 - val_loss: 1.5257 - val_accuracy: 0.4305
Epoch 8/30
26/26 [==============================] - 8s 316ms/step - loss: 0.8911 - accuracy: 0.6649 - val_loss: 1.1530 - val_accuracy: 0.5858
Epoch 9/30
26/26 [==============================] - 8s 316ms/step - loss: 0.8834 - accuracy: 0.6694 - val_loss: 1.2026 - val_accuracy: 0.5531
Epoch 10/30
26/26 [==============================] - 8s 316ms/step - loss: 0.8752 - accuracy: 0.6724 - val_loss: 1.4917 - val_accuracy: 0.5695
Epoch 11/30
26/26 [==============================] - 8s 316ms/step - loss: 0.8690 - accuracy: 0.6594 - val_loss: 1.4115 - val_accuracy: 0.6022
Epoch 12/30
26/26 [==============================] - 8s 314ms/step - loss: 0.8586 - accuracy: 0.6761 - val_loss: 1.0692 - val_accuracy: 0.6349
Epoch 13/30
26/26 [==============================] - 8s 315ms/step - loss: 0.8120 - accuracy: 0.6894 - val_loss: 1.5233 - val_accuracy: 0.6567
Epoch 14/30
26/26 [==============================] - 8s 316ms/step - loss: 0.8275 - accuracy: 0.6857 - val_loss: 1.9079 - val_accuracy: 0.5804
Epoch 15/30
26/26 [==============================] - 8s 316ms/step - loss: 0.7624 - accuracy: 0.7127 - val_loss: 0.9543 - val_accuracy: 0.6540
Epoch 16/30
26/26 [==============================] - 8s 315ms/step - loss: 0.7595 - accuracy: 0.7266 - val_loss: 4.5757 - val_accuracy: 0.4877
Epoch 17/30
26/26 [==============================] - 8s 315ms/step - loss: 0.7577 - accuracy: 0.7154 - val_loss: 1.8411 - val_accuracy: 0.5749
Epoch 18/30
26/26 [==============================] - 8s 316ms/step - loss: 0.7596 - accuracy: 0.7163 - val_loss: 1.0660 - val_accuracy: 0.6703
Epoch 19/30
26/26 [==============================] - 8s 315ms/step - loss: 0.7492 - accuracy: 0.7160 - val_loss: 1.2462 - val_accuracy: 0.6485
Epoch 20/30
26/26 [==============================] - 8s 315ms/step - loss: 0.7269 - accuracy: 0.7330 - val_loss: 5.8287 - val_accuracy: 0.3379
Epoch 21/30
26/26 [==============================] - 8s 315ms/step - loss: 0.7193 - accuracy: 0.7275 - val_loss: 4.7058 - val_accuracy: 0.6049
Epoch 22/30
26/26 [==============================] - 8s 316ms/step - loss: 0.7251 - accuracy: 0.7318 - val_loss: 1.5608 - val_accuracy: 0.6485
Epoch 23/30
26/26 [==============================] - 8s 314ms/step - loss: 0.6888 - accuracy: 0.7466 - val_loss: 1.7914 - val_accuracy: 0.6240
Epoch 24/30
26/26 [==============================] - 8s 314ms/step - loss: 0.7051 - accuracy: 0.7339 - val_loss: 2.0918 - val_accuracy: 0.6158
Epoch 25/30
26/26 [==============================] - 8s 315ms/step - loss: 0.6920 - accuracy: 0.7454 - val_loss: 0.7284 - val_accuracy: 0.7575
Epoch 26/30
26/26 [==============================] - 8s 316ms/step - loss: 0.6502 - accuracy: 0.7523 - val_loss: 2.5474 - val_accuracy: 0.5313
Epoch 27/30
26/26 [==============================] - 8s 315ms/step - loss: 0.7101 - accuracy: 0.7330 - val_loss: 26.8117 - val_accuracy: 0.3297
Epoch 28/30
26/26 [==============================] - 8s 315ms/step - loss: 0.6632 - accuracy: 0.7548 - val_loss: 20.1011 - val_accuracy: 0.3243
Epoch 29/30
26/26 [==============================] - 8s 315ms/step - loss: 0.6682 - accuracy: 0.7505 - val_loss: 11.5872 - val_accuracy: 0.3297
Epoch 30/30
26/26 [==============================] - 8s 315ms/step - loss: 0.6758 - accuracy: 0.7514 - val_loss: 5.7229 - val_accuracy: 0.4305
3/3 [==============================] - 0s 95ms/step - loss: 5.7229 - accuracy: 0.4305
Top-1 accuracy on the validation set: 43.05%.
As the cells above show, FixRes leads to better performance. FixRes also reduces total training time and GPU memory usage. Because FixRes is model-agnostic, you can apply it to any image classification model to potentially boost performance.
You can find more results that were gathered by running the same code with different random seeds.
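The FixRes recipe itself can be sketched in a few lines (a minimal, illustrative sketch: the backbone, resolutions, and dataset names below are assumptions, not the exact code from this example). The idea is to train at a smaller resolution and then fine-tune the same model at the test-time resolution, which works because a fully convolutional backbone with global pooling accepts inputs of any spatial size:

```python
from tensorflow import keras


def build_model(num_classes=10):
    # include_top=False plus global average pooling makes the network
    # accept images of any resolution, which is what FixRes relies on.
    backbone = keras.applications.ResNet50(
        include_top=False, weights=None, pooling="avg"
    )
    outputs = keras.layers.Dense(num_classes, activation="softmax")(
        backbone.output
    )
    return keras.Model(backbone.input, outputs)


model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Phase 1: train on images resized to a smaller resolution, e.g. 128x128.
# model.fit(train_ds_128, epochs=...)   # train_ds_128 is hypothetical

# Phase 2: fine-tune (typically with a lower learning rate) on images
# at the test-time resolution, e.g. 224x224.
# model.fit(train_ds_224, epochs=...)   # train_ds_224 is hypothetical
```

Because both phases reuse the same weights and only the input resolution changes, the second phase is a short fine-tuning run rather than a full retraining.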
How to obtain a class activation heatmap for an image classification model.
Adapted from Deep Learning with Python (2017).
Setup
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Display
from IPython.display import Image, display
import matplotlib.pyplot as plt
import matplotlib.cm as cm
Configurable parameters
You can change these to use another model.
To get the value for last_conv_layer_name, use model.summary() to see the names of all the layers in the model.
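For instance, a quick way to list candidate layer names (a minimal sketch; weights=None builds the architecture without downloading pretrained weights, which is all we need here):

```python
from tensorflow import keras

# Build the architecture only; weights are not needed to list layer names.
model = keras.applications.xception.Xception(weights=None)

# Print every layer name. For Xception, the last convolutional
# activation layer is "block14_sepconv2_act".
for layer in model.layers:
    print(layer.name)
```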
model_builder = keras.applications.xception.Xception
img_size = (299, 299)
preprocess_input = keras.applications.xception.preprocess_input
decode_predictions = keras.applications.xception.decode_predictions

last_conv_layer_name = "block14_sepconv2_act"

# The local path to our target image
img_path = keras.utils.get_file(
    "african_elephant.jpg", "https://i.imgur.com/Bvro0YD.png"
)

display(Image(img_path))
The Grad-CAM algorithm
def get_img_array(img_path, size):
    # `img` is a PIL image of size 299x299
    img = keras.preprocessing.image.load_img(img_path, target_size=size)
    # `array` is a float32 Numpy array of shape (299, 299, 3)
    array = keras.preprocessing.image.img_to_array(img)
    # We add a dimension to transform our array into a "batch"
    # of size (1, 299, 299, 3)
    array = np.expand_dims(array, axis=0)
    return array
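The heatmap computation this section builds toward can be sketched as follows (a hedged sketch of the standard Grad-CAM procedure, not necessarily identical to the example's final code): compute the gradient of the class score with respect to the last convolutional feature map, average the gradients over the spatial dimensions to get one weight per channel, then take the weighted sum of the channels.

```python
import tensorflow as tf
from tensorflow import keras


def make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=None):
    # A model mapping the input image to the last conv layer's
    # activations as well as the final predictions.
    grad_model = keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )

    # Gradient of the predicted (or given) class score with respect
    # to the last conv layer's feature map.
    with tf.GradientTape() as tape:
        last_conv_layer_output, preds = grad_model(img_array)
        if pred_index is None:
            pred_index = tf.argmax(preds[0])
        class_channel = preds[:, pred_index]
    grads = tape.gradient(class_channel, last_conv_layer_output)

    # Mean gradient intensity per feature-map channel.
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))

    # Weight each channel by its importance for the class and sum.
    last_conv_layer_output = last_conv_layer_output[0]
    heatmap = last_conv_layer_output @ pooled_grads[..., tf.newaxis]
    heatmap = tf.squeeze(heatmap)

    # Clip negatives and normalize for visualization.
    heatmap = tf.maximum(heatmap, 0) / tf.math.reduce_max(heatmap)
    return heatmap.numpy()
```

The returned heatmap has the spatial shape of the last conv layer's feature map; it is typically resized to the input image's size and overlaid for display.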