1563/1563 [==============================] - 37s 24ms/step - loss: 1.2987 - accuracy: 0.7099 - val_loss: 0.8409 - val_accuracy: 0.7766
Epoch 15/15
1563/1563 [==============================] - 37s 24ms/step - loss: 1.2953 - accuracy: 0.7099 - val_loss: 0.7850 - val_accuracy: 0.8014
313/313 [==============================] - 3s 9ms/step - loss: 0.7850 - accuracy: 0.8014
Test accuracy: 80.14%
Train the model using the original non-augmented dataset
model = training_model()
model.load_weights("initial_weights.h5")
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(train_ds_simple, validation_data=test_ds, epochs=15)
test_loss, test_accuracy = model.evaluate(test_ds)
print("Test accuracy: {:.2f}%".format(test_accuracy * 100))
Epoch 1/15
1563/1563 [==============================] - 38s 23ms/step - loss: 1.4864 - accuracy: 0.5173 - val_loss: 1.3694 - val_accuracy: 0.5708
Epoch 2/15
1563/1563 [==============================] - 36s 23ms/step - loss: 1.0682 - accuracy: 0.6779 - val_loss: 1.1424 - val_accuracy: 0.6686
Epoch 3/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.8955 - accuracy: 0.7449 - val_loss: 1.0555 - val_accuracy: 0.7007
Epoch 4/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.7890 - accuracy: 0.7878 - val_loss: 1.0575 - val_accuracy: 0.7079
Epoch 5/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.7107 - accuracy: 0.8175 - val_loss: 1.1395 - val_accuracy: 0.7062
Epoch 6/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.6524 - accuracy: 0.8397 - val_loss: 1.1716 - val_accuracy: 0.7042
Epoch 7/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.6098 - accuracy: 0.8594 - val_loss: 1.4120 - val_accuracy: 0.6786
Epoch 8/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.5715 - accuracy: 0.8765 - val_loss: 1.3159 - val_accuracy: 0.7011
Epoch 9/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.5477 - accuracy: 0.8872 - val_loss: 1.2873 - val_accuracy: 0.7182
Epoch 10/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.5233 - accuracy: 0.8988 - val_loss: 1.4118 - val_accuracy: 0.6964
Epoch 11/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.5165 - accuracy: 0.9045 - val_loss: 1.3741 - val_accuracy: 0.7230
Epoch 12/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.5008 - accuracy: 0.9124 - val_loss: 1.3984 - val_accuracy: 0.7181
Epoch 13/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.4896 - accuracy: 0.9190 - val_loss: 1.3642 - val_accuracy: 0.7209
Epoch 14/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.4845 - accuracy: 0.9231 - val_loss: 1.5469 - val_accuracy: 0.6992
Epoch 15/15
1563/1563 [==============================] - 36s 23ms/step - loss: 0.4749 - accuracy: 0.9294 - val_loss: 1.4034 - val_accuracy: 0.7362
313/313 [==============================] - 3s 9ms/step - loss: 1.4034 - accuracy: 0.7362
Test accuracy: 73.62%
Notes
In this example, we trained the model for 15 epochs. In the runs shown above, the model trained with CutMix reaches a higher test accuracy on CIFAR-10 (80.14%) than the model trained without augmentation (73.62%). The CutMix model also overfits far less: its training and validation accuracies stay close, while the non-augmented model reaches 92.94% training accuracy against only 73.62% on the test set. Per-epoch training time is nearly identical for the two runs.
You can experiment further with the CutMix technique by following the original paper.
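The core CutMix operation can be sketched in a few lines of NumPy. This is an illustrative standalone version (the names `rand_bbox` and `cutmix_pair` are ours, not part of the example above): sample a mixing ratio λ from a Beta(α, α) distribution, paste a random patch from a second image into the first, and mix the one-hot labels in proportion to the pasted area.

```python
import numpy as np

def rand_bbox(h, w, lam, rng):
    # Choose a random box whose area fraction is roughly (1 - lam).
    cut_ratio = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(h * cut_ratio), int(w * cut_ratio)
    cy, cx = rng.integers(h), rng.integers(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    return y1, y2, x1, x2

def cutmix_pair(img1, lab1, img2, lab2, alpha=0.25, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    h, w = img1.shape[:2]
    y1, y2, x1, x2 = rand_bbox(h, w, lam, rng)
    mixed = img1.copy()
    mixed[y1:y2, x1:x2] = img2[y1:y2, x1:x2]
    # Recompute lambda from the exact patch area actually pasted
    # (clipping at the borders can shrink the box).
    lam = 1.0 - ((y2 - y1) * (x2 - x1)) / (h * w)
    mixed_label = lam * lab1 + (1.0 - lam) * lab2
    return mixed, mixed_label
```

Because the labels are mixed by area, the soft label always sums to 1 for one-hot inputs, which is what makes it compatible with `categorical_crossentropy`.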
Few-shot classification of the Omniglot dataset using Reptile.
Introduction
The Reptile algorithm was developed by OpenAI to perform model-agnostic meta-learning. Specifically, this algorithm was designed to quickly learn to perform new tasks with minimal training (few-shot learning). The algorithm works by performing Stochastic Gradient Descent using the difference between weights trained on ...
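The update described above can be sketched in plain NumPy. This is an illustrative outline, not the example's actual implementation (the `grad_fn` callback and all names here are our own): run a few steps of ordinary SGD on one sampled task, then move the meta-weights a fraction `meta_step_size` of the way toward the task-adapted weights.

```python
import numpy as np

def reptile_step(weights, task_batches, inner_lr, meta_step_size, grad_fn):
    # Inner loop: plain SGD on one task, starting from the meta-weights.
    # grad_fn(weights, x, y) is assumed to return one gradient per weight.
    adapted = [w.copy() for w in weights]
    for x, y in task_batches:
        grads = grad_fn(adapted, x, y)
        adapted = [w - inner_lr * g for w, g in zip(adapted, grads)]
    # Outer (meta) update: theta <- theta + eps * (phi - theta), i.e.
    # move the meta-weights toward the weights found by the inner loop.
    return [w + meta_step_size * (a - w) for w, a in zip(weights, adapted)]
```

Note that the meta-update uses the weight difference directly, not a gradient of a meta-loss; this is what makes Reptile cheaper than algorithms that differentiate through the inner loop.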
import matplotlib.pyplot as plt
import numpy as np
import random
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_datasets as tfds
Define the Hyperparameters
learning_rate = 0.003
meta_step_size = 0.25
inner_batch_size = 25
eval_batch_size = 25
meta_iters = 2000
eval_iters = 5
inner_iters = 4
eval_interval = 1
train_shots = 20
shots = 5
classes = 5
Prepare the data
The Omniglot dataset is a dataset of 1,623 characters taken from 50 different alphabets, with 20 examples for each character. The 20 samples for each character were drawn online via Amazon's Mechanical Turk. For the few-shot learning task, k samples (or "shots") are drawn randomly from n randomly-chosen classes. Thes...
class Dataset:
    # This class will facilitate the creation of a few-shot dataset
    # from the Omniglot dataset that can be sampled from quickly while also
    # allowing to create new labels at the same time.
    def __init__(self, training):
        # Download the tfrecord files containing the omniglot data and convert to a
        # dataset.
        split = "train" if training else "test"
        ds = tfds.load("omniglot", split=split, as_supervised=True, shuffle_files=False)
        # Iterate over the dataset to get each individual image and its class,
        # and put that data into a dictionary.
        self.data = {}

        def extraction(image, label):
            # This function will shrink the Omniglot images to the desired size,
            # scale pixel values and convert the RGB image to grayscale
            image = tf.image.convert_image_dtype(image, tf.float32)
            image = tf.image.rgb_to_grayscale(image)
            image = tf.image.resize(image, [28, 28])
            return image, label
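To see what the `extraction` preprocessing produces without running TensorFlow, here is a rough NumPy analogue (an illustrative sketch only: the grayscale weights approximate `tf.image.rgb_to_grayscale`, and the resize is simple subsampling rather than TF's default bilinear interpolation).

```python
import numpy as np

def extraction_np(image):
    # Scale uint8 pixels to [0, 1], as tf.image.convert_image_dtype does.
    image = image.astype(np.float32) / 255.0
    # Approximate ITU-R 601 luma conversion from RGB to grayscale.
    gray = image @ np.array([0.2989, 0.5870, 0.1140], dtype=np.float32)
    # Crude nearest-neighbor downsample to 28x28.
    h, w = gray.shape
    ys = np.arange(28) * h // 28
    xs = np.arange(28) * w // 28
    small = gray[np.ix_(ys, xs)]
    # Restore the trailing channel axis: (28, 28, 1).
    return small[..., None]
```

Omniglot images are 105x105 RGB, so the output here is a 28x28x1 grayscale tensor, matching the shape produced by the TF pipeline above.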