# Compute predictions
y_prediction = self.student(x, training=False)

# Update the metrics
self.compiled_metrics.update_state(y, tf.nn.softmax(y_prediction, axis=1))

# Return a dict of performance
results = {m.name: m.result() for m in self.metrics}
return results
The only difference in this implementation is the way the loss is calculated. Instead of weighting the distillation loss and the student loss differently, we take their average, following Noisy Student Training.
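In other words, where a classic distiller combines the two terms with a tunable weighting factor, averaging is simply the special case where that factor is 0.5. A minimal sketch with hypothetical loss values (the function names below are illustrative, not part of the example's API):

```python
def classic_distillation_loss(student_loss, distillation_loss, alpha=0.1):
    # Classic knowledge distillation: a tunable weighting between the terms.
    return alpha * student_loss + (1 - alpha) * distillation_loss


def averaged_loss(student_loss, distillation_loss):
    # Noisy Student-style averaging: equivalent to fixing alpha at 0.5.
    return (student_loss + distillation_loss) / 2


# With hypothetical per-batch loss values:
print(f"{averaged_loss(0.8, 1.2):.2f}")  # 1.00
print(f"{classic_distillation_loss(0.8, 1.2, alpha=0.5):.2f}")  # same: 1.00
```

This removes one hyperparameter (the weighting factor) from the training setup.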
Train the student model
# Define the callbacks.
# We are using a larger decay factor to stabilize the training.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    patience=3, factor=0.5, monitor="val_accuracy"
)
early_stopping = tf.keras.callbacks.EarlyStopping(
    patience=10, restore_best_weights=True, monitor="val_accuracy"
)
# Compile and train the student model.
self_trainer = SelfTrainer(student=get_training_model(), teacher=teacher_model)
self_trainer.compile(
    # Notice we are *not* using SWA here.
    optimizer="adam",
    metrics=["accuracy"],
    student_loss_fn=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    distillation_loss_fn=tf.keras.losses.KLDivergence(),
    temperature=10,
)
history = self_trainer.fit(
    consistency_training_ds,
    epochs=EPOCHS,
    validation_data=validation_ds,
    callbacks=[reduce_lr, early_stopping],
)

# Evaluate the student model.
acc = self_trainer.evaluate(test_ds, verbose=0)
print(f"Test accuracy from student model: {acc*100}%")
Epoch 1/5
387/387 [==============================] - 39s 84ms/step - accuracy: 0.2112 - total_loss: 1.0629 - val_accuracy: 0.4180
Epoch 2/5
387/387 [==============================] - 32s 82ms/step - accuracy: 0.3341 - total_loss: 0.9554 - val_accuracy: 0.3900
Epoch 3/5
387/387 [==============================] - 31s 81ms/step - accuracy: 0.3873 - total_loss: 0.8852 - val_accuracy: 0.4580
Epoch 4/5
387/387 [==============================] - 31s 81ms/step - accuracy: 0.4294 - total_loss: 0.8423 - val_accuracy: 0.5660
Epoch 5/5
387/387 [==============================] - 31s 81ms/step - accuracy: 0.4547 - total_loss: 0.8093 - val_accuracy: 0.5880
Test accuracy from student model: 58.490002155303955%
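The temperature passed to compile above softens the probability distributions before the KL divergence is computed: dividing the logits by a temperature greater than 1 flattens the softmax output. A small standalone sketch of that softening, in pure Python with hypothetical logits (the actual example applies this inside the trainer's train_step):

```python
import math


def softmax(logits, temperature=1.0):
    # Dividing logits by a temperature > 1 flattens the distribution,
    # spreading probability mass across the non-target classes.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]


logits = [4.0, 1.0, 0.2]  # hypothetical teacher logits
sharp = softmax(logits, temperature=1)
soft = softmax(logits, temperature=10)
print(max(sharp))  # close to 1: a confident, peaked distribution
print(max(soft))   # well below 0.5: mass spread across classes
```

The flatter teacher distribution carries more information about how classes relate to each other, which is what the KL divergence term pushes the student to match.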
Assess the robustness of the models
A standard way of assessing the robustness of vision models is to record their performance on corrupted datasets such as ImageNet-C and CIFAR-10-C, both of which were proposed in Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. For this example, we will be using the CIFAR-10-C dataset ...
Run the pre-trained models on the highest level of severities and obtain the top-1 accuracies.
Compute the mean top-1 accuracy.
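As a rough sketch of the aggregation in the second step: collect one top-1 accuracy per corruption type at the highest severity, then average them. The corruption names and accuracy values below are hypothetical placeholders, not measured results:

```python
# Per-corruption top-1 accuracies at the highest severity level.
# These values are placeholders for illustration only.
per_corruption_top1 = {
    "gaussian_noise": 0.42,
    "motion_blur": 0.51,
    "fog": 0.48,
}

# Mean top-1 accuracy is a simple average over corruption types.
mean_top1 = sum(per_corruption_top1.values()) / len(per_corruption_top1)
print(f"Mean top-1 accuracy: {mean_top1:.4f}")  # 0.4700
```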
For the purpose of this example, we won't go through these steps; this is also why we trained the models for only 5 epochs. You can check out this repository, which demonstrates the full-scale training experiments as well as the aforementioned assessment. The figure below presents an executive summary of that assessment:
Mean Top-1 results correspond to the CIFAR-10-C dataset, and Test Top-1 results correspond to the CIFAR-10 test set. It's clear that consistency training not only enhances model robustness but also improves standard test performance.
How to train a deep convolutional autoencoder for image denoising.
Introduction
This example demonstrates how to implement a deep convolutional autoencoder for image denoising, mapping noisy digit images from the MNIST dataset to clean digit images. This implementation is based on an original blog post titled Building Autoencoders in Keras by François Chollet.
Setup
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Model
def preprocess(array):
    """
    Normalizes the supplied array and reshapes it into the appropriate format.
    """
    array = array.astype("float32") / 255.0
    array = np.reshape(array, (len(array), 28, 28, 1))
    return array


def noise(array):
    """
    Adds random noise to each image in the supplied array.
    """
    noise_factor = 0.4
    noisy_array = array + noise_factor * np.random.normal(
        loc=0.0, scale=1.0, size=array.shape
    )
    return np.clip(noisy_array, 0.0, 1.0)
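As a quick sanity check, the two helpers can be exercised on a random stand-in batch. The helper definitions are repeated here so the snippet runs on its own, and the random array merely stands in for the real MNIST download:

```python
import numpy as np


def preprocess(array):
    # Normalize 8-bit pixel values to [0, 1] and add a channel axis.
    array = array.astype("float32") / 255.0
    return np.reshape(array, (len(array), 28, 28, 1))


def noise(array):
    # Add Gaussian noise, then clip back into the valid [0, 1] range.
    noise_factor = 0.4
    noisy = array + noise_factor * np.random.normal(
        loc=0.0, scale=1.0, size=array.shape
    )
    return np.clip(noisy, 0.0, 1.0)


# Stand-in for mnist.load_data(): a random batch of 8-bit "images".
fake_images = np.random.randint(0, 256, size=(16, 28, 28)).astype("uint8")

clean = preprocess(fake_images)
noisy = noise(clean)
print(clean.shape)  # (16, 28, 28, 1)
print(float(noisy.min()) >= 0.0 and float(noisy.max()) <= 1.0)  # True
```

The clipping matters: without it, adding Gaussian noise would push some pixel values outside the [0, 1] range the autoencoder is trained to reconstruct.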