train_s = np.r_[
    training[window_length - 1 : 0 : -1], training, training[-1:-window_length:-1]
]
test_s = np.r_[
    testing[window_length - 1 : 0 : -1], testing, testing[-1:-window_length:-1]
]
w = np.hamming(window_length)
train_y = np.convolve(w / w.sum(), train_s, mode="valid")
test_y = np.convolve(w / w.sum(), test_s, mode="valid")
# Display the smoothed train and test accuracies.
x = np.arange(0, len(test_y), 1)
plt.plot(x, test_y, x, train_y)
plt.legend(["test", "train"])
plt.grid()
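As a standalone sanity check, the pad-and-convolve recipe above can be sketched on a toy NumPy signal (the `signal` values below are made up for illustration):

```python
import numpy as np

window_length = 5

# Toy stand-in for an accuracy curve; the values are hypothetical.
signal = np.linspace(0.2, 0.9, 20) + 0.05 * np.sin(np.arange(20))

# Reflect window_length - 1 samples at each end, exactly as np.r_[...] does
# above, so the moving average has full support near the boundaries.
padded = np.r_[
    signal[window_length - 1 : 0 : -1], signal, signal[-1:-window_length:-1]
]

# Normalized Hamming window: the weights sum to 1, so the output stays on the
# same scale as the input.
w = np.hamming(window_length)
smoothed = np.convolve(w / w.sum(), padded, mode="valid")

# A "valid" convolution on the padded signal yields
# len(signal) + window_length - 1 samples.
print(len(signal), len(padded), len(smoothed))  # prints: 20 28 24
```

Note that the smoothed curve is slightly longer than the input; for side-by-side plotting it is usually trimmed back to the original length.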
train_set, test_images, test_labels = dataset.get_mini_dataset(
    eval_batch_size, eval_iters, shots, classes, split=True
)
for images, labels in train_set:
    with tf.GradientTape() as tape:
        preds = model(images)
        loss = keras.losses.sparse_categorical_crossentropy(labels, preds)
    grads = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
test_preds = model.predict(test_images)
# Reduce over the class axis to get one predicted label per test image.
test_preds = tf.argmax(test_preds, axis=1).numpy()
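A quick NumPy sketch (with made-up scores) of why reducing over the class axis yields one label per test image:

```python
import numpy as np

# Hypothetical prediction scores for 3 images over 5 classes.
scores = np.array(
    [
        [0.10, 0.70, 0.10, 0.05, 0.05],
        [0.60, 0.10, 0.10, 0.10, 0.10],
        [0.05, 0.05, 0.10, 0.10, 0.70],
    ]
)

# axis=1 reduces across the class dimension, producing one predicted
# label per row (i.e., per image).
labels = scores.argmax(axis=1)
print(labels.tolist())  # prints: [1, 0, 4]
```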
_, axarr = plt.subplots(nrows=1, ncols=5, figsize=(20, 20))
sample_keys = list(train_dataset.data.keys())
for i, ax in zip(range(5), axarr):
    temp_image = np.stack((test_images[i, :, :, 0],) * 3, axis=2)
    temp_image *= 255
    temp_image = np.clip(temp_image, 0, 255).astype("uint8")
    ax.set_title(
        "Label : {}, Prediction : {}".format(int(test_labels[i]), test_preds[i])
    )
    ax.imshow(temp_image, cmap="gray")
    ax.xaxis.set_visible(False)
    ax.yaxis.set_visible(False)
plt.show()
Mitigating resolution discrepancy between training and test sets
Introduction
It is a common practice to use the same input image resolution while training and testing vision models. However, as investigated in Fixing the train-test resolution discrepancy (Touvron et al.), this practice leads to suboptimal performance. Data augmentation is an indispensable part of the training process of deep neural networks.
In this example, we implement the FixRes techniques introduced by Touvron et al. to fix this discrepancy.
Imports
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow as tf
import tensorflow_datasets as tfds

tfds.disable_progress_bar()

import matplotlib.pyplot as plt
Load the tf_flowers dataset
train_dataset, val_dataset = tfds.load(
    "tf_flowers", split=["train[:90%]", "train[90%:]"], as_supervised=True
)

num_train = train_dataset.cardinality()
num_val = val_dataset.cardinality()
print(f"Number of training examples: {num_train}")
print(f"Number of validation examples: {num_val}")
Number of training examples: 3303
Number of validation examples: 367
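The printed counts can be sanity-checked by hand: tf_flowers contains 3,670 images in total, and the 90/10 percentage split above divides them as follows (a minimal sketch of the arithmetic, assuming the boundary index is `total * 0.90` rounded down):

```python
total = 3670  # total number of images in tf_flowers

num_train = int(total * 0.90)  # train[:90%]
num_val = total - num_train    # train[90%:]

print(num_train, num_val)  # prints: 3303 367
```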
Data preprocessing utilities
We create three datasets:
A dataset with a smaller resolution - 128x128.
Two datasets with a larger resolution - 224x224.
We will apply different augmentation transforms to the larger-resolution datasets.
The idea of FixRes is to first train a model on a smaller resolution dataset and then fine-tune it on a larger resolution dataset. This simple yet effective recipe leads to non-trivial performance improvements. Please refer to the original paper for results.
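The resize-and-crop arithmetic behind this recipe can be sketched directly: at the larger resolution, images are first resized to a proportionally bigger target and then center-cropped back down, so the crop keeps the same fraction of the image that the smaller training resolution did. A back-of-the-envelope check, using the same sizes as the code below:

```python
smaller_size = 128
bigger_size = 224

# Resize target applied before the center crop (same formula as below).
size_for_resizing = int((bigger_size / smaller_size) * bigger_size)
print(size_for_resizing)  # prints: 392

# Fraction of the resized image kept by the bigger_size center crop.
crop_fraction = bigger_size / size_for_resizing

# It equals the ratio of the two resolutions, which is one way to keep the
# apparent object scale comparable across the two training phases.
print(round(crop_fraction, 4), round(smaller_size / bigger_size, 4))  # prints: 0.5714 0.5714
```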
# Reference: https://github.com/facebookresearch/FixRes/blob/main/transforms_v2.py.
batch_size = 128
auto = tf.data.AUTOTUNE
smaller_size = 128
bigger_size = 224

size_for_resizing = int((bigger_size / smaller_size) * bigger_size)
central_crop_layer = layers.CenterCrop(bigger_size, bigger_size)


def preprocess_initial(train, image_size):
    """Initial preprocessing function for training on smaller resolution.

    For training, do random_horizontal_flip -> random_crop.
    For validation, just resize.
    No color-jittering has been used.