Epoch 16/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0038 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0068
Epoch 17/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0039 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0066
Epoch 18/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0038 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0064
Epoch 19/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0035 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0071
Epoch 20/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0038 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0066
Epoch 21/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0038 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0068
Epoch 22/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0034 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0073
Epoch 23/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0035 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0078
Epoch 24/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0037 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0087
Epoch 25/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0031 - val_sparse_categorical_accuracy: 0.0108 - val_distillation_loss: 0.0078
Epoch 26/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0033 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0072
Epoch 27/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0036 - val_sparse_categorical_accuracy: 0.0098 - val_distillation_loss: 0.0071
Epoch 28/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0036 - val_sparse_categorical_accuracy: 0.0275 - val_distillation_loss: 0.0078
Epoch 29/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0032 - val_sparse_categorical_accuracy: 0.0196 - val_distillation_loss: 0.0068
Epoch 30/30
16/16 [==============================] - 37s 2s/step - distillation_loss: 0.0034 - val_sparse_categorical_accuracy: 0.0147 - val_distillation_loss: 0.0071
97/97 [==============================] - 7s 64ms/step - loss: 0.0000e+00 - accuracy: 0.0107
Top-1 accuracy on the test set: 1.07%
Results
With just 30 epochs of training, the results are nowhere near the expected performance. This is where the benefit of patience, i.e. a longer training schedule, comes into play. Let's investigate what the model trained for 1000 epochs can do.
# Download the pre-trained weights.
!wget https://git.io/JBO3Y -O S-r50x1-128-1000.tar.gz
!tar xf S-r50x1-128-1000.tar.gz
pretrained_student = keras.models.load_model("S-r50x1-128-1000")
pretrained_student.summary()
Model: "resnet"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
root_block (Sequential)      (None, 32, 32, 64)        9408
_________________________________________________________________
block1 (Sequential)          (None, 32, 32, 256)       214912
_________________________________________________________________
block2 (Sequential)          (None, 16, 16, 512)       1218048
_________________________________________________________________
block3 (Sequential)          (None, 8, 8, 1024)        7095296
_________________________________________________________________
block4 (Sequential)          (None, 4, 4, 2048)        14958592
_________________________________________________________________
group_norm (GroupNormalizati multiple                  4096
_________________________________________________________________
re_lu_97 (ReLU)              multiple                  0
_________________________________________________________________
global_average_pooling2d_1 ( multiple                  0
_________________________________________________________________
head/dense (Dense)           multiple                  208998
=================================================================
Total params: 23,709,350
Trainable params: 23,709,350
Non-trainable params: 0
_________________________________________________________________
This model exactly follows the architecture the authors used for their student models, which is why its summary looks a bit different.
_, top1_accuracy = pretrained_student.evaluate(test_ds)
print(f"Top-1 accuracy on the test set: {round(top1_accuracy * 100, 2)}%")
97/97 [==============================] - 14s 131ms/step - loss: 0.0000e+00 - accuracy: 0.8102
Top-1 accuracy on the test set: 81.02%
With 100000 epochs of training, this same model leads to a top-1 accuracy of 95.54%.
The paper presents a number of important ablation studies that show the effectiveness of these recipes compared to the prior art. So if you are skeptical about these recipes, definitely consult the paper.
Note on training for longer
With TPU-based hardware infrastructure, we can train the model for 1000 epochs much faster. This does not even require many changes to this codebase. You are encouraged to check this repository, as it presents TPU-compatible training workflows for these recipes and can be run on Kaggle Kernels, leveraging their free TPU v3-8 hardware.
Using compositional and mixed-dimension embeddings for memory-efficient recommendation models
Introduction
This example demonstrates two techniques for building memory-efficient recommendation models by reducing the size of the embedding tables, without sacrificing model effectiveness:
Quotient-remainder trick, by Hao-Jun Michael Shi et al., which reduces the number of embedding vectors to store, yet produces a unique embedding vector for each item without explicit definition.
Mixed Dimension embeddings, by Antonio Ginart et al., which stores embedding vectors with mixed dimensions, where less popular items have reduced-dimension embeddings.
We use the 1M version of the MovieLens dataset. The dataset includes around 1 million ratings from 6,000 users on 4,000 movies.
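Before building the full model, it may help to see the quotient-remainder trick in isolation. The sketch below is a minimal NumPy illustration (the sizes and table names are illustrative, not taken from this example): instead of one `vocab_size × dim` table, we keep two small tables indexed by the quotient and remainder of the item index, and combine their rows with an element-wise product so every index still gets a distinct vector.

```python
import numpy as np

# Illustrative sizes (hypothetical, not from this example).
vocab_size = 4000
embedding_dim = 16
num_buckets = int(np.ceil(np.sqrt(vocab_size)))  # 64 buckets

rng = np.random.default_rng(0)
# Two small tables replace one vocab_size x embedding_dim table:
# 63 + 64 = 127 rows instead of 4000.
quotient_table = rng.normal(size=(int(np.ceil(vocab_size / num_buckets)), embedding_dim))
remainder_table = rng.normal(size=(num_buckets, embedding_dim))

def qr_embedding(index):
    # index = q * num_buckets + r is a bijection, so each index
    # gets a unique (quotient, remainder) pair and hence a unique vector.
    q, r = divmod(index, num_buckets)
    return quotient_table[q] * remainder_table[r]

vec = qr_embedding(1234)  # shape: (embedding_dim,)
```

Storage drops from `4000 × 16` floats to `(63 + 64) × 16`, a roughly 30x reduction in this toy setting.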
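Mixed-dimension embeddings can be sketched in the same spirit (again with hypothetical sizes and names): frequent items get full-width embeddings, infrequent items get narrow ones, and a learned projection lifts the narrow vectors back to the shared dimension so downstream layers see a uniform width.

```python
import numpy as np

rng = np.random.default_rng(0)
full_dim = 16
reduced_dim = 4
# Hypothetical popularity split: 500 frequent items, 3500 infrequent ones.
num_frequent, num_infrequent = 500, 3500

frequent_table = rng.normal(size=(num_frequent, full_dim))
infrequent_table = rng.normal(size=(num_infrequent, reduced_dim))
# Projection (learned in practice) maps reduced vectors to full_dim.
projection = rng.normal(size=(reduced_dim, full_dim))

def md_embedding(index):
    # Items are assumed sorted by popularity: low indices are frequent.
    if index < num_frequent:
        return frequent_table[index]
    return infrequent_table[index - num_frequent] @ projection
```

Here storage is `500 × 16 + 3500 × 4 + 4 × 16 = 22,064` floats versus `4000 × 16 = 64,000` for a single full-width table.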
Setup
import os
import math
from zipfile import ZipFile
from urllib.request import urlretrieve
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import StringLookup
import matplotlib.pyplot as plt