    )(movie_input)
    # Compute dot product similarity between user and movie embeddings.
    logits = layers.Dot(axes=1, name="dot_similarity")(
        [user_embedding, movie_embedding]
    )
    # Convert to rating scale.
    prediction = keras.activations.sigmoid(logits) * 5
    # Create the model.
    model = keras.Model(
        inputs=[user_input, movie_input], outputs=prediction, name="baseline_model"
    )
    return model


baseline_model = create_baseline_model()
baseline_model.summary()
Model: "baseline_model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
user_id (InputLayer)            [(None,)]            0
__________________________________________________________________________________________________
movie_id (InputLayer)           [(None,)]            0
__________________________________________________________________________________________________
user_embedding (Sequential)     (None, 64)           386560      user_id[0][0]
__________________________________________________________________________________________________
movie_embedding (Sequential)    (None, 64)           237184      movie_id[0][0]
__________________________________________________________________________________________________
dot_similarity (Dot)            (None, 1)            0           user_embedding[0][0]
                                                                 movie_embedding[0][0]
__________________________________________________________________________________________________
tf.math.sigmoid (TFOpLambda)    (None, 1)            0           dot_similarity[0][0]
__________________________________________________________________________________________________
tf.math.multiply (TFOpLambda)   (None, 1)            0           tf.math.sigmoid[0][0]
==================================================================================================
Total params: 623,744
Trainable params: 623,744
Non-trainable params: 0
__________________________________________________________________________________________________
Notice that the number of trainable parameters is 623,744.
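As a back-of-the-envelope check, the parameter count can be recovered from the summary above, assuming the two Sequential encoders' trainable parameters are entirely their embedding tables (the StringLookup layers contribute no weights):

```python
embedding_dim = 64

# Vocabulary sizes implied by the parameter counts in model.summary().
num_users = 386560 // embedding_dim    # 6040 users
num_movies = 237184 // embedding_dim   # 3706 movies

total_params = (num_users + num_movies) * embedding_dim
print(num_users, num_movies, total_params)  # 6040 3706 623744
```

Each of the roughly 9,700 vocabulary entries gets its own 64-dimensional vector, which is exactly the cost the memory-efficient techniques below aim to cut.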
history = run_experiment(baseline_model)

plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "eval"], loc="upper left")
plt.show()
Epoch 1/3
6644/6644 [==============================] - 46s 7ms/step - loss: 1.4399 - mae: 0.9818 - val_loss: 0.9348 - val_mae: 0.7569
Epoch 2/3
6644/6644 [==============================] - 53s 8ms/step - loss: 0.8422 - mae: 0.7246 - val_loss: 0.7991 - val_mae: 0.7076
Epoch 3/3
6644/6644 [==============================] - 58s 9ms/step - loss: 0.7461 - mae: 0.6819 - val_loss: 0.7564 - val_mae: 0.6869
(Plot: training vs. validation loss per epoch.)
Experiment 2: memory-efficient model |
Implement Quotient-Remainder embedding as a layer |
The Quotient-Remainder technique works as follows. For a vocabulary of size vocabulary_size and an embedding size embedding_dim, instead of creating a vocabulary_size X embedding_dim embedding table, we create two num_buckets X embedding_dim embedding tables, where num_buckets is much smaller than vocabulary_size. An embedding for a given item index is generated via the following steps:
1. Compute the quotient_index as index // num_buckets.
2. Compute the remainder_index as index % num_buckets.
3. Lookup quotient_embedding from the first embedding table using quotient_index.
4. Lookup remainder_embedding from the second embedding table using remainder_index.
5. Return quotient_embedding * remainder_embedding.
This technique not only reduces the number of embedding vectors that need to be stored and trained, but also generates a unique embedding vector for each item of size embedding_dim. Note that quotient_embedding and remainder_embedding can be combined using operations other than multiplication, such as Add and Concatenate.
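As a quick sanity check, the steps above can be sketched in plain Python to confirm that every index below num_buckets ** 2 maps to a distinct (quotient, remainder) pair, which is why the elementwise product of the two looked-up vectors is unique per item:

```python
num_buckets = 5  # a toy bucket count for illustration

def qr_indices(index, num_buckets):
    # Steps 1 and 2: integer division and modulo split the item index
    # into a (quotient, remainder) pair of bucket indices.
    return index // num_buckets, index % num_buckets

pairs = [qr_indices(i, num_buckets) for i in range(num_buckets**2)]
# All pairs are distinct, so each item gets its own combined embedding.
assert len(set(pairs)) == num_buckets**2
print(pairs[:6])  # [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 0)]
```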
class QREmbedding(keras.layers.Layer):
    def __init__(self, vocabulary, embedding_dim, num_buckets, name=None):
        super().__init__(name=name)
        self.num_buckets = num_buckets
        self.index_lookup = StringLookup(
            vocabulary=vocabulary, mask_token=None, num_oov_indices=0
        )
        self.q_embeddings = layers.Embedding(num_buckets, embedding_dim)
        self.r_embeddings = layers.Embedding(num_buckets, embedding_dim)

    def call(self, inputs):
        # Get the item index.
        embedding_index = self.index_lookup(inputs)
        # Get the quotient index.
        quotient_index = tf.math.floordiv(embedding_index, self.num_buckets)
        # Get the remainder index.
        remainder_index = tf.math.floormod(embedding_index, self.num_buckets)
        # Lookup the quotient_embedding using the quotient_index.
        quotient_embedding = self.q_embeddings(quotient_index)
        # Lookup the remainder_embedding using the remainder_index.
        remainder_embedding = self.r_embeddings(remainder_index)
        # Use multiplication as a combiner operation.
        return quotient_embedding * remainder_embedding
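To see the memory savings concretely, here is a minimal NumPy sketch mirroring the lookup logic of QREmbedding.call (the sizes below are hypothetical, chosen only for illustration; the Keras layer above is the actual implementation):

```python
import numpy as np

vocabulary_size, embedding_dim, num_buckets = 10_000, 64, 100

rng = np.random.default_rng(0)
# Two small tables stand in for the trainable q_embeddings / r_embeddings.
q_table = rng.normal(size=(num_buckets, embedding_dim))
r_table = rng.normal(size=(num_buckets, embedding_dim))

def qr_embed(indices):
    # Quotient/remainder lookups combined by elementwise multiplication.
    return q_table[indices // num_buckets] * r_table[indices % num_buckets]

emb = qr_embed(np.array([0, 1, 9_999]))
print(emb.shape)  # (3, 64)

full_params = vocabulary_size * embedding_dim  # 640,000 for a full table
qr_params = 2 * num_buckets * embedding_dim    # 12,800 for the two QR tables
```

With these toy numbers the QR scheme stores 50x fewer embedding parameters while still producing a distinct 64-dimensional vector per item.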
Implement Mixed Dimension embedding as a layer |
In the mixed dimension embedding technique, we train full-dimension embedding vectors for the frequently queried items, and reduced-dimension embedding vectors for less frequent items, plus a projection weights matrix that brings the low-dimension embeddings up to the full dimension.
More precisely, we define blocks of items of similar frequencies. For each block, a block_vocab_size X block_embedding_dim embedding table and block_embedding_dim X full_embedding_dim projection weights matrix are created. Note that, if block_embedding_dim equals full_embedding_dim, the projection weights matrix becomes an identity matrix. Embeddings for a given batch of item indices are generated via the following steps: |
For each block, lookup the block_embedding_dim embedding vectors using indices, and project them to the full_embedding_dim. |