If an item index does not belong to a given block, an out-of-vocabulary embedding is returned. Each block returns a batch_size X full_embedding_dim tensor.
A mask is applied to the embeddings returned from each block in order to convert the out-of-vocabulary embeddings to vectors of zeros. That is, for each item in the batch, a single non-zero embedding vector is returned from all the block embeddings.
Embeddings retrieved from the blocks are combined by summation to produce the final batch_size X full_embedding_dim tensor.
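The mask-and-sum combination described above can be sketched in isolation before reading the full layer. Everything in this toy example (the batch of 4 items, the 2 blocks, the constant per-block embeddings) is invented for illustration:

```python
import tensorflow as tf

# Toy setup: a batch of 4 items, 2 blocks, base dimension 3.
# block_indices[i] says which block item i belongs to.
block_indices = tf.constant([0, 1, 0, 1])
base_embedding_dim = 3

# Pretend each block already produced a full batch of projected embeddings:
# block 0 emits all-ones rows, block 1 emits all-twos rows.
block_0 = tf.ones((4, base_embedding_dim)) * 1.0
block_1 = tf.ones((4, base_embedding_dim)) * 2.0

embeddings = tf.zeros((4, base_embedding_dim))
for idx, block_embeddings in enumerate([block_0, block_1]):
    # Zero out rows for items that do not belong to this block.
    mask = tf.expand_dims(tf.cast(block_indices == idx, tf.float32), 1)
    embeddings += block_embeddings * mask

print(embeddings.numpy())
```

Each row of the result comes from exactly one block: items routed to block 0 keep the all-ones row, items routed to block 1 keep the all-twos row, and the sum never mixes two blocks for the same item.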
class MDEmbedding(keras.layers.Layer):
    def __init__(
        self, blocks_vocabulary, blocks_embedding_dims, base_embedding_dim, name=None
    ):
        super(MDEmbedding, self).__init__(name=name)
        self.num_blocks = len(blocks_vocabulary)
        self.base_embedding_dim = base_embedding_dim

        # Create vocab to block lookup.
        keys = []
        values = []
        for block_idx, block_vocab in enumerate(blocks_vocabulary):
            keys.extend(block_vocab)
            values.extend([block_idx] * len(block_vocab))
        self.vocab_to_block = tf.lookup.StaticHashTable(
            tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1
        )

        self.block_embedding_encoders = []
        self.block_embedding_projectors = []

        # Create block embedding encoders and projectors.
        for idx in range(self.num_blocks):
            vocabulary = blocks_vocabulary[idx]
            embedding_dim = blocks_embedding_dims[idx]
            block_embedding_encoder = embedding_encoder(
                vocabulary, embedding_dim, num_oov_indices=1
            )
            self.block_embedding_encoders.append(block_embedding_encoder)
            if embedding_dim == base_embedding_dim:
                self.block_embedding_projectors.append(layers.Lambda(lambda x: x))
            else:
                self.block_embedding_projectors.append(
                    layers.Dense(units=base_embedding_dim)
                )

    def call(self, inputs):
        # Get block index for each input item.
        block_indices = self.vocab_to_block.lookup(inputs)
        # Initialize output embeddings to zeros.
        embeddings = tf.zeros(shape=(tf.shape(inputs)[0], self.base_embedding_dim))
        # Generate embeddings from blocks.
        for idx in range(self.num_blocks):
            # Lookup embeddings from the current block.
            block_embeddings = self.block_embedding_encoders[idx](inputs)
            # Project embeddings to base_embedding_dim.
            block_embeddings = self.block_embedding_projectors[idx](block_embeddings)
            # Create a mask to filter out embeddings of items that do not belong to the current block.
            mask = tf.expand_dims(tf.cast(block_indices == idx, tf.dtypes.float32), 1)
            # Set the embeddings for the items not belonging to the current block to zeros.
            block_embeddings = block_embeddings * mask
            # Add the block embeddings to the final embeddings.
            embeddings += block_embeddings
        return embeddings
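The StaticHashTable routing used in the layer can also be exercised on its own. The two vocabulary blocks and the item ids below are made up for the sketch:

```python
import tensorflow as tf

# Two hypothetical vocabulary blocks.
blocks_vocabulary = [["a", "b"], ["c", "d", "e"]]

# Build the same key/value pairs the layer builds:
# each vocabulary entry maps to the index of its block.
keys, values = [], []
for block_idx, block_vocab in enumerate(blocks_vocabulary):
    keys.extend(block_vocab)
    values.extend([block_idx] * len(block_vocab))

vocab_to_block = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1
)

# "a" is in block 0, "d" is in block 1, and "z" is out of vocabulary,
# so it maps to the default value -1.
print(vocab_to_block.lookup(tf.constant(["a", "d", "z"])).numpy())
```

Items mapped to -1 match no block index in the loop above, so their mask is zero for every block and their final embedding stays a zero vector.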
Implement the memory-efficient model |
In this experiment, we are going to use the Quotient-Remainder technique to reduce the size of the user embeddings, and the Mixed Dimension technique to reduce the size of the movie embeddings.
While the paper uses an alpha-power rule to determine the embedding dimensions of each block, we simply set the number of blocks and the embedding dimensions of each block based on the histogram visualization of movie popularity.
movie_frequencies = ratings_data["movie_id"].value_counts()
movie_frequencies.hist(bins=10)
<AxesSubplot:>
(Figure: histogram of per-movie rating counts.)
You can see that we can group the movies into three blocks, and assign them 64, 32, and 16 embedding dimensions, respectively. Feel free to experiment with different numbers of blocks and dimensions.
sorted_movie_vocabulary = list(movie_frequencies.keys())
movie_blocks_vocabulary = [
    sorted_movie_vocabulary[:400],  # high popularity movies block
    sorted_movie_vocabulary[400:1700],  # normal popularity movies block
    sorted_movie_vocabulary[1700:],  # low popularity movies block
]
movie_blocks_embedding_dims = [64, 32, 16]

user_embedding_num_buckets = len(user_vocabulary) // 50
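A back-of-the-envelope parameter count shows why the blocks help. The movie vocabulary size below is a placeholder chosen only to match the split points above, not the actual dataset's size, and the count ignores the one extra out-of-vocabulary row each block encoder adds:

```python
# Hypothetical vocabulary size (a placeholder, not the real dataset's).
num_movies = 3000
base_embedding_dim = 64
block_sizes = [400, 1300, num_movies - 1700]  # same split points as above
block_dims = [64, 32, 16]

# A single full-size embedding table.
full_params = num_movies * base_embedding_dim

# Mixed-dimension tables, plus Dense projection weights for the smaller
# blocks (embedding_dim x base_embedding_dim weights + base_embedding_dim biases).
md_params = 0
for size, dim in zip(block_sizes, block_dims):
    md_params += size * dim
    if dim != base_embedding_dim:
        md_params += dim * base_embedding_dim + base_embedding_dim

print(full_params, md_params)  # 192000 vs 91200
```

With these placeholder numbers the mixed-dimension layout needs fewer than half the parameters of the full table, because most of the vocabulary (the low-popularity tail) sits in the 16-dimensional block.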
def create_memory_efficient_model():
    # Take the user as an input.
    user_input = layers.Input(name="user_id", shape=(), dtype=tf.string)
    # Get user embedding.
    user_embedding = QREmbedding(
        vocabulary=user_vocabulary,
        embedding_dim=base_embedding_dim,
        num_buckets=user_embedding_num_buckets,
        name="user_embedding",
    )(user_input)

    # Take the movie as an input.
    movie_input = layers.Input(name="movie_id", shape=(), dtype=tf.string)
    # Get movie embedding.
    movie_embedding = MDEmbedding(
        blocks_vocabulary=movie_blocks_vocabulary,
        blocks_embedding_dims=movie_blocks_embedding_dims,
        base_embedding_dim=base_embedding_dim,
        name="movie_embedding",
    )(movie_input)