        name="movie_embedding",
    )(movie_input)
    # Compute dot product similarity between user and movie embeddings.
    logits = layers.Dot(axes=1, name="dot_similarity")(
        [user_embedding, movie_embedding]
    )
    # Convert to rating scale.
    prediction = keras.activations.sigmoid(logits) * 5
    # Create the model.
    model = keras.Model(
        inputs=[user_input, movie_input], outputs=prediction, name="baseline_model"
    )
    return model


memory_efficient_model = create_memory_efficient_model()
memory_efficient_model.summary()
Model: "baseline_model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
user_id (InputLayer)            [(None,)]            0
__________________________________________________________________________________________________
movie_id (InputLayer)           [(None,)]            0
__________________________________________________________________________________________________
user_embedding (QREmbedding)    (None, 64)           15360       user_id[0][0]
__________________________________________________________________________________________________
movie_embedding (MDEmbedding)   (None, 64)           102608      movie_id[0][0]
__________________________________________________________________________________________________
dot_similarity (Dot)            (None, 1)            0           user_embedding[0][0]
                                                                 movie_embedding[0][0]
__________________________________________________________________________________________________
tf.math.sigmoid_1 (TFOpLambda)  (None, 1)            0           dot_similarity[0][0]
__________________________________________________________________________________________________
tf.math.multiply_1 (TFOpLambda) (None, 1)            0           tf.math.sigmoid_1[0][0]
==================================================================================================
Total params: 117,968
Trainable params: 117,968
Non-trainable params: 0
__________________________________________________________________________________________________
Notice that the model has 117,968 trainable parameters, less than one fifth of the parameter count of the baseline model.
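The savings from compressed embeddings can be estimated with simple arithmetic. The sketch below uses the quotient-remainder trick with made-up vocabulary and bucket sizes (these are illustrative numbers, not the tutorial's actual dataset sizes):

```python
# Hypothetical sizes for illustration only.
num_users = 100_000
embedding_dim = 64
num_buckets = 1_000  # the "collisions" parameter of the quotient-remainder trick

# A full embedding table stores one row per user.
full_params = num_users * embedding_dim

# The quotient-remainder trick stores two smaller tables, indexed by
# id // num_buckets and id % num_buckets, whose rows are combined
# (e.g. element-wise multiplied) at lookup time.
quotient_rows = -(-num_users // num_buckets)  # ceiling division
qr_params = (quotient_rows + num_buckets) * embedding_dim

print(full_params, qr_params)  # 6400000 70400
```

With these sizes the compressed table is roughly 90x smaller, at the cost of some sharing of parameters between ids that collide.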
history = run_experiment(memory_efficient_model)

plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "eval"], loc="upper left")
plt.show()
Epoch 1/3
6644/6644 [==============================] - 10s 1ms/step - loss: 1.2632 - mae: 0.9078 - val_loss: 1.0593 - val_mae: 0.8045
Epoch 2/3
6644/6644 [==============================] - 9s 1ms/step - loss: 0.8933 - mae: 0.7512 - val_loss: 0.8932 - val_mae: 0.7519
Epoch 3/3
6644/6644 [==============================] - 9s 1ms/step - loss: 0.8412 - mae: 0.7279 - val_loss: 0.8612 - val_mae: 0.7357
(Plot of training and validation loss per epoch.)
Building probabilistic Bayesian neural network models with TensorFlow Probability
Introduction
Taking a probabilistic approach to deep learning allows models to account for uncertainty, so that they can assign lower confidence to incorrect predictions. Sources of uncertainty can be found in the data, due to measurement error or noise in the labels, or in the model, due to insufficient data for the model to learn effectively.
This example demonstrates how to build basic probabilistic Bayesian neural networks to account for these two types of uncertainty. We use the TensorFlow Probability library, which is compatible with the Keras API.
This example requires TensorFlow 2.3 or higher. You can install TensorFlow Probability using the following command:
pip install tensorflow-probability
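Probabilistic models of this kind are typically trained by minimizing the negative log likelihood of the observed targets rather than a squared error, which lets the loss reward honest uncertainty. A minimal stdlib sketch of the Gaussian negative log likelihood (the numbers are made up for illustration):

```python
import math

# A probabilistic model predicts a distribution, not a point estimate.
# For a Gaussian prediction with mean mu and std sigma, the per-example
# training loss is the negative log likelihood of the observed target y.
def gaussian_nll(y, mu, sigma):
    return 0.5 * math.log(2 * math.pi * sigma**2) + (y - mu) ** 2 / (2 * sigma**2)

# A confident (small sigma) but wrong prediction is penalized far more
# heavily than the same wrong mean reported with honest uncertainty:
print(gaussian_nll(6.0, mu=5.0, sigma=0.1) > gaussian_nll(6.0, mu=5.0, sigma=1.0))  # True
```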
The dataset
We use the Wine Quality dataset, which is available in TensorFlow Datasets. We use the white wine subset, which contains 4,898 examples. The dataset has 11 numerical physicochemical features of the wine, and the task is to predict the wine quality, which is a score between 0 and 10. In this example, we treat this as a regression task.
You can install TensorFlow Datasets using the following command:
pip install tensorflow-datasets
Setup
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
Create training and evaluation datasets
Here, we load the wine_quality dataset using tfds.load(), and we convert the target feature to float. Then, we shuffle the dataset and split it into training and test sets. We take the first train_size examples as the train split, and the rest as the test split.
dataset_size = 4898


def get_train_and_test_splits(train_size, batch_size=1):
    # We prefetch with a buffer the same size as the dataset because the dataset
    # is very small and fits into memory.
    dataset = (
        tfds.load(name="wine_quality", as_supervised=True, split="train")
        .map(lambda x, y: (x, tf.cast(y, tf.float32)))
        .prefetch(buffer_size=dataset_size)
        .cache()
    )
    # We shuffle with a buffer the same size as the dataset.
    train_dataset = (
        dataset.take(train_size).shuffle(buffer_size=train_size).batch(batch_size)
    )
    test_dataset = dataset.skip(train_size).batch(batch_size)
    return train_dataset, test_dataset
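The take/skip pattern splits the example stream by position, much like slicing a list. A framework-free sketch of the same semantics, using toy data rather than the wine dataset:

```python
# Toy stand-in for the dataset: a list of (features, label) pairs.
examples = [([i], float(i % 10)) for i in range(20)]

train_size = 15
train_split = examples[:train_size]  # like dataset.take(train_size)
test_split = examples[train_size:]   # like dataset.skip(train_size)

print(len(train_split), len(test_split))  # 15 5
```

Note that in the function above shuffling is applied after take, so the boundary between the train and test splits is deterministic; only the order of training examples varies between epochs.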