        # Finish the trajectory if a terminal state is reached
        terminal = done
        if terminal or (t == steps_per_epoch - 1):
            last_value = 0 if done else critic(observation.reshape(1, -1))
            buffer.finish_trajectory(last_value)
            sum_return += episode_return
            sum_length += episode_length
            num_episodes += 1
            observation, episode_return, episode_length = env.reset(), 0, 0

    # Get values from the buffer
    (
        observation_buffer,
        action_buffer,
        advantage_buffer,
        return_buffer,
        logprobability_buffer,
    ) = buffer.get()

    # Update the policy and implement early stopping using KL divergence
    for _ in range(train_policy_iterations):
        kl = train_policy(
            observation_buffer, action_buffer, logprobability_buffer, advantage_buffer
        )
        if kl > 1.5 * target_kl:
            # Early stopping
            break

    # Update the value function
    for _ in range(train_value_iterations):
        train_value_function(observation_buffer, return_buffer)

    # Print mean return and length for each epoch
    print(
        f" Epoch: {epoch + 1}. Mean Return: {sum_return / num_episodes}. Mean Length: {sum_length / num_episodes}"
    )
Epoch: 1. Mean Return: 18.01801801801802. Mean Length: 18.01801801801802
Epoch: 2. Mean Return: 21.978021978021978. Mean Length: 21.978021978021978
Epoch: 3. Mean Return: 27.397260273972602. Mean Length: 27.397260273972602
Epoch: 4. Mean Return: 36.69724770642202. Mean Length: 36.69724770642202
Epoch: 5. Mean Return: 48.19277108433735. Mean Length: 48.19277108433735
Epoch: 6. Mean Return: 66.66666666666667. Mean Length: 66.66666666666667
Epoch: 7. Mean Return: 133.33333333333334. Mean Length: 133.33333333333334
Epoch: 8. Mean Return: 166.66666666666666. Mean Length: 166.66666666666666
Epoch: 9. Mean Return: 181.8181818181818. Mean Length: 181.8181818181818
Epoch: 10. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048
Epoch: 11. Mean Return: 200.0. Mean Length: 200.0
Epoch: 12. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048
Epoch: 13. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048
Epoch: 14. Mean Return: 181.8181818181818. Mean Length: 181.8181818181818
Epoch: 15. Mean Return: 181.8181818181818. Mean Length: 181.8181818181818
Epoch: 16. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048
Epoch: 17. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048
Epoch: 18. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048
Epoch: 19. Mean Return: 200.0. Mean Length: 200.0
Epoch: 20. Mean Return: 200.0. Mean Length: 200.0
Epoch: 21. Mean Return: 200.0. Mean Length: 200.0
Epoch: 22. Mean Return: 200.0. Mean Length: 200.0
Epoch: 23. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048
Epoch: 24. Mean Return: 190.47619047619048. Mean Length: 190.47619047619048
Epoch: 25. Mean Return: 200.0. Mean Length: 200.0
Epoch: 26. Mean Return: 200.0. Mean Length: 200.0
Epoch: 27. Mean Return: 200.0. Mean Length: 200.0
Epoch: 28. Mean Return: 200.0. Mean Length: 200.0
Epoch: 29. Mean Return: 200.0. Mean Length: 200.0
Epoch: 30. Mean Return: 200.0. Mean Length: 200.0
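The KL-based early stopping in the policy-update loop above can be sketched in isolation. In this sketch, `approximate_kl` and the toy log-probabilities are illustrative stand-ins: the actual `train_policy` function computes this estimate internally and returns it as `kl`.

```python
import numpy as np

def approximate_kl(old_logprobs, new_logprobs):
    # Sample-based estimate of KL(old || new), computed from the
    # log-probabilities of the actions that were actually taken.
    return np.mean(old_logprobs - new_logprobs)

def should_stop(old_logprobs, new_logprobs, target_kl=0.01):
    # Mirror of the early-stopping test in the training loop: stop
    # further policy updates once KL drifts past 1.5 * target_kl.
    return approximate_kl(old_logprobs, new_logprobs) > 1.5 * target_kl

# Identical policies give zero KL, so no early stop.
old = np.log(np.array([0.5, 0.25, 0.25]))
assert not should_stop(old, old)
```

The 1.5 multiplier simply gives the update a little slack around `target_kl` before the loop bails out, which is a common heuristic in PPO implementations.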
Visualizations
Before training:
[Animation of the CartPole agent before training]
After 8 epochs of training:
[Animation of the agent after 8 epochs]
After 20 epochs of training:
[Animation of the agent after 20 epochs]
Rating prediction using the Behavior Sequence Transformer (BST) model on the Movielens dataset
Introduction |
This example demonstrates the Behavior Sequence Transformer (BST) model, by Qiwei Chen et al., using the Movielens dataset. The BST model leverages the sequential behavior of users in watching and rating movies, as well as user profile and movie features, to predict the user's rating of a target movie.
More precisely, the BST model aims to predict the rating of a target movie by accepting the following inputs: |
A fixed-length sequence of movie_ids watched by a user. |
A fixed-length sequence of the ratings for the movies watched by a user. |
A set of user features, including user_id, sex, occupation, and age_group. |
A set of genres for each movie in the input sequence and the target movie. |
A target_movie_id for which to predict the rating. |
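The five inputs listed above can be pictured as a single example record. The field names below are hypothetical, chosen only to mirror the list; the actual example assembles these features with its own preprocessing pipeline.

```python
# One illustrative training example for the BST model (hypothetical field names).
sample_input = {
    "sequence_movie_ids": [3, 17, 92, 5],      # fixed-length watch history
    "sequence_ratings": [4.0, 3.0, 5.0, 2.0],  # the user's rating of each movie above
    "user_id": 42,                             # user features
    "sex": "F",
    "occupation": 7,
    "age_group": "25-34",
    "target_movie_id": 120,                    # the movie whose rating we predict
}

# The two sequences must stay aligned: one rating per watched movie.
assert len(sample_input["sequence_movie_ids"]) == len(sample_input["sequence_ratings"])
```

Each movie in `sequence_movie_ids` and the target movie additionally carry a set of genres, which enter the model through the movie embeddings rather than as separate features.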
This example modifies the original BST model in the following ways: |
We incorporate the movie features (genres) into the processing of the embedding of each movie of the input sequence and the target movie, rather than treating them as "other features" outside the transformer layer.
We utilize the ratings of movies in the input sequence, along with their positions in the sequence, to update the movie embeddings before feeding them into the self-attention layer.
Note that this example should be run with TensorFlow 2.4 or higher. |
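The two modifications can be sketched with plain NumPy, assuming an additive fusion of the genre embedding and a rating-weighted combination with a position embedding; the dimensions are illustrative, and the actual model may instead concatenate and project these vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, embed_dim = 4, 8  # illustrative sizes, not the example's hyperparameters

movie_embeddings = rng.normal(size=(seq_len, embed_dim))
genre_embeddings = rng.normal(size=(seq_len, embed_dim))
position_embeddings = rng.normal(size=(seq_len, embed_dim))
ratings = np.array([4.0, 3.0, 5.0, 2.0]).reshape(-1, 1)

# Modification 1: fold the genre information into each movie embedding,
# instead of passing genres as "other features" outside the transformer.
movies_with_genres = movie_embeddings + genre_embeddings

# Modification 2: weight each sequence embedding by the user's rating and
# add a position embedding before the self-attention layer sees it.
attention_input = movies_with_genres * ratings + position_embeddings

assert attention_input.shape == (seq_len, embed_dim)
```

The intent of both changes is the same: give the self-attention layer richer per-position vectors, so it can attend over what was watched, how it was rated, and where in the sequence it occurred.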
The dataset |
We use the 1M version of the Movielens dataset. The dataset includes around 1 million ratings from 6000 users on 4000 movies, along with some user features and movie genres. In addition, the timestamp of each user-movie rating is provided, which allows creating sequences of movie ratings for each user, as expected by the BST model.
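Building those per-user sequences from the timestamps can be sketched as follows. The toy records here are made up, and the real example reads the ratings from the Movielens data files; the point is only the sort-then-group step.

```python
from itertools import groupby

# Hypothetical rating records: (user_id, movie_id, rating, timestamp).
ratings = [
    (1, 10, 4.0, 300), (1, 7, 5.0, 100), (1, 3, 3.0, 200),
    (2, 10, 2.0, 50), (2, 4, 4.0, 75),
]

# Sort by user, then by timestamp, so each user's movies appear in the
# order they were rated -- the chronological sequence the BST model expects.
# (groupby requires its input to be sorted on the grouping key.)
ratings.sort(key=lambda r: (r[0], r[3]))
sequences = {
    user: [movie for _, movie, _, _ in group]
    for user, group in groupby(ratings, key=lambda r: r[0])
}
# → {1: [7, 3, 10], 2: [10, 4]}
```

In practice these chronological sequences are then chopped into the fixed-length windows described in the introduction.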
Setup |