Now that you’ve studied the theory behind Deep Q-Learning, you’re ready to train your Deep Q-Learning agent to play Atari Games. We’ll start with Space Invaders, but you’ll be able to use any Atari game you want 🔥
We’re using the RL-Baselines-3 Zoo integration, a vanilla version of Deep Q-Learning with no extensions such as Double-DQN, Dueling-DQN, or Prioritized Experience Replay.
Also, if you want to learn to implement Deep Q-Learning by yourself after this hands-on, you should definitely look at the CleanRL implementation: https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py
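If you're curious about what that core update looks like, here is a minimal, simplified sketch of a single Deep Q-Learning training step in PyTorch. It is not CleanRL's actual code: the q_network, target_network, optimizer, and batch tensors are assumed to already exist.

import torch
import torch.nn.functional as F

def dqn_update_step(q_network, target_network, optimizer, batch, gamma=0.99):
    # batch: tensors of observations, actions, rewards, next observations and done flags
    obs, actions, rewards, next_obs, dones = batch
    with torch.no_grad():
        # Vanilla DQN target: bootstrap with the max Q-value from the target network
        next_q_values = target_network(next_obs).max(dim=1).values
        td_target = rewards + gamma * next_q_values * (1.0 - dones)
    # Q-values of the actions that were actually taken
    current_q_values = q_network(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Regress the current Q-values towards the TD target
    loss = F.mse_loss(current_q_values, td_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()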
To validate this hands-on for the certification process, you need to push your trained model to the Hub and get a result of >= 500.
To find your result, go to the leaderboard and find your model: the result = mean_reward - std of reward
For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process
To start the hands-on, click on the Open In Colab button 👇:
In this notebook, you’ll train a Deep Q-Learning agent playing Space Invaders using RL Baselines3 Zoo, a training framework based on Stable-Baselines3 that provides scripts for training, evaluating agents, tuning hyperparameters, plotting results and recording videos.
We’re using the RL-Baselines-3 Zoo integration, a vanilla version of Deep Q-Learning with no extensions such as Double-DQN, Dueling-DQN, and Prioritized Experience Replay.
⬇️ Here is an example of what you will achieve ⬇️
%%html
<video controls autoplay><source src="https://huggingface.co/ThomasSimonini/ppo-SpaceInvadersNoFrameskip-v4/resolve/main/replay.mp4" type="video/mp4"></video>
At the end of the notebook, you will have trained a Deep Q-Learning agent playing Space Invaders and pushed it to the Hub. Before diving into the notebook, you need to:
🔲 📚 Study Deep Q-Learning by reading Unit 3 🤗
We’re constantly trying to improve our tutorials, so if you find any issues in this notebook, please open an issue on the GitHub repo.
To validate this hands-on for the certification process, you need to push your trained model to the Hub and get a result of >= 500.
To find your result, go to the leaderboard and find your model: the result = mean_reward - std of reward
For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process
To accelerate the agent’s training, we’ll use a GPU. To do that, go to Runtime > Change Runtime type and select GPU under Hardware Accelerator.
During the notebook, we’ll need to generate a replay video. To do so, in Colab, we need a virtual screen to be able to render the environment (and thus record the frames).
Hence, the following cell will install the libraries and create and run a virtual screen 🖥
apt install python-opengl
apt install ffmpeg
apt install xvfb
pip3 install pyvirtualdisplay
apt-get install swig cmake freeglut3-dev
pip install pyglet==1.5.1
# Virtual display
from pyvirtualdisplay import Display
virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()
git clone https://github.com/DLR-RM/rl-baselines3-zoo
cd /content/rl-baselines3-zoo/
pip install -r requirements.txt
To train an agent with RL-Baselines3-Zoo, we just need to do two things:
1. Define the training hyperparameters in rl-baselines3-zoo/hyperparams/dqn.yml.
Here we see that:
- We use the Atari Wrapper that does the pre-processing (frame reduction, grayscale, stacking of four frames).
- We use CnnPolicy, since we use convolutional layers to process the frames.
- We train for 10 million n_timesteps.
💡 My advice is to reduce the training timesteps to 1M, which will take about 90 minutes on a P100 (!nvidia-smi will tell you what GPU you’re using). At 10 million steps, training will take about 9 hours, which could likely result in Colab timing out. In that case, I recommend running it on your local computer (or somewhere else); just click on File > Download.
In terms of hyperparameter optimization, my advice is to focus on these 3 hyperparameters:
- learning_rate
- buffer_size (Experience Memory size)
- batch_size
As a good practice, you need to check the documentation to understand what each hyperparameter does: https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html#parameters
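To make these hyperparameters concrete, here is a rough, hand-written Stable-Baselines3 equivalent of what such a config describes. This is only an illustrative sketch, not how RL-Baselines3-Zoo runs things internally, and the values used below are placeholders you can experiment with.

from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Atari wrapper pre-processing (frame reduction, grayscale) + stack of 4 frames
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

model = DQN(
    "CnnPolicy",          # convolutional layers to process the frames
    env,
    learning_rate=1e-4,   # one of the 3 hyperparameters to tune
    buffer_size=100_000,  # experience replay memory size
    batch_size=32,        # size of each gradient-update batch
    verbose=1,
)

model.learn(total_timesteps=1_000_000)  # reduced from 10M, as advised above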
2. Start the training with train.py and save the models in the logs folder 📁:
python train.py --algo ________ --env SpaceInvadersNoFrameskip-v4 -f _________
Solution:
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
Let’s evaluate our agent 👀: RL-Baselines3-Zoo provides enjoy.py, a script to evaluate it, here for 5000 timesteps:
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps _________ --folder logs/
Solution:
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 --folder logs/
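If you want to compute the leaderboard-style score (mean_reward - std of reward) yourself in Python, here is a small sketch using Stable-Baselines3’s evaluate_policy. The path to the saved model is an assumption: adapt it to wherever train.py stored your run.

from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack
from stable_baselines3.common.evaluation import evaluate_policy

# Recreate an evaluation environment with the same pre-processing as training
eval_env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
eval_env = VecFrameStack(eval_env, n_stack=4)

# Hypothetical path: check your logs/ folder for the actual file
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"result = {mean_reward - std_reward:.2f} (mean_reward={mean_reward:.2f} +/- {std_reward:.2f})")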
By using rl_zoo3.push_to_hub.py, you evaluate, record a replay, generate a model card of your agent, and push it to the Hub. This way, you can showcase your work 🔥, visualize your agent playing 👀, and share with the community an agent that others can use 💾.
To be able to share your model with the community, there are three more steps to follow:
1️⃣ (If it’s not already done) create an account on HF ➡ https://huggingface.co/join
2️⃣ Sign in and then get your authentication token from the Hugging Face website: create a new token (https://huggingface.co/settings/tokens) with write role, and store it.
from huggingface_hub import notebook_login # Log in to our Hugging Face account to be able to upload models to the Hub.
notebook_login()
git config --global credential.helper store
If you don’t want to use Google Colab or a Jupyter Notebook, you need to use this command instead: huggingface-cli login
3️⃣ We’re now ready to push our trained agent to the Hub 🔥
Let’s run push_to_hub.py to upload our trained agent to the Hub. There are two important parameters:
- --repo-name: the name of the repo
- -orga: your Hugging Face username
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 --repo-name _____________________ -orga _____________________ -f logs/
Solution:
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 --repo-name dqn-SpaceInvadersNoFrameskip-v4 -orga ThomasSimonini -f logs/
Congrats 🥳 you’ve just trained and uploaded your first Deep Q-Learning agent using RL-Baselines-3 Zoo. The script above should have displayed a link to a model repository such as https://huggingface.co/ThomasSimonini/dqn-SpaceInvadersNoFrameskip-v4. When you go to this link, you can see a video preview of your agent and the model card (the README.md file), which gives a description of the model and the hyperparameters you used. Under the hood, the Hub uses git-based repositories (don’t worry if you don’t know what git is), which means you can update the model with new versions as you experiment and improve your agent.
Compare the results of your agents with your classmates using the leaderboard 🏆
The Stable-Baselines3 team uploaded more than 150 trained Deep Reinforcement Learning agents on the Hub. You can download them and use them to see how they perform!
You can find them here: 👉 https://huggingface.co/sb3
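If you prefer to browse them programmatically, you can list the sb3 organization’s models with the huggingface_hub library. This is a small optional snippet, not required for the rest of the hands-on.

from huggingface_hub import HfApi

api = HfApi()
# List every model hosted under the sb3 organization
sb3_models = list(api.list_models(author="sb3"))
print(f"{len(sb3_models)} models found")
for model_info in sb3_models[:10]:
    print(model_info.id)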
Some examples:
Let’s load an agent playing Beam Rider: https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4
<video controls autoplay><source src="https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4/resolve/main/replay.mp4" type="video/mp4"></video>
Let’s download the model using rl_zoo3.load_from_hub, and place it in a new folder that we can call rl_trained:
# Download model and save it into the rl_trained/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga sb3 -f rl_trained/
python enjoy.py --algo dqn --env BeamRiderNoFrameskip-v4 -n 5000 -f rl_trained/
Why not try to train your own Deep Q-Learning agent playing BeamRiderNoFrameskip-v4? 🏆
If you want to try, check https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4#hyperparameters. There, in the model card, you have the hyperparameters of the trained agent.
But finding hyperparameters can be a daunting task. Fortunately, we’ll see in the next bonus unit how we can use Optuna to optimize the hyperparameters 🔥.
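As a small preview, here is a sketch of what an Optuna search over the three hyperparameters mentioned earlier could look like. The train_and_evaluate helper is hypothetical: in practice it would train an agent with the sampled values and return its evaluation score.

import optuna

def objective(trial):
    # Sample candidate hyperparameters
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True)
    buffer_size = trial.suggest_categorical("buffer_size", [50_000, 100_000, 200_000])
    batch_size = trial.suggest_categorical("batch_size", [32, 64, 128])
    # Hypothetical helper: train a DQN agent with these values and return mean_reward - std_reward
    return train_and_evaluate(learning_rate, buffer_size, batch_size)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params)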
The best way to learn is to try things on your own!
In the Leaderboard you will find your agents. Can you get to the top?
You can also try to train your agent with other Atari environments, such as BeamRiderNoFrameskip-v4, BreakoutNoFrameskip-v4, or PongNoFrameskip-v4.
Also, if you want to learn to implement Deep Q-Learning by yourself, you should definitely look at the CleanRL implementation: https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py
Congrats on finishing this chapter!
If you still feel confused by all these elements… it’s totally normal! This was the same for me and for everyone who has studied RL.
Take time to really grasp the material before continuing, and try the additional challenges. It’s important to master these elements and have a solid foundation.
In the next unit, we’re going to learn about Optuna. One of the most critical tasks in Deep Reinforcement Learning is finding a good set of training hyperparameters, and Optuna is a library that helps you automate the search.
See you in Bonus Unit 2! 🔥