To start training Huggy, click on the Open In Colab button 👇:
In this notebook, we'll reinforce what we learned in the first Unit by teaching Huggy the Dog to fetch the stick and then play with him directly in your browser.
⬇️ Here is an example of what you will achieve at the end of the unit. ⬇️ (launch ▶ to see)
%%html
<video controls autoplay><source src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/huggy.mp4" type="video/mp4"></video>
We're constantly trying to improve our tutorials, so if you find any issues in this notebook, please open an issue on the GitHub repo.
By the end of the notebook, you will have trained Huggy, pushed the trained model to the Hugging Face Hub, and played with him directly in your browser.
Before diving into the notebook, you need to:
🔲 📚 Develop an understanding of the foundations of Reinforcement learning (MC, TD, Rewards hypothesis…) by doing Unit 1
🔲 📚 Read the introduction to Huggy by doing Bonus Unit 1
In Colab, enable GPU acceleration: Runtime > Change runtime type > Hardware Accelerator > GPU
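To verify that a GPU is attached, you can run nvidia-smi (a standard NVIDIA utility available on Colab GPU runtimes — this check is a suggestion, not part of the original notebook):

nvidia-smi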
# Clone this specific repository (can take 3min)
git clone https://github.com/huggingface/ml-agents/
# Go inside the repository and install the package (can take 3min)
%cd ml-agents
pip3 install -e ./ml-agents-envs
pip3 install -e ./ml-agents
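As an optional sanity check (a suggestion, not part of the original notebook), you can confirm the packages are importable — UnityEnvironment from mlagents_envs is the low-level API that mlagents-learn builds on:

# Verify the editable installs above succeeded
from mlagents_envs.environment import UnityEnvironment
print("mlagents_envs imported successfully")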
We need to download the Huggy environment executable and place it in ./trained-envs-executables/linux/:
mkdir ./trained-envs-executables
mkdir ./trained-envs-executables/linux
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1zv3M95ZJTWHUVOWT6ckq_cm98nft8gdF' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1zv3M95ZJTWHUVOWT6ckq_cm98nft8gdF" -O ./trained-envs-executables/linux/Huggy.zip && rm -rf /tmp/cookies.txt
Download the file Huggy.zip from https://drive.google.com/uc?export=download&id=1zv3M95ZJTWHUVOWT6ckq_cm98nft8gdF using wget. Check out the full solution to download large files from GDrive here.
%%capture
unzip -d ./trained-envs-executables/linux/ ./trained-envs-executables/linux/Huggy.zip
Make sure your file is accessible
chmod -R 755 ./trained-envs-executables/linux/Huggy
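You can also list the folder to confirm the binary is where the training command will expect it (the executable we'll point --env at later is ./trained-envs-executables/linux/Huggy/Huggy):

ls ./trained-envs-executables/linux/Huggy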
Huggy doesn't "see" his environment. Instead, we provide him with information about the environment:

- The target (stick) position
- The relative position between himself and the target
- The orientation of his legs
Given all this information, Huggy can decide which action to take next to fulfill his goal.
Joint motors drive Huggy's legs. This means that to reach the target, Huggy needs to learn to rotate the joint motors of each of his legs correctly so he can move.
The reward function is designed so that Huggy will fulfill his goal: fetch the stick.
Remember that one of the foundations of Reinforcement Learning is the reward hypothesis: a goal can be described as the maximization of the expected cumulative reward.
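Written as a formula, this is the standard RL objective (not something specific to Huggy): we search for a policy $\pi$ that maximizes the expected discounted return,

$$\max_{\pi} \; \mathbb{E}_{\tau \sim \pi}\Big[\sum_{t=0}^{T} \gamma^{t}\, r_{t+1}\Big]$$

where $r_{t+1}$ is the reward at each timestep and $\gamma \in [0, 1]$ is the discount factor.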
Here, our goal is for Huggy to go towards the stick without spinning too much. Hence, our reward function must reflect this goal.
Our reward function:

- Orientation bonus: we reward him for getting close to the target.
- Time penalty: a fixed-time penalty given at every action to force him to get to the stick as fast as possible.
- Rotation penalty: we penalize Huggy if he spins too much or turns too quickly.
- Getting to the target reward: we reward Huggy for reaching the target.
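To make this concrete, here is a minimal Python sketch of what such reward shaping could look like. It is purely illustrative: the input values (distance_to_target, angular_speed, etc.) and the weights are hypothetical, not the ones used in the actual Huggy Unity environment.

def huggy_reward(distance_to_target: float,
                 prev_distance: float,
                 angular_speed: float,
                 reached_target: bool) -> float:
    """Illustrative reward shaping for a fetch task (hypothetical weights)."""
    reward = 0.0
    # Orientation bonus: reward progress towards the stick
    reward += 1.0 * (prev_distance - distance_to_target)
    # Time penalty: small fixed cost at every step to encourage speed
    reward -= 0.001
    # Rotation penalty: discourage spinning or turning too quickly
    reward -= 0.05 * abs(angular_speed)
    # Getting to the target: large bonus for reaching the stick
    if reached_target:
        reward += 1.0
    return reward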
In ML-Agents, you define the training hyperparameters in config.yaml files.
For the scope of this notebook, we're not going to modify the hyperparameters, but if you want to experiment, you should try modifying some of them. Unity provides very good documentation explaining each of them here.
If you want to modify the hyperparameters in the Google Colab notebook, you can click here to open the config.yaml: /content/ml-agents/config/ppo/Huggy.yaml
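For orientation, here is an illustrative excerpt of the kind of fields you'll find in that file. The key names follow the ML-Agents trainer configuration schema, but treat the values shown here as indicative — the actual Huggy.yaml is the source of truth:

behaviors:
  Huggy:
    trainer_type: ppo        # the RL algorithm used for training
    hyperparameters:
      batch_size: 2048       # experiences per gradient update
      buffer_size: 20480     # experiences collected before each update
      learning_rate: 0.0003
    network_settings:
      hidden_units: 512      # neurons per hidden layer
      num_layers: 3
    reward_signals:
      extrinsic:
        gamma: 0.995         # discount factor
        strength: 1.0
    max_steps: 2e6           # total training steps
    summary_freq: 50000      # logging frequency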
We’re now ready to train our agent 🔥.
To train our agent, we just need to launch mlagents-learn and select the executable containing the environment.
With ML-Agents, we run a training script. We define four parameters:

- mlagents-learn <config>: the path where the hyperparameter config file is.
- --env: where the environment executable is.
- --run-id: the name you want to give to your training run id.
- --no-graphics: to not launch the visualization during the training.

Train the model and use the --resume flag to continue training in case of interruption.
It will fail the first time you use --resume; try running the block again to bypass the error.
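Resuming simply means re-running the same training command (shown below) with the flag appended:

mlagents-learn ./config/ppo/Huggy.yaml --env=./trained-envs-executables/linux/Huggy/Huggy --run-id="Huggy" --no-graphics --resume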
The training will take 30 to 45 minutes depending on your machine (don't forget to set up a GPU), so go take a ☕️, you deserve it 🤗.
mlagents-learn ./config/ppo/Huggy.yaml --env=./trained-envs-executables/linux/Huggy/Huggy --run-id="Huggy" --no-graphics
To be able to share your model with the community, there are three more steps to follow:
1️⃣ (If it's not already done) create an account on HF ➡ https://huggingface.co/join
2️⃣ Sign in and store your authentication token from the Hugging Face website.
from huggingface_hub import notebook_login
notebook_login()
If you don't want to use Google Colab or a Jupyter Notebook, you need to use this command instead: huggingface-cli login
Then, we simply need to run mlagents-push-to-hf and define four parameters:

- --run-id: the name of the training run id.
- --local-dir: where the agent was saved. It's results/<run-id name>, so in our case results/Huggy.
- --repo-id: the name of the Hugging Face repo you want to create or update. It's always <your huggingface username>/<the repo name>. If the repo does not exist, it will be created automatically.
- --commit-message: since HF repos are git repositories, you need to define a commit message.

mlagents-push-to-hf --run-id="HuggyTraining" --local-dir="./results/Huggy" --repo-id="ThomasSimonini/ppo-Huggy" --commit-message="Huggy"
If everything worked, you should see this at the end of the process (but with a different url 😆):
Your model is pushed to the hub. You can view your model here: https://huggingface.co/ThomasSimonini/ppo-Huggy
It's the link to your model repository. The repository contains a model card that explains how to use the model, your TensorBoard logs, and your config file. What's awesome is that it's a git repository, which means you can have different commits, update your repository with a new push, open Pull Requests, etc.
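Since it's a regular git repository, you can even clone it locally (replace the repo id with your own):

git clone https://huggingface.co/ThomasSimonini/ppo-Huggy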
But now comes the best: being able to play with Huggy online 👀.
This step is the simplest:
Open the game Huggy in your browser: https://huggingface.co/spaces/ThomasSimonini/Huggy
Click on Play with my Huggy model
In step 1, choose your model repository which is the model id (in my case ThomasSimonini/ppo-Huggy).
In step 2, choose which model you want to replay; the final trained model is Huggy.onnx.
👉 What's nice is to try different model checkpoints to see how the agent improved.
Congrats on finishing this bonus unit!
You can now sit back and enjoy playing with your Huggy 🐶. And don't forget to spread the love by sharing Huggy with your friends 🤗. And if you share about it on social media, please tag us @huggingface and me @simoninithomas!