Using sample-factory at Hugging Face

sample-factory is a codebase for high-throughput asynchronous reinforcement learning. It integrates with the Hugging Face Hub so you can share models together with evaluation results and training metrics.

Exploring sample-factory in the Hub

You can find sample-factory models by filtering at the left of the models page.

All models on the Hub come with useful features:

  1. An automatically generated model card with a description, a training configuration, and more.
  2. Metadata tags that help with discoverability.
  3. Evaluation results that let you compare with other models.
  4. A video widget where you can watch your agent performing.

Install the library

To use the library, install the `sample-factory` package:

pip install sample-factory

Sample-Factory is known to work on Linux and macOS. There is no Windows support at this time.

Loading models from the Hub

Using load_from_hub

To download a model from the Hugging Face Hub to use with Sample-Factory, use the load_from_hub script:

python -m sample_factory.huggingface.load_from_hub -r <HuggingFace_repo_id> -d <train_dir_path>

The command line arguments are:

  - `-r`: the ID of the Hugging Face repository to download, in the `<username>/<repo_name>` format.
  - `-d`: the path to the local `train_dir` in which to save the model.

Download Model Repository Directly

Hugging Face repositories can also be downloaded directly using git clone (make sure git-lfs is installed first, since large model files are stored with Git LFS):

git clone <URL of HuggingFace Repo>

Using Downloaded Models with Sample-Factory

After downloading the model, you can run the models in the repo with the enjoy script corresponding to your environment. For example, a downloaded MuJoCo Ant model can be run with:

python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_ant --experiment=<repo_name> --train_dir=./train_dir
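For orientation, the enjoy script expects the downloaded experiment to sit inside `train_dir` as a subdirectory whose name matches the `--experiment` argument. A rough sketch of the expected layout (the checkpoint directory name is illustrative, not guaranteed):

```
train_dir/
└── <repo_name>/          # matches the --experiment argument
    ├── cfg.json          # training configuration saved with the model
    └── checkpoint_p0/    # policy checkpoints (name illustrative)
        └── ...
```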

Note: you may have to specify `--train_dir` if your local `train_dir` has a different path from the one in `cfg.json`.
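If the path recorded in `cfg.json` does not match your local layout, an alternative to passing `--train_dir` every time is to rewrite the stored value once. A minimal sketch, assuming the configuration stores the path under a top-level `train_dir` key (the key name is an assumption — check your `cfg.json`):

```python
import json
from pathlib import Path

def patch_train_dir(cfg_path: str, new_train_dir: str) -> None:
    """Rewrite the train_dir entry in a downloaded cfg.json in place."""
    path = Path(cfg_path)
    cfg = json.loads(path.read_text())
    cfg["train_dir"] = new_train_dir  # key name assumed; verify against your file
    path.write_text(json.dumps(cfg, indent=2))
```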

Sharing your models

Using push_to_hub

If you want to upload without generating evaluation metrics or a replay video, you can use the push_to_hub script:

python -m sample_factory.huggingface.push_to_hub -r <hf_username>/<hf_repo_name> -d <experiment_dir_path>

The command line arguments are:

  - `-r`: the Hub repo ID to push to, in the `<hf_username>/<hf_repo_name>` format.
  - `-d`: the full path to the experiment directory to upload.

Using enjoy.py

You can upload your models to the Hub using your environment’s enjoy script with the --push_to_hub flag. Uploading using enjoy can also generate evaluation metrics and a replay video.

The evaluation metrics are generated by running your model on the specified environment for a number of episodes and reporting the mean and std reward of those runs.
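The aggregation itself is just the sample mean and standard deviation over per-episode returns. A small illustrative sketch of that computation (not Sample-Factory's internal code):

```python
import statistics

def summarize_rewards(episode_rewards: list[float]) -> tuple[float, float]:
    """Return (mean, std) of per-episode rewards, as reported on the model card."""
    mean = statistics.fmean(episode_rewards)
    # A single episode has no spread to measure; report 0.0 in that case.
    std = statistics.stdev(episode_rewards) if len(episode_rewards) > 1 else 0.0
    return mean, std
```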

Other relevant command line arguments are:

  - `--hf_username`: your Hugging Face username.
  - `--hf_repository`: the name of the Hub repository to push to.
  - `--max_num_episodes`: the number of episodes to evaluate before uploading; more episodes give a more reliable mean and std reward.

You can also save a video of the model during evaluation to upload to the Hub with the `--save_video` flag.

For example:

python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_ant --experiment=<repo_name> --train_dir=./train_dir --max_num_episodes=10 --push_to_hub --hf_username=<username> --hf_repository=<hf_repo_name> --save_video --no_render