# Model Card for Diffusion Policy / PushT

Diffusion Policy (as per [Diffusion Policy: Visuomotor Policy Learning via Action Diffusion](https://arxiv.org/abs/2303.04137)) trained for the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht).

![demo](demo.gif)

## How to Get Started with the Model

See the [LeRobot library](https://github.com/huggingface/lerobot) (particularly the [evaluation script](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py)) for instructions on how to load and evaluate this model.

## Training Details

Trained with [LeRobot@d747195](https://github.com/huggingface/lerobot/tree/d747195c5733c4f68d4bfbe62632d6fc1b605712).

The model was trained using [LeRobot's training script](https://github.com/huggingface/lerobot/blob/d747195c5733c4f68d4bfbe62632d6fc1b605712/lerobot/scripts/train.py) with the [pusht](https://huggingface.co/datasets/lerobot/pusht/tree/v1.3) dataset.

Here are the [loss](./train_loss.csv), [evaluation score](./eval_avg_max_reward.csv), and [evaluation success rate](./eval_pc_success.csv) (with 50 rollouts) recorded during training.

![](training_curves.png)

Training took about 7 hours on an Nvidia RTX 3090.

## Evaluation

The model was evaluated on the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht) and compared to a similar model trained with the original [Diffusion Policy code](https://github.com/real-stanford/diffusion_policy). There are two evaluation metrics on a per-episode basis:

- Maximum overlap with target (seen as `eval/avg_max_reward` in the charts above). This ranges in [0, 1].
- Success: whether or not the maximum overlap is at least 95%.

Here are the metrics for 500 episodes worth of evaluation. For the success rate we add an extra row with confidence bounds: this assumes a uniform prior over the success probability, computes the Beta posterior, and reports the posterior mean together with lower/upper bounds spanning a 68.2% interval centered on the mean.

| | Ours | Theirs |
| --- | --- | --- |
| Average max. overlap ratio | 0.959 | 0.957 |
| Success rate for 500 episodes (%) | 63.8 | 64.2 |
| Beta distribution lower/mean/upper (%) | 61.6 / 63.7 / 65.9 | 62.0 / 64.1 / 66.3 |
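
For reference, below is a minimal sketch of how confidence bounds of this kind can be computed with `scipy.stats.beta`. It is not the original evaluation code: the helper name `success_rate_bounds` is illustrative, and the success count of 319 is simply the count implied by the 63.8% rate over 500 episodes.

```python
from scipy.stats import beta


def success_rate_bounds(num_successes: int, num_episodes: int):
    """Beta posterior over the success probability, assuming a uniform Beta(1, 1) prior."""
    a = 1 + num_successes
    b = 1 + num_episodes - num_successes
    mean = beta.mean(a, b)
    std = beta.std(a, b)
    # One posterior standard deviation on either side of the mean spans
    # roughly a 68.2% interval centered on the mean.
    return mean - std, mean, mean + std


# 319 successes out of 500 episodes (the 63.8% row above) gives ~61.6 / 63.7 / 65.9 (%).
lower, mean, upper = success_rate_bounds(319, 500)
print(f"{100 * lower:.1f} / {100 * mean:.1f} / {100 * upper:.1f}")
```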