---
license: apache-2.0
datasets:
- lerobot/pusht
pipeline_tag: robotics
---
# Model Card for Diffusion Policy / PushT

Diffusion Policy (as per [Diffusion Policy: Visuomotor Policy
Learning via Action Diffusion](https://arxiv.org/abs/2303.04137)) trained for the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht).

## How to Get Started with the Model

See the [LeRobot library](https://github.com/huggingface/lerobot) (particularly the [evaluation script](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py)) for instructions on how to load and evaluate this model.
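For orientation, here is a minimal rollout sketch mirroring LeRobot's pretrained-policy example at the time of writing; the exact import path, environment id, and observation keys are assumptions that may differ across LeRobot versions, so treat the eval script linked above as canonical:

```python
# Minimal sketch, assuming LeRobot's pretrained-policy example API.
import gym_pusht  # noqa: F401  (registers the gym_pusht environments)
import gymnasium as gym
import torch

from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

policy = DiffusionPolicy.from_pretrained("lerobot/diffusion_pusht")
policy.eval()
policy.reset()

env = gym.make("gym_pusht/PushT-v0", obs_type="pixels_agent_pos", max_episode_steps=300)
obs, _ = env.reset(seed=42)
done = False
while not done:
    # Pack the observation as the policy expects: batched float tensors,
    # with the image channel-first and scaled to [0, 1].
    state = torch.from_numpy(obs["agent_pos"]).float().unsqueeze(0)
    image = torch.from_numpy(obs["pixels"]).float().permute(2, 0, 1).unsqueeze(0) / 255
    with torch.inference_mode():
        action = policy.select_action({"observation.state": state, "observation.image": image})
    obs, reward, terminated, truncated, _ = env.step(action.squeeze(0).numpy())
    done = terminated or truncated
```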

## Training Details

The model was trained using [LeRobot's training script](https://github.com/huggingface/lerobot/blob/d747195c5733c4f68d4bfbe62632d6fc1b605712/lerobot/scripts/train.py) and with the [pusht](https://huggingface.co/datasets/lerobot/pusht/tree/v1.3) dataset, using this command:

```bash
python lerobot/scripts/train.py \
  hydra.run.dir=outputs/train/diffusion_pusht \
  hydra.job.name=diffusion_pusht \
  policy=diffusion training.save_model=true \
  env=pusht \
  env.task=PushT-v0 \
  dataset_repo_id=lerobot/pusht \
  training.offline_steps=200000 \
  training.save_freq=20000 \
  training.eval_freq=10000 \
  eval.n_episodes=50 \
  wandb.enable=true \
  wandb.disable_artifact=true \
  device=cuda
```


The training curves may be found at https://wandb.ai/alexander-soare/Alexander-LeRobot/runs/508luayd.

Training took about 7 hours on an Nvidia RTX 3090.

_Note: At the time of training, [this PR](https://github.com/huggingface/lerobot/pull/129) was also incorporated._

## Evaluation

The model was evaluated on the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht) and compared to a similar model trained with the original [Diffusion Policy code](https://github.com/real-stanford/diffusion_policy). Two evaluation metrics are computed per episode:

- Maximum overlap with the target, reported as `eval/avg_max_reward` in the training curves linked above. This ranges in [0, 1].
- Success: whether the maximum overlap reaches at least 95% (see the sketch after this list).
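
As a sketch, assuming `step_rewards` holds the overlap reward recorded at each step of one episode (a hypothetical variable, for illustration), the two metrics reduce to:

```python
# Hedged sketch: per-episode metrics from the overlap rewards of one episode.
max_overlap = max(step_rewards)   # averaged over episodes as eval/avg_max_reward
success = max_overlap >= 0.95     # an episode counts as a success at >= 95% overlap
```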

Here are the metrics over 500 evaluation episodes. For the success rate we add an extra row with confidence bounds: assuming a uniform prior over the success probability, we compute the beta posterior, then report its mean and lower/upper confidence bounds (a 68.2% confidence interval centered on the mean; see the sketch after the table). The "Theirs" column is for an equivalent model trained on the original Diffusion Policy repository and evaluated on LeRobot (the model weights may be found in the [`original_dp_repo`](https://huggingface.co/lerobot/diffusion_pusht/tree/original_dp_repo) branch of this repository).

| Metric | Ours | Theirs |
|---|---|---|
| Average max. overlap ratio | 0.959 | 0.957 |
| Success rate for 500 episodes (%) | 63.8 | 64.2 |
| Beta distribution lower/mean/upper (%) | 61.6 / 63.7 / 65.9 | 62.0 / 64.1 / 66.3 |
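
For reference, here is one plausible reading of the confidence-bound computation described above (the `success_bounds` helper is hypothetical, not part of LeRobot; with a uniform Beta(1, 1) prior, k successes in n episodes give a Beta(k + 1, n - k + 1) posterior):

```python
# Hedged sketch of the success-rate confidence bounds.
from scipy.stats import beta

def success_bounds(k: int, n: int, interval: float = 0.682):
    """Posterior mean and an `interval`-mass interval centered on it."""
    posterior = beta(k + 1, n - k + 1)
    mean = posterior.mean()
    center = posterior.cdf(mean)  # ~0.5 for a near-symmetric posterior
    return posterior.ppf(center - interval / 2), mean, posterior.ppf(center + interval / 2)

# "Ours": 319 successes out of 500 episodes (63.8%).
low, mean, high = success_bounds(319, 500)
print(f"{100 * low:.1f} / {100 * mean:.1f} / {100 * high:.1f}")  # ~61.6 / 63.7 / 65.9
```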

The results of each individual rollout may be found in [eval_info.json](eval_info.json).