---
license: apache-2.0
datasets:
- JayLee131/vqbet_pusht
pipeline_tag: robotics
---
# Model Card for VQ-BeT/PushT

VQ-BeT (as per [Behavior Generation with Latent Actions](https://arxiv.org/abs/2403.03181)) trained for the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht).

## How to Get Started with the Model

See the [LeRobot library](https://github.com/huggingface/lerobot) (particularly the [evaluation script](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py)) for instructions on how to load and evaluate this model.
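As a quick orientation, here is a minimal sketch of loading this policy and rolling it out in gym-pusht, in the spirit of LeRobot's pretrained-policy evaluation example. It is not taken from this card: the `VQBeTPolicy` import path, the `lerobot/vqbet_pusht` repo id, and the `observation.image`/`observation.state` batch keys are assumptions based on the LeRobot commit referenced below, so prefer the evaluation script if anything here does not match your installed version.

```python
# Sketch: load the pretrained VQ-BeT policy and run one PushT rollout.
# Assumptions (not confirmed by this card): the VQBeTPolicy class path,
# the repo id "lerobot/vqbet_pusht", and the batch key names.
import gymnasium as gym
import gym_pusht  # noqa: F401  (importing registers the gym_pusht environments)
import torch

from lerobot.common.policies.vqbet.modeling_vqbet import VQBeTPolicy

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

policy = VQBeTPolicy.from_pretrained("lerobot/vqbet_pusht")  # assumed repo id
policy.to(device)
policy.eval()
policy.reset()

env = gym.make("gym_pusht/PushT-v0", obs_type="pixels_agent_pos", render_mode="rgb_array")
obs, info = env.reset(seed=0)

done = False
while not done:
    # Pack the gym observation into the batch format the policy expects:
    # float image in [0, 1] with channel-first layout, plus the 2D agent position.
    state = torch.from_numpy(obs["agent_pos"]).float().unsqueeze(0).to(device)
    image = torch.from_numpy(obs["pixels"]).float().permute(2, 0, 1).unsqueeze(0).to(device) / 255.0
    batch = {"observation.state": state, "observation.image": image}

    with torch.inference_mode():
        action = policy.select_action(batch)

    obs, reward, terminated, truncated, info = env.step(action.squeeze(0).cpu().numpy())
    done = terminated or truncated
```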

## Training Details

Trained with [LeRobot@342f429](https://github.com/huggingface/lerobot/tree/342f429f1c321a2b4501c3007b1dacba7244b469).

The model was trained using this command:

```bash
python lerobot/scripts/train.py \
  policy=vqbet \
  env=pusht dataset_repo_id=lerobot/pusht \
  wandb.enable=true \
  device=cuda
```

The training curves may be found at https://wandb.ai/jaylee0301/lerobot/runs/9r0ndphr?nw=nwuserjaylee0301.

Training VQ-BeT on PushT took about 7-8 hours on an Nvidia A6000.

## Model Size

Component|Number of Parameters
-|-
RGB Encoder | 11.2M
Remaining VQ-BeT Parts | 26.3M

## Evaluation

The model was evaluated on the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht). There are two evaluation metrics on a per-episode basis:

- Maximum overlap with target (logged as `eval/avg_max_reward` in the training curves linked above). This ranges in [0, 1].
- Success: whether or not the maximum overlap is at least 95%.

Here are the metrics over 500 evaluation episodes.

Metric|Value
-|-
Average max. overlap ratio for 500 episodes | 0.895
Success rate for 500 episodes (%) | 63.8

The results of each of the individual rollouts may be found in [eval_info.json](eval_info.json).
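If you want to recompute the aggregate numbers from the per-rollout results, something like the sketch below should work. It is not part of the original card and assumes `eval_info.json` contains a `per_episode` list whose entries expose a `max_reward` field; adjust the keys if the actual schema differs.

```python
# Sketch: recompute the aggregate evaluation metrics from eval_info.json.
# Assumption: a "per_episode" list of dicts, each with a "max_reward" entry.
import json

with open("eval_info.json") as f:
    eval_info = json.load(f)

max_rewards = [ep["max_reward"] for ep in eval_info["per_episode"]]
avg_max_reward = sum(max_rewards) / len(max_rewards)

# An episode counts as a success when the maximum overlap reaches at least 95%.
success_rate = 100 * sum(r >= 0.95 for r in max_rewards) / len(max_rewards)

print(f"Average max. overlap ratio: {avg_max_reward:.3f}")
print(f"Success rate: {success_rate:.1f}%")
```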