---

library_name: transformers
tags:
- robotics
- vla
- image-text-to-text
- multimodal
- pretraining
license: mit
language:
- en
pipeline_tag: image-text-to-text
---


# OpenVLA 7B Fine-Tuned on LIBERO-10 (LIBERO-Long)

This model was produced by fine-tuning the [OpenVLA 7B model](https://huggingface.co/openvla/openvla-7b) via
LoRA (r=32) on the LIBERO-10 (LIBERO-Long) dataset from the [LIBERO simulation benchmark](https://libero-project.github.io/main.html).
We made a few modifications to the training dataset to improve final performance (see the
[OpenVLA paper](https://arxiv.org/abs/2406.09246) for details).
We fine-tuned OpenVLA on 8 A100 GPUs with a batch size of 128 for 80K gradient steps, applying random crop and color jitter
image augmentations during training (therefore, center cropping should be applied at inference time).
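
As a minimal sketch of that inference-time preprocessing, the helper below center-crops an observation and resizes it back to its original resolution. The 90% area fraction is an assumption chosen to mirror a typical random-crop scale; verify the exact value against the OpenVLA evaluation code before relying on it.

```python
# Minimal sketch of inference-time center cropping.
# The 0.9 area fraction is an assumption; confirm it against the OpenVLA eval code.
import math
from PIL import Image

def center_crop(image: Image.Image, crop_scale: float = 0.9) -> Image.Image:
    """Center-crop to `crop_scale` of the original area, then resize back."""
    width, height = image.size
    # Shrinking each side by sqrt(crop_scale) keeps crop_scale of the area.
    side = math.sqrt(crop_scale)
    crop_w, crop_h = int(width * side), int(height * side)
    left, top = (width - crop_w) // 2, (height - crop_h) // 2
    cropped = image.crop((left, top, left + crop_w, top + crop_h))
    return cropped.resize((width, height), Image.BILINEAR)
```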

## Usage Instructions

See the [OpenVLA GitHub README](https://github.com/openvla/openvla/blob/main/README.md) for instructions on how to
run and evaluate this model in the LIBERO simulator.
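
For a quick smoke test outside the simulator, the following hedged sketch loads the checkpoint through the `transformers` `AutoProcessor` / `AutoModelForVision2Seq` interface (as documented for the base OpenVLA 7B model) and queries a single action. The repo ID, example instruction, and `unnorm_key` value are assumptions for illustration; the OpenVLA README is the authoritative reference for LIBERO evaluation.

```python
# Hedged sketch: load the fine-tuned checkpoint and predict one 7-DoF action.
# MODEL_ID and unnorm_key are assumptions; check the Hub repo / OpenVLA README.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "openvla/openvla-7b-finetuned-libero-10"  # assumed repo ID

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda:0")

# Observation image from the LIBERO simulator (placeholder file here),
# center-cropped at inference time using the helper sketched above.
image = center_crop(Image.open("observation.png").convert("RGB"))

instruction = "put both the alphabet soup and the tomato sauce in the basket"
prompt = f"In: What action should the robot take to {instruction}?\nOut:"

inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
# unnorm_key selects the dataset statistics used to un-normalize the action;
# "libero_10_no_noops" is an assumption based on OpenVLA's LIBERO dataset naming.
action = vla.predict_action(**inputs, unnorm_key="libero_10_no_noops", do_sample=False)
print(action)  # 7-dimensional end-effector action (delta pose + gripper)
```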

## Citation

**BibTeX:**

```bibtex
@article{kim24openvla,
    title={OpenVLA: An Open-Source Vision-Language-Action Model},
    author={{Moo Jin} Kim and Karl Pertsch and Siddharth Karamcheti and Ted Xiao and Ashwin Balakrishna and Suraj Nair and Rafael Rafailov and Ethan Foster and Grace Lam and Pannag Sanketi and Quan Vuong and Thomas Kollar and Benjamin Burchfiel and Russ Tedrake and Dorsa Sadigh and Sergey Levine and Percy Liang and Chelsea Finn},
    journal={arXiv preprint arXiv:2406.09246},
    year={2024}
}
```