---
inference: false
pipeline_tag: image-text-to-text
license: apache-2.0
datasets:
- VIMA/VIMA-Data
tags:
- llara
- llava
- robotics
- vlm
---
# LLaRA Model Card
This model was released with the paper **[LLaRA: Supercharging Robot Learning Data for Vision-Language Policy](https://arxiv.org/abs/2406.20095)**.
[Xiang Li](https://xxli.me)<sup>1</sup>, [Cristina Mata](https://openreview.net/profile?id=~Cristina_Mata1)<sup>1</sup>, [Jongwoo Park](https://github.com/jongwoopark7978)<sup>1</sup>, [Kumara Kahatapitiya](https://www3.cs.stonybrook.edu/~kkahatapitiy)<sup>1</sup>, [Yoo Sung Jang](https://yjang43.github.io/)<sup>1</sup>, [Jinghuan Shang](https://elicassion.github.io/)<sup>1</sup>, [Kanchana Ranasinghe](https://kahnchana.github.io/)<sup>1</sup>, [Ryan Burgert](https://ryanndagreat.github.io/)<sup>1</sup>, [Mu Cai](https://pages.cs.wisc.edu/~mucai/)<sup>2</sup>, [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/)<sup>2</sup>, and [Michael S. Ryoo](http://michaelryoo.com/)<sup>1</sup>

<sup>1</sup>Stony Brook University, <sup>2</sup>University of Wisconsin-Madison
## Model details
**Model type:**
LLaRA is an open-source visuomotor policy obtained by fine-tuning [LLaVA-7b-v1.5](https://huggingface.co/liuhaotian/llava-v1.5-7b) on the instruction-following dataset `D-inBC` together with six auxiliary datasets, all converted from [VIMA-Data](https://huggingface.co/datasets/VIMA/VIMA-Data).
For the conversion code, please refer to [convert_vima.ipynb](https://github.com/LostXine/LLaRA/blob/main/datasets/convert_vima.ipynb).
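Since hosted inference is disabled (`inference: false`), the checkpoint is meant to be loaded locally with the LLaVA codebase bundled in the LLaRA repository. Below is a minimal loading sketch, assuming the `llava` package from that repository is installed; the Hugging Face hub path is an assumption and should be replaced with the actual checkpoint location.

```python
# Minimal loading sketch using the LLaVA model builder API.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

# Hypothetical hub path; substitute the actual checkpoint location or a local directory.
model_path = "llava-1.5-7b-llara-D-inBC-Aux-B-VIMA-80k"

# Returns the tokenizer, the policy model, the CLIP image processor,
# and the maximum context length supported by the model.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,  # full checkpoint, so no separate base model is needed
    model_name=get_model_name_from_path(model_path),
)
```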
**Model date:**
llava-1.5-7b-llara-D-inBC-Aux-B-VIMA-80k was trained in June 2024.
**Paper or resources for more information:**
https://github.com/LostXine/LLaRA
**Where to send questions or comments about the model:**
https://github.com/LostXine/LLaRA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaRA is research on large multimodal models for robotics.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in robotics, computer vision, natural language processing, machine learning, and artificial intelligence.