---
license: apache-2.0
---
# VCoder LLaVA-1.5-13b
VCoder LLaVA-1.5-13b was trained on the COST training dataset in December 2023. It builds on the pretrained [LLaVA-1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) model weights and was introduced by Jain et al. in [this repository](https://github.com/SHI-Labs/VCoder).
VCoder is an adapter that improves the object-level perception abilities of existing Multimodal LLMs by feeding perception modalities as control inputs, while retaining performance on other tasks.
![img](https://praeclarumjj3.github.io/vcoder/vcoder.svg)
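To experiment with these weights, one option is to pull the checkpoint locally with `huggingface_hub` and then run inference through the scripts in the VCoder repository. The sketch below is a minimal example; the `repo_id` is an assumption and should be replaced with the actual Hub path of this model card.

```python
# Minimal sketch for downloading the checkpoint; the repo_id below is an
# assumption -- substitute the actual Hub path of this model.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="shi-labs/vcoder_llava-v1.5-13b")
print(f"Weights downloaded to {local_dir}")

# Inference itself relies on the VCoder codebase
# (https://github.com/SHI-Labs/VCoder), which provides the loading
# and chat scripts for these weights.
```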
### Citation
```bibtex
@article{jain2023vcoder,
  title={{VCoder: Versatile Vision Encoders for Multimodal Large Language Models}},
  author={Jitesh Jain and Jianwei Yang and Humphrey Shi},
  journal={arXiv},
  year={2023}
}
```