---
license: apache-2.0
---
<div align='center'>
<h1>EVE: Unveiling Encoder-Free Vision-Language Models</h1>
<h3><a href="https://arxiv.org/abs/2406.11832">Unveiling Encoder-Free Vision-Language Models</a></h3>
[Haiwen Diao*](https://scholar.google.com/citations?user=46eCjHQAAAAJ&hl=zh-CN), [Yufeng Cui*](https://scholar.google.com/citations?user=5Ydha2EAAAAJ&hl=zh-CN&oi=ao), [Xiaotong Li](https://scholar.google.com/citations?hl=zh-CN&user=cpCE_T4AAAAJ), [Yueze Wang](https://openreview.net/profile?id=~Yueze_Wang1), [Huchuan Lu📧](https://scholar.google.com/citations?user=D3nE0agAAAAJ&hl=zh-CN), [Xinlong Wang📧](https://scholar.google.com/citations?user=DPz0DjYAAAAJ&hl=zh-CN)
Dalian University of Technology; Beijing Academy of Artificial Intelligence; Peking University
| [Paper](https://arxiv.org/abs/2406.11832) | [Code](https://github.com/baaivision/EVE) |
</div>
Existing vision-language models (VLMs) mostly rely on vision encoders to extract visual features, followed by large language models (LLMs) for vision-language tasks. However, the vision encoders impose a strong inductive bias in abstracting visual representation, e.g., resolution, aspect ratio, and semantic priors, which can impede the flexibility and efficiency of VLMs. Training pure VLMs that accept seamless vision and language inputs, i.e., without vision encoders, remains challenging and rarely explored. Empirical observations reveal that direct training without encoders results in slow convergence and large performance gaps. In this work, we bridge the gap between encoder-based and encoder-free models, and present a simple yet effective training recipe towards pure VLMs. Specifically, we unveil the key aspects of training encoder-free VLMs efficiently via thorough experiments: (1) bridging vision-language representation inside one unified decoder; (2) enhancing visual recognition capability via extra supervision. With these strategies, we launch EVE, an encoder-free vision-language model that can be trained and forwarded efficiently. Notably, using only 35M publicly accessible data, EVE impressively rivals encoder-based VLMs of similar capacity across multiple vision-language benchmarks, and significantly outperforms the counterpart Fuyu-8B, which relies on undisclosed training procedures and data. We believe that EVE provides a transparent and efficient route for developing a pure decoder-only architecture across modalities.
## Model Weights
We release the pretrained and instruction-tuned weights of **EVE**.
| Model name | Weight |
| ---------- | ------------------------------------------------------- |
| **EVE-7B-HD-v1.0** | [🤗 HF link](https://huggingface.co/BAAI/EVE-7B-HD-v1.0) (14GB) |
| **EVE-7B-v1.0** | [🤗 HF link](https://huggingface.co/BAAI/EVE-7B-v1.0) (14GB) |
| **EVE-7B-Pretrain-v1.0** | [🤗 HF link](https://huggingface.co/BAAI/EVE-7B-Pretrain-v1.0) (14GB) |
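The checkpoints can be fetched locally with `huggingface_hub` before running the inference and evaluation scripts from the [code repository](https://github.com/baaivision/EVE). Below is a minimal sketch; the `local_dir` path is illustrative and can be set to wherever you keep model weights.
```python
# Download the EVE-7B-v1.0 checkpoint from the Hugging Face Hub.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

# local_dir is illustrative; adjust it to your own checkpoint directory.
ckpt_dir = snapshot_download(
    repo_id="BAAI/EVE-7B-v1.0",
    local_dir="checkpoints/EVE-7B-v1.0",
)
print(f"Weights downloaded to: {ckpt_dir}")
```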
## ✒️ Citation
If **EVE** is helpful for your research, please consider giving it a **star** ⭐ and a **citation** 📝:
```bibtex
@article{diao2024EVE,
  title={Unveiling Encoder-Free Vision-Language Models},
  author={Diao, Haiwen and Cui, Yufeng and Li, Xiaotong and Wang, Yueze and Lu, Huchuan and Wang, Xinlong},
  journal={arXiv preprint arXiv:2406.11832},
  year={2024}
}
```