---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
---
# Valley 2.0
## Introduction
Valley ([GitHub](https://github.com/bytedance/Valley)) is a cutting-edge multimodal large model developed by ByteDance, designed to handle a variety of tasks involving text, image, and video data. When evaluated against models of the same scale, our model:

- Achieved the best results on in-house e-commerce and short-video benchmarks
- Demonstrated comparatively outstanding performance on OpenCompass (average score > 67)

## Release
- [12/23] 🔥 Announcing [Valley-Qwen2.5-7B](https://huggingface.co/ByteDance)!

## Valley-Eagle
The foundational version of Valley is a multimodal large model that aligns SigLIP and Qwen2.5, incorporating a LargeMLP and a ConvAdapter to construct the projector.

- In the final version, we also drew on Eagle, introducing an additional VisionEncoder that can flexibly adjust the number of visual tokens and runs in parallel with the original visual-token pathway; a token-flow sketch follows the figure below.
- This enhancement strengthens the model's performance in extreme scenarios, and we chose the Qwen2-VL VisionEncoder for this purpose.

The model structure is shown below:

<div style="display:flex;">
  <img src="valley_structure.jpeg" alt="opencompass" style="height:600px;" />
</div>
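
To make the token flow concrete, here is a minimal sketch of the projector idea described above, assuming a strided-conv adapter that compresses the SigLIP token grid before an MLP maps it into the LLM hidden size, with the extra encoder's tokens concatenated in parallel. All module names and dimensions here are illustrative assumptions; the official implementation is in the [GitHub repo](https://github.com/bytedance/Valley).

```python
import torch
import torch.nn as nn

class ConvAdapter(nn.Module):
    """Hypothetical sketch: compress the 2-D visual token grid with a strided conv."""
    def __init__(self, dim: int, stride: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(dim, dim, kernel_size=stride, stride=stride)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (batch, h*w, dim) -> conv over the 2-D grid -> fewer flattened tokens
        b, n, d = x.shape
        x = x.transpose(1, 2).reshape(b, d, h, w)
        x = self.conv(x)
        return x.flatten(2).transpose(1, 2)

class ValleyProjector(nn.Module):
    """Hypothetical "ConvAdapter + LargeMLP" projector into the LLM hidden size."""
    def __init__(self, vision_dim: int, llm_dim: int, stride: int = 2):
        super().__init__()
        self.adapter = ConvAdapter(vision_dim, stride)
        self.mlp = nn.Sequential(
            nn.Linear(vision_dim, llm_dim * 4),
            nn.GELU(),
            nn.Linear(llm_dim * 4, llm_dim),
        )

    def forward(self, vision_tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
        return self.mlp(self.adapter(vision_tokens, h, w))

# Assumed dimensions: a 24x24 SigLIP token grid (dim 1152) and a Qwen2.5-7B
# hidden size of 3584; the parallel encoder's tokens are assumed to already
# be in the LLM hidden size.
siglip_tokens = torch.randn(1, 24 * 24, 1152)
eagle_tokens = torch.randn(1, 64, 3584)
projector = ValleyProjector(vision_dim=1152, llm_dim=3584)
visual_input = torch.cat([projector(siglip_tokens, 24, 24), eagle_tokens], dim=1)
print(visual_input.shape)  # torch.Size([1, 208, 3584])
```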


## Environment Setup
```bash
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
```
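
After setup, a minimal loading sketch might look like the following. The repo id and the use of `AutoModelForCausalLM` with `trust_remote_code` are assumptions here; refer to the [GitHub repo](https://github.com/bytedance/Valley) for the official inference code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; check the release link above for the official checkpoint.
repo = "ByteDance/Valley-Qwen2.5-7B"

# trust_remote_code is assumed to be required, since the Valley architecture
# is not part of the core transformers library.
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
```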

## License Agreement
All of our open-source models are licensed under the Apache-2.0 license.


## Citation
Coming Soon!