valley · wuziheng committed on commit b7fdb46 · verified · 1 parent: 5beb519

update README.md

Files changed (1): README.md (+105, -3)
README.md CHANGED
@@ -1,3 +1,105 @@
- ---
- license: apache-2.0
- ---

# Valley 2.0
## Introduction
Valley ([GitHub](https://github.com/bytedance/Valley)) is a cutting-edge multimodal large model developed by ByteDance, designed to handle a variety of tasks involving text, image, and video data. When evaluated against models of the same scale, our model

- achieved the best results on our in-house e-commerce and short-video benchmarks, and
- demonstrated comparatively outstanding performance on the OpenCompass benchmark (average score > 67).

## Valley-Eagle
The foundational version of Valley is a multimodal large model that aligns SigLIP with Qwen2.5, using a LargeMLP and a ConvAdapter to construct the projector.

- In the final version, we also took inspiration from [Eagle](https://arxiv.org/pdf/2408.15998) and introduced an additional VisionEncoder whose token count can be adjusted flexibly and whose outputs run in parallel with the original visual tokens.
- This enhancement strengthens the model in extreme scenarios; we chose the Qwen2-VL VisionEncoder for this purpose.

The model structure is shown below; a schematic sketch of the visual token flow follows the figure.

<div style="display:flex;">
<img src="valley_structure.jpeg" alt="valley structure" style="height:600px;" />
</div>

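To make the projector and dual-encoder description concrete, here is a minimal PyTorch-style sketch of how the two visual token streams could be assembled: a SigLIP stream passed through a ConvAdapter-plus-MLP projector, plus a parallel Eagle-style stream, concatenated with the text embeddings. All class names, dimensions, and the exact fusion strategy are illustrative assumptions and do not reflect Valley's actual implementation (see the repository for the real code).

``` python
# Illustrative sketch only: module names, dimensions, and fusion details are
# assumptions based on the description above, not Valley's code.
import torch
import torch.nn as nn


class Projector(nn.Module):
    """ConvAdapter (strided Conv1d over patch tokens) followed by a large MLP."""

    def __init__(self, vision_dim: int, llm_dim: int, stride: int = 2):
        super().__init__()
        self.conv_adapter = nn.Conv1d(vision_dim, vision_dim, kernel_size=stride, stride=stride)
        self.mlp = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:          # x: (B, N, D_vision)
        x = self.conv_adapter(x.transpose(1, 2)).transpose(1, 2)  # (B, N // stride, D_vision)
        return self.mlp(x)                                        # (B, N // stride, D_llm)


class DualVisionSketch(nn.Module):
    """SigLIP tokens go through the projector; an Eagle-style encoder produces a
    parallel token stream; both are concatenated with the text embeddings."""

    def __init__(self, siglip, eagle, vision_dim: int, eagle_dim: int, llm_dim: int):
        super().__init__()
        self.siglip = siglip                    # original SigLIP vision tower
        self.eagle = eagle                      # extra Qwen2-VL-style vision tower
        self.projector = Projector(vision_dim, llm_dim)
        self.eagle_proj = nn.Linear(eagle_dim, llm_dim)

    def forward(self, pixel_values: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        siglip_tokens = self.projector(self.siglip(pixel_values))  # (B, N1, D_llm)
        eagle_tokens = self.eagle_proj(self.eagle(pixel_values))   # (B, N2, D_llm)
        # The parallel visual streams and the text tokens form the LLM input sequence.
        return torch.cat([siglip_tokens, eagle_tokens, text_embeds], dim=1)
```
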
## Release
- [12/23] 🔥 Announcing [Valley-Qwen2.5-7B](https://huggingface.co/ByteDance)!

## Environment Setup
``` bash
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
```

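Optionally, you can verify that the pinned CUDA build of PyTorch was picked up before installing the remaining requirements; the expected values below simply mirror the versions pinned above.

``` python
# Optional sanity check: confirm the pinned PyTorch build and CUDA availability.
import torch

print(torch.__version__)           # expected to start with 2.4.0
print(torch.version.cuda)          # expected: 12.1 for the cu121 wheels
print(torch.cuda.is_available())   # True on a machine with a working CUDA driver
```
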
## Inference Demo
- Single image
``` python
import urllib.request

from valley_eagle_chat import ValleyEagleChat

model = ValleyEagleChat(
    model_path='path/to/ckpt',
    padding_side='left',
)

# Download an example image as raw bytes.
url = 'http://p16-goveng-va.ibyteimg.com/tos-maliva-i-wtmo38ne4c-us/4870400481414052507~tplv-wtmo38ne4c-jpeg.jpeg'
img = urllib.request.urlopen(url=url, timeout=5).read()

request = {
    "chat_history": [
        {'role': 'system', 'content': 'You are Valley, developed by ByteDance. You are a helpful assistant.'},
        {'role': 'user', 'content': 'Describe the given image.'},
    ],
    "images": [img],
}

result = model(request)
print("\n>>> Assistant:\n")
print(result)
```

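The `images` field in the request is a list of raw image bytes here (the video demo below passes PIL images instead), so a local file can presumably be supplied the same way. The following variant is a sketch based on the request format above; the file path is a placeholder.

``` python
# Sketch: same request format as above, but with a local image file.
# 'path/to/local_image.jpg' is a placeholder path, not a file from the repo.
with open('path/to/local_image.jpg', 'rb') as f:
    img = f.read()

request = {
    "chat_history": [
        {'role': 'system', 'content': 'You are Valley, developed by ByteDance. You are a helpful assistant.'},
        {'role': 'user', 'content': 'Describe the given image.'},
    ],
    "images": [img],
}
print(model(request))
```
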
- Video
``` python
import decord
import numpy as np
import requests
from torchvision import transforms

from valley_eagle_chat import ValleyEagleChat

model = ValleyEagleChat(
    model_path='path/to/ckpt',
    padding_side='left',
)

# Download an example video.
url = 'https://videos.pexels.com/video-files/29641276/12753127_1920_1080_25fps.mp4'
video_file = './video.mp4'
response = requests.get(url)
if response.status_code == 200:
    with open(video_file, "wb") as f:
        f.write(response.content)
else:
    print("download error!")
    exit(1)

# Uniformly sample 8 frames and convert them to RGB PIL images.
video_reader = decord.VideoReader(video_file)
decord.bridge.set_bridge("torch")
video = video_reader.get_batch(
    np.linspace(0, len(video_reader) - 1, 8).astype(np.int_)
).byte()
frames = [transforms.ToPILImage()(frame.permute(2, 0, 1)).convert("RGB") for frame in video]

request = {
    "chat_history": [
        {'role': 'system', 'content': 'You are Valley, developed by ByteDance. You are a helpful assistant.'},
        {'role': 'user', 'content': 'Describe the given video.'},
    ],
    "images": frames,
}
result = model(request)
print("\n>>> Assistant:\n")
print(result)
```

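If you want to experiment with different frame counts, the sampling step above can be factored into a small helper. This is only a convenience wrapper around the same decord/torchvision calls used in the demo; it is not part of the Valley package itself.

``` python
import decord
import numpy as np
from torchvision import transforms


def sample_frames(video_path: str, num_frames: int = 8):
    """Uniformly sample `num_frames` RGB PIL frames from a video file.

    Convenience wrapper around the decord/torchvision calls used in the demo
    above; not part of the Valley API.
    """
    decord.bridge.set_bridge("torch")
    reader = decord.VideoReader(video_path)
    indices = np.linspace(0, len(reader) - 1, num_frames).astype(np.int_)
    batch = reader.get_batch(indices).byte()        # (T, H, W, C) uint8 tensor
    to_pil = transforms.ToPILImage()
    return [to_pil(frame.permute(2, 0, 1)).convert("RGB") for frame in batch]


# Example: reuse the request format above with the sampled frames.
# request["images"] = sample_frames("./video.mp4", num_frames=8)
```
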
## License Agreement
All of our open-source models are licensed under the Apache-2.0 license.

## Citation
Coming Soon!