Update README.md
README.md
CHANGED
---
license: apache-2.0
datasets:
- OpenGVLab/VideoChat2-IT
- Lin-Chen/ShareGPT4V
- liuhaotian/LLaVA-Instruct-150K
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: visual-question-answering
tags:
- multimodal large language model
- large video-language model
---

<p align="center">
    <img src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/ROs4bHIp4zJ7g7vzgUycu.png" width="150" style="margin-bottom: 0.2;"/>
</p>

<h3 align="center"><a href="https://arxiv.org/abs/2406.07476">VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs</a></h3>
<h5 align="center"> If you like our project, please give us a star ⭐ on <a href="https://github.com/DAMO-NLP-SG/VideoLLaMA2">GitHub</a> for the latest updates. </h5>

<p align="center"><video src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/Wj7GuqQ0CB9JRoPo6_GoH.webm" width="800"></p>

## 📰 News

* **[2024.06.12]** Released the model weights and the first version of the VideoLLaMA 2 technical report.
* **[2024.06.03]** Released the training, evaluation, and serving code of VideoLLaMA 2.


## 🌎 Model Zoo
| Model Name | Type | Visual Encoder | Language Decoder | # Training Frames |
|:-------------------|:--------------:|:----------------|:------------------|:----------------------:|
| [VideoLLaMA2-7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 8 |
| [VideoLLaMA2-7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 8 |
| [VideoLLaMA2-7B-16F-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 16 |
| [VideoLLaMA2-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 16 |
| [VideoLLaMA2-8x7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8 |
| [VideoLLaMA2-8x7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8 |
| [VideoLLaMA2-72B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) | 8 |
| [VideoLLaMA2-72B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B) (This checkpoint) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) | 8 |
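
The 72B checkpoints are large, so it is often convenient to fetch the weights before launching inference. Below is a minimal sketch, not part of the official instructions, that pre-downloads this checkpoint with `huggingface_hub`; the local directory is only an illustrative choice.

```python
# Minimal sketch (assumption: huggingface_hub is installed, e.g. `pip install huggingface_hub`).
# Pre-downloads the VideoLLaMA2-72B checkpoint listed in the Model Zoo table above.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="DAMO-NLP-SG/VideoLLaMA2-72B",       # this checkpoint
    local_dir="./checkpoints/VideoLLaMA2-72B",   # hypothetical target directory
)
print(f"Checkpoint files downloaded to: {local_path}")
```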


## 🚀 Main Results

### Multi-Choice Video QA & Video Captioning
<p><img src="https://github.com/user-attachments/assets/fbe3e3c2-b0f1-4e29-8b92-bc3611192909" width="800"/></p>


### Open-Ended Video QA
<p><img src="https://github.com/user-attachments/assets/cee2efe1-309e-4301-a217-e2a848799953" width="800"/></p>


## 🤖 Inference with VideoLLaMA2
```python
import sys
import torch
import transformers

sys.path.append('./')
from videollama2.conversation import conv_templates, SeparatorStyle
from videollama2.constants import DEFAULT_MMODAL_TOKEN, MMODAL_TOKEN_INDEX
from videollama2.mm_utils import get_model_name_from_path, tokenizer_MMODAL_token, KeywordsStoppingCriteria, process_video, process_image
from videollama2.model.builder import load_pretrained_model


def inference():
    # Video Inference
    paths = ['assets/cat_and_chicken.mp4']
    questions = ['What animals are in the video, what are they doing, and how does the video feel?']
    # Reply:
    # The video features a kitten and a baby chick playing together. The kitten is seen laying on the floor while the baby chick hops around. The two animals interact playfully with each other, and the video has a cute and heartwarming feel to it.
    modal_list = ['video']

    # Video Inference
    paths = ['assets/sora.mp4']
    questions = ['Please describe this video.']
    # Reply:
    # The video features a series of colorful kites flying in the sky. The kites are first seen flying over trees, and then they are shown flying in the sky. The kites come in various shapes and colors, including red, green, blue, and yellow. The video captures the kites soaring gracefully through the air, with some kites flying higher than others. The sky is clear and blue, and the trees below are lush and green. The kites are the main focus of the video, and their vibrant colors and intricate designs are highlighted against the backdrop of the sky and trees. Overall, the video showcases the beauty and artistry of kite-flying, and it is a delight to watch the kites dance and glide through the air.
    modal_list = ['video']

    # Image Inference
    # Note: each example above reassigns paths/questions/modal_list, so only this last
    # (image) example is actually run; keep whichever block you want to execute.
    paths = ['assets/sora.png']
    questions = ['What is the woman wearing, what is she doing, and how does the image feel?']
    # Reply:
    # The woman in the image is wearing a black coat and sunglasses, and she is walking down a rain-soaked city street. The image feels vibrant and lively, with the bright city lights reflecting off the wet pavement, creating a visually appealing atmosphere. The woman's presence adds a sense of style and confidence to the scene, as she navigates the bustling urban environment.
    modal_list = ['image']

    # 1. Initialize the model.
    model_path = 'DAMO-NLP-SG/VideoLLaMA2-72B'
    model_name = get_model_name_from_path(model_path)
    tokenizer, model, processor, context_len = load_pretrained_model(model_path, None, model_name)
    model = model.to('cuda:0')
    conv_mode = 'llama_2'

    # 2. Visual preprocess (load & transform image or video).
    if modal_list[0] == 'video':
        tensor = process_video(paths[0], processor, model.config.image_aspect_ratio).to(dtype=torch.float16, device='cuda', non_blocking=True)
        default_mm_token = DEFAULT_MMODAL_TOKEN["VIDEO"]
        modal_token_index = MMODAL_TOKEN_INDEX["VIDEO"]
    else:
        tensor = process_image(paths[0], processor, model.config.image_aspect_ratio)[0].to(dtype=torch.float16, device='cuda', non_blocking=True)
        default_mm_token = DEFAULT_MMODAL_TOKEN["IMAGE"]
        modal_token_index = MMODAL_TOKEN_INDEX["IMAGE"]
    tensor = [tensor]

    # 3. Text preprocess (tag process & generate prompt).
    question = default_mm_token + "\n" + questions[0]
    conv = conv_templates[conv_mode].copy()
    conv.append_message(conv.roles[0], question)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()
    input_ids = tokenizer_MMODAL_token(prompt, tokenizer, modal_token_index, return_tensors='pt').unsqueeze(0).to('cuda:0')

    # 4. Generate a response according to visual signals and prompts.
    stop_str = conv.sep if conv.sep_style in [SeparatorStyle.SINGLE] else conv.sep2
    # keywords = ["<s>", "</s>"]
    keywords = [stop_str]
    stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)

    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images_or_videos=tensor,
            modal_list=modal_list,
            do_sample=True,
            temperature=0.2,
            max_new_tokens=1024,
            use_cache=True,
            stopping_criteria=[stopping_criteria],
        )

    outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    print(outputs[0])


if __name__ == "__main__":
    inference()
```
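
The script above hard-codes its example inputs and reloads the checkpoint on every run. As a hedged sketch rather than official repository code, the same calls can be wrapped in a small helper (the `answer` function below is a hypothetical name) so the 72B model is loaded once and then queried repeatedly; it assumes, exactly as in the snippet above, that the VideoLLaMA2 repository is on the Python path.

```python
# Hedged sketch: reuse the calls from the official snippet inside a reusable helper.
import sys
import torch

sys.path.append('./')
from videollama2.conversation import conv_templates, SeparatorStyle
from videollama2.constants import DEFAULT_MMODAL_TOKEN, MMODAL_TOKEN_INDEX
from videollama2.mm_utils import get_model_name_from_path, tokenizer_MMODAL_token, KeywordsStoppingCriteria, process_video, process_image
from videollama2.model.builder import load_pretrained_model

# Load the checkpoint once at import time.
MODEL_PATH = 'DAMO-NLP-SG/VideoLLaMA2-72B'
tokenizer, model, processor, _ = load_pretrained_model(MODEL_PATH, None, get_model_name_from_path(MODEL_PATH))
model = model.to('cuda:0')


def answer(path, question, modal='video', conv_mode='llama_2'):
    """Ask one question about one video or image (modal is 'video' or 'image')."""
    # Visual preprocess, as in step 2 of the snippet above.
    if modal == 'video':
        tensor = process_video(path, processor, model.config.image_aspect_ratio).to(dtype=torch.float16, device='cuda', non_blocking=True)
        mm_token, token_index = DEFAULT_MMODAL_TOKEN["VIDEO"], MMODAL_TOKEN_INDEX["VIDEO"]
    else:
        tensor = process_image(path, processor, model.config.image_aspect_ratio)[0].to(dtype=torch.float16, device='cuda', non_blocking=True)
        mm_token, token_index = DEFAULT_MMODAL_TOKEN["IMAGE"], MMODAL_TOKEN_INDEX["IMAGE"]

    # Text preprocess, as in step 3.
    conv = conv_templates[conv_mode].copy()
    conv.append_message(conv.roles[0], mm_token + "\n" + question)
    conv.append_message(conv.roles[1], None)
    input_ids = tokenizer_MMODAL_token(conv.get_prompt(), tokenizer, token_index, return_tensors='pt').unsqueeze(0).to('cuda:0')

    # Generation, as in step 4.
    stop_str = conv.sep if conv.sep_style in [SeparatorStyle.SINGLE] else conv.sep2
    stopping_criteria = KeywordsStoppingCriteria([stop_str], tokenizer, input_ids)
    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images_or_videos=[tensor],
            modal_list=[modal],
            do_sample=True,
            temperature=0.2,
            max_new_tokens=1024,
            use_cache=True,
            stopping_criteria=[stopping_criteria],
        )
    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]


print(answer('assets/cat_and_chicken.mp4', 'What animals are in the video?', modal='video'))
```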


## Citation

If you find VideoLLaMA useful for your research and applications, please cite using this BibTeX:
```bibtex
@article{damonlpsg2024videollama2,
  title   = {VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs},
  author  = {Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong},
  journal = {arXiv preprint arXiv:2406.07476},
  year    = {2024},
  url     = {https://arxiv.org/abs/2406.07476}
}

@article{damonlpsg2023videollama,
  title   = {Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding},
  author  = {Zhang, Hang and Li, Xin and Bing, Lidong},
  journal = {arXiv preprint arXiv:2306.02858},
  year    = {2023},
  url     = {https://arxiv.org/abs/2306.02858}
}
```