---
license: llama2
tags:
- vision-language model
- llama
- video understanding
---

# Flash-VStream Model Card
<a href='https://invinciblewyq.github.io/vstream-page/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
<a href='https://arxiv.org/abs/2406.08085v1'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>

## Model details
We propose Flash-VStream, a video-language model that simulates the human memory mechanism. It can process extremely long video streams in real time and respond to user queries simultaneously.

**This checkpoint is from stage-1 pretraining only.**
**Please use [the stage-2 finetuned checkpoint](https://huggingface.co/IVGSZ/Flash-VStream-7b) for better performance.**

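If you only need the weights locally (for example, to use with the Flash-VStream project's own inference scripts), a minimal sketch with `huggingface_hub` is shown below. It downloads the stage-2 checkpoint linked above (`IVGSZ/Flash-VStream-7b`); substitute this repo's id if you specifically want the stage-1 pretraining weights.

```python
# Minimal sketch: fetch the checkpoint files for use with the Flash-VStream codebase.
# Uses the stage-2 checkpoint recommended above; swap in this repo's id for the
# stage-1 pretraining weights if that is what you need.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="IVGSZ/Flash-VStream-7b")
print(f"Checkpoint downloaded to: {local_dir}")
```
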
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Training data
This model is trained on image data from the LLaVA-1.5 dataset and video data from the WebVid and ActivityNet datasets, following LLaMA-VID, including:
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
- 232K video-caption pairs sampled from the WebVid 2.5M dataset.
- 98K videos from ActivityNet with QA pairs from Video-ChatGPT.