ZhangYuanhan committed on
Commit bf4420b
1 Parent(s): d395cda

Update README.md

Files changed (1)
  1. README.md +51 -1
README.md CHANGED
@@ -1,3 +1,53 @@
  ---
- license: llama2
+ inference: false
+ license: apache-2.0
  ---
+ <br>
+
+ # LLaVA-Next-Video Model Card
+
+ ## Model details
+
+ **Model type:**
+ <br>
+ LLaVA-Next-Video is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
+ <br>
+ Base LLM: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
+
+ **Model date:**
+ <br>
+ LLaVA-Next-Video-7B-34K was trained in April 2024.
+
+ **Paper or resources for more information:**
+ <br>
+ https://github.com/LLaVA-VL/LLaVA-NeXT
+
+ ## License
+ Apache License 2.0. The base LLM, [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2),
+ is likewise released under the Apache 2.0 license.
+
+ ## Where to send questions or comments about the model
+ https://github.com/LLaVA-VL/LLaVA-NeXT/issues
+
+ ## Intended use
+ **Primary intended uses:**
+ <br>
+ The primary use of LLaVA-Next-Video is research on large multimodal models and chatbots.
+
+ **Primary intended users:**
+ <br>
+ The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
+
+ ## Training dataset
+
+ ### Image
+ - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
+ - 158K GPT-generated multimodal instruction-following data.
+ - 500K academic-task-oriented VQA data mixture.
+ - 50K GPT-4V data mixture.
+ - 40K ShareGPT data.
+ ### Video
+ - 100K VideoChatGPT-Instruct.
+
+ ## Evaluation dataset
+ A collection of 4 benchmarks, comprising 3 academic VQA benchmarks and 1 captioning benchmark.