ZhangYuanhan committed on
Commit
475c62d
1 Parent(s): 8b5efe3

Update README.md

Files changed (1)
  1. README.md +47 -1
README.md CHANGED
@@ -1,3 +1,49 @@
  ---
- license: llama2
+ inference: false
+ license: apache-2.0
  ---
+
+ <br>
+ <br>
+
+ # LLaVA-Next-Video Model Card
+
+ ## Model details
+
+ **Model type:**
+ LLaVA-Next-Video is an open-source chatbot trained by fine-tuning a large language model (LLM) on multimodal instruction-following data.
+ Base LLM: [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
+
+ **Model date:**
+ LLaVA-Next-Video-34B was trained in April 2024.
+
+ **Paper or resources for more information:**
+ https://github.com/LLaVA-VL/LLaVA-NeXT
+
+ ## License
+ This model is released under the [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) license.
+
+
+ **Where to send questions or comments about the model:**
+ https://github.com/LLaVA-VL/LLaVA-NeXT/issues
+
+ ## Intended use
+ **Primary intended uses:**
+ The primary use of LLaVA-Next-Video is research on large multimodal models and chatbots.
+
+ **Primary intended users:**
+ The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
+
+ ## Training dataset
+
+ ### Image
+ - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
+ - 158K GPT-generated multimodal instruction-following data.
+ - 500K academic-task-oriented VQA data mixture.
+ - 50K GPT-4V data mixture.
+ - 40K ShareGPT data.
+ ### Video
+ - 100K VideoChatGPT-Instruct.
+
+ ## Evaluation dataset
+ A collection of 4 benchmarks, including 3 academic VQA benchmarks and 1 captioning benchmark.
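
As a usage note, here is a minimal sketch of downloading this checkpoint and loading it through the LLaVA-NeXT code base linked above. The Hub repo id `lmms-lab/LLaVA-NeXT-Video-34B` and the `load_pretrained_model` / `get_model_name_from_path` helpers are assumptions about the repository layout, not APIs stated in this card; see https://github.com/LLaVA-VL/LLaVA-NeXT for the exact interface.

```python
# Minimal sketch, assuming the LLaVA-NeXT repository is installed per its README
# and that the checkpoint is published under the assumed Hub repo id below.
from huggingface_hub import snapshot_download

# Download the full checkpoint into the local Hugging Face cache.
model_path = snapshot_download("lmms-lab/LLaVA-NeXT-Video-34B")  # assumed repo id

# These helpers follow the LLaVA-NeXT repository layout; names may differ across
# versions, so treat this as a sketch rather than a pinned API.
from llava.mm_utils import get_model_name_from_path
from llava.model.builder import load_pretrained_model

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path,                            # local path to the downloaded weights
    None,                                  # no separate base LLM; weights are already merged
    get_model_name_from_path(model_path),  # e.g. "LLaVA-NeXT-Video-34B"
)
```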