ZhangYuanhan committed
Commit e722df2
1 Parent(s): 0f37b5d

Update README.md

Files changed (1)
  1. README.md +7 -0
README.md CHANGED
@@ -11,13 +11,17 @@ license: apache-2.0
 ## Model details
 
 **Model type:**
+<br>
 LLaVA-Next-Video is an open-source chatbot trained by fine-tuning LLM on multimodal instruction-following data.
+<br>
 Base LLM: [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
 
 **Model date:**
+<br>
 LLaVA-Next-Video-34B was trained in April 2024.
 
 **Paper or resources for more information:**
+<br>
 https://github.com/LLaVA-VL/LLaVA-NeXT
 
 ## License
@@ -25,13 +29,16 @@ https://github.com/LLaVA-VL/LLaVA-NeXT
 
 
 **Where to send questions or comments about the model:**
+<br>
 https://github.com/LLaVA-VL/LLaVA-NeXT/issues
 
 ## Intended use
 **Primary intended uses:**
+<br>
 The primary use of LLaVA is research on large multimodal models and chatbots.
 
 **Primary intended users:**
+<br>
 The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
 
 ## Training dataset
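
The diff above only adds line breaks to the card, but the card itself names the model type and base LLM, so a loading sketch may help readers. This is a minimal, illustrative sketch only: it assumes the LLaVA-NeXT codebase linked above is installed (e.g. `pip install -e .` from that repository) and that this card belongs to a Hub checkpoint with the id `lmms-lab/LLaVA-NeXT-Video-34B`; both the checkpoint id and the helper functions are assumptions drawn from the linked repository, not statements made by this README.

```python
# Illustrative sketch only: assumes the LLaVA-NeXT repo (linked above) is installed
# and that "lmms-lab/LLaVA-NeXT-Video-34B" is the checkpoint this card describes.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "lmms-lab/LLaVA-NeXT-Video-34B"  # assumed Hub id for this card

# Returns the tokenizer, the multimodal model, the image processor used for
# video frames, and the model's context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path,
    None,  # model_base: not needed when loading a full (non-LoRA) checkpoint
    get_model_name_from_path(model_path),
)
```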