---
inference: false
---

<br>
<br>

# LLaVA Model Card

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.

**Model date:**
LLaVA-v1.5-7B was trained in September 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/
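
As a quick illustration of how an auto-regressive multimodal model like this one can be queried, here is a minimal sketch using the Hugging Face `transformers` LLaVA integration. The converted checkpoint name `llava-hf/llava-1.5-7b-hf`, the example image URL, and the version requirement (`transformers >= 4.36`, the first release with LLaVA support) are assumptions for illustration, not part of this card.

```python
# Minimal sketch: asking a LLaVA-v1.5-7B checkpoint about an image.
# Assumes the HF-converted checkpoint "llava-hf/llava-1.5-7b-hf" and
# transformers >= 4.36; neither is shipped by this repository itself.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# LLaVA-v1.5 uses a plain USER/ASSISTANT template with an <image> slot.
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
url = "https://llava-vl.github.io/static/images/view.jpg"  # any RGB image works
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```

The `USER: <image>\n... ASSISTANT:` template shown above is the v1.5 prompt format; other inference frontends expose the same format through their own APIs.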

## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues

## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.

## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.