PsiPi committed
Commit 5891b0d
1 Parent(s): c049e44

Update README.md


added upstream README to card

Files changed (1)
  1. README.md +44 -0
README.md CHANGED
@@ -2,6 +2,50 @@
  pipeline_tag: visual-question-answering
  ---

+ ---
+ inference: false
+ ---
+
+ <br>
+ <br>
+
+ # LLaVA Model Card
+
+ ## Model details
+
+ **Model type:**
+ LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
+ It is an auto-regressive language model, based on the transformer architecture.
+
+ **Model date:**
+ LLaVA-v1.5-13B was trained in September 2023.
+
+ **Paper or resources for more information:**
+ https://llava-vl.github.io/
+
+ ## License
+ Llama 2 is licensed under the LLAMA 2 Community License,
+ Copyright (c) Meta Platforms, Inc. All Rights Reserved.
+
+ **Where to send questions or comments about the model:**
+ https://github.com/haotian-liu/LLaVA/issues
+
+ ## Intended use
+ **Primary intended uses:**
+ The primary use of LLaVA is research on large multimodal models and chatbots.
+
+ **Primary intended users:**
+ The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
+
+ ## Training dataset
+ - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
+ - 158K GPT-generated multimodal instruction-following data.
+ - 450K academic-task-oriented VQA data mixture.
+ - 40K ShareGPT data.
+
+ ## Evaluation dataset
+ A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
+

  llava-v1.5-13b-GGUF
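The upstream card describes the model but not how to query GGUF weights like these. As a hedged sketch only: one common route is llama-cpp-python's LLaVA chat handler, which needs the separate CLIP/mmproj GGUF alongside the language-model GGUF. The package choice, quant filename, and every file path below are assumptions, not something this card specifies.

```python
# Hedged sketch: visual question answering against LLaVA-v1.5-13B GGUF weights
# via llama-cpp-python. All file paths are hypothetical placeholders; the
# mmproj/CLIP GGUF is a separate file the LLaVA chat handler needs.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="path/to/mmproj-model-f16.gguf")

llm = Llama(
    model_path="path/to/llava-v1.5-13b.Q4_K_M.gguf",  # hypothetical quant filename
    chat_handler=chat_handler,
    n_ctx=2048,        # room for the image embedding tokens plus the reply
    logits_all=True,   # the LLaVA handler evaluates logits for every token
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You answer questions about images."},
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/image.jpg"}},
                {"type": "text", "text": "What is shown in this image?"},
            ],
        },
    ],
)
print(response["choices"][0]["message"]["content"])
```

The user message mixes an `image_url` part with a `text` part, following the OpenAI-style multimodal message format the handler expects; a base64 data URI can stand in for the `file://` URL.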