liuhaotian committed
Commit 88d35e5
1 Parent(s): 7a1c970

Update README.md

---
license: apache-2.0
inference: false
---

# LLaVA Model Card

## Model details

**Model type:**
The first-stage pretrained checkpoint of LLaVA.
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model based on the transformer architecture.

**Model date:**
LLaVA was trained in April 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/

**License:**
Apache License 2.0

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
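
The snippet below is a minimal sketch of one way to fetch the checkpoint files from the Hugging Face Hub with `huggingface_hub`; the repo id shown is a hypothetical placeholder, not necessarily this repository's id.

```python
# Minimal sketch: download the checkpoint files from the Hugging Face Hub.
# The repo id below is a hypothetical placeholder; substitute the id of
# the repository this model card accompanies.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="liuhaotian/LLaVA-pretrain")  # hypothetical id
print(f"Checkpoint files downloaded to: {local_dir}")
```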

## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
- 595K filtered image-text pairs from CC3M.
- 150K GPT-generated multimodal instruction-following examples.

## Evaluation dataset
A preliminary evaluation of model quality uses a set of 90 visual reasoning questions built from 30 unique images randomly sampled from COCO val 2014; each image is paired with three question types: conversational, detailed description, and complex reasoning. GPT-4 is used to judge the model outputs.
We also evaluate the model on the ScienceQA dataset, where the synergy of LLaVA and GPT-4 sets a new state of the art.
See https://llava-vl.github.io/ for more details.
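
The GPT-4 judging step can be sketched as follows; the prompt wording, scoring scale, and `judge` helper are illustrative assumptions, not the authors' exact rubric.

```python
# Minimal sketch of GPT-4-as-judge scoring. The prompt is illustrative,
# not the authors' exact rubric. Assumes the `openai` package (v1+) and
# an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def judge(question: str, reference_answer: str, candidate_answer: str) -> str:
    """Ask GPT-4 to rate a candidate answer against a reference answer."""
    prompt = (
        "You are a precise assistant for checking answer quality.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference_answer}\n"
        f"Candidate answer: {candidate_answer}\n"
        "Rate the candidate answer from 1 to 10 and briefly explain why."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```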