liuhaotian committed on
Commit 22a8271
1 Parent(s): 381abca

Create README.md

Files changed (1)
  1. README.md +40 -0
README.md ADDED
@@ -0,0 +1,40 @@
+ ---
+ inference: false
+ ---
+
+ <br>
+ <br>
+
+ # LLaVA Model Card
+
+ This is a pretrained checkpoint; you can use it to instruction-tune your own multimodal models.
+
+ Check out the instructions [here](https://github.com/haotian-liu/LLaVA/blob/main/README.md#visual-instruction-tuning).
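+
+ As a minimal sketch of getting the weights locally (assuming this repo's Hub id is `liuhaotian/llava-336px-pretrain-vicuna-7b-v1.3`; verify the exact id and file names on the repo page), you can fetch the checkpoint with `huggingface_hub` and point the LLaVA training scripts at it:
+
+ ```python
+ # Minimal sketch: download this pretrained checkpoint so it can be handed to
+ # the LLaVA visual-instruction-tuning scripts linked above.
+ # Assumption: the repo id below matches this model card; verify on the Hub.
+ from huggingface_hub import snapshot_download
+
+ ckpt_dir = snapshot_download("liuhaotian/llava-336px-pretrain-vicuna-7b-v1.3")
+ print(ckpt_dir)  # local directory to pass to the training script
+ ```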
+
+ ## Model details
+
+ **Model type:**
+ LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
+ It is an auto-regressive language model based on the transformer architecture.
+
+ **Model date:**
+ LLaVA-336px-Pretrain-Vicuna-7B-v1.3 was trained in July 2023.
+
+ **Paper or resources for more information:**
+ https://llava-vl.github.io/
+
+ ## License
+ Non-commercial use.
+
+ **Where to send questions or comments about the model:**
+ https://github.com/haotian-liu/LLaVA/issues
+
+ ## Intended use
+ **Primary intended uses:**
+ The primary use of LLaVA is research on large multimodal models and chatbots.
+
+ **Primary intended users:**
+ The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
+
+ ## Training dataset
+ - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
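+
+ These pairs are distributed with the LLaVA repo. A rough sketch of inspecting them, assuming the released file name (`blip_laion_cc_sbu_558k.json`) and an entry schema with `image` and `conversations` fields:
+
+ ```python
+ # Rough sketch of inspecting the pretraining pairs.
+ # Assumptions: file name and entry schema ("image", "conversations") follow
+ # the JSON released with the LLaVA repo; adjust to the file you download.
+ import json
+
+ with open("blip_laion_cc_sbu_558k.json") as f:
+     pairs = json.load(f)
+
+ print(len(pairs))               # expected on the order of 558K entries
+ sample = pairs[0]
+ print(sample["image"])          # relative path to the image file
+ print(sample["conversations"])  # caption-style instruction/response turns
+ ```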