mucai committed on
Commit
e870c70
1 Parent(s): b162647

Update README.md

Files changed (1)
  1. README.md +39 -1
README.md CHANGED
@@ -1,3 +1,41 @@
  ---
- license: apache-2.0
+ inference: false
  ---
+
+
+ <br>
+ <br>
+
+ # ViP-LLaVA Model Card
+
+ ## Model details
+
+ **Model type:**
+ ViP-LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on both image-level instruction data and region-level instruction data annotated with visual prompts.
+ It is an auto-regressive language model, based on the transformer architecture.
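+
+ As a quick illustration (a minimal sketch, not an official recipe): the snippet below assumes the merged Hugging Face-format checkpoint `llava-hf/vip-llava-7b-hf` (this pretrain-stage checkpoint is not directly runnable this way), the `VipLlavaForConditionalGeneration` class from a recent `transformers` release, its plain-text prompt template, and an arbitrary COCO image URL. The red rectangle drawn on the image plays the role of the visual prompt.
+
+ ```python
+ import requests
+ import torch
+ from PIL import Image, ImageDraw
+ from transformers import AutoProcessor, VipLlavaForConditionalGeneration
+
+ # Assumption: the fully fine-tuned, merged HF-format checkpoint,
+ # not the pretrain-stage checkpoint described in this card.
+ model_id = "llava-hf/vip-llava-7b-hf"
+ processor = AutoProcessor.from_pretrained(model_id)
+ model = VipLlavaForConditionalGeneration.from_pretrained(
+     model_id, torch_dtype=torch.float16, device_map="auto"
+ )
+
+ # Draw a red box directly on the pixels: ViP-LLaVA reads visual prompts
+ # (boxes, arrows, scribbles) overlaid on the image rather than textual
+ # region coordinates.
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
+ ImageDraw.Draw(image).rectangle([40, 30, 290, 220], outline="red", width=4)
+
+ question = "What is shown inside the red rectangle?"
+ prompt = (
+     "A chat between a curious human and an artificial intelligence assistant. "
+     "The assistant gives helpful, detailed, and polite answers to the "
+     f"human's questions.###Human: <image>\n{question}###Assistant:"
+ )
+
+ inputs = processor(text=prompt, images=image, return_tensors="pt").to(
+     model.device, torch.float16
+ )
+ output = model.generate(**inputs, max_new_tokens=100)
+ print(processor.decode(output[0], skip_special_tokens=True))
+ ```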
+
+ **Model date:**
+ ViP-LLaVA-7B-Pretrain was trained in November 2023. [Paper](https://arxiv.org/abs/2312.00784)
+
+ **Paper or resources for more information:**
+ https://vip-llava.github.io/
+
+ ## License
+ Llama 2 is licensed under the LLAMA 2 Community License,
+ Copyright (c) Meta Platforms, Inc. All Rights Reserved.
+
+ **Where to send questions or comments about the model:**
+ https://github.com/mu-cai/ViP-LLaVA/issues
+
+ ## Intended use
+ **Primary intended uses:**
+ The primary use of ViP-LLaVA is research on large multimodal models and chatbots.
+
+ **Primary intended users:**
+ The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
+
+ ## Training dataset
+ - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
+
+ ## Evaluation dataset
+ ViP-LLaVA achieves state-of-the-art performance on 4 academic region-level benchmarks as well as on our newly proposed RegionBench.