zhibinlan committed
Commit 8c19551 · verified · 1 Parent(s): f4b03f2

Update README.md

Files changed (1):
  1. README.md +40 -3
README.md CHANGED
@@ -1,3 +1,40 @@
- ---
- license: apache-2.0
- ---
+ ---
+ inference: false
+ pipeline_tag: image-text-to-text
+ ---
+ <br>
+ <br>
+
+ # AVG-LLaVA Model Card
+
+ ## Model details
+
+ **Model type:**
+ AVG-LLaVA-Stage3 is an open-source LMM trained with multi-granularity visual instruction tuning.
+ It is an auto-regressive language model based on the transformer architecture.
+ Base LLM: [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)
+
+ **Paper or resources for more information:**
+ https://arxiv.org/abs/2410.02745
+
+ ## License
+ Llama 2 is licensed under the LLAMA 2 Community License,
+ Copyright (c) Meta Platforms, Inc. All Rights Reserved.
+
+ **Where to send questions or comments about the model:**
+ https://github.com/DeepLearnXMU/AVG-LLaVA/issues
+
+ ## Intended use
+ **Primary intended uses:**
+ The primary use of AVG-LLaVA is research on large multimodal models and chatbots.
+
+ **Primary intended users:**
+ The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
+
+ ## Training dataset
+ - ShareGPT4V Mix665K
+ - 200K GPT4V-generated instruction data (ALLaVA)
+ - 200K various VQA data
+
+ ## Evaluation dataset
+ A collection of 11 benchmarks, including general VQA benchmarks, text-oriented VQA benchmarks, and general multimodal benchmarks.
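
For readers who want to try the checkpoint described in this card, the sketch below shows one way it might be loaded. This is a minimal sketch, not part of the commit: it assumes the AVG-LLaVA codebase (https://github.com/DeepLearnXMU/AVG-LLaVA) keeps the upstream LLaVA-style `load_pretrained_model` helper, and the Hub model id used here is a hypothetical placeholder; check the repository's README for the exact entry point and checkpoint name.

```python
# Minimal loading sketch (assumption: AVG-LLaVA keeps the upstream LLaVA loader API).
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

# Hypothetical Hub id for the Stage3 checkpoint described in this card.
model_path = "zhibinlan/AVG-LLaVA-Stage3"

# Returns the tokenizer, the multimodal model, the image processor, and the context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,  # assumed to be a fully merged checkpoint, so no separate base LLM
    model_name=get_model_name_from_path(model_path),
)
```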