---
license: apache-2.0
---

# PVIT model

This repository contains the model weights for the paper [Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models](https://arxiv.org/abs/2308.13437).

## Model description

Position-enhanced Visual Instruction Tuning (PVIT) extends a multimodal large language model (MLLM) with an additional region-level vision encoder to support region-based inputs. Specifically, we adopt the vision encoder from RegionCLIP and use it to extract region-level features, taking images and regions as inputs. Incorporated this way as an additional source of information, the region-level features have minimal impact on the original MLLM. Moreover, because the features provided by RegionCLIP are already aligned with language at a fine-grained level, the overhead of aligning them to the MLLM is relatively small. Following [LLaVA](https://github.com/haotian-liu/LLaVA), we design a two-stage training strategy for PVIT: we first pre-train a linear projection that aligns the region features with the LLM word embeddings, then fine-tune end-to-end to follow complex fine-grained instructions.
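
To make the design concrete, here is a minimal PyTorch sketch of the stage-one idea. All module names and feature dimensions below are illustrative assumptions, not taken from the PVIT codebase: region features from a region-level encoder pass through a linear projection into the LLM word-embedding space and are concatenated with the prompt's word embeddings.

```python
import torch
import torch.nn as nn

class RegionProjector(nn.Module):
    """Linear projection from region-feature space to LLM embedding space.

    Hypothetical sketch: dimensions and names are illustrative, not taken
    from the PVIT implementation.
    """

    def __init__(self, region_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # Stage one of training tunes only this projection; the region
        # encoder and the LLM stay frozen.
        self.proj = nn.Linear(region_dim, llm_dim)

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # region_feats: (batch, num_regions, region_dim)
        return self.proj(region_feats)

projector = RegionProjector()
region_feats = torch.randn(1, 4, 1024)      # features for 4 image regions
region_tokens = projector(region_feats)     # (1, 4, 4096)
text_embeds = torch.randn(1, 32, 4096)      # word embeddings of the prompt
inputs_embeds = torch.cat([region_tokens, text_embeds], dim=1)
print(inputs_embeds.shape)                  # torch.Size([1, 36, 4096])
```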

For more details, please refer to our [paper](https://arxiv.org/abs/2308.13437) and [GitHub repo](https://github.com/THUNLP-MT/PVIT).

## How to use

The released weights must be applied on top of the original LLaMA weights to obtain the actual PVIT weights. See [here](https://github.com/THUNLP-MT/PVIT#pvit-weights) for instructions.
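
To illustrate what applying weights on top of LLaMA typically involves, here is a hedged sketch of merging LLaVA-style delta weights into a base checkpoint. The file paths, and the assumption that the release is in delta form, are illustrative only; follow the repo instructions linked above for the actual procedure.

```python
import torch

# Hypothetical paths; the real checkpoints and procedure are described in
# the PVIT repo (https://github.com/THUNLP-MT/PVIT#pvit-weights).
base = torch.load("llama-7b/pytorch_model.bin", map_location="cpu")
delta = torch.load("pvit-delta/pytorch_model.bin", map_location="cpu")

merged = {}
for name, delta_param in delta.items():
    if name in base:
        # Parameters shared with LLaMA are assumed stored as deltas:
        # add the base weights back.
        merged[name] = base[name] + delta_param
    else:
        # Parameters new in PVIT (e.g. the region projection) are kept as-is.
        merged[name] = delta_param

torch.save(merged, "pvit/pytorch_model.bin")
```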

## Intended use

Primary intended uses: The primary use of PVIT is research on large multimodal models and chatbots.

Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## BibTeX entry and citation info

```bibtex
@misc{chen2023positionenhanced,
  title={Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models},
  author={Chi Chen and Ruoyu Qin and Fuwen Luo and Xiaoyue Mi and Peng Li and Maosong Sun and Yang Liu},
  year={2023},
  eprint={2308.13437},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```