---
datasets:
- liuhaotian/LLaVA-Instruct-150K
- liuhaotian/LLaVA-CC3M-Pretrain-595K
language:
- en
metrics:
- accuracy
pipeline_tag: visual-question-answering
---
# DinoV2-SigLIP-Phi3(LoRA) VLM

* **Vision Encoder** - DinoV2 + SigLIP @384px resolution. [Why 2 vision encoders?](https://arxiv.org/abs/2401.06209)
* **Connector** - MLP (DinoV2 and SigLIP features are concatenated, then projected into the Phi3 representation space; see the connector sketch below)
* **Language Model** - Phi3 + LoRA (see the LoRA sketch at the end of this card)
* **Pre-train (Align) Dataset** - LLaVA-CC3M-Pretrain-595K
* **Fine-tune (Instruction) Dataset** - LLaVA-v1.5-Instruct + LRV-Instruct
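
For illustration, a minimal PyTorch sketch of the connector described above. This is not the repo's exact code: the module name, the MLP depth, and the feature dimensions (DinoV2 1024-d, SigLIP 1152-d, Phi3 3072-d hidden size) are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class DualEncoderConnector(nn.Module):
    """Fuse DinoV2 and SigLIP patch features and project them into the
    language model's embedding space. All dimensions are illustrative."""
    def __init__(self, dino_dim=1024, siglip_dim=1152, lm_dim=3072):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dino_dim + siglip_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def forward(self, dino_feats, siglip_feats):
        # dino_feats:   (B, N, dino_dim)   patch tokens from DinoV2
        # siglip_feats: (B, N, siglip_dim) patch tokens from SigLIP
        fused = torch.cat([dino_feats, siglip_feats], dim=-1)  # (B, N, dino_dim + siglip_dim)
        return self.mlp(fused)  # (B, N, lm_dim): visual tokens fed to Phi3

connector = DualEncoderConnector()
# 729 patch tokens is an example count, not the model's actual sequence length.
visual_tokens = connector(torch.randn(2, 729, 1024), torch.randn(2, 729, 1152))
print(visual_tokens.shape)  # torch.Size([2, 729, 3072])
```

Concatenating along the channel dimension keeps one token per patch, so the visual sequence length fed to Phi3 stays the same as with a single encoder.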

Scripts to build and train the models are available at [NMS05/DinoV2-SigLIP-Phi3-LoRA-VLM](https://github.com/NMS05/DinoV2-SigLIP-Phi3-LoRA-VLM).
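
For context, a hedged sketch of attaching LoRA adapters to Phi3 with Hugging Face `peft`. The checkpoint name, rank, alpha, dropout, and target modules below are illustrative defaults, not the repo's training configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base LM; checkpoint name assumed for the example.
lm = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

lora_cfg = LoraConfig(
    r=16,                                   # adapter rank (assumed)
    lora_alpha=32,                          # LoRA scaling factor (assumed)
    target_modules=["qkv_proj", "o_proj"],  # Phi-3's fused attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
lm = get_peft_model(lm, lora_cfg)
lm.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```

With the backbone frozen this way, the trainable parameters are dominated by the connector MLP and the low-rank adapters.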