---
{}
---
[![CODE](https://img.shields.io/badge/GitHub-Repository-blue)](https://github.com/bfshi/scaling_on_scales)

# When Do We Not Need Larger Vision Models?

## Model

This is a LLaVA-v1.5-13b model trained with [S<sup>2</sup>-Wrapper](https://github.com/bfshi/scaling_on_scales), a simple approach to enable any vision model to perceive high-resolution images. We use image resolutions of up to 1008x1008 for this model.
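
For intuition, below is a minimal PyTorch sketch of the multi-scale idea behind S<sup>2</sup>: interpolate the image to several scales, split each scaled image into crops at the encoder's native resolution, encode every crop with the same (frozen) model, stitch and pool the resulting feature maps, and concatenate them across scales. The function name, the spatial `(B, C, h, w)` feature layout, and the 336-pixel base size (1008 = 3 × 336) are illustrative assumptions; the actual implementation and API live in the [scaling_on_scales](https://github.com/bfshi/scaling_on_scales) repository.

```python
import torch
import torch.nn.functional as F


def s2_multiscale_features(encode, image, scales=(1, 2, 3), base_size=336):
    """Hypothetical sketch of the S^2 idea: run a fixed-resolution vision
    encoder on several image scales and concatenate the pooled features.

    `encode` maps a (B, 3, base_size, base_size) batch to (B, C, h, w) feature maps.
    """
    b, c_in = image.shape[0], image.shape[1]
    outputs = []
    for s in scales:
        size = base_size * s
        # Resize the image to the current scale (e.g. 336, 672, 1008 pixels).
        scaled = F.interpolate(image, size=(size, size), mode="bilinear", align_corners=False)
        # Split into s x s crops, each at the encoder's native input resolution.
        crops = scaled.unfold(2, base_size, base_size).unfold(3, base_size, base_size)
        crops = crops.permute(0, 2, 3, 1, 4, 5).reshape(b * s * s, c_in, base_size, base_size)
        # Encode every crop independently with the same (frozen) vision model.
        feats = encode(crops)  # (b*s*s, C, h, w)
        c, h, w = feats.shape[1:]
        # Stitch the crop feature maps back into one large map ...
        feats = feats.reshape(b, s, s, c, h, w).permute(0, 3, 1, 4, 2, 5).reshape(b, c, s * h, s * w)
        # ... then pool back to the base feature-map size so every scale aligns.
        feats = F.interpolate(feats, size=(h, w), mode="area")
        outputs.append(feats)
    # Concatenate along channels: (b, C * len(scales), h, w), same token count as a single scale.
    return torch.cat(outputs, dim=1)
```

With a 336-pixel backbone and scales (1, 2, 3), the encoder effectively sees a 1008x1008 input, while the output keeps the spatial size of a single 336-pixel pass and only grows in the channel dimension.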

## Training

The training pipeline and dataset completely follow [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA/tree/main). We use LoRA to fine-tune the model.
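
For reference, a rough sketch of what a LoRA setup looks like with Hugging Face `peft` is shown below. This is not the actual LLaVA training script, and the rank, alpha, and target modules are assumptions standing in for the LLaVA-v1.5 LoRA recipe rather than the exact values used for this checkpoint.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative hyperparameters only; the real run follows the LLaVA-v1.5 LoRA recipe.
lora_config = LoraConfig(
    r=128,                     # LoRA rank (assumed)
    lora_alpha=256,            # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)

# Language-model side only, for illustration; LLaVA-v1.5-13B builds on Vicuna-13B-v1.5.
model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-13b-v1.5")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trained
```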

## Benchmarking

| Version | Size | Schedule | Checkpoint | VQAv2 | VizWiz | TextVQA | MMMU-val | MathVista | MM-Bench | SEED | MM-Vet |
|----------|----------|-----------|-----------|---|---|---|---|---|---|---|---|
| LLaVA-1.5 | 13B | full_ft-1e | [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) | 80.0 | 53.6 | 61.3 | 36.4 | 27.6 | 67.7 | 68.2 | 36.1 |
| LLaVA-1.5 | 13B | lora-1e | [liuhaotian/llava-v1.5-13b-lora](https://huggingface.co/liuhaotian/llava-v1.5-13b-lora) | 80.0 | 58.9 | 60.2 | - | - | 68.5 | - | 38.3 |
| LLaVA-1.5-S2 | 13B | lora-1e | this model | **80.9** | 56.0 | **63.1** | **37.4** | **27.8** | 67.9 | **68.9** | 36.4 |

## License

Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.