Update README.md
README.md (CHANGED)
@@ -16,9 +16,9 @@ Crystal-based models mimic the training recipe used for Vicuna 7B in LLaVA mul
 | LLM Backbone | MME-P | MME-C | POPE | SciQA | TextVQA |
 |-----------------------------------|---------|--------|-------|--------|---------|
-| CrystalCoder-7B | 1359.83 | 238.92 | 86.
+| CrystalCoder-7B | 1359.83 | 238.92 | 86.18 | 64.15 | 50.39 |
 | CrystalChat-7B | 1456.53 | **308.21** | 86.96 | 67.77 | **57.84** |
-| Vicuna-7B | **1481.12** | 302.85 | **87.
+| Vicuna-7B | **1481.12** | 302.85 | **87.17** | **67.97** | 56.49 |

 *Table: Comparison of different LLM backbones on visual language understanding benchmarks. All models are instruction-tuned on the general domain data (i.e. LLaVA)*