Jason0214 committed
Commit f5d321d
1 Parent(s): a3abc41

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED

@@ -16,9 +16,9 @@ Crystal-based models mimics the training recipie used for Vicuna 7B in LLaVA mul
 
 | LLM Backbone | MME-P | MME-C | POPE | SciQA | TextVQA |
 |-----------------------------------|---------|--------|-------|--------|---------|
-| CrystalCoder-7B | 1359.83 | 238.92 | 86.182 | 64.15 | 50.39 |
+| CrystalCoder-7B | 1359.83 | 238.92 | 86.18 | 64.15 | 50.39 |
 | CrystalChat-7B | 1456.53 | **308.21** | 86.96 | 67.77 | **57.84** |
-| Vicuna-7B | **1481.12** | 302.85 | **87.174** | **67.97** | 56.49 |
+| Vicuna-7B | **1481.12** | 302.85 | **87.17** | **67.97** | 56.49 |
 
 *Table: Comparison of different LLM backbones on visual language understanding benchmarks. All models are instruction-tuned on the general domain data (i.e. LLaVA)*
 