Seongyun committed
Commit 9314773
1 Parent(s): 15b9e6a

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -13,7 +13,7 @@ Volcano-7b was trained in October 2023.
  **Paper or resources for more information:**
 
  # Training dataset
- - 274K multimodal feedback and revision data
+ - **274K multimodal feedback and revision data**
  - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
  - 158K GPT-generated multimodal instruction-following data.
  - 450K academic-task-oriented VQA data mixture.
@@ -22,4 +22,4 @@ Volcano-7b was trained in October 2023.
  You can find [here](https://huggingface.co/datasets/kaist-ai/volcano-train) the dataset used to train Volcano, which includes all the aforementioned datasets.
 
  # Evaluation dataset
-
+ A collection of three multimodal hallucination benchmarks ([MMHal-Bench](https://huggingface.co/datasets/Shengcao1006/MMHal-Bench), [Pope](https://github.com/RUCAIBox/POPE), [GAVIE](https://github.com/FuxiaoLiu/LRV-Instruction)) and two multimodal understanding benchmarks ([MM-Vet](https://github.com/yuweihao/MM-Vet), [MMBench](https://github.com/open-compass/MMBench)).
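
For reference, a minimal sketch of fetching the training data linked in the diff above. It is not part of the original README or commit; it assumes only that `kaist-ai/volcano-train` is a standard Hub dataset repository (its internal file layout is not specified here), so it just downloads the snapshot and lists what it contains.

```python
# Hedged sketch: pull the Volcano training data referenced in the README diff.
# Assumes nothing about the file names inside the repo; we download the
# dataset snapshot and print the files it ships.
from pathlib import Path

from huggingface_hub import snapshot_download

# Download (or reuse a cached copy of) the dataset repository.
local_dir = snapshot_download(repo_id="kaist-ai/volcano-train", repo_type="dataset")

# List the files in the snapshot so you can see how the data is organized.
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```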