Seongyun committed 4863eb9 (1 parent: 94f4cf7): Create README.md

Files changed (1): README.md (+41, -0)

---
tags:
- image-to-text
- visual-question-answering
- image-captioning
datasets:
- kaist-ai/volcano-train
license: apache-2.0
language:
- en
pipeline_tag: image-to-text
library_name: transformers
---
## Links for Reference

- **Repository:**
- **Paper:**

# Overview
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550c4f27bbfce1878f5f280/AnqbCNf6pRiQ_5uNX0r4d.png)
Volcano uses a single LMM to generate the initial response, the feedback, and the revision, as well as the decision on whether to accept each revision, following an iterative critique-revision-decide loop.

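To make the loop concrete, below is a minimal sketch of the critique-revision-decide procedure in Python. The `generate` callable, the prompt templates, and the iteration budget are illustrative assumptions, not the exact prompts or settings used to train or run Volcano.

```python
from typing import Callable

def volcano_loop(
    generate: Callable[[str, object], str],  # wraps the LMM: (prompt, image) -> text
    image: object,
    question: str,
    max_iters: int = 3,  # assumed iteration budget, not taken from this card
) -> str:
    """Illustrative critique-revision-decide loop; prompts are hypothetical."""
    answer = generate(question, image)  # initial response
    for _ in range(max_iters):
        # Critique: the same model gives feedback on its own answer.
        feedback = generate(
            f"Question: {question}\nAnswer: {answer}\n"
            "Point out any errors or hallucinations in the answer.",
            image,
        )
        # Revise: rewrite the answer according to the self-generated feedback.
        revision = generate(
            f"Question: {question}\nAnswer: {answer}\nFeedback: {feedback}\n"
            "Revise the answer based on the feedback.",
            image,
        )
        # Decide: keep whichever of the two responses the model prefers.
        choice = generate(
            f"Question: {question}\nResponse A: {answer}\nResponse B: {revision}\n"
            "Which response answers the question better? Reply with A or B.",
            image,
        )
        if choice.strip().upper().startswith("B"):
            answer = revision  # revision accepted; iterate again
        else:
            break  # current answer preferred; stop revising
    return answer
```
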
# Model details

**Model type:**
Volcano-13b is a multimodal self-feedback guided revision model. It is built on [vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) and fine-tuned on a mixture of the visual instruction tuning dataset used in [LLaVA-v1.5](https://llava-vl.github.io/) and multimodal feedback and revision data collected with [gpt-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5) (see the inference sketch below).

**Model date:**
Volcano-13b was trained in October 2023.

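Since the card metadata lists `library_name: transformers` and `pipeline_tag: image-to-text`, a single generation call (one step of the loop above) might look like the sketch below. This is an untested assumption about the checkpoint layout: the repo id `kaist-ai/volcano-13b` is inferred from this card, and if the weights follow the original LLaVA-v1.5 format, the LLaVA codebase and its conversation template would be needed instead.

```python
# Hedged sketch: assumes the checkpoint works with the transformers
# image-to-text pipeline and that the repo id is kaist-ai/volcano-13b.
import requests
from PIL import Image
from transformers import pipeline

pipe = pipeline("image-to-text", model="kaist-ai/volcano-13b", device_map="auto")

url = "https://example.com/sample.jpg"  # placeholder image URL
image = Image.open(requests.get(url, stream=True).raw)

# LLaVA-v1.5-style prompt template (assumed, not confirmed by this card).
prompt = "USER: <image>\nDescribe this image in detail. ASSISTANT:"
print(pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 256}))
```
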
# Training dataset
- **274K multimodal feedback and revision data**
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.

The full training set, which combines all of the datasets listed above, is available [here](https://huggingface.co/datasets/kaist-ai/volcano-train).

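The combined training mixture can be pulled from the Hub with the `datasets` library, assuming the repo's files are in a format that `load_dataset` can auto-detect; split and column names are not documented on this card.

```python
# Sketch: download the combined Volcano training mixture from the Hub.
from datasets import load_dataset

ds = load_dataset("kaist-ai/volcano-train")  # dataset id taken from this card
print(ds)  # inspect available splits and columns

first_split = next(iter(ds))  # do not assume the split is named "train"
print(ds[first_split][0])
```
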
# Evaluation dataset
A collection of three multimodal hallucination benchmarks ([MMHal-Bench](https://huggingface.co/datasets/Shengcao1006/MMHal-Bench), [POPE](https://github.com/RUCAIBox/POPE), [GAVIE](https://github.com/FuxiaoLiu/LRV-Instruction)) and two multimodal understanding benchmarks ([MM-Vet](https://github.com/yuweihao/MM-Vet), [MMBench](https://github.com/open-compass/MMBench)).