Vision-CAIR committed bf4036b (1 parent: b98ba2b)

Update README.md

Files changed (1): README.md (+11, -149)
README.md CHANGED
@@ -1,149 +1,11 @@
- # MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
- [Deyao Zhu](https://tsutikgiau.github.io/)* (On Job Market!), [Jun Chen](https://junchen14.github.io/)* (On Job Market!), [Xiaoqian Shen](https://xiaoqian-shen.github.io), Xiang Li, and Mohamed Elhoseiny. *Equal Contribution
-
- **King Abdullah University of Science and Technology**
-
- <a href='https://minigpt-4.github.io'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='MiniGPT_4.pdf'><img src='https://img.shields.io/badge/Paper-PDF-red'></a>
-
-
- ## Online Demo
-
- Click the image to chat with MiniGPT-4 about your images.
- [![demo](figs/online_demo.png)](https://minigpt-4.github.io)
-
-
- ## Examples
- | | |
- :-------------------------:|:-------------------------:
- ![find wild](figs/examples/wop_2.png) | ![write story](figs/examples/ad_2.png)
- ![solve problem](figs/examples/fix_1.png) | ![write poem](figs/examples/rhyme_1.png)
-
- More examples can be found on the [project page](https://minigpt-4.github.io).
-
-
-
- ## Introduction
- - MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with a frozen LLM, Vicuna, using just one projection layer.
- - We train MiniGPT-4 in two stages. The first, traditional pretraining stage trains on roughly 5 million aligned image-text pairs and takes about 10 hours on 4 A100s. After this stage, Vicuna is able to understand the image, but its generation ability is heavily impaired.
- - To address this issue and improve usability, we propose a novel way to create high-quality image-text pairs with the model itself and ChatGPT together. Based on this, we then create a small (3,500 pairs in total) yet high-quality dataset.
- - The second finetuning stage trains on this dataset in a conversation template to significantly improve the model's generation reliability and overall usability. To our surprise, this stage is computationally efficient and takes only around 7 minutes on a single A100.
- - MiniGPT-4 yields many emerging vision-language capabilities similar to those demonstrated in GPT-4.
-
-
- ![overview](figs/overview.png)
-
-
-
-
- ## Getting Started
- ### Installation
-
- **1. Prepare the code and the environment**
-
- Clone our repository, create a Python environment, and activate it with the following commands:
-
- ```bash
- git clone https://github.com/Vision-CAIR/MiniGPT-4.git
- cd MiniGPT-4
- conda env create -f environment.yml
- conda activate minigpt4
- ```
-
-
- **2. Prepare the pretrained Vicuna weights**
-
- The current version of MiniGPT-4 is built on the v0 version of Vicuna-13B.
- Please refer to their instructions [here](https://huggingface.co/lmsys/vicuna-13b-delta-v0) to obtain the weights.
- The final weights should be in a single folder with the following structure:
-
- ```
- vicuna_weights
- ├── config.json
- ├── generation_config.json
- ├── pytorch_model.bin.index.json
- ├── pytorch_model-00001-of-00003.bin
- ...
- ```
-
- Then, set the path to the Vicuna weights in the model config file
- [here](minigpt4/configs/models/minigpt4.yaml#L16) at Line 16.
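-
- As a rough sketch (the official procedure is in the Vicuna instructions linked above), the v0 delta can usually be applied with FastChat once you have the original LLaMA-13B weights in Hugging Face format; all paths below are placeholders:
-
- ```bash
- # Placeholder paths; assumes LLaMA-13B weights already converted to Hugging Face format.
- pip install fschat
- python -m fastchat.model.apply_delta \
-   --base /path/to/llama-13b-hf \
-   --target /path/to/vicuna_weights \
-   --delta lmsys/vicuna-13b-delta-v0
- ```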
-
- **3. Prepare the pretrained MiniGPT-4 checkpoint**
-
- To play with our pretrained model, download the pretrained checkpoint
- [here](https://drive.google.com/file/d/1a4zLvaiDBr-36pasffmgpvH5P7CKmpze/view?usp=share_link).
- Then, set the path to the pretrained checkpoint in the evaluation config file
- [eval_configs/minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#L10) at Line 10.
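-
- As a minimal sketch (the `gdown` tool and the output filename are assumptions, not part of this repository), the checkpoint can be fetched from the Google Drive link above and then referenced in the evaluation config:
-
- ```bash
- # Download the checkpoint; the file ID is taken from the Google Drive link above.
- pip install gdown
- gdown "https://drive.google.com/uc?id=1a4zLvaiDBr-36pasffmgpvH5P7CKmpze" -O pretrained_minigpt4.pth
- # Then point the checkpoint entry at Line 10 of eval_configs/minigpt4_eval.yaml to this file.
- ```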
-
-
-
- ### Launching Demo Locally
-
- Try out our demo [demo.py](demo.py) on your local machine by running
-
- ```bash
- python demo.py --cfg-path eval_configs/minigpt4_eval.yaml
- ```
-
-
-
- ### Training
- The training of MiniGPT-4 contains two alignment stages.
-
- **1. First pretraining stage**
-
- In the first pretraining stage, the model is trained on image-text pairs from the LAION and CC datasets
- to align the visual features with the language model. To download and prepare the datasets, please check
- our [first stage dataset preparation instruction](dataset/README_1_STAGE.md).
- After the first stage, the visual features are mapped into a space the language model can understand.
- To launch the first stage training, run the following command. In our experiments, we use 4 A100s.
- You can change the save path in the config file
- [train_configs/minigpt4_stage1_pretrain.yaml](train_configs/minigpt4_stage1_pretrain.yaml).
-
- ```bash
- torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
- ```
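-
- For example, to match the 4 A100 setup described above (NUM_GPU is simply the number of GPUs on your machine; the save path is whatever you set in the config file):
-
- ```bash
- # Example launch with 4 GPUs.
- torchrun --nproc-per-node 4 train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
- ```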
-
- **2. Second finetuning stage**
-
- In the second stage, we use a small, high-quality image-text pair dataset that we created ourselves
- and convert it to a conversation format to further align MiniGPT-4.
- To download and prepare our second stage dataset, please check our
- [second stage dataset preparation instruction](dataset/README_2_STAGE.md).
- To launch the second stage alignment,
- first specify the path to the checkpoint file trained in stage 1 in
- [train_configs/minigpt4_stage2_finetune.yaml](train_configs/minigpt4_stage2_finetune.yaml).
- You can also specify the output path there.
- Then, run the following command. In our experiments, we use 1 A100.
-
- ```bash
- torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
- ```
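-
- As a minimal sketch (the exact name of the checkpoint field in the stage-2 config is an assumption; check the file itself), point the finetuning config at your stage-1 checkpoint and launch on a single GPU:
-
- ```bash
- # In train_configs/minigpt4_stage2_finetune.yaml, first set the stage-1 checkpoint path, e.g.:
- #   ckpt: "/path/to/stage1_checkpoint.pth"   # field name assumed
- # Then run on 1 GPU, matching the setup described above.
- torchrun --nproc-per-node 1 train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
- ```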
-
- After the second stage alignment, MiniGPT-4 is able to talk about images coherently and in a user-friendly way.
-
-
-
-
- ## Acknowledgement
-
- + [BLIP2](https://huggingface.co/docs/transformers/main/model_doc/blip-2) The model architecture of MiniGPT-4 follows BLIP-2. Don't forget to check out this great open-source work if you don't know it already!
- + [Lavis](https://github.com/salesforce/LAVIS) This repository is built upon Lavis!
- + [Vicuna](https://github.com/lm-sys/FastChat) The fantastic language ability of Vicuna, with only 13B parameters, is just amazing. And it is open source!
-
-
- If you're using MiniGPT-4 in your research or applications, please cite using this BibTeX:
- ```bibtex
- @misc{zhu2022minigpt4,
- title={MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models},
- author={Deyao Zhu and Jun Chen and Xiaoqian Shen and Xiang Li and Mohamed Elhoseiny},
- year={2023},
- }
- ```
-
-
- ## License
- This repository is under the [BSD 3-Clause License](LICENSE.md).
- Much of the code is based on [Lavis](https://github.com/salesforce/LAVIS), which is under the
- BSD 3-Clause License [here](LICENSE_Lavis.md).
 
+ ---
+ title: MiniGPT-v2
+ emoji: 🚀
+ colorFrom: green
+ colorTo: gray
+ sdk: gradio
+ sdk_version: 3.27.0
+ app_file: app.py
+ pinned: false
+ license: other
+ ---