# MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
[Deyao Zhu](https://tsutikgiau.github.io/)* (On Job Market!), [Jun Chen](https://junchen14.github.io/)* (On Job Market!), [Xiaoqian Shen](https://xiaoqian-shen.github.io), Xiang Li, and Mohamed Elhoseiny. *Equal Contribution

**King Abdullah University of Science and Technology**

<a href='https://minigpt-4.github.io'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='MiniGPT_4.pdf'><img src='https://img.shields.io/badge/Paper-PDF-red'></a>

## Online Demo
Click the image to chat with MiniGPT-4 about your images

[![demo](figs/online_demo.png)](https://minigpt-4.github.io)
## Examples
|   |   |
|:-------------------------:|:-------------------------:|
| ![find wild](figs/examples/wop_2.png) | ![write story](figs/examples/ad_2.png) |
| ![solve problem](figs/examples/fix_1.png) | ![write Poem](figs/examples/rhyme_1.png) |
More examples can be found on the [project page](https://minigpt-4.github.io).
## Introduction
- MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with the frozen LLM Vicuna using just one projection layer.
- We train MiniGPT-4 in two stages. The first, traditional pretraining stage uses roughly 5 million aligned image-text pairs and takes about 10 hours on 4 A100s. After this stage, Vicuna can understand the image, but its generation ability is heavily degraded.
- To address this issue and improve usability, we propose a novel way to create high-quality image-text pairs with the model itself and ChatGPT together. Based on this, we then create a small (3,500 pairs in total) yet high-quality dataset.
- The second finetuning stage trains on this dataset with a conversation template to significantly improve the model's generation reliability and overall usability. To our surprise, this stage is computationally efficient and takes only around 7 minutes on a single A100.
- MiniGPT-4 yields many emerging vision-language capabilities similar to those demonstrated in GPT-4.
![overview](figs/overview.png)

## Getting Started
### Installation

**1. Prepare the code and the environment**
Git clone our repository, create a Python environment, and activate it via the following commands:
```bash
git clone https://github.com/Vision-CAIR/MiniGPT-4.git
cd MiniGPT-4
conda env create -f environment.yml
conda activate minigpt4
```
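As an optional sanity check (not part of the official setup), you can confirm that the new environment has a CUDA-enabled PyTorch build before moving on:

```bash
# Optional: verify that PyTorch in the minigpt4 environment can see a GPU
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```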
**2. Prepare the pretrained Vicuna weights**
The current version of MiniGPT-4 is built on the v0 version of Vicuna-13B.
Please refer to their instructions [here](https://huggingface.co/lmsys/vicuna-13b-delta-v0) to obtain the weights.
The final weights should be in a single folder with the following structure:
```
vicuna_weights
├── config.json
├── generation_config.json
├── pytorch_model.bin.index.json
├── pytorch_model-00001-of-00003.bin
...
```
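Vicuna v0 is distributed as delta weights, so they have to be merged with the original LLaMA-13B weights first. The sketch below uses FastChat's `apply_delta` tool; the exact flag names depend on your FastChat version and the local paths are placeholders, so treat it as a starting point rather than the definitive command:

```bash
# Sketch: merge the Vicuna-13B v0 delta into the original LLaMA-13B weights.
# Flag names may differ across FastChat versions; local paths are placeholders.
pip install fschat
python -m fastchat.model.apply_delta \
    --base /path/to/llama-13b-hf \
    --target /path/to/vicuna_weights \
    --delta lmsys/vicuna-13b-delta-v0
```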
Then, set the path to the Vicuna weights in the model config file
[here](minigpt4/configs/models/minigpt4.yaml#L16) at Line 16.
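For example, assuming the config key holding this path is `llama_model` (open the file to confirm the exact key), the edit can be scripted as:

```bash
# Assumes the key at Line 16 of minigpt4.yaml is 'llama_model'; adjust the placeholder path
sed -i 's#llama_model:.*#llama_model: "/path/to/vicuna_weights"#' minigpt4/configs/models/minigpt4.yaml
```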
**3. Prepare the pretrained MiniGPT-4 checkpoint**

To play with our pretrained model, download the pretrained checkpoint
[here](https://drive.google.com/file/d/1a4zLvaiDBr-36pasffmgpvH5P7CKmpze/view?usp=share_link).
Then, set the path to the pretrained checkpoint in the evaluation config file
in [eval_configs/minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#L10) at Line 10.
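If you prefer the command line, one way (not part of the official instructions) is the third-party `gdown` tool with the Google Drive file ID from the link above; the output filename below is just a chosen placeholder:

```bash
# Optional: fetch the Google Drive checkpoint from the command line with gdown
pip install gdown
gdown 1a4zLvaiDBr-36pasffmgpvH5P7CKmpze -O pretrained_minigpt4.pth
```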
### Launching Demo Locally

Try out our demo [demo.py](demo.py) on your local machine by running

```
python demo.py --cfg-path eval_configs/minigpt4_eval.yaml
```
### Training

The training of MiniGPT-4 contains two alignment stages.

**1. First pretraining stage**
In the first pretraining stage, the model is trained on image-text pairs from the LAION and CC datasets
to align the vision and language models. To download and prepare the datasets, please check
our [first stage dataset preparation instruction](dataset/README_1_STAGE.md).
After the first stage, the visual features are mapped so that the language model can understand them.
To launch the first stage training, run the following command. In our experiments, we use 4 A100s.
You can change the save path in the config file
[train_configs/minigpt4_stage1_pretrain.yaml](train_configs/minigpt4_stage1_pretrain.yaml).
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
```
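For instance, with the 4-GPU setup used in our experiments, `NUM_GPU` is simply 4:

```bash
# Example: first-stage pretraining with one process per GPU on 4 GPUs
torchrun --nproc-per-node 4 train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
```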
**2. Second finetuning stage**
In the second stage, we use a small, high-quality image-text pair dataset created by ourselves
and convert it into a conversation format to further align MiniGPT-4.
To download and prepare our second stage dataset, please check our
[second stage dataset preparation instruction](dataset/README_2_STAGE.md).
To launch the second stage alignment,
first specify the path to the checkpoint file trained in stage 1 in
[train_configs/minigpt4_stage2_finetune.yaml](train_configs/minigpt4_stage2_finetune.yaml).
You can also specify the output path there.
Then, run the following command. In our experiments, we use 1 A100.
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
```
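With the single-A100 setup mentioned above, this reduces to:

```bash
# Example: second-stage finetuning on a single GPU
torchrun --nproc-per-node 1 train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
```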
After the second stage alignment, MiniGPT-4 is able to talk about the image coherently and in a user-friendly way.
## Acknowledgement

+ [BLIP2](https://huggingface.co/docs/transformers/main/model_doc/blip-2) The model architecture of MiniGPT-4 follows BLIP-2. Don't forget to check out this great open-source work if you haven't seen it before!
+ [Lavis](https://github.com/salesforce/LAVIS) This repository is built upon Lavis!
+ [Vicuna](https://github.com/lm-sys/FastChat) The fantastic language ability of Vicuna, with only 13B parameters, is just amazing. And it is open-source!
If you're using MiniGPT-4 in your research or applications, please cite using this BibTeX:

```bibtex
@misc{zhu2023minigpt4,
      title={MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models},
      author={Deyao Zhu and Jun Chen and Xiaoqian Shen and Xiang Li and Mohamed Elhoseiny},
      year={2023},
}
```
## License

This repository is under the [BSD 3-Clause License](LICENSE.md).
Much of the code is based on [Lavis](https://github.com/salesforce/LAVIS), which is
licensed under the BSD 3-Clause License [here](LICENSE_Lavis.md).