
Getting Started


1. Prepare the code and the environment

Git clone our repository, create a Python environment, and activate it via the following commands:

git clone https://github.com/DLYuanGod/ArtGPT-4.git
cd ArtGPT-4
conda env create -f environment.yml
conda activate artgpt4
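Once the environment is active, a quick sanity check is to probe for the key dependencies from Python. This is only a sketch; the exact package list comes from environment.yml, and the names below (torch, transformers) are assumptions about what that file installs.

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# After `conda activate artgpt4`, this should come back empty:
print(missing_packages(["torch", "transformers"]))
```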

2. Prepare the pretrained Vicuna weights

The current version of MiniGPT-4 is built on the v0 version of Vicuna-13B. Please refer to our instructions here to prepare the Vicuna weights. The final weights should sit in a single folder with a structure similar to the following:

├── config.json
├── generation_config.json
├── pytorch_model.bin.index.json
├── pytorch_model-00001-of-00003.bin

Then, set the path to the vicuna weight in the model config file here at Line 16.
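Before editing the config, it can help to confirm the weight folder actually contains the expected files. The helper below is a hypothetical convenience, not part of the repository; the file list is taken from the structure shown above.

```python
import os

def check_vicuna_weights(weight_dir):
    """Return the expected weight-folder files that are missing.

    An empty list means the folder looks complete enough to
    point the model config at.
    """
    expected = [
        "config.json",
        "generation_config.json",
        "pytorch_model.bin.index.json",
    ]
    return [f for f in expected
            if not os.path.exists(os.path.join(weight_dir, f))]
```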

3. Prepare the pretrained ArtGPT-4 checkpoint: Download

Then, set the path to the pretrained checkpoint in the evaluation config file in eval_configs/minigpt4_eval.yaml at Line 11.
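For illustration, the checkpoint entry in eval_configs/minigpt4_eval.yaml looks roughly like this. The key name and its position are assumptions based on MiniGPT-4's config layout; check your copy of the file and keep the path pointing at the checkpoint you downloaded.

```yaml
model:
  # path to the pretrained ArtGPT-4 checkpoint (illustrative)
  ckpt: '/path/to/pretrained/artgpt4_checkpoint.pth'
```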

Launching Demo Locally

Try out our demo (demo.py) on your local machine by running:

python demo.py --cfg-path eval_configs/artgpt4_eval.yaml  --gpu-id 0


Training

The training of ArtGPT-4 consists of two alignment stages. The training process for each stage is consistent with that of MiniGPT-4.

Datasets

We use Laion-aesthetic from the LAION-5B dataset, which amounts to approximately 200 GB for the first 302 tar files.
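For orientation, the snippet below enumerates what the first 302 tar files would be named under LAION's usual five-digit shard naming. That naming convention is an assumption about the release layout, not something stated in this repository.

```python
# First 302 LAION-aesthetic shards, assuming five-digit zero-padded names.
shards = [f"{i:05d}.tar" for i in range(302)]
print(shards[0], shards[-1], len(shards))  # 00000.tar 00301.tar 302
```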


Acknowledgement

  • MiniGPT-4: Our work is based on improvements to this model.


This repository is under the BSD 3-Clause License. Much of the code is based on Lavis, which is also released under a BSD 3-Clause License.
