MiniGPT-4 checkpoint aligned with @panopstor's FF7R dataset (link in the EveryDream discord). Produces captions that are more useful for training SD datasets than MiniGPT-4's default output.

The easiest way to use this is to launch a Docker instance for oobabooga/text-generation-webui (e.g. TheBloke/runpod-pytorch-runclick) and follow the instructions for MiniGPT-4 here. For now, you'll need to manually edit minigpt_pipeline.py (this line) to point to the .pth file in this repo instead of the default checkpoint.
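As a sketch of the kind of one-line change involved (the actual variable and file names in minigpt_pipeline.py may differ; every name below is hypothetical):

```python
# Sketch of the edit in minigpt_pipeline.py: swap the default MiniGPT-4
# checkpoint for the FF7R-aligned .pth from this repo.
# All paths and names here are hypothetical placeholders.
from pathlib import Path

DEFAULT_CKPT = Path("pretrained_minigpt4.pth")  # hypothetical default checkpoint
FF7R_CKPT = Path("checkpoint.pth")              # hypothetical name for this repo's .pth

def resolve_checkpoint(prefer_ff7r: bool = True) -> Path:
    """Return the checkpoint path to load, preferring the FF7R-aligned one."""
    return FF7R_CKPT if prefer_ff7r else DEFAULT_CKPT
```

In practice you would replace the hard-coded path the pipeline loads with the location where you downloaded this repo's .pth file.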

Dataset

Adapted from @panopstor's FF7R dataset - zip here

Sample output:
