---
license: creativeml-openrail-m
language:
- en
tags:
- text-to-image
- midjourney
- stable-diffusion
- disco-diffusion
- art
- arxiv:2208.12242
inference: true
library_name: diffusers
---
## Paint Journey V2 is [Paint Journey V1](https://huggingface.co/FredZhang7/paint-journey-v1) fine-tuned on 768x768 oil paintings by Midjourney, Open Journey V2, and Disco Diffusion
*The examples are being redone now that I've discovered how much more Paint Journey V2 can do.
The model produces its most striking pieces with descriptive positive and negative prompts,
although it can also paint beautiful landscapes from short prompts.*
Begin the prompt with **((oil painting))** to add the oil paint effect. For digital and other painting styles, write prompts much as you would for Midjourney (with some tweaks), Stable Diffusion v1.5 (add more style keywords), Open Journey V2, or Disco Diffusion.
Paint Journey V2's paintings are closer to human-drawn art than Open Journey V2's.
Compared to models like Dreamlike Diffusion 1.0, this model tends to generate images at 768x768 or higher resolution with less noise.
It can also generate stunning portraits at 768x1144 without duplicated faces (with [Camenduru's WebUI](https://github.com/camenduru/stable-diffusion-webui)), a task that is difficult for models like DreamShaper 3.3.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AMLA-UBC/100-Exploring-the-World-of-Modern-Machine-Learning/blob/main/assets/PaintJourneyV2.ipynb)
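To make the prompt advice above concrete, here is a minimal sketch using the `diffusers` pipeline (it assumes `diffusers` and `torch` are already installed; see the Diffusers section below). The positive/negative prompt wording and the sampler settings are illustrative choices, not official recommendations:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "FredZhang7/paint-journey-v2", torch_dtype=torch.float16
).to("cuda")

# descriptive positive prompt, starting with ((oil painting)) for the oil paint effect
prompt = "((oil painting)), portrait of a woman in a flower field at sunset, intricate details, soft lighting, 4k, uhd"
# descriptive negative prompt to steer away from common artifacts (wording is illustrative)
negative_prompt = "lowres, blurry, deformed, duplicate, bad anatomy, extra limbs, watermark, text"

# 768x1144 portrait, as mentioned above; width and height must be multiples of 8
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=768,
    height=1144,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("portrait.png")
```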
## Training
Instead of fine-tuning only its UNet, Paint Journey V2 focuses on fine-tuning its text encoder with a diverse range of prompts.
This lets the digital and oil painting styles blend seamlessly into many other kinds of prompts, producing more natural and dynamic output.
This model was trained on a curated dataset of roughly 300 images hand-picked from Midjourney, [Prompt Hero](https://prompthero.com/), Open Journey V2, and Reddit.
Before training, I used R-ESRGAN 4x on many images to increase their resolution and reduce noise.
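As a rough illustration of what joint UNet + text-encoder fine-tuning looks like in a `diffusers`-style training loop (this is a simplified sketch under assumed settings, including the base model name, and not the actual script used to train this model):

```python
# Simplified sketch: both the UNet AND the text encoder receive gradients.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

base = "runwayml/stable-diffusion-v1-5"  # placeholder base model for illustration
tokenizer = CLIPTokenizer.from_pretrained(base, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(base, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(base, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet")
noise_scheduler = DDPMScheduler.from_pretrained(base, subfolder="scheduler")

vae.requires_grad_(False)  # the VAE stays frozen
optimizer = torch.optim.AdamW(
    list(unet.parameters()) + list(text_encoder.parameters()), lr=1e-6
)

def training_step(pixel_values, captions):
    # encode images to latents and captions to text embeddings
    latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor
    ids = tokenizer(captions, padding="max_length", truncation=True,
                    max_length=tokenizer.model_max_length, return_tensors="pt").input_ids
    encoder_hidden_states = text_encoder(ids)[0]

    # standard noise-prediction objective
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],))
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
    pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample

    loss = F.mse_loss(pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss
```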
To further improve resolution and reduce noise in generated images, especially when using the model for img2img, use the [Paint Journey VAE](./paint_journey_v2.vae.pt) together with the [checkpoint](./paint_journey_v2.ckpt).
For example, an Automatic1111 WebUI user can place both files in the `./stable-diffusion-webui/models/Stable-diffusion` folder.
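For `diffusers` users, one way to pair the two files is to load the VAE separately and pass it to the pipeline. This is a hedged sketch that assumes a recent `diffusers` release whose `AutoencoderKL.from_single_file` can read the `.vae.pt` checkpoint:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Load the standalone VAE (assumes from_single_file accepts this .vae.pt file;
# convert it to .safetensors first if your diffusers version cannot read it).
vae = AutoencoderKL.from_single_file("paint_journey_v2.vae.pt", torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_pretrained(
    "FredZhang7/paint-journey-v2", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("((oil painting)), misty mountain lake at dawn").images[0]
image.save("lake.png")
```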
## Examples
*Releasing soon*
## Automatic1111's WebUI
```bash
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
```
Download [paint_journey_v2.ckpt](./paint_journey_v2.ckpt) to the `./stable-diffusion-webui/models/Stable-diffusion` folder, then run `webui-user.bat`.
## Diffusers
```bash
pip install --upgrade diffusers
```
```python
from diffusers import StableDiffusionPipeline
import torch
# load in half precision to reduce VRAM usage (remove torch_dtype for full precision)
pipe = StableDiffusionPipeline.from_pretrained("FredZhang7/paint-journey-v2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
# Use Prompt Hero for ideas of descriptive (positive) prompts
prompt = "((oil painting)), a boat sailing, night sky, high resolution, uhd, 4 k wallpaper"
image = pipe(prompt).images[0]
image.save("./result.png")
```
## Safety Checker V2
The official [stable diffusion safety checker](https://huggingface.co/CompVis/stable-diffusion-safety-checker) uses 1.22 GB of VRAM.
I recommend using [Google Safesearch Mini V2](https://huggingface.co/FredZhang7/google-safesearch-mini-v2) (220 MB) instead, saving about 1.0 GB of VRAM.
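A minimal sketch of the swap: disable the built-in checker when loading the pipeline, then screen the generated image with the external classifier. How exactly Google Safesearch Mini V2 is loaded depends on that model's own card, so the screening step is only outlined in a comment:

```python
from diffusers import StableDiffusionPipeline

# Drop the built-in safety checker to free its ~1.22 GB of VRAM.
pipe = StableDiffusionPipeline.from_pretrained(
    "FredZhang7/paint-journey-v2", safety_checker=None, requires_safety_checker=False
).to("cuda")

image = pipe("((oil painting)), a castle on a cliff at sunset").images[0]

# Screen `image` with Google Safesearch Mini V2 before displaying or saving it;
# see https://huggingface.co/FredZhang7/google-safesearch-mini-v2 for its loading
# and inference instructions.
image.save("castle.png")
```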