alfredplpl committed
Commit: 16ea1ea
1 Parent(s): 88c5332
Commit message: first

Files changed:
- README.md (+1, -1)
- README_en.md (+64, -2)
README.md CHANGED

@@ -89,7 +89,7 @@ pip install --upgrade git+https://github.com/huggingface/diffusers.git transform
 from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
 import torch

-model_id = "aipicasso/cool-japan-diffusion-2-1-
+model_id = "aipicasso/cool-japan-diffusion-2-1-1-beta"

 scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
 pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
README_en.md CHANGED

@@ -1,4 +1,4 @@
-# Cool Japan Diffusion 2.1.
+# Cool Japan Diffusion 2.1.1 Beta Model Card

 ![アイキャッチ](eyecatch.jpg)

@@ -16,4 +16,66 @@ TBA.
 # Usage
 You can try the model in our [Space](https://huggingface.co/spaces/alfredplpl/cool-japan-diffusion-2-1-0).
 I recommend using the model through the Web UI.
-You can download the model [here](https://huggingface.co/aipicasso/cool-japan-diffusion-2-1-0/resolve/main/v2-1-0.ckpt).
+You can download the model [here](https://huggingface.co/aipicasso/cool-japan-diffusion-2-1-0/resolve/main/v2-1-0.ckpt).
+
+## Model Details
+- **Developed by:** Robin Rombach, Patrick Esser, Alfred Increment
+- **Model type:** Diffusion-based text-to-image generation model
+- **Language(s):** English
+- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
+- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
+- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
+- **Cite as:**
+
+        @InProceedings{Rombach_2022_CVPR,
+            author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
+            title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
+            booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
+            month     = {June},
+            year      = {2022},
+            pages     = {10684-10695}
+        }
+
+## Examples
+
+- Web UI
+- Diffusers
+
+## Web UI
+Download the model [here]().
+Then, install the [Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) by AUTOMATIC1111.
+
+## Diffusers
+
+Use the [🤗 Diffusers library](https://github.com/huggingface/diffusers) to run Cool Japan Diffusion 2.1.1 Beta in a simple and efficient manner. First, install the required packages:
+
+```bash
+pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
+```
+
+Run the pipeline (if you don't swap the scheduler, it will run with the default DDIM; in this example we swap it to EulerDiscreteScheduler):
+
+```python
+from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
+import torch
+
+model_id = "aipicasso/cool-japan-diffusion-2-1-1-beta"
+
+scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
+pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
+pipe = pipe.to("cuda")
+
+prompt = "anime, a portrait of a girl with black short hair and red eyes, kimono, full color illustration, official art, 4k, detailed"
+negative_prompt = "low quality, bad face, bad anatomy, bad hand, lowres, jpeg artifacts, 2d, 3d, cg, text"
+image = pipe(prompt, negative_prompt=negative_prompt).images[0]
+
+image.save("girl.png")
+```
+
+**Notes**:
+- Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance).
+- If you have low GPU RAM available, make sure to add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda` to reduce VRAM usage (at the cost of speed).
+
+
+*This model card was written by Alfred Increment and is based on the [Stable Diffusion v2](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md).*
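
The Usage and Web UI sections above only give a direct download link for the checkpoint. As a supplementary sketch (not part of the committed README), the same file can be fetched programmatically with `huggingface_hub`; the repo id and filename below are taken from that link, and the Web UI destination folder is an assumption based on AUTOMATIC1111's standard layout.

```python
# Sketch: fetch the checkpoint referenced in the Usage section.
# Assumes huggingface_hub is installed; repo id and filename come from the README's download link.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="aipicasso/cool-japan-diffusion-2-1-0",
    filename="v2-1-0.ckpt",
)

# For the AUTOMATIC1111 Web UI, the file is typically copied into
# stable-diffusion-webui/models/Stable-diffusion/ before launching the UI.
print(ckpt_path)
```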
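The notes in the Diffusers section mention xformers and attention slicing; the following is a minimal sketch of how those options attach to the pipeline from the README's Python example. It assumes the same `pipe` setup as in that example, and `enable_xformers_memory_efficient_attention()` only works if xformers is installed.

```python
# Sketch: memory options from the notes, applied to the README's pipeline setup.
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

model_id = "aipicasso/cool-japan-diffusion-2-1-1-beta"
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Lower VRAM usage at some cost in speed (useful on low-memory GPUs).
pipe.enable_attention_slicing()

# Optional: memory-efficient attention; requires xformers to be installed.
try:
    pipe.enable_xformers_memory_efficient_attention()
except Exception:
    pass  # xformers not available; the pipeline still runs without it

image = pipe("anime, a portrait of a girl with black short hair and red eyes, kimono").images[0]
image.save("girl.png")
```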