from diffusers import StableDiffusionPipeline
import torch
work_around_for_hugging_face_gradio_sdk_bug = "/blob/main/rem_3k.ckpt"
model_url = "https://huggingface.co/waifu-research-department/Rem" + work_around_for_hugging_face_gradio_sdk_bug
pipeline = StableDiffusionPipeline.from_single_file(
model_url,
torch_dtype=torch.float16,
)
import gradio as gr
description="""
# running stable diffusion from a ckpt file
## NOTICE ⚠️:
- this space does not work right now because it needs a GPU, feel free to **clone this space** and set up your own with a GPU and meet your waifu **ヽ(≧□≦)ノ**
if you do not have money (just like me **(┬┬﹏┬┬)** ) you can always:
* **run the code on your PC** if you have a good GPU and a good internet connection (you only download the AI model once)
* **run the model in the cloud** (Colab and Kaggle are good alternatives, and they have pretty good internet connections)
### minimalistic code to run a ckpt model
* enable the GPU (click runtime, then change runtime type)
* install the following libraries
```
!pip install -q diffusers gradio omegaconf
```
* **restart your kernel** 👈 (click runtime, then click restart session)
* run the following code
```python
from diffusers import StableDiffusionPipeline
import torch
pipeline = StableDiffusionPipeline.from_single_file(
"https://huggingface.co/waifu-research-department/Rem/blob/main/rem_3k.ckpt", # put your model url here
torch_dtype=torch.float16,
).to("cuda")
positive_prompt = "anime girl prompt here"  # 👈 change this
negative_prompt = "3D"  # 👈 things you hate here
image = pipeline(positive_prompt, negative_prompt=negative_prompt).images[0]
image  # your image is stored in this PIL variable
```
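the pipeline returns a standard PIL image, so saving it to disk is one line. a minimal sketch (the `Image.new` stand-in is only here so the snippet runs on its own; in the code above, `image` is what `pipeline(...).images[0]` gives you):
```python
from PIL import Image

# stand-in image so this snippet is self-contained; with the pipeline,
# `image` is the PIL image returned by pipeline(...).images[0]
image = Image.new("RGB", (512, 512))
image.save("rem.png")  # writes a PNG next to your script / notebook
```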
"""
log = "GPU available"
try:
    pipeline.to("cuda")
except Exception:  # moving to CUDA fails when no GPU is present
    log = "no GPU available"
def text2img(positive_prompt, negative_prompt):
    if log == "no GPU available":
        return log, None
    image = pipeline(positive_prompt, negative_prompt=negative_prompt).images[0]
    # build a fresh dict instead of reassigning `log`, which would shadow the
    # module-level variable and raise UnboundLocalError on the check above
    request_log = {"positive_prompt": positive_prompt, "negative_prompt": negative_prompt}
    return request_log, image
gr.Interface(text2img, ["text", "text"], ["text", "image"], examples=[["rem", "3D"]], description=description).launch()