import gradio as gr
from convert import convert
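# `convert` lives in the local convert module, which is not shown here. Gradio passes
# the seven input values below to it positionally and renders its return value as
# Markdown, so its signature is presumably along these lines (a hypothetical sketch,
# not the actual implementation):
#
#   def convert(token: str, model_id: str, filename: str, model_type: str,
#               sample_size: int, scheduler_type: str, extract_ema: str) -> str:
#       ...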
DESCRIPTION = """
The steps are the following:
- Paste a read-access token from hf.co/settings/tokens. Read access is enough given that we will open a PR against the source repo.
- Input a model id from the Hub
- Input the filename from the root dir of the repo that you would like to convert, e.g. 'v2-1_768-ema-pruned.ckpt' or 'v1-5-pruned.safetensors'
- Choose which Stable Diffusion version, image size, and scheduler type the model has, and whether you want the "ema" or "non-ema" weights.
- Click "Submit"
- That's it! You'll get feedback if it works or not, and if it worked, you'll get the URL of the opened PR 🔥
⚠️ If you encounter weird error messages, please have a look into the Logs and feel free to open a PR to correct the error messages.
"""
demo = gr.Interface(
    title="Convert any model to Safetensors and open a PR",
    description=DESCRIPTION,
    allow_flagging="never",
    article="Check out the [Safetensors repo on GitHub](https://github.com/huggingface/safetensors)",
    inputs=[
        gr.Text(max_lines=1, label="your_hf_token"),
        gr.Text(max_lines=1, label="model_id"),
        gr.Text(max_lines=1, label="filename"),
        gr.Radio(label="Model type", choices=["v1", "v2.0", "v2.1"]),
        gr.Radio(label="Sample size (px)", choices=[512, 768]),
        gr.Radio(label="Scheduler type", choices=["pndm", "heun", "euler", "dpm", "ddim"], value="dpm"),
        gr.Radio(label="Extract EMA or non-EMA?", choices=["ema", "non-ema"], value="ema"),
    ],
    outputs=[gr.Markdown(label="output")],
    fn=convert,
).queue(max_size=10, concurrency_count=1)

demo.launch(show_api=True)
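# Because the app is launched with show_api=True, it can also be called programmatically
# with gradio_client. A usage sketch (the Space id "user/convert-space" and the example
# values below are placeholders, not the real deployment):
#
#   from gradio_client import Client
#
#   client = Client("user/convert-space")
#   result = client.predict(
#       "hf_...",                          # your_hf_token (read access)
#       "runwayml/stable-diffusion-v1-5",  # model_id
#       "v1-5-pruned.ckpt",                # filename at the repo root
#       "v1",                              # Model type
#       512,                               # Sample size (px)
#       "dpm",                             # Scheduler type
#       "ema",                             # Extract EMA or non-EMA?
#       api_name="/predict",
#   )
#   print(result)  # Markdown with the PR URL or an error message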