ysharma (HF staff) committed on
Commit 42b3085
1 Parent(s): 0e7df92

update descriptions

Files changed (1)
  app.py +1 -1
app.py CHANGED
@@ -59,7 +59,7 @@ with gr.Blocks() as demo:
     gr.Markdown("""<h1><center>LORA - Low-rank Adaptation for Fast Text-to-Image Diffusion Fine-tuning</center></h1>
     """)
     gr.Markdown(
-        """**Main Features**<br>- Fine-tune Stable diffusion models twice as faster than dreambooth method, by Low-rank Adaptation.<br>- Get insanely small end result, easy to share and download.<br>- Easy to use, compatible with diffusers.<br>- Sometimes even better performance than full fine-tuning<br>Please refer the Github repo this Space is based on, here - <a href = "https://github.com/cloneofsimo/lora">LORA</a><br>You can also refer this tweet by AK over here to quote/retweet/like, here on <a href="https://twitter.com/_akhaliq/status/1601120767009513472">Twitter</a>.<br>This Gradio Space is an attempt to explore this novel LORA approach to fine-tune Stable diffusion models, using the power and flexibility of Gradio!<br><b>To use this Space well:</b>- First, upload your set of images (4-5), then enter the number of fine-tuning steps, and then press the 'Train LORA model' button.<br>- Enter a prompt, then set the alpha value using the Slider (nearer to 1 implies overfitting to the uploaded images), and then press the 'Inference' button.<br><b>Bonus:</b>Download your fine-tuned model weights from the Gradio file component. The smaller size of LORA models (around 3-4 mb files) is the main highlight of this 'Low-rank Adaptation' approach of fine-tuning.""")
+        """**Main Features**<br>- Fine-tune Stable Diffusion models twice as fast as the DreamBooth method, using Low-rank Adaptation.<br>- Get an insanely small end result that is easy to share and download.<br>- Easy to use, compatible with diffusers.<br>- Sometimes even better performance than full fine-tuning.<br><br>Please refer to the GitHub repo this Space is based on: <a href="https://github.com/cloneofsimo/lora">LORA</a>. You can also quote/retweet/like this tweet by AK on <a href="https://twitter.com/_akhaliq/status/1601120767009513472">Twitter</a>. This Gradio Space is an attempt to explore this novel LORA approach to fine-tuning Stable Diffusion models, using the power and flexibility of Gradio! A higher number of steps results in a longer training time and a better fine-tuned SD model.<br><br><b>To use this Space well:</b><br>- First, upload your set of images (4-5), enter the number of fine-tuning steps, and then press the 'Train LORA model' button. This will produce your fine-tuned model weights.<br>- Enter a prompt, set the alpha value using the Slider (nearer to 1 implies overfitting to the uploaded images), and then press the 'Inference' button. This will produce an image from the newly fine-tuned model.<br><b>Bonus:</b> You can download your fine-tuned model weights from the Gradio file component. The small size of LORA models (around 3-4 MB) is the main highlight of this 'Low-rank Adaptation' approach to fine-tuning.""")
 
     with gr.Row():
         in_images = gr.File(label="Upload images to fine-tune for LORA", file_count="multiple")
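
The updated description mentions the alpha slider ("nearer to 1 implies overfitting to the uploaded images") and the 3-4 MB weight files. A minimal conceptual sketch of why both are true is below; it is not the cloneofsimo/lora API, and the function name, shapes, and values are illustrative only.

```python
import torch

def merge_lora_weight(w0, lora_up, lora_down, alpha):
    # Effective weight W = W0 + alpha * (up @ down); alpha = 0 keeps the base
    # Stable Diffusion weight, alpha = 1 applies the full low-rank update
    # learned from the uploaded images (hence "nearer to 1 implies overfitting").
    return w0 + alpha * (lora_up @ lora_down)

# Only the two small low-rank factors need to be stored and shared, which is
# why the downloadable LORA checkpoint is a few megabytes instead of gigabytes.
out_features, in_features, rank = 320, 768, 4      # hypothetical sizes
w0 = torch.randn(out_features, in_features)        # frozen base weight
lora_down = 0.01 * torch.randn(rank, in_features)  # (r, in)  -- trained
lora_up = torch.zeros(out_features, rank)          # (out, r) -- trained
merged = merge_lora_weight(w0, lora_up, lora_down, alpha=0.7)
print(merged.shape)  # torch.Size([320, 768])
```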
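
The diff context only shows the `gr.File` upload component, so here is a hedged sketch of how the train/inference workflow described above could be wired in Gradio Blocks. The other component names and the `train_lora` / `run_inference` handlers are assumptions for illustration, not the Space's actual code.

```python
import gradio as gr

def train_lora(files, steps):
    # Placeholder: a real handler would fine-tune SD on `files` for `steps`
    # steps and return the path of the saved LoRA weight file.
    path = "lora_weights.pt"
    open(path, "wb").close()
    return path

def run_inference(prompt, alpha):
    # Placeholder: a real handler would load the LoRA weights scaled by
    # `alpha` and generate an image for `prompt`.
    return None

with gr.Blocks() as demo:
    with gr.Row():
        in_images = gr.File(label="Upload images to fine-tune for LORA", file_count="multiple")
        in_steps = gr.Number(label="Number of fine-tuning steps", value=1000)
    btn_train = gr.Button("Train LORA model")
    out_file = gr.File(label="Download fine-tuned model weights")
    in_prompt = gr.Textbox(label="Prompt")
    in_alpha = gr.Slider(0, 1, value=0.5, label="Alpha (nearer to 1 = closer to the uploaded images)")
    btn_infer = gr.Button("Inference")
    out_image = gr.Image(label="Generated image")

    btn_train.click(train_lora, inputs=[in_images, in_steps], outputs=out_file)
    btn_infer.click(run_inference, inputs=[in_prompt, in_alpha], outputs=out_image)

demo.launch()
```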