apolinario committed
Commit ce39e0b • Parent: d69d28e
Improve credits and change default prompt

app.py CHANGED
@@ -269,7 +269,7 @@ image = gr.outputs.Image(type="pil", label="Your imge")
 video = gr.outputs.Video(type="mp4", label="Your video")
 css = ".output-image{height: 528px !important} .output-video{height: 528px !important}"
 iface = gr.Interface(fn=run, inputs=[
-    gr.inputs.Textbox(label="Prompt",default="
+    gr.inputs.Textbox(label="Prompt",default="Hong Kong by Studio Ghibli"),
     gr.inputs.Slider(label="Steps - more steps can increase quality but will take longer to generate",default=300,maximum=500,minimum=10,step=1),
     #gr.inputs.Radio(label="Aspect Ratio", choices=["Square", "Horizontal", "Vertical"],default="Horizontal"),
     gr.inputs.Dropdown(label="Model", choices=["imagenet256","Pokemon256", "ffhq256"], default="imagenet256")
@@ -281,6 +281,6 @@ iface = gr.Interface(fn=run, inputs=[
     outputs=[image,video],
     css=css,
     title="Generate images from text with StyleGAN XL + CLIP",
-    description="<div>By typing a prompt and pressing submit you generate images based on it. <a href='https://github.com/autonomousvision/stylegan_xl' target='_blank'>StyleGAN XL</a> is a general purpose StyleGAN, and it is CLIP Guidance notebook was created by <a href='https://github.com/CasualGANPapers/StyleGANXL-CLIP' target='_blank'>ryudrigo and ouhenio</a>, and optimised by <a href='https://twitter.com/rivershavewings' target='_blank'>Katherine Crowson</a
+    description="<div>By typing a prompt and pressing submit you generate images based on it. <a href='https://github.com/autonomousvision/stylegan_xl' target='_blank'>StyleGAN XL</a> is a general purpose StyleGAN, and it is CLIP Guidance notebook was created by <a href='https://github.com/CasualGANPapers/StyleGANXL-CLIP' target='_blank'>ryudrigo and ouhenio</a>, and optimised by <a href='https://twitter.com/rivershavewings' target='_blank'>Katherine Crowson</a> This Spaces Gradio UI to the model was assembled by <a style='color: rgb(99, 102, 241);font-weight:bold' href='https://twitter.com/multimodalart' target='_blank'>@multimodalart</a>, keep up with the <a style='color: rgb(99, 102, 241);' href='https://multimodal.art/news' target='_blank'>latest multimodal ai art news here</a> and consider <a style='color: rgb(99, 102, 241);' href='https://www.patreon.com/multimodalart' target='_blank'>supporting us on Patreon</a></div>",
     article="<h4 style='font-size: 110%;margin-top:.5em'>Biases acknowledgment</h4><div>Despite how impressive being able to turn text into image is, beware to the fact that this model may output content that reinforces or exarcbates societal biases. According to the <a href='https://arxiv.org/abs/2112.10752' target='_blank'>Latent Diffusion paper</a>:<i> \"Deep learning modules tend to reproduce or exacerbate biases that are already present in the data\"</i>. The models are meant to be used for research purposes, such as this one.</div><h4 style='font-size: 110%;margin-top:1em'>Who owns the images produced by this demo?</h4><div>Definetly not me! Probably you do. I say probably because the Copyright discussion about AI generated art is ongoing. So <a href='https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise' target='_blank'>it may be the case that everything produced here falls automatically into the public domain</a>. But in any case it is either yours or is in the public domain.</div>")
     iface.launch(enable_queue=True)
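For context on the diff: Gradio's `Interface` passes the values of the `inputs` components positionally to the wrapped function, so the list above implies a signature of roughly `run(prompt, steps, model)`. The actual `run` function is not shown in this commit; the following is a minimal, hypothetical stand-in illustrating only that positional mapping:

```python
# Hypothetical stand-in for the run() function the Interface in the diff wraps.
# Gradio maps the inputs list positionally:
#   Textbox  -> prompt (str)
#   Slider   -> steps (int)
#   Dropdown -> model (str)
def run(prompt, steps, model):
    # The real Space would drive StyleGAN XL + CLIP here and return
    # (image, video); this stub just echoes the configuration it received.
    return f"{model}: '{prompt}' for {steps} steps"

# Called with the new default prompt and defaults from the diff:
print(run("Hong Kong by Studio Ghibli", 300, "imagenet256"))
```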