artificialguybr and multimodalart (HF staff) committed
Commit 4bc5468 (1 parent: f297cb4)

Move disclaimers below (#3)


- Move disclaimers below (6ca9df83acbb91484d7b723f80bc1f37ce85c37d)


Co-authored-by: Apolinário from multimodal AI art <multimodalart@users.noreply.huggingface.co>

Files changed (1):
  app.py (+9 -8)
app.py CHANGED
@@ -134,18 +134,19 @@ iface = gr.Interface(
     outputs=gr.Video(),
     live=False,
     title="AI Video Dubbing",
-    description="""This tool was developed by [@artificialguybr](https://twitter.com/artificialguybr) using entirely open-source tools. Special thanks to Hugging Face for the GPU support. Thanks [@yeswondwer](https://twitter.com/@yeswondwerr) for original code.
-
-    **Note:**
-    - Video limit is 1 minute. It will dubbling all people using just one voice.
-    - Generation may take up to 5 minutes.
-    - The tool uses open-source models for all models. It's a alpha version.
-    - Quality can be improved but would require more processing time per video. For scalability and hardware limitations, speed was chosen, not just quality.
-    - If you need more than 1 minute, duplicate the Space and change the limit on app.py.""",
+    description="""This tool was developed by [@artificialguybr](https://twitter.com/artificialguybr) using entirely open-source tools. Special thanks to Hugging Face for the GPU support. Thanks [@yeswondwer](https://twitter.com/@yeswondwerr) for original code.""",
     allow_flagging=False
 )
 with gr.Blocks() as demo:
     iface.render()
     radio.change(swap, inputs=[radio], outputs=video)
+    gr.Markdown("""
+    **Note:**
+    - Video limit is 1 minute. It will dubbling all people using just one voice.
+    - Generation may take up to 5 minutes.
+    - The tool uses open-source models for all models. It's a alpha version.
+    - Quality can be improved but would require more processing time per video. For scalability and hardware limitations, speed was chosen, not just quality.
+    - If you need more than 1 minute, duplicate the Space and change the limit on app.py.
+    """)
     demo.queue(concurrency_count=2, max_size=15)
     demo.launch()
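
For context, the pattern the updated app.py follows is a gr.Interface rendered inside a gr.Blocks layout, with the long disclaimer text appended below the UI via gr.Markdown. The sketch below is a minimal, self-contained illustration of that pattern, assuming a Gradio 3.x-era API (as suggested by demo.queue(concurrency_count=...) in the diff); dub_video is a hypothetical stand-in for the Space's real dubbing pipeline, and the radio/swap wiring from the actual file is omitted.

import gradio as gr

def dub_video(video):
    # Hypothetical stand-in: the real Space runs its dubbing pipeline here.
    return video

iface = gr.Interface(
    fn=dub_video,
    inputs=gr.Video(),
    outputs=gr.Video(),
    live=False,
    title="AI Video Dubbing",
    description="Short credit line only; detailed notes now appear below the interface.",
    allow_flagging=False,  # as in the diff; newer Gradio versions prefer allow_flagging="never"
)

with gr.Blocks() as demo:
    iface.render()  # embed the Interface inside the Blocks layout
    gr.Markdown("**Note:** usage limits and alpha-status disclaimers go here, below the UI.")

demo.queue(concurrency_count=2, max_size=15)  # queue settings as in the diff (Gradio 3.x)
demo.launch()

Keeping the Interface description to a short credit line and moving the notes into a Blocks-level Markdown component is what places the disclaimers below the input/output widgets instead of above them, which is the visible effect of this commit.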