Anonymous-sub committed on
Commit ce061be
1 Parent(s): d358cad

Update app.py

Files changed (1):
  1. app.py +7 -0
app.py CHANGED
@@ -622,6 +622,11 @@ DESCRIPTION = '''
 ### This space provides key frame translation. Full code for full video translation will be released upon the publication of the paper.
 ### To avoid overload, we limit the maximum number of frames (8) and the maximum frame resolution (512x768).
 ### The running time for a 512x640 video is about 1 minute per key frame on a T4 GPU.
+### How to use:
+1. **Run 1st Key Frame**: translate only the first frame, so you can adjust the prompts/models/parameters to find your ideal output appearance before running the whole video.
+2. **Run Key Frames**: translate all the key frames based on the settings of the first frame.
+3. **Run All**: run **Run 1st Key Frame** and then **Run Key Frames**.
+4. **Run Propagation**: propagate the key frames to the other frames for full video translation. This part will be released upon the publication of the paper.
 ### Tips:
 1. This method cannot handle large or quick motions where the optical flow is hard to estimate. **Videos with stable motions are preferred**.
 2. Pixel-aware fusion may not work for large or quick motions.
@@ -630,6 +635,8 @@ DESCRIPTION = '''
 5. To use your own SD/LoRA model, you may clone the space and specify your model with [sd_model_cfg.py](https://huggingface.co/spaces/Anonymous-sub/Rerender/blob/main/sd_model_cfg.py).
 6. This method is based on the original SD model. You may need to [convert](https://github.com/huggingface/diffusers/blob/main/scripts/convert_diffusers_to_original_stable_diffusion.py) Diffusers/Automatic1111 models to the original format.
 
+**This code is for research purposes and non-commercial use only.**
+
 <a href="https://huggingface.co/spaces/Anonymous-sub/Rerender?duplicate=true" style="display: inline-block;margin-top: .5em;margin-right: .25em;" target="_blank">
 <img style="margin-bottom: 0em;display: inline;margin-top: -.25em;" src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a> for no queue on your own hardware.</p>
 '''
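
For tip 5 above (using your own SD/LoRA model after cloning the space), a minimal sketch of what an entry in sd_model_cfg.py might look like. This assumes the file keeps a simple name-to-checkpoint mapping; the variable name and paths below are illustrative, so mirror the structure of the actual file in the space rather than copying this verbatim.

```python
# sd_model_cfg.py -- hypothetical sketch, not the actual file contents.
# Maps a display name (shown in the UI) to the path of an original-SD-format checkpoint.
model_dict = {
    'my-custom-model': 'models/my_custom_model.safetensors',  # assumed local path
}
```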
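For tip 6, the linked diffusers script converts a Diffusers-format model folder back into a single original-SD checkpoint. A hedged example invocation is below; the flag names are based on the script's usual command line and the paths are placeholders, so check `python convert_diffusers_to_original_stable_diffusion.py --help` before relying on them.

```python
# Hypothetical invocation of the diffusers conversion script; adjust paths and flags as needed.
import subprocess

subprocess.run(
    [
        "python", "convert_diffusers_to_original_stable_diffusion.py",
        "--model_path", "./my-diffusers-model",    # input: folder in Diffusers format
        "--checkpoint_path", "./my_model.ckpt",    # output: checkpoint in original SD format
        "--half",                                  # optional: save weights as fp16
    ],
    check=True,
)
```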