ShaoTengLiu committed
Commit ab4a868
Parent: a163762
README.md CHANGED
@@ -9,4 +9,5 @@ license: mit
 duplicated_from: Tune-A-Video-library/Tune-A-Video-Training-UI
 ---
 
+Most of the UI code is from: Tune-A-Video-library/Tune-A-Video-Training-UI
 Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
Video-P2P/.DS_Store DELETED
Binary file (6.15 kB)
 
Video-P2P/configs/man-surfing-tune.yaml DELETED
@@ -1,38 +0,0 @@
-pretrained_model_path: "./checkpoints/stable-diffusion-v1-4"
-output_dir: "./outputs/man-surfing"
-
-train_data:
-  video_path: "data/man-surfing.mp4"
-  prompt: "a man is surfing"
-  n_sample_frames: 8
-  width: 512
-  height: 512
-  sample_start_idx: 0
-  sample_frame_rate: 1
-
-validation_data:
-  prompts:
-  - "a panda is surfing"
-  video_length: 8
-  width: 512
-  height: 512
-  num_inference_steps: 50
-  guidance_scale: 12.5
-  use_inv_latent: True
-  num_inv_steps: 50
-
-learning_rate: 3e-5
-train_batch_size: 1
-max_train_steps: 500
-checkpointing_steps: 1000
-validation_steps: 500
-trainable_modules:
-- "attn1.to_q"
-- "attn2.to_q"
-- "attn_temp"
-
-seed: 33
-mixed_precision: fp16
-use_8bit_adam: False
-gradient_checkpointing: True
-enable_xformers_memory_efficient_attention: True
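For context, configs in this Tune-A-Video format are typically loaded with OmegaConf and unpacked into the training entry point. A minimal sketch, assuming OmegaConf is installed; the config path refers to the deleted file above, and the commented `main(**config)` call is a hypothetical stand-in for run.py's entry point:

    # Sketch: load a Tune-A-Video style YAML config (assumption: OmegaConf is
    # the loader, as is conventional in this codebase family).
    from omegaconf import OmegaConf

    config = OmegaConf.load("Video-P2P/configs/man-surfing-tune.yaml")

    # Nested keys become attribute-accessible DictConfig nodes.
    print(config.train_data.prompt)        # "a man is surfing"
    print(config.validation_data.prompts)  # ["a panda is surfing"]

    # main(**config)  # hypothetical: unpack top-level keys into the entry point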
Video-P2P/run.py CHANGED
@@ -381,6 +381,8 @@ def main(
 
     accelerator.end_training()
 
+    torch.cuda.empty_cache()
+
     # Video-P2P
     scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)
     MY_TOKEN = ''
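The added `torch.cuda.empty_cache()` releases PyTorch's cached allocator blocks between the fine-tuning phase and the Video-P2P inference phase that follows. A minimal sketch of the pattern, where `train_phase` and `inference_phase` are hypothetical placeholders rather than functions from this repo:

    import torch

    def train_phase():
        # hypothetical stand-in for the Tune-A-Video fine-tuning loop
        pass

    def inference_phase():
        # hypothetical stand-in for the Video-P2P editing pass
        pass

    train_phase()
    if torch.cuda.is_available():
        # Release cached allocator blocks so the next phase starts from a
        # clean slate; useful on memory-constrained GPUs such as a T4.
        torch.cuda.empty_cache()
    inference_phase()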
app_training.py CHANGED
@@ -72,8 +72,9 @@ def create_training_demo(trainer: Trainer,
                     label='Validation Epochs', value=100, precision=0)
                 gr.Markdown('''
                 - The base model must be a Stable Diffusion model compatible with [diffusers](https://github.com/huggingface/diffusers) library.
-                - Expected time to train a model for 300 steps: ~20 minutes with T4
+                - Expected time to complete: ~20 minutes with T4.
                 - You can check the training status by pressing the "Open logs" button if you are running this on your Space.
+                - Find the official github code [here](https://github.com/ShaoTengLiu/Video-P2P).
                 ''')
 
         with gr.Row():
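For reference, the edited notes live inside a `gr.Markdown` block of the Gradio training demo. A minimal, self-contained sketch of that pattern, assuming gradio is installed; the layout is illustrative, not the Space's actual UI:

    import gradio as gr

    # Stripped-down stand-in for create_training_demo: a Markdown block
    # rendering the user-facing notes edited in the diff above.
    with gr.Blocks() as demo:
        gr.Markdown('''
        - Expected time to complete: ~20 minutes with T4.
        - You can check the training status by pressing the "Open logs" button.
        - Find the official github code [here](https://github.com/ShaoTengLiu/Video-P2P).
        ''')

    if __name__ == '__main__':
        demo.launch()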