NagaSaiAbhinay committed
Commit 920184b
1 Parent(s): cad0d03

Adds README.md

Files changed (1)
  1. README.md +66 -0
README.md CHANGED
@@ -1,3 +1,69 @@
  ---
  license: creativeml-openrail-m
  ---
+
+ ---
+ license: creativeml-openrail-m
+ base_model: nitrosocke/mo-di-diffusion
+ training_prompt: A bear is playing guitar.
+ tags:
+ - tune-a-video
+ - text-to-video
+ - diffusers
+ inference: false
+ ---
+
+ # Tune-A-Video - Modern Disney
+
+ ## Model Description
+ This is a diffusers-compatible checkpoint. When loaded with `DiffusionPipeline`, it returns an instance of `TuneAVideoPipeline`.
+
+ > The `df-cpt` prefix indicates that this is a diffusers-compatible equivalent of Tune-A-Video-library/mo-di-bear-guitar.
+
+ - Base model: [nitrosocke/mo-di-diffusion](https://huggingface.co/nitrosocke/mo-di-diffusion)
+ - Training prompt: a bear is playing guitar.
+ ![sample-train](samples/train.gif)
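+
+ As a quick check (a minimal sketch of the behavior described above; recent diffusers releases may additionally require `trust_remote_code=True` to load a custom pipeline class from a Hub repository):
+
+ ```python
+ import torch
+ from diffusers import DiffusionPipeline
+
+ # Loading through DiffusionPipeline resolves to the custom TuneAVideoPipeline
+ # stored in this repository.
+ pipe = DiffusionPipeline.from_pretrained(
+     "Tune-A-Video-library/df-cpt-mo-di-bear-guitar", torch_dtype=torch.float16
+ )
+ print(type(pipe).__name__)  # expected: TuneAVideoPipeline
+ ```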
+
+ ## Samples
+
+ ![sample-500](samples/sample-500.gif)
+ Test prompt: a [handsome prince/magical princess/rabbit/baby] is playing guitar, modern disney style.
+
+ ## Usage
+
+ ```python
+ import torch
+ from diffusers import DiffusionPipeline
+ from diffusers.utils import export_to_video
+ from PIL import Image
+
+ # As described above, DiffusionPipeline resolves this checkpoint to a
+ # TuneAVideoPipeline instance.
+ pipe = DiffusionPipeline.from_pretrained(
+     "Tune-A-Video-library/df-cpt-mo-di-bear-guitar", torch_dtype=torch.float16
+ ).to("cuda")
+
+ prompt = "A princess playing a guitar, modern disney style"
+ generator = torch.Generator(device="cuda").manual_seed(42)
+
+ video_frames = pipe(prompt, video_length=3, generator=generator, num_inference_steps=50, output_type="np").frames
+
+ # Saving to gif. With output_type="np" the frames are floats in [0, 1],
+ # so scale them to uint8 before handing them to PIL.
+ pil_frames = [Image.fromarray((frame * 255).astype("uint8")) for frame in video_frames]
+ pil_frames[0].save(
+     "animation.gif",
+     save_all=True,
+     append_images=pil_frames[1:],  # append rest of the frames
+     duration=1000 // 8,  # per-frame duration in milliseconds (8 fps)
+     loop=0,
+ )
+
+ # Saving to video
+ video_path = export_to_video(video_frames)
+ ```
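+
+ To reproduce the subject variations shown in the Samples section, you can loop over the bracketed subjects of the test prompt (a short sketch reusing `pipe`, `torch`, and `export_to_video` from above; the subject list is taken from the Samples section):
+
+ ```python
+ # Sketch: render one clip per test-prompt subject.
+ subjects = ["handsome prince", "magical princess", "rabbit", "baby"]
+ for subject in subjects:
+     prompt = f"a {subject} is playing guitar, modern disney style"
+     generator = torch.Generator(device="cuda").manual_seed(42)  # same seed for each subject
+     frames = pipe(
+         prompt, video_length=3, generator=generator, num_inference_steps=50, output_type="np"
+     ).frames
+     export_to_video(frames, f"{subject.replace(' ', '-')}.mp4")
+ ```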
+
+ ## Related Papers
+ - [Tune-A-Video](https://arxiv.org/abs/2212.11565): One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
+ - [Stable Diffusion](https://arxiv.org/abs/2112.10752): High-Resolution Image Synthesis with Latent Diffusion Models