---
license: apache-2.0
library_name: diffusers
---
AnimateDiff original author checkpoints are available at: https://huggingface.co/guoyww

This checkpoint was converted to Diffusers format by [a-r-r-o-w](https://github.com/a-r-r-o-w/). You can find results and more details on adding AnimateDiff SDXL support (beta) to 🤗 Diffusers [here](https://github.com/huggingface/diffusers/pull/6721). The following description is copied from [here](https://huggingface.co/guoyww/animatediff-motion-adapter-v1-5-2).

AnimateDiff is a method that allows you to create videos using pre-existing Stable Diffusion text-to-image models.

It achieves this by inserting motion module layers into a frozen text-to-image model and training them on video clips to extract a motion prior. These motion modules are applied after the ResNet and Attention blocks in the Stable Diffusion UNet, and their purpose is to introduce coherent motion across image frames. To support these modules, Diffusers introduces the concepts of a MotionAdapter and a UNetMotionModel, which serve as a convenient way to use the motion modules with existing Stable Diffusion models.
Note: The SDXL checkpoint for AnimateDiff is a beta version.

### Usage
```python
import torch
from diffusers import AnimateDiffSDXLPipeline
from diffusers.schedulers import DDIMScheduler
from diffusers.models import MotionAdapter
from diffusers.utils import export_to_gif

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
adapter = MotionAdapter.from_pretrained("a-r-r-o-w/animatediff-motion-adapter-sdxl-beta", torch_dtype=torch.float16)
scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    clip_sample=False,
    timestep_spacing="linspace",
    beta_schedule="linear",
    steps_offset=1,
)
pipe = AnimateDiffSDXLPipeline.from_pretrained(
    model_id,
    motion_adapter=adapter,
    scheduler=scheduler,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# enable memory savings
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

result = pipe(
    prompt="a panda surfing in the ocean, realistic, hyperrealism, high quality",
    negative_prompt="low quality, worst quality",
    num_inference_steps=20,
    guidance_scale=8,
    width=1024,
    height=1024,
    num_frames=16,
)

export_to_gif(result.frames[0], "animation.gif")
```