bluestarburst committed
Commit
e72f6f2
1 Parent(s): 98306a2

Upload folder using huggingface_hub

README.md CHANGED
@@ -1,207 +1,212 @@
- ---
- '[object Object]': null
- license: mit
- datasets:
- - allenai/objaverse
- language:
- - en
- pipeline_tag: text-to-video
- tags:
- - video diffusion
- - text-to-video
- ---
-
- # Model Card for {{ model_id | default("Model ID", true) }}
-
- <!-- Provide a quick summary of what the model is/does. -->
-
- {{ model_summary | default("", true) }}
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- {{ model_description | default("", true) }}
-
- - **Developed by:** {{ developers | default("[More Information Needed]", true)}}
- - **Funded by [optional]:** {{ funded_by | default("[More Information Needed]", true)}}
- - **Shared by [optional]:** {{ shared_by | default("[More Information Needed]", true)}}
- - **Model type:** {{ model_type | default("[More Information Needed]", true)}}
- - **Language(s) (NLP):** {{ language | default("[More Information Needed]", true)}}
- - **License:** {{ license | default("[More Information Needed]", true)}}
- - **Finetuned from model [optional]:** {{ base_model | default("[More Information Needed]", true)}}
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** {{ repo | default("[More Information Needed]", true)}}
- - **Paper [optional]:** {{ paper | default("[More Information Needed]", true)}}
- - **Demo [optional]:** {{ demo | default("[More Information Needed]", true)}}
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- {{ direct_use | default("[More Information Needed]", true)}}
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- {{ downstream_use | default("[More Information Needed]", true)}}
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- {{ out_of_scope_use | default("[More Information Needed]", true)}}
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- {{ bias_risks_limitations | default("[More Information Needed]", true)}}
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- {{ bias_recommendations | default("Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", true)}}
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- {{ get_started_code | default("[More Information Needed]", true)}}
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- {{ training_data | default("[More Information Needed]", true)}}
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- {{ preprocessing | default("[More Information Needed]", true)}}
-
-
- #### Training Hyperparameters
-
- - **Training regime:** {{ training_regime | default("[More Information Needed]", true)}} <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- {{ speeds_sizes_times | default("[More Information Needed]", true)}}
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- {{ testing_data | default("[More Information Needed]", true)}}
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- {{ testing_factors | default("[More Information Needed]", true)}}
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- {{ testing_metrics | default("[More Information Needed]", true)}}
-
- ### Results
-
- {{ results | default("[More Information Needed]", true)}}
-
- #### Summary
-
- {{ results_summary | default("", true) }}
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- {{ model_examination | default("[More Information Needed]", true)}}
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** {{ hardware_type | default("[More Information Needed]", true)}}
- - **Hours used:** {{ hours_used | default("[More Information Needed]", true)}}
- - **Cloud Provider:** {{ cloud_provider | default("[More Information Needed]", true)}}
- - **Compute Region:** {{ cloud_region | default("[More Information Needed]", true)}}
- - **Carbon Emitted:** {{ co2_emitted | default("[More Information Needed]", true)}}
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- {{ model_specs | default("[More Information Needed]", true)}}
-
- ### Compute Infrastructure
-
- {{ compute_infrastructure | default("[More Information Needed]", true)}}
-
- #### Hardware
-
- {{ hardware_requirements | default("[More Information Needed]", true)}}
-
- #### Software
-
- {{ software | default("[More Information Needed]", true)}}
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- {{ citation_bibtex | default("[More Information Needed]", true)}}
-
- **APA:**
-
- {{ citation_apa | default("[More Information Needed]", true)}}
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- {{ glossary | default("[More Information Needed]", true)}}
-
- ## More Information [optional]
-
- {{ more_information | default("[More Information Needed]", true)}}
-
- ## Model Card Authors [optional]
-
- {{ model_card_authors | default("[More Information Needed]", true)}}
-
- ## Model Card Contact
-
- {{ model_card_contact | default("[More Information Needed]", true)}}
+ # AnimateDiff
+
+ This repository is the official implementation of [AnimateDiff](https://arxiv.org/abs/2307.04725).
+
+ **[AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning](https://arxiv.org/abs/2307.04725)**
+ <br>
+ Yuwei Guo,
+ Ceyuan Yang*,
+ Anyi Rao,
+ Yaohui Wang,
+ Yu Qiao,
+ Dahua Lin,
+ Bo Dai
+ <p style="font-size: 0.8em; margin-top: -1em">*Corresponding Author</p>
+
+ <!-- [Arxiv Report](https://arxiv.org/abs/2307.04725) | [Project Page](https://animatediff.github.io/) -->
+ [![arXiv](https://img.shields.io/badge/arXiv-2307.04725-b31b1b.svg)](https://arxiv.org/abs/2307.04725)
+ [![Project Page](https://img.shields.io/badge/Project-Website-green)](https://animatediff.github.io/)
+ [![Open in OpenXLab](https://cdn-static.openxlab.org.cn/app-center/openxlab_app.svg)](https://openxlab.org.cn/apps/detail/Masbfca/AnimateDiff)
+ [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-yellow)](https://huggingface.co/spaces/guoyww/AnimateDiff)
+
+ ## Next
+ A version with better controllability and quality is coming soon. Stay tuned.
+
+ ## Features
+ - **[2023/11/10]** Release the Motion Module (beta version) on SDXL, available at [Google Drive](https://drive.google.com/file/d/1EK_D9hDOPfJdK4z8YDB8JYvPracNx2SX/view?usp=share_link) / [HuggingFace](https://huggingface.co/guoyww/animatediff/blob/main/mm_sdxl_v10_beta.ckpt) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules). High-resolution videos (i.e., 1024x1024x16 frames with various aspect ratios) can be produced **with/without** personalized models. Inference usually requires ~13GB VRAM and tuned hyperparameters (e.g., the number of sampling steps), depending on the chosen personalized model. Check out the [sdxl](https://github.com/guoyww/AnimateDiff/tree/sdxl) branch for inference details (a clone/checkout example follows the table below). More checkpoints with better quality will be available soon. Stay tuned. Examples below are manually downsampled for fast loading.
+
+ <table class="center">
+ <tr style="line-height: 0">
+ <td width=50% style="border: none; text-align: center">Original SDXL</td>
+ <td width=30% style="border: none; text-align: center">Personalized SDXL</td>
+ <td width=20% style="border: none; text-align: center">Personalized SDXL</td>
+ </tr>
+ <tr>
+ <td width=50% style="border: none"><img src="__assets__/animations/motion_xl/01.gif"></td>
+ <td width=30% style="border: none"><img src="__assets__/animations/motion_xl/02.gif"></td>
+ <td width=20% style="border: none"><img src="__assets__/animations/motion_xl/03.gif"></td>
+ </tr>
+ </table>
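+
+ To try the SDXL motion module, switch to the `sdxl` branch mentioned above. A minimal example using standard git commands, assuming a fresh clone of the official repository:
+ ```
+ git clone https://github.com/guoyww/AnimateDiff.git
+ cd AnimateDiff
+ git checkout sdxl
+ ```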
+
+ - **[2023/09/25]** Release **MotionLoRA** and its model zoo, **enabling camera movement controls**! Please download the MotionLoRA models (**74 MB per model**, available at [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules)) and save them to the `models/MotionLoRA` folder. Example:
+ ```
+ python -m scripts.animate --config configs/prompts/v2/5-RealisticVision-MotionLoRA.yaml
+ ```
+ <table class="center">
+ <tr style="line-height: 0">
+ <td colspan="2" style="border: none; text-align: center">Zoom In</td>
+ <td colspan="2" style="border: none; text-align: center">Zoom Out</td>
+ <td colspan="2" style="border: none; text-align: center">Pan Left</td>
+ <td colspan="2" style="border: none; text-align: center">Pan Right</td>
+ </tr>
+ <tr>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_01/01.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_02/02.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_01/02.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_02/01.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_01/03.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_02/04.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_01/04.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_02/03.gif"></td>
+ </tr>
+ <tr style="line-height: 0">
+ <td colspan="2" style="border: none; text-align: center">Tilt Up</td>
+ <td colspan="2" style="border: none; text-align: center">Tilt Down</td>
+ <td colspan="2" style="border: none; text-align: center">Rolling Anti-Clockwise</td>
+ <td colspan="2" style="border: none; text-align: center">Rolling Clockwise</td>
+ </tr>
+ <tr>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_01/05.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_02/05.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_01/06.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_02/06.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_01/07.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_02/07.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_01/08.gif"></td>
+ <td style="border: none"><img src="__assets__/animations/motion_lora/model_02/08.gif"></td>
+ </tr>
+ </table>
+
+ - **[2023/09/10]** New Motion Module release! `mm_sd_v15_v2.ckpt` was trained at a larger resolution and batch size and shows noticeable quality improvements. Check it out at [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules) and use it with `configs/inference/inference-v2.yaml`. Example:
+ ```
+ python -m scripts.animate --config configs/prompts/v2/5-RealisticVision.yaml
+ ```
+ Here is a qualitative comparison between `mm_sd_v15.ckpt` (left) and `mm_sd_v15_v2.ckpt` (right):
+ <table class="center">
+ <tr>
+ <td><img src="__assets__/animations/compare/old_0.gif"></td>
+ <td><img src="__assets__/animations/compare/new_0.gif"></td>
+ <td><img src="__assets__/animations/compare/old_1.gif"></td>
+ <td><img src="__assets__/animations/compare/new_1.gif"></td>
+ <td><img src="__assets__/animations/compare/old_2.gif"></td>
+ <td><img src="__assets__/animations/compare/new_2.gif"></td>
+ <td><img src="__assets__/animations/compare/old_3.gif"></td>
+ <td><img src="__assets__/animations/compare/new_3.gif"></td>
+ </tr>
+ </table>
+ - GPU memory optimization: ~12GB VRAM is enough for inference
+
+ ## Quick Demo
+
+ User interfaces developed by the community:
+ - A1111 Extension [sd-webui-animatediff](https://github.com/continue-revolution/sd-webui-animatediff) (by [@continue-revolution](https://github.com/continue-revolution))
+ - ComfyUI Extension [ComfyUI-AnimateDiff-Evolved](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved) (by [@Kosinkadink](https://github.com/Kosinkadink))
+ - Google Colab: [Colab](https://colab.research.google.com/github/camenduru/AnimateDiff-colab/blob/main/AnimateDiff_colab.ipynb) (by [@camenduru](https://github.com/camenduru))
+
+ We also provide a Gradio demo to make AnimateDiff easier to use. To launch the demo, please run the following commands:
+ ```
+ conda activate animatediff
+ python app.py
+ ```
+ By default, the demo will run at `localhost:7860`.
+ <br><img src="__assets__/figs/gradio.jpg" style="width: 50em; margin-top: 1em">
+
+ ## Model Zoo
+ <details open>
+ <summary>Motion Modules</summary>
+
+ | Name | Parameters | Storage Space |
+ |----------------------|-----------|---------------|
+ | mm_sd_v14.ckpt | 417 M | 1.6 GB |
+ | mm_sd_v15.ckpt | 417 M | 1.6 GB |
+ | mm_sd_v15_v2.ckpt | 453 M | 1.7 GB |
+
+ </details>
+
+ <details open>
+ <summary>MotionLoRAs</summary>
+
+ | Name | Parameters | Storage Space |
+ |--------------------------------------|-----------|---------------|
+ | v2_lora_ZoomIn.ckpt | 19 M | 74 MB |
+ | v2_lora_ZoomOut.ckpt | 19 M | 74 MB |
+ | v2_lora_PanLeft.ckpt | 19 M | 74 MB |
+ | v2_lora_PanRight.ckpt | 19 M | 74 MB |
+ | v2_lora_TiltUp.ckpt | 19 M | 74 MB |
+ | v2_lora_TiltDown.ckpt | 19 M | 74 MB |
+ | v2_lora_RollingClockwise.ckpt | 19 M | 74 MB |
+ | v2_lora_RollingAnticlockwise.ckpt | 19 M | 74 MB |
+
+ </details>
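+
+ The motion-module checkpoints listed above can also be fetched programmatically. Below is a minimal sketch using `huggingface_hub`; it is not part of the repository, and it assumes the checkpoints live in the [guoyww/animatediff](https://huggingface.co/guoyww/animatediff) repo and that the inference configs look for motion modules under `models/Motion_Module`:
+ ```python
+ import os
+ import shutil
+ from huggingface_hub import hf_hub_download
+
+ # Download the v2 motion module into the local Hugging Face cache, then copy it
+ # to the folder where the inference configs expect motion modules to live.
+ cached_path = hf_hub_download(repo_id="guoyww/animatediff", filename="mm_sd_v15_v2.ckpt")
+ os.makedirs("models/Motion_Module", exist_ok=True)
+ shutil.copy(cached_path, "models/Motion_Module/mm_sd_v15_v2.ckpt")
+ ```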
+
+ ## Common Issues
+ <details>
+ <summary>Installation</summary>
+
+ Please ensure that [xformers](https://github.com/facebookresearch/xformers) is installed; it is used to reduce inference memory (see the snippet below).
+ </details>
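+
+ For reference, xformers can usually be installed into the same environment with pip; you may need to pin a version that matches your PyTorch/CUDA build:
+ ```
+ pip install xformers
+ ```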
+
+ <details>
+ <summary>Various resolutions or numbers of frames</summary>
+ Currently, we recommend generating animations with 16 frames at 512 resolution, which matches our training settings. Other resolutions or frame counts may affect quality to some degree.
+ </details>
+
+ <details>
+ <summary>How to use it without any coding</summary>
+
+ 1) Get LoRA models: train a LoRA model with [A1111](https://github.com/continue-revolution/sd-webui-animatediff) on a collection of your own favorite images (e.g., tutorials in [English](https://www.youtube.com/watch?v=mfaqqL5yOO4), [Japanese](https://www.youtube.com/watch?v=N1tXVR9lplM), [Chinese](https://www.bilibili.com/video/BV1fs4y1x7p2/)),
+ or download LoRA models from [Civitai](https://civitai.com/).
+
+ 2) Animate LoRA models: use the Gradio interface or A1111
+ (e.g., tutorials in [English](https://github.com/continue-revolution/sd-webui-animatediff), [Japanese](https://www.youtube.com/watch?v=zss3xbtvOWw), [Chinese](https://941ai.com/sd-animatediff-webui-1203.html)).
+
+ 3) Be creative together with other techniques, such as super resolution, frame interpolation, and music generation.
+ </details>
+
+ <details>
+ <summary>Animating a given image</summary>
+
+ We agree that animating a given image is an appealing feature, and we plan to support it officially in the future. For now, you may check out the community effort at [talesofai](https://github.com/talesofai/AnimateDiff).
+ </details>
+
+ <details>
+ <summary>Contributions from the community</summary>
+ Contributions are always welcome! The <code>dev</code> branch is for community contributions; we would like to keep the main branch aligned with the original technical report. :)
+ </details>
+
+ ## Training and Inference
+ Please refer to [ANIMATEDIFF](./__assets__/docs/animatediff.md) for the detailed setup.
+
+ ## Gallery
+ We collect several generated results in [GALLERY](./__assets__/docs/gallery.md).
+
+ ## BibTeX
+ ```
+ @article{guo2023animatediff,
+   title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
+   author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Wang, Yaohui and Qiao, Yu and Lin, Dahua and Dai, Bo},
+   journal={arXiv preprint arXiv:2307.04725},
+   year={2023}
+ }
+ ```
+
+ ## Disclaimer
+ This project is released for academic use. We disclaim responsibility for user-generated content. Users are solely liable for their actions. The project contributors are not legally affiliated with, nor accountable for, users' behaviors. Use the generative model responsibly, adhering to ethical and legal standards.
+
+ ## Contact Us
+ **Yuwei Guo**: [guoyuwei@pjlab.org.cn](mailto:guoyuwei@pjlab.org.cn)
+ **Ceyuan Yang**: [yangceyuan@pjlab.org.cn](mailto:yangceyuan@pjlab.org.cn)
+ **Bo Dai**: [daibo@pjlab.org.cn](mailto:daibo@pjlab.org.cn)
+
+ ## Acknowledgements
+ Codebase built upon [Tune-A-Video](https://github.com/showlab/Tune-A-Video).
inference.py ADDED
@@ -0,0 +1,17 @@
+ # this is the huggingface inference script for the animation pipeline
+ # it takes in a text prompt and generates an animation based on the text
+
+ import os
+ import sys
+ import json
+ import torch
+ import argparse
+ import numpy as np
+ from PIL import Image
+ from tqdm import tqdm
+
+ import torch.nn.functional as F
+ from torch.utils.data import DataLoader
+ from torchvision import transforms
+ from torchvision.utils import save_image
+
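As committed, `inference.py` contains only imports. Below is a minimal, hypothetical sketch of how such a text-to-animation entry point could look if built on the `diffusers` library's `AnimateDiffPipeline`; it is not part of this commit, and the model IDs used here (`guoyww/animatediff-motion-adapter-v1-5-2`, `runwayml/stable-diffusion-v1-5`) are assumptions rather than checkpoints referenced by this repository:

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter, DDIMScheduler
from diffusers.utils import export_to_gif

# Assumed checkpoints: a motion adapter exported for diffusers and a SD 1.5 base model.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    beta_schedule="linear",
    clip_sample=False,
    timestep_spacing="linspace",
    steps_offset=1,
)
pipe.enable_vae_slicing()  # reduce VRAM usage during decoding
pipe.to("cuda")

# Generate a 16-frame animation from a text prompt and save it as a GIF.
output = pipe(
    prompt="a panda surfing on a wave, highly detailed",
    negative_prompt="low quality, blurry",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=7.5,
    generator=torch.Generator("cpu").manual_seed(42),
)
export_to_gif(output.frames[0], "animation.gif")
```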
models/Motion_Module/test/inv_latents/ddim_latent-1.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6bb60d23d5501d2b6651bbdddf6b9366da7e360763851e2a0466defedd486a4e
+ oid sha256:1ff819827be55caa5766985bd0b2b103cd0d1e3a4699c85ac79ec8df63292b40
  size 41725
models/Motion_Module/test/mm.pth CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f02d7a609ddb3b9909d32d246da8b1c61016fef19320a9f1192fb88a29e49c34
+ oid sha256:b55d872dc4d51093464540fc2d0be3cede056409ae1f76ce002d5fd83f2f05c3
  size 1672103655
models/Motion_Module/test/samples/sample-1.gif CHANGED
models/Motion_Module/test/samples/sample-1/0.gif CHANGED