EvanTHU committed
Commit 41284ba • 1 Parent(s): a499880

Update README.md

Files changed (1)
  1. README.md +341 -9

README.md CHANGED
@@ -1,9 +1,341 @@
- ---
- license: other
- license_name: idea-1.0
- license_link: LICENSE
- tags:
- - human-motion-generation
- ---
-
- This repository contains the model proposed in [MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms](https://huggingface.co/papers/2410.18977).
+ ---
+ license: other
+ license_name: idea-1.0
+ license_link: LICENSE
+ tags:
+ - human-motion-generation
+ base_model:
+ - EvanTHU/MotionCLR
+ ---
+ # MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms
+
+ [Ling-Hao Chen](https://lhchen.top/), [Wenxun Dai](https://github.com/Dai-Wenxun), [Xuan Ju](https://juxuan27.github.io/), [Shunlin Lu](https://shunlinlu.github.io), [Lei Zhang](https://leizhang.org)
+
+ ![Teaser](https://lhchen.top/MotionCLR/assets/img/teaser.png)
+
+ ## 🤩 Abstract
+ > This research delves into analyzing the attention mechanism of diffusion models in human motion generation. Previous motion diffusion models lack explicit modeling of the word-level text-motion correspondence and explainability. Regarding these issues, we propose an attention-based motion diffusion model, namely MotionCLR, with CLeaR modeling of attention mechanisms. Based on the proposed model, we thoroughly analyze the formulation of the attention mechanism theoretically and empirically. Importantly, we highlight that the self-attention mechanism works to find the fine-grained word-sequence correspondence and activate the corresponding timesteps in the motion sequence. Besides, the cross-attention mechanism aims to measure the sequential similarity between frames and order the sequentiality of motion features. Motivated by these key insights, we propose versatile, simple yet effective motion editing methods via manipulating attention maps, such as motion (de-)emphasizing, in-place motion replacement, and example-based motion generation, *etc*. For further verification of the explainability of the attention mechanism, we additionally explore the potential of action-counting and grounded motion generation ability via attention maps.
+
+ - [x] 📌 Due to some issues with the latest Gradio 5, the MotionCLR v1-preview HuggingFace demo for motion editing will be supported next week.
+
+ ## 📢 News
+
+ + **[2024-11-14] MotionCLR v1-preview demo is released at [HuggingFace](https://huggingface.co/spaces/EvanTHU/MotionCLR).**
+ + **[2024-10-25] Project, code, and paper are released.**
+
+ ## ☕️ Preparation
+
+ <details>
+ <summary><b> Environment preparation </b></summary>
+
+ ```bash
+ conda create python=3.10 --name motionclr
+ conda activate motionclr
+ pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
+ pip install -r requirements.txt
+ ```
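+
+ As an optional sanity check (not part of the official setup), the minimal snippet below verifies that the expected PyTorch build and a CUDA device are visible:
+
+ ```python
+ # Quick environment check: confirm the installed PyTorch version and CUDA availability.
+ import torch
+
+ print("torch:", torch.__version__)          # expected: 1.12.1+cu113
+ print("cuda available:", torch.cuda.is_available())
+ if torch.cuda.is_available():
+     print("device:", torch.cuda.get_device_name(0))
+ ```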
+
+ </details>
+
+ <details>
+ <summary><b> Dependencies </b></summary>
+
+ If you have `sudo` permission, install `ffmpeg` for visualizing stick figures (if it is not already installed):
+
+ ```
+ sudo apt update
+ sudo apt install ffmpeg
+ ffmpeg -version # check!
+ ```
+
+ If you do not have `sudo` permission, install it via `conda` instead:
+
+ ```
+ conda install conda-forge::ffmpeg
+ ffmpeg -version # check!
+ ```
+
+ Run the following command to install [`git-lfs`](https://git-lfs.com/):
+ ```
+ conda install conda-forge::git-lfs
+ ```
+
+ Run the following scripts to download the dependency materials:
+
+ ```
+ bash prepare/download_glove.sh
+ bash prepare/download_t2m_evaluators.sh
+ ```
+
+ </details>
+
+ <details>
+ <summary><b> Dataset preparation </b></summary>
+
+ Please refer to [HumanML3D](https://github.com/EricGuo5513/HumanML3D) for the text-to-motion dataset setup. Copy the resulting dataset into our repository:
+ ```
+ cp -r ../HumanML3D/HumanML3D ./datasets/humanml3d
+ ```
+
+ An unofficial data preparation method can be found in this [issue](https://github.com/Dai-Wenxun/MotionLCM/issues/6).
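+
+ As an optional sanity check (a sketch that assumes the target path from the command above), you can verify that the expected HumanML3D files are in place:
+
+ ```python
+ # Check that the copied HumanML3D dataset contains the expected entries.
+ from pathlib import Path
+
+ root = Path("./datasets/humanml3d")
+ expected = ["new_joint_vecs", "new_joints", "texts", "Mean.npy", "Std.npy", "train.txt", "val.txt", "test.txt"]
+ missing = [name for name in expected if not (root / name).exists()]
+ print("dataset OK" if not missing else f"missing: {missing}")
+ ```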
+
+ </details>
+
+ <details>
+ <summary><b> Pretrained Model </b></summary>
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ ckptdir = './checkpoints/t2m/release'
+ mean_path = hf_hub_download(
+     repo_id="EvanTHU/MotionCLR",
+     filename="meta/mean.npy",
+     local_dir=ckptdir,
+     local_dir_use_symlinks=False
+ )
+
+ std_path = hf_hub_download(
+     repo_id="EvanTHU/MotionCLR",
+     filename="meta/std.npy",
+     local_dir=ckptdir,
+     local_dir_use_symlinks=False
+ )
+
+ model_path = hf_hub_download(
+     repo_id="EvanTHU/MotionCLR",
+     filename="model/latest.tar",
+     local_dir=ckptdir,
+     local_dir_use_symlinks=False
+ )
+
+ opt_path = hf_hub_download(
+     repo_id="EvanTHU/MotionCLR",
+     filename="opt.txt",
+     local_dir=ckptdir,
+     local_dir_use_symlinks=False
+ )
+ ```
+ The downloaded files will be saved in the `checkpoints/t2m/release/` directory as follows:
+ ```
+ checkpoints/
+ └── t2m
+     ├── release
+     │   ├── meta
+     │   │   ├── mean.npy
+     │   │   └── std.npy
+     │   ├── model
+     │   │   └── latest.tar
+     │   └── opt.txt
+ ```
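+
+ Alternatively (a sketch, not the official download instructions), the same files can be fetched in a single call with `snapshot_download`, which preserves the repository layout under the local directory:
+
+ ```python
+ # Download the required checkpoint files in one call, keeping the repo structure.
+ from huggingface_hub import snapshot_download
+
+ snapshot_download(
+     repo_id="EvanTHU/MotionCLR",
+     local_dir="./checkpoints/t2m/release",
+     allow_patterns=["meta/*", "model/*", "opt.txt"],
+ )
+ ```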
+ </details>
+
+ <details>
+ <summary><b> Folder Structure </b></summary>
+
+ After completing the whole setup pipeline, the folder structure will look like:
+
+ ```
+ MotionCLR
+ └── data
+     ├── glove
+     │   ├── our_vab_data.npy
+     │   ├── our_vab_idx.pkl
+     │   └── out_vab_words.pkl
+     ├── pretrained_models
+     │   └── t2m
+     │       ├── text_mot_match
+     │       │   └── model
+     │       │       └── finest.tar
+     │       └── length_est_bigru
+     │           └── model
+     │               └── finest.tar
+     ├── HumanML3D
+     │   ├── new_joint_vecs
+     │   │   └── ...
+     │   ├── new_joints
+     │   │   └── ...
+     │   ├── texts
+     │   │   └── ...
+     │   ├── Mean.npy
+     │   ├── Std.npy
+     │   ├── test.txt
+     │   ├── train_val.txt
+     │   ├── train.txt
+     │   └── val.txt
+     ├── t2m_mean.npy
+     └── t2m_std.npy
+ ```
+
+ </details>
+
+ ## 👨‍🏫 Quick Start
+
+ ### Training
+
+ ```bash
+ bash train.sh
+ ```
+
+ ### Testing for Evaluation
+
+ ```bash
+ bash test.sh
+ ```
+
+ ### Generate Results from Text
+
+ Please replace `$EXP_DIR` with the experiment directory name.
+
+ + Generate motions from a set of text prompts (`./assets/prompts-replace.txt`), where each line is a prompt. (Results will be saved in `./checkpoints/t2m/$EXP_DIR/samples_*/`.)
+
+ ```bash
+ python -m scripts.generate --input_text ./assets/prompts-replace.txt \
+     --motion_length 8 \
+     --self_attention \
+     --no_eff \
+     --edit_mode \
+     --opt_path ./checkpoints/t2m/$EXP_DIR/opt.txt
+ ```
+ <details>
+ <summary><b> Explanation of the arguments </b></summary>
+
+ - `--input_text`: the path to the text file containing prompts.
+
+ - `--motion_length`: the length (in seconds) of the generated motion.
+
+ - `--self_attention`: use the self-attention mechanism.
+
+ - `--no_eff`: do not use efficient attention.
+
+ - `--edit_mode`: enable editing mode.
+
+ - `--opt_path`: the path to the `opt.txt` file of the trained model.
+
+ </details>
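+
+ For reference, the prompt file is plain text with one description per line. The sketch below writes such a file (the file name and the second prompt are only illustrative):
+
+ ```python
+ # Write an illustrative prompt file: one text description per line.
+ prompts = [
+     "a man jumps.",
+     "a person walks forward and then turns around.",
+ ]
+ with open("./assets/my_prompts.txt", "w") as f:
+     f.write("\n".join(prompts) + "\n")
+ ```
+
+ Pass the resulting file to `--input_text` as in the command above.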
+
+ + Generate a motion from a single prompt. (Results will be saved in `./checkpoints/t2m/$EXP_DIR/samples_*/`.)
+
+ ```bash
+ python -m scripts.generate --text_prompt "a man jumps." --motion_length 8 --self_attention --no_eff --opt_path ./checkpoints/t2m/$EXP_DIR/opt.txt
+ ```
+
+ <details>
+ <summary><b> Explanation of the arguments </b></summary>
+
+ - `--text_prompt`: the text prompt.
+
+ - `--motion_length`: the length (in seconds) of the generated motion.
+
+ - `--self_attention`: use the self-attention mechanism.
+
+ - `--no_eff`: do not use efficient attention.
+
+ - `--opt_path`: the path to the `opt.txt` file of the trained model.
+
+ - `--vis_attn`: visualize attention maps. (They are saved in `./checkpoints/t2m/$EXP_DIR/vis_attn/`.)
+ </details>
+
+ <details>
+ <summary><b> Other arguments </b></summary>
+
+ - `--vis_attn`: visualize attention maps.
+ </details>
+
+ ## 🔧 Downstream Editing Applications
+
+ <details>
+ <summary><b>Deploy the demo locally </b></summary>
+
+ Our demo is built on the latest Gradio 5, which provides a user-friendly interface for motion editing. The demo is available on [HuggingFace](https://huggingface.co/spaces/EvanTHU/MotionCLR). If you want to run the demo locally, please follow the instructions below:
+
+ ```bash
+ pip install gradio --upgrade
+ ```
+
+ Launch the demo:
+ ```bash
+ python app.py
+ ```
+ </details>
+
+ <details>
+ <summary><b>Interaction with commands</b></summary>
+
+ You can also generate or edit motions via the command line. The command is the same as the generation command:
+
+ ```bash
+ python -m scripts.generate --input_text ./assets/prompts-replace.txt \
+     --motion_length 8 \
+     --self_attention \
+     --no_eff \
+     --edit_mode \
+     --opt_path ./checkpoints/t2m/$EXP_DIR/opt.txt
+ ```
+
+ In addition, you need to edit the configuration in `./options/edit.yaml` to specify the editing mode. A detailed explanation of each option can be found in the comments of the configuration file.
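+
+ To quickly inspect the active editing configuration (a small helper sketch, assuming PyYAML is installed), you can print the parsed file:
+
+ ```python
+ # Print the current editing configuration (keys are documented in the file's comments).
+ import yaml
+
+ with open("./options/edit.yaml") as f:
+     cfg = yaml.safe_load(f)
+ for key, value in (cfg or {}).items():
+     print(f"{key}: {value}")
+ ```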
+ </details>
+
+ ## 🌹 Acknowledgement
+
+ The author team would like to acknowledge [Dr. Jingbo Wang](https://wangjingbo1219.github.io/) from Shanghai AI Laboratory and [Dr. Xingyu Chen](https://seanchenxy.github.io/) from Peking University for their constructive suggestions and discussions on downstream applications. We would also like to acknowledge [Mr. Hongyang Li](https://lhy-hongyangli.github.io/) and [Mr. Zhenhua Yang](https://yeungchenwa.github.io/) from SCUT for their detailed discussions of some technical details and the writing. [Mr. Bohong Chen](https://github.com/RobinWitch) from ZJU also provided us with insightful feedback on the evaluation and the presentation. We convey our thanks to all of them.
+
+ We would like to thank the authors of the following repositories for their excellent work:
+ [HumanML3D](https://github.com/EricGuo5513/HumanML3D),
+ [UniMoCap](https://github.com/LinghaoChan/UniMoCap),
+ [joints2smpl](https://github.com/wangsen1312/joints2smpl),
+ [HumanTOMATO](https://github.com/IDEA-Research/HumanTOMATO),
+ [MotionLCM](https://github.com/Dai-Wenxun/MotionLCM),
+ [StableMoFusion](https://github.com/h-y1heng/StableMoFusion).
+
+ ## 📜 Citation
+
+ If you find this work useful, please consider citing our paper:
+
+ ```bibtex
+ @article{motionclr,
+   title={MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms},
+   author={Chen, Ling-Hao and Dai, Wenxun and Ju, Xuan and Lu, Shunlin and Zhang, Lei},
+   journal={arXiv preprint arXiv:2410.18977},
+   year={2024}
+ }
+ ```
+
+ ## 📚 License
+
+ This code is distributed under the [IDEA LICENSE](LICENSE), which does not allow commercial usage. Note that our code depends on other libraries and datasets, each of which has its own license that must also be followed.