kadirnar committed on
Commit
e7d5680
1 Parent(s): 325b29b

Upload 98 files

This view is limited to 50 files because it contains too many changes. See the raw diff for the complete list of changes.
Files changed (50)
  1. CONTRIBUTING.md +91 -0
  2. LICENSE +681 -0
  3. Open-Sora/.gitattributes +35 -0
  4. Open-Sora/README.md +13 -0
  5. Open-Sora/requirements.txt +12 -0
  6. configs/dit/inference/16x256x256.py +31 -0
  7. configs/dit/inference/1x256x256-class.py +31 -0
  8. configs/dit/inference/1x256x256.py +32 -0
  9. configs/dit/train/16x256x256.py +50 -0
  10. configs/dit/train/1x256x256.py +50 -0
  11. configs/latte/inference/16x256x256-class.py +30 -0
  12. configs/latte/inference/16x256x256.py +31 -0
  13. configs/latte/train/16x256x256.py +49 -0
  14. configs/opensora/inference/16x256x256.py +34 -0
  15. configs/opensora/inference/16x512x512.py +35 -0
  16. configs/opensora/inference/64x512x512.py +35 -0
  17. configs/opensora/train/16x256x256.py +53 -0
  18. configs/opensora/train/16x512x512.py +54 -0
  19. configs/opensora/train/360x512x512.py +55 -0
  20. configs/opensora/train/64x512x512-sp.py +54 -0
  21. configs/opensora/train/64x512x512.py +54 -0
  22. configs/pixart/inference/16x256x256.py +32 -0
  23. configs/pixart/inference/1x1024MS.py +34 -0
  24. configs/pixart/inference/1x256x256.py +33 -0
  25. configs/pixart/inference/1x512x512.py +33 -0
  26. configs/pixart/train/16x256x256.py +53 -0
  27. configs/pixart/train/1x512x512.py +54 -0
  28. configs/pixart/train/64x512x512.py +54 -0
  29. docs/README_zh.md +206 -0
  30. docs/acceleration.md +57 -0
  31. docs/commands.md +91 -0
  32. docs/datasets.md +28 -0
  33. docs/report_v1.md +47 -0
  34. docs/structure.md +178 -0
  35. opensora/__init__.py +4 -0
  36. opensora/acceleration/__init__.py +0 -0
  37. opensora/acceleration/checkpoint.py +24 -0
  38. opensora/acceleration/communications.py +188 -0
  39. opensora/acceleration/parallel_states.py +19 -0
  40. opensora/acceleration/plugin.py +100 -0
  41. opensora/acceleration/shardformer/modeling/__init__.py +0 -0
  42. opensora/acceleration/shardformer/modeling/t5.py +39 -0
  43. opensora/acceleration/shardformer/policy/__init__.py +0 -0
  44. opensora/acceleration/shardformer/policy/t5_encoder.py +67 -0
  45. opensora/datasets/__init__.py +2 -0
  46. opensora/datasets/datasets.py +114 -0
  47. opensora/datasets/utils.py +135 -0
  48. opensora/datasets/video_transforms.py +501 -0
  49. opensora/models/__init__.py +6 -0
  50. opensora/models/dit/__init__.py +1 -0
CONTRIBUTING.md ADDED
@@ -0,0 +1,91 @@
1
+ # Contributing
2
+
3
+ The Open-Sora project welcomes constructive contributions from the community, and the team is more than willing to work with you on problems you encounter to make it a better project.
4
+
5
+ ## Development Environment Setup
6
+
7
+ To contribute to Open-Sora, we first guide you through setting up a proper development environment. Install this library from source with the `editable` flag (`-e`, for development mode) so that your changes to the source code are reflected at runtime without re-installation.
8
+
9
+ You can refer to the [Installation Section](./README.md#installation) and replace `pip install -v .` with `pip install -v -e .`.
10
+
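For example, the editable install described above looks like this when run from the repository root (a minimal sketch; the clone step assumes you follow the fork/clone instructions later in this guide):

```shell
# install Open-Sora in editable (development) mode from the repository root
cd Open-Sora
pip install -v -e .
```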
11
+
12
+ ### Code Style
13
+
14
+ We run static checks when you commit your code changes, so please make sure all checks pass and the coding style meets our requirements. We use a pre-commit hook to keep the code aligned with our writing standard. To set up code style checking, follow the steps below.
15
+
16
+ ```shell
17
+ # these commands are executed under the Open-Sora directory
18
+ pip install pre-commit
19
+ pre-commit install
20
+ ```
21
+
22
+ Code format checking will be automatically executed when you commit your changes.
23
+
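If you want to run the same checks by hand before committing, standard pre-commit usage (not specific to Open-Sora) allows:

```shell
# run all configured hooks against the whole repository
pre-commit run --all-files
```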
24
+
25
+ ## Contribution Guide
26
+
27
+ Follow the steps below to contribute to the main repository via a pull request. You can learn more about pull requests [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests).
28
+
29
+ ### 1. Fork the Official Repository
30
+
31
+ First, visit the [Open-Sora repository](https://github.com/hpcaitech/Open-Sora) and fork it into your own account. The `Fork` button is at the top-right corner of the web page, alongside buttons such as `Watch` and `Star`.
32
+
33
+ Now, you can clone your own forked repository into your local environment.
34
+
35
+ ```shell
36
+ git clone https://github.com/<YOUR-USERNAME>/Open-Sora.git
37
+ ```
38
+
39
+ ### 2. Configure Git
40
+
41
+ Set the official repository as your upstream so that you can synchronize with its latest updates. You can learn about upstreams [here](https://www.atlassian.com/git/tutorials/git-forks-and-upstreams).
42
+
43
+ Then add the original repository as the upstream remote:
44
+
45
+ ```shell
46
+ cd Open-Sora
47
+ git remote add upstream https://github.com/hpcaitech/Open-Sora.git
48
+ ```
49
+
50
+ You can use the following command to verify that the remote is set. You should see both `origin` and `upstream` in the output.
51
+
52
+ ```shell
53
+ git remote -v
54
+ ```
55
+
56
+ ### 3. Synchronize with Official Repository
57
+
58
+ Before you make changes to the codebase, it is always good to fetch the latest updates from the official repository. To do so, you can use the commands below.
59
+
60
+ ```shell
61
+ git fetch upstream
62
+ git checkout main
63
+ git merge upstream/main
64
+ git push origin main
65
+ ```
66
+
67
+ ### 4. Create a New Branch
68
+
69
+ You should not make changes to the `main` branch of your forked repository, as this can make upstream synchronization difficult. Instead, create a new branch with an appropriate name. Branch names should generally start with `hotfix/` or `feature/`: `hotfix` is for bug fixes and `feature` is for adding new features.
70
+
71
+
72
+ ```shell
73
+ git checkout -b <NEW-BRANCH-NAME>
74
+ ```
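For instance, a hypothetical documentation improvement could live on a branch named like this:

```shell
# example only; pick a name that describes your change
git checkout -b feature/improve-contributing-docs
```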
75
+
76
+ ### 5. Implementation and Code Commit
77
+
78
+ Now you can implement your change in the source code. Remember that you installed the library in development mode, so you do not need to reinstall it for the change to take effect; it will be reflected in every new Python execution.
79
+ Commit the changes and push them to your forked repository. Changes should be kept logical, modular, and atomic.
80
+
81
+ ```shell
82
+ git add -A
83
+ git commit -m "<COMMIT-MESSAGE>"
84
+ git push -u origin <NEW-BRANCH-NAME>
85
+ ```
86
+
87
+ ### 6. Open a Pull Request
88
+
89
+ You can now create a pull request on the GitHub webpage of your repository. The source branch is `<NEW-BRANCH-NAME>` of your repository and the target branch should be `main` of `hpcaitech/Open-Sora`. After creating this pull request, you should be able to see it [here](https://github.com/hpcaitech/Open-Sora/pulls).
90
+
91
+ The Open-Sora team will review your code change and merge your code if applicable.
LICENSE ADDED
@@ -0,0 +1,681 @@
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship, whether in Source or
36
+ Object form, made available under the License, as indicated by a
37
+ copyright notice that is included in or attached to the work
38
+ (an example is provided in the Appendix below).
39
+
40
+ "Derivative Works" shall mean any work, whether in Source or Object
41
+ form, that is based on (or derived from) the Work and for which the
42
+ editorial revisions, annotations, elaborations, or other modifications
43
+ represent, as a whole, an original work of authorship. For the purposes
44
+ of this License, Derivative Works shall not include works that remain
45
+ separable from, or merely link (or bind by name) to the interfaces of,
46
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
202
+
203
+ =========================================================================
204
+ This project is inspired by the listed projects and is subject to the following licenses:
205
+
206
+ 1. Latte (https://github.com/Vchitect/Latte/blob/main/LICENSE)
207
+
208
+ Copyright 2024 Latte
209
+
210
+ Licensed under the Apache License, Version 2.0 (the "License");
211
+ you may not use this file except in compliance with the License.
212
+ You may obtain a copy of the License at
213
+
214
+ http://www.apache.org/licenses/LICENSE-2.0
215
+
216
+ Unless required by applicable law or agreed to in writing, software
217
+ distributed under the License is distributed on an "AS IS" BASIS,
218
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
219
+ See the License for the specific language governing permissions and
220
+ limitations under the License.
221
+
222
+ 2. PixArt-alpha (https://github.com/PixArt-alpha/PixArt-alpha/blob/master/LICENSE)
223
+
224
+ Copyright (C) 2024 PixArt-alpha/PixArt-alpha
225
+
226
+ This program is free software: you can redistribute it and/or modify
227
+ it under the terms of the GNU Affero General Public License as published
228
+ by the Free Software Foundation, either version 3 of the License, or
229
+ (at your option) any later version.
230
+
231
+ This program is distributed in the hope that it will be useful,
232
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
233
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
234
+ GNU Affero General Public License for more details.
235
+
236
+ You should have received a copy of the GNU Affero General Public License
237
+ along with this program. If not, see <https://www.gnu.org/licenses/>.
238
+
239
+ 3. dpm-solver (https://github.com/LuChengTHU/dpm-solver/blob/main/LICENSE)
240
+
241
+ MIT License
242
+
243
+ Copyright (c) 2022 Cheng Lu
244
+
245
+ Permission is hereby granted, free of charge, to any person obtaining a copy
246
+ of this software and associated documentation files (the "Software"), to deal
247
+ in the Software without restriction, including without limitation the rights
248
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
249
+ copies of the Software, and to permit persons to whom the Software is
250
+ furnished to do so, subject to the following conditions:
251
+
252
+ The above copyright notice and this permission notice shall be included in all
253
+ copies or substantial portions of the Software.
254
+
255
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
256
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
257
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
258
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
259
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
260
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
261
+ SOFTWARE.
262
+
263
+ 4. DiT (https://github.com/facebookresearch/DiT/blob/main/LICENSE.txt)
264
+
265
+ Attribution-NonCommercial 4.0 International
266
+
267
+ =======================================================================
268
+
269
+ Creative Commons Corporation ("Creative Commons") is not a law firm and
270
+ does not provide legal services or legal advice. Distribution of
271
+ Creative Commons public licenses does not create a lawyer-client or
272
+ other relationship. Creative Commons makes its licenses and related
273
+ information available on an "as-is" basis. Creative Commons gives no
274
+ warranties regarding its licenses, any material licensed under their
275
+ terms and conditions, or any related information. Creative Commons
276
+ disclaims all liability for damages resulting from their use to the
277
+ fullest extent possible.
278
+
279
+ Using Creative Commons Public Licenses
280
+
281
+ Creative Commons public licenses provide a standard set of terms and
282
+ conditions that creators and other rights holders may use to share
283
+ original works of authorship and other material subject to copyright
284
+ and certain other rights specified in the public license below. The
285
+ following considerations are for informational purposes only, are not
286
+ exhaustive, and do not form part of our licenses.
287
+
288
+ Considerations for licensors: Our public licenses are
289
+ intended for use by those authorized to give the public
290
+ permission to use material in ways otherwise restricted by
291
+ copyright and certain other rights. Our licenses are
292
+ irrevocable. Licensors should read and understand the terms
293
+ and conditions of the license they choose before applying it.
294
+ Licensors should also secure all rights necessary before
295
+ applying our licenses so that the public can reuse the
296
+ material as expected. Licensors should clearly mark any
297
+ material not subject to the license. This includes other CC-
298
+ licensed material, or material used under an exception or
299
+ limitation to copyright. More considerations for licensors:
300
+ wiki.creativecommons.org/Considerations_for_licensors
301
+
302
+ Considerations for the public: By using one of our public
303
+ licenses, a licensor grants the public permission to use the
304
+ licensed material under specified terms and conditions. If
305
+ the licensor's permission is not necessary for any reason--for
306
+ example, because of any applicable exception or limitation to
307
+ copyright--then that use is not regulated by the license. Our
308
+ licenses grant only permissions under copyright and certain
309
+ other rights that a licensor has authority to grant. Use of
310
+ the licensed material may still be restricted for other
311
+ reasons, including because others have copyright or other
312
+ rights in the material. A licensor may make special requests,
313
+ such as asking that all changes be marked or described.
314
+ Although not required by our licenses, you are encouraged to
315
+ respect those requests where reasonable. More_considerations
316
+ for the public:
317
+ wiki.creativecommons.org/Considerations_for_licensees
318
+
319
+ =======================================================================
320
+
321
+ Creative Commons Attribution-NonCommercial 4.0 International Public
322
+ License
323
+
324
+ By exercising the Licensed Rights (defined below), You accept and agree
325
+ to be bound by the terms and conditions of this Creative Commons
326
+ Attribution-NonCommercial 4.0 International Public License ("Public
327
+ License"). To the extent this Public License may be interpreted as a
328
+ contract, You are granted the Licensed Rights in consideration of Your
329
+ acceptance of these terms and conditions, and the Licensor grants You
330
+ such rights in consideration of benefits the Licensor receives from
331
+ making the Licensed Material available under these terms and
332
+ conditions.
333
+
334
+ Section 1 -- Definitions.
335
+
336
+ a. Adapted Material means material subject to Copyright and Similar
337
+ Rights that is derived from or based upon the Licensed Material
338
+ and in which the Licensed Material is translated, altered,
339
+ arranged, transformed, or otherwise modified in a manner requiring
340
+ permission under the Copyright and Similar Rights held by the
341
+ Licensor. For purposes of this Public License, where the Licensed
342
+ Material is a musical work, performance, or sound recording,
343
+ Adapted Material is always produced where the Licensed Material is
344
+ synched in timed relation with a moving image.
345
+
346
+ b. Adapter's License means the license You apply to Your Copyright
347
+ and Similar Rights in Your contributions to Adapted Material in
348
+ accordance with the terms and conditions of this Public License.
349
+
350
+ c. Copyright and Similar Rights means copyright and/or similar rights
351
+ closely related to copyright including, without limitation,
352
+ performance, broadcast, sound recording, and Sui Generis Database
353
+ Rights, without regard to how the rights are labeled or
354
+ categorized. For purposes of this Public License, the rights
355
+ specified in Section 2(b)(1)-(2) are not Copyright and Similar
356
+ Rights.
357
+ d. Effective Technological Measures means those measures that, in the
358
+ absence of proper authority, may not be circumvented under laws
359
+ fulfilling obligations under Article 11 of the WIPO Copyright
360
+ Treaty adopted on December 20, 1996, and/or similar international
361
+ agreements.
362
+
363
+ e. Exceptions and Limitations means fair use, fair dealing, and/or
364
+ any other exception or limitation to Copyright and Similar Rights
365
+ that applies to Your use of the Licensed Material.
366
+
367
+ f. Licensed Material means the artistic or literary work, database,
368
+ or other material to which the Licensor applied this Public
369
+ License.
370
+
371
+ g. Licensed Rights means the rights granted to You subject to the
372
+ terms and conditions of this Public License, which are limited to
373
+ all Copyright and Similar Rights that apply to Your use of the
374
+ Licensed Material and that the Licensor has authority to license.
375
+
376
+ h. Licensor means the individual(s) or entity(ies) granting rights
377
+ under this Public License.
378
+
379
+ i. NonCommercial means not primarily intended for or directed towards
380
+ commercial advantage or monetary compensation. For purposes of
381
+ this Public License, the exchange of the Licensed Material for
382
+ other material subject to Copyright and Similar Rights by digital
383
+ file-sharing or similar means is NonCommercial provided there is
384
+ no payment of monetary compensation in connection with the
385
+ exchange.
386
+
387
+ j. Share means to provide material to the public by any means or
388
+ process that requires permission under the Licensed Rights, such
389
+ as reproduction, public display, public performance, distribution,
390
+ dissemination, communication, or importation, and to make material
391
+ available to the public including in ways that members of the
392
+ public may access the material from a place and at a time
393
+ individually chosen by them.
394
+
395
+ k. Sui Generis Database Rights means rights other than copyright
396
+ resulting from Directive 96/9/EC of the European Parliament and of
397
+ the Council of 11 March 1996 on the legal protection of databases,
398
+ as amended and/or succeeded, as well as other essentially
399
+ equivalent rights anywhere in the world.
400
+
401
+ l. You means the individual or entity exercising the Licensed Rights
402
+ under this Public License. Your has a corresponding meaning.
403
+
404
+ Section 2 -- Scope.
405
+
406
+ a. License grant.
407
+
408
+ 1. Subject to the terms and conditions of this Public License,
409
+ the Licensor hereby grants You a worldwide, royalty-free,
410
+ non-sublicensable, non-exclusive, irrevocable license to
411
+ exercise the Licensed Rights in the Licensed Material to:
412
+
413
+ a. reproduce and Share the Licensed Material, in whole or
414
+ in part, for NonCommercial purposes only; and
415
+
416
+ b. produce, reproduce, and Share Adapted Material for
417
+ NonCommercial purposes only.
418
+
419
+ 2. Exceptions and Limitations. For the avoidance of doubt, where
420
+ Exceptions and Limitations apply to Your use, this Public
421
+ License does not apply, and You do not need to comply with
422
+ its terms and conditions.
423
+
424
+ 3. Term. The term of this Public License is specified in Section
425
+ 6(a).
426
+
427
+ 4. Media and formats; technical modifications allowed. The
428
+ Licensor authorizes You to exercise the Licensed Rights in
429
+ all media and formats whether now known or hereafter created,
430
+ and to make technical modifications necessary to do so. The
431
+ Licensor waives and/or agrees not to assert any right or
432
+ authority to forbid You from making technical modifications
433
+ necessary to exercise the Licensed Rights, including
434
+ technical modifications necessary to circumvent Effective
435
+ Technological Measures. For purposes of this Public License,
436
+ simply making modifications authorized by this Section 2(a)
437
+ (4) never produces Adapted Material.
438
+
439
+ 5. Downstream recipients.
440
+
441
+ a. Offer from the Licensor -- Licensed Material. Every
442
+ recipient of the Licensed Material automatically
443
+ receives an offer from the Licensor to exercise the
444
+ Licensed Rights under the terms and conditions of this
445
+ Public License.
446
+
447
+ b. No downstream restrictions. You may not offer or impose
448
+ any additional or different terms or conditions on, or
449
+ apply any Effective Technological Measures to, the
450
+ Licensed Material if doing so restricts exercise of the
451
+ Licensed Rights by any recipient of the Licensed
452
+ Material.
453
+
454
+ 6. No endorsement. Nothing in this Public License constitutes or
455
+ may be construed as permission to assert or imply that You
456
+ are, or that Your use of the Licensed Material is, connected
457
+ with, or sponsored, endorsed, or granted official status by,
458
+ the Licensor or others designated to receive attribution as
459
+ provided in Section 3(a)(1)(A)(i).
460
+
461
+ b. Other rights.
462
+
463
+ 1. Moral rights, such as the right of integrity, are not
464
+ licensed under this Public License, nor are publicity,
465
+ privacy, and/or other similar personality rights; however, to
466
+ the extent possible, the Licensor waives and/or agrees not to
467
+ assert any such rights held by the Licensor to the limited
468
+ extent necessary to allow You to exercise the Licensed
469
+ Rights, but not otherwise.
470
+
471
+ 2. Patent and trademark rights are not licensed under this
472
+ Public License.
473
+
474
+ 3. To the extent possible, the Licensor waives any right to
475
+ collect royalties from You for the exercise of the Licensed
476
+ Rights, whether directly or through a collecting society
477
+ under any voluntary or waivable statutory or compulsory
478
+ licensing scheme. In all other cases the Licensor expressly
479
+ reserves any right to collect such royalties, including when
480
+ the Licensed Material is used other than for NonCommercial
481
+ purposes.
482
+
483
+ Section 3 -- License Conditions.
484
+
485
+ Your exercise of the Licensed Rights is expressly made subject to the
486
+ following conditions.
487
+
488
+ a. Attribution.
489
+
490
+ 1. If You Share the Licensed Material (including in modified
491
+ form), You must:
492
+
493
+ a. retain the following if it is supplied by the Licensor
494
+ with the Licensed Material:
495
+
496
+ i. identification of the creator(s) of the Licensed
497
+ Material and any others designated to receive
498
+ attribution, in any reasonable manner requested by
499
+ the Licensor (including by pseudonym if
500
+ designated);
501
+
502
+ ii. a copyright notice;
503
+
504
+ iii. a notice that refers to this Public License;
505
+
506
+ iv. a notice that refers to the disclaimer of
507
+ warranties;
508
+
509
+ v. a URI or hyperlink to the Licensed Material to the
510
+ extent reasonably practicable;
511
+
512
+ b. indicate if You modified the Licensed Material and
513
+ retain an indication of any previous modifications; and
514
+
515
+ c. indicate the Licensed Material is licensed under this
516
+ Public License, and include the text of, or the URI or
517
+ hyperlink to, this Public License.
518
+
519
+ 2. You may satisfy the conditions in Section 3(a)(1) in any
520
+ reasonable manner based on the medium, means, and context in
521
+ which You Share the Licensed Material. For example, it may be
522
+ reasonable to satisfy the conditions by providing a URI or
523
+ hyperlink to a resource that includes the required
524
+ information.
525
+
526
+ 3. If requested by the Licensor, You must remove any of the
527
+ information required by Section 3(a)(1)(A) to the extent
528
+ reasonably practicable.
529
+
530
+ 4. If You Share Adapted Material You produce, the Adapter's
531
+ License You apply must not prevent recipients of the Adapted
532
+ Material from complying with this Public License.
533
+
534
+ Section 4 -- Sui Generis Database Rights.
535
+
536
+ Where the Licensed Rights include Sui Generis Database Rights that
537
+ apply to Your use of the Licensed Material:
538
+
539
+ a. for the avoidance of doubt, Section 2(a)(1) grants You the right
540
+ to extract, reuse, reproduce, and Share all or a substantial
541
+ portion of the contents of the database for NonCommercial purposes
542
+ only;
543
+
544
+ b. if You include all or a substantial portion of the database
545
+ contents in a database in which You have Sui Generis Database
546
+ Rights, then the database in which You have Sui Generis Database
547
+ Rights (but not its individual contents) is Adapted Material; and
548
+
549
+ c. You must comply with the conditions in Section 3(a) if You Share
550
+ all or a substantial portion of the contents of the database.
551
+
552
+ For the avoidance of doubt, this Section 4 supplements and does not
553
+ replace Your obligations under this Public License where the Licensed
554
+ Rights include other Copyright and Similar Rights.
555
+
556
+ Section 5 -- Disclaimer of Warranties and Limitation of Liability.
557
+
558
+ a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
559
+ EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
560
+ AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
561
+ ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
562
+ IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
563
+ WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
564
+ PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
565
+ ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
566
+ KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
567
+ ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
568
+
569
+ b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
570
+ TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
571
+ NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
572
+ INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
573
+ COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
574
+ USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
575
+ ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
576
+ DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
577
+ IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
578
+
579
+ c. The disclaimer of warranties and limitation of liability provided
580
+ above shall be interpreted in a manner that, to the extent
581
+ possible, most closely approximates an absolute disclaimer and
582
+ waiver of all liability.
583
+
584
+ Section 6 -- Term and Termination.
585
+
586
+ a. This Public License applies for the term of the Copyright and
587
+ Similar Rights licensed here. However, if You fail to comply with
588
+ this Public License, then Your rights under this Public License
589
+ terminate automatically.
590
+
591
+ b. Where Your right to use the Licensed Material has terminated under
592
+ Section 6(a), it reinstates:
593
+
594
+ 1. automatically as of the date the violation is cured, provided
595
+ it is cured within 30 days of Your discovery of the
596
+ violation; or
597
+
598
+ 2. upon express reinstatement by the Licensor.
599
+
600
+ For the avoidance of doubt, this Section 6(b) does not affect any
601
+ right the Licensor may have to seek remedies for Your violations
602
+ of this Public License.
603
+
604
+ c. For the avoidance of doubt, the Licensor may also offer the
605
+ Licensed Material under separate terms or conditions or stop
606
+ distributing the Licensed Material at any time; however, doing so
607
+ will not terminate this Public License.
608
+
609
+ d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
610
+ License.
611
+
612
+ Section 7 -- Other Terms and Conditions.
613
+
614
+ a. The Licensor shall not be bound by any additional or different
615
+ terms or conditions communicated by You unless expressly agreed.
616
+
617
+ b. Any arrangements, understandings, or agreements regarding the
618
+ Licensed Material not stated herein are separate from and
619
+ independent of the terms and conditions of this Public License.
620
+
621
+ Section 8 -- Interpretation.
622
+
623
+ a. For the avoidance of doubt, this Public License does not, and
624
+ shall not be interpreted to, reduce, limit, restrict, or impose
625
+ conditions on any use of the Licensed Material that could lawfully
626
+ be made without permission under this Public License.
627
+
628
+ b. To the extent possible, if any provision of this Public License is
629
+ deemed unenforceable, it shall be automatically reformed to the
630
+ minimum extent necessary to make it enforceable. If the provision
631
+ cannot be reformed, it shall be severed from this Public License
632
+ without affecting the enforceability of the remaining terms and
633
+ conditions.
634
+
635
+ c. No term or condition of this Public License will be waived and no
636
+ failure to comply consented to unless expressly agreed to by the
637
+ Licensor.
638
+
639
+ d. Nothing in this Public License constitutes or may be interpreted
640
+ as a limitation upon, or waiver of, any privileges and immunities
641
+ that apply to the Licensor or You, including from the legal
642
+ processes of any jurisdiction or authority.
643
+
644
+ =======================================================================
645
+
646
+ Creative Commons is not a party to its public
647
+ licenses. Notwithstanding, Creative Commons may elect to apply one of
648
+ its public licenses to material it publishes and in those instances
649
+ will be considered the “Licensor.” The text of the Creative Commons
650
+ public licenses is dedicated to the public domain under the CC0 Public
651
+ Domain Dedication. Except for the limited purpose of indicating that
652
+ material is shared under a Creative Commons public license or as
653
+ otherwise permitted by the Creative Commons policies published at
654
+ creativecommons.org/policies, Creative Commons does not authorize the
655
+ use of the trademark "Creative Commons" or any other trademark or logo
656
+ of Creative Commons without its prior written consent including,
657
+ without limitation, in connection with any unauthorized modifications
658
+ to any of its public licenses or any other arrangements,
659
+ understandings, or agreements concerning use of licensed material. For
660
+ the avoidance of doubt, this paragraph does not form part of the
661
+ public licenses.
662
+
663
+ Creative Commons may be contacted at creativecommons.org.
664
+
665
+ 5. OpenDiT (https://github.com/NUS-HPC-AI-Lab/OpenDiT/blob/master/LICENSE)
666
+
667
+ Copyright OpenDiT
668
+
669
+ Licensed under the Apache License, Version 2.0 (the "License");
670
+ you may not use this file except in compliance with the License.
671
+ You may obtain a copy of the License at
672
+
673
+ http://www.apache.org/licenses/LICENSE-2.0
674
+
675
+ Unless required by applicable law or agreed to in writing, software
676
+ distributed under the License is distributed on an "AS IS" BASIS,
677
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
678
+ See the License for the specific language governing permissions and
679
+ limitations under the License.
680
+
681
+
Open-Sora/.gitattributes ADDED
@@ -0,0 +1,35 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
5
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
12
+ *.model filter=lfs diff=lfs merge=lfs -text
13
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
14
+ *.npy filter=lfs diff=lfs merge=lfs -text
15
+ *.npz filter=lfs diff=lfs merge=lfs -text
16
+ *.onnx filter=lfs diff=lfs merge=lfs -text
17
+ *.ot filter=lfs diff=lfs merge=lfs -text
18
+ *.parquet filter=lfs diff=lfs merge=lfs -text
19
+ *.pb filter=lfs diff=lfs merge=lfs -text
20
+ *.pickle filter=lfs diff=lfs merge=lfs -text
21
+ *.pkl filter=lfs diff=lfs merge=lfs -text
22
+ *.pt filter=lfs diff=lfs merge=lfs -text
23
+ *.pth filter=lfs diff=lfs merge=lfs -text
24
+ *.rar filter=lfs diff=lfs merge=lfs -text
25
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
26
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
27
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
28
+ *.tar filter=lfs diff=lfs merge=lfs -text
29
+ *.tflite filter=lfs diff=lfs merge=lfs -text
30
+ *.tgz filter=lfs diff=lfs merge=lfs -text
31
+ *.wasm filter=lfs diff=lfs merge=lfs -text
32
+ *.xz filter=lfs diff=lfs merge=lfs -text
33
+ *.zip filter=lfs diff=lfs merge=lfs -text
34
+ *.zst filter=lfs diff=lfs merge=lfs -text
35
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
Open-Sora/README.md ADDED
@@ -0,0 +1,13 @@
1
+ ---
2
+ title: Open Sora
3
+ emoji: 📚
4
+ colorFrom: yellow
5
+ colorTo: indigo
6
+ sdk: gradio
7
+ sdk_version: 4.21.0
8
+ app_file: app.py
9
+ pinned: false
10
+ license: apache-2.0
11
+ ---
12
+
13
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
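The front matter above configures a Gradio Space; assuming `app.py` and the pinned SDK version from this metadata, a rough sketch of running the demo locally is:

```shell
# matches sdk_version in the front matter; Gradio serves on port 7860 by default
pip install gradio==4.21.0
python app.py
```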
Open-Sora/requirements.txt ADDED
@@ -0,0 +1,12 @@
1
+ torch
2
+ torchvision
3
+
4
+ packaging
5
+ ninja
6
+ flash-attn --no-build-isolation
7
+
8
+ -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" git+https://github.com/NVIDIA/apex.git
9
+
10
+ -U xformers --index-url https://download.pytorch.org/whl/cu121
11
+
12
+ -e git+https://github.com/hpcaitech/Open-Sora.git#egg=open-sora
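Several of these lines mix pip command-line options with requirement specifiers, so the file reads more like an install recipe than a plain requirements.txt. One way to run the equivalent installs step by step, with the same packages and flags split into explicit pip commands, is:

```shell
pip install torch torchvision packaging ninja
pip install flash-attn --no-build-isolation
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation \
  --config-settings "--build-option=--cpp_ext" \
  --config-settings "--build-option=--cuda_ext" \
  git+https://github.com/NVIDIA/apex.git
pip install -U xformers --index-url https://download.pytorch.org/whl/cu121
pip install -e "git+https://github.com/hpcaitech/Open-Sora.git#egg=open-sora"
```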
configs/dit/inference/16x256x256.py ADDED
@@ -0,0 +1,31 @@
1
+ num_frames = 16
2
+ fps = 8
3
+ image_size = (256, 256)
4
+
5
+ # Define model
6
+ model = dict(
7
+ type="DiT-XL/2",
8
+ condition="text",
9
+ from_pretrained="PRETRAINED_MODEL",
10
+ )
11
+ vae = dict(
12
+ type="VideoAutoencoderKL",
13
+ from_pretrained="stabilityai/sd-vae-ft-ema",
14
+ )
15
+ text_encoder = dict(
16
+ type="clip",
17
+ from_pretrained="openai/clip-vit-base-patch32",
18
+ model_max_length=77,
19
+ )
20
+ scheduler = dict(
21
+ type="dpm-solver",
22
+ num_sampling_steps=20,
23
+ cfg_scale=4.0,
24
+ )
25
+ dtype = "fp16"
26
+
27
+ # Others
28
+ batch_size = 2
29
+ seed = 42
30
+ prompt_path = "./assets/texts/ucf101_labels.txt"
31
+ save_dir = "./outputs/samples/"
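Configs like the one above are plain Python files passed to the repository's sampling script. Assuming the entry point follows the upstream Open-Sora layout (`scripts/inference.py`, with `--ckpt-path` filling in the `PRETRAINED_MODEL` placeholder; both are assumptions, as the script itself is not shown in this diff), an invocation would look roughly like:

```shell
# hypothetical invocation; script name and flags assume the upstream Open-Sora layout
torchrun --standalone --nproc_per_node 1 scripts/inference.py \
  configs/dit/inference/16x256x256.py --ckpt-path path/to/dit_checkpoint.pt
```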
configs/dit/inference/1x256x256-class.py ADDED
@@ -0,0 +1,31 @@
1
+ num_frames = 1
2
+ fps = 1
3
+ image_size = (256, 256)
4
+
5
+ # Define model
6
+ model = dict(
7
+ type="DiT-XL/2",
8
+ no_temporal_pos_emb=True,
9
+ condition="label_1000",
10
+ from_pretrained="DiT-XL-2-256x256.pt",
11
+ )
12
+ vae = dict(
13
+ type="VideoAutoencoderKL",
14
+ from_pretrained="stabilityai/sd-vae-ft-ema",
15
+ )
16
+ text_encoder = dict(
17
+ type="classes",
18
+ num_classes=1000,
19
+ )
20
+ scheduler = dict(
21
+ type="dpm-solver",
22
+ num_sampling_steps=20,
23
+ cfg_scale=4.0,
24
+ )
25
+ dtype = "fp16"
26
+
27
+ # Others
28
+ batch_size = 2
29
+ seed = 42
30
+ prompt_path = "./assets/texts/imagenet_id.txt"
31
+ save_dir = "./outputs/samples/"
configs/dit/inference/1x256x256.py ADDED
@@ -0,0 +1,32 @@
1
+ num_frames = 1
2
+ fps = 1
3
+ image_size = (256, 256)
4
+
5
+ # Define model
6
+ model = dict(
7
+ type="DiT-XL/2",
8
+ no_temporal_pos_emb=True,
9
+ condition="text",
10
+ from_pretrained="PRETRAINED_MODEL",
11
+ )
12
+ vae = dict(
13
+ type="VideoAutoencoderKL",
14
+ from_pretrained="stabilityai/sd-vae-ft-ema",
15
+ )
16
+ text_encoder = dict(
17
+ type="clip",
18
+ from_pretrained="openai/clip-vit-base-patch32",
19
+ model_max_length=77,
20
+ )
21
+ scheduler = dict(
22
+ type="dpm-solver",
23
+ num_sampling_steps=20,
24
+ cfg_scale=4.0,
25
+ )
26
+ dtype = "fp16"
27
+
28
+ # Others
29
+ batch_size = 2
30
+ seed = 42
31
+ prompt_path = "./assets/texts/imagenet_labels.txt"
32
+ save_dir = "./outputs/samples/"
configs/dit/train/16x256x256.py ADDED
@@ -0,0 +1,50 @@
1
+ num_frames = 16
2
+ frame_interval = 3
3
+ image_size = (256, 256)
4
+
5
+ # Define dataset
6
+ root = None
7
+ data_path = "CSV_PATH"
8
+ use_image_transform = False
9
+ num_workers = 4
10
+
11
+ # Define acceleration
12
+ dtype = "bf16"
13
+ grad_checkpoint = False
14
+ plugin = "zero2"
15
+ sp_size = 1
16
+
17
+ # Define model
18
+ model = dict(
19
+ type="DiT-XL/2",
20
+ from_pretrained="DiT-XL-2-256x256.pt",
21
+ enable_flashattn=True,
22
+ enable_layernorm_kernel=True,
23
+ )
24
+ vae = dict(
25
+ type="VideoAutoencoderKL",
26
+ from_pretrained="stabilityai/sd-vae-ft-ema",
27
+ )
28
+ text_encoder = dict(
29
+ type="clip",
30
+ from_pretrained="openai/clip-vit-base-patch32",
31
+ model_max_length=77,
32
+ )
33
+ scheduler = dict(
34
+ type="iddpm",
35
+ timestep_respacing="",
36
+ )
37
+
38
+ # Others
39
+ seed = 42
40
+ outputs = "outputs"
41
+ wandb = False
42
+
43
+ epochs = 1000
44
+ log_every = 10
45
+ ckpt_every = 1000
46
+ load = None
47
+
48
+ batch_size = 8
49
+ lr = 2e-5
50
+ grad_clip = 1.0
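The training configs are consumed the same way. Assuming the upstream Open-Sora training entry point (`scripts/train.py`, with `--data-path` filling in the `CSV_PATH` placeholder; again assumptions, not shown in this diff), a launch would look roughly like:

```shell
# hypothetical invocation; adjust --nnodes/--nproc_per_node to your hardware
torchrun --nnodes 1 --nproc_per_node 8 scripts/train.py \
  configs/dit/train/16x256x256.py --data-path path/to/dataset.csv
```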
configs/dit/train/1x256x256.py ADDED
@@ -0,0 +1,50 @@
1
+ num_frames = 1
2
+ frame_interval = 1
3
+ image_size = (256, 256)
4
+
5
+ # Define dataset
6
+ root = None
7
+ data_path = "CSV_PATH"
8
+ use_image_transform = True
9
+ num_workers = 4
10
+
11
+ # Define acceleration
12
+ dtype = "bf16"
13
+ grad_checkpoint = False
14
+ plugin = "zero2"
15
+ sp_size = 1
16
+
17
+ # Define model
18
+ model = dict(
19
+ type="DiT-XL/2",
20
+ no_temporal_pos_emb=True,
21
+ enable_flashattn=True,
22
+ enable_layernorm_kernel=True,
23
+ )
24
+ vae = dict(
25
+ type="VideoAutoencoderKL",
26
+ from_pretrained="stabilityai/sd-vae-ft-ema",
27
+ )
28
+ text_encoder = dict(
29
+ type="clip",
30
+ from_pretrained="openai/clip-vit-base-patch32",
31
+ model_max_length=77,
32
+ )
33
+ scheduler = dict(
34
+ type="iddpm",
35
+ timestep_respacing="",
36
+ )
37
+
38
+ # Others
39
+ seed = 42
40
+ outputs = "outputs"
41
+ wandb = False
42
+
43
+ epochs = 1000
44
+ log_every = 10
45
+ ckpt_every = 1000
46
+ load = None
47
+
48
+ batch_size = 128
49
+ lr = 1e-4 # according to DiT repo
50
+ grad_clip = 1.0
configs/latte/inference/16x256x256-class.py ADDED
@@ -0,0 +1,30 @@
1
+ num_frames = 16
2
+ fps = 8
3
+ image_size = (256, 256)
4
+
5
+ # Define model
6
+ model = dict(
7
+ type="Latte-XL/2",
8
+ condition="label_101",
9
+ from_pretrained="Latte-XL-2-256x256-ucf101.pt",
10
+ )
11
+ vae = dict(
12
+ type="VideoAutoencoderKL",
13
+ from_pretrained="stabilityai/sd-vae-ft-ema",
14
+ )
15
+ text_encoder = dict(
16
+ type="classes",
17
+ num_classes=101,
18
+ )
19
+ scheduler = dict(
20
+ type="dpm-solver",
21
+ num_sampling_steps=20,
22
+ cfg_scale=4.0,
23
+ )
24
+ dtype = "fp16"
25
+
26
+ # Others
27
+ batch_size = 2
28
+ seed = 42
29
+ prompt_path = "./assets/texts/ucf101_id.txt"
30
+ save_dir = "./outputs/samples/"
configs/latte/inference/16x256x256.py ADDED
@@ -0,0 +1,31 @@
1
+ num_frames = 16
2
+ fps = 8
3
+ image_size = (256, 256)
4
+
5
+ # Define model
6
+ model = dict(
7
+ type="Latte-XL/2",
8
+ condition="text",
9
+ from_pretrained="PRETRAINED_MODEL",
10
+ )
11
+ vae = dict(
12
+ type="VideoAutoencoderKL",
13
+ from_pretrained="stabilityai/sd-vae-ft-ema",
14
+ )
15
+ text_encoder = dict(
16
+ type="clip",
17
+ from_pretrained="openai/clip-vit-base-patch32",
18
+ model_max_length=77,
19
+ )
20
+ scheduler = dict(
21
+ type="dpm-solver",
22
+ num_sampling_steps=20,
23
+ cfg_scale=4.0,
24
+ )
25
+ dtype = "fp16"
26
+
27
+ # Others
28
+ batch_size = 2
29
+ seed = 42
30
+ prompt_path = "./assets/texts/ucf101_labels.txt"
31
+ save_dir = "./outputs/samples/"
configs/latte/train/16x256x256.py ADDED
@@ -0,0 +1,49 @@
1
+ num_frames = 16
2
+ frame_interval = 3
3
+ image_size = (256, 256)
4
+
5
+ # Define dataset
6
+ root = None
7
+ data_path = "CSV_PATH"
8
+ use_image_transform = False
9
+ num_workers = 4
10
+
11
+ # Define acceleration
12
+ dtype = "bf16"
13
+ grad_checkpoint = True
14
+ plugin = "zero2"
15
+ sp_size = 1
16
+
17
+ # Define model
18
+ model = dict(
19
+ type="Latte-XL/2",
20
+ enable_flashattn=True,
21
+ enable_layernorm_kernel=True,
22
+ )
23
+ vae = dict(
24
+ type="VideoAutoencoderKL",
25
+ from_pretrained="stabilityai/sd-vae-ft-ema",
26
+ )
27
+ text_encoder = dict(
28
+ type="clip",
29
+ from_pretrained="openai/clip-vit-base-patch32",
30
+ model_max_length=77,
31
+ )
32
+ scheduler = dict(
33
+ type="iddpm",
34
+ timestep_respacing="",
35
+ )
36
+
37
+ # Others
38
+ seed = 42
39
+ outputs = "outputs"
40
+ wandb = False
41
+
42
+ epochs = 1000
43
+ log_every = 10
44
+ ckpt_every = 1000
45
+ load = None
46
+
47
+ batch_size = 8
48
+ lr = 2e-5
49
+ grad_clip = 1.0
configs/opensora/inference/16x256x256.py ADDED
@@ -0,0 +1,34 @@
1
+ num_frames = 16
2
+ fps = 24 // 3
3
+ image_size = (256, 256)
4
+
5
+ # Define model
6
+ model = dict(
7
+ type="STDiT-XL/2",
8
+ space_scale=0.5,
9
+ time_scale=1.0,
10
+ enable_flashattn=True,
11
+ enable_layernorm_kernel=True,
12
+ from_pretrained="PRETRAINED_MODEL",
13
+ )
14
+ vae = dict(
15
+ type="VideoAutoencoderKL",
16
+ from_pretrained="stabilityai/sd-vae-ft-ema",
17
+ )
18
+ text_encoder = dict(
19
+ type="t5",
20
+ from_pretrained="./pretrained_models/t5_ckpts",
21
+ model_max_length=120,
22
+ )
23
+ scheduler = dict(
24
+ type="iddpm",
25
+ num_sampling_steps=100,
26
+ cfg_scale=7.0,
27
+ )
28
+ dtype = "fp16"
29
+
30
+ # Others
31
+ batch_size = 2
32
+ seed = 42
33
+ prompt_path = "./assets/texts/t2v_samples.txt"
34
+ save_dir = "./outputs/samples/"
configs/opensora/inference/16x512x512.py ADDED
@@ -0,0 +1,35 @@
1
+ num_frames = 16
2
+ fps = 24 // 3
3
+ image_size = (512, 512)
4
+
5
+ # Define model
6
+ model = dict(
7
+ type="STDiT-XL/2",
8
+ space_scale=1.0,
9
+ time_scale=1.0,
10
+ enable_flashattn=True,
11
+ enable_layernorm_kernel=True,
12
+ from_pretrained="PRETRAINED_MODEL"
13
+ )
14
+ vae = dict(
15
+ type="VideoAutoencoderKL",
16
+ from_pretrained="stabilityai/sd-vae-ft-ema",
17
+ micro_batch_size=128,
18
+ )
19
+ text_encoder = dict(
20
+ type="t5",
21
+ from_pretrained="./pretrained_models/t5_ckpts",
22
+ model_max_length=120,
23
+ )
24
+ scheduler = dict(
25
+ type="iddpm",
26
+ num_sampling_steps=100,
27
+ cfg_scale=7.0,
28
+ )
29
+ dtype = "fp16"
30
+
31
+ # Others
32
+ batch_size = 2
33
+ seed = 42
34
+ prompt_path = "./assets/texts/t2v_samples.txt"
35
+ save_dir = "./outputs/samples/"
configs/opensora/inference/64x512x512.py ADDED
@@ -0,0 +1,35 @@
1
+ num_frames = 64
2
+ fps = 24 // 2
3
+ image_size = (512, 512)
4
+
5
+ # Define model
6
+ model = dict(
7
+ type="STDiT-XL/2",
8
+ space_scale=1.0,
9
+ time_scale=2 / 3,
10
+ enable_flashattn=True,
11
+ enable_layernorm_kernel=True,
12
+ from_pretrained="PRETRAINED_MODEL",
13
+ )
14
+ vae = dict(
15
+ type="VideoAutoencoderKL",
16
+ from_pretrained="stabilityai/sd-vae-ft-ema",
17
+ micro_batch_size=128,
18
+ )
19
+ text_encoder = dict(
20
+ type="t5",
21
+ from_pretrained="./pretrained_models/t5_ckpts",
22
+ model_max_length=120,
23
+ )
24
+ scheduler = dict(
25
+ type="iddpm",
26
+ num_sampling_steps=100,
27
+ cfg_scale=7.0,
28
+ )
29
+ dtype = "fp16"
30
+
31
+ # Others
32
+ batch_size = 1
33
+ seed = 42
34
+ prompt_path = "./assets/texts/t2v_samples.txt"
35
+ save_dir = "./outputs/samples/"
configs/opensora/train/16x256x256.py ADDED
@@ -0,0 +1,53 @@
1
+ num_frames = 16
2
+ frame_interval = 3
3
+ image_size = (256, 256)
4
+
5
+ # Define dataset
6
+ root = None
7
+ data_path = "CSV_PATH"
8
+ use_image_transform = False
9
+ num_workers = 4
10
+
11
+ # Define acceleration
12
+ dtype = "bf16"
13
+ grad_checkpoint = True
14
+ plugin = "zero2"
15
+ sp_size = 1
16
+
17
+ # Define model
18
+ model = dict(
19
+ type="STDiT-XL/2",
20
+ space_scale=0.5,
21
+ time_scale=1.0,
22
+ from_pretrained="PixArt-XL-2-512x512.pth",
23
+ enable_flashattn=True,
24
+ enable_layernorm_kernel=True,
25
+ )
26
+ vae = dict(
27
+ type="VideoAutoencoderKL",
28
+ from_pretrained="stabilityai/sd-vae-ft-ema",
29
+ )
30
+ text_encoder = dict(
31
+ type="t5",
32
+ from_pretrained="./pretrained_models/t5_ckpts",
33
+ model_max_length=120,
34
+ shardformer=True,
35
+ )
36
+ scheduler = dict(
37
+ type="iddpm",
38
+ timestep_respacing="",
39
+ )
40
+
41
+ # Others
42
+ seed = 42
43
+ outputs = "outputs"
44
+ wandb = False
45
+
46
+ epochs = 1000
47
+ log_every = 10
48
+ ckpt_every = 1000
49
+ load = None
50
+
51
+ batch_size = 8
52
+ lr = 2e-5
53
+ grad_clip = 1.0
configs/opensora/train/16x512x512.py ADDED
@@ -0,0 +1,54 @@
1
+ num_frames = 16
2
+ frame_interval = 3
3
+ image_size = (512, 512)
4
+
5
+ # Define dataset
6
+ root = None
7
+ data_path = "CSV_PATH"
8
+ use_image_transform = False
9
+ num_workers = 4
10
+
11
+ # Define acceleration
12
+ dtype = "bf16"
13
+ grad_checkpoint = False
14
+ plugin = "zero2"
15
+ sp_size = 1
16
+
17
+ # Define model
18
+ model = dict(
19
+ type="STDiT-XL/2",
20
+ space_scale=1.0,
21
+ time_scale=1.0,
22
+ from_pretrained=None,
23
+ enable_flashattn=True,
24
+ enable_layernorm_kernel=True,
25
+ )
26
+ vae = dict(
27
+ type="VideoAutoencoderKL",
28
+ from_pretrained="stabilityai/sd-vae-ft-ema",
29
+ micro_batch_size=128,
30
+ )
31
+ text_encoder = dict(
32
+ type="t5",
33
+ from_pretrained="./pretrained_models/t5_ckpts",
34
+ model_max_length=120,
35
+ shardformer=True,
36
+ )
37
+ scheduler = dict(
38
+ type="iddpm",
39
+ timestep_respacing="",
40
+ )
41
+
42
+ # Others
43
+ seed = 42
44
+ outputs = "outputs"
45
+ wandb = False
46
+
47
+ epochs = 1000
48
+ log_every = 10
49
+ ckpt_every = 500
50
+ load = None
51
+
52
+ batch_size = 8
53
+ lr = 2e-5
54
+ grad_clip = 1.0
configs/opensora/train/360x512x512.py ADDED
@@ -0,0 +1,55 @@
1
+ num_frames = 360
2
+ frame_interval = 1
3
+ image_size = (512, 512)
4
+
5
+ # Define dataset
6
+ root = None
7
+ data_path = "CSV_PATH"
8
+ use_image_transform = False
9
+ num_workers = 4
10
+
11
+ # Define acceleration
12
+ dtype = "bf16"
13
+ grad_checkpoint = True
14
+ plugin = "zero2-seq"
15
+ sp_size = 2
16
+
17
+ # Define model
18
+ model = dict(
19
+ type="STDiT-XL/2",
20
+ space_scale=1.0,
21
+ time_scale=2 / 3,
22
+ from_pretrained=None,
23
+ enable_flashattn=True,
24
+ enable_layernorm_kernel=True,
25
+ enable_sequence_parallelism=True,  # enable sequence parallelism here
26
+ )
27
+ vae = dict(
28
+ type="VideoAutoencoderKL",
29
+ from_pretrained="stabilityai/sd-vae-ft-ema",
30
+ micro_batch_size=128,
31
+ )
32
+ text_encoder = dict(
33
+ type="t5",
34
+ from_pretrained="./pretrained_models/t5_ckpts",
35
+ model_max_length=120,
36
+ shardformer=True,
37
+ )
38
+ scheduler = dict(
39
+ type="iddpm",
40
+ timestep_respacing="",
41
+ )
42
+
43
+ # Others
44
+ seed = 42
45
+ outputs = "outputs"
46
+ wandb = False
47
+
48
+ epochs = 1000
49
+ log_every = 10
50
+ ckpt_every = 250
51
+ load = None
52
+
53
+ batch_size = 1
54
+ lr = 2e-5
55
+ grad_clip = 1.0
configs/opensora/train/64x512x512-sp.py ADDED
@@ -0,0 +1,54 @@
1
+ num_frames = 64
2
+ frame_interval = 2
3
+ image_size = (512, 512)
4
+
5
+ # Define dataset
6
+ root = None
7
+ data_path = "CSV_PATH"
8
+ use_image_transform = False
9
+ num_workers = 4
10
+
11
+ # Define acceleration
12
+ dtype = "bf16"
13
+ grad_checkpoint = True
14
+ plugin = "zero2-seq"
15
+ sp_size = 2
16
+
17
+ # Define model
18
+ model = dict(
19
+ type="STDiT-XL/2",
20
+ space_scale=1.0,
21
+ time_scale=2 / 3,
22
+ from_pretrained=None,
23
+ enable_flashattn=True,
24
+ enable_layernorm_kernel=True,
25
+ enable_sequence_parallelism=True, # enable sq here
26
+ )
27
+ vae = dict(
28
+ type="VideoAutoencoderKL",
29
+ from_pretrained="stabilityai/sd-vae-ft-ema",
30
+ )
31
+ text_encoder = dict(
32
+ type="t5",
33
+ from_pretrained="./pretrained_models/t5_ckpts",
34
+ model_max_length=120,
35
+ shardformer=True,
36
+ )
37
+ scheduler = dict(
38
+ type="iddpm",
39
+ timestep_respacing="",
40
+ )
41
+
42
+ # Others
43
+ seed = 42
44
+ outputs = "outputs"
45
+ wandb = False
46
+
47
+ epochs = 1000
48
+ log_every = 10
49
+ ckpt_every = 1000
50
+ load = None
51
+
52
+ batch_size = 1
53
+ lr = 2e-5
54
+ grad_clip = 1.0
configs/opensora/train/64x512x512.py ADDED
@@ -0,0 +1,54 @@
1
+ num_frames = 64
2
+ frame_interval = 2
3
+ image_size = (512, 512)
4
+
5
+ # Define dataset
6
+ root = None
7
+ data_path = "CSV_PATH"
8
+ use_image_transform = False
9
+ num_workers = 4
10
+
11
+ # Define acceleration
12
+ dtype = "bf16"
13
+ grad_checkpoint = True
14
+ plugin = "zero2"
15
+ sp_size = 1
16
+
17
+ # Define model
18
+ model = dict(
19
+ type="STDiT-XL/2",
20
+ space_scale=1.0,
21
+ time_scale=2 / 3,
22
+ from_pretrained=None,
23
+ enable_flashattn=True,
24
+ enable_layernorm_kernel=True,
25
+ )
26
+ vae = dict(
27
+ type="VideoAutoencoderKL",
28
+ from_pretrained="stabilityai/sd-vae-ft-ema",
29
+ micro_batch_size=64,
30
+ )
31
+ text_encoder = dict(
32
+ type="t5",
33
+ from_pretrained="./pretrained_models/t5_ckpts",
34
+ model_max_length=120,
35
+ shardformer=True,
36
+ )
37
+ scheduler = dict(
38
+ type="iddpm",
39
+ timestep_respacing="",
40
+ )
41
+
42
+ # Others
43
+ seed = 42
44
+ outputs = "outputs"
45
+ wandb = False
46
+
47
+ epochs = 1000
48
+ log_every = 10
49
+ ckpt_every = 250
50
+ load = None
51
+
52
+ batch_size = 4
53
+ lr = 2e-5
54
+ grad_clip = 1.0
configs/pixart/inference/16x256x256.py ADDED
@@ -0,0 +1,32 @@
1
+ num_frames = 16
2
+ fps = 8
3
+ image_size = (256, 256)
4
+
5
+ # Define model
6
+ model = dict(
7
+ type="PixArt-XL/2",
8
+ space_scale=0.5,
9
+ time_scale=1.0,
10
+ from_pretrained="outputs/098-F16S3-PixArt-XL-2/epoch7-global_step30000/model_ckpt.pt",
11
+ )
12
+ vae = dict(
13
+ type="VideoAutoencoderKL",
14
+ from_pretrained="stabilityai/sd-vae-ft-ema",
15
+ )
16
+ text_encoder = dict(
17
+ type="t5",
18
+ from_pretrained="./pretrained_models/t5_ckpts",
19
+ model_max_length=120,
20
+ )
21
+ scheduler = dict(
22
+ type="dpm-solver",
23
+ num_sampling_steps=20,
24
+ cfg_scale=7.0,
25
+ )
26
+ dtype = "fp16"
27
+
28
+ # Others
29
+ batch_size = 2
30
+ seed = 42
31
+ prompt_path = "./assets/texts/t2v_samples.txt"
32
+ save_dir = "./outputs/samples/"
configs/pixart/inference/1x1024MS.py ADDED
@@ -0,0 +1,34 @@
1
+ num_frames = 1
2
+ fps = 1
3
+ image_size = (1920, 512)
4
+ multi_resolution = True
5
+
6
+ # Define model
7
+ model = dict(
8
+ type="PixArtMS-XL/2",
9
+ space_scale=2.0,
10
+ time_scale=1.0,
11
+ no_temporal_pos_emb=True,
12
+ from_pretrained="PixArt-XL-2-1024-MS.pth",
13
+ )
14
+ vae = dict(
15
+ type="VideoAutoencoderKL",
16
+ from_pretrained="stabilityai/sd-vae-ft-ema",
17
+ )
18
+ text_encoder = dict(
19
+ type="t5",
20
+ from_pretrained="./pretrained_models/t5_ckpts",
21
+ model_max_length=120,
22
+ )
23
+ scheduler = dict(
24
+ type="dpm-solver",
25
+ num_sampling_steps=20,
26
+ cfg_scale=7.0,
27
+ )
28
+ dtype = "fp16"
29
+
30
+ # Others
31
+ batch_size = 2
32
+ seed = 42
33
+ prompt_path = "./assets/texts/t2i_samples.txt"
34
+ save_dir = "./outputs/samples/"
configs/pixart/inference/1x256x256.py ADDED
@@ -0,0 +1,33 @@
1
+ num_frames = 1
2
+ fps = 1
3
+ image_size = (256, 256)
4
+
5
+ # Define model
6
+ model = dict(
7
+ type="PixArt-XL/2",
8
+ space_scale=1.0,
9
+ time_scale=1.0,
10
+ no_temporal_pos_emb=True,
11
+ from_pretrained="PixArt-XL-2-256x256.pth",
12
+ )
13
+ vae = dict(
14
+ type="VideoAutoencoderKL",
15
+ from_pretrained="stabilityai/sd-vae-ft-ema",
16
+ )
17
+ text_encoder = dict(
18
+ type="t5",
19
+ from_pretrained="./pretrained_models/t5_ckpts",
20
+ model_max_length=120,
21
+ )
22
+ scheduler = dict(
23
+ type="dpm-solver",
24
+ num_sampling_steps=20,
25
+ cfg_scale=7.0,
26
+ )
27
+ dtype = "fp16"
28
+
29
+ # Others
30
+ batch_size = 2
31
+ seed = 42
32
+ prompt_path = "./assets/texts/t2i_samples.txt"
33
+ save_dir = "./outputs/samples/"
configs/pixart/inference/1x512x512.py ADDED
@@ -0,0 +1,33 @@
1
+ num_frames = 1
2
+ fps = 1
3
+ image_size = (512, 512)
4
+
5
+ # Define model
6
+ model = dict(
7
+ type="PixArt-XL/2",
8
+ space_scale=1.0,
9
+ time_scale=1.0,
10
+ no_temporal_pos_emb=True,
11
+ from_pretrained="PixArt-XL-2-512x512.pth",
12
+ )
13
+ vae = dict(
14
+ type="VideoAutoencoderKL",
15
+ from_pretrained="stabilityai/sd-vae-ft-ema",
16
+ )
17
+ text_encoder = dict(
18
+ type="t5",
19
+ from_pretrained="./pretrained_models/t5_ckpts",
20
+ model_max_length=120,
21
+ )
22
+ scheduler = dict(
23
+ type="dpm-solver",
24
+ num_sampling_steps=20,
25
+ cfg_scale=7.0,
26
+ )
27
+ dtype = "fp16"
28
+
29
+ # Others
30
+ batch_size = 2
31
+ seed = 42
32
+ prompt_path = "./assets/texts/t2i_samples.txt"
33
+ save_dir = "./outputs/samples/"
configs/pixart/train/16x256x256.py ADDED
@@ -0,0 +1,53 @@
1
+ num_frames = 16
2
+ frame_interval = 3
3
+ image_size = (256, 256)
4
+
5
+ # Define dataset
6
+ root = None
7
+ data_path = "CSV_PATH"
8
+ use_image_transform = False
9
+ num_workers = 4
10
+
11
+ # Define acceleration
12
+ dtype = "bf16"
13
+ grad_checkpoint = False
14
+ plugin = "zero2"
15
+ sp_size = 1
16
+
17
+ # Define model
18
+ model = dict(
19
+ type="PixArt-XL/2",
20
+ space_scale=0.5,
21
+ time_scale=1.0,
22
+ from_pretrained="PixArt-XL-2-512x512.pth",
23
+ enable_flashattn=True,
24
+ enable_layernorm_kernel=True,
25
+ )
26
+ vae = dict(
27
+ type="VideoAutoencoderKL",
28
+ from_pretrained="stabilityai/sd-vae-ft-ema",
29
+ )
30
+ text_encoder = dict(
31
+ type="t5",
32
+ from_pretrained="./pretrained_models/t5_ckpts",
33
+ model_max_length=120,
34
+ shardformer=True,
35
+ )
36
+ scheduler = dict(
37
+ type="iddpm",
38
+ timestep_respacing="",
39
+ )
40
+
41
+ # Others
42
+ seed = 42
43
+ outputs = "outputs"
44
+ wandb = False
45
+
46
+ epochs = 1000
47
+ log_every = 10
48
+ ckpt_every = 1000
49
+ load = None
50
+
51
+ batch_size = 8
52
+ lr = 2e-5
53
+ grad_clip = 1.0
configs/pixart/train/1x512x512.py ADDED
@@ -0,0 +1,54 @@
1
+ num_frames = 1
2
+ frame_interval = 1
3
+ image_size = (512, 512)
4
+
5
+ # Define dataset
6
+ root = None
7
+ data_path = "CSV_PATH"
8
+ use_image_transform = True
9
+ num_workers = 4
10
+
11
+ # Define acceleration
12
+ dtype = "bf16"
13
+ grad_checkpoint = True
14
+ plugin = "zero2"
15
+ sp_size = 1
16
+
17
+ # Define model
18
+ model = dict(
19
+ type="PixArt-XL/2",
20
+ space_scale=1.0,
21
+ time_scale=1.0,
22
+ no_temporal_pos_emb=True,
23
+ from_pretrained="PixArt-XL-2-512x512.pth",
24
+ enable_flashattn=True,
25
+ enable_layernorm_kernel=True,
26
+ )
27
+ vae = dict(
28
+ type="VideoAutoencoderKL",
29
+ from_pretrained="stabilityai/sd-vae-ft-ema",
30
+ )
31
+ text_encoder = dict(
32
+ type="t5",
33
+ from_pretrained="./pretrained_models/t5_ckpts",
34
+ model_max_length=120,
35
+ shardformer=True,
36
+ )
37
+ scheduler = dict(
38
+ type="iddpm",
39
+ timestep_respacing="",
40
+ )
41
+
42
+ # Others
43
+ seed = 42
44
+ outputs = "outputs"
45
+ wandb = False
46
+
47
+ epochs = 1000
48
+ log_every = 10
49
+ ckpt_every = 1000
50
+ load = None
51
+
52
+ batch_size = 32
53
+ lr = 2e-5
54
+ grad_clip = 1.0
configs/pixart/train/64x512x512.py ADDED
@@ -0,0 +1,54 @@
1
+ num_frames = 64
2
+ frame_interval = 2
3
+ image_size = (512, 512)
4
+
5
+ # Define dataset
6
+ root = None
7
+ data_path = "CSV_PATH"
8
+ use_image_transform = False
9
+ num_workers = 4
10
+
11
+ # Define acceleration
12
+ dtype = "bf16"
13
+ grad_checkpoint = True
14
+ plugin = "zero2"
15
+ sp_size = 1
16
+
17
+ # Define model
18
+ model = dict(
19
+ type="PixArt-XL/2",
20
+ space_scale=1.0,
21
+ time_scale=2 / 3,
22
+ from_pretrained=None,
23
+ enable_flashattn=True,
24
+ enable_layernorm_kernel=True,
25
+ )
26
+ vae = dict(
27
+ type="VideoAutoencoderKL",
28
+ from_pretrained="stabilityai/sd-vae-ft-ema",
29
+ micro_batch_size=128,
30
+ )
31
+ text_encoder = dict(
32
+ type="t5",
33
+ from_pretrained="./pretrained_models/t5_ckpts",
34
+ model_max_length=120,
35
+ shardformer=True,
36
+ )
37
+ scheduler = dict(
38
+ type="iddpm",
39
+ timestep_respacing="",
40
+ )
41
+
42
+ # Others
43
+ seed = 42
44
+ outputs = "outputs"
45
+ wandb = False
46
+
47
+ epochs = 1000
48
+ log_every = 10
49
+ ckpt_every = 250
50
+ load = None
51
+
52
+ batch_size = 4
53
+ lr = 2e-5
54
+ grad_clip = 1.0
docs/README_zh.md ADDED
@@ -0,0 +1,206 @@
1
+ <p align="center">
2
+ <img src="../assets/readme/icon.png" width="250"/>
3
+ <p>
4
+
5
+ <div align="center">
6
+ <a href="https://github.com/hpcaitech/Open-Sora/stargazers"><img src="https://img.shields.io/github/stars/hpcaitech/Open-Sora?style=social"></a>
7
+ <a href="https://hpcaitech.github.io/Open-Sora/"><img src="https://img.shields.io/badge/Gallery-View-orange?logo=&amp"></a>
8
+ <a href="https://discord.gg/shpbperhGs"><img src="https://img.shields.io/badge/Discord-join-blueviolet?logo=discord&amp"></a>
9
+ <a href="https://join.slack.com/t/colossalaiworkspace/shared_invite/zt-247ipg9fk-KRRYmUl~u2ll2637WRURVA"><img src="https://img.shields.io/badge/Slack-ColossalAI-blueviolet?logo=slack&amp"></a>
10
+ <a href="https://twitter.com/yangyou1991/status/1769411544083996787?s=61&t=jT0Dsx2d-MS5vS9rNM5e5g"><img src="https://img.shields.io/badge/Twitter-Discuss-blue?logo=twitter&amp"></a>
11
+ <a href="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/WeChat.png"><img src="https://img.shields.io/badge/微信-小助手加群-green?logo=wechat&amp"></a>
12
+ </div>
13
+
14
+ ## Open-Sora: 完全开源的高效复现类Sora视频生成方案
15
+ **Open-Sora**项目是一项致力于**高效**制作高质量视频,并使所有人都能使用其模型、工具和内容的计划。
16
+ 通过采用**开源**原则,Open-Sora 不仅实现了先进视频生成技术的低成本普及,还提供了一个精简且用户友好的方案,简化了视频制作的复杂性。
17
+ 通过 Open-Sora,我们希望更多开发者一起探索内容创作领域的创新、创造和包容。
18
+ [[English]](/README.md)
19
+
20
+ ## 📰 资讯
21
+
22
+ * **[2024.03.18]** 🔥 我们发布了**Open-Sora 1.0**,这是一个完全开源的视频生成项目。
23
+ * Open-Sora 1.0 支持视频数据预处理、<a href="https://github.com/hpcaitech/ColossalAI"><img src="../assets/readme/colossal_ai.png" width="8%" ></a> 加速训练、推理等全套流程。
24
+ * 我们提供的[模型权重](#model-weights)只需 3 天的训练就能生成 2~5 秒的 512x512 视频。
25
+ * **[2024.03.04]** Open-Sora:开源Sora复现方案,成本降低46%,序列扩充至近百万
26
+
27
+ ## 🎥 最新视频
28
+
29
+ | **2s 512×512** | **2s 512×512** | **2s 512×512** |
30
+ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
31
+ | [<img src="/assets/readme/sample_0.gif" width="">](https://github.com/hpcaitech/Open-Sora/assets/99191637/de1963d3-b43b-4e68-a670-bb821ebb6f80) | [<img src="/assets/readme/sample_1.gif" width="">](https://github.com/hpcaitech/Open-Sora/assets/99191637/13f8338f-3d42-4b71-8142-d234fbd746cc) | [<img src="/assets/readme/sample_2.gif" width="">](https://github.com/hpcaitech/Open-Sora/assets/99191637/fa6a65a6-e32a-4d64-9a9e-eabb0ebb8c16) |
32
+ | A serene night scene in a forested area. [...] The video is a time-lapse, capturing the transition from day to night, with the lake and forest serving as a constant backdrop. | A soaring drone footage captures the majestic beauty of a coastal cliff, [...] The water gently laps at the rock base and the greenery that clings to the top of the cliff. | The majestic beauty of a waterfall cascading down a cliff into a serene lake. [...] The camera angle provides a bird's eye view of the waterfall. |
33
+ | [<img src="/assets/readme/sample_3.gif" width="">](https://github.com/hpcaitech/Open-Sora/assets/99191637/64232f84-1b36-4750-a6c0-3e610fa9aa94) | [<img src="/assets/readme/sample_4.gif" width="">](https://github.com/hpcaitech/Open-Sora/assets/99191637/983a1965-a374-41a7-a76b-c07941a6c1e9) | [<img src="/assets/readme/sample_5.gif" width="">](https://github.com/hpcaitech/Open-Sora/assets/99191637/ec10c879-9767-4c31-865f-2e8d6cf11e65) |
34
+ | A bustling city street at night, filled with the glow of car headlights and the ambient light of streetlights. [...] | The vibrant beauty of a sunflower field. The sunflowers are arranged in neat rows, creating a sense of order and symmetry. [...] | A serene underwater scene featuring a sea turtle swimming through a coral reef. The turtle, with its greenish-brown shell [...] |
35
+
36
+ 视频经过降采样处理为`.gif`格式,以便显示。点击查看原始视频。为便于显示,文字经过修剪,全文请参见 [此处](/assets/texts/t2v_samples.txt)。在我们的[图片库](https://hpcaitech.github.io/Open-Sora/)中查看更多样本。
37
+
38
+ ## 🔆 新功能
39
+
40
+ * 📍Open-Sora-v1 已发布。[这里](#model-weights)提供了模型权重。只需 400K 视频片段和 200 个 H800 GPU 天(相比 Stable Video Diffusion 的 152M 样本),我们就能生成 2 秒的 512×512 视频。
41
+ * ✅ 从图像扩散模型到视频扩散模型的三阶段训练。我们提供每个阶段的权重。
42
+ * ✅ 支持训练加速,包括 Transformer 加速、更快的 T5 和 VAE 以及序列并行。在对 64x512x512 视频进行训练时,Open-Sora 可将训练速度提高**55%**。详细信息请参见[加速训练](docs/acceleration.md)。
43
+ * ✅ 我们提供用于数据预处理的视频切割和字幕工具。有关说明请点击[此处](tools/data/README.md),我们的数据收集计划请点击 [数据集](docs/datasets.md)。
44
+ * ✅ 我们发现来自[VideoGPT](https://wilson1yan.github.io/videogpt/index.html)的 VQ-VAE 质量较低,因此采用了来自[Stability-AI](https://huggingface.co/stabilityai/sd-vae-ft-mse-original) 的更好的 VAE。我们还发现在时间维度上进行修补会降低质量。更多讨论,请参阅我们的 **[报告](docs/report_v1.md)**。
45
+ * ✅ 我们研究了不同的架构,包括 DiT、Latte 和我们提出的 **STDiT**。我们的STDiT在质量和速度之间实现了更好的权衡。更多讨论,请参阅我们的 **[报告](docs/report_v1.md)**。
46
+ * ✅ 支持 CLIP 和 T5 文本条件控制。
47
+ * ✅ 通过将图像视为单帧视频,我们的项目支持在图像和视频(如 ImageNet 和 UCF101)上训练 DiT。更多说明请参见 [命令说明](docs/commands.md)。
48
+ * ✅ 利用[DiT](https://github.com/facebookresearch/DiT)、[Latte](https://github.com/Vchitect/Latte) 和 [PixArt](https://pixart-alpha.github.io/) 的官方权重支持推理。
49
+
50
+ <details>
51
+ <summary>查看更多</summary>
52
+
53
+ * ✅ 重构代码库。请参阅[结构](docs/structure.md),了解项目结构以及如何使用配置文件。
54
+
55
+ </details>
56
+
57
+ ### 下一步计划【按优先级排序】
58
+
59
+ * [ ] 完成数据处理管道(包括密集光流、美学评分、文本图像相似性、重复数据删除等)。更多信息请参见[数据集](/docs/datasets.md)。**[项目进行中]**
60
+ * [ ] 训练视频-VAE。 **[项目进行中]**
61
+
62
+ <details>
63
+ <summary>查看更多</summary>
64
+
65
+ * [ ] 支持图像和视频调节。
66
+ * [ ] 评估流程。
67
+ * [ ] 加入更好的调度程序,如 SD3 中的整流程序。
68
+ * [ ] 支持可变长宽比、分辨率和持续时间。
69
+ * [ ] 发布后支持 SD3。
70
+
71
+ </details>
72
+
73
+ ## 目录
74
+
75
+ * [安装](#installation)
76
+ * [模型权重](/#model-weights)
77
+ * [推理](/#inference)
78
+ * [数据处理](/#data-processing)
79
+ * [训练](/#training)
80
+ * [贡献](/#contribution)
81
+ * [声明](/#acknowledgement)
82
+ * [引用](/#citation)
83
+
84
+ ## Installation
85
+
86
+ ```bash
87
+ # create a virtual env
88
+ conda create -n opensora python=3.10
89
+
90
+ # install torch
91
+ # the command below is for CUDA 12.1, choose install commands from
92
+ # https://pytorch.org/get-started/locally/ based on your own CUDA version
93
+ pip3 install torch torchvision
94
+
95
+ # install flash attention (optional)
96
+ pip install packaging ninja
97
+ pip install flash-attn --no-build-isolation
98
+
99
+ # install apex (optional)
100
+ pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" git+https://github.com/NVIDIA/apex.git
101
+
102
+ # install xformers
103
+ pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu121
104
+
105
+ # install this project
106
+ git clone https://github.com/hpcaitech/Open-Sora
107
+ cd Open-Sora
108
+ pip install -v .
109
+ ```
110
+
111
+ 安装完成后,建议阅读[结构](docs/structure.md),了解项目结构以及如何使用配置文件。
112
+
113
+ ## 模型权重
114
+
115
+ | 分辨率 | 数据 | 迭代次数 | 批量大小 | GPU 天数 (H800) | 网址 |
116
+ | ---------- | ------ | ----------- | ---------- | --------------- | ---------- |
117
+ | 16×256×256 | 366K | 80k | 8×64 | 117 | [:link:]() |
118
+ | 16×256×256 | 20K HQ | 24k | 8×64 | 45 | [:link:]() |
119
+ | 16×512×512 | 20K HQ | 20k | 2×64 | 35 | [:link:]() |
120
+ | 64×512×512 | 50K HQ | | | | TBD |
121
+
122
+ 我们模型的权重部分由[PixArt-α](https://github.com/PixArt-alpha/PixArt-alpha) 初始化。参数数量为 724M。有关训练的更多信息,请参阅我们的 **[报告](/docs/report_v1.md)**。有关数据集的更多信息,请参阅[数据集](/docs/datasets.md)。HQ 表示高质量。
123
+ :warning: **局限性**:我们的模型是在有限的预算内训练出来的。质量和文本对齐度相对较差。特别是在生成人类时,模型表现很差,无法遵循详细的指令。我们正在努力改进质量和文本对齐。
124
+
125
+ ## 推理
126
+
127
+ 要使用我们提供的权重进行推理,首先要将[T5](https://huggingface.co/DeepFloyd/t5-v1_1-xxl/tree/main)权重下载到pretrained_models/t5_ckpts/t5-v1_1-xxl 中。然后下载模型权重。运行以下命令生成样本。请参阅[此处](docs/structure.md#inference-config-demos)自定义配置。
128
+
129
+ ```bash
130
+ # Sample 16x256x256 (5s/sample)
131
+ torchrun --standalone --nproc_per_node 1 scripts/inference.py configs/opensora/inference/16x256x256.py --ckpt-path ./path/to/your/ckpt.pth
132
+
133
+ # Sample 16x512x512 (20s/sample, 100 time steps)
134
+ torchrun --standalone --nproc_per_node 1 scripts/inference.py configs/opensora/inference/16x512x512.py --ckpt-path ./path/to/your/ckpt.pth
135
+
136
+ # Sample 64x512x512 (40s/sample, 100 time steps)
137
+ torchrun --standalone --nproc_per_node 1 scripts/inference.py configs/opensora/inference/64x512x512.py --ckpt-path ./path/to/your/ckpt.pth
138
+
139
+ # Sample 64x512x512 with sequence parallelism (30s/sample, 100 time steps)
140
+ # sequence parallelism is enabled automatically when nproc_per_node is larger than 1
141
+ torchrun --standalone --nproc_per_node 2 scripts/inference.py configs/opensora/inference/64x512x512.py --ckpt-path ./path/to/your/ckpt.pth
142
+ ```
143
+
144
+ 我们在 H800 GPU 上进行了速度测试。如需使用其他模型进行推理,请参阅[此处](docs/commands.md)获取更多说明。
145
+
146
+ ## 数据处理
147
+
148
+ 高质量数据是高质量模型的关键。[这里](/docs/datasets.md)有我们使用过的数据集和数据收集计划。我们提供处理视频数据的工具。目前,我们的数据处理流程包括以下步骤:
149
+
150
+ 1. 下载数据集。[[文件](/tools/datasets/README.md)]
151
+ 2. 将视频分割成片段。 [[文件](/tools/scenedetect/README.md)]
152
+ 3. 生成视频字幕。 [[文件](/tools/caption/README.md)]
153
+
154
+ ## 训练
155
+
156
+ 要启动训练,首先要将[T5](https://huggingface.co/DeepFloyd/t5-v1_1-xxl/tree/main)权重下载到pretrained_models/t5_ckpts/t5-v1_1-xxl 中。然后运行以下命令在单个节点上启动训练。
157
+
158
+ ```bash
159
+ # 1 GPU, 16x256x256
160
+ torchrun --nnodes=1 --nproc_per_node=1 scripts/train.py configs/opensora/train/16x256x512.py --data-path YOUR_CSV_PATH
161
+ # 8 GPUs, 64x512x512
162
+ torchrun --nnodes=1 --nproc_per_node=8 scripts/train.py configs/opensora/train/64x512x512.py --data-path YOUR_CSV_PATH --ckpt-path YOUR_PRETRAINED_CKPT
163
+ ```
164
+
165
+ 要在多个节点上启动训练,请根据[ColossalAI](https://colossalai.org/docs/basics/launch_colossalai/#launch-with-colossal-ai-cli) 准备一个主机文件,并运行以下命令。
166
+
167
+ ```bash
168
+ colossalai run --nproc_per_node 8 --hostfile hostfile scripts/train.py configs/opensora/train/64x512x512.py --data-path YOUR_CSV_PATH --ckpt-path YOUR_PRETRAINED_CKPT
169
+ ```
170
+
171
+ 有关其他型号的培训和高级使用方法,请参阅[此处](docs/commands.md)获取更多说明。
172
+
173
+ ## 贡献
174
+
175
+ 如果您希望为该项目做出贡献,可以参考 [贡献指南](./CONTRIBUTING.md).
176
+
177
+ ## 声明
178
+
179
+ * [DiT](https://github.com/facebookresearch/DiT): Scalable Diffusion Models with Transformers.
180
+ * [OpenDiT](https://github.com/NUS-HPC-AI-Lab/OpenDiT): An acceleration for DiT training. We adopt valuable acceleration strategies for training progress from OpenDiT.
181
+ * [PixArt](https://github.com/PixArt-alpha/PixArt-alpha): An open-source DiT-based text-to-image model.
182
+ * [Latte](https://github.com/Vchitect/Latte): An attempt to efficiently train DiT for video.
183
+ * [StabilityAI VAE](https://huggingface.co/stabilityai/sd-vae-ft-mse-original): A powerful image VAE model.
184
+ * [CLIP](https://github.com/openai/CLIP): A powerful text-image embedding model.
185
+ * [T5](https://github.com/google-research/text-to-text-transfer-transformer): A powerful text encoder.
186
+ * [LLaVA](https://github.com/haotian-liu/LLaVA): A powerful image captioning model based on [Yi-34B](https://huggingface.co/01-ai/Yi-34B).
187
+
188
+ 我们对他们的出色工作和对开源的慷慨贡献表示感谢。
189
+
190
+ ## 引用
191
+
192
+ ```bibtex
193
+ @software{opensora,
194
+ author = {Zangwei Zheng and Xiangyu Peng and Yang You},
195
+ title = {Open-Sora: Democratizing Efficient Video Production for All},
196
+ month = {March},
197
+ year = {2024},
198
+ url = {https://github.com/hpcaitech/Open-Sora}
199
+ }
200
+ ```
201
+
202
+ [Zangwei Zheng](https://github.com/zhengzangw) and [Xiangyu Peng](https://github.com/xyupeng) equally contributed to this work during their internship at [HPC-AI Tech](https://hpc-ai.com/).
203
+
204
+ ## Star 走势
205
+
206
+ [![Star History Chart](https://api.star-history.com/svg?repos=hpcaitech/Open-Sora&type=Date)](https://star-history.com/#hpcaitech/Open-Sora&Date)
docs/acceleration.md ADDED
@@ -0,0 +1,57 @@
1
+ # Acceleration
2
+
3
+ Open-Sora aims to provide a high-speed training framework for diffusion models. We can achieve a **55%** training speed-up when training on **64-frame 512x512 videos**. Our framework supports training on **1-minute 1080p videos**.
4
+
5
+ ## Accelerated Transformer
6
+
7
+ Open-Sora boosts the training speed by:
8
+
9
+ - Kernel optimizations including [flash attention](https://github.com/Dao-AILab/flash-attention), a fused layernorm kernel, and kernels compiled by ColossalAI.
10
+ - Hybrid parallelism including ZeRO.
11
+ - Gradient checkpointing for larger batch size.
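+
+ These switches correspond to fields of the training configs shipped in `configs/`. A minimal sketch of the relevant fields (values are illustrative, not recommendations):
+
+ ```python
+ # Acceleration-related fields of a training config
+ dtype = "bf16"                     # mixed-precision training
+ grad_checkpoint = True             # gradient checkpointing for larger batch sizes
+ plugin = "zero2"                   # ZeRO-2 data parallelism ("zero2-seq" adds sequence parallelism)
+ sp_size = 1                        # sequence parallelism size
+
+ model = dict(
+     type="STDiT-XL/2",
+     enable_flashattn=True,         # flash attention kernel
+     enable_layernorm_kernel=True,  # fused layernorm kernel
+ )
+ ```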
12
+
13
+ Our training speed on images is comparable to [OpenDiT](https://github.com/NUS-HPC-AI-Lab/OpenDiT), a project to accelerate DiT training. The training speed is measured on 8 H800 GPUs with a batch size of 128 and an image size of 256x256.
14
+
15
+ | Model | Throughput (img/s/GPU) | Throughput (tokens/s/GPU) |
16
+ | -------- | ---------------------- | ------------------------- |
17
+ | DiT | 100 | 26k |
18
+ | OpenDiT | 175 | 45k |
19
+ | OpenSora | 175 | 45k |
20
+
21
+ ## Efficient STDiT
22
+
23
+ Our STDiT adopts spatial-temporal attention to model the video data. Compared with directly applying full attention on DiT, our STDiT is more efficient as the number of frames increases. Our current framework only supports sequence parallelism for very long sequence.
24
+
25
+ The training speed is measured on 8 H800 GPUs with acceleration techniques applied, GC means gradient checkpointing. Both with T5 conditioning like PixArt.
26
+
27
+ | Model | Setting | Throughput (sample/s/GPU) | Throughput (tokens/s/GPU) |
28
+ | ---------------- | -------------- | ------------------------- | ------------------------- |
29
+ | DiT | 16x256 (4k) | 7.20 | 29k |
30
+ | STDiT | 16x256 (4k) | 7.00 | 28k |
31
+ | DiT | 16x512 (16k) | 0.85 | 14k |
32
+ | STDiT | 16x512 (16k) | 1.45 | 23k |
33
+ | DiT (GC) | 64x512 (65k) | 0.08 | 5k |
34
+ | STDiT (GC) | 64x512 (65k) | 0.40 | 25k |
35
+ | STDiT (GC, sp=2) | 360x512 (370k) | 0.10 | 18k |
36
+
37
+ With a 4x downsampling in the temporal dimension with Video-VAE, an 24fps video has 450 frames. The gap between the speed of STDiT (28k tokens/s) and DiT on images (up to 45k tokens/s) mainly comes from the T5 and VAE encoding, and temperal attention.
38
+
39
+ ## Accelerated Encoder (T5, VAE)
40
+
41
+ During training, texts are encoded by T5, and videos are encoded by VAE. Typically there are two ways to accelerate the training:
42
+
43
+ 1. Preprocess text and video data in advance and save them to disk.
44
+ 2. Encode text and video data during training, and accelerate the encoding process.
45
+
46
+ For option 1, 120 tokens for one sample require 1M disk space, and a 64x64x64 latent requires 4M. Considering a training dataset with 10M video clips, the total disk space required is 50TB. Our storage system is not ready at this time for this scale of data.
47
+
48
+ For option 2, we boost T5 speed and memory requirement. According to [OpenDiT](https://github.com/NUS-HPC-AI-Lab/OpenDiT), we find VAE consumes a large number of GPU memory. Thus we split batch size into smaller ones for VAE encoding. With both techniques, we can greatly accelerated the training speed.
49
+
50
+ The training speed is measured on 8 H800 GPUs with STDiT.
51
+
52
+ | Acceleration | Setting | Throughput (img/s/GPU) | Throughput (tokens/s/GPU) |
53
+ | ------------ | ------------- | ---------------------- | ------------------------- |
54
+ | Baseline | 16x256 (4k) | 6.16 | 25k |
55
+ | w. faster T5 | 16x256 (4k) | 7.00 | 29k |
56
+ | Baseline | 64x512 (65k) | 0.94 | 15k |
57
+ | w. both | 64x512 (65k) | 1.45 | 23k |
docs/commands.md ADDED
@@ -0,0 +1,91 @@
1
+ # Commands
2
+
3
+ ## Inference
4
+
5
+ You can modify corresponding config files to change the inference settings. See more details [here](/docs/structure.md#inference-config-demos).
6
+
7
+ ### Inference with DiT pretrained on ImageNet
8
+
9
+ The following command automatically downloads the pretrained weights on ImageNet and runs inference.
10
+
11
+ ```bash
12
+ python scripts/inference.py configs/dit/inference/1x256x256-class.py --ckpt-path DiT-XL-2-256x256.pt
13
+ ```
14
+
15
+ ### Inference with Latte pretrained on UCF101
16
+
17
+ The following command automatically downloads the pretrained weights on UCF101 and runs inference.
18
+
19
+ ```bash
20
+ python scripts/inference.py configs/latte/inference/16x256x256-class.py --ckpt-path Latte-XL-2-256x256-ucf101.pt
21
+ ```
22
+
23
+ ### Inference with PixArt-α pretrained weights
24
+
25
+ Download T5 into `./pretrained_models` and run the following command.
26
+
27
+ ```bash
28
+ # 256x256
29
+ torchrun --standalone --nproc_per_node 1 scripts/inference.py configs/pixart/inference/1x256x256.py --ckpt-path PixArt-XL-2-256x256.pth
30
+
31
+ # 512x512
32
+ torchrun --standalone --nproc_per_node 1 scripts/inference.py configs/pixart/inference/1x512x512.py --ckpt-path PixArt-XL-2-512x512.pth
33
+
34
+ # 1024 multi-scale
35
+ torchrun --standalone --nproc_per_node 1 scripts/inference.py configs/pixart/inference/1x1024MS.py --ckpt-path PixArt-XL-2-1024MS.pth
36
+ ```
37
+
38
+ ### Inference with checkpoints saved during training
39
+
40
+ During training, an experiment logging folder is created in the `outputs` directory. Under each checkpoint folder, e.g. `epoch12-global_step2000`, there is an `ema.pt` file and the shared `model` folder. Run the following command to perform inference.
41
+
42
+ ```bash
43
+ # inference with ema model
44
+ torchrun --standalone --nproc_per_node 1 scripts/inference.py configs/opensora/inference/16x256x256.py --ckpt-path outputs/001-STDiT-XL-2/epoch12-global_step2000/ema.pt
45
+
46
+ # inference with model
47
+ torchrun --standalone --nproc_per_node 1 scripts/inference.py configs/opensora/inference/16x256x256.py --ckpt-path outputs/001-STDiT-XL-2/epoch12-global_step2000
48
+
49
+ # inference with sequence parallelism
50
+ # sequence parallelism is enabled automatically when nproc_per_node is larger than 1
51
+ torchrun --standalone --nproc_per_node 2 scripts/inference.py configs/opensora/inference/16x256x256.py --ckpt-path outputs/001-STDiT-XL-2/epoch12-global_step2000
52
+ ```
53
+
54
+ The second command will automatically generate a `model_ckpt.pt` file in the checkpoint folder.
55
+
56
+ ### Inference Hyperparameters
57
+
58
+ 1. DPM-solver is good at fast inference for images. However, the video results are not satisfactory. You can use it for quick demos.
59
+
60
+ ```python
61
+ type="dpm-solver"
62
+ num_sampling_steps=20
63
+ ```
64
+
65
+ 1. You can use [SVD](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt)'s finetuned VAE decoder on videos for inference (it consumes more memory). However, we do not see a significant improvement in the video results. To use it, download [the pretrained weights](https://huggingface.co/maxin-cn/Latte/tree/main/t2v_required_models/vae_temporal_decoder) into `./pretrained_models/vae_temporal_decoder` and modify the config file as follows.
66
+
67
+ ```python
68
+ vae = dict(
69
+ type="VideoAutoencoderKLTemporalDecoder",
70
+ from_pretrained="pretrained_models/vae_temporal_decoder",
71
+ )
+ ```
72
+
73
+ ## Training
74
+
75
+ To resume training, run the following command. `--load` is different from `--ckpt-path`: it also loads the optimizer and dataloader states.
76
+
77
+ ```bash
78
+ torchrun --nnodes=1 --nproc_per_node=8 scripts/train.py configs/opensora/train/64x512x512.py --data-path YOUR_CSV_PATH --load YOUR_PRETRAINED_CKPT
79
+ ```
80
+
81
+ To enable wandb logging, add `--wandb` to the command.
82
+
83
+ ```bash
84
+ WANDB_API_KEY=YOUR_WANDB_API_KEY torchrun --nnodes=1 --nproc_per_node=8 scripts/train.py configs/opensora/train/64x512x512.py --data-path YOUR_CSV_PATH --wandb True
85
+ ```
86
+
87
+ You can modify corresponding config files to change the training settings. See more details [here](/docs/structure.md#training-config-demos).
88
+
89
+ ### Training Hyperparameters
90
+
91
+ 1. `dtype` is the data type for training. Only `fp16` and `bf16` are supported. ColossalAI automatically enables mixed-precision training for `fp16` and `bf16`. During training, we find `bf16` more stable.
docs/datasets.md ADDED
@@ -0,0 +1,28 @@
1
+ # Datasets
2
+
3
+ ## Datasets used for now
4
+
5
+ ### HD-VG-130M
6
+
7
+ [HD-VG-130M](https://github.com/daooshee/HD-VG-130M?tab=readme-ov-file) comprises 130M text-video pairs. The captions are generated by BLIP-2. We find the scene cuts and the caption quality relatively poor. The dataset contains 20 splits. For OpenSora 1.0, we use the first split. We plan to use the whole dataset and re-process it.
8
+
9
+ ### Inter4k
10
+
11
+ [Inter4k](https://github.com/alexandrosstergiou/Inter4K) is a dataset containing 1k video clips with 4K resolution. The dataset is proposed for super-resolution tasks. We use the dataset for HQ training. The videos are processed as mentioned [here](/README.md#data-processing).
12
+
13
+ ### Pexels.com
14
+
15
+ [Pexels.com](https://www.pexels.com/) is a website that provides free stock photos and videos. We collect 19K video clips from this website for HQ training. The videos are processed as mentioned [here](/README.md#data-processing).
16
+
17
+ ## Datasets watching list
18
+
19
+ We are also watching the following datasets and considering using them in the future, depending on our disk space and the quality of each dataset.
20
+
21
+ | Name | Size | Description |
22
+ | ----------------- | ------------ | ----------------------------- |
23
+ | Panda-70M | 70M videos | High quality video-text pairs |
24
+ | WebVid-10M | 10M videos | Low quality |
25
+ | InternVid-10M-FLT | 10M videos | |
26
+ | EGO4D | 3670 hours | |
27
+ | OpenDV-YouTube | 1700 hours | |
28
+ | VidProM | 6.69M videos | |
docs/report_v1.md ADDED
@@ -0,0 +1,47 @@
1
+ # Open-Sora v1 Report
2
+
3
+ OpenAI's Sora is amazing at generating one-minute high-quality videos. However, it reveals almost no details about its implementation. To make AI more "open", we are dedicated to building an open-source version of Sora. This report describes our first attempt to train a transformer-based video diffusion model.
4
+
5
+ ## Efficiency in choosing the architecture
6
+
7
+ To lower the computational cost, we want to utilize existing VAE models. Sora uses a spatial-temporal VAE to reduce the temporal dimension. However, we found no open-source high-quality spatial-temporal VAE model. [MAGVIT](https://github.com/google-research/magvit)'s 4x4x4 VAE is not open-sourced, while [VideoGPT](https://wilson1yan.github.io/videogpt/index.html)'s 2x4x4 VAE shows low quality in our experiments. Thus, we decided to use a 2D VAE (from [Stability-AI](https://huggingface.co/stabilityai/sd-vae-ft-mse-original)) in our first version.
8
+
9
+ Video training involves a large number of tokens. For a 24fps 1-minute video, we have 1440 frames. With 4x VAE downsampling and 2x patch-size downsampling, we have 1440x1024≈1.5M tokens. Computing full attention over 1.5M tokens is prohibitively expensive. Thus, we use spatial-temporal attention to reduce the cost, following [Latte](https://github.com/Vchitect/Latte).
10
+
11
+ As shown in the figure, we insert a temporal attention layer right after each spatial attention layer in STDiT (ST stands for spatial-temporal). This is similar to variant 3 in Latte's paper, although we do not control for a similar number of parameters across these variants. While Latte's paper claims its variant is better than variant 3, our experiments on 16x256x256 videos show that, for the same number of iterations, the performance ranks as: DiT (full) > STDiT (Sequential) > STDiT (Parallel) ≈ Latte. Thus, we choose STDiT (Sequential) for its efficiency. A speed benchmark is provided [here](/docs/acceleration.md#efficient-stdit).
12
+
13
+ ![Architecture Comparison](https://i0.imgs.ovh/2024/03/15/eLk9D.png)
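+
+ Conceptually, a sequential STDiT block alternates attention over the spatial and temporal axes before the usual conditioning and feed-forward steps. The sketch below is a simplified illustration under assumed sub-layer names (`spatial_attn`, `temporal_attn`, `cross_attn`, `mlp`); it is not the actual implementation in `opensora/models/stdit`:
+
+ ```python
+ import torch
+
+ def stdit_block_forward(block, x: torch.Tensor, text_cond: torch.Tensor) -> torch.Tensor:
+     # x: (B, T, S, C) video tokens, with T frames and S spatial tokens per frame
+     B, T, S, C = x.shape
+     # spatial attention: each frame attends over its S tokens
+     x = x + block.spatial_attn(x.reshape(B * T, S, C)).reshape(B, T, S, C)
+     # temporal attention: each spatial location attends over its T frames
+     x_t = x.transpose(1, 2).reshape(B * S, T, C)
+     x = x + block.temporal_attn(x_t).reshape(B, S, T, C).transpose(1, 2)
+     # text conditioning and feed-forward, as in PixArt-style DiT blocks
+     x = x + block.cross_attn(x.reshape(B, T * S, C), text_cond).reshape(B, T, S, C)
+     x = x + block.mlp(x)
+     return x
+ ```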
14
+
15
+ To focus on video generation, we hope to build on a powerful image generation model. [PixArt-α](https://github.com/PixArt-alpha/PixArt-alpha) is an efficiently trained, high-quality image generation model with a T5-conditioned DiT structure. We initialize our model with PixArt-α and initialize the projection layer of each inserted temporal attention with zeros. This initialization preserves the model's image generation ability at the beginning of training, which Latte's architecture cannot do. The inserted attention layers increase the number of parameters from 580M to 724M.
16
+
17
+ ![Architecture](https://i0.imgs.ovh/2024/03/16/erC1d.png)
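+
+ Zero-initializing the output projection of each newly inserted temporal attention makes its residual branch a no-op at the start of training, so the PixArt-α image prior is preserved. A minimal sketch, assuming the projection is an `nn.Linear` named `proj`:
+
+ ```python
+ import torch.nn as nn
+
+ def zero_init_temporal_proj(temporal_attn: nn.Module) -> None:
+     # the output projection starts at zero, so the inserted branch initially contributes nothing
+     nn.init.zeros_(temporal_attn.proj.weight)
+     if temporal_attn.proj.bias is not None:
+         nn.init.zeros_(temporal_attn.proj.bias)
+ ```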
18
+
19
+ Drawing from the success of PixArt-α and Stable Video Diffusion, we also adopt a progressive training strategy: 16x256x256 on a 366K-clip pretraining dataset, and then 16x256x256, 16x512x512, and 64x512x512 on 20K high-quality clips. With scaled positional embeddings, this strategy greatly reduces the computational cost.
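+
+ Scaled positional embeddings let a model pretrained at one resolution or frame interval be reused at another by rescaling the coordinate grid before computing the sinusoidal embedding; this is what the `space_scale` and `time_scale` config fields control. The snippet below is a simplified 1D illustration, not the exact code:
+
+ ```python
+ import numpy as np
+
+ def sincos_pos_embed_1d(dim: int, length: int, scale: float = 1.0) -> np.ndarray:
+     # positions are rescaled so the pretrained model sees a familiar coordinate range
+     pos = np.arange(length, dtype=np.float64) / scale
+     omega = 1.0 / 10000 ** (np.arange(dim // 2, dtype=np.float64) / (dim // 2))
+     out = np.outer(pos, omega)                                 # (length, dim // 2)
+     return np.concatenate([np.sin(out), np.cos(out)], axis=1)  # (length, dim)
+ ```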
20
+
21
+ We also tried a 3D patch embedder in DiT. However, with 2x downsampling on the temporal dimension, the generated videos have low quality. Thus, we leave temporal downsampling to a temporal VAE in our next version. For now, we sample every 3 frames when training with 16 frames and every 2 frames when training with 64 frames.
22
+
23
+ ## Data is the key to high quality
24
+
25
+ We find that the quantity and quality of data have a great impact on the quality of the generated videos, even larger than the model architecture and training strategy. At this time, we have only prepared the first split (366K video clips) of [HD-VG-130M](https://github.com/daooshee/HD-VG-130M). The quality of these videos varies greatly, and the captions are not very accurate. Thus, we further collect 20K relatively high-quality videos from [Pexels](https://www.pexels.com/), which provides videos under a free license. We caption the videos with LLaVA, an image captioning model, using three frames and a carefully designed prompt. With this prompt, LLaVA generates good-quality captions.
26
+
27
+ ![Caption](https://i0.imgs.ovh/2024/03/16/eXdvC.png)
28
+
29
+ As we place more emphasis on data quality, we plan to collect more data and build a video preprocessing pipeline for our next version.
30
+
31
+ ## Training Details
32
+
33
+ With a limited training budget, we made only a few explorations. We find that a learning rate of 1e-4 is too large, so we scale it down to 2e-5. When training with a large batch size, we find `fp16` less stable than `bf16`; it may lead to generation failure. Thus, we switch to `bf16` for training on 64x512x512. For other hyperparameters, we follow previous works.
34
+
35
+ ## Loss curves
36
+
37
+ 16x256x256 Pretraining Loss Curve
38
+
39
+ ![16x256x256 Pretraining Loss Curve](https://i0.imgs.ovh/2024/03/16/erXQj.png)
40
+
41
+ 16x256x256 HQ Training Loss Curve
42
+
43
+ ![16x256x256 HQ Training Loss Curve](https://i0.imgs.ovh/2024/03/16/ernXv.png)
44
+
45
+ 16x512x512 HQ Training Loss Curve
46
+
47
+ ![16x512x512 HQ Training Loss Curve](https://i0.imgs.ovh/2024/03/16/erHBe.png)
docs/structure.md ADDED
@@ -0,0 +1,178 @@
1
+ # Repo & Config Structure
2
+
3
+ ## Repo Structure
4
+
5
+ ```plaintext
6
+ Open-Sora
7
+ ├── README.md
8
+ ├── docs
9
+ │ ├── acceleration.md -> Acceleration & Speed benchmark
10
+ │ ├── commands.md -> Commands for training & inference
11
+ │ ├── datasets.md -> Datasets used in this project
12
+ │ ├── structure.md -> This file
13
+ │ └── report_v1.md -> Report for Open-Sora v1
14
+ ├── scripts
15
+ │ ├── train.py -> diffusion training script
16
+ │ └── inference.py -> diffusion inference script
17
+ ├── configs -> Configs for training & inference
18
+ ├── opensora
19
+ │ ├── __init__.py
20
+ │ ├── registry.py -> Registry helper
21
+ │   ├── acceleration -> Acceleration related code
22
+ │   ├── dataset -> Dataset related code
23
+ │   ├── models
24
+ │   │   ├── layers -> Common layers
25
+ │   │   ├── vae -> VAE as image encoder
26
+ │   │   ├── text_encoder -> Text encoder
27
+ │   │   │   ├── classes.py -> Class id encoder (inference only)
28
+ │   │   │   ├── clip.py -> CLIP encoder
29
+ │   │   │   └── t5.py -> T5 encoder
30
+ │   │   ├── dit
31
+ │   │   ├── latte
32
+ │   │   ├── pixart
33
+ │   │   └── stdit -> Our STDiT related code
34
+ │   ├── schedulers -> Diffusion schedulers
35
+ │   │   ├── iddpm -> IDDPM for training and inference
36
+ │   │ └── dpms -> DPM-Solver for fast inference
37
+ │ └── utils
38
+ └── tools -> Tools for data processing and more
39
+ ```
40
+
41
+ ## Configs
42
+
43
+ Our config files follow [MMEngine](https://github.com/open-mmlab/mmengine). MMEngine reads a config file (a `.py` file) and parses it into a dictionary-like object.
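+
+ For illustration, one of the config files listed below can be loaded into such an object with MMEngine's `Config` (the project's own loader in [config_utils.py](/opensora/utils/config_utils.py) additionally merges command-line overrides):
+
+ ```python
+ from mmengine.config import Config
+
+ cfg = Config.fromfile("configs/opensora/inference/16x256x256.py")
+ print(cfg.num_frames, cfg.image_size, cfg.model["type"])
+ # fields can also be overridden programmatically before use
+ cfg.batch_size = 4
+ ```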
44
+
45
+ ```plaintext
46
+ Open-Sora
47
+ └── configs -> Configs for training & inference
48
+ ├── opensora -> STDiT related configs
49
+ │ ├── inference
50
+ │ │ ├── 16x256x256.py -> Sample videos 16 frames 256x256
51
+ │ │ ├── 16x512x512.py -> Sample videos 16 frames 512x512
52
+ │ │ └── 64x512x512.py -> Sample videos 64 frames 512x512
53
+ │ └── train
54
+ │ ├── 16x256x256.py -> Train on videos 16 frames 256x256
55
+ │ ├── 16x512x512.py -> Train on videos 16 frames 512x512
56
+ │ └── 64x512x512.py -> Train on videos 64 frames 512x512
57
+ ├── dit -> DiT related configs
58
+    │   ├── inference
59
+    │   │   ├── 1x256x256-class.py -> Sample images with ckpts from DiT
60
+    │   │   ├── 1x256x256.py -> Sample images with clip condition
61
+    │   │   └── 16x256x256.py -> Sample videos
62
+    │   └── train
63
+    │     ├── 1x256x256.py -> Train on images with clip condition
64
+    │      └── 16x256x256.py -> Train on videos
65
+ ├── latte -> Latte related configs
66
+ └── pixart -> PixArt related configs
67
+ ```
68
+
69
+ ## Inference config demos
70
+
71
+ To change the inference settings, you can directly modify the corresponding config file. Or you can pass arguments to overwrite the config file ([config_utils.py](/opensora/utils/config_utils.py)). To change sampling prompts, you should modify the `.txt` file passed to the `--prompt_path` argument.
72
+
73
+ ```plaintext
74
+ --prompt_path ./assets/texts/t2v_samples.txt -> prompt_path
75
+ --ckpt-path ./path/to/your/ckpt.pth -> model["from_pretrained"]
76
+ ```
77
+
78
+ The explanation of each field is provided below.
79
+
80
+ ```python
81
+ # Define sampling size
82
+ num_frames = 64 # number of frames
83
+ fps = 24 // 2 # frames per second (divided by 2 for frame_interval=2)
84
+ image_size = (512, 512) # image size (height, width)
85
+
86
+ # Define model
87
+ model = dict(
88
+ type="STDiT-XL/2", # Select model type (STDiT-XL/2, DiT-XL/2, etc.)
89
+ space_scale=1.0, # (Optional) Space positional encoding scale (new height / old height)
90
+ time_scale=2 / 3, # (Optional) Time positional encoding scale (new frame_interval / old frame_interval)
91
+ enable_flashattn=True, # (Optional) Speed up training and inference with flash attention
92
+ enable_layernorm_kernel=True, # (Optional) Speed up training and inference with fused kernel
93
+ from_pretrained="PRETRAINED_MODEL", # (Optional) Load from pretrained model
94
+ no_temporal_pos_emb=True, # (Optional) Disable temporal positional encoding (for image)
95
+ )
96
+ vae = dict(
97
+ type="VideoAutoencoderKL", # Select VAE type
98
+ from_pretrained="stabilityai/sd-vae-ft-ema", # Load from pretrained VAE
99
+ micro_batch_size=128, # VAE with micro batch size to save memory
100
+ )
101
+ text_encoder = dict(
102
+ type="t5", # Select text encoder type (t5, clip)
103
+ from_pretrained="./pretrained_models/t5_ckpts", # Load from pretrained text encoder
104
+ model_max_length=120, # Maximum length of input text
105
+ )
106
+ scheduler = dict(
107
+ type="iddpm", # Select scheduler type (iddpm, dpm-solver)
108
+ num_sampling_steps=100, # Number of sampling steps
109
+ cfg_scale=7.0, # hyper-parameter for classifier-free diffusion
110
+ )
111
+ dtype = "fp16" # Computation type (fp16, fp32, bf16)
112
+
113
+ # Other settings
114
+ batch_size = 1 # batch size
115
+ seed = 42 # random seed
116
+ prompt_path = "./assets/texts/t2v_samples.txt" # path to prompt file
117
+ save_dir = "./samples" # path to save samples
118
+ ```
119
+
120
+ ## Training config demos
121
+
122
+ ```python
123
+ # Define sampling size
124
+ num_frames = 64
125
+ frame_interval = 2 # sample every 2 frames
126
+ image_size = (512, 512)
127
+
128
+ # Define dataset
129
+ root = None # root path to the dataset
130
+ data_path = "CSV_PATH" # path to the csv file
131
+ use_image_transform = False # True if training on images
132
+ num_workers = 4 # number of workers for dataloader
133
+
134
+ # Define acceleration
135
+ dtype = "bf16" # Computation type (fp16, bf16)
136
+ grad_checkpoint = True # Use gradient checkpointing
137
+ plugin = "zero2" # Plugin for distributed training (zero2, zero2-seq)
138
+ sp_size = 1 # Sequence parallelism size (1 for no sequence parallelism)
139
+
140
+ # Define model
141
+ model = dict(
142
+ type="STDiT-XL/2",
143
+ space_scale=1.0,
144
+ time_scale=2 / 3,
145
+ from_pretrained="YOUR_PRETRAINED_MODEL",
146
+ enable_flashattn=True, # Enable flash attention
147
+ enable_layernorm_kernel=True, # Enable layernorm kernel
148
+ )
149
+ vae = dict(
150
+ type="VideoAutoencoderKL",
151
+ from_pretrained="stabilityai/sd-vae-ft-ema",
152
+ micro_batch_size=128,
153
+ )
154
+ text_encoder = dict(
155
+ type="t5",
156
+ from_pretrained="./pretrained_models/t5_ckpts",
157
+ model_max_length=120,
158
+ shardformer=True, # Enable shardformer for T5 acceleration
159
+ )
160
+ scheduler = dict(
161
+ type="iddpm",
162
+ timestep_respacing="", # Default 1000 timesteps
163
+ )
164
+
165
+ # Others
166
+ seed = 42
167
+ outputs = "outputs" # path to save checkpoints
168
+ wandb = False # Use wandb for logging
169
+
170
+ epochs = 1000 # number of epochs (just large enough, kill when satisfied)
171
+ log_every = 10
172
+ ckpt_every = 250
173
+ load = None # path to resume training
174
+
175
+ batch_size = 4
176
+ lr = 2e-5
177
+ grad_clip = 1.0 # gradient clipping
178
+ ```
opensora/__init__.py ADDED
@@ -0,0 +1,4 @@
1
+ from .acceleration import *
2
+ from .datasets import *
3
+ from .models import *
4
+ from .registry import *
opensora/acceleration/__init__.py ADDED
File without changes
opensora/acceleration/checkpoint.py ADDED
@@ -0,0 +1,24 @@
1
+ from collections.abc import Iterable
2
+
3
+ import torch.nn as nn
4
+ from torch.utils.checkpoint import checkpoint, checkpoint_sequential
5
+
6
+
7
+ def set_grad_checkpoint(model, use_fp32_attention=False, gc_step=1):
8
+ assert isinstance(model, nn.Module)
9
+
10
+ def set_attr(module):
11
+ module.grad_checkpointing = True
12
+ module.fp32_attention = use_fp32_attention
13
+ module.grad_checkpointing_step = gc_step
14
+
15
+ model.apply(set_attr)
16
+
17
+
18
+ def auto_grad_checkpoint(module, *args, **kwargs):
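+ # Run `module` with gradient checkpointing if `set_grad_checkpoint` has marked it:
+ # a single module goes through `checkpoint`, while an iterable of blocks is
+ # checkpointed in chunks of `grad_checkpointing_step` via `checkpoint_sequential`.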
19
+ if getattr(module, "grad_checkpointing", False):
20
+ if not isinstance(module, Iterable):
21
+ return checkpoint(module, *args, **kwargs)
22
+ gc_step = module[0].grad_checkpointing_step
23
+ return checkpoint_sequential(module, gc_step, *args, **kwargs)
24
+ return module(*args, **kwargs)
opensora/acceleration/communications.py ADDED
@@ -0,0 +1,188 @@
1
+ import torch
2
+ import torch.distributed as dist
3
+
4
+
5
+ # ====================
6
+ # All-To-All
7
+ # ====================
8
+ def _all_to_all(
9
+ input_: torch.Tensor,
10
+ world_size: int,
11
+ group: dist.ProcessGroup,
12
+ scatter_dim: int,
13
+ gather_dim: int,
14
+ ):
15
+ input_list = [t.contiguous() for t in torch.tensor_split(input_, world_size, scatter_dim)]
16
+ output_list = [torch.empty_like(input_list[0]) for _ in range(world_size)]
17
+ dist.all_to_all(output_list, input_list, group=group)
18
+ return torch.cat(output_list, dim=gather_dim).contiguous()
19
+
20
+
21
+ class _AllToAll(torch.autograd.Function):
22
+ """All-to-all communication.
23
+
24
+ Args:
25
+ input_: input matrix
26
+ process_group: communication group
27
+ scatter_dim: scatter dimension
28
+ gather_dim: gather dimension
29
+ """
30
+
31
+ @staticmethod
32
+ def forward(ctx, input_, process_group, scatter_dim, gather_dim):
33
+ ctx.process_group = process_group
34
+ ctx.scatter_dim = scatter_dim
35
+ ctx.gather_dim = gather_dim
36
+ ctx.world_size = dist.get_world_size(process_group)
37
+ output = _all_to_all(input_, ctx.world_size, process_group, scatter_dim, gather_dim)
38
+ return output
39
+
40
+ @staticmethod
41
+ def backward(ctx, grad_output):
42
+ grad_output = _all_to_all(
43
+ grad_output,
44
+ ctx.world_size,
45
+ ctx.process_group,
46
+ ctx.gather_dim,
47
+ ctx.scatter_dim,
48
+ )
49
+ return (
50
+ grad_output,
51
+ None,
52
+ None,
53
+ None,
54
+ )
55
+
56
+
57
+ def all_to_all(
58
+ input_: torch.Tensor,
59
+ process_group: dist.ProcessGroup,
60
+ scatter_dim: int = 2,
61
+ gather_dim: int = 1,
62
+ ):
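+ # Autograd-aware all-to-all: scatter `input_` along `scatter_dim` and concatenate the
+ # received chunks along `gather_dim` across `process_group`. With the defaults
+ # (scatter_dim=2, gather_dim=1) this is typically used to switch activations between
+ # sequence-sharded and head-sharded layouts in sequence-parallel attention.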
63
+ return _AllToAll.apply(input_, process_group, scatter_dim, gather_dim)
64
+
65
+
66
+ def _gather(
67
+ input_: torch.Tensor,
68
+ world_size: int,
69
+ group: dist.ProcessGroup,
70
+ gather_dim: int,
+ gather_list=None,
71
+ ):
72
+ if gather_list is None:
73
+ gather_list = [torch.empty_like(input_) for _ in range(world_size)]
74
+ dist.all_gather(gather_list, input_, group=group)
75
+ return torch.cat(gather_list, dim=gather_dim).contiguous()
76
+
77
+
78
+ # ====================
79
+ # Gather-Split
80
+ # ====================
81
+
82
+
83
+ def _split(input_, pg: dist.ProcessGroup, dim=-1):
84
+ # skip if only one rank involved
85
+ world_size = dist.get_world_size(pg)
86
+ rank = dist.get_rank(pg)
87
+ if world_size == 1:
88
+ return input_
89
+
90
+ # Split along last dimension.
91
+ dim_size = input_.size(dim)
92
+ assert dim_size % world_size == 0, (
93
+ f"The dimension to split ({dim_size}) is not a multiple of world size ({world_size}), "
94
+ f"cannot split tensor evenly"
95
+ )
96
+
97
+ tensor_list = torch.split(input_, dim_size // world_size, dim=dim)
98
+ output = tensor_list[rank].contiguous()
99
+
100
+ return output
101
+
102
+
103
+ def _gather(input_, pg: dist.ProcessGroup, dim=-1):
104
+ # skip if only one rank involved
105
+ input_ = input_.contiguous()
106
+ world_size = dist.get_world_size(pg)
107
+ dist.get_rank(pg)
108
+
109
+ if world_size == 1:
110
+ return input_
111
+
112
+ # all gather
113
+ tensor_list = [torch.empty_like(input_) for _ in range(world_size)]
114
+ assert input_.device.type == "cuda"
115
+ torch.distributed.all_gather(tensor_list, input_, group=pg)
116
+
117
+ # concat
118
+ output = torch.cat(tensor_list, dim=dim).contiguous()
119
+
120
+ return output
121
+
122
+
123
+ class _GatherForwardSplitBackward(torch.autograd.Function):
124
+ """Gather the input from model parallel region and concatenate.
125
+
126
+ Args:
127
+ input_: input matrix.
128
+ process_group: parallel mode.
129
+ dim: dimension
130
+ """
131
+
132
+ @staticmethod
133
+ def symbolic(graph, input_):
134
+ return _gather(input_)
135
+
136
+ @staticmethod
137
+ def forward(ctx, input_, process_group, dim, grad_scale):
138
+ ctx.mode = process_group
139
+ ctx.dim = dim
140
+ ctx.grad_scale = grad_scale
141
+ return _gather(input_, process_group, dim)
142
+
143
+ @staticmethod
144
+ def backward(ctx, grad_output):
145
+ if ctx.grad_scale == "up":
146
+ grad_output = grad_output * dist.get_world_size(ctx.mode)
147
+ elif ctx.grad_scale == "down":
148
+ grad_output = grad_output / dist.get_world_size(ctx.mode)
149
+
150
+ return _split(grad_output, ctx.mode, ctx.dim), None, None, None
151
+
152
+
153
+ class _SplitForwardGatherBackward(torch.autograd.Function):
154
+ """
155
+ Split the input and keep only the corresponding chuck to the rank.
156
+
157
+ Args:
158
+ input_: input matrix.
159
+ process_group: parallel mode.
160
+ dim: dimension
161
+ """
162
+
163
+ @staticmethod
164
+ def symbolic(graph, input_):
165
+ return _split(input_)
166
+
167
+ @staticmethod
168
+ def forward(ctx, input_, process_group, dim, grad_scale):
169
+ ctx.mode = process_group
170
+ ctx.dim = dim
171
+ ctx.grad_scale = grad_scale
172
+ return _split(input_, process_group, dim)
173
+
174
+ @staticmethod
175
+ def backward(ctx, grad_output):
176
+ if ctx.grad_scale == "up":
177
+ grad_output = grad_output * dist.get_world_size(ctx.mode)
178
+ elif ctx.grad_scale == "down":
179
+ grad_output = grad_output / dist.get_world_size(ctx.mode)
180
+ return _gather(grad_output, ctx.mode, ctx.dim), None, None, None
181
+
182
+
183
+ def split_forward_gather_backward(input_, process_group, dim, grad_scale=1.0):
184
+ return _SplitForwardGatherBackward.apply(input_, process_group, dim, grad_scale)
185
+
186
+
187
+ def gather_forward_split_backward(input_, process_group, dim, grad_scale=None):
188
+ return _GatherForwardSplitBackward.apply(input_, process_group, dim, grad_scale)
opensora/acceleration/parallel_states.py ADDED
@@ -0,0 +1,19 @@
1
+ import torch.distributed as dist
2
+
3
+ _GLOBAL_PARALLEL_GROUPS = dict()
4
+
5
+
6
+ def set_data_parallel_group(group: dist.ProcessGroup):
7
+ _GLOBAL_PARALLEL_GROUPS["data"] = group
8
+
9
+
10
+ def get_data_parallel_group():
11
+ return _GLOBAL_PARALLEL_GROUPS.get("data", None)
12
+
13
+
14
+ def set_sequence_parallel_group(group: dist.ProcessGroup):
15
+ _GLOBAL_PARALLEL_GROUPS["sequence"] = group
16
+
17
+
18
+ def get_sequence_parallel_group():
19
+ return _GLOBAL_PARALLEL_GROUPS.get("sequence", None)
opensora/acceleration/plugin.py ADDED
@@ -0,0 +1,100 @@
+import random
+from typing import Optional
+
+import numpy as np
+import torch
+from colossalai.booster.plugin import LowLevelZeroPlugin
+from colossalai.cluster import ProcessGroupMesh
+from torch.utils.data import DataLoader
+from torch.utils.data.distributed import DistributedSampler
+
+DP_AXIS, SP_AXIS = 0, 1
+
+
+class ZeroSeqParallelPlugin(LowLevelZeroPlugin):
+    def __init__(
+        self,
+        sp_size: int = 1,
+        stage: int = 2,
+        precision: str = "fp16",
+        initial_scale: float = 2**32,
+        min_scale: float = 1,
+        growth_factor: float = 2,
+        backoff_factor: float = 0.5,
+        growth_interval: int = 1000,
+        hysteresis: int = 2,
+        max_scale: float = 2**32,
+        max_norm: float = 0.0,
+        norm_type: float = 2.0,
+        reduce_bucket_size_in_m: int = 12,
+        communication_dtype: Optional[torch.dtype] = None,
+        overlap_communication: bool = True,
+        cpu_offload: bool = False,
+        master_weights: bool = True,
+        verbose: bool = False,
+    ) -> None:
+        super().__init__(
+            stage=stage,
+            precision=precision,
+            initial_scale=initial_scale,
+            min_scale=min_scale,
+            growth_factor=growth_factor,
+            backoff_factor=backoff_factor,
+            growth_interval=growth_interval,
+            hysteresis=hysteresis,
+            max_scale=max_scale,
+            max_norm=max_norm,
+            norm_type=norm_type,
+            reduce_bucket_size_in_m=reduce_bucket_size_in_m,
+            communication_dtype=communication_dtype,
+            overlap_communication=overlap_communication,
+            cpu_offload=cpu_offload,
+            master_weights=master_weights,
+            verbose=verbose,
+        )
+        self.sp_size = sp_size
+        assert self.world_size % sp_size == 0, "world_size must be divisible by sp_size"
+        self.dp_size = self.world_size // sp_size
+        self.pg_mesh = ProcessGroupMesh(self.dp_size, self.sp_size)
+        self.dp_group = self.pg_mesh.get_group_along_axis(DP_AXIS)
+        self.sp_group = self.pg_mesh.get_group_along_axis(SP_AXIS)
+        self.dp_rank = self.pg_mesh.coordinate(DP_AXIS)
+        self.sp_rank = self.pg_mesh.coordinate(SP_AXIS)
+
+    def __del__(self):
+        """Destroy the process groups in ProcessGroupMesh"""
+        self.pg_mesh.destroy_mesh_process_groups()
+
+    def prepare_dataloader(
+        self,
+        dataset,
+        batch_size,
+        shuffle=False,
+        seed=1024,
+        drop_last=False,
+        pin_memory=False,
+        num_workers=0,
+        distributed_sampler_cls=None,
+        **kwargs,
+    ):
+        _kwargs = kwargs.copy()
+        distributed_sampler_cls = distributed_sampler_cls or DistributedSampler
+        sampler = distributed_sampler_cls(dataset, num_replicas=self.dp_size, rank=self.dp_rank, shuffle=shuffle)
+
+        # Deterministic dataloader
+        def seed_worker(worker_id):
+            worker_seed = seed
+            np.random.seed(worker_seed)
+            torch.manual_seed(worker_seed)
+            random.seed(worker_seed)
+
+        return DataLoader(
+            dataset,
+            batch_size=batch_size,
+            sampler=sampler,
+            worker_init_fn=seed_worker,
+            drop_last=drop_last,
+            pin_memory=pin_memory,
+            num_workers=num_workers,
+            **_kwargs,
+        )
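The plugin arranges ranks on a (data-parallel, sequence-parallel) mesh and shards the dataloader only across the data-parallel axis, so all ranks inside one sequence-parallel group receive the same batch. A rough usage sketch, under the assumption that ColossalAI has already launched the distributed environment (e.g. `colossalai.launch_from_torch`) and that 8 GPUs are visible; the model, optimizer, and hyperparameters are placeholders:

```python
# Hypothetical sketch: distributed env already initialized by colossalai launch.
import torch
from colossalai.booster import Booster

from opensora.acceleration.plugin import ZeroSeqParallelPlugin

plugin = ZeroSeqParallelPlugin(sp_size=2, stage=2, precision="fp16")  # 8 GPUs -> dp_size = 4
booster = Booster(plugin=plugin)

model = torch.nn.Linear(16, 16).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer, *_ = booster.boost(model, optimizer)

# Dataloader is sharded over dp ranks only; every rank in an sp group sees the same batch.
# dataloader = plugin.prepare_dataloader(dataset, batch_size=4, shuffle=True)
```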
opensora/acceleration/shardformer/modeling/__init__.py ADDED
File without changes
opensora/acceleration/shardformer/modeling/t5.py ADDED
@@ -0,0 +1,39 @@
+import torch
+import torch.nn as nn
+
+
+class T5LayerNorm(nn.Module):
+    def __init__(self, hidden_size, eps=1e-6):
+        """
+        Construct a layernorm module in the T5 style. No bias and no subtraction of mean.
+        """
+        super().__init__()
+        self.weight = nn.Parameter(torch.ones(hidden_size))
+        self.variance_epsilon = eps
+
+    def forward(self, hidden_states):
+        # T5 uses a layer norm that only scales and doesn't shift, also known as Root Mean Square
+        # Layer Normalization (https://arxiv.org/abs/1910.07467): the variance is computed without
+        # subtracting the mean and there is no bias. Additionally, the accumulation for
+        # half-precision inputs is done in fp32.
+
+        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
+        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
+
+        # convert into half-precision if necessary
+        if self.weight.dtype in [torch.float16, torch.bfloat16]:
+            hidden_states = hidden_states.to(self.weight.dtype)
+
+        return self.weight * hidden_states
+
+    @staticmethod
+    def from_native_module(module, *args, **kwargs):
+        assert module.__class__.__name__ == "FusedRMSNorm", (
+            "Recovering T5LayerNorm requires the original layer to be apex's Fused RMS Norm. "
+            "Apex's fused norm is automatically used by Hugging Face Transformers: "
+            "https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L265C5-L265C48"
+        )
+
+        layer_norm = T5LayerNorm(module.normalized_shape, eps=module.eps)
+        layer_norm.weight.data.copy_(module.weight.data)
+        layer_norm = layer_norm.to(module.weight.device)
+        return layer_norm
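A quick, self-contained sanity check of what this module computes (it is a plain RMSNorm over the last dimension, with a learnable scale initialized to ones):

```python
import torch

from opensora.acceleration.shardformer.modeling.t5 import T5LayerNorm

norm = T5LayerNorm(hidden_size=8, eps=1e-6)
x = torch.randn(2, 4, 8)

y = norm(x)
rms = x.pow(2).mean(-1, keepdim=True).add(1e-6).rsqrt()
assert torch.allclose(y, x * rms * norm.weight, atol=1e-6)
```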
opensora/acceleration/shardformer/policy/__init__.py ADDED
File without changes
opensora/acceleration/shardformer/policy/t5_encoder.py ADDED
@@ -0,0 +1,67 @@
+from colossalai.shardformer.modeling.jit import get_jit_fused_dropout_add_func
+from colossalai.shardformer.modeling.t5 import get_jit_fused_T5_layer_ff_forward, get_T5_layer_self_attention_forward
+from colossalai.shardformer.policies.base_policy import Policy, SubModuleReplacementDescription
+
+
+class T5EncoderPolicy(Policy):
+    def config_sanity_check(self):
+        assert not self.shard_config.enable_tensor_parallelism
+        assert not self.shard_config.enable_flash_attention
+
+    def preprocess(self):
+        return self.model
+
+    def module_policy(self):
+        from transformers.models.t5.modeling_t5 import T5LayerFF, T5LayerSelfAttention, T5Stack
+
+        policy = {}
+
+        # check whether apex is installed
+        try:
+            from opensora.acceleration.shardformer.modeling.t5 import T5LayerNorm
+
+            # replace the HF fused RMS norm with the plain T5 norm, which is faster here
+            self.append_or_create_submodule_replacement(
+                description=SubModuleReplacementDescription(
+                    suffix="layer_norm",
+                    target_module=T5LayerNorm,
+                ),
+                policy=policy,
+                target_key=T5LayerFF,
+            )
+            self.append_or_create_submodule_replacement(
+                description=SubModuleReplacementDescription(suffix="layer_norm", target_module=T5LayerNorm),
+                policy=policy,
+                target_key=T5LayerSelfAttention,
+            )
+            self.append_or_create_submodule_replacement(
+                description=SubModuleReplacementDescription(suffix="final_layer_norm", target_module=T5LayerNorm),
+                policy=policy,
+                target_key=T5Stack,
+            )
+        except (ImportError, ModuleNotFoundError):
+            pass
+
+        # use jit operator
+        if self.shard_config.enable_jit_fused:
+            self.append_or_create_method_replacement(
+                description={
+                    "forward": get_jit_fused_T5_layer_ff_forward(),
+                    "dropout_add": get_jit_fused_dropout_add_func(),
+                },
+                policy=policy,
+                target_key=T5LayerFF,
+            )
+            self.append_or_create_method_replacement(
+                description={
+                    "forward": get_T5_layer_self_attention_forward(),
+                    "dropout_add": get_jit_fused_dropout_add_func(),
+                },
+                policy=policy,
+                target_key=T5LayerSelfAttention,
+            )
+
+        return policy
+
+    def postprocess(self):
+        return self.model
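A rough sketch of how such a policy is applied to a Hugging Face T5 encoder through ColossalAI's ShardFormer. The `ShardConfig` fields and the `ShardFormer.optimize` call shown here are assumptions about the colossalai API rather than something defined in this commit, and the checkpoint name is only an example:

```python
# Hypothetical sketch, assuming the colossalai ShardFormer interface.
from colossalai.shardformer import ShardConfig, ShardFormer
from transformers import T5EncoderModel

from opensora.acceleration.shardformer.policy.t5_encoder import T5EncoderPolicy

t5 = T5EncoderModel.from_pretrained("DeepFloyd/t5-v1_1-xxl")

shard_config = ShardConfig(
    enable_tensor_parallelism=False,  # required by config_sanity_check above
    enable_jit_fused=True,            # turn on the JIT-fused forward replacements
)
shard_former = ShardFormer(shard_config=shard_config)
t5, _ = shard_former.optimize(t5, policy=T5EncoderPolicy())
```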
opensora/datasets/__init__.py ADDED
@@ -0,0 +1,2 @@
+from .datasets import DatasetFromCSV, get_transforms_image, get_transforms_video
+from .utils import prepare_dataloader, save_sample
opensora/datasets/datasets.py ADDED
@@ -0,0 +1,114 @@
+import csv
+import os
+
+import numpy as np
+import torch
+import torchvision
+import torchvision.transforms as transforms
+from torchvision.datasets.folder import IMG_EXTENSIONS, pil_loader
+
+from . import video_transforms
+from .utils import center_crop_arr
+
+
+def get_transforms_video(resolution=256):
+    transform_video = transforms.Compose(
+        [
+            video_transforms.ToTensorVideo(),  # TCHW
+            video_transforms.RandomHorizontalFlipVideo(),
+            video_transforms.UCFCenterCropVideo(resolution),
+            transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5], inplace=True),
+        ]
+    )
+    return transform_video
+
+
+def get_transforms_image(image_size=256):
+    transform = transforms.Compose(
+        [
+            transforms.Lambda(lambda pil_image: center_crop_arr(pil_image, image_size)),
+            transforms.RandomHorizontalFlip(),
+            transforms.ToTensor(),
+            transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5], inplace=True),
+        ]
+    )
+    return transform
+
+
+class DatasetFromCSV(torch.utils.data.Dataset):
+    """Load videos (or images) according to a CSV file of (path, caption) rows.
+
+    Args:
+        num_frames (int): the number of video frames to load per sample.
+        frame_interval (int): sampling interval between frames.
+        transform (callable): transform applied to every clip or image.
+    """
+
+    def __init__(
+        self,
+        csv_path,
+        num_frames=16,
+        frame_interval=1,
+        transform=None,
+        root=None,
+    ):
+        self.csv_path = csv_path
+        with open(csv_path, "r") as f:
+            reader = csv.reader(f)
+            self.samples = list(reader)
+
+        ext = self.samples[0][0].split(".")[-1]
+        if ext.lower() in ("mp4", "avi", "mov", "mkv"):
+            self.is_video = True
+        else:
+            assert f".{ext.lower()}" in IMG_EXTENSIONS, f"Unsupported file format: {ext}"
+            self.is_video = False
+
+        self.transform = transform
+
+        self.num_frames = num_frames
+        self.frame_interval = frame_interval
+        self.temporal_sample = video_transforms.TemporalRandomCrop(num_frames * frame_interval)
+        self.root = root
+
+    def getitem(self, index):
+        sample = self.samples[index]
+        path = sample[0]
+        if self.root:
+            path = os.path.join(self.root, path)
+        text = sample[1]
+
+        if self.is_video:
+            vframes, aframes, info = torchvision.io.read_video(filename=path, pts_unit="sec", output_format="TCHW")
+            total_frames = len(vframes)
+
+            # Sample video frames
+            start_frame_ind, end_frame_ind = self.temporal_sample(total_frames)
+            assert (
+                end_frame_ind - start_frame_ind >= self.num_frames
+            ), f"{path} with index {index} does not have enough frames."
+            frame_indice = np.linspace(start_frame_ind, end_frame_ind - 1, self.num_frames, dtype=int)
+
+            video = vframes[frame_indice]
+            video = self.transform(video)  # T C H W
+        else:
+            image = pil_loader(path)
+            image = self.transform(image)
+            video = image.unsqueeze(0).repeat(self.num_frames, 1, 1, 1)
+
+        # TCHW -> CTHW
+        video = video.permute(1, 0, 2, 3)
+
+        return {"video": video, "text": text}
+
+    def __getitem__(self, index):
+        for _ in range(10):
+            try:
+                return self.getitem(index)
+            except Exception as e:
+                print(e)
+                index = np.random.randint(len(self))
+        raise RuntimeError("Too many bad data samples.")
+
+    def __len__(self):
+        return len(self.samples)
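A minimal sketch of how this dataset is meant to be used; the CSV path, data root, and frame settings are placeholders, with the CSV expected to contain one `path,caption` row per clip:

```python
from opensora.datasets import DatasetFromCSV, get_transforms_video

dataset = DatasetFromCSV(
    "videos.csv",               # placeholder: path,caption rows
    num_frames=16,
    frame_interval=3,
    transform=get_transforms_video(256),
    root="./data",              # placeholder: prefix joined to each path
)

sample = dataset[0]
# Expected video shape is (C, T, H, W) = (3, 16, 256, 256) after the transforms.
print(sample["video"].shape, sample["text"])
```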
opensora/datasets/utils.py ADDED
@@ -0,0 +1,135 @@
+import random
+from typing import Iterator, Optional
+
+import numpy as np
+import torch
+from PIL import Image
+from torch.distributed import ProcessGroup
+from torch.distributed.distributed_c10d import _get_default_group
+from torch.utils.data import DataLoader, Dataset
+from torch.utils.data.distributed import DistributedSampler
+from torchvision.io import write_video
+from torchvision.utils import save_image
+
+
+def save_sample(x, fps=8, save_path=None, normalize=True, value_range=(-1, 1)):
+    """
+    Args:
+        x (Tensor): shape [C, T, H, W]
+    """
+    assert x.ndim == 4
+
+    if x.shape[1] == 1:  # T = 1: save as image
+        save_path += ".png"
+        x = x.squeeze(1)
+        save_image([x], save_path, normalize=normalize, value_range=value_range)
+    else:
+        save_path += ".mp4"
+        if normalize:
+            low, high = value_range
+            x.clamp_(min=low, max=high)
+            x.sub_(low).div_(max(high - low, 1e-5))
+
+        x = x.mul(255).add_(0.5).clamp_(0, 255).permute(1, 2, 3, 0).to("cpu", torch.uint8)
+        write_video(save_path, x, fps=fps, video_codec="h264")
+    print(f"Saved to {save_path}")
+
+
+class StatefulDistributedSampler(DistributedSampler):
+    def __init__(
+        self,
+        dataset: Dataset,
+        num_replicas: Optional[int] = None,
+        rank: Optional[int] = None,
+        shuffle: bool = True,
+        seed: int = 0,
+        drop_last: bool = False,
+    ) -> None:
+        super().__init__(dataset, num_replicas, rank, shuffle, seed, drop_last)
+        self.start_index: int = 0
+
+    def __iter__(self) -> Iterator:
+        iterator = super().__iter__()
+        indices = list(iterator)
+        indices = indices[self.start_index :]
+        return iter(indices)
+
+    def __len__(self) -> int:
+        return self.num_samples - self.start_index
+
+    def set_start_index(self, start_index: int) -> None:
+        self.start_index = start_index
+
+
+def prepare_dataloader(
+    dataset,
+    batch_size,
+    shuffle=False,
+    seed=1024,
+    drop_last=False,
+    pin_memory=False,
+    num_workers=0,
+    process_group: Optional[ProcessGroup] = None,
+    **kwargs,
+):
+    r"""
+    Prepare a dataloader for distributed training. The dataloader will be wrapped by
+    `torch.utils.data.DataLoader` and `StatefulDistributedSampler`.
+
+    Args:
+        dataset (`torch.utils.data.Dataset`): The dataset to be loaded.
+        shuffle (bool, optional): Whether to shuffle the dataset. Defaults to False.
+        seed (int, optional): Random worker seed for sampling. Defaults to 1024.
+        drop_last (bool, optional): Set to True to drop the last incomplete batch if the dataset size
+            is not divisible by the batch size. If False and the dataset size is not divisible by the
+            batch size, the last batch will be smaller. Defaults to False.
+        pin_memory (bool, optional): Whether to pin memory addresses in CPU memory. Defaults to False.
+        num_workers (int, optional): Number of worker processes for this dataloader. Defaults to 0.
+        kwargs (dict): optional parameters for ``torch.utils.data.DataLoader``; more details can be found in
+            `DataLoader <https://pytorch.org/docs/stable/_modules/torch/utils/data/dataloader.html#DataLoader>`_.
+
+    Returns:
+        :class:`torch.utils.data.DataLoader`: A DataLoader used for training or testing.
+    """
+    _kwargs = kwargs.copy()
+    process_group = process_group or _get_default_group()
+    sampler = StatefulDistributedSampler(
+        dataset, num_replicas=process_group.size(), rank=process_group.rank(), shuffle=shuffle
+    )
+
+    # Deterministic dataloader
+    def seed_worker(worker_id):
+        worker_seed = seed
+        np.random.seed(worker_seed)
+        torch.manual_seed(worker_seed)
+        random.seed(worker_seed)
+
+    return DataLoader(
+        dataset,
+        batch_size=batch_size,
+        sampler=sampler,
+        worker_init_fn=seed_worker,
+        drop_last=drop_last,
+        pin_memory=pin_memory,
+        num_workers=num_workers,
+        **_kwargs,
+    )
+
+
+def center_crop_arr(pil_image, image_size):
+    """
+    Center cropping implementation from ADM.
+    https://github.com/openai/guided-diffusion/blob/8fb3ad9197f16bbc40620447b2742e13458d2831/guided_diffusion/image_datasets.py#L126
+    """
+    while min(*pil_image.size) >= 2 * image_size:
+        pil_image = pil_image.resize(tuple(x // 2 for x in pil_image.size), resample=Image.BOX)
+
+    scale = image_size / min(*pil_image.size)
+    pil_image = pil_image.resize(tuple(round(x * scale) for x in pil_image.size), resample=Image.BICUBIC)
+
+    arr = np.array(pil_image)
+    crop_y = (arr.shape[0] - image_size) // 2
+    crop_x = (arr.shape[1] - image_size) // 2
+    return Image.fromarray(arr[crop_y : crop_y + image_size, crop_x : crop_x + image_size])
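A small sketch tying these helpers together. It assumes `torch.distributed` is already initialized (e.g. launched via torchrun), since `prepare_dataloader` falls back to the default process group, and it uses a toy in-memory dataset in place of `DatasetFromCSV`:

```python
import torch
from torch.utils.data import Dataset

from opensora.datasets import prepare_dataloader, save_sample


class ToyVideoDataset(Dataset):
    """Placeholder dataset producing clips in [-1, 1] with shape (C, T, H, W)."""

    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return {"video": torch.rand(3, 16, 64, 64) * 2 - 1, "text": f"clip {idx}"}


dataloader = prepare_dataloader(ToyVideoDataset(), batch_size=2, shuffle=True, num_workers=0)
batch = next(iter(dataloader))

# save_sample appends the extension itself: single-frame tensors become .png, otherwise .mp4.
save_sample(batch["video"][0], fps=8, save_path="sample_0")
```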
opensora/datasets/video_transforms.py ADDED
@@ -0,0 +1,501 @@
+# Copyright 2024 Vchitect/Latte
+
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+
+#     http://www.apache.org/licenses/LICENSE-2.0
+
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Modified from Latte
+# - This file is adapted from https://github.com/Vchitect/Latte/blob/main/datasets/video_transforms.py
+
+
+import numbers
+import random
+
+import numpy as np
+import torch
+from PIL import Image
+
+
+def _is_tensor_video_clip(clip):
+    if not torch.is_tensor(clip):
+        raise TypeError("clip should be Tensor. Got %s" % type(clip))
+
+    if not clip.ndimension() == 4:
+        raise ValueError("clip should be 4D. Got %dD" % clip.dim())
+
+    return True
+
+
+def center_crop_arr(pil_image, image_size):
+    """
+    Center cropping implementation from ADM.
+    https://github.com/openai/guided-diffusion/blob/8fb3ad9197f16bbc40620447b2742e13458d2831/guided_diffusion/image_datasets.py#L126
+    """
+    while min(*pil_image.size) >= 2 * image_size:
+        pil_image = pil_image.resize(tuple(x // 2 for x in pil_image.size), resample=Image.BOX)
+
+    scale = image_size / min(*pil_image.size)
+    pil_image = pil_image.resize(tuple(round(x * scale) for x in pil_image.size), resample=Image.BICUBIC)
+
+    arr = np.array(pil_image)
+    crop_y = (arr.shape[0] - image_size) // 2
+    crop_x = (arr.shape[1] - image_size) // 2
+    return Image.fromarray(arr[crop_y : crop_y + image_size, crop_x : crop_x + image_size])
+
+
+def crop(clip, i, j, h, w):
+    """
+    Args:
+        clip (torch.tensor): Video clip to be cropped. Size is (T, C, H, W)
+    """
+    if len(clip.size()) != 4:
+        raise ValueError("clip should be a 4D tensor")
+    return clip[..., i : i + h, j : j + w]
+
+
+def resize(clip, target_size, interpolation_mode):
+    if len(target_size) != 2:
+        raise ValueError(f"target size should be tuple (height, width), instead got {target_size}")
+    return torch.nn.functional.interpolate(clip, size=target_size, mode=interpolation_mode, align_corners=False)
+
+
+def resize_scale(clip, target_size, interpolation_mode):
+    if len(target_size) != 2:
+        raise ValueError(f"target size should be tuple (height, width), instead got {target_size}")
+    H, W = clip.size(-2), clip.size(-1)
+    scale_ = target_size[0] / min(H, W)
+    return torch.nn.functional.interpolate(clip, scale_factor=scale_, mode=interpolation_mode, align_corners=False)
+
+
+def resized_crop(clip, i, j, h, w, size, interpolation_mode="bilinear"):
+    """
+    Do spatial cropping and resizing to the video clip
+    Args:
+        clip (torch.tensor): Video clip to be cropped. Size is (T, C, H, W)
+        i (int): i in (i, j), i.e. coordinates of the upper left corner.
+        j (int): j in (i, j), i.e. coordinates of the upper left corner.
+        h (int): Height of the cropped region.
+        w (int): Width of the cropped region.
+        size (tuple(int, int)): height and width of resized clip
+    Returns:
+        clip (torch.tensor): Resized and cropped clip. Size is (T, C, H, W)
+    """
+    if not _is_tensor_video_clip(clip):
+        raise ValueError("clip should be a 4D torch.tensor")
+    clip = crop(clip, i, j, h, w)
+    clip = resize(clip, size, interpolation_mode)
+    return clip
+
+
+def center_crop(clip, crop_size):
+    if not _is_tensor_video_clip(clip):
+        raise ValueError("clip should be a 4D torch.tensor")
+    h, w = clip.size(-2), clip.size(-1)
+    th, tw = crop_size
+    if h < th or w < tw:
+        raise ValueError("height and width must be no smaller than crop_size")
+
+    i = int(round((h - th) / 2.0))
+    j = int(round((w - tw) / 2.0))
+    return crop(clip, i, j, th, tw)
+
+
+def center_crop_using_short_edge(clip):
+    if not _is_tensor_video_clip(clip):
+        raise ValueError("clip should be a 4D torch.tensor")
+    h, w = clip.size(-2), clip.size(-1)
+    if h < w:
+        th, tw = h, h
+        i = 0
+        j = int(round((w - tw) / 2.0))
+    else:
+        th, tw = w, w
+        i = int(round((h - th) / 2.0))
+        j = 0
+    return crop(clip, i, j, th, tw)
+
+
+def random_shift_crop(clip):
+    """
+    Slide along the long edge, with the short edge as crop size
+    """
+    if not _is_tensor_video_clip(clip):
+        raise ValueError("clip should be a 4D torch.tensor")
+    h, w = clip.size(-2), clip.size(-1)
+
+    if h <= w:
+        short_edge = h
+    else:
+        short_edge = w
+
+    th, tw = short_edge, short_edge
+
+    i = torch.randint(0, h - th + 1, size=(1,)).item()
+    j = torch.randint(0, w - tw + 1, size=(1,)).item()
+    return crop(clip, i, j, th, tw)
+
+
+def to_tensor(clip):
+    """
+    Convert tensor data type from uint8 to float, divide value by 255.0 and
+    permute the dimensions of clip tensor
+    Args:
+        clip (torch.tensor, dtype=torch.uint8): Size is (T, C, H, W)
+    Return:
+        clip (torch.tensor, dtype=torch.float): Size is (T, C, H, W)
+    """
+    _is_tensor_video_clip(clip)
+    if not clip.dtype == torch.uint8:
+        raise TypeError("clip tensor should have data type uint8. Got %s" % str(clip.dtype))
+    # return clip.float().permute(3, 0, 1, 2) / 255.0
+    return clip.float() / 255.0
+
+
+def normalize(clip, mean, std, inplace=False):
+    """
+    Args:
+        clip (torch.tensor): Video clip to be normalized. Size is (T, C, H, W)
+        mean (tuple): pixel RGB mean. Size is (3)
+        std (tuple): pixel standard deviation. Size is (3)
+    Returns:
+        normalized clip (torch.tensor): Size is (T, C, H, W)
+    """
+    if not _is_tensor_video_clip(clip):
+        raise ValueError("clip should be a 4D torch.tensor")
+    if not inplace:
+        clip = clip.clone()
+    mean = torch.as_tensor(mean, dtype=clip.dtype, device=clip.device)
+    # print(mean)
+    std = torch.as_tensor(std, dtype=clip.dtype, device=clip.device)
+    clip.sub_(mean[:, None, None, None]).div_(std[:, None, None, None])
+    return clip
+
+
+def hflip(clip):
+    """
+    Args:
+        clip (torch.tensor): Video clip to be flipped. Size is (T, C, H, W)
+    Returns:
+        flipped clip (torch.tensor): Size is (T, C, H, W)
+    """
+    if not _is_tensor_video_clip(clip):
+        raise ValueError("clip should be a 4D torch.tensor")
+    return clip.flip(-1)
+
+
+class RandomCropVideo:
+    def __init__(self, size):
+        if isinstance(size, numbers.Number):
+            self.size = (int(size), int(size))
+        else:
+            self.size = size
+
+    def __call__(self, clip):
+        """
+        Args:
+            clip (torch.tensor): Video clip to be cropped. Size is (T, C, H, W)
+        Returns:
+            torch.tensor: randomly cropped video clip.
+                size is (T, C, OH, OW)
+        """
+        i, j, h, w = self.get_params(clip)
+        return crop(clip, i, j, h, w)
+
+    def get_params(self, clip):
+        h, w = clip.shape[-2:]
+        th, tw = self.size
+
+        if h < th or w < tw:
+            raise ValueError(f"Required crop size {(th, tw)} is larger than input image size {(h, w)}")
+
+        if w == tw and h == th:
+            return 0, 0, h, w
+
+        i = torch.randint(0, h - th + 1, size=(1,)).item()
+        j = torch.randint(0, w - tw + 1, size=(1,)).item()
+
+        return i, j, th, tw
+
+    def __repr__(self) -> str:
+        return f"{self.__class__.__name__}(size={self.size})"
+
+
+class CenterCropResizeVideo:
+    """
+    First center crop the video using the short side as crop size,
+    then resize to the specified size
+    """
+
+    def __init__(
+        self,
+        size,
+        interpolation_mode="bilinear",
+    ):
+        if isinstance(size, tuple):
+            if len(size) != 2:
+                raise ValueError(f"size should be tuple (height, width), instead got {size}")
+            self.size = size
+        else:
+            self.size = (size, size)
+
+        self.interpolation_mode = interpolation_mode
+
+    def __call__(self, clip):
+        """
+        Args:
+            clip (torch.tensor): Video clip to be cropped. Size is (T, C, H, W)
+        Returns:
+            torch.tensor: scale resized / center cropped video clip.
+                size is (T, C, crop_size, crop_size)
+        """
+        clip_center_crop = center_crop_using_short_edge(clip)
+        clip_center_crop_resize = resize(
+            clip_center_crop, target_size=self.size, interpolation_mode=self.interpolation_mode
+        )
+        return clip_center_crop_resize
+
+    def __repr__(self) -> str:
+        return f"{self.__class__.__name__}(size={self.size}, interpolation_mode={self.interpolation_mode})"
+
+
+class UCFCenterCropVideo:
+    """
+    First scale to the specified size, keeping the aspect ratio with respect to the short edge,
+    then center crop
+    """
+
+    def __init__(
+        self,
+        size,
+        interpolation_mode="bilinear",
+    ):
+        if isinstance(size, tuple):
+            if len(size) != 2:
+                raise ValueError(f"size should be tuple (height, width), instead got {size}")
+            self.size = size
+        else:
+            self.size = (size, size)
+
+        self.interpolation_mode = interpolation_mode
+
+    def __call__(self, clip):
+        """
+        Args:
+            clip (torch.tensor): Video clip to be cropped. Size is (T, C, H, W)
+        Returns:
+            torch.tensor: scale resized / center cropped video clip.
+                size is (T, C, crop_size, crop_size)
+        """
+        clip_resize = resize_scale(clip=clip, target_size=self.size, interpolation_mode=self.interpolation_mode)
+        clip_center_crop = center_crop(clip_resize, self.size)
+        return clip_center_crop
+
+    def __repr__(self) -> str:
+        return f"{self.__class__.__name__}(size={self.size}, interpolation_mode={self.interpolation_mode})"
+
+
+class KineticsRandomCropResizeVideo:
+    """
+    Slide along the long edge, with the short edge as crop size, then resize to the desired size.
+    """
+
+    def __init__(
+        self,
+        size,
+        interpolation_mode="bilinear",
+    ):
+        if isinstance(size, tuple):
+            if len(size) != 2:
+                raise ValueError(f"size should be tuple (height, width), instead got {size}")
+            self.size = size
+        else:
+            self.size = (size, size)
+
+        self.interpolation_mode = interpolation_mode
+
+    def __call__(self, clip):
+        clip_random_crop = random_shift_crop(clip)
+        clip_resize = resize(clip_random_crop, self.size, self.interpolation_mode)
+        return clip_resize
+
+
+class CenterCropVideo:
+    def __init__(
+        self,
+        size,
+        interpolation_mode="bilinear",
+    ):
+        if isinstance(size, tuple):
+            if len(size) != 2:
+                raise ValueError(f"size should be tuple (height, width), instead got {size}")
+            self.size = size
+        else:
+            self.size = (size, size)
+
+        self.interpolation_mode = interpolation_mode
+
+    def __call__(self, clip):
+        """
+        Args:
+            clip (torch.tensor): Video clip to be cropped. Size is (T, C, H, W)
+        Returns:
+            torch.tensor: center cropped video clip.
+                size is (T, C, crop_size, crop_size)
+        """
+        clip_center_crop = center_crop(clip, self.size)
+        return clip_center_crop
+
+    def __repr__(self) -> str:
+        return f"{self.__class__.__name__}(size={self.size}, interpolation_mode={self.interpolation_mode})"
+
+
+class NormalizeVideo:
+    """
+    Normalize the video clip by mean subtraction and division by standard deviation
+    Args:
+        mean (3-tuple): pixel RGB mean
+        std (3-tuple): pixel RGB standard deviation
+        inplace (boolean): whether to do in-place normalization
+    """
+
+    def __init__(self, mean, std, inplace=False):
+        self.mean = mean
+        self.std = std
+        self.inplace = inplace
+
+    def __call__(self, clip):
+        """
+        Args:
+            clip (torch.tensor): video clip to be normalized. Size is (C, T, H, W)
+        """
+        return normalize(clip, self.mean, self.std, self.inplace)
+
+    def __repr__(self) -> str:
+        return f"{self.__class__.__name__}(mean={self.mean}, std={self.std}, inplace={self.inplace})"
+
+
+class ToTensorVideo:
+    """
+    Convert tensor data type from uint8 to float, divide value by 255.0 and
+    permute the dimensions of clip tensor
+    """
+
+    def __init__(self):
+        pass
+
+    def __call__(self, clip):
+        """
+        Args:
+            clip (torch.tensor, dtype=torch.uint8): Size is (T, C, H, W)
+        Return:
+            clip (torch.tensor, dtype=torch.float): Size is (T, C, H, W)
+        """
+        return to_tensor(clip)
+
+    def __repr__(self) -> str:
+        return self.__class__.__name__
+
+
+class RandomHorizontalFlipVideo:
+    """
+    Flip the video clip along the horizontal direction with a given probability
+    Args:
+        p (float): probability of the clip being flipped. Default value is 0.5
+    """
+
+    def __init__(self, p=0.5):
+        self.p = p
+
+    def __call__(self, clip):
+        """
+        Args:
+            clip (torch.tensor): Size is (T, C, H, W)
+        Return:
+            clip (torch.tensor): Size is (T, C, H, W)
+        """
+        if random.random() < self.p:
+            clip = hflip(clip)
+        return clip
+
+    def __repr__(self) -> str:
+        return f"{self.__class__.__name__}(p={self.p})"
+
+
+# ------------------------------------------------------------
+# ---------------------  Sampling  ---------------------------
+# ------------------------------------------------------------
+class TemporalRandomCrop(object):
+    """Temporally crop the given frame indices at a random location.
+
+    Args:
+        size (int): Desired number of frames to be seen by the model.
+    """
+
+    def __init__(self, size):
+        self.size = size
+
+    def __call__(self, total_frames):
+        rand_end = max(0, total_frames - self.size - 1)
+        begin_index = random.randint(0, rand_end)
+        end_index = min(begin_index + self.size, total_frames)
+        return begin_index, end_index
+
+
+if __name__ == "__main__":
+    import os
+
+    import torchvision.io as io
+    from torchvision import transforms
+    from torchvision.utils import save_image
+
+    vframes, aframes, info = io.read_video(filename="./v_Archery_g01_c03.avi", pts_unit="sec", output_format="TCHW")
+
+    trans = transforms.Compose(
+        [
+            ToTensorVideo(),
+            RandomHorizontalFlipVideo(),
+            UCFCenterCropVideo(512),
+            # NormalizeVideo(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5], inplace=True),
+            transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5], inplace=True),
+        ]
+    )
+
+    target_video_len = 32
+    frame_interval = 1
+    total_frames = len(vframes)
+    print(total_frames)
+
+    temporal_sample = TemporalRandomCrop(target_video_len * frame_interval)
+
+    # Sample video frames
+    start_frame_ind, end_frame_ind = temporal_sample(total_frames)
+    # print(start_frame_ind)
+    # print(end_frame_ind)
+    assert end_frame_ind - start_frame_ind >= target_video_len
+    frame_indice = np.linspace(start_frame_ind, end_frame_ind - 1, target_video_len, dtype=int)
+    print(frame_indice)
+
+    select_vframes = vframes[frame_indice]
+    print(select_vframes.shape)
+    print(select_vframes.dtype)
+
+    select_vframes_trans = trans(select_vframes)
+    print(select_vframes_trans.shape)
+    print(select_vframes_trans.dtype)
+
+    select_vframes_trans_int = ((select_vframes_trans * 0.5 + 0.5) * 255).to(dtype=torch.uint8)
+    print(select_vframes_trans_int.dtype)
+    print(select_vframes_trans_int.permute(0, 2, 3, 1).shape)
+
+    io.write_video("./test.avi", select_vframes_trans_int.permute(0, 2, 3, 1), fps=8)
+
+    for i in range(target_video_len):
+        save_image(
+            select_vframes_trans[i], os.path.join("./test000", "%04d.png" % i), normalize=True, value_range=(-1, 1)
+        )
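Beyond the `__main__` demo above, a small illustration of the sampling helper on its own: pick a random window of `num_frames * frame_interval` frames, then take `num_frames` evenly spaced indices from it, exactly as `DatasetFromCSV.getitem` does. The frame counts below are placeholders:

```python
import numpy as np

from opensora.datasets.video_transforms import TemporalRandomCrop

num_frames, frame_interval, total_frames = 16, 3, 120
sampler = TemporalRandomCrop(num_frames * frame_interval)

start, end = sampler(total_frames)                       # random window of up to 48 frames
indices = np.linspace(start, end - 1, num_frames, dtype=int)
print(start, end, indices)
```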
opensora/models/__init__.py ADDED
@@ -0,0 +1,6 @@
+from .dit import *
+from .latte import *
+from .pixart import *
+from .stdit import *
+from .text_encoder import *
+from .vae import *
opensora/models/dit/__init__.py ADDED
@@ -0,0 +1 @@
+from .dit import DiT, DiT_XL_2, DiT_XL_2x2