argojuni0506 committed
Commit 559df5a (verified) · Parent(s): 5794b84

Upload 220 files

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. .gitattributes +1 -0
  2. .github/workflows/codeql.yml +100 -0
  3. .gitignore +398 -0
  4. .gitmodules +3 -0
  5. CODE_OF_CONDUCT.md +9 -0
  6. DATASET.md +231 -0
  7. LICENSE +21 -0
  8. README.md +325 -0
  9. SECURITY.md +41 -0
  10. SUPPORT.md +25 -0
  11. app.py +403 -0
  12. app_text.py +266 -0
  13. assets/T.ply +3 -0
  14. assets/example_image/T.png +3 -0
  15. assets/example_image/typical_building_building.png +3 -0
  16. assets/example_image/typical_building_castle.png +3 -0
  17. assets/example_image/typical_building_colorful_cottage.png +3 -0
  18. assets/example_image/typical_building_maya_pyramid.png +3 -0
  19. assets/example_image/typical_building_mushroom.png +3 -0
  20. assets/example_image/typical_building_space_station.png +3 -0
  21. assets/example_image/typical_creature_dragon.png +3 -0
  22. assets/example_image/typical_creature_elephant.png +3 -0
  23. assets/example_image/typical_creature_furry.png +3 -0
  24. assets/example_image/typical_creature_quadruped.png +3 -0
  25. assets/example_image/typical_creature_robot_crab.png +3 -0
  26. assets/example_image/typical_creature_robot_dinosour.png +3 -0
  27. assets/example_image/typical_creature_rock_monster.png +3 -0
  28. assets/example_image/typical_humanoid_block_robot.png +3 -0
  29. assets/example_image/typical_humanoid_dragonborn.png +3 -0
  30. assets/example_image/typical_humanoid_dwarf.png +3 -0
  31. assets/example_image/typical_humanoid_goblin.png +3 -0
  32. assets/example_image/typical_humanoid_mech.png +3 -0
  33. assets/example_image/typical_misc_crate.png +3 -0
  34. assets/example_image/typical_misc_fireplace.png +3 -0
  35. assets/example_image/typical_misc_gate.png +3 -0
  36. assets/example_image/typical_misc_lantern.png +3 -0
  37. assets/example_image/typical_misc_magicbook.png +3 -0
  38. assets/example_image/typical_misc_mailbox.png +3 -0
  39. assets/example_image/typical_misc_monster_chest.png +3 -0
  40. assets/example_image/typical_misc_paper_machine.png +3 -0
  41. assets/example_image/typical_misc_phonograph.png +3 -0
  42. assets/example_image/typical_misc_portal2.png +3 -0
  43. assets/example_image/typical_misc_storage_chest.png +3 -0
  44. assets/example_image/typical_misc_telephone.png +3 -0
  45. assets/example_image/typical_misc_television.png +3 -0
  46. assets/example_image/typical_misc_workbench.png +3 -0
  47. assets/example_image/typical_vehicle_biplane.png +3 -0
  48. assets/example_image/typical_vehicle_bulldozer.png +3 -0
  49. assets/example_image/typical_vehicle_cart.png +3 -0
  50. assets/example_image/typical_vehicle_excavator.png +3 -0
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+ assets/T.ply filter=lfs diff=lfs merge=lfs -text
.github/workflows/codeql.yml ADDED
@@ -0,0 +1,100 @@
+ # For most projects, this workflow file will not need changing; you simply need
+ # to commit it to your repository.
+ #
+ # You may wish to alter this file to override the set of languages analyzed,
+ # or to provide custom queries or build logic.
+ #
+ # ******** NOTE ********
+ # We have attempted to detect the languages in your repository. Please check
+ # the `language` matrix defined below to confirm you have the correct set of
+ # supported CodeQL languages.
+ #
+ name: "CodeQL Advanced"
+
+ on:
+   push:
+     branches: [ "main" ]
+   pull_request:
+     branches: [ "main" ]
+   schedule:
+     - cron: '31 15 * * 6'
+
+ jobs:
+   analyze:
+     name: Analyze (${{ matrix.language }})
+     # Runner size impacts CodeQL analysis time. To learn more, please see:
+     #   - https://gh.io/recommended-hardware-resources-for-running-codeql
+     #   - https://gh.io/supported-runners-and-hardware-resources
+     #   - https://gh.io/using-larger-runners (GitHub.com only)
+     # Consider using larger runners or machines with greater resources for possible analysis time improvements.
+     runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
+     permissions:
+       # required for all workflows
+       security-events: write
+
+       # required to fetch internal or private CodeQL packs
+       packages: read
+
+       # only required for workflows in private repositories
+       actions: read
+       contents: read
+
+     strategy:
+       fail-fast: false
+       matrix:
+         include:
+         - language: c-cpp
+           build-mode: none
+         - language: python
+           build-mode: autobuild
+         # CodeQL supports the following values for 'language': 'actions', 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift'
+         # Use 'c-cpp' to analyze code written in C, C++ or both
+         # Use 'java-kotlin' to analyze code written in Java, Kotlin or both
+         # Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
+         # To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
+         # see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
+         # If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
+         # your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
+     steps:
+     - name: Checkout repository
+       uses: actions/checkout@v4
+
+     # Add any setup steps before running the `github/codeql-action/init` action.
+     # This includes steps like installing compilers or runtimes (`actions/setup-node`
+     # or others). This is typically only required for manual builds.
+     # - name: Setup runtime (example)
+     #   uses: actions/setup-example@v1
+
+     # Initializes the CodeQL tools for scanning.
+     - name: Initialize CodeQL
+       uses: github/codeql-action/init@v3
+       with:
+         languages: ${{ matrix.language }}
+         build-mode: ${{ matrix.build-mode }}
+         # If you wish to specify custom queries, you can do so here or in a config file.
+         # By default, queries listed here will override any specified in a config file.
+         # Prefix the list here with "+" to use these queries and those in the config file.
+
+         # For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
+         # queries: security-extended,security-and-quality
+
+     # If the analyze step fails for one of the languages you are analyzing with
+     # "We were unable to automatically build your code", modify the matrix above
+     # to set the build mode to "manual" for that language. Then modify this step
+     # to build your code.
+     # ℹ️ Command-line programs to run using the OS shell.
+     # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
+     - if: matrix.build-mode == 'manual'
+       shell: bash
+       run: |
+         echo 'If you are using a "manual" build mode for one or more of the' \
+           'languages you are analyzing, replace this with the commands to build' \
+           'your code, for example:'
+         echo '  make bootstrap'
+         echo '  make release'
+         exit 1
+
+     - name: Perform CodeQL Analysis
+       uses: github/codeql-action/analyze@v3
+       with:
+         category: "/language:${{matrix.language}}"
.gitignore ADDED
@@ -0,0 +1,398 @@
+ ## Ignore Visual Studio temporary files, build results, and
+ ## files generated by popular Visual Studio add-ons.
+ ##
+ ## Get latest from https://github.com/github/gitignore/blob/main/VisualStudio.gitignore
+
+ # User-specific files
+ *.rsuser
+ *.suo
+ *.user
+ *.userosscache
+ *.sln.docstates
+
+ # User-specific files (MonoDevelop/Xamarin Studio)
+ *.userprefs
+
+ # Mono auto generated files
+ mono_crash.*
+
+ # Build results
+ [Dd]ebug/
+ [Dd]ebugPublic/
+ [Rr]elease/
+ [Rr]eleases/
+ x64/
+ x86/
+ [Ww][Ii][Nn]32/
+ [Aa][Rr][Mm]/
+ [Aa][Rr][Mm]64/
+ bld/
+ [Bb]in/
+ [Oo]bj/
+ [Ll]og/
+ [Ll]ogs/
+
+ # Visual Studio 2015/2017 cache/options directory
+ .vs/
+ # Uncomment if you have tasks that create the project's static files in wwwroot
+ #wwwroot/
+
+ # Visual Studio 2017 auto generated files
+ Generated\ Files/
+
+ # MSTest test Results
+ [Tt]est[Rr]esult*/
+ [Bb]uild[Ll]og.*
+
+ # NUnit
+ *.VisualState.xml
+ TestResult.xml
+ nunit-*.xml
+
+ # Build Results of an ATL Project
+ [Dd]ebugPS/
+ [Rr]eleasePS/
+ dlldata.c
+
+ # Benchmark Results
+ BenchmarkDotNet.Artifacts/
+
+ # .NET Core
+ project.lock.json
+ project.fragment.lock.json
+ artifacts/
+
+ # ASP.NET Scaffolding
+ ScaffoldingReadMe.txt
+
+ # StyleCop
+ StyleCopReport.xml
+
+ # Files built by Visual Studio
+ *_i.c
+ *_p.c
+ *_h.h
+ *.ilk
+ *.meta
+ *.obj
+ *.iobj
+ *.pch
+ *.pdb
+ *.ipdb
+ *.pgc
+ *.pgd
+ *.rsp
+ *.sbr
+ *.tlb
+ *.tli
+ *.tlh
+ *.tmp
+ *.tmp_proj
+ *_wpftmp.csproj
+ *.log
+ *.tlog
+ *.vspscc
+ *.vssscc
+ .builds
+ *.pidb
+ *.svclog
+ *.scc
+
+ # Chutzpah Test files
+ _Chutzpah*
+
+ # Visual C++ cache files
+ ipch/
+ *.aps
+ *.ncb
+ *.opendb
+ *.opensdf
+ *.sdf
+ *.cachefile
+ *.VC.db
+ *.VC.VC.opendb
+
+ # Visual Studio profiler
+ *.psess
+ *.vsp
+ *.vspx
+ *.sap
+
+ # Visual Studio Trace Files
+ *.e2e
+
+ # TFS 2012 Local Workspace
+ $tf/
+
+ # Guidance Automation Toolkit
+ *.gpState
+
+ # ReSharper is a .NET coding add-in
+ _ReSharper*/
+ *.[Rr]e[Ss]harper
+ *.DotSettings.user
+
+ # TeamCity is a build add-in
+ _TeamCity*
+
+ # DotCover is a Code Coverage Tool
+ *.dotCover
+
+ # AxoCover is a Code Coverage Tool
+ .axoCover/*
+ !.axoCover/settings.json
+
+ # Coverlet is a free, cross platform Code Coverage Tool
+ coverage*.json
+ coverage*.xml
+ coverage*.info
+
+ # Visual Studio code coverage results
+ *.coverage
+ *.coveragexml
+
+ # NCrunch
+ _NCrunch_*
+ .*crunch*.local.xml
+ nCrunchTemp_*
+
+ # MightyMoose
+ *.mm.*
+ AutoTest.Net/
+
+ # Web workbench (sass)
+ .sass-cache/
+
+ # Installshield output folder
+ [Ee]xpress/
+
+ # DocProject is a documentation generator add-in
+ DocProject/buildhelp/
+ DocProject/Help/*.HxT
+ DocProject/Help/*.HxC
+ DocProject/Help/*.hhc
+ DocProject/Help/*.hhk
+ DocProject/Help/*.hhp
+ DocProject/Help/Html2
+ DocProject/Help/html
+
+ # Click-Once directory
+ publish/
+
+ # Publish Web Output
+ *.[Pp]ublish.xml
+ *.azurePubxml
+ # Note: Comment the next line if you want to checkin your web deploy settings,
+ # but database connection strings (with potential passwords) will be unencrypted
+ *.pubxml
+ *.publishproj
+
+ # Microsoft Azure Web App publish settings. Comment the next line if you want to
+ # checkin your Azure Web App publish settings, but sensitive information contained
+ # in these scripts will be unencrypted
+ PublishScripts/
+
+ # NuGet Packages
+ *.nupkg
+ # NuGet Symbol Packages
+ *.snupkg
+ # The packages folder can be ignored because of Package Restore
+ **/[Pp]ackages/*
+ # except build/, which is used as an MSBuild target.
+ !**/[Pp]ackages/build/
+ # Uncomment if necessary however generally it will be regenerated when needed
+ #!**/[Pp]ackages/repositories.config
+ # NuGet v3's project.json files produces more ignorable files
+ *.nuget.props
+ *.nuget.targets
+
+ # Microsoft Azure Build Output
+ csx/
+ *.build.csdef
+
+ # Microsoft Azure Emulator
+ ecf/
+ rcf/
+
+ # Windows Store app package directories and files
+ AppPackages/
+ BundleArtifacts/
+ Package.StoreAssociation.xml
+ _pkginfo.txt
+ *.appx
+ *.appxbundle
+ *.appxupload
+
+ # Visual Studio cache files
+ # files ending in .cache can be ignored
+ *.[Cc]ache
+ # but keep track of directories ending in .cache
+ !?*.[Cc]ache/
+
+ # Others
+ ClientBin/
+ ~$*
+ *~
+ *.dbmdl
+ *.dbproj.schemaview
+ *.jfm
+ *.pfx
+ *.publishsettings
+ orleans.codegen.cs
+
+ # Including strong name files can present a security risk
+ # (https://github.com/github/gitignore/pull/2483#issue-259490424)
+ #*.snk
+
+ # Since there are multiple workflows, uncomment next line to ignore bower_components
+ # (https://github.com/github/gitignore/pull/1529#issuecomment-104372622)
+ #bower_components/
+
+ # RIA/Silverlight projects
+ Generated_Code/
+
+ # Backup & report files from converting an old project file
+ # to a newer Visual Studio version. Backup files are not needed,
+ # because we have git ;-)
+ _UpgradeReport_Files/
+ Backup*/
+ UpgradeLog*.XML
+ UpgradeLog*.htm
+ ServiceFabricBackup/
+ *.rptproj.bak
+
+ # SQL Server files
+ *.mdf
+ *.ldf
+ *.ndf
+
+ # Business Intelligence projects
+ *.rdl.data
+ *.bim.layout
+ *.bim_*.settings
+ *.rptproj.rsuser
+ *- [Bb]ackup.rdl
+ *- [Bb]ackup ([0-9]).rdl
+ *- [Bb]ackup ([0-9][0-9]).rdl
+
+ # Microsoft Fakes
+ FakesAssemblies/
+
+ # GhostDoc plugin setting file
+ *.GhostDoc.xml
+
+ # Node.js Tools for Visual Studio
+ .ntvs_analysis.dat
+ node_modules/
+
+ # Visual Studio 6 build log
+ *.plg
+
+ # Visual Studio 6 workspace options file
+ *.opt
+
+ # Visual Studio 6 auto-generated workspace file (contains which files were open etc.)
+ *.vbw
+
+ # Visual Studio 6 auto-generated project file (contains which files were open etc.)
+ *.vbp
+
+ # Visual Studio 6 workspace and project file (working project files containing files to include in project)
+ *.dsw
+ *.dsp
+
+ # Visual Studio 6 technical files
+ *.ncb
+ *.aps
+
+ # Visual Studio LightSwitch build output
+ **/*.HTMLClient/GeneratedArtifacts
+ **/*.DesktopClient/GeneratedArtifacts
+ **/*.DesktopClient/ModelManifest.xml
+ **/*.Server/GeneratedArtifacts
+ **/*.Server/ModelManifest.xml
+ _Pvt_Extensions
+
+ # Paket dependency manager
+ .paket/paket.exe
+ paket-files/
+
+ # FAKE - F# Make
+ .fake/
+
+ # CodeRush personal settings
+ .cr/personal
+
+ # Python Tools for Visual Studio (PTVS)
+ __pycache__/
+ *.pyc
+
+ # Cake - Uncomment if you are using it
+ # tools/**
+ # !tools/packages.config
+
+ # Tabs Studio
+ *.tss
+
+ # Telerik's JustMock configuration file
+ *.jmconfig
+
+ # BizTalk build output
+ *.btp.cs
+ *.btm.cs
+ *.odx.cs
+ *.xsd.cs
+
+ # OpenCover UI analysis results
+ OpenCover/
+
+ # Azure Stream Analytics local run output
+ ASALocalRun/
+
+ # MSBuild Binary and Structured Log
+ *.binlog
+
+ # NVidia Nsight GPU debugger configuration file
+ *.nvuser
+
+ # MFractors (Xamarin productivity tool) working folder
+ .mfractor/
+
+ # Local History for Visual Studio
+ .localhistory/
+
+ # Visual Studio History (VSHistory) files
+ .vshistory/
+
+ # BeatPulse healthcheck temp database
+ healthchecksdb
+
+ # Backup folder for Package Reference Convert tool in Visual Studio 2017
+ MigrationBackup/
+
+ # Ionide (cross platform F# VS Code tools) working folder
+ .ionide/
+
+ # Fody - auto-generated XML schema
+ FodyWeavers.xsd
+
+ # VS Code files for those working on multiple tools
+ .vscode/*
+ !.vscode/settings.json
+ !.vscode/tasks.json
+ !.vscode/launch.json
+ !.vscode/extensions.json
+ *.code-workspace
+
+ # Local History for Visual Studio Code
+ .history/
+
+ # Windows Installer files from build outputs
+ *.cab
+ *.msi
+ *.msix
+ *.msm
+ *.msp
+
+ # JetBrains Rider
+ *.sln.iml
.gitmodules ADDED
@@ -0,0 +1,3 @@
+ [submodule "trellis/representations/mesh/flexicubes"]
+ 	path = trellis/representations/mesh/flexicubes
+ 	url = https://github.com/MaxtirError/FlexiCubes.git
CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,9 @@
+ # Microsoft Open Source Code of Conduct
+
+ This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+
+ Resources:
+
+ - [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
+ - [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+ - Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns
DATASET.md ADDED
@@ -0,0 +1,231 @@
+ # TRELLIS-500K
+
+ TRELLIS-500K is a dataset of 500K 3D assets curated from [Objaverse(XL)](https://objaverse.allenai.org/), [ABO](https://amazon-berkeley-objects.s3.amazonaws.com/index.html), [3D-FUTURE](https://tianchi.aliyun.com/specials/promotion/alibaba-3d-future), [HSSD](https://huggingface.co/datasets/hssd/hssd-models), and [Toys4k](https://github.com/rehg-lab/lowshot-shapebias/tree/main/toys4k), filtered based on aesthetic scores.
+ This dataset is intended for 3D generation tasks.
+
+ The dataset is provided as CSV files containing the 3D assets' metadata.
+
+ ## Dataset Statistics
+
+ The following table summarizes the dataset's filtering and composition:
+
+ ***NOTE: Some of the 3D assets lack text captions. Please filter out such assets if captions are required.***
+
+ | Source | Aesthetic Score Threshold | Filtered Size | With Captions |
+ |:-:|:-:|:-:|:-:|
+ | ObjaverseXL (sketchfab) | 5.5 | 168307 | 167638 |
+ | ObjaverseXL (github) | 5.5 | 311843 | 306790 |
+ | ABO | 4.5 | 4485 | 4390 |
+ | 3D-FUTURE | 4.5 | 9472 | 9291 |
+ | HSSD | 4.5 | 6670 | 6661 |
+ | All (training set) | - | 500777 | 494770 |
+ | Toys4k (evaluation set) | 4.5 | 3229 | 3180 |
+
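Because the metadata is plain CSV, the caption filter above is straightforward to apply yourself. A minimal pandas sketch follows; it assumes the per-subset metadata is written as `metadata.csv` under the output directory and exposes a `captions` column (both names are assumptions about the toolkit's output schema):

```python
# Sketch: keep only assets that have text captions.
# Assumption: metadata lives at <output_dir>/metadata.csv with a "captions"
# column; adjust the path and column name to the actual schema.
import pandas as pd

metadata = pd.read_csv("datasets/ObjaverseXL_sketchfab/metadata.csv")
with_captions = metadata[metadata["captions"].notna()]
print(f"{len(with_captions)} of {len(metadata)} assets have captions")
```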
+ ## Dataset Location
+
+ The dataset is hosted on Hugging Face Datasets. You can preview the dataset at
+
+ [https://huggingface.co/datasets/JeffreyXiang/TRELLIS-500K](https://huggingface.co/datasets/JeffreyXiang/TRELLIS-500K)
+
+ There is no need to download the CSV files manually. We provide toolkits to load and prepare the dataset.
+
+ ## Dataset Toolkits
+
+ We provide [toolkits](dataset_toolkits) for data preparation.
+
+ ### Step 1: Install Dependencies
+
+ ```sh
+ . ./dataset_toolkits/setup.sh
+ ```
+
+ ### Step 2: Load Metadata
+
+ First, we need to load the metadata of the dataset.
+
+ ```sh
+ python dataset_toolkits/build_metadata.py <SUBSET> --output_dir <OUTPUT_DIR> [--source <SOURCE>]
+ ```
+
+ - `SUBSET`: The subset of the dataset to load. Options are `ObjaverseXL`, `ABO`, `3D-FUTURE`, `HSSD`, and `Toys4k`.
+ - `OUTPUT_DIR`: The directory to save the data.
+ - `SOURCE`: Required if `SUBSET` is `ObjaverseXL`. Options are `sketchfab` and `github`.
+
+ For example, to load the metadata of the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
+
+ ```sh
+ python dataset_toolkits/build_metadata.py ObjaverseXL --source sketchfab --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ ### Step 3: Download Data
+
+ Next, we need to download the 3D assets.
+
+ ```sh
+ python dataset_toolkits/download.py <SUBSET> --output_dir <OUTPUT_DIR> [--rank <RANK> --world_size <WORLD_SIZE>]
+ ```
+
+ - `SUBSET`: The subset of the dataset to download. Options are `ObjaverseXL`, `ABO`, `3D-FUTURE`, `HSSD`, and `Toys4k`.
+ - `OUTPUT_DIR`: The directory to save the data.
+
+ You can also specify the `RANK` and `WORLD_SIZE` of the current process if you are using multiple nodes for data preparation; a sketch of fanning shards out over local processes follows the example below.
+
+ For example, to download the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
+
+ ***NOTE: The example command below sets a large `WORLD_SIZE` for demonstration purposes. Only a small portion of the dataset will be downloaded.***
+
+ ```sh
+ python dataset_toolkits/download.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab --world_size 160000
+ ```
+
+ Some datasets may require interactive login to Hugging Face or manual downloading. Please follow the instructions given by the toolkits.
+
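To parallelize the download on one machine, the same `--rank`/`--world_size` flags can drive several worker processes. A hedged sketch (the shard count is illustrative, and it assumes `download.py` behaves exactly as documented above):

```python
# Sketch: launch one download shard per local worker process.
import subprocess

WORLD_SIZE = 4  # illustrative shard count
procs = [
    subprocess.Popen([
        "python", "dataset_toolkits/download.py", "ObjaverseXL",
        "--output_dir", "datasets/ObjaverseXL_sketchfab",
        "--rank", str(rank), "--world_size", str(WORLD_SIZE),
    ])
    for rank in range(WORLD_SIZE)
]
for p in procs:
    p.wait()  # block until every shard has finished
```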
+ After downloading, update the metadata file with:
+
+ ```sh
+ python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ ### Step 4: Render Multiview Images
+
+ Multiview images can be rendered with:
+
+ ```sh
+ python dataset_toolkits/render.py <SUBSET> --output_dir <OUTPUT_DIR> [--num_views <NUM_VIEWS>] [--rank <RANK> --world_size <WORLD_SIZE>]
+ ```
+
+ - `SUBSET`: The subset of the dataset to render. Options are `ObjaverseXL`, `ABO`, `3D-FUTURE`, `HSSD`, and `Toys4k`.
+ - `OUTPUT_DIR`: The directory to save the data.
+ - `NUM_VIEWS`: The number of views to render. Default is 150.
+ - `RANK` and `WORLD_SIZE`: Multi-node configuration.
+
+ For example, to render the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
+
+ ```sh
+ python dataset_toolkits/render.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ Don't forget to update the metadata file with:
+
+ ```sh
+ python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ ### Step 5: Voxelize 3D Models
+
+ We can voxelize the 3D models with:
+
+ ```sh
+ python dataset_toolkits/voxelize.py <SUBSET> --output_dir <OUTPUT_DIR> [--rank <RANK> --world_size <WORLD_SIZE>]
+ ```
+
+ - `SUBSET`: The subset of the dataset to voxelize. Options are `ObjaverseXL`, `ABO`, `3D-FUTURE`, `HSSD`, and `Toys4k`.
+ - `OUTPUT_DIR`: The directory to save the data.
+ - `RANK` and `WORLD_SIZE`: Multi-node configuration.
+
+ For example, to voxelize the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
+
+ ```sh
+ python dataset_toolkits/voxelize.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ Then update the metadata file with:
+
+ ```sh
+ python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ ### Step 6: Extract DINO Features
+
+ To prepare the training data for the SLat VAE, we need to extract DINO features from multiview images and aggregate them into sparse voxel grids.
+
+ ```sh
+ python dataset_toolkits/extract_feature.py --output_dir <OUTPUT_DIR> [--rank <RANK> --world_size <WORLD_SIZE>]
+ ```
+
+ - `OUTPUT_DIR`: The directory to save the data.
+ - `RANK` and `WORLD_SIZE`: Multi-node configuration.
+
+ For example, to extract DINO features for the ObjaverseXL (sketchfab) subset and save them to `datasets/ObjaverseXL_sketchfab`, we can run:
+
+ ```sh
+ python dataset_toolkits/extract_feature.py --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ Then update the metadata file with:
+
+ ```sh
+ python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ ### Step 7: Encode Sparse Structures
+
+ Encode the sparse structures into latents to train the first-stage generator:
+
+ ```sh
+ python dataset_toolkits/encode_ss_latent.py --output_dir <OUTPUT_DIR> [--rank <RANK> --world_size <WORLD_SIZE>]
+ ```
+
+ - `OUTPUT_DIR`: The directory to save the data.
+ - `RANK` and `WORLD_SIZE`: Multi-node configuration.
+
+ For example, to encode the sparse structures into latents for the ObjaverseXL (sketchfab) subset and save them to `datasets/ObjaverseXL_sketchfab`, we can run:
+
+ ```sh
+ python dataset_toolkits/encode_ss_latent.py --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ Then update the metadata file with:
+
+ ```sh
+ python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ ### Step 8: Encode SLat
+
+ Encode SLat for training the second-stage generator:
+
+ ```sh
+ python dataset_toolkits/encode_latent.py --output_dir <OUTPUT_DIR> [--rank <RANK> --world_size <WORLD_SIZE>]
+ ```
+
+ - `OUTPUT_DIR`: The directory to save the data.
+ - `RANK` and `WORLD_SIZE`: Multi-node configuration.
+
+ For example, to encode SLat for the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
+
+ ```sh
+ python dataset_toolkits/encode_latent.py --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ Then update the metadata file with:
+
+ ```sh
+ python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ ### Step 9: Render Image Conditions
+
+ To train the image-conditioned generator, we need to render image conditions with augmented views.
+
+ ```sh
+ python dataset_toolkits/render_cond.py <SUBSET> --output_dir <OUTPUT_DIR> [--num_views <NUM_VIEWS>] [--rank <RANK> --world_size <WORLD_SIZE>]
+ ```
+
+ - `SUBSET`: The subset of the dataset to render. Options are `ObjaverseXL`, `ABO`, `3D-FUTURE`, `HSSD`, and `Toys4k`.
+ - `OUTPUT_DIR`: The directory to save the data.
+ - `NUM_VIEWS`: The number of views to render. Default is 24.
+ - `RANK` and `WORLD_SIZE`: Multi-node configuration.
+
+ For example, to render image conditions for the ObjaverseXL (sketchfab) subset and save them to `datasets/ObjaverseXL_sketchfab`, we can run:
+
+ ```sh
+ python dataset_toolkits/render_cond.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
+ ```
+
+ Then update the metadata file with:
+
+ ```sh
+ python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
+ ```
LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) Microsoft Corporation.
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md ADDED
@@ -0,0 +1,325 @@
+ <img src="assets/logo.webp" width="100%" align="center">
+ <h1 align="center">Structured 3D Latents<br>for Scalable and Versatile 3D Generation</h1>
+ <p align="center"><a href="https://arxiv.org/abs/2412.01506"><img src='https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv&logoColor=white' alt='arXiv'></a>
+ <a href='https://trellis3d.github.io'><img src='https://img.shields.io/badge/Project_Page-Website-green?logo=googlechrome&logoColor=white' alt='Project Page'></a>
+ <a href='https://huggingface.co/spaces?q=TRELLIS'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Live_Demo-blue'></a>
+ </p>
+ <p align="center"><img src="assets/teaser.png" width="100%"></p>
+
+ <span style="font-size: 16px; font-weight: 600;">T</span><span style="font-size: 12px; font-weight: 700;">RELLIS</span> is a large 3D asset generation model. It takes in text or image prompts and generates high-quality 3D assets in various formats, such as Radiance Fields, 3D Gaussians, and meshes. The cornerstone of <span style="font-size: 16px; font-weight: 600;">T</span><span style="font-size: 12px; font-weight: 700;">RELLIS</span> is a unified Structured LATent (<span style="font-size: 16px; font-weight: 600;">SL</span><span style="font-size: 12px; font-weight: 700;">AT</span>) representation that allows decoding to different output formats and Rectified Flow Transformers tailored for <span style="font-size: 16px; font-weight: 600;">SL</span><span style="font-size: 12px; font-weight: 700;">AT</span> as the powerful backbones. We provide large-scale pre-trained models with up to 2 billion parameters on a large 3D asset dataset of 500K diverse objects. <span style="font-size: 16px; font-weight: 600;">T</span><span style="font-size: 12px; font-weight: 700;">RELLIS</span> significantly surpasses existing methods, including recent ones at similar scales, and showcases flexible output format selection and local 3D editing capabilities which were not offered by previous models.
+
+ ***Check out our [Project Page](https://trellis3d.github.io) for more videos and interactive demos!***
+
+ <!-- Features -->
+ ## 🌟 Features
+ - **High Quality**: It produces diverse 3D assets at high quality with intricate shape and texture details.
+ - **Versatility**: It takes text or image prompts and can generate various final 3D representations including but not limited to *Radiance Fields*, *3D Gaussians*, and *meshes*, accommodating diverse downstream requirements.
+ - **Flexible Editing**: It allows for easy editing of generated 3D assets, such as generating variants of the same object or local editing of the 3D asset.
+
+ <!-- Updates -->
+ ## ⏩ Updates
+
+ **03/25/2025**
+ - Release training code.
+ - Release **TRELLIS-text** models and asset variants generation.
+ - Examples are provided as [example_text.py](example_text.py) and [example_variant.py](example_variant.py).
+ - Gradio demo is provided as [app_text.py](app_text.py).
+ - *Note: It is always recommended to do text to 3D generation by first generating images using text-to-image models and then using TRELLIS-image models for 3D generation. Text-conditioned models are less creative and detailed due to data limitations.*
+
+ **12/26/2024**
+ - Release [**TRELLIS-500K**](https://github.com/microsoft/TRELLIS#-dataset) dataset and toolkits for data preparation.
+
+ **12/18/2024**
+ - Implementation of multi-image conditioning for the **TRELLIS-image** model. ([#7](https://github.com/microsoft/TRELLIS/issues/7)). This is based on a tuning-free algorithm without training a specialized model, so it may not give the best results for all input images.
+ - Add Gaussian export in `app.py` and `example.py`. ([#40](https://github.com/microsoft/TRELLIS/issues/40))
+
+ <!-- Installation -->
+ ## 📦 Installation
+
+ ### Prerequisites
+ - **System**: The code is currently tested only on **Linux**. For Windows setup, you may refer to [#3](https://github.com/microsoft/TRELLIS/issues/3) (not fully tested).
+ - **Hardware**: An NVIDIA GPU with at least 16GB of memory is necessary. The code has been verified on NVIDIA A100 and A6000 GPUs.
+ - **Software**:
+   - The [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit-archive) is needed to compile certain submodules. The code has been tested with CUDA versions 11.8 and 12.2.
+   - [Conda](https://docs.anaconda.com/miniconda/install/#quick-command-line-install) is recommended for managing dependencies.
+   - Python version 3.8 or higher is required.
+
+ ### Installation Steps
+ 1. Clone the repo:
+     ```sh
+     git clone --recurse-submodules https://github.com/microsoft/TRELLIS.git
+     cd TRELLIS
+     ```
+
+ 2. Install the dependencies:
+
+     **Before running the following command, there are some things to note:**
+     - By adding `--new-env`, a new conda environment named `trellis` will be created. If you want to use an existing conda environment, please remove this flag.
+     - By default the `trellis` environment will use pytorch 2.4.0 with CUDA 11.8. If you want to use a different version of CUDA (e.g., if you have CUDA Toolkit 12.2 installed and do not want to install another 11.8 version for submodule compilation), you can remove the `--new-env` flag and manually install the required dependencies. Refer to [PyTorch](https://pytorch.org/get-started/previous-versions/) for the installation command.
+     - If you have multiple CUDA Toolkit versions installed, `PATH` should be set to the correct version before running the command. For example, if you have CUDA Toolkit 11.8 and 12.2 installed, you should run `export PATH=/usr/local/cuda-11.8/bin:$PATH` before running the command.
+     - By default, the code uses the `flash-attn` backend for attention. For GPUs that do not support `flash-attn` (e.g., NVIDIA V100), you can remove the `--flash-attn` flag to install `xformers` only and set the `ATTN_BACKEND` environment variable to `xformers` before running the code. See the [Minimal Example](#minimal-example) for more details.
+     - The installation may take a while due to the large number of dependencies. Please be patient. If you encounter any issues, you can try to install the dependencies one by one, specifying one flag at a time.
+     - If you encounter any issues during the installation, feel free to open an issue or contact us.
+
+     Create a new conda environment named `trellis` and install the dependencies:
+     ```sh
+     . ./setup.sh --new-env --basic --xformers --flash-attn --diffoctreerast --spconv --mipgaussian --kaolin --nvdiffrast
+     ```
+     The detailed usage of `setup.sh` can be found by running `. ./setup.sh --help`.
+     ```sh
+     Usage: setup.sh [OPTIONS]
+     Options:
+         -h, --help              Display this help message
+         --new-env               Create a new conda environment
+         --basic                 Install basic dependencies
+         --train                 Install training dependencies
+         --xformers              Install xformers
+         --flash-attn            Install flash-attn
+         --diffoctreerast        Install diffoctreerast
+         --vox2seq               Install vox2seq
+         --spconv                Install spconv
+         --mipgaussian           Install mip-splatting
+         --kaolin                Install kaolin
+         --nvdiffrast            Install nvdiffrast
+         --demo                  Install all dependencies for demo
+     ```
+
+ <!-- Usage -->
+ ## 💡 Usage
+
+ ### Minimal Example
+
+ Here is an [example](example.py) of how to use the pretrained models for 3D asset generation.
+
+ ```python
+ import os
+ # os.environ['ATTN_BACKEND'] = 'xformers'   # Can be 'flash-attn' or 'xformers', default is 'flash-attn'
+ os.environ['SPCONV_ALGO'] = 'native'        # Can be 'native' or 'auto', default is 'auto'.
+                                             # 'auto' is faster but will do benchmarking at the beginning.
+                                             # Recommended to set to 'native' if run only once.
+
+ import imageio
+ from PIL import Image
+ from trellis.pipelines import TrellisImageTo3DPipeline
+ from trellis.utils import render_utils, postprocessing_utils
+
+ # Load a pipeline from a model folder or a Hugging Face model hub.
+ pipeline = TrellisImageTo3DPipeline.from_pretrained("microsoft/TRELLIS-image-large")
+ pipeline.cuda()
+
+ # Load an image
+ image = Image.open("assets/example_image/T.png")
+
+ # Run the pipeline
+ outputs = pipeline.run(
+     image,
+     seed=1,
+     # Optional parameters
+     # sparse_structure_sampler_params={
+     #     "steps": 12,
+     #     "cfg_strength": 7.5,
+     # },
+     # slat_sampler_params={
+     #     "steps": 12,
+     #     "cfg_strength": 3,
+     # },
+ )
+ # outputs is a dictionary containing generated 3D assets in different formats:
+ # - outputs['gaussian']: a list of 3D Gaussians
+ # - outputs['radiance_field']: a list of radiance fields
+ # - outputs['mesh']: a list of meshes
+
+ # Render the outputs
+ video = render_utils.render_video(outputs['gaussian'][0])['color']
+ imageio.mimsave("sample_gs.mp4", video, fps=30)
+ video = render_utils.render_video(outputs['radiance_field'][0])['color']
+ imageio.mimsave("sample_rf.mp4", video, fps=30)
+ video = render_utils.render_video(outputs['mesh'][0])['normal']
+ imageio.mimsave("sample_mesh.mp4", video, fps=30)
+
+ # GLB files can be extracted from the outputs
+ glb = postprocessing_utils.to_glb(
+     outputs['gaussian'][0],
+     outputs['mesh'][0],
+     # Optional parameters
+     simplify=0.95,          # Ratio of triangles to remove in the simplification process
+     texture_size=1024,      # Size of the texture used for the GLB
+ )
+ glb.export("sample.glb")
+
+ # Save Gaussians as PLY files
+ outputs['gaussian'][0].save_ply("sample.ply")
+ ```
+
+ After running the code, you will get the following files:
+ - `sample_gs.mp4`: a video showing the 3D Gaussian representation
+ - `sample_rf.mp4`: a video showing the Radiance Field representation
+ - `sample_mesh.mp4`: a video showing the mesh representation
+ - `sample.glb`: a GLB file containing the extracted textured mesh
+ - `sample.ply`: a PLY file containing the 3D Gaussian representation
+
+ ### Web Demo
+
+ [app.py](app.py) provides a simple web demo for 3D asset generation. Since this demo is based on [Gradio](https://gradio.app/), additional dependencies are required:
+ ```sh
+ . ./setup.sh --demo
+ ```
+
+ After installing the dependencies, you can run the demo with the following command:
+ ```sh
+ python app.py
+ ```
+
+ Then, you can access the demo at the address shown in the terminal.
+
+ <!-- Dataset -->
+ ## 📚 Dataset
+
+ We provide **TRELLIS-500K**, a large-scale dataset containing 500K 3D assets curated from [Objaverse(XL)](https://objaverse.allenai.org/), [ABO](https://amazon-berkeley-objects.s3.amazonaws.com/index.html), [3D-FUTURE](https://tianchi.aliyun.com/specials/promotion/alibaba-3d-future), [HSSD](https://huggingface.co/datasets/hssd/hssd-models), and [Toys4k](https://github.com/rehg-lab/lowshot-shapebias/tree/main/toys4k), filtered based on aesthetic scores. Please refer to the [dataset README](DATASET.md) for more details.
+
+ <!-- Training -->
+ ## 🏋️‍♂️ Training
+
+ TRELLIS’s training framework is organized to provide a flexible and modular approach to building and fine-tuning large-scale 3D generation models. The training code is centered around `train.py` and is structured into several directories to clearly separate dataset handling, model components, training logic, and visualization utilities.
+
+ ### Code Structure
+
+ - **train.py**: Main entry point for training.
+ - **trellis/datasets**: Dataset loading and preprocessing.
+ - **trellis/models**: Different models and their components.
+ - **trellis/modules**: Custom modules for various models.
+ - **trellis/pipelines**: Inference pipelines for different models.
+ - **trellis/renderers**: Renderers for different 3D representations.
+ - **trellis/representations**: Different 3D representations.
+ - **trellis/trainers**: Training logic for different models.
+ - **trellis/utils**: Utility functions for training and visualization.
+
+ ### Training Setup
+
+ 1. **Prepare the Environment:**
+    - Ensure all training dependencies are installed.
+    - Use a Linux system with an NVIDIA GPU (the models are trained on NVIDIA A100 GPUs).
+    - For distributed training, verify that your nodes can communicate through the designated master address and port.
+
+ 2. **Dataset Preparation:**
+    - Organize your dataset similar to TRELLIS-500K. Specify your dataset path using the `--data_dir` argument when launching training.
+
+ 3. **Configuration Files:**
+    - Training hyperparameters and model architectures are defined in configuration files under the `configs/` directory.
+    - Example configuration files include:
+
+ | Config | Description |
+ | --- | --- |
+ | [`vae/ss_vae_conv3d_16l8_fp16.json`](configs/vae/ss_vae_conv3d_16l8_fp16.json) | Sparse structure VAE |
+ | [`vae/slat_vae_enc_dec_gs_swin8_B_64l8_fp16.json`](configs/vae/slat_vae_enc_dec_gs_swin8_B_64l8_fp16.json) | SLat VAE with Gaussian Decoder |
+ | [`vae/slat_vae_dec_rf_swin8_B_64l8_fp16.json`](configs/vae/slat_vae_dec_rf_swin8_B_64l8_fp16.json) | SLat Radiance Field Decoder |
+ | [`vae/slat_vae_dec_mesh_swin8_B_64l8_fp16.json`](configs/vae/slat_vae_dec_mesh_swin8_B_64l8_fp16.json) | SLat Mesh Decoder |
+ | [`generation/ss_flow_img_dit_L_16l8_fp16.json`](configs/generation/ss_flow_img_dit_L_16l8_fp16.json) | Image-conditioned sparse structure Flow Model |
+ | [`generation/slat_flow_img_dit_L_64l8p2_fp16.json`](configs/generation/slat_flow_img_dit_L_64l8p2_fp16.json) | Image-conditioned SLat Flow Model |
+ | [`generation/ss_flow_txt_dit_B_16l8_fp16.json`](configs/generation/ss_flow_txt_dit_B_16l8_fp16.json) | Base text-conditioned sparse structure Flow Model |
+ | [`generation/slat_flow_txt_dit_B_64l8p2_fp16.json`](configs/generation/slat_flow_txt_dit_B_64l8p2_fp16.json) | Base text-conditioned SLat Flow Model |
+ | [`generation/ss_flow_txt_dit_L_16l8_fp16.json`](configs/generation/ss_flow_txt_dit_L_16l8_fp16.json) | Large text-conditioned sparse structure Flow Model |
+ | [`generation/slat_flow_txt_dit_L_64l8p2_fp16.json`](configs/generation/slat_flow_txt_dit_L_64l8p2_fp16.json) | Large text-conditioned SLat Flow Model |
+ | [`generation/ss_flow_txt_dit_XL_16l8_fp16.json`](configs/generation/ss_flow_txt_dit_XL_16l8_fp16.json) | Extra-large text-conditioned sparse structure Flow Model |
+ | [`generation/slat_flow_txt_dit_XL_64l8p2_fp16.json`](configs/generation/slat_flow_txt_dit_XL_64l8p2_fp16.json) | Extra-large text-conditioned SLat Flow Model |
+
+ ### Command-Line Options
+
+ The training script can be run as follows:
+ ```sh
+ usage: train.py [-h] --config CONFIG --output_dir OUTPUT_DIR [--load_dir LOAD_DIR] [--ckpt CKPT] [--data_dir DATA_DIR] [--auto_retry AUTO_RETRY] [--tryrun] [--profile] [--num_nodes NUM_NODES] [--node_rank NODE_RANK] [--num_gpus NUM_GPUS] [--master_addr MASTER_ADDR] [--master_port MASTER_PORT]
+
+ options:
+   -h, --help                  show this help message and exit
+   --config CONFIG             Experiment config file
+   --output_dir OUTPUT_DIR     Output directory
+   --load_dir LOAD_DIR         Load directory, default to output_dir
+   --ckpt CKPT                 Checkpoint step to resume training, default to latest
+   --data_dir DATA_DIR         Data directory
+   --auto_retry AUTO_RETRY     Number of retries on error
+   --tryrun                    Try run without training
+   --profile                   Profile training
+   --num_nodes NUM_NODES       Number of nodes
+   --node_rank NODE_RANK       Node rank
+   --num_gpus NUM_GPUS         Number of GPUs per node, default to all
+   --master_addr MASTER_ADDR   Master address for distributed training
+   --master_port MASTER_PORT   Port for distributed training
+ ```
+
+ ### Example Training Commands
+
+ #### Single-node Training
+
+ To train an image-to-3D stage 2 model on a single machine:
+ ```sh
+ python train.py \
+     --config configs/vae/slat_vae_dec_mesh_swin8_B_64l8_fp16.json \
+     --output_dir outputs/slat_vae_dec_mesh_swin8_B_64l8_fp16_1node \
+     --data_dir /path/to/your/dataset1,/path/to/your/dataset2
+ ```
+ The script will automatically distribute the training across all available GPUs. Specify the number of GPUs with the `--num_gpus` flag if you want to limit the number of GPUs used.
+
+ #### Multi-node Training
+
+ To train an image-to-3D stage 2 model with multiple GPUs across nodes (e.g., 2 nodes):
+ ```sh
+ python train.py \
+     --config configs/generation/slat_flow_img_dit_L_64l8p2_fp16.json \
+     --output_dir outputs/slat_flow_img_dit_L_64l8p2_fp16_2nodes \
+     --data_dir /path/to/your/dataset1,/path/to/your/dataset2 \
+     --num_nodes 2 \
+     --node_rank 0 \
+     --master_addr $MASTER_ADDR \
+     --master_port $MASTER_PORT
+ ```
+ Be sure to adjust `node_rank`, `master_addr`, and `master_port` for each node accordingly.
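Since only `--node_rank` differs between nodes, the launch is easy to script. A minimal sketch (the address and port values are placeholders, and it assumes `train.py`'s flags exactly as documented above):

```python
# Sketch: run the multi-node command with only --node_rank varying per node.
# MASTER_ADDR/MASTER_PORT defaults are placeholders; execute once per node.
import os
import subprocess

def launch(node_rank: int) -> None:
    subprocess.run([
        "python", "train.py",
        "--config", "configs/generation/slat_flow_img_dit_L_64l8p2_fp16.json",
        "--output_dir", "outputs/slat_flow_img_dit_L_64l8p2_fp16_2nodes",
        "--data_dir", "/path/to/your/dataset1,/path/to/your/dataset2",
        "--num_nodes", "2",
        "--node_rank", str(node_rank),
        "--master_addr", os.environ.get("MASTER_ADDR", "10.0.0.1"),
        "--master_port", os.environ.get("MASTER_PORT", "29500"),
    ], check=True)

launch(0)  # on the first node; call launch(1) on the second
```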
+
+ #### Resuming Training
+
+ By default, training will resume from the latest saved checkpoint in the same output directory. To specify a specific checkpoint to resume from, use the `--load_dir` and `--ckpt` flags:
+ ```sh
+ python train.py \
+     --config configs/generation/slat_flow_img_dit_L_64l8p2_fp16.json \
+     --output_dir outputs/slat_flow_img_dit_L_64l8p2_fp16_resume \
+     --data_dir /path/to/your/dataset1,/path/to/your/dataset2 \
+     --load_dir /path/to/your/checkpoint \
+     --ckpt [step]
+ ```
+
+ ### Additional Options
+
+ - **Auto Retry:** Use the `--auto_retry` flag to specify the number of retries in case of intermittent errors.
+ - **Dry Run:** The `--tryrun` flag allows you to check your configuration and environment without launching full training.
+ - **Profiling:** Enable profiling with the `--profile` flag to gain insights into training performance and diagnose bottlenecks.
+
+ Adjust the file paths and parameters to match your experimental setup.
+
+ <!-- License -->
+ ## ⚖️ License
+
+ TRELLIS models and the majority of the code are licensed under the [MIT License](LICENSE). The following submodules may have different licenses:
+ - [**diffoctreerast**](https://github.com/JeffreyXiang/diffoctreerast): We developed a CUDA-based real-time differentiable octree renderer for rendering radiance fields as part of this project. This renderer is derived from the [diff-gaussian-rasterization](https://github.com/graphdeco-inria/diff-gaussian-rasterization) project and is available under the [LICENSE](https://github.com/JeffreyXiang/diffoctreerast/blob/master/LICENSE).
+
+ - [**Modified Flexicubes**](https://github.com/MaxtirError/FlexiCubes): In this project, we used a modified version of [Flexicubes](https://github.com/nv-tlabs/FlexiCubes) to support vertex attributes. This modified version is licensed under the [LICENSE](https://github.com/nv-tlabs/FlexiCubes/blob/main/LICENSE.txt).
+
+ <!-- Citation -->
+ ## 📜 Citation
+
+ If you find this work helpful, please consider citing our paper:
+
+ ```bibtex
+ @article{xiang2024structured,
+     title = {Structured 3D Latents for Scalable and Versatile 3D Generation},
+     author = {Xiang, Jianfeng and Lv, Zelong and Xu, Sicheng and Deng, Yu and Wang, Ruicheng and Zhang, Bowen and Chen, Dong and Tong, Xin and Yang, Jiaolong},
+     journal = {arXiv preprint arXiv:2412.01506},
+     year = {2024}
+ }
+ ```
SECURITY.md ADDED
@@ -0,0 +1,41 @@
+ <!-- BEGIN MICROSOFT SECURITY.MD V0.0.9 BLOCK -->
+
+ ## Security
+
+ Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet) and [Xamarin](https://github.com/xamarin).
+
+ If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/security.md/definition), please report it to us as described below.
+
+ ## Reporting Security Issues
+
+ **Please do not report security vulnerabilities through public GitHub issues.**
+
+ Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/security.md/msrc/create-report).
+
+ If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/security.md/msrc/pgp).
+
+ You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
+
+ Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
+
+ * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
+ * Full paths of source file(s) related to the manifestation of the issue
+ * The location of the affected source code (tag/branch/commit or direct URL)
+ * Any special configuration required to reproduce the issue
+ * Step-by-step instructions to reproduce the issue
+ * Proof-of-concept or exploit code (if possible)
+ * Impact of the issue, including how an attacker might exploit the issue
+
+ This information will help us triage your report more quickly.
+
+ If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/security.md/msrc/bounty) page for more details about our active programs.
+
+ ## Preferred Languages
+
+ We prefer all communications to be in English.
+
+ ## Policy
+
+ Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/security.md/cvd).
+
+ <!-- END MICROSOFT SECURITY.MD BLOCK -->
SUPPORT.md ADDED
@@ -0,0 +1,25 @@
+ # TODO: The maintainer of this repo has not yet edited this file
+
+ **REPO OWNER**: Do you want Customer Service & Support (CSS) support for this product/project?
+
+ - **No CSS support:** Fill out this template with information about how to file issues and get help.
+ - **Yes CSS support:** Fill out an intake form at [aka.ms/onboardsupport](https://aka.ms/onboardsupport). CSS will work with/help you to determine next steps.
+ - **Not sure?** Fill out an intake as though the answer were "Yes". CSS will help you decide.
+
+ *Then remove this first heading from this SUPPORT.MD file before publishing your repo.*
+
+ # Support
+
+ ## How to file issues and get help
+
+ This project uses GitHub Issues to track bugs and feature requests. Please search the existing
+ issues before filing new issues to avoid duplicates. For new issues, file your bug or
+ feature request as a new Issue.
+
+ For help and questions about using this project, please **REPO MAINTAINER: INSERT INSTRUCTIONS HERE
+ FOR HOW TO ENGAGE REPO OWNERS OR COMMUNITY FOR HELP. COULD BE A STACK OVERFLOW TAG OR OTHER
+ CHANNEL. WHERE WILL YOU HELP PEOPLE?**.
+
+ ## Microsoft Support Policy
+
+ Support for this **PROJECT or PRODUCT** is limited to the resources listed above.
app.py ADDED
@@ -0,0 +1,403 @@
+ import gradio as gr
+ from gradio_litmodel3d import LitModel3D
+
+ import os
+ import shutil
+ from typing import *
+ import torch
+ import numpy as np
+ import imageio
+ from easydict import EasyDict as edict
+ from PIL import Image
+ from trellis.pipelines import TrellisImageTo3DPipeline
+ from trellis.representations import Gaussian, MeshExtractResult
+ from trellis.utils import render_utils, postprocessing_utils
+
+
+ MAX_SEED = np.iinfo(np.int32).max
+ TMP_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'tmp')
+ os.makedirs(TMP_DIR, exist_ok=True)
+
+
+ def start_session(req: gr.Request):
+     user_dir = os.path.join(TMP_DIR, str(req.session_hash))
+     os.makedirs(user_dir, exist_ok=True)
+
+
+ def end_session(req: gr.Request):
+     user_dir = os.path.join(TMP_DIR, str(req.session_hash))
+     shutil.rmtree(user_dir)
+
+
+ def preprocess_image(image: Image.Image) -> Image.Image:
+     """
+     Preprocess the input image.
+
+     Args:
+         image (Image.Image): The input image.
+
+     Returns:
+         Image.Image: The preprocessed image.
+     """
+     processed_image = pipeline.preprocess_image(image)
+     return processed_image
+
+
+ def preprocess_images(images: List[Tuple[Image.Image, str]]) -> List[Image.Image]:
+     """
+     Preprocess a list of input images.
+
+     Args:
+         images (List[Tuple[Image.Image, str]]): The input images.
+
+     Returns:
+         List[Image.Image]: The preprocessed images.
+     """
+     images = [image[0] for image in images]
+     processed_images = [pipeline.preprocess_image(image) for image in images]
+     return processed_images
+
+
+ def pack_state(gs: Gaussian, mesh: MeshExtractResult) -> dict:
+     return {
+         'gaussian': {
+             **gs.init_params,
+             '_xyz': gs._xyz.cpu().numpy(),
+             '_features_dc': gs._features_dc.cpu().numpy(),
+             '_scaling': gs._scaling.cpu().numpy(),
+             '_rotation': gs._rotation.cpu().numpy(),
+             '_opacity': gs._opacity.cpu().numpy(),
+         },
+         'mesh': {
+             'vertices': mesh.vertices.cpu().numpy(),
+             'faces': mesh.faces.cpu().numpy(),
+         },
+     }
+
+
+ def unpack_state(state: dict) -> Tuple[Gaussian, edict]:
+     gs = Gaussian(
+         aabb=state['gaussian']['aabb'],
+         sh_degree=state['gaussian']['sh_degree'],
+         mininum_kernel_size=state['gaussian']['mininum_kernel_size'],
+         scaling_bias=state['gaussian']['scaling_bias'],
+         opacity_bias=state['gaussian']['opacity_bias'],
+         scaling_activation=state['gaussian']['scaling_activation'],
+     )
+     gs._xyz = torch.tensor(state['gaussian']['_xyz'], device='cuda')
+     gs._features_dc = torch.tensor(state['gaussian']['_features_dc'], device='cuda')
+     gs._scaling = torch.tensor(state['gaussian']['_scaling'], device='cuda')
+     gs._rotation = torch.tensor(state['gaussian']['_rotation'], device='cuda')
+     gs._opacity = torch.tensor(state['gaussian']['_opacity'], device='cuda')
+
+     mesh = edict(
+         vertices=torch.tensor(state['mesh']['vertices'], device='cuda'),
+         faces=torch.tensor(state['mesh']['faces'], device='cuda'),
+     )
+
+     return gs, mesh
+
+
+ def get_seed(randomize_seed: bool, seed: int) -> int:
+     """
+     Get the random seed.
+     """
+     return np.random.randint(0, MAX_SEED) if randomize_seed else seed
+
+
+ def image_to_3d(
+     image: Image.Image,
+     multiimages: List[Tuple[Image.Image, str]],
+     is_multiimage: bool,
+     seed: int,
+     ss_guidance_strength: float,
+     ss_sampling_steps: int,
+     slat_guidance_strength: float,
+     slat_sampling_steps: int,
+     multiimage_algo: Literal["multidiffusion", "stochastic"],
+     req: gr.Request,
+ ) -> Tuple[dict, str]:
+     """
+     Convert an image to a 3D model.
+
+     Args:
+         image (Image.Image): The input image.
+         multiimages (List[Tuple[Image.Image, str]]): The input images in multi-image mode.
+         is_multiimage (bool): Whether multi-image mode is enabled.
+         seed (int): The random seed.
+         ss_guidance_strength (float): The guidance strength for sparse structure generation.
+         ss_sampling_steps (int): The number of sampling steps for sparse structure generation.
+         slat_guidance_strength (float): The guidance strength for structured latent generation.
+         slat_sampling_steps (int): The number of sampling steps for structured latent generation.
+         multiimage_algo (Literal["multidiffusion", "stochastic"]): The algorithm for multi-image generation.
+
+     Returns:
+         dict: The information of the generated 3D model.
+         str: The path to the video of the 3D model.
+     """
+     user_dir = os.path.join(TMP_DIR, str(req.session_hash))
+     if not is_multiimage:
+         outputs = pipeline.run(
+             image,
+             seed=seed,
+             formats=["gaussian", "mesh"],
+             preprocess_image=False,
+             sparse_structure_sampler_params={
+                 "steps": ss_sampling_steps,
+                 "cfg_strength": ss_guidance_strength,
+             },
+             slat_sampler_params={
+                 "steps": slat_sampling_steps,
+                 "cfg_strength": slat_guidance_strength,
+             },
+         )
+     else:
+         outputs = pipeline.run_multi_image(
+             [image[0] for image in multiimages],
+             seed=seed,
+             formats=["gaussian", "mesh"],
+             preprocess_image=False,
+             sparse_structure_sampler_params={
+                 "steps": ss_sampling_steps,
+                 "cfg_strength": ss_guidance_strength,
+             },
+             slat_sampler_params={
+                 "steps": slat_sampling_steps,
+                 "cfg_strength": slat_guidance_strength,
+             },
+             mode=multiimage_algo,
+         )
+     video = render_utils.render_video(outputs['gaussian'][0], num_frames=120)['color']
+     video_geo = render_utils.render_video(outputs['mesh'][0], num_frames=120)['normal']
+     video = [np.concatenate([video[i], video_geo[i]], axis=1) for i in range(len(video))]
+     video_path = os.path.join(user_dir, 'sample.mp4')
+     imageio.mimsave(video_path, video, fps=15)
+     state = pack_state(outputs['gaussian'][0], outputs['mesh'][0])
+     torch.cuda.empty_cache()
+     return state, video_path
+
+
+ def extract_glb(
+     state: dict,
+     mesh_simplify: float,
+     texture_size: int,
+     req: gr.Request,
+ ) -> Tuple[str, str]:
+     """
+     Extract a GLB file from the 3D model.
+
+     Args:
+         state (dict): The state of the generated 3D model.
+         mesh_simplify (float): The mesh simplification factor.
+         texture_size (int): The texture resolution.
+
+     Returns:
+         str: The path to the extracted GLB file.
+     """
+     user_dir = os.path.join(TMP_DIR, str(req.session_hash))
+     gs, mesh = unpack_state(state)
+     glb = postprocessing_utils.to_glb(gs, mesh, simplify=mesh_simplify, texture_size=texture_size, verbose=False)
+     glb_path = os.path.join(user_dir, 'sample.glb')
+     glb.export(glb_path)
+     torch.cuda.empty_cache()
+     return glb_path, glb_path
+
+
+ def extract_gaussian(state: dict, req: gr.Request) -> Tuple[str, str]:
+     """
+     Extract a Gaussian file from the 3D model.
+
+     Args:
+         state (dict): The state of the generated 3D model.
+
+     Returns:
+         str: The path to the extracted Gaussian file.
+     """
+     user_dir = os.path.join(TMP_DIR, str(req.session_hash))
+     gs, _ = unpack_state(state)
+     gaussian_path = os.path.join(user_dir, 'sample.ply')
+     gs.save_ply(gaussian_path)
+     torch.cuda.empty_cache()
+     return gaussian_path, gaussian_path
+
+
+ def prepare_multi_example() -> List[Image.Image]:
+     multi_case = list(set([i.split('_')[0] for i in os.listdir("assets/example_multi_image")]))
+     images = []
+     for case in multi_case:
+         _images = []
+         for i in range(1, 4):
+             img = Image.open(f'assets/example_multi_image/{case}_{i}.png')
+             W, H = img.size
+             img = img.resize((int(W / H * 512), 512))
+             _images.append(np.array(img))
+         images.append(Image.fromarray(np.concatenate(_images, axis=1)))
+     return images
+
+
+ def split_image(image: Image.Image) -> List[Image.Image]:
+     """
+     Split an image into multiple views.
+     """
+     image = np.array(image)
+     alpha = image[..., 3]
+     alpha = np.any(alpha > 0, axis=0)
+     start_pos = np.where(~alpha[:-1] & alpha[1:])[0].tolist()
+     end_pos = np.where(alpha[:-1] & ~alpha[1:])[0].tolist()
+     images = []
+     for s, e in zip(start_pos, end_pos):
+         images.append(Image.fromarray(image[:, s:e+1]))
+     return [preprocess_image(image) for image in images]
+
+
+ with gr.Blocks(delete_cache=(600, 600)) as demo:
+     gr.Markdown("""
+     ## Image to 3D Asset with [TRELLIS](https://trellis3d.github.io/)
+     * Upload an image and click "Generate" to create a 3D asset. If the image has an alpha channel, it will be used as the mask. Otherwise, `rembg` is used to remove the background.
+     * If you find the generated 3D asset satisfactory, click "Extract GLB" to extract the GLB file and download it.
+     """)
+
+     with gr.Row():
+         with gr.Column():
+             with gr.Tabs() as input_tabs:
+                 with gr.Tab(label="Single Image", id=0) as single_image_input_tab:
+                     image_prompt = gr.Image(label="Image Prompt", format="png", image_mode="RGBA", type="pil", height=300)
+                 with gr.Tab(label="Multiple Images", id=1) as multiimage_input_tab:
+                     multiimage_prompt = gr.Gallery(label="Image Prompt", format="png", type="pil", height=300, columns=3)
+                     gr.Markdown("""
+                     Input different views of the object in separate images.
+
+                     *NOTE: this is an experimental algorithm without a specially trained model. It may not produce the best results for all images, especially those with different poses or inconsistent details.*
+                     """)
+
+             with gr.Accordion(label="Generation Settings", open=False):
+                 seed = gr.Slider(0, MAX_SEED, label="Seed", value=0, step=1)
+                 randomize_seed = gr.Checkbox(label="Randomize Seed", value=True)
+                 gr.Markdown("Stage 1: Sparse Structure Generation")
+                 with gr.Row():
+                     ss_guidance_strength = gr.Slider(0.0, 10.0, label="Guidance Strength", value=7.5, step=0.1)
+                     ss_sampling_steps = gr.Slider(1, 50, label="Sampling Steps", value=12, step=1)
+                 gr.Markdown("Stage 2: Structured Latent Generation")
+                 with gr.Row():
+                     slat_guidance_strength = gr.Slider(0.0, 10.0, label="Guidance Strength", value=3.0, step=0.1)
+                     slat_sampling_steps = gr.Slider(1, 50, label="Sampling Steps", value=12, step=1)
+                 multiimage_algo = gr.Radio(["stochastic", "multidiffusion"], label="Multi-image Algorithm", value="stochastic")
+
+             generate_btn = gr.Button("Generate")
+
+             with gr.Accordion(label="GLB Extraction Settings", open=False):
+                 mesh_simplify = gr.Slider(0.9, 0.98, label="Simplify", value=0.95, step=0.01)
+                 texture_size = gr.Slider(512, 2048, label="Texture Size", value=1024, step=512)
+
+             with gr.Row():
+                 extract_glb_btn = gr.Button("Extract GLB", interactive=False)
+                 extract_gs_btn = gr.Button("Extract Gaussian", interactive=False)
+             gr.Markdown("""
+             *NOTE: the Gaussian file can be very large (~50 MB); it may take a while to display and download.*
+             """)
+
+         with gr.Column():
+             video_output = gr.Video(label="Generated 3D Asset", autoplay=True, loop=True, height=300)
+             model_output = LitModel3D(label="Extracted GLB/Gaussian", exposure=10.0, height=300)
+
+             with gr.Row():
+                 download_glb = gr.DownloadButton(label="Download GLB", interactive=False)
+                 download_gs = gr.DownloadButton(label="Download Gaussian", interactive=False)
+
+     is_multiimage = gr.State(False)
+     output_buf = gr.State()
+
+     # Example images at the bottom of the page
+     with gr.Row() as single_image_example:
+         examples = gr.Examples(
+             examples=[
+                 f'assets/example_image/{image}'
+                 for image in os.listdir("assets/example_image")
+             ],
+             inputs=[image_prompt],
+             fn=preprocess_image,
+             outputs=[image_prompt],
+             run_on_click=True,
+             examples_per_page=64,
+         )
+     with gr.Row(visible=False) as multiimage_example:
+         examples_multi = gr.Examples(
+             examples=prepare_multi_example(),
+             inputs=[image_prompt],
+             fn=split_image,
+             outputs=[multiimage_prompt],
+             run_on_click=True,
+             examples_per_page=8,
+         )
+
+     # Handlers
+     demo.load(start_session)
+     demo.unload(end_session)
+
+     single_image_input_tab.select(
+         lambda: tuple([False, gr.Row.update(visible=True), gr.Row.update(visible=False)]),
+         outputs=[is_multiimage, single_image_example, multiimage_example]
+     )
+     multiimage_input_tab.select(
+         lambda: tuple([True, gr.Row.update(visible=False), gr.Row.update(visible=True)]),
+         outputs=[is_multiimage, single_image_example, multiimage_example]
+     )
+
+     image_prompt.upload(
+         preprocess_image,
+         inputs=[image_prompt],
+         outputs=[image_prompt],
+     )
+     multiimage_prompt.upload(
+         preprocess_images,
+         inputs=[multiimage_prompt],
+         outputs=[multiimage_prompt],
+     )
+
+     generate_btn.click(
+         get_seed,
+         inputs=[randomize_seed, seed],
+         outputs=[seed],
+     ).then(
+         image_to_3d,
+         inputs=[image_prompt, multiimage_prompt, is_multiimage, seed, ss_guidance_strength, ss_sampling_steps, slat_guidance_strength, slat_sampling_steps, multiimage_algo],
+         outputs=[output_buf, video_output],
+     ).then(
+         lambda: tuple([gr.Button(interactive=True), gr.Button(interactive=True)]),
+         outputs=[extract_glb_btn, extract_gs_btn],
+     )
+
+     video_output.clear(
+         lambda: tuple([gr.Button(interactive=False), gr.Button(interactive=False)]),
+         outputs=[extract_glb_btn, extract_gs_btn],
+     )
+
+     extract_glb_btn.click(
+         extract_glb,
+         inputs=[output_buf, mesh_simplify, texture_size],
+         outputs=[model_output, download_glb],
+     ).then(
+         lambda: gr.Button(interactive=True),
+         outputs=[download_glb],
+     )
+
+     extract_gs_btn.click(
+         extract_gaussian,
+         inputs=[output_buf],
+         outputs=[model_output, download_gs],
+     ).then(
+         lambda: gr.Button(interactive=True),
+         outputs=[download_gs],
+     )
+
+     model_output.clear(
+         lambda: gr.Button(interactive=False),
+         outputs=[download_glb],
+     )
+
+
+ # Launch the Gradio app
+ if __name__ == "__main__":
+     pipeline = TrellisImageTo3DPipeline.from_pretrained("microsoft/TRELLIS-image-large")
+     pipeline.cuda()
+     demo.launch()
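
For reference, the same generation path can be exercised without the Gradio UI. The sketch below is a minimal headless distillation of app.py, using only calls that appear in the diff above (`pipeline.run`, `render_utils.render_video`, `postprocessing_utils.to_glb`, `glb.export`); the example image path and output filenames are illustrative, a CUDA GPU is assumed, and the pipeline's default preprocessing is assumed to handle background removal since the UI passes `preprocess_image=False` only after preprocessing itself.

```python
# Minimal headless sketch of the app.py generation path (no Gradio).
# Assumes the same TRELLIS environment and a CUDA GPU.
import imageio
from PIL import Image
from trellis.pipelines import TrellisImageTo3DPipeline
from trellis.utils import render_utils, postprocessing_utils

pipeline = TrellisImageTo3DPipeline.from_pretrained("microsoft/TRELLIS-image-large")
pipeline.cuda()

image = Image.open("assets/example_image/T.png")  # any example image from the repo

# Same two-stage sampler settings as the UI defaults above.
outputs = pipeline.run(
    image,
    seed=1,
    formats=["gaussian", "mesh"],
    sparse_structure_sampler_params={"steps": 12, "cfg_strength": 7.5},
    slat_sampler_params={"steps": 12, "cfg_strength": 3.0},
)

# Render a turntable preview, as image_to_3d() does.
video = render_utils.render_video(outputs['gaussian'][0], num_frames=120)['color']
imageio.mimsave("sample.mp4", video, fps=15)

# Bake a textured GLB, as extract_glb() does.
glb = postprocessing_utils.to_glb(
    outputs['gaussian'][0], outputs['mesh'][0],
    simplify=0.95, texture_size=1024, verbose=False,
)
glb.export("sample.glb")
```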
app_text.py ADDED
@@ -0,0 +1,266 @@
+ import gradio as gr
+ from gradio_litmodel3d import LitModel3D
+
+ import os
+ import shutil
+ from typing import *
+ import torch
+ import numpy as np
+ import imageio
+ from easydict import EasyDict as edict
+ from trellis.pipelines import TrellisTextTo3DPipeline
+ from trellis.representations import Gaussian, MeshExtractResult
+ from trellis.utils import render_utils, postprocessing_utils
+
+
+ MAX_SEED = np.iinfo(np.int32).max
+ TMP_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'tmp')
+ os.makedirs(TMP_DIR, exist_ok=True)
+
+
+ def start_session(req: gr.Request):
+     user_dir = os.path.join(TMP_DIR, str(req.session_hash))
+     os.makedirs(user_dir, exist_ok=True)
+
+
+ def end_session(req: gr.Request):
+     user_dir = os.path.join(TMP_DIR, str(req.session_hash))
+     shutil.rmtree(user_dir)
+
+
+ def pack_state(gs: Gaussian, mesh: MeshExtractResult) -> dict:
+     return {
+         'gaussian': {
+             **gs.init_params,
+             '_xyz': gs._xyz.cpu().numpy(),
+             '_features_dc': gs._features_dc.cpu().numpy(),
+             '_scaling': gs._scaling.cpu().numpy(),
+             '_rotation': gs._rotation.cpu().numpy(),
+             '_opacity': gs._opacity.cpu().numpy(),
+         },
+         'mesh': {
+             'vertices': mesh.vertices.cpu().numpy(),
+             'faces': mesh.faces.cpu().numpy(),
+         },
+     }
+
+
+ def unpack_state(state: dict) -> Tuple[Gaussian, edict]:
+     gs = Gaussian(
+         aabb=state['gaussian']['aabb'],
+         sh_degree=state['gaussian']['sh_degree'],
+         mininum_kernel_size=state['gaussian']['mininum_kernel_size'],
+         scaling_bias=state['gaussian']['scaling_bias'],
+         opacity_bias=state['gaussian']['opacity_bias'],
+         scaling_activation=state['gaussian']['scaling_activation'],
+     )
+     gs._xyz = torch.tensor(state['gaussian']['_xyz'], device='cuda')
+     gs._features_dc = torch.tensor(state['gaussian']['_features_dc'], device='cuda')
+     gs._scaling = torch.tensor(state['gaussian']['_scaling'], device='cuda')
+     gs._rotation = torch.tensor(state['gaussian']['_rotation'], device='cuda')
+     gs._opacity = torch.tensor(state['gaussian']['_opacity'], device='cuda')
+
+     mesh = edict(
+         vertices=torch.tensor(state['mesh']['vertices'], device='cuda'),
+         faces=torch.tensor(state['mesh']['faces'], device='cuda'),
+     )
+
+     return gs, mesh
+
+
+ def get_seed(randomize_seed: bool, seed: int) -> int:
+     """
+     Get the random seed.
+     """
+     return np.random.randint(0, MAX_SEED) if randomize_seed else seed
+
+
+ def text_to_3d(
+     prompt: str,
+     seed: int,
+     ss_guidance_strength: float,
+     ss_sampling_steps: int,
+     slat_guidance_strength: float,
+     slat_sampling_steps: int,
+     req: gr.Request,
+ ) -> Tuple[dict, str]:
+     """
+     Convert a text prompt to a 3D model.
+
+     Args:
+         prompt (str): The text prompt.
+         seed (int): The random seed.
+         ss_guidance_strength (float): The guidance strength for sparse structure generation.
+         ss_sampling_steps (int): The number of sampling steps for sparse structure generation.
+         slat_guidance_strength (float): The guidance strength for structured latent generation.
+         slat_sampling_steps (int): The number of sampling steps for structured latent generation.
+
+     Returns:
+         dict: The information of the generated 3D model.
+         str: The path to the video of the 3D model.
+     """
+     user_dir = os.path.join(TMP_DIR, str(req.session_hash))
+     outputs = pipeline.run(
+         prompt,
+         seed=seed,
+         formats=["gaussian", "mesh"],
+         sparse_structure_sampler_params={
+             "steps": ss_sampling_steps,
+             "cfg_strength": ss_guidance_strength,
+         },
+         slat_sampler_params={
+             "steps": slat_sampling_steps,
+             "cfg_strength": slat_guidance_strength,
+         },
+     )
+     video = render_utils.render_video(outputs['gaussian'][0], num_frames=120)['color']
+     video_geo = render_utils.render_video(outputs['mesh'][0], num_frames=120)['normal']
+     video = [np.concatenate([video[i], video_geo[i]], axis=1) for i in range(len(video))]
+     video_path = os.path.join(user_dir, 'sample.mp4')
+     imageio.mimsave(video_path, video, fps=15)
+     state = pack_state(outputs['gaussian'][0], outputs['mesh'][0])
+     torch.cuda.empty_cache()
+     return state, video_path
+
+
+ def extract_glb(
+     state: dict,
+     mesh_simplify: float,
+     texture_size: int,
+     req: gr.Request,
+ ) -> Tuple[str, str]:
+     """
+     Extract a GLB file from the 3D model.
+
+     Args:
+         state (dict): The state of the generated 3D model.
+         mesh_simplify (float): The mesh simplification factor.
+         texture_size (int): The texture resolution.
+
+     Returns:
+         str: The path to the extracted GLB file.
+     """
+     user_dir = os.path.join(TMP_DIR, str(req.session_hash))
+     gs, mesh = unpack_state(state)
+     glb = postprocessing_utils.to_glb(gs, mesh, simplify=mesh_simplify, texture_size=texture_size, verbose=False)
+     glb_path = os.path.join(user_dir, 'sample.glb')
+     glb.export(glb_path)
+     torch.cuda.empty_cache()
+     return glb_path, glb_path
+
+
+ def extract_gaussian(state: dict, req: gr.Request) -> Tuple[str, str]:
+     """
+     Extract a Gaussian file from the 3D model.
+
+     Args:
+         state (dict): The state of the generated 3D model.
+
+     Returns:
+         str: The path to the extracted Gaussian file.
+     """
+     user_dir = os.path.join(TMP_DIR, str(req.session_hash))
+     gs, _ = unpack_state(state)
+     gaussian_path = os.path.join(user_dir, 'sample.ply')
+     gs.save_ply(gaussian_path)
+     torch.cuda.empty_cache()
+     return gaussian_path, gaussian_path
+
+
+ with gr.Blocks(delete_cache=(600, 600)) as demo:
+     gr.Markdown("""
+     ## Text to 3D Asset with [TRELLIS](https://trellis3d.github.io/)
+     * Type a text prompt and click "Generate" to create a 3D asset.
+     * If you find the generated 3D asset satisfactory, click "Extract GLB" to extract the GLB file and download it.
+     """)
+
+     with gr.Row():
+         with gr.Column():
+             text_prompt = gr.Textbox(label="Text Prompt", lines=5)
+
+             with gr.Accordion(label="Generation Settings", open=False):
+                 seed = gr.Slider(0, MAX_SEED, label="Seed", value=0, step=1)
+                 randomize_seed = gr.Checkbox(label="Randomize Seed", value=True)
+                 gr.Markdown("Stage 1: Sparse Structure Generation")
+                 with gr.Row():
+                     ss_guidance_strength = gr.Slider(0.0, 10.0, label="Guidance Strength", value=7.5, step=0.1)
+                     ss_sampling_steps = gr.Slider(1, 50, label="Sampling Steps", value=25, step=1)
+                 gr.Markdown("Stage 2: Structured Latent Generation")
+                 with gr.Row():
+                     slat_guidance_strength = gr.Slider(0.0, 10.0, label="Guidance Strength", value=7.5, step=0.1)
+                     slat_sampling_steps = gr.Slider(1, 50, label="Sampling Steps", value=25, step=1)
+
+             generate_btn = gr.Button("Generate")
+
+             with gr.Accordion(label="GLB Extraction Settings", open=False):
+                 mesh_simplify = gr.Slider(0.9, 0.98, label="Simplify", value=0.95, step=0.01)
+                 texture_size = gr.Slider(512, 2048, label="Texture Size", value=1024, step=512)
+
+             with gr.Row():
+                 extract_glb_btn = gr.Button("Extract GLB", interactive=False)
+                 extract_gs_btn = gr.Button("Extract Gaussian", interactive=False)
+             gr.Markdown("""
+             *NOTE: the Gaussian file can be very large (~50 MB); it may take a while to display and download.*
+             """)
+
+         with gr.Column():
+             video_output = gr.Video(label="Generated 3D Asset", autoplay=True, loop=True, height=300)
+             model_output = LitModel3D(label="Extracted GLB/Gaussian", exposure=10.0, height=300)
+
+             with gr.Row():
+                 download_glb = gr.DownloadButton(label="Download GLB", interactive=False)
+                 download_gs = gr.DownloadButton(label="Download Gaussian", interactive=False)
+
+     output_buf = gr.State()
+
+     # Handlers
+     demo.load(start_session)
+     demo.unload(end_session)
+
+     generate_btn.click(
+         get_seed,
+         inputs=[randomize_seed, seed],
+         outputs=[seed],
+     ).then(
+         text_to_3d,
+         inputs=[text_prompt, seed, ss_guidance_strength, ss_sampling_steps, slat_guidance_strength, slat_sampling_steps],
+         outputs=[output_buf, video_output],
+     ).then(
+         lambda: tuple([gr.Button(interactive=True), gr.Button(interactive=True)]),
+         outputs=[extract_glb_btn, extract_gs_btn],
+     )
+
+     video_output.clear(
+         lambda: tuple([gr.Button(interactive=False), gr.Button(interactive=False)]),
+         outputs=[extract_glb_btn, extract_gs_btn],
+     )
+
+     extract_glb_btn.click(
+         extract_glb,
+         inputs=[output_buf, mesh_simplify, texture_size],
+         outputs=[model_output, download_glb],
+     ).then(
+         lambda: gr.Button(interactive=True),
+         outputs=[download_glb],
+     )
+
+     extract_gs_btn.click(
+         extract_gaussian,
+         inputs=[output_buf],
+         outputs=[model_output, download_gs],
+     ).then(
+         lambda: gr.Button(interactive=True),
+         outputs=[download_gs],
+     )
+
+     model_output.clear(
+         lambda: gr.Button(interactive=False),
+         outputs=[download_glb],
+     )
+
+
+ # Launch the Gradio app
+ if __name__ == "__main__":
+     pipeline = TrellisTextTo3DPipeline.from_pretrained("microsoft/TRELLIS-text-xlarge")
+     pipeline.cuda()
+     demo.launch()
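
app_text.py mirrors app.py with a text-conditioned pipeline in place of the image one. As with the image app, the path can be run headlessly; this is a minimal sketch using only calls present in the diff (`pipeline.run`, `save_ply`), with an illustrative prompt and output path and a CUDA GPU assumed.

```python
# Headless sketch of the app_text.py path: prompt -> Gaussian + mesh -> .ply.
# Assumes the same TRELLIS environment and a CUDA GPU.
from trellis.pipelines import TrellisTextTo3DPipeline

pipeline = TrellisTextTo3DPipeline.from_pretrained("microsoft/TRELLIS-text-xlarge")
pipeline.cuda()

# Same two-stage sampler settings as the UI defaults above.
outputs = pipeline.run(
    "a wooden treasure chest with iron fittings",  # any text prompt
    seed=1,
    formats=["gaussian", "mesh"],
    sparse_structure_sampler_params={"steps": 25, "cfg_strength": 7.5},
    slat_sampler_params={"steps": 25, "cfg_strength": 7.5},
)

# Save the Gaussian splat, as extract_gaussian() does.
outputs['gaussian'][0].save_ply("sample.ply")
```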
assets/T.ply ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:163e3efe355f4c7fe36eb3b55563d1897ac1384c5ab2eb1acfc68700de2dc31b
+ size 2089367
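
Note that assets/T.ply is committed as a Git LFS pointer: the three `version`/`oid`/`size` lines above are the entire on-disk file until `git lfs` replaces it with the ~2 MB binary. A small sketch for inspecting such a pointer from Python (the path is the one shown above; only the key/value format visible in the diff is assumed):

```python
# Parse a Git LFS pointer file into its key/value fields.
# Each line is "<key> <value>", exactly as in the three lines above.
def read_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition(' ')
            fields[key] = value
    return fields

ptr = read_lfs_pointer('assets/T.ply')  # before `git lfs pull`, this is a pointer
print(ptr['oid'], ptr['size'])          # e.g. sha256:163e3e... 2089367
```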
assets/example_image/T.png ADDED

Git LFS Details

  • SHA256: e29ddc83a5bd3a05fe9b34732169bc4ea7131f7c36527fdc5f626a90a73076d2
  • Pointer size: 131 Bytes
  • Size of remote file: 955 kB
assets/example_image/typical_building_building.png ADDED

Git LFS Details

  • SHA256: 8faa11d557be95c000c475247e61a773d511114c7d1e517c04f8d3d88a6049ec
  • Pointer size: 131 Bytes
  • Size of remote file: 547 kB
assets/example_image/typical_building_castle.png ADDED

Git LFS Details

  • SHA256: 076f0554b087b921863643d2b1ab3e0572a13a347fd66bc29cd9d194034affae
  • Pointer size: 131 Bytes
  • Size of remote file: 426 kB
assets/example_image/typical_building_colorful_cottage.png ADDED

Git LFS Details

  • SHA256: 687305b4e35da759692be0de614d728583a2a9cd2fd3a55593fa753e567d0d47
  • Pointer size: 131 Bytes
  • Size of remote file: 609 kB
assets/example_image/typical_building_maya_pyramid.png ADDED

Git LFS Details

  • SHA256: 4d514f7f4db244ee184af4ddfbc5948d417b4e5bf1c6ee5f5a592679561690df
  • Pointer size: 131 Bytes
  • Size of remote file: 232 kB
assets/example_image/typical_building_mushroom.png ADDED

Git LFS Details

  • SHA256: de9b72d3e13e967e70844ddc54643832a84a1b35ca043a11e7c774371d0ccdab
  • Pointer size: 131 Bytes
  • Size of remote file: 488 kB
assets/example_image/typical_building_space_station.png ADDED

Git LFS Details

  • SHA256: 212c7b4c27ba1e01a7908dbc7f245e7115850eadbc9974aa726327cf35062846
  • Pointer size: 131 Bytes
  • Size of remote file: 620 kB
assets/example_image/typical_creature_dragon.png ADDED

Git LFS Details

  • SHA256: 0e8d6720dfa1e7b332b76e897e617b7f0863187f30879451b4724f482c84185a
  • Pointer size: 131 Bytes
  • Size of remote file: 564 kB
assets/example_image/typical_creature_elephant.png ADDED

Git LFS Details

  • SHA256: 86a171e37a3d781e7215977f565cd63e813341c1f89e2c586fa61937e4ed6916
  • Pointer size: 131 Bytes
  • Size of remote file: 482 kB
assets/example_image/typical_creature_furry.png ADDED

Git LFS Details

  • SHA256: 5b5445b8f1996cf6d72497b2d7564c656f4048e6c1fa626fd7bb3ee582fee671
  • Pointer size: 131 Bytes
  • Size of remote file: 648 kB
assets/example_image/typical_creature_quadruped.png ADDED

Git LFS Details

  • SHA256: 7469f43f58389adec101e9685f60188bd4e7fbede77eef975102f6a8865bc786
  • Pointer size: 131 Bytes
  • Size of remote file: 685 kB
assets/example_image/typical_creature_robot_crab.png ADDED

Git LFS Details

  • SHA256: d7e716abe8f8895080f562d1dc26b14fa0e20a05aa5beb2770c6fb3b87b3476a
  • Pointer size: 131 Bytes
  • Size of remote file: 594 kB
assets/example_image/typical_creature_robot_dinosour.png ADDED

Git LFS Details

  • SHA256: d0986f29557a6fddf9b52b5251a6b6103728c61e201b1cfad1e709b090b72f56
  • Pointer size: 131 Bytes
  • Size of remote file: 632 kB
assets/example_image/typical_creature_rock_monster.png ADDED

Git LFS Details

  • SHA256: e29458a6110bee8374c0d4d12471e7167a6c1c98c18f6e2d7ff4f5f0ca3fa01b
  • Pointer size: 131 Bytes
  • Size of remote file: 648 kB
assets/example_image/typical_humanoid_block_robot.png ADDED

Git LFS Details

  • SHA256: 3a0acbb532668e1bf35f3eef5bcbfdd094c22219ef2d837fa01ccf51cce75ca3
  • Pointer size: 131 Bytes
  • Size of remote file: 441 kB
assets/example_image/typical_humanoid_dragonborn.png ADDED

Git LFS Details

  • SHA256: 5d7c547909a6c12da55dbab1c1c98181ff09e58c9ba943682ca105e71be9548e
  • Pointer size: 131 Bytes
  • Size of remote file: 481 kB
assets/example_image/typical_humanoid_dwarf.png ADDED

Git LFS Details

  • SHA256: a4a7c157d5d8071128c27594e45a7a03e5113b3333b7f1c5ff1379481e3e0264
  • Pointer size: 131 Bytes
  • Size of remote file: 498 kB
assets/example_image/typical_humanoid_goblin.png ADDED

Git LFS Details

  • SHA256: 2b0e9a04ae3e7bef44b7180a70306f95374b60727ffa0f6f01fd6c746595cd77
  • Pointer size: 131 Bytes
  • Size of remote file: 496 kB
assets/example_image/typical_humanoid_mech.png ADDED

Git LFS Details

  • SHA256: a244ec54b7984e646e54d433de6897657081dd5b9cd5ccd3d865328d813beb49
  • Pointer size: 131 Bytes
  • Size of remote file: 850 kB
assets/example_image/typical_misc_crate.png ADDED

Git LFS Details

  • SHA256: 59fd9884301faca93265166d90078e8c31e76c7f93524b1db31975df4b450748
  • Pointer size: 131 Bytes
  • Size of remote file: 642 kB
assets/example_image/typical_misc_fireplace.png ADDED

Git LFS Details

  • SHA256: 2288c034603e289192d63cbc73565107caefd99e81c4b7afa2983c8b13e34440
  • Pointer size: 131 Bytes
  • Size of remote file: 558 kB
assets/example_image/typical_misc_gate.png ADDED

Git LFS Details

  • SHA256: ec8db5389b74fe56b826e3c6d860234541033387350e09268591c46d411cc8e9
  • Pointer size: 131 Bytes
  • Size of remote file: 572 kB
assets/example_image/typical_misc_lantern.png ADDED

Git LFS Details

  • SHA256: e17bd83adf433ebfca17abd220097b2b7f08affc649518bd7822e03797e83d41
  • Pointer size: 131 Bytes
  • Size of remote file: 300 kB
assets/example_image/typical_misc_magicbook.png ADDED

Git LFS Details

  • SHA256: aff9c14589c340e31b61bf82e4506d77d72c511e741260fa1e600cefa4e103e6
  • Pointer size: 131 Bytes
  • Size of remote file: 496 kB
assets/example_image/typical_misc_mailbox.png ADDED

Git LFS Details

  • SHA256: 01e86a5d68edafb7e11d7a86f7e8081f5ed1b02578198a3271554c5fb8fb9fcf
  • Pointer size: 131 Bytes
  • Size of remote file: 631 kB
assets/example_image/typical_misc_monster_chest.png ADDED

Git LFS Details

  • SHA256: c57a598e842225a31b9770bf3bbb9ae86197ec57d0c2883caf8cb5eed4908fbc
  • Pointer size: 131 Bytes
  • Size of remote file: 690 kB
assets/example_image/typical_misc_paper_machine.png ADDED

Git LFS Details

  • SHA256: 2d55400ae5d4df2377258400d800ece75766d5274e80ce07c3b29a4d1fd1fa36
  • Pointer size: 131 Bytes
  • Size of remote file: 614 kB
assets/example_image/typical_misc_phonograph.png ADDED

Git LFS Details

  • SHA256: 14fff9a27ea769d3ca711e9ff55ab3d9385486a5e8b99117f506df326a0a357e
  • Pointer size: 131 Bytes
  • Size of remote file: 517 kB
assets/example_image/typical_misc_portal2.png ADDED

Git LFS Details

  • SHA256: 57aab2bba56bc946523a3fca77ca70651a4ad8c6fbf1b91a1a824418df48faae
  • Pointer size: 131 Bytes
  • Size of remote file: 386 kB
assets/example_image/typical_misc_storage_chest.png ADDED

Git LFS Details

  • SHA256: 0e4ac1c67fdda902ecb709447b8defd949c738954c844c1b8364b8e3f7d9e55a
  • Pointer size: 131 Bytes
  • Size of remote file: 632 kB
assets/example_image/typical_misc_telephone.png ADDED

Git LFS Details

  • SHA256: 00048be46234a2709c12614b04cbad61c6e3c7e63c2a4ef33d999185f5393e36
  • Pointer size: 131 Bytes
  • Size of remote file: 648 kB
assets/example_image/typical_misc_television.png ADDED

Git LFS Details

  • SHA256: 6a1947b737398bf535ec212668a4d78cd38fe84cf9da1ccd6c0c0d838337755e
  • Pointer size: 131 Bytes
  • Size of remote file: 627 kB
assets/example_image/typical_misc_workbench.png ADDED

Git LFS Details

  • SHA256: a6d9ed4d005a5253b8571fd976b0d102e293512d7b5a8ed5e3f7f17c5f4e19da
  • Pointer size: 131 Bytes
  • Size of remote file: 463 kB
assets/example_image/typical_vehicle_biplane.png ADDED

Git LFS Details

  • SHA256: c73e98112eb603b4ba635b8965cad7807d0588f083811bc2faa0c7ab9668a65a
  • Pointer size: 131 Bytes
  • Size of remote file: 574 kB
assets/example_image/typical_vehicle_bulldozer.png ADDED

Git LFS Details

  • SHA256: 23d821b4daea61cbea28cc6ddd3ae46712514dfcdff995c2664f5a70d21f4ef3
  • Pointer size: 131 Bytes
  • Size of remote file: 693 kB
assets/example_image/typical_vehicle_cart.png ADDED

Git LFS Details

  • SHA256: b72c04a2aa5cf57717c05151a2982d6dc31afde130d5e830adf37a84a70616cb
  • Pointer size: 131 Bytes
  • Size of remote file: 693 kB
assets/example_image/typical_vehicle_excavator.png ADDED

Git LFS Details

  • SHA256: 27a418853eefa197f1e10ed944a7bb071413fd2bc1681804ee773a6ce3799c52
  • Pointer size: 131 Bytes
  • Size of remote file: 712 kB