# Caching methods

Caching methods speed up diffusion transformers by storing and reusing the intermediate outputs of specific layers, such as attention and feedforward layers, instead of recomputing them at every inference step.
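The snippet below is a minimal sketch of how a caching method is enabled through `enable_cache` (provided by `CacheMixin`), here with `PyramidAttentionBroadcastConfig` on a CogVideoX pipeline. The checkpoint and parameter values are illustrative and may need tuning for other models.

```python
import torch
from diffusers import CogVideoXPipeline, PyramidAttentionBroadcastConfig

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Reuse spatial attention outputs every other block within the given timestep range
# (values here are illustrative, not tuned recommendations).
config = PyramidAttentionBroadcastConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(100, 800),
    current_timestep_callback=lambda: pipe.current_timestep,
)
pipe.transformer.enable_cache(config)

video = pipe("A cat playing with a ball of yarn").frames[0]
```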

## CacheMixin

[[autodoc]] CacheMixin

## PyramidAttentionBroadcastConfig

[[autodoc]] PyramidAttentionBroadcastConfig

[[autodoc]] apply_pyramid_attention_broadcast

## FasterCacheConfig

[[autodoc]] FasterCacheConfig

[[autodoc]] apply_faster_cache
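As a rough sketch, FasterCache can be attached to a pipeline's transformer with `apply_faster_cache`; the checkpoint and configuration values below are illustrative assumptions rather than tuned recommendations.

```python
import torch
from diffusers import CogVideoXPipeline, FasterCacheConfig, apply_faster_cache

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Illustrative settings: reuse spatial attention outputs every other block and skip
# some unconditional-branch computations across parts of the denoising schedule.
config = FasterCacheConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(-1, 681),
    current_timestep_callback=lambda: pipe.current_timestep,
    attention_weight_callback=lambda _: 0.3,
    unconditional_batch_skip_range=5,
    unconditional_batch_timestep_skip_range=(-1, 781),
    tensor_format="BFCHW",
)
apply_faster_cache(pipe.transformer, config)

video = pipe("A cat playing with a ball of yarn").frames[0]
```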