🤗 PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware • Feb 10, 2023
Here is a hackable and minimal implementation showing how to perform distributed text-to-image generation with Diffusers and Accelerate. Full snippet is here: https://gist.github.com/sayakpaul/cfaebd221820d7b43fae638b4dfa01ba
With @JW17
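The full snippet lives in the gist above, but the core idea of distributed generation with Accelerate is to shard the prompt list across processes so each rank generates images for its own subset. Below is a minimal, standalone sketch of that sharding logic in plain Python; the helper name `split_between_ranks` is hypothetical, and in practice Accelerate's `PartialState.split_between_processes` does this for you.

```python
# Sketch of how a prompt list can be partitioned across processes for
# distributed text-to-image generation. This mimics what Accelerate's
# PartialState.split_between_processes does; the helper name here is
# hypothetical and the code runs without any distributed setup.

def split_between_ranks(prompts, rank, world_size):
    """Return the contiguous slice of `prompts` assigned to `rank`."""
    per_rank = len(prompts) // world_size
    remainder = len(prompts) % world_size
    # Earlier ranks absorb one extra prompt each when the split is uneven.
    start = rank * per_rank + min(rank, remainder)
    end = start + per_rank + (1 if rank < remainder else 0)
    return prompts[start:end]

prompts = ["a red fox", "a snowy cabin", "a neon city", "a calm lake", "a desert dune"]
shards = [split_between_ranks(prompts, r, 2) for r in range(2)]
# With 5 prompts and 2 ranks: rank 0 gets 3 prompts, rank 1 gets 2,
# and together the shards cover every prompt exactly once.
```

Each rank would then run its shard through its own pipeline copy on its own device, so total wall-clock time drops roughly linearly with the number of GPUs.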
Flux.1-Dev-like images but in fewer steps. Merging code (very simple), inference code, merged params: sayakpaul/FLUX.1-merged
Enjoy the Monday 🤗
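The actual merging code lives in the repo above; as a rough illustration of what a simple parameter merge looks like, here is a hedged sketch that linearly interpolates two state dicts key by key. Plain floats stand in for tensors, and the function name and `alpha` value are assumptions for illustration, not the recipe used for FLUX.1-merged.

```python
# Sketch of a simple weight merge via linear interpolation (lerp):
#   merged = (1 - alpha) * base + alpha * other, applied key by key.
# Plain floats stand in for tensors; with PyTorch the same loop works
# over state_dict() entries.

def lerp_state_dicts(base, other, alpha=0.5):
    """Linearly interpolate two parameter dicts sharing the same keys."""
    assert base.keys() == other.keys(), "models must share parameter names"
    return {k: (1 - alpha) * base[k] + alpha * other[k] for k in base}

base = {"transformer.w": 1.0, "transformer.b": -2.0}
other = {"transformer.w": 3.0, "transformer.b": 2.0}
merged = lerp_state_dicts(base, other, alpha=0.5)
# alpha=0.5 gives the midpoint of each parameter:
# {"transformer.w": 2.0, "transformer.b": 0.0}
```

The appeal of a lerp merge is that it needs no training: loading the merged weights back into the model is enough to trade off the behaviors of the two parents via `alpha`.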
Optimizing diffusion models
A collection of papers on optimizing T2I diffusion models: fewer sampling steps, architecture optimization, and more.
- Progressive Distillation for Fast Sampling of Diffusion Models • Paper • 2202.00512 • Published Feb 1, 2022
- On Distillation of Guided Diffusion Models • Paper • 2210.03142 • Published Oct 6, 2022
- InstaFlow: One Step is Enough for High-Quality Diffusion-Based Text-to-Image Generation • Paper • 2309.06380 • Published Sep 12, 2023
- Consistency Models • Paper • 2303.01469 • Published Mar 2, 2023