
Efficient-Large-Model
Welcome to Efficient Large Model Team! 👋
We are researchers from NVIDIA and MIT working on GPU-accelerated large models for generative AI.
🚀 Introduction
Efficient Large Model Team is a collaboration between researchers from NVIDIA and MIT dedicated to developing and optimizing GPU-accelerated, efficient large models. We focus on pushing the boundaries of generative AI by designing models that are not only powerful but also efficient in their use of computational resources. We are committed to advancing the field of AI by making state-of-the-art models deployable, scalable, and accessible.
🤝 Contribution Guidelines
We welcome contributions from the community to help us further improve and expand our research efforts. Whether you're an experienced researcher, a student eager to learn, or a developer passionate about efficiency in AI, there are several ways to get involved:
- Contribute Code: Help us develop and optimize efficient large models by contributing code to our GitHub repositories.
- Report Issues: If you encounter any bugs or have suggestions for improvement, please open an issue on the respective repository.
- Provide Feedback: Share your insights and ideas through discussions on our GitHub repositories or join our community forums.
- Spread the Word: Let others know about our work and encourage them to join our community.
- Internships: We have openings at both MIT and NVIDIA for excellent contributors with a proven track record.
🌿 Fun Facts
Our team comprises researchers from diverse backgrounds, bringing together expertise from both industry and academia. We're passionate about optimizing AI models not just for performance but also for sustainability and accessibility. In our spare time, we love experimenting with new algorithms and techniques to enhance the efficiency of our models, and skiing at the speed of a GPU. Join us on this exciting journey of building the next generation of efficient large models! 🚀
👩‍💻 Useful Resources
- MIT HAN Lab: https://hanlab.mit.edu
- NVIDIA TensorRT-LLM: https://github.com/NVIDIA/TensorRT-LLM
Collections (7)

SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation
- Paper • 2503.09641 • Published • 26
- Efficient-Large-Model/Sana_Sprint_1.6B_1024px_teacher • Text-to-Image • Updated • 10
- Efficient-Large-Model/Sana_Sprint_1.6B_1024px • Text-to-Image • Updated • 26
- Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers • Text-to-Image • Updated • 1

SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer
- Paper • 2501.18427 • Published • 18
- Efficient-Large-Model/SANA1.5_4.8B_1024px • Text-to-Image • Updated • 1.32k • 13
- Efficient-Large-Model/SANA1.5_4.8B_1024px_diffusers • Text-to-Image • Updated • 5
- Efficient-Large-Model/SANA1.5_1.6B_1024px • Text-to-Image • Updated • 102 • 1
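The `_diffusers` checkpoints in the collections above are packaged for the Hugging Face diffusers library. A minimal sketch of loading the SANA-Sprint checkpoint, assuming a recent diffusers release that ships `SanaSprintPipeline` (the prompt and output filename here are illustrative):

```python
# Minimal sketch: few-step text-to-image with the SANA-Sprint
# diffusers checkpoint. Assumes diffusers with SanaSprintPipeline
# is installed (e.g., pip install -U diffusers).
import torch
from diffusers import SanaSprintPipeline

pipe = SanaSprintPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# SANA-Sprint is distilled for one-to-few sampling steps,
# so num_inference_steps is far lower than for a standard
# diffusion model.
image = pipe(
    prompt="a photo of a red panda reading a book",  # illustrative prompt
    num_inference_steps=2,
).images[0]
image.save("sana_sprint_sample.png")
```

The very low step count is the point of the SANA-Sprint collection: the teacher checkpoint above is the full diffusion model, and the Sprint checkpoints are its consistency-distilled students.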
Models (71)

- Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers
- Efficient-Large-Model/Sana_Sprint_1.6B_1024px_teacher
- Efficient-Large-Model/Sana_Sprint_1.6B_1024px
- Efficient-Large-Model/SANA1.5_1.6B_1024px_diffusers
- Efficient-Large-Model/SANA1.5_1.6B_1024px
- Efficient-Large-Model/SANA1.5_4.8B_1024px_diffusers
- Efficient-Large-Model/SANA1.5_4.8B_1024px
- Efficient-Large-Model/NVILA-Lite-8B-hf-preview
- Efficient-Large-Model/NVILA-Lite-2B-hf-preview
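The SANA 1.5 `_diffusers` checkpoints in this list load the same way through the standard SANA pipeline. A minimal sketch, assuming a diffusers release that includes `SanaPipeline` (the prompt and output path are illustrative):

```python
# Minimal sketch: 1024px text-to-image with a SANA 1.5
# diffusers checkpoint via SanaPipeline.
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/SANA1.5_1.6B_1024px_diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe(
    prompt="an astronaut sculpture made of glass, studio lighting",  # illustrative
    height=1024,
    width=1024,
).images[0]
image.save("sana15_sample.png")
```

Unlike the SANA-Sprint students, SANA 1.5 is a full linear diffusion transformer, so it uses a normal multi-step sampling schedule by default.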
