Kernel Community
The Kernel Hub allows Python libraries and applications to load optimized compute kernels directly from the Hugging Face Hub. Think of it like the Model Hub, but for low-level, high-performance code snippets (kernels) that accelerate specific operations, often on GPUs.
Instead of manually managing complex dependencies, wrestling with compilation flags, or building libraries like Triton or CUTLASS from source, you can use the kernels library to instantly fetch and run pre-compiled, optimized kernels.
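As a minimal sketch of what this looks like in practice (assuming a CUDA-capable machine with PyTorch installed; kernels-community/activation exposes activation functions such as gelu_fast):

```python
import torch
from kernels import get_kernel

# Fetch the optimized kernel from the Hugging Face Hub
# (cached locally after the first call).
activation = get_kernel("kernels-community/activation")

# Input and output tensors on the GPU.
x = torch.randn((10, 10), dtype=torch.float16, device="cuda")
y = torch.empty_like(x)

# Run the pre-compiled CUDA kernel; gelu_fast writes its result into y.
activation.gelu_fast(y, x)
print(y)
```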
Projects
The Kernel Hub team maintains two projects to make interacting with the Kernel Hub as easy as possible.
kernel-builder
Creates compliant kernels that meet strict criteria for portability and compatibility.
kernels
Python library to load compute kernels directly from the Hub.
What are Compliant Kernels?
Kernels on the Hub are designed to be:
- Portable: Load from paths outside PYTHONPATH
- Unique: Multiple versions can run in the same process (see the sketch after this list)
- Compatible: Support various Python versions, PyTorch builds, and C++ ABIs
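A short sketch of what these properties mean in practice (a compatible PyTorch/CUDA environment is still assumed): each kernel is fetched into the Hub cache and loaded from there, so nothing needs to be pip-installed or placed on PYTHONPATH, and separately loaded kernels coexist as independent modules in one process.

```python
from kernels import get_kernel

# Each call resolves a build matching the local Python/PyTorch/CUDA setup
# and loads it from the Hub cache, not from PYTHONPATH.
activation = get_kernel("kernels-community/activation")
rotary = get_kernel("kernels-community/rotary")

# The loaded kernels are independent objects, so different kernels
# (or different versions of the same kernel) can coexist in one process.
print(activation, rotary)
```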
models (13)
- kernels-community/deformable-detr
- kernels-community/activation
- kernels-community/flash-attn
- kernels-community/quantization
- kernels-community/mamba-ssm
- kernels-community/paged-attention
- kernels-community/rotary
- kernels-community/quantization-eetq