kernels-community / paged-attention
Tag: kernel
License: apache-2.0
Files and versions (branch: main)
3 contributors, History: 13 commits
Latest commit: bebc17e, "Fix flake input" by danieldk (HF Staff), about 21 hours ago
Name              Size       Last commit message              Last updated
build/                       Build (aarch64)                  about 23 hours ago
cuda-utils/                  Port vLLM attention kernels      3 months ago
paged-attention/             Port vLLM attention kernels      3 months ago
tests/                       Rename to paged-attention        2 months ago
torch-ext/                   Update for build.toml changes    about 2 months ago
.gitattributes    1.56 kB    Port vLLM attention kernels      3 months ago
README.md         128 Bytes  Update README.md (#1)            about 2 months ago
build.toml        1.33 kB    Sync capabilities with upstream  1 day ago
flake.lock        3.03 kB    Fix flake input                  about 21 hours ago
flake.nix         332 Bytes  Fix flake input                  about 21 hours ago