Add comprehensive dataset README

README.md
---
license: mit
task_categories:
- text-generation
tags:
- llama-cpp
- llama-cpp-python
- wheels
- prebuilt
- manylinux
- cpu
- gpu
- vulkan
- openblas
- mkl
- avx2
- avx512
- gguf
- inference
pretty_name: "llama-cpp-python Prebuilt Wheels"
size_categories:
- 1K<n<10K
---

# llama-cpp-python Prebuilt Wheels

**The most complete collection of prebuilt `llama-cpp-python` wheels for manylinux.**

Never compile `llama-cpp-python` from source again. Just `pip install` the exact wheel you need.

## Stats

| Metric | Count |
|--------|-------|
| **Total Wheels** | 3,795+ |
| **Versions** | 0.3.0 – 0.3.16 (17 versions) |
| **Python** | 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, 3.14 |
| **CPU Backends** | OpenBLAS, Intel MKL, Basic (no BLAS) |
| **GPU Backends** | Vulkan, CLBlast, OpenCL, SYCL, RPC |
| **CPU Optimizations** | AVX, AVX2, AVX512, FMA, F16C, VNNI, VBMI, BF16, AMX, AVX-VNNI |
| **Platform** | `manylinux_2_31_x86_64` |

## Quick Install

```bash
# Direct install – just replace the filename with the wheel you need
pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/WHEEL_FILENAME.whl
```

### Examples

```bash
# OpenBLAS + AVX2/FMA/F16C (most modern desktops, 2013+)
pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl

# AVX512 + VNNI + VBMI (Ice Lake servers, 2019+)
pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx512_fma_f16c_vnni_vbmi-cp311-cp311-manylinux_2_31_x86_64.whl

# Vulkan GPU acceleration
pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+vulkan-cp311-cp311-manylinux_2_31_x86_64.whl

# Basic – maximum compatibility (any x86-64 CPU)
pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+basic_basic-cp311-cp311-manylinux_2_31_x86_64.whl
```

### In a requirements.txt

```
# URL-encode the + as %2B
https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16%2Bopenblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
```

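The `%2B` escaping shown above can also be produced programmatically. A minimal sketch using only the standard library (the `base`, `filename`, and `wheel_url` names are illustrative):

```python
from urllib.parse import quote

# Percent-encode the wheel filename so the "+" that starts the local
# version tag survives in a requirements.txt URL ("+" becomes "%2B";
# letters, digits, ".", "-", and "_" pass through unchanged).
base = "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/"
filename = "llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl"
wheel_url = base + quote(filename)
print(wheel_url)
```

Encoding only the filename (not the whole URL) keeps the `/` separators of the path intact.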
### In a Dockerfile

```dockerfile
RUN pip install --no-cache-dir \
    https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16%2Bopenblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
```

### In a HuggingFace Space (packages.txt + requirements.txt)

**packages.txt:**
```
libopenblas-dev
```

**requirements.txt:**
```
https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16%2Bopenblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
```

## Which Wheel Do I Need?

### Step 1: Choose Your Backend

| Backend | Best For | Tag |
|---------|----------|-----|
| **OpenBLAS** | General CPU inference, a good default | `openblas` |
| **Intel MKL** | Intel CPUs, potentially faster BLAS | `mkl` |
| **Basic** | Maximum compatibility, no external deps | `basic` |
| **Vulkan** | GPU acceleration (NVIDIA, AMD, Intel) | `vulkan` |
| **CLBlast** | OpenCL GPU acceleration | `clblast` |
| **OpenCL** | Generic OpenCL devices | `opencl` |
| **SYCL** | Intel GPUs (Arc, Flex, Data Center) | `sycl` |
| **RPC** | Distributed inference over the network | `rpc` |

### Step 2: Choose Your CPU Optimization

Check which instruction-set flags your CPU supports:

```bash
# Linux – list relevant CPU flags
grep -o 'avx[a-z0-9_]*\|fma\|f16c\|sse4_2' /proc/cpuinfo | sort -u
```

| CPU Era | Example CPUs | Recommended Tag |
|---------|--------------|-----------------|
| **2013+ desktop** | Haswell and later, Ryzen 1st gen | `avx2_fma_f16c` |
| **2017+ server** | Skylake-X / Skylake-SP Xeons | `avx512_fma_f16c` |
| **2019+ server** | Ice Lake Xeons, EPYC 4th gen (Genoa) | `avx512_fma_f16c_vnni_vbmi` |
| **2021+ desktop** | Alder Lake (12th gen Intel) | `avx2_fma_f16c_avxvnni` |
| **2023+ server** | Sapphire Rapids (4th gen Xeon) | `avx512_fma_f16c_vnni_vbmi_bf16_amx` |
| **2012+ legacy** | Ivy Bridge | `avx_f16c` |
| **2011+ legacy** | Sandy Bridge | `avx` |
| **Any x86-64** | Anything 64-bit | `basic` |

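The flag check and era table above can be combined into a small selector. A sketch, assuming `/proc/cpuinfo` flag names as reported by the Linux kernel (`avx512f`, `avx512_vnni`, `avx512vbmi`, `avx512_bf16`, `amx_tile`, `avx_vnni`); the function names are illustrative, not part of this collection:

```python
import os

def pick_cpu_tag(flags):
    """Map a set of /proc/cpuinfo flags to the most specific wheel tag."""
    if {"avx512f", "fma", "f16c"} <= flags:
        tag = "avx512_fma_f16c"
        if {"avx512_vnni", "avx512vbmi"} <= flags:
            tag = "avx512_fma_f16c_vnni_vbmi"
            if {"avx512_bf16", "amx_tile"} <= flags:
                tag = "avx512_fma_f16c_vnni_vbmi_bf16_amx"
        return tag
    if {"avx2", "fma", "f16c"} <= flags:
        return "avx2_fma_f16c_avxvnni" if "avx_vnni" in flags else "avx2_fma_f16c"
    if "avx" in flags:
        return "avx_f16c" if "f16c" in flags else "avx"
    return "basic"

def local_cpu_flags(path="/proc/cpuinfo"):
    """Return the flag set of the first CPU listed in /proc/cpuinfo."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if os.path.exists("/proc/cpuinfo"):  # Linux only
    print(pick_cpu_tag(local_cpu_flags()))
```

The checks go from most to least specific, so the result is always the richest tag the CPU can actually execute.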
### Step 3: Build the Filename

```
llama_cpp_python-{VERSION}+{BACKEND}_{CPU_TAG}-{PYTHON}-{PYTHON}-manylinux_2_31_x86_64.whl
```

**Example:** Python 3.12 + OpenBLAS + AVX2/FMA/F16C + version 0.3.16:

```
llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp312-cp312-manylinux_2_31_x86_64.whl
```

GPU backends don't need a CPU tag:

```
llama_cpp_python-0.3.16+vulkan-cp312-cp312-manylinux_2_31_x86_64.whl
```

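The template above is simple enough to script. A minimal sketch (the `wheel_filename` helper is illustrative, not part of this repo):

```python
def wheel_filename(version, backend, cpu_tag=None, py="cp312"):
    """Assemble a wheel filename from the template above.

    GPU backends (vulkan, clblast, opencl, sycl, rpc) take no CPU tag.
    """
    local = backend if cpu_tag is None else f"{backend}_{cpu_tag}"
    return f"llama_cpp_python-{version}+{local}-{py}-{py}-manylinux_2_31_x86_64.whl"

print(wheel_filename("0.3.16", "openblas", "avx2_fma_f16c"))
print(wheel_filename("0.3.16", "vulkan"))
```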
## CPU Optimization Tags Reference

| Tag | CPU Instructions Enabled | CMake Flags |
|-----|--------------------------|-------------|
| `basic` | None (pure x86-64) | – |
| `avx` | AVX | `-DGGML_AVX=ON` |
| `avx_f16c` | AVX + F16C | `-DGGML_AVX=ON -DGGML_F16C=ON` |
| `avx2_fma_f16c` | AVX2 + FMA + F16C | `-DGGML_AVX2=ON -DGGML_FMA=ON -DGGML_F16C=ON` |
| `avx512_fma_f16c` | AVX512 + FMA + F16C | `-DGGML_AVX512=ON -DGGML_FMA=ON -DGGML_F16C=ON` |
| `avx512_fma_f16c_vnni_vbmi` | AVX512 + FMA + F16C + VNNI + VBMI | above, plus `-DGGML_AVX512_VNNI=ON -DGGML_AVX512_VBMI=ON` |
| `avx512_fma_f16c_vnni_vbmi_bf16_amx` | Full server set (Sapphire Rapids) | above, plus `-DGGML_AVX512_BF16=ON -DGGML_AMX_TILE/INT8/BF16=ON` |
| `avx2_fma_f16c_avxvnni` | AVX2 + FMA + F16C + AVX-VNNI | `avx2_fma_f16c` flags, plus `-DGGML_AVX_VNNI=ON` |

## Python Version Support

| Python | Tag | Status |
|--------|-----|--------|
| 3.8 | `cp38` | ✅ Full coverage |
| 3.9 | `cp39` | ✅ Full coverage |
| 3.10 | `cp310` | ✅ Full coverage |
| 3.11 | `cp311` | ✅ Full coverage |
| 3.12 | `cp312` | ✅ Full coverage |
| 3.13 | `cp313` | ✅ Full coverage |
| 3.14 | `cp314` | ✅ Full coverage |

## Naming Convention (PEP 440)

All wheels follow the [PEP 440](https://peps.python.org/pep-0440/) local version identifier standard:

```
llama_cpp_python-{VERSION}+{LOCAL_TAG}-{PYTHON}-{ABI}-{PLATFORM}.whl
                          ^
                          └── Local version label (backend + CPU flags)
```

The `+` separates the upstream version from the local build variant, and the local tag uses `_` to separate components. This is fully PEP 440 compliant and works with `pip`, `requirements.txt`, and all standard Python packaging tools.

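Because every field sits between fixed `-` separators (and the platform tag itself contains no hyphens), a filename can be taken apart with plain string splitting, no packaging library required. A sketch with illustrative variable names:

```python
# Split a wheel filename from this repo into its fields.
name = "llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp312-cp312-manylinux_2_31_x86_64.whl"

dist, version_local, py_tag, abi_tag, platform = name[: -len(".whl")].split("-")
version, _, local_tag = version_local.partition("+")   # "0.3.16", "openblas_avx2_fma_f16c"
backend, _, cpu_tag = local_tag.partition("_")         # "openblas", "avx2_fma_f16c"

print(dist)     # llama_cpp_python
print(version)  # 0.3.16
print(backend)  # openblas
print(cpu_tag)  # avx2_fma_f16c
```

Note the backend/CPU split only works for CPU wheels; for GPU wheels like `+vulkan` the `cpu_tag` part comes back empty.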
## How These Wheels Are Built

These wheels are built automatically by the **Ultimate Llama Wheel Factory**, a distributed build system running on HuggingFace Spaces:

- **[Dispatcher](https://huggingface.co/spaces/AIencoder/wheel-factory-dispatcher)** – command center for creating and managing build jobs
- **[Workers 1-4](https://huggingface.co/spaces/AIencoder/wheel-factory-worker-1)** – autonomous Docker-based build agents
- **[Auditor](https://huggingface.co/spaces/AIencoder/wheel-factory-auditor)** – validates filenames and repo health

Each wheel is compiled from source with explicit CMake flags (never `-march=native`), so each binary uses exactly the instruction set advertised in its filename.

+
## β FAQ
|
| 191 |
+
|
| 192 |
+
**Q: Do I need to install OpenBLAS separately?**
|
| 193 |
+
A: For `openblas` wheels on Linux, yes: `sudo apt install libopenblas-dev`. For `basic` wheels, no external dependencies needed. For HuggingFace Spaces, add `libopenblas-dev` to `packages.txt`.
|
| 194 |
+
|
| 195 |
+
**Q: Which wheel is fastest?**
|
| 196 |
+
A: Use the most specific wheel your CPU supports. `avx2_fma_f16c` is the sweet spot for most modern hardware. If your CPU has AVX512, use the `avx512` variants for potentially better performance on large batch sizes.
|
| 197 |
+
|
| 198 |
+
**Q: Can I use these on Ubuntu/Debian/Fedora/etc?**
|
| 199 |
+
A: Yes! `manylinux_2_31` works on any Linux distro with glibc β₯ 2.31. That includes Ubuntu 20.04+, Debian 11+, Fedora 34+, RHEL 9+, and most other modern distros.
|
| 200 |
+
|
| 201 |
+
**Q: What about Windows/macOS/CUDA wheels?**
|
| 202 |
+
A: This repo currently focuses on manylinux. For other platforms, check [abetlen's wheel index](https://abetlen.github.io/llama-cpp-python/whl/) or [jllllll's cuBLAS wheels](https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels).
|
| 203 |
+
|
| 204 |
+
**Q: A wheel doesn't work / crashes with SIGILL?**
|
| 205 |
+
A: You're probably using a wheel with CPU instructions your hardware doesn't support (e.g., AVX512 on a non-AVX512 CPU). Try a less specific wheel like `avx2_fma_f16c` or `basic`.
|
| 206 |
+
|
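The glibc ≥ 2.31 requirement from the FAQ can be checked from Python's standard library before installing. A sketch (output wording is illustrative):

```python
import platform

# manylinux_2_31 wheels require glibc >= 2.31.
# platform.libc_ver() returns ("glibc", "<version>") on glibc systems
# and ("", "") elsewhere (e.g. musl-based Alpine, macOS, Windows).
lib, ver = platform.libc_ver()
if lib == "glibc":
    ok = tuple(int(p) for p in ver.split(".")[:2]) >= (2, 31)
    print(f"glibc {ver}: {'compatible' if ok else 'too old for manylinux_2_31'}")
else:
    print("no glibc detected; these wheels will not work here")
```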
## License

MIT – same as [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) and [llama.cpp](https://github.com/ggml-org/llama.cpp).

## Credits

- [Georgi Gerganov](https://github.com/ggerganov) – llama.cpp
- [Andrei Betlen](https://github.com/abetlen) – llama-cpp-python
- Built by [AIencoder](https://huggingface.co/AIencoder) with the Ultimate Llama Wheel Factory