AIencoder committed
Commit d361e6f · verified · 1 Parent(s): 6b9a4a8

docs: complete README with 8333 wheel stats and install guide

Files changed (1):
  1. README.md +65 -172
README.md CHANGED
@@ -1,206 +1,99 @@
  ---
  license: mit
- task_categories:
- - text-generation
  tags:
- - llama-cpp
- - llama-cpp-python
- - wheels
- - prebuilt
- - cpu
- - gpu
- - manylinux
- - gguf
- - inference
- pretty_name: "llama-cpp-python Prebuilt Wheels"
  size_categories:
- - 1K<n<10K
  ---

- # 🏭 llama-cpp-python Prebuilt Wheels

- **The most complete collection of prebuilt `llama-cpp-python` wheels for manylinux x86_64.**

- Stop compiling. Start inferencing.

- ```bash
- pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
- ```
-
- ## 📊 What's Inside

- | | Count |
  |---|---|
- | **Total Wheels** | 3,794+ |
- | **Versions** | 0.3.0 — 0.3.16 (17 versions) |
- | **Python** | 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, 3.14 |
- | **Platform** | `manylinux_2_31_x86_64` |
- | **Backends** | 8 |
- | **CPU Profiles** | 13+ flag combinations |
-
- ## ⚡ Backends
-
- | Backend | Tag | Description |
- |---------|-----|-------------|
- | **OpenBLAS** | `openblas` | CPU BLAS acceleration — best general-purpose choice |
- | **Intel MKL** | `mkl` | Intel Math Kernel Library — fastest on Intel CPUs |
- | **Basic** | `basic` | No BLAS — maximum compatibility, no extra dependencies |
- | **Vulkan** | `vulkan` | Universal GPU acceleration — works on NVIDIA, AMD, Intel |
- | **CLBlast** | `clblast` | OpenCL GPU acceleration |
- | **SYCL** | `sycl` | Intel GPU acceleration (Data Center, Arc, iGPU) |
- | **OpenCL** | `opencl` | Generic OpenCL GPU backend |
- | **RPC** | `rpc` | Distributed inference over network |
-
- ## 🖥️ CPU Optimization Profiles
-
- Wheels are built with specific CPU instruction sets enabled. Pick the one that matches your hardware:
-
- | CPU Tag | Instructions | Best For |
- |---------|-------------|----------|
- | `basic` | None | Any x86-64 CPU (maximum compatibility) |
- | `avx` | AVX | Sandy Bridge+ (2011) |
- | `avx_f16c` | AVX + F16C | Ivy Bridge+ (2012) |
- | `avx2_fma_f16c` | AVX2 + FMA + F16C | **Haswell+ (2013) — most common** |
- | `avx2_fma_f16c_avxvnni` | AVX2 + FMA + F16C + AVX-VNNI | Alder Lake+ (2021) |
- | `avx512_fma_f16c` | AVX-512 + FMA + F16C | Skylake-X+ (2017) |
- | `avx512_fma_f16c_vnni` | + AVX512-VNNI | Cascade Lake+ (2019) |
- | `avx512_fma_f16c_vnni_vbmi` | + AVX512-VBMI | Ice Lake+ (2019) |
- | `avx512_fma_f16c_vnni_vbmi_bf16_amx` | + BF16 + AMX | Sapphire Rapids+ (2023) |
-
- ### How to Pick the Right Wheel
-
- **Don't know your CPU?** Start with `avx2_fma_f16c` — it works on any CPU from 2013 onwards (Intel Haswell, AMD Ryzen, and newer).
-
- **Want maximum compatibility?** Use `basic` — works on literally any x86-64 CPU.
-
- **Have a server CPU?** Check if it supports AVX-512:
- ```bash
- grep -o 'avx[^ ]*\|fma\|f16c\|bmi2\|sse4_2' /proc/cpuinfo | sort -u
- ```
-
- ## 📦 Filename Format
-
- All wheels follow the [PEP 440](https://peps.python.org/pep-0440/) local version identifier standard:
-
- ```
- llama_cpp_python-{version}+{backend}_{cpu_flags}-{python}-{python}-{platform}.whl
- ```
-
- Examples:
- ```
- llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
- llama_cpp_python-0.3.16+vulkan-cp312-cp312-manylinux_2_31_x86_64.whl
- llama_cpp_python-0.3.16+basic-cp310-cp310-manylinux_2_31_x86_64.whl
- ```
-
- The local version label (`+openblas_avx2_fma_f16c`) encodes:
- - **Backend**: `openblas`, `mkl`, `basic`, `vulkan`, `clblast`, `sycl`, `opencl`, `rpc`
- - **CPU flags** (in order): `avx`, `avx2`, `avx512`, `fma`, `f16c`, `vnni`, `vbmi`, `bf16`, `avxvnni`, `amx`

- ## 🚀 Quick Start

- ### CPU (OpenBLAS + AVX2 — recommended for most users)

  ```bash
- sudo apt-get install libopenblas-dev
-
- pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
  ```

- ### GPU (Vulkan — works on any GPU vendor)
-
- ```bash
- sudo apt-get install libvulkan1

- pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+vulkan-cp311-cp311-manylinux_2_31_x86_64.whl
  ```
-
- ### Basic (zero dependencies)
-
- ```bash
- pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+basic-cp311-cp311-manylinux_2_31_x86_64.whl
  ```

- ### Example Usage
-
- ```python
- from llama_cpp import Llama

- llm = Llama.from_pretrained(
-     repo_id="Qwen/Qwen2.5-Coder-7B-Instruct-GGUF",
-     filename="*q4_k_m.gguf",
-     n_ctx=4096,
- )
-
- output = llm.create_chat_completion(
-     messages=[{"role": "user", "content": "Write a Python hello world"}],
-     max_tokens=256,
- )
- print(output["choices"][0]["message"]["content"])
- ```

- ## 🔧 Runtime Dependencies
-
- | Backend | Required Packages |
- |---------|------------------|
- | OpenBLAS | `libopenblas0` (runtime) or `libopenblas-dev` (build) |
- | MKL | Intel oneAPI MKL |
- | Vulkan | `libvulkan1` |
- | CLBlast | `libclblast1` |
- | OpenCL | `ocl-icd-libopencl1` |
- | Basic | **None** |
- | SYCL | Intel oneAPI DPC++ runtime |
- | RPC | Network access to RPC server |

- ## 🏭 How These Wheels Are Built

- These wheels are built by the **Ultimate Llama Wheel Factory** — a distributed build system running entirely on free HuggingFace Spaces:
-
- | Component | Link |
- |-----------|------|
- | 🏭 Dispatcher | [wheel-factory-dispatcher](https://huggingface.co/spaces/AIencoder/wheel-factory-dispatcher) |
- | ⚙️ Workers 1-4 | [wheel-factory-worker-1](https://huggingface.co/spaces/AIencoder/wheel-factory-worker-1) ... 4 |
- | 🔍 Auditor | [wheel-factory-auditor](https://huggingface.co/spaces/AIencoder/wheel-factory-auditor) |
-
- The factory uses explicit cmake flags matching llama.cpp's official CPU variant builds:
-
- ```
- CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS -DGGML_AVX2=ON -DGGML_FMA=ON -DGGML_F16C=ON -DGGML_AVX=OFF -DGGML_AVX512=OFF -DGGML_NATIVE=OFF"
- ```
-
- Every flag is set explicitly (no cmake defaults) to ensure reproducible, deterministic builds.
-
- ## ❓ FAQ
-
- **Q: Which wheel should I use?**
- For most people: `openblas_avx2_fma_f16c` with your Python version. It's fast, works on 90%+ of modern CPUs, and only needs `libopenblas`.
-
- **Q: Can I use these on Ubuntu / Debian / Fedora / Arch?**
- Yes — `manylinux_2_31` wheels work on any Linux distro with glibc 2.31 or newer (Ubuntu 20.04+, Debian 11+, Fedora 34+, Arch).

- **Q: What about Windows / macOS / CUDA wheels?**
- This repo focuses on manylinux x86_64. For other platforms, see:
- - [abetlen's official wheel index](https://abetlen.github.io/llama-cpp-python/whl/) — CPU, CUDA 12.1-12.5, Metal
- - [jllllll's CUDA wheels](https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels) — cuBLAS + AVX combos

- **Q: These wheels don't work on Alpine Linux.**
- Alpine uses musl, not glibc. These are `manylinux` (glibc) wheels. Build from source or use `musllinux` wheels.

- **Q: I get "illegal instruction" errors.**
- You're using a wheel with CPU flags your processor doesn't support. Try `basic` (no SIMD) or check your CPU flags with:
- ```bash
- grep -o 'avx[^ ]*\|fma\|f16c' /proc/cpuinfo | sort -u
  ```

- **Q: Can I contribute more wheels?**
- No. The factory is a private Space accessible only to me, but you can suggest wheels to add via the community tab.
-
- ## 📄 License

- MIT — same as [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) and [llama.cpp](https://github.com/ggml-org/llama.cpp).

- ## 🙏 Credits

- - [llama.cpp](https://github.com/ggml-org/llama.cpp) by Georgi Gerganov and the ggml community
- - [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) by Andrei Betlen
- - Built with 🏭 by [AIencoder](https://huggingface.co/AIencoder)
  ---
  license: mit
  tags:
+ - llama-cpp-python
+ - wheels
+ - pre-built
+ - binary
+ pretty_name: llama-cpp-python Pre-Built Wheels
  size_categories:
+ - 1K<n<10K
  ---

+ # 🏭 llama-cpp-python Pre-Built Wheels

+ The most complete collection of pre-built `llama-cpp-python` wheels in existence — **8,333 wheels** across every platform, Python version, backend, and CPU optimization level.

+ No more building from source. Just find your wheel and `pip install` it directly.

+ ## 📊 Collection Stats

+ | Platform | Wheels |
  |---|---|
+ | 🐧 Linux x86_64 (manylinux) | 4,940 |
+ | 🍎 macOS Intel (x86_64) | 1,040 |
+ | 🪟 Windows (amd64) | 1,010 |
+ | 🪟 Windows (32-bit) | 634 |
+ | 🍎 macOS Apple Silicon (arm64) | 289 |
+ | 🐧 Linux i686 | 214 |
+ | 🐧 Linux aarch64 | 120 |
+ | 🐧 Linux x86_64 (plain) | 81 |
+ | 🐧 Linux RISC-V | 5 |
+ | **Total** | **8,333** |

+ ## 🚀 How to Install

+ Find your wheel using the naming convention below, then install directly:

  ```bash
+ pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/YOUR_WHEEL_NAME.whl"
  ```

+ ### Wheel Naming Convention

  ```
+ llama_cpp_python-{version}+{backend}_{profile}-{pytag}-{pytag}-{platform}.whl
  ```

+ **Versions:** `0.2.82` through `0.3.18+`

+ **Backends (manylinux wheels):**
+ - `openblas` — OpenBLAS BLAS acceleration
+ - `mkl` — Intel MKL acceleration
+ - `basic` — No BLAS, maximum compatibility
+ - `vulkan` — Vulkan GPU
+ - `clblast` — CLBlast OpenCL GPU
+ - `opencl` — OpenCL GPU
+ - `rpc` — Distributed inference

+ **CPU Profiles (manylinux wheels):**
+ - `basic` — Any x86-64 CPU
+ - `sse42` — Nehalem+ (2008+)
+ - `sandybridge` — AVX (2011+)
+ - `ivybridge` — AVX + F16C (2012+)
+ - `haswell` — AVX2 + FMA + BMI2 (2013+) ← most common
+ - `skylakex` — AVX-512 (2017+)
+ - `icelake` — AVX-512 VNNI+VBMI (2019+)
+ - `alderlake` — AVX-VNNI (2021+)
+ - `sapphirerapids` — AVX-512 BF16 + AMX (2023+)
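The profile tags above correspond to CPU feature flags that Linux exposes in `/proc/cpuinfo`. As a rough illustration (not part of this repo), a small Python helper could map those flags to a profile tag; the flag names and the flag-to-profile mapping are assumptions based on the list above, not authoritative:

```python
# Illustrative only: map /proc/cpuinfo feature flags to one of the CPU
# profile tags listed above. Flag names follow the Linux kernel's
# conventions; the mapping itself is an approximation.
def read_cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

# (profile tag, required flags), checked newest-first so the best match wins
PROFILES = [
    ("sapphirerapids", {"avx512_bf16", "amx_tile"}),
    ("icelake",        {"avx512_vnni", "avx512_vbmi"}),
    ("skylakex",       {"avx512f"}),
    ("alderlake",      {"avx_vnni"}),
    ("haswell",        {"avx2", "fma", "bmi2"}),
    ("ivybridge",      {"avx", "f16c"}),
    ("sandybridge",    {"avx"}),
    ("sse42",          {"sse4_2"}),
]

def pick_profile(flags):
    for tag, required in PROFILES:
        if required <= flags:  # all required flags present
            return tag
    return "basic"
```

A machine whose flags include `avx2`, `fma`, and `bmi2` but no AVX-512 bits would map to `haswell`, the most common profile.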

+ **Python tags:** `cp38`, `cp39`, `cp310`, `cp311`, `cp312`, `cp313`, `cp314`, `pp38`, `pp39`, `pp310`
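Given those parts, a wheel URL can be assembled mechanically. A hypothetical helper (the function and its name are illustrative, not part of this repo); the backend/profile label applies only to the manylinux wheels, per the convention above:

```python
# Illustrative: build a download URL for a wheel in this repo from its
# parts, following the naming convention above. It only constructs the
# name; it does not check that the wheel actually exists.
BASE = "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main"

def wheel_url(version, pytag, platform_tag, backend=None, profile=None):
    # manylinux wheels carry a +{backend}_{profile} local version label;
    # Windows/macOS wheels sourced from upstream releases have none
    local = f"+{backend}_{profile}" if backend and profile else ""
    name = f"llama_cpp_python-{version}{local}-{pytag}-{pytag}-{platform_tag}.whl"
    return f"{BASE}/{name}"
```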
 
+ ### Examples

+ ```bash
+ # Linux x86_64, Python 3.11, OpenBLAS, Haswell CPU (most common setup)
+ pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18+openblas_haswell-cp311-cp311-manylinux_2_31_x86_64.whl"

+ # Windows, Python 3.11
+ pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-win_amd64.whl"

+ # macOS Apple Silicon, Python 3.12
+ pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp312-cp312-macosx_11_0_arm64.whl"

+ # macOS Intel, Python 3.11
+ pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-macosx_10_9_x86_64.whl"
  ```

+ ## 🏗️ Sources

+ - **manylinux wheels** — Built by the [Ultimate Llama Wheel Factory](https://huggingface.co/AIencoder), a distributed 4-worker HuggingFace Space system covering every llama.cpp cmake option possible on manylinux
+ - **Windows / macOS / Linux ARM wheels** — Sourced from [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python) official releases

+ ## 📝 Notes

+ - All wheels are MIT licensed (same as llama-cpp-python)
+ - manylinux wheels target `manylinux_2_31_x86_64` (glibc 2.31+, Ubuntu 20.04+)
+ - CUDA wheels for Windows are included (cu121–cu124)
+ - Metal wheels for macOS are included
+ - This collection is updated periodically as new versions are released
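The glibc 2.31 floor noted above can be checked from Python's standard library. A minimal sketch (assumes CPython on Linux; `platform.libc_ver()` reports an empty string on musl systems such as Alpine, where these manylinux wheels will not install):

```python
import platform

def glibc_at_least(version, minimum=(2, 31)):
    """True if a glibc version string like '2.35' meets the minimum."""
    parts = tuple(int(x) for x in version.split(".")[:2])
    return parts >= minimum

def supports_manylinux_2_31():
    # platform.libc_ver() -> e.g. ('glibc', '2.35'); ('', '') on musl
    libc, version = platform.libc_ver()
    return libc == "glibc" and bool(version) and glibc_at_least(version)
```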