# llama-cpp-python — Free-Tier Friendly Wheel

This Space provides a **prebuilt `llama-cpp-python` wheel** designed to work **reliably on Hugging Face Free tier Spaces**.

No compilation. No system packages. No build failures.

If your Space crashes during `pip install llama-cpp-python`, this wheel is the fix.

---
## Optimized for Hugging Face Free Tier

Hugging Face Free tier Spaces are:

- CPU-only
- Limited in memory
- Not suitable for native compilation

This wheel is built **ahead of time**, so it can be installed instantly without triggering CMake, compilers, or BLAS detection.

---
24
+
25
+ ## What this wheel gives you
26
+
27
+ - ✅ Works on **HF Free tier CPU Spaces**
28
+ - ✅ Linux (ubuntu-22.04 compatible)
29
+ - ✅ Python 3.10
30
+ - ✅ OpenBLAS enabled (`GGML_BLAS=ON`)
31
+ - ✅ No system dependencies required
32
+ - ✅ No build step during Space startup
33
+ - ✅ Fast, reliable `pip install`
34
+
35
+ ---
36
+
37
+ ## How to use in a Space (Free tier)
38
+
39
+ 1. Download the wheel from the GitHub repository
40
+ 2. Upload it to your Space
41
+ 3. Install it in your Space startup:
42
+
43
+
44
+
45
+ pip install llama_cpp_python-*.whl>
46
+
47
+
48
+ ## That’s it — your Space will start without build errors.
49
+
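Alternatively, on Spaces that install dependencies from `requirements.txt`, you can reference the uploaded wheel there by a relative path; pip accepts local wheel paths in requirements files. The filename below is a hypothetical example — use the exact name of the wheel you uploaded:

```text
./llama_cpp_python-0.2.90-cp310-cp310-linux_x86_64.whl
```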
## Build details

This wheel was built using:

- abetlen/llama-cpp-python (recursive submodules)
- OpenBLAS (`GGML_VENDOR=OpenBLAS`)
- scikit-build-core
- ninja
- `python -m build --wheel --no-isolation`

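Under those assumptions, the build can be reproduced roughly as follows. This is a sketch, not the exact CI script; note in particular that upstream llama.cpp spells the vendor flag `GGML_BLAS_VENDOR`:

```shell
# Clone with submodules so the vendored llama.cpp sources are present
git clone --recursive https://github.com/abetlen/llama-cpp-python
cd llama-cpp-python

# Ask scikit-build-core to enable the OpenBLAS backend
export CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS"

# With --no-isolation, the build tools must already be installed
pip install build scikit-build-core ninja
python -m build --wheel --no-isolation
```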
## Build environment

- OS: Ubuntu 22.04
- Python: 3.10

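Because the wheel is tagged for CPython 3.10 on Linux, a quick interpreter check before installing can catch mismatches early. A minimal sketch (`wheel_compatible` is a hypothetical helper, not part of the wheel):

```python
import platform
import sys

def wheel_compatible() -> bool:
    """True if this interpreter matches the wheel's cp310/Linux build tags."""
    return sys.version_info[:2] == (3, 10) and platform.system() == "Linux"

if not wheel_compatible():
    print(f"Warning: wheel targets Python 3.10 on Linux, "
          f"found {platform.python_version()} on {platform.system()}")
```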
## Why not build from source on HF?

On Free tier Spaces, building from source often fails due to:

- Missing compilers
- Missing BLAS libraries
- Memory limits
- Build timeouts

This prebuilt wheel avoids all of those issues.

## Notes

- CPU-only (no CUDA)
- Intended for inference workloads
- Not an official upstream release

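For the inference use case, a minimal smoke test once the wheel is installed might look like this. The model path is a placeholder for a GGUF file you upload to the Space, and the import guard simply makes the snippet safe to run before installation:

```python
try:
    from llama_cpp import Llama  # provided by the prebuilt wheel
except ImportError:
    Llama = None  # wheel not installed yet

def run_demo(model_path: str = "models/your-model.gguf"):
    """Generate a short completion, or return None if llama_cpp is absent."""
    if Llama is None:
        return None
    # n_threads=2 is a guess at the Free tier CPU allocation; tune as needed
    llm = Llama(model_path=model_path, n_ctx=512, n_threads=2)
    out = llm("Q: Name a planet.\nA:", max_tokens=8)
    return out["choices"][0]["text"]

if __name__ == "__main__":
    print(run_demo())
```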
## Credits

All credit goes to the maintainers of llama-cpp-python and llama.cpp. This Space exists solely to make Free tier usage painless.