AIencoder committed on
Commit ab94b2c · verified · 1 Parent(s): d361e6f

docs: comprehensive README merging original + new content with full credits

Files changed (1)
  1. README.md +134 -45
README.md CHANGED
@@ -1,80 +1,85 @@
---
license: mit
tags:
- llama-cpp-python
- wheels
- pre-built
- binary
pretty_name: llama-cpp-python Pre-Built Wheels
size_categories:
- 1K<n<10K
---

- # 🏭 llama-cpp-python Pre-Built Wheels

The most complete collection of pre-built `llama-cpp-python` wheels in existence — **8,333 wheels** across every platform, Python version, backend, and CPU optimization level.

- No more building from source. Just find your wheel and `pip install` it directly.

## 📊 Collection Stats

| Platform | Wheels |
- |---|---|
| 🐧 Linux x86_64 (manylinux) | 4,940 |
- | 🍎 macOS Intel (x86_64) | 1,040 |
| 🪟 Windows (amd64) | 1,010 |
| 🪟 Windows (32-bit) | 634 |
| 🍎 macOS Apple Silicon (arm64) | 289 |
| 🐧 Linux i686 | 214 |
| 🐧 Linux aarch64 | 120 |
- | 🐧 Linux x86_64 (plain) | 81 |
| 🐧 Linux RISC-V | 5 |
| **Total** | **8,333** |

## 🚀 How to Install

- Find your wheel using the naming convention below, then install directly:

```bash
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/YOUR_WHEEL_NAME.whl"
```

- ### Wheel Naming Convention
-
- ```
- llama_cpp_python-{version}+{backend}_{profile}-{pytag}-{pytag}-{platform}.whl
- ```
-
- **Versions:** `0.2.82` through `0.3.18+`
-
- **Backends (manylinux wheels):**
- - `openblas` — OpenBLAS BLAS acceleration
- - `mkl` — Intel MKL acceleration
- - `basic` — No BLAS, maximum compatibility
- - `vulkan` — Vulkan GPU
- - `clblast` — CLBlast OpenCL GPU
- - `opencl` — OpenCL GPU
- - `rpc` — Distributed inference
-
- **CPU Profiles (manylinux wheels):**
- - `basic` — Any x86-64 CPU
- - `sse42` — Nehalem+ (2008+)
- - `sandybridge` — AVX (2011+)
- - `ivybridge` — AVX + F16C (2012+)
- - `haswell` — AVX2 + FMA + BMI2 (2013+) ← most common
- - `skylakex` — AVX-512 (2017+)
- - `icelake` — AVX-512 VNNI+VBMI (2019+)
- - `alderlake` — AVX-VNNI (2021+)
- - `sapphirerapids` — AVX-512 BF16 + AMX (2023+)
-
- **Python tags:** `cp38`, `cp39`, `cp310`, `cp311`, `cp312`, `cp313`, `cp314`, `pp38`, `pp39`, `pp310`
-
- ### Examples

```bash
- # Linux x86_64, Python 3.11, OpenBLAS, Haswell CPU (most common setup)
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18+openblas_haswell-cp311-cp311-manylinux_2_31_x86_64.whl"

# Windows, Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-win_amd64.whl"

@@ -83,17 +88,101 @@ pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/

# macOS Intel, Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-macosx_10_9_x86_64.whl"
```

- ## 🏗️ Sources

- - **manylinux wheels** — Built by the [Ultimate Llama Wheel Factory](https://huggingface.co/AIencoder) — a distributed 4-worker HuggingFace Space system covering every llama.cpp cmake option possible on manylinux
- - **Windows / macOS / Linux ARM wheels** — Sourced from [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python) official releases
## 📝 Notes

- - All wheels are MIT licensed (same as llama-cpp-python)
- - manylinux wheels target `manylinux_2_31_x86_64` (glibc 2.31+, Ubuntu 20.04+)
- - CUDA wheels for Windows/macOS are included (cu121–cu124)
- - Metal wheels for macOS are included
- This collection is updated periodically as new versions are released
---
license: mit
+ task_categories:
+ - text-generation
+ language:
+ - en
tags:
+ - code
+ - llama-cpp
- llama-cpp-python
- wheels
- pre-built
- binary
+ - linux
+ - windows
+ - macos
pretty_name: llama-cpp-python Pre-Built Wheels
size_categories:
- 1K<n<10K
---

+ # 🏭 llama-cpp-python Mega-Factory Wheels
+
+ > **"Stop waiting for `pip` to compile. Just install and run."**

The most complete collection of pre-built `llama-cpp-python` wheels in existence — **8,333 wheels** across every platform, Python version, backend, and CPU optimization level.

+ No more `cmake`, `gcc`, or compilation hell. No more waiting 10 minutes for a build that might fail. Just find your wheel and `pip install` it directly.
+
+ ---
+
+ ## 🚀 Why These Wheels?
+
+ Standard wheels target the "lowest common denominator" to avoid crashes on old hardware. This collection goes further — the manylinux wheels are built using a massive **Everything Preset** targeting specific CPU instruction sets, maximizing your **Tokens per Second (T/s)**.
+
+ - **Zero Dependencies:** No `cmake`, `gcc`, or `nvcc` required on your target machine.
+ - **Every Platform:** Linux (manylinux, aarch64, i686, RISC-V), Windows (amd64, 32-bit), macOS (Intel + Apple Silicon).
+ - **Server-Grade Power:** Optimized builds for `Sapphire Rapids`, `Ice Lake`, `Alder Lake`, `Haswell`, and more.
+ - **Full Backend Support:** `OpenBLAS`, `MKL`, `Vulkan`, `CLBlast`, `OpenCL`, `RPC`, and plain CPU builds.
+ - **Cutting Edge:** Python `3.8` through experimental `3.14`, plus PyPy `pp38`–`pp310`.
+ - **GPU Too:** CUDA wheels (cu121–cu124) and macOS Metal wheels included.
+
+ ---
## 📊 Collection Stats

| Platform | Wheels |
+ |:---|---:|
| 🐧 Linux x86_64 (manylinux) | 4,940 |
+ | 🍎 macOS Intel (x86\_64) | 1,040 |
| 🪟 Windows (amd64) | 1,010 |
| 🪟 Windows (32-bit) | 634 |
| 🍎 macOS Apple Silicon (arm64) | 289 |
| 🐧 Linux i686 | 214 |
| 🐧 Linux aarch64 | 120 |
+ | 🐧 Linux x86\_64 (plain) | 81 |
| 🐧 Linux RISC-V | 5 |
| **Total** | **8,333** |

+ The manylinux builds alone cover **3,600+ combinations** across versions, backends, Python versions, and CPU profiles.
+
+ ---
+
## 🚀 How to Install

+ ### Quick Install
+
+ Find your wheel filename (see naming convention below), then:

```bash
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/YOUR_WHEEL_NAME.whl"
```

+ ### Common Examples

```bash
+ # Linux x86_64, Python 3.11, OpenBLAS, Haswell CPU (most common Linux setup)
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18+openblas_haswell-cp311-cp311-manylinux_2_31_x86_64.whl"

+ # Linux x86_64, Python 3.12, Basic CPU (maximum compatibility)
+ pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18+basic_basic-cp312-cp312-manylinux_2_31_x86_64.whl"
+
# Windows, Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-win_amd64.whl"

# macOS Intel, Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-macosx_10_9_x86_64.whl"
+
+ # Linux ARM64 (Raspberry Pi, AWS Graviton), Python 3.11
+ pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-linux_aarch64.whl"
```

+ ---
+
+ ## 📝 Wheel Naming Convention
+
+ ### manylinux wheels (custom-built)
+
+ ```
+ llama_cpp_python-{version}+{backend}_{profile}-{pytag}-{pytag}-{platform}.whl
+ ```
+
+ **Versions covered:** `0.3.0` through `0.3.18+`
+
+ **Backends:**
+
+ | Backend | Description |
+ |:---|:---|
+ | `openblas` | OpenBLAS BLAS acceleration — best general-purpose CPU performance |
+ | `mkl` | Intel MKL acceleration — best on Intel CPUs |
+ | `basic` | No BLAS, maximum compatibility |
+ | `vulkan` | Vulkan GPU backend |
+ | `clblast` | CLBlast OpenCL GPU backend |
+ | `opencl` | Generic OpenCL GPU backend |
+ | `rpc` | Distributed inference over network |
+
+ **CPU Profiles:**
+
+ | Profile | Instruction Sets | Era | Notes |
+ |:---|:---|:---|:---|
+ | `basic` | x86-64 baseline | Any | Maximum compatibility |
+ | `sse42` | SSE 4.2 | 2008+ | Nehalem |
+ | `sandybridge` | AVX | 2011+ | |
+ | `ivybridge` | AVX + F16C | 2012+ | |
+ | `haswell` | AVX2 + FMA + BMI2 | 2013+ | **Most common** |
+ | `skylakex` | AVX-512 | 2017+ | |
+ | `icelake` | AVX-512 + VNNI + VBMI | 2019+ | |
+ | `alderlake` | AVX-VNNI | 2021+ | |
+ | `sapphirerapids` | AVX-512 BF16 + AMX | 2023+ | Highest performance |
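On Linux, the profile table above maps onto the `flags` line of `/proc/cpuinfo`. The following is an illustrative sketch, not part of this repo: the flag-to-profile mapping mirrors the table, but kernel flag spellings (e.g. `avx512_vnni`, `avx_vnni`) vary across kernel versions, so treat the checks as a starting point.

```python
# Illustrative sketch: pick a CPU profile from /proc/cpuinfo feature flags.
# Flag spellings vary by kernel version; adjust the strings as needed.
def pick_profile(flags):
    """flags: a set of feature strings, e.g. parsed from /proc/cpuinfo."""
    if {"amx_tile", "avx512_bf16"} <= flags:
        return "sapphirerapids"
    if {"avx512_vnni", "avx512vbmi"} <= flags:
        return "icelake"
    if "avx512f" in flags:
        return "skylakex"
    if "avx_vnni" in flags:
        return "alderlake"
    if {"avx2", "fma", "bmi2"} <= flags:
        return "haswell"
    if {"avx", "f16c"} <= flags:
        return "ivybridge"
    if "avx" in flags:
        return "sandybridge"
    if "sse4_2" in flags:
        return "sse42"
    return "basic"

def linux_cpu_flags(path="/proc/cpuinfo"):
    """Parse the first 'flags' line; returns an empty set off Linux."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

print(pick_profile(linux_cpu_flags()))
```

When in doubt, `haswell` (AVX2) is the safe choice on anything from 2013 onward, and `basic` always works.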
+
+ **Python tags:** `cp38`, `cp39`, `cp310`, `cp311`, `cp312`, `cp313`, `cp314`, `pp38`, `pp39`, `pp310`
+
+ **Platform:** `manylinux_2_31_x86_64` (glibc 2.31+, compatible with Ubuntu 20.04+, Debian 11+)
+
+ ### Windows / macOS / Linux ARM wheels (from abetlen)
+
+ ```
+ llama_cpp_python-{version}-{pytag}-{pytag}-{platform}.whl
+ ```
+
+ These are the official pre-built wheels from the upstream maintainer, covering versions `0.2.82` through `0.3.18+`.
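Both naming schemes above can be assembled mechanically. A minimal sketch (the function names and the `local` variable are illustrative, not part of this repo; the base URL is the one used in the install examples):

```python
# Sketch: build a wheel filename/URL from the naming schemes above.
BASE_URL = "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main"

def wheel_name(version, pytag, platform, backend=None, profile=None):
    """Build a wheel filename; backend/profile apply only to manylinux builds."""
    local = f"+{backend}_{profile}" if backend and profile else ""
    return f"llama_cpp_python-{version}{local}-{pytag}-{pytag}-{platform}.whl"

def wheel_url(*args, **kwargs):
    return f"{BASE_URL}/{wheel_name(*args, **kwargs)}"

print(wheel_name("0.3.18", "cp311", "manylinux_2_31_x86_64",
                 backend="openblas", profile="haswell"))
# llama_cpp_python-0.3.18+openblas_haswell-cp311-cp311-manylinux_2_31_x86_64.whl
```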
+
+ ---
+
+ ## 🔍 How to Find Your Wheel
+
+ 1. **Identify your Python version:** `python --version` → e.g. `3.11` → tag `cp311`
+ 2. **Identify your platform:**
+    - Linux x86\_64 → `manylinux_2_31_x86_64`
+    - Windows 64-bit → `win_amd64`
+    - macOS Apple Silicon → `macosx_11_0_arm64`
+    - macOS Intel → `macosx_10_9_x86_64`
+ 3. **Pick a backend** (manylinux only): `openblas` for most use cases
+ 4. **Pick a CPU profile** (manylinux only): `haswell` works on virtually all modern CPUs
+ 5. **Browse the files** in this repo or construct the filename directly
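Steps 1–2 above can be automated with the standard library. A minimal sketch, assuming the platform mapping listed in step 2 (the function names are illustrative and the mapping is simplified, not exhaustive):

```python
import platform
import sys

# Sketch: derive the Python tag and a likely platform tag for this machine.
def python_tag():
    impl = "pp" if platform.python_implementation() == "PyPy" else "cp"
    return f"{impl}{sys.version_info.major}{sys.version_info.minor}"

def platform_tag():
    system, machine = platform.system(), platform.machine().lower()
    if system == "Linux" and machine == "x86_64":
        return "manylinux_2_31_x86_64"
    if system == "Windows":
        return "win_amd64" if machine in ("amd64", "x86_64") else "win32"
    if system == "Darwin":
        return "macosx_11_0_arm64" if machine == "arm64" else "macosx_10_9_x86_64"
    return None  # fall back to browsing the repo's file list

print(python_tag(), platform_tag())
```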
+
+ ---
+
+ ## 🏗️ Sources & Credits
+
+ ### manylinux Wheels — Built by AIencoder
+ The 4,940 manylinux x86\_64 wheels were built by a distributed **4-worker HuggingFace Space factory** system (`AIencoder/wheel-factory-*`) — a custom-built automated pipeline covering every possible llama.cpp cmake option on manylinux:
+ - Every backend: OpenBLAS, MKL, Basic, Vulkan, CLBlast, OpenCL, RPC
+ - Every CPU hardware profile from baseline x86-64 up to Sapphire Rapids AMX
+ - Python 3.8 through 3.14
+ - llama-cpp-python versions 0.3.0 through 0.3.18+
+
+ ### Windows / macOS / Linux ARM Wheels — abetlen
+ The remaining 3,393 wheels (Windows, macOS, Linux aarch64/i686/riscv64, PyPy) were sourced from the official releases by **Andrei Betlen ([@abetlen](https://github.com/abetlen))**, the original author and maintainer of `llama-cpp-python`. These include:
+ - CPU wheels for all platforms via `https://abetlen.github.io/llama-cpp-python/whl/cpu/`
+ - Metal wheels for macOS GPU acceleration
+ - CUDA wheels (cu121–cu124) for Windows and Linux
+
+ > All credit for the underlying library goes to **Georgi Gerganov ([@ggerganov](https://github.com/ggerganov))** and the [llama.cpp](https://github.com/ggml-org/llama.cpp) team, and to **Andrei Betlen** for the Python bindings.
+
+ ---

## 📝 Notes

+ - All wheels are **MIT licensed** (same as llama-cpp-python upstream)
+ - manylinux wheels require **glibc 2.31+** (Ubuntu 20.04+, Debian 11+)
+ - `manylinux` and `linux_x86_64` are **not the same thing** — manylinux wheels have broad distro compatibility, plain linux wheels do not
+ - CUDA wheels require the matching CUDA toolkit to be installed
+ - Metal wheels require macOS 11.0+ and an Apple Silicon or AMD GPU
- This collection is updated periodically as new versions are released
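The glibc 2.31+ requirement noted above can be checked before installing anything. A small sketch using `platform.libc_ver()` from the standard library (the helper name is illustrative; the parsing also works on a version string taken from `ldd --version`):

```python
import platform

# Sketch: check whether a glibc version string satisfies the
# manylinux_2_31 requirement (glibc >= 2.31).
def glibc_ok(version, required=(2, 31)):
    parts = tuple(int(p) for p in version.split(".")[:2])
    return parts >= required

libc, version = platform.libc_ver()  # e.g. ("glibc", "2.35") on recent distros
if libc == "glibc":
    print("manylinux_2_31 compatible:", glibc_ok(version))
else:
    print("not glibc; use a platform-specific wheel instead")
```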