---
license: mit
task_categories:
  - text-generation
language:
  - en
tags:
  - code
  - llama-cpp
  - llama-cpp-python
  - wheels
  - pre-built
  - binary
  - linux
  - windows
  - macos
pretty_name: llama-cpp-python Pre-Built Wheels
size_categories:
  - 1K<n<10K
---

# 🏭 llama-cpp-python Mega-Factory Wheels

> *"Stop waiting for pip to compile. Just install and run."*

The most complete collection of pre-built llama-cpp-python wheels in existence: 8,333 wheels across every platform, Python version, backend, and CPU optimization level.

No more `cmake`, `gcc`, or compilation hell. No more waiting 10 minutes for a build that might fail. Just find your wheel and `pip install` it directly.


## 🚀 Why These Wheels?

Standard wheels target the "lowest common denominator" to avoid crashes on old hardware. This collection goes further: the manylinux wheels are built using a massive Everything Preset targeting specific CPU instruction sets, maximizing your tokens per second (T/s).

- **Zero dependencies:** no `cmake`, `gcc`, or `nvcc` required on your target machine.
- **Every platform:** Linux (manylinux, aarch64, i686, RISC-V), Windows (amd64, 32-bit), macOS (Intel + Apple Silicon).
- **Server-grade power:** optimized builds for Sapphire Rapids, Ice Lake, Alder Lake, Haswell, and more.
- **Full backend support:** OpenBLAS, MKL, Vulkan, CLBlast, OpenCL, RPC, and plain CPU builds.
- **Cutting edge:** Python 3.8 through experimental 3.14, plus PyPy pp38–pp310.
- **GPU too:** CUDA wheels (cu121–cu124) and macOS Metal wheels included.

## 📊 Collection Stats

| Platform | Wheels |
|---|---:|
| 🐧 Linux x86_64 (manylinux) | 4,940 |
| 🍎 macOS Intel (x86_64) | 1,040 |
| 🪟 Windows (amd64) | 1,010 |
| 🪟 Windows (32-bit) | 634 |
| 🍎 macOS Apple Silicon (arm64) | 289 |
| 🐧 Linux i686 | 214 |
| 🐧 Linux aarch64 | 120 |
| 🐧 Linux x86_64 (plain) | 81 |
| 🐧 Linux RISC-V | 5 |
| **Total** | **8,333** |

The manylinux builds alone cover 3,600+ combinations across versions, backends, Python versions, and CPU profiles.


## 🚀 How to Install

### Quick Install

Find your wheel filename (see naming convention below), then:

```bash
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/YOUR_WHEEL_NAME.whl"
```

### Common Examples

```bash
# Linux x86_64, Python 3.11, OpenBLAS, Haswell CPU (most common Linux setup)
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18+openblas_haswell-cp311-cp311-manylinux_2_31_x86_64.whl"

# Linux x86_64, Python 3.12, Basic CPU (maximum compatibility)
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18+basic_basic-cp312-cp312-manylinux_2_31_x86_64.whl"

# Windows, Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-win_amd64.whl"

# macOS Apple Silicon, Python 3.12
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp312-cp312-macosx_11_0_arm64.whl"

# macOS Intel, Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-macosx_10_9_x86_64.whl"

# Linux ARM64 (Raspberry Pi, AWS Graviton), Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-linux_aarch64.whl"
```

πŸ“ Wheel Naming Convention

### manylinux wheels (custom-built)

```
llama_cpp_python-{version}+{backend}_{profile}-{pytag}-{pytag}-{platform}.whl
```

**Versions covered:** 0.3.0 through 0.3.18+

**Backends:**

| Backend | Description |
|---|---|
| `openblas` | OpenBLAS acceleration: best general-purpose CPU performance |
| `mkl` | Intel MKL acceleration: best on Intel CPUs |
| `basic` | No BLAS, maximum compatibility |
| `vulkan` | Vulkan GPU backend |
| `clblast` | CLBlast OpenCL GPU backend |
| `opencl` | Generic OpenCL GPU backend |
| `rpc` | Distributed inference over the network |

**CPU Profiles:**

| Profile | Instruction Sets | Era | Notes |
|---|---|---|---|
| `basic` | x86-64 baseline | Any | Maximum compatibility |
| `sse42` | SSE 4.2 | 2008+ | Nehalem |
| `sandybridge` | AVX | 2011+ | |
| `ivybridge` | AVX + F16C | 2012+ | |
| `haswell` | AVX2 + FMA + BMI2 | 2013+ | Most common |
| `skylakex` | AVX-512 | 2017+ | |
| `icelake` | AVX-512 + VNNI + VBMI | 2019+ | |
| `alderlake` | AVX-VNNI | 2021+ | |
| `sapphirerapids` | AVX-512 BF16 + AMX | 2023+ | Highest performance |
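On Linux you can pick a profile programmatically by matching the flags reported in `/proc/cpuinfo` against the table above. A minimal sketch: the profile names come from this table, but the required-flag sets are an illustrative approximation of each build's assumptions, not the factory's exact cmake flags.

```python
# Hypothetical helper: map /proc/cpuinfo flags to the closest CPU profile.
# Profile names match the table above; required-flag sets are approximate.
PROFILES = [
    ("sapphirerapids", {"avx512_bf16", "amx_tile"}),
    ("alderlake", {"avx_vnni"}),
    ("icelake", {"avx512_vnni", "avx512_vbmi"}),
    ("skylakex", {"avx512f"}),
    ("haswell", {"avx2", "fma", "bmi2"}),
    ("ivybridge", {"avx", "f16c"}),
    ("sandybridge", {"avx"}),
    ("sse42", {"sse4_2"}),
    ("basic", set()),  # x86-64 baseline: always matches
]

def pick_profile(cpu_flags):
    """Return the most advanced profile whose required flags are all present."""
    flags = set(cpu_flags)
    for name, required in PROFILES:
        if required <= flags:  # subset test: all required flags present
            return name
    return "basic"

def linux_cpu_flags(path="/proc/cpuinfo"):
    """Collect the flag set of the first CPU entry (Linux only)."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()
```

On a Linux box, `pick_profile(linux_cpu_flags())` maps the running machine to a profile; when in doubt, `haswell` remains the safe default per the table.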

**Python tags:** `cp38`, `cp39`, `cp310`, `cp311`, `cp312`, `cp313`, `cp314`, `pp38`, `pp39`, `pp310`
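The tag for the running interpreter can be derived directly from `sys.version_info`, for example:

```python
import sys

# CPython wheel tag for the running interpreter, e.g. Python 3.11 -> "cp311".
# (PyPy builds use the "pp" tags instead.)
py_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(py_tag)
```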

**Platform:** `manylinux_2_31_x86_64` (glibc 2.31+, compatible with Ubuntu 20.04+, Debian 11+)
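Whether a system meets the glibc floor can be checked from Python with `platform.libc_ver()`. A sketch; note that versions must compare as number tuples, since as strings `"2.31" < "2.4"`:

```python
import platform

def parse_version(v):
    """Turn '2.31' into (2, 31) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def glibc_at_least(required="2.31"):
    """True if the running system reports glibc >= `required`.

    On non-glibc systems (Windows, macOS, musl-based Linux),
    libc_ver() reports no glibc version and this returns False.
    """
    lib, version = platform.libc_ver()
    return lib == "glibc" and bool(version) and \
        parse_version(version) >= parse_version(required)
```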

### Windows / macOS / Linux ARM wheels (from abetlen)

```
llama_cpp_python-{version}-{pytag}-{pytag}-{platform}.whl
```

These are the official pre-built wheels from the upstream maintainer, covering versions 0.2.82 through 0.3.18+.


πŸ” How to Find Your Wheel

1. **Identify your Python version:** `python --version`, e.g. 3.11 → tag `cp311`
2. **Identify your platform:**
   - Linux x86_64 → `manylinux_2_31_x86_64`
   - Windows 64-bit → `win_amd64`
   - macOS Apple Silicon → `macosx_11_0_arm64`
   - macOS Intel → `macosx_10_9_x86_64`
3. **Pick a backend** (manylinux only): `openblas` for most use cases
4. **Pick a CPU profile** (manylinux only): `haswell` works on virtually all modern CPUs
5. Browse the files in this repo or construct the filename directly
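The steps above can be sketched as a small helper that assembles the manylinux filename and download URL; the base URL and naming pattern are taken from this README's own examples:

```python
# Sketch: build a manylinux wheel URL from the choices in steps 1-4.
BASE = "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main"

def manylinux_wheel_url(version, backend, profile, py_tag,
                        platform="manylinux_2_31_x86_64"):
    """Assemble {version}+{backend}_{profile}-{pytag}-{pytag}-{platform}.whl."""
    name = (f"llama_cpp_python-{version}+{backend}_{profile}"
            f"-{py_tag}-{py_tag}-{platform}.whl")
    return f"{BASE}/{name}"

print(manylinux_wheel_url("0.3.18", "openblas", "haswell", "cp311"))
```

The printed URL matches the first common example above and can be passed straight to `pip install`.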

πŸ—οΈ Sources & Credits

### manylinux Wheels: Built by AIencoder

The 4,940 manylinux x86_64 wheels were built by a distributed 4-worker Hugging Face Space factory system (`AIencoder/wheel-factory-*`), a custom-built automated pipeline covering every possible llama.cpp cmake option on manylinux:

- Every backend: OpenBLAS, MKL, Basic, Vulkan, CLBlast, OpenCL, RPC
- Every CPU hardware profile, from baseline x86-64 up to Sapphire Rapids AMX
- Python 3.8 through 3.14
- llama-cpp-python versions 0.3.0 through 0.3.18+

### Windows / macOS / Linux ARM Wheels: abetlen

The remaining 3,393 wheels (Windows, macOS, Linux aarch64/i686/riscv64, PyPy) were sourced from the official releases by Andrei Betlen (@abetlen), the original author and maintainer of llama-cpp-python. These include:

- CPU wheels for all platforms via https://abetlen.github.io/llama-cpp-python/whl/cpu/
- Metal wheels for macOS GPU acceleration
- CUDA wheels (cu121–cu124) for Windows and Linux

All credit for the underlying library goes to Georgi Gerganov (@ggerganov) and the llama.cpp team, and to Andrei Betlen for the Python bindings.


πŸ“ Notes

- All wheels are MIT licensed (same as llama-cpp-python upstream).
- manylinux wheels require glibc 2.31+ (Ubuntu 20.04+, Debian 11+).
- manylinux and plain `linux_x86_64` are not the same thing: manylinux wheels have broad distro compatibility, plain linux wheels do not.
- CUDA wheels require the matching CUDA toolkit to be installed.
- Metal wheels require macOS 11.0+ and an Apple Silicon or AMD GPU.
- This collection is updated periodically as new versions are released.