This Python script automates downloading and setting up the best available `llama.cpp` binary for your system and graphics card (if present). It fetches the latest release from GitHub, detects your system's specifications, and selects the most suitable binary for your setup.
- Automatic Detection: Detects your operating system, architecture, and GPU (NVIDIA or AMD).
- CUDA Compatibility: Checks for CUDA and driver versions to ensure compatibility.
- AVX Support: Checks for AVX, AVX2, and AVX512 support on your CPU (see the sketch after this list).
- Download and Extraction: Downloads the appropriate binary and extracts it.
- Verification: Runs the binary with `--version` to verify the setup.
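For illustration, the AVX check can be approximated with the `py-cpuinfo` package. This is a minimal sketch, not necessarily the script's exact logic:

```python
# Sketch: report the highest AVX level the CPU advertises.
# Assumes the py-cpuinfo package (imported as cpuinfo).
import cpuinfo

flags = set(cpuinfo.get_cpu_info().get("flags", []))
# "avx512f" is the AVX-512 foundation flag.
level = next((f for f in ("avx512f", "avx2", "avx") if f in flags), None)
print(f"Best supported AVX level: {level or 'none'}")
```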
Requirements:
- Python 3.x
- `requests` library
- `cpuinfo` library
- `zipfile` and `tarfile` modules
- `subprocess` and `platform` modules
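Of these, only the first two are third-party packages; `zipfile`, `tarfile`, `subprocess`, and `platform` ship with Python. A `requirements.txt` along these lines should therefore suffice (versions unpinned; check the repository's own file):

```
requests
py-cpuinfo
```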
POSIX (Linux, macOS, etc.):
```
% python -m venv .venv
% source ./.venv/bin/activate
(.venv) % pip install -r requirements.txt
(.venv) % ./latest-llama-cpp.py
```
Windows:
```
> python -m venv .venv
> .\.venv\Scripts\Activate.ps1
> pip install -r requirements.txt
> python latest-llama-cpp.py
```
- Fetch Latest Release: The script fetches the latest release information from the `llama.cpp` GitHub repository (see the sketch after this list).
- System Information: It detects your operating system and architecture.
- GPU Detection: Checks for NVIDIA or AMD GPUs and their respective CUDA and driver versions.
- AVX Support: Checks if your CPU supports AVX, AVX2, or AVX512.
- Select Best Asset: Based on the detected information, it selects the most suitable binary asset.
- Download and Extract: Downloads the selected binary and extracts it to the specified directory.
- Run Verification: Runs the binary with `--version` to ensure it was set up correctly.
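Put together, the flow looks roughly like the following sketch. The GitHub API endpoint is real, but the asset-name fragments, output directory, and binary name are illustrative assumptions, not the script's actual values:

```python
# Sketch of the fetch -> select -> download -> extract -> verify flow.
import platform
import subprocess
import tarfile
import zipfile

import requests

API = "https://api.github.com/repos/ggerganov/llama.cpp/releases/latest"
release = requests.get(API, timeout=30).json()

# Map the detected OS to a fragment that appears in release asset names
# (the fragments here are assumptions).
os_tag = {"Linux": "ubuntu", "Darwin": "macos", "Windows": "win"}[platform.system()]

# Pick the first asset whose file name mentions the OS fragment.
asset = next(a for a in release["assets"] if os_tag in a["name"].lower())

# Download the archive to disk.
archive = asset["name"]
with open(archive, "wb") as f:
    f.write(requests.get(asset["browser_download_url"], timeout=300).content)

# Extract with zipfile or tarfile, depending on the archive type.
if archive.endswith(".zip"):
    with zipfile.ZipFile(archive) as z:
        z.extractall("llama.cpp-bin")
else:
    with tarfile.open(archive) as t:
        t.extractall("llama.cpp-bin")

# Verify the setup by running the binary with --version
# (the binary's path and name here are hypothetical).
subprocess.run(["llama.cpp-bin/llama-cli", "--version"], check=True)
```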
A Containerfile has been included for limited testing on Linux:
```
% podman build -t latest-llama-cpp .
% podman run -it latest-llama-cpp
```
- Ensure you have the necessary permissions to run the `nvidia-smi` and `lspci` commands; a detection sketch follows below.
- The script assumes a standard directory structure for the downloaded and extracted files.
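For reference, GPU detection along these lines typically shells out to those commands. A minimal sketch, where the parsing details are assumptions rather than the script's exact behavior:

```python
# Sketch: detect an NVIDIA GPU (with driver version) or an AMD GPU.
import shutil
import subprocess

gpu = None
if shutil.which("nvidia-smi"):
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    if out.returncode == 0 and out.stdout.strip():
        gpu = f"NVIDIA (driver {out.stdout.strip()})"
elif shutil.which("lspci"):
    out = subprocess.run(["lspci"], capture_output=True, text=True)
    if "AMD" in out.stdout or "Radeon" in out.stdout:
        gpu = "AMD"

print(f"Detected GPU: {gpu or 'none'}")
```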
This project is licensed under the MIT License.