v0.0.8.4
What's New ✨
- Added support for Qwen2.5, Qwen2.5-Coder, and Qwen2.5-Math
Install Nexa SDK 🛠️
CPU Installation
To install the CPU version of Nexa SDK, run:
pip install nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
GPU Installation (Metal - macOS)
For the GPU version supporting Metal (macOS), run:
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
For detailed instructions on installing Nexa SDK with CUDA or AMD GPU support, please refer to the Installation section in the main README.
To update an existing installation to v0.0.8.4, use the same installation command with a -U flag added to pip install.
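For example, to update the CPU version (the Metal command follows the same pattern):
pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir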
Run Qwen2.5 with Nexa SDK
Option 1: Run official GGUF files from Qwen HuggingFace Page 🤗
You can pull and run language models in GGUF format from 🤗 HuggingFace with the command nexa run -hf <hf model id>. Choose one of the following commands based on your preferred model size:
Qwen2.5 0.5B:
nexa run -hf Qwen/Qwen2.5-0.5B-Instruct-GGUF
Qwen2.5 1.5B:
nexa run -hf Qwen/Qwen2.5-1.5B-Instruct-GGUF
Qwen2.5 3B:
nexa run -hf Qwen/Qwen2.5-3B-Instruct-GGUF
Qwen2.5 7B:
nexa run -hf Qwen/Qwen2.5-7B-Instruct-GGUF
Qwen2.5 14B:
nexa run -hf Qwen/Qwen2.5-14B-Instruct-GGUF
The CLI will prompt you to select one file from the available quantization options; enter the number of your choice. If you're unsure which to pick, try "q4_0.gguf".
You will then have Qwen2.5 running locally on your computer.
Note: There are no official GGUF files for Qwen2.5-Coder or Qwen2.5-Math. Please use Option 2 for these models.
Option 2: Pull and Run Qwen2.5, Qwen2.5-Coder, and Qwen2.5-Math from Nexa Model Hub 🐙
We have converted and uploaded the following models to the Nexa Model Hub:
| Model | Nexa Run Command |
|---|---|
| Qwen2.5 0.5B | nexa run Qwen2.5-0.5B-Instruct:q4_0 |
| Qwen2.5 1.5B | nexa run Qwen2.5-1.5B-Instruct:q4_0 |
| Qwen2.5 3B | nexa run Qwen2.5-3B-Instruct:q4_0 |
| Qwen2.5-Coder 1.5B | nexa run Qwen2.5-Coder-1.5B-Instruct:q4_0 |
| Qwen2.5-Math 1.5B | nexa run Qwen2.5-Math-1.5B-Instruct:q4_0 |
Visit the model pages on the Nexa Model Hub to choose your preferred parameter size and quantization. We will continue to upload and support more models from the Qwen2.5 family.
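For example, if a model page lists a q8_0 build (an assumption here; check the tags actually available on the page), you can run it by swapping the quantization tag in the command:
nexa run Qwen2.5-1.5B-Instruct:q8_0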
Please feel free to share your feedback and feature/model requests on the issue page.