
Releases: NexaAI/nexa-sdk

v0.0.8.6-rocm621

02 Oct 17:00

What's New ✨

Improvements 🔧

  • Added a prebuilt AMD ROCm wheel

Fixes 🐞

  • Fixed progress bar not showing during image generation

Upgrade Guide 📝

To upgrade the Nexa SDK, use the command for your system:

CPU

pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir

GPU (Metal)

For the GPU version supporting Metal (macOS):

CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir

GPU (CUDA)

For Linux:

CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir

For Windows PowerShell:

$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir

For Windows Command Prompt:

set "CMAKE_ARGS=-DGGML_CUDA=ON -DSD_CUBLAS=ON" && pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir

For Windows Git Bash:

CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
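The four CUDA variants differ only in how `CMAKE_ARGS` reaches the `pip` invocation. On POSIX shells (Linux, macOS, Git Bash), the inline `VAR=value cmd` form scopes the variable to that single command, so it does not linger in your session — a quick illustration:

```shell
# Inline assignment: the variable is visible only to the one command it prefixes.
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" sh -c 'echo "inside: $CMAKE_ARGS"'

# Back in the calling shell, it was never set:
echo "after: ${CMAKE_ARGS:-unset}"
```

In PowerShell and Command Prompt, by contrast, `$env:CMAKE_ARGS` and `set` persist for the rest of the session.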

GPU (ROCm)

For Linux:

CMAKE_ARGS="-DGGML_HIPBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/rocm621 --extra-index-url https://pypi.org/simple --no-cache-dir

For detailed installation instructions, please refer to the Installation section in the README.
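After upgrading, a quick way to confirm the package resolves in the active environment — a minimal sketch (it assumes only that the wheel installs an importable `nexaai` package, matching the pip commands above):

```python
import importlib.util


def is_installed(package: str) -> bool:
    """Return True if `package` resolves to an importable module."""
    return importlib.util.find_spec(package) is not None


if __name__ == "__main__":
    # "nexaai" is the import name assumed from the pip package above.
    print("nexaai installed:", is_installed("nexaai"))
```

Alternatively, `pip show nexaai` reports the installed version directly.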

Full Changelog - v0.0.8.5...v0.0.8.6

v0.0.8.6-metal

02 Oct 08:11

(Release notes identical to v0.0.8.6-rocm621 above.)

v0.0.8.6-cu124

02 Oct 09:21

(Release notes identical to v0.0.8.6-rocm621 above.)

v0.0.8.6

02 Oct 09:30

(Release notes identical to v0.0.8.6-rocm621 above.)

v0.0.8.5-metal

25 Sep 21:12

What's New ✨

Added support for Llama3.2 models:

| Model | Command to Run |
| --- | --- |
| Llama3.2 3B | `nexa run llama3.2` |
| Llama3.2 1B | `nexa run Llama3.2-1B-Instruct:q4_0` |

Update Nexa SDK 🛠️

CPU Version

To update the CPU version of Nexa SDK, run:

pip install nexaai -U --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir

GPU Version (Metal - macOS)

For the GPU version supporting Metal (macOS), run:

CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install nexaai -U --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir

Other GPU Support

For detailed installation instructions of Nexa SDK for CUDA and AMD GPU support, please refer to the Installation section in the main README.

Note: The commands above already include the `-U` flag, which tells pip to upgrade an existing installation to v0.0.8.5 in place.

v0.0.8.5-cu124

25 Sep 22:56

(Release notes identical to v0.0.8.5-metal above.)

v0.0.8.5

25 Sep 22:28

(Release notes identical to v0.0.8.5-metal above.)

main-cu124

24 Sep 01:23
3790259
Merge pull request #110 from NexaAI/yhqiu-develop

Local File Organization

v0.0.8.4-metal

18 Sep 22:07
05e0538

What's New ✨

  • Added support for Qwen2.5, Qwen2.5-Coder, and Qwen2.5-Math

Install Nexa SDK 🛠️

CPU Installation

To install the CPU version of Nexa SDK, run:

pip install nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir

GPU Installation (Metal - macOS)

For the GPU version supporting Metal (macOS), run:

CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir

For detailed installation instructions of Nexa SDK for CUDA and AMD GPU support, please refer to the Installation section in the main README.

To update your current SDK version to v0.0.8.4, use the same command as the installation but add a -U flag to the pip install command.

Run Qwen2.5 with Nexa SDK

Option 1: Run official GGUF files from Qwen HuggingFace Page 🤗

You can use the following command to pull and run language models in GGUF format from 🤗 HuggingFace: `nexa run -hf <hf model id>`. Choose one of these commands based on your preferred model size:

Qwen2.5 0.5B:

nexa run -hf Qwen/Qwen2.5-0.5B-Instruct-GGUF

Qwen2.5 1.5B:

nexa run -hf Qwen/Qwen2.5-1.5B-Instruct-GGUF

Qwen2.5 3B:

nexa run -hf Qwen/Qwen2.5-3B-Instruct-GGUF

Qwen2.5 7B:

nexa run -hf Qwen/Qwen2.5-7B-Instruct-GGUF

Qwen2.5 14B:

nexa run -hf Qwen/Qwen2.5-14B-Instruct-GGUF

The command line will prompt you to select one file from different quantization options. Use the number to indicate your choice. If you're unsure which one to choose, try "q4_0.gguf".

You will then have Qwen2.5 running locally on your computer.

Note: There are no official GGUF files available for Qwen2.5-Coder and Qwen2.5-Math. Please use Option 2 for these models.

Option 2: Pull and Run Qwen2.5, Qwen2.5-Coder, and Qwen2.5-Math from Nexa Model Hub 🐙

We have converted and uploaded the following models to the Nexa Model Hub:

| Model | Nexa Run Command |
| --- | --- |
| Qwen2.5 0.5B | `nexa run Qwen2.5-0.5B-Instruct:q4_0` |
| Qwen2.5 1.5B | `nexa run Qwen2.5-1.5B-Instruct:q4_0` |
| Qwen2.5 3B | `nexa run Qwen2.5-3B-Instruct:q4_0` |
| Qwen2.5-Coder 1.5B | `nexa run Qwen2.5-Coder-1.5B-Instruct:q4_0` |
| Qwen2.5-Math 1.5B | `nexa run Qwen2.5-Math-1.5B-Instruct:q4_0` |

Visit the model pages to choose your preferred parameter count and quantization. We will continue to upload and support more models in the Qwen2.5 family.

Please feel free to share your feedback and feature/model requests on the issue page.

v0.0.8.4-cu124

18 Sep 23:48
3c175c9

(Release notes identical to v0.0.8.4-metal above.)