
add support for Mistral using TGI / vllm / candle #225

Open
pabl-o-ce opened this issue Oct 21, 2023 · 4 comments
Assignees
Labels: enhancement (New feature or request), good first issue (Good for newcomers), help wanted (Extra attention is needed)

Comments

@pabl-o-ce

Hi guys, I love your project.

I was wondering if you could add support for Mistral via TGI / vLLM / candle, for use as endpoints. They also have active support for new LLM architectures such as Mistral.
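As a rough sketch of what the endpoint side might involve: TGI serves generation over a `POST /generate` route that takes a JSON body of the form `{"inputs": ..., "parameters": {...}}` (vLLM instead exposes an OpenAI-compatible `/v1/completions` route). The snippet below only builds such a request body with the standard library; the prompt and parameter values are placeholders, and a real integration would send it with an HTTP client such as `reqwest`:

```rust
// Build a JSON request body for TGI's POST /generate route.
// TGI expects {"inputs": "...", "parameters": {...}}; values here are
// placeholders, and a real driver would use serde_json instead of format!.
fn tgi_generate_body(prompt: &str, max_new_tokens: u32) -> String {
    // Minimal manual JSON escaping for backslashes and quotes in the prompt.
    let escaped = prompt.replace('\\', "\\\\").replace('"', "\\\"");
    format!(
        "{{\"inputs\":\"{}\",\"parameters\":{{\"max_new_tokens\":{}}}}}",
        escaped, max_new_tokens
    )
}

fn main() {
    let body = tgi_generate_body("Hello, Mistral!", 64);
    println!("{}", body);
}
```

Because both servers speak plain HTTP + JSON, an llm-chain driver for them would mostly be request/response plumbing rather than model code.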

@williamhogman williamhogman added enhancement New feature or request good first issue Good for newcomers help wanted Extra attention is needed labels Oct 25, 2023
@williamhogman
Contributor

Hey, sounds like a very good idea :)

If anyone wants to add this it would be a most welcome contribution.

@andychenbruce
Contributor

Llama+Mistral+Zephyr and GPU acceleration in only ~450 lines using candle.
https://github.com/huggingface/candle/blob/main/candle-examples/examples/quantized/main.rs

If Mistral support is added with candle it could be fairly trivial to also support Llama and Zephyr.

@01PrathamS

I have some experience with Rust, although my familiarity with LLMs is somewhat limited. I can take on this challenge, as it would mark my first contribution to llm-chain.

@williamhogman
Contributor

Sounds like a great idea :)

Projects
None yet
Development

No branches or pull requests

4 participants