
Support for guided decoding for vllm backend #7897

Open
Inkorak opened this issue Dec 20, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

Inkorak commented Dec 20, 2024

Is your feature request related to a problem? Please describe.
vLLM supports structured output via guided decoding, but the Triton vLLM backend does not expose this capability.

Describe the solution you'd like
Add guided decoding support to the vllm backend so that it can be used through the generate endpoint.

Describe alternatives you've considered
We could write our own backend modification, but it would be nice if it worked out of the box.

@tanmayv25 tanmayv25 added the enhancement New feature or request label Jan 24, 2025
tanmayv25 (Contributor) commented:

Working link for guided decoding: https://docs.vllm.ai/en/latest/features/structured_outputs.html
We need to investigate what needs to be updated in the vllm backend to support this feature.
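As a rough sketch of what "support through the generate endpoint" could look like: vLLM's own APIs accept guided-decoding constraints (e.g. a JSON schema or regex) alongside the usual sampling parameters, so one plausible shape is extra fields in the generate-endpoint request JSON. The payload below is hypothetical; the `guided_json` field name is an assumption borrowed from vLLM's OpenAI-compatible server, not an existing Triton vllm backend parameter.

```python
import json

# A JSON schema the model's output would be constrained to match.
answer_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["city", "population"],
}

# Hypothetical generate-endpoint request body. "text_input" and "parameters"
# follow Triton's generate extension; "guided_json" is an assumed field name
# for passing the guided-decoding constraint through to vLLM.
payload = {
    "text_input": "Give the largest city in France as JSON.",
    "parameters": {
        "max_tokens": 64,
        "temperature": 0.0,
        "guided_json": json.dumps(answer_schema),
    },
}

print(json.dumps(payload, indent=2))
```

The point of the sketch is that the backend would only need to recognize the extra parameter and forward it into vLLM's sampling configuration; the constraint enforcement itself already lives in vLLM.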
