Great job with the current routing setup!
I’m wondering if there’s a possibility to expand the routing capabilities to include multiple LiteLLM models. Currently, it seems we can only route between one strong and one weak model.
Here are a few examples of why this would be beneficial:
Microsoft-PHI: Useful for enterprise tasks and Microsoft integrations.
Google-Gemma: Great for tasks that involve Google’s ecosystem.
Meta-Llama3: Ideal for open-source and research-based queries.
Azure-GPT: Well suited to cloud projects, troubleshooting services, or offering guidance on optimizing cloud infrastructure.
Currently, we rely on a single strong model like OpenAI GPT for high-quality responses to complex queries.
Supporting these additional models would help optimize costs and improve how we handle different data types and queries. Is there a way to integrate this with RouteLLM? Any advice or guidance would be greatly appreciated, and it would be great to know when such support might be published.
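As a rough illustration of what I have in mind (this is not RouteLLM's API, and all model names and keyword rules below are hypothetical placeholders), one could imagine a thin dispatch layer that sends complex queries to the strong model and otherwise picks a specialized weak model:

```python
# Sketch of multi-model dispatch layered in front of a strong/weak router.
# Model identifiers and keyword lists are illustrative assumptions only.

CANDIDATES = {
    "microsoft/phi-3": ["enterprise", "microsoft", "office"],
    "google/gemma-7b": ["google", "gemini", "workspace"],
    "meta/llama-3-8b": ["open-source", "research", "paper"],
}
STRONG_MODEL = "openai/gpt-4"    # fallback for complex queries
WEAK_DEFAULT = "meta/llama-3-8b" # fallback when no keywords match

def route(query: str, is_complex: bool) -> str:
    """Pick a model: complex queries go to the strong model;
    otherwise match the query against per-model keyword lists."""
    if is_complex:
        return STRONG_MODEL
    q = query.lower()
    for model, keywords in CANDIDATES.items():
        if any(k in q for k in keywords):
            return model
    return WEAK_DEFAULT
```

In a real integration, the `is_complex` signal would presumably come from RouteLLM's existing router score rather than a flag, and the keyword matching would be replaced by a learned classifier; this sketch only shows the shape of the dispatch.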
Hi there, thank you for your interest! The multi-model routing problem is definitely interesting, and more research is required to support this use case. We would be excited to support any efforts or contributions in this space.