# v1.61.1.dev1

3 commits to 3f0438a59d0c594848c4063730a42bb712d9cee6 since this release
## What's Changed
- Improved wildcard route handling on `/models` and `/model_group/info` by @krrishdholakia in #8473
- (Bug fix) - Using `include_usage` for /completions requests + unit testing by @ishaan-jaff in #8484
- add sonar pricings by @themrzmaster in #8476
- (bug fix) `PerplexityChatConfig` - track correct OpenAI compatible params by @ishaan-jaff in #8496
- (fix #2) don't block proxy startup if license check fails & using prometheus by @ishaan-jaff in #8492
- ci(config.yml): mark daily docker builds with `-nightly` by @krrishdholakia in #8499
- (Redis Cluster) - Fixes for using redis cluster + pipeline by @ishaan-jaff in #8442
- Litellm UI stable version 02 12 2025 by @krrishdholakia in #8497
- fix: fix test by @krrishdholakia in #8501
**Full Changelog**: v1.61.1...v1.61.1.dev1
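The `include_usage` fix above concerns OpenAI-compatible streaming `/completions` requests. A minimal sketch of what such a request body looks like, assuming the standard OpenAI `stream_options` field and a proxy listening on `localhost:4000` (the helper name and model are illustrative, not part of LiteLLM's API):

```python
import json

def build_completions_payload(model: str, prompt: str) -> dict:
    """Hypothetical helper: build a streaming /completions request body
    that asks the server to report token usage on the final chunk."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": True,
        # With include_usage set, the last streamed chunk carries
        # prompt/completion token counts instead of them being omitted.
        "stream_options": {"include_usage": True},
    }

if __name__ == "__main__":
    payload = build_completions_payload("gpt-3.5-turbo", "Say hello")
    print(json.dumps(payload, indent=2))
    # To send it against a locally running proxy (assumed at :4000):
    #   curl http://localhost:4000/completions \
    #     -H "Content-Type: application/json" -d "$(cat payload.json)"
```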
## Docker Run LiteLLM Proxy
```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.1.dev1
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 180.0 | 213.08 | 6.30 | 6.30 | 1884 | 1884 | 146.15 | 4776.91 |
| Aggregated | Failed ❌ | 180.0 | 213.08 | 6.30 | 6.30 | 1884 | 1884 | 146.15 | 4776.91 |