Commit 5851806
Include x-coding-assistant=aider header in litellm calls
Include x-coding-assistant=aider header in litellm calls
Proxies that inspect traffic between the development environment and an
LLM may want to know whether the caller is aider or another tool, so
they can decide whether to inspect and/or modify the payload.

The most common way to solve this would be to add a `user-agent` header.
However, litellm, which aider uses, calls into the OpenAI libraries
directly when making the request, and the only way to override the
user-agent there seems to be passing a custom `http_client`. That seemed
like something that might have unforeseen consequences (timeouts?
retries?). For other LLMs, litellm seems to use its own httpx wrapper,
which would presumably be easier to customize, but I have not tried.
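
For illustration, a rough sketch (not part of this change) of what the
rejected `http_client` route would look like with the OpenAI client; the
model name and header value are placeholders:

    import httpx
    from openai import OpenAI

    # Handing the client a custom httpx.Client sets the user-agent, but it
    # also replaces the transport the library would otherwise configure for
    # itself; how that interacts with its timeout and retry handling is
    # exactly the open question mentioned above.
    client = OpenAI(http_client=httpx.Client(headers={"user-agent": "aider"}))
    client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "hello"}],
    )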

To make things easier, let's just add an aider-specific header. I put the
string aider followed by the version there, but the value - and indeed the
key - of the header are not that important; what matters is simply being
able to tell apart aider's calls.
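
As a sketch of the consumer side, a hypothetical mitmproxy addon (not
part of this change) that picks out aider traffic by the new header:

    from mitmproxy import http

    class TagAider:
        def request(self, flow: http.HTTPFlow) -> None:
            # The header value looks like "aider-<version>"; only the
            # presence/prefix matters for telling apart aider's calls.
            if flow.request.headers.get("x-coding-assistant", "").startswith("aider"):
                # Inspect and/or modify the payload here.
                flow.request.headers["x-proxy-inspected"] = "true"  # hypothetical marker

    addons = [TagAider()]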
jhrozek committed Feb 6, 2025
1 parent 2265456 commit 5851806
Showing 3 changed files with 21 additions and 0 deletions.
aider/coders/base_coder.py: 1 addition & 0 deletions
@@ -1219,6 +1219,7 @@ def warm_cache_worker():
 
             kwargs = dict(self.main_model.extra_params) or dict()
             kwargs["max_tokens"] = 1
+            kwargs["headers"] = {"x-coding-assistant": f"aider-{__version__}"}
 
             try:
                 completion = litellm.completion(
aider/models.py: 2 additions & 0 deletions
@@ -16,6 +16,7 @@
 import yaml
 from PIL import Image
 
+from aider._version import __version__
 from aider.dump import dump  # noqa: F401
 from aider.llm import litellm
 from aider.sendchat import ensure_alternating_roles, sanity_check_messages
@@ -570,6 +571,7 @@ def send_completion(self, messages, functions, stream, temperature=None):
             model=self.name,
             messages=messages,
             stream=stream,
+            headers={"x-coding-assistant": f"aider-{__version__}"},
         )
 
         if self.use_temperature is not False:
tests/basic/test_sendchat.py: 18 additions & 0 deletions
@@ -67,6 +67,24 @@ def test_send_completion_with_functions(self, mock_completion):
         assert "tools" in called_kwargs
         assert called_kwargs["tools"][0]["function"] == mock_function
 
+    @patch("litellm.completion")
+    def test_send_completion_aider_specific_header(self, mock_completion):
+        # Return a stub response; this test only inspects the request kwargs
+        mock_completion.return_value = MagicMock()
+        mock_completion.return_value.choices = None
+
+        _ = Model(self.mock_model).send_completion(
+            self.mock_messages,
+            functions=None,
+            stream=False,
+        )
+
+        # Verify the aider-specific header was sent
+        called_kwargs = mock_completion.call_args.kwargs
+        assert "headers" in called_kwargs
+        assert "x-coding-assistant" in called_kwargs["headers"]
+        assert "aider" in called_kwargs["headers"]["x-coding-assistant"]
+
     @patch("litellm.completion")
     def test_simple_send_attribute_error(self, mock_completion):
         # Setup mock to raise AttributeError
