Langfuse Integration #30

Open
norton120 opened this issue Jan 20, 2025 · 4 comments

@norton120
It could be really useful to bake langfuse support into the decorator. Since the mechanics of the generation are abstracted, instrumentation is very difficult without either blindly decorating with langfuse's @observe (which leaves a lot to be desired) or creating shell decorators, which is probably how that functionality would be implemented under the hood anyway, I'm guessing. Maybe something like this?

```python
@llm(model="gpt-4o", langfuse_args={"user": current_user.id})
def write_an_email(purpose: str) -> EmailBody:
    """Write a formal business email for the purpose of {purpose}."""
```

This would create a langfuse trace like:

    { "name": "write_an_email",
      "user": 12345,
      "model": "gpt-4o-2024-08-06" // ... etc
@norton120
Copy link
Author

I stand corrected: litellm already has hooks for langfuse baked in (https://docs.litellm.ai/docs/observability/callbacks), so we just need to set the litellm params. It looks like user/name tagging may need to be set manually, though, which is a bummer. I'll play with it and see.
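
For anyone following along, the wiring is minimal (a sketch based on litellm's langfuse callback docs; the `metadata` keys are the documented way to do the manual user/name tagging):

```python
import os

import litellm

# credentials read by litellm's built-in langfuse callback
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-..."

# log every successful completion to langfuse
litellm.success_callback = ["langfuse"]

response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a formal business email."}],
    # the manual user/name tagging mentioned above
    metadata={
        "generation_name": "write_an_email",
        "trace_user_id": "12345",
    },
)
```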

@knowsuchagency
Owner

That would be really interesting to see. I've had it on my todo list to try spinning up langfuse on my dokploy instance, too.

@marcklingen

I'm one of the langfuse maintainers.

I think what @norton120 was initially aiming at here was tracing from the application itself rather than from the proxy level. This is generally preferred, since you then capture actual application-level timestamps and can include non-LLM calls (e.g. retrieval/tool methods) in the LLM application traces and evaluations. Example here: https://langfuse.com/docs/integrations/litellm/example-proxy-python
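
A minimal sketch of that application-level pattern with the (v2) Python SDK decorator; the function names here are just placeholders:

```python
from langfuse.decorators import langfuse_context, observe


@observe()  # child span: non-LLM work lands in the same trace
def retrieve_documents(query: str) -> list[str]:
    return ["..."]  # vector search, tool calls, etc.


@observe(as_type="generation")  # logged as a generation rather than a plain span
def generate_answer(query: str, context: list[str]) -> str:
    return "..."  # the actual LLM call (litellm, openai, ...) goes here


@observe()  # root: one trace per application-level request
def handle_request(query: str, user_id: str) -> str:
    langfuse_context.update_current_trace(user_id=user_id)
    return generate_answer(query, retrieve_documents(query))
```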

Logging from a proxy such as LiteLLM is a good alternative; you have less flexibility to log whatever is relevant, but it automatically applies to all requests.

Let me know if you have any questions; happy to help here.

@norton120
Author

👋🏻 Hey @marcklingen! Yep, I tested out the existing liteLLM integration yesterday and don't love it: we lose a ton of trace fidelity compared to our current instrumentation using the @observe decorator directly. So I may take a pass at adding a way to set/override the langfuse context in promptic directly (as a singleton, maybe?) and then optionally pass more/better components at call time.
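
To make that concrete, the API I'm imagining would look something like this (entirely hypothetical; `langfuse_config` and the call-time `langfuse_args` override don't exist in promptic today):

```python
from promptic import llm
# hypothetical module-level singleton for langfuse context
from promptic import langfuse_config

# set once at app startup; applies to every @llm-decorated function
langfuse_config.update(user_id=current_user.id, tags=["email-service"])


@llm(model="gpt-4o")
def write_an_email(purpose: str) -> EmailBody:
    """Write a formal business email for the purpose of {purpose}."""


# ...then optionally override or enrich the context at call time
write_an_email("scheduling a meeting", langfuse_args={"session_id": "abc123"})
```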
