Langfuse Integration #30
I stand corrected: litellm already has hooks for langfuse baked in: https://docs.litellm.ai/docs/observability/callbacks
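Per the linked callback docs, enabling litellm's built-in Langfuse callback is essentially a configuration sketch like the following (the key values and host are placeholders, not real credentials):

```python
import os
import litellm

# Langfuse credentials (environment variable names per the Langfuse docs;
# the values below are placeholders)
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-..."
# os.environ["LANGFUSE_HOST"] = "https://..."  # only needed for self-hosted instances

# Ask litellm to forward completed and failed calls to Langfuse;
# subsequent litellm.completion(...) calls are then traced automatically.
litellm.success_callback = ["langfuse"]
litellm.failure_callback = ["langfuse"]
```

This applies to every request going through litellm, which is exactly the proxy-level behavior discussed below.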
That would be really interesting to see. I've had it on my todo list to try spinning up langfuse on my dokploy instance, too |
I'm one of the langfuse maintainers. I think what @norton120 was aiming at here initially was tracing from the application itself instead of from the proxy level. This is generally preferred, as you then capture actual application-level timestamps and can include non-LLM calls (e.g. retrieval/tool methods) in the LLM application traces and evaluations. Example here: https://langfuse.com/docs/integrations/litellm/example-proxy-python. Logging from a proxy such as LiteLLM is a good alternative: you have less flexibility to log whatever is relevant, but it automatically applies to all requests. Let me know if you have any questions, happy to help here
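To illustrate the distinction (this is a minimal pure-Python sketch of the idea, not the Langfuse SDK): application-level tracing records a span for every step with real in-process timestamps, including non-LLM work a proxy never sees:

```python
import time
from contextlib import contextmanager

# Hypothetical minimal tracer: each step (retrieval, LLM call, ...) becomes a
# span with application-level timing -- the fidelity a proxy can't capture.
class Trace:
    def __init__(self, name):
        self.name = name
        self.spans = []

    @contextmanager
    def span(self, name):
        start = time.monotonic()
        try:
            yield
        finally:
            self.spans.append((name, time.monotonic() - start))

def answer_question(trace, question):
    with trace.span("retrieval"):   # non-LLM work is captured too
        docs = ["doc-1", "doc-2"]
    with trace.span("llm-call"):    # stand-in for e.g. litellm.completion(...)
        answer = f"Answer to {question!r} using {len(docs)} docs"
    return answer

trace = Trace("qa-request")
print(answer_question(trace, "What is promptic?"))
print([name for name, _ in trace.spans])  # → ['retrieval', 'llm-call']
```

A proxy, by contrast, would only see the single outbound LLM request and its latency.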
👋🏻 Hey @marcklingen! Yep, I tested out the existing LiteLLM integration yesterday and don't love it - we lose a ton of trace fidelity compared to our current instrumentation using the @observe decorator directly. So I may take a pass at adding a way to set/override the langfuse context in promptic directly (like a singleton?) and then optionally pass more/better components at call time.
It could be really useful to bake langfuse support into the decorator. Since the mechanics of the generation are abstracted, instrumentation is very difficult without either blindly decorating with langfuse's @observe (which leaves a lot to be desired) or creating shell decorators - which is how that functionality would probably be implemented under the hood anyway, I'm guessing. But something like this, maybe? This would create a langfuse trace like:
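A minimal pure-Python sketch of the "shell decorator" idea mentioned above (the names `set_trace_hook` and `llm` are hypothetical illustrations, not promptic's actual API): the library wraps the user's function once and calls out to a pluggable, app-settable tracing hook, which is where a Langfuse client could be attached:

```python
import functools
import time

# Hypothetical module-level tracing context, settable by the application
# (in practice this could hold a Langfuse client set via a singleton).
_trace_hook = None

def set_trace_hook(hook):
    global _trace_hook
    _trace_hook = hook

def llm(fn):
    """Shell decorator: wraps the generation and reports to the hook, if any."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        result = fn(*args, **kwargs)  # stand-in for the real generation call
        if _trace_hook is not None:
            _trace_hook({
                "name": fn.__name__,
                "duration_s": time.monotonic() - start,
                "output": result,
            })
        return result
    return wrapper

# Usage: the app installs a hook, then decorated calls are traced transparently.
events = []
set_trace_hook(events.append)

@llm
def greet(name):
    return f"Hello, {name}!"

greet("Langfuse")
print(events[0]["name"])  # → greet
```

Because the hook is injected rather than hard-coded, users who don't configure it pay no cost, and those who do get traces without touching their decorated functions.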