Description
Hi Langfuse team,
I am building a LangGraph agent and deploying it to AWS Bedrock AgentCore (using the custom runtime approach with Docker). I'm struggling to find the "canonical" way to integrate Langfuse tracing in this specific environment, especially given the short-lived/serverless nature of AgentCore invocations.
My Setup:
Framework: LangGraph / LangChain (Python) with LangChain DeepAgents
Runtime: AWS Bedrock AgentCore (Custom Runtime)
Deployment: AWS CDK (Docker container)
What I've Tried: I have read the AgentCore integration docs (which focus on Strands) and the LangChain integration docs.
I am currently attempting a hybrid approach:
OTEL Configuration: I am programmatically setting OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS inside the Python process at startup, pointing them at Langfuse.
Instrumentation: I'm using LangchainInstrumentor().instrument().
Native Tracing: I'm also trying to use the @observe decorator on the entrypoint and passing a CallbackHandler to agent.invoke().
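For reference, this is roughly what my startup configuration looks like. It is a sketch, not a confirmed-working setup: the keys are placeholders, and I am assuming the `/api/public/otel` endpoint path and the Basic-auth header format from the Langfuse OTEL docs apply unchanged inside the AgentCore container.

```python
import base64
import os

# Placeholder credentials; the real ones come from AWS Secrets Manager.
LANGFUSE_PUBLIC_KEY = "pk-lf-..."
LANGFUSE_SECRET_KEY = "sk-lf-..."
LANGFUSE_HOST = "https://cloud.langfuse.com"  # assumption: EU cloud region

# OTLP basic auth is base64("public_key:secret_key") per the Langfuse docs.
auth = base64.b64encode(
    f"{LANGFUSE_PUBLIC_KEY}:{LANGFUSE_SECRET_KEY}".encode()
).decode()

# These must be set before any TracerProvider or instrumentor is created,
# otherwise the exporter is initialized without them.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = f"{LANGFUSE_HOST}/api/public/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {auth}"
```

After this, I call `LangchainInstrumentor().instrument()` and wire up the `@observe` decorator and `CallbackHandler` as described above.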
The Problem: I am unsure whether I should rely purely on OTEL (auto-instrumentation) or on the native Langfuse SDK (CallbackHandler) for LangGraph on AgentCore. Specifically:
Does LangchainInstrumentor work reliably with AgentCore's execution model (freeze/thaw)?
Do I need to manually handle context propagation (trace_id, parent_observation_id) from the AgentCore payload if I want to link the AgentCore trace to the inner LangGraph traces?
Is langfuse.flush() sufficient, or do I need to ensure OTEL providers are flushed manually?
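To make the flush question concrete, here is a toy model of my worry (no real SDKs involved, just a stand-in class): both Langfuse and OTEL batch spans in memory, and if AgentCore freezes the container mid-batch, `atexit` hooks never run and the buffered spans are lost. That is why I am currently flushing explicitly in a `finally` block at the end of every invocation.

```python
import atexit

class BufferedExporter:
    """Toy stand-in for a batching span exporter (Langfuse and OTEL both batch)."""
    def __init__(self):
        self.buffer = []    # spans waiting in memory
        self.exported = []  # spans actually shipped

    def record(self, span):
        self.buffer.append(span)  # sits in memory until a flush

    def flush(self):
        self.exported.extend(self.buffer)  # ship everything now
        self.buffer.clear()

exporter = BufferedExporter()

# atexit alone is NOT enough on AgentCore: a frozen container never runs
# exit hooks, so spans still in the buffer at freeze time would be lost.
atexit.register(exporter.flush)

def handler(payload):
    exporter.record({"event": "invoke", "payload": payload})
    try:
        return {"ok": True}
    finally:
        exporter.flush()  # flush before the runtime can freeze the process
```

In the real code the `finally` block calls `langfuse.flush()`; my question is whether I also need to flush the OTEL tracer provider (e.g. `force_flush()`) separately, or whether the Langfuse client covers both.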
Request: Could you provide a working example or update the documentation to cover LangGraph on Bedrock AgentCore?
Specifically, I'm looking for:
- Recommended setup (OTEL vs Native SDK).
- How to correctly propagate the trace context so the "Agent Invocation" in Bedrock links to the LangGraph traces.
- How to ensure all traces are flushed before the AgentCore runtime freezes/terminates.
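On the second point, my mental model of "propagating the trace context" is the W3C Trace Context format (`00-<trace_id>-<span_id>-<flags>`); a sketch of what I imagine the extraction side looks like is below. Whether AgentCore actually delivers a `traceparent` value in the payload or environment, and how to hand the extracted IDs to the Langfuse SDK, is exactly what I am hoping the docs update will clarify.

```python
def parse_traceparent(header: str) -> dict:
    """Split a W3C traceparent header into its trace and parent-span IDs."""
    version, trace_id, parent_span_id, flags = header.split("-")
    if len(trace_id) != 32 or len(parent_span_id) != 16:
        raise ValueError(f"malformed traceparent: {header!r}")
    # The parent span ID would become the parent_observation_id so the
    # inner LangGraph spans attach to the outer AgentCore invocation trace.
    return {"trace_id": trace_id, "parent_observation_id": parent_span_id}
```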
Thank you!