langchain-samples/lg-agentcore

AgentCore + LangSmith: Tracing & Evaluation

This repo demonstrates how to deploy a LangGraph agent to AWS Bedrock AgentCore and trace its execution end-to-end in LangSmith, including offline evaluation.

What's covered

  1. Deploy a LangGraph agent to AgentCore using the bedrock-agentcore-starter-toolkit
  2. Distributed tracing into LangSmith — trace context is propagated from the caller (notebook) into the AgentCore container so that inner LangGraph steps (nodes, tool calls, LLM calls) nest under a single parent trace
  3. Offline evaluation — run the deployed agent against a LangSmith dataset using an LLM-as-judge evaluator

Architecture

Notebook (@traceable)
  └── AgentCore Runtime (langgraph_bedrock.py)
        └── LangGraph agent
              ├── chatbot node (Claude Haiku via Bedrock)
              └── tools node (calculator, weather)

Trace context (trace_id, run_id, dotted_order) is injected into the AgentCore payload. Inside the container, a phantom RunTree re-attaches to that context so all inner spans appear nested under the caller's trace in LangSmith.
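Concretely, the injected payload might look like the sketch below. The key names and id values here are illustrative assumptions; the exact schema is defined by the notebook and langgraph_bedrock.py.

```python
# Illustrative AgentCore invocation payload carrying LangSmith trace context.
# Key names and id values are assumptions for illustration only.
payload = {
    "prompt": "What is the weather in Paris?",
    "langsmith_trace_context": {
        "trace_id": "5f2c0f1e-...",    # id of the caller's root trace
        "run_id": "9ab41d3c-...",      # id of the caller's @traceable span
        "dotted_order": "20250101T000000000000Z5f2c0f1e-...",  # ordering key
    },
}
```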

Files

  • langgraph_bedrock.py: Agent entrypoint deployed to AgentCore. Includes TracingMiddleware and LangSmith context propagation.
  • langgraph_bedrock_local.py: Same agent, runnable locally without AgentCore for fast iteration.
  • agentcore_langgraph.ipynb: Step-by-step notebook covering configure, deploy, trace, and evaluate.
  • requirements.txt: Python dependencies for the AgentCore container.

Prerequisites

  • AWS account with Bedrock access (Claude Haiku and AgentCore enabled in your region)
  • LangSmith account and API key
  • OpenAI API key (used by the LLM-as-judge evaluator)
  • Python 3.11+

Setup

uv venv
uv pip install -r requirements.txt

Copy .env.example to .env and fill in:

AWS_REGION=...
LANGSMITH_API_KEY=...
LANGSMITH_TRACING=true
LANGSMITH_PROJECT=agentcore-demo
OPENAI_API_KEY=...

Running the demo

Open agentcore_langgraph.ipynb and run cells top to bottom. The notebook is organized into five sections:

  1. Configure & Deploy — packages the agent and deploys to AgentCore via CodeBuild
  2. Basic Invocation — a quick sanity-check invocation
  3. Distributed Tracing — invokes with LangSmith context propagation; traces appear nested in LangSmith
  4. Evaluation — creates a dataset and runs offline eval with an LLM-as-judge
  5. Cleanup — deletes the AgentCore runtime and ECR repository to avoid ongoing costs
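The LLM-as-judge step in section 4 can be sketched as an evaluator factory. The function and key names below are hypothetical; in the notebook, the `judge` callable would be backed by an OpenAI model and the evaluator passed to LangSmith's evaluation run.

```python
def make_correctness_evaluator(judge):
    """Build a LangSmith-style correctness evaluator from an LLM judge.

    `judge` is any callable that takes a grading prompt and returns a
    verdict string; here it stands in for an OpenAI model call.
    """
    def correctness(outputs: dict, reference_outputs: dict) -> dict:
        prompt = (
            "Grade the answer against the reference. "
            "Reply with exactly CORRECT or INCORRECT.\n"
            f"Answer: {outputs['answer']}\n"
            f"Reference: {reference_outputs['answer']}"
        )
        verdict = judge(prompt).strip().upper()
        # Exact match so that "INCORRECT" is not mistaken for "CORRECT".
        return {"key": "correctness", "score": 1.0 if verdict == "CORRECT" else 0.0}
    return correctness
```

With a stub judge this is easy to unit-test offline before wiring in a real model.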

Tracing approach

AgentCore runs the agent in an isolated container. To get nested traces in LangSmith, the caller:

  1. Wraps the invocation in @traceable to create a parent span
  2. Reads the current RunTree to get trace_id, run_id, and dotted_order
  3. Passes those values in the AgentCore invocation payload

Inside the container, langgraph_bedrock.py reconstructs a phantom RunTree from those values and uses tracing_context(parent=phantom_parent) so all LangGraph spans attach as children of the caller's span.
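The round trip can be sketched with two small helpers. These names are hypothetical (the real logic lives in the notebook and langgraph_bedrock.py): a real caller would obtain `run_tree` from langsmith's get_current_run_tree() inside a @traceable function, and the container would hand the recovered parent reference to tracing_context(parent=...). The repo reconstructs a full phantom RunTree; for illustration, the dotted_order string alone serves as a simpler parent stand-in.

```python
# Hypothetical helpers sketching the trace-context round trip.
def inject_trace_context(payload: dict, run_tree) -> dict:
    """Caller side: copy the current span's identifiers into the payload.

    `run_tree` is expected to expose trace_id, id, and dotted_order,
    as LangSmith RunTree objects do.
    """
    if run_tree is not None:
        payload["langsmith_trace_context"] = {
            "trace_id": str(run_tree.trace_id),
            "run_id": str(run_tree.id),
            "dotted_order": run_tree.dotted_order,
        }
    return payload

def parent_from_payload(payload: dict):
    """Container side: recover the parent reference from the payload."""
    ctx = payload.get("langsmith_trace_context") or {}
    return ctx.get("dotted_order")
```

Because the helpers are pure, they can be exercised with a stub run tree before deploying anything to AgentCore.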
