
Commit e7b89a6

alexmojaki and Copilot authored
Claude SDK instrumentation (#1618)
Co-authored-by: Copilot <[email protected]>
1 parent 4074714 commit e7b89a6

File tree

10 files changed: +349 -18 lines


docs/guides/web-ui/llm-panels.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -53,6 +53,7 @@ Click an LLM span to open the details panel.
 | [LangChain](../../integrations/llms/langchain.md) ||||
 | [LiteLLM](../../integrations/llms/litellm.md) ||||
 | [Anthropic](../../integrations/llms/anthropic.md) | | ||
+| [Claude Agent SDK](../../integrations/llms/claude-agent-sdk.md) | | ||
 | [Google ADK](https://github.com/pydantic/logfire/issues/1201#issuecomment-3012423974) || | |

 Tokens and costs are more generally supported by any instrumentation that follows the standard [OpenTelemetry semantic conventions for GenAI spans](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/). The following snippet shows the attributes required if you want to log the data manually:
```
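As context for the table change above: when an instrumentation isn't listed, Logfire can still display tokens and costs if the span carries the standard GenAI attributes. A minimal sketch of such attributes (the attribute names come from the OpenTelemetry GenAI semantic conventions; the example values and the commented span call are illustrative, not the docs' exact snippet):

```python
# Attribute names follow the OpenTelemetry GenAI semantic conventions.
# The values here are illustrative examples.
attributes = {
    'gen_ai.system': 'anthropic',                # provider that handled the request
    'gen_ai.request.model': 'claude-sonnet-4',   # model that was requested
    'gen_ai.response.model': 'claude-sonnet-4',  # model that actually responded
    'gen_ai.usage.input_tokens': 512,            # prompt tokens, used for cost display
    'gen_ai.usage.output_tokens': 128,           # completion tokens
}

# With Logfire these would be attached to a span, roughly:
#   with logfire.span('chat', **attributes): ...
total_tokens = attributes['gen_ai.usage.input_tokens'] + attributes['gen_ai.usage.output_tokens']
```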
docs/images/logfire-screenshot-claude-agent-sdk.png

92.1 KB (binary file added)

docs/integrations/llms/anthropic.md

Lines changed: 3 additions & 0 deletions

```diff
@@ -45,6 +45,9 @@ With that you get:

 ## Methods covered

+!!! note
+    This is separate from [Claude Agent SDK instrumentation](../llms/claude-agent-sdk.md). The Claude Agent SDK doesn't actually use the `anthropic` package under the hood.
+
 The following Anthropic methods are covered:

 - [`client.messages.create`](https://docs.anthropic.com/en/api/messages)
```
docs/integrations/llms/claude-agent-sdk.md

Lines changed: 89 additions & 0 deletions

````diff
@@ -0,0 +1,89 @@
+---
+title: "Logfire Integrations: Claude Agent SDK"
+description: "Guide for using Logfire with Claude Agent SDK via Langsmith OpenTelemetry tracing, including setup instructions and example trace output."
+integration: "third-party"
+---
+# Claude Agent SDK
+
+You can instrument the Python [Claude Agent SDK](https://platform.claude.com/docs/en/agent-sdk/overview) using **Logfire** and [Langsmith](https://docs.langchain.com/langsmith/trace-claude-agent-sdk).
+
+!!! note
+    This is separate from the [`anthropic` integration](../llms/anthropic.md). The Claude Agent SDK doesn't actually use the `anthropic` package under the hood.
+
+First, install dependencies:
+
+```bash
+pip install 'langsmith[claude-agent-sdk, otel]'
+```
+
+Here's an example script:
+
+```python
+from langsmith.integrations.claude_agent_sdk import configure_claude_agent_sdk
+import logfire
+import os
+import asyncio
+from claude_agent_sdk import (
+    ClaudeAgentOptions,
+    ClaudeSDKClient,
+    tool,
+    create_sdk_mcp_server,
+)
+from typing import Any
+
+# These environment variables enable Langsmith OpenTelemetry tracing,
+# instead of sending traces to Langsmith directly.
+os.environ['LANGSMITH_OTEL_ENABLED'] = "true"
+os.environ['LANGSMITH_OTEL_ONLY'] = "true"
+os.environ['LANGSMITH_TRACING'] = "true"
+
+# Ensure that OpenTelemetry traces are sent to Logfire
+logfire.configure()
+
+# Instrument the Claude Agent SDK with Langsmith
+configure_claude_agent_sdk()
+
+
+# Example of using a tool in the Claude Agent SDK:
+@tool(
+    "get_weather",
+    "Gets the current weather for a given city",
+    {
+        "city": str,
+    },
+)
+async def get_weather(args: dict[str, Any]) -> dict[str, Any]:
+    """Simulated weather lookup tool"""
+    city = args["city"]
+    weather = "Cloudy, 59°F"
+    return {"content": [{"type": "text", "text": f"Weather in {city}: {weather}"}]}
+
+
+async def main():
+    weather_server = create_sdk_mcp_server(
+        name="weather",
+        version="1.0.0",
+        tools=[get_weather],
+    )
+
+    options = ClaudeAgentOptions(
+        system_prompt="You are a friendly travel assistant who helps with weather information.",
+        mcp_servers={"weather": weather_server},
+        allowed_tools=["mcp__weather__get_weather"],
+    )
+
+    async with ClaudeSDKClient(options=options) as client:
+        await client.query("What's the weather like in Berlin?")
+
+        async for message in client.receive_response():
+            print(message)
+
+
+asyncio.run(main())
+```
+
+!!! warning
+    Only the `ClaudeSDKClient` is instrumented, not the top-level `claude_agent_sdk.query()` function.
+
+The resulting trace looks like this in Logfire:
+
+![Logfire Claude Agent SDK Trace](../../images/logfire-screenshot-claude-agent-sdk.png)
````

docs/integrations/llms/langchain.md

Lines changed: 3 additions & 1 deletion

````diff
@@ -5,10 +5,11 @@ integration: "built-in"
 ---
 # LangChain

-[LangChain](https://www.langchain.com/) (and thus [LangGraph](https://www.langchain.com/langgraph)) has [built-in OpenTelemetry tracing via Langsmith](https://docs.smith.langchain.com/observability/how_to_guides/trace_langchain_with_otel) which you can use with **Logfire**. It's enabled by these two environment variables:
+[LangChain](https://www.langchain.com/) (and thus [LangGraph](https://www.langchain.com/langgraph)) has [built-in OpenTelemetry tracing via Langsmith](https://docs.smith.langchain.com/observability/how_to_guides/trace_langchain_with_otel) which you can use with **Logfire**. It's enabled by these environment variables:

 ```
 LANGSMITH_OTEL_ENABLED=true
+LANGSMITH_OTEL_ONLY=true
 LANGSMITH_TRACING=true
 ```

@@ -21,6 +22,7 @@ import logfire

 # These environment variables need to be set before importing langchain or langgraph
 os.environ['LANGSMITH_OTEL_ENABLED'] = 'true'
+os.environ["LANGSMITH_OTEL_ONLY"] = 'true'
 os.environ['LANGSMITH_TRACING'] = 'true'

 from langchain.agents import create_agent
````

logfire/_internal/exporters/processor_wrapper.py

Lines changed: 40 additions & 15 deletions

```diff
@@ -1,6 +1,7 @@
 from __future__ import annotations

 import json
+from collections.abc import Mapping
 from contextlib import suppress
 from dataclasses import dataclass
 from typing import Any, cast
@@ -347,7 +348,8 @@ def _transform_langchain_span(span: ReadableSpanDict):
     if existing_json_schema:  # pragma: no cover
         return

-    properties = JsonSchemaProperties({})
+    properties = JsonSchemaProperties({'all_messages_events': {'type': 'array'}})
+
     parsed_attributes: dict[str, Any] = {}
     for key, value in attributes.items():
         if not isinstance(value, str) or not value.startswith(('{"', '[')):
@@ -359,6 +361,18 @@ def _transform_langchain_span(span: ReadableSpanDict):
             # Tell the Logfire backend to parse this attribute as JSON.
             properties[key] = {'type': 'object' if value.startswith('{') else 'array'}

+    attributes, new_attributes = _transform_langsmith_span_attributes(attributes, parsed_attributes)
+
+    span['attributes'] = {
+        **attributes,
+        ATTRIBUTES_JSON_SCHEMA_KEY: attributes_json_schema(properties),
+        **new_attributes,
+    }
+
+
+def _transform_langsmith_span_attributes(
+    attributes: Mapping[str, Any], parsed_attributes: dict[str, Any]
+) -> tuple[Mapping[str, Any], dict[str, Any]]:
     new_attributes: dict[str, Any] = {}

     # OTel semconv attributes, needed for displaying costs.
@@ -370,7 +384,7 @@ def _transform_langsmith_span_attributes(
         ]
         new_attributes.setdefault('gen_ai.request.model', model)

-    request_model: str = attributes.get('gen_ai.request.model') or new_attributes.get('gen_ai.request.model', '')  # type: ignore
+    request_model: str = attributes.get('gen_ai.request.model') or new_attributes.get('gen_ai.request.model', '')

     if not request_model and 'gen_ai.usage.input_tokens' in attributes:  # pragma: no cover
         # Only keep usage attributes on spans with actual token usage, i.e. model requests,
@@ -389,18 +403,31 @@ def _transform_langsmith_span_attributes(
     # Add `all_messages_events`
     with suppress(Exception):
-        input_messages = parsed_attributes.get('input.value', parsed_attributes.get('gen_ai.prompt', {}))['messages']
-        if len(input_messages) == 1 and isinstance(input_messages[0], list):
-            [input_messages] = input_messages
-
+        input_attribute = parsed_attributes.get('input.value', parsed_attributes.get('gen_ai.prompt', {}))
+        if 'messages' in input_attribute:
+            input_messages = input_attribute['messages']
+            if len(input_messages) == 1 and isinstance(input_messages[0], list):
+                [input_messages] = input_messages
+        else:
+            input_messages: list[Any] = []
+            if 'system' in input_attribute:
+                input_messages.append({'role': 'system', 'content': input_attribute['system']})
+            if 'prompt' in input_attribute:
+                input_messages.append({'role': 'user', 'content': input_attribute['prompt']})
+            if not input_messages:
+                raise ValueError
         message_events = [_transform_langchain_message(old_message) for old_message in input_messages]

         # If we fail to parse output messages, fine, but only try if we've succeeded to parse input messages.
         with suppress(Exception):
             output_value = parsed_attributes.get('output.value', parsed_attributes.get('gen_ai.completion', {}))
+            # This weird issubclass usage is because pyright can't deal with missing type arguments of dict sensibly.
+            if issubclass(type(output_value), dict) and 'content' in output_value and 'role' in output_value:
+                output_value = cast(Any, {'messages': [output_value]})
             try:
                 # Multiple generations mean multiple choices, we can only display one.
-                message_events += [_transform_langchain_message(output_value['generations'][0][0]['message'])]
+                old_message = output_value['generations'][0][0]['message']
+                message_events += [_transform_langchain_message(old_message)]
             except Exception:
                 try:
                     output_message_events = [_transform_langchain_message(m) for m in output_value['messages']]
@@ -420,13 +447,8 @@ def _transform_langsmith_span_attributes(
             message_events += [_transform_langchain_message(output_value['output'])]

         new_attributes['all_messages_events'] = json.dumps(message_events)
-        properties['all_messages_events'] = {'type': 'array'}

-    span['attributes'] = {
-        **attributes,
-        ATTRIBUTES_JSON_SCHEMA_KEY: attributes_json_schema(properties),
-        **new_attributes,
-    }
+    return attributes, new_attributes


 def _transform_langchain_message(old_message: dict[str, Any]) -> dict[str, Any]:
@@ -446,6 +468,9 @@ def _transform_langchain_message(old_message: dict[str, Any]) -> dict[str, Any]:
         'role': role,
     }

+    if 'tool_call_id' in result:
+        result['id'] = result.pop('tool_call_id')
+
     if tool_calls := result.get('tool_calls'):
         for tool_call in tool_calls:
             if (
@@ -463,9 +488,9 @@ def _transform_langchain_message(old_message: dict[str, Any]) -> dict[str, Any]:
                 )
     else:
         result.pop('tool_calls', None)
+    if role == 'tool' and 'content' in result and 'id' in result:
+        result.setdefault('name', '')  # dummy value which makes the frontend happy

-    if 'tool_call_id' in result:
-        result['id'] = result.pop('tool_call_id')
     return result
```
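To illustrate the message normalization in the diff above: a hypothetical, simplified stand-in for the tool-message handling in `_transform_langchain_message` (the real function also remaps roles and tool calls; this sketch covers only the `tool_call_id` rename and the dummy `name`):

```python
from typing import Any


def normalize_tool_message(old: dict[str, Any]) -> dict[str, Any]:
    """Hypothetical sketch of the tool-message fixes shown in the diff above."""
    result = dict(old)
    # Langsmith emits `tool_call_id`; the Logfire frontend expects `id`.
    if 'tool_call_id' in result:
        result['id'] = result.pop('tool_call_id')
    # Tool-role messages also need a `name` key; an empty string keeps the frontend happy.
    if result.get('role') == 'tool' and 'content' in result and 'id' in result:
        result.setdefault('name', '')
    return result
```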

logfire/_internal/scrubbing.py

Lines changed: 2 additions & 0 deletions

```diff
@@ -155,6 +155,8 @@ class BaseScrubber(ABC):
         'rpc.method',
         'gen_ai.system',
         'model_request_parameters',
+        'langsmith.metadata.session_id',
+        'langsmith.trace.session_name',
     }

     @abstractmethod
```
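The two keys above need to be allow-listed because Logfire's scrubber redacts attribute values whose keys match sensitive patterns (`session` is among the defaults), which would otherwise hide Langsmith's session identifiers. A minimal sketch of the safe-key idea, using a hypothetical helper rather than Logfire's actual scrubber:

```python
import re

# Hypothetical, simplified version of the safe-key mechanism in BaseScrubber.
SAFE_KEYS = {'langsmith.metadata.session_id', 'langsmith.trace.session_name'}
SENSITIVE_PATTERN = re.compile(r'password|secret|session', re.IGNORECASE)


def scrub_attributes(attributes: dict[str, object]) -> dict[str, object]:
    # Redact any value whose key looks sensitive, unless the key is explicitly safe.
    return {
        key: '[Scrubbed]' if SENSITIVE_PATTERN.search(key) and key not in SAFE_KEYS else value
        for key, value in attributes.items()
    }
```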

mkdocs.yml

Lines changed: 3 additions & 2 deletions

```diff
@@ -121,6 +121,7 @@ nav:
       - LangChain: integrations/llms/langchain.md
       - LiteLLM: integrations/llms/litellm.md
       - MCP: integrations/llms/mcp.md
+      - Claude Agent SDK: integrations/llms/claude-agent-sdk.md
       - LLamaIndex: integrations/llms/llamaindex.md
       - Mirascope: integrations/llms/mirascope.md
       - Magentic: integrations/llms/magentic.md
@@ -244,11 +245,11 @@ plugins:
   - mkdocstrings:
       handlers:
         python:
-          paths: [src/packages/logfire/logfire]
+          paths: [ src/packages/logfire/logfire ]
           options:
             members_order: source
             separate_signature: true
-            filters: ["!^_"]
+            filters: [ "!^_" ]
             docstring_options:
               ignore_init_summary: true
               merge_init_into_class: true
```
