LangSmith Observability lets you record, inspect, and analyze every step your LLM application takes. This page covers the key concepts behind tracing: how data is structured in LangSmith and how to send traces.
## How LangSmith structures data
LangSmith groups multiple [_traces_](#traces) within a [_project_](#projects). Each trace records the sequence of steps your application takes for a single operation, which are made up of individual [_runs_](#runs). You can link together traces from multi-turn conversations as a [_thread_](#threads).
```mermaid actions={false}
graph TB
    Project -->|contains| Trace1[Trace]
    Project -->|contains| Trace2[Trace]
    Trace1 -->|contains| Run1[Run]
    Trace1 -->|contains| Run2[Run]
    Trace1 -->|contains| Run3[Run]
    Trace1 -->|part of| Thread
    Trace2 -->|part of| Thread
```
### Projects
A _project_ is a container for all the traces related to a single application or service.
[Log traces to a project](/langsmith/log-traces-to-project).
### Traces
A _trace_ is a collection of runs for a single operation. For example, if a user request triggers a chain that calls an LLM and then an output parser, all of those runs belong to the same trace. Runs are bound to a trace by a unique trace ID. If you are familiar with [OpenTelemetry](https://opentelemetry.io/), you can think of a LangSmith trace as a collection of spans.
<Note><MaxRunsPerTrace /></Note>
### Runs
A _run_ is a span representing a single unit of work within your LLM application: a call to an LLM, a prompt formatting step, a retrieval call, or any other discrete operation. If you are familiar with [OpenTelemetry](https://opentelemetry.io/), you can think of a run as a span.
### Threads
A _thread_ is a sequence of traces representing a single conversation. Each turn in a multi-turn conversation is its own trace, but traces are linked by a shared identifier. To group traces into threads, pass a special metadata key (`session_id`, `thread_id`, or `conversation_id`) with a unique value.
[Learn how to configure threads](/langsmith/threads).
<Callout type="info" icon="feather">
Use **[Polly](/langsmith/polly)** to analyze traces, runs, and threads. Polly helps you understand agent performance, debug issues, and gain insights from conversation threads without manually digging through data.
</Callout>
## Trace enrichment
### Feedback
_Feedback_ allows you to score an individual run based on certain criteria. Each feedback entry consists of a tag and a score, and is bound to a run by a unique run ID. Feedback can be continuous or discrete (categorical), and tags can be reused across runs within an organization.
For more on how feedback is stored, refer to the [Feedback data format guide](/langsmith/feedback-data-format).
### Tags
_Tags_ are strings you can attach to runs to categorize, filter, and group them in the LangSmith UI.
[Learn how to attach tags to your traces](/langsmith/add-metadata-tags).
### Metadata
_Metadata_ is a collection of key-value pairs you can attach to runs, such as the application version, the environment, or any other contextual information. As with tags, you can use metadata to filter and group runs.
[Learn how to add metadata to your traces](/langsmith/add-metadata-tags).
## Sending traces
There are two ways to send trace data to LangSmith.
### Integrations
LangSmith _integrations_ provide automatic tracing for popular LLM providers and agent frameworks (the equivalent of auto-instrumentation in general observability). When you use a supported framework such as LangChain, LangGraph, OpenAI, Anthropic, or CrewAI, the integration captures inputs, outputs, and metadata without requiring manual code changes.
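Because integrations hook in automatically, enabling them is typically just configuration (a sketch; replace the key with your own):

```shell
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="<your-api-key>"
# Any code using a supported framework in this environment is now traced.
```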
[Browse all integrations](/langsmith/integrations).
### Manual instrumentation
_Manual instrumentation_ lets you add tracing to any code, regardless of the framework. Use it when you're not using a supported integration or when you need granular control over what gets traced. LangSmith provides three mechanisms:
- `@traceable` / `traceable`: a decorator to trace any function
- `trace` context manager (Python): wrap specific code blocks
- `RunTree` API: low-level, explicit trace construction
[Learn how to add manual instrumentation](/langsmith/annotate-code).
## Data retention
LangSmith (SaaS) retains trace data for 400 days from ingestion. After that, traces are permanently deleted, with limited metadata retained for usage statistics. For details on retention tiers and pricing, refer to [Usage and billing: Data retention](/langsmith/administration-overview#data-retention).
<Note>
To keep data beyond the retention period, add it to a [dataset](/langsmith/manage-datasets). Datasets persist indefinitely, even after the source trace is deleted.
</Note>
To delete traces before their expiration date, see [Manage a trace](/langsmith/manage-trace#delete-a-trace).