---
title: Tracing quickstart
sidebarTitle: Quickstart
description: Add LangSmith tracing to an LLM application in minutes.
icon: rocket
---
import project from '/snippets/langsmith/trace-ingestion-project.mdx';
LangSmith gives you end-to-end visibility into your LLM application by capturing traces: a complete record of every step that ran during a request, from the inputs passed in to the final output returned.
In this quickstart, you will add tracing to an AI assistant and view the results in LangSmith.
If you're building with [LangChain](https://docs.langchain.com/oss/python/langchain/overview) or [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview), you can enable LangSmith tracing with a single environment variable. Refer to [trace with LangChain](/langsmith/trace-with-langchain) or [trace with LangGraph](/langsmith/trace-with-langgraph).

Before you begin, make sure you have:
- A LangSmith account: Sign up or log in at [smith.langchain.com](https://smith.langchain.com).
- A LangSmith API key: Follow the Create an API key guide.
- An OpenAI API key: Generate this from the OpenAI dashboard.
This example uses OpenAI as the LLM provider. You can adapt it for your own provider.
Create a project directory and install the dependencies:

```bash Python
mkdir ls-quickstart && cd ls-quickstart
python -m venv .venv && source .venv/bin/activate
pip install -U langsmith openai
```

```bash TypeScript
mkdir ls-quickstart-ts && cd ls-quickstart-ts
npm init -y
npm install langsmith openai
npm install -D typescript tsx
```
Export your environment variables in your shell:

```bash
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="<your-langsmith-api-key>"
export OPENAI_API_KEY="<your-openai-api-key>"
```
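If exporting shell variables is inconvenient (for example, in a notebook), you can set the same values from Python instead. This is a minimal sketch; set the variables before any LangSmith or OpenAI calls run, and replace the placeholders with your real keys:

```python
import os

# Set these before creating any clients or traced functions.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-langsmith-api-key>"
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
```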
If you are using Anthropic, use the Anthropic wrapper; if you are using Google Gemini, use the Gemini wrapper. For other providers, trace calls manually with the `@traceable` decorator.
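For illustration, here is a minimal sketch of manual tracing for a provider without a wrapper. The `fake_provider_generate` function is a hypothetical stand-in for your provider's SDK call; everything else uses the real `langsmith` API:

```python
from langsmith import traceable

def fake_provider_generate(prompt: str) -> str:
    # Hypothetical stand-in for your provider's SDK call.
    return f"echo: {prompt}"

@traceable(run_type="llm")  # log this call as an LLM span in LangSmith
def call_model(prompt: str) -> str:
    return fake_provider_generate(prompt)

print(call_model("Hello, LangSmith!"))
```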
The following app uses two LangSmith tools to add tracing:
- `wrap_openai`: wraps the OpenAI client so every LLM call is automatically logged as a nested span.
- `@traceable`: wraps a function so its inputs, outputs, and any nested spans appear as a single trace in LangSmith.
The `assistant` function calls a tool (`get_context`) to retrieve relevant context, then passes that context to the model. Using `@traceable` on both functions captures the full pipeline in one trace, with the tool call and LLM call as nested spans.
Create a file called `app.py` (Python) or `index.ts` (TypeScript) with the following code:
```python app.py
from openai import OpenAI
from langsmith.wrappers import wrap_openai
from langsmith import traceable

client = wrap_openai(OpenAI())  # log every OpenAI call automatically

@traceable(run_type="tool")  # trace this as a tool span
def get_context(question: str) -> str:
    # In a real app, this would query a knowledge base or vector store
    return "LangSmith traces are stored for 14 days on the Developer plan."

@traceable  # capture the full pipeline as a single trace
def assistant(question: str) -> str:
    context = get_context(question)
    response = client.chat.completions.create(
        model="gpt-5-mini",
        messages=[
            {
                "role": "system",
                "content": f"Answer using the context below.\n\nContext: {context}",
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(assistant("How long are LangSmith traces stored?"))
```

```typescript index.ts
import OpenAI from "openai";
import { wrapOpenAI } from "langsmith/wrappers";
import { traceable } from "langsmith/traceable";
const client = wrapOpenAI(new OpenAI()); // log every OpenAI call automatically

const getContext = traceable(
  async function getContext(question: string): Promise<string> { // trace this as a tool span
    // In a real app, this would query a knowledge base or vector store
    return "LangSmith traces are stored for 14 days on the Developer plan.";
  },
  { run_type: "tool" }
);

const assistant = traceable(async function assistant(question: string) { // capture the full pipeline as a single trace
  const context = await getContext(question);
  const response = await client.chat.completions.create({
    model: "gpt-5-mini",
    messages: [
      {
        role: "system",
        content: `Answer using the context below.\n\nContext: ${context}`,
      },
      { role: "user", content: question },
    ],
  });
  return response.choices[0]?.message?.content ?? null;
});

(async () => {
  console.log(await assistant("How long are LangSmith traces stored?"));
})();
```

Run the app:

```bash Python
python app.py
```

```bash TypeScript
npx tsx index.ts
```

In the LangSmith UI, go to **Tracing** and select your default project. Click the **assistant** row to open the trace. The **Messages** tab shows the conversation as it was sent to the model. Select the **Details** tab to see the full run tree, including the `assistant` function with the `get_context` tool call and the OpenAI call nested inside it.
The outer span captures your `assistant` function's inputs and outputs. The nested `get_context` span records the tool call, and the `ChatOpenAI` span records the exact prompt sent to the model and the response returned.
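Optionally, `@traceable` also accepts `name`, `tags`, and `metadata`, which can make traces easier to find once a project grows. A minimal sketch (the tag and metadata values here are arbitrary examples):

```python
from langsmith import traceable

@traceable(name="assistant", tags=["quickstart"], metadata={"env": "dev"})
def assistant(question: str) -> str:
    # Same body as above; the extra arguments only affect how the
    # trace is named and labeled in LangSmith.
    ...
```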
Next steps:

- Tracing integrations: LangChain, LangGraph, Anthropic, and other providers.
- Trace an LLM application: a full lifecycle tutorial, from prototyping through production.
- Filter traces: search and navigate large tracing projects.
- Log to a specific project: send traces to a named project instead of the default (see the sketch below).
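As a quick illustration of that last item, traced functions accept a `langsmith_extra` argument at call time that can override the destination project; `my-quickstart-project` is an arbitrary example name:

```python
# Send this particular run to a named project instead of the default.
assistant(
    "How long are LangSmith traces stored?",
    langsmith_extra={"project_name": "my-quickstart-project"},
)
```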

