---
title: Pydantic Logfire Vercel AI SDK Integration
description: "Track LLM calls, token usage, tool invocations, and response times in AI applications built with the Vercel AI SDK using Logfire."
integration: logfire
---
# Vercel AI SDK

Logfire works well with AI applications built with the [Vercel AI SDK](https://ai-sdk.dev/). Track LLM calls, token usage, tool invocations, and response times across any supported model provider.

## Node.js Scripts

For standalone Node.js scripts, use the `@pydantic/logfire-node` package combined with the Vercel AI SDK.

### Installation

```bash
npm install @pydantic/logfire-node ai @ai-sdk/your-provider
```

Replace `@ai-sdk/your-provider` with the provider package you're using (e.g., `@ai-sdk/openai`, `@ai-sdk/anthropic`, `@ai-sdk/google`).

### Setup

**1. Create an instrumentation file**

Create an `instrumentation.ts` file that configures Logfire:

```typescript
import logfire from "@pydantic/logfire-node";

logfire.configure({
  token: "your-write-token",
  serviceName: "my-ai-app",
  serviceVersion: "1.0.0",
});
```

You can also use the `LOGFIRE_TOKEN` environment variable instead of passing the token directly.
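For example, assuming your entry point is a file like `main.ts` (a placeholder name) that imports the instrumentation file first, you can keep the token out of source control:

```shell
# Set the write token in the environment instead of hard-coding it;
# logfire.configure() picks it up when no `token` option is passed.
export LOGFIRE_TOKEN=your-write-token

# Then run your entry point as usual, e.g.:
# npx tsx main.ts
```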

**2. Import instrumentation first**

In your main script, import the instrumentation file before other imports:

```typescript
import "./instrumentation.ts";
import { generateText } from "ai";
import { yourProvider } from "@ai-sdk/your-provider";

// Your AI code here
```

## Next.js

For Next.js applications, use [Vercel's built-in OpenTelemetry support](https://nextjs.org/docs/app/guides/open-telemetry) with environment variables pointing to Logfire.

### Installation

```bash
npm install @vercel/otel @opentelemetry/api ai @ai-sdk/your-provider
```

### Setup

**1. Add environment variables**

Add these to your `.env.local` file (or your deployment environment):

```
OTEL_EXPORTER_OTLP_ENDPOINT=https://logfire-api.pydantic.dev
OTEL_EXPORTER_OTLP_HEADERS='Authorization=your-write-token'
```

**2. Create the instrumentation file**

Create `instrumentation.ts` in your project root (or `src` directory if using that structure):

```typescript
import { registerOTel } from "@vercel/otel";

export function register() {
  registerOTel({ serviceName: "my-nextjs-app" });
}
```

This file must be in the root directory, not inside `app` or `pages`. See the [Vercel instrumentation docs](https://vercel.com/docs/tracing/instrumentation) for more configuration options.

**3. Enable telemetry on AI SDK calls**

See the [Enabling Telemetry](#enabling-telemetry) section below.

## Enabling Telemetry

The Vercel AI SDK uses [OpenTelemetry for telemetry](https://ai-sdk.dev/docs/ai-sdk-core/telemetry). To capture traces, add the `experimental_telemetry` option to your AI SDK function calls:

```typescript
const result = await generateText({
  model: yourModel("model-name"),
  prompt: "Your prompt here",
  experimental_telemetry: { isEnabled: true },
});
```

This option works with all AI SDK core functions:

- `generateText` / `streamText`
- `generateObject` / `streamObject`
- `embed` / `embedMany`
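Because every core function accepts the same option shape, one convenient pattern is a small helper that merges default telemetry settings into each call's options. This is a sketch, not part of the SDK — `withTelemetry` and `TelemetrySettings` are hypothetical names:

```typescript
// Hypothetical helper: attach default telemetry settings to any options object.
type TelemetrySettings = {
  isEnabled: boolean;
  functionId?: string;
  metadata?: Record<string, string>;
};

function withTelemetry<T extends object>(
  options: T,
  overrides: Partial<TelemetrySettings> = {}
): T & { experimental_telemetry: TelemetrySettings } {
  return {
    ...options,
    experimental_telemetry: { isEnabled: true, ...overrides },
  };
}

// Usage with any core function, e.g.:
//   await generateText(withTelemetry({ model, prompt: "..." }));
const opts = withTelemetry({ prompt: "Hello" }, { functionId: "greeting" });
console.log(opts.experimental_telemetry);
```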

## Example: Text Generation with Tools

Here's a complete example showing text generation with a tool and telemetry enabled:

```typescript
import { generateText, tool } from "ai";
import { yourProvider } from "@ai-sdk/your-provider";
import { z } from "zod";

const result = await generateText({
  model: yourProvider("model-name"),
  experimental_telemetry: { isEnabled: true },
  tools: {
    weather: tool({
      description: "Get the weather in a location",
      inputSchema: z.object({
        location: z.string().describe("The location to get the weather for"),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  prompt: "What is the weather in San Francisco?",
});

console.log(result.text);
```

For Node.js scripts, remember to import your instrumentation file at the top of your entry point.

## What You'll See in Logfire

When telemetry is enabled, Logfire captures a hierarchical trace of your AI operations:

- **Parent span** for the AI operation (e.g., `ai.generateText`)
  - **Provider call spans** showing the actual LLM API calls
  - **Tool call spans** for each tool invocation

The captured data includes:

- Prompts and responses
- Model information and provider details
- Token usage (input and output tokens)
- Timing information
- Tool call arguments and results

## Advanced Options

The `experimental_telemetry` option accepts additional configuration:

```typescript
experimental_telemetry: {
  isEnabled: true,
  functionId: "weather-lookup",
  metadata: {
    userId: "user-123",
    environment: "production",
  },
}
```

- `functionId` - A custom identifier that appears in span names, useful for distinguishing different use cases
- `metadata` - Custom key-value pairs attached to the telemetry spans
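Per the AI SDK telemetry documentation, these surface on spans as the `ai.telemetry.functionId` attribute and `ai.telemetry.metadata.<key>` attributes, which you can then filter on in Logfire. The standalone sketch below (the `toSpanAttributes` helper is hypothetical, not an SDK export) shows the resulting attribute keys:

```typescript
// Sketch: how functionId and metadata map onto span attribute keys.
type TelemetrySettings = {
  isEnabled: boolean;
  functionId?: string;
  metadata?: Record<string, string>;
};

function toSpanAttributes(t: TelemetrySettings): Record<string, string> {
  const attrs: Record<string, string> = {};
  if (t.functionId) attrs["ai.telemetry.functionId"] = t.functionId;
  for (const [key, value] of Object.entries(t.metadata ?? {})) {
    attrs[`ai.telemetry.metadata.${key}`] = value;
  }
  return attrs;
}

console.log(
  toSpanAttributes({
    isEnabled: true,
    functionId: "weather-lookup",
    metadata: { userId: "user-123", environment: "production" },
  })
);
```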