| title | Build a SQL agent |
|---|---|
| sidebarTitle | SQL agent |
import ChatModelTabsPy from '/snippets/chat-model-tabs.mdx'; import ChatModelTabsJS from '/snippets/chat-model-tabs-js.mdx';
In this tutorial, you will learn how to build an agent that can answer questions about a SQL database using LangChain agents.

At a high level, the agent will:

1. Fetch the available tables from the database
2. Decide which tables are relevant to the question
3. Fetch the schemas for the relevant tables
4. Generate a query based on the question and the schema information
5. Double-check the query for common mistakes
6. Execute the query and return the results

Building Q&A systems over SQL databases requires executing model-generated SQL queries. There are inherent risks in doing this. Make sure that your database connection permissions are always scoped as narrowly as possible for your agent's needs. This will mitigate, though not eliminate, the risks of building a model-driven system.

We will cover the following concepts:
- Tools for reading from SQL databases
- LangChain agents
- Human-in-the-loop processes
:::python
<CodeGroup>
```bash pip
pip install langchain langgraph langchain-community
```
</CodeGroup>
:::
:::js
<CodeGroup>
```bash npm
npm i langchain @langchain/core typeorm sqlite3 zod
```
```bash yarn
yarn add langchain @langchain/core typeorm sqlite3 zod
```
```bash pnpm
pnpm add langchain @langchain/core typeorm sqlite3 zod
```
</CodeGroup>
:::
Set up LangSmith to inspect what is happening inside your chain or agent. Then set the following environment variables:
```shell
export LANGSMITH_TRACING="true"
export LANGSMITH_API_KEY="..."
```
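If you prefer to configure these from Python (for example in a notebook), you can set the same variables with the standard library. This is a sketch; the API key below is a placeholder, not a real credential:

```python
import os

# Mirror the shell exports above; replace the placeholder with your real key.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ.setdefault("LANGSMITH_API_KEY", "...")

print(os.environ["LANGSMITH_TRACING"])
```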
:::python
Select a model that supports tool-calling:
:::
:::js
Select a model that supports tool-calling:
:::
The outputs shown in the examples below were generated using an OpenAI model.
You will be creating a SQLite database for this tutorial. SQLite is a lightweight database that is easy to set up and use. We will load the Chinook database, a sample database that represents a digital media store.
For convenience, we have hosted the database (Chinook.db) on a public GCS bucket.
:::python
```python
import requests, pathlib

url = "https://storage.googleapis.com/benchmarks-artifacts/chinook/Chinook.db"
local_path = pathlib.Path("Chinook.db")

if local_path.exists():
    print(f"{local_path} already exists, skipping download.")
else:
    response = requests.get(url)
    if response.status_code == 200:
        local_path.write_bytes(response.content)
        print(f"File downloaded and saved as {local_path}")
    else:
        print(f"Failed to download the file. Status code: {response.status_code}")
```
:::
:::js
```typescript
import fs from "node:fs/promises";
import path from "node:path";

const url = "https://storage.googleapis.com/benchmarks-artifacts/chinook/Chinook.db";
const localPath = path.resolve("Chinook.db");

async function resolveDbPath() {
  // fs/promises has no `exists`; probe with `access` instead
  try {
    await fs.access(localPath);
    return localPath;
  } catch {
    // not downloaded yet; fall through
  }
  const resp = await fetch(url);
  if (!resp.ok) throw new Error(`Failed to download DB. Status code: ${resp.status}`);
  const buf = Buffer.from(await resp.arrayBuffer());
  await fs.writeFile(localPath, buf);
  return localPath;
}
```
:::
:::python
We will use a handy SQL database wrapper available in the langchain_community package to interact with the database. The wrapper provides a simple interface to execute SQL queries and fetch results:

```python
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")

print(f"Dialect: {db.dialect}")
print(f"Available tables: {db.get_usable_table_names()}")
print(f'Sample output: {db.run("SELECT * FROM Artist LIMIT 5;")}')
```

```text
Dialect: sqlite
Available tables: ['Album', 'Artist', 'Customer', 'Employee', 'Genre', 'Invoice', 'InvoiceLine', 'MediaType', 'Playlist', 'PlaylistTrack', 'Track']
Sample output: [(1, 'AC/DC'), (2, 'Accept'), (3, 'Aerosmith'), (4, 'Alanis Morissette'), (5, 'Alice In Chains')]
```
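The wrapper methods map onto plain SQLite queries. As a stdlib-only sketch, here is the same inspection on a tiny in-memory stand-in database (illustrative schema, not the real Chinook file):

```python
import sqlite3

# Tiny in-memory stand-in for Chinook (illustrative only)
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "Artist" ("ArtistId" INTEGER PRIMARY KEY, "Name" TEXT)')
conn.executemany('INSERT INTO "Artist" ("Name") VALUES (?)', [("AC/DC",), ("Accept",)])

# Roughly what get_usable_table_names() does: read sqlite_master
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
)]
print(tables)  # ['Artist']

# Roughly what db.run(...) does: execute and fetch
rows = conn.execute('SELECT * FROM "Artist" LIMIT 5;').fetchall()
print(rows)  # [(1, 'AC/DC'), (2, 'Accept')]
```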
:::
:::python
Use the SQLDatabaseToolkit from the langchain_community package to generate tools for interacting with the database. The toolkit bundles tools for listing tables, fetching schemas, checking queries, and executing them:

```python
from langchain_community.agent_toolkits import SQLDatabaseToolkit

toolkit = SQLDatabaseToolkit(db=db, llm=model)
tools = toolkit.get_tools()

for tool in tools:
    print(f"{tool.name}: {tool.description}\n")
```

```text
sql_db_query: Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column 'xxxx' in 'field list', use sql_db_schema to query the correct table fields.

sql_db_schema: Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling sql_db_list_tables first! Example Input: table1, table2, table3

sql_db_list_tables: Input is an empty string, output is a comma-separated list of tables in the database.

sql_db_query_checker: Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with sql_db_query!
```
:::
:::js
Use the SqlDatabase wrapper available in @langchain/classic/sql_db to interact with the database. The wrapper provides a simple interface to execute SQL queries and fetch results:

```typescript
import { SqlDatabase } from "@langchain/classic/sql_db";
import { DataSource } from "typeorm";

let db: SqlDatabase | undefined;

async function getDb() {
  if (!db) {
    const dbPath = await resolveDbPath();
    const datasource = new DataSource({ type: "sqlite", database: dbPath });
    db = await SqlDatabase.fromDataSourceParams({ appDataSource: datasource });
  }
  return db;
}

async function getSchema() {
  const db = await getDb();
  return await db.getTableInfo();
}
```
:::
:::python
Use @[create_agent] to build a ReAct agent with minimal code. The agent will interpret the request and generate a SQL command, which the tools will execute. If the command has an error, the error message is returned to the model. The model can then examine the original request and the new error message and generate a new command. This loop can continue until the model produces a successful command or hits a recursion limit. This pattern of providing a model with feedback - error messages in this case - is very powerful.
Initialize the agent with a descriptive system prompt to customize its behavior:
```python
system_prompt = """
You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer. Unless the user
specifies a specific number of examples they wish to obtain, always limit your
query to at most {top_k} results.

You can order the results by a relevant column to return the most interesting
examples in the database. Never query for all the columns from a specific table,
only ask for the relevant columns given the question.

You MUST double check your query before executing it. If you get an error while
executing a query, rewrite the query and try again.

DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the
database.

To start you should ALWAYS look at the tables in the database to see what you
can query. Do NOT skip this step.

Then you should query the schema of the most relevant tables.
""".format(
    dialect=db.dialect,
    top_k=5,
)
```

Now, create an agent with the model, tools, and prompt:
```python
from langchain.agents import create_agent

agent = create_agent(
    model,
    tools,
    system_prompt=system_prompt,
)
```

Run the agent on a sample query and observe its behavior:
```python
question = "Which genre on average has the longest tracks?"

for step in agent.stream(
    {"messages": [{"role": "user", "content": question}]},
    stream_mode="values",
):
    step["messages"][-1].pretty_print()
```

```text
================================ Human Message =================================

Which genre on average has the longest tracks?
================================== Ai Message ==================================
Tool Calls:
  sql_db_list_tables (call_BQsWg8P65apHc8BTJ1NPDvnM)
 Call ID: call_BQsWg8P65apHc8BTJ1NPDvnM
  Args:
================================= Tool Message =================================
Name: sql_db_list_tables

Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track
================================== Ai Message ==================================
Tool Calls:
  sql_db_schema (call_i89tjKECFSeERbuACYm4w0cU)
 Call ID: call_i89tjKECFSeERbuACYm4w0cU
  Args:
    table_names: Track, Genre
================================= Tool Message =================================
Name: sql_db_schema

CREATE TABLE "Genre" (
	"GenreId" INTEGER NOT NULL,
	"Name" NVARCHAR(120),
	PRIMARY KEY ("GenreId")
)

/*
3 rows from Genre table:
GenreId	Name
1	Rock
2	Jazz
3	Metal
*/

CREATE TABLE "Track" (
	"TrackId" INTEGER NOT NULL,
	"Name" NVARCHAR(200) NOT NULL,
	"AlbumId" INTEGER,
	"MediaTypeId" INTEGER NOT NULL,
	"GenreId" INTEGER,
	"Composer" NVARCHAR(220),
	"Milliseconds" INTEGER NOT NULL,
	"Bytes" INTEGER,
	"UnitPrice" NUMERIC(10, 2) NOT NULL,
	PRIMARY KEY ("TrackId"),
	FOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"),
	FOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"),
	FOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId")
)

/*
3 rows from Track table:
TrackId	Name	AlbumId	MediaTypeId	GenreId	Composer	Milliseconds	Bytes	UnitPrice
1	For Those About To Rock (We Salute You)	1	1	1	Angus Young, Malcolm Young, Brian Johnson	343719	11170334	0.99
2	Balls to the Wall	2	2	1	U. Dirkschneider, W. Hoffmann, H. Frank, P. Baltes, S. Kaufmann, G. Hoffmann	342562	5510424	0.99
3	Fast As a Shark	3	2	1	F. Baltes, S. Kaufman, U. Dirkscneider & W. Hoffman	230619	3990994	0.99
*/
================================== Ai Message ==================================
Tool Calls:
  sql_db_query_checker (call_G64yYm6R6UauiVPCXJZMA49b)
 Call ID: call_G64yYm6R6UauiVPCXJZMA49b
  Args:
    query: SELECT Genre.Name, AVG(Track.Milliseconds) AS AverageLength FROM Track INNER JOIN Genre ON Track.GenreId = Genre.GenreId GROUP BY Genre.Name ORDER BY AverageLength DESC LIMIT 5;
================================= Tool Message =================================
Name: sql_db_query_checker

SELECT Genre.Name, AVG(Track.Milliseconds) AS AverageLength FROM Track INNER JOIN Genre ON Track.GenreId = Genre.GenreId GROUP BY Genre.Name ORDER BY AverageLength DESC LIMIT 5;
================================== Ai Message ==================================
Tool Calls:
  sql_db_query (call_AnO3SrhD0ODJBxh6dHMwvHwZ)
 Call ID: call_AnO3SrhD0ODJBxh6dHMwvHwZ
  Args:
    query: SELECT Genre.Name, AVG(Track.Milliseconds) AS AverageLength FROM Track INNER JOIN Genre ON Track.GenreId = Genre.GenreId GROUP BY Genre.Name ORDER BY AverageLength DESC LIMIT 5;
================================= Tool Message =================================
Name: sql_db_query

[('Sci Fi & Fantasy', 2911783.0384615385), ('Science Fiction', 2625549.076923077), ('Drama', 2575283.78125), ('TV Shows', 2145041.0215053763), ('Comedy', 1585263.705882353)]
================================== Ai Message ==================================

On average, the genre with the longest tracks is "Sci Fi & Fantasy" with an average track length of approximately 2,911,783 milliseconds. This is followed by "Science Fiction," "Drama," "TV Shows," and "Comedy."
```
The agent correctly wrote a query, checked the query, and ran it to inform its final response.
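The query the agent produced follows a standard join-aggregate shape. You can verify that shape independently with Python's stdlib sqlite3 module on toy data (illustrative values, not the Chinook results above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Genre (GenreId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Track (TrackId INTEGER PRIMARY KEY, GenreId INTEGER, Milliseconds INTEGER);
INSERT INTO Genre (GenreId, Name) VALUES (1, 'Rock'), (2, 'Drama');
INSERT INTO Track (GenreId, Milliseconds) VALUES (1, 200000), (1, 300000), (2, 2500000);
""")

# Same shape as the agent's query: join, aggregate, order, limit
rows = conn.execute("""
    SELECT Genre.Name, AVG(Track.Milliseconds) AS AverageLength
    FROM Track INNER JOIN Genre ON Track.GenreId = Genre.GenreId
    GROUP BY Genre.Name
    ORDER BY AverageLength DESC
    LIMIT 5;
""").fetchall()
print(rows)  # [('Drama', 2500000.0), ('Rock', 250000.0)]
```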
You can inspect all aspects of the above run, including steps taken, tools invoked, and the prompts seen by the LLM, in the [LangSmith trace](https://smith.langchain.com/public/cd2ce887-388a-4bb1-a29d-48208ce50d15/r).

Studio provides a client-side loop as well as memory, so you can run the agent as a chat interface and query the database. You can ask questions like "Tell me the schema of the database" or "Show me the invoices for the 5 top customers". You will see the SQL command that is generated and the resulting output. The details of how to get that started are below.

In addition to the previously mentioned packages, you will need to:

```bash
pip install -U "langgraph-cli[inmem]>=0.4.0"
```

In the directory you will run from, you will need a langgraph.json file with the following contents:
```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./sql_agent.py:agent",
    "graph": "./sql_agent_langgraph.py:graph"
  },
  "env": ".env"
}
```

Create a file sql_agent.py and insert this:
```python
# sql_agent.py for Studio
import pathlib

import requests
from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase

# Initialize an LLM
model = init_chat_model("gpt-4.1")

# Get the database, store it locally
url = "https://storage.googleapis.com/benchmarks-artifacts/chinook/Chinook.db"
local_path = pathlib.Path("Chinook.db")

if local_path.exists():
    print(f"{local_path} already exists, skipping download.")
else:
    response = requests.get(url)
    if response.status_code == 200:
        local_path.write_bytes(response.content)
        print(f"File downloaded and saved as {local_path}")
    else:
        print(f"Failed to download the file. Status code: {response.status_code}")

db = SQLDatabase.from_uri("sqlite:///Chinook.db")

# Create the tools
toolkit = SQLDatabaseToolkit(db=db, llm=model)
tools = toolkit.get_tools()

for tool in tools:
    print(f"{tool.name}: {tool.description}\n")

# Use create_agent
system_prompt = """
You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer. Unless the user
specifies a specific number of examples they wish to obtain, always limit your
query to at most {top_k} results.

You can order the results by a relevant column to return the most interesting
examples in the database. Never query for all the columns from a specific table,
only ask for the relevant columns given the question.

You MUST double check your query before executing it. If you get an error while
executing a query, rewrite the query and try again.

DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the
database.

To start you should ALWAYS look at the tables in the database to see what you
can query. Do NOT skip this step.

Then you should query the schema of the most relevant tables.
""".format(
    dialect=db.dialect,
    top_k=5,
)

agent = create_agent(
    model,
    tools,
    system_prompt=system_prompt,
)
```
:::
:::python
It can be prudent to check the agent's SQL queries for unintended actions or inefficiencies before they are executed.

LangChain agents support built-in human-in-the-loop middleware to add oversight to agent tool calls. Let's configure the agent to pause for human review before calling the sql_db_query tool:
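Human review can also be paired with a programmatic guard. As a minimal sketch, a hypothetical `is_read_only` helper (not part of LangChain; shown for illustration only) rejects anything other than a single SELECT statement:

```python
import re

# Hypothetical helper - not part of langchain; illustration only
DENY_RE = re.compile(
    r"\b(INSERT|UPDATE|DELETE|ALTER|DROP|CREATE|REPLACE|TRUNCATE)\b",
    re.IGNORECASE,
)

def is_read_only(query: str) -> bool:
    """Return True only for single-statement SELECT queries."""
    q = query.strip().rstrip(";")
    if ";" in q:  # multiple statements
        return False
    if not q.lower().startswith("select"):
        return False
    return not DENY_RE.search(q)

print(is_read_only("SELECT Name FROM Artist LIMIT 5"))  # True
print(is_read_only("DROP TABLE Artist"))                # False
```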
```python
from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware # [!code highlight]
from langgraph.checkpoint.memory import InMemorySaver # [!code highlight]

agent = create_agent(
    model,
    tools,
    system_prompt=system_prompt,
    middleware=[ # [!code highlight]
        HumanInTheLoopMiddleware( # [!code highlight]
            interrupt_on={"sql_db_query": True}, # [!code highlight]
            description_prefix="Tool execution pending approval", # [!code highlight]
        ), # [!code highlight]
    ], # [!code highlight]
    checkpointer=InMemorySaver(), # [!code highlight]
)
```

On running the agent, it will now pause for review before executing the sql_db_query tool:
```python
question = "Which genre on average has the longest tracks?"
config = {"configurable": {"thread_id": "1"}} # [!code highlight]

for step in agent.stream(
    {"messages": [{"role": "user", "content": question}]},
    config, # [!code highlight]
    stream_mode="values",
):
    if "__interrupt__" in step: # [!code highlight]
        print("INTERRUPTED:") # [!code highlight]
        interrupt = step["__interrupt__"][0] # [!code highlight]
        for request in interrupt.value["action_requests"]: # [!code highlight]
            print(request["description"]) # [!code highlight]
    elif "messages" in step:
        step["messages"][-1].pretty_print()
```

```text
...
INTERRUPTED:
Tool execution pending approval

Tool: sql_db_query
Args: {'query': 'SELECT g.Name AS Genre, AVG(t.Milliseconds) AS AvgTrackLength FROM Track t JOIN Genre g ON t.GenreId = g.GenreId GROUP BY g.Name ORDER BY AvgTrackLength DESC LIMIT 1;'}
```
We can resume execution, in this case accepting the query, using Command:
```python
from langgraph.types import Command # [!code highlight]

for step in agent.stream(
    Command(resume={"decisions": [{"type": "approve"}]}), # [!code highlight]
    config,
    stream_mode="values",
):
    if "__interrupt__" in step:
        print("INTERRUPTED:")
        interrupt = step["__interrupt__"][0]
        for request in interrupt.value["action_requests"]:
            print(request["description"])
    elif "messages" in step:
        step["messages"][-1].pretty_print()
```

```text
================================== Ai Message ==================================
Tool Calls:
  sql_db_query (call_7oz86Epg7lYRqi9rQHbZPS1U)
 Call ID: call_7oz86Epg7lYRqi9rQHbZPS1U
  Args:
    query: SELECT Genre.Name, AVG(Track.Milliseconds) AS AvgDuration FROM Track JOIN Genre ON Track.GenreId = Genre.GenreId GROUP BY Genre.Name ORDER BY AvgDuration DESC LIMIT 5;
================================= Tool Message =================================
Name: sql_db_query

[('Sci Fi & Fantasy', 2911783.0384615385), ('Science Fiction', 2625549.076923077), ('Drama', 2575283.78125), ('TV Shows', 2145041.0215053763), ('Comedy', 1585263.705882353)]
================================== Ai Message ==================================

The genre with the longest average track length is "Sci Fi & Fantasy" with an average duration of about 2,911,783 milliseconds, followed by "Science Fiction" and "Drama."
```
Refer to the human-in-the-loop guide for details.
:::
:::js
Before running a command, validate the LLM-generated SQL with a sanitizeSqlQuery helper:
```typescript
const DENY_RE = /\b(INSERT|UPDATE|DELETE|ALTER|DROP|CREATE|REPLACE|TRUNCATE)\b/i;
const HAS_LIMIT_TAIL_RE = /\blimit\b\s+\d+(\s*,\s*\d+)?\s*;?\s*$/i;

function sanitizeSqlQuery(q: string): string {
  let query = String(q ?? "").trim();

  // block multiple statements (allow one optional trailing ;)
  const semis = [...query].filter((c) => c === ";").length;
  if (semis > 1 || (query.endsWith(";") && query.slice(0, -1).includes(";"))) {
    throw new Error("Multiple statements are not allowed.");
  }
  query = query.replace(/;+\s*$/g, "").trim();

  // read-only gate
  if (!query.toLowerCase().startsWith("select")) {
    throw new Error("Only SELECT statements are allowed.");
  }
  if (DENY_RE.test(query)) {
    throw new Error("DML/DDL detected. Only read-only queries are permitted.");
  }

  // append LIMIT only if not already present
  if (!HAS_LIMIT_TAIL_RE.test(query)) {
    query += " LIMIT 5";
  }
  return query;
}
```

Then, use run from SqlDatabase to execute commands with an execute_sql tool:
```typescript
import { tool } from "langchain";
import * as z from "zod";

const executeSql = tool(
  async ({ query }) => {
    const q = sanitizeSqlQuery(query);
    const database = await getDb(); // ensure the connection is initialized
    try {
      const result = await database.run(q);
      return typeof result === "string" ? result : JSON.stringify(result, null, 2);
    } catch (e) {
      throw new Error(e?.message ?? String(e));
    }
  },
  {
    name: "execute_sql",
    description: "Execute a READ-ONLY SQLite SELECT query and return results.",
    schema: z.object({
      query: z.string().describe("SQLite SELECT query to execute (read-only)."),
    }),
  }
);
```

Use createAgent to build a ReAct agent with minimal code. The agent will interpret the request and generate a SQL command. The tools will check the command for safety and then try to execute it. If the command has an error, the error message is returned to the model. The model can then examine the original request and the new error message and generate a new command. This loop can continue until the model produces a successful command or hits a recursion limit. This pattern of providing a model with feedback - error messages in this case - is very powerful.
Initialize the agent with a descriptive system prompt to customize its behavior:
```typescript
import { SystemMessage } from "langchain";

const getSystemPrompt = async () => new SystemMessage(`You are a careful SQLite analyst.

Authoritative schema (do not invent columns/tables):
${await getSchema()}

Rules:
- Think step-by-step.
- When you need data, call the tool \`execute_sql\` with ONE SELECT query.
- Read-only only; no INSERT/UPDATE/DELETE/ALTER/DROP/CREATE/REPLACE/TRUNCATE.
- Limit to 5 rows unless user explicitly asks otherwise.
- If the tool returns 'Error:', revise the SQL and try again.
- Limit the number of attempts to 5.
- If you are not successful after 5 attempts, return a note to the user.
- Prefer explicit column lists; avoid SELECT *.
`);
```

Now, create an agent with the model, tools, and prompt:
```typescript
import { createAgent } from "langchain";

const agent = createAgent({
  model: "gpt-5",
  tools: [executeSql],
  systemPrompt: getSystemPrompt,
});
```

Run the agent on a sample query and observe its behavior:
```typescript
const question = "Which genre, on average, has the longest tracks?";

const stream = await agent.stream(
  { messages: [{ role: "user", content: question }] },
  { streamMode: "values" }
);

for await (const step of stream) {
  const message = step.messages.at(-1);
  console.log(`${message.role}: ${JSON.stringify(message.content, null, 2)}`);
}
```

```text
human: Which genre, on average, has the longest tracks?
ai:
tool: [{"Genre":"Sci Fi & Fantasy","AvgMilliseconds":2911783.0384615385}]
ai: Sci Fi & Fantasy has the longest tracks on average, about 48.5 minutes (2,911,783 ms).
```
The agent correctly wrote a query, checked the query, and ran it to inform its final response.
You can inspect all aspects of the above run, including steps taken, tools invoked, and the prompts seen by the LLM, in the [LangSmith trace](https://smith.langchain.com/public/653d218b-af67-4854-95ca-6abecb9b2520/r).

Studio provides a client-side loop as well as memory, so you can run the agent as a chat interface and query the database. You can ask questions like "Tell me the schema of the database" or "Show me the invoices for the 5 top customers". You will see the SQL command that is generated and the resulting output. The details of how to get that started are below.

In addition to the previously mentioned packages, you will need to:

```bash
npm i -g @langchain/langgraph-cli@latest
```

In the directory you will run from, you will need a langgraph.json file with the following contents:
```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./sqlAgent.ts:agent",
    "graph": "./sqlAgentLanggraph.ts:graph"
  },
  "env": ".env"
}
```

Create a file sqlAgent.ts and insert this:

```typescript
// sqlAgent.ts for Studio
import fs from "node:fs/promises";
import path from "node:path";
import { SqlDatabase } from "@langchain/classic/sql_db";
import { DataSource } from "typeorm";
import { SystemMessage, createAgent, tool } from "langchain";
import * as z from "zod";

const url = "https://storage.googleapis.com/benchmarks-artifacts/chinook/Chinook.db";
const localPath = path.resolve("Chinook.db");

async function resolveDbPath() {
  // fs/promises has no `exists`; probe with `access` instead
  try {
    await fs.access(localPath);
    return localPath;
  } catch {
    // not downloaded yet; fall through
  }
  const resp = await fetch(url);
  if (!resp.ok) throw new Error(`Failed to download DB. Status code: ${resp.status}`);
  const buf = Buffer.from(await resp.arrayBuffer());
  await fs.writeFile(localPath, buf);
  return localPath;
}

let db: SqlDatabase | undefined;

async function getDb() {
  if (!db) {
    const dbPath = await resolveDbPath();
    const datasource = new DataSource({ type: "sqlite", database: dbPath });
    db = await SqlDatabase.fromDataSourceParams({ appDataSource: datasource });
  }
  return db;
}

async function getSchema() {
  const db = await getDb();
  return await db.getTableInfo();
}

const DENY_RE = /\b(INSERT|UPDATE|DELETE|ALTER|DROP|CREATE|REPLACE|TRUNCATE)\b/i;
const HAS_LIMIT_TAIL_RE = /\blimit\b\s+\d+(\s*,\s*\d+)?\s*;?\s*$/i;

function sanitizeSqlQuery(q: string): string {
  let query = String(q ?? "").trim();

  // block multiple statements (allow one optional trailing ;)
  const semis = [...query].filter((c) => c === ";").length;
  if (semis > 1 || (query.endsWith(";") && query.slice(0, -1).includes(";"))) {
    throw new Error("Multiple statements are not allowed.");
  }
  query = query.replace(/;+\s*$/g, "").trim();

  // read-only gate
  if (!query.toLowerCase().startsWith("select")) {
    throw new Error("Only SELECT statements are allowed.");
  }
  if (DENY_RE.test(query)) {
    throw new Error("DML/DDL detected. Only read-only queries are permitted.");
  }

  // append LIMIT only if not already present
  if (!HAS_LIMIT_TAIL_RE.test(query)) {
    query += " LIMIT 5";
  }
  return query;
}

const executeSql = tool(
  async ({ query }) => {
    const q = sanitizeSqlQuery(query);
    const database = await getDb(); // ensure the connection is initialized
    try {
      const result = await database.run(q);
      return typeof result === "string" ? result : JSON.stringify(result, null, 2);
    } catch (e) {
      throw new Error(e?.message ?? String(e));
    }
  },
  {
    name: "execute_sql",
    description: "Execute a READ-ONLY SQLite SELECT query and return results.",
    schema: z.object({
      query: z.string().describe("SQLite SELECT query to execute (read-only)."),
    }),
  }
);

const getSystemPrompt = async () => new SystemMessage(`You are a careful SQLite analyst.

Authoritative schema (do not invent columns/tables):
${await getSchema()}

Rules:
- Think step-by-step.
- When you need data, call the tool \`execute_sql\` with ONE SELECT query.
- Read-only only; no INSERT/UPDATE/DELETE/ALTER/DROP/CREATE/REPLACE/TRUNCATE.
- Limit to 5 rows unless user explicitly asks otherwise.
- If the tool returns 'Error:', revise the SQL and try again.
- Limit the number of attempts to 5.
- If you are not successful after 5 attempts, return a note to the user.
- Prefer explicit column lists; avoid SELECT *.
`);

export const agent = createAgent({
  model: "gpt-5",
  tools: [executeSql],
  systemPrompt: getSystemPrompt,
});
```

For deeper customization, check out this tutorial for implementing a SQL agent directly using LangGraph primitives.