Talk to your database in plain English. A .NET 8 console application powered by Azure OpenAI and Semantic Kernel that demonstrates how well-crafted prompt templates eliminate complexity, keeping code clean and self-explanatory.
Most NL-to-SQL solutions bury logic in complex code. This project takes a different approach: prompts do the heavy lifting; code just orchestrates. The result is maintainable, transparent, and easy to understand.
SQL Generation (Prompts/sql-generation.txt)
- Receives database schema context dynamically injected by the kernel
- Contains safety rules (read-only, injection prevention, timeout limits)
- Uses few-shot examples to guide query structure
- Handles complex joins, aggregations, and filters without hardcoded logic
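Few-shot guidance of this kind might look like the following sketch (the rules, example, and the `{{$schema}}` / `{{$question}}` variable names are illustrative, not the project's actual template contents):

```
You are a SQL generator for SQL Server.
Rules:
- Generate a single read-only SELECT statement only.
- Never use INSERT, UPDATE, DELETE, DROP, ALTER, or EXEC.
- Cap large results with TOP.

Schema:
{{$schema}}

Example:
Question: How many customers are in California?
SQL: SELECT COUNT(*) FROM Customers WHERE State = 'CA'

Question: {{$question}}
SQL:
```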
Result Formatting (Prompts/result-formatting.txt)
- Transforms tabular SQL results into conversational responses
- Preserves numerical accuracy while improving readability
- Adapts tone and detail based on query complexity
Schema Context (Prompts/schema-context.txt)
- Helps the AI understand table relationships and business context
- Enables intelligent table selection for complex queries
- Adds semantic meaning to technical column names
Your C# code stays minimal: load schema, inject context into kernel memory, call semantic functions, display results. Business logic lives in readable prompt templates that anyone can review and improve.
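A sketch of what that orchestration loop could look like, assuming the Semantic Kernel 1.x API (`Kernel.CreateBuilder`, `InvokePromptAsync`) — note the pre-1.0 package names listed below differ slightly — and hypothetical helpers (`SchemaPlugin.LoadSchemaAsync`, `DatabaseQueryPlugin.ExecuteQueryAsync`) plus made-up environment variable names:

```csharp
using Microsoft.SemanticKernel;

string endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!; // assumed name
string apiKey   = Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!;      // assumed name

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion("gpt-4", endpoint, apiKey)
    .Build();

// 1. Load the schema once and reuse it as context for every turn.
string schema = await SchemaPlugin.LoadSchemaAsync(connectionString); // hypothetical helper

while (Console.ReadLine() is string question)
{
    // 2. Natural language -> SQL via the prompt template.
    string sqlPrompt = File.ReadAllText("Prompts/sql-generation.txt");
    var sql = await kernel.InvokePromptAsync(sqlPrompt,
        new() { ["schema"] = schema, ["question"] = question });

    // 3. Validate and execute the read-only query.
    string rows = await DatabaseQueryPlugin.ExecuteQueryAsync(sql.ToString()); // hypothetical helper

    // 4. Tabular results -> conversational answer.
    string formatPrompt = File.ReadAllText("Prompts/result-formatting.txt");
    var answer = await kernel.InvokePromptAsync(formatPrompt,
        new() { ["results"] = rows, ["question"] = question });

    Console.WriteLine(answer);
}
```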
Prerequisites:
- .NET 8 SDK
- Azure OpenAI access (endpoint, API key, deployed GPT-4 or GPT-3.5-turbo)
- SQL Server database with read permissions
Required NuGet Packages:
using Microsoft.SemanticKernel;                            // Core Semantic Kernel SDK
using Microsoft.SemanticKernel.Connectors.AI.OpenAI;       // Azure OpenAI connector
using Microsoft.SemanticKernel.PromptTemplates;            // Prompt template configuration
using Microsoft.SemanticKernel.PromptTemplates.Handlebars; // Handlebars template support
using System.Data.SqlClient;                               // SQL Server connectivity

Install via the .NET CLI:
dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Connectors.AI.OpenAI
dotnet add package Microsoft.SemanticKernel.PromptTemplates.Handlebars
dotnet add package System.Data.SqlClient

Configuration:
Set environment variables or add to appsettings.json:
{
"AzureOpenAI": {
"Endpoint": "https://your-resource.openai.azure.com/",
"ApiKey": "your-api-key",
"DeploymentName": "gpt-4"
},
"Database": {
"ConnectionString": "Server=...;Database=...;User Id=...;Password=...;"
}
}

Run:
dotnet restore
dotnet build
dotnet run

First Query:
Schema loaded: 5 tables discovered
Ready to query. Ask anything in natural language.
> How many customers do we have in California?
You have 1,247 customers located in California.
[SQL: SELECT COUNT(*) FROM Customers WHERE State = 'CA']
- Schema Discovery: Connect to database, read metadata (tables, columns, relationships)
- Context Injection: Load schema into Semantic Kernel memory
- Prompt Orchestration: User question + schema context → semantic function → SQL query
- Safe Execution: Validate and run SQL (read-only enforcement)
- Natural Response: SQL results → semantic function → conversational answer
Key Insight: Steps 3 and 5 are entirely driven by prompt templates. No SQL parsing, no hardcoded query builders, no brittle heuristics.
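Step 1, schema discovery, can be little more than a query against `INFORMATION_SCHEMA`. A sketch using the `System.Data.SqlClient` API cited above, flattening the metadata into a prompt-friendly string (requires a live database connection to run):

```csharp
using System.Data.SqlClient;
using System.Text;

static string DiscoverSchema(string connectionString)
{
    var sb = new StringBuilder();
    using var conn = new SqlConnection(connectionString);
    conn.Open();
    using var cmd = new SqlCommand(
        @"SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
          FROM INFORMATION_SCHEMA.COLUMNS
          ORDER BY TABLE_NAME, ORDINAL_POSITION", conn);
    using var reader = cmd.ExecuteReader();
    string current = "";
    while (reader.Read())
    {
        string table = reader.GetString(0);
        if (table != current) { sb.AppendLine($"Table {table}:"); current = table; }
        sb.AppendLine($"  {reader.GetString(1)} ({reader.GetString(2)})");
    }
    return sb.ToString();
}
```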
azure-sk-ai-agent/
├── src/
│ ├── Program.cs # Entry point, orchestration (~100 lines)
│ ├── Plugins/
│ │ ├── DatabaseQueryPlugin.cs # Executes SQL, validates queries
│ │ └── SchemaPlugin.cs # Loads schema, searches tables
│ ├── Services/
│ │ ├── KernelService.cs # Initializes Semantic Kernel
│ │ └── DatabaseService.cs # Database connectivity
│ └── Prompts/ # ⭐ The intelligence lives here
│ ├── sql-generation.txt # NL → SQL conversion logic
│ ├── result-formatting.txt # SQL results → NL response
│ └── schema-context.txt # Schema understanding
├── appsettings.json
└── README.md
Notice: The Prompts/ folder contains the core business logic. C# code is intentionally minimal and self-documenting.
Semantic Kernel Patterns:
- Semantic Functions: AI-powered transformations (prompts)
- Native Functions: C# helpers (database operations, validation)
- Kernel Memory: Schema context persistence across conversation
- Plugin Architecture: Modular capabilities (database, schema, formatting)
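A native function might be exposed to the kernel like this sketch, assuming the Semantic Kernel 1.x attribute names (`[KernelFunction]`, `[Description]`); the body is a placeholder, not the project's actual implementation:

```csharp
using System.ComponentModel;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

public class DatabaseQueryPlugin
{
    [KernelFunction, Description("Executes a validated, read-only SQL query")]
    public async Task<string> ExecuteQueryAsync(
        [Description("A single SELECT statement")] string sql)
    {
        // Validation and ADO.NET execution elided in this sketch.
        await Task.CompletedTask;
        return "[rows elided in this sketch]";
    }
}
```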
Prompt Engineering Techniques:
- System context injection (dynamic schema embedding)
- Few-shot learning (example SQL patterns in prompts)
- Constraint specification (safety rules, format requirements)
- Chain-of-thought reasoning (complex query breakdown)
AI Safety:
- Read-only query enforcement
- SQL injection prevention via parameterization
- Result set size limits
- Query timeout protection
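Read-only enforcement can start as a simple guard before execution. A minimal sketch (the helper name `IsReadOnlyQuery` is illustrative; a keyword denylist alone is not sufficient and should be paired with a read-only database login):

```csharp
using System;
using System.Linq;
using System.Text.RegularExpressions;

static bool IsReadOnlyQuery(string sql)
{
    // Require a SELECT and reject state-modifying keywords anywhere
    // in the statement (catches multi-statement injection attempts).
    string s = sql.Trim();
    if (!s.StartsWith("SELECT", StringComparison.OrdinalIgnoreCase))
        return false;
    string[] forbidden = { "INSERT", "UPDATE", "DELETE", "DROP", "ALTER",
                           "TRUNCATE", "EXEC", "MERGE", "GRANT" };
    return !forbidden.Any(k => Regex.IsMatch(s, $@"\b{k}\b", RegexOptions.IgnoreCase));
}
```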
| Setting | Description | Example |
|---|---|---|
| AzureOpenAI:Endpoint | Azure OpenAI resource URL | https://your-resource.openai.azure.com/ |
| AzureOpenAI:ApiKey | Authentication key | abc123... |
| AzureOpenAI:DeploymentName | Model deployment | gpt-4 or gpt-35-turbo |
| Database:ConnectionString | SQL Server connection | Server=...;Database=...; |
Optional:
- Database:SchemaFile: JSON file with manual schema definitions
- AzureOpenAI:MaxTokens: Response token limit (default: 1000)
- Database:QueryTimeout: SQL execution timeout in seconds (default: 30)
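Reading these settings could use the standard .NET configuration stack (Microsoft.Extensions.Configuration.Json, an additional package not listed above); a sketch with the defaults applied, which needs an appsettings.json on disk to run:

```csharp
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddEnvironmentVariables() // environment variables override the file
    .Build();

string deployment = config["AzureOpenAI:DeploymentName"] ?? "gpt-4";
int maxTokens = int.TryParse(config["AzureOpenAI:MaxTokens"], out var m) ? m : 1000; // default 1000
int timeout   = int.TryParse(config["Database:QueryTimeout"], out var t) ? t : 30;   // default 30s
```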
> What's our total revenue this month?
> Show top 10 customers by order volume
> Compare sales between Q1 and Q2
> Which products haven't sold in 90 days?
> Find customers who bought X but not Y
The app maintains conversation context for natural follow-ups:
> Show me electronics sales
> Just the top 5
> How about last quarter?
Azure OpenAI Connection Failed
- Verify the endpoint includes https:// and a trailing /
- Ensure the deployment name exactly matches the Azure portal
- Check API key hasn't expired
Schema Discovery Issues
- Grant the VIEW DEFINITION permission to the database user
- Verify the connection string credentials
- Use manual schema file as fallback
Incorrect SQL Generated
- Add column descriptions in manual schema file for business context
- Try phrasing the question differently
- Use GPT-4 deployment for complex queries
Query Timeout
- Increase Database:QueryTimeout in the config
- Simplify the natural language question
- Check database indexes and performance
Traditional Approach: Complex C# code with SQL builders, pattern matching, entity mappings, and fragile heuristics. Hard to maintain, harder to explain.
This Approach: Clean orchestration code + powerful prompt templates. Logic is readable, changes are transparent, and anyone can understand the system behavior by reading the prompts.
The Value: When prompt engineering is done right, the code becomes self-explanatory. This project demonstrates that principle in action.
Built with Microsoft Semantic Kernel and Azure OpenAI Service