Here's something that might surprise you: if you've ever called a REST API, you already have the core skill needed to build AI-powered applications.
Think about it. When you call an API, you send a request and get a response. Building with AI is the same pattern:
```csharp
var response = await chatClient.GetResponseAsync("Summarize this customer feedback");
Console.WriteLine(response);
```

That's the entire concept. You send a prompt. You get a response. Your existing .NET skills handle everything else: dependency injection, configuration, error handling, async patterns.
No PhD required. No Python. No complex ML pipelines.
This course will prove it to you, not with slides and theory, but with real, runnable code that you'll build yourself.
By the end of this lesson, you will:
- Understand what Generative AI actually is, stripped of the hype and mystery
- Know why your .NET skills already prepare you for AI development
- See the key difference between traditional programming and AI-powered applications
- Get your development environment ready so you can start coding immediately
No prerequisites beyond basic .NET knowledge. We'll build everything step by step.
Generative AI is software that creates new content (text, images, code, audio) based on patterns it learned from existing data.
That's really all there is to it. When you use ChatGPT, GitHub Copilot, or DALL-E, you're using generative AI.
At the heart of most generative AI applications are Large Language Models (LLMs). These are neural networks trained on massive amounts of text data. During training, they learn patterns, relationships, and structures in language.
When you send a prompt to an LLM, it doesn't "understand" in the human sense. Instead, it predicts the most likely next tokens (words or word pieces) based on patterns it learned. The result feels intelligent because those patterns came from billions of examples of human writing.
Key concepts:

- **Tokens**: LLMs break text into small pieces called tokens. A token might be a word, part of a word, or punctuation. When you see "token limits," this refers to how much text the model can process at once.
- **Context Window**: The amount of text an LLM can "see" at one time. Larger context windows let you include more information in your prompts.
- **Temperature**: A setting that controls randomness. Lower temperature (0.0-0.3) gives more predictable, focused responses. Higher temperature (0.7-1.0) gives more creative, varied responses.
Learn more: Understanding tokens in Azure OpenAI explains token limits, pricing, and how to count tokens.
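Temperature and token limits show up directly in code. Here is a minimal sketch using `ChatOptions` from Microsoft.Extensions.AI; the Ollama client mirrors the one used later in this lesson, and exact property names may shift between preview versions of the library:

```csharp
using Microsoft.Extensions.AI;

// Any IChatClient works here; Ollama is shown because it runs locally for free.
IChatClient client = new OllamaChatClient(new Uri("http://localhost:11434"), "phi4-mini");

var options = new ChatOptions
{
    Temperature = 0.2f,     // low temperature: focused, predictable output
    MaxOutputTokens = 200   // cap the response length, measured in tokens (not words)
};

var response = await client.GetResponseAsync(
    "Summarize this release note in two sentences.", options);

Console.WriteLine(response);
```

Try the same prompt with `Temperature = 0.9f` and you'll see noticeably more varied wording from run to run.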
Let's think about how you write code today versus how you interact with an AI model:
| Traditional Programming | Generative AI |
|---|---|
| You write explicit rules | You describe what you want |
| Output is deterministic (same input = same output) | Output is probabilistic (same input ≈ similar outputs) |
| `if (score > 90) return "A";` | "Grade this essay and explain why" |
| You handle every edge case | The model generalizes from patterns |
Example:
- Old way: Write 500 lines of code to analyze sentiment in customer reviews using regex and keyword matching.
- New way: Send the review to an AI model with the prompt "Is this customer happy, neutral, or unhappy? Explain why."
The model isn't following rules you wrote. It learned patterns from millions of examples and applies them to your specific input.
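The "new way" above fits in a few lines of C#. This is a sketch against the `IChatClient` abstraction introduced later in this lesson; the helper method and prompt wording are illustrative, not part of any library:

```csharp
using Microsoft.Extensions.AI;

// "New way" sentiment analysis: describe the task instead of coding the rules.
// `client` can be any configured IChatClient (OpenAI, Azure OpenAI, Ollama, ...).
async Task<string> AnalyzeSentimentAsync(IChatClient client, string review)
{
    var prompt =
        $"""
        Is the customer in this review happy, neutral, or unhappy? Explain why in one sentence.

        Review: {review}
        """;

    var response = await client.GetResponseAsync(prompt);
    return response.Text;
}
```

Compare that with maintaining hundreds of lines of regex and keyword lists: the model handles misspellings, sarcasm, and phrasings you never anticipated.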
Here's what you don't need to worry about:
- Training models (that's for research labs with massive compute budgets)
- Understanding neural network math (helpful but not required)
- Learning Python (.NET works great)
Here's what you do need to know:
- How to call AI models (just like calling any API)
- How to write good prompts (we'll cover this)
- How to integrate AI into your applications (your existing .NET skills)
Since you're describing what you want rather than coding explicit rules, the way you write prompts matters enormously. A well-crafted prompt can be the difference between a useful response and a useless one.
Good prompts typically:
- Provide context: Tell the model what role it should play or what domain it's working in
- Be specific: Vague prompts get vague answers
- Include examples: Show the model what good output looks like
- Set constraints: Specify format, length, or style requirements
We'll practice prompt engineering throughout this course, starting in Lesson 02.
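To make the four guidelines concrete, here is one prompt that applies all of them. This is plain string building, nothing library-specific; the review text and format are invented for illustration:

```csharp
// One prompt applying all four guidelines:
// context (a role), specificity (a concrete task), an example, and constraints (a fixed format).
var review = "Shipping took three weeks and nobody answered my emails.";

var prompt =
    $"""
    You are a customer-support analyst for an online store.

    Classify the review below as happy, neutral, or unhappy, and give one short reason.

    Example:
    Review: "Arrived early and works perfectly."
    Answer: happy - fast delivery and the product works.

    Respond in exactly the format "label - reason", with a lowercase label.

    Review: {review}
    Answer:
    """;

// Send `prompt` to any IChatClient, e.g.: await client.GetResponseAsync(prompt);
```

The constraint line matters most in application code: a fixed output format is what lets you parse the model's answer reliably.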
There's a misconception that AI development requires Python or specialized ML expertise.
This is false.
You already have the core skills:
| Skill You Already Have | How It Applies to AI |
|---|---|
| Calling REST APIs | AI models are accessed via APIs |
| Dependency Injection | Swap AI providers without changing code |
| Async/await patterns | AI calls are async operations |
| Configuration management | API keys, endpoints, model selection |
| Error handling | AI calls can fail, need retry logic |
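The last row of that table deserves a sketch: AI calls are network calls, so they can time out or be rate limited. Below is a minimal hand-rolled retry, assuming the provider surfaces transient failures as `HttpRequestException` (the exact exception type varies by provider); production code would typically use a resilience library such as Polly instead:

```csharp
using Microsoft.Extensions.AI;

// Minimal retry with exponential backoff around an AI call.
async Task<string> GetResponseWithRetryAsync(
    IChatClient client, string prompt, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            var response = await client.GetResponseAsync(prompt);
            return response.Text;
        }
        catch (HttpRequestException) when (attempt < maxAttempts)
        {
            // Back off 1s, 2s, 4s... before retrying.
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
        }
    }
}
```

Notice this is the same defensive pattern you'd wrap around any flaky HTTP dependency — nothing AI-specific about it.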
Microsoft created a unified abstraction called IChatClient (part of Microsoft.Extensions.AI). It works just like the patterns you already know:
- Think `ILogger`, but for AI conversations
- Think `HttpClient`, but for model interactions
One interface, any AI provider:
```csharp
// Use OpenAI
IChatClient client = new OpenAIChatClient("gpt-5-mini", apiKey);

// Or use a local model with Ollama
IChatClient client = new OllamaChatClient(new Uri("http://localhost:11434"), "phi4-mini");

// Or use Azure OpenAI
IChatClient client = new AzureOpenAIChatClient(endpoint, credential, "gpt-5-mini");

// Your application code stays exactly the same!
var response = await client.GetResponseAsync("Hello, AI!");
```

Switch providers by changing one line. Your business logic never changes.
Now that you understand the concepts, let's look at the tools. We'll go from simple to advanced.
This is your foundation. MEAI provides:
- `IChatClient` for text conversations
- `IEmbeddingGenerator` for vector search scenarios
- Built-in support for caching, telemetry, and retries
Think of it as: The plumbing that connects your code to any AI model.
Source: Microsoft Extensions AI - Preview Announcement
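For a first taste of the second interface, here is a sketch of generating an embedding. `OllamaEmbeddingGenerator`, the `all-minilm` model name, and `GenerateEmbeddingAsync` follow the MEAI preview API and may differ in your package version:

```csharp
using Microsoft.Extensions.AI;

// IEmbeddingGenerator turns text into a vector of floats — the basis of
// semantic search, which this course covers in Lesson 03.
IEmbeddingGenerator<string, Embedding<float>> generator =
    new OllamaEmbeddingGenerator(new Uri("http://localhost:11434"), "all-minilm");

var embedding = await generator.GenerateEmbeddingAsync("The quick brown fox");

// Sentences with similar meanings produce vectors that point in similar directions.
Console.WriteLine($"Dimensions: {embedding.Vector.Length}");
```

You don't need this yet — just note that embeddings come through the same unified abstraction layer as chat.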
You have options:
| Option | Best For | Cost |
|---|---|---|
| Ollama (Local) | Privacy, offline work, learning | Free |
| Azure OpenAI / Microsoft Foundry | Production, enterprise, compliance | Pay-per-use |
All of these work with the same IChatClient interface!
Once you're comfortable with basic AI calls, you'll learn to build Agents: AI workers that can use tools, maintain state, and collaborate with other agents.
But that's for later lessons. First, let's get you coding.
We have removed the setup barriers. Choose the path that fits your workflow:
Best for: Full course experience with cloud-hosted AI models.
- Run `./setup.ps1` to automatically provision Azure resources, or use your own existing Azure OpenAI deployment.
- Models: `gpt-5-mini` (chat) and `text-embedding-3-small` (embeddings).
- Access the Azure OpenAI Setup Guide
Best for: Privacy, offline work, and free local capability.
- What you get: You run the "brain" on your own laptop.
- Models: Phi-4, Llama 3, etc.
- Access the Local Ollama Setup Guide
In this workshop, you will not just learn theory. You will build:
| What You'll Build | Description | Lesson |
|---|---|---|
| Chat Applications | Conversations with context and memory | Lesson 02: Generative AI Techniques |
| Semantic Search | Search that understands meaning, not just keywords | Lesson 03: AI Patterns and Applications |
| RAG Applications | Apps grounded in your own documents and data | Lesson 03: AI Patterns and Applications |
| Tool-Using Agents | Agents that call APIs and take actions | Lesson 04: AI Agents |
| Multi-Agent Systems | Autonomous agents that collaborate | Lesson 04: AI Agents |
Each lesson builds on the previous one, and everything is hands-on code you can run immediately.
Before you move on, let's reinforce the key concepts:
| Concept | Key Takeaway |
|---|---|
| Generative AI | Software that creates new content based on learned patterns |
| Deterministic vs. Probabilistic | Traditional code gives exact outputs; AI gives generated, variable outputs |
| Your .NET skills transfer | API calls, DI, async/await: you already know the patterns |
| `IChatClient` | One interface for any AI provider (OpenAI, Ollama, Azure) |
| No training required | You use pre-trained models; focus on integration |
Can you answer these questions?
- What's the main difference between traditional programming and generative AI?
- Why don't you need to train your own models?
- What is `IChatClient` and why is it useful?
If you can answer all three, you're ready for the next lesson!
You have two things to do:
- Set up your environment using one of the paths above (we recommend running `./setup.ps1` to get started quickly)
- Move to the next lesson where you'll write your first real AI application
Continue to Lesson 2: Generative AI Techniques →
Want to go deeper? Here are some excellent resources:
Core .NET AI Documentation:
- Microsoft.Extensions.AI Documentation: The unified AI abstraction layer for .NET
- Get started with AI in .NET: Official quickstart guide for .NET developers
- Azure OpenAI Service Documentation: Enterprise-grade OpenAI models on Azure
Building Agents:
- Microsoft Agent Framework: Build intelligent agents that reason and act
- What are agents?: Conceptual overview of AI agents in .NET
Running Models Locally:
- Ollama: Run open-source LLMs on your own machine
Want the Full Picture?
- Generative AI for Beginners: Our 21-lesson course covering GenAI concepts in depth (Python/TypeScript focus)