Commit 5c510e6

rename and update dev container

1 parent 2a9623b · commit 5c510e6

File tree

3 files changed: +8 −7 lines changed


.devcontainer/devcontainer.json

Lines changed: 2 additions & 1 deletion

```diff
@@ -1,9 +1,10 @@
 {
-    "name": "Azure OpenAI end-to-end baseline reference implementation",
+    "name": "AI Agent service chat baseline reference implementation",
     "image": "mcr.microsoft.com/devcontainers/dotnet:dev-8.0-jammy",
     "runArgs": ["--network=host"],
     "remoteUser": "vscode",
     "features": {
+        "ghcr.io/devcontainers/features/azure-cli:1": {}
     },
     "customizations": {
         "vscode": {
```
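For reference, a minimal sketch of what `.devcontainer/devcontainer.json` looks like once this hunk is applied (reconstructed from the diff above; the `customizations` block, truncated in the diff, is omitted here):

```json
{
    "name": "AI Agent service chat baseline reference implementation",
    "image": "mcr.microsoft.com/devcontainers/dotnet:dev-8.0-jammy",
    "runArgs": ["--network=host"],
    "remoteUser": "vscode",
    "features": {
        "ghcr.io/devcontainers/features/azure-cli:1": {}
    }
}
```

The added `features` entry pulls the Azure CLI into the container image at build time, so `az` is available without a manual install step.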

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,4 +1,4 @@
-# Contributing to the Azure OpenAI and AI Agent service chat baseline reference implementation
+# Contributing to the AI Agent service chat baseline reference implementation
 
 This project welcomes contributions and suggestions. Most contributions require you to agree to a
 Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
```

README.md

Lines changed: 5 additions & 5 deletions

```diff
@@ -1,8 +1,8 @@
-# Azure OpenAI and AI Agent service chat baseline reference implementation
+# AI Agent service chat baseline reference implementation
 
-This reference implementation illustrates an approach running a chat application and an AI orchestration layer in a single region. It uses Azure AI Agent service as the orchestrator and Azure OpenAI foundation models. This repository directly supports the [Baseline end-to-end chat reference architecture](https://learn.microsoft.com/azure/architecture/ai-ml/architecture/baseline-openai-e2e-chat) on Microsoft Learn.
+This reference implementation illustrates an approach for running a chat application and an AI orchestration layer in a single region. It uses Azure AI Agent service as the orchestrator and OpenAI foundation models. This repository directly supports the [Baseline end-to-end chat reference architecture](https://learn.microsoft.com/azure/architecture/ai-ml/architecture/baseline-openai-e2e-chat) on Microsoft Learn.
 
-Follow this implementation to deploy an agent in [Azure AI Foundry](https://learn.microsoft.com/azure/ai-studio/how-to/prompt-flow) and uses Bing for grounding data. You'll be exposed to common generative AI chat application characteristics such as:
+Follow this implementation to deploy an agent in [Azure AI Foundry](https://learn.microsoft.com/azure/ai-foundry/) that uses Bing for grounding data. You'll be exposed to common generative AI chat application characteristics such as:
 
 - Creating agents and agent prompts
 - Querying data stores for grounding data
@@ -70,7 +70,7 @@ Follow these instructions to deploy this example to your Azure subscription, try
 - App Service Plans: P1v3 (AZ), 3 instances
 - Azure AI Search (S - Standard): 1
 - Azure Cosmos DB: 1 account
-- Azure OpenAI: GPT-4o model deployment with 50k tokens per minute (TPM) capacity
+- OpenAI model: GPT-4o deployment with 50k tokens per minute (TPM) capacity
 - DDoS Protection Plans: 1
 - Public IPv4 Addresses - Standard: 4
 - Standard DSv3 Family vCPU: 2
@@ -292,7 +292,7 @@ For this deployment guide, you'll continue using your jump box to simulate part
 
 ### 5. Try it out! Test the deployed application that calls into the Azure AI Agent service
 
-This section will help you to validate that the workload is exposed correctly and responding to HTTP requests. This will validate that traffic is flowing through Application Gateway, into your Web App, and from your Web App, into the Azure Machine Learning managed online endpoint, which contains the hosted prompt flow. The hosted prompt flow will interface with Wikipedia for grounding data and Azure OpenAI for generative responses.
+This section helps you validate that the workload is exposed correctly and responding to HTTP requests. This confirms that traffic flows through Application Gateway into your Web App, and from your Web App into the Azure AI Foundry agent API endpoint, which hosts the agent and its chat history. The agent will interface with Bing for grounding data and an OpenAI model for generative responses.
 
 | :computer: | Unless otherwise noted, the following steps are all performed from your original workstation, not from the jump box. |
 | :--------: | :------------------------- |
```
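The 50k TPM quota mentioned in the README diff translates directly into a request budget for the deployment. A back-of-envelope sketch (the per-turn token count is an illustrative assumption, not a value from this repository):

```python
# Rough capacity math for a model deployment with a 50k tokens-per-minute quota.
TPM_QUOTA = 50_000            # tokens per minute granted to the GPT-4o deployment
TOKENS_PER_CHAT_TURN = 2_000  # assumed prompt + completion tokens per request

# Integer division gives the sustainable request rate under the quota.
max_requests_per_minute = TPM_QUOTA // TOKENS_PER_CHAT_TURN
print(max_requests_per_minute)  # → 25
```

If real chat turns consume more or fewer tokens than assumed here, the sustainable request rate scales inversely with the per-turn token count.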

0 commit comments