Merged

Changes from all commits

@@ -1,14 +1,13 @@
# Microsoft Entra ID (Azure AD) Configuration
AZURE_TENANT_ID=your_tenant_id_here
AZURE_CLIENT_ID=your_client_id_here
AZURE_CLIENT_SECRET=your_client_secret_here

# Azure AI Foundry Endpoint
AZURE_ENDPOINT=your_endpoint_here

# Azure AI Foundry Model Deployment Name
AZURE_DEPLOYMENT=your_deployment_name_here

# Project ID (Optional - will be generated if not provided)
PROJECT_ID=your_project_id_here
MODEL_DEPLOYMENT_NAME=gpt-4o

# Note: This sample uses DefaultAzureCredential for authentication
# Please ensure you are logged in with Azure CLI using 'az login'
# For more authentication options, see: https://learn.microsoft.com/en-us/java/api/overview/azure/identity-readme?view=azure-java-stable
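
The samples in this pull request read these values through a `ConfigLoader` helper (`ConfigLoader.getAzureEndpoint()`, `ConfigLoader.getAzureDeployment()`, `ConfigLoader.getDefaultCredential()`) that is not shown in this diff. The sketch below is one way such a helper could look; it assumes the `dotenv-java` library (`io.github.cdimascio:dotenv-java`) for reading the `.env` file, which is an assumption rather than something this diff confirms.

```java
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;
import io.github.cdimascio.dotenv.Dotenv;

/**
 * Hypothetical sketch of the ConfigLoader helper referenced by the samples.
 * Reads settings from a .env file (falling back to process environment variables)
 * and builds a DefaultAzureCredential, which picks up an Azure CLI login locally.
 */
public final class ConfigLoader {

    // ignoreIfMissing() lets the samples also run with plain environment variables.
    private static final Dotenv DOTENV = Dotenv.configure().ignoreIfMissing().load();

    private ConfigLoader() {
    }

    public static String getAzureEndpoint() {
        return require("AZURE_ENDPOINT");
    }

    public static String getAzureDeployment() {
        return require("AZURE_DEPLOYMENT");
    }

    public static DefaultAzureCredential getDefaultCredential() {
        // DefaultAzureCredential tries several mechanisms in turn (environment variables,
        // managed identity, Azure CLI, and others); for local development the 'az login'
        // session is usually the one that succeeds.
        return new DefaultAzureCredentialBuilder().build();
    }

    private static String require(String key) {
        String value = DOTENV.get(key);
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException("Missing required configuration value: " + key);
        }
        return value;
    }
}
```

However it is implemented, the helper only needs to hand back the endpoint, the deployment name, and a `DefaultAzureCredential`, which matches the note above about logging in with `az login`.
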
445 changes: 335 additions & 110 deletions samples/microsoft/java/mslearn-resources/quickstart/README.md

Large diffs are not rendered by default.

17 changes: 13 additions & 4 deletions samples/microsoft/java/mslearn-resources/quickstart/TESTING.md
@@ -2,17 +2,26 @@

This guide provides instructions on how to test the Java samples in this repository to ensure they work correctly with the Azure AI Foundry SDK.

## Authentication Setup

These samples use `DefaultAzureCredential` for authentication. Before testing, ensure you are logged in with the Azure CLI:

```bash
az login
```
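
Before running the samples, you can optionally confirm that `DefaultAzureCredential` is able to pick up your CLI login. The snippet below is not part of the samples; it is a minimal check that only assumes the `azure-identity` dependency the samples already import, and it uses the Azure Resource Manager scope purely as an example token request (the class name `AuthCheck` is hypothetical).

```java
import com.azure.core.credential.AccessToken;
import com.azure.core.credential.TokenRequestContext;
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;

/**
 * Quick authentication smoke test (not part of the samples): verifies that
 * DefaultAzureCredential can obtain a token, e.g. from your 'az login' session.
 */
public class AuthCheck {
    public static void main(String[] args) {
        DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();

        // Request a token for the Azure Resource Manager scope as a simple probe.
        TokenRequestContext context = new TokenRequestContext()
                .addScopes("https://management.azure.com/.default");

        AccessToken token = credential.getToken(context).block();
        if (token != null) {
            System.out.println("Authentication OK, token expires at: " + token.getExpiresAt());
        } else {
            System.out.println("No token returned - check your 'az login' session.");
        }
    }
}
```
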

## Automated Testing Scripts

For your convenience, this repository includes testing scripts that automate the execution of all the samples. You can run the appropriate script for your environment to test all samples in sequence.

### On Linux/macOS/WSL:

To use the testing script on Linux, macOS, or Windows Subsystem for Linux (WSL):

-1. Make sure you've set up your `.env` file with valid credentials
-2. Open a terminal in the Java samples directory
-3. Run the script:
+1. Make sure you've set up your `.env` file with the required configuration
+2. Ensure you are logged in with the Azure CLI (run `az login`)
+3. Open a terminal in the Java samples directory
+4. Run the script:

```bash
# Make the script executable
@@ -11,50 +11,66 @@
import com.azure.ai.projects.models.agent.AgentRun;
import com.azure.ai.projects.models.agent.AgentRunStatus;
import com.azure.ai.projects.models.agent.AgentThread;
-import com.azure.identity.ClientSecretCredential;
-import com.azure.identity.ClientSecretCredentialBuilder;

+import com.azure.identity.DefaultAzureCredential;

import java.util.List;

/**
* This sample demonstrates how to create and run an agent using the Azure AI Foundry SDK.
*
* Agents in Azure AI Foundry are specialized AI assistants that can be customized with
* specific instructions and capabilities to perform particular tasks. They maintain conversation
* history in threads and can be deployed for various use cases.
*
* This sample shows:
* 1. How to authenticate with Azure AI Foundry using DefaultAzureCredential
* 2. How to create an agent with specific instructions and capabilities
* 3. How to create a thread for conversation with the agent
* 4. How to send messages to the agent and run it
* 5. How to wait for the agent to complete its execution
* 6. How to retrieve and display the agent's response
*
* Prerequisites:
* - An Azure account with access to Azure AI Foundry
* - Azure CLI installed and logged in ('az login')
* - Environment variables set in .env file (AZURE_ENDPOINT, AZURE_DEPLOYMENT)
*/
public class AgentSample {
public static void main(String[] args) {
-// Load configuration from .env file
-String tenantId = ConfigLoader.getAzureTenantId();
-String clientId = ConfigLoader.getAzureClientId();
-String clientSecret = ConfigLoader.getAzureClientSecret();
+// Load configuration values from the .env file
+// These include the service endpoint and the deployment name of the model to use
String endpoint = ConfigLoader.getAzureEndpoint();
String deploymentName = ConfigLoader.getAzureDeployment();

-// Create a credential object using Microsoft Entra ID
-ClientSecretCredential credential = new ClientSecretCredentialBuilder()
-.tenantId(tenantId)
-.clientId(clientId)
-.clientSecret(clientSecret)
-.build();
+// Get DefaultAzureCredential for authentication
+// This uses the most appropriate authentication method based on the environment
+// For local development, it will use your Azure CLI login credentials
+DefaultAzureCredential credential = ConfigLoader.getDefaultCredential();

-// Create a projects client
+// Create a projects client to interact with Azure AI Foundry services
+// The client requires an authentication credential and an endpoint
ProjectsClient client = new ProjectsClientBuilder()
.credential(credential)
.endpoint(endpoint)
.buildClient();

-// Get an agent client
+// Get an agent client, which provides operations for working with AI agents
+// This includes creating, configuring, and running agents
AgentClient agentClient = client.getAgentClient();

-// Create an agent
+// Create a new agent with specialized capabilities and instructions
+// The agent is configured with a name, description, instructions, and underlying model
System.out.println("Creating agent...");
Agent agent = agentClient.createAgent(new AgentOptions()
-.setName("Research Assistant")
-.setDescription("An agent that helps with research tasks")
-.setInstructions("You are a research assistant. Help users find information and summarize content.")
-.setModel(deploymentName));
+.setName("Research Assistant") // Descriptive name for the agent
+.setDescription("An agent that helps with research tasks") // Brief description of the agent's purpose
+.setInstructions("You are a research assistant. Help users find information and summarize content.") // Detailed instructions for the agent's behavior
+.setModel(deploymentName)); // The underlying AI model to power the agent

System.out.println("Agent created: " + agent.getName() + " (ID: " + agent.getId() + ")");

-// Create a thread for the conversation
+// Create a thread for the conversation with the agent
+// Threads maintain conversation history and state across multiple interactions
System.out.println("Creating thread...");
AgentThread thread = agentClient.createThread();
System.out.println("Thread created: " + thread.getId());
@@ -8,54 +8,76 @@
import com.azure.ai.projects.models.chat.ChatCompletionOptions;
import com.azure.ai.projects.models.chat.ChatMessage;
import com.azure.ai.projects.models.chat.ChatRole;
-import com.azure.identity.ClientSecretCredential;
-import com.azure.identity.ClientSecretCredentialBuilder;
+import com.azure.identity.DefaultAzureCredential;
+import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.ai.projects.models.chat.ChatCompletionResponse;

import java.util.Arrays;
import java.util.List;

/**
* This sample demonstrates how to use the chat completion API with the Azure AI Foundry SDK.
*
* Chat completions allow you to have interactive, conversational interactions with AI models
* by providing a list of messages and receiving AI-generated responses that maintain context
* across the conversation.
*
* This sample shows:
* 1. How to authenticate with Azure AI Foundry using DefaultAzureCredential
* 2. How to create a chat client for a specific model deployment
* 3. How to structure a conversation with system and user messages
* 4. How to configure and send a chat completion request
* 5. How to process and display the AI-generated response
*
* Prerequisites:
* - An Azure account with access to Azure AI Foundry
* - Azure CLI installed and logged in ('az login')
* - Environment variables set in .env file (AZURE_ENDPOINT, AZURE_DEPLOYMENT)
*/
public class ChatCompletionSample {
public static void main(String[] args) {
-// Load configuration from .env file
-String tenantId = ConfigLoader.getAzureTenantId();
-String clientId = ConfigLoader.getAzureClientId();
-String clientSecret = ConfigLoader.getAzureClientSecret();
+// Load configuration values from the .env file
+// These include the service endpoint and the deployment name of the model to use
String endpoint = ConfigLoader.getAzureEndpoint();
String deploymentName = ConfigLoader.getAzureDeployment();

-// Create a credential object using Microsoft Entra ID
-ClientSecretCredential credential = new ClientSecretCredentialBuilder()
-.tenantId(tenantId)
-.clientId(clientId)
-.clientSecret(clientSecret)
-.build();
+// Get DefaultAzureCredential for authentication
+// This uses the most appropriate authentication method based on the environment
+// For local development, it will use your Azure CLI login credentials
+DefaultAzureCredential credential = ConfigLoader.getDefaultCredential();

-// Create a projects client
+// Create a projects client to interact with Azure AI Foundry services
+// The client requires an authentication credential and an endpoint
ProjectsClient client = new ProjectsClientBuilder()
.credential(credential)
.endpoint(endpoint)
.buildClient();

-// Get a chat client
+// Get a chat client for the specified model deployment
+// This client provides access to chat completion functionality
ChatClient chatClient = client.getChatClient(deploymentName);

-// Create chat messages
+// Create a list of chat messages to form the conversation
+// This includes a system message to set the assistant's behavior
+// and a user message containing the user's question or prompt
List<ChatMessage> messages = Arrays.asList(
new ChatMessage(ChatRole.SYSTEM, "You are a helpful assistant."),
new ChatMessage(ChatRole.USER, "Tell me about Azure AI Foundry.")
);

-// Set chat completion options
+// Configure chat completion options including the messages, temperature, and token limit
+// - Temperature controls randomness: lower values (like 0.2) give more focused responses,
+// higher values (like 0.8) give more creative responses
+// - MaxTokens limits the length of the response
ChatCompletionOptions options = new ChatCompletionOptions(messages)
-.setTemperature(0.7)
-.setMaxTokens(800);
+.setTemperature(0.7) // Balanced between deterministic and creative
+.setMaxTokens(800); // Limit response length

System.out.println("Sending chat completion request...");

-// Get chat completion
+// Send the request and get the AI-generated completion
ChatCompletion completion = chatClient.getChatCompletion(options);

// Display the response
@@ -8,54 +8,76 @@
import com.azure.ai.projects.models.chat.ChatCompletionStreamResponse;
import com.azure.ai.projects.models.chat.ChatMessage;
import com.azure.ai.projects.models.chat.ChatRole;
-import com.azure.identity.ClientSecretCredential;
-import com.azure.identity.ClientSecretCredentialBuilder;
+import com.azure.identity.DefaultAzureCredential;

import java.util.Arrays;
import java.util.List;

/**
* This sample demonstrates how to use streaming chat completion with the Azure AI Foundry SDK.
*
* Streaming chat completions deliver the AI's response in real-time as it's being generated,
* providing a more interactive experience by showing results incrementally instead of waiting
* for the entire response to be complete.
*
* This sample shows:
* 1. How to authenticate with Azure AI Foundry using DefaultAzureCredential
* 2. How to create a chat client for a specific model deployment
* 3. How to structure a conversation with system and user messages
* 4. How to configure and send a streaming chat completion request
* 5. How to process and display the AI-generated response as it streams in
*
* Streaming is particularly useful for:
* - Creating more responsive user interfaces
* - Supporting longer responses without timeout issues
* - Providing real-time feedback to users
*
* Prerequisites:
* - An Azure account with access to Azure AI Foundry
* - Azure CLI installed and logged in ('az login')
* - Environment variables set in .env file (AZURE_ENDPOINT, AZURE_DEPLOYMENT)
*/
public class ChatCompletionStreamingSample {
public static void main(String[] args) {
-// Load configuration from .env file
-String tenantId = ConfigLoader.getAzureTenantId();
-String clientId = ConfigLoader.getAzureClientId();
-String clientSecret = ConfigLoader.getAzureClientSecret();
+// Load configuration values from the .env file
+// These include the service endpoint and the deployment name of the model to use
String endpoint = ConfigLoader.getAzureEndpoint();
String deploymentName = ConfigLoader.getAzureDeployment();

-// Create a credential object using Microsoft Entra ID
-ClientSecretCredential credential = new ClientSecretCredentialBuilder()
-.tenantId(tenantId)
-.clientId(clientId)
-.clientSecret(clientSecret)
-.build();
+// Get DefaultAzureCredential for authentication
+// This uses the most appropriate authentication method based on the environment
+// For local development, it will use your Azure CLI login credentials
+DefaultAzureCredential credential = ConfigLoader.getDefaultCredential();

-// Create a projects client
+// Create a projects client to interact with Azure AI Foundry services
+// The client requires an authentication credential and an endpoint
ProjectsClient client = new ProjectsClientBuilder()
.credential(credential)
.endpoint(endpoint)
.buildClient();

-// Get a chat client
+// Get a chat client for the specified model deployment
+// This client provides access to both standard and streaming chat completion functionality
ChatClient chatClient = client.getChatClient(deploymentName);

-// Create chat messages
+// Create a list of chat messages to form the conversation
+// This includes a system message to set the assistant's behavior
+// and a user message containing the user's request
List<ChatMessage> messages = Arrays.asList(
new ChatMessage(ChatRole.SYSTEM, "You are a helpful assistant."),
new ChatMessage(ChatRole.USER, "Write a short poem about cloud computing.")
);

-// Set chat completion options
+// Configure chat completion options including the messages, temperature, and token limit
+// The same options structure is used for both streaming and non-streaming requests
ChatCompletionOptions options = new ChatCompletionOptions(messages)
-.setTemperature(0.7)
-.setMaxTokens(800);
+.setTemperature(0.7) // Balanced between deterministic and creative
+.setMaxTokens(800); // Limit response length

System.out.println("Sending streaming chat completion request...");

-// Get streaming chat completion
+// Send the streaming request and prepare to receive chunks of the response
+// Unlike standard completions, streaming returns portions of the response as they're generated
System.out.println("\nResponse from assistant (streaming):");
chatClient.getChatCompletionStream(options)
.forEach(response -> {