src/oss/python/integrations/middleware/azure_ai.mdx (new file, +350)
---
title: "Microsoft Foundry middleware integration"
description: "Integrate Azure AI Content Safety middleware with LangChain Python agents."
---

Middleware specifically designed for Microsoft Foundry and Azure AI Content Safety. Learn more about [middleware](/oss/langchain/middleware/overview).

These middleware classes live in the `langchain-azure-ai` package and are exported from `langchain_azure_ai.agents.middleware`.

<Info>
Azure AI Content Safety middleware is currently marked experimental upstream. Expect the API surface to evolve as Azure AI Content Safety and LangChain middleware support continue to mature.
</Info>

## Overview

| Middleware | Description |
|------------|-------------|
| [Text moderation](#text-moderation) | Screen input and output text for harmful content and blocklist matches |
| [Image moderation](#image-moderation) | Screen image inputs and outputs using Azure AI Content Safety image analysis |
| [Prompt shield](#prompt-shield) | Detect direct and indirect prompt injection attempts |
| [Protected material](#protected-material) | Detect copyrighted or otherwise protected text or code |
| [Groundedness](#groundedness) | Evaluate model outputs against grounding sources and flag hallucinations |

### Features

- Text moderation for harmful content and custom blocklists.
- Image moderation for data URLs and public HTTP(S) image inputs.
- Prompt injection detection with Prompt Shield.
- Protected material detection for text and code.
- Groundedness evaluation for generated answers against retrieved context.
- Custom `context_extractor` hooks to adapt screening and evaluation to your agent state.

## Setup

To use the Azure AI Content Safety middleware, install the integration package, configure either an Azure AI Foundry project endpoint or an Azure Content Safety endpoint, and provide a credential.

### Installation

Install the package:

<CodeGroup>
```bash pip
pip install -U langchain-azure-ai
```
```bash uv
uv add langchain-azure-ai
```
</CodeGroup>

### Credentials

For authentication, pass either `DefaultAzureCredential()` or an API-key string through the `credential` argument. A Foundry project endpoint requires Microsoft Entra ID (for example, `DefaultAzureCredential`) rather than an API key.

```python Initialize credential icon="shield-lock"
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
```

### Instantiation

The middleware supports two endpoint styles:

- An Azure Content Safety resource endpoint via `AZURE_CONTENT_SAFETY_ENDPOINT`
- An Azure AI Foundry project endpoint via `AZURE_AI_PROJECT_ENDPOINT`

If both are available, prefer `project_endpoint` because it gives better defaults for Azure AI Foundry-based workflows. In most setups, you can set the environment variable once and omit `endpoint` or `project_endpoint` from each middleware instantiation.

```python Configure endpoint icon="key"
import os

os.environ["AZURE_AI_PROJECT_ENDPOINT"] = "https://<resource>.services.ai.azure.com/api/projects/<project>"
```
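If it helps to reason about which endpoint wins, the selection can be pictured as the sketch below. This is only an illustration of the documented preference for `project_endpoint`; the exact precedence inside `langchain-azure-ai` (including whether explicit arguments beat environment variables) is an assumption here.

```python
import os


def resolve_endpoint(project_endpoint=None, endpoint=None):
    """Illustrative resolution: prefer an explicit Foundry project endpoint,
    then an explicit Content Safety endpoint, then the environment variables.

    This mirrors the documented preference for `project_endpoint`; the real
    resolution logic in the package may differ.
    """
    return (
        project_endpoint
        or endpoint
        or os.environ.get("AZURE_AI_PROJECT_ENDPOINT")
        or os.environ.get("AZURE_CONTENT_SAFETY_ENDPOINT")
    )
```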

Import and configure your middleware from `langchain_azure_ai.agents.middleware`.

```python Initialize middleware icon="arrows-shuffle"
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzureContentModerationMiddleware

middleware = AzureContentModerationMiddleware(
project_endpoint="https://<resource>.services.ai.azure.com/api/projects/<project>",
credential=DefaultAzureCredential(),
categories=["Hate", "Violence"],
exit_behavior="error",
)
```

## Use with an agent

Pass middleware to @[`create_agent`] in the order you want them to run. You can combine Azure AI middleware with [built-in middleware](/oss/langchain/middleware/built-in).

```python Agent with middleware icon="robot"
from azure.identity import DefaultAzureCredential
from langchain.agents import create_agent
from langchain_azure_ai.agents.middleware import AzureContentModerationMiddleware

agent = create_agent(
model="azure_ai:gpt-4.1",
middleware=[
AzureContentModerationMiddleware(
project_endpoint="https://<resource>.services.ai.azure.com/api/projects/<project>",
credential=DefaultAzureCredential(),
categories=["Hate", "Violence"],
exit_behavior="error",
)
],
)
```

<Tip>
If `AZURE_AI_PROJECT_ENDPOINT` is already set, you can usually omit `project_endpoint` during instantiation.
</Tip>

## Azure AI Content Safety

### Text moderation

Use `AzureContentModerationMiddleware` to screen the last `HumanMessage` before the agent runs and the last `AIMessage` after the agent runs. This middleware uses Azure AI Content Safety harm detection and can also check custom blocklists configured in your resource.

Text moderation is useful for the following:

- Blocking harmful user input before a model call
- Screening model output before it reaches end users
- Enforcing custom blocklists in regulated or enterprise deployments
- Composing multiple moderation passes with different category and direction settings

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzureContentModerationMiddleware

middleware = AzureContentModerationMiddleware(
project_endpoint="https://<resource>.services.ai.azure.com/api/projects/<project>",
credential=DefaultAzureCredential(),
categories=["Hate", "SelfHarm", "Sexual", "Violence"],
severity_threshold=4,
exit_behavior="error",
apply_to_input=True,
apply_to_output=True,
)
```

<Accordion title="Configuration options">

<ParamField body="categories" type="list[str] | None">
Harm categories to analyze. Valid values are `'Hate'`, `'SelfHarm'`, `'Sexual'`, and `'Violence'`. Defaults to all four categories.
</ParamField>

<ParamField body="severity_threshold" type="int" default="4">
Minimum severity score from `0` to `6` that triggers the configured behavior.
</ParamField>

<ParamField body="exit_behavior" type="string" default="error">
One of `'error'`, `'continue'`, or `'replace'`.
</ParamField>

<ParamField body="apply_to_input" type="bool" default="True">
Whether to screen the last `HumanMessage` before the agent runs.
</ParamField>

<ParamField body="apply_to_output" type="bool" default="True">
Whether to screen the last `AIMessage` after the agent runs.
</ParamField>

<ParamField body="blocklist_names" type="list[str] | None">
Names of custom blocklists configured in your Azure Content Safety resource.
</ParamField>

<ParamField body="context_extractor" type="Callable | None">
Optional callable that extracts the text to screen from agent state and runtime.
</ParamField>

</Accordion>
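Conceptually, the severity gate works like the sketch below: a message triggers the configured `exit_behavior` when any analyzed category scores at or above `severity_threshold`. This is an illustration of the documented thresholding, not the middleware's internal code.

```python
def is_flagged(category_severities: dict[str, int], severity_threshold: int = 4) -> bool:
    """Illustrative thresholding: each category receives a severity score
    from 0 to 6, and the content is flagged when any score meets or
    exceeds the configured threshold."""
    return any(
        severity >= severity_threshold
        for severity in category_severities.values()
    )
```

For example, a result of `{"Hate": 2, "Violence": 5}` is flagged at the default threshold of `4`, while `{"Sexual": 3}` is not.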

### Image moderation

Use `AzureContentModerationForImagesMiddleware` when your agent handles visual content. It extracts images from the latest input or output message and screens them with the Azure AI Content Safety image analysis API.

This middleware supports:

- Base64 data URLs such as `data:image/png;base64,...`
- Public HTTP(S) image URLs

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import (
AzureContentModerationForImagesMiddleware,
)

middleware = AzureContentModerationForImagesMiddleware(
endpoint="https://<resource>.cognitiveservices.azure.com/",
credential=DefaultAzureCredential(),
categories=["Hate", "SelfHarm", "Sexual", "Violence"],
severity_threshold=4,
exit_behavior="error",
apply_to_input=True,
apply_to_output=False,
)
```
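A base64 data URL in the form the middleware screens can be built from raw image bytes with a small helper. This is hypothetical convenience code for illustration, not part of the `langchain-azure-ai` package.

```python
import base64


def to_image_data_url(image_bytes: bytes, mime_type: str = "image/png") -> str:
    """Encode raw image bytes as a `data:` URL of the form the image
    moderation middleware accepts (`data:image/png;base64,...`)."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime_type};base64,{encoded}"
```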

<Accordion title="Configuration options">

<ParamField body="categories" type="list[str] | None">
Image harm categories to analyze. Defaults to all four supported categories.
</ParamField>

<ParamField body="severity_threshold" type="int" default="4">
Minimum severity score from `0` to `6` that triggers the configured behavior.
</ParamField>

<ParamField body="exit_behavior" type="string" default="error">
One of `'error'` or `'continue'`.
</ParamField>

<ParamField body="apply_to_input" type="bool" default="True">
Whether to screen images in the latest `HumanMessage`.
</ParamField>

<ParamField body="apply_to_output" type="bool" default="False">
Whether to screen images in the latest `AIMessage`.
</ParamField>

<ParamField body="context_extractor" type="Callable | None">
Optional callable that extracts images from agent state and runtime.
</ParamField>

</Accordion>

### Prompt shield

Use `AzurePromptShieldMiddleware` to detect prompt injection in user prompts and optional supporting documents. By default it screens input only, because prompt injection is usually an input-side attack, but you can also enable output screening.

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzurePromptShieldMiddleware

middleware = AzurePromptShieldMiddleware(
project_endpoint="https://<resource>.services.ai.azure.com/api/projects/<project>",
credential=DefaultAzureCredential(),
exit_behavior="continue",
apply_to_input=True,
apply_to_output=False,
)
```

<Accordion title="Configuration options">

<ParamField body="exit_behavior" type="string" default="error">
One of `'error'`, `'continue'`, or `'replace'`.
</ParamField>

<ParamField body="apply_to_input" type="bool" default="True">
Whether to screen the latest `HumanMessage` before the agent runs.
</ParamField>

<ParamField body="apply_to_output" type="bool" default="False">
Whether to screen the latest `AIMessage` after the agent runs.
</ParamField>

<ParamField body="context_extractor" type="Callable | None">
Optional callable that extracts the user prompt and grounding documents from agent state and runtime.
</ParamField>

</Accordion>
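A custom `context_extractor` controls exactly what Prompt Shield sees. The sketch below assumes the callable receives agent state and returns the user prompt plus supporting documents; the real signature in `langchain-azure-ai` may differ, and the plain-dict messages are stand-ins for LangChain message objects.

```python
def extract_prompt_and_documents(state, runtime=None):
    """Hypothetical extractor: screen the latest human turn as the prompt
    and collect tool outputs as documents for indirect-injection checks."""
    prompt = ""
    documents = []
    for message in state.get("messages", []):
        role = message.get("role")
        text = message.get("content", "")
        if role == "human" and text:
            prompt = text  # keep overwriting so the latest turn wins
        elif role == "tool" and text:
            documents.append(text)
    return prompt, documents
```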

### Protected material

Use `AzureProtectedMaterialMiddleware` to detect protected content such as copyrighted text or code. This middleware can screen both the latest user input and the latest model output.

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzureProtectedMaterialMiddleware

middleware = AzureProtectedMaterialMiddleware(
endpoint="https://<resource>.cognitiveservices.azure.com/",
credential=DefaultAzureCredential(),
type="code",
exit_behavior="replace",
apply_to_input=False,
apply_to_output=True,
violation_message="Protected material detected. Please provide a higher-level summary instead.",
)
```

<Accordion title="Configuration options">

<ParamField body="type" type="string" default="text">
The content type to screen: `'text'` or `'code'`.
</ParamField>

<ParamField body="exit_behavior" type="string" default="error">
One of `'error'`, `'continue'`, or `'replace'`.
</ParamField>

<ParamField body="apply_to_input" type="bool" default="True">
Whether to screen the latest `HumanMessage`.
</ParamField>

<ParamField body="apply_to_output" type="bool" default="True">
Whether to screen the latest `AIMessage`.
</ParamField>

<ParamField body="context_extractor" type="Callable | None">
Optional callable that extracts text from agent state and runtime.
</ParamField>

</Accordion>

### Groundedness

Use `AzureGroundednessMiddleware` to evaluate whether a model response is grounded in the context available to the agent. Unlike the other middleware classes on this page, groundedness runs after model generation and inspects the generated answer against supporting sources.

By default, groundedness collects sources from the current conversation, including system content, tool outputs, and relevant annotations attached to model responses.

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzureGroundednessMiddleware

middleware = AzureGroundednessMiddleware(
project_endpoint="https://<resource>.services.ai.azure.com/api/projects/<project>",
credential=DefaultAzureCredential(),
domain="Generic",
task="QnA",
exit_behavior="continue",
)
```

<Accordion title="Configuration options">

<ParamField body="domain" type="string" default="Generic">
The analysis domain. Supported values are `'Generic'` and `'Medical'`.
</ParamField>

<ParamField body="task" type="string" default="Summarization">
The task type for the analysis. Supported values are `'Summarization'` and `'QnA'`.
</ParamField>

<ParamField body="exit_behavior" type="string" default="error">
One of `'error'` or `'continue'`.
</ParamField>

<ParamField body="context_extractor" type="Callable | None">
Optional callable that extracts the answer, grounding sources, and optional question from agent state and runtime.
</ParamField>

</Accordion>
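The default source collection described above can be pictured as the sketch below, using plain-dict messages as stand-ins for LangChain message objects. The middleware's actual collection logic, including handling of annotations on model responses, is richer than this illustration.

```python
def collect_grounding_sources(messages):
    """Illustrative default: treat system content and tool outputs in the
    conversation as grounding sources for the generated answer."""
    return [
        message["content"]
        for message in messages
        if message.get("role") in ("system", "tool") and message.get("content")
    ]
```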

## API reference

For the full public API, see the middleware exports in [`langchain_azure_ai.agents.middleware`](https://github.com/langchain-ai/langchain-azure/tree/main/libs/azure-ai/langchain_azure_ai/agents/middleware) and the underlying Content Safety middleware package in [`langchain_azure_ai.agents.middleware.content_safety`](https://github.com/langchain-ai/langchain-azure/tree/main/libs/azure-ai/langchain_azure_ai/agents/middleware/content_safety).
src/oss/python/integrations/middleware/index.mdx (+1)

| Middleware | Description | Source |
|------------|-------------|--------|
| [Anthropic](/oss/integrations/middleware/anthropic) | Prompt caching, bash tool, text editor, memory, and file search | [`langchain-ai/langchain`](https://github.com/langchain-ai/langchain/tree/master/libs/partners/anthropic) |
| [AWS](/oss/integrations/middleware/aws) | Prompt caching | [`langchain-ai/langchain-aws`](https://github.com/langchain-ai/langchain-aws/tree/main/libs/aws) |
| [Microsoft Foundry](/oss/integrations/middleware/azure_ai) | Text moderation, image moderation, prompt shield, protected material, and groundedness | [`langchain-ai/langchain-azure`](https://github.com/langchain-ai/langchain-azure/tree/main/libs/azure-ai) |
| [OpenAI](/oss/integrations/middleware/openai) | Content moderation | [`langchain-ai/langchain`](https://github.com/langchain-ai/langchain/tree/master/libs/partners/openai) |

## Community integrations
Expand Down
src/oss/python/integrations/providers/microsoft.mdx (+24)

## Middleware

### Azure AI Content Safety middleware

>[Azure AI Content Safety](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview) provides guardrails you can apply to LangChain agents through middleware. The `langchain-azure-ai` package currently exports middleware for text moderation, image moderation, prompt injection detection, protected material detection, and groundedness evaluation.

Install the middleware package:

<CodeGroup>
```bash pip
pip install -U langchain-azure-ai
```

```bash uv
uv add langchain-azure-ai
```
</CodeGroup>

See the [Microsoft Foundry middleware guide](/oss/integrations/middleware/azure_ai).

```python
from langchain_azure_ai.agents.middleware import AzureContentModerationMiddleware
```

## Document loaders

### Azure AI data