Commit 470f8f3

Authored by santiagxf, sydney-runkle, and lnhsingh

Azure AI Content Safety middleware (#3542)

## Overview

Adding documentation for middleware in langchain-azure-ai

## Type of change

**Type:** New documentation page

## Checklist

- [x] I have read the [contributing guidelines](README.md), including the [language policy](https://docs.langchain.com/oss/python/contributing/overview#language-policy)
- [x] I have tested my changes locally using `docs dev`
- [x] All code examples have been tested and work correctly
- [x] I have used **root relative** paths for internal links
- [x] I have updated navigation in `src/docs.json` if needed

(Internal team members only / optional): Create a preview deployment as necessary using the [Create Preview Branch workflow](https://github.com/langchain-ai/docs/actions/workflows/create-preview-branch.yml)

## Additional notes

Public documentation at https://learn.microsoft.com/en-us/azure/foundry/how-to/develop/langchain-middleware

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
Co-authored-by: Lauren Hirata Singh <lauren@langchain.dev>

1 parent 420babb
File tree: 3 files changed (+375, −0)

Lines changed: 350 additions & 0 deletions
@@ -0,0 +1,350 @@
---
title: "Microsoft Foundry middleware integration"
description: "Integrate with the Azure AI middleware using LangChain Python."
---

Middleware specifically designed for Microsoft Foundry and Azure AI Content Safety. Learn more about [middleware](/oss/langchain/middleware/overview).

These middleware classes live in the `langchain-azure-ai` package and are exported from `langchain_azure_ai.agents.middleware`.

<Info>
Azure AI Content Safety middleware is currently marked experimental upstream. Expect the API surface to evolve as Azure AI Content Safety and LangChain middleware support continue to mature.
</Info>

## Overview

| Middleware | Description |
|------------|-------------|
| [Text moderation](#text-moderation) | Screen input and output text for harmful content and blocklist matches |
| [Image moderation](#image-moderation) | Screen image inputs and outputs using Azure AI Content Safety image analysis |
| [Prompt shield](#prompt-shield) | Detect direct and indirect prompt injection attempts |
| [Protected material](#protected-material) | Detect copyrighted or otherwise protected text or code |
| [Groundedness](#groundedness) | Evaluate model outputs against grounding sources and flag hallucinations |

### Features

- Text moderation for harmful content and custom blocklists.
- Image moderation for data URLs and public HTTP(S) image inputs.
- Prompt injection detection with Prompt Shield.
- Protected material detection for text and code.
- Groundedness evaluation for generated answers against retrieved context.
- Custom `context_extractor` hooks to adapt screening and evaluation to your agent state.

## Setup

To use the Azure AI Content Safety middleware, install the integration package, configure either an Azure AI Foundry project endpoint or an Azure Content Safety endpoint, and provide a credential.

### Installation

Install the package:

<CodeGroup>
```bash pip
pip install -U langchain-azure-ai
```
```bash uv
uv add langchain-azure-ai
```
</CodeGroup>

### Credentials

For authentication, pass either `DefaultAzureCredential()` or an API-key string through the `credential` argument. Using a Foundry project requires Microsoft Entra ID for authentication.

```python Initialize credential icon="shield-lock"
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
```

### Instantiation

The middleware supports two endpoint styles:

- An Azure Content Safety resource endpoint via `AZURE_CONTENT_SAFETY_ENDPOINT`
- An Azure AI Foundry project endpoint via `AZURE_AI_PROJECT_ENDPOINT`

If both are available, prefer `project_endpoint` because it gives better defaults for Azure AI Foundry-based workflows. In most setups, you can set the environment variable once and omit `endpoint` or `project_endpoint` from each middleware instantiation.

```python Configure endpoint icon="key"
import os

os.environ["AZURE_AI_PROJECT_ENDPOINT"] = "https://<resource>.services.ai.azure.com/api/projects/<project>"
```
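If you configure several middleware instances, the endpoint-preference rule above can be factored into a small helper. This helper is purely illustrative (it is not part of `langchain-azure-ai`); it returns keyword arguments you can splat into a middleware constructor:

```python
import os


def resolve_endpoint_kwargs() -> dict:
    """Prefer the Foundry project endpoint when both variables are set."""
    project = os.environ.get("AZURE_AI_PROJECT_ENDPOINT")
    if project:
        return {"project_endpoint": project}
    endpoint = os.environ.get("AZURE_CONTENT_SAFETY_ENDPOINT")
    if endpoint:
        return {"endpoint": endpoint}
    raise RuntimeError(
        "Set AZURE_AI_PROJECT_ENDPOINT or AZURE_CONTENT_SAFETY_ENDPOINT"
    )
```

You could then write `AzureContentModerationMiddleware(credential=credential, **resolve_endpoint_kwargs())` and switch endpoint styles through the environment alone.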

Import and configure your middleware from `langchain_azure_ai.agents.middleware`.

```python Initialize middleware icon="arrows-shuffle"
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzureContentModerationMiddleware

middleware = AzureContentModerationMiddleware(
    project_endpoint="https://<resource>.services.ai.azure.com/api/projects/<project>",
    credential=DefaultAzureCredential(),
    categories=["Hate", "Violence"],
    exit_behavior="error",
)
```

## Use with an agent

Pass middleware to @[`create_agent`] in the order they should run. You can combine Azure AI middleware with [built-in middleware](/oss/langchain/middleware/built-in).

```python Agent with middleware icon="robot"
from azure.identity import DefaultAzureCredential
from langchain.agents import create_agent
from langchain_azure_ai.agents.middleware import AzureContentModerationMiddleware

agent = create_agent(
    model="azure_ai:gpt-4.1",
    middleware=[
        AzureContentModerationMiddleware(
            project_endpoint="https://<resource>.services.ai.azure.com/api/projects/<project>",
            credential=DefaultAzureCredential(),
            categories=["Hate", "Violence"],
            exit_behavior="error",
        )
    ],
)
```

<Tip>
If `AZURE_AI_PROJECT_ENDPOINT` is already set, you can usually omit `project_endpoint` during instantiation.
</Tip>

## Azure AI Content Safety

### Text moderation

Use `AzureContentModerationMiddleware` to screen the last `HumanMessage` before the agent runs and the last `AIMessage` after the agent runs. This middleware uses Azure AI Content Safety harm detection and can also check custom blocklists configured in your resource.

Text moderation is useful for the following:

- Blocking harmful user input before a model call
- Screening model output before it reaches end users
- Enforcing custom blocklists in regulated or enterprise deployments
- Composing multiple moderation passes with different category and direction settings

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzureContentModerationMiddleware

middleware = AzureContentModerationMiddleware(
    project_endpoint="https://<resource>.services.ai.azure.com/api/projects/<project>",
    credential=DefaultAzureCredential(),
    categories=["Hate", "SelfHarm", "Sexual", "Violence"],
    severity_threshold=4,
    exit_behavior="error",
    apply_to_input=True,
    apply_to_output=True,
)
```

<Accordion title="Configuration options">

<ParamField body="categories" type="list[str] | None">
Harm categories to analyze. Valid values are `'Hate'`, `'SelfHarm'`, `'Sexual'`, and `'Violence'`. Defaults to all four categories.
</ParamField>

<ParamField body="severity_threshold" type="int" default="4">
Minimum severity score from `0` to `6` that triggers the configured behavior.
</ParamField>

<ParamField body="exit_behavior" type="string" default="error">
One of `'error'`, `'continue'`, or `'replace'`.
</ParamField>

<ParamField body="apply_to_input" type="bool" default="True">
Whether to screen the last `HumanMessage` before the agent runs.
</ParamField>

<ParamField body="apply_to_output" type="bool" default="True">
Whether to screen the last `AIMessage` after the agent runs.
</ParamField>

<ParamField body="blocklist_names" type="list[str] | None">
Names of custom blocklists configured in your Azure Content Safety resource.
</ParamField>

<ParamField body="context_extractor" type="Callable | None">
Optional callable that extracts the text to screen from agent state and runtime.
</ParamField>

</Accordion>
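A `context_extractor` lets you control exactly which text is screened rather than relying on the last-message default. A minimal sketch follows; the state shape and call signature shown here are assumptions, so check the package source for the exact contract:

```python
def extract_text_to_screen(state, runtime=None):
    """Return the text the middleware should send for analysis.

    Walks the message list from newest to oldest and screens the first
    non-empty string content it finds, regardless of the message's role.
    """
    for message in reversed(state.get("messages", [])):
        content = getattr(message, "content", None)
        if isinstance(content, str) and content:
            return content
    return ""
```

You would then pass `context_extractor=extract_text_to_screen` when constructing `AzureContentModerationMiddleware`.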

### Image moderation

Use `AzureContentModerationForImagesMiddleware` when your agent handles visual content. It extracts images from the latest input or output message and screens them with the Azure AI Content Safety image analysis API.

This middleware supports:

- Base64 data URLs such as `data:image/png;base64,...`
- Public HTTP(S) image URLs

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import (
    AzureContentModerationForImagesMiddleware,
)

middleware = AzureContentModerationForImagesMiddleware(
    endpoint="https://<resource>.cognitiveservices.azure.com/",
    credential=DefaultAzureCredential(),
    categories=["Hate", "SelfHarm", "Sexual", "Violence"],
    severity_threshold=4,
    exit_behavior="error",
    apply_to_input=True,
    apply_to_output=False,
)
```

<Accordion title="Configuration options">

<ParamField body="categories" type="list[str] | None">
Image harm categories to analyze. Defaults to all four supported categories.
</ParamField>

<ParamField body="severity_threshold" type="int" default="4">
Minimum severity score from `0` to `6` that triggers the configured behavior.
</ParamField>

<ParamField body="exit_behavior" type="string" default="error">
One of `'error'` or `'continue'`.
</ParamField>

<ParamField body="apply_to_input" type="bool" default="True">
Whether to screen images in the latest `HumanMessage`.
</ParamField>

<ParamField body="apply_to_output" type="bool" default="False">
Whether to screen images in the latest `AIMessage`.
</ParamField>

<ParamField body="context_extractor" type="Callable | None">
Optional callable that extracts images from agent state and runtime.
</ParamField>

</Accordion>
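Because the middleware accepts base64 data URLs, local image bytes can be normalized into that form before they are attached to a message. A small helper (illustrative only, not part of the package):

```python
import base64


def to_image_data_url(image_bytes: bytes, mime_type: str = "image/png") -> str:
    """Encode raw image bytes as a data URL the middleware can screen."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime_type};base64,{encoded}"
```

The resulting string can be placed in an image content block on a `HumanMessage`, where the middleware will pick it up during input screening.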

### Prompt shield

Use `AzurePromptShieldMiddleware` to detect prompt injection in user prompts and optional supporting documents. By default it screens input only, because prompt injection is usually an input-side attack, but you can also enable output screening.

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzurePromptShieldMiddleware

middleware = AzurePromptShieldMiddleware(
    project_endpoint="https://<resource>.services.ai.azure.com/api/projects/<project>",
    credential=DefaultAzureCredential(),
    exit_behavior="continue",
    apply_to_input=True,
    apply_to_output=False,
)
```

<Accordion title="Configuration options">

<ParamField body="exit_behavior" type="string" default="error">
One of `'error'`, `'continue'`, or `'replace'`.
</ParamField>

<ParamField body="apply_to_input" type="bool" default="True">
Whether to screen the latest `HumanMessage` before the agent runs.
</ParamField>

<ParamField body="apply_to_output" type="bool" default="False">
Whether to screen the latest `AIMessage` after the agent runs.
</ParamField>

<ParamField body="context_extractor" type="Callable | None">
Optional callable that extracts the user prompt and grounding documents from agent state and runtime.
</ParamField>

</Accordion>
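Prompt Shield distinguishes the user prompt (direct injection) from supporting documents (indirect injection), and a custom `context_extractor` can supply both. The sketch below assumes a `(prompt, documents)` return shape and a simple message-role layout; verify the exact contract against the package source:

```python
def extract_prompt_and_documents(state, runtime=None):
    """Split agent messages into a user prompt and supporting documents.

    The most recent human message becomes the prompt; tool outputs become
    the documents Prompt Shield checks for indirect injection attempts.
    """
    prompt = ""
    documents = []
    for message in state.get("messages", []):
        role = getattr(message, "type", "")
        text = getattr(message, "content", "")
        if not isinstance(text, str):
            continue
        if role == "human":
            prompt = text  # keep only the latest human turn
        elif role == "tool":
            documents.append(text)
    return prompt, documents
```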

### Protected material

Use `AzureProtectedMaterialMiddleware` to detect protected content such as copyrighted text or code. This middleware can screen both the latest user input and the latest model output.

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzureProtectedMaterialMiddleware

middleware = AzureProtectedMaterialMiddleware(
    endpoint="https://<resource>.cognitiveservices.azure.com/",
    credential=DefaultAzureCredential(),
    type="code",
    exit_behavior="replace",
    apply_to_input=False,
    apply_to_output=True,
    violation_message="Protected material detected. Please provide a higher-level summary instead.",
)
```

<Accordion title="Configuration options">

<ParamField body="type" type="string" default="text">
The content type to screen: `'text'` or `'code'`.
</ParamField>

<ParamField body="exit_behavior" type="string" default="error">
One of `'error'`, `'continue'`, or `'replace'`.
</ParamField>

<ParamField body="apply_to_input" type="bool" default="True">
Whether to screen the latest `HumanMessage`.
</ParamField>

<ParamField body="apply_to_output" type="bool" default="True">
Whether to screen the latest `AIMessage`.
</ParamField>

<ParamField body="context_extractor" type="Callable | None">
Optional callable that extracts text from agent state and runtime.
</ParamField>

</Accordion>
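The effect of `exit_behavior="replace"` is that the flagged message content is swapped for `violation_message` instead of raising an error. Conceptually, the behavior resembles the following simplified sketch (this is not the library's internal code, just an illustration of the semantics):

```python
def apply_replace_behavior(
    messages: list[str], flagged: bool, violation_message: str
) -> list[str]:
    """Return the message list with the last entry replaced when flagged."""
    if not flagged or not messages:
        return messages
    return messages[:-1] + [violation_message]
```

This is why `'replace'` pairs naturally with `apply_to_output=True`: end users see the substitute text while the rest of the conversation is preserved.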

### Groundedness

Use `AzureGroundednessMiddleware` to evaluate whether a model response is grounded in the context available to the agent. Unlike the other middleware classes on this page, groundedness runs after model generation and inspects the generated answer against supporting sources.

By default, groundedness collects sources from the current conversation, including system content, tool outputs, and relevant annotations attached to model responses.

```python
from azure.identity import DefaultAzureCredential
from langchain_azure_ai.agents.middleware import AzureGroundednessMiddleware

middleware = AzureGroundednessMiddleware(
    project_endpoint="https://<resource>.services.ai.azure.com/api/projects/<project>",
    credential=DefaultAzureCredential(),
    domain="Generic",
    task="QnA",
    exit_behavior="continue",
)
```

<Accordion title="Configuration options">

<ParamField body="domain" type="string" default="Generic">
The analysis domain. Supported values are `'Generic'` and `'Medical'`.
</ParamField>

<ParamField body="task" type="string" default="Summarization">
The task type for the analysis. Supported values are `'Summarization'` and `'QnA'`.
</ParamField>

<ParamField body="exit_behavior" type="string" default="error">
One of `'error'` or `'continue'`.
</ParamField>

<ParamField body="context_extractor" type="Callable | None">
Optional callable that extracts the answer, grounding sources, and optional question from agent state and runtime.
</ParamField>

</Accordion>
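If the default source collection gathers too much (or too little) context, a custom `context_extractor` can assemble the answer, sources, and question explicitly. The sketch below assumes an `(answer, sources, question)` return shape and a simple message-role layout; confirm the exact contract against the package source:

```python
def extract_groundedness_inputs(state, runtime=None):
    """Collect the generated answer, grounding sources, and the question.

    The latest AI message is the answer under evaluation, tool outputs are
    the grounding sources, and the latest human message is the question.
    """
    answer, question, sources = "", "", []
    for message in state.get("messages", []):
        role = getattr(message, "type", "")
        text = getattr(message, "content", "")
        if not isinstance(text, str):
            continue
        if role == "ai":
            answer = text
        elif role == "human":
            question = text
        elif role == "tool":
            sources.append(text)
    return answer, sources, question
```

Restricting sources to tool outputs like this is a deliberate choice for `task="QnA"` retrieval flows, where system prompts should not count as grounding evidence.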

## API reference

For the full public API, see the middleware exports in [`langchain_azure_ai.agents.middleware`](https://github.com/langchain-ai/langchain-azure/tree/main/libs/azure-ai/langchain_azure_ai/agents/middleware) and the underlying Content Safety middleware package in [`langchain_azure_ai.agents.middleware.content_safety`](https://github.com/langchain-ai/langchain-azure/tree/main/libs/azure-ai/langchain_azure_ai/agents/middleware/content_safety).

src/oss/python/integrations/middleware/index.mdx

Lines changed: 1 addition & 0 deletions
@@ -25,6 +25,7 @@ Middleware enables context engineering, harness customization, and runtime safet
 |------------|-------------|--------|
 | [Anthropic](/oss/integrations/middleware/anthropic) | Prompt caching, bash tool, text editor, memory, and file search | [`langchain-ai/langchain`](https://github.com/langchain-ai/langchain/tree/master/libs/partners/anthropic) |
 | [AWS](/oss/integrations/middleware/aws) | Prompt caching | [`langchain-ai/langchain-aws`](https://github.com/langchain-ai/langchain-aws/tree/main/libs/aws) |
+| [Microsoft Foundry](/oss/integrations/middleware/azure_ai) | Text moderation, image moderation, prompt shield, protected material, and groundedness | [`langchain-ai/langchain-azure`](https://github.com/langchain-ai/langchain-azure/tree/main/libs/azure-ai) |
 | [OpenAI](/oss/integrations/middleware/openai) | Content moderation | [`langchain-ai/langchain`](https://github.com/langchain-ai/langchain/tree/master/libs/partners/openai) |

 ## Community integrations

src/oss/python/integrations/providers/microsoft.mdx

Lines changed: 24 additions & 0 deletions
@@ -141,6 +141,30 @@ embed_model = AzureAIOpenAIApiEmbeddingsModel(
 )
 ```

+## Middleware
+
+### Azure AI Content Safety middleware
+
+>[Azure AI Content Safety](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview) provides guardrails you can apply to LangChain agents through middleware. The `langchain-azure-ai` package currently exports middleware for text moderation, image moderation, prompt injection detection, protected material detection, and groundedness evaluation.
+
+Install the middleware package:
+
+<CodeGroup>
+```bash pip
+pip install -U langchain-azure-ai
+```
+
+```bash uv
+uv add langchain-azure-ai
+```
+</CodeGroup>
+
+See the [Microsoft Foundry middleware guide](/oss/integrations/middleware/azure_ai).
+
+```python
+from langchain_azure_ai.agents.middleware import AzureContentModerationMiddleware
+```
+
 ## Document loaders

 ### Azure AI data
