Feature Request: Support for Personally Identifiable Information (PII) Input Filtering in Azure AI Foundry Content Filter #203
Replies: 1 comment
Hi @Messatsu92, Azure AI Foundry now supports input-side content filtering, including configurable filters that can be applied to user prompts before they reach the model. This addresses many of the compliance and privacy concerns you've raised. As described at https://learn.microsoft.com/azure/ai-foundry/openai/concepts/content-filter, the Content Filter system can be configured to scan both inputs and outputs. While PII filtering is currently emphasized on the output side, input filtering is supported for categories such as hate, violence, sexual content, and self-harm. PII-specific input filtering is not yet a built-in category, but it can be implemented using custom filters. You can create custom filters via the Foundry portal or API (https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/content-filter) and configure them to block, warn, or log prompts that match certain criteria. These filters can be integrated with Guardrails and Controls (https://learn.microsoft.com/azure/ai-foundry/openai/concepts/guardrails-controls) to enforce compliance policies across deployments. For organizations that need PII detection at the input stage, the guidance is to use a layered approach.
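One layer of such an approach can be a client-side pre-filter that screens prompts before they are ever sent to the deployment. The sketch below is illustrative only: the regex patterns and the `screen_prompt` helper are assumptions for this example, not part of any Azure AI Foundry API.

```python
import re

# Hypothetical client-side pre-filter: screen the user prompt for PII-like
# patterns BEFORE forwarding it to the model. Patterns here are a minimal
# illustration, not a complete PII taxonomy.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_categories) for an input prompt."""
    hits = [name for name, rx in PII_PATTERNS.items() if rx.search(prompt)]
    return (not hits, hits)

allowed, hits = screen_prompt("Contact me at jane.doe@example.com")
# allowed is False, hits is ["email"] — the prompt never reaches the model
```

In a layered setup, prompts that pass this screen would then still go through the service-side content filters configured on the deployment.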
While Azure AI Foundry doesn't yet offer a dedicated sample for input-side PII filtering, there are several resources and examples you can build upon, starting with the GitHub samples repository.
Although PII isn’t a built-in input filter category, you can extend these samples using regex or ML-based detectors.
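As a sketch of the "extend with regex detectors" idea: rather than rejecting a prompt outright, a custom layer can redact detected spans before forwarding the text. The patterns and the `redact` helper below are assumptions for illustration; an ML-based detector could be swapped in behind the same interface.

```python
import re

# Illustrative redaction layer: replace detected PII spans with placeholders
# so the model never sees the raw values. Patterns are examples only.
REDACTORS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(prompt: str) -> str:
    for rx, placeholder in REDACTORS:
        prompt = rx.sub(placeholder, prompt)
    return prompt

redact("SSN 123-45-6789, mail a@b.com")
# → 'SSN [SSN], mail [EMAIL]'
```

Redaction preserves the rest of the prompt, which is often preferable to a hard block when the PII is incidental to the user's question.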
Technical Feedback
Currently, the Azure AI Foundry Content Filter (https://learn.microsoft.com/en-us/azure/ai-foundry/openai/concepts/content-filter) service provides PII filtering as an output filter, detecting and blocking personally identifiable information (PII) in the completions generated by LLMs.
However, there is a significant need among customers to have this feature available as an Input Filter. In many enterprise GenAI use cases, Legal and Security teams require safeguards against scenarios where end users submit prompts containing PII. Without input-side filtering, there is a risk that sensitive information could be ingested or processed by the model, creating compliance and privacy issues.
Desired Outcome
We would like the Azure AI Foundry Content Filter to support PII detection and filtering at the input stage, not just the output. Specifically, the requested solution should:
Enable PII filtering on user prompts (inputs) before they are processed by the LLM
Provide configurable options for blocking, warning, or logging attempts to submit PII in prompts
Ensure that this input filtering can be integrated with existing compliance workflows
This feature would:
Help organizations remain compliant with privacy regulations
Support Legal and Security requirements for GenAI use cases
Reduce risk of processing or storing PII inadvertently
If this feature is already planned or being considered, please share any roadmap details or ETA.
Current Workaround
No workaround currently.