Enabling Reasoning Steps Feature in Compliance with AUP #171
Replies: 1 comment
Hey @Messatsu92 , as a non-legal explanation: you are absolutely allowed to expose the reasoning summaries produced by any reasoning model as a feature in any user interface. What this compliance rule aims to prevent is access to the internal thinking that is not exposed to the developer or user. Prompts like "think step by step", "explain your reasoning", or "show me your thought process" are unnecessary to send to a reasoning model because the CoT summaries are shared anyway. Those prompts are treated as attempts to gain access to the internal thought process, which is why they are flagged as a potential violation. In short: please do display the CoT summaries to users, but do not attempt to extract the internal thoughts that are not exposed through the response. If this helps you in any way, please mark it as the answer! |
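To make the distinction concrete, here is a minimal sketch of the compliant pattern: render only the sanitized summary items the API already exposes, rather than prompting the model for its hidden reasoning. The field names below follow the shape of Responses API payloads (`type: "reasoning"` items carrying `summary_text` parts); the example payload content is hypothetical, so verify the exact schema against the API version you are on.

```python
# Sketch: surface only the reasoning summaries the API exposes.
# Payload field names assume the Responses API output shape; the
# example content below is made up for illustration.

def extract_reasoning_summaries(output_items):
    """Collect user-displayable summary text from 'reasoning' output items.

    Only the sanitized summaries the service chooses to expose are
    returned; no attempt is made to recover internal chain of thought.
    """
    summaries = []
    for item in output_items:
        if item.get("type") == "reasoning":
            for part in item.get("summary", []):
                if part.get("type") == "summary_text":
                    summaries.append(part["text"])
    return summaries


# Hypothetical response payload, shaped like a Responses API result:
output = [
    {"type": "reasoning",
     "summary": [{"type": "summary_text",
                  "text": "Compared both options against the policy criteria."}]},
    {"type": "message",
     "content": [{"type": "output_text", "text": "Option A is compliant."}]},
]

print(extract_reasoning_summaries(output))
```

The point is that the UI consumes whatever summaries come back in the response; nothing in the prompt asks the model to reveal its internal process.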
Hello,
I’m reaching out regarding a recent support communication we received from Microsoft about Chain-of-Thought (CoT) distillation violations with Azure OpenAI models.
In our internal GPT implementation, the ability to display reasoning steps is a feature for our business use case. This functionality is essential for transparency, explainability, and trust in AI outputs, especially in regulated workflows where stakeholders need to understand why the model reached a certain conclusion.
I understand from the support team’s note that raw CoT extraction is prohibited under the Acceptable Use Policy and that certain prompt patterns (e.g., “think step by step,” “explain your reasoning,” “show me your thought process”) are flagged as violations. However, the reasoning steps we require are not intended to bypass safeguards or access raw model internals; rather, we need a sanitized, policy-compliant way to surface structured reasoning for user-facing explainability.
Could you help us identify a compliant way to surface these reasoning steps?
We’d be happy to provide concrete examples of how the reasoning feature is used in our workflows so you can assess whether a safe, compliant alternative is possible.
Thank you for your guidance; we want to ensure our implementation aligns fully with Microsoft’s policies while still delivering the critical transparency features our users expect.
Best regards.
Desired Outcome
Being able to display CoT reasoning steps.
Current Workaround
Disable the display of CoT reasoning steps.