Commit 1da9f22 (parent 6cf6c04): update classifier docs

2 files changed: +356 −142 lines

docs/src/content/docs/classifiers/built-in/anthropic-classifier.mdx

Lines changed: 193 additions & 74 deletions
- Supports custom system prompts and variables
- Handles conversation history for context-aware classification

### Default Model

The classifier uses Claude 3.5 Sonnet as its default model:

```typescript
ANTHROPIC_MODEL_ID_CLAUDE_3_5_SONNET = "claude-3-5-sonnet-20240620"
```
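For illustration, the fallback to this default can be sketched in Python (the constant mirrors the TypeScript one above; the helper name is an assumption, not the library's actual API):

```python
# Default model identifier, from the constant shown above.
ANTHROPIC_MODEL_ID_CLAUDE_3_5_SONNET = "claude-3-5-sonnet-20240620"

def resolve_model_id(model_id=None):
    """Fall back to the default Claude 3.5 Sonnet model when none is given."""
    return model_id or ANTHROPIC_MODEL_ID_CLAUDE_3_5_SONNET
```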

### Python Package

```bash
pip install "multi-agent-orchestrator[anthropic]"
```

### Basic Usage

To use the AnthropicClassifier, you need to create an instance with your Anthropic API key and pass it to the Multi-Agent Orchestrator:

import { Tabs, TabItem } from '@astrojs/starlight/components';
</TabItem>
</Tabs>

## System Prompt and Variables

### Full Default System Prompt

The default system prompt used by the classifier is comprehensive and includes examples of both simple and complex interactions:
```
You are AgentMatcher, an intelligent assistant designed to analyze user queries and match them with
the most suitable agent or department. Your task is to understand the user's request,
identify key entities and intents, and determine which agent or department would be best equipped
to handle the query.

Important: The user's input may be a follow-up response to a previous interaction.
The conversation history, including the name of the previously selected agent, is provided.
If the user's input appears to be a continuation of the previous conversation
(e.g., "yes", "ok", "I want to know more", "1"), select the same agent as before.

Analyze the user's input and categorize it into one of the following agent types:
<agents>
{{AGENT_DESCRIPTIONS}}
</agents>
If you are unable to select an agent put "unknown"

Guidelines for classification:

Agent Type: Choose the most appropriate agent type based on the nature of the query.
For follow-up responses, use the same agent type as the previous interaction.
Priority: Assign based on urgency and impact.
High: Issues affecting service, billing problems, or urgent technical issues
Medium: Non-urgent product inquiries, sales questions
Low: General information requests, feedback
Key Entities: Extract important nouns, product names, or specific issues mentioned.
For follow-up responses, include relevant entities from the previous interaction if applicable.
For follow-ups, relate the intent to the ongoing conversation.
Confidence: Indicate how confident you are in the classification.
High: Clear, straightforward requests or clear follow-ups
Medium: Requests with some ambiguity but likely classification
Low: Vague or multi-faceted requests that could fit multiple categories
Is Followup: Indicate whether the input is a follow-up to a previous interaction.

Handle variations in user input, including different phrasings, synonyms,
and potential spelling errors.
For short responses like "yes", "ok", "I want to know more", or numerical answers,
treat them as follow-ups and maintain the previous agent selection.

Here is the conversation history that you need to take into account before answering:
<history>
{{HISTORY}}
</history>

Skip any preamble and provide only the response in the specified format.
```

### Variable Replacements

#### AGENT_DESCRIPTIONS Example
```
tech-support-agent:Specializes in resolving technical issues, software problems, and system configurations
billing-agent:Handles all billing-related queries, payment processing, and subscription management
customer-service-agent:Manages general inquiries, account questions, and product information requests
sales-agent:Assists with product recommendations, pricing inquiries, and purchase decisions
```
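To illustrate how these template variables get filled, here is a simplified Python sketch of placeholder substitution (a hypothetical helper for illustration, not the orchestrator's actual implementation):

```python
def fill_template(template: str, variables: dict) -> str:
    """Replace each {{NAME}} placeholder; list values are joined with newlines."""
    for name, value in variables.items():
        if isinstance(value, list):
            value = "\n".join(value)
        template = template.replace("{{" + name + "}}", str(value))
    return template

template = "<agents>\n{{AGENT_DESCRIPTIONS}}\n</agents>\n<history>\n{{HISTORY}}\n</history>"
prompt = fill_template(template, {
    "AGENT_DESCRIPTIONS": [
        "tech-support-agent:Specializes in resolving technical issues",
        "billing-agent:Handles all billing-related queries",
    ],
    "HISTORY": "user: I need help with my subscription",
})
```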

### Extended HISTORY Examples

The conversation history is formatted to include agent names in the responses, allowing the classifier to track which agent handled each interaction. Each assistant response is prefixed with `[agent-name]` in the history, making it clear who provided each response:

```
user: I need help with my subscription
assistant: [billing-agent] I can help you with your subscription. What specific information do you need?
user: The premium features aren't working
assistant: [tech-support-agent] I'll help you troubleshoot the premium features. Could you tell me which specific features aren't working?
user: The cloud storage says I only have 5GB but I'm supposed to have 100GB
assistant: [tech-support-agent] Let's verify your subscription status and refresh your storage allocation. When did you last see the correct storage amount?
user: How much am I paying for this subscription?
assistant: [billing-agent] I'll check your subscription details. Your current plan is $29.99/month for the Premium tier with 100GB storage. Would you like me to review your billing history?
user: Yes please
```

Here, the history shows the conversation moving between `billing-agent` and `tech-support-agent` as the topic shifts between billing and technical issues.

The agent prefixing (e.g., `[agent-name]`) is automatically handled by the Multi-Agent Orchestrator when formatting the conversation history. This helps the classifier understand:
- Which agent handled each part of the conversation
- The context of previous interactions
- When agent transitions occurred
- How to maintain continuity for follow-up responses
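This prefixing behavior can be sketched in a few lines of Python (a simplified stand-in; the orchestrator's real formatting code may differ):

```python
def format_history(turns):
    """Render (role, agent, text) turns; assistant turns get an [agent-name] prefix."""
    lines = []
    for role, agent, text in turns:
        prefix = f"[{agent}] " if role == "assistant" and agent else ""
        lines.append(f"{role}: {prefix}{text}")
    return "\n".join(lines)

history = format_history([
    ("user", None, "I need help with my subscription"),
    ("assistant", "billing-agent", "I can help you with your subscription."),
])
```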

## Tool-Based Response Structure

The AnthropicClassifier uses a tool specification to enforce structured output from the model. This is a design pattern that ensures consistent and properly formatted responses.

### The Tool Specification
```json
{
  "name": "analyzePrompt",
  "description": "Analyze the user input and provide structured output",
  "input_schema": {
    "type": "object",
    "properties": {
      "userinput": {"type": "string"},
      "selected_agent": {"type": "string"},
      "confidence": {"type": "number"}
    },
    "required": ["userinput", "selected_agent", "confidence"]
  }
}
```

### Why Use Tools?

1. **Structured Output**: Instead of free-form text, the model must provide exactly the data structure we need.
2. **Guaranteed Format**: The tool schema ensures we always get:
   - A valid agent identifier
   - A properly formatted confidence score
   - All required fields
3. **Implementation Note**: The tool isn't actually executed; it's a pattern that forces the model to structure its response in a specific way that maps directly to our `ClassifierResult` type.

Example Response:
```json
{
  "userinput": "I need to reset my password",
  "selected_agent": "tech-support-agent",
  "confidence": 0.95
}
```
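A sketch of how such a tool response could be validated against the required fields and mapped to a result object (the `ClassifierResult` stand-in below is a simplified assumption, not the library's actual type):

```python
from dataclasses import dataclass

@dataclass
class ClassifierResult:
    """Simplified stand-in for the orchestrator's result type."""
    selected_agent: str
    confidence: float

REQUIRED = ("userinput", "selected_agent", "confidence")

def parse_tool_response(payload: dict) -> ClassifierResult:
    """Check the fields required by the analyzePrompt schema, then build a result."""
    missing = [k for k in REQUIRED if k not in payload]
    if missing:
        raise ValueError(f"tool response missing fields: {missing}")
    return ClassifierResult(payload["selected_agent"], float(payload["confidence"]))

result = parse_tool_response({
    "userinput": "I need to reset my password",
    "selected_agent": "tech-support-agent",
    "confidence": 0.95,
})
```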

### Customizing the System Prompt

You can override the default system prompt while maintaining the required agent descriptions and history variables. Here's how to do it:

<Tabs syncKey="runtime">
<TabItem label="TypeScript" icon="seti:typescript" color="blue">
```typescript
orchestrator.classifier.setSystemPrompt(
  `You are a specialized routing expert with deep knowledge of {{INDUSTRY}} operations.

Your available agents are:
<agents>
{{AGENT_DESCRIPTIONS}}
</agents>

Consider these key factors for {{INDUSTRY}} when routing:
{{INDUSTRY_RULES}}

Recent conversation context:
<history>
{{HISTORY}}
</history>

Route based on industry best practices and conversation history.`,
  {
    INDUSTRY: "healthcare",
    INDUSTRY_RULES: [
      "- HIPAA compliance requirements",
      "- Patient data privacy protocols",
      "- Emergency request prioritization",
      "- Insurance verification processes"
    ]
  }
);
```
</TabItem>
<TabItem label="Python" icon="seti:python">
```python
orchestrator.classifier.set_system_prompt(
    """You are a specialized routing expert with deep knowledge of {{INDUSTRY}} operations.

Your available agents are:
<agents>
{{AGENT_DESCRIPTIONS}}
</agents>

Consider these key factors for {{INDUSTRY}} when routing:
{{INDUSTRY_RULES}}

Recent conversation context:
<history>
{{HISTORY}}
</history>

Route based on industry best practices and conversation history.""",
    {
        "INDUSTRY": "healthcare",
        "INDUSTRY_RULES": [
            "- HIPAA compliance requirements",
            "- Patient data privacy protocols",
            "- Emergency request prioritization",
            "- Insurance verification processes"
        ]
    }
)
```
</TabItem>
</Tabs>

Note: When customizing the prompt, you must include:
- The `{{AGENT_DESCRIPTIONS}}` variable to list available agents
- The `{{HISTORY}}` variable for conversation context
- Clear instructions for agent selection
- Response format expectations
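A quick sketch of a guard that checks a custom template for the required placeholders before using it (a hypothetical helper for illustration, not part of the library):

```python
REQUIRED_PLACEHOLDERS = ("{{AGENT_DESCRIPTIONS}}", "{{HISTORY}}")

def validate_prompt_template(template: str) -> list:
    """Return the required placeholders that are missing from a custom template."""
    return [p for p in REQUIRED_PLACEHOLDERS if p not in template]

missing = validate_prompt_template(
    "Route the user.\n<agents>\n{{AGENT_DESCRIPTIONS}}\n</agents>"
)
```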

## Configuration Options

The AnthropicClassifier accepts the following configuration options:

- `api_key` (required): Your Anthropic API key.
- `model_id` (optional): The ID of the Anthropic model to use. Defaults to Claude 3.5 Sonnet.
- `inference_config` (optional): A dictionary containing inference configuration parameters:
  - `max_tokens` (optional): The maximum number of tokens to generate. Defaults to 1000.
  - `temperature` (optional): Controls randomness in output generation.
  - `top_p` (optional): Controls diversity of output generation.
  - `stop_sequences` (optional): A list of sequences that will stop generation.
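As an illustration of these defaults, merging user-supplied parameters over the documented `max_tokens` default might look like this (a sketch; the library's internals may differ):

```python
DEFAULT_MAX_TOKENS = 1000  # documented default when max_tokens is not specified

def build_inference_config(user_config=None):
    """Merge user-supplied inference parameters over the documented defaults."""
    config = {"max_tokens": DEFAULT_MAX_TOKENS}
    config.update(user_config or {})
    return config

cfg = build_inference_config({"temperature": 0.3})
```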

## Best Practices

1. **API Key Security**: Keep your Anthropic API key secure and never expose it in your code.
2. **Model Selection**: Choose appropriate models based on your needs and performance requirements.
3. **Inference Configuration**: Experiment with different parameters to optimize classification accuracy.
4. **System Prompt**: Consider customizing the system prompt for your specific use case, while maintaining the core classification structure.

## Limitations

- Requires an active Anthropic API key
- Subject to Anthropic's API pricing and rate limits
- Classification quality depends on the quality of agent descriptions and system prompt

For more information, see the [Classifier Overview](/multi-agent-orchestrator/classifier/overview) and [Agents](/multi-agent-orchestrator/agents/overview) documentation.
