Commit e9e510c

feat: Add adapter for CrewAI

1 parent a53a0d6 · commit e9e510c

File tree

17 files changed: +1526 −2 lines changed

README.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -60,6 +60,7 @@
 > | AutoGen | 🚧 || 🚧 |
 > | Microsoft Agent Framework | 🚧 | 🚧 | 🚧 |
 > | [Agno](https://runtime.agentscope.io/en/agno_guidelines.html) ||| 🚧 |
+> | [CrewAI](https://runtime.agentscope.io/en/crewai_guidelines.html) ||| 🚧 |

 ---
```

README_zh.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -60,6 +60,7 @@
 > | AutoGen | 🚧 || 🚧 |
 > | Microsoft Agent Framework | 🚧 | 🚧 | 🚧 |
 > | [Agno](https://runtime.agentscope.io/zh/agno_guidelines.html) | 🧪 || 🚧 |
+> | [CrewAI](https://runtime.agentscope.io/zh/crewai_guidelines.html) ||| 🚧 |

 ---
```

cookbook/_toc.yml

Lines changed: 2 additions & 0 deletions

```diff
@@ -44,6 +44,7 @@ parts:
       - file: en/ut.md
       - file: en/langgraph_guidelines.md
       - file: en/agno_guidelines.md
+      - file: en/crewai_guidelines.md
       - file: en/contribute.md
         sections:
           - file: en/README.md
@@ -107,6 +108,7 @@ parts:
       - file: zh/ut.md
       - file: zh/langgraph_guidelines.md
       - file: zh/agno_guidelines.md
+      - file: zh/crewai_guidelines.md
       - file: zh/contribute.md
         sections:
           - file: zh/README.md
```

cookbook/en/crewai_guidelines.md (new file)

Lines changed: 205 additions & 0 deletions
---
jupytext:
  formats: md:myst
  text_representation:
    extension: .md
    format_name: myst
    format_version: 0.13
    jupytext_version: 1.11.5
kernelspec:
  display_name: Python 3
  language: python
  name: python3
---
# CrewAI Integration Guide

This document describes how to integrate the CrewAI framework with AgentScope Runtime to build collaborative autonomous agents that support multi-turn conversations, session memory, and streaming responses.

## 📦 Example Overview

The following example demonstrates how to use the CrewAI framework inside AgentScope Runtime:

- Uses the Qwen-Plus model from DashScope.
- Orchestrates a simple research task with one agent.
- Supports multi-turn conversation and session memory.
- Streams responses in real time via server-sent events (SSE).
- Stores session history in an in-memory service (`InMemorySessionHistoryService`).
- Can be accessed through an OpenAI-compatible API mode.

Here is the core code:
```{code-cell}
# crewai_agent.py
# -*- coding: utf-8 -*-
import os

from agentscope_runtime.engine import AgentApp
from agentscope_runtime.engine.schemas.agent_schemas import AgentRequest
from agentscope_runtime.engine.services.session_history import InMemorySessionHistoryService
from agentscope_runtime.adapters.crewai.memory import create_crewai_session_history_memory

from crewai import Agent, LLM, Crew, Task

PORT = 8090


def run_app():
    """Start AgentApp and enable streaming output."""
    agent_app = AgentApp(
        app_name="Friday",
        app_description="A helpful assistant",
    )

    @agent_app.init
    async def init_func(self):
        # Initialize the session history service
        self.session_history_service = InMemorySessionHistoryService()

    @agent_app.query(framework="crewai")
    async def query_func(
        self,
        msgs,
        request: AgentRequest = None,
        **kwargs,
    ):
        """Handle agent queries using CrewAI."""
        # Extract the user's question from the first input message
        user_question = msgs[0]["content"][0]["text"]

        # Initialize the LLM
        llm = LLM(
            model="qwen-plus",
            api_key=os.environ["DASHSCOPE_API_KEY"],
            base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
            stream=True,
        )

        # Create session-specific memory for the crew
        memory = await create_crewai_session_history_memory(
            service_or_class=self.session_history_service,
            user_id=request.user_id,
            session_id=request.session_id,
        )

        # Define the research agent
        research_analyst = Agent(
            role="Expert Research Analyst",
            goal=(
                "Analyze the user's question and provide a clear, "
                "concise, and accurate answer."
            ),
            backstory=(
                "You are an expert analyst at a world-renowned research institute. "
                "You are known for your ability to break down complex questions and "
                "deliver well-structured, easy-to-understand answers."
            ),
            llm=llm,
        )

        # Define the research task
        research_task = Task(
            description=f"Investigate the following user query: '{user_question}'",
            expected_output=(
                "A comprehensive yet easy-to-read answer that directly addresses "
                "the user's query. The answer should be well-formatted and "
                "factually correct."
            ),
            agent=research_analyst,
        )

        # Assemble the crew
        crew = Crew(
            agents=[research_analyst],
            tasks=[research_task],
            external_memory=memory,
            stream=True,
        )

        # Kick off the crew and stream the results
        async for chunk in await crew.akickoff():
            yield chunk

    agent_app.run(host="127.0.0.1", port=PORT)


if __name__ == "__main__":
    run_app()
```
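The memory created above is scoped to a `(user_id, session_id)` pair, which is what keeps conversations from different sessions separate. Conceptually, the history service behaves like a dictionary keyed by that pair. The sketch below is an illustrative stand-in only: `TinySessionHistory` is a hypothetical miniature, not the actual `InMemorySessionHistoryService` API.

```python
# Illustrative stand-in: a minimal in-memory session history store showing
# the (user_id, session_id) keying scheme the example relies on. The real
# InMemorySessionHistoryService in agentscope_runtime has its own API.
from collections import defaultdict


class TinySessionHistory:
    def __init__(self):
        # Each (user_id, session_id) pair gets its own message list.
        self._store = defaultdict(list)

    def append(self, user_id: str, session_id: str, message: dict) -> None:
        self._store[(user_id, session_id)].append(message)

    def history(self, user_id: str, session_id: str) -> list:
        return list(self._store[(user_id, session_id)])


store = TinySessionHistory()
store.append("u1", "session_1", {"role": "user", "text": "Hi"})
store.append("u1", "session_2", {"role": "user", "text": "Other topic"})

# Histories are isolated per session:
print(len(store.history("u1", "session_1")))  # → 1
```

Because the crew is rebuilt on every request, passing this session-scoped memory in as `external_memory` is what gives the agent continuity across turns.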
## ⚙️ Prerequisites

```{note}
Before starting, make sure you have installed AgentScope Runtime and CrewAI, and configured the required API keys.
```

1. **Install dependencies**:

   ```bash
   pip install "agentscope-runtime[ext]"
   ```

2. **Set environment variables** (the DashScope API key is used for the Qwen models):

   ```bash
   export DASHSCOPE_API_KEY="your-dashscope-api-key"
   ```
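The example reads `os.environ["DASHSCOPE_API_KEY"]` at query time, so a missing key would only surface on the first request. An optional fail-fast guard (not part of the original example; `require_env` is a hypothetical helper) catches this at startup instead:

```python
import os
import sys


def require_env(name: str) -> str:
    """Exit early with a clear message if a required variable is unset."""
    value = os.environ.get(name)
    if not value:
        sys.exit(f"Missing required environment variable: {name}")
    return value


# Call before starting the app, e.g. at the top of run_app():
# api_key = require_env("DASHSCOPE_API_KEY")
```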
## ▶️ Run the Example

```bash
python crewai_agent.py
```
## 🌐 API Interaction

### 1. Ask the Agent (`/process`)

You can interact with the agent via an HTTP POST request, with SSE streaming enabled:

```bash
curl -N \
  -X POST "http://localhost:8090/process" \
  -H "Content-Type: application/json" \
  -d '{
    "input": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "What is the capital of France?" }
        ]
      }
    ],
    "session_id": "session_1"
  }'
```
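The same request can be issued from Python. The sketch below builds the request body from the curl example and shows a minimal parser for SSE `data:` lines; `parse_sse_line` is a hypothetical helper, and the exact shape of each streamed event depends on the runtime, so the commented-out `requests` usage is an assumption about how you would consume the stream:

```python
import json

# The same request body the curl example sends.
payload = {
    "input": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "What is the capital of France?"}],
        }
    ],
    "session_id": "session_1",
}


def parse_sse_line(line: str):
    """Return the decoded JSON payload of a `data:` SSE line, else None."""
    if line.startswith("data:"):
        return json.loads(line[len("data:"):].strip())
    return None


# With the server running, you could stream the response like this:
#   import requests
#   with requests.post("http://localhost:8090/process", json=payload, stream=True) as r:
#       for raw in r.iter_lines(decode_unicode=True):
#           event = parse_sse_line(raw or "")
#           if event is not None:
#               print(event)

print(parse_sse_line('data: {"delta": "Paris"}'))  # → {'delta': 'Paris'}
```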
### 2. OpenAI-Compatible Mode

This example also exposes an **OpenAI-compatible API**:

```python
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8090/compatible-mode/v1")
resp = client.responses.create(
    model="any_model",
    input="What is CrewAI?",
)
print(resp.output_text)
```
## 🔧 Customization

You can extend this example by:

1. **Changing the model**: replace `LLM(model="qwen-plus", ...)` with another model.
2. **Refining agents and tasks**:
   - Modify the agent's `role`, `goal`, and `backstory` to change its persona and expertise.
   - Sharpen the task's `description` and `expected_output` for more specific results.
   - Add more `Agent` and `Task` instances to the `Crew` to build more complex multi-agent workflows with collaboration and delegation.
3. **Using different tools**: assign tools to your agents so they can interact with external services, such as searching the web or accessing databases.
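Multi-agent workflows chain several tasks so that one agent's output becomes the next agent's input. Purely as a conceptual illustration of that sequential delegation (plain Python stand-in functions, not the CrewAI `Agent`/`Task`/`Crew` API):

```python
# Conceptual stand-in for a sequential multi-step pipeline: each "task"
# consumes the previous task's output, the way chained tasks in a crew can.
def research(question: str) -> str:
    return f"findings about: {question}"


def summarize(findings: str) -> str:
    return f"summary of ({findings})"


def run_pipeline(question: str, steps) -> str:
    result = question
    for step in steps:
        result = step(result)
    return result


print(run_pipeline("What is CrewAI?", [research, summarize]))
# → summary of (findings about: What is CrewAI?)
```

In CrewAI itself, this ordering is expressed by listing the tasks in sequence when assembling the `Crew`.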
## 📚 References

- [CrewAI Documentation](https://docs.crewai.com/)
- [AgentScope Runtime Documentation](https://runtime.agentscope.io/)
