diff --git a/README.md b/README.md
index 17ab8b258..2cd7f22c9 100644
--- a/README.md
+++ b/README.md
@@ -493,6 +493,65 @@ python examples/run_azure_openai.py
python examples/run_ollama.py
```
+### Using with OpenRouter
+
+[OpenRouter](https://openrouter.ai/) provides access to a wide variety of models through a single API. OWL's OpenRouter integration includes API key pooling and rotation for improved reliability.
+
+**1. Set up your OpenRouter API key:**
+
+Get your API key from the [OpenRouter website](https://openrouter.ai/keys). Add it to your `.env` file:
+
+```
+OPENROUTER_API_KEY='your-key-here'
+```
+
+For enhanced reliability, you can provide a comma-separated list of keys. The system will automatically rotate through them and put failing keys on a temporary cooldown.
+
+```
+OPENROUTER_API_KEY='key1,key2,key3'
+```
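+
+Under the hood, key pooling is handled by `owl/key_manager.py`. The `run_openrouter` example wires it up roughly as follows (a sketch of the same calls the example script makes):
+
+```python
+from owl.key_manager import KeyManager, ResilientOpenAICompatibleModel
+
+# Reads the comma-separated keys from OPENROUTER_API_KEY
+key_manager = KeyManager(key_env_var="OPENROUTER_API_KEY")
+
+# Rotates keys automatically and puts failing ones on cooldown
+model = ResilientOpenAICompatibleModel(
+    key_manager=key_manager,
+    model_type="mistralai/mistral-7b-instruct",
+    url="https://openrouter.ai/api/v1",
+    model_config_dict={"temperature": 0.2},
+)
+```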
+
+**2. Run via Web UI:**
+
+1. Start the web application: `python owl/webapp.py`
+2. From the "Select Function Module" dropdown, choose `run_openrouter`.
+3. A new text box will appear. Enter the model identifier from OpenRouter (e.g., `mistralai/mistral-7b-instruct`).
+4. Enter your question and click "Run".
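+
+You can also run the OpenRouter example directly from the command line; the first argument overrides the default task:
+
+```bash
+python examples/run_openrouter.py "Your question here"
+```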
+
+### Using with Local Models (Ollama)
+
+You can run OWL with local models served by [Ollama](https://ollama.com/). This keeps model inference entirely on your own machine, with no external model API calls (note that some tools, such as web search and browsing, still require internet access).
+
+**1. Set up Ollama:**
+
+First, make sure you have Ollama installed and running. You can download it from the [Ollama website](https://ollama.com/).
+
+After installation, pull the models you want to use. For example, to get the Llama 3 and Llava (for vision) models, run the following commands in your terminal:
+
+```bash
+ollama pull llama3
+ollama pull llava
+```
+
+**2. Run via Web UI:**
+
+The easiest way to use a local model is through the web interface:
+
+1. Start the web application: `python owl/webapp.py`
+2. From the "Select Function Module" dropdown, choose `run_ollama`.
+3. A new text box will appear. Enter the name of the text model you want to use (e.g., `llama3`). The vision model currently defaults to `llava`.
+4. Enter your question and click "Run".
+
+**3. (Optional) Custom Server URL:**
+
+If your Ollama server is not running at the default `http://localhost:11434/v1` (the OpenAI-compatible endpoint, so the URL must include the `/v1` suffix), you can configure the URL by setting an environment variable:
+
+```bash
+export OLLAMA_API_BASE_URL="http://your-ollama-host:11434/v1"
+```
+
+You can also add this line to your `.env` file.
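+
+Alternatively, run the bundled example script directly (by default it uses `llama3` for text and `llava` for vision):
+
+```bash
+python examples/run_ollama.py
+```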
+
For a simpler version that only requires an LLM API key, you can try our minimal example:
```bash
@@ -535,6 +594,33 @@ Here are some tasks you can try with OWL:
- "Summarize the main points from this research paper: [paper URL]"
- "Create a data visualization for this dataset: [dataset path]"
+# 🚀 Advanced Usage
+
+## Autonomous Developer Agent
+
+OWL now includes an experimental autonomous developer agent that can perform upgrades and modify its own codebase. This "Daemon Developer" runs in a continuous loop to improve the application.
+
+**Capabilities:**
+- **Code Introspection**: Can list, read, and search through its own source code files.
+- **Self-Upgrade from Git**: Can check for remote `git` updates and apply them safely using a backup-test-restore workflow.
+- **Self-Modification**: Can work on a "backlog" of development tasks by programmatically modifying its own code.
+
+**How to Run:**
+
+You can start this agent from the Web UI:
+1. Start the web application: `python owl/webapp.py`
+2. From the "Select Function Module" dropdown, choose `run_developer_daemon`.
+3. Click "Run".
+
+> **Warning**: This is a persistent process that will run indefinitely. To stop it, press `Ctrl+C` in the terminal where you launched it (the web app or the example script).
+
+**Security Considerations:**
+This feature is highly experimental and grants the AI agent significant control over its own source code and execution environment. While safeguards are in place (restricting file writes and script execution to specific project directories), this capability carries inherent risks.
+
+As an additional security measure, high-risk tools (`write_file` and `run_upgrade_from_git`) now require **human-in-the-loop confirmation**. When the agent attempts to use these tools, it will print a security prompt in the terminal where the app is running and wait for you to type `yes` before proceeding.
+
+It is strongly recommended to run this agent in a sandboxed or containerized environment and to carefully review any action before approving it.
+
# 🧰 Toolkits and Capabilities
## Model Context Protocol (MCP)
@@ -608,6 +694,7 @@ Key toolkits include:
- **CodeExecutionToolkit**: Python code execution and evaluation
- **SearchToolkit**: Web searches (Google, DuckDuckGo, Wikipedia)
- **DocumentProcessingToolkit**: Document parsing (PDF, DOCX, etc.)
+- **CSVToolkit**: Read, write, and query data in CSV files.
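+
+A minimal sketch of calling the new `CSVToolkit` directly (method names and the `'column=value'` query format are as defined in `owl/utils/csv_toolkit.py`; agents normally invoke these as tools, and `examples/run_csv_task.py` shows the agent-driven version):
+
+```python
+from owl.utils.csv_toolkit import CSVToolkit
+
+toolkit = CSVToolkit()
+
+# Read the whole file as a JSON string (a list of row dictionaries)
+rows = toolkit.read_csv("examples/data/sample_employees.csv")
+
+# Filter rows with a simple 'column=value' query
+engineers = toolkit.query_csv(
+    "examples/data/sample_employees.csv", "Department=Engineering"
+)
+
+# Write the filtered rows (a JSON string) to a new CSV file
+print(toolkit.write_csv("engineers.csv", engineers))
+```
+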
Additional specialized toolkits: ArxivToolkit, GitHubToolkit, GoogleMapsToolkit, MathToolkit, NetworkXToolkit, NotionToolkit, RedditToolkit, WeatherToolkit, and more. For a complete list, see the [CAMEL toolkits documentation](https://docs.camel-ai.org/key_modules/tools.html#built-in-toolkits).
@@ -659,10 +746,10 @@ python owl/webapp_jp.py
## Features
-- **Easy Model Selection**: Choose between different models (OpenAI, Qwen, DeepSeek, etc.)
-- **Environment Variable Management**: Configure your API keys and other settings directly from the UI
-- **Interactive Chat Interface**: Communicate with OWL agents through a user-friendly interface
-- **Task History**: View the history and results of your interactions
+- **Real-time Conversation Streaming**: Watch the agent conversation unfold in real time in the "Conversation" tab (see the sketch below for the underlying hook).
+- **Easy Model Selection**: Choose between different models and providers (OpenAI, OpenRouter, Ollama, etc.).
+- **Environment Variable Management**: Configure your API keys and other settings directly from the UI.
+- **Full Log Viewer**: Access the detailed, raw logs in the "Full Logs" tab for debugging.
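+
+The conversation streaming works by passing an optional `queue.Queue` into `run_society` (see `owl/utils/enhanced_role_playing.py`). A minimal consumer sketch, assuming `society` was built by any example's `construct_society`:
+
+```python
+import queue
+import threading
+
+from owl.utils import run_society
+
+msg_queue = queue.Queue()
+worker = threading.Thread(
+    target=run_society,
+    kwargs={"society": society, "message_queue": msg_queue},
+)
+worker.start()
+
+# Each queued item is a dict with "user" and "assistant" entries
+while worker.is_alive() or not msg_queue.empty():
+    try:
+        data = msg_queue.get(timeout=0.5)
+        print(data.get("assistant", ""))
+    except queue.Empty:
+        pass
+```
+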
The web interface is built using Gradio and runs locally on your machine. No data is sent to external servers beyond what's required for the model API calls you configure.
diff --git a/examples/data/sample_employees.csv b/examples/data/sample_employees.csv
new file mode 100644
index 000000000..4dacfcac9
--- /dev/null
+++ b/examples/data/sample_employees.csv
@@ -0,0 +1,8 @@
+EmployeeID,FirstName,LastName,Department,Salary
+101,John,Doe,Engineering,90000
+102,Jane,Smith,Marketing,75000
+103,Peter,Jones,Engineering,95000
+104,Mary,Johnson,Sales,80000
+105,David,Williams,Marketing,78000
+106,Emily,Brown,Engineering,120000
+107,Michael,Davis,Sales,82000
diff --git a/examples/run_csv_task.py b/examples/run_csv_task.py
new file mode 100644
index 000000000..0d2683fbc
--- /dev/null
+++ b/examples/run_csv_task.py
@@ -0,0 +1,102 @@
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+from dotenv import load_dotenv
+from camel.models import ModelFactory
+from camel.types import ModelType
+from camel.societies import RolePlaying
+from camel.logger import set_log_level
+
+# Assuming the new toolkit is in this path
+from owl.utils.csv_toolkit import CSVToolkit
+from owl.utils import run_society
+
+# Setup environment
+import pathlib
+base_dir = pathlib.Path(__file__).parent.parent
+env_path = base_dir / "owl" / ".env"
+load_dotenv(dotenv_path=str(env_path))
+
+set_log_level(level="INFO")
+
+
+def construct_society(question: str) -> RolePlaying:
+ """
+ Constructs a society of agents for the CSV processing task.
+ """
+ # Define the model for the agent
+ model = ModelFactory.create(
+ model_platform="openai",
+ model_type=ModelType.GPT_4O,
+ model_config_dict={"temperature": 0.2},
+ )
+
+ # Instantiate the toolkit
+ csv_toolkit = CSVToolkit()
+
+ # Get the tool functions from the toolkit instance
+ tools = [
+ csv_toolkit.read_csv,
+ csv_toolkit.write_csv,
+ csv_toolkit.query_csv,
+ ]
+
+ # Configure agent roles and parameters
+ user_agent_kwargs = {"model": model}
+ assistant_agent_kwargs = {"model": model, "tools": tools}
+
+ # Configure task parameters
+ task_kwargs = {
+ "task_prompt": question,
+ "with_task_specify": False,
+ }
+
+ # Create and return the society
+ society = RolePlaying(
+ **task_kwargs,
+ user_role_name="user",
+ user_agent_kwargs=user_agent_kwargs,
+ assistant_role_name="assistant",
+ assistant_agent_kwargs=assistant_agent_kwargs,
+ )
+
+ return society
+
+
+def main():
+ """Main function to run the CSV processing workflow."""
+
+ # The detailed task prompt that guides the agent
+ task_prompt = """
+ Your task is to process an employee data file. You must follow these steps:
+ 1. First, read the data from the CSV file located at `examples/data/sample_employees.csv`.
+ 2. Next, query this data to find all employees who are in the 'Engineering' department.
+ 3. Finally, take the filtered list of engineers and write it to a new CSV file named `engineers.csv`.
+ After you have written the new file, report that the task is complete and state how many engineers were found and saved.
+ """
+
+ # Construct and run the society
+ society = construct_society(task_prompt)
+ answer, chat_history, token_count = run_society(society)
+
+ # Output the final result from the agent
+ print("\n" + "="*30)
+ print("CSV Processing Task Final Report:")
+ print("="*30)
+ print(f"\033[94m{answer}\033[0m")
+ print("\nToken usage information:")
+ print(token_count)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/run_developer_daemon.py b/examples/run_developer_daemon.py
new file mode 100644
index 000000000..ba94469fe
--- /dev/null
+++ b/examples/run_developer_daemon.py
@@ -0,0 +1,120 @@
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+import time
+import pathlib
+from dotenv import load_dotenv
+from camel.models import ModelFactory
+from camel.types import ModelType
+from camel.societies import RolePlaying
+from camel.logger import set_log_level
+
+from owl.utils.developer_toolkit import DeveloperToolkit
+from owl.utils import run_society
+
+# Setup environment
+base_dir = pathlib.Path(__file__).parent.parent
+env_path = base_dir / "owl" / ".env"
+load_dotenv(dotenv_path=str(env_path))
+
+set_log_level(level="INFO")
+
+
+def construct_society(question: str) -> RolePlaying:
+ """
+ Constructs a society of agents for the developer daemon task.
+ """
+ model = ModelFactory.create(
+ model_platform="openai",
+ model_type=ModelType.GPT_4O,
+ model_config_dict={"temperature": 0.2},
+ )
+
+ developer_toolkit = DeveloperToolkit()
+ tools = [
+ developer_toolkit.list_files,
+ developer_toolkit.read_file,
+ developer_toolkit.write_file,
+ developer_toolkit.check_for_git_updates,
+ developer_toolkit.run_tests,
+ developer_toolkit.run_upgrade_from_git,
+ ]
+
+ assistant_agent_kwargs = {"model": model, "tools": tools}
+ user_agent_kwargs = {"model": model}
+
+ return RolePlaying(
+ task_prompt=question,
+ user_role_name="user",
+ assistant_role_name="assistant",
+ user_agent_kwargs=user_agent_kwargs,
+ assistant_agent_kwargs=assistant_agent_kwargs,
+ )
+
+
+def main():
+ """Main function to run the developer daemon."""
+ print("Starting Daemon Developer Agent...")
+ print("This agent will run in a continuous loop to improve the codebase.")
+
+ # This backlog can be expanded with more development tasks.
+ development_backlog = [
+ "1. Add a new function to the `CSVToolkit` in `owl/utils/csv_toolkit.py` called `get_row_count` that reads a CSV and returns the number of rows (excluding the header).",
+ "2. Refactor the `SystemToolkit` to use the new `DeveloperToolkit`'s `_run_command` helper to reduce code duplication.",
+ # Add more tasks here in the future.
+ ]
+
+ while True:
+ print("\n" + "="*50)
+ print("Starting new development cycle...")
+ print("="*50)
+
+ task_prompt = f"""
+ Your goal is to continuously improve this application. Your workflow for this cycle is as follows:
+
+ **IMPORTANT NOTE:** Some of your tools, like `write_file` and `run_upgrade_from_git`, are high-risk and require human approval before they can execute. You must check the output of these tools carefully. If the output says 'Action cancelled by user', you must stop the current task and report the cancellation.
+
+ 1. **Check for External Updates:** First, call the `check_for_git_updates` tool to see if there are any new commits in the main repository.
+
+ 2. **Apply External Updates (if any):** If updates are available, call the `run_upgrade_from_git` tool to safely apply them. This tool handles backup, upgrade, testing, and restore automatically. Report the result of this process and your cycle is complete.
+
+ 3. **Work on Internal Tasks (if no external updates):** If no external updates are found, you must work on an internal development task. Here is the current backlog of tasks:
+ ---
+ {chr(10).join(development_backlog)}
+ ---
+ Choose the *first* task from this list that has not been completed.
+
+ 4. **Implement the Internal Task:**
+ a. Use your `list_files` and `read_file` tools to understand the current codebase related to the task.
+ b. Plan the necessary code changes.
+ c. Use the `write_file` tool to implement the changes.
+ d. After writing the code, use the `run_tests` tool to ensure you haven't broken anything.
+ e. Report a summary of the changes you made and the result of the tests. Your cycle is then complete.
+ """
+
+ # Construct and run the society for one cycle
+ society = construct_society(task_prompt)
+ answer, _, _ = run_society(society)
+
+ print("\n" + "-"*50)
+ print("Development Cycle Complete. Final Report from Agent:")
+ print(f"\033[94m{answer}\033[0m")
+ print("-" * 50)
+
+ # Wait for a few seconds before starting the next cycle
+ print("\nWaiting for 10 seconds before next cycle...")
+ time.sleep(10)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/run_ollama.py b/examples/run_ollama.py
index 2ff53573b..955d7619e 100644
--- a/examples/run_ollama.py
+++ b/examples/run_ollama.py
@@ -41,69 +41,58 @@
set_log_level(level="DEBUG")
-def construct_society(question: str) -> RolePlaying:
+import os
+
+
+def construct_society(
+ question: str, ollama_model_name: str = "llama3"
+) -> RolePlaying:
r"""Construct a society of agents based on the given question.
Args:
question (str): The task or question to be addressed by the society.
+ ollama_model_name (str, optional): The name of the Ollama model to
+ use for text-based tasks. Defaults to "llama3".
Returns:
RolePlaying: A configured society of agents ready to address the question.
"""
+ # Get Ollama server URL from environment variable or use default
+ ollama_url = os.getenv("OLLAMA_API_BASE_URL", "http://localhost:11434/v1")
+
+ # Use the user-specified model for text-based tasks
+    text_model = ModelFactory.create(
+        model_platform=ModelPlatformType.OLLAMA,
+        model_type=ollama_model_name,
+        url=ollama_url,
+        model_config_dict={"temperature": 0.2},
+    )
- # Create models for different components
- models = {
- "user": ModelFactory.create(
- model_platform=ModelPlatformType.OLLAMA,
- model_type="qwen2.5:72b",
- url="http://localhost:11434/v1",
- model_config_dict={"temperature": 0.8, "max_tokens": 1000000},
- ),
- "assistant": ModelFactory.create(
- model_platform=ModelPlatformType.OLLAMA,
- model_type="qwen2.5:72b",
- url="http://localhost:11434/v1",
- model_config_dict={"temperature": 0.2, "max_tokens": 1000000},
- ),
- "browsing": ModelFactory.create(
- model_platform=ModelPlatformType.OLLAMA,
- model_type="llava:latest",
- url="http://localhost:11434/v1",
- model_config_dict={"temperature": 0.4, "max_tokens": 1000000},
- ),
- "planning": ModelFactory.create(
- model_platform=ModelPlatformType.OLLAMA,
- model_type="qwen2.5:72b",
- url="http://localhost:11434/v1",
- model_config_dict={"temperature": 0.4, "max_tokens": 1000000},
- ),
- "image": ModelFactory.create(
- model_platform=ModelPlatformType.OLLAMA,
- model_type="llava:latest",
- url="http://localhost:11434/v1",
- model_config_dict={"temperature": 0.4, "max_tokens": 1000000},
- ),
- }
+ # Use a dedicated vision model for image analysis tasks
+ # Note: The user must have 'llava' pulled via Ollama for this to work.
+    vision_model = ModelFactory.create(
+        model_platform=ModelPlatformType.OLLAMA,
+        model_type="llava",
+        url=ollama_url,
+        model_config_dict={"temperature": 0.2},
+    )
# Configure toolkits
tools = [
*BrowserToolkit(
- headless=False, # Set to True for headless mode (e.g., on remote servers)
- web_agent_model=models["browsing"],
- planning_agent_model=models["planning"],
+ headless=True,
+ web_agent_model=text_model,
+ planning_agent_model=text_model,
).get_tools(),
*CodeExecutionToolkit(sandbox="subprocess", verbose=True).get_tools(),
- *ImageAnalysisToolkit(model=models["image"]).get_tools(),
+ *ImageAnalysisToolkit(model=vision_model).get_tools(),
SearchToolkit().search_duckduckgo,
- # SearchToolkit().search_google, # Comment this out if you don't have google search
SearchToolkit().search_wiki,
*ExcelToolkit().get_tools(),
*FileWriteToolkit(output_dir="./").get_tools(),
]
- # Configure agent roles and parameters
- user_agent_kwargs = {"model": models["user"]}
- assistant_agent_kwargs = {"model": models["assistant"], "tools": tools}
+ # Configure agent roles and parameters, using the same text model for both
+ user_agent_kwargs = {"model": text_model}
+ assistant_agent_kwargs = {"model": text_model, "tools": tools}
# Configure task parameters
task_kwargs = {
diff --git a/examples/run_openrouter.py b/examples/run_openrouter.py
new file mode 100644
index 000000000..348203f55
--- /dev/null
+++ b/examples/run_openrouter.py
@@ -0,0 +1,121 @@
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+import sys
+import pathlib
+
+from dotenv import load_dotenv
+from camel.toolkits import (
+ CodeExecutionToolkit,
+ ExcelToolkit,
+ SearchToolkit,
+ BrowserToolkit,
+ FileWriteToolkit,
+)
+from camel.societies import RolePlaying
+from camel.logger import set_log_level
+
+from owl.utils import run_society, DocumentProcessingToolkit
+from owl.key_manager import KeyManager, ResilientOpenAICompatibleModel
+
+base_dir = pathlib.Path(__file__).parent.parent
+env_path = base_dir / "owl" / ".env"
+load_dotenv(dotenv_path=str(env_path))
+
+set_log_level(level="INFO")
+
+
+def construct_society(
+ question: str, openrouter_model_name: str = "mistralai/mistral-7b-instruct"
+) -> RolePlaying:
+ r"""Construct a society of agents based on the given question, using
+ OpenRouter as the model provider with resilient key management.
+
+ Args:
+ question (str): The task or question to be addressed by the society.
+ openrouter_model_name (str, optional): The name of the model on
+ OpenRouter to use. Defaults to "mistralai/mistral-7b-instruct".
+
+ Returns:
+ RolePlaying: A configured society of agents ready to address the question.
+ """
+ # Initialize the key manager for OpenRouter API keys
+ key_manager = KeyManager(key_env_var="OPENROUTER_API_KEY")
+
+ # Define model configurations using the resilient model
+ model = ResilientOpenAICompatibleModel(
+ key_manager=key_manager,
+ model_type=openrouter_model_name,
+ url="https://openrouter.ai/api/v1",
+ model_config_dict={"temperature": 0.2},
+ )
+
+ # Configure toolkits
+ tools = [
+ *BrowserToolkit(
+ headless=False,
+ web_agent_model=model,
+ planning_agent_model=model,
+ ).get_tools(),
+ *CodeExecutionToolkit(sandbox="subprocess", verbose=True).get_tools(),
+ SearchToolkit().search_duckduckgo,
+ SearchToolkit().search_google,
+ SearchToolkit().search_wiki,
+ *ExcelToolkit().get_tools(),
+ *DocumentProcessingToolkit(model=model).get_tools(),
+ *FileWriteToolkit(output_dir="./").get_tools(),
+ ]
+
+ # Configure agent roles and parameters
+ user_agent_kwargs = {"model": model}
+ assistant_agent_kwargs = {"model": model, "tools": tools}
+
+ # Configure task parameters
+ task_kwargs = {
+ "task_prompt": question,
+ "with_task_specify": False,
+ }
+
+ # Create and return the society
+ society = RolePlaying(
+ **task_kwargs,
+ user_role_name="user",
+ user_agent_kwargs=user_agent_kwargs,
+ assistant_role_name="assistant",
+ assistant_agent_kwargs=assistant_agent_kwargs,
+ )
+
+ return society
+
+
+def main():
+ r"""Main function to run the OWL system with an example question."""
+    # Default example task
+    default_task = "Use the search tool to find the capital of France and write it to a file named 'capital.txt'."
+
+ # Override default task if command line argument is provided
+ task = sys.argv[1] if len(sys.argv) > 1 else default_task
+
+ # Construct and run the society
+ society = construct_society(task)
+
+ answer, chat_history, token_count = run_society(society)
+
+ # Output the result
+ print(f"\033[94mAnswer: {answer}\033[0m")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/run_upgrade.py b/examples/run_upgrade.py
new file mode 100644
index 000000000..5056a4228
--- /dev/null
+++ b/examples/run_upgrade.py
@@ -0,0 +1,109 @@
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+import os
+import sys
+import pathlib
+
+from dotenv import load_dotenv
+from camel.models import ModelFactory
+from camel.types import ModelType
+from camel.societies import RolePlaying
+from camel.logger import set_log_level
+
+# Assuming the new toolkit is in this path
+from owl.utils.system_toolkit import SystemToolkit
+from owl.utils import run_society
+
+# Setup environment
+base_dir = pathlib.Path(__file__).parent.parent
+env_path = base_dir / "owl" / ".env"
+load_dotenv(dotenv_path=str(env_path))
+
+set_log_level(level="INFO")
+
+
+def construct_society(question: str) -> RolePlaying:
+ """
+ Constructs a society of agents for the self-upgrade task.
+ """
+ # Define the model for the agent
+ model = ModelFactory.create(
+ model_platform="openai",
+ model_type=ModelType.GPT_4O,
+ model_config_dict={"temperature": 0.2},
+ )
+
+ # Instantiate the toolkit
+ system_toolkit = SystemToolkit()
+
+ # Get the tool functions from the toolkit instance
+ tools = [
+ system_toolkit.backup,
+ system_toolkit.upgrade,
+ system_toolkit.test,
+ system_toolkit.restore,
+ ]
+
+ # Configure agent roles and parameters
+ user_agent_kwargs = {"model": model}
+ assistant_agent_kwargs = {"model": model, "tools": tools}
+
+ # Configure task parameters
+ task_kwargs = {
+ "task_prompt": question,
+ "with_task_specify": False,
+ }
+
+ # Create and return the society
+ society = RolePlaying(
+ **task_kwargs,
+ user_role_name="user",
+ user_agent_kwargs=user_agent_kwargs,
+ assistant_role_name="assistant",
+ assistant_agent_kwargs=assistant_agent_kwargs,
+ )
+
+ return society
+
+
+def main():
+ """Main function to run the self-upgrade workflow."""
+
+ # The detailed task prompt that guides the agent
+ task_prompt = """
+ Your task is to safely upgrade the application's codebase. You must follow these steps precisely:
+ 1. First, create a backup of the current application state by calling the `backup` tool.
+ 2. After the backup is confirmed, attempt to upgrade the application by calling the `upgrade` tool.
+ 3. After the upgrade attempt, you must verify the integrity of the application by calling the `test` tool.
+ 4. Analyze the output of the `test` tool. The output will contain the line 'RESULT: Smoke test PASSED.' or 'RESULT: Smoke test FAILED.'.
+ 5. If the test passed, your task is complete. Report the success.
+ 6. If the test failed, you must restore the application to its previous state by calling the `restore` tool. After restoring, report the failure and the fact that you have restored the backup.
+ Do not deviate from this sequence.
+ """
+
+ # Construct and run the society
+ society = construct_society(task_prompt)
+ answer, chat_history, token_count = run_society(society)
+
+ # Output the final result from the agent
+ print("\n" + "="*30)
+ print("Self-Upgrade Process Final Report:")
+ print("="*30)
+ print(f"\033[94m{answer}\033[0m")
+ print("\nToken usage information:")
+ print(token_count)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/owl/.env_template b/owl/.env_template
index 8d5ea06b7..c4bb9acda 100644
--- a/owl/.env_template
+++ b/owl/.env_template
@@ -35,6 +35,9 @@ OPENAI_API_KEY="Your_Key"
# NOVITA API (https://novita.ai/settings/key-management?utm_source=github_owl&utm_medium=github_readme&utm_campaign=github_link)
# NOVITA_API_KEY="Your_Key"
+# OpenRouter API (https://openrouter.ai/keys)
+# OPENROUTER_API_KEY="Your_Key"
+
#===========================================
# Tools & Services API
#===========================================
diff --git a/owl/key_manager.py b/owl/key_manager.py
new file mode 100644
index 000000000..3d65bb2a0
--- /dev/null
+++ b/owl/key_manager.py
@@ -0,0 +1,180 @@
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+import os
+import time
+import threading
+import logging
+from typing import List, Optional
+
+import openai
+from camel.models import OpenAICompatibleModel
+
+# Configure logging
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+
+class KeyManager:
+ """A class to manage a pool of API keys, including rotation and cooldown."""
+
+ def __init__(self, key_env_var: str, cooldown_period: int = 300):
+ """
+ Initializes the KeyManager.
+
+ Args:
+ key_env_var (str): The name of the environment variable containing
+ a comma-separated list of API keys.
+ cooldown_period (int): The number of seconds a key should be in
+ cooldown after a failure.
+ """
+ keys_str = os.getenv(key_env_var, "")
+ self.keys: List[str] = [
+ key.strip() for key in keys_str.split(",") if key.strip()
+ ]
+ if not self.keys:
+ raise ValueError(
+ f"No API keys found in environment variable '{key_env_var}'. "
+ "Please provide a comma-separated list of keys."
+ )
+
+ self.cooldown_period = cooldown_period
+ self.key_status: dict[str, float] = {
+ key: 0.0 for key in self.keys
+ } # Key -> Cooldown end time
+ self.current_key_index = 0
+ self.lock = threading.Lock()
+
+ def get_key(self) -> Optional[str]:
+ """
+ Gets the next available API key from the pool.
+
+ It iterates through the keys, respecting cooldown periods.
+
+ Returns:
+ Optional[str]: An available API key, or None if all keys are
+ currently in cooldown.
+ """
+ with self.lock:
+ # Try to find an available key starting from the current index
+ for _ in range(len(self.keys)):
+ key = self.keys[self.current_key_index]
+ cooldown_end_time = self.key_status.get(key, 0.0)
+
+ if time.time() >= cooldown_end_time:
+ # Key is available, move to the next index for next call
+ self.current_key_index = (self.current_key_index + 1) % len(
+ self.keys
+ )
+ return key
+
+ # Key is in cooldown, try the next one
+ self.current_key_index = (self.current_key_index + 1) % len(
+ self.keys
+ )
+
+ # If we complete the loop, all keys are in cooldown
+ logger.warning("All API keys are currently in cooldown.")
+ return None
+
+ def set_cooldown(self, key: str):
+ """
+ Puts a specific key into a cooldown period after a failure.
+
+ Args:
+ key (str): The API key to put into cooldown.
+ """
+ with self.lock:
+ cooldown_end_time = time.time() + self.cooldown_period
+ self.key_status[key] = cooldown_end_time
+ logger.info(
+ f"Key '...{key[-4:]}' put on cooldown for "
+ f"{self.cooldown_period} seconds."
+ )
+
+ def __repr__(self) -> str:
+ return (
+ f"KeyManager(keys={len(self.keys)}, "
+ f"cooldown_period={self.cooldown_period}s)"
+ )
+
+
+class ResilientOpenAICompatibleModel(OpenAICompatibleModel):
+ """
+ A wrapper around OpenAICompatibleModel that adds resilience by using a
+ pool of API keys and handling failures gracefully.
+ """
+
+ def __init__(self, key_manager: KeyManager, *args, **kwargs):
+ """
+ Initializes the resilient model.
+
+ Args:
+ key_manager (KeyManager): The key manager instance to use for
+ API key rotation and cooldown.
+ *args, **kwargs: Arguments to pass to the parent
+ OpenAICompatibleModel.
+ """
+        self.key_manager = key_manager
+        # The effective API key is swapped in dynamically on every call (see
+        # `step`), so seed the parent class with the first key in the pool
+        # rather than a fixed user-supplied key.
+        kwargs["api_key"] = key_manager.keys[0]
+        super().__init__(*args, **kwargs)
+
+ def _create_client(self, api_key: str):
+ """Helper to create an OpenAI client with a specific key."""
+ return openai.OpenAI(
+ api_key=api_key,
+ base_url=self.url,
+ timeout=self.timeout,
+ max_retries=0, # We handle retries manually
+ )
+
+ def step(self, *args, **kwargs):
+ """
+ Overrides the parent 'step' method to add resilience.
+
+ It attempts to make an API call with a key from the KeyManager.
+ If the call fails due to authentication or rate limits, it puts the
+ key on cooldown and retries with the next available key.
+ """
+ while True: # Loop to retry with new keys
+ current_key = self.key_manager.get_key()
+ if current_key is None:
+ raise RuntimeError(
+ "All API keys are in cooldown. Please wait or add more keys."
+ )
+
+ # Dynamically create the client with the current key
+ self.client = self._create_client(current_key)
+
+ try:
+ # Attempt the API call using the parent's logic
+ logger.info(f"Making API call with key '...{current_key[-4:]}'.")
+ response = super().step(*args, **kwargs)
+ return response
+ except (
+ openai.AuthenticationError,
+ openai.RateLimitError,
+ openai.PermissionDeniedError,
+ ) as e:
+ logger.warning(
+ f"API call failed for key '...{current_key[-4:]}'. "
+ f"Error: {e.__class__.__name__}. Putting key on cooldown."
+ )
+ self.key_manager.set_cooldown(current_key)
+ # The loop will automatically try the next available key
+ except Exception as e:
+ # For other unexpected errors, re-raise the exception
+ logger.error(f"An unexpected error occurred: {e}")
+ raise e
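+
+
+if __name__ == "__main__":
+    # Minimal manual check of the key pool (a sketch; assumes the
+    # OPENROUTER_API_KEY environment variable holds a comma-separated list
+    # such as 'key1,key2'):
+    manager = KeyManager(key_env_var="OPENROUTER_API_KEY", cooldown_period=5)
+    key = manager.get_key()
+    if key is not None:
+        print(f"Got key ending in '...{key[-4:]}'")
+        manager.set_cooldown(key)
+    print(manager)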
diff --git a/owl/utils/csv_toolkit.py b/owl/utils/csv_toolkit.py
new file mode 100644
index 000000000..8fe96d18f
--- /dev/null
+++ b/owl/utils/csv_toolkit.py
@@ -0,0 +1,106 @@
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+import csv
+import json
+import os
+from typing import List, Dict, Any
+
+class CSVToolkit:
+ """A toolkit for performing operations on CSV files."""
+
+ def read_csv(self, file_path: str) -> str:
+ """Reads the content of a CSV file and returns it as a JSON string
+ representing a list of dictionaries.
+
+ Args:
+ file_path (str): The path to the CSV file.
+
+ Returns:
+ str: A JSON string of the CSV content, or an error message.
+ """
+ if not os.path.exists(file_path):
+ return f"Error: File not found at '{file_path}'."
+
+ try:
+ with open(file_path, mode='r', encoding='utf-8') as infile:
+ reader = csv.DictReader(infile)
+ data = [row for row in reader]
+ return json.dumps(data, indent=2)
+ except Exception as e:
+ return f"Error reading CSV file: {e}"
+
+ def write_csv(self, file_path: str, data: str) -> str:
+ """Writes a list of dictionaries (provided as a JSON string) to a
+ CSV file.
+
+ Args:
+ file_path (str): The path to the CSV file to be created.
+ data (str): A JSON string representing a list of dictionaries.
+ Example: '[{"col1": "val1", "col2": "val2"}]'
+
+ Returns:
+ str: A success or error message.
+ """
+ try:
+ list_of_dicts: List[Dict[str, Any]] = json.loads(data)
+ if not isinstance(list_of_dicts, list) or not all(isinstance(d, dict) for d in list_of_dicts):
+ return "Error: Data must be a JSON string of a list of dictionaries."
+
+ if not list_of_dicts:
+ return "Error: Data is empty, cannot write to CSV."
+
+ headers = list_of_dicts[0].keys()
+ with open(file_path, mode='w', encoding='utf-8', newline='') as outfile:
+ writer = csv.DictWriter(outfile, fieldnames=headers)
+ writer.writeheader()
+ writer.writerows(list_of_dicts)
+
+ return f"Successfully wrote data to '{file_path}'."
+ except json.JSONDecodeError:
+ return "Error: Invalid JSON format in the provided data string."
+ except Exception as e:
+ return f"Error writing to CSV file: {e}"
+
+ def query_csv(self, file_path: str, query: str) -> str:
+ """Queries a CSV file based on a simple 'column=value' filter and
+ returns the matching rows.
+
+ Args:
+ file_path (str): The path to the CSV file.
+ query (str): A query string in the format 'column_name=value'.
+
+ Returns:
+ str: A JSON string of the filtered data, or an error message.
+ """
+ if not os.path.exists(file_path):
+ return f"Error: File not found at '{file_path}'."
+
+ try:
+ column_to_query, value_to_match = query.split('=', 1)
+ except ValueError:
+ return "Error: Invalid query format. Please use 'column_name=value'."
+
+ try:
+ with open(file_path, mode='r', encoding='utf-8') as infile:
+ reader = csv.DictReader(infile)
+ filtered_data = [
+ row for row in reader if row.get(column_to_query) == value_to_match
+ ]
+
+ if not filtered_data:
+ return "No matching rows found for the query."
+
+ return json.dumps(filtered_data, indent=2)
+ except Exception as e:
+ return f"Error querying CSV file: {e}"
diff --git a/owl/utils/developer_toolkit.py b/owl/utils/developer_toolkit.py
new file mode 100644
index 000000000..3d909e73d
--- /dev/null
+++ b/owl/utils/developer_toolkit.py
@@ -0,0 +1,233 @@
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+import os
+import subprocess
+from typing import Tuple
+
+class DeveloperToolkit:
+ """A comprehensive toolkit for a developer agent that can inspect, modify,
+ and upgrade its own codebase.
+ """
+
+ def _confirm_action(self, prompt: str) -> bool:
+ """Displays a prompt and waits for user confirmation from the console."""
+ print("\n" + "="*50)
+ print("SECURITY PROMPT: Human approval required.")
+ print(prompt)
+ print("="*50)
+ response = input("Please type 'yes' to approve or anything else to cancel: ")
+ return response.lower() == "yes"
+
+ def _run_command(self, command: list[str]) -> Tuple[int, str, str]:
+ """Helper to run a shell command and capture output."""
+ # Security Guardrail: Restrict shell script execution to the `scripts` dir
+ if command[0] == "bash" or command[0].endswith(".sh"):
+ script_path = os.path.abspath(command[1])
+ allowed_dir = os.path.abspath("scripts")
+ if not script_path.startswith(allowed_dir):
+ return -1, "", f"Error: Security policy prevents execution of scripts outside the 'scripts/' directory."
+
+ try:
+ process = subprocess.run(
+ command,
+ capture_output=True,
+ text=True,
+ check=False,
+ )
+ return process.returncode, process.stdout, process.stderr
+ except FileNotFoundError:
+ return -1, "", f"Error: Command '{command[0]}' not found."
+ except Exception as e:
+ return -1, "", f"An unexpected error occurred: {e}"
+
+ def list_files(self, directory: str = ".") -> str:
+ """Recursively lists all files and directories within a given
+ directory.
+
+ Args:
+ directory (str, optional): The directory to list. Defaults to the
+ current directory.
+
+ Returns:
+ str: A string representing the directory tree.
+ """
+ if not os.path.isdir(directory):
+ return f"Error: Directory '{directory}' not found."
+
+ tree = []
+ for root, dirs, files in os.walk(directory):
+ level = root.replace(directory, '').count(os.sep)
+ indent = ' ' * 4 * (level)
+ tree.append(f"{indent}{os.path.basename(root)}/")
+ sub_indent = ' ' * 4 * (level + 1)
+ for f in files:
+ tree.append(f"{sub_indent}{f}")
+
+ return "\n".join(tree)
+
+ def read_file(self, file_path: str) -> str:
+ """Reads and returns the content of a specified file.
+
+ Args:
+ file_path (str): The path to the file to read.
+
+ Returns:
+ str: The content of the file, or an error message.
+ """
+ if not os.path.exists(file_path):
+ return f"Error: File not found at '{file_path}'."
+ try:
+ with open(file_path, "r", encoding="utf-8") as f:
+ return f.read()
+ except Exception as e:
+ return f"Error reading file: {e}"
+
+ def write_file(self, file_path: str, content: str) -> str:
+ """Writes content to a specified file, overwriting it if it exists.
+ For security, this tool can only write to files within the 'owl',
+ 'examples', and 'scripts' directories. This action requires human
+ approval.
+
+ Args:
+ file_path (str): The path to the file to write to.
+ content (str): The new content to write to the file.
+
+ Returns:
+ str: A success or error message.
+ """
+ # Security Guardrail: Prevent writing to arbitrary locations
+ allowed_dirs = ["owl", "examples", "scripts"]
+ working_dir = os.getcwd()
+
+ absolute_path = os.path.abspath(file_path)
+
+        is_safe = any(
+            # Append os.sep so sibling paths like 'owl_extra/' don't pass
+            absolute_path.startswith(os.path.join(working_dir, d) + os.sep)
+            for d in allowed_dirs
+        )
+
+ if not is_safe:
+ return (
+ "Error: Security policy restricts file writing to the 'owl', "
+ "'examples', and 'scripts' directories."
+ )
+
+ # Human-in-the-Loop Confirmation
+ prompt = (
+ f"The agent wants to write to the file '{file_path}'.\n"
+ "This will OVERWRITE the file if it exists.\n"
+ "--- PREVIEW OF CONTENT ---\n"
+ f"{content[:500]}\n"
+ "--- END PREVIEW ---"
+ )
+ if not self._confirm_action(prompt):
+ return "Action cancelled by user."
+
+ try:
+ os.makedirs(os.path.dirname(file_path), exist_ok=True)
+ with open(file_path, "w", encoding="utf-8") as f:
+ f.write(content)
+ return f"Successfully wrote content to '{file_path}'."
+ except Exception as e:
+ return f"Error writing file: {e}"
+
+ def check_for_git_updates(self) -> str:
+ """Checks for new updates in the remote git repository.
+
+ Returns:
+ str: A message indicating if updates are available or not.
+ """
+ print("Checking for git updates...")
+ returncode, stdout, stderr = self._run_command(["git", "fetch"])
+ if returncode != 0:
+ return f"Error running 'git fetch': {stderr}"
+
+ returncode, stdout, stderr = self._run_command(["git", "status", "-uno"])
+ if returncode != 0:
+ return f"Error running 'git status': {stderr}"
+
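+        # Note: this parses the human-readable output of `git status`, so it
+        # assumes an English locale.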
+ if "Your branch is up to date" in stdout:
+ return "No new updates found."
+ elif "Your branch is behind" in stdout:
+ return "Updates available."
+ else:
+ return f"Could not determine git status. Output:\n{stdout}"
+
+ def run_tests(self) -> str:
+ """Runs the test suite to verify the application's integrity.
+
+ Returns:
+ str: A summary of the test results, indicating pass or fail.
+ """
+ print("Running tests...")
+ returncode, stdout, stderr = self._run_command(["bash", "scripts/test.sh"])
+ result = f"Test Script Exit Code: {returncode}\n---\nSTDOUT:\n{stdout}\n---\nSTDERR:\n{stderr}"
+ if returncode == 0:
+ result += "\n---\nRESULT: Tests PASSED."
+ else:
+ result += "\n---\nRESULT: Tests FAILED."
+ return result
+
+ def run_upgrade_from_git(self) -> str:
+ """
+ Runs the full, safe upgrade process from the git repository.
+ This involves backing up, upgrading, testing, and restoring on failure.
+ This action requires human approval.
+
+ Returns:
+ str: A detailed log of the entire upgrade process and its outcome.
+ """
+ prompt = (
+ "The agent wants to initiate the self-upgrade process from git.\n"
+ "This will execute the following steps:\n"
+ " 1. Backup current code\n"
+ " 2. Pull latest code from git\n"
+ " 3. Run tests\n"
+ " 4. Restore backup if tests fail"
+ )
+ if not self._confirm_action(prompt):
+ return "Action cancelled by user."
+
+ print("Starting safe git upgrade process...")
+
+ # 1. Backup
+ backup_code, backup_out, backup_err = self._run_command(["bash", "scripts/backup.sh"])
+ if backup_code != 0:
+ return f"Upgrade failed at backup step. Error:\n{backup_err}"
+
+ # 2. Upgrade
+ upgrade_code, upgrade_out, upgrade_err = self._run_command(["bash", "scripts/upgrade.sh"])
+ if upgrade_code != 0:
+ return f"Upgrade failed at git pull step. Error:\n{upgrade_err}"
+
+ # 3. Test
+ test_code, test_out, test_err = self._run_command(["bash", "scripts/test.sh"])
+
+ final_report = f"BACKUP LOG:\n{backup_out}\n{backup_err}\n\n"
+ final_report += f"UPGRADE LOG:\n{upgrade_out}\n{upgrade_err}\n\n"
+ final_report += f"TEST LOG:\n{test_out}\n{test_err}\n\n"
+
+ if test_code == 0:
+ final_report += "FINAL STATUS: Upgrade successful and tests passed."
+ else:
+ # 4. Restore on failure
+ final_report += "TESTS FAILED. Restoring from backup...\n"
+ restore_code, restore_out, restore_err = self._run_command(["bash", "scripts/restore.sh"])
+ final_report += f"RESTORE LOG:\n{restore_out}\n{restore_err}\n\n"
+ if restore_code == 0:
+ final_report += "FINAL STATUS: Upgrade failed. Code has been restored from backup."
+ else:
+ final_report += "CRITICAL ERROR: Upgrade failed AND restore failed. Manual intervention required."
+
+ return final_report
diff --git a/owl/utils/enhanced_role_playing.py b/owl/utils/enhanced_role_playing.py
index f6aaa553c..1d93b19a4 100644
--- a/owl/utils/enhanced_role_playing.py
+++ b/owl/utils/enhanced_role_playing.py
@@ -439,9 +439,13 @@ def step(
)
+import queue
+
+
def run_society(
society: OwlRolePlaying,
round_limit: int = 15,
+ message_queue: Optional[queue.Queue] = None,
) -> Tuple[str, List[dict], dict]:
overall_completion_token_count = 0
overall_prompt_token_count = 0
@@ -479,6 +483,10 @@ def run_society(
}
chat_history.append(_data)
+
+ # If a message queue is provided, put the data into the queue
+ if message_queue:
+ message_queue.put(_data)
logger.info(
f"Round #{_round} user_response:\n {user_response.msgs[0].content if user_response.msgs and len(user_response.msgs) > 0 else ''}"
)
diff --git a/owl/utils/system_toolkit.py b/owl/utils/system_toolkit.py
new file mode 100644
index 000000000..070886315
--- /dev/null
+++ b/owl/utils/system_toolkit.py
@@ -0,0 +1,85 @@
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+import subprocess
+import os
+from typing import Tuple
+
+class SystemToolkit:
+ """A toolkit for performing system-level operations like backup, upgrade,
+ and testing.
+ """
+
+ def _run_script(self, script_name: str) -> Tuple[int, str, str]:
+ """A helper function to run a shell script and capture its output."""
+ script_path = os.path.join("scripts", script_name)
+ if not os.path.exists(script_path):
+ return -1, "", f"Error: Script not found at {script_path}"
+
+ try:
+ process = subprocess.run(
+ ["bash", script_path],
+ capture_output=True,
+ text=True,
+ check=False, # Do not raise exception on non-zero exit codes
+ )
+ stdout = process.stdout
+ stderr = process.stderr
+ return process.returncode, stdout, stderr
+ except Exception as e:
+ return -1, "", f"An unexpected error occurred: {e}"
+
+ def backup(self) -> str:
+ """Creates a backup of the current codebase by running the backup.sh
+ script.
+
+ Returns:
+ str: The output of the backup script, including stdout and stderr.
+ """
+ returncode, stdout, stderr = self._run_script("backup.sh")
+ return f"Backup Script Exit Code: {returncode}\n---\nSTDOUT:\n{stdout}\n---\nSTDERR:\n{stderr}"
+
+ def upgrade(self) -> str:
+ """Attempts to upgrade the codebase by running the upgrade.sh script,
+ which pulls the latest changes from git.
+
+ Returns:
+ str: The output of the upgrade script, including stdout and stderr.
+ """
+ returncode, stdout, stderr = self._run_script("upgrade.sh")
+ return f"Upgrade Script Exit Code: {returncode}\n---\nSTDOUT:\n{stdout}\n---\nSTDERR:\n{stderr}"
+
+ def test(self) -> str:
+ """Runs a smoke test on the codebase by executing the test.sh script.
+
+ Returns:
+ str: The output of the test script, including a clear statement
+ of whether the test passed or failed based on the exit code.
+ """
+ returncode, stdout, stderr = self._run_script("test.sh")
+ result = f"Test Script Exit Code: {returncode}\n---\nSTDOUT:\n{stdout}\n---\nSTDERR:\n{stderr}"
+ if returncode == 0:
+ result += "\n---\nRESULT: Smoke test PASSED."
+ else:
+ result += "\n---\nRESULT: Smoke test FAILED."
+ return result
+
+ def restore(self) -> str:
+ """Restores the codebase from the backup by running the restore.sh
+ script.
+
+ Returns:
+ str: The output of the restore script, including stdout and stderr.
+ """
+ returncode, stdout, stderr = self._run_script("restore.sh")
+ return f"Restore Script Exit Code: {returncode}\n---\nSTDOUT:\n{stdout}\n---\nSTDERR:\n{stderr}"
diff --git a/owl/webapp.py b/owl/webapp.py
index 56f755280..3b7e23af4 100644
--- a/owl/webapp.py
+++ b/owl/webapp.py
@@ -21,6 +21,7 @@
import datetime
from typing import Tuple
import importlib
+import inspect
from dotenv import load_dotenv, set_key, find_dotenv, unset_key
import threading
import queue
@@ -249,7 +250,8 @@ def process_message(role, content):
"run_deepseek_zh": "Using deepseek model to process Chinese tasks",
"run_mistral": "Using Mistral models to process tasks",
"run_openai_compatible_model": "Using openai compatible model to process tasks",
- "run_ollama": "Using local ollama model to process tasks",
+ "run_ollama": "Use a local model served by Ollama. Make sure Ollama is running.",
+ "run_developer_daemon": "Run the autonomous developer agent. WARNING: This is a persistent process that runs in a loop.",
"run_qwen_mini_zh": "Using qwen model with minimal configuration to process tasks",
"run_qwen_zh": "Using qwen model to process tasks",
"run_azure_openai": "Using azure openai model to process tasks",
@@ -257,6 +259,7 @@ def process_message(role, content):
"run_ppio": "Using ppio model to process tasks",
"run_together_ai": "Using together ai model to process tasks",
"run_novita_ai": "Using novita ai model to process tasks",
+ "run_openrouter": "Using a custom model from OpenRouter.",
}
@@ -283,6 +286,9 @@ def process_message(role, content):
# DeepSeek API (https://platform.deepseek.com/api_keys)
DEEPSEEK_API_KEY='Your_Key'
+# OpenRouter API (https://openrouter.ai/keys)
+OPENROUTER_API_KEY='Your_Key'
+
#===========================================
# Tools & Services API
#===========================================
@@ -315,15 +321,26 @@ def validate_input(question: str) -> bool:
return True
-def run_owl(question: str, example_module: str) -> Tuple[str, str, str]:
- """Run the OWL system and return results
+def run_owl(
+ question: str,
+ example_module: str,
+ openrouter_model_name: str = None,
+ ollama_model_name: str = None,
+ message_queue: queue.Queue = None,
+) -> Tuple[str, str, str]:
+ """Run the OWL system and return results.
Args:
- question: User question
- example_module: Example module name to import (e.g., "run_terminal_zh" or "run_deep")
+ question: User question.
+ example_module: Example module name to import.
+ openrouter_model_name (str, optional): The name of the OpenRouter
+ model to use.
+ ollama_model_name (str, optional): The name of the Ollama model to use.
+ message_queue (queue.Queue, optional): A queue to stream messages
+ back to the UI.
Returns:
- Tuple[...]: Answer, token count, status
+ Tuple[...]: Answer, token count, status.
"""
global CURRENT_PROCESS
@@ -357,6 +374,8 @@ def run_owl(question: str, example_module: str) -> Tuple[str, str, str]:
try:
logging.info(f"Importing module: {module_path}")
module = importlib.import_module(module_path)
+ # Reload the module to ensure latest changes are used
+ importlib.reload(module)
except ImportError as ie:
logging.error(f"Unable to import module {module_path}: {str(ie)}")
return (
@@ -388,7 +407,21 @@ def run_owl(question: str, example_module: str) -> Tuple[str, str, str]:
# Build society simulation
try:
logging.info("Building society simulation...")
- society = module.construct_society(question)
+
+ # Inspect the signature of the target construct_society function
+ sig = inspect.signature(module.construct_society)
+ kwargs = {"question": question}
+
+ if "openrouter_model_name" in sig.parameters:
+ kwargs["openrouter_model_name"] = openrouter_model_name
+ logging.info(f"Passing openrouter_model_name: {openrouter_model_name}")
+
+ if "ollama_model_name" in sig.parameters:
+ kwargs["ollama_model_name"] = ollama_model_name
+ logging.info(f"Passing ollama_model_name: {ollama_model_name}")
+
+ # Call the function with the supported arguments
+ society = module.construct_society(**kwargs)
except Exception as e:
logging.error(f"Error occurred while building society simulation: {str(e)}")
@@ -401,7 +434,13 @@ def run_owl(question: str, example_module: str) -> Tuple[str, str, str]:
# Run society simulation
try:
logging.info("Running society simulation...")
- answer, chat_history, token_info = run_society(society)
+ # Inspect the signature of the run_society function
+ run_sig = inspect.signature(run_society)
+ run_kwargs = {"society": society}
+ if "message_queue" in run_sig.parameters:
+ run_kwargs["message_queue"] = message_queue
+
+ answer, chat_history, token_info = run_society(**run_kwargs)
logging.info("Society simulation completed")
except Exception as e:
logging.error(f"Error occurred while running society simulation: {str(e)}")
@@ -610,6 +649,7 @@ def is_api_related(key: str) -> bool:
"hugging",
"chunkr",
"firecrawl",
+ "openrouter",
]
# Check if it contains API-related keywords (case insensitive)
@@ -644,6 +684,8 @@ def get_api_guide(key: str) -> str:
return "https://www.firecrawl.dev/"
elif "novita" in key_lower:
return "https://novita.ai/settings/key-management?utm_source=github_owl&utm_medium=github_readme&utm_campaign=github_link"
+ elif "openrouter" in key_lower:
+ return "https://openrouter.ai/keys"
else:
return ""
@@ -801,71 +843,80 @@ def clear_log_file():
logging.error(f"Error clearing log file: {str(e)}")
return ""
- # Create a real-time log update function
- def process_with_live_logs(question, module_name):
- """Process questions and update logs in real-time"""
+ def stream_conversation(
+ question, module_name, openrouter_model_name=None, ollama_model_name=None
+ ):
+ """
+ Streams the agent conversation to the chatbot UI and updates logs.
+ """
global CURRENT_PROCESS
- # Clear log file
+ # Clear previous logs and chatbot history
clear_log_file()
+ yield [], "0", " Processing..."
- # Create a background thread to process the question
+ # Set up queues for communication
+ chat_queue = queue.Queue()
result_queue = queue.Queue()
+ # Define the background task for running the agent society
def process_in_background():
try:
- result = run_owl(question, module_name)
+ # Pass the chat_queue to run_owl
+ result = run_owl(
+ question,
+ module_name,
+ openrouter_model_name,
+ ollama_model_name,
+ chat_queue,
+ )
result_queue.put(result)
except Exception as e:
result_queue.put(
(f"Error occurred: {str(e)}", "0", f"❌ Error: {str(e)}")
)
- # Start background processing thread
+ # Start the background thread
bg_thread = threading.Thread(target=process_in_background)
- CURRENT_PROCESS = bg_thread # Record current process
+ CURRENT_PROCESS = bg_thread
bg_thread.start()
- # While waiting for processing to complete, update logs once per second
+ # Stream updates from the chat_queue to the chatbot UI
+ chat_history = []
-        while bg_thread.is_alive():
+        # Poll until the worker finishes and the chat queue is fully drained
+        while bg_thread.is_alive() or not chat_queue.empty():
- # Update conversation record display
- logs2 = get_latest_logs(100, LOG_QUEUE)
-
- # Always update status
- yield (
- "0",
- " Processing...",
- logs2,
- )
-
- time.sleep(1)
-
- # Processing complete, get results
- if not result_queue.empty():
- result = result_queue.get()
- answer, token_count, status = result
-
- # Final update of conversation record
- logs2 = get_latest_logs(100, LOG_QUEUE)
-
- # Set different indicators based on status
- if "Error" in status:
- status_with_indicator = (
- f" {status}"
- )
- else:
- status_with_indicator = (
- f" {status}"
- )
-
- yield token_count, status_with_indicator, logs2
+ try:
+ # Non-blocking get from the queue
+ message = chat_queue.get_nowait()
+
+ # Format for chatbot
+ user_msg = message.get("user", "")
+ assistant_msg = message.get("assistant", "")
+
+ # Append user and assistant messages to chat history
+ if user_msg:
+ chat_history.append((user_msg, None))
+ yield chat_history, "0", " User message..."
+
+                if assistant_msg and chat_history:
+                    chat_history[-1] = (chat_history[-1][0], assistant_msg)
+                    yield chat_history, "0", " Assistant message..."
+
+ except queue.Empty:
+ # Yield the current state if queue is empty
+ yield chat_history, "0", " Processing..."
+ time.sleep(0.5)
+
+ # Get final result from the result_queue
+ final_result = result_queue.get()
+ answer, token_count, status = final_result
+
+ # Final update to the UI
+ if "Error" in status:
+        status_with_indicator = status if "❌" in status else f"❌ {status}"
else:
- logs2 = get_latest_logs(100, LOG_QUEUE)
- yield (
- "0",
- " Terminated",
- logs2,
- )
+        status_with_indicator = f"✅ {status}"
+
+ yield chat_history, token_count, status_with_indicator
with gr.Blocks(title="OWL", theme=gr.themes.Soft(primary_hue="blue")) as app:
gr.Markdown(
@@ -1113,6 +1164,20 @@ def process_in_background():
elem_classes="module-info",
)
+ openrouter_model_name_textbox = gr.Textbox(
+ label="OpenRouter Model Name",
+ placeholder="e.g., mistralai/mistral-7b-instruct",
+ value="mistralai/mistral-7b-instruct",
+ visible=False,
+ )
+
+ ollama_model_name_textbox = gr.Textbox(
+ label="Ollama Model Name",
+ placeholder="e.g., llama3, mistral",
+ value="llama3",
+ visible=False,
+ )
+
with gr.Row():
run_button = gr.Button(
"Run", variant="primary", elem_classes="primary"
@@ -1144,22 +1209,34 @@ def process_in_background():
""")
- with gr.Tabs(): # Set conversation record as the default selected tab
- with gr.TabItem("Conversation Record"):
+ with gr.Tabs():
+ with gr.TabItem("Conversation"):
+ chatbot_display = gr.Chatbot(
+ label="Agent Conversation",
+ elem_id="chatbot",
+ height=600,
+ show_copy_button=True,
+ bubble_full_width=False,
+ )
+ with gr.Row():
+ clear_chatbot_button = gr.Button(
+ "Clear Conversation", variant="secondary"
+ )
+
+ with gr.TabItem("Full Logs"):
# Add conversation record display area
with gr.Group():
- log_display2 = gr.Markdown(
- value="No conversation records yet.",
- elem_classes="log-display",
+ log_display = gr.Markdown(
+ value="No logs yet.", elem_classes="log-display"
)
with gr.Row():
- refresh_logs_button2 = gr.Button("Refresh Record")
- auto_refresh_checkbox2 = gr.Checkbox(
+ refresh_logs_button = gr.Button("Refresh Logs")
+ auto_refresh_checkbox = gr.Checkbox(
label="Auto Refresh", value=True, interactive=True
)
- clear_logs_button2 = gr.Button(
- "Clear Record", variant="secondary"
+ clear_logs_button = gr.Button(
+ "Clear Logs", variant="secondary"
)
with gr.TabItem("Environment Variable Management", id="env-settings"):
@@ -1245,38 +1322,65 @@ def process_in_background():
refresh_button.click(fn=update_env_table, outputs=[env_table])
- # Set up event handling
+ # Set up event handling for the run button
run_button.click(
- fn=process_with_live_logs,
- inputs=[question_input, module_dropdown],
- outputs=[token_count_output, status_output, log_display2],
+ fn=stream_conversation,
+ inputs=[
+ question_input,
+ module_dropdown,
+ openrouter_model_name_textbox,
+ ollama_model_name_textbox,
+ ],
+ outputs=[chatbot_display, token_count_output, status_output],
)
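+    # Because stream_conversation is a generator, Gradio (with app.queue()
+    # enabled) streams each yielded (chat_history, token_count, status)
+    # tuple to the outputs above as it is produced.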
- # Module selection updates description
+ # Module selection updates description and UI visibility
+ def handle_module_change(module_name):
+ description = update_module_description(module_name)
+ openrouter_visible = module_name == "run_openrouter"
+ ollama_visible = module_name == "run_ollama"
+ return (
+ description,
+ gr.update(visible=openrouter_visible),
+ gr.update(visible=ollama_visible),
+ )
+
module_dropdown.change(
- fn=update_module_description,
+ fn=handle_module_change,
inputs=module_dropdown,
- outputs=module_description,
+ outputs=[
+ module_description,
+ openrouter_model_name_textbox,
+ ollama_model_name_textbox,
+ ],
)
- # Conversation record related event handling
- refresh_logs_button2.click(
- fn=lambda: get_latest_logs(100, LOG_QUEUE), outputs=[log_display2]
+ # --- Chatbot Tab ---
+ def clear_chatbot():
+ return []
+
+ clear_chatbot_button.click(fn=clear_chatbot, outputs=[chatbot_display])
+
+ # --- Full Logs Tab ---
+ refresh_logs_button.click(
+ fn=lambda: get_latest_logs(100, LOG_QUEUE), outputs=[log_display]
)
- clear_logs_button2.click(fn=clear_log_file, outputs=[log_display2])
+ clear_logs_button.click(fn=clear_log_file, outputs=[log_display])
- # Auto refresh control
def toggle_auto_refresh(enabled):
if enabled:
+                # Toggling Gradio's `every=` interval at runtime is unreliable:
+                # we return every=3 to (re)enable polling and every=None to
+                # pause it; a robust start/stop would need a different approach.
return gr.update(every=3)
else:
- return gr.update(every=0)
+ return gr.update(every=None)
- auto_refresh_checkbox2.change(
+ auto_refresh_checkbox.change(
fn=toggle_auto_refresh,
- inputs=[auto_refresh_checkbox2],
- outputs=[log_display2],
+ inputs=[auto_refresh_checkbox],
+ outputs=[log_display],
)
# Logs are no longer refreshed automatically by default
@@ -1285,7 +1389,7 @@ def toggle_auto_refresh(enabled):
# Main function
-def main():
+def main(port: int = 7860, share: bool = False):
try:
# Initialize logging system
global LOG_FILE
@@ -1305,7 +1409,9 @@ def main():
app.queue()
app.launch(
- share=False,
+ server_name="0.0.0.0",
+ server_port=port,
+ share=share,
favicon_path=os.path.join(
os.path.dirname(__file__), "assets", "owl-favicon.ico"
),
diff --git a/scripts/backup.sh b/scripts/backup.sh
new file mode 100755
index 000000000..a2043c760
--- /dev/null
+++ b/scripts/backup.sh
@@ -0,0 +1,17 @@
+#!/bin/bash
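+# Note: this script assumes it is run from the repository root, since the
+# paths below (owl/, examples/, scripts/) are relative.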
+echo "Starting backup..."
+# Remove old backup if it exists
+if [ -d "backup" ]; then
+ echo "Removing old backup directory..."
+ rm -rf backup
+fi
+# Create a new backup directory
+mkdir -p backup/owl
+mkdir -p backup/examples
+mkdir -p backup/scripts
+# Copy key directories to the backup
+echo "Copying key directories (owl, examples, scripts) to backup/..."
+cp -r owl/* backup/owl/
+cp -r examples/* backup/examples/
+cp -r scripts/* backup/scripts/
+echo "Backup completed successfully."
diff --git a/scripts/restore.sh b/scripts/restore.sh
new file mode 100755
index 000000000..01b9bced8
--- /dev/null
+++ b/scripts/restore.sh
@@ -0,0 +1,12 @@
+#!/bin/bash
+echo "Starting restore from backup..."
+if [ ! -d "backup" ]; then
+ echo "Error: Backup directory not found. Cannot restore."
+ exit 1
+fi
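+# Note: this overwrites changed files with their backed-up versions but does
+# not delete files that were created after the backup was taken.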
+# Copy the backed-up directories back to their original locations
+echo "Copying files from backup/ to root..."
+cp -r backup/owl/* owl/
+cp -r backup/examples/* examples/
+cp -r backup/scripts/* scripts/
+echo "Restore completed successfully."
diff --git a/scripts/test.sh b/scripts/test.sh
new file mode 100755
index 000000000..01bcf9157
--- /dev/null
+++ b/scripts/test.sh
@@ -0,0 +1,11 @@
+#!/bin/bash
+echo "Running smoke test..."
+# Try to import the main 'owl' module. If it fails, the application is likely broken.
+python -c "import owl"
+if [ $? -eq 0 ]; then
+ echo "Smoke test passed: Basic import of 'owl' module was successful."
+ exit 0
+else
+ echo "Smoke test FAILED: Could not import the 'owl' module."
+ exit 1
+fi
diff --git a/scripts/upgrade.sh b/scripts/upgrade.sh
new file mode 100755
index 000000000..ed8ad8b0e
--- /dev/null
+++ b/scripts/upgrade.sh
@@ -0,0 +1,4 @@
+#!/bin/bash
+echo "Attempting to upgrade by pulling the latest changes from git..."
+git pull
+echo "Upgrade attempt finished."
diff --git a/start.py b/start.py
new file mode 100644
index 000000000..9c3baa9ed
--- /dev/null
+++ b/start.py
@@ -0,0 +1,49 @@
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
+import argparse
+from owl.webapp import main as webapp_main
+
+def main():
+ """
+ The main entry point for starting the OWL Web Application.
+ This script handles command-line arguments for server configuration.
+ """
+ parser = argparse.ArgumentParser(
+ description="""
+ OWL: Optimized Workforce Learning for General Multi-Agent Assistance
+ in Real-World Task Automation.
+ """
+ )
+
+ parser.add_argument(
+ "--port",
+ type=int,
+ default=7860,
+ help="The port number to launch the Gradio web server on.",
+ )
+
+ parser.add_argument(
+ "--share",
+ action="store_true",
+ help="Set to True to create a public, shareable link for the web UI.",
+ )
+
+ args = parser.parse_args()
+
+ # Call the main function from the webapp with the parsed arguments
+ webapp_main(port=args.port, share=args.share)
+
+
+if __name__ == "__main__":
+ main()
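+
+# Example invocations (sketch; assumes dependencies are installed):
+#   python start.py                      # default: port 7860, local only
+#   python start.py --port 8080 --share  # custom port plus a public link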