# BeeAI Framework for Python

**Project status: Alpha**

Build production-ready multi-agent systems. Also available in TypeScript.

Apache 2.0 · Follow on Bluesky · Join our Discord · LF AI & Data

## Key features

The BeeAI framework provides a comprehensive set of features for building powerful AI agents:

### Core building blocks

| Feature | Description |
| --- | --- |
| **Agents** | Create intelligent, autonomous agents using the ReAct pattern. Build agents that can reason about problems, take appropriate actions, and adapt their approach based on feedback. Includes pre-built agent architectures and customizable components. |
| **Workflows** | Orchestrate complex multi-agent systems where specialized agents collaborate to solve problems. Define sequential or conditional execution flows with state management and observability. |
| **Backend** | Connect to various LLM providers such as Ollama, watsonx.ai, and more. Offers unified interfaces for chat, embeddings, and structured outputs, making it easy to swap models without changing your code. |
| **Tools** | Extend agent capabilities with ready-to-use tools for web search, weather forecasting, knowledge retrieval, code execution, and more. Create custom tools to connect agents to any API or service. |
| **Memory** | Manage conversation history with different memory strategies. Choose from unconstrained memory, token-aware memory, sliding-window memory, or summarization memory based on your needs. |
| **Templates** | Build flexible prompt templates using an enhanced Mustache syntax. Create reusable templates with variables, conditionals, and loops to generate well-structured prompts. |
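To make the memory strategies concrete, here is a minimal sketch of the sliding-window idea in plain Python: only the most recent messages are retained, older ones are evicted. This is a conceptual illustration only, not the framework's actual memory API — class and method names here are made up for the example.

```python
from collections import deque


class SlidingWindowMemory:
    """Conceptual sketch of sliding-window memory: keep only the
    N most recent messages. Not the BeeAI framework's real class."""

    def __init__(self, window_size: int) -> None:
        # deque with maxlen silently drops the oldest item when full
        self.messages: deque = deque(maxlen=window_size)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def as_list(self) -> list[dict]:
        return list(self.messages)


memory = SlidingWindowMemory(window_size=2)
memory.add("user", "What's the weather?")
memory.add("assistant", "Sunny, 25°C.")
memory.add("user", "And tomorrow?")  # evicts the oldest message

print([m["content"] for m in memory.as_list()])
# ['Sunny, 25°C.', 'And tomorrow?']
```

Token-aware and summarization strategies follow the same pattern, but evict or compress based on token counts rather than message counts.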

### Production optimization

| Feature | Description |
| --- | --- |
| **Cache** | Optimize performance and reduce costs with caching mechanisms for tool outputs and LLM responses. Implement different caching strategies based on your application requirements. |
| **Serialization** | Save and load agent state for persistence across sessions. Serialize workflows, memory, and other components to support stateful applications. |
| **Errors** | Implement robust error management with specialized error classes. Distinguish between different error types and implement appropriate recovery strategies. |

> [!NOTE]
> Cache and serialization features are not yet implemented in Python, but they are coming soon!

### Observability & control

| Feature | Description |
| --- | --- |
| **Emitter** | Gain visibility into agent decision processes with a flexible event system. Subscribe to events like updates, errors, and tool executions to monitor agent behavior. |
| **Logger** | Track agent actions and system events with comprehensive logging. Configure logging levels and outputs to support debugging and monitoring. |
| **Instrumentation** | Monitor performance and usage with OpenTelemetry integration. Collect metrics and traces to understand system behavior in production environments. |
| **Version** | Access framework version information programmatically to ensure compatibility. |

> [!NOTE]
> Instrumentation and version features are not yet implemented in Python, but they are coming soon!
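The emitter pattern behind the event system can be sketched as a tiny publish/subscribe hub: handlers register for named events and get called when those events fire. This is a conceptual illustration of the pattern, not the BeeAI framework's actual `Emitter` API.

```python
from collections import defaultdict
from typing import Any, Callable


class Emitter:
    """Tiny publish/subscribe hub, conceptually similar to the
    framework's event system (not the actual BeeAI API)."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def on(self, event: str, handler: Callable[[Any], None]) -> None:
        """Register a handler for a named event."""
        self._handlers[event].append(handler)

    def emit(self, event: str, payload: Any) -> None:
        """Call every handler registered for this event."""
        for handler in self._handlers[event]:
            handler(payload)


emitter = Emitter()
log: list[str] = []
emitter.on("update", lambda data: log.append(f"update: {data}"))
emitter.on("error", lambda data: log.append(f"error: {data}"))

emitter.emit("update", "tool started")
emitter.emit("error", "tool failed")
print(log)
# ['update: tool started', 'error: tool failed']
```

The same shape appears in the quick example below, where a `success` handler is attached to a workflow run to print each completed step.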

## Tutorials

| Topic | Description |
| --- | --- |
| How to Slack with Bee | This tutorial guides you through integrating the BeeAI Python framework with the Slack API. By the end, the agent will be able to post messages to a Slack channel. |
| BeeAI integration using RemoteAgent | BeeAI is an open platform that helps you discover, run, and compose AI agents from any framework and language. In this tutorial you will learn how to integrate BeeAI agents into the framework. |

## Prerequisites

✅ Python >= 3.11

## Installation

Install the BeeAI framework using pip:

```shell
pip install beeai-framework
```

## Quick example

The following example demonstrates how to build a multi-agent workflow using the BeeAI framework:

```python
import asyncio

from beeai_framework.backend.chat import ChatModel
from beeai_framework.tools.search.wikipedia import WikipediaTool
from beeai_framework.tools.weather.openmeteo import OpenMeteoTool
from beeai_framework.workflows.agent import AgentWorkflow, AgentWorkflowInput


async def main() -> None:
    llm = ChatModel.from_name("ollama:llama3.1")
    workflow = AgentWorkflow(name="Smart assistant")

    workflow.add_agent(
        name="Researcher",
        role="A diligent researcher.",
        instructions="You look up and provide information about a specific topic.",
        tools=[WikipediaTool()],
        llm=llm,
    )

    workflow.add_agent(
        name="WeatherForecaster",
        role="A weather reporter.",
        instructions="You provide detailed weather reports.",
        tools=[OpenMeteoTool()],
        llm=llm,
    )

    workflow.add_agent(
        name="DataSynthesizer",
        role="A meticulous and creative data synthesizer",
        instructions="You can combine disparate information into a final coherent summary.",
        llm=llm,
    )

    location = "Saint-Tropez"

    response = await workflow.run(
        inputs=[
            AgentWorkflowInput(
                prompt=f"Provide a short history of {location}.",
            ),
            AgentWorkflowInput(
                prompt=f"Provide a comprehensive weather summary for {location} today.",
                expected_output="Essential weather details such as chance of rain, temperature and wind. Only report information that is available.",
            ),
            AgentWorkflowInput(
                prompt=f"Summarize the historical and weather data for {location}.",
                expected_output=f"A paragraph that describes the history of {location}, followed by the current weather conditions.",
            ),
        ]
    ).on(
        "success",
        lambda data, event: print(
            f"\n-> Step '{data.step}' has been completed with the following outcome.\n\n{data.state.final_answer}"
        ),
    )

    print("==== Final Answer ====")
    print(response.result.final_answer)


if __name__ == "__main__":
    asyncio.run(main())
```

Source: python/examples/workflows/multi_agents_simple.py

### Running the example

> [!NOTE]
> To run this example, make sure you have Ollama installed and the llama3.1 model pulled (the example uses `ollama:llama3.1`).

To run projects, use:

```shell
python [project_name].py
```

➡️ Explore more in our examples library.

## Contribution guidelines

The BeeAI framework is an open-source project and we ❤️ contributions.

If you'd like to help build BeeAI, take a look at our contribution guidelines.

### Bugs

We use GitHub Issues to track public bugs. We keep a close eye on this, so before filing a new issue, please check that it hasn't already been logged.

### Code of conduct

This project and everyone participating in it are governed by the Code of Conduct. By participating, you are expected to uphold this code. Please read the full text to understand which actions will and will not be tolerated.

## Legal notice

All content in these repositories including code has been provided by IBM under the associated open source software license and IBM is under no obligation to provide enhancements, updates, or support. IBM developers produced this code as an open source project (not as an IBM product), and IBM makes no assertions as to the level of quality nor security, and will not be maintaining this code going forward.

## Maintainers

For information about maintainers, see MAINTAINERS.md.

## Contributors

Special thanks to our contributors for helping us improve the BeeAI framework.


Developed by contributors to the BeeAI project, this initiative is part of the Linux Foundation AI & Data program. Its development follows open, collaborative, and community-driven practices.