Run your first example, and you can quickly experience the capabilities of the agents (or agent groups) built with agentUniverse through the tutorial.

Please refer to the document for detailed steps: [Run the first example](./docs/guidebook/en/1_Run_the_first_example.md).
****************************************
## How to build an intelligent agent application
### Using the framework

Set up the standard project scaffolding: [agentUniverse Standard Project](sample_standard_app)

#### Create and use agents
You can learn about the important components of agents through the [Introduction to Agents](./docs/guidebook/en/2_2_1_Agent.md). For detailed information on creating agents, refer to [Creating and Using Agents](./docs/guidebook/en/2_2_1_Agent_Create_And_Use.md). You can also deepen your understanding of the creation and usage of agents by exploring official examples, such as the [Python Code Generation and Execution Agent](./docs/guidebook/en/7_1_1_Python_Auto_Runner.md).
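For orientation, the sketch below shows the typical run loop used in the examples: boot the framework, look up a registered agent by name, and invoke it. The module paths, the `demo_agent` name, the config path, and the method names are illustrative assumptions drawn from the sample project and may differ in your version; treat the linked guides as authoritative.

```python
# Illustrative sketch; module paths, the 'demo_agent' name, and method
# signatures follow the sample project and may differ in your version.
from agentuniverse.base.agentuniverse import AgentUniverse
from agentuniverse.agent.agent_manager import AgentManager


def run_demo_agent(question: str) -> str:
    # Boot the framework so that agents declared in YAML are registered.
    AgentUniverse().start(config_path='config/config.toml')

    # Look up the agent instance by the name given in its YAML definition.
    agent = AgentManager().get_instance_obj('demo_agent')

    # Run the agent and read its textual output from the returned object.
    output = agent.run(input=question)
    return output.get_data('output')


if __name__ == '__main__':
    print(run_demo_agent('Summarize the latest AI news.'))
```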
#### Set up and use a knowledge base
In the construction of intelligent agent applications, knowledge base construction and recall are indispensable. The agentUniverse framework, based on RAG technology, provides an efficient standard operating procedure for knowledge base construction and the retrieval and recall process of RAG. You can learn about its usage through the [Knowledge Introduction](./docs/guidebook/en/2_2_4_Knowledge.md) and [Knowledge Definition and Usage](./docs/guidebook/en/2_2_4_Knowledge_Define_And_Use.md), and further master how to quickly build a knowledge base and create a recall-capable agent through [How to Build RAG Agents](./docs/guidebook/en/2_2_4_How_To_Build_A_RAG_Agent.md).
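Independently of the framework's Knowledge components, the retrieve-and-recall loop at the heart of RAG is easy to sketch: embed the documents once, embed the query at run time, and recall the most similar chunks as context for the agent. The toy word-count "embedding" below is only a stand-in for the embedding model you would configure in a real knowledge base.

```python
# Generic retrieval-and-recall sketch (not the agentUniverse Knowledge API):
# documents are embedded once, the query is embedded at run time, and the
# top-k most similar chunks are recalled as context for the agent prompt.
from collections import Counter
from math import sqrt


def embed(text: str) -> Counter:
    # Stand-in embedding: a word-count vector instead of a real embedding model.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def recall(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:top_k]


docs = [
    "agentUniverse supports multi-agent collaboration such as the PEER pattern.",
    "The knowledge component builds and recalls a domain knowledge base.",
    "Tools let agents call external APIs and services.",
]
print(recall("how do agents recall knowledge", docs))
```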
#### Create and use tools
In the construction of agent applications, agents need to connect to a variety of tools. You should specify a range of tools that they can use. You can integrate various proprietary APIs and services as tool plugins through [Tool Creation and Usage](./docs/guidebook/en/2_2_3_Tool_Create_And_Use.md). The framework has already integrated LangChain and some third-party toolkits. For detailed usage, you can refer to [Integrating LangChain Tools](./docs/guidebook/en/2_2_3_Integrated_LangChain_Tools.md) and [Existing Integrated Tools](./docs/guidebook/en/2_2_3_Integrated_Tools.md).
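As a rough picture of what a tool plugin amounts to, the sketch below wraps a (mocked) proprietary API in a named, described callable that an agent could select and execute. This is a generic illustration, not the framework's actual `Tool` base class; see the linked documents for the real interface and YAML registration.

```python
# Generic illustration of the tool-plugin idea (simplified; see the linked
# docs for the framework's actual Tool base class and YAML registration).
from dataclasses import dataclass
from typing import Callable


@dataclass
class ToolSketch:
    name: str         # identifier the agent uses to pick the tool
    description: str  # tells the LLM when the tool is applicable
    func: Callable[[str], str]

    def execute(self, tool_input: str) -> str:
        return self.func(tool_input)


def mock_stock_quote(symbol: str) -> str:
    # Stand-in for the proprietary API call the tool would wrap.
    return f"{symbol}: 123.45 (mock quote)"


quote_tool = ToolSketch(
    name="stock_quote_tool",
    description="Look up the latest quote for a stock symbol.",
    func=mock_stock_quote,
)
print(quote_tool.execute("BABA"))
```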
#### Effectiveness evaluation
The effectiveness evaluation of agents can be conducted through expert assessments on one hand and by leveraging the evaluation capabilities of the agents on the other. agentUniverse has launched DataAgent (Minimum Viable Product version), which aims to empower your agents with self-evaluation and evolution capabilities using agent intelligence. You can also customize the evaluation criteria within it. For more details, see the documentation: [DataAgent - Autonomous Data Agents](./docs/guidebook/en/8_1_1_data_autonomous_agent.md).
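The self-evaluation idea can be prototyped in a few lines: score an answer against a set of customizable criteria and aggregate the result. The dimensions below follow the ones used in the PEER experiments cited later; the naive scorer is only a placeholder for an evaluator agent or a human expert, not DataAgent's API.

```python
# Minimal sketch of criteria-based evaluation; the scorer is a placeholder
# for an evaluator agent or a human expert.
CRITERIA = [
    "completeness", "relevance", "conciseness", "factualness",
    "logicality", "structure", "comprehensiveness",
]


def score_answer(answer: str, scorer) -> dict[str, float]:
    # Each dimension is scored on a 0-5 scale, as in the PEER experiments.
    return {dim: scorer(answer, dim) for dim in CRITERIA}


def naive_scorer(answer: str, dimension: str) -> float:
    # Placeholder heuristic: longer answers get a slightly higher score.
    return min(5.0, 1.0 + len(answer) / 200)


scores = score_answer("agentUniverse separates agents, knowledge and tools ...", naive_scorer)
print(scores, "average:", sum(scores.values()) / len(scores))
```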
#### Agent serving
agentUniverse offers multiple standard web server capabilities, as well as standard HTTP and RPC protocols. You can further explore the documentation on [Service Registration and Usage](./docs/guidebook/en/2_4_1_Service_Registration_and_Usage.md) and the [Web Server](./docs/guidebook/en/2_4_1_Web_Server.md) sections.
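Once a service is registered and the web server is started, calling it is a plain HTTP request. The endpoint path, port, and JSON field names below are assumptions for illustration only; check the service registration and web server documents above for the actual contract of your deployment.

```python
# Hypothetical client call; the endpoint path, port, and field names are
# assumptions -- check the service registration docs for the real contract.
import json
import urllib.request

payload = {
    "service_id": "demo_service",          # name under which the agent service was registered
    "params": {"input": "What is agentUniverse?"},
}
req = urllib.request.Request(
    "http://127.0.0.1:8888/service_run",   # assumed local web server address
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))
```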
### Using the visual platform

agentUniverse provides a local visual canvas platform for agentic workflows. Please follow the steps below for a quick start:
**Install via pip**
```shell
# Install the platform packages (package names assumed; see the quick-start guide for the exact command).
pip install magent-ui ruamel-yaml
```

This feature is jointly launched by [difizen](https://github.com/difizen/magent) and agentUniverse.

****************************************

The core of agentUniverse provides all the key components needed to build a single intelligent agent, the collaboration mechanisms between multiple agents, and the injection of expert knowledge, enabling developers to easily create intelligent applications equipped with professional KnowHow.

The PEER model utilizes agents with four different responsibilities: Planning, Executing, Expressing, and Reviewing.
The PEER model has achieved exciting results, and the latest research findings and experimental results can be found in the following literature.
### Citation
The agentUniverse project is supported by the following research achievements.

BibTeX formatted:

```text
@misc{wang2024peerexpertizingdomainspecifictasks,
      title={PEER: Expertizing Domain-Specific Tasks with a Multi-Agent Framework and Tuning Methods},
      author={Yiying Wang and Xiaojing Li and Binzhu Wang and Yueyang Zhou and Han Ji and Hong Chen and Jinshi Zhang and Fei Yu and Zewei Zhao and Song Jin and Renji Gong and Wanqing Xu},
      year={2024},
      eprint={2407.06985},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2407.06985},
}
```
Overview: This document provides a detailed introduction to the mechanisms and principles of the PEER multi-agent framework. In the experimental section, scores were assigned across seven dimensions: completeness, relevance, conciseness, factualness, logicality, structure, and comprehensiveness (each dimension has a maximum score of 5 points). The PEER model scored higher on average in each evaluation dimension compared to BabyAGI and demonstrated significant advantages in the dimensions of completeness, relevance, logicality, structure, and comprehensiveness. Additionally, the PEER model achieved a superior rate of 83% over BabyAGI using the GPT-3.5 Turbo (16k) model, and 81% using the GPT-4 model. For more details, please refer to the document.
🔗 https://arxiv.org/pdf/2407.06985
### Key Features
Based on the above introduction, we summarize that agentUniverse includes the following main features:
Rich and Effective Multi-Agent Collaboration Models: It offers collaborative models such as PEER and DOE, which have been validated in industry practice.
Easy Integration of Domain Expertise: It offers capabilities for domain prompts, knowledge construction, and management, supporting the orchestration and injection of domain-level SOPs, aligning agents with expert-level domain knowledge.
💡 For more features, see the [key features of agentUniverse](./docs/guidebook/en/1_Core_Features.md) section.
⌨️ [agentUniverse Example Projects](sample_standard_app)
### Product Cases
🔗 [_Zhi Xiao Zhu_-AI Assistant for Financial Professionals](https://zhu.alipay.com/?from=au)
****************************************
**_Zhi Xiao Zhu_ AI Assistant: Facilitate the implementation of large models in rigorous industries to enhance the efficiency of investment research experts**
_Zhi Xiao Zhu_ AI Assistant is an efficient solution for the practical application of large models in rigorous industries. It is based on the Finix model, which focuses on precise applications, and the agentUniverse intelligent agent framework, which excels in professional customization. This solution targets a range of professional AI business assistants related to investment research, ESG (Environmental, Social, and Governance), finance, earnings reports, and other specialized areas. It has been extensively validated in large-scale scenarios at Ant Group, enhancing expert efficiency.
****************************************

This project is partially built on excellent open-source projects such as langchain, pydantic, gunicorn, flask, SQLAlchemy, chromadb, etc. (The detailed dependency list can be found in pyproject.toml). We would like to extend special thanks to the related projects and contributors. 🙏🙏🙏