
Commit 2240033: Release v0.4.4 (#4906)
2 parents: 5318535 + 46f31cb

File tree: 124 files changed, 4950 additions, 755 deletions


.env.template
Lines changed: 8 additions & 4 deletions

@@ -58,15 +58,19 @@ OPENAI_API_KEY=your-openai-api-key
 ## USE_AZURE - Use Azure OpenAI or not (Default: False)
 # USE_AZURE=False

+## AZURE_CONFIG_FILE - The path to the azure.yaml file (Default: azure.yaml)
+# AZURE_CONFIG_FILE=azure.yaml
+
+
 ################################################################################
 ### LLM MODELS
 ################################################################################

-## SMART_LLM_MODEL - Smart language model (Default: gpt-3.5-turbo)
-# SMART_LLM_MODEL=gpt-3.5-turbo
+## SMART_LLM - Smart language model (Default: gpt-4)
+# SMART_LLM=gpt-4

-## FAST_LLM_MODEL - Fast language model (Default: gpt-3.5-turbo)
-# FAST_LLM_MODEL=gpt-3.5-turbo
+## FAST_LLM - Fast language model (Default: gpt-3.5-turbo)
+# FAST_LLM=gpt-3.5-turbo

 ## EMBEDDING_MODEL - Model to use for creating embeddings
 # EMBEDDING_MODEL=text-embedding-ada-002
.github/CODEOWNERS
Lines changed: 2 additions & 1 deletion

@@ -1 +1,2 @@
-.github/workflows/ @Significant-Gravitas/Auto-GPT-Source
+.github/workflows/ @Significant-Gravitas/maintainers
+autogpt/core @collijk

.github/ISSUE_TEMPLATE/1.bug.yml
Lines changed: 3 additions & 3 deletions

@@ -140,8 +140,8 @@ body:
       ⚠️The following is OPTIONAL, please keep in mind that the log files may contain personal information such as credentials.⚠️

       "The log files are located in the folder 'logs' inside the main auto-gpt folder."
-  - type: input
+  - type: textarea
     attributes:
       label: Upload Activity Log Content
       description: |
@@ -152,7 +152,7 @@ body:
     validations:
       required: false

-  - type: input
+  - type: textarea
     attributes:
       label: Upload Error Log Content
       description: |

.github/workflows/ci.yml
Lines changed: 9 additions & 4 deletions

@@ -108,22 +108,27 @@ jobs:
         if: ${{ startsWith(github.event_name, 'pull_request') }}
         run: |
           cassette_branch="${{ github.event.pull_request.user.login }}-${{ github.event.pull_request.head.ref }}"
+          cassette_base_branch="${{ github.event.pull_request.base.ref }}"
           cd tests/Auto-GPT-test-cassettes

+          if ! git ls-remote --exit-code --heads origin $cassette_base_branch ; then
+            cassette_base_branch="master"
+          fi
+
           if git ls-remote --exit-code --heads origin $cassette_branch ; then
             git fetch origin $cassette_branch
-            git fetch origin ${{ github.event.pull_request.base.ref }}
+            git fetch origin $cassette_base_branch

             git checkout $cassette_branch

             # Pick non-conflicting cassette updates from the base branch
-            git merge --no-commit --strategy-option=ours origin/${{ github.event.pull_request.base.ref }}
+            git merge --no-commit --strategy-option=ours origin/$cassette_base_branch
             echo "Using cassettes from mirror branch '$cassette_branch'," \
-              "synced to upstream branch '${{ github.event.pull_request.base.ref }}'."
+              "synced to upstream branch '$cassette_base_branch'."
           else
             git checkout -b $cassette_branch
             echo "Branch '$cassette_branch' does not exist in cassette submodule." \
-              "Using cassettes from '${{ github.event.pull_request.base.ref }}'."
+              "Using cassettes from '$cassette_base_branch'."
           fi

       - name: Set up Python ${{ matrix.python-version }}

.pre-commit-config.yaml
Lines changed: 1 addition & 1 deletion

@@ -36,7 +36,7 @@ repos:
         types: [ python ]
       - id: pytest-check
         name: pytest-check
-        entry: pytest --cov=autogpt --without-integration --without-slow-integration
+        entry: pytest --cov=autogpt tests/unit
         language: system
         pass_filenames: false
         always_run: true

BULLETIN.md
Lines changed: 25 additions & 18 deletions

@@ -1,22 +1,29 @@
-# Website and Documentation Site 📰📖
-Check out *https://agpt.co*, the official news & updates site for Auto-GPT!
-The documentation also has a place here, at *https://docs.agpt.co*
+# QUICK LINKS 🔗
+# --------------
+🌎 *Official Website*: https://agpt.co.
+📖 *User Guide*: https://docs.agpt.co.
+👩 *Contributors Wiki*: https://github.com/Significant-Gravitas/Auto-GPT/wiki/Contributing.

-# For contributors 👷🏼
-Since releasing v0.3.0, whave been working on re-architecting the Auto-GPT core to make it more extensible and make room for structural performance-oriented R&D.
+# v0.4.4 RELEASE HIGHLIGHTS! 🚀
+# -----------------------------
+## GPT-4 is back!
+Following OpenAI's recent GPT-4 GA announcement, the SMART_LLM .env setting
+now defaults to GPT-4, and Auto-GPT will use GPT-4 by default in its main loop.

-Check out the contribution guide on our wiki:
-https://github.com/Significant-Gravitas/Auto-GPT/wiki/Contributing
+### !! High Costs Warning !! 💰💀🚨
+GPT-4 costs ~20x more than GPT-3.5-turbo.
+Please take note of this before using SMART_LLM. You can use `--gpt3only`
+or `--gpt4only` to force the use of GPT-3.5-turbo or GPT-4, respectively,
+at runtime.

-# 🚀 v0.4.3 Release 🚀
-We're happy to announce the 0.4.3 maintenance release, which primarily focuses on refining the LLM command execution,
-extending support for OpenAI's latest models (including the powerful GPT-3 16k model), and laying the groundwork
-for future compatibility with OpenAI's function calling feature.
+## Re-arch v1 preview release!
+We've released a preview version of the re-arch code, under `autogpt/core`.
+This is a major milestone for us, and we're excited to continue working on it.
+We look forward to your feedback. Follow the process here:
+https://github.com/Significant-Gravitas/Auto-GPT/issues/4770.

-Key Highlights:
-- OpenAI API Key Prompt: Auto-GPT will now courteously prompt users for their OpenAI API key, if it's not already provided.
-- Summarization Enhancements: We've optimized Auto-GPT's use of the LLM context window even further.
-- JSON Memory Reading: Support for reading memories from JSON files has been improved, resulting in enhanced task execution.
-- Deprecated commands, removed for a leaner, more performant LLM: analyze_code, write_tests, improve_code, audio_text, web_playwright, web_requests
-## Take a look at the Release Notes on Github for the full changelog!
-https://github.com/Significant-Gravitas/Auto-GPT/releases
+## Other highlights
+Other fixes include plugins regressions, Azure config and security patches.
+
+Take a look at the Release Notes on Github for the full changelog!
+https://github.com/Significant-Gravitas/Auto-GPT/releases.

autogpt/agent/agent.py
Lines changed: 16 additions & 16 deletions

@@ -2,6 +2,7 @@
 import signal
 import sys
 from datetime import datetime
+from pathlib import Path

 from colorama import Fore, Style

@@ -64,7 +65,7 @@ def __init__(
         ai_config: AIConfig,
         system_prompt: str,
         triggering_prompt: str,
-        workspace_directory: str,
+        workspace_directory: str | Path,
         config: Config,
     ):
         self.ai_name = ai_name
@@ -80,13 +81,11 @@ def __init__(
         self.created_at = datetime.now().strftime("%Y%m%d_%H%M%S")
         self.cycle_count = 0
         self.log_cycle_handler = LogCycleHandler()
-        self.fast_token_limit = OPEN_AI_CHAT_MODELS.get(
-            config.fast_llm_model
-        ).max_tokens
+        self.smart_token_limit = OPEN_AI_CHAT_MODELS.get(config.smart_llm).max_tokens

     def start_interaction_loop(self):
         # Avoid circular imports
-        from autogpt.app import execute_command, get_command
+        from autogpt.app import execute_command, extract_command

         # Interaction Loop
         self.cycle_count = 0
@@ -137,8 +136,8 @@ def signal_handler(signum, frame):
             self,
             self.system_prompt,
             self.triggering_prompt,
-            self.fast_token_limit,
-            self.config.fast_llm_model,
+            self.smart_token_limit,
+            self.config.smart_llm,
         )

         try:
@@ -162,11 +161,11 @@ def signal_handler(signum, frame):
             print_assistant_thoughts(
                 self.ai_name, assistant_reply_json, self.config
             )
-            command_name, arguments = get_command(
+            command_name, arguments = extract_command(
                 assistant_reply_json, assistant_reply, self.config
             )
             if self.config.speak_mode:
-                say_text(f"I want to execute {command_name}")
+                say_text(f"I want to execute {command_name}", self.config)

             arguments = self._resolve_pathlike_command_args(arguments)

@@ -195,8 +194,9 @@ def signal_handler(signum, frame):
             # to exit
             self.user_input = ""
             logger.info(
-                "Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands, "
-                "'n' to exit program, or enter feedback for "
+                f"Enter '{self.config.authorise_key}' to authorise command, "
+                f"'{self.config.authorise_key} -N' to run N continuous commands, "
+                f"'{self.config.exit_key}' to exit program, or enter feedback for "
                 f"{self.ai_name}..."
             )
             while True:
@@ -224,8 +224,8 @@ def signal_handler(signum, frame):
                 user_input = "GENERATE NEXT COMMAND JSON"
             except ValueError:
                 logger.warn(
-                    "Invalid input format. Please enter 'y -n' where n is"
-                    " the number of continuous tasks."
+                    f"Invalid input format. Please enter '{self.config.authorise_key} -n' "
+                    "where n is the number of continuous tasks."
                 )
                 continue
             break
@@ -281,12 +281,12 @@ def signal_handler(signum, frame):
             result = f"Command {command_name} returned: " f"{command_result}"

             result_tlength = count_string_tokens(
-                str(command_result), self.config.fast_llm_model
+                str(command_result), self.config.smart_llm
             )
             memory_tlength = count_string_tokens(
-                str(self.history.summary_message()), self.config.fast_llm_model
+                str(self.history.summary_message()), self.config.smart_llm
             )
-            if result_tlength + memory_tlength + 600 > self.fast_token_limit:
+            if result_tlength + memory_tlength + 600 > self.smart_token_limit:
                 result = f"Failure: command {command_name} returned too much output. \
                     Do not execute this command again with the same arguments."
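
The output-size guard in the last hunk can be sketched in isolation. This is a simplified stand-in, not the real autogpt helpers: `count_tokens` below is a crude word counter, whereas autogpt's `count_string_tokens` uses the model's actual tokenizer, and the function and parameter names are illustrative.

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: ~1 token per whitespace-separated word.
    return len(text.split())


def guard_command_output(
    command_name: str,
    command_result: str,
    memory_summary: str,
    token_limit: int,
    reserved: int = 600,  # headroom kept for the rest of the prompt, as in the diff
) -> str:
    """Replace over-long command output so the next prompt fits the model window."""
    used = count_tokens(command_result) + count_tokens(memory_summary) + reserved
    if used > token_limit:
        return (
            f"Failure: command {command_name} returned too much output. "
            "Do not execute this command again with the same arguments."
        )
    return f"Command {command_name} returned: {command_result}"
```

The point of the change itself is that this budget is now measured against `smart_llm`'s window rather than `fast_llm`'s, since the main loop now prompts the smart model.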

autogpt/app.py
Lines changed: 9 additions & 31 deletions

@@ -23,7 +23,7 @@ def is_valid_int(value: str) -> bool:
         return False


-def get_command(
+def extract_command(
     assistant_reply_json: Dict, assistant_reply: ChatModelResponse, config: Config
 ):
     """Parse the response and return the command name and arguments
@@ -78,21 +78,6 @@ def extract_command(
         return "Error:", str(e)


-def map_command_synonyms(command_name: str):
-    """Takes the original command name given by the AI, and checks if the
-    string matches a list of common/known hallucinations
-    """
-    synonyms = [
-        ("write_file", "write_to_file"),
-        ("create_file", "write_to_file"),
-        ("search", "google"),
-    ]
-    for seen_command, actual_command_name in synonyms:
-        if command_name == seen_command:
-            return actual_command_name
-    return command_name
-
-
 def execute_command(
     command_name: str,
     arguments: dict[str, str],
@@ -109,28 +94,21 @@ def execute_command(
         str: The result of the command
     """
     try:
-        cmd = agent.command_registry.commands.get(command_name)
+        # Execute a native command with the same name or alias, if it exists
+        if command := agent.command_registry.get_command(command_name):
+            return command(**arguments, agent=agent)

-        # If the command is found, call it with the provided arguments
-        if cmd:
-            return cmd(**arguments, agent=agent)
-
-        # TODO: Remove commands below after they are moved to the command registry.
-        command_name = map_command_synonyms(command_name.lower())
-
-        # TODO: Change these to take in a file rather than pasted code, if
-        # non-file is given, return instructions "Input should be a python
-        # filepath, write your code to file and try again
+        # Handle non-native commands (e.g. from plugins)
         for command in agent.ai_config.prompt_generator.commands:
             if (
                 command_name == command["label"].lower()
                 or command_name == command["name"].lower()
             ):
                 return command["function"](**arguments)
-        return (
-            f"Unknown command '{command_name}'. Please refer to the 'COMMANDS'"
-            " list for available commands and only respond in the specified JSON"
-            " format."
+
+        raise RuntimeError(
+            f"Cannot execute '{command_name}': unknown command."
+            " Do not try to use this command again."
         )
     except Exception as e:
         return f"Error: {str(e)}"
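
The reworked dispatch path replaces the hard-coded `map_command_synonyms` table with alias resolution inside the registry. A self-contained toy version of that flow, assuming a minimal `CommandRegistry` that is not the real autogpt class:

```python
from typing import Callable, Optional


class CommandRegistry:
    """Toy registry: commands by canonical name, plus an alias -> name map."""

    def __init__(self) -> None:
        self.commands: dict[str, Callable] = {}
        self.aliases: dict[str, str] = {}

    def register(self, name: str, func: Callable, aliases: list[str] = []) -> None:
        self.commands[name] = func
        for alias in aliases:
            self.aliases[alias] = name

    def get_command(self, name: str) -> Optional[Callable]:
        # Resolve an alias to its canonical name, then look up the command.
        return self.commands.get(self.aliases.get(name, name))


def execute_command(registry: CommandRegistry, name: str, arguments: dict) -> str:
    try:
        # Walrus operator: bind the lookup result and test it in one step,
        # mirroring the new `if command := ...` branch in the diff.
        if command := registry.get_command(name):
            return command(**arguments)
        raise RuntimeError(
            f"Cannot execute '{name}': unknown command."
            " Do not try to use this command again."
        )
    except Exception as e:
        return f"Error: {str(e)}"
```

Raising and catching `RuntimeError` (rather than returning the message directly) keeps unknown-command handling on the same path as any other command failure.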

autogpt/cli.py
Lines changed: 24 additions & 0 deletions

@@ -1,4 +1,6 @@
 """Main script for the autogpt package."""
+from typing import Optional
+
 import click


@@ -65,6 +67,22 @@
     is_flag=True,
     help="Installs external dependencies for 3rd party plugins.",
 )
+@click.option(
+    "--ai-name",
+    type=str,
+    help="AI name override",
+)
+@click.option(
+    "--ai-role",
+    type=str,
+    help="AI role override",
+)
+@click.option(
+    "--ai-goal",
+    type=str,
+    multiple=True,
+    help="AI goal override; may be used multiple times to pass multiple goals",
+)
 @click.pass_context
 def main(
     ctx: click.Context,
@@ -83,6 +101,9 @@ def main(
     skip_news: bool,
     workspace_directory: str,
     install_plugin_deps: bool,
+    ai_name: Optional[str],
+    ai_role: Optional[str],
+    ai_goal: tuple[str],
 ) -> None:
     """
     Welcome to AutoGPT an experimental open-source application showcasing the capabilities of the GPT-4 pushing the boundaries of AI.
@@ -109,6 +130,9 @@ def main(
         skip_news,
         workspace_directory,
         install_plugin_deps,
+        ai_name,
+        ai_role,
+        ai_goal,
     )
autogpt/command_decorator.py
Lines changed: 2 additions & 0 deletions

@@ -20,6 +20,7 @@ def command(
     parameters: dict[str, CommandParameterSpec],
     enabled: bool | Callable[[Config], bool] = True,
     disabled_reason: Optional[str] = None,
+    aliases: list[str] = [],
 ) -> Callable[..., Any]:
     """The command decorator is used to create Command objects from ordinary functions."""

@@ -40,6 +41,7 @@ def decorator(func: Callable[..., Any]) -> Command:
         parameters=typed_parameters,
         enabled=enabled,
         disabled_reason=disabled_reason,
+        aliases=aliases,
     )

     @functools.wraps(func)
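
A minimal sketch of how an `aliases` parameter can travel through a decorator like this one. The real autogpt decorator builds a `Command` object rather than tagging the function, so the attribute names and the example command below are illustrative, not the actual API:

```python
import functools
from typing import Any, Callable


def command(
    name: str,
    description: str,
    aliases: list[str] = [],
) -> Callable[..., Any]:
    """Toy command decorator: attaches registry metadata to the function."""

    def decorator(func: Callable[..., Any]) -> Callable[..., Any]:
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            return func(*args, **kwargs)

        # Metadata a registry could read to register the command and its aliases.
        wrapper.command_name = name
        wrapper.description = description
        wrapper.aliases = aliases
        return wrapper

    return decorator


@command(
    "write_to_file",
    "Write text to a file",
    aliases=["write_file", "create_file"],
)
def write_to_file(filename: str, text: str) -> str:
    return f"wrote {len(text)} chars to {filename}"
```

This is what lets the dispatch change in autogpt/app.py drop the hard-coded synonym table: each command now declares its own known aliases at definition time.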
