Command chaining/piping #6256
Labels: AI efficacy, command system, fridge (items that can't be processed right now but can be of use or inspiration later), function: run shell commands
The ability to chain multiple commands or actions together, so that the output of one command feeds directly into the next, could significantly increase the efficiency of AutoGPT.
Related:
## Notes & ideas for implementation
### The simplest way?
The simplest way is probably to amend the current command execution interface to support calling multiple functions in a single step. The functions are executed in order, and the output of each command can be used as an argument for the next:
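A minimal sketch of what that could look like, assuming a JSON-style call format where placeholders like `"$0"` refer to the output of earlier commands in the chain. The placeholder syntax and the executor are illustrative assumptions, not AutoGPT's actual interface; the two commands are stubbed out so the sketch is self-contained:

```python
# Illustrative command registry; the real commands would fetch a page / write a file.
COMMANDS = {
    "get_text_from_webpage": lambda url: f"<text of {url}>",          # stub
    "write_file": lambda name, content: f"wrote {name}: {len(content)} chars",  # stub
}

def execute_chain(chain):
    """Run commands in order; resolve "$i" arguments to the output of step i."""
    outputs = []
    for step in chain:
        args = [outputs[int(a[1:])] if isinstance(a, str) and a.startswith("$") else a
                for a in step["args"]]
        outputs.append(COMMANDS[step["name"]](*args))
    return outputs[-1]

result = execute_chain([
    {"name": "get_text_from_webpage", "args": ["https://en.wikipedia.org/wiki/Otters"]},
    {"name": "write_file", "args": ["otters.txt", "$0"]},  # "$0" = output of step 0
])
```

The key property is that the scraped text never leaves the executor, so it is never round-tripped through the LLM.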
### Execution tree
More complex, harder to implement and probably more error prone, but worth considering:
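To illustrate the idea (the structure below is an assumption, not a proposed design): instead of a linear chain, each command is a node whose output can fan out to multiple child commands, so one expensive step (e.g. scraping a page) can feed several follow-up actions:

```python
from dataclasses import dataclass, field

@dataclass
class CommandNode:
    # Each node's function receives its parent's output; children form the tree.
    func: callable
    children: list = field(default_factory=list)

def run_tree(node, parent_output=None):
    """Depth-first execution; returns the outputs of all leaf nodes."""
    out = node.func(parent_output)
    if not node.children:
        return [out]
    return [r for child in node.children for r in run_tree(child, out)]

# One scrape fans out into two independent follow-up commands (all stubs).
root = CommandNode(
    lambda _: "page text",
    children=[
        CommandNode(lambda text: f"summary of {text!r}"),
        CommandNode(lambda text: f"saved {len(text)} chars"),
    ],
)
results = run_tree(root)
```

Error handling is what makes this harder in practice: a failure in one branch must not silently poison siblings that consumed the same parent output.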
### Command interface -> script-like execution
We could benefit from completely rethinking the command execution interface. For example, if we implement #6253 + #5132 and expose all available commands as functions that can be imported or executed in that environment, it could be as simple as this:
User: Please scrape the content of https://en.wikipedia.org/wiki/Otters and write it to a text file

AutoGPT: I have written the content of the webpage to the file `otters.txt`.
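In that single step, AutoGPT might generate and run a short script composing the existing commands directly. A hedged sketch, using the command names from the example below with stub implementations so it is runnable here:

```python
def get_text_from_webpage(url):
    # Stub; the real command would fetch and extract the page text.
    return f"<content of {url}>"

def write_file(name, content):
    # Stub; the real command would write to the workspace.
    return f"File {name} has been written successfully."

# The output of one command flows straight into the next; the page content
# never passes through the LLM's context.
result = write_file("otters.txt",
                    get_text_from_webpage("https://en.wikipedia.org/wiki/Otters"))
```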
Currently, it would be more like:

User: Please scrape the content of https://en.wikipedia.org/wiki/Otters and write it to a text file

AutoGPT: I will now get the content from the webpage.

Executing `get_text_from_webpage("https://en.wikipedia.org/wiki/Otters")`

AutoGPT: Now that I have scraped the webpage, I will save the content to a file called `otters.txt`.

Executing `write_file("otters.txt", "[lots of text/tokens here, very expensive and possibly slow!]")`
This is both slower and significantly more expensive, because we are copy-pasting the data through the LLM: the scraped text has to pass through the model's context just to be handed to the next command.