Ollama - Remote hosts #8234
base: master
Conversation
Ace (Fried_Squid / therift) seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. Have you already signed the CLA but the status is still pending? Let us recheck it.
This is a super nice change and it's much needed 🙏 once CI tests pass it should be good to go!
CI/CD all passing now 😄
@@ -104,6 +108,11 @@ class Input(BlockSchema):
    prompt_values: dict[str, str] = SchemaField(
        advanced=False, default={}, description="Values used to fill in the prompt."
    )
    ollama_host: str = SchemaField(
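(For context, a plausible completion of the new field; the default value and the description wording are assumptions on my part, based on the PR description saying the field defaults to localhost:)

    ollama_host: str = SchemaField(
        advanced=True,
        default="localhost:11434",  # assumed default; 11434 is Ollama's standard port
        description="Host of the Ollama server to use (assumed wording).",
    )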
@Bentlybro is there a way to conditionally show fields like this only if Ollama is the selected model?
@ntindle I don't think we have that set up at the moment, but maybe it's something we should look into, because I can already see quite a lot of use cases for it.
Just thinking, would it be worth splitting model selection into provider then model, rather than just model? Some providers (e.g. Ollama) offer a wide variety of models, which may overlap with other providers, so being able to choose both the provider and the model would give users more control. I imagine it would also make conditional inputs on the blocks a bit easier, since we wouldn't have to look up the provider based on the model; it would already be in the block inputs. A rough sketch of what I mean is below.
Let me know and I can try to get a PR out for that functionality soon.
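A rough sketch of what that split might look like on a block's input schema. BlockSchema and SchemaField are the names already used in the diff above; the provider enum, field names, and defaults here are purely illustrative assumptions, not existing AutoGPT code:

    from enum import Enum

    class LlmProvider(str, Enum):
        # Illustrative provider list; the real set would come from the backend.
        OPENAI = "openai"
        ANTHROPIC = "anthropic"
        OLLAMA = "ollama"

    class Input(BlockSchema):
        provider: LlmProvider = SchemaField(
            default=LlmProvider.OPENAI, description="Which LLM provider to use."
        )
        model: str = SchemaField(
            description="Model name, interpreted relative to the chosen provider."
        )
        ollama_host: str = SchemaField(
            advanced=True,
            default="localhost:11434",
            description="Only relevant when provider is 'ollama'.",
        )

With the provider explicit in the inputs, the frontend could show or hide ollama_host without having to infer the provider from the model name.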
@Fried-Squid I think that's a very good idea, it makes a lot more sense being able to do it that way. @ntindle @Torantulino what do we think? It should be pretty easy to do, and as far as I can see it should just be changes to the block itself.
@Bentlybro the one issue I see with it is getting the frontend to display that properly: we'd need to change the JSON schema that gets passed to the frontend to render the available model selection. Or we could just throw an error when the model and provider don't match, but that seems pretty hostile to new users who might not understand the differences between the providers and models.
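If the split isn't adopted, the mismatch-error fallback mentioned here could look roughly like this; the lookup table and helper are hypothetical, not existing AutoGPT code:

    # Hypothetical mapping from known model names to their providers.
    MODEL_TO_PROVIDER = {
        "gpt-4o": "openai",
        "claude-3-5-sonnet-latest": "anthropic",
        "llama3.2": "ollama",
    }

    def check_model_matches_provider(model: str, provider: str) -> None:
        expected = MODEL_TO_PROVIDER.get(model)
        if expected is not None and expected != provider:
            raise ValueError(
                f"Model '{model}' belongs to provider '{expected}', not '{provider}'."
            )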
Background
Currently, AutoGPT only supports Ollama servers running locally. Often this is not the case: the Ollama server may be running on a better-suited machine, such as a Jetson board. This PR adds an "ollama host" field to the input of all LLM blocks, allowing users to select the Ollama host for each block.
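For reference, the official ollama Python client already accepts a host argument, so presumably the block just needs to pass the new input through when building the client. A minimal sketch, with an example remote address:

    from ollama import Client

    # Point the client at a remote Ollama server instead of the default localhost:11434.
    client = Client(host="http://192.168.1.42:11434")  # example address

    response = client.chat(
        model="llama3.2",
        messages=[{"role": "user", "content": "Hello from a remote Ollama host"}],
    )
    print(response["message"]["content"])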
Changes 🏗️
Testing 🔍
Tested all LLM blocks with Ollama remote hosts as well as with the default localhost value.
Related issues
#8225