
Add support for remote ollama hosts #8225

Open

Fried-Squid opened this issue Sep 30, 2024 · 1 comment

Duplicates

  • I have searched the existing issues

Summary 💡

The LLM call blocks should support an additional input for an Ollama host other than localhost.

Will self-assign and PR later this week.
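
For illustration, a minimal sketch of what a configurable host could look like using the ollama Python client, which already accepts a `host` argument. The host URL and model name below are placeholder assumptions, and the actual block input added in the PR may look different:

```python
# Minimal sketch, assuming the ollama Python client (pip install ollama).
# The remote host URL and model name are illustrative placeholders, not
# AutoGPT defaults.
from ollama import Client

# Point the client at a remote Ollama server instead of the implicit
# default of http://localhost:11434.
client = Client(host="http://192.168.1.50:11434")

response = client.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello from the remote host."}],
)
print(response["message"]["content"])
```

The block input would only need to thread a host string like this through to the client constructor, falling back to localhost when unset.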

Examples 🌈

No response

Motivation 🔦

Supporting remote Ollama servers would let self-hosted users run AutoGPT and Ollama across multiple hosts, each better suited to particular tasks, e.g. running Ollama on a Jetson board.

Fried-Squid (Author)
PR #8234
