Simple AI interface to chat with your Ollama models from the terminal
- Pretty-prints real-time responses in Markdown, using the rich library.
- Keeps conversation context.
- Autodetects available models and lets you select one.
- Supports custom prompts.
- Supports custom roles (reusable prompts).
- Preloads models for better performance.
- Persists conversations (sessions).
A running Ollama instance is required to access local models.
By default, sai connects to http://localhost:11434.
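To confirm that Ollama is reachable before launching sai, you can query its model-listing endpoint (a standard Ollama API route; the URL below assumes the default):

```shell
# Check that an Ollama server is answering at the default URL.
# /api/tags is Ollama's endpoint for listing locally available models.
curl -s http://localhost:11434/api/tags || echo "Ollama is not reachable at http://localhost:11434"
```

If the server is up, this prints a JSON list of your installed models; otherwise the fallback message tells you Ollama needs to be started first.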
You can install it with any package manager of your preference, such as pip,
but the recommended way is uv tool.
Using uv:
uv tool install sai-chat
Start using it in your terminal just by running the sai command:
luis@laptop:~ $ sai
╭───────────────────────────────────────────────────────╮
│ Welcome to Sai. Chat with your local LLM models.      │
│                                                       │
│ Available commands:                                   │
│                                                       │
│ • /setup : Setup Ollama URL and preferences           │
│ • /model : Select a model                             │
│ • /roles : List and select a role                     │
│ • /role add : Create a new custom role                │
│ • /role delete : Delete a custom role                 │
│ • /help : Show this help message                      │
│ • /quit : Exit the application                        │
╰───────────────────────────────────────────────────────╯
> hi
╭───────────────────────────────── Virtual Assistant ✔ ─╮
│ Hi there! How can I help you today? 😊                │
╰────────────────────── gemma3:1b ──────────────────────╯
>
This project is under active development. Feel free to contribute or provide feedback!