**Environment (please complete the following):**
- OS: macOS Sequoia 15.6 (24G84)
- kubectl-ai version: 0.0.22
  - commit: f151614
  - date: 2025-08-14T18:00:57Z
- LLM provider: ollama
- LLM model: mistral:latest
**Describe the bug**
Check whether Ollama is running:

```shell
curl http://localhost:11434/; echo
```

Output: `Ollama is running`

Then point kubectl-ai at the local Ollama server:

```shell
export OLLAMA_HOST=http://localhost:11434/
kubectl-ai --llm-provider ollama --model mistral:latest --kubeconfig '</path/to/my/kubeconfig>'
```
Here is the error:

```
.....
.....
Use "kubectl-ai [command] --help" for more information about a command.
starting k8s agent: initializing chat session: Initialize not yet implemented for ollama
```
**To Reproduce**
Steps to reproduce the behavior:
- Open the CLI
- Run the command:

  ```shell
  kubectl-ai --llm-provider ollama --model mistral:latest --kubeconfig '</path/to/my/kubeconfig>'
  ```

- See the error:

  ```
  starting k8s agent: initializing chat session: Initialize not yet implemented for ollama
  ```
**Expected behavior**
I expect to reach the interactive prompt and be able to query nodes, pods, and namespaces, and in general interact with the k8s agent.
**Additional context**
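As a sanity check that the Ollama server itself is reachable and has the model pulled (independent of kubectl-ai), one can query Ollama's `/api/tags` endpoint, which lists locally available models. This is a small sketch assuming the default host/port; the helper name `list_ollama_models` is my own, not part of either tool:

```python
import json
import urllib.error
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # default Ollama address; adjust if needed


def list_ollama_models(host: str = OLLAMA_HOST) -> list[str]:
    """Return the model names the Ollama server reports, or [] if unreachable."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        # /api/tags responds with {"models": [{"name": "mistral:latest", ...}, ...]}
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []


if __name__ == "__main__":
    models = list_ollama_models()
    if models:
        print("Ollama models:", ", ".join(models))
    else:
        print("Ollama not reachable or no models pulled")
```

If `mistral:latest` appears in the output, the server side is fine and the failure is confined to kubectl-ai's Ollama chat-session initialization.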