Replies: 1 comment
You might get more of a response in the Discord; the forums here were only reopened a day or two ago, so it may be a bit slow getting traction here again if you're looking for a faster answer.
Any recommendations for setting up a Dev environment for BMAD?
Since finding the BMAD method, I have been wondering whether I am overthinking my setup, and I'm curious how others are setting up their development environments.
I have been building a local AI and development cluster for agentic coding using Kubernetes. I am having some difficulty getting it set up due to version incompatibilities with ROCm GPU passthrough, either in the Docker image or through Kubernetes. I have gotten it to work with ComfyUI and Ollama, so I will eventually get it all set up.
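For reference, here is a minimal sketch of what requesting an AMD GPU from a pod looks like. This assumes the AMD GPU device plugin DaemonSet is already installed on the cluster (which is what exposes the `amd.com/gpu` resource); the image tag and pod name are just illustrative, and version pinning between the host ROCm driver and the container image is usually where the incompatibilities bite:

```yaml
# Sketch: a pod requesting one AMD GPU via the ROCm device plugin.
# Assumes the amd-gpu-device-plugin DaemonSet is deployed; adjust the
# image tag to match your host's ROCm driver version.
apiVersion: v1
kind: Pod
metadata:
  name: ollama-rocm
spec:
  containers:
    - name: ollama
      image: ollama/ollama:rocm
      resources:
        limits:
          amd.com/gpu: 1
```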
[My planned setup]
I am using a local first approach, with access to cloud AI.
Docker images on Kubernetes cluster:
Open WebUI: Talk to local AI models; use the ComfyUI API to generate images.
LiteLLM: An AI proxy that routes requests to the appropriate AI service (local via vLLM or Ollama, or cloud providers like OpenAI, Claude, etc.).
Model Server: vLLM, TGI, Ollama, etc.; the local model server that the other services send inference requests to.
Qdrant: The vector database, used for long-term AI memory and Retrieval-Augmented Generation (RAG).
LangChain Orchestrator: Determines whether a request needs to go to the model server, search the Qdrant database, or use another tool. I might try Volcano here as an alternative.
ComfyUI: The Image Generator.
Dev-Env: A dedicated container for agentic coding.
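To make the orchestrator's job concrete, here is a minimal sketch of the routing decision it would make (send a request to RAG, a tool, or the plain model). The function name and the keyword heuristics are hypothetical; a real LangChain setup would typically use a routing chain or an LLM-based router instead of string matching:

```python
def route_request(query: str) -> str:
    """Decide which backend should handle a request.

    Returns one of:
      "rag"  - search Qdrant first, then call the model with the results
      "tool" - hand off to another service (e.g. ComfyUI for images)
      "llm"  - plain completion via the proxy to the model server

    The keyword lists below are placeholder heuristics, not a real
    routing policy.
    """
    q = query.lower()
    if any(kw in q for kw in ("remember", "recall", "we discussed")):
        return "rag"
    if any(kw in q for kw in ("generate an image", "draw", "picture of")):
        return "tool"
    return "llm"
```

Keeping this decision in one place is the main value of the orchestrator layer: Open WebUI only ever talks to it, and the orchestrator fans out to Qdrant, ComfyUI, or LiteLLM as needed.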