A containerized chat interface for local LLMs running in LM Studio, featuring persistent memory using ChromaDB and web browsing capabilities.
- 🤖 Local LLM Integration - Connects to LM Studio's API server
- 🧠 Persistent Memory - ChromaDB-based vector database for semantic memory
- 🌐 Web Browsing - Fetch and analyze webpage content
- 💬 Clean UI - Gradio-based chat interface
- 🐳 Containerized - Easy deployment with Docker Compose
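Under the hood, the interface talks to LM Studio's OpenAI-compatible API. A minimal sketch of that round trip (the port and model name below are examples; match them to your LM Studio setup):

```python
# Minimal sketch: one chat completion against LM Studio's OpenAI-compatible API.
# The port (1234) and model name are assumptions -- use your own values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="qwen3-coder-30b",  # must match the model loaded in LM Studio
    messages=[{"role": "user", "content": "What's the tensile strength of tungsten?"}],
)
print(response.choices[0].message.content)
```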
- LM Studio installed and running
- Docker or OrbStack
- A model loaded in LM Studio with the API server enabled
Clone the repository

```bash
git clone https://github.com/yourusername/Local_LLM.git
cd Local_LLM
```
Start LM Studio's API server
- Open LM Studio
- Load a model (e.g., qwen3-coder-30b)
- Enable "Local LLM Service" in Settings > Developer
- Start the server (default: http://localhost:1234)
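Before launching the container, it's worth confirming the server responds. A quick sketch using `requests` (assumes the default port):

```python
# Quick reachability check for the LM Studio server (default port assumed).
import requests

resp = requests.get("http://localhost:1234/v1/models", timeout=5)
resp.raise_for_status()
print([m["id"] for m in resp.json()["data"]])  # names of the loaded models
```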
Launch the chat interface

```bash
docker-compose up --build
```

Open in browser
- Navigate to http://localhost:7860
- Type questions in the chat input
- Include URLs in your messages to fetch and analyze webpages
- Memory is automatically searched for relevant context (see the sketch below)
- Add Memory: Save important facts, specifications, or notes
- Search Memory: Find previously stored information
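Both tools, and the automatic context lookup during chat, reduce to ChromaDB `add` and `query` calls. A rough sketch of that shape (the collection name, IDs, and prompt format are illustrative assumptions, not the actual `chat_ui.py` internals):

```python
# Illustrative memory helpers; collection name and prompt format are assumed.
import uuid
import chromadb

client = chromadb.PersistentClient(path="chroma_db")
collection = client.get_or_create_collection("memories")

def add_memory(text: str) -> None:
    # ChromaDB embeds the document on insert (default model: all-MiniLM-L6-v2)
    collection.add(documents=[text], ids=[str(uuid.uuid4())])

def search_memory(query: str, n_results: int = 3) -> list[str]:
    hits = collection.query(query_texts=[query], n_results=n_results)
    return hits["documents"][0]  # best-matching stored memories

def build_prompt(user_message: str) -> str:
    context = "\n".join(search_memory(user_message))
    return f"Relevant memory:\n{context}\n\nUser: {user_message}"
```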
```
What's the tensile strength of tungsten?
Fetch https://example.com and summarize it
What material are we using for the housing project?
```
```
Local_LLM/
├── docker-compose.yml   # Container orchestration
├── Dockerfile           # Container build instructions
├── requirements.txt     # Python dependencies
├── chat_ui.py           # Gradio chat interface
├── scripts/             # Standalone Python scripts
│   ├── memory_system.py
│   └── web_agent.py
└── chroma_db/           # Vector database (created on first run)
```
Edit `docker-compose.yml`:

```yaml
environment:
  - LM_STUDIO_URL=http://host.docker.internal:YOUR_PORT
```

Edit `chat_ui.py` on the line that sets the model name:

```python
model="your-model-name"
```
The `scripts/` folder contains standalone Python scripts for non-containerized use:

```bash
python3 scripts/memory_system.py
python3 scripts/web_agent.py
```

- Memory: ChromaDB with `all-MiniLM-L6-v2` embeddings
- Web Scraping: BeautifulSoup4 + requests
- LLM Integration: LangChain with OpenAI-compatible API
- UI Framework: Gradio
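The web-browsing piece is a plain fetch-and-extract using the stack above. A minimal sketch (the truncation limit and User-Agent header are arbitrary choices, not the project's exact code):

```python
# Minimal fetch-and-extract sketch using requests + BeautifulSoup4.
# The page is reduced to plain text and truncated before being sent to the LLM.
import requests
from bs4 import BeautifulSoup

def fetch_page_text(url: str, max_chars: int = 4000) -> str:
    resp = requests.get(url, timeout=10, headers={"User-Agent": "Mozilla/5.0"})
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()  # drop non-content elements
    text = " ".join(soup.get_text().split())  # collapse whitespace
    return text[:max_chars]
```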
- Ensure the LM Studio server is running
- Check that `host.docker.internal` resolves (use `localhost` on Linux)
- Check that the `chroma_db/` folder is created and mounted properly
- Verify the volume mount in `docker-compose.yml`
- Confirm the model name matches your loaded model in LM Studio
- Check LM Studio server logs for errors
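If connectivity is the suspect, a short socket test run from inside the container can confirm whether LM Studio's port is reachable (hostname and port per your configuration):

```python
# Run inside the container to confirm it can reach LM Studio on the host.
import socket

try:
    socket.create_connection(("host.docker.internal", 1234), timeout=5).close()
    print("LM Studio port reachable")
except OSError as e:
    print(f"Cannot reach host: {e}")
```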
Feel free to open issues or submit pull requests!
MIT
Built with assistance from Claude (Anthropic) and Adderall XR!