# 🐾 PAWSitiveOps-AnythingLLM

PAWSitiveOps-AnythingLLM is the AI backbone for the PAWSitiveOps platform.
It packages an AnythingLLM deployment plus the integration glue needed to run specialized compliance and operations chatbots for pet/vet centers, and exposes them to React/FastAPI apps and n8n automations.

This repo focuses on infrastructure and integration, not the upstream AnythingLLM code.
Think: environment setup, deployment configs, routing, and recipes for wiring agents into real workflows.
## 🎯 Goals

- Provide a repeatable deployment of AnythingLLM (cloud or Docker)
- Standardize workspace/agent setup for PAWSitiveOps personas:
  - Compliance Hound
  - Sam the Safety Dog
  - Operations Bulldog
- Offer clean endpoints so frontends (React) and services (FastAPI, n8n) can talk to our agents
- Keep credentials and provider keys securely managed with environment variables
## 🧑‍💻 My Role

- Designed the deployment approach (Render/Docker) and env layout
- Built integration patterns for webhooks, document ingestion, and chat completions
- Planned the multi-agent routing so different PAWSitiveOps bots can serve multiple apps
- Documented usage for teammates and future contributors
## 🧰 Tech Stack

- AnythingLLM – LLM router, vector index, workspaces
- Docker / Docker Compose – Containerized deploys
- Render – (Planned/Active) one-click cloud hosting
- FastAPI – (Planned) thin API that fronts agent calls for apps
- React – (Planned) frontends consuming agent endpoints
- n8n – Workflow automation to orchestrate intake → embed → respond
- PostgreSQL / SQLite – Vector/storage options depending on target env
- Object Storage – (Optional) document archives (S3-compatible)
## ✅ Current
- Stand up an AnythingLLM instance for PAWSitiveOps
- Define agent/workspace conventions (naming, roles, retrieval settings)
- Set environment scaffolding for LLM provider keys and embeddings
## 🚀 Planned
- Compliance Hound – Answers licensing/permit questions and links citations
- Sam the Safety Dog – Safety/SOP Q&A with document retrieval
- Operations Bulldog – Day-to-day ops, customer-service scripts, checklists
- Marketing Poodle (Optional) – Social copy & campaign ideas
- External Apps – VagalFit, DealAgent007, IndieSmithy UI consuming the same agent API
- n8n Pipelines – Auto-ingest PDFs/URLs → embed → notify → serve in chat
## 🧭 Architecture

```text
[Frontend (React)]   [n8n Webhook]   [CLI/Script]
        \                 |                /
         \                |               /
          \               |              /
           +-----[ FastAPI Router ]-----+
                          |
                          v
                [ AnythingLLM API ]
                |  Workspaces    |
                |  Retrieval     |
                |  Embeddings    |
                          |
                [ Vector / Storage ]
                          |
                          v
                [ Response to Caller ]
```
---
## ⚙️ Quick Start
### 1) Environment Variables
Create `.env` with:
```ini
OPENAI_API_KEY=...
ANTHROPIC_API_KEY=...
GROQ_API_KEY=...
EMBEDDINGS_PROVIDER=openai
DATABASE_URL=sqlite:///./anythingllm.db
PORT=3001
CORS_ORIGIN=https://your-frontend.example
```

### 2) Docker Compose

```yaml
version: "3.9"

services:
  anythingllm:
    image: mintplexlabs/anythingllm:latest
    ports:
      - "3001:3001"
    env_file:
      - .env
    volumes:
      - ./data:/app/server/storage
    restart: unless-stopped
```
Run:

```bash
docker compose up -d
```
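Before bringing the stack up, it helps to fail fast when a required variable is unset. A minimal stdlib sketch — the variable list is taken from the `.env` example above:

```python
import os

# Variables this deployment expects (see the .env example above).
REQUIRED_VARS = [
    "OPENAI_API_KEY",
    "EMBEDDINGS_PROVIDER",
    "DATABASE_URL",
    "PORT",
    "CORS_ORIGIN",
]

def missing_env_vars(env=os.environ):
    """Return the required variables that are unset or blank."""
    return [name for name in REQUIRED_VARS if not str(env.get(name, "")).strip()]
```

Call this at startup and exit with a clear message when the returned list is non-empty, rather than letting a half-configured instance boot.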
### 3) Create Workspaces

Create one workspace per persona:

- `pawsops-compliance-hound`
- `pawsops-safety-dog`
- `pawsops-operations-bulldog`

Attach documents/links → set retrieval settings.
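Workspace creation can also be scripted rather than clicked through. A stdlib-only sketch — the `/api/v1/workspace/new` endpoint path and payload shape are assumptions here; confirm both against your instance's developer API docs before running:

```python
import json
import os
import urllib.request

BASE_URL = os.getenv("ANYTHINGLLM_URL", "http://localhost:3001")
API_KEY = os.getenv("ANYTHINGLLM_API_KEY", "")

WORKSPACES = [
    "pawsops-compliance-hound",
    "pawsops-safety-dog",
    "pawsops-operations-bulldog",
]

def creation_payloads(names):
    """Build one JSON body per workspace to create."""
    return [{"name": name} for name in names]

def seed(names=WORKSPACES):
    """POST each payload to the instance (endpoint path is an assumption)."""
    for payload in creation_payloads(names):
        req = urllib.request.Request(
            f"{BASE_URL}/api/v1/workspace/new",
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": f"Bearer {API_KEY}",
                "Content-Type": "application/json",
            },
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            resp.read()
```

This is also the natural seed for the "workspace seeding script" listed under Future Plans.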
## 🐍 FastAPI Router Stub

Example `main.py` for routing chat calls to AnythingLLM:
```python
from fastapi import FastAPI, HTTPException
import requests
import os

app = FastAPI()

ANYTHINGLLM_URL = os.getenv("ANYTHINGLLM_URL", "http://localhost:3001")
API_KEY = os.getenv("ANYTHINGLLM_API_KEY")

@app.post("/api/ask/{workspace}")
def ask_workspace(workspace: str, query: dict):
    """Proxy a chat message to the named AnythingLLM workspace."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    response = requests.post(
        f"{ANYTHINGLLM_URL}/api/v1/workspace/{workspace}/chat",
        headers=headers,
        json={"message": query.get("message", "")},
        timeout=60,  # LLM responses can be slow; fail loudly rather than hang
    )
    if response.status_code != 200:
        # Surface the upstream error verbatim to the caller
        raise HTTPException(status_code=response.status_code, detail=response.text)
    return response.json()
```
## 📒 Workspace Playbook

A step-by-step guide to creating, maintaining, and updating workspaces is available in the Workspace Playbook. It covers:

- Naming conventions
- Role descriptions
- Retrieval settings
- Document ingestion workflows (manual + n8n automated)
- Best practices for prompt engineering in AnythingLLM
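As an illustration of the naming conventions, a tiny validator for the workspace slug pattern — the exact `pawsops-<role>` rule is an assumption inferred from the workspace names in this README, not something the Playbook mandates:

```python
import re

# Assumed convention: lowercase "pawsops-" prefix plus a hyphenated role name.
SLUG_RE = re.compile(r"^pawsops-[a-z]+(?:-[a-z]+)*$")

def is_valid_workspace_slug(slug: str) -> bool:
    """True when the slug matches the assumed pawsops-<role> pattern."""
    return bool(SLUG_RE.fullmatch(slug))
```

Running a check like this in CI or in the seeding script keeps stray display names (spaces, capitals) out of the workspace list.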
## 🔐 Security Notes

- Keep provider keys in `.env` or Render Secrets
- Restrict CORS to trusted domains
- Use JWT or API-key auth for public endpoints
- Log requests for audit and debugging
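For the API-key option, a constant-time comparison is the safe default. A minimal sketch — `SERVICE_API_KEY` is a hypothetical variable name for the router's own key, distinct from the LLM provider keys:

```python
import os
import secrets

def is_authorized(provided_key) -> bool:
    """Constant-time API-key check for public endpoints."""
    expected = os.getenv("SERVICE_API_KEY", "")
    if not expected or not provided_key:
        return False
    # compare_digest avoids leaking key length/content via timing
    return secrets.compare_digest(str(provided_key), expected)
```

In the FastAPI router this would typically run as a dependency that reads the key from a request header (e.g. `X-API-Key`) and raises a 401 when the check fails.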
## 📚 Lessons Learned

- A thin router layer keeps apps clean
- Workspace standardization avoids confusion
- Automated ingestion is key for freshness
- Prompts as config = easier iteration
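"Prompts as config" can be as simple as a JSON file merged over in-code defaults at startup. A sketch — the `prompts.json` filename, keys, and default strings are illustrative, not part of this repo:

```python
import json
from pathlib import Path

# Hypothetical defaults keyed by workspace slug.
DEFAULT_PROMPTS = {
    "pawsops-compliance-hound": "Answer licensing/permit questions and cite sources.",
    "pawsops-safety-dog": "Answer safety/SOP questions from retrieved documents.",
}

def load_prompts(path="prompts.json"):
    """Merge on-disk overrides over the defaults, if the file exists."""
    file = Path(path)
    if file.exists():
        return {**DEFAULT_PROMPTS, **json.loads(file.read_text())}
    return dict(DEFAULT_PROMPTS)
```

Editing a JSON file and redeploying beats hunting for prompt strings scattered through application code, which is the iteration win the bullet above refers to.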
## 🗺️ Future Plans

- One-click Render deploy (AnythingLLM + FastAPI)
- Workspace seeding script
- Unified metrics/logging dashboard
- React chatbot widget for embedding in sites
- Enterprise connectors (SharePoint, S3, GDrive)