A next-generation customer support agent capable of reasoning, maintaining context, and handling complex relational queries by orchestrating Knowledge Graphs (GraphRAG), Vector Search (VectorRAG), and Transactional APIs.
Traditional chatbots fail when faced with complex, relational queries (e.g., "Is this charger compatible with my phone?") or multi-turn context. They suffer from hallucinations and "context blindness."
This project solves this by implementing a Hybrid Neuro-Symbolic Architecture:
- Transactional Engine (Deterministic): Handles order tracking instantly via Regex/NLU and Mock ERP APIs.
- Reasoning Engine (GraphRAG): Uses Neo4j and LLMs (Gemini/Llama 3) to traverse a Knowledge Graph for logical answers (compatibility, hierarchy, warranty).
- Semantic Engine (VectorRAG): Uses PostgreSQL/pgvector as a fallback for unstructured documentation (FAQ, Policies).
It features Contextual Memory (Redis), Multilingual Support (French/Arabic), and Sentiment Analysis for empathetic responses.
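The three-engine routing described above can be sketched as a small dispatcher. This is an illustrative toy, not the project's actual API: `route_query`, the engine labels, and the entity-lookup stub are all hypothetical stand-ins for the real orchestrator.

```python
import re

# Hypothetical sketch of the hybrid router; names are illustrative,
# not the project's actual orchestrator API.
ORDER_ID = re.compile(r"\bCMD-\d+\b", re.IGNORECASE)

def route_query(message: str, graph_knows_entity) -> str:
    """Return which engine should answer: deterministic order tracking
    first, GraphRAG for relational questions, VectorRAG as fallback."""
    if ORDER_ID.search(message):
        return "transactional"   # order status via the Mock ERP API
    if graph_knows_entity(message):
        return "graphrag"        # entity found in the Knowledge Graph
    return "vectorrag"           # semantic search over FAQ/policy docs

# Stub predicate standing in for a Neo4j entity lookup.
knows_entity = lambda q: "iphone" in q.lower()

print(route_query("Where is order CMD-123?", knows_entity))                     # transactional
print(route_query("Is this charger compatible with iPhone 15?", knows_entity))  # graphrag
print(route_query("Do you deliver to Morocco?", knows_entity))                  # vectorrag
```

The key design point is ordering: the cheap, deterministic check runs first, and the vector fallback only fires when neither structured engine can claim the query.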
The system is built on a Microservices architecture, fully containerized with Docker.
- LLM Agnostic: Switch between Google Gemini 1.5 Flash (Cost-effective) and Groq Llama 3.3 (Low Latency) via environment variables.
- GraphRAG Ingestion (ETL): An automated pipeline using LLMs to extract Entities and Relationships from raw text into Neo4j.
- Contextual Rephrasing: Uses LLMs to rewrite follow-up questions (e.g., "And its price?" becomes "What is the price of iPhone 15?") before querying databases.
- Real-Time Dashboard: Live monitoring of conversations, sentiment scores, and AI reasoning steps.
| Component | Technology | Role |
|---|---|---|
| Backend | Python 3.11 / FastAPI | Asynchronous API & WebSocket Orchestrator. |
| Frontend | Next.js 13+ / Tailwind | Modern, responsive Chat UI & Admin Dashboard. |
| AI Core | LangChain 0.3+ | Orchestration framework. |
| LLM Provider | Gemini 1.5 or Groq | Configurable inference engine. |
| Graph DB | Neo4j 5.x | Storing structured knowledge (Products, Relations). |
| Vector DB | PostgreSQL (pgvector) | Storing semantic embeddings. |
| Memory | Redis | Storing conversation history (Short-term memory). |
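The Redis "short-term memory" row boils down to a capped per-session message list. Here is a pure-Python stand-in so the idea runs anywhere; a real deployment would keep the same structure in Redis (e.g. a capped list via `LPUSH` + `LTRIM`). The class and key names are illustrative.

```python
import json
from collections import defaultdict, deque

# Pure-Python stand-in for Redis short-term memory; in production the
# same capped-list-per-session pattern would live in Redis.
class ConversationMemory:
    def __init__(self, max_turns: int = 10):
        self._store = defaultdict(lambda: deque(maxlen=max_turns))

    def append(self, session_id: str, role: str, text: str) -> None:
        self._store[session_id].append(json.dumps({"role": role, "text": text}))

    def history(self, session_id: str) -> list[dict]:
        return [json.loads(m) for m in self._store[session_id]]

mem = ConversationMemory(max_turns=2)
mem.append("s1", "user", "Who manufactures the iPhone 15?")
mem.append("s1", "assistant", "Apple.")
mem.append("s1", "user", "And its price?")
print([m["text"] for m in mem.history("s1")])  # oldest turn evicted
```

Capping the history keeps the rephrasing prompt short and bounds memory per session, which is exactly what makes Redis a good fit for this layer.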
- Docker & Docker Compose installed.
- Make (Optional, for easy commands).
- API Keys: Google Gemini (Free tier) OR Groq (Free beta).
Clone the repository:
```shell
git clone https://github.com/8sylla/ai-support-agent.git
cd ai-support-agent
```

Create a `.env` file in the root directory. Choose your LLM provider.
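For orientation, here is a hedged sketch of how the `LLM_PROVIDER` switch is typically consumed on the backend, mirroring what `backend/app/core/llm_loader.py` presumably does. The returned `(provider, model)` tuple is illustrative only; the real loader would instantiate a configured LangChain chat model.

```python
import os

def load_llm_settings() -> tuple[str, str]:
    """Resolve (provider, model) from the environment. Illustrative:
    the real llm_loader.py returns a LangChain chat model instead."""
    provider = os.environ.get("LLM_PROVIDER", "google").lower()
    if provider == "google":
        return "google", os.environ.get("GOOGLE_MODEL", "gemini-1.5-flash")
    if provider == "groq":
        return "groq", os.environ.get("GROQ_MODEL", "llama-3.3-70b-versatile")
    raise ValueError(f"Unsupported LLM_PROVIDER: {provider!r}")

os.environ["LLM_PROVIDER"] = "groq"
print(load_llm_settings())  # ('groq', 'llama-3.3-70b-versatile')
```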
```env
# --- DATABASE CONFIG ---
POSTGRES_USER=admin
POSTGRES_PASSWORD=adminpassword
POSTGRES_DB=agent_db

# --- NEO4J CONFIG ---
NEO4J_URI=bolt://neo4j:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=password1234

# --- AI PROVIDER SELECTION ---
# Options: 'google' or 'groq'
LLM_PROVIDER=google

# Google Config
GOOGLE_API_KEY=AIzaSyDxxxxxxxxxxxxxxxxxxxxxxxx
GOOGLE_MODEL=gemini-1.5-flash

# Groq Config (Optional)
GROQ_API_KEY=gsk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
GROQ_MODEL=llama-3.3-70b-versatile
```

We use a Makefile to simplify Docker management.
```shell
# Build and start the stack
make install
make start
```

The application will be available at:
- Frontend: http://localhost:3000
- Backend: http://localhost:8000/docs
- Admin: http://localhost:3000/admin
- Neo4j: http://localhost:7474
The databases are empty initially. You must run the ETL pipelines to populate the Knowledge Graph and Vector Index.
```shell
# Runs both Graph Ingestion (Neo4j) and Vector Ingestion (Postgres)
make ingest-all
```

| Engine | Trigger Example | Expected Outcome |
|---|---|---|
| Transactional | "Where is order CMD-123?" | Returns real-time status from Mock ERP. |
| GraphRAG | "Who manufactures the iPhone 15?" | Traverses graph (iPhone 15)-[MANUFACTURED_BY]->(Apple). |
| Reasoning | "Is the USB-C cable compatible with iPhone 15?" | Checks compatibility path in Neo4j. |
| Memory | "What is its warranty?" (after iPhone question) | Rewrites query to "What is the warranty of iPhone 15?". |
| VectorRAG | "Do you deliver to Morocco?" | Finds semantic match in FAQ documentation. |
| Multilingual | "من يصنع الآيفون؟" | Detects Arabic, queries Knowledge Base, answers in Arabic. |
| Command | Description |
|---|---|
| `make start` | Starts the full stack in detached mode. |
| `make stop` | Stops all containers. |
| `make logs` | Shows real-time logs from the Backend API. |
| `make ingest-all` | Runs all ETL scripts (Graph + Vector + Arabic). |
| `make train-nlu` | Re-trains the spaCy NLU model and restarts the API. |
| `make db-update` | Updates the PostgreSQL schema (e.g. adds the feedback column). |
| `make test` | Runs the Unit Test Suite (Pytest). |
```text
ai-support-agent/
├── backend/                    # FastAPI Application
│   ├── app/
│   │   └── core/               # Intelligence Engines
│   │       ├── orchestrator.py # MAIN LOGIC (Hybrid Router)
│   │       ├── graph_engine.py # Neo4j + LLM Logic
│   │       ├── llm_loader.py   # Provider Factory (Google/Groq)
│   │       └── ...
│   ├── ingest_graph.py         # ETL Pipeline (Text -> Graph)
│   └── requirements-core.txt   # Stable dependencies
├── frontend-next/              # Next.js Application
│   ├── app/                    # Pages (Chat & Admin Dashboard)
│   └── components/             # UI Components (OrderCard, Feedback)
├── docker-compose.yml          # Infrastructure orchestration
├── Makefile                    # Automation shortcuts
└── README.md                   # You are here
```

Before compiling the report, you must install a LaTeX compiler such as:
- TeX Live
- or any other compatible LaTeX distribution (MiKTeX, etc.)

Make sure the LaTeX compiler is correctly installed on your machine. Then:

- Go to the `rapport/` directory of the project.
- Run the following script:

```shell
compiler.bat
```
