πŸ€– GraphRAG AI Support Agent

Enterprise-Grade Hybrid Conversational AI (Neuro-Symbolic Architecture)


A next-generation customer support agent capable of reasoning, maintaining context, and handling complex relational queries by orchestrating Knowledge Graphs (GraphRAG), Vector Search (VectorRAG), and Transactional APIs.


πŸ“– Executive Summary

Traditional chatbots fail when faced with complex, relational queries (e.g., "Is this charger compatible with my phone?") or multi-turn context. They suffer from hallucinations and "context blindness."

This project solves this by implementing a Hybrid Neuro-Symbolic Architecture:

  1. Transactional Engine (Deterministic): Handles order tracking instantly via Regex/NLU and Mock ERP APIs.
  2. Reasoning Engine (GraphRAG): Uses Neo4j and LLMs (Gemini/Llama 3) to traverse a Knowledge Graph for logical answers (compatibility, hierarchy, warranty).
  3. Semantic Engine (VectorRAG): Uses PostgreSQL/pgvector as a fallback for unstructured documentation (FAQ, Policies).

It features Contextual Memory (Redis), Multilingual Support (French/Arabic), and Sentiment Analysis for empathetic responses.
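The three-engine dispatch described above can be sketched as a simple router. This is an illustrative sketch only, not the project's actual `orchestrator.py` logic: the order-ID pattern and keyword list are assumptions drawn from the examples in this README.

```python
import re

# Hypothetical routing sketch: deterministic engine first, then graph
# reasoning, then vector fallback. The real logic lives in
# backend/app/core/orchestrator.py and is not reproduced here.
ORDER_ID_PATTERN = re.compile(r"\bCMD-\d+\b", re.IGNORECASE)
RELATIONAL_KEYWORDS = {"compatible", "manufactures", "warranty", "price"}

def route_query(question: str) -> str:
    """Pick an engine for the incoming question."""
    if ORDER_ID_PATTERN.search(question):
        return "transactional"   # order tracking via the Mock ERP API
    if any(word in question.lower() for word in RELATIONAL_KEYWORDS):
        return "graphrag"        # relational reasoning over Neo4j
    return "vectorrag"           # semantic fallback over pgvector

print(route_query("Where is order CMD-123?"))                    # transactional
print(route_query("Is this charger compatible with my phone?"))  # graphrag
print(route_query("Do you deliver to Morocco?"))                 # vectorrag
```

In practice the deterministic branch runs first precisely because it is the cheapest and most reliable; the LLM-backed engines are only consulted when no exact match fires.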


Architecture

The system is built on a Microservices architecture, fully containerized with Docker.


Key Technical Features

  • LLM Agnostic: Switch between Google Gemini 1.5 Flash (cost-effective) and Groq Llama 3.3 (low latency) via environment variables.
  • GraphRAG Ingestion (ETL): An automated pipeline using LLMs to extract Entities and Relationships from raw text into Neo4j.
  • Contextual Rephrasing: Uses LLMs to rewrite follow-up questions (e.g., "And its price?" becomes "What is the price of iPhone 15?") before querying databases.
  • Real-Time Dashboard: Live monitoring of conversations, sentiment scores, and AI reasoning steps.
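To illustrate the Contextual Rephrasing feature, here is a sketch of a prompt builder that turns a follow-up into a standalone question before any database is queried. The prompt wording and the shape of the function are assumptions for illustration, not the project's actual implementation.

```python
# Hypothetical sketch: rewrite a follow-up question into a standalone query.
# The resulting prompt is sent to the configured LLM (Gemini or Groq), which
# is expected to answer with the rewritten question.

def build_rephrase_prompt(history: list, follow_up: str) -> str:
    """Assemble an LLM prompt that de-contextualizes a follow-up question."""
    transcript = "\n".join(history)
    return (
        "Given the conversation below, rewrite the final user question so it "
        "can be understood without any prior context.\n\n"
        f"Conversation:\n{transcript}\n\n"
        f"Follow-up question: {follow_up}\n"
        "Standalone question:"
    )

prompt = build_rephrase_prompt(
    ["User: Tell me about the iPhone 15.", "Agent: The iPhone 15 is ..."],
    "And its price?",
)
# The model would then return something like
# "What is the price of the iPhone 15?", which is what actually hits Neo4j.
print(prompt)
```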

πŸ› οΈ Tech Stack

| Component | Technology | Role |
|---|---|---|
| Backend | Python 3.11 / FastAPI | Asynchronous API & WebSocket orchestrator |
| Frontend | Next.js 13+ / Tailwind | Modern, responsive chat UI & admin dashboard |
| AI Core | LangChain 0.3+ | Orchestration framework |
| LLM Provider | Gemini 1.5 or Groq | Configurable inference engine |
| Graph DB | Neo4j 5.x | Stores structured knowledge (products, relations) |
| Vector DB | PostgreSQL (pgvector) | Stores semantic embeddings |
| Memory | Redis | Stores conversation history (short-term memory) |
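The Redis-backed short-term memory can be pictured as a capped list of recent turns per session. The sketch below is an assumption about the approach, not the project's code; it runs against a tiny in-memory stub so it needs no Redis server, but the same three commands (`RPUSH`, `LTRIM`, `LRANGE`) exist on a real `redis.Redis` client.

```python
# Sketch of short-term conversation memory as a capped Redis list.
# InMemoryListStore is a stand-in for demonstration only.

class InMemoryListStore:
    """Minimal stub exposing the three Redis list commands used below."""
    def __init__(self):
        self.data = {}
    def rpush(self, key, value):
        self.data.setdefault(key, []).append(value)
    def ltrim(self, key, start, stop):
        items = self.data.get(key, [])
        stop = len(items) if stop == -1 else stop + 1
        self.data[key] = items[start:stop]
    def lrange(self, key, start, stop):
        items = self.data.get(key, [])
        stop = len(items) if stop == -1 else stop + 1
        return items[start:stop]

class ConversationMemory:
    def __init__(self, store, max_turns: int = 10):
        self.store = store
        self.max_turns = max_turns
    def add_turn(self, session_id: str, turn: str) -> None:
        key = f"history:{session_id}"
        self.store.rpush(key, turn)
        self.store.ltrim(key, -self.max_turns, -1)  # keep only the last N turns
    def history(self, session_id: str) -> list:
        return self.store.lrange(f"history:{session_id}", 0, -1)

memory = ConversationMemory(InMemoryListStore(), max_turns=2)
memory.add_turn("s1", "User: Tell me about the iPhone 15.")
memory.add_turn("s1", "Agent: The iPhone 15 is ...")
memory.add_turn("s1", "User: And its price?")
print(memory.history("s1"))  # only the last two turns survive
```

Capping the list keeps prompt sizes bounded while still giving the rephrasing step enough context to resolve pronouns like "its".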

πŸš€ Getting Started

1. Prerequisites

  • Docker & Docker Compose installed.
  • Make (Optional, for easy commands).
  • API Keys: Google Gemini (Free tier) OR Groq (Free beta).

2. Installation

Clone the repository:

git clone https://github.com/8sylla/ai-support-agent.git
cd ai-support-agent

3. Environment Configuration

Create a .env file in the root directory. Choose your LLM provider.

# --- DATABASE CONFIG ---
POSTGRES_USER=admin
POSTGRES_PASSWORD=adminpassword
POSTGRES_DB=agent_db

# --- NEO4J CONFIG ---
NEO4J_URI=bolt://neo4j:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=password1234

# --- AI PROVIDER SELECTION ---
# Options: 'google' or 'groq'
LLM_PROVIDER=google

# Google Config
GOOGLE_API_KEY=AIzaSyDxxxxxxxxxxxxxxxxxxxxxxxx
GOOGLE_MODEL=gemini-1.5-flash

# Groq Config (Optional)
GROQ_API_KEY=gsk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
GROQ_MODEL=llama-3.3-70b-versatile
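The provider-selection idea behind `llm_loader.py` can be sketched as a small factory keyed on `LLM_PROVIDER`. The variable names match the `.env` file above; the return shape and defaults are illustrative assumptions, not the module's actual API.

```python
import os

# Hypothetical sketch of a provider factory: one environment variable
# selects the backend; model names fall back to the defaults shown in
# the .env example above.

def load_llm_config() -> dict:
    provider = os.environ.get("LLM_PROVIDER", "google").lower()
    if provider == "google":
        return {"provider": "google",
                "api_key": os.environ.get("GOOGLE_API_KEY", ""),
                "model": os.environ.get("GOOGLE_MODEL", "gemini-1.5-flash")}
    if provider == "groq":
        return {"provider": "groq",
                "api_key": os.environ.get("GROQ_API_KEY", ""),
                "model": os.environ.get("GROQ_MODEL", "llama-3.3-70b-versatile")}
    raise ValueError(f"Unknown LLM_PROVIDER: {provider}")

os.environ["LLM_PROVIDER"] = "groq"
print(load_llm_config()["model"])  # the Groq model from the env, or the default
```

Because only the factory reads these variables, swapping providers never touches the orchestration code.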

4. Build & Run (The Easy Way)

We use a Makefile to simplify Docker management.

# Build and Start the stack
make install
make start

Once the stack is up, the frontend and backend are served on the ports defined in docker-compose.yml.

5. Data Ingestion (Crucial Step)

The databases are empty initially. You must run the ETL pipelines to populate the Knowledge Graph and Vector Index.

# Runs both Graph Ingestion (Neo4j) and Vector Ingestion (Postgres)
make ingest-all
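Conceptually, the graph half of the ingestion turns LLM-extracted (subject, relation, object) triples into idempotent Cypher `MERGE` statements. The sketch below shows that final step; the triple format and the `Entity` label are assumptions for illustration, not the project's actual schema in `ingest_graph.py`.

```python
# Sketch of the last stage of a GraphRAG ETL pipeline: each extracted
# triple becomes an idempotent Cypher MERGE so repeated ingestion runs
# do not duplicate nodes or relationships.

def triple_to_cypher(subject: str, relation: str, obj: str) -> str:
    rel = relation.upper().replace(" ", "_")  # "manufactured by" -> MANUFACTURED_BY
    return (
        f"MERGE (a:Entity {{name: $subject}}) "
        f"MERGE (b:Entity {{name: $object}}) "
        f"MERGE (a)-[:{rel}]->(b)"
    )

triples = [
    ("iPhone 15", "manufactured by", "Apple"),
    ("USB-C cable", "compatible with", "iPhone 15"),
]
for s, r, o in triples:
    query = triple_to_cypher(s, r, o)
    # In the real pipeline this query would be executed via the Neo4j
    # driver with parameters {"subject": s, "object": o}.
    print(query)
```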

πŸ–₯️ Usage Scenarios

| Engine | Trigger Example | Expected Outcome |
|---|---|---|
| Transactional | "Where is order CMD-123?" | Returns real-time status from the Mock ERP. |
| GraphRAG | "Who manufactures the iPhone 15?" | Traverses the graph: `(iPhone 15)-[MANUFACTURED_BY]->(Apple)`. |
| Reasoning | "Is the USB-C cable compatible with iPhone 15?" | Checks the compatibility path in Neo4j. |
| Memory | "What is its warranty?" (after an iPhone question) | Rewrites the query to "What is the warranty of the iPhone 15?". |
| VectorRAG | "Do you deliver to Morocco?" | Finds a semantic match in the FAQ documentation. |
| Multilingual | "Ω…Ω† ΩŠΨ΅Ω†ΨΉ Ψ§Ω„Ψ’ΩŠΩΩˆΩ†ΨŸ" | Detects Arabic, queries the Knowledge Base, answers in Arabic. |
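For the Reasoning row above, a compatibility question could translate into a Cypher path check along these lines. The `COMPATIBLE_WITH` relationship type is an assumption drawn from the scenarios table, not necessarily the project's exact schema.

```python
# Sketch of the Cypher a compatibility question might compile to.

def compatibility_query() -> str:
    return (
        "MATCH (a:Entity {name: $product_a})"
        "-[:COMPATIBLE_WITH]-"
        "(b:Entity {name: $product_b}) "
        "RETURN count(*) > 0 AS compatible"
    )

# With the Neo4j Python driver this would run roughly as:
#   with driver.session() as session:
#       record = session.run(compatibility_query(),
#                            product_a="USB-C cable",
#                            product_b="iPhone 15").single()
#       print(record["compatible"])
print(compatibility_query())
```

Because the answer comes from an explicit path in the graph rather than from free-text retrieval, this class of question avoids the hallucination risk mentioned in the summary.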

Developer Commands (Makefile)

| Command | Description |
|---|---|
| `make start` | Starts the full stack in detached mode. |
| `make stop` | Stops all containers. |
| `make logs` | Shows real-time logs from the Backend API. |
| `make ingest-all` | Runs all ETL scripts (Graph + Vector + Arabic). |
| `make train-nlu` | Re-trains the spaCy NLU model and restarts the API. |
| `make db-update` | Updates the PostgreSQL schema (e.g., adds the feedback column). |
| `make test` | Runs the unit test suite (Pytest). |

πŸ“‚ Project Structure

ai-support-agent/
β”œβ”€β”€ backend/                 # FastAPI Application
β”‚   β”œβ”€β”€ app/
β”‚   β”‚   β”œβ”€β”€ core/            # Intelligence Engines
β”‚   β”‚   β”‚   β”œβ”€β”€ orchestrator.py  # MAIN LOGIC (Hybrid Router)
β”‚   β”‚   β”‚   β”œβ”€β”€ graph_engine.py  # Neo4j + LLM Logic
β”‚   β”‚   β”‚   β”œβ”€β”€ llm_loader.py    # Provider Factory (Google/Groq)
β”‚   β”‚   β”‚   └── ...
β”‚   β”œβ”€β”€ ingest_graph.py      # ETL Pipeline (Text -> Graph)
β”‚   └── requirements-core.txt # Stable dependencies
β”œβ”€β”€ frontend-next/           # Next.js Application
β”‚   β”œβ”€β”€ app/                 # Pages (Chat & Admin Dashboard)
β”‚   └── components/          # UI Components (OrderCard, Feedback)
β”œβ”€β”€ docker-compose.yml       # Infrastructure orchestration
β”œβ”€β”€ Makefile                 # Automation shortcuts
└── README.md                # You are here

Compiling the Report

Prerequisites

Before compiling the report, you must install a LaTeX compiler such as:

  • TeX Live
  • or any other compatible LaTeX distribution (MiKTeX, etc.).

Compilation Steps

  1. Make sure the LaTeX compiler is correctly installed on your machine.
  2. Go to the rapport/ folder of the project.
  3. Run the following script:
    compiler.bat
