Tsara.IA is an AI-powered agent designed to make travel in Morocco smarter, easier, and more engaging. Built with a Retrieval-Augmented Generation (RAG) architecture, it leverages advanced LLMs, LangChain, and Hugging Face embeddings to provide context-aware answers grounded in reliable local data.
- Trusted Sources: Information comes from curated datasets, including:
  - Official travel agencies across different Moroccan cities
  - Licensed tourism guides and cultural references
  - Local meals, recipes, and authentic culinary insights
- Contextual Understanding: Unlike a simple chatbot, Tsara.IA retrieves and synthesizes information from vector databases to deliver relevant, accurate, and personalized answers.
- Cutting-Edge Tech Stack:
  - LLMs for natural language reasoning
  - LangChain for orchestration and tool integration
  - Hugging Face embeddings for semantic search and vector retrieval
  - ChromaDB as the vector database
  - React.js for an intuitive user interface, ensuring easy access to the system's features and functionality
  - FastAPI for a fast, reliable, and scalable backend to handle API requests efficiently
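To illustrate how semantic search over a vector store grounds the chatbot's answers, here is a minimal, self-contained sketch of cosine-similarity retrieval. It uses toy hand-written vectors in place of real Hugging Face embeddings and a plain Python list in place of ChromaDB; every document string and vector below is hypothetical.

```python
import math

# Toy document store: in the real system, these vectors would come from a
# Hugging Face embedding model and be stored in ChromaDB.
DOCS = [
    ("Agadir travel agency contact details", [0.9, 0.1, 0.0]),
    ("Tagine recipe with preserved lemons",  [0.1, 0.9, 0.2]),
    ("Licensed guide services in Fes",       [0.8, 0.2, 0.1]),
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    """Return the k document texts most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedding pointing in the "agencies/guides" direction:
print(retrieve([1.0, 0.0, 0.0]))
# -> ['Agadir travel agency contact details', 'Licensed guide services in Fes']
```

In the full RAG pipeline, the retrieved texts would then be passed to the LLM as context so its answer stays grounded in the curated data.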
Database
- All documents used to ensure accuracy come from reliable sources, such as governmental websites and trusted dataset providers.
Follow the steps below to set up the project locally:
Make sure you have the following installed:
- Node.js (>= 18.x)
- npm or yarn
- Python (>= 3.12)
- pip (Python package manager)
- uv (Project manager)
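Before cloning, you can quickly confirm that the tools listed above are on your PATH. This helper script is not part of the project; it is just an optional sketch, and the tool names it checks are simply the ones from the list above.

```python
import shutil

def check_prerequisites(tools=("node", "npm", "python3", "pip", "uv")):
    """Map each required command-line tool to whether it is found on PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}

for tool, found in check_prerequisites().items():
    print(f"{tool}: {'OK' if found else 'MISSING'}")
```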
- Clone the repository:
git clone https://github.com/ourahma/TsaraIA.git
cd TsaraIA
- Navigate to the backend folder:
cd backend
- (Optional) Create and activate a virtual environment:
python -m venv venv
source venv/bin/activate   # On Linux/Mac
venv\Scripts\activate      # On Windows
- Install dependencies:
pip install -r requirements.txt
- Set up Ollama:
  - Install Ollama on your machine if not already installed.
  - Ensure you have pulled and set up an LLM that supports tool usage. In this project, we use mistral:7b as the default model:
    ollama pull mistral:7b
  - You can replace it with another model of your choice, but make sure it supports function calling / tool integration.
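To see what a tool-enabled request looks like, the sketch below builds (but does not send) a JSON payload for Ollama's /api/chat endpoint. The find_agency tool and its parameters are hypothetical examples; the schema follows the OpenAI-style function format used for tool calling.

```python
import json

# Hypothetical tool: look up a travel agency by city.
payload = {
    "model": "mistral:7b",
    "messages": [{"role": "user", "content": "Find a travel agency in Marrakech"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "find_agency",
            "description": "Look up a registered travel agency in a Moroccan city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": False,
}

print(json.dumps(payload, indent=2))
```

With Ollama running, you could POST this payload to http://localhost:11434/api/chat; a model without tool support will typically reply in plain text instead of emitting a tool call.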
- Run the backend server:
uvicorn main:app --reload
The backend will be available at: http://localhost:8000
- Navigate to the frontend folder:
cd frontend
- Install dependencies:
npm install
or
yarn install
- Start the development server:
npm run dev
or
yarn dev
The frontend will be available at: http://localhost:5173 (default for Vite).
Now both backend and frontend should be running locally and connected.
You can now access the frontend directly from your browser and interact with the chatbot’s user-friendly interface to ask questions and receive accurate, real-time answers.
- Send the chatbot your question.
- Receive detailed answers including phone numbers, websites, addresses, and more.
See LICENSE.txt for more information.
OURAHMA Maroua - @Website - [email protected]
Project Link: https://github.com/ourahma/TsaraIA