You can deploy Flare AI RAG using Docker or set up the backend and frontend manually.

### Environment Setup

1. **Prepare the Environment File:**
   Rename `.env.example` to `.env` and update the variables accordingly (e.g. your [Gemini API key](https://aistudio.google.com/app/apikey)).

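For illustration, a populated `.env` might contain an entry like the one below; the variable name here is an assumption, and `.env.example` remains the authoritative list:

```bash
# Hypothetical .env entry -- check .env.example for the exact variable names
GEMINI_API_KEY=your-gemini-api-key-here
```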
### Build using Docker (Recommended) -- [WIP]

1. **Build the Docker Image:**

   ```bash
   docker build -t flare-ai-rag .
   ```

2. **Run the Docker Container:**

   ```bash
   docker run -p 80:80 -it --env-file .env flare-ai-rag
   ```

3. **Access the Frontend:**
   Open your browser and navigate to [http://localhost:80](http://localhost:80) to interact with the Chat UI.

### 🛠 Build Manually

Flare AI RAG is composed of a Python-based backend and a JavaScript frontend. Follow these steps for manual setup:

#### Backend Setup

1. **Install Dependencies:**
   Use [uv](https://docs.astral.sh/uv/getting-started/installation/) to install backend dependencies:

   ```bash
   uv sync --all-extras
   ```

2. **Set up Qdrant:**
   Start a local Qdrant instance via Docker:

   ```bash
   docker run -p 6333:6333 qdrant/qdrant
   ```

3. **Start the Backend:**
   The backend runs by default on `0.0.0.0:8080`:

   ```bash
   uv run start-backend
   ```

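With the backend up, you can smoke-test the chat route directly. The endpoint path is the one the frontend uses below; the JSON field name `message` is an assumption, so check `chat-ui/src/App.js` for the request shape the UI actually sends:

```bash
# Hypothetical smoke test -- the "message" field name is an assumption;
# the fallback echo fires if the backend is not running yet.
response=$(curl -s -X POST http://localhost:8080/api/routes/chat/ \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello"}' || echo "backend not reachable")
echo "$response"
```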
#### Frontend Setup

1. **Install Dependencies:**
   In the `chat-ui/` directory, install the required packages using [npm](https://nodejs.org/en/download):

   ```bash
   cd chat-ui/
   npm install
   ```

2. **Configure the Frontend:**
   Update the backend URL in `chat-ui/src/App.js` for testing:

   ```js
   const BACKEND_ROUTE = "http://localhost:8080/api/routes/chat/";
   ```

   > **Note:** Remember to change `BACKEND_ROUTE` back to `'api/routes/chat/'` after testing.

3. **Start the Frontend:**

   ```bash
   npm start
   ```

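The revert noted in step 2 is easy to forget. A `sed` one-liner can flip `BACKEND_ROUTE` back automatically; this sketch runs on a stand-in file and assumes the exact string from the testing step, so point the same command at `chat-ui/src/App.js` in the real repo:

```bash
# Demonstrate the revert on a stand-in copy of the line from App.js
printf 'const BACKEND_ROUTE = "http://localhost:8080/api/routes/chat/";\n' > /tmp/App.js
# Swap the absolute testing URL back to the relative production path
sed -i 's|http://localhost:8080/api/routes/chat/|api/routes/chat/|' /tmp/App.js
cat /tmp/App.js
```

This uses GNU `sed`; on macOS, write `sed -i ''` instead of `sed -i`.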
## 📁 Repo Structure

```
src/flare_ai_rag/
├── ai/                      # AI Provider implementations
│   ├── base.py              # Abstract base classes
│   ├── gemini.py            # Google Gemini integration
│   ├── model.py             # Model definitions
│   └── openrouter.py        # OpenRouter integration
├── api/                     # API layer
│   ├── middleware/          # Request/response middleware
│   └── routes/              # API endpoint definitions
├── attestation/             # TEE security layer
│   ├── simulated_token.txt
│   ├── vtpm_attestation.py  # vTPM client
│   └── vtpm_validation.py   # Token validation
├── responder/               # Response generation
│   ├── base.py              # Base responder interface
│   ├── config.py            # Response configuration
│   ├── prompts.py           # System prompts
│   └── responder.py         # Main responder logic
├── retriever/               # Document retrieval
│   ├── base.py              # Base retriever interface
│   ├── config.py            # Retriever configuration
│   ├── qdrant_collection.py # Qdrant collection management
│   └── qdrant_retriever.py  # Qdrant implementation
├── router/                  # Query routing
│   ├── base.py              # Base router interface
│   ├── config.py            # Router configuration
│   ├── prompts.py           # Router prompts
│   └── router.py            # Main routing logic
├── utils/                   # Utility functions
│   ├── file_utils.py        # File operations
│   └── parser_utils.py      # Input parsing
├── input_parameters.json    # Configuration parameters
├── main.py                  # Application entry point
└── query.txt                # Sample queries
```
## 💡 Next Steps

Design and implement a knowledge ingestion pipeline, with a demonstration interface showing practical applications for developers and users.

_N.B._ Other vector databases can be used, provided they run within the same Docker container as the RAG system, since the deployment will occur in a TEE.