@@ -12,7 +12,9 @@ Flare AI Kit template for Retrieval-Augmented Generation (RAG) Knowledge.
 - **Highly Configurable & Extensible:** Uses a straightforward configuration system, enabling effortless integration of new features and services.
 - **Unified LLM Integration:** Leverages Gemini as a unified provider while maintaining compatibility with OpenRouter for a broader range of models.
 
-## 📌 Prerequisites
+## 🎯 Getting Started
+
+### Prerequisites
 
 Before getting started, ensure you have:
 
@@ -21,15 +23,13 @@ Before getting started, ensure you have:
 - A [Gemini API key](https://aistudio.google.com/app/apikey).
 - Access to one of the Flare databases. (The [Flare Developer Hub](https://dev.flare.network/) is included in CSV format for local testing.)
 
-## 🏗️ Build & Run Instructions
+### Build & Run Instructions
 
 You can deploy Flare AI RAG using Docker or set up the backend and frontend manually.
 
 - **Environment Setup:**
   Rename `.env.example` to `.env` and add in the variables (e.g. your [Gemini API key](https://aistudio.google.com/app/apikey)).
 
-### Build using Docker
-
 1. **Build the Docker Image:**
 
    ```bash
@@ -42,7 +42,7 @@ You can deploy Flare AI RAG using Docker or set up the backend and frontend manu
    docker run -p 80:80 -it --env-file .env flare-ai-rag
    ```
 
-### Build manually
+## 🛠 Build Manually
 
 1. **Install Dependencies:**
    Install all required dependencies by running:
@@ -67,6 +67,50 @@ You can deploy Flare AI RAG using Docker or set up the backend and frontend manu
    uv run start-rag
    ```
 
+## 📁 Repo Structure
+
+```
+src/flare_ai_rag/
+├── ai/                        # AI Provider implementations
+│   ├── __init__.py            # Package initialization
+│   ├── base.py                # Abstract base classes
+│   ├── gemini.py              # Google Gemini integration
+│   ├── model.py               # Model definitions
+│   └── openrouter.py          # OpenRouter integration
+├── attestation/               # TEE security layer
+│   ├── __init__.py
+│   ├── simulated_token.txt
+│   ├── vtpm_attestation.py    # vTPM client
+│   └── vtpm_validation.py     # Token validation
+├── responder/                 # Response generation
+│   ├── __init__.py
+│   ├── base.py                # Base responder interface
+│   ├── config.py              # Response configuration
+│   ├── prompts.py             # System prompts
+│   └── responder.py           # Main responder logic
+├── retriever/                 # Document retrieval
+│   ├── __init__.py
+│   ├── base.py                # Base retriever interface
+│   ├── config.py              # Retriever configuration
+│   ├── qdrant_collection.py   # Qdrant collection management
+│   └── qdrant_retriever.py    # Qdrant implementation
+├── router/                    # API routing
+│   ├── __init__.py
+│   ├── base.py                # Base router interface
+│   ├── config.py              # Router configuration
+│   ├── prompts.py             # Router prompts
+│   └── router.py              # Main routing logic
+├── utils/                     # Utility functions
+│   ├── __init__.py
+│   ├── file_utils.py          # File operations
+│   └── parser_utils.py        # Input parsing
+├── __init__.py                # Package initialization
+├── input_parameters.json      # Configuration parameters
+├── main.py                    # Application entry point
+├── query.txt                  # Sample queries
+└── settings.py                # Environment settings
+```
+
 ## 🚀 Deploy on TEE
 
 Deploy on a [Confidential Space](https://cloud.google.com/confidential-computing/confidential-space/docs/confidential-space-overview) using AMD SEV.
@@ -167,7 +211,7 @@ If you encounter issues, follow these steps:
 3. **Check Firewall Settings:**
    Confirm that your instance is publicly accessible on port `80`.
 
-## 🔜 Next Steps & Future Upgrades
+## 💡 Next Steps
 
 Design and implement a knowledge ingestion pipeline, with a demonstration interface showing practical applications for developers and users.
 All code uses the TEE setup, which can be found in the [flare-ai-defai](https://github.com/flare-foundation/flare-ai-defai) repository.