README.md: 3 additions & 3 deletions
@@ -33,7 +33,7 @@ You can deploy Flare AI RAG using Docker or set up the backend and frontend manu

 1. **Prepare the Environment File:**

    Rename `.env.example` to `.env` and update the variables accordingly. (e.g. your [Gemini API key](https://aistudio.google.com/app/apikey))

-### Build using Docker (Recommended) -- [WIP]
+### Build using Docker (Recommended)

 1. **Build the Docker Image:**
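The environment step above amounts to parsing simple `KEY=VALUE` pairs from `.env`. A minimal stdlib-only sketch of that step is below; the `GEMINI_API_KEY` variable name is an assumption based on the linked Gemini API key page, and the real variable list lives in `.env.example`.

```python
# Minimal .env loader sketch (stdlib only). GEMINI_API_KEY is an
# assumed variable name; the actual keys are listed in .env.example.
import os


def load_env(path: str = ".env") -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env: dict[str, str] = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip().strip('"')
    return env


if __name__ == "__main__" and os.path.exists(".env"):
    # Export the parsed variables into the process environment.
    os.environ.update(load_env())
```

In practice a library such as python-dotenv does the same job; the sketch only shows what the rename-and-fill step produces.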
@@ -257,10 +257,10 @@ Design and implement a knowledge ingestion pipeline, with a demonstration interf

 _N.B._ Other vector databases can be used, provided they run within the same Docker container as the RAG system, since the deployment will occur in a TEE.

-- **Enhanced Data Ingestion & Indexing**: Explore more sophisticated data structures for improved indexing and retrieval, and expand beyond a CSV format to include additional data sources (_e.g._, Flare’s GitHub, blogs, documentation). BigQuery integration would be desirable.
+- **Enhanced Data Ingestion & Indexing**: Explore more sophisticated data structures for improved indexing and retrieval, and expand beyond a CSV format to include additional data sources (_e.g._, Flare's GitHub, blogs, documentation). BigQuery integration would be desirable.
 - **Intelligent Query & Data Processing**: Use recommended AI models to refine the data processing pipeline, including pre-processing steps that optimize and clean incoming data, ensuring higher-quality context retrieval. (_e.g._ Use an LLM to reformulate or expand user queries before passing them to the retriever, improving the precision and recall of the semantic search.)
 - **Advanced Context Management**: Develop an intelligent context management system that:
   - Implements Dynamic Relevance Scoring to rank documents by their contextual importance.
   - Optimizes the Context Window to balance the amount of information sent to LLMs.
   - Includes Source Verification Mechanisms to assess and validate the reliability of the data sources.
-- **Improved Retrieval & Response Pipelines**: Integrate hybrid search techniques (combining semantic and keyword-based methods) for better retrieval, and implement completion checks to verify that the responder’s output is complete and accurate (potentially allow an iterative feedback loop for refining the final answer).
+- **Improved Retrieval & Response Pipelines**: Integrate hybrid search techniques (combining semantic and keyword-based methods) for better retrieval, and implement completion checks to verify that the responder's output is complete and accurate (potentially allow an iterative feedback loop for refining the final answer).
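The hybrid-search idea in the list above can be sketched as a weighted blend of a semantic score and a keyword score. Everything in this sketch is illustrative: a bag-of-words cosine stands in for real embedding similarity from the vector database, and the `alpha` weight and scoring functions are assumptions, not the project's actual retriever.

```python
# Hybrid retrieval sketch: rank documents by a weighted combination of
# a (stand-in) semantic score and a keyword-overlap score. A real
# deployment would replace bag_of_words/cosine with embedding lookups
# against the vector database.
import math
from collections import Counter


def bag_of_words(text: str) -> Counter:
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear verbatim in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0


def hybrid_search(query: str, docs: list[str], alpha: float = 0.5) -> list[tuple[str, float]]:
    """Rank docs by alpha * semantic + (1 - alpha) * keyword score."""
    qv = bag_of_words(query)
    scored = [
        (doc, alpha * cosine(qv, bag_of_words(doc)) + (1 - alpha) * keyword_score(query, doc))
        for doc in docs
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

The blend weight `alpha` is the usual tuning knob: semantic similarity catches paraphrases, while the keyword term keeps exact identifiers (contract names, ticker symbols) from being washed out.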