### Step 1: Set Up a Qdrant Service
Make sure that Qdrant is up and running before running your script.
You can quickly start a Qdrant instance using Docker:
```bash
docker run -p 6333:6333 qdrant/qdrant
```
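Before running your script, you can confirm the instance is reachable. A minimal standard-library sketch — the `/healthz` endpoint and default REST port are assumptions based on the Docker command above:

```python
import urllib.error
import urllib.request


def qdrant_ready(base_url: str = "http://localhost:6333", timeout: float = 1.0) -> bool:
    """Return True if a Qdrant instance answers its health endpoint."""
    try:
        # Qdrant serves a plain health check on its REST port.
        with urllib.request.urlopen(f"{base_url}/healthz", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: treat as "not ready".
        return False


if __name__ == "__main__":
    print("Qdrant ready:", qdrant_ready())
```

If this returns `False`, check that the Docker container is running and that port 6333 is not blocked.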
### Step 2: Configure Parameters and Run RAG
The RAG pipeline consists of a router, a retriever, and a responder, all configurable within `src/input_parameters.json`.
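The three stages can be illustrated with a deliberately simplified, dependency-free sketch — the function names and routing rule here are hypothetical, not those of the actual implementation:

```python
from dataclasses import dataclass


@dataclass
class Document:
    text: str
    source: str


def route(query: str) -> str:
    """Router: choose a retrieval target for the query (hypothetical rule)."""
    return "code-docs" if "code" in query.lower() else "general-docs"


def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Retriever: rank documents by naive keyword overlap with the query.

    A real retriever would use vector similarity against Qdrant instead.
    """
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def respond(query: str, context: list[Document]) -> str:
    """Responder: assemble the prompt an LLM would receive (LLM call omitted)."""
    ctx = "\n".join(f"[{d.source}] {d.text}" for d in context)
    return f"Context:\n{ctx}\n\nQuestion: {query}"


corpus = [
    Document("Flare is an EVM-compatible layer-1 blockchain.", "docs"),
    Document("FTSO provides decentralized price feeds.", "docs"),
]
prompt = respond("What is Flare?", retrieve("What is Flare?", corpus))
```

The sketch only shows how the stages hand data to one another; the configurable knobs in `src/input_parameters.json` would control each stage's behavior.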
Once configured, add your query to `src/query.txt` and run:
```bash
uv run start-rag
```
## 🔜 Next Steps & Future Upgrades
Design and implement a knowledge ingestion pipeline, with a demonstration interface showing practical applications for developers and users.
All code uses the TEE Setup which can be found in the [flare-ai-defai](https://github.com/flare-foundation/flare-ai-defai) repository.
_N.B._ Other vector databases can be used, provided they run within the same Docker container as the RAG system, since the deployment will occur in a TEE.
* **Enhanced Data Ingestion & Indexing**: Explore more sophisticated data structures for improved indexing and retrieval, and expand beyond a CSV format to include additional data sources (_e.g._, Flare’s GitHub, blogs, documentation). BigQuery integration would be desirable.
* **Intelligent Query & Data Processing**: Use recommended AI models to refine the data processing pipeline, including pre-processing steps that optimize and clean incoming data, ensuring higher-quality context retrieval (_e.g._, use an LLM to reformulate or expand user queries before passing them to the retriever, improving the precision and recall of the semantic search).
* **Advanced Context Management**: Develop an intelligent context management system that:
  * Implements Dynamic Relevance Scoring to rank documents by their contextual importance.
  * Optimizes the Context Window to balance the amount of information sent to LLMs.
  * Includes Source Verification Mechanisms to assess and validate the reliability of the data sources.
* **Improved Retrieval & Response Pipelines**: Integrate hybrid search techniques (combining semantic and keyword-based methods) for better retrieval, and implement completion checks to verify that the responder’s output is complete and accurate (potentially allowing an iterative feedback loop for refining the final answer).
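As a concrete illustration of the hybrid-search idea above, a fused score can weight a semantic similarity against a keyword-match score. The weight, score ranges, and toy inputs below are illustrative assumptions, not a tuned design:

```python
def hybrid_score(semantic: float, keyword: float, alpha: float = 0.7) -> float:
    """Linearly fuse a semantic similarity and a keyword-match score.

    Both component scores are assumed normalized to [0, 1];
    alpha weights the semantic side.
    """
    return alpha * semantic + (1 - alpha) * keyword


def rank(candidates: dict[str, tuple[float, float]], alpha: float = 0.7) -> list[str]:
    """Order document ids by fused score (component scores are toy inputs here)."""
    return sorted(
        candidates,
        key=lambda doc_id: hybrid_score(*candidates[doc_id], alpha),
        reverse=True,
    )


# Toy component scores: (semantic, keyword) per document.
docs = {
    "doc-a": (0.90, 0.10),  # semantically close, few exact term matches
    "doc-b": (0.40, 0.95),  # exact keyword hit, weaker semantic match
}
```

Shifting `alpha` trades off the two retrieval modes: a high `alpha` favors paraphrased matches, while a low `alpha` favors exact terminology, which matters for queries containing identifiers or product names.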