  1. Qdrant-Sematic-cache

  2. Private RAG Information Extraction Engine

  3. Production ready Secure and Powerful AI Implementations with Azure Services

  4. LlamaIndex Agents and Qdrant’s Hybrid Search

  5. Building a Traceable RAG System with Qdrant and Langtrace: A Step-by-Step Guide

  6. Qdrant Internals: Immutable Data Structures

  7. Data-Driven RAG Evaluation: Testing Qdrant Apps with Relari AI

  8. Agentic RAG With LangGraph and Qdrant

  9. Multimodal RAG with ColPali, crewAIInc & Qdrant

  10. Vibe Coding RAG with our MCP server

  11. Qdrant RAG with HoneyHive Tracing

  1. Qdrant-Pulumi

  2. Revolutionizing RAG by Integrating Vision Models for Enhanced Document Processing

  3. LLM Tracing Implementation to Analyze and Visualize LLM at Scale

  4. Build a GraphRAG Agent with Neo4j and Qdrant

  5. Creating and Deploying Memory-Efficient Medical Agents Using Agno, Qdrant, MongoDB & LiteLLM

This project implements two domain-specific agents, medical and legal, that split short-term conversational state (stored in MongoDB) from long-term semantic knowledge (indexed in Qdrant) under Agno's orchestration. The goal is to validate a lightweight, memory-efficient architecture, powered by LiteLLM across multiple model providers, that delivers real-time, context-aware support without the overhead of reloading large history embeddings, while maintaining full auditability and compliance in clinical and legal workflows.
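The short-term/long-term memory split described above can be sketched in plain Python. This is an illustrative sketch, not the project's actual code: the class and function names (`ShortTermMemory`, `LongTermMemory`, `DomainAgent`, `toy_embed`) are invented for the example, the in-memory stores stand in for MongoDB collections and a Qdrant vector collection, and the embedding function stands in for a model call that LiteLLM would route to a provider.

```python
import math
from collections import defaultdict, deque


def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 for zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


class ShortTermMemory:
    """Per-session conversational state (a MongoDB collection in the real project).
    A bounded deque keeps only recent turns, so no large history is ever reloaded."""

    def __init__(self, max_turns=10):
        self._sessions = defaultdict(lambda: deque(maxlen=max_turns))

    def append(self, session_id, role, text):
        self._sessions[session_id].append((role, text))

    def recent(self, session_id):
        return list(self._sessions[session_id])


class LongTermMemory:
    """Domain knowledge retrieved by embedding similarity
    (a Qdrant collection in the real project)."""

    def __init__(self):
        self._points = []  # list of (vector, payload) pairs

    def upsert(self, vector, payload):
        self._points.append((vector, payload))

    def search(self, query_vector, limit=3):
        ranked = sorted(self._points,
                        key=lambda p: cosine(p[0], query_vector),
                        reverse=True)
        return [payload for _, payload in ranked[:limit]]


class DomainAgent:
    """One agent per domain (medical, legal); Agno would orchestrate these."""

    def __init__(self, domain, short_term, long_term, embed):
        self.domain = domain
        self.short_term = short_term
        self.long_term = long_term
        self.embed = embed  # embedding function; LiteLLM would call a provider here

    def answer(self, session_id, question):
        self.short_term.append(session_id, "user", question)
        context = self.long_term.search(self.embed(question))
        history = self.short_term.recent(session_id)
        # A real agent would send history + context to an LLM via LiteLLM;
        # here we just return what would be sent.
        return {"domain": self.domain, "history": history, "context": context}


def toy_embed(text):
    # Toy 3-dimensional "embedding" keyed on keywords, for illustration only.
    return [
        1.0 if "dose" in text else 0.0,
        1.0 if "contract" in text else 0.0,
        1.0 if "fever" in text else 0.0,
    ]


medical = DomainAgent("medical", ShortTermMemory(), LongTermMemory(), toy_embed)
medical.long_term.upsert(toy_embed("dose guidance"), {"doc": "dosing guideline"})
medical.long_term.upsert(toy_embed("fever protocol"), {"doc": "fever triage protocol"})

reply = medical.answer("s1", "What dose should I give?")
print(reply["context"][0]["doc"])  # → dosing guideline
```

The key design point the sketch mirrors is that each `answer` call touches only the bounded recent-turn window plus a top-k vector lookup, rather than re-embedding or reloading the full conversation history.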

Use Case: