CareAI is a medical assistant that leverages Large Language Models (LLMs) and vector databases to deliver context-aware, evidence-based medical insights. It processes trusted medical literature, generates semantic embeddings, and retrieves the most relevant information via similarity search. The system provides safe, concise, and professional answers to user health queries, supporting reliable medical understanding through an efficient pipeline of data extraction, embedding, and LLM inference.
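The ingestion side of the pipeline splits extracted literature into overlapping chunks before embedding. A minimal sketch of that step, assuming a simple character-based window (`chunk_text` is illustrative, not the project's actual text_handler.py):

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split extracted document text into overlapping chunks for embedding.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from at least one chunk.
    """
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk.strip():
            chunks.append(chunk)
        if start + size >= len(text):
            break
    return chunks

# 1200 characters with size=500 and overlap=50 yields 3 chunks
print(len(chunk_text("x" * 1200, size=500, overlap=50)))  # → 3
```

In practice chunking is often sentence- or token-aware (e.g. via LangChain text splitters) rather than purely character-based.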
- Frontend: Next.js 16, React.js, TailwindCSS, Framer Motion, TypeScript
- Backend: Python 3.11+, FastAPI, LangChain, RAG, Hugging Face (LLMs)
- Database: Pinecone (vector database)
- Tools/Versioning: Git, MCP
- The user enters a query through the web interface.
- The server generates a vector embedding (MiniLM-L6) for the query to enable semantic search.
- The server fetches similar medical contexts from the vector DB using the query embedding.
- The LLM uses the retrieved context to generate an accurate, concise, and evidence-based medical response.
- The output is structured by filter.ts and converted into HTML.
- The response is displayed to the user in a responsive Next.js UI.
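The retrieve-then-generate step above can be sketched as follows; `build_prompt` and the passage format are illustrative assumptions, not the project's actual prompt_handler.py logic:

```python
def build_prompt(query: str, contexts: list[str]) -> str:
    """Assemble retrieved medical passages and the user query into one LLM prompt."""
    context_block = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(contexts))
    return (
        "You are a careful medical assistant. Answer using ONLY the context below.\n"
        "If the context is insufficient, say so rather than guessing.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}\n"
        "Answer:"
    )

prompt = build_prompt(
    "What causes iron-deficiency anemia?",
    ["Iron-deficiency anemia results from inadequate iron intake or chronic blood loss."],
)
print(prompt)
```

Grounding the instruction in the retrieved context (rather than the model's parametric memory) is what keeps the responses evidence-based.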
Stores medical text embeddings in a Vector Database (Pinecone) for fast semantic search and context retrieval.
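In production this lookup runs against Pinecone; the similarity search it performs can be sketched self-containedly with cosine similarity over toy vectors (real MiniLM-L6 embeddings are 384-dimensional):

```python
import numpy as np

def cosine_top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k document vectors most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q  # cosine similarity of each stored embedding vs. the query
    return np.argsort(scores)[::-1][:k]

# Toy 4-d embeddings standing in for stored medical-text chunks
docs = np.array([
    [1.0, 0.0, 0.0, 0.0],   # chunk 0
    [0.9, 0.1, 0.0, 0.0],   # chunk 1 (semantically close to chunk 0)
    [0.0, 0.0, 1.0, 0.0],   # chunk 2 (unrelated)
])
query = np.array([1.0, 0.05, 0.0, 0.0])

print(cosine_top_k(query, docs).tolist())  # → [0, 1]
```

Pinecone performs the same ranking at scale with an approximate nearest-neighbor index, so retrieval stays fast as the corpus grows.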
CareAI/
├── app/
│ ├── components/
│ │ └── ...tsx
│ ├── styles/
│ │ └── ...css
│ ├── utils/
│ │ └── ...ts
│ ├── layout.tsx
│ └── page.tsx
├── public/
├── data/
│ └── ...pdf
├── server/
│ ├── model/
│ │ └── ...ipynb
│ ├── src/
│ │ ├── routes/
│ │ │ ├── __init__.py
│ │ │ └── ...py
│ │ ├── utils/
│ │ │ └── ...py
│ │ ├── __init__.py
│ │ ├── db_handler.py
│ │ ├── embedding_handler.py
│ │ ├── gpt_handler.py
│ │ ├── prompt_handler.py
│ │ └── text_handler.py
│ ├── main.py
│ ├── requirements.txt
│ ├── setup.py
│ └── template.sh
├── .gitignore
├── package.json
├── eslint.config.mjs
├── next.config.js
├── tailwind.config.js
└── README.md

```bash
git clone https://github.com/harshkunz/careAI.git
cd CareAI
cd app
npm install      # Install dependencies
npm run dev      # Run server at http://localhost:3000
```
```bash
cd ../server
python -m venv venv                # Create virtual environment
source venv/bin/activate           # Linux/macOS
# OR
venv\Scripts\activate              # Windows
pip install -r requirements.txt    # Install dependencies
uvicorn main:app --reload          # Run server at http://localhost:8000
```
Create a `.env` file in `server/`:

```env
HF_API_KEY="your_huggingface_api_key"
PINECONE_API_KEY="your_pinecone_api_key"
```

Open to contributions!
- Fork the repository
- Create a new branch (`git checkout -b feature-name`)
- Commit your changes (`git commit -m 'Add feature'`)
- Push to the branch (`git push origin feature-name`)
- Create a Pull Request



