Goal: It is simple: upload your documents in PDF format (support for .docx and plain-text files is planned) and then query them. Get a short summary, a detailed explanation of just one portion of the document, anything.
The interface is built with Vue.js and TypeScript. The LLM responding to your queries is Google's 'gemini-2.5-flash-lite' model. Access the repo here - Google GenAI
Uploading a document and then querying it.

- Clone the repository first.
```shell
git clone https://github.com/AmishKakka/Document-Summarizer.git
```
- Then, consider creating a virtual environment so that there are no library version conflicts.
```shell
python3 -m venv env_name
source env_name/bin/activate
pip install -r requirements.txt
```
- You will need to create a Pinecone vector DB instance, set up Firebase Authentication, and create a GCS bucket and a Firestore database if you want to replicate the entire project. For now, you can access the deployed project here - https://application-service-108871784288.us-west1.run.app/
- If you have set up everything required, run the commands below to access the application locally in your browser.
```shell
chmod +x ./start_local.sh
./start_local.sh
```
- The idea is to combine Retrieval-Augmented Generation (RAG) with the power of LLMs to generate relevant text.
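To illustrate the RAG idea, here is a minimal, self-contained sketch of the retrieve-then-prompt flow. The bag-of-words "embedding", the chunk texts, and all function names are illustrative stand-ins; the real application uses a learned embedding model, Pinecone, and Gemini.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words vector. The real app uses a
    # learned embedding model; this just makes the flow runnable.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Retrieval step of RAG: rank stored chunks by similarity to the
    # query, then pass the best matches to the LLM as context.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Pinecone stores the document embeddings.",
    "The frontend is built with Vue.js and TypeScript.",
    "Gemini generates the final answer from the retrieved context.",
]
context = retrieve("Which database stores the embeddings?", chunks, k=1)
# Augmentation step: the retrieved chunk is injected into the LLM prompt.
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: which database stores the embeddings?"
```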
- To store the embeddings of the documents, Pinecone is used, accessed via the LangChain library.
- You can now upload multiple documents at a time and query any of them.
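Querying one document among many usually works by tagging each stored chunk with its source document's ID, then filtering on that ID at query time (Pinecone supports this through metadata filters). A hypothetical in-memory version, with invented document names:

```python
# Each stored record carries metadata naming its source document,
# so a query can be restricted to one document's chunks.
index = [
    {"doc_id": "report.pdf", "text": "Revenue grew 12% in Q3."},
    {"doc_id": "manual.pdf", "text": "Press the reset button for 5 seconds."},
    {"doc_id": "report.pdf", "text": "Costs were flat year over year."},
]

def chunks_for(doc_id: str) -> list[str]:
    # Mirrors a metadata filter such as {"doc_id": {"$eq": doc_id}}
    # applied at query time in a vector database.
    return [r["text"] for r in index if r["doc_id"] == doc_id]
```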