Given a PDF, this app builds a RAG (retrieval-augmented generation) chain and uses a local LLM to answer questions about the document.
For fun!
The app is built using:

- `langchain_core` and `langchain_community` for the runnables
- `llama_cpp_python` to run compiled GGUF models (Llama-2, Mistral, etc.)
- `chromadb` to store the vector embeddings
- `streamlit` to build the UI for the Human-AI QA interaction
- `dotenv` to load environment variables
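Putting those pieces together, the pipeline is: load and chunk the PDF, embed the chunks into Chroma, serve a local GGUF model through llama-cpp-python, and compose everything with LCEL runnables. A minimal sketch under assumed file paths and parameters (the paths here are placeholders for the `.env` variables described below, and the text splitter comes from the `langchain_text_splitters` package), not the app's exact implementation:

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import LlamaCppEmbeddings
from langchain_community.llms import LlamaCpp
from langchain_community.vectorstores import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_text_splitters import RecursiveCharacterTextSplitter

MODEL_FILE = "/path/to/model.gguf"   # hypothetical; the app reads these
PDF_FILE = "asset/document.pdf"      # from the .env variables below

# 1. Load the PDF and split it into overlapping chunks
docs = PyPDFLoader(PDF_FILE).load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# 2. Embed the chunks into a Chroma vector store, exposed as a retriever
vectordb = Chroma.from_documents(
    chunks, LlamaCppEmbeddings(model_path=MODEL_FILE)
)
retriever = vectordb.as_retriever()

# 3. Local GGUF model served through llama-cpp-python
llm = LlamaCpp(model_path=MODEL_FILE, n_ctx=2048)

# 4. Compose the RAG chain with LCEL runnables
prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(chain.invoke("What is this document about?"))
```

Running this requires a local GGUF model file on disk, so treat it as a structural outline of the chain rather than a copy-paste script.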
To replicate the results, create a `.env` file in `src/` containing the variables:
```shell
MODEL_PATH="<PATH_TO_YOUR_MODEL_DIR>"
MODEL_NAME="${MODEL_PATH}<NAME_OF_YOUR_MODEL_FILE>"
PDF_PATH="<LOCAL_PATH_TO_/asset_FOLDER>"
PDF_NAME="<TITLE_OF_YOUR_PDF_DOC>"
```
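Note that `MODEL_NAME` interpolates `${MODEL_PATH}`, which python-dotenv expands when the file is loaded. To illustrate how the variables compose, here is a stdlib-only sketch of that parsing (the file name and values are made up for the demo; the real app uses `dotenv.load_dotenv()`):

```python
import os
import re


def load_env(path: str) -> dict:
    """Parse a minimal KEY="value" .env file, expanding ${VAR}
    references against previously defined keys."""
    env: dict = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            value = value.strip().strip('"')
            # Expand ${VAR} using values seen so far in the file
            value = re.sub(
                r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), value
            )
            env[key.strip()] = value
    return env


# Demo with placeholder values (hypothetical model file name)
with open("demo.env", "w") as f:
    f.write('MODEL_PATH="/models/"\n')
    f.write('MODEL_NAME="${MODEL_PATH}mistral-7b.Q4_K_M.gguf"\n')

env = load_env("demo.env")
print(env["MODEL_NAME"])  # /models/mistral-7b.Q4_K_M.gguf
```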