A simple learning project to explore AI integration and LLM implementation. This app demonstrates how to integrate local LLMs with web applications by creating a PDF analysis tool using FastAPI and Ollama.
This project was built to understand:
- How to integrate LLMs into web applications
- Working with local AI models using Ollama
- Building API endpoints that interact with AI
- Processing documents and providing contextual AI responses
- Session management for conversational AI interfaces
## Features

- Upload PDF documents for AI analysis
- Generate summaries, questions, flashcards, and outlines
- Ask follow-up questions about uploaded documents
- Demonstrate different prompt-engineering techniques
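The different analysis types above are usually driven by one prompt template per mode. A minimal sketch of that pattern — these template strings and names are illustrative, not the prompts `main.py` actually uses:

```python
# Illustrative prompt templates, one per analysis mode.
# The wording here is an example of the technique, not the app's real prompts.
PROMPTS = {
    "summary": "Summarize the following document in 3-5 bullet points:\n\n{text}",
    "questions": "Write 5 study questions based on this document:\n\n{text}",
    "flashcards": "Create flashcards as 'Q: ... / A: ...' pairs from this document:\n\n{text}",
    "outline": "Produce a hierarchical outline of this document:\n\n{text}",
}

def build_prompt(mode: str, text: str) -> str:
    """Select a template by analysis mode and fill in the document text."""
    if mode not in PROMPTS:
        raise ValueError(f"unknown mode: {mode}")
    return PROMPTS[mode].format(text=text)
```

Keeping the templates in a dict means adding a new analysis type is one new entry, with no changes to the endpoint logic.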
## Tech Stack

- FastAPI - Web framework
- Ollama + Llama3 - Local LLM processing
- PyMuPDF - PDF text extraction
- HTML/CSS/JS - Simple frontend
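PyMuPDF handles the extraction step (roughly `"".join(page.get_text() for page in fitz.open(path))`), and the extracted text then has to be split into chunks that fit the model's context window. A word-based chunker sketch — the sizes and overlap here are illustrative, not the values the app uses:

```python
# Sketch of the chunking step that follows PyMuPDF extraction.
# max_words and overlap are illustrative defaults, not the app's settings.
def chunk_text(text: str, max_words: int = 500, overlap: int = 50) -> list[str]:
    """Split text into word-based chunks with a small overlap, so content
    cut at a chunk boundary still appears at the start of the next chunk."""
    words = text.split()
    if not words:
        return []
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks
```

The overlap is what keeps a sentence that straddles a boundary from being lost to both chunks.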
## Getting Started

1. Install Ollama and pull Llama3:

   ```bash
   # Install from https://ollama.ai/
   ollama pull llama3
   ```

2. Install Python dependencies:

   ```bash
   pip install fastapi uvicorn PyMuPDF ollama python-multipart
   ```

3. Run the application:

   ```bash
   python main.py
   ```

4. Open `upload_form.html` in your browser.
## Key Learnings

- LLM Integration: How to connect FastAPI with local AI models
- Prompt Engineering: Different prompts for various analysis types
- Session Management: Maintaining context across AI conversations
- Document Processing: Extracting and chunking text for AI analysis
- API Design: Building endpoints that work with AI workflows
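The session-management pattern can be sketched as an in-memory store keyed by session ID, where each session keeps the full message history so follow-up questions have context. This is a sketch of the pattern, not the code in `main.py`; the LLM call is injected as a callable so the flow is visible without a running Ollama server (in the real app that call would be roughly `ollama.chat(model="llama3", messages=history)["message"]["content"]`):

```python
# Minimal sketch of session management for a conversational AI endpoint.
# SESSIONS, create_session, and ask are hypothetical names for illustration.
import uuid

SESSIONS: dict[str, list[dict]] = {}

def create_session(document_text: str) -> str:
    """Start a session seeded with the uploaded document as system context."""
    session_id = uuid.uuid4().hex
    SESSIONS[session_id] = [
        {"role": "system", "content": f"You are analyzing this document:\n{document_text}"}
    ]
    return session_id

def ask(session_id: str, question: str, llm) -> str:
    """Append the question, call the model with the full history so it sees
    prior turns, then store the reply for the next follow-up."""
    history = SESSIONS[session_id]
    history.append({"role": "user", "content": question})
    answer = llm(history)
    history.append({"role": "assistant", "content": answer})
    return answer
```

In the FastAPI app, `create_session` would run in the upload endpoint and `ask` in a follow-up-question endpoint, with the session ID round-tripped through the client.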
## Project Structure

- `main.py` - FastAPI app with AI integration logic
- `upload_form.html` - Simple frontend interface
- Sample PDFs and test scripts for experimentation