AI-Powered Writing Assistant for Content Creators
QuillPilot is a desktop application that helps writers and content creators produce high-quality blog content using AI assistance. It runs completely on your computer and supports both local AI models (like Llama3 via Ollama) and cloud-based AI services (OpenAI).
Just want to start writing?
- Double-click `QuillPilot.command` in your QuillPilot folder
- The app will guide you through any setup needed
- Start creating amazing blog content with AI assistance!
For detailed setup instructions, see the Setup Guide below.
Your blog posts are automatically saved and safe!
- Location: `~/QuillPilot/blog-posts/` (in your home directory)
- Format: Each post is saved as a Markdown file with metadata
- Backup: Your content is stored on your computer, not in the cloud
- Access: Click "Open Folder" in the dashboard to view your files
- Sync: Files are automatically saved as you type
Why this matters:
- ✅ Your content won't be lost if you uninstall the app
- ✅ You can edit posts in any Markdown editor
- ✅ Easy to back up or move to another computer
- ✅ No internet required to access your content
- ✅ Your content stays private on your device
For Developers: The `blog-posts/` directory is excluded from Git (see `.gitignore`) to prevent user content from being accidentally committed to the repository. This keeps your personal writing separate from the application code.
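Because each post is a plain Markdown file, you can process your content with ordinary scripts. As an illustration only (the exact metadata layout is an assumption here, not QuillPilot's documented format), a post with YAML-style frontmatter could be parsed like this:

```python
# Sketch: reading a saved post. Assumes '---'-delimited frontmatter with
# simple 'key: value' lines - adjust to match your actual files.
def parse_post(text: str) -> tuple[dict, str]:
    """Split a Markdown file into (metadata, body)."""
    meta = {}
    body = text
    if text.startswith("---"):
        header, _, body = text[3:].partition("---")
        for line in header.strip().splitlines():
            key, _, value = line.partition(":")
            if key.strip():
                meta[key.strip()] = value.strip()
    return meta, body.lstrip()

sample = """---
title: My First Post
date: 2024-01-01
---
Hello, world!"""
meta, body = parse_post(sample)
print(meta["title"])  # My First Post
```

Posts without frontmatter simply come back with an empty metadata dict and the full text as the body.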
- AI-Powered Content Generation: Generate blog posts, improve writing, and get content ideas
- Multiple AI Providers: Support for both Ollama (local) and OpenAI (cloud)
- Rich Text Editor: Markdown-based editor with live preview
- Blog Templates: Pre-built templates for different content types
- SEO Optimization: Automatic keyword extraction and meta description generation
- File Management: Save, load, and export content in multiple formats
- Desktop Integration: Native desktop app with menu integration
- Click "New Blog Post" in the app
- Add a title for your blog post
- Choose your approach:
- Write manually and use AI to help improve
- Let AI generate a complete blog post from a topic
- Use templates (How-to guides, Lists, Reviews, etc.)
- Configure your AI models in Settings (⚙️) - choose your preferred models for each provider
- Click the magic wand icon (🪄) to open AI Assistant
- Choose your AI provider:
- Ollama (Local) - Free, private, runs on your computer
- OpenAI - Requires API key, very powerful
- Type what you want and click Generate! (Uses your preferred model automatically)
- How-To Guide - Step-by-step instructions
- Listicle - "10 Best..." or "5 Ways to..." posts
- Product Review - Balanced reviews with pros/cons
- Opinion Piece - Your thoughts and arguments
- News Article - Factual reporting style
- Start with AI: Generate a first draft or outline
- Edit and personalize: Add your voice and experiences
- Use AI for improvement: Select text and ask AI to improve it
- SEO optimization: AI automatically suggests keywords
- Export when done: Save as Markdown or HTML
- Node.js (v16 or higher)
- Python (v3.8 or higher)
- Ollama (optional, for local AI) - download from ollama.ai
- OpenAI API Key (optional, for cloud AI)
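Not sure whether your machine meets these requirements? The sketch below is just an illustration of how you might verify them yourself; it is not part of QuillPilot's setup scripts:

```python
# Hedged prerequisite check: Node.js >= 16, Python >= 3.8, Ollama optional.
import shutil
import subprocess
import sys

def parse_node_major(version: str) -> int:
    """Turn Node's 'v18.17.0'-style version string into the major number."""
    return int(version.lstrip("v").split(".")[0])

def check_prerequisites() -> None:
    if sys.version_info < (3, 8):
        print("Python 3.8+ required")
    node = shutil.which("node")
    if node is None:
        print("Node.js not found - install v16 or higher")
    else:
        out = subprocess.run([node, "--version"], capture_output=True, text=True)
        version = out.stdout.strip()
        if version and parse_node_major(version) < 16:
            print("Node.js v16 or higher required")
    if shutil.which("ollama") is None:
        print("Ollama not found (optional, needed for local AI)")

check_prerequisites()
```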
For Writers (Simple):
- Get the QuillPilot folder
- Double-click `QuillPilot.command`
- Follow any setup prompts - the app will guide you!
For Developers:
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd QuillPilot
   ```

2. Install dependencies:

   ```bash
   npm install
   cd src/python && pip install -r requirements.txt
   ```

3. Configure the environment (optional):

   ```bash
   # Copy the environment template
   cp src/python/env_example.txt src/python/.env
   # Edit .env and add your OpenAI API key if you want to use OpenAI
   OPENAI_API_KEY=your_openai_api_key_here
   ```
Ollama Setup (Local AI):

1. Install Ollama: Download from ollama.ai
2. Pull a model (choose one):

   ```bash
   # General purpose model (recommended)
   ollama pull llama3
   # Code-focused model
   ollama pull codellama
   # Fast and efficient model
   ollama pull mistral
   ```

3. Start the Ollama service:

   ```bash
   ollama serve
   ```
OpenAI Setup (Cloud AI):

1. Get an API key: Visit the OpenAI Platform
2. Add your API key: Use the AI Settings panel in the app, or add it to your `.env` file
Once you have your AI providers set up, configure your preferred models:
- Open AI Settings: Click the ⚙️ Settings icon in the sidebar
- Select Models: In the "Model Selection" section (left side), choose your preferred model for each provider:
- Ollama: Select from your installed models (llama3 recommended)
- OpenAI: Choose from available models (gpt-3.5-turbo for speed/cost, gpt-4 for quality)
- Auto-Selection: Leave dropdown on "Auto-select" to let QuillPilot choose the best available model
- Visual Confirmation: Selected models are highlighted in green with a ✓ checkmark
Model Recommendations:
- For General Writing: llama3 (Ollama) or gpt-3.5-turbo (OpenAI)
- For Code Content: codellama (Ollama) or gpt-4 (OpenAI)
- For Speed: mistral (Ollama) or gpt-3.5-turbo (OpenAI)
- For Quality: llama3 (Ollama) or gpt-4 (OpenAI)
Your model preferences are saved automatically and used for all content generation.
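Preference persistence can be as simple as a small JSON file. The sketch below is illustrative only; the file name and keys are assumptions, not QuillPilot's actual storage format:

```python
# Illustrative sketch of saving/loading model preferences as JSON.
# The path and key names here are assumptions, not the app's real format.
import json
from pathlib import Path

PREFS_FILE = Path("model_prefs.json")  # hypothetical location

def save_prefs(prefs: dict) -> None:
    PREFS_FILE.write_text(json.dumps(prefs, indent=2))

def load_prefs() -> dict:
    if PREFS_FILE.exists():
        return json.loads(PREFS_FILE.read_text())
    return {"ollama": "auto", "openai": "auto"}  # auto-select defaults

save_prefs({"ollama": "llama3", "openai": "gpt-4"})
print(load_prefs()["ollama"])  # llama3
```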
Easy Launch (Recommended):

```bash
# Interactive menu - choose web or desktop mode
./start.sh
```

When you run `./start.sh`, you'll see a simple menu:

```
🚀 Choose how to run QuillPilot:

1. 🌐 Web App (for developers - opens in browser)
2. 🖥️ Desktop App (for regular use - native app)
3. ❌ Cancel

Enter your choice (1-3):
```

Direct Launch:

```bash
./start.sh web       # Web app (opens in browser)
./start.sh desktop   # Desktop app (native app)
```

Double-Click Launch:
- Double-click `QuillPilot.command` → automatically launches the desktop app
For Developers:

```bash
# Individual services (for development)
npm run dev:web      # Web app + backend only
npm run dev:desktop  # Desktop app + backend
npm run dev:python   # Backend API only
```

Web App mode:

- Best for: Developers, debugging, browser DevTools
- Opens: Browser tab at http://localhost:3000
- Includes: React dev server + FastAPI backend
- Use when: You want to inspect elements, debug, or prefer the browser

Desktop App mode:

- Best for: Regular writing, clean interface, native feel
- Opens: Native desktop application
- Includes: Background React server + FastAPI backend + Electron
- Use when: You want a distraction-free writing environment
What happens when you start:
- FastAPI backend starts (http://localhost:5001) with AI endpoints
- React development server starts (background or browser)
- Your Ollama models are detected and ready to use
- OpenAI integration available if API key is configured
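Once the services are up, you can verify the backend from a script. A minimal sketch, assuming the default port 5001 and the `/api/health` endpoint:

```python
# Minimal backend health check, assuming the default port 5001.
from urllib.request import urlopen
from urllib.error import URLError

def backend_is_up(port: int = 5001) -> bool:
    """Return True if the FastAPI backend answers on /api/health."""
    try:
        with urlopen(f"http://localhost:{port}/api/health", timeout=2) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

if backend_is_up():
    print("Backend is running")
else:
    print("Backend not reachable - try ./start.sh")
```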
```
QuillPilot/
├── src/
│   ├── components/          # React components
│   │   ├── Dashboard.js     # Main dashboard
│   │   ├── BlogEditor.js    # Content editor
│   │   ├── Sidebar.js       # Navigation sidebar
│   │   └── AISettings.js    # AI configuration
│   ├── services/
│   │   └── aiService.js     # AI integration service
│   └── python/              # Python backend
│       ├── app.py           # FastAPI server with streaming
│       └── requirements.txt # Python dependencies
├── public/
│   ├── electron.js          # Electron main process
│   └── preload.js           # Electron preload script
└── package.json             # Node.js dependencies
```
- Launch QuillPilot and ensure at least one AI service is configured
- Click "New Blog Post" or use `Cmd/Ctrl + N`
- Choose a writing approach:
- Start with a title and let AI generate content
- Use a template (How-to, Listicle, Review, etc.)
- Write manually with AI assistance
- Full Blog Posts: Provide a topic and get a complete structured blog post
- Custom Content: Generate specific sections or improvements
- Templates: Use pre-built templates for common blog formats
- SEO Optimization: Automatic keyword extraction and meta descriptions
- Style Variations: Generate content in different styles (informative, persuasive, technical)
- Length Control: Generate short, medium, or long-form content
- Save: `Cmd/Ctrl + S` - Save your work
- Export: Export as Markdown or HTML
- Auto-save: Your work is automatically saved as you type
Access AI settings through the sidebar (⚙️ Settings icon):
Left Side - Model Selection (Primary Actions):
- Model Dropdowns: Select your preferred model for each AI provider
- Visual Confirmation: Selected models highlighted in green with checkmarks
- Smart Defaults: Auto-selection chooses the best available model
- Model Lists: See all available models with the selected one highlighted
Right Side - Status & Configuration (Supporting Info):
- Service Status: Real-time status of Ollama and OpenAI connections
- Setup Instructions: Step-by-step guides when services need configuration
- API Key Management: Secure local storage of OpenAI API keys
- Connection Info: Available model counts and service health
- One-Time Setup: Configure once, use everywhere in the app
- Persistent Preferences: Your model choices are remembered between sessions
- Smart Auto-Selection: QuillPilot picks the best model if you don't specify
- Visual Feedback: Green highlights clearly show which models are active
- Templates: Modify existing templates or create new ones in `src/services/aiService.js`
- Styles: Customize the UI by editing Tailwind classes
- Shortcuts: Modify keyboard shortcuts in `public/electron.js`
- Frontend: React with Tailwind CSS for the user interface
- Backend: Python FastAPI with streaming support for AI integration
- Desktop: Electron for native desktop functionality
- AI Features: Extend `src/services/aiService.js`
- UI Components: Add React components in `src/components/`
- Backend APIs: Add endpoints in `src/python/app.py`
```bash
# Build the application for distribution
npm run build:electron
```

API Endpoints (FastAPI backend):

- `GET /api/health` - Check service status and AI availability
- `GET /api/models` - Get available AI models (OpenAI + Ollama)
- `POST /api/generate-blog` - Generate a complete blog post
- `POST /api/generate-content` - Generate custom content
- `POST /api/generate-blog-stream` - Stream blog generation in real-time
- `POST /api/generate-content-stream` - Stream content generation in real-time
- `GET /docs` - Interactive API documentation (Swagger UI)
- `GET /redoc` - Alternative API documentation
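You can call these endpoints from any HTTP client. In the sketch below, the JSON field names (`topic`, `style`) are assumptions for illustration; consult `GET /docs` for the actual request schema:

```python
# Sketch of calling the blog-generation endpoint with Python's stdlib.
# The JSON field names ("topic", "style") are assumptions - check
# http://localhost:5001/docs for the real schema.
import json
from urllib.request import Request, urlopen

API_URL = "http://localhost:5001/api/generate-blog"

def build_request(topic: str, style: str = "informative") -> Request:
    payload = json.dumps({"topic": topic, "style": style}).encode()
    return Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate_blog(topic: str, style: str = "informative") -> dict:
    with urlopen(build_request(topic, style), timeout=120) as resp:
        return json.loads(resp.read())

# Example (requires the backend to be running):
# post = generate_blog("Why local AI matters")
```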
Frontend service methods (`src/services/aiService.js`):

- `aiService.generateBlogPost()` - Generate structured blog content
- `aiService.generateContent()` - Generate custom text
- `aiService.getBlogTemplates()` - Get available templates
"AI services not available"
- Make sure Ollama is running: open Terminal and type `ollama serve`
- Or add your OpenAI API key in Settings
"QuillPilot won't start"
- Make sure Node.js and Python are installed
- Try double-clicking the `QuillPilot.command` file
- Check that all prerequisites are installed
"Can't find my blog posts"
- Your posts are saved locally in `~/QuillPilot/blog-posts/` (click "Open Folder" in the dashboard)
- Use File > Export to save copies elsewhere
AI Service Not Available
- Ensure Ollama is running: `ollama serve`
- Check that your OpenAI API key is valid
- Verify FastAPI backend is running on port 5001 (not 5000 due to macOS AirPlay)
Electron App Won't Start
- Run `npm install` to ensure all dependencies are installed
- Check that the React build exists: `npm run build`
- Try `./start.sh` instead of `npm run dev`
Python Backend Issues
- Install Python dependencies: `pip install -r src/python/requirements.txt`
- Check that your Python version is 3.8 or higher
- Port 5000 conflict: We use port 5001 to avoid macOS AirPlay
- Test backend directly: Visit http://localhost:5001/docs for API documentation
Port Conflicts
- Stop existing processes: `pkill -f "react-scripts|python3.*app.py"`
- The startup script handles this automatically
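If you suspect a port conflict, a quick check from Python (ports 3000 and 5001 are the defaults used by the dev server and backend):

```python
# Quick check for whether a local port is already in use.
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

for port in (3000, 5001):  # React dev server and FastAPI backend
    state = "in use" if port_in_use(port) else "free"
    print(f"Port {port}: {state}")
```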
- Local AI: Use smaller models like `mistral` for faster responses
- Cloud AI: Use `gpt-3.5-turbo` for cost-effective generation
- Memory: Close unused posts in the editor to save memory
- Best Models: `llama3` for quality, `mistral` for speed
- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Make your changes and test thoroughly
- Submit a pull request with a clear description
This project is licensed under the MIT License - see the LICENSE file for details.
- Ollama for local AI capabilities
- OpenAI for cloud AI services
- Electron for desktop framework
- React for the user interface
- Tailwind CSS for styling
Happy Writing with QuillPilot! ✍️