Transform your technical documentation with comprehensive style analysis, readability scoring, and AI-powered iterative rewriting. Designed for technical writers targeting 9th-11th grade readability standards.
Python 3.12+ (Download here)
CRITICAL: This project requires Python 3.12 or higher. Older versions WILL NOT WORK.
Quick Check:
python3.12 --version # Should show: Python 3.12.x
Windows:
# Navigate to project folder
cd C:\path\to\style-guide-ai
# Create and activate virtual environment (ensure Python 3.12 is installed)
python -m venv venv
venv\Scripts\activate
Linux (Fedora/RHEL-based):
# Update system and install Python 3.12
sudo dnf clean all
sudo dnf update
sudo dnf install python3.12
# Navigate to project folder
cd ~/path/to/style-guide-ai
# Create and activate virtual environment
python3.12 -m venv venv
source venv/bin/activate
macOS:
# Navigate to project folder
cd ~/path/to/style-guide-ai
# Create and activate virtual environment (ensure Python 3.12 is installed)
python3.12 -m venv venv
source venv/bin/activate
You should see (venv) at the start of your command prompt.
# Upgrade pip first
pip install --upgrade pip
# Install all Python packages (conflict-free!)
pip install -r requirements.txt
python main.py
Then visit: http://localhost:5000
That's it! The application sets everything up automatically on first run:
- SpaCy language models
- NLTK data downloads
- Directory creation
- Dependency verification
- Ollama detection & guidance
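The dependency-verification step above can be sketched with the standard library alone. This is a minimal illustration, not the project's actual setup code; the package and directory names are assumptions taken from this README.

```python
import importlib.util
import os

def verify_setup(packages=("flask", "spacy", "nltk"), dirs=("uploads", "logs")):
    """Return the list of packages that are not importable, creating work dirs on the way."""
    missing = [p for p in packages if importlib.util.find_spec(p) is None]
    for d in dirs:
        os.makedirs(d, exist_ok=True)  # harmless if the directory already exists
    return missing
```

A non-empty return value would tell the app which dependencies to install or download before continuing.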
Always activate your virtual environment first:
Windows:
cd C:\path\to\style-guide-ai
venv\Scripts\activate
python main.py
Linux/macOS:
cd ~/path/to/style-guide-ai
source venv/bin/activate
python main.py
Comprehensive automated testing with AI-powered analysis:
# Install test dependencies (in your activated venv)
pip install -r requirements-test.txt
playwright install chromium
# Run all tests
pytest
# Run with AI analysis and report
python -m testing_agent.test_runner
# View report
open testing_agent/reports/latest_report.html
- Unit Tests - Fast module-level tests (2 min)
- Integration Tests - End-to-end workflows (5 min)
- UI Tests - Browser-based interface tests (3 min)
- Performance Tests - Load and stress testing
- Quality Metrics - ML quality monitoring
Tests automatically run daily at 2 AM via GitLab CI pipeline:
- Go to CI/CD → Schedules → New schedule
- Set cron: 0 2 * * *
- Variable: SCHEDULED_JOB=daily_tests
The testing agent uses the same AI configuration as your main app:
Local Development (uses your .env file):
# Your local .env file
AI_PROVIDER=ollama # or openai, anthropic, custom
OLLAMA_MODEL=llama3.2:latest
# ... other settings from your .env
GitLab CI/CD (uses secure GitLab environment variables):
- Go to Settings → CI/CD → Variables
- Add variables (they are encrypted and private, NOT public):
AI_PROVIDER=ollama
OLLAMA_MODEL=llama3.2:latest
- For APIs: OPENAI_API_KEY=sk-... (check "Masked" and "Protected")
Security & Access:
- ✅ Variables are encrypted at rest
- ✅ Mark sensitive keys as "Masked" (hidden in logs)
- ✅ Mark as "Protected" (only accessible on protected branches)
- ✅ NOT visible in repository or to unauthorized users
- ✅ Only accessible to GitLab CI jobs you configure
- ⚠️ Maintainer/Owner roles can view and edit variables (they need them to run CI/CD)
Important for Ollama Users: If you use local Ollama (model downloaded on your machine), it won't work in GitLab CI because:
- GitLab CI runs on a remote server (not your machine)
- Your local Ollama/models aren't accessible to GitLab runners
Solutions:
- Option A: Use API-based AI for GitLab CI (OpenAI, Anthropic)
- Option B: Skip AI analysis in CI and only run tests (set AI_PROVIDER=none)
- Option C: Self-hosted GitLab runner with Ollama installed (advanced)
The testing_agent/config.py reads from environment variables, so it works securely in both places!
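As a rough sketch of that environment-driven pattern: the variable names below come from this README, but the defaults and dict layout are assumptions, not the actual contents of testing_agent/config.py.

```python
import os

def load_ai_config():
    """Read AI settings from the environment, falling back to assumed defaults."""
    return {
        "provider": os.getenv("AI_PROVIDER", "ollama"),
        "model": os.getenv("OLLAMA_MODEL", "llama3:8b"),
        "api_key": os.getenv("OPENAI_API_KEY", ""),
    }
```

Because only the environment is consulted, the same code works whether the values come from a local .env file or from GitLab CI/CD variables.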
For detailed testing documentation: See TESTING_AGENT_GUIDE.md
- Two-Pass Process: AI reviews and refines its own output
- Local Ollama Integration: Privacy-first with Llama models
- Real-Time Progress: Watch the AI improvement process step-by-step
- Smart Confidence Scoring: Know how much the AI improved your text
- Grade Level Assessment: Targets 9th-11th grade readability
- Multiple Readability Scores: Flesch, Gunning Fog, SMOG, Coleman-Liau, ARI
- Style Issues Detection: Passive voice, sentence length, wordiness
- Technical Writing Metrics: Custom scoring for documentation
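As a rough illustration of how grade-level scores like these work, here is a minimal Flesch-Kincaid sketch using a crude vowel-group syllable heuristic; real analyzers use dictionaries and much more careful tokenization.

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels (minimum 1).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    """Approximate U.S. grade level: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59
```

Shorter sentences and shorter words both pull the score toward the 9th-11th grade target band.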
- Text Files: .txt, .md (Markdown)
- Documents: .docx (Microsoft Word)
- Technical Formats: .adoc (AsciiDoc), .dita (DITA)
- PDFs: Extract and analyze existing documents
- Direct Input: Paste text directly into the interface
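Routing each supported format to the right parser can be as simple as an extension lookup. The mapping and fallback below are hypothetical, chosen only to mirror the format list above.

```python
from pathlib import Path

# Hypothetical mapping from file extension to a parser name.
PARSERS = {
    ".txt": "plain_text",
    ".md": "markdown",
    ".docx": "docx",
    ".adoc": "asciidoc",
    ".dita": "dita",
    ".pdf": "pdf",
}

def pick_parser(filename):
    """Return the parser for a supported file, defaulting to plain text."""
    return PARSERS.get(Path(filename).suffix.lower(), "plain_text")
```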
- Real-Time Analysis: Instant feedback on text quality
- Interactive Error Highlighting: Click to see specific issues
- Progress Transparency: No fake spinners - see actual AI work
- Responsive Design: Works on desktop, tablet, and mobile
For the best AI rewriting experience, install Ollama with our recommended model:
- Download from: https://ollama.com/download/windows
- Run installer
- Open Command Prompt:
ollama pull llama3:8b
- Download from: https://ollama.com/download/mac
- Install .dmg file
- Open Terminal:
ollama pull llama3:8b
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3:8b
Recommended Model:
llama3:8b - Superior writing quality and reasoning (4.7GB)
Alternative Models (if needed):
llama3.2:3b - Good balance of speed and quality (2GB)
Why llama3:8b?
- Superior writing quality and reasoning capabilities
- Excellent for complex technical writing improvements
- Better understanding of context and nuance
- Optimal performance with our two-pass iterative process
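Once the model is pulled, Ollama serves it over a local HTTP API. The sketch below targets Ollama's default /api/generate endpoint; the prompt wording is illustrative only and is not the project's actual prompt.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_rewrite_request(text, model="llama3:8b"):
    # "stream": False asks for a single JSON response instead of a stream.
    return {
        "model": model,
        "prompt": f"Rewrite at a 9th-11th grade reading level:\n\n{text}",
        "stream": False,
    }

def rewrite(text):
    """Send one rewrite request to the local Ollama server and return its response text."""
    data = json.dumps(build_rewrite_request(text)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Running rewrite() requires the Ollama server to be up (ollama serve); building the payload does not.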
If you prefer to use a different model than our recommended llama3:8b, you can easily customize it:
Step 1: Update Configuration
Edit config.py and change line 45:
# Change this line:
OLLAMA_MODEL = os.getenv('OLLAMA_MODEL', 'llama3:8b')
# To your preferred model, for example:
OLLAMA_MODEL = os.getenv('OLLAMA_MODEL', 'llama3.2:3b')
Step 2: Pull Your Chosen Model
# For llama3.2:3b (faster, smaller)
ollama pull llama3.2:3b
# Or any other compatible model
ollama pull your-chosen-model
Step 3: Restart the Application
python main.py
Popular Alternative Models:
llama3.2:3b - Good balance of speed and quality (2GB)
llama3.2:1b - Very fast, basic quality (1.3GB)
llama3:70b - Highest quality, requires powerful hardware (40GB)
codellama:7b - Optimized for technical/code documentation (3.8GB)
For AsciiDoc document parsing and analysis, install the asciidoctor Ruby gem:
Windows:
- Download from: https://rubyinstaller.org/
- Run the installer (includes Ruby + DevKit)
- Open Command Prompt and verify:
ruby --version
macOS:
# Ruby is usually pre-installed. If not:
brew install ruby
# Verify installation
ruby --version
Linux:
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install ruby-full
# Fedora/RHEL
sudo dnf install ruby ruby-devel
# Verify installation
ruby --version
# Install asciidoctor
gem install asciidoctor
# Or with sudo if needed
sudo gem install asciidoctor
# Verify installation
asciidoctor --version
- High-Performance Parsing: Uses persistent Ruby server (15x faster than subprocess)
- Full AsciiDoc Support: Complete parsing of admonitions, tables, includes, etc.
- Accurate Structure Analysis: Proper block-level content analysis
- Document Title Detection: Correctly identifies and displays document titles
Without Asciidoctor:
- AsciiDoc parsing will be limited to basic text extraction
- Document structure analysis may be incomplete
- Style analysis won't recognize AsciiDoc-specific elements
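That graceful degradation can be sketched like this; the function names are illustrative, not the project's actual API.

```python
import shutil
import subprocess

def asciidoctor_available():
    """Check whether the asciidoctor CLI is on PATH."""
    return shutil.which("asciidoctor") is not None

def parse_adoc(path):
    # Use full parsing when asciidoctor is installed; otherwise fall back
    # to the basic text extraction described above.
    if asciidoctor_available():
        result = subprocess.run(
            ["asciidoctor", "-o", "-", path],  # convert to HTML on stdout
            capture_output=True, text=True, check=True,
        )
        return result.stdout
    with open(path, encoding="utf-8") as f:
        return f.read()
```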
For comprehensive troubleshooting, see: SETUP_TROUBLESHOOTING.md
This detailed guide covers:
- Python version conflicts and installation
- Virtual environment problems
- Package installation failures
- SpaCy and NLTK setup issues
- Ollama/Llama installation and configuration
- OS-specific solutions (Windows/macOS/Linux)
- Complete setup verification script
Python Version Issues:
# CRITICAL: You must use Python 3.12+
python3.12 --version # Should show 3.12.x
# Create venv with EXACT Python version
python3.12 -m venv venv
Virtual Environment Issues:
# Always activate venv first (you should see (venv) in prompt)
source venv/bin/activate # Linux/macOS
venv\Scripts\activate # Windows
# Verify Python version in venv
python --version # Should show Python 3.12.x
Package Installation Issues:
# Nuclear option: fresh reinstall
rm -rf venv
python3.12 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt --no-cache-dir
Quick Setup Verification:
# Verify everything is working
python -c "import flask, spacy, nltk; print('Core packages OK')"
python -c "import spacy; spacy.load('en_core_web_sm'); print('SpaCy model OK')"
Input Text:
"In order to facilitate the implementation of the new system, it was decided by the team that the best approach would be to utilize a modular architecture."
AI Analysis Detects:
- Passive voice: "it was decided"
- Wordy phrases: "in order to", "utilize"
- Long sentence: 27 words (target: 15-20)
- Grade level: 14th (target: 9th-11th)
AI Rewrite (Pass 1):
"To implement the new system, the team decided to use a modular architecture."
AI Rewrite (Pass 2 - Final):
"The team chose a modular architecture to implement the new system."
Improvements:
- Reduced from 27 to 11 words
- Converted to active voice
- Removed wordy phrases
- Lowered to 9th grade level
- Improved clarity and flow
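The claimed improvements can be spot-checked in a few lines:

```python
original = ("In order to facilitate the implementation of the new system, "
            "it was decided by the team that the best approach would be "
            "to utilize a modular architecture.")
final = "The team chose a modular architecture to implement the new system."

# Verify the headline improvements programmatically.
assert len(final.split()) < len(original.split())   # shorter sentence
assert "was decided" not in final                   # passive construction removed
assert "in order to" not in final.lower()           # wordy phrase removed
```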
style-guide-ai/
├── main.py # Main Flask application (auto-setup included!)
├── requirements.txt # Clean, conflict-free dependencies
├── config.py # Main application configuration
├── style_analyzer/ # Analysis modules
├── rewriter/ # AI rewriting components
├── rules/ # Style rules and checks
├── models/ # AI model management
├── ui/ # User interface files
│ ├── templates/ # HTML templates
│ └── static/ # CSS, JS, images
├── uploads/ # File upload storage
└── logs/ # Application logs
- Fork the repository
- Create your feature branch:
git checkout -b feature/amazing-feature
- Commit your changes:
git commit -m 'Add amazing feature'
- Push to the branch:
git push origin feature/amazing-feature
- Open a Pull Request
- SpaCy for advanced NLP processing
- Ollama for local AI model serving
- Flask for the web framework
- Bootstrap for responsive UI components
Made with love for technical writers who value privacy and quality.