Fast, CPU-only document extraction with structured output - 2-10x faster than competitors
Save time and money by skipping OCR for files that are already machine-readable
Quick Links: Installation β’ Examples β’ Benchmarks β’ API Reference β’ Contributing
| Metric | PreOCR π | Unstructured.io | Docugami |
|---|---|---|---|
| Speed (10-page PDF) | β < 1 second | ||
| Overall Score | β 91.4/100 | 75.0/100 | 77.1/100 |
| Cost Optimization | β Unique - Skip OCR 50-70% | β No | β No |
| Page-Level Processing | β Yes (unique) | β No | β No |
| Type Safety | β Pydantic (unique) | ||
| Confidence Scores | β Per-element | β No | β Yes |
| Open Source | β Yes | β Partial | β Commercial |
PreOCR is 2-10x faster with unique cost optimization features π
- Benchmark Comparison
- What is PreOCR?
- Key Features
- Quick Start
- How It Works
- Installation
- Usage Examples
- Supported File Types
- Reason Codes
- Performance
- Competitive Comparison
- Architecture
- Project Structure
- API Reference
- Configuration
- Troubleshooting
- Development
- Contributing
PreOCR is a universal document gatekeeper that analyzes any file type and answers one simple question:
"Is this file already machine-readable, or do I need OCR?"
Instead of running expensive OCR on everything, PreOCR uses intelligent analysis to determine if OCR is actually needed. Perfect for filtering documents before sending them to expensive OCR engines like MinerU, Tesseract, or cloud OCR services.
- β‘ Fast: CPU-only, typically < 1 second per file
- π― Accurate: 92-95% accuracy with hybrid pipeline (validated with ground truth data). Recent validation on 27 files achieved 100% accuracy (2 TP, 25 TN, 0 FP, 0 FN)
- π§ Smart: Adaptive pipeline - fast heuristics for clear cases, OpenCV refinement for edge cases
- π Deterministic: Same input β same output
- π« OCR-free: Never performs OCR to detect OCR
- π Page-level: Analyze PDFs page-by-page (v0.2.0+)
- π·οΈ Reason codes: Structured codes for programmatic handling
- π¨ Layout-aware: Detects mixed content and layout structure (v0.3.0+)
- π Batch processing: Process thousands of files in parallel with automatic caching, progress tracking, and resume support (v0.5.0+)
- π Structured extraction π: Comprehensive data extraction with tables, forms, images, and semantic relationships (v1.0.0+)
- π― Type-safe output: Full Pydantic models with IDE autocomplete and runtime validation (v1.0.0+)
- π° Cost optimization: Skip OCR for 50-70% of documents, saving significant processing costs (v1.0.0+)
pip install preocrfrom preocr import needs_ocr
# Simple usage
result = needs_ocr("document.pdf")
if result["needs_ocr"]:
print(f"Needs OCR: {result['reason']}")
# Run your OCR here (e.g., MinerU)
else:
print(f"Already readable: {result['reason']}")Extract structured data from machine-readable documents:
from preocr import extract_native_data
# Extract structured data
result = extract_native_data("document.pdf")
# Access elements, tables, forms, images
for element in result.elements:
print(f"{element.element_type}: {element.text[:50]}...")
print(f" Confidence: {element.confidence:.2%}")
print(f" Bounding box: {element.bbox}")
# Access tables
for table in result.tables:
print(f"Table with {table.rows} rows Γ {table.columns} columns")
# Access table cells
for cell in table.cells:
print(f" Cell [{cell.row}, {cell.col}]: {cell.text}")
# Export to Markdown (LLM-ready)
markdown = extract_native_data("document.pdf", output_format="markdown")
print(markdown)Process thousands of files efficiently with parallel processing:
from preocr import BatchProcessor
# Process entire directory with automatic parallelization
processor = BatchProcessor(max_workers=8)
results = processor.process_directory("documents/")
# Get comprehensive statistics
results.print_summary()
# Access results
for result in results.results:
if result["needs_ocr"]:
print(f"{result['file_path']} needs OCR")PreOCR uses a hybrid adaptive pipeline:
βββββββββββββββ
β Any File β
ββββββββ¬βββββββ
β
βΌ
βββββββββββββββββββ
β Heuristics β β Fast text extraction + rules
β (Fast Path) β (< 1 second)
ββββββββ¬βββββββββββ
β
βΌ
βββββββββββββββββββ
β Confidence β₯0.9?β
ββββββββ¬βββββββββββ
β
βββββ΄ββββ
β β
YES NO
β β
βΌ βΌ
βββββββ βββββββββββββββββββ
βReturnβ β OpenCV Layout β β Only for edge cases
βFast! β β Analysis β (20-200ms)
βββββββ ββββββββββ¬βββββββββ
β
βΌ
βββββββββββββββββ
β Refine Decisionβ
β (Better Accuracy)β
βββββββββ¬ββββββββ
β
βΌ
ββββββββββ
β Result β
ββββββββββ
Performance:
- ~85-90% of files: Fast path (< 150ms) - heuristics only
- ~10-15% of files: Refined path (150-300ms) - heuristics + OpenCV (depends on page count)
- Overall accuracy: 92-95% with hybrid pipeline (vs 88-92% with heuristics alone)
- Average time: 120-180ms per file
Recent Validation Results:
- Test dataset: 27 files (26 PDFs, 1 text file)
- Accuracy: 100.00% (27/27 correct)
- Precision: 100.00% (2/2 true positives)
- Recall: 100.00% (2/2 files needing OCR detected)
- F1-Score: 100.00%
- Confusion Matrix: 2 TP, 25 TN, 0 FP, 0 FN
Note: Accuracy claims should be validated with your own dataset. Use
validate_accuracy.pyto measure accuracy against ground truth labels. See Validation Guide. The 100% result above is from a small sample; larger, more diverse datasets may show different results.
pip install preocrpython -c "from preocr import needs_ocr; print('β
PreOCR installed successfully!')"System Requirements:
- libmagic: Required for file type detection. Install system package:
- Linux (Debian/Ubuntu):
sudo apt-get install libmagic1 - Linux (RHEL/CentOS):
sudo yum install file-develorsudo dnf install file-devel - macOS:
brew install libmagic - Windows: Usually included with
python-magic-binpackage
- Linux (Debian/Ubuntu):
For improved accuracy on edge cases:
pip install preocr[layout-refinement]This installs opencv-python-headless and NumPy for layout analysis. The pipeline automatically uses OpenCV when confidence is low, even if installed separately.
from preocr import needs_ocr
result = needs_ocr("document.pdf")
print(f"Needs OCR: {result['needs_ocr']}")
print(f"Confidence: {result['confidence']:.2f}")
print(f"Reason: {result['reason']}")
print(f"Reason Code: {result['reason_code']}")result = needs_ocr("mixed_document.pdf", page_level=True)
if result["reason_code"] == "PDF_MIXED":
print(f"Mixed PDF: {result['pages_needing_ocr']} pages need OCR")
for page in result["pages"]:
if page["needs_ocr"]:
print(f" Page {page['page_number']}: {page['reason']}")result = needs_ocr("document.pdf", layout_aware=True)
if result.get("layout"):
layout = result["layout"]
print(f"Layout Type: {layout['layout_type']}")
print(f"Text Coverage: {layout['text_coverage']}%")
print(f"Image Coverage: {layout['image_coverage']}%")
print(f"Is Mixed Content: {layout['is_mixed_content']}")PreOCR provides a powerful BatchProcessor class for processing multiple files efficiently with parallel processing, caching, and progress tracking.
from preocr import BatchProcessor
# Create processor with default settings
processor = BatchProcessor()
# Process all files in a directory
results = processor.process_directory("documents/")
# Print summary statistics
results.print_summary()
# Access individual results
for result in results.results:
if result["needs_ocr"]:
print(f"{result['file_path']} needs OCR: {result['reason']}")from preocr import BatchProcessor
# Configure processor with options
processor = BatchProcessor(
max_workers=8, # Parallel workers (default: CPU count)
use_cache=True, # Enable caching to skip processed files
layout_aware=True, # Perform layout analysis for PDFs
page_level=True, # Enable page-level analysis
extensions=["pdf", "png"], # Filter by file extensions
recursive=True, # Scan subdirectories
min_size=1024, # Minimum file size (bytes)
max_size=10*1024*1024, # Maximum file size (bytes)
resume_from="results.json", # Resume from previous results
)
# Process directory with progress bar
results = processor.process_directory("documents/", progress=True)
# Get detailed statistics
stats = results.get_statistics()
print(f"Processed: {stats['processed']} files")
print(f"Needs OCR: {stats['needs_ocr']} ({stats['needs_ocr']/stats['processed']*100:.1f}%)")
print(f"Processing speed: {stats['files_per_second']:.2f} files/sec")
# Access results by type
for result in results.results:
file_type = result.get("file_type")
if result.get("needs_ocr"):
# Process with OCR
pass
else:
# Use existing text
pass- Parallel Processing: Automatically uses all CPU cores for faster processing
- Caching: Skip already-processed files to save time on re-runs
- Progress Tracking: Real-time progress bar with file details (requires
tqdm) - Resume Support: Resume from previous results to continue interrupted batches
- File Filtering: Filter by extensions, size, and recursive scanning
- Page-Level Analysis: Get per-page statistics for PDFs
- Comprehensive Statistics: Detailed breakdown by file type, reason codes, and performance metrics
For simple use cases, you can still use a basic loop:
from pathlib import Path
from preocr import needs_ocr
files = Path("documents").glob("*.pdf")
needs_ocr_count = 0
skipped_count = 0
for file_path in files:
result = needs_ocr(file_path)
if result["needs_ocr"]:
needs_ocr_count += 1
# Process with OCR
else:
skipped_count += 1
# Use existing text
print(f"OCR needed: {needs_ocr_count}, Skipped: {skipped_count}")from preocr import needs_ocr
# from mineru import ocr # or your OCR engine
def process_document(file_path):
result = needs_ocr(file_path)
if result["needs_ocr"]:
# Only run expensive OCR if needed
ocr_result = ocr(file_path)
return ocr_result
else:
# File is already machine-readable
return {"text": extract_text(file_path), "source": "native"}| File Type | Detection | Accuracy | Notes |
|---|---|---|---|
| PDFs | Digital vs Scanned | 90-95% | Page-level analysis available |
| Images | PNG, JPG, TIFF, etc. | 100% | Always needs OCR |
| Office Docs | DOCX, PPTX, XLSX | 85-90% | Text extraction based |
| Text Files | TXT, CSV, HTML | 99% | No OCR needed |
| Structured Data | JSON, XML | 99% | No OCR needed |
| Unknown Binaries | Conservative default | 50-60% | Assumes OCR needed |
PreOCR allows you to customize decision thresholds to fine-tune OCR detection for your specific use case.
from preocr import needs_ocr, Config
# Create custom configuration
config = Config(
min_text_length=75, # Stricter: require 75 chars instead of 50
min_office_text_length=150, # Stricter: require 150 chars for office docs
layout_refinement_threshold=0.85, # Lower threshold triggers OpenCV more often
)
# Use custom config
result = needs_ocr("document.pdf", config=config)from preocr import BatchProcessor
# Option 1: Pass individual threshold parameters
processor = BatchProcessor(
min_text_length=100,
min_office_text_length=200,
layout_refinement_threshold=0.80,
)
# Option 2: Use Config object
from preocr import Config
config = Config(
min_text_length=100,
min_office_text_length=200,
)
processor = BatchProcessor(config=config)
# Process files with custom thresholds
results = processor.process_directory("documents/")min_text_length(int, default: 50): Minimum text length to consider a file as having meaningful text. Files with less text will be flagged as needing OCR.min_office_text_length(int, default: 100): Minimum text length for office documents to skip OCR.layout_refinement_threshold(float, default: 0.9): Confidence threshold for triggering OpenCV layout analysis. Lower values trigger refinement more often.high_confidence(float, default: 0.9): Threshold for high confidence decisions.medium_confidence(float, default: 0.7): Threshold for medium confidence decisions.low_confidence(float, default: 0.5): Threshold for low confidence decisions.
- Stricter detection: Increase
min_text_lengthandmin_office_text_lengthto reduce false negatives (fewer files incorrectly flagged as not needing OCR) - More aggressive refinement: Lower
layout_refinement_thresholdto use OpenCV analysis more frequently - Domain-specific documents: Adjust thresholds based on your document types (e.g., medical forms may need different thresholds than business letters)
PreOCR provides structured reason codes for programmatic handling:
TEXT_FILE- Plain text fileOFFICE_WITH_TEXT- Office document with sufficient textPDF_DIGITAL- Digital PDF with extractable textSTRUCTURED_DATA- JSON/XML filesHTML_WITH_TEXT- HTML with sufficient content
IMAGE_FILE- Image fileOFFICE_NO_TEXT- Office document with insufficient textPDF_SCANNED- PDF appears to be scannedPDF_MIXED- PDF with mixed digital and scanned pagesHTML_MINIMAL- HTML with minimal contentUNKNOWN_BINARY- Unknown binary file type
PDF_PAGE_DIGITAL- Individual page has extractable textPDF_PAGE_SCANNED- Individual page appears scanned
Example:
result = needs_ocr("document.pdf")
if result["reason_code"] == "PDF_MIXED":
# Handle mixed PDF
process_mixed_pdf(result)
elif result["reason_code"] == "PDF_SCANNED":
# All pages need OCR
run_full_ocr(result)Based on comprehensive testing across various document types:
| Scenario | Time | Accuracy |
|---|---|---|
| Fast Path (Heuristics Only) | ||
| - Text files | < 5ms | ~99% |
| - Digital PDFs (1β5 pages) | 30β120ms | 95β98% |
| - Office documents | 80β200ms | 88β92% |
| - Images | 5β30ms | ~100% |
| OpenCV Refinement (CPU, sampled pages) | ||
| - Single-page PDF | 20β60ms | 92β96% |
| - Multi-page PDF (2β5 pages) | 40β120ms | 92β96% |
| - Large PDFs (sampled) | 80β200ms | 90β94% |
| Overall Pipeline | ||
| - Clear cases (~85β90%) | <150ms | ~99% |
| - Edge cases (~10β15%) | 150β300ms | 92β96% |
| - Average | 120β180ms | 94β97% |
Fast Path (~85-90% of files):
- Text extraction: 20-100ms
- Rule-based decision: < 1ms
- Total: < 150ms for most files
OpenCV Refinement (~10-15% of files):
- PDF to image conversion: 10-30ms per page
- OpenCV layout analysis: 10-40ms per page
- Decision refinement: < 1ms
- Total: 20-200ms (depends on page count and sampling strategy)
Factors Affecting Performance:
- File size: Larger files take longer to process
- Page count: More pages = longer OpenCV analysis
- Document complexity: Complex layouts require more processing
- System resources: CPU speed and available memory
To benchmark PreOCR performance on your documents:
# Install with OpenCV support
pip install preocr[layout-refinement]
# Run performance benchmark
python benchmark.py /path/to/pdf/directory [max_files]The benchmark script measures:
- Fast path timing (heuristics only)
- OpenCV analysis timing
- Total pipeline timing
- Performance by page count
- Statistical analysis (min, max, mean, median, P95)
To validate accuracy claims with ground truth data:
# Create ground truth template
python scripts/validate_accuracy.py --create-template /path/to/test/files
# Edit ground_truth.json to set needs_ocr: true/false for each file
# Or use auto-labeling helper:
python scripts/auto_label_ground_truth.py scripts/ground_truth.json
# Run validation
python scripts/validate_accuracy.py /path/to/test/files --ground-truth scripts/ground_truth.json
# Run comprehensive benchmark (performance + accuracy)
python scripts/benchmark_accuracy.py /path/to/test/files --ground-truth scripts/ground_truth.jsonExample Validation Output:
π ACCURACY VALIDATION RESULTS
================================================================================
π Files:
Total: 27
Validated: 27
π Confusion Matrix:
True Positive (TP): 2 - Correctly identified as needing OCR
False Positive (FP): 0 - Incorrectly flagged as needing OCR
True Negative (TN): 25 - Correctly identified as not needing OCR
False Negative (FN): 0 - Missed files that need OCR
π― Overall Metrics:
Accuracy: 100.00%
Precision: 100.00%
Recall: 100.00%
F1-Score: 100.00%
See Validation Guide for detailed instructions on accuracy validation.
File Input
β
File Type Detection (MIME, extension)
β
Text Extraction Probe (PDF, Office, Text)
β
Visual/Binary Analysis (Images, entropy)
β
Decision Engine (Rule-based logic)
β
Confidence Check
ββ High (β₯0.9) β Return
ββ Low (<0.9) β OpenCV Layout Analysis β Refine β Return
preocr/
βββ preocr/ # Main package
β βββ __init__.py # Package initialization
β βββ version.py # Version information
β βββ constants.py # Constants and configuration
β βββ exceptions.py # Custom exception classes
β βββ reason_codes.py # Reason code definitions
β β
β βββ core/ # Core functionality
β β βββ __init__.py
β β βββ detector.py # Main API (needs_ocr function)
β β βββ decision.py # Decision engine
β β βββ signals.py # Signal collection
β β
β βββ probes/ # File type probes
β β βββ __init__.py
β β βββ pdf_probe.py # PDF text extraction
β β βββ office_probe.py # Office document extraction
β β βββ image_probe.py # Image analysis
β β βββ text_probe.py # Text/HTML extraction
β β
β βββ analysis/ # Layout and page analysis
β β βββ __init__.py
β β βββ layout_analyzer.py # PDF layout analysis
β β βββ opencv_layout.py # OpenCV-based analysis
β β βββ page_detection.py # Page-level detection
β β
β βββ utils/ # Utility modules
β βββ __init__.py
β βββ batch.py # Batch processing
β βββ cache.py # Caching system
β βββ filetype.py # File type detection
β βββ logger.py # Logging configuration
β
βββ tests/ # Test suite
β βββ test_*.py # Unit and integration tests
β βββ fixtures/ # Test fixtures
β
βββ examples/ # Example scripts
β βββ basic_usage.py
β βββ batch_processing.py
β βββ layout_aware_usage.py
β
βββ scripts/ # Utility scripts
β βββ validate_accuracy.py # Accuracy validation tool
β βββ benchmark_accuracy.py # Comprehensive benchmark
β βββ auto_label_ground_truth.py # Auto-labeling helper
β βββ ground_truth.json # Example ground truth file
β
βββ docs/ # Documentation
β βββ README.md # Documentation index
β βββ CHANGELOG.md # Version history
β βββ CONTRIBUTING.md # Contribution guidelines
β βββ CODE_OF_CONDUCT.md # Code of conduct
β βββ ... # Other documentation files
β
βββ README.md # Main project README
βββ LICENSE # License file
βββ pyproject.toml # Package configuration
βββ requirements-dev.txt # Development dependencies
core/- Core detection logic and decision engineprobes/- File type-specific text extraction modulesanalysis/- Layout analysis and page-level detectionutils/- Shared utilities (batch processing, caching, logging, file type detection)
Main API function that determines if a file needs OCR.
Parameters:
file_path(str or Path): Path to the file to analyzepage_level(bool): IfTrue, return page-level analysis for PDFs (default:False)layout_aware(bool): IfTrue, perform explicit layout analysis for PDFs (default:False)config(Config, optional): Configuration object with threshold settings (default:None, uses default thresholds)
Returns: Dictionary with:
needs_ocr(bool): Whether OCR is neededfile_type(str): File type categorycategory(str): "structured" or "unstructured"confidence(float): Confidence score (0.0-1.0)reason_code(str): Structured reason codereason(str): Human-readable reasonsignals(dict): All collected signals (for debugging)pages(list, optional): Page-level resultslayout(dict, optional): Layout analysis results
BatchProcessor(max_workers=None, use_cache=True, layout_aware=False, page_level=True, extensions=None, min_size=None, max_size=None, recursive=False, resume_from=None, min_text_length=None, min_office_text_length=None, layout_refinement_threshold=None, config=None)
Batch processor for efficiently processing multiple files with parallel processing, caching, and progress tracking.
Parameters:
max_workers(int, optional): Maximum number of parallel workers (default: CPU count)use_cache(bool): Enable caching to skip already-processed files (default:True)layout_aware(bool): Perform layout analysis for PDFs (default:False)page_level(bool): Perform page-level analysis for PDFs (default:True)extensions(list, optional): List of file extensions to process (e.g.,["pdf", "png"]). Default: common document/image formatsmin_size(int, optional): Minimum file size in bytes (default:None)max_size(int, optional): Maximum file size in bytes (default:None)recursive(bool): Scan subdirectories recursively (default:False)resume_from(str, optional): Path to JSON file with previous results to resume from (default:None)min_text_length(int, optional): Minimum text length threshold (default:None, uses default)min_office_text_length(int, optional): Minimum office text length threshold (default:None, uses default)layout_refinement_threshold(float, optional): Layout refinement threshold (default:None, uses default)config(Config, optional): Configuration object with threshold settings (default:None, uses default thresholds)
Methods:
process_directory(directory, progress=True) -> BatchResults: Process all files in a directory
Returns:
BatchResults object with:
results(list): List of result dictionaries (one per file)errors(list): List of error dictionaries for failed filesget_statistics() -> dict: Get comprehensive statistics about the batchprint_summary(): Print formatted summary to console
extract_native_data(file_path, include_tables=True, include_forms=True, include_metadata=True, include_structure=True, include_images=True, include_bbox=True, pages=None, output_format="pydantic", config=None)
Extract structured data from machine-readable documents (PDFs, Office docs, text files).
Parameters:
file_path(str or Path): Path to the file to extract data frominclude_tables(bool): Whether to extract tables (default:True)include_forms(bool): Whether to extract form fields (default:True)include_metadata(bool): Whether to include document metadata (default:True)include_structure(bool): Whether to detect sections and reading order (default:True)include_images(bool): Whether to detect images (default:True)include_bbox(bool): Whether to include bounding box coordinates (default:True)pages(list, optional): Optional list of page numbers to extract (1-indexed). IfNone, extracts all pagesoutput_format(str): Output format -"pydantic"(default),"json", or"markdown"config(Config, optional): Configuration object (currently unused, reserved for future use)
Returns:
ExtractionResult(Pydantic model) ifoutput_format="pydantic"Dict[str, Any]ifoutput_format="json"str(Markdown) ifoutput_format="markdown"
Example:
from preocr import extract_native_data
# Extract as Pydantic model
result = extract_native_data("document.pdf")
print(result.overall_confidence)
# Extract specific pages as JSON
json_data = extract_native_data("document.pdf", pages=[1, 2], output_format="json")
# Extract as markdown for LLM consumption
markdown = extract_native_data("document.pdf", output_format="markdown")Container for batch processing results with statistics and summary methods.
Attributes:
results(list): List of result dictionarieserrors(list): List of error dictionariestotal_files(int): Total number of files foundprocessed_files(int): Number of files successfully processedskipped_files(int): Number of files skipped (cached/resumed)
Methods:
get_statistics() -> dict: Returns statistics including:- File counts (total, processed, errors, skipped)
- OCR decisions (needs_ocr, no_ocr counts and percentages)
- Page-level statistics (total pages, pages needing OCR)
- Breakdown by file type and reason code
- Performance metrics (processing time, files per second)
print_summary(): Prints a formatted summary to the console
PreOCR uses Python's logging module for debugging and monitoring. Configure logging via environment variable:
# Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
export PREOCR_LOG_LEVEL=INFO
# Or in Python
from preocr.logger import set_log_level
import logging
set_log_level(logging.DEBUG)Default log level is WARNING. Set to INFO or DEBUG for more verbose output during development.
1. File type detection fails
- Ensure
libmagicis installed on your system - Linux:
sudo apt-get install libmagic1(Debian/Ubuntu) orsudo yum install file-devel(RHEL/CentOS) - macOS:
brew install libmagic - Windows: Usually included with
python-magic-binpackage
2. PDF text extraction returns empty results
- Check if PDF is password-protected
- Verify PDF is not corrupted
- Try installing both
pdfplumberandPyMuPDFfor better compatibility
3. OpenCV layout analysis not working
- Install OpenCV dependencies:
pip install preocr[layout-refinement] - Verify OpenCV is available:
python -c "import cv2; print(cv2.__version__)"
4. Low confidence scores
- Enable layout-aware analysis:
needs_ocr(file_path, layout_aware=True) - Check file type is supported
- Review signals in result dictionary for debugging
5. Performance issues
- Most files use fast path (< 150ms)
- Large PDFs may take longer; consider page-level analysis
- Disable layout-aware analysis if speed is critical
- Check existing Issues
- Enable debug logging:
export PREOCR_LOG_LEVEL=DEBUG - Review signals in result:
result["signals"]for detailed analysis
# Clone repository
git clone https://github.com/yuvaraj3855/preocr.git
cd preocr
# Install in development mode
pip install -e ".[dev]"
# Install pre-commit hooks
pre-commit install
# Run tests
pytest
# Run with coverage
pytest --cov=preocr --cov-report=html
# Run linting
ruff check preocr/
black --check preocr/
# Run type checking
mypy preocr/See CHANGELOG.md for version history.
v1.0.1 - Bug Fixes & Type Improvements (Latest)
- Fixed mypy type errors and improved type annotations
- Fixed unused variable warnings
- Improved CI/CD workflow reliability
v1.0.0 - Structured Data Extraction π
- Structured Data Extraction: Comprehensive extraction system for PDFs, Office docs, and text files
- Element-Based Structure: Rich element extraction with 11+ classification types
- Confidence Scoring: Per-element and overall confidence scores
- Bounding Boxes: Precise coordinates for all elements
- Table Extraction: Advanced table extraction with cell-level metadata
- Form Field Detection: Form field extraction with semantic naming
- Image Detection: Image location and metadata extraction
- Section Detection: Hierarchical sections with parent-child relationships
- Multiple Output Formats: Pydantic models, JSON, and Markdown (LLM-ready)
- Type Safety: Full Pydantic models with IDE autocomplete
v0.5.0 - Batch Processing with Parallel Execution
- BatchProcessor class for processing thousands of files efficiently
- Parallel processing with automatic worker management
- Built-in caching to skip already-processed files
- Progress tracking with detailed statistics
- Resume support for interrupted batches
- File filtering (extensions, size, recursive scanning)
- Comprehensive statistics and reporting
v0.3.0 - Hybrid Pipeline with OpenCV Refinement
- Adaptive pipeline: fast heuristics + OpenCV for edge cases
- Improved accuracy (92-95%)
- Layout-aware detection
- Automatic confidence-based refinement
v0.2.0 - Page-Level Detection
- Page-by-page analysis for PDFs
- Structured reason codes
- Enhanced confidence scoring
Contributions are welcome! Please feel free to submit a Pull Request.
See CONTRIBUTING.md for detailed guidelines and CODE_OF_CONDUCT.md for our code of conduct.
Apache License 2.0 - see LICENSE file for details.
- GitHub: https://github.com/yuvaraj3855/preocr
- PyPI: https://pypi.org/project/preocr
- Issues: https://github.com/yuvaraj3855/preocr/issues
Before PreOCR:
- β Run OCR on everything β Expensive, slow
- β Manual inspection β Time-consuming
- β No automation β Not scalable
With PreOCR:
- β Skip OCR for 50-70% of files β Save money
- β Fast decisions (< 1 second) β Don't slow pipeline
- β Automated β Scalable
- β 92-95% accurate (100% on recent validation) β Good enough for production
Perfect for:
- Document processing pipelines
- Cost optimization (skip expensive OCR)
- Batch document analysis
- Pre-filtering before OCR engines (MinerU, Tesseract, etc.)
PreOCR is a highly competitive document extraction solution that matches or exceeds industry leaders while offering unique advantages.
- Ahead of Unstructured.io (75.0)
- Ahead of Docugami (77.1)
| Feature | PreOCR | Unstructured.io | Docugami |
|---|---|---|---|
| Speed | β < 1 second (2-10x faster) | ||
| Cost Optimization | β Unique - Skip OCR for 50-70% | β No | β No |
| Page-Level Processing | β Yes (unique) | β No | β No |
| Type Safety | β Pydantic models (unique) | ||
| Confidence Scores | β Per-element + overall | β No | β Yes |
| Forms Extraction | β Yes | β No | β Yes |
| PDF Extraction | β Excellent | β Excellent | β Excellent |
| Office Docs | β Yes | β Yes | β Yes |
| Bounding Boxes | β Yes (all elements) | β Yes | β Yes |
| Markdown Output | β Yes (LLM-ready) | β Yes | |
| Open Source | β Yes | β Partial | β Commercial |
| CPU-Only | β Yes | β Yes | |
| Batch Processing | β Excellent (parallel) | β Yes | β Yes |
- Speed: 2-10x faster than competitors (< 1 second vs 5-20 seconds)
- Cost Optimization: Skip OCR for 50-70% of documents - no competitor offers this
- Page-Level Granularity: Extract specific pages, page-level OCR detection - no competitor offers this
- Type Safety: Full Pydantic models with IDE autocomplete - no competitor offers this
- CPU-Only: No GPU required, edge-friendly deployment
Test: 10-Page Academic PDF
| Metric | PreOCR | Unstructured.io | Docugami |
|---|---|---|---|
| Processing Time | β < 1 second | ||
| Elements Extracted | β 1,064 | β ~1,000 | β ~1,000 |
| Confidence Score | β 90.92% | β N/A | β ~90% |
| Sections Detected | β 29 | β ~30 | |
| Errors | β 0 |
β Choose PreOCR when:
- You need speed (< 1 second processing)
- You want cost optimization (skip OCR for 50-70% of documents)
- You need page-level granularity (extract specific pages)
- You want type safety (Pydantic models)
- You're building LLM/RAG pipelines
- You need edge deployment (CPU-only)
- You want open source solution
- You need advanced semantic relationships (Docugami)
- You need extensive documentation/examples (Unstructured.io)
- You need enterprise document understanding (Docugami)
See Competitive Analysis for comprehensive feature-by-feature comparison.
| Feature | PreOCR | Manual Inspection | Run OCR on Everything |
|---|---|---|---|
| Speed | < 1s per file | Minutes per file | Seconds to minutes |
| Cost | Free (CPU-only) | Time-consuming | Expensive (cloud OCR) |
| Accuracy | 92-95% (100% on recent validation) | 100% (manual) | N/A (always runs) |
| Automation | β Yes | β No | β Yes |
| CPU-only | β Yes | β Yes | β No (may need GPU) |
| Scalability | β Excellent | β Poor |
Q: Does PreOCR perform OCR?
A: No, PreOCR never performs OCR. It only analyzes files to determine if OCR is needed.
Q: How accurate is PreOCR?
A: PreOCR is designed to achieve 92-95% accuracy with the hybrid pipeline (heuristics + OpenCV refinement). Recent validation on a sample dataset of 27 files achieved 100% accuracy (100% precision, 100% recall, 100% F1-score). Accuracy can be validated using the provided validation tools (scripts/validate_accuracy.py and scripts/auto_label_ground_truth.py). See Validation Guide for details on measuring accuracy with your own dataset.
Q: Can I use PreOCR with cloud OCR services?
A: Yes! PreOCR is perfect for filtering documents before sending to cloud OCR APIs (AWS Textract, Google Vision, Azure Computer Vision, etc.).
Q: What happens if PreOCR makes a mistake?
A: PreOCR is conservative - it may flag some digital documents as needing OCR, but rarely misses documents that actually need OCR. You can review confidence scores to fine-tune decisions.
Q: Does PreOCR work offline?
A: Yes! PreOCR is CPU-only and works completely offline. No internet connection required.
Q: Can I customize the decision thresholds?
A: Yes! You can customize thresholds using the Config class or by passing threshold parameters to BatchProcessor. See the Configuration section for details.
Q: What file sizes can PreOCR handle?
A: PreOCR can handle files of any size, but very large files (>100MB) may take longer. For batch processing, you can set max_size limits.
Q: Is PreOCR thread-safe?
A: Yes, PreOCR functions are thread-safe and can be used in multi-threaded environments. Batch processing uses multiprocessing for better performance.
Made with β€οΈ for efficient document processing