What's New in 0.3 (2025)
- Simpler usage: `-i` now also analyzes the relevant page text, so combining `-t` with `-i` is no longer necessary. Existing `-ti` calls still work (the `-t` is simply redundant).
- Faster and cheaper runs: fewer model requests and a smoother live status board.
- Predictable per‑file token limit via `[AI].token_limit` (default 1,000,000). If the limit is reached, the tool trims lower‑value context first and may skip low‑signal images; INFO logs indicate when this happens.
- No config changes required: current setups continue to work. Tip: adjust `[AI].token_limit` to trade quality against speed and cost.
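As a sketch, lowering the limit in the config file might look like this (the section and key names come from this changelog; the file name and the chosen value are illustrative assumptions):

```toml
# config.toml (hypothetical file name)
[AI]
# Default is 1,000,000 tokens per file; a lower value trades
# some quality for faster, cheaper runs.
token_limit = 250000
```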
Includes improvements from 0.2:
- OCR (Tesseract) for scan/low‑text PDFs.
- LiteLLM multi‑provider support (OpenAI tested).
- Parallel job execution with a live status board.
- 24h caching with optional `--no-cache` and cost reporting.