v0.3-dev

@Uli-Z released this 05 Nov 11:49

What's New in 0.3 (2025)

  • Simpler usage: -i now also analyzes the relevant page text, so combining -t with -i is no longer necessary. Existing -ti calls still work; the -t flag is simply redundant.
  • Faster and cheaper runs: fewer model requests and a smoother live status board.
  • Predictable per‑file token limit via [AI].token_limit (default 1,000,000). When the limit is reached, the tool trims lower‑value context first and may skip low‑signal images; INFO logs record when this happens.
  • No config changes required: existing setups keep working. Tip: adjust [AI].token_limit to trade quality against speed and cost.
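
A minimal config sketch of the new limit, assuming a TOML/INI-style config file (only the `[AI]` section and `token_limit` key are taken from these notes; the file layout around them is illustrative):

```toml
[AI]
# Per-file token budget; default is 1,000,000.
# When reached, lower-value context is trimmed first and
# low-signal images may be skipped (reported at INFO level).
token_limit = 500000
```

Lowering the value speeds up runs and cuts cost at the expense of analysis quality; raising it does the opposite.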

Includes improvements from 0.2:

  • OCR (Tesseract) for scan/low‑text PDFs.
  • LiteLLM multi‑provider support (OpenAI tested).
  • Parallel job execution with a live status board.
  • 24‑hour response caching (disable with --no-cache) and cost reporting.