Releases: Piebald-AI/splitrail
v3.3.5
v3.3.4
What's Changed
- Fix GPT-5.2 by @mike1858 in #119
- feat: add Claude Sonnet 4.6 pricing by @mike1858 in #121
- Fix JSONL parsing warnings and add missing model pricing by @mike1858 in #127
- feat: add Gemini 3.1 Pro pricing and model aliases by @mike1858 in #125
- fix: widen token columns and add overflow-safe formatting by @mike1858 in #124
- v3.3.4 by @mike1858 in #128
Full Changelog: v3.3.3...v3.3.4
v3.3.3
What's Changed
- fix(piebald): use per-message model instead of chat-level model by @mike1858 in #112
- fix: strip non-standard numeric format annotations from MCP JSON schemas by @mike1858 in #114
- feat: add pricing for Gemini 3 Flash and GPT-5.3-Codex by @mike1858 in #115
- v3.3.3 by @mike1858 in #116
Full Changelog: v3.3.2...v3.3.3
v3.3.2
v3.3.1
What's Changed
- Fix #104 by @mike1858 in #105
- Add a --dry-run flag for uploading by @basekevin in #106
- Add support for Z.AI/Zhipu AI, xAI, and Synthetic.new models by @Sewer56 in #107
- Add per-model daily stats to the JSON output by @signadou in #108
- v3.3.1 by @mike1858 in #109
New Contributors
- @basekevin made their first contribution in #106
Full Changelog: v3.3.0...v3.3.1
v3.3.0
Memory Efficiency Overhaul
This release brings a big architectural improvement to how Splitrail tracks your usage data.
Contribution Caching — We've introduced a new contribution caching system with different strategies for different analyzers. This means dramatically lower memory usage when watching huge conversation histories, with file-level incremental updates that avoid reprocessing unchanged data.
Thank you, @Sewer56, for this amazing contribution!
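The core idea can be sketched as follows. This is a minimal illustration with hypothetical names and stats, not Splitrail's actual internals: each file's parsed stats are cached against a modification stamp, so a rescan only reparses files that changed, and totals are a cheap fold over per-file contributions.

```rust
use std::collections::HashMap;

// Hypothetical per-file contribution: aggregated stats parsed from one file.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Contribution {
    tokens: u64,
}

// Cache keyed by file path; stores the last-seen modification stamp alongside
// the contribution so unchanged files are never reparsed.
struct ContributionCache {
    entries: HashMap<String, (u64, Contribution)>,
}

impl ContributionCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    // Returns the cached contribution if the file is unchanged; otherwise
    // runs `parse` (a potentially expensive full-file pass) and refreshes it.
    fn get_or_update<F>(&mut self, path: &str, mtime: u64, parse: F) -> Contribution
    where
        F: FnOnce() -> Contribution,
    {
        if let Some(&(stamp, contrib)) = self.entries.get(path) {
            if stamp == mtime {
                return contrib;
            }
        }
        let contrib = parse();
        self.entries.insert(path.to_string(), (mtime, contrib));
        contrib
    }

    // Totals are a fold over cached per-file contributions, so memory stays
    // proportional to the number of files, not the number of messages.
    fn total_tokens(&self) -> u64 {
        self.entries.values().map(|&(_, c)| c.tokens).sum()
    }
}
```

The point of the per-file granularity is that appending to one conversation log invalidates only that file's entry, not the whole history.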
New Model Support
GPT-5.2-Codex — We added pricing for OpenAI's newest Codex model. If you've been using GPT-5.2-Codex through Codex CLI, costs will now be tracked.
Bug Fixes
Streaming token capture — Fixed an issue where the Piebald analyzer wasn't capturing tokens correctly during streaming; it now uses the updated_at timestamp instead of created_at.
What's Changed
- fix(analyzers): use updated_at timestamp for accurate streaming token capture by @mike1858 in #92
- Refactor: Migrate CLAUDE.md to AGENTS.md with modular docs by @Sewer56 in #94
- feat(upload): add debug logging for upload operations by @mike1858 in #96
- Update Piebald banner in the README by @signadou in #97
- Update the Piebald banner in the README by @signadou in #98
- 2025 -> 2026 by @mike1858 in #100
- Improved Memory Usage, Incremental File Level Updates by @Sewer56 in #99
- GPT-5-2-Codex by @mike1858 in #101
- 3.3.0 by @mike1858 in #102
- 3.3.0 (for real) by @mike1858 in #103
Full Changelog: v3.2.2...v3.3.0
v3.2.2
TUI Improvements
This release brings several quality-of-life improvements to the TUI:
Cached Tokens Visibility — The session view now displays cached tokens instead of a redundant tool column, giving you better insight into your token efficiency and cost savings from prompt caching.
Smarter Summary Totals — When viewing sessions, the summary totals now filter to show only the selected day's statistics, making it easier to understand your daily usage at a glance.
Reverse Sort Order — Press r to toggle reverse sort order in any view. Quickly flip between ascending and descending to find what you're looking for faster.
Thank you to @Sewer56 for these improvements!
Bug Fixes
- Fixed hash collisions — Resolved an issue where distinct conversation entries could collide during deduplication; entries are now hashed on timestamp+id, ensuring accurate deduplication.
- File watcher nested directories — Fixed an issue where the file watcher wouldn't detect new sessions in nested directories.
- OpenCode parsing — Fixed a crash when OpenCode messages contain a boolean summary field.
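The timestamp+id hashing fix above can be sketched like this. Names and field choices here are illustrative assumptions, not Splitrail's actual code: an id alone can repeat across sessions, so the timestamp is folded into the dedup key as well.

```rust
use std::collections::HashSet;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical dedup key: hashing the id alone can collide when ids repeat
// across sessions, so the timestamp is hashed in as well.
fn entry_key(timestamp: &str, id: &str) -> u64 {
    let mut h = DefaultHasher::new();
    timestamp.hash(&mut h);
    id.hash(&mut h);
    h.finish()
}

// Keeps only the first occurrence of each (timestamp, id) pair.
fn dedup<'a>(entries: &[(&'a str, &'a str)]) -> Vec<(&'a str, &'a str)> {
    let mut seen = HashSet::new();
    entries
        .iter()
        .copied()
        .filter(|&(ts, id)| seen.insert(entry_key(ts, id)))
        .collect()
}
```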
What's Changed
- fix(opencode): handle boolean summary field in message parsing by @Sewer56 in #83
- Fix file watcher to detect new sessions in nested directories by @Sewer56 in #86
- feat(tui): add 'r' hotkey to toggle reverse sort order by @Sewer56 in #84
- feat(tui): filter summary totals to selected day in session view by @Sewer56 in #87
- feat(tui): replace redundant tool column with cached tokens in session view by @Sewer56 in #88
- Add a sample skill by @bl-ue in #89
- fix(analyzers): use timestamp+id hash to prevent collision by @mike1858 in #90
- v3.2.2 by @mike1858 in #91
Full Changelog: v3.2.1...v3.2.2
v3.2.1
Update Notifications
Splitrail now keeps you informed when a new version is available. On startup, the TUI checks GitHub Releases in the background and, if an update is available, displays a yellow banner that you can dismiss by pressing u:
New version available: 3.2.0 -> 3.2.1 (press 'u' to dismiss)
This is a quality-of-life improvement to help you stay current with the latest features and bug fixes.
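The version check boils down to a strictly-newer comparison between the running build and the latest release tag. A minimal sketch, with function names assumed for illustration (GitHub tags may carry a leading "v", so it is stripped before parsing):

```rust
// Parse a "MAJOR.MINOR.PATCH" string (with an optional leading 'v')
// into a tuple that compares component-wise.
fn parse_semver(v: &str) -> Option<(u32, u32, u32)> {
    let mut parts = v.trim_start_matches('v').splitn(3, '.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    let patch = parts.next()?.parse().ok()?;
    Some((major, minor, patch))
}

// True when the latest release is strictly newer than the running build;
// unparseable versions fail closed (no banner).
fn update_available(current: &str, latest: &str) -> bool {
    match (parse_semver(current), parse_semver(latest)) {
        (Some(cur), Some(new)) => new > cur,
        _ => false,
    }
}
```

Tuple comparison in Rust is lexicographic, which matches semver precedence for plain `MAJOR.MINOR.PATCH` versions.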
What's Changed
- feat(tui): add background version check with update notification banner by @mike1858 in #80
- v3.2.1 by @mike1858 in #81
Full Changelog: v3.2.0...v3.2.1
v3.2.0
Introducing Piebald Support
We now officially support Piebald — the ultimate agentic AI control experience for developers. Track your Piebald usage across all your favorite providers, whether you're using OpenAI, Anthropic, Google, or any compatible API.
New Model Support
We added pricing for OpenAI's latest frontier models, GPT-5.2 and GPT-5.2-Pro, as well as GPT-5-Pro, keeping Splitrail current with the latest releases.
Other Improvements
- Timezone support for uploads — Your data now includes timezone information for more accurate time tracking on Splitrail Cloud.
- Codex CLI fix — Fixed duplicate entries by including entry count in globalHash.
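The globalHash fix above follows a pattern worth spelling out. In this hypothetical sketch (not the actual Codex analyzer), two snapshots of a log with the same leading content but different lengths previously hashed identically, so appended entries looked like duplicates; folding the entry count into the hash disambiguates them.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical global hash over a session: including the entry count means
// a session that grows by appended entries gets a new hash even when the
// sampled content used for hashing is unchanged.
fn global_hash(sampled_content: &str, entry_count: usize) -> u64 {
    let mut h = DefaultHasher::new();
    sampled_content.hash(&mut h);
    entry_count.hash(&mut h);
    h.finish()
}
```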
What's Changed
- feat(vscode): add Pi Agent file watcher and improve marketplace metadata by @mike1858 in #69
- fix(vscode): remove model names from description and keywords by @mike1858 in #70
- Performance improvements by @mike1858 in #71
- fix(codex): include entry count in globalHash to prevent duplicates by @mike1858 in #72
- feat: add timezone support for uploads by @mike1858 in #73
- Piebald is released! by @mike1858 in #74
- Add GPT-5.2, GPT-5.2-Pro and GPT-5-Pro support by @mike1858 in #76
- feat: add Piebald analyzer for tracking Piebald usage by @mike1858 in #75
- Encourage users to star the Splitrail repository by @mike1858 in #78
- v3.2.0 by @mike1858 in #79
Full Changelog: v3.1.1...v3.2.0
v3.1.1
GPT-5.1-Codex-Max Pricing Update
Now that OpenAI has released gpt-5.1-codex-max on the API, we've updated its pricing to match the official rates:
| Token Type | Previous (estimated) | Official |
|---|---|---|
| Input | $2.50/1M | $1.25/1M |
| Output | $20.00/1M | $10.00/1M |
| Cached | $0.25/1M | $0.125/1M |
Your historical cost calculations for this model will now be more accurate. Note that you may want to delete the affected days' data on Splitrail Cloud and then re-upload.
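As a worked example of how the table's rates translate into a cost, here is a sketch using the official gpt-5.1-codex-max prices. The assumption that cached tokens are a subset of input tokens billed at the lower rate is ours, for illustration:

```rust
// Official gpt-5.1-codex-max rates from the table above, in USD per 1M tokens.
const INPUT_PER_M: f64 = 1.25;
const OUTPUT_PER_M: f64 = 10.00;
const CACHED_PER_M: f64 = 0.125;

// Cost of one request. Cached input tokens are assumed to be a subset of
// the input tokens, billed at the cached rate instead of the full rate.
fn cost_usd(input_tokens: u64, cached_tokens: u64, output_tokens: u64) -> f64 {
    let fresh = input_tokens.saturating_sub(cached_tokens);
    (fresh as f64 * INPUT_PER_M
        + cached_tokens as f64 * CACHED_PER_M
        + output_tokens as f64 * OUTPUT_PER_M)
        / 1_000_000.0
}
```

Since every rate in the table was exactly halved, costs previously computed at the estimated rates are exactly twice the official amounts.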
What's Changed
- fix(models): update gpt-5.1-codex-max pricing to official rates by @mike1858 in #67
- v3.1.1 by @mike1858 in #68
Full Changelog: v3.1.0...v3.1.1