Releases: Swival/swival
1.0.14
- `/audit` now accepts an `--all` flag that skips Phase 2 triage and sends every file in scope straight to deep review. Useful when you have already narrowed scope to a subtree you want exhaustively reviewed and do not want triage second-guessing which files are worth a closer look. The flag is recorded with the run, so a bare `/audit --resume` picks up an `--all` run without needing the flag again.
- Server-side context overflow is now recoverable. When the local tiktoken estimate under-counts against the model's real tokenizer, the agent used to give up after the no-tools clamp was also rejected. It now progressively truncates the prompt at tighter targets (50%, 25%, 10% of the context window) and retries each one before declaring the turn lost.
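The recovery path described above can be sketched roughly as follows. All names here (`send_with_recovery`, `truncate_to`, the exception types) are illustrative assumptions, not Swival's actual API; the real implementation uses tokenizer-aware truncation rather than a character heuristic.

```python
# Sketch: retry a rejected prompt at progressively tighter token targets
# (full, then 50%, 25%, 10% of the context window) before giving up.
# All names are hypothetical stand-ins for Swival's internals.

class ContextOverflowError(Exception):
    """Server rejected the request as too large."""

class TurnLostError(Exception):
    """No truncation target fit; the turn is declared lost."""

def truncate_to(prompt: str, target_tokens: int) -> str:
    # Crude stand-in for real tokenizer-aware truncation:
    # assume roughly 4 characters per token.
    return prompt[: target_tokens * 4]

def send_with_recovery(send, prompt: str, context_window: int):
    for fraction in (1.0, 0.5, 0.25, 0.10):
        candidate = truncate_to(prompt, int(context_window * fraction))
        try:
            return send(candidate)
        except ContextOverflowError:
            continue  # rejected again; tighten the target and retry
    raise TurnLostError("prompt did not fit even at 10% of the context window")
```

The point of the descending schedule is that each retry is strictly cheaper than the last, so a single under-count by the local estimator costs at most a few extra round trips instead of the whole turn.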
1.0.13
- A goal-driven mode has been added: a structured spin on the Ralph-style "keep prompting until it's done" loop. Set an objective with `/goal <objective>` in the REPL and the agent doesn't get to declare victory and walk away after one turn. The original objective is fed back to the model after every answer, and the loop only ends when the agent itself signals the goal is complete after a real evidence-based audit, declares a blocker, or hits the optional token budget. This makes it practical to point Swival at ambitious, long-running tasks like refactors, audits, or end-to-end fixes, and let it grind for hours without giving up halfway. `/goal pause`, `/goal resume`, `/goal replace`, and `/goal clear` give you full control.
- First-run setup now writes a `[profiles.default]` block to the generated config, so the freshly created file lines up with the profile structure used everywhere else.
- The history file is automatically trimmed when it grows past its maximum capacity.
1.0.12
1.0.11
1.0.10
1.0.9
- `--logout` has been added to delete locally cached ChatGPT OAuth tokens and exit, so users can sign out without hand-deleting files under `~/.config/litellm/chatgpt/`.
- `/audit` no longer asks the LLM for JSON. Intermediate phase responses now use a simple structured-text format (`@@ name @@` blocks with `key: value` lines), which models emit far more reliably across long prompts than nested JSON.
- `/audit` phase 1 (file profiling) is now dramatically faster on large repositories. File contents are read through a single `git cat-file --batch` process instead of one subprocess per file, cutting the per-file overhead by an order of magnitude on multi-thousand-file scans.
- A `--debug` option has been added to `/audit`. When enabled, a real-time JSONL log is written to `.swival/audit/debug.jsonl` capturing every LLM request and response, parse outcomes, repair attempts, and per-phase metrics, which makes it tractable to diagnose model misbehavior on large audits.
- Another `/audit` improvement: it is now considerably more verbose during phase 3, surfacing per-file progress instead of presenting one long silent batch.
- Phase 5 audit reports no longer occasionally contain raw tool-call JSON (`{"cmd": "ls"}`) or conversational preamble like "I'll inspect the patch...".
- `/audit` prompt cache hit rates have been improved: the bug-class taxonomy and finding metadata interpolated into phase 3 system prompts have been moved into user messages so the system prefix stays static across calls, and per-phase cache statistics are now logged when `--debug` is on.
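To make the structured-text format above concrete, here is one plausible reading of it as a parser: a `@@ name @@` line opens a named block, and subsequent `key: value` lines fill it. This is a sketch of the format as described, not Swival's actual grammar or parser.

```python
# Minimal parser for the "@@ name @@" block format: each block header
# is a line like "@@ finding @@", followed by "key: value" lines.
# A plausible reconstruction, not Swival's real implementation.
import re

def parse_blocks(text: str) -> dict[str, dict[str, str]]:
    blocks: dict[str, dict[str, str]] = {}
    current: dict[str, str] | None = None
    for line in text.splitlines():
        header = re.fullmatch(r"@@\s*(\w+)\s*@@", line.strip())
        if header:
            current = blocks.setdefault(header.group(1), {})
        elif current is not None and ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    return blocks
```

The appeal over JSON is visible in the parser itself: there is no nesting and no quoting to get wrong, so a model that drops a brace or comma in a long response cannot corrupt anything beyond a single line.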
1.0.8
1.0.7
- Emergency truncation has been added as a last-resort compaction stage.
- Prompt caching now works for tool-less LLM calls such as `/audit`. Previously, cache control breakpoints were only injected when tool schemas were present.
- `/audit` Phase 2 triage now places the repository profile in the system prompt instead of repeating it in every user message, improving prompt cache hit rates and reducing costs.
- `/audit` Phase 3b finding expansions now run sequentially with per-item error handling instead of in parallel, so a single failed expansion no longer kills the entire batch.
- D language (`.d`) files are now recognized as source code by `/audit`.
- LiteLLM has been updated to add support for the Mythos provider.
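The Phase 3b change above is a classic fault-isolation pattern: process items one at a time and record failures instead of letting one exception abort the batch. A minimal sketch, with a hypothetical per-finding `expand` callable standing in for the real LLM call:

```python
# Sketch: sequential expansion with per-item error handling. A failed
# item is recorded and skipped; the rest of the batch still completes.
# `expand` is a hypothetical stand-in for Swival's per-finding LLM call.

def expand_findings(findings, expand):
    results, errors = [], []
    for finding in findings:            # sequential, not parallel
        try:
            results.append(expand(finding))
        except Exception as exc:        # isolate the failure to this item
            errors.append((finding, str(exc)))
    return results, errors
```

The trade-off is latency for robustness: a sequential pass is slower than a parallel fan-out, but no single flaky response can take the whole phase down with it.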
1.0.6
- `top_p` is no longer sent to the provider by default, letting each provider use its own default. The `--top-p` flag is still available to override it explicitly.
- A `--user-agent` option has been added to set a custom `User-Agent` header on LLM API requests. The generic and llama.cpp providers now send `Swival/<version>` by default, and OpenRouter forwards the header when set. This can also be configured via `user_agent` in config files.
- `/audit` path scoping no longer silently skips the target directory when the argument is missing a trailing slash.
- Provider-specific workarounds have been added for Kimi K2.6.
1.0.5
- When a file is too large for the LLM's context window during an audit, the audit now progressively truncates it and retries instead of failing outright.
- Audit LLM calls no longer force a fixed `temperature` and `top_p`, letting providers that reject custom sampling parameters (such as Anthropic) work without errors.