Conversation
Add an X button to the top-right corner of failed run items in the Runs sidebar, allowing users to remove failed runs that clutter the list. Wires up a new `catalog:delete-run` IPC channel to the existing `deleteRun` DB function.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Walkthrough

Adds a delete-run flow: a new localization key, an IPC handler in main, a preload whitelist entry, a renderer IPC helper, RunList delete UI for failed runs, and a DetectionsPage handler that invokes deletion and refreshes stats.
Sequence Diagram

sequenceDiagram
participant User as User
participant UI as RunList
participant Page as DetectionsPage
participant IPC as Renderer IPC
participant Main as Main Process
participant DB as Database
User->>UI: Click "Delete" on failed run
UI->>Page: ondelete(runId)
Page->>IPC: deleteRun(runId)
IPC->>Main: invoke 'catalog:delete-run' with runId
Main->>DB: deleteRun(runId)
DB-->>Main: deletion result
Main-->>IPC: resolve response
IPC-->>Page: deletion complete
Page->>Page: remove run from list / clear selection
Page->>IPC: getCatalogStats()
IPC->>Main: invoke 'catalog:get-stats'
Main->>DB: query stats
DB-->>Main: stats
Main-->>IPC: stats response
IPC-->>Page: stats updated
Page->>UI: refresh runs and stats
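The flow in the diagram can be sketched end to end in plain JavaScript. The channel name `catalog:delete-run` and the `deleteRun` name come from the PR description; the `ipcMain`/`ipcRenderer` stand-ins and the in-memory `db` stub are assumptions added only so the sketch runs outside Electron (the real app would import Electron's IPC objects).

```javascript
// Stand-ins for Electron's IPC objects (assumption: the app imports the
// real ones from 'electron').
const handlers = new Map();
const ipcMain = {
  handle: (channel, fn) => handlers.set(channel, fn),
};
const ipcRenderer = {
  invoke: (channel, ...args) => {
    const fn = handlers.get(channel);
    if (!fn) return Promise.reject(new Error(`No handler for ${channel}`));
    return Promise.resolve(fn({}, ...args));
  },
};

// Hypothetical DB layer: deleteRun returns the number of rows removed.
const db = { runs: new Map([[1, { id: 1, status: 'failed' }]]) };
function deleteRun(runId) {
  return db.runs.delete(runId) ? 1 : 0;
}

// Main process: register the handler for the new channel.
ipcMain.handle('catalog:delete-run', (_event, runId) => deleteRun(runId));

// Renderer helper (in the app this would sit behind the preload whitelist).
function deleteRunViaIpc(runId) {
  return ipcRenderer.invoke('catalog:delete-run', runId);
}

// Usage: delete the failed run, as the DetectionsPage handler would.
deleteRunViaIpc(1).then((deleted) => {
  console.log('deleted rows:', deleted); // deleted rows: 1
});
```

After the invoke resolves, the page would refresh runs and stats, matching the second half of the diagram.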
Summary of Changes

Hello @tphakala, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a user-friendly feature to manage failed analysis runs by enabling their direct deletion from the UI. It involves adding an interactive delete button to the run list, wiring up the necessary inter-process communication to trigger the backend deletion logic, and ensuring that the application's state, including the displayed runs and overall catalog statistics, is consistently updated following a deletion.
Code Review
This pull request adds a useful feature for deleting failed runs. The changes are well-structured, introducing a new IPC channel for deletion and updating the UI components accordingly. The use of a div with role='button' in RunList.svelte to accommodate the new delete button is a good accessibility improvement. My main feedback is on improving error handling in DetectionsPage.svelte to avoid silently ignoring failures, which could confuse users. Overall, a solid contribution.
} catch {
  // silently ignore — run may already be gone
}
Silently ignoring errors in this catch block can lead to a poor user experience and make debugging difficult. If deleteRun() fails for a reason other than the run already being gone (e.g., a database error), the user won't be notified and the UI won't update, which can be confusing. Furthermore, if deleteRun() succeeds but getCatalogStats() fails, the error is also swallowed, leaving the UI in an inconsistent state where the run is removed from the list but the stats are not updated.
It's recommended to at least log the error for debugging purposes. Providing user-facing feedback on failure would be an even better improvement.
} catch (error) {
console.error('Failed to delete run ' + runId + ':', error);
// TODO: Show a user-facing error message.
}
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@src/renderer/src/lib/components/RunList.svelte`:
- Around line 54-56: The keyboard handler on the div with role="button" in
RunList.svelte should prevent the default browser action for the Space key to
avoid page scrolling: inside the onkeydown handler that currently checks e.key
=== 'Enter' || e.key === ' ', call e.preventDefault() when e.key === ' ' (before
invoking onselect(run.id)); keep Enter behavior unchanged. Update the onkeydown
handler associated with the run selection logic (the onselect and run.id usage)
to only call preventDefault for the Space key.
- Around lines 54-56: The parent div's onkeydown handler calls `onselect(run.id)` and receives keydown events bubbled from the delete button, so pressing Enter or Space on the delete button also triggers selection. Fix by stopping keydown from bubbling out of the delete control (add `onkeydown={(e) => e.stopPropagation()}` to the delete `<button>` alongside the onclick that calls `ondelete(run.id)`), or guard the parent keydown handler with `e.target === e.currentTarget` so events originating from interactive children are ignored.
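Both review points can be folded into one handler. This is a minimal sketch exercised with mock events; the `onselect` callback and `run.id` come from the review comments, while `makeRunKeydownHandler` and `mockEvent` are illustrative names, not code from the PR.

```javascript
// Keydown handler for a div with role="button", extracted as a plain
// function so it can run outside Svelte.
function makeRunKeydownHandler(runId, onselect) {
  return (e) => {
    // Ignore keydown events bubbling up from interactive children such as
    // the delete button (second review comment).
    if (e.target !== e.currentTarget) return;
    if (e.key === 'Enter' || e.key === ' ') {
      // Space would otherwise scroll the page (first review comment).
      if (e.key === ' ') e.preventDefault();
      onselect(runId);
    }
  };
}

// Minimal mock event for exercising the handler without a DOM.
function mockEvent(key, { bubbled = false } = {}) {
  const node = {};
  return {
    key,
    target: bubbled ? {} : node,
    currentTarget: node,
    defaultPrevented: false,
    preventDefault() { this.defaultPrevented = true; },
  };
}

const selected = [];
const handler = makeRunKeydownHandler(42, (id) => selected.push(id));

const space = mockEvent(' ');
handler(space);                             // selects, prevents page scroll
handler(mockEvent('Enter'));                // selects, default untouched
handler(mockEvent(' ', { bubbled: true })); // ignored: came from a child

console.log(selected, space.defaultPrevented); // [ 42, 42 ] true
```

The `e.target === e.currentTarget` guard is the variant the follow-up commit adopted; the alternative is `stopPropagation()` on the delete button itself.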
- Add preventDefault() for the Space key on the role="button" div to prevent page scrolling
- Guard the parent keydown handler with e.target === e.currentTarget so keyboard events from the delete button do not bubble up to onselect
- Log errors in handleRunDelete instead of silently swallowing them

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Wrap transaction in try/finally block
- Ensures foreign keys are re-enabled even if migration fails
- Resolves Seer #3 and CodeRabbit Nitpick issue
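The try/finally pattern this commit describes can be sketched as follows. The `db` object here is a stub that just records calls so the sketch is runnable; in the app it would be the real SQLite handle, and the exact `pragma`/`transaction` method names are assumptions for illustration.

```javascript
// Disable foreign keys for a table-recreation migration and guarantee they
// are re-enabled afterwards, even when the migration throws.
function runMigration(db, migrate) {
  db.pragma('foreign_keys = OFF');
  try {
    db.transaction(migrate);
  } finally {
    // Re-enable foreign keys no matter how the migration ended.
    db.pragma('foreign_keys = ON');
  }
}

// Stub database that records the calls it receives.
const calls = [];
const db = {
  pragma: (p) => calls.push(`pragma ${p}`),
  transaction: (fn) => { calls.push('begin'); fn(); calls.push('commit'); },
};

// A failing migration still leaves foreign keys re-enabled.
try {
  runMigration(db, () => { throw new Error('migration failed'); });
} catch {
  // swallowed for the demo
}
console.log(calls[calls.length - 1]); // pragma foreign_keys = ON
```

Without the finally block, the throw would skip the re-enable call and leave the database with foreign key enforcement off for the rest of the session.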
* docs: add directory analysis feature design document. Addresses #10: comprehensive design for adding directory analysis support with durable JSON output, incremental database ingestion, and crash-resistant progress tracking.
* feat: add directory analysis event payload types
* feat: add completed_with_errors status to AnalysisRun
* feat: add database migration for completed_with_errors status
* fix: set completed_at timestamp for completed_with_errors status
* feat: add outputDir parameter to enable dual output mode
* feat: add importDetectionsFromJson function
* fix: remove unused path import from detections.ts
* fix: preserve common_name field from JSON output
* feat: implement directory analysis mode with event handling and cleanup
  - Add helper functions for temp directory management and JSON path derivation
  - Detect if the source path is a directory and create a temp output directory
  - Enable dual output mode (NDJSON streaming for directories)
  - Handle directory-specific progress events (pipeline_started, file_started, file_completed)
  - Import detections incrementally from JSON files as each file completes
  - Track failed and skipped files during directory analysis
  - Determine final status based on file processing results (completed vs completed_with_errors)
  - Clean up the temp directory in a finally block regardless of success or failure
  - Maintain backward compatibility with single-file analysis mode
* fix: address critical ESLint errors in directory analysis. Fixes four critical issues identified in code quality review:
  1. Async event handler: wrapped async operations in a void IIFE with try/catch to prevent unhandled promise rejections in the 'data' event handler; synchronous event forwarding remains outside the IIFE.
  2. Unnecessary condition: replaced the final 'else if' with 'else' for the 'failed' status case (the only remaining option).
  3. JSON.parse validation: added Zod schemas (BirdaDetectionSchema, BirdaJsonOutputSchema) to validate JSON.parse output from birda CLI JSON files, preventing runtime errors from malformed data.
  4. Status logic bug: an all-failed directory analysis was marked 'completed' instead of 'failed'; now correctly returns 'failed' when processedCount is 0.
  Also removed the unused DetectionsPayload import (still used via an inline import statement). All changes maintain backward compatibility with single-file mode; build and ESLint validation passed.
* fix: resolve critical race conditions in analysis handler. Two race conditions fixed:
  1. Async event handler race: status calculation previously read state (totalDetections, failedFiles, skippedFiles) immediately after `await handle.promise`, but async IIFEs in event handlers could still be running. All async event handler promises are now tracked in a `pendingImports` array and awaited with `Promise.allSettled()` before the final status is calculated.
  2. Analysis lock race: the gap between the null check and the assignment of `currentAnalysis` allowed concurrent IPC requests to pass through. A placeholder AnalysisHandle is now assigned immediately after the null check, locking the handler; the placeholder is replaced with the real handle once available, and the lock is released early on setup errors if still holding the placeholder.
* fix: add memory limits, input validation, and atomic operations (addresses code review tasks 3-6)
  - Cap failedFiles/skippedFiles arrays at 100 entries to prevent unbounded memory growth in directory analysis
  - Add Zod-based input validation for analysis IPC requests
  - Make database delete operations atomic with a new deleteCompletedRunsForSource() transaction
  - Extract magic constants (LEAP_YEAR_FOR_DOY, MAX_TRACKED_FILES, MAX_STDERR_LINES, JSON_READ_RETRIES, BASE_RETRY_DELAY_MS)
* style: fix prettier formatting in analysis.ts
* fix: resolve knip issues (add zod dependency, clean up exports)
* fix: disable foreign keys during migration 4 table recreation
* fix: resolve duplicate key error in LogPanel. Adds a unique `id` field to LogEntry to prevent Svelte's each_key_duplicate error during rapid logging: when multiple entries share the same millisecond timestamp, the keyed each block failed with duplicate keys. A `nextId` counter increments with each new entry, and LogPanel now keys on `entry.id` instead of `entry.timestamp`.
* fix: specify JSON format for birda directory analysis. When using --output-dir with birda, --format json must be specified to write JSON files to disk; without it, birda defaults to CSV and the GUI fails when importing detections from JSON files. --output-mode ndjson controls stdout streaming, while --format json controls the file format written to disk.
* fix: preserve file_completed events in analysis event array. When the event array exceeded MAX_EVENTS (500), the trim logic removed old events including critical file_completed events, so checkmarks disappeared from completed files in the directory analysis UI. Trimming now keeps all file_completed events and only trims progress events when the limit is reached, so checkmarks remain visible for all completed files even when processing thousands of segments.
* fix: update JSON schema to match birda's actual output format. The GUI expected a 'metadata' wrapper object, but birda's JSON has top-level fields: source_file, analysis_date, model, settings, detections, and summary. The Zod schema now matches birda's JsonResultFile struct (source_file instead of metadata.file, settings.min_confidence instead of metadata.min_confidence, plus a summary object with detection statistics), fixing the validation error 'expected object, received undefined' for metadata that caused all directory analysis imports to fail.
* fix: eliminate duplicate event payload types and correct field names. Removes duplicate type definitions from src/main/birda/types.ts, re-exports types from shared/types.ts as the canonical source, adds FileStartedPayload to shared/types.ts, and fixes analysis.ts to use total_files instead of files_total. Resolves a CodeRabbit Major issue about type duplication.
* fix: add separate counters for failed/skipped files. Adds failedFileCount and skippedFileCount variables, incremented whenever a file fails or is skipped, and uses them instead of array.length for the processedCount calculation, fixing incorrect counts when the arrays are capped at MAX_TRACKED_FILES. Resolves a CodeRabbit Major issue about processedCount.
* fix: preserve temp directory on failure for debugging. Moves the finalStatus declaration outside the try block, sets it to 'failed' in the catch block, and checks it in the finally block so the temp dir is only cleaned up when the run did not fail, logging a preservation message otherwise. Resolves a CodeRabbit Major issue about temp dir cleanup.
* fix: include completed_with_errors in findCompletedRuns. Updates the SQL query to use IN ('completed', 'completed_with_errors') so previous runs with errors are replaced on re-analysis. Resolves Seer #4 and a CodeRabbit Major issue.
* fix: add try/finally for foreign key pragma in migration 4. Wraps the transaction in try/finally so foreign keys are re-enabled even if the migration fails. Resolves Seer #3 and a CodeRabbit Nitpick issue.
* fix: add concurrency limit for JSON imports (DoS prevention). Adds a Semaphore class for limiting concurrent operations, sets MAX_CONCURRENT_IMPORTS to 10, and wraps importDetectionsFromJson with semaphore acquire/release to prevent resource exhaustion with large directories. Resolves a Gemini Critical DoS issue.
* refactor: use mkdtemp for temp directory creation. Replaces the Date.now() timestamp with fs.promises.mkdtemp for atomic unique directory creation, the more idiomatic Node.js approach. Resolves a CodeRabbit Nitpick issue.
* refactor: extract helper function for array overflow handling. Adds a trackFileWithOverflow helper and replaces three duplicated code blocks with calls to it, improving maintainability. Resolves a Gemini refactoring suggestion.
* chore: run prettier on analysis.ts
* fix: use T[] instead of Array<T> for eslint compliance
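The concurrency-limit commit above only states that a Semaphore exists and MAX_CONCURRENT_IMPORTS is 10; the class body and the `limitedImport` wrapper below are an assumed sketch of that acquire/release pattern, not the PR's actual code.

```javascript
// Counting semaphore: at most `max` holders at once; extra acquirers queue.
class Semaphore {
  constructor(max) {
    this.max = max;
    this.active = 0;
    this.waiting = [];
  }
  async acquire() {
    if (this.active < this.max) {
      this.active++;
      return;
    }
    await new Promise((resolve) => this.waiting.push(resolve));
    this.active++;
  }
  release() {
    this.active--;
    const next = this.waiting.shift();
    if (next) next();
  }
}

const MAX_CONCURRENT_IMPORTS = 10;
const semaphore = new Semaphore(MAX_CONCURRENT_IMPORTS);

// Hypothetical wrapper mirroring the acquire/release pattern around
// importDetectionsFromJson.
async function limitedImport(importFn, jsonPath) {
  await semaphore.acquire();
  try {
    return await importFn(jsonPath);
  } finally {
    semaphore.release();
  }
}

// Demo: 25 fake imports never exceed 10 in flight.
let inFlight = 0;
let peak = 0;
const fakeImport = async () => {
  inFlight++;
  peak = Math.max(peak, inFlight);
  await new Promise((r) => setTimeout(r, 5));
  inFlight--;
};
const allDone = Promise.all(
  Array.from({ length: 25 }, (_, i) => limitedImport(fakeImport, `file${i}.json`)),
);
allDone.then(() => console.log('peak concurrency:', peak)); // peak concurrency: 10
```

Releasing in a finally block ensures a failed import still frees its slot, so one bad JSON file cannot starve the queue.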
Summary
- Wires up a new `catalog:delete-run` IPC channel to the existing `deleteRun` DB function

Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit

- New Features
- Localization
- Accessibility