feat: add third-party license management and in-app viewer (#4)
Conversation
Add automated license file generation via GitHub Actions workflow that triggers on package-lock.json changes and creates a PR with updated THIRD_PARTY_LICENSES.txt. Bundle the file in packaged builds via extraResources and expose it through Help > Third Party Licenses menu item that opens a scrollable modal viewer. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
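A minimal sketch of how the main process might locate the bundled file at runtime; the helper name and the commented IPC channel are our illustration, not taken from the PR:

```typescript
import path from "node:path";

// Packaged builds place extraResources under process.resourcesPath;
// in development the file sits at the project root. The helper and
// channel names below are hypothetical.
export function resolveLicensesPath(
  isPackaged: boolean,
  resourcesPath: string,
  appRoot: string,
): string {
  const file = "THIRD_PARTY_LICENSES.txt";
  return isPackaged ? path.join(resourcesPath, file) : path.join(appRoot, file);
}

// Electron wiring (sketch only; requires the Electron runtime):
// ipcMain.handle("licenses:read", () =>
//   fs.promises.readFile(
//     resolveLicensesPath(app.isPackaged, process.resourcesPath, app.getAppPath()),
//     "utf8",
//   ),
// );
```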
Summary of Changes
This pull request introduces a comprehensive system for managing and displaying third-party software licenses within the application. It ensures that all npm dependency licenses are automatically collected, bundled with the application, and made accessible to users through a new in-app viewer, enhancing transparency and compliance.
Code Review
This pull request effectively adds a new feature for viewing third-party licenses. The implementation is well-structured, covering the main process, preload script, and renderer components. I've added a suggestion to improve error handling robustness in the renderer process. Overall, this is a great addition.
    licenseText = text;
  })
  .catch((e: unknown) => {
    error = (e as Error).message;
The error object e in a catch block is of type unknown. Directly casting it with as Error is not type-safe and can lead to runtime errors if the caught value is not an Error object. It's better to check if e is an instance of Error before accessing its properties.
error = e instanceof Error ? e.message : String(e);
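As a reusable form of the reviewer's suggestion, the narrowing can live in a small helper (the function name is ours, not from the PR):

```typescript
// Narrow an unknown caught value to a printable message. A thrown value
// is not guaranteed to be an Error (it can be a string, a number, or
// anything else), so check with instanceof before reading .message.
export function getErrorMessage(e: unknown): string {
  return e instanceof Error ? e.message : String(e);
}
```

In the catch block above this becomes `error = getErrorMessage(e);`.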
- Add generate-license-file to knip ignoreBinaries (fixes knip check)
- Use instanceof Error guard in LicenseViewer catch block for type safety

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Update SQL query to use IN ('completed', 'completed_with_errors')
- Ensures previous runs with errors are replaced on re-analysis
- Resolves Seer #4 and CodeRabbit Major issue
* docs: add directory analysis feature design document
  Addresses #10 - comprehensive design for adding directory analysis support with durable JSON output, incremental database ingestion, and crash-resistant progress tracking.
* feat: add directory analysis event payload types
* feat: add completed_with_errors status to AnalysisRun
* feat: add database migration for completed_with_errors status
* fix: set completed_at timestamp for completed_with_errors status
* feat: add outputDir parameter to enable dual output mode
* feat: add importDetectionsFromJson function
* fix: remove unused path import from detections.ts
* fix: preserve common_name field from JSON output
* feat: implement directory analysis mode with event handling and cleanup
  - Add helper functions for temp directory management and JSON path derivation
  - Detect if source path is a directory and create temp output directory
  - Enable dual output mode (NDJSON streaming for directories)
  - Handle directory-specific progress events (pipeline_started, file_started, file_completed)
  - Import detections incrementally from JSON files as each file completes
  - Track failed and skipped files during directory analysis
  - Determine final status based on file processing results (completed vs completed_with_errors)
  - Clean up temp directory in finally block regardless of success/failure
  - Maintain backward compatibility with single-file analysis mode
* fix: address critical ESLint errors in directory analysis
  Fixes 4 critical issues identified in code quality review:
  1. Async event handler - wrapped async operations in a void IIFE with try/catch to prevent unhandled promise rejections in the 'data' event handler; synchronous event forwarding remains outside the IIFE.
  2. Unnecessary condition - replaced the final 'else if' with 'else' for the 'failed' status case (only remaining option).
  3. JSON.parse validation - added Zod schemas (BirdaDetectionSchema, BirdaJsonOutputSchema) to validate JSON.parse output from birda CLI JSON files, preventing potential runtime errors from malformed data.
  4. Status logic bug - fixed a critical bug where an all-failed directory analysis was marked 'completed' instead of 'failed'; now correctly returns 'failed' when processedCount is 0.
  Also removed an unused DetectionsPayload import (still used via an inline import statement). All changes maintain backward compatibility with single-file mode. Build and ESLint validation passed successfully.
* fix: resolve critical race conditions in analysis handler
  Two critical race conditions have been fixed in the directory analysis feature:
  1. Async event handler race: status calculation previously read state (totalDetections, failedFiles, skippedFiles) immediately after `await handle.promise`, but async IIFEs in event handlers could still be running. All async event handler promises are now tracked in a `pendingImports` array and awaited with `Promise.allSettled()` before calculating the final status.
  2. Analysis lock race: the gap between the null check and the assignment of `currentAnalysis` allowed concurrent IPC requests to pass through. A placeholder AnalysisHandle is now assigned immediately after the null check, locking the handler. The placeholder is replaced with the real handle once available, and the lock is released early on setup errors if still holding the placeholder.
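The pending-imports barrier from the race-condition fix above can be sketched as follows; the function and status names follow the commit messages, but the simplified loop (in place of a real 'data' event handler) is our illustration:

```typescript
// Every async import started from an event handler is tracked, and the
// final status is computed only after all of them settle.
export async function runWithTrackedImports(
  files: string[],
  importFile: (name: string) => Promise<void>,
): Promise<"completed" | "completed_with_errors"> {
  const pendingImports: Promise<void>[] = [];
  let failedFileCount = 0;

  for (const file of files) {
    // In the real handler this runs inside a 'data' event callback.
    const importPromise = importFile(file).catch(() => {
      failedFileCount++;
    });
    pendingImports.push(importPromise);
  }

  // Without this barrier, the status could be read while imports still run.
  await Promise.allSettled(pendingImports);
  return failedFileCount === 0 ? "completed" : "completed_with_errors";
}
```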
  Changes:
  - Add `pendingImports: Promise<void>[]` array to track async operations
  - Extract async IIFE in data event handler to a named `importPromise`
  - Push each promise to the `pendingImports` array
  - Add `await Promise.allSettled(pendingImports)` before status calculation
  - Create placeholder AnalysisHandle immediately after the null check
  - Assign the placeholder to `currentAnalysis` to lock immediately
  - Wrap setup work in try/catch to release the lock on error
  - Replace the placeholder with the real handle when available
* fix: add memory limits, input validation, and atomic operations
  - Cap failedFiles/skippedFiles arrays at 100 entries to prevent unbounded memory growth in directory analysis
  - Add Zod-based input validation for analysis IPC requests
  - Make database delete operations atomic with a new deleteCompletedRunsForSource() transaction
  - Extract magic constants (LEAP_YEAR_FOR_DOY, MAX_TRACKED_FILES, MAX_STDERR_LINES, JSON_READ_RETRIES, BASE_RETRY_DELAY_MS)
  Addresses code review tasks 3-6.
* style: fix prettier formatting in analysis.ts
* fix: resolve knip issues - add zod dependency, clean up exports
* fix: disable foreign keys during migration 4 table recreation
* fix: resolve duplicate key error in LogPanel
  Add a unique ID field to LogEntry to prevent the Svelte each_key_duplicate error during rapid logging. When multiple log entries share the same timestamp (millisecond precision), the keyed each block would fail with duplicate keys.
  - Add `id: number` field to the LogEntry interface
  - Add a `nextId` counter that increments with each new entry
  - Update LogPanel to use `entry.id` as the key instead of `entry.timestamp`
  Fixes: Svelte error "each_key_duplicate" during directory analysis
* fix: specify JSON format for birda directory analysis
  When using --output-dir with birda, --format json must be specified to write JSON files to disk. Without it, birda defaults to CSV format, causing the GUI to fail when importing detections from JSON files. The --output-mode ndjson flag controls stdout streaming, while --format json controls the file format written to disk.
* fix: preserve file_completed events in analysis event array
  When the analysis event array exceeded MAX_EVENTS (500), the trim logic would remove old events, including critical file_completed events. This caused checkmarks to disappear from completed files in the directory analysis UI. The trimming logic now:
  - Keeps ALL file_completed events (critical for UI state)
  - Trims only progress events when the limit is reached
  - Maintains recent progress events for current-file feedback
  This keeps checkmarks visible for all completed files throughout the entire directory analysis, even when processing thousands of segments.
* fix: update JSON schema to match birda's actual output format
  The GUI was expecting a 'metadata' wrapper object, but birda's actual JSON format has top-level fields: source_file, analysis_date, model, settings, detections, and summary. Updated the Zod schema to match birda's JsonResultFile struct:
  - source_file (not metadata.file)
  - settings.min_confidence (not metadata.min_confidence)
  - summary object with detection statistics
  This fixes the validation error that was causing all directory analysis imports to fail with 'expected object, received undefined' for metadata.
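The trimming rule described above can be sketched with a simplified event shape and the MAX_EVENTS limit of 500 from the commit message; the real implementation's types and event ordering may differ:

```typescript
// file_completed events survive trimming; progress events are cut to
// fit the remaining budget. Event shape is simplified for illustration.
interface AnalysisEvent {
  type: string;
}

export const MAX_EVENTS = 500; // limit named in the commit message

export function trimEvents(events: AnalysisEvent[]): AnalysisEvent[] {
  if (events.length <= MAX_EVENTS) return events;
  const completed = events.filter((e) => e.type === "file_completed");
  const progress = events.filter((e) => e.type !== "file_completed");
  // Keep only the most recent progress events within the budget.
  // (Guard against slice(-0), which would return the whole array.)
  const budget = MAX_EVENTS - completed.length;
  return budget > 0 ? [...completed, ...progress.slice(-budget)] : completed;
}
```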
* fix: eliminate duplicate event payload types and correct field names
  - Remove duplicate type definitions from src/main/birda/types.ts
  - Re-export types from shared/types.ts as the canonical source
  - Add FileStartedPayload to shared/types.ts
  - Fix analysis.ts to use total_files instead of files_total
  - Resolves CodeRabbit Major issue about type duplication
* fix: add separate counters for failed/skipped files
  - Add failedFileCount and skippedFileCount numeric variables
  - Increment counters whenever a file fails or is skipped
  - Use counters instead of array.length for the processedCount calculation
  - Fixes incorrect counts when arrays are capped at MAX_TRACKED_FILES
  - Resolves CodeRabbit Major issue about processedCount
* fix: preserve temp directory on failure for debugging
  - Move finalStatus declaration outside the try block
  - Set finalStatus to 'failed' in the catch block
  - Check finalStatus in the finally block
  - Only clean up the temp dir if the run did not fail
  - Log a preservation message when keeping the temp dir
  - Resolves CodeRabbit Major issue about temp dir cleanup
* fix: include completed_with_errors in findCompletedRuns
  - Update SQL query to use IN ('completed', 'completed_with_errors')
  - Ensures previous runs with errors are replaced on re-analysis
  - Resolves Seer #4 and CodeRabbit Major issue
* fix: add try/finally for foreign key pragma in migration 4
  - Wrap the transaction in a try/finally block
  - Ensures foreign keys are re-enabled even if the migration fails
  - Resolves Seer #3 and CodeRabbit Nitpick issue
* fix: add concurrency limit for JSON imports (DoS prevention)
  - Add Semaphore class for limiting concurrent operations
  - Set MAX_CONCURRENT_IMPORTS to 10
  - Wrap importDetectionsFromJson with semaphore acquire/release
  - Prevents resource exhaustion with large directories
  - Resolves Gemini Critical DoS issue
* refactor: use mkdtemp for temp directory creation
  - Replace Date.now() timestamp with fs.promises.mkdtemp
  - Provides atomic unique directory creation
  - More idiomatic Node.js approach
  - Resolves CodeRabbit Nitpick issue
* refactor: extract helper function for array overflow handling
  - Add trackFileWithOverflow helper function
  - Replace three duplicated code blocks with helper calls
  - Improves code maintainability and readability
  - Resolves Gemini suggestion for refactoring
* chore: run prettier on analysis.ts
* fix: use T[] instead of Array<T> for eslint compliance
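The Semaphore mentioned in the DoS-prevention commit could look like this minimal promise-based sketch; this is our illustration of the technique, not the PR's actual class:

```typescript
// Minimal async semaphore: at most `max` holders at a time. release()
// hands the permit directly to the oldest waiter, if any.
export class Semaphore {
  private waiters: (() => void)[] = [];
  private permits: number;

  constructor(max: number) {
    this.permits = max;
  }

  async acquire(): Promise<void> {
    if (this.permits > 0) {
      this.permits--;
      return;
    }
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) next();
    else this.permits++;
  }
}

export const MAX_CONCURRENT_IMPORTS = 10; // limit named in the commit message

// Usage sketch around the import function named in the commit:
// const sem = new Semaphore(MAX_CONCURRENT_IMPORTS);
// await sem.acquire();
// try { await importDetectionsFromJson(file); } finally { sem.release(); }
```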
Summary
- THIRD_PARTY_LICENSES.txt generated from all npm dependencies via generate-license-file
- GitHub Actions workflow (.github/workflows/licenses.yml) that auto-regenerates the file when package-lock.json changes and opens a PR
- Bundled via extraResources in electron-builder config
Test plan
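The extraResources bundling mentioned above can be expressed as a small electron-builder config fragment; this is a sketch, and the project may declare it under the "build" key of package.json rather than a standalone electron-builder.yml:

```yaml
# electron-builder.yml (sketch)
extraResources:
  - from: THIRD_PARTY_LICENSES.txt
    to: THIRD_PARTY_LICENSES.txt
```

At runtime, packaged builds can then read the file from process.resourcesPath.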
- npm run dev and click Help > Third Party Licenses — modal opens with license text
- npm run build && npx electron-builder --dir and confirm THIRD_PARTY_LICENSES.txt exists in release/*/resources/
- npm run validate passes (format, lint, typecheck, audit)

🤖 Generated with Claude Code