Conversation
Review the following changes in direct dependencies.
@coderabbitai review

✅ Actions performed: Review triggered.
📝 Walkthrough

This PR performs a comprehensive restructuring of a CSP auditor plugin. The changes include removing legacy configuration files, completely rewriting the backend analysis engine with a modular check-based architecture, restructuring the frontend from a monolithic component into multiple modular pages, introducing a new shared package for common types, and updating dependencies and tooling. Version bumped to 2.0.0.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (24)
**.gitignore** (lines 1-6)

⚠️ Potential issue | 🟠 Major — Add `.env` patterns to prevent accidental secret commits.

The `.gitignore` file is missing patterns for environment variable files (`.env`, `.env.local`, `.env.*`). Node.js/TypeScript projects commonly use these files to store secrets, API keys, and configuration. Without these patterns, developers could accidentally commit sensitive credentials.

🛡️ Proposed fix to add environment file patterns:
```diff
 node_modules
 dist
 .DS_Store
 plugin_package.zip
 coverage
 *.tsbuildinfo
+.env
+.env.*
+!.env.example
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `.gitignore` around lines 1-6, add gitignore patterns to exclude environment files so secrets aren't committed: update the `.gitignore` to include entries for `.env`, `.env.local`, and `.env.*` (or a single pattern like `.env*`) to cover all env variants; place these new patterns alongside the existing entries (`node_modules`, `dist`, `coverage`, `*.tsbuildinfo`) so Git ignores any environment/config files committed by developers.

**packages/backend/src/data/userContentHosts.ts** (line 10)
⚠️ Potential issue | 🟠 Major — Add wildcard for Googleusercontent subdomains.

Line 10 only matches the apex domain. Most user-content traffic is on `*.googleusercontent.com`, so this likely under-detects risky hosts.

Targeted fix:
```diff
   "googleusercontent.com",
+  "*.googleusercontent.com",
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/backend/src/data/userContentHosts.ts` at line 10, the list entry "googleusercontent.com" only matches the apex domain; update the host list (the exported array/constant that currently includes "googleusercontent.com") to also include a wildcard for subdomains (e.g., "*.googleusercontent.com" or a leading-dot variant, depending on the matching logic used by your host matcher) so that all *.googleusercontent.com hosts are detected; keep the existing apex entry if your matching logic treats wildcards as non-inclusive of the apex.

**.github/workflows/release.yml** (lines 26-35)
⚠️ Potential issue | 🟠 Major — Pin GitHub Actions to commit SHAs in the release workflow.

Using floating major tags in a privileged release job is a supply-chain risk. Pin `actions/checkout@v4`, `actions/setup-node@v4`, `pnpm/action-setup@v4`, and `caido/action-release@v1` to immutable SHAs.
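A sketch of the pinned form — the SHAs below are placeholders, not real revisions; resolve each tag to its current commit before committing:

```yaml
steps:
  # Placeholder SHAs: resolve the real ones first, e.g.
  #   git ls-remote https://github.com/actions/checkout refs/tags/v4
  - uses: actions/checkout@<full-commit-sha>      # v4
  - uses: actions/setup-node@<full-commit-sha>    # v4
  - uses: pnpm/action-setup@<full-commit-sha>     # v4
  - uses: caido/action-release@<full-commit-sha>  # v1
```

Keeping the tag in a trailing comment preserves readability while the SHA guarantees the exact revision.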
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `.github/workflows/release.yml` around lines 26-35, replace the floating major tags by pinning each action to its immutable commit SHA: update the `uses` entries for actions/checkout@v4, actions/setup-node@v4, pnpm/action-setup@v4, and caido/action-release@v1 to the corresponding full commit SHA (e.g., actions/checkout@<sha>) so the release job uses exact revisions, and test the workflow after updating.

**packages/backend/src/data/bypassRecords.ts** (line 42)
⚠️ Potential issue | 🟠 Major — Scrub credential-like values from these payloads.

Lines 42 and 918 embed concrete token-like query values (`sign`, `appKey`, `token`). Even if these are only sample exploits, checking them in will keep tripping secret scanners and could leak third-party credentials if any are still valid. Replace the concrete values with inert placeholders before release.

Also applies to: line 918
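One possible sanitization step — the helper name, parameter list, and placeholder format here are assumptions for illustration, not the plugin's actual code:

```typescript
// Hypothetical helper: replace credential-like query parameter values in a
// payload URL with an inert placeholder before the record is committed.
const SENSITIVE_PARAMS = ["sign", "appKey", "token", "key", "secret"];

function redactPayload(code: string): string {
  // Matches "?param=value" or "&param=value" and keeps the param name,
  // replacing only the value portion.
  return code.replace(
    new RegExp(`([?&](?:${SENSITIVE_PARAMS.join("|")})=)[^&\\s"']+`, "gi"),
    "$1{REDACTED}",
  );
}
```

Running records through a step like this during data assembly keeps future additions from re-introducing live-looking tokens.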
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/backend/src/data/bypassRecords.ts` at line 42, the embedded payload in the `code` field contains credential-like query parameters (e.g., `sign`, `appKey`, `token`) that should be redacted; update the `code` value in bypassRecords (and the similar occurrence later) to replace concrete values with inert placeholders (e.g., "{SIGN}", "{APP_KEY}", "{TOKEN}") or strip those query params before committing, and add a small normalization/sanitization step where `code` is assembled or stored (using a regex or URL parsing in the module that sets the `code` field) to ensure any future instances of `sign`, `appKey`, `token`, etc. are replaced with placeholders.

**packages/frontend/src/components/Dashboard/AnalysesTable/VulnerabilityCard.vue** (lines 11-14)
⚠️ Potential issue | 🟠 Major — Use a semantic interactive element for accessibility.

The clickable container is a `div`, so keyboard users can't reliably activate it. Please switch to a `button` (or add full ARIA + keyboard support).

♿ Suggested fix:
```diff
-  <div
-    class="border border-surface-700 rounded p-3 cursor-pointer hover:border-surface-500 transition-colors"
-    @click="emit('click', props.finding)"
-  >
+  <button
+    type="button"
+    class="border border-surface-700 rounded p-3 cursor-pointer hover:border-surface-500 transition-colors w-full text-left bg-transparent focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2"
+    @click="emit('click', props.finding)"
+  >
@@
-  </div>
+  </button>
```

Also applies to: line 35
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/frontend/src/components/Dashboard/AnalysesTable/VulnerabilityCard.vue` around lines 11-14, replace the non-semantic clickable div with a semantic interactive element (e.g., a <button>) or add full ARIA + keyboard support so keyboard users can activate it; specifically, change the element that currently has class "border border-surface-700 rounded p-3 cursor-pointer hover:border-surface-500 transition-colors" and the @click="emit('click', props.finding)" handler to a button (or ensure role="button", tabindex="0", and keydown/keypress handlers that call the same emit('click', props.finding)), keep the existing classes and the emit call, and apply an appropriate aria-label if the visual context isn't descriptive.

**packages/frontend/src/composables/useClipboard.ts** (lines 6-10)
⚠️ Potential issue | 🟠 Major — Handle clipboard failures to prevent unhandled promise rejections in event handlers.

`navigator.clipboard.writeText` can fail (permission denied, insecure context), and since all callers use fire-and-forget patterns directly in event bindings without error handling, failures will result in unhandled promise rejections. The composable should wrap this call in a try/catch and display user feedback on failure.

🛠️ Suggested hardening:
```diff
 const copyToClipboard = async (text: string, label?: string) => {
-  await navigator.clipboard.writeText(text);
-  sdk.window.showToast(label ?? "Copied to clipboard", {
-    variant: "success",
-  });
+  try {
+    if (!navigator.clipboard?.writeText) {
+      throw new Error("Clipboard API unavailable");
+    }
+    await navigator.clipboard.writeText(text);
+    sdk.window.showToast(label ?? "Copied to clipboard", {
+      variant: "success",
+    });
+  } catch {
+    sdk.window.showToast("Failed to copy to clipboard");
+  }
 };
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/frontend/src/composables/useClipboard.ts` around lines 6-10, copyToClipboard currently calls navigator.clipboard.writeText without error handling, which can cause unhandled promise rejections when used fire-and-forget; wrap the await navigator.clipboard.writeText(text) in a try/catch inside copyToClipboard (and optionally return a boolean success flag) so failures are caught, and on error call sdk.window.showToast with a clear failure message (e.g., "Failed to copy" with a danger/error variant) and log or surface the error; ensure copyToClipboard still resolves normally so event handlers don't get unhandled rejections.

**packages/frontend/src/views/Database.vue** (lines 31-35)
⚠️ Potential issue | 🟠 Major — Add an accessible name to the search input.

The search field currently relies on placeholder text; it should expose an explicit accessible name for assistive-technology users.

♿ Suggested fix:
```diff
 <InputText
   v-model="searchQuery"
+  aria-label="Search CSP bypass techniques"
   placeholder="Search..."
   class="w-64"
 />
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/frontend/src/views/Database.vue` around lines 31-35, the search input (the InputText component bound to v-model="searchQuery") lacks an accessible name; add one by either providing an explicit aria-label (e.g., aria-label="Search") on the InputText or by adding a visible or visually hidden <label> tied to the input via id/for attributes so assistive technologies can announce the field; ensure the chosen approach updates the InputText props (or the underlying input id) so the label/aria attribute is correctly associated with the component.

**packages/backend/README.md** (lines 35-63)
⚠️ Potential issue | 🟠 Major — API reference signatures should include the async return type.

These RPC methods are documented and used as async calls, so their signatures should be `Promise<Result<...>>` to avoid integration mistakes.

Doc signature fix:
```diff
-getAllAnalyses(): Result<AnalysisResult[]>
-getAnalysis(requestId: string): Result<AnalysisResult | undefined>
-getSummary(): Result<AnalysisSummary>
-clearCache(): Result<void>
+getAllAnalyses(): Promise<Result<AnalysisResult[]>>
+getAnalysis(requestId: string): Promise<Result<AnalysisResult | undefined>>
+getSummary(): Promise<Result<AnalysisSummary>>
+clearCache(): Promise<Result<void>>
 ...
-exportFindings(format: "json" | "csv"): Result<string>
+exportFindings(format: "json" | "csv"): Promise<Result<string>>
 ...
-getBypassRecords(): Result<BypassRecord[]>
+getBypassRecords(): Promise<Result<BypassRecord[]>>
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/backend/README.md` around lines 35-63, update the documented RPC method signatures to include async return types by wrapping each Result<T> in Promise<Result<T>>; e.g., change getAllAnalyses(): Result<AnalysisResult[]> to getAllAnalyses(): Promise<Result<AnalysisResult[]>>, and do the same for getAnalysis, getSummary, clearCache, getScopeEnabled, setScopeEnabled, getFindingsEnabled, setFindingsEnabled, getCheckSettings, setCheckSettings, updateSingleCheck, exportFindings, and getBypassRecords so the README reflects the actual async RPC return types.

**packages/frontend/src/components/Configuration/Container.vue** (lines 170-192)
⚠️ Potential issue | 🟠 Major — Use `@click` instead of `@mousedown` for action buttons.

`@mousedown` breaks expected keyboard activation behavior and hurts accessibility for core configuration actions.

💡 Suggested fix:
```diff
-  @mousedown="settingsService.setAllChecks(true)"
+  @click="settingsService.setAllChecks(true)"
@@
-  @mousedown="settingsService.setRecommendedMode()"
+  @click="settingsService.setRecommendedMode()"
@@
-  @mousedown="settingsService.setLightMode()"
+  @click="settingsService.setLightMode()"
@@
-  @mousedown="settingsService.setAllChecks(false)"
+  @click="settingsService.setAllChecks(false)"
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/frontend/src/components/Configuration/Container.vue` around lines 170-192, replace the four Button event bindings that currently use @mousedown with @click to restore proper keyboard activation and expected button behavior; update the Button elements that call settingsService.setAllChecks(true), settingsService.setRecommendedMode(), settingsService.setLightMode(), and settingsService.setAllChecks(false) to use @click instead of @mousedown.

**packages/frontend/src/composables/useExportDownload.ts** (lines 20-37)
⚠️ Potential issue | 🟠 Major — Catch thrown export errors, not just `Err` results.

If `exportFindings` rejects or throws, the current flow skips user feedback and can surface unhandled promise rejections.

💡 Suggested fix:
```diff
 export function useExportDownload() {
   const sdk = useSDK();

+  const download = async (
+    format: "json" | "csv",
+    filename: string,
+    mimeType: string,
+    successMessage: string,
+  ) => {
+    try {
+      const result = await sdk.backend.exportFindings(format);
+      if (result.kind === "Ok") {
+        triggerDownload(result.value, filename, mimeType);
+        sdk.window.showToast(successMessage, { variant: "success" });
+      } else {
+        sdk.window.showToast("Export failed", { variant: "error" });
+      }
+    } catch {
+      sdk.window.showToast("Export failed", { variant: "error" });
+    }
+  };
+
   const downloadAsJson = async () => {
-    const result = await sdk.backend.exportFindings("json");
-    if (result.kind === "Ok") {
-      triggerDownload(result.value, "csp-findings.json", "application/json");
-      sdk.window.showToast("Exported as JSON", { variant: "success" });
-    } else {
-      sdk.window.showToast("Export failed", { variant: "error" });
-    }
+    await download("json", "csp-findings.json", "application/json", "Exported as JSON");
   };

   const downloadAsCsv = async () => {
-    const result = await sdk.backend.exportFindings("csv");
-    if (result.kind === "Ok") {
-      triggerDownload(result.value, "csp-findings.csv", "text/csv");
-      sdk.window.showToast("Exported as CSV", { variant: "success" });
-    } else {
-      sdk.window.showToast("Export failed", { variant: "error" });
-    }
+    await download("csv", "csp-findings.csv", "text/csv", "Exported as CSV");
   };
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/frontend/src/composables/useExportDownload.ts` around lines 20-37, wrap the async calls to sdk.backend.exportFindings in try/catch inside both downloadAsJson and downloadAsCsv so thrown/rejected promises are handled; keep the existing result.kind === "Ok" check to call triggerDownload and show success toasts, but in the catch block call sdk.window.showToast with an error variant (include the caught error message for clarity) to avoid unhandled promise rejections and provide user feedback on thrown errors.

**packages/backend/src/engine/checks/modernThreats.ts** (lines 35-45)
⚠️ Potential issue | 🟠 Major — Normalize host values before matching to avoid false negatives.

Line 37 compares raw source values directly. Values with a scheme, path, or port can bypass detection (e.g. `https://cdn.jsdelivr.net/npm/...`), so risky hosts may be missed.

Proposed fix:
```diff
 import type { CheckId, ParsedPolicy, PolicyFinding } from "shared";
 import { AI_ML_HOSTS, WEB3_HOSTS } from "../../data";
+import { stripDomainPrefix } from "../../utils/domainMatching";
 import { emitFinding, isCheckEnabled } from "../../utils/findings";
@@
     for (const value of directive.values) {
+      const normalizedValue = stripDomainPrefix(value)
+        .replace(/^\*\./, "")
+        .replace(/:\d+$/, "")
+        .toLowerCase();
       for (const host of config.hosts) {
-        if (value === host || value.endsWith(`.${host}`)) {
+        const normalizedHost = host.toLowerCase();
+        if (
+          normalizedValue === normalizedHost ||
+          normalizedValue.endsWith(`.${normalizedHost}`)
+        ) {
           findings.push(
             emitFinding(
               config.checkId,
               directive.name,
               value,
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/backend/src/engine/checks/modernThreats.ts` around lines 35-45, normalize each directive value to a hostname before matching against config.hosts: for every item in directive.values, derive a candidateHost by parsing the value (use the URL constructor for values with a scheme, fall back to stripping the port and path for plain host:port or path-containing strings, and lowercase the result); then use that candidateHost in the existing comparisons (candidateHost === host || candidateHost.endsWith(`.${host}`)) and pass the original value into emitFinding (preserve emitFinding(config.checkId, directive.name, value, ...)). Update the loop that iterates directive.values to compute candidateHost once and use it for matching so entries like "https://cdn.jsdelivr.net/..." or "example.com:8080/path" are detected.

**packages/frontend/src/components/VulnerabilityModal/Container.vue** (lines 100-107)
⚠️ Potential issue | 🟠 Major — Use `@click` and add an accessible label for the copy button.

Line 106 uses `@mousedown`, which is not keyboard-friendly for activation. Icon-only buttons require an explicit accessible name via `aria-label`. This creates a WCAG 2.1 Level A violation and is inconsistent with `BypassEntry.vue`, which correctly uses `@click`.

Proposed fix:
```diff
 <Button
   icon="fas fa-copy"
   size="small"
   severity="secondary"
   text
+  aria-label="Copy bypass payload"
   class="!w-6 !h-6 shrink-0"
-  @mousedown="copyToClipboard(bypass.payload, 'Bypass copied')"
+  @click="copyToClipboard(bypass.payload, 'Bypass copied')"
 />
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/frontend/src/components/VulnerabilityModal/Container.vue` around lines 100-107, the copy button in the template uses @mousedown, which prevents keyboard activation, and lacks an accessible name; replace the @mousedown handler with @click on the <Button> that calls copyToClipboard(bypass.payload, 'Bypass copied') and add an explicit accessible label (e.g., aria-label="Copy bypass") so it matches the accessible behavior in BypassEntry.vue and supports keyboard users and screen readers.

**packages/frontend/src/components/Dashboard/Container.vue** (lines 92-113)
⚠️ Potential issue | 🟠 Major — Use `@click` + accessible labels for action buttons.

On lines 98 and 112, `@mousedown` bypasses normal keyboard activation semantics; icon-only buttons also need accessible names.

Proposed fix:
```diff
 <Button
   icon="fas fa-sync"
+  aria-label="Refresh analyses"
   severity="secondary"
   outlined
   size="small"
   :loading="refreshing"
-  @mousedown="onRefresh"
+  @click="onRefresh"
 />
@@
 <Button
   icon="fas fa-trash"
+  aria-label="Clear analyses cache"
   severity="danger"
   outlined
   size="small"
-  @mousedown="analysesService.clearCache()"
+  @click="analysesService.clearCache()"
 />
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/frontend/src/components/Dashboard/Container.vue` around lines 92-113, replace the non-standard @mousedown handlers with @click to preserve keyboard activation semantics: change the Button that calls onRefresh to use @click="onRefresh" and change the trash Button to @click="analysesService.clearCache()". Also add accessible names to the icon-only buttons via aria-label (or a visible label prop if the Button component supports it) and/or title attributes (e.g., aria-label="Refresh" for the sync button and aria-label="Clear cache" for the trash button) so screen readers can announce the actions.

**packages/frontend/src/components/Dashboard/Container.vue** (lines 33-49)
⚠️ Potential issue | 🟠 Major — `summary` currently returns incomplete analytics data.

On lines 47-48, `checkIdCounts` is always `{}` and `lastAnalyzedAt` is always `undefined`, so consumers never receive real values.

Proposed fix:
```diff
 const summary = computed(() => {
   const a = analyses.value;
   const allFindings = a.flatMap((x) => x.findings);
   const severityCounts: Record<SeverityLevel, number> = {
     high: 0,
     medium: 0,
     low: 0,
     info: 0,
   };
-  for (const f of allFindings) severityCounts[f.severity]++;
+  const checkIdCounts: Record<string, number> = {};
+  for (const f of allFindings) {
+    severityCounts[f.severity]++;
+    checkIdCounts[f.checkId] = (checkIdCounts[f.checkId] ?? 0) + 1;
+  }
+  const lastAnalyzedAt = a.reduce<Date | undefined>((latest, item) => {
+    if (!latest || item.analyzedAt > latest) return item.analyzedAt;
+    return latest;
+  }, undefined);
   return {
     totalAnalyses: a.length,
     totalFindings: allFindings.length,
     severityCounts,
-    checkIdCounts: {} as Record<string, number>,
-    lastAnalyzedAt: undefined as Date | undefined,
+    checkIdCounts,
+    lastAnalyzedAt,
   };
 });
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/frontend/src/components/Dashboard/Container.vue` around lines 33-49, the computed summary currently leaves checkIdCounts empty and lastAnalyzedAt undefined; update the summary computed to build checkIdCounts by iterating allFindings and incrementing counts keyed by each finding.checkId (typed as Record<string, number>), and compute lastAnalyzedAt by taking the latest Date from analyses (e.g., reduce over analyses to find the max analyzedAt), then return those real values instead of {} and undefined so consumers receive actual counts and the most recent analysis date.

**packages/frontend/src/composables/useBypassData.ts** (lines 14-23)
⚠️ Potential issue | 🟠 Major — Handle failure paths in `loadRecords` to avoid silent stale state.

On lines 17-20, non-`Ok` responses and thrown errors do not update state beyond `loading`, so stale records can persist with no error signal.

Proposed fix:
```diff
 export const useBypassData = defineStore("bypass-data", () => {
   const sdk = useSDK();
   const records = ref<BypassRecord[]>([]);
   const loading = ref(false);
+  const error = ref<string | null>(null);
   const searchQuery = ref("");

   const loadRecords = async () => {
     loading.value = true;
+    error.value = null;
     try {
       const result = await sdk.backend.getBypassRecords();
       if (result.kind === "Ok") {
         records.value = result.value;
+      } else {
+        records.value = [];
+        error.value = String(result.error ?? "Failed to load bypass records");
       }
+    } catch (e) {
+      records.value = [];
+      error.value =
+        e instanceof Error ? e.message : "Failed to load bypass records";
     } finally {
       loading.value = false;
     }
   };
@@
   return {
     records,
     loading,
+    error,
     searchQuery,
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/frontend/src/composables/useBypassData.ts` around lines 14-23, loadRecords currently leaves records.value unchanged on non-Ok responses or thrown errors, causing stale data; update loadRecords to handle failure paths by adding a catch that clears or resets records.value (e.g., set to []) and optionally sets an error flag or rethrows so callers know the call failed; keep the existing finally block that sets loading.value = false and ensure both non-Ok results and exceptions go through the same failure-handling logic.

**packages/frontend/src/stores/settings.ts** (lines 51-109)
⚠️ Potential issue | 🟠 Major — Surface backend write failures instead of ignoring them.

All of the mutating actions drop the `Error` case. If persistence fails, the UI gets no signal to notify the user or retry, so settings changes can fail silently. Return a status/result from these actions or show a toast consistently on failure.
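A sketch of the pattern — the `backend`, `notify`, and state shapes below are simplified assumptions, not the plugin's actual API; the point is to apply local state only after the write succeeds and to surface the `Error` branch:

```typescript
// Minimal Result type mirroring the Ok/Error discriminated union used in the PR.
type Result<T> = { kind: "Ok"; value: T } | { kind: "Error"; error: string };

// Hypothetical mutating action: returns whether the write was persisted.
async function updateScope(
  backend: { setScopeEnabled(v: boolean): Promise<Result<void>> },
  notify: (msg: string) => void,
  state: { scopeEnabled: boolean },
  enabled: boolean,
): Promise<boolean> {
  const result = await backend.setScopeEnabled(enabled);
  if (result.kind === "Ok") {
    // Only mutate local state once persistence is confirmed.
    state.scopeEnabled = enabled;
    return true;
  }
  // Surface the failure so the UI can toast and offer a retry.
  notify(`Failed to save scope setting: ${result.error}`);
  return false;
}
```

Returning the boolean (or the `Result` itself) lets callers decide between retrying and rolling back optimistic UI.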
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/frontend/src/stores/settings.ts` around lines 51-109, the mutating store methods (updateScope, updateFindings, updateSingleCheck, setAllChecks, setRecommendedMode, setLightMode) swallow the Error branch from sdk.backend calls and thus allow silent failures; change each to return the backend result (or rethrow the error) instead of ignoring it, and only apply local state updates when result.kind === "Ok"; in the Error branch emit/return the error (or call a shared notify/toast helper with the error message) so the caller/UI can surface retries/notifications, and include error details in the log/toast for debugging.

**packages/backend/src/api/settings.ts** (lines 42-56)
⚠️ Potential issue | 🟠 Major — Reject unknown check IDs at the RPC boundary.

`settings` and `checkId` are accepted as arbitrary strings here and always return success. An out-of-date or malformed client can send unknown keys, get `ok(undefined)`, and believe the change persisted when it did not. Validate against the supported check IDs and return an error for invalid input instead of unconditionally acknowledging it.
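A sketch of the boundary validation — the example check IDs, helper names, and Result constructors here are assumptions for illustration, not the plugin's actual definitions:

```typescript
// Minimal Result helpers mirroring the ok/err pattern described above.
type Result<T> = { kind: "Ok"; value: T } | { kind: "Error"; error: string };
const ok = <T>(value: T): Result<T> => ({ kind: "Ok", value });
const err = <T>(error: string): Result<T> => ({ kind: "Error", error });

// Hypothetical canonical ID set; in the real code this would come from the
// shared check registry.
const SUPPORTED_CHECK_IDS = new Set(["unsafe-inline", "unsafe-eval", "wildcard-source"]);

// Validate an incoming settings map before persisting anything.
function validateCheckSettings(settings: Record<string, boolean>): Result<void> {
  for (const id of Object.keys(settings)) {
    if (!SUPPORTED_CHECK_IDS.has(id)) {
      return err(`unknown check id: ${id}`);
    }
  }
  return ok(undefined);
}
```

The RPC handler would call this first and only invoke the underlying setter when the result is `Ok`.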
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/backend/src/api/settings.ts` around lines 42-56, both RPC handlers currently accept arbitrary strings and always return ok(undefined); change apiSetCheckSettings and apiUpdateSingleCheck to validate inputs against the canonical set of supported check IDs (use whatever list/function you have, like SUPPORTED_CHECK_IDS or getSupportedChecks()) before calling setCheckSettings/updateSingleCheck, and return an error Result when unknown IDs are supplied. For apiSetCheckSettings, iterate the keys of the settings map and if any key is not in the supported list return err(new Error("unknown check id: <id>")) (or a suitable typed error) instead of calling setCheckSettings; for apiUpdateSingleCheck, check that checkId is supported before calling updateSingleCheck and return err for invalid ids. Ensure you still call the underlying setters only when validation passes and return ok(undefined) on success.

**packages/backend/src/utils/domainMatching.ts** (lines 15-19)
⚠️ Potential issue | 🟠 Major — Normalize ports, queries, and fragments before matching.

`stripDomainPrefix()` only removes the scheme and a trailing `/...` path. Inputs like `https://cdn.example.com:8443/app.js` or `example.com?cache=1` normalize to `cdn.example.com:8443` and `example.com?cache=1`, so both `isUserContentHost()` and `isVulnerableJsHost()` can miss valid matches. Use URL parsing here, or strip `:port`, `?`, and `#` as well.

Suggested fix:
```diff
 export function stripDomainPrefix(domain: string): string {
-  return domain
-    .toLowerCase()
-    .replace(/^https?:\/\//, "")
-    .replace(/\/.*$/, "");
+  const normalized = domain.trim().toLowerCase();
+
+  try {
+    const parsed = new URL(
+      /^[a-z][a-z0-9+.-]*:\/\//.test(normalized)
+        ? normalized
+        : `https://${normalized}`,
+    );
+    return parsed.hostname;
+  } catch {
+    return normalized
+      .replace(/^https?:\/\//, "")
+      .replace(/[/?#].*$/, "")
+      .replace(/:\d+$/, "");
+  }
 }
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/backend/src/utils/domainMatching.ts` around lines 15-19, stripDomainPrefix currently only removes the scheme and trailing path, leaving ports, query strings, and fragments, which causes isUserContentHost and isVulnerableJsHost to miss matches; update stripDomainPrefix to fully normalize by parsing the input with the URL constructor (or equivalent) and returning only the hostname (lowercased) without port, query, or fragment, falling back to manual stripping for bare host inputs; ensure callers (isUserContentHost/isVulnerableJsHost) continue to receive the normalized hostname.

**packages/frontend/src/stores/settings.ts** (lines 25-30)
⚠️ Potential issue | 🟠 Major — Don't swallow initialization failures.

This `catch()` resets `initPromise` but also converts a rejected `loadAll()` into a resolved `initialize()`. Callers then keep rendering defaults as if settings were loaded successfully. Reset the cache and rethrow so initialization failure is observable.

Suggested fix:
```diff
 async function initialize() {
   if (initPromise !== undefined) return initPromise;
-  initPromise = loadAll().catch(() => {
+  initPromise = loadAll().catch((error) => {
     initPromise = undefined;
+    throw error;
   });
-  await initPromise;
+  return initPromise;
 }
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/frontend/src/stores/settings.ts` around lines 25-30, the initialize function currently swallows loadAll() failures by resetting initPromise in the catch and not rethrowing; change the catch handler on initPromise = loadAll().catch(...) to reset initPromise = undefined and then rethrow the caught error so callers see the failure. Update the promise handling in initialize to ensure the cache is cleared on failure but the error is propagated.

**packages/backend/src/engine/parser.ts** (lines 86-102)
⚠️ Potential issue | 🟠 Major — Ignore repeated directives instead of overwriting them.

Duplicate CSP directives are invalid, and the first occurrence wins. Unconditionally calling `Map.set()` here makes the last duplicate take effect, which changes the parsed policy and can produce incorrect findings for real headers.

Suggested fix:
```diff
 for (const raw of rawDirectives) {
   const parts = raw.split(/\s+/).filter((p) => p !== "");
   if (parts.length === 0) continue;

   const name = parts[0]?.toLowerCase();
   if (name === undefined || name.trim() === "") continue;
+  if (policy.directives.has(name)) continue;

   const values = parts.slice(1);
   const directive: PolicyDirective = {
     name,
     values,
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/backend/src/engine/parser.ts` around lines 86-102, the loop that parses rawDirectives currently overwrites duplicate directives by unconditionally calling policy.directives.set(name, directive); change it to ignore duplicates so the first occurrence wins: before setting, check policy.directives.has(name) and skip creating/setting a new PolicyDirective for that name if present; keep using the same variables (rawDirectives, parts, name, values, classifySource, PolicyDirective, and policy.directives) so existing logic for isImplicit and sources remains unchanged.

**packages/backend/src/services/exportService.ts** (lines 25-55)
⚠️ Potential issue | 🟠 Major — Escape every CSV cell before joining.

Only `Description` is quoted right now. `Value`, `Host`, `Path`, `Request ID`, etc. can also contain commas, quotes, newlines, or formula prefixes from analyzed data, which produces malformed CSV and leaves the export open to CSV injection in spreadsheet apps. Centralize escaping/neutralization for every column before `join(",")`.

Suggested fix:
```diff
+function escapeCsvCell(value: unknown): string {
+  const text = String(value ?? "");
+  const neutralized = /^[=+\-@]/.test(text) ? `'${text}` : text;
+  return `"${neutralized.replace(/"/g, '""')}"`;
+}
+
 export function exportAsCsv(analyses: AnalysisResult[]): string {
   const headers = [
     "ID",
     "Check",
@@
   const rows = analyses.flatMap((a) => {
     const { host, path } = extractHostAndPath(a);
     return a.findings.map((f) => [
       f.id,
       f.checkId,
       f.severity,
       f.directive,
       f.value,
-      `"${f.description.replace(/"/g, '""')}"`,
+      f.description,
       host,
       path,
       a.analyzedAt.toISOString(),
       f.requestId,
     ]);
   });

-  return [headers, ...rows].map((row) => row.join(",")).join("\n");
+  return [headers, ...rows]
+    .map((row) => row.map(escapeCsvCell).join(","))
+    .join("\n");
 }
```
🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `packages/backend/src/services/exportService.ts` around lines 25-55, the CSV export currently only quotes the Description field; update exportAsCsv to centrally escape/neutralize every cell by adding a helper (e.g., escapeCsvCell) and applying it to all columns (headers and each value returned from a.findings). escapeCsvCell should convert non-strings to string, double any existing " characters, wrap the cell in quotes if it contains commas, quotes, or newlines, and neutralize CSV injection by prefixing cells that start with =, +, -, or @ with a safe apostrophe before joining rows with ","; locate exportAsCsv and replace the current inline quoting with calls to this helper (use extractHostAndPath as needed to get host/path).

**packages/backend/src/services/analysisService.ts** (lines 64-83)
packages/backend/src/services/analysisService.ts-64-83 (1)

64-83: ⚠️ Potential issue | 🟠 Major

Don't let findings export fail `processResponse()`.

`sdk.findings.create()` is an optional side effect, but any rejection here aborts the whole call after the result was already cached and `analysisUpdated` was already emitted. This should be isolated with `try/catch` so transient SDK/storage failures do not break interception.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/backend/src/services/analysisService.ts` around lines 64 - 83, The call to sdk.findings.create inside processResponse() can throw and should not abort the main flow; wrap the sdk.findings.create({...}) invocation (the block that builds title/description and calls sdk.findings.create) in a try/catch so any rejection is caught and handled (e.g., log the error via the existing logger or sdk logger) without rethrowing; keep the existing checks (findingsEnabled, allFindings.length, request/requestData) and ensure the catch does not alter emitted analysisUpdated or cached results.
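A minimal sketch of the isolation pattern, assuming a simplified stand-in for the SDK (the `Sdk` type, cache, and `processResponse` body here are illustrative, not the plugin's real API):

```typescript
// Illustrative shape only: the real Caido SDK surface differs.
type Sdk = { findings: { create: (f: { title: string }) => Promise<void> } };

const analysisCache = new Map<string, string>();

async function processResponse(sdk: Sdk, requestId: string): Promise<string> {
  // Primary work: the analysis result is committed first.
  analysisCache.set(requestId, "analyzed");
  try {
    // Optional side effect: a rejection here must not undo the cached result.
    await sdk.findings.create({ title: `CSP issues for ${requestId}` });
  } catch (error) {
    // Log and continue; interception keeps working through transient failures.
    console.log(`findings export failed: ${String(error)}`);
  }
  return analysisCache.get(requestId) ?? "missing";
}
```

Even with a findings backend that always rejects, `processResponse` resolves normally and the cached result survives.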
packages/backend/src/services/analysisService.ts-14-17 (1)

14-17: ⚠️ Potential issue | 🟠 Major

Bound the in-memory cache before this ships.

`analysisCache` retains every analyzed request until `clearCache()` is called. On long-running proxy sessions this turns CSP analysis into an unbounded memory sink. Add an eviction policy here (LRU/TTL/max entries) or store only the smaller summary data you need for the UI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/backend/src/services/analysisService.ts` around lines 14 - 17, analysisCache is unbounded and can leak memory; implement a bounded eviction policy for it (e.g., LRU with max entries or TTL) or store only summaries instead of full AnalysisResult. Replace the Map used by analysisCache with a capped cache implementation (or wrap Map operations in helper functions) that evicts oldest/least-recent entries when exceeding a configured maxEntries or removes entries after a TTL; update any code referencing analysisCache and clearCache() to use the new cache API (get/set/delete/clear) and ensure functions that populate it (where AnalysisResult objects are created) only store the minimal summary if you choose the summary approach. Configure the cap value as a constant or env/config option and add tests exercising eviction behavior.packages/backend/src/index.ts-59-71 (1)
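One way to bound the cache is a small LRU built on `Map`'s insertion-order guarantee; the class name and `maxEntries` value are illustrative choices, not part of the plugin:

```typescript
// Minimal LRU: a Map iterates in insertion order, so re-inserting a key on
// access moves it to the "most recently used" end.
class LruCache<K, V> {
  private readonly map = new Map<K, V>();

  constructor(private readonly maxEntries: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key) as V;
    this.map.delete(key);
    this.map.set(key, value); // refresh recency
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      // Evict the least recently used entry (first in iteration order).
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }

  get size(): number {
    return this.map.size;
  }
}

const cache = new LruCache<string, string>(2);
cache.set("req-1", "analysis-1");
cache.set("req-2", "analysis-2");
cache.get("req-1"); // req-1 is now most recently used
cache.set("req-3", "analysis-3"); // evicts req-2, the least recently used
```

A TTL or summary-only storage would work too; the point is that memory stops growing with session length.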
packages/backend/src/index.ts-59-71 (1)

59-71: ⚠️ Potential issue | 🟠 Major

Guard the intercept hook from analysis failures.

This callback runs on live traffic. Any thrown parse/analyze/export error will reject `onInterceptResponse`, so a plugin-side failure can leak into response handling. Catch and log inside the hook to keep interception resilient.

Proposed fix
 sdk.events.onInterceptResponse(async (_sdk, request, response) => {
-  const headers = response.getHeaders();
-  const cspHeaders = extractCspHeaders(headers);
-  if (cspHeaders.length === 0) return;
-
-  if (getScopeEnabled() && !_sdk.requests.inScope(request)) return;
-
-  await processResponse(
-    { id: request.getId(), host: request.getHost(), path: request.getPath() },
-    { headers },
-    request,
-  );
+  try {
+    const headers = response.getHeaders();
+    const cspHeaders = extractCspHeaders(headers);
+    if (cspHeaders.length === 0) return;
+
+    if (getScopeEnabled() && !_sdk.requests.inScope(request)) return;
+
+    await processResponse(
+      { id: request.getId(), host: request.getHost(), path: request.getPath() },
+      { headers },
+      request,
+    );
+  } catch (error) {
+    _sdk.console.log(`CSP Auditor analysis failed: ${String(error)}`);
+  }
 });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/backend/src/index.ts` around lines 59 - 71, The onInterceptResponse hook (sdk.events.onInterceptResponse) is not protected against exceptions from extractCspHeaders, getScopeEnabled/_sdk.requests.inScope checks, or the async processResponse call, so any thrown error can bubble up and break interception; wrap the entire callback body in a try/catch that logs the error (including context: request id/host/path) and prevents rethrowing, ensuring failures in extractCspHeaders or processResponse do not reject the hook; reference the existing symbols sdk.events.onInterceptResponse, extractCspHeaders, getScopeEnabled, _sdk.requests.inScope, and processResponse when locating where to add the try/catch and logging.
🟡 Minor comments (12)
packages/frontend/src/components/Dashboard/GettingStarted/Container.vue-7-7 (1)
7-7: ⚠️ Potential issue | 🟡 Minor

Mark decorative icon as hidden for screen readers.

Line 7 should include `aria-hidden="true"` (or an accessible label if meaningful) to avoid noisy/non-informative announcements.

Minimal fix
- <i class="fas fa-shield-alt text-surface-400 text-5xl" />
+ <i class="fas fa-shield-alt text-surface-400 text-5xl" aria-hidden="true" />

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/frontend/src/components/Dashboard/GettingStarted/Container.vue` at line 7, The decorative <i> icon in the GettingStarted Container component is announced by screen readers; update the element (the <i class="fas fa-shield-alt text-surface-400 text-5xl"> in Container.vue) to be ignored by assistive tech by adding aria-hidden="true" (or replace with an accessible label if the icon conveys meaningful information).

README.md-6-6 (1)
6-6: ⚠️ Potential issue | 🟡 Minor

Use official brand spelling in the link label.

Line 6 should use “GitHub” instead of “Github” for documentation consistency.
✏️ Suggested doc fix
- <a href="https://github.com/caido-community" target="_blank">Github</a>
+ <a href="https://github.com/caido-community" target="_blank">GitHub</a>

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 6, Update the link label text in the README anchor so it uses the official brand spelling "GitHub" instead of "Github" (i.e., change the inner text of the <a href="https://github.com/caido-community" target="_blank"> element from "Github" to "GitHub").

packages/frontend/src/utils/severity.ts-3-14 (1)
3-14: ⚠️ Potential issue | 🟡 Minor

Add an explicit fallback return to avoid undefined class output.

If `level` is ever out-of-contract at runtime, this currently returns `undefined` and silently drops badge styling. Add a default fallback.

Suggested hardening
 export function getSeverityBadgeStyle(level: SeverityLevel): string {
   switch (level) {
     case "high":
       return "bg-red-500/20 text-red-400 border-red-500/30";
     case "medium":
       return "bg-orange-500/20 text-orange-400 border-orange-500/30";
     case "low":
       return "bg-yellow-500/20 text-yellow-400 border-yellow-500/30";
     case "info":
       return "bg-surface-500/20 text-surface-400 border-surface-500/30";
+    default:
+      return "bg-surface-500/20 text-surface-400 border-surface-500/30";
   }
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/frontend/src/utils/severity.ts` around lines 3 - 14, The getSeverityBadgeStyle function can return undefined for out-of-contract values; add a safe fallback string (e.g. neutral gray or transparent badge classes) so styling never drops. Implement this by adding a default: branch in the switch or a final return after the switch in getSeverityBadgeStyle to return the chosen fallback class; keep existing cases unchanged and ensure the fallback handles any unexpected SeverityLevel values.

knip.ts-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Use the public `KnipConfig` type instead of internal import path.

Importing from `knip/dist/types/config.js` relies on an internal distribution path. Use the official public export instead:

import type { KnipConfig } from "knip";

The `KnipConfig` type is the recommended way to type dynamic Knip configuration files, as documented in the official Knip dynamic configuration guide.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@knip.ts` at line 1, The import uses an internal path and the type RawConfigurationOrFn from "knip/dist/types/config.js"; replace this with the public export KnipConfig by changing the import to import type { KnipConfig } from "knip" and update any uses of RawConfigurationOrFn to KnipConfig (e.g., in knip.ts where RawConfigurationOrFn is referenced) so the module relies on the official public API.

packages/backend/README.md-74-104 (1)
74-104: ⚠️ Potential issue | 🟡 Minor

Add missing `await` in async usage examples.

A few examples treat backend calls synchronously and will hand readers a `Promise` instead of a `Result`.

Example fix
-const result = sdk.backend.getAllAnalyses();
+const result = await sdk.backend.getAllAnalyses();

-const result = sdk.backend.getSummary();
+const result = await sdk.backend.getSummary();

-sdk.backend.clearCache();
+await sdk.backend.clearCache();

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/backend/README.md` around lines 74 - 104, The examples call async backend methods without awaiting them, returning Promises instead of Result values; update the example snippets to await sdk.backend.getAllAnalyses(), await sdk.backend.getSummary(), await sdk.backend.exportFindings("json"/"csv") and await sdk.backend.clearCache() (and mark the example functions or surrounding scope as async if needed) so the code operates on resolved Result objects (e.g., check result.kind on the awaited value).

packages/frontend/src/components/Dashboard/AnalysesTable/Container.vue-47-47 (1)
47-47: ⚠️ Potential issue | 🟡 Minor

Remove invalid `expandable-rows` prop from DataTable.

`expandable-rows` is not a valid PrimeVue DataTable prop and will be ignored. Row expansion is already correctly configured via `v-model:expanded-rows` and `<Column :expander="true">`, making this prop redundant.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/frontend/src/components/Dashboard/AnalysesTable/Container.vue` at line 47, Remove the invalid expandable-rows attribute from the DataTable declaration in Container.vue: the DataTable component should not include the expandable-rows prop since row expansion is already handled by v-model:expanded-rows and the <Column :expander="true"> setup; locate the DataTable element in Container.vue, delete the standalone expandable-rows token, and ensure the existing v-model:expanded-rows and Column expander configuration remain intact.

packages/frontend/src/stores/analyses/useAnalysesState.ts-23-29 (1)
23-29: ⚠️ Potential issue | 🟡 Minor

Handle `Clear` in `Loading` to avoid sticky loading state.

`Clear` currently has no effect while loading, so reset/cancel flows can remain stuck in `Loading`.

💡 Suggested fix
     case "Loading":
+      if (message.type === "Clear") return { type: "Idle" };
       if (message.type === "Error") return { type: "Error", error: message.error };
       if (message.type === "Success") return { type: "Success", analyses: message.analyses };
       return current;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/frontend/src/stores/analyses/useAnalysesState.ts` around lines 23 - 29, In the switch branch handling case "Loading" inside the analyses state reducer, handle message.type === "Clear" so a clear/cancel can exit the Loading state instead of returning current; update the branch (where it currently checks message.type === "Error" and "Success" and then returns current) to check for "Clear" first (or alongside others) and return the empty/initial analyses state (e.g., { type: "Empty" } or the reducer's defined initial state) so loading is not sticky.

packages/frontend/src/utils/formatting.ts-3-8 (1)
3-8: ⚠️ Potential issue | 🟡 Minor

`formatDate` fallback does not handle invalid dates.

`new Date(invalid).toLocaleString()` returns `"Invalid Date"` instead of throwing, so the catch block is not reached.

Proposed fix
 export function formatDate(date: Date | string): string {
-  try {
-    return new Date(date).toLocaleString();
-  } catch {
-    return String(date);
-  }
+  const parsed = new Date(date);
+  if (Number.isNaN(parsed.getTime())) {
+    return String(date);
+  }
+  return parsed.toLocaleString();
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/frontend/src/utils/formatting.ts` around lines 3 - 8, The fallback in formatDate doesn't run because new Date(invalid) produces an "Invalid Date" object rather than throwing; update formatDate to first construct const d = new Date(date) and check validity (e.g., isNaN(d.getTime()) or Number.isNaN(+d)) and if invalid return String(date), otherwise return d.toLocaleString(); keep the function name formatDate and replace the current try/catch with this explicit validity check.
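A runnable version of the guard; it relies on the fact that formatting an invalid `Date` yields the string "Invalid Date" rather than throwing, which is spec-mandated behavior:

```typescript
// Invalid dates never throw: new Date("garbage").toLocaleString() quietly
// returns "Invalid Date", so a try/catch is the wrong guard here.
function formatDate(date: Date | string): string {
  const parsed = new Date(date);
  if (Number.isNaN(parsed.getTime())) {
    return String(date); // fall back to the raw input instead
  }
  return parsed.toLocaleString();
}

const silent = new Date("not-a-date").toLocaleString(); // no exception raised
```

Checking `getTime()` for `NaN` is the canonical validity test, since an invalid `Date` is a real object whose time value is `NaN`.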
packages/backend/src/engine/checks/missingDirectives.ts-31-35 (1)

31-35: ⚠️ Potential issue | 🟡 Minor

Wildcard detection for `base-uri` is too narrow.

Checking only `includes("*")` as an exact token misses wildcard host-source values (e.g., `https://*.example.com`).

🔧 Proposed fix
   if (
     baseUri === undefined ||
-    baseUri.values.includes("*") ||
+    baseUri.values.some((value) => value === "*" || value.includes("*")) ||
     baseUri.values.includes("'unsafe-inline'")
   ) {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/backend/src/engine/checks/missingDirectives.ts` around lines 31 - 35, The wildcard detection in the base-uri check is too narrow: replace the current token check that only tests baseUri.values.includes("*") with a check that inspects each value in baseUri.values for any '*' occurrence (e.g., baseUri.values.some(v => v.includes('*') || v === '*')) so host-source patterns like "https://*.example.com" are caught; update the conditional around baseUri (the block using baseUri and baseUri.values) to use this .some(...) predicate instead of the single includes("*") call.
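The broader predicate can be isolated into a small pure function; the function name is an illustrative choice:

```typescript
// Flag any source containing a wildcard, including host patterns like
// https://*.example.com, not just the bare "*" token.
function hasWildcardSource(values: string[]): boolean {
  return values.some((value) => value.includes("*"));
}
```

Since `"*".includes("*")` is true, the single `.some()` check covers both the bare wildcard and wildcard host patterns.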
packages/backend/src/engine/checks/bypass.ts-14-28 (1)

14-28: ⚠️ Potential issue | 🟡 Minor

Avoid substring host matching for JSONP detection.

`value.includes(host)` can create false positives. Match against parsed hostname boundaries instead.

🔧 Proposed fix
   for (const value of scriptSrc.values) {
     if (isCheckEnabled("jsonp-bypass-risk", enabledChecks)) {
       for (const host of JSONP_CAPABLE_HOSTS) {
-        if (value.includes(host)) {
+        if (sourceMatchesHost(value, host)) {
           findings.push(
             emitFinding(
               "jsonp-bypass-risk",
               "script-src",
               value,
               policy.requestId,
               `Host ${host} supports JSONP callbacks that can bypass CSP`,
             ),
           );
         }
       }
     }

function sourceMatchesHost(source: string, host: string): boolean {
  const token = source.replace(/^'+|'+$/g, "");
  try {
    const normalized = token.includes("://") ? token : `https://${token}`;
    const hostname = new URL(normalized).hostname;
    return hostname === host || hostname.endsWith(`.${host}`);
  } catch {
    return false;
  }
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/backend/src/engine/checks/bypass.ts` around lines 14 - 28, The current JSONP detection uses substring matching (value.includes(host)) which yields false positives; add a helper like sourceMatchesHost(source, host) that strips surrounding quotes, prepends a default scheme when missing, parses via new URL(...).hostname, and returns true when hostname === host or hostname.endsWith(`.${host}`) (catch and return false on parse errors); then replace the includes check inside the loop over scriptSrc.values/JSONP_CAPABLE_HOSTS with sourceMatchesHost(value, host) so emitFinding("jsonp-bypass-risk", ...) only fires on proper hostname matches; define sourceMatchesHost in this module and reuse it where isCheckEnabled("jsonp-bypass-risk", enabledChecks) and emitFinding are used.

packages/backend/src/services/analysisService.ts-98-121 (1)
98-121: ⚠️ Potential issue | 🟡 Minor

`lastAnalyzedAt` is reading `Map` order, not the newest analysis.

`Array.from(analysisCache.values())` preserves insertion order, and updating an existing key does not move it to the end. So the timestamp returned here can be stale instead of the most recent analysis time.

Proposed fix
 export function computeSummary(): AnalysisSummary {
   const analyses = Array.from(analysisCache.values());
   const allFindings = analyses.flatMap((a) => a.findings);
+  const lastAnalyzedAt = analyses.reduce<Date | undefined>(
+    (latest, analysis) =>
+      latest === undefined || analysis.analyzedAt > latest
+        ? analysis.analyzedAt
+        : latest,
+    undefined,
+  );
   const severityCounts: Record<SeverityLevel, number> = {
@@
   return {
     totalAnalyses: analyses.length,
     totalFindings: allFindings.length,
     severityCounts,
     checkIdCounts,
-    lastAnalyzedAt: analyses.length > 0 ? analyses[0]?.analyzedAt : undefined,
+    lastAnalyzedAt,
   };
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/backend/src/services/analysisService.ts` around lines 98 - 121, computeSummary currently uses Array.from(analysisCache.values()) and assumes the first element is the newest, but Map insertion order is not updated on value replacements; fix computeSummary by deriving lastAnalyzedAt from the actual timestamps (e.g., take the maximum analyzedAt across analyses or sort analyses by analyzedAt descending) instead of relying on Map order—update the return value so lastAnalyzedAt = analyses.length > 0 ? max(analyses.map(a => a.analyzedAt)) : undefined, referencing computeSummary, analysisCache, AnalysisSummary, and the analyzedAt field.
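The ordering pitfall and the reduce-based fix can be shown in isolation (the keys and dates below are made up):

```typescript
// A Map keeps first-insertion order; overwriting a key does NOT move it,
// so "first entry" is unrelated to "most recently analyzed".
const analysisCache = new Map<string, { analyzedAt: Date }>();
analysisCache.set("req-1", { analyzedAt: new Date("2024-01-01T00:00:00Z") });
analysisCache.set("req-2", { analyzedAt: new Date("2024-01-02T00:00:00Z") });
analysisCache.set("req-1", { analyzedAt: new Date("2024-01-03T00:00:00Z") }); // re-analysis

const firstKey = analysisCache.keys().next().value; // still "req-1"

// Derive the newest timestamp from the data instead of relying on order.
const lastAnalyzedAt = Array.from(analysisCache.values()).reduce<Date | undefined>(
  (latest, a) => (latest === undefined || a.analyzedAt > latest ? a.analyzedAt : latest),
  undefined,
);
```

Even though `req-1` was re-analyzed last, it stays first in iteration order, which is exactly why the original `analyses[0]` lookup could return a stale timestamp.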
packages/backend/src/engine/policyBuilder.ts-55-79 (1)

55-79: ⚠️ Potential issue | 🟡 Minor

Normalize `headerName` before deriving policy flags.

These checks are case-sensitive, but HTTP header names are not. Mixed-case inputs like `Content-Security-Policy-Report-Only` or `X-WebKit-CSP` will currently come back with the wrong `isReportOnly`/`isDeprecated` values.

Proposed fix
 export function buildPolicy(
   directives: Record<string, string[]>,
   options?: { headerName?: string; requestId?: string; url?: string },
 ): ParsedPolicy {
   const headerName = options?.headerName ?? "content-security-policy";
+  const normalizedHeaderName = headerName.toLowerCase();
   const requestId = options?.requestId ?? "test-request-1";
   const directiveMap = new Map<string, PolicyDirective>();
@@
     headerValue: Object.entries(directives)
       .map(([k, v]) => `${k} ${v.join(" ")}`)
       .join("; "),
     directives: directiveMap,
-    isReportOnly: headerName.includes("report-only"),
-    isDeprecated: ["x-content-security-policy", "x-webkit-csp"].includes(
-      headerName,
-    ),
+    isReportOnly: normalizedHeaderName.includes("report-only"),
+    isDeprecated: ["x-content-security-policy", "x-webkit-csp"].includes(
+      normalizedHeaderName,
+    ),
     parsedAt: new Date(),
     url: options?.url,
   };
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/backend/src/engine/policyBuilder.ts` around lines 55 - 79, Normalize headerName to lowercase before computing policy flags: create a lowercase variant (e.g., headerNameLower = headerName.toLowerCase()) after headerName is assigned, then use headerNameLower for isReportOnly and isDeprecated checks (replace headerName.includes(...) and the array includes(...) check with headerNameLower). Keep the original headerName value for storage if needed, but base the report-only and deprecated booleans on the normalized headerName to handle mixed-case inputs; update references in this return block where isReportOnly and isDeprecated are set.
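The normalization can be factored into a small pure function for testing; the function name here is illustrative:

```typescript
// HTTP header names are case-insensitive, so lowercase once before
// deriving any flags from the name.
function policyFlags(headerName: string): { isReportOnly: boolean; isDeprecated: boolean } {
  const normalized = headerName.toLowerCase();
  return {
    isReportOnly: normalized.includes("report-only"),
    isDeprecated: ["x-content-security-policy", "x-webkit-csp"].includes(normalized),
  };
}
```

With the lowercase comparison, `Content-Security-Policy-Report-Only` and `X-WebKit-CSP` classify correctly regardless of how the server capitalized them.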