
V2.0.0 Plugin Refactor #100

Open
amrelsagaei wants to merge 2 commits into main from ae-plugin-refactor

Conversation


@amrelsagaei amrelsagaei commented Apr 11, 2026

Summary by CodeRabbit

  • New Features

    • Complete CSP analysis engine with configurable vulnerability checks
    • Analysis dashboard displaying policies and findings with filtering
    • CSP bypass database with searchable records
    • Configuration panel to customize vulnerability checks
    • JSON and CSV export functionality for findings
    • Real-time analysis updates on HTTP responses
  • Bug Fixes

    • Improved policy parsing and directive handling
  • Refactoring

    • Restructured backend architecture for modularity
    • Reorganized frontend into component-based pages


socket-security bot commented Apr 11, 2026

@f8thl3ss

@coderabbitai review


coderabbitai bot commented Apr 11, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


coderabbitai bot commented Apr 11, 2026

📝 Walkthrough

Walkthrough

This PR performs a comprehensive restructuring of a CSP auditor plugin. The changes include removing legacy configuration files, completely rewriting the backend analysis engine with a modular check-based architecture, restructuring the frontend from a monolithic component into multiple modular pages, introducing a new shared package for common types, and updating dependencies and tooling. Version bumped to 2.0.0.

Changes

Configuration & Repository Setup
Files: .editorconfig, .npmrc, .gitignore, .pre-commit-config.yaml, CODEOWNERS, .mise/config.toml
Removed formatter/shell/Terraform config rules, simplified .gitignore, removed pre-commit hooks, cleaned npm/renovate configurations, added mise tool version pinning and validation task, removed code owner assignments.

GitHub Actions Workflows & Templates
Files: .github/workflows/*, .github/ISSUE_TEMPLATE/*, .github/*.yaml
Deleted pre-commit, rigging PR description, and semantic PR validation workflows; updated release workflow with broader Node.js/action versions and new release action; updated validate workflow with frozen-lockfile installs and new test/knip/build jobs; removed issue templates, labeler rules, labels, and actionlint config.

Backend Analysis Engine Rewrite
Files: packages/backend/src/engine/*
Complete architectural overhaul: replaced monolithic analyzer with modular check runners (critical, bypass, deprecated, hostSecurity, modernThreats, missingDirectives, trustedTypes), new parser with CSP header extraction and policy parsing, deduplicator for findings, and analyzer orchestrator. Added comprehensive test coverage for all checks.

Backend API & Services Layer
Files: packages/backend/src/api/*, packages/backend/src/services/*
Introduced new modular API endpoints (analysis, settings, export, bypass) delegating to service layer; added analysisService handling response processing, caching, and finding creation; added exportService for JSON/CSV export formats.

Backend Data & Utilities
Files: packages/backend/src/data/*, packages/backend/src/utils/*
Introduced data modules with curated host lists (user content, vulnerable JS, threat hosts), bypass records, and check defaults; added domain matching and finding emission utilities; removed legacy utilities for URL/pattern matching.

Backend Legacy Code Removal
Files: packages/backend/src/*blacklists.ts, packages/backend/src/csp-parser.ts, packages/backend/src/enhanced-*.ts, packages/backend/src/findings-generator.ts, packages/backend/src/vulnerability-rules.ts, packages/backend/src/utils.ts, packages/backend/src/bypass-database.ts
Deleted 6+ legacy modules (blacklists, CSP parser, enhanced analyzers, findings generator, vulnerability rules, original utilities, bypass database abstraction) to enable cleaner modular architecture.

Backend Configuration & Types
Files: packages/backend/package.json, packages/backend/tsconfig.json, packages/backend/src/types.ts, packages/backend/src/index.ts, packages/backend/src/sdk.ts, packages/backend/src/node-types.d.ts
Updated SDK to v0.55.3, added shared workspace dependency, replaced types with modular imports, removed Node.js type declarations, implemented new API/engine integration in main entry point, added SDK initialization wrapper.

Frontend Component Restructuring
Files: packages/frontend/src/components/*, packages/frontend/src/views/*
Replaced monolithic App.vue with modular page structure: Dashboard (analyses table, stats, empty state), Database (bypass record browser), Configuration (check settings); created reusable sub-components (PolicyCard, VulnerabilityCard, AnalysesTable, ExpandedRow).

Frontend State Management
Files: packages/frontend/src/stores/*, packages/frontend/src/services/*, packages/frontend/src/composables/*
Introduced Pinia stores for analyses (state machine), settings, and bypass data; added service layers for analyses and settings integrating with backend SDK; created composables for clipboard, export download, page navigation, vulnerability modal, and bypass data.

Frontend Utilities & Styling
Files: packages/frontend/src/utils/*, packages/frontend/src/styles/*, packages/frontend/src/types.ts, packages/frontend/src/plugins/sdk.ts
Added utilities for date formatting, host/path extraction, severity/technique styling; removed custom Caido and PrimeVue CSS; replaced custom SDK types with Caido SDK type alias; converted SDK plugin import to type-only.

Frontend Manifests & Config
Files: packages/frontend/package.json, packages/frontend/tsconfig.json, packages/frontend/src/index.ts
Added Pinia store initialization, updated SDK versions to v0.55.3, downgraded Vue/CodeMirror, upgraded vue-tsc; added vitest globals; updated sidebar icon and element ID in plugin registration.

Shared Package Introduction
Files: packages/shared/*
New monorepo package exporting common types: result pattern (Ok/Error), CSP structures (ParsedPolicy, PolicyDirective, PolicySource), vulnerability/finding definitions, analysis aggregations, bypass data models, check configurations with registry and defaults, backend events.

Root Configuration & Scripts
Files: caido.config.ts, knip.ts, package.json, vitest.config.ts, scripts/generate-bypass-data.js, README.md
Updated plugin metadata (version 1.0.3→2.0.0, description, author, IDs); added knip config with workspace entry points; removed bypass data generation script; updated root package scripts (added knip, test, removed prebuild); added vitest config; simplified README.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Poem

🐰 Hopping through the CSP code with glee,
Old monoliths crumble, modular we'll be!
Checks hop in sequence, findings deduplicate clean,
Shared types unite what once was unseen.
From bunny's refactor, a plugin reborn!



@coderabbitai coderabbitai bot left a comment


Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

🟠 Major comments (24)
.gitignore-1-6 (1)

1-6: ⚠️ Potential issue | 🟠 Major

Add .env patterns to prevent accidental secret commits.

The .gitignore file is missing patterns for environment variable files (.env, .env.local, .env.*). Node.js/TypeScript projects commonly use these files to store secrets, API keys, and configuration. Without these patterns, developers could accidentally commit sensitive credentials.

🛡️ Proposed fix to add environment file patterns
 node_modules
 dist
 .DS_Store
 plugin_package.zip
 coverage
 *.tsbuildinfo
+.env
+.env.*
+!.env.example
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.gitignore around lines 1 - 6, Add gitignore patterns to exclude environment
files so secrets aren't committed: update the .gitignore to include entries for
.env, .env.local, and .env.* (or a single pattern like .env*) to cover all env
variants; ensure these new patterns are placed alongside existing entries
(node_modules, dist, coverage, *.tsbuildinfo) so Git ignores any
environment/config files committed by developers.
packages/backend/src/data/userContentHosts.ts-10-10 (1)

10-10: ⚠️ Potential issue | 🟠 Major

Add wildcard for Googleusercontent subdomains.

Line 10 only matches the apex domain. Most user-content traffic is on *.googleusercontent.com, so this likely under-detects risky hosts.

Targeted fix
   "googleusercontent.com",
+  "*.googleusercontent.com",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/data/userContentHosts.ts` at line 10, The list entry
"googleusercontent.com" only matches the apex domain; update the host list (the
exported array/constant in userContentHosts.ts that currently includes
"googleusercontent.com") to also include a wildcard for subdomains (e.g.,
"*.googleusercontent.com" or a leading-dot variant depending on the matching
logic used by your host matcher) so that all *.googleusercontent.com hosts are
detected; keep or retain the existing apex entry if your matching logic treats
wildcards as non-inclusive of the apex.
.github/workflows/release.yml-26-35 (1)

26-35: ⚠️ Potential issue | 🟠 Major

Pin GitHub Actions to commit SHAs in release workflow.

Using floating major tags in a privileged release job is a supply-chain risk. Pin to immutable SHAs for actions/checkout@v4, actions/setup-node@v4, pnpm/action-setup@v4, and caido/action-release@v1.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release.yml around lines 26 - 35, Replace the floating
major tags for GitHub Actions in the release workflow by pinning each action to
its immutable commit SHA: update uses entries for actions/checkout@v4,
actions/setup-node@v4, pnpm/action-setup@v4, and caido/action-release@v1 to the
corresponding full commit SHA (e.g., actions/checkout@<sha>) so the release job
uses exact revisions; locate the uses lines for those action names and
substitute the version tags with their repository commit SHAs, ensuring you test
the workflow after updating.
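As a hedged illustration of the fix, each pinned step replaces the floating major tag with a full 40-character commit SHA and keeps the tag as a trailing comment. The SHAs below are zero-filled placeholders, not real commits; they must be looked up per action (e.g. via git ls-remote on the action repository):

```yaml
# Sketch only: substitute each placeholder with the audited commit SHA
# that corresponds to the tag noted in the comment.
steps:
  - uses: actions/checkout@0000000000000000000000000000000000000000 # v4
  - uses: actions/setup-node@0000000000000000000000000000000000000000 # v4
  - uses: pnpm/action-setup@0000000000000000000000000000000000000000 # v4
  - uses: caido/action-release@0000000000000000000000000000000000000000 # v1
```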
packages/backend/src/data/bypassRecords.ts-42-42 (1)

42-42: ⚠️ Potential issue | 🟠 Major

Scrub credential-like values from these payloads.

Line 42 and Line 918 embed concrete token-like query values (sign, appKey, token). Even if these are only sample exploits, checking them in will keep tripping secret scanners and could leak third-party credentials if any are still valid. Replace the concrete values with inert placeholders before release.

Also applies to: 918-918

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/data/bypassRecords.ts` at line 42, The embedded payload
in the `code` field contains credential-like query parameters (e.g., sign,
appKey, token) that should be redacted; update the `code` value in bypassRecords
(and the similar occurrence later) to replace concrete values with inert
placeholders (e.g., "{SIGN}", "{APP_KEY}", "{TOKEN}") or strip those query
params before committing, and add a small normalization/sanitization step where
`code` is assembled or stored (use a regex or URL parsing in the module that
sets the `code` field) to ensure any future instances of `sign`, `appKey`,
`token`, etc. are replaced with placeholders.
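The sanitization step described above could be sketched roughly as follows. The parameter names come from the review comment; the helper name and the {PLACEHOLDER} format are illustrative, not part of the plugin:

```typescript
// Hypothetical sketch: redact credential-like query parameters from sample
// payloads before they are committed to bypassRecords.ts.
const SENSITIVE_PARAMS = ["sign", "appKey", "token"];

function redactPayload(payload: string): string {
  let out = payload;
  for (const param of SENSITIVE_PARAMS) {
    // Replace `param=value` up to the next delimiter with an inert placeholder.
    const re = new RegExp(`(${param}=)[^&"'\\s]+`, "gi");
    out = out.replace(re, `$1{${param.toUpperCase()}}`);
  }
  return out;
}
```

Running this once over the data module (or in a pre-commit step) keeps future sample exploits from tripping secret scanners.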
packages/frontend/src/components/Dashboard/AnalysesTable/VulnerabilityCard.vue-11-14 (1)

11-14: ⚠️ Potential issue | 🟠 Major

Use a semantic interactive element for accessibility.

The clickable container is a div, so keyboard users can’t reliably activate it. Please switch to a button (or add full ARIA+keyboard support).

♿ Suggested fix
-  <div
-    class="border border-surface-700 rounded p-3 cursor-pointer hover:border-surface-500 transition-colors"
-    @click="emit('click', props.finding)"
-  >
+  <button
+    type="button"
+    class="border border-surface-700 rounded p-3 cursor-pointer hover:border-surface-500 transition-colors w-full text-left bg-transparent focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2"
+    @click="emit('click', props.finding)"
+  >
@@
-  </div>
+  </button>

Also applies to: 35-35

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/frontend/src/components/Dashboard/AnalysesTable/VulnerabilityCard.vue`
around lines 11 - 14, Replace the non-semantic clickable div in
VulnerabilityCard.vue with a semantic interactive element (e.g., a <button>) or
add full ARIA+keyboard support so keyboard users can activate it; specifically
change the element that currently has class "border border-surface-700 rounded
p-3 cursor-pointer hover:border-surface-500 transition-colors" and the
`@click="emit('click', props.finding)"` handler to a button (or ensure
role="button", tabindex="0", and keydown/keypress handlers that call the same
emit('click', props.finding)), keep existing classes and the emit call (and
ensure styles still apply), and apply appropriate aria-label if the visual
context isn’t descriptive.
packages/frontend/src/composables/useClipboard.ts-6-10 (1)

6-10: ⚠️ Potential issue | 🟠 Major

Handle clipboard failures to prevent unhandled promise rejections in event handlers.

navigator.clipboard.writeText can fail (permission denied, insecure context), and since all callers use fire-and-forget patterns directly in event bindings without error handling, failures will result in unhandled promise rejections. The composable should wrap this call with a try/catch to display user feedback on failure.

🛠️ Suggested hardening
   const copyToClipboard = async (text: string, label?: string) => {
-    await navigator.clipboard.writeText(text);
-    sdk.window.showToast(label ?? "Copied to clipboard", {
-      variant: "success",
-    });
+    try {
+      if (!navigator.clipboard?.writeText) {
+        throw new Error("Clipboard API unavailable");
+      }
+      await navigator.clipboard.writeText(text);
+      sdk.window.showToast(label ?? "Copied to clipboard", {
+        variant: "success",
+      });
+    } catch {
+      sdk.window.showToast("Failed to copy to clipboard");
+    }
   };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/composables/useClipboard.ts` around lines 6 - 10, The
copyToClipboard function currently calls navigator.clipboard.writeText without
error handling, which can cause unhandled promise rejections when used
fire-and-forget; wrap the await navigator.clipboard.writeText(text) in a
try/catch inside copyToClipboard (and optionally return a boolean success flag)
so failures are caught, and on error call sdk.window.showToast with a clear
failure message (e.g., label ?? "Failed to copy" and a danger/error variant) and
log or surface the error; ensure copyToClipboard still resolves normally so
event handlers don't get unhandled rejections.
packages/frontend/src/views/Database.vue-31-35 (1)

31-35: ⚠️ Potential issue | 🟠 Major

Add an accessible name to the search input.

The search field currently relies on placeholder text; it should expose an explicit accessible name for assistive tech users.

♿ Suggested fix
             <InputText
               v-model="searchQuery"
+              aria-label="Search CSP bypass techniques"
               placeholder="Search..."
               class="w-64"
             />
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/views/Database.vue` around lines 31 - 35, The search
input in Database.vue (the InputText component bound to v-model="searchQuery")
lacks an accessible name; add one by either providing an explicit aria-label
(e.g., aria-label="Search") on the InputText or by adding a
visible/visually-hidden <label> tied to the input via an id and for attribute so
assistive technologies can announce the field; ensure the chosen approach
updates the InputText props (or underlying input id) so the label/aria attribute
is correctly associated with the component.
packages/backend/README.md-35-63 (1)

35-63: ⚠️ Potential issue | 🟠 Major

API reference signatures should include async return type.

These RPC methods are documented and used as async calls, so signatures should be Promise<Result<...>> to avoid integration mistakes.

Doc signature fix
-getAllAnalyses(): Result<AnalysisResult[]>
-getAnalysis(requestId: string): Result<AnalysisResult | undefined>
-getSummary(): Result<AnalysisSummary>
-clearCache(): Result<void>
+getAllAnalyses(): Promise<Result<AnalysisResult[]>>
+getAnalysis(requestId: string): Promise<Result<AnalysisResult | undefined>>
+getSummary(): Promise<Result<AnalysisSummary>>
+clearCache(): Promise<Result<void>>
...
-exportFindings(format: "json" | "csv"): Result<string>
+exportFindings(format: "json" | "csv"): Promise<Result<string>>
...
-getBypassRecords(): Result<BypassRecord[]>
+getBypassRecords(): Promise<Result<BypassRecord[]>>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/README.md` around lines 35 - 63, Update the documented RPC
method signatures to include async return types by wrapping each Result<T> in
Promise<Result<T>>; e.g., change getAllAnalyses(): Result<AnalysisResult[]> to
getAllAnalyses(): Promise<Result<AnalysisResult[]>>, and do the same for
getAnalysis, getSummary, clearCache, getScopeEnabled, setScopeEnabled,
getFindingsEnabled, setFindingsEnabled, getCheckSettings, setCheckSettings,
updateSingleCheck, exportFindings, and getBypassRecords so the README reflects
the actual async RPC return types.
packages/frontend/src/components/Configuration/Container.vue-170-192 (1)

170-192: ⚠️ Potential issue | 🟠 Major

Use @click instead of @mousedown for action buttons.

@mousedown breaks expected keyboard activation behavior and hurts accessibility for core configuration actions.

💡 Suggested fix
-                  @mousedown="settingsService.setAllChecks(true)"
+                  @click="settingsService.setAllChecks(true)"
@@
-                  @mousedown="settingsService.setRecommendedMode()"
+                  @click="settingsService.setRecommendedMode()"
@@
-                  @mousedown="settingsService.setLightMode()"
+                  @click="settingsService.setLightMode()"
@@
-                  @mousedown="settingsService.setAllChecks(false)"
+                  @click="settingsService.setAllChecks(false)"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/components/Configuration/Container.vue` around lines
170 - 192, Replace the four Button event bindings that currently use `@mousedown`
with `@click` to restore proper keyboard activation and expected button behavior;
update the Button elements that call settingsService.setAllChecks(true),
settingsService.setRecommendedMode(), settingsService.setLightMode(), and
settingsService.setAllChecks(false) to use `@click` instead of `@mousedown`.
packages/frontend/src/composables/useExportDownload.ts-20-37 (1)

20-37: ⚠️ Potential issue | 🟠 Major

Catch thrown export errors, not just Err results.

If exportFindings rejects/throws, the current flow skips user feedback and can surface unhandled promise rejections.

💡 Suggested fix
 export function useExportDownload() {
   const sdk = useSDK();

+  const download = async (
+    format: "json" | "csv",
+    filename: string,
+    mimeType: string,
+    successMessage: string,
+  ) => {
+    try {
+      const result = await sdk.backend.exportFindings(format);
+      if (result.kind === "Ok") {
+        triggerDownload(result.value, filename, mimeType);
+        sdk.window.showToast(successMessage, { variant: "success" });
+      } else {
+        sdk.window.showToast("Export failed", { variant: "error" });
+      }
+    } catch {
+      sdk.window.showToast("Export failed", { variant: "error" });
+    }
+  };
+
   const downloadAsJson = async () => {
-    const result = await sdk.backend.exportFindings("json");
-    if (result.kind === "Ok") {
-      triggerDownload(result.value, "csp-findings.json", "application/json");
-      sdk.window.showToast("Exported as JSON", { variant: "success" });
-    } else {
-      sdk.window.showToast("Export failed", { variant: "error" });
-    }
+    await download("json", "csp-findings.json", "application/json", "Exported as JSON");
   };

   const downloadAsCsv = async () => {
-    const result = await sdk.backend.exportFindings("csv");
-    if (result.kind === "Ok") {
-      triggerDownload(result.value, "csp-findings.csv", "text/csv");
-      sdk.window.showToast("Exported as CSV", { variant: "success" });
-    } else {
-      sdk.window.showToast("Export failed", { variant: "error" });
-    }
+    await download("csv", "csp-findings.csv", "text/csv", "Exported as CSV");
   };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/composables/useExportDownload.ts` around lines 20 - 37,
Wrap the async calls to sdk.backend.exportFindings in try/catch inside both
downloadAsJson and downloadAsCsv so thrown/rejected promises are handled; keep
the existing result.kind === "Ok" check to call triggerDownload and show success
toasts, but in the catch block call sdk.window.showToast with an error variant
(include the caught error message for clarity) to avoid unhandled promise
rejections and provide user feedback on thrown errors.
packages/backend/src/engine/checks/modernThreats.ts-35-45 (1)

35-45: ⚠️ Potential issue | 🟠 Major

Normalize host values before matching to avoid false negatives.

Line 37 compares raw source values directly. Values with scheme/path/port can bypass detection (e.g. https://cdn.jsdelivr.net/npm/...), so risky hosts may be missed.

Proposed fix
 import type { CheckId, ParsedPolicy, PolicyFinding } from "shared";
 
 import { AI_ML_HOSTS, WEB3_HOSTS } from "../../data";
+import { stripDomainPrefix } from "../../utils/domainMatching";
 import { emitFinding, isCheckEnabled } from "../../utils/findings";
@@
       for (const value of directive.values) {
+        const normalizedValue = stripDomainPrefix(value)
+          .replace(/^\*\./, "")
+          .replace(/:\d+$/, "")
+          .toLowerCase();
         for (const host of config.hosts) {
-          if (value === host || value.endsWith(`.${host}`)) {
+          const normalizedHost = host.toLowerCase();
+          if (
+            normalizedValue === normalizedHost ||
+            normalizedValue.endsWith(`.${normalizedHost}`)
+          ) {
             findings.push(
               emitFinding(
                 config.checkId,
                 directive.name,
                 value,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/engine/checks/modernThreats.ts` around lines 35 - 45,
Normalize each directive value to a hostname before matching against
config.hosts: for every item in directive.values, derive a candidateHost by
parsing the value (use the URL constructor for values with scheme, fallback to
stripping port and path for plain host:port or path-containing strings, and
lowercase the result); then use that candidateHost in the existing comparisons
(candidateHost === host || candidateHost.endsWith(`.${host}`)) and pass the
original value into emitFinding (preserve emitFinding(config.checkId,
directive.name, value, ...)). Update the loop that iterates directive.values to
compute candidateHost once and use it for matching so entries like
"https://cdn.jsdelivr.net/..." or "example.com:8080/path" are detected.
packages/frontend/src/components/VulnerabilityModal/Container.vue-100-107 (1)

100-107: ⚠️ Potential issue | 🟠 Major

Use @click and add an accessible label for the copy button.

Line 106 uses @mousedown, which is not keyboard-friendly for activation. Icon-only buttons require an explicit accessible name via aria-label. This creates a WCAG 2.1 Level A violation and is inconsistent with BypassEntry.vue, which correctly uses @click.

Proposed fix
             <Button
               icon="fas fa-copy"
               size="small"
               severity="secondary"
               text
+              aria-label="Copy bypass payload"
               class="!w-6 !h-6 shrink-0"
-              @mousedown="copyToClipboard(bypass.payload, 'Bypass copied')"
+              @click="copyToClipboard(bypass.payload, 'Bypass copied')"
             />
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/components/VulnerabilityModal/Container.vue` around
lines 100 - 107, The copy button in VulnerabilityModal's template uses
`@mousedown` which prevents keyboard activation and lacks an accessible name;
replace the `@mousedown` handler with `@click` on the <Button> that calls
copyToClipboard(bypass.payload, 'Bypass copied') and add an explicit accessible
label (e.g., aria-label="Copy bypass") to the Button so it matches the
accessible behavior in BypassEntry.vue and supports keyboard users and screen
readers.
packages/frontend/src/components/Dashboard/Container.vue-92-113 (1)

92-113: ⚠️ Potential issue | 🟠 Major

Use click + accessible labels for action buttons.

On Line 98 and Line 112, @mousedown bypasses normal keyboard activation semantics; icon-only buttons also need accessible names.

Proposed fix
             <Button
               icon="fas fa-sync"
+              aria-label="Refresh analyses"
               severity="secondary"
               outlined
               size="small"
               :loading="refreshing"
-              @mousedown="onRefresh"
+              @click="onRefresh"
             />
@@
             <Button
               icon="fas fa-trash"
+              aria-label="Clear analyses cache"
               severity="danger"
               outlined
               size="small"
-              @mousedown="analysesService.clearCache()"
+              @click="analysesService.clearCache()"
             />
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/components/Dashboard/Container.vue` around lines 92 -
113, Replace the non-standard `@mousedown` handlers with `@click` to preserve
keyboard activation semantics: change the Button that calls onRefresh to use
`@click="onRefresh"` (referencing the onRefresh method) and change the trash
Button to `@click="analysesService.clearCache()"` (referencing
analysesService.clearCache). Also add accessible names to icon-only buttons by
providing aria-label (or a visible label prop if the Button component supports
it) and/or title attributes (e.g., aria-label="Refresh" for the sync button and
aria-label="Clear cache" for the trash button) so screen readers can announce
the actions.
packages/frontend/src/components/Dashboard/Container.vue-33-49 (1)

33-49: ⚠️ Potential issue | 🟠 Major

summary currently returns incomplete analytics data.

On Line 47–48, checkIdCounts is always {} and lastAnalyzedAt is always undefined, so consumers never receive real values.

Proposed fix
 const summary = computed(() => {
   const a = analyses.value;
   const allFindings = a.flatMap((x) => x.findings);
   const severityCounts: Record<SeverityLevel, number> = {
     high: 0,
     medium: 0,
     low: 0,
     info: 0,
   };
-  for (const f of allFindings) severityCounts[f.severity]++;
+  const checkIdCounts: Record<string, number> = {};
+  for (const f of allFindings) {
+    severityCounts[f.severity]++;
+    checkIdCounts[f.checkId] = (checkIdCounts[f.checkId] ?? 0) + 1;
+  }
+  const lastAnalyzedAt = a.reduce<Date | undefined>((latest, item) => {
+    if (!latest || item.analyzedAt > latest) return item.analyzedAt;
+    return latest;
+  }, undefined);
   return {
     totalAnalyses: a.length,
     totalFindings: allFindings.length,
     severityCounts,
-    checkIdCounts: {} as Record<string, number>,
-    lastAnalyzedAt: undefined as Date | undefined,
+    checkIdCounts,
+    lastAnalyzedAt,
   };
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/components/Dashboard/Container.vue` around lines 33 -
49, The computed summary currently leaves checkIdCounts empty and lastAnalyzedAt
undefined; update the summary computed (referencing summary, analyses,
allFindings) to build checkIdCounts by iterating allFindings and incrementing
counts keyed by each finding.checkId (ensure type Record<string, number>), and
compute lastAnalyzedAt by taking the latest Date from analyses (e.g., reduce
over analyses to find the max analyzedAt or use allFindings timestamps if
findings carry them), then return those real values instead of {} and undefined
so consumers receive actual counts and the most recent analysis date.
packages/frontend/src/composables/useBypassData.ts-14-23 (1)

14-23: ⚠️ Potential issue | 🟠 Major

Handle failure paths in loadRecords to avoid silent stale state.

On Line 17–20, non-Ok responses and thrown errors do not update state beyond loading, so stale records can persist with no error signal.

Proposed fix
 export const useBypassData = defineStore("bypass-data", () => {
   const sdk = useSDK();
   const records = ref<BypassRecord[]>([]);
   const loading = ref(false);
+  const error = ref<string | null>(null);
   const searchQuery = ref("");

   const loadRecords = async () => {
     loading.value = true;
+    error.value = null;
     try {
       const result = await sdk.backend.getBypassRecords();
       if (result.kind === "Ok") {
         records.value = result.value;
+      } else {
+        records.value = [];
+        error.value = String(result.error ?? "Failed to load bypass records");
       }
+    } catch (e) {
+      records.value = [];
+      error.value =
+        e instanceof Error ? e.message : "Failed to load bypass records";
     } finally {
       loading.value = false;
     }
   };
@@
   return {
     records,
     loading,
+    error,
     searchQuery,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/composables/useBypassData.ts` around lines 14 - 23, The
loadRecords function currently leaves records.value unchanged on non-Ok
responses or thrown errors, causing stale data; update loadRecords (the async
function calling sdk.backend.getBypassRecords) to handle failure paths by adding
a catch that clears or resets records.value (e.g., set to [] or null) and
optionally set an error flag or rethrow the error so callers know the call
failed; keep the existing finally block to set loading.value = false and ensure
both non-Ok results and exceptions go through the same failure handling logic.
packages/frontend/src/stores/settings.ts-51-109 (1)

51-109: ⚠️ Potential issue | 🟠 Major

Surface backend write failures instead of ignoring them.

All of the mutating actions drop the Error case. If persistence fails, the UI gets no signal to notify the user or retry, so settings changes can fail silently. Return a status/result from these actions or show a toast consistently on failure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/stores/settings.ts` around lines 51 - 109, The mutating
store methods (updateScope, updateFindings, updateSingleCheck, setAllChecks,
setRecommendedMode, setLightMode) swallow the Error branch from sdk.backend
calls and thus allow silent failures; change each to return the backend result
(or rethrow the error) instead of ignoring it, and only apply local state
updates when result.kind === "Ok"; in the Error branch emit/return the error (or
call a shared notify/toast helper with the error message) so the caller/UI can
surface retries/notifications and include error details in the log/toast for
debugging.
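The pattern the finding asks for can be sketched as a mutating action that applies local state only on Ok and hands the result back to the caller. The `Result` shape, `updateScope` signature, and `backendCall` parameter below are illustrative stand-ins, not the plugin's actual store code:

```typescript
// Hypothetical Result type mirroring the ok/err convention used in the review (assumed).
type Result<T> = { kind: "Ok"; value: T } | { kind: "Error"; error: string };

// A mutating action that only touches local state after persistence succeeds,
// and returns the backend result so the UI can toast/retry on failure.
async function updateScope(
  backendCall: (enabled: boolean) => Promise<Result<undefined>>,
  state: { scopeEnabled: boolean },
  enabled: boolean,
): Promise<Result<undefined>> {
  const result = await backendCall(enabled);
  if (result.kind === "Ok") {
    state.scopeEnabled = enabled; // mutate only on confirmed persistence
  }
  return result; // Error branch is propagated instead of swallowed
}
```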
packages/backend/src/api/settings.ts-42-56 (1)

42-56: ⚠️ Potential issue | 🟠 Major

Reject unknown check IDs at the RPC boundary.

settings and checkId are accepted as arbitrary strings here and always return success. An out-of-date or malformed client can send unknown keys, get ok(undefined), and believe the change persisted when it did not. Validate against the supported check IDs and return an error for invalid input instead of unconditionally acknowledging it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/api/settings.ts` around lines 42 - 56, Both RPC handlers
currently accept arbitrary strings and always return ok(undefined); change
apiSetCheckSettings and apiUpdateSingleCheck to validate inputs against the
canonical set of supported check IDs (use whatever list/function you have like
SUPPORTED_CHECK_IDS or getSupportedChecks()) before calling
setCheckSettings/updateSingleCheck, and return an error Result when unknown IDs
are supplied. For apiSetCheckSettings, iterate keys of the settings map and if
any key is not in the supported list return err(new Error("unknown check id:
<id>")) (or a suitable typed error) instead of calling setCheckSettings; for
apiUpdateSingleCheck, check checkId is supported before calling
updateSingleCheck and return err for invalid ids. Ensure you still call the
underlying setters only when validation passes and return ok(undefined) on
success.
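The validation described above amounts to a small guard over a canonical ID set before calling the setter. `SUPPORTED_CHECK_IDS`, the `ok`/`err` helpers, and the check names here are illustrative assumptions, not the plugin's real symbols:

```typescript
// Hypothetical Result helpers matching the ok/err convention from the review (assumed).
type Result<T> = { kind: "Ok"; value: T } | { kind: "Error"; error: string };
const ok = <T>(value: T): Result<T> => ({ kind: "Ok", value });
const err = (error: string): Result<never> => ({ kind: "Error", error });

// Canonical check IDs; in the real plugin this would come from the check registry.
const SUPPORTED_CHECK_IDS = new Set(["jsonp-bypass-risk", "missing-base-uri", "unsafe-inline"]);

// Reject settings maps containing unknown keys instead of unconditionally acking them.
function apiSetCheckSettings(settings: Record<string, boolean>): Result<undefined> {
  for (const checkId of Object.keys(settings)) {
    if (!SUPPORTED_CHECK_IDS.has(checkId)) {
      return err(`unknown check id: ${checkId}`);
    }
  }
  // ...call the underlying setCheckSettings(settings) only after validation passes...
  return ok(undefined);
}
```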
packages/backend/src/utils/domainMatching.ts-15-19 (1)

15-19: ⚠️ Potential issue | 🟠 Major

Normalize ports, queries, and fragments before matching.

stripDomainPrefix() only strips the scheme and the trailing path. Inputs like https://cdn.example.com:8443/app.js or example.com?cache=1 normalize to cdn.example.com:8443 / example.com?cache=1, so both isUserContentHost() and isVulnerableJsHost() can miss valid matches. Use URL parsing here, or strip :port, ?, and # as well.

Suggested fix
 export function stripDomainPrefix(domain: string): string {
-  return domain
-    .toLowerCase()
-    .replace(/^https?:\/\//, "")
-    .replace(/\/.*$/, "");
+  const normalized = domain.trim().toLowerCase();
+
+  try {
+    const parsed = new URL(
+      /^[a-z][a-z0-9+.-]*:\/\//.test(normalized)
+        ? normalized
+        : `https://${normalized}`,
+    );
+    return parsed.hostname;
+  } catch {
+    return normalized
+      .replace(/^https?:\/\//, "")
+      .replace(/[/?#].*$/, "")
+      .replace(/:\d+$/, "");
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/utils/domainMatching.ts` around lines 15 - 19,
stripDomainPrefix currently only removes scheme and trailing path, leaving
ports, query strings, and fragments which causes isUserContentHost and
isVulnerableJsHost to miss matches; update stripDomainPrefix to fully normalize
by parsing the input with the URL constructor (or equivalent) and returning only
hostname (lowercased) without port, query, or fragment, falling back to manual
stripping for bare host inputs; reference stripDomainPrefix and ensure callers
(isUserContentHost/isVulnerableJsHost) continue to receive the normalized
hostname.
packages/frontend/src/stores/settings.ts-25-30 (1)

25-30: ⚠️ Potential issue | 🟠 Major

Don't swallow initialization failures.

This catch() resets initPromise but also converts a rejected loadAll() into a resolved initialize(). Callers then keep rendering defaults as if settings were loaded successfully. Reset the cache and rethrow so initialization failure is observable.

Suggested fix
   async function initialize() {
     if (initPromise !== undefined) return initPromise;
-    initPromise = loadAll().catch(() => {
+    initPromise = loadAll().catch((error) => {
       initPromise = undefined;
+      throw error;
     });
-    await initPromise;
+    return initPromise;
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/stores/settings.ts` around lines 25 - 30, The
initialize function currently swallows loadAll() failures by resetting
initPromise in the catch and not rethrowing; change the catch handler on
initPromise = loadAll().catch(...) to reset initPromise = undefined and then
rethrow the caught error so callers see the failure. Update the promise handling
in initialize (referencing initialize, initPromise, and loadAll) to ensure the
cache is cleared on failure but the error is propagated.
packages/backend/src/engine/parser.ts-86-102 (1)

86-102: ⚠️ Potential issue | 🟠 Major

Ignore repeated directives instead of overwriting them.

Duplicate CSP directives are invalid, and the first occurrence wins. Unconditionally calling Map.set() here makes the last duplicate take effect, which changes the parsed policy and can produce incorrect findings for real headers.

Suggested fix
   for (const raw of rawDirectives) {
     const parts = raw.split(/\s+/).filter((p) => p !== "");
     if (parts.length === 0) continue;
 
     const name = parts[0]?.toLowerCase();
     if (name === undefined || name.trim() === "") continue;
+    if (policy.directives.has(name)) continue;
 
     const values = parts.slice(1);
     const directive: PolicyDirective = {
       name,
       values,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/engine/parser.ts` around lines 86 - 102, The loop that
parses rawDirectives currently overwrites duplicate directives by
unconditionally calling policy.directives.set(name, directive); change it to
ignore duplicates so the first occurrence wins: before setting, check
policy.directives.has(name) and skip creating/setting a new PolicyDirective for
that name if present; keep using the same variables (rawDirectives, parts, name,
values, classifySource, PolicyDirective, and policy.directives) so existing
logic for isImplicit and sources remains unchanged.
packages/backend/src/services/exportService.ts-25-55 (1)

25-55: ⚠️ Potential issue | 🟠 Major

Escape every CSV cell before joining.

Only Description is quoted right now. Value, Host, Path, Request ID, etc. can also contain commas, quotes, newlines, or formula prefixes from analyzed data, which produces malformed CSV and leaves the export open to CSV injection in spreadsheet apps. Centralize escaping/neutralization for every column before join(",").

Suggested fix
+function escapeCsvCell(value: unknown): string {
+  const text = String(value ?? "");
+  const neutralized = /^[=+\-@]/.test(text) ? `'${text}` : text;
+  return `"${neutralized.replace(/"/g, '""')}"`;
+}
+
 export function exportAsCsv(analyses: AnalysisResult[]): string {
   const headers = [
     "ID",
     "Check",
@@
   const rows = analyses.flatMap((a) => {
     const { host, path } = extractHostAndPath(a);
     return a.findings.map((f) => [
       f.id,
       f.checkId,
       f.severity,
       f.directive,
       f.value,
-      `"${f.description.replace(/"/g, '""')}"`,
+      f.description,
       host,
       path,
       a.analyzedAt.toISOString(),
       f.requestId,
     ]);
   });
 
-  return [headers, ...rows].map((row) => row.join(",")).join("\n");
+  return [headers, ...rows]
+    .map((row) => row.map(escapeCsvCell).join(","))
+    .join("\n");
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/services/exportService.ts` around lines 25 - 55, The CSV
export currently only quotes the Description field; update exportAsCsv to
centrally escape/neutralize every cell by adding a helper (e.g., escapeCsvCell)
and applying it to all columns (headers and each value returned from
a.findings). escapeCsvCell should: convert non-strings to string, double any
existing " characters, wrap the cell in quotes if it contains commas, quotes, or
newlines, and neutralize CSV-injection by prefixing cells that start with =, +,
-, or @ (or by prepending a safe apostrophe) before joining rows with ",";
locate exportAsCsv and replace the current inline quoting with calls to this
helper (use extractHostAndPath as needed to get host/path).
packages/backend/src/services/analysisService.ts-64-83 (1)

64-83: ⚠️ Potential issue | 🟠 Major

Don't let findings export fail processResponse().

sdk.findings.create() is an optional side effect, but any rejection here aborts the whole call after the result was already cached and analysisUpdated was already emitted. This should be isolated with try/catch so transient SDK/storage failures do not break interception.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/services/analysisService.ts` around lines 64 - 83, The
call to sdk.findings.create inside processResponse() can throw and should not
abort the main flow; wrap the sdk.findings.create({...}) invocation (the block
that builds title/description and calls sdk.findings.create) in a try/catch so
any rejection is caught and handled (e.g., log the error via the existing logger
or sdk logger) without rethrowing; keep the existing checks (findingsEnabled,
allFindings.length, request/requestData) and ensure the catch does not alter
emitted analysisUpdated or cached results.
packages/backend/src/services/analysisService.ts-14-17 (1)

14-17: ⚠️ Potential issue | 🟠 Major

Bound the in-memory cache before this ships.

analysisCache retains every analyzed request until clearCache() is called. On long-running proxy sessions this turns CSP analysis into an unbounded memory sink. Add an eviction policy here (LRU/TTL/max entries) or store only the smaller summary data you need for the UI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/services/analysisService.ts` around lines 14 - 17,
analysisCache is unbounded and can leak memory; implement a bounded eviction
policy for it (e.g., LRU with max entries or TTL) or store only summaries
instead of full AnalysisResult. Replace the Map used by analysisCache with a
capped cache implementation (or wrap Map operations in helper functions) that
evicts oldest/least-recent entries when exceeding a configured maxEntries or
removes entries after a TTL; update any code referencing analysisCache and
clearCache() to use the new cache API (get/set/delete/clear) and ensure
functions that populate it (where AnalysisResult objects are created) only store
the minimal summary if you choose the summary approach. Configure the cap value
as a constant or env/config option and add tests exercising eviction behavior.
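A minimal LRU variant of the eviction policy suggested above can be built on `Map`'s insertion-order iteration. The class name and `maxEntries` cap are illustrative; the real cache would hold `AnalysisResult` values:

```typescript
// Minimal LRU cache over Map: delete+set refreshes recency, and the first key
// in iteration order is always the least recently used entry.
class LruCache<K, V> {
  private readonly map = new Map<K, V>();
  constructor(private readonly maxEntries: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      this.map.delete(key); // re-insert to mark as most recently used
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    this.map.delete(key); // refresh position if the key already exists
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      const oldest = this.map.keys().next().value as K; // LRU entry
      this.map.delete(oldest);
    }
  }

  clear(): void {
    this.map.clear();
  }

  get size(): number {
    return this.map.size;
  }
}
```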
packages/backend/src/index.ts-59-71 (1)

59-71: ⚠️ Potential issue | 🟠 Major

Guard the intercept hook from analysis failures.

This callback runs on live traffic. Any thrown parse/analyze/export error will reject onInterceptResponse, so a plugin-side failure can leak into response handling. Catch and log inside the hook to keep interception resilient.

Proposed fix
   sdk.events.onInterceptResponse(async (_sdk, request, response) => {
-    const headers = response.getHeaders();
-    const cspHeaders = extractCspHeaders(headers);
-    if (cspHeaders.length === 0) return;
-
-    if (getScopeEnabled() && !_sdk.requests.inScope(request)) return;
-
-    await processResponse(
-      { id: request.getId(), host: request.getHost(), path: request.getPath() },
-      { headers },
-      request,
-    );
+    try {
+      const headers = response.getHeaders();
+      const cspHeaders = extractCspHeaders(headers);
+      if (cspHeaders.length === 0) return;
+
+      if (getScopeEnabled() && !_sdk.requests.inScope(request)) return;
+
+      await processResponse(
+        { id: request.getId(), host: request.getHost(), path: request.getPath() },
+        { headers },
+        request,
+      );
+    } catch (error) {
+      _sdk.console.log(`CSP Auditor analysis failed: ${String(error)}`);
+    }
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/index.ts` around lines 59 - 71, The onInterceptResponse
hook (sdk.events.onInterceptResponse) is not protected against exceptions from
extractCspHeaders, getScopeEnabled/_sdk.requests.inScope checks, or the async
processResponse call, so any thrown error can bubble up and break interception;
wrap the entire callback body in a try/catch that logs the error (including
context: request id/host/path) and prevents rethrowing, ensuring failures in
extractCspHeaders or processResponse do not reject the hook; reference the
existing symbols sdk.events.onInterceptResponse, extractCspHeaders,
getScopeEnabled, _sdk.requests.inScope, and processResponse when locating where
to add the try/catch and logging.
🟡 Minor comments (12)
packages/frontend/src/components/Dashboard/GettingStarted/Container.vue-7-7 (1)

7-7: ⚠️ Potential issue | 🟡 Minor

Mark decorative icon as hidden for screen readers.

Line 7 should include aria-hidden="true" (or an accessible label if meaningful) to avoid noisy/non-informative announcements.

Minimal fix
-    <i class="fas fa-shield-alt text-surface-400 text-5xl" />
+    <i class="fas fa-shield-alt text-surface-400 text-5xl" aria-hidden="true" />
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/components/Dashboard/GettingStarted/Container.vue` at
line 7, The decorative <i> icon in the GettingStarted Container component is
announced by screen readers; update the element (the <i class="fas fa-shield-alt
text-surface-400 text-5xl"> in Container.vue) to be ignored by assistive tech by
adding aria-hidden="true" (or replace with an accessible label if the icon
conveys meaningful information).
README.md-6-6 (1)

6-6: ⚠️ Potential issue | 🟡 Minor

Use official brand spelling in the link label.

Line 6 should use “GitHub” instead of “Github” for documentation consistency.

✏️ Suggested doc fix
-  <a href="https://github.com/caido-community" target="_blank">Github</a>
+  <a href="https://github.com/caido-community" target="_blank">GitHub</a>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` at line 6, Update the link label text in the README anchor so it
uses the official brand spelling "GitHub" instead of "Github" (i.e., change the
inner text of the <a href="https://github.com/caido-community" target="_blank">
element from "Github" to "GitHub").
packages/frontend/src/utils/severity.ts-3-14 (1)

3-14: ⚠️ Potential issue | 🟡 Minor

Add an explicit fallback return to avoid undefined class output.

If level is ever out-of-contract at runtime, this currently returns undefined and silently drops badge styling. Add a default fallback.

Suggested hardening
 export function getSeverityBadgeStyle(level: SeverityLevel): string {
   switch (level) {
     case "high":
       return "bg-red-500/20 text-red-400 border-red-500/30";
     case "medium":
       return "bg-orange-500/20 text-orange-400 border-orange-500/30";
     case "low":
       return "bg-yellow-500/20 text-yellow-400 border-yellow-500/30";
     case "info":
       return "bg-surface-500/20 text-surface-400 border-surface-500/30";
+    default:
+      return "bg-surface-500/20 text-surface-400 border-surface-500/30";
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/utils/severity.ts` around lines 3 - 14, The
getSeverityBadgeStyle function can return undefined for out-of-contract values;
add a safe fallback string (e.g. neutral gray or transparent badge classes) so
styling never drops. Implement this by adding a default: branch in the switch or
a final return after the switch in getSeverityBadgeStyle to return the chosen
fallback class; keep existing cases unchanged and ensure the fallback handles
any unexpected SeverityLevel values.
knip.ts-1-1 (1)

1-1: ⚠️ Potential issue | 🟡 Minor

Use the public KnipConfig type instead of internal import path.

Importing from knip/dist/types/config.js relies on an internal distribution path. Use the official public export instead:

import type { KnipConfig } from "knip";

The KnipConfig type is the recommended way to type dynamic Knip configuration files, as documented in the official Knip dynamic configuration guide.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@knip.ts` at line 1, The import uses an internal path and the type
RawConfigurationOrFn from "knip/dist/types/config.js"; replace this with the
public export KnipConfig by changing the import to import type { KnipConfig }
from "knip" and update any uses of RawConfigurationOrFn to KnipConfig (e.g., in
knip.ts where RawConfigurationOrFn is referenced) so the module relies on the
official public API.
packages/backend/README.md-74-104 (1)

74-104: ⚠️ Potential issue | 🟡 Minor

Add missing await in async usage examples.

A few examples treat backend calls synchronously and will hand readers a Promise instead of Result.

Example fix
-const result = sdk.backend.getAllAnalyses();
+const result = await sdk.backend.getAllAnalyses();

-const result = sdk.backend.getSummary();
+const result = await sdk.backend.getSummary();

-sdk.backend.clearCache();
+await sdk.backend.clearCache();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/README.md` around lines 74 - 104, The examples call async
backend methods without awaiting them, returning Promises instead of Result
values; update the example snippets to await sdk.backend.getAllAnalyses(), await
sdk.backend.getSummary(), await sdk.backend.exportFindings("json"/"csv") and
await sdk.backend.clearCache() (and mark the example functions or surrounding
scope as async if needed) so the code operates on resolved Result objects (e.g.,
check result.kind on the awaited value).
packages/frontend/src/components/Dashboard/AnalysesTable/Container.vue-47-47 (1)

47-47: ⚠️ Potential issue | 🟡 Minor

Remove invalid expandable-rows prop from DataTable.

expandable-rows is not a valid PrimeVue DataTable prop and will be ignored. Row expansion is already correctly configured via v-model:expanded-rows and <Column :expander="true">, making this prop redundant.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/components/Dashboard/AnalysesTable/Container.vue` at
line 47, Remove the invalid expandable-rows attribute from the DataTable
declaration in Container.vue: the DataTable component should not include the
expandable-rows prop since row expansion is already handled by
v-model:expanded-rows and the <Column :expander="true"> setup; locate the
DataTable element in Container.vue, delete the standalone expandable-rows token,
and ensure the existing v-model:expanded-rows and Column expander configuration
remain intact.
packages/frontend/src/stores/analyses/useAnalysesState.ts-23-29 (1)

23-29: ⚠️ Potential issue | 🟡 Minor

Handle Clear in Loading to avoid sticky loading state.

Clear currently has no effect while loading, so reset/cancel flows can remain stuck in Loading.

💡 Suggested fix
     case "Loading":
+      if (message.type === "Clear") return { type: "Idle" };
       if (message.type === "Error")
         return { type: "Error", error: message.error };
       if (message.type === "Success")
         return { type: "Success", analyses: message.analyses };
       return current;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/stores/analyses/useAnalysesState.ts` around lines 23 -
29, In the switch branch handling case "Loading" inside the analyses state
reducer, handle message.type === "Clear" so a clear/cancel can exit the Loading
state instead of returning current; update the branch (where it currently checks
message.type === "Error" and "Success" and then returns current) to check for
"Clear" first (or alongside others) and return the empty/initial analyses state
(e.g., { type: "Empty" } or the reducer's defined initial state) so loading is
not sticky.
packages/frontend/src/utils/formatting.ts-3-8 (1)

3-8: ⚠️ Potential issue | 🟡 Minor

formatDate fallback does not handle invalid dates.

new Date(invalid).toLocaleString() returns "Invalid Date" instead of throwing, so the catch block is not reached.

Proposed fix
 export function formatDate(date: Date | string): string {
-  try {
-    return new Date(date).toLocaleString();
-  } catch {
-    return String(date);
-  }
+  const parsed = new Date(date);
+  if (Number.isNaN(parsed.getTime())) {
+    return String(date);
+  }
+  return parsed.toLocaleString();
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/frontend/src/utils/formatting.ts` around lines 3 - 8, The fallback
in formatDate doesn't run because new Date(invalid) produces an "Invalid Date"
object rather than throwing; update formatDate to first construct const d = new
Date(date) and check validity (e.g., isNaN(d.getTime()) or Number.isNaN(+d)) and
if invalid return String(date), otherwise return d.toLocaleString(); keep the
function name formatDate and replace the current try/catch with this explicit
validity check.
packages/backend/src/engine/checks/missingDirectives.ts-31-35 (1)

31-35: ⚠️ Potential issue | 🟡 Minor

Wildcard detection for base-uri is too narrow.

Checking only includes("*") as an exact token misses wildcard host-source values (e.g., https://*.example.com).

🔧 Proposed fix
     if (
       baseUri === undefined ||
-      baseUri.values.includes("*") ||
+      baseUri.values.some((value) => value === "*" || value.includes("*")) ||
       baseUri.values.includes("'unsafe-inline'")
     ) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/engine/checks/missingDirectives.ts` around lines 31 -
35, The wildcard detection in the base-uri check is too narrow: replace the
current token check that only tests baseUri.values.includes("*") with a check
that inspects each value in baseUri.values for any '*' occurrence (e.g.,
baseUri.values.some(v => v.includes('*') || v === '*')) so host-source patterns
like "https://*.example.com" are caught; update the conditional around baseUri
(the block using baseUri and baseUri.values) to use this .some(...) predicate
instead of the single includes("*") call.
packages/backend/src/engine/checks/bypass.ts-14-28 (1)

14-28: ⚠️ Potential issue | 🟡 Minor

Avoid substring host matching for JSONP detection.

value.includes(host) can create false positives. Match against parsed hostname boundaries instead.

🔧 Proposed fix
   for (const value of scriptSrc.values) {
     if (isCheckEnabled("jsonp-bypass-risk", enabledChecks)) {
       for (const host of JSONP_CAPABLE_HOSTS) {
-        if (value.includes(host)) {
+        if (sourceMatchesHost(value, host)) {
           findings.push(
             emitFinding(
               "jsonp-bypass-risk",
               "script-src",
               value,
               policy.requestId,
               `Host ${host} supports JSONP callbacks that can bypass CSP`,
             ),
           );
         }
       }
     }
function sourceMatchesHost(source: string, host: string): boolean {
  const token = source.replace(/^'+|'+$/g, "");
  try {
    const normalized = token.includes("://") ? token : `https://${token}`;
    const hostname = new URL(normalized).hostname;
    return hostname === host || hostname.endsWith(`.${host}`);
  } catch {
    return false;
  }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/engine/checks/bypass.ts` around lines 14 - 28, The
current JSONP detection uses substring matching (value.includes(host)) which
yields false positives; add a helper like sourceMatchesHost(source, host) that
strips surrounding quotes, prepends a default scheme when missing, parses via
new URL(...).hostname, and returns true when hostname === host or
hostname.endsWith(`.${host}`) (catch and return false on parse errors); then
replace the includes check inside the loop over
scriptSrc.values/JSONP_CAPABLE_HOSTS with sourceMatchesHost(value, host) so
emitFinding("jsonp-bypass-risk", ...) only fires on proper hostname matches;
define sourceMatchesHost in this module and reuse it where
isCheckEnabled("jsonp-bypass-risk", enabledChecks) and emitFinding are used.
packages/backend/src/services/analysisService.ts-98-121 (1)

98-121: ⚠️ Potential issue | 🟡 Minor

lastAnalyzedAt is reading Map order, not the newest analysis.

Array.from(analysisCache.values()) preserves insertion order, and updating an existing key does not move it to the end. So the timestamp returned here can be stale instead of the most recent analysis time.

Proposed fix
 export function computeSummary(): AnalysisSummary {
   const analyses = Array.from(analysisCache.values());
   const allFindings = analyses.flatMap((a) => a.findings);
+  const lastAnalyzedAt = analyses.reduce<Date | undefined>(
+    (latest, analysis) =>
+      latest === undefined || analysis.analyzedAt > latest
+        ? analysis.analyzedAt
+        : latest,
+    undefined,
+  );
 
   const severityCounts: Record<SeverityLevel, number> = {
@@
   return {
     totalAnalyses: analyses.length,
     totalFindings: allFindings.length,
     severityCounts,
     checkIdCounts,
-    lastAnalyzedAt: analyses.length > 0 ? analyses[0]?.analyzedAt : undefined,
+    lastAnalyzedAt,
   };
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/services/analysisService.ts` around lines 98 - 121,
computeSummary currently uses Array.from(analysisCache.values()) and assumes the
first element is the newest, but Map insertion order is not updated on value
replacements; fix computeSummary by deriving lastAnalyzedAt from the actual
timestamps (e.g., take the maximum analyzedAt across analyses or sort analyses
by analyzedAt descending) instead of relying on Map order—update the return
value so lastAnalyzedAt = analyses.length > 0 ? max(analyses.map(a =>
a.analyzedAt)) : undefined, referencing computeSummary, analysisCache,
AnalysisSummary, and the analyzedAt field.
packages/backend/src/engine/policyBuilder.ts-55-79 (1)

55-79: ⚠️ Potential issue | 🟡 Minor

Normalize headerName before deriving policy flags.

These checks are case-sensitive, but HTTP header names are not. Mixed-case inputs like Content-Security-Policy-Report-Only or X-WebKit-CSP will currently come back with the wrong isReportOnly / isDeprecated values.

Proposed fix
 export function buildPolicy(
   directives: Record<string, string[]>,
   options?: { headerName?: string; requestId?: string; url?: string },
 ): ParsedPolicy {
   const headerName = options?.headerName ?? "content-security-policy";
+  const normalizedHeaderName = headerName.toLowerCase();
   const requestId = options?.requestId ?? "test-request-1";
   const directiveMap = new Map<string, PolicyDirective>();
@@
     headerValue: Object.entries(directives)
       .map(([k, v]) => `${k} ${v.join(" ")}`)
       .join("; "),
     directives: directiveMap,
-    isReportOnly: headerName.includes("report-only"),
-    isDeprecated: ["x-content-security-policy", "x-webkit-csp"].includes(
-      headerName,
+    isReportOnly: normalizedHeaderName.includes("report-only"),
+    isDeprecated: ["x-content-security-policy", "x-webkit-csp"].includes(
+      normalizedHeaderName,
     ),
     parsedAt: new Date(),
     url: options?.url,
   };
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/backend/src/engine/policyBuilder.ts` around lines 55 - 79, Normalize
headerName to lowercase before computing policy flags: create a lowercase
variant (e.g., headerNameLower = headerName.toLowerCase()) after headerName is
assigned, then use headerNameLower for isReportOnly and isDeprecated checks
(replace headerName.includes(...) and the array includes(...) check with
headerNameLower). Keep the original headerName value for storage if needed, but
base the report-only and deprecated booleans on the normalized headerName to
handle mixed-case inputs; update references in this return block where
isReportOnly and isDeprecated are set.
