
feat: connect cwv to ai provider #118

Open
olekszczepanowski wants to merge 20 commits into main from cvw-112-connect-cwv-to-ai-provider

Conversation

@olekszczepanowski
Collaborator

@olekszczepanowski olekszczepanowski commented Feb 27, 2026

Summary by CodeRabbit

  • New Features

    • Anomaly detection for Core Web Vitals with automatic alerts and state tracking
    • AI-powered analysis and on-demand chatbot for deep-dive investigations
    • Slack and Teams webhook notifications for anomalies
    • New anomaly worker service for scheduled monitoring and dispatching
  • Developer / DevOps

    • CI test step added for anomaly detection; local/dev Docker and setup scripts updated to enable anomaly profile and analyst credentials
  • Documentation

    • Architecture and AI agent guide updated with anomaly workflow details

@vercel

vercel bot commented Feb 27, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Actions Updated (UTC)
next-cwv-monitor-client-app Ready Preview, Comment Mar 4, 2026 8:59am
next-cwv-monitor-monitor-app Ready Preview, Comment Mar 4, 2026 8:59am


@coderabbitai
Contributor

coderabbitai bot commented Feb 27, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8aa2fbe and b32dccb.

📒 Files selected for processing (1)
  • apps/monitor-app/src/app/api/chat/[projectId]/route.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/monitor-app/src/app/api/chat/[projectId]/route.ts

📝 Walkthrough

Walkthrough

This PR adds an anomaly detection feature set: ClickHouse schema and migrations for hourly stats and anomalies, an AI-driven agent and SQL tooling, an anomaly worker service with notification dispatch (Slack/Teams), end-to-end tests and test infra updates, env/config changes across dev/test/prod, Docker/compose updates, and CI/test workflow additions.

Changes

Cohort / File(s) Summary
ClickHouse schema & migrations
apps/monitor-app/clickhouse/migrations/007_anomaly_detection.sql, apps/monitor-app/clickhouse/catalog.yml
Adds hourly aggregates table, materialized view, anomaly view, processed_anomalies table, AI analyst user/role/profile and grants.
Agent & AI tooling
apps/monitor-app/src/app/server/lib/agent/agent.ts, apps/monitor-app/src/app/server/lib/agent/utils/system-prompt.ts, apps/monitor-app/src/app/server/lib/agent/utils/schema-catolog.ts, apps/monitor-app/src/app/server/lib/agent/tools/execute-sql.ts
Implements multi-provider agent wiring, system prompt builder, schema catalog loader, and a read-only executeSql tool with CSV preview/truncation logic.
AI bridge & analysis service
apps/monitor-app/src/app/server/domain/ai-bridge/service.ts, .../__tests__/ai-bridge-service.integration.test.ts
Adds AiBridgeService with executeSql, schema catalog access, anomaly context assembly, investigation links, and integration tests.
Notifications (dispatcher & service)
apps/monitor-app/src/app/server/domain/notifications/dispatcher.ts, .../service.ts, .../types.ts, .../__tests__/notifications-service.integration.test.ts
Adds typed notification payloads, dispatcher for Slack/Teams, batched notification service, and integration tests.
Anomaly worker app
apps/anomaly-worker/...
New service: env example, package.json, tsconfig, src/index.ts scheduling/polling, Dockerfile and supporting files for scheduled detection and notification triggering.
ClickHouse client & repositories
apps/monitor-app/src/app/server/lib/clickhouse/client.ts, .../repositories/anomalies-repository.ts, .../repositories/processed-anomalies-repository.ts, .../repositories/projects-repository.ts, .../schema.ts, .../__tests__/anomalies-repository.integration.test.ts
Adds aiSql tag, direct client accessors, test-aware connection logic, new anomaly-related types and repositories for querying/inserting anomalies.
Environment & runtime config
apps/monitor-app/.env.*, apps/monitor-app/src/env.ts, apps/anomaly-worker/src/env.ts, apps/anomaly-worker/.env.example
Adds AI analyst ClickHouse user/password, AI_API_KEY/AI_PROVIDER/AI_MODEL, Slack/Teams webhook URLs; extends runtime schema and test env support.
Seeders & demo data
apps/monitor-app/scripts/seed-demo-data.mjs
Adds export seedAnomalyTestPattern to create deterministic anomaly-like CWV events for tests/demos.
Test infra & integration tests
apps/monitor-app/src/test/clickhouse-test-utils.ts, src/test/global-setup.ts, vitest.integration.config.ts, vitest.integration.setup.ts, src/test/anomaly-detection.integration.test.ts, src/test/performance-guardrails.test.ts
Global test setup, shared-container test path, anomaly optimization helper, new integration tests for anomaly detection and AI bridge.
Instrumentation & bootstrap
apps/monitor-app/src/instrumentation.ts, apps/monitor-app/src/app/server/lib/clickhouse/bootstrap.ts
Guards dynamic imports for server runtime, adds syncDatabaseRoles to set analyst DB credentials when provided.
Docker & Compose
docker/docker-compose.yml, docker/docker-compose.dev.yml, docker/docker-compose.build.yml, docker/anomaly-worker.Dockerfile, docker/monitor-app.prod.Dockerfile, docker/monitor-app.dev.Dockerfile, setup.sh
Adds cwv-worker service, setup service, dev/workspace changes, anomaly-worker Dockerfile, propagates AI analyst env vars and webhook examples, and setup script prompts.
CI workflow
.github/workflows/check.yml
Adds CI step to run anomaly detection tests when monitor changes are detected.
Docs & guides
ARCHITECTURE.md, apps/monitor-app/AI_AGENT_GUIDE.md
Documents anomaly worker architecture, data flows, and AI agent integration guidance.
Dependencies & package changes
apps/monitor-app/package.json
Adds AI SDK deps (@ai-sdk/*), runtime ai, @slack/types, adaptivecards; test script tweak.
API route
apps/monitor-app/src/app/api/chat/[projectId]/route.ts
Adds POST chat route that authenticates, checks project permission, creates agent, and streams agent UI responses.

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant Monitor as Monitor App
    participant ClickHouse as ClickHouse DB
    participant Poller as Anomaly Worker<br/>(Poller)
    participant Notifier as Anomaly Worker<br/>(Notifier)
    participant Webhook as Slack/Teams<br/>Webhooks

    rect rgba(100, 150, 255, 0.5)
        Note over Poller,ClickHouse: Scheduled Anomaly Detection (Hourly)
        Poller->>ClickHouse: Query v_cwv_anomalies<br/>(z_score > 3)
        ClickHouse->>Poller: Return new anomalies
        Poller->>ClickHouse: Insert into processed_anomalies<br/>(mark as notified)
    end

    rect rgba(150, 200, 150, 0.5)
        Note over Poller,Notifier: Notification Processing
        Poller->>Notifier: Trigger notifyNewAnomalies()
        Notifier->>ClickHouse: Fetch unprocessed anomalies
        ClickHouse->>Notifier: Return anomaly records
    end

    rect rgba(255, 180, 100, 0.5)
        Note over Notifier,Webhook: Notification Dispatch
        loop For each anomaly
            Notifier->>Notifier: Build payload<br/>(Slack blocks/Teams card)
            Notifier->>Webhook: POST notification
            Webhook-->>Notifier: Acknowledge delivery
        end
    end

    rect rgba(200, 150, 255, 0.5)
        Note over Monitor,ClickHouse: AI-Driven Investigation (On-Demand)
        User->>Monitor: Request anomaly analysis
        Monitor->>ClickHouse: Execute agent SQL queries
        ClickHouse->>Monitor: Return analysis data
        Monitor->>User: Display AI insights
    end
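The detection step in the first diagram is a plain z-score test: the anomaly view compares each hourly value against its baseline mean and standard deviation and flags hours where the score exceeds 3. A minimal sketch of that rule (the real check lives in the `v_cwv_anomalies` SQL view over `cwv_stats_hourly`, not in TypeScript):

```typescript
// Hedged sketch of the z-score threshold the diagram describes
// (z_score > 3). Parameter names are illustrative.
function isAnomaly(
  value: number,
  baselineMean: number,
  baselineStdDev: number,
  threshold = 3,
): boolean {
  if (baselineStdDev === 0) return false; // no variance: nothing to flag
  const z = Math.abs((value - baselineMean) / baselineStdDev);
  return z > threshold;
}
```

Note the zero-variance guard: a brand-new project with a flat baseline should not alert on its first data point.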
sequenceDiagram
    participant Client as Client Browser
    participant API as Chat Route<br/>[projectId]
    participant Agent as AI Agent
    participant Tool as ExecuteSql Tool
    participant ClickHouse as ClickHouse DB

    Client->>API: POST /api/chat/[projectId]<br/>{messages}
    activate API
    API->>API: Authenticate user
    API->>API: Check project permission
    API->>Agent: createAgent(projectId)
    activate Agent
    Agent->>Agent: Load schema catalog
    Agent->>Agent: Build system prompt
    deactivate Agent
    
    rect rgba(200, 150, 255, 0.5)
        Note over Agent,ClickHouse: Agent Loop (max 15 steps)
        loop Until completion
            Agent->>Agent: Analyze messages<br/>Decide action
            opt Needs data
                Agent->>Tool: executeSql(query)
                activate Tool
                Tool->>ClickHouse: Execute read query
                ClickHouse->>Tool: Return rows
                Tool->>Tool: Format CSV response
                Tool->>Agent: Return data
                deactivate Tool
            end
            Agent->>Agent: Generate response
        end
    end
    
    API->>Client: Stream UI response
    deactivate API

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • Levosilimo
  • kasin-it
  • iipanda

Poem

🐰
I seeded spikes beneath the moon,
I counted Zs and hummed a tune,
I sent a note to Slack at noon,
Anomaly! — then hop, commune,
Now metrics dance and servers swoon.

🚥 Pre-merge checks | ✅ 2
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'feat: connect cwv to ai provider' accurately reflects the main objective of the changeset, which introduces AI provider integration (Anthropic, Google, OpenAI) throughout the application for anomaly detection and analysis.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (5)
apps/monitor-app/.env.example (1)

29-31: Add a trailing newline.

The file is missing a trailing newline, which can cause issues with some tools and is a common convention for text files.

Proposed fix
 AI_API_KEY=
 AI_PROVIDER=
 AI_MODEL=
+
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/.env.example` around lines 29 - 31, Add a trailing newline
at the end of the file so the final line (after AI_API_KEY=, AI_PROVIDER=,
AI_MODEL=) ends with a newline character; update the file to ensure it
terminates with a single newline to satisfy tooling and POSIX conventions.
docker/docker-compose.yml (1)

88-89: Consider moving these under the ClickHouse section.

These credentials are ClickHouse-related rather than general authentication settings. Moving them under the # ClickHouse section (after line 79) would improve clarity and keep all ClickHouse credentials grouped together.

Suggested reorganization
       # ─────────────────────────────────────────────────────────────
       # ClickHouse
       # ─────────────────────────────────────────────────────────────
       <<: *clickhouse-env
+      AI_ANALYST_CLICKHOUSE_USER: ${AI_ANALYST_CLICKHOUSE_USER:?AI_ANALYST_CLICKHOUSE_USER is required}
+      AI_ANALYST_CLICKHOUSE_PASSWORD: ${AI_ANALYST_CLICKHOUSE_PASSWORD:?AI_ANALYST_CLICKHOUSE_PASSWORD is required}

       # ─────────────────────────────────────────────────────────────
       # Authentication & Security
       # ─────────────────────────────────────────────────────────────
       BETTER_AUTH_SECRET: ${BETTER_AUTH_SECRET:?BETTER_AUTH_SECRET is required}
       MIN_PASSWORD_SCORE: ${MIN_PASSWORD_SCORE:-2}
       RATE_LIMIT_WINDOW_MS: ${RATE_LIMIT_WINDOW_MS:-60000}
       MAX_LOGIN_ATTEMPTS: ${MAX_LOGIN_ATTEMPTS:-5}
-      AI_ANALYST_CLICKHOUSE_USER: ${AI_ANALYST_CLICKHOUSE_USER:?AI_ANALYST_CLICKHOUSE_USER is required}
-      AI_ANALYST_CLICKHOUSE_PASSWORD: ${AI_ANALYST_CLICKHOUSE_PASSWORD:?AI_ANALYST_CLICKHOUSE_PASSWORD is required}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker/docker-compose.yml` around lines 88 - 89, The two environment
variables AI_ANALYST_CLICKHOUSE_USER and AI_ANALYST_CLICKHOUSE_PASSWORD are
currently placed with general auth vars but belong in the ClickHouse block; move
those two entries so they appear under the existing "# ClickHouse" section (the
block introduced around the ClickHouse configuration) to group all ClickHouse
credentials together and improve clarity while keeping variable names unchanged.
apps/monitor-app/src/test/clickhouse-test-utils.ts (1)

146-159: Consider throwing on timeout for consistency.

Unlike optimizeAggregates, this function silently proceeds with OPTIMIZE even if no data appears within the 10-second timeout. This could mask test setup issues where anomaly data fails to materialize.

Suggested fix for consistent timeout behavior
 export async function optimizeAnomalies(sqlClient: any): Promise<void> {
   const deadlineMs = Date.now() + 10_000;
   while (Date.now() < deadlineMs) {
     const rows = (await sqlClient<{ cnt: string }>`
       SELECT toString(count()) AS cnt FROM cwv_stats_hourly
     `) as Array<{ cnt?: string | number }>;
     const raw = rows[0]?.cnt;
     const count = typeof raw === "number" ? raw : Number(raw ?? 0);
     if (count > 0) break;
     await wait(50);
   }
+
+  // Verify data was populated before optimizing
+  const rows = (await sqlClient<{ cnt: string }>`
+    SELECT toString(count()) AS cnt FROM cwv_stats_hourly
+  `) as Array<{ cnt?: string | number }>;
+  const raw = rows[0]?.cnt;
+  const count = typeof raw === "number" ? raw : Number(raw ?? 0);
+  if (count === 0) {
+    throw new Error("Timed out waiting for cwv_stats_hourly to populate");
+  }
+
   await sqlClient`OPTIMIZE TABLE cwv_stats_hourly FINAL`.command();
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/test/clickhouse-test-utils.ts` around lines 146 - 159,
The optimizeAnomalies function currently waits up to 10s for rows but proceeds
to run OPTIMIZE even if count stays zero; change it to mirror optimizeAggregates
by throwing a timeout error when the deadline is reached without data. Locate
optimizeAnomalies and after the while loop (or inside it when the timeout
condition is met) raise a clear Error (e.g., "timed out waiting for anomalies to
appear") instead of silently continuing, so tests fail fast when anomaly data
never materializes.
apps/monitor-app/src/env.ts (1)

46-48: Consider adding enum validation for AI_PROVIDER.

AI_PROVIDER is used in a switch statement in agent.ts that only supports "anthropic", "google", and "openai". Adding enum validation here would catch configuration errors at startup rather than at runtime.

💡 Suggested change
-    AI_PROVIDER: z.string(),
+    AI_PROVIDER: z.enum(["anthropic", "google", "openai"]),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/env.ts` around lines 46 - 48, Replace the free-form
AI_PROVIDER z.string() schema with a strict enum that lists the supported
providers so misconfiguration fails at startup; update the env.ts schema to
validate AI_PROVIDER against the exact values used in the switch in agent.ts
("anthropic", "google", "openai") (use Zod enum or equivalent) and adjust any
exported types or usages that derive from AI_PROVIDER so they reflect the
narrowed type.
apps/monitor-app/src/app/api/chat/[projectId]/route.ts (1)

6-7: Consider validating projectId and messages input.

The projectId should be validated as a UUID format, and messages should be validated against the expected UIMessage[] schema before processing to prevent malformed input from reaching the agent.

💡 Example validation
+import { z } from "zod";
+
+const chatRequestSchema = z.object({
+  messages: z.array(z.object({
+    role: z.enum(["user", "assistant"]),
+    content: z.string(),
+  })),
+});

 export async function POST(req: NextRequest, { params }: { params: Promise<{ projectId: string }> }) {
   try {
     const { projectId } = await params;
-    const { messages }: { messages: UIMessage[] } = await req.json();
+    if (!z.string().uuid().safeParse(projectId).success) {
+      return Response.json({ ok: false, error: "Invalid project ID" }, { status: 400 });
+    }
+    const body = chatRequestSchema.safeParse(await req.json());
+    if (!body.success) {
+      return Response.json({ ok: false, error: "Invalid request body" }, { status: 400 });
+    }
+    const { messages } = body.data;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/app/api/chat/[projectId]/route.ts` around lines 6 - 7,
Validate the incoming inputs in the route handler: ensure projectId (from
params) matches a UUID pattern (e.g., regex or a UUID validation utility) and
validate that messages (from req.json()) conforms to the UIMessage[] shape
(array check, required fields and types for each UIMessage) before any agent or
downstream processing; add early 400 responses on invalid input and use the
existing symbols projectId and messages (and the UIMessage type) in
apps/monitor-app/src/app/api/chat/[projectId]/route.ts to locate where to insert
these checks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/monitor-app/clickhouse/migrations/007_anomaly_detection.sql`:
- Line 101: The GRANT on processed_anomalies gives r_ai_analyst write/mutation
rights; remove INSERT and ALTER/UPDATE from the grant so r_ai_analyst only gets
SELECT, and instead create a separate writer role (e.g., r_ai_writer) and grant
INSERT, UPDATE, ALTER on processed_anomalies to that writer role; update any
deployment or role-assignment logic to grant r_ai_writer to the principals that
truly need mutation rights rather than r_ai_analyst.
- Around line 103-104: The migration creates the user ai_analyst_user using
IDENTIFIED WITH no_password which leaves a passwordless account; change the
migration to provision ai_analyst_user with a secure credential instead (e.g.,
use IDENTIFIED BY or IDENTIFIED WITH a secure auth plugin and a generated strong
password/secret), or create the user without authentication and immediately run
an ALTER USER ai_analyst_user ... IDENTIFIED BY '<secure-secret>' step in the
same migration so the account is never left passwordless; ensure the password is
generated/stored securely (or delegated to secrets management) rather than
hardcoding a plaintext weak secret.

In `@apps/monitor-app/scripts/seed-demo-data.mjs`:
- Around line 616-627: The computed intervalMs can be zero when minutesPastHour
=== 0; update the interval calculation (variable intervalMs) to enforce a
sensible minimum (e.g., Math.max(Math.floor((minutesPastHour * 60_000) / 35),
1000)) so offset = 5000 + (i * intervalMs) never uses 0 and produced
recordedAtDate times are staggered as intended; adjust any dependent logic
around recordedAtDate/currentHourMark as needed to preserve the existing
fallback that ensures times are after currentHourMark.
- Around line 588-592: The code mutates the original Date object by calling
now.setMinutes(...) when building currentHourMark, corrupting later uses of now;
to fix, preserve the original timestamp by cloning now before adjusting
minutes—create a new Date from now (e.g., new Date(now) or using now.getTime())
and call setMinutes(0,0,0) on that clone to produce currentHourMark, leaving the
original now and minutesPastHour unchanged for later calculations.
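Both seeder fixes above are small, pure changes; a hedged sketch of each (function names are illustrative — the real code lives inline in seed-demo-data.mjs):

```typescript
// Fix 1: clamp the seeding interval so event offsets stay staggered even
// when the current time is exactly on the hour (minutesPastHour === 0),
// where the naive division would yield 0 ms.
function computeIntervalMs(minutesPastHour: number): number {
  return Math.max(Math.floor((minutesPastHour * 60_000) / 35), 1000);
}

// Fix 2: derive the hour mark from a clone so the original `now` Date is
// never mutated by setMinutes() and stays valid for later calculations.
function hourMark(now: Date): Date {
  const mark = new Date(now.getTime());
  mark.setMinutes(0, 0, 0);
  return mark;
}
```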

In `@apps/monitor-app/src/app/api/chat/[projectId]/route.ts`:
- Around line 4-14: The POST handler lacks authz/authn: before using
params/projectId or calling createAgent and createAgentUIStreamResponse,
validate the caller is authenticated and authorized for the requested projectId.
Retrieve the requester identity from the incoming NextRequest (e.g.,
session/cookie or bearer token), verify authentication, then check
permissions/ownership for the projectId (call your existing auth helper or
permission service). If unauthenticated return 401, if not allowed return 403,
and only then call createAgent(projectId) and proceed to
createAgentUIStreamResponse; add these checks at the top of the POST function so
createAgent is never invoked for unauthorized requests.
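The shape of that gate can be sketched as a small pure helper. `Session` and `canAccessProject` are placeholders for the repo's real auth primitives; the point is the ordering — 401 before 403, and `createAgent` only after both checks pass:

```typescript
// Illustrative authn/authz gate for the chat route. Returns the HTTP
// status to short-circuit with, or "ok" when the handler may proceed.
type Session = { userId: string } | null;

async function guardChatRequest(
  session: Session,
  canAccessProject: (userId: string, projectId: string) => Promise<boolean>,
  projectId: string,
): Promise<401 | 403 | "ok"> {
  if (!session) return 401; // unauthenticated caller
  const allowed = await canAccessProject(session.userId, projectId);
  if (!allowed) return 403; // authenticated, but not permitted on this project
  return "ok"; // safe to call createAgent(projectId)
}
```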

In `@apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts`:
- Around line 23-33: The execute function currently runs arbitrary raw SQL via
agentSql`${agentSql.raw(sql)}`; restrict it to only allow a single read-only
statement and to enforce server-side bounds before execution: validate the
incoming sql in execute (or a small helper) to reject multiple statements (no
unescaped semicolons) and ensure the leading token is a read-only verb (e.g.,
SELECT, SHOW, EXPLAIN, WITH) otherwise throw a descriptive error; then transform
or wrap the query to enforce a safe server-side LIMIT (e.g., wrap as SELECT *
FROM ( original ) AS _a LIMIT <MAX_ROWS>) or append/replace its LIMIT if
present, and only then call agentSql with the sanitized/wrapped string instead
of agentSql.raw(sql); ensure these checks/transformations are applied in execute
in apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts around
the agentSql`${agentSql.raw(sql)}` call.
- Around line 10-15: The rowsToCsv function currently concatenates values
without escaping; update rowsToCsv to properly escape CSV control characters by
converting null/undefined to empty string, replacing any double quote " in a
field with two double quotes "", and wrapping a field in double quotes if it
contains a comma, newline, or quote; apply this escaping when building the
header/body (use columns = Object.keys(rows[0]) and the same columns mapping) so
columns.map(...) calls an escapeField helper that performs the quote-doubling
and conditional quoting before joining.
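Hedged sketches of both execute-clickhouse.ts fixes above — the read-only guard and the CSV escaping. Helper names and the `MAX_ROWS` cap are illustrative, not the repo's real API:

```typescript
const READ_ONLY_VERBS = ["SELECT", "WITH", "SHOW", "EXPLAIN", "DESCRIBE"];
const MAX_ROWS = 500; // assumed server-side row cap

// Reject multi-statement or mutating SQL, then wrap the query so an
// outer LIMIT bounds the result regardless of any inner LIMIT.
function sanitizeReadQuery(sql: string): string {
  const trimmed = sql.trim().replace(/;\s*$/, "");
  if (trimmed.includes(";")) {
    throw new Error("Only a single SQL statement is allowed");
  }
  const verb = trimmed.split(/\s+/)[0]?.toUpperCase() ?? "";
  if (!READ_ONLY_VERBS.includes(verb)) {
    throw new Error(`Only read-only statements are allowed, got "${verb}"`);
  }
  return `SELECT * FROM (${trimmed}) AS _a LIMIT ${MAX_ROWS}`;
}

// RFC 4180-style escaping: double embedded quotes and quote any field
// containing a comma, quote, or newline; null/undefined become "".
function escapeCsvField(value: unknown): string {
  if (value === null || value === undefined) return "";
  const s = String(value);
  const escaped = s.replace(/"/g, '""');
  return /[",\n\r]/.test(s) ? `"${escaped}"` : escaped;
}

function rowsToCsv(rows: Array<Record<string, unknown>>): string {
  if (rows.length === 0) return "";
  const columns = Object.keys(rows[0]);
  const header = columns.map(escapeCsvField).join(",");
  const body = rows.map((row) =>
    columns.map((c) => escapeCsvField(row[c])).join(","),
  );
  return [header, ...body].join("\n");
}
```

Note the naive semicolon check will also reject legitimate string literals containing `;`; a real implementation would tokenize, or lean on ClickHouse's own readonly user profile (which this PR's migration already sets up) as a second enforcement layer.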

In `@apps/monitor-app/src/app/server/lib/agent/utils/system-prompt.ts`:
- Around line 1-3: The chat route handler is passing an unvalidated projectId
into createAgent which is then interpolated into buildSystemPrompt and
SQL—validate projectId upstream using the existing projectIdSchema before
calling createAgent; in the route handler (where createAgent(...) is invoked)
import projectIdSchema and call projectIdSchema.parse(projectId) or
projectIdSchema.safeParse and return a 400/throw if invalid, ensuring only a
validated projectId is passed to buildSystemPrompt and any SQL construction.

---

Nitpick comments:
In `@apps/monitor-app/.env.example`:
- Around line 29-31: Add a trailing newline at the end of the file so the final
line (after AI_API_KEY=, AI_PROVIDER=, AI_MODEL=) ends with a newline character;
update the file to ensure it terminates with a single newline to satisfy tooling
and POSIX conventions.

In `@apps/monitor-app/src/app/api/chat/[projectId]/route.ts`:
- Around line 6-7: Validate the incoming inputs in the route handler: ensure
projectId (from params) matches a UUID pattern (e.g., regex or a UUID validation
utility) and validate that messages (from req.json()) conforms to the
UIMessage[] shape (array check, required fields and types for each UIMessage)
before any agent or downstream processing; add early 400 responses on invalid
input and use the existing symbols projectId and messages (and the UIMessage
type) in apps/monitor-app/src/app/api/chat/[projectId]/route.ts to locate where
to insert these checks.

In `@apps/monitor-app/src/env.ts`:
- Around line 46-48: Replace the free-form AI_PROVIDER z.string() schema with a
strict enum that lists the supported providers so misconfiguration fails at
startup; update the env.ts schema to validate AI_PROVIDER against the exact
values used in the switch in agent.ts ("anthropic", "google", "openai") (use Zod
enum or equivalent) and adjust any exported types or usages that derive from
AI_PROVIDER so they reflect the narrowed type.

In `@apps/monitor-app/src/test/clickhouse-test-utils.ts`:
- Around line 146-159: The optimizeAnomalies function currently waits up to 10s
for rows but proceeds to run OPTIMIZE even if count stays zero; change it to
mirror optimizeAggregates by throwing a timeout error when the deadline is
reached without data. Locate optimizeAnomalies and after the while loop (or
inside it when the timeout condition is met) raise a clear Error (e.g., "timed
out waiting for anomalies to appear") instead of silently continuing, so tests
fail fast when anomaly data never materializes.

In `@docker/docker-compose.yml`:
- Around line 88-89: The two environment variables AI_ANALYST_CLICKHOUSE_USER
and AI_ANALYST_CLICKHOUSE_PASSWORD are currently placed with general auth vars
but belong in the ClickHouse block; move those two entries so they appear under
the existing "# ClickHouse" section (the block introduced around the ClickHouse
configuration) to group all ClickHouse credentials together and improve clarity
while keeping variable names unchanged.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86009a7 and ab11532.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (23)
  • .github/workflows/check.yml
  • apps/monitor-app/.env.ci
  • apps/monitor-app/.env.example
  • apps/monitor-app/.env.test
  • apps/monitor-app/clickhouse/migrations/007_anomaly_detection.sql
  • apps/monitor-app/package.json
  • apps/monitor-app/scripts/seed-demo-data.mjs
  • apps/monitor-app/src/app/api/chat/[projectId]/route.ts
  • apps/monitor-app/src/app/server/lib/agent/agent-clickhouse-client.ts
  • apps/monitor-app/src/app/server/lib/agent/agent.ts
  • apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts
  • apps/monitor-app/src/app/server/lib/agent/utils/system-prompt.ts
  • apps/monitor-app/src/app/server/lib/clickhouse/bootstrap.ts
  • apps/monitor-app/src/env.ts
  • apps/monitor-app/src/instrumentation.ts
  • apps/monitor-app/src/test/anomaly-detection.test.ts
  • apps/monitor-app/src/test/clickhouse-test-utils.ts
  • apps/monitor-app/src/test/performance-guardrails.test.ts
  • apps/monitor-app/vitest.anomaly.config.ts
  • docker/docker-compose.dev.yml
  • docker/docker-compose.yml
  • docker/monitor-app.prod.Dockerfile
  • setup.sh
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: check_pull_request
🧰 Additional context used
🧬 Code graph analysis (5)
apps/monitor-app/src/app/server/lib/clickhouse/bootstrap.ts (2)
apps/monitor-app/src/env.ts (1)
  • env (27-78)
apps/monitor-app/src/app/server/lib/clickhouse/client.ts (1)
  • sql (44-53)
apps/monitor-app/src/app/api/chat/[projectId]/route.ts (1)
apps/monitor-app/src/app/server/lib/agent/agent.ts (1)
  • createAgent (33-42)
apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts (3)
apps/monitor-app/src/app/server/lib/logger.ts (1)
  • logger (6-9)
apps/monitor-app/src/app/server/lib/clickhouse/client.ts (1)
  • sql (44-53)
apps/monitor-app/src/app/server/lib/agent/agent-clickhouse-client.ts (1)
  • agentSql (45-54)
apps/monitor-app/src/instrumentation.ts (2)
apps/monitor-app/src/lib/provision-initial-user.ts (1)
  • provisionInitialUser (5-23)
apps/monitor-app/src/app/server/lib/clickhouse/bootstrap.ts (1)
  • syncDatabaseRoles (4-13)
apps/monitor-app/src/app/server/lib/agent/agent-clickhouse-client.ts (3)
apps/monitor-app/src/env.ts (1)
  • env (27-78)
apps/monitor-app/scripts/run-clickhouse-migrate.mjs (3)
  • port (12-12)
  • user (14-14)
  • password (15-15)
apps/monitor-app/src/lib/auth.ts (1)
  • auth (22-122)
🪛 dotenv-linter (4.0.0)
apps/monitor-app/.env.ci

[warning] 10-10: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_USER key should go before the CLICKHOUSE_DB key

(UnorderedKey)


[warning] 11-11: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_PASSWORD key should go before the AI_ANALYST_CLICKHOUSE_USER key

(UnorderedKey)

apps/monitor-app/.env.example

[warning] 10-10: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_USER key should go before the CLICKHOUSE_DB key

(UnorderedKey)


[warning] 11-11: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_PASSWORD key should go before the AI_ANALYST_CLICKHOUSE_USER key

(UnorderedKey)


[warning] 31-31: [EndingBlankLine] No blank line at the end of the file

(EndingBlankLine)


[warning] 31-31: [UnorderedKey] The AI_MODEL key should go before the AI_PROVIDER key

(UnorderedKey)

apps/monitor-app/.env.test

[warning] 10-10: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_USER key should go before the CLICKHOUSE_DB key

(UnorderedKey)


[warning] 11-11: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_PASSWORD key should go before the AI_ANALYST_CLICKHOUSE_USER key

(UnorderedKey)

🔇 Additional comments (22)
apps/monitor-app/src/test/performance-guardrails.test.ts (1)

246-247: Good payload-shape alignment for non-custom range calls.

Explicitly passing customStart/customEnd as null keeps the request contract clear for fixed ranges.

docker/monitor-app.prod.Dockerfile (1)

40-41: Looks consistent with existing build-time env placeholder setup.

These placeholders match the established pattern used for build-time validation inputs.

apps/monitor-app/package.json (2)

17-18: Nice addition of a dedicated anomaly test entrypoint.

Splitting anomaly tests into a dedicated script improves CI targeting and local debugging.


21-24: AI SDK package versions are compatible with ai@6.0.101.

All @ai-sdk/* packages are pinned to the v3 line (^3.0.x), which is the recommended version for use with ai@6.0.101 per the official AI SDK v6 migration guide. No compatibility issues present.

.github/workflows/check.yml (1)

96-100: Good CI coverage extension.

This step is correctly scoped behind monitor_changed and follows the existing test-step pattern.

apps/monitor-app/vitest.anomaly.config.ts (1)

4-11: Config composition is clean here.

Using mergeConfig with the integration baseline and anomaly-specific overrides is a solid setup.

apps/monitor-app/src/instrumentation.ts (1)

1-6: LGTM!

The addition of syncDatabaseRoles() follows the existing initialization pattern. The function gracefully returns early when AI analyst credentials are not configured, making it safe for deployments that don't use the AI features.

apps/monitor-app/.env.ci (1)

10-11: LGTM!

The AI analyst credentials are appropriately grouped with other ClickHouse-related variables. The static analysis warnings about key ordering can be safely ignored since grouping related variables together improves maintainability.

apps/monitor-app/.env.test (1)

10-11: LGTM!

Consistent with the CI environment configuration. The credentials are properly placed within the ClickHouse configuration section.

setup.sh (1)

212-215: LGTM!

The AI analyst password generation follows the established pattern using generate_secret() (cryptographically secure via openssl rand). The hardcoded username is appropriate, and the generated .env file is properly protected with chmod 600.

Also applies to: 333-334

docker/docker-compose.dev.yml (1)

9-10: LGTM!

Adding these to the x-clickhouse-env anchor ensures all services inherit the AI analyst credentials. The simple default values are appropriate for local development and consistent with other dev defaults.

apps/monitor-app/src/test/clickhouse-test-utils.ts (1)

178-178: LGTM!

Enabling CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT is necessary for the test container to support user management operations required by syncDatabaseRoles.

apps/monitor-app/.env.example (1)

10-11: LGTM!

AI analyst credentials are properly grouped with other ClickHouse settings.

apps/monitor-app/src/app/server/lib/agent/agent.ts (2)

9-31: LGTM!

The getModel() function correctly abstracts the AI provider selection with proper error handling for unsupported providers. The pattern of creating a provider instance and then invoking it with the model name follows the AI SDK conventions.


2-2: No issues found. Both ToolLoopAgent and stepCountIs are valid exports in the ai SDK v6.0.101 and are correctly imported. The implementation matches the official API documented at https://ai-sdk.dev/docs/reference/ai-sdk-core/tool-loop-agent and https://ai-sdk.dev/docs/reference/ai-sdk-core/step-count-is.

Likely an incorrect or invalid review comment.

apps/monitor-app/src/env.ts (1)

36-37: LGTM!

The new AI analyst ClickHouse credentials are properly defined with the username having a sensible default and the password being required. The corresponding runtime environment mappings are correctly added.

Also applies to: 65-66

apps/monitor-app/src/app/server/lib/agent/agent-clickhouse-client.ts (2)

38-54: LGTM!

The lazy Proxy pattern is well-documented and correctly implements deferred client initialization, which is essential for avoiding build-time errors when ClickHouse environment variables aren't configured. The pattern mirrors the existing implementation in client.ts.


1-1: Waddler API usage is correct.

The waddler(connectionUrl) initialization pattern and tagged template literal support match the documented API.
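The lazy-Proxy pattern described above can be illustrated with a self-contained sketch. Here `createRealClient` and `config` are stand-ins for the actual waddler setup and environment variables, so treat this as an illustration of the technique rather than the project's implementation:

```typescript
// Illustrative sketch of the lazy-Proxy pattern: defer creating the real
// client until first property access, so merely importing the module at
// build time (when configuration is absent) cannot throw.
type Client = { query: (q: string) => string };

const config: { url?: string } = {};

const createRealClient = (): Client => {
  if (!config.url) throw new Error("ClickHouse URL not configured");
  const url = config.url;
  return { query: (q) => `ran "${q}" against ${url}` };
};

let cached: Client | undefined;

// Constructing the Proxy is free; no connection is attempted here.
const lazyClient: Client = new Proxy({} as Client, {
  get(_target, prop) {
    cached ??= createRealClient(); // first access pays the init cost once
    return Reflect.get(cached, prop);
  },
});
```

Accessing `lazyClient.query` before `config.url` is set throws the configuration error; setting it first initializes the real client once and caches it for subsequent accesses.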

apps/monitor-app/src/test/anomaly-detection.test.ts (3)

18-43: LGTM!

The test setup and teardown properly manages the ClickHouse test container lifecycle, including environment variable backup/restore and client cleanup. The 120-second timeout is appropriate for container initialization.


45-188: LGTM!

The test cases provide comprehensive coverage of the anomaly detection workflow, including:

  • Deterministic anomaly ID verification
  • Metadata validation for AI reasoning
  • State management with processed_anomalies
  • Temporal gap enforcement
  • Path-level breakdown analysis
  • Cross-metric correlation checks

Good end-to-end test coverage for the new anomaly detection feature.


7-7: The .mjs import is correctly configured and poses no module interoperability issues.

The TypeScript configuration has esModuleInterop: true, module: "esnext", and moduleResolution: "bundler", which properly support importing ESM .mjs modules. The function seedAnomalyTestPattern is successfully imported and used at line 34 without any complications. No changes are needed.

Likely an incorrect or invalid review comment.

apps/monitor-app/src/app/server/lib/clickhouse/bootstrap.ts (1)

10-12: No security concern with password parameterization.

The password interpolation is safe. Waddler's sql tagged template automatically converts ${...} values into bind parameters (not string concatenation), which prevents SQL injection. The usage of sql.identifier() for the username and plain ${} for the password is correct per the library's design: identifiers require explicit escaping via sql.identifier(), while values are automatically parameterized.

Likely an incorrect or invalid review comment.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

♻️ Duplicate comments (3)
apps/monitor-app/clickhouse/migrations/007_anomaly_detection.sql (2)

101-101: ⚠️ Potential issue | 🟠 Major

Keep r_ai_analyst read-only; move writes to a separate writer role.

Line 101 grants mutation privileges on processed_anomalies, which broadens blast radius for the analyst identity.

🔐 Proposed direction
 GRANT SELECT ON projects TO r_ai_analyst;
-GRANT SELECT, INSERT, ALTER UPDATE ON processed_anomalies TO r_ai_analyst;
+GRANT SELECT ON processed_anomalies TO r_ai_analyst;
+
+CREATE ROLE IF NOT EXISTS r_anomaly_writer;
+GRANT INSERT, ALTER UPDATE ON processed_anomalies TO r_anomaly_writer;

Expected verification: writes to processed_anomalies should use a non-analyst identity/path.

#!/bin/bash
set -euo pipefail

rg -n -C3 "processed_anomalies|AI_ANALYST_CLICKHOUSE_USER|INSERT\\s+INTO\\s+processed_anomalies|table:\\s*\"processed_anomalies\"" apps/monitor-app

Based on learnings: In the next-cwv-monitor project, the agent ClickHouse client has database-level permissions restricted to read-only operations, providing security at the database layer rather than requiring application-level SQL validation for mutation prevention.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/clickhouse/migrations/007_anomaly_detection.sql` at line
101, The migration currently grants mutation privileges to the analyst role: the
GRANT statement on processed_anomalies gives INSERT, ALTER, UPDATE to
r_ai_analyst; change this so r_ai_analyst is read-only and create a separate
writer role (e.g., r_ai_writer) that receives INSERT/ALTER/UPDATE on
processed_anomalies, update the migration to grant only SELECT to r_ai_analyst
and grant mutation rights to r_ai_writer, and update any deployment/config that
performs writes (the component that issues INSERT INTO processed_anomalies or
uses AI_ANALYST_CLICKHOUSE_USER) to authenticate as the new writer role so all
writes use the non-analyst identity.

103-104: ⚠️ Potential issue | 🔴 Critical

Do not create ai_analyst_user with no_password.

Lines 103-104 leave a passwordless account as the migration default, which is a security exposure before bootstrap hardening executes.

🔐 Proposed direction
-CREATE USER IF NOT EXISTS ai_analyst_user 
-IDENTIFIED WITH no_password;
+-- Avoid passwordless provisioning in migration defaults.
+-- Provision/create this user with env-backed credentials in bootstrap.

Expected verification: migration should not contain no_password, and bootstrap should handle secure create/update.

#!/bin/bash
set -euo pipefail

rg -n -C3 "CREATE USER IF NOT EXISTS ai_analyst_user|IDENTIFIED WITH no_password|syncDatabaseRoles|ALTER USER IF EXISTS" \
  apps/monitor-app/clickhouse/migrations/007_anomaly_detection.sql \
  apps/monitor-app/src/app/server/lib/clickhouse/bootstrap.ts
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/clickhouse/migrations/007_anomaly_detection.sql` around
lines 103 - 104, The migration should not create a passwordless account; remove
the "IDENTIFIED WITH no_password" clause from the CREATE USER statement in the
migration (leave CREATE USER IF NOT EXISTS ai_analyst_user as a plain creation
or remove the user creation entirely) and let the bootstrap code (functions like
syncDatabaseRoles and ALTER USER IF EXISTS in bootstrap.ts) perform secure
creation/updating of the ai_analyst_user with proper credentials; ensure the
migration is idempotent and does not introduce any authentication bypass so all
password/alter-user logic is centralized in the bootstrap routines.
apps/monitor-app/scripts/seed-demo-data.mjs (1)

590-623: ⚠️ Potential issue | 🟠 Major

Avoid mutating now; it skews anomaly timestamps and can collapse spacing.

Line 591 mutates now, and line 623 then derives offsets from that mutated value. Also, lines 616-618 can yield intervalMs = 0 at the top of the hour.

🐛 Proposed fix
 export async function seedAnomalyTestPattern(client, projectId) {
   const now = new Date();
+  const nowTs = now.getTime();

   const minutesPastHour = now.getMinutes();
-  const currentHourMark = new Date(now.setMinutes(0, 0, 0));
+  const currentHourMark = new Date(nowTs);
+  currentHourMark.setMinutes(0, 0, 0);

   const events = [];
   const route = "/checkout";
   const device = "desktop";
@@
   const intervalMs = minutesPastHour > 30 
     ? 60_000
-    : Math.floor((minutesPastHour * 60_000) / 35);
+    : Math.max(1_000, Math.floor((minutesPastHour * 60_000) / 35));
@@
-    const recordedAtDate = new Date(now.getTime() - offset);
+    const recordedAtDate = new Date(nowTs - offset);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/scripts/seed-demo-data.mjs` around lines 590 - 623, Don't
mutate now with now.setMinutes; create currentHourMark from a copy (e.g., new
Date(now.getTime()) or new Date(new Date(now).setMinutes(0,0,0))) so now remains
unchanged for later offsets, and guard intervalMs against zero (e.g.,
Math.max(1, Math.floor((minutesPastHour * 60_000) / 35))). Update references to
now/currentHourMark in the loops (session creation and recordedAtDate) to use
the non-mutated now and the new currentHourMark variables (symbols: now,
currentHourMark, minutesPastHour, intervalMs, recordedAtDate).
🧹 Nitpick comments (4)
docker/docker-compose.yml (1)

88-89: Consider placement under ClickHouse section for clarity.

These AI analyst credentials are ClickHouse database credentials rather than authentication/security settings. For organizational consistency, consider moving them to the ClickHouse section (after line 79) alongside the other ClickHouse environment variables.

That said, the current placement is functional and follows the required variable pattern correctly.

♻️ Suggested organization
       # ─────────────────────────────────────────────────────────────
       # ClickHouse
       # ─────────────────────────────────────────────────────────────
       <<: *clickhouse-env
+      AI_ANALYST_CLICKHOUSE_USER: ${AI_ANALYST_CLICKHOUSE_USER:?AI_ANALYST_CLICKHOUSE_USER is required}
+      AI_ANALYST_CLICKHOUSE_PASSWORD: ${AI_ANALYST_CLICKHOUSE_PASSWORD:?AI_ANALYST_CLICKHOUSE_PASSWORD is required}

       # ─────────────────────────────────────────────────────────────
       # Authentication & Security
       # ─────────────────────────────────────────────────────────────
       BETTER_AUTH_SECRET: ${BETTER_AUTH_SECRET:?BETTER_AUTH_SECRET is required}
       MIN_PASSWORD_SCORE: ${MIN_PASSWORD_SCORE:-2}
       RATE_LIMIT_WINDOW_MS: ${RATE_LIMIT_WINDOW_MS:-60000}
       MAX_LOGIN_ATTEMPTS: ${MAX_LOGIN_ATTEMPTS:-5}
-      AI_ANALYST_CLICKHOUSE_USER: ${AI_ANALYST_CLICKHOUSE_USER:?AI_ANALYST_CLICKHOUSE_USER is required}
-      AI_ANALYST_CLICKHOUSE_PASSWORD: ${AI_ANALYST_CLICKHOUSE_PASSWORD:?AI_ANALYST_CLICKHOUSE_PASSWORD is required}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker/docker-compose.yml` around lines 88 - 89, The two environment
variables AI_ANALYST_CLICKHOUSE_USER and AI_ANALYST_CLICKHOUSE_PASSWORD are
ClickHouse DB credentials and should be relocated into the ClickHouse
environment block in docker-compose.yml so they sit with the other ClickHouse
vars; update the docker-compose service definition by cutting these two keys
from their current location and pasting them under the ClickHouse section (near
the existing ClickHouse env entries) to improve organization while keeping the
same variable syntax and values.
apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts (1)

68-72: Consider including executionTime in error responses for consistency.

The error response omits executionTime while all success paths include it. This may cause issues if consumers expect a consistent response shape.

🔧 Suggested fix
     } catch (error: unknown) {
       const message = error instanceof Error ? error.message : String(error);
       logger.error(`[ExecuteSQL] Error: ${message}`);
-      return { error: message, csv: "", totalRows: 0 };
+      return { error: message, csv: "", totalRows: 0, executionTime: 0 };
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts` around
lines 68 - 72, The catch block in execute-clickhouse.ts returns { error, csv,
totalRows } but omits executionTime causing inconsistent response shapes; update
the catch handler (the block that logs `[ExecuteSQL] Error:`) to include
executionTime in the returned object — compute it the same way the success path
does (e.g., using the same start timestamp/elapsed calculation used elsewhere in
this function) and return { error: message, csv: "", totalRows: 0, executionTime
} so callers always receive executionTime.
apps/monitor-app/src/instrumentation.ts (1)

1-7: Consider error handling for syncDatabaseRoles() failure.

If syncDatabaseRoles() throws (e.g., ClickHouse connection failure), the entire instrumentation will fail, potentially preventing application startup. Depending on requirements, you may want to catch and log the error while allowing the app to start without AI role sync.

🔧 Optional defensive approach
 export async function register() {
   await provisionInitialUser();
-  await syncDatabaseRoles();
+  try {
+    await syncDatabaseRoles();
+  } catch (error) {
+    console.warn("Failed to sync database roles for AI analyst:", error);
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/instrumentation.ts` around lines 1 - 7, Wrap the
syncDatabaseRoles() call in a try/catch inside register() so a ClickHouse or
network failure doesn't abort startup; keep awaiting provisionInitialUser(),
then try { await syncDatabaseRoles(); } catch (err) { /* log the error via your
app logger (or console.error) with context like "syncDatabaseRoles failed" */ }
to allow the app to continue without AI role sync; reference the register
function and the syncDatabaseRoles symbol when making the change.
apps/monitor-app/src/app/api/chat/[projectId]/route.ts (1)

16-24: Consider differentiating client vs server errors.

Currently all errors return 500. JSON parsing failures from req.json() indicate a client error and would be more appropriately returned as 400.

🔧 Optional improvement
   } catch (error) {
+    if (error instanceof SyntaxError) {
+      return Response.json({ ok: false, error: "Invalid JSON body" }, { status: 400 });
+    }
     const errorMessage = error instanceof Error ? error.message : String(error);
     return Response.json(
       { ok: false, error: errorMessage },
      { status: 500 },
     );
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/app/api/chat/`[projectId]/route.ts around lines 16 - 24,
The catch-all currently returns 500 for every error; change the error handling
in the route handler (the try/catch surrounding await req.json() and request
processing in route.ts) to detect client-side JSON parse errors (e.g. error
instanceof SyntaxError or error.name === 'SyntaxError' from req.json()) and
return Response.json({ ok: false, error: errorMessage }, { status: 400 }) for
those cases, falling back to status 500 for server errors; ensure you use the
same errorMessage extraction logic and update the Response.json call
accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/monitor-app/.env.example`:
- Around line 29-31: The file ends with the environment variables AI_API_KEY,
AI_PROVIDER, and AI_MODEL but is missing a trailing newline; open the .env
example containing those keys and ensure you add a single newline character at
EOF so the last line terminates with a newline (this satisfies dotenv-linter and
standard text-file conventions).

In `@apps/monitor-app/.env.test`:
- Around line 10-11: The two new env keys AI_ANALYST_CLICKHOUSE_USER and
AI_ANALYST_CLICKHOUSE_PASSWORD are out of order and trigger dotenv-linter
UnorderedKey warnings; reorder these entries to match the file's established
alphabetical (or existing group) ordering so they appear in the correct position
relative to other variables (i.e., move AI_ANALYST_CLICKHOUSE_PASSWORD and
AI_ANALYST_CLICKHOUSE_USER into the alphabetically appropriate spot or group
them consistently with other AI_ANALYST_* keys) to eliminate the lint warnings.

In `@apps/monitor-app/src/env.ts`:
- Around line 36-37: The env schema requires AI_ANALYST_CLICKHOUSE_PASSWORD but
syncDatabaseRoles() already handles missing aiUser/aiPass; update the env
definition for AI_ANALYST_CLICKHOUSE_PASSWORD (alongside
AI_ANALYST_CLICKHOUSE_USER pattern) to be optional to avoid validation failures
in non-AI environments—e.g., change the z.string().min(1) constraint to match
the optional pattern used for other AI vars (use z.string().optional() or
z.string().min(1).optional() as appropriate) so code paths in
syncDatabaseRoles() can rely on possibly undefined aiPass without breaking
startup.

In `@apps/monitor-app/src/test/anomaly-detection.test.ts`:
- Around line 9-11: The afterAll teardown assumes setup succeeded and uses
container, sql, and directClient unguarded, which can mask beforeAll failures;
update the afterAll (and any teardown helpers) to check for
existence/initialization before calling methods (e.g., only call
container.stop(), directClient.close(), or sql.cleanup() if container,
directClient, or sql are defined/non-null), and wrap teardown calls in safe
conditionals or try/catch so a failed beforeAll does not cause afterAll to
throw; reference the variables container, sql, directClient and the
beforeAll/afterAll hooks when making these guards.

In `@apps/monitor-app/src/test/clickhouse-test-utils.ts`:
- Around line 149-159: The wait loop in clickhouse-test-utils.ts can fall
through and call sqlClient`OPTIMIZE TABLE cwv_stats_hourly FINAL`.command() with
zero rows; change the loop to detect when Date.now() >= deadlineMs and then
throw a clear error instead of proceeding. Specifically, in the block that
queries sqlClient<{ cnt: string }>`SELECT toString(count()) AS cnt FROM
cwv_stats_hourly` and computes count, if the deadlineMs is reached and count ===
0, throw an Error (including context like the table name and deadline) so tests
fail fast; keep using the existing symbols sqlClient, deadlineMs, wait and avoid
calling the OPTIMIZE command when count is zero.
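The fail-fast behavior requested here generalizes to a small polling helper. This is a hedged sketch under the review's assumptions, not the project's actual test utility:

```typescript
// Poll `check` until it yields a value or the deadline passes; throw a
// contextual error instead of falling through with zero rows.
const waitFor = async <T>(
  check: () => Promise<T | undefined>,
  opts: { deadlineMs: number; intervalMs: number; label: string },
): Promise<T> => {
  const start = Date.now();
  for (;;) {
    // Probe before the deadline test so even a tiny deadline checks once.
    const result = await check();
    if (result !== undefined) return result;
    if (Date.now() - start >= opts.deadlineMs) {
      throw new Error(`Timed out after ${opts.deadlineMs}ms waiting for ${opts.label}`);
    }
    await new Promise((resolve) => setTimeout(resolve, opts.intervalMs));
  }
};
```

Applied to the review's scenario, the `check` callback would run the `SELECT count()` query and return the count only when it is nonzero, so the OPTIMIZE command never runs against an empty table.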

---

Duplicate comments:
In `@apps/monitor-app/clickhouse/migrations/007_anomaly_detection.sql`:
- Line 101: The migration currently grants mutation privileges to the analyst
role: the GRANT statement on processed_anomalies gives INSERT, ALTER, UPDATE to
r_ai_analyst; change this so r_ai_analyst is read-only and create a separate
writer role (e.g., r_ai_writer) that receives INSERT/ALTER/UPDATE on
processed_anomalies, update the migration to grant only SELECT to r_ai_analyst
and grant mutation rights to r_ai_writer, and update any deployment/config that
performs writes (the component that issues INSERT INTO processed_anomalies or
uses AI_ANALYST_CLICKHOUSE_USER) to authenticate as the new writer role so all
writes use the non-analyst identity.
- Around line 103-104: The migration should not create a passwordless account;
remove the "IDENTIFIED WITH no_password" clause from the CREATE USER statement
in the migration (leave CREATE USER IF NOT EXISTS ai_analyst_user as a plain
creation or remove the user creation entirely) and let the bootstrap code
(functions like syncDatabaseRoles and ALTER USER IF EXISTS in bootstrap.ts)
perform secure creation/updating of the ai_analyst_user with proper credentials;
ensure the migration is idempotent and does not introduce any authentication
bypass so all password/alter-user logic is centralized in the bootstrap
routines.

In `@apps/monitor-app/scripts/seed-demo-data.mjs`:
- Around line 590-623: Don't mutate now with now.setMinutes; create
currentHourMark from a copy (e.g., new Date(now.getTime()) or new Date(new
Date(now).setMinutes(0,0,0))) so now remains unchanged for later offsets, and
guard intervalMs against zero (e.g., Math.max(1, Math.floor((minutesPastHour *
60_000) / 35))). Update references to now/currentHourMark in the loops (session
creation and recordedAtDate) to use the non-mutated now and the new
currentHourMark variables (symbols: now, currentHourMark, minutesPastHour,
intervalMs, recordedAtDate).

---

Nitpick comments:
In `@apps/monitor-app/src/app/api/chat/`[projectId]/route.ts:
- Around line 16-24: The catch-all currently returns 500 for every error; change
the error handling in the route handler (the try/catch surrounding await
req.json() and request processing in route.ts) to detect client-side JSON parse
errors (e.g. error instanceof SyntaxError or error.name === 'SyntaxError' from
req.json()) and return Response.json({ ok: false, error: errorMessage }, {
status: 400 }) for those cases, falling back to status 500 for server errors;
ensure you use the same errorMessage extraction logic and update the
Response.json call accordingly.

In `@apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts`:
- Around line 68-72: The catch block in execute-clickhouse.ts returns { error,
csv, totalRows } but omits executionTime causing inconsistent response shapes;
update the catch handler (the block that logs `[ExecuteSQL] Error:`) to include
executionTime in the returned object — compute it the same way the success path
does (e.g., using the same start timestamp/elapsed calculation used elsewhere in
this function) and return { error: message, csv: "", totalRows: 0, executionTime
} so callers always receive executionTime.

In `@apps/monitor-app/src/instrumentation.ts`:
- Around line 1-7: Wrap the syncDatabaseRoles() call in a try/catch inside
register() so a ClickHouse or network failure doesn't abort startup; keep
awaiting provisionInitialUser(), then try { await syncDatabaseRoles(); } catch
(err) { /* log the error via your app logger (or console.error) with context
like "syncDatabaseRoles failed" */ } to allow the app to continue without AI
role sync; reference the register function and the syncDatabaseRoles symbol when
making the change.

In `@docker/docker-compose.yml`:
- Around line 88-89: The two environment variables AI_ANALYST_CLICKHOUSE_USER
and AI_ANALYST_CLICKHOUSE_PASSWORD are ClickHouse DB credentials and should be
relocated into the ClickHouse environment block in docker-compose.yml so they
sit with the other ClickHouse vars; update the docker-compose service definition
by cutting these two keys from their current location and pasting them under the
ClickHouse section (near the existing ClickHouse env entries) to improve
organization while keeping the same variable syntax and values.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86009a7 and c8929de.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (23)
  • .github/workflows/check.yml
  • apps/monitor-app/.env.ci
  • apps/monitor-app/.env.example
  • apps/monitor-app/.env.test
  • apps/monitor-app/clickhouse/migrations/007_anomaly_detection.sql
  • apps/monitor-app/package.json
  • apps/monitor-app/scripts/seed-demo-data.mjs
  • apps/monitor-app/src/app/api/chat/[projectId]/route.ts
  • apps/monitor-app/src/app/server/lib/agent/agent.ts
  • apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts
  • apps/monitor-app/src/app/server/lib/agent/utils/agent-clickhouse-client.ts
  • apps/monitor-app/src/app/server/lib/agent/utils/system-prompt.ts
  • apps/monitor-app/src/app/server/lib/clickhouse/bootstrap.ts
  • apps/monitor-app/src/env.ts
  • apps/monitor-app/src/instrumentation.ts
  • apps/monitor-app/src/test/anomaly-detection.test.ts
  • apps/monitor-app/src/test/clickhouse-test-utils.ts
  • apps/monitor-app/src/test/performance-guardrails.test.ts
  • apps/monitor-app/vitest.anomaly.config.ts
  • docker/docker-compose.dev.yml
  • docker/docker-compose.yml
  • docker/monitor-app.prod.Dockerfile
  • setup.sh
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: check_pull_request
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-02-27T10:21:06.393Z
Learnt from: olekszczepanowski
Repo: Blazity/next-cwv-monitor PR: 118
File: apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts:23-33
Timestamp: 2026-02-27T10:21:06.393Z
Learning: In the next-cwv-monitor project, the agent ClickHouse client has database-level permissions restricted to read-only operations, providing security at the database layer rather than requiring application-level SQL validation for mutation prevention.

Applied to files:

  • apps/monitor-app/.env.test
  • setup.sh
  • apps/monitor-app/.env.ci
  • apps/monitor-app/src/app/server/lib/agent/utils/agent-clickhouse-client.ts
  • apps/monitor-app/.env.example
  • apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts
  • apps/monitor-app/clickhouse/migrations/007_anomaly_detection.sql
📚 Learning: 2026-02-27T10:22:20.131Z
Learnt from: olekszczepanowski
Repo: Blazity/next-cwv-monitor PR: 118
File: apps/monitor-app/src/app/api/chat/[projectId]/route.ts:4-14
Timestamp: 2026-02-27T10:22:20.131Z
Learning: In the next-cwv-monitor codebase (apps/monitor-app), the proxy.ts middleware handles authentication by redirecting unauthenticated users to /login. Routes not in PUBLIC_ROUTES array (["/login", "/api/auth", "/api/health", "/api/ingest"]) are automatically protected by this authentication layer.

Applied to files:

  • apps/monitor-app/src/app/api/chat/[projectId]/route.ts
🧬 Code graph analysis (7)
apps/monitor-app/src/app/server/lib/clickhouse/bootstrap.ts (2)
apps/monitor-app/src/env.ts (1)
  • env (27-78)
apps/monitor-app/src/app/server/lib/clickhouse/client.ts (1)
  • sql (44-53)
apps/monitor-app/src/app/api/chat/[projectId]/route.ts (1)
apps/monitor-app/src/app/server/lib/agent/agent.ts (1)
  • createAgent (37-46)
apps/monitor-app/src/app/server/lib/agent/agent.ts (2)
apps/monitor-app/src/env.ts (1)
  • env (27-78)
apps/monitor-app/src/app/server/lib/agent/utils/system-prompt.ts (1)
  • buildSystemPrompt (1-84)
apps/monitor-app/src/env.ts (1)
apps/monitor-app/scripts/seed-demo-data.mjs (1)
  • process (6-12)
apps/monitor-app/src/instrumentation.ts (2)
apps/monitor-app/src/lib/provision-initial-user.ts (1)
  • provisionInitialUser (5-23)
apps/monitor-app/src/app/server/lib/clickhouse/bootstrap.ts (1)
  • syncDatabaseRoles (4-13)
apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts (3)
apps/monitor-app/src/app/server/lib/logger.ts (1)
  • logger (6-9)
apps/monitor-app/src/app/server/lib/clickhouse/client.ts (1)
  • sql (44-53)
apps/monitor-app/src/app/server/lib/agent/utils/agent-clickhouse-client.ts (1)
  • agentSql (45-54)
apps/monitor-app/src/test/anomaly-detection.test.ts (3)
apps/monitor-app/src/app/server/lib/clickhouse/client.ts (1)
  • sql (44-53)
apps/monitor-app/scripts/seed-demo-data.mjs (3)
  • seedAnomalyTestPattern (587-646)
  • now (138-138)
  • now (588-588)
apps/monitor-app/src/test/clickhouse-test-utils.ts (2)
  • setupClickHouseContainer (165-219)
  • optimizeAnomalies (147-159)
🪛 dotenv-linter (4.0.0)
apps/monitor-app/.env.test

[warning] 10-10: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_USER key should go before the CLICKHOUSE_DB key

(UnorderedKey)


[warning] 11-11: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_PASSWORD key should go before the AI_ANALYST_CLICKHOUSE_USER key

(UnorderedKey)

apps/monitor-app/.env.ci

[warning] 10-10: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_USER key should go before the CLICKHOUSE_DB key

(UnorderedKey)


[warning] 11-11: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_PASSWORD key should go before the AI_ANALYST_CLICKHOUSE_USER key

(UnorderedKey)


[warning] 24-24: [UnorderedKey] The AI_MODEL key should go before the AI_PROVIDER key

(UnorderedKey)

apps/monitor-app/.env.example

[warning] 10-10: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_USER key should go before the CLICKHOUSE_DB key

(UnorderedKey)


[warning] 11-11: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_PASSWORD key should go before the AI_ANALYST_CLICKHOUSE_USER key

(UnorderedKey)


[warning] 31-31: [EndingBlankLine] No blank line at the end of the file

(EndingBlankLine)


[warning] 31-31: [UnorderedKey] The AI_MODEL key should go before the AI_PROVIDER key

(UnorderedKey)

🔇 Additional comments (23)
apps/monitor-app/src/test/performance-guardrails.test.ts (1)

246-248: LGTM!

The addition of explicit customStart: null and customEnd: null parameters properly completes the argument object for getDashboardData, making the optional time-range overrides explicit in the test.

setup.sh (2)

212-215: LGTM!

The AI analyst credentials are generated securely using the same generate_secret() function as other sensitive values, maintaining consistent security practices throughout the setup wizard.


333-334: LGTM!

The AI analyst credentials are appropriately placed under the ClickHouse Database section and follow the same pattern as other environment variables.

docker/docker-compose.dev.yml (1)

9-10: LGTM!

The AI analyst credentials with development defaults follow the same pattern as other ClickHouse dev credentials (e.g., CLICKHOUSE_PASSWORD:-secret). The anchor ensures consistent propagation across services.

docker/monitor-app.prod.Dockerfile (1)

40-41: LGTM!

The build-time placeholders follow the established pattern for other credentials in this Dockerfile. The comment on lines 30-31 clearly documents that these are replaced at runtime via docker-compose.

apps/monitor-app/.env.ci (2)

10-11: LGTM!

The AI analyst ClickHouse credentials are appropriately placed with other ClickHouse configuration.


21-25: LGTM!

The AI configuration section with placeholder values is appropriate for CI. The comment documenting that these are required by src/env.ts is helpful for maintainability.

apps/monitor-app/src/app/server/lib/agent/utils/system-prompt.ts (3)

1-3: Validate projectId upstream before prompt interpolation.

The projectId parameter is directly interpolated into the system prompt string at multiple locations (lines 3, 63, 78). While the retrieved learning confirms database-level read-only permissions, the projectId should still be validated as a UUID in the calling code (createAgent or the chat route handler) before being passed here, consistent with how other services use projectIdSchema.


63-66: Query guidelines are well-structured.

The guidelines appropriately enforce LIMIT ≤200 and project filtering, which helps prevent resource exhaustion and ensures proper data isolation.


74-81: LGTM on the example query.

The example query demonstrates proper use of ClickHouse aggregate functions (quantilesMerge, countMerge) and follows the established guidelines with appropriate LIMIT clause.

apps/monitor-app/src/app/server/lib/agent/utils/agent-clickhouse-client.ts (1)

1-54: LGTM!

The lazy initialization pattern via Proxy is well-implemented and correctly documented. It appropriately defers ClickHouse client creation to avoid build-time failures when environment variables aren't configured. The caching mechanism and connection URL construction are sound.

apps/monitor-app/src/env.ts (1)

46-48: LGTM!

The optional AI configuration variables (AI_API_KEY, AI_PROVIDER, AI_MODEL) are correctly defined and mapped to runtime environment, allowing flexible deployment with or without AI features.

Also applies to: 72-74

apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts (2)

10-16: CSV serialization still lacks escaping for commas, quotes, and newlines.

The rowsToCsv function doesn't escape values containing commas, quotes, or newlines, which will produce malformed CSV when such data is present in query results.

🧾 Suggested CSV escaping fix
+const escapeCsv = (value: unknown) => {
+  const raw = String(value ?? "");
+  return /[",\n\r]/.test(raw) ? `"${raw.replaceAll('"', '""')}"` : raw;
+};
+
 const rowsToCsv = (rows: Record<string, unknown>[]) => {
   if (rows.length === 0) return "";
   const columns = Object.keys(rows[0]);
-  const header = columns.join(",");
-  const body = rows.map((row) => columns.map((col) => String(row[col] ?? "")).join(",")).join("\n");
+  const header = columns.map(escapeCsv).join(",");
+  const body = rows.map((row) => columns.map((col) => escapeCsv(row[col])).join(",")).join("\n");
   return `${header}\n${body}`;
 };

18-67: LGTM!

The tool implementation is well-structured with appropriate thresholds, helpful user feedback (warnings/hints for large result sets), and proper error handling. Based on learnings, the database-level read-only permissions provide adequate security without requiring application-level SQL validation.

apps/monitor-app/src/app/server/lib/agent/agent.ts (2)

9-35: LGTM!

The provider selection logic is clean with appropriate runtime validation and clear error messages for missing configuration or unsupported providers.


37-46: Both ToolLoopAgent and stepCountIs are valid, well-documented APIs in Vercel AI SDK v6. The code correctly imports and uses them according to the official API specification—no issues found.

apps/monitor-app/src/app/api/chat/[projectId]/route.ts (1)

5-15: Project-level authorization is still missing.

While proxy.ts handles authentication, this route doesn't verify the authenticated user has permission to access the specified projectId. Based on the past review discussion, this creates a potential Broken Object Level Authorization (BOLA) vulnerability in multi-tenant scenarios.

If this is a multi-tenant application where users should only access their own projects, consider adding authorization similar to how the settings page verifies project access:

🔒 Suggested approach
+import { getAuthorizedSession } from "@/lib/auth-utils";
+import { hasPermission } from "@/lib/permissions";

 export async function POST(req: NextRequest, { params }: { params: Promise<{ projectId: string }> }) {
   try {
+    const { projectId } = await params;
+    const session = await getAuthorizedSession();
+    const canAccess = await hasPermission({ project: ["read"] }, session.user.id, projectId);
+    if (!canAccess) {
+      return Response.json({ ok: false, error: "Forbidden" }, { status: 403 });
+    }
-    const { projectId } = await params;
apps/monitor-app/package.json (1)

17-18: LGTM!

The new test script and AI SDK dependencies are properly added to support the anomaly detection feature. Version ranges are appropriate.

Also applies to: 21-24, 38-38

.github/workflows/check.yml (1)

95-100: LGTM!

The new CI step for anomaly detection tests follows the established pattern and is appropriately gated by the monitor_changed condition.

apps/monitor-app/vitest.anomaly.config.ts (1)

4-12: Config merge and anomaly test scoping look good.

Lines 4-12 cleanly extend the integration config while isolating anomaly tests and timeout settings.

apps/monitor-app/src/app/server/lib/clickhouse/bootstrap.ts (1)

10-12: Password sync query is safely parameterized.

Lines 10-12 correctly use identifier/value interpolation via the SQL tag instead of string concatenation.
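A generic sketch of why a tagged template avoids concatenation (this is not the codebase's actual sql tag; the placeholder syntax mimics ClickHouse's {name:Type} HTTP parameter binding):

```typescript
// Illustrative tagged template: interpolated values become named
// parameters instead of being spliced into the query string.
function sql(strings: TemplateStringsArray, ...values: unknown[]) {
  const text = strings.reduce(
    (acc, part, i) => acc + part + (i < values.length ? `{p${i}:String}` : ""),
    "",
  );
  const params = Object.fromEntries(values.map((v, i) => [`p${i}`, v]));
  return { text, params };
}

const { text, params } = sql`ALTER USER ai_analyst IDENTIFIED BY ${"s3cret"}`;
console.log(text);   // ALTER USER ai_analyst IDENTIFIED BY {p0:String}
console.log(params); // { p0: 's3cret' }
```

Sending the value as a bound parameter means it is never parsed as SQL, which is what makes the password sync query safe.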

apps/monitor-app/src/test/clickhouse-test-utils.ts (1)

178-178: Good addition for role/user integration tests.

Line 178 enabling CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT is aligned with the new permissions/roles flow.

apps/monitor-app/src/test/anomaly-detection.test.ts (1)

45-189: Strong end-to-end anomaly workflow coverage.

These assertions exercise detection, metadata quality, alert state transitions, and analysis-oriented queries in one realistic flow.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
apps/monitor-app/src/app/api/chat/[projectId]/route.ts (1)

9-15: ⚠️ Potential issue | 🟠 Major

Scope authorization to projectId before creating the agent.

Line 9 checks only a generic "project:get" permission, but the projectId extracted on line 15 is never part of the access check. That leaves a BOLA-style gap where users with broad read permission can query arbitrary projects by URL.

Suggested fix
 export async function POST(req: NextRequest, { params }: { params: Promise<{ projectId: string }> }) {
   try {
     const session = await getAuthorizedSession();
-    const canRead = await hasPermission({ project: ["get"] }, session.user.id);
+    const { projectId } = await params;
+    const canRead = await hasPermission(
+      { project: ["get"] },
+      session.user.id,
+      // use existing project-scoped access helper/check here
+      // e.g. verify user can access this specific projectId
+    );

     if (!canRead) {
       return Response.json({ ok: false, error: "Forbidden" }, { status: 403 });
     }

-    const { projectId } = await params;
     const { messages }: { messages: UIMessage[] } = await req.json();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/app/api/chat/`[projectId]/route.ts around lines 9 - 15,
The authorization currently calls hasPermission({ project: ["get"] },
session.user.id) without scoping to the specific projectId, allowing access to
arbitrary projects; modify the flow to extract projectId from params first (use
the existing params and projectId variable) and call hasPermission with the
project ID context (e.g., hasPermission({ project: ["get"] }, session.user.id, {
projectId }) or include projectId in the permission check payload) before
creating the agent or returning data, ensuring the check uses the same projectId
used later in this route.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/monitor-app/src/app/api/chat/`[projectId]/route.ts:
- Line 16: The handler currently trusts req.json() (const { messages }: {
messages: UIMessage[] } = await req.json()) and lets JSON parse errors or
invalid payload shapes fall through to the 500 catch; change it to validate
input and return 400 for client errors by wrapping req.json() in try/catch (or
using a quick runtime check) and asserting that messages exists and is an array
(and optionally check elements are objects/contain required fields) before
proceeding; on invalid JSON or missing/invalid messages return a Response
(status 400) with a clear error message instead of allowing the exception to
propagate to the generic 500 handler—apply the same validation pattern to the
later block around the second req.json() usage (lines 24–31) so malformed
payloads consistently produce 400.
- Around line 29-30: Replace returning raw internal errors to clients by logging
the full error server-side and sending a generic message in the Response.json
call; specifically, in the route handler where errorMessage is derived (the code
using `const errorMessage = error instanceof Error ? error.message :
String(error);` and `return Response.json({ ok: false, error: errorMessage }, {
status: 500 });`), change behavior to log the full error (e.g., console.error or
the app logger) and return a generic payload such as `{ ok: false, error:
"Internal server error" }` with status 500 so implementation details aren’t
exposed to clients.

---

Duplicate comments:
In `@apps/monitor-app/src/app/api/chat/`[projectId]/route.ts:
- Around line 9-15: The authorization currently calls hasPermission({ project:
["get"] }, session.user.id) without scoping to the specific projectId, allowing
access to arbitrary projects; modify the flow to extract projectId from params
first (use the existing params and projectId variable) and call hasPermission
with the project ID context (e.g., hasPermission({ project: ["get"] },
session.user.id, { projectId }) or include projectId in the permission check
payload) before creating the agent or returning data, ensuring the check uses
the same projectId used later in this route.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c8929de and aeb2e47.

📒 Files selected for processing (2)
  • apps/monitor-app/src/app/api/chat/[projectId]/route.ts
  • apps/monitor-app/src/app/server/lib/agent/utils/system-prompt.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/monitor-app/src/app/server/lib/agent/utils/system-prompt.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: check_pull_request
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-02-27T10:22:20.131Z
Learnt from: olekszczepanowski
Repo: Blazity/next-cwv-monitor PR: 118
File: apps/monitor-app/src/app/api/chat/[projectId]/route.ts:4-14
Timestamp: 2026-02-27T10:22:20.131Z
Learning: In the next-cwv-monitor codebase (apps/monitor-app), the proxy.ts middleware handles authentication by redirecting unauthenticated users to /login. Routes not in PUBLIC_ROUTES array (["/login", "/api/auth", "/api/health", "/api/ingest"]) are automatically protected by this authentication layer.

Applied to files:

  • apps/monitor-app/src/app/api/chat/[projectId]/route.ts
🧬 Code graph analysis (1)
apps/monitor-app/src/app/api/chat/[projectId]/route.ts (2)
apps/monitor-app/src/lib/auth-utils.ts (3)
  • getAuthorizedSession (24-36)
  • hasPermission (53-60)
  • UnauthorizedError (10-15)
apps/monitor-app/src/app/server/lib/agent/agent.ts (1)
  • createAgent (37-46)

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/monitor-app/src/app/api/chat/`[projectId]/route.ts:
- Around line 8-15: The code only checks general project read rights via
hasPermission({ project: ["get"] }, session.user.id) but never verifies access
to the specific projectId from params; before creating the agent that uses
projectId, add a specific ownership/authorization check by either passing the
projectId into hasPermission (e.g., hasPermission({ project: ["get"] },
session.user.id, projectId) or similar) or by fetching the project and asserting
project.ownerId/session.user.id or calling an existing project-level
authorization helper; ensure this check uses the same session from
getAuthorizedSession and validates the extracted projectId from params before
proceeding to create the agent that consumes projectId.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aeb2e47 and 3267c3c.

📒 Files selected for processing (1)
  • apps/monitor-app/src/app/api/chat/[projectId]/route.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: check_pull_request
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-02-27T10:22:20.131Z
Learnt from: olekszczepanowski
Repo: Blazity/next-cwv-monitor PR: 118
File: apps/monitor-app/src/app/api/chat/[projectId]/route.ts:4-14
Timestamp: 2026-02-27T10:22:20.131Z
Learning: In the next-cwv-monitor codebase (apps/monitor-app), the proxy.ts middleware handles authentication by redirecting unauthenticated users to /login. Routes not in PUBLIC_ROUTES array (["/login", "/api/auth", "/api/health", "/api/ingest"]) are automatically protected by this authentication layer.

Applied to files:

  • apps/monitor-app/src/app/api/chat/[projectId]/route.ts
🧬 Code graph analysis (1)
apps/monitor-app/src/app/api/chat/[projectId]/route.ts (2)
apps/monitor-app/src/lib/auth-utils.ts (3)
  • getAuthorizedSession (24-36)
  • hasPermission (53-60)
  • UnauthorizedError (10-15)
apps/monitor-app/src/app/server/lib/agent/agent.ts (1)
  • createAgent (37-46)
🔇 Additional comments (2)
apps/monitor-app/src/app/api/chat/[projectId]/route.ts (2)

16-16: Missing input validation returns 500 instead of 400 for malformed requests.

req.json() can throw on invalid JSON, and destructuring will fail if messages is missing or not an array. Both cases fall into the generic catch block returning 500, which is incorrect for client input errors.

🛡️ Suggested validation
-    const { messages }: { messages: UIMessage[] } = await req.json();
+    let body: unknown;
+    try {
+      body = await req.json();
+    } catch {
+      return Response.json({ ok: false, error: "Invalid JSON body" }, { status: 400 });
+    }
+
+    if (
+      !body ||
+      typeof body !== "object" ||
+      !("messages" in body) ||
+      !Array.isArray((body as { messages: unknown }).messages)
+    ) {
+      return Response.json({ ok: false, error: "Invalid payload: messages[] is required" }, { status: 400 });
+    }
+
+    const { messages } = body as { messages: UIMessage[] };

18-30: Agent creation and error handling look correct.

The agent is created with the projectId, the streaming response is returned properly, and error handling now uses generic messages (addressing the prior concern about exposing internal details). The UnauthorizedError is correctly mapped to 401.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 12

♻️ Duplicate comments (6)
apps/monitor-app/.env.test (1)

10-11: ⚠️ Potential issue | 🟡 Minor

Reorder AI_ANALYST keys to satisfy dotenv-linter.

At Line 10 and Line 11, ordering still triggers UnorderedKey warnings. Move these keys before CLICKHOUSE_DB and keep AI_ANALYST_CLICKHOUSE_PASSWORD before AI_ANALYST_CLICKHOUSE_USER.

♻️ Proposed fix
 CLICKHOUSE_HOST=127.0.0.1
 CLICKHOUSE_PORT=18123
+AI_ANALYST_CLICKHOUSE_PASSWORD=ai_analyst_password
+AI_ANALYST_CLICKHOUSE_USER=ai_analyst_user
 CLICKHOUSE_DB=cwv_monitor_test
 CLICKHOUSE_USER=default
 CLICKHOUSE_PASSWORD=secret
-AI_ANALYST_CLICKHOUSE_USER=ai_analyst_user
-AI_ANALYST_CLICKHOUSE_PASSWORD=ai_analyst_password
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/.env.test` around lines 10 - 11, The dotenv-linter
UnorderedKey warnings are caused by AI_ANALYST keys placed after CLICKHOUSE_DB;
reorder the variables so AI_ANALYST_CLICKHOUSE_PASSWORD appears first,
immediately followed by AI_ANALYST_CLICKHOUSE_USER, and move both of them to be
positioned before the CLICKHOUSE_DB entry; update the .env test file so the
sequence contains AI_ANALYST_CLICKHOUSE_PASSWORD then AI_ANALYST_CLICKHOUSE_USER
before CLICKHOUSE_DB to satisfy the linter.
apps/monitor-app/.env.example (1)

31-32: ⚠️ Potential issue | 🟡 Minor

Reorder AI_MODEL before AI_PROVIDER to clear UnorderedKey warning.

Line 31/32 ordering still violates dotenv-linter’s key order rule.

🔧 Proposed fix
 AI_API_KEY=
-AI_PROVIDER=
 AI_MODEL=
+AI_PROVIDER=
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/.env.example` around lines 31 - 32, Reorder the two dotenv
keys so AI_MODEL appears before AI_PROVIDER to satisfy dotenv-linter's
UnorderedKey rule; update the block containing the AI_MODEL and AI_PROVIDER
entries (swap their lines) so the key order is alphabetical/consistent and
re-run the linter to confirm the warning is resolved.
apps/monitor-app/src/env.ts (1)

50-52: ⚠️ Potential issue | 🟡 Minor

AI_ANALYST_CLICKHOUSE_PASSWORD is stricter than the rest of the AI config.

AI provider/model/key are optional, but this hard-required password can still fail validation in deployments where AI features are intentionally disabled.

🔧 Suggested adjustment
-    AI_ANALYST_CLICKHOUSE_PASSWORD: z.string().min(1),
+    AI_ANALYST_CLICKHOUSE_PASSWORD: z.string().min(1).optional(),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/env.ts` around lines 50 - 52, The
AI_ANALYST_CLICKHOUSE_PASSWORD env var is currently required (z.string().min(1))
while other AI-related config can be optional; make
AI_ANALYST_CLICKHOUSE_PASSWORD optional or provide a safe default so deployments
that disable AI features won't fail validation — update the
AI_ANALYST_CLICKHOUSE_PASSWORD schema (next to AI_ANALYST_CLICKHOUSE_USER and
BETTER_AUTH_SECRET) to use .optional() or .default("") (e.g.,
z.string().optional() or z.string().default("")) to match the optional nature of
the other AI config entries.
apps/monitor-app/src/test/anomaly-detection.integration.test.ts (1)

10-14: ⚠️ Potential issue | 🟡 Minor

Guard teardown calls to avoid masking setup failures.

If setup fails before initialization completes, teardown can throw and obscure the root failure.

🛠️ Proposed fix
-let container: StartedTestContainer;
+let container: StartedTestContainer | undefined;
@@
-let directClient: ReturnType<typeof createClient>;
+let directClient: ReturnType<typeof createClient> | undefined;
@@
   afterAll(async () => {
-    await directClient.close();
-    await container.stop();
+    if (directClient) await directClient.close();
+    if (container) await container.stop();
   });

Also applies to: 35-38

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/test/anomaly-detection.integration.test.ts` around lines
10 - 14, The teardown in the test can throw if setup failed and those resources
were never initialized; update the after/afterEach teardown to guard calls
against undefined/null for StartedTestContainer 'container', ClickHouse client
creators 'sql' and 'aiSql' (if they require explicit cleanup), and
'directClient' (ReturnType of createClient) — e.g., only call container.stop(),
directClient.close()/destroy() or any client cleanup when the corresponding
variable is truthy and handle/await the cleanup safely so setup failures aren't
masked by teardown exceptions.
apps/monitor-app/src/app/server/lib/agent/tools/execute-sql.ts (2)

10-15: ⚠️ Potential issue | 🟡 Minor

Escape CSV fields before joining.

Current serialization can produce malformed CSV when values contain commas, quotes, or newlines.

🧾 Proposed fix
+const escapeCsv = (value: unknown) => {
+  const raw = String(value ?? "");
+  return /[",\n\r]/.test(raw) ? `"${raw.replaceAll('"', '""')}"` : raw;
+};
+
 const rowsToCsv = (rows: Record<string, unknown>[]) => {
   if (rows.length === 0) return "";
   const columns = Object.keys(rows[0]);
-  const header = columns.join(",");
-  const body = rows.map((row) => columns.map((col) => String(row[col] ?? "")).join(",")).join("\n");
+  const header = columns.map(escapeCsv).join(",");
+  const body = rows.map((row) => columns.map((col) => escapeCsv(row[col])).join(",")).join("\n");
   return `${header}\n${body}`;
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/app/server/lib/agent/tools/execute-sql.ts` around lines
10 - 15, The rowsToCsv serializer doesn't escape fields, causing malformed CSV
when values contain commas, quotes, or newlines; add an escape routine and use
it for headers and cell values in rowsToCsv: implement a helper (e.g.,
escapeCsvField) that converts undefined/null to empty string, replaces internal
double quotes with two double quotes, and wraps the field in double quotes if it
contains a comma, quote, or newline, then apply this helper when building header
(columns.map(escapeCsvField)) and body (columns.map(col =>
escapeCsvField(String(row[col] ?? "")))) inside the rowsToCsv function.

31-35: ⚠️ Potential issue | 🟠 Major

Apply a server-side row cap before materializing results.

MAX_ROWS is enforced only after fetching all rows, so large queries can still cause memory/latency spikes.

🛡️ Proposed fix
+const enforceRowCap = (query: string) => {
+  const trimmed = query.trim().replace(/;\s*$/, "");
+  return `SELECT * FROM (${trimmed}) AS _q LIMIT ${MAX_ROWS + 1}`;
+};
+
   execute: async ({ sql, explanation }) => {
@@
-      const rows = await aiSql`${aiSql.raw(sql)}`;
+      const rows = await aiSql`${aiSql.raw(enforceRowCap(sql))}`;
       const executionTime = Date.now() - startTime;
-      const totalRows = rows.length;
+      const hitLimit = rows.length > MAX_ROWS;
+      const totalRows = hitLimit ? MAX_ROWS : rows.length;
@@
-      if (totalRows > MAX_ROWS) {
+      if (hitLimit) {
         return {
           csv: rowsToCsv(rows.slice(0, PREVIEW_ROWS)),
           totalRows,
           truncated: true,

Based on learnings: DB-level read-only permissions are already enforced for this client, so this finding is focused on resource control.

Also applies to: 42-50

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/app/server/lib/agent/tools/execute-sql.ts` around lines
31 - 35, The code currently materializes all rows via aiSql`${aiSql.raw(sql)}`
and only applies MAX_ROWS after fetching, causing potential OOM; update execute
logic in execute-sql.ts to enforce a server-side cap by modifying the query
execution to request at most MAX_ROWS+1 rows (e.g., append/ensure a LIMIT clause
or use a cursor/streaming API) before materializing results so you never load
more than the allowed rows into memory; specifically change the call site using
the aiSql template (aiSql`${aiSql.raw(sql)}`) and the handling of the returned
rows variable so the database returns at most MAX_ROWS+1 and you can detect
overflow without fetching the entire result set.
🧹 Nitpick comments (11)
apps/monitor-app/src/instrumentation.ts (1)

4-5: Load independent dynamic imports concurrently.

Lines 4-5 can be resolved in parallel to shave a bit off startup latency.

♻️ Proposed refactor
-    const { syncDatabaseRoles } = await import("@/app/server/lib/clickhouse/bootstrap");
-    const { provisionInitialUser } = await import("@/lib/provision-initial-user");
+    const [{ syncDatabaseRoles }, { provisionInitialUser }] = await Promise.all([
+      import("@/app/server/lib/clickhouse/bootstrap"),
+      import("@/lib/provision-initial-user"),
+    ]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/instrumentation.ts` around lines 4 - 5, The two dynamic
imports for syncDatabaseRoles and provisionInitialUser should be executed in
parallel to reduce startup latency; replace the sequential await imports by
launching both imports together with Promise.all (e.g.,
Promise.all([import(...), import(...)]) ) and then destructure the needed
exports (syncDatabaseRoles from the ClickHouse bootstrap module and
provisionInitialUser from the provision-initial-user module) from the resolved
modules before using them.
apps/anomaly-worker/.env.example (1)

7-8: Consider adding AI_ANALYST_CLICKHOUSE_USER for consistency.

The file has AI_ANALYST_CLICKHOUSE_PASSWORD but is missing AI_ANALYST_CLICKHOUSE_USER. The monitor-app's env.ts defines both variables with AI_ANALYST_CLICKHOUSE_USER defaulting to "ai_analyst_user". For clarity and consistency, consider adding the user variable here as well.

💡 Suggested addition
 CLICKHOUSE_DB=cwv_monitor
+AI_ANALYST_CLICKHOUSE_USER=ai_analyst_user
 AI_ANALYST_CLICKHOUSE_PASSWORD=
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/anomaly-worker/.env.example` around lines 7 - 8, Add the missing
environment variable AI_ANALYST_CLICKHOUSE_USER to the example to match the
monitor-app's env.ts defaults and maintain consistency with
AI_ANALYST_CLICKHOUSE_PASSWORD; update the .env.example to include
AI_ANALYST_CLICKHOUSE_USER (defaulting to "ai_analyst_user" as used in
monitor-app's env.ts) so both user and password variables are present and clear
to consumers.
apps/monitor-app/src/test/global-setup.ts (1)

21-53: Consider extracting the config path to reduce duplication.

The config path is computed in both the try block (line 25) and catch block (line 47). Extracting it before the try block would be cleaner.

♻️ Suggested refactor
   try {
     const host = container.getHost() === "localhost" ? "127.0.0.1" : container.getHost();
     const port = container.getMappedPort(HTTP_PORT);
 
-    const configPath = path.resolve(process.cwd(), ".vitest-ch-config.json");
+    const configPath = getConfigPath();
     fs.writeFileSync(configPath, JSON.stringify({ host, port }));
     // ... rest of try block
   } catch (error) {
     await container.stop();
     container = undefined;
     
-    const configPath = path.resolve(process.cwd(), ".vitest-ch-config.json");
+    const configPath = getConfigPath();
     if (fs.existsSync(configPath)) {
       fs.unlinkSync(configPath);
     }
     throw error;
   }
 }
+
+function getConfigPath() {
+  return path.resolve(process.cwd(), ".vitest-ch-config.json");
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/test/global-setup.ts` around lines 21 - 53, Extract the
computed config path before the try/catch to eliminate duplication: compute
const configPath = path.resolve(process.cwd(), ".vitest-ch-config.json") above
the try, then remove the duplicate computation inside both the try block and the
catch block; leave the existing fs.writeFileSync(configPath, ...) in the try and
the fs.existsSync(configPath)/fs.unlinkSync(configPath) checks in the catch
unchanged, ensuring container handling and calls to waitForClickHouse and
runClickHouseMigrations remain the same.
apps/monitor-app/src/app/server/domain/notifications/__tests__/notifications-service.integration.test.ts (1)

21-24: This suite behaves like a unit test, not an integration test.

With repositories and dispatcher mocked, this doesn’t validate real ClickHouse + notification-channel integration. Consider renaming this suite to unit tests, or keep one true integration case with real repositories/DB wiring.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/monitor-app/src/app/server/domain/notifications/__tests__/notifications-service.integration.test.ts`
around lines 21 - 24, The test file notifications-service.integration.test.ts is
labeled an integration test but immediately mocks key components (vi.mock calls
for anomalies-repository, projects-repository, processed-anomalies-repository
and dispatcher), so change it to a true unit test or make it a real integration
test: either (A) rename the file/suite to notifications-service.unit.test.ts (or
update the describe/title) and keep the vi.mock(...) lines so the suite is a
proper unit test, or (B) remove or conditionally disable the vi.mock(...) lines
for the repositories and dispatcher so the suite uses the real ClickHouse
repositories and dispatcher wiring, and add the required test DB setup/teardown
and config so the test exercises actual ClickHouse + notification-channel
integration; reference the vi.mock calls and the test file/suite name when
making the change.
apps/monitor-app/src/app/server/lib/clickhouse/repositories/anomalies-repository.ts (2)

79-80: Add LIMIT 1 for anomaly-id lookup.

This keeps the intent explicit and avoids unnecessary row reads if data ever becomes non-unique.

⚙️ Proposed tweak
       FROM v_cwv_anomalies
       WHERE anomaly_id = ${anomalyId}
+      LIMIT 1
     `;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/monitor-app/src/app/server/lib/clickhouse/repositories/anomalies-repository.ts`
around lines 79 - 80, The SQL query in anomalies-repository.ts that selects by
anomaly_id (the query string built around WHERE anomaly_id = ${anomalyId})
should include a "LIMIT 1" to make the intent explicit and prevent unnecessary
row scans if duplicates ever appear; update the query construction within the
AnomaliesRepository (or the function/method that builds/executes this SELECT by
anomalyId) to append LIMIT 1 to the SQL statement before execution.

25-27: Log caught error objects consistently in repository failures.

A few catch blocks drop the actual error, which loses stack/context during incident debugging.

🪵 Proposed fix
   } catch (error) {
-    logger.error("Failed to get new anomalies");
+    logger.error({ err: error }, "Failed to get new anomalies");
     throw error;
   }
@@
   } catch (error) {
-    logger.error(`Failed to get anomaly by id: ${anomalyId}`);
+    logger.error({ err: error, anomalyId }, "Failed to get anomaly by id");
     throw error;
   }
@@
   } catch (error) {
-    logger.error(`Failed to get anomalies for project: ${projectId}`);
+    logger.error({ err: error, projectId }, "Failed to get anomalies for project");
     throw error;
   }

Also applies to: 83-84, 109-110

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/monitor-app/src/app/server/lib/clickhouse/repositories/anomalies-repository.ts`
around lines 25 - 27, Several catch blocks in anomalies-repository.ts (the one
logging "Failed to get new anomalies" and the other similar logs around the
messages on the other two catch blocks) log a message but drop the actual error
object; update those catch handlers to pass the caught error into logger.error
(e.g., logger.error("Failed to get new anomalies", error) or
logger.error("Failed to get new anomalies", { error })) so the stack and context
are preserved, keeping the existing re-throw (throw error) behavior intact; look
for uses of logger.error and the specific message strings in this file to locate
each catch block to change.
apps/monitor-app/src/app/server/lib/clickhouse/client.ts (1)

25-33: Validate test config shape before using it for connection URLs.

Coercing testInfo.host and testInfo.port with String() can silently turn missing fields into "undefined", which makes connection errors harder to diagnose.

🔍 Proposed fix
+type TestConnectionInfo = { host: string; port: string | number };
+
 function getTestConnectionInfo() {
@@
   if (fs.existsSync(configPath)) {
     try {
-      return JSON.parse(fs.readFileSync(configPath, "utf8"));
+      const parsed = JSON.parse(fs.readFileSync(configPath, "utf8")) as Partial<TestConnectionInfo>;
+      if (
+        parsed &&
+        typeof parsed.host === "string" &&
+        (typeof parsed.port === "string" || typeof parsed.port === "number")
+      ) {
+        return parsed;
+      }
+      return null;
     } catch {
       return null;
     }
   }

Also applies to: 39-42

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/app/server/lib/clickhouse/client.ts` around lines 25 -
33, The JSON read block that returns parsed config (using configPath,
fs.readFileSync, JSON.parse) must validate the required fields (e.g., host and
port) and their types before returning so callers don't end up doing
String(testInfo.host/port) on undefined; after parsing, check that the object
has the expected shape (host is a non-empty string and port is a number or
numeric string) and return null (or throw a clear error) if validation fails.
Update the same validation where the config is read at lines 39-42 as well, and
include a clear log/error message indicating which field is missing or invalid
so connection URL construction later fails fast and with useful diagnostics.
apps/monitor-app/src/app/server/domain/ai-bridge/service.ts (1)

42-44: Cache schema catalog instead of reloading it per anomaly context call.

This removes repeated I/O on a hot path and keeps latency more stable.

♻️ Proposed fix
 export class AiBridgeService {
+  private schemaCatalogPromise: Promise<string> | null = null;
+
@@
   async getSchemaCatalog(): Promise<string> {
-    return await loadSchemaCatalog();
+    this.schemaCatalogPromise ??= loadSchemaCatalog();
+    return await this.schemaCatalogPromise;
   }

Also applies to: 79-79

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/app/server/domain/ai-bridge/service.ts` around lines 42
- 44, The getSchemaCatalog function currently calls loadSchemaCatalog() on every
invocation causing repeated I/O; change it to cache the loaded string (e.g., a
module-level or class private variable like cachedSchemaCatalog) and return the
cached value on subsequent calls, lazily initializing it by calling
loadSchemaCatalog() only when cachedSchemaCatalog is null/undefined; apply the
same caching change to the other occurrence that calls loadSchemaCatalog so all
callers use the cachedSchemaCatalog and avoid repeated reloads.
apps/monitor-app/src/app/server/domain/notifications/dispatcher.ts (1)

84-87: Include response body snippets in webhook failure errors.

Status code alone is often insufficient for debugging payload/schema/auth issues.

🧰 Proposed fix
     if (!response.ok) {
-      const err = `Slack API returned ${response.status}`;
+      const body = await response.text().catch(() => "");
+      const err = `Slack API returned ${response.status}${body ? `: ${body.slice(0, 300)}` : ""}`;
       logger.error(err);
       throw new Error(err);
     }
@@
     if (!response.ok) {
-      const err = `Teams API returned ${response.status}`;
+      const body = await response.text().catch(() => "");
+      const err = `Teams API returned ${response.status}${body ? `: ${body.slice(0, 300)}` : ""}`;
       logger.error(err);
       throw new Error(err);
     }

Also applies to: 126-130

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitor-app/src/app/server/domain/notifications/dispatcher.ts` around
lines 84 - 87, When response.ok is false in dispatcher.ts (the branch that
currently does: const err = `Slack API returned ${response.status}`;
logger.error(err); throw new Error(err);), read the response body (await
response.text() or attempt JSON parse) and include a truncated snippet (e.g.,
first N chars) of that body in both the logger.error and the thrown Error
message so callers see status + body; if reading the body fails or is empty,
fall back to the status-only message. Apply the same change to the other failure
branch around the 126-130 block so both failure paths include response body
snippets in logs and errors.
docker/docker-compose.dev.yml (2)

150-150: Remove unused volume declaration.

The monitor-app-dev-node-modules volume is declared but no longer referenced by any service after the refactoring to use /workspace mappings.

🧹 Proposed cleanup
 volumes:
   clickhouse-dev-data:
   clickhouse-dev-logs:
-  monitor-app-dev-node-modules:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker/docker-compose.dev.yml` at line 150, Remove the unused Docker volume
declaration named monitor-app-dev-node-modules from the compose file: locate the
volume block that declares monitor-app-dev-node-modules and delete that entry
since no service references it anymore (services now use /workspace mappings);
ensure no other parts of the compose file reference that volume and run a quick
docker-compose config validation to confirm the file is still valid.

129-130: Consider using empty defaults for webhook URLs.

Using https://localhost/placeholder as defaults will cause connection failures/timeouts when the worker attempts to send notifications, potentially cluttering logs with misleading errors. Consider using empty strings and having the application skip notification when URLs are not configured.

♻️ Proposed fix
-      SLACK_WEBHOOK_URL: ${SLACK_WEBHOOK_URL:-https://localhost/placeholder}
-      TEAMS_WEBHOOK_URL: ${TEAMS_WEBHOOK_URL:-https://localhost/placeholder}
+      SLACK_WEBHOOK_URL: ${SLACK_WEBHOOK_URL:-}
+      TEAMS_WEBHOOK_URL: ${TEAMS_WEBHOOK_URL:-}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker/docker-compose.dev.yml` around lines 129 - 130, Replace the
placeholder default webhook URLs with empty defaults by changing the environment
variable fallbacks for SLACK_WEBHOOK_URL and TEAMS_WEBHOOK_URL from
"https://localhost/placeholder" to empty strings (e.g., ${SLACK_WEBHOOK_URL:-}
and ${TEAMS_WEBHOOK_URL:-}); also ensure the notification sending code checks
these env vars (SLACK_WEBHOOK_URL and TEAMS_WEBHOOK_URL) and skips sending when
they are empty or unset so the worker/notification logic does not attempt
connections to invalid URLs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/anomaly-worker/src/env.ts`:
- Around line 6-54: The env export using createEnv is missing the skipValidation
option which causes builds/tests to fail when env vars are absent; update the
createEnv call for the exported const env to include skipValidation (e.g., set
based on process.env.npm_lifecycle_event === "build" || process.env.NODE_ENV ===
"test") so validation is skipped during build/test runs—modify the createEnv
invocation (symbol: createEnv and exported const env) to include this
skipValidation logic alongside the existing server/client/runtimeEnv
configuration.
- Around line 6-29: The env schema in createEnv exports a large server object
but the anomaly-worker only uses env.LOG_LEVEL; remove all unused server keys
and simplify the schema to only declare server.LOG_LEVEL: reference the
createEnv call and the exported env constant and replace the current server
object (all AUTH_BASE_URL, TRUST_PROXY, CLICKHOUSE_*, AI_ANALYST_*,
BETTER_AUTH_SECRET, MIN_PASSWORD_SCORE, RATE_LIMIT_WINDOW_MS,
MAX_LOGIN_ATTEMPTS, INITIAL_USER_*, NODE_ENV, SLACK_WEBHOOK_URL,
TEAMS_WEBHOOK_URL) with a single LOG_LEVEL field using
z.enum(LOG_LEVELS).default("info") so the module only validates and exposes
env.LOG_LEVEL.

In `@apps/anomaly-worker/src/index.ts`:
- Around line 19-20: The runtime crash is caused by a TypeScript path alias
import of NotificationsService from `@monitor-app/`... that isn’t present in the
worker image; remove the direct cross-app import in index.ts (where
NotificationsService is instantiated and runDetectionCycle is invoked) and
instead either (A) extract NotificationsService into a shared package that
anomaly-worker lists as a dependency and import that package at runtime, or (B)
add monitor-app as a real dependency in anomaly-worker package.json and ensure
its compiled dist is built/copied into the worker image so the import resolves
at runtime; update index.ts to import from the runtime-resolvable package name
or accept a NotificationsService instance via dependency injection to avoid
cross-repo path-alias imports.
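The dependency-injection variant mentioned at the end can be sketched like this (the `NotificationsLike` interface and return type are assumptions for illustration, not monitor-app's actual types):

```typescript
// Sketch: the worker depends only on a narrow, locally declared interface,
// so no cross-app path-alias import is needed at runtime.
export interface NotificationsLike {
  notifyNewAnomalies(): Promise<number>; // assumed: returns count of anomalies handled
}

export async function runDetectionCycle(notifications: NotificationsLike): Promise<number> {
  // The concrete NotificationsService is injected by the composition root.
  return notifications.notifyNewAnomalies();
}
```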

In `@apps/monitor-app/.env.example`:
- Around line 10-11: The dotenv keys in the ClickHouse block are out of
alphabetical order and fail dotenv-linter; swap the two entries so
AI_ANALYST_CLICKHOUSE_PASSWORD appears before AI_ANALYST_CLICKHOUSE_USER (i.e.,
reorder the lines) in the .env example to satisfy linting while preserving the
same values.

In `@apps/monitor-app/.env.test`:
- Line 22: Remove the extra blank line immediately before the "Logging" section
header in the .env.test so the file has no consecutive blank lines (fixes
dotenv-linter ExtraBlankLine); open the .env.test, locate the "Logging" section
header and delete the blank line just above it so the header follows the
previous entry directly.

In `@apps/monitor-app/src/app/server/domain/ai-bridge/service.ts`:
- Around line 15-17: Replace the brittle string concatenation with the URL API:
strip the trailing slash from env.AUTH_BASE_URL into baseUrl, create a new URL
for the path `/projects/${data.project.id}/regressions` (e.g., new
URL(`/projects/${data.project.id}/regressions`, baseUrl)), set the query param
using url.searchParams.set('anomalyId', data.anomaly_id), and return
url.toString(); this ensures proper encoding of data.project.id and
data.anomaly_id and safe handling of query parameters.

In `@apps/monitor-app/src/app/server/lib/agent/utils/schema-catolog.ts`:
- Around line 10-11: The current catalog loading uses process.cwd() which is
fragile; change the resolution to locate the catalog relative to this module so
it works in monorepos/runtimes: replace the process.cwd() usage when building
catalogPath (the variable `catalogPath` and the subsequent `cachedCatalog =
await fs.readFile(...)` line) with a module-relative resolution (e.g., derive
the directory of this file via __dirname or, if ESM, use
fileURLToPath(import.meta.url) and path.join(moduleDir, "clickhouse",
"catalog.yml")), and optionally add a fallback lookup (e.g., try module-relative
path first, then process.cwd()) and clear error logging if file not found so
failures are visible.
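The module-relative resolution with a cwd fallback can be sketched as follows (the file layout is assumed from the comment above; in the real module, `moduleUrl` would be `import.meta.url`):

```typescript
import path from "node:path";
import { fileURLToPath } from "node:url";

// Candidate catalog locations: module-relative first, then process.cwd().
export function catalogCandidates(moduleUrl: string, cwd: string): string[] {
  const moduleDir = path.dirname(fileURLToPath(moduleUrl));
  return [
    path.join(moduleDir, "clickhouse", "catalog.yml"),
    path.join(cwd, "clickhouse", "catalog.yml"),
  ];
}
```

The caller would try each candidate in order and log a clear error naming the paths it checked if none exists.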

In
`@apps/monitor-app/src/app/server/lib/clickhouse/repositories/processed-anomalies-repository.ts`:
- Around line 1-3: The repository is performing an INSERT using the read-only
client aiSql; replace aiSql with the write-capable ClickHouse client used
elsewhere for mutations (e.g., the project's write client instance) and use that
client for the INSERT of InsertableProcessedAnomalyRow; update the import in
processed-anomalies-repository.ts to the write-capable client and adjust any
call sites that invoke aiSql for the mutation so they call the write client's
insert/exec method instead.

In `@apps/monitor-app/src/test/clickhouse-test-utils.ts`:
- Around line 95-101: The count parsing using Number(raw) can yield NaN and let
readiness checks pass incorrectly; update the parsing logic around sqlClient /
rows / raw so you coerce to a number and explicitly validate it (e.g., use
Number(raw) or parseInt then check Number.isFinite(parsed) or
!Number.isNaN(parsed)), and if the value is not a finite number throw an error
(or otherwise fail fast) instead of returning NaN; apply the same change to the
second occurrence referenced at lines 104-110 so both parsing sites validate and
fail strictly when the count cannot be parsed.
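The strict parsing described above amounts to (function name is illustrative):

```typescript
// Fail fast instead of letting NaN slip through a readiness check.
export function parseRowCount(raw: unknown): number {
  const parsed = Number(raw);
  if (!Number.isFinite(parsed)) {
    throw new Error(`Could not parse row count from: ${JSON.stringify(raw)}`);
  }
  return parsed;
}
```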

In `@apps/monitor-app/vitest.integration.setup.ts`:
- Around line 19-28: The config loader currently assigns
process.env.TEST_CH_HOST/TEST_CH_PORT directly which can set the literal
"undefined"; update the logic around configPath/config to validate properties
before setting env vars: verify config.host is a non-empty string and
config.port is a finite number (or numeric string) and only then set
process.env.TEST_CH_HOST and process.env.TEST_CH_PORT; if validation fails, log
a clear error (including which property is missing/invalid) and avoid setting
the env vars (or exit/throw if you prefer a hard-fail). Reference symbols:
configPath, config, process.env.TEST_CH_HOST, process.env.TEST_CH_PORT.
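One way to sketch that validation, taking the env object as a parameter for testability (the hard-fail variant; names follow the comment above):

```typescript
// Validate the parsed config before assigning env vars so the literal string
// "undefined" can never be written into process.env.
export function applyTestChEnv(
  config: { host?: unknown; port?: unknown },
  env: Record<string, string | undefined>,
): void {
  if (typeof config.host !== "string" || config.host.length === 0) {
    throw new Error("ClickHouse test config: 'host' is missing or invalid");
  }
  const port = Number(config.port);
  if (!Number.isFinite(port)) {
    throw new Error("ClickHouse test config: 'port' is missing or invalid");
  }
  env.TEST_CH_HOST = config.host;
  env.TEST_CH_PORT = String(port);
}
```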

In `@docker/docker-compose.yml`:
- Around line 120-121: Remove the localhost placeholder defaults for
SLACK_WEBHOOK_URL and TEAMS_WEBHOOK_URL so the compose file does not silently
pretend webhooks are configured; update the docker-compose env entries (the
SLACK_WEBHOOK_URL and TEAMS_WEBHOOK_URL variables) to omit the
":-https://localhost/placeholder" defaults (or set them to empty) and ensure the
runtime validates/throws on missing/empty webhook envs so containers fail fast
if webhooks are not provided.
- Around line 116-119: The compose file currently provides weak production
fallbacks for credentials (BETTER_AUTH_SECRET, INITIAL_USER_EMAIL,
INITIAL_USER_PASSWORD, INITIAL_USER_NAME); remove these insecure default values
and require callers to supply real secrets instead (e.g., drop the ":-..."
defaults or validate at startup), or switch to an external secrets mechanism
(env_file/secret manager) so those environment variables are not silently
populated with insecure fallback values; update any startup/validation logic to
fail fast with a clear error if BETTER_AUTH_SECRET or initial user credentials
are missing.

---

Duplicate comments:
In `@apps/monitor-app/.env.example`:
- Around line 31-32: Reorder the two dotenv keys so AI_MODEL appears before
AI_PROVIDER to satisfy dotenv-linter's UnorderedKey rule; update the block
containing the AI_MODEL and AI_PROVIDER entries (swap their lines) so the key
order is alphabetical/consistent and re-run the linter to confirm the warning is
resolved.

In `@apps/monitor-app/.env.test`:
- Around line 10-11: The dotenv-linter UnorderedKey warnings are caused by
AI_ANALYST keys placed after CLICKHOUSE_DB; reorder the variables so
AI_ANALYST_CLICKHOUSE_PASSWORD appears first, immediately followed by
AI_ANALYST_CLICKHOUSE_USER, and move both of them to be positioned before the
CLICKHOUSE_DB entry; update the .env test file so the sequence contains
AI_ANALYST_CLICKHOUSE_PASSWORD then AI_ANALYST_CLICKHOUSE_USER before
CLICKHOUSE_DB to satisfy the linter.

In `@apps/monitor-app/src/app/server/lib/agent/tools/execute-sql.ts`:
- Around line 10-15: The rowsToCsv serializer doesn't escape fields, causing
malformed CSV when values contain commas, quotes, or newlines; add an escape
routine and use it for headers and cell values in rowsToCsv: implement a helper
(e.g., escapeCsvField) that converts undefined/null to empty string, replaces
internal double quotes with two double quotes, and wraps the field in double
quotes if it contains a comma, quote, or newline, then apply this helper when
building header (columns.map(escapeCsvField)) and body (columns.map(col =>
escapeCsvField(String(row[col] ?? "")))) inside the rowsToCsv function.
- Around line 31-35: The code currently materializes all rows via
aiSql`${aiSql.raw(sql)}` and only applies MAX_ROWS after fetching, causing
potential OOM; update execute logic in execute-sql.ts to enforce a server-side
cap by modifying the query execution to request at most MAX_ROWS+1 rows (e.g.,
append/ensure a LIMIT clause or use a cursor/streaming API) before materializing
results so you never load more than the allowed rows into memory; specifically
change the call site using the aiSql template (aiSql`${aiSql.raw(sql)}`) and the
handling of the returned rows variable so the database returns at most
MAX_ROWS+1 and you can detect overflow without fetching the entire result set.
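The escape helper described in the first bullet can be sketched as follows (RFC 4180-style quoting; the name `escapeCsvField` follows the comment above):

```typescript
// Escape a single CSV field: empty string for null/undefined, double any
// embedded quotes, and wrap in quotes when the field contains a comma,
// quote, or newline.
export function escapeCsvField(value: unknown): string {
  const s = value === null || value === undefined ? "" : String(value);
  const needsQuoting = /[",\n\r]/.test(s);
  const escaped = s.replace(/"/g, '""');
  return needsQuoting ? `"${escaped}"` : s;
}
```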

In `@apps/monitor-app/src/env.ts`:
- Around line 50-52: The AI_ANALYST_CLICKHOUSE_PASSWORD env var is currently
required (z.string().min(1)) while other AI-related config can be optional; make
AI_ANALYST_CLICKHOUSE_PASSWORD optional or provide a safe default so deployments
that disable AI features won't fail validation — update the
AI_ANALYST_CLICKHOUSE_PASSWORD schema (next to AI_ANALYST_CLICKHOUSE_USER and
BETTER_AUTH_SECRET) to use .optional() or .default("") (e.g.,
z.string().optional() or z.string().default("")) to match the optional nature of
the other AI config entries.

In `@apps/monitor-app/src/test/anomaly-detection.integration.test.ts`:
- Around line 10-14: The teardown in the test can throw if setup failed and
those resources were never initialized; update the after/afterEach teardown to
guard calls against undefined/null for StartedTestContainer 'container',
ClickHouse client creators 'sql' and 'aiSql' (if they require explicit cleanup),
and 'directClient' (ReturnType of createClient) — e.g., only call
container.stop(), directClient.close()/destroy() or any client cleanup when the
corresponding variable is truthy and handle/await the cleanup safely so setup
failures aren't masked by teardown exceptions.
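A guarded teardown along those lines might look like this (resource shapes are illustrative; in the test they would be the Testcontainers container and the ClickHouse client):

```typescript
// Each resource may be undefined if setup failed early; cleanup errors are
// swallowed so the original setup failure is what surfaces in the test output.
type Stoppable = { stop: () => Promise<void> } | undefined;
type CloseableClient = { close: () => Promise<void> } | undefined;

export async function teardown(
  container: Stoppable,
  directClient: CloseableClient,
): Promise<void> {
  try {
    if (directClient) await directClient.close();
  } catch {
    // ignore: a teardown error is less useful than the setup error it would mask
  }
  try {
    if (container) await container.stop();
  } catch {
    // ignore
  }
}
```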

---

Nitpick comments:
In `@apps/anomaly-worker/.env.example`:
- Around line 7-8: Add the missing environment variable
AI_ANALYST_CLICKHOUSE_USER to the example to match the monitor-app's env.ts
defaults and maintain consistency with AI_ANALYST_CLICKHOUSE_PASSWORD; update
the .env.example to include AI_ANALYST_CLICKHOUSE_USER (defaulting to
"ai_analyst_user" as used in monitor-app's env.ts) so both user and password
variables are present and clear to consumers.

In `@apps/monitor-app/src/app/server/domain/ai-bridge/service.ts`:
- Around line 42-44: The getSchemaCatalog function currently calls
loadSchemaCatalog() on every invocation causing repeated I/O; change it to cache
the loaded string (e.g., a module-level or class private variable like
cachedSchemaCatalog) and return the cached value on subsequent calls, lazily
initializing it by calling loadSchemaCatalog() only when cachedSchemaCatalog is
null/undefined; apply the same caching change to the other occurrence that calls
loadSchemaCatalog so all callers use the cachedSchemaCatalog and avoid repeated
reloads.

In
`@apps/monitor-app/src/app/server/domain/notifications/__tests__/notifications-service.integration.test.ts`:
- Around line 21-24: The test file notifications-service.integration.test.ts is
labeled an integration test but immediately mocks key components (vi.mock calls
for anomalies-repository, projects-repository, processed-anomalies-repository
and dispatcher), so change it to a true unit test or make it a real integration
test: either (A) rename the file/suite to notifications-service.unit.test.ts (or
update the describe/title) and keep the vi.mock(...) lines so the suite is a
proper unit test, or (B) remove or conditionally disable the vi.mock(...) lines
for the repositories and dispatcher so the suite uses the real ClickHouse
repositories and dispatcher wiring, and add the required test DB setup/teardown
and config so the test exercises actual ClickHouse + notification-channel
integration; reference the vi.mock calls and the test file/suite name when
making the change.

In `@apps/monitor-app/src/app/server/domain/notifications/dispatcher.ts`:
- Around line 84-87: When response.ok is false in dispatcher.ts (the branch that
currently does: const err = `Slack API returned ${response.status}`;
logger.error(err); throw new Error(err);), read the response body (await
response.text() or attempt JSON parse) and include a truncated snippet (e.g.,
first N chars) of that body in both the logger.error and the thrown Error
message so callers see status + body; if reading the body fails or is empty,
fall back to the status-only message. Apply the same change to the other failure
branch around the 126-130 block so both failure paths include response body
snippets in logs and errors.

In `@apps/monitor-app/src/app/server/lib/clickhouse/client.ts`:
- Around line 25-33: The JSON read block that returns parsed config (using
configPath, fs.readFileSync, JSON.parse) must validate the required fields
(e.g., host and port) and their types before returning so callers don't end up
doing String(testInfo.host/port) on undefined; after parsing, check that the
object has the expected shape (host is a non-empty string and port is a number
or numeric string) and return null (or throw a clear error) if validation fails.
Update the same validation where the config is read at lines 39-42 as well, and
include a clear log/error message indicating which field is missing or invalid
so connection URL construction later fails fast and with useful diagnostics.

In
`@apps/monitor-app/src/app/server/lib/clickhouse/repositories/anomalies-repository.ts`:
- Around line 79-80: The SQL query in anomalies-repository.ts that selects by
anomaly_id (the query string built around WHERE anomaly_id = ${anomalyId})
should include a "LIMIT 1" to make the intent explicit and prevent unnecessary
row scans if duplicates ever appear; update the query construction within the
AnomaliesRepository (or the function/method that builds/executes this SELECT by
anomalyId) to append LIMIT 1 to the SQL statement before execution.
- Around line 25-27: Several catch blocks in anomalies-repository.ts (the one
logging "Failed to get new anomalies" and the two other catch blocks with
similar messages) log a message but drop the actual error
object; update those catch handlers to pass the caught error into logger.error
(e.g., logger.error("Failed to get new anomalies", error) or
logger.error("Failed to get new anomalies", { error })) so the stack and context
are preserved, keeping the existing re-throw (throw error) behavior intact; look
for uses of logger.error and the specific message strings in this file to locate
each catch block to change.

In `@apps/monitor-app/src/instrumentation.ts`:
- Around line 4-5: The two dynamic imports for syncDatabaseRoles and
provisionInitialUser should be executed in parallel to reduce startup latency;
replace the sequential await imports by launching both imports together with
Promise.all (e.g., Promise.all([import(...), import(...)]) ) and then
destructure the needed exports (syncDatabaseRoles from the ClickHouse bootstrap
module and provisionInitialUser from the provision-initial-user module) from the
resolved modules before using them.

In `@apps/monitor-app/src/test/global-setup.ts`:
- Around line 21-53: Extract the computed config path before the try/catch to
eliminate duplication: compute const configPath = path.resolve(process.cwd(),
".vitest-ch-config.json") above the try, then remove the duplicate computation
inside both the try block and the catch block; leave the existing
fs.writeFileSync(configPath, ...) in the try and the
fs.existsSync(configPath)/fs.unlinkSync(configPath) checks in the catch
unchanged, ensuring container handling and calls to waitForClickHouse and
runClickHouseMigrations remain the same.

In `@docker/docker-compose.dev.yml`:
- Line 150: Remove the unused Docker volume declaration named
monitor-app-dev-node-modules from the compose file: locate the volume block that
declares monitor-app-dev-node-modules and delete that entry since no service
references it anymore (services now use /workspace mappings); ensure no other
parts of the compose file reference that volume and run a quick docker-compose
config validation to confirm the file is still valid.
- Around line 129-130: Replace the placeholder default webhook URLs with empty
defaults by changing the environment variable fallbacks for SLACK_WEBHOOK_URL
and TEAMS_WEBHOOK_URL from "https://localhost/placeholder" to empty strings
(e.g., ${SLACK_WEBHOOK_URL:-} and ${TEAMS_WEBHOOK_URL:-}); also ensure the
notification sending code checks these env vars (SLACK_WEBHOOK_URL and
TEAMS_WEBHOOK_URL) and skips sending when they are empty or unset so the
worker/notification logic does not attempt connections to invalid URLs.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3267c3c and 8aa2fbe.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (42)
  • ARCHITECTURE.md
  • apps/anomaly-worker/.env.example
  • apps/anomaly-worker/.gitignore
  • apps/anomaly-worker/package.json
  • apps/anomaly-worker/src/env.ts
  • apps/anomaly-worker/src/index.ts
  • apps/anomaly-worker/tsconfig.json
  • apps/monitor-app/.env.example
  • apps/monitor-app/.env.test
  • apps/monitor-app/AI_AGENT_GUIDE.md
  • apps/monitor-app/clickhouse/catalog.yml
  • apps/monitor-app/package.json
  • apps/monitor-app/scripts/seed-demo-data.mjs
  • apps/monitor-app/src/app/server/domain/ai-bridge/__tests__/ai-bridge-service.integration.test.ts
  • apps/monitor-app/src/app/server/domain/ai-bridge/service.ts
  • apps/monitor-app/src/app/server/domain/notifications/__tests__/notifications-service.integration.test.ts
  • apps/monitor-app/src/app/server/domain/notifications/dispatcher.ts
  • apps/monitor-app/src/app/server/domain/notifications/service.ts
  • apps/monitor-app/src/app/server/domain/notifications/types.ts
  • apps/monitor-app/src/app/server/lib/agent/agent.ts
  • apps/monitor-app/src/app/server/lib/agent/tools/execute-sql.ts
  • apps/monitor-app/src/app/server/lib/agent/utils/schema-catolog.ts
  • apps/monitor-app/src/app/server/lib/agent/utils/system-prompt.ts
  • apps/monitor-app/src/app/server/lib/clickhouse/client.ts
  • apps/monitor-app/src/app/server/lib/clickhouse/repositories/__tests__/anomalies-repository.integration.test.ts
  • apps/monitor-app/src/app/server/lib/clickhouse/repositories/anomalies-repository.ts
  • apps/monitor-app/src/app/server/lib/clickhouse/repositories/processed-anomalies-repository.ts
  • apps/monitor-app/src/app/server/lib/clickhouse/repositories/projects-repository.ts
  • apps/monitor-app/src/app/server/lib/clickhouse/schema.ts
  • apps/monitor-app/src/env.ts
  • apps/monitor-app/src/instrumentation.ts
  • apps/monitor-app/src/test/anomaly-detection.integration.test.ts
  • apps/monitor-app/src/test/clickhouse-test-utils.ts
  • apps/monitor-app/src/test/global-setup.ts
  • apps/monitor-app/vitest.integration.config.ts
  • apps/monitor-app/vitest.integration.setup.ts
  • docker/anomaly-worker.Dockerfile
  • docker/docker-compose.build.yml
  • docker/docker-compose.dev.yml
  • docker/docker-compose.yml
  • docker/monitor-app.dev.Dockerfile
  • setup.sh
✅ Files skipped from review due to trivial changes (3)
  • apps/monitor-app/AI_AGENT_GUIDE.md
  • apps/anomaly-worker/.gitignore
  • apps/anomaly-worker/tsconfig.json
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/monitor-app/src/app/server/lib/agent/utils/system-prompt.ts
  • apps/monitor-app/package.json
📜 Review details
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: olekszczepanowski
Repo: Blazity/next-cwv-monitor PR: 118
File: apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts:23-33
Timestamp: 2026-02-27T10:21:06.393Z
Learning: In the next-cwv-monitor project, the agent ClickHouse client has database-level permissions restricted to read-only operations, providing security at the database layer rather than requiring application-level SQL validation for mutation prevention.
📚 Learning: 2026-02-27T10:22:20.131Z
Learnt from: olekszczepanowski
Repo: Blazity/next-cwv-monitor PR: 118
File: apps/monitor-app/src/app/api/chat/[projectId]/route.ts:4-14
Timestamp: 2026-02-27T10:22:20.131Z
Learning: In the next-cwv-monitor codebase (apps/monitor-app), the proxy.ts middleware handles authentication by redirecting unauthenticated users to /login. Routes not in PUBLIC_ROUTES array (["/login", "/api/auth", "/api/health", "/api/ingest"]) are automatically protected by this authentication layer.

Applied to files:

  • ARCHITECTURE.md
📚 Learning: 2026-02-27T10:21:06.393Z
Learnt from: olekszczepanowski
Repo: Blazity/next-cwv-monitor PR: 118
File: apps/monitor-app/src/app/server/lib/agent/tools/execute-clickhouse.ts:23-33
Timestamp: 2026-02-27T10:21:06.393Z
Learning: In the next-cwv-monitor project, the agent ClickHouse client has database-level permissions restricted to read-only operations, providing security at the database layer rather than requiring application-level SQL validation for mutation prevention.

Applied to files:

  • apps/monitor-app/src/app/server/lib/agent/tools/execute-sql.ts
  • apps/monitor-app/clickhouse/catalog.yml
  • setup.sh
  • apps/monitor-app/.env.example
  • apps/monitor-app/.env.test
🧬 Code graph analysis (15)
apps/anomaly-worker/src/index.ts (4)
apps/monitor-app/src/app/server/lib/logger.ts (1)
  • logger (6-9)
apps/monitor-app/src/env.ts (1)
  • env (41-99)
apps/anomaly-worker/src/env.ts (1)
  • env (6-54)
apps/monitor-app/src/app/server/domain/notifications/service.ts (1)
  • NotificationsService (14-126)
apps/monitor-app/src/app/server/lib/clickhouse/repositories/processed-anomalies-repository.ts (3)
apps/monitor-app/src/app/server/lib/clickhouse/schema.ts (1)
  • InsertableProcessedAnomalyRow (99-109)
apps/monitor-app/src/app/server/lib/clickhouse/client.ts (1)
  • aiSql (120-120)
apps/monitor-app/src/app/server/lib/logger.ts (1)
  • logger (6-9)
apps/monitor-app/src/app/server/lib/clickhouse/repositories/anomalies-repository.ts (3)
apps/monitor-app/src/app/server/lib/clickhouse/schema.ts (1)
  • AnomalyRow (85-97)
apps/monitor-app/src/app/server/lib/clickhouse/client.ts (1)
  • aiSql (120-120)
apps/monitor-app/src/app/server/lib/logger.ts (1)
  • logger (6-9)
apps/monitor-app/src/app/server/lib/agent/utils/schema-catolog.ts (1)
apps/monitor-app/src/app/server/lib/logger.ts (1)
  • logger (6-9)
apps/monitor-app/src/instrumentation.ts (3)
apps/monitor-app/scripts/seed-demo-data.mjs (1)
  • process (6-12)
apps/monitor-app/src/lib/provision-initial-user.ts (1)
  • provisionInitialUser (5-23)
apps/monitor-app/src/app/server/lib/clickhouse/bootstrap.ts (1)
  • syncDatabaseRoles (4-13)
apps/monitor-app/src/test/global-setup.ts (3)
apps/monitor-app/src/test/clickhouse-test-utils.ts (4)
  • CLICKHOUSE_IMAGE (9-9)
  • HTTP_PORT (10-10)
  • waitForClickHouse (32-54)
  • runClickHouseMigrations (67-89)
apps/monitor-app/scripts/run-clickhouse-migrate.mjs (1)
  • port (12-12)
apps/monitor-app/scripts/seed-demo-data.mjs (3)
  • path (154-154)
  • path (200-200)
  • process (6-12)
apps/monitor-app/src/app/server/domain/notifications/dispatcher.ts (3)
apps/monitor-app/src/app/server/domain/notifications/types.ts (2)
  • NotificationConfig (21-24)
  • NotificationPayload (8-19)
apps/monitor-app/src/app/server/lib/logger.ts (1)
  • logger (6-9)
apps/monitor-app/src/env.ts (1)
  • env (41-99)
apps/anomaly-worker/src/env.ts (2)
apps/monitor-app/src/env.ts (1)
  • env (41-99)
apps/monitor-app/scripts/seed-demo-data.mjs (1)
  • process (6-12)
apps/monitor-app/src/app/server/domain/ai-bridge/__tests__/ai-bridge-service.integration.test.ts (4)
apps/monitor-app/src/app/server/domain/ai-bridge/service.ts (1)
  • aiBridge (84-84)
apps/monitor-app/src/test/clickhouse-test-utils.ts (1)
  • setupClickHouseContainer (174-284)
apps/monitor-app/src/app/server/lib/clickhouse/repositories/projects-repository.ts (1)
  • createProject (10-30)
apps/monitor-app/src/app/server/lib/clickhouse/client.ts (2)
  • getDirectClient (122-122)
  • sql (118-118)
apps/monitor-app/src/env.ts (1)
apps/monitor-app/scripts/seed-demo-data.mjs (1)
  • process (6-12)
apps/monitor-app/src/app/server/domain/notifications/__tests__/notifications-service.integration.test.ts (3)
apps/monitor-app/src/app/server/domain/notifications/dispatcher.ts (1)
  • dispatcher (138-141)
apps/monitor-app/src/app/server/domain/notifications/service.ts (2)
  • notifyNewAnomalies (15-49)
  • notifyNewAnomalies (130-130)
apps/monitor-app/src/app/server/lib/clickhouse/schema.ts (2)
  • AnomalyRow (85-97)
  • ProjectRow (4-11)
apps/monitor-app/src/app/server/lib/agent/tools/execute-sql.ts (3)
apps/monitor-app/src/app/server/domain/ai-bridge/service.ts (1)
  • executeSql (19-40)
apps/monitor-app/src/app/server/lib/logger.ts (1)
  • logger (6-9)
apps/monitor-app/src/app/server/lib/clickhouse/client.ts (2)
  • sql (118-118)
  • aiSql (120-120)
apps/monitor-app/src/app/server/domain/notifications/service.ts (7)
apps/monitor-app/src/app/server/lib/clickhouse/repositories/anomalies-repository.ts (1)
  • getUnprocessedAnomalies (31-61)
apps/monitor-app/src/app/server/lib/clickhouse/schema.ts (1)
  • AnomalyRow (85-97)
apps/monitor-app/src/app/server/lib/clickhouse/repositories/projects-repository.ts (1)
  • getProjectById (32-44)
apps/monitor-app/src/app/server/domain/ai-bridge/service.ts (1)
  • aiBridge (84-84)
apps/monitor-app/src/app/server/domain/notifications/types.ts (1)
  • NotificationPayload (8-19)
apps/monitor-app/src/app/server/domain/notifications/dispatcher.ts (1)
  • dispatcher (138-141)
apps/monitor-app/src/app/server/lib/clickhouse/repositories/processed-anomalies-repository.ts (1)
  • insertProcessedAnomaly (5-33)
apps/monitor-app/src/test/clickhouse-test-utils.ts (2)
apps/monitor-app/scripts/seed-demo-data.mjs (1)
  • process (6-12)
apps/monitor-app/scripts/run-clickhouse-migrate.mjs (1)
  • port (12-12)
apps/monitor-app/src/app/server/lib/agent/agent.ts (5)
apps/monitor-app/src/env.ts (1)
  • env (41-99)
apps/monitor-app/src/app/server/lib/agent/utils/schema-catolog.ts (1)
  • loadSchemaCatalog (7-17)
apps/monitor-app/src/app/server/lib/agent/utils/system-prompt.ts (1)
  • buildSystemPrompt (1-38)
apps/monitor-app/src/app/server/domain/ai-bridge/service.ts (1)
  • executeSql (19-40)
apps/monitor-app/src/app/server/lib/agent/tools/execute-sql.ts (1)
  • executeSql (18-74)
🪛 dotenv-linter (4.0.0)

apps/anomaly-worker/.env.example

[warning] 5-5: [UnorderedKey] The CLICKHOUSE_PASSWORD key should go before the CLICKHOUSE_PORT key
[warning] 6-6: [UnorderedKey] The CLICKHOUSE_DB key should go before the CLICKHOUSE_HOST key
[warning] 7-7: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_PASSWORD key should go before the CLICKHOUSE_DB key
[warning] 8-8: [UnorderedKey] The BETTER_AUTH_SECRET key should go before the CLICKHOUSE_DB key
[warning] 11-11: [UnorderedKey] The INITIAL_USER_NAME key should go before the INITIAL_USER_PASSWORD key
[warning] 16-16: [UnorderedKey] The AUTH_BASE_URL key should go before the LOG_LEVEL key
apps/monitor-app/.env.example

[warning] 10-10: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_USER key should go before the CLICKHOUSE_DB key
[warning] 11-11: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_PASSWORD key should go before the AI_ANALYST_CLICKHOUSE_USER key
[warning] 32-32: [UnorderedKey] The AI_MODEL key should go before the AI_PROVIDER key

apps/monitor-app/.env.test

[warning] 10-10: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_USER key should go before the CLICKHOUSE_DB key
[warning] 11-11: [UnorderedKey] The AI_ANALYST_CLICKHOUSE_PASSWORD key should go before the AI_ANALYST_CLICKHOUSE_USER key
[warning] 22-22: [ExtraBlankLine] Extra blank line detected