feat: add curated Drata compliance workflows (#1)
Conversation
📝 Walkthrough

This PR introduces curated compliance workflow commands (…).

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant ArgParser
    participant WorkflowLib
    participant HTTP
    participant API
    participant Output
    User->>CLI: summary --compact --json --limit 50
    CLI->>ArgParser: parseRequestFlags()
    ArgParser->>ArgParser: Parse --compact, --limit
    ArgParser-->>CLI: flags {compact, limit, ...}
    CLI->>WorkflowLib: prepareWorkflowFlags(flags)
    WorkflowLib->>WorkflowLib: Resolve API key, set readOnly
    WorkflowLib-->>CLI: prepared flags
    CLI->>WorkflowLib: runSummary(flags)
    WorkflowLib->>HTTP: invokeOperation('controls', {page: 1, limit: 100})
    HTTP->>API: GET /controls?page=1&limit=100
    API-->>HTTP: {data: [...], total: N}
    HTTP->>HTTP: Check if more pages needed
    HTTP->>API: GET /controls?page=2&limit=100
    API-->>HTTP: {data: [...], total: N}
    HTTP-->>WorkflowLib: Merged payload
    WorkflowLib->>WorkflowLib: buildSummaryPayload()
    WorkflowLib->>WorkflowLib: Compute totals, apply limit
    WorkflowLib-->>CLI: {kind: "summary", ...}
    CLI->>Output: printWorkflowPayload(payload, {compact, json})
    Output-->>User: JSON output
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Review Summary by Qodo: Add curated compliance workflows and auth check command
Walkthrough

Description
- Add curated compliance workflow commands for common triage tasks: drata summary, drata controls failing, drata monitors failing, drata connections list, drata personnel issues, drata evidence expiring
- Implement drata auth check to validate API credentials against Drata API
- Add --compact flag for condensed workflow output and --limit for result capping
- Support v1 page/limit pagination alongside existing cursor-based pagination
- Extend shell completion for new workflow commands and their parameters

Diagram

```mermaid
flowchart LR
    CLI["CLI Entry Point"]
    AUTH["Auth Handler"]
    WORKFLOWS["Workflow Handlers"]
    WORKFLOWLIB["Workflows Library"]
    HTTP["HTTP/Pagination"]
    CLI -->|"auth check"| AUTH
    CLI -->|"summary/controls/monitors/etc"| WORKFLOWS
    WORKFLOWS -->|"prepareWorkflowFlags"| WORKFLOWLIB
    WORKFLOWLIB -->|"runWorkflowOperation"| HTTP
    HTTP -->|"page/limit pagination"| HTTP
    WORKFLOWLIB -->|"buildPayload"| WORKFLOWLIB
```
File Changes
1. README.md
Code Review by Qodo
Code Review
This pull request introduces curated compliance workflows to the CLI, including commands for summaries, failing controls, failing monitors, connection lists, personnel issues, and expiring evidence. It also adds an auth check command to validate credentials and implements page-based pagination support via new --limit and --compact flags. Review feedback identified an opportunity to optimize workspace retrieval by disabling unnecessary pagination and pointed out several bugs in the text formatters where property names were incorrectly mapped when the --compact flag is active.
```js
}

async function getFirstWorkspaceId(flags) {
  const { data } = await listV1("list-workspaces", flags);
```
This function fetches all workspaces just to retrieve the first one, which is inefficient if the user has many workspaces. Since only the first ID is needed, it's better to disable pagination and set a limit of 1 for this specific request.
```diff
-  const { data } = await listV1("list-workspaces", flags);
+  const { data } = await runWorkflowOperation("v1", "list-workspaces", { ...flags, allPages: false, limit: 1 });
```
```js
case "monitors.failing":
  return [`Failing Monitors: matching=${payload.matching} showing=${payload.showing}`, ...payload.monitors.map((m) => `${m.id} ${monitorStatus(m)} ${m.name}`)].join("\n");
case "connections.list":
  return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${connectionState(c)} ${c.clientAlias || c.clientType || ""}`)].join("\n");
```
The text formatter for connections.list fails to correctly display status and alias when the --compact flag is used. Compact connection objects store the calculated status in a status property and rename clientAlias to alias. The connectionState helper only works on raw objects and will default to DISCONNECTED for compact objects because the raw properties it expects are missing.
```diff
 case "connections.list":
-  return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${connectionState(c)} ${c.clientAlias || c.clientType || ""}`)].join("\n");
+  return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${c.status ?? connectionState(c)} ${c.alias || c.clientAlias || c.clientType || ""}`)].join("\n");
```
```js
case "connections.list":
  return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${connectionState(c)} ${c.clientAlias || c.clientType || ""}`)].join("\n");
case "personnel.issues":
  return [`Personnel with device issues: matching=${payload.matching} showing=${payload.showing}`, ...payload.personnel.map((p) => `${p.id} ${p.user?.email ?? p.email ?? ""} failing_devices=${p.devicesFailingComplianceCount ?? 0}`)].join("\n");
```
The text formatter for personnel.issues does not account for property renaming in compact objects. When --compact is used, devicesFailingComplianceCount is renamed to failing_devices, and the email is flattened to an email property. This causes the text output to show failing_devices=0 for compact objects even when issues exist.
```diff
 case "personnel.issues":
-  return [`Personnel with device issues: matching=${payload.matching} showing=${payload.showing}`, ...payload.personnel.map((p) => `${p.id} ${p.user?.email ?? p.email ?? ""} failing_devices=${p.devicesFailingComplianceCount ?? 0}`)].join("\n");
+  return [`Personnel with device issues: matching=${payload.matching} showing=${payload.showing}`, ...payload.personnel.map((p) => `${p.id} ${p.email ?? p.user?.email ?? ""} failing_devices=${p.failing_devices ?? p.devicesFailingComplianceCount ?? 0}`)].join("\n");
```
Pull request overview
Adds curated, read-only “workflow” commands (summary/triage style) on top of the generated Drata OpenAPI CLI surface, including compact outputs and improved pagination behavior.
Changes:
- Introduces `src/lib/workflows.mjs` implementing curated compliance workflows and compact output shaping.
- Extends CLI routing + shell completion to support new top-level workflow commands and `auth check`.
- Adds `--compact`, workflow `--limit`, and v1 page/limit pagination support for `--all-pages`, with accompanying tests and docs.
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| test/cli-core.test.mjs | Adds tests for --limit handling, page/limit pagination merging, auth check, curated workflow behavior. |
| src/lib/workflows.mjs | New workflow runner + payload builders (summary, failing controls/monitors, connections list, personnel issues, evidence expiring). |
| src/lib/http.mjs | Adds effective flag resolution for compact/limit and supports v1 page/limit pagination in invokeOperation. |
| src/lib/completion.mjs | Extends completion suggestions for new workflow commands/flags and auth check. |
| src/lib/args.mjs | Adds parsing/validation for --compact and --limit. |
| src/cli.mjs | Wires new commands into CLI (summary, controls failing, etc.) and implements auth check. |
| README.md | Documents curated workflows and new flags/commands. |
```js
function withListDefaults(flags) {
  const next = cloneFlags(flags);
  next.readOnly = true;
  next.allPages = true;
  setNamedDefault(next, "page", 1);
  setNamedDefault(next, "limit", 100);
```
withListDefaults sets the default limit query parameter to 100 for v1 list workflows. In the embedded v1 OpenAPI spec, several list endpoints used by these workflows (e.g. /controls, /connections, /workspaces/{workspaceId}/evidence-library) declare limit with a maximum of 50, so sending 100 risks 400/validation failures in real API calls. Consider defaulting to 50 (or the spec default), and/or clamping any requested limit to the parameter schema maximum when building workflow list flags.
```diff
+function clampNamedLimit(flags, max) {
+  const current = flags.named.get("limit");
+  if (!current) {
+    return;
+  }
+  flags.named.set(
+    "limit",
+    current.map((value) => {
+      const numeric = Number(value);
+      if (!Number.isFinite(numeric)) {
+        return value;
+      }
+      return String(Math.min(numeric, max));
+    }),
+  );
+}
+
 function withListDefaults(flags) {
   const next = cloneFlags(flags);
   next.readOnly = true;
   next.allPages = true;
   setNamedDefault(next, "page", 1);
-  setNamedDefault(next, "limit", 100);
+  setNamedDefault(next, "limit", 50);
+  clampNamedLimit(next, 50);
```
```js
const payload = responses[url.pathname];
assert.ok(payload, `unexpected path ${url.pathname}`);
assert.equal(url.searchParams.get("page"), "1");
assert.equal(url.searchParams.get("limit"), "100");
```
This test asserts that curated workflow list requests always use limit=100, but the v1 spec limits many list endpoints to limit <= 50. If the workflow defaults are adjusted to respect the OpenAPI maximum, this assertion should be updated accordingly (and ideally verify the default stays within the spec’s declared bounds).
```diff
-assert.equal(url.searchParams.get("limit"), "100");
+const limit = Number(url.searchParams.get("limit"));
+assert.ok(Number.isInteger(limit), "expected limit query parameter to be an integer");
+assert.ok(limit >= 1, "expected limit query parameter to be positive");
+assert.ok(limit <= 50, "expected limit query parameter to stay within the v1 spec maximum");
```
```js
json: parsedFlags.json || Boolean(input.json),
compact: parsedFlags.compact || Boolean(input.compact),
limit: parsedFlags.limit || Number(input.limit ?? 0),
retry: parsedFlags.retryProvided ? parsedFlags.retry : resolveInputRetry(input.retry, parsedFlags.retry),
```
resolveEffectiveRequestFlags merges limit using parsedFlags.limit || Number(input.limit ?? 0). This makes it impossible to override an --input limit with an explicit --limit 0 (since 0 is falsy), and it also bypasses the non-negative-integer validation that parseRequestFlags applies to --limit (input could be negative/float/NaN). Track whether --limit was provided (similar to retryProvided / timeoutMsProvided) and validate/coerce input.limit the same way as the CLI flag.
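The fix this comment describes could look roughly like the sketch below; `limitProvided`, `coerceLimit`, and `mergeLimit` are assumed names for illustration, not the repository's actual identifiers:

```javascript
// Hypothetical sketch: validate input.limit the same way the CLI flag is
// validated, and let an explicit --limit (even 0) win over --input.
function coerceLimit(value) {
  const numeric = Number(value);
  if (!Number.isInteger(numeric) || numeric < 0) {
    throw new Error(`--limit must be a non-negative integer, got ${value}`);
  }
  return numeric;
}

function mergeLimit(parsedFlags, input) {
  // Explicit --limit (tracked like retryProvided) always wins, even when 0.
  if (parsedFlags.limitProvided) {
    return parsedFlags.limit;
  }
  if (input.limit !== undefined && input.limit !== null) {
    return coerceLimit(input.limit); // reject negative/float/NaN input values
  }
  return 0;
}
```

A plain `||` merge cannot distinguish "user passed `--limit 0`" from "user passed nothing", which is why the provided-flag marker is needed.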
```js
async function handleEvidenceWorkflow(args) {
  const [subcommand, ...rest] = args;
  if (!subcommand || subcommand === "--help" || subcommand === "help") {
    printUsage();
    return;
  }

  const flags = await parseWorkflowRequestFlags(rest);
  const days = Number(takeWorkflowNamedFlag(flags, "days") ?? 30);
  const workspaceId = takeWorkflowNamedFlag(flags, "workspace-id");
  if (!Number.isInteger(days) || days < 0) {
    fail("invalid_days", `--days must be a non-negative integer`, { days });
  }

  if (subcommand === "expiring") {
    printWorkflowPayload(await runEvidenceExpiring(flags, { days, workspaceId }), flags);
    return;
```
The new evidence expiring workflow path (including --days validation and optional --workspace-id) isn’t covered by the added CLI integration tests, while other new workflows (auth check, summary, connections list, dry-run rejection) are. Adding at least one test that stubs the workspace lookup + evidence listing and asserts filtering/--days behavior would help prevent regressions.
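One shape such a test's core assertion could take, shown as a standalone sketch of the assumed `--days` semantics (evidence whose renewal date falls inside the window). The helper and the `renewalDate` field are hypothetical; a real test would instead stub the workspace lookup and call the actual workflow runner:

```javascript
// Hypothetical filter mirroring the assumed --days behavior: keep items
// whose renewalDate falls between "now" and now + days.
function filterExpiring(items, days, now = new Date("2024-06-01T00:00:00Z")) {
  const cutoff = new Date(now.getTime() + days * 24 * 60 * 60 * 1000);
  return items.filter((item) => {
    const renews = new Date(item.renewalDate);
    return renews >= now && renews <= cutoff;
  });
}
```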
```js
return {
  status: summary.status,
  controls: {
    total: summary.controls.total,
    passing: summary.controls.passing,
    needs_attention: summary.controls.needs_attention,
  },
  monitors: summary.monitors,
  personnel: summary.personnel,
  connections: summary.connections,
};
```
🔴 Compact summary payload missing kind field, breaking text formatter and losing recommendation
In buildSummaryPayload at lines 429-441, the compact branch constructs a new object without the kind or recommendation properties. Unlike other compact helpers (e.g. compactControlsPayload, compactConnectionsPayload) which use ...payload to preserve all existing keys, the compact summary manually picks fields and omits kind: "summary" and recommendation. This causes two issues: (1) formatWorkflowText (src/lib/workflows.mjs:502) won't match the "summary" case and falls through to the default branch (raw JSON dump), and (2) the actionable recommendation string is lost even in --json --compact output.
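A minimal sketch of the spread-based fix this comment implies, assuming the payload shape quoted above (the helper name is hypothetical):

```javascript
// Preserve every existing key, including kind: "summary" and
// recommendation, and only re-shape the nested controls object.
function compactSummaryPayload(payload) {
  return {
    ...payload,
    controls: {
      total: payload.controls.total,
      passing: payload.controls.passing,
      needs_attention: payload.controls.needs_attention,
    },
  };
}
```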
```js
case "monitors.failing":
  return [`Failing Monitors: matching=${payload.matching} showing=${payload.showing}`, ...payload.monitors.map((m) => `${m.id} ${monitorStatus(m)} ${m.name}`)].join("\n");
case "connections.list":
  return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${connectionState(c)} ${c.clientAlias || c.clientType || ""}`)].join("\n");
```
🔴 formatWorkflowText calls connectionState() on compact connection objects, always showing "NEVER_CONNECTED"
At src/lib/workflows.mjs:518, the text formatter calls connectionState(c) on each connection. connectionState reads raw API fields (connection.connected, connection.failedAt, connection.connectedAt) at src/lib/workflows.mjs:211-222. However, when --compact is used, connections have been transformed by compactConnection (src/lib/workflows.mjs:224-233) into { id, clientType, alias, status, providers } — none of the raw fields exist. Since c.connected is undefined (falsy), c.failedAt is undefined (falsy), and c.connectedAt is undefined (falsy → !c.connectedAt is true), connectionState always returns "NEVER_CONNECTED" regardless of actual status. Additionally, the formatter accesses c.clientAlias but compact connections use c.alias, so the alias is also never displayed.
```diff
-  return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${connectionState(c)} ${c.clientAlias || c.clientType || ""}`)].join("\n");
+  return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${c.status ?? connectionState(c)} ${c.alias || c.clientAlias || c.clientType || ""}`)].join("\n");
```
```js
case "connections.list":
  return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${connectionState(c)} ${c.clientAlias || c.clientType || ""}`)].join("\n");
case "personnel.issues":
  return [`Personnel with device issues: matching=${payload.matching} showing=${payload.showing}`, ...payload.personnel.map((p) => `${p.id} ${p.user?.email ?? p.email ?? ""} failing_devices=${p.devicesFailingComplianceCount ?? 0}`)].join("\n");
```
🔴 formatWorkflowText reads devicesFailingComplianceCount on compact personnel objects where the field is named failing_devices
At src/lib/workflows.mjs:520, the text formatter reads p.devicesFailingComplianceCount ?? 0 for personnel items. When --compact is used, compactPersonnel (src/lib/workflows.mjs:242-248) renames this field to failing_devices. So for compact personnel objects, p.devicesFailingComplianceCount is undefined and the ?? 0 fallback always produces 0, making every person appear to have zero failing devices in the text output.
```diff
-  return [`Personnel with device issues: matching=${payload.matching} showing=${payload.showing}`, ...payload.personnel.map((p) => `${p.id} ${p.user?.email ?? p.email ?? ""} failing_devices=${p.devicesFailingComplianceCount ?? 0}`)].join("\n");
+  return [`Personnel with device issues: matching=${payload.matching} showing=${payload.showing}`, ...payload.personnel.map((p) => `${p.id} ${p.user?.email ?? p.email ?? ""} failing_devices=${p.failing_devices ?? p.devicesFailingComplianceCount ?? 0}`)].join("\n");
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 291c6be0b0
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```js
case "monitors.failing":
  return [`Failing Monitors: matching=${payload.matching} showing=${payload.showing}`, ...payload.monitors.map((m) => `${m.id} ${monitorStatus(m)} ${m.name}`)].join("\n");
case "connections.list":
  return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${connectionState(c)} ${c.clientAlias || c.clientType || ""}`)].join("\n");
```
Render compact connections with existing status field
When --compact is used without --json, connections.list entries have already been transformed to { status, alias, ... }, but this formatter recomputes state via connectionState(c) and reads clientAlias. Because compact items no longer carry connected/connectedAt/failedAt or clientAlias, disconnected/failed rows are printed as NEVER_CONNECTED (and aliases are dropped), which misreports integration health in human-readable output.
```js
case "connections.list":
  return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${connectionState(c)} ${c.clientAlias || c.clientType || ""}`)].join("\n");
case "personnel.issues":
  return [`Personnel with device issues: matching=${payload.matching} showing=${payload.showing}`, ...payload.personnel.map((p) => `${p.id} ${p.user?.email ?? p.email ?? ""} failing_devices=${p.devicesFailingComplianceCount ?? 0}`)].join("\n");
```
Preserve failing device counts in compact personnel text
buildPersonnelIssuesPayload(..., { compact: true }) renames the count field to failing_devices, but the text formatter still reads devicesFailingComplianceCount. As a result, drata personnel issues --compact always prints failing_devices=0 even for users that were filtered in for nonzero failures, which understates the severity of compliance issues.
Actionable comments posted: 2
🧹 Nitpick comments (3)
src/lib/workflows.mjs (1)
57-78: Dead `!flags.dryRun` check.
`flags.dryRun` is guaranteed false by the time we reach line 63 because we already `fail(...)` at lines 58-60 when `flags.dryRun` is truthy. The second guard can be simplified.

♻️ Proposed cleanup
```diff
 const { apiKey, source } = await resolveApiKey(flags);
-if (!apiKey && !flags.dryRun) {
+if (!apiKey) {
   fail(
     "missing_api_key",
     "Missing Drata API key. Use auth login, DRATA_API_KEY, DRATA_API_KEY_CMD, --api-key, --api-key-file, or --api-key-stdin.",
   );
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/workflows.mjs` around lines 57-78: in prepareWorkflowFlags, remove the redundant dead check for !flags.dryRun when validating apiKey: since flags.dryRun is already handled (fail(...) earlier), simplify the guard to just check apiKey (e.g., if (!apiKey) call fail("missing_api_key", ...)), keeping resolveApiKey usage and returning the same augmented flags (apiKey, apiKeySource: source, apiKeyFile: null, apiKeyStdin: false, readOnly: true); update only the condition around the fail call and leave fail, resolveApiKey, and the returned shape unchanged.

test/cli-core.test.mjs (1)
389-399: Redundant `parsedFlags.named.set("limit", ["2"])`.
`parseRequestFlags(["--limit", "2", ...])` already pushes `"2"` into `parsed.named["limit"]` via `pushValue(parsed.named, "limit", result.value)` in `src/lib/args.mjs`, so this explicit set is a no-op and can be removed to avoid implying that the parser doesn't already handle this.

♻️ Proposed cleanup
```diff
   "--limit",
   "2",
   "--raw",
 ]);
-parsedFlags.named.set("limit", ["2"]);
 const originalFetch = globalThis.fetch;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/cli-core.test.mjs` around lines 389-399: remove the redundant manual set of the "limit" flag in the test: parsedFlags.named.set("limit", ["2"]) is unnecessary because parseRequestFlags already adds the value via pushValue(parsed.named, "limit", result.value); update the test to rely on parseRequestFlags producing the "limit" entry (i.e., delete the parsedFlags.named.set(...) line) and run the test to confirm behavior remains the same.

src/cli.mjs (1)
184-206: `auth check --dry-run` will be rejected as `unsupported_workflow_dry_run`.
prepareWorkflowFlags, which rejects--dry-run, users can't preview theauth checkrequest the way they can with other read-only commands. If that's deliberate (sinceauth checkis meant to hit the API), consider a slightly friendlier error for this entry point, or wireauth checkdirectly throughresolveEffectiveRequestFlagsand handle--dry-runby short-circuiting toserializePreparedRequest. Non-blocking.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli.mjs` around lines 184 - 206, The auth check path currently calls prepareWorkflowFlags which rejects --dry-run causing unsupported_workflow_dry_run; fix by bypassing prepareWorkflowFlags for the "check" subcommand: call resolveEffectiveRequestFlags(parseRequestFlags(rest)) directly, then if flags.dryRun is set short-circuit and output serializePreparedRequest(flags) (or otherwise serializePreparedRequest of the prepared request) instead of calling runWorkflowOperation; if dry-run is not set, proceed to call runWorkflowOperation as before. Update references: subcommand "check", prepareWorkflowFlags, resolveEffectiveRequestFlags, parseRequestFlags, runWorkflowOperation, and serializePreparedRequest.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/workflows.mjs`:
- Around line 484-489: runEvidenceExpiring can end up with workspaceId in both
the path and flags.named because withListDefaults(flags) copies flags without
removing a user-supplied flags.named.workspaceId, then withPath(next, {
workspaceId }) appends it to next.path; fix by ensuring the user-provided
workspace ID is removed from flags.named (or prefer it and skip
getFirstWorkspaceId) before calling withListDefaults/withPath: check
flags.named.workspaceId and if present use that value for workspaceId and delete
flags.named.workspaceId (or bypass getFirstWorkspaceId entirely), then call
withListDefaults(flags) and withPath(next, { workspaceId }) so only the path
contains workspaceId (references: runEvidenceExpiring, withListDefaults,
withPath, getFirstWorkspaceId, flags.named).
- Around line 376-404: The summary's needs_attention currently includes
controlSummary.ready but buildControlsFailingPayload filters out "READY",
causing the mismatch; update buildSummaryPayload to remove controlSummary.ready
from needs_attention and instead add a new summary bucket (e.g., not_monitored
or not_active) set to controlSummary.ready, and update any consumers/UI labels
accordingly (referencing the needs_attention key in the summary object, the
controlSummary.ready value, and buildControlsFailingPayload which filters
["NOT_READY","NO_OWNER","NEEDS_EVIDENCE"]).
---
Nitpick comments:
In `@src/cli.mjs`:
- Around line 184-206: The auth check path currently calls prepareWorkflowFlags
which rejects --dry-run causing unsupported_workflow_dry_run; fix by bypassing
prepareWorkflowFlags for the "check" subcommand: call
resolveEffectiveRequestFlags(parseRequestFlags(rest)) directly, then if
flags.dryRun is set short-circuit and output serializePreparedRequest(flags) (or
otherwise serializePreparedRequest of the prepared request) instead of calling
runWorkflowOperation; if dry-run is not set, proceed to call
runWorkflowOperation as before. Update references: subcommand "check",
prepareWorkflowFlags, resolveEffectiveRequestFlags, parseRequestFlags,
runWorkflowOperation, and serializePreparedRequest.
In `@src/lib/workflows.mjs`:
- Around line 57-78: In prepareWorkflowFlags, remove the redundant dead check
for !flags.dryRun when validating apiKey: since flags.dryRun is already handled
(fail(...) earlier), simplify the guard to just check apiKey (e.g., if (!apiKey)
call fail("missing_api_key", ...)), keeping resolveApiKey usage and returning
the same augmented flags (apiKey, apiKeySource: source, apiKeyFile: null,
apiKeyStdin: false, readOnly: true); update only the condition around the fail
call and leave fail, resolveApiKey, and the returned shape unchanged.
In `@test/cli-core.test.mjs`:
- Around line 389-399: Remove the redundant manual set of the "limit" flag in
the test: parsedFlags.named.set("limit", ["2"]) is unnecessary because
parseRequestFlags already adds the value via pushValue(parsed.named, "limit",
result.value); update the test to rely on parseRequestFlags(pars) producing the
"limit" entry (i.e., delete the parsedFlags.named.set(...) line) and run the
test to confirm behavior remains the same.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f8069118-bbe5-4edd-b486-5e062da2425a
📒 Files selected for processing (7)
- README.md
- src/cli.mjs
- src/lib/args.mjs
- src/lib/completion.mjs
- src/lib/http.mjs
- src/lib/workflows.mjs
- test/cli-core.test.mjs
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Agent
🔇 Additional comments (1)
src/cli.mjs (1)
595-600: This issue is not present: `normalizeFlagName` standardizes to kebab-case, not camelCase.

The verification shows that `normalizeFlagName("workspace-id")` returns `"workspace-id"`, not `"workspaceId"`. Flags are stored in `flags.named` using the kebab-case key, so `takeWorkflowNamedFlag(flags, "workspace-id")` will correctly retrieve the value. The same normalization applies to `"status"` and `"days"`, which is why all three lookups work as expected. No API call duplication or flag conflict will occur when a user provides `--workspace-id`.

> Likely an incorrect or invalid review comment.
```js
export async function runEvidenceExpiring(flags, options = {}) {
  const workspaceId = options.workspaceId || (await getFirstWorkspaceId(flags));
  const listFlags = withPath(withListDefaults(flags), { workspaceId });
  const { data } = await runWorkflowOperation("v1", "list-evidence", listFlags);
  return buildEvidenceExpiringPayload(data, flags, options.days);
}
```
runEvidenceExpiring may double-inject workspaceId into the path.
withListDefaults(flags) clones flags but does not strip the user's --workspace-id value from flags.named. withPath(..., { workspaceId }) then appends ["workspaceId", ...] to next.path. When invokeOperation runs, resolveNamedParameters will merge both the path entry (auto-detected) and the named entry (user's explicit value) into values.path.workspaceId. Today this happens to work because only the last value is kept for non-array params, but it's coupled to that behavior and also causes an unnecessary list-workspaces API call (see the handleEvidenceWorkflow comment).
Once --workspace-id is correctly taken out of flags.named in the CLI handler, consider asserting that the user-supplied ID skips getFirstWorkspaceId entirely so the extra request is avoided.
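One way to express that ordering, as a standalone sketch (the helper name and the Map-keyed flag storage are assumptions based on the review comments above):

```javascript
// Prefer an explicit workspace ID, strip it from flags.named so the ID
// only travels via the path, and signal whether a lookup is still needed.
function resolveWorkspaceTarget(flags, options) {
  const fromFlag = flags.named.get("workspace-id")?.[0];
  flags.named.delete("workspace-id"); // avoid double-injecting into values.path
  const workspaceId = options.workspaceId ?? fromFlag ?? null;
  return { workspaceId, needsLookup: workspaceId === null };
}
```

The caller would only fall back to the first-workspace lookup when `needsLookup` is true, which also avoids the extra list-workspaces request.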
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/workflows.mjs` around lines 484 - 489, runEvidenceExpiring can end up
with workspaceId in both the path and flags.named because
withListDefaults(flags) copies flags without removing a user-supplied
flags.named.workspaceId, then withPath(next, { workspaceId }) appends it to
next.path; fix by ensuring the user-provided workspace ID is removed from
flags.named (or prefer it and skip getFirstWorkspaceId) before calling
withListDefaults/withPath: check flags.named.workspaceId and if present use that
value for workspaceId and delete flags.named.workspaceId (or bypass
getFirstWorkspaceId entirely), then call withListDefaults(flags) and
withPath(next, { workspaceId }) so only the path contains workspaceId
(references: runEvidenceExpiring, withListDefaults, withPath,
getFirstWorkspaceId, flags.named).
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1f486d57b4
```js
(total === null ? true : mergedItems.length < total) &&
  (!requestedLimit || items.length >= requestedLimit);
```
Continue page fetches when total says more items exist
In page-based --all-pages mode, the continuation check still requires items.length >= requestedLimit even when the response provides a total and mergedItems.length < total. If the API returns fewer items than requested on a page (for example due to server-side page-size caps), pagination stops early and silently drops remaining records, which makes workflow summaries and raw paginated outputs incomplete.
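A sketch of the corrected continuation check, assuming `total`, `mergedItems`, `items`, and `requestedLimit` carry the meanings described above; the surrounding fetch loop and names are illustrative:

```javascript
// Decide whether another page should be fetched. When the response
// carries a total, trust it and keep fetching until mergedItems reaches
// it, regardless of how full the last page was.
function shouldFetchNextPage({ total, mergedItems, items, requestedLimit }) {
  if (total !== null) {
    return mergedItems.length < total; // total is authoritative
  }
  // Without a total, fall back to the page-fill heuristic.
  return items.length > 0 && (!requestedLimit || items.length >= requestedLimit);
}
```

With this shape, a server-side page-size cap that returns short pages no longer terminates pagination early when `total` says more records exist.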
```js
next.allPages = true;
next.named.delete("limit");
setNamedDefault(next, "page", 1);
pushNamed(next, "limit", 100);
```
Use a v1-valid default page size in workflow list calls
`withListDefaults` hardcodes `limit=100` for all curated v1 workflow list requests, but the v1 list endpoints used here define a maximum `limit` of 50 in the local OpenAPI spec. This sends out-of-range query values on every workflow call and can cause request failures (or truncated collections when the server clamps the page size).
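One way to address this is to clamp the default to the spec maximum; a minimal sketch, where the value 50 reflects the review's reading of the local OpenAPI spec rather than a verified Drata API constant:

```javascript
// Sketch: clamp the workflow default page size to the spec maximum. The value
// 50 reflects the review's reading of the local OpenAPI spec, not a verified
// Drata API constant.
const MAX_V1_PAGE_SIZE = 50;

function defaultPageSize(requested) {
  const wanted = Number.isFinite(requested) ? requested : MAX_V1_PAGE_SIZE;
  return Math.min(Math.max(wanted, 1), MAX_V1_PAGE_SIZE);
}

console.log(defaultPageSize(100));       // 50: clamped to the spec maximum
console.log(defaultPageSize(25));        // 25: in-range values pass through
console.log(defaultPageSize(undefined)); // 50: default when nothing was requested
```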
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7bd1346300
```js
  return;
}

const flags = await parseWorkflowRequestFlags(rest);
```
Validate workflow subcommand before resolving flags
This handler resolves workflow flags (`parseWorkflowRequestFlags`) before verifying the subcommand, and `parseWorkflowRequestFlags` performs auth checks. As a result, unauthenticated users see `missing_api_key` for typos like `drata controls nope` (and for `... --help`) instead of the intended unknown-command/help response, which blocks basic command discovery when not logged in.
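A minimal sketch of the suggested ordering, assuming a simplified handler (the subcommand set and return shapes here are illustrative, not the PR's actual code):

```javascript
// Sketch: handle help and unknown subcommands before any flag parsing, so the
// auth check inside parseWorkflowRequestFlags never runs for typos or --help.
// KNOWN_SUBCOMMANDS and the return shapes are assumptions for illustration.
const KNOWN_SUBCOMMANDS = new Set(["failing"]);

async function handleControls(sub, rest, parseWorkflowRequestFlags) {
  if (sub === undefined || sub === "--help") return { kind: "help" };
  if (!KNOWN_SUBCOMMANDS.has(sub)) return { kind: "unknown-command", sub };
  const flags = await parseWorkflowRequestFlags(rest); // auth happens in here
  return { kind: "run", sub, flags };
}

// A typo never reaches the flag parser, so no missing_api_key error:
handleControls("nope", [], () => { throw new Error("unreachable"); })
  .then((result) => console.log(result.kind)); // "unknown-command"
```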
```js
case "monitors.failing":
  return [`Failing Monitors: matching=${payload.matching} showing=${payload.showing}`, ...payload.monitors.map((m) => `${m.id} ${monitorStatus(m)} ${m.name}`)].join("\n");
case "connections.list":
  return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${connectionState(c)} ${c.clientAlias || c.clientType || ""}`)].join("\n");
```
🔴 `formatWorkflowText` calls `connectionState()` on compact connection objects, producing a wrong status
When `--compact` is used without `--json`, `formatWorkflowText` at line 537 calls `connectionState(c)` on connection objects that have already been transformed by `compactConnection` (src/lib/workflows.mjs:227-235). Compact connections have a pre-computed `status` field but lack the raw API properties (`connected`, `failedAt`, `connectedAt`) that `connectionState()` inspects. Since all three are `undefined` on compact objects, `connectionState` always returns `"NEVER_CONNECTED"` (line 221: `!undefined` → `true`), regardless of the actual connection state. Additionally, `c.clientAlias` is accessed, but the compact form uses `c.alias` instead.

Trace of a compact connection through `connectionState()`:
- Compact shape: `{ id, clientType, alias, status, providers }`
- `connectionState(c)`: `c.connected` → `undefined` → skip; `c.failedAt` → `undefined` → skip; `!c.connectedAt` → `!undefined` → `true` → return `"NEVER_CONNECTED"`.
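The trace above can be reproduced with a simplified stand-in for `connectionState` (the branch order is inferred from the review; the real function in src/lib/workflows.mjs may differ in its other return values):

```javascript
// Simplified stand-in for connectionState, inferred from the review's trace.
function connectionState(c) {
  if (c.connected) return "CONNECTED";
  if (c.failedAt) return "FAILED";
  if (!c.connectedAt) return "NEVER_CONNECTED";
  return "DISCONNECTED";
}

// A compact connection: raw fields are gone, only the pre-computed status remains.
const compact = { id: 7, clientType: "AWS", alias: "prod", status: "CONNECTED", providers: [] };
console.log(connectionState(compact));                   // "NEVER_CONNECTED" (wrong)
console.log(compact.status ?? connectionState(compact)); // "CONNECTED" (uses the precomputed field)
```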
Suggested change (the suggested right-hand side needs parentheses, since `??` cannot be mixed with `||` without them):

```diff
-        return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${connectionState(c)} ${c.clientAlias || c.clientType || ""}`)].join("\n");
+        return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${c.status ?? connectionState(c)} ${c.alias ?? (c.clientAlias || c.clientType || "")}`)].join("\n");
```
```js
case "connections.list":
  return [`Connections: matching=${payload.matching} showing=${payload.showing}`, ...payload.connections.map((c) => `${c.id} ${connectionState(c)} ${c.clientAlias || c.clientType || ""}`)].join("\n");
case "personnel.issues":
  return [`Personnel with device issues: matching=${payload.matching} showing=${payload.showing}`, ...payload.personnel.map((p) => `${p.id} ${p.user?.email ?? p.email ?? ""} failing_devices=${p.devicesFailingComplianceCount ?? 0}`)].join("\n");
```
🔴 `formatWorkflowText` reads `devicesFailingComplianceCount` on compact personnel objects that use `failing_devices`
When `--compact` is used without `--json`, `formatWorkflowText` at line 539 accesses `p.devicesFailingComplianceCount` on personnel objects. After the `compactPersonnel` transformation (src/lib/workflows.mjs:245-251), this property is renamed to `failing_devices`. Since `p.devicesFailingComplianceCount` is `undefined` on compact objects, the `?? 0` fallback triggers and every person displays `failing_devices=0` regardless of the actual count.
Suggested change:

```diff
-        return [`Personnel with device issues: matching=${payload.matching} showing=${payload.showing}`, ...payload.personnel.map((p) => `${p.id} ${p.user?.email ?? p.email ?? ""} failing_devices=${p.devicesFailingComplianceCount ?? 0}`)].join("\n");
+        return [`Personnel with device issues: matching=${payload.matching} showing=${payload.showing}`, ...payload.personnel.map((p) => `${p.id} ${p.user?.email ?? p.email ?? ""} failing_devices=${p.failing_devices ?? p.devicesFailingComplianceCount ?? 0}`)].join("\n");
```
```js
}

if (flags.compact) {
  return {
```
🟡 Compact summary payload omits `kind` field, breaking text rendering
In `buildSummaryPayload` (src/lib/workflows.mjs:438-449), the compact return path omits the `kind` property that all other workflow payloads include. When `--compact` is used without `--json`, `formatWorkflowText` switches on `payload.kind` (line 520), which is `undefined` for the compact summary. This causes it to fall through to the default case (line 542) and dump raw JSON instead of the formatted summary text.
Suggested change:

```diff
         return {
+          kind: "summary",
```
feat: add curated investigation workflows
Summary
Adds read-only curated compliance workflow commands on top of the generated Drata OpenAPI CLI.
These commands provide quick compliance triage without sacrificing the existing generic operation surface.
Added

- `drata summary --json --compact`
- `drata controls failing --json --compact`
- `drata monitors failing --json --compact`
- `drata connections list --status DISCONNECTED --json --compact`
- `drata personnel issues --json --compact`
- `drata evidence expiring --days 60 --json --compact`
- `drata auth check --json`
- `--compact` and workflow `--limit` flags
- `--all-pages`

Notes

Curated workflows are read-only and reject `--dry-run` to avoid misleading empty summaries.

Test plan

- `npm test`
- `npm pack --dry-run`

Summary by CodeRabbit

New Features

- New workflow commands `summary`, `controls failing`, `monitors failing`, `connections list`, `personnel issues`, and `evidence expiring` for streamlined compliance triage.
- New `auth check` subcommand for API validation.
- `--compact` and `--limit` flags to filter and format workflow output.

Documentation