
feat: AWS Security & Compliance — finding-to-code mapping with AI remediation #2282

Open
aknysh wants to merge 63 commits into main from aknysh/atmos-ai-auth-compliance-1

Conversation


@aknysh aknysh commented Apr 3, 2026

what

  • Add atmos aws security analyze β€” fetch security findings from AWS Security Hub, map them to Atmos components and stacks via resource tags, and generate structured remediation reports
  • Add atmos aws compliance report β€” generate compliance posture reports against industry frameworks (CIS AWS, PCI-DSS, SOC2, HIPAA, NIST) with pass/fail scoring
  • Add finding-to-code mapping pipeline with 7 strategies: finding-tag (exact), tag-api (exact), context-tags (high), account-map (low), ecr-repo (low), naming-convention (low), resource-type (low)
  • Add optional AI-powered remediation via --ai flag β€” multi-turn tool analysis reads component source and stack config, generates root cause analysis, specific code changes, stack YAML changes, deploy commands, risk assessment
  • Add AI finding deduplication (same title+component+stack analyzed once) and retry with exponential backoff for transient API errors
  • Add 4 output formats: Markdown (terminal), JSON (CI/CD), YAML, CSV
  • Add Atmos Auth integration via identity config field for targeting Security Hub delegated admin accounts
  • Add --stack, --component, --severity, --source, --framework, --format, --file, --max-findings, --no-group, --region, --identity flags
  • Add AI tools: atmos_list_findings, atmos_describe_finding, atmos_analyze_finding, atmos_compliance_report
  • Add example: examples/aws-security-compliance/
  • Add Docusaurus docs for all commands and configuration
  • Add blog post with production testing results

why

Reviewing AWS security findings today requires navigating multiple AWS console pages, cross-referencing resources with Terraform code, and manually figuring out which configuration caused the issue. This is slow, error-prone, and requires deep AWS + Terraform expertise.

Atmos owns the component-to-stack relationship, so it can trace a security finding on an AWS resource all the way back to the exact Terraform code and stack configuration that created it — and generate a targeted fix.

The key differentiator vs AWS MCP security servers: MCP servers return raw findings but have no concept of Atmos stacks, components, or Terraform source code. Our implementation maps findings to IaC and generates specific remediation with deploy commands.

See It in Action

Tested against a multi-account AWS organization (11 accounts, Security Hub delegated admin, 500 findings, 97.2% mapped to Atmos components).

1. Security findings mapped to components

$ atmos aws security analyze --stack plat-use2-dev --component rds/example

ℹ Fetching security findings...
ℹ Mapping 500 findings to Atmos components...
ℹ Filtered to 4 findings matching stack="plat-use2-dev" component="rds/example"

# Security Report: plat-use2-dev / rds/example

Findings: 4 (1 CRITICAL, 3 HIGH)

## CRITICAL Findings (1)

### 1. Security groups should not allow unrestricted access to ports with high risk

| Field          | Value                                                        |
|----------------|--------------------------------------------------------------|
| **Severity**   | CRITICAL                                                     |
| **Source**     | security-hub (aws-foundational-security-best-practices/1.0)  |
| **Resource**   | arn:aws:ec2:us-east-2:***:security-group/sg-***              |
| **Component**  | rds/example                                                  |
| **Stack**      | plat-use2-dev                                                |
| **Confidence** | exact                                                        |
| **Mapped By**  | finding-tag                                                  |

Resource Tags:
• atmos_stack = plat-use2-dev
• atmos_component = rds/example
• terraform_component = rds
• Name = acme-plat-use2-dev-example-postgres-db
• Namespace = acme, Tenant = plat, Environment = use2, Stage = dev

## Summary
| Severity  | Count | Mapped | Unmapped |
|-----------|-------|--------|----------|
| CRITICAL  | 1     | 1      | 0        |
| HIGH      | 3     | 3      | 0        |
| **Total** | **4** | **4**  | **0**    |

2. AI-powered remediation (--ai)

$ atmos aws security analyze --stack plat-use2-dev --component rds/example --ai

ℹ Analyzing findings with AI...

✓ AI analysis complete — Security Analysis: rds/example in plat-use2-dev

## Findings Breakdown

### EC2.18: Unrestricted Ingress on Unauthorized Port (HIGH)
Port 5432 (PostgreSQL) is open to 0.0.0.0/0. The likely cause is
allowed_cidr_blocks being set to an overly permissive value.

### EC2.13: Unrestricted Ingress on Port 22/SSH (HIGH)
⚠️ Port 22 has no business being on an RDS security group. This strongly
suggests an out-of-band manual change in the AWS Console — drift from IaC.

## Root Cause (Common Thread)
Both findings stem from var.allowed_cidr_blocks being set too permissively,
compounded by possible out-of-band drift.

## Priority Actions
1. Remove the port-22 inbound rule manually (out-of-band drift)
2. Update catalog/rds/example.yaml:
     allowed_cidr_blocks: []
     publicly_accessible: false
     use_private_subnets: true
3. Add Terraform validation guards:
     validation {
       condition     = !contains(var.allowed_cidr_blocks, "0.0.0.0/0")
       error_message = "allowed_cidr_blocks must not contain 0.0.0.0/0."
     }
4. Plan and apply:
     atmos terraform plan rds/example -s plat-use2-dev
     atmos terraform apply rds/example -s plat-use2-dev

## Risk Assessment
| Finding              | Risk   | Note                                       |
|----------------------|--------|--------------------------------------------|
| EC2.18 (port 5432)   | Medium | Removing rule breaks direct DB connections |
| EC2.13 (port 22/SSH) | Low    | No RDS traffic should depend on SSH        |

3. Compliance report

$ atmos aws compliance report

# Compliance Report: CIS AWS Foundations Benchmark

## Score: 35/42 Controls Passing (83%)

### Failing Controls
| Control      | Title                                                                 | Severity |
|--------------|-----------------------------------------------------------------------|----------|
| Config.1     | AWS Config should be enabled with service-linked role                 | CRITICAL |
| EC2.14       | Security groups should not allow ingress from 0.0.0.0/0 to port 3389  | HIGH     |
| EC2.13       | Security groups should not allow ingress from 0.0.0.0/0 to port 22    | HIGH     |
| S3.1         | S3 buckets should have block public access settings enabled           | MEDIUM   |
| EC2.6        | VPC flow logging should be enabled in all VPCs                        | MEDIUM   |
| IAM.17       | Ensure IAM password policy expires passwords within 90 days           | LOW      |
| CloudTrail.7 | Ensure S3 access logging on CloudTrail S3 bucket                      | LOW      |

4. Compliance report with AI (--ai)

$ atmos aws compliance report --ai

✓ AI analysis complete — CIS Foundations Benchmark

## Overall Status: 🟡 83% Compliant (35/42 controls passing)

## 🚨 Priority Issues (Fix First)

### CRITICAL
| Control  | Issue                                  | Action                             |
|----------|----------------------------------------|------------------------------------|
| Config.1 | AWS Config not enabled or missing role | Enable in all regions, attach role |

### HIGH
| Control | Issue                              | Action                                  |
|---------|------------------------------------|-----------------------------------------|
| EC2.14  | RDP (port 3389) open to 0.0.0.0/0 | Restrict to known IP ranges or VPN       |
| EC2.13  | SSH (port 22) open to 0.0.0.0/0   | Use SSM Session Manager instead of SSH   |

## 🟠 Medium: S3.1 (Block Public Access), EC2.6 (VPC Flow Logs)
## 🟢 Low: IAM.17 (Password policy), CloudTrail.7 (S3 access logging)

## Next Steps
1. Lock down security groups for ports 22/3389
2. Enable AWS Config — also detects future drift
3. Run `atmos terraform apply` on security-groups, vpc, config components
4. Re-run this report to verify score improves

New CLI Commands

atmos aws security analyze

Fetches findings from Security Hub, maps them to Atmos components via resource tags (7 mapping strategies with confidence levels), and renders reports in 4 formats. The --stack and --component filters narrow results after mapping. With --ai, the AI reads component source code and stack config via multi-turn tools to generate specific remediation.

atmos aws compliance report

Queries Security Hub enabled standards, counts total controls via ListSecurityControlDefinitions, and computes pass/fail scores. Supports --framework filter for CIS AWS, PCI-DSS, SOC2, HIPAA, NIST. With --ai, generates prioritized remediation for each failing control.

Configuration

aws:
  security:
    enabled: true
    identity: "security-readonly"  # Atmos Auth → Security Hub account
    region: "us-east-2"            # Aggregation region
    max_findings: 500
    tag_mapping:
      stack_tag: "atmos:stack"
      component_tag: "atmos:component"

Example

See examples/aws-security-compliance/ for a complete configuration with auth, tag mapping, AI provider, and all commands.

references

  • PRD: docs/prd/atmos-aws-security-compliance.md
  • Blog: website/blog/2026-04-03-aws-security-compliance.mdx
  • Example: examples/aws-security-compliance/
  • Docs: website/docs/cli/commands/aws/security/, website/docs/cli/commands/aws/compliance/
  • Config: website/docs/cli/configuration/aws/security.mdx

Summary by CodeRabbit

  • New Features

    • New AWS security commands: analyze findings (mapping to components/stacks) and compliance report generation with multi-format output (markdown/json/yaml/csv), filtering, file export, and identity/region overrides.
    • Optional AI-enabled analysis (--ai) for root-cause remediation, automated remediation suggestions, and AI tooling for finding lookup/describe/list.
  • Documentation

    • Comprehensive AWS security & compliance docs, examples, configuration reference, CLI usage, and tutorials added.
  • Examples

    • Example configs and walkthroughs for AWS Security Hub integration and AI-enabled workflows.

aknysh and others added 30 commits March 9, 2026 16:28
…egration

Add `atmos ai security` and `atmos ai compliance` CLI commands with full AWS
Security Hub integration, dual-path finding-to-component mapping (tag-based
and heuristic), and multi-format report rendering (markdown/json/yaml/csv).

Phase 1: Schema, CLI commands, report renderers, sentinel errors.
Phase 2: AWS SDK clients (Security Hub, Resource Groups Tagging API),
paginated finding fetcher with filters, batch tag-based mapping (Path A),
naming convention and resource type heuristics (Path B), unit tests.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ce_report

Phase 3: Register three new AI tools for the agent to use:
- atmos_list_findings: Query Security Hub findings with severity/source/stack filters
- atmos_describe_finding: Get detailed info about a specific finding by ID
- atmos_compliance_report: Generate compliance posture reports for CIS/PCI/SOC2/HIPAA/NIST

Tools are registered as read-only core tools in the AI tool registry, available
in both chat and MCP modes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add findings cache with configurable TTL to reduce AWS API calls
- Add report renderer tests (17 tests covering all formats)
- Add Docusaurus docs for atmos ai security and compliance commands
- Increase test coverage from 71% to 91%
- Update PRD to support all 7 Atmos AI providers (not just Bedrock)
- Position Bedrock for enterprise customers needing data residency

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Implement FindingAnalyzer that sends findings + component source to
  configured AI provider for root cause analysis and remediation
- Parse AI response into structured Remediation (root cause, deploy
  command, risk level)
- Integrate analyzer into security command (runs unless --no-ai is set)
- Add ErrAISecurityAnalysisFailed sentinel error
- Tests for analyzer, prompt building, response parsing (91.8% coverage)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Register atmos_analyze_finding tool in AI tool system
- Fetches finding by ID, maps to component, runs AI analysis
- Returns root cause, remediation, deploy command, and risk level
- Accepts optional component_source and stack_config for richer context

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move security and compliance commands from `atmos ai` to `atmos aws` since
they are AWS-specific and work without AI. AI analysis is now opt-in via
the `--ai` flag (default: off) instead of opt-out via `--no-ai`.

Key changes:
- Move cmd/ai/security.go → cmd/aws/security.go
- Move cmd/ai/compliance.go → cmd/aws/compliance.go
- Move pkg/ai/security/ → pkg/aws/security/
- Move schema from ai.security → aws.security (AWSSecuritySettings)
- Flip --no-ai flag to --ai (opt-in, default false)
- Use error builder pattern for all error handling
- Replace errors.Join with fmt.Errorf for order preservation
- Add tests for parseOutputFormat, parseSource, parseSeverities, etc.
- Add Docusaurus docs under cli/commands/aws/ and cli/configuration/aws/
- Update PRD to v0.3 with aws namespace and --ai flag
- Update depguard to allow AWS SDK in pkg/aws/security/

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… status

Move --ai from a command-local flag on `atmos aws security` to a global
persistent flag available to all commands. Register via GlobalOptionsBuilder
with ATMOS_AI env var support. Update PRD to v0.4 with detailed completion
status for all phases. Update global-flags.mdx documentation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Resolve conflicts:
- website/docs/cli/global-flags.mdx: keep main's expanded --ai docs + new --skill flag
- pkg/flags/global/flags.go: remove duplicate AI field (keep main's version with Skill)
- pkg/flags/global_builder.go: remove duplicate registerAIFlags (keep main's with --skill)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…compliance-1

# Conflicts:
#	errors/errors.go
…ial validation, tests

1. Wire up --controls flag in compliance command — parses comma-separated
   control IDs, filters report, recalculates score.

2. Fix compliance total_controls — was set to failing count (score always 0%).
   Now uses DescribeStandardsControls API with graceful fallback.

3. Add AWS credential validation — early STS GetCallerIdentity check before
   pipeline starts. Clear error with actionable hints.

4. Add 30+ new tests — all testable functions at 100% coverage.
   Covers: flag parsing, error sentinels, validation, report building,
   control filtering, shorthand aliases.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add `identity` field to `AWSSecuritySettings` schema and `--identity`/`-i`
CLI flag to both `atmos aws security analyze` and `atmos aws compliance report`.

When set, credentials are resolved through the Atmos Auth provider chain
(SSO → role assumption → isolated credentials), targeting the delegated
admin account where Security Hub aggregates all findings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- AWS client cache: newAWSClientCache, WithAuthContext, empty maps
- resolveAuthContext: empty identity (nil), non-empty with no auth config (error)
- Schema: identity field read/write, empty default
- Coverage: WithAuthContext 100%, resolveAuthContext 73%, pkg/aws/security 90.1%

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add `region` field to AWSSecuritySettings for Security Hub aggregation
region. Precedence: --region CLI flag > config region > default (us-east-1).

Combined with `identity`, users configure once in atmos.yaml:
  aws:
    security:
      identity: "security-readonly"  # Which account
      region: "us-east-2"            # Which region

No extra flags needed for subsequent queries.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add identity and region fields to the main Configuration section example.
Add --identity flag to CLI flags table.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two changes ensuring consistent, reproducible AI analysis across all providers:

1. Formalized Remediation schema — added Steps, References fields, changed
   StackChanges to string. Every output follows the same structure.

2. Created atmos-aws-security agent skill — embedded as system prompt via
   go:embed. Instructs AI to use exact section headers that map directly
   to Remediation struct fields. Parser handles both structured and legacy
   formats via extractFirstMatch fallback.

Refactored for complexity compliance:
- parseRemediationResponse uses extractFirstMatch helper (17 → 8 complexity)
- parseListItems/extractListItem replace parseNumberedList/parseReferenceList
- Named constants for magic numbers

New tests: skill prompt embedding, parseListItems (6 cases),
structured format parsing, fallback format parsing, JSON round-trip.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Shows the complete flow: AWS APIs → Finding → ComponentMapping → Remediation
→ Report → ReportRenderer (4 formats). Explains behavior with and without
--ai flag, and how the skill prompt ensures consistent AI output.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
aknysh and others added 11 commits April 3, 2026 12:25
50 was too low for multi-account orgs — with post-mapping filtering,
we need to fetch enough findings to cover all accounts before filtering
by stack/component. AI cost is controlled separately (only mapped
findings are sent to AI). Users can still override with --max-findings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
ECR findings had empty stack because mapByECRRepo only extracted the
component name. Now uses the account_map to resolve the finding's
account ID to an account/stack name.

Before: component=monitor-nats, stack="" (395 findings)
After:  component=monitor-nats, stack="core-artifacts"

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Same fix as ECR repo mapper — use account_map to set the stack name
from the finding's account ID. Fixes the last empty-stack finding
(AwsEc2Instance in core-auto).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Previously the "no findings" message only showed for Markdown format.
Now shows for all formats (JSON, YAML, CSV) so the user knows no
report was written.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace all hardcoded emoji icons (🔍, ✅, 🗺️, 🤖, 📊) with the
ui layer functions that handle theming automatically:
- ui.Info() for status messages
- ui.Success() for completion messages
- ui.Warning() for warnings
- ui.Successf()/ui.Infof() for formatted messages

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Items 40-42: max-findings 500, account map for ECR/resource-type, themed UI.

Production results: 97.2% mapping accuracy (486/500), 5 mapping methods,
stack/component filtering verified across multiple stacks.

Replaced Known Limitations with Production Testing Results table
showing exact counts per method and verified filter scenarios.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Deduplicate findings before AI analysis: same title+component+stack
  share remediation (4 identical findings β†’ 1 API call)
- Retry with exponential backoff on transient AI errors (529/429/5xx)
  using pkg/retry: 3 attempts, 2s initial delay, 15s max, 30% jitter
- Increase default timeout to 300s when --ai is used (multi-turn tool
  analysis with retries needs more time)
- Fix glamour color profile: add explicit WithColorProfile to
  renderMarkdown() for consistent colored output
- Update PRD to v0.8 with items 43-47 and production AI test results
- Update blog post and example README with full AI analysis showcase
  (findings breakdown, anomaly detection, remediation, risk assessment)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fix 3 broken links pointing to /cli/commands/aws/eks-update-kubeconfig
(flat path) → /cli/commands/aws/eks/update-kubeconfig (correct nested path).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Fix framework filter: use PREFIX with full standard ID paths including
  type prefix (ruleset/ or standards/) since CONTAINS is not supported
- Replace deprecated DescribeStandardsControls with
  ListSecurityControlDefinitions (works in delegated admin mode)
- Remove empty Component/Remediation columns from compliance table
- Extract repeated framework name strings to constants (linter fix)
- Add compliance report examples (with and without --ai) to docs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… format

- Fix max_findings default from 50 to 500 in config docs and example YAML
- Add missing source values (macie, access-analyzer, all) to analyze docs
- Clarify severity is case-insensitive with default critical,high
- Fix PRD tag names from underscores to colons (atmos:stack, atmos:component)
- Simplify PRD to v1.0 (problem/solution/implementation/results structure)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@aknysh aknysh added the minor New features that do not break anything label Apr 3, 2026
@aknysh aknysh requested review from a team as code owners April 3, 2026 21:59
@aknysh aknysh self-assigned this Apr 3, 2026

github-actions bot commented Apr 3, 2026

Dependency Review

The following issues were found:
  • βœ… 0 vulnerable package(s)
  • βœ… 0 package(s) with incompatible licenses
  • βœ… 0 package(s) with invalid SPDX license definitions
  • ⚠️ 11 package(s) with unknown licenses.
See the Details below.

Snapshot Warnings

⚠️: No snapshots were found for the head SHA 82b08f2.
Ensure that dependencies are being submitted on PR branches and consider enabling retry-on-snapshot-warnings. See the documentation for more information and troubleshooting advice.

License Issues

go.mod

| Package | Version | License | Issue Type |
|---------|---------|---------|------------|
| github.com/Azure/azure-sdk-for-go/sdk/internal | 1.12.0 | Null | Unknown License |
| github.com/aws/aws-sdk-go-v2/config | 1.32.14 | Null | Unknown License |
| github.com/aws/aws-sdk-go-v2/credentials | 1.19.14 | Null | Unknown License |
| github.com/aws/aws-sdk-go-v2/feature/s3/manager | 1.22.12 | Null | Unknown License |
| github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi | 1.31.10 | Null | Unknown License |
| github.com/aws/aws-sdk-go-v2/service/sso | 1.30.15 | Null | Unknown License |
| github.com/aws/aws-sdk-go-v2/service/ssooidc | 1.35.19 | Null | Unknown License |
| github.com/containerd/platforms | 1.0.0-rc.4 | Null | Unknown License |
| github.com/hashicorp/go-getter | 1.8.6 | Null | Unknown License |
| github.com/huandu/go-sqlbuilder | 1.40.1 | Null | Unknown License |
| github.com/yuin/gopher-lua | 1.1.2 | Null | Unknown License |
Allowed Licenses: MIT, MIT-0, Apache-2.0, BSD-2-Clause, BSD-2-Clause-Views, BSD-3-Clause, ISC, MPL-2.0, 0BSD, Unlicense, CC0-1.0, CC-BY-3.0, CC-BY-4.0, CC-BY-SA-3.0, Python-2.0, OFL-1.1, LicenseRef-scancode-generic-cla, LicenseRef-scancode-unknown-license-reference, LicenseRef-scancode-unicode, LicenseRef-scancode-google-patent-license-golang
Excluded from license check: pkg:golang/modernc.org/libc

Scanned Files

  • go.mod


coderabbitai bot commented Apr 3, 2026

πŸ“ Walkthrough

Walkthrough

Adds an AWS Security & Compliance feature: new atmos aws security analyze and atmos aws compliance report CLI commands, Security Hub integration, finding fetching/mapping, AI analysis tools, multi-format report rendering, configuration schema, tests, docs, examples, and related tooling changes.

Changes

| Cohort / File(s) | Summary |
|------------------|---------|
| **Core security package**<br/>`pkg/aws/security/types.go`, `pkg/aws/security/aws_clients.go`, `pkg/aws/security/cache.go`, `pkg/aws/security/finding_fetcher.go`, `pkg/aws/security/component_mapper.go`, `pkg/aws/security/analyzer.go`, `pkg/aws/security/report_renderer.go` | New package: normalized Finding types, AWS client cache, TTL findings/compliance cache, Security Hub fetcher, tag+heuristic component mapping, AI-driven analyzer with tool/retry support, and multi-format report renderers. |
| **Security tests**<br/>`pkg/aws/security/*_test.go` | Extensive unit tests for fetching, mapping, analyzer (AI/tool mocks, retries), caching, client cache, and renderer output. |
| **CLI commands & auth**<br/>`cmd/aws/security.go`, `cmd/aws/compliance.go`, `cmd/aws/credentials.go`, `cmd/aws/markdown/*` | New security analyze and compliance report commands with Viper-backed flags, auth resolution/validation (STS), timeouts, output/file writing, and embedded help docs. |
| **CLI tests**<br/>`cmd/aws/security_test.go`, `cmd/aws/compliance_test.go` | Tests for flag wiring, parsing helpers (format/source/severity), framework validation, control filtering, and command metadata. |
| **AI tools & registry**<br/>`pkg/ai/tools/atmos/*.go`, `pkg/ai/tools/atmos/setup.go` | Adds LLM-callable tools: list/describe/analyze findings and compliance-report tool; registers tools for multi-turn AI flows. |
| **Schema & config**<br/>`pkg/schema/aws.go`, `pkg/schema/aws_security.go`, `pkg/schema/schema.go`, `pkg/schema/ai.go` | New AWS settings and security configuration schema (sources, tag mapping, account map, frameworks) and a small AI comment. |
| **Errors & UI tweaks**<br/>`errors/errors.go`, `pkg/ui/formatter.go` | New sentinel errors for AWS flows and markdown renderer color-profile change. |
| **Agent skill & linter**<br/>`.claude/skills/atmos-aws-security`, `agent-skills/skills/atmos-aws-security/SKILL.md`, `agent-skills/AGENTS.md`, `.golangci.yml`, `pkg/aws/security/skill_prompt.md` | Adds atmos-aws-security skill, skill prompt, AGENTS index row, and depguard exclusion for the security package. |
| **Docs, examples & website**<br/>`docs/prd/*`, `examples/aws-security-compliance/*`, `website/docs/cli/commands/aws/**`, `website/blog/*`, `website/src/components/Screengrabs/*`, `website/src/data/roadmap.js` | PRD, Bedrock provider guide, example atmos.yaml, CLI docs, examples, screengrabs, blog post, and roadmap entry. |
| **Misc & deps**<br/>`go.mod`, `NOTICE`, `examples/quick-start-advanced/Dockerfile`, `snapshots/fixtures` | Dependency bumps (adds AWS SDK services: resourcegroupstaggingapi, securityhub), NOTICE updates, ATMOS_VERSION bump, and fixture/snapshot updates. |
| **Small additions**<br/>`.claude/skills/atmos-aws-security` | Adds a relative skill reference file (no trailing newline). |

Sequence Diagram

sequenceDiagram
    autonumber
    actor User
    participant CLI as "atmos aws security analyze"
    participant Config as "Config / Auth"
    participant SecurityHub as "AWS Security Hub"
    participant Mapper as "Component Mapper"
    participant Analyzer as "AI Analyzer"
    participant Renderer as "Report Renderer"

    User->>CLI: run analyze (filters, --ai?)
    CLI->>Config: load atmos.yaml, resolve identity/region
    CLI->>Config: validate AWS creds (STS)
    CLI->>SecurityHub: fetch findings (paginated)
    SecurityHub-->>CLI: raw findings
    CLI->>Mapper: map findings → components (tags, heuristics)
    Mapper-->>CLI: mapped findings
    alt AI enabled
        CLI->>Analyzer: analyze mapped findings (component source, stack config)
        Analyzer->>Analyzer: build prompt, call LLM (tools if available), retries
        Analyzer-->>CLI: findings enriched with Remediation
    end
    CLI->>Renderer: render report (markdown/json/yaml/csv)
    Renderer-->>CLI: formatted output
    CLI->>User: print or write file

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Suggested reviewers

  • osterman
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|------------|--------|-------------|------------|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 30.60%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|------------|--------|-------------|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title clearly and specifically describes the main change: AWS security and compliance features with finding-to-code mapping and AI-powered remediation. It directly reflects the core additions across CLI commands, mapping strategies, and AI capabilities. |



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can't be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
NOTICE (1)

1-1817: ⚠️ Potential issue | 🔴 Critical

NOTICE is still out of date (merge blocker).

Dependency Review is failing on this file. Please regenerate NOTICE from source (./scripts/generate-notice.sh) and commit the generated output rather than hand-editing entries.

Based on learnings: "In the cloudposse/atmos repository, the NOTICE file is programmatically generated and should not be manually edited."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@NOTICE` around lines 1 - 1817, The NOTICE file is out of date and must not be
hand-edited; regenerate the file using the repository's automated generator
instead. Run the provided script ./scripts/generate-notice.sh to produce a fresh
NOTICE, replace the current NOTICE with the generated output (do not manually
edit entries), verify dependency review passes, and commit the generated NOTICE
to the branch so the PR no longer fails the Dependency Review check.
🟠 Major comments (24)
website/src/data/roadmap.js-588-610 (1)

588-610: ⚠️ Potential issue | 🟠 Major

Add missing roadmap traceability fields for shipped milestones.

At Line 588–598, several new shipped milestones are missing pr and/or changelog, and at Line 609 the initiative-level prs list is empty. Please wire these to PR #2282 (or the exact originating PRs) so roadmap entries stay auditable.

♻️ Proposed patch
-        { label: 'Schema additions for `aws.security` config', status: 'shipped', quarter: 'q1-2026', prd: 'atmos-aws-security-compliance', description: 'Added AWSSettings, AWSSecuritySettings, AWSSecuritySources, and AWSSecurityTagMapping types to pkg/schema/.', benefits: 'Configure security scanning behavior directly in atmos.yaml.' },
+        { label: 'Schema additions for `aws.security` config', status: 'shipped', quarter: 'q1-2026', pr: 2282, changelog: 'aws-security-compliance', prd: 'atmos-aws-security-compliance', description: 'Added AWSSettings, AWSSecuritySettings, AWSSecuritySources, and AWSSecurityTagMapping types to pkg/schema/.', benefits: 'Configure security scanning behavior directly in atmos.yaml.' },

-        { label: 'AWS security client package', status: 'shipped', quarter: 'q1-2026', prd: 'atmos-aws-security-compliance', description: 'Created pkg/aws/security/ with clients for Security Hub, Config, Inspector, and GuardDuty. Interface-driven design with mock support.', benefits: 'Unified access to all AWS security services from a single package.' },
+        { label: 'AWS security client package', status: 'shipped', quarter: 'q1-2026', pr: 2282, changelog: 'aws-security-compliance', prd: 'atmos-aws-security-compliance', description: 'Created pkg/aws/security/ with clients for Security Hub, Config, Inspector, and GuardDuty. Interface-driven design with mock support.', benefits: 'Unified access to all AWS security services from a single package.' },

  ...
-      prs: [],
+      prs: [
+        { number: 2282, title: 'feat: AWS Security & Compliance — finding-to-code mapping with AI remediation' },
+      ],

As per coding guidelines: "PRs labeled minor or major MUST update website/src/data/roadmap.js. Add new milestones with status: 'shipped', add changelog: 'your-blog-slug' link, add pr: <pr-number> link, and update initiative progress percentage."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/src/data/roadmap.js` around lines 588 - 610, Several shipped
milestones in website/src/data/roadmap.js (e.g., entries with labels 'Schema
additions for `aws.security` config', 'AWS security client package', 'Tag-based
finding-to-code mapping', 'Heuristic finding-to-code mapping', '`atmos aws
security analyze` CLI command', '`atmos aws compliance report` CLI command',
'Multi-format output (Markdown, JSON, YAML, CSV)', 'AI-powered analysis via
`--ai` flag', 'AI tools for security (4 tools)', '`--ai` global persistent
flag', 'Finding cache infrastructure') are missing pr and/or changelog fields
and the parent initiative's prs array is empty; for each shipped milestone add
pr: 2282 and changelog: 'aws-security-compliance' (or the exact originating
PR/changelog if different) and update the initiative's prs array to include 2282
and adjust the initiative progress percentage accordingly (increase to reflect
shipped items) so roadmap entries are auditable and compliant with the PR/update
rules.
docs/prd/aws-bedrock-ai-provider.md-12-27 (1)

12-27: ⚠️ Potential issue | 🟠 Major

Qualify claims about data residency, compliance scope, and provider data handling.

AWS official docs show these statements need tightening:

  • Data residency: Bedrock keeps data in your selected Region by default, but supports cross-Region inference (Geographic and Global routing) that can route requests elsewhere. Mention this option.

  • Compliance: Bedrock is in scope for SOC 2, ISO, HIPAA (eligible), PCI DSS (via AWS scope), and FedRAMP High (GovCloud US-West only). Add that compliance responsibility is sharedβ€”customers must handle data sensitivity and proper configurations. Use "in scope" or "eligible" rather than "compliant."

  • Third-party models: Each provider has its own EULA governing data use and retention (e.g., Anthropic doesn't train on customer content). Note this responsibility explicitly.

Cite AWS Bedrock data protection and compliance validation pages in the doc.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/prd/aws-bedrock-ai-provider.md` around lines 12 - 27, Update the "Why
Bedrock for Enterprise Security Workloads" section and the table entries ("Data
residency", "Compliance", "No API keys") to tone down absolute claims: state
that Bedrock keeps data in your selected Region by default but supports
cross-Region/global routing for inference (mention Geographic/Global routing),
change "Compliance" to note which frameworks Bedrock is in-scope or eligible for
(SOC 2, ISO, HIPAA eligible, PCI DSS via AWS scope, FedRAMP High limited to
GovCloud US-West), and add a sentence about shared responsibility requiring
customers to configure and handle sensitive data appropriately; also add a note
under "Third-party models" that each model provider/EULA may govern data
use/retention (e.g., Anthropic) and include citations to the AWS Bedrock data
protection and compliance validation pages
(https://docs.aws.amazon.com/bedrock/latest/userguide/data-protection.html and
https://docs.aws.amazon.com/bedrock/latest/userguide/compliance-validation.html)
and ensure the `atmos aws security analyze --ai` mention remains accurate with
these caveats.
pkg/aws/security/skill_prompt.md-2-3 (1)

2-3: ⚠️ Potential issue | 🟠 Major

Prompt/parser contract drift for required Code Changes section.

This prompt says output is parsed programmatically and requires ### Code Changes, but the parser path shown in pkg/aws/security/analyzer.go appears to extract other sections only. That risks silently dropping structured remediation detail.

Suggested fix direction
# In pkg/aws/security/analyzer.go (parseRemediationResponse):
# Add extraction for "### Code Changes" and map it into the Remediation schema field
# used by JSON/YAML/CSV rendering.

Also applies to: 17-20

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/skill_prompt.md` around lines 2 - 3, The prompt file's
required header "### Code Changes" is out of sync with the remediation parser
used by the analyzer (the parsing logic in analyzer.go), so either add/normalize
the exact section header "### Code Changes" in the skill prompt or extend the
analyzer's remediation parsing logic to explicitly extract the "### Code
Changes" section; locate the parser function in analyzer.go that extracts
remediation sections (the remediation/section-parsing routine) and (a) update it
to recognize and return the "### Code Changes" block or (b) modify the prompt
content to include the exact headers the parser expects, then update/cover the
change with a unit test validating the parser returns the "Code Changes"
content.
cmd/aws/credentials.go-50-55 (1)

50-55: ⚠️ Potential issue | 🟠 Major

Validate resolved AWS auth env fields before returning AWSAuthContext.

If AWS_PROFILE/file paths are empty, downstream credential checks fail with less actionable errors. Fail fast here with a clear identity-specific message.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/aws/credentials.go` around lines 50 - 55, The AWSAuthContext is being
constructed from envVars without validating required identity fields; update the
code that builds AWSAuthContext (the block that sets CredentialsFile,
ConfigFile, Profile, Region) to validate the resolved values from envVars (at
minimum ensure AWS_PROFILE is non-empty and that at least one of CredentialsFile
or ConfigFile is provided) and fail fast by returning an explicit error (or
non-nil result signaling failure) with a clear, identity-specific message if
validation fails so downstream credential checks don't see ambiguous errors.
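The fail-fast check described above can be sketched as follows. This is a hypothetical illustration, not the Atmos implementation: the `awsAuthEnv` struct and `validateAuthEnv` helper are stand-ins for the resolved env fields that feed `AWSAuthContext`, and the exact rules (profile required, at least one file path present) are assumptions based on the review text.

```go
package main

import "fmt"

// awsAuthEnv is a hypothetical stand-in for the env vars resolved from an
// Atmos identity (AWS_PROFILE, AWS_SHARED_CREDENTIALS_FILE, AWS_CONFIG_FILE).
type awsAuthEnv struct {
	Profile         string
	CredentialsFile string
	ConfigFile      string
}

// validateAuthEnv fails fast with an identity-specific message instead of
// letting downstream credential checks surface an opaque error.
func validateAuthEnv(identity string, env awsAuthEnv) error {
	if env.Profile == "" {
		return fmt.Errorf("identity %q: resolved AWS_PROFILE is empty", identity)
	}
	if env.CredentialsFile == "" && env.ConfigFile == "" {
		return fmt.Errorf("identity %q: no credentials or config file path was resolved", identity)
	}
	return nil
}

func main() {
	// An incomplete resolution is rejected here, before any AWS call is made.
	if err := validateAuthEnv("security-audit", awsAuthEnv{}); err != nil {
		fmt.Println(err)
	}
}
```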
cmd/aws/credentials.go-18-31 (1)

18-31: ⚠️ Potential issue | 🟠 Major

Guard atmosConfig before dereference to avoid panic.

resolveAuthContext assumes atmosConfig is always non-nil. A nil input will panic at Line 30.

💡 Suggested fix
 func resolveAuthContext(atmosConfig *schema.AtmosConfiguration, identityName string) (*schema.AWSAuthContext, error) {
+	if atmosConfig == nil {
+		return nil, errUtils.Build(errUtils.ErrAWSCredentialsNotValid).
+			WithExplanation("Missing Atmos configuration while resolving AWS auth context").
+			WithHint("Ensure Atmos config is loaded before running AWS commands").
+			WithExitCode(1).
+			Err()
+	}
+
 	if identityName == "" {
 		return nil, nil
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/aws/credentials.go` around lines 18 - 31, The function resolveAuthContext
dereferences atmosConfig (e.g., atmosConfig.Auth and atmosConfig.CliConfigPath)
without nil-checking which can panic; add a guard at the start of
resolveAuthContext to validate atmosConfig is non-nil and return a descriptive
error (or nil, nil per existing semantics) if it's nil, then proceed to create
credentials.NewCredentialStore(), validation.NewValidator(), and
auth.NewAuthManager(...) only when atmosConfig is present so dereferences are
safe.
docs/prd/atmos-aws-security-compliance.md-186-201 (1)

186-201: ⚠️ Potential issue | 🟠 Major

PRD config sample conflicts with implemented default tag keys.

Line 196-197 uses atmos_stack / atmos_component, while implemented defaults are atmos:stack / atmos:component. This can break copy-paste setup for exact tag mapping.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/prd/atmos-aws-security-compliance.md` around lines 186 - 201, The docs
sample's tag_mapping values conflict with the implemented default keys; update
the example under tag_mapping (the stack_tag and component_tag entries) to use
the actual default tag keys "atmos:stack" and "atmos:component" so copy-paste
configuration matches the implementation (or alternatively change the
implementation defaults to match the sample), ensuring the entries for stack_tag
and component_tag align with the implemented defaults.
pkg/aws/security/cache.go-75-75 (1)

75-75: ⚠️ Potential issue | 🟠 Major

Cache exposes mutable slices; callers can accidentally corrupt cached results.

Line 75 returns the internal slice directly, and Lines 87-90 store the caller slice directly. Any downstream mutation changes cached state.

💡 Suggested fix
 func (c *findingsCache) GetFindings(opts *QueryOptions) ([]Finding, bool) {
@@
-	return entry.value, true
+	return append([]Finding(nil), entry.value...), true
 }
@@
 func (c *findingsCache) SetFindings(opts *QueryOptions, findings []Finding) {
@@
 	c.findings[key] = &cacheEntry[[]Finding]{
-		value:     findings,
+		value:     append([]Finding(nil), findings...),
 		expiresAt: time.Now().Add(c.ttl),
 	}
 }

Also applies to: 87-90

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/cache.go` at line 75, The cache currently exposes and stores
internal slices directly (see return of entry.value and the setter storing the
caller slice), allowing callers to mutate cached data; fix this by making
defensive copies: when returning slices from the getter (the code that returns
entry.value) return a new slice copy instead of the internal slice, and when
storing slices in the cache (the code at the assignment around lines 87-90 that
sets entry.value or similar) save a copy of the input slice rather than the
original; use idiomatic slice-copying (allocate a new slice and copy/append the
elements) around the functions that manipulate entry.value to prevent external
mutation of cached state.
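The copy-in/copy-out pattern the patch applies can be shown in isolation. This is a minimal sketch with a stand-in `Finding` type, not the real cache: copying on both `Set` and `Get` means neither the caller's input slice nor the returned slice can corrupt cached state.

```go
package main

import "fmt"

// Finding is a stand-in for the real security finding type.
type Finding struct{ ID string }

type cache struct{ findings []Finding }

// Set stores a defensive copy so later caller mutations don't leak in.
func (c *cache) Set(fs []Finding) {
	c.findings = append([]Finding(nil), fs...)
}

// Get returns a defensive copy so callers can't mutate cached state.
func (c *cache) Get() []Finding {
	return append([]Finding(nil), c.findings...)
}

func main() {
	c := &cache{}
	in := []Finding{{ID: "a"}}
	c.Set(in)
	in[0].ID = "mutated"       // caller mutates its own slice after Set
	out := c.Get()
	out[0].ID = "also-mutated" // caller mutates the returned slice
	fmt.Println(c.Get()[0].ID) // cached state is unaffected: "a"
}
```

Note that this copies the slice header and elements; if `Finding` ever grows pointer fields, those would still be shared and need a deeper copy.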
pkg/aws/security/cache.go-136-151 (1)

136-151: ⚠️ Potential issue | 🟠 Major

Cache key is missing query dimensions and can return incorrect findings.

Line 146 builds a key without Framework (and other query discriminators). Two different queries can collide and reuse stale/incorrect data.

💡 Suggested fix
 func buildFindingsKey(opts *QueryOptions) string {
+	if opts == nil {
+		return "findings:<nil>"
+	}
+
 	// Sort severities for consistent key generation.
 	sevs := make([]string, len(opts.Severity))
 	for i, s := range opts.Severity {
 		sevs[i] = string(s)
 	}
 	sort.Strings(sevs)
 
-	return fmt.Sprintf("findings:%s:%s:%s:%d",
+	return fmt.Sprintf("findings:%s:%s:%s:%s:%s:%s:%d",
 		opts.Region,
 		strings.Join(sevs, ","),
 		string(opts.Source),
+		opts.Framework,
+		opts.Stack,
+		opts.Component,
 		opts.MaxFindings,
 	)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/cache.go` around lines 136 - 151, The cache key generated by
buildFindingsKey currently omits important query discriminators (e.g.,
Framework) causing collisions; update buildFindingsKey to include opts.Framework
and any other QueryOptions fields that affect result sets (for example
ResourceType, Status, ControlIDs or similar) into the fmt.Sprintf key, making
sure to normalize/sort any slice fields (like severities or control ID lists)
before joining so key generation is deterministic and unique per distinct query.
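The key-building requirement above (include every discriminating dimension, normalize slice fields) can be sketched like this. The `QueryOptions` field set here is an assumption based on the review, not the exact Atmos struct.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// QueryOptions is an illustrative subset of the query dimensions the review
// says must all participate in the cache key.
type QueryOptions struct {
	Region      string
	Severity    []string
	Source      string
	Framework   string
	Stack       string
	Component   string
	MaxFindings int
}

func buildFindingsKey(opts *QueryOptions) string {
	if opts == nil {
		return "findings:<nil>"
	}
	// Sort a copy of the slice field so logically equal queries hash the same.
	sevs := append([]string(nil), opts.Severity...)
	sort.Strings(sevs)
	return fmt.Sprintf("findings:%s:%s:%s:%s:%s:%s:%d",
		opts.Region, strings.Join(sevs, ","), opts.Source,
		opts.Framework, opts.Stack, opts.Component, opts.MaxFindings)
}

func main() {
	a := buildFindingsKey(&QueryOptions{Region: "us-east-1", Framework: "cis-aws"})
	b := buildFindingsKey(&QueryOptions{Region: "us-east-1", Framework: "pci-dss"})
	fmt.Println(a == b) // false: different frameworks no longer collide
}
```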
website/docs/cli/commands/aws/compliance/report.mdx-38-58 (1)

38-58: ⚠️ Potential issue | 🟠 Major

Document the --ai flag for compliance report to match shipped behavior.

The command page currently omits --ai in both Flags and Examples, which makes the new remediation mode undiscoverable.

As per coding guidelines: Keep CLI documentation and website documentation in sync and document new features on the website with examples and use cases.

Also applies to: 60-137

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/docs/cli/commands/aws/compliance/report.mdx` around lines 38 - 58,
Summary: The docs for the CLI command "compliance report" miss the new --ai flag
and example usage. Update the "Flags" section to add a `--ai` entry describing
the remediation/AI mode (what it does and default behavior) and any accepted
values, and update the "Examples" section for `compliance report` to include one
or two concrete examples showing `--ai` usage (with and without other flags like
`--stack` or `--framework`), ensuring the text matches the shipped CLI behavior
and any option aliases.
pkg/aws/security/report_renderer.go-357-387 (1)

357-387: ⚠️ Potential issue | 🟠 Major

Grouped markdown output drops remediation blocks.

Lines 357-387 handle the grouped path, but unlike renderFindingMarkdown, this code never calls renderRemediationMarkdown. The caller in cmd/aws/security.go enables grouping by default, so repeated titles lose their AI remediation in the normal CLI flow.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/report_renderer.go` around lines 357 - 387,
renderGroupedFindingMarkdown currently omits remediation blocks, so grouped
findings lose AI remediation; add a call to renderRemediationMarkdown for the
group's representative finding (f := &findings[0]) inside
renderGroupedFindingMarkdown (similar to how renderFindingMarkdown includes
remediation). Place the call after renderGroupedTags(sb, findings) (or just
before the final separator) and pass the representative finding (f) and the
strings.Builder to renderRemediationMarkdown to ensure the grouped output
includes the remediation content.
cmd/aws/compliance.go-123-185 (1)

123-185: ⚠️ Potential issue | 🟠 Major

Don’t stream multiple framework reports into one JSON/YAML/CSV document.

Lines 155-184 call RenderComplianceReport once per framework on the same writer. That is fine for markdown, but JSON/YAML become concatenated documents and CSV repeats headers as soon as more than one framework is selected from config.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/aws/compliance.go` around lines 123 - 185, The code currently reuses the
same writer and calls renderer.RenderComplianceReport for each framework, which
concatenates JSON/YAML/CSV outputs; change behavior so non-markdown formats are
emitted per-framework to avoid concatenation: detect when outputFormat !=
security.FormatMarkdown and len(frameworks) > 1, then if fileOutput == "" return
a user-facing error (via ui.Errorf/return) asking to select a single framework
or supply a file pattern; otherwise, for each fw build a per-framework filename
by inserting the framework name before the extension (use filepath.Ext and
strings.TrimSuffix on fileOutput), create each file with os.Create, call
renderer.RenderComplianceReport(file, report) and close it (don't reuse the
original single output writer); keep markdown/stdout behavior unchanged and
continue using renderer.RenderComplianceReport for other cases.
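The per-framework filename construction suggested above can be sketched with the standard library. The helper name is hypothetical; the `filepath.Ext` + `strings.TrimSuffix` approach is exactly what the prompt describes.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// perFrameworkFile inserts the framework name before the file extension so
// each non-markdown report gets its own document instead of being
// concatenated into one.
func perFrameworkFile(fileOutput, framework string) string {
	ext := filepath.Ext(fileOutput)
	base := strings.TrimSuffix(fileOutput, ext)
	return fmt.Sprintf("%s-%s%s", base, framework, ext)
}

func main() {
	for _, fw := range []string{"cis-aws", "pci-dss", "soc2"} {
		fmt.Println(perFrameworkFile("report.json", fw))
	}
}
```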
cmd/aws/compliance.go-241-260 (1)

241-260: ⚠️ Potential issue | 🟠 Major

Filtered compliance scores are misleading.

Lines 250-257 keep report.PassingControls at the full-framework value but shrink TotalControls to len(filtered)+report.PassingControls. Filtering to one failing control can therefore report a near-perfect or 100% score for a control set that is actually failing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/aws/compliance.go` around lines 241 - 260, The filtered report currently
reuses report.PassingControls causing misleading high scores; in
filterComplianceReport recompute counts from the filtered dataset instead of
keeping the original passing count: set filteredReport.FailingControls =
len(filtered), set filteredReport.PassingControls = 0 (or compute an
intersection if you have a list of passing control IDs) and set
filteredReport.TotalControls = filteredReport.FailingControls +
filteredReport.PassingControls, then recalc filteredReport.ScorePercent from
those new values (use percentMultiplier as before). Update the function
references FailingDetails, FailingControls, PassingControls, TotalControls, and
ScorePercent accordingly.
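The recomputation described above can be sketched as follows. The struct mirrors the field names named in the review; treating passing controls as zero after filtering (rather than intersecting against a passing-control list) is the conservative choice the prompt offers as a fallback.

```go
package main

import "fmt"

// ComplianceReport mirrors the counters named in the review.
type ComplianceReport struct {
	TotalControls   int
	PassingControls int
	FailingControls int
	ScorePercent    float64
}

// filterReport recomputes all counters from the filtered set instead of
// reusing the full-framework PassingControls value.
func filterReport(failingKept int) ComplianceReport {
	r := ComplianceReport{
		PassingControls: 0, // unknown after filtering; do not carry over
		FailingControls: failingKept,
	}
	r.TotalControls = r.PassingControls + r.FailingControls
	if r.TotalControls > 0 {
		r.ScorePercent = float64(r.PassingControls) / float64(r.TotalControls) * 100
	}
	return r
}

func main() {
	// Filtering to a single failing control now reports 0%, not ~100%.
	fmt.Println(filterReport(1).ScorePercent)
}
```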
cmd/aws/security.go-72-75 (1)

72-75: ⚠️ Potential issue | 🟠 Major

Honor global config-selection flags before loading Atmos config.

Lines 73-75 pass an empty schema.ConfigAndStacksInfo into cfg.InitCliConfig, so this command silently ignores --base-path, --config, --config-path, and --profile. Parse the global flags into configAndStacksInfo first so the security analysis runs in the same config context as the rest of the CLI.

Based on learnings: In Atmos, cfg.InitCliConfig must receive global flag values via flags.ParseGlobalFlags(cmd, v); passing an empty ConfigAndStacksInfo silently ignores config selection flags.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/aws/security.go` around lines 72 - 75, The command currently passes an
empty schema.ConfigAndStacksInfo into cfg.InitCliConfig so global
config-selection flags are ignored; before calling cfg.InitCliConfig, call
flags.ParseGlobalFlags(cmd, v) to populate a ConfigAndStacksInfo instance
(schema.ConfigAndStacksInfo) with the global flags and then pass that populated
configAndStacksInfo into cfg.InitCliConfig so
--base-path/--config/--config-path/--profile are honored in the security
analysis.
cmd/aws/compliance.go-62-65 (1)

62-65: ⚠️ Potential issue | 🟠 Major

Honor global config-selection flags before loading Atmos config.

Lines 63-64 pass an empty schema.ConfigAndStacksInfo into cfg.InitCliConfig, so this command silently ignores --base-path, --config, --config-path, and --profile. Parse the global flags into configAndStacksInfo first so the compliance report uses the same config context as the rest of the CLI.

Based on learnings: In Atmos, cfg.InitCliConfig must receive global flag values via flags.ParseGlobalFlags(cmd, v); passing an empty ConfigAndStacksInfo silently ignores config selection flags.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/aws/compliance.go` around lines 62 - 65, The command currently constructs
an empty schema.ConfigAndStacksInfo and calls cfg.InitCliConfig, which ignores
global config-selection flags; call flags.ParseGlobalFlags(cmd, v) (or otherwise
populate a schema.ConfigAndStacksInfo from the parsed globals) before invoking
cfg.InitCliConfig so that the ConfigAndStacksInfo passed into InitCliConfig
contains values from --base-path, --config, --config-path, and --profile; update
the code around configAndStacksInfo creation to use flags.ParseGlobalFlags(cmd,
v) to fill it and then pass that populated ConfigAndStacksInfo into
cfg.InitCliConfig.
pkg/ai/tools/atmos/list_findings.go-91-143 (1)

91-143: ⚠️ Potential issue | 🟠 Major

Apply the stack filter after mapping.

Lines 127-135 populate opts.Stack, but Lines 107-113 render the full mapped slice with no post-mapping stack check. Security Hub cannot filter by Atmos stack name, so this tool currently returns cross-stack findings even when stack is set.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/ai/tools/atmos/list_findings.go` around lines 91 - 143, The code sets
opts.Stack in parseFindingsQueryParams but applies no post-mapping stack filter,
so mapped findings still include cross-stack results; after calling
mapper.MapFindings (the MapFindings call that returns findings), filter the
returned findings slice by opts.Stack (if non-empty) using the component/stack
name present on each mapped finding, then use the filtered slice for
formatFindingsOutput, JSON marshalling, and setting the "total" and "findings"
fields in the Result; keep fetcher.FetchFindings and mapper.MapFindings
unchanged but ensure subsequent logic uses the filtered findings variable.
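The client-side filter described above is a simple slice pass. This sketch uses a stand-in `MappedFinding` type rather than the real mapper output; the empty-stack short-circuit preserves current behavior when no filter is set.

```go
package main

import "fmt"

// MappedFinding stands in for whatever MapFindings attaches to each finding.
type MappedFinding struct {
	ID    string
	Stack string
}

// filterByStack prunes cross-stack findings after mapping, since Security
// Hub itself cannot filter by Atmos stack name.
func filterByStack(findings []MappedFinding, stack string) []MappedFinding {
	if stack == "" {
		return findings
	}
	out := make([]MappedFinding, 0, len(findings))
	for _, f := range findings {
		if f.Stack == stack {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	mapped := []MappedFinding{
		{ID: "f1", Stack: "plat-use2-prod"},
		{ID: "f2", Stack: "plat-use2-dev"},
	}
	fmt.Println(len(filterByStack(mapped, "plat-use2-prod"))) // 1
}
```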
cmd/aws/security.go-59-71 (1)

59-71: ⚠️ Potential issue | 🟠 Major

--framework is registered but never reaches the fetcher.

Lines 290-292 add the flag, but Lines 59-71 never read it and Lines 147-153 never set security.QueryOptions.Framework. atmos aws security analyze --framework ... is therefore a no-op today.

Also applies to: 147-153, 285-300

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/aws/security.go` around lines 59 - 71, The --framework CLI flag is
registered but never read or passed into the security fetcher; update the Viper
reads to include framework (e.g., add framework := v.GetString("framework")
alongside stack/component/severity in the top block) and then set that value on
the options passed to the fetcher by assigning it to
security.QueryOptions.Framework when building the QueryOptions (the same place
where maxFindings/useAI/region/identity/noGroup are set); ensure the framework
value flows from the flag registration (lines ~290-292) through the Viper read
to the QueryOptions.Framework field so atmos aws security analyze --framework
works.
pkg/aws/security/report_renderer.go-149-214 (1)

149-214: ⚠️ Potential issue | 🟠 Major

Add error checking after CSV flush in both renderers.

Both RenderSecurityReport and RenderComplianceReport call defer cw.Flush() but never check cw.Error() before returning. Buffered CSV writes can fail silently on broken pipes or full disks; callers will see success but get truncated output. Check the writer's error state after flush to catch these cases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/report_renderer.go` around lines 149 - 214, The CSV writers
currently use defer cw.Flush() but never check cw.Error(), so CSV write failures
can be missed; update both RenderSecurityReport and RenderComplianceReport to
flush and check errors before returning: replace the deferred-only flush with an
explicit cw.Flush() call just before the function returns (or keep defer but
also call cw.Flush() near the end), then call if err := cw.Error(); err != nil {
return err } (referencing the local csv.Writer variable cw and the functions
RenderSecurityReport and RenderComplianceReport) so any buffered write errors
are propagated to the caller.
pkg/ai/tools/atmos/compliance_report.go-67-85 (1)

67-85: ⚠️ Potential issue | 🟠 Major

Keep the tool on the caller's config and AWS identity.

Lines 68-85 ignore t.atmosConfig, rebuild config from an empty schema.ConfigAndStacksInfo, and then create the fetcher with nil auth context. In delegated-admin or non-default profile/base-path sessions, this tool can read compliance data from the wrong account/region instead of the active Atmos session.

Based on learnings: cfg.InitCliConfig reads config selection from schema.ConfigAndStacksInfo; passing an empty struct causes --base-path, --config, --config-path, and --profile to be ignored.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/ai/tools/atmos/compliance_report.go` around lines 67 - 85, The code
re-initializes config with cfg.InitCliConfig and passes nil auth, which ignores
the caller's selected config and AWS identity; instead use the tool's existing
t.atmosConfig for checks (replace atmosConfig with t.atmosConfig) and pass the
tool's auth/context (e.g., t.atmosAuth or the struct field that holds the
current AWS auth) into security.NewFindingFetcher (replace the nil), and add a
defensive nil-check for t.atmosConfig (and t.atmosAuth) before use and return an
appropriate error if missing so the fetcher runs in the caller's active Atmos
session and profile.
pkg/aws/security/component_mapper.go-123-128 (1)

123-128: ⚠️ Potential issue | 🟠 Major

Carry fetched tags forward into the heuristic path.

Line 124 loads tags from the Tagging API, but Line 128 only uses them for the exact Atmos-tag match. If that misses, mapByContextTags still reads the original finding.ResourceTags, so the context-tags strategy never sees fetched Name/Namespace/Tenant/Environment/Stage tags.

Also applies to: 272-276

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/component_mapper.go` around lines 123 - 128, The code
fetches tags via m.getResourceTags(...)->tags and then calls m.matchTags(tags,
"tag-api") but leaves mapByContextTags operating on the original
finding.ResourceTags, so fetched tags (Name/Namespace/Tenant/Environment/Stage)
are never considered in the "context-tags" heuristic; update the flow so that
after fetching/merging tags you pass the merged tags into mapByContextTags (or
make mapByContextTags accept/use a mergedTags argument) instead of relying on
finding.ResourceTags, and apply the same change to the second occurrence around
mapByContextTags at the other block (the 272–276 area) so both heuristic paths
see fetched tags.
pkg/aws/security/analyzer.go-347-364 (1)

347-364: ⚠️ Potential issue | 🟠 Major

Populate Remediation.CodeChanges from the structured response.

This parser never reads a ### Code Changes section, so any file/line/before/after edits returned by the model are discarded. Downstream renderers will always see an empty CodeChanges slice even when the AI produced concrete code patches.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/analyzer.go` around lines 347 - 364, The parser in
parseRemediationResponse never reads a "### Code Changes" section so
Remediation.CodeChanges stays empty; add logic to extract the "### Code Changes"
block (use extractFirstMatch(response, "### Code Changes")) and populate
remediation.CodeChanges by parsing that block (e.g., call an existing
parseCodeChanges / parsePatch utility or implement a small parser that converts
file/line/before/after edits into the Remediation.CodeChanges slice) and assign
the result to remediation.CodeChanges inside parseRemediationResponse.
pkg/ai/tools/atmos/analyze_finding.go-145-157 (1)

145-157: ⚠️ Potential issue | 🟠 Major

Exact finding lookup shouldn't stop at the first 500 results.

This helper does an unfiltered fetch capped by MaxFindingsForLookup and then scans client-side. In accounts with more than 500 findings, valid IDs outside that window will be reported as "not found" even though they exist.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/ai/tools/atmos/analyze_finding.go` around lines 145 - 157, The current
lookup uses security.NewFindingFetcher + fetcher.FetchFindings with a single
QueryOptions(MaxFindingsForLookup) then scans the returned slice and will miss
IDs beyond the cap; change the logic to perform a targeted or paginated fetch:
either add an ID filter to QueryOptions (if supported) or loop calling
fetcher.FetchFindings with pagination/offset/next-token until you find
findings[i].ID == findingID or no more pages, instead of relying on
MaxFindingsForLookup; update the code around security.NewFindingFetcher,
security.QueryOptions, and fetcher.FetchFindings to implement the
paging/ID-filter approach so exact lookups are exhaustive.
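The exhaustive paged lookup can be sketched as a token loop. The `fetchPage` signature here is hypothetical; the real fetcher would wrap Security Hub's `GetFindings` pagination (`NextToken`) rather than this in-memory stub.

```go
package main

import "fmt"

// Finding is a stand-in for the real security finding type.
type Finding struct{ ID string }

// findByID walks every page until the ID is found or pages are exhausted,
// instead of scanning only the first capped fetch.
func findByID(fetchPage func(token string) ([]Finding, string), id string) (*Finding, bool) {
	token := ""
	for {
		page, next := fetchPage(token)
		for i := range page {
			if page[i].ID == id {
				return &page[i], true
			}
		}
		if next == "" {
			return nil, false
		}
		token = next
	}
}

func main() {
	// Two-page in-memory stub: the target ID lives on the second page.
	pages := map[string][]Finding{"": {{ID: "a"}}, "p2": {{ID: "b"}}}
	next := map[string]string{"": "p2", "p2": ""}
	fetch := func(t string) ([]Finding, string) { return pages[t], next[t] }
	_, ok := findByID(fetch, "b")
	fmt.Println(ok) // true: found beyond the first page
}
```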
pkg/aws/security/finding_fetcher.go-215-225 (1)

215-225: ⚠️ Potential issue | 🟠 Major

Let auth-managed regions win when the caller didn't set one.

Line 224 hard-codes us-east-1, so identity.LoadConfigWithAuth never receives an empty region and cannot apply AWSAuthContext.Region. With delegated-admin identities, this can fetch Security Hub data from the wrong region.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/finding_fetcher.go` around lines 215 - 225, The
resolveRegion function currently returns a hard-coded "us-east-1" when no
override or config region is set, preventing identity.LoadConfigWithAuth from
applying AWSAuthContext.Region; modify awsFindingFetcher.resolveRegion so that
if override is empty and f.atmosConfig.AWS.Security.Region is empty it returns
an empty string (allowing identity.LoadConfigWithAuth to pick up auth-managed
region) while still returning override when provided and
f.atmosConfig.AWS.Security.Region when configured.
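The suggested precedence reduces to a small pure function, sketched here outside the fetcher: explicit override wins, then the configured region, and the empty string falls through so the AWS config loader can apply the identity's auth-managed region instead of a hard-coded default.

```go
package main

import "fmt"

// resolveRegion mirrors the suggested precedence. Returning "" (rather than
// a hard-coded "us-east-1") lets AWSAuthContext.Region take effect downstream.
func resolveRegion(override, configured string) string {
	if override != "" {
		return override
	}
	return configured
}

func main() {
	fmt.Printf("%q\n", resolveRegion("", "")) // "" defers to the auth-managed region
}
```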
pkg/ai/tools/atmos/analyze_finding.go-107-107 (1)

107-107: ⚠️ Potential issue | 🟠 Major

Thread the resolved AWS auth context into both fetch and mapping.

Both constructors get nil here, so atmos_analyze_finding ignores any configured aws.security.identity and falls back to ambient AWS credentials. In delegated-admin setups, the tool can fetch and map findings against the wrong account even when the CLI command path works.

Also applies to: 145-145

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/ai/tools/atmos/analyze_finding.go` at line 107, The code currently passes
nil for the AWS auth context into security.NewComponentMapper(atmosConfig, nil)
(and the findings fetch constructor at the other site), causing ambient
credentials to be used; locate where the resolved AWS auth/context value is
obtained in this function (the variable holding the resolved aws security
identity) and pass that variable into security.NewComponentMapper(atmosConfig,
<authContext>) and into the findings fetcher constructor instead of nil so both
fetch and mapping use the resolved AWS identity.
pkg/aws/security/component_mapper_test.go-692-744 (1)

692-744: ⚠️ Potential issue | 🟠 Major

Lock down the embedded-tag vs API precedence with a real mock.

These two cases do not actually prove the contract they describe. The embedded-tags test never verifies "no API call," and the fallback test never seeds mapper.clients.tagging/taggingFn, so a cache miss can still walk the real awsClientCache client path and pass on the weak mapping.Method != "finding-tag" assertion. Inject mockTaggingClient here as a spy/stub and assert zero calls for embedded tags plus an actual tag-api mapping when ResourceTags are absent.

Suggested tightening.
 type mockTaggingClient struct {
 	resources []tagtypes.ResourceTagMapping
 	err       error
+	calls     int
 }
 
 func (m *mockTaggingClient) GetResources(_ context.Context, _ *resourcegroupstaggingapi.GetResourcesInput, _ ...func(*resourcegroupstaggingapi.Options)) (*resourcegroupstaggingapi.GetResourcesOutput, error) {
+	m.calls++
 	if m.err != nil {
 		return nil, m.err
 	}
 	return &resourcegroupstaggingapi.GetResourcesOutput{
 		ResourceTagMappingList: m.resources,
 	}, nil
 }
@@
-	mapper := NewComponentMapper(atmosConfig, nil)
+	spy := &mockTaggingClient{}
+	mapper := NewComponentMapper(atmosConfig, nil).(*dualPathMapper)
+	mapper.clients.tagging["us-east-1"] = spy
@@
 	assert.Equal(t, ConfidenceExact, mapping.Confidence)
 	assert.Equal(t, "finding-tag", mapping.Method)
+	assert.Zero(t, spy.calls)
@@
 	finding := &Finding{
 		ID:           "no-tag-001",
 		ResourceARN:  "arn:aws:s3:::my-bucket",
+		Region:       "us-east-1",
 		ResourceTags: nil,
 	}
 
-	mapper := NewComponentMapper(atmosConfig, nil)
+	spy := &mockTaggingClient{
+		resources: []tagtypes.ResourceTagMapping{{
+			ResourceARN: aws.String("arn:aws:s3:::my-bucket"),
+			Tags: []tagtypes.Tag{
+				{Key: aws.String("atmos:stack"), Value: aws.String("plat-use2-prod")},
+				{Key: aws.String("atmos:component"), Value: aws.String("s3-bucket")},
+			},
+		}},
+	}
+	mapper := NewComponentMapper(atmosConfig, nil).(*dualPathMapper)
+	mapper.clients.tagging["us-east-1"] = spy
 
 	mapping, err := mapper.MapFinding(context.Background(), finding)
 	require.NoError(t, err)
 	require.NotNil(t, mapping)
-	assert.NotEqual(t, "finding-tag", mapping.Method)
+	assert.Equal(t, "tag-api", mapping.Method)
+	assert.Equal(t, "plat-use2-prod", mapping.Stack)
+	assert.Equal(t, "s3-bucket", mapping.Component)
+	assert.Equal(t, 1, spy.calls)

Based on learnings, "Test behavior, not implementation. Never test stub functions. Avoid tautological tests. Make code testable via dependency injection. No coverage theater."

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/component_mapper_test.go` around lines 692 - 744, Update the
two tests to inject a mockTaggingClient into the ComponentMapper via
NewComponentMapper so you can assert API call behavior: in
TestMapByTags_FindingEmbeddedTags pass a mockTaggingClient spy on
mapper.clients.tagging/taggingFn and assert it was never called and
mapping.Method remains "finding-tag"; in TestMapByTags_FindingTagsFallbackToAPI
seed mapper.clients.tagging/taggingFn with a mock that returns tags for the ARN
(simulate tag-api response), assert the mock was invoked (call count > 0) and
that mapping.Method equals the expected "tag-api" value; use NewComponentMapper,
MapFinding, mapper.clients.tagging/taggingFn and mockTaggingClient identifiers
to locate the code to modify.
🟑 Minor comments (10)
website/docs/cli/commands/aws/eks/eks.mdx-18-26 (1)

18-26: ⚠️ Potential issue | 🟑 Minor

Add required Arguments, Flags, and Examples sections to this command page.

This page currently stops at usage/subcommands. Please add explicit sections (even if "None") to keep CLI docs consistent.

πŸ“„ Suggested doc patch
 ## Subcommands
 
 <DocCardList/>
+
+## Arguments
+
+<dl>
+  <dt>None</dt>
+  <dd>This command group does not accept positional arguments directly.</dd>
+</dl>
+
+## Flags
+
+<dl>
+  <dt>None</dt>
+  <dd>Use flags on concrete subcommands.</dd>
+</dl>
+
+## Examples
+
+```shell
+atmos aws eks <subcommand> [flags]
+```

As per coding guidelines: "All CLI commands MUST have Docusaurus documentation ... Include ... Arguments/Flags ... and Examples section."

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/docs/cli/commands/aws/eks/eks.mdx` around lines 18 - 26, Add explicit
"Arguments", "Flags", and "Examples" sections to the eks command doc page (in
the same doc that contains the Usage and Subcommands headings) even if
emptyβ€”create an "Arguments" section listing any positional args or the word
"None", a "Flags" section enumerating global/command flags or "None", and an
"Examples" section showing at least one example CLI invocation (e.g., a basic
eks subcommand call) to comply with CLI docs guidelines; update the content near
the existing Usage/Subcommands block so the page has clear Arguments, Flags, and
Examples headings.
website/docs/cli/global-flags.mdx-778-789 (1)

778-789: ⚠️ Potential issue | 🟑 Minor

Duplicate "AI-Powered Analysis Examples" section should be consolidated.

This creates two sections with the same heading and overlapping examples. Merge both into one section to keep docs stable and easier to maintain.

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/docs/cli/global-flags.mdx` around lines 778 - 789, There are
duplicate "AI-Powered Analysis Examples" sections; merge them into a single
section by consolidating the example blocks (the commands shown like "atmos aws
security analyze --ai -s prod", "ATMOS_AI=true atmos aws security analyze
--severity=critical -s prod", and "atmos aws compliance report --ai
--framework=cis-aws -s prod") into one cohesive "AI-Powered Analysis Examples"
heading, remove the redundant duplicate heading and example block, and ensure
the final single section contains all unique examples and correct
flags/formatting so there are no overlapping or repeated examples.
website/docs/cli/commands/aws/compliance/compliance.mdx-21-29 (1)

21-29: ⚠️ Potential issue | 🟑 Minor

Please add Arguments/Flags and Examples for this command page.

The command doc structure is missing required reference sections, which makes the top-level command less actionable for users.

As per coding guidelines, "All CLI commands MUST have Docusaurus documentation in website/docs/cli/commands/<command>/<subcommand>.mdx… [including] Arguments/Flags … and Examples section."

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/docs/cli/commands/aws/compliance/compliance.mdx` around lines 21 -
29, The top-level CLI doc for the "atmos aws compliance" command is missing the
required "Arguments/Flags" and "Examples" sections; update the MDX (the current
file containing the "## Usage" and "## Subcommands" blocks and the
<DocCardList/> token) by adding a clear "## Arguments/Flags" heading that
enumerates each supported flag/argument for the `atmos aws compliance
<subcommand>` command (name, type, default, brief description) and an "##
Examples" heading with one or more concrete usage examples showing common
subcommands and flags (shell code blocks demonstrating real-world invocations);
keep formatting consistent with other CLI docs and include at least one example
per major subcommand listed by <DocCardList/>.
website/docs/cli/global-flags.mdx-593-604 (1)

593-604: ⚠️ Potential issue | 🟑 Minor

This reintroduces a duplicate ATMOS_AI env-var section.

ATMOS_AI is already documented above, so keeping both copies risks drift. Prefer one canonical entry and link to examples from that single spot.

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/docs/cli/global-flags.mdx` around lines 593 - 604, Remove the
duplicated "AI Integration Environment Variables" block that reintroduces the
`ATMOS_AI` entry; instead, delete this duplicate section (the block containing
the `ATMOS_AI` dt/dd) and add a short cross-reference or link to the canonical
`ATMOS_AI` documentation entry (and to the `--ai` CLI flag) so examples are
maintained from a single source of truth; ensure the remaining canonical entry
mentions equivalence to `--ai` and includes the example usage `ATMOS_AI=true
atmos aws security analyze -s prod`.
<details>
<summary>website/docs/cli/commands/aws/security/security.mdx-21-29 (1)</summary><blockquote>

21-29: ⚠️ Potential issue | 🟑 Minor

Add Arguments/Flags and Examples sections to complete the command doc.

This new command page is close, but it's missing the required sections for discoverability and consistency.

πŸ› οΈ Suggested doc scaffold
 ## Usage
 
 ```shell
 atmos aws security <subcommand> [flags]

+## Arguments
+
+

  • ``
  • Security operation to run (for example: `analyze`).

+


+
+## Flags
+
+

  • `--help`
  • Show help for `atmos aws security`.

+


+
+## Examples
+
+shell +atmos aws security analyze --help +
+

Subcommands

</details>

As per coding guidelines, "All CLI commands MUST have Docusaurus documentation in `website/docs/cli/commands/<command>/<subcommand>.mdx`… [including] Arguments/Flags … and Examples section."

<details>
<summary>πŸ€– Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @website/docs/cli/commands/aws/security/security.mdx around lines 21 - 29,
Add the required "Arguments", "Flags", and "Examples" sections to the existing
CLI doc for the atmos aws security command: under the Usage line for "atmos aws
security [flags]" add an Arguments section describing
<subcommand> (e.g., "Security operation to run (for example: analyze)"), a
Flags section listing --help (and any other global flags your CLI supports)
with brief descriptions, and an Examples section showing at least one practical
invocation such as "atmos aws security analyze --help"; place these new headings
before the existing Subcommands/DocCardList so the doc follows the project's CLI
documentation scaffold and is discoverable.


</details>

</blockquote></details>
<details>
<summary>website/docs/cli/commands/aws/security/analyze.mdx-61-63 (1)</summary><blockquote>

`61-63`: _⚠️ Potential issue_ | _🟑 Minor_

**Inconsistent `--max-findings` default across docs artifacts.**

This page documents default `500`, while the generated help screengrab shows default `50`. Please sync both to the actual command default to avoid user confusion.



As per coding guidelines, "Keep CLI documentation and website documentation in sync and document new features on the website with examples and use cases."

<details>
<summary>πŸ€– Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @website/docs/cli/commands/aws/security/analyze.mdx around lines 61 - 63, The
docs show --max-findings default as 500 but the CLI help/screenshot shows 50;
locate the analyze command's option definition (the --max-findings flag in
the Analyze command / option parser) and confirm the actual default there, then
update this MDX entry to match that value (or change the option's default in the
analyze command implementation to 500 if the intended default is 500); ensure
the --max-findings value in the help output, the analyze command
implementation, and this docs line are all the same and update the help
screenshot if necessary.


</details>

</blockquote></details>
<details>
<summary>docs/prd/atmos-aws-security-compliance.md-82-93 (1)</summary><blockquote>

`82-93`: _⚠️ Potential issue_ | _🟑 Minor_

**Mapping strategy count is inconsistent in the same section.**

Line 82 says "5 mapping strategies," but the table lists 7. Please align the count text with the table.

<details>
<summary>πŸ€– Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @docs/prd/atmos-aws-security-compliance.md around lines 82 - 93, The doc
header currently states "5 mapping strategies" but the mapping table actually
lists seven strategies (finding-tag, tag-api, context-tags, account-map,
ecr-repo, naming-convention, resource-type); update the header text to
reflect "7 mapping strategies" (or reduce the table to five if that was
intended) so the count matches the listed strategies, ensuring the phrase
containing "5 mapping strategies" and any nearby summary references are updated
to "7 mapping strategies".


</details>

</blockquote></details>
<details>
<summary>website/docs/cli/commands/aws/compliance/report.mdx-172-177 (1)</summary><blockquote>

`172-177`: _⚠️ Potential issue_ | _🟑 Minor_

**Related commands list includes an unrelated EKS command.**

`/cli/commands/aws/eks/update-kubeconfig` doesn't fit this page's compliance workflow; replace with a compliance/security-related entry.

As per coding guidelines: Keep CLI documentation and website documentation in sync and document new features on the website with examples and use cases.

<details>
<summary>πŸ€– Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @website/docs/cli/commands/aws/compliance/report.mdx around lines 172 - 177,
The Related Commands list mistakenly includes an unrelated EKS entry: remove the
DocCardList item with href '/cli/commands/aws/eks/update-kubeconfig' and replace
it with a compliance/security-related entry (e.g., add an item similar to {type:
'link', href: '/cli/commands/aws/security/report', label: 'atmos aws security
report', description: 'Generate compliance/security reports'}), keeping the same
DocCardList structure and ensuring the label/href/description match the actual
CLI command docs so website and CLI docs remain in sync.


</details>

</blockquote></details>
<details>
<summary>pkg/aws/security/report_renderer.go-52-70 (1)</summary><blockquote>

`52-70`: _⚠️ Potential issue_ | _🟑 Minor_

**Include INFORMATIONAL in the summary breakdown.**

Lines 52-70 and 228-235 only enumerate Critical/High/Medium/Low, but `renderFindingsBySeverity` also renders INFORMATIONAL. Reports with informational findings will show totals that do not reconcile with the summary. 


Also applies to: 228-235

<details>
<summary>πŸ€– Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @pkg/aws/security/report_renderer.go around lines 52 - 70, The summary table
omits SeverityInformation/INFORMATIONAL so totals mismatch with
renderFindingsBySeverity; update the loop that builds the summary (which
currently iterates []Severity{SeverityCritical, SeverityHigh, SeverityMedium,
SeverityLow}) to also include the INFORMATIONAL severity constant (e.g.,
SeverityInformational or SeverityInformation depending on the enum name used),
ensuring you read counts from report.SeverityCounts and compute mapped/unmapped
via countMappedBySeverity(report.Findings, sev) exactly as for other severities;
make the same change in the other occurrence noted (lines referenced around
where the summary is built and the second block at 228-235) so informational
findings are included consistently.


</details>
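The fix this comment describes can be sketched as follows. The `Severity` type and constant names here are illustrative stand-ins, not necessarily the package's actual identifiers:

```go
package main

import "fmt"

// Severity mirrors the report severity levels; names are assumed for illustration.
type Severity string

const (
	SeverityCritical      Severity = "CRITICAL"
	SeverityHigh          Severity = "HIGH"
	SeverityMedium        Severity = "MEDIUM"
	SeverityLow           Severity = "LOW"
	SeverityInformational Severity = "INFORMATIONAL"
)

// summaryRows builds one summary row per severity, including INFORMATIONAL,
// so the summary totals reconcile with what renderFindingsBySeverity renders.
func summaryRows(counts map[Severity]int) []string {
	severities := []Severity{
		SeverityCritical, SeverityHigh, SeverityMedium, SeverityLow,
		SeverityInformational, // previously omitted from the loop
	}
	rows := make([]string, 0, len(severities))
	for _, sev := range severities {
		rows = append(rows, fmt.Sprintf("%s: %d", sev, counts[sev]))
	}
	return rows
}

func main() {
	rows := summaryRows(map[Severity]int{SeverityHigh: 2, SeverityInformational: 3})
	fmt.Println(rows[len(rows)-1]) // the INFORMATIONAL row is now present
}
```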

</blockquote></details>
<details>
<summary>pkg/aws/security/component_mapper_test.go-975-986 (1)</summary><blockquote>

`975-986`: _⚠️ Potential issue_ | _🟑 Minor_

**Assert the grouped titles here, not only the lengths.**

This passes as long as `groupByTitle` returns three singleton groups, even if the titles are wrong. Pull the title back out of each group and assert `A`, `B`, and `C` explicitly; `assert.ElementsMatch` is enough if order is not part of the contract.

<details>
<summary>Suggested tightening.</summary>

```diff
 func TestGroupByTitle_NoDuplicates(t *testing.T) {
 	findings := []Finding{
 		{Title: "A"},
 		{Title: "B"},
 		{Title: "C"},
 	}
 	groups := groupByTitle(findings)
 	require.Len(t, groups, 3)
-	assert.Len(t, groups[0], 1)
-	assert.Len(t, groups[1], 1)
-	assert.Len(t, groups[2], 1)
+
+	gotTitles := make([]string, 0, len(groups))
+	for _, group := range groups {
+		require.Len(t, group, 1)
+		gotTitles = append(gotTitles, group[0].Title)
+	}
+
+	assert.ElementsMatch(t, []string{"A", "B", "C"}, gotTitles)
 }
```

</details>

As per coding guidelines, "For slice-result tests, assert element contents, not just length. require.Len alone allows regressions that drop or corrupt contents. Assert at least the first and last element by value."

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/component_mapper_test.go` around lines 975 - 986, The test
TestGroupByTitle_NoDuplicates only checks group counts and sizes but not the
actual titles; update the test to extract titles from each group returned by
groupByTitle and assert their contents are exactly "A", "B", and "C" (use
assert.ElementsMatch if order is not guaranteed). Locate
TestGroupByTitle_NoDuplicates and after calling groupByTitle(findings) map each
group (e.g., groups[i][0].Title) or collect all Titles from groups into a slice,
then replace or augment the length assertions with an assertion that the
collected titles equal the expected slice {"A","B","C"}; keep require.Len checks
if desired but ensure content assertions are present.
🧹 Nitpick comments (7)
website/src/data/roadmap.js (1)

599-603: Consider collapsing duplicate shipped milestones.

At lines 599–603, these summary milestones overlap with the detailed shipped milestones already listed above (e.g., analyze/report/AI/remediation). Keeping both can inflate milestone counts and make progress less clear.

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/src/data/roadmap.js` around lines 599 - 603, The five summary
milestones ('AWS security finding analysis', 'Finding-to-code mapping',
'AI-powered remediation', 'Compliance framework reports', 'Atmos Auth
integration for security') duplicate more detailed shipped items above and
should be collapsed to avoid inflated counts: remove these duplicate entries
from the milestones array (or replace them with a single aggregated milestone
entry if you need a summary), and ensure any important metadata like changelog:
'aws-security-compliance' is preserved/moved to the corresponding detailed
milestone(s) so no changelog linkage is lost.
cmd/aws/markdown/atmos_aws_compliance.md (1)

8-26: Align examples with repository CLI prompt convention.

Please switch examples to the $ atmos ... style used in cmd/markdown/ docs for consistency.

✏️ Example style adjustment
-atmos aws compliance report --framework cis-aws --stack prod-us-east-1
+ $ atmos aws compliance report --framework cis-aws --stack prod-us-east-1

Based on learnings: In the cloudposse/atmos repository, CLI documentation files in cmd/markdown/ follow a specific format that uses $ atmos command in code blocks.

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/aws/markdown/atmos_aws_compliance.md` around lines 8 - 26, Update the
examples in the AWS compliance docs so they follow the repository CLI prompt
convention by adding a leading " $ " before each atmos invocation (e.g., change
lines invoking "atmos aws compliance report ..." to " $ atmos aws compliance
report ..."); update all occurrences in this file (examples showing
framework/stack/format/file variants) so the code block consistently uses the "
$ atmos ..." prompt style.
pkg/aws/security/aws_clients_test.go (1)

22-38: Consider table-driving the auth-context scenarios.

The nil and non-nil paths can be merged into one table-driven test to reduce duplication and keep future scenarios easy to add.

♻️ Suggested refactor shape
-func TestWithAuthContext(t *testing.T) {
-	cache := newAWSClientCache()
-	authCtx := &schema.AWSAuthContext{
-		CredentialsFile: "/tmp/creds",
-		ConfigFile:      "/tmp/config",
-		Profile:         "test-profile",
-		Region:          "us-east-2",
-	}
-	cache.WithAuthContext(authCtx)
-	assert.Equal(t, authCtx, cache.authContext)
-}
-
-func TestWithAuthContext_Nil(t *testing.T) {
-	cache := newAWSClientCache()
-	cache.WithAuthContext(nil)
-	assert.Nil(t, cache.authContext)
-}
+func TestWithAuthContext(t *testing.T) {
+	tests := []struct {
+		name    string
+		authCtx *schema.AWSAuthContext
+	}{
+		{
+			name: "non-nil auth context",
+			authCtx: &schema.AWSAuthContext{
+				CredentialsFile: "/tmp/creds",
+				ConfigFile:      "/tmp/config",
+				Profile:         "test-profile",
+				Region:          "us-east-2",
+			},
+		},
+		{name: "nil auth context", authCtx: nil},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			cache := newAWSClientCache()
+			cache.WithAuthContext(tt.authCtx)
+			assert.Equal(t, tt.authCtx, cache.authContext)
+		})
+	}
+}

As per coding guidelines, **/*_test.go: "Use table-driven tests for testing multiple scenarios in Go".

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/aws_clients_test.go` around lines 22 - 38, The two tests
TestWithAuthContext and TestWithAuthContext_Nil duplicate logic and should be
combined into a table-driven test: create a slice of cases with name, input
*schema.AWSAuthContext (including nil and a populated struct), and expected
authContext, then loop over cases calling cache := newAWSClientCache();
cache.WithAuthContext(tc.input); and assert.Equal(t, tc.expected,
cache.authContext). Keep references to WithAuthContext, newAWSClientCache, and
authContext to locate the code and ensure both nil and non-nil scenarios are
covered in the single table-driven test.
cmd/aws/markdown/atmos_aws_security.md (1)

9-33: Align example commands with cmd/markdown prompt style.

Please use the repo-standard $ atmos ... format in this code block for consistency with other CLI markdown pages.

Based on learnings: In the cloudposse/atmos repository, CLI documentation files in cmd/markdown/ use $ atmos command in code blocks.

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/aws/markdown/atmos_aws_security.md` around lines 9 - 33, Update the code
block examples so each CLI line uses the repo-standard " $ atmos ..." prompt
prefix instead of bare commands; locate the code block containing the examples
for "atmos aws security analyze" (the lines with "atmos aws security analyze
--stack prod-us-east-1", "--severity", "--source", "--format", "--ai", "--file",
etc.) and prepend " $ " to each command so they follow the other cmd/markdown
pages' style.
pkg/aws/security/skill_prompt.md (1)

31-33: Add a language to the fenced deploy command block.

Use a language tag (e.g., bash) so markdown tooling and renderers stay consistent.

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/skill_prompt.md` around lines 31 - 33, The fenced code block
containing the deploy command "atmos terraform apply <component> -s <stack>"
lacks a language tag; update the markdown in pkg/aws/security/skill_prompt.md to
use a language identifier (e.g., bash) on that triple-backtick fence so
renderers and syntax highlighters treat the block as shell commands.
pkg/aws/security/cache_test.go (2)

113-117: Strengthen cache-hit assertions with element content checks.

On hit paths, asserting only Len can miss corrupted or wrong entries. Assert at least first/last Finding.ID (or full equality for small fixtures).

As per coding guidelines, "For slice-result tests, assert element contents, not just length."

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/cache_test.go` around lines 113 - 117, The test currently
only asserts slice length on cache-hit paths (hit true) which can miss
corrupted/wrong entries; update the block handling tt.wantHit to also assert
element contents β€” for example, when require.NotNil(t, got) is true, check that
the first and last Finding.ID values in got match expected IDs from the test
fixture (or perform full slice equality for small fixtures) using assert.Equal
on those IDs (referencing variables got, tt.wantCount, tt.wantHit and the
Finding.ID field) so the test verifies actual entry contents, not just length.
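The guideline quoted above can be made concrete with a small sketch; the `Finding` struct here is a minimal stand-in for the package's type:

```go
package main

import "fmt"

// Finding is a minimal stand-in for the package's Finding type.
type Finding struct{ ID string }

// firstLastMatch pins the first and last element IDs, which catches
// corrupted, reordered, or wrong cache entries that a length check misses.
func firstLastMatch(got []Finding, wantFirst, wantLast string) bool {
	if len(got) == 0 {
		return false
	}
	return got[0].ID == wantFirst && got[len(got)-1].ID == wantLast
}

func main() {
	cached := []Finding{{ID: "f-001"}, {ID: "f-002"}, {ID: "f-003"}}
	// A length-only assertion would pass even with wrong IDs;
	// checking both ends also pins the contents.
	fmt.Println(len(cached) == 3 && firstLastMatch(cached, "f-001", "f-003"))
}
```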

196-201: Replace fixed sleeps with eventual assertions to reduce test flakiness.

Using hard sleeps around TTL boundaries can be timing-sensitive in CI. Prefer require.Eventually / assert.Eventually with polling.

Refactor sketch
- time.Sleep(100 * time.Millisecond)
- got, hit = cache.GetFindings(opts)
- assert.False(t, hit)
- assert.Nil(t, got)
+ require.Eventually(t, func() bool {
+   got, hit = cache.GetFindings(opts)
+   return !hit && got == nil
+ }, 500*time.Millisecond, 10*time.Millisecond)

Also applies to: 216-220

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/cache_test.go` around lines 196 - 201, Replace the fixed
time.Sleep call with an eventual assertion that polls until the cache expires:
call assert.Eventually (or require.Eventually) and inside the predicate call
cache.GetFindings(opts) and return true only when hit==false && got==nil; use a
reasonable timeout (e.g. 1s) and poll interval (e.g. 10–50ms). Apply the same
change for the second occurrence around the 216–220 block so both TTL-bound
checks use assert.Eventually/require.Eventually against cache.GetFindings(opts)
instead of time.Sleep.

…al flags

- Fix compliance dedup: use SecurityControlID (e.g., EC2.18) instead of
  ComplianceStandard (framework ID) as dedup key
- Fix cache key: add Framework, Stack, Component; add nil guard
- Fix cache safety: copy slices on get/set to prevent mutation
- Fix global flags: parse BasePath in security and compliance commands
- Fix --framework flag: wire into QueryOptions for security analyze
- Fix CSV flush: check cw.Error() after Flush() in both renderers
- Fix auth region fallback: check authContext.Region before us-east-1
- Fix resolveAuthContext: nil guard, Profile validation, extract helper
- Fix INFORMATIONAL: include in summary table severity loops
- Fix Bedrock PRD: qualify data residency and compliance claims
- Fix PRD: strategy count 5→7, tag names use colons
- Fix roadmap: add pr:2282 to milestones, remove duplicate entries
- Fix CLI markdown: use $ prompt style per convention
- Document --ai flag for compliance report with examples
- Improve tests: table-driven auth, assert element contents, tool calls
- Update compliance examples with 35/42 (83%) production results

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 12

♻️ Duplicate comments (1)
pkg/aws/security/finding_fetcher.go (1)

230-285: ⚠️ Potential issue | πŸ”΄ Critical

--stack still isn't enforced in compliance fetches.

FetchComplianceStatus sets QueryOptions.Stack, but this filter builder never reads it. A stack-scoped compliance report will still score every finding in the framework and only stamp the requested stack onto the output.

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/finding_fetcher.go` around lines 230 - 285,
buildFindingFilters currently ignores QueryOptions.Stack so stack-scoped
compliance reports still return all findings; update buildFindingFilters
(awsFindingFetcher.buildFindingFilters) to detect opts.Stack
(QueryOptions.Stack) and append an appropriate Security Hub string filter that
restricts findings to that stack (e.g., add a StringFilter for the finding
resource identifier or resource group value that your system uses to tag the
stack) so the returned AwsSecurityFindingFilters include the stack constraint;
locate the place that builds filters and append the new filter to the filters
struct (use the same shtypes.StringFilter shape and
StringFilterComparisonEquals/Prefix as appropriate).
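A minimal sketch of the missing branch, using stand-in types in place of the Security Hub SDK's filter shapes; the field names, comparison constant, and tag-based filtering strategy are assumptions for illustration, not the SDK's actual API:

```go
package main

import "fmt"

// StringFilter and FindingFilters are stand-ins for the Security Hub SDK's
// shtypes.StringFilter / AwsSecurityFindingFilters; real field names may differ.
type StringFilter struct {
	Comparison string
	Value      string
}

type FindingFilters struct {
	ResourceTags []StringFilter
}

// QueryOptions mirrors only the subset of options relevant here.
type QueryOptions struct {
	Stack string
}

// buildFindingFilters appends a stack-scoped filter when opts.Stack is set,
// so a stack-scoped compliance report only scores that stack's findings.
func buildFindingFilters(opts QueryOptions) FindingFilters {
	var filters FindingFilters
	if opts.Stack != "" {
		filters.ResourceTags = append(filters.ResourceTags, StringFilter{
			Comparison: "EQUALS",
			Value:      opts.Stack, // e.g. the atmos:stack resource tag value
		})
	}
	return filters
}

func main() {
	f := buildFindingFilters(QueryOptions{Stack: "plat-use2-prod"})
	fmt.Println(len(f.ResourceTags), f.ResourceTags[0].Value)
}
```

The key point is that the builder must read `opts.Stack` at all; which Security Hub filter field best encodes the stack (resource tags, resource ID prefix, etc.) depends on how the system tags resources.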
🧹 Nitpick comments (1)
pkg/aws/security/component_mapper_test.go (1)

727-744: Make the fallback test assert a real outcome.

This currently passes for any non-finding-tag result, including unmatched, so it will not catch a broken fallback path.

πŸ’‘ Tighten the assertion
 func TestMapByTags_FindingTagsFallbackToAPI(t *testing.T) {
 	// When finding has no ResourceTags, fall back to API.
 	finding := &Finding{
 		ID:           "no-tag-001",
 		ResourceARN:  "arn:aws:s3:::my-bucket",
+		ResourceType: "AwsS3Bucket",
 		ResourceTags: nil, // No embedded tags.
 	}
@@
 	mapping, err := mapper.MapFinding(context.Background(), finding)
 	require.NoError(t, err)
 	require.NotNil(t, mapping)
-	// Should use naming convention or resource type, not tag.
-	assert.NotEqual(t, "finding-tag", mapping.Method)
+	assert.True(t, mapping.Mapped)
+	assert.Equal(t, "resource-type", mapping.Method)
+	assert.Equal(t, "s3-bucket", mapping.Component)
 }

As per coding guidelines, "Test behavior, not implementation. Never test stub functions. Avoid tautological tests. No coverage theater."

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/component_mapper_test.go` around lines 727 - 744, The test
TestMapByTags_FindingTagsFallbackToAPI currently only asserts mapping.Method !=
"finding-tag" which allows "unmatched" to pass; update the assertion to verify a
concrete expected fallback result from MapFinding (for example assert
mapping.Method is not "unmatched" and/or equals the expected fallback method
such as the naming convention or resource-type value your mapper returns).
Locate TestMapByTags_FindingTagsFallbackToAPI and change the require/assert on
mapping.Method to assert a real outcome (e.g., assert.NotEqual(t, "unmatched",
mapping.Method) and assert.Equal(t, "<expected-fallback-method>",
mapping.Method) or assert.Contains(t,
[]string{"naming-convention","resource-type"}, mapping.Method) as appropriate).
πŸ€– Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cmd/aws/compliance.go`:
- Around line 125-187: The loop currently renders each framework report
separately (variables/frameworks, fetcher.FetchComplianceStatus,
renderer.RenderComplianceReport) which breaks structured formats
(JSON/YAML/CSV); instead accumulate all non-nil reports into a single in-memory
collection keyed by framework (apply filterComplianceReport when
len(controlFilter)>0) and after the for loop call a single render method to emit
one aggregate payload for non-markdown formats (keep the existing per-framework
markdown/stdout ui.Markdown path when outputFormat == security.FormatMarkdown &&
fileOutput == "") and ensure errors from fetch/render are still returned; update
code around NewFindingFetcher/NewReportRenderer, FetchComplianceStatus and the
RenderComplianceReport invocation accordingly.
- Around line 241-262: filterComplianceReport currently only filters
report.FailingDetails and leaves report.PassingControls unchanged, which skews
TotalControls and ScorePercent; fix by recomputing counts for the selected
control set: build a lookup of failing IDs from report.FailingDetails, iterate
over the controlFilter keys and for each selected ID increment failingCount if
present in that lookup otherwise increment passingCount, then set
filteredReport.FailingControls = failingCount, filteredReport.PassingControls =
passingCount, filteredReport.TotalControls = failingCount+passingCount and
recalc ScorePercent accordingly (use report.FailingDetails -> ctrl.ControlID,
controlFilter, filteredReport, and the existing filtered slice to locate the
code to change).
- Around line 62-66: The code only copies BasePath into
schema.ConfigAndStacksInfo before calling cfg.InitCliConfig, so config-selection
flags (configs, config-dirs, profile, etc.) are ignored; instead, populate the
entire ConfigAndStacksInfo with the result of flags.ParseGlobalFlags(cmd, v) (or
copy all its fields) and then pass that populated struct to cfg.InitCliConfig so
InitCliConfig sees the user's selected configs, config dirs and profile; update
references around flags.ParseGlobalFlags, configAndStacksInfo, and
cfg.InitCliConfig accordingly.
- Around line 38-45: The CLI command complianceReportCmd is missing the
advertised --ai flag and its RunE handler does not branch into the AI
remediation flow, so add a boolean flag (e.g., "ai") to the command registration
and bind it to Viper/flags, then update the RunE function to check the flag (via
cmd.Flags().GetBool("ai") or viper.GetBool) and call the AI remediation path
when true (ensure you wire the same logic referenced elsewhere in the file where
the AI flow is implemented); update any flag help text to mention the AI
remediation behavior so the feature is reachable from the CLI.

In `@cmd/aws/credentials.go`:
- Around line 19-28: The function currently checks atmosConfig == nil before
handling the empty identity fast path, causing an AWS-credentials error when
identityName == "" and atmosConfig == nil; move the identityName == "" check to
run first and return nil, nil immediately for the empty-identity fast path so
the default AWS credential chain is used. Update the logic around the
identityName variable (and any return in the function in cmd/aws/credentials.go)
so the empty-identity branch short-circuits before validating atmosConfig,
preserving the existing error construction using
errUtils.ErrAWSCredentialsNotValid for the atmosConfig == nil case.
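The reordered checks could look like this minimal sketch; `atmosConfiguration` and `resolveIdentity` are trimmed, hypothetical stand-ins for the real types and function in `cmd/aws/credentials.go`:

```go
package main

import "errors"

// errAWSCredentialsNotValid stands in for errUtils.ErrAWSCredentialsNotValid.
var errAWSCredentialsNotValid = errors.New("AWS credentials not valid")

// atmosConfiguration is a trimmed, hypothetical stand-in for the real config type.
type atmosConfiguration struct{}

// resolveIdentity checks the empty-identity fast path before validating the
// config, so the default AWS credential chain is used when no identity is
// requested, even with a nil config.
func resolveIdentity(cfg *atmosConfiguration, identityName string) (any, error) {
	if identityName == "" {
		// Fast path: no identity requested, defer to the default chain.
		return nil, nil
	}
	if cfg == nil {
		return nil, errAWSCredentialsNotValid
	}
	// ... resolve the named identity via cfg (elided) ...
	return identityName, nil
}
```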

In `@cmd/aws/security.go`:
- Around line 233-236: When creating the AI analyzer
(security.NewFindingAnalyzer) you currently only write a debug log on
analyzerErr and continue; change this so that if the user explicitly requested
AI remediation (the --ai flag / AI mode variable) you log the error (use
log.Error) and propagate/return a non-nil error (or os.Exit(1)) instead of
falling through; update the analyzerErr branch around analyzer, analyzerErr :=
security.NewFindingAnalyzer(...) to check the AI-request flag and fail fast
(return the analyzerErr or exit) so the CLI shows a visible error when AI
construction fails.
- Around line 73-77: The code only copies BasePath from the parsed global flags
into schema.ConfigAndStacksInfo before calling cfg.InitCliConfig, so
InitCliConfig doesn't receive --config, --config-path, or --profile values;
update the call site to populate the ConfigAndStacksInfo with all relevant
selections returned by flags.ParseGlobalFlags (e.g., set
AtmosConfigFilesFromArg, AtmosConfigDirsFromArg, BasePath, and ProfilesFromArg
on the ConfigAndStacksInfo struct) so InitCliConfig/LoadConfig sees the full
global config selection used to resolve identity, region, and AI settings (see
symbols: ConfigAndStacksInfo, flags.ParseGlobalFlags, InitCliConfig, LoadConfig,
AtmosConfigFilesFromArg, AtmosConfigDirsFromArg, ProfilesFromArg).
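The fail-fast behavior requested in the first comment could be sketched as below; `newFindingAnalyzer` and `setupAnalyzer` are hypothetical stand-ins for `security.NewFindingAnalyzer` and the surrounding call site:

```go
package main

import (
	"errors"
	"fmt"
)

var errAnalyzer = errors.New("AI provider not configured")

// newFindingAnalyzer is a hypothetical stand-in for security.NewFindingAnalyzer.
func newFindingAnalyzer(providerOK bool) (*struct{}, error) {
	if !providerOK {
		return nil, errAnalyzer
	}
	return &struct{}{}, nil
}

// setupAnalyzer fails fast when AI was explicitly requested via --ai, and only
// degrades silently (debug log elided) otherwise.
func setupAnalyzer(aiRequested, providerOK bool) (*struct{}, error) {
	analyzer, analyzerErr := newFindingAnalyzer(providerOK)
	if analyzerErr != nil {
		if aiRequested {
			// Surface a visible error instead of falling through.
			return nil, fmt.Errorf("--ai requested but analyzer init failed: %w", analyzerErr)
		}
		// Not requested: continue without AI remediation.
		return nil, nil
	}
	return analyzer, nil
}
```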

In `@pkg/aws/security/report_renderer.go`:
- Around line 357-387: renderGroupedFindingMarkdown currently omits any
Remediation content for grouped findings; update it to include remediation by
checking findings[i].Remediation (or f.Remediation) and rendering it after
renderGroupedTags: if all findings share the same non-empty Remediation, render
that once (reuse whatever renderer you have for single findings, e.g. the same
code/path used by renderFindingMarkdown), otherwise iterate findings and render
each finding's Remediation under its table row or in per-finding subsections so
no remediation payload is dropped; reference renderGroupedFindingMarkdown,
renderGroupedTags, and the findings[].Remediation/Remediation field when
implementing.
- Around line 218-225: The reportTarget function currently returns "All Stacks"
when stack is empty even if component is provided; update reportTarget (using
parameters stack and component) so that when stack == "" and component != "" it
returns the component string (and keep the existing behavior for both non-empty
or both empty), i.e. add a branch checking component != "" and returning
component before the final "All Stacks" fallback.
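A sketch of the suggested `reportTarget` branch ordering. This follows the comment above (component-only returns the component string); note the fix committed elsewhere in this PR renders the component-only case as "All Stacks / vpc" instead, so the exact output string is a judgment call:

```go
package main

// reportTarget sketches the suggested branch ordering: the component-only
// case no longer collapses to the "All Stacks" fallback.
func reportTarget(stack, component string) string {
	switch {
	case stack != "" && component != "":
		return stack + " / " + component
	case stack != "":
		return stack
	case component != "":
		// New branch, checked before the final fallback.
		return component
	default:
		return "All Stacks"
	}
}
```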

In `@tests/fixtures/scenarios/native-ci/github-output.txt`:
- Line 50: The anchor id "result-prod-mycomponent" is duplicated in this fixture
output and must be made unique; find the second occurrence inside the output
block (the <details><summary><a id="result-prod-mycomponent" /> ...</summary>)
and change the id to a unique value (for example "result-prod-mycomponent-2" or
include an environment/sequence suffix) so that anchor resolution is
unambiguous; ensure any internal references to this anchor (if present) are
updated to match the new id.

In `@tests/fixtures/scenarios/native-ci/github-step-summary.txt`:
- Line 37: The summary block reuses the HTML anchor id "result-prod-mycomponent"
which duplicates an earlier anchor; update the <summary><a
id="result-prod-mycomponent" />... line to use a unique id (for example append a
suffix like "-2" or include the step name/environment) and update any references
to that anchor so links target this new section; ensure the id is unique within
the fixture and follows the same naming pattern used elsewhere.

In `@website/docs/cli/commands/aws/compliance/report.mdx`:
- Around line 168-169: Update the documentation entry for the `nist` option so
it correctly describes the mapped standard as NIST Special Publication 800-53
rather than the NIST Cybersecurity Framework (CSF): replace the `<dd>` text
currently describing "NIST Cybersecurity Framework (CSF)" with a description
that names "NIST SP 800-53 (NIST 800-53)", mentions that the CLI maps `nist` to
Security Hub's `nist-800-53` standard, and optionally notes it is a controls
catalog for federal information systems.

---

Duplicate comments:
In `@pkg/aws/security/finding_fetcher.go`:
- Around line 230-285: buildFindingFilters currently ignores QueryOptions.Stack
so stack-scoped compliance reports still return all findings; update
buildFindingFilters (awsFindingFetcher.buildFindingFilters) to detect opts.Stack
(QueryOptions.Stack) and append an appropriate Security Hub string filter that
restricts findings to that stack (e.g., add a StringFilter for the finding
resource identifier or resource group value that your system uses to tag the
stack) so the returned AwsSecurityFindingFilters include the stack constraint;
locate the place that builds filters and append the new filter to the filters
struct (use the same shtypes.StringFilter shape and
StringFilterComparisonEquals/Prefix as appropriate).
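The appended stack constraint could look like this sketch; `StringFilter`, `FindingFilters`, and `QueryOptions` are trimmed, hypothetical stand-ins for the aws-sdk-go-v2 Security Hub shapes (`shtypes.StringFilter`, `shtypes.AwsSecurityFindingFilters`), and which filter field actually carries the stack depends on how resources are tagged:

```go
package main

// StringFilter and FindingFilters are trimmed, hypothetical stand-ins for
// shtypes.StringFilter and shtypes.AwsSecurityFindingFilters.
type StringFilter struct {
	Comparison string // e.g. "EQUALS" or "PREFIX"
	Value      string
}

type FindingFilters struct {
	// Placeholder for whichever field carries the stack constraint
	// (resource tag, resource identifier, etc.).
	ResourceID []StringFilter
}

type QueryOptions struct {
	Stack string
}

// buildFindingFilters appends a stack constraint when opts.Stack is set, so
// stack-scoped compliance reports no longer return all findings.
func buildFindingFilters(opts QueryOptions) FindingFilters {
	var filters FindingFilters
	// ... existing severity/source filters elided ...
	if opts.Stack != "" {
		filters.ResourceID = append(filters.ResourceID, StringFilter{
			Comparison: "EQUALS",
			Value:      opts.Stack,
		})
	}
	return filters
}
```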

---

Nitpick comments:
In `@pkg/aws/security/component_mapper_test.go`:
- Around line 727-744: The test TestMapByTags_FindingTagsFallbackToAPI currently
only asserts mapping.Method != "finding-tag" which allows "unmatched" to pass;
update the assertion to verify a concrete expected fallback result from
MapFinding (for example assert mapping.Method is not "unmatched" and/or equals
the expected fallback method such as the naming convention or resource-type
value your mapper returns). Locate TestMapByTags_FindingTagsFallbackToAPI and
change the require/assert on mapping.Method to assert a real outcome (e.g.,
assert.NotEqual(t, "unmatched", mapping.Method) and assert.Equal(t,
"<expected-fallback-method>", mapping.Method) or assert.Contains(t,
[]string{"naming-convention","resource-type"}, mapping.Method) as appropriate).

ℹ️ Review info
βš™οΈ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: fd5da8e0-5e15-40bb-aa9b-f1020d4a8efd

📥 Commits

Reviewing files that changed from the base of the PR and between 7a03cc2 and 0213408.

📒 Files selected for processing (23)
  • cmd/aws/compliance.go
  • cmd/aws/credentials.go
  • cmd/aws/markdown/atmos_aws_compliance.md
  • cmd/aws/markdown/atmos_aws_security.md
  • cmd/aws/security.go
  • docs/prd/atmos-aws-security-compliance.md
  • docs/prd/aws-bedrock-ai-provider.md
  • examples/aws-security-compliance/README.md
  • pkg/aws/security/analyzer_test.go
  • pkg/aws/security/aws_clients_test.go
  • pkg/aws/security/cache.go
  • pkg/aws/security/cache_test.go
  • pkg/aws/security/component_mapper_test.go
  • pkg/aws/security/finding_fetcher.go
  • pkg/aws/security/report_renderer.go
  • pkg/aws/security/types.go
  • tests/fixtures/scenarios/native-ci/github-output.txt
  • tests/fixtures/scenarios/native-ci/github-step-summary.txt
  • tests/snapshots/TestCLICommands_atmos_describe_config.stdout.golden
  • tests/snapshots/TestCLICommands_secrets-masking_describe_config.stdout.golden
  • website/blog/2026-04-03-aws-security-compliance.mdx
  • website/docs/cli/commands/aws/compliance/report.mdx
  • website/src/data/roadmap.js
✅ Files skipped from review due to trivial changes (6)
  • tests/snapshots/TestCLICommands_atmos_describe_config.stdout.golden
  • cmd/aws/markdown/atmos_aws_security.md
  • pkg/aws/security/aws_clients_test.go
  • website/src/data/roadmap.js
  • website/blog/2026-04-03-aws-security-compliance.mdx
  • pkg/aws/security/types.go
🚧 Files skipped from review as they are similar to previous changes (3)
  • pkg/aws/security/cache_test.go
  • pkg/aws/security/analyzer_test.go
  • pkg/aws/security/cache.go

@codecov

codecov bot commented Apr 4, 2026

Codecov Report

❌ Patch coverage is 66.44799% with 716 lines in your changes missing coverage. Please review.
✅ Project coverage is 76.99%. Comparing base (b51717f) to head (82b08f2).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| cmd/aws/security.go | 36.75% | 159 Missing and 1 partial ⚠️ |
| cmd/aws/compliance.go | 34.93% | 94 Missing and 1 partial ⚠️ |
| pkg/ai/tools/atmos/analyze_finding.go | 23.80% | 80 Missing ⚠️ |
| pkg/ai/tools/atmos/list_findings.go | 29.72% | 78 Missing ⚠️ |
| pkg/ai/tools/atmos/describe_finding.go | 18.51% | 66 Missing ⚠️ |
| pkg/ai/tools/atmos/compliance_report.go | 24.35% | 59 Missing ⚠️ |
| cmd/aws/credentials.go | 37.68% | 42 Missing and 1 partial ⚠️ |
| pkg/aws/security/analyzer.go | 86.61% | 34 Missing and 2 partials ⚠️ |
| pkg/aws/security/finding_fetcher.go | 88.65% | 23 Missing and 10 partials ⚠️ |
| pkg/aws/security/component_mapper.go | 89.41% | 18 Missing and 13 partials ⚠️ |

... and 3 more

❌ Your patch check has failed because the patch coverage (66.44%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##             main    #2282      +/-   ##
==========================================
- Coverage   77.19%   76.99%   -0.20%     
==========================================
  Files        1045     1059      +14     
  Lines       98388   100522    +2134     
==========================================
+ Hits        75951    77399    +1448     
- Misses      18184    18834     +650     
- Partials     4253     4289      +36     
| Flag | Coverage Δ |
| --- | --- |
| unittests | 76.99% <66.44%> (-0.20%) ⬇️ |

Flags with carried forward coverage won't be shown.

| Files with missing lines | Coverage Δ |
| --- | --- |
| errors/errors.go | 100.00% <ø> (ø) |
| pkg/ai/tools/atmos/setup.go | 81.57% <100.00%> (+2.16%) ⬆️ |
| pkg/aws/security/cache.go | 100.00% <100.00%> (ø) |
| pkg/schema/schema.go | 87.70% <ø> (ø) |
| pkg/ui/formatter.go | 80.36% <100.00%> (+0.04%) ⬆️ |
| pkg/schema/aws_security.go | 33.33% <33.33%> (ø) |
| pkg/aws/security/aws_clients.go | 68.42% <68.42%> (ø) |
| pkg/aws/security/report_renderer.go | 95.11% <95.11%> (ø) |
| pkg/aws/security/component_mapper.go | 89.41% <89.41%> (ø) |
| pkg/aws/security/finding_fetcher.go | 88.65% <88.65%> (ø) |

... and 8 more

... and 4 files with indirect coverage changes


- Fix global config flags: pass Config, ConfigPath, Profile to InitCliConfig
  (not just BasePath) in both security and compliance commands
- Fix resolveAuthContext: check empty identity before nil config
- Fix AI analyzer warning: show visible ui.Warning instead of silent debug log
- Fix reportTarget: handle component-only case ("All Stacks / vpc")
- Fix grouped markdown: render remediation for grouped findings
- Fix NIST description: "NIST 800-53" not "NIST CSF" in compliance docs
- Improve test coverage: 45 new tests across credentials, renderer,
  fetcher, and mapper; pkg/aws/security/ coverage 85.5% → 91.8%

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
cmd/aws/compliance.go (3)

201-214: ⚠️ Potential issue | 🟠 Major

Missing --ai flag per PR scope.

The PR description mentions atmos aws compliance report supports --ai for AI-assisted prioritized remediation, but the flag isn't registered here. The documentation (lines 59-61 in report.mdx) also documents this flag.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/aws/compliance.go` around lines 201 - 214, The init() function building
complianceParser is missing registration of the --ai flag; add a boolean flag
option to the flags.NewStandardParser call (e.g., flags.WithBoolFlag("ai", "",
false, "Enable AI-assisted prioritized remediation")) and also include a
corresponding WithEnvVars entry if environment-toggle is needed (e.g.,
"ATMOS_AWS_COMPLIANCE_AI"); update complianceParser construction so the new flag
is included alongside existing flags referenced in this function.

161-191: ⚠️ Potential issue | 🟠 Major

Multi-framework loop produces invalid structured output.

For JSON/YAML/CSV formats, iterating frameworks and calling RenderComplianceReport multiple times produces concatenated documents (e.g., {...}{...} for JSON) which parsers cannot consume.

Consider aggregating reports into a slice and rendering once, or restricting multi-framework mode to markdown only.

💡 Approach sketch
// For structured formats, aggregate and render once
if outputFormat != security.FormatMarkdown {
    var reports []*security.ComplianceReport
    for _, fw := range frameworks {
        report, err := fetcher.FetchComplianceStatus(ctx, fw, stack)
        // ... error handling ...
        if report != nil {
            reports = append(reports, report)
        }
    }
    return renderer.RenderComplianceReports(output, reports) // new method
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/aws/compliance.go` around lines 161 - 191, The loop over frameworks calls
renderer.RenderComplianceReport repeatedly which corrupts structured outputs
(JSON/YAML/CSV) by concatenating documents; change logic so when outputFormat !=
security.FormatMarkdown you collect non-nil reports returned by
fetcher.FetchComplianceStatus (ref: frameworks, fetcher.FetchComplianceStatus,
report) into a slice and then call a single renderer method (add/implement
renderer.RenderComplianceReports) to render them at once to output; keep the
existing per-framework behavior only for markdown to stdout (refs: outputFormat,
security.FormatMarkdown, fileOutput, renderer.RenderComplianceReport).

245-267: ⚠️ Potential issue | 🟠 Major

Control filtering math is still incorrect.

When --controls filters to specific control IDs, the score calculation uses report.PassingControls from the original unfiltered report (line 262). If the user filters to just CIS.1.1 (a failing control), the report incorrectly claims all original passing controls are relevant.

The filter should only count passing/failing within the requested control set.

🔧 Suggested fix sketch
 func filterComplianceReport(report *security.ComplianceReport, controlFilter map[string]bool) *security.ComplianceReport {
+	// Count how many filtered controls are failing vs in the filter set.
 	filtered := make([]security.ComplianceControl, 0, len(report.FailingDetails))
+	failingIDs := make(map[string]bool)
 	for _, ctrl := range report.FailingDetails {
 		if controlFilter[ctrl.ControlID] {
 			filtered = append(filtered, ctrl)
+			failingIDs[ctrl.ControlID] = true
 		}
 	}

+	// Count passing = controls in filter that aren't failing.
+	passingInFilter := 0
+	for id := range controlFilter {
+		if !failingIDs[id] {
+			passingInFilter++
+		}
+	}
+
 	filteredReport := *report
 	filteredReport.FailingDetails = filtered
 	filteredReport.FailingControls = len(filtered)
-	filteredReport.TotalControls = len(filtered) + report.PassingControls
+	filteredReport.PassingControls = passingInFilter
+	filteredReport.TotalControls = len(controlFilter)
 	if filteredReport.TotalControls > 0 {
 		const percentMultiplier = 100
-		filteredReport.ScorePercent = float64(report.PassingControls) / float64(filteredReport.TotalControls) * percentMultiplier
+		filteredReport.ScorePercent = float64(passingInFilter) / float64(filteredReport.TotalControls) * percentMultiplier
 	} else {
 		filteredReport.ScorePercent = 0
 	}
 	return &filteredReport
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/aws/compliance.go` around lines 245 - 267, filterComplianceReport
currently reuses report.PassingControls from the original report, producing
incorrect scores for filtered control sets; update filterComplianceReport to
recompute passing and total counts within the requested control set by (1)
building a set of control IDs present in the original report (include both
failing and passing entries if a PassingDetails slice exists) or at minimum
intersecting controlFilter with report.FailingDetails to count filtered failing
controls, (2) compute totalFiltered as the number of control IDs that are in
controlFilter and present in the original report, (3) compute passingFiltered =
totalFiltered - filteredFailingCount, and then set
filteredReport.PassingControls = passingFiltered, filteredReport.FailingControls
= filteredFailingCount, filteredReport.TotalControls = totalFiltered and recalc
ScorePercent from those filtered counts before returning; change only inside
filterComplianceReport and use the existing symbols report,
report.FailingDetails, filteredReport and ScorePercent.
🧹 Nitpick comments (2)
pkg/aws/security/report_renderer_test.go (1)

543-560: Table-driven test for reportTarget missing component-only case.

The table at lines 544-553 covers both_set, stack_only, and neither_set, but the component-only case is tested separately in TestReportTarget_ComponentOnly (line 751). Consider adding it to the table for completeness and discoverability.

💡 Add component-only case to the table
 tests := []struct {
 	name      string
 	stack     string
 	component string
 	expected  string
 }{
 	{name: "both_set", stack: "prod", component: "vpc", expected: "prod / vpc"},
 	{name: "stack_only", stack: "prod", component: "", expected: "prod"},
+	{name: "component_only", stack: "", component: "vpc", expected: "All Stacks / vpc"},
 	{name: "neither_set", stack: "", component: "", expected: "All Stacks"},
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/report_renderer_test.go` around lines 543 - 560, The
table-driven TestReportTarget is missing the component-only scenario; update the
tests slice in TestReportTarget to include a case like {name: "component_only",
stack: "", component: "vpc", expected: "vpc"} so the component-only behavior of
reportTarget is covered in the same table (and remove or de-duplicate the
separate TestReportTarget_ComponentOnly test if desired). Ensure you reference
reportTarget and TestReportTarget when making the change.
pkg/aws/security/report_renderer.go (1)

506-509: Diff rendering could show malformed output for multi-line changes.

When change.Before or change.After contain newlines, the diff block will break formatting since each line won't have the proper +/- prefix. This is a minor edge case since most remediation code changes are likely single-line.

💡 Optional: Split multi-line diffs properly
 if change.Before != "" {
-	fmt.Fprintf(sb, "```diff\n- %s\n+ %s\n```\n\n", change.Before, change.After)
+	beforeLines := strings.Split(change.Before, "\n")
+	afterLines := strings.Split(change.After, "\n")
+	sb.WriteString("```diff\n")
+	for _, line := range beforeLines {
+		fmt.Fprintf(sb, "- %s\n", line)
+	}
+	for _, line := range afterLines {
+		fmt.Fprintf(sb, "+ %s\n", line)
+	}
+	sb.WriteString("```\n\n")
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/aws/security/report_renderer.go` around lines 506 - 509, The diff
renderer in render loop currently writes change.Before and change.After as
single %s strings which breaks when those values contain newlines; update the
block in the renderer that handles change.Before/change.After (the code using
sb, change.Before, change.After) to split both values with strings.Split(...,
"\n"), open the fenced code block, iterate over beforeLines and write each line
prefixed with "- " (or "- " followed by content) and iterate over afterLines
writing each line prefixed with "+ ", handle differing line counts by emitting
blank prefixed lines where one side is shorter, then close the fenced block;
this preserves proper per-line +/- prefixes for multi-line diffs.

ℹ️ Review info
βš™οΈ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: a9c79da6-569b-4fdd-b2a6-de66ad3fe9ab

📥 Commits

Reviewing files that changed from the base of the PR and between 0213408 and 82b08f2.

📒 Files selected for processing (9)
  • cmd/aws/compliance.go
  • cmd/aws/credentials.go
  • cmd/aws/security.go
  • cmd/aws/security_test.go
  • pkg/aws/security/component_mapper_test.go
  • pkg/aws/security/finding_fetcher_test.go
  • pkg/aws/security/report_renderer.go
  • pkg/aws/security/report_renderer_test.go
  • website/docs/cli/commands/aws/compliance/report.mdx
✅ Files skipped from review due to trivial changes (2)
  • pkg/aws/security/component_mapper_test.go
  • pkg/aws/security/finding_fetcher_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • cmd/aws/credentials.go

Comment on lines +59 to +61
<dt>`--ai`</dt>
<dd>Enable AI-powered analysis of the compliance report. The global `--ai` flag captures the report output and sends it to the configured AI provider for a summary with remediation guidance for each failing control.</dd>
</dl>
Contributor


⚠️ Potential issue | 🟡 Minor

Documentation describes --ai flag that isn't implemented.

The --ai flag is documented here but isn't registered in cmd/aws/compliance.go. Either add the flag implementation or remove this documentation until it's ready.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/docs/cli/commands/aws/compliance/report.mdx` around lines 59 - 61,
The docs mention a `--ai` flag that isn't registered; update the AWS compliance
command by adding a boolean flag named "ai" to the compliance command
registration (e.g., in the function that builds or registers the compliance
command such as NewComplianceCmd or the complianceCmd flag setup) and wire the
flag into the command's execution path so the flag value is read and used to
enable AI-powered analysis, or remove the `--ai` mention from the docs if you
prefer not to implement it yet; ensure the flag has the same help text as the
docs and is accessible to the command handler that runs the compliance report.


Labels

minor (New features that do not break anything), size/xxl

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant