feat: AWS Security & Compliance – finding-to-code mapping with AI remediation #2282
…egration Add `atmos ai security` and `atmos ai compliance` CLI commands with full AWS Security Hub integration, dual-path finding-to-component mapping (tag-based and heuristic), and multi-format report rendering (markdown/json/yaml/csv). Phase 1: Schema, CLI commands, report renderers, sentinel errors. Phase 2: AWS SDK clients (Security Hub, Resource Groups Tagging API), paginated finding fetcher with filters, batch tag-based mapping (Path A), naming convention and resource type heuristics (Path B), unit tests. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ce_report Phase 3: Register three new AI tools for the agent to use: - atmos_list_findings: Query Security Hub findings with severity/source/stack filters - atmos_describe_finding: Get detailed info about a specific finding by ID - atmos_compliance_report: Generate compliance posture reports for CIS/PCI/SOC2/HIPAA/NIST Tools are registered as read-only core tools in the AI tool registry, available in both chat and MCP modes. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add findings cache with configurable TTL to reduce AWS API calls - Add report renderer tests (17 tests covering all formats) - Add Docusaurus docs for atmos ai security and compliance commands - Increase test coverage from 71% to 91% - Update PRD to support all 7 Atmos AI providers (not just Bedrock) - Position Bedrock for enterprise customers needing data residency Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Implement FindingAnalyzer that sends findings + component source to configured AI provider for root cause analysis and remediation - Parse AI response into structured Remediation (root cause, deploy command, risk level) - Integrate analyzer into security command (runs unless --no-ai is set) - Add ErrAISecurityAnalysisFailed sentinel error - Tests for analyzer, prompt building, response parsing (91.8% coverage) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Register atmos_analyze_finding tool in AI tool system - Fetches finding by ID, maps to component, runs AI analysis - Returns root cause, remediation, deploy command, and risk level - Accepts optional component_source and stack_config for richer context Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move security and compliance commands from `atmos ai` to `atmos aws` since they are AWS-specific and work without AI. AI analysis is now opt-in via the `--ai` flag (default: off) instead of opt-out via `--no-ai`. Key changes: - Move cmd/ai/security.go → cmd/aws/security.go - Move cmd/ai/compliance.go → cmd/aws/compliance.go - Move pkg/ai/security/ → pkg/aws/security/ - Move schema from ai.security → aws.security (AWSSecuritySettings) - Flip --no-ai flag to --ai (opt-in, default false) - Use error builder pattern for all error handling - Replace errors.Join with fmt.Errorf for order preservation - Add tests for parseOutputFormat, parseSource, parseSeverities, etc. - Add Docusaurus docs under cli/commands/aws/ and cli/configuration/aws/ - Update PRD to v0.3 with aws namespace and --ai flag - Update depguard to allow AWS SDK in pkg/aws/security/ Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… status Move --ai from a command-local flag on `atmos aws security` to a global persistent flag available to all commands. Register via GlobalOptionsBuilder with ATMOS_AI env var support. Update PRD to v0.4 with detailed completion status for all phases. Update global-flags.mdx documentation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Resolve conflicts: - website/docs/cli/global-flags.mdx: keep main's expanded --ai docs + new --skill flag - pkg/flags/global/flags.go: remove duplicate AI field (keep main's version with Skill) - pkg/flags/global_builder.go: remove duplicate registerAIFlags (keep main's with --skill) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…compliance-1 # Conflicts: # errors/errors.go
…ial validation, tests 1. Wire up --controls flag in compliance command: parses comma-separated control IDs, filters report, recalculates score. 2. Fix compliance total_controls: was set to failing count (score always 0%). Now uses DescribeStandardsControls API with graceful fallback. 3. Add AWS credential validation: early STS GetCallerIdentity check before pipeline starts. Clear error with actionable hints. 4. Add 30+ new tests: all testable functions at 100% coverage. Covers: flag parsing, error sentinels, validation, report building, control filtering, shorthand aliases. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
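The --controls parsing described in item 1 can be sketched as below. `parseControls` is a hypothetical stand-in for the actual flag-parsing helper, and the upper-casing of control IDs is an assumption for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// parseControls splits a comma-separated --controls value into trimmed,
// upper-cased control IDs, dropping empty entries. (Hypothetical helper;
// the real flag parser in the PR may normalize differently.)
func parseControls(raw string) []string {
	var ids []string
	for _, part := range strings.Split(raw, ",") {
		id := strings.ToUpper(strings.TrimSpace(part))
		if id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	fmt.Println(parseControls("cis-1.1, CIS-2.3 ,,"))
}
```

The filtered report would then keep only failing controls whose IDs appear in this list before the score is recalculated.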
Add `identity` field to `AWSSecuritySettings` schema and `--identity`/`-i` CLI flag to both `atmos aws security analyze` and `atmos aws compliance report`. When set, credentials are resolved through the Atmos Auth provider chain (SSO → role assumption → isolated credentials), targeting the delegated admin account where Security Hub aggregates all findings. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- AWS client cache: newAWSClientCache, WithAuthContext, empty maps - resolveAuthContext: empty identity (nil), non-empty with no auth config (error) - Schema: identity field read/write, empty default - Coverage: WithAuthContext 100%, resolveAuthContext 73%, pkg/aws/security 90.1% Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add `region` field to AWSSecuritySettings for Security Hub aggregation
region. Precedence: --region CLI flag > config region > default (us-east-1).
Combined with `identity`, users configure once in atmos.yaml:
```yaml
aws:
  security:
    identity: "security-readonly" # Which account
    region: "us-east-2"           # Which region
```
No extra flags needed for subsequent queries.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
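The region precedence described above (--region CLI flag > config region > default) can be sketched as a simple resolver. This is illustrative only; the real command reads these values through its flag and config plumbing:

```go
package main

import "fmt"

const defaultRegion = "us-east-1"

// resolveRegion applies the documented precedence:
// --region CLI flag > aws.security.region config > default (us-east-1).
// (Illustrative sketch, not the actual implementation.)
func resolveRegion(flagRegion, configRegion string) string {
	if flagRegion != "" {
		return flagRegion // explicit CLI flag always wins
	}
	if configRegion != "" {
		return configRegion // atmos.yaml value next
	}
	return defaultRegion // documented fallback
}

func main() {
	// With identity/region set once in atmos.yaml, the config value applies.
	fmt.Println(resolveRegion("", "us-east-2"))
}
```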
Add identity and region fields to the main Configuration section example. Add --identity flag to CLI flags table. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two changes ensuring consistent, reproducible AI analysis across all providers: 1. Formalized Remediation schema: added Steps, References fields, changed StackChanges to string. Every output follows the same structure. 2. Created atmos-aws-security agent skill, embedded as system prompt via go:embed. Instructs AI to use exact section headers that map directly to Remediation struct fields. Parser handles both structured and legacy formats via extractFirstMatch fallback. Refactored for complexity compliance: - parseRemediationResponse uses extractFirstMatch helper (17 → 8 complexity) - parseListItems/extractListItem replace parseNumberedList/parseReferenceList - Named constants for magic numbers New tests: skill prompt embedding, parseListItems (6 cases), structured format parsing, fallback format parsing, JSON round-trip. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
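The section-header parsing this commit describes can be illustrated with a minimal extractor. `extractSection` below is a simplified sketch of the idea, not the actual `extractFirstMatch` implementation:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// extractSection pulls the body under a "### <name>" header, stopping at
// the next "### " header or end of input. (Simplified stand-in for the
// structured-format parsing described above.)
func extractSection(response, name string) string {
	re := regexp.MustCompile(`(?s)### ` + regexp.QuoteMeta(name) + `\s*\n(.*?)(?:\n### |$)`)
	m := re.FindStringSubmatch(response)
	if m == nil {
		return "" // legacy/fallback parsing would take over here
	}
	return strings.TrimSpace(m[1])
}

func main() {
	resp := "### Root Cause\nBucket policy allows public read.\n### Steps\n1. Restrict policy."
	fmt.Println(extractSection(resp, "Root Cause"))
}
```

Because the skill prompt instructs the model to emit exactly these headers, each extracted section maps one-to-one onto a Remediation struct field.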
Shows the complete flow: AWS APIs → Finding → ComponentMapping → Remediation → Report → ReportRenderer (4 formats). Explains behavior with and without --ai flag, and how the skill prompt ensures consistent AI output. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
50 was too low for multi-account orgs: with post-mapping filtering, we need to fetch enough findings to cover all accounts before filtering by stack/component. AI cost is controlled separately (only mapped findings are sent to AI). Users can still override with --max-findings. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
ECR findings had empty stack because mapByECRRepo only extracted the component name. Now uses the account_map to resolve the finding's account ID to an account/stack name. Before: component=monitor-nats, stack="" (395 findings) After: component=monitor-nats, stack="core-artifacts" Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Same fix as the ECR repo mapper: use account_map to set the stack name from the finding's account ID. Fixes the last empty-stack finding (AwsEc2Instance in core-auto). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
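The account_map lookup used by both fixes can be sketched as follows. The function name and map shape are assumptions for illustration; the real mapper also derives the component from the resource ARN or tags:

```go
package main

import "fmt"

// resolveStack looks up a finding's AWS account ID in the account_map to
// recover the Atmos stack name. (Illustrative; field and map names are
// hypothetical.)
func resolveStack(accountMap map[string]string, accountID string) string {
	if stack, ok := accountMap[accountID]; ok {
		return stack
	}
	return "" // unknown account: leave the stack empty rather than guess
}

func main() {
	accountMap := map[string]string{"111111111111": "core-artifacts"}
	// Before the fix: component=monitor-nats, stack="".
	// After: the account ID resolves to the owning stack.
	fmt.Println(resolveStack(accountMap, "111111111111"))
}
```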
Previously the "no findings" message only showed for Markdown format. Now shows for all formats (JSON, YAML, CSV) so the user knows no report was written. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace all hardcoded emoji icons with the ui layer functions that handle theming automatically: - ui.Info() for status messages - ui.Success() for completion messages - ui.Warning() for warnings - ui.Successf()/ui.Infof() for formatted messages Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Items 40-42: max-findings 500, account map for ECR/resource-type, themed UI. Production results: 97.2% mapping accuracy (486/500), 5 mapping methods, stack/component filtering verified across multiple stacks. Replaced Known Limitations with Production Testing Results table showing exact counts per method and verified filter scenarios. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Deduplicate findings before AI analysis: same title+component+stack share remediation (4 identical findings → 1 API call) - Retry with exponential backoff on transient AI errors (529/429/5xx) using pkg/retry: 3 attempts, 2s initial delay, 15s max, 30% jitter - Increase default timeout to 300s when --ai is used (multi-turn tool analysis with retries needs more time) - Fix glamour color profile: add explicit WithColorProfile to renderMarkdown() for consistent colored output - Update PRD to v0.8 with items 43-47 and production AI test results - Update blog post and example README with full AI analysis showcase (findings breakdown, anomaly detection, remediation, risk assessment) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
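The deduplication rule above (same title+component+stack share one remediation) can be sketched with a composite key. The key format is an assumption; the real code may hash or struct-key instead:

```go
package main

import "fmt"

// dedupKey groups findings that share the same remediation: findings with
// identical title, component, and stack are analyzed once and reuse the
// result. (Sketch of the deduplication described above; delimiter choice
// is illustrative.)
func dedupKey(title, component, stack string) string {
	return fmt.Sprintf("%s|%s|%s", title, component, stack)
}

func main() {
	findings := []struct{ Title, Component, Stack string }{
		{"S3 bucket public", "s3-logs", "core-artifacts"},
		{"S3 bucket public", "s3-logs", "core-artifacts"}, // duplicate
		{"S3 bucket public", "s3-logs", "core-auto"},      // different stack
	}
	seen := map[string]bool{}
	calls := 0
	for _, f := range findings {
		k := dedupKey(f.Title, f.Component, f.Stack)
		if !seen[k] {
			seen[k] = true
			calls++ // one AI call per unique key
		}
	}
	fmt.Println(calls)
}
```

Only the unique keys reach the AI provider, which is what keeps analysis cost bounded even with the larger default fetch size.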
Fix 3 broken links pointing to /cli/commands/aws/eks-update-kubeconfig (flat path) → /cli/commands/aws/eks/update-kubeconfig (correct nested path). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Fix framework filter: use PREFIX with full standard ID paths including type prefix (ruleset/ or standards/) since CONTAINS is not supported - Replace deprecated DescribeStandardsControls with ListSecurityControlDefinitions (works in delegated admin mode) - Remove empty Component/Remediation columns from compliance table - Extract repeated framework name strings to constants (linter fix) - Add compliance report examples (with and without --ai) to docs Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… format - Fix max_findings default from 50 to 500 in config docs and example YAML - Add missing source values (macie, access-analyzer, all) to analyze docs - Clarify severity is case-insensitive with default critical,high - Fix PRD tag names from underscores to colons (atmos:stack, atmos:component) - Simplify PRD to v1.0 (problem/solution/implementation/results structure) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Dependency Review
The following issues were found:
Snapshot Warnings: Ensure that dependencies are being submitted on PR branches and consider enabling retry-on-snapshot-warnings. See the documentation for more information and troubleshooting advice.
License Issues: go.mod
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough
Adds an AWS Security & Compliance feature: new …
Changes
Sequence Diagram

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant CLI as "atmos aws security analyze"
    participant Config as "Config / Auth"
    participant SecurityHub as "AWS Security Hub"
    participant Mapper as "Component Mapper"
    participant Analyzer as "AI Analyzer"
    participant Renderer as "Report Renderer"
    User->>CLI: run analyze (filters, --ai?)
    CLI->>Config: load atmos.yaml, resolve identity/region
    CLI->>Config: validate AWS creds (STS)
    CLI->>SecurityHub: fetch findings (paginated)
    SecurityHub-->>CLI: raw findings
    CLI->>Mapper: map findings to components (tags, heuristics)
    Mapper-->>CLI: mapped findings
    alt AI enabled
        CLI->>Analyzer: analyze mapped findings (component source, stack config)
        Analyzer->>Analyzer: build prompt, call LLM (tools if available), retries
        Analyzer-->>CLI: findings enriched with Remediation
    end
    CLI->>Renderer: render report (markdown/json/yaml/csv)
    Renderer-->>CLI: formatted output
    CLI->>User: print or write file
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
⚙️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 2
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
NOTICE (1)
1-1817: ⚠️ Potential issue | 🔴 Critical
NOTICE is still out of date (merge blocker).
Dependency Review is failing on this file. Please regenerate NOTICE from source (`./scripts/generate-notice.sh`) and commit the generated output rather than hand-editing entries.
Based on learnings: "In the cloudposse/atmos repository, the NOTICE file is programmatically generated and should not be manually edited."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@NOTICE` around lines 1 - 1817, The NOTICE file is out of date and must not be hand-edited; regenerate the file using the repository's automated generator instead. Run the provided script ./scripts/generate-notice.sh to produce a fresh NOTICE, replace the current NOTICE with the generated output (do not manually edit entries), verify dependency review passes, and commit the generated NOTICE to the branch so the PR no longer fails the Dependency Review check.
🟠 Major comments (24)
website/src/data/roadmap.js-588-610 (1)
588-610: ⚠️ Potential issue | 🟠 Major
Add missing roadmap traceability fields for shipped milestones.
At Lines 588–598, several new shipped milestones are missing `pr` and/or `changelog`, and at Line 609 the initiative-level `prs` list is empty. Please wire these to PR #2282 (or the exact originating PRs) so roadmap entries stay auditable.
♻️ Proposed patch
- { label: 'Schema additions for `aws.security` config', status: 'shipped', quarter: 'q1-2026', prd: 'atmos-aws-security-compliance', description: 'Added AWSSettings, AWSSecuritySettings, AWSSecuritySources, and AWSSecurityTagMapping types to pkg/schema/.', benefits: 'Configure security scanning behavior directly in atmos.yaml.' }, + { label: 'Schema additions for `aws.security` config', status: 'shipped', quarter: 'q1-2026', pr: 2282, changelog: 'aws-security-compliance', prd: 'atmos-aws-security-compliance', description: 'Added AWSSettings, AWSSecuritySettings, AWSSecuritySources, and AWSSecurityTagMapping types to pkg/schema/.', benefits: 'Configure security scanning behavior directly in atmos.yaml.' }, - { label: 'AWS security client package', status: 'shipped', quarter: 'q1-2026', prd: 'atmos-aws-security-compliance', description: 'Created pkg/aws/security/ with clients for Security Hub, Config, Inspector, and GuardDuty. Interface-driven design with mock support.', benefits: 'Unified access to all AWS security services from a single package.' }, + { label: 'AWS security client package', status: 'shipped', quarter: 'q1-2026', pr: 2282, changelog: 'aws-security-compliance', prd: 'atmos-aws-security-compliance', description: 'Created pkg/aws/security/ with clients for Security Hub, Config, Inspector, and GuardDuty. Interface-driven design with mock support.', benefits: 'Unified access to all AWS security services from a single package.' }, ... - prs: [], + prs: [ + { number: 2282, title: 'feat: AWS Security & Compliance β finding-to-code mapping with AI remediation' }, + ],As per coding guidelines: βPRs labeled
`minor` or `major` MUST update `website/src/data/roadmap.js`. Add new milestones with `status: 'shipped'`, add a `changelog: 'your-blog-slug'` link, add a `pr: <pr-number>` link, and update the initiative `progress` percentage."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/data/roadmap.js` around lines 588 - 610, Several shipped milestones in website/src/data/roadmap.js (e.g., entries with labels 'Schema additions for `aws.security` config', 'AWS security client package', 'Tag-based finding-to-code mapping', 'Heuristic finding-to-code mapping', '`atmos aws security analyze` CLI command', '`atmos aws compliance report` CLI command', 'Multi-format output (Markdown, JSON, YAML, CSV)', 'AI-powered analysis via `--ai` flag', 'AI tools for security (4 tools)', '`--ai` global persistent flag', 'Finding cache infrastructure') are missing pr and/or changelog fields and the parent initiative's prs array is empty; for each shipped milestone add pr: 2282 and changelog: 'aws-security-compliance' (or the exact originating PR/changelog if different) and update the initiative's prs array to include 2282 and adjust the initiative progress percentage accordingly (increase to reflect shipped items) so roadmap entries are auditable and compliant with the PR/update rules.docs/prd/aws-bedrock-ai-provider.md-12-27 (1)
12-27: ⚠️ Potential issue | 🟠 Major
Qualify claims about data residency, compliance scope, and provider data handling.
AWS official docs show these statements need tightening:
Data residency: Bedrock keeps data in your selected Region by default, but supports cross-Region inference (Geographic and Global routing) that can route requests elsewhere. Mention this option.
Compliance: Bedrock is in scope for SOC 2, ISO, HIPAA (eligible), PCI DSS (via AWS scope), and FedRAMP High (GovCloud US-West only). Add that compliance responsibility is shared: customers must handle data sensitivity and proper configurations. Use "in scope" or "eligible" rather than "compliant."
Third-party models: Each provider has its own EULA governing data use and retention (e.g., Anthropic doesn't train on customer content). Note this responsibility explicitly.
Cite AWS Bedrock data protection and compliance validation pages in the doc.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/prd/aws-bedrock-ai-provider.md` around lines 12 - 27, Update the "Why Bedrock for Enterprise Security Workloads" section and the table entries ("Data residency", "Compliance", "No API keys") to tone down absolute claims: state that Bedrock keeps data in your selected Region by default but supports cross-Region/global routing for inference (mention Geographic/Global routing), change "Compliance" to note which frameworks Bedrock is in-scope or eligible for (SOC 2, ISO, HIPAA eligible, PCI DSS via AWS scope, FedRAMP High limited to GovCloud US-West), and add a sentence about shared responsibility requiring customers to configure and handle sensitive data appropriately; also add a note under "Third-party models" that each model provider/EULA may govern data use/retention (e.g., Anthropic) and include citations to the AWS Bedrock data protection and compliance validation pages (https://docs.aws.amazon.com/bedrock/latest/userguide/data-protection.html and https://docs.aws.amazon.com/bedrock/latest/userguide/compliance-validation.html) and ensure the `atmos aws security analyze --ai` mention remains accurate with these caveats.pkg/aws/security/skill_prompt.md-2-3 (1)
2-3: ⚠️ Potential issue | 🟠 Major
Prompt/parser contract drift for required
`Code Changes` section.
### Code Changes, but the parser path shown inpkg/aws/security/analyzer.goappears to extract other sections only. That risks silently dropping structured remediation detail.Suggested fix direction
# In pkg/aws/security/analyzer.go (parseRemediationResponse): # Add extraction for "### Code Changes" and map it into the Remediation schema field # used by JSON/YAML/CSV rendering.Also applies to: 17-20
π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/skill_prompt.md` around lines 2 - 3, The prompt fileβs required header "### Code Changes" is out of sync with the remediation parser used by the analyzer (the parsing logic in analyzer.go), so either add/normalize the exact section header "### Code Changes" in the skill prompt or extend the analyzerβs remediation parsing logic to explicitly extract the "### Code Changes" section; locate the parser function in analyzer.go that extracts remediation sections (the remediation/section-parsing routine) and (a) update it to recognize and return the "### Code Changes" block or (b) modify the prompt content to include the exact headers the parser expects, then update/cover the change with a unit test validating the parser returns the "Code Changes" content.cmd/aws/credentials.go-50-55 (1)
50-55:β οΈ Potential issue | π MajorValidate resolved AWS auth env fields before returning
AWSAuthContext.If
AWS_PROFILE/file paths are empty, downstream credential checks fail with less actionable errors. Fail fast here with a clear identity-specific message.π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/aws/credentials.go` around lines 50 - 55, The AWSAuthContext is being constructed from envVars without validating required identity fields; update the code that builds AWSAuthContext (the block that sets CredentialsFile, ConfigFile, Profile, Region) to validate the resolved values from envVars (at minimum ensure AWS_PROFILE is non-empty and that at least one of CredentialsFile or ConfigFile is provided) and fail fast by returning an explicit error (or non-nil result signaling failure) with a clear, identity-specific message if validation fails so downstream credential checks don't see ambiguous errors.cmd/aws/credentials.go-18-31 (1)
18-31:β οΈ Potential issue | π MajorGuard
atmosConfigbefore dereference to avoid panic.
resolveAuthContextassumesatmosConfigis always non-nil. A nil input will panic at Line 30.π‘ Suggested fix
func resolveAuthContext(atmosConfig *schema.AtmosConfiguration, identityName string) (*schema.AWSAuthContext, error) { + if atmosConfig == nil { + return nil, errUtils.Build(errUtils.ErrAWSCredentialsNotValid). + WithExplanation("Missing Atmos configuration while resolving AWS auth context"). + WithHint("Ensure Atmos config is loaded before running AWS commands"). + WithExitCode(1). + Err() + } + if identityName == "" { return nil, nil }π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/aws/credentials.go` around lines 18 - 31, The function resolveAuthContext dereferences atmosConfig (e.g., atmosConfig.Auth and atmosConfig.CliConfigPath) without nil-checking which can panic; add a guard at the start of resolveAuthContext to validate atmosConfig is non-nil and return a descriptive error (or nil, nil per existing semantics) if it's nil, then proceed to create credentials.NewCredentialStore(), validation.NewValidator(), and auth.NewAuthManager(...) only when atmosConfig is present so dereferences are safe.docs/prd/atmos-aws-security-compliance.md-186-201 (1)
186-201:β οΈ Potential issue | π MajorPRD config sample conflicts with implemented default tag keys.
Line 196-197 uses
atmos_stack/atmos_component, while implemented defaults areatmos:stack/atmos:component. This can break copy-paste setup for exact tag mapping.π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/prd/atmos-aws-security-compliance.md` around lines 186 - 201, The docs sample's tag_mapping values conflict with the implemented default keys; update the example under tag_mapping (the stack_tag and component_tag entries) to use the actual default tag keys "atmos:stack" and "atmos:component" so copy-paste configuration matches the implementation (or alternatively change the implementation defaults to match the sample), ensuring the entries for stack_tag and component_tag align with the implemented defaults.pkg/aws/security/cache.go-75-75 (1)
75-75:β οΈ Potential issue | π MajorCache exposes mutable slices; callers can accidentally corrupt cached results.
Line 75 returns the internal slice directly, and Lines 87-90 store the caller slice directly. Any downstream mutation changes cached state.
π‘ Suggested fix
func (c *findingsCache) GetFindings(opts *QueryOptions) ([]Finding, bool) { @@ - return entry.value, true + return append([]Finding(nil), entry.value...), true } @@ func (c *findingsCache) SetFindings(opts *QueryOptions, findings []Finding) { @@ c.findings[key] = &cacheEntry[[]Finding]{ - value: findings, + value: append([]Finding(nil), findings...), expiresAt: time.Now().Add(c.ttl), } }Also applies to: 87-90
π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/cache.go` at line 75, The cache currently exposes and stores internal slices directly (see return of entry.value and the setter storing the caller slice), allowing callers to mutate cached data; fix this by making defensive copies: when returning slices from the getter (the code that returns entry.value) return a new slice copy instead of the internal slice, and when storing slices in the cache (the code at the assignment around lines 87-90 that sets entry.value or similar) save a copy of the input slice rather than the original; use idiomatic slice-copying (allocate a new slice and copy/append the elements) around the functions that manipulate entry.value to prevent external mutation of cached state.pkg/aws/security/cache.go-136-151 (1)
136-151:β οΈ Potential issue | π MajorCache key is missing query dimensions and can return incorrect findings.
Line 146 builds a key without
Framework(and other query discriminators). Two different queries can collide and reuse stale/incorrect data.π‘ Suggested fix
func buildFindingsKey(opts *QueryOptions) string { + if opts == nil { + return "findings:<nil>" + } + // Sort severities for consistent key generation. sevs := make([]string, len(opts.Severity)) for i, s := range opts.Severity { sevs[i] = string(s) } sort.Strings(sevs) - return fmt.Sprintf("findings:%s:%s:%s:%d", + return fmt.Sprintf("findings:%s:%s:%s:%s:%s:%s:%d", opts.Region, strings.Join(sevs, ","), string(opts.Source), + opts.Framework, + opts.Stack, + opts.Component, opts.MaxFindings, ) }π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/cache.go` around lines 136 - 151, The cache key generated by buildFindingsKey currently omits important query discriminators (e.g., Framework) causing collisions; update buildFindingsKey to include opts.Framework and any other QueryOptions fields that affect result sets (for example ResourceType, Status, ControlIDs or similar) into the fmt.Sprintf key, making sure to normalize/sort any slice fields (like severities or control ID lists) before joining so key generation is deterministic and unique per distinct query.website/docs/cli/commands/aws/compliance/report.mdx-38-58 (1)
38-58:β οΈ Potential issue | π MajorDocument the
--aiflag forcompliance reportto match shipped behavior.The command page currently omits
--aiin both Flags and Examples, which makes the new remediation mode undiscoverable.As per coding guidelines: Keep CLI documentation and website documentation in sync and document new features on the website with examples and use cases.
Also applies to: 60-137
π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/docs/cli/commands/aws/compliance/report.mdx` around lines 38 - 58, Summary: The docs for the CLI command "compliance report" miss the new --ai flag and example usage. Update the "Flags" section to add a `--ai` entry describing the remediation/AI mode (what it does and default behavior) and any accepted values, and update the "Examples" section for `compliance report` to include one or two concrete examples showing `--ai` usage (with and without other flags like `--stack` or `--framework`), ensuring the text matches the shipped CLI behavior and any option aliases.pkg/aws/security/report_renderer.go-357-387 (1)
357-387:β οΈ Potential issue | π MajorGrouped markdown output drops remediation blocks.
Lines 357-387 handle the grouped path, but unlike
renderFindingMarkdown, this code never callsrenderRemediationMarkdown. The caller incmd/aws/security.goenables grouping by default, so repeated titles lose their AI remediation in the normal CLI flow.π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/report_renderer.go` around lines 357 - 387, renderGroupedFindingMarkdown currently omits remediation blocks, so grouped findings lose AI remediation; add a call to renderRemediationMarkdown for the group's representative finding (f := &findings[0]) inside renderGroupedFindingMarkdown (similar to how renderFindingMarkdown includes remediation). Place the call after renderGroupedTags(sb, findings) (or just before the final separator) and pass the representative finding (f) and the strings.Builder to renderRemediationMarkdown to ensure the grouped output includes the remediation content.cmd/aws/compliance.go-123-185 (1)
123-185:β οΈ Potential issue | π MajorDonβt stream multiple framework reports into one JSON/YAML/CSV document.
Lines 155-184 call
RenderComplianceReportonce per framework on the same writer. That is fine for markdown, but JSON/YAML become concatenated documents and CSV repeats headers as soon as more than one framework is selected from config.π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/aws/compliance.go` around lines 123 - 185, The code currently reuses the same writer and calls renderer.RenderComplianceReport for each framework, which concatenates JSON/YAML/CSV outputs; change behavior so non-markdown formats are emitted per-framework to avoid concatenation: detect when outputFormat != security.FormatMarkdown and len(frameworks) > 1, then if fileOutput == "" return a user-facing error (via ui.Errorf/return) asking to select a single framework or supply a file pattern; otherwise, for each fw build a per-framework filename by inserting the framework name before the extension (use filepath.Ext and strings.TrimSuffix on fileOutput), create each file with os.Create, call renderer.RenderComplianceReport(file, report) and close it (donβt reuse the original single output writer); keep markdown/stdout behavior unchanged and continue using renderer.RenderComplianceReport for other cases.cmd/aws/compliance.go-241-260 (1)
241-260:β οΈ Potential issue | π MajorFiltered compliance scores are misleading.
Lines 250-257 keep
report.PassingControlsat the full-framework value but shrinkTotalControlstolen(filtered)+report.PassingControls. Filtering to one failing control can therefore report a near-perfect or 100% score for a control set that is actually failing.π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/aws/compliance.go` around lines 241 - 260, The filtered report currently reuses report.PassingControls causing misleading high scores; in filterComplianceReport recompute counts from the filtered dataset instead of keeping the original passing count: set filteredReport.FailingControls = len(filtered), set filteredReport.PassingControls = 0 (or compute an intersection if you have a list of passing control IDs) and set filteredReport.TotalControls = filteredReport.FailingControls + filteredReport.PassingControls, then recalc filteredReport.ScorePercent from those new values (use percentMultiplier as before). Update the function references FailingDetails, FailingControls, PassingControls, TotalControls, and ScorePercent accordingly.cmd/aws/security.go-72-75 (1)
72-75: ⚠️ Potential issue | 🟠 Major — Honor global config-selection flags before loading Atmos config.
Lines 73-75 pass an empty `schema.ConfigAndStacksInfo` into `cfg.InitCliConfig`, so this command silently ignores `--base-path`, `--config`, `--config-path`, and `--profile`. Parse the global flags into `configAndStacksInfo` first so the security analysis runs in the same config context as the rest of the CLI.
Based on learnings: In Atmos, `cfg.InitCliConfig` must receive global flag values via `flags.ParseGlobalFlags(cmd, v)`; passing an empty `ConfigAndStacksInfo` silently ignores config selection flags.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/aws/security.go` around lines 72 - 75, the command currently passes an empty schema.ConfigAndStacksInfo into cfg.InitCliConfig so global config-selection flags are ignored; before calling cfg.InitCliConfig, call flags.ParseGlobalFlags(cmd, v) to populate a ConfigAndStacksInfo instance (schema.ConfigAndStacksInfo) with the global flags and then pass that populated configAndStacksInfo into cfg.InitCliConfig so --base-path/--config/--config-path/--profile are honored in the security analysis.
cmd/aws/compliance.go-62-65 (1)
62-65: ⚠️ Potential issue | 🟠 Major — Honor global config-selection flags before loading Atmos config.
Lines 63-64 pass an empty `schema.ConfigAndStacksInfo` into `cfg.InitCliConfig`, so this command silently ignores `--base-path`, `--config`, `--config-path`, and `--profile`. Parse the global flags into `configAndStacksInfo` first so the compliance report uses the same config context as the rest of the CLI.
Based on learnings: In Atmos, `cfg.InitCliConfig` must receive global flag values via `flags.ParseGlobalFlags(cmd, v)`; passing an empty `ConfigAndStacksInfo` silently ignores config selection flags.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/aws/compliance.go` around lines 62 - 65, the command currently constructs an empty schema.ConfigAndStacksInfo and calls cfg.InitCliConfig, which ignores global config-selection flags; call flags.ParseGlobalFlags(cmd, v) (or otherwise populate a schema.ConfigAndStacksInfo from the parsed globals) before invoking cfg.InitCliConfig so that the ConfigAndStacksInfo passed into InitCliConfig contains values from --base-path, --config, --config-path, and --profile; update the code around configAndStacksInfo creation to use flags.ParseGlobalFlags(cmd, v) to fill it and then pass that populated ConfigAndStacksInfo into cfg.InitCliConfig.
pkg/ai/tools/atmos/list_findings.go-91-143 (1)
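Both flag-handling comments above describe the same pattern: populate the config struct from the parsed global flags before initialization. A minimal stdlib illustration follows; `Config` and `initCliConfig` are stand-ins for `schema.ConfigAndStacksInfo` and `cfg.InitCliConfig`, not the real Atmos API.

```go
package main

import (
	"flag"
	"fmt"
)

// Config stands in for schema.ConfigAndStacksInfo; names are illustrative only.
type Config struct {
	BasePath string
	Profile  string
}

// initCliConfig stands in for cfg.InitCliConfig: it only honors what the
// caller already put into the struct, which is why passing an empty struct
// silently loses the flag values.
func initCliConfig(cfg Config) Config { return cfg }

func main() {
	fs := flag.NewFlagSet("atmos", flag.ContinueOnError)
	basePath := fs.String("base-path", "", "base path")
	profile := fs.String("profile", "", "AWS profile")
	_ = fs.Parse([]string{"--base-path", "/repo", "--profile", "admin"})

	// Wrong: empty struct, so the parsed flags never reach the config.
	ignored := initCliConfig(Config{})

	// Right: copy the parsed globals into the struct first, then initialize.
	honored := initCliConfig(Config{BasePath: *basePath, Profile: *profile})

	fmt.Println(ignored.BasePath == "", honored.BasePath, honored.Profile)
}
```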
91-143: ⚠️ Potential issue | 🟠 Major — Apply the `stack` filter after mapping.
Lines 127-135 populate `opts.Stack`, but Lines 107-113 render the full mapped slice with no post-mapping stack check. Security Hub cannot filter by Atmos stack name, so this tool currently returns cross-stack findings even when `stack` is set.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/tools/atmos/list_findings.go` around lines 91 - 143, the code sets opts.Stack in parseFindingsQueryParams but applies no post-mapping stack filter, so mapped findings still include cross-stack results; after calling mapper.MapFindings (the MapFindings call that returns findings), filter the returned findings slice by opts.Stack (if non-empty) using the component/stack name present on each mapped finding, then use the filtered slice for formatFindingsOutput, JSON marshalling, and setting the "total" and "findings" fields in the Result; keep fetcher.FetchFindings and mapper.MapFindings unchanged but ensure subsequent logic uses the filtered findings variable.
cmd/aws/security.go-59-71 (1)
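Since Security Hub cannot filter server-side by Atmos stack name, the post-mapping filter suggested above amounts to a client-side slice filter. A sketch with illustrative types (the real mapped-finding struct will differ):

```go
package main

import "fmt"

// Mapped stands in for a finding after component mapping; field names are illustrative.
type Mapped struct {
	ID    string
	Stack string
}

// filterByStack applies the stack filter client-side after mapping.
// An empty stack means "no filter", matching the CLI's optional flag semantics.
func filterByStack(findings []Mapped, stack string) []Mapped {
	if stack == "" {
		return findings
	}
	var out []Mapped
	for _, f := range findings {
		if f.Stack == stack {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	all := []Mapped{{"a", "prod"}, {"b", "dev"}, {"c", "prod"}}
	fmt.Println(len(filterByStack(all, "prod"))) // 2
}
```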
59-71: ⚠️ Potential issue | 🟠 Major — `--framework` is registered but never reaches the fetcher.
Lines 290-292 add the flag, but Lines 59-71 never read it and Lines 147-153 never set `security.QueryOptions.Framework`. `atmos aws security analyze --framework ...` is therefore a no-op today.
Also applies to: 147-153, 285-300
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/aws/security.go` around lines 59 - 71, the --framework CLI flag is registered but never read or passed into the security fetcher; update the Viper reads to include framework (e.g., add framework := v.GetString("framework") alongside stack/component/severity in the top block) and then set that value on the options passed to the fetcher by assigning it to security.QueryOptions.Framework when building the QueryOptions (the same place where maxFindings/useAI/region/identity/noGroup are set); ensure the framework value flows from the flag registration (lines ~290-292) through the Viper read to the QueryOptions.Framework field so atmos aws security analyze --framework works.
pkg/aws/security/report_renderer.go-149-214 (1)
149-214: ⚠️ Potential issue | 🟠 Major — Add error checking after CSV flush in both renderers.
Both `RenderSecurityReport` and `RenderComplianceReport` call `defer cw.Flush()` but never check `cw.Error()` before returning. Buffered CSV writes can fail silently on broken pipes or full disks; callers will see success but get truncated output. Check the writer's error state after flush to catch these cases.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/report_renderer.go` around lines 149 - 214, the CSV writers currently use defer cw.Flush() but never check cw.Error(), so CSV write failures can be missed; update both RenderSecurityReport and RenderComplianceReport to flush and check errors before returning: replace the deferred-only flush with an explicit cw.Flush() call just before the function returns (or keep defer but also call cw.Flush() near the end), then call if err := cw.Error(); err != nil { return err } (referencing the local csv.Writer variable cw and the functions RenderSecurityReport and RenderComplianceReport) so any buffered write errors are propagated to the caller.
pkg/ai/tools/atmos/compliance_report.go-67-85 (1)
67-85: ⚠️ Potential issue | 🟠 Major — Keep the tool on the caller's config and AWS identity.
Lines 68-85 ignore `t.atmosConfig`, rebuild config from an empty `schema.ConfigAndStacksInfo`, and then create the fetcher with `nil` auth context. In delegated-admin or non-default profile/base-path sessions, this tool can read compliance data from the wrong account/region instead of the active Atmos session.
Based on learnings: `cfg.InitCliConfig` reads config selection from `schema.ConfigAndStacksInfo`; passing an empty struct causes `--base-path`, `--config`, `--config-path`, and `--profile` to be ignored.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/tools/atmos/compliance_report.go` around lines 67 - 85, the code re-initializes config with cfg.InitCliConfig and passes nil auth, which ignores the caller's selected config and AWS identity; instead use the tool's existing t.atmosConfig for checks (replace atmosConfig with t.atmosConfig) and pass the tool's auth/context (e.g., t.atmosAuth or the struct field that holds the current AWS auth) into security.NewFindingFetcher (replace the nil), and add a defensive nil-check for t.atmosConfig (and t.atmosAuth) before use and return an appropriate error if missing so the fetcher runs in the caller's active Atmos session and profile.
pkg/aws/security/component_mapper.go-123-128 (1)
123-128: ⚠️ Potential issue | 🟠 Major — Carry fetched tags forward into the heuristic path.
Line 124 loads tags from the Tagging API, but Line 128 only uses them for the exact Atmos-tag match. If that misses, `mapByContextTags` still reads the original `finding.ResourceTags`, so the `context-tags` strategy never sees fetched `Name`/`Namespace`/`Tenant`/`Environment`/`Stage` tags.
Also applies to: 272-276
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/component_mapper.go` around lines 123 - 128, the code fetches tags via m.getResourceTags(...) -> tags and then calls m.matchTags(tags, "tag-api") but leaves mapByContextTags operating on the original finding.ResourceTags, so fetched tags (Name/Namespace/Tenant/Environment/Stage) are never considered in the "context-tags" heuristic; update the flow so that after fetching/merging tags you pass the merged tags into mapByContextTags (or make mapByContextTags accept/use a mergedTags argument) instead of relying on finding.ResourceTags, and apply the same change to the second occurrence around mapByContextTags at the other block (the 272-276 area) so both heuristic paths see fetched tags.
pkg/aws/security/analyzer.go-347-364 (1)
347-364: ⚠️ Potential issue | 🟠 Major — Populate `Remediation.CodeChanges` from the structured response.
This parser never reads a `### Code Changes` section, so any file/line/before/after edits returned by the model are discarded. Downstream renderers will always see an empty `CodeChanges` slice even when the AI produced concrete code patches.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/analyzer.go` around lines 347 - 364, the parser in parseRemediationResponse never reads a "### Code Changes" section so Remediation.CodeChanges stays empty; add logic to extract the "### Code Changes" block (use extractFirstMatch(response, "### Code Changes")) and populate remediation.CodeChanges by parsing that block (e.g., call an existing parseCodeChanges / parsePatch utility or implement a small parser that converts file/line/before/after edits into the Remediation.CodeChanges slice) and assign the result to remediation.CodeChanges inside parseRemediationResponse.
pkg/ai/tools/atmos/analyze_finding.go-145-157 (1)
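One way to pull a `### Code Changes` block out of a markdown-style AI response is a small section extractor. The helper below is a hypothetical stand-in for the utilities named in the prompt (`extractFirstMatch`, `parseCodeChanges`), built only on the stdlib `regexp` package:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// extractSection returns the body of one "### Heading" block from a
// markdown response, stopping at the next "### " heading or end of input.
func extractSection(response, heading string) string {
	re := regexp.MustCompile(`(?s)### ` + regexp.QuoteMeta(heading) + `\s*(.*?)(?:\n### |$)`)
	m := re.FindStringSubmatch(response)
	if len(m) < 2 {
		return "" // section not present in the response
	}
	return strings.TrimSpace(m[1])
}

func main() {
	resp := "### Root Cause\nopen bucket policy\n### Code Changes\nfile: main.tf\n### Risk\nlow"
	fmt.Println(extractSection(resp, "Code Changes")) // file: main.tf
}
```

A real implementation would then parse the extracted block's file/line/before/after lines into the `CodeChanges` slice.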
145-157: ⚠️ Potential issue | 🟠 Major — Exact finding lookup shouldn't stop at the first 500 results.
This helper does an unfiltered fetch capped by `MaxFindingsForLookup` and then scans client-side. In accounts with more than 500 findings, valid IDs outside that window will be reported as "not found" even though they exist.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/tools/atmos/analyze_finding.go` around lines 145 - 157, the current lookup uses security.NewFindingFetcher + fetcher.FetchFindings with a single QueryOptions(MaxFindingsForLookup) then scans the returned slice and will miss IDs beyond the cap; change the logic to perform a targeted or paginated fetch: either add an ID filter to QueryOptions (if supported) or loop calling fetcher.FetchFindings with pagination/offset/next-token until you find findings[i].ID == findingID or no more pages, instead of relying on MaxFindingsForLookup; update the code around security.NewFindingFetcher, security.QueryOptions, and fetcher.FetchFindings to implement the paging/ID-filter approach so exact lookups are exhaustive.
pkg/aws/security/finding_fetcher.go-215-225 (1)
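The exhaustive-lookup loop suggested above follows the usual next-token pagination shape. In this sketch, `page` simulates a paginated `FetchFindings` call and is purely illustrative:

```go
package main

import "fmt"

// page simulates one paginated fetch; an empty next token means no more pages.
func page(token string) (ids []string, next string) {
	switch token {
	case "":
		return []string{"f-1", "f-2"}, "p2"
	case "p2":
		return []string{"f-3", "f-4"}, ""
	}
	return nil, ""
}

// findByID keeps paging until the ID is found or pages run out, instead of
// scanning only the first capped batch.
func findByID(id string) (string, bool) {
	token := ""
	for {
		ids, next := page(token)
		for _, f := range ids {
			if f == id {
				return f, true
			}
		}
		if next == "" {
			return "", false
		}
		token = next
	}
}

func main() {
	f, ok := findByID("f-3") // lives on the second page
	fmt.Println(f, ok)
}
```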
215-225: ⚠️ Potential issue | 🟠 Major — Let auth-managed regions win when the caller didn't set one.
Line 224 hard-codes `us-east-1`, so `identity.LoadConfigWithAuth` never receives an empty region and cannot apply `AWSAuthContext.Region`. With delegated-admin identities, this can fetch Security Hub data from the wrong region.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/finding_fetcher.go` around lines 215 - 225, the resolveRegion function currently returns a hard-coded "us-east-1" when no override or config region is set, preventing identity.LoadConfigWithAuth from applying AWSAuthContext.Region; modify awsFindingFetcher.resolveRegion so that if override is empty and f.atmosConfig.AWS.Security.Region is empty it returns an empty string (allowing identity.LoadConfigWithAuth to pick up auth-managed region) while still returning override when provided and f.atmosConfig.AWS.Security.Region when configured.
pkg/ai/tools/atmos/analyze_finding.go-107-107 (1)
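The suggested `resolveRegion` precedence is easy to express: override wins, then the configured region, then empty so the auth layer can apply `AWSAuthContext.Region`. A self-contained sketch with an assumed flat signature:

```go
package main

import "fmt"

// resolveRegion mirrors the fix the review describes: return the override,
// else the configured region, else empty so the auth-managed region wins.
func resolveRegion(override, configured string) string {
	if override != "" {
		return override
	}
	if configured != "" {
		return configured
	}
	return "" // no hard-coded us-east-1 fallback
}

func main() {
	fmt.Println(resolveRegion("", "") == "")          // defer to auth context
	fmt.Println(resolveRegion("", "eu-west-1"))       // configured region
	fmt.Println(resolveRegion("us-east-2", "eu-west-1")) // explicit override
}
```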
107-107: ⚠️ Potential issue | 🟠 Major — Thread the resolved AWS auth context into both fetch and mapping.
Both constructors get `nil` here, so `atmos_analyze_finding` ignores any configured `aws.security.identity` and falls back to ambient AWS credentials. In delegated-admin setups, the tool can fetch and map findings against the wrong account even when the CLI command path works.
Also applies to: 145-145
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ai/tools/atmos/analyze_finding.go` at line 107, the code currently passes nil for the AWS auth context into security.NewComponentMapper(atmosConfig, nil) (and the findings fetch constructor at the other site), causing ambient credentials to be used; locate where the resolved AWS auth/context value is obtained in this function (the variable holding the resolved aws security identity) and pass that variable into security.NewComponentMapper(atmosConfig, authContext) and into the findings fetcher constructor instead of nil so both fetch and mapping use the resolved AWS identity.
pkg/aws/security/component_mapper_test.go-692-744 (1)
692-744: ⚠️ Potential issue | 🟠 Major — Lock down the embedded-tag vs API precedence with a real mock.
These two cases do not actually prove the contract they describe. The embedded-tags test never verifies "no API call," and the fallback test never seeds `mapper.clients.tagging`/`taggingFn`, so a cache miss can still walk the real `awsClientCache` client path and pass on the weak `mapping.Method != "finding-tag"` assertion. Inject `mockTaggingClient` here as a spy/stub and assert zero calls for embedded tags plus an actual `tag-api` mapping when `ResourceTags` are absent.
Suggested tightening.
```diff
 type mockTaggingClient struct {
 	resources []tagtypes.ResourceTagMapping
 	err       error
+	calls     int
 }

 func (m *mockTaggingClient) GetResources(_ context.Context, _ *resourcegroupstaggingapi.GetResourcesInput, _ ...func(*resourcegroupstaggingapi.Options)) (*resourcegroupstaggingapi.GetResourcesOutput, error) {
+	m.calls++
 	if m.err != nil {
 		return nil, m.err
 	}
 	return &resourcegroupstaggingapi.GetResourcesOutput{
 		ResourceTagMappingList: m.resources,
 	}, nil
 }
@@
-	mapper := NewComponentMapper(atmosConfig, nil)
+	spy := &mockTaggingClient{}
+	mapper := NewComponentMapper(atmosConfig, nil).(*dualPathMapper)
+	mapper.clients.tagging["us-east-1"] = spy
@@
 	assert.Equal(t, ConfidenceExact, mapping.Confidence)
 	assert.Equal(t, "finding-tag", mapping.Method)
+	assert.Zero(t, spy.calls)
@@
 	finding := &Finding{
 		ID:           "no-tag-001",
 		ResourceARN:  "arn:aws:s3:::my-bucket",
+		Region:       "us-east-1",
 		ResourceTags: nil,
 	}
-	mapper := NewComponentMapper(atmosConfig, nil)
+	spy := &mockTaggingClient{
+		resources: []tagtypes.ResourceTagMapping{{
+			ResourceARN: aws.String("arn:aws:s3:::my-bucket"),
+			Tags: []tagtypes.Tag{
+				{Key: aws.String("atmos:stack"), Value: aws.String("plat-use2-prod")},
+				{Key: aws.String("atmos:component"), Value: aws.String("s3-bucket")},
+			},
+		}},
+	}
+	mapper := NewComponentMapper(atmosConfig, nil).(*dualPathMapper)
+	mapper.clients.tagging["us-east-1"] = spy

 	mapping, err := mapper.MapFinding(context.Background(), finding)
 	require.NoError(t, err)
 	require.NotNil(t, mapping)
-	assert.NotEqual(t, "finding-tag", mapping.Method)
+	assert.Equal(t, "tag-api", mapping.Method)
+	assert.Equal(t, "plat-use2-prod", mapping.Stack)
+	assert.Equal(t, "s3-bucket", mapping.Component)
+	assert.Equal(t, 1, spy.calls)
```

Based on learnings, "Test behavior, not implementation. Never test stub functions. Avoid tautological tests. Make code testable via dependency injection. No coverage theater."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/component_mapper_test.go` around lines 692 - 744, Update the two tests to inject a mockTaggingClient into the ComponentMapper via NewComponentMapper so you can assert API call behavior: in TestMapByTags_FindingEmbeddedTags pass a mockTaggingClient spy on mapper.clients.tagging/taggingFn and assert it was never called and mapping.Method remains "finding-tag"; in TestMapByTags_FindingTagsFallbackToAPI seed mapper.clients.tagging/taggingFn with a mock that returns tags for the ARN (simulate tag-api response), assert the mock was invoked (call count > 0) and that mapping.Method equals the expected "tag-api" value; use NewComponentMapper, MapFinding, mapper.clients.tagging/taggingFn and mockTaggingClient identifiers to locate the code to modify.
🟡 Minor comments (10)
website/docs/cli/commands/aws/eks/eks.mdx-18-26 (1)
18-26: ⚠️ Potential issue | 🟡 Minor — Add required `Arguments`, `Flags`, and `Examples` sections to this command page.
This page currently stops at usage/subcommands. Please add explicit sections (even if "None") to keep CLI docs consistent.
📝 Suggested doc patch

````diff
 ## Subcommands

 <DocCardList/>
+
+## Arguments
+
+<dl>
+  <dt>None</dt>
+  <dd>This command group does not accept positional arguments directly.</dd>
+</dl>
+
+## Flags
+
+<dl>
+  <dt>None</dt>
+  <dd>Use flags on concrete subcommands.</dd>
+</dl>
+
+## Examples
+
+```shell
+atmos aws eks <subcommand> [flags]
+```
````

As per coding guidelines: "All CLI commands MUST have Docusaurus documentation ... Include ... Arguments/Flags ... and Examples section."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/docs/cli/commands/aws/eks/eks.mdx` around lines 18 - 26, add explicit "Arguments", "Flags", and "Examples" sections to the eks command doc page (in the same doc that contains the Usage and Subcommands headings) even if empty: create an "Arguments" section listing any positional args or the word "None", a "Flags" section enumerating global/command flags or "None", and an "Examples" section showing at least one example CLI invocation (e.g., a basic eks subcommand call) to comply with CLI docs guidelines; update the content near the existing Usage/Subcommands block so the page has clear Arguments, Flags, and Examples headings.
website/docs/cli/global-flags.mdx-778-789 (1)
778-789: ⚠️ Potential issue | 🟡 Minor — Duplicate "AI-Powered Analysis Examples" section should be consolidated.
This creates two sections with the same heading and overlapping examples. Merge both into one section to keep docs stable and easier to maintain.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/docs/cli/global-flags.mdx` around lines 778 - 789, there are duplicate "AI-Powered Analysis Examples" sections; merge them into a single section by consolidating the example blocks (the commands shown like "atmos aws security analyze --ai -s prod", "ATMOS_AI=true atmos aws security analyze --severity=critical -s prod", and "atmos aws compliance report --ai --framework=cis-aws -s prod") into one cohesive "AI-Powered Analysis Examples" heading, remove the redundant duplicate heading and example block, and ensure the final single section contains all unique examples and correct flags/formatting so there are no overlapping or repeated examples.
website/docs/cli/commands/aws/compliance/compliance.mdx-21-29 (1)
21-29: ⚠️ Potential issue | 🟡 Minor — Please add `Arguments`/`Flags` and `Examples` for this command page.
The command doc structure is missing required reference sections, which makes the top-level command less actionable for users.
As per coding guidelines, "All CLI commands MUST have Docusaurus documentation in `website/docs/cli/commands/<command>/<subcommand>.mdx` ... [including] Arguments/Flags ... and Examples section."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/docs/cli/commands/aws/compliance/compliance.mdx` around lines 21 - 29, the top-level CLI doc for the "atmos aws compliance" command is missing the required "Arguments/Flags" and "Examples" sections; update the MDX (the current file containing the "## Usage" and "## Subcommands" blocks and the <DocCardList/> token) by adding a clear "## Arguments/Flags" heading that enumerates each supported flag/argument for the `atmos aws compliance <subcommand>` command (name, type, default, brief description) and an "## Examples" heading with one or more concrete usage examples showing common subcommands and flags (shell code blocks demonstrating real-world invocations); keep formatting consistent with other CLI docs and include at least one example per major subcommand listed by <DocCardList/>.
website/docs/cli/global-flags.mdx-593-604 (1)
593-604: ⚠️ Potential issue | 🟡 Minor — This reintroduces a duplicate `ATMOS_AI` env-var section.
`ATMOS_AI` is already documented above, so keeping both copies risks drift. Prefer one canonical entry and link to examples from that single spot.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/docs/cli/global-flags.mdx` around lines 593 - 604, remove the duplicated "AI Integration Environment Variables" block that reintroduces the `ATMOS_AI` entry; instead, delete this duplicate section (the block containing the `ATMOS_AI` dt/dd) and add a short cross-reference or link to the canonical `ATMOS_AI` documentation entry (and to the `--ai` CLI flag) so examples are maintained from a single source of truth; ensure the remaining canonical entry mentions equivalence to `--ai` and includes the example usage `ATMOS_AI=true atmos aws security analyze -s prod`.
website/docs/cli/commands/aws/security/security.mdx-21-29 (1)
21-29: ⚠️ Potential issue | 🟡 Minor — Add `Arguments`/`Flags` and `Examples` sections to complete the command doc.
This new command page is close, but it's missing the required sections for discoverability and consistency.
🛠️ Suggested doc scaffold

````diff
 ## Usage

 ```shell
 atmos aws security <subcommand> [flags]
 ```
+
+## Arguments
+
+- `<subcommand>` — Security operation to run (for example: `analyze`).
+
+## Flags
+
+- `--help` — Show help for `atmos aws security`.
+
+## Examples
+
+```shell
+atmos aws security analyze --help
+```

 ## Subcommands
````
As per coding guidelines, "All CLI commands MUST have Docusaurus documentation in `website/docs/cli/commands/<command>/<subcommand>.mdx` ... [including] Arguments/Flags ... and Examples section."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@website/docs/cli/commands/aws/security/security.mdx` around lines 21 - 29,
Add the required "Arguments", "Flags", and "Examples" sections to the existing
CLI doc for the atmos aws security command: under the Usage line for "atmos aws
security [flags]" add an Arguments section describing
`<subcommand>` (e.g., "Security operation to run (for example: analyze)"), a
Flags section listing `--help` (and any other global flags your CLI supports)
with brief descriptions, and an Examples section showing at least one practical
invocation such as "atmos aws security analyze --help"; place these new headings
before the existing Subcommands/DocCardList so the doc follows the project's CLI
documentation scaffold and is discoverable.
website/docs/cli/commands/aws/security/analyze.mdx-61-63 (1)
61-63: ⚠️ Potential issue | 🟡 Minor — Inconsistent `--max-findings` default across docs artifacts.
This page documents default `500`, while the generated help screengrab shows default `50`. Please sync both to the actual command default to avoid user confusion.
As per coding guidelines, "Keep CLI documentation and website documentation in sync and document new features on the website with examples and use cases."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@website/docs/cli/commands/aws/security/analyze.mdx` around lines 61 - 63, the
docs show `--max-findings` default as 500 but the CLI help/screenshot shows 50;
locate the `analyze` command's option definition (the `--max-findings` flag in
the Analyze command / option parser) and confirm the actual default there, then
update this MDX entry to match that value (or change the option's default in the
`analyze` command implementation to 500 if the intended default is 500); ensure
the `--max-findings` value in the help output, the `analyze` command
implementation, and this docs line are all the same and update the help
screenshot if necessary.
docs/prd/atmos-aws-security-compliance.md-82-93 (1)
82-93: ⚠️ Potential issue | 🟡 Minor — Mapping strategy count is inconsistent in the same section.
Line 82 says "5 mapping strategies," but the table lists 7. Please align the count text with the table.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@docs/prd/atmos-aws-security-compliance.md` around lines 82 - 93, the doc
header currently states "5 mapping strategies" but the mapping table actually
lists seven strategies (`finding-tag`, `tag-api`, `context-tags`, `account-map`,
`ecr-repo`, `naming-convention`, `resource-type`); update the header text to
reflect "7 mapping strategies" (or reduce the table to five if that was
intended) so the count matches the listed strategies, ensuring the phrase
containing "5 mapping strategies" and any nearby summary references are updated
to "7 mapping strategies".
website/docs/cli/commands/aws/compliance/report.mdx-172-177 (1)
172-177: ⚠️ Potential issue | 🟡 Minor — Related commands list includes an unrelated EKS command.
`/cli/commands/aws/eks/update-kubeconfig` doesn't fit this page's compliance workflow; replace with a compliance/security-related entry.
As per coding guidelines: Keep CLI documentation and website documentation in sync and document new features on the website with examples and use cases.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@website/docs/cli/commands/aws/compliance/report.mdx` around lines 172 - 177,
The Related Commands list mistakenly includes an unrelated EKS entry: remove the
DocCardList item with href '/cli/commands/aws/eks/update-kubeconfig' and replace
it with a compliance/security-related entry (e.g., add an item similar to {type:
'link', href: '/cli/commands/aws/security/report', label: 'atmos aws security
report', description: 'Generate compliance/security reports'}), keeping the same
DocCardList structure and ensuring the label/href/description match the actual
CLI command docs so website and CLI docs remain in sync.
pkg/aws/security/report_renderer.go-52-70 (1)
52-70: ⚠️ Potential issue | 🟡 Minor — Include INFORMATIONAL in the summary breakdown.
Lines 52-70 and 228-235 only enumerate Critical/High/Medium/Low, but `renderFindingsBySeverity` also renders INFORMATIONAL. Reports with informational findings will show totals that do not reconcile with the summary.
Also applies to: 228-235
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pkg/aws/security/report_renderer.go` around lines 52 - 70, the summary table
omits SeverityInformation/INFORMATIONAL so totals mismatch with
renderFindingsBySeverity; update the loop that builds the summary (which
currently iterates []Severity{SeverityCritical, SeverityHigh, SeverityMedium,
SeverityLow}) to also include the INFORMATIONAL severity constant (e.g.,
SeverityInformational or SeverityInformation depending on the enum name used),
ensuring you read counts from report.SeverityCounts and compute mapped/unmapped
via countMappedBySeverity(report.Findings, sev) exactly as for other severities;
make the same change in the other occurrence noted (lines referenced around
where the summary is built and the second block at 228-235) so informational
findings are included consistently.
pkg/aws/security/component_mapper_test.go-975-986 (1)
975-986: ⚠️ Potential issue | 🟡 Minor — Assert the grouped titles here, not only the lengths.
This passes as long as `groupByTitle` returns three singleton groups, even if the titles are wrong. Pull the title back out of each group and assert `A`, `B`, and `C` explicitly; `assert.ElementsMatch` is enough if order is not part of the contract.
Suggested tightening.

```diff
 func TestGroupByTitle_NoDuplicates(t *testing.T) {
 	findings := []Finding{
 		{Title: "A"},
 		{Title: "B"},
 		{Title: "C"},
 	}

 	groups := groupByTitle(findings)
 	require.Len(t, groups, 3)
-	assert.Len(t, groups[0], 1)
-	assert.Len(t, groups[1], 1)
-	assert.Len(t, groups[2], 1)
+
+	gotTitles := make([]string, 0, len(groups))
+	for _, group := range groups {
+		require.Len(t, group, 1)
+		gotTitles = append(gotTitles, group[0].Title)
+	}
+
+	assert.ElementsMatch(t, []string{"A", "B", "C"}, gotTitles)
 }
```

As per coding guidelines, "For slice-result tests, assert element contents, not just length.
`require.Len` alone allows regressions that drop or corrupt contents. Assert at least the first and last element by value."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/component_mapper_test.go` around lines 975 - 986, The test TestGroupByTitle_NoDuplicates only checks group counts and sizes but not the actual titles; update the test to extract titles from each group returned by groupByTitle and assert their contents are exactly "A", "B", and "C" (use assert.ElementsMatch if order is not guaranteed). Locate TestGroupByTitle_NoDuplicates and after calling groupByTitle(findings) map each group (e.g., groups[i][0].Title) or collect all Titles from groups into a slice, then replace or augment the length assertions with an assertion that the collected titles equal the expected slice {"A","B","C"}; keep require.Len checks if desired but ensure content assertions are present.
🧹 Nitpick comments (7)
website/src/data/roadmap.js (1)
599-603: Consider collapsing duplicate shipped milestones.
At Lines 599-603, these summary milestones overlap with the detailed shipped milestones already listed above (e.g., analyze/report/AI/remediation). Keeping both can inflate milestone counts and make progress less clear.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/data/roadmap.js` around lines 599 - 603, the five summary milestones ('AWS security finding analysis', 'Finding-to-code mapping', 'AI-powered remediation', 'Compliance framework reports', 'Atmos Auth integration for security') duplicate more detailed shipped items above and should be collapsed to avoid inflated counts: remove these duplicate entries from the milestones array (or replace them with a single aggregated milestone entry if you need a summary), and ensure any important metadata like changelog: 'aws-security-compliance' is preserved/moved to the corresponding detailed milestone(s) so no changelog linkage is lost.
cmd/aws/markdown/atmos_aws_compliance.md (1)
8-26: Align examples with repository CLI prompt convention.
Please switch examples to the `$ atmos ...` style used in `cmd/markdown/` docs for consistency.
✏️ Example style adjustment

```diff
-atmos aws compliance report --framework cis-aws --stack prod-us-east-1
+ $ atmos aws compliance report --framework cis-aws --stack prod-us-east-1
```

Based on learnings: In the cloudposse/atmos repository, CLI documentation files in `cmd/markdown/` follow a specific format that uses `$ atmos command` in code blocks.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/aws/markdown/atmos_aws_compliance.md` around lines 8 - 26, update the examples in the AWS compliance docs so they follow the repository CLI prompt convention by adding a leading "$ " before each atmos invocation (e.g., change lines invoking "atmos aws compliance report ..." to "$ atmos aws compliance report ..."); update all occurrences in this file (examples showing framework/stack/format/file variants) so the code block consistently uses the "$ atmos ..." prompt style.
pkg/aws/security/aws_clients_test.go (1)
22-38: Consider table-driving the auth-context scenarios.

The nil and non-nil paths can be merged into one table-driven test to reduce duplication and keep future scenarios easy to add.
♻️ Suggested refactor shape
```diff
-func TestWithAuthContext(t *testing.T) {
-	cache := newAWSClientCache()
-	authCtx := &schema.AWSAuthContext{
-		CredentialsFile: "/tmp/creds",
-		ConfigFile:      "/tmp/config",
-		Profile:         "test-profile",
-		Region:          "us-east-2",
-	}
-	cache.WithAuthContext(authCtx)
-	assert.Equal(t, authCtx, cache.authContext)
-}
-
-func TestWithAuthContext_Nil(t *testing.T) {
-	cache := newAWSClientCache()
-	cache.WithAuthContext(nil)
-	assert.Nil(t, cache.authContext)
-}
+func TestWithAuthContext(t *testing.T) {
+	tests := []struct {
+		name    string
+		authCtx *schema.AWSAuthContext
+	}{
+		{
+			name: "non-nil auth context",
+			authCtx: &schema.AWSAuthContext{
+				CredentialsFile: "/tmp/creds",
+				ConfigFile:      "/tmp/config",
+				Profile:         "test-profile",
+				Region:          "us-east-2",
+			},
+		},
+		{name: "nil auth context", authCtx: nil},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			cache := newAWSClientCache()
+			cache.WithAuthContext(tt.authCtx)
+			assert.Equal(t, tt.authCtx, cache.authContext)
+		})
+	}
+}
```

As per coding guidelines, `**/*_test.go`: "Use table-driven tests for testing multiple scenarios in Go".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/aws_clients_test.go` around lines 22 - 38, The two tests TestWithAuthContext and TestWithAuthContext_Nil duplicate logic and should be combined into a table-driven test: create a slice of cases with name, input *schema.AWSAuthContext (including nil and a populated struct), and expected authContext, then loop over cases calling cache := newAWSClientCache(); cache.WithAuthContext(tc.input); and assert.Equal(t, tc.expected, cache.authContext). Keep references to WithAuthContext, newAWSClientCache, and authContext to locate the code and ensure both nil and non-nil scenarios are covered in the single table-driven test.

cmd/aws/markdown/atmos_aws_security.md (1)
9-33: Align example commands with `cmd/markdown` prompt style.

Please use the repo-standard `$ atmos ...` format in this code block for consistency with other CLI markdown pages.

Based on learnings: In the cloudposse/atmos repository, CLI documentation files in `cmd/markdown/` use `$ atmos command` in code blocks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/aws/markdown/atmos_aws_security.md` around lines 9 - 33, Update the code block examples so each CLI line uses the repo-standard " $ atmos ..." prompt prefix instead of bare commands; locate the code block containing the examples for "atmos aws security analyze" (the lines with "atmos aws security analyze --stack prod-us-east-1", "--severity", "--source", "--format", "--ai", "--file", etc.) and prepend " $ " to each command so they follow the other cmd/markdown pages' style.

pkg/aws/security/skill_prompt.md (1)
31-33: Add a language to the fenced deploy command block.

Use a language tag (e.g., `bash`) so markdown tooling and renderers stay consistent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/skill_prompt.md` around lines 31 - 33, The fenced code block containing the deploy command "atmos terraform apply <component> -s <stack>" lacks a language tag; update the markdown in pkg/aws/security/skill_prompt.md to use a language identifier (e.g., bash) on that triple-backtick fence so renderers and syntax highlighters treat the block as shell commands.

pkg/aws/security/cache_test.go (2)
113-117: Strengthen cache-hit assertions with element content checks.

On hit paths, asserting only `Len` can miss corrupted or wrong entries. Assert at least the first/last `Finding.ID` (or full equality for small fixtures).

As per coding guidelines, "For slice-result tests, assert element contents, not just length."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/cache_test.go` around lines 113 - 117, The test currently only asserts slice length on cache-hit paths (hit true) which can miss corrupted/wrong entries; update the block handling tt.wantHit to also assert element contents β for example, when require.NotNil(t, got) is true, check that the first and last Finding.ID values in got match expected IDs from the test fixture (or perform full slice equality for small fixtures) using assert.Equal on those IDs (referencing variables got, tt.wantCount, tt.wantHit and the Finding.ID field) so the test verifies actual entry contents, not just length.
196-201: Replace fixed sleeps with eventual assertions to reduce test flakiness.

Using hard sleeps around TTL boundaries can be timing-sensitive in CI. Prefer `require.Eventually`/`assert.Eventually` with polling.
Refactor sketch

```diff
-	time.Sleep(100 * time.Millisecond)
-	got, hit = cache.GetFindings(opts)
-	assert.False(t, hit)
-	assert.Nil(t, got)
+	require.Eventually(t, func() bool {
+		got, hit = cache.GetFindings(opts)
+		return !hit && got == nil
+	}, 500*time.Millisecond, 10*time.Millisecond)
```

Also applies to: 216-220
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/cache_test.go` around lines 196 - 201, Replace the fixed time.Sleep call with an eventual assertion that polls until the cache expires: call assert.Eventually (or require.Eventually) and inside the predicate call cache.GetFindings(opts) and return true only when hit==false && got==nil; use a reasonable timeout (e.g. 1s) and poll interval (e.g. 10β50ms). Apply the same change for the second occurrence around the 216β220 block so both TTL-bound checks use assert.Eventually/require.Eventually against cache.GetFindings(opts) instead of time.Sleep.
…al flags

- Fix compliance dedup: use SecurityControlID (e.g., EC2.18) instead of ComplianceStandard (framework ID) as dedup key
- Fix cache key: add Framework, Stack, Component; add nil guard
- Fix cache safety: copy slices on get/set to prevent mutation
- Fix global flags: parse BasePath in security and compliance commands
- Fix --framework flag: wire into QueryOptions for security analyze
- Fix CSV flush: check cw.Error() after Flush() in both renderers
- Fix auth region fallback: check authContext.Region before us-east-1
- Fix resolveAuthContext: nil guard, Profile validation, extract helper
- Fix INFORMATIONAL: include in summary table severity loops
- Fix Bedrock PRD: qualify data residency and compliance claims
- Fix PRD: strategy count 5→7, tag names use colons
- Fix roadmap: add pr:2282 to milestones, remove duplicate entries
- Fix CLI markdown: use $ prompt style per convention
- Document --ai flag for compliance report with examples
- Improve tests: table-driven auth, assert element contents, tool calls
- Update compliance examples with 35/42 (83%) production results

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 12
♻️ Duplicate comments (1)
pkg/aws/security/finding_fetcher.go (1)
230-285: ⚠️ Potential issue | 🔴 Critical

`--stack` still isn't enforced in compliance fetches.

`FetchComplianceStatus` sets `QueryOptions.Stack`, but this filter builder never reads it. A stack-scoped compliance report will still score every finding in the framework and only stamp the requested stack onto the output.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/finding_fetcher.go` around lines 230 - 285, buildFindingFilters currently ignores QueryOptions.Stack so stack-scoped compliance reports still return all findings; update buildFindingFilters (awsFindingFetcher.buildFindingFilters) to detect opts.Stack (QueryOptions.Stack) and append an appropriate Security Hub string filter that restricts findings to that stack (e.g., add a StringFilter for the finding resource identifier or resource group value that your system uses to tag the stack) so the returned AwsSecurityFindingFilters include the stack constraint; locate the place that builds filters and append the new filter to the filters struct (use the same shtypes.StringFilter shape and StringFilterComparisonEquals/Prefix as appropriate).
🧹 Nitpick comments (1)
pkg/aws/security/component_mapper_test.go (1)
727-744: Make the fallback test assert a real outcome.

This currently passes for any non-`finding-tag` result, including `unmatched`, so it will not catch a broken fallback path.

💡 Tighten the assertion
```diff
 func TestMapByTags_FindingTagsFallbackToAPI(t *testing.T) {
 	// When finding has no ResourceTags, fall back to API.
 	finding := &Finding{
 		ID:           "no-tag-001",
 		ResourceARN:  "arn:aws:s3:::my-bucket",
+		ResourceType: "AwsS3Bucket",
 		ResourceTags: nil, // No embedded tags.
 	}
@@
 	mapping, err := mapper.MapFinding(context.Background(), finding)
 	require.NoError(t, err)
 	require.NotNil(t, mapping)
-	// Should use naming convention or resource type, not tag.
-	assert.NotEqual(t, "finding-tag", mapping.Method)
+	assert.True(t, mapping.Mapped)
+	assert.Equal(t, "resource-type", mapping.Method)
+	assert.Equal(t, "s3-bucket", mapping.Component)
 }
```

As per coding guidelines, "Test behavior, not implementation. Never test stub functions. Avoid tautological tests. No coverage theater."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/component_mapper_test.go` around lines 727 - 744, The test TestMapByTags_FindingTagsFallbackToAPI currently only asserts mapping.Method != "finding-tag" which allows "unmatched" to pass; update the assertion to verify a concrete expected fallback result from MapFinding (for example assert mapping.Method is not "unmatched" and/or equals the expected fallback method such as the naming convention or resource-type value your mapper returns). Locate TestMapByTags_FindingTagsFallbackToAPI and change the require/assert on mapping.Method to assert a real outcome (e.g., assert.NotEqual(t, "unmatched", mapping.Method) and assert.Equal(t, "<expected-fallback-method>", mapping.Method) or assert.Contains(t, []string{"naming-convention","resource-type"}, mapping.Method) as appropriate).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmd/aws/compliance.go`:
- Around line 125-187: The loop currently renders each framework report
separately (variables/frameworks, fetcher.FetchComplianceStatus,
renderer.RenderComplianceReport) which breaks structured formats
(JSON/YAML/CSV); instead accumulate all non-nil reports into a single in-memory
collection keyed by framework (apply filterComplianceReport when
len(controlFilter)>0) and after the for loop call a single render method to emit
one aggregate payload for non-markdown formats (keep the existing per-framework
markdown/stdout ui.Markdown path when outputFormat == security.FormatMarkdown &&
fileOutput == "") and ensure errors from fetch/render are still returned; update
code around NewFindingFetcher/NewReportRenderer, FetchComplianceStatus and the
RenderComplianceReport invocation accordingly.
- Around line 241-262: filterComplianceReport currently only filters
report.FailingDetails and leaves report.PassingControls unchanged, which skews
TotalControls and ScorePercent; fix by recomputing counts for the selected
control set: build a lookup of failing IDs from report.FailingDetails, iterate
over the controlFilter keys and for each selected ID increment failingCount if
present in that lookup otherwise increment passingCount, then set
filteredReport.FailingControls = failingCount, filteredReport.PassingControls =
passingCount, filteredReport.TotalControls = failingCount+passingCount and
recalc ScorePercent accordingly (use report.FailingDetails -> ctrl.ControlID,
controlFilter, filteredReport, and the existing filtered slice to locate the
code to change).
- Around line 62-66: The code only copies BasePath into
schema.ConfigAndStacksInfo before calling cfg.InitCliConfig, so config-selection
flags (configs, config-dirs, profile, etc.) are ignored; instead, populate the
entire ConfigAndStacksInfo with the result of flags.ParseGlobalFlags(cmd, v) (or
copy all its fields) and then pass that populated struct to cfg.InitCliConfig so
InitCliConfig sees the user's selected configs, config dirs and profile; update
references around flags.ParseGlobalFlags, configAndStacksInfo, and
cfg.InitCliConfig accordingly.
- Around line 38-45: The CLI command complianceReportCmd is missing the
advertised --ai flag and its RunE handler does not branch into the AI
remediation flow, so add a boolean flag (e.g., "ai") to the command registration
and bind it to Viper/flags, then update the RunE function to check the flag (via
cmd.Flags().GetBool("ai") or viper.GetBool) and call the AI remediation path
when true (ensure you wire the same logic referenced elsewhere in the file where
the AI flow is implemented); update any flag help text to mention the AI
remediation behavior so the feature is reachable from the CLI.
In `@cmd/aws/credentials.go`:
- Around line 19-28: The function currently checks atmosConfig == nil before
handling the empty identity fast path, causing an AWS-credentials error when
identityName == "" and atmosConfig == nil; move the identityName == "" check to
run first and return nil, nil immediately for the empty-identity fast path so
the default AWS credential chain is used. Update the logic around the
identityName variable (and any return in the function in cmd/aws/credentials.go)
so the empty-identity branch short-circuits before validating atmosConfig,
preserving the existing error construction using
errUtils.ErrAWSCredentialsNotValid for the atmosConfig == nil case.
In `@cmd/aws/security.go`:
- Around line 233-236: When creating the AI analyzer
(security.NewFindingAnalyzer) you currently only write a debug log on
analyzerErr and continue; change this so that if the user explicitly requested
AI remediation (the --ai flag / AI mode variable) you log the error (use
log.Error) and propagate/return a non-nil error (or os.Exit(1)) instead of
falling through; update the analyzerErr branch around analyzer, analyzerErr :=
security.NewFindingAnalyzer(...) to check the AI-request flag and fail fast
(return the analyzerErr or exit) so the CLI shows a visible error when AI
construction fails.
- Around line 73-77: The code only copies BasePath from the parsed global flags
into schema.ConfigAndStacksInfo before calling cfg.InitCliConfig, so
InitCliConfig doesn't receive --config, --config-path, or --profile values;
update the call site to populate the ConfigAndStacksInfo with all relevant
selections returned by flags.ParseGlobalFlags (e.g., set
AtmosConfigFilesFromArg, AtmosConfigDirsFromArg, BasePath, and ProfilesFromArg
on the ConfigAndStacksInfo struct) so InitCliConfig/LoadConfig sees the full
global config selection used to resolve identity, region, and AI settings (see
symbols: ConfigAndStacksInfo, flags.ParseGlobalFlags, InitCliConfig, LoadConfig,
AtmosConfigFilesFromArg, AtmosConfigDirsFromArg, ProfilesFromArg).
In `@pkg/aws/security/report_renderer.go`:
- Around line 357-387: renderGroupedFindingMarkdown currently omits any
Remediation content for grouped findings; update it to include remediation by
checking findings[i].Remediation (or f.Remediation) and rendering it after
renderGroupedTags: if all findings share the same non-empty Remediation, render
that once (reuse whatever renderer you have for single findings, e.g. the same
code/path used by renderFindingMarkdown), otherwise iterate findings and render
each finding's Remediation under its table row or in per-finding subsections so
no remediation payload is dropped; reference renderGroupedFindingMarkdown,
renderGroupedTags, and the findings[].Remediation/Remediation field when
implementing.
- Around line 218-225: The reportTarget function currently returns "All Stacks"
when stack is empty even if component is provided; update reportTarget (using
parameters stack and component) so that when stack == "" and component != "" it
returns the component string (and keep the existing behavior for both non-empty
or both empty), i.e. add a branch checking component != "" and returning
component before the final "All Stacks" fallback.
In `@tests/fixtures/scenarios/native-ci/github-output.txt`:
- Line 50: The anchor id "result-prod-mycomponent" is duplicated in this fixture
output and must be made unique; find the second occurrence inside the output
block (the <details><summary><a id="result-prod-mycomponent" /> ...</summary>)
and change the id to a unique value (for example "result-prod-mycomponent-2" or
include an environment/sequence suffix) so that anchor resolution is
unambiguous; ensure any internal references to this anchor (if present) are
updated to match the new id.
In `@tests/fixtures/scenarios/native-ci/github-step-summary.txt`:
- Line 37: The summary block reuses the HTML anchor id "result-prod-mycomponent"
which duplicates an earlier anchor; update the <summary><a
id="result-prod-mycomponent" />... line to use a unique id (for example append a
suffix like "-2" or include the step name/environment) and update any references
to that anchor so links target this new section; ensure the id is unique within
the fixture and follows the same naming pattern used elsewhere.
In `@website/docs/cli/commands/aws/compliance/report.mdx`:
- Around line 168-169: Update the documentation entry for the `nist` option so
it correctly describes the mapped standard as NIST Special Publication 800-53
rather than the NIST Cybersecurity Framework (CSF): replace the `<dd>` text
currently describing "NIST Cybersecurity Framework (CSF)" with a description
that names "NIST SP 800-53 (NIST 800-53)", mentions that the CLI maps `nist` to
Security Hubβs `nist-800-53` standard, and optionally notes it is a controls
catalog for federal information systems.
---
Duplicate comments:
In `@pkg/aws/security/finding_fetcher.go`:
- Around line 230-285: buildFindingFilters currently ignores QueryOptions.Stack
so stack-scoped compliance reports still return all findings; update
buildFindingFilters (awsFindingFetcher.buildFindingFilters) to detect opts.Stack
(QueryOptions.Stack) and append an appropriate Security Hub string filter that
restricts findings to that stack (e.g., add a StringFilter for the finding
resource identifier or resource group value that your system uses to tag the
stack) so the returned AwsSecurityFindingFilters include the stack constraint;
locate the place that builds filters and append the new filter to the filters
struct (use the same shtypes.StringFilter shape and
StringFilterComparisonEquals/Prefix as appropriate).
---
Nitpick comments:
In `@pkg/aws/security/component_mapper_test.go`:
- Around line 727-744: The test TestMapByTags_FindingTagsFallbackToAPI currently
only asserts mapping.Method != "finding-tag" which allows "unmatched" to pass;
update the assertion to verify a concrete expected fallback result from
MapFinding (for example assert mapping.Method is not "unmatched" and/or equals
the expected fallback method such as the naming convention or resource-type
value your mapper returns). Locate TestMapByTags_FindingTagsFallbackToAPI and
change the require/assert on mapping.Method to assert a real outcome (e.g.,
assert.NotEqual(t, "unmatched", mapping.Method) and assert.Equal(t,
"<expected-fallback-method>", mapping.Method) or assert.Contains(t,
[]string{"naming-convention","resource-type"}, mapping.Method) as appropriate).
Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fd5da8e0-5e15-40bb-aa9b-f1020d4a8efd
📒 Files selected for processing (23)
- cmd/aws/compliance.go
- cmd/aws/credentials.go
- cmd/aws/markdown/atmos_aws_compliance.md
- cmd/aws/markdown/atmos_aws_security.md
- cmd/aws/security.go
- docs/prd/atmos-aws-security-compliance.md
- docs/prd/aws-bedrock-ai-provider.md
- examples/aws-security-compliance/README.md
- pkg/aws/security/analyzer_test.go
- pkg/aws/security/aws_clients_test.go
- pkg/aws/security/cache.go
- pkg/aws/security/cache_test.go
- pkg/aws/security/component_mapper_test.go
- pkg/aws/security/finding_fetcher.go
- pkg/aws/security/report_renderer.go
- pkg/aws/security/types.go
- tests/fixtures/scenarios/native-ci/github-output.txt
- tests/fixtures/scenarios/native-ci/github-step-summary.txt
- tests/snapshots/TestCLICommands_atmos_describe_config.stdout.golden
- tests/snapshots/TestCLICommands_secrets-masking_describe_config.stdout.golden
- website/blog/2026-04-03-aws-security-compliance.mdx
- website/docs/cli/commands/aws/compliance/report.mdx
- website/src/data/roadmap.js
✅ Files skipped from review due to trivial changes (6)
- tests/snapshots/TestCLICommands_atmos_describe_config.stdout.golden
- cmd/aws/markdown/atmos_aws_security.md
- pkg/aws/security/aws_clients_test.go
- website/src/data/roadmap.js
- website/blog/2026-04-03-aws-security-compliance.mdx
- pkg/aws/security/types.go
🚧 Files skipped from review as they are similar to previous changes (3)
- pkg/aws/security/cache_test.go
- pkg/aws/security/analyzer_test.go
- pkg/aws/security/cache.go
Codecov Report

❌ Patch coverage is 66.44%. Your patch check has failed because the patch coverage (66.44%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files

@@ Coverage Diff @@
## main #2282 +/- ##
==========================================
- Coverage 77.19% 76.99% -0.20%
==========================================
Files 1045 1059 +14
Lines 98388 100522 +2134
==========================================
+ Hits 75951 77399 +1448
- Misses 18184 18834 +650
- Partials 4253 4289 +36
Flags with carried forward coverage won't be shown.
- Fix global config flags: pass Config, ConfigPath, Profile to InitCliConfig
(not just BasePath) in both security and compliance commands
- Fix resolveAuthContext: check empty identity before nil config
- Fix AI analyzer warning: show visible ui.Warning instead of silent debug log
- Fix reportTarget: handle component-only case ("All Stacks / vpc")
- Fix grouped markdown: render remediation for grouped findings
- Fix NIST description: "NIST 800-53" not "NIST CSF" in compliance docs
- Improve test coverage: 45 new tests across credentials, renderer,
fetcher, and mapper — pkg/aws/security/ coverage 85.5% → 91.8%
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
♻️ Duplicate comments (3)
cmd/aws/compliance.go (3)
201-214: ⚠️ Potential issue | 🟠 Major

Missing `--ai` flag per PR scope.

The PR description mentions `atmos aws compliance report` supports `--ai` for AI-assisted prioritized remediation, but the flag isn't registered here. The documentation (lines 59-61 in report.mdx) also documents this flag.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/aws/compliance.go` around lines 201 - 214, The init() function building complianceParser is missing registration of the --ai flag; add a boolean flag option to the flags.NewStandardParser call (e.g., flags.WithBoolFlag("ai", "", false, "Enable AI-assisted prioritized remediation")) and also include a corresponding WithEnvVars entry if environment-toggle is needed (e.g., "ATMOS_AWS_COMPLIANCE_AI"); update complianceParser construction so the new flag is included alongside existing flags referenced in this function.
161-191: ⚠️ Potential issue | 🟠 Major

Multi-framework loop produces invalid structured output.

For JSON/YAML/CSV formats, iterating frameworks and calling `RenderComplianceReport` multiple times produces concatenated documents (e.g., `{...}{...}` for JSON) which parsers cannot consume.

Consider aggregating reports into a slice and rendering once, or restricting multi-framework mode to markdown only.
💡 Approach sketch
```go
// For structured formats, aggregate and render once.
if outputFormat != security.FormatMarkdown {
	var reports []*security.ComplianceReport
	for _, fw := range frameworks {
		report, err := fetcher.FetchComplianceStatus(ctx, fw, stack)
		// ... error handling ...
		if report != nil {
			reports = append(reports, report)
		}
	}
	return renderer.RenderComplianceReports(output, reports) // new method
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/aws/compliance.go` around lines 161 - 191, The loop over frameworks calls renderer.RenderComplianceReport repeatedly which corrupts structured outputs (JSON/YAML/CSV) by concatenating documents; change logic so when outputFormat != security.FormatMarkdown you collect non-nil reports returned by fetcher.FetchComplianceStatus (ref: frameworks, fetcher.FetchComplianceStatus, report) into a slice and then call a single renderer method (add/implement renderer.RenderComplianceReports) to render them at once to output; keep the existing per-framework behavior only for markdown to stdout (refs: outputFormat, security.FormatMarkdown, fileOutput, renderer.RenderComplianceReport).
245-267: ⚠️ Potential issue | 🟠 Major

Control filtering math is still incorrect.

When `--controls` filters to specific control IDs, the score calculation uses `report.PassingControls` from the original unfiltered report (line 262). If the user filters to just `CIS.1.1` (a failing control), the report incorrectly claims all original passing controls are relevant.

The filter should only count passing/failing within the requested control set.
🔧 Suggested fix sketch
```diff
 func filterComplianceReport(report *security.ComplianceReport, controlFilter map[string]bool) *security.ComplianceReport {
+	// Count how many filtered controls are failing vs in the filter set.
 	filtered := make([]security.ComplianceControl, 0, len(report.FailingDetails))
+	failingIDs := make(map[string]bool)
 	for _, ctrl := range report.FailingDetails {
 		if controlFilter[ctrl.ControlID] {
 			filtered = append(filtered, ctrl)
+			failingIDs[ctrl.ControlID] = true
 		}
 	}
+	// Count passing = controls in filter that aren't failing.
+	passingInFilter := 0
+	for id := range controlFilter {
+		if !failingIDs[id] {
+			passingInFilter++
+		}
+	}
+
 	filteredReport := *report
 	filteredReport.FailingDetails = filtered
 	filteredReport.FailingControls = len(filtered)
-	filteredReport.TotalControls = len(filtered) + report.PassingControls
+	filteredReport.PassingControls = passingInFilter
+	filteredReport.TotalControls = len(controlFilter)
 	if filteredReport.TotalControls > 0 {
 		const percentMultiplier = 100
-		filteredReport.ScorePercent = float64(report.PassingControls) / float64(filteredReport.TotalControls) * percentMultiplier
+		filteredReport.ScorePercent = float64(passingInFilter) / float64(filteredReport.TotalControls) * percentMultiplier
 	} else {
 		filteredReport.ScorePercent = 0
 	}
 	return &filteredReport
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/aws/compliance.go` around lines 245 - 267, filterComplianceReport currently reuses report.PassingControls from the original report, producing incorrect scores for filtered control sets; update filterComplianceReport to recompute passing and total counts within the requested control set by (1) building a set of control IDs present in the original report (include both failing and passing entries if a PassingDetails slice exists) or at minimum intersecting controlFilter with report.FailingDetails to count filtered failing controls, (2) compute totalFiltered as the number of control IDs that are in controlFilter and present in the original report, (3) compute passingFiltered = totalFiltered - filteredFailingCount, and then set filteredReport.PassingControls = passingFiltered, filteredReport.FailingControls = filteredFailingCount, filteredReport.TotalControls = totalFiltered and recalc ScorePercent from those filtered counts before returning; change only inside filterComplianceReport and use the existing symbols report, report.FailingDetails, filteredReport and ScorePercent.
🧹 Nitpick comments (2)
pkg/aws/security/report_renderer_test.go (1)
543-560: Table-driven test for `reportTarget` missing component-only case.

The table at lines 544-553 covers `both_set`, `stack_only`, and `neither_set`, but the component-only case is tested separately in `TestReportTarget_ComponentOnly` (line 751). Consider adding it to the table for completeness and discoverability.

💡 Add component-only case to the table
```diff
 tests := []struct {
 	name      string
 	stack     string
 	component string
 	expected  string
 }{
 	{name: "both_set", stack: "prod", component: "vpc", expected: "prod / vpc"},
 	{name: "stack_only", stack: "prod", component: "", expected: "prod"},
+	{name: "component_only", stack: "", component: "vpc", expected: "All Stacks / vpc"},
 	{name: "neither_set", stack: "", component: "", expected: "All Stacks"},
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/report_renderer_test.go` around lines 543 - 560, The table-driven TestReportTarget is missing the component-only scenario; update the tests slice in TestReportTarget to include a case like {name: "component_only", stack: "", component: "vpc", expected: "vpc"} so the component-only behavior of reportTarget is covered in the same table (and remove or de-duplicate the separate TestReportTarget_ComponentOnly test if desired). Ensure you reference reportTarget and TestReportTarget when making the change.

pkg/aws/security/report_renderer.go (1)
506-509: Diff rendering could show malformed output for multi-line changes.

When `change.Before` or `change.After` contain newlines, the diff block will break formatting since each line won't have the proper `+`/`-` prefix. This is a minor edge case since most remediation code changes are likely single-line.

💡 Optional: Split multi-line diffs properly
````diff
 if change.Before != "" {
-	fmt.Fprintf(sb, "```diff\n- %s\n+ %s\n```\n\n", change.Before, change.After)
+	beforeLines := strings.Split(change.Before, "\n")
+	afterLines := strings.Split(change.After, "\n")
+	sb.WriteString("```diff\n")
+	for _, line := range beforeLines {
+		fmt.Fprintf(sb, "- %s\n", line)
+	}
+	for _, line := range afterLines {
+		fmt.Fprintf(sb, "+ %s\n", line)
+	}
+	sb.WriteString("```\n\n")
 }
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/aws/security/report_renderer.go` around lines 506 - 509, The diff renderer in render loop currently writes change.Before and change.After as single %s strings which breaks when those values contain newlines; update the block in the renderer that handles change.Before/change.After (the code using sb, change.Before, change.After) to split both values with strings.Split(..., "\n"), open the fenced code block, iterate over beforeLines and write each line prefixed with "- " (or "- " followed by content) and iterate over afterLines writing each line prefixed with "+ ", handle differing line counts by emitting blank prefixed lines where one side is shorter, then close the fenced block β this preserves proper per-line +/- prefixes for multi-line diffs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@website/docs/cli/commands/aws/compliance/report.mdx`:
- Around line 59-61: The docs mention a `--ai` flag that isn't registered;
update the AWS compliance command by adding a boolean flag named "ai" to the
compliance command registration (e.g., in the function that builds or registers
the compliance command such as NewComplianceCmd or the complianceCmd flag setup)
and wire the flag into the command's execution path so the flag value is read
and used to enable AI-powered analysis, or remove the `--ai` mention from the
docs if you prefer not to implement it yet; ensure the flag has the same help
text as the docs and is accessible to the command handler that runs the
compliance report.
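
A stdlib-only sketch of registering the missing `--ai` toggle is below. The real command wires flags through Atmos's internal parser (`flags.NewStandardParser`) and cobra, so `parseComplianceFlags` and its shape are illustrative assumptions, not the project's API.

```go
package main

import (
	"flag"
	"fmt"
)

// parseComplianceFlags registers a boolean "ai" flag with the same
// help text as the docs and reports whether it was set. Hypothetical
// stand-in for the real Atmos flag registration.
func parseComplianceFlags(args []string) (bool, error) {
	fs := flag.NewFlagSet("compliance report", flag.ContinueOnError)
	ai := fs.Bool("ai", false, "Enable AI-powered analysis of the compliance report")
	if err := fs.Parse(args); err != nil {
		return false, err
	}
	return *ai, nil
}

func main() {
	enabled, _ := parseComplianceFlags([]string{"--ai"})
	fmt.Println(enabled) // prints "true"
}
```

The key point of the review comment holds regardless of the flag library: the flag must exist in the command registration and be read in the execution path, or the docs should drop the mention.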
---
Duplicate comments:
In `@cmd/aws/compliance.go`:
- Around line 201-214: The init() function building complianceParser is missing
registration of the --ai flag; add a boolean flag option to the
flags.NewStandardParser call (e.g., flags.WithBoolFlag("ai", "", false, "Enable
AI-assisted prioritized remediation")) and also include a corresponding
WithEnvVars entry if environment-toggle is needed (e.g.,
"ATMOS_AWS_COMPLIANCE_AI"); update complianceParser construction so the new flag
is included alongside existing flags referenced in this function.
- Around line 161-191: The loop over frameworks calls
renderer.RenderComplianceReport repeatedly which corrupts structured outputs
(JSON/YAML/CSV) by concatenating documents; change logic so when outputFormat !=
security.FormatMarkdown you collect non-nil reports returned by
fetcher.FetchComplianceStatus (ref: frameworks, fetcher.FetchComplianceStatus,
report) into a slice and then call a single renderer method (add/implement
renderer.RenderComplianceReports) to render them at once to output; keep the
existing per-framework behavior only for markdown to stdout (refs: outputFormat,
security.FormatMarkdown, fileOutput, renderer.RenderComplianceReport).
- Around line 245-267: filterComplianceReport currently reuses
report.PassingControls from the original report, producing incorrect scores for
filtered control sets; update filterComplianceReport to recompute passing and
total counts within the requested control set by (1) building a set of control
IDs present in the original report (include both failing and passing entries if
a PassingDetails slice exists) or at minimum intersecting controlFilter with
report.FailingDetails to count filtered failing controls, (2) compute
totalFiltered as the number of control IDs that are in controlFilter and present
in the original report, (3) compute passingFiltered = totalFiltered -
filteredFailingCount, and then set filteredReport.PassingControls =
passingFiltered, filteredReport.FailingControls = filteredFailingCount,
filteredReport.TotalControls = totalFiltered and recalc ScorePercent from those
filtered counts before returning; change only inside filterComplianceReport and
use the existing symbols report, report.FailingDetails, filteredReport and
ScorePercent.
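The recomputation the comment asks for can be sketched with simplified stand-ins (the real code works on `filteredReport` fields rather than plain slices): intersect the filter with the controls actually present, then derive all counts and the score from that filtered set only.

```go
package main

import "fmt"

// recomputeFiltered derives pass/fail counts and score from only the
// controls that are both present in the report and requested by the
// filter, instead of reusing the original report's PassingControls.
func recomputeFiltered(allControls, failingControls []string, filter map[string]bool) (total, failing, passing int, score float64) {
	failingSet := make(map[string]bool, len(failingControls))
	for _, id := range failingControls {
		failingSet[id] = true
	}
	for _, id := range allControls {
		if !filter[id] {
			continue
		}
		total++
		if failingSet[id] {
			failing++
		}
	}
	passing = total - failing
	if total > 0 {
		score = 100 * float64(passing) / float64(total)
	}
	return
}

func main() {
	all := []string{"CIS.1.1", "CIS.1.2", "CIS.2.1", "CIS.2.2"}
	fail := []string{"CIS.1.2", "CIS.2.2"}
	filter := map[string]bool{"CIS.1.1": true, "CIS.1.2": true}
	total, f, p, score := recomputeFiltered(all, fail, filter)
	fmt.Println(total, f, p, score) // 2 1 1 50
}
```

With the original bug, the unfiltered passing count would be carried into the two-control subset and the score would be wildly inflated or deflated.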
---
Nitpick comments:
In `@pkg/aws/security/report_renderer_test.go`:
- Around line 543-560: The table-driven TestReportTarget is missing the
component-only scenario; update the tests slice in TestReportTarget to include a
case like {name: "component_only", stack: "", component: "vpc", expected: "vpc"}
so the component-only behavior of reportTarget is covered in the same table (and
remove or de-duplicate the separate TestReportTarget_ComponentOnly test if
desired). Ensure you reference reportTarget and TestReportTarget when making the
change.
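A sketch of folding the component-only case into the table follows. Note `reportTarget` here is a hypothetical reconstruction for illustration; the real function lives in `pkg/aws/security` and its exact formatting may differ.

```go
package main

import "fmt"

// reportTarget is an assumed reconstruction: join stack and component
// when both are set, otherwise fall back to whichever is non-empty.
func reportTarget(stack, component string) string {
	switch {
	case stack != "" && component != "":
		return stack + "/" + component
	case component != "":
		return component
	default:
		return stack
	}
}

func main() {
	// Table-driven style with the suggested component_only case folded in.
	tests := []struct {
		name, stack, component, expected string
	}{
		{"both", "prod-use1", "vpc", "prod-use1/vpc"},
		{"component_only", "", "vpc", "vpc"},
		{"stack_only", "prod-use1", "", "prod-use1"},
	}
	for _, tt := range tests {
		got := reportTarget(tt.stack, tt.component)
		fmt.Printf("%s: %q (ok=%v)\n", tt.name, got, got == tt.expected)
	}
}
```

Keeping all scenarios in one table makes the separate `TestReportTarget_ComponentOnly` test redundant, as the comment suggests.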
In `@pkg/aws/security/report_renderer.go`:
- Around line 506-509: The diff renderer in render loop currently writes
change.Before and change.After as single %s strings which breaks when those
values contain newlines; update the block in the renderer that handles
change.Before/change.After (the code using sb, change.Before, change.After) to
split both values with strings.Split(..., "\n"), open the fenced code block,
iterate over beforeLines and write each line prefixed with "- " (or "- "
followed by content) and iterate over afterLines writing each line prefixed with
"+ ", handle differing line counts by emitting blank prefixed lines where one
side is shorter, then close the fenced block; this preserves proper per-line
+/- prefixes for multi-line diffs.
Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a9c79da6-569b-4fdd-b2a6-de66ad3fe9ab
📒 Files selected for processing (9)
- cmd/aws/compliance.go
- cmd/aws/credentials.go
- cmd/aws/security.go
- cmd/aws/security_test.go
- pkg/aws/security/component_mapper_test.go
- pkg/aws/security/finding_fetcher_test.go
- pkg/aws/security/report_renderer.go
- pkg/aws/security/report_renderer_test.go
- website/docs/cli/commands/aws/compliance/report.mdx
✅ Files skipped from review due to trivial changes (2)
- pkg/aws/security/component_mapper_test.go
- pkg/aws/security/finding_fetcher_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- cmd/aws/credentials.go
<dt>`--ai`</dt>
<dd>Enable AI-powered analysis of the compliance report. The global `--ai` flag captures the report output and sends it to the configured AI provider for a summary with remediation guidance for each failing control.</dd>
</dl>
Documentation describes a `--ai` flag that isn't implemented.
The `--ai` flag is documented here but isn't registered in `cmd/aws/compliance.go`. Either add the flag implementation or remove this documentation until it's ready.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@website/docs/cli/commands/aws/compliance/report.mdx` around lines 59 - 61,
The docs mention a `--ai` flag that isn't registered; update the AWS compliance
command by adding a boolean flag named "ai" to the compliance command
registration (e.g., in the function that builds or registers the compliance
command such as NewComplianceCmd or the complianceCmd flag setup) and wire the
flag into the command's execution path so the flag value is read and used to
enable AI-powered analysis, or remove the `--ai` mention from the docs if you
prefer not to implement it yet; ensure the flag has the same help text as the
docs and is accessible to the command handler that runs the compliance report.
what

- `atmos aws security analyze` – fetch security findings from AWS Security Hub, map them to Atmos components and stacks via resource tags, and generate structured remediation reports
- `atmos aws compliance report` – generate compliance posture reports against industry frameworks (CIS AWS, PCI-DSS, SOC2, HIPAA, NIST) with pass/fail scoring
- Mapping strategies with confidence levels: `finding-tag` (exact), `tag-api` (exact), `context-tags` (high), `account-map` (low), `ecr-repo` (low), `naming-convention` (low), `resource-type` (low)
- `--ai` flag – multi-turn tool analysis reads component source and stack config, generates root cause analysis, specific code changes, stack YAML changes, deploy commands, risk assessment
- `identity` config field for targeting Security Hub delegated admin accounts
- `--stack`, `--component`, `--severity`, `--source`, `--framework`, `--format`, `--file`, `--max-findings`, `--no-group`, `--region`, `--identity` flags
- New AI tools: `atmos_list_findings`, `atmos_describe_finding`, `atmos_analyze_finding`, `atmos_compliance_report`
- Example: `examples/aws-security-compliance/`

why
Reviewing AWS security findings today requires navigating multiple AWS console pages, cross-referencing resources with Terraform code, and manually figuring out which configuration caused the issue. This is slow, error-prone, and requires deep AWS + Terraform expertise.
Atmos owns the component-to-stack relationship, so it can trace a security finding on an AWS resource all the way back to the exact Terraform code and stack configuration that created it β and generate a targeted fix.
The key differentiator vs AWS MCP security servers: MCP servers return raw findings but have no concept of Atmos stacks, components, or Terraform source code. Our implementation maps findings to IaC and generates specific remediation with deploy commands.
See It in Action
Tested against a multi-account AWS organization (11 accounts, Security Hub delegated admin, 500 findings, 97.2% mapped to Atmos components).
1. Security findings mapped to components
2. AI-powered remediation (`--ai`)
3. Compliance report
4. Compliance report with AI (`--ai`)

New CLI Commands
`atmos aws security analyze`

Fetches findings from Security Hub, maps them to Atmos components via resource tags (7 mapping strategies with confidence levels), and renders reports in 4 formats. Post-mapping `--stack` and `--component` filters narrow results after mapping. With `--ai`, the AI reads component source code and stack config via multi-turn tools to generate specific remediation.

`atmos aws compliance report`

Queries Security Hub enabled standards, counts total controls via `ListSecurityControlDefinitions`, and computes pass/fail scores. Supports a `--framework` filter for CIS AWS, PCI-DSS, SOC2, HIPAA, NIST. With `--ai`, generates prioritized remediation for each failing control.

Configuration
Example

See `examples/aws-security-compliance/` for a complete configuration with auth, tag mapping, AI provider, and all commands.

references
- `docs/prd/atmos-aws-security-compliance.md`
- `website/blog/2026-04-03-aws-security-compliance.mdx`
- `examples/aws-security-compliance/`
- `website/docs/cli/commands/aws/security/`, `website/docs/cli/commands/aws/compliance/`
- `website/docs/cli/configuration/aws/security.mdx`

Summary by CodeRabbit
New Features
Documentation
Examples