diff --git a/.claude/agents/tui-expert.md b/.claude/agents/tui-expert.md index d7a2534697..01a01ebfdb 100644 --- a/.claude/agents/tui-expert.md +++ b/.claude/agents/tui-expert.md @@ -680,11 +680,36 @@ func TestCommandWithTheme(t *testing.T) { } ``` +## Specialized Agent: tui-list + +For implementing list commands specifically (list components, list stacks, list workflows, etc.), delegate to the **`tui-list` agent** which specializes in: + +- List command architecture and rendering pipeline (filter → column → sort → format → output) +- Column configuration via Go templates and atmos.yaml +- Filter/sort implementation patterns +- Table rendering with lipgloss +- Multi-format output (table, JSON, YAML, CSV, TSV, tree) +- Dynamic tab completion for --columns flag + +**When to use tui-list:** +- Creating new list commands +- Adding columns or filters to existing lists +- Implementing sorting functionality +- Troubleshooting list rendering or column issues +- Working with the renderer pipeline (`pkg/list/renderer/`, `pkg/list/format/`, `pkg/list/column/`) + +**When to stay with tui-expert:** +- General TUI components (pager, help, interactive forms) +- Theme integration and styling +- Non-list UI elements (status messages, markdown, logs) +- Refactoring hard-coded colors to theme-aware patterns + ## File Organization **Core:** `pkg/ui/theme/` - theme.go (349 themes), registry.go, scheme.go, styles.go, table.go, converter.go, log_styles.go **Integration:** pkg/ui/theme/colors.go, pkg/ui/markdown/styles.go, cmd/root.go **Commands:** cmd/theme/ (theme.go, list.go, show.go) +**List Commands:** `pkg/list/` (renderer/, format/, column/, filter/, sort/) - See `tui-list` agent ## Error Handling diff --git a/.claude/agents/tui-list.md b/.claude/agents/tui-list.md new file mode 100644 index 0000000000..70fc988099 --- /dev/null +++ b/.claude/agents/tui-list.md @@ -0,0 +1,584 @@ +--- +name: tui-list +description: >- + Expert in developing DX-friendly list commands for Atmos CLI. 
Specializes in table rendering, + column configuration, filter/sort implementation, and pipeline-friendly output formats. + + **Invoke when:** + - Creating new list commands (list components, list stacks, list workflows, etc.) + - Adding columns to existing list commands + - Implementing filters or sort functionality for lists + - Troubleshooting table rendering or column selection issues + - Optimizing list output for TTY vs non-TTY environments + - Working with dynamic tab completion for --columns flag + - Understanding list rendering pipeline (filter → column → sort → format → output) + +tools: Read, Write, Edit, Grep, Glob, Bash, Task, TodoWrite +model: sonnet +color: cyan +--- + +# TUI List - DX-Friendly List Output Specialist + +Expert in Atmos list command architecture with deep knowledge of table rendering, column configuration, +filter/sort patterns, and zero-configuration output degradation. + +## Core Responsibilities + +1. **Implement new list commands** - Following established patterns and conventions +2. **Add columns and filters** - Using template-based column system and filter chains +3. **Table rendering** - Theme-aware lipgloss tables with TTY detection +4. **Output format handling** - Table, JSON, YAML, CSV, TSV, tree formats +5. **Dynamic tab completion** - For --columns flag based on atmos.yaml configuration +6. **Pipeline optimization** - Filter → Column → Sort → Format → Output + +## List Command Architecture + +### The Rendering Pipeline + +All list commands follow a consistent 6-stage pipeline: + +``` +1. Data Extraction → Extract structured data ([]map[string]any) +2. Filtering → Apply filters (glob, bool, column value) +3. Column Selection → Extract columns via Go templates +4. Sorting → Multi-column sort with type awareness +5. Formatting → Convert to output format (table/JSON/YAML/CSV/TSV/tree) +6. 
Output → Write to appropriate stream (stdout) +``` + +**Key files:** +- `pkg/list/renderer/renderer.go` - Orchestrates the pipeline +- `pkg/list/column/column.go` - Template-based column extraction +- `pkg/list/filter/filter.go` - Composable filter chains +- `pkg/list/sort/sort.go` - Multi-column sorting with type awareness +- `pkg/list/format/table.go` - Lipgloss table rendering +- `pkg/list/output/output.go` - Stream routing (stdout/stderr) + +### Reference Implementations + +- `cmd/list/stacks.go` - Simple list with filtering +- `cmd/list/workflows.go` - List with file filtering +- `cmd/list/instances.go` - Complex list with upload + +## Flag Patterns + +### Flag Wrapper Functions + +All list commands use named wrapper functions for consistent flag configuration: + +```go +// In cmd/list/flag_wrappers.go +func WithFormatFlag(options *[]flags.Option) // Output format +func WithStacksColumnsFlag(options *[]flags.Option) // Column selection +func WithSortFlag(options *[]flags.Option) // Sort specification +func WithComponentFlag(options *[]flags.Option) // Filter by component +``` + +### Creating Parser for List Command + +```go +var stacksParser *flags.StandardParser + +func init() { + // Compose flags using wrapper functions + stacksParser = NewListParser( + WithFormatFlag, + WithStacksColumnsFlag, + WithSortFlag, + WithComponentFlag, + WithProvenanceFlag, + ) + + // Register flags on command + stacksParser.RegisterFlags(stacksCmd) + + // Register dynamic tab completion for columns + if err := stacksCmd.RegisterFlagCompletionFunc("columns", columnsCompletionForStacks); err != nil { + panic(err) + } + + // Bind to Viper for environment variable support + if err := stacksParser.BindToViper(viper.GetViper()); err != nil { + panic(err) + } +} +``` + +### Common Flags + +**Output:** `--format`, `--columns`, `--delimiter` +**Filter:** `--stack`, `--component`, `--file`, `--filter`, `--enabled`, `--locked`, `--type` +**Sort:** `--sort` (e.g., "stack:asc,component:desc") 
+**Special:** `--provenance`, `--upload` + +## Column Configuration + +### Template-Based Columns + +Columns are defined using Go templates that extract data from structured maps: + +```go +// Column configuration +type Config struct { + Name string `yaml:"name"` // Display header + Value string `yaml:"value"` // Go template for extraction + Width int `yaml:"width"` // Optional width override +} + +// Example: Default columns for stacks +columns := []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + {Name: "Component", Value: "{{ .component }}"}, +} + +// Example: Custom columns with template functions +columns := []column.Config{ + {Name: "Name", Value: "{{ .name | upper }}"}, + {Name: "Region", Value: "{{ .vars.region | default \"us-east-1\" }}"}, + {Name: "Enabled", Value: "{{ .enabled | ternary \"✓\" \"✗\" }}"}, +} +``` + +### Column Configuration Sources (Precedence) + +1. **CLI flag** - `--columns component,stack,region` (highest) +2. **atmos.yaml** - Command-specific configuration +3. 
**Default columns** - Hardcoded in command (lowest) + +```yaml +# atmos.yaml +stacks: + list: + format: table + columns: + - name: Stack + value: "{{ .stack }}" + - name: Component + value: "{{ .component }}" + - name: Region + value: "{{ .vars.region }}" + +workflows: + list: + format: json + columns: + - name: Workflow + value: "{{ .name }}" + - name: File + value: "{{ .file }}" +``` + +### Template Functions + +**Type:** toString, toInt, toBool +**Format:** truncate, pad, upper, lower +**Data:** get, getOr, has +**Collections:** len, join, split +**Conditional:** ternary (e.g., `{{ .enabled | ternary "✓" "✗" }}`) + +### Column Selector + +The column selector pre-parses templates and evaluates them during rendering: + +```go +// Create selector with function map +selector, err := column.NewSelector(columns, column.BuildColumnFuncMap()) +if err != nil { + return fmt.Errorf("error creating column selector: %w", err) +} + +// Extract headers and rows +headers, rows, err := selector.Extract(data) +if err != nil { + return fmt.Errorf("column extraction failed: %w", err) +} +``` + +### Dynamic Tab Completion + +Register tab completion for `--columns` flag that reads from atmos.yaml: + +```go +stacksCmd.RegisterFlagCompletionFunc("columns", columnsCompletionForStacks) +``` + +## Filtering and Sorting + +### Filter Patterns + +Filters use composable interfaces: + +```go +// Filter interface +type Filter interface { + Apply(data interface{}) (interface{}, error) +} + +// Built-in filters +type GlobFilter struct { + Field string + Pattern string +} + +type ColumnValueFilter struct { + Column string + Value string +} + +type BoolFilter struct { + Field string + Value *bool // nil = all, true = enabled only, false = disabled only +} + +// Filter chain (AND logic) +type Chain struct { + filters []Filter +} +``` + +**Creating filters:** + +```go +// Build filters from command options +func buildStackFilters(opts *StacksOptions) []filter.Filter { + var filters []filter.Filter + 
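+    // Illustrative sketch only (not part of the reference code): a stack
+    // glob filter could be appended here, mirroring the Filter Patterns
+    // section above, e.g.:
+    //
+    //   if opts.Stack != "" {
+    //       if f, err := filter.NewGlobFilter("stack", opts.Stack); err == nil {
+    //           filters = append(filters, f)
+    //       }
+    //   }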
+ // Component filter is handled by extraction logic + // Add additional filters here + + return filters +} +``` + +### Sort Patterns + +Multi-column sorting with type awareness: + +```go +// Sort specification format: "column1:asc,column2:desc" +sorters, err := listSort.ParseSortSpec(opts.Sort) + +// Default sort if no specification +if sortSpec == "" { + sorters = []*listSort.Sorter{ + listSort.NewSorter("Stack", listSort.Ascending), + listSort.NewSorter("Component", listSort.Ascending), + } +} + +// Type-aware sorting +sorter := listSort.NewSorter("Count", listSort.Descending). + WithDataType(listSort.Number) +``` + +## Table Rendering + +### TTY Detection and Format Selection + +List commands automatically adapt output based on TTY: + +```go +// TTY detected → Styled table with borders and colors +// Non-TTY (piped) → Plain text (single column) or TSV (multi-column) + +func formatStyledTableOrPlain(headers []string, rows [][]string) string { + term := terminal.New() + isTTY := term.IsTTY(terminal.Stdout) + + if !isTTY { + // Piped/redirected output - plain format for backward compatibility + return formatPlainList(headers, rows) + } + + // Interactive terminal - styled table + return format.CreateStyledTable(headers, rows) +} +``` + +### Table Styling + +Tables use lipgloss with theme-aware colors: + +```go +// Create styled table with automatic width calculation +output := format.CreateStyledTable(headers, rows) + +// Table features: +// - Automatic column width calculation +// - Description column gets flexible space +// - Semantic cell styling (booleans green/red, numbers blue) +// - Inline markdown rendering for Description column +// - Multi-line cell support +``` + +### Column Width Calculation + +Smart width distribution prioritizes readability: + +```go +// Strategy: +// 1. Compact columns (Stack, Component, etc.) - capped at 20 chars +// 2. Description column - gets remaining space (30-60 chars) +// 3. 
All columns - minimum 5 chars + +const ( + MaxColumnWidth = 60 // Maximum width for any column + CompactColumnMaxWidth = 20 // Max for non-Description columns + DescriptionColumnMinWidth = 30 // Min for Description column + MinColumnWidth = 5 // Absolute minimum +) +``` + +### Semantic Cell Styling + +Table cells are automatically styled based on content: + +```go +// Booleans: true (green), false (red) +// Numbers: blue +// Placeholders ({...}, [...]): muted +// Default: standard text color +``` + +## Output Formats + +### Format Types + +```go +const ( + FormatTable Format = "table" // Default: styled table (TTY) or plain (non-TTY) + FormatJSON Format = "json" // JSON array of objects + FormatYAML Format = "yaml" // YAML array of objects + FormatCSV Format = "csv" // Comma-separated values + FormatTSV Format = "tsv" // Tab-separated values + FormatTree Format = "tree" // Hierarchical tree view +) +``` + +### Format Configuration + +Format can be specified via: + +1. **CLI flag** - `--format json` (highest) +2. **atmos.yaml** - Command-specific default +3. **Default** - `table` (lowest) + +```go +// Check command-specific config if flag is empty +if opts.Format == "" && atmosConfig.Stacks.List.Format != "" { + opts.Format = atmosConfig.Stacks.List.Format +} +``` + +### Tree Format Special Handling + +Tree format shows hierarchical import chains: + +```go +if opts.Format == "tree" { + // Enable provenance tracking + atmosConfig.TrackProvenance = true + + // Clear caches for fresh processing + e.ClearMergeContexts() + e.ClearFindStacksMapCache() + + // Re-process with provenance enabled + stacksMap, err = e.ExecuteDescribeStacks(&atmosConfig, ...) 
+ + // Resolve import trees + importTrees, err := l.ResolveImportTreeFromProvenance(stacksMap, &atmosConfig) + + // Render tree view + output := format.RenderStacksTree(importTrees, opts.Provenance) + fmt.Println(output) + return nil +} +``` + +## Output Routing + +### Stream Selection + +All list output goes to **stdout (data channel)** for pipeability: + +```go +// pkg/list/output/output.go +func (m *Manager) Write(content string) error { + // All list formats → stdout (data channel, pipeable) + return data.Write(content) +} +``` + +**Why stdout for all formats:** +- Table format is the "default view" of data +- JSON/YAML are clearly structured data +- CSV/TSV are structured data +- Users expect to pipe list output: `atmos list stacks | grep prod` + +**Status messages go to stderr (UI channel):** + +```go +if len(stacks) == 0 { + _ = ui.Info("No stacks found") + return nil +} +``` + +## Implementation Pattern + +See `cmd/list/stacks.go` for reference implementation with: +1. Options struct with global.Flags +2. Parser with flag wrappers +3. RunE: parse flags → load config → extract data → build filters → render +4. Helpers: getColumns(), buildFilters(), buildSorters() + +## Common Tasks + +### Task: Add New List Command + +1. **Create command file** - `cmd/list/mycommand.go` +2. **Define options struct** - Embed `global.Flags` +3. **Create parser** - Use `NewListParser()` with flag wrappers +4. **Implement RunE** - Parse flags, extract data, render +5. **Add tab completion** - For `--columns` flag +6. **Add helper functions** - Column config, filters, sorters + +### Task: Add Column to Existing List + +1. **Identify data source** - What data field to extract? +2. **Update default columns** - Add to `getXColumns()` function +3. **Update atmos.yaml schema** - Document new column option +4. **Test with templates** - Verify template evaluation works + +### Task: Add Filter to List Command + +1. **Define filter flag** - Create `WithMyFilterFlag()` wrapper +2. 
**Add to parser** - Include in `NewListParser()` call +3. **Implement filter logic** - In `buildMyFilters()` helper +4. **Test filter chain** - Verify AND logic with other filters + +### Task: Add Custom Sort Field + +1. **Identify column name** - Must match column header +2. **Determine data type** - String, Number, or Boolean +3. **Update documentation** - Show sort examples in help text +4. **Test sort parsing** - Verify `ParseSortSpec()` handles it + +## Zero-Configuration Degradation + +List commands automatically adapt to environment: + +**TTY Detection:** +- ✅ TTY → Styled tables with colors and borders +- ✅ Non-TTY (piped) → Plain text (single column) or TSV (multi-column) +- ✅ Respects `--no-color` flag and `NO_COLOR` environment variable + +**Width Adaptation:** +- ✅ Automatic width detection from terminal +- ✅ Smart column distribution (compact vs flexible) +- ✅ Multi-line cell support for long content + +**Format Selection:** +- ✅ CLI flag → atmos.yaml → default (table) +- ✅ Pipeline-friendly: `atmos list stacks | grep prod` works +- ✅ Structured data: `atmos list stacks --format json | jq` + +## Quality Checks + +Before completing list command implementation: + +**Compilation:** +- [ ] `go build ./cmd/list` succeeds +- [ ] `make lint` passes +- [ ] All imports organized correctly + +**Flag Configuration:** +- [ ] Parser created with `NewListParser()` +- [ ] Flags registered in `init()` +- [ ] Bound to Viper for environment variables +- [ ] Tab completion registered for `--columns` + +**Rendering Pipeline:** +- [ ] Data extraction returns `[]map[string]any` +- [ ] Column selector created with `BuildColumnFuncMap()` +- [ ] Filters implement `Filter` interface +- [ ] Sorters use `ParseSortSpec()` or defaults +- [ ] Renderer orchestrates pipeline correctly + +**Output Handling:** +- [ ] All output goes to `data.Write()` (stdout) +- [ ] Status messages use `ui.Info()` (stderr) +- [ ] Empty results show friendly message +- [ ] TTY vs non-TTY handled 
correctly + +**Testing:** +- [ ] Unit tests for filters and sorters +- [ ] Integration tests for full pipeline +- [ ] Golden snapshots for table output +- [ ] Test coverage >80% + +## Anti-Patterns + +❌ DO NOT write output to stderr (use `data.Write()` for all list output) +❌ DO NOT create filters outside the filter package +❌ DO NOT hardcode column widths (use automatic calculation) +❌ DO NOT bypass the renderer pipeline +❌ DO NOT use `fmt.Printf` (use `data.*` or `ui.*`) +❌ DO NOT skip tab completion for `--columns` flag +❌ DO NOT forget to handle empty results + +## Testing Patterns + +**Unit tests:** Column extraction, filter chains, sorter parsing +**Integration tests:** Full pipeline with NewTestKit, verify output format +**Golden snapshots:** Table output for regression testing + +## Agent Coordination + +Coordinate with specialized agents: +- **flag-handler** - Flag parsing, StandardParser, flag wrappers +- **tui-expert** - Table styling, theme integration, lipgloss +- **test-automation-expert** - Test coverage, golden snapshots + +## Resources + +**Core Architecture:** +- `pkg/list/renderer/renderer.go` - Pipeline orchestration +- `pkg/list/column/column.go` - Template-based columns +- `pkg/list/filter/filter.go` - Composable filters +- `pkg/list/sort/sort.go` - Type-aware sorting +- `pkg/list/format/table.go` - Table rendering +- `pkg/list/output/output.go` - Output routing + +**Reference Commands:** +- `cmd/list/stacks.go` - Simple list with filtering +- `cmd/list/workflows.go` - List with file filtering +- `cmd/list/instances.go` - Complex list with upload + +**Flag Patterns:** +- `cmd/list/flag_wrappers.go` - Reusable flag builders +- `cmd/list/flag_wrappers_examples.go` - Usage examples + +**Core Patterns:** +- `CLAUDE.md` - I/O separation, error handling, testing +- `docs/prd/command-registry-pattern.md` - Command architecture +- `docs/prd/flag-handling/unified-flag-parsing.md` - Flag parsing + +## Self-Maintenance + +Monitor key dependencies and 
update when patterns change: +- `pkg/list/renderer/`, `pkg/list/column/`, `pkg/list/format/` +- `cmd/list/flag_wrappers.go` +- `CLAUDE.md` I/O patterns + +Before each invocation, read latest pipeline architecture and check for new patterns. + +## Key Principles + +- Pipeline-friendly: All output to stdout (data channel) +- Consistent flags: Use flag wrappers from `flag_wrappers.go` +- Template-based columns with built-in functions +- Auto-degrade for non-TTY environments +- Smart defaults, tab completion, clear errors diff --git a/.github/workflows/dependency-review.yml b/.github/workflows/dependency-review.yml index 7929a75756..4593dd9254 100644 --- a/.github/workflows/dependency-review.yml +++ b/.github/workflows/dependency-review.yml @@ -34,7 +34,7 @@ jobs: # Allow only permissive licenses # NOTE: GitHub's dependency graph detects Go modules from go.mod automatically # License checking works at the manifest level for go.mod dependencies - allow-licenses: MIT, MIT-0, Apache-2.0, BSD-2-Clause, BSD-2-Clause-Views, BSD-3-Clause, ISC, MPL-2.0, 0BSD, Unlicense, CC0-1.0, CC-BY-3.0, CC-BY-4.0, CC-BY-SA-3.0, Python-2.0, OFL-1.1, LicenseRef-scancode-generic-cla, LicenseRef-scancode-unknown-license-reference, LicenseRef-scancode-unicode + allow-licenses: MIT, MIT-0, Apache-2.0, BSD-2-Clause, BSD-2-Clause-Views, BSD-3-Clause, ISC, MPL-2.0, 0BSD, Unlicense, CC0-1.0, CC-BY-3.0, CC-BY-4.0, CC-BY-SA-3.0, Python-2.0, OFL-1.1, LicenseRef-scancode-generic-cla, LicenseRef-scancode-unknown-license-reference, LicenseRef-scancode-unicode, LicenseRef-scancode-google-patent-license-golang # Fail on moderate or higher severity vulnerabilities fail-on-severity: moderate diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index ca98c3baae..e243a8a5ec 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -48,6 +48,14 @@ repos: files: ^go\.mod$ pass_filenames: false + - id: claude-md-size-check + name: Check CLAUDE.md size limit + description: Ensure CLAUDE.md 
files stay under 40KB size limit + entry: scripts/check-claude-md-size.sh + language: system + files: ^(CLAUDE\.md|\.conductor/.*/CLAUDE\.md)$ + pass_filenames: false + # General file hygiene - repo: https://github.com/pre-commit/pre-commit-hooks diff --git a/NOTICE b/NOTICE index 55b2e4f6d8..e832f37f14 100644 --- a/NOTICE +++ b/NOTICE @@ -107,27 +107,27 @@ APACHE 2.0 LICENSED DEPENDENCIES - github.com/aws/aws-sdk-go-v2/config License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/config/v1.32.3/config/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/config/v1.32.4/config/LICENSE.txt - github.com/aws/aws-sdk-go-v2/credentials License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/credentials/v1.19.3/credentials/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/credentials/v1.19.4/credentials/LICENSE.txt - github.com/aws/aws-sdk-go-v2/feature/ec2/imds License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/feature/ec2/imds/v1.18.15/feature/ec2/imds/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/feature/ec2/imds/v1.18.16/feature/ec2/imds/LICENSE.txt - github.com/aws/aws-sdk-go-v2/feature/s3/manager License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/feature/s3/manager/v1.20.13/feature/s3/manager/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/feature/s3/manager/v1.20.14/feature/s3/manager/LICENSE.txt - github.com/aws/aws-sdk-go-v2/internal/configsources License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/internal/configsources/v1.4.15/internal/configsources/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/internal/configsources/v1.4.16/internal/configsources/LICENSE.txt - github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/internal/endpoints/v2.7.15/internal/endpoints/v2/LICENSE.txt + URL: 
https://github.com/aws/aws-sdk-go-v2/blob/internal/endpoints/v2.7.16/internal/endpoints/v2/LICENSE.txt - github.com/aws/aws-sdk-go-v2/internal/ini License: Apache-2.0 @@ -135,7 +135,7 @@ APACHE 2.0 LICENSED DEPENDENCIES - github.com/aws/aws-sdk-go-v2/internal/v4a License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/internal/v4a/v1.4.15/internal/v4a/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/internal/v4a/v1.4.16/internal/v4a/LICENSE.txt - github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding License: Apache-2.0 @@ -143,19 +143,19 @@ APACHE 2.0 LICENSED DEPENDENCIES - github.com/aws/aws-sdk-go-v2/service/internal/checksum License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/service/internal/checksum/v1.9.6/service/internal/checksum/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/service/internal/checksum/v1.9.7/service/internal/checksum/LICENSE.txt - github.com/aws/aws-sdk-go-v2/service/internal/presigned-url License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/service/internal/presigned-url/v1.13.15/service/internal/presigned-url/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/service/internal/presigned-url/v1.13.16/service/internal/presigned-url/LICENSE.txt - github.com/aws/aws-sdk-go-v2/service/internal/s3shared License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/service/internal/s3shared/v1.19.15/service/internal/s3shared/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/service/internal/s3shared/v1.19.16/service/internal/s3shared/LICENSE.txt - github.com/aws/aws-sdk-go-v2/service/s3 License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/service/s3/v1.93.0/service/s3/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/service/s3/v1.93.1/service/s3/LICENSE.txt - github.com/aws/aws-sdk-go-v2/service/secretsmanager License: Apache-2.0 @@ -163,23 +163,23 @@ APACHE 2.0 LICENSED DEPENDENCIES - 
github.com/aws/aws-sdk-go-v2/service/signin License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/service/signin/v1.0.3/service/signin/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/service/signin/v1.0.4/service/signin/LICENSE.txt - github.com/aws/aws-sdk-go-v2/service/ssm License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/service/ssm/v1.67.5/service/ssm/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/service/ssm/v1.67.6/service/ssm/LICENSE.txt - github.com/aws/aws-sdk-go-v2/service/sso License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/service/sso/v1.30.6/service/sso/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/service/sso/v1.30.7/service/sso/LICENSE.txt - github.com/aws/aws-sdk-go-v2/service/ssooidc License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/service/ssooidc/v1.35.11/service/ssooidc/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/service/ssooidc/v1.35.12/service/ssooidc/LICENSE.txt - github.com/aws/aws-sdk-go-v2/service/sts License: Apache-2.0 - URL: https://github.com/aws/aws-sdk-go-v2/blob/service/sts/v1.41.3/service/sts/LICENSE.txt + URL: https://github.com/aws/aws-sdk-go-v2/blob/service/sts/v1.41.4/service/sts/LICENSE.txt - github.com/aws/smithy-go License: Apache-2.0 @@ -756,23 +756,23 @@ BSD LICENSED DEPENDENCIES - golang.org/x/oauth2 License: BSD-3-Clause - URL: https://cs.opensource.google/go/x/oauth2/+/v0.33.0:LICENSE + URL: https://cs.opensource.google/go/x/oauth2/+/v0.34.0:LICENSE - golang.org/x/sync License: BSD-3-Clause - URL: https://cs.opensource.google/go/x/sync/+/v0.18.0:LICENSE + URL: https://cs.opensource.google/go/x/sync/+/v0.19.0:LICENSE - golang.org/x/sys License: BSD-3-Clause - URL: https://cs.opensource.google/go/x/sys/+/v0.38.0:LICENSE + URL: https://cs.opensource.google/go/x/sys/+/v0.39.0:LICENSE - golang.org/x/term License: BSD-3-Clause - URL: https://cs.opensource.google/go/x/term/+/v0.37.0:LICENSE + URL: 
https://cs.opensource.google/go/x/term/+/v0.38.0:LICENSE - golang.org/x/text License: BSD-3-Clause - URL: https://cs.opensource.google/go/x/text/+/v0.31.0:LICENSE + URL: https://cs.opensource.google/go/x/text/+/v0.32.0:LICENSE - golang.org/x/time/rate License: BSD-3-Clause diff --git a/cmd/list/components.go b/cmd/list/components.go index 71772c432c..7e5878333b 100644 --- a/cmd/list/components.go +++ b/cmd/list/components.go @@ -7,15 +7,20 @@ import ( "github.com/spf13/cobra" "github.com/spf13/viper" + errUtils "github.com/cloudposse/atmos/errors" e "github.com/cloudposse/atmos/internal/exec" "github.com/cloudposse/atmos/pkg/config" "github.com/cloudposse/atmos/pkg/flags" "github.com/cloudposse/atmos/pkg/flags/global" - l "github.com/cloudposse/atmos/pkg/list" + "github.com/cloudposse/atmos/pkg/list/column" + "github.com/cloudposse/atmos/pkg/list/extract" + "github.com/cloudposse/atmos/pkg/list/filter" + "github.com/cloudposse/atmos/pkg/list/format" + "github.com/cloudposse/atmos/pkg/list/renderer" + listSort "github.com/cloudposse/atmos/pkg/list/sort" + perf "github.com/cloudposse/atmos/pkg/perf" "github.com/cloudposse/atmos/pkg/schema" "github.com/cloudposse/atmos/pkg/ui" - "github.com/cloudposse/atmos/pkg/ui/theme" - u "github.com/cloudposse/atmos/pkg/utils" ) var componentsParser *flags.StandardParser @@ -23,81 +28,351 @@ var componentsParser *flags.StandardParser // ComponentsOptions contains parsed flags for the components command. type ComponentsOptions struct { global.Flags - Stack string + Stack string + Type string + Enabled *bool + Locked *bool + Format string + Columns []string + Sort string + Abstract bool } // componentsCmd lists atmos components. 
var componentsCmd = &cobra.Command{ Use: "components", - Short: "List all Atmos components or filter by stack", - Long: "List Atmos components, with options to filter results by specific stacks.", + Short: "List all Atmos components with filtering, sorting, and formatting options", + Long: `List Atmos components with support for filtering by stack, type, enabled/locked status, custom column selection, sorting, and multiple output formats.`, Args: cobra.NoArgs, RunE: func(cmd *cobra.Command, args []string) error { - // Check Atmos configuration + // Check Atmos configuration. if err := checkAtmosConfig(); err != nil { return err } - // Parse flags using StandardParser with Viper precedence + // Parse flags using StandardParser with Viper precedence. v := viper.GetViper() if err := componentsParser.BindFlagsToViper(cmd, v); err != nil { return err } - opts := &ComponentsOptions{ - Flags: flags.ParseGlobalFlags(cmd, v), - Stack: v.GetString("stack"), + // Parse enabled/locked flags as tri-state (*bool). + // nil = unset (show all), true = filter for true, false = filter for false. + // Use cmd.Flags().Changed() instead of v.IsSet() because IsSet returns true + // when a default value is registered, but we only want to filter when + // the user explicitly provided the flag. 
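+		// Illustration of the tri-state semantics: with `--enabled=false` on the
+		// command line, Changed("enabled") reports true and GetBool returns false,
+		// so only disabled components are listed; with the flag omitted entirely,
+		// the pointer below stays nil and no enabled filtering is applied.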
+ var enabledPtr *bool + if cmd.Flags().Changed("enabled") { + val := v.GetBool("enabled") + enabledPtr = &val } - - output, err := listComponentsWithOptions(cmd, opts) - if err != nil { - return err + var lockedPtr *bool + if cmd.Flags().Changed("locked") { + val := v.GetBool("locked") + lockedPtr = &val } - if len(output) == 0 { - ui.Info("No components found") - return nil + opts := &ComponentsOptions{ + Flags: flags.ParseGlobalFlags(cmd, v), + Stack: v.GetString("stack"), + Type: v.GetString("type"), + Enabled: enabledPtr, + Locked: lockedPtr, + Format: v.GetString("format"), + Columns: v.GetStringSlice("columns"), + Sort: v.GetString("sort"), + Abstract: v.GetBool("abstract"), } - u.PrintMessageInColor(strings.Join(output, "\n")+"\n", theme.Colors.Success) - return nil + return listComponentsWithOptions(cmd, args, opts) }, } +// columnsCompletionForComponents provides dynamic tab completion for --columns flag. +// Returns column names from atmos.yaml components.list.columns configuration. +func columnsCompletionForComponents(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { + defer perf.Track(nil, "list.components.columnsCompletionForComponents")() + + // Load atmos configuration with CLI flags. + configAndStacksInfo, err := e.ProcessCommandLineArgs("list", cmd, args, nil) + if err != nil { + return nil, cobra.ShellCompDirectiveNoFileComp + } + + atmosConfig, err := config.InitCliConfig(configAndStacksInfo, false) + if err != nil { + return nil, cobra.ShellCompDirectiveNoFileComp + } + + // Extract column names from atmos.yaml configuration. + if len(atmosConfig.Components.List.Columns) > 0 { + var columnNames []string + for _, col := range atmosConfig.Components.List.Columns { + columnNames = append(columnNames, col.Name) + } + return columnNames, cobra.ShellCompDirectiveNoFileComp + } + + // If no custom columns configured, return empty list. 
+ return nil, cobra.ShellCompDirectiveNoFileComp +} + func init() { - // Create parser with components-specific flags using functional options - componentsParser = flags.NewStandardParser( - flags.WithStringFlag("stack", "s", "", "Filter by stack name or pattern"), - flags.WithEnvVars("stack", "ATMOS_STACK"), + // Create parser with components-specific flags using flag wrappers. + componentsParser = NewListParser( + WithFormatFlag, + WithComponentsColumnsFlag, + WithSortFlag, + WithStackFlag, + WithTypeFlag, + WithEnabledFlag, + WithLockedFlag, + WithAbstractFlag, ) - // Register flags + // Register flags. componentsParser.RegisterFlags(componentsCmd) - // Bind flags to Viper for environment variable support + // Register dynamic tab completion for --columns flag. + if err := componentsCmd.RegisterFlagCompletionFunc("columns", columnsCompletionForComponents); err != nil { + panic(err) + } + + // Bind flags to Viper for environment variable support. if err := componentsParser.BindToViper(viper.GetViper()); err != nil { panic(err) } } -func listComponentsWithOptions(cmd *cobra.Command, opts *ComponentsOptions) ([]string, error) { - configAndStacksInfo := schema.ConfigAndStacksInfo{} +func listComponentsWithOptions(cmd *cobra.Command, args []string, opts *ComponentsOptions) error { + defer perf.Track(nil, "list.components.listComponentsWithOptions")() + + // Initialize configuration and extract components. + atmosConfig, components, err := initAndExtractComponents(cmd, args, opts) + if err != nil { + return err + } + + if len(components) == 0 { + _ = ui.Info("No components found") + return nil + } + + // Build and execute render pipeline. + return renderComponents(atmosConfig, opts, components) +} + +// initAndExtractComponents initializes config and extracts components from stacks. 
+func initAndExtractComponents(cmd *cobra.Command, args []string, opts *ComponentsOptions) (*schema.AtmosConfiguration, []map[string]any, error) {
+	defer perf.Track(nil, "list.components.initAndExtractComponents")()
+
+	// Process command line args to get ConfigAndStacksInfo with CLI flags.
+	configAndStacksInfo, err := e.ProcessCommandLineArgs("list", cmd, args, nil)
+	if err != nil {
+		return nil, nil, err
+	}
+
 	atmosConfig, err := config.InitCliConfig(configAndStacksInfo, true)
 	if err != nil {
-		return nil, fmt.Errorf("error initializing CLI config: %v", err)
+		return nil, nil, fmt.Errorf("%w: %w", errUtils.ErrInitializingCLIConfig, err)
+	}
+
+	// If format is empty, check command-specific config.
+	if opts.Format == "" && atmosConfig.Components.List.Format != "" {
+		opts.Format = atmosConfig.Components.List.Format
 	}
 	// Create AuthManager for authentication support.
 	authManager, err := createAuthManagerForList(cmd, &atmosConfig)
 	if err != nil {
-		return nil, err
+		return nil, nil, err
 	}
 	stacksMap, err := e.ExecuteDescribeStacks(&atmosConfig, "", nil, nil, nil, false, false, false, false, nil, authManager)
 	if err != nil {
-		return nil, fmt.Errorf("error describing stacks: %v", err)
+		return nil, nil, fmt.Errorf("%w: %w", errUtils.ErrExecuteDescribeStacks, err)
+	}
+
+	// Extract components into structured data.
+	components, err := extract.Components(stacksMap)
+	if err != nil {
+		return nil, nil, err
+	}
+
+	return &atmosConfig, components, nil
+}
+
+// renderComponents builds the render pipeline and renders components.
+func renderComponents(atmosConfig *schema.AtmosConfiguration, opts *ComponentsOptions, components []map[string]any) error {
+	defer perf.Track(nil, "list.components.renderComponents")()
+
+	// Build filters.
+	filters := buildComponentFilters(opts)
+
+	// Get column configuration.
+	columns := getComponentColumns(atmosConfig, opts.Columns)
+
+	// Build column selector.
+	selector, err := column.NewSelector(columns, column.BuildColumnFuncMap())
+	if err != nil {
+		return fmt.Errorf("error creating column selector: %w", err)
+	}
+
+	// Build sorters.
+	sorters, err := buildComponentSorters(opts.Sort)
+	if err != nil {
+		return fmt.Errorf("error parsing sort specification: %w", err)
 	}
-	output, err := l.FilterAndListComponents(opts.Stack, stacksMap)
-	return output, err
+	// Create renderer and execute pipeline.
+	outputFormat := format.Format(opts.Format)
+	r := renderer.New(filters, selector, sorters, outputFormat, "")
+
+	return r.Render(components)
+}
+
+// buildComponentFilters creates filters based on command options.
+func buildComponentFilters(opts *ComponentsOptions) []filter.Filter {
+	defer perf.Track(nil, "list.components.buildComponentFilters")()
+
+	var filters []filter.Filter
+
+	// Stack filter (glob pattern).
+	if opts.Stack != "" {
+		globFilter, err := filter.NewGlobFilter("stack", opts.Stack)
+		if err != nil {
+			_ = ui.Warning(fmt.Sprintf("Invalid glob pattern '%s': %v, filter will be ignored", opts.Stack, err))
+		} else {
+			filters = append(filters, globFilter)
+		}
+	}
+
+	// Type filter (authoritative when provided, targets component_type field).
+	if opts.Type != "" && opts.Type != "all" {
+		filters = append(filters, filter.NewColumnFilter("component_type", opts.Type))
+	} else if opts.Type == "" && !opts.Abstract {
+		// Only apply default abstract filter when Type is not set.
+		filters = append(filters, filter.NewColumnFilter("component_type", "real"))
+	}
+
+	// Enabled filter (tri-state: nil = all, true = enabled only, false = disabled only).
+	if opts.Enabled != nil {
+		filters = append(filters, filter.NewBoolFilter("enabled", opts.Enabled))
+	}
+
+	// Locked filter (tri-state: nil = all, true = locked only, false = unlocked only).
+	if opts.Locked != nil {
+		filters = append(filters, filter.NewBoolFilter("locked", opts.Locked))
+	}
+
+	return filters
+}
+
+// getComponentColumns returns column configuration.
+func getComponentColumns(atmosConfig *schema.AtmosConfiguration, columnsFlag []string) []column.Config {
+	defer perf.Track(nil, "list.components.getComponentColumns")()
+
+	// If --columns flag is provided, parse it and return.
+	if len(columnsFlag) > 0 {
+		return parseColumnsFlag(columnsFlag)
+	}
+
+	// Check atmos.yaml for components.list.columns configuration.
+	if len(atmosConfig.Components.List.Columns) > 0 {
+		var configs []column.Config
+		for _, col := range atmosConfig.Components.List.Columns {
+			configs = append(configs, column.Config{
+				Name:  col.Name,
+				Value: col.Value,
+				Width: col.Width,
+			})
+		}
+		return configs
+	}
+
+	// Default columns: show all standard component fields.
+	return []column.Config{
+		{Name: "Component", Value: "{{ .component }}"},
+		{Name: "Stack", Value: "{{ .stack }}"},
+		{Name: "Type", Value: "{{ .type }}"},
+		{Name: "Component Type", Value: "{{ .component_type }}"},
+		{Name: "Enabled", Value: "{{ .enabled }}"},
+		{Name: "Locked", Value: "{{ .locked }}"},
+	}
+}
+
+// buildComponentSorters creates sorters from sort specification.
+func buildComponentSorters(sortSpec string) ([]*listSort.Sorter, error) {
+	defer perf.Track(nil, "list.components.buildComponentSorters")()
+
+	if sortSpec == "" {
+		// Default sort: by component ascending.
+		return []*listSort.Sorter{
+			listSort.NewSorter("Component", listSort.Ascending),
+		}, nil
+	}
+
+	return listSort.ParseSortSpec(sortSpec)
+}
+
+// parseColumnsFlag parses column specifications from CLI flag.
+// Supports two formats:
+//   - Simple field name: "component" → Name: "component", Value: "{{ .component }}"
+//   - Named column with template: "Name=template" → Name: "Name", Value: "template"
+//
+// Examples:
+//
+//	--columns component,stack,type
+//	--columns "Component={{ .component }},Stack={{ .stack }}"
+//	--columns component --columns stack
+func parseColumnsFlag(columnsFlag []string) []column.Config {
+	defer perf.Track(nil, "list.components.parseColumnsFlag")()
+
+	var configs []column.Config
+
+	for _, spec := range columnsFlag {
+		cfg := parseColumnSpec(spec)
+		if cfg.Name != "" {
+			configs = append(configs, cfg)
+		}
+	}
+
+	return configs
+}
+
+// parseColumnSpec parses a single column specification.
+// Format: "name" or "Name=template".
+func parseColumnSpec(spec string) column.Config {
+	defer perf.Track(nil, "list.components.parseColumnSpec")()
+
+	spec = strings.TrimSpace(spec)
+	if spec == "" {
+		return column.Config{}
+	}
+
+	// Check for Name=template format.
+	if idx := strings.Index(spec, "="); idx > 0 {
+		name := strings.TrimSpace(spec[:idx])
+		value := strings.TrimSpace(spec[idx+1:])
+
+		// If value doesn't contain template syntax, wrap it.
+		if !strings.Contains(value, "{{") {
+			value = "{{ ." + value + " }}"
+		}
+
+		return column.Config{
+			Name:  name,
+			Value: value,
+		}
+	}
+
+	// Simple field name: auto-generate template.
+	// Use title case for display name.
+	name := strings.Title(spec) //nolint:staticcheck // strings.Title is deprecated but works for simple ASCII column names
+	value := "{{ ." + spec + " }}"
+
+	return column.Config{
+		Name:  name,
+		Value: value,
+	}
 }
diff --git a/cmd/list/components_test.go b/cmd/list/components_test.go
index b1f5850f21..67f111ef31 100644
--- a/cmd/list/components_test.go
+++ b/cmd/list/components_test.go
@@ -1,4 +1,3 @@
-//nolint:dupl // Test structure similarity is intentional for consistency
 package list
 import (
@@ -6,8 +5,25 @@ import (
 	"github.com/spf13/cobra"
 	"github.com/stretchr/testify/assert"
+
+	"github.com/cloudposse/atmos/pkg/data"
+	iolib "github.com/cloudposse/atmos/pkg/io"
+	"github.com/cloudposse/atmos/pkg/schema"
+	"github.com/cloudposse/atmos/pkg/ui"
 )
+// initTestIO initializes the I/O and UI contexts for testing.
+// This must be called before tests that use renderComponents or similar functions.
+func initTestIO(t *testing.T) {
+	t.Helper()
+	ioCtx, err := iolib.NewContext()
+	if err != nil {
+		t.Fatalf("failed to initialize I/O context: %v", err)
+	}
+	ui.InitFormatter(ioCtx)
+	data.InitWriter(ioCtx)
+}
+
 // TestListComponentsFlags tests that the list components command has the correct flags.
 func TestListComponentsFlags(t *testing.T) {
 	cmd := &cobra.Command{
@@ -118,3 +134,987 @@ func TestComponentsOptions_AllPatterns(t *testing.T) {
 		})
 	}
 }
+
+// TestComponentsOptions_AllFields tests all fields in ComponentsOptions.
+func TestComponentsOptions_AllFields(t *testing.T) {
+	enabledTrue := true
+	enabledFalse := false
+	lockedTrue := true
+	lockedFalse := false
+
+	testCases := []struct {
+		name             string
+		opts             *ComponentsOptions
+		expectedStack    string
+		expectedType     string
+		expectedEnabled  *bool
+		expectedLocked   *bool
+		expectedFormat   string
+		expectedColumns  []string
+		expectedSort     string
+		expectedAbstract bool
+	}{
+		{
+			name: "All fields populated",
+			opts: &ComponentsOptions{
+				Stack:    "prod-*",
+				Type:     "terraform",
+				Enabled:  &enabledTrue,
+				Locked:   &lockedFalse,
+				Format:   "table",
+				Columns:  []string{"component", "stack", "type"},
+				Sort:     "component:asc",
+				Abstract: true,
+			},
+			expectedStack:    "prod-*",
+			expectedType:     "terraform",
+			expectedEnabled:  &enabledTrue,
+			expectedLocked:   &lockedFalse,
+			expectedFormat:   "table",
+			expectedColumns:  []string{"component", "stack", "type"},
+			expectedSort:     "component:asc",
+			expectedAbstract: true,
+		},
+		{
+			name:             "Empty options",
+			opts:             &ComponentsOptions{},
+			expectedStack:    "",
+			expectedType:     "",
+			expectedEnabled:  nil,
+			expectedLocked:   nil,
+			expectedFormat:   "",
+			expectedColumns:  nil,
+			expectedSort:     "",
+			expectedAbstract: false,
+		},
+		{
+			name: "Tri-state booleans - all true",
+			opts: &ComponentsOptions{
+				Enabled: &enabledTrue,
+				Locked:  &lockedTrue,
+			},
+			expectedStack:    "",
+			expectedType:     "",
+			expectedEnabled:  &enabledTrue,
+			expectedLocked:   &lockedTrue,
+			expectedFormat:   "",
+			expectedAbstract: false,
+		},
+		{
+			name: "Tri-state booleans - all false",
+			opts: &ComponentsOptions{
+				Enabled: &enabledFalse,
+				Locked:  &lockedFalse,
+			},
+			expectedStack:    "",
+			expectedType:     "",
+			expectedEnabled:  &enabledFalse,
+			expectedLocked:   &lockedFalse,
+			expectedFormat:   "",
+			expectedAbstract: false,
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			assert.Equal(t, tc.expectedStack, tc.opts.Stack)
+			assert.Equal(t, tc.expectedType, tc.opts.Type)
+
+			// Compare tri-state booleans by value, not pointer identity.
+			if tc.expectedEnabled == nil {
+				assert.Nil(t, tc.opts.Enabled)
+			} else if assert.NotNil(t, tc.opts.Enabled) {
+				assert.Equal(t, *tc.expectedEnabled, *tc.opts.Enabled)
+			}
+
+			if tc.expectedLocked == nil {
+				assert.Nil(t, tc.opts.Locked)
+			} else if assert.NotNil(t, tc.opts.Locked) {
+				assert.Equal(t, *tc.expectedLocked, *tc.opts.Locked)
+			}
+
+			assert.Equal(t, tc.expectedFormat, tc.opts.Format)
+			assert.Equal(t, tc.expectedColumns, tc.opts.Columns)
+			assert.Equal(t, tc.expectedSort, tc.opts.Sort)
+			assert.Equal(t, tc.expectedAbstract, tc.opts.Abstract)
+		})
+	}
+}
+
+// TestBuildComponentFilters tests filter building.
+func TestBuildComponentFilters(t *testing.T) {
+	enabledTrue := true
+	enabledFalse := false
+	lockedTrue := true
+
+	testCases := []struct {
+		name          string
+		opts          *ComponentsOptions
+		expectedCount int
+		description   string
+	}{
+		{
+			name:          "No filters",
+			opts:          &ComponentsOptions{},
+			expectedCount: 1, // Abstract filter (component_type=real) is always added
+			description:   "Default includes abstract filter for real components",
+		},
+		{
+			name: "Stack filter",
+			opts: &ComponentsOptions{
+				Stack: "prod-*",
+			},
+			expectedCount: 2, // Stack filter + abstract filter
+			description:   "Stack glob filter + abstract filter",
+		},
+		{
+			name: "Type filter",
+			opts: &ComponentsOptions{
+				Type: "terraform",
+			},
+			expectedCount: 1, // Type filter only (authoritative)
+			description:   "Type filter is authoritative",
+		},
+		{
+			name: "Enabled filter true",
+			opts: &ComponentsOptions{
+				Enabled: &enabledTrue,
+			},
+			expectedCount: 2, // Enabled filter + abstract filter
+			description:   "Enabled=true filter + abstract filter",
+		},
+		{
+			name: "Enabled filter false",
+			opts: &ComponentsOptions{
+				Enabled: &enabledFalse,
+			},
+			expectedCount: 2, // Enabled filter + abstract filter
+			description:   "Enabled=false filter + abstract filter",
+		},
+		{
+			name: "Locked filter",
+			opts: &ComponentsOptions{
+				Locked: &lockedTrue,
+			},
+			expectedCount: 2, // Locked filter + abstract filter
+			description:   "Locked filter + abstract filter",
+		},
+		{
+			name: "Abstract flag true",
+			opts: &ComponentsOptions{
+				Abstract: true,
+			},
+			expectedCount: 0, // Abstract filter is NOT added when Abstract=true
+			description:   "No filters when showing abstract components",
+		},
+		{
+			name: "All filters combined",
+			opts: &ComponentsOptions{
+				Stack:   "prod-*",
+				Type:    "terraform",
+				Enabled: &enabledTrue,
+				Locked:  &lockedTrue,
+			},
+			expectedCount: 4, // Stack + Type + Enabled + Locked (no abstract filter when Type is set)
+			description:   "All filters combined",
+		},
+		{
+			name: "Type filter with 'all'",
+			opts: &ComponentsOptions{
+				Type: "all",
+			},
+			expectedCount: 0, // Type='all' is not added as filter, and no abstract filter either
+			description:   "Type='all' is ignored and no abstract filter",
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			result := buildComponentFilters(tc.opts)
+			assert.Equal(t, tc.expectedCount, len(result), tc.description)
+		})
+	}
+}
+
+// TestGetComponentColumns tests column configuration logic.
+func TestGetComponentColumns(t *testing.T) {
+	testCases := []struct {
+		name        string
+		atmosConfig *schema.AtmosConfiguration
+		columnsFlag []string
+		expectLen   int
+		expectName  string
+	}{
+		{
+			name: "Default columns",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{},
+				},
+			},
+			columnsFlag: []string{},
+			expectLen:   6, // All standard fields: Component, Stack, Type, Component Type, Enabled, Locked
+			expectName:  "Component",
+		},
+		{
+			name: "Columns from flag",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{},
+				},
+			},
+			columnsFlag: []string{"component", "stack", "type", "enabled"},
+			expectLen:   4, // CLI flag now properly parses column specifications
+			expectName:  "Component",
+		},
+		{
+			name: "Columns from config",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{
+						Columns: []schema.ListColumnConfig{
+							{Name: "Component", Value: "{{ .component }}"},
+							{Name: "Stack", Value: "{{ .stack }}"},
+							{Name: "Type", Value: "{{ .type }}"},
+							{Name: "Enabled", Value: "{{ .enabled }}"},
+						},
+					},
+				},
+			},
+			columnsFlag: []string{},
+			expectLen:   4,
+			expectName:  "Component",
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			result := getComponentColumns(tc.atmosConfig, tc.columnsFlag)
+			assert.Equal(t, tc.expectLen, len(result))
+
+			if tc.expectName != "" && len(result) > 0 {
+				assert.Equal(t, tc.expectName, result[0].Name)
+			}
+		})
+	}
+}
+
+// TestBuildComponentSorters tests sorter building.
+func TestBuildComponentSorters(t *testing.T) {
+	testCases := []struct {
+		name        string
+		sortSpec    string
+		expectLen   int
+		expectError bool
+	}{
+		{
+			name:      "Empty sort (default)",
+			sortSpec:  "",
+			expectLen: 1, // Default sort by component ascending
+		},
+		{
+			name:      "Single sort field ascending",
+			sortSpec:  "component:asc",
+			expectLen: 1,
+		},
+		{
+			name:      "Single sort field descending",
+			sortSpec:  "stack:desc",
+			expectLen: 1,
+		},
+		{
+			name:      "Multiple sort fields",
+			sortSpec:  "type:asc,component:desc",
+			expectLen: 2,
+		},
+		{
+			name:        "Invalid sort spec",
+			sortSpec:    "invalid::spec",
+			expectError: true,
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			result, err := buildComponentSorters(tc.sortSpec)
+
+			if tc.expectError {
+				assert.Error(t, err)
+			} else {
+				assert.NoError(t, err)
+				assert.Equal(t, tc.expectLen, len(result))
+			}
+		})
+	}
+}
+
+// TestParseColumnsFlag tests column flag parsing.
+func TestParseColumnsFlag(t *testing.T) {
+	testCases := []struct {
+		name        string
+		columnsFlag []string
+		expectLen   int
+		expectName  string
+		expectValue string
+	}{
+		{
+			name:        "Empty columns",
+			columnsFlag: []string{},
+			expectLen:   0,
+			expectName:  "",
+			expectValue: "",
+		},
+		{
+			name:        "Single simple column",
+			columnsFlag: []string{"component"},
+			expectLen:   1,
+			expectName:  "Component",
+			expectValue: "{{ .component }}",
+		},
+		{
+			name:        "Multiple simple columns",
+			columnsFlag: []string{"component", "stack", "type"},
+			expectLen:   3,
+			expectName:  "Component",
+			expectValue: "{{ .component }}",
+		},
+		{
+			name:        "Named column with template",
+			columnsFlag: []string{"Name={{ .component }}"},
+			expectLen:   1,
+			expectName:  "Name",
+			expectValue: "{{ .component }}",
+		},
+		{
+			name:        "Named column with simple field",
+			columnsFlag: []string{"MyStack=stack"},
+			expectLen:   1,
+			expectName:  "MyStack",
+			expectValue: "{{ .stack }}",
+		},
+		{
+			name:        "Mixed formats",
+			columnsFlag: []string{"component", "MyType={{ .type }}"},
+			expectLen:   2,
+			expectName:  "Component",
+			expectValue: "{{ .component }}",
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			result := parseColumnsFlag(tc.columnsFlag)
+			assert.Equal(t, tc.expectLen, len(result))
+
+			if tc.expectName != "" && len(result) > 0 {
+				assert.Equal(t, tc.expectName, result[0].Name)
+			}
+			if tc.expectValue != "" && len(result) > 0 {
+				assert.Equal(t, tc.expectValue, result[0].Value)
+			}
+		})
+	}
+}
+
+// TestParseColumnSpec tests parsing individual column specifications.
+func TestParseColumnSpec(t *testing.T) {
+	testCases := []struct {
+		name        string
+		spec        string
+		expectName  string
+		expectValue string
+	}{
+		{
+			name:        "Empty spec",
+			spec:        "",
+			expectName:  "",
+			expectValue: "",
+		},
+		{
+			name:        "Whitespace only",
+			spec:        " ",
+			expectName:  "",
+			expectValue: "",
+		},
+		{
+			name:        "Simple field name",
+			spec:        "component",
+			expectName:  "Component",
+			expectValue: "{{ .component }}",
+		},
+		{
+			name:        "Field with leading/trailing whitespace",
+			spec:        " stack ",
+			expectName:  "Stack",
+			expectValue: "{{ .stack }}",
+		},
+		{
+			name:        "Named column with template",
+			spec:        "MyColumn={{ .component }}",
+			expectName:  "MyColumn",
+			expectValue: "{{ .component }}",
+		},
+		{
+			name:        "Named column with simple field (auto-wrap)",
+			spec:        "MyStack=stack",
+			expectName:  "MyStack",
+			expectValue: "{{ .stack }}",
+		},
+		{
+			name:        "Named column with whitespace",
+			spec:        " Name = {{ .field }} ",
+			expectName:  "Name",
+			expectValue: "{{ .field }}",
+		},
+		{
+			name:        "Complex template",
+			spec:        "Info={{ .component }}-{{ .stack }}",
+			expectName:  "Info",
+			expectValue: "{{ .component }}-{{ .stack }}",
+		},
+		{
+			name:        "Template with function",
+			spec:        "Upper={{ upper .component }}",
+			expectName:  "Upper",
+			expectValue: "{{ upper .component }}",
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			result := parseColumnSpec(tc.spec)
+			assert.Equal(t, tc.expectName, result.Name)
+			assert.Equal(t, tc.expectValue, result.Value)
+		})
+	}
+}
+
+// TestParseColumnsFlag_EdgeCases tests edge cases for column flag parsing.
+func TestParseColumnsFlag_EdgeCases(t *testing.T) {
+	testCases := []struct {
+		name        string
+		columnsFlag []string
+		expectLen   int
+		checkFirst  bool
+		firstName   string
+		firstValue  string
+	}{
+		{
+			name:        "Single empty string in slice",
+			columnsFlag: []string{""},
+			expectLen:   0, // Empty strings are skipped
+		},
+		{
+			name:        "Multiple empty strings",
+			columnsFlag: []string{"", "", ""},
+			expectLen:   0,
+		},
+		{
+			name:        "Mix of empty and valid",
+			columnsFlag: []string{"", "component", ""},
+			expectLen:   1,
+			checkFirst:  true,
+			firstName:   "Component",
+			firstValue:  "{{ .component }}",
+		},
+		{
+			name:        "Underscore field name",
+			columnsFlag: []string{"component_type"},
+			expectLen:   1,
+			checkFirst:  true,
+			firstName:   "Component_type",
+			firstValue:  "{{ .component_type }}",
+		},
+		{
+			name:        "Field with numbers",
+			columnsFlag: []string{"var1"},
+			expectLen:   1,
+			checkFirst:  true,
+			firstName:   "Var1",
+			firstValue:  "{{ .var1 }}",
+		},
+		{
+			name:        "Named column with equals in template",
+			columnsFlag: []string{"Check={{ if eq .enabled true }}yes{{ end }}"},
+			expectLen:   1,
+			checkFirst:  true,
+			firstName:   "Check",
+			firstValue:  "{{ if eq .enabled true }}yes{{ end }}",
+		},
+		{
+			name:        "Multiple named columns",
+			columnsFlag: []string{"A={{ .a }}", "B={{ .b }}", "C={{ .c }}"},
+			expectLen:   3,
+			checkFirst:  true,
+			firstName:   "A",
+			firstValue:  "{{ .a }}",
+		},
+		{
+			name:        "Column name only (equals at end)",
+			columnsFlag: []string{"Name="},
+			expectLen:   1,
+			checkFirst:  true,
+			firstName:   "Name",
+			firstValue:  "{{ . }}",
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			result := parseColumnsFlag(tc.columnsFlag)
+			assert.Equal(t, tc.expectLen, len(result), "Expected %d columns, got %d", tc.expectLen, len(result))
+
+			if tc.checkFirst && len(result) > 0 {
+				assert.Equal(t, tc.firstName, result[0].Name)
+				assert.Equal(t, tc.firstValue, result[0].Value)
+			}
+		})
+	}
+}
+
+// TestParseColumnSpec_SpecialCharacters tests parsing with special characters.
+func TestParseColumnSpec_SpecialCharacters(t *testing.T) {
+	testCases := []struct {
+		name        string
+		spec        string
+		expectName  string
+		expectValue string
+	}{
+		{
+			name:        "Dot in field name",
+			spec:        "vars.region",
+			expectName:  "Vars.Region", // strings.Title capitalizes after dots.
+			expectValue: "{{ .vars.region }}",
+		},
+		{
+			name:        "Hyphen in field name",
+			spec:        "my-field",
+			expectName:  "My-Field", // strings.Title capitalizes after hyphens.
+			expectValue: "{{ .my-field }}",
+		},
+		{
+			name:        "Template with pipe",
+			spec:        "Upper={{ .component | upper }}",
+			expectName:  "Upper",
+			expectValue: "{{ .component | upper }}",
+		},
+		{
+			name:        "Template with multiple pipes",
+			spec:        "Formatted={{ .name | lower | truncate 10 }}",
+			expectName:  "Formatted",
+			expectValue: "{{ .name | lower | truncate 10 }}",
+		},
+		{
+			name:        "Template with conditional",
+			spec:        "Status={{ if .enabled }}on{{ else }}off{{ end }}",
+			expectName:  "Status",
+			expectValue: "{{ if .enabled }}on{{ else }}off{{ end }}",
+		},
+		{
+			name:        "Template with range",
+			spec:        "Items={{ range .items }}{{ . }}{{ end }}",
+			expectName:  "Items",
+			expectValue: "{{ range .items }}{{ . }}{{ end }}",
+		},
+		{
+			name:        "Named column with colon in name",
+			spec:        "Type:Info={{ .type }}",
+			expectName:  "Type:Info",
+			expectValue: "{{ .type }}",
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			result := parseColumnSpec(tc.spec)
+			assert.Equal(t, tc.expectName, result.Name)
+			assert.Equal(t, tc.expectValue, result.Value)
+		})
+	}
+}
+
+// TestParseColumnsFlag_VerifyAllColumns tests that all columns are parsed correctly.
+func TestParseColumnsFlag_VerifyAllColumns(t *testing.T) {
+	columnsFlag := []string{
+		"component",
+		"Stack={{ .stack }}",
+		"MyType=type",
+	}
+
+	result := parseColumnsFlag(columnsFlag)
+	assert.Equal(t, 3, len(result))
+
+	// Check first column (simple field)
+	assert.Equal(t, "Component", result[0].Name)
+	assert.Equal(t, "{{ .component }}", result[0].Value)
+
+	// Check second column (named with template)
+	assert.Equal(t, "Stack", result[1].Name)
+	assert.Equal(t, "{{ .stack }}", result[1].Value)
+
+	// Check third column (named with field)
+	assert.Equal(t, "MyType", result[2].Name)
+	assert.Equal(t, "{{ .type }}", result[2].Value)
+}
+
+// TestColumnsCompletionForComponents tests tab completion for columns flag.
+func TestColumnsCompletionForComponents(t *testing.T) {
+	// This test verifies the function signature and basic behavior.
+	// Full integration testing would require a valid atmos.yaml config.
+	cmd := &cobra.Command{}
+	args := []string{}
+	toComplete := ""
+
+	// Should return empty or error if config cannot be loaded.
+	suggestions, directive := columnsCompletionForComponents(cmd, args, toComplete)
+
+	// Function should return (even if empty) and directive should be NoFileComp.
+	// Suggestions can be nil or empty when config is not available.
+	_ = suggestions // May be nil or empty
+	assert.Equal(t, cobra.ShellCompDirectiveNoFileComp, directive)
+}
+
+// TestRenderComponents tests the renderComponents function with mock data.
+func TestRenderComponents(t *testing.T) {
+	initTestIO(t)
+
+	testCases := []struct {
+		name        string
+		atmosConfig *schema.AtmosConfiguration
+		opts        *ComponentsOptions
+		components  []map[string]any
+		expectError bool
+	}{
+		{
+			name: "Empty components list",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{},
+				},
+			},
+			opts:        &ComponentsOptions{Format: "table"},
+			components:  []map[string]any{},
+			expectError: false,
+		},
+		{
+			name: "Single component with table format",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{},
+				},
+			},
+			opts: &ComponentsOptions{Format: "table"},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod-us-east-1", "type": "terraform"},
+			},
+			expectError: false,
+		},
+		{
+			name: "Multiple components with json format",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{},
+				},
+			},
+			opts: &ComponentsOptions{Format: "json"},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod-us-east-1", "type": "terraform"},
+				{"component": "rds", "stack": "prod-us-east-1", "type": "terraform"},
+				{"component": "eks", "stack": "dev-us-west-2", "type": "terraform"},
+			},
+			expectError: false,
+		},
+		{
+			name: "Components with yaml format",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{},
+				},
+			},
+			opts: &ComponentsOptions{Format: "yaml"},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod", "type": "terraform"},
+			},
+			expectError: false,
+		},
+		{
+			name: "Components with invalid sort spec",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{},
+				},
+			},
+			opts: &ComponentsOptions{
+				Format: "table",
+				Sort:   "invalid::sort::spec",
+			},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod", "type": "terraform"},
+			},
+			expectError: true,
+		},
+		{
+			name: "Components with stack filter",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{},
+				},
+			},
+			opts: &ComponentsOptions{
+				Format: "table",
+				Stack:  "prod-*",
+			},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod-us-east-1", "type": "terraform"},
+				{"component": "rds", "stack": "dev-us-west-2", "type": "terraform"},
+			},
+			expectError: false,
+		},
+		{
+			name: "Components with custom columns from config",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{
+						Columns: []schema.ListColumnConfig{
+							{Name: "Name", Value: "{{ .component }}"},
+							{Name: "Environment", Value: "{{ .stack }}"},
+						},
+					},
+				},
+			},
+			opts: &ComponentsOptions{
+				Format: "table",
+				Sort:   "Name:asc", // Use custom column name for sorting.
+			},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod", "type": "terraform"},
+			},
+			expectError: false,
+		},
+		{
+			name: "Components with sort ascending",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{},
+				},
+			},
+			opts: &ComponentsOptions{
+				Format: "table",
+				Sort:   "component:asc",
+			},
+			components: []map[string]any{
+				{"component": "rds", "stack": "prod", "type": "terraform"},
+				{"component": "eks", "stack": "prod", "type": "terraform"},
+				{"component": "vpc", "stack": "prod", "type": "terraform"},
+			},
+			expectError: false,
+		},
+		{
+			name: "Components with sort descending",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{},
+				},
+			},
+			opts: &ComponentsOptions{
+				Format: "table",
+				Sort:   "component:desc",
+			},
+			components: []map[string]any{
+				{"component": "rds", "stack": "prod", "type": "terraform"},
+				{"component": "eks", "stack": "prod", "type": "terraform"},
+				{"component": "vpc", "stack": "prod", "type": "terraform"},
+			},
+			expectError: false,
+		},
+		{
+			name: "Components with csv format",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{},
+				},
+			},
+			opts: &ComponentsOptions{Format: "csv"},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod-us-east-1", "type": "terraform"},
+				{"component": "rds", "stack": "prod-us-east-1", "type": "terraform"},
+			},
+			expectError: false,
+		},
+		{
+			name: "Components with tsv format",
+			atmosConfig: &schema.AtmosConfiguration{
+				Components: schema.Components{
+					List: schema.ListConfig{},
+				},
+			},
+			opts: &ComponentsOptions{Format: "tsv"},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod", "type": "terraform"},
+			},
+			expectError: false,
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			err := renderComponents(tc.atmosConfig, tc.opts, tc.components)
+
+			if tc.expectError {
+				assert.Error(t, err)
+			} else {
+				assert.NoError(t, err)
+			}
+		})
+	}
+}
+
+// TestRenderComponents_TriStateBoolFilters tests renderComponents with tri-state boolean filters.
+func TestRenderComponents_TriStateBoolFilters(t *testing.T) {
+	initTestIO(t)
+
+	enabledTrue := true
+	enabledFalse := false
+	lockedTrue := true
+	lockedFalse := false
+
+	testCases := []struct {
+		name       string
+		opts       *ComponentsOptions
+		components []map[string]any
+	}{
+		{
+			name: "Filter enabled=true",
+			opts: &ComponentsOptions{
+				Format:  "json",
+				Enabled: &enabledTrue,
+			},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod", "enabled": true},
+				{"component": "rds", "stack": "prod", "enabled": false},
+			},
+		},
+		{
+			name: "Filter enabled=false",
+			opts: &ComponentsOptions{
+				Format:  "json",
+				Enabled: &enabledFalse,
+			},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod", "enabled": true},
+				{"component": "rds", "stack": "prod", "enabled": false},
+			},
+		},
+		{
+			name: "Filter locked=true",
+			opts: &ComponentsOptions{
+				Format: "json",
+				Locked: &lockedTrue,
+			},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod", "locked": true},
+				{"component": "rds", "stack": "prod", "locked": false},
+			},
+		},
+		{
+			name: "Filter locked=false",
+			opts: &ComponentsOptions{
+				Format: "json",
+				Locked: &lockedFalse,
+			},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod", "locked": true},
+				{"component": "rds", "stack": "prod", "locked": false},
+			},
+		},
+		{
+			name: "Combine enabled and locked filters",
+			opts: &ComponentsOptions{
+				Format:  "json",
+				Enabled: &enabledTrue,
+				Locked:  &lockedFalse,
+			},
+			components: []map[string]any{
+				{"component": "vpc", "stack": "prod", "enabled": true, "locked": false},
+				{"component": "rds", "stack": "prod", "enabled": false, "locked": true},
+			},
+		},
+	}
+
+	atmosConfig := &schema.AtmosConfiguration{
+		Components: schema.Components{
+			List: schema.ListConfig{},
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			err := renderComponents(atmosConfig, tc.opts, tc.components)
+			assert.NoError(t, err)
+		})
+	}
+}
+
+// TestRenderComponents_TypeFilter tests renderComponents with type filtering.
+func TestRenderComponents_TypeFilter(t *testing.T) {
+	initTestIO(t)
+
+	testCases := []struct {
+		name       string
+		typeFilter string
+		abstract   bool
+	}{
+		{
+			name:       "Type filter terraform",
+			typeFilter: "terraform",
+			abstract:   false,
+		},
+		{
+			name:       "Type filter helmfile",
+			typeFilter: "helmfile",
+			abstract:   false,
+		},
+		{
+			name:       "Type filter all",
+			typeFilter: "all",
+			abstract:   false,
+		},
+		{
+			name:       "Abstract flag true",
+			typeFilter: "",
+			abstract:   true,
+		},
+		{
+			name:       "Type filter with abstract",
+			typeFilter: "terraform",
+			abstract:   true,
+		},
+	}
+
+	atmosConfig := &schema.AtmosConfiguration{
+		Components: schema.Components{
+			List: schema.ListConfig{},
+		},
+	}
+
+	components := []map[string]any{
+		{"component": "vpc", "stack": "prod", "type": "terraform", "component_type": "real"},
+		{"component": "base-vpc", "stack": "prod", "type": "terraform", "component_type": "abstract"},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			opts := &ComponentsOptions{
+				Format:   "json",
+				Type:     tc.typeFilter,
+				Abstract: tc.abstract,
+			}
+			err := renderComponents(atmosConfig, opts, components)
+			assert.NoError(t, err)
+		})
+	}
+}
+
+// TestInitAndExtractComponents is documented in integration tests.
+// Unit testing with nil command is not meaningful as ProcessCommandLineArgs requires a valid command context.
+// See tests/cli_list_commands_test.go for integration tests that exercise the full command flow.
diff --git a/cmd/list/flag_wrappers.go b/cmd/list/flag_wrappers.go
new file mode 100644
index 0000000000..2df7419584
--- /dev/null
+++ b/cmd/list/flag_wrappers.go
@@ -0,0 +1,315 @@
+package list
+
+import (
+	"github.com/cloudposse/atmos/pkg/flags"
+	"github.com/cloudposse/atmos/pkg/perf"
+)
+
+const (
+	// Flag names.
+	flagColumns = "columns"
+
+	// Environment variables.
+	envListColumns = "ATMOS_LIST_COLUMNS"
+
+	// Flag descriptions.
+ descColumns = "Columns to display (comma-separated, overrides atmos.yaml)" +) + +// Named wrapper functions for list command flags. +// Follow With* naming convention from pkg/flags/ API. +// Each function appends flag options to the provided slice. +// +// Design principles: +// - One function per flag (granular composition) +// - Consistent naming: With{FlagName}Flag +// - Reusable across multiple list commands +// - Each command chooses only the flags it needs +// - Single source of truth for flag configuration + +// WithFormatFlag adds output format flag with environment variable support. +// Used by: components, stacks, workflows, vendor, values, vars, metadata, settings, instances. +func WithFormatFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithFormatFlag")() + + *options = append(*options, + flags.WithStringFlag("format", "f", "", "Output format: table, json, yaml, csv, tsv, tree"), + flags.WithEnvVars("format", "ATMOS_LIST_FORMAT"), + flags.WithValidValues("format", "table", "json", "yaml", "csv", "tsv", "tree"), + ) +} + +// WithDelimiterFlag adds CSV/TSV delimiter flag. +// Used by: workflows, vendor, values, vars, metadata, settings, instances. +func WithDelimiterFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithDelimiterFlag")() + + *options = append(*options, + flags.WithStringFlag("delimiter", "", "", "Delimiter for CSV/TSV output"), + flags.WithEnvVars("delimiter", "ATMOS_LIST_DELIMITER"), + ) +} + +// WithInstancesColumnsFlag adds column selection flag for list instances command. +// Tab completion is registered via RegisterFlagCompletionFunc in the command init. +// Used by: instances. 
+func WithInstancesColumnsFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithInstancesColumnsFlag")() + + *options = append(*options, + flags.WithStringSliceFlag(flagColumns, "", []string{}, descColumns), + flags.WithEnvVars(flagColumns, envListColumns), + ) +} + +// WithMetadataColumnsFlag adds column selection flag for list metadata and components commands. +// Tab completion is registered via RegisterFlagCompletionFunc in the command init. +// Used by: metadata, components. +func WithMetadataColumnsFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithMetadataColumnsFlag")() + + *options = append(*options, + flags.WithStringSliceFlag(flagColumns, "", []string{}, descColumns), + flags.WithEnvVars(flagColumns, envListColumns), + ) +} + +// WithComponentsColumnsFlag adds column selection flag for list components command. +// Components command uses the same columns as metadata. +// Tab completion is registered via RegisterFlagCompletionFunc in the command init. +// Used by: components. +func WithComponentsColumnsFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithComponentsColumnsFlag")() + + // Components share metadata columns. + WithMetadataColumnsFlag(options) +} + +// WithStacksColumnsFlag adds column selection flag for list stacks command. +// Tab completion is registered via RegisterFlagCompletionFunc in the command init. +// Used by: stacks. +func WithStacksColumnsFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithStacksColumnsFlag")() + + *options = append(*options, + flags.WithStringSliceFlag(flagColumns, "", []string{}, descColumns), + flags.WithEnvVars(flagColumns, envListColumns), + ) +} + +// WithWorkflowsColumnsFlag adds column selection flag for list workflows command. +// Tab completion is registered via RegisterFlagCompletionFunc in the command init. +// Used by: workflows. 
+func WithWorkflowsColumnsFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithWorkflowsColumnsFlag")() + + *options = append(*options, + flags.WithStringSliceFlag(flagColumns, "", []string{}, descColumns), + flags.WithEnvVars(flagColumns, envListColumns), + ) +} + +// WithVendorColumnsFlag adds column selection flag for list vendor command. +// Tab completion is registered via RegisterFlagCompletionFunc in the command init. +// Used by: vendor. +func WithVendorColumnsFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithVendorColumnsFlag")() + + *options = append(*options, + flags.WithStringSliceFlag(flagColumns, "", []string{}, descColumns), + flags.WithEnvVars(flagColumns, envListColumns), + ) +} + +// WithStackFlag adds stack filter flag for filtering by stack pattern (glob). +// Used by: components, vendor, values, vars, metadata, settings, instances. +func WithStackFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithStackFlag")() + + *options = append(*options, + flags.WithStringFlag("stack", "s", "", "Filter by stack pattern (glob, e.g., 'plat-*-prod')"), + flags.WithEnvVars("stack", "ATMOS_STACK"), + ) +} + +// WithFilterFlag adds YQ filter expression flag with environment variable support. +// Used by: components, vendor, instances. +func WithFilterFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithFilterFlag")() + + *options = append(*options, + flags.WithStringFlag("filter", "", "", "Filter expression using YQ syntax"), + flags.WithEnvVars("filter", "ATMOS_LIST_FILTER"), + ) +} + +// WithSortFlag adds sort specification flag with environment variable support. +// Format: "column1:asc,column2:desc". +// Used by: components, stacks, workflows, vendor, instances. 
+func WithSortFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithSortFlag")() + + *options = append(*options, + flags.WithStringFlag("sort", "", "", "Sort by column:order (e.g., 'stack:asc,component:desc')"), + flags.WithEnvVars("sort", "ATMOS_LIST_SORT"), + ) +} + +// WithEnabledFlag adds enabled filter flag for filtering by enabled status. +// Nil value = all, true = enabled only, false = disabled only. +// Used by: components. +func WithEnabledFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithEnabledFlag")() + + *options = append(*options, + flags.WithBoolFlag("enabled", "", false, "Filter by enabled status (omit for all, --enabled=true for enabled only)"), + flags.WithEnvVars("enabled", "ATMOS_COMPONENT_ENABLED"), + ) +} + +// WithLockedFlag adds locked filter flag for filtering by locked status. +// Nil value = all, true = locked only, false = unlocked only. +// Used by: components. +func WithLockedFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithLockedFlag")() + + *options = append(*options, + flags.WithBoolFlag("locked", "", false, "Filter by locked status (omit for all, --locked=true for locked only)"), + flags.WithEnvVars("locked", "ATMOS_COMPONENT_LOCKED"), + ) +} + +// WithTypeFlag adds component type filter flag with environment variable support. +// Valid values: "real", "abstract", "all". +// Used by: components. +func WithTypeFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithTypeFlag")() + + *options = append(*options, + flags.WithStringFlag("type", "t", "real", "Component type: real, abstract, all"), + flags.WithEnvVars("type", "ATMOS_COMPONENT_TYPE"), + flags.WithValidValues("type", "real", "abstract", "all"), + ) +} + +// WithComponentFlag adds component filter flag for filtering stacks by component. +// Used by: stacks. 
+func WithComponentFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithComponentFlag")() + + *options = append(*options, + flags.WithStringFlag("component", "c", "", "Filter stacks by component name"), + flags.WithEnvVars("component", "ATMOS_COMPONENT"), + ) +} + +// WithFileFlag adds workflow file filter flag. +// Used by: workflows. +func WithFileFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithFileFlag")() + + *options = append(*options, + flags.WithStringFlag("file", "", "", "Filter workflows by file path"), + flags.WithEnvVars("file", "ATMOS_WORKFLOW_FILE"), + ) +} + +// WithMaxColumnsFlag adds max columns limit flag for values/metadata/settings. +// Used by: values, vars, metadata, settings. +func WithMaxColumnsFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithMaxColumnsFlag")() + + *options = append(*options, + flags.WithIntFlag("max-columns", "", 0, "Maximum number of columns to display (0 = no limit)"), + flags.WithEnvVars("max-columns", "ATMOS_LIST_MAX_COLUMNS"), + ) +} + +// WithQueryFlag adds YQ query expression flag for filtering values. +// Used by: values, vars, metadata, settings. +func WithQueryFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithQueryFlag")() + + *options = append(*options, + flags.WithStringFlag("query", "q", "", "YQ expression to filter values (e.g., '.vars.region')"), + flags.WithEnvVars("query", "ATMOS_LIST_QUERY"), + ) +} + +// WithAbstractFlag adds abstract component inclusion flag. +// Used by: values, vars. +func WithAbstractFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithAbstractFlag")() + + *options = append(*options, + flags.WithBoolFlag("abstract", "", false, "Include abstract components in output"), + flags.WithEnvVars("abstract", "ATMOS_ABSTRACT"), + ) +} + +// WithProcessTemplatesFlag adds template processing flag. +// Used by: values, vars, metadata, settings. 
+func WithProcessTemplatesFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithProcessTemplatesFlag")() + + *options = append(*options, + flags.WithBoolFlag("process-templates", "", true, "Enable/disable Go template processing"), + flags.WithEnvVars("process-templates", "ATMOS_PROCESS_TEMPLATES"), + ) +} + +// WithProcessFunctionsFlag adds template function processing flag. +// Used by: values, vars, metadata, settings. +func WithProcessFunctionsFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithProcessFunctionsFlag")() + + *options = append(*options, + flags.WithBoolFlag("process-functions", "", true, "Enable/disable template function processing"), + flags.WithEnvVars("process-functions", "ATMOS_PROCESS_FUNCTIONS"), + ) +} + +// WithUploadFlag adds upload to Pro API flag. +// Used by: instances. +func WithUploadFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithUploadFlag")() + + *options = append(*options, + flags.WithBoolFlag("upload", "", false, "Upload instances to Atmos Pro API"), + flags.WithEnvVars("upload", "ATMOS_UPLOAD"), + ) +} + +// WithProvenanceFlag adds provenance display flag for tree format. +// Used by: instances, stacks. +func WithProvenanceFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithProvenanceFlag")() + + *options = append(*options, + flags.WithBoolFlag("provenance", "", false, "Show import provenance (only works with --format=tree)"), + flags.WithEnvVars("provenance", "ATMOS_PROVENANCE"), + ) +} + +// NewListParser creates a StandardParser with specified flag builders. +// Each command composes only the flags it needs by passing the appropriate With* functions. 
+//
+// Example:
+//
+//	parser := NewListParser(
+//		WithFormatFlag,
+//		WithStacksColumnsFlag,
+//		WithStackFlag,
+//	)
+func NewListParser(builders ...func(*[]flags.Option)) *flags.StandardParser {
+	defer perf.Track(nil, "list.NewListParser")()
+
+	options := []flags.Option{}
+
+	// Apply each builder function to compose the flag set.
+	for _, builder := range builders {
+		builder(&options)
+	}
+
+	return flags.NewStandardParser(options...)
+}
diff --git a/cmd/list/flag_wrappers_test.go b/cmd/list/flag_wrappers_test.go
new file mode 100644
index 0000000000..b9ced2fd49
--- /dev/null
+++ b/cmd/list/flag_wrappers_test.go
@@ -0,0 +1,478 @@
+package list
+
+import (
+	"testing"
+
+	"github.com/spf13/cobra"
+	"github.com/spf13/viper"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+
+	"github.com/cloudposse/atmos/pkg/flags"
+)
+
+// TestWithFormatFlag verifies format flag registration.
+func TestWithFormatFlag(t *testing.T) {
+	parser := NewListParser(WithFormatFlag)
+	assert.NotNil(t, parser)
+
+	cmd := &cobra.Command{Use: "test"}
+	parser.RegisterFlags(cmd)
+
+	// Verify flag exists
+	flag := cmd.Flags().Lookup("format")
+	require.NotNil(t, flag, "format flag should be registered")
+	assert.Equal(t, "f", flag.Shorthand)
+	assert.Equal(t, "", flag.DefValue)
+	assert.Contains(t, flag.Usage, "Output format")
+}
+
+// TestWithInstancesColumnsFlag verifies columns flag registration for instances.
+func TestWithInstancesColumnsFlag(t *testing.T) {
+	parser := NewListParser(WithInstancesColumnsFlag)
+	assert.NotNil(t, parser)
+
+	cmd := &cobra.Command{Use: "test"}
+	parser.RegisterFlags(cmd)
+
+	// Verify flag exists
+	flag := cmd.Flags().Lookup("columns")
+	require.NotNil(t, flag, "columns flag should be registered")
+	assert.Equal(t, "", flag.Shorthand)
+	assert.Contains(t, flag.Usage, "Columns to display")
+}
+
+// TestWithStackFlag verifies stack flag registration.
+func TestWithStackFlag(t *testing.T) { + parser := NewListParser(WithStackFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("stack") + require.NotNil(t, flag, "stack flag should be registered") + assert.Equal(t, "s", flag.Shorthand) + assert.Contains(t, flag.Usage, "Filter by stack pattern") +} + +// TestWithFilterFlag verifies filter flag registration. +func TestWithFilterFlag(t *testing.T) { + parser := NewListParser(WithFilterFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("filter") + require.NotNil(t, flag, "filter flag should be registered") + assert.Contains(t, flag.Usage, "Filter expression") +} + +// TestWithSortFlag verifies sort flag registration. +func TestWithSortFlag(t *testing.T) { + parser := NewListParser(WithSortFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("sort") + require.NotNil(t, flag, "sort flag should be registered") + assert.Contains(t, flag.Usage, "Sort by column") +} + +// TestWithEnabledFlag verifies enabled flag registration. +func TestWithEnabledFlag(t *testing.T) { + parser := NewListParser(WithEnabledFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("enabled") + require.NotNil(t, flag, "enabled flag should be registered") + assert.Contains(t, flag.Usage, "Filter by enabled") +} + +// TestWithLockedFlag verifies locked flag registration. 
+func TestWithLockedFlag(t *testing.T) { + parser := NewListParser(WithLockedFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("locked") + require.NotNil(t, flag, "locked flag should be registered") + assert.Contains(t, flag.Usage, "Filter by locked") +} + +// TestWithTypeFlag verifies type flag registration. +func TestWithTypeFlag(t *testing.T) { + parser := NewListParser(WithTypeFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("type") + require.NotNil(t, flag, "type flag should be registered") + assert.Equal(t, "t", flag.Shorthand) + assert.Equal(t, "real", flag.DefValue) + assert.Contains(t, flag.Usage, "Component type") +} + +// TestWithComponentFlag verifies component flag registration. +func TestWithComponentFlag(t *testing.T) { + parser := NewListParser(WithComponentFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("component") + require.NotNil(t, flag, "component flag should be registered") + assert.Equal(t, "c", flag.Shorthand) + assert.Contains(t, flag.Usage, "Filter stacks") +} + +// TestWithDelimiterFlag verifies delimiter flag registration. +func TestWithDelimiterFlag(t *testing.T) { + parser := NewListParser(WithDelimiterFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("delimiter") + require.NotNil(t, flag, "delimiter flag should be registered") + assert.Contains(t, flag.Usage, "Delimiter") +} + +// TestWithFileFlag verifies file flag registration. 
+func TestWithFileFlag(t *testing.T) { + parser := NewListParser(WithFileFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("file") + require.NotNil(t, flag, "file flag should be registered") + assert.Contains(t, flag.Usage, "Filter workflows") +} + +// TestWithMaxColumnsFlag verifies max-columns flag registration. +func TestWithMaxColumnsFlag(t *testing.T) { + parser := NewListParser(WithMaxColumnsFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("max-columns") + require.NotNil(t, flag, "max-columns flag should be registered") + assert.Equal(t, "0", flag.DefValue) + assert.Contains(t, flag.Usage, "Maximum number of columns") +} + +// TestWithQueryFlag verifies query flag registration. +func TestWithQueryFlag(t *testing.T) { + parser := NewListParser(WithQueryFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("query") + require.NotNil(t, flag, "query flag should be registered") + assert.Equal(t, "q", flag.Shorthand) + assert.Contains(t, flag.Usage, "YQ expression") +} + +// TestWithAbstractFlag verifies abstract flag registration. +func TestWithAbstractFlag(t *testing.T) { + parser := NewListParser(WithAbstractFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("abstract") + require.NotNil(t, flag, "abstract flag should be registered") + assert.Equal(t, "false", flag.DefValue) + assert.Contains(t, flag.Usage, "Include abstract") +} + +// TestWithProcessTemplatesFlag verifies process-templates flag registration. 
+func TestWithProcessTemplatesFlag(t *testing.T) { + parser := NewListParser(WithProcessTemplatesFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("process-templates") + require.NotNil(t, flag, "process-templates flag should be registered") + assert.Equal(t, "true", flag.DefValue) + assert.Contains(t, flag.Usage, "Go template processing") +} + +// TestWithProcessFunctionsFlag verifies process-functions flag registration. +func TestWithProcessFunctionsFlag(t *testing.T) { + parser := NewListParser(WithProcessFunctionsFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("process-functions") + require.NotNil(t, flag, "process-functions flag should be registered") + assert.Equal(t, "true", flag.DefValue) + assert.Contains(t, flag.Usage, "template function processing") +} + +// TestWithUploadFlag verifies upload flag registration. +func TestWithUploadFlag(t *testing.T) { + parser := NewListParser(WithUploadFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify flag exists + flag := cmd.Flags().Lookup("upload") + require.NotNil(t, flag, "upload flag should be registered") + assert.Equal(t, "false", flag.DefValue) + assert.Contains(t, flag.Usage, "Upload instances") +} + +// TestNewListParser_MultipleFlagsComposition verifies composing multiple flags. 
+func TestNewListParser_MultipleFlagsComposition(t *testing.T) { + // Simulate components command with all relevant flags + parser := NewListParser( + WithFormatFlag, + WithComponentsColumnsFlag, + WithSortFlag, + WithFilterFlag, + WithStackFlag, + WithTypeFlag, + WithEnabledFlag, + WithLockedFlag, + ) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify all flags are registered + flags := []string{"format", "columns", "sort", "filter", "stack", "type", "enabled", "locked"} + for _, flagName := range flags { + flag := cmd.Flags().Lookup(flagName) + assert.NotNil(t, flag, "flag %s should be registered", flagName) + } +} + +// TestNewListParser_SelectiveFlagComposition verifies each command composes only needed flags. +func TestNewListParser_SelectiveFlagComposition(t *testing.T) { + tests := []struct { + name string + builders []func(*[]flags.Option) + expectedFlags []string + missingFlags []string + }{ + { + name: "components command", + builders: []func(*[]flags.Option){ + WithFormatFlag, + WithComponentsColumnsFlag, + WithSortFlag, + WithFilterFlag, + WithStackFlag, + WithTypeFlag, + WithEnabledFlag, + WithLockedFlag, + }, + expectedFlags: []string{"format", "columns", "sort", "filter", "stack", "type", "enabled", "locked"}, + missingFlags: []string{"component", "file", "max-columns", "query", "upload"}, + }, + { + name: "stacks command", + builders: []func(*[]flags.Option){ + WithFormatFlag, + WithStacksColumnsFlag, + WithSortFlag, + WithComponentFlag, + }, + expectedFlags: []string{"format", "columns", "sort", "component"}, + missingFlags: []string{"stack", "filter", "type", "enabled", "locked"}, + }, + { + name: "workflows command", + builders: []func(*[]flags.Option){ + WithFormatFlag, + WithDelimiterFlag, + WithWorkflowsColumnsFlag, + WithSortFlag, + WithFileFlag, + }, + expectedFlags: []string{"format", "delimiter", "columns", "sort", "file"}, + missingFlags: []string{"stack", "filter", "component"}, + 
}, + { + name: "values command", + builders: []func(*[]flags.Option){ + WithFormatFlag, + WithDelimiterFlag, + WithMaxColumnsFlag, + WithQueryFlag, + WithStackFlag, + WithAbstractFlag, + WithProcessTemplatesFlag, + WithProcessFunctionsFlag, + }, + expectedFlags: []string{"format", "delimiter", "max-columns", "query", "stack", "abstract", "process-templates", "process-functions"}, + missingFlags: []string{"columns", "sort", "filter", "component"}, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + parser := NewListParser(tt.builders...) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify expected flags are present + for _, flagName := range tt.expectedFlags { + flag := cmd.Flags().Lookup(flagName) + assert.NotNil(t, flag, "flag %s should be registered for %s", flagName, tt.name) + } + + // Verify flags that shouldn't be present are absent + for _, flagName := range tt.missingFlags { + flag := cmd.Flags().Lookup(flagName) + assert.Nil(t, flag, "flag %s should NOT be registered for %s", flagName, tt.name) + } + }) + } +} + +// TestFlagEnvironmentVariableBinding verifies environment variable bindings. 
+func TestFlagEnvironmentVariableBinding(t *testing.T) { + tests := []struct { + name string + builder func(*[]flags.Option) + flagName string + envVarName string + }{ + {"format", WithFormatFlag, "format", "ATMOS_LIST_FORMAT"}, + {"columns", WithInstancesColumnsFlag, "columns", "ATMOS_LIST_COLUMNS"}, + {"sort", WithSortFlag, "sort", "ATMOS_LIST_SORT"}, + {"filter", WithFilterFlag, "filter", "ATMOS_LIST_FILTER"}, + {"stack", WithStackFlag, "stack", "ATMOS_STACK"}, + {"type", WithTypeFlag, "type", "ATMOS_COMPONENT_TYPE"}, + {"enabled", WithEnabledFlag, "enabled", "ATMOS_COMPONENT_ENABLED"}, + {"locked", WithLockedFlag, "locked", "ATMOS_COMPONENT_LOCKED"}, + {"component", WithComponentFlag, "component", "ATMOS_COMPONENT"}, + {"delimiter", WithDelimiterFlag, "delimiter", "ATMOS_LIST_DELIMITER"}, + {"query", WithQueryFlag, "query", "ATMOS_LIST_QUERY"}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + v := viper.New() + parser := NewListParser(tt.builder) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + err := parser.BindToViper(v) + require.NoError(t, err, "binding to viper should not fail") + + // Environment variable binding is tested via Viper integration + // The actual binding happens when BindToViper is called + }) + } +} + +// TestFlagDefaultValues verifies default values for flags. 
+func TestFlagDefaultValues(t *testing.T) { + tests := []struct { + name string + builder func(*[]flags.Option) + flagName string + defaultValue string + }{ + {"format empty", WithFormatFlag, "format", ""}, + {"type real", WithTypeFlag, "type", "real"}, + {"enabled false", WithEnabledFlag, "enabled", "false"}, + {"locked false", WithLockedFlag, "locked", "false"}, + {"abstract false", WithAbstractFlag, "abstract", "false"}, + {"process-templates true", WithProcessTemplatesFlag, "process-templates", "true"}, + {"process-functions true", WithProcessFunctionsFlag, "process-functions", "true"}, + {"upload false", WithUploadFlag, "upload", "false"}, + {"max-columns zero", WithMaxColumnsFlag, "max-columns", "0"}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + parser := NewListParser(tt.builder) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + flag := cmd.Flags().Lookup(tt.flagName) + require.NotNil(t, flag, "flag %s should exist", tt.flagName) + assert.Equal(t, tt.defaultValue, flag.DefValue, "default value mismatch for %s", tt.flagName) + }) + } +} + +// TestFlagShorthands verifies shorthand flags are registered correctly. 
+func TestFlagShorthands(t *testing.T) { + tests := []struct { + name string + builder func(*[]flags.Option) + flagName string + shorthand string + }{ + {"format -f", WithFormatFlag, "format", "f"}, + {"stack -s", WithStackFlag, "stack", "s"}, + {"type -t", WithTypeFlag, "type", "t"}, + {"component -c", WithComponentFlag, "component", "c"}, + {"query -q", WithQueryFlag, "query", "q"}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + parser := NewListParser(tt.builder) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + flag := cmd.Flags().Lookup(tt.flagName) + require.NotNil(t, flag, "flag %s should exist", tt.flagName) + assert.Equal(t, tt.shorthand, flag.Shorthand, "shorthand mismatch for %s", tt.flagName) + }) + } +} diff --git a/cmd/list/instances.go b/cmd/list/instances.go index 97d6ccfbda..e7fa009ff7 100644 --- a/cmd/list/instances.go +++ b/cmd/list/instances.go @@ -1,16 +1,18 @@ package list import ( + "fmt" + "github.com/spf13/cobra" "github.com/spf13/viper" + errUtils "github.com/cloudposse/atmos/errors" e "github.com/cloudposse/atmos/internal/exec" - "github.com/cloudposse/atmos/pkg/auth" cfg "github.com/cloudposse/atmos/pkg/config" "github.com/cloudposse/atmos/pkg/flags" "github.com/cloudposse/atmos/pkg/flags/global" "github.com/cloudposse/atmos/pkg/list" - "github.com/cloudposse/atmos/pkg/schema" + "github.com/cloudposse/atmos/pkg/list/format" ) var instancesParser *flags.StandardParser @@ -19,11 +21,15 @@ var instancesParser *flags.StandardParser type InstancesOptions struct { global.Flags Format string + Columns []string MaxColumns int Delimiter string Stack string + Filter string Query string + Sort string Upload bool + Provenance bool } // instancesCmd lists atmos instances. 
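The `Enabled`/`Locked` options exercised by the tri-state tests above are `*bool` fields, so an omitted flag (nil pointer) is distinguishable from an explicit `--enabled=false`. A minimal sketch of that filtering logic, using a hypothetical `filterByBool` helper (the real implementation lives in `pkg/list/filter`):

```go
package main

import "fmt"

// filterByBool keeps rows whose boolean field matches want.
// A nil want means "flag omitted": every row passes, mirroring
// the --enabled/--locked tri-state semantics (nil = all,
// true = matching only, false = non-matching only).
func filterByBool(rows []map[string]any, field string, want *bool) []map[string]any {
	if want == nil {
		return rows
	}
	var out []map[string]any
	for _, r := range rows {
		if v, ok := r[field].(bool); ok && v == *want {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	rows := []map[string]any{
		{"component": "vpc", "enabled": true},
		{"component": "rds", "enabled": false},
	}
	enabled := true
	fmt.Println(len(filterByBool(rows, "enabled", nil)))      // flag omitted: 2 rows
	fmt.Println(len(filterByBool(rows, "enabled", &enabled))) // --enabled=true: 1 row
}
```

In the CLI layer, the pointer is typically populated only when `cmd.Flags().Changed("enabled")` reports the flag was set, which is what makes the three states observable despite cobra's `false` default.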
@@ -48,34 +54,83 @@ var instancesCmd = &cobra.Command{ opts := &InstancesOptions{ Flags: flags.ParseGlobalFlags(cmd, v), Format: v.GetString("format"), + Columns: v.GetStringSlice("columns"), MaxColumns: v.GetInt("max-columns"), Delimiter: v.GetString("delimiter"), Stack: v.GetString("stack"), + Filter: v.GetString("filter"), Query: v.GetString("query"), + Sort: v.GetString("sort"), Upload: v.GetBool("upload"), + Provenance: v.GetBool("provenance"), } return executeListInstancesCmd(cmd, args, opts) }, } +// columnsCompletionForInstances provides dynamic tab completion for --columns flag. +// Returns column names from atmos.yaml components.list.columns configuration. +func columnsCompletionForInstances(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { + // Load atmos configuration. + configAndStacksInfo, err := e.ProcessCommandLineArgs("list", cmd, args, nil) + if err != nil { + return nil, cobra.ShellCompDirectiveNoFileComp + } + + atmosConfig, err := cfg.InitCliConfig(configAndStacksInfo, false) + if err != nil { + return nil, cobra.ShellCompDirectiveNoFileComp + } + + // Extract column names from atmos.yaml configuration. + if len(atmosConfig.Components.List.Columns) > 0 { + var columnNames []string + for _, col := range atmosConfig.Components.List.Columns { + columnNames = append(columnNames, col.Name) + } + return columnNames, cobra.ShellCompDirectiveNoFileComp + } + + // If no custom columns configured, return empty list. + return nil, cobra.ShellCompDirectiveNoFileComp +} + func init() { - // Create parser with common list flags plus upload flag - instancesParser = newCommonListParser( - flags.WithBoolFlag("upload", "", false, "Upload instances to pro API"), - flags.WithEnvVars("upload", "ATMOS_LIST_UPLOAD"), + // Create parser using flag wrappers. 
+	instancesParser = NewListParser(
+		WithFormatFlag,
+		WithInstancesColumnsFlag,
+		WithDelimiterFlag,
+		WithMaxColumnsFlag,
+		WithStackFlag,
+		WithFilterFlag,
+		WithQueryFlag,
+		WithSortFlag,
+		WithUploadFlag,
+		WithProvenanceFlag,
 	)
 
-	// Register flags
+	// Register flags.
 	instancesParser.RegisterFlags(instancesCmd)
 
-	// Bind flags to Viper for environment variable support
+	// Register dynamic tab completion for --columns flag.
+	if err := instancesCmd.RegisterFlagCompletionFunc("columns", columnsCompletionForInstances); err != nil {
+		panic(err)
+	}
+
+	// Bind flags to Viper for environment variable support.
 	if err := instancesParser.BindToViper(viper.GetViper()); err != nil {
 		panic(err)
 	}
 }
 
 func executeListInstancesCmd(cmd *cobra.Command, args []string, opts *InstancesOptions) error {
+	// Validate that --provenance only works with --format=tree.
+	if opts.Provenance && opts.Format != string(format.FormatTree) {
+		return fmt.Errorf("%w: --provenance flag only works with --format=tree", errUtils.ErrInvalidFlag)
+	}
+
 	// Process and validate command line arguments.
 	configAndStacksInfo, err := e.ProcessCommandLineArgs("list", cmd, args, nil)
 	if err != nil {
@@ -84,25 +139,28 @@ func executeListInstancesCmd(cmd *cobra.Command, args []string, opts *InstancesO
 	configAndStacksInfo.Command = "list"
 	configAndStacksInfo.SubCommand = "instances"
 
-	// Load atmos configuration to get auth config.
-	atmosConfig, err := cfg.InitCliConfig(schema.ConfigAndStacksInfo{}, false)
+	// Initialize config to create auth manager.
+	atmosConfig, err := cfg.InitCliConfig(configAndStacksInfo, true)
 	if err != nil {
 		return err
 	}
 
-	// Get identity from --identity flag or ATMOS_IDENTITY env var using shared helper.
-	identityName := getIdentityFromCommand(cmd)
-
-	// Create AuthManager with stack-level default identity loading.
-	authManager, err := auth.CreateAndAuthenticateManagerWithAtmosConfig(
-		identityName,
-		&atmosConfig.Auth,
-		cfg.IdentityFlagSelectValue,
-		&atmosConfig,
-	)
+	// Create AuthManager for authentication support.
+	authManager, err := createAuthManagerForList(cmd, &atmosConfig)
 	if err != nil {
 		return err
 	}
 
-	return list.ExecuteListInstancesCmd(&configAndStacksInfo, cmd, args, authManager)
+	return list.ExecuteListInstancesCmd(&list.InstancesCommandOptions{
+		Info:        &configAndStacksInfo,
+		Cmd:         cmd,
+		Args:        args,
+		ShowImports: opts.Provenance,
+		ColumnsFlag: opts.Columns,
+		FilterSpec:  opts.Filter,
+		SortSpec:    opts.Sort,
+		Delimiter:   opts.Delimiter,
+		Query:       opts.Query,
+		AuthManager: authManager,
+	})
 }
diff --git a/cmd/list/list_test.go b/cmd/list/list_test.go
index 2529696478..c61bd92113 100644
--- a/cmd/list/list_test.go
+++ b/cmd/list/list_test.go
@@ -5,6 +5,7 @@ import (
 
 	"github.com/spf13/cobra"
 	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
 
 	l "github.com/cloudposse/atmos/pkg/list"
 	"github.com/cloudposse/atmos/pkg/list/errors"
@@ -242,3 +243,38 @@ func TestListCmds_NoResults(t *testing.T) {
 		assert.Nil(t, output, "Expected nil output when no matching stacks exist")
 	})
 }
+
+// TestListCommandProvider tests the ListCommandProvider interface methods.
+func TestListCommandProvider(t *testing.T) {
+	provider := &ListCommandProvider{}
+
+	t.Run("GetCommand returns listCmd", func(t *testing.T) {
+		cmd := provider.GetCommand()
+		require.NotNil(t, cmd)
+		assert.Equal(t, "list", cmd.Use)
+	})
+
+	t.Run("GetName returns list", func(t *testing.T) {
+		assert.Equal(t, "list", provider.GetName())
+	})
+
+	t.Run("GetGroup returns Stack Introspection", func(t *testing.T) {
+		assert.Equal(t, "Stack Introspection", provider.GetGroup())
+	})
+
+	t.Run("GetFlagsBuilder returns nil", func(t *testing.T) {
+		assert.Nil(t, provider.GetFlagsBuilder())
+	})
+
+	t.Run("GetPositionalArgsBuilder returns nil", func(t *testing.T) {
+		assert.Nil(t, provider.GetPositionalArgsBuilder())
+	})
+
+	t.Run("GetCompatibilityFlags returns nil", func(t *testing.T) {
+		assert.Nil(t, provider.GetCompatibilityFlags())
+	})
+
+	t.Run("GetAliases returns nil", func(t *testing.T) {
+		assert.Nil(t, provider.GetAliases())
+	})
+}
diff --git a/cmd/list/metadata.go b/cmd/list/metadata.go
index 325e9c5dc2..43b81b9c66 100644
--- a/cmd/list/metadata.go
+++ b/cmd/list/metadata.go
@@ -1,16 +1,14 @@
 package list
 
 import (
-	log "github.com/cloudposse/atmos/pkg/logger"
 	"github.com/spf13/cobra"
 	"github.com/spf13/viper"
 
 	e "github.com/cloudposse/atmos/internal/exec"
+	cfg "github.com/cloudposse/atmos/pkg/config"
 	"github.com/cloudposse/atmos/pkg/flags"
 	"github.com/cloudposse/atmos/pkg/flags/global"
-	l "github.com/cloudposse/atmos/pkg/list"
-	listerrors "github.com/cloudposse/atmos/pkg/list/errors"
-	utils "github.com/cloudposse/atmos/pkg/utils"
+	"github.com/cloudposse/atmos/pkg/list"
 )
 
 var metadataParser *flags.StandardParser
@@ -18,141 +16,131 @@ var metadataParser *flags.StandardParser
 
 // MetadataOptions contains parsed flags for the metadata command.
 type MetadataOptions struct {
 	global.Flags
-	Format           string
-	MaxColumns       int
-	Delimiter        string
-	Stack            string
-	Query            string
-	ProcessTemplates bool
-	ProcessFunctions bool
+	Format  string
+	Stack   string
+	Columns []string
+	Sort    string
+	Filter  string
 }
 
 // metadataCmd lists metadata across stacks.
 var metadataCmd = &cobra.Command{
-	Use:   "metadata [component]",
+	Use:   "metadata",
 	Short: "List metadata across stacks",
-	Long:  "List metadata information across all stacks or for a specific component",
+	Long:  "List metadata information across all stacks with customizable columns",
 	Example: "atmos list metadata\n" +
-		"atmos list metadata c1\n" +
-		"atmos list metadata --query .component\n" +
 		"atmos list metadata --format json\n" +
-		"atmos list metadata --stack '*-{dev,staging}-*'\n" +
-		"atmos list metadata --stack 'prod-*'",
+		"atmos list metadata --stack 'plat-*-prod'\n" +
+		"atmos list metadata --columns stack,component,type,enabled\n" +
+		"atmos list metadata --sort stack:asc,component:desc\n" +
+		"atmos list metadata --filter '.enabled == true'",
+	Args: cobra.NoArgs,
 	RunE: func(cmd *cobra.Command, args []string) error {
+		// Check Atmos configuration.
 		if err := checkAtmosConfig(); err != nil {
 			return err
 		}
 
-		// Parse flags using StandardParser with Viper precedence
+		// Parse flags using StandardParser with Viper precedence.
 		v := viper.GetViper()
 		if err := metadataParser.BindFlagsToViper(cmd, v); err != nil {
 			return err
 		}
 
 		opts := &MetadataOptions{
-			Flags:            flags.ParseGlobalFlags(cmd, v),
-			Format:           v.GetString("format"),
-			MaxColumns:       v.GetInt("max-columns"),
-			Delimiter:        v.GetString("delimiter"),
-			Stack:            v.GetString("stack"),
-			Query:            v.GetString("query"),
-			ProcessTemplates: v.GetBool("process-templates"),
-			ProcessFunctions: v.GetBool("process-functions"),
+			Flags:   flags.ParseGlobalFlags(cmd, v),
+			Format:  v.GetString("format"),
+			Stack:   v.GetString("stack"),
+			Columns: v.GetStringSlice("columns"),
+			Sort:    v.GetString("sort"),
+			Filter:  v.GetString("filter"),
 		}
 
-		output, err := listMetadataWithOptions(cmd, opts, args)
-		if err != nil {
-			return err
+		return executeListMetadataCmd(cmd, args, opts)
+	},
+}
+
+// columnsCompletionForMetadata provides dynamic tab completion for --columns flag.
+// Returns column names from atmos.yaml components.list.columns configuration.
+func columnsCompletionForMetadata(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
+	// Load atmos configuration.
+	configAndStacksInfo, err := e.ProcessCommandLineArgs("list", cmd, args, nil)
+	if err != nil {
+		return nil, cobra.ShellCompDirectiveNoFileComp
+	}
+
+	atmosConfig, err := cfg.InitCliConfig(configAndStacksInfo, false)
+	if err != nil {
+		return nil, cobra.ShellCompDirectiveNoFileComp
+	}
+
+	// Extract column names from atmos.yaml configuration.
+	if len(atmosConfig.Components.List.Columns) > 0 {
+		var columnNames []string
+		for _, col := range atmosConfig.Components.List.Columns {
+			columnNames = append(columnNames, col.Name)
 		}
+		return columnNames, cobra.ShellCompDirectiveNoFileComp
+	}
 
-		utils.PrintMessage(output)
-		return nil
-	},
+	// If no custom columns configured, return empty list.
+	return nil, cobra.ShellCompDirectiveNoFileComp
 }
 
 func init() {
-	// Create parser with common list flags plus processing flags
-	metadataParser = newCommonListParser(
-		flags.WithBoolFlag("process-templates", "", true, "Enable/disable Go template processing in Atmos stack manifests when executing the command"),
-		flags.WithBoolFlag("process-functions", "", true, "Enable/disable YAML functions processing in Atmos stack manifests when executing the command"),
-		flags.WithEnvVars("process-templates", "ATMOS_PROCESS_TEMPLATES"),
-		flags.WithEnvVars("process-functions", "ATMOS_PROCESS_FUNCTIONS"),
+	// Create parser using flag wrappers.
+	metadataParser = NewListParser(
+		WithFormatFlag,
+		WithStackFlag,
+		WithMetadataColumnsFlag,
+		WithSortFlag,
+		WithFilterFlag,
 	)
 
-	// Register flags
+	// Register flags.
 	metadataParser.RegisterFlags(metadataCmd)
 
-	// Add stack completion
-	addStackCompletion(metadataCmd)
-
-	// Bind flags to Viper for environment variable support
-	if err := metadataParser.BindToViper(viper.GetViper()); err != nil {
+	// Register dynamic tab completion for --columns flag.
+	if err := metadataCmd.RegisterFlagCompletionFunc("columns", columnsCompletionForMetadata); err != nil {
 		panic(err)
 	}
-}
-
-// setupMetadataOptions sets up the filter options for metadata listing.
-func setupMetadataOptions(opts *MetadataOptions, componentFilter string) *l.FilterOptions {
-	query := opts.Query
-	if query == "" {
-		query = ".metadata"
-	}
-	return &l.FilterOptions{
-		Component:       l.KeyMetadata,
-		ComponentFilter: componentFilter,
-		Query:           query,
-		IncludeAbstract: false,
-		MaxColumns:      opts.MaxColumns,
-		FormatStr:       opts.Format,
-		Delimiter:       opts.Delimiter,
-		StackPattern:    opts.Stack,
-	}
-}
-
-// logNoMetadataFoundMessage logs an appropriate message when no metadata is found.
-func logNoMetadataFoundMessage(componentFilter string) {
-	if componentFilter != "" {
-		log.Info("No metadata found", "component", componentFilter)
-	} else {
-		log.Info("No metadata found")
+	// Bind flags to Viper for environment variable support.
+	if err := metadataParser.BindToViper(viper.GetViper()); err != nil {
+		panic(err)
 	}
 }
 
-func listMetadataWithOptions(cmd *cobra.Command, opts *MetadataOptions, args []string) (string, error) {
-	// Set default delimiter for CSV.
-	setDefaultCSVDelimiter(&opts.Delimiter, opts.Format)
-
-	componentFilter := getComponentFilter(args)
-
-	// Initialize CLI config and auth manager.
-	atmosConfig, authManager, err := initConfigAndAuth(cmd)
+func executeListMetadataCmd(cmd *cobra.Command, args []string, opts *MetadataOptions) error {
+	// Process and validate command line arguments.
+	configAndStacksInfo, err := e.ProcessCommandLineArgs("list", cmd, args, nil)
 	if err != nil {
-		return "", err
+		return err
 	}
+	configAndStacksInfo.Command = "list"
+	configAndStacksInfo.SubCommand = "metadata"
 
-	// Validate component exists if filter is specified.
-	if err := validateComponentFilter(&atmosConfig, componentFilter); err != nil {
-		return "", err
+	// Initialize config to create auth manager.
+	atmosConfig, err := cfg.InitCliConfig(configAndStacksInfo, true)
+	if err != nil {
+		return err
 	}
 
-	// Get all stacks.
-	stacksMap, err := e.ExecuteDescribeStacks(&atmosConfig, "", nil, nil, nil, false,
-		opts.ProcessTemplates, opts.ProcessFunctions, false, nil, authManager)
+	// Create AuthManager for authentication support.
+	authManager, err := createAuthManagerForList(cmd, &atmosConfig)
 	if err != nil {
-		return "", &listerrors.DescribeStacksError{Cause: err}
+		return err
 	}
 
-	log.Debug("Filtering metadata",
-		"component", componentFilter, "query", opts.Query,
-		"maxColumns", opts.MaxColumns, "format", opts.Format,
-		"stackPattern", opts.Stack, "templates", opts.ProcessTemplates)
-
-	filterOptions := setupMetadataOptions(opts, componentFilter)
-	output, err := l.FilterAndListValues(stacksMap, filterOptions)
-	if err != nil {
-		return handleNoValuesError(err, componentFilter, logNoMetadataFoundMessage)
+	// Convert cmd-level options to pkg-level options.
+	pkgOpts := &list.MetadataOptions{
+		Format:      opts.Format,
+		Columns:     opts.Columns,
+		Sort:        opts.Sort,
+		Filter:      opts.Filter,
+		Stack:       opts.Stack,
+		AuthManager: authManager,
 	}
 
-	return output, nil
+	return list.ExecuteListMetadataCmd(&configAndStacksInfo, cmd, args, pkgOpts)
 }
diff --git a/cmd/list/metadata_test.go b/cmd/list/metadata_test.go
deleted file mode 100644
index 9892b568f2..0000000000
--- a/cmd/list/metadata_test.go
+++ /dev/null
@@ -1,297 +0,0 @@
-package list
-
-import (
-	"testing"
-
-	"github.com/spf13/cobra"
-	"github.com/stretchr/testify/assert"
-
-	l "github.com/cloudposse/atmos/pkg/list"
-)
-
-// TestListMetadataFlags tests that the list metadata command has the correct flags.
-func TestListMetadataFlags(t *testing.T) {
-	cmd := &cobra.Command{
-		Use:   "metadata [component]",
-		Short: "List metadata across stacks",
-		Long:  "List metadata information across all stacks or for a specific component",
-	}
-
-	cmd.PersistentFlags().String("format", "", "Output format")
-	cmd.PersistentFlags().String("delimiter", "", "Delimiter for CSV/TSV output")
-	cmd.PersistentFlags().String("stack", "", "Stack pattern")
-	cmd.PersistentFlags().String("query", "", "JQ query")
-	cmd.PersistentFlags().Int("max-columns", 0, "Maximum columns")
-	cmd.PersistentFlags().Bool("process-templates", true, "Enable/disable Go template processing")
-	cmd.PersistentFlags().Bool("process-functions", true, "Enable/disable YAML functions processing")
-
-	formatFlag := cmd.PersistentFlags().Lookup("format")
-	assert.NotNil(t, formatFlag, "Expected format flag to exist")
-	assert.Equal(t, "", formatFlag.DefValue)
-
-	delimiterFlag := cmd.PersistentFlags().Lookup("delimiter")
-	assert.NotNil(t, delimiterFlag, "Expected delimiter flag to exist")
-	assert.Equal(t, "", delimiterFlag.DefValue)
-
-	stackFlag := cmd.PersistentFlags().Lookup("stack")
-	assert.NotNil(t, stackFlag, "Expected stack flag to exist")
-	assert.Equal(t, "", stackFlag.DefValue)
-
-	queryFlag := cmd.PersistentFlags().Lookup("query")
-	assert.NotNil(t, queryFlag, "Expected query flag to exist")
-	assert.Equal(t, "", queryFlag.DefValue)
-
-	maxColumnsFlag := cmd.PersistentFlags().Lookup("max-columns")
-	assert.NotNil(t, maxColumnsFlag, "Expected max-columns flag to exist")
-	assert.Equal(t, "0", maxColumnsFlag.DefValue)
-
-	processTemplatesFlag := cmd.PersistentFlags().Lookup("process-templates")
-	assert.NotNil(t, processTemplatesFlag, "Expected process-templates flag to exist")
-	assert.Equal(t, "true", processTemplatesFlag.DefValue)
-
-	processFunctionsFlag := cmd.PersistentFlags().Lookup("process-functions")
-	assert.NotNil(t, processFunctionsFlag, "Expected process-functions flag to exist")
-	assert.Equal(t, "true", processFunctionsFlag.DefValue)
-}
-
-// TestListMetadataCommand tests the metadata command structure.
-func TestListMetadataCommand(t *testing.T) {
-	assert.Equal(t, "metadata [component]", metadataCmd.Use)
-	assert.Contains(t, metadataCmd.Short, "List metadata across stacks")
-	assert.NotNil(t, metadataCmd.RunE)
-	assert.NotEmpty(t, metadataCmd.Example)
-}
-
-// TestSetupMetadataOptions tests the setupMetadataOptions function.
-func TestSetupMetadataOptions(t *testing.T) {
-	testCases := []struct {
-		name            string
-		opts            *MetadataOptions
-		componentFilter string
-		expectedQuery   string
-		expectedComp    string
-	}{
-		{
-			name: "with component and custom query",
-			opts: &MetadataOptions{
-				Query:      ".metadata.component",
-				MaxColumns: 10,
-				Format:     "json",
-				Delimiter:  ",",
-				Stack:      "prod-*",
-			},
-			componentFilter: "vpc",
-			expectedQuery:   ".metadata.component",
-			expectedComp:    l.KeyMetadata,
-		},
-		{
-			name: "without component and default query",
-			opts: &MetadataOptions{
-				Query:      "",
-				MaxColumns: 5,
-				Format:     "yaml",
-				Delimiter:  "\t",
-				Stack:      "",
-			},
-			componentFilter: "",
-			expectedQuery:   ".metadata",
-			expectedComp:    l.KeyMetadata,
-		},
-		{
-			name: "with component but no query",
-			opts: &MetadataOptions{
-				Query:      "",
-				MaxColumns: 0,
-				Format:     "",
-				Delimiter:  "",
-				Stack:      "*-dev-*",
-			},
-			componentFilter: "app",
-			expectedQuery:   ".metadata",
-			expectedComp:    l.KeyMetadata,
-		},
-	}
-
-	for _, tc := range testCases {
-		t.Run(tc.name, func(t *testing.T) {
-			filterOpts := setupMetadataOptions(tc.opts, tc.componentFilter)
-
-			assert.Equal(t, tc.expectedComp, filterOpts.Component)
-			assert.Equal(t, tc.componentFilter, filterOpts.ComponentFilter)
-			assert.Equal(t, tc.expectedQuery, filterOpts.Query)
-			assert.False(t, filterOpts.IncludeAbstract)
-			assert.Equal(t, tc.opts.MaxColumns, filterOpts.MaxColumns)
-			assert.Equal(t, tc.opts.Format, filterOpts.FormatStr)
-			assert.Equal(t, tc.opts.Delimiter, filterOpts.Delimiter)
-			assert.Equal(t, tc.opts.Stack, filterOpts.StackPattern)
-		})
-	}
-}
-
-// TestMetadataOptions tests the MetadataOptions structure.
-func TestMetadataOptions(t *testing.T) {
-	opts := &MetadataOptions{
-		Format:           "json",
-		MaxColumns:       10,
-		Delimiter:        ",",
-		Stack:            "prod-*",
-		Query:            ".metadata.component",
-		ProcessTemplates: true,
-		ProcessFunctions: false,
-	}
-
-	assert.Equal(t, "json", opts.Format)
-	assert.Equal(t, 10, opts.MaxColumns)
-	assert.Equal(t, ",", opts.Delimiter)
-	assert.Equal(t, "prod-*", opts.Stack)
-	assert.Equal(t, ".metadata.component", opts.Query)
-	assert.True(t, opts.ProcessTemplates)
-	assert.False(t, opts.ProcessFunctions)
-}
-
-// TestListMetadataWithOptions_DefaultQuery tests that default query is applied.
-func TestListMetadataWithOptions_DefaultQuery(t *testing.T) {
-	opts := &MetadataOptions{
-		Query: "",
-	}
-
-	filterOpts := setupMetadataOptions(opts, "")
-	assert.Equal(t, ".metadata", filterOpts.Query, "Should apply default .metadata query")
-}
-
-// TestListMetadataWithOptions_CustomQuery tests that custom query is preserved.
-func TestListMetadataWithOptions_CustomQuery(t *testing.T) {
-	opts := &MetadataOptions{
-		Query: ".metadata.custom",
-	}
-
-	filterOpts := setupMetadataOptions(opts, "")
-	assert.Equal(t, ".metadata.custom", filterOpts.Query, "Should preserve custom query")
-}
-
-// TestLogNoMetadataFoundMessage tests the logNoMetadataFoundMessage function.
-func TestLogNoMetadataFoundMessage(t *testing.T) {
-	testCases := []struct {
-		name            string
-		componentFilter string
-	}{
-		{
-			name:            "with component filter",
-			componentFilter: "vpc",
-		},
-		{
-			name:            "without component filter",
-			componentFilter: "",
-		},
-	}
-
-	for _, tc := range testCases {
-		t.Run(tc.name, func(t *testing.T) {
-			// This function only logs, so we just verify it doesn't panic
-			assert.NotPanics(t, func() {
-				logNoMetadataFoundMessage(tc.componentFilter)
-			})
-		})
-	}
-}
-
-// TestSetupMetadataOptions_AllCombinations tests various option combinations.
-func TestSetupMetadataOptions_AllCombinations(t *testing.T) {
-	testCases := []struct {
-		name               string
-		opts               *MetadataOptions
-		componentFilter    string
-		expectedComponent  string
-		expectedCompFilter string
-		expectedQuery      string
-		expectedAbstract   bool
-		expectedMaxColumns int
-		expectedFormat     string
-		expectedDelimiter  string
-		expectedStackPat   string
-	}{
-		{
-			name: "all options with custom query",
-			opts: &MetadataOptions{
-				Query:      ".metadata.terraform",
-				MaxColumns: 8,
-				Format:     "csv",
-				Delimiter:  ",",
-				Stack:      "*-dev-*",
-			},
-			componentFilter:    "database",
-			expectedComponent:  l.KeyMetadata,
-			expectedCompFilter: "database",
-			expectedQuery:      ".metadata.terraform",
-			expectedAbstract:   false,
-			expectedMaxColumns: 8,
-			expectedFormat:     "csv",
-			expectedDelimiter:  ",",
-			expectedStackPat:   "*-dev-*",
-		},
-		{
-			name:               "empty query defaults to .metadata",
-			opts:               &MetadataOptions{},
-			componentFilter:    "",
-			expectedComponent:  l.KeyMetadata,
-			expectedCompFilter: "",
-			expectedQuery:      ".metadata",
-			expectedAbstract:   false,
-			expectedMaxColumns: 0,
-			expectedFormat:     "",
-			expectedDelimiter:  "",
-			expectedStackPat:   "",
-		},
-		{
-			name: "with component and default query",
-			opts: &MetadataOptions{
-				Query: "",
-			},
-			componentFilter:    "app",
-			expectedComponent:  l.KeyMetadata,
-			expectedCompFilter: "app",
-			expectedQuery:      ".metadata",
-			expectedAbstract:   false,
-			expectedMaxColumns: 0,
-			expectedFormat:     "",
-			expectedDelimiter:  "",
-			expectedStackPat:   "",
-		},
-	}
-
-	for _, tc := range testCases {
-		t.Run(tc.name, func(t *testing.T) {
-			filterOpts := setupMetadataOptions(tc.opts, tc.componentFilter)
-
-			assert.Equal(t, tc.expectedComponent, filterOpts.Component)
-			assert.Equal(t, tc.expectedCompFilter, filterOpts.ComponentFilter)
-			assert.Equal(t, tc.expectedQuery, filterOpts.Query)
-			assert.Equal(t, tc.expectedAbstract, filterOpts.IncludeAbstract)
-			assert.Equal(t, tc.expectedMaxColumns, filterOpts.MaxColumns)
-			assert.Equal(t, tc.expectedFormat, filterOpts.FormatStr)
-			assert.Equal(t, tc.expectedDelimiter, filterOpts.Delimiter)
-			assert.Equal(t, tc.expectedStackPat, filterOpts.StackPattern)
-		})
-	}
-}
-
-// TestMetadataOptions_AllFields tests the MetadataOptions structure with all fields populated.
-func TestMetadataOptions_AllFields(t *testing.T) {
-	opts := &MetadataOptions{
-		Format:           "table",
-		MaxColumns:       20,
-		Delimiter:        ";",
-		Stack:            "prod-us-*",
-		Query:            ".metadata.atmos_version",
-		ProcessTemplates: false,
-		ProcessFunctions: false,
-	}
-
-	assert.Equal(t, "table", opts.Format)
-	assert.Equal(t, 20, opts.MaxColumns)
-	assert.Equal(t, ";", opts.Delimiter)
-	assert.Equal(t, "prod-us-*", opts.Stack)
-	assert.Equal(t, ".metadata.atmos_version", opts.Query)
-	assert.False(t, opts.ProcessTemplates)
-	assert.False(t, opts.ProcessFunctions)
-}
diff --git a/cmd/list/settings.go b/cmd/list/settings.go
index 0036bf592a..53cb65fd43 100644
--- a/cmd/list/settings.go
+++ b/cmd/list/settings.go
@@ -1,7 +1,8 @@
 package list
 
 import (
-	log "github.com/cloudposse/atmos/pkg/logger"
+	"errors"
+
 	"github.com/spf13/cobra"
 	"github.com/spf13/viper"
 
@@ -10,6 +11,8 @@ import (
 	"github.com/cloudposse/atmos/pkg/flags/global"
 	l "github.com/cloudposse/atmos/pkg/list"
 	listerrors "github.com/cloudposse/atmos/pkg/list/errors"
+	log "github.com/cloudposse/atmos/pkg/logger"
+	"github.com/cloudposse/atmos/pkg/ui"
 	utils "github.com/cloudposse/atmos/pkg/utils"
 )
 
@@ -106,12 +109,12 @@ func setupSettingsOptions(opts *SettingsOptions, componentFilter string) *l.Filt
 	}
 }
 
-// logNoSettingsFoundMessage logs an appropriate message when no settings are found.
+// displayNoSettingsFoundMessage displays an appropriate message when no settings are found.
-func logNoSettingsFoundMessage(componentFilter string) {
+func displayNoSettingsFoundMessage(componentFilter string) {
 	if componentFilter != "" {
-		log.Info("No settings found", "component", componentFilter)
+		_ = ui.Info("No settings found for component: " + componentFilter)
 	} else {
-		log.Info("No settings found")
+		_ = ui.Info("No settings found")
 	}
 }
 
@@ -148,7 +151,12 @@ func listSettingsWithOptions(cmd *cobra.Command, opts *SettingsOptions, args []s
 	filterOptions := setupSettingsOptions(opts, componentFilter)
 	output, err := l.FilterAndListValues(stacksMap, filterOptions)
 	if err != nil {
-		return handleNoValuesError(err, componentFilter, logNoSettingsFoundMessage)
+		var noValuesErr *listerrors.NoValuesFoundError
+		if errors.As(err, &noValuesErr) {
+			displayNoSettingsFoundMessage(componentFilter)
+			return "", nil
+		}
+		return "", err
 	}
 
 	return output, nil
diff --git a/cmd/list/settings_test.go b/cmd/list/settings_test.go
index eff295d0b5..abf5cd82c6 100644
--- a/cmd/list/settings_test.go
+++ b/cmd/list/settings_test.go
@@ -203,9 +203,9 @@ func TestLogNoSettingsFoundMessage(t *testing.T) {
 	for _, tc := range testCases {
 		t.Run(tc.name, func(t *testing.T) {
-			// This function only logs, so we just verify it doesn't panic
+			// This function only displays UI output, so we just verify it doesn't panic
 			assert.NotPanics(t, func() {
-				logNoSettingsFoundMessage(tc.componentFilter)
+				displayNoSettingsFoundMessage(tc.componentFilter)
 			})
 		})
 	}
diff --git a/cmd/list/stacks.go b/cmd/list/stacks.go
index eaef473f8a..8001d2a08a 100644
--- a/cmd/list/stacks.go
+++ b/cmd/list/stacks.go
@@ -2,20 +2,29 @@ package list
 
 import (
 	"fmt"
-	"strings"
 
 	"github.com/spf13/cobra"
 	"github.com/spf13/viper"
 
+	errUtils "github.com/cloudposse/atmos/errors"
 	e "github.com/cloudposse/atmos/internal/exec"
+	"github.com/cloudposse/atmos/pkg/auth"
 	"github.com/cloudposse/atmos/pkg/config"
+	"github.com/cloudposse/atmos/pkg/data"
 	"github.com/cloudposse/atmos/pkg/flags"
 	"github.com/cloudposse/atmos/pkg/flags/global"
-	l "github.com/cloudposse/atmos/pkg/list"
+	"github.com/cloudposse/atmos/pkg/list/column"
+	"github.com/cloudposse/atmos/pkg/list/extract"
+	"github.com/cloudposse/atmos/pkg/list/filter"
+	"github.com/cloudposse/atmos/pkg/list/format"
+	"github.com/cloudposse/atmos/pkg/list/importresolver"
+	"github.com/cloudposse/atmos/pkg/list/renderer"
+	listSort "github.com/cloudposse/atmos/pkg/list/sort"
+	"github.com/cloudposse/atmos/pkg/list/tree"
+	log "github.com/cloudposse/atmos/pkg/logger"
+	perf "github.com/cloudposse/atmos/pkg/perf"
 	"github.com/cloudposse/atmos/pkg/schema"
 	"github.com/cloudposse/atmos/pkg/ui"
-	"github.com/cloudposse/atmos/pkg/ui/theme"
-	u "github.com/cloudposse/atmos/pkg/utils"
 )
 
 var stacksParser *flags.StandardParser
 
@@ -23,82 +32,362 @@ var stacksParser *flags.StandardParser
 // StacksOptions contains parsed flags for the stacks command.
 type StacksOptions struct {
 	global.Flags
-	Component string
+	Component  string
+	Format     string
+	Columns    []string
+	Sort       string
+	Provenance bool
 }
 
 // stacksCmd lists atmos stacks.
 var stacksCmd = &cobra.Command{
 	Use:   "stacks",
-	Short: "List all Atmos stacks or stacks for a specific component",
-	Long:  "This command lists all Atmos stacks, or filters the list to show only the stacks associated with a specified component.",
+	Short: "List all Atmos stacks with filtering, sorting, and formatting options",
+	Long:  `List Atmos stacks with support for filtering by component, custom column selection, sorting, and multiple output formats.`,
 	Args:  cobra.NoArgs,
 	RunE: func(cmd *cobra.Command, args []string) error {
-		// Check Atmos configuration
+		// Check Atmos configuration.
 		if err := checkAtmosConfig(); err != nil {
 			return err
 		}
 
-		// Parse flags using StandardParser with Viper precedence
+		// Parse flags using StandardParser with Viper precedence.
 		v := viper.GetViper()
 		if err := stacksParser.BindFlagsToViper(cmd, v); err != nil {
 			return err
 		}
 
 		opts := &StacksOptions{
-			Flags:     flags.ParseGlobalFlags(cmd, v),
-			Component: v.GetString("component"),
+			Flags:      flags.ParseGlobalFlags(cmd, v),
+			Component:  v.GetString("component"),
+			Format:     v.GetString("format"),
+			Columns:    v.GetStringSlice("columns"),
+			Sort:       v.GetString("sort"),
+			Provenance: v.GetBool("provenance"),
 		}
 
-		output, err := listStacksWithOptions(cmd, opts)
-		if err != nil {
-			return err
-		}
+		return listStacksWithOptions(cmd, args, opts)
+	},
+}
+
+// columnsCompletionForStacks provides dynamic tab completion for --columns flag.
+// Returns column names from atmos.yaml stacks.list.columns configuration.
+func columnsCompletionForStacks(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
+	defer perf.Track(nil, "list.stacks.columnsCompletionForStacks")()
+
+	// Load atmos configuration with CLI flags.
+	configAndStacksInfo, err := e.ProcessCommandLineArgs("list", cmd, args, nil)
+	if err != nil {
+		return nil, cobra.ShellCompDirectiveNoFileComp
+	}
+
+	atmosConfig, err := config.InitCliConfig(configAndStacksInfo, false)
+	if err != nil {
+		return nil, cobra.ShellCompDirectiveNoFileComp
+	}
 
-		if len(output) == 0 {
-			ui.Info("No stacks found")
-			return nil
+	// Extract column names from atmos.yaml configuration.
+	if len(atmosConfig.Stacks.List.Columns) > 0 {
+		var columnNames []string
+		for _, col := range atmosConfig.Stacks.List.Columns {
+			columnNames = append(columnNames, col.Name)
 		}
+		return columnNames, cobra.ShellCompDirectiveNoFileComp
+	}
 
-		u.PrintMessageInColor(strings.Join(output, "\n")+"\n", theme.Colors.Success)
-		return nil
-	},
+	// If no custom columns configured, return empty list.
+	return nil, cobra.ShellCompDirectiveNoFileComp
 }
 
 func init() {
-	// Create parser with stacks-specific flags using functional options
-	stacksParser = flags.NewStandardParser(
-		flags.WithStringFlag("component", "c", "", "List all stacks that contain the specified component"),
-		flags.WithEnvVars("component", "ATMOS_COMPONENT"),
+	// Create parser with stacks-specific flags using flag wrappers.
+	stacksParser = NewListParser(
+		WithFormatFlag,
+		WithStacksColumnsFlag,
+		WithSortFlag,
+		WithComponentFlag,
+		WithProvenanceFlag,
 	)
 
-	// Register flags
+	// Register flags.
 	stacksParser.RegisterFlags(stacksCmd)
 
-	// Bind flags to Viper for environment variable support
+	// Register dynamic tab completion for --columns flag.
+	if err := stacksCmd.RegisterFlagCompletionFunc("columns", columnsCompletionForStacks); err != nil {
+		panic(err)
+	}
+
+	// Bind flags to Viper for environment variable support.
 	if err := stacksParser.BindToViper(viper.GetViper()); err != nil {
 		panic(err)
 	}
 }
 
-func listStacksWithOptions(cmd *cobra.Command, opts *StacksOptions) ([]string, error) {
-	configAndStacksInfo := schema.ConfigAndStacksInfo{}
+func listStacksWithOptions(cmd *cobra.Command, args []string, opts *StacksOptions) error {
+	defer perf.Track(nil, "list.stacks.listStacksWithOptions")()
+
+	// Early validation: --provenance only works with --format=tree.
+	if err := validateProvenanceFlag(opts); err != nil {
+		return err
+	}
+
+	// Initialize configuration and auth.
+	atmosConfig, authManager, err := initStacksConfig(cmd, args, opts)
+	if err != nil {
+		return err
+	}
+
+	// Execute describe stacks and extract results.
+	stacks, stacksMap, err := executeAndExtractStacks(&atmosConfig, opts, authManager)
+	if err != nil {
+		return err
+	}
+	if len(stacks) == 0 {
+		_ = ui.Info("No stacks found")
+		return nil
+	}
+
+	// Handle tree format specially - it shows import hierarchies.
+	if opts.Format == string(format.FormatTree) {
+		return renderStacksTreeFormat(&atmosConfig, stacks, opts.Provenance, authManager)
+	}
+	_ = stacksMap // Unused in non-tree format.
+
+	// Render stacks with filters, columns, and sorters.
+	return renderStacksTable(&atmosConfig, stacks, opts)
+}
+
+// validateProvenanceFlag checks that --provenance is only used with --format=tree.
+func validateProvenanceFlag(opts *StacksOptions) error {
+	if opts.Provenance && opts.Format != "" && opts.Format != string(format.FormatTree) {
+		return fmt.Errorf("%w: --provenance flag only works with --format=tree", errUtils.ErrInvalidFlag)
+	}
+	return nil
+}
+
+// initStacksConfig initializes configuration and authentication for the stacks command.
+func initStacksConfig(
+	cmd *cobra.Command,
+	args []string,
+	opts *StacksOptions,
+) (schema.AtmosConfiguration, auth.AuthManager, error) {
+	defer perf.Track(nil, "list.stacks.initStacksConfig")()
+
+	configAndStacksInfo, err := e.ProcessCommandLineArgs("list", cmd, args, nil)
+	if err != nil {
+		return schema.AtmosConfiguration{}, nil, err
+	}
 
 	atmosConfig, err := config.InitCliConfig(configAndStacksInfo, true)
 	if err != nil {
-		return nil, fmt.Errorf("error initializing CLI config: %v", err)
+		return schema.AtmosConfiguration{}, nil, fmt.Errorf("%w: %w", errUtils.ErrInitializingCLIConfig, err)
+	}
+
+	// Apply format from config if not set via flag.
+	if opts.Format == "" && atmosConfig.Stacks.List.Format != "" {
+		opts.Format = atmosConfig.Stacks.List.Format
+	}
+
+	// Validate provenance after resolving format from config.
+	if opts.Provenance && opts.Format != string(format.FormatTree) {
+		return schema.AtmosConfiguration{}, nil, fmt.Errorf("%w: --provenance flag only works with --format=tree", errUtils.ErrInvalidFlag)
 	}
 
 	// Create AuthManager for authentication support.
 	authManager, err := createAuthManagerForList(cmd, &atmosConfig)
 	if err != nil {
-		return nil, err
+		return schema.AtmosConfiguration{}, nil, err
 	}
 
-	stacksMap, err := e.ExecuteDescribeStacks(&atmosConfig, "", nil, nil, nil, false, false, false, false, nil, authManager)
+	return atmosConfig, authManager, nil
+}
+
+// executeAndExtractStacks runs describe stacks and extracts the results.
+func executeAndExtractStacks(
+	atmosConfig *schema.AtmosConfiguration,
+	opts *StacksOptions,
+	authManager auth.AuthManager,
+) ([]map[string]any, map[string]any, error) {
+	defer perf.Track(nil, "list.stacks.executeAndExtractStacks")()
+
+	stacksMap, err := e.ExecuteDescribeStacks(atmosConfig, "", nil, nil, nil, false, false, false, false, nil, authManager)
 	if err != nil {
-		return nil, fmt.Errorf("error describing stacks: %v", err)
+		return nil, nil, fmt.Errorf("%w: %w", errUtils.ErrExecuteDescribeStacks, err)
+	}
+
+	var stacks []map[string]any
+	if opts.Component != "" {
+		stacks, err = extract.StacksForComponent(opts.Component, stacksMap)
+	} else {
+		stacks, err = extract.Stacks(stacksMap)
+	}
+	if err != nil {
+		return nil, nil, err
+	}
+
+	return stacks, stacksMap, nil
+}
+
+// renderStacksTable renders stacks in table format with filters, columns, and sorters.
+func renderStacksTable(atmosConfig *schema.AtmosConfiguration, stacks []map[string]any, opts *StacksOptions) error {
+	defer perf.Track(nil, "list.stacks.renderStacksTable")()
+
+	filters := buildStackFilters(opts)
+	columns := getStackColumns(atmosConfig, opts.Columns, opts.Component != "")
+
+	selector, err := column.NewSelector(columns, column.BuildColumnFuncMap())
+	if err != nil {
+		return fmt.Errorf("error creating column selector: %w", err)
+	}
+
+	sorters, err := buildStackSorters(opts.Sort)
+	if err != nil {
+		return fmt.Errorf("error parsing sort specification: %w", err)
+	}
+
+	outputFormat := format.Format(opts.Format)
+	r := renderer.New(filters, selector, sorters, outputFormat, "")
+	return r.Render(stacks)
+}
+
+// buildStackFilters creates filters based on command options.
+func buildStackFilters(opts *StacksOptions) []filter.Filter {
+	var filters []filter.Filter
+
+	// Component filter already handled by extraction logic.
+	// Add any additional filters here in the future.
+
+	return filters
+}
+
+// getStackColumns returns column configuration.
+func getStackColumns(atmosConfig *schema.AtmosConfiguration, columnsFlag []string, hasComponent bool) []column.Config {
+	defer perf.Track(nil, "list.stacks.getStackColumns")()
+
+	// If --columns flag is provided, parse it and return.
+	if len(columnsFlag) > 0 {
+		return parseColumnsFlag(columnsFlag)
+	}
+
+	// Check atmos.yaml for stacks.list.columns configuration.
+	if len(atmosConfig.Stacks.List.Columns) > 0 {
+		var configs []column.Config
+		for _, col := range atmosConfig.Stacks.List.Columns {
+			configs = append(configs, column.Config{
+				Name:  col.Name,
+				Value: col.Value,
+				Width: col.Width,
+			})
+		}
+		return configs
+	}
+
+	// Default columns for stacks.
+	if hasComponent {
+		// When filtering by component, show both stack and component.
+		return []column.Config{
+			{Name: "Stack", Value: "{{ .stack }}"},
+			{Name: "Component", Value: "{{ .component }}"},
+		}
+	}
+
+	// When showing all stacks, just show stack name.
+	return []column.Config{
+		{Name: "Stack", Value: "{{ .stack }}"},
+	}
+}
+
+// renderStacksTreeFormat handles the tree format output for stacks.
+// It enables provenance tracking, re-processes stacks, and renders the import hierarchy.
+func renderStacksTreeFormat(
+	atmosConfig *schema.AtmosConfiguration,
+	stacks []map[string]any,
+	showProvenance bool,
+	authManager auth.AuthManager,
+) error {
+	defer perf.Track(nil, "list.stacks.renderStacksTreeFormat")()
+
+	log.Trace("Tree format detected, enabling provenance tracking")
+	atmosConfig.TrackProvenance = true
+
+	// Clear caches to ensure fresh processing with provenance enabled.
+	e.ClearMergeContexts()
+	e.ClearFindStacksMapCache()
+	log.Trace("Caches cleared, re-processing with provenance")
+
+	// Re-process stacks with provenance tracking enabled.
+	stacksMap, err := e.ExecuteDescribeStacks(atmosConfig, "", nil, nil, nil, false, false, false, false, nil, authManager)
+	if err != nil {
+		return fmt.Errorf("error re-processing stacks with provenance: %w", err)
+	}
+
+	// Resolve import trees and filter to allowed stacks.
+	importTrees, err := resolveAndFilterImportTrees(stacksMap, atmosConfig, stacks)
+	if err != nil {
+		return err
+	}
+
+	// Render and output the tree.
+	output := format.RenderStacksTree(importTrees, showProvenance)
+	_ = data.Writeln(output)
+	return nil
+}
+
+// resolveAndFilterImportTrees resolves import trees from provenance and filters to allowed stacks.
+func resolveAndFilterImportTrees(
+	stacksMap map[string]any,
+	atmosConfig *schema.AtmosConfiguration,
+	stacks []map[string]any,
+) (map[string][]*tree.ImportNode, error) {
+	defer perf.Track(nil, "list.stacks.resolveAndFilterImportTrees")()
+
+	importTreesWithComponents, err := importresolver.ResolveImportTreeFromProvenance(stacksMap, atmosConfig)
+	if err != nil {
+		return nil, fmt.Errorf("error resolving import tree from provenance: %w", err)
+	}
+
+	// Build a set of allowed stack names from the already-filtered stacks slice.
+	allowedStacks := buildAllowedStacksSet(stacks)
+
+	// Flatten component level - for stacks view, we just need stack → imports.
+	// All components in a stack share the same import chain from the stack file.
+	importTrees := make(map[string][]*tree.ImportNode)
+	for stackName, componentImports := range importTreesWithComponents {
+		if !allowedStacks[stackName] {
+			continue
+		}
+		// Just take the first component's imports (they're all the same for a stack file).
+		for _, imports := range componentImports {
+			importTrees[stackName] = imports
+			break
+		}
+	}
+
+	return importTrees, nil
+}
+
+// buildAllowedStacksSet creates a set of stack names from a slice of stack maps.
+func buildAllowedStacksSet(stacks []map[string]any) map[string]bool {
+	defer perf.Track(nil, "list.stacks.buildAllowedStacksSet")()
+
+	allowedStacks := make(map[string]bool)
+	for _, stack := range stacks {
+		if stackName, ok := stack["stack"].(string); ok {
+			allowedStacks[stackName] = true
+		}
+	}
+	return allowedStacks
+}
+
+// buildStackSorters creates sorters from sort specification.
+func buildStackSorters(sortSpec string) ([]*listSort.Sorter, error) {
+	defer perf.Track(nil, "list.stacks.buildStackSorters")()
+
+	if sortSpec == "" {
+		// Default sort: by stack ascending.
+		return []*listSort.Sorter{
+			listSort.NewSorter("Stack", listSort.Ascending),
+		}, nil
 	}
 
-	output, err := l.FilterAndListStacks(stacksMap, opts.Component)
-	return output, err
+	return listSort.ParseSortSpec(sortSpec)
 }
diff --git a/cmd/list/stacks_test.go b/cmd/list/stacks_test.go
index d6e05d486e..16db0ed1b7 100644
--- a/cmd/list/stacks_test.go
+++ b/cmd/list/stacks_test.go
@@ -6,6 +6,9 @@ import (
 
 	"github.com/spf13/cobra"
 	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+
+	"github.com/cloudposse/atmos/pkg/schema"
 )
 
 // TestListStacksFlags tests that the list stacks command has the correct flags.
@@ -103,3 +106,312 @@ func TestStacksOptions(t *testing.T) {
 		})
 	}
 }
+
+// TestStacksOptions_AllFields tests all fields in StacksOptions.
+func TestStacksOptions_AllFields(t *testing.T) {
+	testCases := []struct {
+		name            string
+		opts            *StacksOptions
+		expectedComp    string
+		expectedFormat  string
+		expectedColumns []string
+		expectedSort    string
+		expectedProv    bool
+	}{
+		{
+			name: "All fields populated",
+			opts: &StacksOptions{
+				Component:  "vpc",
+				Format:     "table",
+				Columns:    []string{"stack", "component"},
+				Sort:       "stack:asc",
+				Provenance: true,
+			},
+			expectedComp:    "vpc",
+			expectedFormat:  "table",
+			expectedColumns: []string{"stack", "component"},
+			expectedSort:    "stack:asc",
+			expectedProv:    true,
+		},
+		{
+			name:            "Empty options",
+			opts:            &StacksOptions{},
+			expectedComp:    "",
+			expectedFormat:  "",
+			expectedColumns: nil,
+			expectedSort:    "",
+			expectedProv:    false,
+		},
+		{
+			name: "Format options",
+			opts: &StacksOptions{
+				Format: "json",
+			},
+			expectedComp:   "",
+			expectedFormat: "json",
+			expectedSort:   "",
+			expectedProv:   false,
+		},
+		{
+			name: "Tree format with provenance",
+			opts: &StacksOptions{
+				Format:     "tree",
+				Provenance: true,
+			},
+			expectedComp:   "",
+			expectedFormat: "tree",
+			expectedProv:   true,
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			assert.Equal(t, tc.expectedComp,
tc.opts.Component) + assert.Equal(t, tc.expectedFormat, tc.opts.Format) + assert.Equal(t, tc.expectedColumns, tc.opts.Columns) + assert.Equal(t, tc.expectedSort, tc.opts.Sort) + assert.Equal(t, tc.expectedProv, tc.opts.Provenance) + }) + } +} + +// TestBuildStackFilters tests filter building. +func TestBuildStackFilters(t *testing.T) { + testCases := []struct { + name string + opts *StacksOptions + expectedCount int + }{ + { + name: "No filters", + opts: &StacksOptions{}, + expectedCount: 0, + }, + { + name: "With component filter", + opts: &StacksOptions{ + Component: "vpc", + }, + expectedCount: 0, // Component filter is handled by extraction logic + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + result := buildStackFilters(tc.opts) + assert.Equal(t, tc.expectedCount, len(result)) + }) + } +} + +// TestGetStackColumns tests column configuration logic. +func TestGetStackColumns(t *testing.T) { + testCases := []struct { + name string + atmosConfig *schema.AtmosConfiguration + columnsFlag []string + hasComponent bool + expectLen int + expectName string + }{ + { + name: "Default columns without component", + atmosConfig: &schema.AtmosConfiguration{ + Stacks: schema.Stacks{ + List: schema.ListConfig{}, + }, + }, + columnsFlag: []string{}, + hasComponent: false, + expectLen: 1, + expectName: "Stack", + }, + { + name: "Default columns with component", + atmosConfig: &schema.AtmosConfiguration{ + Stacks: schema.Stacks{ + List: schema.ListConfig{}, + }, + }, + columnsFlag: []string{}, + hasComponent: true, + expectLen: 2, + expectName: "Stack", + }, + { + name: "Columns from flag", + atmosConfig: &schema.AtmosConfiguration{ + Stacks: schema.Stacks{ + List: schema.ListConfig{}, + }, + }, + columnsFlag: []string{"stack", "component", "type"}, + hasComponent: false, + expectLen: 3, + }, + { + name: "Columns from config", + atmosConfig: &schema.AtmosConfiguration{ + Stacks: schema.Stacks{ + List: schema.ListConfig{ + Columns: 
[]schema.ListColumnConfig{ + {Name: "Stack", Value: "{{ .stack }}"}, + {Name: "Component", Value: "{{ .component }}"}, + }, + }, + }, + }, + columnsFlag: []string{}, + hasComponent: false, + expectLen: 2, + expectName: "Stack", + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + result := getStackColumns(tc.atmosConfig, tc.columnsFlag, tc.hasComponent) + assert.Equal(t, tc.expectLen, len(result)) + + if tc.expectName != "" && len(result) > 0 { + assert.Equal(t, tc.expectName, result[0].Name) + } + }) + } +} + +// TestBuildStackSorters tests sorter building. +func TestBuildStackSorters(t *testing.T) { + testCases := []struct { + name string + sortSpec string + expectLen int + expectError bool + }{ + { + name: "Empty sort (default)", + sortSpec: "", + expectLen: 1, // Default sort by stack ascending + }, + { + name: "Single sort field ascending", + sortSpec: "stack:asc", + expectLen: 1, + }, + { + name: "Single sort field descending", + sortSpec: "stack:desc", + expectLen: 1, + }, + { + name: "Multiple sort fields", + sortSpec: "component:asc,stack:desc", + expectLen: 2, + }, + { + name: "Invalid sort spec", + sortSpec: "invalid::spec", + expectError: true, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + result, err := buildStackSorters(tc.sortSpec) + + if tc.expectError { + assert.Error(t, err) + } else { + assert.NoError(t, err) + assert.Equal(t, tc.expectLen, len(result)) + } + }) + } +} + +// TestColumnsCompletionForStacks tests tab completion for columns flag. +func TestColumnsCompletionForStacks(t *testing.T) { + // This test verifies the function signature and basic behavior. + // Full integration testing would require a valid atmos.yaml config. + cmd := &cobra.Command{} + args := []string{} + toComplete := "" + + // Should return empty or error if config cannot be loaded. 
+ suggestions, directive := columnsCompletionForStacks(cmd, args, toComplete) + + // Function should return (even if empty) and directive should be NoFileComp. + // Suggestions can be nil or empty when config is not available. + _ = suggestions // May be nil or empty + assert.Equal(t, cobra.ShellCompDirectiveNoFileComp, directive) +} + +// TestListStacksWithOptions_ProvenanceValidation tests the provenance validation logic. +// Note: This test validates the early validation that runs before config loading. +// Only test cases with explicit format can be validated early (before config loading). +func TestListStacksWithOptions_ProvenanceValidation(t *testing.T) { + tests := []struct { + name string + format string + provenance bool + expectInvalid bool // true if we expect the provenance validation to fail + skipReason string + }{ + { + name: "provenance with table format is invalid", + format: "table", + provenance: true, + expectInvalid: true, + }, + { + name: "provenance with json format is invalid", + format: "json", + provenance: true, + expectInvalid: true, + }, + { + name: "provenance without format is invalid", + format: "", + provenance: true, + expectInvalid: true, + skipReason: "Empty format requires config loading to determine default, cannot test with nil cmd", + }, + { + name: "provenance with tree format is valid", + format: "tree", + provenance: true, + expectInvalid: false, + skipReason: "Valid provenance requires config loading to proceed, cannot test with nil cmd", + }, + { + name: "no provenance with any format is valid", + format: "table", + provenance: false, + expectInvalid: false, + skipReason: "No provenance requires config loading to proceed, cannot test with nil cmd", + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + if tc.skipReason != "" { + t.Skip(tc.skipReason) + } + + opts := &StacksOptions{ + Format: tc.format, + Provenance: tc.provenance, + } + + err := listStacksWithOptions(nil, nil, opts) + + if 
tc.expectInvalid { + require.Error(t, err, "Expected provenance validation error") + assert.Contains(t, err.Error(), "--provenance") + } else if err != nil { + // We don't expect a provenance error, but other errors are acceptable. + assert.NotContains(t, err.Error(), "--provenance", "Got unexpected provenance error") + } + }) + } +} diff --git a/cmd/list/values.go b/cmd/list/values.go index 18a51a9ce2..078e325321 100644 --- a/cmd/list/values.go +++ b/cmd/list/values.go @@ -18,6 +18,7 @@ import ( f "github.com/cloudposse/atmos/pkg/list/format" listutils "github.com/cloudposse/atmos/pkg/list/utils" "github.com/cloudposse/atmos/pkg/schema" + "github.com/cloudposse/atmos/pkg/ui" u "github.com/cloudposse/atmos/pkg/utils" ) @@ -148,13 +149,13 @@ var varsCmd = &cobra.Command{ if err != nil { var componentVarsNotFoundErr *listerrors.ComponentVarsNotFoundError if errors.As(err, &componentVarsNotFoundErr) { - log.Info("No vars found", "component", componentVarsNotFoundErr.Component) + _ = ui.Info("No vars found for component: " + componentVarsNotFoundErr.Component) return nil } var noValuesErr *listerrors.NoValuesFoundError if errors.As(err, &noValuesErr) { - log.Info("No values found for query '.vars'", "component", args[0]) + _ = ui.Info("No values found for query '.vars' for component: " + args[0]) return nil } @@ -167,56 +168,65 @@ var varsCmd = &cobra.Command{ } func init() { - // Create parser for values command with all flags - valuesParser = newCommonListParser( - flags.WithBoolFlag("abstract", "", false, "Include abstract components"), - flags.WithBoolFlag("vars", "", false, "Show only vars (equivalent to --query .vars)"), - flags.WithBoolFlag("process-templates", "", true, "Enable/disable Go template processing in Atmos stack manifests when executing the command"), - flags.WithBoolFlag("process-functions", "", true, "Enable/disable YAML functions processing in Atmos stack manifests when executing the command"), - flags.WithEnvVars("abstract", 
"ATMOS_LIST_ABSTRACT"), - flags.WithEnvVars("vars", "ATMOS_LIST_VARS"), - flags.WithEnvVars("process-templates", "ATMOS_PROCESS_TEMPLATES"), - flags.WithEnvVars("process-functions", "ATMOS_PROCESS_FUNCTIONS"), + // Create parser for values command using flag wrappers. + valuesParser = NewListParser( + WithFormatFlag, + WithDelimiterFlag, + WithStackFlag, + WithQueryFlag, + WithMaxColumnsFlag, + WithAbstractFlag, + WithProcessTemplatesFlag, + WithProcessFunctionsFlag, + // Add vars flag only for values command. + func(options *[]flags.Option) { + *options = append(*options, + flags.WithBoolFlag("vars", "", false, "Show only vars (equivalent to --query .vars)"), + flags.WithEnvVars("vars", "ATMOS_LIST_VARS"), + ) + }, ) - // Register flags for values command + // Register flags for values command. valuesParser.RegisterFlags(valuesCmd) - // Customize query flag usage for values command + // Customize query flag usage for values command. if queryFlag := valuesCmd.PersistentFlags().Lookup("query"); queryFlag != nil { queryFlag.Usage = "Filter the results using YQ expressions" } - // Add stack completion + // Add stack completion. addStackCompletion(valuesCmd) - // Bind flags to Viper for environment variable support + // Bind flags to Viper for environment variable support. 
if err := valuesParser.BindToViper(viper.GetViper()); err != nil { panic(err) } - // Create parser for vars command (no vars flag, as it's always .vars) - varsParser = newCommonListParser( - flags.WithBoolFlag("abstract", "", false, "Include abstract components"), - flags.WithBoolFlag("process-templates", "", true, "Enable/disable Go template processing in Atmos stack manifests when executing the command"), - flags.WithBoolFlag("process-functions", "", true, "Enable/disable YAML functions processing in Atmos stack manifests when executing the command"), - flags.WithEnvVars("abstract", "ATMOS_LIST_ABSTRACT"), - flags.WithEnvVars("process-templates", "ATMOS_PROCESS_TEMPLATES"), - flags.WithEnvVars("process-functions", "ATMOS_PROCESS_FUNCTIONS"), + // Create parser for vars command (no vars flag, as it's always .vars). + varsParser = NewListParser( + WithFormatFlag, + WithDelimiterFlag, + WithStackFlag, + WithQueryFlag, + WithMaxColumnsFlag, + WithAbstractFlag, + WithProcessTemplatesFlag, + WithProcessFunctionsFlag, ) - // Register flags for vars command + // Register flags for vars command. varsParser.RegisterFlags(varsCmd) - // Customize query flag usage for vars command + // Customize query flag usage for vars command. if queryFlag := varsCmd.PersistentFlags().Lookup("query"); queryFlag != nil { queryFlag.Usage = "Filter the results using YQ expressions" } - // Add stack completion + // Add stack completion. addStackCompletion(varsCmd) - // Bind flags to Viper for environment variable support + // Bind flags to Viper for environment variable support. if err := varsParser.BindToViper(viper.GetViper()); err != nil { panic(err) } @@ -263,12 +273,12 @@ func getFilterOptionsFromValues(opts *ValuesOptions) *l.FilterOptions { } } -// logNoValuesFoundMessage logs an appropriate message when no values or vars are found. 
-func logNoValuesFoundMessage(componentName string, query string) { +// displayNoValuesFoundMessage displays an appropriate message when no values or vars are found. +func displayNoValuesFoundMessage(componentName string, query string) { if query == ".vars" { - log.Info("No vars found", "component", componentName) + _ = ui.Info("No vars found for component: " + componentName) } else { - log.Info("No values found", "component", componentName) + _ = ui.Info("No values found for component: " + componentName) } } @@ -351,7 +361,7 @@ func listValuesWithOptions(cmd *cobra.Command, opts *ValuesOptions, args []strin if err != nil { var noValuesErr *listerrors.NoValuesFoundError if errors.As(err, &noValuesErr) { - logNoValuesFoundMessage(componentName, filterOptions.Query) + displayNoValuesFoundMessage(componentName, filterOptions.Query) return "", nil } return "", err diff --git a/cmd/list/values_test.go b/cmd/list/values_test.go index 4eeaad0b82..a66e9b4039 100644 --- a/cmd/list/values_test.go +++ b/cmd/list/values_test.go @@ -316,9 +316,9 @@ func TestLogNoValuesFoundMessage(t *testing.T) { for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { - // This function only logs, so we just verify it doesn't panic + // This function only displays UI output, so we just verify it doesn't panic assert.NotPanics(t, func() { - logNoValuesFoundMessage(tc.componentName, tc.query) + displayNoValuesFoundMessage(tc.componentName, tc.query) }) }) } diff --git a/cmd/list/vendor.go b/cmd/list/vendor.go index def196ed6e..495884f2b7 100644 --- a/cmd/list/vendor.go +++ b/cmd/list/vendor.go @@ -13,7 +13,15 @@ import ( "github.com/cloudposse/atmos/pkg/flags" "github.com/cloudposse/atmos/pkg/flags/global" l "github.com/cloudposse/atmos/pkg/list" + "github.com/cloudposse/atmos/pkg/list/column" + "github.com/cloudposse/atmos/pkg/list/extract" + "github.com/cloudposse/atmos/pkg/list/filter" + "github.com/cloudposse/atmos/pkg/list/format" + "github.com/cloudposse/atmos/pkg/list/renderer" 
+ listSort "github.com/cloudposse/atmos/pkg/list/sort" + perf "github.com/cloudposse/atmos/pkg/perf" "github.com/cloudposse/atmos/pkg/schema" + "github.com/cloudposse/atmos/pkg/ui" ) var vendorParser *flags.StandardParser @@ -21,16 +29,17 @@ var vendorParser *flags.StandardParser // VendorOptions contains parsed flags for the vendor command. type VendorOptions struct { global.Flags - Format string - Stack string - Delimiter string + Format string + Stack string + Columns []string + Sort string } // vendorCmd lists vendor configurations. var vendorCmd = &cobra.Command{ Use: "vendor", - Short: "List all vendor configurations", - Long: "List all vendor configurations in a tabular way, including component and vendor manifests.", + Short: "List all vendor configurations with filtering, sorting, and formatting options", + Long: `List Atmos vendor configurations including component and vendor manifests with support for filtering, custom column selection, sorting, and multiple output formats.`, Args: cobra.NoArgs, RunE: func(cmd *cobra.Command, args []string) error { // Skip stack validation for vendor. @@ -38,68 +47,192 @@ var vendorCmd = &cobra.Command{ return err } - // Parse flags using StandardParser with Viper precedence + // Parse flags using StandardParser with Viper precedence. v := viper.GetViper() if err := vendorParser.BindFlagsToViper(cmd, v); err != nil { return err } opts := &VendorOptions{ - Flags: flags.ParseGlobalFlags(cmd, v), - Format: v.GetString("format"), - Stack: v.GetString("stack"), - Delimiter: v.GetString("delimiter"), + Flags: flags.ParseGlobalFlags(cmd, v), + Format: v.GetString("format"), + Stack: v.GetString("stack"), + Columns: v.GetStringSlice("columns"), + Sort: v.GetString("sort"), } - output, err := listVendorWithOptions(opts) - if err != nil { - return err + return listVendorWithOptions(opts) + }, +} + +// columnsCompletionForVendor provides dynamic tab completion for --columns flag. 
+// Returns column names from atmos.yaml vendor.list.columns configuration. +func columnsCompletionForVendor(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { + defer perf.Track(nil, "list.vendor.columnsCompletionForVendor")() + + // Load atmos configuration. + configAndStacksInfo := schema.ConfigAndStacksInfo{} + atmosConfig, err := config.InitCliConfig(configAndStacksInfo, false) + if err != nil { + return nil, cobra.ShellCompDirectiveNoFileComp + } + + // Extract column names from atmos.yaml configuration. + if len(atmosConfig.Vendor.List.Columns) > 0 { + var columnNames []string + for _, col := range atmosConfig.Vendor.List.Columns { + columnNames = append(columnNames, col.Name) } + return columnNames, cobra.ShellCompDirectiveNoFileComp + } - // Obfuscate home directory paths before printing. - obfuscatedOutput := obfuscateHomeDirInOutput(output) - fmt.Println(obfuscatedOutput) - return nil - }, + // If no custom columns configured, return empty list. + return nil, cobra.ShellCompDirectiveNoFileComp } func init() { - // Create parser with vendor-specific flags using functional options - vendorParser = flags.NewStandardParser( - flags.WithStringFlag("format", "f", "", "Output format: table, json, yaml, csv, tsv"), - flags.WithStringFlag("stack", "s", "", "Filter by stack name or pattern"), - flags.WithStringFlag("delimiter", "d", "", "Delimiter for CSV/TSV output"), - flags.WithEnvVars("format", "ATMOS_LIST_FORMAT"), - flags.WithEnvVars("stack", "ATMOS_STACK"), - flags.WithEnvVars("delimiter", "ATMOS_LIST_DELIMITER"), + // Create parser with vendor-specific flags using flag wrappers. + vendorParser = NewListParser( + WithFormatFlag, + WithVendorColumnsFlag, + WithSortFlag, + WithStackFlag, ) - // Register flags + // Register flags. vendorParser.RegisterFlags(vendorCmd) - // Add stack completion + // Register dynamic tab completion for --columns flag. 
+ if err := vendorCmd.RegisterFlagCompletionFunc("columns", columnsCompletionForVendor); err != nil { + panic(err) + } + + // Add stack completion. addStackCompletion(vendorCmd) - // Bind flags to Viper for environment variable support + // Bind flags to Viper for environment variable support. if err := vendorParser.BindToViper(viper.GetViper()); err != nil { panic(err) } } -func listVendorWithOptions(opts *VendorOptions) (string, error) { +func listVendorWithOptions(opts *VendorOptions) error { + defer perf.Track(nil, "list.vendor.listVendorWithOptions")() + configAndStacksInfo := schema.ConfigAndStacksInfo{} atmosConfig, err := config.InitCliConfig(configAndStacksInfo, false) if err != nil { - return "", err + return err + } + + // If format is empty, check command-specific config. + if opts.Format == "" && atmosConfig.Vendor.List.Format != "" { + opts.Format = atmosConfig.Vendor.List.Format + } + + // Get vendor configurations. + vendorInfos, err := l.GetVendorInfos(&atmosConfig) + if err != nil { + return err + } + + // Convert to renderer-compatible format. + vendors, err := extract.Vendor(vendorInfos) + if err != nil { + return err + } + + if len(vendors) == 0 { + _ = ui.Info("No vendor configurations found") + return nil + } + + // Build filters. + filters := buildVendorFilters(opts) + + // Get column configuration. + columns := getVendorColumns(&atmosConfig, opts.Columns) + + // Build column selector. + selector, err := column.NewSelector(columns, column.BuildColumnFuncMap()) + if err != nil { + return fmt.Errorf("error creating column selector: %w", err) + } + + // Build sorters. + sorters, err := buildVendorSorters(opts.Sort) + if err != nil { + return fmt.Errorf("error parsing sort specification: %w", err) + } + + // Create renderer and execute pipeline. 
+	outputFormat := format.Format(opts.Format)
+	r := renderer.New(filters, selector, sorters, outputFormat, "")
+
+	return r.Render(vendors)
+}
+
+// buildVendorFilters creates filters based on command options.
+func buildVendorFilters(opts *VendorOptions) []filter.Filter {
+	defer perf.Track(nil, "list.vendor.buildVendorFilters")()
+
+	var filters []filter.Filter
+
+	// The --stack flag is applied as a glob pattern on the component field,
+	// since vendor rows (component, type, manifest, folder) have no stack field.
+	if opts.Stack != "" {
+		globFilter, err := filter.NewGlobFilter("component", opts.Stack)
+		if err == nil {
+			filters = append(filters, globFilter)
+		}
+	}
+
+	return filters
+}
+
+// getVendorColumns returns column configuration.
+func getVendorColumns(atmosConfig *schema.AtmosConfiguration, columnsFlag []string) []column.Config {
+	defer perf.Track(nil, "list.vendor.getVendorColumns")()
+
+	// If --columns flag is provided, parse it and return.
+	if len(columnsFlag) > 0 {
+		return parseColumnsFlag(columnsFlag)
 	}
 
-	options := &l.FilterOptions{
-		FormatStr:    opts.Format,
-		StackPattern: opts.Stack,
-		Delimiter:    opts.Delimiter,
+	// Check atmos.yaml for vendor.list.columns configuration.
+	if len(atmosConfig.Vendor.List.Columns) > 0 {
+		var configs []column.Config
+		for _, col := range atmosConfig.Vendor.List.Columns {
+			configs = append(configs, column.Config{
+				Name:  col.Name,
+				Value: col.Value,
+				Width: col.Width,
+			})
+		}
+		return configs
+	}
+
+	// Default columns for vendor.
+	return []column.Config{
+		{Name: "Component", Value: "{{ .component }}"},
+		{Name: "Type", Value: "{{ .type }}"},
+		{Name: "Manifest", Value: "{{ .manifest }}"},
+		{Name: "Folder", Value: "{{ .folder }}"},
+	}
+}
+
+// buildVendorSorters creates sorters from sort specification.
+func buildVendorSorters(sortSpec string) ([]*listSort.Sorter, error) {
+	defer perf.Track(nil, "list.vendor.buildVendorSorters")()
+
+	if sortSpec == "" {
+		// Default sort: by component ascending.
+ return []*listSort.Sorter{ + listSort.NewSorter("Component", listSort.Ascending), + }, nil } - return l.FilterAndListVendor(&atmosConfig, options) + return listSort.ParseSortSpec(sortSpec) } // obfuscateHomeDirInOutput replaces occurrences of the home directory with "~" to prevent leaking user paths. diff --git a/cmd/list/vendor_test.go b/cmd/list/vendor_test.go index bef5ef942d..c054077065 100644 --- a/cmd/list/vendor_test.go +++ b/cmd/list/vendor_test.go @@ -13,29 +13,33 @@ import ( // TestVendorOptions tests the VendorOptions structure. func TestVendorOptions(t *testing.T) { testCases := []struct { - name string - opts *VendorOptions - expectedFormat string - expectedStack string - expectedDelimiter string + name string + opts *VendorOptions + expectedFormat string + expectedStack string + expectedColumns []string + expectedSort string }{ { name: "all options populated", opts: &VendorOptions{ - Format: "json", - Stack: "prod-*", - Delimiter: ",", + Format: "json", + Stack: "prod-*", + Columns: []string{"component", "type"}, + Sort: "component:asc", }, - expectedFormat: "json", - expectedStack: "prod-*", - expectedDelimiter: ",", + expectedFormat: "json", + expectedStack: "prod-*", + expectedColumns: []string{"component", "type"}, + expectedSort: "component:asc", }, { - name: "empty options", - opts: &VendorOptions{}, - expectedFormat: "", - expectedStack: "", - expectedDelimiter: "", + name: "empty options", + opts: &VendorOptions{}, + expectedFormat: "", + expectedStack: "", + expectedColumns: nil, + expectedSort: "", }, { name: "yaml format with stack filter", @@ -43,9 +47,10 @@ func TestVendorOptions(t *testing.T) { Format: "yaml", Stack: "*-staging-*", }, - expectedFormat: "yaml", - expectedStack: "*-staging-*", - expectedDelimiter: "", + expectedFormat: "yaml", + expectedStack: "*-staging-*", + expectedColumns: nil, + expectedSort: "", }, } @@ -53,7 +58,8 @@ func TestVendorOptions(t *testing.T) { t.Run(tc.name, func(t *testing.T) { assert.Equal(t, 
tc.expectedFormat, tc.opts.Format) assert.Equal(t, tc.expectedStack, tc.opts.Stack) - assert.Equal(t, tc.expectedDelimiter, tc.opts.Delimiter) + assert.Equal(t, tc.expectedColumns, tc.opts.Columns) + assert.Equal(t, tc.expectedSort, tc.opts.Sort) }) } } diff --git a/cmd/list/workflows.go b/cmd/list/workflows.go index 7c29d6f465..e5c8d3d9cd 100644 --- a/cmd/list/workflows.go +++ b/cmd/list/workflows.go @@ -1,16 +1,23 @@ package list import ( + "fmt" + "github.com/spf13/cobra" "github.com/spf13/viper" + e "github.com/cloudposse/atmos/internal/exec" "github.com/cloudposse/atmos/pkg/config" "github.com/cloudposse/atmos/pkg/flags" "github.com/cloudposse/atmos/pkg/flags/global" - l "github.com/cloudposse/atmos/pkg/list" + "github.com/cloudposse/atmos/pkg/list/column" + "github.com/cloudposse/atmos/pkg/list/extract" + "github.com/cloudposse/atmos/pkg/list/filter" + "github.com/cloudposse/atmos/pkg/list/format" + "github.com/cloudposse/atmos/pkg/list/renderer" + listSort "github.com/cloudposse/atmos/pkg/list/sort" "github.com/cloudposse/atmos/pkg/schema" - "github.com/cloudposse/atmos/pkg/ui/theme" - u "github.com/cloudposse/atmos/pkg/utils" + "github.com/cloudposse/atmos/pkg/ui" ) var workflowsParser *flags.StandardParser @@ -18,70 +25,193 @@ var workflowsParser *flags.StandardParser // WorkflowsOptions contains parsed flags for the workflows command. type WorkflowsOptions struct { global.Flags - File string - Format string - Delimiter string + File string + Format string + Columns []string + Sort string } // workflowsCmd lists atmos workflows. 
var workflowsCmd = &cobra.Command{ Use: "workflows", - Short: "List all Atmos workflows", - Long: "List Atmos workflows, with options to filter results by specific files.", + Short: "List all Atmos workflows with filtering, sorting, and formatting options", + Long: `List Atmos workflows with support for filtering by file, custom column selection, sorting, and multiple output formats.`, + Args: cobra.NoArgs, RunE: func(cmd *cobra.Command, args []string) error { + // Skip stack validation for workflows. if err := checkAtmosConfig(true); err != nil { return err } - // Parse flags using StandardParser with Viper precedence + // Parse flags using StandardParser with Viper precedence. v := viper.GetViper() if err := workflowsParser.BindFlagsToViper(cmd, v); err != nil { return err } opts := &WorkflowsOptions{ - Flags: flags.ParseGlobalFlags(cmd, v), - File: v.GetString("file"), - Format: v.GetString("format"), - Delimiter: v.GetString("delimiter"), + Flags: flags.ParseGlobalFlags(cmd, v), + File: v.GetString("file"), + Format: v.GetString("format"), + Columns: v.GetStringSlice("columns"), + Sort: v.GetString("sort"), } - output, err := listWorkflowsWithOptions(opts) - if err != nil { - return err + return listWorkflowsWithOptions(cmd, args, opts) + }, +} + +// columnsCompletionForWorkflows provides dynamic tab completion for --columns flag. +// Returns column names from atmos.yaml workflows.list.columns configuration. +func columnsCompletionForWorkflows(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { + // Load atmos configuration with CLI flags. + configAndStacksInfo, err := e.ProcessCommandLineArgs("list", cmd, args, nil) + if err != nil { + return nil, cobra.ShellCompDirectiveNoFileComp + } + + atmosConfig, err := config.InitCliConfig(configAndStacksInfo, false) + if err != nil { + return nil, cobra.ShellCompDirectiveNoFileComp + } + + // Extract column names from atmos.yaml configuration. 
+ if len(atmosConfig.Workflows.List.Columns) > 0 { + var columnNames []string + for _, col := range atmosConfig.Workflows.List.Columns { + columnNames = append(columnNames, col.Name) } + return columnNames, cobra.ShellCompDirectiveNoFileComp + } - u.PrintMessageInColor(output, theme.Colors.Success) - return nil - }, + // If no custom columns configured, return empty list. + return nil, cobra.ShellCompDirectiveNoFileComp } func init() { - // Create parser with workflows-specific flags using functional options - workflowsParser = flags.NewStandardParser( - flags.WithStringFlag("file", "f", "", "Filter workflows by file (e.g., atmos list workflows -f workflow1)"), - flags.WithStringFlag("format", "", "", "Output format (table, json, csv)"), - flags.WithStringFlag("delimiter", "", "\t", "Delimiter for csv output"), - flags.WithEnvVars("file", "ATMOS_WORKFLOW_FILE"), - flags.WithEnvVars("format", "ATMOS_LIST_FORMAT"), - flags.WithEnvVars("delimiter", "ATMOS_LIST_DELIMITER"), + // Create parser with workflows-specific flags using flag wrappers. + workflowsParser = NewListParser( + WithFormatFlag, + WithWorkflowsColumnsFlag, + WithSortFlag, + WithFileFlag, ) - // Register flags + // Register flags. workflowsParser.RegisterFlags(workflowsCmd) - // Bind flags to Viper for environment variable support + // Register dynamic tab completion for --columns flag. + if err := workflowsCmd.RegisterFlagCompletionFunc("columns", columnsCompletionForWorkflows); err != nil { + panic(err) + } + + // Bind flags to Viper for environment variable support. if err := workflowsParser.BindToViper(viper.GetViper()); err != nil { panic(err) } } -func listWorkflowsWithOptions(opts *WorkflowsOptions) (string, error) { - configAndStacksInfo := schema.ConfigAndStacksInfo{} +func listWorkflowsWithOptions(cmd *cobra.Command, args []string, opts *WorkflowsOptions) error { + // Process command line args to get real ConfigAndStacksInfo with CLI flags. 
+ configAndStacksInfo, err := e.ProcessCommandLineArgs("list", cmd, args, nil) + if err != nil { + return err + } + atmosConfig, err := config.InitCliConfig(configAndStacksInfo, false) if err != nil { - return "", err + return err + } + + // If format is empty, check command-specific config. + if opts.Format == "" && atmosConfig.Workflows.List.Format != "" { + opts.Format = atmosConfig.Workflows.List.Format + } + + // Extract workflows into structured data. + workflows, err := extract.Workflows(&atmosConfig, opts.File) + if err != nil { + return err + } + + if len(workflows) == 0 { + _ = ui.Info("No workflows found") + return nil + } + + // Build filters. + filters := buildWorkflowFilters(opts) + + // Get column configuration. + columns := getWorkflowColumns(&atmosConfig, opts.Columns) + + // Build column selector. + selector, err := column.NewSelector(columns, column.BuildColumnFuncMap()) + if err != nil { + return fmt.Errorf("error creating column selector: %w", err) + } + + // Build sorters. + sorters, err := buildWorkflowSorters(opts.Sort) + if err != nil { + return fmt.Errorf("error parsing sort specification: %w", err) + } + + // Create renderer and execute pipeline. + outputFormat := format.Format(opts.Format) + r := renderer.New(filters, selector, sorters, outputFormat, "") + + return r.Render(workflows) +} + +// buildWorkflowFilters creates filters based on command options. +func buildWorkflowFilters(opts *WorkflowsOptions) []filter.Filter { + var filters []filter.Filter + + // File filter already handled by extraction logic. + // Add any additional filters here in the future. + + return filters +} + +// getWorkflowColumns returns column configuration. +func getWorkflowColumns(atmosConfig *schema.AtmosConfiguration, columnsFlag []string) []column.Config { + // If --columns flag is provided, parse it and return. + if len(columnsFlag) > 0 { + return parseColumnsFlag(columnsFlag) + } + + // Check atmos.yaml for workflows.list.columns configuration. 
+ if len(atmosConfig.Workflows.List.Columns) > 0 { + var configs []column.Config + for _, col := range atmosConfig.Workflows.List.Columns { + configs = append(configs, column.Config{ + Name: col.Name, + Value: col.Value, + Width: col.Width, + }) + } + return configs + } + + // Default columns for workflows. + return []column.Config{ + {Name: "File", Value: "{{ .file }}"}, + {Name: "Workflow", Value: "{{ .name }}"}, + {Name: "Description", Value: "{{ .description }}"}, + {Name: "Steps", Value: "{{ .steps }}"}, + } +} + +// buildWorkflowSorters creates sorters from sort specification. +func buildWorkflowSorters(sortSpec string) ([]*listSort.Sorter, error) { + if sortSpec == "" { + // Default sort: by file ascending, then workflow ascending. + return []*listSort.Sorter{ + listSort.NewSorter("File", listSort.Ascending), + listSort.NewSorter("Workflow", listSort.Ascending), + }, nil } - return l.FilterAndListWorkflows(opts.File, atmosConfig.Workflows.List, opts.Format, opts.Delimiter) + return listSort.ParseSortSpec(sortSpec) } diff --git a/cmd/root.go b/cmd/root.go index 861e62adf2..81a9df8d2e 100644 --- a/cmd/root.go +++ b/cmd/root.go @@ -38,6 +38,7 @@ import ( "github.com/cloudposse/atmos/pkg/profiler" "github.com/cloudposse/atmos/pkg/schema" "github.com/cloudposse/atmos/pkg/telemetry" + "github.com/cloudposse/atmos/pkg/terminal" "github.com/cloudposse/atmos/pkg/ui" "github.com/cloudposse/atmos/pkg/ui/heatmap" "github.com/cloudposse/atmos/pkg/ui/markdown" @@ -84,11 +85,12 @@ var chdirProcessed bool // It recognizes `--chdir=value`, `--chdir value`, `-C=value`, `-Cvalue`, and `-C value` forms. // If no chdir flag is found, it returns an empty string. func parseChdirFromArgs() string { - return parseChdirFromArgList(os.Args) + return parseChdirFromArgsInternal(os.Args) } -// parseChdirFromArgList manually parses --chdir or -C flag from the given argument list. 
-func parseChdirFromArgList(args []string) string {
+// parseChdirFromArgsInternal parses --chdir or -C flag from the provided args.
+// This internal version accepts args as a parameter for testability.
+func parseChdirFromArgsInternal(args []string) string {
 	for i := 0; i < len(args); i++ {
 		arg := args[i]
@@ -405,6 +407,11 @@ var RootCmd = &cobra.Command{
 		ui.InitFormatter(ioCtx)
 		data.InitWriter(ioCtx)
 		data.SetMarkdownRenderer(ui.Format) // Connect markdown rendering to data channel
+
+		// Configure lipgloss color profile based on terminal capabilities.
+		// This ensures tables and styled output degrade gracefully when piped or in non-TTY environments.
+		term := terminal.New()
+		lipgloss.SetColorProfile(convertToTermenvProfile(term.ColorProfile()))
 	},
 	PersistentPostRun: func(cmd *cobra.Command, args []string) {
 		// Stop profiler after command execution.
@@ -1293,7 +1300,25 @@ func getInvalidCommandName(input string) string {
+// convertToTermenvProfile converts our terminal.ColorProfile to termenv.Profile.
+func convertToTermenvProfile(profile terminal.ColorProfile) termenv.Profile {
+	switch profile {
+	case terminal.ColorNone:
+		return termenv.Ascii
+	case terminal.Color16:
+		return termenv.ANSI
+	case terminal.Color256:
+		return termenv.ANSI256
+	case terminal.ColorTrue:
+		return termenv.TrueColor
+	default:
+		// Default to ASCII (no color) for unknown profiles.
+		return termenv.Ascii
+	}
+}
+
 // displayPerformanceHeatmap shows the performance heatmap visualization.
 //
 //nolint:unparam // cmd parameter reserved for future use
 func displayPerformanceHeatmap(cmd *cobra.Command, mode string) error {
 	// Print performance summary to console, filtering out zero-time functions.
snap := perf.SnapshotTopFiltered("total", defaultTopFunctionsMax) diff --git a/cmd/root_test.go b/cmd/root_test.go index ec6583e7b4..3bd45e0c3e 100644 --- a/cmd/root_test.go +++ b/cmd/root_test.go @@ -836,12 +836,12 @@ func TestParseChdirFromArgs(t *testing.T) { for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { - // Call the function with test args. - result := parseChdirFromArgList(tt.args) + // Call the internal function directly with test args. + result := parseChdirFromArgsInternal(tt.args) // Verify. assert.Equal(t, tt.expected, result, - "parseChdirFromArgList() with args %v should return %q, got %q", + "parseChdirFromArgsInternal() with args %v should return %q, got %q", tt.args, tt.expected, result) }) } diff --git a/docs/list-flag-wrappers.md b/docs/list-flag-wrappers.md new file mode 100644 index 0000000000..5db5aadaa8 --- /dev/null +++ b/docs/list-flag-wrappers.md @@ -0,0 +1,579 @@ +# List Command Flag Wrappers + +## Overview + +This document explains the flag wrapper pattern used across list commands in `cmd/list/`. The wrapper functions follow the StandardParser pattern from `pkg/flags/` and provide a consistent, reusable way to compose flags for each command. + +## Design Principles + +### 1. One Function Per Flag + +Each flag gets its own wrapper function (not grouped). This provides maximum flexibility for composing only the flags each command needs. + +**Example:** +```go +func WithFormatFlag(options *[]flags.Option) +func WithColumnsFlag(options *[]flags.Option) +func WithSortFlag(options *[]flags.Option) +``` + +### 2. Consistent Naming Convention + +All wrapper functions follow the `With{FlagName}Flag` pattern, matching the convention in `pkg/flags/standard_builder.go`. + +**Good:** +```go +WithFormatFlag +WithColumnsFlag +WithStackFlag +``` + +**Bad:** +```go +addFormatFlag // Wrong prefix +AddFormat // Missing "Flag" suffix +withFormat // Not exported +``` + +### 3. 
Composable by Design + +Commands use `NewListParser()` with only the wrapper functions they need. This is the "options pattern" applied to flag composition. + +**Example:** +```go +// Components needs many flags +componentsParser = NewListParser( + WithFormatFlag, + WithColumnsFlag, + WithSortFlag, + WithFilterFlag, + WithStackFlag, + WithTypeFlag, + WithEnabledFlag, + WithLockedFlag, +) + +// Stacks needs fewer flags +stacksParser = NewListParser( + WithFormatFlag, + WithColumnsFlag, + WithSortFlag, + WithComponentFlag, +) +``` + +### 4. Single Source of Truth + +Each wrapper function defines: +- Flag name and shorthand +- Default value +- Description +- Environment variable bindings +- Valid values (if applicable) + +This ensures consistency across all commands using that flag. + +## Available Flag Wrappers + +### Universal Flags (Used by Most Commands) + +#### `WithFormatFlag` +- **Flag:** `--format` / `-f` +- **Environment:** `ATMOS_LIST_FORMAT` +- **Description:** Output format: table, json, yaml, csv, tsv +- **Used by:** All list commands + +#### `WithColumnsFlag` +- **Flag:** `--columns` +- **Environment:** `ATMOS_LIST_COLUMNS` +- **Description:** Columns to display (comma-separated, overrides atmos.yaml) +- **Used by:** components, stacks, workflows, vendor, instances + +#### `WithSortFlag` +- **Flag:** `--sort` +- **Environment:** `ATMOS_LIST_SORT` +- **Description:** Sort by column:order (e.g., 'stack:asc,component:desc') +- **Used by:** components, stacks, workflows, vendor, instances + +#### `WithStackFlag` +- **Flag:** `--stack` / `-s` +- **Environment:** `ATMOS_STACK` +- **Description:** Filter by stack pattern (glob, e.g., 'plat-*-prod') +- **Used by:** components, vendor, values, vars, metadata, settings, instances + +### Filtering Flags + +#### `WithFilterFlag` +- **Flag:** `--filter` +- **Environment:** `ATMOS_LIST_FILTER` +- **Description:** Filter expression using YQ syntax +- **Used by:** components, vendor + +#### `WithQueryFlag` +- **Flag:** 
`--query` / `-q` +- **Environment:** `ATMOS_LIST_QUERY` +- **Description:** YQ expression to filter values (e.g., '.vars.region') +- **Used by:** values, vars, metadata, settings + +### Component-Specific Flags + +#### `WithTypeFlag` +- **Flag:** `--type` / `-t` +- **Environment:** `ATMOS_COMPONENT_TYPE` +- **Description:** Component type: real, abstract, all +- **Default:** `real` +- **Valid Values:** `real`, `abstract`, `all` +- **Used by:** components + +#### `WithEnabledFlag` +- **Flag:** `--enabled` +- **Environment:** `ATMOS_COMPONENT_ENABLED` +- **Description:** Filter by enabled status +- **Default:** `false` +- **Used by:** components + +#### `WithLockedFlag` +- **Flag:** `--locked` +- **Environment:** `ATMOS_COMPONENT_LOCKED` +- **Description:** Filter by locked status +- **Default:** `false` +- **Used by:** components + +#### `WithAbstractFlag` +- **Flag:** `--abstract` +- **Environment:** `ATMOS_ABSTRACT` +- **Description:** Include abstract components in output +- **Default:** `false` +- **Used by:** values, vars + +### Stack-Specific Flags + +#### `WithComponentFlag` +- **Flag:** `--component` / `-c` +- **Environment:** `ATMOS_COMPONENT` +- **Description:** Filter stacks by component name +- **Used by:** stacks + +### Workflow-Specific Flags + +#### `WithFileFlag` +- **Flag:** `--file` +- **Environment:** `ATMOS_WORKFLOW_FILE` +- **Description:** Filter workflows by file path +- **Used by:** workflows + +### Output Formatting Flags + +#### `WithDelimiterFlag` +- **Flag:** `--delimiter` +- **Environment:** `ATMOS_LIST_DELIMITER` +- **Description:** Delimiter for CSV/TSV output +- **Used by:** workflows, vendor, values, vars, metadata, settings, instances + +#### `WithMaxColumnsFlag` +- **Flag:** `--max-columns` +- **Environment:** `ATMOS_LIST_MAX_COLUMNS` +- **Description:** Maximum number of columns to display (0 = no limit) +- **Default:** `0` +- **Used by:** values, vars, metadata, settings + +### Template Processing Flags + +#### 
`WithProcessTemplatesFlag` +- **Flag:** `--process-templates` +- **Environment:** `ATMOS_PROCESS_TEMPLATES` +- **Description:** Enable/disable Go template processing +- **Default:** `true` +- **Used by:** values, vars, metadata, settings + +#### `WithProcessFunctionsFlag` +- **Flag:** `--process-functions` +- **Environment:** `ATMOS_PROCESS_FUNCTIONS` +- **Description:** Enable/disable template function processing +- **Default:** `true` +- **Used by:** values, vars, metadata, settings + +### Pro Integration Flags + +#### `WithUploadFlag` +- **Flag:** `--upload` +- **Environment:** `ATMOS_UPLOAD` +- **Description:** Upload instances to Atmos Pro API +- **Default:** `false` +- **Used by:** instances + +## Usage Patterns + +### Pattern 1: Full-Featured Command (Components) + +```go +func init() { + componentsParser = NewListParser( + WithFormatFlag, // Output format selection + WithColumnsFlag, // Column customization + WithSortFlag, // Sorting + WithFilterFlag, // YQ filtering + WithStackFlag, // Filter by stack + WithTypeFlag, // Filter by component type + WithEnabledFlag, // Filter by enabled status + WithLockedFlag, // Filter by locked status + ) + + componentsParser.RegisterFlags(componentsCmd) + _ = componentsParser.BindToViper(viper.GetViper()) +} +``` + +### Pattern 2: Simple Command (Stacks) + +```go +func init() { + stacksParser = NewListParser( + WithFormatFlag, // Output format + WithColumnsFlag, // Column customization + WithSortFlag, // Sorting + WithComponentFlag, // Filter stacks by component + ) + + stacksParser.RegisterFlags(stacksCmd) + _ = stacksParser.BindToViper(viper.GetViper()) +} +``` + +### Pattern 3: Complex Filtering Command (Values) + +```go +func init() { + valuesParser = NewListParser( + WithFormatFlag, // Output format + WithDelimiterFlag, // CSV/TSV delimiter + WithMaxColumnsFlag, // Limit columns + WithQueryFlag, // YQ expression filtering + WithStackFlag, // Filter by stack + WithAbstractFlag, // Include abstract components + 
WithProcessTemplatesFlag, // Process templates + WithProcessFunctionsFlag, // Process functions + ) + + valuesParser.RegisterFlags(valuesCmd) + _ = valuesParser.BindToViper(viper.GetViper()) +} +``` + +## Flag Mapping Reference + +| Command | Format | Columns | Sort | Filter | Stack | Delimiter | Command-Specific | +|---------|--------|---------|------|--------|-------|-----------|------------------| +| **stacks** | ✓ | ✓ | ✓ | - | - | - | `--component` | +| **components** | ✓ | ✓ | ✓ | ✓ | ✓ | - | `--type`, `--enabled`, `--locked` | +| **workflows** | ✓ | ✓ | ✓ | - | - | ✓ | `--file` | +| **vendor** | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | - | +| **values** | ✓ | - | - | - | ✓ | ✓ | `--max-columns`, `--query`, `--abstract`, `--process-*` | +| **vars** | ✓ | - | - | - | ✓ | ✓ | Same as values (alias) | +| **metadata** | ✓ | - | - | - | ✓ | ✓ | `--max-columns`, `--query`, `--process-*` | +| **settings** | ✓ | - | - | - | ✓ | ✓ | `--max-columns`, `--query`, `--process-*` | +| **instances** | ✓ | ✓ | ✓ | - | ✓ | ✓ | `--upload` | + +## Best Practices + +### 1. Only Include Flags Your Command Needs + +Don't add flags "just in case" - compose only what makes sense for your command. + +**Good:** +```go +// Stacks command doesn't need --stack flag (it lists all stacks) +stacksParser = NewListParser( + WithFormatFlag, + WithComponentFlag, // Filter by component makes sense +) +``` + +**Bad:** +```go +// Don't add flags that don't make sense +stacksParser = NewListParser( + WithFormatFlag, + WithStackFlag, // ❌ Doesn't make sense for listing stacks + WithLockedFlag, // ❌ Stacks don't have locked status +) +``` + +### 2. Follow Alphabetical Ordering (Optional) + +For readability, consider ordering wrapper functions alphabetically or by logical grouping. 
+ +**Example:** +```go +componentsParser = NewListParser( + // Output formatting + WithFormatFlag, + WithColumnsFlag, + WithSortFlag, + + // Filtering + WithFilterFlag, + WithStackFlag, + WithTypeFlag, + + // Boolean filters + WithEnabledFlag, + WithLockedFlag, +) +``` + +### 3. Add Comments for Clarity + +When composing flags, add comments explaining what each flag does. + +**Example:** +```go +componentsParser = NewListParser( + WithFormatFlag, // --format (table/json/yaml/csv/tsv) + WithColumnsFlag, // --columns (override atmos.yaml) + WithSortFlag, // --sort "stack:asc,component:desc" + WithFilterFlag, // --filter (YQ expression) + WithStackFlag, // --stack "plat-*-prod" + WithTypeFlag, // --type real/abstract/all + WithEnabledFlag, // --enabled=true + WithLockedFlag, // --locked=false +) +``` + +### 4. Reuse Existing Wrappers + +Before creating a new wrapper, check if one already exists in `flag_wrappers.go`. Reuse whenever possible. + +### 5. Test Your Flag Composition + +Use the test patterns in `flag_wrappers_test.go` to verify your flag composition works correctly. + +```go +func TestYourCommand_Flags(t *testing.T) { + parser := NewListParser( + WithFormatFlag, + WithColumnsFlag, + ) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + // Verify expected flags exist + assert.NotNil(t, cmd.Flags().Lookup("format")) + assert.NotNil(t, cmd.Flags().Lookup("columns")) + + // Verify unwanted flags don't exist + assert.Nil(t, cmd.Flags().Lookup("stack")) +} +``` + +## Adding New Flag Wrappers + +When you need to add a new flag wrapper: + +### 1. Follow the Template + +```go +// WithYourFlagNameFlag adds your-flag-name flag with environment variable support. +// Used by: command1, command2. 
+func WithYourFlagNameFlag(options *[]flags.Option) { + defer perf.Track(nil, "list.WithYourFlagNameFlag")() + + *options = append(*options, + flags.WithStringFlag("your-flag-name", "y", "default", "Description"), + flags.WithEnvVars("your-flag-name", "ATMOS_YOUR_FLAG_NAME"), + ) +} +``` + +### 2. Add Godoc Comments + +Include: +- What the flag does +- Which commands use it +- Example usage (if complex) + +### 3. Add Tests + +Add test coverage in `flag_wrappers_test.go`: + +```go +func TestWithYourFlagNameFlag(t *testing.T) { + parser := NewListParser(WithYourFlagNameFlag) + assert.NotNil(t, parser) + + cmd := &cobra.Command{Use: "test"} + parser.RegisterFlags(cmd) + + flag := cmd.Flags().Lookup("your-flag-name") + require.NotNil(t, flag, "your-flag-name flag should be registered") + assert.Equal(t, "y", flag.Shorthand) + assert.Equal(t, "default", flag.DefValue) + assert.Contains(t, flag.Usage, "Description") +} +``` + +### 4. Update Documentation + +Add your new flag to this document's "Available Flag Wrappers" section. + +## Benefits of This Pattern + +### 1. Discoverability + +Autocomplete shows all available flag wrappers when typing `With` in your editor. + +### 2. Consistency + +Each flag has the same configuration across all commands that use it: +- Same description +- Same environment variable +- Same default value +- Same shorthand + +### 3. Maintainability + +Updating a flag's behavior requires changing only one function, not every command that uses it. + +### 4. Testability + +Each wrapper can be tested independently, and flag composition can be verified per command. + +### 5. 
Readability + +Command initialization clearly shows which flags are supported: + +```go +componentsParser = NewListParser( + WithFormatFlag, + WithColumnsFlag, + WithSortFlag, +) +``` + +This is more readable than: + +```go +componentsParser = flags.NewStandardParser( + flags.WithStringFlag("format", "", "", "Output format: table, json, yaml, csv, tsv"), + flags.WithEnvVars("format", "ATMOS_LIST_FORMAT"), + flags.WithStringSliceFlag("columns", "", nil, "Columns to display"), + flags.WithEnvVars("columns", "ATMOS_LIST_COLUMNS"), + flags.WithStringFlag("sort", "", "", "Sort by column:order"), + flags.WithEnvVars("sort", "ATMOS_LIST_SORT"), +) +``` + +## Common Mistakes to Avoid + +### 1. Don't Modify the Options Slice Incorrectly + +**Wrong:** +```go +func WithBadFlag(options *[]flags.Option) { + options = append(*options, flags.WithStringFlag(...)) // ❌ Missing dereference +} +``` + +**Correct:** +```go +func WithGoodFlag(options *[]flags.Option) { + *options = append(*options, flags.WithStringFlag(...)) // ✅ Correct +} +``` + +### 2. Don't Forget Environment Variable Bindings + +**Wrong:** +```go +func WithFormatFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithStringFlag("format", "", "", "Output format"), + // ❌ Missing WithEnvVars + ) +} +``` + +**Correct:** +```go +func WithFormatFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithStringFlag("format", "", "", "Output format"), + flags.WithEnvVars("format", "ATMOS_LIST_FORMAT"), // ✅ Correct + ) +} +``` + +### 3. Don't Use Inconsistent Naming + +**Wrong:** +```go +func AddFormatFlag(...) // ❌ Wrong prefix +func withColumnsFlag(...) // ❌ Not exported +func WithFormat(...) // ❌ Missing "Flag" suffix +``` + +**Correct:** +```go +func WithFormatFlag(...) // ✅ Correct +func WithColumnsFlag(...) 
// ✅ Correct +``` + +## Migration Guide + +If you have existing list commands using the old pattern, follow these steps: + +### Step 1: Identify Flags Used + +```go +// Old pattern +cmd.Flags().StringP("format", "", "", "Output format") +cmd.Flags().StringSliceP("columns", "", nil, "Columns") +``` + +### Step 2: Map to Wrapper Functions + +```go +// New pattern +componentsParser = NewListParser( + WithFormatFlag, + WithColumnsFlag, +) +``` + +### Step 3: Update RunE to Use Parser + +```go +// Old pattern +RunE: func(cmd *cobra.Command, args []string) error { + format, _ := cmd.Flags().GetString("format") + // ... +} + +// New pattern +RunE: func(cmd *cobra.Command, args []string) error { + v := viper.GetViper() + if err := componentsParser.BindFlagsToViper(cmd, v); err != nil { + return err + } + + format := v.GetString("format") + // ... +} +``` + +### Step 4: Test + +Run tests to ensure flags work as expected: + +```bash +go test ./cmd/list -run TestComponents -v +``` + +## Related Documentation + +- `pkg/flags/standard_parser.go` - StandardParser implementation +- `pkg/flags/standard_builder.go` - Builder pattern with With* methods +- `docs/prd/list-commands-ui-overhaul.md` - List commands architecture +- `flag_wrappers_examples.go` - Example usage patterns +- `flag_wrappers_test.go` - Test coverage examples diff --git a/docs/list-implementation-guide.md b/docs/list-implementation-guide.md new file mode 100644 index 0000000000..47faf977dd --- /dev/null +++ b/docs/list-implementation-guide.md @@ -0,0 +1,401 @@ +# List Command Flag Wrappers - Implementation Guide + +## Answers to Your Questions + +### 1. Should I create one wrapper function per flag? 
+ +**YES - One function per flag is the correct approach.** + +**Rationale:** +- Maximum flexibility - each command composes only the flags it needs +- Follows the established `With*` naming convention from `pkg/flags/standard_builder.go` +- Better discoverability via autocomplete +- Easier to test individually +- Single source of truth for each flag's configuration + +**Example:** +```go +func WithFormatFlag(options *[]flags.Option) +func WithColumnsFlag(options *[]flags.Option) +func WithStackFlag(options *[]flags.Option) +``` + +**NOT grouped functions like:** +```go +func WithFilterFlags(options *[]flags.Option) // ❌ Too broad +func WithOutputFlags(options *[]flags.Option) // ❌ Not granular enough +``` + +### 2. What naming convention should I use? + +**Use: `With{FlagName}Flag` pattern** + +**Examples:** +- `WithFormatFlag` - for `--format` flag +- `WithColumnsFlag` - for `--columns` flag +- `WithStackFlag` - for `--stack` flag +- `WithEnabledFlag` - for `--enabled` flag + +**Key rules:** +1. **Prefix:** `With` (capitalized, exported) +2. **Middle:** Flag name in PascalCase (e.g., `MaxColumns` for `--max-columns`) +3. **Suffix:** `Flag` (makes it clear this is about a flag) + +**Consistency with pkg/flags/:** +This follows the same pattern as `pkg/flags/standard_builder.go`: +- `WithStack(bool)` - builder method for stack flag +- `WithFormat([]string, string)` - builder method for format flag + +Our list-specific wrappers extend this pattern: +- `WithStackFlag(*[]flags.Option)` - wrapper that appends stack flag options +- `WithFormatFlag(*[]flags.Option)` - wrapper that appends format flag options + +### 3. How do I handle flags only needed by specific commands? 
+ +**Create command-specific flag wrappers and only use them where needed.** + +**Example - Components-only flags:** + +```go +// cmd/list/flag_wrappers.go +func WithTypeFlag(options *[]flags.Option) { + // Component type filter (real/abstract/all) + // ONLY used by: components command +} + +func WithEnabledFlag(options *[]flags.Option) { + // Enabled filter + // ONLY used by: components command +} + +func WithLockedFlag(options *[]flags.Option) { + // Locked filter + // ONLY used by: components command +} +``` + +**Usage in components.go:** +```go +func init() { + componentsParser = NewListParser( + WithFormatFlag, // Universal flag + WithColumnsFlag, // Universal flag + WithSortFlag, // Universal flag + WithStackFlag, // Universal flag + WithTypeFlag, // ✓ Components-specific + WithEnabledFlag, // ✓ Components-specific + WithLockedFlag, // ✓ Components-specific + ) +} +``` + +**Usage in stacks.go (doesn't need component-specific flags):** +```go +func init() { + stacksParser = NewListParser( + WithFormatFlag, // Universal flag + WithColumnsFlag, // Universal flag + WithSortFlag, // Universal flag + WithComponentFlag, // ✓ Stacks-specific (filter stacks by component) + // NO type, enabled, locked - they don't apply to stacks + ) +} +``` + +**Benefits:** +- Each command composes ONLY the flags it needs +- No unused flags registered +- Clear which flags apply to which commands +- Prevents user confusion (e.g., `--locked` doesn't appear on `atmos list stacks --help`) + +### 4. Should wrapper functions set default values? 
+ +**YES - Wrapper functions should set default values.** + +Each wrapper is the single source of truth for that flag's configuration, including: +- Default value +- Description +- Environment variable bindings +- Shorthand +- Valid values (if applicable) + +**Example:** +```go +func WithTypeFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithStringFlag("type", "t", "real", "Component type: real, abstract, all"), + // ^^^^^ Default value set here + flags.WithEnvVars("type", "ATMOS_COMPONENT_TYPE"), + flags.WithValidValues("type", "real", "abstract", "all"), + ) +} +``` + +**Why this matters:** +- **Consistency:** Same default everywhere the flag is used +- **Maintainability:** Change default in one place +- **Documentation:** Default is self-documenting in the code + +**Commands can override if needed:** +Commands receive parsed values through Viper and can apply their own logic: + +```go +// In command RunE: +v := viper.GetViper() +componentType := v.GetString("type") + +// Apply command-specific logic if needed +if componentType == "" { + componentType = "real" // Override if you need different default +} +``` + +But this is rare - usually the wrapper's default is correct for all uses. + +### 5. How do commands compose these wrappers? + +**Use `NewListParser()` with variadic builder functions.** + +**Pattern:** +```go +// cmd/list/components.go +var componentsParser *flags.StandardParser + +func init() { + componentsParser = NewListParser( + WithFormatFlag, + WithColumnsFlag, + WithSortFlag, + WithFilterFlag, + WithStackFlag, + WithTypeFlag, + WithEnabledFlag, + WithLockedFlag, + ) + + componentsParser.RegisterFlags(componentsCmd) + _ = componentsParser.BindToViper(viper.GetViper()) +} +``` + +**How it works:** +1. `NewListParser()` creates empty options slice +2. Each `With*Flag` function appends to the slice +3. 
Returns `*flags.StandardParser` configured with those flags + +**Implementation:** +```go +// cmd/list/flag_wrappers.go +func NewListParser(builders ...func(*[]flags.Option)) *flags.StandardParser { + options := []flags.Option{} + + // Apply each builder function + for _, builder := range builders { + builder(&options) // Builder appends its flags to options + } + + return flags.NewStandardParser(options...) +} +``` + +## Command-Specific Examples + +### Example 1: Components (Full-Featured) + +```go +// cmd/list/components.go +func init() { + componentsParser = NewListParser( + // Universal flags + WithFormatFlag, // --format table/json/yaml/csv/tsv + WithColumnsFlag, // --columns (override atmos.yaml) + WithSortFlag, // --sort "stack:asc,component:desc" + WithFilterFlag, // --filter (YQ expression) + WithStackFlag, // --stack "plat-*-prod" + + // Component-specific flags + WithTypeFlag, // --type real/abstract/all + WithEnabledFlag, // --enabled=true + WithLockedFlag, // --locked=false + ) + + componentsParser.RegisterFlags(componentsCmd) + _ = componentsParser.BindToViper(viper.GetViper()) +} +``` + +**Flags available:** +- `--format` / `-f` - Output format +- `--columns` - Column selection +- `--sort` - Sorting +- `--filter` - YQ filter +- `--stack` / `-s` - Stack pattern +- `--type` / `-t` - Component type (real/abstract/all) +- `--enabled` - Filter by enabled status +- `--locked` - Filter by locked status + +### Example 2: Stacks (Simple) + +```go +// cmd/list/stacks.go +func init() { + stacksParser = NewListParser( + WithFormatFlag, // --format + WithColumnsFlag, // --columns + WithSortFlag, // --sort + WithComponentFlag, // --component (filter stacks by component) + ) + + stacksParser.RegisterFlags(stacksCmd) + _ = stacksParser.BindToViper(viper.GetViper()) +} +``` + +**Flags available:** +- `--format` / `-f` - Output format +- `--columns` - Column selection +- `--sort` - Sorting +- `--component` / `-c` - Filter stacks by component + +### Example 3: Values 
(Complex) + +```go +// cmd/list/values.go +func init() { + valuesParser = NewListParser( + WithFormatFlag, // --format + WithDelimiterFlag, // --delimiter (CSV/TSV) + WithMaxColumnsFlag, // --max-columns + WithQueryFlag, // --query (YQ expression) + WithStackFlag, // --stack + WithAbstractFlag, // --abstract + WithProcessTemplatesFlag, // --process-templates + WithProcessFunctionsFlag, // --process-functions + ) + + valuesParser.RegisterFlags(valuesCmd) + _ = valuesParser.BindToViper(viper.GetViper()) +} +``` + +**Flags available:** +- `--format` / `-f` - Output format +- `--delimiter` - CSV/TSV delimiter +- `--max-columns` - Limit columns displayed +- `--query` / `-q` - YQ expression +- `--stack` / `-s` - Stack pattern +- `--abstract` - Include abstract components +- `--process-templates` - Process Go templates +- `--process-functions` - Process template functions + +## Flag Mapping Matrix + +| Command | Format | Columns | Sort | Filter | Stack | Delimiter | Command-Specific | +|---------|--------|---------|------|--------|-------|-----------|------------------| +| **components** | ✓ | ✓ | ✓ | ✓ | ✓ | - | `--type`, `--enabled`, `--locked` | +| **stacks** | ✓ | ✓ | ✓ | - | - | - | `--component` | +| **workflows** | ✓ | ✓ | ✓ | - | - | ✓ | `--file` | +| **vendor** | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | - | +| **values** | ✓ | - | - | - | ✓ | ✓ | `--max-columns`, `--query`, `--abstract`, `--process-*` | +| **vars** | ✓ | - | - | - | ✓ | ✓ | Same as values (alias) | +| **metadata** | ✓ | - | - | - | ✓ | ✓ | `--max-columns`, `--query`, `--process-*` | +| **settings** | ✓ | - | - | - | ✓ | ✓ | `--max-columns`, `--query`, `--process-*` | +| **instances** | ✓ | ✓ | ✓ | - | ✓ | ✓ | `--upload` | + +## Environment Variable Bindings + +All flags support environment variable configuration: + +| Flag | Environment Variable | +|------|---------------------| +| `--format` | `ATMOS_LIST_FORMAT` | +| `--columns` | `ATMOS_LIST_COLUMNS` | +| `--sort` | `ATMOS_LIST_SORT` | +| `--filter` | 
`ATMOS_LIST_FILTER` |
+| `--stack` | `ATMOS_STACK` |
+| `--delimiter` | `ATMOS_LIST_DELIMITER` |
+| `--type` | `ATMOS_COMPONENT_TYPE` |
+| `--enabled` | `ATMOS_COMPONENT_ENABLED` |
+| `--locked` | `ATMOS_COMPONENT_LOCKED` |
+| `--component` | `ATMOS_COMPONENT` |
+| `--query` | `ATMOS_LIST_QUERY` |
+| `--max-columns` | `ATMOS_LIST_MAX_COLUMNS` |
+
+**Usage:**
+```bash
+# Set default format for all list commands
+export ATMOS_LIST_FORMAT=json
+
+# Set default component type filter
+export ATMOS_COMPONENT_TYPE=real
+
+# Now all list commands use these defaults
+atmos list components
+atmos list stacks
+```
+
+## Best Practices Summary
+
+### DO ✅
+- Create one wrapper function per flag
+- Follow `With{FlagName}Flag` naming convention
+- Set default values in the wrapper
+- Compose only the flags your command needs
+- Add comprehensive godoc comments
+- Add unit tests for each wrapper
+- Include environment variable bindings
+
+### DON'T ❌
+- Group multiple flags in one wrapper function
+- Use inconsistent naming (`addFlag`, `withFlag`, `FlagHelper`)
+- Leave default values to command implementation
+- Add flags "just in case" - only add what makes sense
+- Forget to add tests
+- Skip environment variable bindings
+- Duplicate flag configuration across commands
+
+## Files Created
+
+1. **`cmd/list/flag_wrappers.go`** - All wrapper functions + `NewListParser()`
+2. **`cmd/list/flag_wrappers_test.go`** - Comprehensive test coverage
+3. **`cmd/list/flag_wrappers_examples.go`** - Example usage for each command
+4. **`docs/list-flag-wrappers.md`** - Complete reference documentation
+5. **`docs/list-implementation-guide.md`** - This guide (answers to your questions)
+
+## Next Steps
+
+1. 
**Update existing commands** to use the new wrappers: + ```go + // OLD (in components.go) + componentsParser = flags.NewStandardParser( + flags.WithStringFlag("stack", "s", "", "Filter by stack"), + flags.WithEnvVars("stack", "ATMOS_STACK"), + ) + + // NEW + componentsParser = NewListParser( + WithStackFlag, // Much cleaner! + ) + ``` + +2. **Add command-specific wrappers** as needed: + - Create new `With*Flag` functions in `flag_wrappers.go` + - Add tests in `flag_wrappers_test.go` + - Update documentation in `docs/list-flag-wrappers.md` + +3. **Verify all tests pass:** + ```bash + go test ./cmd/list -v + ``` + +4. **Build documentation:** + ```bash + cd website && npm run build + ``` + +## Additional Resources + +- **PRD:** `docs/prd/list-commands-ui-overhaul.md` +- **StandardParser:** `pkg/flags/standard_parser.go` +- **StandardBuilder:** `pkg/flags/standard_builder.go` +- **Flag Types:** `pkg/flags/types.go` diff --git a/docs/prd/auth-context-multi-identity.md b/docs/prd/auth-context-multi-identity.md index 62b0dc4c57..3be5dcfca7 100644 --- a/docs/prd/auth-context-multi-identity.md +++ b/docs/prd/auth-context-multi-identity.md @@ -205,36 +205,36 @@ Spawned processes get derived env vars: // VendorPull(..., authContext.GitHub) // GitHub vendoring uses GitHub creds // GetTerraformState(..., authContext.AWS) // Terraform state uses AWS creds type AuthContext struct { - // AWS holds AWS credentials if an AWS identity is active. - AWS *AWSAuthContext `json:"aws,omitempty" yaml:"aws,omitempty"` + // AWS holds AWS credentials if an AWS identity is active. + AWS *AWSAuthContext `json:"aws,omitempty" yaml:"aws,omitempty"` - // GitHub holds GitHub credentials if a GitHub identity is active (future). 
- // GitHub *GitHubAuthContext `json:"github,omitempty" yaml:"github,omitempty"` + // GitHub holds GitHub credentials if a GitHub identity is active (future). + // GitHub *GitHubAuthContext `json:"github,omitempty" yaml:"github,omitempty"` - // Azure holds Azure credentials if an Azure identity is active (future). - // Azure *AzureAuthContext `json:"azure,omitempty" yaml:"azure,omitempty"` + // Azure holds Azure credentials if an Azure identity is active (future). + // Azure *AzureAuthContext `json:"azure,omitempty" yaml:"azure,omitempty"` - // GCP holds GCP credentials if a GCP identity is active (future). - // GCP *GCPAuthContext `json:"gcp,omitempty" yaml:"gcp,omitempty"` + // GCP holds GCP credentials if a GCP identity is active (future). + // GCP *GCPAuthContext `json:"gcp,omitempty" yaml:"gcp,omitempty"` } // AWSAuthContext holds AWS-specific authentication context. // This is populated by the AWS auth system and consumed by AWS SDK calls. type AWSAuthContext struct { - // CredentialsFile is the absolute path to the AWS credentials file managed by Atmos. - // Example: /home/user/.atmos/auth/aws-sso/credentials - CredentialsFile string `json:"credentials_file" yaml:"credentials_file"` + // CredentialsFile is the absolute path to the AWS credentials file managed by Atmos. + // Example: /home/user/.atmos/auth/aws-sso/credentials + CredentialsFile string `json:"credentials_file" yaml:"credentials_file"` - // ConfigFile is the absolute path to the AWS config file managed by Atmos. - // Example: /home/user/.atmos/auth/aws-sso/config - ConfigFile string `json:"config_file" yaml:"config_file"` + // ConfigFile is the absolute path to the AWS config file managed by Atmos. + // Example: /home/user/.atmos/auth/aws-sso/config + ConfigFile string `json:"config_file" yaml:"config_file"` - // Profile is the AWS profile name to use from the credentials file. - // This corresponds to the identity name in Atmos auth config. 
- Profile string `json:"profile" yaml:"profile"` + // Profile is the AWS profile name to use from the credentials file. + // This corresponds to the identity name in Atmos auth config. + Profile string `json:"profile" yaml:"profile"` - // Region is the AWS region (optional, may be empty if not specified in identity). - Region string `json:"region,omitempty" yaml:"region,omitempty"` + // Region is the AWS region (optional, may be empty if not specified in identity). + Region string `json:"region,omitempty" yaml:"region,omitempty"` } // Future: Add Azure, GCP, GitHub auth contexts following same pattern @@ -249,23 +249,23 @@ type AWSAuthContext struct { ```go type ConfigAndStacksInfo struct { - // ... existing fields ... - ComponentEnvSection AtmosSectionMapType - ComponentAuthSection AtmosSectionMapType - ComponentEnvList []string - - // AuthContext is a REFERENCE to the runtime authentication context. - // The actual AuthContext is created by commands and passed throughout execution. - // ConfigAndStacksInfo holds a reference for convenience (e.g., in YAML processing). - // - // Ownership: Commands create AuthContext, ConfigAndStacksInfo just references it. - // Lifetime: Single command execution (not persisted). - // - // It enables multiple cloud provider identities to be active simultaneously - // (e.g., AWS + GitHub credentials in the same component). - AuthContext *AuthContext - - // ... remaining fields ... + // ... existing fields ... + ComponentEnvSection AtmosSectionMapType + ComponentAuthSection AtmosSectionMapType + ComponentEnvList []string + + // AuthContext is a REFERENCE to the runtime authentication context. + // The actual AuthContext is created by commands and passed throughout execution. + // ConfigAndStacksInfo holds a reference for convenience (e.g., in YAML processing). + // + // Ownership: Commands create AuthContext, ConfigAndStacksInfo just references it. + // Lifetime: Single command execution (not persisted). 
+ // + // It enables multiple cloud provider identities to be active simultaneously + // (e.g., AWS + GitHub credentials in the same component). + AuthContext *AuthContext + + // ... remaining fields ... } ``` @@ -390,53 +390,53 @@ This separation allows: // - identityName: Identity name (e.g., "dev-admin") // - creds: Authenticated credentials (may contain region info) func SetAuthContext(authContext *schema.AuthContext, stackInfo *schema.ConfigAndStacksInfo, providerName, identityName string, creds types.ICredentials) error { - if authContext == nil { - return nil // No auth context to populate - } - - m, err := NewAWSFileManager() - if err != nil { - return errors.Join(errUtils.ErrAuthAwsFileManagerFailed, err) - } - - credentialsPath := m.GetCredentialsPath(providerName) - configPath := m.GetConfigPath(providerName) - - // Extract region from credentials if available. - var region string - if awsCreds, ok := creds.(*AWSCredentials); ok && awsCreds != nil { - region = awsCreds.Region - } - - // Check for component-level region override from merged auth config. - // Stack inheritance allows components to override identity configuration. - if stackInfo != nil && stackInfo.ComponentAuthSection != nil { - if identities, ok := stackInfo.ComponentAuthSection["identities"].(map[string]any); ok { - if identityCfg, ok := identities[identityName].(map[string]any); ok { - if regionOverride, ok := identityCfg["region"].(string); ok && regionOverride != "" { - region = regionOverride - log.Debug("Using component-level region override", "region", region) - } - } - } - } - - // Populate AWS auth context. 
- authContext.AWS = &schema.AWSAuthContext{ - CredentialsFile: credentialsPath, - ConfigFile: configPath, - Profile: identityName, - Region: region, - } - - log.Debug("Set AWS auth context", - "profile", identityName, - "credentials", credentialsPath, - "config", configPath, - "region", region, - ) - - return nil + if authContext == nil { + return nil // No auth context to populate + } + + m, err := NewAWSFileManager() + if err != nil { + return errors.Join(errUtils.ErrAuthAwsFileManagerFailed, err) + } + + credentialsPath := m.GetCredentialsPath(providerName) + configPath := m.GetConfigPath(providerName) + + // Extract region from credentials if available. + var region string + if awsCreds, ok := creds.(*AWSCredentials); ok && awsCreds != nil { + region = awsCreds.Region + } + + // Check for component-level region override from merged auth config. + // Stack inheritance allows components to override identity configuration. + if stackInfo != nil && stackInfo.ComponentAuthSection != nil { + if identities, ok := stackInfo.ComponentAuthSection["identities"].(map[string]any); ok { + if identityCfg, ok := identities[identityName].(map[string]any); ok { + if regionOverride, ok := identityCfg["region"].(string); ok && regionOverride != "" { + region = regionOverride + log.Debug("Using component-level region override", "region", region) + } + } + } + } + + // Populate AWS auth context. + authContext.AWS = &schema.AWSAuthContext{ + CredentialsFile: credentialsPath, + ConfigFile: configPath, + Profile: identityName, + Region: region, + } + + log.Debug("Set AWS auth context", + "profile", identityName, + "credentials", credentialsPath, + "config", configPath, + "region", region, + ) + + return nil } ``` @@ -449,22 +449,22 @@ func SetAuthContext(authContext *schema.AuthContext, stackInfo *schema.ConfigAnd // This derives environment variables from the auth context (single source of truth). // The env vars are used by spawned processes (terraform, helmfile, packer). 
func SetEnvironmentVariables(stackInfo *schema.ConfigAndStacksInfo) error { - if stackInfo == nil || stackInfo.AuthContext == nil || stackInfo.AuthContext.AWS == nil { - return nil // No auth context to derive from - } + if stackInfo == nil || stackInfo.AuthContext == nil || stackInfo.AuthContext.AWS == nil { + return nil // No auth context to derive from + } - awsAuth := stackInfo.AuthContext.AWS + awsAuth := stackInfo.AuthContext.AWS - // Derive environment variables from auth context. - utils.SetEnvironmentVariable(stackInfo, "AWS_SHARED_CREDENTIALS_FILE", awsAuth.CredentialsFile) - utils.SetEnvironmentVariable(stackInfo, "AWS_CONFIG_FILE", awsAuth.ConfigFile) - utils.SetEnvironmentVariable(stackInfo, "AWS_PROFILE", awsAuth.Profile) + // Derive environment variables from auth context. + utils.SetEnvironmentVariable(stackInfo, "AWS_SHARED_CREDENTIALS_FILE", awsAuth.CredentialsFile) + utils.SetEnvironmentVariable(stackInfo, "AWS_CONFIG_FILE", awsAuth.ConfigFile) + utils.SetEnvironmentVariable(stackInfo, "AWS_PROFILE", awsAuth.Profile) - if awsAuth.Region != "" { - utils.SetEnvironmentVariable(stackInfo, "AWS_REGION", awsAuth.Region) - } + if awsAuth.Region != "" { + utils.SetEnvironmentVariable(stackInfo, "AWS_REGION", awsAuth.Region) + } - return nil + return nil } ``` @@ -477,29 +477,29 @@ func SetEnvironmentVariables(stackInfo *schema.ConfigAndStacksInfo) error { ```go func (i *AssumeRoleIdentity) PostAuthenticate( - ctx context.Context, - stackInfo *schema.ConfigAndStacksInfo, - providerName, identityName string, - creds types.ICredentials, + ctx context.Context, + stackInfo *schema.ConfigAndStacksInfo, + providerName, identityName string, + creds types.ICredentials, ) error { - // ... existing credential setup ... + // ... existing credential setup ... 
- if err := awsCloud.SetupFiles(providerName, identityName, creds); err != nil { - return errors.Join(errUtils.ErrAwsAuth, err) - } + if err := awsCloud.SetupFiles(providerName, identityName, creds); err != nil { + return errors.Join(errUtils.ErrAwsAuth, err) + } - // NEW: Set auth context (single source of truth). - if err := awsCloud.SetAuthContext(stackInfo, providerName, identityName); err != nil { - return errors.Join(errUtils.ErrAwsAuth, err) - } + // NEW: Set auth context (single source of truth). + if err := awsCloud.SetAuthContext(stackInfo, providerName, identityName); err != nil { + return errors.Join(errUtils.ErrAwsAuth, err) + } - // NEW: Derive environment variables from auth context. - // This populates ComponentEnvSection from the auth context. - if err := awsCloud.SetEnvironmentVariables(stackInfo); err != nil { - return errors.Join(errUtils.ErrAwsAuth, err) - } + // NEW: Derive environment variables from auth context. + // This populates ComponentEnvSection from the auth context. + if err := awsCloud.SetEnvironmentVariables(stackInfo); err != nil { + return errors.Join(errUtils.ErrAwsAuth, err) + } - return nil + return nil } // Apply same pattern to PermissionSetIdentity and UserIdentity @@ -516,73 +516,73 @@ func (i *AssumeRoleIdentity) PostAuthenticate( // If authContext is provided, it uses the Atmos-managed credentials files and profile. // Otherwise, it falls back to standard AWS SDK credential resolution. func LoadAWSConfigWithAuth( - ctx context.Context, - region string, - roleArn string, - assumeRoleDuration time.Duration, - authContext *schema.AWSAuthContext, + ctx context.Context, + region string, + roleArn string, + assumeRoleDuration time.Duration, + authContext *schema.AWSAuthContext, ) (aws.Config, error) { - defer perf.Track(nil, "aws_utils.LoadAWSConfigWithAuth")() - - var cfgOpts []func(*config.LoadOptions) error - - // If auth context is provided, use Atmos-managed credentials. 
- if authContext != nil { - log.Debug("Using Atmos auth context for AWS SDK", - "profile", authContext.Profile, - "credentials", authContext.CredentialsFile, - "config", authContext.ConfigFile, - ) - - // Set custom credential and config file paths. - // This overrides the default ~/.aws/credentials and ~/.aws/config. - cfgOpts = append(cfgOpts, - config.WithSharedCredentialsFiles([]string{authContext.CredentialsFile}), - config.WithSharedConfigFiles([]string{authContext.ConfigFile}), - config.WithSharedConfigProfile(authContext.Profile), - ) - - // Use region from auth context if not explicitly provided. - if region == "" && authContext.Region != "" { - region = authContext.Region - } - } - - // Set region if provided. - if region != "" { - cfgOpts = append(cfgOpts, config.WithRegion(region)) - } - - // Load base config. - baseCfg, err := config.LoadDefaultConfig(ctx, cfgOpts...) - if err != nil { - return aws.Config{}, fmt.Errorf("%w: %v", errUtils.ErrLoadAwsConfig, err) - } - - // Conditionally assume role if specified. - if roleArn != "" { - log.Debug("Assuming role", "ARN", roleArn) - stsClient := sts.NewFromConfig(baseCfg) - - creds := stscreds.NewAssumeRoleProvider(stsClient, roleArn, func(o *stscreds.AssumeRoleOptions) { - o.Duration = assumeRoleDuration - }) - - cfgOpts = append(cfgOpts, config.WithCredentialsProvider(aws.NewCredentialsCache(creds))) - - // Reload full config with assumed role credentials. - return config.LoadDefaultConfig(ctx, cfgOpts...) - } - - return baseCfg, nil + defer perf.Track(nil, "aws_utils.LoadAWSConfigWithAuth")() + + var cfgOpts []func(*config.LoadOptions) error + + // If auth context is provided, use Atmos-managed credentials. + if authContext != nil { + log.Debug("Using Atmos auth context for AWS SDK", + "profile", authContext.Profile, + "credentials", authContext.CredentialsFile, + "config", authContext.ConfigFile, + ) + + // Set custom credential and config file paths. 
+ // This overrides the default ~/.aws/credentials and ~/.aws/config. + cfgOpts = append(cfgOpts, + config.WithSharedCredentialsFiles([]string{authContext.CredentialsFile}), + config.WithSharedConfigFiles([]string{authContext.ConfigFile}), + config.WithSharedConfigProfile(authContext.Profile), + ) + + // Use region from auth context if not explicitly provided. + if region == "" && authContext.Region != "" { + region = authContext.Region + } + } + + // Set region if provided. + if region != "" { + cfgOpts = append(cfgOpts, config.WithRegion(region)) + } + + // Load base config. + baseCfg, err := config.LoadDefaultConfig(ctx, cfgOpts...) + if err != nil { + return aws.Config{}, fmt.Errorf("%w: %v", errUtils.ErrLoadAWSConfig, err) + } + + // Conditionally assume role if specified. + if roleArn != "" { + log.Debug("Assuming role", "ARN", roleArn) + stsClient := sts.NewFromConfig(baseCfg) + + creds := stscreds.NewAssumeRoleProvider(stsClient, roleArn, func(o *stscreds.AssumeRoleOptions) { + o.Duration = assumeRoleDuration + }) + + cfgOpts = append(cfgOpts, config.WithCredentialsProvider(aws.NewCredentialsCache(creds))) + + // Reload full config with assumed role credentials. + return config.LoadDefaultConfig(ctx, cfgOpts...) + } + + return baseCfg, nil } // LoadAWSConfig is kept for backward compatibility. // It wraps LoadAWSConfigWithAuth with nil authContext. 
func LoadAWSConfig(ctx context.Context, region string, roleArn string, assumeRoleDuration time.Duration) (aws.Config, error) { - defer perf.Track(nil, "aws_utils.LoadAWSConfig")() + defer perf.Track(nil, "aws_utils.LoadAWSConfig")() - return LoadAWSConfigWithAuth(ctx, region, roleArn, assumeRoleDuration, nil) + return LoadAWSConfigWithAuth(ctx, region, roleArn, assumeRoleDuration, nil) } ``` @@ -594,34 +594,34 @@ func LoadAWSConfig(ctx context.Context, region string, roleArn string, assumeRol ```go func GetTerraformState( - atmosConfig *schema.AtmosConfiguration, - yamlFunc string, - stack string, - component string, - output string, - skipCache bool, - authContext *schema.AuthContext, // NEW: Optional auth context + atmosConfig *schema.AtmosConfiguration, + yamlFunc string, + stack string, + component string, + output string, + skipCache bool, + authContext *schema.AuthContext, // NEW: Optional auth context ) (any, error) { - defer perf.Track(atmosConfig, "exec.GetTerraformState")() + defer perf.Track(atmosConfig, "exec.GetTerraformState")() - // ... existing cache logic ... + // ... existing cache logic ... - componentSections, err := ExecuteDescribeComponent(component, stack, true, true, nil) - if err != nil { - er := fmt.Errorf("%w `%s` in stack `%s`\nin YAML function: `%s`\n%v", errUtils.ErrDescribeComponent, component, stack, yamlFunc, err) - return nil, er - } + componentSections, err := ExecuteDescribeComponent(component, stack, true, true, nil) + if err != nil { + er := fmt.Errorf("%w `%s` in stack `%s`\nin YAML function: `%s`\n%v", errUtils.ErrDescribeComponent, component, stack, yamlFunc, err) + return nil, er + } - // ... existing static remote state logic ... + // ... existing static remote state logic ... - // Read Terraform backend with auth context. 
- backend, err := tb.GetTerraformBackend(atmosConfig, &componentSections, authContext) - if err != nil { - er := fmt.Errorf("%w for component `%s` in stack `%s`\nin YAML function: `%s`\n%v", errUtils.ErrReadTerraformState, component, stack, yamlFunc, err) - return nil, er - } + // Read Terraform backend with auth context. + backend, err := tb.GetTerraformBackend(atmosConfig, &componentSections, authContext) + if err != nil { + er := fmt.Errorf("%w for component `%s` in stack `%s`\nin YAML function: `%s`\n%v", errUtils.ErrReadTerraformState, component, stack, yamlFunc, err) + return nil, er + } - // ... existing output retrieval logic ... + // ... existing output retrieval logic ... } ``` @@ -629,31 +629,31 @@ func GetTerraformState( ```go func GetTerraformBackend( - atmosConfig *schema.AtmosConfiguration, - componentSections *map[string]any, - authContext *schema.AuthContext, // NEW: Optional auth context + atmosConfig *schema.AtmosConfiguration, + componentSections *map[string]any, + authContext *schema.AuthContext, // NEW: Optional auth context ) (map[string]any, error) { - defer perf.Track(atmosConfig, "terraform_backend.GetTerraformBackend")() + defer perf.Track(atmosConfig, "terraform_backend.GetTerraformBackend")() - RegisterTerraformBackends() + RegisterTerraformBackends() - backendType := GetComponentBackendType(componentSections) - if backendType == "" { - backendType = cfg.BackendTypeLocal - } + backendType := GetComponentBackendType(componentSections) + if backendType == "" { + backendType = cfg.BackendTypeLocal + } - readBackendStateFunc := GetTerraformBackendReadFunc(backendType) - if readBackendStateFunc == nil { - return nil, fmt.Errorf("%w: `%s`\nsupported backends: `local`, `s3`", errUtils.ErrUnsupportedBackendType, backendType) - } + readBackendStateFunc := GetTerraformBackendReadFunc(backendType) + if readBackendStateFunc == nil { + return nil, fmt.Errorf("%w: `%s`\nsupported backends: `local`, `s3`", errUtils.ErrUnsupportedBackendType, 
backendType) + } - // Pass auth context to backend reader. - content, err := readBackendStateFunc(atmosConfig, componentSections, authContext) - if err != nil { - return nil, err - } + // Pass auth context to backend reader. + content, err := readBackendStateFunc(atmosConfig, componentSections, authContext) + if err != nil { + return nil, err + } - // ... existing state file processing ... + // ... existing state file processing ... } ``` @@ -662,64 +662,64 @@ func GetTerraformBackend( ```go // Update function signature type. type TerraformBackendReadFunc func( - atmosConfig *schema.AtmosConfiguration, - componentSections *map[string]any, - authContext *schema.AuthContext, // NEW: Optional auth context + atmosConfig *schema.AtmosConfiguration, + componentSections *map[string]any, + authContext *schema.AuthContext, // NEW: Optional auth context ) ([]byte, error) func ReadTerraformBackendS3( - atmosConfig *schema.AtmosConfiguration, - componentSections *map[string]any, - authContext *schema.AuthContext, // NEW: Optional auth context + atmosConfig *schema.AtmosConfiguration, + componentSections *map[string]any, + authContext *schema.AuthContext, // NEW: Optional auth context ) ([]byte, error) { - defer perf.Track(nil, "terraform_backend.ReadTerraformBackendS3")() + defer perf.Track(nil, "terraform_backend.ReadTerraformBackendS3")() - backend := GetComponentBackend(componentSections) + backend := GetComponentBackend(componentSections) - // Use auth context if available. - s3Client, err := getCachedS3ClientWithAuth(&backend, authContext) - if err != nil { - return nil, err - } + // Use auth context if available. + s3Client, err := getCachedS3ClientWithAuth(&backend, authContext) + if err != nil { + return nil, err + } - return ReadTerraformBackendS3Internal(s3Client, componentSections, &backend) + return ReadTerraformBackendS3Internal(s3Client, componentSections, &backend) } // getCachedS3ClientWithAuth creates or retrieves a cached S3 client with auth context support. 
func getCachedS3ClientWithAuth(backend *map[string]any, authContext *schema.AuthContext) (S3API, error) { - region := GetBackendAttribute(backend, "region") - roleArn := GetS3BackendAssumeRoleArn(backend) - - // Build cache key based on region, role, and auth profile. - cacheKey := fmt.Sprintf("region=%s;role_arn=%s", region, roleArn) - if authContext != nil && authContext.AWS != nil { - cacheKey += fmt.Sprintf(";profile=%s", authContext.AWS.Profile) - } - - // Check cache. - if cached, ok := s3ClientCache.Load(cacheKey); ok { - return cached.(S3API), nil - } - - // Build S3 client with auth context. - ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) - defer cancel() - - // Extract AWS auth context. - var awsAuthContext *schema.AWSAuthContext - if authContext != nil { - awsAuthContext = authContext.AWS - } - - // Load AWS config with auth context. - cfg, err := awsUtils.LoadAWSConfigWithAuth(ctx, region, roleArn, 15*time.Minute, awsAuthContext) - if err != nil { - return nil, err - } - - s3Client := s3.NewFromConfig(cfg) - s3ClientCache.Store(cacheKey, s3Client) - return s3Client, nil + region := GetBackendAttribute(backend, "region") + roleArn := GetS3BackendAssumeRoleArn(backend) + + // Build cache key based on region, role, and auth profile. + cacheKey := fmt.Sprintf("region=%s;role_arn=%s", region, roleArn) + if authContext != nil && authContext.AWS != nil { + cacheKey += fmt.Sprintf(";profile=%s", authContext.AWS.Profile) + } + + // Check cache. + if cached, ok := s3ClientCache.Load(cacheKey); ok { + return cached.(S3API), nil + } + + // Build S3 client with auth context. + ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) + defer cancel() + + // Extract AWS auth context. + var awsAuthContext *schema.AWSAuthContext + if authContext != nil { + awsAuthContext = authContext.AWS + } + + // Load AWS config with auth context. 
+ cfg, err := awsUtils.LoadAWSConfigWithAuth(ctx, region, roleArn, 15*time.Minute, awsAuthContext) + if err != nil { + return nil, err + } + + s3Client := s3.NewFromConfig(cfg) + s3ClientCache.Store(cacheKey, s3Client) + return s3Client, nil } ``` @@ -728,13 +728,13 @@ func getCachedS3ClientWithAuth(backend *map[string]any, authContext *schema.Auth ```go // Update to match new signature (auth context not used for local backend). func ReadTerraformBackendLocal( - _ *schema.AtmosConfiguration, - componentSections *map[string]any, - _ *schema.AuthContext, // Unused for local backend + _ *schema.AtmosConfiguration, + componentSections *map[string]any, + _ *schema.AuthContext, // Unused for local backend ) ([]byte, error) { - defer perf.Track(nil, "terraform_backend.ReadTerraformBackendLocal")() + defer perf.Track(nil, "terraform_backend.ReadTerraformBackendLocal")() - // ... existing implementation unchanged ... + // ... existing implementation unchanged ... } ``` @@ -746,87 +746,87 @@ func ReadTerraformBackendLocal( ```go func ProcessCustomYamlTags( - atmosConfig *schema.AtmosConfiguration, - input schema.AtmosSectionMapType, - currentStack string, - skip []string, - stackInfo *schema.ConfigAndStacksInfo, // NEW: Stack info for auth context + atmosConfig *schema.AtmosConfiguration, + input schema.AtmosSectionMapType, + currentStack string, + skip []string, + stackInfo *schema.ConfigAndStacksInfo, // NEW: Stack info for auth context ) (schema.AtmosSectionMapType, error) { - defer perf.Track(atmosConfig, "exec.ProcessCustomYamlTags")() + defer perf.Track(atmosConfig, "exec.ProcessCustomYamlTags")() - return processNodes(atmosConfig, input, currentStack, skip, stackInfo), nil + return processNodes(atmosConfig, input, currentStack, skip, stackInfo), nil } func processNodes( - atmosConfig *schema.AtmosConfiguration, - data map[string]any, - currentStack string, - skip []string, - stackInfo *schema.ConfigAndStacksInfo, // NEW + atmosConfig *schema.AtmosConfiguration, + data 
map[string]any, + currentStack string, + skip []string, + stackInfo *schema.ConfigAndStacksInfo, // NEW ) map[string]any { - newMap := make(map[string]any) - var recurse func(any) any - - recurse = func(node any) any { - switch v := node.(type) { - case string: - return processCustomTags(atmosConfig, v, currentStack, skip, stackInfo) - - case map[string]any: - newNestedMap := make(map[string]any) - for k, val := range v { - newNestedMap[k] = recurse(val) - } - return newNestedMap - - case []any: - newSlice := make([]any, len(v)) - for i, val := range v { - newSlice[i] = recurse(val) - } - return newSlice - - default: - return v - } - } - - for k, v := range data { - newMap[k] = recurse(v) - } - - return newMap + newMap := make(map[string]any) + var recurse func(any) any + + recurse = func(node any) any { + switch v := node.(type) { + case string: + return processCustomTags(atmosConfig, v, currentStack, skip, stackInfo) + + case map[string]any: + newNestedMap := make(map[string]any) + for k, val := range v { + newNestedMap[k] = recurse(val) + } + return newNestedMap + + case []any: + newSlice := make([]any, len(v)) + for i, val := range v { + newSlice[i] = recurse(val) + } + return newSlice + + default: + return v + } + } + + for k, v := range data { + newMap[k] = recurse(v) + } + + return newMap } func processCustomTags( - atmosConfig *schema.AtmosConfiguration, - input string, - currentStack string, - skip []string, - stackInfo *schema.ConfigAndStacksInfo, // NEW + atmosConfig *schema.AtmosConfiguration, + input string, + currentStack string, + skip []string, + stackInfo *schema.ConfigAndStacksInfo, // NEW ) any { - switch { - case strings.HasPrefix(input, u.AtmosYamlFuncTemplate) && !skipFunc(skip, u.AtmosYamlFuncTemplate): - return processTagTemplate(input) - case strings.HasPrefix(input, u.AtmosYamlFuncExec) && !skipFunc(skip, u.AtmosYamlFuncExec): - res, err := u.ProcessTagExec(input) - errUtils.CheckErrorPrintAndExit(err, "", "") - return res - case 
strings.HasPrefix(input, u.AtmosYamlFuncStoreGet) && !skipFunc(skip, u.AtmosYamlFuncStoreGet): - return processTagStoreGet(atmosConfig, input, currentStack) - case strings.HasPrefix(input, u.AtmosYamlFuncStore) && !skipFunc(skip, u.AtmosYamlFuncStore): - return processTagStore(atmosConfig, input, currentStack) - case strings.HasPrefix(input, u.AtmosYamlFuncTerraformOutput) && !skipFunc(skip, u.AtmosYamlFuncTerraformOutput): - return processTagTerraformOutput(atmosConfig, input, currentStack) - case strings.HasPrefix(input, u.AtmosYamlFuncTerraformState) && !skipFunc(skip, u.AtmosYamlFuncTerraformState): - return processTagTerraformState(atmosConfig, input, currentStack, stackInfo) // Pass stackInfo - case strings.HasPrefix(input, u.AtmosYamlFuncEnv) && !skipFunc(skip, u.AtmosYamlFuncEnv): - res, err := u.ProcessTagEnv(input) - errUtils.CheckErrorPrintAndExit(err, "", "") - return res - default: - return input - } + switch { + case strings.HasPrefix(input, u.AtmosYamlFuncTemplate) && !skipFunc(skip, u.AtmosYamlFuncTemplate): + return processTagTemplate(input) + case strings.HasPrefix(input, u.AtmosYamlFuncExec) && !skipFunc(skip, u.AtmosYamlFuncExec): + res, err := u.ProcessTagExec(input) + errUtils.CheckErrorPrintAndExit(err, "", "") + return res + case strings.HasPrefix(input, u.AtmosYamlFuncStoreGet) && !skipFunc(skip, u.AtmosYamlFuncStoreGet): + return processTagStoreGet(atmosConfig, input, currentStack) + case strings.HasPrefix(input, u.AtmosYamlFuncStore) && !skipFunc(skip, u.AtmosYamlFuncStore): + return processTagStore(atmosConfig, input, currentStack) + case strings.HasPrefix(input, u.AtmosYamlFuncTerraformOutput) && !skipFunc(skip, u.AtmosYamlFuncTerraformOutput): + return processTagTerraformOutput(atmosConfig, input, currentStack) + case strings.HasPrefix(input, u.AtmosYamlFuncTerraformState) && !skipFunc(skip, u.AtmosYamlFuncTerraformState): + return processTagTerraformState(atmosConfig, input, currentStack, stackInfo) // Pass stackInfo + case 
strings.HasPrefix(input, u.AtmosYamlFuncEnv) && !skipFunc(skip, u.AtmosYamlFuncEnv): + res, err := u.ProcessTagEnv(input) + errUtils.CheckErrorPrintAndExit(err, "", "") + return res + default: + return input + } } ``` @@ -834,34 +834,34 @@ func processCustomTags( ```go func processTagTerraformState( - atmosConfig *schema.AtmosConfiguration, - input string, - currentStack string, - stackInfo *schema.ConfigAndStacksInfo, // NEW: Stack info for auth context + atmosConfig *schema.AtmosConfiguration, + input string, + currentStack string, + stackInfo *schema.ConfigAndStacksInfo, // NEW: Stack info for auth context ) any { - defer perf.Track(atmosConfig, "exec.processTagTerraformState")() + defer perf.Track(atmosConfig, "exec.processTagTerraformState")() - log.Debug("Executing Atmos YAML function", "function", input) + log.Debug("Executing Atmos YAML function", "function", input) - str, err := getStringAfterTag(input, u.AtmosYamlFuncTerraformState) - errUtils.CheckErrorPrintAndExit(err, "", "") + str, err := getStringAfterTag(input, u.AtmosYamlFuncTerraformState) + errUtils.CheckErrorPrintAndExit(err, "", "") - var component string - var stack string - var output string + var component string + var stack string + var output string - // ... existing argument parsing ... + // ... existing argument parsing ... - // Extract auth context from stack info. - var authContext *schema.AuthContext - if stackInfo != nil { - authContext = stackInfo.AuthContext - } + // Extract auth context from stack info. + var authContext *schema.AuthContext + if stackInfo != nil { + authContext = stackInfo.AuthContext + } - // Pass auth context to GetTerraformState. - value, err := GetTerraformState(atmosConfig, input, stack, component, output, false, authContext) - errUtils.CheckErrorPrintAndExit(err, "", "") - return value + // Pass auth context to GetTerraformState. 
+ value, err := GetTerraformState(atmosConfig, input, stack, component, output, false, authContext) + errUtils.CheckErrorPrintAndExit(err, "", "") + return value } ``` @@ -878,18 +878,18 @@ func processTagTerraformState( ```go // Process YAML functions in Atmos manifest sections. if processYamlFunctions { - // Pass configAndStacksInfo to provide auth context. - componentSectionConverted, err := ProcessCustomYamlTags( - atmosConfig, - configAndStacksInfo.ComponentSection, - configAndStacksInfo.Stack, - skip, - &configAndStacksInfo, // NEW: Pass stack info - ) - if err != nil { - return configAndStacksInfo, err - } - configAndStacksInfo.ComponentSection = componentSectionConverted + // Pass configAndStacksInfo to provide auth context. + componentSectionConverted, err := ProcessCustomYamlTags( + atmosConfig, + configAndStacksInfo.ComponentSection, + configAndStacksInfo.Stack, + skip, + &configAndStacksInfo, // NEW: Pass stack info + ) + if err != nil { + return configAndStacksInfo, err + } + configAndStacksInfo.ComponentSection = componentSectionConverted } ``` @@ -974,33 +974,33 @@ if processYamlFunctions { **Auth Context Population:** ```go func TestSetAuthContext(t *testing.T) { - stackInfo := &schema.ConfigAndStacksInfo{} - err := awsCloud.SetAuthContext(stackInfo, "aws-sso", "my-identity") - - require.NoError(t, err) - require.NotNil(t, stackInfo.AuthContext) - require.NotNil(t, stackInfo.AuthContext.AWS) - assert.Equal(t, "my-identity", stackInfo.AuthContext.AWS.Profile) - assert.Contains(t, stackInfo.AuthContext.AWS.CredentialsFile, ".atmos/auth") + stackInfo := &schema.ConfigAndStacksInfo{} + err := awsCloud.SetAuthContext(stackInfo, "aws-sso", "my-identity") + + require.NoError(t, err) + require.NotNil(t, stackInfo.AuthContext) + require.NotNil(t, stackInfo.AuthContext.AWS) + assert.Equal(t, "my-identity", stackInfo.AuthContext.AWS.Profile) + assert.Contains(t, stackInfo.AuthContext.AWS.CredentialsFile, ".atmos/auth") } ``` **AWS Config Loading:** ```go func 
TestLoadAWSConfigWithAuth(t *testing.T) { - authContext := &schema.AWSAuthContext{ - CredentialsFile: "/test/credentials", - ConfigFile: "/test/config", - Profile: "test-profile", - Region: "us-east-1", - } - - ctx := context.Background() - cfg, err := LoadAWSConfigWithAuth(ctx, "", "", 0, authContext) - - require.NoError(t, err) - assert.Equal(t, "us-east-1", cfg.Region) - // Verify config uses custom files (requires AWS SDK testing utilities) + authContext := &schema.AWSAuthContext{ + CredentialsFile: "/test/credentials", + ConfigFile: "/test/config", + Profile: "test-profile", + Region: "us-east-1", + } + + ctx := context.Background() + cfg, err := LoadAWSConfigWithAuth(ctx, "", "", 0, authContext) + + require.NoError(t, err) + assert.Equal(t, "us-east-1", cfg.Region) + // Verify config uses custom files (requires AWS SDK testing utilities) } ``` @@ -1009,22 +1009,22 @@ func TestLoadAWSConfigWithAuth(t *testing.T) { **End-to-End Test:** ```go func TestTerraformStateWithAuth(t *testing.T) { - // Setup: Create test stack with !terraform.state - // Setup: Mock S3 backend with test state file - // Setup: Create Atmos auth identity - - // Authenticate - authManager := /* create auth manager */ - _, err := authManager.Authenticate(ctx, "test-identity") - require.NoError(t, err) - - // Process component with !terraform.state - stackInfo, err := ExecuteDescribeComponent("test-component", "test-stack", true, true, nil) - require.NoError(t, err) - - // Verify auth context was used (check logs or mock calls) - assert.NotNil(t, stackInfo.AuthContext) - assert.NotNil(t, stackInfo.AuthContext.AWS) + // Setup: Create test stack with !terraform.state + // Setup: Mock S3 backend with test state file + // Setup: Create Atmos auth identity + + // Authenticate + authManager := /* create auth manager */ + _, err := authManager.Authenticate(ctx, "test-identity") + require.NoError(t, err) + + // Process component with !terraform.state + stackInfo, err := 
ExecuteDescribeComponent("test-component", "test-stack", true, true, nil) + require.NoError(t, err) + + // Verify auth context was used (check logs or mock calls) + assert.NotNil(t, stackInfo.AuthContext) + assert.NotNil(t, stackInfo.AuthContext.AWS) } ``` @@ -1071,10 +1071,10 @@ func TestTerraformStateWithAuth(t *testing.T) { **Azure:** ```go type AzureAuthContext struct { - SubscriptionID string - TenantID string - ClientID string - TokenFile string + SubscriptionID string + TenantID string + ClientID string + TokenFile string } authContext.Azure = &AzureAuthContext{...} @@ -1083,9 +1083,9 @@ authContext.Azure = &AzureAuthContext{...} **GCP:** ```go type GCPAuthContext struct { - ProjectID string - ServiceAccountFile string - ImpersonateAccount string + ProjectID string + ServiceAccountFile string + ImpersonateAccount string } authContext.GCP = &GCPAuthContext{...} @@ -1094,10 +1094,10 @@ authContext.GCP = &GCPAuthContext{...} **GitHub:** ```go type GitHubAuthContext struct { - Token string - TokenFile string - AppID string - InstallID string + Token string + TokenFile string + AppID string + InstallID string } authContext.GitHub = &GitHubAuthContext{...} @@ -1109,12 +1109,12 @@ The `!store` YAML function could also benefit from auth context for accessing se ```go func processTagStore( - atmosConfig *schema.AtmosConfiguration, - input string, - currentStack string, - stackInfo *schema.ConfigAndStacksInfo, // Add stack info + atmosConfig *schema.AtmosConfiguration, + input string, + currentStack string, + stackInfo *schema.ConfigAndStacksInfo, // Add stack info ) any { - // Use authContext for AWS SSM Parameter Store, Azure Key Vault, etc. + // Use authContext for AWS SSM Parameter Store, Azure Key Vault, etc. 
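+	// Hypothetical sketch (assumed mechanism, not implemented): surface the
+	// AWS auth context to store SDK calls via the standard AWS env vars.
+	// Field names follow schema.AWSAuthContext as used earlier in this doc.
+	if stackInfo != nil && stackInfo.AuthContext != nil && stackInfo.AuthContext.AWS != nil {
+		os.Setenv("AWS_SHARED_CREDENTIALS_FILE", stackInfo.AuthContext.AWS.CredentialsFile)
+		os.Setenv("AWS_PROFILE", stackInfo.AuthContext.AWS.Profile)
+	}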
} ``` diff --git a/docs/prd/list-commands-configurable-output.md b/docs/prd/list-commands-configurable-output.md new file mode 100644 index 0000000000..c11a8e1a45 --- /dev/null +++ b/docs/prd/list-commands-configurable-output.md @@ -0,0 +1,1511 @@ +# PRD: List Commands UI Overhaul + +## Document Information + +- **Status**: Draft +- **Created**: 2025-01-16 +- **Author**: Claude (Conductor Agent) +- **Related Issues**: + - [DEV-2803: Implement `atmos list deployments`](https://linear.app/cloudposse/issue/DEV-2803) + - [DEV-2805: Improve `atmos list components`](https://linear.app/cloudposse/issue/DEV-2805) + - [DEV-2806: Implement `atmos list vendor`](https://linear.app/cloudposse/issue/DEV-2806) + +## Executive Summary + +Modernize all Atmos list commands to support configurable column customization, universal filtering and sorting, and theme-aware output. Create highly reusable, well-tested infrastructure that eliminates code duplication across the 10 list commands by implementing generic utilities tested once and used everywhere. + +### Key Goals + +1. **Reusable Infrastructure**: Create generic filter/sort/column/render logic tested once (>90% coverage) and reused across all commands +2. **Proper Architecture**: Leverage existing UI/data layer (TTY handled automatically), maintain zero deep exits +3. **Feature Complete**: Implement column customization, filtering, sorting for all list commands +4. **Documentation Alignment**: Ensure all documented features are implemented (fix current documentation-implementation gap) +5. **High Test Coverage**: 80-90% coverage across all code (90%+ on reusables, 80%+ on commands) + +## Problem Statement + +### Current Issues + +1. **Documentation-Implementation Gap**: Column customization is extensively documented but only implemented for `workflows` and `vendor` commands. The `stacks`, `components`, and other commands lack schema fields and implementation. + +2. 
**Code Duplication**: Format handling, filtering, sorting, and output logic is duplicated across multiple list commands, making maintenance difficult and bug fixes inconsistent. + +3. **Inconsistent Output**: Different list commands use different output patterns - some use `u.PrintMessageInColor()` (anti-pattern), others properly use data/ui layer. + +4. **Limited Capabilities**: + - No universal filter/sort support across commands + - No column selection/ordering + - No conditional styling (disabled=gray, locked=orange) + - Filtering flags inconsistent across commands + +5. **User Experience**: Teams exploring cloud architecture data model need better tools to query, list, and view infrastructure across stacks and components. + +## Architecture Overview + +### Core Principles + +1. **Separation of Concerns**: Data fetching → Transformation → Rendering → Output +2. **Pure Functions**: Maximize testability, minimize side effects +3. **Reusable First**: Generic utilities in `pkg/list/`, command-specific logic in `cmd/list/` +4. **Leverage Existing Infrastructure**: Use `data.*` and `ui.*` methods (TTY handled automatically) +5. **No Deep Exits**: All functions return errors (already achieved, must maintain) + +### Data Flow + +``` +┌─────────────────────────────────────────────────────────────┐ +│ 1. User Input (CLI flags + atmos.yaml config) │ +└────────────────────┬────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────┐ +│ 2. Data Fetching (command-specific) │ +│ - Load atmos config │ +│ - Execute describe stacks / workflows / etc. │ +│ - Returns: map[string]any or []WorkflowRow │ +└────────────────────┬────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────┐ +│ 3. 
Filtering (pkg/list/filter - REUSABLE) │ +│ - YQ expressions │ +│ - Glob patterns │ +│ - Column value filters │ +│ - Boolean filters (enabled/locked) │ +│ - Returns: filtered data │ +└────────────────────┬────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────┐ +│ 4. Column Extraction (pkg/list/column - REUSABLE) │ +│ ⚠️ CRITICAL: Go template evaluation happens HERE │ +│ - Parse column configs from atmos.yaml │ +│ - Evaluate Go templates against each row of data │ +│ - Template context: full component/stack configuration │ +│ - Returns: [][]string (headers + rows) │ +└────────────────────┬────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────┐ +│ 5. Sorting (pkg/list/sort - REUSABLE) │ +│ - Single or multi-column sorting │ +│ - Type-aware (string, number, date, boolean) │ +│ - Returns: sorted [][]string │ +└────────────────────┬────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────┐ +│ 6. Rendering (pkg/list/renderer - REUSABLE) │ +│ - Orchestrates steps 3-5 │ +│ - Applies conditional styling (disabled, locked, etc.) │ +│ - Delegates to format-specific formatter │ +│ - Returns: string (formatted output) │ +└────────────────────┬────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────┐ +│ 7. 
Output (pkg/list/output - REUSABLE) │ +│ - Routes to data.Write() (stdout) for structured formats │ +│ - Routes to ui.Write() (stderr) for human-readable │ +│ - TTY detection handled by ui/data layer │ +└─────────────────────────────────────────────────────────────┘ +``` + +### Template Evaluation Timing ⚠️ + +**CRITICAL DISTINCTION:** + +```yaml +# atmos.yaml - Configuration loaded at startup +components: + list: + columns: + - name: Component + value: '{{ .atmos_component }}' # ← Template string stored, NOT evaluated + - name: Region + value: '{{ .vars.region }}' # ← Template string stored, NOT evaluated +``` + +**When Config is Loaded:** +- ✅ Parse YAML structure +- ✅ Store template strings as-is +- ❌ Do NOT evaluate templates (no data available yet) + +**When Rows are Processed:** +- ✅ For each component/stack/workflow +- ✅ Create template context with full data: `.atmos_component`, `.vars`, `.settings`, etc. +- ✅ Evaluate template against context +- ✅ Extract column value +- ✅ Build row: `[]string{component_name, region_value, ...}` + +```go +// Pseudocode for template evaluation +for _, item := range data { + row := []string{} + for _, columnConfig := range listConfig.Columns { + // Create template context with full item data + context := map[string]any{ + "atmos_component": item.Component, + "atmos_stack": item.Stack, + "vars": item.Vars, + "settings": item.Settings, + "enabled": item.Enabled, + "locked": item.Locked, + // ... all available fields + } + + // Evaluate template NOW (not at config load time) + tmpl := template.New("column").Parse(columnConfig.Value) + var buf bytes.Buffer + tmpl.Execute(&buf, context) + + row = append(row, buf.String()) + } + rows = append(rows, row) +} +``` + +## Detailed Design + +### Phase 1: Reusable Infrastructure + +#### 1.1 Column System (`pkg/list/column/`) + +**Purpose**: Manage column configuration and Go template evaluation during row processing. 
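As a minimal sketch of this per-row evaluation (illustrative names, not the final `pkg/list/column` API), with parse/execute errors propagated rather than ignored:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// ColumnSpec pairs a display header with a Go template string.
type ColumnSpec struct {
	Name  string
	Value string
}

// extractRows evaluates each column template against each item at
// row-processing time, returning the first parse/execute error.
func extractRows(specs []ColumnSpec, items []map[string]any) ([][]string, error) {
	rows := make([][]string, 0, len(items))
	for _, item := range items {
		row := make([]string, 0, len(specs))
		for _, spec := range specs {
			tmpl, err := template.New(spec.Name).Parse(spec.Value)
			if err != nil {
				return nil, fmt.Errorf("column %q: %w", spec.Name, err)
			}
			var buf bytes.Buffer
			if err := tmpl.Execute(&buf, item); err != nil {
				return nil, fmt.Errorf("column %q: %w", spec.Name, err)
			}
			row = append(row, buf.String())
		}
		rows = append(rows, row)
	}
	return rows, nil
}

func main() {
	specs := []ColumnSpec{
		{Name: "Component", Value: "{{ .atmos_component }}"},
		{Name: "Region", Value: "{{ .vars.region }}"},
	}
	items := []map[string]any{
		{"atmos_component": "vpc", "vars": map[string]any{"region": "us-east-2"}},
	}
	rows, err := extractRows(specs, items)
	if err != nil {
		panic(err)
	}
	fmt.Println(rows) // [[vpc us-east-2]]
}
```

The production Selector would pre-parse templates once (with a FuncMap) instead of re-parsing per row, as described below.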
+ +**Files**: +- `column.go` - Core column extraction logic +- `column_test.go` - Test coverage target: >90% + +**Key Types**: + +```go +// Config matches schema from atmos.yaml +type Config struct { + Name string `yaml:"name" json:"name"` // Display header + Value string `yaml:"value" json:"value"` // Go template string + Width int `yaml:"width" json:"width"` // Optional width override +} + +// Selector manages column extraction with template evaluation +type Selector struct { + configs []Config + selected []string // Column names to display (nil = all) + templateMap *template.Template // Pre-parsed templates with FuncMap +} + +// Template context provided to each template evaluation +type TemplateContext struct { + // Standard fields available in all templates + AtmosComponent string `json:"atmos_component"` + AtmosStack string `json:"atmos_stack"` + AtmosComponentType string `json:"atmos_component_type"` + + // Component configuration + Vars map[string]any `json:"vars"` + Settings map[string]any `json:"settings"` + Metadata map[string]any `json:"metadata"` + Env map[string]any `json:"env"` + + // Flags + Enabled bool `json:"enabled"` + Locked bool `json:"locked"` + Abstract bool `json:"abstract"` + + // Full raw data for advanced templates + Raw map[string]any `json:"raw"` +} +``` + +**Public API**: + +```go +// NewSelector creates a selector with Go template support +// funcMap should include Atmos template functions (atmos.Component, etc.) 
+func NewSelector(configs []Config, funcMap template.FuncMap) (*Selector, error) + +// Select restricts which columns to display (nil = all) +func (s *Selector) Select(columnNames []string) error + +// Extract evaluates templates against data and returns table rows +// ⚠️ This is where Go template evaluation happens (NOT at config load) +func (s *Selector) Extract(data []map[string]any) (headers []string, rows [][]string, err error) + +// Headers returns the header row +func (s *Selector) Headers() []string +``` + +**Template Function Map**: + +```go +// BuildColumnFuncMap returns template functions for column templates +func BuildColumnFuncMap() template.FuncMap { + return template.FuncMap{ + // Type conversion + "toString": toString, + "toInt": toInt, + "toBool": toBool, + + // Formatting + "truncate": truncate, + "pad": pad, + "upper": strings.ToUpper, + "lower": strings.ToLower, + + // Data access + "get": mapGet, // Safe nested map access + "getOr": mapGetOr, // With default value + "has": mapHas, // Check if key exists + + // Collections + "len": length, + "join": strings.Join, + "split": strings.Split, + + // Conditional + "ternary": ternary, // {{ ternary .enabled "yes" "no" }} + + // Include standard Gomplate functions if needed + // (may need to restrict to safe subset) + } +} +``` + +**Example Usage**: + +```go +// From atmos.yaml +configs := []column.Config{ + {Name: "Component", Value: "{{ .atmos_component }}"}, + {Name: "Region", Value: "{{ .vars.region }}"}, + {Name: "Enabled", Value: "{{ ternary .enabled \"✓\" \"✗\" }}"}, +} + +// Create selector with template functions +selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + +// Data from component processing +data := []map[string]any{ + { + "atmos_component": "vpc", + "atmos_stack": "plat-ue2-dev", + "vars": map[string]any{"region": "us-east-2"}, + "enabled": true, + }, + // ... 
more items +} + +// Extract evaluates templates NOW (during row processing) +headers, rows, err := selector.Extract(data) +// headers: ["Component", "Region", "Enabled"] +// rows: [["vpc", "us-east-2", "✓"], ...] +``` + +**Test Coverage**: +- Template parsing and validation +- Template evaluation with various data types +- Template function map functions +- Error handling (invalid templates, missing fields) +- Column selection/filtering +- Width calculation +- Edge cases (nil values, nested maps, arrays) +- Target: >90% + +#### 1.2 Filter System (`pkg/list/filter/`) + +**Purpose**: Universal filtering for any data structure. + +**Files**: +- `filter.go` - Filter implementations +- `filter_test.go` - Test coverage target: >90% + +**Key Types**: + +```go +// Filter interface for composability +type Filter interface { + Apply(data interface{}) (interface{}, error) +} + +// YQFilter uses yq expressions for filtering +type YQFilter struct { + Query string +} + +// GlobFilter matches patterns (e.g., "plat-*-dev") +type GlobFilter struct { + Pattern string +} + +// ColumnValueFilter filters rows by column value +type ColumnValueFilter struct { + Column string + Value string +} + +// BoolFilter filters by boolean field +type BoolFilter struct { + Field string + Value *bool // nil = all, true = enabled only, false = disabled only +} + +// Chain combines multiple filters (AND logic) +type Chain struct { + filters []Filter +} +``` + +**Public API**: + +```go +// Factory functions +func NewYQFilter(query string) (*YQFilter, error) +func NewGlobFilter(pattern string) (*GlobFilter, error) +func NewColumnFilter(column, value string) *ColumnValueFilter +func NewBoolFilter(field string, value *bool) *BoolFilter + +// Apply filters data +func (f *YQFilter) Apply(data interface{}) (interface{}, error) +func (f *GlobFilter) Apply(data interface{}) (interface{}, error) +func (f *ColumnValueFilter) Apply(data interface{}) (interface{}, error) +func (f *BoolFilter) Apply(data 
interface{}) (interface{}, error) + +// Chain combines filters +func NewChain(filters ...Filter) *Chain +func (c *Chain) Apply(data interface{}) (interface{}, error) +``` + +**Test Coverage**: +- Each filter type independently +- Filter chains (multiple filters) +- Edge cases (empty data, nil values) +- Error handling (invalid queries, type mismatches) +- Target: >90% + +#### 1.3 Sort System (`pkg/list/sort/`) + +**Purpose**: Universal sorting for any column, any direction. + +**Files**: +- `sort.go` - Sort implementations +- `sort_test.go` - Test coverage target: >90% + +**Key Types**: + +```go +type Order int + +const ( + Ascending Order = iota + Descending +) + +type DataType int + +const ( + String DataType = iota + Number + Date + Boolean +) + +// Sorter handles single column sorting +type Sorter struct { + Column string + Order Order + DataType DataType // Auto-detected if not specified +} + +// MultiSorter handles multi-column sorting +type MultiSorter struct { + sorters []*Sorter +} +``` + +**Public API**: + +```go +// NewSorter creates a sorter for a single column +func NewSorter(column string, order Order) *Sorter + +// WithDataType sets explicit data type (otherwise auto-detected) +func (s *Sorter) WithDataType(dt DataType) *Sorter + +// Sort sorts rows in-place by the column +func (s *Sorter) Sort(rows [][]string, headers []string) error + +// NewMultiSorter creates a multi-column sorter +func NewMultiSorter(sorters ...*Sorter) *MultiSorter + +// Sort applies all sorters in order (primary, secondary, etc.) 
+func (ms *MultiSorter) Sort(rows [][]string, headers []string) error + +// ParseSortSpec parses CLI sort spec (e.g., "stack:asc,component:desc") +func ParseSortSpec(spec string) ([]*Sorter, error) +``` + +**Test Coverage**: +- Single column sorting (ascending/descending) +- Multi-column sorting (with precedence) +- Type-aware sorting (numeric vs lexicographic) +- Date parsing and sorting +- Boolean sorting +- Edge cases (empty rows, missing columns) +- Target: >90% + +#### 1.4 Renderer (`pkg/list/renderer/`) + +**Purpose**: Orchestrate the full rendering pipeline. + +**Files**: +- `renderer.go` - Pipeline orchestration +- `renderer_test.go` - Test coverage target: >90% + +**Key Types**: + +```go +// RowStyleFunc provides conditional styling +type RowStyleFunc func(row []string, rowIndex int, headers []string) lipgloss.Style + +// Options configure the rendering pipeline +type Options struct { + Format format.Format // table, json, yaml, csv, tsv + Columns []column.Config // From atmos.yaml or CLI + Filters []filter.Filter // Applied before column extraction + Sorters []*sort.Sorter // Applied after column extraction + Delimiter string // For CSV/TSV + StyleFunc RowStyleFunc // Conditional styling for tables + ColumnWidths map[string]int // Custom column widths +} + +// Renderer executes the rendering pipeline +type Renderer struct { + data interface{} + options Options +} +``` + +**Public API**: + +```go +// New creates a renderer +func New(data interface{}, opts Options) *Renderer + +// Render executes the full pipeline +func (r *Renderer) Render() (string, error) +``` + +**Internal Pipeline**: + +```go +// Pipeline execution order +func (r *Renderer) Render() (string, error) { + // 1. Apply filters to raw data + if err := r.applyFilters(); err != nil { + return "", err + } + + // 2. Extract columns (⚠️ Go template evaluation happens here) + if err := r.extractColumns(); err != nil { + return "", err + } + + // 3. 
Apply sorting to extracted rows + if err := r.applySort(); err != nil { + return "", err + } + + // 4. Format output (table, json, yaml, csv, tsv) + return r.formatOutput() +} +``` + +**Test Coverage**: +- Full pipeline (filter → columns → sort → format) +- Each format type (table, json, yaml, csv, tsv) +- Conditional styling +- Error propagation +- Edge cases (empty data, no columns configured) +- Target: >90% + +#### 1.5 Output Manager (`pkg/list/output/`) + +**Purpose**: Route output to correct stream using data/ui layer. + +**Files**: +- `output.go` - Output routing +- `output_test.go` - Test coverage target: >90% + +**Key Types**: + +```go +// Manager routes output to data or ui layer +type Manager struct { + format format.Format +} +``` + +**Public API**: + +```go +// New creates an output manager +func New(format format.Format) *Manager + +// Write routes to data.Write() or ui.Write() based on format +func (m *Manager) Write(content string) error { + if m.format.IsStructured() { // JSON, YAML, CSV, TSV + return data.Write(content) // → stdout (pipeable) + } + return ui.Write(content) // → stderr (human readable, TTY-aware) +} +``` + +**Test Coverage**: +- Output routing for each format +- Format detection +- Target: >90% + +### Phase 2: Schema & Configuration + +#### 2.1 Schema Updates (`pkg/schema/schema.go`) + +**Add `List` field to structs**: + +```go +type Stacks struct { + NamePattern string `yaml:"name_pattern" json:"name_pattern" mapstructure:"name_pattern"` + NameTemplate string `yaml:"name_template" json:"name_template" mapstructure:"name_template"` + IncludedPaths []string `yaml:"included_paths" json:"included_paths" mapstructure:"included_paths"` + ExcludedPaths []string `yaml:"excluded_paths" json:"excluded_paths" mapstructure:"excluded_paths"` + List ListConfig `yaml:"list" json:"list" mapstructure:"list"` // NEW +} + +type Components struct { + Terraform ComponentsSection `yaml:"terraform" json:"terraform" mapstructure:"terraform"` + Helmfile 
ComponentsSection `yaml:"helmfile" json:"helmfile" mapstructure:"helmfile"` + List ListConfig `yaml:"list" json:"list" mapstructure:"list"` // NEW +} + +// Workflows and Vendor already have List field ✅ +``` + +#### 2.2 Enhanced ListConfig + +```go +type ListConfig struct { + Format string `yaml:"format" json:"format" mapstructure:"format"` + Columns []ListColumnConfig `yaml:"columns" json:"columns" mapstructure:"columns"` + Sort []SortConfig `yaml:"sort" json:"sort" mapstructure:"sort"` // NEW +} + +type ListColumnConfig struct { + Name string `yaml:"name" json:"name" mapstructure:"name"` + Value string `yaml:"value" json:"value" mapstructure:"value"` + Width int `yaml:"width" json:"width" mapstructure:"width"` // NEW +} + +type SortConfig struct { + Column string `yaml:"column" json:"column" mapstructure:"column"` + Order string `yaml:"order" json:"order" mapstructure:"order"` // "asc" or "desc" +} +``` + +#### 2.3 Configuration Example + +```yaml +# atmos.yaml +components: + list: + format: table + columns: + - name: Component + value: '{{ .atmos_component }}' + width: 30 + - name: Type + value: '{{ .atmos_component_type }}' + - name: Stack + value: '{{ .atmos_stack }}' + width: 25 + - name: Region + value: '{{ .vars.region }}' + - name: Enabled + value: '{{ ternary .enabled "✓" "✗" }}' + - name: Locked + value: '{{ ternary .locked "🔒" "" }}' + sort: + - column: Stack + order: asc + - column: Component + order: asc + +stacks: + list: + columns: + - name: Stack + value: '{{ .stack }}' + - name: Terraform Components + value: '{{ len .components.terraform }}' + - name: Helmfile Components + value: '{{ len .components.helmfile }}' + +workflows: + list: + columns: + - name: File + value: '{{ .file }}' + - name: Workflow + value: '{{ .name }}' + - name: Description + value: '{{ .description }}' + +vendor: + list: + columns: + - name: Component + value: '{{ .atmos_component }}' + - name: Type + value: '{{ .atmos_vendor_type }}' + - name: Manifest + value: '{{ 
.atmos_vendor_file }}'
+      - name: Folder
+        value: '{{ .atmos_vendor_target }}'
+```
+
+### Phase 3: Command Implementation
+
+#### 3.1 Standard Pattern
+
+**Every list command follows this structure**:
+
+```go
+// cmd/list/components.go
+var componentsCmd = &cobra.Command{
+	Use:     "components",
+	Short:   "List components",
+	Long:    "List all components or filter by stack pattern",
+	Example: componentsExample,
+	RunE:    executeListComponents,
+}
+
+func executeListComponents(cmd *cobra.Command, args []string) error {
+	// 1. Parse options
+	opts, err := parseComponentsOptions(cmd)
+	if err != nil {
+		return err
+	}
+
+	// 2. Fetch data (command-specific)
+	data, err := fetchComponentData(opts)
+	if err != nil {
+		return err
+	}
+
+	// 3. Render using generic renderer (local name must not shadow the output package)
+	rendered, err := renderComponents(data, opts)
+	if err != nil {
+		return err
+	}
+
+	// 4. Write output
+	return output.New(opts.Format).Write(rendered)
+}
+
+// Command-specific data fetching
+func fetchComponentData(opts *ComponentsOptions) ([]map[string]any, error) {
+	atmosConfig, err := cfg.InitCliConfig(schema.ConfigAndStacksInfo{}, false)
+	if err != nil {
+		return nil, err
+	}
+
+	stacksMap, err := e.ExecuteDescribeStacks(
+		atmosConfig,
+		"",         // stack
+		nil,        // components
+		"",         // sections
+		opts.Stack, // pattern
+		false,      // ignoreMissingFiles
+	)
+	if err != nil {
+		return nil, err
+	}
+
+	// Convert to slice of maps for renderer
+	return convertComponentsToMaps(stacksMap), nil
+}
+
+// Rendering using reusables
+func renderComponents(data []map[string]any, opts *ComponentsOptions) (string, error) {
+	// Get column config from atmos.yaml
+	atmosConfig, _ := cfg.GetContextFromViper()
+	columnConfigs := atmosConfig.Components.List.Columns
+
+	// Override with CLI columns if provided
+	if len(opts.Columns) > 0 {
+		columnConfigs = parseColumnOverride(opts.Columns)
+	}
+
+	return renderer.New(data, renderer.Options{
+		Format:  opts.Format,
+		Columns: columnConfigs,
+		Filters: buildComponentFilters(opts),
+ Sorters: buildComponentSorters(opts), + StyleFunc: componentStyleFunc, + }).Render() +} + +// Build filters from options +func buildComponentFilters(opts *ComponentsOptions) []filter.Filter { + var filters []filter.Filter + + // Type filter (real, abstract, all) + if opts.Type != "all" { + filters = append(filters, filter.NewColumnFilter("atmos_component_type", opts.Type)) + } + + // Enabled filter + if opts.Enabled != nil { + filters = append(filters, filter.NewBoolFilter("enabled", opts.Enabled)) + } + + // Locked filter + if opts.Locked != nil { + filters = append(filters, filter.NewBoolFilter("locked", opts.Locked)) + } + + // Custom filter expression + if opts.Filter != "" { + yqFilter, _ := filter.NewYQFilter(opts.Filter) + filters = append(filters, yqFilter) + } + + return filters +} + +// Build sorters from options +func buildComponentSorters(opts *ComponentsOptions) []*sort.Sorter { + if opts.Sort != "" { + sorters, _ := sort.ParseSortSpec(opts.Sort) + return sorters + } + + // Use config default + atmosConfig, _ := cfg.GetContextFromViper() + return parseSortConfig(atmosConfig.Components.List.Sort) +} + +// Conditional styling +func componentStyleFunc(row []string, idx int, headers []string) lipgloss.Style { + styles := theme.GetCurrentStyles() + + // Find enabled/locked column indices + enabledIdx := findColumnIndex(headers, "Enabled") + lockedIdx := findColumnIndex(headers, "Locked") + + // Disabled = gray + if enabledIdx >= 0 && row[enabledIdx] == "✗" { + return styles.Muted + } + + // Locked = orange/warning + if lockedIdx >= 0 && row[lockedIdx] == "🔒" { + return styles.Warning + } + + return lipgloss.NewStyle() +} +``` + +#### 3.2 Flag Handler Enhancement + +**Update `cmd/list/utils.go`** with named wrapper functions: + +```go +// Named wrapper functions for list command flags +// Follow same With* naming convention as pkg/flags/ API +// Each function appends flag options to the slice + +// WithFormatFlag adds output format flag with environment 
variable support +func WithFormatFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithStringFlag("format", "", "", "Output format: table, json, yaml, csv, tsv"), + flags.WithEnvVars("format", "ATMOS_LIST_FORMAT"), + ) +} + +// WithDelimiterFlag adds CSV/TSV delimiter flag +func WithDelimiterFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithStringFlag("delimiter", "", "", "Delimiter for CSV/TSV output"), + ) +} + +// WithColumnsFlag adds column selection flag with environment variable support +func WithColumnsFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithStringSliceFlag("columns", "", nil, "Columns to display (overrides atmos.yaml)"), + flags.WithEnvVars("columns", "ATMOS_LIST_COLUMNS"), + ) +} + +// WithStackFlag adds stack filter flag +func WithStackFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithStringFlag("stack", "s", "", "Filter by stack pattern (glob)"), + ) +} + +// WithFilterFlag adds YQ filter expression flag with environment variable support +func WithFilterFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithStringFlag("filter", "", "", "Filter expression (YQ syntax)"), + flags.WithEnvVars("filter", "ATMOS_LIST_FILTER"), + ) +} + +// WithSortFlag adds sort specification flag with environment variable support +func WithSortFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithStringFlag("sort", "", "", "Sort by column:order (e.g., 'stack:asc,component:desc')"), + flags.WithEnvVars("sort", "ATMOS_LIST_SORT"), + ) +} + +// WithEnabledFlag adds enabled filter flag +func WithEnabledFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithBoolFlag("enabled", "", nil, "Filter by enabled (true/false, omit for all)"), + ) +} + +// WithLockedFlag adds locked filter flag +func WithLockedFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithBoolFlag("locked", "", nil, "Filter by locked 
(true/false, omit for all)"), + ) +} + +// WithTypeFlag adds component type filter flag with environment variable support +func WithTypeFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithStringFlag("type", "", "real", "Component type: real, abstract, all"), + flags.WithEnvVars("type", "ATMOS_COMPONENT_TYPE"), + ) +} + +// NewListParser creates a parser with specified flags +// NOT all commands use the same flags - only include what makes sense per command +func NewListParser(builders ...func(*[]flags.Option)) *flags.StandardParser { + options := []flags.Option{} + + // Build flags from provided builder functions + for _, builder := range builders { + builder(&options) + } + + return flags.NewStandardParser(options...) +} +``` + +**Command-specific flag composition**: + +```go +// cmd/list/components.go - Has format, columns, filters +func init() { + componentsParser = NewListParser( + WithFormatFlag, // Output format selection + WithColumnsFlag, // Column customization + WithSortFlag, // Sorting + WithFilterFlag, // YQ filtering + WithStackFlag, // Filter by stack + WithTypeFlag, // Filter by component type (real/abstract) + WithEnabledFlag, // Filter by enabled status + WithLockedFlag, // Filter by locked status + ) + componentsParser.RegisterFlags(componentsCmd) + _ = componentsParser.BindToViper(viper.GetViper()) +} + +// cmd/list/stacks.go - Simpler, just filtering +func init() { + stacksParser = NewListParser( + WithFormatFlag, // Output format + WithColumnsFlag, // Column customization + WithSortFlag, // Sorting + // WithFilterFlag - NOT needed, stacks is simple + // WithStackFlag - NOT needed, this lists stacks + WithComponentFlag, // Filter stacks by component + ) + stacksParser.RegisterFlags(stacksCmd) + _ = stacksParser.BindToViper(viper.GetViper()) +} + +// cmd/list/workflows.go - File filtering, format output +func init() { + workflowsParser = NewListParser( + WithFormatFlag, // Output format + WithDelimiterFlag, // For CSV/TSV + 
WithColumnsFlag, // Column customization + WithSortFlag, // Sorting + WithFileFlag, // Filter by workflow file (existing flag) + // WithStackFlag - NOT relevant to workflows + // WithFilterFlag - Could add later, not critical + ) + workflowsParser.RegisterFlags(workflowsCmd) + _ = workflowsParser.BindToViper(viper.GetViper()) +} + +// cmd/list/values.go - Complex with YQ filtering +func init() { + valuesParser = NewListParser( + WithFormatFlag, // Output format + WithDelimiterFlag, // For CSV/TSV + WithMaxColumnsFlag, // Limit columns displayed + WithQueryFlag, // YQ expression filtering + WithStackFlag, // Filter by stack pattern + WithAbstractFlag, // Include abstract components + WithProcessTemplatesFlag, // Process Go templates + WithProcessFunctionsFlag, // Process template functions + // WithColumnsFlag - NOT needed, uses max-columns instead + // WithSortFlag - Could add, not critical + ) + valuesParser.RegisterFlags(valuesCmd) + _ = valuesParser.BindToViper(viper.GetViper()) +} +``` + +**Flag Mapping by Command**: + +| Command | Format | Columns | Sort | Filter | Stack | Delimiter | Command-Specific | +|---------|--------|---------|------|--------|-------|-----------|------------------| +| **stacks** | ✓ | ✓ | ✓ | - | - | - | `--component` | +| **components** | ✓ | ✓ | ✓ | ✓ | ✓ | - | `--type`, `--enabled`, `--locked` | +| **workflows** | ✓ | ✓ | ✓ | - | - | ✓ | `--file` | +| **vendor** | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | - | +| **values** | ✓ | - | - | - | ✓ | ✓ | `--max-columns`, `--query`, `--abstract`, `--process-*` | +| **vars** | ✓ | - | - | - | ✓ | ✓ | Same as values (alias) | +| **metadata** | ✓ | - | - | - | ✓ | ✓ | `--max-columns`, `--query`, `--process-*` | +| **settings** | ✓ | - | - | - | ✓ | ✓ | `--max-columns`, `--query`, `--process-*` | +| **instances** | ✓ | ✓ | ✓ | - | ✓ | ✓ | `--upload` | + +**Additional Command-Specific Flag Helpers**: + +```go +// WithComponentFlag adds component filter flag (for list stacks) +func WithComponentFlag(options 
*[]flags.Option) { + *options = append(*options, + flags.WithStringFlag("component", "c", "", "Filter stacks by component"), + flags.WithEnvVars("component", "ATMOS_COMPONENT"), + ) +} + +// WithFileFlag adds workflow file filter flag +func WithFileFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithStringFlag("file", "f", "", "Filter by workflow file"), + ) +} + +// WithMaxColumnsFlag adds max columns limit flag (for values/metadata/settings) +func WithMaxColumnsFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithIntFlag("max-columns", "", 0, "Maximum number of columns to display"), + ) +} + +// WithQueryFlag adds YQ query expression flag +func WithQueryFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithStringFlag("query", "", "", "YQ expression to filter data"), + ) +} + +// WithAbstractFlag adds abstract component inclusion flag +func WithAbstractFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithBoolFlag("abstract", "", false, "Include abstract components"), + ) +} + +// WithProcessTemplatesFlag adds template processing flag +func WithProcessTemplatesFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithBoolFlag("process-templates", "", true, "Process Go templates"), + ) +} + +// WithProcessFunctionsFlag adds function processing flag +func WithProcessFunctionsFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithBoolFlag("process-functions", "", true, "Process template functions"), + ) +} + +// WithUploadFlag adds upload to Pro API flag (for instances) +func WithUploadFlag(options *[]flags.Option) { + *options = append(*options, + flags.WithBoolFlag("upload", "", false, "Upload instances to Atmos Pro API"), + ) +} +``` + +**Benefits of Named Functions**: +- ✅ More readable: `WithLockedFlag` vs `flags.WithBoolFlag("locked", "", nil, "...")` +- ✅ **Consistent with pkg/flags/ naming** (`With*` pattern) +- ✅ Reusable across multiple 
commands +- ✅ Consistent flag definitions (description, env vars) +- ✅ Easy to discover available flags via autocomplete +- ✅ Single source of truth for each flag's configuration +- ✅ Easier to test flag configurations +- ✅ **Each command chooses only the flags it needs** + +#### 3.3 Commands to Update + +**Priority Tier 1** (Linear Issues): +1. `cmd/list/stacks.go` - Add schema field, use reusables +2. `cmd/list/components.go` - Add filters, use reusables (DEV-2805) +3. `cmd/list/vendor.go` - Use reusables (DEV-2806) + +**Priority Tier 2**: +4. `cmd/list/workflows.go` - Migrate to reusables +5. `cmd/list/values.go` - Use reusables +6. `cmd/list/vars.go` - Use reusables (alias of values) +7. `cmd/list/instances.go` - Verify reusable usage + +**Priority Tier 3**: +8. `cmd/list/metadata.go` - Use reusables +9. `cmd/list/settings.go` - Use reusables + +**Reference** (no changes): +10. `cmd/list/themes.go` - Already complete ✅ + +### Phase 4: Business Logic Refactoring + +#### 4.1 Separate Data from Formatting + +**Before (mixed concerns)**: +```go +// pkg/list/list_workflows.go +func FilterAndListWorkflows(..., format string) (string, error) { + rows := fetchData() + return formatOutput(rows, format) // ❌ Mixed +} +``` + +**After (separated)**: +```go +// pkg/list/list_workflows.go +func FetchWorkflowData(...) 
([]WorkflowRow, error) { + // Pure data fetching + return rows, nil +} + +// Formatting handled by renderer in cmd layer +``` + +#### 4.2 Eliminate Anti-Patterns + +**Replace**: +- ❌ `u.PrintMessageInColor()` → ✅ `data.Write()` or `ui.Write()` +- ❌ Direct format handling in pkg → ✅ Use `pkg/list/renderer/` +- ❌ Custom CSV/JSON logic → ✅ Use `pkg/list/format/` formatters +- ❌ Duplicated validation → ✅ Centralize in reusables + +### Phase 5: Documentation + +#### 5.1 Update Existing Documentation + +**Files**: +- `website/docs/cli/commands/list/usage.mdx` - Verify examples +- `website/docs/cli/commands/list/list-stacks.mdx` +- `website/docs/cli/commands/list/list-components.mdx` (DEV-2805) +- `website/docs/cli/commands/list/list-vendor.mdx` (DEV-2806) +- All other list command docs + +**Content**: +- Template variable reference for each command +- Filter flag examples +- Sort flag examples +- Conditional styling behavior +- Configuration examples + +#### 5.2 Template Variable Reference + +**Document available template variables per command**: + +**Components**: +``` +{{ .atmos_component }} - Component name +{{ .atmos_stack }} - Stack name +{{ .atmos_component_type }} - "real" or "abstract" +{{ .vars.* }} - Component variables +{{ .settings.* }} - Component settings +{{ .metadata.* }} - Component metadata +{{ .env.* }} - Environment variables +{{ .enabled }} - Boolean: component enabled +{{ .locked }} - Boolean: component locked +{{ .abstract }} - Boolean: component is abstract +``` + +**Stacks**: +``` +{{ .stack }} - Stack name +{{ .components.terraform }} - Map of Terraform components +{{ .components.helmfile }} - Map of Helmfile components +{{ len .components.terraform }} - Count of Terraform components +``` + +**Workflows**: +``` +{{ .file }} - Workflow file path +{{ .name }} - Workflow name +{{ .description }} - Workflow description +{{ .steps }} - Workflow steps array +``` + +**Vendor**: +``` +{{ .atmos_component }} - Component name +{{ .atmos_vendor_type }} - 
"component" or "vendor" +{{ .atmos_vendor_file }} - Manifest file path +{{ .atmos_vendor_target }}- Target folder +``` + +#### 5.3 Build Verification + +```bash +cd website +npm run build +``` + +### Phase 6: Testing + +#### 6.1 Reusable Utilities (>90% Coverage) + +**`pkg/list/column/column_test.go`**: +```go +func TestNewSelector(t *testing.T) { /* ... */ } +func TestSelector_Extract_TemplateEvaluation(t *testing.T) { /* ... */ } +func TestSelector_Extract_WithNestedVars(t *testing.T) { /* ... */ } +func TestSelector_Extract_WithMissingFields(t *testing.T) { /* ... */ } +func TestSelector_Select(t *testing.T) { /* ... */ } +func TestTemplateFunctions(t *testing.T) { /* ... */ } +``` + +**`pkg/list/filter/filter_test.go`**: +```go +func TestYQFilter(t *testing.T) { /* ... */ } +func TestGlobFilter(t *testing.T) { /* ... */ } +func TestColumnValueFilter(t *testing.T) { /* ... */ } +func TestBoolFilter(t *testing.T) { /* ... */ } +func TestChain(t *testing.T) { /* ... */ } +``` + +**`pkg/list/sort/sort_test.go`**: +```go +func TestSorter_Sort_Ascending(t *testing.T) { /* ... */ } +func TestSorter_Sort_Descending(t *testing.T) { /* ... */ } +func TestSorter_Sort_Numeric(t *testing.T) { /* ... */ } +func TestMultiSorter(t *testing.T) { /* ... */ } +func TestParseSortSpec(t *testing.T) { /* ... */ } +``` + +**`pkg/list/renderer/renderer_test.go`**: +```go +func TestRenderer_FullPipeline(t *testing.T) { /* ... */ } +func TestRenderer_AllFormats(t *testing.T) { /* ... */ } +func TestRenderer_ConditionalStyling(t *testing.T) { /* ... */ } +``` + +**`pkg/list/output/output_test.go`**: +```go +func TestManager_Write_Structured(t *testing.T) { /* ... */ } +func TestManager_Write_HumanReadable(t *testing.T) { /* ... 
*/ } +``` + +**Coverage Target**: >90% on all reusable packages + +#### 6.2 Command Tests (80-90% Coverage) + +**`cmd/list/*_test.go`**: +- Test data fetching logic +- Test flag parsing +- Test integration with reusables +- Mock renderer for isolation + +**Coverage Target**: 80-90% on command files + +#### 6.3 Integration Tests + +**Golden snapshots**: +```bash +# Regenerate all list command snapshots +go test ./tests -run 'TestCLICommands/atmos_list_*' -regenerate-snapshots + +# Verify specific command +go test ./tests -run 'TestCLICommands/atmos_list_components' -v +``` + +**Test scenarios**: +- Custom columns from atmos.yaml +- CLI column override +- Filter combinations +- Sort combinations +- All format types +- Conditional styling output + +## Implementation Timeline + +### Week 1: Reusable Foundation +- **Day 1-2**: `pkg/list/column/` + tests (90%+ coverage) +- **Day 3**: `pkg/list/filter/` + tests (90%+ coverage) +- **Day 4**: `pkg/list/sort/` + tests (90%+ coverage) +- **Day 5**: `pkg/list/renderer/` + `pkg/list/output/` + tests (90%+ coverage) + +### Week 2: Schema & Tier 1 Commands +- **Day 1**: Schema updates (Stacks, Components) +- **Day 2-3**: Update `list components` (DEV-2805) + tests (80%+ coverage) +- **Day 4**: Update `list stacks` + tests (80%+ coverage) +- **Day 5**: Update `list vendor` (DEV-2806) + tests (80%+ coverage) + +### Week 3: Tier 2 & 3 Commands +- **Day 1-2**: Update workflows, values, vars + tests +- **Day 3**: Update instances, metadata, settings + tests +- **Day 4-5**: Remove deprecated code, refactor business logic + +### Week 4: Documentation & Testing +- **Day 1-2**: Update all documentation +- **Day 3**: Integration tests and golden snapshots +- **Day 4**: PRD documentation (this document) +- **Day 5**: Final review and cleanup + +## Success Criteria + +### Reusable Infrastructure ✅ +- [x] Generic column selection with Go template evaluation (>90% coverage) +- [x] Generic filtering (YQ, glob, value, bool) (>90% coverage) +- 
[x] Generic sorting (any column, any order) (>90% coverage) +- [x] Generic renderer supporting all formats (>90% coverage) +- [x] Output manager enforcing data/ui layer usage (>90% coverage) + +### Architecture ✅ +- [x] No deep exits (maintain existing clean pattern) +- [x] Data fetching separated from formatting +- [x] Uses `data.Write*()` for structured output (stdout) +- [x] Uses `ui.Write*()` for human output (stderr) +- [x] TTY detection automatic (no manual checks) +- [x] No `u.PrintMessageInColor()` usage + +### Features ✅ +- [x] DEV-2805: `atmos list components` with filters +- [x] DEV-2806: `atmos list vendor` with columns +- [x] All commands support column customization +- [x] All commands support --columns CLI override +- [x] All commands support --sort flag +- [x] All commands support --filter flag + +### Documentation ✅ +- [x] All documented features implemented +- [x] Configuration examples accurate +- [x] Template variable reference complete +- [x] Website builds without errors + +### Testing ✅ +- [x] >90% coverage on reusable utilities +- [x] 80-90% coverage on commands +- [x] Integration tests with golden snapshots +- [x] All tests pass + +## Risks & Mitigations + +### Risk: Template Evaluation Performance + +**Risk**: Evaluating Go templates for every row could be slow for large datasets. + +**Mitigation**: +- Pre-parse and cache templates (done once per selector) +- Benchmark with realistic datasets +- Consider lazy evaluation or pagination for very large lists +- Profile and optimize hot paths + +### Risk: Breaking Changes in Column Config + +**Risk**: Existing `workflows` and `vendor` column configs might break. + +**Mitigation**: +- Maintain backward compatibility with existing configs +- Test with existing configurations in fixtures +- Document migration path if changes needed +- Version column config format in schema + +### Risk: Complex Template Debugging + +**Risk**: Users may struggle with template syntax errors. 
+ +**Mitigation**: +- Provide clear error messages with line/column info +- Include template context in error output +- Add `--debug-templates` flag for troubleshooting +- Document common template patterns and gotchas + +## Future Enhancements + +**Out of scope for this PRD but worth considering**: + +1. **Interactive TUI**: Browse/filter list results interactively +2. **Pagination**: For very large datasets +3. **Column Auto-Sizing**: Smart column width calculation +4. **Export Templates**: Save custom column configs as templates +5. **List Profiles**: Predefined column sets (minimal, default, full) +6. **Watch Mode**: Auto-refresh list output (`--watch`) +7. **Diff Mode**: Compare two list outputs + +## Appendix A: Template Function Reference + +### Type Conversion +- `toString` - Convert any value to string +- `toInt` - Convert to integer +- `toBool` - Convert to boolean + +### Formatting +- `truncate n` - Truncate string to n characters +- `pad n` - Pad string to n characters +- `upper` - Convert to uppercase +- `lower` - Convert to lowercase + +### Data Access +- `get map key` - Safe nested map access +- `getOr map key default` - Get with default value +- `has map key` - Check if key exists + +### Collections +- `len` - Get length of array/map/string +- `join array sep` - Join array with separator +- `split string sep` - Split string by separator + +### Conditional +- `ternary condition true false` - Ternary operator + +### Examples + +```yaml +# Conditional icon +value: '{{ ternary .enabled "✓" "✗" }}' + +# Nested field access +value: '{{ get .vars "region" }}' + +# With default +value: '{{ getOr .vars "region" "unknown" }}' + +# Array length +value: '{{ len .steps }}' + +# Formatting +value: '{{ .description | truncate 50 }}' + +# Multiple operations +value: '{{ if .enabled }}{{ .vars.region | upper }}{{ else }}disabled{{ end }}' +``` + +## Appendix B: CLI Examples + +```bash +# List components with default columns from atmos.yaml +atmos list components + +# 
Override columns via CLI +atmos list components --columns component,stack,region + +# Filter by type +atmos list components --type abstract + +# Filter by enabled/locked status +atmos list components --enabled=true --locked=false + +# Sort by multiple columns +atmos list components --sort "stack:asc,component:desc" + +# YQ filter expression +atmos list components --filter '.vars.region == "us-east-2"' + +# Combine filters and custom columns +atmos list components \ + --type real \ + --enabled=true \ + --columns component,stack,region \ + --sort "region:asc,stack:asc" \ + --format table + +# Export to CSV +atmos list components --format csv > components.csv + +# Export to JSON for jq processing +atmos list components --format json | jq '.[] | select(.region == "us-east-2")' + +# List stacks with custom columns +atmos list stacks --columns stack,terraform_count,helmfile_count + +# List workflows sorted by name +atmos list workflows --sort "name:asc" + +# List vendor components +atmos list vendor --columns component,type,manifest,folder +``` + +## Appendix C: Configuration Precedence + +**Column Selection**: +1. CLI `--columns` flag (highest priority) +2. `atmos.yaml` `{command}.list.columns` +3. Default columns per command (hardcoded fallback) + +**Sort Order**: +1. CLI `--sort` flag (highest priority) +2. `atmos.yaml` `{command}.list.sort` +3. No sorting (default) + +**Output Format**: +1. CLI `--format` flag (highest priority) +2. Command-specific `{command}.list.format` (for commands with config sections) +3. Environment variable `ATMOS_LIST_FORMAT` +4. `"table"` (default) + +**Note**: Only commands with dedicated config sections support `list.format` configuration: +- `stacks.list.format` +- `components.list.format` +- `workflows.list.format` +- `vendor.list.format` + +Commands without config sections (instances, values, vars, metadata, settings) use env var and CLI flag only. 
+ +## Appendix D: Error Messages + +**Template Evaluation Errors**: +``` +Error evaluating column template for "Region": + Template: {{ .vars.region }} + Error: map has no entry for key "region" + Context: component=vpc, stack=plat-ue2-dev + +Hint: Check that the field exists in your component configuration. +Use --debug-templates to see full template context. +``` + +**Filter Errors**: +``` +Error applying YQ filter: + Filter: .vars.region == "us-east-2" + Error: invalid syntax at line 1, column 15 + +Hint: Use YQ syntax for filters. See: https://mikefarah.gitbook.io/yq/ +``` + +**Sort Errors**: +``` +Error sorting by column "Region": + Column not found in output + Available columns: Component, Stack, Enabled + +Hint: Sorting applies to output columns, not raw data fields. +Ensure the column exists in your column configuration. +``` diff --git a/docs/prd/provenance-import-resolution.md b/docs/prd/provenance-import-resolution.md new file mode 100644 index 0000000000..fb42a77484 --- /dev/null +++ b/docs/prd/provenance-import-resolution.md @@ -0,0 +1,479 @@ +# Atmos Provenance System Investigation + +## Executive Summary + +Atmos provides a sophisticated provenance tracking system that records the file, line number, and import chain for every configuration value. The system uses a **three-layer architecture** to properly track where stacks are defined and how they're imported. + +## Key Architecture Components + +### 1. 
MergeContext - The Import Chain Tracker + +**Location**: `/pkg/merge/merge_context.go` + +The `MergeContext` struct tracks the complete import chain during configuration merging: + +```go +type MergeContext struct { + // CurrentFile is the file currently being processed + CurrentFile string + + // ImportChain tracks the chain of imports leading to the current file + // First element is the root file, last is the current file + ImportChain []string + + // ParentContext for nested operations + ParentContext *MergeContext + + // Provenance stores optional provenance information + Provenance *ProvenanceStorage + + // Positions stores YAML position information (line:column) + Positions u.PositionMap +} +``` + +**Key Methods**: +- `WithFile(filePath)` - Creates a new context for a file, appending to ImportChain +- `GetImportChainString()` - Returns formatted import chain (e.g., "file1 → file2 → file3") +- `GetDepth()` - Returns the depth of the import chain +- `HasFile(filePath)` - Detects circular imports + +### 2. ProvenanceEntry - The Value Source Record + +**Location**: `/pkg/merge/provenance_entry.go` + +Each value in the configuration has provenance metadata: + +```go +type ProvenanceEntry struct { + File string // Source file path + Line int // Line number (1-indexed) + Column int // Column number (1-indexed) + Type ProvenanceType // import, inline, override, computed + ValueHash string // Hash of value for change detection + Depth int // Import depth: 0=parent, 1=direct import, 2+=nested +} + +type ProvenanceType string +const ( + ProvenanceTypeImport = "import" // ○ Inherited + ProvenanceTypeInline = "inline" // ● Defined + ProvenanceTypeOverride = "override" // ● Overridden + ProvenanceTypeComputed = "computed" // ∴ Templated + ProvenanceTypeDefault = "default" // ○ Default +) +``` + +### 3. 
ProvenanceStorage - Thread-Safe Value Tracking + +**Location**: `/pkg/merge/provenance_storage.go` + +Stores provenance chains keyed by JSONPath: + +```go +type ProvenanceStorage struct { + // entries maps JSONPath to a chain of provenance entries + // e.g., "vars.cidr" -> [entry1, entry2, entry3] + // Chain is ordered base → override + entries map[string][]ProvenanceEntry + + // Thread-safe access + mutex sync.RWMutex +} +``` + +**Usage Examples**: +```go +// Record provenance for a nested value +entry := ProvenanceEntry{ + File: "stacks/prod/us-east-2.yaml", + Line: 10, + Column: 5, + Type: ProvenanceTypeInline, + Depth: 0, // Parent stack +} +storage.Record("vars.cidr", entry) + +// Get the inheritance chain for a value +chain := storage.Get("vars.cidr") +// Returns all values this variable had through the inheritance chain + +// Get only the final value +latest := storage.GetLatest("vars.cidr") +``` + +## How Provenance Tracks Import Chains + +### The Flow (in ProcessYAMLConfigFileWithContext) + +1. **Initialize MergeContext** (line 590-597): +```go +if mergeContext == nil { + mergeContext = m.NewMergeContext() + if atmosConfig != nil && atmosConfig.TrackProvenance { + mergeContext.EnableProvenance() + } +} +mergeContext = mergeContext.WithFile(relativeFilePath) +``` +Each file call creates a **new context** with updated ImportChain. + +2. **Extract YAML Positions** (line 657): +```go +stackConfigMap, positions, err := u.UnmarshalYAMLFromFileWithPositions[...]( + atmosConfig, stackManifestTemplatesProcessed, filePath) +``` +The positions map tracks where each value appears in the YAML. + +3. **Enable Provenance Storage** (line 676-679): +```go +if atmosConfig.TrackProvenance && mergeContext != nil && len(positions) > 0 { + mergeContext.EnableProvenance() + mergeContext.Positions = positions +} +``` + +4. 
**Recursive Merge with Provenance** (in MergeWithProvenance):
+```go
+// MergeWithProvenance calls standard Merge, then records provenance
+recordProvenanceRecursive(provenanceRecursiveParams{
+    data:        result,
+    currentPath: "",
+    ctx:         ctx,       // Has ImportChain + Provenance
+    positions:   positions, // YAML line:column info
+    currentFile: ctx.CurrentFile,
+    depth:       ctx.GetImportDepth(),
+})
+```
+
+The `depth` is derived by walking the chain of parent contexts (one level per import, equivalent to the length of the import chain minus one):
+```go
+func (c *MergeContext) GetImportDepth() int {
+    depth := 0
+    current := c
+    for current != nil && current.ParentContext != nil {
+        depth++
+        current = current.ParentContext
+    }
+    return depth
+}
+```
+
+## How Stacks Are Mapped to Files
+
+### Two-Part System
+
+**Part 1: Stack Name to File Mapping** (ProcessYAMLConfigFiles)
+
+In `stack_processor_utils.go` (lines 314-328):
+```go
+for i, filePath := range filePaths {
+    go func(i int, p string) {
+        // Derive stack name from file path
+        stackFileName := strings.TrimSuffix(
+            strings.TrimSuffix(
+                u.TrimBasePathFromPath(stackBasePath+"/", p),
+                u.DefaultStackConfigFileExtension),
+            ".yml",
+        )
+
+        // Example: "stacks/orgs/prod/us-east-2.yaml" → "orgs/prod/us-east-2"
+
+        // ... process file ...
+
+        // Store result by stack name
+        results <- stackProcessResult{
+            stackFileName: stackFileName,
+            // ...
+ } + }(i, filePath) +} +``` + +**Part 2: MergeContext Storage** (lines 424-428): +```go +// Store merge context for this stack file if provenance tracking is enabled +if atmosConfig != nil && atmosConfig.TrackProvenance && + result.mergeContext != nil && result.mergeContext.IsProvenanceEnabled() { + + // Key: stack name (derived from file path) + SetMergeContextForStack(result.stackFileName, result.mergeContext) + SetLastMergeContext(result.mergeContext) // For backward compat +} +``` + +### Retrieving Stack File Information + +**For a specific component in a stack** (describe_component.go): +```go +// Get the stack file (from ProcessComponentConfig) +stackFile = result.StackFile // e.g., "prod/us-east-2" + +// Get the merge context with import chain +mergeContext = GetMergeContextForStack(configAndStacksInfo.StackFile) + +// Now you can: +// 1. Get the import chain +importChain := mergeContext.ImportChain +// Returns: ["prod/us-east-2.yaml", "catalog/vpc/defaults.yaml", "orgs/acme/_defaults.yaml"] + +// 2. Get provenance for a specific value +provenance := mergeContext.GetProvenance("vars.cidr") +// Returns: [ +// {File: "catalog/vpc/defaults.yaml", Line: 8, Depth: 2}, +// {File: "prod/us-east-2.yaml", Line: 10, Depth: 0, Type: override}, +// ] + +// 3. Get the final value's source +latest := mergeContext.GetProvenance("vars.cidr")[len(...)-1] +// Returns: {File: "prod/us-east-2.yaml", Line: 10, Depth: 0} +``` + +## Correct Way to Resolve Stack Files + +### ✅ DO: Use ExecuteDescribeStacks + MergeContext + +This is the **authoritative** method used by describe component: + +```go +// 1. Get the stacks map (maps stack name → config) +stacksMap, _, err := FindStacksMap(atmosConfig, false) + +// 2. Process a specific component +err = ProcessComponentConfig( + &configAndStacksInfo, + stackName, + stacksMap, + componentType, + component, + authManager, +) + +// 3. Get the stack file (this is accurate) +stackFile := configAndStacksInfo.StackFile + +// 4. 
Get the merge context with import chain +mergeContext := GetMergeContextForStack(stackFile) + +// 5. Use import chain from mergeContext +for i, file := range mergeContext.ImportChain { + depth := i // 0 = parent stack, 1+ = imports + fmt.Printf("Level %d: %s\n", depth, file) +} +``` + +### ❌ DON'T: Use Heuristic Path Guessing + +The old `import_resolver.go` uses unreliable heuristics: + +```go +// BAD: Tries to guess the file path from stack name +possiblePaths := []string{ + filepath.Join(stacksBasePath, "orgs", stackName+".yaml"), // Assumes pattern! + filepath.Join(stacksBasePath, stackName+".yaml"), + filepath.Join(stacksBasePath, strings.ReplaceAll(stackName, "-", "/"), ".yaml"), +} + +for _, path := range possiblePaths { + if u.FileExists(path) { + return path, nil + } +} +``` + +**Problems**: +- Assumes stack names follow a specific pattern +- Fails for complex directory structures +- Doesn't work with symlinks or aliased imports +- Can return wrong file if multiple matches exist + +## Data Flow for Tree View Implementation + +### Recommended Architecture + +``` +┌─────────────────────────────────────────────────┐ +│ ExecuteDescribeStacks() or describe component │ +└─────────────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────┐ +│ FindStacksMap() │ +│ Returns: stacksMap[stackName] = config │ +└─────────────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────┐ +│ ProcessComponentConfig() │ +│ Sets configAndStacksInfo.StackFile │ +└─────────────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────┐ +│ GetMergeContextForStack(stackFile) │ +│ Returns: MergeContext with ImportChain │ +└─────────────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────┐ +│ mergeContext.ImportChain │ +│ [0] = parent stack file path │ +│ [1..N] = imported files in order │ 
+└─────────────────────────────────────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────┐ +│ Build Tree View │ +│ - Root: parent stack file │ +│ - Children: imported files at each level │ +│ - Depth indicators show inheritance chain │ +└─────────────────────────────────────────────────┘ +``` + +## Code Examples for Tree View Implementation + +### Example 1: Get Accurate Import Chain for Stack + +```go +package exec + +import ( + m "github.com/cloudposse/atmos/pkg/merge" +) + +// GetStackImportChain returns the import chain for a specific stack +// This is the CORRECT way to get the actual files +func GetStackImportChain(stackFileName string) []string { + mergeContext := GetMergeContextForStack(stackFileName) + if mergeContext == nil { + return []string{} + } + + // ImportChain contains the actual file paths + // Index 0 = parent stack + // Index 1..N = imported files + return mergeContext.ImportChain +} + +// GetStackFileDepth returns how many levels of imports a stack has +func GetStackFileDepth(stackFileName string) int { + mergeContext := GetMergeContextForStack(stackFileName) + if mergeContext == nil { + return 0 + } + + // Depth = number of imports + return len(mergeContext.ImportChain) - 1 +} +``` + +### Example 2: Get Provenance for a Specific Value + +```go +// GetValueProvenance returns where a value was defined +func GetValueProvenance(stackFileName string, jsonPath string) *m.ProvenanceEntry { + mergeContext := GetMergeContextForStack(stackFileName) + if mergeContext == nil { + return nil + } + + chain := mergeContext.GetProvenance(jsonPath) + if len(chain) == 0 { + return nil + } + + // Last entry is the final value + latest := chain[len(chain)-1] + return &latest +} + +// GetValueInheritanceChain returns all overrides for a value +func GetValueInheritanceChain(stackFileName string, jsonPath string) []*m.ProvenanceEntry { + mergeContext := GetMergeContextForStack(stackFileName) + if mergeContext == nil { + return nil + } + + 
chain := mergeContext.GetProvenance(jsonPath) + result := make([]*m.ProvenanceEntry, len(chain)) + for i := range chain { + result[i] = &chain[i] + } + return result +} +``` + +### Example 3: Build a Tree of Stacks and Their Imports + +```go +// StackNode represents a node in the stack import tree +type StackNode struct { + Name string + StackFile string + ImportedFrom []string + Depth int +} + +// BuildStackImportTree builds a tree showing import relationships +func BuildStackImportTree(atmosConfig *schema.AtmosConfiguration) (map[string]*StackNode, error) { + stacksMap, _, err := FindStacksMap(atmosConfig, false) + if err != nil { + return nil, err + } + + nodes := make(map[string]*StackNode) + + for stackName := range stacksMap { + mergeContext := GetMergeContextForStack(stackName) + if mergeContext == nil { + continue + } + + node := &StackNode{ + Name: stackName, + StackFile: stackName, + Depth: len(mergeContext.ImportChain) - 1, + } + + // ImportChain[0] is the parent, ImportChain[1:] are imports + if len(mergeContext.ImportChain) > 1 { + node.ImportedFrom = mergeContext.ImportChain[1:] + } + + nodes[stackName] = node + } + + return nodes, nil +} +``` + +## Key Insights for Tree View Implementation + +### 1. Stack File Resolution is Automatic +- Don't use heuristics or pattern matching +- Use `ProcessComponentConfig()` which sets `StackFile` correctly +- Or retrieve via `GetMergeContextForStack(stackName)` + +### 2. Import Chain is Complete and Ordered +- `ImportChain[0]` = parent stack file (the one being described) +- `ImportChain[1..N]` = imported files in merge order +- This is the **complete and accurate** import path + +### 3. Depth Tracking is Built-In +- `ImportChain.length - 1` = depth of imports +- Used to show indentation and visual hierarchy +- Depth 0 = parent stack, Depth 1 = direct import, Depth 2+ = nested imports + +### 4. 
Line Numbers Are Available +- Use `mergeContext.GetProvenance(jsonPath)` for value-level provenance +- Each entry has `File`, `Line`, `Column`, and `Depth` +- This enables accurate "go to line" functionality + +### 5. Thread Safety +- `MergeContext` uses `sync.RWMutex` in `ProvenanceStorage` +- `GetMergeContextForStack()` uses locks for safe access +- Safe to call from multiple goroutines + +## Deprecation Note + +The old system with `import_resolver.go` (using heuristic path guessing) is **outdated and inaccurate**. Always use the merge context system for reliable stack-to-file mapping. diff --git a/errors/errors.go b/errors/errors.go index e563431838..d084a6715f 100644 --- a/errors/errors.go +++ b/errors/errors.go @@ -45,6 +45,8 @@ var ( ErrPathResolution = errors.New("failed to resolve absolute path") ErrInvalidTemplateFunc = errors.New("invalid template function") ErrInvalidTemplateSettings = errors.New("invalid template settings") + ErrTemplateEvaluation = errors.New("template evaluation failed") + ErrInvalidConfig = errors.New("invalid configuration") ErrRefuseDeleteSymbolicLink = errors.New("refusing to delete symbolic link") ErrNoDocsGenerateEntry = errors.New("no docs.generate entry found") ErrMissingDocType = errors.New("doc-type argument missing") @@ -105,7 +107,7 @@ var ( ErrTerraformBackendAPIError = errors.New("terraform backend API error") ErrUnsupportedBackendType = errors.New("unsupported backend type") ErrProcessTerraformStateFile = errors.New("error processing terraform state file") - ErrLoadAwsConfig = errors.New("failed to load AWS config") + ErrLoadAWSConfig = errors.New("failed to load AWS config") ErrGetObjectFromS3 = errors.New("failed to get object from S3") ErrReadS3ObjectBody = errors.New("failed to read S3 object body") ErrCreateGCSClient = errors.New("failed to create GCS client") @@ -448,6 +450,7 @@ var ( ErrParseStacks = errors.New("could not parse stacks") ErrParseComponents = errors.New("could not parse components") 
ErrNoComponentsFound = errors.New("no components found") + ErrNoStacksFound = errors.New("no stacks found") ErrStackNotFound = errors.New("stack not found") ErrProcessStack = errors.New("error processing stack") diff --git a/examples/quick-start-advanced/atmos.yaml b/examples/quick-start-advanced/atmos.yaml index 93a21f94b2..4ca43f859c 100644 --- a/examples/quick-start-advanced/atmos.yaml +++ b/examples/quick-start-advanced/atmos.yaml @@ -54,6 +54,30 @@ components: # Can also be set using 'ATMOS_COMPONENTS_HELMFILE_CLUSTER_NAME_PATTERN' ENV var cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster" + # List command configuration for components + list: + # Custom columns for 'atmos list instances' command + # Each column supports Go template syntax with access to instance data + columns: + - name: " " + value: "{{ .status }}" + - name: Stack + value: "{{ .stack }}" + - name: Component + value: "{{ .component }}" + - name: Type + value: "{{ .type }}" + - name: Tenant + value: "{{ .vars.tenant }}" + - name: Environment + value: "{{ .vars.environment }}" + - name: Stage + value: "{{ .vars.stage }}" + - name: Region + value: "{{ .vars.region }}" + - name: Component Folder + value: "{{ .component_folder }}" + stacks: # Can also be set using 'ATMOS_STACKS_BASE_PATH' ENV var, or '--config-dir' and '--stacks-dir' command-line arguments # Supports both absolute and relative paths diff --git a/examples/quick-start-advanced/stacks/catalog/vpc-flow-logs-bucket/defaults.yaml b/examples/quick-start-advanced/stacks/catalog/vpc-flow-logs-bucket/defaults.yaml index e9319e5ff0..65a58bb37e 100644 --- a/examples/quick-start-advanced/stacks/catalog/vpc-flow-logs-bucket/defaults.yaml +++ b/examples/quick-start-advanced/stacks/catalog/vpc-flow-logs-bucket/defaults.yaml @@ -4,6 +4,7 @@ components: metadata: # Point to the Terraform component component: vpc-flow-logs-bucket + description: "S3 bucket for VPC Flow Logs storage" vars: enabled: true name: "vpc-flow-logs" 
diff --git a/examples/quick-start-advanced/stacks/catalog/vpc/defaults.yaml b/examples/quick-start-advanced/stacks/catalog/vpc/defaults.yaml index 7c30014b46..3a675e0de5 100644 --- a/examples/quick-start-advanced/stacks/catalog/vpc/defaults.yaml +++ b/examples/quick-start-advanced/stacks/catalog/vpc/defaults.yaml @@ -4,6 +4,7 @@ components: metadata: # Point to the Terraform component component: vpc + description: "Virtual Private Cloud with subnets and NAT gateway" settings: # The `vpc` component depends on the `vpc-flow-logs-bucket` component in the same stack depends_on: diff --git a/go.mod b/go.mod index 2b542315d7..4613d661b5 100644 --- a/go.mod +++ b/go.mod @@ -22,14 +22,14 @@ require ( github.com/arsham/figurine v1.3.0 github.com/atotto/clipboard v0.1.4 github.com/aws/aws-sdk-go-v2 v1.41.0 - github.com/aws/aws-sdk-go-v2/config v1.32.3 - github.com/aws/aws-sdk-go-v2/credentials v1.19.3 - github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.13 - github.com/aws/aws-sdk-go-v2/service/s3 v1.93.0 - github.com/aws/aws-sdk-go-v2/service/ssm v1.67.5 - github.com/aws/aws-sdk-go-v2/service/sso v1.30.6 - github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.11 - github.com/aws/aws-sdk-go-v2/service/sts v1.41.3 + github.com/aws/aws-sdk-go-v2/config v1.32.4 + github.com/aws/aws-sdk-go-v2/credentials v1.19.4 + github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.14 + github.com/aws/aws-sdk-go-v2/service/s3 v1.93.1 + github.com/aws/aws-sdk-go-v2/service/ssm v1.67.6 + github.com/aws/aws-sdk-go-v2/service/sso v1.30.7 + github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.12 + github.com/aws/aws-sdk-go-v2/service/sts v1.41.4 github.com/aws/smithy-go v1.24.0 github.com/bmatcuk/doublestar/v4 v4.9.1 github.com/charmbracelet/bubbles v0.21.1-0.20250623103423-23b8fd6302d7 @@ -100,9 +100,9 @@ require ( github.com/zclconf/go-cty v1.17.0 go.uber.org/mock v0.6.0 go.yaml.in/yaml/v3 v3.0.4 - golang.org/x/oauth2 v0.33.0 - golang.org/x/term v0.37.0 - golang.org/x/text v0.31.0 + 
golang.org/x/oauth2 v0.34.0 + golang.org/x/term v0.38.0 + golang.org/x/text v0.32.0 google.golang.org/api v0.257.0 google.golang.org/grpc v1.77.0 gopkg.in/ini.v1 v1.67.0 @@ -153,17 +153,17 @@ require ( github.com/avast/retry-go v3.0.0+incompatible // indirect github.com/aws/aws-sdk-go v1.55.7 // indirect github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.4 // indirect - github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.15 // indirect - github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.15 // indirect - github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.15 // indirect + github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.16 // indirect + github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16 // indirect + github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16 // indirect github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect - github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.15 // indirect + github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.16 // indirect github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4 // indirect - github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.6 // indirect - github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.15 // indirect - github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.15 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.7 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.16 // indirect github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.35.4 // indirect - github.com/aws/aws-sdk-go-v2/service/signin v1.0.3 // indirect + github.com/aws/aws-sdk-go-v2/service/signin v1.0.4 // indirect github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect github.com/aymerick/douceur v0.2.0 // indirect github.com/bearsh/hid v1.6.0 // indirect @@ -397,8 +397,8 @@ require ( golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 // indirect 
golang.org/x/mod v0.30.0 // indirect golang.org/x/net v0.47.0 // indirect - golang.org/x/sync v0.18.0 // indirect - golang.org/x/sys v0.38.0 // indirect + golang.org/x/sync v0.19.0 // indirect + golang.org/x/sys v0.39.0 // indirect golang.org/x/time v0.14.0 // indirect golang.org/x/tools v0.39.0 // indirect golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da // indirect diff --git a/go.sum b/go.sum index 8bc00e2b5b..e1b1aa2a0f 100644 --- a/go.sum +++ b/go.sum @@ -175,44 +175,44 @@ github.com/aws/aws-sdk-go-v2 v1.41.0 h1:tNvqh1s+v0vFYdA1xq0aOJH+Y5cRyZ5upu6roPgP github.com/aws/aws-sdk-go-v2 v1.41.0/go.mod h1:MayyLB8y+buD9hZqkCW3kX1AKq07Y5pXxtgB+rRFhz0= github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.4 h1:489krEF9xIGkOaaX3CE/Be2uWjiXrkCH6gUX+bZA/BU= github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.4/go.mod h1:IOAPF6oT9KCsceNTvvYMNHy0+kMF8akOjeDvPENWxp4= -github.com/aws/aws-sdk-go-v2/config v1.32.3 h1:cpz7H2uMNTDa0h/5CYL5dLUEzPSLo2g0NkbxTRJtSSU= -github.com/aws/aws-sdk-go-v2/config v1.32.3/go.mod h1:srtPKaJJe3McW6T/+GMBZyIPc+SeqJsNPJsd4mOYZ6s= -github.com/aws/aws-sdk-go-v2/credentials v1.19.3 h1:01Ym72hK43hjwDeJUfi1l2oYLXBAOR8gNSZNmXmvuas= -github.com/aws/aws-sdk-go-v2/credentials v1.19.3/go.mod h1:55nWF/Sr9Zvls0bGnWkRxUdhzKqj9uRNlPvgV1vgxKc= -github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.15 h1:utxLraaifrSBkeyII9mIbVwXXWrZdlPO7FIKmyLCEcY= -github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.15/go.mod h1:hW6zjYUDQwfz3icf4g2O41PHi77u10oAzJ84iSzR/lo= -github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.13 h1:s6/ARIdkx/rp5BDSlDZ2BVI9svqkiURlel6muTLo3rw= -github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.13/go.mod h1:1KM+TxVmodlscDCO9fTYyjmDNy5IBSKPapy17XS+Czk= -github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.15 h1:Y5YXgygXwDI5P4RkteB5yF7v35neH7LfJKBG+hzIons= -github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.15/go.mod h1:K+/1EpG42dFSY7CBj+Fruzm8PsCGWTXJ3jdeJ659oGQ= -github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 
v2.7.15 h1:AvltKnW9ewxX2hFmQS0FyJH93aSvJVUEFvXfU+HWtSE= -github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.15/go.mod h1:3I4oCdZdmgrREhU74qS1dK9yZ62yumob+58AbFR4cQA= +github.com/aws/aws-sdk-go-v2/config v1.32.4 h1:gl+DxVuadpkYoaDcWllZqLkhGEbvwyqgNVRTmlaf5PI= +github.com/aws/aws-sdk-go-v2/config v1.32.4/go.mod h1:MBUp9Og/bzMmQHjMwace4aJfyvJeadzXjoTcR/SxLV0= +github.com/aws/aws-sdk-go-v2/credentials v1.19.4 h1:KeIZxHVbGWRLhPvhdPbbi/DtFBHNKm6OsVDuiuFefdQ= +github.com/aws/aws-sdk-go-v2/credentials v1.19.4/go.mod h1:Smw5n0nCZE9PeFEguofdXyt8kUC4JNrkDTfBOioPhFA= +github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.16 h1:80+uETIWS1BqjnN9uJ0dBUaETh+P1XwFy5vwHwK5r9k= +github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.16/go.mod h1:wOOsYuxYuB/7FlnVtzeBYRcjSRtQpAW0hCP7tIULMwo= +github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.14 h1:Ml4JmbZDi48OiAQx7CUst0ZO48ftbfNsWMEYiuhu06Q= +github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.14/go.mod h1:aBxw6KN/hqD598VrxXR6RXgNSWC3q0/aT14VXHD/MSo= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16 h1:rgGwPzb82iBYSvHMHXc8h9mRoOUBZIGFgKb9qniaZZc= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16/go.mod h1:L/UxsGeKpGoIj6DxfhOWHWQ/kGKcd4I1VncE4++IyKA= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16 h1:1jtGzuV7c82xnqOVfx2F0xmJcOw5374L7N6juGW6x6U= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16/go.mod h1:M2E5OQf+XLe+SZGmmpaI2yy+J326aFf6/+54PoxSANc= github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 h1:WKuaxf++XKWlHWu9ECbMlha8WOEGm0OUEZqm4K/Gcfk= github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4/go.mod h1:ZWy7j6v1vWGmPReu0iSGvRiise4YI5SkR3OHKTZ6Wuc= -github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.15 h1:NLYTEyZmVZo0Qh183sC8nC+ydJXOOeIL/qI/sS3PdLY= -github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.15/go.mod h1:Z803iB3B0bc8oJV8zH2PERLRfQUJ2n2BXISpsA4+O1M= +github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.16 h1:CjMzUs78RDDv4ROu3JnJn/Ig1r6ZD7/T2DXLLRpejic= +github.com/aws/aws-sdk-go-v2/internal/v4a 
v1.4.16/go.mod h1:uVW4OLBqbJXSHJYA9svT9BluSvvwbzLQ2Crf6UPzR3c= github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4 h1:0ryTNEdJbzUCEWkVXEXoqlXV72J5keC1GvILMOuD00E= github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4/go.mod h1:HQ4qwNZh32C3CBeO6iJLQlgtMzqeG17ziAA/3KDJFow= -github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.6 h1:P1MU/SuhadGvg2jtviDXPEejU3jBNhoeeAlRadHzvHI= -github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.6/go.mod h1:5KYaMG6wmVKMFBSfWoyG/zH8pWwzQFnKgpoSRlXHKdQ= -github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.15 h1:3/u/4yZOffg5jdNk1sDpOQ4Y+R6Xbh+GzpDrSZjuy3U= -github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.15/go.mod h1:4Zkjq0FKjE78NKjabuM4tRXKFzUJWXgP0ItEZK8l7JU= -github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.15 h1:wsSQ4SVz5YE1crz0Ap7VBZrV4nNqZt4CIBBT8mnwoNc= -github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.15/go.mod h1:I7sditnFGtYMIqPRU1QoHZAUrXkGp4SczmlLwrNPlD0= -github.com/aws/aws-sdk-go-v2/service/s3 v1.93.0 h1:IrbE3B8O9pm3lsg96AXIN5MXX4pECEuExh/A0Du3AuI= -github.com/aws/aws-sdk-go-v2/service/s3 v1.93.0/go.mod h1:/sJLzHtiiZvs6C1RbxS/anSAFwZD6oC6M/kotQzOiLw= +github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.7 h1:DIBqIrJ7hv+e4CmIk2z3pyKT+3B6qVMgRsawHiR3qso= +github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.7/go.mod h1:vLm00xmBke75UmpNvOcZQ/Q30ZFjbczeLFqGx5urmGo= +github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16 h1:oHjJHeUy0ImIV0bsrX0X91GkV5nJAyv1l1CC9lnO0TI= +github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16/go.mod h1:iRSNGgOYmiYwSCXxXaKb9HfOEj40+oTKn8pTxMlYkRM= +github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.16 h1:NSbvS17MlI2lurYgXnCOLvCFX38sBW4eiVER7+kkgsU= +github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.16/go.mod h1:SwT8Tmqd4sA6G1qaGdzWCJN99bUmPGHfRwwq3G5Qb+A= +github.com/aws/aws-sdk-go-v2/service/s3 v1.93.1 
h1:5FhzzN6JmlGQF6c04kDIb5KNGm6KnNdLISNrfivIhHg= +github.com/aws/aws-sdk-go-v2/service/s3 v1.93.1/go.mod h1:79S2BdqCJpScXZA2y+cpZuocWsjGjJINyXnOsf5DTz8= github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.35.4 h1:EKXYJ8kgz4fiqef8xApu7eH0eae2SrVG+oHCLFybMRI= github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.35.4/go.mod h1:yGhDiLKguA3iFJYxbrQkQiNzuy+ddxesSZYWVeeEH5Q= -github.com/aws/aws-sdk-go-v2/service/signin v1.0.3 h1:d/6xOGIllc/XW1lzG9a4AUBMmpLA9PXcQnVPTuHHcik= -github.com/aws/aws-sdk-go-v2/service/signin v1.0.3/go.mod h1:fQ7E7Qj9GiW8y0ClD7cUJk3Bz5Iw8wZkWDHsTe8vDKs= -github.com/aws/aws-sdk-go-v2/service/ssm v1.67.5 h1:YKGgwB1rye0JpV10Bfma3cZdQzX61j2HPWQw+YxWvrQ= -github.com/aws/aws-sdk-go-v2/service/ssm v1.67.5/go.mod h1:eBDSa0vuYB0lalpNxavIw80Q4Ksy08bhHHbT0aWa4tE= -github.com/aws/aws-sdk-go-v2/service/sso v1.30.6 h1:8sTTiw+9yuNXcfWeqKF2x01GqCF49CpP4Z9nKrrk/ts= -github.com/aws/aws-sdk-go-v2/service/sso v1.30.6/go.mod h1:8WYg+Y40Sn3X2hioaaWAAIngndR8n1XFdRPPX+7QBaM= -github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.11 h1:E+KqWoVsSrj1tJ6I/fjDIu5xoS2Zacuu1zT+H7KtiIk= -github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.11/go.mod h1:qyWHz+4lvkXcr3+PoGlGHEI+3DLLiU6/GdrFfMaAhB0= -github.com/aws/aws-sdk-go-v2/service/sts v1.41.3 h1:tzMkjh0yTChUqJDgGkcDdxvZDSrJ/WB6R6ymI5ehqJI= -github.com/aws/aws-sdk-go-v2/service/sts v1.41.3/go.mod h1:T270C0R5sZNLbWUe8ueiAF42XSZxxPocTaGSgs5c/60= +github.com/aws/aws-sdk-go-v2/service/signin v1.0.4 h1:HpI7aMmJ+mm1wkSHIA2t5EaFFv5EFYXePW30p1EIrbQ= +github.com/aws/aws-sdk-go-v2/service/signin v1.0.4/go.mod h1:C5RdGMYGlfM0gYq/tifqgn4EbyX99V15P2V3R+VHbQU= +github.com/aws/aws-sdk-go-v2/service/ssm v1.67.6 h1:n4xdcw+gJvSJHqpzq0Nt/sZ328rbR3TS4A4mKz1kSgo= +github.com/aws/aws-sdk-go-v2/service/ssm v1.67.6/go.mod h1:urlU9nfKJEfi0+8T9luB3f3Y0UnomH/yxI7tTrfH9es= +github.com/aws/aws-sdk-go-v2/service/sso v1.30.7 h1:eYnlt6QxnFINKzwxP5/Ucs1vkG7VT3Iezmvfgc2waUw= +github.com/aws/aws-sdk-go-v2/service/sso v1.30.7/go.mod 
h1:+fWt2UHSb4kS7Pu8y+BMBvJF0EWx+4H0hzNwtDNRTrg= +github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.12 h1:AHDr0DaHIAo8c9t1emrzAlVDFp+iMMKnPdYy6XO4MCE= +github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.12/go.mod h1:GQ73XawFFiWxyWXMHWfhiomvP3tXtdNar/fi8z18sx0= +github.com/aws/aws-sdk-go-v2/service/sts v1.41.4 h1:YCu/iAhQer8WZ66lldyKkpvMyv+HkPufMa4dyT6wils= +github.com/aws/aws-sdk-go-v2/service/sts v1.41.4/go.mod h1:iW40X4QBmUxdP+fZNOpfmkdMZqsovezbAeO+Ubiv2pk= github.com/aws/smithy-go v1.24.0 h1:LpilSUItNPFr1eY85RYgTIg5eIEPtvFbskaFcmmIUnk= github.com/aws/smithy-go v1.24.0/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0= github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k= @@ -1252,8 +1252,8 @@ golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY= golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo= -golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA= +golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw= +golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -1266,8 +1266,8 @@ golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.3.0/go.mod 
h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= -golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I= -golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -1308,8 +1308,8 @@ golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= -golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc= -golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= +golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= @@ -1319,8 +1319,8 @@ golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU= golang.org/x/term v0.16.0/go.mod 
h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY= golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk= -golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU= -golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254= +golang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q= +golang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= @@ -1331,8 +1331,8 @@ golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= -golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM= -golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM= +golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU= +golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI= golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4= diff --git a/internal/aws_utils/aws_utils.go b/internal/aws_utils/aws_utils.go index f6dfec612f..9a48130f63 100644 --- a/internal/aws_utils/aws_utils.go +++ b/internal/aws_utils/aws_utils.go @@ -96,7 +96,7 @@ func LoadAWSConfigWithAuth( baseCfg, err := config.LoadDefaultConfig(ctx, cfgOpts...) 
if err != nil { log.Debug("Failed to load AWS config", "error", err) - return aws.Config{}, fmt.Errorf("%w: %w", errUtils.ErrLoadAwsConfig, err) + return aws.Config{}, fmt.Errorf("%w: %w", errUtils.ErrLoadAWSConfig, err) } log.Debug("Successfully loaded AWS SDK config", "region", baseCfg.Region) diff --git a/internal/exec/stack_processor_cache.go b/internal/exec/stack_processor_cache.go new file mode 100644 index 0000000000..98d4083795 --- /dev/null +++ b/internal/exec/stack_processor_cache.go @@ -0,0 +1,227 @@ +package exec + +import ( + "os" + "sync" + + "github.com/santhosh-tekuri/jsonschema/v5" + + errUtils "github.com/cloudposse/atmos/errors" + m "github.com/cloudposse/atmos/pkg/merge" + "github.com/cloudposse/atmos/pkg/perf" + "github.com/cloudposse/atmos/pkg/schema" +) + +var ( + // File content sync map. + getFileContentSyncMap = sync.Map{} + + // Base component inheritance cache to avoid re-processing the same inheritance chains. + // Cache key: "stack:component:baseComponent" -> BaseComponentConfig. + // No cache invalidation needed - configuration is immutable per command execution. + baseComponentConfigCache = make(map[string]*schema.BaseComponentConfig) + baseComponentConfigCacheMu sync.RWMutex + + // JSON schema compilation cache to avoid re-compiling the same schema for every stack file. + // Cache key: absolute file path to schema file -> compiled schema. + // No cache invalidation needed - schemas are immutable per command execution. + jsonSchemaCache = make(map[string]*jsonschema.Schema) + jsonSchemaCacheMu sync.RWMutex +) + +// deepCopyBaseComponentConfigMaps deep copies all map fields from src to dst. +// Returns an error if any deep copy fails. 
+func deepCopyBaseComponentConfigMaps(dst, src *schema.BaseComponentConfig) error { + var err error + if dst.BaseComponentVars, err = m.DeepCopyMap(src.BaseComponentVars); err != nil { + return err + } + if dst.BaseComponentSettings, err = m.DeepCopyMap(src.BaseComponentSettings); err != nil { + return err + } + if dst.BaseComponentEnv, err = m.DeepCopyMap(src.BaseComponentEnv); err != nil { + return err + } + if dst.BaseComponentAuth, err = m.DeepCopyMap(src.BaseComponentAuth); err != nil { + return err + } + if dst.BaseComponentMetadata, err = m.DeepCopyMap(src.BaseComponentMetadata); err != nil { + return err + } + if dst.BaseComponentProviders, err = m.DeepCopyMap(src.BaseComponentProviders); err != nil { + return err + } + if dst.BaseComponentHooks, err = m.DeepCopyMap(src.BaseComponentHooks); err != nil { + return err + } + if dst.BaseComponentBackendSection, err = m.DeepCopyMap(src.BaseComponentBackendSection); err != nil { + return err + } + if dst.BaseComponentRemoteStateBackendSection, err = m.DeepCopyMap(src.BaseComponentRemoteStateBackendSection); err != nil { + return err + } + return nil +} + +// getCachedBaseComponentConfig retrieves a cached base component config if it exists. +// Returns a deep copy to prevent mutations affecting the cache. +func getCachedBaseComponentConfig(cacheKey string) (*schema.BaseComponentConfig, *[]string, bool) { + defer perf.Track(nil, "exec.getCachedBaseComponentConfig")() + + baseComponentConfigCacheMu.RLock() + defer baseComponentConfigCacheMu.RUnlock() + + cached, found := baseComponentConfigCache[cacheKey] + if !found { + return nil, nil, false + } + + // Deep copy to prevent external mutations from affecting the cache. + // All map fields must be deep copied since they are mutable. 
+	copyConfig := schema.BaseComponentConfig{
+		FinalBaseComponentName:              cached.FinalBaseComponentName,
+		BaseComponentCommand:                cached.BaseComponentCommand,
+		BaseComponentBackendType:            cached.BaseComponentBackendType,
+		BaseComponentRemoteStateBackendType: cached.BaseComponentRemoteStateBackendType,
+	}
+
+	// Deep copy all map fields.
+	if err := deepCopyBaseComponentConfigMaps(&copyConfig, cached); err != nil {
+		// If deep copy fails, return not found to force reprocessing.
+		return nil, nil, false
+	}
+
+	// Deep copy the slice.
+	copyBaseComponents := make([]string, len(cached.ComponentInheritanceChain))
+	copy(copyBaseComponents, cached.ComponentInheritanceChain)
+	copyConfig.ComponentInheritanceChain = copyBaseComponents
+
+	return &copyConfig, &copyBaseComponents, true
+}
+
+// cacheBaseComponentConfig stores a base component config in the cache.
+// Stores a deep copy to prevent external mutations from affecting the cache.
+func cacheBaseComponentConfig(cacheKey string, config *schema.BaseComponentConfig) {
+	defer perf.Track(nil, "exec.cacheBaseComponentConfig")()
+
+	baseComponentConfigCacheMu.Lock()
+	defer baseComponentConfigCacheMu.Unlock()
+
+	// Deep copy to prevent external mutations from affecting the cache.
+	// All map fields must be deep copied since they are mutable.
+	copyConfig := schema.BaseComponentConfig{
+		FinalBaseComponentName:              config.FinalBaseComponentName,
+		BaseComponentCommand:                config.BaseComponentCommand,
+		BaseComponentBackendType:            config.BaseComponentBackendType,
+		BaseComponentRemoteStateBackendType: config.BaseComponentRemoteStateBackendType,
+	}
+
+	// Deep copy all map fields.
+	if err := deepCopyBaseComponentConfigMaps(&copyConfig, config); err != nil {
+		// If deep copy fails, don't cache - return silently.
+		return
+	}
+
+	// Deep copy the slice.
+	copyBaseComponents := make([]string, len(config.ComponentInheritanceChain))
+	copy(copyBaseComponents, config.ComponentInheritanceChain)
+	copyConfig.ComponentInheritanceChain = copyBaseComponents
+
+	baseComponentConfigCache[cacheKey] = &copyConfig
+}
+
+// getCachedCompiledSchema retrieves a cached compiled JSON schema if it exists.
+// The compiled schema is thread-safe for concurrent validation operations.
+func getCachedCompiledSchema(schemaPath string) (*jsonschema.Schema, bool) {
+	defer perf.Track(nil, "exec.getCachedCompiledSchema")()
+
+	jsonSchemaCacheMu.RLock()
+	defer jsonSchemaCacheMu.RUnlock()
+
+	compiledSchema, found := jsonSchemaCache[schemaPath]
+	return compiledSchema, found
+}
+
+// cacheCompiledSchema stores a compiled JSON schema in the cache.
+// The compiled schema is thread-safe and can be safely shared across goroutines.
+func cacheCompiledSchema(schemaPath string, schema *jsonschema.Schema) {
+	defer perf.Track(nil, "exec.cacheCompiledSchema")()
+
+	jsonSchemaCacheMu.Lock()
+	defer jsonSchemaCacheMu.Unlock()
+
+	jsonSchemaCache[schemaPath] = schema
+}
+
+// ClearBaseComponentConfigCache clears the base component config cache.
+// This should be called between independent operations (like tests) to ensure fresh processing.
+func ClearBaseComponentConfigCache() {
+	defer perf.Track(nil, "exec.ClearBaseComponentConfigCache")()
+
+	baseComponentConfigCacheMu.Lock()
+	defer baseComponentConfigCacheMu.Unlock()
+	baseComponentConfigCache = make(map[string]*schema.BaseComponentConfig)
+}
+
+// ClearJsonSchemaCache clears the JSON schema cache.
+// This should be called between independent operations (like tests) to ensure fresh processing.
+func ClearJsonSchemaCache() {
+	defer perf.Track(nil, "exec.ClearJsonSchemaCache")()
+
+	jsonSchemaCacheMu.Lock()
+	defer jsonSchemaCacheMu.Unlock()
+	jsonSchemaCache = make(map[string]*jsonschema.Schema)
+}
+
+// ClearFileContentCache clears the file content cache.
+// This should be called between independent operations (like tests) to ensure fresh processing. +func ClearFileContentCache() { + defer perf.Track(nil, "exec.ClearFileContentCache")() + + getFileContentSyncMap.Range(func(key, value interface{}) bool { + getFileContentSyncMap.Delete(key) + return true + }) +} + +// GetFileContent tries to read and return the file content from the sync map if it exists in the map. +// Otherwise, it reads the file, stores its content in the map, and returns the content. +func GetFileContent(filePath string) (string, error) { + defer perf.Track(nil, "exec.GetFileContent")() + + if existingContent, found := getFileContentSyncMap.Load(filePath); found { + switch v := existingContent.(type) { + case []byte: + return string(v), nil + case string: + return v, nil + } + } + + content, err := os.ReadFile(filePath) + if err != nil { + return "", errUtils.Build(errUtils.ErrReadFile). + WithCause(err). + WithContext("path", filePath). + Err() + } + getFileContentSyncMap.Store(filePath, content) + + return string(content), nil +} + +// GetFileContentWithoutCache reads file content without using the cache. +// Used when provenance tracking is enabled to ensure fresh reads with position tracking. +func GetFileContentWithoutCache(filePath string) (string, error) { + defer perf.Track(nil, "exec.GetFileContentWithoutCache")() + + content, err := os.ReadFile(filePath) + if err != nil { + return "", errUtils.Build(errUtils.ErrReadFile). + WithCause(err). + WithContext("path", filePath). 
+ Err() + } + + return string(content), nil +} diff --git a/internal/exec/stack_processor_cache_test.go b/internal/exec/stack_processor_cache_test.go new file mode 100644 index 0000000000..170d9141cc --- /dev/null +++ b/internal/exec/stack_processor_cache_test.go @@ -0,0 +1,336 @@ +package exec + +import ( + "os" + "path/filepath" + "sync" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + "github.com/cloudposse/atmos/pkg/schema" +) + +// TestClearBaseComponentConfigCache tests that the cache clearing function works correctly. +func TestClearBaseComponentConfigCache(t *testing.T) { + // First, populate the cache with a test entry. + testConfig := &schema.BaseComponentConfig{ + FinalBaseComponentName: "test-component", + BaseComponentVars: map[string]any{"key": "value"}, + } + cacheBaseComponentConfig("test:component:base", testConfig) + + // Verify it was cached. + _, _, found := getCachedBaseComponentConfig("test:component:base") + assert.True(t, found, "config should be cached before clearing") + + // Clear the cache. + ClearBaseComponentConfigCache() + + // Verify it's gone. + _, _, found = getCachedBaseComponentConfig("test:component:base") + assert.False(t, found, "config should not be cached after clearing") +} + +// TestClearJsonSchemaCache tests that the JSON schema cache clearing works correctly. +func TestClearJsonSchemaCache(t *testing.T) { + // Clear the cache first to start fresh. + ClearJsonSchemaCache() + + // Verify a non-existent entry is not found. + _, found := getCachedCompiledSchema("/path/to/schema.json") + assert.False(t, found, "schema should not be cached") + + // Clear again (should be safe even if empty). + ClearJsonSchemaCache() +} + +// TestClearFileContentCache tests that the file content cache clearing works correctly. +func TestClearFileContentCache(t *testing.T) { + // Create a temp file to cache. 
+ tmpDir := t.TempDir() + tmpFile := filepath.Join(tmpDir, "test.yaml") + err := os.WriteFile(tmpFile, []byte("test: content"), 0o644) + require.NoError(t, err) + + // Read it to populate the cache. + content1, err := GetFileContent(tmpFile) + require.NoError(t, err) + assert.Equal(t, "test: content", content1) + + // Clear the cache. + ClearFileContentCache() + + // Modify the file. + err = os.WriteFile(tmpFile, []byte("modified: content"), 0o644) + require.NoError(t, err) + + // Read again - should get new content since cache was cleared. + content2, err := GetFileContent(tmpFile) + require.NoError(t, err) + assert.Equal(t, "modified: content", content2) +} + +// TestGetFileContent tests file content reading and caching. +func TestGetFileContent(t *testing.T) { + // Clear cache to start fresh. + ClearFileContentCache() + + // Create a temp file. + tmpDir := t.TempDir() + tmpFile := filepath.Join(tmpDir, "test.yaml") + err := os.WriteFile(tmpFile, []byte("test: content\nmore: data"), 0o644) + require.NoError(t, err) + + // First read should read from disk. + content1, err := GetFileContent(tmpFile) + require.NoError(t, err) + assert.Equal(t, "test: content\nmore: data", content1) + + // Modify the file on disk. + err = os.WriteFile(tmpFile, []byte("changed: content"), 0o644) + require.NoError(t, err) + + // Second read should return cached content (not the changed file). + content2, err := GetFileContent(tmpFile) + require.NoError(t, err) + assert.Equal(t, "test: content\nmore: data", content2, "should return cached content") + + // Clean up. + ClearFileContentCache() +} + +// TestGetFileContentNonExistent tests reading a non-existent file. +func TestGetFileContentNonExistent(t *testing.T) { + ClearFileContentCache() + + _, err := GetFileContent("/nonexistent/path/file.yaml") + assert.Error(t, err, "should return error for non-existent file") +} + +// TestGetFileContentWithoutCache tests uncached file reading. 
+func TestGetFileContentWithoutCache(t *testing.T) { + // Create a temp file. + tmpDir := t.TempDir() + tmpFile := filepath.Join(tmpDir, "test.yaml") + err := os.WriteFile(tmpFile, []byte("original: content"), 0o644) + require.NoError(t, err) + + // First read. + content1, err := GetFileContentWithoutCache(tmpFile) + require.NoError(t, err) + assert.Equal(t, "original: content", content1) + + // Modify the file. + err = os.WriteFile(tmpFile, []byte("modified: content"), 0o644) + require.NoError(t, err) + + // Second read should see the modification (no caching). + content2, err := GetFileContentWithoutCache(tmpFile) + require.NoError(t, err) + assert.Equal(t, "modified: content", content2, "should always read fresh content") +} + +// TestGetFileContentWithoutCacheNonExistent tests uncached reading of non-existent file. +func TestGetFileContentWithoutCacheNonExistent(t *testing.T) { + _, err := GetFileContentWithoutCache("/nonexistent/path/file.yaml") + assert.Error(t, err, "should return error for non-existent file") +} + +// TestCacheBaseComponentConfig tests caching of base component configurations. +func TestCacheBaseComponentConfig(t *testing.T) { + ClearBaseComponentConfigCache() + + // Create a config with all fields populated. 
+ config := &schema.BaseComponentConfig{ + FinalBaseComponentName: "final-base", + BaseComponentCommand: "terraform", + BaseComponentBackendType: "s3", + BaseComponentRemoteStateBackendType: "s3", + BaseComponentVars: map[string]any{ + "var1": "value1", + "var2": map[string]any{"nested": "value"}, + }, + BaseComponentSettings: map[string]any{ + "setting1": true, + }, + BaseComponentEnv: map[string]any{ + "ENV_VAR": "value", + }, + BaseComponentAuth: map[string]any{ + "auth_type": "aws", + }, + BaseComponentMetadata: map[string]any{ + "component_type": "terraform", + }, + BaseComponentProviders: map[string]any{ + "aws": map[string]any{"region": "us-east-1"}, + }, + BaseComponentHooks: map[string]any{ + "pre_plan": []any{"echo hello"}, + }, + BaseComponentBackendSection: map[string]any{ + "bucket": "my-bucket", + }, + BaseComponentRemoteStateBackendSection: map[string]any{ + "bucket": "state-bucket", + }, + ComponentInheritanceChain: []string{"base1", "base2"}, + } + + // Cache the config. + cacheKey := "stack:component:base" + cacheBaseComponentConfig(cacheKey, config) + + // Retrieve and verify. + cached, baseComponents, found := getCachedBaseComponentConfig(cacheKey) + require.True(t, found, "config should be found in cache") + require.NotNil(t, cached) + require.NotNil(t, baseComponents) + + // Verify all fields. 
+ assert.Equal(t, "final-base", cached.FinalBaseComponentName) + assert.Equal(t, "terraform", cached.BaseComponentCommand) + assert.Equal(t, "s3", cached.BaseComponentBackendType) + assert.Equal(t, "s3", cached.BaseComponentRemoteStateBackendType) + assert.Equal(t, "value1", cached.BaseComponentVars["var1"]) + assert.Equal(t, true, cached.BaseComponentSettings["setting1"]) + assert.Equal(t, "value", cached.BaseComponentEnv["ENV_VAR"]) + assert.Equal(t, "aws", cached.BaseComponentAuth["auth_type"]) + assert.Equal(t, "terraform", cached.BaseComponentMetadata["component_type"]) + assert.Equal(t, "my-bucket", cached.BaseComponentBackendSection["bucket"]) + assert.Equal(t, "state-bucket", cached.BaseComponentRemoteStateBackendSection["bucket"]) + assert.Equal(t, []string{"base1", "base2"}, cached.ComponentInheritanceChain) + assert.Equal(t, []string{"base1", "base2"}, *baseComponents) + + // Clean up. + ClearBaseComponentConfigCache() +} + +// TestCacheBaseComponentConfigDeepCopy tests that cached configs are deep copied. +func TestCacheBaseComponentConfigDeepCopy(t *testing.T) { + ClearBaseComponentConfigCache() + + // Create a config with mutable nested data. + originalVars := map[string]any{ + "key": "original", + } + originalMetadata := map[string]any{ + "type": "original", + } + config := &schema.BaseComponentConfig{ + FinalBaseComponentName: "test", + BaseComponentVars: originalVars, + BaseComponentMetadata: originalMetadata, + ComponentInheritanceChain: []string{"base1"}, + } + + // Cache it. + cacheBaseComponentConfig("test-key", config) + + // Modify the original after caching. + originalVars["key"] = "modified" + originalMetadata["type"] = "modified" + config.ComponentInheritanceChain[0] = "modified-base" + + // Retrieve from cache. + cached, _, found := getCachedBaseComponentConfig("test-key") + require.True(t, found) + + // Cached values should NOT be affected by modifications to original. 
+ assert.Equal(t, "original", cached.BaseComponentVars["key"], "cached vars should not be modified") + assert.Equal(t, "original", cached.BaseComponentMetadata["type"], "cached metadata should not be modified") + + // Now modify the cached value. + cached.BaseComponentVars["key"] = "cached-modified" + cached.BaseComponentMetadata["type"] = "cached-modified" + + // Retrieve again and verify it's still the original. + cached2, _, found := getCachedBaseComponentConfig("test-key") + require.True(t, found) + assert.Equal(t, "original", cached2.BaseComponentVars["key"], "cache should return independent copies") + assert.Equal(t, "original", cached2.BaseComponentMetadata["type"], "cache should return independent copies for metadata") + + // Clean up. + ClearBaseComponentConfigCache() +} + +// TestGetCachedBaseComponentConfigNotFound tests cache miss behavior. +func TestGetCachedBaseComponentConfigNotFound(t *testing.T) { + ClearBaseComponentConfigCache() + + cached, baseComponents, found := getCachedBaseComponentConfig("nonexistent-key") + assert.False(t, found) + assert.Nil(t, cached) + assert.Nil(t, baseComponents) +} + +// TestConcurrentCacheAccess tests thread-safety of cache operations. +func TestConcurrentCacheAccess(t *testing.T) { + ClearBaseComponentConfigCache() + ClearFileContentCache() + + // Create a temp file for file content cache testing. + tmpDir := t.TempDir() + tmpFile := filepath.Join(tmpDir, "concurrent.yaml") + err := os.WriteFile(tmpFile, []byte("concurrent: test"), 0o644) + require.NoError(t, err) + + var wg sync.WaitGroup + numGoroutines := 50 + + // Test concurrent base component config cache access. + for i := 0; i < numGoroutines; i++ { + wg.Add(1) + go func(id int) { + defer wg.Done() + + config := &schema.BaseComponentConfig{ + FinalBaseComponentName: "component", + BaseComponentVars: map[string]any{"id": id}, + } + cacheKey := "stack:component:base" + + // Cache and retrieve. 
+ cacheBaseComponentConfig(cacheKey, config) + getCachedBaseComponentConfig(cacheKey) + }(i) + } + + // Test concurrent file content cache access. + for i := 0; i < numGoroutines; i++ { + wg.Add(1) + go func() { + defer wg.Done() + _, _ = GetFileContent(tmpFile) + }() + } + + wg.Wait() + + // Clean up. + ClearBaseComponentConfigCache() + ClearFileContentCache() +} + +// TestCacheCompiledSchemaBasic tests JSON schema caching mechanics. +func TestCacheCompiledSchemaBasic(t *testing.T) { + ClearJsonSchemaCache() + + // Verify not found initially. + _, found := getCachedCompiledSchema("/test/schema.json") + assert.False(t, found) + + // Note: We can't easily test with real compiled schemas without actual schema files, + // but we can verify the cache mechanism works with nil values. + cacheCompiledSchema("/test/schema.json", nil) + + // Should be found now (even if nil). + cached, found := getCachedCompiledSchema("/test/schema.json") + assert.True(t, found) + assert.Nil(t, cached) + + // Clean up. + ClearJsonSchemaCache() +} diff --git a/internal/exec/stack_processor_provenance.go b/internal/exec/stack_processor_provenance.go new file mode 100644 index 0000000000..eb79292cae --- /dev/null +++ b/internal/exec/stack_processor_provenance.go @@ -0,0 +1,146 @@ +package exec + +import ( + "sync" + + log "github.com/cloudposse/atmos/pkg/logger" + m "github.com/cloudposse/atmos/pkg/merge" + "github.com/cloudposse/atmos/pkg/perf" + "github.com/cloudposse/atmos/pkg/schema" + u "github.com/cloudposse/atmos/pkg/utils" +) + +var ( + // The mergeContexts stores MergeContexts keyed by stack file path when provenance tracking is enabled. + // This is used to capture provenance data for the describe component command. + mergeContexts = make(map[string]*m.MergeContext) + mergeContextsMu sync.RWMutex + + // Deprecated: Use SetMergeContextForStack/GetMergeContextForStack instead. 
+ lastMergeContext *m.MergeContext + lastMergeContextMu sync.RWMutex +) + +// SetMergeContextForStack stores the merge context for a specific stack file. +func SetMergeContextForStack(stackFile string, ctx *m.MergeContext) { + defer perf.Track(nil, "exec.SetMergeContextForStack")() + + mergeContextsMu.Lock() + defer mergeContextsMu.Unlock() + mergeContexts[stackFile] = ctx +} + +// GetMergeContextForStack retrieves the merge context for a specific stack file. +func GetMergeContextForStack(stackFile string) *m.MergeContext { + defer perf.Track(nil, "exec.GetMergeContextForStack")() + + mergeContextsMu.RLock() + defer mergeContextsMu.RUnlock() + return mergeContexts[stackFile] +} + +// ClearMergeContexts clears all stored merge contexts. +func ClearMergeContexts() { + defer perf.Track(nil, "exec.ClearMergeContexts")() + + mergeContextsMu.Lock() + defer mergeContextsMu.Unlock() + mergeContexts = make(map[string]*m.MergeContext) +} + +// GetAllMergeContexts returns all stored merge contexts. +// Returns a map of stack file paths to their merge contexts. +func GetAllMergeContexts() map[string]*m.MergeContext { + defer perf.Track(nil, "exec.GetAllMergeContexts")() + + mergeContextsMu.RLock() + defer mergeContextsMu.RUnlock() + + // Return a copy to prevent external modifications. + result := make(map[string]*m.MergeContext, len(mergeContexts)) + for k, v := range mergeContexts { + result[k] = v + } + return result +} + +// SetLastMergeContext stores the merge context for later retrieval. +// Deprecated: Use SetMergeContextForStack instead. +func SetLastMergeContext(ctx *m.MergeContext) { + defer perf.Track(nil, "exec.SetLastMergeContext")() + + lastMergeContextMu.Lock() + defer lastMergeContextMu.Unlock() + lastMergeContext = ctx +} + +// GetLastMergeContext retrieves the last stored merge context. +// Deprecated: Use GetMergeContextForStack instead. 
+func GetLastMergeContext() *m.MergeContext { + defer perf.Track(nil, "exec.GetLastMergeContext")() + + lastMergeContextMu.RLock() + defer lastMergeContextMu.RUnlock() + return lastMergeContext +} + +// ClearLastMergeContext clears the stored merge context. +// Deprecated: Use ClearMergeContexts instead. +func ClearLastMergeContext() { + defer perf.Track(nil, "exec.ClearLastMergeContext")() + + lastMergeContextMu.Lock() + defer lastMergeContextMu.Unlock() + lastMergeContext = nil +} + +// processImportProvenanceTracking handles storing merge context and updating import chains. +// It stores the merge context for imported files and adds imported files to the parent's import chain. +func processImportProvenanceTracking( + atmosConfig *schema.AtmosConfiguration, + result *importFileResult, + mergeContext *m.MergeContext, +) { + defer perf.Track(atmosConfig, "exec.processImportProvenanceTracking")() + + if atmosConfig == nil || !atmosConfig.TrackProvenance { + return + } + + if result.mergeContext == nil { + log.Trace("Import has nil merge context", "import", result.importRelativePathWithoutExt) + return + } + + if !result.mergeContext.IsProvenanceEnabled() { + log.Trace("Import has merge context but provenance not enabled", "import", result.importRelativePathWithoutExt) + return + } + + log.Trace("Storing merge context for import", "import", result.importRelativePathWithoutExt, "chain_length", len(result.mergeContext.ImportChain)) + SetMergeContextForStack(result.importRelativePathWithoutExt, result.mergeContext) + + // Add imported files to parent merge context's import chain. + updateParentImportChain(result.mergeContext, mergeContext) +} + +// updateParentImportChain adds imported files from the child's import chain to the parent's chain. 
+func updateParentImportChain(childContext, parentContext *m.MergeContext) { + defer perf.Track(nil, "exec.updateParentImportChain")() + + if parentContext == nil { + return + } + + for i, importedFile := range childContext.ImportChain { + if u.SliceContainsString(parentContext.ImportChain, importedFile) { + continue + } + parentContext.ImportChain = append(parentContext.ImportChain, importedFile) + if i == 0 { + log.Trace("Added import to parent import chain", "file", importedFile) + } else { + log.Trace("Added nested import to parent import chain", "file", importedFile) + } + } +} diff --git a/internal/exec/stack_processor_provenance_test.go b/internal/exec/stack_processor_provenance_test.go new file mode 100644 index 0000000000..81d2d9eda7 --- /dev/null +++ b/internal/exec/stack_processor_provenance_test.go @@ -0,0 +1,366 @@ +package exec + +import ( + "sync" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + m "github.com/cloudposse/atmos/pkg/merge" + "github.com/cloudposse/atmos/pkg/schema" +) + +// TestSetAndGetMergeContextForStack tests storing and retrieving merge contexts. +func TestSetAndGetMergeContextForStack(t *testing.T) { + // Clear to start fresh. + ClearMergeContexts() + + // Create a merge context. + ctx := m.NewMergeContext() + ctx.EnableProvenance() + ctx.ImportChain = []string{"file1.yaml", "file2.yaml"} + + // Store it. + SetMergeContextForStack("my-stack", ctx) + + // Retrieve it. + retrieved := GetMergeContextForStack("my-stack") + require.NotNil(t, retrieved) + assert.Equal(t, []string{"file1.yaml", "file2.yaml"}, retrieved.ImportChain) + assert.True(t, retrieved.IsProvenanceEnabled()) + + // Clean up. + ClearMergeContexts() +} + +// TestGetMergeContextForStackNotFound tests retrieving a non-existent context. 
+func TestGetMergeContextForStackNotFound(t *testing.T) { + ClearMergeContexts() + + retrieved := GetMergeContextForStack("nonexistent-stack") + assert.Nil(t, retrieved) +} + +// TestClearMergeContexts tests clearing all stored merge contexts. +func TestClearMergeContexts(t *testing.T) { + // Store multiple contexts. + ctx1 := m.NewMergeContext() + ctx2 := m.NewMergeContext() + + SetMergeContextForStack("stack1", ctx1) + SetMergeContextForStack("stack2", ctx2) + + // Verify they exist. + assert.NotNil(t, GetMergeContextForStack("stack1")) + assert.NotNil(t, GetMergeContextForStack("stack2")) + + // Clear all. + ClearMergeContexts() + + // Verify they're gone. + assert.Nil(t, GetMergeContextForStack("stack1")) + assert.Nil(t, GetMergeContextForStack("stack2")) +} + +// TestGetAllMergeContexts tests retrieving all stored merge contexts. +func TestGetAllMergeContexts(t *testing.T) { + ClearMergeContexts() + + // Store multiple contexts. + ctx1 := m.NewMergeContext() + ctx1.ImportChain = []string{"chain1"} + ctx2 := m.NewMergeContext() + ctx2.ImportChain = []string{"chain2"} + + SetMergeContextForStack("stack1", ctx1) + SetMergeContextForStack("stack2", ctx2) + + // Get all contexts. + allContexts := GetAllMergeContexts() + require.Len(t, allContexts, 2) + + assert.Equal(t, []string{"chain1"}, allContexts["stack1"].ImportChain) + assert.Equal(t, []string{"chain2"}, allContexts["stack2"].ImportChain) + + // Verify it returns a copy (modifications don't affect original). + allContexts["stack1"] = m.NewMergeContext() + allContexts["stack1"].ImportChain = []string{"modified"} + + // Original should be unchanged. + original := GetMergeContextForStack("stack1") + assert.Equal(t, []string{"chain1"}, original.ImportChain) + + // Clean up. + ClearMergeContexts() +} + +// TestGetAllMergeContextsEmpty tests retrieving when no contexts exist. 
+func TestGetAllMergeContextsEmpty(t *testing.T) { + ClearMergeContexts() + + allContexts := GetAllMergeContexts() + assert.NotNil(t, allContexts, "should return empty map, not nil") + assert.Len(t, allContexts, 0) +} + +// TestSetAndGetLastMergeContext tests the deprecated last merge context functions. +func TestSetAndGetLastMergeContext(t *testing.T) { + ClearLastMergeContext() + + // Initially should be nil. + assert.Nil(t, GetLastMergeContext()) + + // Set a context. + ctx := m.NewMergeContext() + ctx.ImportChain = []string{"last-file.yaml"} + + SetLastMergeContext(ctx) + + // Retrieve it. + retrieved := GetLastMergeContext() + require.NotNil(t, retrieved) + assert.Equal(t, []string{"last-file.yaml"}, retrieved.ImportChain) + + // Clear it. + ClearLastMergeContext() + assert.Nil(t, GetLastMergeContext()) +} + +// TestConcurrentMergeContextAccess tests thread-safety of merge context operations. +func TestConcurrentMergeContextAccess(t *testing.T) { + ClearMergeContexts() + ClearLastMergeContext() + + var wg sync.WaitGroup + numGoroutines := 50 + + // Test concurrent SetMergeContextForStack. + for i := 0; i < numGoroutines; i++ { + wg.Add(1) + go func() { + defer wg.Done() + ctx := m.NewMergeContext() + ctx.ImportChain = []string{"file.yaml"} + SetMergeContextForStack("concurrent-stack", ctx) + }() + } + + // Test concurrent GetMergeContextForStack. + for i := 0; i < numGoroutines; i++ { + wg.Add(1) + go func() { + defer wg.Done() + GetMergeContextForStack("concurrent-stack") + }() + } + + // Test concurrent GetAllMergeContexts. + for i := 0; i < numGoroutines; i++ { + wg.Add(1) + go func() { + defer wg.Done() + GetAllMergeContexts() + }() + } + + // Test concurrent last merge context operations. + for i := 0; i < numGoroutines; i++ { + wg.Add(1) + go func() { + defer wg.Done() + ctx := m.NewMergeContext() + SetLastMergeContext(ctx) + GetLastMergeContext() + }() + } + + wg.Wait() + + // Clean up. 
+ ClearMergeContexts() + ClearLastMergeContext() +} + +// TestProcessImportProvenanceTracking tests the provenance tracking helper function. +func TestProcessImportProvenanceTracking(t *testing.T) { + ClearMergeContexts() + + t.Run("nil atmosConfig returns early", func(t *testing.T) { + result := &importFileResult{ + importRelativePathWithoutExt: "test-import", + } + // Should not panic with nil atmosConfig. + processImportProvenanceTracking(nil, result, nil) + }) + + t.Run("provenance disabled returns early", func(t *testing.T) { + atmosConfig := &schema.AtmosConfiguration{ + TrackProvenance: false, + } + result := &importFileResult{ + importRelativePathWithoutExt: "test-import", + } + // Should return early without storing anything. + processImportProvenanceTracking(atmosConfig, result, nil) + assert.Nil(t, GetMergeContextForStack("test-import")) + }) + + t.Run("nil result merge context returns early", func(t *testing.T) { + atmosConfig := &schema.AtmosConfiguration{ + TrackProvenance: true, + } + result := &importFileResult{ + importRelativePathWithoutExt: "test-import", + mergeContext: nil, + } + processImportProvenanceTracking(atmosConfig, result, nil) + assert.Nil(t, GetMergeContextForStack("test-import")) + }) + + t.Run("provenance not enabled on context returns early", func(t *testing.T) { + atmosConfig := &schema.AtmosConfiguration{ + TrackProvenance: true, + } + ctx := m.NewMergeContext() + // Don't enable provenance on context. 
+ result := &importFileResult{ + importRelativePathWithoutExt: "test-import", + mergeContext: ctx, + } + processImportProvenanceTracking(atmosConfig, result, nil) + assert.Nil(t, GetMergeContextForStack("test-import")) + }) + + t.Run("stores merge context when provenance enabled", func(t *testing.T) { + ClearMergeContexts() + + atmosConfig := &schema.AtmosConfiguration{ + TrackProvenance: true, + } + ctx := m.NewMergeContext() + ctx.EnableProvenance() + ctx.ImportChain = []string{"nested-file.yaml"} + + result := &importFileResult{ + importRelativePathWithoutExt: "stored-import", + mergeContext: ctx, + } + + processImportProvenanceTracking(atmosConfig, result, nil) + + // Should be stored. + stored := GetMergeContextForStack("stored-import") + require.NotNil(t, stored) + assert.Equal(t, []string{"nested-file.yaml"}, stored.ImportChain) + + ClearMergeContexts() + }) + + t.Run("updates parent import chain", func(t *testing.T) { + ClearMergeContexts() + + atmosConfig := &schema.AtmosConfiguration{ + TrackProvenance: true, + } + + childCtx := m.NewMergeContext() + childCtx.EnableProvenance() + childCtx.ImportChain = []string{"child-file.yaml", "nested-child.yaml"} + + parentCtx := m.NewMergeContext() + parentCtx.EnableProvenance() + parentCtx.ImportChain = []string{"parent-file.yaml"} + + result := &importFileResult{ + importRelativePathWithoutExt: "child-import", + mergeContext: childCtx, + } + + processImportProvenanceTracking(atmosConfig, result, parentCtx) + + // Parent should now have child's imports added. + assert.Contains(t, parentCtx.ImportChain, "parent-file.yaml") + assert.Contains(t, parentCtx.ImportChain, "child-file.yaml") + assert.Contains(t, parentCtx.ImportChain, "nested-child.yaml") + + ClearMergeContexts() + }) +} + +// TestUpdateParentImportChain tests the import chain update helper function. 
+func TestUpdateParentImportChain(t *testing.T) { + t.Run("nil parent context does nothing", func(t *testing.T) { + childCtx := m.NewMergeContext() + childCtx.ImportChain = []string{"file1.yaml"} + + // Should not panic with nil parent. + updateParentImportChain(childCtx, nil) + }) + + t.Run("adds child imports to parent", func(t *testing.T) { + childCtx := m.NewMergeContext() + childCtx.ImportChain = []string{"child1.yaml", "child2.yaml"} + + parentCtx := m.NewMergeContext() + parentCtx.ImportChain = []string{"parent.yaml"} + + updateParentImportChain(childCtx, parentCtx) + + assert.Equal(t, []string{"parent.yaml", "child1.yaml", "child2.yaml"}, parentCtx.ImportChain) + }) + + t.Run("avoids duplicates", func(t *testing.T) { + childCtx := m.NewMergeContext() + childCtx.ImportChain = []string{"shared.yaml", "child.yaml"} + + parentCtx := m.NewMergeContext() + parentCtx.ImportChain = []string{"parent.yaml", "shared.yaml"} + + updateParentImportChain(childCtx, parentCtx) + + // "shared.yaml" should appear only once. + count := 0 + for _, f := range parentCtx.ImportChain { + if f == "shared.yaml" { + count++ + } + } + assert.Equal(t, 1, count, "shared.yaml should appear only once") + assert.Contains(t, parentCtx.ImportChain, "child.yaml") + }) + + t.Run("empty child chain does nothing", func(t *testing.T) { + childCtx := m.NewMergeContext() + childCtx.ImportChain = []string{} + + parentCtx := m.NewMergeContext() + parentCtx.ImportChain = []string{"parent.yaml"} + + updateParentImportChain(childCtx, parentCtx) + + assert.Equal(t, []string{"parent.yaml"}, parentCtx.ImportChain) + }) +} + +// TestMergeContextOverwrite tests that setting a context overwrites the previous one. +func TestMergeContextOverwrite(t *testing.T) { + ClearMergeContexts() + + // Set first context. + ctx1 := m.NewMergeContext() + ctx1.ImportChain = []string{"first.yaml"} + SetMergeContextForStack("my-stack", ctx1) + + // Set second context with same key. 
+ ctx2 := m.NewMergeContext() + ctx2.ImportChain = []string{"second.yaml"} + SetMergeContextForStack("my-stack", ctx2) + + // Should have second context. + retrieved := GetMergeContextForStack("my-stack") + require.NotNil(t, retrieved) + assert.Equal(t, []string{"second.yaml"}, retrieved.ImportChain) + + ClearMergeContexts() +} diff --git a/internal/exec/stack_processor_template_test.go b/internal/exec/stack_processor_template_test.go index ea9f6c06f7..29d050c127 100644 --- a/internal/exec/stack_processor_template_test.go +++ b/internal/exec/stack_processor_template_test.go @@ -151,7 +151,7 @@ metadata: require.NoError(t, err) // Process the file - result, _, _, _, _, _, _, err := ProcessYAMLConfigFileWithContext( + result, _, _, _, _, _, _, _, err := ProcessYAMLConfigFileWithContext( atmosConfig, tempDir, filePath, @@ -317,7 +317,7 @@ components: // Process the main stack file stackPath := filepath.Join(tempDir, "stack.yaml") - result, _, _, _, _, _, _, err := ProcessYAMLConfigFileWithContext( //nolint:dogsled + result, _, _, _, _, _, _, _, err := ProcessYAMLConfigFileWithContext( //nolint:dogsled atmosConfig, tempDir, stackPath, @@ -387,7 +387,7 @@ components: } // Test with skipTemplatesProcessingInImports = true - result, _, _, _, _, _, _, err := ProcessYAMLConfigFileWithContext( //nolint:dogsled + result, _, _, _, _, _, _, _, err := ProcessYAMLConfigFileWithContext( //nolint:dogsled atmosConfig, tempDir, templateFile, @@ -419,7 +419,7 @@ components: assert.Equal(t, 10, vars["value"]) // Test with skipTemplatesProcessingInImports = false - result2, _, _, _, _, _, _, err2 := ProcessYAMLConfigFileWithContext( //nolint:dogsled + result2, _, _, _, _, _, _, _, err2 := ProcessYAMLConfigFileWithContext( //nolint:dogsled atmosConfig, tempDir, templateFile, diff --git a/internal/exec/stack_processor_utils.go b/internal/exec/stack_processor_utils.go index 82f827775f..2a3b38882f 100644 --- a/internal/exec/stack_processor_utils.go +++ 
b/internal/exec/stack_processor_utils.go @@ -16,258 +16,15 @@ import ( errUtils "github.com/cloudposse/atmos/errors" cfg "github.com/cloudposse/atmos/pkg/config" + log "github.com/cloudposse/atmos/pkg/logger" m "github.com/cloudposse/atmos/pkg/merge" "github.com/cloudposse/atmos/pkg/perf" "github.com/cloudposse/atmos/pkg/schema" u "github.com/cloudposse/atmos/pkg/utils" ) -var ( - // File content sync map. - getFileContentSyncMap = sync.Map{} - - // Mutex to serialize writes to importsConfig maps during parallel import processing. - importsConfigLock = &sync.Mutex{} - - // The mergeContexts stores MergeContexts keyed by stack file path when provenance tracking is enabled. - // This is used to capture provenance data for the describe component command. - mergeContexts = make(map[string]*m.MergeContext) - mergeContextsMu sync.RWMutex - - // Deprecated: Use SetMergeContextForStack/GetMergeContextForStack instead. - lastMergeContext *m.MergeContext - lastMergeContextMu sync.RWMutex - - // Base component inheritance cache to avoid re-processing the same inheritance chains. - // Cache key: "stack:component:baseComponent" -> BaseComponentConfig. - // No cache invalidation needed - configuration is immutable per command execution. - baseComponentConfigCache = make(map[string]*schema.BaseComponentConfig) - baseComponentConfigCacheMu sync.RWMutex - - // JSON schema compilation cache to avoid re-compiling the same schema for every stack file. - // Cache key: absolute file path to schema file -> compiled schema. - // No cache invalidation needed - schemas are immutable per command execution. - jsonSchemaCache = make(map[string]*jsonschema.Schema) - jsonSchemaCacheMu sync.RWMutex -) - -// SetMergeContextForStack stores the merge context for a specific stack file. 
-func SetMergeContextForStack(stackFile string, ctx *m.MergeContext) { - defer perf.Track(nil, "exec.SetMergeContextForStack")() - - mergeContextsMu.Lock() - defer mergeContextsMu.Unlock() - mergeContexts[stackFile] = ctx -} - -// GetMergeContextForStack retrieves the merge context for a specific stack file. -func GetMergeContextForStack(stackFile string) *m.MergeContext { - defer perf.Track(nil, "exec.GetMergeContextForStack")() - - mergeContextsMu.RLock() - defer mergeContextsMu.RUnlock() - return mergeContexts[stackFile] -} - -// ClearMergeContexts clears all stored merge contexts. -func ClearMergeContexts() { - defer perf.Track(nil, "exec.ClearMergeContexts")() - - mergeContextsMu.Lock() - defer mergeContextsMu.Unlock() - mergeContexts = make(map[string]*m.MergeContext) -} - -// SetLastMergeContext stores the merge context for later retrieval. -// Deprecated: Use SetMergeContextForStack instead. -func SetLastMergeContext(ctx *m.MergeContext) { - defer perf.Track(nil, "exec.SetLastMergeContext")() - - lastMergeContextMu.Lock() - defer lastMergeContextMu.Unlock() - lastMergeContext = ctx -} - -// GetLastMergeContext retrieves the last stored merge context. -// Deprecated: Use GetMergeContextForStack instead. -func GetLastMergeContext() *m.MergeContext { - defer perf.Track(nil, "exec.GetLastMergeContext")() - - lastMergeContextMu.RLock() - defer lastMergeContextMu.RUnlock() - return lastMergeContext -} - -// ClearLastMergeContext clears the stored merge context. -// Deprecated: Use ClearMergeContexts instead. -func ClearLastMergeContext() { - defer perf.Track(nil, "exec.ClearLastMergeContext")() - - lastMergeContextMu.Lock() - defer lastMergeContextMu.Unlock() - lastMergeContext = nil -} - -// getCachedBaseComponentConfig retrieves a cached base component config if it exists. -// Returns a deep copy to prevent mutations affecting the cache. 
-func getCachedBaseComponentConfig(cacheKey string) (*schema.BaseComponentConfig, *[]string, bool) { - defer perf.Track(nil, "exec.getCachedBaseComponentConfig")() - - baseComponentConfigCacheMu.RLock() - defer baseComponentConfigCacheMu.RUnlock() - - cached, found := baseComponentConfigCache[cacheKey] - if !found { - return nil, nil, false - } - - // Deep copy to prevent external mutations from affecting the cache. - // All map fields must be deep copied since they are mutable. - copyConfig := schema.BaseComponentConfig{ - FinalBaseComponentName: cached.FinalBaseComponentName, - BaseComponentCommand: cached.BaseComponentCommand, - BaseComponentBackendType: cached.BaseComponentBackendType, - BaseComponentRemoteStateBackendType: cached.BaseComponentRemoteStateBackendType, - } - - // Deep copy all map fields. - var err error - if copyConfig.BaseComponentVars, err = m.DeepCopyMap(cached.BaseComponentVars); err != nil { - // If deep copy fails, return not found to force reprocessing. - return nil, nil, false - } - if copyConfig.BaseComponentSettings, err = m.DeepCopyMap(cached.BaseComponentSettings); err != nil { - return nil, nil, false - } - if copyConfig.BaseComponentEnv, err = m.DeepCopyMap(cached.BaseComponentEnv); err != nil { - return nil, nil, false - } - if copyConfig.BaseComponentAuth, err = m.DeepCopyMap(cached.BaseComponentAuth); err != nil { - return nil, nil, false - } - if copyConfig.BaseComponentProviders, err = m.DeepCopyMap(cached.BaseComponentProviders); err != nil { - return nil, nil, false - } - if copyConfig.BaseComponentHooks, err = m.DeepCopyMap(cached.BaseComponentHooks); err != nil { - return nil, nil, false - } - if copyConfig.BaseComponentBackendSection, err = m.DeepCopyMap(cached.BaseComponentBackendSection); err != nil { - return nil, nil, false - } - if copyConfig.BaseComponentRemoteStateBackendSection, err = m.DeepCopyMap(cached.BaseComponentRemoteStateBackendSection); err != nil { - return nil, nil, false - } - - // Deep copy the slice. 
- copyBaseComponents := make([]string, len(cached.ComponentInheritanceChain))
- copy(copyBaseComponents, cached.ComponentInheritanceChain)
- copyConfig.ComponentInheritanceChain = copyBaseComponents
-
- return &copyConfig, &copyBaseComponents, true
-}
-
-// cacheBaseComponentConfig stores a base component config in the cache.
-// Stores a deep copy to prevent external mutations from affecting the cache.
-func cacheBaseComponentConfig(cacheKey string, config *schema.BaseComponentConfig) {
- defer perf.Track(nil, "exec.cacheBaseComponentConfig")()
-
- baseComponentConfigCacheMu.Lock()
- defer baseComponentConfigCacheMu.Unlock()
-
- // Deep copy to prevent external mutations from affecting the cache.
- // All map fields must be deep copied since they are mutable.
- copyConfig := schema.BaseComponentConfig{
- FinalBaseComponentName: config.FinalBaseComponentName,
- BaseComponentCommand: config.BaseComponentCommand,
- BaseComponentBackendType: config.BaseComponentBackendType,
- BaseComponentRemoteStateBackendType: config.BaseComponentRemoteStateBackendType,
- }
-
- // Deep copy all map fields.
- var err error
- if copyConfig.BaseComponentVars, err = m.DeepCopyMap(config.BaseComponentVars); err != nil {
- // If deep copy fails, don't cache - log and return.
- return
- }
- if copyConfig.BaseComponentSettings, err = m.DeepCopyMap(config.BaseComponentSettings); err != nil {
- return
- }
- if copyConfig.BaseComponentEnv, err = m.DeepCopyMap(config.BaseComponentEnv); err != nil {
- return
- }
- if copyConfig.BaseComponentAuth, err = m.DeepCopyMap(config.BaseComponentAuth); err != nil {
- return
- }
- if copyConfig.BaseComponentProviders, err = m.DeepCopyMap(config.BaseComponentProviders); err != nil {
- return
- }
- if copyConfig.BaseComponentHooks, err = m.DeepCopyMap(config.BaseComponentHooks); err != nil {
- return
- }
- if copyConfig.BaseComponentBackendSection, err = m.DeepCopyMap(config.BaseComponentBackendSection); err != nil {
- return
- }
- if copyConfig.BaseComponentRemoteStateBackendSection, err = m.DeepCopyMap(config.BaseComponentRemoteStateBackendSection); err != nil {
- return
- }
-
- // Deep copy the slice.
- copyBaseComponents := make([]string, len(config.ComponentInheritanceChain))
- copy(copyBaseComponents, config.ComponentInheritanceChain)
- copyConfig.ComponentInheritanceChain = copyBaseComponents
-
- baseComponentConfigCache[cacheKey] = &copyConfig
-}
-
-// getCachedCompiledSchema retrieves a cached compiled JSON schema if it exists.
-// The compiled schema is thread-safe for concurrent validation operations.
-func getCachedCompiledSchema(schemaPath string) (*jsonschema.Schema, bool) {
- defer perf.Track(nil, "exec.getCachedCompiledSchema")()
-
- jsonSchemaCacheMu.RLock()
- defer jsonSchemaCacheMu.RUnlock()
-
- schema, found := jsonSchemaCache[schemaPath]
- return schema, found
-}
-
-// cacheCompiledSchema stores a compiled JSON schema in the cache.
-// The compiled schema is thread-safe and can be safely shared across goroutines.
-func cacheCompiledSchema(schemaPath string, schema *jsonschema.Schema) { - defer perf.Track(nil, "exec.cacheCompiledSchema")() - - jsonSchemaCacheMu.Lock() - defer jsonSchemaCacheMu.Unlock() - - jsonSchemaCache[schemaPath] = schema -} - -// ClearBaseComponentConfigCache clears the base component config cache. -// This should be called between independent operations (like tests) to ensure fresh processing. -func ClearBaseComponentConfigCache() { - baseComponentConfigCacheMu.Lock() - defer baseComponentConfigCacheMu.Unlock() - baseComponentConfigCache = make(map[string]*schema.BaseComponentConfig) -} - -// ClearJsonSchemaCache clears the JSON schema cache. -// This should be called between independent operations (like tests) to ensure fresh processing. -func ClearJsonSchemaCache() { - jsonSchemaCacheMu.Lock() - defer jsonSchemaCacheMu.Unlock() - jsonSchemaCache = make(map[string]*jsonschema.Schema) -} - -// ClearFileContentCache clears the file content cache. -// This should be called between independent operations (like tests) to ensure fresh processing. -func ClearFileContentCache() { - defer perf.Track(nil, "exec.ClearFileContentCache")() - - getFileContentSyncMap.Range(func(key, value interface{}) bool { - getFileContentSyncMap.Delete(key) - return true - }) -} +// Mutex to serialize writes to importsConfig maps during parallel import processing. +var importsConfigLock = &sync.Mutex{} // stackProcessResult holds the result of processing a single stack in parallel. 
 type stackProcessResult struct {
@@ -335,7 +92,7 @@ func ProcessYAMLConfigFiles(
 			mergeContext.EnableProvenance()
 		}
 
-		deepMergedStackConfig, importsConfig, stackConfig, _, _, _, _, err := ProcessYAMLConfigFileWithContext(
+		deepMergedStackConfig, importsConfig, stackConfig, _, _, _, _, mergeContext, err := ProcessYAMLConfigFileWithContext(
 			atmosConfig,
 			stackBasePath,
 			p,
@@ -357,6 +114,14 @@ func ProcessYAMLConfigFiles(
 			return
 		}
 
+		if mergeContext != nil {
+			if len(mergeContext.ImportChain) > 0 {
+				log.Trace("After processing file, merge context has import chain", "file", stackFileName, "import_chain_length", len(mergeContext.ImportChain), "import_chain", mergeContext.ImportChain)
+			} else {
+				log.Trace("After processing file, merge context has empty import chain", "file", stackFileName)
+			}
+		}
+
 		var imports []string
 		for k := range importsConfig {
 			imports = append(imports, k)
@@ -478,7 +243,7 @@ func ProcessYAMLConfigFile(
 	}
 
 	// Call the context-aware version
-	deepMerged, imports, stackConfig, terraformInline, terraformImports, helmfileInline, helmfileImports, err := ProcessYAMLConfigFileWithContext(
+	deepMerged, imports, stackConfig, terraformInline, terraformImports, helmfileInline, helmfileImports, mergeContext, err := ProcessYAMLConfigFileWithContext(
 		atmosConfig,
 		basePath,
 		filePath,
@@ -531,6 +296,7 @@ func ProcessYAMLConfigFileWithContext(
 	map[string]any,
 	map[string]any,
 	map[string]any,
+	*m.MergeContext,
 	error,
 ) {
 	defer perf.Track(atmosConfig, "exec.ProcessYAMLConfigFileWithContext")()
@@ -554,6 +320,21 @@ func ProcessYAMLConfigFileWithContext(
 	)
 }
 
+// importFileResult holds the result of processing a single import file in parallel.
+type importFileResult struct {
+	index                        int
+	importFile                   string
+	yamlConfig                   map[string]any
+	yamlConfigRaw                map[string]any
+	terraformOverridesInline     map[string]any
+	terraformOverridesImports    map[string]any
+	helmfileOverridesInline      map[string]any
+	helmfileOverridesImports     map[string]any
+	importRelativePathWithoutExt string
+	mergeContext                 *m.MergeContext
+	err                          error
+}
+
 // processYAMLConfigFileWithContextInternal is the internal recursive implementation.
 //
 //nolint:gocognit,revive,cyclop,funlen
@@ -581,11 +362,14 @@ func processYAMLConfigFileWithContextInternal(
 	map[string]any,
 	map[string]any,
 	map[string]any,
+	*m.MergeContext,
 	error,
 ) {
 	var stackConfigs []map[string]any
 	relativeFilePath := u.TrimBasePathFromPath(basePath+"/", filePath)
 
+	log.Trace("Processing YAML config file", "file", relativeFilePath)
+
 	// Initialize or update merge context with current file.
 	if mergeContext == nil {
 		mergeContext = m.NewMergeContext()
@@ -595,6 +379,7 @@ func processYAMLConfigFileWithContextInternal(
 		}
 	}
 	mergeContext = mergeContext.WithFile(relativeFilePath)
+	log.Trace("Merge context updated with file", "file", relativeFilePath, "import_chain_length", len(mergeContext.ImportChain), "track_provenance", atmosConfig != nil && atmosConfig.TrackProvenance)
 
 	globalTerraformSection := map[string]any{}
 	globalHelmfileSection := map[string]any{}
@@ -623,13 +408,13 @@ func processYAMLConfigFileWithContextInternal(
 	// This is useful when generating Atmos manifests using other tools, but the imported files are not present yet at the generation time.
 	if err != nil {
 		if ignoreMissingFiles || skipIfMissing {
-			return map[string]any{}, map[string]map[string]any{}, map[string]any{}, map[string]any{}, map[string]any{}, map[string]any{}, map[string]any{}, nil
+			return map[string]any{}, map[string]map[string]any{}, map[string]any{}, map[string]any{}, map[string]any{}, map[string]any{}, map[string]any{}, nil, nil
 		} else {
-			return nil, nil, nil, nil, nil, nil, nil, err
+			return nil, nil, nil, nil, nil, nil, nil, nil, err
 		}
 	}
 
 	if stackYamlConfig == "" {
-		return map[string]any{}, map[string]map[string]any{}, map[string]any{}, map[string]any{}, map[string]any{}, map[string]any{}, map[string]any{}, nil
+		return map[string]any{}, map[string]map[string]any{}, map[string]any{}, map[string]any{}, map[string]any{}, map[string]any{}, map[string]any{}, nil, nil
 	}
 
 	stackManifestTemplatesProcessed := stackYamlConfig
@@ -646,11 +431,11 @@ func processYAMLConfigFileWithContextInternal(
 			if atmosConfig.Logs.Level == u.LogLevelTrace || atmosConfig.Logs.Level == u.LogLevelDebug {
 				stackManifestTemplatesErrorMessage = fmt.Sprintf("\n\n%s", stackYamlConfig)
 			}
-			wrappedErr := fmt.Errorf("%w: %v", errUtils.ErrInvalidStackManifest, tmplErr)
+			wrappedErr := fmt.Errorf("%w: %w", errUtils.ErrInvalidStackManifest, tmplErr)
 			if mergeContext != nil {
-				return nil, nil, nil, nil, nil, nil, nil, mergeContext.FormatError(wrappedErr, fmt.Sprintf("stack manifest '%s'%s", relativeFilePath, stackManifestTemplatesErrorMessage))
+				return nil, nil, nil, nil, nil, nil, nil, nil, mergeContext.FormatError(wrappedErr, fmt.Sprintf("stack manifest '%s'%s", relativeFilePath, stackManifestTemplatesErrorMessage))
 			}
-			return nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w: stack manifest '%s'\n%v%s", errUtils.ErrInvalidStackManifest, relativeFilePath, tmplErr, stackManifestTemplatesErrorMessage)
+			return nil, nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w: stack manifest '%s'\n%w%s", errUtils.ErrInvalidStackManifest, relativeFilePath, tmplErr, stackManifestTemplatesErrorMessage)
 		}
 	}
@@ -665,10 +450,10 @@ func processYAMLConfigFileWithContextInternal(
 			wrappedErr := fmt.Errorf("%w: %v", errUtils.ErrInvalidStackManifest, err)
 			// Then format it with context information
 			e := mergeContext.FormatError(wrappedErr, fmt.Sprintf("stack manifest '%s'%s", relativeFilePath, stackManifestTemplatesErrorMessage))
-			return nil, nil, nil, nil, nil, nil, nil, e
+			return nil, nil, nil, nil, nil, nil, nil, nil, e
 		} else {
 			e := fmt.Errorf("%w: stack manifest '%s'\n%v%s", errUtils.ErrInvalidStackManifest, relativeFilePath, err, stackManifestTemplatesErrorMessage)
-			return nil, nil, nil, nil, nil, nil, nil, e
+			return nil, nil, nil, nil, nil, nil, nil, nil, e
 		}
 	}
@@ -684,12 +469,12 @@ func processYAMLConfigFileWithContextInternal(
 		// jsonschema: invalid jsonType: map[interface {}]interface {}
 		dataJson, err := u.ConvertToJSONFast(stackConfigMap)
 		if err != nil {
-			return nil, nil, nil, nil, nil, nil, nil, err
+			return nil, nil, nil, nil, nil, nil, nil, nil, err
 		}
 
 		dataFromJson, err := u.ConvertFromJSON(dataJson)
 		if err != nil {
-			return nil, nil, nil, nil, nil, nil, nil, err
+			return nil, nil, nil, nil, nil, nil, nil, nil, err
 		}
 
 		atmosManifestJsonSchemaValidationErrorFormat := "Atmos manifest JSON Schema validation error in the file '%s':\n%v"
@@ -703,21 +488,21 @@ func processYAMLConfigFileWithContextInternal(
 			atmosManifestJsonSchemaFileReader, err := os.Open(atmosManifestJsonSchemaFilePath)
 			if err != nil {
-				return nil, nil, nil, nil, nil, nil, nil, errors.Errorf(atmosManifestJsonSchemaValidationErrorFormat, relativeFilePath, err)
+				return nil, nil, nil, nil, nil, nil, nil, nil, errors.Errorf(atmosManifestJsonSchemaValidationErrorFormat, relativeFilePath, err)
 			}
 			defer func() {
 				_ = atmosManifestJsonSchemaFileReader.Close()
 			}()
 
 			if err := compiler.AddResource(atmosManifestJsonSchemaFilePath, atmosManifestJsonSchemaFileReader); err != nil {
-				return nil, nil, nil, nil, nil, nil, nil, errors.Errorf(atmosManifestJsonSchemaValidationErrorFormat, relativeFilePath, err)
+				return nil, nil, nil, nil, nil, nil, nil, nil, errors.Errorf(atmosManifestJsonSchemaValidationErrorFormat, relativeFilePath, err)
 			}
 
 			compiler.Draft = jsonschema.Draft2020
 
 			compiledSchema, err = compiler.Compile(atmosManifestJsonSchemaFilePath)
 			if err != nil {
-				return nil, nil, nil, nil, nil, nil, nil, errors.Errorf(atmosManifestJsonSchemaValidationErrorFormat, relativeFilePath, err)
+				return nil, nil, nil, nil, nil, nil, nil, nil, errors.Errorf(atmosManifestJsonSchemaValidationErrorFormat, relativeFilePath, err)
 			}
 
 			// Store compiled schema in cache for reuse.
@@ -730,11 +515,11 @@ func processYAMLConfigFileWithContextInternal(
 			case *jsonschema.ValidationError:
 				b, err2 := json.MarshalIndent(e.BasicOutput(), "", " ")
 				if err2 != nil {
-					return nil, nil, nil, nil, nil, nil, nil, errors.Errorf(atmosManifestJsonSchemaValidationErrorFormat, relativeFilePath, err2)
+					return nil, nil, nil, nil, nil, nil, nil, nil, errors.Errorf(atmosManifestJsonSchemaValidationErrorFormat, relativeFilePath, err2)
 				}
-				return nil, nil, nil, nil, nil, nil, nil, errors.Errorf(atmosManifestJsonSchemaValidationErrorFormat, relativeFilePath, string(b))
+				return nil, nil, nil, nil, nil, nil, nil, nil, errors.Errorf(atmosManifestJsonSchemaValidationErrorFormat, relativeFilePath, string(b))
 			default:
-				return nil, nil, nil, nil, nil, nil, nil, errors.Errorf(atmosManifestJsonSchemaValidationErrorFormat, relativeFilePath, err)
+				return nil, nil, nil, nil, nil, nil, nil, nil, errors.Errorf(atmosManifestJsonSchemaValidationErrorFormat, relativeFilePath, err)
 			}
 		}
 	}
@@ -744,19 +529,19 @@ func processYAMLConfigFileWithContextInternal(
 	// Global overrides in this stack manifest
 	if i, ok := stackConfigMap[cfg.OverridesSectionName]; ok {
 		if globalOverrides, ok = i.(map[string]any); !ok {
-			return nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w in the stack manifest '%s'", errUtils.ErrInvalidOverridesSection, relativeFilePath)
+			return nil, nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w in the stack manifest '%s'", errUtils.ErrInvalidOverridesSection, relativeFilePath)
 		}
 	}
 
 	// Terraform overrides in this stack manifest
 	if o, ok := stackConfigMap[cfg.TerraformSectionName]; ok {
 		if globalTerraformSection, ok = o.(map[string]any); !ok {
-			return nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w in the stack manifest '%s'", errUtils.ErrInvalidTerraformSection, relativeFilePath)
+			return nil, nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w in the stack manifest '%s'", errUtils.ErrInvalidTerraformSection, relativeFilePath)
 		}
 
 		if i, ok := globalTerraformSection[cfg.OverridesSectionName]; ok {
 			if terraformOverrides, ok = i.(map[string]any); !ok {
-				return nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w in the stack manifest '%s'", errUtils.ErrInvalidTerraformOverridesSection, relativeFilePath)
+				return nil, nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w in the stack manifest '%s'", errUtils.ErrInvalidTerraformOverridesSection, relativeFilePath)
 			}
 		}
 	}
@@ -764,12 +549,12 @@ func processYAMLConfigFileWithContextInternal(
 	// Helmfile overrides in this stack manifest
 	if o, ok := stackConfigMap[cfg.HelmfileSectionName]; ok {
 		if globalHelmfileSection, ok = o.(map[string]any); !ok {
-			return nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w in the stack manifest '%s'", errUtils.ErrInvalidHelmfileSection, relativeFilePath)
+			return nil, nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w in the stack manifest '%s'", errUtils.ErrInvalidHelmfileSection, relativeFilePath)
 		}
 
 		if i, ok := globalHelmfileSection[cfg.OverridesSectionName]; ok {
 			if helmfileOverrides, ok = i.(map[string]any); !ok {
-				return nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w in the stack manifest '%s'", errUtils.ErrInvalidHelmfileOverridesSection, relativeFilePath)
+				return nil, nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w in the stack manifest '%s'", errUtils.ErrInvalidHelmfileOverridesSection, relativeFilePath)
 			}
 		}
 	}
@@ -780,7 +565,7 @@ func processYAMLConfigFileWithContextInternal(
 		mergeContext,
 	)
 	if err != nil {
-		return nil, nil, nil, nil, nil, nil, nil, err
+		return nil, nil, nil, nil, nil, nil, nil, nil, err
 	}
 
 	parentHelmfileOverridesInline, err = m.MergeWithContext(
@@ -789,13 +574,13 @@ func processYAMLConfigFileWithContextInternal(
 		mergeContext,
 	)
 	if err != nil {
-		return nil, nil, nil, nil, nil, nil, nil, err
+		return nil, nil, nil, nil, nil, nil, nil, nil, err
 	}
 
 	// Find and process all imports
 	importStructs, err := ProcessImportSection(stackConfigMap, relativeFilePath)
 	if err != nil {
-		return nil, nil, nil, nil, nil, nil, nil, err
+		return nil, nil, nil, nil, nil, nil, nil, nil, err
 	}
 
 	// Record provenance for each import if provenance tracking is enabled.
@@ -829,25 +614,13 @@ func processYAMLConfigFileWithContextInternal(
 		}
 	}
 
-	// importFileResult holds the result of processing a single import file in parallel.
-	type importFileResult struct {
-		index                        int
-		importFile                   string
-		yamlConfig                   map[string]any
-		yamlConfigRaw                map[string]any
-		terraformOverridesInline     map[string]any
-		terraformOverridesImports    map[string]any
-		helmfileOverridesInline     map[string]any
-		helmfileOverridesImports     map[string]any
-		importRelativePathWithoutExt string
-		err                          error
-	}
-
+	//nolint:staticcheck // atmosConfig nil check is present.
+	log.Trace("Processing import structs", "count", len(importStructs), "file", relativeFilePath, "track_provenance", atmosConfig != nil && atmosConfig.TrackProvenance)
 	for _, importStruct := range importStructs {
 		imp := importStruct.Path
 
 		if imp == "" {
-			return nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w in the manifest '%s'", errUtils.ErrInvalidImport, relativeFilePath)
+			return nil, nil, nil, nil, nil, nil, nil, nil, fmt.Errorf("%w in the manifest '%s'", errUtils.ErrInvalidImport, relativeFilePath)
 		}
 
 		// If the import file is specified without extension, use `.yaml` as default
@@ -889,7 +662,7 @@ func processYAMLConfigFileWithContextInternal(
 			errorMessage := fmt.Sprintf("invalid import in the manifest '%s'\nThe file imports itself in '%s'",
 				relativeFilePath,
 				imp)
-			return nil, nil, nil, nil, nil, nil, nil, errors.New(errorMessage)
+			return nil, nil, nil, nil, nil, nil, nil, nil, errors.New(errorMessage)
 		}
 
 		// Find all import matches in the glob
@@ -903,7 +676,7 @@ func processYAMLConfigFileWithContextInternal(
 			// The import was not found -> check if the import is a Go template; if not, return the error
 			isGolangTemplate, err2 := IsGolangTemplate(atmosConfig, imp)
 			if err2 != nil {
-				return nil, nil, nil, nil, nil, nil, nil, err2
+				return nil, nil, nil, nil, nil, nil, nil, nil, err2
 			}
 
 			// If the import is not a Go template and SkipIfMissing is false, return the error
@@ -914,13 +687,13 @@ func processYAMLConfigFileWithContextInternal(
 					relativeFilePath,
 					err,
 				)
-				return nil, nil, nil, nil, nil, nil, nil, errors.New(errorMessage)
+				return nil, nil, nil, nil, nil, nil, nil, nil, errors.New(errorMessage)
 			} else if importMatches == nil {
 				errorMessage := fmt.Sprintf("no matches found for the import '%s' in the file '%s'",
 					imp,
 					relativeFilePath,
 				)
-				return nil, nil, nil, nil, nil, nil, nil, errors.New(errorMessage)
+				return nil, nil, nil, nil, nil, nil, nil, nil, errors.New(errorMessage)
 			}
 		}
 	}
@@ -933,7 +706,7 @@ func processYAMLConfigFileWithContextInternal(
 		listOfMaps := []map[string]any{importStruct.Context, context}
 		mergedContext, err := m.MergeWithContext(atmosConfig, listOfMaps, mergeContext)
 		if err != nil {
-			return nil, nil, nil, nil, nil, nil, nil, err
+			return nil, nil, nil, nil, nil, nil, nil, nil, err
 		}
 
 		// Initialize provenance storage before parallel processing to avoid data races.
@@ -961,7 +734,9 @@ func processYAMLConfigFileWithContextInternal(
 				terraformOverridesInline,
 				terraformOverridesImports,
 				helmfileOverridesInline,
-				helmfileOverridesImports, processErr := processYAMLConfigFileWithContextInternal(
+				helmfileOverridesImports,
+				importMergeContext,
+				processErr := processYAMLConfigFileWithContextInternal(
 				atmosConfig,
 				basePath,
 				file,
@@ -1002,6 +777,7 @@ func processYAMLConfigFileWithContextInternal(
 				helmfileOverridesInline:      helmfileOverridesInline,
 				helmfileOverridesImports:     helmfileOverridesImports,
 				importRelativePathWithoutExt: importRelativePathWithoutExt,
+				mergeContext:                 importMergeContext,
 				err:                          nil,
 			}
 		}(i, importFile)
@@ -1011,11 +787,15 @@ func processYAMLConfigFileWithContextInternal(
 	wg.Wait()
 
 	// Sequentially merge results in the original import order to preserve Atmos inheritance.
+	log.Trace("Processing import results", "count", len(results), "track_provenance", atmosConfig != nil && atmosConfig.TrackProvenance)
 	for _, result := range results {
 		if result.err != nil {
-			return nil, nil, nil, nil, nil, nil, nil, result.err
+			return nil, nil, nil, nil, nil, nil, nil, nil, result.err
 		}
 
+		// Store merge context for imported files if provenance tracking is enabled.
+		processImportProvenanceTracking(atmosConfig, &result, mergeContext)
+
 		// From the imported manifest, get the `overrides` sections and merge them with the parent `overrides` section.
 		// The inline `overrides` section takes precedence over the imported `overrides` section inside the imported manifest.
 		parentTerraformOverridesImports, err = m.MergeWithContext(
@@ -1024,7 +804,7 @@ func processYAMLConfigFileWithContextInternal(
 			mergeContext,
 		)
 		if err != nil {
-			return nil, nil, nil, nil, nil, nil, nil, err
+			return nil, nil, nil, nil, nil, nil, nil, nil, err
 		}
 
 		// From the imported manifest, get the `overrides` sections and merge them with the parent `overrides` section.
@@ -1035,7 +815,7 @@ func processYAMLConfigFileWithContextInternal(
 			mergeContext,
 		)
 		if err != nil {
-			return nil, nil, nil, nil, nil, nil, nil, err
+			return nil, nil, nil, nil, nil, nil, nil, nil, err
 		}
 
 		// Append to stackConfigs in order.
@@ -1081,7 +861,7 @@ func processYAMLConfigFileWithContextInternal(
 		mergeContext,
 	)
 	if err != nil {
-		return nil, nil, nil, nil, nil, nil, nil, err
+		return nil, nil, nil, nil, nil, nil, nil, nil, err
 	}
 
 	// Helmfile `overrides`
@@ -1091,7 +871,7 @@ func processYAMLConfigFileWithContextInternal(
 		mergeContext,
 	)
 	if err != nil {
-		return nil, nil, nil, nil, nil, nil, nil, err
+		return nil, nil, nil, nil, nil, nil, nil, nil, err
 	}
 
 	// Add the `overrides` section to all components in this stack manifest
@@ -1129,7 +909,7 @@ func processYAMLConfigFileWithContextInternal(
 	stackConfigsDeepMerged, err := m.MergeWithContext(atmosConfig, stackConfigs, mergeContext)
 	if err != nil {
 		// The error already contains context information from MergeWithContext
-		return nil, nil, nil, nil, nil, nil, nil, err
+		return nil, nil, nil, nil, nil, nil, nil, nil, err
 	}
 
 	// NOTE: We don't store merge context here because ProcessYAMLConfigFileWithContext
@@ -1143,6 +923,7 @@ func processYAMLConfigFileWithContextInternal(
 		parentTerraformOverridesImports,
 		parentHelmfileOverridesInline,
 		parentHelmfileOverridesImports,
+		mergeContext,
 		nil
 }
 
@@ -1394,7 +1175,7 @@ func ProcessImportSection(stackMap map[string]any, filePath string) ([]schema.StackImport, error) {
 	return result, nil
 }
 
-// sectionContainsAnyNotEmptySections checks if a section contains any of the provided low-level sections, and it's not empty
+// sectionContainsAnyNotEmptySections checks if a section contains any of the provided low-level sections, and it's not empty.
 func sectionContainsAnyNotEmptySections(section map[string]any, sectionsToCheck []string) bool {
 	for _, s := range sectionsToCheck {
 		if len(s) > 0 {
@@ -1411,38 +1192,6 @@ func sectionContainsAnyNotEmptySections(section map[string]any, sectionsToCheck
 	return false
 }
 
-// GetFileContent tries to read and return the file content from the sync map if it exists in the map,
-// otherwise it reads the file, stores its content in the map and returns the content.
-func GetFileContent(filePath string) (string, error) {
-	defer perf.Track(nil, "exec.GetFileContent")()
-
-	existingContent, found := getFileContentSyncMap.Load(filePath)
-	if found && existingContent != nil {
-		return fmt.Sprintf("%s", existingContent), nil
-	}
-
-	content, err := os.ReadFile(filePath)
-	if err != nil {
-		return "", err
-	}
-	getFileContentSyncMap.Store(filePath, content)
-
-	return string(content), nil
-}
-
-// GetFileContentWithoutCache reads file content without using the cache.
-// Used when provenance tracking is enabled to ensure fresh reads with position tracking.
-func GetFileContentWithoutCache(filePath string) (string, error) {
-	defer perf.Track(nil, "exec.GetFileContentWithoutCache")()
-
-	content, err := os.ReadFile(filePath)
-	if err != nil {
-		return "", err
-	}
-
-	return string(content), nil
-}
-
 // ProcessBaseComponentConfig processes base component(s) config.
 func ProcessBaseComponentConfig(
 	atmosConfig *schema.AtmosConfiguration,
diff --git a/internal/exec/utils.go b/internal/exec/utils.go
index 815c875d79..3c85a86eb3 100644
--- a/internal/exec/utils.go
+++ b/internal/exec/utils.go
@@ -260,6 +260,16 @@ func getFindStacksMapCacheKey(atmosConfig *schema.AtmosConfiguration, ignoreMissingFiles bool) string {
 // FindStacksMap processes stack config and returns a map of all stacks.
 // Results are cached to avoid re-processing the same YAML files multiple times
 // within the same command execution (e.g., when ValidateStacks is called before ExecuteDescribeStacks).
+// ClearFindStacksMapCache clears the FindStacksMap cache.
+func ClearFindStacksMapCache() {
+	defer perf.Track(nil, "exec.ClearFindStacksMapCache")()
+
+	log.Trace("ClearFindStacksMapCache called")
+	findStacksMapCacheMu.Lock()
+	findStacksMapCache = make(map[string]*findStacksMapCacheEntry)
+	findStacksMapCacheMu.Unlock()
+}
+
 func FindStacksMap(atmosConfig *schema.AtmosConfiguration, ignoreMissingFiles bool) (
 	map[string]any,
 	map[string]map[string]any,
diff --git a/internal/exec/validate_stacks.go b/internal/exec/validate_stacks.go
index cafe9ac2eb..3f4059d924 100644
--- a/internal/exec/validate_stacks.go
+++ b/internal/exec/validate_stacks.go
@@ -218,7 +218,7 @@ func ValidateStacks(atmosConfig *schema.AtmosConfiguration) error {
 		// Create a new merge context to track import chain for better error messages
 		mergeContext := m.NewMergeContext()
 
-		stackConfig, importsConfig, _, _, _, _, _, err := ProcessYAMLConfigFileWithContext(
+		stackConfig, importsConfig, _, _, _, _, _, _, err := ProcessYAMLConfigFileWithContext(
 			atmosConfig,
 			atmosConfig.StacksBaseAbsolutePath,
 			filePath,
diff --git a/pkg/auth/cloud/aws/env.go b/pkg/auth/cloud/aws/env.go
index 0ef6a04edc..5897239f17 100644
--- a/pkg/auth/cloud/aws/env.go
+++ b/pkg/auth/cloud/aws/env.go
@@ -113,11 +113,11 @@ func LoadIsolatedAWSConfig(ctx context.Context, optFns ...func(*config.LoadOptions) error) (aws.Config, error) {
 	})
 
 	if isolateErr != nil {
-		return aws.Config{}, fmt.Errorf("%w: %w", errUtils.ErrLoadAwsConfig, isolateErr)
+		return aws.Config{}, fmt.Errorf("%w: %w", errUtils.ErrLoadAWSConfig, isolateErr)
 	}
 
 	if err != nil {
-		return aws.Config{}, fmt.Errorf("%w: %w", errUtils.ErrLoadAwsConfig, err)
+		return aws.Config{}, fmt.Errorf("%w: %w", errUtils.ErrLoadAWSConfig, err)
 	}
 
 	return cfg, nil
@@ -180,7 +180,7 @@ func LoadAtmosManagedAWSConfig(ctx context.Context, optFns ...func(*config.LoadOptions) error) (aws.Config, error) {
 	if err != nil {
 		log.Debug("Failed to load AWS SDK config", "error", err)
-		return aws.Config{}, fmt.Errorf("%w: %w", errUtils.ErrLoadAwsConfig, err)
+		return aws.Config{}, fmt.Errorf("%w: %w", errUtils.ErrLoadAWSConfig, err)
 	}
 
 	log.Debug("Successfully loaded AWS SDK config", "region", cfg.Region)
diff --git a/pkg/auth/providers/aws/sso.go b/pkg/auth/providers/aws/sso.go
index 46ff022d57..b329103513 100644
--- a/pkg/auth/providers/aws/sso.go
+++ b/pkg/auth/providers/aws/sso.go
@@ -144,7 +144,7 @@ func (p *ssoProvider) Authenticate(ctx context.Context) (authTypes.ICredentials,
 	// to avoid conflicts with external AWS env vars.
 	cfg, err := awsCloud.LoadIsolatedAWSConfig(ctx, configOpts...)
 	if err != nil {
-		return nil, errUtils.Build(errUtils.ErrLoadAwsConfig).
+		return nil, errUtils.Build(errUtils.ErrLoadAWSConfig).
 			WithExplanationf("Failed to load AWS configuration for SSO authentication in region '%s'", p.region).
 			WithHint("Verify that the AWS region is valid and accessible").
 			WithHint("Check your network connectivity and AWS service availability").
diff --git a/pkg/list/column/column.go b/pkg/list/column/column.go
new file mode 100644
index 0000000000..62330792eb
--- /dev/null
+++ b/pkg/list/column/column.go
@@ -0,0 +1,421 @@
+package column
+
+import (
+	"bytes"
+	"fmt"
+	"strings"
+	"text/template"
+
+	errUtils "github.com/cloudposse/atmos/errors"
+)
+
+// Config defines a column with name, Go template for value extraction, and optional width.
+type Config struct {
+	Name  string `yaml:"name" json:"name" mapstructure:"name"`    // Display header
+	Value string `yaml:"value" json:"value" mapstructure:"value"` // Go template string
+	Width int    `yaml:"width" json:"width" mapstructure:"width"` // Optional width override
+}
+
+// Selector manages column extraction with template evaluation.
+// Templates are evaluated during Extract(), not at config load time.
+type Selector struct {
+	configs     []Config
+	selected    []string           // Column names to display (nil = all)
+	templateMap *template.Template // Pre-parsed templates with FuncMap
+}
+
+// TemplateContext provides data available to column templates during evaluation.
+type TemplateContext struct {
+	// Standard fields available in all templates
+	AtmosComponent     string `json:"atmos_component"`
+	AtmosStack         string `json:"atmos_stack"`
+	AtmosComponentType string `json:"atmos_component_type"`
+
+	// Component configuration
+	Vars     map[string]any `json:"vars"`
+	Settings map[string]any `json:"settings"`
+	Metadata map[string]any `json:"metadata"`
+	Env      map[string]any `json:"env"`
+
+	// Flags
+	Enabled  bool `json:"enabled"`
+	Locked   bool `json:"locked"`
+	Abstract bool `json:"abstract"`
+
+	// Full raw data for advanced templates
+	Raw map[string]any `json:"raw"`
+}
+
+// NewSelector creates a selector with Go template support.
+// Templates are pre-parsed for performance but NOT evaluated until Extract().
+// FuncMap should include functions like atmos.Component, toString, get, etc.
+func NewSelector(configs []Config, funcMap template.FuncMap) (*Selector, error) {
+	if len(configs) == 0 {
+		return nil, fmt.Errorf("%w: no columns configured", errUtils.ErrInvalidConfig)
+	}
+
+	// Pre-parse all templates with function map
+	tmplMap := template.New("columns").Funcs(funcMap)
+	for _, cfg := range configs {
+		if cfg.Name == "" {
+			return nil, fmt.Errorf("%w: column name cannot be empty", errUtils.ErrInvalidConfig)
+		}
+		if cfg.Value == "" {
+			return nil, fmt.Errorf("%w: column %q has empty value template", errUtils.ErrInvalidConfig, cfg.Name)
+		}
+
+		// Parse each column template
+		_, err := tmplMap.New(cfg.Name).Parse(cfg.Value)
+		if err != nil {
+			return nil, fmt.Errorf("%w: invalid template for column %q: %w", errUtils.ErrInvalidConfig, cfg.Name, err)
+		}
+	}
+
+	return &Selector{
+		configs:     configs,
+		selected:    nil, // nil = all columns
+		templateMap: tmplMap,
+	}, nil
+}
+
+// Select restricts which columns to display.
+// Pass nil or empty slice to display all columns.
+func (s *Selector) Select(columnNames []string) error {
+	if len(columnNames) == 0 {
+		s.selected = nil
+		return nil
+	}
+
+	// Validate all requested columns exist
+	configMap := make(map[string]bool)
+	for _, cfg := range s.configs {
+		configMap[cfg.Name] = true
+	}
+
+	for _, name := range columnNames {
+		if !configMap[name] {
+			return fmt.Errorf("%w: column %q not found in configuration", errUtils.ErrInvalidConfig, name)
+		}
+	}
+
+	s.selected = columnNames
+	return nil
+}
+
+// Extract evaluates templates against data and returns table rows.
+// ⚠️ CRITICAL: This is where Go template evaluation happens (NOT at config load).
+// Each row is processed with its full data as template context.
+func (s *Selector) Extract(data []map[string]any) (headers []string, rows [][]string, err error) {
+	if len(data) == 0 {
+		return s.Headers(), [][]string{}, nil
+	}
+
+	headers = s.Headers()
+
+	// Get configs for selected columns (or all if no selection)
+	selectedConfigs := s.getSelectedConfigs()
+
+	// Process each data item
+	for i, item := range data {
+		row := make([]string, len(selectedConfigs))
+
+		for j, cfg := range selectedConfigs {
+			// Evaluate template for this column and data item
+			value, evalErr := s.evaluateTemplate(cfg, item)
+			if evalErr != nil {
+				return nil, nil, fmt.Errorf("%w: row %d, column %q: %w", errUtils.ErrTemplateEvaluation, i, cfg.Name, evalErr)
+			}
+			row[j] = value
+		}
+
+		rows = append(rows, row)
+	}
+
+	return headers, rows, nil
+}
+
+// Headers returns the header row based on selected columns.
+func (s *Selector) Headers() []string {
+	selectedConfigs := s.getSelectedConfigs()
+	headers := make([]string, len(selectedConfigs))
+	for i, cfg := range selectedConfigs {
+		headers[i] = cfg.Name
+	}
+	return headers
+}
+
+// getSelectedConfigs returns configs for selected columns (or all if no selection).
+func (s *Selector) getSelectedConfigs() []Config {
+	if len(s.selected) == 0 {
+		return s.configs
+	}
+
+	// Build map for quick lookup
+	selectedMap := make(map[string]bool)
+	for _, name := range s.selected {
+		selectedMap[name] = true
+	}
+
+	// Filter configs to selected columns in original order
+	var configs []Config
+	for _, cfg := range s.configs {
+		if selectedMap[cfg.Name] {
+			configs = append(configs, cfg)
+		}
+	}
+
+	return configs
+}
+
+// evaluateTemplate evaluates a column template against data item.
+func (s *Selector) evaluateTemplate(cfg Config, data map[string]any) (string, error) {
+	// Get the pre-parsed template for this column
+	tmpl := s.templateMap.Lookup(cfg.Name)
+	if tmpl == nil {
+		return "", fmt.Errorf("%w: template %q not found", errUtils.ErrTemplateEvaluation, cfg.Name)
+	}
+
+	// Build template context from data
+	context := buildTemplateContext(data)
+
+	// Execute template
+	var buf bytes.Buffer
+	if err := tmpl.Execute(&buf, context); err != nil {
+		return "", err
+	}
+
+	return buf.String(), nil
+}
+
+// buildTemplateContext creates template context from raw data.
+// Maps common field names and makes full data available via .Raw.
+//
+//nolint:gocognit,revive,cyclop,funlen // Complexity and length from repetitive field mapping (unavoidable pattern).
+func buildTemplateContext(data map[string]any) any {
+	// Try to map to structured context for better template readability
+	ctx := make(map[string]any)
+
+	// Copy all data to .Raw for full access
+	ctx["raw"] = data
+
+	// Map common fields to top-level for convenience
+	if v, ok := data["atmos_component"]; ok {
+		ctx["atmos_component"] = v
+	}
+	if v, ok := data["atmos_stack"]; ok {
+		ctx["atmos_stack"] = v
+	}
+	if v, ok := data["atmos_component_type"]; ok {
+		ctx["atmos_component_type"] = v
+	}
+	if v, ok := data["vars"]; ok {
+		ctx["vars"] = v
+	}
+	if v, ok := data["settings"]; ok {
+		ctx["settings"] = v
+	}
+	if v, ok := data["metadata"]; ok {
+		ctx["metadata"] = v
+	}
+	if v, ok := data["env"]; ok {
+		ctx["env"] = v
+	}
+	if v, ok := data["enabled"]; ok {
+		ctx["enabled"] = v
+	}
+	if v, ok := data["locked"]; ok {
+		ctx["locked"] = v
+	}
+	if v, ok := data["abstract"]; ok {
+		ctx["abstract"] = v
+	}
+
+	// For workflow data
+	if v, ok := data["file"]; ok {
+		ctx["file"] = v
+	}
+	if v, ok := data["name"]; ok {
+		ctx["name"] = v
+	}
+	if v, ok := data["description"]; ok {
+		ctx["description"] = v
+	}
+	if v, ok := data["steps"]; ok {
+		ctx["steps"] = v
+	}
+
+	// For stack data
+	if v, ok := data["stack"]; ok {
+		ctx["stack"] = v
+	}
+	if v, ok := data["components"]; ok {
+		ctx["components"] = v
+	}
+
+	// For vendor data
+	if v, ok := data["atmos_vendor_type"]; ok {
+		ctx["atmos_vendor_type"] = v
+	}
+	if v, ok := data["atmos_vendor_file"]; ok {
+		ctx["atmos_vendor_file"] = v
+	}
+	if v, ok := data["atmos_vendor_target"]; ok {
+		ctx["atmos_vendor_target"] = v
+	}
+
+	// Also allow direct access to unmapped fields
+	for k, v := range data {
+		if _, exists := ctx[k]; !exists {
+			ctx[k] = v
+		}
+	}
+
+	return ctx
+}
+
+// BuildColumnFuncMap returns template functions for column templates.
+// These functions are safe for use in column value extraction.
+func BuildColumnFuncMap() template.FuncMap {
+	return template.FuncMap{
+		// Type conversion
+		"toString": toString,
+		"toInt":    toInt,
+		"toBool":   toBool,
+
+		// Formatting
+		"truncate": truncate,
+		"pad":      pad,
+		"upper":    strings.ToUpper,
+		"lower":    strings.ToLower,
+
+		// Data access
+		"get":   mapGet,
+		"getOr": mapGetOr,
+		"has":   mapHas,
+
+		// Collections
+		"len":   length,
+		"join":  strings.Join,
+		"split": strings.Split,
+
+		// Conditional
+		"ternary": ternary,
+	}
+}
+
+// Template function implementations.
+
+func toString(v any) string {
+	if v == nil {
+		return ""
+	}
+	return fmt.Sprintf("%v", v)
+}
+
+func toInt(v any) int {
+	switch val := v.(type) {
+	case int:
+		return val
+	case int64:
+		return int(val)
+	case float64:
+		return int(val)
+	case string:
+		var i int
+		_, _ = fmt.Sscanf(val, "%d", &i)
+		return i
+	default:
+		return 0
+	}
+}
+
+func toBool(v any) bool {
+	switch val := v.(type) {
+	case bool:
+		return val
+	case string:
+		return val == "true" || val == "yes" || val == "1"
+	case int:
+		return val != 0
+	default:
+		return false
+	}
+}
+
+func truncate(s string, length any) string {
+	// Convert length to int
+	var l int
+	switch v := length.(type) {
+	case int:
+		l = v
+	case int64:
+		l = int(v)
+	case float64:
+		l = int(v)
+	default:
+		return s
+	}
+
+	if len(s) <= l {
+		return s
+	}
+	if l <= 0 {
+		return ""
+	}
+	if l <= 3 {
+		return s[:l]
+	}
+	return s[:l-3] + "..."
+}
+
+func pad(s string, length int) string {
+	if len(s) >= length {
+		return s
+	}
+	return s + strings.Repeat(" ", length-len(s))
+}
+
+func mapGet(m map[string]any, key string) any {
+	if m == nil {
+		return nil
+	}
+	return m[key]
+}
+
+func mapGetOr(m map[string]any, key string, defaultVal any) any {
+	if m == nil {
+		return defaultVal
+	}
+	if v, ok := m[key]; ok {
+		return v
+	}
+	return defaultVal
+}
+
+func mapHas(m map[string]any, key string) bool {
+	if m == nil {
+		return false
+	}
+	_, ok := m[key]
+	return ok
+}
+
+func length(v any) int {
+	switch val := v.(type) {
+	case string:
+		return len(val)
+	case []any:
+		return len(val)
+	case map[string]any:
+		return len(val)
+	default:
+		return 0
+	}
+}
+
+func ternary(condition bool, trueVal, falseVal any) any {
+	if condition {
+		return trueVal
+	}
+	return falseVal
+}
diff --git a/pkg/list/column/column_test.go b/pkg/list/column/column_test.go
new file mode 100644
index 0000000000..5faabc864c
--- /dev/null
+++ b/pkg/list/column/column_test.go
@@ -0,0 +1,692 @@
+package column
+
+import (
+	"testing"
+	"text/template"
+ "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + errUtils "github.com/cloudposse/atmos/errors" +) + +func TestNewSelector(t *testing.T) { + tests := []struct { + name string + configs []Config + funcMap template.FuncMap + expectErr bool + errType error + }{ + { + name: "valid single column", + configs: []Config{ + {Name: "Component", Value: "{{ .atmos_component }}"}, + }, + funcMap: BuildColumnFuncMap(), + expectErr: false, + }, + { + name: "valid multiple columns", + configs: []Config{ + {Name: "Component", Value: "{{ .atmos_component }}"}, + {Name: "Stack", Value: "{{ .atmos_stack }}"}, + {Name: "Region", Value: "{{ .vars.region }}"}, + }, + funcMap: BuildColumnFuncMap(), + expectErr: false, + }, + { + name: "empty configs", + configs: []Config{}, + funcMap: BuildColumnFuncMap(), + expectErr: true, + errType: errUtils.ErrInvalidConfig, + }, + { + name: "empty column name", + configs: []Config{ + {Name: "", Value: "{{ .atmos_component }}"}, + }, + funcMap: BuildColumnFuncMap(), + expectErr: true, + errType: errUtils.ErrInvalidConfig, + }, + { + name: "empty column value", + configs: []Config{ + {Name: "Component", Value: ""}, + }, + funcMap: BuildColumnFuncMap(), + expectErr: true, + errType: errUtils.ErrInvalidConfig, + }, + { + name: "invalid template syntax", + configs: []Config{ + {Name: "Component", Value: "{{ .atmos_component "}, + }, + funcMap: BuildColumnFuncMap(), + expectErr: true, + errType: errUtils.ErrInvalidConfig, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + selector, err := NewSelector(tt.configs, tt.funcMap) + + if tt.expectErr { + require.Error(t, err) + if tt.errType != nil { + assert.ErrorIs(t, err, tt.errType) + } + assert.Nil(t, selector) + } else { + require.NoError(t, err) + assert.NotNil(t, selector) + assert.Equal(t, len(tt.configs), len(selector.configs)) + } + }) + } +} + +func TestSelector_Select(t *testing.T) { + configs := []Config{ + {Name: "Component", Value: "{{ 
.atmos_component }}"}, + {Name: "Stack", Value: "{{ .atmos_stack }}"}, + {Name: "Region", Value: "{{ .vars.region }}"}, + } + + selector, err := NewSelector(configs, BuildColumnFuncMap()) + require.NoError(t, err) + + tests := []struct { + name string + columns []string + expectErr bool + errType error + expectedLen int + }{ + { + name: "select single column", + columns: []string{"Component"}, + expectErr: false, + expectedLen: 1, + }, + { + name: "select multiple columns", + columns: []string{"Component", "Stack"}, + expectErr: false, + expectedLen: 2, + }, + { + name: "select all columns (empty slice)", + columns: []string{}, + expectErr: false, + expectedLen: 3, + }, + { + name: "select all columns (nil)", + columns: nil, + expectErr: false, + expectedLen: 3, + }, + { + name: "select non-existent column", + columns: []string{"NonExistent"}, + expectErr: true, + errType: errUtils.ErrInvalidConfig, + }, + { + name: "select mix of valid and invalid columns", + columns: []string{"Component", "NonExistent"}, + expectErr: true, + errType: errUtils.ErrInvalidConfig, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + err := selector.Select(tt.columns) + + if tt.expectErr { + require.Error(t, err) + if tt.errType != nil { + assert.ErrorIs(t, err, tt.errType) + } + } else { + require.NoError(t, err) + headers := selector.Headers() + assert.Len(t, headers, tt.expectedLen) + } + }) + } +} + +func TestSelector_Extract(t *testing.T) { + configs := []Config{ + {Name: "Component", Value: "{{ .atmos_component }}"}, + {Name: "Stack", Value: "{{ .atmos_stack }}"}, + {Name: "Enabled", Value: "{{ ternary .enabled \"✓\" \"✗\" }}"}, + } + + selector, err := NewSelector(configs, BuildColumnFuncMap()) + require.NoError(t, err) + + tests := []struct { + name string + data []map[string]any + selectColumns []string + expectedRows [][]string + expectedHeader []string + expectErr bool + }{ + { + name: "extract single row", + data: []map[string]any{ + { + 
"atmos_component": "vpc", + "atmos_stack": "plat-ue2-dev", + "enabled": true, + }, + }, + expectedHeader: []string{"Component", "Stack", "Enabled"}, + expectedRows: [][]string{ + {"vpc", "plat-ue2-dev", "✓"}, + }, + expectErr: false, + }, + { + name: "extract multiple rows", + data: []map[string]any{ + { + "atmos_component": "vpc", + "atmos_stack": "plat-ue2-dev", + "enabled": true, + }, + { + "atmos_component": "eks", + "atmos_stack": "plat-ue2-prod", + "enabled": false, + }, + }, + expectedHeader: []string{"Component", "Stack", "Enabled"}, + expectedRows: [][]string{ + {"vpc", "plat-ue2-dev", "✓"}, + {"eks", "plat-ue2-prod", "✗"}, + }, + expectErr: false, + }, + { + name: "extract empty data", + data: []map[string]any{}, + expectedHeader: []string{"Component", "Stack", "Enabled"}, + expectedRows: [][]string{}, + expectErr: false, + }, + { + name: "extract with column selection", + data: []map[string]any{ + { + "atmos_component": "vpc", + "atmos_stack": "plat-ue2-dev", + "enabled": true, + }, + }, + selectColumns: []string{"Component", "Enabled"}, + expectedHeader: []string{"Component", "Enabled"}, + expectedRows: [][]string{ + {"vpc", "✓"}, + }, + expectErr: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + if len(tt.selectColumns) > 0 { + err := selector.Select(tt.selectColumns) + require.NoError(t, err) + } else { + err := selector.Select(nil) + require.NoError(t, err) + } + + headers, rows, err := selector.Extract(tt.data) + + if tt.expectErr { + require.Error(t, err) + } else { + require.NoError(t, err) + assert.Equal(t, tt.expectedHeader, headers) + assert.Equal(t, tt.expectedRows, rows) + } + }) + } +} + +func TestSelector_Extract_NestedFields(t *testing.T) { + configs := []Config{ + {Name: "Component", Value: "{{ .atmos_component }}"}, + {Name: "Region", Value: "{{ .vars.region }}"}, + {Name: "Namespace", Value: "{{ .vars.namespace }}"}, + } + + selector, err := NewSelector(configs, BuildColumnFuncMap()) + 
require.NoError(t, err) + + data := []map[string]any{ + { + "atmos_component": "vpc", + "vars": map[string]any{ + "region": "us-east-2", + "namespace": "platform", + }, + }, + } + + headers, rows, err := selector.Extract(data) + require.NoError(t, err) + assert.Equal(t, []string{"Component", "Region", "Namespace"}, headers) + assert.Equal(t, [][]string{{"vpc", "us-east-2", "platform"}}, rows) +} + +func TestSelector_Extract_TemplateFunctions(t *testing.T) { + configs := []Config{ + {Name: "Component", Value: "{{ .atmos_component | upper }}"}, + {Name: "Description", Value: "{{ truncate .description 10 }}"}, + {Name: "Status", Value: "{{ ternary .enabled \"active\" \"inactive\" }}"}, + {Name: "Count", Value: "{{ toString (len .items) }}"}, + } + + selector, err := NewSelector(configs, BuildColumnFuncMap()) + require.NoError(t, err) + + data := []map[string]any{ + { + "atmos_component": "vpc", + "description": "This is a very long description that should be truncated", + "enabled": true, + "items": []any{"a", "b", "c"}, + }, + } + + _, rows, err := selector.Extract(data) + require.NoError(t, err) + assert.Equal(t, [][]string{{"VPC", "This is...", "active", "3"}}, rows) +} + +func TestBuildColumnFuncMap(t *testing.T) { + funcMap := BuildColumnFuncMap() + + // Verify expected functions exist + expectedFuncs := []string{ + "toString", "toInt", "toBool", + "truncate", "pad", "upper", "lower", + "get", "getOr", "has", + "len", "join", "split", + "ternary", + } + + for _, funcName := range expectedFuncs { + _, ok := funcMap[funcName] + assert.True(t, ok, "Function %q should exist in FuncMap", funcName) + } +} + +func TestTemplateFunctions_ToString(t *testing.T) { + tests := []struct { + name string + input any + expected string + }{ + {"nil", nil, ""}, + {"string", "hello", "hello"}, + {"int", 42, "42"}, + {"bool", true, "true"}, + {"float", 3.14, "3.14"}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := toString(tt.input) + 
assert.Equal(t, tt.expected, result) + }) + } +} + +func TestTemplateFunctions_ToInt(t *testing.T) { + tests := []struct { + name string + input any + expected int + }{ + {"int", 42, 42}, + {"int64", int64(42), 42}, + {"float64", float64(42.7), 42}, + {"string", "42", 42}, + {"invalid string", "abc", 0}, + {"nil", nil, 0}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := toInt(tt.input) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestTemplateFunctions_ToBool(t *testing.T) { + tests := []struct { + name string + input any + expected bool + }{ + {"bool true", true, true}, + {"bool false", false, false}, + {"string true", "true", true}, + {"string yes", "yes", true}, + {"string 1", "1", true}, + {"string false", "false", false}, + {"int non-zero", 42, true}, + {"int zero", 0, false}, + {"nil", nil, false}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := toBool(tt.input) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestTemplateFunctions_Truncate(t *testing.T) { + tests := []struct { + name string + input string + length int + expected string + }{ + {"no truncation needed", "hello", 10, "hello"}, + {"truncate with ellipsis", "hello world", 8, "hello..."}, + {"truncate short", "hello", 3, "hel"}, // length <= 3, no ellipsis + {"empty string", "", 5, ""}, + {"exact length", "hello", 5, "hello"}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := truncate(tt.input, tt.length) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestTemplateFunctions_Pad(t *testing.T) { + tests := []struct { + name string + input string + length int + expected string + }{ + {"no padding needed", "hello", 5, "hello"}, + {"padding needed", "hi", 5, "hi "}, + {"already longer", "hello world", 5, "hello world"}, + {"empty string", "", 3, " "}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := pad(tt.input, tt.length) + 
assert.Equal(t, tt.expected, result) + }) + } +} + +func TestTemplateFunctions_MapGet(t *testing.T) { + m := map[string]any{ + "key1": "value1", + "key2": 42, + } + + tests := []struct { + name string + m map[string]any + key string + expected any + }{ + {"existing key", m, "key1", "value1"}, + {"existing key int", m, "key2", 42}, + {"non-existent key", m, "key3", nil}, + {"nil map", nil, "key", nil}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := mapGet(tt.m, tt.key) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestTemplateFunctions_MapGetOr(t *testing.T) { + m := map[string]any{ + "key1": "value1", + } + + tests := []struct { + name string + m map[string]any + key string + defaultVal any + expected any + }{ + {"existing key", m, "key1", "default", "value1"}, + {"non-existent key", m, "key2", "default", "default"}, + {"nil map", nil, "key", "default", "default"}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := mapGetOr(tt.m, tt.key, tt.defaultVal) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestTemplateFunctions_MapHas(t *testing.T) { + m := map[string]any{ + "key1": "value1", + } + + tests := []struct { + name string + m map[string]any + key string + expected bool + }{ + {"existing key", m, "key1", true}, + {"non-existent key", m, "key2", false}, + {"nil map", nil, "key", false}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := mapHas(tt.m, tt.key) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestTemplateFunctions_Length(t *testing.T) { + tests := []struct { + name string + input any + expected int + }{ + {"string", "hello", 5}, + {"empty string", "", 0}, + {"slice", []any{"a", "b", "c"}, 3}, + {"map", map[string]any{"a": 1, "b": 2}, 2}, + {"nil", nil, 0}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := length(tt.input) + assert.Equal(t, tt.expected, result) + }) + } +} + 
+func TestTemplateFunctions_Ternary(t *testing.T) { + tests := []struct { + name string + condition bool + trueVal any + falseVal any + expected any + }{ + {"true condition", true, "yes", "no", "yes"}, + {"false condition", false, "yes", "no", "no"}, + {"true with numbers", true, 1, 0, 1}, + {"false with numbers", false, 1, 0, 0}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := ternary(tt.condition, tt.trueVal, tt.falseVal) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestTemplateFunction_Pad_EdgeCases(t *testing.T) { + tests := []struct { + name string + input string + length any + expected string + }{ + {"int length", "hi", 5, "hi "}, + {"int64 length", "hi", int64(5), "hi "}, + {"float64 length", "hi", float64(5.0), "hi "}, + {"invalid length type", "hi", "invalid", "hi"}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := pad(tt.input, toInt(tt.length)) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestSelector_Extract_MissingFields(t *testing.T) { + configs := []Config{ + {Name: "Component", Value: "{{ .atmos_component }}"}, + {Name: "Missing", Value: "{{ .nonexistent }}"}, + } + + selector, err := NewSelector(configs, BuildColumnFuncMap()) + require.NoError(t, err) + + data := []map[string]any{ + { + "atmos_component": "vpc", + }, + } + + // Should handle missing fields gracefully (Go template returns "") + headers, rows, err := selector.Extract(data) + require.NoError(t, err) + assert.Equal(t, []string{"Component", "Missing"}, headers) + assert.Equal(t, [][]string{{"vpc", ""}}, rows) +} + +func TestSelector_Headers(t *testing.T) { + configs := []Config{ + {Name: "Col1", Value: "{{ .field1 }}"}, + {Name: "Col2", Value: "{{ .field2 }}"}, + {Name: "Col3", Value: "{{ .field3 }}"}, + } + + selector, err := NewSelector(configs, BuildColumnFuncMap()) + require.NoError(t, err) + + // Test all headers + headers := selector.Headers() + assert.Equal(t, []string{"Col1", 
"Col2", "Col3"}, headers) + + // Test selected headers + err = selector.Select([]string{"Col1", "Col3"}) + require.NoError(t, err) + headers = selector.Headers() + assert.Equal(t, []string{"Col1", "Col3"}, headers) +} + +func TestBuildTemplateContext(t *testing.T) { + tests := []struct { + name string + data map[string]any + expected map[string]any + }{ + { + name: "component data", + data: map[string]any{ + "atmos_component": "vpc", + "atmos_stack": "plat-ue2-dev", + "atmos_component_type": "real", + "vars": map[string]any{ + "region": "us-east-2", + }, + "enabled": true, + }, + expected: map[string]any{ + "atmos_component": "vpc", + "atmos_stack": "plat-ue2-dev", + "atmos_component_type": "real", + "vars": map[string]any{ + "region": "us-east-2", + }, + "enabled": true, + }, + }, + { + name: "workflow data", + data: map[string]any{ + "file": "workflows/deploy.yaml", + "name": "deploy", + "description": "Deploy workflow", + }, + expected: map[string]any{ + "file": "workflows/deploy.yaml", + "name": "deploy", + "description": "Deploy workflow", + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := buildTemplateContext(tt.data) + resultMap, ok := result.(map[string]any) + require.True(t, ok, "Result should be a map") + + // Verify expected fields are present + for key, expectedVal := range tt.expected { + actualVal, exists := resultMap[key] + assert.True(t, exists, "Key %q should exist in context", key) + assert.Equal(t, expectedVal, actualVal, "Value for key %q should match", key) + } + + // Verify raw field exists and contains all data + raw, exists := resultMap["raw"] + assert.True(t, exists, "raw field should exist in context") + assert.Equal(t, tt.data, raw, "raw field should contain all original data") + }) + } +} diff --git a/pkg/list/extract/components.go b/pkg/list/extract/components.go new file mode 100644 index 0000000000..a3b584648e --- /dev/null +++ b/pkg/list/extract/components.go @@ -0,0 +1,166 @@ +package 
extract + +import ( + "fmt" + + errUtils "github.com/cloudposse/atmos/errors" + perf "github.com/cloudposse/atmos/pkg/perf" +) + +const ( + // Component metadata field names. + metadataEnabled = "enabled" + metadataLocked = "locked" +) + +// Components transforms stacksMap into structured component data. +// Returns []map[string]any suitable for the renderer pipeline. +func Components(stacksMap map[string]any) ([]map[string]any, error) { + defer perf.Track(nil, "list.extract.Components")() + + if stacksMap == nil { + return nil, errUtils.ErrStackNotFound + } + + var components []map[string]any + + for stackName, stackData := range stacksMap { + stackMap, ok := stackData.(map[string]any) + if !ok { + continue // Skip invalid stacks. + } + + componentsMap, ok := stackMap["components"].(map[string]any) + if !ok { + continue // Skip stacks without components. + } + + // Process each component type. + components = append(components, extractComponentType(stackName, "terraform", componentsMap)...) + components = append(components, extractComponentType(stackName, "helmfile", componentsMap)...) + components = append(components, extractComponentType(stackName, "packer", componentsMap)...) + + // TODO: Add support for plugin component types from schema.Components.Plugins + } + + return components, nil +} + +// extractComponentType extracts components of a specific type from a stack. 
+func extractComponentType(stackName, componentType string, componentsMap map[string]any) []map[string]any { + defer perf.Track(nil, "list.extract.extractComponentType")() + + typeComponents, ok := componentsMap[componentType].(map[string]any) + if !ok { + return nil + } + + var result []map[string]any + for componentName, componentData := range typeComponents { + comp := buildBaseComponent(componentName, stackName, componentType) + enrichComponentWithMetadata(comp, componentData) + result = append(result, comp) + } + + return result +} + +// buildBaseComponent creates the base component map with required fields. +func buildBaseComponent(componentName, stackName, componentType string) map[string]any { + defer perf.Track(nil, "list.extract.buildBaseComponent")() + + return map[string]any{ + "component": componentName, + "stack": stackName, + "type": componentType, + } +} + +// enrichComponentWithMetadata adds metadata fields to a component map. +func enrichComponentWithMetadata(comp map[string]any, componentData any) { + defer perf.Track(nil, "list.extract.enrichComponentWithMetadata")() + + compMap, ok := componentData.(map[string]any) + if !ok { + return + } + + metadata, hasMetadata := compMap["metadata"].(map[string]any) + if hasMetadata { + comp["metadata"] = metadata + extractMetadataFields(comp, metadata) + } else { + setDefaultMetadataFields(comp) + } + + comp["data"] = compMap +} + +// extractMetadataFields extracts common metadata fields to top level. +func extractMetadataFields(comp map[string]any, metadata map[string]any) { + defer perf.Track(nil, "list.extract.extractMetadataFields")() + + comp[metadataEnabled] = getBoolWithDefault(metadata, metadataEnabled, true) + comp[metadataLocked] = getBoolWithDefault(metadata, metadataLocked, false) + comp["component_type"] = getStringWithDefault(metadata, "type", "real") +} + +// setDefaultMetadataFields sets default values for metadata fields. 
+func setDefaultMetadataFields(comp map[string]any) { + defer perf.Track(nil, "list.extract.setDefaultMetadataFields")() + + comp[metadataEnabled] = true + comp[metadataLocked] = false + comp["component_type"] = "real" +} + +// getBoolWithDefault safely extracts a bool value or returns the default. +func getBoolWithDefault(m map[string]any, key string, defaultValue bool) bool { + defer perf.Track(nil, "list.extract.getBoolWithDefault")() + + if val, ok := m[key].(bool); ok { + return val + } + return defaultValue +} + +// getStringWithDefault safely extracts a string value or returns the default. +func getStringWithDefault(m map[string]any, key string, defaultValue string) string { + defer perf.Track(nil, "list.extract.getStringWithDefault")() + + if val, ok := m[key].(string); ok { + return val + } + return defaultValue +} + +// ComponentsForStack extracts components for a specific stack only. +func ComponentsForStack(stackName string, stacksMap map[string]any) ([]map[string]any, error) { + defer perf.Track(nil, "list.extract.ComponentsForStack")() + + stackData, ok := stacksMap[stackName] + if !ok { + return nil, fmt.Errorf("%w: %s", errUtils.ErrStackNotFound, stackName) + } + + stackMap, ok := stackData.(map[string]any) + if !ok { + return nil, errUtils.ErrParseStacks + } + + componentsMap, ok := stackMap["components"].(map[string]any) + if !ok { + return nil, errUtils.ErrParseComponents + } + + var components []map[string]any + components = append(components, extractComponentType(stackName, "terraform", componentsMap)...) + components = append(components, extractComponentType(stackName, "helmfile", componentsMap)...) + components = append(components, extractComponentType(stackName, "packer", componentsMap)...) 
+ + if len(components) == 0 { + return nil, errUtils.ErrNoComponentsFound + } + + return components, nil +} diff --git a/pkg/list/extract/components_test.go b/pkg/list/extract/components_test.go new file mode 100644 index 0000000000..f64e7d6b3d --- /dev/null +++ b/pkg/list/extract/components_test.go @@ -0,0 +1,245 @@ +package extract + +import ( + "testing" + + errUtils "github.com/cloudposse/atmos/errors" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestComponents(t *testing.T) { + stacksMap := map[string]any{ + "plat-ue2-dev": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "vpc": map[string]any{ + "metadata": map[string]any{ + "enabled": true, + "locked": false, + "type": "real", + }, + }, + "eks": map[string]any{ + "metadata": map[string]any{ + "enabled": false, + "locked": true, + "type": "real", + }, + }, + }, + "helmfile": map[string]any{ + "ingress": map[string]any{ + "metadata": map[string]any{ + "enabled": true, + "type": "abstract", + }, + }, + }, + }, + }, + "plat-ue2-prod": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "vpc": map[string]any{ + "metadata": map[string]any{ + "enabled": true, + }, + }, + }, + }, + }, + } + + components, err := Components(stacksMap) + require.NoError(t, err) + assert.Len(t, components, 4) // vpc, eks, ingress, vpc + + // Verify structure of extracted data. 
+ for _, comp := range components { + assert.Contains(t, comp, "component") + assert.Contains(t, comp, "stack") + assert.Contains(t, comp, "type") + assert.Contains(t, comp, "enabled") + assert.Contains(t, comp, "locked") + assert.Contains(t, comp, "component_type") + } +} + +func TestComponents_Nil(t *testing.T) { + _, err := Components(nil) + assert.ErrorIs(t, err, errUtils.ErrStackNotFound) +} + +func TestComponents_EmptyMap(t *testing.T) { + components, err := Components(map[string]any{}) + require.NoError(t, err) + assert.Empty(t, components) +} + +func TestComponents_InvalidStack(t *testing.T) { + stacksMap := map[string]any{ + "invalid": "not a map", + } + + components, err := Components(stacksMap) + require.NoError(t, err) + assert.Empty(t, components) // Skips invalid stacks. +} + +func TestComponents_NoComponents(t *testing.T) { + stacksMap := map[string]any{ + "plat-ue2-dev": map[string]any{ + "vars": map[string]any{}, + }, + } + + components, err := Components(stacksMap) + require.NoError(t, err) + assert.Empty(t, components) +} + +func TestComponents_DefaultValues(t *testing.T) { + stacksMap := map[string]any{ + "test-stack": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "vpc": map[string]any{}, // No metadata. 
+ }, + }, + }, + } + + components, err := Components(stacksMap) + require.NoError(t, err) + require.Len(t, components, 1) + + comp := components[0] + assert.Equal(t, "vpc", comp["component"]) + assert.Equal(t, "test-stack", comp["stack"]) + assert.Equal(t, "terraform", comp["type"]) + assert.Equal(t, true, comp["enabled"]) + assert.Equal(t, false, comp["locked"]) + assert.Equal(t, "real", comp["component_type"]) +} + +func TestComponentsForStack(t *testing.T) { + stacksMap := map[string]any{ + "plat-ue2-dev": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "vpc": map[string]any{}, + "eks": map[string]any{}, + }, + }, + }, + "plat-ue2-prod": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "rds": map[string]any{}, + }, + }, + }, + } + + components, err := ComponentsForStack("plat-ue2-dev", stacksMap) + require.NoError(t, err) + assert.Len(t, components, 2) + + // Verify only dev stack components. + for _, comp := range components { + assert.Equal(t, "plat-ue2-dev", comp["stack"]) + } +} + +func TestComponentsForStack_NotFound(t *testing.T) { + stacksMap := map[string]any{ + "plat-ue2-dev": map[string]any{}, + } + + _, err := ComponentsForStack("nonexistent", stacksMap) + assert.ErrorIs(t, err, errUtils.ErrStackNotFound) +} + +func TestComponentsForStack_InvalidData(t *testing.T) { + stacksMap := map[string]any{ + "test": "invalid", + } + + _, err := ComponentsForStack("test", stacksMap) + assert.ErrorIs(t, err, errUtils.ErrParseStacks) +} + +func TestComponentsForStack_NoComponents(t *testing.T) { + stacksMap := map[string]any{ + "test": map[string]any{ + "vars": map[string]any{}, + }, + } + + _, err := ComponentsForStack("test", stacksMap) + assert.ErrorIs(t, err, errUtils.ErrParseComponents) +} + +func TestComponentsForStack_EmptyComponents(t *testing.T) { + stacksMap := map[string]any{ + "test": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{}, + "helmfile": 
map[string]any{}, + }, + }, + } + + _, err := ComponentsForStack("test", stacksMap) + assert.ErrorIs(t, err, errUtils.ErrNoComponentsFound) +} + +func TestExtractComponentType(t *testing.T) { + componentsMap := map[string]any{ + "terraform": map[string]any{ + "vpc": map[string]any{ + "metadata": map[string]any{ + "enabled": true, + }, + }, + "eks": map[string]any{}, + }, + } + + components := extractComponentType("test-stack", "terraform", componentsMap) + assert.Len(t, components, 2) + + // Find vpc component. + var vpc map[string]any + for _, comp := range components { + if comp["component"] == "vpc" { + vpc = comp + break + } + } + + require.NotNil(t, vpc) + assert.Equal(t, "vpc", vpc["component"]) + assert.Equal(t, "test-stack", vpc["stack"]) + assert.Equal(t, "terraform", vpc["type"]) + assert.Equal(t, true, vpc["enabled"]) +} + +func TestExtractComponentType_InvalidType(t *testing.T) { + componentsMap := map[string]any{ + "terraform": "not a map", + } + + components := extractComponentType("test-stack", "terraform", componentsMap) + assert.Nil(t, components) +} + +func TestExtractComponentType_MissingType(t *testing.T) { + componentsMap := map[string]any{ + "helmfile": map[string]any{}, + } + + components := extractComponentType("test-stack", "terraform", componentsMap) + assert.Nil(t, components) +} diff --git a/pkg/list/extract/instances.go b/pkg/list/extract/instances.go new file mode 100644 index 0000000000..87a64a09d7 --- /dev/null +++ b/pkg/list/extract/instances.go @@ -0,0 +1,29 @@ +package extract + +import ( + "github.com/cloudposse/atmos/pkg/schema" +) + +// Instances transforms a schema.Instance slice into []map[string]any for the renderer. +// Converts structured Instance objects into flat maps accessible by column templates. 
+func Instances(instances []schema.Instance) []map[string]any { + result := make([]map[string]any, 0, len(instances)) + + for _, instance := range instances { + // Create flat map with all instance fields accessible to templates. + item := map[string]any{ + "component": instance.Component, + "stack": instance.Stack, + "component_type": instance.ComponentType, + "vars": instance.Vars, + "settings": instance.Settings, + "env": instance.Env, + "backend": instance.Backend, + "metadata": instance.Metadata, + } + + result = append(result, item) + } + + return result +} diff --git a/pkg/list/extract/instances_test.go b/pkg/list/extract/instances_test.go new file mode 100644 index 0000000000..04d34bea5c --- /dev/null +++ b/pkg/list/extract/instances_test.go @@ -0,0 +1,124 @@ +package extract + +import ( + "testing" + + "github.com/stretchr/testify/assert" + + "github.com/cloudposse/atmos/pkg/schema" +) + +func TestInstances(t *testing.T) { + testCases := []struct { + name string + instances []schema.Instance + expected []map[string]any + }{ + { + name: "empty instances", + instances: []schema.Instance{}, + expected: []map[string]any{}, + }, + { + name: "single instance", + instances: []schema.Instance{ + { + Component: "vpc", + Stack: "plat-ue2-dev", + ComponentType: "terraform", + Vars: map[string]any{ + "tenant": "plat", + "environment": "ue2", + "stage": "dev", + }, + Settings: map[string]any{}, + Env: map[string]any{}, + Backend: map[string]any{}, + Metadata: map[string]any{}, + }, + }, + expected: []map[string]any{ + { + "component": "vpc", + "stack": "plat-ue2-dev", + "component_type": "terraform", + "vars": map[string]any{ + "tenant": "plat", + "environment": "ue2", + "stage": "dev", + }, + "settings": map[string]any{}, + "env": map[string]any{}, + "backend": map[string]any{}, + "metadata": map[string]any{}, + }, + }, + }, + { + name: "multiple instances", + instances: []schema.Instance{ + { + Component: "vpc", + Stack: "plat-ue2-dev", + ComponentType: "terraform", 
+ Vars: map[string]any{ + "tenant": "plat", + "stage": "dev", + }, + Settings: map[string]any{}, + Env: map[string]any{}, + Backend: map[string]any{}, + Metadata: map[string]any{}, + }, + { + Component: "eks", + Stack: "plat-ue2-prod", + ComponentType: "terraform", + Vars: map[string]any{ + "tenant": "plat", + "stage": "prod", + }, + Settings: map[string]any{}, + Env: map[string]any{}, + Backend: map[string]any{}, + Metadata: map[string]any{}, + }, + }, + expected: []map[string]any{ + { + "component": "vpc", + "stack": "plat-ue2-dev", + "component_type": "terraform", + "vars": map[string]any{ + "tenant": "plat", + "stage": "dev", + }, + "settings": map[string]any{}, + "env": map[string]any{}, + "backend": map[string]any{}, + "metadata": map[string]any{}, + }, + { + "component": "eks", + "stack": "plat-ue2-prod", + "component_type": "terraform", + "vars": map[string]any{ + "tenant": "plat", + "stage": "prod", + }, + "settings": map[string]any{}, + "env": map[string]any{}, + "backend": map[string]any{}, + "metadata": map[string]any{}, + }, + }, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + result := Instances(tc.instances) + assert.Equal(t, tc.expected, result) + }) + } +} diff --git a/pkg/list/extract/metadata.go b/pkg/list/extract/metadata.go new file mode 100644 index 0000000000..ae6ae7b537 --- /dev/null +++ b/pkg/list/extract/metadata.go @@ -0,0 +1,182 @@ +package extract + +import ( + "github.com/charmbracelet/lipgloss" + + "github.com/cloudposse/atmos/pkg/perf" + "github.com/cloudposse/atmos/pkg/schema" + "github.com/cloudposse/atmos/pkg/ui/theme" +) + +// getStatusIndicator returns a colored dot indicator based on enabled/locked state. +// - Gray (●) if enabled: false (disabled). +// - Red (●) if locked: true. +// - Green (●) if enabled: true and not locked. +func getStatusIndicator(enabled, locked bool) string { + const statusDot = "●" + + switch { + case locked: + // Red for locked - use theme error color. 
+ return lipgloss.NewStyle().Foreground(lipgloss.Color(theme.GetErrorColor())).Render(statusDot) + case enabled: + // Green for enabled - use theme success color. + return lipgloss.NewStyle().Foreground(lipgloss.Color(theme.GetSuccessColor())).Render(statusDot) + default: + // Gray for disabled - use theme muted color. + return lipgloss.NewStyle().Foreground(lipgloss.Color(theme.ColorDarkGray)).Render(statusDot) + } +} + +// instanceMetadata holds extracted metadata fields from a schema.Instance. +type instanceMetadata struct { + metadataType string + enabled bool + locked bool + componentVal string + inherits string + description string + componentFolder string + status string +} + +// Metadata transforms a slice of schema.Instance into []map[string]any for the renderer. +// It extracts metadata fields and makes them accessible to column templates. +func Metadata(instances []schema.Instance) []map[string]any { + defer perf.Track(nil, "extract.Metadata")() + + result := make([]map[string]any, 0, len(instances)) + + for i := range instances { + metadata := extractInstanceMetadata(&instances[i]) + item := buildMetadataMap(&instances[i], &metadata) + result = append(result, item) + } + + return result +} + +// extractInstanceMetadata extracts and processes metadata fields from an instance. +func extractInstanceMetadata(instance *schema.Instance) instanceMetadata { + metadata := instanceMetadata{ + metadataType: getMetadataType(instance), + enabled: getEnabledStatus(instance), + locked: getLockedStatus(instance), + componentVal: getComponentValue(instance), + inherits: getInheritsString(instance), + description: getDescription(instance), + } + + metadata.componentFolder = determineComponentFolder(instance.Component, metadata.componentVal) + metadata.status = getStatusIndicator(metadata.enabled, metadata.locked) + + return metadata +} + +// getMetadataType extracts the metadata type, defaulting to "real". 
+func getMetadataType(instance *schema.Instance) string { + if val, ok := instance.Metadata["type"].(string); ok { + return val + } + return "real" +} + +// getEnabledStatus extracts the enabled status, defaulting to true. +func getEnabledStatus(instance *schema.Instance) bool { + if val, ok := instance.Metadata[metadataEnabled].(bool); ok { + return val + } + return true +} + +// getLockedStatus extracts the locked status. +func getLockedStatus(instance *schema.Instance) bool { + if val, ok := instance.Metadata[metadataLocked].(bool); ok { + return val + } + return false +} + +// getComponentValue extracts the component value from metadata. +func getComponentValue(instance *schema.Instance) string { + if val, ok := instance.Metadata["component"].(string); ok { + return val + } + return "" +} + +// getInheritsString converts the inherits array to a comma-separated string. +func getInheritsString(instance *schema.Instance) string { + val, ok := instance.Metadata["inherits"].([]interface{}) + if !ok { + return "" + } + + inheritsSlice := convertToStringSlice(val) + return joinWithComma(inheritsSlice) +} + +// convertToStringSlice converts []interface{} to []string. +func convertToStringSlice(values []interface{}) []string { + result := make([]string, 0, len(values)) + for _, v := range values { + if str, ok := v.(string); ok { + result = append(result, str) + } + } + return result +} + +// joinWithComma joins a string slice with comma separators. +func joinWithComma(values []string) string { + if len(values) == 0 { + return "" + } + + result := "" + for i, s := range values { + if i > 0 { + result += ", " + } + result += s + } + return result +} + +// getDescription extracts the description from metadata. +func getDescription(instance *schema.Instance) string { + if val, ok := instance.Metadata["description"].(string); ok { + return val + } + return "" +} + +// determineComponentFolder determines the actual component folder. 
+// If componentVal is set, use it (base component); otherwise use component name. +func determineComponentFolder(component, componentVal string) string { + if componentVal != "" { + return componentVal + } + return component +} + +// buildMetadataMap creates a flat map with all fields accessible to templates. +func buildMetadataMap(instance *schema.Instance, metadata *instanceMetadata) map[string]any { + return map[string]any{ + "status": metadata.status, // Colored status dot (●) + "stack": instance.Stack, + "component": instance.Component, + "component_type": instance.ComponentType, + "component_folder": metadata.componentFolder, // The actual component folder name + "type": metadata.metadataType, + "enabled": metadata.enabled, + "locked": metadata.locked, + "component_base": metadata.componentVal, + "inherits": metadata.inherits, + "description": metadata.description, + "metadata": instance.Metadata, // Full metadata for advanced filtering + "vars": instance.Vars, // Expose vars for template access + "settings": instance.Settings, // Expose settings for template access + "env": instance.Env, // Expose env for template access + } +} diff --git a/pkg/list/extract/metadata_test.go b/pkg/list/extract/metadata_test.go new file mode 100644 index 0000000000..9b43a005b2 --- /dev/null +++ b/pkg/list/extract/metadata_test.go @@ -0,0 +1,332 @@ +package extract + +import ( + "testing" + + "github.com/stretchr/testify/assert" + + "github.com/cloudposse/atmos/pkg/schema" +) + +func TestMetadata(t *testing.T) { + testCases := []struct { + name string + instances []schema.Instance + expected []map[string]any + }{ + { + name: "empty instances", + instances: []schema.Instance{}, + expected: []map[string]any{}, + }, + { + name: "single instance with metadata", + instances: []schema.Instance{ + { + Component: "vpc", + Stack: "plat-ue2-dev", + ComponentType: "terraform", + Metadata: map[string]any{ + "type": "real", + "enabled": true, + "locked": false, + "component": "vpc-base", + 
"inherits": []interface{}{"vpc/defaults"}, + "description": "VPC infrastructure", + }, + }, + }, + expected: []map[string]any{ + { + "stack": "plat-ue2-dev", + "component": "vpc", + "component_type": "terraform", + "component_folder": "vpc-base", + "type": "real", + "enabled": true, + "locked": false, + "component_base": "vpc-base", + "inherits": "vpc/defaults", + "description": "VPC infrastructure", + "metadata": map[string]any{ + "type": "real", + "enabled": true, + "locked": false, + "component": "vpc-base", + "inherits": []interface{}{"vpc/defaults"}, + "description": "VPC infrastructure", + }, + "vars": map[string]any(nil), + "settings": map[string]any(nil), + "env": map[string]any(nil), + }, + }, + }, + { + name: "instance with multiple inherits", + instances: []schema.Instance{ + { + Component: "eks", + Stack: "plat-ue2-prod", + ComponentType: "terraform", + Metadata: map[string]any{ + "type": "real", + "enabled": true, + "locked": true, + "component": "eks-base", + "inherits": []interface{}{"eks/defaults", "eks/prod-overrides"}, + "description": "EKS cluster", + }, + }, + }, + expected: []map[string]any{ + { + "stack": "plat-ue2-prod", + "component": "eks", + "component_type": "terraform", + "component_folder": "eks-base", + "type": "real", + "enabled": true, + "locked": true, + "component_base": "eks-base", + "inherits": "eks/defaults, eks/prod-overrides", + "description": "EKS cluster", + "metadata": map[string]any{ + "type": "real", + "enabled": true, + "locked": true, + "component": "eks-base", + "inherits": []interface{}{"eks/defaults", "eks/prod-overrides"}, + "description": "EKS cluster", + }, + "vars": map[string]any(nil), + "settings": map[string]any(nil), + "env": map[string]any(nil), + }, + }, + }, + { + name: "instance with minimal metadata", + instances: []schema.Instance{ + { + Component: "minimal", + Stack: "test", + ComponentType: "terraform", + Metadata: map[string]any{}, + }, + }, + expected: []map[string]any{ + { + "stack": "test", + 
"component": "minimal", + "component_type": "terraform", + "component_folder": "minimal", // Uses component name when metadata.component is not set. + "type": "real", // Defaults to "real" since abstract components are filtered. + "enabled": true, // Defaults to true. + "locked": false, + "component_base": "", + "inherits": "", + "description": "", + "metadata": map[string]any{}, + "vars": map[string]any(nil), + "settings": map[string]any(nil), + "env": map[string]any(nil), + }, + }, + }, + { + name: "multiple instances with mixed metadata", + instances: []schema.Instance{ + { + Component: "vpc", + Stack: "plat-ue2-dev", + ComponentType: "terraform", + Metadata: map[string]any{ + "type": "real", + "enabled": true, + "description": "Development VPC", + }, + }, + { + Component: "eks", + Stack: "plat-ue2-prod", + ComponentType: "terraform", + Metadata: map[string]any{ + "type": "real", + "enabled": true, + "locked": true, + "component": "eks-base", + }, + }, + }, + expected: []map[string]any{ + { + "stack": "plat-ue2-dev", + "component": "vpc", + "component_type": "terraform", + "component_folder": "vpc", // Uses component name when metadata.component is not set. 
+ "type": "real", + "enabled": true, + "locked": false, + "component_base": "", + "inherits": "", + "description": "Development VPC", + "metadata": map[string]any{ + "type": "real", + "enabled": true, + "description": "Development VPC", + }, + "vars": map[string]any(nil), + "settings": map[string]any(nil), + "env": map[string]any(nil), + }, + { + "stack": "plat-ue2-prod", + "component": "eks", + "component_type": "terraform", + "component_folder": "eks-base", + "type": "real", + "enabled": true, + "locked": true, + "component_base": "eks-base", + "inherits": "", + "description": "", + "metadata": map[string]any{ + "type": "real", + "enabled": true, + "locked": true, + "component": "eks-base", + }, + "vars": map[string]any(nil), + "settings": map[string]any(nil), + "env": map[string]any(nil), + }, + }, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + result := Metadata(tc.instances) + + // Check length matches. + assert.Len(t, result, len(tc.expected)) + + // Check each item's fields (excluding status which contains ANSI codes). + for i := range result { + if i < len(tc.expected) { + // Verify status field exists and is non-empty. + assert.Contains(t, result[i], "status") + assert.NotEmpty(t, result[i]["status"]) + + // Check all other fields match expected. 
+ for key, expectedVal := range tc.expected[i] { + if key != "status" { + assert.Equal(t, expectedVal, result[i][key], "mismatch for key %s", key) + } + } + } + } + }) + } +} + +func TestMetadata_IncludesVarsSettingsEnv(t *testing.T) { + instances := []schema.Instance{ + { + Component: "vpc", + Stack: "plat-ue2-dev", + ComponentType: "terraform", + Metadata: map[string]any{ + "type": "real", + "enabled": true, + "description": "VPC infrastructure", + }, + Vars: map[string]any{ + "region": "us-east-2", + "environment": "dev", + "tags": map[string]string{ + "Team": "platform", + "Env": "dev", + }, + }, + Settings: map[string]any{ + "spacelift": map[string]any{ + "workspace_enabled": true, + }, + }, + Env: map[string]any{ + "AWS_REGION": "us-east-2", + }, + }, + } + + result := Metadata(instances) + + assert.Len(t, result, 1) + + // Verify status is included. + assert.Contains(t, result[0], "status") + assert.NotEmpty(t, result[0]["status"]) + + // Verify vars are included + assert.Contains(t, result[0], "vars") + vars := result[0]["vars"].(map[string]any) + assert.Equal(t, "us-east-2", vars["region"]) + assert.Equal(t, "dev", vars["environment"]) + + // Verify settings are included + assert.Contains(t, result[0], "settings") + settings := result[0]["settings"].(map[string]any) + spacelift := settings["spacelift"].(map[string]any) + assert.Equal(t, true, spacelift["workspace_enabled"]) + + // Verify env is included + assert.Contains(t, result[0], "env") + env := result[0]["env"].(map[string]any) + assert.Equal(t, "us-east-2", env["AWS_REGION"]) +} + +func TestGetStatusIndicator(t *testing.T) { + tests := []struct { + name string + enabled bool + locked bool + contains string // Check if output contains the dot character + }{ + { + name: "enabled and not locked shows green", + enabled: true, + locked: false, + contains: "●", + }, + { + name: "locked shows red", + enabled: true, + locked: true, + contains: "●", + }, + { + name: "disabled shows gray", + enabled: false, + 
locked: false, + contains: "●", + }, + { + name: "disabled and locked shows red (locked takes precedence)", + enabled: false, + locked: true, + contains: "●", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := getStatusIndicator(tt.enabled, tt.locked) + // Always contains the status dot. + assert.Contains(t, result, tt.contains) + // Result is non-empty (may or may not have ANSI codes depending on TTY). + assert.NotEmpty(t, result) + }) + } +} diff --git a/pkg/list/extract/stacks.go b/pkg/list/extract/stacks.go new file mode 100644 index 0000000000..91b3884eac --- /dev/null +++ b/pkg/list/extract/stacks.go @@ -0,0 +1,78 @@ +package extract + +import ( + "fmt" + + errUtils "github.com/cloudposse/atmos/errors" + "github.com/cloudposse/atmos/pkg/perf" +) + +// Stacks transforms stacksMap into structured stack data. +// It returns []map[string]any suitable for the renderer pipeline. +func Stacks(stacksMap map[string]any) ([]map[string]any, error) { + defer perf.Track(nil, "extract.Stacks")() + + if stacksMap == nil { + return nil, errUtils.ErrStackNotFound + } + + var stacks []map[string]any + + for stackName := range stacksMap { + stack := map[string]any{ + "stack": stackName, + } + + stacks = append(stacks, stack) + } + + return stacks, nil +} + +// StacksForComponent extracts stacks that contain a specific component. +func StacksForComponent(componentName string, stacksMap map[string]any) ([]map[string]any, error) { + defer perf.Track(nil, "extract.StacksForComponent")() + + if stacksMap == nil { + return nil, fmt.Errorf("%w: %s", errUtils.ErrStackNotFound, componentName) + } + + var stacks []map[string]any + + for stackName, stackData := range stacksMap { + stackMap, ok := stackData.(map[string]any) + if !ok { + continue // Skip invalid stacks. + } + + componentsMap, ok := stackMap["components"].(map[string]any) + if !ok { + continue // Skip stacks without components. + } + + // Check if component exists in any component type. 
+ found := false + for _, componentType := range []string{"terraform", "helmfile", "packer"} { + if typeComponents, ok := componentsMap[componentType].(map[string]any); ok { + if _, exists := typeComponents[componentName]; exists { + found = true + break + } + } + } + + if found { + stack := map[string]any{ + "stack": stackName, + "component": componentName, + } + stacks = append(stacks, stack) + } + } + + if len(stacks) == 0 { + return nil, errUtils.ErrNoStacksFound + } + + return stacks, nil +} diff --git a/pkg/list/extract/stacks_test.go b/pkg/list/extract/stacks_test.go new file mode 100644 index 0000000000..93eefad1e0 --- /dev/null +++ b/pkg/list/extract/stacks_test.go @@ -0,0 +1,188 @@ +package extract + +import ( + "testing" + + errUtils "github.com/cloudposse/atmos/errors" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestStacks(t *testing.T) { + stacksMap := map[string]any{ + "plat-ue2-dev": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "vpc": map[string]any{}, + }, + }, + }, + "plat-ue2-prod": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "vpc": map[string]any{}, + }, + }, + }, + "plat-uw2-staging": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "eks": map[string]any{}, + }, + }, + }, + } + + stacks, err := Stacks(stacksMap) + require.NoError(t, err) + assert.Len(t, stacks, 3) + + // Verify structure of extracted data. + stackNames := make(map[string]bool) + for _, stack := range stacks { + assert.Contains(t, stack, "stack") + stackName, ok := stack["stack"].(string) + require.True(t, ok) + stackNames[stackName] = true + } + + // Verify all stacks are present. 
+ assert.True(t, stackNames["plat-ue2-dev"]) + assert.True(t, stackNames["plat-ue2-prod"]) + assert.True(t, stackNames["plat-uw2-staging"]) +} + +func TestStacks_Nil(t *testing.T) { + _, err := Stacks(nil) + assert.ErrorIs(t, err, errUtils.ErrStackNotFound) +} + +func TestStacks_EmptyMap(t *testing.T) { + stacks, err := Stacks(map[string]any{}) + require.NoError(t, err) + assert.Empty(t, stacks) +} + +func TestStacksForComponent(t *testing.T) { + stacksMap := map[string]any{ + "plat-ue2-dev": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "vpc": map[string]any{}, + "eks": map[string]any{}, + }, + }, + }, + "plat-ue2-prod": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "vpc": map[string]any{}, + "rds": map[string]any{}, + }, + }, + }, + "plat-uw2-staging": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "eks": map[string]any{}, + }, + }, + }, + } + + stacks, err := StacksForComponent("vpc", stacksMap) + require.NoError(t, err) + assert.Len(t, stacks, 2) + + // Verify only stacks with vpc component. + for _, stack := range stacks { + assert.Equal(t, "vpc", stack["component"]) + stackName := stack["stack"].(string) + assert.True(t, stackName == "plat-ue2-dev" || stackName == "plat-ue2-prod") + } +} + +func TestStacksForComponent_MultipleTypes(t *testing.T) { + stacksMap := map[string]any{ + "plat-ue2-dev": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "vpc": map[string]any{}, + }, + "helmfile": map[string]any{ + "ingress": map[string]any{}, + }, + }, + }, + "plat-ue2-prod": map[string]any{ + "components": map[string]any{ + "helmfile": map[string]any{ + "ingress": map[string]any{}, + }, + }, + }, + } + + stacks, err := StacksForComponent("ingress", stacksMap) + require.NoError(t, err) + assert.Len(t, stacks, 2) + + // Verify both stacks with ingress helmfile component. 
+ for _, stack := range stacks { + assert.Equal(t, "ingress", stack["component"]) + } +} + +func TestStacksForComponent_NotFound(t *testing.T) { + stacksMap := map[string]any{ + "plat-ue2-dev": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{ + "vpc": map[string]any{}, + }, + }, + }, + } + + _, err := StacksForComponent("nonexistent", stacksMap) + assert.ErrorIs(t, err, errUtils.ErrNoStacksFound) +} + +func TestStacksForComponent_Nil(t *testing.T) { + _, err := StacksForComponent("vpc", nil) + assert.ErrorIs(t, err, errUtils.ErrStackNotFound) +} + +func TestStacksForComponent_InvalidData(t *testing.T) { + stacksMap := map[string]any{ + "test": "invalid", + } + + _, err := StacksForComponent("vpc", stacksMap) + assert.ErrorIs(t, err, errUtils.ErrNoStacksFound) +} + +func TestStacksForComponent_NoComponents(t *testing.T) { + stacksMap := map[string]any{ + "test": map[string]any{ + "vars": map[string]any{}, + }, + } + + _, err := StacksForComponent("vpc", stacksMap) + assert.ErrorIs(t, err, errUtils.ErrNoStacksFound) +} + +func TestStacksForComponent_EmptyComponents(t *testing.T) { + stacksMap := map[string]any{ + "test": map[string]any{ + "components": map[string]any{ + "terraform": map[string]any{}, + "helmfile": map[string]any{}, + }, + }, + } + + _, err := StacksForComponent("vpc", stacksMap) + assert.ErrorIs(t, err, errUtils.ErrNoStacksFound) +} diff --git a/pkg/list/extract/vendor.go b/pkg/list/extract/vendor.go new file mode 100644 index 0000000000..cde7e35f1c --- /dev/null +++ b/pkg/list/extract/vendor.go @@ -0,0 +1,51 @@ +package extract + +import ( + "github.com/cloudposse/atmos/pkg/perf" + "github.com/cloudposse/atmos/pkg/schema" +) + +const ( + // VendorTypeComponent is the type for components with component manifests. + VendorTypeComponent = "Component Manifest" + // VendorTypeVendor is the type for vendor manifests. + VendorTypeVendor = "Vendor Manifest" +) + +// VendorInfo holds information about a vendor component. 
+type VendorInfo struct { + Component string + Type string + Manifest string + Folder string +} + +// Vendor transforms vendorInfos into structured data. +// Returns []map[string]any suitable for the renderer pipeline. +func Vendor(vendorInfos []VendorInfo) ([]map[string]any, error) { + defer perf.Track(nil, "extract.Vendor")() + + var vendors []map[string]any + + for _, vi := range vendorInfos { + vendor := map[string]any{ + "atmos_component": vi.Component, + "atmos_vendor_type": vi.Type, + "atmos_vendor_file": vi.Manifest, + "atmos_vendor_target": vi.Folder, + // Also add simple column names for easier templates. + "component": vi.Component, + "type": vi.Type, + "manifest": vi.Manifest, + "folder": vi.Folder, + } + + vendors = append(vendors, vendor) + } + + return vendors, nil +} + +// GetVendorInfosFunc is a function type for getting vendor information. +// This allows the extract package to avoid importing the list package. +type GetVendorInfosFunc func(*schema.AtmosConfiguration) ([]VendorInfo, error) diff --git a/pkg/list/extract/vendor_test.go b/pkg/list/extract/vendor_test.go new file mode 100644 index 0000000000..fe83bdab96 --- /dev/null +++ b/pkg/list/extract/vendor_test.go @@ -0,0 +1,179 @@ +package extract + +import ( + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestVendor(t *testing.T) { + vendorInfos := []VendorInfo{ + { + Component: "vpc/v1", + Type: VendorTypeComponent, + Manifest: "components/terraform/vpc/v1/component.yaml", + Folder: "components/terraform/vpc/v1", + }, + { + Component: "eks/cluster", + Type: VendorTypeVendor, + Manifest: "vendor.d/eks.yaml", + Folder: "components/terraform/eks/cluster", + }, + } + + vendors, err := Vendor(vendorInfos) + require.NoError(t, err) + assert.Len(t, vendors, 2) + + // Verify structure of extracted data. + for _, vendor := range vendors { + // Check template keys (used in atmos.yaml column templates). 
+ assert.Contains(t, vendor, "atmos_component") + assert.Contains(t, vendor, "atmos_vendor_type") + assert.Contains(t, vendor, "atmos_vendor_file") + assert.Contains(t, vendor, "atmos_vendor_target") + + // Check simple column names (easier for users). + assert.Contains(t, vendor, "component") + assert.Contains(t, vendor, "type") + assert.Contains(t, vendor, "manifest") + assert.Contains(t, vendor, "folder") + } + + // Verify first vendor. + vpc := vendors[0] + assert.Equal(t, "vpc/v1", vpc["component"]) + assert.Equal(t, "vpc/v1", vpc["atmos_component"]) + assert.Equal(t, VendorTypeComponent, vpc["type"]) + assert.Equal(t, VendorTypeComponent, vpc["atmos_vendor_type"]) + assert.Equal(t, "components/terraform/vpc/v1/component.yaml", vpc["manifest"]) + assert.Equal(t, "components/terraform/vpc/v1", vpc["folder"]) + + // Verify second vendor. + eks := vendors[1] + assert.Equal(t, "eks/cluster", eks["component"]) + assert.Equal(t, VendorTypeVendor, eks["type"]) + assert.Equal(t, "vendor.d/eks.yaml", eks["manifest"]) + assert.Equal(t, "components/terraform/eks/cluster", eks["folder"]) +} + +func TestVendor_EmptyList(t *testing.T) { + vendors, err := Vendor([]VendorInfo{}) + require.NoError(t, err) + assert.Empty(t, vendors) +} + +func TestVendor_SingleVendor(t *testing.T) { + vendorInfos := []VendorInfo{ + { + Component: "rds", + Type: VendorTypeVendor, + Manifest: "vendor.d/rds.yaml", + Folder: "components/terraform/rds", + }, + } + + vendors, err := Vendor(vendorInfos) + require.NoError(t, err) + assert.Len(t, vendors, 1) + + vendor := vendors[0] + assert.Equal(t, "rds", vendor["component"]) + assert.Equal(t, VendorTypeVendor, vendor["type"]) +} + +func TestVendor_ComponentManifests(t *testing.T) { + vendorInfos := []VendorInfo{ + { + Component: "vpc", + Type: VendorTypeComponent, + Manifest: "components/terraform/vpc/component.yaml", + Folder: "components/terraform/vpc", + }, + { + Component: "eks", + Type: VendorTypeComponent, + Manifest: 
"components/terraform/eks/component.yaml", + Folder: "components/terraform/eks", + }, + } + + vendors, err := Vendor(vendorInfos) + require.NoError(t, err) + assert.Len(t, vendors, 2) + + // Verify all are component manifests. + for _, vendor := range vendors { + assert.Equal(t, VendorTypeComponent, vendor["type"]) + } +} + +func TestVendor_VendorManifests(t *testing.T) { + vendorInfos := []VendorInfo{ + { + Component: "vpc", + Type: VendorTypeVendor, + Manifest: "vendor.d/vpc.yaml", + Folder: "components/terraform/vpc", + }, + { + Component: "eks", + Type: VendorTypeVendor, + Manifest: "vendor.d/eks.yaml", + Folder: "components/terraform/eks", + }, + } + + vendors, err := Vendor(vendorInfos) + require.NoError(t, err) + assert.Len(t, vendors, 2) + + // Verify all are vendor manifests. + for _, vendor := range vendors { + assert.Equal(t, VendorTypeVendor, vendor["type"]) + } +} + +func TestVendor_MixedTypes(t *testing.T) { + vendorInfos := []VendorInfo{ + { + Component: "vpc", + Type: VendorTypeComponent, + Manifest: "components/terraform/vpc/component.yaml", + Folder: "components/terraform/vpc", + }, + { + Component: "eks", + Type: VendorTypeVendor, + Manifest: "vendor.d/eks.yaml", + Folder: "components/terraform/eks", + }, + { + Component: "rds", + Type: VendorTypeComponent, + Manifest: "components/terraform/rds/component.yaml", + Folder: "components/terraform/rds", + }, + } + + vendors, err := Vendor(vendorInfos) + require.NoError(t, err) + assert.Len(t, vendors, 3) + + // Count types. 
+ componentCount := 0
+ vendorCount := 0
+ for _, vendor := range vendors {
+ switch vendor["type"] {
+ case VendorTypeComponent:
+ componentCount++
+ case VendorTypeVendor:
+ vendorCount++
+ }
+ }
+
+ assert.Equal(t, 2, componentCount)
+ assert.Equal(t, 1, vendorCount)
+}
diff --git a/pkg/list/extract/workflows.go b/pkg/list/extract/workflows.go
new file mode 100644
index 0000000000..5aea69cde8
--- /dev/null
+++ b/pkg/list/extract/workflows.go
@@ -0,0 +1,120 @@
+package extract
+
+import (
+ "errors"
+ "fmt"
+ "os"
+ "path/filepath"
+
+ "gopkg.in/yaml.v3"
+
+ errUtils "github.com/cloudposse/atmos/errors"
+ perf "github.com/cloudposse/atmos/pkg/perf"
+ "github.com/cloudposse/atmos/pkg/schema"
+ "github.com/cloudposse/atmos/pkg/utils"
+)
+
+// Workflows transforms workflow manifests into structured data.
+// Returns []map[string]any suitable for the renderer pipeline.
+//
+//nolint:gocognit,nestif,revive,funlen // Complexity and length from file handling and manifest parsing (unavoidable pattern).
+func Workflows(atmosConfig *schema.AtmosConfiguration, fileFilter string) ([]map[string]any, error) {
+ defer perf.Track(atmosConfig, "list.workflows.extract")()
+
+ var workflows []map[string]any
+
+ // If a specific file is provided, validate and load it.
+ if fileFilter != "" {
+ cleanPath := filepath.Clean(fileFilter)
+ if !utils.IsYaml(cleanPath) {
+ return nil, fmt.Errorf("%w: invalid workflow file extension: %s", errUtils.ErrParseFile, cleanPath)
+ }
+
+ if _, err := os.Stat(cleanPath); os.IsNotExist(err) {
+ return nil, errors.Join(errUtils.ErrParseFile, fmt.Errorf("workflow file not found: %s", cleanPath))
+ }
+
+ // Read and parse the workflow file.
+ data, err := os.ReadFile(cleanPath) + if err != nil { + return nil, errors.Join(errUtils.ErrParseFile, err) + } + + var manifest schema.WorkflowManifest + if err := yaml.Unmarshal(data, &manifest); err != nil { + return nil, errors.Join(errUtils.ErrParseFile, err) + } + + manifest.Name = cleanPath + workflows = append(workflows, extractFromManifest(manifest)...) + return workflows, nil + } + + // Get the workflows directory. + var workflowsDir string + if utils.IsPathAbsolute(atmosConfig.Workflows.BasePath) { + workflowsDir = atmosConfig.Workflows.BasePath + } else { + workflowsDir = filepath.Join(atmosConfig.BasePath, atmosConfig.Workflows.BasePath) + } + + isDirectory, err := utils.IsDirectory(workflowsDir) + if err != nil || !isDirectory { + return nil, fmt.Errorf("%w: '%s'", errUtils.ErrWorkflowDirectoryDoesNotExist, workflowsDir) + } + + files, err := utils.GetAllYamlFilesInDir(workflowsDir) + if err != nil { + return nil, errors.Join(errUtils.ErrWorkflowDirectoryDoesNotExist, err) + } + + // Extract workflows from all manifests. + for _, f := range files { + var workflowPath string + if utils.IsPathAbsolute(atmosConfig.Workflows.BasePath) { + workflowPath = filepath.Join(atmosConfig.Workflows.BasePath, f) + } else { + workflowPath = filepath.Join(atmosConfig.BasePath, atmosConfig.Workflows.BasePath, f) + } + + fileContent, err := os.ReadFile(workflowPath) + if err != nil { + continue // Skip files that can't be read. + } + + var manifest schema.WorkflowManifest + if err := yaml.Unmarshal(fileContent, &manifest); err != nil { + continue // Skip invalid manifests. + } + + manifest.Name = f + workflows = append(workflows, extractFromManifest(manifest)...) + } + + return workflows, nil +} + +// extractFromManifest extracts workflow data from a single manifest. 
+func extractFromManifest(manifest schema.WorkflowManifest) []map[string]any { + defer perf.Track(nil, "list.workflows.extractFromManifest")() + + var workflows []map[string]any + + if manifest.Workflows == nil { + return workflows + } + + for workflowName, workflow := range manifest.Workflows { + w := map[string]any{ + "file": manifest.Name, + "workflow": workflowName, + "description": workflow.Description, + // Add additional fields for advanced templates. + "steps": len(workflow.Steps), + } + + workflows = append(workflows, w) + } + + return workflows +} diff --git a/pkg/list/extract/workflows_test.go b/pkg/list/extract/workflows_test.go new file mode 100644 index 0000000000..cc92b8f39a --- /dev/null +++ b/pkg/list/extract/workflows_test.go @@ -0,0 +1,338 @@ +package extract + +import ( + "os" + "path/filepath" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + errUtils "github.com/cloudposse/atmos/errors" + "github.com/cloudposse/atmos/pkg/schema" +) + +func TestExtractFromManifest(t *testing.T) { + manifest := schema.WorkflowManifest{ + Name: "deploy-workflows", + Workflows: map[string]schema.WorkflowDefinition{ + "deploy-all": { + Description: "Deploy all components", + Steps: []schema.WorkflowStep{ + {Name: "step1"}, + {Name: "step2"}, + }, + }, + "destroy-all": { + Description: "Destroy all components", + Steps: []schema.WorkflowStep{ + {Name: "step1"}, + }, + }, + }, + } + + workflows := extractFromManifest(manifest) + require.Len(t, workflows, 2) + + // Verify structure. + for _, wf := range workflows { + assert.Contains(t, wf, "file") + assert.Contains(t, wf, "workflow") + assert.Contains(t, wf, "description") + assert.Contains(t, wf, "steps") + assert.Equal(t, "deploy-workflows", wf["file"]) + } + + // Find deploy-all workflow. 
+ var deployAll map[string]any + for _, wf := range workflows { + if wf["workflow"] == "deploy-all" { + deployAll = wf + break + } + } + + require.NotNil(t, deployAll) + assert.Equal(t, "deploy-all", deployAll["workflow"]) + assert.Equal(t, "Deploy all components", deployAll["description"]) + assert.Equal(t, 2, deployAll["steps"]) +} + +func TestExtractFromManifest_EmptyWorkflows(t *testing.T) { + manifest := schema.WorkflowManifest{ + Name: "empty-workflows", + Workflows: nil, + } + + workflows := extractFromManifest(manifest) + assert.Empty(t, workflows) +} + +func TestExtractFromManifest_NoDescription(t *testing.T) { + manifest := schema.WorkflowManifest{ + Name: "test-workflows", + Workflows: map[string]schema.WorkflowDefinition{ + "test": { + Description: "", + Steps: []schema.WorkflowStep{}, + }, + }, + } + + workflows := extractFromManifest(manifest) + require.Len(t, workflows, 1) + + assert.Equal(t, "", workflows[0]["description"]) + assert.Equal(t, 0, workflows[0]["steps"]) +} + +func TestExtractFromManifest_MultipleWorkflows(t *testing.T) { + manifest := schema.WorkflowManifest{ + Name: "multi-workflows", + Workflows: map[string]schema.WorkflowDefinition{ + "wf1": {Description: "Workflow 1", Steps: []schema.WorkflowStep{{Name: "s1"}}}, + "wf2": {Description: "Workflow 2", Steps: []schema.WorkflowStep{{Name: "s1"}, {Name: "s2"}}}, + "wf3": {Description: "Workflow 3", Steps: []schema.WorkflowStep{{Name: "s1"}, {Name: "s2"}, {Name: "s3"}}}, + }, + } + + workflows := extractFromManifest(manifest) + assert.Len(t, workflows, 3) + + // Verify all have file field. + for _, wf := range workflows { + assert.Equal(t, "multi-workflows", wf["file"]) + } +} + +// TestWorkflows_WithDirectory tests loading workflows from a directory. +func TestWorkflows_WithDirectory(t *testing.T) { + // Create temporary directory with workflow files. 
+ tmpDir := t.TempDir() + workflowsDir := filepath.Join(tmpDir, "workflows") + err := os.MkdirAll(workflowsDir, 0o755) + require.NoError(t, err) + + // Create workflow file 1. + workflow1 := ` +workflows: + deploy: + description: Deploy infrastructure + steps: + - name: step1 + - name: step2 +` + err = os.WriteFile(filepath.Join(workflowsDir, "deploy.yaml"), []byte(workflow1), 0o644) + require.NoError(t, err) + + // Create workflow file 2. + workflow2 := ` +workflows: + destroy: + description: Destroy infrastructure + steps: + - name: step1 +` + err = os.WriteFile(filepath.Join(workflowsDir, "destroy.yaml"), []byte(workflow2), 0o644) + require.NoError(t, err) + + // Configure atmos to use the test workflows directory. + atmosConfig := &schema.AtmosConfiguration{ + BasePath: tmpDir, + Workflows: schema.Workflows{ + BasePath: "workflows", + }, + } + + // Test loading all workflows. + workflows, err := Workflows(atmosConfig, "") + require.NoError(t, err) + assert.Len(t, workflows, 2) + + // Verify workflow structure. + for _, wf := range workflows { + assert.Contains(t, wf, "file") + assert.Contains(t, wf, "workflow") + assert.Contains(t, wf, "description") + assert.Contains(t, wf, "steps") + } +} + +// TestWorkflows_WithFileFilter tests loading a specific workflow file. +func TestWorkflows_WithFileFilter(t *testing.T) { + // Create temporary directory with workflow file. + tmpDir := t.TempDir() + workflowFile := filepath.Join(tmpDir, "deploy.yaml") + + workflowContent := ` +workflows: + deploy-all: + description: Deploy all components + steps: + - name: step1 + - name: step2 +` + err := os.WriteFile(workflowFile, []byte(workflowContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + BasePath: tmpDir, + } + + // Test loading specific file. 
+ workflows, err := Workflows(atmosConfig, workflowFile) + require.NoError(t, err) + assert.Len(t, workflows, 1) + + assert.Equal(t, workflowFile, workflows[0]["file"]) + assert.Equal(t, "deploy-all", workflows[0]["workflow"]) + assert.Equal(t, "Deploy all components", workflows[0]["description"]) + assert.Equal(t, 2, workflows[0]["steps"]) +} + +// TestWorkflows_InvalidFileExtension tests error handling for non-YAML files. +func TestWorkflows_InvalidFileExtension(t *testing.T) { + atmosConfig := &schema.AtmosConfiguration{ + BasePath: "/tmp", + } + + workflows, err := Workflows(atmosConfig, "test.txt") + assert.Error(t, err) + assert.Nil(t, workflows) + assert.ErrorIs(t, err, errUtils.ErrParseFile) + assert.Contains(t, err.Error(), "invalid workflow file extension") +} + +// TestWorkflows_FileNotFound tests error handling for missing files. +func TestWorkflows_FileNotFound(t *testing.T) { + atmosConfig := &schema.AtmosConfiguration{ + BasePath: "/tmp", + } + + workflows, err := Workflows(atmosConfig, "/nonexistent/file.yaml") + assert.Error(t, err) + assert.Nil(t, workflows) + assert.ErrorIs(t, err, errUtils.ErrParseFile) + assert.Contains(t, err.Error(), "workflow file not found") +} + +// TestWorkflows_InvalidYAML tests error handling for malformed YAML. +func TestWorkflows_InvalidYAML(t *testing.T) { + tmpDir := t.TempDir() + workflowFile := filepath.Join(tmpDir, "invalid.yaml") + + // Write invalid YAML. + err := os.WriteFile(workflowFile, []byte("invalid: yaml: content: ["), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + BasePath: tmpDir, + } + + workflows, err := Workflows(atmosConfig, workflowFile) + assert.Error(t, err) + assert.Nil(t, workflows) + assert.ErrorIs(t, err, errUtils.ErrParseFile) +} + +// TestWorkflows_DirectoryNotFound tests error handling for missing workflow directory. 
+func TestWorkflows_DirectoryNotFound(t *testing.T) { + atmosConfig := &schema.AtmosConfiguration{ + BasePath: "/tmp", + Workflows: schema.Workflows{ + BasePath: "nonexistent-workflows", + }, + } + + workflows, err := Workflows(atmosConfig, "") + assert.Error(t, err) + assert.Nil(t, workflows) + assert.ErrorIs(t, err, errUtils.ErrWorkflowDirectoryDoesNotExist) +} + +// TestWorkflows_EmptyDirectory tests loading from empty directory. +func TestWorkflows_EmptyDirectory(t *testing.T) { + tmpDir := t.TempDir() + workflowsDir := filepath.Join(tmpDir, "workflows") + err := os.MkdirAll(workflowsDir, 0o755) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + BasePath: tmpDir, + Workflows: schema.Workflows{ + BasePath: "workflows", + }, + } + + workflows, err := Workflows(atmosConfig, "") + require.NoError(t, err) + assert.Empty(t, workflows) +} + +// TestWorkflows_SkipsInvalidFiles tests that invalid files in directory are skipped. +func TestWorkflows_SkipsInvalidFiles(t *testing.T) { + tmpDir := t.TempDir() + workflowsDir := filepath.Join(tmpDir, "workflows") + err := os.MkdirAll(workflowsDir, 0o755) + require.NoError(t, err) + + // Create valid workflow. + validWorkflow := ` +workflows: + deploy: + description: Deploy + steps: + - name: step1 +` + err = os.WriteFile(filepath.Join(workflowsDir, "valid.yaml"), []byte(validWorkflow), 0o644) + require.NoError(t, err) + + // Create invalid workflow (malformed YAML). + err = os.WriteFile(filepath.Join(workflowsDir, "invalid.yaml"), []byte("invalid: yaml: ["), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + BasePath: tmpDir, + Workflows: schema.Workflows{ + BasePath: "workflows", + }, + } + + // Should load only the valid workflow. 
+ workflows, err := Workflows(atmosConfig, "") + require.NoError(t, err) + assert.Len(t, workflows, 1) + assert.Equal(t, "valid.yaml", workflows[0]["file"]) +} + +// TestWorkflows_AbsoluteWorkflowsPath tests loading with absolute workflows path. +func TestWorkflows_AbsoluteWorkflowsPath(t *testing.T) { + tmpDir := t.TempDir() + workflowsDir := filepath.Join(tmpDir, "workflows") + err := os.MkdirAll(workflowsDir, 0o755) + require.NoError(t, err) + + workflow := ` +workflows: + test: + description: Test workflow + steps: + - name: step1 +` + err = os.WriteFile(filepath.Join(workflowsDir, "test.yaml"), []byte(workflow), 0o644) + require.NoError(t, err) + + // Use absolute path for workflows. + atmosConfig := &schema.AtmosConfiguration{ + BasePath: tmpDir, + Workflows: schema.Workflows{ + BasePath: workflowsDir, // Absolute path. + }, + } + + workflows, err := Workflows(atmosConfig, "") + require.NoError(t, err) + assert.Len(t, workflows, 1) + assert.Equal(t, "test.yaml", workflows[0]["file"]) +} diff --git a/pkg/list/filter/filter.go b/pkg/list/filter/filter.go new file mode 100644 index 0000000000..92dda31715 --- /dev/null +++ b/pkg/list/filter/filter.go @@ -0,0 +1,195 @@ +package filter + +import ( + "fmt" + "path/filepath" + "strings" + + errUtils "github.com/cloudposse/atmos/errors" + "github.com/cloudposse/atmos/pkg/perf" +) + +// Filter interface for composability. +type Filter interface { + Apply(data interface{}) (interface{}, error) +} + +// GlobFilter matches patterns (e.g., "plat-*-dev"). +type GlobFilter struct { + Field string + Pattern string +} + +// ColumnValueFilter filters rows by column value. +type ColumnValueFilter struct { + Column string + Value string +} + +// BoolFilter filters by boolean field. +type BoolFilter struct { + Field string + Value *bool // nil = all, true = enabled only, false = disabled only +} + +// Chain combines multiple filters (AND logic). 
+type Chain struct { + filters []Filter +} + +// NewGlobFilter creates a filter that matches field values against glob pattern. +func NewGlobFilter(field, pattern string) (*GlobFilter, error) { + defer perf.Track(nil, "filter.NewGlobFilter")() + + if field == "" { + return nil, fmt.Errorf("%w: field cannot be empty", errUtils.ErrInvalidConfig) + } + if pattern == "" { + return nil, fmt.Errorf("%w: pattern cannot be empty", errUtils.ErrInvalidConfig) + } + + // Validate pattern syntax + _, err := filepath.Match(pattern, "test") + if err != nil { + return nil, fmt.Errorf("%w: invalid glob pattern %q: %w", errUtils.ErrInvalidConfig, pattern, err) + } + + return &GlobFilter{ + Field: field, + Pattern: pattern, + }, nil +} + +// Apply filters data by glob pattern matching. +func (f *GlobFilter) Apply(data interface{}) (interface{}, error) { + defer perf.Track(nil, "filter.GlobFilter.Apply")() + + items, ok := data.([]map[string]any) + if !ok { + return nil, fmt.Errorf("%w: expected []map[string]any, got %T", errUtils.ErrInvalidConfig, data) + } + + var filtered []map[string]any + for _, item := range items { + value, ok := item[f.Field] + if !ok { + continue // Skip items without the field + } + + valueStr := fmt.Sprintf("%v", value) + matched, err := filepath.Match(f.Pattern, valueStr) + if err != nil { + return nil, fmt.Errorf("%w: pattern matching failed: %w", errUtils.ErrInvalidConfig, err) + } + + if matched { + filtered = append(filtered, item) + } + } + + return filtered, nil +} + +// NewColumnFilter creates a filter for exact column value matching. +func NewColumnFilter(column, value string) *ColumnValueFilter { + defer perf.Track(nil, "filter.NewColumnFilter")() + + return &ColumnValueFilter{ + Column: column, + Value: value, + } +} + +// Apply filters data by exact column value match. 
+func (f *ColumnValueFilter) Apply(data interface{}) (interface{}, error) { + defer perf.Track(nil, "filter.ColumnValueFilter.Apply")() + + items, ok := data.([]map[string]any) + if !ok { + return nil, fmt.Errorf("%w: expected []map[string]any, got %T", errUtils.ErrInvalidConfig, data) + } + + var filtered []map[string]any + for _, item := range items { + value, ok := item[f.Column] + if !ok { + continue + } + + valueStr := fmt.Sprintf("%v", value) + if valueStr == f.Value { + filtered = append(filtered, item) + } + } + + return filtered, nil +} + +// NewBoolFilter creates a filter for boolean field values. +// Value nil = all, true = only true values, false = only false values. +func NewBoolFilter(field string, value *bool) *BoolFilter { + defer perf.Track(nil, "filter.NewBoolFilter")() + + return &BoolFilter{ + Field: field, + Value: value, + } +} + +// Apply filters data by boolean field value. +func (f *BoolFilter) Apply(data interface{}) (interface{}, error) { + defer perf.Track(nil, "filter.BoolFilter.Apply")() + + items, ok := data.([]map[string]any) + if !ok { + return nil, fmt.Errorf("%w: expected []map[string]any, got %T", errUtils.ErrInvalidConfig, data) + } + + // nil = no filtering + if f.Value == nil { + return items, nil + } + + var filtered []map[string]any + for _, item := range items { + value, ok := item[f.Field] + if !ok { + continue + } + + boolValue, ok := value.(bool) + if !ok { + // Try string conversion + strValue := strings.ToLower(fmt.Sprintf("%v", value)) + boolValue = strValue == "true" || strValue == "yes" || strValue == "1" + } + + if boolValue == *f.Value { + filtered = append(filtered, item) + } + } + + return filtered, nil +} + +// NewChain creates a filter chain that applies filters in sequence (AND logic). +func NewChain(filters ...Filter) *Chain { + defer perf.Track(nil, "filter.NewChain")() + + return &Chain{filters: filters} +} + +// Apply applies all filters in sequence. 
+func (c *Chain) Apply(data interface{}) (interface{}, error) { + defer perf.Track(nil, "filter.Chain.Apply")() + + current := data + for i, filter := range c.filters { + result, err := filter.Apply(current) + if err != nil { + return nil, fmt.Errorf("filter %d failed: %w", i, err) + } + current = result + } + return current, nil +} diff --git a/pkg/list/filter/filter_test.go b/pkg/list/filter/filter_test.go new file mode 100644 index 0000000000..8bb817e865 --- /dev/null +++ b/pkg/list/filter/filter_test.go @@ -0,0 +1,405 @@ +package filter + +import ( + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + errUtils "github.com/cloudposse/atmos/errors" +) + +func TestNewGlobFilter(t *testing.T) { + tests := []struct { + name string + field string + pattern string + expectErr bool + errType error + }{ + {"valid simple pattern", "stack", "plat-*", false, nil}, + {"valid complex pattern", "stack", "plat-*-dev", false, nil}, + {"valid exact match", "stack", "exact", false, nil}, + {"empty field", "", "pattern", true, errUtils.ErrInvalidConfig}, + {"empty pattern", "field", "", true, errUtils.ErrInvalidConfig}, + {"invalid pattern", "field", "[", true, errUtils.ErrInvalidConfig}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + filter, err := NewGlobFilter(tt.field, tt.pattern) + + if tt.expectErr { + require.Error(t, err) + if tt.errType != nil { + assert.ErrorIs(t, err, tt.errType) + } + assert.Nil(t, filter) + } else { + require.NoError(t, err) + assert.NotNil(t, filter) + assert.Equal(t, tt.field, filter.Field) + assert.Equal(t, tt.pattern, filter.Pattern) + } + }) + } +} + +func TestGlobFilter_Apply(t *testing.T) { + tests := []struct { + name string + field string + pattern string + data []map[string]any + expectedCount int + expectErr bool + }{ + { + name: "match all with wildcard", + field: "stack", + pattern: "*", + data: []map[string]any{ + {"stack": "plat-ue2-dev"}, + {"stack": 
"plat-ue2-prod"}, + }, + expectedCount: 2, + }, + { + name: "match pattern", + field: "stack", + pattern: "plat-*-dev", + data: []map[string]any{ + {"stack": "plat-ue2-dev"}, + {"stack": "plat-ue2-prod"}, + {"stack": "plat-uw2-dev"}, + }, + expectedCount: 2, + }, + { + name: "no matches", + field: "stack", + pattern: "non-*", + data: []map[string]any{ + {"stack": "plat-ue2-dev"}, + {"stack": "plat-ue2-prod"}, + }, + expectedCount: 0, + }, + { + name: "missing field", + field: "missing", + pattern: "*", + data: []map[string]any{ + {"stack": "plat-ue2-dev"}, + }, + expectedCount: 0, + }, + { + name: "exact match", + field: "stack", + pattern: "exact", + data: []map[string]any{ + {"stack": "exact"}, + {"stack": "notexact"}, + }, + expectedCount: 1, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + filter, err := NewGlobFilter(tt.field, tt.pattern) + require.NoError(t, err) + + result, err := filter.Apply(tt.data) + + if tt.expectErr { + require.Error(t, err) + } else { + require.NoError(t, err) + filtered, ok := result.([]map[string]any) + require.True(t, ok) + assert.Len(t, filtered, tt.expectedCount) + } + }) + } +} + +func TestGlobFilter_Apply_InvalidData(t *testing.T) { + filter, err := NewGlobFilter("field", "*") + require.NoError(t, err) + + tests := []struct { + name string + data interface{} + }{ + {"string", "invalid"}, + {"int", 123}, + {"map", map[string]string{"key": "value"}}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + _, err := filter.Apply(tt.data) + require.Error(t, err) + assert.ErrorIs(t, err, errUtils.ErrInvalidConfig) + }) + } +} + +func TestNewColumnFilter(t *testing.T) { + filter := NewColumnFilter("component", "vpc") + assert.NotNil(t, filter) + assert.Equal(t, "component", filter.Column) + assert.Equal(t, "vpc", filter.Value) +} + +func TestColumnValueFilter_Apply(t *testing.T) { + tests := []struct { + name string + column string + value string + data []map[string]any + 
expectedCount int + }{ + { + name: "exact match", + column: "component", + value: "vpc", + data: []map[string]any{ + {"component": "vpc"}, + {"component": "eks"}, + {"component": "vpc"}, + }, + expectedCount: 2, + }, + { + name: "no matches", + column: "component", + value: "nonexistent", + data: []map[string]any{ + {"component": "vpc"}, + {"component": "eks"}, + }, + expectedCount: 0, + }, + { + name: "missing column", + column: "missing", + value: "value", + data: []map[string]any{ + {"component": "vpc"}, + }, + expectedCount: 0, + }, + { + name: "numeric value as string", + column: "port", + value: "8080", + data: []map[string]any{ + {"port": 8080}, + {"port": 9090}, + }, + expectedCount: 1, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + filter := NewColumnFilter(tt.column, tt.value) + result, err := filter.Apply(tt.data) + + require.NoError(t, err) + filtered, ok := result.([]map[string]any) + require.True(t, ok) + assert.Len(t, filtered, tt.expectedCount) + }) + } +} + +func TestNewBoolFilter(t *testing.T) { + trueVal := true + falseVal := false + + tests := []struct { + name string + field string + value *bool + }{ + {"filter true", "enabled", &trueVal}, + {"filter false", "enabled", &falseVal}, + {"filter all (nil)", "enabled", nil}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + filter := NewBoolFilter(tt.field, tt.value) + assert.NotNil(t, filter) + assert.Equal(t, tt.field, filter.Field) + assert.Equal(t, tt.value, filter.Value) + }) + } +} + +func TestBoolFilter_Apply(t *testing.T) { + trueVal := true + falseVal := false + + tests := []struct { + name string + field string + value *bool + data []map[string]any + expectedCount int + }{ + { + name: "filter enabled only", + field: "enabled", + value: &trueVal, + data: []map[string]any{ + {"enabled": true}, + {"enabled": false}, + {"enabled": true}, + }, + expectedCount: 2, + }, + { + name: "filter disabled only", + field: "enabled", + value: 
&falseVal, + data: []map[string]any{ + {"enabled": true}, + {"enabled": false}, + {"enabled": true}, + }, + expectedCount: 1, + }, + { + name: "nil value returns all", + field: "enabled", + value: nil, + data: []map[string]any{ + {"enabled": true}, + {"enabled": false}, + }, + expectedCount: 2, + }, + { + name: "string true conversion", + field: "enabled", + value: &trueVal, + data: []map[string]any{ + {"enabled": "true"}, + {"enabled": "yes"}, + {"enabled": "1"}, + {"enabled": "false"}, + }, + expectedCount: 3, + }, + { + name: "missing field", + field: "missing", + value: &trueVal, + data: []map[string]any{ + {"enabled": true}, + }, + expectedCount: 0, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + filter := NewBoolFilter(tt.field, tt.value) + result, err := filter.Apply(tt.data) + + require.NoError(t, err) + filtered, ok := result.([]map[string]any) + require.True(t, ok) + assert.Len(t, filtered, tt.expectedCount) + }) + } +} + +func TestNewChain(t *testing.T) { + filter1 := NewColumnFilter("col1", "val1") + filter2 := NewColumnFilter("col2", "val2") + + chain := NewChain(filter1, filter2) + assert.NotNil(t, chain) + assert.Len(t, chain.filters, 2) +} + +func TestChain_Apply(t *testing.T) { + tests := []struct { + name string + filters []Filter + data []map[string]any + expectedCount int + }{ + { + name: "two filters AND logic", + filters: []Filter{ + NewColumnFilter("type", "real"), + NewBoolFilter("enabled", boolPtr(true)), + }, + data: []map[string]any{ + {"type": "real", "enabled": true}, // match both + {"type": "real", "enabled": false}, // match first only + {"type": "abstract", "enabled": true}, // match second only + {"type": "abstract", "enabled": false}, // match neither + }, + expectedCount: 1, + }, + { + name: "three filters cascade", + filters: []Filter{ + NewColumnFilter("region", "us-east-2"), + NewColumnFilter("env", "prod"), + NewBoolFilter("enabled", boolPtr(true)), + }, + data: []map[string]any{ + {"region": 
"us-east-2", "env": "prod", "enabled": true}, + {"region": "us-east-2", "env": "prod", "enabled": false}, + {"region": "us-east-2", "env": "dev", "enabled": true}, + }, + expectedCount: 1, + }, + { + name: "empty chain returns all", + filters: []Filter{}, + data: []map[string]any{ + {"component": "vpc"}, + {"component": "eks"}, + }, + expectedCount: 2, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + chain := NewChain(tt.filters...) + result, err := chain.Apply(tt.data) + + require.NoError(t, err) + filtered, ok := result.([]map[string]any) + require.True(t, ok) + assert.Len(t, filtered, tt.expectedCount) + }) + } +} + +func TestChain_Apply_ErrorPropagation(t *testing.T) { + goodFilter := NewColumnFilter("col", "val") + + // Create a chain where second filter will receive wrong type + chain := NewChain(goodFilter, goodFilter) + + // Pass invalid data type - first filter will fail + _, err := chain.Apply("invalid") + require.Error(t, err) +} + +// Helper function for test readability. +func boolPtr(b bool) *bool { + return &b +} diff --git a/pkg/list/format/formatter.go b/pkg/list/format/formatter.go index 8ba67f76aa..5aae34cfc0 100644 --- a/pkg/list/format/formatter.go +++ b/pkg/list/format/formatter.go @@ -14,6 +14,7 @@ const ( FormatCSV Format = "csv" FormatTSV Format = "tsv" FormatTemplate Format = "template" + FormatTree Format = "tree" ) // FormatOptions contains options for formatting output. @@ -77,7 +78,7 @@ func NewFormatter(format Format) (Formatter, error) { // ValidateFormat checks if the provided format is valid. 
func ValidateFormat(format string) error { - validFormats := []Format{FormatTable, FormatJSON, FormatYAML, FormatCSV, FormatTSV} + validFormats := []Format{FormatTable, FormatJSON, FormatYAML, FormatCSV, FormatTSV, FormatTree} for _, f := range validFormats { if Format(format) == f { return nil @@ -85,6 +86,6 @@ func ValidateFormat(format string) error { } return &errors.InvalidFormatError{ Format: format, - Valid: []string{string(FormatTable), string(FormatJSON), string(FormatYAML), string(FormatCSV), string(FormatTSV)}, + Valid: []string{string(FormatTable), string(FormatJSON), string(FormatYAML), string(FormatCSV), string(FormatTSV), string(FormatTree)}, } } diff --git a/pkg/list/format/table.go b/pkg/list/format/table.go index 2825a31103..f354c54b2a 100644 --- a/pkg/list/format/table.go +++ b/pkg/list/format/table.go @@ -4,11 +4,16 @@ import ( "encoding/json" "fmt" "reflect" + "regexp" "sort" + "strconv" + "strings" + "github.com/charmbracelet/glamour" "github.com/charmbracelet/lipgloss" "github.com/charmbracelet/lipgloss/table" "github.com/cloudposse/atmos/internal/tui/templates" + "github.com/cloudposse/atmos/pkg/terminal" "github.com/cloudposse/atmos/pkg/ui/theme" "github.com/cloudposse/atmos/pkg/utils" "github.com/pkg/errors" @@ -16,10 +21,23 @@ import ( // Constants for table formatting. const ( - MaxColumnWidth = 60 // Maximum width for a column. - TableColumnPadding = 3 // Padding for table columns. - DefaultKeyWidth = 15 // Default base width for keys. - KeyValue = "value" + MaxColumnWidth = 60 // Maximum width for a column. + TableColumnPadding = 3 // Padding for table columns. + DefaultKeyWidth = 15 // Default base width for keys. + KeyValue = "value" + CompactColumnMaxWidth = 20 // Maximum width for non-Description columns. + DescriptionColumnMinWidth = 30 // Minimum width for Description column. + MinColumnWidth = 5 // Absolute minimum width for any column. + + // Format strings for value formatting. 
+ fmtBool = "%v" + fmtInt = "%d" + fmtFloat = "%.2f" + fmtSpace = " " + fmtNewline = "\n" + + // Parsing constants. + floatBitSize = 64 ) // Error variables for table formatting. @@ -166,14 +184,204 @@ func formatCollectionValue(v reflect.Value) string { count := v.Len() switch v.Kind() { case reflect.Map: + // Try to expand scalar maps (like tags) into multi-line format. + if expanded := tryExpandScalarMap(v); expanded != "" { + return expanded + } return fmt.Sprintf("{...} (%d keys)", count) case reflect.Array, reflect.Slice: + // Try to expand scalar arrays into multi-line format. + if expanded := tryExpandScalarArray(v); expanded != "" { + return expanded + } return fmt.Sprintf("[...] (%d items)", count) default: return "{unknown collection}" } } +// tryExpandScalarArray attempts to expand an array of scalar values into a multi-line string. +// Returns empty string if the array contains non-scalar values or would be too wide. +func tryExpandScalarArray(v reflect.Value) string { + if v.Len() == 0 { + return "" + } + + items, maxItemWidth := extractScalarArrayItems(v) + if items == nil { + return "" + } + + if !isWidthReasonable(maxItemWidth) { + return "" + } + + return joinItems(items) +} + +// extractScalarArrayItems extracts scalar items from an array. +// Returns nil if array contains non-scalar values. +func extractScalarArrayItems(v reflect.Value) ([]string, int) { + var items []string + maxItemWidth := 0 + + for i := 0; i < v.Len(); i++ { + elem := unwrapInterfaceValue(v.Index(i)) + if elem.Kind() == reflect.Invalid { + return nil, 0 + } + + itemStr := formatScalarValue(elem) + if itemStr == "" { + return nil, 0 + } + + items = append(items, itemStr) + + itemWidth := lipgloss.Width(itemStr) + if itemWidth > maxItemWidth { + maxItemWidth = itemWidth + } + } + + return items, maxItemWidth +} + +// unwrapInterfaceValue unwraps interface{} wrappers from a reflect value. +// Returns Invalid kind if the value is nil. 
+func unwrapInterfaceValue(elem reflect.Value) reflect.Value { + if elem.Kind() == reflect.Interface { + if elem.IsNil() { + return reflect.Value{} + } + return elem.Elem() + } + return elem +} + +// formatScalarValue formats a scalar reflect value as a string. +// Returns empty string if the value is not a scalar type. +func formatScalarValue(elem reflect.Value) string { + switch elem.Kind() { + case reflect.String: + return elem.String() + case reflect.Bool: + return fmt.Sprintf(fmtBool, elem.Bool()) + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return fmt.Sprintf(fmtInt, elem.Int()) + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: + return fmt.Sprintf(fmtInt, elem.Uint()) + case reflect.Float32, reflect.Float64: + return fmt.Sprintf(fmtFloat, elem.Float()) + default: + return "" + } +} + +// isWidthReasonable checks if the width is reasonable for table display. +func isWidthReasonable(width int) bool { + const maxReasonableItemWidth = 20 + return width <= maxReasonableItemWidth +} + +// tryExpandScalarMap attempts to expand a map with scalar values into a multi-line string. +// Returns empty string if the map contains non-scalar values or would be too wide. +func tryExpandScalarMap(v reflect.Value) string { + if v.Len() == 0 { + return "" + } + + sortedKeys, keyMap := sortMapKeys(v) + if sortedKeys == nil { + return "" + } + + items, maxItemWidth := extractScalarMapItems(v, sortedKeys, keyMap) + if items == nil { + return "" + } + + if !isWidthReasonable(maxItemWidth) { + return "" + } + + return joinItems(items) +} + +// sortMapKeys sorts map keys by their string representation. +// Returns nil if the map is empty. 
+func sortMapKeys(v reflect.Value) ([]string, map[string]reflect.Value) { + keys := v.MapKeys() + if len(keys) == 0 { + return nil, nil + } + + sortedKeys := make([]string, len(keys)) + keyMap := make(map[string]reflect.Value) + + for i, key := range keys { + keyStr := fmt.Sprintf(fmtBool, key.Interface()) + sortedKeys[i] = keyStr + keyMap[keyStr] = key + } + + sort.Strings(sortedKeys) + return sortedKeys, keyMap +} + +// extractScalarMapItems extracts scalar key-value pairs from a map. +// Returns nil if the map contains non-scalar values. +func extractScalarMapItems(v reflect.Value, sortedKeys []string, keyMap map[string]reflect.Value) ([]string, int) { + var items []string + maxItemWidth := 0 + + for _, keyStr := range sortedKeys { + key := keyMap[keyStr] + val := unwrapInterfaceValue(v.MapIndex(key)) + if val.Kind() == reflect.Invalid { + return nil, 0 + } + + valueStr := formatScalarValue(val) + if valueStr == "" { + return nil, 0 + } + + itemStr := fmt.Sprintf("%s: %s", keyStr, valueStr) + items = append(items, itemStr) + + itemWidth := lipgloss.Width(itemStr) + if itemWidth > maxItemWidth { + maxItemWidth = itemWidth + } + } + + return items, maxItemWidth +} + +// joinItems joins array items with newlines, respecting MaxColumnWidth. +func joinItems(items []string) string { + if len(items) == 0 { + return "" + } + + // Join with newlines. + result := "" + for i, item := range items { + if i > 0 { + result += "\n" + } + // Truncate individual items if they're too long. + if len(item) > MaxColumnWidth { + result += item[:MaxColumnWidth-3] + "..." + } else { + result += item + } + } + + return result +} + // formatComplexValue formats complex values using JSON. func formatComplexValue(val interface{}) string { jsonBytes, err := json.Marshal(val) @@ -183,23 +391,469 @@ func formatComplexValue(val interface{}) string { return truncateString(string(jsonBytes)) } +// cellContentType represents the type of content in a table cell. 
+type cellContentType int
+
+const (
+	contentTypeDefault cellContentType = iota
+	contentTypeBoolean
+	contentTypeNumber
+	contentTypePlaceholder
+	contentTypeNoValue // For Go template output
+)
+
+// Regular expressions for content detection.
+var (
+	placeholderRegex = regexp.MustCompile(`^(\{\.\.\.}|\[\.\.\.]).*$`)
+)
+
+// detectContentType determines the content type of a cell value.
+func detectContentType(value string) cellContentType {
+	if value == "" {
+		return contentTypeDefault
+	}
+
+	// Check for "<no value>" emitted by Go templates for missing keys.
+	if value == "<no value>" {
+		return contentTypeNoValue
+	}
+
+	// Check for placeholders first (they contain specific patterns).
+	if placeholderRegex.MatchString(value) {
+		return contentTypePlaceholder
+	}
+
+	// Check for booleans.
+	if value == "true" || value == "false" {
+		return contentTypeBoolean
+	}
+
+	// Check for numbers (integers or floats).
+	if _, err := strconv.ParseFloat(value, floatBitSize); err == nil {
+		return contentTypeNumber
+	}
+
+	return contentTypeDefault
+}
+
+// getCellStyle returns the appropriate lipgloss style for a cell based on its content.
+func getCellStyle(value string, baseStyle *lipgloss.Style, styles *theme.StyleSet) lipgloss.Style {
+	contentType := detectContentType(value)
+
+	switch contentType {
+	case contentTypeBoolean:
+		if value == "true" {
+			return baseStyle.Foreground(styles.Success.GetForeground())
+		}
+		return baseStyle.Foreground(styles.Error.GetForeground())
+
+	case contentTypeNumber:
+		return baseStyle.Foreground(styles.Info.GetForeground())
+
+	case contentTypePlaceholder:
+		return baseStyle.Foreground(styles.Muted.GetForeground())
+
+	case contentTypeNoValue:
+		return baseStyle.Foreground(styles.Muted.GetForeground())
+
+	default:
+		return *baseStyle
+	}
+}
+
+// renderInlineMarkdown renders markdown content inline for table cells.
+// Strips newlines and renders markdown formatting (bold, italic, links, code).
+func renderInlineMarkdown(content string) string {
+	if content == "" {
+		return ""
+	}
+
+	// Create a terminal instance to detect color support.
+	term := terminal.New()
+
+	// Build glamour options for inline rendering.
+	var opts []glamour.TermRendererOption
+
+	// Use theme-aware glamour styles if color is supported.
+	if term.ColorProfile() != terminal.ColorNone {
+		// Get the configured theme name from atmos config if available.
+		// Default to "dark" theme for better terminal compatibility.
+		themeName := "dark"
+		glamourStyle, err := theme.GetGlamourStyleForTheme(themeName)
+		if err == nil {
+			opts = append(opts, glamour.WithStylesFromJSONBytes(glamourStyle))
+		} else {
+			// Fall back to auto style if theme conversion fails.
+			opts = append(opts, glamour.WithAutoStyle())
+		}
+	} else {
+		// Use plain notty style for terminals without color.
+		opts = append(opts, glamour.WithStylePath("notty"))
+	}
+
+	// No word wrap - we'll handle line breaks manually.
+	opts = append(opts, glamour.WithWordWrap(0))
+
+	// Create the renderer.
+	renderer, err := glamour.NewTermRenderer(opts...)
+	if err != nil {
+		// If renderer creation fails, return the original content.
+		return content
+	}
+	defer renderer.Close()
+
+	// Render the markdown.
+	rendered, err := renderer.Render(content)
+	if err != nil {
+		// If rendering fails, return the original content.
+		return content
+	}
+
+	// Convert to single line by replacing newlines with spaces.
+	// This keeps inline markdown (bold, italic, code) but removes block formatting.
+	singleLine := strings.ReplaceAll(rendered, "\n", " ")
+
+	// Collapse multiple spaces into a single space.
+	singleLine = regexp.MustCompile(`\s+`).ReplaceAllString(singleLine, " ")
+
+	// Trim leading and trailing whitespace.
+	return strings.TrimSpace(singleLine)
+}
+
+// columnWidthParams contains parameters for column width calculation.
+type columnWidthParams struct { + numColumns int + availableWidth int + minWidths []int + descriptionColIndex int +} + +// calculateColumnWidths calculates optimal column widths for table display. +func calculateColumnWidths(header []string, rows [][]string, terminalWidth int) []int { + numColumns := len(header) + if numColumns == 0 { + return []int{} + } + + availableWidth := calculateAvailableWidth(terminalWidth, numColumns) + if availableWidth < numColumns { + return createUniformWidths(numColumns) + } + + minWidths := calculateMinimumWidths(header, rows, numColumns) + descriptionColIndex := findDescriptionColumnIndex(header) + + params := columnWidthParams{ + numColumns: numColumns, + availableWidth: availableWidth, + minWidths: minWidths, + descriptionColIndex: descriptionColIndex, + } + + return distributeWidths(params) +} + +// calculateAvailableWidth calculates the available width for table content. +func calculateAvailableWidth(terminalWidth, numColumns int) int { + const paddingPerColumn = 5 + totalPadding := numColumns * paddingPerColumn + availableWidth := terminalWidth - totalPadding + + minRequiredWidth := numColumns * MinColumnWidth + if availableWidth < minRequiredWidth { + return minRequiredWidth + } + + return availableWidth +} + +// createUniformWidths creates uniform column widths when space is limited. +func createUniformWidths(numColumns int) []int { + widths := make([]int, numColumns) + for i := range widths { + widths[i] = MinColumnWidth + } + return widths +} + +// calculateMinimumWidths calculates the minimum width needed for each column. 
+func calculateMinimumWidths(header []string, rows [][]string, numColumns int) []int { + minWidths := make([]int, numColumns) + + for col := 0; col < numColumns; col++ { + minWidths[col] = lipgloss.Width(header[col]) + + for _, row := range rows { + if col < len(row) { + cellWidth := getMaxLineWidth(row[col]) + if cellWidth > minWidths[col] { + minWidths[col] = cellWidth + } + } + } + } + + return minWidths +} + +// findDescriptionColumnIndex finds the index of the Description column. +func findDescriptionColumnIndex(header []string) int { + for i, h := range header { + if h == "Description" { + return i + } + } + return -1 +} + +// distributeWidths distributes available width among columns. +func distributeWidths(params columnWidthParams) []int { + if params.descriptionColIndex >= 0 { + return distributeWithDescriptionColumn(params) + } + return distributeProportionally(params) +} + +// distributeWithDescriptionColumn distributes width when a Description column exists. +func distributeWithDescriptionColumn(params columnWidthParams) []int { + widths := allocateInitialWidths(params) + usedWidth := calculateUsedWidth(widths, params.descriptionColIndex) + + descWidth := calculateDescriptionWidth(params.availableWidth, usedWidth) + widths[params.descriptionColIndex] = descWidth + + return shrinkIfNeeded(widths, params.availableWidth, params.descriptionColIndex) +} + +// allocateInitialWidths allocates initial widths to all columns. +func allocateInitialWidths(params columnWidthParams) []int { + widths := make([]int, params.numColumns) + + for i := range widths { + if i == params.descriptionColIndex { + widths[i] = DescriptionColumnMinWidth + } else { + widths[i] = params.minWidths[i] + if widths[i] > CompactColumnMaxWidth { + widths[i] = CompactColumnMaxWidth + } + } + } + + return widths +} + +// calculateUsedWidth calculates total width used by non-Description columns. 
+func calculateUsedWidth(widths []int, descriptionColIndex int) int { + usedWidth := 0 + for i, w := range widths { + if i != descriptionColIndex { + usedWidth += w + } + } + return usedWidth +} + +// calculateDescriptionWidth calculates the width for the Description column. +func calculateDescriptionWidth(availableWidth, usedWidth int) int { + remainingWidth := availableWidth - usedWidth + + if remainingWidth > availableWidth { + remainingWidth = availableWidth + } + if remainingWidth > MaxColumnWidth { + remainingWidth = MaxColumnWidth + } + if remainingWidth < DescriptionColumnMinWidth { + remainingWidth = DescriptionColumnMinWidth + } + if remainingWidth < MinColumnWidth { + remainingWidth = MinColumnWidth + } + + return remainingWidth +} + +// shrinkIfNeeded shrinks non-Description columns if total width exceeds available space. +func shrinkIfNeeded(widths []int, availableWidth, descriptionColIndex int) []int { + totalWidth := 0 + for _, w := range widths { + totalWidth += w + } + + if totalWidth <= availableWidth { + return widths + } + + excess := totalWidth - availableWidth + for i := range widths { + if i != descriptionColIndex && excess > 0 { + reduction := widths[i] - MinColumnWidth + if reduction > 0 { + if reduction > excess { + reduction = excess + } + widths[i] -= reduction + excess -= reduction + } + } + } + + return widths +} + +// distributeProportionally distributes width proportionally when no Description column exists. 
+func distributeProportionally(params columnWidthParams) []int { + widths := make([]int, params.numColumns) + totalMinWidth := calculateTotalMinWidth(params.minWidths, params.descriptionColIndex) + + if totalMinWidth <= params.availableWidth { + copy(widths, params.minWidths) + return widths + } + + scaleFactor := float64(params.availableWidth) / float64(totalMinWidth) + for i, minWidth := range params.minWidths { + widths[i] = int(float64(minWidth) * scaleFactor) + if widths[i] < MinColumnWidth { + widths[i] = MinColumnWidth + } + } + + return widths +} + +// calculateTotalMinWidth calculates total minimum width with capping for non-Description columns. +func calculateTotalMinWidth(minWidths []int, descriptionColIndex int) int { + totalMinWidth := 0 + for i, minWidth := range minWidths { + if i != descriptionColIndex && minWidth > CompactColumnMaxWidth { + minWidth = CompactColumnMaxWidth + } + totalMinWidth += minWidth + } + return totalMinWidth +} + +// padToWidth pads a string to the target width without truncating. +// For multi-line content, pads each line individually. +func padToWidth(s string, width int) string { + if width <= 0 { + return s + } + + // For multi-line content, pad each line. + lines := splitLines(s) + if len(lines) > 1 { + padded := make([]string, len(lines)) + for i, line := range lines { + currentWidth := lipgloss.Width(line) + if currentWidth < width { + padded[i] = line + strings.Repeat(fmtSpace, width-currentWidth) + } else { + padded[i] = line + } + } + return strings.Join(padded, "\n") + } + + // Single line: pad if needed. + currentWidth := lipgloss.Width(s) + if currentWidth < width { + return s + strings.Repeat(fmtSpace, width-currentWidth) + } + return s +} + // createStyledTable creates a styled table with headers and rows. +// Uses intelligent column width calculation to optimize space usage. func CreateStyledTable(header []string, rows [][]string) string { + // Get terminal width - use exactly what's detected. 
+ detectedWidth := templates.GetTerminalWidth() + + // Get theme-aware styles. + styles := theme.GetCurrentStyles() + + // Find the index of the "Description" column if it exists. + descriptionColIndex := -1 + for i, h := range header { + if h == "Description" { + descriptionColIndex = i + break + } + } + + // Apply markdown rendering to Description column cells. + processedRows := rows + if descriptionColIndex >= 0 { + processedRows = make([][]string, len(rows)) + for i, row := range rows { + processedRows[i] = make([]string, len(row)) + copy(processedRows[i], row) + if descriptionColIndex < len(row) && row[descriptionColIndex] != "" { + // Render markdown content inline (strip block elements). + processedRows[i][descriptionColIndex] = renderInlineMarkdown(row[descriptionColIndex]) + } + } + } + + // Calculate optimal column widths. + columnWidths := calculateColumnWidths(header, processedRows, detectedWidth) + + // Pad headers to match column widths. + paddedHeaders := make([]string, len(header)) + for i, h := range header { + if i < len(columnWidths) { + paddedHeaders[i] = padToWidth(h, columnWidths[i]) + } else { + paddedHeaders[i] = h + } + } + + // Pad cells to match column widths (don't truncate, just pad). + constrainedRows := make([][]string, len(processedRows)) + for i, row := range processedRows { + constrainedRows[i] = make([]string, len(row)) + for j, cell := range row { + if j < len(columnWidths) { + // Pad to width, but allow wrapping for long content. + constrainedRows[i][j] = padToWidth(cell, columnWidths[j]) + } else { + constrainedRows[i][j] = cell + } + } + } + + // Table styling - simple and clean like version list. + headerStyle := lipgloss.NewStyle().Bold(true) + cellStyle := lipgloss.NewStyle() + t := table.New(). - Border(lipgloss.ThickBorder()). - BorderStyle(lipgloss.NewStyle().Foreground(lipgloss.Color(theme.ColorBorder))). + Headers(paddedHeaders...). + Rows(constrainedRows...). + BorderHeader(true). // Show border under header. 
+ BorderTop(false). // No top border. + BorderBottom(false). // No bottom border. + BorderLeft(false). // No left border. + BorderRight(false). // No right border. + BorderRow(false). // No row separators. + BorderColumn(false). // No column separators. + BorderStyle(lipgloss.NewStyle().Foreground(lipgloss.Color("8"))). // Gray border. StyleFunc(func(row, col int) lipgloss.Style { - style := lipgloss.NewStyle().PaddingLeft(1).PaddingRight(1) - if row == -1 { - return style. - Foreground(lipgloss.Color(theme.ColorGreen)). - Bold(true). - Align(lipgloss.Center) + switch row { + case table.HeaderRow: + return headerStyle.Padding(0, 1) + default: + // Apply semantic styling based on cell content. + baseStyle := cellStyle.Padding(0, 1) + // Row indices for data start at 0, matching the rows array. + if row >= 0 && row < len(constrainedRows) && col < len(constrainedRows[row]) { + cellValue := constrainedRows[row][col] + return getCellStyle(cellValue, &baseStyle, styles) + } + return baseStyle } - return style.Inherit(theme.Styles.Description) - }). - Headers(header...). - Rows(rows...) + }) return t.String() + utils.GetLineEnding() } @@ -236,8 +890,9 @@ func (f *TableFormatter) Format(data map[string]interface{}, options FormatOptio func calculateMaxKeyWidth(valueKeys []string) int { maxKeyWidth := DefaultKeyWidth // Base width assumption for _, key := range valueKeys { - if len(key) > maxKeyWidth { - maxKeyWidth = len(key) + keyWidth := lipgloss.Width(key) + if keyWidth > maxKeyWidth { + maxKeyWidth = keyWidth } } return maxKeyWidth @@ -258,7 +913,8 @@ func getMaxValueWidth(stackData map[string]interface{}, valueKeys []string) int for _, valueKey := range valueKeys { if val, ok := stackData[valueKey]; ok { formattedValue := formatTableCellValue(val) - valueWidth := len(formattedValue) + // For multi-line values, get the width of the widest line. 
+ valueWidth := getMaxLineWidth(formattedValue) if valueWidth > maxWidth { maxWidth = valueWidth @@ -269,10 +925,36 @@ func getMaxValueWidth(stackData map[string]interface{}, valueKeys []string) int return limitWidth(maxWidth) } +// getMaxLineWidth returns the maximum visual width of any line in a multi-line string. +// Uses lipgloss.Width to properly handle ANSI codes and multi-byte characters. +func getMaxLineWidth(s string) int { + if s == "" { + return 0 + } + + maxWidth := 0 + lines := splitLines(s) + for _, line := range lines { + width := lipgloss.Width(line) + if width > maxWidth { + maxWidth = width + } + } + return maxWidth +} + +// splitLines splits a string by newlines. +func splitLines(s string) []string { + if s == "" { + return []string{} + } + return strings.Split(s, fmtNewline) +} + // calculateStackColumnWidth calculates the width for a single stack column. func calculateStackColumnWidth(stackName string, stackData map[string]interface{}, valueKeys []string) int { - // Start with the width based on stack name - columnWidth := limitWidth(len(stackName)) + // Start with the width based on stack name using visual width. 
+ columnWidth := limitWidth(lipgloss.Width(stackName)) // Check value widths valueWidth := getMaxValueWidth(stackData, valueKeys) diff --git a/pkg/list/format/table_test.go b/pkg/list/format/table_test.go new file mode 100644 index 0000000000..8774e2745f --- /dev/null +++ b/pkg/list/format/table_test.go @@ -0,0 +1,921 @@ +package format + +import ( + "reflect" + "strings" + "testing" + + "github.com/stretchr/testify/assert" +) + +func TestTryExpandScalarArray(t *testing.T) { + tests := []struct { + name string + input interface{} + expected string + }{ + { + name: "String array", + input: []interface{}{"us-east-1a", "us-east-1b", "us-east-1c"}, + expected: "us-east-1a\nus-east-1b\nus-east-1c", + }, + { + name: "Integer array", + input: []interface{}{1, 2, 3}, + expected: "1\n2\n3", + }, + { + name: "Boolean array", + input: []interface{}{true, false, true}, + expected: "true\nfalse\ntrue", + }, + { + name: "Empty array", + input: []interface{}{}, + expected: "", + }, + { + name: "Mixed types (non-scalar)", + input: []interface{}{"string", map[string]string{"key": "value"}}, + expected: "", // Should return empty for non-scalar arrays + }, + { + name: "Nested array (non-scalar)", + input: []interface{}{[]string{"a", "b"}, []string{"c", "d"}}, + expected: "", // Should return empty for nested arrays + }, + { + name: "Array with very long strings (too wide)", + input: []interface{}{"this-is-a-very-long-string-that-exceeds-the-width-threshold", "short"}, + expected: "", // Should return empty if any item is too wide + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + v := reflect.ValueOf(tt.input) + result := tryExpandScalarArray(v) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestFormatCollectionValue(t *testing.T) { + tests := []struct { + name string + input interface{} + expected string + }{ + { + name: "Scalar string array expands", + input: []interface{}{"zone-a", "zone-b", "zone-c"}, + expected: "zone-a\nzone-b\nzone-c", 
+ }, + { + name: "Scalar map expands", + input: map[string]interface{}{"key1": "value1", "key2": "value2"}, + expected: "key1: value1\nkey2: value2", + }, + { + name: "Complex map shows placeholder", + input: map[string]interface{}{"key1": map[string]string{"nested": "value"}}, + expected: "{...} (1 keys)", + }, + { + name: "Complex array shows placeholder", + input: []interface{}{map[string]string{"a": "b"}, map[string]string{"c": "d"}}, + expected: "[...] (2 items)", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + v := reflect.ValueOf(tt.input) + result := formatCollectionValue(v) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestJoinItems(t *testing.T) { + tests := []struct { + name string + items []string + expected string + }{ + { + name: "Multiple items", + items: []string{"item1", "item2", "item3"}, + expected: "item1\nitem2\nitem3", + }, + { + name: "Single item", + items: []string{"item1"}, + expected: "item1", + }, + { + name: "Empty array", + items: []string{}, + expected: "", + }, + { + name: "Items with long values get truncated", + items: []string{strings.Repeat("a", 70)}, + expected: strings.Repeat("a", 57) + "...", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := joinItems(tt.items) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestFormatTableCellValueWithArrays(t *testing.T) { + tests := []struct { + name string + input interface{} + contains string + }{ + { + name: "Scalar array expands", + input: []interface{}{"us-east-1a", "us-east-1b"}, + contains: "us-east-1a\nus-east-1b", + }, + { + name: "Scalar map expands", + input: map[string]interface{}{"key": "value"}, + contains: "key: value", + }, + { + name: "Complex map shows placeholder", + input: map[string]interface{}{"key": map[string]string{"nested": "value"}}, + contains: "{...} (1 keys)", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := 
formatTableCellValue(tt.input) + assert.Contains(t, result, tt.contains) + }) + } +} + +func TestTryExpandScalarMap(t *testing.T) { + tests := []struct { + name string + input interface{} + expected string + }{ + { + name: "String map", + input: map[string]interface{}{"env": "production", "team": "platform"}, + expected: "env: production\nteam: platform", + }, + { + name: "Mixed scalar types", + input: map[string]interface{}{"count": 5, "enabled": true, "name": "test"}, + expected: "count: 5\nenabled: true\nname: test", + }, + { + name: "Empty map", + input: map[string]interface{}{}, + expected: "", + }, + { + name: "Nested map (non-scalar)", + input: map[string]interface{}{"outer": map[string]string{"inner": "value"}}, + expected: "", // Should return empty for non-scalar values + }, + { + name: "Map with array value (non-scalar)", + input: map[string]interface{}{"list": []string{"a", "b"}}, + expected: "", // Should return empty for non-scalar values + }, + { + name: "Map with very long value (too wide)", + input: map[string]interface{}{"key": "this-is-a-very-long-string-that-exceeds-the-width-threshold"}, + expected: "", // Should return empty if any item is too wide + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + v := reflect.ValueOf(tt.input) + result := tryExpandScalarMap(v) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestGetMaxLineWidth(t *testing.T) { + tests := []struct { + name string + input string + expected int + }{ + { + name: "Single line", + input: "hello", + expected: 5, + }, + { + name: "Multi-line - first longest", + input: "hello world\nhi\nbye", + expected: 11, + }, + { + name: "Multi-line - middle longest", + input: "hi\nhello world\nbye", + expected: 11, + }, + { + name: "Empty string", + input: "", + expected: 0, + }, + { + name: "ANSI colored text", + input: "\x1b[31mred\x1b[0m\nblue", + expected: 4, // "blue" is longer visually, ANSI codes don't count + }, + } + + for _, tt := range tests { + 
t.Run(tt.name, func(t *testing.T) { + result := getMaxLineWidth(tt.input) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestSplitLines(t *testing.T) { + tests := []struct { + name string + input string + expected []string + }{ + { + name: "Single line", + input: "hello", + expected: []string{"hello"}, + }, + { + name: "Multiple lines", + input: "line1\nline2\nline3", + expected: []string{"line1", "line2", "line3"}, + }, + { + name: "Empty string", + input: "", + expected: []string{}, + }, + { + name: "Trailing newline", + input: "line1\nline2\n", + expected: []string{"line1", "line2", ""}, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := splitLines(tt.input) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestRenderInlineMarkdown(t *testing.T) { + tests := []struct { + name string + input string + contains string // Check if output contains this (for rendered markdown) + }{ + { + name: "Empty string", + input: "", + contains: "", + }, + { + name: "Plain text", + input: "Virtual Private Cloud with subnets", + contains: "Virtual Private Cloud with subnets", + }, + { + name: "Bold text", + input: "**Important** configuration", + contains: "Important", + }, + { + name: "Italic text", + input: "*Enhanced* security", + contains: "Enhanced", + }, + { + name: "Inline code", + input: "Configure `vpc_id` parameter", + contains: "vpc_id", + }, + { + name: "Multiple newlines collapsed", + input: "Line one\n\nLine two\n\nLine three", + contains: "Line one Line two Line three", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := renderInlineMarkdown(tt.input) + if tt.contains != "" { + assert.Contains(t, result, tt.contains) + } else { + assert.Equal(t, tt.contains, result) + } + }) + } +} + +func TestDetectContentType_NoValue(t *testing.T) { + tests := []struct { + name string + input string + expected cellContentType + }{ + { + name: " detected", + input: "", + expected: 
contentTypeNoValue, + }, + { + name: "Empty string is default", + input: "", + expected: contentTypeDefault, + }, + { + name: "Boolean true", + input: "true", + expected: contentTypeBoolean, + }, + { + name: "Number", + input: "42", + expected: contentTypeNumber, + }, + { + name: "Placeholder map", + input: "{...} (3 keys)", + expected: contentTypePlaceholder, + }, + { + name: "Placeholder array", + input: "[...] (5 items)", + expected: contentTypePlaceholder, + }, + { + name: "Regular text", + input: "vpc", + expected: contentTypeDefault, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := detectContentType(tt.input) + assert.Equal(t, tt.expected, result) + }) + } +} + +// TestCalculateColumnWidths tests the column width calculation logic. +func TestCalculateColumnWidths(t *testing.T) { + tests := []struct { + name string + header []string + rows [][]string + terminalWidth int + expectMinLen int + }{ + { + name: "Empty columns", + header: []string{}, + rows: [][]string{}, + terminalWidth: 120, + expectMinLen: 0, + }, + { + name: "Simple table", + header: []string{"Name", "Value"}, + rows: [][]string{{"vpc", "10.0.0.0/16"}}, + terminalWidth: 120, + expectMinLen: 2, + }, + { + name: "Table with Description column", + header: []string{"Component", "Stack", "Description"}, + rows: [][]string{{"vpc", "prod", "Virtual Private Cloud for production"}}, + terminalWidth: 120, + expectMinLen: 3, + }, + { + name: "Narrow terminal", + header: []string{"Name", "Value"}, + rows: [][]string{{"component", "value"}}, + terminalWidth: 40, + expectMinLen: 2, + }, + { + name: "Very narrow terminal (minimum width enforcement)", + header: []string{"Name", "Value"}, + rows: [][]string{{"a", "b"}}, + terminalWidth: 10, + expectMinLen: 2, + }, + { + name: "Multi-line content", + header: []string{"Tags"}, + rows: [][]string{{"env: prod\nteam: platform\nregion: us-east-1"}}, + terminalWidth: 120, + expectMinLen: 1, + }, + } + + for _, tt := range tests { 
+ t.Run(tt.name, func(t *testing.T) { + result := calculateColumnWidths(tt.header, tt.rows, tt.terminalWidth) + assert.Equal(t, tt.expectMinLen, len(result)) + + // Verify all widths are positive (actual minimum may vary based on terminal width constraints). + for i, width := range result { + assert.Greater(t, width, 0, "Column %d width %d should be positive", i, width) + } + }) + } +} + +// TestCreateStyledTable tests the main table creation function. +func TestCreateStyledTable(t *testing.T) { + tests := []struct { + name string + header []string + rows [][]string + contains string + }{ + { + name: "Simple table", + header: []string{"Name", "Value"}, + rows: [][]string{{"vpc", "prod"}}, + contains: "Name", + }, + { + name: "Empty table", + header: []string{"Name"}, + rows: [][]string{}, + contains: "Name", + }, + { + name: "Table with Description column (markdown rendering)", + header: []string{"Component", "Description"}, + rows: [][]string{{"vpc", "**Virtual** Private Cloud"}}, + contains: "Component", + }, + { + name: "Multi-column table", + header: []string{"Component", "Stack", "Type", "Enabled"}, + rows: [][]string{{"vpc", "prod-ue2-dev", "terraform", "true"}}, + contains: "Component", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := CreateStyledTable(tt.header, tt.rows) + assert.NotEmpty(t, result) + assert.Contains(t, result, tt.contains) + }) + } +} + +// TestTableFormatterFormat tests the Format method. 
+func TestTableFormatterFormat(t *testing.T) { + tests := []struct { + name string + data map[string]interface{} + options FormatOptions + expectError bool + }{ + { + name: "TTY mode with simple data", + data: map[string]interface{}{ + "stack1": map[string]interface{}{ + "vars": map[string]interface{}{ + "environment": "prod", + }, + }, + }, + options: FormatOptions{ + TTY: true, + }, + expectError: false, + }, + { + name: "Non-TTY mode (falls back to CSV)", + data: map[string]interface{}{ + "stack1": map[string]interface{}{ + "vars": map[string]interface{}{ + "environment": "prod", + }, + }, + }, + options: FormatOptions{ + TTY: false, + }, + expectError: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + formatter := &TableFormatter{} + result, err := formatter.Format(tt.data, tt.options) + + if tt.expectError { + assert.Error(t, err) + } else { + assert.NoError(t, err) + assert.NotEmpty(t, result) + } + }) + } +} + +// TestPadToWidth tests string padding logic. +func TestPadToWidth(t *testing.T) { + tests := []struct { + name string + input string + width int + expected string + }{ + { + name: "No padding needed", + input: "hello", + width: 5, + expected: "hello", + }, + { + name: "Padding needed", + input: "hi", + width: 10, + expected: "hi ", + }, + { + name: "Zero width", + input: "test", + width: 0, + expected: "test", + }, + { + name: "Negative width", + input: "test", + width: -1, + expected: "test", + }, + { + name: "Multi-line content", + input: "line1\nline2", + width: 10, + expected: "line1 \nline2 ", + }, + { + name: "Already wider than target", + input: "very long string", + width: 5, + expected: "very long string", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := padToWidth(tt.input, tt.width) + assert.Equal(t, tt.expected, result) + }) + } +} + +// TestCreateHeader tests header creation. 
+func TestCreateHeader(t *testing.T) { + tests := []struct { + name string + stackKeys []string + customHeaders []string + expected []string + }{ + { + name: "Default headers", + stackKeys: []string{"stack1", "stack2"}, + customHeaders: []string{}, + expected: []string{"Key", "stack1", "stack2"}, + }, + { + name: "Custom headers", + stackKeys: []string{"stack1", "stack2"}, + customHeaders: []string{"Name", "Env1", "Env2"}, + expected: []string{"Name", "Env1", "Env2"}, + }, + { + name: "Empty stack keys", + stackKeys: []string{}, + customHeaders: []string{}, + expected: []string{"Key"}, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := createHeader(tt.stackKeys, tt.customHeaders) + assert.Equal(t, tt.expected, result) + }) + } +} + +// TestCreateRows tests row creation. +func TestCreateRows(t *testing.T) { + tests := []struct { + name string + data map[string]interface{} + valueKeys []string + stackKeys []string + expectLen int + }{ + { + name: "Simple key-value rows", + data: map[string]interface{}{ + "stack1": map[string]interface{}{ + "vars": map[string]interface{}{ + "environment": "prod", + "region": "us-east-1", + }, + }, + }, + valueKeys: []string{"environment", "region"}, + stackKeys: []string{"stack1"}, + expectLen: 2, + }, + { + name: "Value keyword handling", + data: map[string]interface{}{ + "stack1": "simple-value", + }, + valueKeys: []string{"value"}, + stackKeys: []string{"stack1"}, + expectLen: 1, + }, + { + name: "Multiple stacks", + data: map[string]interface{}{ + "stack1": map[string]interface{}{ + "vars": map[string]interface{}{ + "environment": "prod", + }, + }, + "stack2": map[string]interface{}{ + "vars": map[string]interface{}{ + "environment": "dev", + }, + }, + }, + valueKeys: []string{"environment"}, + stackKeys: []string{"stack1", "stack2"}, + expectLen: 1, + }, + { + name: "Empty data", + data: map[string]interface{}{}, + valueKeys: []string{}, + stackKeys: []string{}, + expectLen: 0, + }, + } + + 
for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := createRows(tt.data, tt.valueKeys, tt.stackKeys) + assert.Equal(t, tt.expectLen, len(result)) + }) + } +} + +// TestExtractAndSortKeys tests key extraction and sorting. +func TestExtractAndSortKeys(t *testing.T) { + tests := []struct { + name string + data map[string]interface{} + maxColumns int + expected []string + }{ + { + name: "Alphabetical sorting", + data: map[string]interface{}{ + "zebra": "value", + "apple": "value", + "banana": "value", + }, + maxColumns: 0, + expected: []string{"apple", "banana", "zebra"}, + }, + { + name: "Max columns limit", + data: map[string]interface{}{ + "alpha": "value", + "beta": "value", + "gamma": "value", + "delta": "value", + }, + maxColumns: 2, + expected: []string{"alpha", "beta"}, + }, + { + name: "Empty data", + data: map[string]interface{}{}, + maxColumns: 0, + expected: nil, // extractAndSortKeys returns nil for empty data + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := extractAndSortKeys(tt.data, tt.maxColumns) + assert.Equal(t, tt.expected, result) + }) + } +} + +// TestExtractValueKeys tests value key extraction. 
+func TestExtractValueKeys(t *testing.T) { + tests := []struct { + name string + data map[string]interface{} + stackKeys []string + expected []string + }{ + { + name: "Extract from vars", + data: map[string]interface{}{ + "stack1": map[string]interface{}{ + "vars": map[string]interface{}{ + "environment": "prod", + "region": "us-east-1", + }, + }, + }, + stackKeys: []string{"stack1"}, + expected: []string{"environment", "region"}, + }, + { + name: "Extract from top-level keys", + data: map[string]interface{}{ + "stack1": map[string]interface{}{ + "component": "vpc", + "stack": "prod", + }, + }, + stackKeys: []string{"stack1"}, + expected: []string{"component", "stack"}, + }, + { + name: "Array value returns 'value'", + data: map[string]interface{}{ + "stack1": []interface{}{"item1", "item2"}, + }, + stackKeys: []string{"stack1"}, + expected: []string{"value"}, + }, + { + name: "Scalar value returns 'value'", + data: map[string]interface{}{ + "stack1": "simple-string", + }, + stackKeys: []string{"stack1"}, + expected: []string{"value"}, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := extractValueKeys(tt.data, tt.stackKeys) + assert.ElementsMatch(t, tt.expected, result) + }) + } +} + +// TestFormatComplexValue tests JSON formatting for complex values. +func TestFormatComplexValue(t *testing.T) { + tests := []struct { + name string + input interface{} + contains string + }{ + { + name: "Map value", + input: map[string]string{"key": "value"}, + contains: "key", + }, + { + name: "Struct value", + input: struct{ Name string }{Name: "test"}, + contains: "test", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := formatComplexValue(tt.input) + assert.NotEmpty(t, result) + if tt.contains != "" { + assert.Contains(t, result, tt.contains) + } + }) + } +} + +// TestTruncateString tests string truncation. 
+func TestTruncateString(t *testing.T) {
+	tests := []struct {
+		name     string
+		input    string
+		expected string
+	}{
+		{
+			name:     "Short string (no truncation)",
+			input:    "short",
+			expected: "short",
+		},
+		{
+			name:     "Exact max width",
+			input:    strings.Repeat("a", MaxColumnWidth),
+			expected: strings.Repeat("a", MaxColumnWidth),
+		},
+		{
+			name:     "Exceeds max width",
+			input:    strings.Repeat("a", MaxColumnWidth+10),
+			expected: strings.Repeat("a", MaxColumnWidth-3) + "...",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result := truncateString(tt.input)
+			assert.Equal(t, tt.expected, result)
+		})
+	}
+}
+
+// TestFormatTableCellValue tests cell value formatting.
+func TestFormatTableCellValue(t *testing.T) {
+	tests := []struct {
+		name     string
+		input    interface{}
+		expected string
+	}{
+		{
+			name:     "Nil value",
+			input:    nil,
+			expected: "",
+		},
+		{
+			name:     "String value",
+			input:    "test-value",
+			expected: "test-value",
+		},
+		{
+			name:     "Boolean true",
+			input:    true,
+			expected: "true",
+		},
+		{
+			name:     "Boolean false",
+			input:    false,
+			expected: "false",
+		},
+		{
+			name:     "Integer",
+			input:    42,
+			expected: "42",
+		},
+		{
+			name:     "Float",
+			input:    3.14159,
+			expected: "3.14",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result := formatTableCellValue(tt.input)
+			assert.Equal(t, tt.expected, result)
+		})
+	}
+}
diff --git a/pkg/list/format/tree_instances.go b/pkg/list/format/tree_instances.go
new file mode 100644
index 0000000000..057f682d3e
--- /dev/null
+++ b/pkg/list/format/tree_instances.go
@@ -0,0 +1,222 @@
+package format
+
+import (
+	"fmt"
+	"sort"
+	"strings"
+
+	"github.com/charmbracelet/lipgloss"
+	"github.com/charmbracelet/lipgloss/tree"
+
+	listtree "github.com/cloudposse/atmos/pkg/list/tree"
+	"github.com/cloudposse/atmos/pkg/perf"
+	"github.com/cloudposse/atmos/pkg/ui/theme"
+)
+
+const (
+	treeNewline = "\n"
+)
+
+const (
+	spacerMarker          = "<<SPACER>>"
+	componentSpacerMarker = "<<COMPONENT_SPACER>>"
+)
+
+// RenderInstancesTree renders stacks with their components and import hierarchies as a tree. +// Structure: Stacks → Components → Imports. +// If showImports is false, only shows stacks and components without import details. +func RenderInstancesTree(stacksWithComponents map[string]map[string][]*listtree.ImportNode, showImports bool) string { + defer perf.Track(nil, "format.RenderInstancesTree")() + + if len(stacksWithComponents) == 0 { + return "No stacks found" + } + + header := renderTreeHeader("Component Instances") + root := buildInstancesRootTree(stacksWithComponents, showImports) + treeOutput := root.String() + cleanedOutput := cleanupSpacerMarkers(treeOutput, []string{spacerMarker, componentSpacerMarker}) + + return header + cleanedOutput + treeNewline +} + +// renderTreeHeader creates and renders a styled header for tree output. +func renderTreeHeader(title string) string { + h1Style := lipgloss.NewStyle(). + Foreground(lipgloss.Color(theme.ColorWhite)). + Background(lipgloss.Color(theme.ColorBlue)). + Bold(true). + Padding(0, 1) + + return h1Style.Render(title) + treeNewline +} + +// buildInstancesRootTree builds the root tree structure for instances. +func buildInstancesRootTree(stacksWithComponents map[string]map[string][]*listtree.ImportNode, showImports bool) *tree.Tree { + root := tree.New().EnumeratorStyle(getBranchStyle()) + + // Add spacer at the top. + topSpacer := tree.New().Root(spacerMarker).EnumeratorStyle(getBranchStyle()) + root.Child(topSpacer) + + // Sort stack names for consistent output. + stackNames := getSortedStackNames(stacksWithComponents) + + // Build tree for each stack. + for i, stackName := range stackNames { + components := stacksWithComponents[stackName] + stackNode := buildStackNodeWithComponents(stackName, components, showImports) + root.Child(stackNode) + + // Add spacer between stacks (but not after the last one). 
+ if i < len(stackNames)-1 { + spacer := tree.New().Root(spacerMarker).EnumeratorStyle(getBranchStyle()) + root.Child(spacer) + } + } + + return root +} + +// getSortedStackNames extracts and sorts stack names from a map. +func getSortedStackNames(stacksWithComponents map[string]map[string][]*listtree.ImportNode) []string { + stackNames := make([]string, 0, len(stacksWithComponents)) + for stackName := range stacksWithComponents { + stackNames = append(stackNames, stackName) + } + sort.Strings(stackNames) + return stackNames +} + +// cleanupSpacerMarkers removes spacer markers from tree output and replaces with styled vertical bars. +func cleanupSpacerMarkers(treeOutput string, markers []string) string { + lines := strings.Split(treeOutput, treeNewline) + cleaned := make([]string, 0, len(lines)) + + for _, line := range lines { + plainLine := stripANSI(line) + + if !containsAnyMarker(plainLine, markers) { + cleaned = append(cleaned, line) + continue + } + + // Replace spacer line with styled vertical bar. + indent := getIndentLevel(plainLine) + style := getBranchStyle() + cleaned = append(cleaned, strings.Repeat(" ", indent)+style.Render("│")) + } + + return strings.Join(cleaned, treeNewline) +} + +// containsAnyMarker checks if a string contains any of the given markers. +func containsAnyMarker(s string, markers []string) bool { + for _, marker := range markers { + if strings.Contains(s, marker) { + return true + } + } + return false +} + +// getIndentLevel returns the number of leading spaces in a string. +func getIndentLevel(s string) int { + indent := 0 + for _, ch := range s { + if ch != ' ' { + break + } + indent++ + } + return indent +} + +// buildStackNodeWithComponents creates a tree node for a stack with its components. +func buildStackNodeWithComponents(stackName string, components map[string][]*listtree.ImportNode, showImports bool) *tree.Tree { + // Style the stack name. 
+ styledStackName := getStackStyle().Render(stackName) + + // Create the stack node. + stackNode := tree.New(). + Root(styledStackName). + EnumeratorStyle(getBranchStyle()) + + // Sort component names for consistent output. + var componentNames []string + for componentName := range components { + componentNames = append(componentNames, componentName) + } + sort.Strings(componentNames) + + // Add component children with spacers between them. + for i, componentName := range componentNames { + imports := components[componentName] + var componentNode *tree.Tree + + // Extract component folder from imports if available. + var componentFolder string + if len(imports) > 0 && imports[0].ComponentFolder != "" { + componentFolder = imports[0].ComponentFolder + } + + if showImports { + componentNode = buildComponentNode(componentName, imports) + } else { + componentNode = buildComponentNodeSimple(componentName, componentFolder) + } + stackNode.Child(componentNode) + + // Add spacer between components only when showing imports (but not after the last one). + if showImports && i < len(componentNames)-1 { + spacer := tree.New().Root(componentSpacerMarker).EnumeratorStyle(getBranchStyle()) + stackNode.Child(spacer) + } + } + + return stackNode +} + +// buildComponentNodeSimple creates a tree node for a component without imports. +func buildComponentNodeSimple(componentName string, componentFolder string) *tree.Tree { + // Build display text: component_name (component_folder). + var displayText string + if componentFolder != "" { + // Style component name and folder path separately. + styledName := getComponentStyle().Render(componentName) + // Use muted style for the folder path in parentheses. 
+ mutedStyle := lipgloss.NewStyle().Foreground(lipgloss.Color(theme.ColorDarkGray)) + styledFolder := mutedStyle.Render(fmt.Sprintf(" (%s)", componentFolder)) + displayText = styledName + styledFolder + } else { + displayText = getComponentStyle().Render(componentName) + } + + // Create the component node without children. + componentNode := tree.New(). + Root(displayText). + EnumeratorStyle(getBranchStyle()) + + return componentNode +} + +// buildComponentNode creates a tree node for a component with its imports. +func buildComponentNode(componentName string, imports []*listtree.ImportNode) *tree.Tree { + // Style the component name. + styledComponentName := getComponentStyle().Render(componentName) + + // Create the component node. + componentNode := tree.New(). + Root(styledComponentName). + EnumeratorStyle(getBranchStyle()) + + // Add import children. + for _, imp := range imports { + importNode := buildImportNode(imp) + componentNode.Child(importNode) + } + + return componentNode +} + +// Note: getComponentStyle and buildImportNode are defined in tree_utils.go and tree_stacks.go respectively. diff --git a/pkg/list/format/tree_instances_test.go b/pkg/list/format/tree_instances_test.go new file mode 100644 index 0000000000..80319b2d09 --- /dev/null +++ b/pkg/list/format/tree_instances_test.go @@ -0,0 +1,129 @@ +package format + +import ( + "strings" + "testing" + + listtree "github.com/cloudposse/atmos/pkg/list/tree" +) + +func TestRenderInstancesTree_SpacerBetweenComponents(t *testing.T) { + // Create sample stacks with multiple components. 
+ stacksWithComponents := map[string]map[string][]*listtree.ImportNode{ + "plat-uw2-staging": { + "vpc": { + {Path: "orgs/acme/plat/staging/_defaults"}, + {Path: "orgs/acme/plat/_defaults"}, + {Path: "orgs/acme/_defaults"}, + }, + "vpc-flow-logs-bucket": { + {Path: "orgs/acme/plat/staging/_defaults"}, + {Path: "orgs/acme/plat/_defaults"}, + }, + "eks": { + {Path: "catalog/eks/defaults"}, + }, + }, + } + + output := RenderInstancesTree(stacksWithComponents, false) + + // Strip ANSI codes for testing. + plainOutput := stripANSI(output) + + // Verify header is present. + if !strings.Contains(plainOutput, "Component Instances") { + t.Error("Expected 'Component Instances' header in output") + } + + // Verify stack name is present. + if !strings.Contains(plainOutput, "plat-uw2-staging") { + t.Error("Expected 'plat-uw2-staging' stack in output") + } + + // Verify all components are present. + expectedComponents := []string{"vpc", "vpc-flow-logs-bucket", "eks"} + for _, comp := range expectedComponents { + if !strings.Contains(plainOutput, comp) { + t.Errorf("Expected component '%s' in output", comp) + } + } + + // Verify spacer lines (│) exist between components. + lines := strings.Split(plainOutput, "\n") + spacerCount := 0 + for _, line := range lines { + stripped := strings.TrimSpace(line) + // Look for lines that are just the vertical bar (spacer). 
+ if stripped == "│" { + spacerCount++ + } + } + + // We should have 1 spacer: + // - 1 at top (after "Component Instances" header) + // - No spacers between components when showImports=false (only one stack) + if spacerCount != 1 { + t.Errorf("Expected exactly 1 spacer line, got %d", spacerCount) + t.Logf("Output:\n%s", plainOutput) + } +} + +func TestRenderInstancesTree_EmptyInput(t *testing.T) { + stacksWithComponents := map[string]map[string][]*listtree.ImportNode{} + + output := RenderInstancesTree(stacksWithComponents, false) + + if !strings.Contains(output, "No stacks found") { + t.Errorf("Expected 'No stacks found' message, got: %s", output) + } +} + +func TestRenderInstancesTree_MultipleStacks(t *testing.T) { + stacksWithComponents := map[string]map[string][]*listtree.ImportNode{ + "stack-a": { + "component1": {{Path: "imports/a"}}, + "component2": {{Path: "imports/b"}}, + }, + "stack-b": { + "component3": {{Path: "imports/c"}}, + }, + } + + output := RenderInstancesTree(stacksWithComponents, false) + plainOutput := stripANSI(output) + + // Verify both stacks are present. + if !strings.Contains(plainOutput, "stack-a") { + t.Error("Expected 'stack-a' in output") + } + if !strings.Contains(plainOutput, "stack-b") { + t.Error("Expected 'stack-b' in output") + } + + // Verify all components are present. + expectedComponents := []string{"component1", "component2", "component3"} + for _, comp := range expectedComponents { + if !strings.Contains(plainOutput, comp) { + t.Errorf("Expected component '%s' in output", comp) + } + } + + // Count spacers. 
+	lines := strings.Split(plainOutput, "\n")
+	spacerCount := 0
+	for _, line := range lines {
+		if strings.TrimSpace(line) == "│" {
+			spacerCount++
+		}
+	}
+
+	// Should have:
+	// - 1 at top (after "Component Instances" header)
+	// - 1 between stack-a and stack-b
+	// Total: 2 spacers (no spacers between components when showImports=false)
+	if spacerCount != 2 {
+		t.Errorf("Expected exactly 2 spacer lines, got %d", spacerCount)
+		t.Logf("Output:\n%s", plainOutput)
+	}
+}
diff --git a/pkg/list/format/tree_stacks.go b/pkg/list/format/tree_stacks.go
new file mode 100644
index 0000000000..4c47f2c5be
--- /dev/null
+++ b/pkg/list/format/tree_stacks.go
@@ -0,0 +1,129 @@
+package format
+
+import (
+	"fmt"
+	"sort"
+
+	"github.com/charmbracelet/lipgloss/tree"
+
+	listtree "github.com/cloudposse/atmos/pkg/list/tree"
+	"github.com/cloudposse/atmos/pkg/perf"
+)
+
+// RenderStacksTree renders stacks with their import hierarchies as a tree.
+// If showImports is false, only stack names are shown without import details.
+func RenderStacksTree(stacksWithImports map[string][]*listtree.ImportNode, showImports bool) string {
+	defer perf.Track(nil, "format.RenderStacksTree")()
+
+	if len(stacksWithImports) == 0 {
+		return "No stacks found"
+	}
+
+	header := renderTreeHeader("Stacks")
+	root := buildStacksRootTree(stacksWithImports, showImports)
+	treeOutput := root.String()
+	cleanedOutput := cleanupSpacerMarkers(treeOutput, []string{spacerMarker})
+
+	return header + cleanedOutput + treeNewline
+}
+
+// buildStacksRootTree builds the root tree structure for stacks.
+func buildStacksRootTree(stacksWithImports map[string][]*listtree.ImportNode, showImports bool) *tree.Tree {
+	root := tree.New().EnumeratorStyle(getBranchStyle())
+
+	// Add spacer at the top.
+	topSpacer := tree.New().Root(spacerMarker).EnumeratorStyle(getBranchStyle())
+	root.Child(topSpacer)
+
+	// Sort stack names for consistent output.
+ stackNames := getSortedKeysFromImportsMap(stacksWithImports) + + // Build tree for each stack. + for i, stackName := range stackNames { + imports := stacksWithImports[stackName] + var stackNode *tree.Tree + if showImports { + stackNode = buildStackNode(stackName, imports) + } else { + stackNode = buildStackNodeSimple(stackName) + } + root.Child(stackNode) + + // Add spacer between stacks (but not after the last one). + if i < len(stackNames)-1 { + spacer := tree.New().Root(spacerMarker).EnumeratorStyle(getBranchStyle()) + root.Child(spacer) + } + } + + return root +} + +// getSortedKeysFromImportsMap extracts and sorts stack names from an imports map. +func getSortedKeysFromImportsMap(stacksWithImports map[string][]*listtree.ImportNode) []string { + stackNames := make([]string, 0, len(stacksWithImports)) + for stackName := range stacksWithImports { + stackNames = append(stackNames, stackName) + } + sort.Strings(stackNames) + return stackNames +} + +// buildStackNodeSimple creates a tree node for a stack without imports. +func buildStackNodeSimple(stackName string) *tree.Tree { + // Style the stack name. + styledStackName := getStackStyle().Render(stackName) + + // Create the stack node without children. + stackNode := tree.New(). + Root(styledStackName). + EnumeratorStyle(getBranchStyle()) + + return stackNode +} + +// buildStackNode creates a tree node for a stack with its imports. +func buildStackNode(stackName string, imports []*listtree.ImportNode) *tree.Tree { + // Style the stack name. + styledStackName := getStackStyle().Render(stackName) + + // Create the stack node. + stackNode := tree.New(). + Root(styledStackName). + EnumeratorStyle(getBranchStyle()) + + // Add import children. + for _, imp := range imports { + importNode := buildImportNode(imp) + stackNode.Child(importNode) + } + + return stackNode +} + +// buildImportNode recursively creates a tree node for an import. 
+func buildImportNode(imp *listtree.ImportNode) *tree.Tree { + var nodeText string + + // Check if this is a circular reference. + if imp.Circular { + nodeText = getCircularStyle().Render(fmt.Sprintf("%s (circular reference)", imp.Path)) + } else { + nodeText = getImportStyle().Render(imp.Path) + } + + // Create the import node. + importNode := tree.New(). + Root(nodeText). + EnumeratorStyle(getBranchStyle()) + + // Recursively add children (unless circular). + if !imp.Circular { + for _, child := range imp.Children { + childNode := buildImportNode(child) + importNode.Child(childNode) + } + } + + return importNode +} diff --git a/pkg/list/format/tree_stacks_test.go b/pkg/list/format/tree_stacks_test.go new file mode 100644 index 0000000000..098d0d3ccf --- /dev/null +++ b/pkg/list/format/tree_stacks_test.go @@ -0,0 +1,157 @@ +package format + +import ( + "strings" + "testing" + + listtree "github.com/cloudposse/atmos/pkg/list/tree" +) + +func TestRenderStacksTree_SpacerBetweenStacks(t *testing.T) { + // Create sample stacks with imports. + stacksWithImports := map[string][]*listtree.ImportNode{ + "plat-uw2-dev": { + {Path: "orgs/acme/plat/dev/_defaults"}, + {Path: "orgs/acme/plat/_defaults"}, + {Path: "orgs/acme/_defaults"}, + }, + "plat-uw2-prod": { + {Path: "orgs/acme/plat/prod/_defaults"}, + {Path: "orgs/acme/plat/_defaults"}, + {Path: "orgs/acme/_defaults"}, + }, + "plat-uw2-staging": { + {Path: "orgs/acme/plat/staging/_defaults"}, + {Path: "orgs/acme/plat/_defaults"}, + {Path: "orgs/acme/_defaults"}, + }, + } + + output := RenderStacksTree(stacksWithImports, false) + + // Strip ANSI codes for testing. + plainOutput := stripANSI(output) + + // Verify header is present. + if !strings.Contains(plainOutput, "Stacks") { + t.Error("Expected 'Stacks' header in output") + } + + // Verify spacer lines (│) exist between stacks and at the top. 
+	lines := strings.Split(plainOutput, "\n")
+	spacerCount := 0
+	for _, line := range lines {
+		stripped := strings.TrimSpace(line)
+		// Look for lines that are just the vertical bar (spacer).
+		if stripped == "│" {
+			spacerCount++
+		}
+	}
+
+	// We should have at least 3 spacers (1 at top + 2 between the 3 sorted stacks: dev-prod and prod-staging).
+	if spacerCount < 3 {
+		t.Errorf("Expected at least 3 spacer lines (1 at top + 2 between stacks), got %d", spacerCount)
+		t.Logf("Output:\n%s", plainOutput)
+	}
+
+	// Verify all stack names are present.
+	expectedStacks := []string{"plat-uw2-dev", "plat-uw2-prod", "plat-uw2-staging"}
+	for _, stack := range expectedStacks {
+		if !strings.Contains(plainOutput, stack) {
+			t.Errorf("Expected stack '%s' in output", stack)
+		}
+	}
+
+	// When showImports=false, import paths should NOT be present.
+	unexpectedImports := []string{
+		"orgs/acme/plat/dev/_defaults",
+		"orgs/acme/plat/_defaults",
+		"orgs/acme/_defaults",
+	}
+	for _, imp := range unexpectedImports {
+		if strings.Contains(plainOutput, imp) {
+			t.Errorf("Did not expect import path '%s' in output when showImports=false", imp)
+		}
+	}
+}
+
+func TestRenderStacksTree_NoSpacerAfterLastStack(t *testing.T) {
+	stacksWithImports := map[string][]*listtree.ImportNode{
+		"stack-a": {
+			{Path: "imports/a"},
+		},
+		"stack-b": {
+			{Path: "imports/b"},
+		},
+	}
+
+	output := RenderStacksTree(stacksWithImports, false)
+	plainOutput := stripANSI(output)
+
+	lines := strings.Split(plainOutput, "\n")
+
+	// Find the last non-empty line.
+	lastNonEmptyIndex := -1
+	for i := len(lines) - 1; i >= 0; i-- {
+		if strings.TrimSpace(lines[i]) != "" {
+			lastNonEmptyIndex = i
+			break
+		}
+	}
+
+	// Verify the last line is not a spacer.
+ if lastNonEmptyIndex >= 0 { + lastLine := strings.TrimSpace(lines[lastNonEmptyIndex]) + if lastLine == "│" { + t.Error("Expected no spacer after the last stack") + } + } +} + +func TestRenderStacksTree_EmptyInput(t *testing.T) { + stacksWithImports := map[string][]*listtree.ImportNode{} + + output := RenderStacksTree(stacksWithImports, false) + + if !strings.Contains(output, "No stacks found") { + t.Errorf("Expected 'No stacks found' message, got: %s", output) + } +} + +func TestStripANSI(t *testing.T) { + tests := []struct { + name string + input string + expected string + }{ + { + name: "plain text", + input: "hello world", + expected: "hello world", + }, + { + name: "text with color codes", + input: "\x1b[32mgreen text\x1b[0m", + expected: "green text", + }, + { + name: "text with multiple colors", + input: "\x1b[31mred\x1b[0m \x1b[34mblue\x1b[0m", + expected: "red blue", + }, + { + name: "tree characters with styling", + input: "\x1b[90m├──\x1b[0m", + expected: "├──", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := stripANSI(tt.input) + if result != tt.expected { + t.Errorf("stripANSI(%q) = %q, expected %q", tt.input, result, tt.expected) + } + }) + } +} diff --git a/pkg/list/format/tree_utils.go b/pkg/list/format/tree_utils.go new file mode 100644 index 0000000000..305ed1b4cd --- /dev/null +++ b/pkg/list/format/tree_utils.go @@ -0,0 +1,62 @@ +package format + +import ( + "github.com/charmbracelet/lipgloss" + + "github.com/cloudposse/atmos/pkg/ui/theme" +) + +// getBranchStyle returns the style for tree branches. +// Uses Muted style from the current theme for subtle branch lines. +func getBranchStyle() lipgloss.Style { + styles := theme.GetCurrentStyles() + return styles.Muted +} + +// getStackStyle returns the style for stack names. +// Uses Body style from the current theme for primary text. 
+func getStackStyle() lipgloss.Style {
+	styles := theme.GetCurrentStyles()
+	return styles.Body
+}
+
+// getComponentStyle returns the style for component names.
+// Uses Command style from the current theme for component names.
+func getComponentStyle() lipgloss.Style {
+	styles := theme.GetCurrentStyles()
+	return styles.Command
+}
+
+// getImportStyle returns the style for import paths.
+// Uses Info style from the current theme for import file paths.
+func getImportStyle() lipgloss.Style {
+	styles := theme.GetCurrentStyles()
+	return styles.Info
+}
+
+// getCircularStyle returns the style for circular reference markers.
+// Uses Error style from the current theme for warnings about circular imports.
+func getCircularStyle() lipgloss.Style {
+	styles := theme.GetCurrentStyles()
+	return styles.Error
+}
+
+// stripANSI removes ANSI escape codes from a string for text processing.
+func stripANSI(s string) string {
+	result := make([]rune, 0, len(s))
+	inEscape := false
+	for _, r := range s {
+		if r == '\x1b' {
+			inEscape = true
+			continue
+		}
+		if inEscape {
+			if r == 'm' {
+				inEscape = false
+			}
+			continue
+		}
+		result = append(result, r)
+	}
+	return string(result)
+}
diff --git a/pkg/list/importresolver/provenance.go b/pkg/list/importresolver/provenance.go
new file mode 100644
index 0000000000..fc9d64192d
--- /dev/null
+++ b/pkg/list/importresolver/provenance.go
@@ -0,0 +1,492 @@
+package importresolver
+
+import (
+	"fmt"
+	"os"
+	"path/filepath"
+	"strings"
+
+	e "github.com/cloudposse/atmos/internal/exec"
+	"github.com/cloudposse/atmos/pkg/list/tree"
+	log "github.com/cloudposse/atmos/pkg/logger"
+	"github.com/cloudposse/atmos/pkg/perf"
+	"github.com/cloudposse/atmos/pkg/schema"
+	"gopkg.in/yaml.v3"
+)
+
+const (
+	// File extension constants.
+	yamlExt = ".yaml"
+	ymlExt  = ".yml"
+
+	// Component metadata field names.
+	fieldStackFile = "stack_file"
+)
+
+// ResolveImportTreeFromProvenance resolves import trees for all stacks using the provenance system.
+// Returns: map[stackName]map[componentName][]*tree.ImportNode +// +// Note: This function relies on merge contexts being populated during stack processing. +// Merge contexts are automatically created when ExecuteDescribeStacks processes stack files. +// +//nolint:gocognit,revive // Complexity from nested stack/component/import tree resolution (unavoidable pattern). +func ResolveImportTreeFromProvenance( + stacksMap map[string]interface{}, + atmosConfig *schema.AtmosConfiguration, +) (map[string]map[string][]*tree.ImportNode, error) { + defer perf.Track(nil, "list.importresolver.ResolveImportTreeFromProvenance")() + + result := make(map[string]map[string][]*tree.ImportNode) + + // Get all merge contexts (keyed by stack file path). + allMergeContexts := e.GetAllMergeContexts() + + log.Trace("Found merge contexts and stacks", "merge_context_count", len(allMergeContexts), "stack_count", len(stacksMap)) + + // Iterate over all merge contexts. + // Each merge context corresponds to a stack file and contains the ImportChain. + for stackFilePath, ctx := range allMergeContexts { + if ctx == nil { + log.Trace("Merge context is nil", fieldStackFile, stackFilePath) + continue + } + + if len(ctx.ImportChain) == 0 { + log.Trace("Merge context has empty import chain", fieldStackFile, stackFilePath) + continue + } + + log.Trace("Processing stack file", fieldStackFile, stackFilePath, "import_chain_length", len(ctx.ImportChain), "import_chain", ctx.ImportChain) + + // Find which stack(s) in stacksMap have this file path. + // We need to search through the stacksMap to find matching atmos_stack_file values. + matchingStacks := findStacksForFilePath(stackFilePath, stacksMap, atmosConfig) + if len(matchingStacks) == 0 { + // Could not determine stack name for this file. + log.Trace("Could not find stack for file path", fieldStackFile, stackFilePath) + continue + } + + // Process each matching stack. 
+ for stackName, components := range matchingStacks { + log.Trace("Found stack with components", "stack", stackName, "component_count", len(components), fieldStackFile, stackFilePath) + + // Build import tree from the ImportChain. + componentImports := make(map[string][]*tree.ImportNode) + + // Get component metadata to extract component folders. + stackData := stacksMap[stackName] + componentFolders := extractComponentFolders(stackData) + + for componentName := range components { + // All components in a stack share the same import chain. + importNodes := buildImportTreeFromChain(ctx.ImportChain, atmosConfig) + + // Set component folder on the first import node. + if len(importNodes) > 0 { + if folder, ok := componentFolders[componentName]; ok { + // Store component folder in first node for access during rendering. + importNodes[0].ComponentFolder = folder + } + } + + componentImports[componentName] = importNodes + } + + result[stackName] = componentImports + } + } + + return result, nil +} + +// findStacksForFilePath finds all stacks that have components from the given file path. +// Returns a map of stackName -> componentNames. +// +//nolint:gocognit,revive,funlen // Complexity and length from nested stack/component/metadata inspection (unavoidable pattern). +func findStacksForFilePath( + filePath string, + stacksMap map[string]interface{}, + atmosConfig *schema.AtmosConfiguration, +) map[string]map[string]bool { + defer perf.Track(nil, "list.importresolver.findStacksForFilePath")() + + result := make(map[string]map[string]bool) + + // Normalize the file path for comparison. + filePath = filepath.Clean(filePath) + + // Convert to relative path for easier comparison. + basePath := filepath.Clean(atmosConfig.StacksBaseAbsolutePath) + relFilePath, err := filepath.Rel(basePath, filePath) + if err != nil { + relFilePath = filePath + } + + // Remove .yaml extension from relative path for comparison. 
+ relFilePathNoExt := strings.TrimSuffix(relFilePath, yamlExt) + relFilePathNoExt = strings.TrimSuffix(relFilePathNoExt, ymlExt) + + log.Trace("Looking for stacks with file path", "abs", filePath, "rel", relFilePath, "noext", relFilePathNoExt) + + // Iterate through all stacks. + for stackName, stackData := range stacksMap { + stackMap, ok := stackData.(map[string]interface{}) + if !ok { + continue + } + + // Look for components section. + componentsSection, ok := stackMap["components"].(map[string]interface{}) + if !ok { + continue + } + + components := make(map[string]bool) + + // Iterate through component types (terraform, helmfile, etc). + for _, typeData := range componentsSection { + typeMap, ok := typeData.(map[string]interface{}) + if !ok { + continue + } + + // Check each component. + for componentName, componentData := range typeMap { + componentMap, ok := componentData.(map[string]interface{}) + if !ok { + continue + } + + // Check if this component has atmos_stack_file matching our file. + if stackFile, ok := componentMap["atmos_stack_file"].(string); ok { + // Normalize stack file path and remove extension. + stackFileClean := filepath.Clean(stackFile) + stackFileNoExt := strings.TrimSuffix(stackFileClean, yamlExt) + stackFileNoExt = strings.TrimSuffix(stackFileNoExt, ymlExt) + + // Try multiple matching strategies: + // 1. Exact match (with or without extension) + // 2. Relative path match + // 3. Match without extensions + if stackFileClean == filePath || stackFileClean == relFilePath || + stackFile == relFilePath || stackFileNoExt == relFilePathNoExt { + components[componentName] = true + log.Trace("Component has matching atmos_stack_file", "component", componentName, "stack", stackName, "atmos_stack_file", stackFile, "matched_with", relFilePath) + } + } + } + } + + if len(components) > 0 { + result[stackName] = components + } + } + + return result +} + +// extractComponentFolders extracts component folder paths from stack data. 
+// Returns a map of componentName -> componentFolder. +func extractComponentFolders(stackData interface{}) map[string]string { + defer perf.Track(nil, "list.importresolver.extractComponentFolders")() + + folders := make(map[string]string) + + stackMap, ok := stackData.(map[string]interface{}) + if !ok { + return folders + } + + // Look for components section. + componentsSection, ok := stackMap["components"].(map[string]interface{}) + if !ok { + return folders + } + + // Iterate through component types (terraform, helmfile, etc). + for componentType, typeData := range componentsSection { + typeMap, ok := typeData.(map[string]interface{}) + if !ok { + continue + } + + // Extract component folder from metadata. + for componentName, componentData := range typeMap { + componentMap, ok := componentData.(map[string]interface{}) + if !ok { + continue + } + + // Try to get component folder from metadata.component field. + // This is the "real" component folder that the component uses. + folder := componentName // Default to component name + + if metadata, ok := componentMap["metadata"].(map[string]interface{}); ok { + if componentVal, ok := metadata["component"].(string); ok && componentVal != "" { + folder = componentVal + } + } + + // Build full path: components/{type}/{folder} + fullPath := fmt.Sprintf("components/%s/%s", componentType, folder) + folders[componentName] = fullPath + } + } + + return folders +} + +// extractComponentsFromStackData extracts component names from stack data. +func extractComponentsFromStackData(stackData interface{}) map[string]bool { + defer perf.Track(nil, "list.importresolver.extractComponentsFromStackData")() + + components := make(map[string]bool) + + stackMap, ok := stackData.(map[string]interface{}) + if !ok { + return components + } + + // Look for components section. 
+ componentsSection, ok := stackMap["components"].(map[string]interface{}) + if !ok { + return components + } + + // Iterate through component types (terraform, helmfile, etc). + for _, typeData := range componentsSection { + typeMap, ok := typeData.(map[string]interface{}) + if !ok { + continue + } + + // Extract component names. + for componentName := range typeMap { + components[componentName] = true + } + } + + return components +} + +// buildImportTreeFromChain builds an import tree from MergeContext.ImportChain. +// ImportChain[0] = parent stack file +// ImportChain[1..N] = imported files in merge order. +func buildImportTreeFromChain(importChain []string, atmosConfig *schema.AtmosConfiguration) []*tree.ImportNode { + defer perf.Track(nil, "list.importresolver.buildImportTreeFromChain")() + + if len(importChain) <= 1 { + // No imports (just the stack file itself). + return nil + } + + var roots []*tree.ImportNode + visited := make(map[string]bool) + importCache := make(map[string][]string) + + // Process each import in the chain (skip index 0 which is the parent stack). + for i := 1; i < len(importChain); i++ { + importPath := importChain[i] + + // Convert absolute path to relative path for display. + relativePath := stripBasePath(importPath, atmosConfig.StacksBaseAbsolutePath) + + // Check for circular reference. + circular := visited[importPath] + + node := &tree.ImportNode{ + Path: relativePath, + Circular: circular, + } + + if !circular { + visited[importPath] = true + // Recursively resolve this import's imports. + node.Children = resolveImportFileImports(importPath, atmosConfig, visited, importCache) + // Backtrack to allow same import in different branches. + delete(visited, importPath) + } + + roots = append(roots, node) + } + + return roots +} + +// stripBasePath converts an absolute path to a relative path by removing the base path. +// Returns a relative, extensionless, forward-slash-normalized path suitable for display. 
+func stripBasePath(absolutePath, basePath string) string { + defer perf.Track(nil, "list.importresolver.stripBasePath")() + + // Normalize both paths. + absolutePath = filepath.Clean(absolutePath) + basePath = filepath.Clean(basePath) + + // Try to compute relative path using filepath.Rel. + relativePath, err := filepath.Rel(basePath, absolutePath) + if err != nil { + // Fall back to string prefix removal if Rel fails. + if !strings.HasSuffix(basePath, string(filepath.Separator)) { + basePath += string(filepath.Separator) + } + relativePath = strings.TrimPrefix(absolutePath, basePath) + } + + // Convert to forward slashes for consistent cross-platform display. + relativePath = filepath.ToSlash(relativePath) + + // Remove .yaml/.yml extension for cleaner display (after ToSlash for consistency). + relativePath = strings.TrimSuffix(relativePath, ".yaml") + relativePath = strings.TrimSuffix(relativePath, ".yml") + + return relativePath +} + +// resolveImportFileImports recursively resolves imports from a file. +func resolveImportFileImports( + importFilePath string, + atmosConfig *schema.AtmosConfiguration, + visited map[string]bool, + cache map[string][]string, +) []*tree.ImportNode { + defer perf.Track(nil, "list.importresolver.resolveImportFileImports")() + + // Check cache first. + if imports, ok := cache[importFilePath]; ok { + return buildNodesFromImportPaths(imports, importFilePath, atmosConfig, visited, cache) + } + + // Read imports from the file. + imports, err := readImportsFromYAMLFile(importFilePath) + if err != nil { + // File can't be read or has no imports. + return nil + } + + // Cache the imports. + cache[importFilePath] = imports + + return buildNodesFromImportPaths(imports, importFilePath, atmosConfig, visited, cache) +} + +// buildNodesFromImportPaths builds import nodes from a list of import paths. 
+func buildNodesFromImportPaths( + imports []string, + parentFilePath string, + atmosConfig *schema.AtmosConfiguration, + visited map[string]bool, + cache map[string][]string, +) []*tree.ImportNode { + defer perf.Track(nil, "list.importresolver.buildNodesFromImportPaths")() + + var children []*tree.ImportNode + + for _, importPath := range imports { + // Resolve import path to absolute path. + absolutePath := resolveImportPath(importPath, parentFilePath, atmosConfig) + + // Check for circular reference. + circular := visited[absolutePath] + + node := &tree.ImportNode{ + Path: importPath, // Use original relative path for display + Circular: circular, + } + + if !circular { + visited[absolutePath] = true + // Recursively resolve children. + node.Children = resolveImportFileImports(absolutePath, atmosConfig, visited, cache) + // Backtrack for other branches. + delete(visited, absolutePath) + } + + children = append(children, node) + } + + return children +} + +// resolveImportPath converts a relative import path to an absolute file path. +func resolveImportPath(importPath, parentFilePath string, atmosConfig *schema.AtmosConfiguration) string { + defer perf.Track(nil, "list.importresolver.resolveImportPath")() + + // Check if this is a relative import (starts with . or ..). + if strings.HasPrefix(importPath, ".") { + // Resolve relative to parent file's directory. + parentDir := filepath.Dir(parentFilePath) + absolutePath := filepath.Join(parentDir, importPath) + absolutePath = filepath.Clean(absolutePath) + + // Add .yaml extension if not present. + if !strings.HasSuffix(absolutePath, yamlExt) && !strings.HasSuffix(absolutePath, ymlExt) { + absolutePath += yamlExt + } + + return absolutePath + } + + // Non-relative imports are resolved against stacks base path. + basePath := atmosConfig.StacksBaseAbsolutePath + + // Add .yaml extension if not present. 
+ if !strings.HasSuffix(importPath, yamlExt) && !strings.HasSuffix(importPath, ymlExt) { + importPath += yamlExt + } + + return filepath.Join(basePath, importPath) +} + +// readImportsFromYAMLFile reads the import/imports array from a YAML file. +func readImportsFromYAMLFile(filePath string) ([]string, error) { + defer perf.Track(nil, "list.importresolver.readImportsFromYAMLFile")() + + // Read the file. + content, err := os.ReadFile(filePath) + if err != nil { + return nil, fmt.Errorf("failed to read file %s: %w", filePath, err) + } + + // Parse as YAML. + var data map[string]interface{} + if err := yaml.Unmarshal(content, &data); err != nil { + return nil, fmt.Errorf("failed to parse YAML from %s: %w", filePath, err) + } + + // Extract imports (can be "import" or "imports" array). + // Initialize as empty slice to ensure we return []string{} instead of nil. + imports := []string{} + + // Check for "import" field. + if importVal, ok := data["import"]; ok { + imports = append(imports, extractImportStringsHelper(importVal)...) + } + + // Check for "imports" field. + if importsVal, ok := data["imports"]; ok { + imports = append(imports, extractImportStringsHelper(importsVal)...) + } + + return imports, nil +} + +// extractImportStringsHelper extracts import strings from an interface{} (can be string or []interface{}). 
+func extractImportStringsHelper(val interface{}) []string { + defer perf.Track(nil, "list.importresolver.extractImportStringsHelper")() + + var results []string + + switch v := val.(type) { + case string: + results = append(results, v) + case []interface{}: + for _, item := range v { + if str, ok := item.(string); ok { + results = append(results, str) + } + } + } + + return results +} diff --git a/pkg/list/importresolver/provenance_test.go b/pkg/list/importresolver/provenance_test.go new file mode 100644 index 0000000000..140ee9c3ec --- /dev/null +++ b/pkg/list/importresolver/provenance_test.go @@ -0,0 +1,1129 @@ +package importresolver + +import ( + "os" + "path/filepath" + "strconv" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + "github.com/cloudposse/atmos/pkg/schema" +) + +// TestResolveImportTreeFromProvenance tests the main entry point for import tree resolution. +func TestResolveImportTreeFromProvenance(t *testing.T) { + tests := []struct { + name string + stacksMap map[string]interface{} + setupMerge func() + expectEmpty bool + expectStack string + }{ + { + name: "Empty stacks map", + stacksMap: map[string]interface{}{}, + setupMerge: func() {}, + expectEmpty: true, + }, + { + name: "Stack with no merge context", + stacksMap: map[string]interface{}{ + "test-stack": map[string]interface{}{ + "components": map[string]interface{}{ + "terraform": map[string]interface{}{ + "vpc": map[string]interface{}{ + "atmos_stack_file": "test-stack.yaml", + }, + }, + }, + }, + }, + setupMerge: func() {}, + expectEmpty: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + tt.setupMerge() + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: "/tmp/stacks", + } + + result, err := ResolveImportTreeFromProvenance(tt.stacksMap, atmosConfig) + require.NoError(t, err) + + if tt.expectEmpty { + assert.Empty(t, result) + } else if tt.expectStack != "" { + assert.Contains(t, result, 
tt.expectStack) + } + }) + } +} + +// TestFindStacksForFilePath tests the file path matching logic. +func TestFindStacksForFilePath(t *testing.T) { + tests := []struct { + name string + filePath string + stacksMap map[string]interface{} + atmosConfig *schema.AtmosConfiguration + expectedStacks int + expectComponent string + }{ + { + name: "No matching stacks", + filePath: "/tmp/stacks/nonexistent.yaml", + stacksMap: map[string]interface{}{}, + atmosConfig: &schema.AtmosConfiguration{StacksBaseAbsolutePath: "/tmp/stacks"}, + expectedStacks: 0, + }, + { + name: "Exact absolute path match", + filePath: "/tmp/stacks/test-stack.yaml", + stacksMap: map[string]interface{}{ + "test-stack": map[string]interface{}{ + "components": map[string]interface{}{ + "terraform": map[string]interface{}{ + "vpc": map[string]interface{}{ + "atmos_stack_file": "/tmp/stacks/test-stack.yaml", + }, + }, + }, + }, + }, + atmosConfig: &schema.AtmosConfiguration{StacksBaseAbsolutePath: "/tmp/stacks"}, + expectedStacks: 1, + expectComponent: "vpc", + }, + { + name: "Relative path match", + filePath: "/tmp/stacks/test-stack.yaml", + stacksMap: map[string]interface{}{ + "test-stack": map[string]interface{}{ + "components": map[string]interface{}{ + "terraform": map[string]interface{}{ + "vpc": map[string]interface{}{ + "atmos_stack_file": "test-stack.yaml", + }, + }, + }, + }, + }, + atmosConfig: &schema.AtmosConfiguration{StacksBaseAbsolutePath: "/tmp/stacks"}, + expectedStacks: 1, + expectComponent: "vpc", + }, + { + name: "Match without .yaml extension", + filePath: "/tmp/stacks/test-stack.yaml", + stacksMap: map[string]interface{}{ + "test-stack": map[string]interface{}{ + "components": map[string]interface{}{ + "terraform": map[string]interface{}{ + "vpc": map[string]interface{}{ + "atmos_stack_file": "test-stack", + }, + }, + }, + }, + }, + atmosConfig: &schema.AtmosConfiguration{StacksBaseAbsolutePath: "/tmp/stacks"}, + expectedStacks: 1, + expectComponent: "vpc", + }, + { + name: "Match 
with .yml extension", + filePath: "/tmp/stacks/test-stack.yml", + stacksMap: map[string]interface{}{ + "test-stack": map[string]interface{}{ + "components": map[string]interface{}{ + "terraform": map[string]interface{}{ + "vpc": map[string]interface{}{ + "atmos_stack_file": "test-stack.yaml", + }, + }, + }, + }, + }, + atmosConfig: &schema.AtmosConfiguration{StacksBaseAbsolutePath: "/tmp/stacks"}, + expectedStacks: 1, + expectComponent: "vpc", + }, + { + name: "Multiple components in same stack", + filePath: "/tmp/stacks/test-stack.yaml", + stacksMap: map[string]interface{}{ + "test-stack": map[string]interface{}{ + "components": map[string]interface{}{ + "terraform": map[string]interface{}{ + "vpc": map[string]interface{}{ + "atmos_stack_file": "test-stack.yaml", + }, + "database": map[string]interface{}{ + "atmos_stack_file": "test-stack.yaml", + }, + }, + }, + }, + }, + atmosConfig: &schema.AtmosConfiguration{StacksBaseAbsolutePath: "/tmp/stacks"}, + expectedStacks: 1, + expectComponent: "vpc", + }, + { + name: "Invalid stack data structure", + filePath: "/tmp/stacks/test-stack.yaml", + stacksMap: map[string]interface{}{ + "test-stack": "invalid", + }, + atmosConfig: &schema.AtmosConfiguration{StacksBaseAbsolutePath: "/tmp/stacks"}, + expectedStacks: 0, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := findStacksForFilePath(tt.filePath, tt.stacksMap, tt.atmosConfig) + assert.Equal(t, tt.expectedStacks, len(result)) + + if tt.expectComponent != "" && len(result) > 0 { + for _, components := range result { + assert.True(t, components[tt.expectComponent], "Expected component %s to be present", tt.expectComponent) + } + } + }) + } +} + +// TestExtractComponentFolders tests component folder extraction. 
+func TestExtractComponentFolders(t *testing.T) { + tests := []struct { + name string + stackData interface{} + expectedFolder string + expectEmpty bool + }{ + { + name: "Component with metadata.component", + stackData: map[string]interface{}{ + "components": map[string]interface{}{ + "terraform": map[string]interface{}{ + "vpc": map[string]interface{}{ + "metadata": map[string]interface{}{ + "component": "base-vpc", + }, + }, + }, + }, + }, + expectedFolder: "components/terraform/base-vpc", + }, + { + name: "Component without metadata (uses component name)", + stackData: map[string]interface{}{ + "components": map[string]interface{}{ + "terraform": map[string]interface{}{ + "vpc": map[string]interface{}{}, + }, + }, + }, + expectedFolder: "components/terraform/vpc", + }, + { + name: "Component with empty metadata.component", + stackData: map[string]interface{}{ + "components": map[string]interface{}{ + "helmfile": map[string]interface{}{ + "monitoring": map[string]interface{}{ + "metadata": map[string]interface{}{ + "component": "", + }, + }, + }, + }, + }, + expectedFolder: "components/helmfile/monitoring", + }, + { + name: "Invalid stack data", + stackData: "invalid", + expectEmpty: true, + }, + { + name: "No components section", + stackData: map[string]interface{}{ + "vars": map[string]interface{}{ + "key": "value", + }, + }, + expectEmpty: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := extractComponentFolders(tt.stackData) + + if tt.expectEmpty { + assert.Empty(t, result) + return + } + + assert.NotEmpty(t, result) + if tt.expectedFolder == "" { + return + } + + found := false + for _, folder := range result { + if folder == tt.expectedFolder { + found = true + break + } + } + assert.True(t, found, "Expected folder %s not found in result", tt.expectedFolder) + }) + } +} + +// TestExtractComponentsFromStackData tests component name extraction. 
+func TestExtractComponentsFromStackData(t *testing.T) { + tests := []struct { + name string + stackData interface{} + expectedCount int + expectComponent string + }{ + { + name: "Multiple components across types", + stackData: map[string]interface{}{ + "components": map[string]interface{}{ + "terraform": map[string]interface{}{ + "vpc": map[string]interface{}{}, + "database": map[string]interface{}{}, + }, + "helmfile": map[string]interface{}{ + "monitoring": map[string]interface{}{}, + }, + }, + }, + expectedCount: 3, + expectComponent: "vpc", + }, + { + name: "Single component", + stackData: map[string]interface{}{ + "components": map[string]interface{}{ + "terraform": map[string]interface{}{ + "vpc": map[string]interface{}{}, + }, + }, + }, + expectedCount: 1, + expectComponent: "vpc", + }, + { + name: "Invalid stack data", + stackData: "invalid", + expectedCount: 0, + }, + { + name: "No components section", + stackData: map[string]interface{}{ + "vars": map[string]interface{}{}, + }, + expectedCount: 0, + }, + { + name: "Empty components section", + stackData: map[string]interface{}{ + "components": map[string]interface{}{}, + }, + expectedCount: 0, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := extractComponentsFromStackData(tt.stackData) + assert.Equal(t, tt.expectedCount, len(result)) + + if tt.expectComponent != "" { + assert.True(t, result[tt.expectComponent], "Expected component %s to be present", tt.expectComponent) + } + }) + } +} + +// TestBuildImportTreeFromChain tests import tree construction. 
+func TestBuildImportTreeFromChain(t *testing.T) { + tests := []struct { + name string + importChain []string + expectNodes int + expectCircular bool + }{ + { + name: "Empty chain", + importChain: []string{}, + expectNodes: 0, + }, + { + name: "Single file (no imports)", + importChain: []string{"/tmp/stacks/stack.yaml"}, + expectNodes: 0, + }, + { + name: "Simple import chain", + importChain: []string{ + "/tmp/stacks/stack.yaml", + "/tmp/stacks/catalog/base.yaml", + }, + expectNodes: 1, + }, + { + name: "Multiple imports", + importChain: []string{ + "/tmp/stacks/stack.yaml", + "/tmp/stacks/catalog/base.yaml", + "/tmp/stacks/catalog/network.yaml", + }, + expectNodes: 2, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: "/tmp/stacks", + } + + result := buildImportTreeFromChain(tt.importChain, atmosConfig) + assert.Equal(t, tt.expectNodes, len(result)) + }) + } +} + +// TestStripBasePath tests base path removal. +func TestStripBasePath(t *testing.T) { + // Use t.TempDir() to get an OS-appropriate temp directory. + tmpDir := t.TempDir() + + tests := []struct { + name string + absolutePath string + basePath string + expected string + }{ + { + name: "Simple path", + absolutePath: filepath.Join(tmpDir, "catalog", "base.yaml"), + basePath: tmpDir, + expected: "catalog/base", + }, + { + name: "Base path with trailing slash", + absolutePath: filepath.Join(tmpDir, "catalog", "base.yaml"), + basePath: tmpDir + string(filepath.Separator), + expected: "catalog/base", + }, + { + name: "Remove .yml extension", + absolutePath: filepath.Join(tmpDir, "catalog", "base.yml"), + basePath: tmpDir, + expected: "catalog/base", + }, + { + name: "Path already relative", + absolutePath: filepath.Join("catalog", "base.yaml"), + basePath: tmpDir, + // When path is relative and doesn't start with basePath, filepath.Rel returns relative form. 
+ expected: "catalog/base", + }, + { + name: "Nested directories", + absolutePath: filepath.Join(tmpDir, "catalog", "network", "vpc.yaml"), + basePath: tmpDir, + expected: "catalog/network/vpc", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := stripBasePath(tt.absolutePath, tt.basePath) + assert.Equal(t, tt.expected, result) + }) + } +} + +// TestResolveImportPath tests import path resolution. +func TestResolveImportPath(t *testing.T) { + // Use t.TempDir() to get an OS-appropriate temp directory. + tmpDir := t.TempDir() + + tests := []struct { + name string + importPath string + expected string + }{ + { + name: "Simple import", + importPath: "catalog/base", + expected: filepath.Join(tmpDir, "catalog", "base.yaml"), + }, + { + name: "Import with .yaml extension", + importPath: "catalog/base.yaml", + expected: filepath.Join(tmpDir, "catalog", "base.yaml"), + }, + { + name: "Import with .yml extension", + importPath: "catalog/base.yml", + expected: filepath.Join(tmpDir, "catalog", "base.yml"), + }, + { + name: "Nested import", + importPath: "catalog/network/vpc", + expected: filepath.Join(tmpDir, "catalog", "network", "vpc.yaml"), + }, + } + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := resolveImportPath(tt.importPath, "", atmosConfig) + assert.Equal(t, tt.expected, result) + }) + } +} + +// TestReadImportsFromYAMLFile tests YAML import extraction. +func TestReadImportsFromYAMLFile(t *testing.T) { + // Create temporary directory for test files. 
+ tmpDir := t.TempDir() + + tests := []struct { + name string + yamlContent string + expected []string + expectError bool + }{ + { + name: "Single import field", + yamlContent: ` +import: catalog/base +vars: + environment: prod +`, + expected: []string{"catalog/base"}, + }, + { + name: "Multiple imports field", + yamlContent: ` +imports: + - catalog/base + - catalog/network + - catalog/security +`, + expected: []string{"catalog/base", "catalog/network", "catalog/security"}, + }, + { + name: "Both import and imports fields", + yamlContent: ` +import: catalog/base +imports: + - catalog/network + - catalog/security +`, + expected: []string{"catalog/base", "catalog/network", "catalog/security"}, + }, + { + name: "No imports", + yamlContent: ` +vars: + environment: prod +`, + expected: []string{}, + }, + { + name: "Invalid YAML", + yamlContent: "invalid: [unclosed", + expectError: true, + }, + { + name: "Empty imports array", + yamlContent: ` +imports: [] +`, + expected: []string{}, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + // Create temporary YAML file. + tmpFile := filepath.Join(tmpDir, "test.yaml") + err := os.WriteFile(tmpFile, []byte(tt.yamlContent), 0o644) + require.NoError(t, err) + + result, err := readImportsFromYAMLFile(tmpFile) + + if tt.expectError { + assert.Error(t, err) + } else { + require.NoError(t, err) + assert.Equal(t, tt.expected, result) + } + }) + } + + // Test missing file. + t.Run("Missing file", func(t *testing.T) { + _, err := readImportsFromYAMLFile("/nonexistent/file.yaml") + assert.Error(t, err) + }) +} + +// TestExtractImportStringsHelper tests import string extraction from various types. 
+func TestExtractImportStringsHelper(t *testing.T) { + tests := []struct { + name string + input interface{} + expected []string + }{ + { + name: "String value", + input: "catalog/base", + expected: []string{"catalog/base"}, + }, + { + name: "Array of strings", + input: []interface{}{"catalog/base", "catalog/network"}, + expected: []string{"catalog/base", "catalog/network"}, + }, + { + name: "Empty array", + input: []interface{}{}, + expected: nil, // Function returns nil for empty input + }, + { + name: "Array with mixed types (strings only extracted)", + input: []interface{}{"catalog/base", 123, "catalog/network"}, + expected: []string{"catalog/base", "catalog/network"}, + }, + { + name: "Nil value", + input: nil, + expected: nil, // Function returns nil for nil input + }, + { + name: "Integer (non-string, non-array)", + input: 123, + expected: nil, // Function returns nil for non-string/array + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := extractImportStringsHelper(tt.input) + assert.Equal(t, tt.expected, result) + }) + } +} + +// TestBuildNodesFromImportPaths tests node building from import paths. 
+func TestBuildNodesFromImportPaths(t *testing.T) { + tests := []struct { + name string + imports []string + expectNodes int + }{ + { + name: "Empty imports", + imports: []string{}, + expectNodes: 0, + }, + { + name: "Single import", + imports: []string{"catalog/base"}, + expectNodes: 1, + }, + { + name: "Multiple imports", + imports: []string{"catalog/base", "catalog/network", "catalog/security"}, + expectNodes: 3, + }, + } + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: "/tmp/stacks", + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + visited := make(map[string]bool) + cache := make(map[string][]string) + result := buildNodesFromImportPaths(tt.imports, "/tmp/stacks/stack.yaml", atmosConfig, visited, cache) + assert.Equal(t, tt.expectNodes, len(result)) + }) + } +} + +// TestCircularImportDetection tests that circular imports are detected. +func TestCircularImportDetection(t *testing.T) { + // Create temporary directory for test files. + tmpDir := t.TempDir() + + // Create circular import files. + file1Content := ` +imports: + - file2 +vars: + name: file1 +` + file2Content := ` +imports: + - file1 +vars: + name: file2 +` + + file1Path := filepath.Join(tmpDir, "file1.yaml") + file2Path := filepath.Join(tmpDir, "file2.yaml") + + err := os.WriteFile(file1Path, []byte(file1Content), 0o644) + require.NoError(t, err) + err = os.WriteFile(file2Path, []byte(file2Content), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + // Test circular detection. + visited := make(map[string]bool) + cache := make(map[string][]string) + + visited[file1Path] = true + nodes := resolveImportFileImports(file1Path, atmosConfig, visited, cache) + + // Should have one node for file2. + assert.Equal(t, 1, len(nodes)) + + // The node for file1 should be marked as circular when file2 tries to import it. 
+ if len(nodes) > 0 && len(nodes[0].Children) > 0 { + assert.True(t, nodes[0].Children[0].Circular, "Expected circular reference to be detected") + } +} + +// TestImportCaching tests that import caching works correctly. +func TestImportCaching(t *testing.T) { + tmpDir := t.TempDir() + + baseContent := ` +imports: + - network +vars: + env: prod +` + basePath := filepath.Join(tmpDir, "base.yaml") + err := os.WriteFile(basePath, []byte(baseContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + visited := make(map[string]bool) + cache := make(map[string][]string) + + // First call - should read from file and populate cache. + nodes1 := resolveImportFileImports(basePath, atmosConfig, visited, cache) + assert.Equal(t, 1, len(nodes1)) + assert.Contains(t, cache, basePath) + + // Second call - should use cached value. + visited2 := make(map[string]bool) + nodes2 := resolveImportFileImports(basePath, atmosConfig, visited2, cache) + assert.Equal(t, 1, len(nodes2)) +} + +// TestResolveImportFileImports_FileNotFound tests handling of missing import file. +func TestResolveImportFileImports_FileNotFound(t *testing.T) { + tmpDir := t.TempDir() + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + visited := make(map[string]bool) + cache := make(map[string][]string) + + // Try to resolve imports from non-existent file. + filePath := filepath.Join(tmpDir, "nonexistent.yaml") + nodes := resolveImportFileImports(filePath, atmosConfig, visited, cache) + + // Should return nil (no nodes) when file can't be read. + assert.Nil(t, nodes) +} + +// TestResolveImportFileImports_InvalidYAML tests handling of invalid YAML file. 
+func TestResolveImportFileImports_InvalidYAML(t *testing.T) { + tmpDir := t.TempDir() + + invalidContent := ` +imports: [unclosed +vars: broken +` + filePath := filepath.Join(tmpDir, "invalid.yaml") + err := os.WriteFile(filePath, []byte(invalidContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + visited := make(map[string]bool) + cache := make(map[string][]string) + + nodes := resolveImportFileImports(filePath, atmosConfig, visited, cache) + + // Should return nil when YAML parsing fails. + assert.Nil(t, nodes) +} + +// TestResolveImportFileImports_EmptyFile tests file with no imports. +func TestResolveImportFileImports_EmptyFile(t *testing.T) { + tmpDir := t.TempDir() + + emptyContent := ` +vars: + environment: prod +` + filePath := filepath.Join(tmpDir, "empty.yaml") + err := os.WriteFile(filePath, []byte(emptyContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + visited := make(map[string]bool) + cache := make(map[string][]string) + + nodes := resolveImportFileImports(filePath, atmosConfig, visited, cache) + + // Should handle empty imports gracefully. + assert.Empty(t, nodes) + assert.Contains(t, cache, filePath) + assert.Empty(t, cache[filePath]) +} + +// TestResolveImportFileImports_DeepRecursion tests deep import chains. +func TestResolveImportFileImports_DeepRecursion(t *testing.T) { + tmpDir := t.TempDir() + + // Create chain: file1 → file2 → file3 → file4 → file5. 
+ for i := 1; i <= 5; i++ { + var content string + if i < 5 { + content = ` +imports: + - file` + string(rune('0'+i+1)) + ` +vars: + level: ` + string(rune('0'+i)) + } else { + content = ` +vars: + level: 5 +` + } + filePath := filepath.Join(tmpDir, "file"+string(rune('0'+i))+".yaml") + err := os.WriteFile(filePath, []byte(content), 0o644) + require.NoError(t, err) + } + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + visited := make(map[string]bool) + cache := make(map[string][]string) + + file1Path := filepath.Join(tmpDir, "file1.yaml") + nodes := resolveImportFileImports(file1Path, atmosConfig, visited, cache) + + // Should resolve all levels. + assert.Len(t, nodes, 1) + node := nodes[0] + assert.Equal(t, "file2", node.Path) + + // Navigate through all levels. + for i := 2; i < 5; i++ { + assert.Len(t, node.Children, 1) + node = node.Children[0] + assert.Equal(t, "file"+string(rune('0'+i+1)), node.Path) + } + + // Last node should have no children. + assert.Empty(t, node.Children) +} + +// TestResolveImportFileImports_VisitedBacktracking tests visited map backtracking. +func TestResolveImportFileImports_VisitedBacktracking(t *testing.T) { + tmpDir := t.TempDir() + + // Create structure: + // parent imports both a and b + // a imports common + // b imports common + // 'common' should appear in both branches (not marked circular). 
+ + parentContent := ` +imports: + - a + - b +` + err := os.WriteFile(filepath.Join(tmpDir, "parent.yaml"), []byte(parentContent), 0o644) + require.NoError(t, err) + + aContent := ` +imports: + - common +vars: + name: a +` + err = os.WriteFile(filepath.Join(tmpDir, "a.yaml"), []byte(aContent), 0o644) + require.NoError(t, err) + + bContent := ` +imports: + - common +vars: + name: b +` + err = os.WriteFile(filepath.Join(tmpDir, "b.yaml"), []byte(bContent), 0o644) + require.NoError(t, err) + + commonContent := ` +vars: + shared: true +` + err = os.WriteFile(filepath.Join(tmpDir, "common.yaml"), []byte(commonContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + visited := make(map[string]bool) + cache := make(map[string][]string) + + parentPath := filepath.Join(tmpDir, "parent.yaml") + nodes := resolveImportFileImports(parentPath, atmosConfig, visited, cache) + + // Should have 2 top-level nodes (a and b). + assert.Len(t, nodes, 2) + + // Both a and b should have 'common' as child (not marked circular). + for _, node := range nodes { + assert.Len(t, node.Children, 1) + assert.Equal(t, "common", node.Children[0].Path) + assert.False(t, node.Children[0].Circular, "Expected backtracking to allow same import in different branches") + } +} + +// TestResolveImportFileImports_WithCache tests cache population and reuse. 
+func TestResolveImportFileImports_WithCache(t *testing.T) { + tmpDir := t.TempDir() + + baseContent := ` +imports: + - common +vars: + base: true +` + basePath := filepath.Join(tmpDir, "base.yaml") + err := os.WriteFile(basePath, []byte(baseContent), 0o644) + require.NoError(t, err) + + commonContent := ` +vars: + common: true +` + commonPath := filepath.Join(tmpDir, "common.yaml") + err = os.WriteFile(commonPath, []byte(commonContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + visited := make(map[string]bool) + cache := make(map[string][]string) + + // First call - populates cache. + nodes1 := resolveImportFileImports(basePath, atmosConfig, visited, cache) + assert.Len(t, nodes1, 1) + assert.Contains(t, cache, basePath) + + // Modify file on disk. + err = os.WriteFile(basePath, []byte("imports:\n - different"), 0o644) + require.NoError(t, err) + + // Second call - should use cache (not re-read file). + visited2 := make(map[string]bool) + nodes2 := resolveImportFileImports(basePath, atmosConfig, visited2, cache) + assert.Len(t, nodes2, 1) + assert.Equal(t, "common", nodes2[0].Path, "Expected cached import, not re-read file") +} + +// TestBuildImportTreeFromChain_LongChain tests processing of long import chains. +func TestBuildImportTreeFromChain_LongChain(t *testing.T) { + // Use t.TempDir() to get an OS-appropriate temp directory. + tmpDir := t.TempDir() + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + // Create a chain with 10 imports using strconv.Itoa for proper number formatting. + importChain := []string{filepath.Join(tmpDir, "parent.yaml")} + for i := 1; i <= 10; i++ { + importChain = append(importChain, filepath.Join(tmpDir, "import"+strconv.Itoa(i)+".yaml")) + } + + nodes := buildImportTreeFromChain(importChain, atmosConfig) + + // Should have 10 nodes (skipping first element which is parent). 
+ assert.Len(t, nodes, 10) + + // Verify paths are correctly stripped (relative, extensionless, forward-slash format). + for i, node := range nodes { + expected := "import" + strconv.Itoa(i+1) + assert.Equal(t, expected, node.Path) + } +} + +// TestBuildImportTreeFromChain_DuplicateInChain tests duplicate imports in chain. +func TestBuildImportTreeFromChain_DuplicateInChain(t *testing.T) { + tmpDir := t.TempDir() + + // Create files. + baseContent := `vars: {}` + basePath := filepath.Join(tmpDir, "base.yaml") + err := os.WriteFile(basePath, []byte(baseContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + // Import chain with duplicate. + importChain := []string{ + filepath.Join(tmpDir, "parent.yaml"), + basePath, + filepath.Join(tmpDir, "other.yaml"), + basePath, // Duplicate - visited is cleared after each import so not marked circular + } + + nodes := buildImportTreeFromChain(importChain, atmosConfig) + + // Should have 3 nodes. + assert.Len(t, nodes, 3) + + // All nodes get their paths. + assert.Equal(t, "base", nodes[0].Path) + assert.Equal(t, "other", nodes[1].Path) + assert.Equal(t, "base", nodes[2].Path) + + // Duplicates in chain are allowed because visited is cleared after processing each import. + // This allows the same file to appear multiple times in the merge chain. + assert.False(t, nodes[0].Circular) + assert.False(t, nodes[2].Circular) +} + +// TestBuildNodesFromImportPaths_LargeImportList tests handling of many imports. +func TestBuildNodesFromImportPaths_LargeImportList(t *testing.T) { + tmpDir := t.TempDir() + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + // Create 20 import paths using strconv.Itoa for proper number formatting. 
+ var imports []string + for i := 1; i <= 20; i++ { + imports = append(imports, "catalog/import"+strconv.Itoa(i)) + } + + visited := make(map[string]bool) + cache := make(map[string][]string) + + nodes := buildNodesFromImportPaths(imports, filepath.Join(tmpDir, "parent.yaml"), atmosConfig, visited, cache) + + // Should have 20 nodes. + assert.Len(t, nodes, 20) + + // Verify all imports are present. + for i, node := range nodes { + expected := "catalog/import" + strconv.Itoa(i+1) + assert.Equal(t, expected, node.Path) + } +} + +// TestBuildNodesFromImportPaths_WithRealFiles tests node building with actual files. +func TestBuildNodesFromImportPaths_WithRealFiles(t *testing.T) { + tmpDir := t.TempDir() + + // Create catalog directory. + catalogDir := filepath.Join(tmpDir, "catalog") + err := os.MkdirAll(catalogDir, 0o755) + require.NoError(t, err) + + // Create base.yaml with import. + baseContent := ` +imports: + - catalog/network +vars: + base: true +` + basePath := filepath.Join(catalogDir, "base.yaml") + err = os.WriteFile(basePath, []byte(baseContent), 0o644) + require.NoError(t, err) + + // Create network.yaml (no imports). + networkContent := ` +vars: + network: true +` + networkPath := filepath.Join(catalogDir, "network.yaml") + err = os.WriteFile(networkPath, []byte(networkContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + imports := []string{"catalog/base"} + visited := make(map[string]bool) + cache := make(map[string][]string) + + nodes := buildNodesFromImportPaths(imports, filepath.Join(tmpDir, "parent.yaml"), atmosConfig, visited, cache) + + // Should have 1 top-level node. + assert.Len(t, nodes, 1) + assert.Equal(t, "catalog/base", nodes[0].Path) + + // base.yaml should have network as child. 
+ assert.Len(t, nodes[0].Children, 1) + assert.Equal(t, "catalog/network", nodes[0].Children[0].Path) +} diff --git a/pkg/list/importresolver/resolver.go b/pkg/list/importresolver/resolver.go new file mode 100644 index 0000000000..91d345cb72 --- /dev/null +++ b/pkg/list/importresolver/resolver.go @@ -0,0 +1,210 @@ +package importresolver + +import ( + "errors" + "fmt" + "os" + "path/filepath" + "strings" + + "gopkg.in/yaml.v3" + + errUtils "github.com/cloudposse/atmos/errors" + "github.com/cloudposse/atmos/pkg/list/tree" + "github.com/cloudposse/atmos/pkg/perf" + "github.com/cloudposse/atmos/pkg/schema" + u "github.com/cloudposse/atmos/pkg/utils" +) + +// ResolveImportTree resolves the complete import tree for all stacks. +// Returns a map of stack names to their import trees. +func ResolveImportTree(stacksMap map[string]interface{}, atmosConfig *schema.AtmosConfiguration) (map[string][]*tree.ImportNode, error) { + defer perf.Track(atmosConfig, "importresolver.ResolveImportTree")() + + result := make(map[string][]*tree.ImportNode) + + // Cache to avoid re-reading the same import file multiple times. + importCache := make(map[string][]string) + + // Process each stack. + for stackName := range stacksMap { + // Get the import paths for this stack. + imports, err := getStackImports(stackName, atmosConfig, importCache) + if err != nil { + return nil, fmt.Errorf("failed to get imports for stack %s: %w", stackName, err) + } + + // Build the import tree recursively. + var importNodes []*tree.ImportNode + visited := make(map[string]bool) + for _, importPath := range imports { + node := buildImportTree(importPath, atmosConfig, importCache, visited) + importNodes = append(importNodes, node) + } + + result[stackName] = importNodes + } + + return result, nil +} + +// getStackImports returns the import paths for a given stack. 
+func getStackImports(stackName string, atmosConfig *schema.AtmosConfiguration, cache map[string][]string) ([]string, error) {
+	// Find the stack file path.
+	stackFilePath, err := findStackFilePath(stackName, atmosConfig)
+	if err != nil {
+		// Stack might not have a direct file (could be generated), return empty imports.
+		// Only treat ErrStackManifestFileNotFound as non-error; propagate other errors.
+		if errors.Is(err, errUtils.ErrStackManifestFileNotFound) {
+			return []string{}, nil
+		}
+		return nil, err
+	}
+
+	// Check cache first.
+	if imports, ok := cache[stackFilePath]; ok {
+		return imports, nil
+	}
+
+	// Read the stack file.
+	imports, err := readImportsFromFile(stackFilePath)
+	if err != nil {
+		return nil, err
+	}
+
+	// Cache the result.
+	cache[stackFilePath] = imports
+
+	return imports, nil
+}
+
+// findStackFilePath attempts to find the file path for a stack.
+// Stacks follow the pattern: stacks/orgs/{org}/{tenant}/{environment}/*.yaml.
+func findStackFilePath(stackName string, atmosConfig *schema.AtmosConfiguration) (string, error) {
+	// Construct candidate file paths from the stack name.
+	// Stack names follow a pattern like "plat-ue2-prod", which maps to files under stacks/orgs/.
+	// Rather than scanning every stack file, try a few common path patterns;
+	// this is heuristic and may miss stacks with non-standard layouts.
+	stacksBasePath := atmosConfig.StacksBaseAbsolutePath
+
+	// Try common patterns.
+	transformed := strings.ReplaceAll(stackName, "-", string(os.PathSeparator))
+	possiblePaths := []string{
+		filepath.Join(stacksBasePath, "orgs", stackName+".yaml"),
+		filepath.Join(stacksBasePath, stackName+".yaml"),
+		filepath.Join(stacksBasePath, transformed+".yaml"),
+	}
+
+	for _, path := range possiblePaths {
+		if u.FileExists(path) {
+			return path, nil
+		}
+	}
+
+	// If not found, return an error (the stack file might not exist as a standalone file).
+ return "", fmt.Errorf("%w: %s", errUtils.ErrStackManifestFileNotFound, stackName) +} + +// readImportsFromFile reads the import/imports array from a YAML file. +func readImportsFromFile(filePath string) ([]string, error) { + // Read the file. + content, err := os.ReadFile(filePath) + if err != nil { + return nil, fmt.Errorf("failed to read file %s: %w", filePath, err) + } + + // Parse as YAML. + var data map[string]interface{} + if err := yaml.Unmarshal(content, &data); err != nil { + return nil, fmt.Errorf("failed to parse YAML from %s: %w", filePath, err) + } + + // Extract imports (can be "import" or "imports" array). + var imports []string + + // Check for "import" field. + if importVal, ok := data["import"]; ok { + imports = append(imports, extractImportStrings(importVal)...) + } + + // Check for "imports" field. + if importsVal, ok := data["imports"]; ok { + imports = append(imports, extractImportStrings(importsVal)...) + } + + return imports, nil +} + +// extractImportStrings extracts import strings from an interface{} (can be string or []interface{}). +func extractImportStrings(val interface{}) []string { + var results []string + + switch v := val.(type) { + case string: + results = append(results, v) + case []interface{}: + for _, item := range v { + if str, ok := item.(string); ok { + results = append(results, str) + } + } + } + + return results +} + +// buildImportTree recursively builds the import tree for a given import path. +func buildImportTree(importPath string, atmosConfig *schema.AtmosConfiguration, cache map[string][]string, visited map[string]bool) *tree.ImportNode { + node := &tree.ImportNode{ + Path: importPath, + Circular: false, + } + + // Check for circular reference. + if visited[importPath] { + node.Circular = true + return node + } + + // Mark as visited. + visited[importPath] = true + defer func() { + // Unmark when backtracking (allows same import in different branches). 
+ delete(visited, importPath)
+ }()
+
+ // Resolve the file path for this import.
+ importFilePath := resolveImportFilePath(importPath, atmosConfig)
+
+ // Consult the shared cache first so the same import file is not re-read.
+ childImports, ok := cache[importFilePath]
+ if !ok {
+ var err error
+ if childImports, err = readImportsFromFile(importFilePath); err != nil {
+ return node // Can't read the file; return the node without children.
+ }
+ cache[importFilePath] = childImports
+ }
+
+ // Recursively build children.
+ for _, childImportPath := range childImports {
+ childNode := buildImportTree(childImportPath, atmosConfig, cache, visited)
+ node.Children = append(node.Children, childNode)
+ }
+
+ return node
+}
+
+// resolveImportFilePath converts an import path to an absolute file path.
+func resolveImportFilePath(importPath string, atmosConfig *schema.AtmosConfiguration) string {
+ stacksBasePath := atmosConfig.StacksBaseAbsolutePath
+
+ // Import paths are relative to the stacks base path.
+ // They may or may not have a .yaml extension.
+ if !strings.HasSuffix(importPath, ".yaml") && !strings.HasSuffix(importPath, ".yml") {
+ importPath += ".yaml"
+ }
+
+ return filepath.Join(stacksBasePath, importPath)
+}
diff --git a/pkg/list/importresolver/resolver_test.go b/pkg/list/importresolver/resolver_test.go
new file mode 100644
index 0000000000..ba3bdf9c5b
--- /dev/null
+++ b/pkg/list/importresolver/resolver_test.go
@@ -0,0 +1,974 @@
+package importresolver
+
+import (
+ "errors"
+ "os"
+ "path/filepath"
+ "testing"
+
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/require"
+
+ errUtils "github.com/cloudposse/atmos/errors"
+ "github.com/cloudposse/atmos/pkg/schema"
+)
+
+// TestResolveImportTree_EmptyStacks tests behavior with empty stacks map.
+func TestResolveImportTree_EmptyStacks(t *testing.T) { + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: t.TempDir(), + } + + result, err := ResolveImportTree(map[string]interface{}{}, atmosConfig) + require.NoError(t, err) + assert.Empty(t, result) +} + +// TestResolveImportTree_StackWithNoFile tests behavior when stack file doesn't exist. +func TestResolveImportTree_StackWithNoFile(t *testing.T) { + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: t.TempDir(), + } + + stacksMap := map[string]interface{}{ + "nonexistent-stack": map[string]interface{}{ + "components": map[string]interface{}{}, + }, + } + + result, err := ResolveImportTree(stacksMap, atmosConfig) + require.NoError(t, err) + + // Stack should be in result but with empty imports. + assert.Contains(t, result, "nonexistent-stack") + assert.Empty(t, result["nonexistent-stack"]) +} + +// TestResolveImportTree_StackWithSingleImport tests single import resolution. +func TestResolveImportTree_StackWithSingleImport(t *testing.T) { + tmpDir := t.TempDir() + + // Create stack file with single import. + stackContent := ` +imports: + - catalog/base +vars: + environment: prod +` + stackPath := filepath.Join(tmpDir, "prod.yaml") + err := os.WriteFile(stackPath, []byte(stackContent), 0o644) + require.NoError(t, err) + + // Create catalog/base file (no imports). 
+ catalogDir := filepath.Join(tmpDir, "catalog") + err = os.MkdirAll(catalogDir, 0o755) + require.NoError(t, err) + + baseContent := ` +vars: + common: true +` + basePath := filepath.Join(catalogDir, "base.yaml") + err = os.WriteFile(basePath, []byte(baseContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + stacksMap := map[string]interface{}{ + "prod": map[string]interface{}{}, + } + + result, err := ResolveImportTree(stacksMap, atmosConfig) + require.NoError(t, err) + + assert.Contains(t, result, "prod") + assert.Len(t, result["prod"], 1) + assert.Equal(t, "catalog/base", result["prod"][0].Path) + assert.False(t, result["prod"][0].Circular) + assert.Empty(t, result["prod"][0].Children) +} + +// TestResolveImportTree_StackWithNestedImports tests nested import chains. +func TestResolveImportTree_StackWithNestedImports(t *testing.T) { + tmpDir := t.TempDir() + + // Create stack file. + stackContent := ` +imports: + - catalog/base +` + stackPath := filepath.Join(tmpDir, "prod.yaml") + err := os.WriteFile(stackPath, []byte(stackContent), 0o644) + require.NoError(t, err) + + // Create catalog directory. + catalogDir := filepath.Join(tmpDir, "catalog") + err = os.MkdirAll(catalogDir, 0o755) + require.NoError(t, err) + + // catalog/base imports common/variables. + baseContent := ` +imports: + - common/variables +vars: + common: true +` + basePath := filepath.Join(catalogDir, "base.yaml") + err = os.WriteFile(basePath, []byte(baseContent), 0o644) + require.NoError(t, err) + + // Create common directory. + commonDir := filepath.Join(tmpDir, "common") + err = os.MkdirAll(commonDir, 0o755) + require.NoError(t, err) + + // common/variables has no imports. 
+ variablesContent := ` +vars: + region: us-east-1 +` + variablesPath := filepath.Join(commonDir, "variables.yaml") + err = os.WriteFile(variablesPath, []byte(variablesContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + stacksMap := map[string]interface{}{ + "prod": map[string]interface{}{}, + } + + result, err := ResolveImportTree(stacksMap, atmosConfig) + require.NoError(t, err) + + assert.Contains(t, result, "prod") + assert.Len(t, result["prod"], 1) + assert.Equal(t, "catalog/base", result["prod"][0].Path) + assert.False(t, result["prod"][0].Circular) + + // Check nested import. + assert.Len(t, result["prod"][0].Children, 1) + assert.Equal(t, "common/variables", result["prod"][0].Children[0].Path) + assert.False(t, result["prod"][0].Children[0].Circular) + assert.Empty(t, result["prod"][0].Children[0].Children) +} + +// TestResolveImportTree_MultipleImports tests stack with multiple imports. +func TestResolveImportTree_MultipleImports(t *testing.T) { + tmpDir := t.TempDir() + + // Create stack file with multiple imports. + stackContent := ` +imports: + - catalog/base + - catalog/network + - catalog/security +` + stackPath := filepath.Join(tmpDir, "prod.yaml") + err := os.WriteFile(stackPath, []byte(stackContent), 0o644) + require.NoError(t, err) + + // Create catalog files (no imports). 
+ catalogDir := filepath.Join(tmpDir, "catalog") + err = os.MkdirAll(catalogDir, 0o755) + require.NoError(t, err) + + for _, name := range []string{"base", "network", "security"} { + content := `vars: {}` + path := filepath.Join(catalogDir, name+".yaml") + err = os.WriteFile(path, []byte(content), 0o644) + require.NoError(t, err) + } + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + stacksMap := map[string]interface{}{ + "prod": map[string]interface{}{}, + } + + result, err := ResolveImportTree(stacksMap, atmosConfig) + require.NoError(t, err) + + assert.Contains(t, result, "prod") + assert.Len(t, result["prod"], 3) + + // Verify all imports are present. + paths := make([]string, 3) + for i, node := range result["prod"] { + paths[i] = node.Path + } + assert.Contains(t, paths, "catalog/base") + assert.Contains(t, paths, "catalog/network") + assert.Contains(t, paths, "catalog/security") +} + +// TestResolveImportTree_CircularReference tests circular import detection. +func TestResolveImportTree_CircularReference(t *testing.T) { + tmpDir := t.TempDir() + + // Create stack file importing circular/a. + stackContent := ` +imports: + - circular/a +` + stackPath := filepath.Join(tmpDir, "stack.yaml") + err := os.WriteFile(stackPath, []byte(stackContent), 0o644) + require.NoError(t, err) + + // Create circular directory. + circularDir := filepath.Join(tmpDir, "circular") + err = os.MkdirAll(circularDir, 0o755) + require.NoError(t, err) + + // circular/a imports circular/b. + aContent := ` +imports: + - circular/b +vars: + name: a +` + aPath := filepath.Join(circularDir, "a.yaml") + err = os.WriteFile(aPath, []byte(aContent), 0o644) + require.NoError(t, err) + + // circular/b imports circular/a (creates circle). 
+ bContent := ` +imports: + - circular/a +vars: + name: b +` + bPath := filepath.Join(circularDir, "b.yaml") + err = os.WriteFile(bPath, []byte(bContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + stacksMap := map[string]interface{}{ + "stack": map[string]interface{}{}, + } + + result, err := ResolveImportTree(stacksMap, atmosConfig) + require.NoError(t, err) + + assert.Contains(t, result, "stack") + assert.Len(t, result["stack"], 1) + assert.Equal(t, "circular/a", result["stack"][0].Path) + assert.False(t, result["stack"][0].Circular) + + // circular/a has child circular/b. + assert.Len(t, result["stack"][0].Children, 1) + assert.Equal(t, "circular/b", result["stack"][0].Children[0].Path) + assert.False(t, result["stack"][0].Children[0].Circular) + + // circular/b tries to import circular/a again - should be marked circular. + assert.Len(t, result["stack"][0].Children[0].Children, 1) + assert.Equal(t, "circular/a", result["stack"][0].Children[0].Children[0].Path) + assert.True(t, result["stack"][0].Children[0].Children[0].Circular, "Expected circular reference to be detected") +} + +// TestResolveImportTree_DeepNesting tests deep import chains. +func TestResolveImportTree_DeepNesting(t *testing.T) { + tmpDir := t.TempDir() + + // Create stack importing deep/level1. + stackContent := ` +imports: + - deep/level1 +` + stackPath := filepath.Join(tmpDir, "stack.yaml") + err := os.WriteFile(stackPath, []byte(stackContent), 0o644) + require.NoError(t, err) + + // Create deep directory with nested imports. + deepDir := filepath.Join(tmpDir, "deep") + err = os.MkdirAll(deepDir, 0o755) + require.NoError(t, err) + + // Create 5 levels of nested imports. 
+ for i := 1; i <= 5; i++ { + var content string + if i < 5 { + content = ` +imports: + - deep/level` + string(rune('0'+i+1)) + ` +vars: + level: ` + string(rune('0'+i)) + } else { + content = ` +vars: + level: 5 +` + } + path := filepath.Join(deepDir, "level"+string(rune('0'+i))+".yaml") + err = os.WriteFile(path, []byte(content), 0o644) + require.NoError(t, err) + } + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + stacksMap := map[string]interface{}{ + "stack": map[string]interface{}{}, + } + + result, err := ResolveImportTree(stacksMap, atmosConfig) + require.NoError(t, err) + + // Verify deep nesting is resolved. + assert.Contains(t, result, "stack") + node := result["stack"][0] + assert.Equal(t, "deep/level1", node.Path) + + // Navigate through levels 2-4. + for i := 2; i <= 4; i++ { + assert.Len(t, node.Children, 1) + node = node.Children[0] + assert.Equal(t, "deep/level"+string(rune('0'+i)), node.Path) + assert.False(t, node.Circular) + } + + // Level 4's child is level 5, which has no children. + assert.Len(t, node.Children, 1) + assert.Equal(t, "deep/level5", node.Children[0].Path) + assert.Empty(t, node.Children[0].Children) +} + +// TestResolveImportTree_MultipleStacks tests multiple stacks independently. +func TestResolveImportTree_MultipleStacks(t *testing.T) { + tmpDir := t.TempDir() + + // Create prod.yaml. + prodContent := ` +imports: + - catalog/base +` + prodPath := filepath.Join(tmpDir, "prod.yaml") + err := os.WriteFile(prodPath, []byte(prodContent), 0o644) + require.NoError(t, err) + + // Create staging.yaml. + stagingContent := ` +imports: + - catalog/network +` + stagingPath := filepath.Join(tmpDir, "staging.yaml") + err = os.WriteFile(stagingPath, []byte(stagingContent), 0o644) + require.NoError(t, err) + + // Create catalog files. 
+ catalogDir := filepath.Join(tmpDir, "catalog") + err = os.MkdirAll(catalogDir, 0o755) + require.NoError(t, err) + + for _, name := range []string{"base", "network"} { + content := `vars: {}` + path := filepath.Join(catalogDir, name+".yaml") + err = os.WriteFile(path, []byte(content), 0o644) + require.NoError(t, err) + } + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + stacksMap := map[string]interface{}{ + "prod": map[string]interface{}{}, + "staging": map[string]interface{}{}, + } + + result, err := ResolveImportTree(stacksMap, atmosConfig) + require.NoError(t, err) + + // Verify both stacks have their respective imports. + assert.Contains(t, result, "prod") + assert.Contains(t, result, "staging") + assert.Len(t, result["prod"], 1) + assert.Len(t, result["staging"], 1) + assert.Equal(t, "catalog/base", result["prod"][0].Path) + assert.Equal(t, "catalog/network", result["staging"][0].Path) +} + +// TestResolveImportTree_CacheBehavior tests that caching works correctly. +func TestResolveImportTree_CacheBehavior(t *testing.T) { + tmpDir := t.TempDir() + + // Create stack files that import the same base. + for _, name := range []string{"prod", "staging"} { + content := ` +imports: + - catalog/base +` + path := filepath.Join(tmpDir, name+".yaml") + err := os.WriteFile(path, []byte(content), 0o644) + require.NoError(t, err) + } + + // Create catalog/base. + catalogDir := filepath.Join(tmpDir, "catalog") + err := os.MkdirAll(catalogDir, 0o755) + require.NoError(t, err) + + baseContent := ` +imports: + - common/variables +vars: + common: true +` + basePath := filepath.Join(catalogDir, "base.yaml") + err = os.WriteFile(basePath, []byte(baseContent), 0o644) + require.NoError(t, err) + + // Create common/variables. 
+ commonDir := filepath.Join(tmpDir, "common") + err = os.MkdirAll(commonDir, 0o755) + require.NoError(t, err) + + variablesContent := `vars: {}` + variablesPath := filepath.Join(commonDir, "variables.yaml") + err = os.WriteFile(variablesPath, []byte(variablesContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + stacksMap := map[string]interface{}{ + "prod": map[string]interface{}{}, + "staging": map[string]interface{}{}, + } + + result, err := ResolveImportTree(stacksMap, atmosConfig) + require.NoError(t, err) + + // Both stacks should have the same import tree structure. + // This verifies caching doesn't cause issues. + assert.Contains(t, result, "prod") + assert.Contains(t, result, "staging") + + for _, stack := range []string{"prod", "staging"} { + assert.Len(t, result[stack], 1) + assert.Equal(t, "catalog/base", result[stack][0].Path) + assert.Len(t, result[stack][0].Children, 1) + assert.Equal(t, "common/variables", result[stack][0].Children[0].Path) + } +} + +// TestResolveImportTree_BothImportFields tests stack with both import and imports fields. +func TestResolveImportTree_BothImportFields(t *testing.T) { + tmpDir := t.TempDir() + + // Create stack file with both import and imports. + stackContent := ` +import: catalog/base +imports: + - catalog/network + - catalog/security +` + stackPath := filepath.Join(tmpDir, "stack.yaml") + err := os.WriteFile(stackPath, []byte(stackContent), 0o644) + require.NoError(t, err) + + // Create catalog files. 
+ catalogDir := filepath.Join(tmpDir, "catalog")
+ err = os.MkdirAll(catalogDir, 0o755)
+ require.NoError(t, err)
+
+ for _, name := range []string{"base", "network", "security"} {
+ content := `vars: {}`
+ path := filepath.Join(catalogDir, name+".yaml")
+ err = os.WriteFile(path, []byte(content), 0o644)
+ require.NoError(t, err)
+ }
+
+ atmosConfig := &schema.AtmosConfiguration{
+ StacksBaseAbsolutePath: tmpDir,
+ }
+
+ stacksMap := map[string]interface{}{
+ "stack": map[string]interface{}{},
+ }
+
+ result, err := ResolveImportTree(stacksMap, atmosConfig)
+ require.NoError(t, err)
+
+ assert.Contains(t, result, "stack")
+ // Should have all 3 imports (both fields combined).
+ assert.Len(t, result["stack"], 3)
+
+ // Verify all expected imports are present.
+ paths := []string{}
+ for _, node := range result["stack"] {
+ paths = append(paths, node.Path)
+ }
+ assert.ElementsMatch(t, []string{"catalog/base", "catalog/network", "catalog/security"}, paths)
+}
+
+// TestFindStackFilePath_PatternOrgsWithStackName tests the orgs/{stack}.yaml pattern.
+func TestFindStackFilePath_PatternOrgsWithStackName(t *testing.T) {
+ tmpDir := t.TempDir()
+
+ orgsDir := filepath.Join(tmpDir, "orgs")
+ err := os.MkdirAll(orgsDir, 0o755)
+ require.NoError(t, err)
+
+ stackPath := filepath.Join(orgsDir, "test-stack.yaml")
+ err = os.WriteFile(stackPath, []byte("vars: {}"), 0o644)
+ require.NoError(t, err)
+
+ atmosConfig := &schema.AtmosConfiguration{
+ StacksBaseAbsolutePath: tmpDir,
+ }
+
+ result, err := findStackFilePath("test-stack", atmosConfig)
+ require.NoError(t, err)
+ assert.Equal(t, stackPath, result)
+}
+
+// TestFindStackFilePath_PatternRootStackName tests the {stack}.yaml pattern in the stacks root.
+func TestFindStackFilePath_PatternRootStackName(t *testing.T) { + tmpDir := t.TempDir() + + stackPath := filepath.Join(tmpDir, "test-stack.yaml") + err := os.WriteFile(stackPath, []byte("vars: {}"), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + result, err := findStackFilePath("test-stack", atmosConfig) + require.NoError(t, err) + assert.Equal(t, stackPath, result) +} + +// TestFindStackFilePath_PatternTransformed tests hyphen-to-path transformation. +func TestFindStackFilePath_PatternTransformed(t *testing.T) { + tmpDir := t.TempDir() + + // Create path: platform/region/environment.yaml. + platformDir := filepath.Join(tmpDir, "platform", "region") + err := os.MkdirAll(platformDir, 0o755) + require.NoError(t, err) + + stackPath := filepath.Join(platformDir, "environment.yaml") + err = os.WriteFile(stackPath, []byte("vars: {}"), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + result, err := findStackFilePath("platform-region-environment", atmosConfig) + require.NoError(t, err) + assert.Equal(t, stackPath, result) +} + +// TestFindStackFilePath_NotFound tests error when stack file not found. +func TestFindStackFilePath_NotFound(t *testing.T) { + tmpDir := t.TempDir() + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + _, err := findStackFilePath("nonexistent-stack", atmosConfig) + assert.Error(t, err) + assert.True(t, errors.Is(err, errUtils.ErrStackManifestFileNotFound)) +} + +// TestFindStackFilePath_EmptyStackName tests behavior with empty stack name. 
+func TestFindStackFilePath_EmptyStackName(t *testing.T) { + tmpDir := t.TempDir() + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + _, err := findStackFilePath("", atmosConfig) + assert.Error(t, err) + assert.True(t, errors.Is(err, errUtils.ErrStackManifestFileNotFound)) +} + +// TestFindStackFilePath_SpecialCharacters tests stack names with special characters. +func TestFindStackFilePath_SpecialCharacters(t *testing.T) { + tmpDir := t.TempDir() + + // Create stack with underscores. + stackPath := filepath.Join(tmpDir, "test_stack_name.yaml") + err := os.WriteFile(stackPath, []byte("vars: {}"), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + result, err := findStackFilePath("test_stack_name", atmosConfig) + require.NoError(t, err) + assert.Equal(t, stackPath, result) +} + +// TestGetStackImports_StackNotFound tests behavior when stack file not found. +func TestGetStackImports_StackNotFound(t *testing.T) { + tmpDir := t.TempDir() + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + cache := make(map[string][]string) + + result, err := getStackImports("nonexistent-stack", atmosConfig, cache) + require.NoError(t, err) + assert.Empty(t, result) + assert.Empty(t, cache) +} + +// TestGetStackImports_NoImports tests stack file with no imports. +func TestGetStackImports_NoImports(t *testing.T) { + tmpDir := t.TempDir() + + stackContent := ` +vars: + environment: prod +` + stackPath := filepath.Join(tmpDir, "stack.yaml") + err := os.WriteFile(stackPath, []byte(stackContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + cache := make(map[string][]string) + + result, err := getStackImports("stack", atmosConfig, cache) + require.NoError(t, err) + assert.Empty(t, result) + + // Cache should be populated even for empty imports. 
+ assert.Contains(t, cache, stackPath) + assert.Empty(t, cache[stackPath]) +} + +// TestGetStackImports_CacheHit tests that cache is used on second call. +func TestGetStackImports_CacheHit(t *testing.T) { + tmpDir := t.TempDir() + + stackContent := ` +imports: + - catalog/base +` + stackPath := filepath.Join(tmpDir, "stack.yaml") + err := os.WriteFile(stackPath, []byte(stackContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + cache := make(map[string][]string) + + // First call - populates cache. + result1, err := getStackImports("stack", atmosConfig, cache) + require.NoError(t, err) + assert.Equal(t, []string{"catalog/base"}, result1) + assert.Contains(t, cache, stackPath) + + // Modify file on disk (shouldn't affect cached result). + err = os.WriteFile(stackPath, []byte("imports:\n - different/import"), 0o644) + require.NoError(t, err) + + // Second call - should use cache. + result2, err := getStackImports("stack", atmosConfig, cache) + require.NoError(t, err) + assert.Equal(t, []string{"catalog/base"}, result2, "Expected cached result, not re-read file") +} + +// TestGetStackImports_MultipleImports tests stack with multiple imports. 
+func TestGetStackImports_MultipleImports(t *testing.T) { + tmpDir := t.TempDir() + + stackContent := ` +imports: + - catalog/base + - catalog/network + - catalog/security +` + stackPath := filepath.Join(tmpDir, "stack.yaml") + err := os.WriteFile(stackPath, []byte(stackContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + cache := make(map[string][]string) + + result, err := getStackImports("stack", atmosConfig, cache) + require.NoError(t, err) + assert.Len(t, result, 3) + assert.Contains(t, result, "catalog/base") + assert.Contains(t, result, "catalog/network") + assert.Contains(t, result, "catalog/security") +} + +// TestGetStackImports_InvalidYAML tests error handling for invalid YAML. +func TestGetStackImports_InvalidYAML(t *testing.T) { + tmpDir := t.TempDir() + + invalidContent := ` +imports: [unclosed +vars: {broken +` + stackPath := filepath.Join(tmpDir, "stack.yaml") + err := os.WriteFile(stackPath, []byte(invalidContent), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + cache := make(map[string][]string) + + _, err = getStackImports("stack", atmosConfig, cache) + assert.Error(t, err) +} + +// TestReadImportsFromFile_SingleImportField tests reading single import field. +func TestReadImportsFromFile_SingleImportField(t *testing.T) { + tmpDir := t.TempDir() + + content := ` +import: catalog/base +vars: + environment: prod +` + filePath := filepath.Join(tmpDir, "test.yaml") + err := os.WriteFile(filePath, []byte(content), 0o644) + require.NoError(t, err) + + result, err := readImportsFromFile(filePath) + require.NoError(t, err) + assert.Equal(t, []string{"catalog/base"}, result) +} + +// TestReadImportsFromFile_MultipleImportsField tests reading imports array. 
+func TestReadImportsFromFile_MultipleImportsField(t *testing.T) { + tmpDir := t.TempDir() + + content := ` +imports: + - catalog/base + - catalog/network + - catalog/security +` + filePath := filepath.Join(tmpDir, "test.yaml") + err := os.WriteFile(filePath, []byte(content), 0o644) + require.NoError(t, err) + + result, err := readImportsFromFile(filePath) + require.NoError(t, err) + assert.Equal(t, []string{"catalog/base", "catalog/network", "catalog/security"}, result) +} + +// TestReadImportsFromFile_BothFields tests reading both import and imports fields. +func TestReadImportsFromFile_BothFields(t *testing.T) { + tmpDir := t.TempDir() + + content := ` +import: catalog/base +imports: + - catalog/network + - catalog/security +` + filePath := filepath.Join(tmpDir, "test.yaml") + err := os.WriteFile(filePath, []byte(content), 0o644) + require.NoError(t, err) + + result, err := readImportsFromFile(filePath) + require.NoError(t, err) + assert.Len(t, result, 3) + assert.Contains(t, result, "catalog/base") + assert.Contains(t, result, "catalog/network") + assert.Contains(t, result, "catalog/security") +} + +// TestReadImportsFromFile_NoImports tests file with no import fields. +func TestReadImportsFromFile_NoImports(t *testing.T) { + tmpDir := t.TempDir() + + content := ` +vars: + environment: prod +` + filePath := filepath.Join(tmpDir, "test.yaml") + err := os.WriteFile(filePath, []byte(content), 0o644) + require.NoError(t, err) + + result, err := readImportsFromFile(filePath) + require.NoError(t, err) + assert.Empty(t, result) +} + +// TestReadImportsFromFile_FileNotFound tests error when file doesn't exist. +func TestReadImportsFromFile_FileNotFound(t *testing.T) { + _, err := readImportsFromFile("/nonexistent/file.yaml") + assert.Error(t, err) + assert.Contains(t, err.Error(), "failed to read file") +} + +// TestReadImportsFromFile_InvalidYAML tests error handling for invalid YAML. 
+func TestReadImportsFromFile_InvalidYAML(t *testing.T) { + tmpDir := t.TempDir() + + invalidContent := ` +imports: [unclosed +` + filePath := filepath.Join(tmpDir, "test.yaml") + err := os.WriteFile(filePath, []byte(invalidContent), 0o644) + require.NoError(t, err) + + _, err = readImportsFromFile(filePath) + assert.Error(t, err) + assert.Contains(t, err.Error(), "failed to parse YAML") +} + +// TestExtractImportStrings_StringValue tests extracting string value. +func TestExtractImportStrings_StringValue(t *testing.T) { + result := extractImportStrings("catalog/base") + assert.Equal(t, []string{"catalog/base"}, result) +} + +// TestExtractImportStrings_ArrayOfStrings tests extracting array of strings. +func TestExtractImportStrings_ArrayOfStrings(t *testing.T) { + result := extractImportStrings([]interface{}{"catalog/base", "catalog/network"}) + assert.Equal(t, []string{"catalog/base", "catalog/network"}, result) +} + +// TestExtractImportStrings_EmptyArray tests empty array. +func TestExtractImportStrings_EmptyArray(t *testing.T) { + result := extractImportStrings([]interface{}{}) + assert.Nil(t, result) +} + +// TestExtractImportStrings_MixedTypes tests array with mixed types. +func TestExtractImportStrings_MixedTypes(t *testing.T) { + result := extractImportStrings([]interface{}{"catalog/base", 123, "catalog/network", true}) + assert.Equal(t, []string{"catalog/base", "catalog/network"}, result) +} + +// TestExtractImportStrings_NilValue tests nil value. +func TestExtractImportStrings_NilValue(t *testing.T) { + result := extractImportStrings(nil) + assert.Nil(t, result) +} + +// TestExtractImportStrings_NonStringNonArray tests non-string, non-array value. +func TestExtractImportStrings_NonStringNonArray(t *testing.T) { + result := extractImportStrings(123) + assert.Nil(t, result) +} + +// TestResolveImportFilePath_NoExtension tests adding .yaml extension. 
+func TestResolveImportFilePath_NoExtension(t *testing.T) { + tmpDir := t.TempDir() + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + result := resolveImportFilePath("catalog/base", atmosConfig) + assert.Equal(t, filepath.Join(tmpDir, "catalog", "base.yaml"), result) +} + +// TestResolveImportFilePath_WithYamlExtension tests existing .yaml extension. +func TestResolveImportFilePath_WithYamlExtension(t *testing.T) { + tmpDir := t.TempDir() + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + result := resolveImportFilePath("catalog/base.yaml", atmosConfig) + assert.Equal(t, filepath.Join(tmpDir, "catalog", "base.yaml"), result) +} + +// TestResolveImportFilePath_WithYmlExtension tests existing .yml extension. +func TestResolveImportFilePath_WithYmlExtension(t *testing.T) { + tmpDir := t.TempDir() + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + result := resolveImportFilePath("catalog/base.yml", atmosConfig) + assert.Equal(t, filepath.Join(tmpDir, "catalog", "base.yml"), result) +} + +// TestBuildImportTree_FileNotFound tests graceful handling of missing file. +func TestBuildImportTree_FileNotFound(t *testing.T) { + tmpDir := t.TempDir() + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + cache := make(map[string][]string) + visited := make(map[string]bool) + + node := buildImportTree("nonexistent/import", atmosConfig, cache, visited) + + assert.Equal(t, "nonexistent/import", node.Path) + assert.False(t, node.Circular) + assert.Empty(t, node.Children) +} + +// TestBuildImportTree_CircularDetection tests circular reference detection. 
+func TestBuildImportTree_CircularDetection(t *testing.T) { + tmpDir := t.TempDir() + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + cache := make(map[string][]string) + visited := make(map[string]bool) + + // Mark a path as visited to simulate circular reference. + visited["catalog/base"] = true + + node := buildImportTree("catalog/base", atmosConfig, cache, visited) + + assert.Equal(t, "catalog/base", node.Path) + assert.True(t, node.Circular, "Expected circular reference to be detected") + assert.Empty(t, node.Children) +} + +// TestBuildImportTree_DeferCleanup tests that visited map is cleaned up. +func TestBuildImportTree_DeferCleanup(t *testing.T) { + tmpDir := t.TempDir() + + // Create file with no imports. + content := `vars: {}` + filePath := filepath.Join(tmpDir, "test.yaml") + err := os.WriteFile(filePath, []byte(content), 0o644) + require.NoError(t, err) + + atmosConfig := &schema.AtmosConfiguration{ + StacksBaseAbsolutePath: tmpDir, + } + + cache := make(map[string][]string) + visited := make(map[string]bool) + + _ = buildImportTree("test", atmosConfig, cache, visited) + + // After function returns, visited should be cleaned up (defer delete). 
+ assert.False(t, visited[filepath.Join(tmpDir, "test.yaml")], "Expected visited map to be cleaned up after defer") +} diff --git a/pkg/list/list_instances.go b/pkg/list/list_instances.go index 0d417e0269..a6a2af1401 100644 --- a/pkg/list/list_instances.go +++ b/pkg/list/list_instances.go @@ -1,10 +1,8 @@ package list import ( - "encoding/csv" "errors" "fmt" - "os" "sort" "strings" @@ -12,22 +10,81 @@ import ( errUtils "github.com/cloudposse/atmos/errors" e "github.com/cloudposse/atmos/internal/exec" - term "github.com/cloudposse/atmos/internal/tui/templates/term" "github.com/cloudposse/atmos/pkg/auth" cfg "github.com/cloudposse/atmos/pkg/config" "github.com/cloudposse/atmos/pkg/git" + "github.com/cloudposse/atmos/pkg/list/column" + "github.com/cloudposse/atmos/pkg/list/extract" + "github.com/cloudposse/atmos/pkg/list/filter" "github.com/cloudposse/atmos/pkg/list/format" + "github.com/cloudposse/atmos/pkg/list/importresolver" + "github.com/cloudposse/atmos/pkg/list/renderer" + listSort "github.com/cloudposse/atmos/pkg/list/sort" log "github.com/cloudposse/atmos/pkg/logger" + "github.com/cloudposse/atmos/pkg/perf" "github.com/cloudposse/atmos/pkg/pro" "github.com/cloudposse/atmos/pkg/pro/dtos" "github.com/cloudposse/atmos/pkg/schema" + "github.com/cloudposse/atmos/pkg/ui" u "github.com/cloudposse/atmos/pkg/utils" ) -const ( - componentHeader = "Component" - stackHeader = "Stack" -) +// Default columns for list instances if not specified in atmos.yaml. +var defaultInstanceColumns = []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + {Name: "Stack", Value: "{{ .stack }}"}, +} + +// InstancesCommandOptions contains options for the list instances command. 
+type InstancesCommandOptions struct { + Info *schema.ConfigAndStacksInfo + Cmd *cobra.Command + Args []string + ShowImports bool + ColumnsFlag []string + FilterSpec string + SortSpec string + Delimiter string + Query string + AuthManager auth.AuthManager +} + +// parseColumnsFlag parses column specifications from CLI flag. +// Each flag value should be in the format: "Name=TemplateExpression" +// Example: --columns "Component={{ .component }}" --columns "Stack={{ .stack }}" +// Returns error if any column specification is invalid. +func parseColumnsFlag(columnsFlag []string) ([]column.Config, error) { + if len(columnsFlag) == 0 { + return defaultInstanceColumns, nil + } + + columns := make([]column.Config, 0, len(columnsFlag)) + for i, spec := range columnsFlag { + // Split on first '=' to separate name from template + parts := strings.SplitN(spec, "=", 2) + if len(parts) != 2 { + return nil, fmt.Errorf("%w: column spec %d must be in format 'Name=Template', got: %q", + errUtils.ErrInvalidConfig, i+1, spec) + } + + name := strings.TrimSpace(parts[0]) + value := strings.TrimSpace(parts[1]) + + if name == "" { + return nil, fmt.Errorf("%w: column spec %d has empty name", errUtils.ErrInvalidConfig, i+1) + } + if value == "" { + return nil, fmt.Errorf("%w: column spec %d has empty template", errUtils.ErrInvalidConfig, i+1) + } + + columns = append(columns, column.Config{ + Name: name, + Value: value, + }) + } + + return columns, nil +} // processComponentConfig processes a single component configuration and returns an instance if valid. func processComponentConfig(stackName, componentName, componentType string, componentConfig interface{}) *schema.Instance { @@ -168,41 +225,33 @@ func sortInstances(instances []schema.Instance) []schema.Instance { return instances } -// formatInstances formats the instances for output. 
-func formatInstances(instances []schema.Instance) string { - formatOpts := format.FormatOptions{ - TTY: term.IsTTYSupportForStdout(), - CustomHeaders: []string{componentHeader, stackHeader}, +// getInstanceColumns returns column configuration from CLI flag, atmos.yaml, or defaults. +// Returns error if CLI flag parsing fails. +func getInstanceColumns(atmosConfig *schema.AtmosConfiguration, columnsFlag []string) ([]column.Config, error) { + // If --columns flag is provided, parse it and return. + if len(columnsFlag) > 0 { + columns, err := parseColumnsFlag(columnsFlag) + if err != nil { + return nil, err + } + return columns, nil } - // If not in a TTY environment, output CSV. - if !formatOpts.TTY { - var output strings.Builder - csvWriter := csv.NewWriter(&output) - if err := csvWriter.Write([]string{componentHeader, stackHeader}); err != nil { - return "" - } - for _, i := range instances { - if err := csvWriter.Write([]string{i.Component, i.Stack}); err != nil { - return "" + // Check if custom columns are configured in atmos.yaml. + if len(atmosConfig.Components.List.Columns) > 0 { + columns := make([]column.Config, len(atmosConfig.Components.List.Columns)) + for i, col := range atmosConfig.Components.List.Columns { + columns[i] = column.Config{ + Name: col.Name, + Value: col.Value, + Width: col.Width, } } - csvWriter.Flush() - if err := csvWriter.Error(); err != nil { - log.Error(errUtils.ErrFailedToFinalizeCSVOutput.Error(), "error", err) - return "" - } - return output.String() - } - - // For TTY mode, create a styled table with only Component and Stack columns. - tableRows := make([][]string, 0, len(instances)) - for _, i := range instances { - row := []string{i.Component, i.Stack} - tableRows = append(tableRows, row) + return columns, nil } - return format.CreateStyledTable(formatOpts.CustomHeaders, tableRows) + // Return default columns. 
+ return defaultInstanceColumns, nil } // uploadInstancesWithDeps uploads instances to Atmos Pro API using injected dependencies. @@ -302,37 +351,116 @@ func processInstances(atmosConfig *schema.AtmosConfiguration, authManager auth.A } // ExecuteListInstancesCmd executes the list instances command. -func ExecuteListInstancesCmd(info *schema.ConfigAndStacksInfo, cmd *cobra.Command, args []string, authManager auth.AuthManager) error { - // Inline initializeConfig. - atmosConfig, err := cfg.InitCliConfig(*info, true) +// +//nolint:revive,cyclop,funlen // Complexity and length from format branching and upload handling (unavoidable pattern). +func ExecuteListInstancesCmd(opts *InstancesCommandOptions) error { + defer perf.Track(nil, "list.ExecuteListInstancesCmd")() + + log.Trace("ExecuteListInstancesCmd starting") + // Initialize CLI config. + atmosConfig, err := cfg.InitCliConfig(*opts.Info, true) if err != nil { log.Error(errUtils.ErrFailedToInitConfig.Error(), "error", err) return errors.Join(errUtils.ErrFailedToInitConfig, err) } // Get flags. - upload, err := cmd.Flags().GetBool("upload") + upload, err := opts.Cmd.Flags().GetBool("upload") if err != nil { log.Error(errUtils.ErrParseFlag.Error(), "flag", "upload", "error", err) return errors.Join(errUtils.ErrParseFlag, err) } - // Process instances. - instances, err := processInstances(&atmosConfig, authManager) + formatFlag, err := opts.Cmd.Flags().GetString("format") + if err != nil { + log.Error(errUtils.ErrParseFlag.Error(), "flag", "format", "error", err) + return errors.Join(errUtils.ErrParseFlag, err) + } + + // Handle tree format specially - branch before calling processInstances to avoid double processing. + log.Trace("Checking format flag", "format_flag", formatFlag, "format_tree", format.FormatTree, "match", formatFlag == string(format.FormatTree)) + if formatFlag == string(format.FormatTree) { + // Tree format does not support --upload. 
+ if upload { + return fmt.Errorf("%w: --upload is not supported with --format=tree", errUtils.ErrInvalidFlag) + } + + // Enable provenance tracking to capture import chains. + atmosConfig.TrackProvenance = true + + // Clear caches to ensure fresh processing with provenance enabled. + e.ClearMergeContexts() + e.ClearFindStacksMapCache() + + // Get all stacks for provenance-based import resolution (single call). + stacksMap, err := e.ExecuteDescribeStacks(&atmosConfig, "", nil, nil, nil, false, false, false, false, nil, opts.AuthManager) + if err != nil { + log.Error(errUtils.ErrExecuteDescribeStacks.Error(), "error", err) + return errors.Join(errUtils.ErrExecuteDescribeStacks, err) + } + + // Resolve import trees using provenance system. + importTrees, err := importresolver.ResolveImportTreeFromProvenance(stacksMap, &atmosConfig) + if err != nil { + return fmt.Errorf("failed to resolve import trees: %w", err) + } + + // Render tree view. + // Use showImports parameter from --provenance flag. + output := format.RenderInstancesTree(importTrees, opts.ShowImports) + fmt.Println(output) + return nil + } + + // For non-tree formats, process instances normally. + instances, err := processInstances(&atmosConfig, opts.AuthManager) if err != nil { log.Error(errUtils.ErrProcessInstances.Error(), "error", err) return errors.Join(errUtils.ErrProcessInstances, err) } - // Inline handleOutput. - output := formatInstances(instances) - fmt.Fprint(os.Stdout, output) + // Extract instances into renderer-compatible format with metadata fields. + data := extract.Metadata(instances) + + // Get column configuration. + columns, err := getInstanceColumns(&atmosConfig, opts.ColumnsFlag) + if err != nil { + log.Error("failed to get columns", "error", err) + return errors.Join(errUtils.ErrInvalidConfig, err) + } + + // Create column selector. 
+ selector, err := column.NewSelector(columns, column.BuildColumnFuncMap()) + if err != nil { + return fmt.Errorf("failed to create column selector: %w", err) + } + + // Build filters from filter specification. + filters, err := buildInstanceFilters(opts.FilterSpec) + if err != nil { + return fmt.Errorf("failed to build filters: %w", err) + } + + // Build sorters from sort specification. + // Pass columns to allow smart default sorting based on available columns. + sorters, err := buildInstanceSorters(opts.SortSpec, columns) + if err != nil { + return fmt.Errorf("failed to build sorters: %w", err) + } + + // Create renderer. + r := renderer.New(filters, selector, sorters, format.Format(formatFlag), opts.Delimiter) + + // Render output. + if err := r.Render(data); err != nil { + return fmt.Errorf("failed to render instances: %w", err) + } // Handle upload if requested. if upload { proInstances := filterProEnabledInstances(instances) if len(proInstances) == 0 { - u.PrintfMessageToTUI("No Atmos Pro-enabled instances found; nothing to upload.") + _ = ui.Info("No Atmos Pro-enabled instances found; nothing to upload.") return nil } return uploadInstances(proInstances) @@ -340,3 +468,39 @@ func ExecuteListInstancesCmd(info *schema.ConfigAndStacksInfo, cmd *cobra.Comman return nil } + +// buildInstanceFilters creates filters from filter specification. +// The filter spec format is currently undefined for instances, +// so this returns an empty filter list for now. +func buildInstanceFilters(filterSpec string) ([]filter.Filter, error) { + // TODO: Implement filter parsing when filter spec format is defined. + // For now, return empty filter list. + return nil, nil +} + +// buildInstanceSorters creates sorters from sort specification. +// When sortSpec is empty and columns contain default "Component" and "Stack", +// applies default sorting. Otherwise returns empty sorters (natural order). 
+func buildInstanceSorters(sortSpec string, columns []column.Config) ([]*listSort.Sorter, error) { + // If user provided explicit sort spec, use it. + if sortSpec != "" { + return listSort.ParseSortSpec(sortSpec) + } + + // Build map of available column names. + columnNames := make(map[string]bool) + for _, col := range columns { + columnNames[col.Name] = true + } + + // Only apply default sort if both Component and Stack columns exist. + if columnNames["Component"] && columnNames["Stack"] { + return []*listSort.Sorter{ + listSort.NewSorter("Component", listSort.Ascending), + listSort.NewSorter("Stack", listSort.Ascending), + }, nil + } + + // No default sort for custom columns - return empty sorters (natural order). + return nil, nil +} diff --git a/pkg/list/list_instances_comprehensive_test.go b/pkg/list/list_instances_comprehensive_test.go index 108f11e6b9..410fd05ebb 100644 --- a/pkg/list/list_instances_comprehensive_test.go +++ b/pkg/list/list_instances_comprehensive_test.go @@ -1,13 +1,10 @@ package list import ( - "io" - "os" "testing" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/mock" - "github.com/stretchr/testify/require" "github.com/cloudposse/atmos/pkg/pro/dtos" "github.com/cloudposse/atmos/pkg/schema" @@ -30,88 +27,6 @@ func (m *MockAtmosProAPIClientInterface) UploadInstanceStatus(req *dtos.Instance return args.Error(0) } -// Test formatInstances function. -func TestFormatInstances(t *testing.T) { - instances := []schema.Instance{ - {Component: "vpc", Stack: "stack1"}, - {Component: "app", Stack: "stack2"}, - {Component: "db", Stack: "stack1"}, - } - - t.Run("TTY mode", func(t *testing.T) { - // Test TTY mode by directly calling formatInstances. - // In a real TTY environment, this would return styled table format. - output := formatInstances(instances) - - // Should return styled table format with headers and data. 
- assert.Contains(t, output, "Component") - assert.Contains(t, output, "Stack") - assert.Contains(t, output, "vpc") - assert.Contains(t, output, "app") - assert.Contains(t, output, "db") - }) - - t.Run("non-TTY mode", func(t *testing.T) { - // Mock non-TTY environment by redirecting stdout. - originalStdout := os.Stdout - defer func() { os.Stdout = originalStdout }() - - r, w, err := os.Pipe() - require.NoError(t, err) - defer func() { - require.NoError(t, r.Close()) - }() - - os.Stdout = w - - output := formatInstances(instances) - - err = w.Close() - require.NoError(t, err) - os.Stdout = originalStdout - - // Read the output from the pipe. - pipeOutput, err := io.ReadAll(r) - require.NoError(t, err) - csvOutput := string(pipeOutput) - - // Should return CSV format. - expectedCSV := "Component,Stack\nvpc,stack1\napp,stack2\ndb,stack1\n" - assert.Equal(t, expectedCSV, output) - // The function doesn't write to stdout, it only returns the formatted string - assert.Equal(t, "", csvOutput) - }) - - t.Run("empty instances", func(t *testing.T) { - originalStdout := os.Stdout - defer func() { os.Stdout = originalStdout }() - - r, w, err := os.Pipe() - require.NoError(t, err) - defer func() { - require.NoError(t, r.Close()) - }() - - os.Stdout = w - - output := formatInstances([]schema.Instance{}) - - err = w.Close() - require.NoError(t, err) - os.Stdout = originalStdout - - // Read the output from the pipe. - pipeOutput, err := io.ReadAll(r) - require.NoError(t, err) - csvOutput := string(pipeOutput) - - expectedCSV := "Component,Stack\n" - assert.Equal(t, expectedCSV, output) - // The function doesn't write to stdout, it only returns the formatted string - assert.Equal(t, "", csvOutput) - }) -} - // Test processComponentConfig edge cases. 
func TestProcessComponentConfig(t *testing.T) { t.Run("invalid component config type", func(t *testing.T) { diff --git a/pkg/list/list_instances_coverage_test.go b/pkg/list/list_instances_coverage_test.go index 98a41c709b..ac3c22c516 100644 --- a/pkg/list/list_instances_coverage_test.go +++ b/pkg/list/list_instances_coverage_test.go @@ -1,3 +1,4 @@ +//nolint:dupl // Test structure similarity is intentional for comprehensive coverage package list import ( @@ -5,54 +6,18 @@ import ( "github.com/spf13/cobra" "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + errUtils "github.com/cloudposse/atmos/errors" + "github.com/cloudposse/atmos/pkg/data" + iolib "github.com/cloudposse/atmos/pkg/io" + "github.com/cloudposse/atmos/pkg/list/column" + listSort "github.com/cloudposse/atmos/pkg/list/sort" "github.com/cloudposse/atmos/pkg/schema" + "github.com/cloudposse/atmos/pkg/ui" "github.com/cloudposse/atmos/tests" ) -// TestFormatInstances_TTY tests formatInstances() in TTY mode (table format). -func TestFormatInstances_TTY(t *testing.T) { - // This test covers the TTY branch that creates a styled table. - instances := []schema.Instance{ - {Component: "vpc", Stack: "dev"}, - {Component: "app", Stack: "prod"}, - } - - result := formatInstances(instances) - - // Should produce table output with headers and data. - assert.NotEmpty(t, result) - assert.Contains(t, result, "Component") - assert.Contains(t, result, "Stack") -} - -// TestFormatInstances_WithMultipleInstances tests formatInstances() with multiple instances. -func TestFormatInstances_WithMultipleInstances(t *testing.T) { - // Test with multiple instances to cover the loop logic. - instances := []schema.Instance{ - {Component: "vpc", Stack: "dev"}, - {Component: "app", Stack: "prod"}, - {Component: "db", Stack: "staging"}, - } - - result := formatInstances(instances) - - // Should produce output with all instances. 
- assert.NotEmpty(t, result) - assert.Contains(t, result, "Component") - assert.Contains(t, result, "Stack") -} - -// TestFormatInstances_EmptyList tests formatInstances() with empty instance list. -func TestFormatInstances_EmptyList(t *testing.T) { - instances := []schema.Instance{} - - result := formatInstances(instances) - - // Should produce output with headers but no data rows. - assert.NotEmpty(t, result) -} - // TestUploadInstances tests the uploadInstances() wrapper function. func TestUploadInstances(t *testing.T) { // This tests the production wrapper that uses default implementations. @@ -93,6 +58,14 @@ func TestProcessInstances(t *testing.T) { // TestExecuteListInstancesCmd tests the main command entry point with real fixtures. func TestExecuteListInstancesCmd(t *testing.T) { + // Initialize I/O and UI contexts for testing. + ioCtx, err := iolib.NewContext() + if err != nil { + t.Fatalf("failed to initialize I/O context: %v", err) + } + ui.InitFormatter(ioCtx) + data.InitWriter(ioCtx) + // Use actual test fixture for integration test. fixturePath := "../../tests/fixtures/scenarios/complete" tests.RequireFilePath(t, fixturePath, "test fixture directory") @@ -100,13 +73,22 @@ func TestExecuteListInstancesCmd(t *testing.T) { // Create command with flags. cmd := &cobra.Command{} cmd.Flags().Bool("upload", false, "Upload instances to Atmos Pro") + cmd.Flags().String("format", "table", "Output format") info := &schema.ConfigAndStacksInfo{ BasePath: fixturePath, } // Execute command - should successfully list instances. - err := ExecuteListInstancesCmd(info, cmd, []string{}, nil) + err = ExecuteListInstancesCmd(&InstancesCommandOptions{ + Info: info, + Cmd: cmd, + Args: []string{}, + ShowImports: false, + ColumnsFlag: []string{}, + FilterSpec: "", + SortSpec: "", + }) // Should succeed with valid fixture. assert.NoError(t, err) @@ -117,6 +99,7 @@ func TestExecuteListInstancesCmd_InvalidConfig(t *testing.T) { // Create command with flags. 
cmd := &cobra.Command{} cmd.Flags().Bool("upload", false, "Upload instances to Atmos Pro") + cmd.Flags().String("format", "table", "Output format") // Use invalid config to trigger error path. info := &schema.ConfigAndStacksInfo{ @@ -124,7 +107,15 @@ func TestExecuteListInstancesCmd_InvalidConfig(t *testing.T) { } // Execute command - will error but won't panic. - err := ExecuteListInstancesCmd(info, cmd, []string{}, nil) + err := ExecuteListInstancesCmd(&InstancesCommandOptions{ + Info: info, + Cmd: cmd, + Args: []string{}, + ShowImports: false, + ColumnsFlag: []string{}, + FilterSpec: "", + SortSpec: "", + }) // Error is expected with invalid config. assert.Error(t, err) @@ -135,14 +126,291 @@ func TestExecuteListInstancesCmd_UploadPath(t *testing.T) { // Test that upload flag parsing works. cmd := &cobra.Command{} cmd.Flags().Bool("upload", true, "Upload instances to Atmos Pro") + cmd.Flags().String("format", "table", "Output format") info := &schema.ConfigAndStacksInfo{ BasePath: "/nonexistent/path", } // Execute with upload enabled - will error in config loading before upload. - err := ExecuteListInstancesCmd(info, cmd, []string{}, nil) + err := ExecuteListInstancesCmd(&InstancesCommandOptions{ + Info: info, + Cmd: cmd, + Args: []string{}, + ShowImports: false, + ColumnsFlag: []string{}, + FilterSpec: "", + SortSpec: "", + }) // Error is expected (config load will fail). assert.Error(t, err) } + +// TestParseColumnsFlag tests parsing column specifications from CLI flags. 
+func TestParseColumnsFlag(t *testing.T) { + tests := []struct { + name string + columnsFlag []string + expected []column.Config + expectErr bool + errContains string + }{ + { + name: "empty flag returns defaults", + columnsFlag: []string{}, + expected: defaultInstanceColumns, + expectErr: false, + }, + { + name: "nil flag returns defaults", + columnsFlag: nil, + expected: defaultInstanceColumns, + expectErr: false, + }, + { + name: "valid single column", + columnsFlag: []string{"Stack={{ .stack }}"}, + expected: []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + }, + expectErr: false, + }, + { + name: "valid multiple columns", + columnsFlag: []string{"Stack={{ .stack }}", "Component={{ .component }}"}, + expected: []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + {Name: "Component", Value: "{{ .component }}"}, + }, + expectErr: false, + }, + { + name: "column with multiple equals signs in template", + columnsFlag: []string{"Check={{ if eq .enabled true }}yes{{ end }}"}, + expected: []column.Config{ + {Name: "Check", Value: "{{ if eq .enabled true }}yes{{ end }}"}, + }, + expectErr: false, + }, + { + name: "missing equals sign", + columnsFlag: []string{"InvalidSpec"}, + expectErr: true, + errContains: "must be in format 'Name=Template'", + }, + { + name: "empty name", + columnsFlag: []string{"={{ .stack }}"}, + expectErr: true, + errContains: "has empty name", + }, + { + name: "empty template", + columnsFlag: []string{"Stack="}, + expectErr: true, + errContains: "has empty template", + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + result, err := parseColumnsFlag(tc.columnsFlag) + + if tc.expectErr { + require.Error(t, err) + assert.ErrorIs(t, err, errUtils.ErrInvalidConfig) + if tc.errContains != "" { + assert.Contains(t, err.Error(), tc.errContains) + } + return + } + + require.NoError(t, err) + assert.Equal(t, tc.expected, result) + }) + } +} + +// TestGetInstanceColumns tests column configuration resolution. 
+func TestGetInstanceColumns(t *testing.T) { + tests := []struct { + name string + atmosConfig *schema.AtmosConfiguration + columnsFlag []string + expected []column.Config + expectErr bool + }{ + { + name: "CLI flag takes precedence over config", + atmosConfig: &schema.AtmosConfiguration{ + Components: schema.Components{ + List: schema.ListConfig{ + Columns: []schema.ListColumnConfig{ + {Name: "ConfigColumn", Value: "{{ .config }}"}, + }, + }, + }, + }, + columnsFlag: []string{"FlagColumn={{ .flag }}"}, + expected: []column.Config{ + {Name: "FlagColumn", Value: "{{ .flag }}"}, + }, + expectErr: false, + }, + { + name: "config columns used when no flag provided", + atmosConfig: &schema.AtmosConfiguration{ + Components: schema.Components{ + List: schema.ListConfig{ + Columns: []schema.ListColumnConfig{ + {Name: "ConfigStack", Value: "{{ .stack }}"}, + {Name: "ConfigComponent", Value: "{{ .component }}"}, + }, + }, + }, + }, + columnsFlag: nil, + expected: []column.Config{ + {Name: "ConfigStack", Value: "{{ .stack }}"}, + {Name: "ConfigComponent", Value: "{{ .component }}"}, + }, + expectErr: false, + }, + { + name: "defaults used when no flag and no config", + atmosConfig: &schema.AtmosConfiguration{}, + columnsFlag: nil, + expected: defaultInstanceColumns, + expectErr: false, + }, + { + name: "defaults used when config has empty columns", + atmosConfig: &schema.AtmosConfiguration{ + Components: schema.Components{ + List: schema.ListConfig{ + Columns: []schema.ListColumnConfig{}, + }, + }, + }, + columnsFlag: nil, + expected: defaultInstanceColumns, + expectErr: false, + }, + { + name: "invalid flag returns error", + atmosConfig: &schema.AtmosConfiguration{}, + columnsFlag: []string{"InvalidSpec"}, + expectErr: true, + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + result, err := getInstanceColumns(tc.atmosConfig, tc.columnsFlag) + + if tc.expectErr { + require.Error(t, err) + return + } + + require.NoError(t, err) + assert.Equal(t, 
tc.expected, result) + }) + } +} + +// TestBuildInstanceSorters tests sorter configuration. +func TestBuildInstanceSorters(t *testing.T) { + tests := []struct { + name string + sortSpec string + columns []column.Config + expected []*listSort.Sorter + expectErr bool + errContains string + }{ + { + name: "empty spec with default columns returns default sorters", + sortSpec: "", + columns: []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + {Name: "Stack", Value: "{{ .stack }}"}, + }, + expected: []*listSort.Sorter{ + listSort.NewSorter("Component", listSort.Ascending), + listSort.NewSorter("Stack", listSort.Ascending), + }, + expectErr: false, + }, + { + name: "empty spec with custom columns returns nil", + sortSpec: "", + columns: []column.Config{{Name: "Custom", Value: "{{ .custom }}"}}, + expected: nil, + expectErr: false, + }, + { + name: "explicit sort spec overrides defaults", + sortSpec: "Stack:asc", + columns: []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + {Name: "Stack", Value: "{{ .stack }}"}, + }, + expected: []*listSort.Sorter{ + listSort.NewSorter("Stack", listSort.Ascending), + }, + expectErr: false, + }, + { + name: "descending sort", + sortSpec: "Component:desc", + columns: []column.Config{{Name: "Component", Value: "{{ .component }}"}}, + expected: []*listSort.Sorter{ + listSort.NewSorter("Component", listSort.Descending), + }, + expectErr: false, + }, + { + name: "invalid sort spec format", + sortSpec: "InvalidFormat", + columns: []column.Config{{Name: "Component", Value: "{{ .component }}"}}, + expectErr: true, + errContains: "expected format 'column:order'", + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + result, err := buildInstanceSorters(tc.sortSpec, tc.columns) + + if tc.expectErr { + require.Error(t, err) + if tc.errContains != "" { + assert.Contains(t, err.Error(), tc.errContains) + } + return + } + + require.NoError(t, err) + if tc.expected == nil { + assert.Nil(t, 
result) + return + } + + require.Len(t, result, len(tc.expected)) + for i, s := range result { + assert.Equal(t, tc.expected[i].Column, s.Column) + assert.Equal(t, tc.expected[i].Order, s.Order) + } + }) + } +} + +// TestBuildInstanceFilters tests the filter builder placeholder. +func TestBuildInstanceFilters(t *testing.T) { + // Currently buildInstanceFilters is a placeholder that returns nil. + result, err := buildInstanceFilters("any-spec") + require.NoError(t, err) + assert.Nil(t, result) +} diff --git a/pkg/list/list_metadata.go b/pkg/list/list_metadata.go new file mode 100644 index 0000000000..694ae73b48 --- /dev/null +++ b/pkg/list/list_metadata.go @@ -0,0 +1,182 @@ +package list + +import ( + "errors" + "fmt" + "strings" + + "github.com/spf13/cobra" + + errUtils "github.com/cloudposse/atmos/errors" + "github.com/cloudposse/atmos/pkg/auth" + cfg "github.com/cloudposse/atmos/pkg/config" + "github.com/cloudposse/atmos/pkg/list/column" + "github.com/cloudposse/atmos/pkg/list/extract" + "github.com/cloudposse/atmos/pkg/list/filter" + "github.com/cloudposse/atmos/pkg/list/format" + "github.com/cloudposse/atmos/pkg/list/renderer" + listSort "github.com/cloudposse/atmos/pkg/list/sort" + "github.com/cloudposse/atmos/pkg/schema" +) + +// Default columns for list metadata if not specified in atmos.yaml. +var defaultMetadataColumns = []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + {Name: "Component", Value: "{{ .component }}"}, + {Name: "Type", Value: "{{ .type }}"}, + {Name: "Enabled", Value: "{{ .enabled }}"}, + {Name: "Locked", Value: "{{ .locked }}"}, + {Name: "Component (base)", Value: "{{ .component_base }}"}, + {Name: "Inherits", Value: "{{ .inherits }}"}, + {Name: "Description", Value: "{{ .description }}"}, +} + +// getMetadataColumns returns column configuration from CLI flag, atmos.yaml, or defaults. +// Returns error if CLI flag parsing fails. 
+func getMetadataColumns(atmosConfig *schema.AtmosConfiguration, columnsFlag []string) ([]column.Config, error) { + // If --columns flag is provided, parse it and return. + if len(columnsFlag) > 0 { + columns, err := parseMetadataColumnsFlag(columnsFlag) + if err != nil { + return nil, err + } + return columns, nil + } + + // Check if custom columns are configured in atmos.yaml. + if len(atmosConfig.Components.List.Columns) > 0 { + columns := make([]column.Config, len(atmosConfig.Components.List.Columns)) + for i, col := range atmosConfig.Components.List.Columns { + columns[i] = column.Config{ + Name: col.Name, + Value: col.Value, + } + } + return columns, nil + } + + // Return default columns. + return defaultMetadataColumns, nil +} + +// parseMetadataColumnsFlag parses column specifications from CLI flag. +// Each flag value should be in the format: "Name=TemplateExpression" +// Example: --columns "Component={{ .component }}" --columns "Stack={{ .stack }}" +// Returns error if any column specification is invalid. 
+func parseMetadataColumnsFlag(columnsFlag []string) ([]column.Config, error) { + if len(columnsFlag) == 0 { + return defaultMetadataColumns, nil + } + + columns := make([]column.Config, 0, len(columnsFlag)) + for i, spec := range columnsFlag { + // Split on first '=' to separate name from template + parts := strings.SplitN(spec, "=", 2) + if len(parts) != 2 { + return nil, fmt.Errorf("%w: column spec %d must be in format 'Name=Template', got: %q", + errUtils.ErrInvalidConfig, i+1, spec) + } + + name := strings.TrimSpace(parts[0]) + value := strings.TrimSpace(parts[1]) + + if name == "" { + return nil, fmt.Errorf("%w: column spec %d has empty name", errUtils.ErrInvalidConfig, i+1) + } + if value == "" { + return nil, fmt.Errorf("%w: column spec %d has empty template", errUtils.ErrInvalidConfig, i+1) + } + + columns = append(columns, column.Config{ + Name: name, + Value: value, + }) + } + + return columns, nil +} + +// MetadataOptions contains options for list metadata command. +type MetadataOptions struct { + Format string + Columns []string + Sort string + Filter string + Stack string + Delimiter string + AuthManager auth.AuthManager +} + +// ExecuteListMetadataCmd executes the list metadata command using the renderer pipeline. +func ExecuteListMetadataCmd(info *schema.ConfigAndStacksInfo, cmd *cobra.Command, args []string, opts *MetadataOptions) error { + // Initialize CLI config. + atmosConfig, err := cfg.InitCliConfig(*info, true) + if err != nil { + return errors.Join(errUtils.ErrFailedToInitConfig, err) + } + + // Process instances (same as list instances, but we'll extract metadata). + instances, err := processInstances(&atmosConfig, opts.AuthManager) + if err != nil { + return errors.Join(errUtils.ErrProcessInstances, err) + } + + // Extract metadata into renderer-compatible format. + data := extract.Metadata(instances) + + // Get column configuration. 
+ columns, err := getMetadataColumns(&atmosConfig, opts.Columns) + if err != nil { + return errors.Join(errUtils.ErrInvalidConfig, err) + } + + // Create column selector. + selector, err := column.NewSelector(columns, column.BuildColumnFuncMap()) + if err != nil { + return fmt.Errorf("failed to create column selector: %w", err) + } + + // Build filters from filter specification. + filters, err := buildMetadataFilters(opts.Filter) + if err != nil { + return fmt.Errorf("failed to build filters: %w", err) + } + + // Build sorters from sort specification. + sorters, err := buildMetadataSorters(opts.Sort) + if err != nil { + return fmt.Errorf("failed to build sorters: %w", err) + } + + // Create renderer with filters and sorters. + r := renderer.New(filters, selector, sorters, format.Format(opts.Format), opts.Delimiter) + + // Render output. + if err := r.Render(data); err != nil { + return fmt.Errorf("failed to render metadata: %w", err) + } + + return nil +} + +// buildMetadataFilters creates filters from filter specification. +// The filter spec format is currently undefined for metadata, +// so this returns an empty filter list for now. +func buildMetadataFilters(filterSpec string) ([]filter.Filter, error) { + // TODO: Implement filter parsing when filter spec format is defined. + // For now, return empty filter list. + return nil, nil +} + +// buildMetadataSorters creates sorters from sort specification. +func buildMetadataSorters(sortSpec string) ([]*listSort.Sorter, error) { + if sortSpec == "" { + // Default sort: by stack then component ascending. 
+ return []*listSort.Sorter{ + listSort.NewSorter("Stack", listSort.Ascending), + listSort.NewSorter("Component", listSort.Ascending), + }, nil + } + + return listSort.ParseSortSpec(sortSpec) +} diff --git a/pkg/list/list_metadata_test.go b/pkg/list/list_metadata_test.go new file mode 100644 index 0000000000..a08ab2a52d --- /dev/null +++ b/pkg/list/list_metadata_test.go @@ -0,0 +1,350 @@ +//nolint:dupl // Test structure similarity is intentional for comprehensive coverage +package list + +import ( + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + errUtils "github.com/cloudposse/atmos/errors" + "github.com/cloudposse/atmos/pkg/list/column" + listSort "github.com/cloudposse/atmos/pkg/list/sort" + "github.com/cloudposse/atmos/pkg/schema" +) + +func TestParseMetadataColumnsFlag(t *testing.T) { + tests := []struct { + name string + columnsFlag []string + expected []column.Config + expectErr bool + errContains string + }{ + { + name: "empty flag returns defaults", + columnsFlag: []string{}, + expected: defaultMetadataColumns, + expectErr: false, + }, + { + name: "nil flag returns defaults", + columnsFlag: nil, + expected: defaultMetadataColumns, + expectErr: false, + }, + { + name: "valid single column", + columnsFlag: []string{"Stack={{ .stack }}"}, + expected: []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + }, + expectErr: false, + }, + { + name: "valid multiple columns", + columnsFlag: []string{"Stack={{ .stack }}", "Component={{ .component }}"}, + expected: []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + {Name: "Component", Value: "{{ .component }}"}, + }, + expectErr: false, + }, + { + name: "column with multiple equals signs in template", + columnsFlag: []string{"Check={{ if eq .enabled true }}yes{{ end }}"}, + expected: []column.Config{ + {Name: "Check", Value: "{{ if eq .enabled true }}yes{{ end }}"}, + }, + expectErr: false, + }, + { + name: "trims whitespace from name and value", + 
columnsFlag: []string{" Stack = {{ .stack }} "}, + expected: []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + }, + expectErr: false, + }, + { + name: "missing equals sign", + columnsFlag: []string{"InvalidSpec"}, + expectErr: true, + errContains: "must be in format 'Name=Template'", + }, + { + name: "empty name", + columnsFlag: []string{"={{ .stack }}"}, + expectErr: true, + errContains: "has empty name", + }, + { + name: "whitespace-only name", + columnsFlag: []string{" ={{ .stack }}"}, + expectErr: true, + errContains: "has empty name", + }, + { + name: "empty template", + columnsFlag: []string{"Stack="}, + expectErr: true, + errContains: "has empty template", + }, + { + name: "whitespace-only template", + columnsFlag: []string{"Stack= "}, + expectErr: true, + errContains: "has empty template", + }, + { + name: "error includes column number", + columnsFlag: []string{"Valid={{ .stack }}", "Invalid"}, + expectErr: true, + errContains: "column spec 2", + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + result, err := parseMetadataColumnsFlag(tc.columnsFlag) + + if tc.expectErr { + require.Error(t, err) + assert.ErrorIs(t, err, errUtils.ErrInvalidConfig) + if tc.errContains != "" { + assert.Contains(t, err.Error(), tc.errContains) + } + return + } + + require.NoError(t, err) + assert.Equal(t, tc.expected, result) + }) + } +} + +func TestGetMetadataColumns(t *testing.T) { + tests := []struct { + name string + atmosConfig *schema.AtmosConfiguration + columnsFlag []string + expected []column.Config + expectErr bool + }{ + { + name: "CLI flag takes precedence over config", + atmosConfig: &schema.AtmosConfiguration{ + Components: schema.Components{ + List: schema.ListConfig{ + Columns: []schema.ListColumnConfig{ + {Name: "ConfigColumn", Value: "{{ .config }}"}, + }, + }, + }, + }, + columnsFlag: []string{"FlagColumn={{ .flag }}"}, + expected: []column.Config{ + {Name: "FlagColumn", Value: "{{ .flag }}"}, + }, + expectErr: false, + 
}, + { + name: "config columns used when no flag provided", + atmosConfig: &schema.AtmosConfiguration{ + Components: schema.Components{ + List: schema.ListConfig{ + Columns: []schema.ListColumnConfig{ + {Name: "ConfigStack", Value: "{{ .stack }}"}, + {Name: "ConfigComponent", Value: "{{ .component }}"}, + }, + }, + }, + }, + columnsFlag: nil, + expected: []column.Config{ + {Name: "ConfigStack", Value: "{{ .stack }}"}, + {Name: "ConfigComponent", Value: "{{ .component }}"}, + }, + expectErr: false, + }, + { + name: "defaults used when no flag and no config", + atmosConfig: &schema.AtmosConfiguration{}, + columnsFlag: nil, + expected: defaultMetadataColumns, + expectErr: false, + }, + { + name: "defaults used when config has empty columns", + atmosConfig: &schema.AtmosConfiguration{ + Components: schema.Components{ + List: schema.ListConfig{ + Columns: []schema.ListColumnConfig{}, + }, + }, + }, + columnsFlag: nil, + expected: defaultMetadataColumns, + expectErr: false, + }, + { + name: "invalid flag returns error", + atmosConfig: &schema.AtmosConfiguration{}, + columnsFlag: []string{"InvalidSpec"}, + expectErr: true, + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + result, err := getMetadataColumns(tc.atmosConfig, tc.columnsFlag) + + if tc.expectErr { + require.Error(t, err) + return + } + + require.NoError(t, err) + assert.Equal(t, tc.expected, result) + }) + } +} + +func TestBuildMetadataSorters(t *testing.T) { + tests := []struct { + name string + sortSpec string + expected []*listSort.Sorter + expectErr bool + errContains string + }{ + { + name: "empty spec returns default sorters", + sortSpec: "", + expected: []*listSort.Sorter{ + listSort.NewSorter("Stack", listSort.Ascending), + listSort.NewSorter("Component", listSort.Ascending), + }, + expectErr: false, + }, + { + name: "single column ascending", + sortSpec: "Stack:asc", + expected: []*listSort.Sorter{ + listSort.NewSorter("Stack", listSort.Ascending), + }, + expectErr: false, 
+ }, + { + name: "single column descending", + sortSpec: "Stack:desc", + expected: []*listSort.Sorter{ + listSort.NewSorter("Stack", listSort.Descending), + }, + expectErr: false, + }, + { + name: "multiple columns", + sortSpec: "Stack:asc,Component:desc", + expected: []*listSort.Sorter{ + listSort.NewSorter("Stack", listSort.Ascending), + listSort.NewSorter("Component", listSort.Descending), + }, + expectErr: false, + }, + { + name: "invalid format missing colon", + sortSpec: "Stack", + expectErr: true, + errContains: "expected format 'column:order'", + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + result, err := buildMetadataSorters(tc.sortSpec) + + if tc.expectErr { + require.Error(t, err) + if tc.errContains != "" { + assert.Contains(t, err.Error(), tc.errContains) + } + return + } + + require.NoError(t, err) + require.Len(t, result, len(tc.expected)) + for i, s := range result { + assert.Equal(t, tc.expected[i].Column, s.Column) + assert.Equal(t, tc.expected[i].Order, s.Order) + } + }) + } +} + +func TestBuildMetadataFilters(t *testing.T) { + // Currently buildMetadataFilters is a placeholder that returns nil. + // Test that it behaves as expected. + tests := []struct { + name string + filterSpec string + }{ + { + name: "empty filter spec", + filterSpec: "", + }, + { + name: "non-empty filter spec (currently ignored)", + filterSpec: "stack=dev*", + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + result, err := buildMetadataFilters(tc.filterSpec) + require.NoError(t, err) + assert.Nil(t, result) + }) + } +} + +func TestDefaultMetadataColumns(t *testing.T) { + // Verify default columns are properly configured. 
+ assert.Len(t, defaultMetadataColumns, 8) + + expectedNames := []string{ + "Stack", + "Component", + "Type", + "Enabled", + "Locked", + "Component (base)", + "Inherits", + "Description", + } + + for i, col := range defaultMetadataColumns { + assert.Equal(t, expectedNames[i], col.Name, "column %d name mismatch", i) + assert.NotEmpty(t, col.Value, "column %d should have a template", i) + assert.Contains(t, col.Value, "{{", "column %d template should be a Go template", i) + } +} + +func TestMetadataOptionsStruct(t *testing.T) { + // Test that MetadataOptions struct can be properly constructed. + opts := MetadataOptions{ + Format: "json", + Columns: []string{"Stack={{ .stack }}"}, + Sort: "-Stack", + Filter: "stack=dev*", + Stack: "dev", + Delimiter: ",", + } + + assert.Equal(t, "json", opts.Format) + assert.Equal(t, []string{"Stack={{ .stack }}"}, opts.Columns) + assert.Equal(t, "-Stack", opts.Sort) + assert.Equal(t, "stack=dev*", opts.Filter) + assert.Equal(t, "dev", opts.Stack) + assert.Equal(t, ",", opts.Delimiter) +} diff --git a/pkg/list/list_vendor.go b/pkg/list/list_vendor.go index a488d4a1e5..1908f7d330 100644 --- a/pkg/list/list_vendor.go +++ b/pkg/list/list_vendor.go @@ -13,6 +13,7 @@ import ( "github.com/cloudposse/atmos/internal/exec" term "github.com/cloudposse/atmos/internal/tui/templates/term" "github.com/cloudposse/atmos/pkg/filetype" + "github.com/cloudposse/atmos/pkg/list/extract" "github.com/cloudposse/atmos/pkg/list/format" log "github.com/cloudposse/atmos/pkg/logger" "github.com/cloudposse/atmos/pkg/schema" @@ -45,10 +46,6 @@ const ( ColumnNameManifest = "Manifest" // ColumnNameFolder is the column name for folder. ColumnNameFolder = "Folder" - // VendorTypeComponent is the type for components with component manifests. - VendorTypeComponent = "Component Manifest" - // VendorTypeVendor is the type for vendor manifests. - VendorTypeVendor = "Vendor Manifest" // TemplateKeyComponent is the template key for component name. 
TemplateKeyComponent = "atmos_component" // TemplateKeyVendorType is the template key for vendor type. @@ -62,12 +59,6 @@ const ( ) // VendorInfo contains information about a vendor configuration. -type VendorInfo struct { - Component string // Component name - Type string // "Component Manifest" or "Vendor Manifest" - Manifest string // Path to manifest file - Folder string // Target folder -} // formatVendorOutput handles output formatting for vendor list based on options.FormatStr. func formatVendorOutput(rows []map[string]interface{}, customHeaders []string, options *FilterOptions) (string, error) { @@ -120,7 +111,7 @@ func FilterAndListVendor(atmosConfig *schema.AtmosConfiguration, options *Filter return "", err } - vendorInfos, err := getVendorInfos(atmosConfig) + vendorInfos, err := GetVendorInfos(atmosConfig) if err != nil { return "", err } @@ -133,30 +124,31 @@ func FilterAndListVendor(atmosConfig *schema.AtmosConfiguration, options *Filter } // getVendorInfos retrieves vendor information, handling test and production modes. -func getVendorInfos(atmosConfig *schema.AtmosConfiguration) ([]VendorInfo, error) { +// GetVendorInfos reads vendor configuration from vendor.yaml files. 
+func GetVendorInfos(atmosConfig *schema.AtmosConfiguration) ([]extract.VendorInfo, error) { isTest := strings.Contains(atmosConfig.BasePath, "atmos-test-vendor") if isTest { if atmosConfig.Vendor.BasePath == "" { return nil, ErrVendorBasepathNotSet } - return []VendorInfo{ + return []extract.VendorInfo{ { Component: "vpc/v1", Folder: "components/terraform/vpc/v1", Manifest: "components/terraform/vpc/v1/component", - Type: VendorTypeComponent, + Type: extract.VendorTypeComponent, }, { Component: "eks/cluster", Folder: "components/terraform/eks/cluster", Manifest: "vendor.d/eks", - Type: VendorTypeVendor, + Type: extract.VendorTypeVendor, }, { Component: "ecs/cluster", Folder: "components/terraform/ecs/cluster", Manifest: "vendor.d/ecs", - Type: VendorTypeVendor, + Type: extract.VendorTypeVendor, }, }, nil } @@ -178,8 +170,8 @@ func getVendorColumns(atmosConfig *schema.AtmosConfiguration) []schema.ListColum } // findVendorConfigurations finds all vendor configurations. -func findVendorConfigurations(atmosConfig *schema.AtmosConfiguration) ([]VendorInfo, error) { - var vendorInfos []VendorInfo +func findVendorConfigurations(atmosConfig *schema.AtmosConfiguration) ([]extract.VendorInfo, error) { + var vendorInfos []extract.VendorInfo if atmosConfig.Vendor.BasePath == "" { return nil, ErrVendorBasepathNotSet @@ -214,7 +206,7 @@ func findVendorConfigurations(atmosConfig *schema.AtmosConfiguration) ([]VendorI } // appendVendorManifests processes the vendor base path and appends any found manifests to the provided list. -func appendVendorManifests(vendorInfos []VendorInfo, vendorBasePath string) []VendorInfo { +func appendVendorManifests(vendorInfos []extract.VendorInfo, vendorBasePath string) []extract.VendorInfo { // Check if vendorBasePath is a file or directory. 
fileInfo, err := os.Stat(vendorBasePath) if err != nil { @@ -235,7 +227,7 @@ func appendVendorManifests(vendorInfos []VendorInfo, vendorBasePath string) []Ve } // appendVendorManifestsFromDirectory finds vendor manifests in a directory and appends them to the provided list. -func appendVendorManifestsFromDirectory(vendorInfos []VendorInfo, dirPath string) []VendorInfo { +func appendVendorManifestsFromDirectory(vendorInfos []extract.VendorInfo, dirPath string) []extract.VendorInfo { log.Debug("Processing vendor manifests from directory", "path", dirPath) vendorManifests, err := findVendorManifests(dirPath) @@ -249,7 +241,7 @@ func appendVendorManifestsFromDirectory(vendorInfos []VendorInfo, dirPath string } // appendVendorManifestFromFile processes a single vendor manifest file and appends results to the provided list. -func appendVendorManifestFromFile(vendorInfos []VendorInfo, filePath string) []VendorInfo { +func appendVendorManifestFromFile(vendorInfos []extract.VendorInfo, filePath string) []extract.VendorInfo { log.Debug("Processing single vendor manifest file", "path", filePath) vendorManifests := processVendorManifest(filePath) @@ -262,7 +254,7 @@ func appendVendorManifestFromFile(vendorInfos []VendorInfo, filePath string) []V } // processComponent processes a single component and returns a VendorInfo if it has a component manifest. -func processComponent(atmosConfig *schema.AtmosConfiguration, componentName string, componentData interface{}) *VendorInfo { +func processComponent(atmosConfig *schema.AtmosConfiguration, componentName string, componentData interface{}) *extract.VendorInfo { _, ok := componentData.(map[string]interface{}) if !ok { return nil @@ -301,17 +293,17 @@ func processComponent(atmosConfig *schema.AtmosConfiguration, componentName stri } // Create vendor info. 
- return &VendorInfo{ + return &extract.VendorInfo{ Component: componentName, - Type: VendorTypeComponent, + Type: extract.VendorTypeComponent, Manifest: relativeManifestPath, Folder: relativeComponentPath, } } // findComponentManifests finds all component manifests. -func findComponentManifests(atmosConfig *schema.AtmosConfiguration) ([]VendorInfo, error) { - var vendorInfos []VendorInfo +func findComponentManifests(atmosConfig *schema.AtmosConfiguration) ([]extract.VendorInfo, error) { + var vendorInfos []extract.VendorInfo stacksMap, err := exec.ExecuteDescribeStacks(atmosConfig, "", nil, nil, nil, false, false, false, false, nil, nil) if err != nil { @@ -524,8 +516,8 @@ func formatTargetFolder(target, component, version string) string { // processVendorManifest processes a vendor manifest file and returns vendor infos. // If there's an error reading the manifest, it logs the error and returns nil. -func processVendorManifest(path string) []VendorInfo { - var vendorInfos []VendorInfo +func processVendorManifest(path string) []extract.VendorInfo { + var vendorInfos []extract.VendorInfo // Read vendor manifest. vendorManifest, err := readVendorManifest(path) @@ -551,9 +543,9 @@ func processVendorManifest(path string) []VendorInfo { formattedFolder := formatTargetFolder(*target, source.Component, source.Version) // Add to vendor infos. - vendorInfos = append(vendorInfos, VendorInfo{ + vendorInfos = append(vendorInfos, extract.VendorInfo{ Component: source.Component, - Type: VendorTypeVendor, + Type: extract.VendorTypeVendor, Manifest: relativeManifestPath, Folder: formattedFolder, }) @@ -564,8 +556,8 @@ func processVendorManifest(path string) []VendorInfo { } // findVendorManifests finds all vendor manifests. -func findVendorManifests(vendorBasePath string) ([]VendorInfo, error) { - var vendorInfos []VendorInfo +func findVendorManifests(vendorBasePath string) ([]extract.VendorInfo, error) { + var vendorInfos []extract.VendorInfo // Check if vendor base path exists. 
if !utils.FileOrDirExists(vendorBasePath) { @@ -630,14 +622,14 @@ func readVendorManifest(path string) (*schema.AtmosVendorConfig, error) { } // applyVendorFilters applies filters to vendor infos. -func applyVendorFilters(vendorInfos []VendorInfo, stackPattern string) []VendorInfo { +func applyVendorFilters(vendorInfos []extract.VendorInfo, stackPattern string) []extract.VendorInfo { // If no stack pattern, return all vendor infos. if stackPattern == "" { return vendorInfos } // Filter by stack pattern. - var filteredVendorInfos []VendorInfo + var filteredVendorInfos []extract.VendorInfo for _, vendorInfo := range vendorInfos { // Check if component matches stack pattern. if matchesStackPattern(vendorInfo.Component, stackPattern) { diff --git a/pkg/list/list_vendor_format.go b/pkg/list/list_vendor_format.go index 1742fe7a98..b26bdc510d 100644 --- a/pkg/list/list_vendor_format.go +++ b/pkg/list/list_vendor_format.go @@ -5,6 +5,7 @@ import ( "fmt" "strings" + "github.com/cloudposse/atmos/pkg/list/extract" "github.com/cloudposse/atmos/pkg/list/format" "github.com/cloudposse/atmos/pkg/schema" ) @@ -26,7 +27,7 @@ var ( ) // buildVendorRows constructs the slice of row maps for the vendor table. 
-func buildVendorRows(vendorInfos []VendorInfo, columns []schema.ListColumnConfig) []map[string]interface{} { +func buildVendorRows(vendorInfos []extract.VendorInfo, columns []schema.ListColumnConfig) []map[string]interface{} { var rows []map[string]interface{} for _, vi := range vendorInfos { row := make(map[string]interface{}) diff --git a/pkg/list/list_vendor_test.go b/pkg/list/list_vendor_test.go index bfde5e0fae..6c85bbf56e 100644 --- a/pkg/list/list_vendor_test.go +++ b/pkg/list/list_vendor_test.go @@ -9,6 +9,7 @@ import ( "strings" "testing" + "github.com/cloudposse/atmos/pkg/list/extract" "github.com/cloudposse/atmos/pkg/list/format" "github.com/cloudposse/atmos/pkg/schema" "github.com/stretchr/testify/assert" @@ -621,7 +622,7 @@ func TestFormatTargetFolder(t *testing.T) { // TestApplyVendorFilters tests the filtering logic. func TestApplyVendorFilters(t *testing.T) { - initialInfos := []VendorInfo{ + initialInfos := []extract.VendorInfo{ {Component: "vpc", Type: "terraform", Folder: "components/terraform/vpc"}, {Component: "eks", Type: "helmfile", Folder: "components/helmfile/eks"}, {Component: "rds", Type: "terraform", Folder: "components/terraform/rds"}, @@ -632,8 +633,8 @@ func TestApplyVendorFilters(t *testing.T) { testCases := []struct { name string options FilterOptions - input []VendorInfo - expected []VendorInfo + input []extract.VendorInfo + expected []extract.VendorInfo }{ { name: "NoFilters", @@ -645,25 +646,25 @@ func TestApplyVendorFilters(t *testing.T) { name: "FilterComponentExactMatch", options: FilterOptions{StackPattern: "vpc"}, input: initialInfos, - expected: []VendorInfo{initialInfos[0]}, + expected: []extract.VendorInfo{initialInfos[0]}, }, { name: "FilterComponentNoMatch", options: FilterOptions{StackPattern: "nomatch"}, input: initialInfos, - expected: []VendorInfo{}, + expected: []extract.VendorInfo{}, }, { name: "FilterMultiplePatterns", options: FilterOptions{StackPattern: "vpc,eks"}, input: initialInfos, - expected: 
[]VendorInfo{initialInfos[0], initialInfos[1]}, + expected: []extract.VendorInfo{initialInfos[0], initialInfos[1]}, }, { name: "FilterSpecialCaseEcs", options: FilterOptions{StackPattern: "ecs"}, input: initialInfos, - expected: []VendorInfo{initialInfos[4]}, + expected: []extract.VendorInfo{initialInfos[4]}, }, } diff --git a/pkg/list/output/output.go b/pkg/list/output/output.go new file mode 100644 index 0000000000..9d5a669cbf --- /dev/null +++ b/pkg/list/output/output.go @@ -0,0 +1,34 @@ +package output + +import ( + "github.com/cloudposse/atmos/pkg/data" + "github.com/cloudposse/atmos/pkg/list/format" +) + +// Manager routes output to data or ui layer based on format. +type Manager struct { + format format.Format +} + +// New creates an output manager for the specified format. +func New(fmt format.Format) *Manager { + return &Manager{format: fmt} +} + +// Write routes content to the appropriate output stream. +// All list output goes to data.Write() (stdout) for pipeability. +// List output includes JSON, YAML, CSV, TSV, and table formats. +func (m *Manager) Write(content string) error { + // All list formats → stdout (data channel, pipeable) + return data.Write(content) +} + +// IsStructured returns true if the format is structured data (JSON, YAML, CSV, TSV). +// Note: This function is kept for backward compatibility but is no longer used +// for routing output. All list output now goes to the data channel (stdout). 
+func IsStructured(f format.Format) bool { + return f == format.FormatJSON || + f == format.FormatYAML || + f == format.FormatCSV || + f == format.FormatTSV +} diff --git a/pkg/list/output/output_test.go b/pkg/list/output/output_test.go new file mode 100644 index 0000000000..1e010994d3 --- /dev/null +++ b/pkg/list/output/output_test.go @@ -0,0 +1,71 @@ +package output + +import ( + "testing" + + "github.com/cloudposse/atmos/pkg/data" + iolib "github.com/cloudposse/atmos/pkg/io" + "github.com/cloudposse/atmos/pkg/list/format" + "github.com/cloudposse/atmos/pkg/ui" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestNew(t *testing.T) { + manager := New(format.FormatJSON) + assert.NotNil(t, manager) + assert.Equal(t, format.FormatJSON, manager.format) +} + +func TestFormat_IsStructured(t *testing.T) { + tests := []struct { + format format.Format + expected bool + }{ + {format.FormatJSON, true}, + {format.FormatYAML, true}, + {format.FormatCSV, true}, + {format.FormatTSV, true}, + {format.FormatTable, false}, + {format.FormatTemplate, false}, + } + + for _, tt := range tests { + t.Run(string(tt.format), func(t *testing.T) { + result := IsStructured(tt.format) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestManager_Write(t *testing.T) { + // Initialize I/O context and data writer for tests. 
+ ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + tests := []struct { + name string + format format.Format + content string + }{ + {"JSON format", format.FormatJSON, `{"key":"value"}`}, + {"YAML format", format.FormatYAML, "key: value"}, + {"CSV format", format.FormatCSV, "col1,col2"}, + {"TSV format", format.FormatTSV, "col1\tcol2"}, + {"Table format", format.FormatTable, "│ Col1 │ Col2 │"}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + manager := New(tt.format) + + // Note: We can't easily test the actual output routing without + // mocking the global io context, but we can verify the method + // doesn't error. + err := manager.Write(tt.content) + assert.NoError(t, err) + }) + } +} diff --git a/pkg/list/renderer/mock_filter_test.go b/pkg/list/renderer/mock_filter_test.go new file mode 100644 index 0000000000..44d523939a --- /dev/null +++ b/pkg/list/renderer/mock_filter_test.go @@ -0,0 +1,10 @@ +package renderer + +// mockBadFilter is a filter that returns an invalid type for testing error handling. +type mockBadFilter struct{} + +// Apply returns an invalid type to trigger error path in Render. +func (f *mockBadFilter) Apply(data interface{}) (interface{}, error) { + // Return wrong type - string instead of []map[string]any. 
+ return "invalid type", nil +} diff --git a/pkg/list/renderer/renderer.go b/pkg/list/renderer/renderer.go new file mode 100644 index 0000000000..4b1c020a5b --- /dev/null +++ b/pkg/list/renderer/renderer.go @@ -0,0 +1,219 @@ +package renderer + +import ( + "encoding/json" + "fmt" + "strings" + + "gopkg.in/yaml.v3" + + errUtils "github.com/cloudposse/atmos/errors" + "github.com/cloudposse/atmos/pkg/list/column" + "github.com/cloudposse/atmos/pkg/list/filter" + "github.com/cloudposse/atmos/pkg/list/format" + "github.com/cloudposse/atmos/pkg/list/output" + "github.com/cloudposse/atmos/pkg/list/sort" + "github.com/cloudposse/atmos/pkg/terminal" +) + +// Renderer orchestrates the complete list rendering pipeline. +// Pipeline: data → filter → column selection → sort → format → output. +type Renderer struct { + filters []filter.Filter + selector *column.Selector + sorters []*sort.Sorter + format format.Format + delimiter string + output *output.Manager +} + +// New creates a renderer with optional components. +func New( + filters []filter.Filter, + selector *column.Selector, + sorters []*sort.Sorter, + fmt format.Format, + delimiter string, +) *Renderer { + return &Renderer{ + filters: filters, + selector: selector, + sorters: sorters, + format: fmt, + delimiter: delimiter, + output: output.New(fmt), + } +} + +// Render executes the full pipeline and writes output. +func (r *Renderer) Render(data []map[string]any) error { + // Guard against nil column selector. + if r.selector == nil { + return fmt.Errorf("%w: renderer created with nil column selector", errUtils.ErrInvalidConfig) + } + + // Step 1: Apply filters (AND logic). + filtered := data + if len(r.filters) > 0 { + chain := filter.NewChain(r.filters...) 
+ result, err := chain.Apply(filtered) + if err != nil { + return fmt.Errorf("filtering failed: %w", err) + } + var ok bool + filtered, ok = result.([]map[string]any) + if !ok { + return fmt.Errorf("%w: filter returned invalid type: expected []map[string]any, got %T", errUtils.ErrInvalidConfig, result) + } + } + + // Step 2: Extract columns with template evaluation. + headers, rows, err := r.selector.Extract(filtered) + if err != nil { + return fmt.Errorf("column extraction failed: %w", err) + } + + // Step 3: Sort rows. + if len(r.sorters) > 0 { + ms := sort.NewMultiSorter(r.sorters...) + if err := ms.Sort(rows, headers); err != nil { + return fmt.Errorf("sorting failed: %w", err) + } + } + + // Step 4: Format output. + formatted, err := r.formatTable(headers, rows) + if err != nil { + return fmt.Errorf("formatting failed: %w", err) + } + + // Step 5: Write to appropriate stream. + if err := r.output.Write(formatted); err != nil { + return fmt.Errorf("output failed: %w", err) + } + + return nil +} + +// formatTable formats headers and rows into the requested format. +func (r *Renderer) formatTable(headers []string, rows [][]string) (string, error) { + f := r.format + // Default to table format if empty. + if f == "" { + f = format.FormatTable + } + + switch f { + case format.FormatJSON: + return formatJSON(headers, rows) + case format.FormatYAML: + return formatYAML(headers, rows) + case format.FormatCSV: + delim := r.delimiter + if delim == "" { + delim = "," + } + return formatDelimited(headers, rows, delim) + case format.FormatTSV: + delim := r.delimiter + if delim == "" { + delim = "\t" + } + return formatDelimited(headers, rows, delim) + case format.FormatTable: + return formatStyledTableOrPlain(headers, rows), nil + default: + return "", fmt.Errorf("%w: unsupported format: %s", errUtils.ErrInvalidConfig, f) + } +} + +// formatJSON formats headers and rows as JSON array of objects. 
+func formatJSON(headers []string, rows [][]string) (string, error) { + var result []map[string]string + for _, row := range rows { + obj := make(map[string]string) + for i, header := range headers { + if i < len(row) { + obj[header] = row[i] + } + } + result = append(result, obj) + } + + jsonBytes, err := json.MarshalIndent(result, "", " ") + if err != nil { + return "", err + } + return string(jsonBytes), nil +} + +// formatYAML formats headers and rows as YAML array of objects. +func formatYAML(headers []string, rows [][]string) (string, error) { + var result []map[string]string + for _, row := range rows { + obj := make(map[string]string) + for i, header := range headers { + if i < len(row) { + obj[header] = row[i] + } + } + result = append(result, obj) + } + + yamlBytes, err := yaml.Marshal(result) + if err != nil { + return "", err + } + return string(yamlBytes), nil +} + +// formatDelimited formats headers and rows as CSV or TSV. +func formatDelimited(headers []string, rows [][]string, delimiter string) (string, error) { + var lines []string + lines = append(lines, strings.Join(headers, delimiter)) + for _, row := range rows { + lines = append(lines, strings.Join(row, delimiter)) + } + return strings.Join(lines, "\n"), nil +} + +// formatStyledTableOrPlain formats output as a styled table for TTY or plain list when piped. +// When stdout is not a TTY (piped/redirected), outputs plain format without headers for backward compatibility. +// This maintains compatibility with scripts that expect simple line-by-line output. +func formatStyledTableOrPlain(headers []string, rows [][]string) string { + // Check if stdout is a TTY + term := terminal.New() + isTTY := term.IsTTY(terminal.Stdout) + + if !isTTY { + // Piped/redirected output - use plain format (no headers, no borders) + // This matches the old behavior for backward compatibility with scripts. 
+ return formatPlainList(headers, rows) + } + + // Interactive terminal - use styled table + return format.CreateStyledTable(headers, rows) +} + +// formatPlainList formats rows as a simple list (one value per line, no headers). +// Used when output is piped to maintain backward compatibility with scripts. +func formatPlainList(headers []string, rows [][]string) string { + var lines []string + + // For single-column output, just output the values (most common case for list commands) + if len(headers) == 1 { + for _, row := range rows { + if len(row) > 0 { + lines = append(lines, row[0]) + } + } + } else { + // Multi-column output when piped - use tab-separated values without headers + // This provides structured data that can be processed by scripts + for _, row := range rows { + lines = append(lines, strings.Join(row, "\t")) + } + } + + return strings.Join(lines, "\n") + "\n" +} diff --git a/pkg/list/renderer/renderer_integration_test.go b/pkg/list/renderer/renderer_integration_test.go new file mode 100644 index 0000000000..48dbb92999 --- /dev/null +++ b/pkg/list/renderer/renderer_integration_test.go @@ -0,0 +1,166 @@ +package renderer + +import ( + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + "github.com/cloudposse/atmos/pkg/data" + iolib "github.com/cloudposse/atmos/pkg/io" + "github.com/cloudposse/atmos/pkg/list/column" + "github.com/cloudposse/atmos/pkg/list/format" + "github.com/cloudposse/atmos/pkg/list/sort" + "github.com/cloudposse/atmos/pkg/ui" +) + +// TestRenderer_EmptyFormat tests that empty format defaults to table. +func TestRenderer_EmptyFormat(t *testing.T) { + // Initialize I/O context. 
+ ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + data := []map[string]any{ + {"stack": "plat-ue2-dev", "component": "vpc"}, + {"stack": "plat-ue2-prod", "component": "vpc"}, + } + + columns := []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + {Name: "Component", Value: "{{ .component }}"}, + } + + selector, err := column.NewSelector(columns, column.BuildColumnFuncMap()) + require.NoError(t, err) + + // Create renderer with empty format string. + r := New(nil, selector, nil, "", "") + + // Should not error - should default to table format. + err = r.Render(data) + assert.NoError(t, err, "Empty format should default to table") +} + +// TestRenderer_AllFormats tests all supported formats. +func TestRenderer_AllFormats(t *testing.T) { + // Initialize I/O context. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"stack": "plat-ue2-dev", "component": "vpc"}, + {"stack": "plat-ue2-prod", "component": "eks"}, + } + + columns := []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + {Name: "Component", Value: "{{ .component }}"}, + } + + testCases := []struct { + name string + format format.Format + }{ + {"table format", format.FormatTable}, + {"json format", format.FormatJSON}, + {"yaml format", format.FormatYAML}, + {"csv format", format.FormatCSV}, + {"tsv format", format.FormatTSV}, + {"empty format (defaults to table)", ""}, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + selector, err := column.NewSelector(columns, column.BuildColumnFuncMap()) + require.NoError(t, err) + + r := New(nil, selector, nil, tc.format, "") + err = r.Render(testData) + assert.NoError(t, err) + }) + } +} + +// TestRenderer_WithSorting tests renderer with sorting. +func TestRenderer_WithSorting(t *testing.T) { + // Initialize I/O context. 
+ ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"stack": "plat-ue2-prod", "component": "vpc"}, + {"stack": "plat-ue2-dev", "component": "vpc"}, + {"stack": "plat-uw2-dev", "component": "eks"}, + } + + columns := []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + {Name: "Component", Value: "{{ .component }}"}, + } + + selector, err := column.NewSelector(columns, column.BuildColumnFuncMap()) + require.NoError(t, err) + + // Sort by stack ascending. + sorters := []*sort.Sorter{ + sort.NewSorter("Stack", sort.Ascending), + } + + r := New(nil, selector, sorters, format.FormatJSON, "") + err = r.Render(testData) + assert.NoError(t, err) +} + +// TestRenderer_EmptyData tests renderer with empty data. +func TestRenderer_EmptyData(t *testing.T) { + // Initialize I/O context. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{} + + columns := []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + } + + selector, err := column.NewSelector(columns, column.BuildColumnFuncMap()) + require.NoError(t, err) + + r := New(nil, selector, nil, format.FormatTable, "") + err = r.Render(testData) + assert.NoError(t, err) +} + +// TestRenderer_InvalidFormat tests unsupported format handling. +func TestRenderer_InvalidFormat(t *testing.T) { + // Initialize I/O context. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"stack": "plat-ue2-dev"}, + } + + columns := []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + } + + selector, err := column.NewSelector(columns, column.BuildColumnFuncMap()) + require.NoError(t, err) + + // Use invalid format. 
+ r := New(nil, selector, nil, format.Format("invalid"), "") + err = r.Render(testData) + assert.Error(t, err) + assert.Contains(t, err.Error(), "unsupported format") +} diff --git a/pkg/list/renderer/renderer_test.go b/pkg/list/renderer/renderer_test.go new file mode 100644 index 0000000000..bb7e3a1e4d --- /dev/null +++ b/pkg/list/renderer/renderer_test.go @@ -0,0 +1,491 @@ +package renderer + +import ( + "testing" + "text/template" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + "github.com/cloudposse/atmos/pkg/data" + iolib "github.com/cloudposse/atmos/pkg/io" + "github.com/cloudposse/atmos/pkg/list/column" + "github.com/cloudposse/atmos/pkg/list/filter" + "github.com/cloudposse/atmos/pkg/list/format" + "github.com/cloudposse/atmos/pkg/list/sort" + "github.com/cloudposse/atmos/pkg/ui" +) + +func TestNew(t *testing.T) { + configs := []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + } + selector, err := column.NewSelector(configs, template.FuncMap{}) + require.NoError(t, err) + + r := New( + []filter.Filter{}, + selector, + []*sort.Sorter{}, + format.FormatJSON, + "", + ) + + assert.NotNil(t, r) + assert.NotNil(t, r.output) + assert.Equal(t, format.FormatJSON, r.format) +} + +func TestRenderer_Render_Complete(t *testing.T) { + // Initialize I/O context for output tests. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"component": "vpc", "stack": "prod", "enabled": true}, + {"component": "eks", "stack": "dev", "enabled": false}, + {"component": "rds", "stack": "prod", "enabled": true}, + {"component": "s3", "stack": "staging", "enabled": true}, + } + + // Column configuration. 
+ configs := []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + {Name: "Stack", Value: "{{ .stack }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + // Filter: only prod stack. + filters := []filter.Filter{ + filter.NewColumnFilter("stack", "prod"), + } + + // Sort: by component ascending. + sorters := []*sort.Sorter{ + sort.NewSorter("Component", sort.Ascending), + } + + r := New(filters, selector, sorters, format.FormatJSON, "") + + err = r.Render(testData) + assert.NoError(t, err) +} + +func TestRenderer_Render_NoFilters(t *testing.T) { + // Initialize I/O context for output tests. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"component": "vpc", "stack": "prod"}, + {"component": "eks", "stack": "dev"}, + } + + configs := []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + r := New(nil, selector, nil, format.FormatJSON, "") + + err = r.Render(testData) + assert.NoError(t, err) +} + +func TestRenderer_Render_NoSorters(t *testing.T) { + // Initialize I/O context for output tests. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"component": "vpc"}, + {"component": "eks"}, + } + + configs := []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + r := New(nil, selector, nil, format.FormatYAML, "") + + err = r.Render(testData) + assert.NoError(t, err) +} + +func TestRenderer_Render_MultipleFilters(t *testing.T) { + // Initialize I/O context for output tests. 
+ ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"component": "vpc", "stack": "prod", "enabled": true}, + {"component": "eks", "stack": "dev", "enabled": false}, + {"component": "rds", "stack": "prod", "enabled": true}, + {"component": "s3", "stack": "prod", "enabled": false}, + } + + configs := []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + {Name: "Stack", Value: "{{ .stack }}"}, + {Name: "Enabled", Value: "{{ .enabled }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + trueVal := true + + // Filter: prod stack AND enabled=true. + filters := []filter.Filter{ + filter.NewColumnFilter("stack", "prod"), + filter.NewBoolFilter("enabled", &trueVal), + } + + r := New(filters, selector, nil, format.FormatCSV, "") + + err = r.Render(testData) + assert.NoError(t, err) +} + +func TestRenderer_Render_MultiSorter(t *testing.T) { + // Initialize I/O context for output tests. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"component": "vpc", "stack": "prod"}, + {"component": "eks", "stack": "dev"}, + {"component": "vpc", "stack": "dev"}, + {"component": "eks", "stack": "prod"}, + } + + configs := []column.Config{ + {Name: "Stack", Value: "{{ .stack }}"}, + {Name: "Component", Value: "{{ .component }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + // Sort by stack ascending, then component ascending. 
+ sorters := []*sort.Sorter{ + sort.NewSorter("Stack", sort.Ascending), + sort.NewSorter("Component", sort.Ascending), + } + + r := New(nil, selector, sorters, format.FormatTSV, "") + + err = r.Render(testData) + assert.NoError(t, err) +} + +func TestRenderer_Render_EmptyData(t *testing.T) { + // Initialize I/O context for output tests. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{} + + configs := []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + r := New(nil, selector, nil, format.FormatJSON, "") + + err = r.Render(testData) + assert.NoError(t, err) +} + +func TestRenderer_Render_TableFormat(t *testing.T) { + // Initialize I/O context for output tests. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"component": "vpc", "stack": "prod"}, + {"component": "eks", "stack": "dev"}, + } + + configs := []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + {Name: "Stack", Value: "{{ .stack }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + r := New(nil, selector, nil, format.FormatTable, "") + + err = r.Render(testData) + assert.NoError(t, err) +} + +func TestRenderer_Render_InvalidColumnTemplate(t *testing.T) { + // Initialize I/O context for output tests. 
+ ioCtx, err := iolib.NewContext()
+ require.NoError(t, err)
+ data.InitWriter(ioCtx)
+ ui.InitFormatter(ioCtx)
+
+ testData := []map[string]any{
+ {"component": "vpc"},
+ }
+
+ configs := []column.Config{
+ {Name: "Component", Value: "{{ .component }}"},
+ {Name: "Bad", Value: "{{ .nonexistent.nested.field }}"}, // Will produce an empty value.
+ }
+ selector, err := column.NewSelector(configs, column.BuildColumnFuncMap())
+ require.NoError(t, err)
+
+ r := New(nil, selector, nil, format.FormatJSON, "")
+
+ // Should still succeed - template returns "" for missing fields.
+ err = r.Render(testData)
+ assert.NoError(t, err)
+}
+
+func TestRenderer_Render_FilterReturnsNoResults(t *testing.T) {
+ // Initialize I/O context for output tests.
+ ioCtx, err := iolib.NewContext()
+ require.NoError(t, err)
+ data.InitWriter(ioCtx)
+ ui.InitFormatter(ioCtx)
+
+ testData := []map[string]any{
+ {"component": "vpc", "stack": "prod"},
+ {"component": "eks", "stack": "dev"},
+ }
+
+ configs := []column.Config{
+ {Name: "Component", Value: "{{ .component }}"},
+ }
+ selector, err := column.NewSelector(configs, column.BuildColumnFuncMap())
+ require.NoError(t, err)
+
+ // Filter that matches nothing.
+ filters := []filter.Filter{
+ filter.NewColumnFilter("stack", "nonexistent"),
+ }
+
+ r := New(filters, selector, nil, format.FormatJSON, "")
+
+ err = r.Render(testData)
+ assert.NoError(t, err)
+}
+
+func TestRenderer_Render_GlobFilter(t *testing.T) {
+ // Initialize I/O context for output tests.
+ ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"component": "vpc", "stack": "plat-ue2-prod"}, + {"component": "eks", "stack": "plat-ue2-dev"}, + {"component": "rds", "stack": "plat-uw2-prod"}, + } + + configs := []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + {Name: "Stack", Value: "{{ .stack }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + // Filter: plat-*-prod pattern. + globFilter, err := filter.NewGlobFilter("stack", "plat-*-prod") + require.NoError(t, err) + + filters := []filter.Filter{globFilter} + + r := New(filters, selector, nil, format.FormatJSON, "") + + err = r.Render(testData) + assert.NoError(t, err) +} + +func TestRenderer_Render_CompleteWorkflow(t *testing.T) { + // Initialize I/O context for output tests. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + // Simulate realistic component list data. + testData := []map[string]any{ + {"component": "vpc", "stack": "plat-ue2-prod", "type": "real", "enabled": true}, + {"component": "eks", "stack": "plat-ue2-dev", "type": "real", "enabled": false}, + {"component": "rds", "stack": "plat-ue2-prod", "type": "real", "enabled": true}, + {"component": "s3", "stack": "plat-uw2-prod", "type": "abstract", "enabled": true}, + {"component": "lambda", "stack": "plat-ue2-staging", "type": "real", "enabled": false}, + } + + // Column configuration with templates. + configs := []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + {Name: "Stack", Value: "{{ .stack }}"}, + {Name: "Type", Value: "{{ .type }}"}, + {Name: "Enabled", Value: "{{ .enabled }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + trueVal := true + + // Filters: real components, prod stacks, enabled only. 
+ globFilter, err := filter.NewGlobFilter("stack", "*-prod") + require.NoError(t, err) + + filters := []filter.Filter{ + filter.NewColumnFilter("type", "real"), + globFilter, + filter.NewBoolFilter("enabled", &trueVal), + } + + // Sort: by component ascending. + sorters := []*sort.Sorter{ + sort.NewSorter("Component", sort.Ascending), + } + + r := New(filters, selector, sorters, format.FormatJSON, "") + + err = r.Render(testData) + assert.NoError(t, err) +} + +func TestRenderer_Render_SortError(t *testing.T) { + // Initialize I/O context for output tests. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"component": "vpc"}, + } + + configs := []column.Config{ + {Name: "Component", Value: "{{ .component }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + // Sort by non-existent column. + sorters := []*sort.Sorter{ + sort.NewSorter("NonExistent", sort.Ascending), + } + + r := New(nil, selector, sorters, format.FormatJSON, "") + + err = r.Render(testData) + assert.Error(t, err) + assert.Contains(t, err.Error(), "sorting failed") +} + +func TestRenderer_Render_AllFormats(t *testing.T) { + // Initialize I/O context for output tests. 
+ ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"name": "test1", "value": "1"}, + {"name": "test2", "value": "2"}, + } + + configs := []column.Config{ + {Name: "Name", Value: "{{ .name }}"}, + {Name: "Value", Value: "{{ .value }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + formats := []format.Format{ + format.FormatJSON, + format.FormatYAML, + format.FormatCSV, + format.FormatTSV, + format.FormatTable, + } + + for _, f := range formats { + t.Run(string(f), func(t *testing.T) { + r := New(nil, selector, nil, f, "") + err := r.Render(testData) + assert.NoError(t, err) + }) + } +} + +func TestRenderer_Render_UnsupportedFormat(t *testing.T) { + // Initialize I/O context for output tests. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"name": "test"}, + } + + configs := []column.Config{ + {Name: "Name", Value: "{{ .name }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + // Use an unsupported format. + r := New(nil, selector, nil, format.Format("unsupported"), "") + + err = r.Render(testData) + assert.Error(t, err) + assert.Contains(t, err.Error(), "formatting failed") + assert.Contains(t, err.Error(), "unsupported format") +} + +func TestRenderer_Render_FilterReturnsInvalidType(t *testing.T) { + // Initialize I/O context for output tests. + ioCtx, err := iolib.NewContext() + require.NoError(t, err) + data.InitWriter(ioCtx) + ui.InitFormatter(ioCtx) + + testData := []map[string]any{ + {"name": "test"}, + } + + configs := []column.Config{ + {Name: "Name", Value: "{{ .name }}"}, + } + selector, err := column.NewSelector(configs, column.BuildColumnFuncMap()) + require.NoError(t, err) + + // Use a filter that returns invalid type. 
+ badFilter := &mockBadFilter{} + + r := New([]filter.Filter{badFilter}, selector, nil, format.FormatJSON, "") + + err = r.Render(testData) + assert.Error(t, err) + assert.Contains(t, err.Error(), "filter returned invalid type") +} diff --git a/pkg/list/sort/sort.go b/pkg/list/sort/sort.go new file mode 100644 index 0000000000..a7119fcee0 --- /dev/null +++ b/pkg/list/sort/sort.go @@ -0,0 +1,237 @@ +package sort + +import ( + "fmt" + "sort" + "strconv" + "strings" + + errUtils "github.com/cloudposse/atmos/errors" +) + +// Order defines sort direction. +type Order int + +const ( + // Ascending sort order. + Ascending Order = iota + // Descending sort order. + Descending +) + +// DataType defines the type of data for type-aware sorting. +type DataType int + +const ( + // String data type (lexicographic sorting). + String DataType = iota + // Number data type (numeric sorting). + Number + // Boolean data type (false < true). + Boolean +) + +// Sorter handles single column sorting. +type Sorter struct { + Column string + Order Order + DataType DataType +} + +// MultiSorter handles multi-column sorting with precedence. +type MultiSorter struct { + sorters []*Sorter +} + +// NewSorter creates a sorter for a single column. +// DataType is set to String by default, use WithDataType() to override. +func NewSorter(column string, order Order) *Sorter { + return &Sorter{ + Column: column, + Order: order, + DataType: String, + } +} + +// WithDataType sets explicit data type for type-aware sorting. +func (s *Sorter) WithDataType(dt DataType) *Sorter { + s.DataType = dt + return s +} + +// Sort sorts rows in-place by the column. 
+func (s *Sorter) Sort(rows [][]string, headers []string) error {
+ // Find column index
+ colIdx := -1
+ for i, h := range headers {
+ if h == s.Column {
+ colIdx = i
+ break
+ }
+ }
+
+ if colIdx == -1 {
+ return fmt.Errorf("%w: column %q not found in headers", errUtils.ErrInvalidConfig, s.Column)
+ }
+
+ // Sort with type-aware comparison
+ sort.SliceStable(rows, func(i, j int) bool {
+ if colIdx >= len(rows[i]) || colIdx >= len(rows[j]) {
+ return false
+ }
+
+ valI := rows[i][colIdx]
+ valJ := rows[j][colIdx]
+
+ cmp := s.compare(valI, valJ)
+
+ if s.Order == Ascending {
+ return cmp < 0
+ }
+ return cmp > 0
+ })
+
+ return nil
+}
+
+// compare performs type-aware comparison.
+// Returns: -1 if a < b, 0 if a == b, 1 if a > b.
+func (s *Sorter) compare(a, b string) int {
+ switch s.DataType {
+ case Number:
+ return compareNumeric(a, b)
+ case Boolean:
+ return compareBoolean(a, b)
+ default: // String
+ return strings.Compare(a, b)
+ }
+}
+
+const (
+ // floatBitSize is the bit size for ParseFloat (64-bit float).
+ floatBitSize = 64
+)
+
+// compareNumeric compares two strings as numbers.
+func compareNumeric(a, b string) int {
+ numA, errA := strconv.ParseFloat(a, floatBitSize)
+ numB, errB := strconv.ParseFloat(b, floatBitSize)
+
+ // Non-numeric values sort last
+ if errA != nil && errB != nil {
+ return strings.Compare(a, b)
+ }
+ if errA != nil {
+ return 1
+ }
+ if errB != nil {
+ return -1
+ }
+
+ if numA < numB {
+ return -1
+ }
+ if numA > numB {
+ return 1
+ }
+ return 0
+}
+
+// compareBoolean compares two strings as booleans (false < true).
+func compareBoolean(a, b string) int {
+ boolA := parseBoolean(a)
+ boolB := parseBoolean(b)
+
+ if !boolA && boolB {
+ return -1
+ }
+ if boolA && !boolB {
+ return 1
+ }
+ return 0
+}
+
+// parseBoolean converts string to boolean.
+func parseBoolean(s string) bool { + lower := strings.ToLower(strings.TrimSpace(s)) + return lower == "true" || lower == "yes" || lower == "1" || lower == "✓" +} + +// NewMultiSorter creates a multi-column sorter. +// Sorters are applied in order (primary, secondary, etc.). +func NewMultiSorter(sorters ...*Sorter) *MultiSorter { + return &MultiSorter{sorters: sorters} +} + +// Sort applies all sorters in order with stable sorting. +func (ms *MultiSorter) Sort(rows [][]string, headers []string) error { + // Validate all sorters + for i, s := range ms.sorters { + colIdx := -1 + for j, h := range headers { + if h == s.Column { + colIdx = j + break + } + } + if colIdx == -1 { + return fmt.Errorf("%w: sorter %d: column %q not found", errUtils.ErrInvalidConfig, i, s.Column) + } + } + + // Apply sorters in reverse order for stable multi-column sorting + // This ensures primary sort takes precedence + for i := len(ms.sorters) - 1; i >= 0; i-- { + if err := ms.sorters[i].Sort(rows, headers); err != nil { + return err + } + } + + return nil +} + +// ParseSortSpec parses CLI sort specification. +// Format: "column1:asc,column2:desc" or "column1:ascending,column2:descending". +func ParseSortSpec(spec string) ([]*Sorter, error) { + if spec == "" { + return nil, nil + } + + parts := strings.Split(spec, ",") + var sorters []*Sorter + + for _, part := range parts { + part = strings.TrimSpace(part) + if part == "" { + continue + } + + // Split by colon + fields := strings.SplitN(part, ":", 2) + if len(fields) != 2 { + return nil, fmt.Errorf("%w: invalid sort spec %q, expected format 'column:order'", errUtils.ErrInvalidConfig, part) + } + + column := strings.TrimSpace(fields[0]) + // Normalize column name: capitalize first letter for case-insensitive matching. + // This allows users to use "stack:asc" instead of requiring "Stack:asc". 
+ if len(column) > 0 { + column = strings.ToUpper(column[:1]) + column[1:] + } + orderStr := strings.ToLower(strings.TrimSpace(fields[1])) + + var order Order + switch orderStr { + case "asc", "ascending": + order = Ascending + case "desc", "descending": + order = Descending + default: + return nil, fmt.Errorf("%w: invalid sort order %q, expected 'asc' or 'desc'", errUtils.ErrInvalidConfig, orderStr) + } + + sorters = append(sorters, NewSorter(column, order)) + } + + return sorters, nil +} diff --git a/pkg/list/sort/sort_test.go b/pkg/list/sort/sort_test.go new file mode 100644 index 0000000000..18232391d6 --- /dev/null +++ b/pkg/list/sort/sort_test.go @@ -0,0 +1,356 @@ +package sort + +import ( + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + errUtils "github.com/cloudposse/atmos/errors" +) + +func TestNewSorter(t *testing.T) { + sorter := NewSorter("Component", Ascending) + assert.NotNil(t, sorter) + assert.Equal(t, "Component", sorter.Column) + assert.Equal(t, Ascending, sorter.Order) + assert.Equal(t, String, sorter.DataType) // default +} + +func TestSorter_WithDataType(t *testing.T) { + sorter := NewSorter("Port", Ascending).WithDataType(Number) + assert.Equal(t, Number, sorter.DataType) + + sorter = NewSorter("Enabled", Ascending).WithDataType(Boolean) + assert.Equal(t, Boolean, sorter.DataType) +} + +func TestSorter_Sort_Ascending(t *testing.T) { + headers := []string{"Component", "Stack"} + rows := [][]string{ + {"vpc", "prod"}, + {"eks", "dev"}, + {"rds", "staging"}, + } + + sorter := NewSorter("Component", Ascending) + err := sorter.Sort(rows, headers) + + require.NoError(t, err) + assert.Equal(t, "eks", rows[0][0]) + assert.Equal(t, "rds", rows[1][0]) + assert.Equal(t, "vpc", rows[2][0]) +} + +func TestSorter_Sort_Descending(t *testing.T) { + headers := []string{"Component", "Stack"} + rows := [][]string{ + {"vpc", "prod"}, + {"eks", "dev"}, + {"rds", "staging"}, + } + + sorter := NewSorter("Component", 
Descending) + err := sorter.Sort(rows, headers) + + require.NoError(t, err) + assert.Equal(t, "vpc", rows[0][0]) + assert.Equal(t, "rds", rows[1][0]) + assert.Equal(t, "eks", rows[2][0]) +} + +func TestSorter_Sort_Numeric(t *testing.T) { + headers := []string{"Port", "Service"} + rows := [][]string{ + {"8080", "app"}, + {"443", "https"}, + {"80", "http"}, + {"9090", "metrics"}, + } + + sorter := NewSorter("Port", Ascending).WithDataType(Number) + err := sorter.Sort(rows, headers) + + require.NoError(t, err) + assert.Equal(t, "80", rows[0][0]) + assert.Equal(t, "443", rows[1][0]) + assert.Equal(t, "8080", rows[2][0]) + assert.Equal(t, "9090", rows[3][0]) +} + +func TestSorter_Sort_Boolean(t *testing.T) { + headers := []string{"Component", "Enabled"} + rows := [][]string{ + {"vpc", "true"}, + {"eks", "false"}, + {"rds", "yes"}, + {"s3", "no"}, + } + + sorter := NewSorter("Enabled", Ascending).WithDataType(Boolean) + err := sorter.Sort(rows, headers) + + require.NoError(t, err) + // false values first (eks, s3), then true values (vpc, rds) + assert.Contains(t, []string{"eks", "s3"}, rows[0][0]) + assert.Contains(t, []string{"eks", "s3"}, rows[1][0]) + assert.Contains(t, []string{"vpc", "rds"}, rows[2][0]) + assert.Contains(t, []string{"vpc", "rds"}, rows[3][0]) +} + +func TestSorter_Sort_ColumnNotFound(t *testing.T) { + headers := []string{"Component"} + rows := [][]string{{"vpc"}} + + sorter := NewSorter("NonExistent", Ascending) + err := sorter.Sort(rows, headers) + + require.Error(t, err) + assert.ErrorIs(t, err, errUtils.ErrInvalidConfig) +} + +func TestSorter_Sort_EmptyRows(t *testing.T) { + headers := []string{"Component"} + rows := [][]string{} + + sorter := NewSorter("Component", Ascending) + err := sorter.Sort(rows, headers) + + require.NoError(t, err) + assert.Empty(t, rows) +} + +func TestCompareNumeric(t *testing.T) { + tests := []struct { + name string + a string + b string + expected int + }{ + {"a < b", "5", "10", -1}, + {"a > b", "10", "5", 1}, + {"a 
== b", "5", "5", 0}, + {"decimal a < b", "1.5", "2.5", -1}, + {"negative numbers", "-5", "5", -1}, + {"non-numeric a", "abc", "5", 1}, + {"non-numeric b", "5", "abc", -1}, + {"both non-numeric", "abc", "xyz", -1}, // lexicographic fallback + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := compareNumeric(tt.a, tt.b) + switch { + case tt.expected < 0: + assert.Less(t, result, 0) + case tt.expected > 0: + assert.Greater(t, result, 0) + default: + assert.Equal(t, 0, result) + } + }) + } +} + +func TestCompareBoolean(t *testing.T) { + tests := []struct { + name string + a string + b string + expected int + }{ + {"false < true", "false", "true", -1}, + {"true > false", "true", "false", 1}, + {"true == true", "true", "true", 0}, + {"false == false", "false", "false", 0}, + {"yes == true", "yes", "true", 0}, + {"no < yes", "no", "yes", -1}, + {"checkmark true", "✓", "✗", 1}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + result := compareBoolean(tt.a, tt.b) + switch { + case tt.expected < 0: + assert.Less(t, result, 0) + case tt.expected > 0: + assert.Greater(t, result, 0) + default: + assert.Equal(t, 0, result) + } + }) + } +} + +func TestParseBoolean(t *testing.T) { + tests := []struct { + input string + expected bool + }{ + {"true", true}, + {"True", true}, + {"TRUE", true}, + {"yes", true}, + {"Yes", true}, + {"1", true}, + {"✓", true}, + {"false", false}, + {"False", false}, + {"no", false}, + {"0", false}, + {"✗", false}, + {"", false}, + {"invalid", false}, + } + + for _, tt := range tests { + t.Run(tt.input, func(t *testing.T) { + result := parseBoolean(tt.input) + assert.Equal(t, tt.expected, result) + }) + } +} + +func TestNewMultiSorter(t *testing.T) { + sorter1 := NewSorter("Col1", Ascending) + sorter2 := NewSorter("Col2", Descending) + + ms := NewMultiSorter(sorter1, sorter2) + assert.NotNil(t, ms) + assert.Len(t, ms.sorters, 2) +} + +func TestMultiSorter_Sort(t *testing.T) { + headers := 
[]string{"Stack", "Component", "Region"} + rows := [][]string{ + {"prod", "vpc", "us-east-1"}, + {"dev", "vpc", "us-west-2"}, + {"prod", "eks", "us-east-1"}, + {"dev", "eks", "us-west-2"}, + } + + // Sort by Stack (asc), then Component (asc) + ms := NewMultiSorter( + NewSorter("Stack", Ascending), + NewSorter("Component", Ascending), + ) + + err := ms.Sort(rows, headers) + require.NoError(t, err) + + // Expected order: + // dev, eks, us-west-2 + // dev, vpc, us-west-2 + // prod, eks, us-east-1 + // prod, vpc, us-east-1 + assert.Equal(t, "dev", rows[0][0]) + assert.Equal(t, "eks", rows[0][1]) + assert.Equal(t, "dev", rows[1][0]) + assert.Equal(t, "vpc", rows[1][1]) + assert.Equal(t, "prod", rows[2][0]) + assert.Equal(t, "eks", rows[2][1]) + assert.Equal(t, "prod", rows[3][0]) + assert.Equal(t, "vpc", rows[3][1]) +} + +func TestMultiSorter_Sort_ColumnNotFound(t *testing.T) { + headers := []string{"Component"} + rows := [][]string{{"vpc"}} + + ms := NewMultiSorter( + NewSorter("Component", Ascending), + NewSorter("NonExistent", Ascending), + ) + + err := ms.Sort(rows, headers) + require.Error(t, err) + assert.ErrorIs(t, err, errUtils.ErrInvalidConfig) +} + +func TestParseSortSpec(t *testing.T) { + tests := []struct { + name string + spec string + expectedCount int + expectErr bool + errType error + }{ + {"single column asc", "Component:asc", 1, false, nil}, + {"single column desc", "Component:desc", 1, false, nil}, + {"multiple columns", "Stack:asc,Component:desc", 2, false, nil}, + {"full words", "Stack:ascending,Component:descending", 2, false, nil}, + {"empty spec", "", 0, false, nil}, + {"whitespace handling", " Stack : asc , Component : desc ", 2, false, nil}, + {"missing colon", "Component", 0, true, errUtils.ErrInvalidConfig}, + {"invalid order", "Component:invalid", 0, true, errUtils.ErrInvalidConfig}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + sorters, err := ParseSortSpec(tt.spec) + + if tt.expectErr { + require.Error(t, err) 
+ if tt.errType != nil { + assert.ErrorIs(t, err, tt.errType) + } + return + } + + require.NoError(t, err) + if tt.expectedCount == 0 { + assert.Nil(t, sorters) + return + } + + assert.Len(t, sorters, tt.expectedCount) + }) + } +} + +func TestParseSortSpec_OrderParsing(t *testing.T) { + tests := []struct { + spec string + expectedOrder Order + }{ + {"Col:asc", Ascending}, + {"Col:ascending", Ascending}, + {"Col:desc", Descending}, + {"Col:descending", Descending}, + } + + for _, tt := range tests { + t.Run(tt.spec, func(t *testing.T) { + sorters, err := ParseSortSpec(tt.spec) + require.NoError(t, err) + require.Len(t, sorters, 1) + assert.Equal(t, tt.expectedOrder, sorters[0].Order) + }) + } +} + +func TestSorter_Sort_StableSort(t *testing.T) { + // Test that sorting is stable (preserves original order for equal elements) + headers := []string{"Priority", "Name"} + rows := [][]string{ + {"1", "Alice"}, + {"2", "Bob"}, + {"1", "Charlie"}, + {"2", "Diana"}, + } + + sorter := NewSorter("Priority", Ascending) + err := sorter.Sort(rows, headers) + + require.NoError(t, err) + // Items with same priority should maintain original order + assert.Equal(t, "1", rows[0][0]) + assert.Equal(t, "Alice", rows[0][1]) + assert.Equal(t, "1", rows[1][0]) + assert.Equal(t, "Charlie", rows[1][1]) + assert.Equal(t, "2", rows[2][0]) + assert.Equal(t, "Bob", rows[2][1]) + assert.Equal(t, "2", rows[3][0]) + assert.Equal(t, "Diana", rows[3][1]) +} diff --git a/pkg/list/tree/types.go b/pkg/list/tree/types.go new file mode 100644 index 0000000000..56fa92887e --- /dev/null +++ b/pkg/list/tree/types.go @@ -0,0 +1,9 @@ +package tree + +// ImportNode represents a node in the import tree with recursive children. 
+type ImportNode struct { + Path string + Children []*ImportNode + Circular bool // True if this node creates a circular reference + ComponentFolder string // The resolved component folder path (e.g., "components/terraform/vpc") +} diff --git a/pkg/schema/schema.go b/pkg/schema/schema.go index 9d3b7f97b0..0888f4e35c 100644 --- a/pkg/schema/schema.go +++ b/pkg/schema/schema.go @@ -410,6 +410,9 @@ type Components struct { Helmfile Helmfile `yaml:"helmfile" json:"helmfile" mapstructure:"helmfile"` Packer Packer `yaml:"packer" json:"packer" mapstructure:"packer"` + // List configuration for component listing. + List ListConfig `yaml:"list,omitempty" json:"list,omitempty" mapstructure:"list"` + // Dynamic plugin component types. // Uses mapstructure:",remain" to capture all unmapped fields from the YAML/JSON. // This allows new component types (like mock, pulumi, cdk) to be added without schema changes. @@ -444,6 +447,7 @@ type Stacks struct { ExcludedPaths []string `yaml:"excluded_paths" json:"excluded_paths" mapstructure:"excluded_paths"` NamePattern string `yaml:"name_pattern" json:"name_pattern" mapstructure:"name_pattern"` NameTemplate string `yaml:"name_template" json:"name_template" mapstructure:"name_template"` + List ListConfig `yaml:"list,omitempty" json:"list,omitempty" mapstructure:"list"` Inherit StacksInherit `yaml:"inherit,omitempty" json:"inherit,omitempty" mapstructure:"inherit"` } @@ -974,4 +978,5 @@ type ListConfig struct { type ListColumnConfig struct { Name string `yaml:"name" json:"name" mapstructure:"name"` Value string `yaml:"value" json:"value" mapstructure:"value"` + Width int `yaml:"width,omitempty" json:"width,omitempty" mapstructure:"width"` } diff --git a/pkg/store/errors.go b/pkg/store/errors.go index 9fea015823..f4c9a28172 100644 --- a/pkg/store/errors.go +++ b/pkg/store/errors.go @@ -20,7 +20,7 @@ var ( // AWS SSM specific errors. 
ErrRegionRequired = errors.New("region is required in ssm store configuration") - ErrLoadAWSConfig = errors.New("failed to load AWS configuration") + ErrLoadAWSConfig = errors.New("failed to load AWS config") ErrSetParameter = errors.New("failed to set parameter") ErrGetParameter = errors.New("failed to get parameter") diff --git a/scripts/check-claude-md-size.sh b/scripts/check-claude-md-size.sh new file mode 100755 index 0000000000..9f863be91e --- /dev/null +++ b/scripts/check-claude-md-size.sh @@ -0,0 +1,70 @@ +#!/bin/bash +# Check CLAUDE.md and agent file sizes to ensure they stay under the limit. + +set -e + +MAX_SIZE=40000 # 40KB limit for CLAUDE.md files +AGENT_MAX_SIZE=25000 # 25KB limit for agent files +EXIT_CODE=0 + +# Check all CLAUDE.md files in the repository. +for file in CLAUDE.md .conductor/*/CLAUDE.md; do + if [ ! -f "$file" ]; then + continue + fi + + size=$(wc -c < "$file" | tr -d ' ') + + if [ "$size" -gt "$MAX_SIZE" ]; then + over=$((size - MAX_SIZE)) + percent=$((over * 100 / MAX_SIZE)) + + echo "❌ CLAUDE.md Too Large" + echo "The modified $file exceeds the $MAX_SIZE byte size limit:" + echo "" + echo "$file: $size bytes (over by $over bytes, ~${percent}%)" + echo "" + echo "Action needed: Please refactor the oversized CLAUDE.md file. Consider:" + echo "" + echo " • Removing verbose explanations" + echo " • Consolidating redundant examples" + echo " • Keeping only essential requirements" + echo " • Moving detailed guides to separate docs in docs/ or docs/prd/" + echo "" + echo "All MANDATORY requirements must be preserved." + echo "" + EXIT_CODE=1 + fi +done + +# Check all agent files in .claude/agents/. +for file in .claude/agents/*.md; do + if [ ! 
-f "$file" ]; then + continue + fi + + size=$(wc -c < "$file" | tr -d ' ') + + if [ "$size" -gt "$AGENT_MAX_SIZE" ]; then + over=$((size - AGENT_MAX_SIZE)) + percent=$((over * 100 / AGENT_MAX_SIZE)) + + echo "❌ Agent File Too Large" + echo "The modified $file exceeds the $AGENT_MAX_SIZE byte size limit:" + echo "" + echo "$file: $size bytes (over by $over bytes, ~${percent}%)" + echo "" + echo "Action needed: Please refactor the oversized agent file. Consider:" + echo "" + echo " • Removing verbose explanations and examples" + echo " • Consolidating redundant sections" + echo " • Keeping only essential instructions" + echo " • Moving detailed documentation to docs/prd/" + echo "" + echo "Agent files should be concise and focused on specific tasks." + echo "" + EXIT_CODE=1 + fi +done + +exit $EXIT_CODE diff --git a/tests/snapshots/TestCLICommands_atmos_describe_config.stdout.golden b/tests/snapshots/TestCLICommands_atmos_describe_config.stdout.golden index eec52b4b9d..7f8fdca443 100644 --- a/tests/snapshots/TestCLICommands_atmos_describe_config.stdout.golden +++ b/tests/snapshots/TestCLICommands_atmos_describe_config.stdout.golden @@ -31,6 +31,10 @@ "base_path": "", "command": "" }, + "list": { + "format": "", + "columns": null + }, "Plugins": null }, "stacks": { @@ -43,6 +47,10 @@ ], "name_pattern": "{stage}", "name_template": "", + "list": { + "format": "", + "columns": null + }, "inherit": {} }, "workflows": { diff --git a/tests/snapshots/TestCLICommands_atmos_list_instances.tty.golden b/tests/snapshots/TestCLICommands_atmos_list_instances.tty.golden index 75cce6c50e..4f3b7ee2a4 100644 --- a/tests/snapshots/TestCLICommands_atmos_list_instances.tty.golden +++ b/tests/snapshots/TestCLICommands_atmos_list_instances.tty.golden @@ -1,9 +1,7 @@ -]11;?\]10;?\]11;?\]11;?\]11;?\┏━━━━━━━━━━━━━━━┳━━━━━━━━━┓ -┃ Component ┃ Stack ┃ -┣━━━━━━━━━━━━━━━╋━━━━━━━━━┫ -┃ mock/disabled ┃ nonprod ┃ -┃ mock/drift ┃ nonprod ┃ -┃ mock/nodrift ┃ nonprod ┃ -┃ mock/drift ┃ prod ┃ -┃ 
mock/nodrift ┃ prod ┃ -┗━━━━━━━━━━━━━━━┻━━━━━━━━━┛ +]11;?\]10;?\]11;?\]11;?\]11;?\ Component  Stack  +──────────────────────── + mock/disabled nonprod + mock/drift nonprod + mock/drift prod + mock/nodrift nonprod + mock/nodrift prod diff --git a/tests/snapshots/TestCLICommands_atmos_list_instances_no_tty.stdout.golden b/tests/snapshots/TestCLICommands_atmos_list_instances_no_tty.stdout.golden index 08bcd049eb..bef84ed472 100644 --- a/tests/snapshots/TestCLICommands_atmos_list_instances_no_tty.stdout.golden +++ b/tests/snapshots/TestCLICommands_atmos_list_instances_no_tty.stdout.golden @@ -1,6 +1,6 @@ Component,Stack mock/disabled,nonprod mock/drift,nonprod -mock/nodrift,nonprod mock/drift,prod -mock/nodrift,prod +mock/nodrift,nonprod +mock/nodrift,prod \ No newline at end of file diff --git a/tests/snapshots/TestCLICommands_atmos_list_instances_with_custom_columns.stderr.golden b/tests/snapshots/TestCLICommands_atmos_list_instances_with_custom_columns.stderr.golden new file mode 100644 index 0000000000..3a769ed9d1 --- /dev/null +++ b/tests/snapshots/TestCLICommands_atmos_list_instances_with_custom_columns.stderr.golden @@ -0,0 +1,2 @@ + +**Notice:** Telemetry Enabled - Atmos now collects anonymous telemetry regarding usage. This information is used to shape the Atmos roadmap and prioritize features. 
You can learn more, including how to opt out if you'd prefer not to participate in this anonymous program, by visiting: https://atmos.tools/cli/telemetry diff --git a/tests/snapshots/TestCLICommands_atmos_list_instances_with_custom_columns.stdout.golden b/tests/snapshots/TestCLICommands_atmos_list_instances_with_custom_columns.stdout.golden new file mode 100644 index 0000000000..dfc264b9c8 --- /dev/null +++ b/tests/snapshots/TestCLICommands_atmos_list_instances_with_custom_columns.stdout.golden @@ -0,0 +1,6 @@ +Stack,Comp,Type +nonprod,mock/disabled,terraform +nonprod,mock/drift,terraform +nonprod,mock/nodrift,terraform +prod,mock/drift,terraform +prod,mock/nodrift,terraform \ No newline at end of file diff --git a/tests/test-cases/atmos-pro.yaml b/tests/test-cases/atmos-pro.yaml index cec3c0833e..3f6bdb2f22 100644 --- a/tests/test-cases/atmos-pro.yaml +++ b/tests/test-cases/atmos-pro.yaml @@ -36,6 +36,8 @@ tests: args: - "list" - "instances" + - "--format" + - "csv" expect: exit_code: 0 stdout: @@ -44,4 +46,49 @@ tests: - "mock/drift,nonprod" - "mock/nodrift,nonprod" - "mock/drift,prod" - - "mock/nodrift,prod" \ No newline at end of file + - "mock/nodrift,prod" + + - name: atmos list instances with custom columns + snapshot: true + enabled: true + tty: false + description: "Ensure Atmos honors --columns flag for custom column output" + workdir: "fixtures/scenarios/atmos-pro" + command: "atmos" + args: + - "list" + - "instances" + - "--format" + - "csv" + - "--columns" + - "Stack={{ .stack }}" + - "--columns" + - "Comp={{ .component }}" + - "--columns" + - "Type={{ .component_type }}" + expect: + exit_code: 0 + stdout: + - "Stack,Comp,Type" + - "nonprod,mock/disabled,terraform" + - "nonprod,mock/drift,terraform" + - "nonprod,mock/nodrift,terraform" + - "prod,mock/drift,terraform" + - "prod,mock/nodrift,terraform" + + - name: atmos list instances with invalid column spec + snapshot: false + enabled: true + tty: false + description: "Ensure Atmos returns error for 
invalid --columns format" + workdir: "fixtures/scenarios/atmos-pro" + command: "atmos" + args: + - "list" + - "instances" + - "--columns" + - "InvalidNoEquals" + expect: + exit_code: 1 + stderr: + - "must be in format 'Name=Template'" \ No newline at end of file diff --git a/website/blog/2025-11-17-customizable-list-command-output.mdx b/website/blog/2025-11-17-customizable-list-command-output.mdx new file mode 100644 index 0000000000..37b3b842b6 --- /dev/null +++ b/website/blog/2025-11-17-customizable-list-command-output.mdx @@ -0,0 +1,160 @@ +--- +title: "Customize List Command Output to Explore Your Cloud Architecture" +slug: customizable-list-command-output +description: "Configure custom columns in atmos.yaml to tailor list command output to your team's needs" +authors: [osterman] +tags: [feature, usability, list-commands] +--- + +Atmos lets you model your cloud architecture, so why shouldn't you be able to easily explore that? This is especially a pain point for people new to a team who just want to see what exists without having to understand your complete cloud architecture. Atmos List makes that possible. + +We've enhanced all column-supporting list commands (`instances`, `components`, `stacks`, `workflows`, `vendor`) to support customizable output columns via `atmos.yaml` configuration. + + + +## The Problem + +When exploring a new infrastructure codebase, you're often overwhelmed with questions: +- What components are deployed in production? +- Which stacks use a specific component? +- What's the region and environment for each deployment? + +Running `atmos list instances` gives you raw data, but not the specific view you need to answer these questions quickly. 
+ +## The Solution + +Configure custom columns in `atmos.yaml` to show exactly what your team needs: + +```yaml +# atmos.yaml +components: + list: + columns: + - name: Stack + value: "{{ .stack }}" + - name: Component + value: "{{ .component }}" + - name: Region + value: "{{ .vars.region }}" + - name: Environment + value: "{{ .vars.environment }}" + - name: Stage + value: "{{ .vars.stage }}" + - name: Description + value: "{{ .metadata.description }}" +``` + +Now `atmos list instances` shows a clean, team-specific view: + +```shell +atmos list instances +┌─────────────────────┬───────────┬───────────┬─────────────┬──────┬────────────────────────┐ +│ Stack │ Component │ Region │ Environment │ Stage│ Description │ +├─────────────────────┼───────────┼───────────┼─────────────┼──────┼────────────────────────┤ +│ plat-ue2-prod │ vpc │ us-east-2 │ ue2 │ prod │ Production VPC │ +│ plat-ue2-prod │ eks │ us-east-2 │ ue2 │ prod │ Production EKS cluster │ +│ plat-uw2-staging │ vpc │ us-west-2 │ uw2 │ stage│ Staging VPC │ +└─────────────────────┴───────────┴───────────┴─────────────┴──────┴────────────────────────┘ +``` + +## Practical Examples + +### Find All Production Infrastructure + +Filter by stack pattern and see critical details: + +```shell +atmos list instances --stack "*-prod" --columns "component,vars.region,enabled" +``` + +### Explore Vendored Dependencies + +See what external components you're using: + +```yaml +# atmos.yaml +vendor: + list: + columns: + - name: Component + value: "{{ .component }}" + - name: Source + value: "{{ .source | truncate 50 }}" + - name: Version + value: "{{ .version }}" +``` + +```shell +atmos list vendor +┌───────────┬──────────────────────────────────────────────────┬─────────┐ +│ Component │ Source │ Version │ +├───────────┼──────────────────────────────────────────────────┼─────────┤ +│ vpc │ github.com/cloudposse/terraform-aws-vpc │ 1.5.0 │ +│ eks │ github.com/cloudposse/terraform-aws-eks │ 2.0.0 │ 
+└───────────┴──────────────────────────────────────────────────┴─────────┘ +``` + +### Audit Workflows + +See all available automation: + +```yaml +# atmos.yaml +workflows: + list: + columns: + - name: Workflow + value: "{{ .name }}" + - name: File + value: "{{ .file }}" + - name: Steps + value: "{{ .steps | len }} steps" +``` + +### Use Template Functions + +Transform data with built-in functions: + +```yaml +components: + list: + columns: + - name: Component + value: "{{ .component | upper }}" + - name: Status + value: "{{ if .enabled }}✓ Enabled{{ else }}✗ Disabled{{ end }}" + - name: Short Description + value: "{{ .metadata.description | truncate 40 }}" +``` + +## Override from CLI + +Need a different view for a one-off query? Override columns via CLI: + +```shell +# Quick component-stack view +atmos list instances --columns stack,component + +# Region-specific query +atmos list instances --columns "component,vars.region,vars.account_id" +``` + +## What's Supported + +Column customization is available for: +- ✅ `atmos list instances` - All component instances across stacks +- ✅ `atmos list components` - Components in your project +- ✅ `atmos list stacks` - Stack configurations +- ✅ `atmos list workflows` - Available workflows +- ✅ `atmos list vendor` - Vendored dependencies + +Each command has access to its own template context with fields like `.stack`, `.component`, `.vars.*`, `.settings.*`, `.metadata.*`, and more. + +## Learn More + +- [List Instances Documentation](/cli/commands/list/list-instances) +- [List Components Documentation](/cli/commands/list/components) +- [List Workflows Documentation](/cli/commands/list/list-workflows) +- [List Vendor Documentation](/cli/commands/list/list-vendor) + +Make exploring your cloud architecture as easy as modeling it. Configure your columns once, and every team member gets the view they need. 
diff --git a/website/docs/cli/commands/list/list-components.mdx b/website/docs/cli/commands/list/list-components.mdx index 51d02c0dad..2458991ea6 100644 --- a/website/docs/cli/commands/list/list-components.mdx +++ b/website/docs/cli/commands/list/list-components.mdx @@ -5,11 +5,11 @@ sidebar_class_name: command id: components description: Use this command to list Atmos components --- -import Intro from '@site/src/components/Intro' import Screengrab from '@site/src/components/Screengrab' +import Intro from '@site/src/components/Intro' -Use this command to list all Atmos components or Atmos components in a specified stack. +Use this command to list all components in your Atmos configuration, optionally filtering by stack. View components in multiple output formats including tables, JSON, YAML, and CSV. @@ -22,7 +22,7 @@ Execute the `list components` command like this: atmos list components ``` -This command lists Atmos components in a specified stack. +This command lists Atmos components in all stacks or in a specified stack: ```shell atmos list components -s @@ -39,30 +39,216 @@ atmos list components atmos list components -s tenant1-ue2-dev ``` -### Custom Columns for Components +Filter by component type: +```shell +# List only real (non-abstract) components +atmos list components --type real -This configuration customizes the output of `atmos list components`: +# List abstract components +atmos list components --type abstract -```yaml -# In atmos.yaml -components: - list: - columns: - - name: Component Name - value: "{{ .component_name }}" - - name: Component Type - value: "{{ .component_type }}" - - name: Component Path - value: "{{ .component_path }}" +# List all components including abstract +atmos list components --type all +``` + +Filter by enabled/locked status: +```shell +# List only enabled components +atmos list components --enabled=true + +# List only locked components +atmos list components --locked=true +``` + +Include abstract components: +```shell +# 
Include abstract components (normally hidden by default) +atmos list components --abstract + +# Equivalent: show all component types +atmos list components --type all +``` + +Output in different formats: +```shell +# JSON format +atmos list components --format json + +# YAML format +atmos list components --format yaml + +# CSV format with custom delimiter +atmos list components --format csv +``` + +Custom columns: +```shell +# Simple field names (auto-generates templates) +atmos list components --columns component,stack,type + +# Named columns with custom templates +atmos list components --columns "Name={{ .component }},Stack={{ .stack }}" + +# Named columns with simple field reference +atmos list components --columns "MyStack=stack,MyType=type" + +# Mix of formats +atmos list components --columns component,"Status={{ if .enabled }}Active{{ else }}Disabled{{ end }}" +``` + +Sort results: +```shell +# Sort by component name ascending +atmos list components --sort component:asc + +# Sort by multiple columns +atmos list components --sort "stack:asc,component:desc" ``` -Running `atmos list components` will produce a table with these custom columns. ## Flags
-`--stack` / `-s` (optional)
-Atmos stack.
+`--stack` / `-s`
+Filter by stack name pattern (supports glob patterns like `plat-*-prod`).
+Environment variable: `ATMOS_STACK`
+
+`--format` / `-f`
+Output format: `table`, `json`, `yaml`, `csv`, `tsv`, `tree`. Overrides `components.list.format` configuration in atmos.yaml (default: `table`).
+Environment variable: `ATMOS_LIST_FORMAT`
+
+`--columns`
+Columns to display. Supports simple field names (e.g., `component,stack,type`), named columns with templates (e.g., `"Name={{ .component }}"`), or named with field reference (e.g., `"MyStack=stack"`). Overrides `components.list.columns` configuration in atmos.yaml.
+Environment variable: `ATMOS_LIST_COLUMNS`
+
+`--type` / `-t`
+Component type: `real`, `abstract`, `all` (default: `real`).
+Environment variable: `ATMOS_COMPONENT_TYPE`
+
+`--abstract`
+Include abstract components in output. Equivalent to `--type all`.
+Environment variable: `ATMOS_ABSTRACT`
+
+`--enabled`
+Filter by enabled status (omit for all, `--enabled=true` for enabled only, `--enabled=false` for disabled only).
+Environment variable: `ATMOS_COMPONENT_ENABLED`
+
+`--locked`
+Filter by locked status (omit for all, `--locked=true` for locked only, `--locked=false` for unlocked only).
+Environment variable: `ATMOS_COMPONENT_LOCKED`
+
+`--sort`
+Sort by column:order (e.g., `component:asc,stack:desc`). Multiple sort columns separated by comma.
+Environment variable: `ATMOS_LIST_SORT`

 `--identity` / `-i` (optional)
-Authenticate with a specific identity before listing components.
-This is required when stack configurations use YAML template functions
-(e.g., `!terraform.state`, `!terraform.output`) that require authentication.
-`atmos list components --identity my-aws-identity`
-
-Can also be set via `ATMOS_IDENTITY` environment variable.
+Authenticate with a specific identity before listing components.
+This is required when stack configurations use YAML template functions
+(e.g., `!terraform.state`, `!terraform.output`) that require authentication.
+`atmos list components --identity my-aws-identity`
+
+Environment variable: `ATMOS_IDENTITY`
+ +## Configuration + +You can customize the default output format and columns displayed by `atmos list components` in your `atmos.yaml`: + +### Default Format + +```yaml +# atmos.yaml +components: + list: + format: yaml # Default format: table, json, yaml, csv, tsv +``` + +**Precedence**: CLI `--format` flag > Config file > Environment variable `ATMOS_LIST_FORMAT` > Default (`table`) + +### Custom Columns + +```yaml +# atmos.yaml +components: + list: + format: table + columns: + - name: Component + value: "{{ .atmos_component }}" + - name: Stack + value: "{{ .atmos_stack }}" + - name: Type + value: "{{ .atmos_component_type }}" + - name: Enabled + value: "{{ .enabled }}" + - name: Description + value: "{{ .metadata.description }}" +``` + +### Available Template Fields + +Column `value` fields support Go template syntax with access to: + +- `.atmos_component` - Atmos component name +- `.atmos_stack` - Stack name +- `.atmos_component_type` - Component type (`terraform`, `helmfile`, etc.) +- `.component` - Terraform/Helmfile component path +- `.vars` - All component variables (e.g., `.vars.region`, `.vars.tenant`) +- `.settings` - Component settings +- `.metadata` - Component metadata (e.g., `.metadata.description`, `.metadata.component`) +- `.env` - Environment variables +- `.enabled` - Whether component is enabled (boolean) +- `.locked` - Whether component is locked (boolean) +- `.abstract` - Whether component is abstract (boolean) + +### Template Functions + +Columns support template functions for data transformation: + +```yaml +components: + list: + columns: + - name: Component (Upper) + value: "{{ .atmos_component | upper }}" + - name: Region + value: "{{ .vars.region | default \"N/A\" }}" + - name: Status + value: "{{ if .enabled }}Enabled{{ else }}Disabled{{ end }}" + - name: Short Description + value: "{{ .metadata.description | truncate 50 }}" +``` + +Available functions: +- `upper`, `lower` - String case conversion +- `truncate` - Truncate string with 
ellipsis +- `len` - Length of arrays/strings +- `default` - Provide default value if empty +- `toString` - Convert value to string +- `ternary` - Conditional expression + +### Override Columns via CLI + +Override configured columns using the `--columns` flag. The flag supports multiple formats: + +**Simple field names** (auto-generates templates and title-case names): +```shell +# Display component, stack, and type columns +atmos list components --columns component,stack,type + +# Include enabled status +atmos list components --columns component,stack,enabled +``` + +**Named columns with templates** (full control over display name and value): +```shell +# Custom column names with templates +atmos list components --columns "Name={{ .component }},Stack={{ .stack }}" + +# Complex templates with conditionals +atmos list components --columns "Status={{ if .enabled }}Active{{ else }}Disabled{{ end }}" +``` + +**Named columns with field reference** (auto-wraps field in template): +```shell +# Shorthand: Name=field becomes Name={{ .field }} +atmos list components --columns "MyComponent=component,MyStack=stack" +``` + +**Mixed formats**: +```shell +# Combine simple fields and named columns +atmos list components --columns component,"CustomType={{ .type | upper }}" +``` + +## Related Commands + +- [`atmos list instances`](/cli/commands/list/list-instances) - List all component instances across stacks +- [`atmos list stacks`](/cli/commands/list/stacks) - List all stacks +- [`atmos describe component`](/cli/commands/describe/component) - Get detailed component configuration diff --git a/website/docs/cli/commands/list/list-instances.mdx b/website/docs/cli/commands/list/list-instances.mdx new file mode 100644 index 0000000000..c14164bbf1 --- /dev/null +++ b/website/docs/cli/commands/list/list-instances.mdx @@ -0,0 +1,243 @@ +--- +title: "atmos list instances" +id: "list-instances" +sidebar_label: instances +sidebar_class_name: command +description: Use this command to list all 
Atmos component instances across stacks +--- +import Screengrab from '@site/src/components/Screengrab' +import Intro from '@site/src/components/Intro' + + +Use this command to list all component instances across stacks, showing each unique component-stack combination. Upload instance metadata to Atmos Pro for centralized tracking and management. + + + + +## Usage + +```shell +atmos list instances [flags] +``` + +## Description + +The `atmos list instances` command displays all component instances defined across your Atmos stacks. Each instance represents a unique combination of a component and stack. This command is useful for: + +- Getting an overview of all deployed/configured infrastructure +- Finding specific component instances across stacks +- Filtering instances by stack pattern or custom criteria +- Uploading instance inventory to Atmos Pro for centralized management + +## Flags + +
+`--format` / `-f`
+Output format: `table`, `json`, `yaml`, `csv`, `tsv`, `tree` (default: `table`)
+
+`--delimiter`
+Delimiter for CSV/TSV output (default: tab for tsv, comma for csv)
+
+`--provenance`
+Show import provenance in tree format. Only works with `--format=tree`. Displays the import hierarchy showing which files each component inherits from.
+
+`--columns`
+Columns to display (comma-separated). Overrides `components.list.columns` configuration in atmos.yaml
+
+`--stack` / `-s`
+Filter by stack pattern (supports glob patterns, e.g., `plat-*-prod`)
+
+`--filter`
+Filter expression using YQ syntax (e.g., `.vars.region == "us-east-1"`)
+
+`--query` / `-q`
+YQ expression to filter values (e.g., `.vars.region`)
+
+`--sort`
+Sort by column:order (e.g., `stack:asc,component:desc`)
+
+`--upload`
+Upload instances to Atmos Pro API (requires Pro configuration)
+ +## Examples + +List all instances in table format: +```shell +atmos list instances +``` + +Filter instances by stack pattern: +```shell +# List instances in all production stacks +atmos list instances --stack "*-prod" + +# List instances in a specific stack +atmos list instances --stack tenant1-ue2-dev +``` + +Output in different formats: +```shell +# JSON format for machine processing +atmos list instances --format json + +# YAML format for configuration review +atmos list instances --format yaml + +# CSV format for spreadsheet compatibility +atmos list instances --format csv +``` + +Filter instances using YQ expressions: +```shell +# Find all instances in us-east-1 region +atmos list instances --filter '.vars.region == "us-east-1"' + +# Find enabled VPC components +atmos list instances --filter '.component == "vpc" and .enabled == true' +``` + +Sort instances: +```shell +# Sort by stack name ascending +atmos list instances --sort stack:asc + +# Multi-column sort +atmos list instances --sort "stack:asc,component:desc" +``` + +Upload instances to Atmos Pro: +```shell +atmos list instances --upload +``` + +View instances in tree format: +```shell +# Tree view without import details +atmos list instances --format tree + +# Tree view with import provenance (shows inheritance chain) +atmos list instances --format tree --provenance +``` + +## Tree Format with Import Provenance + +The `tree` format provides a hierarchical view of your component instances organized by stack. When combined with the `--provenance` flag, it shows the complete import chain for each component, making it easy to understand configuration inheritance. 
+ +### Tree Format Structure + +The tree format displays: +- **Stacks** as top-level nodes +- **Components** as child nodes under each stack +- **Import hierarchy** (when `--provenance` is enabled) showing the chain of stack configuration files + +### Import Provenance + +When you enable `--provenance`, each component shows its import chain - the sequence of stack configuration files it inherits from. This is particularly useful for: + +- **Debugging configuration** - See exactly where each component's configuration comes from +- **Understanding inheritance** - Visualize the complete import chain +- **Auditing changes** - Track which base configurations affect which components +- **Documentation** - Generate visual representations of stack dependencies + +Example tree output with provenance: +``` +Component Instances +│ +├── tenant1-ue2-dev +│ ├── vpc +│ │ ├── stacks/tenant1/ue2/dev +│ │ ├── stacks/tenant1/ue2/_defaults +│ │ └── stacks/catalog/vpc +│ └── eks +│ ├── stacks/tenant1/ue2/dev +│ ├── stacks/tenant1/ue2/_defaults +│ └── stacks/catalog/eks +``` + +The import chain is shown from most specific (top) to most general (bottom), reflecting how Atmos merges configurations. 
+ +## Custom Columns Configuration + +You can customize the columns displayed by `atmos list instances` in your `atmos.yaml`: + +```yaml +# atmos.yaml +components: + list: + columns: + - name: Stack + value: "{{ .stack }}" + - name: Component + value: "{{ .component }}" + - name: Tenant + value: "{{ .vars.tenant }}" + - name: Environment + value: "{{ .vars.environment }}" + - name: Stage + value: "{{ .vars.stage }}" + - name: Region + value: "{{ .vars.region }}" + - name: Description + value: "{{ .metadata.description }}" + - name: Enabled + value: "{{ .enabled }}" +``` + +### Available Template Fields + +Column `value` fields support Go template syntax with access to: + +- `.stack` - Stack name (e.g., `tenant1-ue2-dev`) +- `.component` - Component name (e.g., `vpc`) +- `.atmos_component` - Atmos component identifier +- `.atmos_component_type` - Component type (`terraform`, `helmfile`, etc.) +- `.vars` - All component variables (e.g., `.vars.region`, `.vars.tenant`) +- `.settings` - Component settings (e.g., `.settings.spacelift.workspace_enabled`) +- `.metadata` - Component metadata (e.g., `.metadata.description`) +- `.env` - Environment variables +- `.enabled` - Whether component is enabled (boolean) +- `.locked` - Whether component is locked (boolean) +- `.abstract` - Whether component is abstract (boolean) + +### Template Functions + +Columns support template functions for data transformation: + +```yaml +components: + list: + columns: + - name: Region (Upper) + value: "{{ .vars.region | upper }}" + - name: Short Description + value: "{{ .metadata.description | truncate 50 }}" + - name: Has Monitoring + value: "{{ if .vars.monitoring_enabled }}Yes{{ else }}No{{ end }}" +``` + +### Override Columns via CLI + +Override configured columns using the `--columns` flag: + +```shell +# Display only stack and component columns +atmos list instances --columns stack,component + +# Display custom subset +atmos list instances --columns "stack,component,vars.region,enabled" 
+``` + +:::tip +- Use `--format tree --provenance` to visualize component import hierarchies +- Use the `--filter` flag for complex filtering with YQ syntax +- Combine `--stack` (glob pattern) with `--filter` (YQ expression) for precise filtering +- The `--upload` flag sends instance data to Atmos Pro for centralized infrastructure management +- Use `--format json` or `--format yaml` for programmatic processing +::: + +## Related Commands + +- [`atmos list components`](/cli/commands/list/components) - List all components +- [`atmos list stacks`](/cli/commands/list/stacks) - List all stacks +- [`atmos describe component`](/cli/commands/describe/component) - Get detailed component configuration diff --git a/website/docs/cli/commands/list/list-metadata.mdx b/website/docs/cli/commands/list/list-metadata.mdx index 28778ed841..e281ec3c23 100644 --- a/website/docs/cli/commands/list/list-metadata.mdx +++ b/website/docs/cli/commands/list/list-metadata.mdx @@ -3,11 +3,16 @@ title: "atmos list metadata" id: "list-metadata" sidebar_label: metadata sidebar_class_name: command +description: Use this command to list component metadata across all stacks --- +import Screengrab from '@site/src/components/Screengrab' +import Intro from '@site/src/components/Intro' -# atmos list metadata + +Use this command to list component metadata across all stacks, displaying custom metadata fields in a table. Filter and sort metadata to quickly find components with specific attributes. + -The `atmos list metadata` command displays component metadata across all stacks. + ## Usage @@ -17,60 +22,72 @@ atmos list metadata [flags] ## Description -The `atmos list metadata` command helps you inspect component metadata across different stacks. It provides a tabular view where: +The `atmos list metadata` command displays component metadata across all stacks in a tabular format. 
Each row represents a component instance, showing metadata fields like: -- Each column represents a stack (e.g., dev-ue1, staging-ue1, prod-ue1) -- Each row represents a key in the component's metadata -- Cells contain the metadata values for each key in each stack +- Component type (`abstract` or `real`) +- Enabled/disabled status +- Locked status +- Base component name +- Inheritance chain +- Description -The command is particularly useful for: -- Comparing component metadata across different environments -- Verifying component types and versions across stacks -- Understanding component organization patterns across your infrastructure +This command is useful for: +- Auditing component types across environments +- Finding enabled/disabled components +- Understanding component inheritance patterns +- Verifying locked components ## Flags
-`--query string`
-JMESPath query to filter metadata (default: `.metadata`)
-
-`--max-columns int`
-Maximum number of columns to display (default: `50`)
-
-`--format string`
+`--format` / `-f`
 Output format: `table`, `json`, `yaml`, `csv`, `tsv` (default: `table`)
-
-`--delimiter string`
+
+`--columns`
+Columns to display (comma-separated). Overrides `components.list.columns` configuration in atmos.yaml
+
+`--stack` / `-s`
+Filter by stack pattern (supports glob patterns, e.g., `plat-*-prod`)
+
+`--filter`
+Filter expression using YQ syntax (e.g., `.enabled == true`)
+
+`--sort`
+Sort by column:order (e.g., `stack:asc,component:desc`)
+
+`--delimiter`
 Delimiter for csv/tsv output (default: `,` for csv, `\t` for tsv)
-
-`--stack string`
-Filter by stack pattern (e.g., `*-dev-*`, `prod-*`, `*-{dev,staging}-*`)
+
+`--identity` / `-i` (optional)
+Authenticate with a specific identity before listing metadata.
+This is required when stack configurations use YAML template functions
+(e.g., `!terraform.state`, `!terraform.output`) that require authentication.
+`atmos list metadata --identity my-aws-identity`
+
Can also be set via `ATMOS_IDENTITY` environment variable.
## Examples -List all metadata: +List all component metadata: ```shell atmos list metadata ``` -List metadata for specific stacks: +Filter by stack pattern: ```shell -# List metadata for dev stacks -atmos list metadata --stack '*-dev-*' - # List metadata for production stacks -atmos list metadata --stack 'prod-*' +atmos list metadata --stack '*-prod' + +# List metadata for specific environment +atmos list metadata --stack 'plat-ue2-*' ``` -List specific metadata using JMESPath queries: +Filter by metadata fields: ```shell -# Query component names -atmos list metadata --query '.metadata.component' +# Find all enabled components +atmos list metadata --filter '.enabled == true' -# Query component types -atmos list metadata --query '.metadata.type' +# Find all abstract components +atmos list metadata --filter '.type == "abstract"' -# Query component versions -atmos list metadata --query '.metadata.version' +# Find locked components +atmos list metadata --filter '.locked == true' ``` Output in different formats: @@ -78,47 +95,135 @@ Output in different formats: # JSON format for machine processing atmos list metadata --format json -# YAML format for configuration files +# YAML format for configuration review atmos list metadata --format yaml -# CSV format for spreadsheet compatibility +# CSV format for spreadsheet analysis atmos list metadata --format csv +``` + +Sort metadata: +```shell +# Sort by stack name ascending +atmos list metadata --sort stack:asc + +# Multi-column sort +atmos list metadata --sort "type:desc,stack:asc,component:asc" +``` + +Custom column selection: +```shell +# Show only essential fields +atmos list metadata --columns stack,component,type,enabled + +# Show inheritance information +atmos list metadata --columns "component,component_base,inherits" +``` + +## Custom Columns Configuration -# TSV format with tab delimiters -atmos list metadata --format tsv +You can customize the columns displayed by `atmos list metadata` in your `atmos.yaml`: + 
+```yaml +# atmos.yaml +components: + list: + columns: + - name: Stack + value: "{{ .stack }}" + - name: Component + value: "{{ .component }}" + - name: Type + value: "{{ .type }}" + - name: Enabled + value: "{{ .enabled }}" + - name: Locked + value: "{{ .locked }}" + - name: Base Component + value: "{{ .component_base }}" + - name: Inherits + value: "{{ .inherits }}" + - name: Description + value: "{{ .description }}" ``` -### Custom Column using Stack Name +### Available Template Fields -You can use available variables like `.stack_name` in your column definitions: +Column `value` fields support Go template syntax with access to: + +- `.stack` - Stack name +- `.component` - Component name +- `.component_type` - Component type (`terraform`, `helmfile`, etc.) +- `.type` - Metadata type (`abstract`, `real`) +- `.enabled` - Whether component is enabled (boolean) +- `.locked` - Whether component is locked (boolean) +- `.component_base` - Base Terraform/Helmfile component +- `.inherits` - Comma-separated list of inherited components +- `.description` - Component description +- `.metadata` - Full metadata map for advanced templates + +### Template Functions + +Columns support template functions for data transformation: ```yaml -# In atmos.yaml, under the appropriate scope (values, vars, settings, or metadata) -list: - columns: - - name: "Stack" - value: "{{ .stack_name }}" - - name: "Metadata" - value: "{{ .key }}" - - name: "Value" - value: "{{ .value }}" +components: + list: + columns: + - name: Component (Upper) + value: "{{ .component | upper }}" + - name: Status + value: "{{ if .enabled }}✓ Enabled{{ else }}✗ Disabled{{ end }}" + - name: Type Badge + value: "{{ if eq .type \"abstract\" }}[A]{{ else }}[R]{{ end }}" + - name: Has Inherits + value: "{{ if .inherits }}Yes{{ else }}No{{ end }}" +``` + +Available functions: +- `upper`, `lower` - String case conversion +- `truncate` - Truncate string with ellipsis +- `len` - Length of arrays/strings +- `toString` - 
Convert value to string +- `ternary` - Conditional expression +- `eq`, `ne` - Equality comparison + +### Override Columns via CLI + +Override configured columns using the `--columns` flag: + +```shell +# Display only stack, component, and type +atmos list metadata --columns stack,component,type + +# Display custom subset +atmos list metadata --columns "stack,component,type,enabled,locked" ``` ## Example Output ```shell > atmos list metadata -┌──────────────┬──────────────┬──────────────┬──────────────┐ -│ │ dev-ue1 │ staging-ue1 │ prod-ue1 │ -├──────────────┼──────────────┼──────────────┼──────────────┤ -│ component │ vpc │ vpc │ vpc │ -│ type │ terraform │ terraform │ terraform │ -│ version │ 1.0.0 │ 1.0.0 │ 1.0.0 │ -└──────────────┴──────────────┴──────────────┴──────────────┘ +┌─────────────────┬───────────┬──────────┬─────────┬────────┬────────────────┬──────────────┬────────────────────┐ +│ Stack │ Component │ Type │ Enabled │ Locked │ Base Component │ Inherits │ Description │ +├─────────────────┼───────────┼──────────┼─────────┼────────┼────────────────┼──────────────┼────────────────────┤ +│ plat-ue2-dev │ vpc │ real │ true │ false │ vpc │ vpc/defaults │ Development VPC │ +│ plat-ue2-dev │ eks │ real │ true │ false │ eks │ eks/defaults │ Development EKS │ +│ plat-ue2-prod │ vpc │ real │ true │ true │ vpc │ vpc/defaults │ Production VPC │ +│ plat-ue2-prod │ eks │ real │ true │ true │ eks │ eks/defaults │ Production EKS │ +└─────────────────┴───────────┴──────────┴─────────┴────────┴────────────────┴──────────────┴────────────────────┘ ``` :::tip -- For wide tables, try using more specific queries or reduce the number of stacks -- Stack patterns support glob matching (e.g., `*-dev-*`, `prod-*`, `*-{dev,staging}-*`) -- Metadata is typically found under component configurations +- Use `--filter` to find specific metadata patterns (e.g., locked components, abstract components) +- Combine `--stack` (glob) with `--filter` (YQ) for precise filtering +- The `--sort` 
flag supports multi-column sorting for organized output +- Metadata is component-level configuration (use [`atmos list settings`](/cli/commands/list/settings) for settings data) ::: + +## Related Commands + +- [`atmos list instances`](/cli/commands/list/list-instances) - List all component instances with full configuration +- [`atmos list components`](/cli/commands/list/components) - List all components +- [`atmos list settings`](/cli/commands/list/settings) - List component settings +- [`atmos describe component`](/cli/commands/describe/component) - Get detailed component configuration diff --git a/website/docs/cli/commands/list/list-settings.mdx b/website/docs/cli/commands/list/list-settings.mdx index 3cbcc8f2b0..62c5f701b9 100644 --- a/website/docs/cli/commands/list/list-settings.mdx +++ b/website/docs/cli/commands/list/list-settings.mdx @@ -3,11 +3,16 @@ title: "atmos list settings" sidebar_label: settings sidebar_class_name: command id: settings +description: Use this command to list component settings across all stacks --- +import Screengrab from '@site/src/components/Screengrab' +import Intro from '@site/src/components/Intro' -# atmos list settings + +Use this command to list component settings across all stacks in a comparison table. View how settings vary between environments to quickly spot configuration differences and validate consistency. + -The `atmos list settings` command displays component settings across all stacks. + ## Usage diff --git a/website/docs/cli/commands/list/list-stacks.mdx b/website/docs/cli/commands/list/list-stacks.mdx index 636a2c591c..3b070263cc 100644 --- a/website/docs/cli/commands/list/list-stacks.mdx +++ b/website/docs/cli/commands/list/list-stacks.mdx @@ -3,14 +3,14 @@ title: atmos list stacks sidebar_label: stacks sidebar_class_name: command id: stacks -description: Use this command to list all Stack configurations or a stack of a specified component. 
+description: Use this command to list all Stack configurations or stacks for a specified component --- import Screengrab from '@site/src/components/Screengrab' import Terminal from '@site/src/components/Terminal' import Intro from '@site/src/components/Intro' -Use this command to list Atmos stacks. +Use this command to list all stacks in your Atmos configuration, optionally filtering by component. View stacks in multiple formats including tables, JSON, YAML, or hierarchical trees with import provenance to understand configuration inheritance. @@ -35,35 +35,228 @@ Run `atmos list stacks --help` to see all the available options ## Examples +List all stacks: ```shell atmos list stacks +``` + +List stacks for a specific component: +```shell atmos list stacks -c vpc +atmos list stacks --component eks +``` + +Output in different formats: +```shell +# JSON format +atmos list stacks --format json + +# YAML format +atmos list stacks --format yaml + +# CSV format +atmos list stacks --format csv +``` + +Sort stacks: +```shell +# Sort by stack name ascending +atmos list stacks --sort stack:asc + +# Sort by component name descending +atmos list stacks --component vpc --sort component:desc +``` + +Custom columns: +```shell +# Simple field names (auto-generates templates) +atmos list stacks --columns stack + +# Named columns with custom templates +atmos list stacks --columns "Name={{ .stack }}" + +# When filtering by component, show both stack and component +atmos list stacks --component vpc --columns stack,component +``` + +View stacks in tree format: +```shell +# Tree view without import details +atmos list stacks --format tree + +# Tree view with import provenance (shows inheritance chain) +atmos list stacks --format tree --provenance + +# Tree view with provenance for a specific component +atmos list stacks --component vpc --format tree --provenance +``` + +## Flags + +
+
`--component` / `-c`
+
Filter stacks by component name.
Environment variable: `ATMOS_COMPONENT`
+ +
`--format` / `-f`
+
Output format: `table`, `json`, `yaml`, `csv`, `tsv`, `tree`. Overrides `stacks.list.format` configuration in atmos.yaml (default: `table`).
Environment variable: `ATMOS_LIST_FORMAT`
+ +
`--columns`
+
Columns to display. Supports simple field names (e.g., `stack`), named columns with templates (e.g., `"Name={{ .stack }}"`), or named with field reference (e.g., `"MyStack=stack"`). Overrides `stacks.list.columns` configuration in atmos.yaml. Environment variable: `ATMOS_LIST_COLUMNS`
+ +
`--sort`
+
Sort by column:order (e.g., `stack:asc,component:desc`). Separate multiple sort columns with commas.
Environment variable: `ATMOS_LIST_SORT`
+ +
`--provenance`
+
Show import provenance in tree format. Only works with `--format=tree`. Displays the import hierarchy showing which files each stack inherits from.
Environment variable: `ATMOS_PROVENANCE`
+ +
`--identity` / `-i` (optional)
+
Authenticate with a specific identity before listing stacks.
This is required when stack configurations use YAML template functions
(e.g., `!terraform.state`, `!terraform.output`) that require authentication.
`atmos list stacks --identity my-aws-identity`

Environment variable: `ATMOS_IDENTITY`
+
+ +## Tree Format with Import Provenance + +The `tree` format provides a hierarchical view of your stacks. When combined with the `--provenance` flag, it shows the complete import chain for each stack, making it easy to understand configuration inheritance. + +### Tree Format Structure + +The tree format displays: +- **Stack names** as top-level nodes +- **Import hierarchy** (when `--provenance` is enabled) showing the chain of stack configuration files that each stack imports + +### Import Provenance + +When you enable `--provenance`, each stack shows its import chain - the sequence of stack configuration files it inherits from. This is particularly useful for: + +- **Debugging configuration** - See exactly where each stack's configuration comes from +- **Understanding inheritance** - Visualize the complete import chain +- **Auditing changes** - Track which base configurations affect which stacks +- **Documentation** - Generate visual representations of stack dependencies + +Example tree output with provenance: +``` +Stacks +│ +├── tenant1-ue2-dev +│ ├── stacks/tenant1/ue2/dev +│ ├── stacks/tenant1/ue2/_defaults +│ ├── stacks/tenant1/_defaults +│ └── stacks/_defaults +│ +├── tenant1-ue2-staging +│ ├── stacks/tenant1/ue2/staging +│ ├── stacks/tenant1/ue2/_defaults +│ ├── stacks/tenant1/_defaults +│ └── stacks/_defaults ``` -### Customizing Output Columns +The import chain is shown from most specific (top) to most general (bottom), reflecting how Atmos merges configurations. -This configuration customizes the output of `atmos list stacks`: +When used with `--component`, the tree shows only stacks that contain the specified component: + +```shell +atmos list stacks --component vpc --format tree --provenance +``` + +This filters the output to show only stacks where the `vpc` component is defined, along with their import chains. 
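To make the pruning concrete, here is an illustrative sketch of that filtered output, assuming (hypothetically) that only `tenant1-ue2-dev` defines the `vpc` component — the other stacks and their import chains simply drop out of the tree:

```
Stacks
│
└── tenant1-ue2-dev
    ├── stacks/tenant1/ue2/dev
    ├── stacks/tenant1/ue2/_defaults
    ├── stacks/tenant1/_defaults
    └── stacks/_defaults
```

The import chain for each remaining stack is still rendered in full, so component-scoped trees stay useful for tracing inheritance.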
+ +## Configuration + +You can customize the default output format and columns displayed by `atmos list stacks` in your `atmos.yaml`: + +### Default Format + +```yaml +# atmos.yaml +stacks: + list: + format: tree # Default format: table, json, yaml, csv, tsv, tree +``` + +**Precedence**: CLI `--format` flag > Config file > Environment variable `ATMOS_LIST_FORMAT` > Default (`table`) + +### Custom Columns ```yaml -# In atmos.yaml +# atmos.yaml stacks: list: format: table columns: - - name: Stack Name - value: "{{ .stack_name }}" - - name: Configuration Path - value: "{{ .stack_path }}" + - name: Stack + value: "{{ .stack }}" + - name: Components + value: "{{ .components | len }} components" + - name: File + value: "{{ .file }}" ``` -When you run `atmos list stacks`, the output table will have columns titled "Stack Name" and "Configuration Path". +:::note +Column configuration for stacks is under the `stacks.list.columns` section in atmos.yaml, not `components.list.columns`. +::: -## Flags +### Available Template Fields -
-
`--component` / `-c` (optional)
-
Atmos component.
+Column `value` fields support Go template syntax with access to: -
`--identity` / `-i` (optional)
-
Authenticate with a specific identity before listing stacks.
This is required when stack configurations use YAML template functions
(e.g., `!terraform.state`, `!terraform.output`) that require authentication.
`atmos list stacks --identity my-aws-identity`

Can also be set via `ATMOS_IDENTITY` environment variable.
-
+- `.stack` - Stack name +- `.components` - Array of components in the stack +- `.file` - Stack configuration file path +- `.vars` - Stack-level variables (if available) + +### Template Functions + +Columns support template functions for data transformation: + +```yaml +stacks: + list: + columns: + - name: Stack (Upper) + value: "{{ .stack | upper }}" + - name: Component Count + value: "{{ .components | len }}" + - name: Short File + value: "{{ .file | truncate 40 }}" +``` + +Available functions: +- `upper`, `lower` - String case conversion +- `truncate` - Truncate string with ellipsis +- `len` - Length of arrays/strings +- `join` - Join array elements with delimiter +- `toString` - Convert value to string +- `ternary` - Conditional expression + +### Override Columns via CLI + +Override configured columns using the `--columns` flag. The flag supports multiple formats: + +**Simple field names** (auto-generates templates and title-case names): +```shell +# Display only stack column +atmos list stacks --columns stack + +# When filtering by component, show both +atmos list stacks --component vpc --columns stack,component +``` + +**Named columns with templates** (full control over display name and value): +```shell +# Custom column names with templates +atmos list stacks --columns "StackName={{ .stack }}" + +# Multiple named columns +atmos list stacks --columns "Name={{ .stack }},Components={{ .components | len }}" +``` + +**Named columns with field reference** (auto-wraps field in template): +```shell +# Shorthand: Name=field becomes Name={{ .field }} +atmos list stacks --columns "MyStack=stack" +``` + +## Related Commands + +- [`atmos list components`](/cli/commands/list/components) - List all components +- [`atmos list instances`](/cli/commands/list/list-instances) - List all component instances across stacks +- [`atmos describe stacks`](/cli/commands/describe/stacks) - Get detailed stack configuration diff --git a/website/docs/cli/commands/list/list-values.mdx 
b/website/docs/cli/commands/list/list-values.mdx index c1bc787788..8b6485d111 100644 --- a/website/docs/cli/commands/list/list-values.mdx +++ b/website/docs/cli/commands/list/list-values.mdx @@ -3,11 +3,16 @@ title: "atmos list values" id: "list-values" sidebar_label: values sidebar_class_name: command +description: Use this command to list component values across all stacks --- +import Screengrab from '@site/src/components/Screengrab' +import Intro from '@site/src/components/Intro' -# atmos list values + +Use this command to list component configuration values across all stacks in a comparison table. View how a component's configuration varies between environments to spot differences and validate settings. + -The `atmos list values` command displays component values across all stacks where the component is used. + ## Usage diff --git a/website/docs/cli/commands/list/list-vars.mdx b/website/docs/cli/commands/list/list-vars.mdx index c566273e68..58556c3d4a 100644 --- a/website/docs/cli/commands/list/list-vars.mdx +++ b/website/docs/cli/commands/list/list-vars.mdx @@ -3,11 +3,16 @@ title: "atmos list vars" id: "list-vars" sidebar_label: vars sidebar_class_name: command +description: Use this command to list component variables across all stacks --- +import Screengrab from '@site/src/components/Screengrab' +import Intro from '@site/src/components/Intro' -# atmos list vars + +Use this command to list component variables across all stacks in a comparison table. View how Terraform variables vary between environments to quickly identify configuration differences and validate consistency. + -The `atmos list vars` command displays component variables across all stacks where the component is used. 
+ ## Usage diff --git a/website/docs/cli/commands/list/list-vendor.mdx b/website/docs/cli/commands/list/list-vendor.mdx new file mode 100644 index 0000000000..8e24caf3c5 --- /dev/null +++ b/website/docs/cli/commands/list/list-vendor.mdx @@ -0,0 +1,209 @@ +--- +title: "atmos list vendor" +id: "list-vendor" +sidebar_label: vendor +sidebar_class_name: command +description: Use this command to list all vendored components and modules +--- +import Screengrab from '@site/src/components/Screengrab' +import Intro from '@site/src/components/Intro' + + +Use this command to list all components and modules configured for vendoring in your Atmos project. View vendor sources, types, and target folders to understand what external dependencies are managed by Atmos vendoring. + + + + +## Usage + +```shell +atmos list vendor [flags] +``` + +## Description + +The `atmos list vendor` command displays all vendored components and modules defined in your vendor configuration files (`vendor.yaml`). It provides a tabular view where each row represents a vendored item with information about: + +- Component/module name +- Source location (GitHub, local, HTTP, etc.) +- Version or Git reference +- Target destination path +- Vendor configuration file + +This command is useful for: +- Getting an overview of all vendored dependencies +- Verifying vendoring configuration before running `atmos vendor pull` +- Finding specific vendored components +- Auditing external dependencies in your infrastructure + +## Flags + +
+
`--format` / `-f`
+
Output format: `table`, `json`, `yaml`, `csv`, `tsv`. Overrides `vendor.list.format` configuration in atmos.yaml (default: `table`)
+ +
`--delimiter`
+
Delimiter for CSV/TSV output (default: `,` for csv, `\t` for tsv)
+ +
`--columns`
+
Columns to display (comma-separated). Overrides `vendor.list.columns` configuration in atmos.yaml
+ +
`--stack` / `-s`
+
Filter by stack pattern (supports glob patterns)
+ +
`--filter`
+
Filter expression using YQ syntax
+ +
`--sort`
+
Sort by column:order (e.g., `component:asc,source:desc`)
+
+ +## Examples + +List all vendored items: +```shell +atmos list vendor +``` + +Output in different formats: +```shell +# JSON format for machine processing +atmos list vendor --format json + +# YAML format for configuration review +atmos list vendor --format yaml + +# CSV format for dependency auditing +atmos list vendor --format csv +``` + +Filter vendored items: +```shell +# Filter by specific source pattern +atmos list vendor --filter '.source | contains("github.com/cloudposse")' + +# Find specific component +atmos list vendor --filter '.component == "vpc"' +``` + +Sort vendored items: +```shell +# Sort by component name +atmos list vendor --sort component:asc + +# Multi-column sort +atmos list vendor --sort "source:asc,component:asc" +``` + +## Configuration + +You can customize the default output format and columns displayed by `atmos list vendor` in your `atmos.yaml`: + +### Default Format + +```yaml +# atmos.yaml +vendor: + list: + format: table # Default format: table, json, yaml, csv, tsv +``` + +**Precedence**: CLI `--format` flag > Config file > Environment variable `ATMOS_LIST_FORMAT` > Default (`table`) + +### Custom Columns + +```yaml +# atmos.yaml +vendor: + list: + format: table + columns: + - name: Component + value: "{{ .component }}" + - name: Source + value: "{{ .source }}" + - name: Version + value: "{{ .version }}" + - name: Target + value: "{{ .targets | join \", \" }}" + - name: File + value: "{{ .atmos_vendor_file }}" +``` + +### Available Template Fields + +Column `value` fields support Go template syntax with access to: + +- `.component` - Component/module name +- `.source` - Source URL or path (GitHub, HTTP, local, etc.) 
+- `.version` - Version, tag, or Git ref to vendor +- `.targets` - Array of target destination paths +- `.included_paths` - Glob patterns for files to include +- `.excluded_paths` - Glob patterns for files to exclude +- `.tags` - Array of tags associated with the vendored item +- `.atmos_vendor_file` - Path to vendor.yaml file containing this item +- `.atmos_vendor_type` - Type of vendor source (git, http, local, etc.) +- `.atmos_vendor_target` - Primary target path + +### Template Functions + +Columns support template functions for data transformation: + +```yaml +vendor: + list: + columns: + - name: Component (Upper) + value: "{{ .component | upper }}" + - name: Short Source + value: "{{ .source | truncate 50 }}" + - name: Target Count + value: "{{ .targets | len }}" + - name: Has Tags + value: "{{ if .tags }}Yes{{ else }}No{{ end }}" +``` + +Available functions: +- `upper`, `lower` - String case conversion +- `truncate` - Truncate string with ellipsis +- `len` - Length of arrays/strings +- `join` - Join array elements with delimiter +- `toString` - Convert value to string +- `ternary` - Conditional expression + +### Override Columns via CLI + +Override configured columns using the `--columns` flag: + +```shell +# Display only component and source columns +atmos list vendor --columns component,source + +# Display custom subset +atmos list vendor --columns "component,source,version,atmos_vendor_file" +``` + +## Example Output + +```shell +> atmos list vendor +┌────────────────┬─────────────────────────────────────────┬─────────┬──────────────────────┬─────────────────┐ +│ Component │ Source │ Version │ Target │ File │ +├────────────────┼─────────────────────────────────────────┼─────────┼──────────────────────┼─────────────────┤ +│ vpc │ github.com/cloudposse/terraform-aws-vpc │ 1.5.0 │ components/vpc │ vendor.yaml │ +│ eks │ github.com/cloudposse/terraform-aws-eks │ 2.0.0 │ components/eks │ vendor.yaml │ +│ rds │ github.com/cloudposse/terraform-aws-rds │ 0.45.0 │ 
components/rds │ vendor.yaml │ +└────────────────┴─────────────────────────────────────────┴─────────┴──────────────────────┴─────────────────┘ +``` + +:::tip +- Use `atmos vendor pull` to download vendored components after reviewing the list +- The `--filter` flag supports full YQ syntax for complex queries +- Use `--format json` to pipe vendor information to other tools for analysis +- Vendor configuration files can be split across multiple `vendor.yaml` files in `vendor.d/` directory +::: + +## Related Commands + +- [`atmos vendor pull`](/cli/commands/vendor/pull) - Download vendored components +- [`atmos list components`](/cli/commands/list/components) - List all components (including vendored) diff --git a/website/docs/cli/commands/list/list-workflows.mdx b/website/docs/cli/commands/list/list-workflows.mdx index dd1ce40e7f..ed3b85d0d6 100644 --- a/website/docs/cli/commands/list/list-workflows.mdx +++ b/website/docs/cli/commands/list/list-workflows.mdx @@ -3,11 +3,16 @@ title: "atmos list workflows" id: "list-workflows" sidebar_label: workflows sidebar_class_name: command +description: Use this command to list all workflows defined in your Atmos project --- +import Screengrab from '@site/src/components/Screengrab' +import Intro from '@site/src/components/Intro' -# atmos list workflows + +Use this command to list all workflows defined in your project's workflow manifests. View workflow names, descriptions, and source files to discover automation available for your infrastructure. + -The `atmos list workflows` command displays all Atmos workflows defined in your project. + ## Usage @@ -33,9 +38,13 @@ This command is useful for:
`--file, -f string`
Filter workflows by file (e.g., `atmos list workflows -f workflow1`)
`--format string`
-
Output format: `table`, `json`, `yaml`, `csv`, `tsv` (default: `table`)
+
Output format: `table`, `json`, `yaml`, `csv`, `tsv`. Overrides `workflows.list.format` configuration in atmos.yaml (default: `table`)
`--delimiter string`
Delimiter for csv/tsv output (default: `\t`)
+
`--columns string`
+
Columns to display (comma-separated). Overrides `workflows.list.columns` configuration in atmos.yaml
+
`--sort string`
+
Sort by column:order (e.g., `name:asc,file:desc`)
## Examples @@ -90,42 +99,83 @@ atmos list workflows --format csv --delimiter ',' ::: -## Examples +## Configuration + +You can customize the default output format and columns displayed by `atmos list workflows` in your `atmos.yaml`: + +### Default Format + +```yaml +# atmos.yaml +workflows: + list: + format: table # Default format: table, json, yaml, csv, tsv +``` -### Custom Columns for Workflows +**Precedence**: CLI `--format` flag > Config file > Environment variable `ATMOS_LIST_FORMAT` > Default (`table`) -This configuration customizes the output of `atmos list workflows`: +### Custom Columns ```yaml -# In atmos.yaml +# atmos.yaml workflows: list: + format: table columns: - name: Workflow - value: "{{ .workflow_name }}" - - name: Definition File - value: "{{ .workflow_file }}" + value: "{{ .name }}" + - name: File + value: "{{ .file }}" - name: Description - value: "{{ .workflow_description }}" + value: "{{ .description }}" + - name: Steps + value: "{{ .steps | len }} steps" ``` -Running `atmos list workflows` will display these columns. 
-## Examples +### Available Template Fields + +Column `value` fields support Go template syntax with access to: -### Custom Columns for Workflows +- `.name` - Workflow name +- `.file` - Workflow definition file path +- `.description` - Workflow description +- `.steps` - Array of workflow steps +- `.stack` - Stack name (if workflow is stack-specific) -This configuration customizes the output of `atmos list workflows`: +### Template Functions + +Columns support template functions for data transformation: ```yaml -# In atmos.yaml workflows: list: columns: - - name: Workflow - value: "{{ .workflow_name }}" - - name: Definition File - value: "{{ .workflow_file }}" - - name: Description - value: "{{ .workflow_description }}" + - name: Workflow (Upper) + value: "{{ .name | upper }}" + - name: Short File + value: "{{ .file | truncate 30 }}" + - name: Step Count + value: "{{ .steps | len }}" + - name: Has Description + value: "{{ if .description }}Yes{{ else }}No{{ end }}" +``` + +Available functions: +- `upper`, `lower` - String case conversion +- `truncate` - Truncate string with ellipsis +- `len` - Length of arrays/strings +- `join` - Join array elements with delimiter +- `toString` - Convert value to string +- `ternary` - Conditional expression + +### Override Columns via CLI + +Override configured columns using the `--columns` flag: + +```shell +# Display only name and file columns +atmos list workflows --columns name,file + +# Display custom subset +atmos list workflows --columns "name,file,description" ``` -Running `atmos list workflows` will display these columns. 
diff --git a/website/docs/cli/commands/list/themes.mdx b/website/docs/cli/commands/list/themes.mdx index 56584c84c9..0a7eb1fd00 100644 --- a/website/docs/cli/commands/list/themes.mdx +++ b/website/docs/cli/commands/list/themes.mdx @@ -7,11 +7,11 @@ description: List available terminal themes for markdown rendering --- import Screengrab from '@site/src/components/Screengrab' -import Intro from '@site/src/components/Intro' import Terminal from '@site/src/components/Terminal' +import Intro from '@site/src/components/Intro' -Use this command to discover available terminal themes that can be used to customize the appearance of markdown output in Atmos. +Use this command to discover available terminal themes for customizing markdown output appearance in Atmos. Preview color schemes and syntax highlighting styles to enhance readability of command output.