Commit 68775fa (parent: 23b9b92)
docs(quick-1): Add production workflow examples to documentation

2 files changed: 201 additions, 0 deletions

File: .planning/STATE.md (6 additions, 0 deletions)
@@ -145,6 +145,12 @@ None yet.
- [Phase 4]: Object store access policy rule IAM schema (conditions/effects) not fully mapped — requires FLASHBLADE_API.md deep-dive during planning
- [Phase 4]: Array admin singleton DELETE semantics (reset to defaults vs. error) unconfirmed

### Quick Tasks Completed

| # | Description | Date | Commit | Directory |
|---|-------------|------|--------|-----------|
| 1 | Add production workflow examples to documentation | 2026-03-28 | 8fdc00a | [1-add-production-workflow-examples-to-docu](./quick/1-add-production-workflow-examples-to-docu/) |

## Session Continuity

Last session: 2026-03-28T08:36:45.056Z
Second file: 195 additions, 0 deletions

@@ -0,0 +1,195 @@
---
phase: quick
plan: 1
type: execute
wave: 1
depends_on: []
files_modified:
  - examples/workflows/object-store-setup/main.tf
  - examples/workflows/nfs-file-share/main.tf
  - examples/workflows/multi-protocol-file-system/main.tf
  - examples/workflows/array-admin-baseline/main.tf
  - examples/workflows/secured-s3-bucket/main.tf
  - README.md
autonomous: true
requirements: ["QUICK-01"]

must_haves:
  truths:
    - "Ops engineer can copy-paste any workflow .tf file and adapt it for their environment"
    - "Each workflow shows resource composition with correct cross-resource references"
    - "Comments explain WHY each attribute is configured, not just WHAT it does"
    - "README links to the workflows directory for discoverability"
  artifacts:
    - path: "examples/workflows/object-store-setup/main.tf"
      provides: "Account -> bucket -> access key full S3 workflow"
      contains: "flashblade_object_store_account"
    - path: "examples/workflows/nfs-file-share/main.tf"
      provides: "File system + NFS export policy + rules"
      contains: "flashblade_nfs_export_policy_rule"
    - path: "examples/workflows/multi-protocol-file-system/main.tf"
      provides: "NFS + SMB on same FS with both policies"
      contains: "multi_protocol"
    - path: "examples/workflows/array-admin-baseline/main.tf"
      provides: "DNS + NTP + SMTP day-1 setup"
      contains: "flashblade_array_dns"
    - path: "examples/workflows/secured-s3-bucket/main.tf"
      provides: "Bucket + network access + OAP policy stack"
      contains: "flashblade_object_store_access_policy_rule"
  key_links:
    - from: "examples/workflows/object-store-setup/main.tf"
      to: "bucket references account"
      via: "account attribute referencing account resource name"
      pattern: "flashblade_object_store_account\\..+\\.name"
    - from: "examples/workflows/nfs-file-share/main.tf"
      to: "file system references NFS policy"
      via: "nfs_export_policy attribute"
      pattern: "nfs_export_policy.*=.*flashblade_nfs_export_policy"
---

<objective>
Create 5 production workflow examples showing how FlashBlade Terraform resources compose together in real ops-team scenarios.

Purpose: Existing per-resource examples show isolated usage. Ops engineers need complete, copy-pasteable workflows showing how resources wire together for common tasks (S3 setup, NFS shares, multi-protocol, day-1 admin, secured buckets).

Output: 5 self-contained .tf files in examples/workflows/ plus a README update for discoverability.
</objective>

<execution_context>
@/home/gule/.claude/get-shit-done/workflows/execute-plan.md
@/home/gule/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@examples/resources/flashblade_object_store_account/resource.tf
@examples/resources/flashblade_bucket/resource.tf
@examples/resources/flashblade_object_store_access_key/resource.tf
@examples/resources/flashblade_file_system/resource.tf
@examples/resources/flashblade_nfs_export_policy/resource.tf
@examples/resources/flashblade_nfs_export_policy_rule/resource.tf
@examples/resources/flashblade_smb_share_policy/resource.tf
@examples/resources/flashblade_smb_share_policy_rule/resource.tf
@examples/resources/flashblade_array_dns/resource.tf
@examples/resources/flashblade_array_ntp/resource.tf
@examples/resources/flashblade_array_smtp/resource.tf
@examples/resources/flashblade_network_access_policy/resource.tf
@examples/resources/flashblade_network_access_policy_rule/resource.tf
@examples/resources/flashblade_object_store_access_policy/resource.tf
@examples/resources/flashblade_object_store_access_policy_rule/resource.tf
@docs/resources/file_system.md
@README.md
</context>
<tasks>

<task type="auto">
<name>Task 1: Create all 5 workflow example files</name>
<files>
examples/workflows/object-store-setup/main.tf
examples/workflows/nfs-file-share/main.tf
examples/workflows/multi-protocol-file-system/main.tf
examples/workflows/array-admin-baseline/main.tf
examples/workflows/secured-s3-bucket/main.tf
</files>
<action>
Create the `examples/workflows/` directory structure and write 5 complete .tf files. Each file must:
- Start with a header comment block explaining the workflow scenario and what it provisions
- Include the provider block with variable-driven config (endpoint + api_token from vars)
- Use `variable` blocks for all environment-specific values (endpoint, token, CIDR ranges, email addresses, domain names)
- Use Terraform references between resources (not hardcoded names) to show proper composition
- Include inline comments explaining WHY each attribute value is set (ops context, not schema docs)
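
A shared skeleton for that boilerplate might look like this (a sketch only: the variable names are illustrative, and the provider attribute names `endpoint`/`api_token` come from the bullet above, so confirm them against the provider docs):

```hcl
# Illustrative skeleton shared by every workflow file.
# Variable names here are hypothetical; adapt to your environment.

variable "endpoint" {
  type        = string
  description = "FlashBlade management endpoint (FQDN or IP)"
}

variable "api_token" {
  type        = string
  sensitive   = true # keep the token out of plan output and logs
  description = "Management API token"
}

provider "flashblade" {
  endpoint  = var.endpoint
  api_token = var.api_token
}
```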

**Workflow 1: Object Store Setup** (`object-store-setup/main.tf`)
Complete S3-compatible storage workflow: account -> bucket (with versioning + 100 GiB quota, hard limit) -> access key -> outputs for key_id and secret.
- Account: `var.account_name`, quota_limit 1 TiB, hard_limit_enabled false (soft warn, not block)
- Bucket: references `flashblade_object_store_account.this.name`, versioning "enabled" (compliance/audit trail), quota_limit 100 GiB hard limit, destroy_eradicate_on_delete false (protect production data)
- Access key: references account name, enabled true
- Outputs: access_key_id (plain), secret_access_key (sensitive)
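
Sketched out, the composition could look roughly like this (attribute names mirror the bullets above; the access-key output attributes are assumptions to verify against the provider schema):

```hcl
resource "flashblade_object_store_account" "this" {
  name               = var.account_name
  quota_limit        = 1099511627776 # 1 TiB
  hard_limit_enabled = false         # soft warn, not block
}

resource "flashblade_bucket" "this" {
  name                        = var.bucket_name # hypothetical variable
  account                     = flashblade_object_store_account.this.name # reference, not a hardcoded name
  versioning                  = "enabled"    # compliance/audit trail
  quota_limit                 = 107374182400 # 100 GiB
  hard_limit_enabled          = true         # hard limit: block writes over quota
  destroy_eradicate_on_delete = false        # protect production data
}

resource "flashblade_object_store_access_key" "this" {
  account = flashblade_object_store_account.this.name
  enabled = true
}

output "access_key_id" {
  value = flashblade_object_store_access_key.this.name # assumed attribute
}

output "secret_access_key" {
  value     = flashblade_object_store_access_key.this.secret_access_key # assumed attribute
  sensitive = true
}
```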

**Workflow 2: NFS File Share** (`nfs-file-share/main.tf`)
Team shared storage: file system + NFS export policy + 2 rules (app servers rw, backup servers ro).
- File system: 50 GiB provisioned, NFS enabled with v4_1_enabled, nfs_export_policy referencing the policy resource name, default_quotas with user_quota 5 GiB
- NFS export policy: enabled
- Rule 1: app servers subnet (`var.app_subnet`, default "10.10.0.0/16"), permission "rw", access "root-squash", security ["sys"] -- comment: root-squash prevents app containers running as root from having root on NFS
- Rule 2: backup subnet (`var.backup_subnet`, default "10.20.0.0/16"), permission "ro", access "root-squash", security ["sys"] -- comment: read-only for backup agents, they pull snapshots
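
A rough sketch of that wiring (rule attribute names follow the bullets above; the `policy` linking attribute and the file system's nested NFS/quota blocks are assumptions to check against the provider schema):

```hcl
resource "flashblade_nfs_export_policy" "share" {
  name    = "team-share" # hypothetical name
  enabled = true
}

resource "flashblade_nfs_export_policy_rule" "app_rw" {
  policy     = flashblade_nfs_export_policy.share.name # assumed linking attribute
  client     = var.app_subnet # default "10.10.0.0/16"
  permission = "rw"
  access     = "root-squash" # root in an app container must not be root on NFS
  security   = ["sys"]
}

resource "flashblade_nfs_export_policy_rule" "backup_ro" {
  policy     = flashblade_nfs_export_policy.share.name
  client     = var.backup_subnet # default "10.20.0.0/16"
  permission = "ro" # backup agents only pull snapshots
  access     = "root-squash"
  security   = ["sys"]
}

resource "flashblade_file_system" "share" {
  name              = var.share_name # hypothetical variable
  provisioned       = 53687091200 # 50 GiB
  nfs_export_policy = flashblade_nfs_export_policy.share.name
  # NFS v4.1 enablement and default_quotas (user_quota 5 GiB) go here;
  # the exact nested-block shape depends on the provider schema.
}
```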

**Workflow 3: Multi-Protocol File System** (`multi-protocol-file-system/main.tf`)
Windows + Linux access on same FS: file system with both NFS and SMB enabled, separate policies for each.
- File system: 100 GiB, NFS enabled (v3 + v4.1), SMB enabled (access_based_enumeration_enabled true, smb_encryption_enabled true), multi_protocol block (access_control_style "nfs", safeguard_acls true), nfs_export_policy and smb_share_policy referencing respective policy resources
- NFS export policy + rule: Linux subnet, rw, no-root-squash (trusted admin hosts), security ["sys", "krb5"]
- SMB share policy + rule: principal "Domain Users", read "allow", change "allow", full_control "deny" -- comment: change=allow lets users create/modify files, full_control=deny prevents ACL/ownership changes
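
The file system portion might be sketched as follows (the nested-block shapes are assumptions to verify in docs/resources/file_system.md, and the referenced `linux`/`windows` policy resources are elided here):

```hcl
resource "flashblade_file_system" "mixed" {
  name        = var.fs_name # hypothetical variable
  provisioned = 107374182400 # 100 GiB

  # Nested-block shapes below are assumptions; confirm against the schema.
  nfs {
    v3_enabled   = true
    v4_1_enabled = true
  }
  smb {
    enabled                          = true
    access_based_enumeration_enabled = true # users only see entries they can open
    smb_encryption_enabled           = true
  }
  multi_protocol {
    access_control_style = "nfs" # POSIX mode bits are authoritative on conflict
    safeguard_acls       = true  # stop SMB clients clobbering NFS-managed ACLs
  }

  nfs_export_policy = flashblade_nfs_export_policy.linux.name
  smb_share_policy  = flashblade_smb_share_policy.windows.name
}

resource "flashblade_smb_share_policy_rule" "domain_users" {
  policy       = flashblade_smb_share_policy.windows.name # assumed linking attribute
  principal    = "Domain Users"
  read         = "allow"
  change       = "allow" # users can create/modify files
  full_control = "deny"  # but cannot change ACLs or ownership
}
```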

**Workflow 4: Array Admin Baseline** (`array-admin-baseline/main.tf`)
Day-1 array setup: DNS + NTP + SMTP configuration.
- DNS: domain `var.domain` (default "corp.example.com"), nameservers from `var.dns_servers` (default ["10.0.0.53", "10.0.1.53"]) -- comment: internal DNS for forward+reverse resolution of array hostname
- NTP: ntp_servers from `var.ntp_servers` (default ["0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org"]) -- comment: minimum 2 servers for redundancy, 3 preferred for quorum
- SMTP: relay_host `var.smtp_relay` (default "smtp.corp.example.com"), sender_domain `var.domain`, encryption_mode "tls" -- comment: TLS mandatory for PCI/SOC2 compliance
- alert_watchers: ops-team email at warning level (day-to-day capacity/performance alerts), oncall email at error level (pages for hardware failures, space critical)
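
In sketch form (attribute names follow the bullets above; alert_watchers wiring is omitted because its resource shape is not specified here):

```hcl
resource "flashblade_array_dns" "this" {
  domain      = var.domain      # default "corp.example.com"
  nameservers = var.dns_servers # internal DNS: forward + reverse resolution of the array hostname
}

resource "flashblade_array_ntp" "this" {
  ntp_servers = var.ntp_servers # minimum 2 for redundancy, 3 preferred for quorum
}

resource "flashblade_array_smtp" "this" {
  relay_host      = var.smtp_relay # default "smtp.corp.example.com"
  sender_domain   = var.domain
  encryption_mode = "tls" # TLS mandatory for PCI/SOC2 compliance
}
```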

**Workflow 5: Secured S3 Bucket** (`secured-s3-bucket/main.tf`)
Bucket with full policy stack: account + bucket + network access policy + NAP rule + object store access policy + OAP rule.
- Account + Bucket: similar to workflow 1 but focused on security, no access key (keys managed separately)
- Network access policy: name "default" (singleton), enabled true -- comment: singleton on FlashBlade, Terraform adopts it
- NAP rule: client `var.allowed_cidr` (default "10.0.0.0/8"), effect "allow", interfaces ["s3"] -- comment: restrict S3 protocol to internal network only
- Object store access policy: name "app-readonly", description "Read-only S3 access for application tier"
- OAP rule: name "allow-bucket-read", effect "allow", actions ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"], resources referencing the bucket ARN pattern -- comment: least-privilege, no write/delete actions
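
The policy stack might be sketched like this (account and bucket elided as in Workflow 1; the `policy` linking attributes and the shape of the `resources` entries are assumptions to verify against the provider schema):

```hcl
resource "flashblade_network_access_policy" "default" {
  name    = "default" # singleton on FlashBlade; Terraform adopts the existing policy
  enabled = true
}

resource "flashblade_network_access_policy_rule" "s3_internal" {
  policy     = flashblade_network_access_policy.default.name # assumed linking attribute
  client     = var.allowed_cidr # default "10.0.0.0/8"
  effect     = "allow"
  interfaces = ["s3"] # S3 protocol reachable from the internal network only
}

resource "flashblade_object_store_access_policy" "app_readonly" {
  name        = "app-readonly"
  description = "Read-only S3 access for application tier"
}

resource "flashblade_object_store_access_policy_rule" "allow_bucket_read" {
  policy  = flashblade_object_store_access_policy.app_readonly.name
  name    = "allow-bucket-read"
  effect  = "allow"
  actions = ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"] # least-privilege: no write/delete
  # Reference the bucket by expression, not a hardcoded name:
  resources = [
    flashblade_bucket.this.name,
    "${flashblade_bucket.this.name}/*",
  ]
}
```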

All numeric byte values must use inline comments showing the human-readable size (e.g., `# 50 GiB`).
</action>
<verify>
<automated>[ "$(find examples/workflows -name main.tf -type f | wc -l)" -eq 5 ] && echo "PASS: 5 workflow files created" || echo "FAIL"</automated>
</verify>
<done>5 workflow .tf files exist, each self-contained with provider block, variables, resources with cross-references, and ops-context comments</done>
</task>
144+
145+
<task type="auto">
146+
<name>Task 2: Add workflows section to README and validate HCL</name>
147+
<files>README.md</files>
148+
<action>
149+
1. Edit README.md to add a "Workflow Examples" section after the "Data Sources" table and before "Development". Content:
150+
151+
```
152+
## Workflow Examples
153+
154+
Production-ready configurations showing how resources compose together:
155+
156+
| Workflow | Description | Resources Used |
157+
|----------|-------------|----------------|
158+
| [Object Store Setup](examples/workflows/object-store-setup/) | S3-compatible storage: account, bucket, access key | account, bucket, access_key |
159+
| [NFS File Share](examples/workflows/nfs-file-share/) | Team shared storage with export policy | file_system, nfs_export_policy, nfs_export_policy_rule |
160+
| [Multi-Protocol File System](examples/workflows/multi-protocol-file-system/) | Windows + Linux access on same FS | file_system, nfs_export_policy, smb_share_policy |
161+
| [Array Admin Baseline](examples/workflows/array-admin-baseline/) | Day-1 DNS, NTP, SMTP configuration | array_dns, array_ntp, array_smtp |
162+
| [Secured S3 Bucket](examples/workflows/secured-s3-bucket/) | Bucket with network + access policies | bucket, network_access_policy, object_store_access_policy |
163+
```
164+
165+
2. Run `terraform fmt -check -recursive examples/workflows/` to validate HCL formatting. Fix any formatting issues with `terraform fmt -recursive examples/workflows/`.
166+
167+
3. Run `terraform validate` in each workflow directory (init with `-backend=false` first) to catch syntax errors. If terraform binary is not available, at minimum verify HCL is parseable with `terraform fmt`.
168+
</action>
169+
<verify>
170+
<automated>grep -q "Workflow Examples" README.md && terraform fmt -check -recursive examples/workflows/ && echo "PASS" || echo "FAIL"</automated>
171+
</verify>
172+
<done>README contains workflow examples section with links, all .tf files pass terraform fmt validation</done>
173+
</task>

</tasks>

<verification>
- All 5 workflow files exist at examples/workflows/{name}/main.tf
- Each file contains a provider block, variable blocks, and resource blocks with cross-references
- Each file has inline comments explaining WHY (not just WHAT)
- terraform fmt passes on all files
- README links to all 5 workflows
</verification>

<success_criteria>
- 5 complete, self-contained .tf workflow files in examples/workflows/
- Every resource reference uses Terraform expressions (no hardcoded names between resources)
- Comments provide ops context (security rationale, sizing reasoning, compliance notes)
- README updated with discoverable links to all workflows
- All HCL passes terraform fmt validation
</success_criteria>

<output>
After completion, create `.planning/quick/1-add-production-workflow-examples-to-docu/1-SUMMARY.md`
</output>
