Ask only the questions NOT already answered by `project-understanding.md`. Use Socratic dialogue — ask **one question at a time**, wait for the answer, then ask the next. Do not batch multiple questions. This single-question cadence ensures the user's answer to each question can inform which follow-up questions are relevant, avoiding wasted effort.
**Dialogue style**: Use open-ended questions that invite the user to describe their situation, not closed menus that force a choice. Avoid rigid formats: no multiple-choice menus, no lettered lists like (a)/(b)/(c). Ask one open-ended question, listen to the answer, then ask a follow-up tailored to what was revealed. Menu-style prompts prevent the user from expressing nuance; invite narrative instead.
### Question Bank (ask only unanswered ones)
For each gap, craft an open-ended question appropriate to the context. The examples below are starting points, not scripts. Ask follow-up questions based on what the user reveals.
#### Group A: Abstraction Surface (enforcement-critical)
**A1. Variants** — Ask the user to describe whether the system will support multiple implementations of the same concept (e.g., multiple LLM providers, output formats, storage backends, payment gateways). Ask them to describe what Day 1 looks like vs. planned growth.
*Why this matters:* ≥2 variants → AP-3 (incomplete coverage) and AP-4 (parallel inheritance) risks. The blueprint must include a variant registry and abstract error hierarchy.
**A2. Shared Mutable State** — Ask the user to describe how components will share state — whether through a mutable object (pipeline state dict, request context, shared cache) or through immutable messages/events. Ask follow-up questions about the points where state crosses component boundaries.
*Why this matters:* Shared mutable state → AP-1 (contract without enforcement) risk. The blueprint must specify the immutability mechanism.
**A3. Configuration Complexity** — Ask the user to describe the configuration landscape: how many environment-specific settings they expect and whether there is an existing config pattern they want to follow.
*Why this matters:* >10 config values → AP-5 (config bypass) risk. The blueprint must centralize all configuration into a typed config system.
#### Group B: Enforcement Preferences
**B1. Enforcement style** — Ask the user where they want enforcement failures to surface: at edit time (real-time linting via hooks), test time (fitness functions in the test suite), or CI time (pre-merge gate). Ask them to describe which layer they trust most.
**B2. Anti-pattern risk tolerance** — Ask the user which anti-patterns concern them most for this project and whether there are project-specific anti-patterns to add. Common ones: AP-1 (contract without enforcement), AP-2 (error hierarchy leakage), AP-3 (incomplete coverage), AP-4 (parallel inheritance), AP-5 (config bypass).
**B3. Existing enforcement gaps** — Ask the user to describe architectural rules the team already knows they want to enforce but hasn't codified yet (e.g., "no direct DB calls from handlers", "all external I/O must be in adapters", "no `Any` types in domain layer"). Ask follow-up questions to understand the history behind each rule.
#### Group C: Blueprint Scope
**C1. Blueprint depth** — Ask the user to describe the scope they want: full system context diagram and directory structure, or just the enforcement layer on top of the existing structure.
**C2. ADR preference** — Ask the user whether they want Architecture Decision Records for choices already made, only for new decisions introduced by this scaffolding session, or both.
---
---
## Phase 2.5: Recommendation Synthesis
Before generating enforcement scaffolding, synthesize everything learned in Phases 0 and 1 into a concrete, project-specific set of enforcement recommendations. This synthesis step ensures the scaffolding reflects actual project patterns — not generic templates.
### Synthesis Process
1. **Gather signals**: Collect all facts from `project-understanding.md` and all answers from the Phase 1 Socratic dialogue.

2. **Synthesize into recommendations**: For each proposed enforcement mechanism, synthesize a recommendation that:
   - States what will be enforced and at which layer (edit-time, test-time, CI-time)
   - **Cites the specific project file or pattern that triggered this recommendation** — e.g., "Because `src/adapters/db.py` exists and directly imports `domain/models.py`, recommend enforcing the adapter boundary as a fitness function" or "Because `config.py` reads 14 environment variables directly, recommend a typed config boundary"
   - Explains why this recommendation fits this specific project (not just why it is generally good)
   - Includes test isolation enforcement: each recommendation must specify how tests remain isolated from the enforcement mechanism itself (e.g., mock injection points, test-only config overrides, fixture boundaries). Test isolation is a first-class concern in every recommendation.

3. **Present recommendations individually**: Present each recommendation one at a time, allowing the user to accept, reject, or discuss it before moving on. For each recommendation, the user may:
   - **Accept** it (move to the next)
   - **Reject** it (remove from the enforcement set)
   - **Discuss** it further (ask follow-up questions, revise)

4. **Revise and confirm**: After the user has reviewed all recommendations, present a final consolidated list of accepted recommendations. Confirm before proceeding to Phase 3.
### Recommendation Template
```
Recommendation N: [Short title]
Trigger: [Specific project file or pattern that triggered this — cite file path or code pattern]
Enforcement: [What will be enforced and at which layer]
Fit: [Why this fits this project specifically]
Test isolation: [How tests remain isolated]
```
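A filled-in instance, reusing the typed-config example cited in the synthesis step (the file name and count are illustrative):

```
Recommendation 1: Typed config boundary
Trigger: `config.py` reads 14 environment variables directly
Enforcement: Test-time fitness function that fails on any environment read outside `config.py`
Fit: Centralizes this project's already-numerous environment reads behind one typed surface
Test isolation: Tests override settings through a config fixture instead of mutating the environment
```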
### ARCH_ENFORCEMENT.md Compatibility
The accepted recommendations from this phase will be materialized into `ARCH_ENFORCEMENT.md` at the repo root. This file is detected by `check-onboarding.sh` as evidence that architect-foundation scaffolding has been completed. Each accepted recommendation becomes a section in `ARCH_ENFORCEMENT.md`.
---
## Phase 3: The Enforcer (Deterministic Guardrails)
Treat "Architecture" as something that can be tested. Generate **Fitness Functions** and **enforcement infrastructure** using tools appropriate for the chosen stack. Architecture enforcement operates at multiple layers — each layer catches violations at a different point in the development cycle.
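As a minimal sketch of what one such fitness function can look like, assuming a Python project where files under a domain directory must never import from an `adapters` package (the directory and package names are illustrative, not prescribed by this skill):

```shell
# Hedged sketch of a test-time fitness function. Returns 0 when no file
# under the given directory imports the adapters package, 1 otherwise.
check_adapter_boundary() {
  # grep succeeds (exit 0) when it finds a violation, so report and fail.
  if grep -rnE '^[[:space:]]*(import|from)[[:space:]]+adapters([. ]|$)' "$1" 2>/dev/null; then
    echo "FITNESS VIOLATION: files under $1 import adapters directly" >&2
    return 1
  fi
}
```

Wiring this into the test suite or CI makes the architectural rule executable rather than aspirational.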
---

The following changes apply to `plugins/dso/skills/onboarding/SKILL.md` (138 additions, 1 deletion):
Accept these values? [Y/n]
Write `commands.acli_version` and `commands.acli_sha256` to `.claude/dso-config.conf` on acceptance.
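For instance, the written entries might look like this (the key names come from the text above; the values are placeholders):

```
commands.acli_version=1.2.3
commands.acli_sha256=<sha256 of the downloaded binary>
```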
### Step 2c: Infrastructure Initialization
After writing `.claude/dso-config.conf`, set up the supporting infrastructure for the host project. These steps ensure the enforcement gates, ticket system, and documentation templates are in place before the first commit.
#### Hook Installation
Install the DSO git pre-commit hooks (`pre-commit-test-gate.sh` and `pre-commit-review-gate.sh`) into the project's hooks directory. Hook installation must account for the detected hook manager:
**Detect hook manager and install accordingly:**
1. **Husky** — if `.husky/` exists, add DSO hook calls to `.husky/pre-commit` (create if absent). **Idempotency**: check whether the hook call already exists before appending, so re-runs do not create duplicates.
2. **pre-commit framework** — if `.pre-commit-config.yaml` exists, add DSO hooks as local hooks in the config.
3. **Bare `.git/hooks/`** — if neither Husky nor the pre-commit framework is detected, install directly into the git hooks directory. Use `git rev-parse --git-common-dir` to find the correct hooks path (supports worktrees and submodules where `.git` may be a file rather than a directory).
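The Husky and bare-hooks paths above can be sketched as shell helpers (the hook-call line and script names are assumptions, not the skill's actual file names):

```shell
# Idempotent Husky append: only add the hook call if it is not already present.
install_husky_hook() {
  hook_call="$1"
  mkdir -p .husky
  touch .husky/pre-commit
  grep -qF "$hook_call" .husky/pre-commit || echo "$hook_call" >> .husky/pre-commit
}

# Bare hooks: resolve the real hooks dir via git, which also works in
# worktrees and submodules where .git may be a file rather than a directory.
install_bare_hook() {
  hooks_dir="$(git rev-parse --git-common-dir)/hooks"
  mkdir -p "$hooks_dir"
  cp "$1" "$hooks_dir/pre-commit"
  chmod +x "$hooks_dir/pre-commit"
}
```

Running `install_husky_hook` twice with the same argument leaves exactly one hook call in `.husky/pre-commit`, which is the idempotency property required above.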
**Push verification:** After creating the orphan branch, push it to the remote and verify push success. If the push fails, warn the user:
```bash
if git push origin tickets 2>&1; then
  echo "Ticket system initialized and pushed successfully."
else
  echo "WARNING: push to origin tickets failed. The ticket system is initialized locally but not synced to remote. Run 'git push origin tickets' when remote access is available."
fi
```
#### Ticket Smoke Test
After initialization, perform a ticket smoke test to verify the system works end-to-end. Create a test ticket and read it back:
```bash
# NOTE: the create invocation is reconstructed here — verify it against the
# actual `dso ticket create` syntax before relying on it.
if TEST_ID=$(.claude/scripts/dso ticket create "Smoke test ticket" 2>/dev/null) && [ -n "$TEST_ID" ]; then
  .claude/scripts/dso ticket show "$TEST_ID" > /dev/null 2>&1 && echo "Ticket smoke test PASSED (id: $TEST_ID)" || echo "WARNING: ticket smoke test failed — show returned non-zero"
  .claude/scripts/dso ticket transition "$TEST_ID" open closed --reason="Fixed: smoke test cleanup" 2>/dev/null
else
  echo "WARNING: ticket smoke test failed — could not create test ticket"
fi
```
#### Generate Test Index
If test directories were detected during Phase 1 auto-detection, run `generate-test-index.sh` to build the initial `.test-index` file mapping source files to test files:
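A minimal invocation sketch (the script location is an assumption; adjust if the DSO scripts are installed elsewhere):

```shell
# Assumed script path — adjust to where the DSO scripts actually live.
INDEX_SCRIPT=".claude/scripts/generate-test-index.sh"
if [ -x "$INDEX_SCRIPT" ]; then
  "$INDEX_SCRIPT" && echo "Wrote .test-index"
else
  echo "generate-test-index.sh not found at $INDEX_SCRIPT — skipping test index generation"
fi
```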
- CI trigger strategy notes from the onboarding conversation (do NOT assume PR-based workflow)
```
Invoke: /dso:generate-claude-md
```
The generated `CLAUDE.md` must include a Quick Reference table of ticket commands so that future Claude sessions can manage work items without re-reading the full DSO documentation.
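A sketch of such a Quick Reference table (the `create` syntax is an assumption; `show` and `transition` appear in the smoke test above):

```markdown
| Command | Purpose |
| --- | --- |
| `dso ticket create <title>` | Create a new ticket (syntax assumed) |
| `dso ticket show <id>` | Display a ticket |
| `dso ticket transition <id> <from> <to> --reason="..."` | Move a ticket between states |
```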
#### Copy KNOWN-ISSUES Template
Copy the DSO `KNOWN-ISSUES` template to `.claude/docs/` in the host project:
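A sketch of the copy step, assuming the template ships inside the plugin tree (the source path is a guess and should be pointed at the real DSO template location):

```shell
# The source path is an assumption — adjust to the plugin's actual layout.
SRC="plugins/dso/templates/KNOWN-ISSUES.md"
mkdir -p .claude/docs
if [ -f "$SRC" ]; then
  cp "$SRC" .claude/docs/KNOWN-ISSUES.md
else
  echo "KNOWN-ISSUES template not found at $SRC — copy it manually" >&2
fi
```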