This page is the canonical onboarding path for repositories generated from this GitHub template. If you want a faster onboarding checklist first, use First 30 Minutes.
- Click Use this template in GitHub.
- Select owner/name and create your repository.
- Clone the generated repository locally.
Interactive wizard:
make blueprint-init-repo-interactive
Non-interactive (env-file) mode:
cp blueprint/repo.init.secrets.example.env blueprint/repo.init.secrets.env
${EDITOR:-vi} blueprint/repo.init.env blueprint/repo.init.secrets.env
make blueprint-init-repo
Minimum required variables for env-file mode:
- BLUEPRINT_REPO_NAME
- BLUEPRINT_GITHUB_ORG
- BLUEPRINT_GITHUB_REPO
- BLUEPRINT_DEFAULT_BRANCH
- BLUEPRINT_STACKIT_REGION
- BLUEPRINT_STACKIT_TENANT_SLUG
- BLUEPRINT_STACKIT_PLATFORM_SLUG
- BLUEPRINT_STACKIT_PROJECT_ID
- BLUEPRINT_STACKIT_TFSTATE_BUCKET
- BLUEPRINT_STACKIT_TFSTATE_KEY_PREFIX
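A minimal blueprint/repo.init.env sketch for orientation; every value below is a made-up placeholder, not a real identifier:

```shell
# Hypothetical env-file values; replace each with your real identifiers.
BLUEPRINT_REPO_NAME="example-platform"
BLUEPRINT_GITHUB_ORG="example-org"
BLUEPRINT_GITHUB_REPO="example-platform"
BLUEPRINT_DEFAULT_BRANCH="main"
BLUEPRINT_STACKIT_REGION="eu01"
BLUEPRINT_STACKIT_TENANT_SLUG="example-tenant"
BLUEPRINT_STACKIT_PLATFORM_SLUG="example-platform"
BLUEPRINT_STACKIT_PROJECT_ID="11111111-2222-3333-4444-555555555555"
BLUEPRINT_STACKIT_TFSTATE_BUCKET="example-tfstate"
BLUEPRINT_STACKIT_TFSTATE_KEY_PREFIX="platform/example"
```

Sensitive values belong in blueprint/repo.init.secrets.env, not here.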
make blueprint-init-repo creates or refreshes tracked defaults in blueprint/repo.init.env
and local-sensitive defaults in blueprint/repo.init.secrets.env (scaffolded from blueprint/repo.init.secrets.example.env).
When optional modules are enabled, required non-sensitive module inputs are seeded in blueprint/repo.init.env,
while required sensitive module inputs are scaffolded in the secrets files with non-empty placeholders.
Later, make blueprint-check-placeholders and infra targets auto-load both files when present.
Infra targets run blueprint-check-placeholders first, so missing required inputs fail fast before mutable operations.
After first init, re-apply init-owned files only with BLUEPRINT_INIT_FORCE=true make blueprint-init-repo.
For existing generated repos that need template seed updates, start with:
make blueprint-resync-consumer-seeds
Then apply only safe updates when appropriate:
BLUEPRINT_RESYNC_APPLY_SAFE=true make blueprint-resync-consumer-seeds
For full blueprint-managed upgrades on existing generated repos (non-destructive plan/apply workflow):
BLUEPRINT_UPGRADE_REF=<tag|branch|commit> make blueprint-upgrade-consumer-preflight
BLUEPRINT_UPGRADE_REF=<tag|branch|commit> make blueprint-upgrade-consumer
BLUEPRINT_UPGRADE_REF=<tag|branch|commit> BLUEPRINT_UPGRADE_APPLY=true make blueprint-upgrade-consumer
make blueprint-upgrade-consumer-validate
make blueprint-upgrade-consumer-postcheck
Use the preflight report artifacts/blueprint/upgrade_preflight.json to inspect auto-apply candidates,
manual-merge/conflict paths, required follow-up commands, and missing contract-required consumer-owned Make targets before apply mode.
Inspect artifacts/blueprint/upgrade_plan.json, artifacts/blueprint/upgrade_apply.json, and
artifacts/blueprint/upgrade_summary.md after each run. Inspect
artifacts/blueprint/upgrade/upgrade_reconcile_report.json for blocking buckets.
merge-required entries carry a semantic annotation (kind, description, verification_hints)
auto-generated from the baseline-to-source diff; the Merge-Required Annotations section in
upgrade_summary.md lists one annotated entry per merge path with the kind, description, and
hints to verify after applying.
conflicts_unresolved in the reconcile report counts files that still contain active <<<<<<< / ======= / >>>>>>> merge markers in the working tree. A file leaves this count as soon as its markers are cleared; auto-merged and manually resolved files are not counted.
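The same marker scan can be approximated by hand; this sketch (helper name is mine) lists files under a directory that still carry an active opening conflict marker and would therefore keep conflicts_unresolved non-zero:

```shell
# unresolved_conflicts DIR: list files under DIR that still contain an
# active opening merge marker in the working tree.
unresolved_conflicts() {
  grep -rl '^<<<<<<< ' --exclude-dir=.git "$1" || true
}
```

Run it against the repository root after resolving conflicts to confirm the count should drop to zero.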
When required_manual_actions is non-empty,
resolve the listed dependency paths first, then re-run make blueprint-upgrade-consumer-validate.
When postcheck reports status failure, resolve blocked reasons and re-run make blueprint-upgrade-consumer-postcheck.
When fresh_env_gate.json lists divergences with path/worktree_checksum/working_tree_checksum entries, the artifact content produced in the clean worktree differs from what was produced locally — inspect the listed paths to identify non-deterministic outputs or missing seeded inputs.
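One way to triage such a divergence, sketched as a small helper (the function name is mine): hash the clean-worktree copy and the local copy of a flagged path and compare.

```shell
# same_checksum FILE_A FILE_B: print "deterministic" when both files
# hash identically, "diverged" otherwise.
same_checksum() {
  a="$(sha256sum "$1" | cut -d' ' -f1)"
  b="$(sha256sum "$2" | cut -d' ' -f1)"
  if [ "$a" = "$b" ]; then echo "deterministic"; else echo "diverged"; fi
}
```

Repeatedly regenerating an artifact and comparing hashes this way separates non-deterministic outputs from genuinely missing seeded inputs.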
For missing required consumer-owned Make targets, define the target in make/platform.mk or linked includes under make/platform/*.mk
using the exact target name from the manual-action reason.
When LOCAL_POST_DEPLOY_HOOK_ENABLED=true, preflight also flags a blocking manual action if
infra-post-deploy-consumer is still placeholder in make/platform.mk.
Set BLUEPRINT_UPGRADE_SOURCE when the blueprint source repository differs from your default origin remote.
By default, the upgrade target resolves BLUEPRINT_UPGRADE_SOURCE from remote.upstream.url
when present, and falls back to remote.origin.url.
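That resolution order can be reproduced manually; this one-liner prints the URL the upgrade target would pick by default:

```shell
# Prefer remote.upstream.url when set, fall back to remote.origin.url.
git config --get remote.upstream.url || git config --get remote.origin.url
```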
To install/sync all bundled Codex skills into your local Codex skills directory:
make blueprint-install-codex-skills
Install only the upgrade skill when needed:
make blueprint-install-codex-skill
Install only the consumer operations skill when needed:
make blueprint-install-codex-skill-consumer-ops
Override install location when needed:
BLUEPRINT_CODEX_SKILLS_DIR="${CODEX_HOME:-$HOME/.codex}/skills" make blueprint-install-codex-skill
BLUEPRINT_CODEX_SKILLS_DIR="${CODEX_HOME:-$HOME/.codex}/skills" make blueprint-install-codex-skill-consumer-ops
Install SDD-specialized skills when needed:
make blueprint-install-codex-skill-sdd-step01-intake
make blueprint-install-codex-skill-sdd-step02-resolve-questions
make blueprint-install-codex-skill-sdd-step03-spec-complete
make blueprint-install-codex-skill-sdd-step04-plan-slicer
make blueprint-install-codex-skill-sdd-step05-implement
make blueprint-install-codex-skill-sdd-step06-document-sync
make blueprint-install-codex-skill-sdd-step07-pr-packager
make blueprint-install-codex-skill-sdd-traceability-keeper
Create a work-item folder first:
make spec-scaffold SPEC_SLUG=<work-item-slug>
Then enforce the readiness gate before writing implementation code:
- complete Discover, High-Level Architecture, Specify, and Plan in specs/<YYYY-MM-DD>-<work-item-slug>/
- if requirements are incomplete, record BLOCKED_MISSING_INPUTS and keep SPEC_READY=false
- map applicable SDD-C-### controls from .spec-kit/control-catalog.md in spec.md
- use Managed service preference: stackit-managed-first by default for stackit-* runtime capabilities; if you choose an alternative, record explicit-consumer-exception with rationale and an approved ADR/decision-log entry
- start implementation only after spec.md records SPEC_READY=true
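The readiness gate can be spot-checked with a tiny helper. The helper name is mine, as is the assumption that SPEC_READY=true appears as a literal string in spec.md:

```shell
# spec_ready SPEC_FILE: succeed only when the file records SPEC_READY=true.
spec_ready() {
  grep -q 'SPEC_READY=true' "$1"
}
# Example (path illustrative):
# spec_ready specs/<YYYY-MM-DD>-<work-item-slug>/spec.md && echo "gate open"
```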
Before closing the work item, run Document and Publish phases:
- update affected docs/platform/**
- run make docs-build and make docs-smoke
- run make quality-sdd-check-all
- run make quality-hardening-review
- run make spec-pr-context
make blueprint-bootstrap
make infra-bootstrap
make infra-validate
make blueprint-template-smoke
make blueprint-template-smoke respects exported BLUEPRINT_PROFILE and optional-module flags, so you can dry-run the exact generated-repo scenario you want to validate before provisioning live infrastructure.
APP_CATALOG_SCAFFOLD_ENABLED is disabled by default so minimal generated repos are not forced into a multi-app catalog layout.
Enable it when you want the canonical app contract (apps/catalog/manifest.yaml + apps/catalog/versions.lock) and the test-lane baseline to stay synchronized:
APP_CATALOG_SCAFFOLD_ENABLED=true make apps-bootstrap
APP_CATALOG_SCAFFOLD_ENABLED=true make apps-smoke
Keep these surfaces synchronized after changes:
- apps/catalog/manifest.yaml (topology + runtime/framework pin contract)
- apps/catalog/versions.lock (script-friendly pin mirror)
- app test lanes in make/platform.mk (backend-*, touchpoints-*, and aggregate test-*-all targets)
- onboarding/target baseline in App Onboarding Contract
APP_RUNTIME_GITOPS_ENABLED defaults to true and keeps the baseline app runtime workload path active under:
- apps/descriptor.yaml — canonical consumer-owned app/component topology (see App Onboarding Contract)
- infra/gitops/platform/base/apps/kustomization.yaml
- infra/gitops/platform/base/apps/backend-api-*.yaml
- infra/gitops/platform/base/apps/touchpoints-web-*.yaml
apps/descriptor.yaml is seeded by make blueprint-init-repo with the two baseline apps
(backend-api, touchpoints-web). Edit it to add components, change owner team, or set
explicit manifest references. infra-validate parses the descriptor and verifies every
component manifest exists and is listed in the apps kustomization.yaml.
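A hand-rolled version of that invariant for manual spot checks. The helper name is mine, and it assumes component manifests are referenced by file name in kustomization.yaml:

```shell
# listed_in_kustomization KUSTOMIZATION MANIFEST: succeed when the
# manifest's file name appears in the kustomization file.
listed_in_kustomization() {
  grep -q "$(basename "$2")" "$1"
}
# Example (file name illustrative):
# listed_in_kustomization infra/gitops/platform/base/apps/kustomization.yaml \
#   infra/gitops/platform/base/apps/backend-api-deployment.yaml
```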
Validate scaffold and runtime-path wiring:
APP_RUNTIME_GITOPS_ENABLED=true make infra-bootstrap
APP_RUNTIME_GITOPS_ENABLED=true make infra-validate
In execute mode (DRY_RUN=false), runtime smoke guardrails also assert live workload presence:
- APP_RUNTIME_MIN_WORKLOADS controls the minimum expected Deployment/StatefulSet count in namespace apps (default 1).
- make apps-smoke performs the live check directly.
- The infra-smoke wrapper records the same assertion and emits explicit empty-runtime diagnostics in artifacts/infra/smoke_diagnostics.json.
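The threshold check itself can be sketched as a helper plus the kubectl count it would consume. The helper name is mine, and the live wiring is commented out because it needs cluster access:

```shell
# check_min_workloads COUNT MIN: succeed when COUNT meets the threshold.
check_min_workloads() {
  [ "$1" -ge "$2" ]
}
# Hypothetical live wiring (requires kubectl access to the cluster):
# count="$(kubectl get deployments,statefulsets -n apps --no-headers | wc -l)"
# check_min_workloads "$count" "${APP_RUNTIME_MIN_WORKLOADS:-1}" || echo "empty runtime"
```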
When app catalog scaffold is also enabled, apps/catalog/manifest.yaml is regenerated
from apps/descriptor.yaml on every make apps-bootstrap and includes:
- deliveryTopology for descriptor-derived workload/service mapping
- runtimeDeliveryContract with canonical GitOps manifest paths and default image contract values
apps/catalog/manifest.yaml is a deprecated generated compatibility artifact for two blueprint minor release cycles. Do not edit it by hand — edit apps/descriptor.yaml instead. Removal is tracked in AGENTS.backlog.md under after: consumer-app-descriptor-adoption.
To replace scaffold defaults with real runtime images and wiring:
- Publish images (make apps-publish-ghcr).
- Update the per-component --component-image values passed by apps-bootstrap (or set the corresponding APP_RUNTIME_BACKEND_IMAGE / APP_RUNTIME_TOUCHPOINTS_IMAGE environment variables) and re-run make apps-bootstrap to regenerate apps/catalog/manifest.yaml. Mirror those image refs in infra/gitops/platform/base/apps/*deployment.yaml.
- Add app env/secret references in deployment manifests (env, envFrom, secret/configMap refs) using your runtime credential contract outputs.
- Reconcile runtime (make infra-deploy or Argo sync of platform-<env>-core).
make infra-context
make infra-provision-deploy
make auth-reconcile-runtime-identity
make infra-status-json
make infra-provision-deploy already runs the canonical smoke stage and writes
artifacts/infra/smoke_result.json, artifacts/infra/smoke_diagnostics.json, and artifacts/infra/workload_health.json.
For local profiles, it also supports an optional post-deploy hook contract:
- set LOCAL_POST_DEPLOY_HOOK_ENABLED=true to invoke a consumer-owned hook command after successful smoke.
- default command is LOCAL_POST_DEPLOY_HOOK_CMD='make -C "$ROOT_DIR" infra-post-deploy-consumer'.
- set LOCAL_POST_DEPLOY_HOOK_REQUIRED=true for strict fail-fast behavior; keep false for best-effort warn-and-continue behavior.
- hook outcomes are persisted in artifacts/infra/local_post_deploy_hook.env and emitted as local_post_deploy_hook_duration_seconds metrics.
make infra-status-json captures the latest consolidated snapshot at artifacts/infra/infra_status_snapshot.json. For local live execution, the blueprint prefers the docker-desktop Kubernetes context when it exists. Set LOCAL_KUBE_CONTEXT before running infra-provision-deploy if you want to override that default. For consumer-maintained scripts that need direct cluster/Helm access, use shared wrappers instead of raw kubectl/helm calls:
source "$ROOT_DIR/scripts/lib/shell/bootstrap.sh"
source "$ROOT_DIR/scripts/lib/infra/tooling.sh"
source "$ROOT_DIR/scripts/lib/infra/port_forward.sh"
run_helm_with_active_access list --all-namespaces
start_port_forward "example" "apps" "svc/backend-api" "18080" "8080"
wait_for_local_port "example" "18080" "20"
stop_port_forward "example"
Use cleanup_port_forwards in trap handlers for long-running scripts.
For deterministic operator workflows, prefer make wrappers:
PF_NAME=backend-api PF_NAMESPACE=apps PF_RESOURCE=svc/backend-api PF_LOCAL_PORT=18080 PF_REMOTE_PORT=8080 make infra-port-forward-start
make infra-port-forward-stop PF_NAME=backend-api
make infra-port-forward-cleanup
Use make auth-reconcile-runtime-identity whenever you need an explicit runtime identity reconciliation pass
(ESO source-to-target checks + Argo repo access + Keycloak/module contract coverage).
For local profiles, Keycloak Argo sync is manual by default; after a successful reconcile run,
sync platform-keycloak-local explicitly from ArgoCD UI/CLI when you want to activate browser login.
See Runtime Credentials (ESO) for local seeding and managed-store wiring.
Before publishing hosts or API routes, review Endpoint Exposure Model so public UI, protected UI, direct APIs, and internal SSR/BFF flows stay separated intentionally. If you plan to expose bearer-token APIs on the shared edge, review Protected API Routes before attaching JWT policy resources. For async choreography and tenant-aware service boundaries, also review:
For managed STACKIT execution (BLUEPRINT_PROFILE=stackit-dev|stackit-stage|stackit-prod), export:
- STACKIT_PROJECT_ID
- STACKIT_REGION (for example eu01)
- STACKIT_SERVICE_ACCOUNT_KEY
- STACKIT_TFSTATE_ACCESS_KEY_ID
- STACKIT_TFSTATE_SECRET_ACCESS_KEY
These values should align with the repository identity values:
- BLUEPRINT_STACKIT_REGION
- BLUEPRINT_STACKIT_PROJECT_ID
- BLUEPRINT_STACKIT_TFSTATE_BUCKET
- BLUEPRINT_STACKIT_TFSTATE_KEY_PREFIX
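A hypothetical export block for a stackit-dev session; every value below is a placeholder, not a real credential:

```shell
# Placeholder values only; source real secrets from your secret manager.
export STACKIT_PROJECT_ID="11111111-2222-3333-4444-555555555555"
export STACKIT_REGION="eu01"
export STACKIT_SERVICE_ACCOUNT_KEY='{"id":"placeholder"}'
export STACKIT_TFSTATE_ACCESS_KEY_ID="placeholder-access-key-id"
export STACKIT_TFSTATE_SECRET_ACCESS_KEY="placeholder-secret"
```

Avoid committing real values; they belong in your local environment or secret store, never in tracked files.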
Before first live apply (DRY_RUN=false), pre-create the Object Storage bucket referenced by
BLUEPRINT_STACKIT_TFSTATE_BUCKET and provision an access key/secret with read/write access to it.
The blueprint does not auto-create backend bucket credentials.
Then run:
export BLUEPRINT_PROFILE=stackit-dev
make infra-stackit-bootstrap-preflight
make infra-stackit-bootstrap-apply
make infra-stackit-foundation-preflight
make infra-stackit-foundation-apply
make infra-stackit-foundation-seed-runtime-secret
make infra-stackit-foundation-fetch-kubeconfig
make infra-stackit-runtime-prerequisites
make infra-stackit-runtime-deploy
make auth-reconcile-runtime-identity
infra-deploy / infra-stackit-runtime-deploy already call
infra-stackit-foundation-seed-runtime-secret automatically; running it explicitly
is useful for debugging foundation output-to-runtime contract wiring.
Cleanup:
- Local cluster resources only: make infra-local-destroy-all
- Managed STACKIT layers: make infra-stackit-destroy-all