
H·AI·K·U — Rebrand AI-DLC with domain-agnostic studio/stage architecture#141

Open
jwaldrip wants to merge 27 commits into TheBushidoCollective:main from jwaldrip:main

Conversation

@jwaldrip
Contributor

@jwaldrip jwaldrip commented Apr 3, 2026

Summary

Rebrand AI-DLC → H·AI·K·U (Human / AI Knowledge Unification) with a domain-agnostic studio/stage architecture. Supersedes #137, #138, and #139.

What Changed

Rebrand

  • Plugin: ai-dlc → haiku
  • Project directory: .ai-dlc/ → .haiku/
  • Commands: /ai-dlc:* → /haiku:*
  • NPM packages: @ai-dlc/* → @haiku/*
  • MCP server: ai-dlc-review → haiku-review
  • CLI tool: ai-dlc-dashboard → haiku-dashboard
  • /haiku:migrate provides seamless transition from legacy projects

Architecture

  • Studios — Named lifecycle templates (software, ideation, and more)
  • Stages — Lifecycle phases with their own hats, review gates, and I/O chains
  • Hats — File-based behavioral roles scoped to stages (not global)
  • Persistence — Studio-level abstraction (git adapter, filesystem adapter)
  • Review gates — auto / ask / external per stage
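The four-layer hierarchy above can be sketched as plain TypeScript shapes. All names and fields here are illustrative assumptions, not the plugin's actual schema — a minimal model of studios containing stages, stage-scoped hats, per-stage review gates, and I/O chains:

```typescript
// Hypothetical shapes for the studio/stage/hat hierarchy; not the real schema.
type ReviewGate = "auto" | "ask" | "external";

interface Hat { name: string; file: string }            // file-based role, scoped to a stage
interface Stage {
  name: string;
  hats: Hat[];
  gate: ReviewGate;
  inputs: string[];   // I/O chain: artifacts consumed from upstream stages
  outputs: string[];  // artifacts produced for downstream stages
}
interface Studio { name: string; stages: Stage[] }

// Example: a two-stage "software" studio sketch
const software: Studio = {
  name: "software",
  stages: [
    { name: "elaboration", hats: [{ name: "analyst", file: "hats/analyst.md" }],
      gate: "ask", inputs: [], outputs: ["spec.md"] },
    { name: "execution", hats: [{ name: "builder", file: "hats/builder.md" }],
      gate: "auto", inputs: ["spec.md"], outputs: ["code"] },
  ],
};
```

The key design point this captures is that hats hang off stages rather than the studio, so the same role name can behave differently in different lifecycle phases.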

Website

  • Methodology pages (elaboration, execution, operation, reflection)
  • Developer docs (CLI reference, migration guide, getting started)
  • Studios page, stages page, hats page
  • Blog posts introducing H·AI·K·U
  • Deploy config for haikumethod.ai

New Skills

  • /haiku:new — Create intents with studio detection
  • /haiku:run — Stage-based pipeline execution (replaces /haiku:execute)
  • /haiku:migrate — Legacy .ai-dlc/ → .haiku/ migration

Background

The original spec PR (#137) was being reviewed when the branch was accidentally merged (#138), then reverted (#139). This PR cleanly re-applies the rebrand by reverting the revert. Continuing work will be pushed to this fork's main and updated here.

Test plan

  • Plugin installs and /haiku:setup works
  • /haiku:new creates intents under .haiku/
  • /haiku:run advances through stages
  • Website builds and deploys to haikumethod.ai
  • /haiku:migrate handles legacy .ai-dlc/ projects
  • Backwards compat: existing intents continue to work

🤖 Generated with Claude Code

@jwaldrip jwaldrip changed the title Reapply H·AI·K·U rebrand (revert the revert) H·AI·K·U — Rebrand AI-DLC with domain-agnostic studio/stage architecture Apr 3, 2026
jwaldrip and others added 25 commits April 3, 2026 11:02
…ds compat

Plugin:
- Rename ai-dlc-review → haiku-review (MCP server)
- Rename ai-dlc-dashboard → haiku-dashboard (CLI)
- Rename @ai-dlc/* → @haiku/* (NPM packages)
- Rename marketplace.json ai-dlc → haiku
- Fix stale ai-dlc references in setup and execute skills
- Update setup Phase 5b: legacy passes → studios
- Add /haiku:scaffold skill for custom artifact generation

Studios (10 new, 12 total):
- Engineering: software, data-pipeline, migration, incident-response,
  compliance, security-assessment
- Go-to-Market: sales, marketing, customer-success, product-strategy
- General Purpose: ideation, documentation
- 60 stages, 120+ hats across all studios

Website:
- Dynamic studio browser reading from plugin source at build time
- Studio listing (/studios), detail (/studios/[slug]),
  stage detail (/studios/[slug]/[stage])
- Homepage: subtitle → "Human / AI Knowledge Unification",
  studios section now dynamic
- Docs: /haiku:execute → /haiku:run across 8 files
- New customization guide (docs/customization.md)
- H·AI·K·U evolution presentation (haiku-evolution.html)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Review agents:
- 72 review agent files across all 12 studios, scoped per-stage
- Each agent defines a mandate and checklist for adversarial review
- Stages can include agents from other stages via review-agents-include
- Cross-stage includes wired for software, compliance, data-pipeline,
  documentation, and migration studios

Continuous vs discrete mode fix:
- Removed "continuous mode collapse" — stages are never merged
- Continuous mode auto-advances through stages per review gate settings
- Discrete mode always stops between stages regardless of gate
- Renamed internal phases: Plan→Decompose, Build→Execute
- Added stage-back detection during decomposition

Scaffold skill:
- Added review-agent scaffold type
- Stage template now includes review-agents/ directory and
  review-agents-include field

Paper + website:
- Stage definition now declares 5 things (added review agents)
- Updated stage loop terminology throughout
- Fixed continuous mode description
- Git persistence: one PR per intent, no per-unit PRs
- Studio detail pages show review agents per stage
- Stage detail pages show full review agent mandates
- Cross-stage includes rendered with links to source stage

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Await gate (4th review gate type):
- Blocks until an external event occurs (customer response, CI result,
  stakeholder decision) — distinct from external which pushes for review
- Added to orchestrator (return 3), run skill, paper, website docs
- Applied to sales proposal/negotiation and incident-response mitigate
- Presentation and studio detail pages render blue await badge
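The commit above distinguishes await (block until an external event) from external (push for review). A hedged sketch of how an orchestrator might map the four gate types to advance decisions — the function name and return codes other than the mentioned "return 3" are assumptions:

```typescript
// Illustrative gate dispatch; only "return 3" for await is taken from the commit.
type Gate = "auto" | "ask" | "external" | "await";

// 0 = advance, 1 = prompt user, 2 = push for external review, 3 = block
function gateAction(gate: Gate, eventSatisfied = false): number {
  switch (gate) {
    case "auto": return 0;                           // continuous mode auto-advances
    case "ask": return 1;                            // stop and ask the user
    case "external": return 2;                       // hand off for external review
    case "await": return eventSatisfied ? 0 : 3;     // blocks until the event occurs
  }
}
```

The difference is directional: external actively pushes work out for review, while await passively waits for something (a customer reply, a CI result) to come back in.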

Stage-scoped refinement:
- /haiku:refine now accepts stage:<name> to target upstream stage outputs
- Creates a targeted unit in the upstream stage, runs hats, persists output
- Does NOT reset current stage progress — scoped side-trip
- Agent can invoke autonomously during decomposition when gaps detected
- User can trigger via "add a screen for X" style requests

Tagline consistency:
- Standardized on "Human + AI Knowledge Unification" everywhere
- Fixed homepage, paper title, docs, presentation, paper revisions

Presentation overhaul:
- Expanded from 10 to 19 slides with slide counter
- Added: problem statement, four-layer hierarchy, stage loop deep dive,
  review agents, review gates (now 4 types), continuous vs discrete,
  backpressure model, delivery model, software studio table

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
38 operation templates across all 12 studios:
- Software: dependency-audit, secret-rotation, health-check, backup-verification
- Sales: pipeline-review, crm-sync, win-loss-log
- Marketing: performance-dashboard, content-calendar, attribution-audit
- Customer Success: health-score-refresh, renewal-tracker, nps-collection
- Compliance: control-monitoring, evidence-collection, framework-update-watch
- Data Pipeline: data-freshness-check, schema-drift-detection, backfill-procedure
- Documentation: link-checker, freshness-review
- Incident Response: runbook-review, incident-drill
- Migration: data-sync-monitor, rollback-readiness
- Product Strategy: roadmap-refresh, user-signal-review
- Security Assessment: finding-tracker, retest-schedule
- Ideation: content-review, source-freshness

26 reflection dimensions across all 12 studios:
- Each dimension defines what to analyze, what patterns to look for,
  and what to produce — domain-specific analysis lenses
- Software: velocity, quality, architecture, process
- Sales: conversion, objection-patterns
- Other studios: 2 dimensions each, tailored to domain

Updated CLAUDE.md with file locations and concept mapping.
Updated presentation with operation/reflection counts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Created haiku-gaps.html — visual gap analysis for multi-company platform
- Visualizes current architecture (done) vs missing layers (orchestration,
  people, visibility) with specific scenarios and proposed solutions
- Proposed fixes: studio-level triggers, intent templates, cross-studio
  inputs, stage ownership + gate protocols, portfolio API
- Fixed "Human-AI Knowledge Units" → "Human + AI Knowledge Unification"
  in introducing-haiku blog post

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Providers are now bidirectional translation layers, not simple connectors.
Each provider defines inbound (provider → H·AI·K·U), outbound (H·AI·K·U →
provider), and sync behavior. Claude mediates the translation — external
tools don't need to understand H·AI·K·U's schema.

New provider categories:
- CRM (Salesforce, HubSpot): bidirectional deal/opportunity sync,
  stage mapping, activity logging, cross-studio trigger source
- Knowledge (Notion, Confluence, Google Docs): organizational memory
  and cross-studio context sharing — the primary channel for
  inter-studio data flow

Updated existing providers:
- Ticketing: added inbound event discovery, translation tables,
  sync on session start
- Spec: added cross-studio knowledge flow, outbound output persistence
- Design: restructured as inbound/outbound with token extraction
- Comms: added inbound signal detection, await gate resolution via
  channel threads

Architecture principle: H·AI·K·U is a CLI tool, not a platform.
Providers are the durable coordination layer. Scheduled tasks and
remote triggers handle polling when no session is active.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…lutions

Triggers skill:
- Polls configured providers for events since last poll
- Matches events to studio trigger declarations
- Checks await gates for satisfied conditions
- Syncs active intent state from providers
- Designed for /schedule (runs every 30m) and interactive use
- Scheduled mode: auto-creates intents with auto:true triggers,
  logs pending suggestions for interactive review
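The scheduled-mode behavior above — auto:true triggers create intents, everything else is logged as a suggestion — can be sketched as a small matcher. All types and names here are hypothetical; the real skill is prompt-driven, not TypeScript:

```typescript
// Hypothetical trigger matching for scheduled polling runs.
interface ProviderEvent { source: string; kind: string; at: number }
interface TriggerDecl { kind: string; auto: boolean; studio: string }

function matchTriggers(events: ProviderEvent[], triggers: TriggerDecl[]) {
  const autoCreate: TriggerDecl[] = [];   // auto:true → create intent immediately
  const suggestions: TriggerDecl[] = [];  // logged for interactive review
  for (const ev of events) {
    for (const t of triggers) {
      if (t.kind !== ev.kind) continue;
      (t.auto ? autoCreate : suggestions).push(t);
    }
  }
  return { autoCreate, suggestions };
}
```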

Gap analysis reclassification:
- Cross-studio data flow: SOLVED (knowledge provider)
- Event-driven gates: SOLVED (scheduled task polling)
- Role assignment: SOLVED (ticketing provider)
- Approval chains: SOLVED (provider workflows)
- Handoff protocols: SOLVED (knowledge provider)
- Portfolio visibility: SOLVED (ticketing + CRM)
- Remaining gaps: intent templates, gate protocol schema,
  /haiku:portfolio, /haiku:triggers (now built)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…paces

SPA-based workspace browser with no backend:

Local mode (/browse):
- Drag-and-drop a project folder or use File System Access API picker
- Reads .haiku/ directory directly from the local filesystem
- Zero auth, zero network

Remote mode (/browse/git?repo=github.com/org/repo):
- Connects to GitHub or GitLab APIs
- Public repos render immediately
- Private repos prompt for a personal access token (stored in localStorage)
- Supports branch parameter for intent-specific branches

Provider abstraction (lib/browse/):
- BrowseProvider interface — listIntents, getIntent, readFile, listFiles
- LocalProvider — File System Access API
- GitHubProvider — GitHub REST API with token auth
- GitLabProvider — GitLab REST API with token auth
- Shared YAML frontmatter parser and criteria parser
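A sketch of the BrowseProvider interface reconstructed from the four method names listed above; parameter and return types are assumptions, as is the in-memory implementation used to show that only the data source differs between views:

```typescript
// Assumed signatures for the four methods named in the commit.
interface Intent { id: string; studio: string; stage: string }

interface BrowseProvider {
  listIntents(): Promise<Intent[]>;
  getIntent(id: string): Promise<Intent>;
  readFile(path: string): Promise<string>;
  listFiles(dir: string): Promise<string[]>;
}

// Trivial in-memory provider: Local/GitHub/GitLab providers would plug into
// the same component tree through this interface.
class MemoryProvider implements BrowseProvider {
  private files: Record<string, string>;
  constructor(files: Record<string, string>) { this.files = files; }
  async listIntents(): Promise<Intent[]> {
    return [{ id: "demo", studio: "software", stage: "execution" }];
  }
  async getIntent(id: string): Promise<Intent> {
    return { id, studio: "software", stage: "execution" };
  }
  async readFile(path: string): Promise<string> { return this.files[path] ?? ""; }
  async listFiles(dir: string): Promise<string[]> {
    return Object.keys(this.files).filter(p => p.startsWith(dir));
  }
}
```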

Views:
- Portfolio — all intents with studio, stage progress, status
- Intent detail — stage pipeline visualization, expandable stages with units
- Unit detail — criteria checklist with progress bar, dependencies, spec content
- Same component tree for local and remote — only the data source differs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
OAuth flows:
- GitHub: Authorization Code flow via Cloudflare Worker proxy
  (GitHub requires server-side code→token exchange)
- GitLab: Implicit grant flow (token returned in URL fragment, no backend)

Auth library (lib/browse/auth.ts):
- CSRF protection via random state parameter
- Token storage in localStorage (never sent to haikumethod.ai)
- Auth config from env vars (NEXT_PUBLIC_GITHUB_OAUTH_CLIENT_ID)
- Session-based state for cross-redirect continuity

Callback page (/browse/auth/callback):
- Handles both GitHub code exchange and GitLab implicit token
- Auto-redirects back to the original browse URL on success
- Error state with retry and back-to-browse options

Git browse page updated:
- OAuth "Sign in with GitHub/GitLab" as primary auth method
- PAT input collapsed as fallback for when OAuth isn't configured
- Uses auth library for token storage/retrieval

Auth proxy (deploy/auth-proxy/):
- Cloudflare Worker for GitHub code→token exchange
- Single stateless function — no database, no sessions
- CORS restricted to haikumethod.ai origin
- Secrets via wrangler: GITHUB_CLIENT_ID, GITHUB_CLIENT_SECRET
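The stateless exchange described above can be sketched as two small functions: an origin check for the CORS restriction, and the server-side call GitHub requires for its Authorization Code flow. The env shape and function names are assumptions; only the GitHub token endpoint is the documented one:

```typescript
// Illustrative sketch of the proxy's two responsibilities; env names assumed.
interface Env { GITHUB_CLIENT_ID: string; GITHUB_CLIENT_SECRET: string; ALLOWED_ORIGIN: string }

function corsHeaders(origin: string, allowed: string): Record<string, string> {
  // Only the configured origin may call the proxy
  return origin === allowed
    ? { "Access-Control-Allow-Origin": allowed, "Access-Control-Allow-Methods": "POST" }
    : {};
}

async function exchange(code: string, env: Env): Promise<Response> {
  // GitHub's Authorization Code flow requires this server-side code→token
  // exchange because the client secret must never reach the browser.
  return fetch("https://github.com/login/oauth/access_token", {
    method: "POST",
    headers: { Accept: "application/json", "Content-Type": "application/json" },
    body: JSON.stringify({
      client_id: env.GITHUB_CLIENT_ID,
      client_secret: env.GITHUB_CLIENT_SECRET,
      code,
    }),
  });
}
```

Keeping the function stateless (no database, no sessions) is what later makes the swap to a GCP Cloud Function straightforward.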

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…y URL

- HAIKU_GITHUB_OAUTH_CLIENT_ID / HAIKU_GITHUB_OAUTH_CLIENT_SECRET
- HAIKU_GITLAB_OAUTH_CLIENT_ID for future GitLab support
- NEXT_PUBLIC_HAIKU_AUTH_PROXY_URL — configurable for forks
- Worker route: auth.haikumethod.ai

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Deploys deploy/auth-proxy/ to Cloudflare Workers on push to main.
Uses cloudflare/wrangler-action with secrets injection.

Required secrets:
- CLOUDFLARE_API_TOKEN — Cloudflare API token with Workers edit permission
- HAIKU_GITHUB_OAUTH_CLIENT_SECRET — GitHub OAuth client secret

Required variables:
- NEXT_PUBLIC_HAIKU_GITHUB_OAUTH_CLIENT_ID — GitHub OAuth client ID

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Removed hardcoded route from wrangler.toml
- Custom domain configured via Cloudflare dashboard per deployment
- ALLOWED_ORIGIN passed as secret from HAIKU_AUTH_ALLOWED_ORIGIN variable
- Defaults to https://haikumethod.ai if variable not set

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…providers

- Both GitHub and GitLab use Authorization Code flow via the auth proxy
- Callback URL pattern: /auth/{provider}/callback/ (github, gitlab)
- Auth proxy handles POST /{provider}/token for both
- GitLab exchange supports self-hosted instances (host param in body)
- Workflow passes GitLab secrets alongside GitHub
- Per-provider env vars: HAIKU_{GITHUB,GITLAB}_OAUTH_CLIENT_{ID,SECRET}

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CNAME record for auth.haikumethod.ai pointing to the Cloudflare Worker.
Disabled by default — enable via Terraform Cloud variables:
  enable_auth_proxy_dns = true
  auth_proxy_dns_value  = "haiku-auth-proxy.<account>.workers.dev."

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replaces the Cloudflare Worker with a GCP Cloud Function v2 (Cloud Run)
for the OAuth code→token exchange. Same GCP project as DNS — one vendor.

Cloud Function:
- deploy/auth-proxy/src/index.ts — same logic, adapted for functions-framework
- POST /github/token and /gitlab/token endpoints
- Secrets from GCP Secret Manager (created by Terraform)
- CORS with configurable ALLOWED_ORIGIN

Terraform:
- New module: modules/auth-proxy/ — Cloud Function, Secret Manager, IAM
- Removed TF Cloud backend — local apply with GCP service account
- Variables for per-environment OAuth credentials
- DNS CNAME for auth.{domain} → Cloud Run URL

GitHub Actions:
- Workflow builds function source, authenticates with GCP SA, runs terraform apply
- All OAuth credentials from repo variables/secrets
- Domain configurable per fork via HAIKU_DOMAIN variable

Removed:
- deploy/auth-proxy/worker.ts (Cloudflare Worker)
- deploy/auth-proxy/wrangler.toml

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Merge auth proxy and browse features to main
- Backend: gs://waldrip-net-terraform-state/haiku
- Imported existing DNS zone and records into state
- Enabled Secret Manager, Cloud Functions, Cloud Run, Cloud Build APIs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
GCS backend for Terraform state
Cloud Run needs the default compute SA to have secretAccessor role
to read OAuth secrets at runtime.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fix: IAM for Cloud Run secret access
- Removed google_project_iam_member resources — SA lacks projectIamAdmin
- Secret accessor role granted manually via gcloud
- Gap doc: /browse now addresses portfolio visibility
- Gap doc: remaining gaps reduced to intent templates + gate protocol schema

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>