AI-powered assistant for communities — React + Tauri v2 desktop app with a Rust core (JSON-RPC / CLI) and sandboxed QuickJS skills.
This file orients contributors and coding agents. Authoritative narrative architecture: gitbooks/developing/architecture.md. Frontend layout: gitbooks/developing/frontend.md. Tauri shell: gitbooks/developing/tauri-shell.md.
| Path | Role |
|---|---|
| `app/` | Yarn workspace `openhuman-app`: Vite + React (`app/src/`), Tauri desktop host (`app/src-tauri/`), Vitest tests |
| Repo root `src/` | Rust library `openhuman_core` and the `openhuman-core` CLI binary entrypoint (`src/main.rs`) — `core_server`, `openhuman::*` domains, skills runtime (QuickJS / rquickjs), MCP routing in the core process |
| Skills registry | `tinyhumansai/openhuman-skills` on GitHub — canonical skill packages and TS build; not vendored in this tree (see blurb below) |
| `Cargo.toml` (root) | Core crate; `cargo build --bin openhuman-core` produces the sidecar the UI stages via the app's `core:stage` |
| `docs/` | Architecture and deep-internal references |
| `gitbooks/developing/` | Public contributor docs — frontend, Tauri shell, testing, release, skills |
Commands in documentation assume the repo root unless noted: `pnpm dev` runs the app workspace.
Skills registry: Skill sources and the bundler live in `github.com/tinyhumansai/openhuman-skills`. Clone that repository to author or change skills (`pnpm install`, `pnpm build`). The desktop app's skills catalog defaults to that GitHub slug; override with `VITE_SKILLS_GITHUB_REPO` (see `app/src/utils/config.ts`).
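As a concrete (hypothetical) sketch of that override, assuming a fork slug of `yourname/openhuman-skills` — the variable name comes from `app/src/utils/config.ts`, the value is a placeholder:

```bash
# Point the skills catalog at a fork for local development (placeholder slug).
mkdir -p app
echo 'VITE_SKILLS_GITHUB_REPO=yourname/openhuman-skills' >> app/.env.local
grep 'VITE_SKILLS_GITHUB_REPO' app/.env.local
```

Note that `.env` changes are generally only picked up when the dev server starts, so restart `pnpm dev` after editing the file.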
- Shipped product: desktop — Windows, macOS, Linux (see `gitbooks/developing/architecture.md` "Platform reach").
- Tauri host (`app/src-tauri`): desktop-only (`compile_error!` for non-desktop targets). Do not add Android/iOS branches inside `app/src-tauri`.
- Core binary (`openhuman-core`): spawned/staged as a sidecar; the Web UI talks to it over HTTP (`core_rpc_relay` + `core_rpc` client), not by re-implementing domain logic in the shell.
Where logic lives:

- Rust (`openhuman/`, repo root `src/`): Business logic and execution — domains, skills runtime, RPC, persistence, and CLI behavior. This is the authoritative place for rules and side effects.
- Tauri + React (`app/`): Interaction and UX — screens, navigation, input, accessibility, windowing, and bridging to the core. The shell presents and orchestrates; it does not duplicate core business rules.
```bash
# Frontend + Tauri dev (workspace delegates to app/)
pnpm dev

# Desktop with Tauri (loads env via scripts/load-dotenv.sh)
pnpm tauri dev

# Production UI build (app workspace)
pnpm build

# Typecheck / lint / format (app workspace)
pnpm typecheck
pnpm lint
pnpm format
pnpm format:check

# Stage openhuman core binary next to Tauri resources (required for core RPC)
cd app && pnpm core:stage

# Skills — develop in the GitHub registry repo, then build (see tinyhumansai/openhuman-skills).
# If you keep a local clone path wired in app scripts, you can also run:
pnpm workspace openhuman-app skills:build
pnpm workspace openhuman-app skills:watch

# Rust — core library + CLI (repo root)
cargo check --manifest-path Cargo.toml
cargo build --manifest-path Cargo.toml --bin openhuman-core

# Rust — Tauri shell only
cargo check --manifest-path app/src-tauri/Cargo.toml
```

Tests: Vitest in `app/` (`pnpm test`, `pnpm test:coverage`). Rust tests via `cargo test` at repo root as wired in `app/package.json`.
Quality: ESLint + Prettier + Husky in the app workspace.
Before opening AI-authored PRs from Codex web sessions or Linear-launched implementation agents, follow docs/agent-workflows/codex-pr-checklist.md.
This checklist is required for remote agents because OpenHuman has several merge gates that are easy to miss in partial environments: Prettier, Rust formatting, TypeScript typecheck, focused Vitest coverage, controller dispatch parity, and Tauri vendored dependency availability. If a command cannot run in the remote environment, the PR body must report the exact blocked command and error instead of claiming validation passed.
Use these wrappers instead of invoking Vitest / WDIO / cargo directly when iterating — they keep stdout summary-sized and tee full output to target/debug-logs/<kind>-<suffix>-<timestamp>.log. Add --verbose to also stream raw output. See scripts/debug/README.md.
```bash
# Vitest
pnpm debug unit                                # full suite
pnpm debug unit src/components/Foo.test.tsx    # one file (positional pattern)
pnpm debug unit -t "renders empty state"       # filter by test name
pnpm debug unit Foo -t "renders empty" --verbose

# WDIO E2E (one spec at a time)
pnpm debug e2e test/e2e/specs/smoke.spec.ts
pnpm debug e2e test/e2e/specs/cron-jobs-flow.spec.ts cron-jobs --verbose

# cargo tests (delegates to scripts/test-rust-with-mock.sh)
pnpm debug rust
pnpm debug rust json_rpc_e2e

# Inspect saved logs
pnpm debug logs              # list 50 most recent
pnpm debug logs last         # print most recent (last 400 lines)
pnpm debug logs unit         # most recent matching prefix "unit"
pnpm debug logs last --tail 100
```

Files: `scripts/debug/{cli,unit,e2e,rust,logs,lib}.sh`. Entry point: `pnpm debug` (`scripts/debug/cli.sh`).
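Because each wrapper run lands in `target/debug-logs/<kind>-<suffix>-<timestamp>.log`, plain `ls -t` is enough to locate the newest log outside the wrappers too. A sketch with fabricated file names:

```bash
# Fabricated log files following the <kind>-<suffix>-<timestamp>.log pattern.
mkdir -p target/debug-logs
touch -t 202401010900 target/debug-logs/unit-Foo-20240101T090000.log
touch -t 202401020900 target/debug-logs/e2e-smoke-20240102T090000.log
# Newest file first; `pnpm debug logs` provides richer filtering on top of this.
ls -t target/debug-logs | head -1
```

This prints `e2e-smoke-20240102T090000.log`, the most recently modified log.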
PRs must meet ≥ 80% coverage on changed lines. Enforced by `.github/workflows/coverage.yml` via diff-cover over merged Vitest + cargo-llvm-cov (core + Tauri shell) lcov outputs. Below the threshold the PR will not merge. Run `pnpm test:coverage` and `pnpm test:rust` locally; add tests for new/changed lines (happy path + at least one failure / edge case).
Environment variables are documented in two `.env.example` files:

- `.env.example` (repo root) — Rust core, Tauri shell, backend URL, logging, proxy, storage, web search, local AI binary overrides. Loaded via `source scripts/load-dotenv.sh`.
- `app/.env.example` — Frontend `VITE_*` vars (core RPC URL, backend URL, Sentry DSN, skills repo, dev helpers). Copy to `app/.env.local` for local overrides.
Frontend config is centralized in `app/src/utils/config.ts`. All `VITE_*` env vars should be read there and re-exported — do not read `import.meta.env` directly in other files.
Rust config uses a TOML-based `Config` struct (`src/openhuman/config/schema/types.rs`) with env var overrides applied in `src/openhuman/config/schema/load.rs`. Env vars override config file values at runtime (e.g. `OPENHUMAN_API_URL` overrides `config.api_url`).
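That precedence can be sketched in shell terms (the file value and URLs below are placeholders; `OPENHUMAN_API_URL` is the documented override variable):

```bash
# Placeholder standing in for the TOML field config.api_url:
CONFIG_API_URL="https://example.invalid/from-config"
# Env var wins when set, file value applies otherwise, mirroring the load.rs behavior.
EFFECTIVE_API_URL="${OPENHUMAN_API_URL:-$CONFIG_API_URL}"
echo "$EFFECTIVE_API_URL"
```

With `OPENHUMAN_API_URL` unset this echoes the config value; with it exported, the env value.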
- Where tests live: co-locate as `*.test.ts` / `*.test.tsx` under `app/src/**`.
- Runner/config: Vitest with `app/test/vitest.config.ts` and shared setup in `app/src/test/setup.ts`.
- Run: `pnpm test:unit`, `pnpm test:coverage`
- Authoring rules:
  - Prefer testing behavior over implementation details.
  - Use existing helpers from `app/src/test/` (`test-utils.tsx`, shared mock backend) before adding new harness code.
  - Keep tests deterministic: avoid real network calls, time-sensitive flakes, or hidden global state.
- Core implementation: `scripts/mock-api-core.mjs`
- Standalone server entrypoint: `scripts/mock-api-server.mjs`
- E2E wrapper: `app/test/e2e/mock-server.ts`
- Vitest unit setup: `app/src/test/setup.ts` starts the shared mock server by default on `http://127.0.0.1:5005`.

Key admin endpoints: `GET /__admin/health`, `POST /__admin/reset`, `POST /__admin/behavior`, `GET /__admin/requests`

Run manually:

```bash
pnpm mock:api
curl -s http://127.0.0.1:18473/__admin/health
```

Full guide: `gitbooks/developing/e2e-testing.md`.
Two automation backends:

- Linux (CI default): `tauri-driver` (WebDriver, port 4444) — drives the debug binary directly
- macOS (local dev): Appium Mac2 (XCUITest, port 4723) — drives the `.app` bundle
- Where specs live: `app/test/e2e/specs/*.spec.ts`
- Shared harness:
  - Platform detection: `app/test/e2e/helpers/platform.ts`
  - Element helpers: `app/test/e2e/helpers/element-helpers.ts`
  - Deep link helpers: `app/test/e2e/helpers/deep-link-helpers.ts`
  - App lifecycle: `app/test/e2e/helpers/app-helpers.ts`
  - Mock backend: `app/test/e2e/mock-server.ts`
  - WDIO config: `app/test/wdio.conf.ts` (auto-detects platform)
- Build + run:

```bash
# Build app + stage core sidecar (detects macOS vs Linux automatically)
pnpm test:e2e:build

# Run one spec
bash app/scripts/e2e-run-spec.sh test/e2e/specs/smoke.spec.ts smoke

# Run all flow specs
pnpm test:e2e:all:flows

# Docker on macOS (run Linux E2E locally)
docker compose -f e2e/docker-compose.yml run --rm e2e
```

- Authoring rules:
  - Ensure each spec is runnable in isolation.
  - Use helpers from `element-helpers.ts` — never use raw `XCUIElementType*` selectors in specs.
  - Use `clickNativeButton()`, `hasAppChrome()`, `waitForWebView()`, `clickToggle()` for cross-platform element interaction.
  - Assert both UI outcomes and backend/mock effects when relevant.
  - Add failure diagnostics (request logs, `dumpAccessibilityTree()`) for faster debugging by agents.
By default, `app/scripts/e2e-run-spec.sh` creates and cleans a temp `OPENHUMAN_WORKSPACE` automatically when the variable is not provided. If you need a fixed workspace for debugging, provide one explicitly:

```bash
export OPENHUMAN_WORKSPACE="$(mktemp -d)"
pnpm test:e2e:build
bash app/scripts/e2e-run-spec.sh test/e2e/specs/smoke.spec.ts smoke
rm -rf "$OPENHUMAN_WORKSPACE"
```

`OPENHUMAN_WORKSPACE` redirects core config + workspace storage away from `~/.openhuman`.

- Default reset strategy:
  - Rebuild/stage sidecar once per E2E run (`pnpm test:e2e:build`).
  - Isolate state per test case with a fresh temp workspace (default behavior in `e2e-run-spec.sh`).
Use the shared mock backend runner so Rust unit/integration tests get deterministic API behavior:

```bash
pnpm test:rust
# or targeted
bash scripts/test-rust-with-mock.sh --test json_rpc_e2e
```

Example per-test-case pattern inside a harness script:

```bash
run_case() {
  export OPENHUMAN_WORKSPACE="$(mktemp -d)"
  bash app/scripts/e2e-run-spec.sh "$1" "$2"
  rm -rf "$OPENHUMAN_WORKSPACE"
}
```

- Add/update unit tests for logic changes before stacking additional features.
- Add/update E2E coverage for user-visible flows and cross-process integration behavior.
- Keep new tests independent, deterministic, and debuggable from logs alone.
- When touching core/sidecar behavior, validate both:
  - `pnpm test:unit`
  - targeted E2E spec(s) via `app/scripts/e2e-run-spec.sh`
Order matters for auth and realtime:
Redux Provider → PersistGate → UserProvider → SocketProvider → AIProvider → SkillProvider → HashRouter → AppRoutes.
There is no TelegramProvider in the current tree; Telegram may appear in UI copy or legacy settings, but MTProto is not an active provider here.
Redux Toolkit slices include auth, user, socket, ai, skills, team, and related modules. Prefer Redux (and persist where configured) over ad hoc localStorage for app state; see project rules for exceptions.
Singleton-style modules include apiClient, socketService, coreRpcClient (HTTP bridge to the core process), and domain api/* clients. There is no mtprotoService in this tree.
Transport, validation, and types for JSON-RPC-style messaging over Socket.io — not a large Telegram tool pack. Tooling for agents is driven by the skills system and backend; see agentToolRegistry.ts and core RPC.
Hash routes include /, /onboarding, /mnemonic, /home, /intelligence, /skills, /conversations, /invites, /agents, /settings/*, plus DefaultRedirect. No dedicated /login route in AppRoutes (auth flows use the welcome/onboarding paths).
Bundled prompts live under src/openhuman/agent/prompts/ at the repository root (also bundled via app/src-tauri/tauri.conf.json resources). Loaders under app/src/lib/ai/ use ?raw imports, optional remote fetch, and in Tauri ai_get_config / ai_refresh_config for packaged content.
Thin desktop host: window management, daemon health bridging, core process lifecycle (core_process, CoreProcessHandle), and JSON-RPC relay to the openhuman-core sidecar (core_rpc_relay, core_rpc).
Registered IPC commands (see gitbooks/developing/tauri-shell.md) include greet, write_ai_config_file, ai_get_config, ai_refresh_config, core_rpc_relay, window commands, and OpenHuman service / daemon host helpers (openhuman_*).
Deep link plugin is registered where supported; behavior is platform-specific (see platform notes below).
- `openhuman/` — Domain logic (skills, memory, channels, config, …). RPC controllers live in `rpc.rs` files per domain; use the `RpcOutcome<T>` pattern per `AGENTS.md` / internal rules.
- `src/openhuman/` module layout: New functionality must live in a dedicated subdirectory (its own folder/module, e.g. `openhuman/my_domain/mod.rs` plus related files, or a new subfolder under an existing domain). Do not add new standalone `*.rs` files directly at `src/openhuman/` root; place new code in a module directory and declare it from `mod.rs` (or merge into an existing domain folder).
- Controller schema contract: Shared controller metadata types live in `src/core/mod.rs` (`ControllerSchema`, `FieldSchema`, `TypeSchema`) and are consumed by adapters (RPC/CLI) in different ways.
- Domain schema files: For each domain, define controller schema metadata in a dedicated module inside the domain folder (example: `src/openhuman/cron/schemas.rs`) and export from the domain `mod.rs`.
- Controller-only exposure rule: Expose domain functionality to CLI and JSON-RPC through the controller registry (`schemas.rs` + registered handlers). Do not add domain-specific branches or one-off transport logic in `src/core/cli.rs` or `src/core/jsonrpc.rs` just to expose a feature.
- Light `mod.rs` rule: Keep domain `mod.rs` files light and export-focused. Put operational code in sibling files (example: `ops.rs`, `store.rs`, `schedule.rs`, `types.rs`), then re-export the public API from `mod.rs`.
- `core_server/` — Transport only: Axum/HTTP, JSON-RPC envelope, CLI parsing, dispatch (`core_server::dispatch`) — no heavy business logic here.
- Layering: Implementation in `openhuman::<domain>/`, controllers in `openhuman::<domain>/rpc.rs`, routes in `core_server/`.
Skills runtime uses QuickJS (rquickjs) in src/openhuman/skills/ (e.g. qjs_skill_instance.rs, qjs_engine.rs), not V8/deno_core in this repository.
- `src/openhuman/<domain>/mod.rs`: keep export-focused, add `mod schemas;` and re-export:
  - `all_controller_schemas` as `all_<domain>_controller_schemas`
  - `all_registered_controllers` as `all_<domain>_registered_controllers`
- `src/openhuman/<domain>/schemas.rs` must define:
  - `schemas(function: &str) -> ControllerSchema`
  - `all_controller_schemas() -> Vec<ControllerSchema>`
  - `all_registered_controllers() -> Vec<RegisteredController>`
  - domain handler fns `fn handle_*(_: Map<String, Value>) -> ControllerFuture`
- Handlers should delegate to existing domain `rpc.rs` functions during migration.
- Wire domain exports into `src/core/all.rs` for both declared schemas and registered handlers.
- Keep adapters generic: do not add domain-specific logic to `src/core/cli.rs` or `src/core/jsonrpc.rs`.
- Remove migrated method branches from `src/rpc/dispatch.rs` once registry coverage is in place.
A typed pub/sub event bus for decoupled cross-module communication plus a native, in-process typed request/response surface. Both are singletons — one instance each for the whole application. Do not construct EventBus or NativeRegistry directly; use the module-level functions.
When to use which surface:
- Broadcast events (`publish_global` / `subscribe_global`) — fire-and-forget notification. One publisher, many subscribers, no return value. Use when a module needs to announce something happened and other modules may react independently.
- Native request/response (`register_native_global` / `request_native_global`) — one-to-one typed Rust dispatch keyed by a method string. Zero serialization: trait objects (`Arc<dyn Provider>`), streaming channels (`mpsc::Sender<T>`), oneshot senders, and anything else `Send + 'static` all pass through unchanged. Use when a module needs a typed return value from another module in-process. This is internal-only — anything that needs to be callable over JSON-RPC should register against `src/core/all.rs` instead.
Core types (all in `src/core/event_bus/`):

| Type | File | Purpose |
|---|---|---|
| `DomainEvent` | `events.rs` | `#[non_exhaustive]` enum — all cross-module events live here, grouped by domain |
| `EventBus` | `bus.rs` | Singleton backed by `tokio::sync::broadcast`. Construction is `pub(crate)` — tests only |
| `NativeRegistry` / `NativeRequestError` | `native_request.rs` | In-process typed request/response registry keyed by method name. Rust types only — passes trait objects, `mpsc::Sender`, and `oneshot::Sender` through without serialization |
| `EventHandler` | `subscriber.rs` | Async trait with optional `domains()` filter for selective subscription |
| `SubscriptionHandle` | `subscriber.rs` | RAII handle — subscriber task is cancelled on drop |
| `TracingSubscriber` | `tracing.rs` | Built-in debug logger for all events (registered at startup) |
Singleton API (all modules use these — never hold or pass `EventBus` / `NativeRegistry` instances):

| Function | Purpose |
|---|---|
| `event_bus::init_global(capacity)` | Initialize both singletons (broadcast bus + native registry) at startup (once) |
| `event_bus::publish_global(event)` | Publish a broadcast event from anywhere (no-op if not yet initialized) |
| `event_bus::subscribe_global(handler)` | Subscribe to broadcast events from anywhere (returns `None` if not yet initialized) |
| `event_bus::register_native_global(method, handler)` | Register a typed native request handler for a method name — called at startup by each domain's `bus.rs` |
| `event_bus::request_native_global(method, req)` | Dispatch a typed native request to the registered handler — zero serialization |
| `event_bus::global()` / `event_bus::native_registry()` | Get the underlying singleton for advanced use |
Domains: agent, memory, channel, cron, skill, tool, webhook, system. See events.rs for the full variant list — events carry rich payloads so subscribers have everything they need.
Domain subscriber files — each domain owns its `bus.rs` with `EventHandler` impls:

- `cron/bus.rs` — `CronDeliverySubscriber` (delivers job output to channels)
- `webhooks/bus.rs` — `WebhookRequestSubscriber` (routes incoming requests to skills, emits responses via socket)
- `channels/bus.rs` — `ChannelInboundSubscriber` (runs agent loop for inbound socket messages)
- `skills/bus.rs` — stub for future subscribers
Adding events for a new domain:
- Add variants to `DomainEvent` in `events.rs` (prefix with domain name, e.g. `BillingInvoiceCreated { ... }`).
- Add the domain string to the `domain()` match arm.
- Create a `bus.rs` file inside your domain module (e.g. `src/openhuman/billing/bus.rs`) for subscriber implementations — each domain owns its handlers.
- Register subscribers in startup (e.g. `channels/runtime/startup.rs`) via the singleton.
- Publish events with `event_bus::publish_global(DomainEvent::YourEvent { ... })`.
Example — publishing:
```rust
use crate::core::event_bus::{publish_global, DomainEvent};

publish_global(DomainEvent::CronDeliveryRequested {
    job_id: job.id.clone(),
    channel: "telegram".into(),
    target: "chat-123".into(),
    output: "Job completed".into(),
});
```

Example — subscribing (trait-based, in `<domain>/bus.rs`):
```rust
use crate::core::event_bus::{DomainEvent, EventHandler};
use async_trait::async_trait;

pub struct MyDomainSubscriber { /* dependencies */ }

#[async_trait]
impl EventHandler for MyDomainSubscriber {
    fn name(&self) -> &str { "my_domain::handler" }
    fn domains(&self) -> Option<&[&str]> { Some(&["cron"]) } // filter by domain

    async fn handle(&self, event: &DomainEvent) {
        if let DomainEvent::CronJobCompleted { job_id, success } = event {
            // react to the event
        }
    }
}
```

Convention: Name the handler struct `<Purpose>Subscriber` (e.g. `CronDeliverySubscriber`) and the `name()` return value `"<domain>::<purpose>"` for grep-friendly tracing output.
Adding a native request handler for a new domain:
- Define the request and response types in the domain (e.g. `src/openhuman/billing/bus.rs`). Use owned fields, `Arc`s, and channels — not borrows. Types only need `Send + 'static`, not `Serialize`.
- Register the handler at startup from the same `bus.rs`, keyed by a stable method name prefixed with the domain (e.g. `"billing.charge_invoice"`).
- Callers import the request/response types from the domain's public surface and dispatch via `request_native_global`.
- Method name convention: `"<domain>.<verb>"` — same naming scheme as JSON-RPC method roots for consistency, but these are not exposed over JSON-RPC.
Example — native request (typed request/response, in `<domain>/bus.rs`):

```rust
use crate::core::event_bus::{register_native_global, request_native_global};
use std::sync::Arc;
use tokio::sync::mpsc;

// Request carries non-serializable state directly — trait objects and
// streaming channels all pass through unchanged.
pub struct BillingChargeRequest {
    pub provider: Arc<dyn BillingProvider>,
    pub amount_cents: u64,
    pub progress_tx: Option<mpsc::Sender<String>>,
}

pub struct BillingChargeResponse {
    pub charge_id: String,
}

// At startup:
pub async fn register_billing_handlers() {
    register_native_global::<BillingChargeRequest, BillingChargeResponse, _, _>(
        "billing.charge",
        |req| async move {
            let id = req.provider.charge(req.amount_cents).await
                .map_err(|e| e.to_string())?;
            Ok(BillingChargeResponse { charge_id: id })
        },
    ).await;
}

// From another module:
let resp: BillingChargeResponse = request_native_global(
    "billing.charge",
    BillingChargeRequest { provider, amount_cents: 500, progress_tx: None },
).await?;
```

Tests: override production handlers by calling `register_native_global` again for the same method before exercising the code under test — the most recent registration wins. For full isolation, construct a fresh `NativeRegistry` directly via `NativeRegistry::new()` and use its `register` / `request` methods.
Design intent: Premium, calm visual language — ocean primary (#4A83DD), sage / amber / coral semantic colors, Inter + Cabinet Grotesk + JetBrains Mono, Tailwind with custom radii/spacing/shadows. Details: gitbooks/resources/design-language.md.
In the parent OpenHuman desktop app, Tauri / Rust is a delivery vehicle: windowing, process lifecycle, IPC to the core sidecar, and other host concerns. Keep as much UI behavior and product logic as practical in TypeScript/React (app/). Avoid growing Rust in the shell for flows that belong in the web layer unless there is a hard platform or security reason.
- GitHub issues on upstream — File and track issues on `tinyhumansai/openhuman` (Issues), not only a fork's tracker, unless the workflow explicitly says otherwise.
- GitHub issue templates — Use `.github/ISSUE_TEMPLATE/feature.md` for new features and `.github/ISSUE_TEMPLATE/bug.md` for bugs; keep the same section structure and fill every required part. AI-authored issues should follow those templates verbatim.
- Open pull requests on upstream — Always create PRs against `tinyhumansai/openhuman` (pull requests), not only a fork's default remote, unless the workflow explicitly says otherwise.
- Public repo; push to your working branch; PRs target `main`.
- Use `.github/PULL_REQUEST_TEMPLATE.md`; AI-generated PR text should follow its sections and checklist.
- Unix-style modules: Prefer individual modules with a single, sharp responsibility—each should do one thing really well. Compose behavior through small, well-named units and clear boundaries instead of monolithic code.
- Tests before the next layer: Ship enough unit tests and coverage for the behavior you are adding or changing before building additional features on top of it. Treat untested code as incomplete; do not accumulate depth on a shaky base.
- Documentation with code: New or changed behavior must ship with matching documentation. At minimum, add concise rustdoc / code comments where the flow is not obvious, and update `AGENTS.md`, architecture docs, or feature docs when repository rules or user-visible behavior change.
- Default to verbose diagnostics on new/changed flows: Add substantial, development-oriented logs while implementing features or fixes so issues are easy to trace end-to-end.
- Log critical checkpoints: Include logs at entry/exit points, branch decisions, external calls, retries/timeouts, state transitions, and error handling paths.
- Use structured, grep-friendly context: Prefer stable prefixes (for example `[domain]`, `[rpc]`, `[ui-flow]`) and include correlation fields such as request IDs, method names, and entity IDs when available.
- Platform conventions: In Rust, use `log` / `tracing` at `debug` or `trace`; in `app/`, use namespaced `debug` logs and dev-only detail as needed.
- Keep logs safe: Never log secrets or sensitive payloads (API keys, JWTs, credentials, full PII). Redact or omit sensitive fields.
- Treat debuggability as a deliverable: Changes lacking sufficient logging for diagnosis are incomplete and should be updated before handoff.
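A tiny illustration of why stable prefixes pay off: fabricated sample log lines, then a grep isolating one subsystem.

```bash
# Fabricated sample lines using the "[prefix] message key=value" shape described above.
printf '%s\n' \
  '[rpc] openhuman.cron_list start req_id=abc123' \
  '[domain] cron: schedule loaded jobs=3' \
  '[rpc] openhuman.cron_list ok req_id=abc123' > /tmp/openhuman-sample.log
# Pull only the RPC-layer lines:
grep '^\[rpc\]' /tmp/openhuman-sample.log
```

The same pattern works across sidecar, Tauri, and WebView output as long as each layer keeps its prefix stable.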
Follow this order so behavior is specified, proven in Rust, proven over RPC, then surfaced in the UI with matching tests.
- Specify against the current codebase — Ground the design in existing domains, controller/registry patterns, and JSON-RPC naming (`openhuman.<namespace>_<function>`). Reuse or extend documented flows in `gitbooks/developing/architecture.md` and sibling guides; avoid parallel architectures.
- Implement in Rust — Add domain logic under `src/openhuman/<domain>/`, wire schemas + registered handlers into the shared registry, and land unit tests in the crate (`cargo test -p openhuman`, focused modules) until the feature is correct in isolation.
- JSON-RPC E2E — Add or extend integration-style tests that call the real HTTP JSON-RPC surface (e.g. `tests/json_rpc_e2e.rs`, mock backend / `scripts/test-rust-with-mock.sh` as appropriate) so methods, params, and outcomes match what the UI will call.
- UI in the Tauri app — Build React screens, state, and `core_rpc_relay` / `coreRpcClient` usage in `app/`; keep business rules in the core, not duplicated in the shell.
- App unit tests — Cover components, hooks, and clients with Vitest (`pnpm test` / `pnpm test:unit` in `app/`).
- App E2E — Add desktop E2E specs where the feature is user-visible (`pnpm test:e2e*`, isolated workspace — see Testing Guide (Unit + E2E)) so the full stack (UI → Tauri → sidecar) behaves as intended.
Capability catalog — When a change adds, removes, renames, relocates, or materially changes a user-facing feature, update src/openhuman/about_app/ in the same work so the runtime capability catalog remains the source of truth for what the app can do.
Debug logging (throughout) — Add lots of development-oriented logging as you build, not as an afterthought. In Rust, use log / tracing at debug or trace on RPC entry and exit, error paths, state transitions, and any branch that is hard to infer from tests alone. In app/, follow existing patterns (e.g. the debug npm package with a namespace per area) plus dev-only detail where useful. Prefer grep-friendly prefixes ([feature], domain name, or JSON-RPC method) so terminal output from sidecar, Tauri, and WebView can be correlated during pnpm dev / tauri dev. Never log secrets, raw JWTs, API keys, or full PII—redact or omit.
Planning rule: When scoping a feature, define the E2E scenarios (core RPC + app) up front. Those scenarios should cover the full intended scope—happy paths, failure modes, auth or policy gates, and regressions you care about. If a scenario is not testable end-to-end, the spec is incomplete or the cut is too large; split or add harness support first.
- Debug logging: Ship heavy `debug`/`trace` (Rust) and namespaced `debug` / dev logs (`app/`) on new flows so sidecar + WebView output is easy to grep; see Feature design workflow. Never log secrets or raw tokens.
- `src/openhuman/`: New features go in a folder/module, not new root-level `src/openhuman/*.rs` files (see Rust core section).
- File size: Prefer ≤ ~500 lines per source file; split modules when growing.
- Pre-merge checks (when touching code): Prettier, ESLint, `tsc --noEmit` in `app/`; `cargo fmt` + `cargo check` for changed Rust (`Cargo.toml` at root and/or `app/src-tauri/Cargo.toml` as appropriate).
- No dynamic imports in production `app/src` code — use static `import` / `import type` at the top of the module. Do not use `import()` (async dynamic import), `React.lazy(() => import(...))`, or `await import('…')` to load app modules, Tauri APIs, or RPC clients. Why: predictable chunk graph, simpler static analysis, fewer surprises in Tauri + Vite, and easier code review. If a module must not run at load time (e.g. a heavy optional path), use a static import and guard the call site with `try/catch` or an explicit runtime check instead of deferring module load via dynamic import. Exceptions: Vitest harness patterns (`vi.importActual`, dynamic imports only inside `*.test.ts` / `__tests__` / `test/setup.ts` when required by the runner); ambient `typeof import('…')` in `.d.ts`; config files (e.g. `tailwind.config.js` JSDoc).
- Type-only imports: `import type` where appropriate.
- Dual socket / tool sync: If you change the realtime protocol, keep frontend (`socketService` / MCP transport) and core socket behavior aligned (see `gitbooks/developing/architecture.md` dual-socket section).
- macOS deep links: Often require a built `.app` bundle; not only `tauri dev`. See `docs/telegram-login-desktop.md` if applicable.
- `window.__TAURI__`: Not assumed at module load; guard Tauri usage accordingly.
- Core sidecar: Must be staged/built so `core_rpc` can reach the `openhuman-core` binary (see `scripts/stage-core-sidecar.mjs`).
Last aligned with monorepo layout (app/ + root src/), QuickJS skills in openhuman_core, skills catalog on GitHub (tinyhumansai/openhuman-skills), and Tauri shell IPC as of repo state.
Two services run independently for development:
| Service | Start command | Port | Notes |
|---|---|---|---|
| Vite dev server | `pnpm dev` (from repo root) | 1420 | React frontend with HMR |
| Core JSON-RPC server | `./target/debug/openhuman-core serve` | 7788 | Rust core; writes bearer token to `~/.openhuman-staging/core.token` |
The app connects to a remote staging backend at https://staging-api.tinyhumans.ai — there is no local backend to run.
The core generates a bearer token at startup, written to `{workspace_dir}/core.token` (default `~/.openhuman-staging/core.token` when `OPENHUMAN_APP_ENV=staging`). Read that file for authenticated RPC calls:
```bash
TOKEN=$(cat ~/.openhuman-staging/core.token)
curl http://localhost:7788/rpc -X POST \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"jsonrpc":"2.0","method":"core.ping","params":{},"id":1}'
```

Public endpoints (no token needed): `GET /health`, `GET /schema`, `GET /events`.
Compiling the Rust core on Linux requires these system packages beyond the basics:

```
libasound2-dev libxi-dev libxtst-dev libxdo-dev libudev-dev libssl-dev clang cmake pkg-config libstdc++-14-dev
```
The `libstdc++-14-dev` package is needed because clang selects GCC 14 headers; without it, `whisper-rs-sys` fails with `fatal error: 'array' file not found`. A symlink may also be needed: `ln -sf /usr/lib/gcc/x86_64-linux-gnu/13/libstdc++.so /usr/lib/x86_64-linux-gnu/libstdc++.so`.
All commands are documented in CLAUDE.md and AGENTS.md above. The most-used subset:
- Lint: `pnpm lint` (ESLint, 0 errors expected; warnings are acceptable)
- Typecheck: `pnpm typecheck` (`tsc --noEmit`)
- Unit tests: `pnpm test` (Vitest, runs 1000+ tests)
- Rust check: `cargo check --manifest-path Cargo.toml`
- Rust tests: `cargo test --lib` (5600+ tests)
- Format check: `pnpm format:check`
The full desktop app can be built and run on headless Linux VMs with:
```bash
export CEF_PATH="$HOME/Library/Caches/tauri-cef"
export LD_LIBRARY_PATH="$CEF_PATH/146.0.9/cef_linux_x86_64:$LD_LIBRARY_PATH"
source scripts/load-dotenv.sh
cargo tauri dev -- -- --no-sandbox
```

Key requirements:

- `--no-sandbox` is required because Chromium refuses to run as root without it.
- `LD_LIBRARY_PATH` must include the CEF distribution directory so `libcef.so` is found at runtime.
- The vendored CEF-aware `cargo-tauri` must be installed first via `bash scripts/ensure-tauri-cli.sh`.
- First build downloads a ~300MB CEF binary and compiles ~900 crates; subsequent builds are incremental.
- GTK/cairo libraries are required: `libgtk-3-dev libwebkit2gtk-4.1-dev libsoup-3.0-dev libjavascriptcoregtk-4.1-dev libglib2.0-dev libcairo2-dev libpango1.0-dev libgdk-pixbuf-2.0-dev libatk1.0-dev libdbus-1-dev`.
- WebGL errors in the log (`ContextResult::kFatalFailure: WebGL1/2 blocklisted`) are normal on GPU-less VMs and do not affect app functionality.
- `pnpm install` may warn about ignored build scripts (`@sentry/cli`, `esbuild`, etc.). The esbuild binary is correctly installed via its native platform package despite the warning — Vite and Vitest work fine.
- Git submodules (`app/src-tauri/vendor/tauri-cef`, `app/src-tauri/vendor/tauri-plugin-notification`) must be initialized for Tauri shell compilation. Run `git submodule update --init --recursive` if not already done.
- `pnpm test:unit` does not exist at the root level; use `pnpm test` instead (which delegates to `vitest run` in the `app` workspace).
- The Tauri shell `cargo check` requires GTK/desktop system libraries; without them, the pre-push hook's `pnpm rust:check` will fail. Use `--no-verify` on push if GTK libs are missing and the change is unrelated to the Tauri shell.
Legend: 🎯session 🔴bugfix 🟣feature 🔄refactor ✅change 🔵discovery ⚖️decision Format: ID TIME TYPE TITLE Fetch details: get_observations([IDs]) | Search: mem-search skill
Stats: 20 obs (8,333t read) | 593,112t work | 99% savings
2848 9:07a ✅ openhuman: All Three Review Branches Pushed to Fork Successfully
2849 " 🔵 openhuman review-daemon-lifecycle: Two Post-Push Issues — Unstaged Prettier Changes + Missing tauri-cef Vendor
2851 9:08a ✅ openhuman daemon lifecycle: Prettier Format Committed as Follow-Up
2855 9:09a ✅ openhuman: All Three Review Branches Fully Pushed — PRs Ready to Open
2857 9:10a 🔵 openhuman: GitHub Connector Cannot Create PRs to tinyhumansai/openhuman — 403 Forbidden
2858 9:11a 🔵 openhuman webhooks-ingress: Session Stalled — Instruction Not Processed After 10+ Minutes
2860 " 🔵 openhuman webhooks: WebhooksDebugPanel Architecture for E2E Smoke Spec
2861 9:13a 🔵 openhuman webhooks-ingress: Full Spec Surface Mapped — RPC Log Strings + UI Navigation Path
2866 9:15a 🟣 openhuman webhooks-ingress: webhooks-ingress-flow.spec.ts Written
2869 9:18a ⚖️ openhuman Memory Refactor Plan: Trait Shape, L1 Pointer, and Missing Pieces
2871 " 🔵 openhuman Memory Architecture: Auto-Inject Pattern Has 3 Separate Implementations
2873 9:31a 🟣 openhuman: Draft PR Opened — Config Runtime Dir Refactor for Testability
2874 9:32a 🟣 openhuman: 3 More Draft PRs Opened — Threads Schema, Daemon Lifecycle, Webhooks E2E
2875 9:33a 🔵 openhuman Memory Namespace: 3 Auto-Inject Sites, Not 1
2876 " ⚖️ openhuman Memory Refactor: Breaking Trait Change + Flag-Off + ToolDiscovery Hybrid
2877 " ✅ Memory Namespace Refactor Plan Written to docs/plans/memory-namespace-refactor.md
2879 9:34a 🔵 openhuman Memory Trait: 15 Impls, Not 14; MemoryRecalled Has No Live Emit Site
2880 " 🔵 openhuman SQLite Schema: memory_docs Already Has namespace Column; Migration Scope Minimal
2881 " 🔵 openhuman Memory Trait Current Signatures: No Namespace Param on Any Method
2882 " 🔵 openhuman Eval Infra: Does Not Exist; Phase D Requires Bootstrap from Scratch
Access 593k tokens of past work via get_observations([IDs]) or mem-search skill.