Symptom: You run npx emdash seed seed/seed.json, it reports success, but the new entries don't show up in /_emdash/admin.
Cause: The dev server was started with pnpm astro dev (or similar) instead of npx emdash dev, OR you were previously using the Cloudflare adapter in dev which reads from Wrangler's D1 emulation (.wrangler/state/v3/d1/) instead of data.db.
Fix:
- Make sure `astro.config.mjs` is using the SQLite adapter in dev (it should be — check that `isDev = process.env.NODE_ENV !== 'production'`).
- Use `npx emdash dev` to start the server (not `pnpm dev`).
- Hard-refresh the admin (`Cmd+Shift+R`).
To verify what's actually in the database:
```sh
sqlite3 data.db "SELECT slug, status, title FROM ec_pages;"
```

Same as above — you were previously using the Cloudflare D1 emulation. The manually created page is in `.wrangler/state/v3/d1/...sqlite` and the seeded content is in `data.db`.
Once you switch to SQLite-in-dev mode, the manually created page won't appear (it's in the Wrangler DB). Either:
- Recreate it manually in the admin (it will go into `data.db` this time)
- Or add it to `seed/seed.json` so it's reproducible
Symptom: Page throws Error: PWB_API_URL environment variable is not set.
Fix: Copy .env.example to .env and set the URL:
```sh
cp .env.example .env
# Edit .env: PWB_API_URL=http://localhost:3000
```

If you change the API paths in `client.ts`, update the MSW mock paths in `src/test/mocks/pwb-server.ts` to match. The mock uses the same URL patterns that `PwbClient` builds.
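As a rough sketch, a matching handler could look like this (the base URL, path param, and stub response body are assumptions; mirror whatever `PwbClient` actually builds):

```ts
// Sketch of src/test/mocks/pwb-server.ts: MSW handlers must match the URL
// patterns PwbClient builds. The base URL and empty stub body are assumptions.
import { http, HttpResponse } from "msw";
import { setupServer } from "msw/node";

export const server = setupServer(
  http.get("http://localhost:3000/api_public/v1/:locale/site_details", () =>
    HttpResponse.json({}),
  ),
);
```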
Run npx emdash dev once to generate types, then the TypeScript errors should clear. The emdash-env.d.ts file is auto-generated on dev server start.
Possible causes:
- `PWB_API_URL` is pointing at an instance that doesn't have the property
- The PWB backend isn't running
- The slug in the URL doesn't match any property in PWB
Check the browser console and the Astro dev server terminal for the actual HTTP error.
The form reads the PWB API URL from a <meta name="pwb-api-url"> tag in BaseLayout.astro. If PWB_API_URL is not set in the environment at build time, the meta tag will be empty and fetch calls will go to undefined/api_public/v1/enquiries.
Check: `document.querySelector('meta[name="pwb-api-url"]').content` in the browser console.
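If you want the failure to be loud instead of silent, a small guard like this hypothetical helper works:

```ts
// Hypothetical helper: read the PWB API base from the meta tag and throw
// early instead of letting fetch build "undefined/..." request URLs.
function getPwbApiUrl(): string {
  const meta = document.querySelector<HTMLMetaElement>('meta[name="pwb-api-url"]');
  if (!meta?.content) {
    throw new Error("PWB API URL missing: was PWB_API_URL set at build time?");
  }
  return meta.content;
}
```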
Symptom: Passkey verification returns 200 but the next request gets a 401, and the server logs show:
[WARN] [session] context.session was used … but no storage configuration was provided
Cause: No session driver is configured. Without the Cloudflare adapter (which provides session storage in production), Astro can't persist the session between requests.
Fix: astro.config.mjs must include a dev-only session driver:
```js
...(isDev ? { session: { driver: "fs-lite" } } : {}),
```

This is already set. If the warning reappears, check that `isDev` resolves to `true` (i.e. `NODE_ENV` is not `"production"`).
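For context, the spread sits in the config roughly like this (a minimal sketch; the real file also configures the adapter and other options):

```ts
// astro.config.mjs (sketch): dev-only session driver, gated on NODE_ENV.
import { defineConfig } from "astro/config";

const isDev = process.env.NODE_ENV !== "production";

export default defineConfig({
  ...(isDev ? { session: { driver: "fs-lite" } } : {}),
});
```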
Symptom: Browser console shows:
Invalid hook call. Hooks can only be called inside of the body of a function component.
Cause: Two copies of React are loaded — one pre-bundled by Vite, one inlined inside the @emdash-cms/admin chunk.
Fix: @emdash-cms/admin must be in vite.optimizeDeps.exclude in astro.config.mjs so Vite doesn't pre-bundle it (which would inline React). This is already set. If the error returns after a package update, check that the exclude entry is still present.
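The relevant entry looks roughly like this (a sketch; the real config contains more than shown):

```ts
// astro.config.mjs (sketch): keep @emdash-cms/admin out of Vite pre-bundling
// so it resolves the project's single copy of React.
import { defineConfig } from "astro/config";

export default defineConfig({
  vite: {
    optimizeDeps: {
      exclude: ["@emdash-cms/admin"],
    },
  },
});
```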
Usually a stale service worker. In Chrome DevTools → Application → Service Workers → click "Unregister", then hard-refresh.
Symptom: Remote login reports a successful connection, then shows undefined for the browser URL and device code, and exits with Device code expired (timeout).
What this means: The deployed site can still be healthy. During verification, the browser admin login page loaded correctly and offered Passkey, GitHub, Google, and email-link sign-in. The failure appears to be in the CLI device-code path for this deployment, not in the deployed admin itself.
Workaround:
- Use the browser login at `/_emdash/admin/login` to confirm the site auth flow is healthy.
- Prefer an MCP client that supports browser OAuth when connecting to `/_emdash/api/mcp`.
- If you need remote writes immediately, use the admin UI or a browser-authenticated MCP client rather than blocking on the CLI device flow.
Context: The deployed MCP endpoint is OAuth-protected and advertises a standard authorization server, so this symptom should not be interpreted as "MCP is not deployed" or "admin auth is broken".
Symptom: The admin login page loads correctly but clicking "Sign in with Passkey" does nothing or fails to authenticate.
Cause: The dev-browser skill (and any Playwright-managed browser) runs an isolated browser profile that has no access to the user's system passkeys or saved credentials.
Fix: Use mcp__claude-in-chrome tools instead. These connect to the user's real Chrome instance where passkeys are registered. Navigate to /_emdash/admin/login in the real browser and complete the passkey prompt there.
Symptom: Clicking "Sign in with email link" and submitting an email address shows: Email is not configured. Magic link authentication requires an email provider.
Cause: The production deployment does not have an email provider configured, so magic link auth is unavailable.
Fix: Use Passkey, GitHub, or Google login instead. For automated sessions, use mcp__claude-in-chrome to authenticate via passkey in the user's real Chrome browser.
Symptom: Worker logs show:
[PASSKEY_VERIFY_ERROR] Error: Credential not found
and there is no alternate sign-in path available.
Cause: The deployed database no longer has a passkey row that matches the browser credential being presented. In passkey mode, EmDash can also get stuck with emdash:setup_complete = true, which prevents the first-admin setup flow from reopening automatically.
Fix: Use the repo recovery command:
```sh
pnpm reset:admin-access
```

That will:
- back up the remote auth/setup tables
- clear remote auth and passkey rows, including deleting all rows from `users`
- reset `emdash:setup_complete` to `false`
- preserve content
Then reopen setup and register a new admin passkey.
This recovery flow preserves CMS content, but it does remove all existing user accounts from the target D1 database.
If you want to inspect the plan first:
```sh
pnpm reset:admin-access --dry-run
```

For the full scripted workflow and exact table list, see:
Symptom: Direct fetch() calls to /_emdash/api/content/* with POST/PUT/PATCH return {"error":{"code":"CSRF_REJECTED","message":"Missing required header"}}.
Cause: The EmDash csrfInterceptor requires the custom header X-EmDash-Request: 1 on all mutating requests. The admin UI's bundled fetch client adds this automatically; direct fetch calls do not.
Fix: Add 'X-EmDash-Request': '1' to the headers of every POST, PUT, PATCH, or DELETE request:
```js
fetch('/_emdash/api/content/posts', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', 'X-EmDash-Request': '1' },
  body: JSON.stringify({ ... })
})
```

Symptom: The custom domain can get stuck redirecting to `/404`, and Worker logs may show repeated backend errors such as:
```
Error: HTTP 521 https://demo.propertywebbuilder.com/api_public/v1/en/site_details
    at PwbClient.get (chunks/client_BNBH10g4.mjs:80:13)
    at async chunks/404_D01OZCcY.mjs:8:16
```
HTTP 521 means Cloudflare cannot reach the origin server (PWB backend is down or refusing connections).
Cause: The main problem was redirect-based not-found handling. Several routes used Astro.redirect('/404') when content was missing. On the custom domain, that can create a /404 redirect loop instead of terminating in a real 404 response. Some of those routes also fetched PWB site data before deciding whether to render a not-found page, which added noisy site_details errors whenever the backend was unavailable.
Fix: Return a direct 404 response from the failing route instead of redirecting to /404. For property-layout pages, use a minimal fallback SiteDetails object so the page can still render when the PWB backend is down. The dedicated src/pages/404.astro page should also avoid calling the PWB backend.
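The pattern in a route's frontmatter looks roughly like this (a sketch; `fetchPageOrNull` is a hypothetical stand-in for the real PWB lookup):

```ts
// Inside an .astro page's frontmatter (SSR): return a real 404 Response
// instead of Astro.redirect('/404'), which can loop on the custom domain.
const page = await fetchPageOrNull(Astro.params.slug); // hypothetical helper
if (!page) {
  return new Response("Not found", { status: 404 });
}
```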
Symptom: Worker logs show:
OAuth initiation error: Error: Astro.locals.runtime.env has been removed in Astro v6. Use 'import { env } from "cloudflare:workers"' instead.
Clicking "Sign in with GitHub" or "Sign in with Google" redirects back to the login page with ?error=oauth_error.
Cause: The emdash OAuth routes used locals.runtime?.env to read Cloudflare environment bindings (OAuth client ID/secret). Astro v6 removed locals.runtime entirely — accessing it now throws instead of returning undefined.
Fix: The emdash package is patched in patches/emdash@0.1.0.patch to use import("cloudflare:workers") in Cloudflare Workers, with a fallback to import.meta.env for local dev. Both [provider].ts and [provider]/callback.ts are patched.
If this error reappears after upgrading emdash, re-apply the patch:
- Open a fresh patch edit dir: `pnpm patch emdash@<new-version>`
- In both `src/astro/routes/api/auth/oauth/[provider].ts` and `.../callback.ts`, replace the `locals.runtime?.env` block with:

  ```ts
  let env: Record<string, unknown>;
  try {
    const cf = await import("cloudflare:workers");
    env = cf.env as Record<string, unknown>;
  } catch {
    env = import.meta.env as Record<string, unknown>;
  }
  ```

- Commit the patch: `pnpm patch-commit node_modules/.pnpm_patches/emdash@<new-version>`
- Redeploy: `pnpm run deploy:prod`
Regression tests in src/emdash-oauth-patch.test.ts will catch if the fix is ever dropped.
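A guard of roughly this shape would catch the regression (an assumed sketch; the repo's actual test may assert different files or strings):

```ts
// Hypothetical regression guard in the spirit of src/emdash-oauth-patch.test.ts:
// assert the patched route source keeps the cloudflare:workers import.
// The file path and asserted strings are assumptions, not the repo's real test.
import { readFileSync } from "node:fs";
import { expect, test } from "vitest";

test("emdash OAuth route keeps the cloudflare:workers patch", () => {
  const src = readFileSync(
    "node_modules/emdash/src/astro/routes/api/auth/oauth/[provider].ts",
    "utf8",
  );
  expect(src).toContain('import("cloudflare:workers")');
  expect(src).not.toContain("locals.runtime?.env");
});
```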
Symptom: A custom domain or route on propertywebbuilder.com (e.g. emdash2.propertywebbuilder.com) returns HTTP 200 with an empty body. Worker observability logs show HTTP 521 errors for every call to demo.propertywebbuilder.com, even though curl https://demo.propertywebbuilder.com/ works fine from outside Cloudflare. A custom domain on a different zone (e.g. emdash.homestocompare.com) works correctly with the same Worker and the same PWB_API_URL.
Cause — Cloudflare same-zone subrequest restriction. When the Worker handles a request arriving via the propertywebbuilder.com zone, any outbound fetch() to another hostname that also lives in that zone (including grey-cloud / DNS-only records) is subject to Cloudflare's internal routing for that zone. Cloudflare cannot forward those subrequests to the origin and returns HTTP 521 ("Web Server Down"). When the same Worker handles a request arriving via a different zone (homestocompare.com), the subrequest to demo.propertywebbuilder.com crosses zone boundaries and reaches the origin normally.
The same failure applies to both mechanisms Cloudflare uses for Worker binding on propertywebbuilder.com:
- Route (`emdash.propertywebbuilder.com/*`) — all fetch calls to `demo.propertywebbuilder.com` return 521.
- Custom domain (`emdash2.propertywebbuilder.com`) — same result.
Diagnosis: In the Worker's Observability → Events view, filter by emdash2 (or the failing domain). Every invocation will show:
```
[pwb] GET https://demo.propertywebbuilder.com/api_public/v1/en/site_details
HTTP 521 https://demo.propertywebbuilder.com/api_public/v1/en/properties?...
```
The invocation itself is marked as an error even though the browser receives HTTP 200 with an empty body (the Worker catches the 521 and returns an empty response rather than crashing).
Fix options (choose one):
- **Use a `PWB_API_URL` on a different zone.** Set `PWB_API_URL` to a hostname that is not in the `propertywebbuilder.com` Cloudflare zone — for example, the server's IP address, or an alias record managed under `homestocompare.com` or another zone you control.
- **Delete the `demo` DNS record from the Cloudflare zone.** If `demo.propertywebbuilder.com` has a DNS record in the `propertywebbuilder.com` Cloudflare zone (even grey-cloud / DNS-only), Cloudflare still routes Worker subrequests through the zone. Removing the record from Cloudflare and managing DNS for it elsewhere (or via the server's IP) allows subrequests to reach the origin directly.
- **Add the `cf-no-worker` header to API subrequests.** Passing `'cf-no-worker': '1'` on the outbound fetch instructs Cloudflare to skip Worker processing for that subrequest, which can also break the loopback. This requires modifying the `PwbClient` fetch calls; a sketch follows below.
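A minimal sketch of that change (the URL and surrounding code are assumptions; only the header comes from the option above):

```ts
// Outbound subrequest from the Worker with the cf-no-worker header, so
// Cloudflare skips Worker processing for this fetch. apiBase stands in
// for the configured PWB_API_URL.
const apiBase = "https://demo.propertywebbuilder.com";
const res = await fetch(`${apiBase}/api_public/v1/en/site_details`, {
  headers: { "cf-no-worker": "1" },
});
```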
Symptom: When deploying via the Cloudflare "Deploy to Workers" button, the configuration form shows PWB_API_URL twice — one field pre-filled with dots (masked) and one empty field.
Cause: PWB_API_URL has been set both as a Worker secret (via wrangler secret put PWB_API_URL at some point) and as a plain var in wrangler.jsonc ("vars": { "PWB_API_URL": "" }). The deploy form renders one field for each. Secrets take precedence over vars at runtime, so the Worker uses the secret value.
Fix: Since PWB_API_URL is not sensitive and should be visible and easy to change, delete the secret version:
```sh
wrangler secret delete PWB_API_URL
```

After that the form shows only the single plain-text vars field from `wrangler.jsonc`.
If pnpm install fails on better-sqlite3, the native bindings need to be compiled:
```sh
pnpm approve-builds
# Select: better-sqlite3
pnpm install
```

If you're on Apple Silicon and get an architecture mismatch, try:

```sh
arch -arm64 pnpm install
```