
fix: cascade websites on user/team delete + filter soft-deleted in list queries#4245

Open
anvme wants to merge 4 commits into umami-software:dev from anvme:fix/cascade-websites

Conversation


anvme commented May 7, 2026

Follow-up to #4243. Stacks on top of that PR's branch — the GitHub diff currently includes #4243's commits too. Once #4243 merges, this branch will rebase cleanly onto dev and show only its own changes (~+58 / -32). For incremental review now, focus on commit 872d737.

Five adjacent bugs from the same family

| Bug | Where | Symptom |
| --- | --- | --- |
| A | deleteTeam | Team-owned websites + ALL their dependent rows orphaned |
| B | non-cloud deleteUser | When the user's owned teams are hard-deleted, team-owned websites orphaned |
| C | getTeamLinks / getUserPixels / getTeamPixels | Soft-deleted entries leak into list views |
| D | cloud deleteUser | Restamps an already-soft-deleted website's deleted_at to the current time |
| E | non-cloud deleteUser (pre-existing) | Missing sessionReplaySaved/sessionReplay/revenue/segment cleanup; entityIds excluded website ids, so website-shares were orphaned |

Approach

  • Mirror the deleteWebsite cleanup pattern (gold standard) inline in deleteUser + deleteTeam; relationMode = "prisma" means there are no DB FK constraints, so the cleanup order is purely for readability. A condensed sketch of the overall shape follows this list.
  • Cloud-gated ownedFilter extended to cover websites: cloud uses { userId } only (cloud preserves owned teams, so their websites must survive); non-cloud uses { OR: [{ userId }, { teamId: { in: teamIds } }] }.
  • entityIds extended with websiteIds so the existing share.deleteMany covers website-shares too.
  • Cloud updateMany calls (link/pixel/website) get a deletedAt: null filter to prevent restamping (same fix shape as the amend commit on #4243 applied to link/pixel).
  • Redis invalidation extended to website:${id} cache keys (matches deleteWebsite's pattern).
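
A condensed sketch of the shape described above. This is not the actual src/queries/prisma code: the delegate and transaction types are stand-ins, and model/field names follow the PR text rather than the real schema.

```ts
// Hedged sketch only: delegate shapes and names are stand-ins for the real
// Prisma client; the actual cleanup lives inline in deleteUser / deleteTeam.
type Where = Record<string, unknown>;

interface Delegate {
  findMany(args: { where: Where; select: Where }): Promise<{ id: string }[]>;
  updateMany(args: { where: Where; data: Where }): Promise<unknown>;
  deleteMany(args: { where: Where }): Promise<unknown>;
}

interface Tx {
  website: Delegate;
  share: Delegate;
}

export async function cleanupOwnedWebsites(
  tx: Tx,
  userId: string,
  teamIds: string[],
  cloudMode: boolean,
): Promise<string[]> {
  // Cloud preserves owned teams, so only user-owned websites are touched;
  // non-cloud hard-deletes owned teams, so team-owned websites must go too.
  const ownedFilter: Where = cloudMode
    ? { userId }
    : { OR: [{ userId }, { teamId: { in: teamIds } }] };

  const websites = await tx.website.findMany({ where: ownedFilter, select: { id: true } });
  const websiteIds = websites.map(w => w.id);

  // entityIds now includes websiteIds so the existing share.deleteMany
  // also removes website-shares (link/pixel/board ids omitted here).
  await tx.share.deleteMany({ where: { entityId: { in: websiteIds } } });

  if (cloudMode) {
    // deletedAt: null guard: never restamp a row that is already soft-deleted.
    await tx.website.updateMany({
      where: { ...ownedFilter, deletedAt: null },
      data: { deletedAt: new Date() },
    });
  } else {
    // Non-cloud hard-deletes; dependent-row cleanup (session/event/...) omitted.
    await tx.website.deleteMany({ where: { id: { in: websiteIds } } });
  }

  // Returned ids feed the post-transaction Redis `website:${id}` invalidation.
  return websiteIds;
}
```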

Test plan

  • pnpm build-app runs clean (no errors/warnings)
  • No new lint warnings in changed files
  • E2E Test A (deleteTeam): create team + team-owned website (with session/event/event_data/segment/share rows) -> delete team -> all dependent rows go to 0 (see the count-check sketch after this list)
  • E2E Test B (deleteUser non-cloud team-owned cleanup): create user owning a team that owns a website (with events) -> delete user -> team-owned website + dependents all go to 0
  • E2E Test C (list filter): create alive + soft-deleted link/pixel rows -> verify /api/teams/[id]/links, /api/pixels, /api/teams/[id]/pixels return only the alive entries
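
For tests A and B the pass condition is row counts hitting zero after the delete. A minimal count-check sketch, with an illustrative delegate shape and table names (not the repo's actual test helpers):

```ts
// Hedged sketch of the "all dependent rows go to 0" check; model names are
// illustrative, and the share check uses entityId per the PR description.
interface Countable {
  count(args: { where: Record<string, unknown> }): Promise<number>;
}

type Db = Record<'website' | 'session' | 'websiteEvent' | 'eventData' | 'segment' | 'share', Countable>;

export async function assertWebsiteFullyDeleted(db: Db, websiteId: string): Promise<void> {
  const tables = ['website', 'session', 'websiteEvent', 'eventData', 'segment', 'share'] as const;

  for (const table of tables) {
    const where =
      table === 'website'
        ? { id: websiteId }
        : table === 'share'
          ? { entityId: websiteId }
          : { websiteId };

    const remaining = await db[table].count({ where });
    if (remaining !== 0) {
      throw new Error(`${table}: expected 0 rows for website ${websiteId}, found ${remaining}`);
    }
  }
}
```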

Out of scope (separate PRs if maintainers want)

anvme added 3 commits May 7, 2026 03:37
deleteUser and deleteTeam left link/pixel/board rows (and their share rows)
in the database after the owner was removed. /q/<slug> and /p/<slug>
also kept serving deleted entries because the routes did not filter
deletedAt and Redis cached lookups for 24h.

- deleteUser: clean up link/pixel/board + shares for the deleted user.
  Cloud mode: soft-delete link/pixel, hard-delete board, only userId-owned.
  Non-cloud: hard-delete everything matching userId or owned teamIds.
- deleteTeam: same cleanup, scoped to teamId.
- /q and /p route handlers: filter deletedAt: null at the call sites
  (not in findLink/findPixel helpers, which would null-deref the
  permission checks at src/permissions/link.ts and pixel.ts).
- Post-transaction Redis invalidation mirrors deleteWebsite.
…eted slugs

Address Greptile review feedback on umami-software#4243.

- Cloud-mode link.updateMany / pixel.updateMany now filter where: { ..., deletedAt: null } so a previously soft-deleted row keeps its original deletion timestamp instead of being restamped with the current time.
- Pre-transaction findMany now selects deletedAt; the Redis invalidation list filters to only live slugs, avoiding harmless but wasted DEL calls for already-soft-deleted entries.

Note: the share.deleteMany cleanup still uses the broad entityId list (not filtered by deletedAt) so that orphan share rows of already-soft-deleted links/pixels are still cleaned up. Filtering the prefetch itself, as Greptile's exact suggestion proposed, would skip those shares while link.deleteMany still hard-deletes the rows, leaving orphan share rows behind. Verified empirically with a 3-scenario reproduction.
…st queries

Stacks on top of umami-software#4243. Five adjacent bugs from the same family:

- deleteTeam left team-owned websites (and all dependent rows) orphaned. Added
  inline cleanup mirroring deleteWebsite.
- non-cloud deleteUser, when hard-deleting the user's owned teams, also left
  team-owned websites orphaned. Extended the existing ownedFilter pattern
  (cloud-gated OR) to cover websites.
- getTeamLinks/getUserPixels/getTeamPixels did not filter deletedAt: null,
  leaking soft-deleted entries into list views.
- cloud deleteUser restamped already-soft-deleted websites' deleted_at;
  added deletedAt: null guard (same shape as link/pixel restamping fix).
- Surfaced pre-existing gaps in deleteUser non-cloud: missing
  sessionReplaySaved/sessionReplay/revenue/segment cleanups, entityIds
  excluded website ids so website-shares were orphaned.

vercel Bot commented May 7, 2026

@anvme is attempting to deploy a commit to the Umami Software Team on Vercel.

A member of the Team first needs to authorize it.


greptile-apps Bot commented May 7, 2026

Greptile Summary

This PR fixes five related data-integrity bugs in deleteUser and deleteTeam: orphaned team-owned websites, missing table cleanups (sessionReplaySaved, sessionReplay, revenue, segment), soft-deleted entries leaking into list queries, and cloud-mode deletedAt restamping on already-soft-deleted rows.

  • Cascade cleanup: both delete functions now mirror deleteWebsite's full cleanup order with a cloud/non-cloud split and deletedAt: null guards to avoid restamping.
  • List query fixes: getTeamLinks, getUserPixels, and getTeamPixels now filter deletedAt: null, consistent with their user-scoped counterparts.
  • Route fixes: p/[slug] and q/[slug] pass deletedAt: null in both the Redis-cached and DB-fallback paths, so soft-deleted pixels/links no longer serve tracking events or redirects.
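
The route-fix bullet above boils down to passing deletedAt: null at both call sites. A rough sketch of that shape (the redis fetch signature and the findPixel parameter here are stand-ins, not the actual route code):

```ts
// Hedged sketch of the slug resolution in p/[slug] and q/[slug]; RedisLike.fetch
// mirrors the cache-or-query pattern (86400s TTL) described in this review.
interface RedisLike {
  fetch<T>(key: string, query: () => Promise<T>, ttlSeconds: number): Promise<T>;
}

type Pixel = { id: string; websiteId: string; slug: string } | null;

export async function resolvePixelBySlug(
  slug: string,
  findPixel: (where: Record<string, unknown>) => Promise<Pixel>,
  redis?: RedisLike,
): Promise<Pixel> {
  // Both the cache-miss fallback and the direct DB path filter deletedAt: null,
  // so a soft-deleted pixel resolves to null instead of serving tracking events.
  const query = () => findPixel({ slug, deletedAt: null });
  return redis ? redis.fetch(`pixel:${slug}`, query, 86400) : query();
}
```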

Confidence Score: 3/5

The bulk-delete cleanup logic is correct, but individual deletePixel/deleteLink functions lack Redis cache invalidation, allowing hard-deleted pixels/links to continue serving tracking events for up to 24 hours.

The ownedFilter expansion, missing-table additions, and cloud-mode restamping guards all match the deleteWebsite gold standard. The stale-Redis gap on individual deletes is a concrete defect on the changed tracking path: p/slug and q/slug now depend on deletedAt:null correctness, but deletePixel/deleteLink bypass that via the 86400s cache.

src/queries/prisma/pixel.ts and src/queries/prisma/link.ts — the individual deletePixel and deleteLink functions need Redis cache invalidation to match the bulk-delete paths added by this PR.

Important Files Changed

| Filename | Overview |
| --- | --- |
| src/queries/prisma/user.ts | deleteUser expanded to cover team-owned content via ownedFilter; missing-table cleanups added for non-cloud; deletedAt: null guards added for cloud soft-deletes; Redis invalidation added |
| src/queries/prisma/team.ts | deleteTeam extended with full website cascade, share cleanup, and Redis invalidation; cloud path gains deletedAt: null guards to prevent restamping |
| src/queries/prisma/pixel.ts | getUserPixels and getTeamPixels now filter deletedAt: null to prevent soft-deleted entries leaking into list views |
| src/queries/prisma/link.ts | getTeamLinks now filters deletedAt: null, matching the pre-existing filter on getUserLinks |
| src/app/(collect)/p/[slug]/route.ts | Both Redis-cached and DB-fallback paths now pass deletedAt: null when resolving a pixel by slug |
| src/app/(collect)/q/[slug]/route.ts | Both Redis-cached and DB-fallback paths now pass deletedAt: null when resolving a link by slug |

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    DU[deleteUser / deleteTeam] --> PF[Pre-fetch: links, pixels, boards, websites via ownedFilter]
    PF --> CM{cloudMode?}
    CM -- yes --> CSoft[Cloud: website/link/pixel updateMany deletedAt, board.deleteMany, share.deleteMany, user/team soft-delete]
    CM -- no --> CHard[Non-cloud: session/event/replay/revenue/segment cleanup, teamUser/team, report/share, link/pixel/board, website.deleteMany, user/team.delete]
    CSoft --> Redis[invalidateRedis: del link:slug, pixel:slug, website:id]
    CHard --> Redis
    subgraph Routes
        PR[p/slug or q/slug] -- Redis hit --> RC[Serve cached pixel/link up to TTL]
        PR -- Redis miss --> DB[findPixel/findLink where slug AND deletedAt IS NULL]
    end
    Redis -.->|clears stale keys| RC

Comments Outside Diff (1)

  1. src/queries/prisma/pixel.ts, lines 60-62

    P1 Stale Redis cache window on individual pixel delete

    deletePixel hard-deletes the row without calling redis.client.del('pixel:${slug}'). The p/[slug] route (fixed in this PR) now skips soft-deleted pixels at the DB level, but the Redis fetch call caches for 86400 seconds — so a hard-deleted pixel will continue to fire tracking events for up to 24 hours whenever Redis is enabled. The bulk paths (deleteUser/deleteTeam) correctly invalidate the cache via invalidateRedis, but the individual delete path has no equivalent. The same gap exists in deleteLink.
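
The natural shape of closing this gap (and the one the author later pushed in 3c7fbad, described below) is to invalidate the slug key right after the hard delete. A hedged sketch with stand-in delegate and redis types, not the actual src/queries/prisma/pixel.ts code:

```ts
// Hedged sketch: clear the cache key using the row Prisma's delete() returns,
// so no extra DB read is needed before invalidation.
interface PixelDelegate {
  delete(args: { where: { id: string } }): Promise<{ id: string; slug: string }>;
}

interface RedisClientLike {
  del(key: string): Promise<number>;
}

export async function deletePixelAndInvalidate(
  pixel: PixelDelegate,
  pixelId: string,
  redis?: RedisClientLike,
) {
  const deleted = await pixel.delete({ where: { id: pixelId } });

  // Without this, the 86400s cache keeps serving the slug after the row is gone.
  if (redis) {
    await redis.del(`pixel:${deleted.slug}`);
  }

  return deleted;
}
```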


Comment thread src/queries/prisma/user.ts
Address Greptile review feedback on umami-software#4245.

- deleteLink and deletePixel now redis.client.del('link:slug' / 'pixel:slug')
  using the slug returned by Prisma's delete(). Previously the row was hard-
  deleted but the Redis cache (24h TTL) kept serving the slug, so /q/<slug>
  and /p/<slug> kept firing for up to a day after deletion.
- updateLink and updatePixel now invalidate the cache for the current slug,
  and additionally for the previous slug if the slug was changed. Previously
  changing a link's destination URL or slug left the public cache stale.
- Cloud-mode link.updateMany and pixel.updateMany in deleteUser now spread
  ownedFilter (which is { userId } in cloud mode) instead of hardcoding
  { userId }, so the cleanup intent stays consistent if ownedFilter ever
  evolves.

Verified empirically against a Docker Postgres + Redis: deleted link's
/q/<slug> returns 404 immediately (was: still redirected to old URL for 24h);
slug rename invalidates both old and new cache keys.

anvme commented May 7, 2026

Pushed 3c7fbad addressing all of Greptile's review:

| Comment | Verdict | What I did |
| --- | --- | --- |
| P1: stale Redis cache on individual deleteLink/deletePixel | Accepted (real bug) | Added redis.client.del('link:${slug}' / 'pixel:${slug}') to both functions. Used the slug returned by Prisma's delete() so it's atomic with no extra DB read. |
| P2: hardcoded userId instead of ownedFilter | Accepted | Spread ownedFilter for both link/pixel updateMany in cloud deleteUser. |
| F (caught while addressing P1): updateLink / updatePixel also never invalidated the cache | Same root cause as P1; fixed in the same commit | Pre-fetch the old slug, do the update, then del the cache for the new slug and the old slug if it changed. Without this fix, changing a link's destination URL or slug leaves the public /q/<slug> redirect serving the old URL for up to 24h. (See the sketch after this table.) |
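
For row F, the update path has to clear both the new and the previous slug key. A hedged sketch of that ordering, with stand-in delegate/redis shapes rather than the actual updateLink code:

```ts
// Hedged sketch of the update-path invalidation: pre-fetch the old slug,
// apply the update, then clear the new key plus the old key if the slug changed.
interface LinkDelegate {
  findUnique(args: { where: { id: string }; select: { slug: true } }): Promise<{ slug: string } | null>;
  update(args: { where: { id: string }; data: Record<string, unknown> }): Promise<{ id: string; slug: string }>;
}

interface RedisClientLike {
  del(key: string): Promise<number>;
}

export async function updateLinkAndInvalidate(
  link: LinkDelegate,
  linkId: string,
  data: Record<string, unknown>,
  redis?: RedisClientLike,
) {
  const previous = await link.findUnique({ where: { id: linkId }, select: { slug: true } });
  const updated = await link.update({ where: { id: linkId }, data });

  if (redis) {
    // Always clear the current slug so URL changes are visible immediately...
    await redis.del(`link:${updated.slug}`);
    // ...and clear the old slug too when the slug itself was renamed.
    if (previous && previous.slug !== updated.slug) {
      await redis.del(`link:${previous.slug}`);
    }
  }

  return updated;
}
```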

Empirical verification

Reproduced the bug class against a Docker Postgres + Redis test stack:

  • Before P1 fix: created link with url=A, hit /q/<slug> (cached), DELETE the link, hit /q/<slug> again -> still 307-redirected to url=A. Postgres link rows = 0, Redis cache still has the slug entry.
  • After P1 fix: same flow -> /q/<slug> returns 404 immediately after delete. Redis EXISTS = 0.
  • Slug rename test: link with slug=X, hit /q/X (cache populated), PUT update to slug=Y. After fix: /q/X -> 404 (old cache cleared), /q/Y -> works (new entry).

For deployments with no Redis (e.g., Postgres-only or Postgres+ClickHouse without Redis), this entire bug class is invisible because every public-route request goes straight to Postgres. The fix is a no-op when REDIS_URL is unset.

Thanks for catching this — the update-path version (F) was a real surprise that I'd have likely missed without the P1 prompt.
