fix: cascade websites on user/team delete + filter soft-deleted in list queries#4245
anvme wants to merge 4 commits
Conversation
deleteUser and deleteTeam left link/pixel/board rows (and their share rows) in the database after the owner was removed. /q/<slug> and /p/<slug> also kept serving deleted entries because the routes did not filter deletedAt and Redis cached lookups for 24h.

- deleteUser: clean up link/pixel/board + shares for the deleted user. Cloud mode: soft-delete link/pixel, hard-delete board, only userId-owned. Non-cloud: hard-delete everything matching userId or owned teamIds.
- deleteTeam: same cleanup, scoped to teamId.
- /q and /p route handlers: filter deletedAt: null at the call sites (not in the findLink/findPixel helpers, which would null-deref the permission checks at src/permissions/link.ts and pixel.ts).
- Post-transaction Redis invalidation mirrors deleteWebsite.
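The call-site filtering can be sketched as a pure function over an in-memory table (a stand-in for the Prisma query; the names findPublicLink and LinkRow are illustrative, not the PR's actual code). The point is that the deletedAt check lives at the public route's call site, so shared helpers can still return soft-deleted rows for permission checks:

```typescript
interface LinkRow {
  slug: string;
  url: string;
  deletedAt: Date | null;
}

// Stand-in for the /q/<slug> lookup: filter deletedAt at the call site
// rather than inside a shared findLink helper, so permission checks
// elsewhere can still see soft-deleted rows.
function findPublicLink(rows: LinkRow[], slug: string): LinkRow | null {
  return rows.find(r => r.slug === slug && r.deletedAt === null) ?? null;
}
```

A soft-deleted row is simply invisible to the public route while remaining addressable by id elsewhere.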
…eted slugs

Address Greptile review feedback on umami-software#4243.

- Cloud-mode link.updateMany / pixel.updateMany now filter where: { ..., deletedAt: null } so a previously soft-deleted row keeps its original deletion timestamp instead of being restamped with the current time.
- The pre-transaction findMany now selects deletedAt; the Redis invalidation list filters to only live slugs, avoiding harmless but wasted DEL calls for already-soft-deleted entries.

Note: the share.deleteMany cleanup still uses the broad entityId list (not filtered by deletedAt) so that orphan share rows of already-soft-deleted links/pixels are still cleaned up. Filtering the prefetch itself, as Greptile's exact suggestion proposed, would skip those shares while link.deleteMany still hard-deletes the rows, leaving orphan share rows behind. Verified empirically with a 3-scenario reproduction.
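The live-slug filtering for the Redis invalidation list can be sketched as a small pure helper (liveSlugCacheKeys and PrefetchedRow are illustrative names, assuming the cache-key shape link:<slug> / pixel:<slug> described in this PR):

```typescript
interface PrefetchedRow {
  id: string;
  slug: string;
  deletedAt: Date | null;
}

// Only live rows get Redis DEL keys; already-soft-deleted rows were never
// re-cached, so deleting their keys would be wasted round-trips.
function liveSlugCacheKeys(rows: PrefetchedRow[], prefix: 'link' | 'pixel'): string[] {
  return rows
    .filter(r => r.deletedAt === null)
    .map(r => `${prefix}:${r.slug}`);
}
```

The broader entityId list for share.deleteMany is deliberately not derived from this filtered set, for the orphan-share reason given above.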
…st queries

Stacks on top of umami-software#4243. Five adjacent bugs from the same family:

- deleteTeam left team-owned websites (and all dependent rows) orphaned. Added inline cleanup mirroring deleteWebsite.
- Non-cloud deleteUser, when hard-deleting the user's owned teams, also left team-owned websites orphaned. Extended the existing ownedFilter pattern (cloud-gated OR) to cover websites.
- getTeamLinks/getUserPixels/getTeamPixels did not filter deletedAt: null, leaking soft-deleted entries into list views.
- Cloud deleteUser restamped already-soft-deleted websites' deleted_at; added a deletedAt: null guard (same shape as the link/pixel restamping fix).
- Surfaced pre-existing gaps in non-cloud deleteUser: missing sessionReplaySaved/sessionReplay/revenue/segment cleanups; entityIds excluded website ids, so website-shares were orphaned.
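The cloud-gated ownedFilter shape described above can be sketched as follows (buildOwnedFilter is an illustrative name; the where-clause shapes are the ones quoted in this PR):

```typescript
type OwnedFilter = {
  userId?: string;
  OR?: Array<{ userId: string } | { teamId: { in: string[] } }>;
};

// Cloud mode soft-deletes only userId-owned rows (owned teams, and thus
// their websites, must survive); non-cloud hard-deletes rows owned by the
// user OR by any team the user owns.
function buildOwnedFilter(cloudMode: boolean, userId: string, teamIds: string[]): OwnedFilter {
  return cloudMode
    ? { userId }
    : { OR: [{ userId }, { teamId: { in: teamIds } }] };
}
```

Spreading this one object into every link/pixel/board/website query keeps the ownership scoping consistent across the cleanup.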
@anvme is attempting to deploy a commit to the Umami Software Team on Vercel. A member of the Team first needs to authorize it.
Greptile Summary

This PR fixes five related data-integrity bugs in the deleteUser/deleteTeam cleanup paths and the list queries.
Confidence Score: 3/5

The bulk-delete cleanup logic is correct, but the individual deletePixel/deleteLink functions lack Redis cache invalidation, allowing hard-deleted pixels/links to continue serving tracking events for up to 24 hours. The ownedFilter expansion, missing-table additions, and cloud-mode restamping guards all match the deleteWebsite gold standard. The stale-Redis gap on individual deletes is a concrete defect on the changed tracking path: p/slug and q/slug now depend on deletedAt: null correctness, but deletePixel/deleteLink bypass that via the 86400s cache. In src/queries/prisma/pixel.ts and src/queries/prisma/link.ts, the individual deletePixel and deleteLink functions need Redis cache invalidation to match the bulk-delete paths added by this PR.

Important Files Changed
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    DU[deleteUser / deleteTeam] --> PF[Pre-fetch: links, pixels, boards, websites via ownedFilter]
    PF --> CM{cloudMode?}
    CM -- yes --> CSoft[Cloud: website/link/pixel updateMany deletedAt, board.deleteMany, share.deleteMany, user/team soft-delete]
    CM -- no --> CHard[Non-cloud: session/event/replay/revenue/segment cleanup, teamUser/team, report/share, link/pixel/board, website.deleteMany, user/team.delete]
    CSoft --> Redis[invalidateRedis: del link:slug, pixel:slug, website:id]
    CHard --> Redis
    subgraph Routes
        PR[p/slug or q/slug] -- Redis hit --> RC[Serve cached pixel/link up to TTL]
        PR -- Redis miss --> DB[findPixel/findLink where slug AND deletedAt IS NULL]
    end
    Redis -.->|clears stale keys| RC
```
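The route path in the diagram can be sketched with a Map standing in for Redis and an array standing in for the database (lookupPixel and PixelRow are illustrative names; the 24h TTL behavior is simulated, not implemented):

```typescript
interface PixelRow {
  slug: string;
  deletedAt: Date | null;
}

// Cache-aside lookup: Redis hit serves until TTL, miss falls through to the
// database with the deletedAt: null filter and repopulates the cache.
function lookupPixel(
  cache: Map<string, PixelRow>,
  db: PixelRow[],
  slug: string,
): PixelRow | null {
  const key = `pixel:${slug}`;
  const hit = cache.get(key);
  if (hit) return hit; // served up to TTL, which is why deletes must DEL the key
  const row = db.find(r => r.slug === slug && r.deletedAt === null) ?? null;
  if (row) cache.set(key, row); // real code would SET with a 24h expiry
  return row;
}
```

This makes the failure mode concrete: a soft-delete that only touches the database leaves the cached entry serving until its TTL expires, unless the key is explicitly deleted.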
Address Greptile review feedback on umami-software#4245.

- deleteLink and deletePixel now redis.client.del('link:slug' / 'pixel:slug') using the slug returned by Prisma's delete(). Previously the row was hard-deleted but the Redis cache (24h TTL) kept serving the slug, so /q/<slug> and /p/<slug> kept firing for up to a day after deletion.
- updateLink and updatePixel now invalidate the cache for the current slug, and additionally for the previous slug if the slug was changed. Previously, changing a link's destination URL or slug left the public cache stale.
- Cloud-mode link.updateMany and pixel.updateMany in deleteUser now spread ownedFilter (which is { userId } in cloud mode) instead of hardcoding { userId }, so the cleanup intent stays consistent if ownedFilter ever evolves.

Verified empirically against a Docker Postgres + Redis stack: a deleted link's /q/<slug> returns 404 immediately (was: still redirected to the old URL for 24h); a slug rename invalidates both the old and new cache keys.
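The update-path invalidation above reduces to computing which keys to DEL (linkKeysToInvalidate is an illustrative helper, assuming the link:<slug> key shape described in this PR):

```typescript
// Keys to DEL after an update: always the current slug, plus the previous
// slug when it changed. Both must be cleared on rename, or the old slug
// keeps redirecting from cache while the new slug may serve a stale value.
function linkKeysToInvalidate(prevSlug: string, nextSlug: string): string[] {
  const keys = [`link:${nextSlug}`];
  if (prevSlug !== nextSlug) keys.push(`link:${prevSlug}`);
  return keys;
}
```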
Pushed 3c7fbad addressing all of Greptile's review:
Empirical verification

Reproduced the bug class against a Docker Postgres + Redis test stack:
For deployments with no Redis (e.g., Postgres-only or Postgres+ClickHouse without Redis), this entire bug class is invisible because every public-route request goes straight to Postgres; the fix is a no-op when Redis is not configured.

Thanks for catching this: the update-path version (F) was a real surprise that I'd have likely missed without the P1 prompt.
Follow-up to #4243. Stacks on top of that PR's branch; the GitHub diff currently includes #4243's commits too. Once #4243 merges, this branch will rebase cleanly onto dev and show only its own changes (~+58 / -32). For incremental review now, focus on commit 872d737.

Five adjacent bugs from the same family
- deleteTeam left team-owned websites orphaned
- Non-cloud deleteUser left team-owned websites orphaned when hard-deleting owned teams
- getTeamLinks/getUserPixels/getTeamPixels leaked soft-deleted entries into list views
- Cloud deleteUser restamped already-soft-deleted websites' deleted_at to the current time
- deleteUser (pre-existing): missing sessionReplaySaved/sessionReplay/revenue/segment cleanup; entityIds excluded website ids, so website-shares were orphaned

Approach
- Mirrors the deleteWebsite cleanup pattern (gold standard) inline in deleteUser + deleteTeam. The schema uses relationMode = "prisma", so deletion order is purely for readability; there are no DB FK constraints.
- ownedFilter extended to cover websites: cloud uses { userId } only (cloud preserves owned teams, so their websites must survive); non-cloud uses { OR: [{ userId }, { teamId: { in: teamIds } }] }.
- entityIds extended with websiteIds so the existing share.deleteMany covers website-shares too.
- Cloud-mode updateMany calls (link/pixel/website) get a deletedAt: null filter to prevent restamping (same shape as the link/pixel fix in the amend commit of "fix: clean up link, pixel, board rows on user/team deletion" #4243).
- Redis invalidation adds website:${id} cache keys (matches deleteWebsite's pattern).

Test plan
- pnpm build-app clean (no errors/warnings)
- /api/teams/[id]/links, /api/pixels, /api/teams/[id]/pixels return only the alive entries

Out of scope (separate PRs if maintainers want)
- Extract a cleanupWebsites(websiteIds) helper shared by deleteUser/deleteTeam/deleteWebsite
- Add a deletedAt column to board (Prisma migration)
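The proposed shared helper could be sketched as a plan-builder: a pure function that lists the deleteMany operations to run for a set of website ids. Everything here is a hedged sketch of the out-of-scope idea, not the PR's code; the table list and where-clause shape are assumptions drawn from the cleanup steps named in this PR:

```typescript
interface DeleteOp {
  table: string;
  where: { websiteId: { in: string[] } };
}

// Compute the dependent-row cleanup plan for a batch of websites. With
// relationMode = "prisma" there are no DB FK constraints, so the order is
// for readability only; a caller would execute these inside one transaction.
function cleanupWebsitesPlan(websiteIds: string[]): DeleteOp[] {
  const where = { websiteId: { in: websiteIds } };
  const tables = ['sessionReplaySaved', 'sessionReplay', 'revenue', 'segment', 'report'];
  return tables.map(table => ({ table, where }));
}
```

Centralizing the plan would keep deleteUser, deleteTeam, and deleteWebsite from drifting apart the way they did in this bug family.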