Conversation
When fraud reports are resolved without banning the partner, and optional notes are provided, they're added as a comment to the partner comments. But without context, it's not clear what the note pertains to. This addition provides context for any fraud notes left, but only for ones added after this feature lands. Because old comments are stored as plain text, backfilling previous resolution comments to display this way would be difficult and inaccurate.
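The structured-comment idea described above can be sketched as a small serializer/parser pair. This is only an illustration: the marker prefix, suffix, and field names below are assumptions inferred from this PR's tests, not the actual implementation.

```typescript
// Hypothetical marker format (inferred from this PR's tests, not confirmed):
//   [[dub-fraud-resolution:{"groupId":"...","type":"..."}]]optional note text
const PREFIX = '[[dub-fraud-resolution:';
const SUFFIX = ']]';

interface FraudResolutionMetadata {
  groupId: string;
  type: string;
}

function serializeFraudResolutionComment(
  metadata: FraudResolutionMetadata,
  note: string,
): string {
  return `${PREFIX}${JSON.stringify(metadata)}${SUFFIX}${note}`;
}

// Gracefully degrades: any text that doesn't match the marker format
// is returned unchanged as a plain note with null metadata.
function parseFraudResolutionComment(text: string): {
  metadata: FraudResolutionMetadata | null;
  note: string;
} {
  if (!text.startsWith(PREFIX)) return { metadata: null, note: text };

  const end = text.indexOf(SUFFIX);
  if (end === -1) return { metadata: null, note: text };

  try {
    const metadata = JSON.parse(text.slice(PREFIX.length, end));
    if (
      !metadata ||
      typeof metadata.groupId !== "string" ||
      typeof metadata.type !== "string"
    ) {
      return { metadata: null, note: text };
    }
    return { metadata, note: text.slice(end + SUFFIX.length) };
  } catch {
    return { metadata: null, note: text };
  }
}
```

Because legacy comments never start with the marker, the parser treats them as plain notes, which is why old rows need no migration to keep rendering.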
📝 Walkthrough

Adds structured fraud-resolution metadata to partner comments (DB + zod), introduces a serializer/parser and backfill script, changes the resolve flow to return resolved IDs and create per-group metadata comments, preserves/migrates metadata on updates, and surfaces parsed fraud-resolution info in the UI.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant WebAction as Web Action
    participant API as Resolve API
    participant DB as Database
    participant Comments as PartnerComments
    User->>WebAction: Trigger bulk resolve
    WebAction->>API: request resolveFraudGroups(payload)
    API->>DB: find pending fraud groups (select ids, type, partnerId, ...)
    API->>DB: update groups in transaction (set resolved fields)
    DB-->>API: updated rows / resolvedGroupIds
    API-->>WebAction: { count, resolvedGroupIds }
    WebAction->>Comments: build per-partner metadata and create partner comments (source, groupId, type, note)
    Comments->>DB: insert partnerComment records (with metadata)
    DB-->>Comments: confirm inserts
    WebAction-->>User: return summary (count, created comments)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/web/lib/actions/fraud/bulk-resolve-fraud-groups.ts`:
- Around lines 70-85: The current implementation uses `prisma.partnerComment.createMany` with `fraudGroups.map` to create a comment per group, which can produce duplicate comments per partner and also creates comments for groups that may not have actually been resolved by `resolveFraudGroups`. Instead, create comments only for the set of groups that were successfully resolved, and deduplicate per partner if the product expects one comment per partner. Specifically, after calling `resolveFraudGroups`, collect the IDs of the actually resolved groups from the function's return value, filter `fraudGroups` to those, then either (a) map to unique `partnerId` entries (e.g., `group.partnerId`) to create one comment per partner, or (b) create one comment per resolved group if per-group comments are desired. Update the `prisma.partnerComment.createMany` call site accordingly, referencing `fraudGroups`, `resolveFraudGroups`, `programId`, and `user.id` to implement the chosen behavior.
🧹 Nitpick comments (7)
apps/web/tests/misc/fraud-resolution-comment.test.ts (1)
72-105: Consider asserting `note` in the negative-path tests for completeness.

The tests for "missing suffix", "missing groupId", and "missing type" only assert that `metadata` is null but don't verify that `note` falls back to the full input text. Adding `expect(result.note).toBe(...)` assertions would strengthen coverage of the graceful-degradation behavior.

Example for the missing suffix test:

```diff
  expect(result.metadata).toBeNull();
+ expect(result.note).toBe(
+   '[[dub-fraud-resolution:{"groupId":"x","type":"y"}missing suffix',
+ );
```

apps/web/scripts/migrations/backfill-fraud-resolution-comment-metadata.ts (3)
8-23: Unbounded query loads all null-metadata comments into memory.

If the table is large, this could OOM the process. Consider cursor-based pagination for resilience.

Sketch: paginated fetching

```diff
- const comments = await prisma.partnerComment.findMany({
-   where: {
-     metadata: null,
-   },
-   select: { ... },
-   orderBy: { createdAt: "asc" },
- });
+ const BATCH_SIZE = 500;
+ let cursor: string | undefined;
+ let comments;
+ // Process in batches
+ do {
+   comments = await prisma.partnerComment.findMany({
+     where: { metadata: null },
+     select: { id: true, programId: true, partnerId: true, userId: true, text: true, createdAt: true },
+     orderBy: { createdAt: "asc" },
+     take: BATCH_SIZE,
+     ...(cursor ? { skip: 1, cursor: { id: cursor } } : {}),
+   });
+   if (comments.length === 0) break;
+   cursor = comments[comments.length - 1].id;
+   // ... process batch ...
+ } while (comments.length === BATCH_SIZE);
```
30-93: Individual updates lack error handling: a single failure aborts the entire migration.

Wrap each iteration in try-catch so one bad record doesn't prevent the rest from being processed.

Sketch: per-comment error handling

```diff
+ let errors = 0;
  for (const comment of comments) {
+   try {
      const legacy = parseFraudResolutionComment(comment.text);
      // ... existing logic ...
+   } catch (err) {
+     console.error(`Failed to process comment ${comment.id}:`, err);
+     errors++;
+   }
  }
```
107-107: Add `prisma.$disconnect()` before exit.

One-shot scripts should disconnect to avoid hanging connections.

Proposed fix

```diff
-main();
+main()
+  .catch((err) => {
+    console.error(err);
+    process.exit(1);
+  })
+  .finally(() => prisma.$disconnect());
```

apps/web/ui/partners/partner-comments.tsx (1)
422-428: `node` parameter is unused; destructure it to satisfy the linter.

The `node` prop from `react-markdown` components is destructured but not used. Some ESLint configurations may flag this. Consider using `_node` or omitting it if the rest-spread already excludes it.

Proposed fix

```diff
- a: ({ node, ...props }) => (
+ a: ({ node: _node, ...props }) => (
```

apps/web/lib/zod/schemas/programs.ts (1)
192-196: Consider using `fraudResolutionCommentMetadataSchema` for runtime validation in consuming code.

The schema is well-defined but unused: `getFraudResolutionCommentData` in `partner-comments.tsx` (lines 42-50) and `updatePartnerCommentAction` both perform manual type-narrowing with typeof checks and property-existence checks instead of leveraging `.safeParse()`. The schema structure matches the manual checks, so either apply schema validation in these functions or export an inferred type if the schema is meant only for documentation purposes.

apps/web/lib/fraud-resolution-comment.ts (1)
46-58: `metadata.type` is only validated as a string, not as a valid `FraudRuleType` value.

The runtime check on line 52 accepts any string for `type`, so a malformed or tampered comment could produce a `FraudResolutionCommentMetadata` with an invalid `FraudRuleType`. Since this is an internal utility and consumers likely handle display gracefully, it's low-risk; but if you want stricter guarantees, you could validate against the actual enum values.

🔧 Optional: validate against FraudRuleType enum values

Import the enum values (e.g., from Prisma) and check membership:

```diff
+import { FraudRuleType } from "@dub/prisma/client";
+
+const FRAUD_RULE_TYPES = new Set(Object.values(FraudRuleType));
+
 // Inside parseFraudResolutionComment, replace the type check:
 if (
   !metadata ||
   typeof metadata.groupId !== "string" ||
-  typeof metadata.type !== "string"
+  !FRAUD_RULE_TYPES.has(metadata.type)
 ) {
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/web/lib/api/fraud/resolve-fraud-groups.ts`:
- Line 34: The current code captures `resolvedAt` before calling `updateMany` and later re-queries groups using equality to that timestamp, which can collide if two transactions share the same millisecond. Change the re-query filter (the `findMany` block that currently compares `resolved_at === resolvedAt`) to use a range (e.g., `resolved_at >= resolvedAt && resolved_at <= new Date()`) and also include the resolver identity (e.g., `resolved_by` or `actorId`) in the filter so only groups this run could have updated are returned. Update the references to `resolvedAt`, the `updateMany` call, and the subsequent `findMany` re-query logic accordingly (or alternatively switch to a generated `batchId` if you adopt a schema change).
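A sketch of the suggested re-query filter as a Prisma-style `where` object. The field names (`resolvedAt`, `resolvedByUserId`) are assumptions for illustration; the actual schema may differ:

```typescript
// Build a re-query filter that tolerates same-millisecond collisions:
// a timestamp range bounded by "run start" and "now", narrowed by the
// resolver's identity so only rows this run could have updated match.
function buildResolvedGroupsFilter(resolvedAt: Date, userId: string) {
  return {
    resolvedAt: {
      gte: resolvedAt,  // lower bound: when this run stamped the rows
      lte: new Date(),  // upper bound: now
    },
    resolvedByUserId: userId, // exclude rows stamped by a concurrent resolver
  };
}
```

The identity filter matters because the range alone still admits a concurrent transaction by the same millisecond; a dedicated `batchId` column would remove the ambiguity entirely at the cost of a schema change.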
🧹 Nitpick comments (1)
apps/web/lib/actions/fraud/bulk-resolve-fraud-groups.ts (1)
82-95: Bulk dedup attaches only the first group's metadata per partner.

When multiple fraud groups for the same partner are resolved in one bulk call, the `uniquePartnersMap` keeps only the first group's `groupId` and `type`. The single-resolve path creates one comment per group (with distinct metadata), but here the partner gets one comment with metadata referencing only one of several resolved groups.

This may be fine if the `resolutionReason` text is the primary content and the metadata is supplementary. Just flagging the asymmetry so it's a conscious choice.
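If the asymmetry does matter, one option is to collect every resolved group per partner rather than keeping only the first. A sketch (the grouping helper and its shapes are hypothetical, not the PR's code):

```typescript
interface ResolvedGroup {
  id: string;
  partnerId: string;
  type: string;
}

// Group all resolved fraud groups by partner so a single comment's
// metadata can reference every group it covers, not just the first.
function groupResolvedByPartner(groups: ResolvedGroup[]) {
  const byPartner = new Map<string, { groupId: string; type: string }[]>();
  for (const g of groups) {
    const list = byPartner.get(g.partnerId) ?? [];
    list.push({ groupId: g.id, type: g.type });
    byPartner.set(g.partnerId, list);
  }
  return byPartner;
}
```

This would require the metadata schema to accept an array of group references, so it trades schema simplicity for completeness.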
Converting to a draft, and maybe deleting later as it's part of a much larger platform improvement for fraud detection. |
Review feasibility and need

When fraud reports are resolved without banning the partner, and optional notes are provided, they're added as a comment to the partner comments. But without context, it's not clear what the note pertains to.

This addition provides context for any fraud notes left, but only for ones added after this feature lands.

Because old comments are stored as plain text, it would be difficult and inaccurate to backfill previous resolution comments to display this way. In old bulk resolve flows, one note could represent multiple resolved fraud groups. The same note text can be reused across many resolutions, so text matching is ambiguous. Without unique linkage in historical data, we can't always know which exact fraud event a comment came from.