test(e2e): test harness + signal/prekey fixes uncovered along the way#2432
Conversation
Thanks for opening this pull request and contributing to the project! The next step is for the maintainers to review your changes. If everything looks good, it will be approved and merged into the main branch. In the meantime, anyone in the community is encouraged to test this pull request and provide feedback.

✅ How to confirm it works

If you've tested this PR, please comment below. This helps us speed up the review and merge process.

📦 To test this PR locally

If you encounter any issues or have feedback, feel free to comment as well.
Force-pushed 8bc2dcd to 0ea3cab
Force-pushed 0ea3cab to d604fc1
joao you madman, finally the PR has come here at last. Taking over
This PR is stale because it has been open for 14 days with no activity. Remove the stale label or comment, or this will be closed in 14 days.
📝 Walkthrough

Adds E2E testing infrastructure (GitHub Actions workflow, TestClient, mock-phone helper, and multiple e2e suites), updates test script behavior, and changes socket/session/media/pre-key logic (stanza de-duplication, retry simplification, removal of the client-side upload throttle, media host tracking), plus a small type addition (…).

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant CI as CI Workflow
    participant TestClient as TestClient
    participant MockPhone as Mock Phone HTTP
    participant WS as WebSocket Server
    participant Socket as Baileys Socket
    CI->>TestClient: start e2e job, set env & start mock server
    TestClient->>Socket: connect(socketUrl, authDir, pushName)
    Socket->>WS: open websocket (produce QR)
    WS-->>Socket: QR generated
    Socket-->>TestClient: emit QR event
    TestClient->>MockPhone: POST /admin/mock-phone/scan-qr (QR)
    MockPhone-->>TestClient: 200 OK
    WS->>Socket: complete pairing handshake
    Socket-->>TestClient: open event
    TestClient-->>CI: run e2e tests (messaging, groups, media)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30 minutes

Possibly related issues
Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: ✅ 5 passed
# Conflicts:
#	src/Socket/messages-recv.ts
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/Socket/socket.ts (1)
470-523: ⚠️ Potential issue | 🔴 Critical: Recursive retry deadlocks the in-flight upload promise.

If `query(node)` fails, the retry path calls `uploadPreKeys()` again while `uploadPreKeysPromise` still points at the outer attempt. The recursive invocation then waits on that same promise at Line 475, so transient failures turn into guaranteed `UPLOAD_TIMEOUT`s instead of real retries.

Proposed fix

```diff
-	const uploadPreKeys = async (count = MIN_PREKEY_COUNT, retryCount = 0) => {
+	const uploadPreKeys = async (count = MIN_PREKEY_COUNT) => {
 		if (uploadPreKeysPromise) {
 			logger.debug('Pre-key upload already in progress, waiting for completion')
-			await uploadPreKeysPromise
+			return uploadPreKeysPromise
 		}

 		const uploadLogic = async () => {
-			logger.info({ count, retryCount }, 'uploading pre-keys')
-
-			// Generate and save pre-keys atomically (prevents ID collisions on retry)
-			const node = await keys.transaction(async () => {
-				logger.debug({ requestedCount: count }, 'generating pre-keys with requested count')
-				const { update, node } = await getNextPreKeysNode({ creds, keys }, count)
-				// Update credentials immediately to prevent duplicate IDs on retry
-				ev.emit('creds.update', update)
-				return node // Only return node since update is already used
-			}, creds?.me?.id || 'upload-pre-keys')
-
-			// Upload to server (outside transaction, can fail without affecting local keys)
-			try {
-				await query(node)
-				logger.info({ count }, 'uploaded pre-keys successfully')
-			} catch (uploadError) {
-				logger.error({ uploadError: (uploadError as Error).toString(), count }, 'Failed to upload pre-keys to server')
-
-				// Exponential backoff retry (max 3 retries)
-				if (retryCount < 3) {
-					const backoffDelay = Math.min(1000 * Math.pow(2, retryCount), 10000)
-					logger.info(`Retrying pre-key upload in ${backoffDelay}ms`)
-					await new Promise(resolve => setTimeout(resolve, backoffDelay))
-					return uploadPreKeys(count, retryCount + 1)
-				}
-
-				throw uploadError
+			for (let retryCount = 0; retryCount < 4; retryCount++) {
+				logger.info({ count, retryCount }, 'uploading pre-keys')
+				try {
+					const node = await keys.transaction(async () => {
+						logger.debug({ requestedCount: count }, 'generating pre-keys with requested count')
+						const { update, node } = await getNextPreKeysNode({ creds, keys }, count)
+						ev.emit('creds.update', update)
+						return node
+					}, creds?.me?.id || 'upload-pre-keys')
+
+					await query(node)
+					logger.info({ count }, 'uploaded pre-keys successfully')
+					return
+				} catch (uploadError) {
+					logger.error({ uploadError: (uploadError as Error).toString(), count }, 'Failed to upload pre-keys to server')
+					if (retryCount === 3) {
+						throw uploadError
+					}
+
+					const backoffDelay = Math.min(1000 * Math.pow(2, retryCount), 10000)
+					logger.info(`Retrying pre-key upload in ${backoffDelay}ms`)
+					await new Promise(resolve => setTimeout(resolve, backoffDelay))
+				}
+			}
 		}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/Socket/socket.ts` around lines 470-523, the current retry path deadlocks because uploadPreKeys() is called recursively while uploadPreKeysPromise still references the outer attempt; change the retry to retry the inner upload logic instead of calling uploadPreKeys again. Refactor uploadLogic (used in uploadPreKeys) to accept a retryCount param (or convert it to a loop) and replace the recursive call `return uploadPreKeys(count, retryCount + 1)` with a call to the inner uploader (e.g., `return uploadLogic(count, retryCount + 1)`), so retries bypass the outer uploadPreKeysPromise; keep the transaction via getNextPreKeysNode/keys.transaction and preserve the timeout wrapping around the outer uploadPreKeysPromise.
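Outside the Baileys codebase, the single-flight-plus-loop shape this comment argues for can be sketched standalone. Everything here is invented for illustration (`makeSingleFlightUploader`, the `attempt` callback standing in for the transaction and `query(node)` upload); it only demonstrates why the retry loop must live inside the shared promise rather than re-enter through it.

```typescript
// Sketch: a single-flight wrapper whose retries happen *inside* the shared
// promise. Re-entrant callers join the in-flight attempt instead of awaiting
// a promise that can never resolve while its own retry waits on it.
type Uploader = () => Promise<void>

const makeSingleFlightUploader = (attempt: Uploader, maxRetries = 3, baseDelayMs = 1): Uploader => {
	let inFlight: Promise<void> | undefined

	return () => {
		// Re-entrant call: join the attempt already in progress.
		if (inFlight) return inFlight

		inFlight = (async () => {
			for (let retry = 0; ; retry++) {
				try {
					await attempt()
					return
				} catch (err) {
					if (retry >= maxRetries) throw err
					// Exponential backoff, capped (mirrors the review's 10s cap)
					const backoff = Math.min(baseDelayMs * 2 ** retry, 10_000)
					await new Promise(res => setTimeout(res, backoff))
				}
			}
		})().finally(() => {
			inFlight = undefined
		})

		return inFlight
	}
}

// Usage sketch: two transient failures, then success. A second caller that
// arrives during the retry window shares the same flight.
let calls = 0
const upload = makeSingleFlightUploader(async () => {
	calls++
	if (calls < 3) throw new Error('transient')
})

const done = Promise.all([upload(), upload()]).then(() => calls)
```

The key difference from the buggy shape: the loop retries `attempt` directly, so the guard promise stays owned by one logical upload from first try to final outcome.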
🧹 Nitpick comments (2)
src/__tests__/e2e/groups.test-e2e.ts (1)
86-97: Test assumes `groupMetadata()` bypasses cache after participant update.

The comment "stale cache would still report 2 participants" implies testing cache invalidation, but the test doesn't explicitly verify the cache mechanism. If `groupMetadata()` always fetches fresh data from the server (or the mock server), this test would pass regardless of cache behavior.

Consider adding a comment clarifying whether this tests actual cache invalidation logic in Baileys or the mock server's state consistency.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/__tests__/e2e/groups.test-e2e.ts` around lines 86-97, the test "group metadata cache invalidates on participant add" currently calls groupMetadata and groupParticipantsUpdate but doesn't prove cache behavior; either add a clarifying comment stating this is only verifying server/mock state consistency (not internal cache invalidation), or change the test to explicitly exercise the cache: seed a stale value into the client's cache (or spy/mock groupMetadata to return a cached response), call groupParticipantsUpdate, then call groupMetadata and assert that the client refreshed the cache and returned the updated participants; reference groupMetadata and groupParticipantsUpdate (and the test name) when making the change.

src/__tests__/e2e/helpers/test-client.ts (1)

253-262: Consider filtering by upsert type to reduce false-positive matches.

`waitForMessage` currently evaluates every `messages.upsert`. Restricting to `type === 'notify'` makes waits less likely to match replay/append traffic during sync.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/__tests__/e2e/helpers/test-client.ts` around lines 253-262, the waitForMessage handler in waitForMessage currently scans every 'messages.upsert' event and can match replay/append traffic; modify the handler inside waitForMessage to only inspect events where the upsert type is 'notify' (e.g., destructure { messages, type } and return early unless type === 'notify') before finding a matching message with the provided predicate, and keep the same subscription/unsubscription to this.sock.ev for 'messages.upsert' using the updated handler.
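The notify-only wait this nitpick describes can be sketched against a plain emitter. The event and message shapes below are simplified stand-ins, not Baileys' real `messages.upsert` payload types.

```typescript
// Sketch: only live ('notify') upserts satisfy the wait; history replay and
// 'append' traffic are skipped even when a message matches the predicate.
import { EventEmitter } from 'node:events'

interface FakeMessage {
	id: string
	text: string
}

interface FakeUpsert {
	type: 'notify' | 'append'
	messages: FakeMessage[]
}

const waitForMessage = (ev: EventEmitter, predicate: (m: FakeMessage) => boolean): Promise<FakeMessage> =>
	new Promise(resolve => {
		const handler = ({ type, messages }: FakeUpsert) => {
			// Ignore replay/append traffic during sync; only live messages count.
			if (type !== 'notify') return

			const match = messages.find(predicate)
			if (match) {
				ev.off('messages.upsert', handler)
				resolve(match)
			}
		}

		ev.on('messages.upsert', handler)
	})

// usage: an 'append' carrying the same text is skipped; the 'notify' wins
const ev = new EventEmitter()
const waiting = waitForMessage(ev, m => m.text === 'hello')
ev.emit('messages.upsert', { type: 'append', messages: [{ id: 'a', text: 'hello' }] })
ev.emit('messages.upsert', { type: 'notify', messages: [{ id: 'b', text: 'hello' }] })
```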
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/e2e.yml:
- Around line 7-35: The workflow currently uses the pull_request_target event
(symbol: pull_request_target) which exposes repository secrets (e.g.,
secrets.BARTENDER_GHCR_TOKEN) to untrusted PR code checked out by the checkout
step that uses the PR head ref; change the trigger to pull_request or add a
guard to restrict execution to same-repository PRs by checking
github.event.pull_request.head.repo.full_name equals github.repository before
running the e2e job (job: e2e) and/or before starting the mock-server service
and checkout step, ensuring secrets are not passed to untrusted forked PR code.
In `@src/__tests__/e2e/helpers/mock-phone.ts`:
- Around line 11-36: postQrToMockPhone uses httpsRequest unconditionally which
breaks when wsToHttp(socketUrl) yields an http URL; update postQrToMockPhone to
inspect the protocol from wsToHttp(socketUrl) and call the correct request
function (http.request for 'http:' and https.request for 'https:') and pass the
same options (host, port, path, method, rejectUnauthorized where applicable)
while keeping error and response handling intact; reference postQrToMockPhone,
wsToHttp, httpsRequest, and ADMIN_PATH when making this change.
In `@src/__tests__/e2e/helpers/test-client.ts`:
- Around line 93-126: The connection bring-up promise in test-client (the new
Promise around sock.ev.on('connection.update')) can hang indefinitely because
attemptConnect/openConnection have no timeout or capped retries; add a bounded
retry/count and exponential backoff in openConnection and make attemptConnect
support an overall timeout parameter, then reject/resolve the attempt Promise
when the timeout elapses; update the connection.update handler to clear the
event listener and resolve with an error kind (e.g., { kind: 'error', error: new
Error("connect timeout") }) on timeout to avoid hanging beforeAll, and ensure
any successful/terminal resolution (open, logged-out, reconnect) also removes
the listener to prevent leaks.
In `@src/__tests__/e2e/send-receive-message.test-e2e.ts`:
- Around line 114-119: The test named "should send a poll message and receive a
vote" currently only sends a poll (via tc.sock.sendMessage) and asserts
pollCreationMessageV3.name; update the test so its intent matches the
assertions: either rename the test to reflect poll creation only, or extend it
to simulate a vote and assert the poll update; specifically after creating the
poll (pollCreationMessageV3) send a vote/update (using tc.sock.sendMessage to
the same JID with the appropriate poll update payload) and add an expectation
that the response contains pollUpdateMessageV3 (or the relevant vote field)
showing the chosen option and voter identity.
In `@src/Socket/messages-recv.ts`:
- Around line 524-546: The dedupe Set in handleEncryptNotification is
added/removed around uploadPreKeys inside notificationMutex, so queued
duplicates still run because the first call hasn't added the id until after
mutex release; fix by performing the dedupe check and adding the stanza id to
inFlightPreKeyLow before acquiring notificationMutex (or at the very start of
handleNotification) so subsequent queued notifications see the id and return
early; ensure the id is removed in the same finally after uploadPreKeys
completes (references: handleEncryptNotification, handleNotification,
notificationMutex, inFlightPreKeyLow, uploadPreKeys).
In `@src/Utils/messages-media.ts`:
- Around line 503-513: The module-level mutable _mediaHost (with DEF_MEDIA_HOST,
setMediaHost and getMediaHost) is shared across all socket instances and can be
clobbered when multiple sockets exist; change the design to scope media host
per-socket by moving _mediaHost into the socket instance/state (e.g., add a
mediaHost property on the socket object or connection state) and replace global
setMediaHost/getMediaHost calls with instance methods or accessors (or pass host
as an argument into the media download functions) so each socket maintains its
own host, ensuring callers use the socket-specific API instead of the
module-level functions.
---
Outside diff comments:
In `@src/Socket/socket.ts`:
- Around line 470-523: The current retry path deadlocks because uploadPreKeys()
is called recursively while uploadPreKeysPromise still references the outer
attempt; change the retry to retry the inner upload logic instead of calling
uploadPreKeys again. Refactor uploadLogic (used in uploadPreKeys) to accept a
retryCount param (or convert it to a loop) and replace the recursive call
"return uploadPreKeys(count, retryCount + 1)" with a call to the inner uploader
(e.g., "return uploadLogic(count, retryCount + 1)"), so retries bypass the outer
uploadPreKeysPromise; keep the transaction via
getNextPreKeysNode/keys.transaction and preserve the timeout wrapping around the
outer uploadPreKeysPromise.
---
Nitpick comments:
In `@src/__tests__/e2e/groups.test-e2e.ts`:
- Around line 86-97: The test "group metadata cache invalidates on participant
add" currently calls groupMetadata and groupParticipantsUpdate but doesn't prove
cache behavior; either add a clarifying comment stating this is only verifying
server/mock state consistency (not internal cache invalidation), or change the
test to explicitly exercise the cache: seed a stale value into the client's
cache (or spy/mock groupMetadata to return a cached response), call
groupParticipantsUpdate, then call groupMetadata and assert that the client
refreshed the cache and returned the updated participants; reference
groupMetadata and groupParticipantsUpdate (and the test name) when making the
change.
In `@src/__tests__/e2e/helpers/test-client.ts`:
- Around line 253-262: The waitForMessage handler in waitForMessage currently
scans every 'messages.upsert' event and can match replay/append traffic; modify
the handler inside waitForMessage to only inspect events where the upsert type
is 'notify' (e.g., destructure { messages, type } and return early unless type
=== 'notify') before finding a matching message with the provided predicate, and
keep the same subscription/unsubscription to this.sock.ev for 'messages.upsert'
using the updated handler.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f5f879ca-d313-4580-bf19-97f5372fea1c
📒 Files selected for processing (14)
- .github/workflows/e2e.yml
- package.json
- src/Defaults/index.ts
- src/Socket/messages-recv.ts
- src/Socket/messages-send.ts
- src/Socket/socket.ts
- src/Types/Socket.ts
- src/Utils/messages-media.ts
- src/Utils/validate-connection.ts
- src/__tests__/e2e/groups.test-e2e.ts
- src/__tests__/e2e/helpers/mock-phone.ts
- src/__tests__/e2e/helpers/test-client.ts
- src/__tests__/e2e/messaging.test-e2e.ts
- src/__tests__/e2e/send-receive-message.test-e2e.ts
💤 Files with no reviewable changes (1)
- src/Defaults/index.ts
```yaml
  pull_request_target:
    branches:
      - master

permissions:
  contents: read

jobs:
  e2e:
    name: E2E Tests
    runs-on: ubuntu-latest
    timeout-minutes: 10
    services:
      mock-server:
        image: ghcr.io/whiskeysockets-devtools/bartender:latest
        credentials:
          username: ${{ github.actor }}
          password: ${{ secrets.BARTENDER_GHCR_TOKEN }}
        ports:
          - 8080:8080
        env:
          CHATSTATE_TTL_SECS: "3"
          ADV_SECRET_KEY: "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
        options: --log-driver none

    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha || github.sha }}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
rg -n -C2 'pull_request_target|github\.event\.pull_request\.head\.sha|secrets\.BARTENDER_GHCR_TOKEN' .github/workflows/e2e.yml
```

Repository: WhiskeySockets/Baileys
Length of output: 508

🏁 Script executed:

```shell
cat -n .github/workflows/e2e.yml | head -80
```

Repository: WhiskeySockets/Baileys
Length of output: 2454
Critical: untrusted PR code runs with repository secrets exposed.
Line 7 triggers pull_request_target, which executes untrusted fork code with base-repository secrets. Line 35 checks out the PR head SHA, and lines 56 and 74 execute package managers and tests from that code. This allows malicious PRs to exfiltrate secrets.BARTENDER_GHCR_TOKEN (line 24) through install scripts, build hooks, or test code.
Replace pull_request_target with pull_request, or gate the job to same-repository PRs only (filter via github.event.pull_request.head.repo.full_name). Do not expose secrets to untrusted code paths.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/e2e.yml around lines 7 - 35, The workflow currently uses
the pull_request_target event (symbol: pull_request_target) which exposes
repository secrets (e.g., secrets.BARTENDER_GHCR_TOKEN) to untrusted PR code
checked out by the checkout step that uses the PR head ref; change the trigger
to pull_request or add a guard to restrict execution to same-repository PRs by
checking github.event.pull_request.head.repo.full_name equals github.repository
before running the e2e job (job: e2e) and/or before starting the mock-server
service and checkout step, ensuring secrets are not passed to untrusted forked
PR code.
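The same-repository guard described above is conventionally expressed as a job-level `if:` condition. A sketch (the job name and secret are from this workflow; the `if:` line is the suggested addition):

```yaml
jobs:
  e2e:
    name: E2E Tests
    # Run with secrets only for branches of this repository, never for forks
    if: github.event.pull_request.head.repo.full_name == github.repository
    runs-on: ubuntu-latest
```

Switching the trigger to plain `pull_request` remains the simpler option when the mock-server image can be pulled without the `BARTENDER_GHCR_TOKEN` secret.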
```typescript
const result = await new Promise<AttemptResult>(resolve => {
	let qrScanned = false

	sock.ev.on('connection.update', update => {
		const { connection, lastDisconnect, qr } = update

		if (qr && !qrScanned) {
			qrScanned = true
			postQrToMockPhone(config.socketUrl, qr).catch(error => resolve({ kind: 'error', error: toError(error) }))
			return
		}

		if (connection === 'open') {
			if (!sock.user?.id) {
				resolve({ kind: 'error', error: new Error('socket reported open without user.id') })
				return
			}

			resolve({ kind: 'open', meJid: jidNormalizedUser(sock.user.id), meLid: sock.user.lid })
			return
		}

		if (connection === 'close') {
			const reason = getDisconnectReason(lastDisconnect)
			if (reason === DisconnectReason.loggedOut) {
				resolve({ kind: 'logged-out' })
				return
			}

			const label = reason !== undefined ? (DisconnectReason[reason] ?? String(reason)) : 'unknown'
			resolve({ kind: 'reconnect', reason: label })
		}
	})
})
```
Connection bring-up can hang indefinitely under server/network failure.
attemptConnect has no attempt timeout, and openConnection retries forever (Line 170) with no cap/backoff. This can stall beforeAll and burn CI time when the mock server is unhealthy.
🔧 Proposed reliability fix (timeout + bounded retries)

```diff
+const CONNECT_ATTEMPTS = 5
+const RETRY_DELAY_MS = 1_000
+
 const attemptConnect = async (config: ResolvedConfig): Promise<AttemptOutcome> => {
 	const { state, saveCreds } = await useMultiFileAuthState(config.authDir)
 	state.creds.advSecretKey = config.advSecretKey
@@
-	const result = await new Promise<AttemptResult>(resolve => {
+	const result = await waitWithTimeout<AttemptResult>(emit => {
 		let qrScanned = false
@@
-				postQrToMockPhone(config.socketUrl, qr).catch(error => resolve({ kind: 'error', error: toError(error) }))
+				postQrToMockPhone(config.socketUrl, qr).catch(error => emit({ kind: 'error', error: toError(error) }))
 				return
 			}
@@
-					resolve({ kind: 'error', error: new Error('socket reported open without user.id') })
+					emit({ kind: 'error', error: new Error('socket reported open without user.id') })
 					return
 				}
-				resolve({ kind: 'open', meJid: jidNormalizedUser(sock.user.id), meLid: sock.user.lid })
+				emit({ kind: 'open', meJid: jidNormalizedUser(sock.user.id), meLid: sock.user.lid })
 				return
 			}
@@
-					resolve({ kind: 'logged-out' })
+					emit({ kind: 'logged-out' })
 					return
 				}
@@
-				resolve({ kind: 'reconnect', reason: label })
+				emit({ kind: 'reconnect', reason: label })
 			}
 		})
-	})
+		return () => sock.ev.removeAllListeners('connection.update')
+	}, DEFAULT_TIMEOUT_MS, 'connection.update')
@@
 const openConnection = async (config: ResolvedConfig): Promise<{ sock: Socket; meJid: string; meLid?: string }> => {
-	for (;;) {
+	for (let attempt = 1; attempt <= CONNECT_ATTEMPTS; attempt++) {
 		const { sock, result } = await attemptConnect(config)
@@
 		if (result.kind === 'error') {
 			throw result.error
 		}
+		if (attempt === CONNECT_ATTEMPTS) {
+			throw new Error(`Failed to connect after ${CONNECT_ATTEMPTS} attempts (last reason: ${result.reason})`)
+		}
+		await new Promise(resolve => setTimeout(resolve, RETRY_DELAY_MS))
 		config.logger.debug({ reason: result.reason }, 'reconnecting after pairing handoff')
 	}
+	throw new Error('unreachable')
 }
```

Also applies to: 169-189
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/__tests__/e2e/helpers/test-client.ts` around lines 93 - 126, The
connection bring-up promise in test-client (the new Promise around
sock.ev.on('connection.update')) can hang indefinitely because
attemptConnect/openConnection have no timeout or capped retries; add a bounded
retry/count and exponential backoff in openConnection and make attemptConnect
support an overall timeout parameter, then reject/resolve the attempt Promise
when the timeout elapses; update the connection.update handler to clear the
event listener and resolve with an error kind (e.g., { kind: 'error', error: new
Error("connect timeout") }) on timeout to avoid hanging beforeAll, and ensure
any successful/terminal resolution (open, logged-out, reconnect) also removes
the listener to prevent leaks.
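The `waitWithTimeout` helper the proposed diff assumes does not exist in the codebase; a minimal sketch of what it could look like follows. The signature (a `subscribe` callback that returns its own teardown function) is an assumption taken from how the diff uses it.

```typescript
// Sketch: race a subscription against a deadline. Whichever side settles
// first wins, and the listener teardown returned by `subscribe` always runs.
const waitWithTimeout = <T>(
	subscribe: (emit: (value: T) => void) => () => void,
	timeoutMs: number,
	label: string
): Promise<T> =>
	new Promise<T>((resolve, reject) => {
		let settled = false
		let unsubscribe: (() => void) | undefined

		const finish = (settle: () => void) => {
			if (settled) return
			settled = true
			clearTimeout(timer)
			unsubscribe?.()
			settle()
		}

		const timer = setTimeout(
			() => finish(() => reject(new Error(`timed out after ${timeoutMs}ms waiting for ${label}`))),
			timeoutMs
		)

		unsubscribe = subscribe(value => finish(() => resolve(value)))
		// Handle the rare case where emit fired synchronously inside subscribe,
		// before `unsubscribe` was assigned.
		if (settled) unsubscribe()
	})

// usage: a source that emits shortly after subscribing resolves before the deadline
const eventual = waitWithTimeout<string>(emit => {
	const t = setTimeout(() => emit('open'), 10)
	return () => clearTimeout(t)
}, 1_000, 'connection.update')
```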
```typescript
// Mirrors WAWeb/Handle/PreKeyLow.js: skip a re-issued notification with the same stanza id.
const inFlightPreKeyLow = new Set<string>()

const handleEncryptNotification = async (node: BinaryNode) => {
	const from = node.attrs.from
	if (from === S_WHATSAPP_NET) {
		const stanzaId = node.attrs.id
		if (stanzaId && inFlightPreKeyLow.has(stanzaId)) {
			return
		}

		const countChild = getBinaryNodeChild(node, 'count')
		const count = +countChild!.attrs.value!
		const shouldUploadMorePreKeys = count < MIN_PREKEY_COUNT

		logger.debug({ count, shouldUploadMorePreKeys }, 'recv pre-key count')
		if (shouldUploadMorePreKeys) {
			if (stanzaId) inFlightPreKeyLow.add(stanzaId)
			try {
				await uploadPreKeys()
			} finally {
				if (stanzaId) inFlightPreKeyLow.delete(stanzaId)
			}
```
This dedupe set never catches queued duplicate PreKeyLow notifications.
handleNotification() runs processNotification() under notificationMutex, so a reissued stanza with the same id does not reach this block until the first uploadPreKeys() call has already finished and the finally at Line 545 has removed the id. In practice, queued duplicates still upload once each.
Possible fix

```diff
-	// Mirrors WAWeb/Handle/PreKeyLow.js: skip a re-issued notification with the same stanza id.
-	const inFlightPreKeyLow = new Set<string>()
+	// Keep recently handled stanza ids long enough to suppress queued re-issues too.
+	const recentPreKeyLow = new NodeCache<boolean>({ stdTTL: 30, useClones: false })

 	const handleEncryptNotification = async (node: BinaryNode) => {
 		const from = node.attrs.from
 		if (from === S_WHATSAPP_NET) {
 			const stanzaId = node.attrs.id
-			if (stanzaId && inFlightPreKeyLow.has(stanzaId)) {
+			if (stanzaId && (await recentPreKeyLow.get(stanzaId))) {
 				return
 			}

 			const countChild = getBinaryNodeChild(node, 'count')
 			const count = +countChild!.attrs.value!
 			const shouldUploadMorePreKeys = count < MIN_PREKEY_COUNT

 			logger.debug({ count, shouldUploadMorePreKeys }, 'recv pre-key count')
 			if (shouldUploadMorePreKeys) {
-				if (stanzaId) inFlightPreKeyLow.add(stanzaId)
-				try {
-					await uploadPreKeys()
-				} finally {
-					if (stanzaId) inFlightPreKeyLow.delete(stanzaId)
-				}
+				if (stanzaId) await recentPreKeyLow.set(stanzaId, true)
+				await uploadPreKeys()
 			}
 		} else {
```
} else {🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/Socket/messages-recv.ts` around lines 524 - 546, The dedupe Set in
handleEncryptNotification is added/removed around uploadPreKeys inside
notificationMutex, so queued duplicates still run because the first call hasn't
added the id until after mutex release; fix by performing the dedupe check and
adding the stanza id to inFlightPreKeyLow before acquiring notificationMutex (or
at the very start of handleNotification) so subsequent queued notifications see
the id and return early; ensure the id is removed in the same finally after
uploadPreKeys completes (references: handleEncryptNotification,
handleNotification, notificationMutex, inFlightPreKeyLow, uploadPreKeys).
Actionable comments posted: 3
♻️ Duplicate comments (1)
src/Socket/messages-recv.ts (1)
580-596: ⚠️ Potential issue | 🟠 Major: Queued `PreKeyLow` duplicates still bypass this dedupe.

`handleNotification()` serializes notifications behind `notificationMutex`, so a duplicate stanza only reaches Line 581 after Line 595 has already removed the id from `inFlightPreKeyLow`. In practice, queued re-issues can still trigger one `uploadPreKeys()` each.

Possible fix

```diff
-	// Mirrors WAWeb/Handle/PreKeyLow.js: skip a re-issued notification with the same stanza id.
-	const inFlightPreKeyLow = new Set<string>()
+	// Keep handled stanza ids around long enough to suppress queued re-issues too.
+	const recentPreKeyLow = new NodeCache<boolean>({ stdTTL: 30, useClones: false })
@@
-			const stanzaId = node.attrs.id
-			if (stanzaId && inFlightPreKeyLow.has(stanzaId)) {
+			const stanzaId = node.attrs.id
+			if (stanzaId && (await recentPreKeyLow.get(stanzaId))) {
 				return
 			}
@@
 			logger.debug({ count, shouldUploadMorePreKeys }, 'recv pre-key count')
 			if (shouldUploadMorePreKeys) {
-				if (stanzaId) inFlightPreKeyLow.add(stanzaId)
-				try {
-					await uploadPreKeys()
-				} finally {
-					if (stanzaId) inFlightPreKeyLow.delete(stanzaId)
-				}
+				if (stanzaId) await recentPreKeyLow.set(stanzaId, true)
+				await uploadPreKeys()
 			}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/Socket/messages-recv.ts` around lines 580-596, the dedupe set inFlightPreKeyLow allows queued duplicates to start their own upload because the original clears the id before later queued handlers reach the initial check; change inFlightPreKeyLow from a Set to a Map<string, Promise<void>> and store the upload promise keyed by stanzaId when starting upload (in the block around stanzaId, shouldUploadMorePreKeys and uploadPreKeys). On entry check map.has(stanzaId) and if present await the existing promise instead of starting a new upload; when creating the upload, set map.set(stanzaId, promise) where promise runs uploadPreKeys() and in its finally deletes the map entry. This ensures handleNotification callers see in-flight uploads and wait rather than spawn duplicates.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/__tests__/e2e/groups.test-e2e.ts`:
- Around line 65-75: The test "alice removes charlie from a group" only asserts
bob receives messages but must also assert charlie does not; after calling
alice.sock.groupParticipantsUpdate(...) and sending the message, add a waiter
like const charlieReceives = charlie.waitForText(text, { remoteJid: groupJid })
and assert it does not resolve (e.g. await
expect(charlieReceives).rejects.toThrow() or await
expect(Promise.race([charlieReceives,
delay(timeout)])).resolves.toBe(timeoutMarker)) so the test fails if charlie
receives the message; use the existing variables and methods (alice, charlie,
groupParticipantsUpdate, waitForText, sendMessage) to implement this negative
assertion.
- Around line 93-97: The test should assert the result of the participant add
operation before checking group metadata: capture the promise returned by
alice.sock.groupParticipantsUpdate(groupJid, [charlie.meJid], 'add') into a
variable, assert that the update result indicates success (e.g., success
status/ack for the added JID) and/or no error, then call
alice.sock.groupMetadata(groupJid) and assert after.participants.length === 3;
ensure you reference the update call (groupParticipantsUpdate), the target JID
(charlie.meJid) and the metadata call (groupMetadata/after) when adding the
assertion.
- Around line 12-19: The current createGroup helper uses
waitForEvent('groups.upsert', g => g.some(x => x.subject === subject)) which can
falsely match unrelated upserts that share the same subject; update the
seenByMembers predicate to require the expected participants (and/or creator) as
well as the subject. Concretely, in createGroup change seenByMembers to compute
the expectedJids = [creator.meJid, ...members.map(m => m.meJid)] and
waitForEvent('groups.upsert', g => g.some(evt => evt.subject === subject &&
expectedJids.every(j => evt.participants?.some(p => p.jid === j)))) so the
barrier only resolves when the upsert contains the same subject plus the
expected member JIDs (or other unique group identifier if available).
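The negative assertion suggested in the first inline comment above (racing the "charlie receives" promise against a timeout marker) can be sketched in isolation. `timeoutMarker` and `notWithin` are invented names for the example, not existing test-suite helpers.

```typescript
// Sketch: Promise.race a delivery promise against a sentinel that resolves
// after `ms`. If the sentinel wins, the message was NOT delivered in time,
// which is exactly what a negative delivery assertion wants to see.
const timeoutMarker = Symbol('timeout')

const notWithin = <T>(p: Promise<T>, ms: number): Promise<T | typeof timeoutMarker> =>
	Promise.race([p, new Promise<typeof timeoutMarker>(res => setTimeout(() => res(timeoutMarker), ms))])

// usage: a delivery promise that never settles loses the race to the marker,
// so the assertion `outcome === timeoutMarker` passes
const neverReceives = new Promise<string>(() => {})
const outcome = notWithin(neverReceives, 20)
```

In the actual test, `neverReceives` would be replaced by something like `charlie.waitForText(text, { remoteJid: groupJid })`, and a resolved value instead of the marker would fail the test.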
---
Duplicate comments:
In `@src/Socket/messages-recv.ts`:
- Around line 580-596: The dedupe set inFlightPreKeyLow allows queued duplicates
to start their own upload because the original clears the id before later queued
handlers reach the initial check; change inFlightPreKeyLow from a Set to a
Map<string, Promise<void>> and store the upload promise keyed by stanzaId when
starting upload (in the block around stanzaId, shouldUploadMorePreKeys and
uploadPreKeys). On entry check map.has(stanzaId) and if present await the
existing promise instead of starting a new upload; when creating the upload, set
map.set(stanzaId, promise) where promise runs uploadPreKeys() and in its finally
deletes the map entry. This ensures handleNotification callers see in-flight
uploads and wait rather than spawn duplicates.
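The Map-of-promises pattern this prompt describes can be sketched independently of Baileys. `handlePreKeyLow` and the counting `uploadPreKeys` stand-in below are invented for illustration; only the dedupe mechanism matches the prompt.

```typescript
// Sketch: dedupe keyed by stanza id, where the map value is the in-flight
// upload promise itself. Duplicates join the existing promise instead of
// starting a second upload, and the entry is removed once the upload settles.
const inFlight = new Map<string, Promise<void>>()
let uploads = 0 // stand-in for the real uploadPreKeys side effect

const uploadPreKeys = async (): Promise<void> => {
	uploads++
	await new Promise(res => setTimeout(res, 10)) // simulate server round-trip
}

const handlePreKeyLow = (stanzaId: string): Promise<void> => {
	const existing = inFlight.get(stanzaId)
	if (existing) return existing // duplicate stanza: await the same upload

	const p = uploadPreKeys().finally(() => inFlight.delete(stanzaId))
	inFlight.set(stanzaId, p)
	return p
}

// usage: three re-issues of one stanza id collapse into a single upload
const done = Promise.all([handlePreKeyLow('abc'), handlePreKeyLow('abc'), handlePreKeyLow('abc')])
```

Because the promise stays in the map until `finally` runs, even callers queued behind a mutex that arrive mid-upload observe and join the in-flight attempt.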
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1288b7c1-b09f-4d0a-8db9-24040e6448b1
📒 Files selected for processing (8)
- package.json
- src/Defaults/index.ts
- src/Socket/chats.ts
- src/Socket/messages-recv.ts
- src/Socket/messages-send.ts
- src/Utils/messages-media.ts
- src/__tests__/e2e/groups.test-e2e.ts
- src/__tests__/e2e/helpers/test-client.ts
💤 Files with no reviewable changes (1)
- src/Defaults/index.ts
✅ Files skipped from review due to trivial changes (1)
- src/Socket/chats.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- package.json
- src/Utils/messages-media.ts
- src/Socket/messages-send.ts
- src/__tests__/e2e/helpers/test-client.ts
```typescript
const createGroup = async (creator: TestClient, subject: string, members: TestClient[]): Promise<string> => {
	const seenByMembers = members.map(m => m.waitForEvent('groups.upsert', g => g.some(x => x.subject === subject)))
	const { id } = await creator.sock.groupCreate(
		subject,
		members.map(m => m.meJid)
	)
	await Promise.all(seenByMembers)
	return id
}
```
Harden group materialization barrier matching to avoid false positives.
On Line 13, waiting by subject alone can match unrelated groups.upsert events when subjects repeat across runs, which can bypass the intended barrier and reintroduce send races.
Suggested fix
const createGroup = async (creator: TestClient, subject: string, members: TestClient[]): Promise<string> => {
- const seenByMembers = members.map(m => m.waitForEvent('groups.upsert', g => g.some(x => x.subject === subject)))
+ const uniqueSubject = `${subject}-${Date.now()}`
+ const seenByMembers = members.map(m =>
+ m.waitForEvent('groups.upsert', g => g.some(x => x.subject === uniqueSubject))
+ )
const { id } = await creator.sock.groupCreate(
- subject,
+ uniqueSubject,
members.map(m => m.meJid)
)
await Promise.all(seenByMembers)
return id
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/__tests__/e2e/groups.test-e2e.ts` around lines 12 - 19, The current
createGroup helper uses waitForEvent('groups.upsert', g => g.some(x => x.subject
=== subject)) which can falsely match unrelated upserts that share the same
subject; update the seenByMembers predicate to require the expected participants
(and/or creator) as well as the subject. Concretely, in createGroup change
seenByMembers to compute the expectedJids = [creator.meJid, ...members.map(m =>
m.meJid)] and waitForEvent('groups.upsert', g => g.some(evt => evt.subject ===
subject && expectedJids.every(j => evt.participants?.some(p => p.jid === j))))
so the barrier only resolves when the upsert contains the same subject plus the
expected member JIDs (or other unique group identifier if available).
test('alice removes charlie from a group', async () => {
	const groupJid = await createGroup(alice, 'Remove Test Group', [bob, charlie])

	const removeResult = await alice.sock.groupParticipantsUpdate(groupJid, [charlie.meJid], 'remove')
	expect(removeResult[0]?.status).toBe('200')

	const text = `After remove ${Date.now()}`
	const bobReceives = bob.waitForText(text, { remoteJid: groupJid })
	await alice.sock.sendMessage(groupJid, { text })
	await bobReceives
})
Validate the removed participant does not receive subsequent messages.
This test currently proves only that bob still receives messages. It does not assert the key behavior in the title: charlie must not receive after removal.
Suggested fix
const text = `After remove ${Date.now()}`
const bobReceives = bob.waitForText(text, { remoteJid: groupJid })
+ const charlieDoesNotReceive = Promise.race([
+ charlie.waitForText(text, { remoteJid: groupJid }).then(() => {
+ throw new Error('charlie received group message after removal')
+ }),
+ new Promise(resolve => setTimeout(resolve, 1500))
+ ])
await alice.sock.sendMessage(groupJid, { text })
- await bobReceives
+ await Promise.all([bobReceives, charlieDoesNotReceive])

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/__tests__/e2e/groups.test-e2e.ts` around lines 65 - 75, The test "alice
removes charlie from a group" only asserts bob receives messages but must also
assert charlie does not; after calling alice.sock.groupParticipantsUpdate(...)
and sending the message, add a waiter like const charlieReceives =
charlie.waitForText(text, { remoteJid: groupJid }) and assert it does not
resolve (e.g. await expect(charlieReceives).rejects.toThrow() or await
expect(Promise.race([charlieReceives,
delay(timeout)])).resolves.toBe(timeoutMarker)) so the test fails if charlie
receives the message; use the existing variables and methods (alice, charlie,
groupParticipantsUpdate, waitForText, sendMessage) to implement this negative
assertion.
await alice.sock.groupParticipantsUpdate(groupJid, [charlie.meJid], 'add')

// stale cache would still report 2 participants
const after = await alice.sock.groupMetadata(groupJid)
expect(after.participants.length).toBe(3)
Assert participant-add operation success before metadata assertion.
If add fails, the participant count check fails later with a less actionable signal. Capture and assert the update result first.
Suggested fix
- await alice.sock.groupParticipantsUpdate(groupJid, [charlie.meJid], 'add')
+ const addResult = await alice.sock.groupParticipantsUpdate(groupJid, [charlie.meJid], 'add')
+ expect(addResult[0]?.status).toBe('200')
// stale cache would still report 2 participants
const after = await alice.sock.groupMetadata(groupJid)
expect(after.participants.length).toBe(3)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/__tests__/e2e/groups.test-e2e.ts` around lines 93 - 97, The test should
assert the result of the participant add operation before checking group
metadata: capture the promise returned by
alice.sock.groupParticipantsUpdate(groupJid, [charlie.meJid], 'add') into a
variable, assert that the update result indicates success (e.g., success
status/ack for the added JID) and/or no error, then call
alice.sock.groupMetadata(groupJid) and assert after.participants.length === 3;
ensure you reference the update call (groupParticipantsUpdate), the target JID
(charlie.meJid) and the metadata call (groupMetadata/after) when adding the
assertion.
5 issues found across 15 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name=".github/workflows/e2e.yml">
<violation number="1" location=".github/workflows/e2e.yml:35">
P0: `pull_request_target` combined with checking out `github.event.pull_request.head.sha` executes untrusted PR code with secrets, enabling CI secret exfiltration.</violation>
</file>
<file name="src/Socket/messages-recv.ts">
<violation number="1" location="src/Socket/messages-recv.ts:595">
P2: The stanza-id dedupe only works while upload is in-flight; successful handling removes the id, so later re-issued notifications with the same id will upload prekeys again.
(Based on your team's feedback about requiring strict WhatsApp protocol parity for retry/ack flows.) [FEEDBACK_USED]</violation>
</file>
<file name="src/Utils/messages-media.ts">
<violation number="1" location="src/Utils/messages-media.ts:505">
P1: Using a module-global `_mediaHost` makes media URL resolution cross-socket stateful; one socket can overwrite another socket’s host and break downloads.</violation>
</file>
<file name="src/__tests__/e2e/groups.test-e2e.ts">
<violation number="1" location="src/__tests__/e2e/groups.test-e2e.ts:43">
P2: Wait for the group update event here; `groups.upsert` is only emitted for new-group creation, so this will hang after adding Charlie.</violation>
</file>
<file name="src/__tests__/e2e/helpers/test-client.ts">
<violation number="1" location="src/__tests__/e2e/helpers/test-client.ts:115">
P2: Only retry the expected pairing-handoff close reasons here; otherwise this can loop forever on fatal disconnects.</violation>
</file>
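One way to address the `_mediaHost` violation above is to move the host into per-socket state. This is a hypothetical sketch only — the `MediaHostStore` class is illustrative and not Baileys' actual implementation; `mmg.whatsapp.net` is the default media host seen elsewhere in this review:

```typescript
// Hypothetical per-socket alternative to a module-global `_mediaHost`:
// each socket owns its own host, so one socket updating it cannot
// break another socket's downloads.
const DEF_MEDIA_HOST = 'mmg.whatsapp.net'

class MediaHostStore {
	private host = DEF_MEDIA_HOST

	// Called when the server advertises a different media host.
	update(host?: string): void {
		if (host) {
			this.host = host
		}
	}

	urlFromDirectPath(directPath: string): string {
		return `https://${this.host}${directPath}`
	}
}
```

Each socket would construct its own store at creation time, so media URL resolution stays isolated per connection.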
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
steps:
  - uses: actions/checkout@v4
    with:
      ref: ${{ github.event.pull_request.head.sha || github.sha }}
P0: pull_request_target combined with checking out github.event.pull_request.head.sha executes untrusted PR code with secrets, enabling CI secret exfiltration.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .github/workflows/e2e.yml, line 35:
<comment>`pull_request_target` combined with checking out `github.event.pull_request.head.sha` executes untrusted PR code with secrets, enabling CI secret exfiltration.</comment>
<file context>
@@ -0,0 +1,74 @@
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ ref: ${{ github.event.pull_request.head.sha || github.sha }}
+
+ - name: Setup Node.js and Corepack
</file context>
try {
	await uploadPreKeys()
} finally {
	if (stanzaId) inFlightPreKeyLow.delete(stanzaId)
P2: The stanza-id dedupe only works while upload is in-flight; successful handling removes the id, so later re-issued notifications with the same id will upload prekeys again.
(Based on your team's feedback about requiring strict WhatsApp protocol parity for retry/ack flows.)
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/Socket/messages-recv.ts, line 595:
<comment>The stanza-id dedupe only works while upload is in-flight; successful handling removes the id, so later re-issued notifications with the same id will upload prekeys again.
(Based on your team's feedback about requiring strict WhatsApp protocol parity for retry/ack flows.) </comment>
<file context>
@@ -574,13 +577,23 @@ export const makeMessagesRecvSocket = (config: SocketConfig) => {
+ try {
+ await uploadPreKeys()
+ } finally {
+ if (stanzaId) inFlightPreKeyLow.delete(stanzaId)
+ }
}
</file context>
await alice.sock.sendMessage(groupJid, { text: text1 })
await bobReceives1

const charlieSeesGroup = charlie.waitForEvent('groups.upsert', groups => groups.some(g => g.id === groupJid))
P2: Wait for the group update event here; groups.upsert is only emitted for new-group creation, so this will hang after adding Charlie.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/__tests__/e2e/groups.test-e2e.ts, line 43:
<comment>Wait for the group update event here; `groups.upsert` is only emitted for new-group creation, so this will hang after adding Charlie.</comment>
<file context>
@@ -0,0 +1,99 @@
+ await alice.sock.sendMessage(groupJid, { text: text1 })
+ await bobReceives1
+
+ const charlieSeesGroup = charlie.waitForEvent('groups.upsert', groups => groups.some(g => g.id === groupJid))
+ const addResult = await alice.sock.groupParticipantsUpdate(groupJid, [charlie.meJid], 'add')
+ expect(addResult[0]?.status).toBe('200')
</file context>
if (connection === 'close') {
	const reason = getDisconnectReason(lastDisconnect)
	if (reason === DisconnectReason.loggedOut) {
		resolve({ kind: 'logged-out' })
		return
	}

	const label = reason !== undefined ? (DisconnectReason[reason] ?? String(reason)) : 'unknown'
	resolve({ kind: 'reconnect', reason: label })
}
P2: Only retry the expected pairing-handoff close reasons here; otherwise this can loop forever on fatal disconnects.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/__tests__/e2e/helpers/test-client.ts, line 115:
<comment>Only retry the expected pairing-handoff close reasons here; otherwise this can loop forever on fatal disconnects.</comment>
<file context>
@@ -0,0 +1,287 @@
+ return
+ }
+
+ if (connection === 'close') {
+ const reason = getDisconnectReason(lastDisconnect)
+ if (reason === DisconnectReason.loggedOut) {
</file context>
Suggested fix

if (connection === 'close') {
	const reason = getDisconnectReason(lastDisconnect)
	if (reason === DisconnectReason.loggedOut) {
		resolve({ kind: 'logged-out' })
		return
	}
	const label = reason !== undefined ? (DisconnectReason[reason] ?? String(reason)) : 'unknown'
	if (reason === DisconnectReason.connectionClosed || reason === DisconnectReason.restartRequired) {
		resolve({ kind: 'reconnect', reason: label })
		return
	}
	resolve({ kind: 'error', error: new Error(`socket closed unexpectedly: ${label}`) })
}
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/Utils/messages-media.ts`:
- Line 527: The current downloadUrl assignment uses
getUrlFromDirectPath(directPath, opts.host) which forces DEF_MEDIA_HOST when
opts.host is falsy and discards a valid url host; change the call so
getUrlFromDirectPath receives opts.host || (url ? new URL(url).host : undefined)
|| DEF_MEDIA_HOST to preserve a fallback to the host parsed from url before
falling back to DEF_MEDIA_HOST; update the logic around downloadUrl, directPath,
getUrlFromDirectPath, opts.host, url and DEF_MEDIA_HOST accordingly and ensure
you guard new URL(url) with a truthy url check to avoid throwing.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 17fbc9b2-68b5-407d-9d8c-80feede4e26b
📒 Files selected for processing (6)
- src/Socket/messages-send.ts
- src/Socket/socket.ts
- src/Utils/messages-media.ts
- src/__tests__/e2e/groups.test-e2e.ts
- src/__tests__/e2e/helpers/mock-phone.ts
- src/__tests__/e2e/send-receive-message.test-e2e.ts
🚧 Files skipped from review as they are similar to previous changes (5)
- src/__tests__/e2e/groups.test-e2e.ts
- src/Socket/socket.ts
- src/__tests__/e2e/helpers/mock-phone.ts
- src/Socket/messages-send.ts
- src/__tests__/e2e/send-receive-message.test-e2e.ts
  ) => {
-	const isValidMediaUrl = url?.startsWith('https://mmg.whatsapp.net/')
-	const downloadUrl = isValidMediaUrl ? url : getUrlFromDirectPath(directPath!)
+	const downloadUrl = directPath ? getUrlFromDirectPath(directPath, opts.host) : url
Preserve URL-host fallback when directPath is present.
On Line 527, if directPath exists and opts.host is missing, downloads always target DEF_MEDIA_HOST. In non-default environments, a valid url host gets ignored and can cause fetch failures. Consider falling back to new URL(url).host before the default.
Suggested patch
- const downloadUrl = directPath ? getUrlFromDirectPath(directPath, opts.host) : url
+ const urlHost =
+ !opts.host && url
+ ? (() => {
+ try {
+ return new URL(url).host
+ } catch {
+ return undefined
+ }
+ })()
+ : undefined
+ const downloadUrl = directPath ? getUrlFromDirectPath(directPath, opts.host ?? urlHost) : url
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/Utils/messages-media.ts` at line 527, The current downloadUrl assignment
uses getUrlFromDirectPath(directPath, opts.host) which forces DEF_MEDIA_HOST
when opts.host is falsy and discards a valid url host; change the call so
getUrlFromDirectPath receives opts.host || (url ? new URL(url).host : undefined)
|| DEF_MEDIA_HOST to preserve a fallback to the host parsed from url before
falling back to DEF_MEDIA_HOST; update the logic around downloadUrl, directPath,
getUrlFromDirectPath, opts.host, url and DEF_MEDIA_HOST accordingly and ensure
you guard new URL(url) with a truthy url check to avoid throwing.
Actionable comments posted: 3
♻️ Duplicate comments (1)
src/__tests__/e2e/helpers/test-client.ts (1)
94-127: ⚠️ Potential issue | 🟠 Major — Add an overall connect timeout and bound the retry loop.
This still has the same bring-up hang noted earlier: the `connection.update` promise has no overall timeout, and `openConnection()` retries forever on reconnect paths. A bad mock-server run can stall `beforeAll` until Jest is force-killed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/__tests__/e2e/helpers/test-client.ts` around lines 94 - 127, The connection.await block created with new Promise<AttemptResult> that listens to sock.ev.on('connection.update') needs an overall timeout to prevent hanging: add a timer that resolves with an error (or a specific timeout result) if no terminal event occurs within a configurable timeout, and clear that timer when you resolve normally; do the same for the duplicate block around lines 171-193. Also bound the retry loop in openConnection() by introducing a maxAttempts limit (and optional backoff) so reconnect paths cannot retry forever; reference the Promise/AttemptResult handler (sock.ev.on('connection.update')), postQrToMockPhone, jidNormalizedUser, getDisconnectReason / DisconnectReason, and the openConnection() retry logic when making these changes.
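The timeout-plus-bounded-retry shape the comment above asks for can be sketched roughly like this; `withTimeout`, `openWithRetries`, and the `'open' | 'reconnect'` result type are illustrative names, not the helper's actual API:

```typescript
// Sketch of bounding the e2e connect loop: every attempt gets a deadline
// and the whole loop gets a maximum attempt count, so a bad mock-server
// run fails fast instead of hanging beforeAll.
const withTimeout = <T>(p: Promise<T>, ms: number, label: string): Promise<T> => {
	let timer: ReturnType<typeof setTimeout> | undefined
	const timeout = new Promise<never>((_, reject) => {
		timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms)
	})
	return Promise.race([p, timeout]).finally(() => clearTimeout(timer))
}

const openWithRetries = async (
	attemptConnection: () => Promise<'open' | 'reconnect'>,
	maxAttempts = 5,
	attemptTimeoutMs = 30_000
): Promise<void> => {
	for (let attempt = 1; attempt <= maxAttempts; attempt++) {
		const result = await withTimeout(attemptConnection(), attemptTimeoutMs, `connect attempt ${attempt}`)
		if (result === 'open') {
			return
		}
	}

	// Exhausting the budget surfaces a real error to the test runner
	// instead of looping until Jest is force-killed.
	throw new Error(`gave up after ${maxAttempts} connect attempts`)
}
```

The same `withTimeout` wrapper can cap the `connection.update` wait inside each attempt, covering both halves of the comment with one primitive.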
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@package.json`:
- Line 40: Remove the process-wide NODE_TLS_REJECT_UNAUTHORIZED from the
"test:e2e" npm script so tests don't disable TLS globally and to ensure Windows
compatibility; update the package.json script entry named "test:e2e" to omit the
POSIX-style env prefix and rely on the existing scoped TLS relaxations already
implemented in test-client.ts (Agent with rejectUnauthorized: false) and
mock-phone.ts (inline rejectUnauthorized: false) for e2e tests.
In `@src/__tests__/e2e/helpers/test-client.ts`:
- Around line 216-220: The connect method (static async connect in TestClient)
currently always deletes config.authDir which can wipe a caller-provided
directory; change it to only remove the directory when it was auto-generated or
when an explicit reset flag is passed. Update TestClientOptions to include
resetAuthDir?: boolean, and in TestClient.connect call resolveConfig with the
pushName logic and detect whether opts.authDir was provided; if opts.authDir is
undefined treat it as auto-generated and allow the existing rm(config.authDir,
...) behavior, otherwise only call rm if opts.resetAuthDir === true. Keep the
rest of the logic intact and reference resolveConfig, authDir, uniquePushName
and resetAuthDir in your changes.
- Around line 221-228: If resolveTestGroup rejects the code currently returns
with a live socket and registered creds.update handler, wrap the await
client.resolveTestGroup(opts.testGroupName) call in a try/catch; on catch ensure
the socket and any event listeners are cleaned up before rethrowing: call await
sock.close() (or sock.disconnect/close equivalent) and remove the creds.update
listener from sock.ev (e.g. sock.ev.removeAllListeners('creds.update') or the
specific off method used by openConnection), then rethrow the error; reference
openConnection, TestClient, client.resolveTestGroup, sock and the creds.update
handler so the cleanup is applied exactly when resolveTestGroup fails.
---
Duplicate comments:
In `@src/__tests__/e2e/helpers/test-client.ts`:
- Around line 94-127: The connection.await block created with new
Promise<AttemptResult> that listens to sock.ev.on('connection.update') needs an
overall timeout to prevent hanging: add a timer that resolves with an error (or
a specific timeout result) if no terminal event occurs within a configurable
timeout, and clear that timer when you resolve normally; do the same for the
duplicate block around lines 171-193. Also bound the retry loop in
openConnection() by introducing a maxAttempts limit (and optional backoff) so
reconnect paths cannot retry forever; reference the Promise/AttemptResult
handler (sock.ev.on('connection.update')), postQrToMockPhone, jidNormalizedUser,
getDisconnectReason / DisconnectReason, and the openConnection() retry logic
when making these changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 032167ed-90b4-4cf3-863c-bf75dadf4d3e
📒 Files selected for processing (2)
- package.json
- src/__tests__/e2e/helpers/test-client.ts
| "release": "release-it", | ||
| "test": "node --experimental-vm-modules ./node_modules/.bin/jest --testMatch '**/*.test.ts'", | ||
| "test:e2e": "node --experimental-vm-modules ./node_modules/.bin/jest --testMatch '**/*.test-e2e.ts'", | ||
| "test:e2e": "NODE_TLS_REJECT_UNAUTHORIZED=0 node --experimental-vm-modules ./node_modules/.bin/jest --runInBand --forceExit --testMatch '**/*.test-e2e.ts'", |
🧩 Analysis chain
🏁 Script executed:
# Check if the mentioned files exist and examine their TLS-related code
find . -type f -name "test-client.ts" -o -name "mock-phone.ts" | head -20
Repository: WhiskeySockets/Baileys
Length of output: 151
🏁 Script executed:
# Search for NODE_TLS_REJECT_UNAUTHORIZED usage in the codebase
rg "NODE_TLS_REJECT_UNAUTHORIZED|rejectUnauthorized" --type ts --type jsRepository: WhiskeySockets/Baileys
Length of output: 250
🏁 Script executed:
# Check the e2e test directory structure
find ./src/__tests__/e2e -type f -name "*.ts" -o -name "*.js" | head -30
Repository: WhiskeySockets/Baileys
Length of output: 338
🏁 Script executed:
# Check the full content of test-client.ts to see the TLS context
cat -n ./src/__tests__/e2e/helpers/test-client.ts
Repository: WhiskeySockets/Baileys
Length of output: 10920
🏁 Script executed:
# Check the full content of mock-phone.ts to see the TLS context
cat -n ./src/__tests__/e2e/helpers/mock-phone.ts
Repository: WhiskeySockets/Baileys
Length of output: 1469
Remove the process-wide NODE_TLS_REJECT_UNAUTHORIZED environment variable from the test script.
Line 40 uses NODE_TLS_REJECT_UNAUTHORIZED=0 to disable certificate verification globally across the Jest process. This is both unnecessary and problematic. The e2e helpers already scope TLS relaxation properly: test-client.ts:56 creates an Agent with rejectUnauthorized: false for WebSocket connections, and mock-phone.ts:25 sets rejectUnauthorized: false inline for the admin endpoint request. Additionally, the POSIX-style env assignment breaks on Windows when running npm run test:e2e. Remove the env var and rely on the existing scoped settings.
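The scoped approach the comment recommends can be sketched with a dedicated `https.Agent`; the URL and `postToMockServer` helper below are placeholders, not the actual mock-phone API:

```typescript
import { Agent, request } from 'node:https'

// Sketch of scoping TLS relaxation to one Agent instead of disabling
// certificate verification process-wide via NODE_TLS_REJECT_UNAUTHORIZED.
const insecureAgent = new Agent({ rejectUnauthorized: false })

// Placeholder helper: only requests routed through this agent skip
// certificate checks; everything else in the process keeps full TLS.
const postToMockServer = (body: string) =>
	request('https://localhost:8443/admin/mock-phone/scan-qr', {
		method: 'POST',
		agent: insecureAgent
	}).end(body)
```

The same agent can be passed to the WebSocket constructor for the self-signed mock server, matching what `test-client.ts` already does.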
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` at line 40, Remove the process-wide
NODE_TLS_REJECT_UNAUTHORIZED from the "test:e2e" npm script so tests don't
disable TLS globally and to ensure Windows compatibility; update the
package.json script entry named "test:e2e" to omit the POSIX-style env prefix
and rely on the existing scoped TLS relaxations already implemented in
test-client.ts (Agent with rejectUnauthorized: false) and mock-phone.ts (inline
rejectUnauthorized: false) for e2e tests.
static async connect(opts: TestClientOptions = {}): Promise<TestClient> {
	const config = resolveConfig({ ...opts, pushName: opts.pushName ?? uniquePushName() })
	// stale auth would skip pairing and disconnect with logged-out
	await rm(config.authDir, { recursive: true, force: true })
Don’t recursively delete a caller-provided authDir.
Line 219 wipes whatever path the caller passes in via authDir. That makes the option surprisingly destructive and can delete persisted debug/auth state outside the temp-dir flow this helper creates by default. Gate the cleanup behind an explicit resetAuthDir flag, or only auto-delete when authDir was auto-generated.
Suggested fix
export interface TestClientOptions {
socketUrl?: string
/** Defaults to a unique tmp dir so concurrent clients don't collide. */
authDir?: string
+ resetAuthDir?: boolean
/** Bartender derives a deterministic phone number from this. Defaults to a unique value. */
pushName?: string
advSecretKey?: string
logLevel?: P.Level
resolveTestGroup?: boolean
@@
static async connect(opts: TestClientOptions = {}): Promise<TestClient> {
const config = resolveConfig({ ...opts, pushName: opts.pushName ?? uniquePushName() })
- // stale auth would skip pairing and disconnect with logged-out
- await rm(config.authDir, { recursive: true, force: true })
+ if (opts.resetAuthDir ?? !opts.authDir) {
+ // stale auth would skip pairing and disconnect with logged-out
+ await rm(config.authDir, { recursive: true, force: true })
+ }
const { sock, saveCreds, meJid, meLid } = await openConnection(config)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/__tests__/e2e/helpers/test-client.ts` around lines 216 - 220, The connect
method (static async connect in TestClient) currently always deletes
config.authDir which can wipe a caller-provided directory; change it to only
remove the directory when it was auto-generated or when an explicit reset flag
is passed. Update TestClientOptions to include resetAuthDir?: boolean, and in
TestClient.connect call resolveConfig with the pushName logic and detect whether
opts.authDir was provided; if opts.authDir is undefined treat it as
auto-generated and allow the existing rm(config.authDir, ...) behavior,
otherwise only call rm if opts.resetAuthDir === true. Keep the rest of the logic
intact and reference resolveConfig, authDir, uniquePushName and resetAuthDir in
your changes.
const { sock, saveCreds, meJid, meLid } = await openConnection(config)
const client = new TestClient(sock, meJid, meLid, config, saveCreds)

if (opts.resolveTestGroup) {
	await client.resolveTestGroup(opts.testGroupName)
}

return client
Clean up the socket if post-connect initialization fails.
If resolveTestGroup() rejects, connect() throws with a live socket and a registered creds.update handler still attached. That leaks handles on a failed setup path and can be one reason the suite now needs --forceExit.
Suggested fix
const { sock, saveCreds, meJid, meLid } = await openConnection(config)
const client = new TestClient(sock, meJid, meLid, config, saveCreds)
- if (opts.resolveTestGroup) {
- await client.resolveTestGroup(opts.testGroupName)
- }
-
- return client
+ try {
+ if (opts.resolveTestGroup) {
+ await client.resolveTestGroup(opts.testGroupName)
+ }
+
+ return client
+ } catch(error) {
+ await client.cleanup()
+ throw error
+ }
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/__tests__/e2e/helpers/test-client.ts` around lines 221 - 228, If
resolveTestGroup rejects the code currently returns with a live socket and
registered creds.update handler, wrap the await
client.resolveTestGroup(opts.testGroupName) call in a try/catch; on catch ensure
the socket and any event listeners are cleaned up before rethrowing: call await
sock.close() (or sock.disconnect/close equivalent) and remove the creds.update
listener from sock.ev (e.g. sock.ev.removeAllListeners('creds.update') or the
specific off method used by openConnection), then rethrow the error; reference
openConnection, TestClient, client.resolveTestGroup, sock and the creds.update
handler so the cleanup is applied exactly when resolveTestGroup fails.
Adds an e2e test harness around the bartender mock server and ports messaging + group tests from the rust fork. Building it surfaced three signal-layer bugs:
Also adds optional pushName on SocketConfig so mock servers can pick deterministic phone numbers (real WA ignores it on first pair).
30 e2e tests pass. Jest runs --runInBand since the mock server doesn't isolate concurrent workers.
Summary by CodeRabbit
New Features
Improvements
Behavior Changes