74 changes: 74 additions & 0 deletions .github/workflows/e2e.yml
@@ -0,0 +1,74 @@
name: E2E Tests

on:
  push:
    branches:
      - master
  pull_request_target:
    branches:
      - master

permissions:
  contents: read

jobs:
  e2e:
    name: E2E Tests
    runs-on: ubuntu-latest
    timeout-minutes: 10
    services:
      mock-server:
        image: ghcr.io/whiskeysockets-devtools/bartender:latest
        credentials:
          username: ${{ github.actor }}
          password: ${{ secrets.BARTENDER_GHCR_TOKEN }}
        ports:
          - 8080:8080
        env:
          CHATSTATE_TTL_SECS: "3"
          ADV_SECRET_KEY: "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
        options: --log-driver none

    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha || github.sha }}
Comment on lines +7 to +35
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
rg -n -C2 'pull_request_target|github\.event\.pull_request\.head\.sha|secrets\.BARTENDER_GHCR_TOKEN' .github/workflows/e2e.yml

Repository: WhiskeySockets/Baileys

Length of output: 508


🏁 Script executed:

cat -n .github/workflows/e2e.yml | head -80

Repository: WhiskeySockets/Baileys

Length of output: 2454


Critical: untrusted PR code runs with repository secrets exposed.

The pull_request_target trigger (line 7) runs the workflow with base-repository secrets, line 35 checks out the PR head SHA, and lines 56 and 74 then run package managers and tests from that code. A malicious PR can therefore exfiltrate secrets.BARTENDER_GHCR_TOKEN (line 24) through install scripts, build hooks, or test code.

Replace pull_request_target with pull_request, or gate the job to same-repository PRs only (filter via github.event.pull_request.head.repo.full_name). Do not expose secrets to untrusted code paths.
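A minimal sketch of the same-repository gate (assuming the job id `e2e` from this workflow; simply switching the trigger to pull_request remains the more robust fix):

jobs:
  e2e:
    # Skip the job for PRs coming from forks, so fork-controlled code never
    # runs in a context that can read base-repository secrets.
    if: github.event.pull_request.head.repo.full_name == github.repository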

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In .github/workflows/e2e.yml around lines 7 - 35: the workflow uses the pull_request_target event, which exposes repository secrets (e.g., secrets.BARTENDER_GHCR_TOKEN) to untrusted PR code checked out via the PR head ref. Change the trigger to pull_request, or guard the e2e job (and the mock-server service and checkout step) so it only runs when github.event.pull_request.head.repo.full_name equals github.repository, ensuring secrets are never exposed to untrusted forked PR code.

@cubic-dev-ai (Bot) · Contributor · Apr 28, 2026

P0: pull_request_target combined with checking out github.event.pull_request.head.sha executes untrusted PR code with secrets, enabling CI secret exfiltration.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .github/workflows/e2e.yml, line 35:

<comment>`pull_request_target` combined with checking out `github.event.pull_request.head.sha` executes untrusted PR code with secrets, enabling CI secret exfiltration.</comment>

<file context>
@@ -0,0 +1,74 @@
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          ref: ${{ github.event.pull_request.head.sha || github.sha }}
+
+      - name: Setup Node.js and Corepack
</file context>

      - name: Setup Node.js and Corepack
        uses: actions/setup-node@v4
        with:
          node-version: 20.x

      - name: Enable Corepack and Set Yarn Version
        run: |
          corepack enable
          corepack prepare yarn@4.x --activate

      - name: Restore Yarn Cache
        uses: actions/cache@v4
        with:
          path: .yarn/cache
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-

      - name: Install dependencies
        run: yarn install --immutable

      - name: Wait for mock server
        run: |
          for i in $(seq 1 30); do
            if curl -sk https://localhost:8080/ > /dev/null 2>&1; then
              echo "Mock server is ready"
              exit 0
            fi
            sleep 1
          done
          echo "Mock server failed to become ready"
          exit 1

      - name: Run E2E tests
        env:
          SOCKET_URL: "wss://127.0.0.1:8080/ws/chat"
          ADV_SECRET_KEY: "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
        run: yarn test:e2e
2 changes: 1 addition & 1 deletion package.json
@@ -37,7 +37,7 @@
"preinstall": "node ./engine-requirements.js",
"release": "release-it",
"test": "node --experimental-vm-modules ./node_modules/.bin/jest --testMatch '**/*.test.ts'",
"test:e2e": "node --experimental-vm-modules ./node_modules/.bin/jest --testMatch '**/*.test-e2e.ts'",
"test:e2e": "NODE_TLS_REJECT_UNAUTHORIZED=0 node --experimental-vm-modules ./node_modules/.bin/jest --runInBand --forceExit --testMatch '**/*.test-e2e.ts'",
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check if the mentioned files exist and examine their TLS-related code
find . -type f -name "test-client.ts" -o -name "mock-phone.ts" | head -20

Repository: WhiskeySockets/Baileys

Length of output: 151


🏁 Script executed:

# Search for NODE_TLS_REJECT_UNAUTHORIZED usage in the codebase
rg "NODE_TLS_REJECT_UNAUTHORIZED|rejectUnauthorized" --type ts --type js

Repository: WhiskeySockets/Baileys

Length of output: 250


🏁 Script executed:

# Check the e2e test directory structure
find ./src/__tests__/e2e -type f -name "*.ts" -o -name "*.js" | head -30

Repository: WhiskeySockets/Baileys

Length of output: 338


🏁 Script executed:

# Check the full content of test-client.ts to see the TLS context
cat -n ./src/__tests__/e2e/helpers/test-client.ts

Repository: WhiskeySockets/Baileys

Length of output: 10920


🏁 Script executed:

# Check the full content of mock-phone.ts to see the TLS context
cat -n ./src/__tests__/e2e/helpers/mock-phone.ts

Repository: WhiskeySockets/Baileys

Length of output: 1469


Remove the process-wide NODE_TLS_REJECT_UNAUTHORIZED environment variable from the test script.

Line 40 uses NODE_TLS_REJECT_UNAUTHORIZED=0 to disable certificate verification globally across the Jest process. This is both unnecessary and problematic. The e2e helpers already scope TLS relaxation properly: test-client.ts:56 creates an Agent with rejectUnauthorized: false for WebSocket connections, and mock-phone.ts:25 sets rejectUnauthorized: false inline for the admin endpoint request. Additionally, the POSIX-style env assignment breaks on Windows when running npm run test:e2e. Remove the env var and rely on the existing scoped settings.
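For comparison, a minimal sketch of the scoped pattern the helpers already use (the exact option plumbing in test-client.ts may differ; this assumes the ws client library):

import { Agent } from 'https'
import WebSocket from 'ws'

// Relax certificate verification for this one connection only, not process-wide.
const agent = new Agent({ rejectUnauthorized: false })
const socket = new WebSocket('wss://127.0.0.1:8080/ws/chat', { agent })

socket.on('open', () => console.log('connected to local mock server'))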

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@package.json` at line 40: remove the process-wide NODE_TLS_REJECT_UNAUTHORIZED from the "test:e2e" npm script so tests don't disable TLS globally, and to keep the script Windows-compatible. Update the "test:e2e" entry to omit the POSIX-style env prefix and rely on the scoped TLS relaxations already implemented in test-client.ts (an Agent with rejectUnauthorized: false) and mock-phone.ts (inline rejectUnauthorized: false).

"update:version": "tsx ./scripts/update-version.ts"
},
"dependencies": {
Expand Down
1 change: 0 additions & 1 deletion src/Defaults/index.ts
@@ -135,7 +135,6 @@ export const MIN_PREKEY_COUNT = 5
export const INITIAL_PREKEY_COUNT = 812

export const UPLOAD_TIMEOUT = 30000 // 30 seconds
- export const MIN_UPLOAD_INTERVAL = 5000 // 5 seconds minimum between uploads

export const DEFAULT_CACHE_TTLS = {
	SIGNAL_STORE: 5 * 60, // 5 minutes
4 changes: 2 additions & 2 deletions src/Socket/chats.ts
@@ -52,12 +52,12 @@
	type BinaryNode,
	getBinaryNodeChild,
	getBinaryNodeChildren,
+	isHostedLidUser,
+	isHostedPnUser,
	isLidUser,
	isPnUser,
	jidDecode,
	jidNormalizedUser,
-	isHostedLidUser,
-	isHostedPnUser,
	reduceBinaryNodeToDictionary,
	S_WHATSAPP_NET
} from '../WABinary'
@@ -683,7 +683,7 @@
				// collection is done with sync
				collectionsToHandle.delete(name)
			}
		} catch (error: any) {
⚠️ check-lint (src/Socket/chats.ts:686): Unexpected any. Specify a different type
			attemptsMap[name] = (attemptsMap[name] || 0) + 1

			const logData = {
62 changes: 26 additions & 36 deletions src/Socket/messages-recv.ts
@@ -213,7 +213,7 @@
			return
		}

		let data: any
⚠️ check-lint (src/Socket/messages-recv.ts:216): Unexpected any. Specify a different type
		try {
			data = JSON.parse(mexNode.content.toString())
		} catch (error) {
@@ -309,7 +309,7 @@
			case 'update':
				const settingsNode = getBinaryNodeChild(child, 'settings')
				if (settingsNode) {
					const update: Record<string, any> = {}
⚠️ check-lint (src/Socket/messages-recv.ts:312): Unexpected any. Specify a different type
					const nameNode = getBinaryNodeChild(settingsNode, 'name')
					if (nameNode?.content) update.name = nameNode.content.toString()

@@ -534,6 +534,9 @@
	}, authState?.creds?.me?.id || 'sendRetryRequest')
}

+	// Mirrors WAWeb/Handle/PreKeyLow.js: skip a re-issued notification with the same stanza id.
+	const inFlightPreKeyLow = new Set<string>()
+
	/**
	 * Fire-and-forget tctoken re-issuance after a peer's device identity changed.
	 * Mirrors WAWebSendTcTokenWhenDeviceIdentityChange — runs in parallel with
@@ -574,13 +577,23 @@
	const handleEncryptNotification = async (node: BinaryNode) => {
		const from = node.attrs.from
		if (from === S_WHATSAPP_NET) {
+			const stanzaId = node.attrs.id
+			if (stanzaId && inFlightPreKeyLow.has(stanzaId)) {
+				return
+			}
+
			const countChild = getBinaryNodeChild(node, 'count')
			const count = +countChild!.attrs.value!
			const shouldUploadMorePreKeys = count < MIN_PREKEY_COUNT

			logger.debug({ count, shouldUploadMorePreKeys }, 'recv pre-key count')
			if (shouldUploadMorePreKeys) {
-				await uploadPreKeys()
+				if (stanzaId) inFlightPreKeyLow.add(stanzaId)
+				try {
+					await uploadPreKeys()
+				} finally {
+					if (stanzaId) inFlightPreKeyLow.delete(stanzaId)
@cubic-dev-ai (Bot) · Contributor · Apr 28, 2026

P2: The stanza-id dedupe only works while upload is in-flight; successful handling removes the id, so later re-issued notifications with the same id will upload prekeys again.

(Based on your team's feedback about requiring strict WhatsApp protocol parity for retry/ack flows.)

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/Socket/messages-recv.ts, line 595:

<comment>The stanza-id dedupe only works while upload is in-flight; successful handling removes the id, so later re-issued notifications with the same id will upload prekeys again.

(Based on your team's feedback about requiring strict WhatsApp protocol parity for retry/ack flows.) </comment>

<file context>
@@ -574,13 +577,23 @@ export const makeMessagesRecvSocket = (config: SocketConfig) => {
+				try {
+					await uploadPreKeys()
+				} finally {
+					if (stanzaId) inFlightPreKeyLow.delete(stanzaId)
+				}
 			}
</file context>

+				}
Comment on lines +537 to +596
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

This dedupe set never catches queued duplicate PreKeyLow notifications.

handleNotification() runs processNotification() under notificationMutex, so a reissued stanza with the same id does not reach this block until the first uploadPreKeys() call has already finished and the finally at Line 545 has removed the id. In practice, queued duplicates still upload once each.

Possible fix
-	// Mirrors WAWeb/Handle/PreKeyLow.js: skip a re-issued notification with the same stanza id.
-	const inFlightPreKeyLow = new Set<string>()
+	// Keep recently handled stanza ids long enough to suppress queued re-issues too.
+	const recentPreKeyLow = new NodeCache<boolean>({ stdTTL: 30, useClones: false })

 	const handleEncryptNotification = async (node: BinaryNode) => {
 		const from = node.attrs.from
 		if (from === S_WHATSAPP_NET) {
 			const stanzaId = node.attrs.id
-			if (stanzaId && inFlightPreKeyLow.has(stanzaId)) {
+			if (stanzaId && (await recentPreKeyLow.get(stanzaId))) {
 				return
 			}
 
 			const countChild = getBinaryNodeChild(node, 'count')
 			const count = +countChild!.attrs.value!
 			const shouldUploadMorePreKeys = count < MIN_PREKEY_COUNT
 
 			logger.debug({ count, shouldUploadMorePreKeys }, 'recv pre-key count')
 			if (shouldUploadMorePreKeys) {
-				if (stanzaId) inFlightPreKeyLow.add(stanzaId)
-				try {
-					await uploadPreKeys()
-				} finally {
-					if (stanzaId) inFlightPreKeyLow.delete(stanzaId)
-				}
+				if (stanzaId) await recentPreKeyLow.set(stanzaId, true)
+				await uploadPreKeys()
 			}
 		} else {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Socket/messages-recv.ts` around lines 524 - 546: the dedupe Set in handleEncryptNotification is added to and removed from around uploadPreKeys inside notificationMutex, so queued duplicates still run, because the first call hasn't added the id until after the mutex is released. Fix by performing the dedupe check and adding the stanza id to inFlightPreKeyLow before acquiring notificationMutex (or at the very start of handleNotification) so subsequent queued notifications see the id and return early; remove the id in the same finally after uploadPreKeys completes (references: handleEncryptNotification, handleNotification, notificationMutex, inFlightPreKeyLow, uploadPreKeys).
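A self-contained sketch of that ordering (hypothetical names; a simple promise chain stands in for notificationMutex):

type Node = { attrs: { id?: string } }

const seen = new Set<string>()
let queue: Promise<void> = Promise.resolve() // stand-in for notificationMutex

const runExclusive = (fn: () => Promise<void>): Promise<void> => {
	queue = queue.then(fn, fn)
	return queue
}

async function handlePreKeyLow(node: Node, uploadPreKeys: () => Promise<void>) {
	const stanzaId = node.attrs.id
	// Mark the id as seen *before* queueing, so duplicates already waiting
	// behind the first notification bail out here instead of re-uploading.
	if (stanzaId && seen.has(stanzaId)) return
	if (stanzaId) seen.add(stanzaId)

	await runExclusive(async () => {
		try {
			await uploadPreKeys()
		} finally {
			// A real implementation might expire the id after a TTL instead,
			// to also suppress re-issues that arrive after the upload finishes.
			if (stanzaId) seen.delete(stanzaId)
		}
	})
}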

			}
		} else {
			const result = await handleIdentityChange(node, {
@@ -929,7 +942,7 @@
			const jids = await readTcTokenIndex(authState.keys)
			for (const jid of jids) tcTokenKnownJids.add(jid)
			logger.debug({ count: tcTokenKnownJids.size }, 'loaded tctoken index')
		} catch (err: any) {
⚠️ check-lint (src/Socket/messages-recv.ts:945): Unexpected any. Specify a different type
			logger.warn({ err: err?.message }, 'failed to load tctoken index')
		}
	})()
@@ -1365,47 +1378,23 @@
		}
	}

-	const errorMessage = msg?.messageStubParameters?.[0] || ''
-	const isPreKeyError = errorMessage.includes('PreKey')
-
-	logger.debug(`[handleMessage] Attempting retry request for failed decryption`)
+	logger.debug('[handleMessage] Attempting retry request for failed decryption')

-	// Handle both pre-key and normal retries in single mutex
+	// WAWeb only retry-receipts here; server emits PreKeyLow if prekeys run low.
	await retryMutex.mutex(async () => {
		try {
			if (!ws.isOpen) {
				logger.debug({ node }, 'Connection closed, skipping retry')
				return
			}

-			// Handle pre-key errors with upload and delay
-			if (isPreKeyError) {
-				logger.info({ error: errorMessage }, 'PreKey error detected, uploading and retrying')
-
-				try {
-					logger.debug('Uploading pre-keys for error recovery')
-					await uploadPreKeys(5)
-					logger.debug('Waiting for server to process new pre-keys')
-					await delay(1000)
-				} catch (uploadErr) {
-					logger.error({ uploadErr }, 'Pre-key upload failed, proceeding with retry anyway')
-				}
-			}
-
			const encNode = getBinaryNodeChild(node, 'enc')
			await sendRetryRequest(node, !encNode)
			if (retryRequestDelayMs) {
				await delay(retryRequestDelayMs)
			}
		} catch (err) {
-			logger.error({ err, isPreKeyError }, 'Failed to handle retry, attempting basic retry')
-			// Still attempt retry even if pre-key upload failed
-			try {
-				const encNode = getBinaryNodeChild(node, 'enc')
-				await sendRetryRequest(node, !encNode)
-			} catch (retryErr) {
-				logger.error({ retryErr }, 'Failed to send retry after error handling')
-			}
+			logger.error({ err }, 'Failed to send retry')
		}

		acked = true
@@ -1490,14 +1479,14 @@
				status
			}

-		if (status === 'relaylatency') {
-			const latencyValue = infoChild.attrs.latency || infoChild.attrs['latency_ms'] || infoChild.attrs['latency-ms']
-			const latencyMs = latencyValue ? Number(latencyValue) : undefined
-			if (Number.isFinite(latencyMs)) {
-				call.latencyMs = latencyMs
-			}
-		}
+			if (status === 'relaylatency') {
+				const latencyValue = infoChild.attrs.latency || infoChild.attrs['latency_ms'] || infoChild.attrs['latency-ms']
+				const latencyMs = latencyValue ? Number(latencyValue) : undefined
+				if (Number.isFinite(latencyMs)) {
+					call.latencyMs = latencyMs
+				}
+			}

			if (status === 'offer') {
				call.isVideo = !!getBinaryNodeChild(infoChild, 'video')
				call.isGroup = infoChild.attrs.type === 'group' || !!infoChild.attrs['group-jid']
@@ -1625,6 +1614,7 @@
			)
			ignoreJid = !isNodeFromMe || isJidGroup(attrs.from) ? attrs.from : attrs.recipient
		}
+
		if (ignoreJid && ignoreJid !== S_WHATSAPP_NET && shouldIgnoreJid(ignoreJid)) {
			await sendMessageAck(node, type === 'message' ? NACK_REASONS.UnhandledError : undefined)
			return
@@ -1786,7 +1776,7 @@
			for (const jid of survivors) tcTokenKnownJids.add(jid)

			logger.debug({ mutated, remaining: survivors.size }, 'pruned expired tctokens')
		} catch (err: any) {
⚠️ check-lint (src/Socket/messages-recv.ts:1779): Unexpected any. Specify a different type
			logger.warn({ err: err?.message }, 'failed to prune expired tctokens')
		}
	}
37 changes: 13 additions & 24 deletions src/Socket/messages-send.ts
@@ -17,6 +17,7 @@
	assertMediaContent,
	bindWaitForEvent,
	decryptMediaRetryData,
+	DEF_MEDIA_HOST,
	encodeNewsletterMessage,
	encodeSignedDeviceIdentity,
	encodeWAMessage,
@@ -109,18 +110,15 @@
		useClones: false
	})

-	const peerSessionsCache = new NodeCache<boolean>({
-		stdTTL: DEFAULT_CACHE_TTLS.USER_DEVICES,
-		useClones: false
-	})
-
	// Initialize message retry manager if enabled
	const messageRetryManager = enableRecentMessageCache ? new MessageRetryManager(logger, maxMsgRetryCount) : null

	// Prevent race conditions in Signal session encryption by user
	const encryptionMutex = makeKeyedMutex()

	let mediaConn: Promise<MediaConnInfo>
+	/** Per-socket media host; updated whenever media_conn is fetched. Defaults to the public WhatsApp host. */
+	let mediaHost: string = DEF_MEDIA_HOST
	const refreshMediaConn = async (forceGet = false) => {
		const media = await mediaConn
		if (!media || forceGet || new Date().getTime() - media.fetchDate.getTime() > media.ttl * 1000) {
@@ -146,6 +144,10 @@
				fetchDate: new Date()
			}
			logger.debug('fetched media conn')
+			if (node.hosts[0]) {
+				mediaHost = node.hosts[0].hostname
+			}
+
			return node
		})()
	}
@@ -436,24 +438,15 @@

	const assertSessions = async (jids: string[], force?: boolean) => {
		let didFetchNewSession = false
-		const uniqueJids = [...new Set(jids)] // Deduplicate JIDs
+		const uniqueJids = [...new Set(jids)]
		const jidsRequiringFetch: string[] = []

		logger.debug({ jids }, 'assertSessions call with jids')

-		// Check peerSessionsCache and validate sessions using libsignal loadSession
		for (const jid of uniqueJids) {
			const signalId = signalRepository.jidToSignalProtocolAddress(jid)
-			const cachedSession = peerSessionsCache.get(signalId)
-			if (cachedSession !== undefined) {
-				if (cachedSession && !force) {
-					continue // Session exists in cache
-				}
-			} else {
			if (!force) {
				const sessionValidation = await signalRepository.validateSession(jid)
-				const hasSession = sessionValidation.exists
-				peerSessionsCache.set(signalId, hasSession)
-				if (hasSession && !force) {
+				if (sessionValidation.exists) {
					continue
				}
			}
@@ -494,12 +487,6 @@
			})
			await parseAndInjectE2ESessions(result, signalRepository)
			didFetchNewSession = true
-
-			// Cache fetched sessions using wire JIDs
-			for (const wireJid of wireJids) {
-				const signalId = signalRepository.jidToSignalProtocolAddress(wireJid)
-				peerSessionsCache.set(signalId, true)
-			}
		}

		return didFetchNewSession
@@ -559,8 +546,8 @@
		const meLid = authState.creds.me?.lid
		const meLidUser = meLid ? jidDecode(meLid)?.user : null

		const encryptionPromises = (patchedMessages as any).map(
⚠️ check-lint (src/Socket/messages-send.ts:549): Unexpected any. Specify a different type
			async ({ recipientJid: jid, message: patchedMessage }: any) => {
⚠️ check-lint (src/Socket/messages-send.ts:550): Unexpected any. Specify a different type
				try {
					if (!jid) return null

@@ -1028,7 +1015,7 @@
					;(stanza.content as BinaryNode[]).push(reportingNode)
					logger.trace({ jid }, 'added reporting token to message')
				}
			} catch (error: any) {
⚠️ check-lint (src/Socket/messages-send.ts:1018): Unexpected any. Specify a different type
				logger.warn({ jid, trace: error?.stack }, 'failed to attach reporting token')
			}
		}
@@ -1054,7 +1041,7 @@
				: null
			try {
				await authState.keys.set({ tctoken: { [tcTokenJid]: cleared } })
			} catch (err: any) {
⚠️ check-lint (src/Socket/messages-send.ts:1044): Unexpected any. Specify a different type
				logger.debug({ jid: destinationJid, err: err?.message }, 'failed to persist tctoken expiry cleanup')
			}
		}
@@ -1235,6 +1222,8 @@
		sendReceipts,
		readMessages,
		refreshMediaConn,
+		// Function (not getter) so the spread in chats.ts preserves the live closure binding.
+		getMediaHost: () => mediaHost,
		waUploadToServer,
		fetchPrivacySettings,
		sendPeerDataOperationMessage,
@@ -1268,10 +1257,10 @@
				}

				content.directPath = media.directPath
-				content.url = getUrlFromDirectPath(content.directPath!)
+				content.url = getUrlFromDirectPath(content.directPath!, mediaHost)

				logger.debug({ directPath: media.directPath, key: result.key }, 'media update successful')
			} catch (err: any) {
⚠️ check-lint (src/Socket/messages-send.ts:1263): Unexpected any. Specify a different type
				error = err
			}
		}
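The getMediaHost comment above relies on a JS subtlety worth spelling out: spreading an object copies property values once, so a plain property (or a getter, which the spread invokes once) would freeze mediaHost at spread time, while a function keeps reading the live closure variable. A small illustrative sketch with placeholder hostnames:

let mediaHost = 'mmg.whatsapp.net' // placeholder default

const sock = {
	mediaHostNow: mediaHost, // value copied here once
	getMediaHost: () => mediaHost // reads the closure on every call
}

const spread = { ...sock } // e.g. what the spread in chats.ts does with the socket
mediaHost = 'media-host.example' // later media_conn refresh

console.log(spread.mediaHostNow) // 'mmg.whatsapp.net' (stale)
console.log(spread.getMediaHost()) // 'media-host.example' (live)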
28 changes: 8 additions & 20 deletions src/Socket/socket.ts
@@ -8,7 +8,6 @@ import {
	DEF_TAG_PREFIX,
	INITIAL_PREKEY_COUNT,
	MIN_PREKEY_COUNT,
-	MIN_UPLOAD_INTERVAL,
	NOISE_WA_HEADER,
	PROCESSABLE_HISTORY_TYPES,
	TimeMs,
@@ -468,28 +467,18 @@ export const makeSocket = (config: SocketConfig) => {
		return +countChild.attrs.value!
	}

-	// Pre-key upload state management
+	// WAWeb has no time throttle here; the server drives uploads via PreKeyLow notifications.
	let uploadPreKeysPromise: Promise<void> | null = null
-	let lastUploadTime = 0

	/** generates and uploads a set of pre-keys to the server */
-	const uploadPreKeys = async (count = MIN_PREKEY_COUNT, retryCount = 0) => {
-		// Check minimum interval (except for retries)
-		if (retryCount === 0) {
-			const timeSinceLastUpload = Date.now() - lastUploadTime
-			if (timeSinceLastUpload < MIN_UPLOAD_INTERVAL) {
-				logger.debug(`Skipping upload, only ${timeSinceLastUpload}ms since last upload`)
-				return
-			}
-		}
-
-		// Prevent multiple concurrent uploads
+	const uploadPreKeys = async (count = MIN_PREKEY_COUNT) => {
		if (uploadPreKeysPromise) {
			logger.debug('Pre-key upload already in progress, waiting for completion')
			await uploadPreKeysPromise
			return
		}

-		const uploadLogic = async () => {
+		const uploadLogic = async (retryCount: number): Promise<void> => {
			logger.info({ count, retryCount }, 'uploading pre-keys')

			// Generate and save pre-keys atomically (prevents ID collisions on retry)
@@ -498,23 +487,22 @@
				const { update, node } = await getNextPreKeysNode({ creds, keys }, count)
				// Update credentials immediately to prevent duplicate IDs on retry
				ev.emit('creds.update', update)
-				return node // Only return node since update is already used
+				return node
			}, creds?.me?.id || 'upload-pre-keys')

			// Upload to server (outside transaction, can fail without affecting local keys)
			try {
				await query(node)
				logger.info({ count }, 'uploaded pre-keys successfully')
-				lastUploadTime = Date.now()
			} catch (uploadError) {
				logger.error({ uploadError: (uploadError as Error).toString(), count }, 'Failed to upload pre-keys to server')

-				// Exponential backoff retry (max 3 retries)
+				// Recurse into uploadLogic; calling uploadPreKeys would await its own in-flight promise.
				if (retryCount < 3) {
					const backoffDelay = Math.min(1000 * Math.pow(2, retryCount), 10000)
					logger.info(`Retrying pre-key upload in ${backoffDelay}ms`)
					await new Promise(resolve => setTimeout(resolve, backoffDelay))
-					return uploadPreKeys(count, retryCount + 1)
+					return uploadLogic(retryCount + 1)
				}

				throw uploadError
@@ -523,7 +511,7 @@

		// Add timeout protection
		uploadPreKeysPromise = Promise.race([
-			uploadLogic(),
+			uploadLogic(0),
			new Promise<void>((_, reject) =>
				setTimeout(() => reject(new Boom('Pre-key upload timeout', { statusCode: 408 })), UPLOAD_TIMEOUT)
			)
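The "Recurse into uploadLogic" comment guards against a subtle self-await, sketched below with simplified, hypothetical names (the real code additionally races the promise against a timeout):

let inFlight: Promise<void> | null = null

async function doUpload(): Promise<void> {
	// A retry that called doUpload() again would land here and await the
	// very promise that is itself awaiting the retry: a self-deadlock.
	if (inFlight) return inFlight

	const attempt = async (retryCount: number): Promise<void> => {
		try {
			// ... perform the upload ...
		} catch (err) {
			if (retryCount < 3) return attempt(retryCount + 1) // safe: bypasses the guard
			throw err
		}
	}

	inFlight = attempt(0)
	try {
		await inFlight
	} finally {
		inFlight = null
	}
}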