Conversation

@floze (Contributor) commented Oct 15, 2025

When the list grows beyond 8000 elements, Lua fails with the obscure error "too many results to unpack".

Found the fix here:
taskforcesh/bullmq#422
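
For context (not part of the PR or the linked fix): Lua 5.1, the runtime Redis scripts use, caps the number of values `unpack` may place on the stack at `LUAI_MAXCSTACK`, which defaults to 8000, and exceeding it raises exactly this error. A minimal illustration:

```lua
-- Illustration only (not from the PR): Lua 5.1 caps unpack() at
-- LUAI_MAXCSTACK values, which defaults to 8000.
local t = {}
for i = 1, 8001 do
    t[i] = i
end

-- unpack() on 8001 elements exceeds the limit and raises an error.
local ok, err = pcall(function()
    return unpack(t)
end)
-- ok is false; err mentions "too many results to unpack"
print(ok, err)
```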

Summary by CodeRabbit

  • Performance
    • Optimized job retrieval by type to handle very large lists more efficiently via batched operations, reducing memory overhead and improving responsiveness.
  • Bug Fixes
    • Prevented failures and errors that could occur when querying or merging extremely large job lists, enhancing stability for high-volume workloads.

vercel bot commented Oct 15, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Updated (UTC) |
| --- | --- | --- | --- |
| docs | ✅ Ready | Preview | Nov 16, 2025 11:51pm |
| vendure-storybook | ✅ Ready | Preview (Comment) | Nov 16, 2025 11:51pm |

coderabbitai bot (Contributor) commented Oct 15, 2025

Walkthrough

Introduces a Lua batching helper and updates the list-merging logic to push elements to Redis in chunks via multiple RPUSH calls instead of a single unpack-based RPUSH, altering control flow within the merge step while preserving output semantics.

Changes

| Cohort / File(s) | Summary of Changes |
| --- | --- |
| **BullMQ Lua scripts**<br>`packages/job-queue-plugin/src/bullmq/scripts/get-jobs-by-type.ts` | Added a Lua `batches(n, batchSize)` generator; replaced the single `RPUSH key unpack(listElements)` with iterative chunked `RPUSH` calls using computed `(from, to)` ranges to avoid an oversized `unpack`; overall result-merging behavior unchanged. |
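
The chunked-push change summarized above can be sketched in plain Lua (a sketch following the naming used in this review; `rcall`, `key`, and `listElements` stand in for the script's actual environment):

```lua
-- Sketch of the batching iterator added by this PR (names assumed from
-- the review comments). Yields 1-based (from, to) index ranges covering
-- n elements in chunks of at most batchSize.
local function batches(n, batchSize)
    local i = 0
    return function()
        local from = i * batchSize + 1
        if from > n then
            return nil
        end
        i = i + 1
        return from, math.min(from + batchSize - 1, n)
    end
end

-- Instead of one oversized unpack, push the list in chunks small enough
-- for Lua's stack (7000 stays under LUAI_MAXCSTACK's default of 8000):
for from, to in batches(#listElements, 7000) do
    rcall('RPUSH', key, unpack(listElements, from, to))
end
```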

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Caller
  participant TS as get-jobs-by-type.ts
  participant Redis

  Caller->>TS: invoke getJobsByType(...)
  TS->>Redis: EVAL Lua script
  activate Redis

  rect rgba(220,235,255,0.5)
    note over Redis: Collect list elements to merge
    Redis->>Redis: batches(total, batchSize)
    loop For each (from,to) batch
      Redis->>Redis: RPUSH key elements[from..to]
    end
  end

  Redis-->>TS: merged list result
  deactivate Redis
  TS-->>Caller: return result
```

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Description Check | ⚠️ Warning | The pull request description only outlines the error encountered and references an external issue, but omits the required summary of changes, the breaking-changes section, and the checklist sections mandated by the repository template. | Please update the PR description to include a clear summary of the changes made, a "Breaking changes" section (even if noting none), and a completed checklist as specified in the repository template. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title Check | ✅ Passed | The title succinctly captures the core change (batching list elements to avoid oversized unpack calls) without extraneous details, making it clear and specific to the modifications introduced in this PR. |
| Docstring Coverage | ✅ Passed | No functions found in the changes; docstring coverage check skipped. |
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
packages/job-queue-plugin/src/bullmq/scripts/get-jobs-by-type.ts (1)

180-180: Consider applying the same batching pattern for consistency.

While sortedElements is typically small (limited by the take pagination parameter), a caller could pass a very large take value (e.g., to fetch thousands of items in one request), causing this unpack to fail with the same error. For defensive programming and consistency with the fix at lines 160-162, consider batching this operation as well.

Apply this diff to add batching:

```diff
-            rcall('RPUSH', tempListKey, unpack(sortedElements))
+            for from, to in batches(#sortedElements, 7000) do
+                rcall('RPUSH', tempListKey, unpack(sortedElements, from, to))
+            end
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1bb9cf8 and 8ff7168.

📒 Files selected for processing (1)
  • packages/job-queue-plugin/src/bullmq/scripts/get-jobs-by-type.ts (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (20)
  • GitHub Check: e2e tests (24.x, mariadb)
  • GitHub Check: e2e tests (20.x, mariadb)
  • GitHub Check: e2e tests (24.x, sqljs)
  • GitHub Check: e2e tests (22.x, postgres)
  • GitHub Check: codegen / codegen
  • GitHub Check: unit tests (22.x)
  • GitHub Check: unit tests (20.x)
  • GitHub Check: build (22.x)
  • GitHub Check: unit tests (24.x)
  • GitHub Check: build (24.x)
  • GitHub Check: build (20.x)
  • GitHub Check: publish_install (ubuntu-latest, 20.x)
  • GitHub Check: publish_install (ubuntu-latest, 24.x)
  • GitHub Check: publish_install (macos-latest, 22.x)
  • GitHub Check: publish_install (windows-latest, 24.x)
  • GitHub Check: publish_install (macos-latest, 20.x)
  • GitHub Check: publish_install (windows-latest, 20.x)
  • GitHub Check: publish_install (macos-latest, 24.x)
  • GitHub Check: publish_install (ubuntu-latest, 22.x)
  • GitHub Check: publish_install (windows-latest, 22.x)
🔇 Additional comments (2)
packages/job-queue-plugin/src/bullmq/scripts/get-jobs-by-type.ts (2)

21-32: LGTM! Well-implemented batching iterator.

The batches function correctly generates (from, to) ranges for chunking arrays in Lua. The logic handles 1-based indexing properly and ensures the final batch doesn't exceed bounds.


160-162: Excellent fix for the unpack limit issue!

Batching the RPUSH operations with a chunk size of 7000 directly resolves the "too many results to unpack" error when list elements exceed 8000 items. The implementation correctly uses unpack(listElements, from, to) to push each batch.

@sonarqubecloud


Labels

core-team To be picked up by Vendure core team members

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants