Oracle Database node: Maximum call stack size exceeded when Select/Execute returns many rows #26985

@kevinyelee

Description

Summary

In the Oracle Database node (packages/nodes-base/nodes/Oracle/Sql), "Maximum call stack size exceeded" can occur when a Select or Execute SQL operation returns a large number of rows (on the order of tens of thousands, e.g. 10k–100k+). The root cause is the same pattern already fixed in other nodes (Microsoft SQL #8333/#8334, FTP #8657, MongoDB #8530, MySQL #10965): using Array.prototype.push.apply(array, largeArray) or array.push(...largeArray) to append many items at once. JavaScript engines limit the number of arguments to a function; when that limit is exceeded, the stack overflows.

Root cause (file and locations)

File: packages/nodes-base/nodes/Oracle/Sql/helpers/utils.ts
Function: configureQueryRunner (returned async function that runs queries and builds returnData)

1. Transaction branch

When stmtBatching === 'transaction', after executing each query the code does:

returnData.push.apply(returnData, executionData);
// or
returnData.push.apply(returnData, resultOutBinds);

Here executionData / resultOutBinds can be the full result set of a SELECT or Execute Query (e.g. 50k rows). Passing that many arguments to push via .apply() exceeds the engine's argument limit and throws RangeError: Maximum call stack size exceeded.

2. Independently branch

When stmtBatching === 'independently', the same pattern appears:

returnData.push.apply(returnData, executionData);
returnData.push.apply(returnData, resultOutBinds);

Again, for a single item that runs one large SELECT, taskResults.rows is huge, so executionData is huge and push.apply blows the stack.

3. _getResponseForOutbinds (optional hardening)

Inside _getResponseForOutbinds:

if (executionData) {
  returnData.push(...executionData);
}

If constructExecutionMetaData ever returns a large array in this path, the spread form push(...executionData) can also cause stack overflow. Replacing this with a loop would make the node robust for any size.

Suggested fix

Avoid passing a large number of arguments to push. Append in a loop instead.

Replace returnData.push.apply(returnData, executionData) — and likewise for resultOutBinds — with a plain loop.

Before:

returnData.push.apply(returnData, executionData);

After:

for (const item of executionData) {
  returnData.push(item);
}

Same for resultOutBinds:

for (const item of resultOutBinds) {
  returnData.push(item);
}

Apply this in both the transaction and independently branches (four replacements in total).

Optional: harden _getResponseForOutbinds

Before:

if (executionData) {
  returnData.push(...executionData);
}

After:

if (executionData) {
  for (const item of executionData) {
    returnData.push(item);
  }
}

This matches the approach used in the Microsoft SQL, FTP, MongoDB, and MySQL fixes and avoids stack overflow regardless of result set size.

Steps to reproduce

  1. Add an Oracle Database node.
  2. Configure credentials and choose Select (or Execute SQL with a query that returns many rows).
  3. Set Return All (or a high limit) so the query returns e.g. 15,000+ rows.
  4. Run the workflow.
    Observed: RangeError: Maximum call stack size exceeded (or similar).
    Expected: Rows returned without stack overflow.

Environment

  • n8n version: (e.g. self-hosted latest from master or a specific version)
  • Node version: (e.g. 18.x / 20.x)
  • Oracle backend: any (issue is in JS array handling, not DB)

References

Thank you for maintaining the Oracle node; applying the same pattern as in the PRs referenced in the summary (Microsoft SQL #8333/#8334, FTP #8657, MongoDB #8530, MySQL #10965) should resolve this issue.
