
Commit af368c0

[auto-merge] branch-25.08 to branch-25.10 [skip ci] [bot] (#976)
auto-merge triggered by GitHub Actions on `branch-25.08` to create a PR keeping `branch-25.10` up-to-date. If this PR cannot be merged due to conflicts, it will remain open until it is manually fixed.
2 parents c6aa374 + e74b7e9 commit af368c0

File tree

1 file changed: +6 -0 lines changed


docs/site/FAQ.md

Lines changed: 6 additions & 0 deletions
@@ -14,3 +14,9 @@ Apache Spark version 3.3.1 or higher.
 ### What versions of Python are supported
 
 Python 3.10 or higher.
+
+### How do I fix the "java.lang.IllegalArgumentException: valueCount must be >= 0" error?
+
+This error occurs when the product of the Arrow batch size and the row dimension exceeds 2,147,483,647 (INT32_MAX), typically with very wide datasets (many features per row), causing Arrow serialization to fail. For example, if you set `max_records_per_batch = 10000` and your data has `row_dimension = 300000` (i.e., 300,000 features per row), then `10000 × 300000 = 3,000,000,000`, which exceeds the INT32_MAX limit and triggers this error.
+
+Be aware that some Spark Rapids ML algorithms (such as NearestNeighbors) may convert sparse vectors to dense format internally if the underlying cuML algorithm does not support sparse input. This conversion can significantly increase memory usage, especially with wide datasets, and makes the Arrow size-limit error more likely. To mitigate this, lower the value of `spark.sql.execution.arrow.maxRecordsPerBatch` (for example, to 5,000 or less) so that the product of the batch size and the number of elements per row stays within Arrow's maximum allowed size.
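As an illustration of the mitigation described in the new FAQ entry (not part of this commit), here is a minimal PySpark sketch that lowers `spark.sql.execution.arrow.maxRecordsPerBatch` based on the row dimension so that the per-batch element count stays below INT32_MAX. The 300,000-feature row dimension and the 2x headroom factor are assumptions chosen for the example, not recommended values.

```python
# Minimal sketch (assumption-based, not from the commit): pick an Arrow batch size
# small enough that batch_size * row_dimension stays below INT32_MAX.
from pyspark.sql import SparkSession

INT32_MAX = 2_147_483_647
row_dimension = 300_000  # hypothetical: number of features (elements) per row

# Keep 2x headroom below the hard Arrow limit; the headroom factor is an assumption.
safe_batch_size = max(1, INT32_MAX // (2 * row_dimension))  # ~3,579 records per batch

spark = SparkSession.builder.appName("arrow-batch-size-example").getOrCreate()

# spark.sql.execution.arrow.maxRecordsPerBatch is a standard Spark SQL setting
# (default 10000); lowering it shrinks each Arrow batch sent to the Python workers.
spark.conf.set("spark.sql.execution.arrow.maxRecordsPerBatch", str(safe_batch_size))
```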
