Add tryRegisterNamedVectorSerde() to VectorSerde classes #1433
prestodb-ci wants to merge 29 commits into oss-baseline from staging-1e3b969d8-pr
Conversation
Test passed for commit 1777c7523, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/394/display/redirect for details
Signed-off-by: Yuan <yuanzhou@apache.org>
Set ccache maximum size to 1G
Remove sed command from Gluten workflow: removed a sed command that replaces 'oap-project' with 'IBM' in the get-velox.sh script
Modify get-velox.sh to change 'ibm' to 'ibm-xxx': update the get-velox.sh script to replace 'ibm' with 'ibm-xxx'
Update sed command to be case-insensitive
Update gluten.yml
Fix iceberg unit test
Signed-off-by: Yuan <yuanzhou@apache.org>
Update gluten.yml
Enable enhanced features in gluten build script
Update cache keys for Gluten workflow
❌ Test commit 2a50cd131 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/PR-2057/2/display/redirect for details
❌ Test commit 0007f37c6 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/PR-2057/4/display/redirect for details
❌ Test commit 0007f37c6 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/PR-2057/5/display/redirect for details
Force-pushed from 28dabdb to 3026e31
❌ Test commit 0007f37c6 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/PR-2057/6/display/redirect for details
❌ Test commit 0007f37c6 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/PR-2057/7/display/redirect for details
❌ Test commit 0007f37c6 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/PR-2057/8/display/redirect for details
❌ Test commit 0007f37c6 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/PR-2057/9/display/redirect for details
Force-pushed from 3026e31 to 72886ca
set restore key
Force-pushed from 72886ca to 4a8bc9b
❌ Test commit a017fca78 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/427/display/redirect for details
❌ Test commit a236cf5b3 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/428/display/redirect for details
❌ Test commit 0f21ff954 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/429/display/redirect for details
❌ Test commit e01ca2187 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/430/display/redirect for details
❌ Test commit 4f39eef54 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/431/display/redirect for details
❌ Test commit eba05296b failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/437/display/redirect for details
❌ Test commit bf9e9fd6d failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/440/display/redirect for details
❌ Test commit 4ff28c6bf failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/449/display/redirect for details
❌ Test commit b423dc159 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/1085/display/redirect for details
❌ Test commit d16e3fb4a failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/1090/display/redirect for details
❌ Test commit e8fd7cf40 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/1097/display/redirect for details
…ator#16689)

Summary: Pull Request resolved: facebookincubator#16689

MarkDistinct is a pure annotating passthrough: for every input row, it emits exactly one output row with an added boolean marker column. It never drops, duplicates, or reorders rows. This satisfies the isFilter() contract ("never more output rows than input rows"), matching the pattern used by AssignUniqueId. Returning true enables mayPushdownAggregation() in Driver to see through MarkDistinct, allowing LazyVector pushdown into downstream aggregations. Previously, the default false return conservatively disabled this optimization.

Reviewed By: Yuhta
Differential Revision: D95845097
fbshipit-source-id: d8d9440285a9d188276483ec7935aaf6a71c0c4f
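The isFilter() idea above can be sketched standalone. This is a hypothetical mock, not the real Velox Operator API: `MockOperator`/`MockMarkDistinct` and their `process()` method are stand-ins illustrating why a one-output-row-per-input-row operator can safely report `isFilter() == true`.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Minimal mock of an operator interface. Names loosely follow Velox's
// Operator API (isFilter), but this is a self-contained sketch.
struct MockOperator {
  // true => the operator never emits more output rows than input rows.
  virtual bool isFilter() const { return false; }
  virtual ~MockOperator() = default;
};

// MarkDistinct-style operator: emits exactly one output row per input row,
// adding a boolean "first occurrence" marker, so the isFilter() contract
// holds even though it is not a row-dropping filter.
struct MockMarkDistinct : MockOperator {
  bool isFilter() const override { return true; }

  // For each input value, append (value, isFirstOccurrence).
  std::vector<std::pair<int, bool>> process(const std::vector<int>& input) {
    std::vector<std::pair<int, bool>> out;
    std::vector<int> seen;
    for (int v : input) {
      bool first = std::find(seen.begin(), seen.end(), v) == seen.end();
      if (first) {
        seen.push_back(v);
      }
      out.emplace_back(v, first); // one output row per input row
    }
    return out;
  }
};
```

Because output cardinality always equals input cardinality, a driver-level optimization that requires "no more output rows than input rows" can look through this operator.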
❌ Test commit f4474718f failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/1109/display/redirect for details
…cebookincubator#16511)

Summary: Pull Request resolved: facebookincubator#16511

We currently do not validate the correctness of the repeat/define lengths we read in the Parquet header; this can let us access memory outside the data buffer. Add checks to validate that we do not read past the end of the pageData_ buffer.

Reviewed By: Yuhta
Differential Revision: D94270200
fbshipit-source-id: 1e78b2c09748fddc52bc01cbda58c2c6cb6a0f1e
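The kind of bounds check this fix describes can be sketched as follows. The function name, signature, and error message are illustrative, not the actual Velox Parquet reader code; the point is validating an untrusted length against the page buffer before dereferencing.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <stdexcept>

// Hypothetical bounds check in the spirit of the fix: before consuming
// 'length' bytes of repeat/define levels from a page buffer, verify the
// read stays inside [pageData, pageData + pageSize).
const uint8_t* readLevels(
    const uint8_t* pageData,
    size_t pageSize,
    size_t offset,
    size_t length) {
  // Written as two comparisons (not offset + length > pageSize) so an
  // attacker-controlled length cannot overflow the addition.
  if (offset > pageSize || length > pageSize - offset) {
    throw std::runtime_error("Corrupt Parquet page: level run exceeds page");
  }
  return pageData + offset;
}
```

A length read from the file header is untrusted input; rejecting it here turns a potential out-of-bounds read into a clean error for corrupt files.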
…kincubator#16765)

Summary: Pull Request resolved: facebookincubator#16765
Reviewed By: Yuhta
Differential Revision: D95963120
fbshipit-source-id: 2f4529f4526ba03ebf5f500f590442fbb92873a0
Summary: Pull Request resolved: facebookincubator#16652

Add a new MarkSorted operator that validates data sortedness by specified keys and adds a boolean marker column. This enables detection of data corruption before downstream pipelines that depend on sorted input.

The implementation includes:
- MarkSortedNode plan node with Builder pattern, serialization, and visitor support
- MarkSorted operator with cross-batch comparison logic using CompareFlags
- LocalPlanner translation to create the operator from the plan node
- PlanBuilder::markSorted() helper for fluent test plan construction
- Comprehensive unit tests covering single/multiple keys, ASC/DESC, NULLS FIRST/LAST, cross-batch boundaries, empty batches, null values, and VARCHAR/numeric key types

The marker column is set to true for the first row of each batch if it maintains sort order relative to the previous batch's last row, and true for subsequent rows that maintain sort order relative to their predecessor.

Key design decisions:
- Store only the sorting key columns (not the full row) for cross-batch comparison
- Use CompareFlags built from SortOrder for proper null handling
- Match the MarkDistinct pattern for consistency with existing operators

Reviewed By: Yuhta
Differential Revision: D92365997
fbshipit-source-id: b76e624a6acd63be71bea701896eaa73ebd1f1f5
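The cross-batch marking logic described above can be sketched in a standalone form. This is a simplification, assuming a single ascending integer key; the real operator compares via CompareFlags and keeps only the previous batch's last key-column values, which here collapses to a single remembered integer.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

// Standalone sketch of MarkSorted's cross-batch marking for one ascending
// int64 key. State carried across batches: the last key seen, so the first
// row of each batch is compared against the previous batch's last row.
class SortMarker {
 public:
  // Returns one bool per row: true iff the row maintains ascending order
  // relative to its predecessor (including across the batch boundary).
  // The very first row overall has no predecessor and is marked true.
  std::vector<bool> markBatch(const std::vector<int64_t>& keys) {
    std::vector<bool> marks;
    marks.reserve(keys.size());
    for (int64_t k : keys) {
      marks.push_back(!lastKey_.has_value() || *lastKey_ <= k);
      lastKey_ = k;
    }
    return marks;
  }

 private:
  std::optional<int64_t> lastKey_; // last key of the previous row/batch
};
```

A downstream consumer can then treat any false marker as evidence that the claimed sort order was violated, before corrupt data propagates further.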
Summary: Add a static tryRegisterNamedVectorSerde() method to PrestoVectorSerde, CompactRowVectorSerde, and UnsafeRowVectorSerde. This method checks whether the serde is already registered before attempting registration, avoiding duplicate-registration errors and simplifying caller code. This is extracted from D96046667 as a minimal API addition to unblock the axiom migration.

Reviewed By: xiaoxmeng
Differential Revision: D96412951
fbshipit-source-id: 9726e46ad4930f75ae28b6c3363b931380b89ae1
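The check-before-register pattern this PR adds can be sketched with a toy registry. `FakeSerde`, `serdeRegistry()`, and `tryRegisterNamedSerde()` are hypothetical stand-ins, not the real Velox VectorSerde API; the sketch only shows why "try" semantics make repeated registration idempotent instead of an error.

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>

// Toy serde type and factory; stand-ins for the real VectorSerde classes.
struct FakeSerde {};
using SerdeFactory = std::function<std::unique_ptr<FakeSerde>()>;

// Process-wide named registry (Meyers singleton).
std::unordered_map<std::string, SerdeFactory>& serdeRegistry() {
  static std::unordered_map<std::string, SerdeFactory> registry;
  return registry;
}

// Returns true if this call performed the registration, false if a serde
// with that name was already registered. A plain register function would
// typically throw on the duplicate instead, forcing callers to guard.
bool tryRegisterNamedSerde(const std::string& name, SerdeFactory factory) {
  auto [it, inserted] = serdeRegistry().emplace(name, std::move(factory));
  return inserted;
}
```

With this shape, independent components (tests, plugins, migration code) can each call the try-register variant at startup without coordinating who registers first.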
Alchemy-item: (ID = 1043) [OAP] Support struct schema evolution matching by name commit 1/1 - 5c132f1
Alchemy-item: (ID = 883) [OAP] [13620] Allow reading integers into smaller-range types commit 1/1 - 4cae2f5
…ter join
Signed-off-by: Yuan <yuanzhou@apache.org>
Alchemy-item: (ID = 1095) [OAP] [11771] Fix smj result mismatch issue commit 1/1 - 791678d
Alchemy-item: (ID = 1103) feat: Enable the hash join to accept a pre-built hash table for joining commit 1/1 - 8ca7ac1
The function toValues removes duplicated values from the vector and returns them in a std::vector. It was used to build an InPredicate. It will also be needed for building NOT IN filters for Iceberg equality delete reads, so it is moved from velox/functions/prestosql/InPredicate.cpp to velox/type/Filter.h. This commit also renames it to deDuplicateValues to make it easier to understand.

feat(connector): Support reading Iceberg split with equality deletes

This commit introduces EqualityDeleteFileReader, which is used to read Iceberg splits with equality delete files. The equality delete files are read to construct domain filters or filter functions, which are then evaluated in the base file readers.

When there is only one equality delete field, and that field is an Iceberg identifier field (i.e. a non-floating-point primitive type), the values are converted to a list as a NOT IN domain filter, with NULL treated separately. This domain filter is then pushed to the ColumnReaders to filter out unwanted rows before they are read into Velox vectors.

When the equality delete column is a nested column, e.g. a sub-column in a struct or the key in a map, such a column may not be in the base file ScanSpec. We need to add/remove these subfields to/from the SchemaWithId and ScanSpec recursively if they were not in the ScanSpec already. A test is also added for this case.

If there is more than one equality delete field, or the field is not an Iceberg identifier field, the values are converted to a typed expression in conjunction-of-disjunctions form. This expression is evaluated as the remaining filter function after the rows are read into the Velox vectors. Note that this only works for Presto for now, as the "neq" function is not registered by Spark. See facebookincubator#12667.

Note that this commit only supports integral types. VARCHAR and VARBINARY need to be supported in future commits (see facebookincubator#12664).

Co-authored-by: Naveen Kumar Mahadevuni <Naveen.Mahadevuni@ibm.com>
Alchemy-item: (ID = 1153) Iceberg staging hub commit 2/6 - 14edb98
Alchemy-item: (ID = 1172) Iceberg staging hub commit 2/6 - 495bf5c
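The deDuplicateValues helper described above can be sketched standalone. This version works on a plain std::vector<int64_t> rather than Velox vectors, and its sort-based approach is one possible implementation, not necessarily the one the commit uses; the contract is only that duplicates are collapsed so the result can back an IN / NOT IN values filter.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Collapse duplicate values so the result can be used as the value list
// of an IN / NOT IN domain filter. Sort + unique keeps it O(n log n) and
// produces a deterministic (sorted) value list.
std::vector<int64_t> deDuplicateValues(std::vector<int64_t> values) {
  std::sort(values.begin(), values.end());
  values.erase(std::unique(values.begin(), values.end()), values.end());
  return values;
}
```

For the equality-delete use case, the deduplicated list from the delete file becomes the NOT IN filter pushed to the column readers (with NULLs handled separately, as the commit notes).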
Add iceberg partition transforms.
Co-authored-by: Chengcheng Jin <Chengcheng.Jin@ibm.com>
Add NaN statistics to parquet writer.
Collect Iceberg data file statistics in dwio.
Integrate Iceberg data file statistics and add unit test.
Support writing field_id to parquet metadata SchemaElement.
Implement iceberg sort order.
Add clustered Iceberg writer mode.
Fix parquet writer ut.
Add IcebergConnector.
Fix unittest error.
Alchemy-item: (ID = 1172) Iceberg staging hub commit 3/6 - a166cdd
Alchemy-item: (ID = 1172) Iceberg staging hub commit 4/6 - 9d223f5
Alchemy-item: (ID = 1172) Iceberg staging hub commit 5/6 - 1537c2c
Alchemy-item: (ID = 1172) Iceberg staging hub commit 6/6 - 9daad64
Signed-off-by: Yuan <yuanzhou@apache.org> Alchemy-item: (ID = 906) fix: Adding daily tests commit 1/2 - e2eb2c6
We can cache ccache on every build, even on failure, since ibm/velox is always an incremental build.
Alchemy-item: (ID = 906) fix: Adding daily tests commit 2/2 - 0899ddc
This commit introduces `PartitionedVector`, a low-level execution abstraction that provides an in-place, partition-aware layout of a vector based on per-row partition IDs.

1. **In-place rearrangement**: rearrange vector data in memory without creating multiple copies
2. **Buffer reuse**: allow reuse of temporary buffers across multiple partitioning operations
3. **Minimal abstraction**: similar to `DecodedVector`, focus on efficient execution rather than operator semantics
4. **Thread-unsafe by design**: optimized for single-threaded execution contexts

For more information please see #1703
Alchemy-item: (ID = 1150) Introducing PartitionedVector commit 1/1 - 960f41b
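The core layout computation behind such an abstraction can be sketched as counting-sort-style scatter: count rows per partition, turn counts into start offsets, then place each row at its partition's next slot. This standalone version copies into an output vector for clarity; `partitionValues` and its signature are illustrative, not the PartitionedVector API, which works in place on Velox vectors with reusable scratch buffers.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Lay out 'values' so that all rows of partition 0 come first, then
// partition 1, etc., preserving row order within each partition (stable).
std::vector<int64_t> partitionValues(
    const std::vector<int64_t>& values,
    const std::vector<uint32_t>& partitionIds, // one ID per row
    uint32_t numPartitions) {
  // counts[p + 1] = number of rows in partition p.
  std::vector<size_t> counts(numPartitions + 1, 0);
  for (uint32_t p : partitionIds) {
    counts[p + 1]++;
  }
  // Prefix sums: counts[p] becomes the start offset of partition p.
  for (uint32_t p = 1; p <= numPartitions; ++p) {
    counts[p] += counts[p - 1];
  }
  // Scatter each row to its partition's next free slot.
  std::vector<int64_t> out(values.size());
  for (size_t i = 0; i < values.size(); ++i) {
    out[counts[partitionIds[i]]++] = values[i];
  }
  return out;
}
```

Two linear passes plus one scatter; reusing the counts buffer across calls is what the "buffer reuse" goal above refers to.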
Signed-off-by: Xin Zhang <xin-zhang2@ibm.com> Alchemy-item: (ID = 1167) Add PartitionedRowVector commit 1/1 - f2af427
Alchemy-item: (ID = 1168) Remove website folder commit 1/1 - b4dcd7d
…dthValuesInPlace
Alchemy-item: (ID = 1179) Optimized PartitionedOutput staging hub commit 1/3 - 86db93b
Alchemy-item: (ID = 1179) Optimized PartitionedOutput staging hub commit 2/3 - 6dd3661
PartitionedFlatVector::partition() and PartitionedRowVector::partition() called mutableRawNulls() unconditionally. mutableRawNulls() allocates a null buffer if one does not exist, causing mayHaveNulls() to return true for every vector after partitioning, even when the original had no nulls.

Fix both sites to check rawNulls() first and only call mutableRawNulls() when a null buffer already exists. Add noNullBufferAllocatedForNullFreeFlat and noNullBufferAllocatedForNullFreeRow tests to PartitionedVectorTest to cover this case.

# Conflicts:
#	velox/vector/PartitionedVector.cpp

Alchemy-item: (ID = 1179) Optimized PartitionedOutput staging hub commit 3/3 - 2706c1e
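The check-before-mutate pattern in that fix can be sketched with a toy vector type. `FakeVector`, `nulls()`, and `mutableNulls()` are hypothetical stand-ins for Velox's rawNulls()/mutableRawNulls(); the sketch shows why consulting the const accessor first keeps a null-free vector null-free after partitioning.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Vector-like holder whose null buffer is allocated lazily.
struct FakeVector {
  std::unique_ptr<std::vector<bool>> nullBuffer; // nullptr == no nulls

  // Const accessor: may return nullptr, never allocates.
  const std::vector<bool>* nulls() const { return nullBuffer.get(); }

  // Allocates on demand -- the call the buggy code made unconditionally.
  std::vector<bool>& mutableNulls(size_t size) {
    if (!nullBuffer) {
      nullBuffer = std::make_unique<std::vector<bool>>(size, false);
    }
    return *nullBuffer;
  }

  bool mayHaveNulls() const { return nullBuffer != nullptr; }
};

// Fixed pattern: check the const accessor first, so a null-free source
// never forces the target to allocate a null buffer.
void partitionNulls(const FakeVector& source, FakeVector& target, size_t size) {
  if (source.nulls() != nullptr) {
    target.mutableNulls(size) = *source.nulls(); // copy only when needed
  }
}
```

Beyond the wasted allocation, a spurious null buffer matters because mayHaveNulls() == true disables null-free fast paths downstream.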
Signed-off-by: Linsong Wang <linsong.wang@ibm.com>
❌ Test commit e7dd656c9 failed, open https://ci.ibm.prestodb.dev/job/presto-performance/job/presto-performance/job/pipeline-rebase-ibm-velox/1114/display/redirect for details
Test PR for branch staging-1e3b969d8-pr with head 1e3b969