
Add tryRegisterNamedVectorSerde() to VectorSerde classes #1433

Open
prestodb-ci wants to merge 29 commits into oss-baseline from
staging-1777c7523-pr
Conversation


@prestodb-ci prestodb-ci commented Nov 25, 2025

Test PR for branch staging-1e3b969d8-pr with head 1e3b969


@prestodb-ci prestodb-ci deleted the staging-1777c7523-pr branch November 26, 2025 01:29
Signed-off-by: Yuan <yuanzhou@apache.org>

Set ccache maximum size to 1G

Remove sed command from Gluten workflow

Removed a sed command that replaces 'oap-project' with 'IBM' in the get-velox.sh script.

Modify get-velox.sh to change 'ibm' to 'ibm-xxx'

Update the get-velox.sh script to replace 'ibm' with 'ibm-xxx'.

Update sed command to be case-insensitive

Update gluten.yml

fix iceberg unit test

Signed-off-by: Yuan <yuanzhou@apache.org>

Update gluten.yml

Enable enhanced features in gluten build script

Update cache keys for Gluten workflow
@FelixYBW FelixYBW restored the staging-1777c7523-pr branch November 27, 2025 09:14
@FelixYBW FelixYBW reopened this Nov 27, 2025
@FelixYBW FelixYBW changed the title from "fix: Avoid TSAN data race during cache entry initialization (#15623)" to "refactor: Extract common BaseSerializedPage API (#15626)" Dec 3, 2025
@prestodb-ci prestodb-ci changed the title from "refactor: Extract common BaseSerializedPage API (#15626)" to "fix(build): Ambiguity caused by long literal (#15670)" Dec 3, 2025
…ator#16689)

Summary:
Pull Request resolved: facebookincubator#16689

MarkDistinct is a pure annotating passthrough — for every input row, it emits
exactly one output row with an added boolean marker column. It never drops,
duplicates, or reorders rows. This satisfies the isFilter() contract ("never
more output rows than input rows"), matching the pattern used by
AssignUniqueId.

Returning true enables mayPushdownAggregation() in Driver to see through
MarkDistinct, allowing LazyVector pushdown into downstream aggregations.
Previously, the default false return conservatively disabled this optimization.

Reviewed By: Yuhta

Differential Revision: D95845097

fbshipit-source-id: d8d9440285a9d188276483ec7935aaf6a71c0c4f
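
To illustrate the isFilter() contract the message describes, here is a minimal, self-contained sketch. The class, method names, and int-keyed input are illustrative stand-ins, not Velox's actual operator API:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Illustrative sketch only: a pure annotating passthrough in the spirit of
// MarkDistinct. Every input row yields exactly one output row plus a boolean
// marker, so output rows never exceed input rows -- the isFilter() contract.
struct MarkDistinctSketch {
  // Satisfies "never more output rows than input rows".
  static bool isFilter() { return true; }

  // Marks the first occurrence of each value as distinct.
  static std::vector<std::pair<int, bool>> annotate(
      const std::vector<int>& input) {
    std::vector<std::pair<int, bool>> out;
    std::vector<int> seen;
    for (int v : input) {
      bool distinct = true;
      for (int s : seen) {
        if (s == v) {
          distinct = false;
          break;
        }
      }
      if (distinct) {
        seen.push_back(v);
      }
      out.emplace_back(v, distinct); // one output row per input row
    }
    return out;
  }
};
```

Because the row count is preserved exactly, a driver can safely treat such an operator as a filter for lazy-evaluation purposes.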
kevinwilfong and others added 24 commits March 13, 2026 13:57
…cebookincubator#16511)

Summary:
Pull Request resolved: facebookincubator#16511

We currently do not validate the correctness of the repeat/define lengths we read from the Parquet header;
this can lead us to access memory outside the data buffer.

Add checks to validate we do not go off the end of the pageData_ buffer.

Reviewed By: Yuhta

Differential Revision: D94270200

fbshipit-source-id: 1e78b2c09748fddc52bc01cbda58c2c6cb6a0f1e
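
As a rough illustration of the added check, here is a minimal sketch of validating a claimed level length against the remaining page buffer before consuming it. The function name and signature are hypothetical, not the actual Velox Parquet reader API:

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>

// Hypothetical sketch: before consuming repeat/define levels of a claimed
// byte length, verify the length fits inside the remaining page buffer
// instead of trusting the header.
const char* consumeLevels(
    const char* pageData,
    std::size_t pageSize,
    std::size_t offset,
    std::size_t levelLength) {
  // Reject lengths that would run off the end of pageData. Checking
  // levelLength > pageSize first keeps the subtraction overflow-safe.
  if (levelLength > pageSize || offset > pageSize - levelLength) {
    throw std::runtime_error("Parquet level length exceeds page buffer");
  }
  return pageData + offset + levelLength; // safe position after the levels
}
```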
…kincubator#16765)

Summary: Pull Request resolved: facebookincubator#16765

Reviewed By: Yuhta

Differential Revision: D95963120

fbshipit-source-id: 2f4529f4526ba03ebf5f500f590442fbb92873a0
Summary:
Pull Request resolved: facebookincubator#16652

Add a new MarkSorted operator that validates data sortedness by specified keys and adds a boolean marker column. This enables detection of data corruption before downstream pipelines that depend on sorted input.

The implementation includes:
- MarkSortedNode plan node with Builder pattern, serialization, and visitor support
- MarkSorted operator with cross-batch comparison logic using CompareFlags
- LocalPlanner translation to create the operator from the plan node
- PlanBuilder::markSorted() helper for fluent test plan construction
- Comprehensive unit tests covering single/multiple keys, ASC/DESC, NULLS FIRST/LAST, cross-batch boundaries, empty batches, null values, and VARCHAR/numeric key types

The marker column is set to true for the first row of each batch if it maintains sort order relative to the previous batch's last row, and true for subsequent rows that maintain sort order relative to their predecessor.

Key design decisions:
- Store only sorting key columns (not full row) for cross-batch comparison
- Use CompareFlags built from SortOrder for proper null handling
- Match MarkDistinct pattern for consistency with existing operators

Reviewed By: Yuhta

Differential Revision: D92365997

fbshipit-source-id: b76e624a6acd63be71bea701896eaa73ebd1f1f5
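
The cross-batch comparison logic above can be sketched for a single ascending integer key. The real operator builds CompareFlags from SortOrder and stores the key columns of the previous batch; this stand-in is purely illustrative:

```cpp
#include <cassert>
#include <optional>
#include <vector>

// Illustrative stand-in: a row's marker is true iff it maintains ascending
// order relative to its predecessor, and the previous batch's last key is
// carried across batch boundaries.
struct MarkSortedSketch {
  std::optional<int> lastKey; // last key of the previous batch, if any

  std::vector<bool> markBatch(const std::vector<int>& keys) {
    std::vector<bool> marks;
    for (int k : keys) {
      // First row ever is trivially in order; otherwise compare to the
      // predecessor (which may live in the previous batch).
      marks.push_back(!lastKey.has_value() || *lastKey <= k);
      lastKey = k; // becomes the predecessor for the next row/batch
    }
    return marks;
  }
};
```

Storing only the last key (rather than the whole previous batch) mirrors the stated design decision of keeping just the sorting key columns for cross-batch comparison.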
Summary:
Add static tryRegisterNamedVectorSerde() method to PrestoVectorSerde,
CompactRowVectorSerde, and UnsafeRowVectorSerde. This method checks if
the serde is already registered before attempting registration, avoiding
duplicate registration errors and simplifying caller code.

This is extracted from D96046667 as a minimal API addition to unblock
axiom migration

Reviewed By: xiaoxmeng

Differential Revision: D96412951

fbshipit-source-id: 9726e46ad4930f75ae28b6c3363b931380b89ae1
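
The check-before-register pattern this commit adds can be sketched against a hypothetical name-keyed registry. The real method is static on PrestoVectorSerde, CompactRowVectorSerde, and UnsafeRowVectorSerde; the registry and factory type below are stand-ins:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>

// Stand-in registry and factory type; only the try-register pattern is the
// point here, not the actual VectorSerde API.
using SerdeFactory = std::function<int()>; // placeholder factory type

std::unordered_map<std::string, SerdeFactory>& serdeRegistry() {
  static std::unordered_map<std::string, SerdeFactory> registry;
  return registry;
}

// Registers under `name` only if nothing is registered yet; returns whether
// a registration happened. Callers avoid duplicate-registration errors and
// need no "is it registered already?" boilerplate of their own.
bool tryRegisterNamedSerde(const std::string& name, SerdeFactory factory) {
  auto& registry = serdeRegistry();
  if (registry.count(name) != 0) {
    return false; // already registered; quiet no-op instead of an error
  }
  registry.emplace(name, std::move(factory));
  return true;
}
```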
Alchemy-item: (ID = 1043) [OAP] Support struct schema evolution matching by name commit 1/1 - 5c132f1
Alchemy-item: (ID = 883) [OAP] [13620] Allow reading integers into smaller-range types  commit 1/1 - 4cae2f5
…ter join

Signed-off-by: Yuan <yuanzhou@apache.org>

Alchemy-item: (ID = 1095) [OAP] [11771] Fix smj result mismatch issue commit 1/1 - 791678d
Alchemy-item: (ID = 1103) feat: Enable the hash join to accept a pre-built hash table for joining commit 1/1 - 8ca7ac1
Alchemy-item: (ID = 1153) Iceberg staging hub commit 1/6 - c5a69de

Alchemy-item: (ID = 1172) Iceberg staging hub commit 1/6 - c357a2c
The function toValues removes duplicated values from the vector and
returns them in a std::vector. It was used to build an InPredicate. It
will also be needed for building NOT IN filters for Iceberg equality
delete reads, so this commit moves it from
velox/functions/prestosql/InPredicate.cpp to velox/type/Filter.h and
renames it to deDuplicateValues to make it easier to understand.
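
For illustration, a minimal stand-in for what a deDuplicateValues-style helper does; the actual Velox signature and ordering behavior may differ:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal stand-in: collapse a value list to its distinct values, the shape
// of input an IN / NOT IN filter needs. Sort-then-unique is one simple way;
// the real helper may preserve input order or use hashing instead.
std::vector<int64_t> deDuplicateValuesSketch(std::vector<int64_t> values) {
  std::sort(values.begin(), values.end());
  values.erase(std::unique(values.begin(), values.end()), values.end());
  return values;
}
```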

feat(connector): Support reading Iceberg split with equality deletes

This commit introduces EqualityDeleteFileReader, which is used to read
Iceberg splits with equality delete files. The equality delete files
are read to construct domain filters or filter functions, which then
would be evaluated in the base file readers.

When there is only one equality delete field, and that field is an
Iceberg identifier field, i.e. a non-floating-point primitive type, the
values are converted to a list as a NOT IN domain filter, with NULL
treated separately. This domain filter is then pushed to the
ColumnReaders to filter out unwanted rows before they are read into
Velox vectors. When the equality delete column is a nested column, e.g.
a sub-column in a struct or the key in a map, the column may not be in
the base file ScanSpec. We need to add/remove these subfields to/from
the SchemaWithId and ScanSpec recursively if they were not in the
ScanSpec already. A test is also added for such a case.

If there is more than one equality delete field, or the field is not an
Iceberg identifier field, the values are converted to a typed expression
in conjunction-of-disjunctions form. This expression is evaluated as the
remaining filter function after the rows are read into the Velox
vectors. Note that this currently only works for Presto, as the "neq"
function is not registered by Spark. See
https://github.com/facebookincubator/issues/12667

Note that this commit only supports integral types. VARCHAR and
VARBINARY need to be supported in future commits (see
facebookincubator#12664).

Co-authored-by: Naveen Kumar Mahadevuni <Naveen.Mahadevuni@ibm.com>

Alchemy-item: (ID = 1153) Iceberg staging hub commit 2/6 - 14edb98

Alchemy-item: (ID = 1172) Iceberg staging hub commit 2/6 - 495bf5c
Add iceberg partition transforms.

Co-authored-by: Chengcheng Jin <Chengcheng.Jin@ibm.com>

Add NaN statistics to parquet writer.

Collect Iceberg data file statistics in dwio.

Integrate Iceberg data file statistics and adding unit test.

Support write field_id to parquet metadata SchemaElement.

Implement iceberg sort order

Add clustered Iceberg writer mode.

Fix parquet writer ut

Add IcebergConnector

Fix unittest error

Alchemy-item: (ID = 1172) Iceberg staging hub commit 3/6 - a166cdd
Alchemy-item: (ID = 1172) Iceberg staging hub commit 4/6 - 9d223f5
Alchemy-item: (ID = 1172) Iceberg staging hub commit 5/6 - 1537c2c
Alchemy-item: (ID = 1172) Iceberg staging hub commit 6/6 - 9daad64
Signed-off-by: Yuan <yuanzhou@apache.org>

Alchemy-item: (ID = 906) fix: Adding daily tests commit 1/2 - e2eb2c6
We can cache ccache on every build even on failure, since ibm/velox is
always an incremental build.

Alchemy-item: (ID = 906) fix: Adding daily tests commit 2/2 - 0899ddc
Alchemy-item: (ID = 988) Add fileNameGenerator to the constructor of IcebergInsertTableHandle commit 1/1 - a5f7e46
This commit introduces `PartitionedVector` - a low-level execution
abstraction that provides an in-place, partition-aware layout of a
vector based on per-row partition IDs.

1. **In-place rearrangement**: Rearrange vector data in memory without
   creating multiple copies
2. **Buffer reuse**: Allow reuse of temporary buffers across multiple
   partitioning operations
3. **Minimal abstraction**: Similar to `DecodedVector`, focus on
   efficient execution rather than operator semantics
4. **Thread-unsafe by design**: Optimized for single-threaded execution
   contexts

For more information please see #1703
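
As a rough sketch of the rearrangement described above: group rows by per-row partition id with one counting pass and one stable scatter pass, reusing a caller-provided scratch buffer. The names and the counting-sort approach are illustrative assumptions, not the actual PartitionedVector API:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch: partition-aware layout via counting sort. The scratch
// buffer is reused across calls (buffer reuse), and the result replaces the
// input, so the caller sees an in-place rearrangement. Single-threaded by
// design, like the abstraction it models.
void partitionInPlace(
    std::vector<int>& values,
    const std::vector<int>& partitionIds,
    int numPartitions,
    std::vector<int>& scratch) { // reused across partitioning operations
  std::vector<int> offsets(numPartitions + 1, 0);
  for (int p : partitionIds) {
    ++offsets[p + 1]; // count rows per partition
  }
  for (int i = 0; i < numPartitions; ++i) {
    offsets[i + 1] += offsets[i]; // prefix sums -> partition start offsets
  }
  scratch.resize(values.size());
  for (std::size_t i = 0; i < values.size(); ++i) {
    scratch[offsets[partitionIds[i]]++] = values[i]; // stable scatter
  }
  values.swap(scratch); // "in place" from the caller's perspective
}
```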

Alchemy-item: (ID = 1150) Introducing PartitionedVector commit 1/1 - 960f41b
Signed-off-by: Xin Zhang <xin-zhang2@ibm.com>

Alchemy-item: (ID = 1167) Add PartitionedRowVector commit 1/1 - f2af427
Alchemy-item: (ID = 1168) Remove website folder commit 1/1 - b4dcd7d
…dthValuesInPlace

Alchemy-item: (ID = 1179) Optimized PartitionedOutput staging hub commit 1/3 - 86db93b
Alchemy-item: (ID = 1179) Optimized PartitionedOutput staging hub commit 2/3 - 6dd3661
PartitionedFlatVector::partition() and PartitionedRowVector::partition()
called mutableRawNulls() unconditionally. mutableRawNulls() allocates a
null buffer if one does not exist, causing mayHaveNulls() to return true
for every vector after partitioning, even when the original had no nulls.

Fix both sites to check rawNulls() first and only call mutableRawNulls()
when a null buffer already exists.

Add noNullBufferAllocatedForNullFreeFlat and
noNullBufferAllocatedForNullFreeRow tests to PartitionedVectorTest to
cover this case.

# Conflicts:
#	velox/vector/PartitionedVector.cpp

Alchemy-item: (ID = 1179) Optimized PartitionedOutput staging hub commit 3/3 - 2706c1e
Signed-off-by: Linsong Wang <linsong.wang@ibm.com>
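
The null-buffer fix above can be illustrated with a toy vector type: only copy or allocate the null buffer when the source already has one. Types and method names here are stand-ins, not Velox's:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Toy model of the bug and fix. A vector whose `nulls` buffer is absent is
// known null-free; allocating the buffer unconditionally makes
// mayHaveNulls() return true even when no row is actually null.
struct ToyVector {
  std::optional<std::vector<uint8_t>> nulls; // absent == no nulls anywhere

  bool mayHaveNulls() const { return nulls.has_value(); }

  const std::vector<uint8_t>* rawNulls() const {
    return nulls ? &*nulls : nullptr;
  }

  // Allocates the buffer on demand -- the call the bug made unconditionally.
  std::vector<uint8_t>& mutableRawNulls(std::size_t size) {
    if (!nulls) {
      nulls.emplace(size, 0);
    }
    return *nulls;
  }
};

// Fixed partition step: touch the null buffer only if the source has one,
// so null-free inputs produce null-free outputs.
ToyVector partitionNulls(const ToyVector& source, std::size_t size) {
  ToyVector out;
  if (source.rawNulls() != nullptr) { // the added rawNulls() check
    out.mutableRawNulls(size) = *source.rawNulls();
  }
  return out;
}
```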