Fix stats collection for integer-based decimal numbers #1848
Conversation
Alchemy-item: (ID = 1043) [OAP] Support struct schema evolution matching by name commit 1/1 - 5c132f1
Alchemy-item: (ID = 883) [OAP] [13620] Allow reading integers into smaller-range types commit 1/1 - 4cae2f5
…ter join Signed-off-by: Yuan <yuanzhou@apache.org> Alchemy-item: (ID = 1095) [OAP] [11771] Fix smj result mismatch issue commit 1/1 - 791678d
Alchemy-item: (ID = 1208) feat: Enable the hash join to accept a pre-built hash table for joining commit 1/1 - c3141f1
Alchemy-item: (ID = 1153) Iceberg staging hub commit 1/6 - c5a69de3d1021073c13a99e1c7c6d6fcce355178

refactor: Move toValues from InPredicate.cpp to Filter.h

The function toValues removes duplicated values from the vector and returns them in a std::vector. It was used to build an InPredicate. It will also be needed for building NOT IN filters for the Iceberg equality delete read, therefore it is moved from velox/functions/prestosql/InPredicate.cpp to velox/type/Filter.h. This commit also renames it to deDuplicateValues to make it easier to understand.

feat(connector): Support reading Iceberg split with equality deletes

This commit introduces EqualityDeleteFileReader, which is used to read Iceberg splits with equality delete files. The equality delete files are read to construct domain filters or filter functions, which are then evaluated in the base file readers.

When there is only one equality delete field, and that field is an Iceberg identifier field (i.e. a non-floating-point primitive type), the values are converted to a list used as a NOT IN domain filter, with NULL treated separately. This domain filter is then pushed down to the ColumnReaders to filter out unwanted rows before they are read into Velox vectors.

When the equality delete column is a nested column, e.g. a sub-column of a struct or the key of a map, such a column may not be in the base file ScanSpec. We need to add/remove these subfields to/from the SchemaWithId and ScanSpec recursively if they are not in the ScanSpec already. A test is also added for such a case.

If there is more than one equality delete field, or the field is not an Iceberg identifier field, the values are converted to a typed expression in conjunction-of-disjunctions form. This expression is evaluated as the remaining filter function after the rows are read into the Velox vectors. Note that this only works for Presto for now, as the "neq" function is not registered by Spark. See https://github.com/facebookincubator/issues/12667

Note that this commit only supports integral types. VARCHAR and VARBINARY need to be supported in future commits (see facebookincubator#12664).

Co-authored-by: Naveen Kumar Mahadevuni <Naveen.Mahadevuni@ibm.com>

Alchemy-item: (ID = 1153) Iceberg staging hub commit 2/6 - 14edb98c67f1c572a5f40682923795bd5b08e7c3

Support insert data into iceberg table:
- Add Iceberg partition transforms (co-authored-by: Chengcheng Jin <Chengcheng.Jin@ibm.com>)
- Add NaN statistics to parquet writer
- Collect Iceberg data file statistics in dwio
- Integrate Iceberg data file statistics and add unit test
- Support writing field_id to parquet metadata SchemaElement
- Implement Iceberg sort order
- Add clustered Iceberg writer mode
- Fix parquet writer unit test
- Add IcebergConnector
- Fix unit test error, resolve conflicts, fix test build issue, fix crash

Alchemy-item: (ID = 1205) Iceberg core code commit 1/1 - 4f10953
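As a rough illustration of the deDuplicateValues helper described above, here is a minimal sketch (illustrative only; the real helper lives in velox/type/Filter.h and is generic over the filter's value types):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of the deDuplicateValues idea: collapse the values read from an
// equality delete file into a unique list before building a NOT IN
// domain filter out of them.
std::vector<int64_t> deDuplicateValues(std::vector<int64_t> values) {
  std::sort(values.begin(), values.end());
  values.erase(std::unique(values.begin(), values.end()), values.end());
  return values;
}
```

The unique list can then back a single NOT IN domain filter that the column readers evaluate while scanning the base file.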
Signed-off-by: Yuan <yuanzhou@apache.org> Alchemy-item: (ID = 906) fix: Adding daily tests commit 1/2 - e2eb2c6
We can cache ccache on every build, even on failure, since ibm/velox is always an incremental build. Alchemy-item: (ID = 906) fix: Adding daily tests commit 2/2 - 0899ddc
This commit introduces `PartitionedVector` - a low-level execution abstraction that provides an in-place, partition-aware layout of a vector based on per-row partition IDs.

1. **In-place rearrangement**: Rearrange vector data in memory without creating multiple copies
2. **Buffer reuse**: Allow reuse of temporary buffers across multiple partitioning operations
3. **Minimal abstraction**: Similar to `DecodedVector`, focus on efficient execution rather than operator semantics
4. **Thread-unsafe by design**: Optimized for single-threaded execution contexts

For more information please see IBM#1703

Alchemy-item: (ID = 1150) Introducing PartitionedVector commit 1/1 - 960f41b
Signed-off-by: Xin Zhang <xin-zhang2@ibm.com> Alchemy-item: (ID = 1167) Add PartitionedRowVector commit 1/1 - f2af427
Alchemy-item: (ID = 1206) Remove website folder commit 1/1 - 8866603
…dthValuesInPlace Alchemy-item: (ID = 1179) Optimized PartitionedOutput staging hub commit 1/3 - 86db93b
Alchemy-item: (ID = 1179) Optimized PartitionedOutput staging hub commit 2/3 - 6dd3661
PartitionedFlatVector::partition() and PartitionedRowVector::partition() called mutableRawNulls() unconditionally. mutableRawNulls() allocates a null buffer if one does not exist, causing mayHaveNulls() to return true for every vector after partitioning, even when the original had no nulls.

Fix both sites to check rawNulls() first and only call mutableRawNulls() when a null buffer already exists. Add noNullBufferAllocatedForNullFreeFlat and noNullBufferAllocatedForNullFreeRow tests to PartitionedVectorTest to cover this case.

# Conflicts:
#	velox/vector/PartitionedVector.cpp

Alchemy-item: (ID = 1179) Optimized PartitionedOutput staging hub commit 3/3 - 2706c1e
Signed-off-by: Hazmi <ialhazmim@gmail.com> Alchemy-item: (ID = 1203) Fix iceberg min max statistics for decimal type when encoded as int32 commit 1/1 - 0ac9930
alchemy link becfc90
Added new rebase item:

alchemy merge

The following unexpired item was removed at
Added new rebase item:

Expired 1 rebase items linked in this issue at 2026-03-30T16:33:29Z

Failed to add new rebase item:
The new rebase item overlaps with the following existing item:
Please double check your input and retry.
becfc90 to 8531b53 (Compare)
alchemy link 8531b53

Failed to add new rebase item:
The new rebase item overlaps with the following existing item:
Please double check your input and retry.
Replaced by

alchemy close

Closed 1 rebase item(s) linked in this issue at 2026-04-16T04:05:23Z
Integer-based decimal numbers were being written in little-endian byte order, but they are expected to be in network (big-endian) byte order. The bytes are now reversed in this case.
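As a sketch of the idea, assuming an int64-backed short decimal (names are illustrative; the actual patch operates on the writer's stats buffers):

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Sketch: emit the unscaled value of an int64-backed decimal in network
// (big-endian) byte order, independent of host endianness. Built with
// shifts rather than memcpy so it needs no endianness detection.
std::array<uint8_t, 8> toNetworkByteOrder(int64_t unscaled) {
  std::array<uint8_t, 8> out{};
  auto bits = static_cast<uint64_t>(unscaled);
  for (int i = 0; i < 8; ++i) {
    // Most significant byte first.
    out[i] = static_cast<uint8_t>(bits >> (8 * (7 - i)));
  }
  return out;
}
```

On a little-endian host this is equivalent to copying the integer's bytes and reversing them, which is the shape of the fix described above.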