Alchemy-item: (ID = 1153) Iceberg staging hub commit 1/6 - c5a69de3d1021073c13a99e1c7c6d6fcce355178
refactor: Move toValues from InPredicate.cpp to Filter.h
The toValues function removes duplicate values from a vector and
returns them in a std::vector. It was used to build an InPredicate,
and it will also be needed for building NOT IN filters for the Iceberg
equality delete read, so it is moved from
velox/functions/prestosql/InPredicate.cpp to velox/type/Filter.h.
This commit also renames it to deDuplicateValues to make its purpose
easier to understand.
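The renamed helper can be sketched as follows. This is a minimal, self-contained illustration of value deduplication; the name deDuplicateValues comes from the commit message, but the signature and implementation here are assumptions, not the actual Velox code.

```cpp
#include <cassert>
#include <unordered_set>
#include <vector>

// Hypothetical sketch of deDuplicateValues: keep the first occurrence
// of each value and drop later duplicates, preserving input order.
template <typename T>
std::vector<T> deDuplicateValues(const std::vector<T>& values) {
  std::unordered_set<T> seen;
  std::vector<T> result;
  result.reserve(values.size());
  for (const auto& v : values) {
    // insert().second is true only for the first time a value is seen.
    if (seen.insert(v).second) {
      result.push_back(v);
    }
  }
  return result;
}
```

Such a deduplicated list is exactly what an IN (or NOT IN) predicate needs, since repeated values in the list do not change the predicate's result.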
feat(connector): Support reading Iceberg split with equality deletes
This commit introduces EqualityDeleteFileReader, which is used to read
Iceberg splits with equality delete files. The equality delete files
are read to construct domain filters or filter functions, which are
then evaluated in the base file readers.
When there is only one equality delete field, and that field is an
Iceberg identifier field, i.e. a non-floating-point primitive type,
the values are converted to a value list for a NOT IN domain filter,
with NULL treated separately. This domain filter is then pushed down
to the ColumnReaders to filter out unwanted rows before they are read
into Velox vectors. When the equality delete column is a nested
column, e.g. a sub-column of a struct or the key of a map, it may not
be in the base file's ScanSpec. We need to add/remove these subfields
to/from the SchemaWithId and ScanSpec recursively if they are not in
the ScanSpec already. A test is also added for this case.
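The single-field case can be illustrated with a small sketch. The struct below is a hypothetical stand-in for the pushed-down NOT IN domain filter, not the Velox Filter class: a row is dropped when its value appears in the delete set, and NULLs are tested separately because an equality comparison against a non-NULL delete value never matches NULL.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_set>
#include <vector>

// Hypothetical NOT IN filter built from a single-column equality
// delete file. Names and shape are assumptions for illustration only.
struct NotInFilter {
  std::unordered_set<int64_t> rejected; // values listed in the delete file
  bool rejectNull = false;              // true if the delete file contains NULL

  // Non-NULL values pass unless they appear in the delete set.
  bool testInt64(int64_t v) const { return rejected.count(v) == 0; }

  // NULL is handled separately from the value list.
  bool testNull() const { return !rejectNull; }
};
```

Pushing such a filter into the ColumnReaders lets the reader skip deleted rows before materializing them into Velox vectors, rather than filtering afterwards.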
If there is more than one equality delete field, or the field is not
an Iceberg identifier field, the values are converted to a typed
expression in conjunction-of-disjunctions form. This expression is
evaluated as the remaining filter function after the rows are read
into the Velox vectors. Note that this currently works only for
Presto, as the "neq" function is not registered by Spark. See https://github.com/facebookincubator/issues/12667
Note that this commit only supports integral types. VARCHAR and
VARBINARY need to be supported in future commits (see
facebookincubator#12664).
Co-authored-by: Naveen Kumar Mahadevuni <Naveen.Mahadevuni@ibm.com>
Alchemy-item: (ID = 1153) Iceberg staging hub commit 2/6 - 14edb98c67f1c572a5f40682923795bd5b08e7c3
Support inserting data into Iceberg tables.
Add Iceberg partition transforms.
Co-authored-by: Chengcheng Jin <Chengcheng.Jin@ibm.com>
Add NaN statistics to parquet writer.
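Iceberg tracks per-column NaN value counts in its data file metrics, which is the kind of statistic the Parquet writer change above collects. The helper below is a hypothetical sketch of gathering that count for a floating-point column, not the actual dwio/Parquet statistics API.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Count NaN values in a double column; Iceberg records this as
// nan_value_counts so readers can reason about NaN-sensitive
// predicates (NaN compares unequal to everything, including itself).
int64_t countNaNs(const std::vector<double>& column) {
  int64_t nanCount = 0;
  for (double v : column) {
    if (std::isnan(v)) {
      ++nanCount;
    }
  }
  return nanCount;
}
```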
Collect Iceberg data file statistics in dwio.
Integrate Iceberg data file statistics and add a unit test.
Support writing field_id to the Parquet metadata SchemaElement.
Implement Iceberg sort order
Add clustered Iceberg writer mode.
Fix Parquet writer unit test
Add IcebergConnector
Fix unit test error
Resolve conflict
Resolve conflict
Fix test build issue
Fix crash