Core: Interface based DataFile reader and writer API #12298


Open · pvary wants to merge 10 commits into main from file_Format_api_without_base

Conversation

@pvary pvary commented Feb 17, 2025

Here is what the PR does:

  • Created 3 interface classes which are implemented by the file formats:
    • ReadBuilder - Builder for reading data from data files
    • AppenderBuilder - Builder for writing data to data files
    • ObjectModel - Provides ReadBuilders and AppenderBuilders for a specific data file format and object model pair
  • Updated the Parquet, Avro, and ORC implementations for these interfaces, and deprecated the old reader/writer APIs
  • Created interface classes which will be used by the actual readers/writers of the data files:
    • AppenderBuilder - Builder for writing a file
    • DataWriterBuilder - Builder for generating a data file
    • PositionDeleteWriterBuilder - Builder for generating a position delete file
    • EqualityDeleteWriterBuilder - Builder for generating an equality delete file
    • No ReadBuilder here - the file format reader builder is reused
  • Created a WriterBuilder class which implements the interfaces above (AppenderBuilder/DataWriterBuilder/PositionDeleteWriterBuilder/EqualityDeleteWriterBuilder) based on a provided file format specific AppenderBuilder
  • Created an ObjectModelRegistry which stores the available ObjectModels and from which engines and users can request the readers (ReadBuilder) and writers (AppenderBuilder/DataWriterBuilder/PositionDeleteWriterBuilder/EqualityDeleteWriterBuilder); see the sketch after this list
  • Created the appropriate ObjectModels:
    • GenericObjectModels - for reading and writing Iceberg Records
    • SparkObjectModels - for reading (vectorized and non-vectorized) and writing Spark InternalRow/ColumnarBatch objects
    • FlinkObjectModels - for reading and writing Flink RowData objects
    • An arrow object model is also registered for vectorized reads of Parquet files into Arrow ColumnarBatch objects
  • Updated the production code where the reading and writing happens to use the ObjectModelRegistry and the new reader/writer interfaces to access data files
  • Kept the testing code intact to ensure that the new API/code is not breaking anything
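
To make the intended call sites concrete, here is a minimal, hypothetical sketch of how an engine could obtain a reader and a writer through the ObjectModelRegistry. The method names, signatures, and the "generic" object model name are illustrative only; the exact API shifted across revisions of this PR (earlier revisions used DataFileServiceRegistry, later ones FileAccessFactoryRegistry). The intent is that engine code only names the file format and the object model, and the registry returns the matching format-specific builder.

  // Hypothetical sketch - not the final API. Reading Iceberg Records through the registry:
  CloseableIterable<Record> records =
      ObjectModelRegistry.readBuilder(
              FileFormat.PARQUET,      // format of the data file
              "generic",               // object model name (Iceberg Records), illustrative
              inputFile)               // org.apache.iceberg.io.InputFile
          .project(projectionSchema)
          .split(start, length)
          .filter(residualExpression)
          .build();

  // Writing Records to a new data file:
  DataWriter<Record> writer =
      ObjectModelRegistry.writerBuilder(FileFormat.PARQUET, "generic", outputFile)
          .schema(tableSchema)
          .build();
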

public Key(FileFormat fileFormat, String dataType, String builderType) {
  this.fileFormat = fileFormat;
  this.dataType = dataType;
  this.builderType = builderType;
Member

I was thinking of defining the default one using a priority (int) based approach and letting the one with the highest priority be the default. WDYT?

Contributor Author

@pvary pvary Feb 19, 2025

We have a concrete example for this: the Comet vectorized Parquet reader, selected via spark.sql.iceberg.parquet.reader-type.

I think it is good if the reader/writer choice is a conscious decision, not something that happens based on some behind-the-scenes algorithm.

Contributor

+1 for simplicity. This code should not determine things like whether Comet is used. This should have a single purpose, which is to standardize how object models plug in.

Contributor Author

Moved the config to properties, and the builder method will create the different readers based on this config
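
As a rough illustration (the property key comes from the Comet example above; the helper methods and the surrounding context are invented for this sketch):

  // Hypothetical sketch: the object model's builder method inspects the reader-type
  // property and creates the matching Parquet read builder, instead of the registry
  // choosing a "default" implementation by priority.
  String readerType = properties.getOrDefault("spark.sql.iceberg.parquet.reader-type", "ICEBERG");
  ReadBuilder builder =
      "COMET".equalsIgnoreCase(readerType)
          ? cometReadBuilder(inputFile)      // invented helper for the sketch
          : icebergReadBuilder(inputFile);   // invented helper for the sketch
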

import org.apache.iceberg.io.FileAppender;

/** Builder API for creating {@link FileAppender}s. */
public interface AppenderBuilder extends InternalData.WriteBuilder {
Member

I wonder why AppenderBuilder has a base interface but the other builders don't.
I guess it might help to have a common DataFileIoBuilder interface defining the common builder attributes (table, schema, properties, meta). It's a bit of an "adventure in Java generics", but doable.
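
For reference, a rough sketch of what such a self-typed base interface could look like (names are illustrative; this is not code from the PR):

  // Sketch only: a common base interface with a self-referencing type parameter so that
  // the concrete builders keep their own type through the fluent chain.
  interface DataFileIoBuilder<B extends DataFileIoBuilder<B>> {
    B table(Table table);                  // org.apache.iceberg.Table
    B schema(Schema schema);               // org.apache.iceberg.Schema
    B set(String property, String value);  // reader/writer properties
    B meta(String key, String value);      // file metadata
  }

  interface AppenderBuilder<B extends AppenderBuilder<B>> extends DataFileIoBuilder<B> {
    <D> FileAppender<D> build() throws IOException;
  }
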

Contributor Author

@pvary pvary Feb 19, 2025

If you take a look at the other PRs (#12164, #12069), you can see that I first took that adventurous route, but the result was too many classes/interfaces and casts.

This PR aims for the minimal set of changes, and InternalData.WriteBuilder was already introduced to Iceberg by #12060. We either need to widen that interface or inherit from it here.

Contributor

I'm also confused by this inheritance. We're extending but overriding everything and it's not clear to me what we really gain by going with this approach. It looks like it ends up as a completely different builder that produces the same build result.

Contributor Author

The goal with the PR was to show the minimal changes required to make the idea work.
Either we create a different builder class for the InternalData.WriteBuilder and the DataFile.WriteBuilder, or we need inheritance between the interfaces.

Based on our discussion below we might end up using a different strategy, so revisit this comment later.

Comment on lines 91 to 103
return DataFileServiceRegistry.read(
        task.file().format(), Record.class.getName(), input, fileProjection, partition)
    .split(task.start(), task.length())
    .caseSensitive(caseSensitive)
    .reuseContainers(reuseContainers)
    .filter(task.residual())
    .build();
Member

I like these simplifications!

@pvary pvary force-pushed the file_Format_api_without_base branch 2 times, most recently from c528a52 to 9975b4f on February 20, 2025 at 09:45
@pvary pvary changed the title from "WIP: Interface based FileFormat API" to "WIP: Interface based DataFile reader and writer API" on Feb 20, 2025
Contributor

@liurenjie1024 liurenjie1024 left a comment

Thanks @pvary for this proposal, I left some comments.


/** Enables reusing the containers returned by the reader. Decreases pressure on GC. */
@Override
default ReaderBuilder reuseContainers() {
Contributor

It seems this should not be here? These are Parquet reader specific.

Contributor Author

It is also used by Avro.
See:

this.reuseContainers = reuseContainers;

* @param rowType of the native input data
* @return {@link DataWriterBuilder} for building the actual writer
*/
public static <S> DataWriterBuilder dataWriterBuilder(
Contributor

I don't quite understand in which case we need this? I think append would be enough.

Contributor Author

I will check this. We might be able to remove this.

Contributor Author

With the current approach, the file format API implementation creates the appender, and this PR creates the writers for the different data/delete files.

* @return {@link AppenderBuilder} for building the actual writer
*/
public static <S, B extends EqualityDeleteWriterBuilder<B>>
EqualityDeleteWriterBuilder<B> equalityDeleteWriterBuilder(
Contributor

I don't think the file format should consider equality deletion/pos deletion here.

Contributor Author

The current Avro positional delete writer behaves differently from the Parquet/ORC positional delete writers.
For positional delete files, the schema provided to the Avro writer should omit the PATH and POS fields and only needs the actual table schema. The writer handles the PATH/POS fields with static code:

public void write(PositionDelete<D> delete, Encoder out) throws IOException {
  PATH_WRITER.write(delete.path(), out);
  POS_WRITER.write(delete.pos(), out);
  rowWriter.write(delete.row(), out);
}

The Parquet/ORC positional delete writers behave in the same way as each other and expect the same input.

If we are ready for a more invasive change, we can harmonize the writers.
I aimed for a minimal changeset to make the PR easier to accept.

Contributor Author

The appender doesn't need to know about these, but the file formats and the writer implementations need them.

* issues.
*/
private static final class Registry {
private static final Map<Key, ReaderService> READ_BUILDERS = Maps.newConcurrentMap();
Contributor

This is more of a convention problem; I think maybe we just need to store a FileFormatService in the registry?

Contributor Author

Sometimes we don't have writers (Arrow), or we have multiple readers (vectorized/non-vectorized). Also, Parquet has the Comet reader. So I kept the writers and the readers separate.

/** Key used to identify readers and writers in the {@link DataFileServiceRegistry}. */
public static class Key {
  private final FileFormat fileFormat;
  private final String dataType;
Contributor

Is this for things like Arrow or InternalRow?

Contributor Author

Yeah, currently we have:

  • Record - generic readers/writers
  • ColumnarBatch (arrow) - arrow
  • RowData - Flink
  • InternalRow - Spark
  • ColumnarBatch (spark) - Spark batch

pvary commented Feb 21, 2025

I will start to collect the differences here between the different writer types (appender/dataWriter/equalityDeleteWriter/positionalDeleteWriter) for reference:

  • The writer context is different between delete and data files. It contains TableProperties/configurations which can differ between delete and data files, for example for Parquet: RowGroupSize/PageSize/PageRowLimit/DictSize/Compression, etc. For ORC and Avro we have some similar configs; see the sketch after this list
  • Specific writer functions for position deletes to write out the PositionDelete records
  • A positional delete PathTransformFunction to convert the writer's data type for the path to the file format's data type
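
A small illustration of the first point (the TableProperties constants exist today; the isDeleteFile flag and the surrounding context are invented for this sketch):

  // The writer context has to pick the delete-specific table property for delete files
  // and the regular one for data files (constants from org.apache.iceberg.TableProperties).
  String rowGroupSizeKey =
      isDeleteFile
          ? TableProperties.DELETE_PARQUET_ROW_GROUP_SIZE_BYTES  // "write.delete.parquet.row-group-size-bytes"
          : TableProperties.PARQUET_ROW_GROUP_SIZE_BYTES;        // "write.parquet.row-group-size-bytes"
  String compressionKey =
      isDeleteFile
          ? TableProperties.DELETE_PARQUET_COMPRESSION
          : TableProperties.PARQUET_COMPRESSION;
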

import org.apache.iceberg.io.DataWriter;

/** Builder API for creating {@link DataWriter}s. */
public interface DataWriterBuilder {
Contributor

Why not put the builder interface into the DataWriter class and put it in the same package? It seems odd to me that we're introducing this new datafile package.

Contributor Author

The classes which have to be implemented by the file formats are kept in the io package, but I moved the others to the data package.

rdblue commented Feb 22, 2025

While I think the goal here is a good one, the implementation looks too complex to be workable in its current form.

The primary issue that we currently have is adapting object models (like Iceberg's internal StructLike, Spark's InternalRow, or Flink's RowData) to file formats so that you can separately write object-model-to-format glue code and have it work throughout the support for an engine. I think a diff from the InternalData PR demonstrates it pretty well:

-    switch (format) {
-      case AVRO:
-        AvroIterable<ManifestEntry<F>> reader =
-            Avro.read(file)
-                .project(ManifestEntry.wrapFileSchema(Types.StructType.of(fields)))
-                .createResolvingReader(this::newReader)
-                .reuseContainers()
-                .build();
+    CloseableIterable<ManifestEntry<F>> reader =
+        InternalData.read(format, file)
+            .project(ManifestEntry.wrapFileSchema(Types.StructType.of(fields)))
+            .reuseContainers()
+            .build();
 
-        addCloseable(reader);
+    addCloseable(reader);
 
-        return CloseableIterable.transform(reader, inheritableMetadata::apply);
+    return CloseableIterable.transform(reader, inheritableMetadata::apply);
-
-      default:
-        throw new UnsupportedOperationException("Invalid format for manifest file: " + format);
-    }

This shows:

  • Rather than a switch, the format is passed to create the builder
  • There is no longer a callback passed to create readers for the object model (createResolvingReader)

In this PR, there are a lot of other changes as well. I'm looking at one of the simpler Spark cases in the row reader.

The builder is initialized from DataFileServiceRegistry and now requires a format, class name, file, projection, and constant map:

    return DataFileServiceRegistry.readerBuilder(
            format, InternalRow.class.getName(), file, projection, idToConstant)

There are also new static classes in the file. Each creates a new service and each service creates the builder and object model:

  public static class AvroReaderService implements DataFileServiceRegistry.ReaderService {
    @Override
    public DataFileServiceRegistry.Key key() {
      return new DataFileServiceRegistry.Key(FileFormat.AVRO, InternalRow.class.getName());
    }

    @Override
    public ReaderBuilder builder(
        InputFile inputFile,
        Schema readSchema,
        Map<Integer, ?> idToConstant,
        DeleteFilter<?> deleteFilter) {
      return Avro.read(inputFile)
          .project(readSchema)
          .createResolvingReader(schema -> SparkPlannedAvroReader.create(schema, idToConstant));
    }

The createResolvingReader line is still there, just moved into its own service class instead of in branches of a switch statement.

In addition, there are now a lot more abstractions:

  • A builder for creating an appender for a file format
  • A builder for creating a data file writer for a file format
  • A builder for creating an equality delete writer for a file format
  • A builder for creating a position delete writer for a file format
  • A builder for creating a reader for a file format
  • A "service" registry (what is a service?)
  • A "key"
  • A writer service
  • A reader service

I think that the next steps are to focus on making this a lot simpler, and there are some good ways to do that:

  • Focus on removing boilerplate and hiding the internals. For instance, Key, if needed, should be an internal abstraction and not complexity that is exposed to callers
  • The format-specific data and delete file builders typically wrap an appender builder. Is there a way to handle just the reader builder and appender builder?
  • Is the extra "service" abstraction helpful?
  • Remove ServiceLoader and use a simpler solution. I think that formats could simply register themselves like we do for InternalData. I think it would be fine to have a trade-off that Iceberg ships with a list of known formats that can be loaded, and if you want to replace that list it's at your own risk. (See the sketch after this list.)
  • Standardize more across the builders for FileFormat. How idToConstant is handled is a good example. That should be passed to the builder instead of making the whole API more complicated. Projection is the same.
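
For the ServiceLoader point, here is a rough sketch of what InternalData-style self-registration could look like (ObjectModelRegistry comes from this PR's description; the registerObjectModel method, the object model classes, and the class names in the list are invented for illustration; DynMethods is an existing Iceberg utility):

  // Each format module exposes a static hook that registers its object models:
  public class ParquetObjectModels {                                            // invented name
    public static void register() {
      ObjectModelRegistry.registerObjectModel(new GenericParquetObjectModel()); // invented
      ObjectModelRegistry.registerObjectModel(new SparkParquetObjectModel());   // invented
    }
  }

  // The registry replaces the ServiceLoader lookup with a fixed list of known formats:
  private static final List<String> KNOWN_OBJECT_MODELS =
      ImmutableList.of(
          "org.apache.iceberg.parquet.ParquetObjectModels",  // invented class names
          "org.apache.iceberg.avro.AvroObjectModels",
          "org.apache.iceberg.orc.ORCObjectModels");

  static {
    for (String impl : KNOWN_OBJECT_MODELS) {
      try {
        DynMethods.builder("register").impl(impl).buildStaticChecked().invoke();
      } catch (NoSuchMethodException e) {
        // the format is not on the classpath; skip it
      }
    }
  }
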

pvary commented Feb 24, 2025

While I think the goal here is a good one, the implementation looks too complex to be workable in its current form.

I'm happy that we agree on the goals. I created a PR to start the conversation. If there are willing reviewers, we can introduce more invasive changes to achieve a better API. I'm all for it!

The primary issue that we currently have is adapting object models (like Iceberg's internal StructLike, Spark's InternalRow, or Flink's RowData) to file formats so that you can separately write object-model-to-format glue code and have it work throughout the support for an engine.

I think we need to keep these direct transformations to prevent the performance loss that would be caused by multiple transformations (object model -> common model -> file format).

We have a matrix of transformations which we need to encode somewhere:

Source   Target
Parquet  StructLike
Parquet  InternalRow
Parquet  RowData
Parquet  Arrow
Avro     ...
ORC      ...

[..]

  • Rather than a switch, the format is passed to create the builder
  • There is no longer a callback passed to create readers for the object model (createResolvingReader)

The InternalData reader has one advantage over the data file readers/writers: the internal object model is fixed for those readers/writers, while for the DataFile readers/writers we have multiple object models to handle.

[..]
I think that the next steps are to focus on making this a lot simpler, and there are some good ways to do that:

  • Focus on removing boilerplate and hiding the internals. For instance, Key, if needed, should be an internal abstraction and not complexity that is exposed to callers

If we allow adding new builders for the file formats, we can remove a good chunk of the boilerplate code. Let me see how this would look.

  • The format-specific data and delete file builders typically wrap an appender builder. Is there a way to handle just the reader builder and appender builder?

We need to refactor the Avro positional delete writer for this, or add a positionalWriterFunc. We also need to consider the format-specific configurations, which are different for the appenders and the delete files (DELETE_PARQUET_ROW_GROUP_SIZE_BYTES vs. PARQUET_ROW_GROUP_SIZE_BYTES).

  • Is the extra "service" abstraction helpful?

If we are ok with having a new Builder for the readers/writers, then we don't need the service. It was needed to keep the current APIs and the new APIs compatible.

  • Remove ServiceLoader and use a simpler solution. I think that formats could simply register themselves like we do for InternalData. I think it would be fine to have a trade-off that Iceberg ships with a list of known formats that can be loaded, and if you want to replace that list it's at your own risk.

Will do

  • Standardize more across the builders for FileFormat. How idToConstant is handled is a good example. That should be passed to the builder instead of making the whole API more complicated. Projection is the same.

Will see what could be achieved

@pvary pvary force-pushed the file_Format_api_without_base branch 5 times, most recently from c488d32 to 71ec538 on February 25, 2025 at 16:53
@pvary pvary force-pushed the file_Format_api_without_base branch 2 times, most recently from b5bad2c to 7a5f3a0 on May 6, 2025 at 12:54
@pvary pvary force-pushed the file_Format_api_without_base branch 3 times, most recently from 11aa6ae to c6929b1 on May 26, 2025 at 14:45
@pvary pvary force-pushed the file_Format_api_without_base branch from c6929b1 to 05309bc on May 26, 2025 at 15:16
@pvary pvary force-pushed the file_Format_api_without_base branch from 98b7c24 to 8ebe0e7 on May 27, 2025 at 14:02
@pvary pvary force-pushed the file_Format_api_without_base branch from 7f25300 to 546bd6b on June 16, 2025 at 13:10
@github-actions github-actions bot added the API label Jun 16, 2025
stevenzwu commented Jun 23, 2025

Right now, there are two questions that Peter and I are discussing offline.

  1. Is it necessary to have a globally shared FileAccessFactoryRegistry in the iceberg-data module and have engines register the factory classes (e.g. FlinkObjectModels)?

The usage pattern in the engine (like Flink) is not much different between the two models (with or without a registry).

Here is the usage pattern with the registry model:

      WriteBuilder<?, RowType, RowData> builder =
          FileAccessFactoryRegistry.writeBuilder(
              format,
              FlinkObjectModels.FLINK_OBJECT_MODEL,
              EncryptedFiles.plainAsEncryptedOutput(outputFile));

Here is the usage pattern without the registry model (from Peter's testing PR #13257):

      WriteBuilder<?, RowType, RowData> builder =
          FlinkFileAccessor.INSTANCE.writeBuilder(
              format, EncryptedFiles.plainAsEncryptedOutput(outputFile));

From my perspective, I don't see much value in sharing the Flink and Spark factory registration globally. Each engine should know the factory it should use.

  2. Peter was thinking about consolidating engine integrations in the file format module.

Let's say we want to add support for the Vortex file format. Here are the steps needed after this effort:

  1. [api] add vortex to FileFormat enum
  2. [vortex] implement VortexFileAccessFactory
  3. [flink/spark] implement vortex row reader and writer
  4. [flink/spark] add the factory implementation and register to the global factory (if keeping the registry model)

Peter thought steps 2-4 could be consolidated into the iceberg-vortex module. But it means that the iceberg-vortex module would depend on the iceberg-flink and iceberg-spark modules.

From my perspective, it does not seem right to have a file format module (like iceberg-vortex) depend on an engine module. The reverse dependency model (engine depends on file format module) makes more sense to me. The engine integration code for a file format (readers for engine data types like Flink RowData) should exist in the engine module. I know it means that supporting a new file format would require changes in the engine modules, but that would be the same for engines living outside the Iceberg repo.

@pvary pvary force-pushed the file_Format_api_without_base branch from d3b18f3 to cae699c on June 24, 2025 at 11:26
pvary commented Jun 24, 2025

Let's say we want to add support for the Vortex file format. Here are the steps needed after this effort:

  1. [api] add vortex to FileFormat enum
  2. [vortex] implement VortexFileAccessFactory
  3. [flink/spark] implement vortex row reader and writer
  4. [flink/spark] add the factory implementation and register to the global factory (if keeping the registry model)

Peter thought steps 2-4 could be consolidated into the iceberg-vortex module. But it means that the iceberg-vortex module would depend on the iceberg-flink and iceberg-spark modules.

From my perspective, it does not seem right to have a file format module (like iceberg-vortex) depend on an engine module. The reverse dependency model (engine depends on file format module) makes more sense to me. The engine integration code for a file format (readers for engine data types like Flink RowData) should exist in the engine module. I know it means that supporting a new file format would require changes in the engine modules, but that would be the same for engines living outside the Iceberg repo.

There is a part of the code where both Vortex/Spark and Vortex/Flink are needed. We can't get away from it.
We have:

  1. Iceberg File Format API + Vortex specific code for the readers/writers
  2. Vortex + Spark specific code for the Spark readers
  3. Vortex + Flink specific code for the Flink readers

We can add 2 to Spark, and add 3 to Flink, or we can decouple Spark and Flink and have an independent module for 2 and 3. Another option is to merge 1, 2, 3 together.

Maybe the best would be keeping all of them separate, but if the size is not too big, I would prefer to put all of them into a single module and share them in a single jar.

If we keep Vortex/Spark, Vortex/Flink, or for that matter ORC/Spark and ORC/Flink, in their own separate modules, then the registry-based solution enables us to avoid changing the Spark/Flink code when we decide to add Vortex or remove ORC support.

@github-actions github-actions bot removed the API label Jun 24, 2025
@pvary pvary force-pushed the file_Format_api_without_base branch from 9d3d2f6 to 820cd3e on June 27, 2025 at 10:34
@pvary pvary force-pushed the file_Format_api_without_base branch from 820cd3e to 0c35110 on June 27, 2025 at 10:35