
Conversation

@luoyuxia (Contributor) commented Dec 22, 2025

Purpose

Linked issue: close #2224

Brief change log

  • In TieringCommitOperator, first prepare-commit the log offsets to the Fluss cluster, which writes a file storing the log offsets
  • Then record the file path in the lake snapshot property and commit the snapshot
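The two steps above can be sketched as follows. All class, method, and property names here (e.g. `TieringCommitSketch`, `fluss.offsets-file-path`) are hypothetical stand-ins; the real logic lives in Fluss's TieringCommitOperator and RPC layer:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the two-phase commit flow described above.
public class TieringCommitSketch {

    // Phase 1: ask the Fluss cluster to persist the per-bucket log offsets;
    // the cluster writes them to a file and returns the file path.
    static String prepareCommitLogOffsets(Map<Integer, Long> bucketOffsets) {
        // Stand-in for the prepare-commit RPC; the path format is made up.
        return "/fluss/lake/offsets/offsets-" + bucketOffsets.size() + ".json";
    }

    // Phase 2: record the returned file path in the lake snapshot
    // properties and commit the snapshot.
    static Map<String, String> commitSnapshot(String offsetsFilePath) {
        Map<String, String> snapshotProperties = new HashMap<>();
        // Hypothetical property key.
        snapshotProperties.put("fluss.offsets-file-path", offsetsFilePath);
        return snapshotProperties;
    }

    public static void main(String[] args) {
        Map<Integer, Long> offsets = new HashMap<>();
        offsets.put(0, 100L);
        offsets.put(1, 250L);
        String path = prepareCommitLogOffsets(offsets);
        System.out.println(commitSnapshot(path));
    }
}
```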

Tests

Existing test

API and Format

Documentation

@luoyuxia luoyuxia force-pushed the allow-commit-offset-to-fluss branch from 33969e9 to 5442462 on December 22, 2025 08:45
@luoyuxia luoyuxia requested a review from Copilot December 22, 2025 08:47

Copilot AI left a comment


Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

@luoyuxia luoyuxia force-pushed the allow-commit-offset-to-fluss branch from 5442462 to 68a5039 on December 22, 2025 12:49
@luoyuxia luoyuxia marked this pull request as ready for review December 22, 2025 13:00
@luoyuxia luoyuxia requested a review from Copilot December 22, 2025 13:00

Copilot AI left a comment


Pull request overview

Copilot reviewed 30 out of 30 changed files in this pull request and generated 12 comments.



@luoyuxia luoyuxia force-pushed the allow-commit-offset-to-fluss branch 2 times, most recently from 06af001 to 3889069 on December 22, 2025 13:22
@luoyuxia luoyuxia changed the title from "Allow commit offset to fluss" to "[lake] Record a file path storing log offsets in lake snapshot property" on Dec 23, 2025
@luoyuxia luoyuxia marked this pull request as draft December 24, 2025 03:07
@luoyuxia luoyuxia force-pushed the allow-commit-offset-to-fluss branch from a6c7f69 to 747e91b on December 24, 2025 12:35
@luoyuxia luoyuxia requested a review from Copilot December 24, 2025 12:36
@luoyuxia luoyuxia force-pushed the allow-commit-offset-to-fluss branch from 747e91b to f31d6f5 on December 24, 2025 12:40

Copilot AI left a comment


Pull request overview

Copilot reviewed 42 out of 42 changed files in this pull request and generated 3 comments.




Copilot AI left a comment


Pull request overview

Copilot reviewed 43 out of 43 changed files in this pull request and generated 6 comments.



@luoyuxia luoyuxia force-pushed the allow-commit-offset-to-fluss branch from 37c8016 to f7a09fa on December 25, 2025 02:06
@luoyuxia luoyuxia marked this pull request as ready for review December 25, 2025 02:06
@luoyuxia luoyuxia force-pushed the allow-commit-offset-to-fluss branch from f7a09fa to f470b32 on December 25, 2025 02:27
@luoyuxia (Contributor, Author) commented:

@wuchong Could you please help review this PR? The PR also handles backward compatibility when using v2 to serialize the lake table snapshot.

@luoyuxia luoyuxia force-pushed the allow-commit-offset-to-fluss branch from f470b32 to 0451cbf on December 26, 2025 02:19

message CommitLakeTableSnapshotRequest {
repeated PbLakeTableSnapshotInfo tables_req = 1;
message PrepareCommitLakeTableSnapshotRequest {
Member commented:

PrepareLakeTableSnapshotRequest

Prepare and commit are two different phases.
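The suggested rename would separate the two phases along these lines (a sketch only; the field wiring is illustrative, though PbTableBucketOffsets and PbLakeTableSnapshotInfo appear elsewhere in this PR):

```protobuf
// Phase 1: ask the cluster to persist the log offsets (renamed per the comment).
message PrepareLakeTableSnapshotRequest {
  repeated PbTableBucketOffsets tables_req = 1;
}

// Phase 2: commit the snapshot that references the persisted offsets file.
message CommitLakeTableSnapshotRequest {
  repeated PbLakeTableSnapshotInfo tables_req = 1;
}
```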

Comment on lines +456 to +457
message PrepareCommitLakeTableSnapshotResponse {
repeated PbPrepareCommitLakeTableRespForTable prepare_commit_lake_table_resp = 1;
Member commented:

ditto

optional int64 max_timestamp = 6;
}

message PbPrepareCommitLakeTableRespForTable {
Member commented:

Add a table_id field: since a PrepareCommitLakeTableSnapshotRequest carries multiple table ids, we need to distinguish which table each PbPrepareCommitLakeTableRespForTable belongs to.

Comment on lines +986 to +987
optional int32 error_code = 2;
optional string error_message = 3;
Member commented:

Let's follow the convention of putting error_code and error_message as the first two fields, because we will add more fields at the end.
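Applying that convention, together with the table_id field suggested in the earlier comment, the message might look like this (field numbers are illustrative):

```protobuf
message PbPrepareCommitLakeTableRespForTable {
  // error fields first, so new payload fields can keep being appended
  optional int32 error_code = 1;
  optional string error_message = 2;
  // identifies which table this response entry belongs to
  optional int64 table_id = 3;
}
```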

optional string error_message = 3;
}

message PbTableBucketOffsets {
Member commented:

Rename to PbTableOffsets? This makes it clearer that it is a set of offsets for a table.
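Under the suggested name, the message would read as a per-table set of offsets, e.g. (the bucket-level message name PbBucketOffset here is hypothetical):

```protobuf
message PbTableOffsets {
  optional int64 table_id = 1;
  // one entry per bucket of the table
  repeated PbBucketOffset bucket_offsets = 2;
}
```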

* @return the LakeTableSnapshot
*/
public LakeTableSnapshot getLatestTableSnapshot() throws Exception {
public LakeTableSnapshot getLatestTableSnapshot() throws IOException {
Member commented:

Consider renaming this method to getOrReadLatestTableSnapshot to explicitly indicate that it may perform I/O operations (e.g., reading from remote storage) when the snapshot isn't already available in memory. This makes the method’s behavior clearer to callers and improves code readability.

Comment on lines +66 to +74
// Version 1: ZK node contains full snapshot data, use LakeTableSnapshotJsonSerde
LakeTableSnapshotJsonSerde.INSTANCE.serialize(
lakeTable.getLatestTableSnapshot(), generator);
} else {
generator.writeStartObject();
generator.writeNumberField(VERSION_KEY, CURRENT_VERSION);

generator.writeArrayFieldStart(LAKE_SNAPSHOTS);
for (LakeTable.LakeSnapshotMetadata lakeSnapshotMetadata :
Member commented:

It would be better to extract separate serializeV1() and serializeV2() methods to improve maintainability and readability. The same applies to deserialization.
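The suggested extraction could look like the following sketch. The JSON shapes, class name, and payload handling are illustrative stand-ins, not the real Jackson-based serde:

```java
// Hypothetical sketch of splitting the version branches into
// serializeV1/serializeV2 methods.
public class LakeTableSerdeSketch {
    static final int CURRENT_VERSION = 2;

    // Dispatch once at the top; each format lives in its own method.
    static String serialize(int version, String payload) {
        return version == 1 ? serializeV1(payload) : serializeV2(payload);
    }

    // Version 1: ZK node contains the full snapshot data.
    static String serializeV1(String payload) {
        return "{\"version\":1,\"snapshot\":\"" + payload + "\"}";
    }

    // Version 2: ZK node stores only snapshot metadata
    // (e.g. the path of the file holding the log offsets).
    static String serializeV2(String payload) {
        return "{\"version\":2,\"lake_snapshots\":[\"" + payload + "\"]}";
    }

    public static void main(String[] args) {
        System.out.println(serialize(CURRENT_VERSION, "offsets-file-path"));
    }
}
```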

* @see LakeTableJsonSerde for the current format (version 2) that uses this serde for legacy
* compatibility
*/
public class LakeTableSnapshotJsonSerde
Member commented:

Consider renaming this to LakeTableSnapshotLegacyJsonSerde (currently there are too many LakeXxxSerde classes).

}

@Test
void testBackwardCompatibility() {
Member commented:

We still need this for backward compatibility for the max_timestamp and log_start_offset fields.

}

@Override
public CompletableFuture<PrepareCommitLakeTableSnapshotResponse> prepareCommitLakeTableSnapshot(
Member commented:

move this method before commitLakeTableSnapshot.



Successfully merging this pull request may close these issues.

Record a file path storing log offsets in lake snapshot property
