Conversation

@comphead (Contributor) commented Apr 10, 2025

Which issue does this PR close?

Closes #.

Rationale for this change

Sometimes arrow-rs users do not need schema checks on a RecordBatch because they rely on their own schema checks and schema validity rules. However, it is currently not possible to override the schema without a performance impact.

Examples: apache/datafusion#15162
apache/datafusion#15603

The proposed with_schema_unchecked method for RecordBatch overrides the schema without additional schema checks, shifting all schema compatibility responsibilities to the caller.
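For illustration, a minimal sketch of how a caller might use the proposed method, assuming the Result-returning signature shown in the review diff below (a follow-up PR, #7405, linked at the end of this thread, later made the method unsafe):

use std::sync::Arc;

use arrow::array::{ArrayRef, Int32Array};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;

fn main() {
    let schema = Arc::new(Schema::new(vec![Field::new("a", DataType::Int32, false)]));
    let column: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3]));
    let batch = RecordBatch::try_new(schema, vec![column]).unwrap();

    // The replacement schema differs only by extra metadata, so it still "contains"
    // the old one; the caller, not the library, vouches for that compatibility.
    let new_schema = Arc::new(
        Schema::new(vec![Field::new("a", DataType::Int32, false)])
            .with_metadata([("origin".to_string(), "upstream".to_string())].into()),
    );
    let batch = batch.with_schema_unchecked(new_schema).unwrap();
    assert_eq!(batch.schema().metadata()["origin"], "upstream");
}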

What changes are included in this PR?

Are there any user-facing changes?

@github-actions bot added the arrow (Changes to the arrow crate) label Apr 10, 2025
@etseidl (Contributor) left a comment

Seems reasonable to me. Just left a few minor suggestions.

///
/// If the provided schema is not compatible with this [`RecordBatch`]'s columns, the runtime
/// behavior is undefined
pub fn with_schema_force(self, schema: SchemaRef) -> Result<Self, ArrowError> {

Maybe with_schema_unchecked would be better?

Comment on lines 362 to 364
/// Forcibly overrides the schema of this [`RecordBatch`]
/// without additional schema checks however bringing all the schema compatibility responsibilities
/// to the caller site.

Suggested change
/// Forcibly overrides the schema of this [`RecordBatch`]
/// without additional schema checks however bringing all the schema compatibility responsibilities
/// to the caller site.
/// Overrides the schema of this [`RecordBatch`]
/// without additional schema checks. Note, however, that this pushes all the schema compatibility responsibilities
/// to the caller site. In particular, the caller guarantees that `schema` is a superset
/// of the current schema as determined by [`Schema::contains`].
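As an aside, a small sketch of what the [`Schema::contains`] superset requirement means in practice, assuming arrow's nullability rules for field containment (a nullable field contains a non-nullable field of the same name and type, but not the other way around):

use arrow::datatypes::{DataType, Field, Schema};

fn main() {
    let original = Schema::new(vec![Field::new("a", DataType::Int32, false)]);
    let relaxed = Schema::new(vec![Field::new("a", DataType::Int32, true)]);

    // `relaxed` is a valid replacement for a batch declared with `original`,
    // but `original` would not be a valid replacement for `relaxed`.
    assert!(relaxed.contains(&original));
    assert!(!original.contains(&relaxed));
}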

@comphead (Author):

Nicely said!

}

#[test]
fn test_batch_with_force_schema() {

Could you please also add a check where the forced schema succeeds?
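For illustration, one hypothetical shape such a success-case check could take; the test name, field names, and values are made up, the imports are assumed from the surrounding test module, and the call assumes the Result-returning signature from the diff:

// (imports assumed in scope: Arc, ArrayRef, StringArray, DataType, Field, Schema, RecordBatch)
#[test]
fn test_batch_with_schema_unchecked_ok() {
    let schema = Arc::new(Schema::new(vec![Field::new("a", DataType::Utf8, false)]));
    let column: ArrayRef = Arc::new(StringArray::from(vec!["x", "y"]));
    let batch = RecordBatch::try_new(schema, vec![column]).unwrap();

    // A compatible replacement: same field, plus extra schema-level metadata.
    let new_schema = Arc::new(
        Schema::new(vec![Field::new("a", DataType::Utf8, false)])
            .with_metadata([("k".to_string(), "v".to_string())].into()),
    );
    let batch = batch.with_schema_unchecked(new_schema.clone()).unwrap();
    assert_eq!(batch.schema(), new_schema);
}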

// Wrong number of columns
let invalid_schema_more_cols = Schema::new(vec![
Field::new("a", DataType::Utf8, false),
Field::new("a", DataType::Int32, false),

Suggested change
Field::new("a", DataType::Int32, false),
Field::new("b", DataType::Int32, false),

? This triggers my OCD 😅

@comphead (Author) commented

Thanks @etseidl for the feedback. Addressed all the comments.

@comphead comphead changed the title feat: Adding with_schema_force method for RecordBatch feat: Adding with_schema_unchecked method for RecordBatch Apr 10, 2025
@etseidl (Contributor) left a comment

Thanks @comphead!

@comphead comphead merged commit 6f3a8f0 into apache:main Apr 10, 2025
28 checks passed
@tustvold (Contributor) commented Apr 11, 2025

I think this needs to either be reverted or made unsafe; it allows constructing invalid ArrayData with safe code...

Edit: PR to make this unsafe - #7405
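For context, a sketch of the soundness concern as it stood before #7405 (illustrative only, assuming the method performs no compatibility check, as its name suggests): safe code can install a schema whose declared types disagree with the underlying buffers, and downstream code that trusts batch.schema() may then misinterpret memory.

use std::sync::Arc;

use arrow::array::{ArrayRef, Int32Array};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;

fn main() {
    let schema = Arc::new(Schema::new(vec![Field::new("a", DataType::Int32, false)]));
    let column: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3]));
    let batch = RecordBatch::try_new(schema, vec![column]).unwrap();

    // Claims the column is Int64 even though the buffer holds Int32 values.
    // With a safe, unchecked override this compiles without any `unsafe`,
    // leaving the schema and the column data in disagreement.
    let lying = Arc::new(Schema::new(vec![Field::new("a", DataType::Int64, false)]));
    let _inconsistent = batch.with_schema_unchecked(lying);
}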
