[zephyr] Fix load_parquet memory: use ParquetFile, drop dataset API #4344
Merged
Commits (9):
- `2da0e77` Fix parquet reader memory: read row-group-by-row-group instead of dat… (ravwojdyla)
- `3a16feb` [zephyr] Use ParquetFile for all load_parquet paths, drop dataset API (ravwojdyla)
- `f22e6ba` [zephyr] Extract iter_parquet_row_groups, drop pyarrow.dataset from s… (ravwojdyla)
- `0592811` [zephyr] Add parquet reader benchmark (dataset vs row-group-by-row-gr… (ravwojdyla)
- `a82354f` [zephyr] Improve benchmark: incompressible data, --row-group-mb flag (ravwojdyla)
- `dd891da` [zephyr] Run each benchmark reader in a separate subprocess (ravwojdyla)
- `151542f` [zephyr] Benchmark scatter-style layout (shard_idx + chunk_idx grouping) (ravwojdyla)
- `252516d` [zephyr] Rewrite benchmark to match exact scatter layout (ravwojdyla)
- `ea21527` Remove row_filter from iter_parquet_row_groups, simplify API (github-actions[bot])
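
For context, here is a minimal sketch of the row-group-by-row-group pattern the commits describe, using only the public `pyarrow.parquet` API. This is an illustrative stand-in, not zephyr's actual `iter_parquet_row_groups`:

```python
# Illustrative sketch only -- NOT zephyr's iter_parquet_row_groups.
from collections.abc import Iterator

import pyarrow as pa
import pyarrow.parquet as pq


def iter_row_groups(path: str, columns: list[str] | None = None) -> Iterator[pa.Table]:
    """Yield one Table per row group instead of materializing the whole file."""
    pf = pq.ParquetFile(path)
    for i in range(pf.metadata.num_row_groups):
        yield pf.read_row_group(i, columns=columns)
```

Reading one row group at a time bounds peak memory by the size of the largest row group rather than the whole file, which is the memory fix the PR title refers to.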
New test file (+99 lines):

```python
# Copyright The Marin Authors
# SPDX-License-Identifier: Apache-2.0

"""Tests for parquet reader (load_parquet)."""

import pyarrow as pa
import pyarrow.parquet as pq

from zephyr.expr import ColumnExpr, CompareExpr, LiteralExpr
from zephyr.readers import InputFileSpec, load_parquet


def _write_test_parquet(path: str, records: list[dict], row_group_size: int = 2) -> None:
    """Write a parquet file with small row groups for testing."""
    table = pa.Table.from_pylist(records)
    pq.write_table(table, path, row_group_size=row_group_size)


RECORDS = [{"id": i, "name": f"row{i}", "score": float(i * 10)} for i in range(10)]


def test_load_parquet_plain(tmp_path):
    path = str(tmp_path / "data.parquet")
    _write_test_parquet(path, RECORDS)

    result = list(load_parquet(path))
    assert result == RECORDS


def test_load_parquet_columns(tmp_path):
    path = str(tmp_path / "data.parquet")
    _write_test_parquet(path, RECORDS)

    spec = InputFileSpec(path=path, columns=["id", "name"])
    result = list(load_parquet(spec))
    assert result == [{"id": r["id"], "name": r["name"]} for r in RECORDS]


def test_load_parquet_row_range(tmp_path):
    path = str(tmp_path / "data.parquet")
    _write_test_parquet(path, RECORDS, row_group_size=3)

    spec = InputFileSpec(path=path, row_start=2, row_end=7)
    result = list(load_parquet(spec))
    assert [r["id"] for r in result] == [2, 3, 4, 5, 6]


def test_load_parquet_filter(tmp_path):
    path = str(tmp_path / "data.parquet")
    _write_test_parquet(path, RECORDS)

    spec = InputFileSpec(
        path=path,
        filter_expr=CompareExpr(op="ge", left=ColumnExpr(name="score"), right=LiteralExpr(value=50.0)),
    )
    result = list(load_parquet(spec))
    assert all(r["score"] >= 50.0 for r in result)
    assert [r["id"] for r in result] == [5, 6, 7, 8, 9]


def test_load_parquet_filter_and_row_range(tmp_path):
    path = str(tmp_path / "data.parquet")
    _write_test_parquet(path, RECORDS, row_group_size=3)

    spec = InputFileSpec(
        path=path,
        row_start=1,
        row_end=8,
        filter_expr=CompareExpr(op="ge", left=ColumnExpr(name="score"), right=LiteralExpr(value=50.0)),
    )
    result = list(load_parquet(spec))
    # rows 1-7, then filtered to score >= 50 → ids 5, 6, 7
    assert [r["id"] for r in result] == [5, 6, 7]


def test_load_parquet_empty(tmp_path):
    path = str(tmp_path / "empty.parquet")
    table = pa.Table.from_pylist([], schema=pa.schema([("id", pa.int64())]))
    pq.write_table(table, path)

    result = list(load_parquet(path))
    assert result == []


def test_load_parquet_no_dataset_api(tmp_path, monkeypatch):
    """Verify that load_parquet does NOT import pyarrow.dataset."""
    import sys

    path = str(tmp_path / "data.parquet")
    _write_test_parquet(path, RECORDS)

    # Remove pyarrow.dataset from sys.modules and block re-import
    sys.modules.pop("pyarrow.dataset", None)
    monkeypatch.setitem(sys.modules, "pyarrow.dataset", None)

    # Should succeed without pyarrow.dataset
    result = list(load_parquet(path))
    assert len(result) == len(RECORDS)
```
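
The `row_start`/`row_end` window exercised by `test_load_parquet_row_range` can be served from row-group metadata alone, skipping groups that fall entirely outside the window. A hypothetical sketch of that approach (not zephyr's implementation):

```python
# Hypothetical sketch: serve a [row_start, row_end) window while reading
# only the row groups that overlap it. Not zephyr's actual code.
import pyarrow as pa
import pyarrow.parquet as pq


def read_row_range(path: str, row_start: int, row_end: int) -> pa.Table:
    pf = pq.ParquetFile(path)
    pieces, offset = [], 0
    for i in range(pf.metadata.num_row_groups):
        n = pf.metadata.row_group(i).num_rows
        lo, hi = offset, offset + n  # absolute row span of this group
        offset = hi
        if hi <= row_start or lo >= row_end:
            continue  # group entirely outside the requested window
        table = pf.read_row_group(i)
        # Trim to the overlap with [row_start, row_end).
        start = max(row_start - lo, 0)
        length = min(row_end, hi) - (lo + start)
        pieces.append(table.slice(start, length))
    if pieces:
        return pa.concat_tables(pieces)
    return pa.Table.from_pylist([], schema=pf.schema_arrow)
```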
Review comment:

`load_parquet` now reads each row group with `columns=columns` and applies `table.filter(pa_filter)` afterward. In pushed-down pipelines that combine filter + select (e.g. `.filter(col("score") > 70).select("id")`), `_compute_file_pushdown` can pass `columns=["id"]` while the predicate still references `score`, so this path drops the predicate column before filtering and causes `table.filter` to fail (or mis-evaluate). The prior `dataset.to_table(columns=..., filter=...)` flow did not have this ordering problem, because the scanner could read predicate columns without projecting them into the output.
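
One possible shape for a fix, sketched under the assumptions in the comment above: read the union of the projected columns and the predicate's columns, filter first, then project down. The helper name and the `filter_columns` parameter are hypothetical, not part of zephyr's API.

```python
# Hypothetical fix sketch for the ordering problem described above.
# filter_columns: column names referenced by pa_filter (assumed to be
# derivable from the zephyr filter expression).
def read_row_group_with_pushdown(pf, i, columns, pa_filter, filter_columns):
    # Read output columns plus any predicate-only columns (deduplicated).
    read_cols = list(dict.fromkeys([*columns, *filter_columns])) if columns else None
    table = pf.read_row_group(i, columns=read_cols)
    if pa_filter is not None:
        table = table.filter(pa_filter)  # predicate columns are present
    if columns is not None:
        table = table.select(columns)  # drop predicate-only columns
    return table
```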