spec: Variant lower/upper bounds #12658
Open · wants to merge 3 commits into `main` · Changes from 2 commits
4 changes: 4 additions & 0 deletions — format/spec.md

@@ -648,6 +648,9 @@ Notes:
5. The `content_offset` and `content_size_in_bytes` fields are used to reference a specific blob for direct access to a deletion vector. For deletion vectors, these values are required and must exactly match the `offset` and `length` stored in the Puffin footer for the deletion vector blob.
6. The following field ids are reserved on `data_file`: 141.

For `variant` type, the `lower_bounds` and `upper_bounds` store the lower and upper bounds for all shredded subcolumns within a file. These bounds are represented as a Variant object, where each subcolumn path serves as a key and the corresponding bound value as the value. The object is then serialized into binary format (see [Variant encoding](https://github.com/apache/parquet-format/blob/master/VariantEncoding.md)).
Contributor commented:
This should not state "all shredded subcolumns". First, "all" is not a requirement and is actually misleading because the fields are only those for which we can guarantee the lower and upper bounds are accurate. I would also not use the somewhat confusing term "subcolumn". In the variant spec we refer to fields. You may also want to call out a special case where the root is not an object. In that case the lower or upper bound is tracked by the field name "$" indicating the root.

Subcolumn paths use the normalized JSON path format, such as `$['location']['latitude']` or `$['user.name']`. If the shredded subcolumn is an array, represent it using index 0 to indicate the array structure, such as `$['tags'][0]`.
Contributor commented:
There shouldn't be a need to include array paths. Bounds for array data are not tracked.

Contributor (author) replied:

I will defer that for now and we can add if needed later.
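To make the bounds layout under discussion concrete, here is an illustrative sketch of the conceptual object that would then be serialized with the Variant binary encoding. The paths and values are hypothetical examples, not part of the spec text; the `"$"` root-path case follows the reviewer's suggestion above.

```python
# Illustrative only: the conceptual lower/upper bounds objects, keyed by
# normalized JSON paths of shredded fields, before Variant serialization.
# Field paths and bound values here are hypothetical.
lower_bounds = {
    "$['location']['latitude']": 37.7,  # lower bound of a shredded numeric field
    "$['user.name']": "alice",          # a field name containing a dot stays one key
}
upper_bounds = {
    "$['location']['latitude']": 40.4,
    "$['user.name']": "zoe",
}

# Per the review comment: when the variant root is not an object,
# the bound is tracked under the root path "$".
scalar_root_lower = {"$": 100}

# Bounds need not cover every shredded field, but any path present in
# one map would normally be meaningful in the other as well.
assert set(lower_bounds) == set(upper_bounds)
```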


For `geometry` and `geography` types, `lower_bounds` and `upper_bounds` are both points of the following coordinates X, Y, Z, and M (see [Appendix G](#appendix-g-geospatial-notes)) which are the lower / upper bound of all objects in the file. For the X values only, xmin may be greater than xmax, in which case an object in this bounding box may match if it contains an X such that `x >= xmin` OR `x <= xmax`. In geographic terminology, the concepts of `xmin`, `xmax`, `ymin`, and `ymax` are also known as `westernmost`, `easternmost`, `southernmost` and `northernmost`, respectively. For `geography` types, these points are further restricted to the canonical ranges of [-180, 180] for X and [-90, 90] for Y.
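The X-wraparound rule above can be sketched as a small predicate (a sketch of my reading of the rule, not code from the spec): when `xmin > xmax` the box wraps around, and the X test becomes a disjunction.

```python
def x_may_match(x: float, xmin: float, xmax: float) -> bool:
    """Check whether an X coordinate can fall inside the bounding box.

    When xmin <= xmax this is an ordinary interval test. When
    xmin > xmax the box wraps around (e.g. across the antimeridian),
    and an X matches if x >= xmin OR x <= xmax, per the spec text.
    """
    if xmin <= xmax:
        return xmin <= x <= xmax
    return x >= xmin or x <= xmax

# A box spanning the antimeridian: xmin=170, xmax=-170.
assert x_may_match(175.0, 170.0, -170.0)     # east of xmin
assert x_may_match(-175.0, 170.0, -170.0)    # west of xmax
assert not x_may_match(0.0, 170.0, -170.0)   # outside the wrapped box
```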

The `partition` struct stores the tuple of partition values for each file. Its type is derived from the partition fields of the partition spec used to write the manifest file. In v2, the partition struct's field ids must match the ids from the partition spec.
@@ -1558,6 +1561,7 @@ The binary single-value serialization can be used to store the lower and upper b
| Type                         | Binary serialization                                                                                                                                                                                                                      |
|------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **`geometry`** | A single point, encoded as a x:y:z:m concatenation of its 8-byte little-endian IEEE 754 coordinate values. x and y are mandatory. This becomes x:y if z and m are both unset, x:y:z if only m is unset, and x:y:NaN:m if only z is unset. |
| **`geography`** | A single point, encoded as a x:y:z:m concatenation of its 8-byte little-endian IEEE 754 coordinate values. x and y are mandatory. This becomes x:y if z and m are both unset, x:y:z if only m is unset, and x:y:NaN:m if only z is unset. |
| **`variant`** | A `Variant` object, where each subcolumn path serves as a key and the corresponding bound value as the value. Subcolumn paths follow the JSON path format. |
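As a rough sketch of the point encoding in the two geospatial rows above (assuming plain 8-byte little-endian IEEE 754 doubles, as the table states; this is my reading of the table, not code from the spec), Python's `struct` module can produce the `x:y:z:m` concatenation:

```python
import math
import struct

def encode_bound_point(x, y, z=None, m=None):
    """Encode a lower/upper bound point as concatenated 8-byte
    little-endian IEEE 754 doubles, per the table above:
      x:y        if z and m are both unset
      x:y:z      if only m is unset
      x:y:NaN:m  if only z is unset
      x:y:z:m    otherwise
    """
    coords = [x, y]
    if m is not None:
        coords += [z if z is not None else math.nan, m]
    elif z is not None:
        coords.append(z)
    return b"".join(struct.pack("<d", c) for c in coords)

assert len(encode_bound_point(1.0, 2.0)) == 16          # x:y
assert len(encode_bound_point(1.0, 2.0, z=3.0)) == 24   # x:y:z
assert len(encode_bound_point(1.0, 2.0, m=4.0)) == 32   # x:y:NaN:m
```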
Contributor commented:
Variant is not something that we can reference here because that is a class in the implementation. I think what you want to say is that the serialized value is a variant metadata (v1) concatenated with a variant object. The object's fields are field paths in the normalized JSON path format and the values are the upper or lower bound corresponding to the shredded type in the data file.

Contributor (author) replied:
Makes sense. Here we want to describe exactly how the data is laid out.


### JSON single-value serialization
