
Conversation

Contributor

@danielenricocahall danielenricocahall commented Nov 3, 2025

Addresses #21300 by adding support for PyDataset in Normalization.adapt, converting it into a tf.data.Dataset per the suggestions in the issue. Additionally, raises an exception if an unsupported type is supplied rather than proceeding and failing with an UnboundLocalError.

@gemini-code-assist
Contributor

Summary of Changes

Hello @danielenricocahall, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the Normalization layer by integrating support for PyDataset objects within its adapt method. This change addresses a specific issue where PyDataset was not properly handled, and it also improves the layer's robustness by providing clearer error messages for unhandled data input types.

Highlights

  • PyDataset Support: The Normalization layer's adapt method now supports PyDataset objects, allowing for proper adaptation when using this data type.
  • Improved Error Handling: The adapt method now explicitly raises a NotImplementedError for unsupported data types, preventing ambiguous UnboundLocalError failures.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for PyDataset to the adapt method of the Normalization layer, which is a great enhancement for usability. The inclusion of a specific test case for PyDataset is also well done. My review includes a couple of suggestions to improve code clarity and error handling, mainly by reducing a small amount of code duplication and making an error message more informative, in line with the Keras API design guidelines.

@codecov-commenter

codecov-commenter commented Nov 3, 2025

Codecov Report

❌ Patch coverage is 76.92308% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.67%. Comparing base (08f102d) to head (ac460d9).
⚠️ Report is 3 commits behind head on master.

Files with missing lines Patch % Lines
keras/src/layers/preprocessing/normalization.py 76.92% 1 Missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21817      +/-   ##
==========================================
+ Coverage   82.63%   82.67%   +0.03%     
==========================================
  Files         577      577              
  Lines       59415    59432      +17     
  Branches     9313     9317       +4     
==========================================
+ Hits        49097    49133      +36     
+ Misses       7913     7898      -15     
+ Partials     2405     2401       -4     
Flag Coverage Δ
keras 82.49% <76.92%> (+0.03%) ⬆️
keras-jax 63.33% <76.92%> (+<0.01%) ⬆️
keras-numpy 57.57% <76.92%> (+<0.01%) ⬆️
keras-openvino 34.34% <15.38%> (+0.03%) ⬆️
keras-tensorflow 64.13% <76.92%> (+0.01%) ⬆️
keras-torch 63.63% <76.92%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

Comment on lines +237 to +247
tf_dataset = adapter.get_tf_dataset()
if len(tf_dataset.element_spec) == 1:
    # just x
    data = tf_dataset.map(lambda x: x)
elif len(tf_dataset.element_spec) == 2:
    # (x, y) pairs
    data = tf_dataset.map(lambda x, y: x)
elif len(tf_dataset.element_spec) == 3:
    # (x, y, sample_weight) tuples
    data = tf_dataset.map(lambda x, y, z: x)
input_shape = data.element_spec.shape
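The unpacking above dispatches on the arity of the dataset elements to keep only x. A backend-free sketch of the same logic (a hypothetical `extract_x` helper; plain Python tuples stand in for tf.data elements), with an explicit error for unsupported structures:

```python
def extract_x(batch):
    # Keep only x from a batch that is x alone, an (x, y) pair,
    # or an (x, y, sample_weight) tuple.
    if not isinstance(batch, tuple):
        return batch  # just x
    if len(batch) in (2, 3):
        return batch[0]  # (x, y) or (x, y, sample_weight)
    raise ValueError(
        f"Unsupported batch structure with {len(batch)} elements."
    )

print(extract_x(([1.0, 2.0], [0])))  # [1.0, 2.0]
```

Raising eagerly on an unrecognized structure matches the PR's goal of failing with a clear error instead of an UnboundLocalError later.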

@limzikiki limzikiki Nov 3, 2025


Coming from your comment.
What I did in my solution is:

        elif isinstance(data, keras.utils.PyDataset):
            # PyDataset should return a tuple whose first element is the data.
            sample_input = data[0][0]
            if isinstance(sample_input, np.ndarray) or backend.is_tensor(sample_input):
                input_shape = sample_input.shape
            else:
                raise ValueError(
                    f"Unsupported data type: {type(sample_input)} "
                    "returned from the PyDataset"
                )

The advantage of my option is that we don't need to perform excessive transformations to tf tensors just for the sake of shape estimation. PyDataset is also used for experimentation, and when the dataset is too large to be read into RAM, which is common for workstations and personal devices, transforming the PyDataset into a TF tensor will fail due to memory allocation. The drawback of my solution, on the other hand, is that it retrieves the first batch, and a second retrieval of the first batch might not return the same output (if someone implemented a non-idempotent PyDataset, but I think that is a user problem). Retrieving the first batch is a feasible solution because the shape of the elements must be identical across all batches for normalization to work correctly.

Considering the strategic direction of Keras to move away from being solely dependent on TensorFlow, adding a transformation to TensorFlow creates technical debt that the Keras team will later need to take care of.

I am open to discussion.
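The batch-sampling approach described above can be sketched end to end with a toy stand-in for keras.utils.PyDataset (all names here are illustrative, not the actual Keras implementation):

```python
import numpy as np


class ToyPyDataset:
    """Minimal stand-in for keras.utils.PyDataset: __getitem__
    returns an (x, y) batch tuple; every batch of x has the same shape."""

    def __init__(self, x, y, batch_size):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, i):
        s = slice(i * self.batch_size, (i + 1) * self.batch_size)
        return self.x[s], self.y[s]


def infer_input_shape(dataset):
    # Sample the first batch; x is the first element of the tuple.
    sample_input = dataset[0][0]
    if isinstance(sample_input, np.ndarray):
        return sample_input.shape
    raise ValueError(
        f"Unsupported data type: {type(sample_input)} "
        "returned from the PyDataset"
    )


ds = ToyPyDataset(np.zeros((100, 8)), np.zeros((100,)), batch_size=10)
print(infer_input_shape(ds))  # (10, 8)
```

Only one batch is ever materialized, so the shape can be inferred without converting the whole dataset to backend tensors.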

Contributor Author


Thank you for the thorough response! Yes, I will defer to the core developers' judgement for this. Happy to revise and infer the shape based on sampling a batch if we think that's the better approach.
