Support PyDataset in Normalization layer adapt methods
#21817
Conversation
Code Review
This pull request adds support for PyDataset to the adapt method of the Normalization layer, which is a great enhancement for usability. The inclusion of a specific test case for PyDataset is also well done. My review includes a couple of suggestions to improve code clarity and error handling, mainly by reducing a small amount of code duplication and making an error message more informative, in line with the Keras API design guidelines.
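For instance, the three near-identical `map` branches in the diff below could plausibly be collapsed into a single call. This is a hypothetical sketch of such a consolidation, not necessarily the reviewer's exact suggestion:

```python
# tf.data unpacks tuple elements into positional arguments, so one starred
# lambda covers the x, (x, y), and (x, y, sample_weight) cases at once.
data = tf_dataset.map(lambda *args: args[0])
```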
Codecov Report

❌ Patch coverage is

```
@@            Coverage Diff             @@
##           master   #21817      +/-   ##
==========================================
+ Coverage   82.63%   82.67%   +0.03%
==========================================
  Files         577      577
  Lines       59415    59432      +17
  Branches     9313     9317       +4
==========================================
+ Hits        49097    49133      +36
+ Misses       7913     7898      -15
+ Partials     2405     2401       -4
```
```python
tf_dataset = adapter.get_tf_dataset()
if len(tf_dataset.element_spec) == 1:
    # just x
    data = tf_dataset.map(lambda x: x)
elif len(tf_dataset.element_spec) == 2:
    # (x, y) pairs
    data = tf_dataset.map(lambda x, y: x)
elif len(tf_dataset.element_spec) == 3:
    # (x, y, sample_weight) tuples
    data = tf_dataset.map(lambda x, y, z: x)
input_shape = data.element_spec.shape
```
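The snippet above leaves `data` unset when the element structure has an unexpected arity; per the PR description, an unsupported type now raises an exception instead of failing later with `UnboundLocalError`. A minimal sketch of what such a fallback branch could look like (hypothetical wording, not the PR's exact code):

```python
else:
    # Hypothetical fallback: fail fast with a descriptive error instead of
    # letting `data` stay unbound and crash on the next line.
    raise ValueError(
        "Unsupported dataset element structure in `adapt()`: expected "
        f"1 to 3 components, received spec: {tf_dataset.element_spec}"
    )
```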
Coming from your comment.
What I did in my solution is:
```python
elif isinstance(data, keras.utils.PyDataset):
    # PyDataset should return a tuple whose first element is the data.
    sample_input = data[0][0]
    if isinstance(sample_input, np.ndarray) or backend.is_tensor(sample_input):
        input_shape = sample_input.shape
    else:
        raise ValueError(
            f"Unsupported data type: {type(sample_input)} "
            "returned from the PyDataset"
        )
```

The advantage of my option is that we don't need to perform excessive transformations to TF tensors just for the sake of size estimation. PyDataset is also used for experimentation, and when the dataset is too large to be read into RAM, which is common for workstations and personal devices, transforming the PyDataset into a TF tensor will fail due to memory allocation. The drawback of my solution, by contrast, is that it retrieves the first batch, and that retrieval might change the batch, so a second retrieval of the first batch might not return the same output (if someone implemented a non-idempotent PyDataset, though I think that is then a user problem). Retrieving the first batch is nonetheless a feasible solution because the shape of all elements across all batches must be identical for normalization to work correctly.
Considering the strategic direction of Keras to move away from being solely dependent on TensorFlow, adding a transformation to TensorFlow creates technical debt that the Keras team will later have to take care of.
I am open to discussion.
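To make the shape-consistency argument concrete, here is a minimal sketch (the `ToyDataset` class is illustrative, not from the PR) showing that sampling the first batch suffices when every batch shares the same per-element shape:

```python
import numpy as np
import keras


class ToyDataset(keras.utils.PyDataset):
    """Hypothetical PyDataset; every batch has the same feature shape."""

    def __init__(self, x, y, batch_size=32, **kwargs):
        super().__init__(**kwargs)
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        return self.x[lo:hi], self.y[lo:hi]


ds = ToyDataset(np.random.rand(128, 4).astype("float32"), np.zeros(128))
sample_input = ds[0][0]    # first batch, x component only
print(sample_input.shape)  # (32, 4): the feature axis is the same in every batch
```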
Thank you for the thorough response! Yes, I will defer to the core developers' judgement for this. Happy to revise and infer the shape based on sampling a batch if we think that's the better approach.
Addresses #21300 by adding support for `PyDataset` by converting it into a `tf.data.Dataset`, per the suggestions in the issue. Additionally, raise an exception if an unsupported type is supplied rather than having it proceed and fail with an `UnboundLocalError`.
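Assuming the behavior this PR adds, usage would look roughly like the following sketch (the `FeatureDataset` class is illustrative, not part of the PR):

```python
import numpy as np
import keras


class FeatureDataset(keras.utils.PyDataset):
    """Illustrative PyDataset yielding x-only batches."""

    def __init__(self, x, batch_size=32, **kwargs):
        super().__init__(**kwargs)
        self.x, self.batch_size = x, batch_size

    def __len__(self):
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        return (self.x[lo : lo + self.batch_size],)


norm = keras.layers.Normalization()
# With this PR, adapt() accepts the PyDataset directly.
norm.adapt(FeatureDataset(np.random.rand(256, 8).astype("float32")))
out = norm(np.random.rand(4, 8).astype("float32"))  # standardized batch
```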