Migration of Semi-supervision and domain adaptation with AdaMatch to keras 3 #2310

maitry63 wants to merge 5 commits into keras-team:master
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request updates the Semi-supervision and domain adaptation with AdaMatch tutorial to be fully compatible with Keras 3. The changes make the tutorial backend-agnostic, allowing it to run seamlessly on various deep learning frameworks. The migration involves a comprehensive overhaul of data loading, augmentation, model definition, and the custom training loop to align with the latest Keras API standards, improving flexibility and maintainability.
Code Review
This pull request successfully migrates the AdaMatch example to Keras 3, making it backend-agnostic, with well-executed refactoring from tf.data and a custom train_step to PyDataset and compute_loss. However, a security audit identified the use of unencrypted HTTP connections for downloading the SVHN dataset, introducing a risk of Man-in-the-Middle attacks; updating these URLs to HTTPS is recommended. Additionally, critical code issues include the removal of weight decay and kernel initializer from the network definition, which may degrade performance, and unintentional changes in loss calculation and image preprocessing logic.
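The PyDataset refactor the review praises rests on a simple batching protocol. Below is a minimal, standalone sketch of that protocol (the class and field names are illustrative, not from the PR); in the actual example the class would subclass `keras.utils.PyDataset`, which expects exactly these `__len__`/`__getitem__` semantics:

```python
import math

import numpy as np


class SVHNBatches:
    """Illustrative batch loader following the keras.utils.PyDataset protocol.

    __len__ returns the number of batches per epoch, and __getitem__
    returns one (images, labels) batch by index.
    """

    def __init__(self, images, labels, batch_size=64):
        self.images = images
        self.labels = labels
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch, counting a final partial batch.
        return math.ceil(len(self.images) / self.batch_size)

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        return self.images[lo:hi], self.labels[lo:hi]
```

Because the loader is plain Python rather than a `tf.data` pipeline, the same object can feed training on any Keras 3 backend.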
@gemini-code-assist review
Code Review
The pull request successfully migrates the Semi-supervision and domain adaptation with AdaMatch tutorial from Keras 2 to Keras 3, making it backend-agnostic. The changes update the imports, refactor the data loading and augmentation utilities into a PyDataset and into the AdaMatch model class, and adapt the Wide ResNet architecture and training loop to Keras 3's compute_loss method. SVHN data loading now uses scipy.io instead of tensorflow_datasets for both the training and test sets. The overall structure aligns well with Keras 3 best practices for custom training.
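For context on the scipy.io change: the SVHN cropped-digits files are distributed as `.mat` archives storing images as `(32, 32, 3, N)` under key `X` and labels under `y`, with the digit 0 encoded as label 10. A hedged sketch of a loader for that layout (the function name is illustrative; the PR's actual helper may differ):

```python
import numpy as np
from scipy import io


def load_svhn_mat(path):
    """Load an SVHN cropped-digits .mat file.

    Assumes the standard SVHN format: 'X' stored as (32, 32, 3, N)
    and 'y' as (N, 1) with digit 0 encoded as label 10.
    Returns images as (N, 32, 32, 3) float32 and integer labels.
    """
    data = io.loadmat(path)
    # Move the sample axis first so each row is one 32x32x3 image.
    images = np.transpose(data["X"], (3, 0, 1, 2)).astype("float32")
    labels = data["y"].flatten()
    labels[labels == 10] = 0  # remap SVHN's "10" back to digit 0
    return images, labels
```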
examples/vision/adamatch.py
Outdated
```python
x = layers.Activation("relu")(x)
x = layers.GlobalAveragePooling2D()(x)

outputs = layers.Dense(10)(x)
```
There was a problem hiding this comment.
The final Dense layer in the get_network function is missing the kernel_regularizer that was present in the Keras 2 version. Given that WEIGHT_DECAY is defined and used for regularization in other parts of the network, it's likely this was an oversight. Removing regularization from the output layer could potentially affect the model's generalization performance.
Suggested change:

```python
outputs = layers.Dense(10, kernel_regularizer=keras.regularizers.l2(WEIGHT_DECAY))(x)
```
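For intuition on what the suggested regularizer restores: an L2 kernel regularizer adds `WEIGHT_DECAY * sum(w ** 2)` over the layer's kernel to the total loss. A small sketch of that penalty (the `WEIGHT_DECAY` value here is assumed for illustration; the tutorial defines its own constant):

```python
import numpy as np

WEIGHT_DECAY = 0.0005  # illustrative value, not necessarily the tutorial's


def l2_penalty(kernel, weight_decay=WEIGHT_DECAY):
    """L2 weight-decay term that keras.regularizers.l2 would add to the loss."""
    return weight_decay * np.sum(np.square(kernel))
```

Dropping this term from the output layer means its weights are the only ones in the network left unpenalized, which is why the review flags it as a likely oversight rather than an intentional change.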
This PR migrates the Semi-supervision and domain adaptation with AdaMatch tutorial from Keras 2 to Keras 3. The implementation is now fully backend-agnostic, allowing it to run seamlessly on JAX, PyTorch, and TensorFlow.
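The backend-agnostic claim can be exercised by choosing a backend before Keras is first imported; Keras 3 reads the `KERAS_BACKEND` environment variable at import time. A minimal sketch (the `"jax"` choice is just one of the supported values):

```python
import os

# Must be set before the first `import keras`; accepted values include
# "tensorflow", "jax", and "torch".
os.environ["KERAS_BACKEND"] = "jax"
```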
Colab file - Notebook