Adding torch support for Data Parallel #22397
buildwithsuhana wants to merge 18 commits into keras-team:master
Conversation
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the Keras Torch backend by integrating robust support for data parallelism. It ensures that Keras's high-level distribution APIs function seamlessly with PyTorch's distributed capabilities, allowing models to be efficiently scaled across multiple devices or processes. The changes involve making core tensor and variable operations distribution-aware, adding XLA device compatibility, and providing a structured way to parallelize Keras layers.
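As a rough, hand-written illustration of the mechanism this summary refers to (not code from this PR), wrapping a torch module with PyTorch's DistributedDataParallel looks roughly like the sketch below; the helper name and the assumption that the script is launched with `torchrun` are mine:

```python
# Minimal sketch of PyTorch Distributed Data Parallel (DDP); illustrative only,
# not code from this PR. Assumes the process is launched with `torchrun`,
# which sets RANK / WORLD_SIZE / LOCAL_RANK in the environment.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def wrap_for_data_parallel(model: torch.nn.Module) -> torch.nn.Module:
    # Initialize the default process group (NCCL for GPUs, Gloo on CPU).
    if not dist.is_initialized():
        backend = "nccl" if torch.cuda.is_available() else "gloo"
        dist.init_process_group(backend=backend)
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    if torch.cuda.is_available():
        torch.cuda.set_device(local_rank)
        model = model.to(local_rank)
        return DDP(model, device_ids=[local_rank])
    return DDP(model)
```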
Code Review
This pull request introduces support for data parallelism in the Torch backend, which is a significant feature. The implementation leverages PyTorch's Distributed Data Parallel (DDP) and DTensor APIs. The changes include a new distribution_lib for Torch, modifications to core.py to make tensor operations and variables distribution-aware, and updates to layers to hook into the distribution logic.
My review has identified a critical bug related to integer-to-tensor conversion that could lead to incorrect behavior. I've also pointed out a piece of code that needs clarification and a potential design issue in the dot_product_attention op where distribution-specific logic is mixed with the generic implementation. Overall, the approach is solid, but these points should be addressed to ensure correctness and maintainability.
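For readers less familiar with the DTensor side mentioned above, the sketch below shows how a one-dimensional device mesh and replicated/sharded placements are expressed with PyTorch's DTensor primitives. It is a hand-written illustration, not the PR's distribution_lib code, and assumes PyTorch >= 2.4 import paths and an already-initialized process group (e.g. via `torchrun`):

```python
# Illustrative sketch of PyTorch's DeviceMesh / DTensor primitives that a
# torch distribution_lib can build on. Not taken from this PR; assumes
# PyTorch >= 2.4 (earlier releases expose DTensor under torch.distributed._tensor)
# and that torch.distributed is already initialized.
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Replicate, Shard, distribute_tensor

# A 1-D mesh over 2 devices, analogous to a Keras DeviceMesh of shape (2,)
# with a single "data" axis.
mesh = init_device_mesh("cuda", (2,), mesh_dim_names=("data",))

# Replicate model weights across the data axis (data parallelism) ...
weights = torch.randn(8, 16)
replicated_weights = distribute_tensor(weights, mesh, placements=[Replicate()])

# ... while the per-step batch is sharded along its first (batch) dimension.
batch = torch.randn(32, 16)
sharded_batch = distribute_tensor(batch, mesh, placements=[Shard(0)])
```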
Codecov Report
❌ Patch coverage. Additional details and impacted files:
@@ Coverage Diff @@
## master #22397 +/- ##
==========================================
- Coverage 82.99% 82.71% -0.29%
==========================================
Files 596 597 +1
Lines 66423 66982 +559
Branches 10353 10470 +117
==========================================
+ Hits 55130 55406 +276
- Misses 8665 8899 +234
- Partials 2628 2677 +49
This PR introduces torch backend support for Data Parallelism (DP) in Keras. It aligns the internal distribution_lib implementations so that the high-level Keras distribution APIs (such as DeviceMesh, LayoutMap, and ModelParallel) behave consistently regardless of the underlying framework. The implementation leverages PyTorch Distributed Data Parallel (DDP).
Design document: go/distributionLib
Kaggle notebook testing data parallelism with the torch and jax backends (using the keras_hub OPT model):
https://www.kaggle.com/code/buildwithsuhana/dataparallel-torch-ddp?scriptVersionId=302683263
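For context, this is roughly how the high-level keras.distribution API is driven once a backend supports it; the API names below already exist in Keras 3 (previously exercised with the JAX backend), and torch support is what this PR adds. This is a sketch, not an excerpt from the PR or its tests:

```python
# Hedged sketch of using the high-level Keras distribution API with the torch
# backend. The keras.distribution names below exist today; torch support for
# them is what this PR introduces.
import os

os.environ["KERAS_BACKEND"] = "torch"

import keras

devices = keras.distribution.list_devices()  # e.g. ["gpu:0", "gpu:1"], depending on hardware
data_parallel = keras.distribution.DataParallel(devices=devices)
keras.distribution.set_distribution(data_parallel)

# From here on, model building and fit() proceed as usual; the backend handles
# replicating the weights and sharding the input batch across devices.
model = keras.Sequential([keras.layers.Dense(10, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```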