Adding Torch support for Model Parallel #22394
buildwithsuhana wants to merge 15 commits into keras-team:master from
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances Keras by adding support for model parallelism with the Torch backend. It introduces the components needed to distribute tensors and variables across multiple devices, enabling larger models to be trained efficiently. The changes include modifications to core tensor operations to make them aware of sharding and replication, as well as utilities for initializing and managing distributed environments.
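To make the summary concrete, here is a minimal, hypothetical sketch of the kind of distributed-environment setup the summary alludes to. The helper name and structure are illustrative assumptions, not code from this PR.

```python
# Hypothetical sketch (not code from this PR): initializing a torch
# process group and a 1-D device mesh, the kind of "utilities for
# initializing and managing distributed environments" the summary mentions.
# Assumes a launch via `torchrun` with one process per device.
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh


def init_torch_mesh():  # hypothetical helper name
    """Initialize the default process group and return a 1-D DeviceMesh."""
    if not dist.is_initialized():
        backend = "nccl" if torch.cuda.is_available() else "gloo"
        dist.init_process_group(backend=backend)
    device_type = "cuda" if torch.cuda.is_available() else "cpu"
    # One mesh dimension spanning every process in the job.
    return init_device_mesh(device_type, (dist.get_world_size(),))
```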
Code Review
This pull request introduces support for model parallelism with the PyTorch backend. The changes are extensive, touching core backend components, distribution libraries, and some ops. The overall approach of using PyTorch's DTensor and parallelize_module is sound. I've identified a few areas for improvement, mainly concerning code structure and backend abstractions. Specifically, some backend-specific logic has been added to the generic keras.ops module, which should be moved to the torch-specific backend implementation to maintain clean separation. I also have a suggestion to improve code clarity in the Variable class implementation.
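For reviewers less familiar with the torch side, below is a short sketch of the two primitives the review refers to: DTensor placement via distribute_tensor, and tensor-parallel layers via parallelize_module. This is illustrative standalone torch code, not an excerpt from the PR.

```python
# Illustrative standalone torch code, not an excerpt from the PR. Uses the
# public torch.distributed.tensor API (torch >= 2.4; older releases expose
# the same names under torch.distributed._tensor). Assumes a single-node
# `torchrun --nproc-per-node=<ngpus>` launch.
import os

import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Replicate, Shard, distribute_tensor
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)

# DeviceMesh initializes the default process group if needed.
mesh = init_device_mesh("cuda", (int(os.environ["WORLD_SIZE"]),))

# Shard a weight along dim 0 across the mesh; Replicate() keeps a full
# copy on every device instead.
weight = torch.randn(4096, 1024)
sharded = distribute_tensor(weight, mesh, placements=[Shard(0)])
bias = distribute_tensor(torch.randn(1024), mesh, placements=[Replicate()])

# Megatron-style tensor parallelism for an MLP block: split the first
# Linear column-wise and the second row-wise.
mlp = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
)
mlp = parallelize_module(
    mlp, mesh, {"0": ColwiseParallel(), "2": RowwiseParallel()}
)
```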
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

@@            Coverage Diff             @@
##           master   #22394      +/-   ##
==========================================
- Coverage   82.99%   82.75%   -0.25%
==========================================
  Files         596      597       +1
  Lines       66423    66945     +522
  Branches    10353    10461     +108
==========================================
+ Hits        55130    55400     +270
- Misses       8665     8870     +205
- Partials     2628     2675      +47
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
This PR introduces torch backend support for Model Parallelism (MP) in Keras. It aligns the internal distribution_lib implementations so that the high-level Keras Distribution APIs (DeviceMesh, LayoutMap, and ModelParallel) behave consistently regardless of the underlying framework, and it leverages PyTorch DTensor and DeviceMesh to handle sharding and replication. A usage sketch follows the links below.
Design document: go/distributionLib
Kaggle notebook testing model parallelism for the torch and jax backends (using the keras_hub OPT model): https://www.kaggle.com/code/buildwithsuhana/dtensor-model-parallel-data-parallel-for-torch
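For reference, a hedged sketch of how the high-level API described above might be used with the torch backend once this PR lands. The layer names and sharding specs are illustrative assumptions, not taken from the PR or the Kaggle notebook, and the keras.distribution signatures shown are those of current Keras 3 releases.

```python
# Hedged usage sketch of the high-level Keras distribution API with the
# torch backend enabled by this PR. Layer names ("d1", "d2") and the
# sharding specs are illustrative assumptions, not taken from the PR.
import os

os.environ["KERAS_BACKEND"] = "torch"

import keras
from keras import distribution

# A 1x2 mesh: a data-parallel "batch" axis and a model-parallel "model"
# axis. Assumes at least two accelerators are visible.
devices = distribution.list_devices()[:2]
mesh = distribution.DeviceMesh(
    shape=(1, 2), axis_names=("batch", "model"), devices=devices
)

# Shard dense kernels along the "model" axis; variables that match no
# rule are replicated.
layout_map = distribution.LayoutMap(mesh)
layout_map["d1/kernel"] = (None, "model")
layout_map["d2/kernel"] = ("model", None)

# Must be set before the model is built so variables pick up their layout.
distribution.set_distribution(
    distribution.ModelParallel(layout_map=layout_map, batch_dim_name="batch")
)

inputs = keras.Input(shape=(128,))
x = keras.layers.Dense(256, name="d1")(inputs)
outputs = keras.layers.Dense(10, name="d2")(x)
model = keras.Model(inputs, outputs)
```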