
Align dispatch_tensor logic #2300

Merged
sachinprasadhs merged 3 commits into keras-team:master from maitry63:docs_dispatch_tensor
Mar 5, 2026

Conversation

@maitry63
Contributor

This PR aligns the code implementation of text_classification_with_switch_transformer.py with the inline documentation regarding the dispatch_tensor creation.

Fixes: #462

@gemini-code-assist
Contributor

Summary of Changes

Hello @maitry63, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on enhancing the clarity and correctness of the Switch Transformer example by synchronizing the code's behavior with its accompanying documentation. The changes ensure that the dispatch_tensor creation and usage are accurately described, improving the maintainability and understanding of the model's routing mechanism. Additionally, it includes minor formatting and metadata updates for consistency.

Highlights

  • Code-Documentation Alignment: The pull request aligns the implementation details of dispatch_tensor creation in the Router class with its inline documentation, specifically regarding the dtype casting and the shape of the tensor.
  • Shape Comment Correction: The comment describing the dispatch_tensor shape within the Switch class was corrected to accurately reflect its dimensions, changing from [expert_capacity, num_experts, tokens_per_batch] to [tokens_per_batch, num_experts, expert_capacity].
  • Einsum Operation Clarification: An explanatory comment was added to the einsum operation in the Switch class to clarify the tensor transformations involved in creating expert_inputs.
  • Notebook Cleanup and Metadata Update: Various execution_count fields in the Jupyter notebook were reset to 0, and the 'Last modified' date was updated across all relevant files.
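
The shape convention highlighted above can be sanity-checked outside the example. The snippet below is a minimal NumPy stand-in for a Switch-style top-1 dispatch (sizes and variable names are hypothetical, not taken from the example's Router class): it builds a dispatch tensor whose shape matches the corrected comment, [tokens_per_batch, num_experts, expert_capacity].

```python
import numpy as np

# Hypothetical sizes for illustration; not taken from the example script.
tokens, num_experts, expert_capacity = 8, 4, 2

logits = np.random.randn(tokens, num_experts)
expert_index = logits.argmax(axis=-1)            # top-1 expert per token
expert_mask = np.eye(num_experts)[expert_index]  # [tokens, num_experts]

# 0-based slot of each token inside its chosen expert's buffer.
position = (np.cumsum(expert_mask, axis=0) * expert_mask).sum(axis=-1).astype(int) - 1
within_capacity = position < expert_capacity     # tokens past capacity are dropped
slot_one_hot = np.eye(expert_capacity)[np.minimum(position, expert_capacity - 1)]
slot_one_hot = slot_one_hot * within_capacity[:, None]

# dispatch_tensor shape: [tokens_per_batch, num_experts, expert_capacity]
dispatch_tensor = expert_mask[:, :, None] * slot_one_hot[:, None, :]
```

Each (expert, slot) pair receives at most one token, which is what makes the later einsum a valid gather of token embeddings into per-expert buffers.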


Changelog
  • examples/nlp/ipynb/text_classification_with_switch_transformer.ipynb
    • Updated the 'Last modified' date.
    • Reset execution_count to 0 in multiple code cells.
    • Added newlines for better formatting in several code cells.
    • Modified ops.cast call to explicitly use dtype keyword argument for dispatch_tensor.
    • Corrected the comment for dispatch_tensor shape in the Switch class.
    • Added a comment to clarify the einsum operation in the Switch class.
  • examples/nlp/md/text_classification_with_switch_transformer.md
    • Updated the 'Last modified' date.
    • Removed extraneous blank lines within code blocks.
    • Modified ops.cast call to explicitly use dtype keyword argument for dispatch_tensor.
    • Corrected the comment for dispatch_tensor shape in the Switch class.
    • Added a comment to clarify the einsum operation in the Switch class.
    • Updated example training epoch output values.
  • examples/nlp/text_classification_with_switch_transformer.py
    • Updated the 'Last modified' date.
    • Modified ops.cast call to explicitly use dtype keyword argument for dispatch_tensor.
    • Corrected the comment for dispatch_tensor shape in the Switch class.
    • Added a comment to clarify the einsum operation in the Switch class.
Activity
  • No specific review comments or activity have been recorded for this pull request yet.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request successfully aligns the code implementation with the inline documentation regarding the dispatch_tensor creation, addressing the issue mentioned in the description. The changes improve clarity and accuracy in the comments and code, particularly concerning tensor shapes and type casting. The modifications are well-placed and enhance the overall readability and maintainability of the code.

Comment on lines +294 to +295
" # cast to float32 so it can be used in the einsum product in the Switch layer.\n",
" dispatch_tensor = ops.cast(combined_tensor, dtype=\"float32\")\n",

Severity: medium

Explicitly naming the dtype argument in ops.cast improves readability and makes the function call clearer. The added comment also provides valuable context for why the casting is performed.

# cast to float32 so it can be used in the einsum product in the Switch layer.
dispatch_tensor = ops.cast(combined_tensor, dtype="float32")
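
As a quick illustration of why the cast matters, here is a NumPy sketch (using astype in place of keras.ops.cast, with made-up shapes): the routing mask starts out non-float, and casting it to float32 keeps the downstream einsum output in float32.

```python
import numpy as np

# Arbitrary shapes: 8 tokens, 4 experts, capacity 2, embed_dim 16.
combined_tensor = np.random.rand(8, 4, 2) > 0.5      # boolean routing mask
dispatch_tensor = combined_tensor.astype("float32")  # stands in for ops.cast(..., dtype="float32")
inputs = np.random.randn(8, 16).astype("float32")

expert_inputs = np.einsum("ab,acd->cdb", inputs, dispatch_tensor)
```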

Comment on lines +339 to +343
" # dispatch_tensor shape: [tokens_per_batch, num_experts, expert_capacity]\n",
" # combine_tensor shape: [tokens_per_batch, num_experts, expert_capacity]\n",
" dispatch_tensor, combine_tensor = self.router(inputs)\n",
" # expert_inputs shape: [num_experts, expert_capacity, embed_dim]\n",
" # \"ab\" = [tokens, dim], \"acd\" = [tokens, experts, capacity] -> \"cdb\" = [experts, capacity, dim]\n",

Severity: medium

The correction of the dispatch_tensor shape in the comment is a good improvement for documentation accuracy. Additionally, the new comment explaining the einsum operation clarifies the tensor transformations, which is very helpful for understanding the logic.

# dispatch_tensor shape: [tokens_per_batch, num_experts, expert_capacity]
# combine_tensor shape: [tokens_per_batch, num_experts, expert_capacity]
dispatch_tensor, combine_tensor = self.router(inputs)
# expert_inputs shape: [num_experts, expert_capacity, embed_dim]
# "ab" = [tokens, dim], "acd" = [tokens, experts, capacity] -> "cdb" = [experts, capacity, dim]

maitry63 and others added 2 commits February 25, 2026 19:56
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@sachinprasadhs sachinprasadhs merged commit ea77717 into keras-team:master Mar 5, 2026
3 checks passed
@maitry63 maitry63 deleted the docs_dispatch_tensor branch March 12, 2026 02:31

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

some code different from comment in text_classification_with_switch_transformer

2 participants