
fix TPU node pool scale to zero#75

Merged
divyashreepathihalli merged 3 commits into keras-team:main from divyashreepathihalli:fix-gke-client on Mar 6, 2026
Conversation

@divyashreepathihalli
Collaborator

This PR fixes a bug in the GKE scale-to-zero preflight validation logic in _check_node_pool_exists_cached, where detection returned False for valid TPU node pools such as v5litepod-2x2. The detection failed because cloud.google.com/gke-tpu-accelerator labels and exact chip counts were not mapped correctly for TPU pools that rely on explicit machine types rather than generic accelerator configs.
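The fixed detection path can be sketched roughly as follows. This is an illustrative reconstruction, not the actual code: the node-pool dict shape follows the GKE REST API, and the TPUS mapping and function name are hypothetical stand-ins.

```python
# Illustrative sketch of the fixed TPU node pool detection; the real
# logic lives in keras_remote/backend/gke_client.py and may differ.

# Hypothetical stand-in for the internal accelerators.TPUS registry:
# maps dedicated TPU machine types to their accelerator type.
TPUS = {
    "ct5lp-hightpu-4t": "tpu-v5-lite-podslice",
}

def is_tpu_node_pool(node_pool: dict) -> bool:
    """Detect TPU pools via resourceLabels instead of accelerator configs."""
    config = node_pool.get("config", {})
    labels = config.get("resourceLabels", {})
    # TPU slices carry this resource label even when the node pool has
    # no explicit accelerator config attached.
    if labels.get("goog-gke-accelerator-type"):
        return True
    # Fall back to the machine type: TPU pools use dedicated machine
    # families that appear in the TPU registry.
    return config.get("machineType") in TPUS
```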

@gemini-code-assist
Contributor

Summary of Changes


This pull request resolves a critical bug in the GKE scale-to-zero preflight validation process. Previously, the system failed to correctly identify certain valid TPU node pools, such as v5litepod-2x2, due to misinterpretations of their accelerator labels and chip counts. The changes introduce more robust detection mechanisms, ensuring that TPU node pools relying on explicit machine types are properly recognized, thereby preventing erroneous validation failures.

Highlights

  • TPU Node Pool Detection: Enhanced the _check_node_pool_exists_cached function to correctly identify TPU node pools by inspecting resourceLabels for goog-gke-accelerator-type, addressing a previous failure to recognize valid TPU configurations.
  • Accelerator Count Inference: Implemented logic to infer the cloud.google.com/gke-accelerator-count label from the node pool's machineType by cross-referencing with the accelerators.TPUS registry, ensuring accurate accelerator count mapping for scale-to-zero validation.


Changelog
  • keras_remote/backend/gke_client.py
    • Added logic to extract TPU accelerator type from resourceLabels for improved detection.
    • Introduced a mechanism to infer the GKE accelerator count from the machine type using the internal TPU registry.


@gemini-code-assist left a comment
Code Review

This pull request fixes a bug in the TPU node pool detection logic for scale-to-zero scenarios, which is crucial for correct preflight validation. The main part of the fix, which uses resourceLabels to determine the TPU accelerator type, seems correct and effectively addresses the issue described. However, I've identified a new block of code for inferring accelerator counts that appears to be both logically flawed and unnecessary for the current logic. My review includes a suggestion to remove this block to improve the code's correctness and maintainability.

@divyashreepathihalli merged commit 70bd83e into keras-team:main Mar 6, 2026
4 checks passed
