
Conversation

@ashergaga

This PR enhances the HAMi scheduler to properly handle Ascend 910C devices in Huawei SuperPod clusters by enforcing module-pair allocation:

  • Ascend 910C hardware consists of 2 NPUs per physical module, and valid allocations must respect this topology (i.e., allocate either 1 NPU — auto-rounded to 2 — or an even number of NPUs).
  • The scheduler now rejects odd-numbered requests > 1 (e.g., 3, 5) with a clear error message.
  • For valid requests, devices are allocated only in complete module pairs (a brief sketch of this rule follows below).
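
As a rough illustration of the rule above, here is a minimal sketch of the request normalization. The function name normalizeAscend910CRequest is hypothetical and assumes fmt is imported; it only mirrors the Fit-path logic quoted later in this thread, not the PR's actual code.

	// Illustrative sketch: round a single-NPU request up to one full module
	// (2 NPUs) and reject any other odd request.
	func normalizeAscend910CRequest(nums int32) (int32, error) {
		if nums == 1 {
			return 2, nil // one Ascend 910C module = two NPUs
		}
		if nums%2 != 0 {
			return 0, fmt.Errorf("Ascend910C device request must be 1 or even number, got %d", nums)
		}
		return nums, nil
	}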

@hami-robot
Contributor

hami-robot bot commented Jan 20, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ashergaga
Once this PR has been reviewed and has the lgtm label, please assign shouren for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@hami-robot
Contributor

hami-robot bot commented Jan 20, 2026

Thanks for your pull request. Before we can look at it, you'll need to add a 'DCO signoff' to your commits.

📝 Please follow instructions in the contributing guide to update your commits with the DCO

Full details of the Developer Certificate of Origin can be found at developercertificate.org.

The list of commits missing DCO signoff:

  • 1ba6374 feat: support module-pair allocation for Ascend 910C devices in SuperPod environments

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@github-actions github-actions bot added the kind/feature new function label Jan 20, 2026
@hami-robot hami-robot bot added the size/M label Jan 20, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello @ashergaga, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces crucial enhancements to the HAMi scheduler, specifically targeting Ascend 910C devices in Huawei SuperPod environments. The changes ensure that NPU allocations adhere to the physical hardware topology, where devices are provisioned in module pairs. This prevents invalid resource requests, provides clearer feedback to users, and optimizes resource utilization by ensuring that only complete modules are allocated.

Highlights

  • Ascend 910C Device Support: A new Ascend910CType constant has been introduced to specifically identify and manage Ascend 910C devices within the scheduler.
  • Module-Pair Allocation Enforcement: The scheduler now enforces module-pair allocation for Ascend 910C devices. Requests for a single NPU are automatically rounded up to two, aligning with the hardware's minimum allocation unit.
  • Rejection of Invalid Requests: Odd-numbered NPU requests greater than one (e.g., 3, 5) for Ascend 910C devices are now explicitly rejected with a clear error message, preventing invalid resource assignments.
  • Topology-Aware Allocation Strategy: A new function, computeBestCombination910C, has been added to implement a topology-aware allocation strategy for Ascend 910C, ensuring that devices are allocated only in complete module pairs (2 NPUs per physical card); see the sketch after this list.
  • Improved Error Messaging: The scheduler's error reporting has been enhanced to provide a more specific message ('node device request must be 1 or even') when an invalid Ascend 910C NPU request is made.
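
A short sketch of the module grouping these highlights refer to; MaxCardNPUNum matches the constant used in the PR's code quoted below, while the moduleID helper is illustrative only.

	const MaxCardNPUNum = 2 // 2 NPUs per Ascend 910C physical module

	// Illustrative helper: NPU indices 0 and 1 map to module 0,
	// indices 2 and 3 to module 1, and so on.
	func moduleID(npuIdx int) int {
		return npuIdx / MaxCardNPUNum
	}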

🧠 New Feature in Public Preview: You can now enable Memory to help Gemini Code Assist learn from your team's feedback. This makes future code reviews more consistent and personalized to your project's style. Click here to enable Memory in your admin console.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page. Here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for module-pair allocation for Ascend 910C devices, which is a valuable enhancement. The logic correctly rounds up requests for a single NPU to two and rejects other odd-numbered requests. A new allocation strategy, computeBestCombination910C, is added to enforce module-pair selection. While the overall approach is sound, I've identified a critical issue where a partial device allocation could be incorrectly reported as successful. Additionally, there's a bug in the error handling path that would prevent scheduling failure events from being generated correctly for invalid request numbers. I've provided suggestions to fix these issues and improve code clarity and maintainability.

Comment on lines +169 to 176
if strings.Contains(reason, common.CardReqNumInvalid) {
	failedNodes[nodeID] = "node device request must be 1 or even"
} else {
	failedNodes[nodeID] = common.NodeUnfitPod
}
for reasonType := range common.ParseReason(reason) {
	failureReason[reasonType] = append(failureReason[reasonType], nodeID)
}
Contributor


critical

This logic has a critical flaw. When reason contains common.CardReqNumInvalid, common.ParseReason(reason) will fail to parse the string, resulting in an empty map. Consequently, failureReason is not populated, and no scheduling failure event will be recorded for the pod. This makes debugging scheduling failures difficult.

You should handle the CardReqNumInvalid case separately to ensure the failure reason is correctly recorded, and only call ParseReason for reasons that are in the expected format.

Suggested change
-if strings.Contains(reason, common.CardReqNumInvalid) {
-	failedNodes[nodeID] = "node device request must be 1 or even"
-} else {
-	failedNodes[nodeID] = common.NodeUnfitPod
-}
-for reasonType := range common.ParseReason(reason) {
-	failureReason[reasonType] = append(failureReason[reasonType], nodeID)
-}
+if strings.Contains(reason, common.CardReqNumInvalid) {
+	failedNodes[nodeID] = "node device request must be 1 or even"
+	failureReason[common.CardReqNumInvalid] = append(failureReason[common.CardReqNumInvalid], nodeID)
+} else {
+	failedNodes[nodeID] = common.NodeUnfitPod
+	for reasonType := range common.ParseReason(reason) {
+		failureReason[reasonType] = append(failureReason[reasonType], nodeID)
+	}
+}

k.Nums = 2
} else if k.Nums%2 != 0 {
// Reject any other odd-numbered request (e.g., 3, 5, 7...)
errMsg := fmt.Sprintf("CardReqNumInvalid: Ascend910C device request must be 1 or even number, got %d", k.Nums)
Contributor


medium

For better maintainability and to avoid hardcoding strings, it's recommended to use the common.CardReqNumInvalid constant when constructing the error message. This ensures consistency if the constant ever changes.

Suggested change
-errMsg := fmt.Sprintf("CardReqNumInvalid: Ascend910C device request must be 1 or even number, got %d", k.Nums)
+errMsg := fmt.Sprintf("%s: Ascend910C device request must be 1 or even number, got %d", common.CardReqNumInvalid, k.Nums)

return false, nil, errMsg
}
}
originReq := k.Nums
Contributor


medium

The variable originReq is now initialized after k.Nums has been potentially modified for Ascend 910C devices. This makes the name originReq misleading as it no longer holds the original request value but the adjusted one. For better code clarity and maintainability, consider renaming it to something like adjustedReqNum.
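
A minimal illustration of the suggested rename, assuming the surrounding Fit logic is as quoted above; adjustedReqNum is the reviewer's proposed name.

	// After the Ascend 910C handling above, k.Nums may already have been
	// rounded up, so the value read here is the adjusted request count.
	adjustedReqNum := k.Nums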

Comment on lines +560 to +592
	// Group NPU indices by ther module and Sort
	cardTopology := make(map[int][]int)
	for _, idx := range npuIndices {
		cardId := idx / MaxCardNPUNum
		cardTopology[cardId] = append(cardTopology[cardId], idx)
	}

	// Convert the card topology map into a slice for sorting.
	cardTopSlice := make([][]int, 0, len(cardTopology))
	for _, card := range cardTopology {
		cardTopSlice = append(cardTopSlice, card)
	}

	// Sort cards by the number of available NPUs in ascending order.
	sort.Slice(cardTopSlice, func(i, j int) bool {
		return len(cardTopSlice[i]) < len(cardTopSlice[j])
	})

	// Select NPUs card by card, preferring full cards.
	var selectedIndices []int
	taskNPUNum := reqNum

	for _, card := range cardTopSlice {
		if taskNPUNum <= 0 {
			break
		}

		// Only consider cards that have both NPUs available (full card).
		if len(card) == MaxCardNPUNum {
			selectedIndices = append(selectedIndices, card...)
			taskNPUNum -= MaxCardNPUNum
		}
	}
Contributor


medium

There's a minor typo in the comment on line 560 ("ther" should be "their").

More importantly, the logic for selecting full cards can be simplified for better readability and efficiency. Instead of converting the map to a slice and then sorting it, you can directly filter for cards that are full (i.e., have MaxCardNPUNum available NPUs) and then iterate over this filtered list. This makes the intent of the code more explicit.

	// Group NPU indices by their module and Sort
	cardTopology := make(map[int][]int)
	for _, idx := range npuIndices {
		cardId := idx / MaxCardNPUNum
		cardTopology[cardId] = append(cardTopology[cardId], idx)
	}

	// Filter for cards that have a full module of NPUs available.
	var fullCards [][]int
	for _, cardIndices := range cardTopology {
		if len(cardIndices) == MaxCardNPUNum {
			fullCards = append(fullCards, cardIndices)
		}
	}

	// Select NPUs from full cards until the request is met.
	var selectedIndices []int
	taskNPUNum := reqNum

	for _, card := range fullCards {
		if taskNPUNum <= 0 {
			break
		}
		selectedIndices = append(selectedIndices, card...)
		taskNPUNum -= MaxCardNPUNum
	}
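
For reference, a self-contained sketch of how the suggested full-module filter behaves on sample data. It uses plain ints instead of the scheduler's device types, and the constant and variable names are illustrative.

	package main

	import "fmt"

	const maxCardNPUNum = 2 // 2 NPUs per Ascend 910C module

	func main() {
		// Module 0 has indices 0 and 1 free, module 1 only index 3, module 2 has 4 and 5.
		npuIndices := []int{0, 1, 3, 4, 5}
		reqNum := 2

		// Group available indices by module.
		cards := map[int][]int{}
		for _, idx := range npuIndices {
			cards[idx/maxCardNPUNum] = append(cards[idx/maxCardNPUNum], idx)
		}

		// Take NPUs only from complete modules until the request is met.
		var selected []int
		for _, card := range cards {
			if reqNum <= 0 {
				break
			}
			if len(card) == maxCardNPUNum {
				selected = append(selected, card...)
				reqNum -= maxCardNPUNum
			}
		}

		// Prints either [0 1] or [4 5] (map iteration order is not fixed);
		// the half-free index 3 is never handed out on its own.
		fmt.Println(selected)
	}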

@DSFans2014
Contributor

Please sign off your commit.

return result
}

func (npudev *Devices) computeBestCombination910C(nodeInfo *device.NodeInfo, reqNum int, containerDevices device.ContainerDevices) device.ContainerDevices {
Contributor


If we set dev.CustomInfo["NetworkID"] = dev.Idx / 2, can the current computeBestCombination function meet your allocation requirements?

Author


Theoretically it's feasible, but I'm not sure whether there might be edge cases, such as a module having only one NPU available, where the original computeBestCombination method might not meet the requirements.

Contributor


What problems will occur if (id0, id2) are allocated on Ascend 910C?

Author


The Ascend driver will hide both NPUs from npu-smi info, since neither module is fully allocated; see the error in the attached image. It's even possible that 4 NPUs are allocated but npu-smi info shows only the 2 that belong to a fully allocated module.

Author


[Attached screenshot of the npu-smi error described above]

@Shouren
Collaborator

Shouren commented Jan 21, 2026

This PR enhances the HAMi scheduler to properly handle Ascend 910C devices in Huawei SuperPod clusters by enforcing module-pair allocation:

  • Ascend 910C hardware consists of 2 NPUs per physical module, and valid allocations must respect this topology (i.e., allocate either 1 NPU — auto-rounded to 2 — or an even number of NPUs).
  • The scheduler now rejects odd-numbered requests > 1 (e.g., 3, 5) with a clear error message.
  • For valid requests, devices are allocated only in complete module pairs.

@ashergaga Since NPUs in one physical module cannot be allocated separately, why not register them as one unit in ascend-device-plugin?


func (npu *Devices) Fit(devices []*device.DeviceUsage, request device.ContainerDeviceRequest, pod *corev1.Pod, nodeInfo *device.NodeInfo, allocated *device.PodDevices) (bool, map[string]device.ContainerDevices, string) {
	k := request
	if k.Type == Ascend910CType {
Collaborator


@ashergaga Can we move the check and modification to MutateAdmission?
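
If the check does move to admission time, a very rough sketch could look like the following. The MutateAdmission signature and the resource-count helpers are assumptions for illustration only; the real Devices interface and resource handling in HAMi should be checked before relying on this.

	// Sketch only: normalize or reject the request before scheduling.
	func (npudev *Devices) MutateAdmission(ctr *corev1.Container, pod *corev1.Pod) (bool, error) {
		nums := requested910CCount(ctr) // hypothetical helper: read the 910C resource request from the container
		if nums == 1 {
			setRequested910CCount(ctr, 2) // hypothetical helper: round a single-NPU request up to one full module
			return true, nil
		}
		if nums > 1 && nums%2 != 0 {
			return false, fmt.Errorf("Ascend910C device request must be 1 or even number, got %d", nums)
		}
		return true, nil
	}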
