Conversation

@myersCody
Contributor

uv run nise report ocp --static-report-file tests/ocp_gpu_static_report.yml --ocp-cluster-id test1 -w

Example Export:

report_period_start,report_period_end,interval_start,interval_end,node,namespace,pod,gpu_uuid,gpu_model_name,gpu_vendor_name,gpu_memory_capacity_mib,gpu_pod_uptime
2025-11-01 00:00:00 +0000 UTC,2025-12-01 00:00:00 +0000 UTC,2025-11-01 00:00:00 +0000 UTC,2025-11-01 00:59:59 +0000 UTC,gpu-node-2,ai-research,research-workload-1,GPU-8ef63507-bca1-4e52-b513-ece02e245096,H100,nvidia_com_gpu,81920,3600
2025-11-01 00:00:00 +0000 UTC,2025-12-01 00:00:00 +0000 UTC,2025-11-01 00:00:00 +0000 UTC,2025-11-01 00:59:59 +0000 UTC,gpu-node-2,ai-research,research-workload-1,GPU-79ac02d6-9455-455e-802c-89d6779e7adc,H100,nvidia_com_gpu,81920,3600
2025-11-01 00:00:00 +0000 UTC,2025-12-01 00:00:00 +0000 UTC,2025-11-01 00:00:00 +0000 UTC,2025-11-01 00:59:59 +0000 UTC,gpu-node-2,,,GPU-a1f9dc03-bbe5-43ae-ba09-466f32cc5128,A100,nvidia_com_gpu,40960,3600
2025-11-01 00:00:00 +0000 UTC,2025-12-01 00:00:00 +0000 UTC,2025-11-01 00:00:00 +0000 UTC,2025-11-01 00:59:59 +0000 UTC,gpu-node-2,,,GPU-165baf44-50da-4acd-a7c4-8a42ae849642,A100,nvidia_com_gpu,40960,3600
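Unallocated GPUs can be picked out of an export like the one above by their empty namespace and pod fields. Here is a minimal sketch for sanity-checking the output (the file name is hypothetical; the column names come from the header row above):

import csv
from collections import defaultdict

# Sum gpu_pod_uptime per node, split into pod-allocated vs. unallocated rows.
# Rows with an empty namespace and pod are the node-level, unallocated GPUs.
def summarize_gpu_uptime(path):
    totals = defaultdict(float)  # (node, "allocated" | "unallocated") -> seconds
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            kind = "allocated" if row["namespace"] and row["pod"] else "unallocated"
            totals[(row["node"], kind)] += float(row["gpu_pod_uptime"])
    return totals

if __name__ == "__main__":
    for (node, kind), seconds in sorted(summarize_gpu_uptime("ocp_gpu_usage.csv").items()):
        print(f"{node}: {kind} GPU seconds = {seconds:,.0f}")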

@github-actions

⚠️ This PR modifies code in nise/ but doesn't update the version in nise/__init__.py. Consider bumping the version if this includes functional changes.

@gemini-code-assist

Summary of Changes

Hello @myersCody, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the OCP generator by adding support for tracking and reporting unallocated GPU resources. Previously, the system only accounted for GPUs explicitly assigned to pods. With this change, GPUs provisioned at the node level, even if not currently in use by a workload, are now included in reports. This provides a more comprehensive understanding of GPU utilization and helps in identifying potential cost savings from idle resources.

Highlights

  • Unallocated GPU Support: Introduced the ability to generate and report GPUs that are provisioned at the node level but not currently allocated to any specific pod, enhancing visibility into idle GPU capacity.
  • Enhanced GPU Generation Logic: The _gen_gpus method in ocp_generator.py was extended to identify and create GPU entries based on node configurations, in addition to existing pod-level GPU assignments.
  • Reporting for Unallocated GPUs: The _gen_hourly_gpu_usage method was updated to correctly process and report both pod-allocated and node-level unallocated GPUs, ensuring that unallocated GPUs are reported with empty namespace and pod fields (see the sketch after this list).
  • Test Coverage for New Feature: The ocp_gpu_static_report.yml test file was updated to include example configurations for unallocated GPUs on nodes, validating the new generation and reporting functionality.
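To make the reporting behavior described in the highlights concrete, here is a minimal, self-contained sketch of emitting rows for both pod-allocated and unallocated GPUs. It is an illustration only: the data shapes and names below are assumptions, not the actual _gen_gpus / _gen_hourly_gpu_usage implementation in ocp_generator.py.

# Illustrative sketch only: emits one record per GPU per hour, leaving namespace
# and pod empty for GPUs provisioned on a node but not allocated to any pod.
# The field layout mirrors the example export above; the names are hypothetical.
def gpu_rows(hours, node_gpus, pod_gpus):
    """hours: list of (start, end); node_gpus: {node: [gpu dicts]};
    pod_gpus: {(node, namespace, pod): [gpu dicts]}."""
    allocated_uuids = {g["gpu_uuid"] for gpus in pod_gpus.values() for g in gpus}
    for start, end in hours:
        # Pod-allocated GPUs keep their namespace and pod.
        for (node, namespace, pod), gpus in pod_gpus.items():
            for gpu in gpus:
                yield {"interval_start": start, "interval_end": end, "node": node,
                       "namespace": namespace, "pod": pod, **gpu}
        # Node-level GPUs with no owning pod are reported with empty namespace/pod.
        for node, gpus in node_gpus.items():
            for gpu in gpus:
                if gpu["gpu_uuid"] not in allocated_uuids:
                    yield {"interval_start": start, "interval_end": end, "node": node,
                           "namespace": "", "pod": "", **gpu}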


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for generating usage data for unallocated GPUs, i.e., GPUs that are present on a node but not assigned to any pod. The changes correctly modify the data generation logic to account for GPUs defined at the node level in the static configuration file and update the hourly usage generation to include these unallocated GPUs.

The implementation is mostly correct, but I've identified a couple of areas for improvement regarding code complexity and performance. My comments focus on simplifying a convoluted piece of logic for finding node specifications and optimizing a lookup that happens inside a nested loop. These changes will make the code more readable, maintainable, and performant.

for gpu_key, gpu_list in self.gpus.items():
    if isinstance(gpu_key, tuple):
        node_name = gpu_key[0]
        node_obj = next((n for n in self.nodes if n.get("name") == node_name), None)


Severity: medium

This next() call to find a node by name is inside a nested loop (over hours and GPUs), which can lead to poor performance when there are many nodes and GPUs over a long time range. The complexity is O(hours * gpus * nodes).

To optimize this, you could create a name-to-node mapping before the loops. This would reduce the complexity to O(nodes + hours * gpus).

Example:

def _gen_hourly_gpu_usage(self, **kwargs):
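    # Building the lookup once keeps the hour/GPU loops from re-scanning self.nodes each iteration.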
    node_map = {n.get("name"): n for n in self.nodes}
    for hour in self.hours:
        start = hour.get("start")
        end = hour.get("end")

        for gpu_key, gpu_list in self.gpus.items():
            if isinstance(gpu_key, tuple):
                node_name = gpu_key[0]
                node_obj = node_map.get(node_name)
                if not node_obj:
                    continue
                # ... rest of the logic

Since this change would be outside the current diff, I'm providing it as an example for you to consider.

@codecov

codecov bot commented Nov 18, 2025

Codecov Report

❌ Patch coverage is 43.33333% with 17 lines in your changes missing coverage. Please review.
✅ Project coverage is 93.2%. Comparing base (6691e81) to head (73aa12b).

Additional details and impacted files
@@           Coverage Diff           @@
##            main    #596     +/-   ##
=======================================
- Coverage   93.5%   93.2%   -0.3%     
=======================================
  Files         56      56             
  Lines       4730    4750     +20     
  Branches     663     669      +6     
=======================================
+ Hits        4422    4428      +6     
- Misses       165     178     +13     
- Partials     143     144      +1     

@myersCody myersCody marked this pull request as draft November 18, 2025 18:39
@myersCody
Contributor Author

A GPU that is present but not allocated to a pod will simply not appear in the report; currently we gather metrics for GPU utilization.

@myersCody myersCody closed this Nov 18, 2025